The following work aims to cover some of the most important concepts and definitions of algebraic topology in order to explore how deep neural networks operate in their hidden layers and why they are so remarkably effective. The key idea is that the topology of the data changes as it passes through the layers, progressively becoming simpler.
This work does not pretend to be a comprehensive account of any of the concepts described here. It is more of a personal project digging into topics I feel passionate about. With this in mind, any comments or suggestions are very welcome.