- [[Lecture - Theories of Deep Learning MT25, II, Why deep learning]]
- [[Lecture - Theories of Deep Learning MT25, III, Exponential expressivity with depth]]
- [[Lecture - Theories of Deep Learning MT25, IV, Data classes for which DNNs can overcome the curse of dimensionality]]
- [[Lecture - Theories of Deep Learning MT25, V, Controlling the exponential growth of variance and correlation]]
- [[Lecture - Theories of Deep Learning MT25, VI, Controlling the variance of the Jacobian’s spectrum]]
- [[Lecture - Theories of Deep Learning MT25, VII, Stochastic gradient descent and its extensions]]
- [[Lecture - Theories of Deep Learning MT25, VIII, Optimisation algorithms for training DNNs]]
- [[Lecture - Theories of Deep Learning MT25, XI, Visualising the filters and response in a CNN]]
- [[Lecture - Theories of Deep Learning MT25, XII, The scattering transform and into auto-encoders]]
- [[Lecture - Theories of Deep Learning MT25, XIII, Autoencoders]]
- [[Lecture - Theories of Deep Learning MT25, XIV, Generative adversarial networks]]
- [[Lecture - Theories of Deep Learning MT25, XV, A few things we missed and a summary]]
- [[Lecture - Theories of Deep Learning MT25, XVI, Ingredients for a successful mini-project report]]