Lecture - Theories of Deep Learning MT25, XII, Vulnerabilities in deep learning models
- Scattering transform
    - Given a task, identify the transformations the output should be invariant to, then design the architecture (filters and nonlinearities) to remove that variability. The scattering transform does this for translation
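A minimal sketch of the translation-invariance idea: take the modulus of a (circular) convolution with a band-pass filter, then average. The specific Morlet-like filter below is an illustrative choice, not one from the lecture; with circular convolution and a global average, the resulting first-order coefficient is exactly shift-invariant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = rng.standard_normal(n)

# Illustrative Morlet-like band-pass filter (assumed, not from the lecture)
t = np.arange(n) - n // 2
psi = np.exp(-t**2 / 50.0) * np.cos(0.5 * t)

def scatter1(sig, filt):
    # First-order scattering-style coefficient: global average of the
    # modulus of a circular convolution with a band-pass filter.
    conv = np.fft.ifft(np.fft.fft(sig) * np.fft.fft(filt))
    return np.abs(conv).mean()

s_orig = scatter1(x, psi)
s_shift = scatter1(np.roll(x, 17), psi)  # shifted copy of the signal
print(abs(s_orig - s_shift))  # essentially zero: the coefficient is shift-invariant
```

The modulus discards the phase that encodes position, and the averaging removes what remains; deeper scattering layers recover the information lost to averaging.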
- Autoencoders
    - Principal component analysis can be viewed as a linear autoencoder with a low-dimensional bottleneck
    - Autoencoders can be viewed as an extension of PCA: they allow nonlinear encoders and decoders rather than only low-dimensional linear projections
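A quick sketch of the PCA-as-autoencoder view: the top-$k$ principal directions give a linear encoder/decoder pair, and the reconstruction error equals the energy in the discarded singular values. The data here is synthetic, just to make the point.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 5))
X = X - X.mean(axis=0)                      # centre the data, as PCA requires

# PCA via SVD: the top-k right singular vectors span the principal subspace
k = 2
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V = Vt[:k].T                                # shape (5, k)

encode = lambda x: x @ V                    # linear "encoder" to k dimensions
decode = lambda z: z @ V.T                  # linear "decoder" back to 5 dimensions

X_hat = decode(encode(X))
err = np.linalg.norm(X - X_hat) ** 2
print(err)  # squared reconstruction error of the rank-k projection
```

By the Eckart–Young theorem this projection minimises squared reconstruction error among all rank-$k$ linear maps, which is exactly what a linear autoencoder with a $k$-unit bottleneck can represent.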
    - $k$-sparse autoencoders: keep only the $k$ largest hidden activations and zero out the rest, enforcing sparsity directly rather than through a penalty term
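The $k$-sparse selection step can be sketched as a simple top-$k$ activation (this keeps the $k$ largest values; some variants select by absolute value instead):

```python
import numpy as np

def k_sparse(h, k):
    # k-sparse activation: keep the k largest hidden units, zero the rest.
    out = np.zeros_like(h)
    idx = np.argsort(h)[-k:]     # indices of the k largest activations
    out[idx] = h[idx]
    return out

h = np.array([0.1, 2.0, -0.5, 1.2, 0.3])
print(k_sparse(h, 2))  # only the entries 2.0 and 1.2 survive
```

Because only $k$ units are active, the decoder reconstructs each input from a small, input-dependent subset of dictionary atoms.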
- Adversarial examples
- How do the decision region diagrams change when you have an additional “don’t know” class?
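One way to think about this question concretely: a "don't know" class can be added by abstaining whenever the top softmax probability falls below a threshold, which carves an abstain band around every decision boundary. The linear classifier and threshold below are hypothetical, purely to illustrate the shape of the modified regions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_with_reject(x, W, b, tau=0.9):
    # Hypothetical reject rule: predict the arg-max class only when its
    # softmax probability is at least tau, otherwise answer "don't know".
    p = softmax(W @ x + b)
    return int(p.argmax()) if p.max() >= tau else "don't know"

# Toy two-class linear model separated along the first coordinate
W = np.array([[4.0, 0.0], [-4.0, 0.0]])
b = np.zeros(2)

print(classify_with_reject(np.array([2.0, 0.0]), W, b))    # far from the boundary
print(classify_with_reject(np.array([0.05, 0.0]), W, b))   # near the boundary
```

The decision regions no longer tile the input space: a strip of width controlled by $\tau$ (and the logit scale) around each boundary maps to "don't know", though adversarial examples can still land inside the confident regions.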