Lecture - Theories of Deep Learning MT25, XVI, Ingredients for a successful mini-project report


  • [[Course - Theories of Deep Learning MT25]]
    • Report should include:
      • A discussion of some theoretical aspect of deep learning, along with
      • Numerical simulations
    • Don’t write about an application (e.g. “this method gets 2% better”); this project is about the theory of deep learning
      • But you can highlight how an application raises questions about deep nets, and how the theory of deep learning could be used to overcome these problems
    • One approach:
      • Pick two papers that came out at the same time and then compare them
      • Combine the two approaches
    • You’re not expected to do conference-level research but there should be some aspect of originality. The examiners should hear your voice
      • E.g. if you’re comparing two papers, your report should still add something even for an examiner who has already read the two papers you compared
    • Pick a topic that you are excited about
      • It should read like a 20-page report that’s been compressed into a 5-page report
    • Adapt your code from others
      • Pick papers that have code already to build on
    • Don’t just focus on one paper
      • Compare different aspects of multiple papers
      • Look at papers that came out at the same time and haven’t been directly compared
    • Work out what you don’t like when reading a paper, and don’t do that
    • Clearly state what is new, be upfront and don’t present other results as your own
      • It can be jarring to say “I did X” but it makes the examiner’s life much easier
    • Don’t pick papers that are very close to papers presented in the lectures
    • You can pick older papers but don’t go back more than about 10 years
    • You can copy figures from other papers but don’t make all your figures other people’s

    • Examples
      • Robustness and accuracy: are we trying to have our cake and eat it too?
        • Nice bibliography
        • A nice mathematical tone
        • Novel experiment
        • Didn’t have a complete answer in the end
      • On manifold mixup for deep learning
        • Had a really good summary of the topic and the bibliography
        • Lots of originality, above what is expected
        • The examiner couldn’t find the new content anywhere else, it was genuinely new
      • Backpropagation and predictive coding: an experimental comparison
        • Contrasting
        • Ex
      • What were these papers missing to be exceptional?
        • Sometimes the lecturer gets
        • Great literature review
        • Really good experiments
        • With more page length, it seems like they could have been exceptional
    • Start with an outline
    • Fill things in
    • Go over the page limit at this stage if needed
    • Then condense by selecting the most essential parts of the discussion
    • Re-read and improve your report

    • A literature review (done well) would get you a 60
    • Putting some of yourself into the report would improve your score
    • The more originality, the higher your score (roughly)

    • The lecturer doesn’t generally recommend e.g. modifying a proof and doing lots of maths; it’s better to do numerical experiments instead


