“How neural networks learn” - Part III: The learning dynamics behind generalization and overfitting
In this third episode of “How neural nets learn” I dive into academic research that tries to explain why neural networks generalize as well as they do. We first look at the remarkable capability of DNNs to simply memorize huge amounts of (random) data. We then see how this picture becomes more subtle when training on real data, and finally dive into some beautiful analysis from the viewpoint of information theory.
Main papers discussed in this video:
First paper on Memorization in DNNs: https://arxiv
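To make the memorization result concrete, here is a minimal sketch of the random-label experiment described above: a small over-parameterized MLP driven to near-perfect training accuracy on labels that are pure noise. This assumes PyTorch; the architecture, data dimensions, and hyperparameters are illustrative choices, not the exact setup from the paper.

# Sketch: a small MLP memorizing random labels (illustrative, not the
# paper's exact setup).
import torch
import torch.nn as nn

torch.manual_seed(0)

n_samples, n_features, n_classes = 1024, 64, 10
X = torch.randn(n_samples, n_features)          # random inputs
y = torch.randint(0, n_classes, (n_samples,))   # labels carry no signal

model = nn.Sequential(
    nn.Linear(n_features, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_classes),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(500):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

train_acc = (model(X).argmax(dim=1) == y).float().mean()
# With enough parameters the network fits the noise: training accuracy
# approaches 100% even though generalizing to fresh samples is impossible.
print(f"final loss {loss.item():.4f}, train accuracy {train_acc:.2%}")

The point of the experiment is that nothing in the training procedure changes between real and random labels, so low training loss alone tells us nothing about generalization.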