BI 176 David Poeppel Returns
From Brain Inspired
Length:
84 minutes
Released:
Oct 14, 2023
Format:
Podcast episode
Description
Support the show to get full episodes and join the Discord community.
David runs his lab at NYU, where they study auditory cognition, speech perception, language, and music. On the heels of the episode with David Glanzman, we discuss the ongoing mystery regarding how memory works, how to study and think about brains and minds, and the reemergence (perhaps) of the language of thought hypothesis.
Poeppel lab
Twitter: @davidpoeppel.
Related papers
We don’t know how the brain stores anything, let alone words.
Memory in humans and deep language models: Linking hypotheses for model augmentation.
The neural ingredients for a language of thought are available.
0:00 - Intro
11:17 - Across levels
14:59 - Nature of memory
24:12 - Using the right tools for the right question
35:46 - LLMs, what they need, how they've shaped David's thoughts
44:55 - Across levels
54:07 - Speed of progress
1:02:21 - Neuroethology and mental illness - patreon
1:24:42 - Language of Thought