
BI 163 Ellie Pavlick: The Mind of a Language Model

From Brain Inspired



Length: 82 minutes
Released: Mar 20, 2023
Format: Podcast episode

Description

Support the show to get full episodes and join the Discord community.

Check out my free video series about what's missing in AI and Neuroscience

Ellie Pavlick runs her Language Understanding and Representation Lab at Brown University, where she studies a wide range of topics related to language. In AI, large language models, sometimes called foundation models, are all the rage these days, given their ability to generate convincing language, although they still make plenty of mistakes. One of the things Ellie is interested in is how these models work: what kinds of representations they build in order to produce the language they do. So we discuss how she goes about studying these models. For example, she probes them to see whether something symbol-like might be implemented in the models, even though they are deep learning neural networks, which supposedly aren't able to operate in a symbol-like manner. We also discuss whether grounding is required for language understanding, that is, whether a model that produces language well needs to connect with the real world to actually understand the text it generates. We talk about what language is for, the current limitations of large language models, how the models compare to humans, and a lot more.




Language Understanding and Representation Lab



Twitter: @Brown_NLP



Related papers

Semantic Structure in Deep Learning.



Pretraining on Interactions for Learning Grounded Affordance Representations.



Mapping Language Models to Grounded Conceptual Spaces.

0:00 - Intro
2:34 - Will LLMs make us dumb?
9:01 - Evolution of language
17:10 - Changing views on language
22:39 - Semantics, grounding, meaning
37:40 - LLMs, humans, and prediction
41:19 - How to evaluate LLMs
51:08 - Structure, semantics, and symbols in models
1:00:08 - Dimensionality
1:02:08 - Limitations of LLMs
1:07:47 - What do linguists think?
1:14:23 - What is language for?

Titles in the series (99)

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.