
Ed Grefenstette: Language, Semantics, Cohere

From The Gradient: Perspectives on AI

Length: 74 minutes
Released: Mar 2, 2023
Format: Podcast episode

Description

In episode 62 of The Gradient Podcast, Daniel Bashir speaks to Ed Grefenstette.

Ed is Head of Machine Learning at Cohere and an Honorary Professor at University College London. He previously held research scientist positions at Facebook AI Research and DeepMind, following a stint as co-founder and CTO of Dark Blue Labs. Before his time in industry, Ed worked at Oxford’s Department of Computer Science as a lecturer and Fulford Junior Research Fellow at Somerville College. Ed also received his MSc and DPhil from Oxford’s Computer Science Department.

Have suggestions for future podcast guests (or other feedback)? Let us know here!

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:

* (00:00) Intro
* (02:18) The Ed Grefenstette Origin Story
* (08:15) Distributional semantics and Ed’s PhD research
* (14:30) Extending the distributional hypothesis, later Wittgenstein
* (18:00) Recovering parse trees in LMs; can LLMs understand communication and not just bare language?
* (23:15) LMs capture something about pragmatics; proxies for grounding and pragmatics
* (25:00) Human-in-the-loop training and RLHF: what is the essential differentiator?
* (28:15) A convolutional neural network for modeling sentences, relationship to attention
* (34:20) Difficulty of constructing supervised learning datasets, benchmark-driven development
* (40:00) Learning to Transduce with Unbounded Memory, Neural Turing Machines
* (47:40) If RNNs are like finite state machines, where are transformers?
* (51:40) Cohere and why Ed joined
* (56:30) Commercial applications of LLMs and Cohere’s product
* (59:00) Ed’s reply to stochastic parrots and thoughts on consciousness
* (1:03:30) Lessons learned about doing effective science
* (1:05:00) Where does scaling end?
* (1:07:00) Why Cohere is an exciting place to do science
* (1:08:00) Ed’s advice for aspiring ML {researchers, engineers, etc.} and the role of communities in science
* (1:11:45) Cohere for AI plug!
* (1:13:30) Outro

Links:

* Ed’s homepage and Twitter
* (some of) Ed’s Papers:
  * Experimental support for a categorical compositional distributional model of meaning
  * Multi-step regression learning
  * “Not not bad” is not “bad”
  * Towards a formal distributional semantics
  * A CNN for modeling sentences
  * Teaching machines to read and comprehend
  * Reasoning about entailment with neural attention
  * Learning to Transduce with Unbounded Memory
  * Teaching Artificial Agents to Understand Language by Modelling Reward
* Other things mentioned:
  * Large language models are not zero-shot communicators (Laura Ruis + others and Ed)
  * Looped Transformers as Programmable Computers, and our Update 43 covering this paper
  * Cohere and Cohere for AI (+ earlier episode w/ Sara Hooker on C4AI)
  * David Chalmers interview on AI + consciousness

Get full access to The Gradient at thegradientpub.substack.com/subscribe
