
#74 Dr. ANDREW LAMPINEN - Symbolic behaviour in AI [UNPLUGGED]

From Machine Learning Street Talk (MLST)

Length: 66 minutes
Released: Apr 14, 2022
Format: Podcast episode

Description

Please note that in this interview Dr. Lampinen was expressing his personal opinions and they do not necessarily represent those of DeepMind. 

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB
YT version: https://youtu.be/yPMtSXXn4OY

Dr. Andrew Lampinen is a Senior Research Scientist at DeepMind, and he thinks that symbols are subjective in the relativistic sense. Dr. Lampinen completed his PhD in Cognitive Psychology at Stanford University; his background is in mathematics, physics, and machine learning. Andrew has said that his research interests are in cognitive flexibility and generalization, and how these abilities are enabled by factors like language, memory, and embodiment. Andrew and his coauthors have just released a paper called Symbolic Behaviour in Artificial Intelligence. Andrew opens the paper by saying that the human ability to use symbols has yet to be replicated in machines. He thinks that one of the key areas for bridging this gap is considering how symbol meaning is established, and he strongly believes that it is the symbol users themselves who agree upon a symbol's meaning, and that the use of symbols entails behaviours which coalesce into agreements about their meaning. In plain English, this means that symbols are defined by behaviours rather than by their content.

[00:00:00] Intro to Andrew and Symbolic Behaviour paper
[00:07:01] Semantics underpins the unreasonable effectiveness of symbols
[00:12:56] The Depth of Subjectivity
[00:21:03] Walid Saba - universal cognitive templates
[00:27:47] Insufficiently Darwinian 
[00:30:52] Discovered vs invented
[00:34:19] Does language have primacy
[00:35:59] Research directions
[00:39:43] Comparison to Ben Goertzel's OpenCog and human-compatible AI
[00:42:53] Aligning AI with our culture
[00:47:55] Do we need to model the worst aspects of human behaviour? 
[00:50:57] Fairness
[00:54:24] Memorisation in LLMs
[01:00:38] Wason selection task
[01:03:45] Would an Andrew hashtable robot be intelligent?

Dr. Andrew Lampinen
https://lampinen.github.io/
https://twitter.com/AndrewLampinen

Symbolic Behaviour in Artificial Intelligence
https://arxiv.org/abs/2102.03406

Imitating Interactive Intelligence
https://arxiv.org/abs/2012.05672
https://www.deepmind.com/publications/imitating-interactive-intelligence

Impact of Pretraining Term Frequencies on Few-Shot Reasoning [Yasaman Razeghi]
https://arxiv.org/abs/2202.07206

Big bench dataset
https://github.com/google/BIG-bench

Teaching Autoregressive Language Models Complex Tasks By Demonstration [Recchia]
https://arxiv.org/pdf/2109.02102.pdf

Wason selection task
https://en.wikipedia.org/wiki/Wason_selection_task

Gary Lupyan
https://psych.wisc.edu/staff/lupyan-gary/

About the series

This is the audio podcast for the ML Street Talk YouTube channel at https://www.youtube.com/c/MachineLearningStreetTalk Thanks for checking us out! We think that scientists and engineers are the heroes of our generation. Each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is unabashedly technical and non-commercial, so you will hear no annoying pitches. Corporate and MBA speak is banned on Street Talk; terms like "data product" and "digital transformation" are banned, we promise :) Hosted by Dr. Tim Scarfe, Dr. Yannic Kilcher and Dr. Keith Duggar.