BI 100.3 Special: Can We Scale Up to AGI with Current Tech?
From Brain Inspired
Length:
69 minutes
Released:
Mar 17, 2021
Format:
Podcast episode
Description
Part 3 in our 100th episode celebration. Previous guests answered the question:
Given the continual surprising progress in AI powered by scaling up parameters and using more compute, while using fairly generic architectures (e.g., GPT-3):
Do you think the current trend of scaling compute can lead to human level AGI? If not, what's missing?
It likely won't surprise you that the vast majority answer "No." It also likely won't surprise you that there are differing opinions on what's missing.
Timestamps:
0:00 - Intro
3:56 - Wolfgang Maass
5:34 - Paul Humphreys
9:16 - Chris Eliasmith
12:52 - Andrew Saxe
16:25 - Mazviita Chirimuuta
18:11 - Steve Potter
19:21 - Blake Richards
22:33 - Paul Cisek
26:24 - Brad Love
29:12 - Jay McClelland
34:20 - Megan Peters
37:00 - Dean Buonomano
39:48 - Talia Konkle
40:36 - Steve Grossberg
42:40 - Nathaniel Daw
44:02 - Marcel van Gerven
45:28 - Kanaka Rajan
48:25 - John Krakauer
51:05 - Rodrigo Quian Quiroga
53:03 - Grace Lindsay
55:13 - Konrad Kording
57:30 - Jeff Hawkins
1:02:12 - Uri Hasson
1:04:08 - Jess Hamrick
1:06:20 - Thomas Naselaris
Titles in the series (99)
BI 105 Sanjeev Arora: Off the Convex Path: Sanjeev and I discuss some of the progress toward understanding how deep learning works, especially under previous assumptions that it wouldn't or shouldn't work as well as it does. Deep learning theory poses a challenge for mathematics, because its methods aren by Brain Inspired