AI Alignment & AGI Fire Alarm - Connor Leahy

From Machine Learning Street Talk (MLST)


Length: 125 minutes
Released: Nov 1, 2020
Format: Podcast episode

Description

This week Dr. Tim Scarfe, Alex Stenlake and Yannic Kilcher speak with AGI and AI alignment specialist Connor Leahy, a machine learning engineer at Aleph Alpha and founder of EleutherAI.

Connor believes that AI alignment is philosophy with a deadline and that we are on the precipice: the stakes are astronomical. AI is important, and it will go wrong by default. Connor thinks that the singularity, or intelligence explosion, is near. He says that AGI is like climate change but worse: even harder problems, an even shorter deadline, and even worse consequences for the future. These problems are hard, and nobody knows what to do about them.

00:00:00 Introduction to AI alignment and AGI fire alarm 
00:15:16 Main Show Intro 
00:18:38 Different schools of thought on AI safety 
00:24:03 What is intelligence? 
00:25:48 AI Alignment 
00:27:39 Humans don't have a coherent utility function 
00:28:13 Newcomb's paradox and advanced decision problems 
00:34:01 Incentives and behavioural economics 
00:37:19 Prisoner's dilemma 
00:40:24 Ayn Rand and game theory in politics and business 
00:44:04 Instrumental convergence and orthogonality thesis 
00:46:14 Utility functions and the Stop button problem 
00:55:24 AI corrigibility - self alignment 
00:56:16 Decision theory and stability / wireheading / robust delegation 
00:59:30 Stop button problem 
01:00:40 Making the world a better place 
01:03:43 Is intelligence a search problem? 
01:04:39 Mesa optimisation / humans are misaligned AI 
01:06:04 Inner vs outer alignment / faulty reward functions 
01:07:31 Large corporations are intelligent and have no stop function 
01:10:21 Dutch booking / what is rationality / decision theory 
01:16:32 Understanding very powerful AIs 
01:18:03 Kolmogorov complexity 
01:19:52 GPT-3 - is it intelligent, are humans even intelligent? 
01:28:40 Scaling hypothesis 
01:29:30 Connor thought DL was dead in 2017 
01:37:54 Why is GPT-3 as intelligent as a human? 
01:44:43 Jeff Hawkins on intelligence as compression and the great lookup table 
01:50:28 Is AI ethics related to AI alignment? 
01:53:26 Interpretability 
01:56:27 Regulation 
01:57:54 Intelligence explosion 


Discord: https://discord.com/invite/vtRgjbM
EleutherAI: https://www.eleuther.ai
Twitter: https://twitter.com/npcollapse
LinkedIn: https://www.linkedin.com/in/connor-j-leahy/

This is the audio podcast for the ML Street Talk YouTube channel at https://www.youtube.com/c/MachineLearningStreetTalk. Thanks for checking us out! We think that scientists and engineers are the heroes of our generation. Each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is unabashedly technical and non-commercial, so you will hear no annoying pitches. Corporate and MBA speak is banned on Street Talk; "data product" and "digital transformation" are banned, we promise :) Hosted by Dr. Tim Scarfe, Dr. Yannic Kilcher and Dr. Keith Duggar.