
#111 - AI moratorium, Eliezer Yudkowsky, AGI risk etc

From Machine Learning Street Talk (MLST)


Length: 27 minutes
Released: Apr 1, 2023
Format: Podcast episode

Description

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Send us a voice message which you want us to publish: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/message

In a recent open letter, over 1,500 signatories called for a six-month pause on the development of advanced AI systems, citing the risks AI poses to society and humanity. Critics, however, point to several problems with this approach: global competition makes a pause hard to enforce, progress is difficult to stop, a pause forgoes potential benefits, and risks may be better managed than avoided.

Decision theorist Eliezer Yudkowsky went a step further in a Time magazine article, calling for an indefinite, worldwide moratorium on Artificial General Intelligence (AGI) development and warning of potential catastrophe if AGI exceeds human intelligence. Yudkowsky urged an immediate halt to all large AI training runs, the shutdown of major GPU clusters, and international cooperation to enforce these measures.

However, several counterarguments challenge Yudkowsky's position:

1. Hard limits on AGI
2. Dismissing AI extinction risk
3. Collective action problem
4. Misplaced focus on AI threats

While the potential risks of AGI cannot be ignored, it is essential to consider various arguments and potential solutions before making drastic decisions. As AI continues to advance, it is crucial for researchers, policymakers, and society as a whole to engage in open and honest discussions about the potential consequences and the best path forward. With a balanced approach to AGI development, we may be able to harness its power for the betterment of humanity while mitigating its risks.

Eliezer Yudkowsky: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
Connor Leahy: https://twitter.com/NPCollapse (we will release that interview soon)
Gary Marcus: http://garymarcus.com/index.html
Tim Scarfe is the innovation CTO of XRAI Glass: https://xrai.glass/

The Gary Marcus clip was filmed at AI UK (https://ai-uk.turing.ac.uk/programme/); our thanks to them for the press pass. Check out their conference next year!
Gary's WIRED clip is from: https://www.youtube.com/watch?v=Puo3VkPkNZ4

Refs:

Statement from the listed authors of Stochastic Parrots on the "AI pause" letter, by Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell
https://www.dair-institute.org/blog/letter-statement-March2023

Eliezer Yudkowsky on Lex: https://www.youtube.com/watch?v=AaTRHFaaPG8

Pause Giant AI Experiments: An Open Letter
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Pausing AI Developments Isn't Enough. We Need to Shut it All Down (Eliezer Yudkowsky)
https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Titles in the series (100)

This is the audio podcast for the ML Street Talk YouTube channel at https://www.youtube.com/c/MachineLearningStreetTalk. Thanks for checking us out! We think that scientists and engineers are the heroes of our generation. Each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is unabashedly technical and non-commercial, so you will hear no annoying pitches. Corporate and MBA speak is banned on Street Talk; "data product" and "digital transformation" are banned, we promise :) Dr. Tim Scarfe, Dr. Yannic Kilcher and Dr. Keith Duggar.