
“Munk AI debate: confusions and possible cruxes” by Steven Byrnes

From EA Forum Podcast (Curated & popular)


Length: 15 minutes
Released: Jun 27, 2023
Format: Podcast episode

Description

There was a debate on the statement “AI research and development poses an existential threat” (“x-risk” for short), with Max Tegmark and Yoshua Bengio arguing in favor, and Yann LeCun and Melanie Mitchell arguing against. The YouTube link is here, and a previous discussion on this forum is here.

The first part of this blog post is a list of five ways that I think the two sides were talking past each other. The second part is some apparent key underlying beliefs of Yann and Melanie, and how I might try to change their minds.[1]

While I am very much on the “in favor” side of this debate, I didn’t want to make this just a “why Yann’s and Melanie’s arguments are all wrong” blog post. OK, granted, it’s a bit of that, especially in the second half. But I hope people on the “anti” side will find this post interesting and not-too-annoying.

Five ways people were talking past each other

1. Treating efforts to solve the problem as exogenous or not

This subsection doesn’t apply to Melanie, who rejected the idea that there is any existential risk in the foreseeable future. But Yann suggested that there was no existential risk because we will solve it; whereas Max and Yoshua argued that we should acknowledge that there is an existential risk so that we can solve it.

By analogy, fires tend not to spread through cities because the fire department and fire codes keep them from spreading. Two perspectives on this are:

- If you’re an outside observer, you can say that “fires can spread through a city” is evidently not a huge problem in practice.
- If you’re the chief of the fire department, or if you’re developing and enforcing fire codes, then “fires can spread through a city” is an extremely serious problem that you’re thinking about constantly.

I don’t think this was a major source of talking-past-each-other, but it added a nonzero amount of confusion.

2. Ambiguously changing the subject to “timelines to x-risk-level AI”, or to “whether large language models (LLMs) will scale to x-risk-level AI”

The statement under debate was “AI research and development poses an existential threat”. This statement does not refer to any particular line of AI research, nor any particular time interval. The four participants’ positions in this regard seemed to be:

- Max and Yoshua: Superhuman AI might happen in 5-20 years, and LLMs have a lot to do with why a reasonable person might believe that.
- Yann: Human-level AI might happen in 5-20 years, but LLMs have nothing to do with that. LLMs have fundamental limitations. But other types of ML research could get there, e.g. my (Yann’s) own research program.
- Melanie: LLMs have fundamental limitations, and Yann’s research program is doomed to fail as well. The kind of AI that might pose an x-risk will absolutely not happen in the foreseeable future. (She didn’t quantify how many years the “foreseeable future” is.)

It seemed to me that all four participants (and the moderator!) were making timelines and LLM-related arguments, in ways that were both annoyingly vague and unrelated to the statement under debate.

(If astronomers found a [...]
Source: https://forum.effectivealtruism.org/posts/LEEcSn4gt7nBwBghk/munk-ai-debate-confusions-and-possible-cruxes
---
Narrated by TYPE III AUDIO.

Titles in the series (100)

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.