"Counterarguments to the basic AI risk case" by Katja_Grace
"Counterarguments to the basic AI risk case" by Katja_Grace
ratings:
Length:
75 minutes
Released:
Nov 27, 2022
Format:
Podcast episode
Description
This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems. To start, here's an outline of what I take to be the basic case:

I. If superhuman AI systems are built, any given system is likely to be 'goal-directed'
II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights
III. If most goal-directed superhuman AI systems have bad goals, the future will very likely be bad

Original article: https://forum.effectivealtruism.org/posts/zoWypGfXLmYsDFivk/counterarguments-to-the-basic-ai-risk-case

Narrated for the Effective Altruism Forum by TYPE III AUDIO.