"High-level hopes for AI alignment" by Holden Karnofsky
"High-level hopes for AI alignment" by Holden Karnofsky
ratings:
Length: 24 minutes
Released: Dec 22, 2022
Format: Podcast episode
Description
In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding.

I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.

But while I think misalignment risk is serious and presents major challenges, I don't agree with sentiments along the lines of "We haven't figured out how to align an AI, so if transformative AI comes soon, we're doomed." Here I'm going to talk about some of my high-level hopes for how we might end up avoiding this risk.

Original article: https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment

Narrated by Holden Karnofsky for the Cold Takes blog.