"High-level hopes for AI alignment" by Holden Karnofsky

"High-level hopes for AI alignment" by Holden Karnofsky

From EA Forum Podcast (Curated & popular)


"High-level hopes for AI alignment" by Holden Karnofsky

FromEA Forum Podcast (Curated & popular)

Length: 24 minutes
Released: Dec 22, 2022
Format: Podcast episode

Description

In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding.

I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.

But while I think misalignment risk is serious and presents major challenges, I don't agree with sentiments along the lines of "We haven't figured out how to align an AI, so if transformative AI comes soon, we're doomed." Here I'm going to talk about some of my high-level hopes for how we might end up avoiding this risk.

Original article: https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment

Narrated by Holden Karnofsky for the Cold Takes blog.

Share feedback on this narration.


Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.