
[Linkpost] “Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy” by Garrison

From EA Forum Podcast (Curated & popular)



Length: 7 minutes
Released: Feb 10, 2024
Format: Podcast episode

Description

If you enjoy this, please consider subscribing to my Substack. Sam Altman has said he thinks that developing artificial general intelligence (AGI) could lead to human extinction, but OpenAI is trying to build it ASAP. Why? The common story for how AI could overpower humanity involves an “intelligence explosion,” where an AI system becomes smart enough to further improve its capabilities, bootstrapping its way to superintelligence. Even without any kind of recursive self-improvement, some AI safety advocates argue that a large enough number of copies of a genuinely human-level AI system could pose serious problems for humanity. (I discuss this idea in more detail in my recent Jacobin cover story.) Some people think the transition from human-level AI to superintelligence could happen in a matter of months, weeks, days, or even hours. The faster the takeoff, the more dangerous, the thinking goes. Sam Altman, circa February 2023, agrees [...] ---
First published: February 10th, 2024

Source: https://forum.effectivealtruism.org/posts/vBjSyNNnmNtJvmdAg/sam-altman-s-chip-ambitions-undercut-openai-s-safety

Linkpost URL: https://garrisonlovely.substack.com/p/sam-altmans-chip-ambitions-undercut
---
Narrated by TYPE III AUDIO.

About the series: Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.