
The 4 Cs of Superintelligence

From London Futurists

Length: 33 minutes
Released: Jun 16, 2023
Format: Podcast episode

Description

The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of the possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.

Topics addressed in this episode include:

*) Reasons why superintelligence might never be created
*) Timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars, and travelling to Mars without knowing what we'll breathe when we get there
*) A startling illustration of the dramatic power of exponential growth
*) A concern for short-term risks is by no means a reason to pay less attention to longer-term risks
*) Why the "Cease" option looks more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Going beyond the statement of "Life, liberty, and the pursuit of happiness" to a starting point for aligning AI with human values?
*) A cliff-hanger ending

The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
