
Catastrophe and consent

From London Futurists


Length: 32 minutes
Released: Jun 21, 2023
Format: Podcast episode

Description

In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns?

The previous episode started to look at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to look at what might be the very bad outcomes, and also the very good outcomes, from the emergence of AI superintelligence.

Topics addressed in this episode include:

*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) How an AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) A single technical fault that caused the failure of automated passport checking throughout the UK
*) The example of automated control of the Boeing 737 Max causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov using his human judgement regarding an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of a growing public panic - and potential mishandling of it by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) How enhanced human intelligence could play a role in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: cosmists vs. terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Titles in the series (81)

Anticipating and managing exponential impact - hosts David Wood and Calum Chace