The Singularity Principles

From London Futurists

Length: 31 minutes
Released: Nov 2, 2022
Format: Podcast episode

Description

Co-hosts Calum Chace (CC) and David Wood (DW) dig deep into aspects of David's recent book "The Singularity Principles". Calum says he is, in part, unconvinced. David agrees that the projects he recommends are hard, but suggests some practical ways forward.

0:25 The technological singularity may be nearer than we think
1:10 Confusions about the singularity
1:35 “Taking back control of the singularity”
2:40 The “Singularity Shadow”: over-confident predictions which repulse people
3:30 The over-confidence includes predictions of timescale…
4:00 … and outcomes
4:45 The Singularity as the Rapture of the Nerds?
5:20 The Singularity is not a religion…
5:40 … although if positive, it will confer almost godlike powers
6:35 Much discussion of the Singularity is dystopian, but there could be enormous benefits, including…
7:15 Digital twins for cells and whole bodies, and super longevity
7:30 A new enlightenment
7:50 Nuclear fusion
8:10 Humanity’s superpower is intelligence
8:30 Amplifying our intelligence should increase our power
9:50 DW’s timeline: 50% chance of AGI by 2050, 10% by 2030
10:10 The timeline is contingent on human actions
10:40 Even if AGI isn’t coming until 2070, we should be working on AI alignment today
11:10 AI Impacts’ survey of all contributors to NeurIPS
11:35 Median view: 50% chance of AGI in 2059, and many were pessimistic
12:15 This discussion can’t be left to AI researchers
12:40 A bad beta version might be our last invention
13:00 A few hundred people are now working on AI alignment, and tens of thousands on advancing AI
13:35 But the AI research population is still growing faster
13:40 CC: Three routes to a positive outcome
13:55 1. Luck: the world turns out to be configured in our favour
14:30 2. Mathematical approaches to AI alignment succeed
14:45 We either align AIs forever, or manage to control them. This is very hard
14:55 3. We merge with the superintelligent machines
15:40 Uploading is a huge engineering challenge
15:55 Philosophical issues raised by uploading: is the self retained?
16:10 DW: routes 2 and 3 are too binary. A fourth route is solving morality
18:15 Individual humans will be augmented, indeed we already are
18:55 But augmented humans won’t necessarily be benign
19:30 DW: We have to solve beneficence
20:00 CC: We can’t hope to solve our moral debates before AGI arrives
20:20 In which case we are relying on route 1 – luck
20:30 DW: Progress in philosophy *is* possible, and must be accelerated
21:15 The Universal Declaration of Human Rights shows that generalised moral principles can be agreed
22:25 CC: That sounds impossible. The UDHR is very broad and often ignored
23:05 Solving morality is even harder than the MIRI project, and reinforces the idea that route 3 is our best hope
23:50 It’s not unreasonable to hope that wisdom correlates with intelligence
24:00 DW: We can proceed step by step, starting with progress on facial recognition, autonomous weapons, and other such intermediate questions
25:10 CC: We are so far from solving moral questions. Americans can’t even agree if a coup against their democracy was a bad thing
25:40 DW: We have to make progress, and quickly. AI might help us
26:50 The essence of transhumanism is that we can use technology to improve ourselves
27:20 CC: If you had a magic wand, your first wish should probably be to make all humans see each other as members of the same tribe
27:50 Is AI ethics a helpful term?
28:05 AI ethics is a growing profession, but if problems are framed as ethical, then people who disagree with you are bad, not just wrong
28:55 AI ethics makes debates about AI harder to resolve, and angrier
29:15 AI researchers are understandably offended by finger-wagging, self-proclaimed AI ethicists who may not understand what they are talking about
