Artificial Intelligence
Length:
42 minutes
Released:
Nov 16, 2018
Format:
Podcast episode
Description
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven't planned for would be extremely bad for humankind. (Original score by Point Lobo.) Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher. Learn more about your ad-choices at https://news.iheart.com/podcast-advertisers
Titles in the series (12)
EP03: X Risks: Humanity could have a future billions of years long – or we might not make it past the next century. If we have a trip through the Great Filter ahead of us, then we appear to be entering it now. It looks like existential risks will be our filter. by The End Of The World with Josh Clark