
“A note of caution about recent AI risk coverage” by Sean_o_h

From EA Forum Podcast (Curated & popular)


Length:
8 minutes
Released:
Jun 7, 2023
Format:
Podcast episode

Description

Epistemic status: some thoughts I wanted to get out quickly.

A lot of fantastic work has been done by people in the AI existential risk research community and related communities over the last several months in raising awareness about risks from advanced AI. However, I have some cause for unease that I'd like to share.

These efforts may have been too successful too soon.

Or, more specifically, this level of outreach success this far ahead of the development of AI capable of posing existential risk may have fallout. We should consider steps to mitigate this.

(1) Timelines

I know that there are well-informed people in the AI and existential risk communities who believe AI capable of posing existential risk may be developed within 10 years. I certainly can't rule this out, and even a small chance of this is worth working to prevent or mitigate to the extent possible, given the possible consequences. My own timelines are longer, although my intuitions don't have a rigorous model underpinning them (they line up roughly with the 15-40 year timelines mentioned in this recent blog post by Matthew Barnett from Epoch).

Right now the nature of media communications means that the message is coming across with a lot of urgency. From speaking to lay colleagues, impressions often seem to be of short timelines (and some people, e.g. Geoff Hinton, have explicitly said 5-20 years, sometimes with uncertainty caveats and sometimes without).

It may be that those with short (<10 years) timelines are right. And even if they're not, and we've got decades before this technology poses an existential threat, many of the attendant challenges – alignment, governance, distribution of benefits – will need that additional time to be addressed. And I think it's entirely plausible that the current level of buy-in will be needed in order to initiate the steps needed to avoid the worst outcomes, e.g. recruiting expertise and resources to alignment, developing and committing to robust regulation, even coming to agreements not to pursue certain technological developments beyond a certain point. However, if short timelines do not transpire, I believe there's a need to consider a scenario I think is reasonably likely.

(2) Crying wolf

I propose that it is most likely we are in a world where timelines are >10 years, perhaps >20 or 30 years. Right now this issue has many of the most prominent AI scientists and CEOs signed up, and political leaders worldwide committing to examining the issue seriously (examples from last week). What happens then in the >10-year-timeline world? The extinction-level outcomes that the public is hearing about, that these experts are raising, and that policymakers are making costly reputational investments in, don't transpire. What does happen is all the benefits of near-term AI that have been talked about, plus all the near-term harms that are being predominantly raised by the AI ethics/FAccT communities. Perhaps these harms include somewhat more extreme versions than what is currently talked about, but nowhere near catastrophic. Suddenly the year is 2028, and that whole 2023 furore is starting to look [...]
Source:
https://forum.effectivealtruism.org/posts/weJZjku3HiNgQC4ER/a-note-of-caution-about-recent-ai-risk-coverage
---
Narrated by TYPE III AUDIO.
