
[Linkpost] “Results from an Adversarial Collaboration on AI Risk (FRI)” by Forecasting Research Institute, Jhrosenberg, AvitalM, Molly Hickman, rosehadshar

From EA Forum Podcast (Curated & popular)


Length: 20 minutes
Released: Mar 12, 2024
Format: Podcast episode

Description

Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1]

Today, the Forecasting Research Institute (FRI) released “Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration,” which discusses the results of an adversarial collaboration focused on forecasting risks from AI. In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIcollaboration.pdf

Abstract. We brought together generalist forecasters and domain experts (n=22) who disagreed about the risk AI poses to humanity in the next century. The “concerned” participants (all of whom were domain experts) predicted a 20% chance of an AI-caused existential catastrophe by 2100, while the “skeptical” group (mainly “superforecasters”) predicted a 0.12% chance. Participants worked together to find the strongest near-term cruxes: forecasting questions resolving by 2030 that [...]

---

Outline:
(02:13) Extended Executive Summary
(02:44) Methods
(03:53) Results: What drives (and doesn’t drive) disagreement over AI risk
(04:32) Hypothesis #1 - Disagreements about AI risk persist due to lack of engagement among participants, low quality of participants, or because the skeptic and concerned groups did not understand each other’s arguments
(05:11) Hypothesis #2 - Disagreements about AI risk are explained by different short-term expectations (e.g. about AI capabilities, AI policy, or other factors that could be observed by 2030)
(07:53) Hypothesis #3 - Disagreements about AI risk are explained by different long-term expectations
(10:35) Hypothesis #4 - These groups have fundamental worldview disagreements that go beyond the discussion about AI
(11:31) Results: Forecasting methodology
(12:15) Broader scientific implications
(13:09) Directions for further research

The original text contained 10 footnotes which were omitted from this narration.

---
First published: March 11th, 2024
Source: https://forum.effectivealtruism.org/posts/orhjaZ3AJMHzDzckZ/results-from-an-adversarial-collaboration-on-ai-risk-fri
Linkpost URL: https://forecastingresearch.org/s/AIcollaboration.pdf
---
Narrated by TYPE III AUDIO.


Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.