
“EA and Longtermism: not a crux for saving the world” by ClaireZabel

From EA Forum Podcast (Curated & popular)

Length: 19 minutes
Released: Jun 4, 2023
Format: Podcast episode

Description

This is partly based on my experiences working as a Program Officer leading Open Phil’s Longtermist EA Community Growth team, but it’s a hypothesis I have about how some longtermists could have more of an impact by their lights, not an official Open Phil position.

Context: I originally wrote this in July 2022 as a memo for folks attending a retreat I was going to. I find that I refer to it pretty frequently and it seems relevant to ongoing discussions about how much meta effort done by EAs should focus on engaging more EAs vs. other non-EA people. I am publishing it with light-ish editing, and some parts are outdated, though for the most part I hold most of the conclusions more strongly than I did when I originally wrote it.

Tl;dr: I think that recruiting and talent pipeline work done by EAs who currently prioritize x-risk reduction (“we” or “us” in this post, though I know it won’t apply to all readers) should put more emphasis on ideas related to existential risk, the advent of transformative technology, and the ‘most important century’ hypothesis, and less emphasis on effective altruism and longtermism, in the course of their outreach. A lot of EAs who prioritize existential risk reduction are making increasingly awkward and convoluted rhetorical maneuvers to use “EAs” or “longtermists” as the main label for people we see as aligned with our goals and priorities. I suspect this is suboptimal and, in the long term, infeasible. In particular, I’m concerned that this is a reason we’re failing to attract and effectively welcome some people who could add a lot of value. The strongest counterargument I can think of right now is that I know of relatively few people doing full-time work on existential risk reduction in AI and biosecurity who have been drawn in by just the “existential risk reduction” frame [this seemed more true in 2022 than 2023]. This is in the vein of Neel Nanda’s “Simplify EA Pitches to ‘Holy Shit, X-Risk’” and Scott Alexander’s “Long-termism vs. Existential Risk”, but I want to focus more on the hope of attracting people to do priority work even if their motivations are neither longtermist nor neartermist EA, but instead mostly driven by reasons unrelated to EA.

EA and longtermism: not a crux for doing the most important work

Right now, my priority in my professional life is helping humanity navigate the imminent creation of potentially transformative technologies, to try to make the future better for sentient beings than it would otherwise be. I think that’s likely the most important thing anyone can do these days. And I don’t think EA or longtermism is a crux for this prioritization anymore. A lot of us (EAs who currently prioritize x-risk reduction) were “EA-first”: we came to these goals first via broader EA principles and traits, like caring deeply about others; liking rigorous research, scope sensitivity, and expected value-based reasoning; and wanting to meet others with similar traits. Next, we were exposed to a cluster of philosophical and empirical arguments about [...]
Source: https://forum.effectivealtruism.org/posts/cP7gkDFxgJqHDGdfJ/ea-and-longtermism-not-a-crux-for-saving-the-world
---
Narrated by TYPE III AUDIO.

About the series:
Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma.