
More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity
Audiobook · 10 hours


Written by Adam Becker

Narrated by Greg Tremblay

Rating: 4 out of 5 stars


About this audiobook

How Silicon Valley’s heartless, baseless, and foolish obsessions—with escaping death, building AI tyrants, and creating limitless growth—pervert public discourse and distract us from real social problems 

Tech billionaires have decided that they should determine our futures for us. According to Elon Musk, Jeff Bezos, Sam Altman, and more, the only good future for humanity is one powered by technology: trillions of humans living in space, functionally immortal, served by superintelligent AIs.  
 
In More Everything Forever, science journalist Adam Becker investigates these wildly implausible and often profoundly immoral visions of tomorrow—and shows why, in reality, there is no good evidence that they will, or should, come to pass. Nevertheless, these obsessions fuel fears that overwhelm reason—for example, that a rogue AI will exterminate humanity—at the expense of essential work on solving crucial problems like climate change. What’s more, these futuristic visions cloak a hunger for power under dreams of space colonies and digital immortality. The giants of Silicon Valley claim that their ideas are based on science, but the reality is darker: they come from a jumbled mix of shallow futurism and racist pseudoscience.  
 
More Everything Forever exposes the powerful and sinister ideas that dominate Silicon Valley, challenging us to see how foolish, and dangerous, these visions of the future are. 
Language: English
Publisher: Hachette Audio
Release date: Apr 22, 2025
ISBN: 9781668647967
Author

Adam Becker

Adam Becker is a science journalist with a PhD in astrophysics. He has written for the New York Times, BBC, NPR, Scientific American, New Scientist, Quanta and many other publications. His first book, What Is Real?, was a New York Times Book Review Editor's Choice and was longlisted for the PEN Literary Science Writing Award. He has been a science journalism fellow at the Santa Fe Institute and a science communicator in residence at the Simons Institute for the Theory of Computing. He lives in California.

Rating: 4.24 out of 5 stars

23 ratings · 3 reviews


  • Rating: 4 out of 5 stars
    4/5

    Dec 23, 2025

    This was a really, really good look at both the quest for generative AI and the quest to explore and settle beyond the planet. My takeaway: most of these gazillionaires are whackadoodles and should not be in charge of anything. I think they all read too much bad sci-fi in their youth and have totally lost touch with reality. The only reason I didn't give the book 5 stars is that it was a little hard for me to follow at times. Some of the concepts tended to break my brain. But an excellent, fair look at the subject.
  • Rating: 3 out of 5 stars
    3/5

    Sep 23, 2025

    2025 book #39. If you think that the billionaire tech bros (Musk, Bezos, et al.), with their unrealistic dreams of general-purpose AI, living in space, and the singularity, have our best interests at heart, read this book and have your mind changed.
  • Rating: 5 out of 5 stars
    5/5

    Sep 9, 2025

    Rather than examining the case for Mars in detail, this is a tour through the dumb things that rich white men in Silicon Valley believe. Becker cogently demonstrates that, by the same logic that leads them to endorse death now to improve the chances of hypothetical zillions a million years in the future, the whole project is nonsensical. For example, if energy use continues to grow at the same rate as it does now, “3,700 years from right now, we’d be using all the energy produced by all the stars in the observable universe. If Bezos believes that ceasing to grow our energy usage must lead to a culture of stagnation, he’d better get used to the idea.”

    Becker argues that these ideas, now shaping AI (a huge target of capital investment and justification for energy use), are reductive, since key problems that can supposedly be “solved” with technology are not technological problems but social problems. The gap between “we can make everything we want” and “everyone has access to the things they want” is one of policy. “The solution to, say, border disputes between India and Pakistan isn’t throwing more technology at the problem.”

    But pretending that tech is always the solution is profitable. Perhaps even more importantly, their ideas about the future allow them to deny the inevitability of death—in imagining they can live forever, they’re threatening the future of all. Becker calls this “transcendence, allowing adherents to feel they can safely ignore all limitations. Go to space, and you can ignore scarcity of resources, not to mention legal restrictions. Be a longtermist, and you can ignore conventional morality, justifying whatever actions you take by claiming they’re necessary to ensure the future safety of humanity. Hasten the Singularity, and you can ignore death itself, or at least assure yourself that you can put it off for a few billion years.”

    But haven’t we been on an exponential upwards technological trajectory? It took thousands of years for the Industrial Revolution, but the pace of change has been much faster since, and if we can now do things impossible before because we imagined they might be possible (antibiotics, spaceflight) then why aren’t all the technologies we’ve imagined possible in the foreseeable future? “The fate of Moore’s law is the fate of all exponential trends: they end.” That includes innovation.

    Becker points out that, as with many political positions, framing huge leaps forward as inevitable is rhetorically effective, not least because it excuses responsibility for any nasty stuff that comes mid-leap. The AI will decide all, fix all. It also furthers a desire for control: the fantasy that we’re living in a computer-generated universe is a fantasy of total control, “especially for those who know how to control computers.”

    But this is nonsense. For example, the internet’s beloved Maciej Cegłowski points out, “If Einstein tried to get a cat in a carrier, and the cat didn’t want to go, you know what would happen to Einstein…. He would have to resort to a brute-force solution that has nothing to do with intelligence, and in that matchup the cat could do pretty well for itself. So even an embodied AI might struggle to get us to do what it wants.” Cegłowski also jokes about the smartest person he knew, “and all he did was lie around and play World of Warcraft between bong rips…. The assumption that any intelligent agent will want to recursively self-improve, let alone conquer the galaxy, to better achieve its goals makes unwarranted assumptions about the nature of motivation.” Continuing his bangers, he notes that toxic individualism is also doing work here: “A recurring flaw in AI alarmism is that it treats intelligence as a property of individual minds, rather than recognizing that this capacity is distributed across our civilization and culture.”

    Becker gets equally good quotes from several of his interviewees. Philosopher Brian Weatherson devastates the foundational assumption of longtermism: that we can know now what choices now will be good for people ten thousand, or million, years in the future. “The Seven Years’ War is about as far in the past as 2300 is in the future. And the Seven Years’ War had a causal impact on just about every country on the planet, in many cases a massive impact…. But did it make those countries better or worse, richer or poorer, more or less just, etc? Who knows! The [what-ifs] are too hard, even knowing how one particular run of history turned out. Our ability to know what will change extinction likelihoods [in] 250+ years, and the size and direction of those changes, is worse than our ability to know the size and direction of the causal impact of past events. And we don’t know that.”

    Becker also provides biting readings of various sub-species of tech bro futurism, including the “simulation hypothesis”: we’re in the Matrix, not in nature. “It is hard not to read into this shades of Genesis: We are made in the creator’s image, the world was made with us in mind,” and “If we live in a computer simulation, then expertise in software engineering really is expertise in everything.” They’re in control.

    But these dumb ideas have been around, in various forms, for a long time. “The fact that our society allows the existence of billionaires is the fundamental problem at the core of this book. They’re the reason this is a polemic rather than a quirky tour of wacky ideas.” We must push back against the claimed inevitability of their domination and remember that “no human vision of tomorrow is truly unstoppable. … They are too credulous and shortsighted to see the flaws in their own plans, but they will keep trying to use the promise of their impossible futures to expand their power here and now.”