AI’s ‘Fog of War’
This is Atlantic Intelligence, an eight-week series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.
Earlier this year, The Atlantic published a story by Gary Marcus, a well-known AI expert who has agitated for the technology to be regulated, both in his Substack newsletter and before the Senate. (Marcus, a cognitive scientist and an entrepreneur, has founded AI companies himself and has explored launching another.) Marcus argued that “this is a moment of immense peril,” and that we are teetering toward an “information-sphere disaster, in which bad actors weaponize large language models, distributing their ill-gotten gains through armies of ever more sophisticated bots.”
I was interested in following up with Marcus given recent events. In the past six weeks, we’ve seen an executive order on AI from the Biden administration; upheaval at the influential company OpenAI; and this Wednesday, the release of Gemini, a GPT competitor from Google. What we have not seen, yet, is total catastrophe of the sort Marcus and others have warned about. Perhaps it looms on the horizon—some experts have fretted over the role AI might play in the 2024 election, while others believe we are close to developing advanced AI models that could acquire “unexpected and dangerous capabilities,” as my colleague Karen Hao has reported. But perhaps fears of existential risk have become their own kind of AI hype, understandable yet unlikely to materialize. My own opinions seem to shift by the day.