A STRANGE THING IS HAPPENING in the world of artificial intelligence. The very people who are leading its development are warning of the immense risks of their work. A recent statement released by the nonprofit Center for AI Safety, signed by hundreds of important AI executives and researchers, said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Extinction? Nuclear war? If they’re so worried, why don’t these scientists just stop?
That’s easier said than done. Nuclear scientists didn’t stop until they perfected the bomb. And AI has innumerable benefits, too. But the statement, alongside a chorus of recent calls for government regulation of AI, raises several questions: What should the rules governing the development of AI look like? Who crafts them? Who polices them? How do these norms coexist with society’s existing laws? How do we account for differences among cultures and countries?
For answers, I turned to the academic and policy advisor Alondra Nelson, who served in the White House for the first two years of U.S. President