Building Superintelligence Is Riskier Than Russian Roulette
Every day there seems to be a new headline: Yet another scientific luminary warns about how recklessly fast companies are creating more and more advanced forms of AI, and the great dangers this tech poses to humanity.
We share many of the concerns that AI researchers Geoffrey Hinton, Yoshua Bengio, and Eliezer Yudkowsky; philosopher Nick Bostrom; cognitive scientist Douglas Hofstadter; and others have expressed about the risks of failing to regulate or control AI as it becomes exponentially more intelligent than human beings. This is known as “the control problem,” and it is AI’s “hard problem.”
Once AI is able to improve itself, it will quickly become much smarter than us in almost every aspect of intelligence, then a thousand times smarter, then a million, then a billion … What does it mean to be a billion times more intelligent than a human? We can’t know, in the same way that an ant has no idea what it’s like to have a mind like Einstein’s. In such a scenario, the best we can hope for is benign neglect of our presence. We would quickly become like ants at its feet.