AI Shouldn’t Decide What’s True
Can artificial intelligence be trained to seek—and speak—only the truth? The idea seems enticing, seductive even.
And earlier this spring, billionaire business magnate Elon Musk announced that he intends to create “TruthGPT,” an AI chatbot designed to rival GPT-4 not just commercially, but in the domain of distilling and presenting only “truth.” A few days later, Musk purchased about 10,000 GPUs, likely to begin building what he called a “maximum truth-seeking AI” through his new company X.AI.
This ambition introduces yet another vexing facet of trying to foretell—and direct—the future of AI: Can, or should, chatbots have a monopoly on truth?
AI chatbots present the antithesis of transparency.
There are innumerable concerns about this rapidly evolving technology going horribly, terribly awry. Chatbots like GPT-4 are now testing at the 90th percentile or above in a range of standardized tests, and are, according to a Microsoft team (which runs a version of ChatGPT on its search engine), showing signs of human-level intelligence. Given access to the internet, they can already accomplish complex goals, enlisting humans to help them along the way. Even Sam Altman, the CEO of OpenAI—the company behind ChatGPT—warned this week that AI could “cause significant harm to the world.” He noted: “If this technology goes wrong, it can go quite wrong,” manipulating people or even controlling armed drones. (Indeed, Musk himself was a signatory on the March open letter calling for a pause in any further AI development.)