When AI Hallucinates
This fall, Sam Altman, OpenAI’s once- (and possibly future-) CEO, made a surprising statement about artificial intelligence. AI systems, including that company’s ChatGPT, are known to “hallucinate”: to perceive patterns and generate outputs that are nonsensical. That wasn’t a flaw in AI systems, Altman said; it was part of their “magic.” The fact “that these AI systems can come up with new ideas and be creative,” he suggested, is “a lot of the power.” That raised eyebrows: we humans are rather good at creativity without getting our facts all wrong. How could an appeal to creativity serve as a decent counter to the many concerns about accuracy?
To begin, what do people mean when they say an AI system “hallucinates”? Take this example of what happens when GPT-4 tries its hand at academic citations:
Me:
Can you complete the title of the paper by Keith L. Downing written in 2004, “Development and the …”
GPT-4:
Yes, the complete title of Keith L. Downing’s paper written in 2004 is “Development and the Baldwin Effect: Evolution by Any Other Name?” This paper delves into the Baldwin Effect and its implications for understanding evolution and development.
The part of the title “Evolution by Any Other Name?” is made up. As it happens, that is the title of another academic paper on “alternative models …