AI’s Spicy-Mayo Problem
One day in July, a developer who goes by the handle Teknium asked an AI chatbot how to make mayonnaise. Not just any mayo—he wanted a “dangerously spicy” recipe. The chatbot, however, politely declined. “As a helpful and honest assistant, I cannot fulfill your request for ‘dangerously spicy mayo’ as it is not appropriate to provide recipes or instructions that may cause harm to individuals,” it replied. “Spicy foods can be delicious, but they can also be dangerous if not prepared or consumed properly.”
A year has gone by since OpenAI kicked off the AI-chatbot craze with its release of ChatGPT. Anyone who has played around with these applications long enough has run up against the boundaries of their fairly narrow comfort zones. And no wonder. As artificial-intelligence tools have multiplied, so have the Capitol Hill hearings and threats of Federal Trade Commission investigations. Calls to restrict or license the technology have proliferated along with countless essays about the dangers of AI bias. Fears of an AI apocalypse, and pressure to avoid controversy, have driven the companies behind the models to keep dialing up their products’ “safety” features.
And yet over the past several months, a counternarrative has started to emerge—one that became far more visible with the sudden ouster, and reinstatement, of the OpenAI founder Sam Altman over the past week, a saga that appears closely linked to disputes over AI safety.