Why Won’t OpenAI Say What the Q* Algorithm Is?
Last week, it seemed that OpenAI—the secretive firm behind ChatGPT—had been broken open. The company’s board had suddenly fired CEO Sam Altman, hundreds of employees revolted in protest, Altman was reinstated, and the media dissected the story from every possible angle. Yet the reporting belied the fact that our view into the most crucial part of the company is still so fundamentally limited: We don’t really know how OpenAI develops its technology, nor do we understand exactly how Altman has directed work on future, more powerful generations.
This was made acutely apparent last week, when it was reported that, prior to Altman’s firing, several staff researchers had raised concerns about a supposedly dangerous breakthrough. At issue was an algorithm called Q* (pronounced “Q-star”), which has allegedly been shown to solve certain grade-school-level math problems that it hasn’t seen before. Although this may sound unimpressive, some researchers within the company reportedly believed that this could be an early sign of the algorithm improving its ability to reason—in other words, using logic to solve novel problems.