The use of AI to create dynamic player experiences is hardly new. Mike Cook, a game designer and AI researcher at King’s College London, points out that in the 1990s the Creatures series used AI to create characters that learned from their experiences, and Lionhead’s Black & White did something similar a few years later. “The difference today,” he says, “is that lots of proposals to use AI in games are to replace things people already do just fine, like writing or art.”
Case in point: the tech demo Nvidia showcased at the Consumer Electronics Show in January, in which users could speak directly, via microphone, to the proprietor of a cyberpunk ramen shop, the spoken responses coming from generative AI. “I don’t think Nvidia’s vision of AI-driven NPCs is very good, or very sensible,” Cook says. “It makes for an eye-catching tech demo, but it’s not a very well-thought-out idea.” From a technical standpoint, he reckons, getting something like this to perform consistently would be a nightmare. “How can you QA an NPC that can talk about anything, to make sure it’s not harmful or misleading?”
One narrative designer is similarly unimpressed with a technology that essentially removes the writer from the equation. “It’s boring,” he says. The issue, as he sees it, comes down to a question of meaning. Adding generative AI to NPCs won’t mean they’re more interesting, he thinks – “it just means they say more.”