
Moving Beyond Mimicry in Artificial Intelligence

What makes pre-trained AI models so impressive—and potentially harmful.

Imagine asking a computer to make a digital painting, or a poem—and happily getting what you asked for. Or imagine chatting with it about various topics, and feeling it was a real interaction. What once was science fiction is becoming reality. In June, Google engineer Blake Lemoine told the Washington Post he was convinced Google’s AI chatbot, LaMDA, was sentient. “I know a person when I talk to it,” Lemoine said. Therein lies the rub: As algorithms become increasingly good at producing the kind of “outputs” we once thought were distinctly human, it’s easy to be dazzled. To be sure, getting computers to generate compelling text and images is a remarkable feat; but it is not in itself evidence of sentience, or human-like intelligence. Current AI systems hold up a mirror to our online minds. Like Narcissus, we can get lost gazing at the reflection, even though it is not always flattering. We ought to ask ourselves: Is there more to these algorithms than mindless copying? The answer is not straightforward.

AI research is converging on a way to deal with many problems that once called for piecemeal, task-specific solutions: training large machine-learning models on vast amounts of data to perform a broad range of tasks they were not explicitly designed for. A group of researchers from Stanford coined the suggestive phrase “foundation models” to capture the significance of this trend, although we may prefer the more neutral label “large pre-trained models,” which loosely refers to a family of models that share a few important characteristics. They are trained through self-supervision, that is, without relying on humans to manually label the data; and they can adapt to novel tasks without further training. What’s more, simply scaling up their size and training data has proven surprisingly effective at improving their capabilities—no substantial changes to the underlying architecture required. As a result, much of the recent progress in AI has been driven by sheer engineering prowess rather than groundbreaking theoretical innovation.
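
To make self-supervision concrete, here is a deliberately tiny, hypothetical sketch in Python—not from the article, and nothing like a real large model in scale. It builds a character-level bigram model whose only “labels” are the next characters in the raw training text itself; the corpus string and function names are invented for illustration.

```python
# Toy illustration of self-supervision: the training signal comes from the
# text itself. Each character's "label" is simply the character that follows
# it, so no human annotation is ever needed.
from collections import Counter, defaultdict
import random

# Hypothetical miniature corpus; real models train on billions of tokens.
corpus = "we shape our tools and thereafter our tools shape us. "

# Self-labeling: every adjacent pair (current char, next char) becomes a
# training example, extracted automatically from the raw text.
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def sample_next(char: str) -> str:
    """Sample a next character in proportion to how often it followed
    `char` in the training text."""
    followers = counts[char]
    return random.choices(list(followers), weights=list(followers.values()))[0]

# "Generation" is just repeated next-character prediction.
text = "w"
for _ in range(60):
    text += sample_next(text[-1])
print(text)
```

Scaled up by many orders of magnitude, with the bigram table replaced by a deep neural network, this next-token objective is roughly what large pre-trained language models optimize; the adaptability to novel tasks described above then arrives through prompting rather than retraining.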
