
#151 – Ajeya Cotra on accidentally teaching AI models to deceive us

From 80,000 Hours Podcast

Length: 170 minutes
Released: May 12, 2023
Format: Podcast episode

Description

Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don't get to see any resumes or do reference checks. And because you're so rich, tonnes of people apply for the job — for all sorts of reasons.

Today's guest Ajeya Cotra — senior research analyst at Open Philanthropy — argues that this peculiar setup resembles the situation humanity finds itself in when training very general and very capable AI models using current deep learning methods.

Links to learn more, summary and full transcript.

As she explains, such an eight-year-old faces a challenging problem. In the candidate pool there are likely some truly nice people, who sincerely want to help and make decisions that are in your interest. But there are probably other characters too — like people who will pretend to care about you while you're monitoring them, but intend to use the job to enrich themselves as soon as they think they can get away with it.

Like a child trying to judge adults, at some point humans will be required to judge the trustworthiness and reliability of machine learning models that are as goal-oriented as people, and greatly outclass them in knowledge, experience, breadth, and speed. Tricky!

Can't we rely on how well models have performed at tasks during training to guide us? Ajeya worries that it won't work. The trouble is that three different sorts of models will all produce the same output during training, but could behave very differently once deployed in a setting that allows their true colours to come through. She describes three such motivational archetypes:
Saints — models that care about doing what we really want
Sycophants — models that just want us to say they've done a good job, even if they get that praise by taking actions they know we wouldn't want them to
Schemers — models that don't care about us or our interests at all, who are just pleasing us so long as that serves their own agenda
And according to Ajeya, there are also ways we could end up actively selecting for motivations that we don't want.

In today's interview, Ajeya and Rob discuss the above, as well as:
How to predict the motivations a neural network will develop through training
Whether AIs being trained will functionally understand that they're AIs being trained, the same way we think we understand that we're humans living on planet Earth
Stories of AI misalignment that Ajeya doesn't buy into
Analogies for AI, from octopuses to aliens to can openers
Why it's smarter to have separate planning AIs and doing AIs
The benefits of only following through on AI-generated plans that make sense to human beings
What approaches for fixing alignment problems Ajeya is most excited about, and which she thinks are overrated
How one might demo actually scary AI failure mechanisms
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Ryan Kessler and Ben Cordell
Transcriptions: Katy Moore


Unusually in-depth conversations about the world's most pressing problems and what you can do to solve them. Subscribe by searching for '80,000 Hours' wherever you get podcasts. Produced by Keiran Harris. Hosted by Rob Wiblin, Head of Research at 80,000 Hours.