#66 – Michael Cohen on Input Tampering in Advanced RL Agents

From Hear This Idea

Length: 152 minutes
Released: Jun 25, 2023
Format: Podcast episode

Description

Michael Cohen is a DPhil student at the University of Oxford, working with Mike Osborne. He will be starting a postdoc with Professor Stuart Russell at UC Berkeley's Center for Human-Compatible AI. His research considers the expected behaviour of generally intelligent artificial agents, with a view to designing agents that we can expect to behave safely.
You can see more links and a full transcript at www.hearthisidea.com/episodes/cohen.
We discuss:

What is reinforcement learning, and how is it different from supervised and unsupervised learning?
Michael's recently co-authored paper titled 'Advanced artificial agents intervene in the provision of reward'
Why might it be hard to convey what we really want to RL learners — even when we know exactly what we want?
Why might advanced RL systems tamper with their sources of input, and why could this be very bad? (See the sketch after this list.)
What assumptions need to hold for this "input tampering" outcome?
Is reward really the optimisation target? Do models "get reward"?
What's wrong with the analogy between RL systems and evolution?
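
For readers new to the framing, the sketch below shows the bare agent-environment reward loop these questions are about. It is only illustrative: the toy environment, the action names, and the simple averaging policy are assumptions of ours rather than anything from Michael's paper or the episode. The point is that the reward an RL agent optimises arrives as just another input signal.

```python
import random

# Toy agent-environment loop (names and environment are illustrative
# assumptions, not from the paper or episode). The agent's only access
# to "what we want" is the reward number delivered through its inputs.

ACTIONS = ["good", "bad"]

def environment_step(action):
    """Hypothetical environment: return (observation, reward) for an action."""
    observation = random.random()
    reward = 1.0 if action == "good" else 0.0   # reward produced by some sensor/process
    return observation, reward

def agent_policy(history):
    """Placeholder policy: mostly pick the action with the best average past reward."""
    totals = {}
    for act, rew in history:
        totals.setdefault(act, []).append(rew)
    if not totals or random.random() < 0.1:     # occasional random exploration
        return random.choice(ACTIONS)
    return max(totals, key=lambda a: sum(totals[a]) / len(totals[a]))

history = []
for step in range(100):
    action = agent_policy(history)
    observation, reward = environment_step(action)
    history.append((action, reward))

# The objective the agent actually pursues is defined by the reward it
# receives, not by our intentions. If a sufficiently capable agent could
# intervene in how that reward signal is produced (the "input tampering"
# question above), maximising received reward would stop tracking what
# we wanted.
```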

Key links:

Michael's personal website
'Advanced artificial agents intervene in the provision of reward' by Michael K. Cohen, Marcus Hutter, and Michael A. Osborne
'Pessimism About Unknown Unknowns Inspires Conservatism' by Michael Cohen and Marcus Hutter
'Intelligence and Unambitiousness Using Algorithmic Information Theory' by Michael Cohen, Badri Vallambi, and Marcus Hutter
'Quantilizers: A Safer Alternative to Maximizers for Limited Optimization' by Jessica Taylor
'RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning' by Marc Rigter, Bruno Lacerda, and Nick Hawes
Season 40 of Survivor


Hear This Idea is a podcast showcasing new thinking in philosophy, the social sciences, and effective altruism. Each episode has an accompanying write-up at www.hearthisidea.com/episodes.