
Marc Bellemare: Distributional Reinforcement Learning

From The Gradient: Perspectives on AI

Length: 72 minutes
Released: Dec 8, 2022
Format: Podcast episode

Description

Have suggestions for future podcast guests (or other feedback)? Let us know here!

In episode 52 of The Gradient Podcast, Daniel Bashir speaks to Professor Marc Bellemare. Professor Bellemare leads the reinforcement learning efforts at Google Brain Montréal and is a core industry member at Mila, where he also holds the Canada CIFAR AI Chair. His PhD work, completed at the University of Alberta, proposed the use of Atari 2600 video games to benchmark progress in reinforcement learning (RL). He was a research scientist at DeepMind from 2013 to 2017, and his Arcade Learning Environment was very influential in DeepMind's early RL research and remains one of the most widely used RL benchmarks today. More recently he collaborated with Loon to deploy deep reinforcement learning to navigate stratospheric balloons. His book on distributional reinforcement learning, published by MIT Press, will be available in Spring 2023.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (03:10) Marc's intro to AI and RL
* (07:00) Cross-pollination of deep learning research and RL in McGill and UDM
* (09:50) PhD work at U Alberta, continual learning, origins of the Arcade Learning Environment (ALE)
* (14:40) Challenges in the ALE, how the ALE drove RL research
* (23:10) Marc's thoughts on the Avalon benchmark and what makes a good RL benchmark
* (28:00) Opinions on "Reward is Enough" and whether RL gets us to AGI
* (32:10) How Marc thinks about priors in learning, "reincarnating RL"
* (36:00) Distributional Reinforcement Learning and the problem of distribution estimation
* (43:00) GFlowNets and distributional RL
* (45:05) Contraction in RL and distributional RL, theory-practice gaps
* (52:45) Representation learning for RL
* (55:50) Structure of the value function space
* (1:00:00) Connections to open-endedness / evolutionary algorithms / curiosity
* (1:03:30) RL for stratospheric balloon navigation with Loon
* (1:07:30) New ideas for applying RL in the real world
* (1:10:15) Marc's advice for young researchers
* (1:12:37) Outro

Links:
* Professor Bellemare's Homepage
* Distributional Reinforcement Learning book
* Papers:
* The Arcade Learning Environment: An Evaluation Platform for General Agents
* A Distributional Perspective on Reinforcement Learning
* Distributional Reinforcement Learning with Quantile Regression
* Distributional Reinforcement Learning with Linear Function Approximation
* Autonomous navigation of stratospheric balloons using reinforcement learning
* A Geometric Perspective on Optimal Representations for Reinforcement Learning
* The Value Function Polytope in Reinforcement Learning

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Interviews with various people who research, build, or use AI, including academics, engineers, artists, entrepreneurs, and more. thegradientpub.substack.com