
Percy Liang on Machine Learning Robustness, Foundation Models, and Reproducibility

From The Gradient: Perspectives on AI

Length:
51 minutes
Released:
Jan 27, 2022
Format:
Podcast episode

Description

In interview 21 of The Gradient Podcast, we talk to Percy Liang, an Associate Professor of Computer Science at Stanford University and the director of the Center for Research on Foundation Models.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Percy Liang's research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.

Sections:
(00:00) Intro
(01:21) Start in AI
(06:52) Interest in Language
(10:17) Start of PhD
(12:22) Semantic Parsing
(17:49) Focus on ML robustness
(22:30) Foundation Models, model robustness
(28:55) Foundation Model bias
(34:48) Foundation Model research by academia
(37:13) Current research interests
(39:40) Surprising robustness results
(44:24) Reproducibility and CodaLab
(50:17) Outro

Papers / Topics discussed:
* On the Opportunities and Risks of Foundation Models
* Reflections on Foundation Models
* Removing spurious features can hurt accuracy and affect groups disproportionately
* Selective classification can magnify disparities across groups
* Just train twice: improving group robustness without training group information
* LILA: language-informed latent actions
* CodaLab

Get full access to The Gradient at thegradientpub.substack.com/subscribe

About the series
Interviews with various people who research, build, or use AI, including academics, engineers, artists, entrepreneurs, and more. thegradientpub.substack.com