
#91 - HATTIE ZHOU - Teaching Algorithmic Reasoning via In-context Learning #NeurIPS

From Machine Learning Street Talk (MLST)



Length: 21 minutes
Released: Dec 20, 2022
Format: Podcast episode

Description

Support us! https://www.patreon.com/mlst

Hattie Zhou, a PhD student at Université de Montréal and Mila, has set out to understand and explain the performance of modern neural networks, believing it a key factor in building better, more trusted models. Having previously worked as a data scientist at Uber, a private equity analyst at Radar Capital, and an economic consultant at Cornerstone Research, she has recently released a paper in collaboration with the Google Brain team, titled ‘Teaching Algorithmic Reasoning via In-context Learning’. In this work, Hattie identifies and examines four key stages for successfully teaching algorithmic reasoning to large language models (LLMs): formulating algorithms as skills, teaching multiple skills simultaneously, teaching how to combine skills, and teaching how to use skills as tools. Through the application of algorithmic prompting, Hattie has achieved remarkable results, with an order of magnitude error reduction on some tasks compared to the best available baselines. This breakthrough demonstrates algorithmic prompting’s viability as an approach for teaching algorithmic reasoning to LLMs, and may have implications for other tasks requiring similar reasoning capabilities.
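To make the idea concrete, here is a minimal illustrative sketch (not the paper's exact prompt format) of the "formulating algorithms as skills" stage: algorithmic prompting spells out every intermediate step of an algorithm inside the in-context exemplar, so the model imitates the procedure instead of pattern-matching on the final answer. The helper name and trace wording below are assumptions for illustration.

```python
def addition_exemplar(a: int, b: int) -> str:
    """Render a digit-by-digit addition trace with explicit carries,
    in the spirit of an algorithmic-prompting exemplar."""
    da, db = str(a)[::-1], str(b)[::-1]  # least-significant digit first
    steps, out, carry = [], [], 0
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        total = x + y + carry
        out.append(total % 10)
        steps.append(f"Position {i + 1}: {x} + {y} + carry {carry} = {total}, "
                     f"write {total % 10}, carry {total // 10}")
        carry = total // 10
    if carry:
        out.append(carry)
        steps.append(f"Final carry {carry} becomes the leading digit")
    answer = int("".join(map(str, out[::-1])))
    return f"Q: What is {a} + {b}?\n" + "\n".join(steps) + f"\nA: {answer}"

# A few such fully worked exemplars, followed by a new question,
# form the prompt given to the LLM.
prompt = addition_exemplar(128, 367) + "\n\nQ: What is 582 + 749?\n"
```

The key contrast with standard few-shot or chain-of-thought prompting is that nothing in the trace is left implicit: every carry and every digit write is stated, which is what lets the model execute the same algorithm on unseen inputs.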

TOC
[00:00:00] Hattie Zhou
[00:19:49] Markus Rabe [Google Brain]

Hattie's Twitter - https://twitter.com/oh_that_hat
Website - http://hattiezhou.com/

Teaching Algorithmic Reasoning via In-context Learning [Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi]
https://arxiv.org/pdf/2211.09066.pdf

Markus Rabe [Google Brain]:
https://twitter.com/markusnrabe
https://research.google/people/106335/
https://www.linkedin.com/in/markusnrabe

Autoformalization with Large Language Models [Albert Jiang, Charles Edgar Staats, Christian Szegedy, Markus Rabe, Mateja Jamnik, Wenda Li, and Yuhuai Tony Wu]
https://research.google/pubs/pub51691/

Discord: https://discord.gg/aNPkGUQtc5
YT: https://youtu.be/80i6D2TJdQ4


This is the audio podcast for the ML Street Talk YouTube channel at https://www.youtube.com/c/MachineLearningStreetTalk. Thanks for checking us out! We think that scientists and engineers are the heroes of our generation. Each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is unabashedly technical and non-commercial, so you will hear no annoying pitches. Corporate and MBA-speak is banned: "data product" and "digital transformation" are out, we promise :) Hosted by Dr. Tim Scarfe, Dr. Yannic Kilcher, and Dr. Keith Duggar.