
Alex Tamkin on Self-Supervised Learning and Large Language Models

From The Gradient: Perspectives on AI

Length: 71 minutes
Released: Nov 11, 2021
Format: Podcast episode

Description

In episode 15 of The Gradient Podcast, we talk to Stanford PhD candidate Alex Tamkin.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS

Alex Tamkin is a fourth-year PhD student in Computer Science at Stanford, advised by Noah Goodman and part of the Stanford NLP Group. His research focuses on understanding, building, and controlling pretrained models, especially in domain-general or multimodal settings.

We discuss:

* Viewmaker Networks: Learning Views for Unsupervised Representation Learning
* DABS: A Domain-Agnostic Benchmark for Self-Supervised Learning
* On the Opportunities and Risks of Foundation Models
* Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models
* Mentoring, teaching, and fostering a healthy and inclusive research culture
* Scientific communication and breaking down walls between fields

Podcast Theme: “MusicVAE: Trio 16-bar Sample #2” from “MusicVAE: A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music”

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Titles in the series (100)

Interviews with various people who research, build, or use AI, including academics, engineers, artists, entrepreneurs, and more. thegradientpub.substack.com