The Future of the Transformer Part 1 with Trey Kollmer | H100 Chips will Supercharge AI Hardware

From"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Length: 82 minutes
Released: Oct 18, 2023
Format: Podcast episode

Description

Trey Kollmer joins Nathan Labenz for a roundup of the latest AI research! They discuss Microsoft’s Self-Taught Optimizer (STOP) research and Google’s FreshLLMs, how H100 chips will supercharge development of programs with GPT-4 level compute, LLM representation of space and time, and more! If you're looking for an ERP platform, check out our sponsor, NetSuite: http://netsuite.com/cognitive

SPONSORS: NetSuite | Omneky
NetSuite has provided financial software for all your business needs for 25 years. More than 36,000 businesses have already upgraded to NetSuite by Oracle, gaining visibility and control over their financials, inventory, HR, eCommerce, and more. If you're looking for an ERP platform, head to NetSuite: http://netsuite.com/cognitive and download your own customized KPI checklist.

Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off.

LINKS:
FreshLLMs: https://arxiv.org/abs/2310.03214
Microsoft Self-Taught Optimizer (STOP): https://arxiv.org/abs/2310.02304
LLMs Represent Space and Time: https://paperswithcode.com/paper/language-models-represent-space-and-time
Deep Neural Networks Tend to Extrapolate Predictably: https://arxiv.org/pdf/2310.00873.pdf

TIMESTAMPS:
(00:00:00) – Introduction
(00:00:56) – Update to WGA Strike
(00:03:00) – Trey Kollmer's background
(00:06:00) – Scaling compute for AI training experiments with GPT-4 as reference point
(00:09:00) – Inflection's plan to acquire 22,000 H100s to reach GPT-4 scale compute in 5 days
(00:12:00) – Addressing knowledge cutoff in LLMs using search engines
(00:15:00) – Inserting structured search results into prompts with metadata
(00:16:07) – Sponsors: NetSuite | Omneky
(00:18:00) – Comparing approach to Perplexity system
(00:18:08) – FreshLLMs
(00:21:00) – Microsoft’s Self-taught Optimizer (STOP): Recursive self-improvement framework
(00:24:00) – STOP framework works with GPT-4 but not GPT-3.5
(00:27:00) – STOP removed sandbox flag in some cases
(00:30:00) – LLMs represent space and time with probe models
(00:33:00) – Visualizations show emergence of spatial maps
(00:33:14) – OpenAI rumours
(00:36:00) – Techniques like linear probes and holdout studies
(00:39:00) – DNNs extrapolate predictably by falling back to ignorance
(00:42:00) – Testing different architectures, loss functions, distribution shifts
(00:45:00) – Design systems to be conservative out of distribution
(00:48:00) – Potential for recursive architecture search
(00:50:21) – LLMs Represent Space and Time
(00:51:00) – Vision API enabling more capable web agents
(00:54:00) – Discussion of research insights
(00:57:00) – Thoughts on stochastic parrots debate
(01:11:25) – Deep Neural Networks Tend to Extrapolate Predictably

X/Social
@labenz (Nathan)
@treyko (Trey)
@CogRev_Podcast

A weekly podcast where hosts Erik Torenberg and Nathan Labenz interview the builders on the edge of AI and explore the dramatic shift it will unlock in the coming years.