An interview with an A.I. (with GPT-3 and Jeremy Nixon)

From Clearer Thinking with Spencer Greenberg

Length: 114 minutes
Released: Sep 30, 2021
Format: Podcast episode

Description

Read the full transcript

What is machine learning? What are neural networks? How can humans interpret the meaning or functionality of the various layers of a neural network? What is a transformer, and how does it build on the idea of a neural network? Does a transformer have a conceptual advantage over neural nets, or is a transformer basically the equivalent of neural nets plus a lot of compute power? Why have we started hearing so much about neural nets in just the last few years even though they've existed conceptually for many decades? What kind of ML model is GPT-3? What learning sub-tasks are encapsulated in the process of learning how to autocomplete text? What is "few-shot" learning? What is the difference between GPT-2 and GPT-3? How big of a deal is GPT-3? Right now, GPT-3's responses are not guaranteed to contain true statements; is there a way to train future GPT or similar models to say only true things (or to indicate levels of confidence in the truthfulness of its statements)? Should people whose jobs revolve around writing or summarizing text be worried about being replaced by GPT-3? What are the relevant copyright issues related to text generation models? A website's "robots.txt" file or a "noindex" HTML attribute in its pages' meta tags tells web crawlers which content they can and cannot access; could a similar solution exist for writers, programmers, and others who want to limit or prevent their text from being used as training data for models like GPT-3? What are some of the scarier features of text generation models? What does the creation of models like GPT-3 tell us (if anything) about how and when we might create artificial general intelligence?

Learn more about GPT-3 here. And learn more about Jeremy Nixon and listen to his episode here.

Further reading:
"Kanye West, Donald Trump And Jim Brown: The Full Transcript"
lsusr's website
"The Humans Are Dead" by Flight of the Conchords
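One of the questions above refers to robots.txt, the plain-text file a site publishes to tell crawlers which paths they may fetch. As a rough illustration only (not something shown in the episode), here is a minimal Python sketch using the standard library's urllib.robotparser to check a page against a site's robots.txt before crawling; the site URL and crawler name are hypothetical placeholders.

# Illustrative sketch: how a well-behaved crawler might consult robots.txt
# before collecting a page, e.g. as training data. The site URL and the
# user-agent string below are made-up placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # hypothetical site
rp.read()

# A hypothetical crawler identifying itself as "ExampleTrainingBot"
if rp.can_fetch("ExampleTrainingBot", "https://example.com/essays/some-post"):
    print("Allowed to crawl this page")
else:
    print("robots.txt asks crawlers to skip this page")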

Titles in the series (100)

Clearer Thinking is a podcast about ideas that truly matter. Join Spencer Greenberg each week as he has fun, in-depth conversations with brilliant people, exploring useful ideas related to psychology, society, behavior change, philosophy, science, artificial intelligence, math, economics, self-help, mental health, and technology. If you enjoy learning about powerful, practical concepts and frameworks, wish you had more deep, intellectual conversations in your life, or are looking for non-BS self-improvement, then we think you'll love this podcast! Because this is the podcast about "ideas that matter," we prioritize ideas that can be applied right now to make life better and that can help you better understand yourself and the world. In other words, we want to highlight the very best tools to enhance your learning, self-improvement efforts, and decision-making.

We take on important, thorny questions like: What's the best way to help a friend or loved one going through a difficult time? How can we make our worldviews more accurate, and how can we hone the accuracy of our thinking? What are the advantages of using our "gut" to make decisions, and when should we expect careful, analytical reflection to be more effective? Why do societies sometimes collapse, and what can we do to reduce the chance that ours collapses? Why is the world today so much worse than it could be, and what can we do to make it better? What is good and what is bad about tradition, and are there more meaningful and ethical ways of carrying out important rituals, such as honoring the dead? How can we move beyond zero-sum, adversarial negotiations, and create more positive-sum interactions?