
Can We Develop Truly Beneficial AI? George Hotz and Connor Leahy

From Machine Learning Street Talk (MLST)

Length: 90 minutes
Released: Aug 4, 2023
Format: Podcast episode

Description

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

George Hotz and Connor Leahy discuss the crucial challenge of developing beneficial AI that is aligned with human values. Hotz believes truly aligned AI is impossible, while Leahy argues it is a solvable technical challenge.

Hotz contends that AI will inevitably pursue power, but that distributing AI widely would prevent any single AI from dominating. He advocates open-sourcing AI developments to democratize access. Leahy counters that alignment is necessary to ensure AIs respect human values; without solving alignment, general AI could ignore or harm humans.

They discuss whether AI's tendency to seek power stems from optimization pressure or from human-instilled goals. Leahy argues that goal-seeking behavior emerges naturally, while Hotz believes it reflects human values. Though they agree on AI's potential dangers, they differ on solutions: Hotz favors accelerating AI progress and distributing capabilities, while Leahy wants safeguards put in place.

While acknowledging risks like AI-enabled weapons, they debate whether broad access or restrictions better manage threats. Leahy suggests limiting dangerous knowledge, but Hotz insists openness checks government overreach. They concur that coordination and balance of power are key to navigating the AI revolution, and both eagerly anticipate seeing whose ideas prevail as AI progresses.

Transcript and notes: https://docs.google.com/document/d/1smkmBY7YqcrhejdbqJOoZHq-59LZVwu-DNdM57IgFcU/edit?usp=sharing

TOC:
[00:00:00] Introduction to George Hotz and Connor Leahy
[00:03:10] George Hotz's Opening Statement: Intelligence and Power
[00:08:50] Connor Leahy's Opening Statement: Technical Problem of Alignment and Coordination
[00:15:18] George Hotz's Response: Nature of Cooperation and Individual Sovereignty
[00:17:32] Discussion on individual sovereignty and defense
[00:18:45] Debate on living conditions in America versus Somalia
[00:21:57] Talk on the nature of freedom and the aesthetics of life
[00:24:02] Discussion on the implications of coordination and conflict in politics
[00:33:41] Views on the speed of AI development / hard takeoff
[00:35:17] Discussion on potential dangers of AI
[00:36:44] Discussion on the effectiveness of current AI
[00:40:59] Exploration of potential risks in technology
[00:45:01] Discussion on memetic mutation risk
[00:52:36] AI alignment and exploitability
[00:53:13] Superintelligent AIs and the assumption of good intentions
[00:54:52] Humanity’s inconsistency and AI alignment
[00:57:57] Stability of the world and the impact of superintelligent AIs
[01:02:30] Personal utopia and the limitations of AI alignment
[01:05:10] Proposed regulation on limiting the total number of flops
[01:06:20] Having access to a powerful AI system
[01:18:00] Power dynamics and coordination issues with AI
[01:25:44] Humans vs AI in optimization
[01:27:05] The impact of AI's power-seeking behavior
[01:29:32] A debate on the future of AI


This is the audio podcast for the ML Street Talk YouTube channel (https://www.youtube.com/c/MachineLearningStreetTalk). Thanks for checking us out! We think that scientists and engineers are the heroes of our generation. Each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is unabashedly technical and non-commercial, so you will hear no annoying pitches. Corporate- and MBA-speak is banned on Street Talk; "data product" and "digital transformation" are banned, we promise :) Hosted by Dr. Tim Scarfe, Dr. Yannic Kilcher and Dr. Keith Duggar.