
92. Daniel Filan - Peering into neural nets for AI safety

From Towards Data Science

Length: 66 minutes
Released: Jul 14, 2021
Format: Podcast episode

Description

Many AI researchers think it’s going to be hard to design AI systems that remain safe as AI capabilities increase. We’ve already seen on this podcast how the field of AI alignment has emerged to tackle this problem, but a related effort is directed at a separate dimension of the safety problem: AI interpretability.
Our ability to interpret how AI systems process information and make decisions will likely become an important factor in assuring the reliability of AIs in the future. My guest for this episode has focused his research on exactly that topic. Daniel Filan is an AI safety researcher at Berkeley, where he’s supervised by AI pioneer Stuart Russell. Daniel also runs AXRP, a podcast dedicated to technical AI alignment research.

Titles in the series

Researchers and business leaders at the forefront of the field unpack the most pressing questions around data science and AI.