Alignment Newsletter #113: Checking the ethical intuitions of large language models
Length:
17 minutes
Released:
Aug 19, 2020
Format:
Podcast episode
Description
Recorded by Robert Miles. More information about the newsletter here.