
417: Hume AI with Alan Cowen

From Giant Robots Smashing Into Other Giant Robots

Length: 40 minutes
Released: Apr 7, 2022
Format: Podcast episode

Description

Dr. Alan Cowen is the Executive Director of The Hume Initiative (https://thehumeinitiative.org/), a non-profit dedicated to the responsible advancement of AI with empathy, and CEO of Hume AI (https://hume.ai/), an AI research lab and empathetic AI company that is hoping to pave the way for AI that improves our emotional well-being.
Chad talks with Alan about forming clear ethical guidelines for how this technology should be used, since the public is justifiably skeptical about whether such technology will be used for good or ill. The Hume Initiative is intended to lay out which concrete use cases will be supported and which shouldn't be. Hume AI is built for developers who want to add empathic abilities to their applications.
The Hume Initiative (https://thehumeinitiative.org/)
Hume AI (https://hume.ai/)
Follow Hume AI on Twitter (https://twitter.com/hume__ai) or LinkedIn (https://www.linkedin.com/company/hume-ai/).
Follow Alan on Twitter (https://twitter.com/AlanCowen) or LinkedIn (https://www.linkedin.com/in/alan-cowen/).
Follow thoughtbot on Twitter (https://twitter.com/thoughtbot) or LinkedIn (https://www.linkedin.com/company/150727/).
Become a Sponsor (https://thoughtbot.com/sponsorship) of Giant Robots!
Transcript:
CHAD: This is the Giant Robots Smashing Into Other Giant Robots Podcast, where we explore the design, development, and business of great products. I'm your host, Chad Pytel. And with me today is Dr. Alan Cowen, the Executive Director of The Hume Initiative, a non-profit dedicated to the responsible advancement of AI with empathy, and CEO of Hume AI, an AI research lab and empathetic AI company. Alan, thank you for joining me.
DR. COWEN: Thanks for having me on.
CHAD: That's a lot of words in that introduction. I'm glad I got through it in one take. Let's take a step back a little bit and talk about the two different things, The Hume Initiative and Hume AI. And what came first?
DR. COWEN: So they were conceptualized at the same time. Practically speaking, Hume AI was started first, only because it is currently the sole supporter of The Hume Initiative. But they were both conceptualized as a way to address two of the main problems that people have faced in bringing empathic abilities to technology. Technology needs to have empathic abilities. If AI is going to get smart enough to make decisions on our behalf, it should understand whether those decisions are good or bad for our well-being. And a big part of that is understanding people's emotions, because emotions are really what determine our well-being.
The Hume Initiative addresses one of the challenges, which is the formation of clear ethical guidelines for how this technology should be used. And it's not because the companies pursuing this have bad intent; that's not the point at all. The problem is that the public is probably justifiably skeptical of whether this technology will be used for them or against them. And The Hume Initiative is intended as a way of laying out what the concrete use cases will be and which use cases shouldn't be supported.
Hume AI is introducing solutions to the problem of how we build empathic AI algorithms. And the challenge there has been the data. There have been a lot of attempts at building empathic AI, or emotion AI, whatever you call it: basically, ways of reading facial expression, emotion in the voice, and language. And there have been a few challenges, but most of them come down to the fact that the data tends to be based on outdated theories of emotion, and/or it tends to be based on people's perceptual ratings of images, largely from the internet, or of videos collected in an observational way without experimental control.
And within those perceptual judgments, you see gender biases, sometimes racial biases, biases by what people are wearing, whether they're wearing sunglasses, for example, because people with sunglasses for some reason are perceived as being proud. [laughter] And the algorithm

A podcast about the design, development, and business of great software. Each week thoughtbot's Chad Pytel (CEO) and Lindsey Christensen (CMO) are joined by the people who build and nurture the products we love.