EP 141: How To Understand and Fix Biased AI
Length:
31 minutes
Released:
Nov 9, 2023
Format:
Podcast episode
Description
Why are AI models so biased? Whether it's ChatGPT or an AI image generator, LLMs often have certain biases and tendencies. Nick Schmidt, Founder & CTO of SolasAI & BLDS, LLC, joins us to discuss how to understand and fix biased AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Nick and Jordan questions about AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Timestamps:
[00:01:20] Daily AI news
[00:04:00] About Nick and Solas AI
[00:07:14] Algorithm misuse can lead to discrimination
[00:11:54] 3-step burden shifting process to address discrimination
[00:14:18] Internet usage leads to biased data collection
[00:17:30] AI bias, accessibility, and user control insights
[00:22:59] Algorithm fairness through regulations
[00:26:16] Algorithmic decisioning and human biases
[00:27:32] How to address biases in AI models?

Topics Covered in This Episode:
1. Prevalence of Bias in AI Models
2. Detection and Mitigation of Bias in Algorithms
3. Practical Solutions for Addressing Bias in AI

Keywords:
AI bias, discrimination, image generators, language models, input data, burden shifting process, biased information, societal biases, fairness, exclusion, collective punishment, biased AI, practical advice, best practices, everyday users, legal framework, AI news, smart devices, NVIDIA, animated films, detection, mitigation, discriminatory outcomes, generative AI, model development, algorithmic decision-making, dynamic models, reinforcement, algorithmic fairness, Solas AI, newsletter, daily AI