Machine Learning Bias and Fairness with Timnit Gebru and Margaret Mitchell
Length: 43 minutes
Released: Feb 14, 2018
Format: Podcast episode
Description
This week, we dive into machine learning bias and fairness from a social and technical perspective with machine learning research scientists Timnit Gebru from Microsoft and Margaret Mitchell (aka Meg, aka M.) from Google.
They talk with Melanie and Mark about ongoing efforts and resources to address bias and fairness, including diversifying datasets, applying algorithmic techniques, and expanding research team expertise and perspectives. There is no simple solution to the challenge, and they offer insight into what work is in progress across the broader community and where it is going.
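As a purely illustrative aside, not from the episode: one of the "algorithmic techniques" the show notes point to, via the Equality of Opportunity in Machine Learning post linked under the interview resources, is auditing a classifier for parity of true positive rates across groups. The minimal Python sketch below uses made-up labels, predictions, and group names; the helper functions are our own illustration, not part of any library mentioned in the show.

import numpy as np

def true_positive_rate(y_true, y_pred):
    # TPR = TP / (TP + FN), computed over examples whose true label is 1.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float((y_pred[positives] == 1).mean())

def equal_opportunity_gap(y_true, y_pred, groups):
    # Largest TPR difference between any two groups; 0 would mean parity.
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {g: true_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    values = list(rates.values())
    return max(values) - min(values), rates

# Toy data: the classifier misses more positives for group "b" than for group "a".
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, per_group = equal_opportunity_gap(y_true, y_pred, groups)
print(per_group, gap)  # roughly {'a': 1.0, 'b': 0.33} and a gap of about 0.67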
Timnit Gebru
Timnit Gebru works in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research's New York lab. Prior to joining Microsoft Research, she was a PhD student in the Stanford Artificial Intelligence Laboratory, studying computer vision under Fei-Fei Li. Her main research interest is in data mining large-scale, publicly available images to gain sociological insight, and working on the computer vision problems that arise as a result, including fine-grained image recognition, scalable annotation of images, and domain adaptation. The Economist and others have recently covered part of this work. She is currently studying how to take dataset bias into account while designing machine learning algorithms, and the ethical considerations underlying any data mining project. As a cofounder of the group Black in AI, she works both to increase diversity in the field and to reduce the impact of racial bias in data.
Margaret Mitchell
M. Mitchell is a Senior Research Scientist in Google’s Research & Machine Intelligence group, working on artificial intelligence. Her research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence toward positive goals. Margaret’s work combines machine learning, computer vision, natural language processing, social media, and insights from cognitive science. Before Google, Margaret was a founding member of Microsoft Research’s “Cognition” group, focused on advancing artificial intelligence, and a researcher in Microsoft Research’s Natural Language Processing group.
Cool things of the week
GPS/Cellular Asset Tracking using Google Cloud IoT Core, Firestore and MongooseOS blog
GPUs in Kubernetes Engine now available in beta blog
Announcing Spring Cloud GCP - integrating your favorite Java framework with Google Cloud blog
Interview
PAIR | People+AI Research Initiative site
FATE | Fairness, Accountability, Transparency and Ethics in AI site
FAT* Conference site & resources
Joy Buolamwini site
Algorithmic Justice League site
ProPublica Machine Bias article
AI Ethics & Society Conference site
Ethics in NLP Conference site
FACETS site
TensorFlow Lattice repo
Sample papers on bias and fairness:
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification paper
Facial Recognition is Accurate, if You’re a White Guy article
Mitigating Unwanted Biases with Adversarial Learning paper
Improving Smiling Detection with Race and Gender Diversity paper
Fairness Through Awareness paper
Avoiding Discrimination through Causal Reasoning paper
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings paper
Satisfying Real-world Goals with Dataset Constraints paper
Axiomatic Attribution for Deep Networks paper
Monotonic Calibrated Interpolated Look-Up Tables paper
Equality of Opportunity in Machine Learning blog
Additional links:
Bill Nye Saves the World Episode 3: Machines Take Over the World (includes Margaret Mitchell) site
“We’re in a diversity crisis”: Black in AI’s founder on what’s poisoning the algorithms in our lives article
Using Deep Learning and Google Street View to Estimate Demographics with Timnit Gebru TWiML & AI podcast
Security and Safety in AI: Adversarial Examples, Bias and Trust with Moustapha Cisse TWiML & AI podcast
How we can build AI to help humans, not hurt us TED
PAIR Symposium conference