Responsible AI: Implementing Ethical and Unbiased Algorithms
Ebook · 336 pages · 3 hours


About this ebook

This book is written for software product teams that use AI to add intelligent models to their products, or are planning to do so. As AI adoption grows, it is becoming important that all AI-driven products can demonstrate that they are not introducing bias into the decisions they make, as well as reducing any pre-existing bias or discrimination.

The responsibility for ensuring that AI models are ethical and make responsible decisions does not lie with the data scientists alone. The product owners and the business analysts are as important in ensuring bias-free AI as the data scientists on the team. This book addresses the part that these roles play in building a fair, explainable and accountable model, along with ensuring model and data privacy. Each chapter covers the fundamentals of its topic and then goes deep into the subject matter, providing the details that enable business analysts and data scientists to implement these fundamentals.

AI research is one of the most active and fastest-growing areas of computer science and statistics. This book includes an overview of the many techniques that draw from this research or are created by combining different research outputs. Some techniques from relevant and popular libraries are covered, but deliberately not drawn on heavily, as they are already well documented and new research is likely to replace some of them.


Language: English
Publisher: Springer
Release date: Sep 13, 2021
ISBN: 9783030768607

    Book preview

    Responsible AI - Sray Agarwal

    © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

    S. Agarwal, S. Mishra, Responsible AI, https://doi.org/10.1007/978-3-030-76860-7_1

    1. Introduction

    Sray Agarwal¹ and Shashin Mishra¹

    (1) London, UK

    Sray Agarwal (Corresponding author)

    Email: sray.agarwal@publicissapient.com

    Shashin Mishra

    Email: shashin.mishra@publicissapient.com

    Machine ethics was mostly a topic for science fiction before the twenty-first century. Easy access to low-cost data storage and computing has meant that the use of machine-driven intelligence has increased significantly in the last two decades, and the topic is no longer confined to entertainment or theory. Machine learning algorithms, or AI, now impact our lives in innumerable ways, making decisions that can have a material impact on the individuals affected by them.

    Through 2020, due to the global pandemic, even the sectors that had shied away from technology embraced it. However, the adoption has not been without problems, and as the number of lives touched by it has increased, so has the impact, positive or not. An example from very close to our homes is the grading fiasco for A-level students (equivalent to high school). An algorithm was used in the UK to predict the grades that students were likely to achieve, as examinations had been cancelled due to the pandemic. The predicted grades were nowhere close to the centre-assessed grades (CAG), the grades that a school, college or exam centre predicts for its students. CAGs are used by universities in the UK to give provisional places to students, as they are a reliable indicator of what a student is likely to achieve. With predicted grades far off from the CAGs, many students lost their university places.

    It was later found that, in order to fit the curve, the algorithm increased grades for students at small private schools, which are mostly in affluent neighbourhoods, and lowered them for large public schools, where many students belong to ethnic minorities or come from families with poor socio-economic status. This was quickly acknowledged as algorithmic bias, the algorithm was decommissioned and the CAGs were awarded as final grades, but for many students the damage was already done.

    The problem of bias and lack of fairness is not limited to first-time models built in a rush. A large number of discriminatory algorithms can simply go undetected because the impacted user group is concentrated in a small geographical area compared to the overall area targeted by the model. Take, for instance, the price of The Princeton Review's online SAT tutoring packages, which ranged from as low as $6,600 to as high as $8,400 based on the zip code of the user. Interestingly, the zip codes with the higher prices were also the ones with Asian-majority populations, including even the lower-income neighbourhoods.
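    The kind of check that could surface such a pricing disparity is straightforward to sketch. Below is a minimal, hypothetical example, not from the book: the zip codes, prices and group labels are made up, and it simply compares average quoted prices across groups.

```python
# Minimal, hypothetical sketch: compare average quoted prices across
# demographic groups. All zip codes, prices and group labels are made up.
from statistics import mean

# (zip_code, quoted_price_usd, majority_group_of_zip)
quotes = [
    ("07302", 8400, "asian_majority"),
    ("11355", 8400, "asian_majority"),
    ("94085", 8400, "asian_majority"),
    ("73301", 6600, "other"),
    ("30301", 7200, "other"),
    ("63101", 6600, "other"),
]

# Collect prices per group, then average them
by_group: dict[str, list[int]] = {}
for _zip_code, price, group in quotes:
    by_group.setdefault(group, []).append(price)

averages = {group: mean(prices) for group, prices in by_group.items()}
print(averages)  # {'asian_majority': 8400, 'other': 6800}

# Ratio between the highest- and lowest-priced groups; for a
# like-for-like product, a ratio well above 1.0 warrants investigation.
ratio = max(averages.values()) / min(averages.values())
print(f"price ratio across groups: {ratio:.2f}")  # 1.24
```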

    There are numerous examples of algorithms (and by extension products) that fail at implementing fairness, removing bias from their decisions or properly explaining those decisions. The reason these algorithms, or any algorithm for that matter, fail is that the facets of responsible AI are not considered when the data is analysed, the objective function is defined or the model pipeline is set up.
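    One such facet, checked at the data-analysis step, is whether the training data even represents the people the model will affect. A minimal sketch follows, assuming a hypothetical protected attribute and census-style reference shares; none of it comes from the book.

```python
# Minimal, hypothetical sketch: compare the share of each group in the
# training data against a reference (e.g. census) share.
from collections import Counter

def representation_gap(rows, attribute, reference_shares):
    """Per-group difference between training-set share and reference share."""
    counts = Counter(row[attribute] for row in rows)
    total = sum(counts.values())
    return {
        group: round(counts.get(group, 0) / total - ref, 3)
        for group, ref in reference_shares.items()
    }

# Made-up training rows and reference shares
training_rows = [{"ethnicity": "minority"}] * 120 + [{"ethnicity": "majority"}] * 880
reference = {"minority": 0.25, "majority": 0.75}

print(representation_gap(training_rows, "ethnicity", reference))
# {'minority': -0.13, 'majority': 0.13} -> minorities under-represented
```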

    A big reason such products exist is that there aren't any resources that help identify the different facets of creating a responsible product and the pitfalls to avoid. Another challenge that teams face is that the role of the product managers/owners and business analysts in the lifecycle of the model is not well defined. Our experience has shown us that all roles within a product team contribute to building an effective AI product, and in this book we have tried to cover the content from the point of view of all the roles, not just the data engineers and the data scientists. The book does go into detail, but wherever we have introduced math, we have tried to explain the concept as well as why it is necessary to understand it. Let's begin by defining what responsible AI is.

    What Is Responsible AI?

    The goals of responsible AI are manifold. It should be able to make decisions that reward users based on their achievements (approve credit for someone with a good income and credit history) and should not discriminate against someone because of data attributes that are not in the user's control (reject credit for someone with a good income and credit history who lives in an otherwise poorer neighbourhood). It should be usable in a system built for the future, with high levels of fairness and freedom from bias, and yet it should allow positive discrimination to correct the wrongs of the past. These requirements can be contradictory, and overcoming that inherent contradiction to build an AI that does all of this is what we want to talk about.
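    One widely used way to quantify the credit example above is demographic parity: comparing approval rates across groups that should, all else being equal, be treated alike. The sketch below uses hypothetical decisions and neighbourhood labels; it illustrates the measure and is not the book's implementation.

```python
# Minimal, hypothetical sketch: demographic parity difference over
# credit decisions. All decisions and neighbourhood labels are made up.

def approval_rate(decisions, group):
    """Fraction of applicants in `group` whose credit was approved."""
    rows = [d for d in decisions if d["neighbourhood"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

decisions = [
    {"approved": 1, "neighbourhood": "affluent"},
    {"approved": 1, "neighbourhood": "affluent"},
    {"approved": 0, "neighbourhood": "affluent"},
    {"approved": 1, "neighbourhood": "poorer"},
    {"approved": 0, "neighbourhood": "poorer"},
    {"approved": 0, "neighbourhood": "poorer"},
]

# For applicants with comparable income and credit history, the gap in
# approval rates between neighbourhoods should be near zero.
gap = approval_rate(decisions, "affluent") - approval_rate(decisions, "poorer")
print(f"approval-rate gap: {gap:.2f}")  # 0.33 -> worth investigating
```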

    As awareness rises around creating responsible AI products, the actual implementations and the regulatory requirements, if any, are still catching up. In April 2020, the US Federal Trade Commission, in its blog post titled Using Artificial Intelligence and Algorithms, discussed why an algorithm needs to be transparent and discrimination-free. But transparency alone is not sufficient unless it is accompanied by explainability; e.g. a 3-digit credit score is not helpful to the user unless it also tells them why they have a certain score and the actions that will help them improve it. Transparency alone hides more than it reveals. The Ethics Guidelines for Trustworthy AI published by the European Union in 2019 goes further and has defined a much wider
