AI Literacy: Understanding Shifts in our Digital Ecosystem
Ebook · 298 pages · 4 hours


About this ebook

Daniel Bashir’s AI Literacy is a book about AI at its best and worst: AI systems can and will do incredible things, but we need to be careful as we develop and use them. The book focuses on AI literacy,

Language: English
Release date: Jan 28, 2022
ISBN: 9798885041027

    Book preview

    AI Literacy - Daniel Bashir

    New Degree Press

    Copyright © 2021 Daniel Bashir

    All rights reserved.

    AI Literacy

    Understanding Shifts in Our Digital Ecosystem

    ISBN 978-1-63730-673-4 Paperback

    ISBN 978-1-63730-762-5 Kindle Ebook

    ISBN 979-8-88504-102-7 Ebook

    Contents

    Introduction

    Chapter 1 History of Tech and Rights

    Chapter 2 Public Literacy and Rule of Law

    Chapter 3 Where are we, really?

    Chapter 4 AI in the Media

    Chapter 5 Feats of AI

    Chapter 6 Long-term AI Today

    Chapter 7 Techno-Optimism

    Chapter 8 Good Intentions don’t matter

    Chapter 9 Losing Face

    Chapter 10 Divided, Distracted

    Chapter 11 Pixels and Skin

    Chapter 12 Scholars in Disarray

    Chapter 13 How Does it Work

    Chapter 14 Rolling out AI

    Chapter 15 Cultural Shifts

    Chapter 16 AI Literacy for Students

    Chapter 17 AI Literacy for Technologists

    Chapter 18 AI Literacy for Policymakers

    Chapter 19 Case Studies in Algorithmic Governance

    Chapter 20 Case Studies in AI Governance

    Conclusion

    Acknowledgements

    Appendix

    Introduction

    “I never thought I’d have to explain to my daughters why Daddy got arrested. How does one explain to two little girls that a computer got it wrong, but the police listened to it, anyway?”

    If the quotation you just read sounds like a plotline from Black Mirror, you’re not far off. Unfortunately, these are the words of a very real man: on a Thursday in January 2020, Robert Julian-Borchak Williams found himself in a detention center in Detroit. By the time he reached the interrogation room at noon the next day, Robert still had no clear idea why he was being held. According to Kashmir Hill of the New York Times, Robert was only shown a piece of paper with his photo and the words ‘felony warrant’ and ‘larceny’.

    Robert had received a call from the Detroit police department that Thursday afternoon, telling him to submit himself for arrest at the station. Thinking the call was a prank, Robert didn’t act on it. But when he arrived home later that day, a police car pulled up behind him in his driveway, and two police officers handcuffed him in front of his wife and daughters.

    After his fingerprints and other information were collected, Robert was brought to an interrogation room and shown a blurry still from a surveillance video: a heavyset man, dressed in black and wearing a red St. Louis Cardinals cap, standing in front of a watch display. Robert was shocked when the detective interrogating him asked if he was the man in the photo: while the photo was blurry, he and the man looked nothing alike.

    If you own a phone made in the last few years, then the device living inside your pocket has the same technology that led to Robert’s wrongful arrest. The cameras in phones, computers, and other devices are increasingly linked with software that can identify faces. Unfortunately, that software doesn’t always work correctly—the data fed to these systems can cause them to be worse at identifying a Black or Asian person than a White person, for example.

    If the facial recognition technology that lets you stare at your phone to unlock it doesn’t work, you’ll face the mild inconvenience of having to actually use your thumbs to punch in a passcode. But if the same technology goes wrong in the hands of a police officer, you could be arrested. While Robert’s case was the first of its kind to be exposed, two others have already joined it—all three cases appear to involve the misuse of facial recognition algorithms that are known to be imperfect. It is entirely possible that other cases exist, but these three are the only ones publicly known.

    When I thought through Robert’s story, what struck me most was that the entire situation might have been prevented at several steps along the way to his arrest and interrogation. Detroit Police Chief James Craig has since admitted that the facial recognition technology his department uses, provided by a company named DataWorks Plus, almost never brings back a direct match and almost always misidentifies people. The department could have done its research and found a better-performing facial recognition solution—DataWorks’ technology has been blasted as unreliable.

    Facial recognition, when it performs well, has the potential to allow police to more effectively solve cases and find criminals. But blind trust in its abilities can lead to even more wrongful arrests like Robert’s. Even though the best facial recognition systems might correctly identify people more often than not, they don’t work equally well for everyone. Users and vendors should think carefully about procedures for when the algorithms might be incorrect—a policeman’s facial recognition app will not announce to him that it made a mistake.
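
    What might such a procedure look like? Here is a minimal sketch of my own (the names, scores, and threshold below are invented for illustration and do not describe DataWorks Plus or any real police system): treat even the best match as a lead to corroborate, and route low-confidence results to human review rather than action.

        # Hypothetical triage for face-match results; all values are illustrative.
        from dataclasses import dataclass

        @dataclass
        class Candidate:
            name: str
            score: float  # similarity in [0, 1] from a hypothetical face matcher

        # Assumption: a threshold audited for error rates across demographic groups.
        MATCH_THRESHOLD = 0.90

        def triage(candidates):
            # Even the highest-scoring candidate is only a lead, never a conclusion.
            best = max(candidates, key=lambda c: c.score)
            if best.score >= MATCH_THRESHOLD:
                return f"lead: {best.name} ({best.score:.2f}); corroborate before acting"
            return "no reliable match; send to human review, do not act"

        print(triage([Candidate("person A", 0.62), Candidate("person B", 0.55)]))

    A procedure like this might have flagged the blurry surveillance still in Robert’s case as exactly the kind of low-confidence result that demands human judgment.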

    Just as I believe we should be careful with facial recognition algorithms but maintain optimism in their usefulness, so too should we maintain a cautious optimism about Artificial Intelligence (AI) technologies in general.

    AI technologies, which perform tasks deemed cognitive in nature, promise to be fundamentally transformative. While the phrase might inspire visions of futuristic worlds and science fiction-inspired humanoids, the AI systems of today are more pedestrian. That is not to say they are boring or incapable—they have bested world champions at Go and can classify images better than humans—but their sometimes superhuman capabilities are narrow, limited to specific tasks such as picking which caption corresponds to a photo.

    The rapid spread of technology in general suggests how quickly AI systems could spread throughout our world. A report from the World Economic Forum shows that the speed of adoption of new technologies has been getting faster decade by decade: computers went from 20.70% adoption in 1992 to 89.30% in 2016, while social media went from 5% adoption in 2005 to 79% in 2019.

    AI technologies themselves are being gradually adopted: a 2020 IBM survey of over 4,500 technology decision makers found that 45% of respondents from large companies (1,000+ employees) said they had adopted AI, while 29% of respondents from small and medium-sized businesses (under 1,000 employees) said the same. While that is still not a majority, it is a significant fraction.

    The COVID-19 pandemic has only sped up the adoption: among the 950 business and IT decision-makers with at least a moderate amount of AI knowledge surveyed by KPMG in March 2021, at least 80% of both large and small business leaders said AI helped their companies during the pandemic. As an article in the Harvard Business Review detailed in 2018, AI systems can automate routine tasks that might sap employees’ time, identify events like credit fraud, and even respond to customer support questions.

    But as Robert’s story should signal, the picture is not entirely rosy. Half of the business leaders KPMG surveyed in industrial manufacturing, retail, and tech said AI is moving faster than it should. This is a reasonable concern—AI systems are well known to exhibit bias and to exact significant environmental costs. There is plenty of work on solving both problems, but neither has a ready solution.

    Although we already struggle to understand how current AI models make decisions, they continue getting bigger and more complicated. Furthermore, 92% of business leaders [in financial services, retail, and tech] with high AI knowledge wanted more government regulation of AI technology, and 45% of respondents cited bias as the largest risk for AI adoption.

    Indeed, the significant limitations of AI technologies have been well-documented. The Gender Shades study from Joy Buolamwini of the MIT Media Lab and Timnit Gebru, a Microsoft Research postdoc at the time, found that commercial facial recognition services worked far worse for dark-skinned women than for light-skinned men.

    We are fortunate that the public is becoming more attuned to the risks of the technologies we are using today. There are many calls for restrictions on facial recognition technology, while social media giants have faced skepticism over how their algorithms decide what content users see, among other things.

    At the same time, there is wide recognition that AI systems have incredible potential to help our society, from enabling cars to drive autonomously to simplifying the hiring process. While much of the news-following public has been exposed to the basic ideas of AI technology, few know much about how it works.

    Although many reports, such as a February 2020 exclusive from Axios, show public trust in technology companies has been eroding, the sector remains one of the most trusted overall. There was a fair amount of trust in AI technologies as recently as 2016: a Harvard Business Review (HBR) study that year found that “[m]ore than 50% of respondents trust AI to provide elder care, health advice, financial guidance, and social media content creation.” At the same time, a study from the Future of Humanity Institute (FHI) found that “[s]upport for developing AI is greater among those who are wealthy, educated, male, or have experience with technology.”

    It is telling that those more familiar with technology are more likely to support the development of AI systems. In the FHI study, over 55% of respondents with computer science or engineering degrees or experience supported AI development, while less than 40% of those without such a background did.

    The fact that many technologists come from a few demographics—the technology industry has long had a gender imbalance problem, and securing lucrative jobs in the field requires a fair amount of education—may also contribute to why those technologists don’t perceive issues with AI systems: technologies like facial recognition work perfectly fine for them. As a Pakistani-American male, I am not exactly a minority in the technology field—if anything, South Asian males are overrepresented in the field. But even I can see how the traces of a Pakistani accent in my parents’ speech confuse voice recognition systems like Siri.

    It is concerning to learn how AI technologies might replace us or make mistakes, especially against a background of depictions of AI that portray killer robots and idle human beings. I and many others care a great deal about ensuring that the technologies we build truly help us create a better future and don’t leave people behind. Helping create that future involves understanding what these technologies can and cannot do.

    Throughout the book, we’ll explore what it means for AI to become a greater part of our lives, and what the benefits and risks look like. It can be hard to trust technologies that seem opaque, and we’ll talk about how we might lift the hood and learn enough to trust those technologies without knowing everything.

    While a technologist myself, I aim to be careful not to endorse unconsidered techno-optimism: technology in general, and AI technology in particular, poses challenges and concerns. As these systems encroach more and more on our everyday lives, their limitations can have a great deal of impact on us. I believe that AI technologies can bring significant benefits to humanity, but because they will affect nearly every member of society, we need a more global AI literacy.

    I first became attuned to the potential risks of AI technologies when a lab-mate in my first research lab in college introduced me to the world of AI safety. In large part inspired by Nick Bostrom’s seminal book Superintelligence, published in 2014, the nascent field holds that the creation of Artificial General Intelligence (AGI)—an agent able to understand or accomplish any intellectual task a human can—presents an existential risk to humanity. Figures such as Elon Musk and Stephen Hawking have echoed the sentiment that AI could destroy the human race, or present a similar risk.

    Since learning about the field and reading Bostrom’s work, I’ve had the chance to develop my own understanding of where the machine learning world stands. I spent my college years doing applied and theoretical research in machine learning, where I learned how brittle and limited machine learning models can be. Working on the SageMaker team at Amazon Web Services and on the Personalization team at Amazon, I’ve seen what machine learning research and application look like in industry.

    While I believe the creation of AGI is quite far off, I began to think more deeply about the fact that many people and companies want to build AI technologies whose impact will cover the world. If these technologies are not released and used with considerable care and attention to detail, serious harm will not have to wait for AGI. As we will discuss later, drivers have already lost their lives to blind faith in a Tesla’s ability to navigate the road. It would be a death by a thousand cuts.

    I believe that AI literacy is an important first step towards ensuring that the impact of AI technologies on the world is positive and achieves the dreams of technologists today. In my ideal world, the average citizen would not only understand what the technology is, but also have a picture of its potential benefits, risks, and limitations. Broad AI literacy will not only enable AI technologies to realize their best form, but also make sure that we leave no one behind as we move into a future suffused with advanced technology.

    As the recent EU Draft Artificial Intelligence Act has demonstrated, calls for regulation have arisen among governments and members of the public. But the Act also demonstrates the difficulties in developing such legislation—technology moves far more quickly than legislation can adapt, and it is difficult to specify many of the effects of cognitive technologies in legal terminology. I think certain legislative provisions, such as prohibiting the development of technologies that would only be harmful, are vital.

    But developing effective legislation will be slow, and technology will race further and further ahead. While legislation struggles to catch up, I believe the best means we have to ensure AI technologies benefit our society is an informed public moral code. If we develop a collective sense of our interests as citizens, we can better understand how the new technologies of the day and laws at different levels support or harm those interests. Fortunately, deep technical knowledge of AI technologies is not necessary to understand how they affect us in everyday life.

    I primarily aim to reach members of the public without deep knowledge of AI technologies. But technologists like myself and policymakers also have a lot to learn about these topics, and from one another; I hope readers with technical and legislative expertise will find plenty of insights as well.

    In this book, you’ll learn how AI technologies are being used across a range of domains, from autonomous driving to hiring. You’ll come away with an informed take on the benefits and limitations of these technologies that will allow you to do more than take news about AI systems at face value. We’ll explore efforts already underway to better understand, govern, and improve these technologies, and how we might achieve a better world with them in the future.

    Chapter 1

    History of Tech and Rights

    The Prime Now service is finally true to its name: you click a button, and your chicken parmesan is sitting on the table. It’s faster than a restaurant.

    Your road rage is a thing of the past. In fact, you don’t even look at the road when going from place to place—your autonomous car affords you the luxury of burying your head in a book or catching up on the news as you rush to your lunch meeting. You don’t even notice that someone spray-painted Dumbhead! on the back of your car yesterday, but fortunately no one else does either. Despite being a journalist, you haven’t written an article in years: you type a few words of context into a text box on your computer and let it generate the rest of the document.

    This almost certainly isn’t your future, but might remind you of fictional worlds such as that in the 2008 Pixar movie WALL-E. We develop technology to make our lives easier, and AI technology is no different. But there is a particular irony in our desire to build technology that can outdo us cognitively. Where is the human if automated systems can drive, craft economic policy, and create art? There seems to be a place ahead, on the path of technological development, where we might need an intense reckoning with what it means to be human. But we are a long way from systems that can do much on their own. It is difficult to forecast exactly what will happen as AI systems mature, but we have already seen the effects of technological advances.

    From the Industrial Revolution to AI

    As the World Economic Forum recounts, the 19th-century Industrial Revolution caused a shift in the nature of work: people moved into cities, steam-powered machines drove unprecedented growth in the output of items previously hand-crafted by artisan workers, and farm workers adopted machines to meet demand from growing populations. In pure economic terms, these technological advancements represented an improvement. But along with the problems it solves, novel technology introduces new issues of its own.

    While the Industrial Revolution marked a change in how we interact with physical objects, allowing sturdier and stronger machines to do the heavy lifting for us, more recent changes have transformed how we interact with pictures, ideas, and other intangible objects. The rise of the Internet in the 1990s allowed humans to virtualize information. Ideas and facts used to be found in books, in encyclopedias, and on cave walls; we had to physically travel to libraries and find experts to learn new things. Now, we can sit in our rooms and tap away to summon all that information to a tiny screen.

    Just over two decades later, computers beat humans at identifying images: yes, algorithms running on your computer can distinguish cats from other animals better than you can. Andrej Karpathy, then a Stanford PhD candidate, competed against one of these algorithms in an image recognition challenge in 2014. He edged out the state-of-the-art image recognition system, only to post five months later that he had been surpassed by not one but several reported results.

    Image recognition is just one flavor of machine learning algorithm—a computer program that learns from experience. When I refer to AI technologies throughout this book, I will typically be talking about machine learning systems. By being fed millions of photos and labels indicating what is in those photos, an algorithm learns to pattern match: it can distinguish between different animals and different people, for instance. And as Karpathy’s crusade shows, just as we once developed machines that could handle physical tasks more efficiently than we could, we are now developing systems that can deal with information better than we can.
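
    To make “learning from labeled examples” concrete, here is a minimal sketch of my own in Python (an illustration, not the systems described in this chapter), using the scikit-learn library and its small bundled set of labeled digit images:

        # Toy "learn from labeled examples" loop: 8x8 grayscale digit images
        # stand in for the millions of photos a production system would see.
        from sklearn.datasets import load_digits
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        digits = load_digits()  # images plus the label a human assigned to each
        X_train, X_test, y_train, y_test = train_test_split(
            digits.data, digits.target, test_size=0.25, random_state=0)

        model = LogisticRegression(max_iter=2000)  # a deliberately simple pattern matcher
        model.fit(X_train, y_train)                # "experience": the labeled examples

        print(f"accuracy on unseen images: {model.score(X_test, y_test):.2%}")

    Swap the 1,797 tiny digit images for millions of photos and the simple model for a deep neural network, and you have the rough shape of the systems this chapter discusses.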

    Less than a decade after these promising results, we have made incredible progress in teaching computers to perform complex tasks. The 2021 AI Index Report from Stanford’s Institute for Human-Centered Artificial Intelligence states that today’s AI systems can not only classify images as objects, but compose text, audio, and images. And they are getting much better at it—the quality of these AI systems’ productions, in particular circumstances, is high enough that humans have trouble differentiating between generated and non-generated media. Such developments present a clear argument that we need to develop and use these technologies carefully.

    AI Winters and Summers

    How did we get here? Before the field of AI managed such large strides, there were decades of slow progress, stagnation, and lack of interest. But the field began with an intense feeling of optimism. Before Alan Turing wrote his seminal paper “Computing Machinery and Intelligence” in 1950 to propose the idea of thinking machines, the field had several names.

    Turing himself was a man of many talents and accomplishments—the 2014 film The Imitation Game tells the riveting story of his role in cracking ciphers so the Allies could decode Nazi messages during World War II. Every computer science major will encounter the Turing Machine, his mathematical model of computation that can simulate any algorithm. But the Turing Test, also called the Imitation Game—his thought experiment to decide whether a machine is intelligent—likely stands out most in popular memory.

    The name Artificial Intelligence was not picked up until five years after Turing’s paper, when a young Assistant Professor of Mathematics at Dartmouth College named John McCarthy put together a group to refine ideas about Turing’s thinking machines, coining the name Artificial Intelligence for the new field. His 2-month, 10-man study of AI, the Dartmouth Summer Research Project on Artificial Intelligence, was carried out in the summer of 1956 and is the event that begat the field.

    In his proposal for the Dartmouth project, McCarthy stated, “We think that a significant advance can be made in one or more of these problems [of making machines use language, form abstractions and concepts, solve problems that humans normally solve, and improve themselves] if a carefully selected group of scientists work on it together for a summer.” Indeed, the program included talks and discussions that laid the foundation for a number of research directions.

    Symbolic AI was one such direction: early AI pioneers believed they could create a general intelligence using symbolic reasoning, and the method remained the dominant paradigm until the late 1980s. “[Symbolic AI] systems are essentially piles of nested if-then statements drawing conclusions about entities (human-readable concepts) and their relations (expressed in well understood semantics like X is-a man or X lives-in Acapulco).” The Logic Theorist, considered the first prototype of an AI program, was presented at the workshop. Created by Herbert Simon, Allen Newell, and John Shaw, the program could prove mathematical theorems—it quickly proved 38 of the first 52 theorems in Chapter 2 of the Principia Mathematica.
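
    The flavor of such a system is easy to convey in a few lines of code. Here is a minimal sketch of my own (the entity Herbert, the facts, and the rules are invented for illustration; this is not the Logic Theorist): human-readable facts plus if-then rules that derive new conclusions.

        # Facts are (entity, relation, value) triples; rules are if-then patterns.
        facts = {("Herbert", "is-a", "man"), ("Herbert", "lives-in", "Acapulco")}

        # If an entity matches the left-hand pattern, conclude the right-hand one.
        rules = [
            (("is-a", "man"), ("is-a", "mortal")),
            (("lives-in", "Acapulco"), ("lives-in", "Mexico")),
        ]

        # Forward chaining: keep applying rules until no new conclusions appear.
        changed = True
        while changed:
            changed = False
            for entity, relation, value in list(facts):
                for (if_rel, if_val), (then_rel, then_val) in rules:
                    new_fact = (entity, then_rel, then_val)
                    if (relation, value) == (if_rel, if_val) and new_fact not in facts:
                        facts.add(new_fact)
                        changed = True

        for fact in sorted(facts):
            print(fact)  # includes the derived ("Herbert", "is-a", "mortal")

    Scale the handful of facts and rules up to thousands, curated by human experts, and you get the rough shape of the expert systems that carried this paradigm through the 1980s.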

    Although the workshop was exciting for at least some of its participants, McCarthy’s optimistic proposal almost foreshadowed the AI winters—periods of time when public interest in AI, along with both academic and industrial funding, dried up.
