Autonomous Everything: How Algorithms Are Taking Over Our World

At their core, computers run software algorithms.

Machine learning is a particular class of software algorithm. It’s basically a way of instructing a computer to learn by feeding it an enormous amount of data and telling it when it’s doing better or worse. The machine-learning algorithm modifies itself to do better more often.
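As a rough illustration of that feedback loop, a toy learner might look like the sketch below: it is fed example input-output pairs, measures how wrong each guess is, and nudges its single adjustable parameter so that it guesses better next time.

```python
# Toy sketch of the machine-learning feedback loop: feed the program data,
# tell it how wrong it was, and let it modify itself to do better.
# Here it learns the rule y = 2x from ten examples.

data = [(x, 2 * x) for x in range(1, 11)]    # (input, correct answer) pairs
weight = 0.0                                 # the one parameter the program tunes
learning_rate = 0.005

for _ in range(200):                         # pass over the data many times
    for x, target in data:
        prediction = weight * x
        error = prediction - target          # "doing better or worse"
        weight -= learning_rate * error * x  # modify itself to do better

print(f"learned weight: {weight:.3f}")       # converges toward 2.0
```

Real systems tune millions or billions of parameters rather than one, but the loop is the same.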

Machine-learning algorithms are popping up everywhere because they do things faster and better than humans, especially when large amounts of data are involved. They give us our search results, determine what’s on our social network news feeds, score our creditworthiness, and determine which government services we’re eligible for. They already know what we’ve watched and read, and they use that information to recommend books and movies we might like. They categorize photographs and translate text from one language to another. They play Go as well as a master; read X-rays and diagnose cancers; and inform bail, sentencing, and parole decisions. They analyze speech to assess suicide risk and analyze faces to predict homosexuality. They’re better than we are at predicting the quality of fine Bordeaux wine, hiring blue-collar employees, and deciding whether to punt in football. Machine learning is used to detect spam and phishing e-mails, and also to make phishing e-mails more individual and believable, and therefore more effective.

Because these algorithms essentially program themselves, it can be impossible for humans to understand what they do. For example, Deep Patient is a machine-learning system that has surprising success at predicting schizophrenia, diabetes, and some cancers—in many cases performing better than expert humans. But although the system works, no one knows how, even after analyzing the machine-learning algorithm and its results.

On the whole, we like this. We prefer the more accurate machine-learning diagnostic system over the human technician, even though it can’t explain itself. For this reason, machine-learning systems are becoming more pervasive in many areas of society.

For the same reasons, we’re allowing algorithms to become more autonomous. Autonomy is the ability of systems to act independently, without human supervision or control. Autonomous systems will soon be everywhere. A 2014 book, Autonomous Technologies, has chapters on autonomous vehicles in farming, autonomous landscaping applications, and autonomous environmental monitors. Cars now have autonomous features such as staying within lane markers, following a fixed distance behind another car, and braking without human intervention to avert a collision. Agents—software programs that do things on your behalf, like buying a stock if the price drops below a certain point—are already common.
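A sketch of such an agent fits in a few lines. The version below simulates the price feed and the brokerage order rather than calling any real trading service, but the shape is the point: a loop that watches for a condition and acts on our behalf without asking first.

```python
# Sketch of a simple trading agent: buy a (hypothetical) stock the moment its
# price drops below a threshold. The price feed and the order placement are
# simulated stand-ins for real market-data and brokerage APIs.

import random
import time

SYMBOL = "XYZ"        # hypothetical ticker
THRESHOLD = 150.00    # buy if the price falls below this

def get_price(symbol: str) -> float:
    # Stand-in for a real market-data feed.
    return random.uniform(140.0, 160.0)

def place_buy_order(symbol: str, shares: int) -> None:
    # Stand-in for a real brokerage API call.
    print(f"BUY {shares} shares of {symbol}")

def run_agent() -> None:
    # The agent runs unattended: it checks the price and acts on its own.
    while True:
        if get_price(SYMBOL) < THRESHOLD:
            place_buy_order(SYMBOL, shares=100)
            break
        time.sleep(1)

if __name__ == "__main__":
    run_agent()
```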

“If we let computers think for us and the underlying input data is corrupt, they’ll do the thinking badly and we might not ever know it.”

We’re also allowing algorithms to have physical agency; they can affect the world in a direct physical manner. When you look around, computers with physical agency are everywhere, from embedded medical devices to cars to nuclear power plants.

Some algorithms that might not seem autonomous actually are. While it might be technically true that human judges make bail decisions, if they all do what the algorithm recommends because they believe the algorithm is less biased, then the algorithm is as good as autonomous. Similarly, if a doctor never contradicts an algorithm that makes decisions about cancer surgery—possibly out of fear of a malpractice suit—or if an army officer never contradicts an algorithm that makes decisions about where to target a drone strike, then those algorithms are as good as autonomous. Inserting a human into the loop doesn’t count unless that human actually makes the call.

The risks in all of these cases are considerable.

Algorithms can be hacked. Algorithms are executed using software, and software can be hacked.

Algorithms require accurate inputs. Algorithms need data—often data about the real world—in order to function properly. We need to ensure that the data is available when those algorithms need it, and that the data is accurate. Sometimes the data is naturally biased. And one of the ways of attacking algorithms is to manipulate their input data. Basically, if we let computers think for us and the underlying input data is corrupt, they’ll do the thinking badly and we might not ever know it.

In what’s called adversarial machine learning, the attacker tries to figure out how to feed the system specific data that causes it to fail in a specific manner. One research project focused on image-classifying algorithms; the researchers were able to create images that were totally unrecognizable to humans and yet classified with high confidence by machine-learning networks. A related research project was able to fool visual sensors on cars with fake road signs in ways that wouldn’t fool human eyes and brains. Yet another project tricked an algorithm into classifying rifles as helicopters, without knowing anything about the algorithm’s design. (It’s now a standard assignment in university computer science classes: fool the image classifier.)
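The core trick is easiest to see in a toy setting. Against a simple linear classifier whose inner workings are known, an attacker can nudge every input feature by a tiny, carefully chosen amount and flip the classifier’s answer, even though the input looks essentially unchanged. The sketch below only illustrates that idea; it is not a reconstruction of any of the research projects above.

```python
# Toy illustration of adversarial input crafting against a known linear
# classifier: a tiny, targeted perturbation flips the predicted class even
# though the input barely changes.

import numpy as np

rng = np.random.default_rng(seed=0)

# A stand-in "trained" model: a linear classifier over 100 features.
weights = rng.normal(size=100)

def classify(x: np.ndarray) -> str:
    return "class A" if x @ weights > 0 else "class B"

# Start from an input the classifier labels class A.
x = rng.normal(size=100)
if classify(x) == "class B":
    x = -x

# Adversarial step: push each feature slightly against the sign of its weight,
# using just enough of a nudge to cross the decision boundary.
score = x @ weights
epsilon = 1.01 * score / np.abs(weights).sum()
x_adv = x - epsilon * np.sign(weights)

print("original prediction: ", classify(x))      # class A
print("perturbed prediction:", classify(x_adv))  # class B
print("largest change to any one feature:", float(np.max(np.abs(x_adv - x))))
```

Image classifiers with millions of parameters fall to the same principle, which is why adversarial images can look identical to the originals and still be classified as something else entirely.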

Just as the Microsoft chatbot Tay was made racist and misogynistic by deliberately fed data, hackers can train all sorts of machine-learning algorithms to do unexpected things. Spammers could similarly figure out how to fool anti-spam machine-learning algorithms. As machine-learning algorithms become more prevalent and more powerful, we should expect more of these kinds of attacks.

There are also new risks in algorithms’ speed. Computers make decisions and do things much faster than people. They can make stock trades in milliseconds, or shut power off for millions of homes at the same time. Algorithms can be replicated repeatedly in different computers, with each instance of an algorithm making millions of decisions per second. On the one hand, this is great because algorithms can scale in ways people can’t—or at least can’t easily, cheaply, and consistently. But speed can also make it harder to put meaningful checks on an algorithm’s behavior.

Often, the only thing that slows algorithms down is interaction with people. When algorithms interact with each other at computer speeds, the combined results can quickly spiral out of control. What makes an autonomous system more dangerous is that it can do serious damage before a human intervenes.

In 2017, Dow Jones accidentally published a story about Google buying Apple. The story was obviously false, and any human reading it would have immediately realized it, but automated stock-trading bots were fooled—and stock prices were affected for two minutes until the story was retracted.

That was just a minor problem. In 2010, autonomous high-speed financial trading systems unexpectedly caused a “flash crash.” Within minutes, a trillion dollars of stock market value was wiped out by unintended machine interactions, and the incident ended up bankrupting the company that caused the problem. And in 2013, hackers broke into the Associated Press’s Twitter account and falsely reported an attack on the White House. This sent the stock markets down 1% within seconds.

We should also expect autonomous machine-learning systems to be used by attackers: to invent new attack techniques, to mine personal data for purposes of fraud, to create more believable phishing e-mails. They will only get more sophisticated and capable in the coming years.

At the DefCon conference in 2016, the US Defense Advanced Research Projects Agency (DARPA) sponsored a new kind of hacking contest. “Capture the Flag” is a popular hacking sport: organizers create a network filled with bugs and vulnerabilities, and teams defend their own part of the network while attacking other teams’ parts. The Cyber Grand Challenge was similar, except teams submitted programs that tried to do the same automatically. The results were impressive. One program found a previously undetected vulnerability in the network, patched itself against the bug, and then proceeded to exploit it to attack other teams. In a later contest that had both human and computer teams, some computer teams outperformed some human teams.

These algorithms will only get more sophisticated and more capable. Attackers will use software to analyze defenses, develop new attack techniques, and then launch those attacks. Most security experts expect offensive autonomous attack software to become common in the near future. And then it’s just a matter of the technology improving. Expect the computer attackers to get better at a much faster rate than the human attackers; in another five years, autonomous programs might routinely beat all human teams.

“Weapons that can’t be recalled or turned off—and also operate at computer speeds—could cause all sorts of lethal problems for friend and foe alike.”

As Mike Rogers, the commander of US Cyber Command and the director of the NSA, said in 2016: “Artificial intelligence and machine learning . . . is foundational to the future of cybersecurity. . . . We have got to work our way through how we’re going to deal with this. It is not the if, it’s only the when to me.”

Robots offer the most evocative example of software autonomy combined with physical agency. Researchers have already exploited vulnerabilities in robots to remotely take control of them, and have found vulnerabilities in tele-operated surgical robots and industrial robots.

Autonomous military systems deserve special mention. The US Department of Defense defines an autonomous weapon as one that selects a target and fires without intervention from a human operator. All weapons systems are lethal, and they are all prone to accidents. Adding autonomy increases the risk of accidental death significantly. As weapons become computerized—well before they’re actual robot soldiers—they, too, will be vulnerable to hacking. Weapons can be disabled or otherwise caused to malfunction. If they are autonomous, they might be hacked to turn on each other or their human allies in large numbers. Weapons that can’t be recalled or turned off—and also operate at computer speeds—could cause all sorts of lethal problems for friend and foe alike.

All of this comes together in artificial intelligence. Over the past few years, we’ve read some dire predictions about the dangers of AI. Technologists Bill Gates, Elon Musk, and Stephen Hawking, and philosopher Nick Bostrom, have all warned of a future where artificial intelligence—either as intelligent robots or as something less personified—becomes so powerful that it takes over the world and enslaves, exterminates, or ignores humanity. The risks might be remote, they argue, but they’re so serious that it would be foolish to ignore them.

I am less worried about AI; I regard fear of AI more as a mirror of our own society than as a harbinger of the future. AI and intelligent robotics are the culmination of several precursor technologies, like machine-learning algorithms, automation, and autonomy. The security risks from those precursor technologies are already with us, and they’re increasing as the technologies become more powerful and more prevalent. So, while I am worried about intelligent and even driverless cars, most of the risks are already prevalent in Internet-connected drivered cars. And while I am worried about robot soldiers, most of the risks are already prevalent in autonomous weapons systems.

Also, as roboticist Rodney Brooks pointed out, “Long before we see such machines arising there will be the somewhat less intelligent and belligerent machines. Before that there will be the really grumpy machines. Before that the quite annoying machines. And before them the arrogant unpleasant machines.” I think we’ll see any new security risks coming long before they get here.

__________________________________

Click Here to Kill Everybody

From Click Here to Kill Everybody: Security and Survival in a Hyper-connected World. Used with permission of W. W. Norton & Company. Copyright © 2018 by Bruce Schneier.
