What Happens When Your Bomb-Defusing Robot Becomes a Weapon
Micah Xavier Johnson spent the last day of his life in a standoff, holed up in a Dallas community-college building. By that point, he had already shot 16 people. Negotiators were called in, but it was 2:30 in the morning and the police chief was tired. He’d lost two officers. Nine others were injured. Three of them would later die. In the early hours of July 7, 2016, the chief asked his SWAT team to come up with a plan that wouldn’t put anyone else in Johnson’s line of fire.
Within 30 minutes, their Remotec Andros Mark 5A-1, a four-wheeled robot made by Northrop Grumman, was on the scene. The Mark 5A-1 had originally been purchased for bomb disposal. But that morning, the police attached a pound of C4 explosives to the robot’s extended arm and sent it down the hallway where Johnson had barricaded himself. The bomb killed him instantly. The machine remained functional.
Johnson had served in Afghanistan before being discharged. It’s possible that he recognized the robot before it blew him up.
Nearly 20 years earlier, a young roboticist named Helen Greiner was lecturing at a tech company in Boston. Standing in front of the small crowd, Greiner would have been in her late 20s, with hooded eyes, blonde hair, and a faint British accent masked by a lisp. She was showing off videos of Pebbles, a bright-blue robot built out of sheet metal.
For many years, the field of AI struggled with a key problem: How do you make robots for the real world? A robot that followed a script was simple, but to handle the unforeseen (say, a pothole or a fence), programmers would have to code instructions for every imaginable scenario. To engineers, that meant creating devices with ever more complex brains.
Greiner’s professor, Rodney Brooks, thought that approach was a dead end.