Robots Show Us Who We Are
In 2016, Alan Winfield gave an “IdeasLab” talk at the World Economic Forum about building ethical robots. “Could we build a moral machine?” Winfield asked his audience. Behind him, pictured on a flatscreen TV, was one of the bots Winfield used in his experiments—a short, cutesy, white-and-blue human-like machine. Just a few years ago, he said, he believed it to be impossible: You couldn’t build a robot capable of acting on the basis of ethical rules. But that was before he realized what you could get robots to do if they had an imagination—or, less grandiosely, a “consequence engine,” a simulated internal model of itself and the world outside.
Winfield spoke of his experiments at the Bristol Robotics Lab in England. In one, a blue robot saved a red robot from walking into a “danger zone” by (gently) colliding with it. In another, two red robots were heading for danger zones and the blue robot could only save one—an ethical dilemma that endearingly caused it to dither between the two. “The robot behaves ethically not because it chooses to but because it’s programmed to do so,” Winfield said. “We call it an ethical zombie.” Its reasoning was completely transparent. “If something goes wrong, we can replay what the robot was thinking.” Winfield believes this will be crucial for the future. “Autonomous robots will need the equivalent of a flight-data recorder in an aircraft—an ethical black box.” This ethical black box, Winfield believes, would allow us to understand the “what if” questions the robot was asking.
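The idea behind the consequence engine can be sketched in a few lines of code. The robot runs each candidate action through an internal model of the world, predicts the outcome, and keeps a log of those “what if” simulations—a miniature ethical black box. Everything here (the one-dimensional world, the function names, the collision rule) is an illustrative assumption, not Winfield’s actual implementation:

```python
# Toy sketch of a "consequence engine": simulate each candidate action
# in an internal model, then pick the action whose predicted outcome
# satisfies the ethical rule (here: keep the other robot out of danger).
# The 1-D world and all names are illustrative assumptions.

DANGER_ZONE = range(8, 11)  # positions 8-10 are dangerous

def simulate(my_pos, other_pos, my_action):
    """Predict final positions if I take my_action (-1, 0, or +1)
    while the other robot keeps walking toward the danger zone."""
    other_next = other_pos + 1      # other robot steps forward
    my_next = my_pos + my_action
    if my_next == other_next:       # a (gentle) collision blocks it
        other_next = other_pos      # the other robot is stopped
    return my_next, other_next

def choose_action(my_pos, other_pos):
    """Pick an action whose simulated consequence keeps the other
    robot safe, logging every 'what if' along the way."""
    log = []        # the log is a tiny "ethical black box"
    best = 0
    for action in (-1, 0, 1):
        _, other_next = simulate(my_pos, other_pos, action)
        safe = other_next not in DANGER_ZONE
        log.append((action, other_next, safe))
        if safe:
            best = action
    return best, log
```

In this toy setup, if the blue robot sits at position 8 and the red robot at 7, only staying put (action 0) produces a blocking collision in simulation, so that is the action chosen—and the log records why the alternatives were rejected, which is exactly what an ethical black box would let us replay.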