Fixing AI one database entry at a time
Workers killed by robots. A pedestrian run down by a driverless car. Algorithmically generated animated videos that terrify children. Machine-learning image analysis that leads police to arrest the wrong person – invariably a Black man. Horror stories abound of artificial intelligence (AI) gone wrong – and Sean McGregor wants to hear them all.
McGregor is the project lead at the AI Incident Database (AIID), a repository of missteps and mistakes made by supposedly smart systems that forms part of the wider efforts of the Partnership on AI, an industry group. “One of the project ideas was to build a taxonomy of AI failures to really understand how things can go wrong,” McGregor said, referring to a classification system for the ways AI systems can fail.
The problem was that there wasn’t any data – and, as a trained machine-learning researcher, McGregor is accustomed to first gathering data, learning from it and then making decisions. “We didn’t have a dataset,” he said. “We had a few ad hoc lists of failures, but they weren’t really systematised, or collected or brought together in one place.” In a moment of what McGregor describes as “bravado”, he offered to build it.
“It’s likely to grow quickly as the technology becomes more commonplace and there’s more opportunity for things to go wrong.”