
Managing Vulnerabilities in Machine Learning and Artificial Intelligence Systems

From Software Engineering Institute (SEI) Podcast Series



Length:
41 minutes
Released:
Jun 4, 2021
Format:
Podcast episode

Description

The robustness and security of artificial intelligence, and specifically machine learning (ML), is of vital importance. Yet ML systems are vulnerable to adversarial attacks. These include attempts to make the ML system learn the wrong thing (data poisoning), do the wrong thing (evasion attacks), or reveal the wrong thing (model inversion). Although there are several efforts to provide detailed taxonomies of the kinds of attacks that can be launched against a machine learning system, none are organized around operational concerns. In this podcast, Jonathan Spring, Nathan VanHoudnos, and Allen Householder, all researchers at the Carnegie Mellon University Software Engineering Institute, discuss the management of vulnerabilities in ML systems as well as the Adversarial ML Threat Matrix, which aims to close this gap between academic taxonomies and operational concerns.
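To make the evasion-attack category concrete, here is a minimal, self-contained sketch in the style of the fast gradient sign method (FGSM). The toy logistic-regression weights, inputs, and perturbation budget are all illustrative assumptions, not anything discussed in the episode; real evasion attacks target trained models such as image classifiers.

```python
import numpy as np

# Toy "trained" model: logistic regression with assumed weights and bias.
w = np.array([2.0, -1.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability that x belongs to class 1.
    return sigmoid(w @ x + b)

def input_gradient(x, y=1):
    # Gradient of the negative log-likelihood loss w.r.t. the INPUT x
    # (not the weights): d/dx [-log p(y|x)] = (p - y) * w for y in {0, 1}.
    return (predict(x) - y) * w

# Evasion attack: nudge a correctly classified input in the direction
# that increases the model's loss, within a small budget eps.
x = np.array([1.0, 1.0])   # clean input, assumed true label y=1
eps = 0.5                   # perturbation budget (assumed)
x_adv = x + eps * np.sign(input_gradient(x, y=1))

print(predict(x))     # confidence on the clean input
print(predict(x_adv)) # confidence drops on the adversarial input
```

The model itself is untouched; only the input is perturbed, which is why evasion attacks are characterized as making the system "do the wrong thing" at inference time.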

Titles in the series (100)

The SEI Podcast Series presents conversations in software engineering, cybersecurity, and future technologies.