Managing Vulnerabilities in Machine Learning and Artificial Intelligence Systems
Length:
41 minutes
Released:
Jun 4, 2021
Format:
Podcast episode
Description
The robustness and security of artificial intelligence, and specifically machine learning (ML), is of vital importance. Yet ML systems are vulnerable to adversarial attacks, in which an attacker attempts to make the ML system learn the wrong thing (data poisoning), do the wrong thing (evasion attacks), or reveal the wrong thing (model inversion). Although there are several efforts to provide detailed taxonomies of the kinds of attacks that can be launched against a machine learning system, none are organized around operational concerns. In this podcast, Jonathan Spring, Nathan VanHoudnos, and Allen Householder, all researchers at the Carnegie Mellon University Software Engineering Institute, discuss the management of vulnerabilities in ML systems as well as the Adversarial ML Threat Matrix, which aims to close this gap between academic taxonomies and operational concerns.