Uncertainty Quantification in Machine Learning: Measuring Confidence in Predictions
Length:
32 minutes
Released:
Aug 6, 2021
Format:
Podcast episode
Description
In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses quantifying uncertainty in machine learning (ML) systems. ML systems can make wrong predictions, provide inaccurate estimates of how uncertain those predictions are, and fail in ways that are hard to anticipate. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between the scientific and mathematical advances of the ML research community and the practitioners who use ML systems in real-world contexts, such as software engineers, software developers, data scientists, and system developers.
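The podcast does not specify the techniques in code, but a common baseline for quantifying a classifier's predictive uncertainty is the Shannon entropy of its predicted class distribution; the sketch below (an illustration, not a method from the episode) shows the idea, with `predictive_entropy` and its inputs being assumed names.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution.

    Higher entropy means the model is less certain about its prediction.
    """
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# A confident prediction yields low entropy...
confident = predictive_entropy([0.98, 0.01, 0.01])
# ...while a near-uniform prediction yields entropy close to log(3).
uncertain = predictive_entropy([0.34, 0.33, 0.33])
```

Ranking predictions by such a score is one simple way to flag the inputs on which an ML system is most likely to be wrong.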