Model Interpretation (and Trust Issues)
Length: 17 minutes
Released: Apr 25, 2016
Format: Podcast episode
Description
Machine learning algorithms can be black boxes: inputs go in, outputs come out, and what happens in the middle is anybody's guess. But understanding how a model arrives at an answer is critical for interpreting the model, and for knowing whether it's doing something reasonable (one could even say... trustworthy). We'll talk about a new algorithm called LIME that seeks to make any model more understandable and interpretable.
Relevant Links:
http://arxiv.org/abs/1602.04938
https://github.com/marcotcr/lime/tree/master/lime
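For a sense of how the linked lime package is used in practice, here is a minimal sketch of explaining a single prediction. The scikit-learn iris dataset and random forest are illustrative stand-ins, not examples from the episode.

import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train an arbitrary black-box classifier (illustrative choice).
iris = load_iris()
model = RandomForestClassifier(n_estimators=100).fit(iris.data, iris.target)

# LIME perturbs the input around one instance and fits a simple,
# interpretable surrogate model to the black box's responses in that
# local neighborhood.
explainer = lime.lime_tabular.LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, num_features=4
)
# (feature condition, local weight) pairs for this one prediction.
print(explanation.as_list())

The point of the output is locality: the weights describe what drove this single prediction, not the model's behavior overall.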