Adversarial Examples, Protein Folding, and Shapley Values
From Journal Club
Length:
46 minutes
Released:
Apr 28, 2020
Format:
Podcast episode
Description
George dives into his blog post experimenting with Scott Lundberg's SHAP library. By training an XGBoost model on a dataset about academic attainment and alcohol consumption, can we develop a global interpretation of the underlying relationships? Lan leads the discussion of the paper Adversarial Examples Are Not Bugs, They Are Features by Ilyas and colleagues. This paper proposes a new perspective on the adversarial susceptibility of machine learning models by teasing apart the 'robust' and the 'non-robust' features in a dataset. The authors summarize the key takeaway as "Adversarial vulnerability is a direct result of the models’ sensitivity to well-generalizing, ‘non-robust’ features in the data." Last but not least, Kyle discusses AlphaFold!
Titles in the series (27)
Chess Transformer, Kaggle Scandal, and Interpretability Zoo: Welcome to a brand new show from Data Skeptic entitled "Journal Club". Each episode will feature a regular panel and one revolving guest seat. The group will discuss a few topics related to data science and focus on one featured scholarly paper which... by Journal Club