
LM101-077: How to Choose the Best Model using BIC

From Learning Machines 101


Length: 24 minutes
Released: May 2, 2019
Format: Podcast episode

Description

In this 77th episode of www.learningmachines101.com, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this interpretation is fundamentally different from that of AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability model, while AIC is used to estimate out-of-sample prediction error. The probability of the training data given the model is called the “marginal likelihood”. Using the marginal likelihood, one can calculate the probability of a model given the training data, and then use this analysis to support selecting the most probable model, selecting the model that minimizes expected risk, and performing Bayesian model averaging. The assumptions required for BIC to be a valid approximation of the probability of the training data given the probability model are also discussed.
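
To make the distinction concrete, here is a minimal sketch (not taken from the episode) of how the two criteria are computed from a model's maximized log-likelihood, and of how BIC scores can be converted into approximate posterior model probabilities of the kind used for Bayesian model averaging. The model names, sample size, and log-likelihood values are hypothetical.

import numpy as np

def bic(log_likelihood, num_params, n):
    # Bayesian Information Criterion: approximates -2 * log(marginal likelihood)
    # under the large-sample approximation discussed in the episode.
    return num_params * np.log(n) - 2.0 * log_likelihood

def aic(log_likelihood, num_params):
    # Akaike Information Criterion: estimates relative out-of-sample prediction error.
    return 2.0 * num_params - 2.0 * log_likelihood

# Hypothetical maximized log-likelihoods and parameter counts for two candidate
# models fit to the same n training examples.
n = 500
models = {"model_A": (-1032.4, 3), "model_B": (-1025.1, 8)}

bic_scores = {name: bic(ll, k, n) for name, (ll, k) in models.items()}
aic_scores = {name: aic(ll, k) for name, (ll, k) in models.items()}

# Approximate posterior model probabilities (assuming equal prior model
# probabilities): p(model | data) is proportional to exp(-BIC / 2).
scores = np.array([bic_scores[name] for name in models])
weights = np.exp(-(scores - scores.min()) / 2.0)
weights /= weights.sum()

for name, w in zip(models, weights):
    print(f"{name}: BIC={bic_scores[name]:.1f}  AIC={aic_scores[name]:.1f}  "
          f"p(model|data)~{w:.3f}")

The exp(-BIC/2) weights are the usual large-sample approximation to the model posterior under equal model priors; with informative priors one would multiply each weight by p(model) before normalizing.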

Titles in the series (85)

Smart machines based upon the principles of artificial intelligence and machine learning are now prevalent in our everyday life. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In this podcast series, we examine questions such as: How do these devices work? Where do they come from? And how can we make them even smarter and more human-like? These are the questions that will be addressed in the podcast series Learning Machines 101.