Multi-Armed Bandits
Length:
11 minutes
Released:
Mar 7, 2016
Format:
Podcast episode
Description
Multi-armed bandits: how to take your randomized experiment and make it harder better faster stronger. Basically, a multi-armed bandit experiment allows you to optimize for both learning and making use of your knowledge at the same time. It's what the pros (like Google Analytics) use, and it's got a great name, so... winner!
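The explore/exploit trade-off described above can be sketched with epsilon-greedy, one common bandit strategy (not necessarily the one Google Analytics uses): most of the time you pull the arm that has paid off best so far, and a small fraction of the time you try a random arm to keep learning. The arm payout rates and parameter names below are illustrative assumptions, not from the episode.

```python
import random

def epsilon_greedy_bandit(true_rates, n_rounds=10_000, epsilon=0.1, seed=0):
    """Simulate an epsilon-greedy bandit over arms with Bernoulli rewards.

    `true_rates` are hypothetical per-arm success probabilities. With
    probability `epsilon` we explore a random arm; otherwise we exploit
    the arm with the best observed average reward so far.
    """
    rng = random.Random(seed)
    counts = [0] * len(true_rates)    # pulls per arm
    values = [0.0] * len(true_rates)  # running mean reward per arm
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))  # explore: random arm
        else:
            # exploit: arm with highest observed mean reward
            arm = max(range(len(true_rates)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        # incremental update of the running mean for this arm
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = epsilon_greedy_bandit([0.05, 0.03, 0.10])
```

Unlike a fixed A/B test, traffic shifts toward the winning arm while the experiment is still running, so less traffic is wasted on clearly inferior variants.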
Relevant link: https://support.google.com/analytics/answer/2844870?hl=en