Neural Net Dropout
Length:
19 minutes
Released:
Oct 2, 2017
Format:
Podcast episode
Description
Neural networks are complex models with many parameters and can be prone to overfitting. There's a surprisingly simple way to guard against this: during training, randomly switch off hidden units (and the connections attached to them), a technique known as dropout. It seems counterintuitive that undermining the structural integrity of the neural net makes it robust against overfitting, but in the world of neural nets, weirdness is just how things go sometimes.
Relevant links: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
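The idea described above can be sketched in a few lines of NumPy. This is a minimal illustration of "inverted" dropout, not code from the episode or the linked paper; the function name and parameters are my own for illustration. Each hidden unit is zeroed with probability p during training, and the survivors are scaled by 1/(1-p) so that no rescaling is needed at test time.

```python
import numpy as np

def dropout(activations, p=0.5, rng=None):
    """Inverted dropout (illustrative sketch): zero each unit with
    probability p, then scale the survivors by 1/(1-p) so the expected
    activation is unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(activations.shape) >= p  # True = unit survives
    return activations * mask / (1.0 - p)

# Toy batch of hidden activations, all ones, so surviving units become 2.0:
hidden = np.ones((4, 8))
out = dropout(hidden, p=0.5, rng=np.random.default_rng(0))
# At test time the full network is used with dropout turned off; the
# inverted scaling above means no extra correction is required.
```

With p=0.5 and all-ones input, every output entry is either 0.0 (dropped) or 2.0 (survived and rescaled), which makes the averaging effect easy to see.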