
Pre-training language models for natural language processing problems

From Linear Digressions

Length: 28 minutes
Released: Jan 14, 2019
Format: Podcast episode

Description

When you build a model for natural language processing (NLP), such as a recurrent neural network, it helps a ton if you're not starting from zero. In other words, if you can draw upon other datasets to build your understanding of word meanings, and then use your training dataset just for subject-specific refinements, you'll get farther than using your training dataset for everything. This idea of starting with pre-trained resources has an analogue in computer vision, where ImageNet initializations for the first few layers of a CNN have become the new standard. A similar progression is underway in NLP, where simple(r) embeddings like word2vec are giving way to more advanced pre-training methods that aim to capture a more sophisticated understanding of word meanings, contexts, language structure, and more.
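To make "not starting from zero" concrete, here's a minimal sketch of the approach, assuming PyTorch and gensim are available; the pretrained vector set ("word2vec-google-news-300") and the layer sizes are illustrative choices, not details from the episode.

```python
# Illustrative sketch: initialize an RNN's embedding layer from pretrained
# word2vec vectors, so only the task-specific layers start from scratch.
import torch
import torch.nn as nn
import gensim.downloader as api

# Draw on an outside resource: word2vec vectors pretrained on Google News.
vectors = api.load("word2vec-google-news-300")  # gensim KeyedVectors

# Initialize the embedding layer from those vectors instead of randomly.
# freeze=False lets the task data make "subject-specific refinements".
weights = torch.tensor(vectors.vectors, dtype=torch.float32)
embedding = nn.Embedding.from_pretrained(weights, freeze=False)

# Only the layers on top of the pretrained embeddings start from zero.
rnn = nn.LSTM(input_size=weights.shape[1], hidden_size=64, batch_first=True)
classifier = nn.Linear(64, 2)  # e.g., a binary classification task

def forward(token_ids):
    # token_ids: LongTensor (batch, seq_len), indexed via vectors.key_to_index
    embedded = embedding(token_ids)   # pretrained word meanings
    _, (hidden, _) = rnn(embedded)    # task-specific encoding
    return classifier(hidden[-1])     # task-specific prediction
```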

Relevant links:
https://thegradient.pub/nlp-imagenet/
