NLP Demystified 12: Capturing Word Meaning with Embeddings
Future Mojo

Published on Jul 11, 2022

Course playlist: Natural Language Processing Demystified

We'll learn how to vectorize words so that words with similar meanings end up with nearby vectors (known as word "embeddings"). This was a breakthrough in NLP, boosting performance on a variety of problems while addressing the shortcomings of previous approaches. We'll look at how to create these word embeddings and how to use them in our models.

Colab notebook: https://colab.research.google.com/git...
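
As a minimal, standalone illustration of the train-and-query workflow covered in the demo (the notebook's actual code may differ; this sketch assumes the gensim library and a toy corpus):

```python
# Minimal Word2Vec sketch using gensim (assumed library; the course
# notebook may use something different).
from gensim.models import Word2Vec

# Toy corpus: a list of tokenized sentences.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "pets"],
]

# sg=1 selects skip-gram; negative=5 enables negative sampling (SGNS).
model = Word2Vec(
    corpus,
    vector_size=50,   # embedding dimensionality
    window=2,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    sg=1,
    negative=5,
)

# Look up a word's embedding and its nearest neighbors.
vector = model.wv["cat"]
print(model.wv.most_similar("cat", topn=3))
```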

Timestamps
00:00:00 Word Vectors
00:00:37 One-Hot Encoding and its shortcomings
00:02:07 What embeddings are and why they're useful
00:05:12 Similar words share similar contexts
00:06:15 Word2Vec, a way to automatically create word embeddings
00:08:08 Skip-Gram With Negative Sampling (SGNS) (objective sketched after the timestamps)
00:17:11 Three ways to use word vectors in models
00:18:48 DEMO: Training and using word vectors
00:41:29 The weaknesses of static word embeddings
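
For reference, here's the standard SGNS objective covered at 00:08:08 (this is the formulation from Mikolov et al., 2013; the notation is not taken from the video). For a target word w with vector v_w, an observed context word c with vector v_c, and k negative context words c_1, ..., c_k sampled from a noise distribution, training maximizes:

```latex
\log \sigma(\mathbf{v}_c \cdot \mathbf{v}_w)
  + \sum_{i=1}^{k} \log \sigma(-\mathbf{v}_{c_i} \cdot \mathbf{v}_w)
```

where sigma is the sigmoid function, so observed (word, context) pairs are pushed together in vector space and negative pairs are pushed apart.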

This video is part of Natural Language Processing Demystified, a free, accessible course on NLP.

Visit https://www.nlpdemystified.org/ to learn more.
