LLMs | Word Representation: GloVe | Lec 4.2
LCS2

Premiered Aug 1, 2024

tl;dr: This lecture explores how the GloVe model combines count-based and prediction-based techniques for learning word representations, a significant advance in NLP.

🎓 Lecturer: Tanmoy Chakraborty [https://tanmoychak.com]
🔗 Get the Slides Here: http://lcs2.in/llm2401
📚 Suggested Readings:
GloVe: Global Vectors for Word Representation [https://aclanthology.org/D14-1162.pdf]
📚 Optional Readings on Bias Captured by Word Embeddings:
1. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings [https://papers.nips.cc/paper_files/pa...]
2. Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change [https://aclanthology.org/P16-1141.pdf]
3. Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes [https://arxiv.org/pdf/1711.08412]


This lecture presents the approach of combining count-based and prediction-based methods for word representation through the GloVe (Global Vectors) model, which provides a robust framework for learning word embeddings. Well suited for students and researchers aiming to deepen their understanding of semantic representation in NLP, the session bridges the gap between theoretical concepts and practical applications.
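As background for the lecture, GloVe fits word vectors to the logarithm of word-word co-occurrence counts via a weighted least-squares objective, J = Σᵢⱼ f(Xᵢⱼ)(wᵢᵀw̃ⱼ + bᵢ + b̃ⱼ − log Xᵢⱼ)². The NumPy sketch below illustrates that objective and a plain gradient-descent loop on a random toy co-occurrence matrix; the matrix, dimensions, and learning rate are illustrative assumptions, not the paper's training setup (the paper uses AdaGrad over sampled nonzero entries).

```python
import numpy as np

def glove_loss_and_grads(W, Wt, b, bt, X, x_max=100.0, alpha=0.75):
    """GloVe weighted least-squares loss and gradients over a dense
    co-occurrence matrix X (toy version; real GloVe iterates sparse pairs)."""
    # f(x) down-weights rare pairs and caps the influence of frequent ones
    f = np.where(X < x_max, (X / x_max) ** alpha, 1.0)
    mask = X > 0                       # only observed co-occurrences contribute
    logX = np.log(np.where(mask, X, 1.0))
    # residual r_ij = w_i . w~_j + b_i + b~_j - log X_ij (zero where X_ij = 0)
    r = (W @ Wt.T + b[:, None] + bt[None, :] - logX) * mask
    loss = np.sum(f * r ** 2)
    fr = 2.0 * f * r                   # dJ/dr, entrywise
    return loss, fr @ Wt, fr.T @ W, fr.sum(axis=1), fr.sum(axis=0)

rng = np.random.default_rng(0)
V, d = 6, 5                            # toy vocabulary size and embedding dim
X = rng.integers(0, 20, size=(V, V)).astype(float)   # fake co-occurrence counts
W = rng.normal(scale=0.1, size=(V, d))               # word vectors
Wt = rng.normal(scale=0.1, size=(V, d))              # context vectors
b, bt = np.zeros(V), np.zeros(V)                     # word / context biases

lr, losses = 0.02, []
for _ in range(200):                   # plain gradient descent on J
    loss, gW, gWt, gb, gbt = glove_loss_and_grads(W, Wt, b, bt, X)
    W -= lr * gW; Wt -= lr * gWt; b -= lr * gb; bt -= lr * gbt
    losses.append(loss)
```

After training, `W + Wt` (the sum the paper recommends) would serve as the final embeddings; the loop's only purpose here is to show the loss shrinking as the factorization fits log-counts.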
