Latent Space Visualisation: PCA, t-SNE, UMAP | Deep Learning Animated
Deepia

Published on Aug 5, 2024

In this video, you will learn about three widely used methods for dimensionality reduction: PCA, t-SNE, and UMAP. They are especially useful when you want to visualise the latent space of an autoencoder.
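
To give a rough idea of how these methods are applied in practice, here is a minimal sketch (not from the video), assuming scikit-learn and umap-learn are installed and that "latents" is a placeholder for your encoder's outputs:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap

# Placeholder latent vectors, e.g. the output of an autoencoder's encoder
latents = np.random.rand(1000, 32)

# Project the same latent vectors to 2-D with each of the three methods
pca_2d = PCA(n_components=2).fit_transform(latents)
tsne_2d = TSNE(n_components=2, perplexity=30).fit_transform(latents)
umap_2d = umap.UMAP(n_components=2, n_neighbors=15).fit_transform(latents)

Each result is an (n_samples, 2) array that can be scatter-plotted to inspect the structure of the latent space.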

If you want to learn more about these techniques, here are some key papers:
- UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction https://arxiv.org/abs/1802.03426
- Stochastic Neighbor Embedding https://papers.nips.cc/paper_files/pa...
- Visualizing Data using t-SNE https://www.jmlr.org/papers/volume9/v...

And if you want to learn about even more recent techniques such as TriMap and PaCMAP, here are the papers:
- TriMap: Large-scale Dimensionality Reduction Using Triplets https://arxiv.org/abs/1910.00204
- PaCMAP: Understanding How Dimension Reduction Tools Work: An Empirical Approach to Deciphering t-SNE, UMAP, TriMAP, and PaCMAP for Data Visualization https://arxiv.org/abs/2012.04456

Chapters:
00:36 PCA
05:15 t-SNE
13:30 UMAP
18:02 Conclusion

This video features animations created with Manim, inspired by Grant Sanderson's work at @3blue1brown. Here is the code that I used to make this video: https://github.com/ytdeepia/Latent-Sp...

If you enjoyed the content, please like, comment, and subscribe to support the channel!


#DeepLearning #PCA #ArtificialIntelligence #tsne #DataScience #LatentSpace #Manim #Tutorial #machinelearning #education #somepi
