Hello friend,
This is Satya Mallick from LearnOpenCV.com.
How do we visualize a high-dimensional space? We can't. Such is the misery of our 3D existence! If you think very smart people can surely visualize higher-dimensional spaces, let me share a quote from Dr. Geoffrey Hinton, the godfather of Deep Learning.
Fortunately, the situation is not hopeless.
We can use dimensionality reduction techniques to project points from a high-dimensional space to a lower-dimensional one in such a way that the distances between points are approximately preserved.
In other words, if two points were far apart in the high-dimensional space, we can construct a new low-dimensional space ( e.g. a 2D plane ) where the projections of those points are also far apart.
Conversely, if the points were close together in the high-dimensional space, they will also be close together in the lower-dimensional space.
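To make this concrete, here is a minimal sketch (not the code that accompanies the post) of the idea. It assumes scikit-learn's TSNE is available, and the synthetic data and parameters are purely illustrative: two clusters that are far apart in 50 dimensions should remain clearly separated after projection to 2D.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Two well-separated clusters in a 50-dimensional space.
cluster_a = rng.normal(loc=0.0, scale=1.0, size=(100, 50))
cluster_b = rng.normal(loc=10.0, scale=1.0, size=(100, 50))
points = np.vstack([cluster_a, cluster_b])

# Project to 2D: points that were close in 50D should stay close,
# and the two clusters should remain clearly separated.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(points)
print(embedding.shape)  # (200, 2)
```

Plotting the two columns of `embedding` as a scatter plot should show the two groups sitting far apart in the plane, mirroring their separation in the original 50 dimensions.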
In today's post, we will learn how a dimensionality reduction algorithm called t-Distributed Stochastic Neighbor Embedding (t-SNE) works and how to use it to visualize features of a neural network.
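As a rough sketch of the general recipe (this is not the accompanying GitHub code; the dataset, the tiny architecture, and the use of scikit-learn and matplotlib here are assumptions made only for illustration), one can take the activations of an intermediate layer as the feature vector for each image, embed those vectors in 2D with t-SNE, and scatter-plot them colored by class label.

```python
import tensorflow as tf
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# A small labelled dataset (MNIST is used here purely as an example).
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0

# A tiny CNN; the 64-dimensional "features" layer is what we will visualize.
inputs = tf.keras.Input(shape=(28, 28, 1))
h = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
h = tf.keras.layers.MaxPooling2D()(h)
h = tf.keras.layers.Flatten()(h)
feature_layer = tf.keras.layers.Dense(64, activation="relu", name="features")(h)
outputs = tf.keras.layers.Dense(10, activation="softmax")(feature_layer)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)

# A second model that maps an image to its intermediate feature vector.
feature_model = tf.keras.Model(inputs, feature_layer)
features = feature_model.predict(x_train[:2000], verbose=0)

# Embed the 64-D feature vectors in 2D with t-SNE and color by digit label.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
plt.scatter(embedding[:, 0], embedding[:, 1], c=y_train[:2000], s=4, cmap="tab10")
plt.title("t-SNE of penultimate-layer features (sketch)")
plt.show()
```

If the network has learned useful features, points belonging to the same class should form visible clusters in the 2D plot.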
Without further ado, let's jump into the post.
t-SNE for Feature Visualization
Accompanying TensorFlow code for the blog post can be found on our GitHub. Your encouragement is the driving force behind our resources/blog. Do give us a star on GitHub.
Download Code (GitHub)
Cheers!
Satya
Courses / YouTube / Facebook / LinkedIn / Twitter / Instagram