This is an implementation of the Word2Vec model using Gensim, trained on a Game of Thrones corpus.
Word2vec is a popular Natural Language Processing technique that uses a shallow neural network to learn vector representations of words, called "word embeddings", from a text corpus.
After training the Word2Vec model, we use t-Distributed Stochastic Neighbor Embedding (t-SNE) from scikit-learn to visualize the learned embedding vectors.
Here, the Word2Vec model is trained on a Game of Thrones corpus consisting of the five books in the series. Each book has been preprocessed to keep only the tokens, with all stopwords removed.
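The repository does not show the exact preprocessing code, but the tokenize-and-drop-stopwords step could be sketched like this (the stopword list here is a small illustrative subset; a real run would likely use a fuller list such as NLTK's English stopwords):

```python
import re

# Illustrative stopword subset; the actual preprocessing may use a fuller list.
STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "was", "is", "it", "his", "her"}

def preprocess(text):
    """Lowercase the text, keep only alphabetic tokens, and drop stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("Ned Stark was the Lord of Winterfell"))
# ['ned', 'stark', 'lord', 'winterfell']
```

Applied to each book, this yields the lists of tokens that the Word2Vec model is trained on.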
This Word2Vec model is implemented using the Gensim package.
pip install --upgrade gensim
Please check the requirements.txt file for all the packages you need to install. Most of them can be installed with the pip package manager.
The model is trained 5 times for 50 epochs each, and the sentences in the corpus are shuffled before each run. After training the model, let's look at some of the results.
model.wv.most_similar('stark')
model.wv.most_similar('aerys')
We visualize the learned embeddings using t-SNE, a data-visualization technique that reduces the dimensionality of data to 2 or 3 dimensions so it can be plotted easily.
Let's visualize the entire dataset
Let's visualize a particular bag of words
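A minimal sketch of the t-SNE plotting step is shown below. It uses random vectors as stand-ins for the trained embeddings (in the real script, `vectors` would come from `model.wv`), and the word list and perplexity value are illustrative assumptions:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Stand-in embeddings; in the real script these come from model.wv.
rng = np.random.default_rng(0)
words = ["stark", "lannister", "targaryen", "winterfell", "aerys", "jon"]
vectors = rng.normal(size=(len(words), 100))

# Reduce to 2D; perplexity must be smaller than the number of points.
tsne = TSNE(n_components=2, perplexity=3, random_state=0)
coords = tsne.fit_transform(vectors)

# Scatter-plot the 2D points and label each one with its word.
plt.figure(figsize=(6, 6))
plt.scatter(coords[:, 0], coords[:, 1])
for word, (x, y) in zip(words, coords):
    plt.annotate(word, (x, y))
plt.savefig("tsne_embeddings.png")
```

To visualize only a particular bag of words, restrict `words` (and the corresponding rows of `vectors`) to that subset before fitting t-SNE.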
Word2Vec can be an important step in deep learning pipelines, where text is represented as word embeddings so that deep neural networks can process it more effectively.
If you run into issues with the code or have questions about it, file an issue in this repository and I will get back to you soon.