GloVe was state of the art in 2014 and is now regarded as a classic. Here are a few of my favorite resources on the pre-trained vector method (a short loading example follows the list):
An overview of word embeddings and their connection to distributional semantic models [1]
Global Vectors for Word Representation, presented by Richard Socher of Stanford University [2]
Paper Dissected: “Glove: Global Vectors for Word Representation” Explained [3]
You can find my own detailed explanation in section 7 of this post [4]
Reference for the implementation:
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of EMNLP 2014 [5]
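If you want to experiment with the pre-trained vectors yourself, here is a minimal sketch of loading a GloVe text file into a dictionary and comparing two words by cosine similarity. It assumes you have downloaded and unzipped glove.6B.zip from the Stanford NLP site; the path glove.6B.100d.txt and the load_glove helper name are illustrative choices, not something prescribed by the resources above.

```python
import numpy as np


def load_glove(path):
    """Parse a GloVe text file into a {word: vector} dict.

    Each line holds a word followed by its vector components,
    all separated by single spaces.
    """
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            # First token is the word; the rest are float components.
            embeddings[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return embeddings


def cosine_similarity(a, b):
    """Cosine of the angle between two vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Path assumes the 100-dimensional vectors from the glove.6B download;
# adjust it to wherever you saved the file.
vectors = load_glove("glove.6B.100d.txt")
print(cosine_similarity(vectors["king"], vectors["queen"]))
```

Related words such as "king" and "queen" should score noticeably higher than unrelated pairs, which is the basic property the pre-trained vector method relies on.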