In the last model, the embedding matrix was initialized randomly. What if we could use pre-trained word embeddings to initialize it instead?
Let’s take an example: imagine that you have the word pizza in your corpus. Following the previous architecture, you would initialize it to a 300-dimensional vector of random float values. This is perfectly fine: the embedding will adjust and evolve throughout training. However, instead of picking a random vector for pizza, you could use an embedding for this word that has already been learned by another model on a very large corpus. This is a special kind of transfer learning.
Using the knowledge from an external embedding can improve the accuracy of your RNN because it brings in new lexical and semantic information about the words, information that has been learned and distilled on a very large corpus of data.
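As a rough sketch of what this looks like in Keras (the layer configuration, the vocab_size value, and the embedding_matrix placeholder below are illustrative assumptions, not part of the original setup), the idea is to use the pre-trained vectors as the initial weights of the Embedding layer:

```python
import numpy as np
from tensorflow.keras.layers import Embedding
from tensorflow.keras.initializers import Constant

# Hypothetical sizes: vocab_size comes from your tokenizer, embedding_dim
# from the pre-trained vectors (300 for the GloVe file described below).
vocab_size = 20000
embedding_dim = 300

# Placeholder matrix; in practice row i would hold the pre-trained vector
# of the word with index i (see the loading sketch further down).
embedding_matrix = np.zeros((vocab_size, embedding_dim), dtype="float32")

embedding_layer = Embedding(
    input_dim=vocab_size,
    output_dim=embedding_dim,
    embeddings_initializer=Constant(embedding_matrix),  # start from pre-trained vectors
    trainable=True,  # let the vectors keep adjusting and evolving during training
)
```

Setting trainable=False instead would freeze the pre-trained vectors; whether to fine-tune them or not is a design choice that usually depends on how much training data you have.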
The pre-trained embedding we’ll be using is GloVe.
From the official documentation: “GloVe is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space.”
The GloVe embeddings I’ll be using are trained on a very large common internet crawl that includes:
- 840 billion tokens,
- a vocabulary of 2.2 million words.
The zipped file is a 2.03 GB download. Beware: this file cannot easily be loaded into memory on a standard laptop.
The dimension of GloVe embeddings is 300.
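As a minimal sketch of how the unzipped file can be turned into an embedding matrix (the file path, the EMBEDDING_DIM constant, and the toy word_index mapping are assumptions for illustration; in practice word_index would come from your tokenizer):

```python
import numpy as np

EMBEDDING_DIM = 300  # dimension of the glove.840B.300d vectors

# Parse the unzipped GloVe file into a {word: vector} dictionary.
embeddings_index = {}
with open("glove.840B.300d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        # A few tokens in the 840B file contain spaces, so take the last
        # 300 fields as the vector and rejoin the rest as the word.
        word = " ".join(parts[:-EMBEDDING_DIM])
        vector = np.asarray(parts[-EMBEDDING_DIM:], dtype="float32")
        embeddings_index[word] = vector

# Build the embedding matrix for our own vocabulary: row i holds the GloVe
# vector of the word with index i, or stays at zero if the word is unknown.
word_index = {"pizza": 1, "movie": 2}  # placeholder; use your tokenizer's word -> index mapping
embedding_matrix = np.zeros((len(word_index) + 1, EMBEDDING_DIM), dtype="float32")
for word, i in word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector
```

The resulting embedding_matrix is what would be passed to the Embedding layer shown earlier.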