MLPs (Multi-Layer Perceptrons) are great for many classification and regression tasks, but they struggle with classification and regression on sequences. In this Python deep learning tutorial, a GRU (Gated Recurrent Unit) is implemented in TensorFlow, one of the many Python deep learning libraries.
By the way, another great Machine Learning read is this article on fraud detection with Machine Learning. If you are interested in another article on RNNs, you should definitely read this article on the Elman RNN.
What is a GRU or RNN?
A sequence is an ordered collection of items, and sequences appear everywhere. In the stock market, the closing prices form a sequence, ordered by time. In sentences, words follow a certain order, so sentences can be viewed as sequences as well. A gigantic MLP could learn parameters over whole sequences, but this would be infeasible in terms of computation time. The family of Recurrent Neural Networks (RNNs) solves this by introducing hidden states which depend not only on the current input, but also on the previous hidden state. GRUs are among the simplest gated RNNs. Vanilla RNNs are even simpler, but they suffer from the Vanishing Gradient problem.
Mathematical GRU Model
The key idea of GRUs is that the gradient chain does not vanish because of the length of the sequence. This is achieved by allowing the model to pass values (almost) unchanged through the cells. The model is defined as follows [1]:
$latex z_t = \sigma(W^{(z)} x_t + U^{(z)} h_{t-1} + b^{(z)})$
$latex r_t = \sigma(W^{(r)} x_t + U^{(r)} h_{t-1} + b^{(r)})$
$latex \tilde{h}_t = \tanh(W^{(h)} x_t + U^{(h)} (h_{t-1} \circ r_t) + b^{(h)})$
$latex h_t = (1 - z_t) \circ h_{t-1} + z_t \circ \tilde{h}_t$
I had a hard time understanding this model, but it turns out that it is not too hard to understand. In the definitions, $latex \circ$ denotes the Hadamard product, which is just a fancier name for element-wise multiplication. $latex \sigma(x)$ is the Sigmoid function, defined as $latex \sigma(x) = \frac{1}{1 + e^{-x}}$. The Sigmoid function ($latex \sigma$) squashes values into the range between $latex 0$ and $latex 1$, and the Hyperbolic Tangent function ($latex \tanh$) squashes values into the range between $latex -1$ and $latex 1$.
$latex z_t$ functions as a filter for the previous state. If $latex z_t$ is low (near $latex 0$), then a lot of the previous state is reused and the input at the current step ($latex x_t$) has little influence on the output. If $latex z_t$ is high (near $latex 1$), then the output at the current step is influenced a lot by the current input ($latex x_t$), but it is influenced only a little by the previous state ($latex h_{t-1}$).
$latex r_t$ functions as a reset gate (sometimes called a forget gate). It allows the cell to forget certain parts of the previous state when computing the candidate state $latex \tilde{h}_t$.
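To make these equations concrete, here is a small NumPy sketch of a single GRU step. It is only a direct transcription of the formulas above (the parameter names W_z, U_z, b_z, and so on are chosen for illustration), not the TensorFlow code used later in this tutorial:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_z, U_z, b_z, W_r, U_r, b_r, W_h, U_h, b_h):
    """One GRU step, following the equations above."""
    z_t = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)                 # update gate
    r_t = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)                 # reset gate
    h_tilde = np.tanh(W_h @ x_t + U_h @ (h_prev * r_t) + b_h)     # candidate state
    return (1 - z_t) * h_prev + z_t * h_tilde                     # new hidden state h_t
```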
The Task: Adding Numbers
In the code example, a simple task is used for testing the GRU: given two numbers $latex a$ and $latex b$, compute their sum $latex c = a + b$. The numbers are first converted to reversed bitstrings. The reversal mirrors what most people do when adding two numbers by hand: you start at the rightmost digit, and whenever the sum of two digits is too large for a single digit, you carry (memorize) a $latex 1$ to the next position. The model is capable of learning what to carry. As an example, consider $latex a = 3$ and $latex b = 1$. As bitstrings of length $latex 3$, we have $latex a = [0, 1, 1]$ and $latex b = [0, 0, 1]$. In reversed bitstring representation, we have $latex a = [1, 1, 0]$ and $latex b = [1, 0, 0]$. The sum of these numbers is $latex c = [0, 0, 1]$ in reversed bitstring representation, which is $latex [1, 0, 0]$ in normal bitstring representation and equivalent to $latex 4$. All these steps are performed automatically by the code.
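As an illustration, converting an integer to a reversed bitstring and back could look like this (a small sketch; the helper names are made up for this example and are not necessarily the ones used in the actual code):

```python
def to_reversed_bits(n, width):
    """Convert an integer to a reversed bitstring (least significant bit first)."""
    return [(n >> i) & 1 for i in range(width)]

def from_reversed_bits(bits):
    """Convert a reversed bitstring back to an integer."""
    return sum(bit << i for i, bit in enumerate(bits))

# The example from the text: a = 3 and b = 1 with bitstrings of length 3.
assert to_reversed_bits(3, 3) == [1, 1, 0]
assert to_reversed_bits(1, 3) == [1, 0, 0]
assert from_reversed_bits([0, 0, 1]) == 4
```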
The Code
The code is self-explanatory. If you have any questions, feel free to ask! The code can also be found on GitHub. Sharing (or Starring) is Caring :-)!
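For reference, below is a minimal sketch of how such a GRU-based adder could be set up. It uses the high-level tf.keras GRU layer rather than a hand-written cell, so it is only an illustration of the overall setup (a sequence of bit pairs as input, one output bit per step), not the original ~30-line implementation; hyperparameters such as the number of bits and hidden units are chosen arbitrarily.

```python
import numpy as np
import tensorflow as tf

NUM_BITS = 16       # length of the reversed bitstrings (chosen arbitrarily)
HIDDEN_UNITS = 16   # size of the GRU state (chosen arbitrarily)

def make_batch(batch_size):
    """Generate random addition problems as reversed-bitstring sequences."""
    max_val = 2 ** (NUM_BITS - 1)  # keep a + b within NUM_BITS bits
    a = np.random.randint(0, max_val, size=batch_size)
    b = np.random.randint(0, max_val, size=batch_size)
    c = a + b
    # Inputs: the pair of bits (a_t, b_t) at each time step; targets: the bit c_t.
    x = np.zeros((batch_size, NUM_BITS, 2), dtype=np.float32)
    y = np.zeros((batch_size, NUM_BITS, 1), dtype=np.float32)
    for i in range(NUM_BITS):
        x[:, i, 0] = (a >> i) & 1
        x[:, i, 1] = (b >> i) & 1
        y[:, i, 0] = (c >> i) & 1
    return x, y

# A GRU reads the bit pairs step by step and predicts each output bit.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_BITS, 2)),
    tf.keras.layers.GRU(HIDDEN_UNITS, return_sequences=True),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x_train, y_train = make_batch(10000)
model.fit(x_train, y_train, batch_size=64, epochs=5)
```

After a few epochs of training on such a setup, the per-bit accuracy should approach 1, which matches the behaviour described in the Results section below.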
Results
After ~2000 iterations, the model has fully learned how to add two integer numbers!
Conclusion (TL;DR)
This Python deep learning tutorial showed how to implement a GRU in TensorFlow. The implementation of the GRU takes only ~30 lines of code! There are some issues with respect to parallelization, but these can be resolved by using the TensorFlow API efficiently. The resulting model is capable of learning how to add two integer numbers (of any length).
References
[1] Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.