Hands-on: Creating Neural Networks using Chainer

This tutorial is a practical guide to creating Neural Networks in Chainer. The focus is not on the architecture of the networks (more about Neural Network architectures can be found in this post), but on building a pipeline. We will take a simple classification problem as an example, create a pipeline for training and testing the network, and show how to evaluate the model.

Why Chainer?

Chainer is a Python neural network library. The framework is not limited to neural networks: it can be used to build almost any kind of mathematical model. For example, it is also possible to implement models based on reinforcement learning (using the ChainerRL library). Chainer supports backpropagation out-of-the-box. It is called Chainer because it allows you to chain functions and models together, and every link in the chain knows how to backpropagate through itself.
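
To illustrate the chaining idea, here is a minimal sketch (the values and the computation are made up for this example): a few functions are chained on a Variable, and calling backward() propagates gradients back through the whole chain.

import numpy as np
import chainer
import chainer.functions as F

# Chain a few functions on a Variable; every step records how to backpropagate.
x = chainer.Variable(np.array([[0.5, -1.2, 3.0]], dtype=np.float32))
y = F.sum(F.sigmoid(x) * 2.0)  # an arbitrary chained computation
y.backward()                   # gradients flow back through the chain
print(x.grad)                  # dy/dx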

The main difference with frameworks like TensorFlow is that Chainer builds the computational graph on the fly while your code runs ("define-by-run"). In a framework like TensorFlow, you first have to specify a computational graph and then run data through that graph ("define-and-run"). That approach is computationally efficient, but it is less flexible for programming constructs like loops and conditions. This is the main reason why I am a big fan of Chainer: it allows you to easily create and test neural network architectures.
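
Because the graph is defined by running your Python code, ordinary control flow just works inside the forward pass. The sketch below is a made-up toy network (not the one used later in this post): its depth depends on an argument and it uses a plain Python condition, which would require special graph operations in a define-and-run framework.

import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

class DynamicNet(chainer.Chain):
    """A toy network whose forward pass uses plain Python control flow."""

    def __init__(self):
        super(DynamicNet, self).__init__()
        with self.init_scope():
            self.embed = L.Linear(None, 16)  # input size is inferred at the first call
            self.recur = L.Linear(16, 16)
            self.out = L.Linear(16, 2)

    def __call__(self, x, n_steps=1):
        h = F.relu(self.embed(x))
        for _ in range(n_steps):        # an ordinary Python loop builds the graph
            h = F.relu(self.recur(h))
        if chainer.config.train:        # an ordinary Python condition
            h = F.dropout(h, ratio=0.5)
        return self.out(h)

net = DynamicNet()
x = np.random.rand(4, 10).astype(np.float32)
y = net(x, n_steps=3)                   # the graph depth depends on the argument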

Loading Data

For my task, I had many files. Loading all of them at once would take a long time, but luckily Chainer has something called a dataset transformation, which allows me to load a file only when it is requested. What I did was the following: I created a list of all the files containing the train and test data. Then, I created two dataset iterators: one for the training data and one for the testing data. This is my data file loader; it opens a file, decodes the JSON and applies a preprocessing method to the data:

import json


class JSONFileLoader:
    """The file loader class used for a dataset transformation such that it loads JSON data when requested."""

    def __init__(self, preprocessor=None):
        """Initialize the file loader.

        Parameters
        ----------
        preprocessor : method, optional
            Method which is applied to the data when loaded (default: None).
        """
        self.preprocessor = preprocessor

    def load_file(self, path):
        """Loads (open, read and JSON decode) a file.

        Parameters
        ----------
        path : str
            The path of the file.

        Returns
        -------
        dict
            The preprocessed version of the data found in the file.
        """
        with open(path, 'r') as input_handle:
            data = json.load(input_handle)
            if self.preprocessor is None:
                return data
            else:
                return self.preprocessor(data)

Then, I used this class as a transformation (such that it is applied when the data is needed) and created the iterators:

from chainer.datasets import TransformDataset, split_dataset
from chainer.iterators import SerialIterator

# Create file loaders and transformations
file_loader = JSONFileLoader(preprocessor)
dataset = TransformDataset(files, file_loader.load_file)

# Split the dataset and initialize the dataset iterators
test_set, train_set = split_dataset(dataset, args.test_size)
train_iter = SerialIterator(train_set, batch_size=1, repeat=True, shuffle=True)
test_iter = SerialIterator(test_set[:args.test_size], batch_size=args.test_size, repeat=False, shuffle=False)

Since these are iterators, you can simply use a for loop to go through the data.
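
For example, a quick sanity check of the test iterator could look like this (just a sketch; each batch is a list of whatever the preprocessor returned):

# Iterate over the test set once (repeat=False makes the iterator stop after one epoch)
for batch in test_iter:
    print(len(batch), type(batch[0]))

# Reset the iterator afterwards so it can be reused, e.g. by the Evaluator extension below
test_iter.reset()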

Training/Testing pipeline

This is my resulting pipeline:

import chainer
from chainer import training
from chainer.training import extensions

# Create the model and the loss model that wraps it
# (the details of the network are not the topic of this post)
model, loss_model = create_my_network()

# Setup the optimizer
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)

# Create the updater and trainer
updater = training.StandardUpdater(train_iter, optimizer=optimizer, converter=lambda *arguments: arguments[0],
 loss_func=loss_model.__call__, device=-1)
trainer = training.Trainer(updater, (args.epochs, 'epoch'), out='result')
trainer.extend(extensions.Evaluator(test_iter, loss_model, converter=lambda *arguments: arguments[0]),
 trigger=(args.validation_iterations, 'iteration'))
trainer.extend(extensions.LogReport(trigger=(args.log_iterations, 'iteration')))
trainer.extend(extensions.PrintReport(['epoch', 'iteration', 'loss', 'accuracy', 'precision', 'recall', 'f1']))
trainer.extend(extensions.ProgressBar())
trainer.extend(extensions.snapshot(), trigger=(args.snapshot_iterations, 'iteration'))
trainer.extend(extensions.snapshot_object(model, 'model_iter_{.updater.iteration}'), trigger=(args.snapshot_iterations, 'iteration'))

# Resume from a specified snapshot
if args.resume:
 chainer.serializers.load_npz(args.resume, trainer)

# Run the trainer
trainer.run()

First, the model is created. I will not go into the details of the model. The trainer makes use of the updater, which updates the model using backpropagation. The trainer has an extension called Evaluator which evaluates the model on the test dataset. There are a few more extensions used here: LogReport and PrintReport are used for logging (to a file) and printing (to the standard output) respectively. Each extension has a trigger which specifies when the extension should run. A trigger is either an (n, 'iteration') tuple, meaning the extension runs every n iterations, or an (n, 'epoch') tuple, meaning it runs every n epochs. The ProgressBar extension shows a progress bar, and the snapshot extensions store snapshots of the current state: the first makes a snapshot of the trainer object and the second makes a snapshot of the model. The model snapshots are what we will use later to evaluate the model on real data.
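
The loss_model used above is a wrapper around the network that computes the loss and reports the metrics that LogReport and PrintReport pick up. Its exact shape depends on your data; as a rough illustration (not the author's actual model, and prepare_arrays is a hypothetical helper), such a wrapper could look like this:

import chainer
import chainer.functions as F


class LossModel(chainer.Chain):
    """Wraps a predictor, computes a loss and reports metrics by name."""

    def __init__(self, predictor):
        super(LossModel, self).__init__()
        with self.init_scope():
            self.predictor = predictor

    def __call__(self, batch):
        # The converter in the pipeline passes the batch through unchanged, so this
        # method receives a list of preprocessed examples; turning it into input
        # arrays and labels (prepare_arrays) depends entirely on your preprocessor.
        xs, ts = prepare_arrays(batch)
        ys = self.predictor(xs)
        loss = F.softmax_cross_entropy(ys, ts)
        chainer.report({'loss': loss, 'accuracy': F.accuracy(ys, ts)}, self)
        return loss

For a standard classification setup, chainer.links.Classifier provides this kind of wrapper out of the box.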

Executing the model

I created a Flask server which loads the model once. Loading the model from a snapshot is easy:

# Load the model_iter_2500 model
chainer.serializers.load_npz('result/model_iter_2500', model)

Then, whenever a request is sent to the server, the model is executed. This resulted in the following minimalistic web application for me:
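
The original post showed a screenshot of that application here. As a rough sketch of the idea only (the endpoint name, the input handling and the preprocess and decode_output helpers are made up, not the author's actual code), the server could look something like this:

import chainer
from flask import Flask, jsonify, request

app = Flask(__name__)

# Build the same network architecture as during training and load the snapshot once at startup
model, _ = create_my_network()
chainer.serializers.load_npz('result/model_iter_2500', model)


@app.route('/predict', methods=['POST'])  # the endpoint name is an assumption
def predict():
    data = request.get_json()
    x = preprocess(data)  # hypothetical: the same preprocessing as used for training
    with chainer.using_config('train', False), chainer.no_backprop_mode():
        y = model(x)
    return jsonify(decode_output(y))  # hypothetical: convert the network output to JSON


if __name__ == '__main__':
    app.run()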

Conclusion

In this Chainer hands-on you learned how to set up a pipeline and how to evaluate your model on real data. If you have any suggestions or comments, feel free to ask!