AWS Deep Learning AMIs now include ONNX, enabling model portability across deep learning frameworks

The AWS Deep Learning AMIs (DLAMI) for Ubuntu and Amazon Linux now come pre-installed and fully configured with the Open Neural Network Exchange (ONNX), enabling model portability across deep learning frameworks. In this blog post, we’ll introduce ONNX and demonstrate how it can be used on the DLAMI to port models across frameworks.

### What is ONNX?

ONNX is an open source library and serialization format to encode and decode deep learning models. ONNX defines the format for the neural network computational graph and an extensive list of operators used in neural network architectures. ONNX is already supported by popular deep learning frameworks such as Apache MXNet, PyTorch, Chainer, Cognitive Toolkit, TensorRT, and others. The growing support for ONNX across popular tools enables machine learning developers to move their models across tools, picking and choosing the right tool for the task at hand.
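As a quick illustration, here’s a minimal sketch of how the `onnx` Python package can load a serialized model and print its computational graph; `model.onnx` is a placeholder standing in for any ONNX file:

```python
import onnx

# "model.onnx" is a placeholder path for any serialized ONNX model
model = onnx.load('model.onnx')

# An ONNX model is a protobuf message; its graph holds the network's
# nodes (operators), inputs, outputs, and initializers (weights)
print(onnx.helper.printable_graph(model.graph))
```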

### Exporting a Chainer model to ONNX

Let’s go over the steps to export a Chainer model to an ONNX file.

We’ll start by launching an instance of the DLAMI, on either Ubuntu or Amazon Linux. If you have not done this before, review this great tutorial showing how to get started with the DLAMI.

Once we are connected to the DLAMI over SSH, let’s activate the Chainer Python 3.6 Conda environment, which comes pre-installed and configured on the DLAMI. Note that this environment is now also pre-installed and configured with ONNX and onnx-chainer, an add-on package that adds ONNX support to Chainer.

```bash
$ source activate chainer_p36
```

Next, start a Python shell and run the following commands to load VGG-16, a convolutional neural network for object recognition, and export the network to an ONNX file.

```python
import numpy as np
import onnx_chainer
from chainercv.links import VGG16

# This will download a pre-trained model and load it as a Chainer model
model = VGG16(pretrained_model='imagenet')

# Create synthetic input and use it to export the model to ONNX
x = np.zeros((1, 3, 224, 224), dtype=np.float32)
out = onnx_chainer.export(model, x, filename='vgg16.onnx')
```

With just a few lines of code, you have now exported the Chainer model to the ONNX format; the file is stored in the current directory.
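If you’d like to sanity-check the export, here’s a minimal sketch using the `onnx` package’s model checker, which raises an exception if the file doesn’t conform to the ONNX spec:

```python
import onnx

# Load the exported file and validate it against the ONNX specification
onnx_model = onnx.load('vgg16.onnx')
onnx.checker.check_model(onnx_model)
print('vgg16.onnx passed the ONNX checker')
```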

### Importing an ONNX model into MXNet

Now that we have exported our Chainer model to ONNX, let’s see how we can import this model into MXNet and run inference.

We’ll start by activating the DLAMI’s MXNet Python 3.6 Conda environment, which comes installed with ONNX and MXNet 1.2.1. MXNet 1.2 introduced the ONNX import API that we will use to import the ONNX model into MXNet.

```bash
$ source deactivate
$ source activate mxnet_p36
```

Next, start a Python shell, and run the following commands to load the ONNX model you just exported from Chainer:

```python
from mxnet.contrib import onnx as onnx_mxnet

sym, arg_params, aux_params = onnx_mxnet.import_model("vgg16.onnx")
```

That’s it! You have just loaded the ONNX model into MXNet, and now have the symbolic graph and model parameters available.
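Before running inference, here’s a quick, optional sketch to inspect what the import produced: `sym.list_inputs()` returns the data input followed by the parameter names, and `arg_params` maps parameter names to their learned weights.

```python
# The first graph input is the model's data input; the rest are parameters
print(sym.list_inputs()[0])

# arg_params is a dict of parameter name -> NDArray of learned weights
print('number of parameter arrays:', len(arg_params))
```

Let’s continue to run inference with our newly loaded model.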

We will first download an image, along with the ImageNet class labels the model was trained on, so we can test our object recognition model:

```python
import mxnet as mx

mx.test_utils.download('https://s3.amazonaws.com/onnx-mxnet/dlami-blogpost/hare.jpg')
mx.test_utils.download('http://data.mxnet.io/models/imagenet/synset.txt')

# Read the ImageNet class labels, one per line
with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]
```

Here’s what our input image looks like:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/26/onnx-rabbit.jpg)


Next, we will load the image and pre-process it into a tensor that matches the model’s required input shape. Note that the downloaded image already matches the model’s 224×224 input size, so no resizing is needed here; for arbitrary images you would resize first:

```python
import matplotlib.pyplot as plt
import numpy as np
from mxnet import nd

# Load the image and reshape it from HWC to the NCHW layout the model expects
image = plt.imread('hare.jpg')
image = np.expand_dims(np.transpose(image, (2, 0, 1)), axis=0).astype(np.float32)
input_tensor = nd.array(image)
```

We are now ready to initialize and bind our MXNet module:

```python
# The input name is the model's input node name, defined by the exporting library
input_name = sym.list_inputs()[0]
data_shapes = [(input_name, input_tensor.shape)]

# Initialize and bind the Module
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), data_names=[input_name], label_names=None)
mod.bind(for_training=False, data_shapes=data_shapes, label_shapes=None)
mod.set_params(arg_params=arg_params, aux_params=aux_params)
```

And lastly, run inference and print the top probability and class:

```python
# Run a forward pass with the pre-processed image
mod.forward(mx.io.DataBatch([input_tensor]))

# Extract the predicted class probabilities
probabilities = mod.get_outputs()[0].asnumpy()[0]
max_probability = np.max(probabilities)
max_class = labels[np.argmax(probabilities)]

print('Highest probability=%f, class=%s' % (max_probability, max_class))
```

The output shows the model prediction. The model predicts the image to be a hare with 97.9% probability!

```
Highest probability=0.979566, class=n02326432 hare
```

### Conclusion and getting started with the Deep Learning AMIs

In this blog post, you learned how you can use ONNX on the DLAMI to port models across frameworks. With the portability enabled by ONNX, you can pick and choose the right tool for the task at hand, be it training a new model, fine-tuning a pre-trained model, running inference, or serving models.

You can get started quickly with the AWS Deep Learning AMIs by using our getting started tutorial. Check out the DLAMI ONNX tutorials, as well as our developer guide for more tutorials, resources, and release notes. The latest AMIs are now available on the AWS Marketplace. You can also subscribe to our discussion forum to get new launch announcements and post your questions.


 

### About the Authors

Anirudh Acharya is a Software Development Engineer for AWS Deep Learning. He works on building deep learning systems and toolkits to democratize AI. In his spare time he enjoys reading and biking.

 

Hagay Lupesko is an Engineering Leader for AWS Deep Learning. He focuses on building deep learning systems that enable developers and scientists to build intelligent applications. In his spare time, he enjoys reading, hiking, and spending time with his family.