We are very excited to announce that you can deploy your computer vision model trained using TensorFlow (version 1.4) to AWS DeepLens. Head pose detection is part of the AWS DeepLens sample projects. In this blog post, we will show you how to train a model from scratch on an Amazon SageMaker P2 training instance. We will use a ResNet-50 model and save the trained model in the “frozen” protobuf format. Although TensorFlow offers a variety of formats for saving a model graph (such as checkpoint files (.ckpt-XXX.meta, .ckpt-XXX.index, .ckpt-XXX.data-00000-of-00001), .pbtxt, optimized protobuf, and frozen protobuf), the AWS DeepLens model optimizer supports only the frozen protobuf format.
We use the same Prima head pose dataset that we previously described in Deploy Gluon models to AWS DeepLens using a simple Python API.
Create an Amazon S3 bucket
Just like we did in our last blog post, we are first going to create an Amazon S3 bucket using the Amazon S3 console. In this example, we name the S3 bucket “deeplens-sagemaker-0001” and host it in the N. Virginia (US East 1) AWS Region. (If you want to deploy your trained model artifacts directly to AWS DeepLens, the bucket must be in the N. Virginia (US East 1) Region.)
Inside the bucket, we have a folder named “headpose.” Inside the “headpose” folder, we have four subfolders named “TFartifacts,” “customTFcodes,” “datasets,” and “testIMs.”
You are going to host the head pose dataset from the previous blog post linked to earlier (HeadPoseData_trn_test_x15_py2.pkl) in the datasets folder.
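If you prefer to script this step instead of using the console, a short boto3 sketch like the following does the upload (the local file path and the bucket and folder names here are the ones assumed above, so adjust them to your own setup):

```python
import boto3

s3 = boto3.client('s3')

# Upload the pickled head pose dataset into the "datasets" folder of the bucket
s3.upload_file('HeadPoseData_trn_test_x15_py2.pkl',
               'deeplens-sagemaker-0001',
               'headpose/datasets/HeadPoseData_trn_test_x15_py2.pkl')
```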
That is it for the preparation.
Amazon SageMaker notebook
Now, let’s launch Amazon SageMaker. After you open your Amazon SageMaker notebook, upload our sample notebook, the TensorFlow ResNet model script, and the entry point Python script (tensorflow_resnet_headpose_for_deeplens.ipynb, resnet_model_headpose.py, and resnet_headpose.py, respectively). These scripts are modified from the Amazon SageMaker sample scripts.
After you add the notebook and Python scripts, there are only three steps for you to run the training.
First, specify your S3 bucket name in the sample Amazon SageMaker notebook (tensorflow_resnet_headpose_for_deeplens.ipynb). In this part, you also specify the other folders inside your S3 bucket, such as the “headpose” folder as well as the “TFartifacts” and “customTFcodes” folders underneath it.
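A minimal sketch of that cell follows; the variable names (s3_bucket, headpose_folder, and so on) are our own and are reused in later snippets, so rename them to match your notebook if needed:

```python
import sagemaker
from sagemaker import get_execution_role

# IAM role that Amazon SageMaker uses to access the S3 bucket
role = get_execution_role()

# Bucket and folder layout described above (names are assumptions)
s3_bucket = 'deeplens-sagemaker-0001'
headpose_folder = 'headpose'
dataset_folder = 'datasets'
artifacts_folder = 'TFartifacts'
customcode_folder = 'customTFcodes'

# S3 locations used for the training input and the model output
dataset_location = 's3://{}/{}/{}'.format(s3_bucket, headpose_folder, dataset_folder)
output_path = 's3://{}/{}/{}'.format(s3_bucket, headpose_folder, artifacts_folder)
```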
Second, specify the training instance type and other parameters in the TensorFlow estimator object. In this example, we use an ml.p2.xlarge instance for training.
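Here is a sketch of the estimator configuration, assuming resnet_headpose.py is the entry point and that the scripts sit in the notebook’s working directory; training_steps, evaluation_steps, and base_job_name are illustrative values:

```python
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(entry_point='resnet_headpose.py',
                       source_dir='.',               # directory holding the two Python scripts
                       role=role,
                       framework_version='1.4',      # the TensorFlow version AWS DeepLens supports
                       training_steps=25000,         # illustrative values
                       evaluation_steps=700,
                       train_instance_count=1,
                       train_instance_type='ml.p2.xlarge',
                       output_path=output_path,
                       base_job_name='deeplens-TF-headpose')
```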
AWS DeepLens currently supports TensorFlow version 1.4 (as of August 9, 2018). Amazon SageMaker helps you with version control: you simply set framework_version='1.4' on the estimator.
Finally, we run the training by calling the “.fit” method.
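For example, pointing .fit at the S3 prefix that holds the pickled dataset (dataset_location is the variable we assumed earlier):

```python
# Launch the training job; the prefix must contain HeadPoseData_trn_test_x15_py2.pkl
estimator.fit(dataset_location)
```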
After the training is over, you will find a set of trained TensorFlow model artifacts (model.tar.gz) inside an output folder under the TFartifacts folder of your S3 bucket.
Make a frozen protobuf file for AWS DeepLens
We are going to generate a frozen protobuf file from the model.tar.gz that we just made. In this tutorial, we use the TensorFlow Python API in the same Amazon SageMaker notebook.
First, we download the compressed file containing the model artifacts into the Amazon SageMaker notebook local directory.
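A boto3 and tarfile sketch of this step is shown below. The exact S3 key depends on the training job name that Amazon SageMaker assigned, so the key here is only a placeholder:

```python
import tarfile
import boto3

s3 = boto3.client('s3')

# Replace <training-job-name> with the job name Amazon SageMaker assigned to your run
artifact_key = '{}/{}/<training-job-name>/output/model.tar.gz'.format(headpose_folder, artifacts_folder)
s3.download_file(s3_bucket, artifact_key, 'model.tar.gz')

# Decompress the artifacts into the local notebook directory
with tarfile.open('model.tar.gz') as tar:
    tar.extractall(path='.')
```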
After you decompress the file, you will find three separate files, saved_model.pb, variables/variables.index, and variables/variables.data-00000-of-00001, inside the export/Servo/{*Assigned by Amazon SageMaker*} directory.
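Because the directory name is assigned by Amazon SageMaker, one way to pick it up programmatically is a small glob sketch like this (it assumes a single export under the local working directory):

```python
import glob

# Find the timestamped SavedModel directory that Amazon SageMaker created
candidates = glob.glob('export/Servo/*')
assert len(candidates) == 1, 'expected exactly one exported SavedModel directory'
model_dir = candidates[0]
print(model_dir)
```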
Here is the code to freeze the graph and save it in the frozen protobuf format.
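The sketch below shows one way to implement this with a hypothetical freeze_graph helper; the node-name arguments are discussed right after the listing:

```python
import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

def freeze_graph(model_dir, input_node_names, output_node_names, frozen_file='frozen_model.pb'):
    """Load the SavedModel, fold variables into constants, strip training-only
    nodes, optimize for inference, and serialize the result as a frozen protobuf."""
    with tf.Session(graph=tf.Graph()) as sess:
        # Import the SavedModel exported by Amazon SageMaker into a fresh graph
        tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], model_dir)

        # Fold trained variables into constants so the graph is self-contained
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names)

    # Remove nodes that are only needed during training
    frozen_graph_def = tf.graph_util.remove_training_nodes(frozen_graph_def)

    # Generate the inference graph_def
    frozen_graph_def = optimize_for_inference_lib.optimize_for_inference(
        frozen_graph_def, input_node_names, output_node_names,
        tf.float32.as_datatype_enum)

    # Serialize and save the frozen protobuf
    with tf.gfile.GFile(frozen_file, 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())
```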
We use the tf.graph_util.convert_variables_to_constants API to freeze the graph, then tf.graph_util.remove_training_nodes to remove all unnecessary nodes. Then we use optimize_for_inference_lib.optimize_for_inference to generate the inference graph_def. Finally, we serialize the graph and save it as a frozen protobuf.
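Invoking the helper then looks like this; the node names are specific to this model, as explained next:

```python
# Input and output node names for this particular ResNet-50 head pose model
input_node_names = ['Const_1']
output_node_names = ['softmax_tensor']

freeze_graph(model_dir, input_node_names, output_node_names)
```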
It should be noted that we knew the names of the input and output nodes in advance (namely, ‘Const_1’ and ‘softmax_tensor’). If you don’t know them, it is important to inspect the graph using TensorBoard. In addition, it is good practice to name every layer in the model script (resnet_model_headpose.py) to avoid unexpected stray nodes or namespaces inside the graph.
After the frozen protobuf file, frozen_model.pb, is generated, we are going to put it back into the output folder inside the TFartifacts folder of the S3 bucket.
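A boto3 sketch for the upload, again treating the training job name in the key as a placeholder:

```python
import boto3

s3 = boto3.client('s3')

# Upload the frozen graph next to the original training artifacts
frozen_key = '{}/{}/<training-job-name>/output/frozen_model.pb'.format(headpose_folder, artifacts_folder)
s3.upload_file('frozen_model.pb', s3_bucket, frozen_key)
```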
You will find frozen_model.pb in your S3 bucket.
The model is now ready, and you can deploy it to your AWS DeepLens device right away. AWS DeepLens requires an AWS Lambda function to run the model; a suitable Lambda function comes with one of the AWS DeepLens sample projects.
If you want to learn how to write your own Lambda functions for AWS DeepLens, take a look at this blog post.
Conclusion
In this blog post, we trained a head-pose estimation ResNet-50 model in TensorFlow on Amazon SageMaker. Then we processed the trained model artifact file so that we could deploy it to an AWS DeepLens device. Now you can develop your own AWS DeepLens models using TensorFlow on Amazon SageMaker.
About the Authors
Tatsuya Arai Ph.D. is a biomedical engineer turned deep learning data scientist on the Amazon Machine Learning Solutions Lab team. He believes in the true democratization of AI and that the power of AI shouldn’t be exclusive to computer scientists or mathematicians.
Eddie Calleja is a Software Development Engineer for AWS Deep Learning. He is one of the developers of the DeepLens device. As a former physicist he spends his spare time thinking about applying AI techniques to modern day physics problems.
Jyothi Nookula is a Senior Product Manager for AWS DeepLens. She loves to build products that delight her customers. In her spare time, she loves to paint and host charity fund raisers for her art exhibitions.