Transfer learning for custom labels using a TensorFlow container and “bring your own algorithm” in Amazon SageMaker

Data scientists and developers can use the Amazon SageMaker fully managed machine learning service to build and train machine learning (ML) models, and then directly deploy them into a production-ready hosted environment.

In this blog post, we'll show you how to use Amazon SageMaker to do transfer learning with a TensorFlow container and our own code for training and inference.

Transfer learning is a well-known technique used in computer vision to re-train an existing trained neural network, such as AlexNet or ResNet[1], for additional custom labels. Amazon SageMaker also supports transfer learning for image classification through its built-in image classification algorithm, which lets you use your own labelled image data to re-train a ResNet[1] network. See the image classification documentation for more details on Amazon SageMaker. For guidelines on when to use transfer learning, see the documentation.

Although the Amazon SageMaker built-in image classification algorithm is great for a variety of use cases, you might have scenarios where you need a different combination of pre-trained network and the image data it was trained on. For example, some of the criteria to keep in mind are the similarity of the new dataset to the original dataset, the size of the new dataset, the number of labels required, the accuracy of the model, the size of the trained model, and, last but not least, the amount of compute needed to re-train. If, for example, you are trying to deploy the trained model on a handheld device, you might be better off with a smaller-footprint model such as MobileNet. Alternatively, if you want a more compute-efficient model, Xception would be better than VGG16 or Inception.

In this blog post, we take an Inception v3 network pre-trained on the ImageNet dataset and re-train it on the Caltech-256 dataset (Griffin, G., Holub, A.D., Perona, P. The Caltech 256. Caltech Technical Report). Amazon SageMaker makes it very easy to bundle your own container and import it into Amazon Elastic Container Registry (Amazon ECR). Alternatively, you could use the container provided by Amazon SageMaker: https://github.com/aws/sagemaker-tensorflow-containers. We will customize the Amazon SageMaker TensorFlow container with our own transfer learning code written in the TensorFlow framework. Then we'll import this container into Amazon ECR and use it for model training and inference.
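Before diving into the container, here is a minimal sketch of the transfer learning idea itself, written with the TensorFlow Keras API. This is not the code used in this post (which adapts TensorFlow's image retraining script inside the container); it only illustrates the pattern of freezing a pre-trained Inception v3 feature extractor and training a new classification head.

```python
import tensorflow as tf

NUM_CLASSES = 257  # Caltech-256: 256 object categories plus a "clutter" class

# Load Inception v3 pre-trained on ImageNet, without its original classification head.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze the pre-trained feature extractor

# Add a new classification head for our custom labels.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_dataset, epochs=5)  # train_dataset: batches of 299x299 RGB images and labels
```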

I hope this gives you sufficient background. Now let's start putting it together.

## Launching and preparing the environment

We'll leverage the Jupyter notebook instance provided by Amazon SageMaker to customize the provided TensorFlow container. We will import the container into this notebook environment before registering it in Amazon ECR. The same container will be used for both training and inference. Note that we'll build on a previous blog post, which explains the basics of importing your own container. The difference is that in this blog post we are customizing the TensorFlow container.

### Launch an Amazon SageMaker notebook instance

Log in to the AWS Management Console and go to the Amazon SageMaker console. The notebook instance has all the building blocks needed to build a custom container and the Docker container image.

Open the Amazon SageMaker Dashboard. Choose Create notebook instance.

For the purposes of this blog post, we will do the following:

  • Place the notebook instance in a subnet inside a VPC that has internet access.

  • You can pick any of the instance options from the drop-down list, but we recommend ml.m4.xlarge at a minimum.

  • Create a new IAM role or attach an existing one. Make sure that this IAM role allows Amazon ECR full access and S3 bucket access. This access is in addition to the default access that you get when creating a new IAM role for Amazon SageMaker (a boto3 sketch of attaching these permissions follows this list). For this blog post, these permissions should be sufficient to get started; however, if you want to further narrow down the scope of permissions and limit them to specific resources, see Amazon SageMaker Roles in the documentation.

  • Also, create a security group in this VPC that has at least ports 80 and 8888 open.

  • Enable internet access for this notebook. For the purposes of this blog post, this should be sufficient, but for additional security considerations for a notebook instance, review Notebook Instance Security in the documentation.

  • Keep the rest of the options at the default settings.
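If you prefer to script the IAM permissions mentioned above rather than use the console, the following hedged boto3 sketch attaches the AWS managed policies for Amazon ECR and Amazon S3 to an existing execution role. The role name is a placeholder, and in production you would scope the S3 permissions down to specific buckets.

```python
import boto3

iam = boto3.client("iam")
ROLE_NAME = "MySageMakerExecutionRole"  # placeholder: your notebook's execution role name

# Attach broad managed policies for this walkthrough; narrow them for production.
for policy_arn in [
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
]:
    iam.attach_role_policy(RoleName=ROLE_NAME, PolicyArn=policy_arn)
```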

Choose Create notebook instance and you should see an instance launching (it's in Pending status).

Wait for the status to change from Pending to InService.

After the notebook instance is in service, choose Open. Now you have a fully working Jupyter notebook with a variety of environments pre-configured and pre-built for you.

A screen similar to the following opens.

You can see all the pre-configured environments by choosing New in the top-right corner.

Choose Terminal to launch a terminal.

 

We will be using this terminal screen for most of this walkthrough.

### Get the Amazon SageMaker-provided TensorFlow container

Amazon SageMaker provides containers for TensorFlow, MXNet, and Chainer. We will customize the TensorFlow container with our own training and inference code. To begin, we will first clone the TensorFlow container Git repository from AWS.

In the terminal window, execute the following command:

sh$ git clone https://github.com/aws/sagemaker-tensorflow-containers.git

This should clone the repository. See the following screenshot:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/18/transfer-learning-customer-labels-10.gif)


Now our environment is ready to customize the container.

## Customize the TensorFlow container for transfer learning

Let's clone the source code required for the customized TensorFlow Docker image. First, go to the directory containing the sample Dockerfiles:

sh$ cd sagemaker-tensorflow-containers/docker/1.6.0/base

Execute the following to clone the TensorFlow code for transfer learning and inference:

sh$ git clone https://github.com/amitkshgit/tfblog.git

This places the code under /home/ec2-user/sagemaker-tensorflow-containers/docker/1.6.0/base/tfblog/. Then copy the blog Dockerfile into place so that the build uses it:

sh# cp Dockerfile_blog.cpu Dockerfile.cpu

Check the files present in tfblog/tensorflow_code. Look for important files, such as train, which is used while training the model. Amazon SageMaker invokes the Docker container so that this script is called during the training process. Similarly, there is a script called predictor.py that loads the trained model parameters and, through a wrapper function in serve, waits for prediction queries. It parses the input data (image, text, or CSV) and calls the model function. The predictions are then returned to the user. Note that the code in train and predictor.py is adapted from the TensorFlow image retraining and image labeling example code. The primary changes ensure that the inference code runs as a Flask application instance and responds to ping and invocation requests from Amazon SageMaker. This sample notebook shows you how to package the Docker container and the folder structure required for it to work in Amazon SageMaker.
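As a rough illustration of the serving pattern just described (placeholder names, not the exact predictor.py from the repository), the container exposes a small Flask app: Amazon SageMaker sends GET /ping health checks and POST /invocations prediction requests to port 8080.

```python
import json
import flask

app = flask.Flask(__name__)

def run_inference(image_url):
    # Placeholder for the TensorFlow inference call: load the re-trained graph
    # and return a {label: probability} dictionary for the given image.
    return {"example-label": "1.0"}

@app.route("/ping", methods=["GET"])
def ping():
    # SageMaker uses this as a container health check; return 200 when the model is ready.
    return flask.Response(response="\n", status=200, mimetype="application/json")

@app.route("/invocations", methods=["POST"])
def invocations():
    # The request body here is a plain-text image URL (ContentType: text/plain).
    image_url = flask.request.data.decode("utf-8")
    predictions = run_inference(image_url)
    return flask.Response(response=json.dumps(predictions), status=200,
                          mimetype="application/json")

if __name__ == "__main__":
    # In the real container this app is typically fronted by a WSGI server such as gunicorn.
    app.run(host="0.0.0.0", port=8080)
```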

Now build the container and give it a name, such as 'blog_img_5', or any name that you prefer. This image will be registered in Amazon ECR:

sh# bash build_and_push.sh blog_img_5 (Or whatever name you prefer for the image instead of blog_img_5)

Execute the script and you should see a screen similar to the following:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-12-1.gif)


Your container is now built and registered in Amazon ECR. You can verify this:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-14-1.gif)
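You can also check from the notebook itself with boto3 (a hedged snippet; the repository name is whatever you passed to build_and_push.sh):

```python
import boto3

ecr = boto3.client("ecr")
response = ecr.describe_images(repositoryName="blog_img_5")
print([image.get("imageTags") for image in response["imageDetails"]])
```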


## Test the container locally for training and inference

Now that the Dockerfile is ready with training and prediction code, it's time to upload some sample data and test the model locally before publishing to Amazon SageMaker. Ensure that the training data is placed in the following path, because this folder path is referenced by the container. The helper script for initiating local training mounts this path into the container instance (see https://github.com/amitkshgit/tfblog/blob/master/local_test/train_local.sh).

sh# cd local_test/test_dir/input/data/train/

You can use the Caltech-256 dataset, as mentioned earlier.

Make sure that the folder structure has all the images belonging to a label grouped in their own folder.

Now, let's test the training locally from within the SageMaker notebook instance. Go to the folder called local_test, which has utilities for building, deploying, and running predictions locally before publishing to Amazon SageMaker.

This is very useful for testing and debugging. Note the helper script called train_local.sh and how it launches the container, mounts the training data path, and passes 'train' as the entry point.

Copy only a small amount of data: the goal is to test the functionality (for example, to ensure that 'train' and 'predict' are invoked), not the training accuracy.

To start the local training:

sh# bash train_local.sh blog_img_5

You should see the training kick off. TensorFlow first downloads the Inception v3 model, which has already been trained on ImageNet, and then starts training on the new labels from our sample test data.
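If you prefer to drive the same local test from Python instead of the shell script, the following sketch uses the docker Python package. It assumes the usual SageMaker container layout that train_local.sh relies on: the local test_dir folder is mounted at /opt/ml inside the container and 'train' is passed as the entry point.

```python
import os
import docker  # requires the "docker" Python package

client = docker.from_env()
test_dir = os.path.abspath("test_dir")  # run from inside the local_test folder

logs = client.containers.run(
    image="blog_img_5",                 # the image name you built earlier
    command="train",
    volumes={test_dir: {"bind": "/opt/ml", "mode": "rw"}},
    remove=True,
)
print(logs.decode("utf-8"))
```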

## Using the container through the Amazon SageMaker console

Now that we have locally tested and validated the functionality of the container, we can use it from the Amazon SageMaker console because its image has already been registered in Amazon ECR. Instead of copying data locally, Amazon SageMaker allows us to use the data from Amazon S3. In addition, we can view the logs in Amazon CloudWatch and track the progress of the training. Most importantly, we can create prediction endpoints, which can then be used for inference.

Make a note of the ECR repository name because it will be needed for our training job as well as for creating the model endpoint. Open the Amazon ECS console, choose Repositories, and you should see the repository URI. Make a note of the entire URI.

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/20/transfer-learning-customer-labels-17-2.gif)


## Run training through Amazon SageMaker

Now we'll create the training job from the Amazon SageMaker console. Choose **Training Jobs**, then **Create Training Job**. You should see a page similar to the one below (a boto3 equivalent of these settings is sketched after the list). Make sure to do the following:

- Provide the **IAM role** that was used while launching the notebook instance.

- Set **Algorithm** to **Custom** because we are now using our own imported container.

- Provide the right ECR image URI for the Docker image that was imported for the TensorFlow container.

- For the purposes of this blog post, the settings we just discussed are sufficient. For production use, if you want to read more about how training jobs can be protected by using a VPC, read Protect Training Jobs by Using an Amazon Virtual Private Cloud in the documentation.
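For reference, here is a hedged boto3 equivalent of these console steps. The job name, ECR image URI, role ARN, and S3 paths are placeholders that you would replace with your own values.

```python
import boto3

sm = boto3.client("sagemaker")
sm.create_training_job(
    TrainingJobName="tf-transfer-learning-blog",  # placeholder name
    AlgorithmSpecification={
        "TrainingImage": "<account-id>.dkr.ecr.<region>.amazonaws.com/blog_img_5:latest",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::<account-id>:role/<SageMakerExecutionRole>",
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://<your-bucket>/caltech-256/train/",
            "S3DataDistributionType": "FullyReplicated",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://<your-bucket>/output/"},
    ResourceConfig={"InstanceType": "ml.m4.4xlarge", "InstanceCount": 1, "VolumeSizeInGB": 20},
    StoppingCondition={"MaxRuntimeInSeconds": 24 * 60 * 60},
)
```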


![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-18.gif)


For **Resource configuration**, choose **ml.m4.4xlarge** for **Instance type**, and increase **Additional volume per instance (GB)** (storage) to 20.

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-20.gif)


We are not changing hyperparameters in this scenario.

Next, we'll load the training data and specify the right paths in Amazon S3. Make sure that the data is organized as files in the labelled folders and available at an Amazon S3 path. Provide this S3 location as shown in the following screenshot:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-21.gif)
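One hedged way to upload the labelled folders from the notebook, preserving the one-folder-per-label layout (the bucket name, prefix, and local folder are placeholders; the AWS CLI's aws s3 sync achieves the same result):

```python
import os
import boto3

s3 = boto3.client("s3")
local_root = "caltech-256"                      # local folder with one subfolder per label
bucket, prefix = "<your-bucket>", "caltech-256/train"

for label in os.listdir(local_root):
    label_dir = os.path.join(local_root, label)
    if not os.path.isdir(label_dir):
        continue
    for fname in os.listdir(label_dir):
        key = "{}/{}/{}".format(prefix, label, fname)
        s3.upload_file(os.path.join(label_dir, fname), bucket, key)
```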


Choose **Create training job**. It will take a few minutes to launch the training container and copy all of the data to it. The training does not start until the container is up and running, and the data has been loaded. A successful job submission should look like the following:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-22.gif)


Choose the job name and go to the CloudWatch console to review the metrics that are being streamed. You should see all *stdout* messages being printed to CloudWatch Logs.

When the status of the job changes to **Completed**, the job has uploaded the model to Amazon S3. We'll use this model artifact to create the endpoints for prediction.

## Deploy the trained model from Amazon S3 in Amazon SageMaker

Now we’ll deploy the model and call it for predictions. After the training job is complete, choose it, and you should see an option for **Create Model**:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-23.gif)


 

Populate the page with the rest of the details for this model, as shown in the following screenshot:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-24.gif)


Leave the rest of the options at the defaults. Notice that the console has automatically picked the right container image and model artifacts, which are the parameters for the trained model. For the purposes of this blog post these settings are sufficient, but if you want to read more about protecting your models by using a VPC, see Protect Models by Using an Amazon Virtual Private Cloud in the documentation.

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-25.gif)
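If you prefer the API, here is a hedged boto3 sketch of the same model creation. The model name, ECR image URI, model artifact path, and role ARN are placeholders; the artifact path is the S3 location that the training job reported.

```python
import boto3

sm = boto3.client("sagemaker")
sm.create_model(
    ModelName="tf-transfer-learning-model",  # placeholder name
    PrimaryContainer={
        "Image": "<account-id>.dkr.ecr.<region>.amazonaws.com/blog_img_5:latest",
        "ModelDataUrl": "s3://<your-bucket>/output/<training-job-name>/output/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::<account-id>:role/<SageMakerExecutionRole>",
)
```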


After the model has been deployed, we’ll create the endpoint configuration. Endpoint configurations specify which models to deploy, relative traffic weighting, and hardware requirements.

Choose the deployed model, and you should see options for **Endpoint configurations**.

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-26.gif)



 

Create the endpoint configuration. Then, choose **Endpoints** to create an endpoint that will use this endpoint configuration. Give a name to this endpoint and select the existing endpoint configuration:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-27.gif)


Select the endpoint configuration that you created in the last step. After the endpoint has been created, choose it to see the progress of its creation:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-28.gif)


It will take a while to create the endpoint. After it is created, its status changes to **InService** and it can be invoked from the notebook.
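The same two steps can be scripted with boto3. This hedged sketch creates an endpoint configuration with a single production variant, creates the endpoint, and waits for it to come InService (the names are placeholders matching whatever you chose in the console).

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_endpoint_config(
    EndpointConfigName="blog-ep-config",       # placeholder name
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "tf-transfer-learning-model",
        "InstanceType": "ml.m4.xlarge",
        "InitialInstanceCount": 1,
        "InitialVariantWeight": 1.0,
    }],
)

sm.create_endpoint(EndpointName="blog-ep-new", EndpointConfigName="blog-ep-config")

# Block until the endpoint is ready to serve predictions.
sm.get_waiter("endpoint_in_service").wait(EndpointName="blog-ep-new")
```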

## Run predictions by calling the Amazon SageMaker endpoint

Open the conda_python2 environment:

![](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2018/07/19/transfer-learning-customer-labels-29.gif)



 

Paste the following code into a cell:

 

import boto3

runtime = boto3.Session().client(service_name='runtime.sagemaker')

url = 'https://images.unsplash.com/photo-1519046947096-f43d6481532b'

response = runtime.invoke_endpoint(EndpointName='blog-ep-new',
                                   ContentType='text/plain',
                                   Body=url)
result = response['Body'].read()
result

If the model is trained and deployed properly, you should get a response similar to the following:

{"aerobatics": "0.0019493427", "bmx": "0.007112476", "rugby union": "0.0011374813", "fencing": "0.0015482939", "beach volleyball": "0.98825246"}

## Clean up and delete the resources

Remember to clean up the resources that were provisioned so that you don't incur costs after running this scenario. You can do this from the console, as shown in the following steps, or programmatically (see the boto3 sketch after these steps).

### Delete the endpoint

To delete the endpoint, select the endpoint that was created and choose Delete.

### Delete the endpoint configuration

Similarly, delete the endpoint configuration by selecting the appropriate endpoint configuration and choosing Delete.

### Delete the model

Next, we'll delete the model.

### Delete the notebook instance

Finally, delete the notebook instance.
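If you prefer to script the clean-up, here is a hedged boto3 sketch of the same steps; the resource names are placeholders matching whatever you chose earlier.

```python
import boto3

sm = boto3.client("sagemaker")

sm.delete_endpoint(EndpointName="blog-ep-new")
sm.delete_endpoint_config(EndpointConfigName="blog-ep-config")
sm.delete_model(ModelName="tf-transfer-learning-model")

# A notebook instance must be stopped before it can be deleted.
notebook = "<your-notebook-instance>"
sm.stop_notebook_instance(NotebookInstanceName=notebook)
sm.get_waiter("notebook_instance_stopped").wait(NotebookInstanceName=notebook)
sm.delete_notebook_instance(NotebookInstanceName=notebook)
```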

## Conclusion

To conclude, I hope this blog post gave you some ideas for taking existing code in a framework, bundling it into an Amazon SageMaker-provided container, and porting it over to Amazon SageMaker. The containerized approach that this service provides makes it very easy for developers and data scientists to focus on their core strengths. You can use any framework you choose, and you can still leverage the wide array of AWS services, such as Amazon S3, CloudWatch, Amazon ECR, and IAM, to scale and operate your infrastructure.


## About the Author

Amit Sharma is an AWS solutions architect specializing in Analytics and Machine Learning services. He helps AWS customers and partners with technical guidance on related projects, enabling them to leverage these services for their business benefit.