Amazon SageMaker Batch Transform now supports Amazon VPC and AWS KMS-based encryption

Amazon SageMaker now supports running Batch Transform jobs in Amazon Virtual Private Cloud (Amazon VPC) and using AWS Key Management Service (AWS KMS). Amazon VPC allows you to control access to your machine learning (ML) model containers and data so that they are private and aren’t accessible over the internet. AWS KMS enables you to encrypt data on the storage volume attached to the ML compute instances that run the Batch Transform job. This ensures that the model artifacts, logs, and temporary files used in your Batch Transform jobs are always secure. In this blog, I show you how to apply these features to Batch Transform jobs.

Amazon SageMaker Batch Transform is ideal for scenarios where you have large batches of data, need to pre-process and transform training data, or don’t need sub-second latency. You can use Batch Transform on a range of data sets, from petabytes of data to very small data sets. Existing machine learning models work seamlessly with this new capability, without any changes. Amazon SageMaker manages the provisioning of resources at the start of Batch Transform jobs. It then releases them when the jobs are complete, so you pay only for the resources used during the execution of the jobs.

Using a VPC helps protect your model containers and the AWS resources they access, such as the Amazon S3 buckets where you store your data and model artifacts, because you can configure the VPC so that it is private and not connected to the internet. When you use a VPC, you can also monitor all network traffic in and out of your model containers by using VPC flow logs. If you don’t specify a VPC, Amazon SageMaker runs Batch Transform jobs in a VPC by default.
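If you want to capture that traffic, the following is a minimal sketch of enabling VPC flow logs with boto3. The VPC ID, log group name, and IAM role ARN are placeholders, not values from this walkthrough.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical identifiers -- replace with your own VPC, log group, and IAM role.
response = ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],        # the private VPC used for Batch Transform
    ResourceType="VPC",
    TrafficType="ALL",                            # capture both accepted and rejected traffic
    LogGroupName="sagemaker-batch-transform-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
)
print(response["FlowLogIds"])
```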

Amazon SageMaker Batch Transform already supports Amazon S3 server-side encryption (SSE) of input and output data. Now, using AWS KMS, you can also protect the storage volumes used during Batch Transform jobs with encryption keys that you control. You can take advantage of AWS KMS features, such as centralized key management, key usage audit logging, and master key rotation, when you are running inferences or transforming batches of data. You can create, import, rotate, disable, and delete the encryption keys used to encrypt your data, as well as define usage policies for them and audit their use. AWS KMS is also integrated with AWS CloudTrail to provide you with logs of all key usage to help meet your regulatory and compliance needs.
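As a quick, hedged sketch, creating a customer managed key and turning on automatic rotation with boto3 might look like the following; the description and alias are illustrative placeholders.

```python
import boto3

kms = boto3.client("kms")

# Create a customer managed key to encrypt Batch Transform storage volumes.
key = kms.create_key(Description="Key for SageMaker Batch Transform storage volumes")
key_id = key["KeyMetadata"]["KeyId"]

# Optional: give the key a friendly alias and enable automatic key rotation.
kms.create_alias(AliasName="alias/sagemaker-batch-transform", TargetKeyId=key_id)
kms.enable_key_rotation(KeyId=key_id)

print(key_id)
```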

Let’s take a look at how we run a Batch Transform job for the built-in Object Detection algorithm. I followed this example notebook to train my object detection model.

Now I’ll go to the Amazon SageMaker console and open the Models page to create a model and associate my private subnets and security groups.

From there I can create a model, using the object detection image and the model artifact generated from my training.

Here I name my model and specify the IAM role that provides the required permissions. I also specify the subnets and security groups in my private VPC to control access to my container and Amazon S3 data. As shown in the following screenshot, after I choose the VPC, Amazon SageMaker auto-populates the subnets and security groups in that VPC. I have provided multiple subnets and security groups from my private VPC in this example. Amazon SageMaker Batch Transform chooses a single subnet to create elastic network interfaces (ENIs) in my VPC and attaches them to the model containers. These ENIs provide the model containers with a network connection so that my Batch Transform jobs can connect to resources in my private VPC without going over the internet.

Additionally, Amazon SageMaker will associate all security groups that I provide with the network interfaces created in my private VPC.

Separately, I have created an S3 VPC endpoint that allows my model containers to access the Amazon S3 bucket where my input data is stored, even though internet access is disabled. I recommend that you also create a custom policy that allows only requests from your private VPC to access your S3 buckets. For more information, see Endpoints for Amazon S3.
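As one possible way to do this, the sketch below creates a gateway endpoint for Amazon S3 with a policy scoped to a single bucket using boto3. The VPC ID, route table ID, Region, and bucket name are assumptions, and your endpoint policy will depend on your own access requirements.

```python
import json
import boto3

ec2 = boto3.client("ec2")

# Endpoint policy that allows access only to the bucket holding my data and model artifacts.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-batch-transform-bucket",
            "arn:aws:s3:::my-batch-transform-bucket/*",
        ],
    }],
}

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # my private VPC
    ServiceName="com.amazonaws.us-east-1.s3",   # use the S3 service name for your Region
    RouteTableIds=["rtb-0123456789abcdef0"],
    PolicyDocument=json.dumps(endpoint_policy),
)
```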

I now specify the Amazon Elastic Container Registry (Amazon ECR) location of the inference container image and the location of the model artifact from my object detection training job.

That’s it! We now have a model that is configured to run securely within a VPC.
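The same model can also be created programmatically. The following is a minimal boto3 sketch of the CreateModel call with a VPC configuration; the model name, role ARN, Amazon ECR image, model artifact path, subnets, and security groups are all placeholders for my own resources.

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_model(
    ModelName="object-detection-vpc-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    PrimaryContainer={
        # Built-in object detection image for my Region and the artifact from my training job.
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/object-detection:latest",
        "ModelDataUrl": "s3://my-batch-transform-bucket/object-detection/model.tar.gz",
    },
    # Private subnets and security groups that control access to the container and S3 data.
    VpcConfig={
        "Subnets": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```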

Now, I will create a new Batch Transform job using the previously trained object detection model on the Selfies Dataset. Let’s go to the Batch Transform page in the Amazon SageMaker console.

First, open the Amazon SageMaker console, choose Batch Transform, and choose Create Batch Transform job.

In the Batch Transform job configuration window, complete the form as shown in the following example. For more information about completing this form, see Using Batch Transform (Console).

Note that I have also specified the encryption key, using the new AWS KMS feature. Amazon SageMaker uses it to encrypt data on the storage volumes attached to the ML compute instances that run the Batch Transform job.

Next, I specify the input location. I can use either a manifest file or load all the files in an S3 location. Because I’m dealing with images, I also specify the input content type.

As mentioned earlier, the S3 VPC endpoint I created allows my model containers to access the Amazon S3 bucket where my input data is stored, even with internet access disabled, and I recommend a custom policy that allows only requests from your private VPC to access your S3 buckets. For more information, see Endpoints for Amazon S3.

Finally, I configure the output location and start the job, as shown in the following example. As mentioned before, you can specify an AWS KMS encryption key for the output so that your Batch Transform results are encrypted server-side with Amazon S3 SSE-KMS.
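These console steps map directly to the CreateTransformJob API. The following boto3 sketch ties the pieces together; the job name, S3 locations, KMS key ARN, instance type, and content type are assumptions for illustration, not values from this post.

```python
import boto3

sm = boto3.client("sagemaker")

sm.create_transform_job(
    TransformJobName="object-detection-batch-transform",
    ModelName="object-detection-vpc-model",
    TransformInput={
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",          # or "ManifestFile" if you use a manifest
                "S3Uri": "s3://my-batch-transform-bucket/selfies-input/",
            }
        },
        "ContentType": "application/x-image",      # input images (assumed content type)
    },
    TransformOutput={
        "S3OutputPath": "s3://my-batch-transform-bucket/selfies-output/",
        # Encrypt the output objects server-side with SSE-KMS.
        "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
    },
    TransformResources={
        "InstanceType": "ml.p3.2xlarge",
        "InstanceCount": 1,
        # Encrypt the storage volume attached to the ML compute instances.
        "VolumeKmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
    },
)
```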

After you start your transform job, you can open the job details page and follow the links to the metrics and the logs in Amazon CloudWatch.

Using Amazon CloudWatch, I can see that the transform job is running. If I look at my results in Amazon S3 I can see the predicted labels for each image, as shown in the following example:

The transform job creates a JSON file for each input file that contains the detected objects.
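If you want to inspect the predictions programmatically, a small sketch like the following could list and parse those files. The bucket and prefix are placeholders, and the output structure (a "prediction" list of [class, score, xmin, ymin, xmax, ymax] entries) is an assumption based on the built-in object detection algorithm.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-batch-transform-bucket"   # hypothetical bucket and prefix
prefix = "selfies-output/"

# Batch Transform writes one <input-file>.out object per input file.
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get("Contents", []):
    body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
    result = json.loads(body)
    # Each detection is assumed to be [class_index, confidence, xmin, ymin, xmax, ymax].
    detections = [d for d in result.get("prediction", []) if d[1] > 0.5]
    print(obj["Key"], len(detections), "objects above 0.5 confidence")
```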

To analyze the results in your S3 bucket, you can create a table for the bucket in AWS Glue, and then either query the results with Amazon Athena or visualize them using Amazon QuickSight.
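As one possible follow-up, after cataloging the output with AWS Glue you could run an ad hoc query with Amazon Athena from Python. The database, table, and result location below are purely illustrative.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical Glue database/table built over the Batch Transform output prefix.
athena.start_query_execution(
    QueryString="SELECT * FROM batch_transform_results LIMIT 10",
    QueryExecutionContext={"Database": "object_detection_db"},
    ResultConfiguration={"OutputLocation": "s3://my-batch-transform-bucket/athena-results/"},
)
```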

It’s also possible to start transform jobs programmatically using the high-level Amazon SageMaker Python SDK or the low-level AWS SDK for Python (boto3). For more information about using Batch Transform with your own containers, see Getting Inferences by Using Amazon SageMaker Batch Transform.
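With the high-level SageMaker Python SDK, the same job can be expressed with a Transformer object. This is a sketch under the assumption that the model, bucket, and KMS key from the earlier steps exist.

```python
from sagemaker.transformer import Transformer

transformer = Transformer(
    model_name="object-detection-vpc-model",   # model created earlier with a VpcConfig
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://my-batch-transform-bucket/selfies-output/",
    # Encrypt outputs (SSE-KMS) and the attached storage volume with my KMS key.
    output_kms_key="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
    volume_kms_key="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)

transformer.transform(
    data="s3://my-batch-transform-bucket/selfies-input/",
    data_type="S3Prefix",
    content_type="application/x-image",
)
transformer.wait()
```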

Batch Transform is ideal for use cases where you have large batches of data that need predictions, don’t need sub-second latency, or need to pre-process and transform training data. In this blog post, we discussed how you can use Amazon VPC and AWS KMS to increase the security of your Batch Transform jobs. Amazon VPC and AWS KMS support for Batch Transform is available in all AWS Regions where Amazon SageMaker is available. For more information, see the Amazon SageMaker developer guide.


About the Authors

Urvashi Chowdhary is a Senior Product Manager for Amazon SageMaker. She is passionate about working with customers and making machine learning more accessible. In her spare time, she loves sailing, paddle boarding, and kayaking.

 

Jeffrey Geevarghese is a Senior Engineer in Amazon AI where he’s passionate about building scalable infrastructure for deep learning. Prior to this he was working on machine learning algorithms and platforms and was part of the launch teams for both Amazon SageMaker and Amazon Machine Learning.