Segmenting brain tissue using Apache MXNet with Amazon SageMaker and AWS Greengrass ML Inference – Part 2

In Part 1 of this blog post, we demonstrated how to train and deploy neural networks to automatically segment brain tissue from an MRI scan in a simple, streamlined way using Amazon SageMaker. We used Apache MXNet to train a convolutional neural network (CNN) on Amazon SageMaker using the Bring Your Own Script paradigm. We trained two networks: U-Net and the efficient, low-latency ENet. Now we show how to use AWS Greengrass ML Inference to deploy ENet to a portable edge device for offline inference in low- or no-connectivity environments.

While this use case deals with medical imaging as raw images and not Protected Health Information (PHI), please note the following:

AWS Greengrass is not an AWS HIPAA Eligible Service at the time of this writing. Consistent with the AWS Business Associate Addendum (BAA), AWS Greengrass should not be used to create, receive, maintain, or transmit Protected Health Information (PHI) under the U.S. Health Insurance Portability and Accountability Act (HIPAA). It is each customer’s responsibility to determine whether they are subject to HIPAA, and if so, how best to comply with HIPAA and its implementing regulations. Accounts that create, receive, maintain, or transmit PHI using a HIPAA Eligible Service should encrypt PHI as required under the BAA. For a current list of HIPAA Eligible Services, and for more information generally, see the AWS HIPAA Compliance page.

Use case

As we mentioned in Part 1, edge deployment of models is of great interest to a variety of use cases. Running inference offline at the edge has potential for significant impact in medical image annotation. Given the dearth of medical professionals in parts of the world with limited or no internet connectivity, a portable, low-power solution that can automate annotation locally has many advantages. We show how to deploy models trained in Amazon SageMaker to the edge using AWS Greengrass. This service enables you to securely run local compute, messaging, data caching, sync, and ML inference capabilities for connected devices.

Deploying to the edge

We trained two models on Amazon SageMaker, U-Net and ENet, to perform brain tissue segmentation. We showed how to deploy both models to Amazon SageMaker endpoints in the cloud for inference, and we compared their respective accuracy and latency. Now we show how to deploy ENet to a Raspberry Pi 3 (RPi) as an offline endpoint using AWS Greengrass ML Inference.

Training an Amazon SageMaker model in MXNet 0.11

First, we need a model that is compatible with the precompiled MXNet library that AWS Greengrass provides, which is currently version 0.11 (version 1.2.1 will be available soon). Version 0.11 isn't supported by default through the Amazon SageMaker MXNet estimator object, so we have to manually point the estimator at a container image that includes MXNet 0.11 and then train.

enet_eleven_job = 'DEMO-enet-eleven-job-' + \
    time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

enet_eleven_estimator = MXNet(entry_point='brain_segmentation.py',
                              base_job_name=enet_eleven_job,
                              source_dir='source_dir',
                              role=role,
                              train_instance_count=1,
                              train_instance_type='ml.p3.2xlarge',
                              hyperparameters={
                                  'learning_rate': 1E-3,
                                  'class_weights': [[1.35, 17.18, 8.29, 12.42]],
                                  'network': 'enet',
                                  'batch_size': 32,
                              })

# Override the training image with a container that ships MXNet 0.11.
region = sagemaker_session.boto_session.region_name
enet_eleven_estimator.train_image = lambda: '780728360657.dkr.ecr.{}.amazonaws.com/sagemaker-mxnet-py2-gpu:1.0'.format(region)

enet_eleven_estimator.fit({'train': train_s3, 'test': validation_s3})

Later steps use the model trained here as an ML resource, so take note of the training job name.
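
If you want a record of these identifiers, you can read them off the estimator after training completes. Here is a quick sketch, assuming the enet_eleven_estimator object from the snippet above:

# The training job name is what you select later when attaching the model
# as an ML resource in the AWS Greengrass console.
print(enet_eleven_estimator.latest_training_job.name)

# The trained model artifact is stored in S3 (typically at
# s3://<bucket>/<job-name>/output/model.tar.gz).
print(enet_eleven_estimator.model_data)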

Installing the AWS Greengrass core on RPi

Follow the instructions in Module 1: Environment Setup for Greengrass and Module 2: Installing the Greengrass Core Software in the AWS Greengrass Developer Guide.

Configuring ML inference

Follow the instructions for Configuring Machine Learning Inference, with the following caveats:

  • Ignore requirements for Raspberry Pi Camera because we won’t be using one.

  • Skip step 3. We get our model package directly from Amazon SageMaker.

  • Step 4 refers to a deployment package that is already provided through the MXNet installation. We use a different deployment package called greengrassBrainSegmentation.zip.

The deployment package that we’re using deploys a Flask app to the RPi that serves inference using our model.

import greengrasssdk
import io
import json
from flask import Flask, request, send_file

# Helper functions decode_request, prep_batch, and postprocess come from
# the processing.py module included in the deployment package.
from processing import *
from load_model import load_model

# Client for publishing status messages to AWS IoT.
client = greengrasssdk.client('iot-data')

app = Flask(__name__)

@app.route('/', methods=['POST'])
def transform():
    client.publish(topic='inference/brain_segmentation', payload='Image received.')
    try:
        # Decode the uploaded image and run it through the network.
        img = decode_request(request)
        batch = prep_batch(img)
        net.forward(batch)
        raw_output = net.get_outputs()[0].asnumpy()
        # postprocess writes the predicted mask to /tmp/mask.png.
        postprocess(raw_output)
        client.publish(topic='inference/brain_segmentation', payload='Image processed.')
        with open('/tmp/mask.png', 'rb') as img:
            return send_file(io.BytesIO(img.read()),
                             attachment_filename='/tmp/mask.png',
                             mimetype='image/png')
    except Exception as e:
        client.publish(topic='inference/brain_segmentation', payload='Error: %s.' % (str(e)))

# Load the model from the local path configured as an ML resource.
model_path = '/greengrass-machine-learning/mxnet/segmentation-net/'
net = load_model(model_path, 'model', 0)
client.publish(topic='inference/brain_segmentation', payload='Model loaded.')
client.publish(topic='inference/brain_segmentation', payload='App starting.')

app.run(debug=True, host='0.0.0.0')

# The Lambda function is long-lived, so this handler is never invoked;
# the Flask app above serves all requests.
def function_handler(event, context):
    return

This serves as the handler for the local AWS Lambda function. Make sure to replace greengrassObjectClassification.function_handler in the setup instructions with greengrassBrainSegmentationApp.function_handler, and adjust the other names accordingly.

Along with the required AWS Greengrass and MXNet libraries, this deployment package contains the helper modules load_model.py and processing.py, as well as png.py from the PyPNG package for extremely lightweight PNG encoding.
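
The helpers themselves aren't shown here, but to give a sense of what load_model.py does, here is a minimal sketch. It assumes the model was saved as standard MXNet checkpoint files (model-symbol.json and model-0000.params) with a single 'data' input; the input shape below is a placeholder, so substitute the shape your network was trained with:

import mxnet as mx

def load_model(model_dir, prefix, epoch, input_shape=(1, 1, 128, 128)):
    # Load the serialized symbol and trained parameters from the checkpoint.
    sym, arg_params, aux_params = mx.model.load_checkpoint(model_dir + prefix, epoch)
    # Bind a module for CPU inference; no labels are needed at inference time.
    mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
    mod.bind(for_training=False, data_shapes=[('data', input_shape)])
    mod.set_params(arg_params, aux_params, allow_missing=True)
    return mod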

In step 6, skip the videoCore resources because we don't use a camera. Follow the instructions to add an ML resource. For the model source, choose Use an existing Amazon SageMaker model instead of Locate or upload a model in Amazon S3, and select the name of the MXNet 0.11 ENet training job. For the local path, enter the following:

/greengrass-machine-learning/mxnet/segmentation-net/

This is where the handler function looks for the model.

Stop in step 8 after deployment.

Testing the edge endpoint

Now that you have deployed your Lambda function to your device, test it. Make sure that you’re connected to the same network as the RPi. The Flask app by default listens on port 5000.

import requests

# POST a test image to the Flask app running on the Raspberry Pi.
files = {'Body': open('test_input.png', 'rb')}
response = requests.post("http://<YOUR.RPi.IP.ADD>:5000", files=files)
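
It can help to confirm that the request succeeded before saving the result. For example:

# Raises an HTTPError if the endpoint returned an error status.
response.raise_for_status()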

Next, write the response content to disk.

with open('test_prediction.png', 'wb') as f:
    f.write(response.content)

Open the image to get your result!
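
If you'd rather inspect the result from the same script, a one-liner with Pillow (assuming it is installed) works:

from PIL import Image

# Display the predicted segmentation mask.
Image.open('test_prediction.png').show()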

Starting AWS Greengrass on boot

Now let’s make sure that AWS Greengrass runs on boot. That way, we don’t have to connect by Secure Shell (SSH) and restart the daemon every time we power it back on. To do so, use the following command to edit rc.local.

sudo nano /etc/rc.local

Add the following lines to rc.local.

cd /greengrass/ggc/core/
sudo ./greengrassd start
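
On Raspbian, rc.local ends with exit 0 by default; make sure the new lines go before it, so the tail of the file looks like this:

cd /greengrass/ggc/core/
sudo ./greengrassd start

exit 0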

And that’s it! Now you have a portable, offline endpoint serving inference with ENet for brain-tissue segmentation.

Conclusion

In this post, we showed how AWS Greengrass integrates seamlessly with Amazon SageMaker to enable edge deployment of ML resources. We deployed an efficient neural network (ENet), trained for brain tissue segmentation in Amazon SageMaker, to a Raspberry Pi 3 as a portable offline endpoint using AWS Greengrass ML Inference.

Hopefully you now feel comfortable using Amazon SageMaker to train your own models and deploy them to edge devices using AWS Greengrass. This approach isn't limited to medical imaging; the potential applications are nearly limitless. Go forth and build!

Acknowledgements

This work was made possible with data provided by the Open Access Series of Imaging Studies (OASIS), OASIS-1, by Marcus et al., 2007, used under CC BY 4.0.

Data were provided by OASIS:

  • OASIS-3: Principal Investigators: T. Benzinger, D. Marcus, J. Morris; NIH P50AG00561, P30NS09857781, P01AG026276, P01AG003991, R01AG043434, UL1TR000448, R01EB009352. AV-45 doses were provided by Avid Radiopharmaceuticals, a wholly owned subsidiary of Eli Lilly.

  • OASIS: Cross-Sectional: Principal Investigators: D. Marcus, R. Buckner, J. Csernansky, J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382.

  • OASIS: Longitudinal: Principal Investigators: D. Marcus, R. Buckner, J. Csernansky, J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382.

Publications:

  • Open Access Series of Imaging Studies (OASIS): Cross-Sectional MRI Data in Young, Middle Aged, Nondemented, and Demented Older Adults. Marcus, DS, Wang, TH, Parker, J, Csernansky, JG, Morris, JC, Buckner, RL. Journal of Cognitive Neuroscience, 19, 1498-1507. doi: 10.1162/jocn.2007.19.9.1498

About the Author

Brad Kenstler is a Data Scientist on the Amazon Machine Learning Solutions Lab team. As part of the ML Solutions Lab, he helps AWS customers apply ML and AI to their own business use cases and processes. His primary field of interest lies at the intersection of computer vision and deep learning. Outside of work, Brad enjoys listening to heavy metal, tasting new bourbons, and watching the San Francisco 49ers.