Using deep learning on AWS to lower property damage losses from natural disasters

Natural disasters like the 2017 Santa Rosa fires and Hurricane Harvey cost hundreds of billions of dollars in property damage every year, wreaking economic havoc in the lives of homeowners. Insurance companies do their best to evaluate affected homes, but it can take weeks before assessments are available and work to salvage and protect the homes can begin. EagleView, a property data analytics company, is tackling this challenge with deep learning on AWS.

“Traditionally, the insurance companies would send out adjusters for property damage evaluation, but that could take several weeks because the area is flooded or otherwise not accessible,” explains Shay Strong, director of data science and machine learning at EagleView. “Using satellite, aerial, and drone images, EagleView runs deep learning models on the AWS Cloud to make accurate assessments of property damage within 24 hours. We provide this data to large national insurance carriers and small regional carriers alike, to inform the homeowners and prepare next steps much more rapidly.”

Often, this quick turnaround can save millions of dollars in property damage. “During the flooding in Florida from Hurricane Irma, our clients used this timely data to learn where to deploy tarps so they could cover some of the homes to prevent additional water damage,” elaborates Strong.

Matching the accuracy of human adjusters in property assessments requires EagleView to use a rich set of images covering the entire multi-dimensional space (spatial, temporal, and spectral) of a storm-affected region. To meet this challenge, EagleView captures ultra-high-resolution aerial images across the U.S. at sub-1” pixel resolution using a fleet of more than 120 aircraft. The imagery is then broken down into small image tiles (often parcel-specific tiles or generic 256×256 TMS tiles) to run through deep learning image classifiers, object detectors, and semantic segmentation architectures. Each image tile is associated with its geospatial and time coordinates, which are kept as additional metadata and maintained throughout the training and inference processes. Post-inference, the tiles can be stitched back together using the geospatial data to form a geo-registered map of the area of interest that includes the neural network predictions. The predictions can also be aggregated into a property-level database for persistent storage, maintained in the AWS Cloud.
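
As a rough illustration of this tiling-and-stitching idea (not EagleView’s actual pipeline), the sketch below splits an aerial image into 256×256 chips, records each chip’s geospatial origin as metadata, and later reassembles per-tile segmentation predictions into a single geo-registered raster. The tile size, the simple affine geo-transform, and the array layout are assumptions made for illustration.

```python
# Illustrative sketch only; tile size, geo-transform, and data layout are assumptions.
import numpy as np

TILE = 256  # generic TMS-style tile size


def tile_image(image, origin, pixel_size):
    """Split an H x W x 3 array into TILE x TILE chips, keeping geo metadata."""
    tiles = []
    height, width = image.shape[:2]
    for y in range(0, height - TILE + 1, TILE):
        for x in range(0, width - TILE + 1, TILE):
            tiles.append({
                "chip": image[y:y + TILE, x:x + TILE],
                # Geo coordinates of the chip's upper-left corner (simple affine model).
                "geo": (origin[0] + x * pixel_size, origin[1] - y * pixel_size),
                "pixel": (x, y),
            })
    return tiles


def stitch_predictions(tiles, masks, full_shape):
    """Reassemble per-tile segmentation masks into one geo-registered raster."""
    mosaic = np.zeros(full_shape, dtype=np.uint8)
    for tile, mask in zip(tiles, masks):
        x, y = tile["pixel"]
        mosaic[y:y + TILE, x:x + TILE] = mask
    return mosaic
```

In a real system, the geospatial metadata would carry a full map projection, and the stitched raster would be aggregated per parcel before being written to the property-level database.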

The following image demonstrates the accuracy of damage predictions from EagleView’s deep learning model for a portion of Rockport, Texas, after Hurricane Harvey in 2017. In the left image, the green areas are properties where catastrophic structural damage occurred, based on human analysis. In the right image, the pink areas are the segmented damage predictions made by the model. For this data, the model achieves a per-address accuracy of 96% compared to human analysis.

“We also use deep learning for interim pre-processing capabilities, to determine, before generating address-level attributes, such things as whether the image is of good quality (e.g., not cloudy or blurry) and whether it even contains the correct property of interest. We daisy-chain together intermediate neural nets to pre-process the imagery, which improves the efficiency and accuracy of the neural nets that generate the property attributes,” adds Strong.
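
The sketch below shows the general shape of such a daisy chain, with assumed model objects and thresholds (it is not EagleView’s code): a quality classifier and a parcel check gate the imagery before the more expensive attribute network runs.

```python
# Hypothetical daisy chain: the model objects and thresholds are placeholders.
def assess_tile(tile, quality_net, parcel_net, attribute_net,
                quality_threshold=0.8, parcel_threshold=0.5):
    # Stage 1: reject cloudy or blurry imagery before doing any heavier work.
    if quality_net(tile) < quality_threshold:
        return {"status": "rejected", "reason": "poor image quality"}

    # Stage 2: confirm the tile actually contains the property of interest.
    if parcel_net(tile) < parcel_threshold:
        return {"status": "rejected", "reason": "property not in frame"}

    # Stage 3: only clean, correctly framed tiles reach the attribute model.
    return {"status": "ok", "attributes": attribute_net(tile)}
```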

EagleView built its deep learning models using the Apache MXNet framework. The models are trained on AWS using Amazon EC2 P2, P3, and G3 GPU instances. Once ready, the models are deployed onto a large fleet of containers orchestrated by Amazon ECS to process the terabytes of aerial imagery that EagleView collects daily. In total, the company has accumulated petabytes of property-focused aerial imagery, all of which is stored in Amazon S3 for the deep learning models to process. The results are stored in a combination of Amazon Redshift, Amazon Aurora, and Amazon S3, based on data type. For example, deep learning imagery products such as segmented raster maps are stored in Amazon S3 and referenced as a function of street address in Amazon Redshift databases. The resulting information is served to EagleView’s clients through APIs or custom user interfaces.
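
A minimal sketch of what one such containerized inference worker might look like is shown below, assuming boto3, placeholder bucket names, and a generic model object with a predict method; the real ECS, Aurora, and Redshift integration is more involved.

```python
# Sketch only: bucket names, key layout, and the model interface are assumptions.
import json
import boto3

s3 = boto3.client("s3")


def process_tile(imagery_bucket, key, model, results_bucket):
    # Fetch a raw image tile from the aerial imagery stored in S3.
    tile_bytes = s3.get_object(Bucket=imagery_bucket, Key=key)["Body"].read()

    # Run the deep learning model (e.g., damage classification or segmentation).
    prediction = model.predict(tile_bytes)

    # Persist the result keyed by the tile so it can later be joined to a
    # street address in the property-level database.
    s3.put_object(
        Bucket=results_bucket,
        Key=f"predictions/{key}.json",
        Body=json.dumps({"tile": key, "prediction": prediction}),
    )
```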

As to why EagleView chose MXNet over other deep learning frameworks, Strong says, “It’s the flexibility, scalability, and pace of innovation that led us to adopt MXNet. With MXNet, we can train models on the powerful P3 GPU instances, allowing us to iterate and build quickly. We can then deploy them to low-cost CPU instances for inference. MXNet can also handle the kind of scale that we require for operation, which includes petabytes of image storage and associated data. Lastly, the pace of innovation around MXNet makes it easy for us to keep up with advances in the deep learning space.”
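
The train-on-GPU, infer-on-CPU pattern Strong describes looks roughly like the following in MXNet Gluon; the network, data, and file name are toy placeholders, not an EagleView model.

```python
# Illustrative only: a toy model trained on a GPU context, then reloaded on CPU.
import mxnet as mx
from mxnet import autograd, gluon, nd

# Train on a GPU when one is available (e.g., a P3 instance).
ctx_train = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()

net = gluon.nn.Dense(2)  # stand-in for a real architecture
net.initialize(ctx=ctx_train)
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.01})
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

# One toy training step on random data.
x = nd.random.uniform(shape=(8, 16), ctx=ctx_train)
y = nd.array([0, 1, 0, 1, 0, 1, 0, 1], ctx=ctx_train)
with autograd.record():
    loss = loss_fn(net(x), y)
loss.backward()
trainer.step(batch_size=8)

# For production, the same parameters are loaded onto low-cost CPU instances.
net.save_parameters("model.params")
cpu_net = gluon.nn.Dense(2)
cpu_net.load_parameters("model.params", ctx=mx.cpu())
output = cpu_net(nd.random.uniform(shape=(1, 16), ctx=mx.cpu()))
```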

One of EagleView’s next steps is to use Gluon, an open-source deep learning interface, to translate R&D models developed natively in TensorFlow, PyTorch, or other frameworks into MXNet. This will let EagleView take models built by its own data scientists or by open-source authors in those frameworks and run them in MXNet for large-scale production inference.
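
For reference, a model re-expressed in Gluon can be hybridized and exported to MXNet’s symbolic format for a production inference fleet to load; the toy network below only illustrates that export step, not an actual ported EagleView model.

```python
# Toy stand-in for a ported R&D architecture, re-expressed in Gluon.
from mxnet import gluon, nd

net = gluon.nn.HybridSequential()
net.add(gluon.nn.Conv2D(32, kernel_size=3, activation="relu"),
        gluon.nn.GlobalAvgPool2D(),
        gluon.nn.Dense(2))
net.initialize()
net.hybridize()  # compile the imperative model to a static graph

# One forward pass traces the graph so it can be exported.
net(nd.random.uniform(shape=(1, 3, 256, 256)))

# Writes damage_classifier-symbol.json and damage_classifier-0000.params,
# which inference services can load without the original Python model code.
net.export("damage_classifier")
```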

“The affordability and scalability of AWS make it possible these days to run deep learning models to the level of accuracy that humans can achieve for many tasks, such as insurance assessments, but with a level of consistency never seen before. For EagleView’s insurance clients, consistency, accuracy, and scale are imperative,” concludes Strong. “This has the potential to disrupt traditional industries like insurance, adding tremendous value.”

To get started with deep learning, you can try MXNet as a fully managed experience in Amazon SageMaker.
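
For example, here is a minimal sketch using the SageMaker Python SDK’s MXNet estimator; the training script name, IAM role, S3 path, instance types, and version strings are placeholders you would replace with your own.

```python
# Sketch only: all names, ARNs, paths, and versions below are placeholders.
from sagemaker.mxnet import MXNet

estimator = MXNet(
    entry_point="train.py",                               # your MXNet training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=1,
    instance_type="ml.p3.2xlarge",                        # GPU instance for training
    framework_version="1.8.0",
    py_version="py37",
)

# Launch a managed training job against data in S3.
estimator.fit({"train": "s3://my-bucket/training-data"})

# Deploy the trained model behind a managed endpoint for inference.
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.c5.xlarge")
```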

About the Author

Chander Matrubhutam is a principal product marketing manager at AWS, helping customers understand and adopt deep learning.