Sponsored Post.
By Johanna Pingel, Product Marketing Manager, MathWorks
I remember the journey of creating my first deep learning network. It all started when I typed that first search into Google to understand the technology: “What is deep learning?”
After hours of deep learning trial and error, I was left with a painful discovery: this was going to take a lot of time.
This is not another “what is deep learning?” piece. I’m here to talk about the intermediate stage of deep learning and the trends emerging in response to its challenges. Let’s explore three trends that could help alleviate some of the more time-consuming tasks of deep learning.
Trend #1: Cloud Computing – We all know that training complex networks takes time. On top of that, techniques like Bayesian optimization – which train your network multiple times with different hyperparameters – can provide powerful results at a cost: even more time. One way to alleviate some of this pain is to move from local resources to clusters (HPC) or the cloud. The cloud is emerging as a great resource: it offers the latest hardware and multiple GPUs at once, and you pay for resources only when you need them.
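To give a rough sense of what this looks like in practice, here is a minimal MATLAB sketch (not a tuned recipe) of a Bayesian hyperparameter search whose training runs are spread across a parallel pool. The data variables, the layer array `layers`, the cluster profile name `'MyCloudCluster'`, and the search ranges are all placeholders you would swap for your own.

```matlab
% Sketch: Bayesian optimization of two training hyperparameters, with each
% objective evaluation (a full training run) handed to a parallel worker.
% Assumes XTrain/YTrain/XVal/YVal and a layer array `layers` already exist,
% that Statistics and Machine Learning Toolbox is installed (for bayesopt),
% and that 'MyCloudCluster' is a placeholder cluster profile you have set up.

parpool('MyCloudCluster');   % start workers on the cloud/HPC cluster

optimVars = [
    optimizableVariable('InitialLearnRate', [1e-4 1e-1], 'Transform', 'log')
    optimizableVariable('Momentum',         [0.8  0.98])];

objFcn = @(params) trainAndEvaluate(params, XTrain, YTrain, XVal, YVal, layers);

results = bayesopt(objFcn, optimVars, ...
    'MaxObjectiveEvaluations', 20, ...   % 20 full training runs in total
    'UseParallel', true);                % run evaluations on the pool

function valError = trainAndEvaluate(params, XTrain, YTrain, XVal, YVal, layers)
% Train once with the proposed hyperparameters and report validation error.
options = trainingOptions('sgdm', ...
    'InitialLearnRate', params.InitialLearnRate, ...
    'Momentum',         params.Momentum, ...
    'MaxEpochs',        10, ...
    'Verbose',          false);
net      = trainNetwork(XTrain, YTrain, layers, options);
YPred    = classify(net, XVal);
valError = 1 - mean(YPred == YVal);
end
```

With 'UseParallel' set to true, each proposed set of hyperparameters becomes a separate training run on a worker – exactly the kind of embarrassingly parallel workload that a cloud or HPC pool absorbs well.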
Trend #2: Interoperability – Let’s face it: no single framework is best-in-class at every step of the deep learning workflow from start to finish. The trend toward interoperability between deep learning frameworks, primarily through ONNX.ai, lets users move models in and out of frameworks at their convenience. Companies like Facebook, Microsoft, and MathWorks are pushing this trend forward, which makes this a great time to check out a variety of deep learning frameworks.
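As a deliberately minimal illustration, the sketch below shows what moving a model in and out of MATLAB via ONNX can look like. The file names, the trained network `net`, and the input image `im` are placeholders, and it assumes the ONNX converter support package is installed.

```matlab
% Sketch: moving models in and out of MATLAB via ONNX.
% Requires the "Deep Learning Toolbox Converter for ONNX Model Format"
% support package. File names, `net`, and `im` are placeholders.

% Export a network trained in MATLAB so another framework can consume it.
exportONNXNetwork(net, 'myNetwork.onnx');

% Import a model that was trained elsewhere (e.g., in PyTorch or MXNet)
% and exported to ONNX, then use it like any other MATLAB network.
importedNet = importONNXNetwork('pretrained.onnx', ...
    'OutputLayerType', 'classification');
scores = predict(importedNet, im);   % `im` is a preprocessed input image
```

Once imported, the network can be analyzed, retrained, or deployed like a network built natively in MATLAB.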
Trend #3: Multi-deployment options – Let’s say you’ve made it to the finish line: you have a deep learning model that performs the task you envisioned. Now you need to get that model to its final destination. Multi-deployment can mean different things, so let me define it here as deploying your model to the right location for your specific need – the web, your phone, embedded processors, or GPUs.
If your target is GPUs, CUDA can provide significant speedups through code optimization. Yes, CUDA has been around for a while, but optimization libraries like TensorRT and Thrust are worth a look. While results may vary, NVIDIA reports TensorRT inference speeds up to 40x faster than CPU-only platforms.
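Here is one hedged sketch of that deployment path using GPU Coder with a TensorRT target. The entry-point function name, the network file `myNetwork.mat`, and the 224x224 RGB input size are placeholders, and actual speedups will depend entirely on your model and hardware.

```matlab
% myPredict.m -- entry-point function for code generation (placeholder names).
% Loads the trained network once and runs inference on a single input image.
function out = myPredict(in) %#codegen
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('myNetwork.mat');
end
out = predict(net, in);
end
```

```matlab
% At the MATLAB command line: generate CUDA code, with TensorRT handling
% the deep learning layers. The input size here assumes a 224x224 RGB image.
cfg = coder.gpuConfig('mex');          % MEX target for quick local testing
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
codegen -config cfg myPredict -args {ones(224,224,3,'single')} -report
```

This assumes GPU Coder and the NVIDIA TensorRT libraries are available on your machine; swapping 'tensorrt' for 'cudnn' targets cuDNN instead.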
With Release R2018b, MATLAB gives you the capabilities to fully embrace these trends: cloud computing through reference architectures and NGC containers, ONNX.ai import and export, and multi-deployment options such as CUDA code generation with TensorRT. Interoperability between deep learning frameworks (Trend #2) makes this a great time to check out MATLAB’s deep learning features and apps, even if you intend to keep using your current deep learning tools. Thanks for reading, and happy deep learning!
Learn more about the topic covered in this blog post: