Distilled News

9 AI trends on our radar

1. We’ll start to see technologies enable partial automation of a variety of tasks.
2. AI in the enterprise will build upon existing analytic applications.
3. In an age of partial automation and human-in-the-loop solutions, UX/UI design will be critical.
4. We’ll see specialized hardware for sensing, model training, and model inference.
5. AI solutions will continue to rely on hybrid models.
6. AI successes will spur investments in new tools and processes.
7. Machine deception will remain a serious challenge.
8. Reliability and safety will take center stage.
9. Democratizing access to large training data will level the playing field.

Neo4j Bookshelf

The Neo4j Bookshelf: everything you need to know about working with and implementing graphs, including freely available books.

Everyday Ethics for Artificial Intelligence

Everyday Ethics for Artificial Intelligence is a framework for AI ethics that you and your team can immediately put into practice. We partnered with Francesca Rossi, IBM’s global leader for AI ethics, to distill a variety of information and perspectives into a digestible and actionable guide for designers and developers.

The Whitepaper: Everyday Ethics for Artificial Intelligence

This document represents the beginning of a conversation defining Everyday Ethics for AI. Ethics must be embedded into the design and development process from the very beginning of AI creation. This is meant to stimulate ideas and provoke thought. The idea here is to start simple and iterate. Rather than strive for perfection first, we’re releasing this to allow all who read and use this to comment, critique and participate in all future iterations. So please experiment, play, use, and break what you find here and send us your feedback. Designers and developers of AI systems are encouraged to be aware of these concepts and seize opportunities to intentionally put these ideas into practice. As you work with your team and others, please share this guide with them.

A query language for your API

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
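To make that description concrete, here is a minimal sketch of a client asking for exactly the fields it needs. The endpoint URL and the user field are hypothetical placeholders, and the query is sent from Python with requests rather than a dedicated GraphQL client:

```python
# Minimal sketch of issuing a GraphQL query over HTTP.
# The endpoint and the "user" field are hypothetical; substitute the
# schema exposed by your own GraphQL API.
import requests

GRAPHQL_ENDPOINT = "https://example.com/graphql"  # hypothetical endpoint

# The client asks for exactly the fields it needs -- nothing more.
query = """
query ($id: ID!) {
  user(id: $id) {
    name
    email
  }
}
"""

response = requests.post(
    GRAPHQL_ENDPOINT,
    json={"query": query, "variables": {"id": "42"}},
    timeout=10,
)
response.raise_for_status()
print(response.json()["data"]["user"])
```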

Build Intelligent Web App with Machine Learning Service

In this article, we will walk through the process of consuming an ML service from a single-page app (SPA) built using the popular progressive web apps (PWA) framework Vue.js. At a very high level, it involves:
• Generating an API client from the Swagger JSON file of the service
• Linking in the client as an npm module package
• Adding a Vue component with JavaScript code that leverages the generated client to make service consumption API calls
Before we start, we assume you have gone through the previous article, How to operationalize TensorFlow models in Microsoft Machine Learning Server, published the image classification ML model as a service, downloaded the swagger.json file, and installed a current version of Node.js.
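The article builds a JavaScript client generated from swagger.json and calls it from a Vue component. Purely to illustrate the underlying request, here is a hedged Python sketch that posts an image to such a scoring service over plain REST; the URL, token handling, and payload field names are assumptions rather than the service’s documented contract:

```python
# Hedged sketch: calling a published ML scoring service over REST.
# The endpoint, bearer token, and request/response shapes are assumptions;
# consult the service's swagger.json for the real contract.
import base64
import requests

SERVICE_URL = "http://localhost:12800/api/imageClassifier/1.0"  # assumed URL
TOKEN = "<access-token>"  # obtained beforehand from the server's login flow

with open("cat.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    SERVICE_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"image": image_b64},  # field name assumed
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```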

How to operationalize TensorFlow models in Microsoft Machine Learning Server

We have seen how to operationalize Keras models as web services in R and Python in a previous blog. Now we will see how to deploy a TensorFlow image classification model to Microsoft Machine Learning Server. Click here to learn more about Microsoft Machine Learning Server operationalization. You can configure Machine Learning Server to operationalize analytics on a single machine (one-box) or across multiple web and compute nodes configured on multiple machines, along with other enterprise features.

Bringing Artificial Intelligence to the Edge with Offline First Web Apps

The world of machine learning technologies is exploding. People are finding new and inventive ways to take advantage of this technology, and the possibilities feel endless. Machine learning models can serve as the base components for advanced artificial intelligence capabilities such as episodic memory, commonsense reasoning, and fluent conversations. But currently, the ability to utilize this technology on the web is limited by network connectivity, the feasibility of sending the data to be analyzed across the Internet, and resource limitations within the web browser environment. This makes machine learning not very accessible to web developers. Considering the vibrant ecosystem of Offline First Progressive Web Apps, perhaps it would be possible to apply these concepts so that machine learning inference can be embedded within apps, allowing it to work offline, in low-bandwidth scenarios, or when the data to be analyzed is too large to send to the cloud (e.g. video understanding).

Amazon’s own ‘Machine Learning University’ now available to all developers

Today, I’m excited to share that, for the first time, the same machine learning courses used to train engineers at Amazon are now available to all developers through AWS. We’ve been using machine learning across Amazon for more than 20 years. With thousands of engineers focused on machine learning across the company, there are very few Amazon retail pages, products, fulfillment technologies, or stores which haven’t been improved through the use of machine learning in one way or another. Many AWS customers share this enthusiasm, and our mission has been to take machine learning from something which had previously been available only to the largest, most well-funded technology companies, and put it in the hands of every developer. Thanks to services such as Amazon SageMaker, Amazon Rekognition, Amazon Comprehend, Amazon Transcribe, Amazon Polly, Amazon Translate, and Amazon Lex, tens of thousands of developers are already on their way to building more intelligent applications through machine learning. Regardless of where they are in their machine learning journey, one question I hear frequently from customers is: ‘How can we accelerate the growth of machine learning skills in our teams?’ These courses, available as part of a new AWS Training and Certification Machine Learning offering, are now part of my answer.

Map Plots Created With R And Ggmap

In my previous tutorial we created heat maps of Seattle 911 call volume by various time periods and groupings. The analysis was based on a dataset which provides Seattle 911 call metadata. It’s available as part of the data.gov open data project. The results were really neat, but it got me thinking: we have latitude and longitude within the dataset, so why don’t we create some geographical graphs? Y’all know I’m a sucker for beautiful data visualizations, and I know just the package to do this: GGMAP! The ggmap package extends the ever-amazing ggplot core features to allow for spatial and geographic visualization. So let’s get going!
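The tutorial itself is written in R with ggmap, which is not reproduced here. Purely as a rough Python analogue of the idea of plotting the calls by their coordinates, here is a hedged sketch; the file name and column names are assumptions about the exported dataset:

```python
# Rough Python analogue (not the tutorial's R/ggmap code): scatter the
# 911 call locations by longitude and latitude.
import pandas as pd
import matplotlib.pyplot as plt

calls = pd.read_csv("seattle_911_calls.csv")  # hypothetical local extract

plt.figure(figsize=(6, 8))
plt.scatter(calls["Longitude"], calls["Latitude"], s=2, alpha=0.3)
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Seattle 911 calls")
plt.show()
```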

What is a Generative Adversarial Network?

Before we even think about starting to talk about Generative Adversarial Networks (GANs), it is worth asking what a generative model actually is. Why do we even want to have such a thing? What is the goal? These questions can help seed our thought process to better engage with GANs. So why do we want a generative model? Well, it’s in the name! We wish to generate something. But what do we wish to generate? Typically, we wish to generate data (I know, not very specific). More than that though, it is likely that we wish to generate data that has never been seen before, yet still fits into some data distribution (i.e. some pre-defined data set that we have already set aside).
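To make "generate data that fits a distribution" concrete, here is a minimal PyTorch sketch of the adversarial setup on a toy 1-D Gaussian; the architectures and hyperparameters are arbitrary illustrative choices, not the article’s:

```python
# Minimal GAN sketch: learn to generate samples that resemble N(4, 1.5).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    real = 4 + 1.5 * torch.randn(64, 1)   # samples from the target distribution
    fake = G(torch.randn(64, 8))          # samples from the generator

    # Discriminator: tell real (label 1) from fake (label 0).
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator into labelling fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 4
```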

Hypothesis Testing In Machine Learning

In this tutorial, you’ll learn about the basics of Hypothesis Testing and its relevance in Machine Learning.
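The tutorial’s own examples are not reproduced here; as a small, self-contained illustration of hypothesis testing applied to model comparison, here is a hedged sketch using a paired t-test on cross-validation scores (the dataset, models, and 0.05 threshold are arbitrary illustrative choices):

```python
# Illustrative only: is model B really better than model A, or is the
# difference in cross-validation accuracy plausibly due to chance?
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
scores_a = cross_val_score(LogisticRegression(max_iter=5000), X, y, cv=10)
scores_b = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=10)

# Null hypothesis: the two models have the same mean accuracy.
t_stat, p_value = stats.ttest_rel(scores_a, scores_b)
print(f"mean A={scores_a.mean():.3f}, mean B={scores_b.mean():.3f}, p={p_value:.3f}")
if p_value < 0.05:  # conventional threshold, chosen here for illustration
    print("Reject the null: the difference is statistically significant.")
else:
    print("Fail to reject the null: the difference may be noise.")
```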

Gaussian Processes are Not So Fancy

Gaussian Processes have a mystique related to the dense probabilistic terminology that’s already evident in their name. But Gaussian Processes are just models, and they’re much more like k-nearest neighbors and linear regression than may at first be apparent.
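To back up that comparison, here is a short NumPy toy (an assumption-laden sketch, not the article’s code) showing that the GP posterior mean at a test point is just a kernel-weighted combination of the training targets, much like a smooth k-nearest-neighbors average; the kernel, length scale, and noise level are illustrative assumptions:

```python
# The GP posterior mean is a data-dependent weighted average of the
# training targets -- not unlike a smooth k-nearest-neighbors.
import numpy as np

def rbf(a, b, length_scale=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

rng = np.random.default_rng(0)
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(20)
x_test = np.array([2.5])

K = rbf(x_train, x_train) + 0.1 ** 2 * np.eye(20)  # kernel matrix + noise
k_star = rbf(x_test, x_train)                      # similarity of test point to each training point

weights = k_star @ np.linalg.inv(K)                # one weight per training target
posterior_mean = weights @ y_train
print(posterior_mean, np.sin(2.5))                 # prediction vs. the true function value
```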

Yet Another Tutorial on Variational Auto Encoder – but in Pytorch 1.0

A VAE is a generative model that leverages a neural network as a function approximator to model a continuous latent variable with an intractable posterior distribution. If you are interested in the theory of VAEs, I suggest looking at the original paper or this awesome tutorial by Carl Doersch. In this tutorial I aim to explain how to implement a VAE in PyTorch.
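As a companion to the blurb above, here is a minimal PyTorch sketch of the core VAE machinery (illustrative layer sizes, not the tutorial’s code): an encoder producing a mean and log-variance, the reparameterization trick, and an ELBO-style loss combining reconstruction error with a KL term:

```python
# Minimal VAE sketch with illustrative dimensions for flattened 28x28 images.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x_hat, x, mu, logvar):
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")          # reconstruction term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())       # KL divergence term
    return recon + kl

model = VAE()
x = torch.rand(32, 784)              # stand-in for a batch of flattened images
x_hat, mu, logvar = model(x)
print(vae_loss(x_hat, x, mu, logvar).item())
```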

Networks to reinvent insurance?

The theory of networks, or graphs, was born in 1735 with the work of Leonhard Euler, who tried to find a walk, starting from a given point, that would bring us back to that point by passing once and only once over each of the seven bridges of the city of Königsberg. Such networks can be compared to metro networks, consisting of stations (nodes) that are linked, or not, by rails (edges), or more generally to a road network, which can give rise to congestion studies, for example. But today, networks are mainly social, connecting people through friendship, professional, family, or monetary ties. Network analysis makes it possible to identify relatively homogeneous communities whose members are willing to share a risk, recreating a form of mutualisation.
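Euler’s bridge problem translates directly into code. Here is a small illustrative sketch with networkx (the node labels for the four land masses are my own shorthand, not from the article) that builds the seven-bridge multigraph and confirms that no walk crosses every bridge exactly once:

```python
# Model the four land masses and seven bridges of Königsberg as a multigraph
# and ask whether an Eulerian circuit (use every bridge once and return) exists.
import networkx as nx

G = nx.MultiGraph()
# Nodes: north bank (N), south bank (S), Kneiphof island (I), east bank (E).
G.add_edges_from([
    ("N", "I"), ("N", "I"),   # two bridges between the north bank and the island
    ("S", "I"), ("S", "I"),   # two bridges between the south bank and the island
    ("N", "E"), ("S", "E"),   # one bridge from each bank to the east bank
    ("I", "E"),               # one bridge between the island and the east bank
])

print(nx.is_eulerian(G))        # False: every node has odd degree
print(nx.has_eulerian_path(G))  # False as well -- Euler's classic result
```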

Gartner’s Top Data and Analytics Predicts for 2019

Data and Analytics Strategy
• Corporate strategies will explicitly mention information as a critical enterprise asset and analytics as an essential competency.
• Data literacy will become an explicit and necessary driver of business value.
• CDOs will partner with their CFO.
• Organizations will require a professional code of conduct incorporating ethical use of data and AI.
• Business systems will incorporate continuous intelligence that uses real-time context data to improve decisions.