Distilled News

What is Machine Learning?

Typing ‘what is machine learning?’ into a Google search opens up a Pandora’s box of forums, academic research, and hearsay – and the purpose of this article is to simplify the definition and understanding of machine learning with the direct help of our panel of machine learning researchers. In addition to an informed, working definition of machine learning (ML), we aim to provide a succinct overview of the fundamentals of machine learning, the challenges and limitations of getting machines to ‘think’, some of the issues being tackled today in deep learning (the ‘frontier’ of machine learning), and key takeaways for developing machine learning applications.

Anomaly detection: Machine learning platforms for real-time decision making

Ever since the rise of big data, enterprises of all sizes have been in a state of uncertainty. Today we have more data available than ever before, but few have been able to implement the procedures to turn this data into insights. To the human eye, there is simply too much data to process. Tim Keary looks at anomaly detection in this first of a series of articles. Unmanageable datasets have become a problem as organizations need to make faster decisions in real time. Machine learning has emerged as one of the critical technologies confronting this challenge. In fact, according to Adobe, around 15% of enterprises are using AI solutions, with a further 31% expected to incorporate AI solutions over the next 12 months. While people are no match for large datasets, organizations are leveraging machine learning and anomaly detection to process data faster than ever before. Anomaly detection is bridging the gap between metrics and business processes to provide more efficiency.

Time Series Anomaly Detection

This article describes how to use the Time Series Anomaly Detection module in Azure Machine Learning Studio, to detect anomalies in time series data. The module learns the normal operating characteristics of a time series that you provide as input, and uses that information to detect deviations from the normal pattern. The module can detect both changes in the overall trend, and changes in the magnitude or range of values. Detecting changes in time series data has wide applications. For example, you could use it for near-real-time monitoring of sensors, networks, or resource usage. By tracking service errors, service usage, and other KPIs, you can respond quickly to critical anomalies. Other applications include health care and finance.
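
As a rough illustration of the idea (and not the Azure ML Studio module’s actual API), a rolling z-score flags points that drift far from the recent mean; the window size and threshold below are arbitrary assumptions for this sketch.

```python
# Minimal rolling z-score anomaly detector, as a generic stand-in for the
# change-detection ideas described above. NOT the Azure ML Studio module.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Synthetic series with a level shift part-way through.
series = pd.Series(np.concatenate([rng.normal(10, 1, 200), rng.normal(15, 1, 50)]))

window, threshold = 30, 3.0  # assumptions, tune per application
rolling_mean = series.rolling(window).mean()
rolling_std = series.rolling(window).std()

# Points far from the recent mean (in units of recent std) are flagged as anomalies.
z = (series - rolling_mean) / rolling_std
anomalies = series[z.abs() > threshold]
print(anomalies.head())
```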

An Introduction to GPU Programming in Julia

This article aims to give a quick introduction to how GPUs work and, specifically, an overview of the current Julia GPU ecosystem and how easy it is to get simple GPU programs running. To make things easier, you can run all the code samples directly in the article if you have an account and click on edit. First of all, what is a GPU anyway? A GPU is a massively parallel processor with a couple of thousand parallel processing units. For example, the Tesla K80, which is used in this article, offers 4992 parallel CUDA cores. GPUs are quite different from CPUs in terms of frequencies, latencies and hardware capabilities, but you can loosely think of one as a slow CPU with 4992 cores!

Faster R-CNN and Mask R-CNN in PyTorch 1.0

This project aims at providing the necessary building blocks for easily creating detection and segmentation models using PyTorch 1.0.

An Introduction to Text Summarization using the TextRank Algorithm (with Python implementation)

Text Summarization is one of the applications of Natural Language Processing (NLP) which is bound to have a huge impact on our lives. With growing digital media and ever-growing publishing, who has the time to go through entire articles / documents / books to decide whether they are useful or not? Thankfully, this technology is already here. Have you come across the mobile app inshorts? It’s an innovative news app that converts news articles into a 60-word summary. And that is exactly what we are going to learn in this article – Automatic Text Summarization. Automatic Text Summarization is one of the most challenging and interesting problems in the field of Natural Language Processing (NLP). It is the process of generating a concise and meaningful summary of text from multiple text resources such as books, news articles, blog posts, research papers, emails, and tweets. The demand for automatic text summarization systems is spiking these days thanks to the availability of large amounts of textual data. Through this article, we will explore the realms of text summarization. We will understand how the TextRank algorithm works, and will also implement it in Python. Strap in, this is going to be a fun ride!
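
As a taste of what the article builds up to, here is a minimal TextRank-style extractive summarizer. The sentence representation (TF-IDF), the similarity measure, and the libraries used (scikit-learn, networkx) are choices made for this sketch and may differ from the article’s exact implementation.

```python
# Minimal TextRank-style extractive summarization sketch.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, top_n=2):
    # Represent each sentence as a TF-IDF vector.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    # Build a graph whose edge weights are pairwise sentence similarities.
    sim_matrix = cosine_similarity(tfidf)
    graph = nx.from_numpy_array(sim_matrix)
    # PageRank scores each sentence by its centrality in the similarity graph.
    scores = nx.pagerank(graph)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    # Return the top-scoring sentences in their original order.
    return [sentences[i] for i in sorted(ranked[:top_n])]

sentences = [
    "Automatic text summarization condenses long documents into short summaries.",
    "TextRank builds a graph of sentences and ranks them with PageRank.",
    "The highest-ranked sentences are then selected as the summary.",
    "Unrelated filler sentences should receive lower scores.",
]
print(textrank_summary(sentences))
```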

Reinforcement Learning with Prediction-Based Rewards

We’ve developed Random Network Distillation (RND), a prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity, which for the first time exceeds average human performance on Montezuma’s Revenge. RND achieves state-of-the-art performance, periodically finds all 24 rooms and solves the first level without using demonstrations or having access to the underlying state of the game. RND incentivizes visiting unfamiliar states by measuring how hard it is to predict the output of a fixed random neural network on visited states. In unfamiliar states it’s hard to guess the output, and hence the reward is high. It can be applied to any reinforcement learning algorithm, is simple to implement and efficient to scale. Below we release a reference implementation of RND that can reproduce the results from our paper.
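
A stripped-down sketch of the RND intrinsic reward looks roughly like the following; the network sizes and the PyTorch setup are illustrative assumptions, and the released reference implementation is the place to look for the real training details.

```python
# Sketch of the RND idea: intrinsic reward = error of a predictor network
# trying to match a fixed, randomly initialized target network.
import torch
import torch.nn as nn

obs_dim, feat_dim = 8, 16  # toy sizes, not from the paper

# Fixed random target network (never trained).
target = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, feat_dim))
for p in target.parameters():
    p.requires_grad_(False)

# Predictor network, trained to match the target's output on visited states.
predictor = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, feat_dim))
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def intrinsic_reward(obs):
    # Prediction error is large on unfamiliar states, small on familiar ones.
    with torch.no_grad():
        return (predictor(obs) - target(obs)).pow(2).mean(dim=-1)

# One update step on a batch of observed states.
obs = torch.randn(4, obs_dim)
loss = (predictor(obs) - target(obs)).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(intrinsic_reward(obs))
```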

Multilevel Modeling Solves the Multiple Comparison Problem: An Example with R

Multiple comparisons of group-level means is a tricky problem in statistical inference. A standard practice is to adjust the threshold for statistical significance according to the number of pairwise tests performed. For example, according to the widely known Bonferroni method, if we have 3 different groups for which we want to compare the means of a given variable, we would divide the standard significance level (.05 by convention) by the number of tests performed (3 in this case), and only conclude that a given comparison is statistically significant if its p-value falls below .017 (i.e. .05/3). In this post, we’ll employ an approach often recommended by Andrew Gelman, and use a multi-level model to examine pairwise comparisons of group means. In this method, we use the results of a model built with all the available data to draw simulations for each group from the posterior distribution estimated from our model. We use these simulations to compare group means, without resorting to the expensive corrections typical of standard multiple comparison analyses (e.g. the Bonferroni method referenced above).
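
For context, the “standard practice” being contrasted here is the Bonferroni-corrected set of pairwise tests. The post itself works in R; the following scipy sketch of that baseline is only for illustration, with made-up data.

```python
# Baseline the post argues against: pairwise t-tests with a Bonferroni-adjusted alpha.
from itertools import combinations
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {"A": rng.normal(0.0, 1, 50),
          "B": rng.normal(0.2, 1, 50),
          "C": rng.normal(0.8, 1, 50)}

pairs = list(combinations(groups, 2))
alpha_adjusted = 0.05 / len(pairs)  # .05 / 3 ≈ .017, as in the post

for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    print(f"{g1} vs {g2}: p = {p:.4f}, significant at adjusted alpha: {p < alpha_adjusted}")
```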

Use Pseudo-Aggregators to Add Safety Checks to Your Data-Wrangling Workflow

One of the concepts we teach in both Practical Data Science with R and in our theory of data shaping is the importance of identifying the roles of columns in your data. For example, to think in terms of multi-row records it helps to identify:
• Which columns are keys (together they identify rows or records).
• Which columns are data/payload (considered free-varying data).
• Which columns are ‘derived’ (functions of the keys).
In this note we will show how to use some of these ideas to write safer data-wrangling code.
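
A rough pandas analogue of that safety check (the original post works in R) is to verify that each ‘derived’ column really is a function of the keys before aggregating; the column names below are made up for illustration.

```python
# Check that a 'derived' column is constant within each key group before wrangling.
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 1, 2, 2],                 # key
    "item": ["a", "b", "c", "d"],             # payload (free varying)
    "order_total": [10.0, 10.0, 7.5, 8.0],    # supposedly derived from order_id
})

def check_derived(frame, keys, derived):
    # A derived column must take exactly one value within each key group.
    counts = frame.groupby(keys)[derived].nunique()
    return counts[counts > 1]

violations = check_derived(df, ["order_id"], "order_total")
print(violations)  # order_id 2 violates the assumption (7.5 vs 8.0)
```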

Analysing UK Traffic Trends with PCA

PCA (Principal Component Analysis) is quite a handy tool for solving unsupervised learning problems. In other words, PCA allows us to group unlabelled data into meaningful clusters and to visualize it in a way that helps us make sense of our data. Let’s see how PCA can be used to analyse traffic patterns. In this example, we are using traffic data available from the UK Department for Transport website.
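
A minimal scikit-learn sketch of this kind of PCA workflow might look like the following; the data here is random and the feature layout is an assumption, whereas the post works with real Department for Transport counts.

```python
# Standardize features, fit PCA, and inspect the reduced representation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: road segments; columns: hypothetical traffic features.
X = np.random.default_rng(1).random((100, 5))

# Standardize first so no single feature dominates the components.
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
components = pca.fit_transform(X_scaled)

print(pca.explained_variance_ratio_)  # share of variance captured by each component
print(components[:3])                 # first few points in the reduced 2-D space
```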

Data Labeling Full-Text Datasets for AI Predictive Lift

To paraphrase James Kobielus (SiliconANGLE), it is well understood by data scientists that without high-quality labeled training data, supervised learning falls apart and there is no way to ensure that models can predict with any accuracy. But with full-text datasets, high-quality, fully labeled training sets are hard to come by. As if that weren’t bad enough, it is also unclear what a high-quality, fully labeled full-text dataset looks like. We all know what a labeled dataset looks like for image recognition: it basically consists of two columns, the image and the label. But with full-text data there are no acknowledged standards or best practices for data labeling. At Informatics4AI, most of our customers create labeled training data from their own raw unlabeled text. In this whitepaper we examine the current state of data labeling for full-text datasets such as doctors’ notes, human conversations, tweets, etc.

How machines understand our language: an introduction to Natural Language Processing

Natural Language Processing is, for me, one of the most captivating fields of data science. The fact that a machine can understand the content of a text with a certain accuracy is fascinating, and sometimes scary. The applications of NLP are endless. This is how a machine classifies whether an email is spam or not, whether a review is positive or negative, and how a search engine recognizes what type of person you are based on the content of your query in order to customize the response accordingly. But how does that work in practice? This post introduces the concepts at the base of Natural Language Processing and focuses on the nltk package for Python.
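
For a flavour of the preprocessing steps such a post typically starts with, here is a tiny nltk example of tokenization and stop-word removal; the sample sentence is made up for this sketch.

```python
# Tokenize a sentence and drop stop words with nltk.
import nltk
nltk.download("punkt", quiet=True)
nltk.download("punkt_tab", quiet=True)  # needed by newer NLTK releases
nltk.download("stopwords", quiet=True)

from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

text = "Natural Language Processing lets machines make sense of human language."
tokens = word_tokenize(text.lower())

# Drop common function words that carry little meaning for many NLP tasks.
stops = set(stopwords.words("english"))
content_words = [t for t in tokens if t.isalpha() and t not in stops]
print(content_words)
```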

New Poll: How Important is Understanding Machine Learning Models?

A new KDnuggets poll asks: when building Machine Learning / Data Science models in 2018, how often was it important that the model be humanly understandable/explainable? Please vote.

Enterprise AI – An Application Perspective

Enterprise AI: An Application Perspective takes a use-case-driven approach to understanding the deployment of AI in the enterprise. Designed for strategists and developers, the book provides a practical and straightforward roadmap based on application use cases for AI in enterprises. The authors (Ajit Jaokar and Cheuk Ting Ho) are data scientists and AI researchers who have deployed AI applications for enterprise domains. The book is used as a reference for Ajit and Cheuk’s new course on Implementing Enterprise AI. After reading this book, the reader should be able to:
• Understand Enterprise AI use cases
• De-mystify Enterprise AI
• Understand what problems Enterprise AI solves (and does not solve)
The term ‘Enterprise’ can be understood in terms of enterprise workflows. These are already familiar through deployments of ERP systems like SAP or Oracle Financials. We consider both core enterprise and wider enterprise workflows (including the supply chain). The book is thus concerned with understanding the touchpoints which depict how enterprise workflows could evolve when AI is deployed in the enterprise.

Digital Decisioning Platforms – A New Way to Slice the Pie

‘Digital Decisioning Platforms’ is a new segment identified by Forrester, marrying Business Process Automation, Business Rules Management, and Advanced Analytics. For platform developers, it’s a new way to slice the market. For users, it eases integration of predictive models into the production environment.

Recommender systems with deep learning architectures

This post addresses the general problem of constructing a deep learning based recommender system. The particular architecture described in the paper is the one powering the new smart feed of the iki service, pushing your skills forward on a daily basis – to check its performance, please try the product beta. If you are familiar with the general idea of recommender systems and the mainstream approaches, and would like to go straight to the details of our solution, please skip the first two sections of the paper.

How to extract data from MS Word Documents using Python

This blog will go into detail on extracting information from Word Documents locally. Since many companies and roles are inseparable from the Microsoft Office Suite, this is a useful blog for anyone faced with data transferred through .doc or .docx formats.
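
python-docx is a common choice for this task (the post may use it or a different approach); a minimal sketch of reading paragraphs and tables looks like this, with ‘report.docx’ as a placeholder filename.

```python
# Read paragraph text and table contents from a .docx file with python-docx.
from docx import Document  # pip install python-docx

doc = Document("report.docx")  # placeholder path

# Paragraph text, skipping empty lines.
paragraphs = [p.text for p in doc.paragraphs if p.text.strip()]

# Flatten each table into a list of rows of cell strings.
tables = [
    [[cell.text for cell in row.cells] for row in table.rows]
    for table in doc.tables
]

print(paragraphs[:5])
print(tables[:1])
```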

Understanding Neural Networks: What, How and Why?

Neural networks are among the most powerful and widely used algorithms in the subfield of machine learning called deep learning. At first look, a neural network may seem like a black box: an input layer gets the data into the ‘hidden layers’, and after a magic trick we can see the information provided by the output layer. However, understanding what the hidden layers are doing is the key step to neural network implementation and optimization. On our path to understanding neural networks, we are going to answer three questions: What, How and Why?
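
To make the “input layer → hidden layers → output layer” picture concrete, here is a bare-bones numpy forward pass through a single hidden layer; the layer sizes and activation functions are arbitrary choices for this sketch.

```python
# Forward pass: input -> hidden layer (ReLU) -> output (sigmoid).
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = rng.random(4)                            # input layer: 4 features
W1, b1 = rng.random((8, 4)), rng.random(8)   # hidden layer: 8 units
W2, b2 = rng.random((1, 8)), rng.random(1)   # output layer: 1 unit

# Each layer is just a weighted sum plus a nonlinearity.
hidden = relu(W1 @ x + b1)
output = sigmoid(W2 @ hidden + b2)
print(hidden, output)
```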

Demystifying Statistical Analysis 6: Moderation & Mediation (Why Are They Even Paired Together?!)

Differentiating the terms ‘moderation’ and ‘mediation’ is the bane of many students’ lives when learning statistics. Because the two analyses deal with extraneous variables, and they sound very similar, people often confuse them. But if we uncover the essence of what each analysis does, their differences become so distinct that you might even wonder why they are always paired together. Before we dive into what each analysis does, I hope the following points help to form the basis of how to differentiate them in the simplest way:
• Believe it or not, the test for moderation is simply a check of whether or not an interaction effect exists, and nothing more complicated than that.
• On the other hand, the test for mediation requires at least a slightly more complicated model-comparison approach that checks for causality.
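
Since the moderation test boils down to checking an interaction term, a minimal statsmodels sketch is shown below; the simulated data and variable names are assumptions for illustration, not taken from the post.

```python
# Moderation as an interaction effect: fit y ~ x * m and inspect the x:m coefficient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)            # predictor
m = rng.normal(size=n)            # potential moderator
y = 0.5 * x + 0.3 * m + 0.8 * x * m + rng.normal(size=n)  # true interaction present

df = pd.DataFrame({"x": x, "m": m, "y": y})

# 'x * m' expands to x + m + x:m; the x:m coefficient is the moderation effect.
model = smf.ols("y ~ x * m", data=df).fit()
print(model.summary().tables[1])
```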
