Distilled News

Model evaluation, model selection, and algorithm selection in machine learning

Machine learning has become a central part of our lives – as consumers, customers, and hopefully as researchers and practitioners! Whether we are applying predictive modeling techniques to our research or business problems, I believe we have one thing in common: we want to make ‘good’ predictions! Fitting a model to our training data is one thing, but how do we know that it generalizes well to unseen data? How do we know that it doesn’t simply memorize the data we fed it and fail to make good predictions on future samples it hasn’t seen before? And how do we select a good model in the first place? Maybe a different learning algorithm could be better suited for the problem at hand?
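
To make the idea of ‘generalizing to unseen data’ concrete, the sketch below holds out part of a dataset and scores the model only on those held-out samples. This is a minimal illustration, not the article’s own example; scikit-learn, the iris data, and a k-nearest-neighbors classifier are stand-ins chosen for brevity.

# Minimal holdout-evaluation sketch (illustrative only; the article covers far
# more nuanced techniques such as cross-validation and nested model selection).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out 30% of the data so the model is scored on samples it has never seen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("Test accuracy: %.3f" % accuracy_score(y_test, clf.predict(X_test)))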

Principal Component Analysis in 3 Simple Steps

Principal Component Analysis (PCA) is a simple yet popular and useful linear transformation technique that is used in numerous applications, such as stock market predictions, the analysis of gene expression data, and many more. In this tutorial, we will see that PCA is not just a ‘black box’, and we are going to unravel its internals in 3 basic steps.
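
The tutorial’s exact framing may differ, but the core recipe can be sketched in NumPy roughly as follows: center the data, eigendecompose the covariance matrix, and project onto the leading eigenvectors. The toy data here is made up purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # toy data: 100 samples, 3 features

# Step 1: center the features
X_centered = X - X.mean(axis=0)

# Step 2: eigendecomposition of the covariance matrix
cov = np.cov(X_centered, rowvar=False)
eig_vals, eig_vecs = np.linalg.eigh(cov)   # eigh: the covariance matrix is symmetric
order = np.argsort(eig_vals)[::-1]         # sort components by explained variance
eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]

# Step 3: project onto the top-k principal components
k = 2
X_pca = X_centered @ eig_vecs[:, :k]
print(X_pca.shape)                         # (100, 2)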

Naive Bayes and Text Classification – Introduction and Theory

Naive Bayes classifiers, a family of classifiers based on the popular Bayes’ theorem, are known for creating simple yet well-performing models, especially in the fields of document classification and disease prediction. In this first part of a series, we will take a look at the theory of naive Bayes classifiers and introduce the basic concepts of text classification. In the following articles, we will implement those concepts to train a naive Bayes spam filter and apply naive Bayes to song classification based on lyrics.
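
As a rough illustration of the idea (not the series’ own implementation), a bag-of-words naive Bayes text classifier can be sketched with scikit-learn; the toy documents and labels below are invented purely for this example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs   = ["win money now", "cheap pills offer", "meeting at noon", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

vec = CountVectorizer()
X = vec.fit_transform(docs)           # word-count feature vectors

clf = MultinomialNB().fit(X, labels)  # estimates P(word | class) with Laplace smoothing
print(clf.predict(vec.transform(["cheap money offer"])))  # likely ['spam']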

Predictive Modeling, Supervised Machine Learning, and Pattern Classification – the big Picture

When I was working on my next pattern classification application, I realized that it might be worthwhile to take a step back and look at the big picture of pattern classification in order to put my previous topics into context and to provide an introduction for the topics that are going to follow. Pattern classification and machine learning are very hot topics and are used in almost every modern application: Optical Character Recognition (OCR) at the post office, spam filtering in our email clients, barcode scanners in the supermarket … the list is endless. In this article, I want to give a quick overview of the main concepts of a typical supervised learning task as a primer for future articles and implementations of various learning algorithms and applications.

Linear Discriminant Analysis – Bit by Bit

Linear Discriminant Analysis (LDA) is most commonly used as a dimensionality reduction technique in the pre-processing step for pattern classification and machine learning applications. The goal is to project a dataset onto a lower-dimensional space with good class separability in order to avoid overfitting (‘curse of dimensionality’) and to reduce computational costs.
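
A minimal sketch of LDA used this way, with scikit-learn and the iris data standing in for brevity (the article itself derives the projection step by step by hand):

from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)

# Supervised dimensionality reduction: project the 4-D samples onto the
# 2 linear discriminants that best separate the 3 classes.
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)
print(X_lda.shape)   # (150, 2)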

Dixon’s Q test for outlier identification – A questionable practice

I recently faced the impossible task of identifying outliers in a dataset with very, very small sample sizes, and Dixon’s Q test caught my attention. Honestly, I am not a big fan of this statistical test, but Dixon’s Q test is still quite popular in certain scientific fields (e.g., chemistry), so it is important to understand its principles in order to draw your own conclusions about the research data that you might stumble upon in research articles or scientific talks.
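
For reference, the Q statistic itself is simply the gap between the suspect value and its nearest neighbor divided by the range of the data. The sketch below computes it for a hypothetical set of measurements; whether the suspect value is rejected depends on a critical value looked up in published Q tables for the chosen confidence level and sample size, which is deliberately not hard-coded here.

def dixon_q(data, suspect="max"):
    # Q = gap / range for the most extreme value at either end of the sorted data.
    x = sorted(data)
    value_range = x[-1] - x[0]
    gap = (x[-1] - x[-2]) if suspect == "max" else (x[1] - x[0])
    return gap / value_range

sample = [0.142, 0.153, 0.135, 0.002, 0.175]   # hypothetical measurements
q = dixon_q(sample, suspect="min")
print("Q =", round(q, 3))   # compare against the tabulated critical value for n = 5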

About Feature Scaling and Normalization – and the Effect of Standardization for Machine Learning Algorithms

The result of standardization (or Z-score normalization) is that the features will be rescaled so that they’ll have the properties of a standard normal distribution …
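
In code, that amounts to nothing more than subtracting each feature’s mean and dividing by its standard deviation; a tiny NumPy sketch with made-up numbers:

import numpy as np

X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])                      # toy feature matrix

# Z-score standardization: z = (x - mean) / standard deviation, per feature column.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0))   # ~[0. 0.]
print(X_std.std(axis=0))    # [1. 1.]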

Entry Point Data – Using Python’s sci-Packages to Prepare Data for Machine Learning Tasks and Other Data Analyses

In this short tutorial I want to provide an overview of some of my favorite Python tools for common procedures as entry points for general pattern classification and machine learning tasks, and various other data analyses.
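
A typical entry point looks roughly like the following sketch: read a CSV with pandas, sanity-check it, and hand the numeric values over to NumPy or scikit-learn. The file name and the ‘label’ column are hypothetical placeholders, not from the article.

import pandas as pd

df = pd.read_csv("measurements.csv")       # hypothetical input file
print(df.head())                           # quick sanity check of the raw data
print(df.isnull().sum())                   # count missing values per column

X = df.drop(columns=["label"]).to_numpy()  # hypothetical 'label' column as the target
y = df["label"].to_numpy()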

An Introduction to Parallel Programming Using Python’s Multiprocessing Module

CPUs with multiple cores have become the standard in modern computer architectures, and we can find them not only in supercomputer facilities but also in our desktop machines at home and in our laptops; even Apple’s iPhone 5s got a 1.3 GHz dual-core processor in 2013. However, the default Python interpreter was designed with simplicity in mind and has a thread-safe mechanism, the so-called ‘GIL’ (Global Interpreter Lock). In order to prevent conflicts between threads, it executes only one statement at a time (so-called serial processing, or single-threading). In this introduction to Python’s multiprocessing module, we will see how we can spawn multiple subprocesses to avoid some of the GIL’s disadvantages.
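
The simplest way to see this in action is a process pool that maps a function over some inputs; a minimal sketch (not the article’s full example):

import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":                     # guard required where 'spawn' is the start method
    # Each worker runs in its own interpreter process, so CPU-bound work
    # is not serialized by the GIL.
    with mp.Pool(processes=4) as pool:
        results = pool.map(square, range(10))  # work is split across the 4 worker processes
    print(results)                             # [0, 1, 4, 9, ..., 81]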

Numeric Matrix Manipulation – The Cheat Sheet for MATLAB, Python NumPy, R, and Julia

At its core, this article is a simple cheat sheet for basic operations on numeric matrices, which can be very useful if you are working and experimenting with some of the most popular languages used for scientific computing, statistics, and data analysis.
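
To give a flavor of the kind of one-liners such a cheat sheet collects, here are a few of the basic operations in NumPy (the other languages have direct equivalents):

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.eye(2)                # 2x2 identity matrix

print(A.T)                   # transpose
print(A @ B)                 # matrix multiplication
print(np.linalg.inv(A))      # matrix inverse
print(np.linalg.det(A))      # determinant
print(A.reshape(4, 1))       # reshaping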

Implementing a Principal Component Analysis (PCA) – in Python, Step by Step

The main purposes of a principal component analysis are to identify patterns in the data and to use those patterns to reduce the dimensionality of the dataset with minimal loss of information. Here, our desired outcome of the principal component analysis is to project a feature space (our dataset consisting of n d-dimensional samples) onto a smaller subspace that represents our data ‘well’. A possible application would be a pattern classification task, where we want to reduce the computational costs and the error of parameter estimation by reducing the number of dimensions of our feature space by extracting a subspace that describes our data ‘best’.
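
The article builds the projection by hand, step by step; as a quick cross-check of the end result, scikit-learn’s PCA produces the same kind of subspace and reports how much variance each component retains (the iris data is used here just for illustration):

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

pca = PCA(n_components=2)
X_proj = pca.fit_transform(X)            # project the 4-D samples onto a 2-D subspace
print(X_proj.shape)                      # (150, 2)
print(pca.explained_variance_ratio_)     # share of variance kept by each component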

Unit Testing in Python – Why we Want to Make it a Habit

Let’s be honest, code testing is anything but a joyful task. However, a good unit testing framework makes this process as smooth as possible. Eventually, testing becomes a regular and continuous process, accompanied by the assurance that our code will operate just as precisely and seamlessly as Swiss clockwork.
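
A minimal example of what that habit looks like with Python’s built-in unittest module; the add function is a hypothetical stand-in for whatever code you actually want to test:

# test_math_utils.py -- run with: python -m unittest
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_integers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_floats(self):
        # floating-point results are compared up to a tolerance, not exactly
        self.assertAlmostEqual(add(0.1, 0.2), 0.3, places=7)

if __name__ == "__main__":
    unittest.main()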

A short tutorial for decent heat maps in R

I received many questions from people who want to quickly visualize their data via heat maps – ideally as quickly as possible. This is a major issue in exploratory data analysis, since we often don’t have the time to digest whole books about the particular techniques in different software packages just to get the job done. But once we are happy with our initial results, it might be worthwhile to dig deeper into the topic in order to further customize our plots and maybe even polish them for publication. In this post, my aim is to briefly introduce one of R’s several heat map libraries for a simple data analysis. I chose R because it is one of the most popular free statistical software packages around. Of course, there are many more tools out there that produce similar results (and even within R there are many different packages for heat maps), but I will leave this as an open topic for another time.
