I’ve got a sensor network: a collection of hipster detectors planted at various locations in Brooklyn. Due to power limitations, the sensors are not connected to any network on a regular basis. Rather than immediately transmitting information to the central data collector, the sensors instead wake up at random times and ping the network. That is, there is a delay between an event occurring and the information about that event being transmitted.
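To make the setup concrete, here is a toy simulation of my own (not from the post), with an arbitrary exponential wake-up delay chosen purely for illustration:

```python
# Toy illustration (not from the post): each event occurs at some time, but the
# central collector only learns of it after a random wake-up delay.
import numpy as np

rng = np.random.default_rng(0)

event_times = np.sort(rng.uniform(0, 24, size=10))   # hours at which events occur
wake_delays = rng.exponential(scale=3.0, size=10)    # hypothetical delay distribution (hours)
report_times = event_times + wake_delays             # when the collector actually hears about each event

for event, report in zip(event_times, report_times):
    print(f"event at {event:5.2f} h, reported at {report:5.2f} h")
```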
Blog has migrated from Ghost to Jekyll
Over the past few days I have been migrating this blog from Ghost to Jekyll.
How to score 0.8134 in Titanic Kaggle Challenge
This post is an opportunity to share my solution with you.
IMDB Data Visualizations with D3 + Dimple.js
Note: this is not optimized for mobile (or much else). The full-page version is here, and the visualization code is here. I don’t get into the technical aspects in this post, but feel free to take a look.
Playing with convolutions in TensorFlow
In this post we will try to develop a practical intuition about convolutions and visualize different steps used in convolutional neural network architectures. The code used for this tutorial can be found here.
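As a quick sketch of the kind of operation involved (my own example, assuming TensorFlow 2.x, not taken from the tutorial):

```python
# Minimal sketch (not from the tutorial): convolving a random "image" with a
# single 3x3 edge-detection kernel using tf.nn.conv2d, assuming TensorFlow 2.x.
import tensorflow as tf

image = tf.random.uniform((1, 28, 28, 1))             # batch, height, width, channels
kernel = tf.constant([[-1., -1., -1.],
                      [-1.,  8., -1.],
                      [-1., -1., -1.]])
kernel = tf.reshape(kernel, (3, 3, 1, 1))             # height, width, in_channels, out_channels

feature_map = tf.nn.conv2d(image, kernel, strides=1, padding="SAME")
print(feature_map.shape)                              # (1, 28, 28, 1)
```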
The Convexity of Improbability: How Rare are K-Sigma Effects?
In my experience, people seldom appreciate just how much more compelling a 5-sigma effect is than a 2-sigma effect. I suspect part of the problem is that p-values don’t evoke the visceral sense of magnitude that statements of the form “this would happen 1 in K times” do.
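As a rough illustration of my own (not from the article, and assuming the usual two-sided normal tail convention), k-sigma effects translate into “1 in K times” statements like so:

```python
# Rough sketch (not from the article): converting k-sigma effects into
# "1 in K times" statements via the two-sided normal tail probability.
from scipy.stats import norm

for k in (2, 3, 5):
    p = 2 * norm.sf(k)          # two-sided p-value for a k-sigma effect
    print(f"{k}-sigma: p = {p:.2e}, roughly 1 in {1 / p:,.0f} times")
```

A 2-sigma effect comes out to roughly 1 in 22, while a 5-sigma effect is roughly 1 in 1.7 million, which is exactly the gap in magnitude the post is pointing at.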
Boosting (in Machine Learning) as a Metaphor for Diverse Teams
Note – I wrote this article in one sitting, and definitely want to come back later to improve it and add references, but I don’t want to hold it up from being published just because I’m hungry for dinner. :) So I’m hitting publish, but please be aware that the content may change later. And feel free to give suggestions in the comments. -Renee
Moscow Math Olympiad Puzzle
The 2016 Olympic Games are currently happening in Rio. Let’s take a look at a puzzle from another Olympiad: the Moscow Math Olympiad. This puzzle was featured in the Spring Olympiad in 2000, and it recently came to my attention because, at the time, there was a little controversy over what kinds of answers counted as acceptable solutions.
An intuitive explanation of natural gradient descent
A term that sometimes shows up in machine learning is the “natural gradient”. While it hasn’t seen much focus in practice, a variety of algorithms can be shown to be variations of natural gradient descent.
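For reference (my addition, not from the post), the natural gradient update preconditions the ordinary gradient with the inverse Fisher information matrix:

$$
\theta_{t+1} = \theta_t - \eta\, F(\theta_t)^{-1} \nabla_\theta \mathcal{L}(\theta_t),
\qquad
F(\theta) = \mathbb{E}_{x \sim p(x \mid \theta)}\!\left[ \nabla_\theta \log p(x \mid \theta)\, \nabla_\theta \log p(x \mid \theta)^{\top} \right].
$$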
Variational Autoencoders Explained
In my previous post about generative adversarial networks, I went over a simple method for training a network that could generate realistic-looking images.