I want to hire some people to help me update my websites more frequently, handle the maintenance, and edit the podcast so I can produce episodes more often.
Properties of Interpretability
In my last two posts, I wrote about model interpretability, with the goal of trying to understand what it means and how to measure it. In the first post, I described the disconnect between our mental models and algorithmic models, and how interpretability could potentially reduce it. In the second post, I laid out four things that a model interpretation should allow us to do – mitigate bias, account for context, extract knowledge, and generalize. In this post, I want to discuss a number of desirable properties that have been suggested for model interpretations, and that might be used to judge whether, and how much, a model or explanation is interpretable.
Experiments in Handwriting with a Neural Network
Let’s start with generating new strokes based on your handwriting input
Colorizing the DRAW Model
In my last post I described the DRAW model of recurrent auto-encoders. As far as I’ve seen, the only implementations of DRAW floating around Github deal with the MNIST dataset. While they are helpful for reference, I wanted to have a model that could successfully generate photographs, not just black-and-white digits.
Using Keras' Pretrained Neural Networks for Visual Similarity Recommendations
To close out our series on building recommendation models using Sketchfab data, I will venture far from the previous posts’ factorization-based methods and instead explore an unsupervised, deep learning-based model. You’ll find that the implementation is fairly simple, with remarkably promising results, which is almost a smack in the face to all of the effort put in earlier.
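The core of a visual-similarity recommender is simple: represent each item by a feature vector (in the post's setting, activations from a pretrained Keras network such as VGG16 with `include_top=False`), then recommend the items whose vectors are closest to the query's by cosine similarity. Below is a minimal sketch of that similarity step; the random `feats` array is a stand-in for real CNN features, and `top_k_similar` is a hypothetical helper, not a function from the post.

```python
import numpy as np

def top_k_similar(features, query_idx, k=3):
    """Return indices of the k items most similar to the query item.

    `features` is an (n_items, n_dims) array of feature vectors; in
    practice these would come from a pretrained network's penultimate
    layer rather than the random stand-in used below.
    """
    # L2-normalize rows so the dot product equals cosine similarity
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / norms
    sims = unit @ unit[query_idx]
    # Sort by descending similarity, then drop the query item itself
    order = np.argsort(-sims)
    order = order[order != query_idx]
    return order[:k].tolist()

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 512))  # stand-in for CNN feature vectors
recommendations = top_k_similar(feats, query_idx=0, k=3)
print(recommendations)
```

Because the vectors are L2-normalized, a plain matrix-vector product gives cosine scores, which keeps the lookup fast even for thousands of items.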
Kinesis Advantage2: Impressions
Sun 04 December 2016
Ackerman Steering
How do cars turn corners? Whether driven by the front wheels or the back, cars are steered by turning the front wheels. (Why? It’s a stability issue. If they were steered from the rear, minor deviations in angle would be amplified by positive feedback, increasing the angle further. It’s like the difference between dragging a pencil across a table by the tip of your finger and attempting to push it across with your finger from behind.)
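In the Ackermann geometry, the two front wheels must turn through different angles so that both roll around the same turn center: the inner wheel follows a tighter circle and so turns more sharply. A minimal sketch of the ideal angles, assuming the turn radius is measured from the turn center to the midpoint of the rear axle (the function and the example car dimensions are illustrative, not from the post):

```python
import math

def ackermann_angles(wheelbase, track, turn_radius):
    """Ideal Ackermann steering angles (radians) for the front wheels.

    wheelbase:   distance between front and rear axles
    track:       distance between the left and right wheels
    turn_radius: turn center to midpoint of the rear axle

    The inner wheel sits closer to the turn center, so its angle
    atan(wheelbase / (radius - track/2)) exceeds the outer wheel's.
    """
    inner = math.atan(wheelbase / (turn_radius - track / 2))
    outer = math.atan(wheelbase / (turn_radius + track / 2))
    return inner, outer

# Example: a compact car (2.6 m wheelbase, 1.5 m track) on a 10 m turn
inner, outer = ackermann_angles(wheelbase=2.6, track=1.5, turn_radius=10.0)
print(math.degrees(inner), math.degrees(outer))  # inner wheel turns more
```

If both wheels were forced to the same angle instead, one of them would scrub sideways through the turn, which is exactly what the Ackermann linkage is designed to avoid.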
Forecast double seasonal time series with multiple linear regression in R
I will continue describing forecast methods that are suitable for seasonal (or multi-seasonal) time series. In the previous post, smart meter data of electricity consumption were introduced and a forecast method using a similar-day approach was proposed, with ARIMA and exponential smoothing (common methods of time series analysis) as the forecast methods. The biggest disadvantage of that approach was that we created multiple models at once, one for each type of day in the week, which is computationally expensive and can be somewhat opaque. Regression methods are more suitable for multi-seasonal time series: they can handle multiple seasonalities through independent variables (inputs of a model), so just one model is needed. In this post, I will introduce the most basic regression method - multiple linear regression (MLR).
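The key idea is that each seasonality becomes a set of independent variables in a single regression. The post itself works in R with seasonal dummy variables; the sketch below is a Python analogue (an assumption, not the post's code) that encodes a daily and a weekly cycle as Fourier terms and fits one linear model by least squares on simulated half-hourly consumption data.

```python
import numpy as np

def seasonal_design(n_obs, daily_period=48, weekly_period=7 * 48):
    """Design matrix encoding two seasonalities as Fourier terms.

    For half-hourly smart-meter data: 48 readings per day and 336 per
    week. Each seasonality contributes sine/cosine pairs as columns,
    so a single linear model can capture both cycles at once.
    """
    t = np.arange(n_obs)
    cols = [np.ones(n_obs)]  # intercept
    for period in (daily_period, weekly_period):
        for k in (1, 2):  # first two harmonics of each seasonality
            cols.append(np.sin(2 * np.pi * k * t / period))
            cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

# Simulated consumption with daily and weekly patterns plus noise
rng = np.random.default_rng(1)
n = 4 * 336  # four weeks of half-hourly readings
t = np.arange(n)
y = (10 + 3 * np.sin(2 * np.pi * t / 48)
        + 2 * np.sin(2 * np.pi * t / 336)
        + rng.normal(scale=0.5, size=n))

X = seasonal_design(n)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # one model, both seasons
fitted = X @ coef
print(round(float(np.corrcoef(fitted, y)[0, 1]), 3))
```

Because both seasonal cycles live in the same design matrix, there is no need to split the data by day type and fit separate models, which is the main advantage over the similar-day approach from the previous post.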
Salon des Refusés
Don't Panic: Deep Learning will be Mostly Harmless
This is a blog post summarizing topics covered in a talk at the Dagstuhl workshop on “New Directions in Kernels and Gaussian Processes”.