TensorFlow in a Nutshell — Part Two: Hybrid Learning

Make sure to check out Part One: Basics

In this article we will demonstrate a Wide and Deep network, which trains a wide linear model simultaneously with a feed-forward neural network for more accurate predictions than some traditional machine learning techniques. This hybrid learning method will be used to predict the survival probability of Titanic passengers.

These hybrid learning methods are already in production at Google in the Play Store for app suggestions. Even YouTube uses similar hybrid learning techniques to suggest videos.

The code for this article is available here.

Wide and Deep Network

A Wide and Deep network combines a linear model with a feed-forward neural network so that our predictions have both memorization and generalization. This type of model can be used for classification and regression problems, and it achieves relatively accurate predictions with less feature engineering. Thus, we get the best of both worlds.

The Data

We are going to use the Titanic Kaggle data to predict whether or not a passenger will survive based on attributes like their name, sex, what ticket they had, the fare they paid, the cabin they stayed in, etc. For more information on this data set, check out its page here at Kaggle.

First off, we’re going to define all of our columns as continuous or categorical.

Continuous columns — any numerical value in a continuous range; pretty much any numerical representation, like money or age.

Categorical columns — values drawn from a finite set, like male or female, or even what country someone is from.

Since we are only looking to see if a person survived, this is a binary classification problem. We predict a 1 if that person survives and a 0 if they do not. We then create a column solely for our survived label.
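As a rough sketch against the Kaggle Titanic CSV headers, the column groupings might look like the following (the exact grouping and variable names here are illustrative, not necessarily the repo's):

```python
# Column groupings for the Kaggle Titanic data; names match the CSV headers.
CATEGORICAL_COLUMNS = ["Name", "Sex", "Ticket", "Cabin", "Embarked"]
CONTINUOUS_COLUMNS = ["Age", "SibSp", "Parch", "Fare", "PassengerId", "Pclass"]
SURVIVED_COLUMN = "Survived"  # binary label: 1 = survived, 0 = did not
```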

The Network

Now we can get to creating the columns and adding embedding layers. When we build our model, we're going to want to change our categorical columns into sparse columns. For columns with a small set of categories, such as Sex or Embarked (S, Q, or C), we will transform them into sparse columns with keys.
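A minimal sketch using the tf.contrib.layers API the article targets:

```python
import tensorflow as tf

# Categorical columns with a small, known vocabulary get explicit keys.
sex = tf.contrib.layers.sparse_column_with_keys(
    column_name="Sex", keys=["female", "male"])
embarked = tf.contrib.layers.sparse_column_with_keys(
    column_name="Embarked", keys=["C", "Q", "S"])
```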

The other categorical columns have far more values than we want to enumerate as keys, and since we don't have a vocabulary file to map all of the possible categories to integers, we will hash them.
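Sketched with the same API, the hashing might look like this (the bucket sizes are illustrative):

```python
# High-cardinality categorical columns are hashed into a fixed bucket space.
cabin = tf.contrib.layers.sparse_column_with_hash_bucket(
    "Cabin", hash_bucket_size=1000)
name = tf.contrib.layers.sparse_column_with_hash_bucket(
    "Name", hash_bucket_size=1000)
ticket = tf.contrib.layers.sparse_column_with_hash_bucket(
    "Ticket", hash_bucket_size=1000)
```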

For our continuous columns we want to use their real value. The reason PassengerId is continuous and not categorical is that it isn't a string; it's already an integer ID.
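In the same vein, a sketch of the continuous columns:

```python
# Continuous columns are used at their real values.
age = tf.contrib.layers.real_valued_column("Age")
passenger_id = tf.contrib.layers.real_valued_column("PassengerId")
sib_sp = tf.contrib.layers.real_valued_column("SibSp")
parch = tf.contrib.layers.real_valued_column("Parch")
fare = tf.contrib.layers.real_valued_column("Fare")
p_class = tf.contrib.layers.real_valued_column("Pclass")
```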

We are going to bucket the ages. Bucketization lets us find the survival correlation within certain age groups rather than across all ages as a whole, thus increasing our accuracy.
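For example (the boundary values here are an illustrative choice, not the repo's exact ones):

```python
# Bucketized age lets the model learn a weight per age group
# instead of a single weight for age as a whole.
age_buckets = tf.contrib.layers.bucketized_column(
    age, boundaries=[5, 18, 25, 30, 35, 40, 45, 50, 55, 65])
```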

Almost done. Next we define our wide columns and our deep columns. Our wide columns effectively memorize interactions between our features, but they don't generalize them; that is why we also have deep columns.

The benefit of these deep columns is that they take our sparse, high-dimensional features and reduce them to low dimensions.
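Sketching both sets of columns (the crosses and embedding dimensions here are illustrative choices):

```python
# Wide columns: raw sparse columns plus a crossed column that
# memorizes a specific feature interaction.
wide_columns = [
    sex, embarked, cabin, name, ticket, age_buckets,
    tf.contrib.layers.crossed_column([age_buckets, sex],
                                     hash_bucket_size=int(1e4)),
]

# Deep columns: each sparse column is embedded into a dense,
# low-dimensional vector, alongside the raw continuous columns.
deep_columns = [
    tf.contrib.layers.embedding_column(sex, dimension=8),
    tf.contrib.layers.embedding_column(embarked, dimension=8),
    tf.contrib.layers.embedding_column(cabin, dimension=8),
    tf.contrib.layers.embedding_column(name, dimension=8),
    tf.contrib.layers.embedding_column(ticket, dimension=8),
    age, passenger_id, sib_sp, parch, fare, p_class,
]
```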

We finish off our function by creating our classifier from our deep columns and wide columns.
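A minimal sketch (the model directory and hidden-unit sizes are assumptions for illustration):

```python
def build_estimator(model_dir="/tmp/titanic_model"):  # hypothetical path
    # The wide side is a linear model, the deep side a feed-forward net;
    # the two are trained jointly.
    return tf.contrib.learn.DNNLinearCombinedClassifier(
        model_dir=model_dir,
        linear_feature_columns=wide_columns,
        dnn_feature_columns=deep_columns,
        dnn_hidden_units=[100, 50])  # layer sizes are illustrative
```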

The last thing we have to do before running the network is create mappings for our continuous and categorical columns. What we are doing here, and this is standard throughout the TensorFlow learn code, is creating an input function for our dataframe. This converts our dataframe into something that TensorFlow can manipulate. The benefit is that we can change and tweak how our tensors are being created. If we wanted, we could pass feature columns into .fit, .evaluate, and .predict as individually created columns like we did above with our features, but this is a much cleaner solution.
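A sketch following the standard TensorFlow wide-and-deep tutorial pattern that this article builds on:

```python
import pandas as pd

def input_fn(df):
    # Continuous columns become dense constant tensors.
    continuous_cols = {k: tf.constant(df[k].values)
                       for k in CONTINUOUS_COLUMNS}
    # Categorical columns become SparseTensors of string values, one per
    # row. (The shape argument is named dense_shape in later TF releases.)
    categorical_cols = {
        k: tf.SparseTensor(
            indices=[[i, 0] for i in range(df[k].size)],
            values=df[k].values,
            shape=[df[k].size, 1])
        for k in CATEGORICAL_COLUMNS}
    feature_cols = dict(continuous_cols)
    feature_cols.update(categorical_cols)
    label = tf.constant(df[SURVIVED_COLUMN].values)
    return feature_cols, label
```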

Now, after all this, we can write our training function.

We read in our CSV files, which were preprocessed for simplicity's sake (missing values were imputed, for example). Details on how the files were preprocessed, along with the code, are in the repo.

These CSVs are converted to tensors by passing our input_fn through a lambda. We build our estimator, then we print our predictions and print out our evaluation results.
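Put together as a sketch (the file paths and step count are assumptions, and build_estimator and input_fn are the helpers sketched above):

```python
def train_and_eval():
    # Preprocessed CSVs with missing values already imputed; paths assumed.
    df_train = pd.read_csv("titanic_train.csv")
    df_test = pd.read_csv("titanic_eval.csv")

    m = build_estimator()
    # The lambda defers input_fn so the estimator can invoke it per run.
    m.fit(input_fn=lambda: input_fn(df_train), steps=200)
    results = m.evaluate(input_fn=lambda: input_fn(df_test), steps=1)
    for key in sorted(results):
        print("%s: %s" % (key, results[key]))
```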

Results

Running our code as is gives us reasonably good results without adding any extra columns or doing any great feats of feature engineering, and with very little fine-tuning this model can achieve relatively good results.

Adding an embedding layer alongside a traditional wide linear model allows for accurate predictions by reducing sparse, high-dimensional features down to low dimensions.

Conclusion

This part deviates from traditional deep learning to illustrate the many uses and applications of TensorFlow. This article is heavily based on the paper and code provided by Google for wide and deep learning. The research paper can be found here. Google uses this model as a product recommendation engine for the Google Play store, and it has helped them increase sales on app suggestions. YouTube has also released a paper about their recommendation system using hybrid learning, available here. These models are becoming more prevalent in recommendation systems at various companies and will likely continue to be, thanks to their embedding abilities.