Recommender systems have become a very important part of the retail, social networking, and entertainment industries. Whether they are suggesting songs to try, books to read, or clothes to buy, recommender systems make it much easier for customers to choose.
Why is it so important for customers to have support in decision making? In a well-cited study (over 2,000 citations so far!), Iyengar and Lepper set up two jam-tasting stands on different days: one offered 24 varieties of jam, the other only six. The stand with 24 varieties converted only 3% of customers into a sale, while the stand with six varieties converted 30%, nearly a ten-fold increase in conversion rate!
Given the number of possible choices available, especially for online shopping, having some extra guidance on these choices can really make a difference. Xavier Amatriain, now at Quora and previously at Netflix, gave an absolutely outstanding talk on recommender systems at Carnegie Mellon in 2014. I have included the talk below if you would like to see it.
Some of the key statistics about recommender systems he notes are the following:
- At Netflix, 2/3 of the movies watched are recommended
- At Google, news recommendations improved click-through rate (CTR) by 38%
- At Amazon, 35% of sales come from recommendations
In addition, Hulu reported back in 2011 that incorporating a recommender system tripled their CTR compared with simply recommending the most popular shows. When well implemented, recommender systems can give your company a great edge.
That doesn’t mean your company should necessarily build one, however. Valerie Coffman posted this article back in 2013, explaining that you need a fairly large amount of data on your customers and product purchases in order to have enough information for an effective recommender system. It’s not for everyone, but if you have enough data, it’s a good idea to consider it.
So, let’s assume you do have enough data on your customers and items to go about building one. How would you do it? Well, that depends a lot on several factors, such as:
- The kind of data you have about your users/items
- Your ability to scale
- Recommendation transparency
I will cover a few of the options available and reveal the basic methodology behind each one.
Content Based (Pandora)
In the case of Pandora, the online streaming music company, they decided to engineer features from all of the songs in their catalog as part of the Music Genome Project. Most songs are based on a feature vector of approximately 450 features, which were derived in a very long and arduous process. Once you have this feature set, one technique that works well enough is to treat the recommendation problem as a binary classification problem. This allows one to use more traditional machine learning techniques that output a probability for a certain user to like a specific song based on a training set of their song listening history. Then, simply recommend the songs with the greatest probability of being liked.
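To make the idea concrete, here is a minimal sketch (not Pandora's actual pipeline; the feature matrix, labels, and train/score split are made up for illustration) that treats recommendation for a single user as binary classification and ranks unheard songs by their predicted probability of being liked:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
song_features = rng.normal(size=(500, 450))   # hypothetical: 500 songs, ~450 engineered features each
liked = rng.integers(0, 2, size=400)          # hypothetical like/dislike labels for 400 songs this user has heard

# Train a per-user classifier on the songs the user has already listened to
model = LogisticRegression(max_iter=1000)
model.fit(song_features[:400], liked)

# Score the songs the user has not heard yet and recommend the highest-probability ones
unheard = song_features[400:]
like_prob = model.predict_proba(unheard)[:, 1]
top_ten = np.argsort(like_prob)[::-1][:10]    # indices (into unheard) of the ten best candidates
```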
Most of the time, however, you aren't going to have features already encoded for all of your products. Engineering them would be very difficult, and it took Pandora several years to finish, so it probably isn't a great option for most companies.
Demographic Based (Facebook)
If you have a lot of demographic information about your users like Facebook or LinkedIn does, you may be able to recommend based on similar users and their past behavior. Similar to the content based method, you could derive a feature vector for each of your users and generate models that predict probabilities of liking certain items.
Again, this requires a lot of information about your users that you probably don’t have in most cases.
So if you need a method that doesn’t care about detailed information regarding your items or your users, collaborative filtering is a very powerful method that works with surprising efficacy.
Collaborative Filtering
This is based on the relationship between users and items, with no information about the users or the items themselves required! All you need is some kind of rating for each user/item interaction where one is available. There are two kinds of data available for this type of interaction: explicit and implicit.
- Explicit: a direct score, such as a rating or a like
- Implicit: not as obvious in terms of preference, such as a click, view, or purchase
The most common example discussed is movie ratings, which are given on a numeric scale. We can easily see whether a user enjoyed a movie based on the rating provided. The problem, however, is that most of the time people don't provide ratings at all (I am totally guilty of this on Netflix!), so explicit rating data is often scarce. Netflix at least knows whether I watched something, which requires no further input on my part. It may be, though, that I watched something and didn't end up liking it, so it is harder to infer whether an implicit interaction should count as a positive preference.
Regardless of this disadvantage, implicit feedback is usually the way to go. Hulu, in a blog post about their recommendation system, states:
As the quantity of implicit data at Hulu far outweighs the amount of explicit feedback, our system should be designed primarily to work with implicit feedback data.
Since more data usually means a better model, implicit feedback is where our efforts should be focused. While there are a variety of ways to tackle collaborative filtering with implicit feedback, I will focus on the method Spark's library uses for collaborative filtering: alternating least squares (ALS).
Alternating Least Squares
Before we start building our own recommender system on an example problem, I want to explain some of the intuition behind how this method works and why it is likely the method of choice in Spark's library. We discussed before how collaborative filtering doesn't require any information about the users or items. So is there another way we can figure out how the users and the items relate to each other?
It turns out we can if we apply matrix factorization. Often, matrix factorization is applied in the realm of dimensionality reduction, where we are trying to reduce the number of features while still keeping the relevant information. This is the case with principal component analysis (PCA) and the very similar singular value decomposition (SVD).
Essentially, can we take a large matrix of user/item interactions and figure out the latent (or hidden) features that relate them to each other in a much smaller matrix of user features and item features? That’s exactly what ALS is trying to do through matrix factorization.
Let's assume we have an original ratings matrix $R$ of size $M \times N$, where $M$ is the number of users and $N$ is the number of items. This matrix is quite sparse, since most users only interact with a few items each. We can factorize it into two smaller matrices: one of size $M \times K$ holding a latent feature vector for each user ($U$), and one of size $K \times N$ holding a latent feature vector for each item ($V$). Multiplying these two matrices together approximates the original matrix, but now both factors are dense: every user and every item is described by $K$ latent features.
In order to solve for $U$ and $V$, we could either utilize SVD (which would require inverting a potentially very large matrix and be computationally expensive) to solve the factorization more precisely or apply ALS to approximate it. In the case of ALS, we only need to solve one feature vector at a time, which means it can be run in parallel! (This large advantage is probably why it is the method of choice for Spark). To do this, we can randomly initialize $U$ and solve for $V$. Then we can go back and solve for $U$ using our solution for $V$. Keep iterating back and forth like this until we get a convergence that approximates $R$ as best as we can.
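As a toy illustration of the alternating updates (ordinary regularized least squares on a small dense matrix, not yet the implicit-feedback weighting we will use later), each pass fixes one factor and solves for the other in closed form:

```python
import numpy as np

M, N, K = 20, 30, 5                                   # users, items, latent features
rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(M, N)).astype(float)     # toy ratings matrix
U = rng.normal(size=(M, K))                           # user factors
V = rng.normal(size=(K, N))                           # item factors
lam = 0.1                                             # regularization strength

for _ in range(10):
    # Fix V and solve the regularized least squares problem for U
    U = np.linalg.solve(V @ V.T + lam * np.eye(K), V @ R.T).T
    # Fix U and solve for V
    V = np.linalg.solve(U.T @ U + lam * np.eye(K), U.T @ R)

R_hat = U @ V    # dense approximation of R; entry (u, i) is the predicted rating
```

The weighted, confidence-based version we implement below follows the same alternating pattern but solves a slightly different system for each user and item row.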
After this has been finished, we can simply take the dot product of $U$ and $V$ to see what the predicted rating would be for a specific user/item interaction, even if there was no prior interaction. This basic methodology was adopted for implicit feedback problems in the paper Collaborative Filtering for Implicit Feedback Datasets by Hu, Koren, and Volinsky. We will use this paper’s method on a real dataset and build our own recommender system.
Processing the Data
The data we are using for this example comes from the well-known UCI Machine Learning Repository. The dataset is called "Online Retail" and is found here. As you can see in the description, this dataset contains all purchases made for an online retail company based in the UK during an eight month period.
We need to take all of the transactions for each customer and put these into a format ALS can use. This means we need each unique customer ID in the rows of the matrix, and each unique item ID in the columns of the matrix. The values of the matrix should be the total number of purchases for each item by each customer.
First, let’s load some libraries that will help us out with the preprocessing step. Pandas is always helpful!
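A minimal set of imports that covers the preprocessing and modeling code below (assuming pandas, numpy, and scipy are installed):

```python
import pandas as pd
import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import spsolve
import random
```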
The first step is to load the data in. Since the data is saved in an Excel file, we can use Pandas to load it.
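A sketch of the loading step, assuming the spreadsheet has been downloaded locally as Online Retail.xlsx (both the path and the variable name retail_data are assumptions used in the sketches below):

```python
retail_data = pd.read_excel('Online Retail.xlsx')  # hypothetical local path to the UCI "Online Retail" file
retail_data.head()
```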
Now that the data has been loaded, we can see what is in it.
| | InvoiceNo | StockCode | Description | Quantity | InvoiceDate | UnitPrice | CustomerID | Country |
|---|---|---|---|---|---|---|---|---|
| 0 | 536365 | 85123A | WHITE HANGING HEART T-LIGHT HOLDER | 6 | 2010-12-01 08:26:00 | 2.55 | 17850.0 | United Kingdom |
| 1 | 536365 | 71053 | WHITE METAL LANTERN | 6 | 2010-12-01 08:26:00 | 3.39 | 17850.0 | United Kingdom |
| 2 | 536365 | 84406B | CREAM CUPID HEARTS COAT HANGER | 8 | 2010-12-01 08:26:00 | 2.75 | 17850.0 | United Kingdom |
| 3 | 536365 | 84029G | KNITTED UNION FLAG HOT WATER BOTTLE | 6 | 2010-12-01 08:26:00 | 3.39 | 17850.0 | United Kingdom |
| 4 | 536365 | 84029E | RED WOOLLY HOTTIE WHITE HEART. | 6 | 2010-12-01 08:26:00 | 3.39 | 17850.0 | United Kingdom |
The dataset includes the invoice number for different purchases, along with the StockCode (or item ID), an item description, the number purchased, the date of purchase, the price of the items, a customer ID, and the country of origin for the customer.
Let’s check to see if there are any missing values in the data.
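One quick way to check (a sketch; the original may have used .info() instead) is to count the nulls per column:

```python
retail_data.isnull().sum()  # Count the missing values in each column
```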
Most columns have no missing values, but Customer ID is missing in several rows. If the customer ID is missing, we don't know who bought the item, so we should drop these rows from the data first. We can use pd.isnull to test for rows with missing data and keep only the rows that have a customer ID.
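A sketch of that filtering step (assuming the raw frame is named retail_data; the result is named cleaned_retail to match the code that follows):

```python
cleaned_retail = retail_data.loc[pd.isnull(retail_data.CustomerID) == False]  # Keep only rows with a CustomerID
cleaned_retail.info()
```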
```
<class 'pandas.core.frame.DataFrame'>
Int64Index: 406829 entries, 0 to 541908
Data columns (total 8 columns):
InvoiceNo      406829 non-null object
StockCode      406829 non-null object
Description    406829 non-null object
Quantity       406829 non-null int64
InvoiceDate    406829 non-null datetime64[ns]
UnitPrice      406829 non-null float64
CustomerID     406829 non-null float64
Country        406829 non-null object
dtypes: datetime64[ns](1), float64(2), int64(1), object(4)
memory usage: 27.9+ MB
```
We will also want a lookup table that maps each StockCode to its item description, so we can interpret the recommendations later:

```python
item_lookup = cleaned_retail[['StockCode', 'Description']].drop_duplicates()  # Only get unique item/description pairs
item_lookup['StockCode'] = item_lookup.StockCode.astype(str)  # Encode as strings for future lookup ease
```
Next, keep only the columns we need, group the purchases by customer and item, and drop anything that nets out negative (a net quantity of zero is bumped to one so the interaction still counts as a purchase):

```python
cleaned_retail['CustomerID'] = cleaned_retail.CustomerID.astype(int)  # Convert customer ID to int
cleaned_retail = cleaned_retail[['StockCode', 'Quantity', 'CustomerID']]  # Get rid of unnecessary info
grouped_cleaned = cleaned_retail.groupby(['CustomerID', 'StockCode']).sum().reset_index()  # Group together
grouped_cleaned.Quantity.loc[grouped_cleaned.Quantity == 0] = 1  # Replace a sum of zero purchases with a one to indicate purchased
grouped_purchased = grouped_cleaned.query('Quantity > 0')  # Only get customers where purchase totals were positive
```
A quick look at the grouped data:

```python
grouped_purchased.head()
```
Now we can build the sparse ratings matrix of customers and items (note that on newer versions of pandas, astype('category', categories=...) has been removed and you would use pd.Categorical with an explicit category list instead):

```python
customers = list(np.sort(grouped_purchased.CustomerID.unique()))  # Get our unique customers
products = list(grouped_purchased.StockCode.unique())  # Get our unique products that were purchased
quantity = list(grouped_purchased.Quantity)  # All of our purchases

rows = grouped_purchased.CustomerID.astype('category', categories = customers).cat.codes  # Get the associated row indices
cols = grouped_purchased.StockCode.astype('category', categories = products).cat.codes  # Get the associated column indices
purchases_sparse = sparse.csr_matrix((quantity, (rows, cols)), shape=(len(customers), len(products)))
```
The resulting matrix holds 4338 customers and 3664 items:

```
<4338x3664 sparse matrix of type '<class 'numpy.int64'>'
    with 266723 stored elements in Compressed Sparse Row format>
```
How sparse is this matrix?

```python
matrix_size = purchases_sparse.shape[0] * purchases_sparse.shape[1]  # Number of possible interactions in the matrix
num_purchases = len(purchases_sparse.nonzero()[0])  # Number of items interacted with
sparsity = 100 * (1 - (num_purchases / matrix_size))
sparsity
```

Only 266,723 of the roughly 15.9 million possible customer/item interactions are filled in, so the matrix is about 98.3% sparse.
To evaluate the system later, we need a training set where a random percentage of the purchase interactions has been masked (set back to zero), along with a test set that keeps every original interaction:

```python
def make_train(ratings, pct_test = 0.2):
    '''
    This function will take in the original user-item matrix and "mask" a percentage of the original ratings where a
    user-item interaction has taken place for use as a test set. The test set will contain all of the original ratings,
    while the training set replaces the specified percentage of them with a zero in the original ratings matrix.

    parameters:

    ratings - the original ratings matrix from which you want to generate a train/test set. Test is just a complete
    copy of the original set. This is in the form of a sparse csr_matrix.

    pct_test - The percentage of user-item interactions where an interaction took place that you want to mask in the
    training set for later comparison to the test set, which contains all of the original ratings.

    returns:

    training_set - The altered version of the original data with a certain percentage of the user-item pairs
    that originally had interaction set back to zero.

    test_set - A copy of the original ratings matrix, unaltered, so it can be used to see how the rank order
    compares with the actual interactions.

    user_inds - From the randomly selected user-item indices, which user rows were altered in the training data.
    This will be necessary later when evaluating the performance via AUC.
    '''
    test_set = ratings.copy()  # Make a copy of the original set to be the test set.
    test_set[test_set != 0] = 1  # Store the test set as a binary preference matrix
    training_set = ratings.copy()  # Make a copy of the original data we can alter as our training set.
    nonzero_inds = training_set.nonzero()  # Find the indices in the ratings data where an interaction exists
    nonzero_pairs = list(zip(nonzero_inds[0], nonzero_inds[1]))  # Zip these pairs together of user,item index into list
    random.seed(0)  # Set the random seed to zero for reproducibility
    num_samples = int(np.ceil(pct_test * len(nonzero_pairs)))  # Round the number of samples needed to the nearest integer
    samples = random.sample(nonzero_pairs, num_samples)  # Sample a random number of user-item pairs without replacement
    user_inds = [index[0] for index in samples]  # Get the user row indices
    item_inds = [index[1] for index in samples]  # Get the item column indices
    training_set[user_inds, item_inds] = 0  # Assign all of the randomly chosen user-item pairs to zero
    training_set.eliminate_zeros()  # Get rid of zeros in sparse array storage after update to save space
    return training_set, test_set, list(set(user_inds))  # Output the unique list of user rows that were altered
```
Now create the training and test sets, masking 20% of the purchases:

```python
product_train, product_test, product_users_altered = make_train(purchases_sparse, pct_test = 0.2)
```
With the data split, we can implement the weighted ALS algorithm from the Hu, Koren, and Volinsky paper:

```python
def implicit_weighted_ALS(training_set, lambda_val = 0.1, alpha = 40, iterations = 10, rank_size = 20, seed = 0):
    '''
    Implicit weighted ALS taken from Hu, Koren, and Volinsky 2008. Designed for alternating least squares and implicit
    feedback based collaborative filtering.

    parameters:

    training_set - Our matrix of ratings with shape m x n, where m is the number of users and n is the number of items.
    Should be a sparse csr matrix to save space.

    lambda_val - Used for regularization during alternating least squares. Increasing this value may increase bias
    but decrease variance. Default is 0.1.

    alpha - The parameter associated with the confidence matrix discussed in the paper, where Cui = 1 + alpha*Rui.
    The paper found a default of 40 most effective. Decreasing this will decrease the variability in confidence
    between various ratings.

    iterations - The number of times to alternate between both user feature vector and item feature vector in
    alternating least squares. More iterations will allow better convergence at the cost of increased computation.
    The authors found 10 iterations was sufficient, but more may be required to converge.

    rank_size - The number of latent features in the user/item feature vectors. The paper recommends varying this
    between 20-200. Increasing the number of features may overfit but could reduce bias.

    seed - Set the seed for reproducible results

    returns:

    The feature vectors for users and items. The dot product of these feature vectors should give you the expected
    "rating" at each point in your original matrix.
    '''
    # first set up our confidence matrix
    conf = (alpha * training_set)  # To allow the matrix to stay sparse, I will add one later when each row is taken
                                   # and converted to dense.
    num_user = conf.shape[0]
    num_item = conf.shape[1]  # Get the size of our original ratings matrix, m x n

    # initialize our X/Y feature vectors randomly with a set seed
    rstate = np.random.RandomState(seed)
    X = sparse.csr_matrix(rstate.normal(size = (num_user, rank_size)))  # Random numbers in a m x rank shape
    Y = sparse.csr_matrix(rstate.normal(size = (num_item, rank_size)))  # Normally this would be rank x n but we can
                                                                        # transpose at the end. Makes calculation simpler.
    X_eye = sparse.eye(num_user)
    Y_eye = sparse.eye(num_item)
    lambda_eye = lambda_val * sparse.eye(rank_size)  # Our regularization term lambda*I.
                                                     # We can compute this before iteration starts.

    # Begin iterations
    for iter_step in range(iterations):  # Iterate back and forth between solving X given fixed Y and vice versa
        # Compute yTy and xTx at beginning of each iteration to save computing time
        yTy = Y.T.dot(Y)
        xTx = X.T.dot(X)
        # Begin iteration to solve for X based on fixed Y
        for u in range(num_user):
            conf_samp = conf[u,:].toarray()  # Grab user row from confidence matrix and convert to dense
            pref = conf_samp.copy()
            pref[pref != 0] = 1  # Create binarized preference vector
            CuI = sparse.diags(conf_samp, [0])  # Get Cu - I term, don't need to subtract 1 since we never added it
            yTCuIY = Y.T.dot(CuI).dot(Y)  # This is the yT(Cu-I)Y term
            yTCupu = Y.T.dot(CuI + Y_eye).dot(pref.T)  # This is the yTCuPu term, where we add the eye back in:
                                                       # Cu - I + I = Cu
            X[u] = spsolve(yTy + yTCuIY + lambda_eye, yTCupu)
            # Solve for Xu = ((yTy + yT(Cu-I)Y + lambda*I)^-1)yTCuPu, equation 4 from the paper
        # Begin iteration to solve for Y based on fixed X
        for i in range(num_item):
            conf_samp = conf[:,i].T.toarray()  # transpose to get it in row format and convert to dense
            pref = conf_samp.copy()
            pref[pref != 0] = 1  # Create binarized preference vector
            CiI = sparse.diags(conf_samp, [0])  # Get Ci - I term, don't need to subtract 1 since we never added it
            xTCiIX = X.T.dot(CiI).dot(X)  # This is the xT(Ci-I)X term
            xTCiPi = X.T.dot(CiI + X_eye).dot(pref.T)  # This is the xTCiPi term
            Y[i] = spsolve(xTx + xTCiIX + lambda_eye, xTCiPi)
            # Solve for Yi = ((xTx + xT(Ci-I)X + lambda*I)^-1)xTCiPi, equation 5 from the paper
    # End iterations
    return X, Y.T  # Transpose at the end to make up for not being transposed at the beginning.
                   # Y needs to be rank x n. Keep these as separate matrices for scale reasons.
```
Let's run a single iteration to see how it does:

```python
user_vecs, item_vecs = implicit_weighted_ALS(product_train, lambda_val = 0.1, alpha = 15, iterations = 1, rank_size = 20)
```
```python
user_vecs[0,:].dot(item_vecs).toarray()[0,:5]
```
These are the predicted scores for the first five items (out of the 3664 in our stock) for the first user in our matrix; of those first five, the fifth item gets the strongest recommendation. However, notice we only ran one iteration because our algorithm is so slow! According to the authors, you should iterate at least ten times so that $U$ and $V$ converge. We could wait 15 minutes for that to run, or . . . use someone else's code that is much faster!
Speeding Up ALS
This code in its raw form is just too slow. We have to do a lot of looping, and we haven't taken advantage of the fact that our algorithm is embarrassingly parallel: each of the user and item vector updates can be solved independently. Fortunately, as I was still finishing this up, Ben Frederickson at Flipboard had perfect timing and came out with a version of ALS for Python utilizing Cython and parallelizing the code among threads. You can read his blog post about using it to find similar music artists with matrix factorization here and find his implicit library here. He claims it is even faster than Quora's C++ QMF, but I haven't tried theirs. All I can tell you is that it is over 1000 times faster than this bare-bones pure Python version when I tested it. Install this library before you continue and follow the instructions. If you have conda installed, just do pip install implicit and you should be good to go.
First, import his library so we can utilize it for our matrix factorization.
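Assuming the package installed above imports as implicit:

```python
import implicit
```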
His version of the code doesn’t have a parameter for the weighting $\alpha$ and assumes you are doing this to the ratings matrix before using it as an input. I did some testing and found the following settings to work the best. Also make sure that we set the type of our matrix to double for the ALS function to run properly.
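Here is a sketch of that step using the library's original top-level helper (newer releases expose a class-based implicit.als.AlternatingLeastSquares API instead, and the expected matrix orientation has changed between versions, so check the documentation for the version you install). The alpha of 15, the 20 factors, and the regularization of 0.1 mirror the values used earlier; the iteration count here is an assumption:

```python
alpha = 15  # confidence weighting, applied to the ratings matrix before fitting
user_vecs, item_vecs = implicit.alternating_least_squares((product_train * alpha).astype('double'),
                                                          factors = 20,
                                                          regularization = 0.1,
                                                          iterations = 50)
```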
Much faster, right? We now have recommendations for all of our users and items. However, how do we know if these are any good?
Evaluating the Recommender System
Remember that our training set had 20% of the purchases masked? This will allow us to evaluate the performance of our recommender system. Essentially, we need to see if the order of recommendations given for each user matches the items they ended up purchasing. A commonly used metric for this kind of problem is the area under the Receiver Operating Characteristic (or ROC) curve. A greater area under the curve means we are recommending items that end up being purchased near the top of the list of recommended items. Usually this metric is used in more typical binary classification problems to identify how well a model can predict a positive example vs. a negative one. It will also work well for our purposes of ranking recommendations.
In order to do that, we need to write a function that can calculate a mean area under the curve (AUC) for any user that had at least one masked item. As a benchmark, we will also calculate what the mean AUC would have been if we had simply recommended the most popular items. Popularity tends to be hard to beat in most recommender system problems, so it makes a good comparison.
First, let’s make a simple function that can calculate our AUC. Scikit-learn has one we can alter a bit.
```python
from sklearn import metrics  # needed for roc_curve and auc below

def auc_score(predictions, test):
    '''
    This simple function will output the area under the curve using sklearn's metrics.

    parameters:

    - predictions: your prediction output

    - test: the actual target result you are comparing to

    returns:

    - AUC (area under the Receiver Operating Characteristic curve)
    '''
    fpr, tpr, thresholds = metrics.roc_curve(test, predictions)
    return metrics.auc(fpr, tpr)
```
```python
def calc_mean_auc(training_set, altered_users, predictions, test_set):
    '''
    This function will calculate the mean AUC by user for any user that had their user-item matrix altered.

    parameters:

    training_set - The training set resulting from make_train, where a certain percentage of the original
    user/item interactions are reset to zero to hide them from the model

    predictions - The matrix of your predicted ratings for each user/item pair as output from the implicit MF.
    These should be stored in a list, with user vectors as item zero and item vectors as item one.

    altered_users - The indices of the users where at least one user/item pair was altered from make_train function

    test_set - The test set constructed earlier from make_train function

    returns:

    The mean AUC (area under the Receiver Operating Characteristic curve) of the test set, computed only on user-item
    interactions that were originally zero to test ranking ability, along with the most popular items as a benchmark.
    '''
    store_auc = []  # An empty list to store the AUC for each user that had an item removed from the training set
    popularity_auc = []  # To store popular AUC scores
    pop_items = np.array(test_set.sum(axis = 0)).reshape(-1)  # Get sum of item interactions to find most popular
    item_vecs = predictions[1]
    for user in altered_users:  # Iterate through each user that had an item altered
        training_row = training_set[user,:].toarray().reshape(-1)  # Get the training set row
        zero_inds = np.where(training_row == 0)  # Find where the interaction had not yet occurred

        # Get the predicted values based on our user/item vectors
        user_vec = predictions[0][user,:]
        pred = user_vec.dot(item_vecs).toarray()[0,zero_inds].reshape(-1)  # Get only the items that were originally zero
        # Select all ratings from the MF prediction for this user that originally had no interaction

        actual = test_set[user,:].toarray()[0,zero_inds].reshape(-1)
        # Select the binarized yes/no interaction pairs from the original full data
        # that align with the same pairs in training

        pop = pop_items[zero_inds]  # Get the item popularity for our chosen items
        store_auc.append(auc_score(pred, actual))  # Calculate AUC for the given user and store
        popularity_auc.append(auc_score(pop, actual))  # Calculate AUC using most popular and store
    # End users iteration

    return float('%.3f'%np.mean(store_auc)), float('%.3f'%np.mean(popularity_auc))
    # Return the mean AUC rounded to three decimal places for both test and popularity benchmark
```
Now we can compute the mean AUC for our recommender system and for the popularity benchmark:

```python
calc_mean_auc(product_train, product_users_altered,
              [sparse.csr_matrix(user_vecs), sparse.csr_matrix(item_vecs.T)], product_test)
# Returns (mean AUC for our recommender system, mean AUC for the popularity benchmark)
```
To look at recommendations for individual customers, first store the customer and product IDs that index the rows and columns of our ratings matrix:

```python
customers_arr = np.array(customers)  # Array of customer IDs from the ratings matrix
products_arr = np.array(products)  # Array of product IDs from the ratings matrix
```
This helper returns the items a particular customer has already purchased in the training set:

```python
def get_items_purchased(customer_id, mf_train, customers_list, products_list, item_lookup):
    '''
    This just tells me which items have been already purchased by a specific user in the training set.

    parameters:

    customer_id - Input the customer's id number that you want to see prior purchases of at least once

    mf_train - The initial ratings training set used (without weights applied)

    customers_list - The array of customers used in the ratings matrix

    products_list - The array of products used in the ratings matrix

    item_lookup - A simple pandas dataframe of the unique product ID/product descriptions available

    returns:

    A list of item IDs and item descriptions for a particular customer that were already purchased in the training set
    '''
    cust_ind = np.where(customers_list == customer_id)[0][0]  # Returns the index row of our customer id
    purchased_ind = mf_train[cust_ind,:].nonzero()[1]  # Get column indices of purchased items
    prod_codes = products_list[purchased_ind]  # Get the stock codes for our purchased items
    return item_lookup.loc[item_lookup.StockCode.isin(prod_codes)]
```
The first few customer IDs in the ratings matrix:

```
array([12346, 12347, 12348, 12349, 12350])
```
Let's see what customer 12346 purchased:

```python
get_items_purchased(12346, product_train, customers_arr, products_arr, item_lookup)
```

This customer has purchased a medium-sized ceramic storage jar.
We will also need a scaler from scikit-learn for the recommendation function below:

```python
from sklearn.preprocessing import MinMaxScaler
```
Finally, we need a function that retrieves the $N$ highest-ranking dot products between our user and item vectors for a particular user, while making sure items already purchased are never recommended. For now, let's use a default of 10 items and see what the recommender system picks for our customer.
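The helper sketched below shows one way this could be implemented (the name rec_items, its signature, and the use of MinMaxScaler to rescale the raw scores are assumptions rather than the exact original): it computes the dot products for one customer, rescales them to [0, 1], zeroes out anything already purchased, and returns the StockCode and Description of the top num_items remaining items.

```python
def rec_items(customer_id, mf_train, user_vecs, item_vecs, customers_list, products_list, item_lookup, num_items = 10):
    '''
    Sketch of a recommendation helper: return the top num_items recommended items for a customer,
    excluding anything they already purchased in the training set.
    '''
    cust_ind = np.where(customers_list == customer_id)[0][0]  # Index of this customer in the ratings matrix
    pref_vec = mf_train[cust_ind, :].toarray().reshape(-1)  # This customer's purchases in the training set
    pref_vec[pref_vec > 0] = 1  # Binarize: 1 if purchased, 0 otherwise

    rec_vector = user_vecs[cust_ind, :].dot(item_vecs.T)  # Predicted score for every item
    rec_vector_scaled = MinMaxScaler().fit_transform(rec_vector.reshape(-1, 1))[:, 0]  # Rescale to [0, 1]
    recommend_vector = rec_vector_scaled * (1 - pref_vec)  # Zero out items already purchased

    product_idx = np.argsort(recommend_vector)[::-1][:num_items]  # Indices of the highest-scoring remaining items
    rec_list = [[products_list[index],
                 item_lookup.Description.loc[item_lookup.StockCode == products_list[index]].iloc[0]]
                for index in product_idx]
    return pd.DataFrame(rec_list, columns = ['StockCode', 'Description'])
```

With a helper like this, the table below could be produced by a call along the lines of rec_items(12346, product_train, user_vecs, item_vecs, customers_arr, products_arr, item_lookup, num_items = 10).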
| | StockCode | Description |
|---|---|---|
| 0 | 23167 | SMALL CERAMIC TOP STORAGE JAR |
| 1 | 23165 | LARGE CERAMIC TOP STORAGE JAR |
| 2 | 22963 | JAM JAR WITH GREEN LID |
| 3 | 23294 | SET OF 6 SNACK LOAF BAKING CASES |
| 4 | 22980 | PANTRY SCRUBBING BRUSH |
| 5 | 23296 | SET OF 6 TEA TIME BAKING CASES |
| 6 | 23293 | SET OF 12 FAIRY CAKE BAKING CASES |
| 7 | 22978 | PANTRY ROLLING PIN |
| 8 | 23295 | SET OF 12 MINI LOAF BAKING CASES |
| 9 | 22962 | JAM JAR WITH PINK LID |
These recommendations seem quite good! Remember that the recommendation system has no real understanding of what a ceramic jar is. All it knows is the purchase history. It identified that people purchasing a medium-sized jar may also want jars of other sizes, and it also suggests jam jars, baking cases, and other kitchen pantry items that fit the same shopping pattern. I personally was blown away by how well the system seems to pick up on these sorts of patterns. Let's try another user that hasn't made a large number of purchases.
| | StockCode | Description |
|---|---|---|
| 2148 | 37446 | MINI CAKE STAND WITH HANGING CAKES |
| 2149 | 37449 | CERAMIC CAKE STAND + HANGING CAKES |
| 4859 | 37450 | CERAMIC CAKE BOWL + HANGING CAKES |
| 5108 | 22890 | NOVELTY BISCUITS CAKE STAND 3 TIER |
This person seems like they want to make cakes. What kind of items does the recommender system think they would be interested in?
| | StockCode | Description |
|---|---|---|
| 0 | 22645 | CERAMIC HEART FAIRY CAKE MONEY BANK |
| 1 | 22055 | MINI CAKE STAND HANGING STRAWBERY |
| 2 | 22644 | CERAMIC CHERRY CAKE MONEY BANK |
| 3 | 37447 | CERAMIC CAKE DESIGN SPOTTED PLATE |
| 4 | 37448 | CERAMIC CAKE DESIGN SPOTTED MUG |
| 5 | 22059 | CERAMIC STRAWBERRY DESIGN MUG |
| 6 | 22063 | CERAMIC BOWL WITH STRAWBERRY DESIGN |
| 7 | 22649 | STRAWBERRY FAIRY CAKE TEAPOT |
| 8 | 22057 | CERAMIC PLATE STRAWBERRY DESIGN |
| 9 | 22646 | CERAMIC STRAWBERRY CAKE MONEY BANK |
It certainly picked up on the cake theme along with ceramic items. Again, these recommendations seem very impressive given the system doesn’t understand the content behind the recommendations. Let’s try one more.
| | StockCode | Description |
|---|---|---|
| 34 | 22326 | ROUND SNACK BOXES SET OF4 WOODLAND |
| 35 | 22629 | SPACEBOY LUNCH BOX |
| 37 | 22631 | CIRCUS PARADE LUNCH BOX |
| 93 | 20725 | LUNCH BAG RED RETROSPOT |
| 369 | 22382 | LUNCH BAG SPACEBOY DESIGN |
| 547 | 22328 | ROUND SNACK BOXES SET OF 4 FRUITS |
| 549 | 22630 | DOLLY GIRL LUNCH BOX |
| 1241 | 22555 | PLASTERS IN TIN STRONGMAN |
| 58132 | 20725 | LUNCH BAG RED SPOTTY |
This customer seems like they are buying products suitable for lunch time. What other items does the recommender system think they might like?
| | StockCode | Description |
|---|---|---|
| 0 | 22662 | LUNCH BAG DOLLY GIRL DESIGN |
| 1 | 20726 | LUNCH BAG WOODLAND |
| 2 | 20719 | WOODLAND CHARLOTTE BAG |
| 3 | 22383 | LUNCH BAG SUKI DESIGN |
| 4 | 20728 | LUNCH BAG CARS BLUE |
| 5 | 23209 | LUNCH BAG DOILEY PATTERN |
| 6 | 22661 | CHARLOTTE BAG DOLLY GIRL DESIGN |
| 7 | 20724 | RED RETROSPOT CHARLOTTE BAG |
| 8 | 23206 | LUNCH BAG APPLE DESIGN |
| 9 | 22384 | LUNCH BAG PINK POLKADOT |
Once again, the recommender system comes through! Definitely a lot of bags and lunch related items in this recommendation list. Feel free to play around with the recommendations for other users and see what the system came up with!
Summary
In this post, we learned how to build a recommender system for implicit feedback data, how to generate recommendations from it, and how to evaluate the system against a popularity benchmark.
In real life, if your ratings matrix won't comfortably fit on a single machine, utilizing the implementation in Spark is going to be more practical. If you are interested in taking recommender systems to the next level, a hybrid system that incorporates information about your users/items along with the purchase history would be best. A Python library called LightFM from Maciej Kula at Lyst looks very interesting for this sort of application. You can find it here.
Last, there are several other advanced methods you can incorporate in recommender systems to get a bump in performance. Part 2 of Xavier Amatriain’s lecture would be a great place to start.
If you are more interested in recommender systems with explicit feedback (such as with movie reviews) there are a couple of great posts that cover this in detail:
- Alternating Least Squares Method for Collaborative Filtering by Bugra Akyildiz
- Explicit Matrix Factorization: ALS, SGD, and All That Jazz by Ethan Rosenthal
If you are looking for great datasets to try a recommendation system out on for yourself, I found this gist helpful. Some of the links don’t work anymore but it’s a great place to start looking for data to try a system out on your own!
If you would like the Jupyter Notebook for this blog post, you can find it here.