Help! I can’t reproduce a machine learning project!

Have you ever sat down with the code and data for an existing machine learning project, trained the same model, checked your results… and found that they were different from the original results?


Not being able to reproduce someone else’s results is super frustrating. Not being able to reproduce your own results is frustrating *and* embarrassing. And tracking down the exact reason that you aren’t able to reproduce results can take ages; it took me a solid week to reproduce this NLP paper, even with the original authors’ exact code and data.

But there’s good news. Reproducibility breaks down in three main places: the code, the data and the environment. I’ve put together this guide to help you narrow down where your reproducibility problems are, so you can focus on fixing them. Let’s go through the three potential offenders one by one, talk about what kinds of problems arise and then see how to fix them.

Non-deterministic code

I’ve called this section “non-deterministic code” rather than “differences in code” because in a lot of machine learning or statistical applications you can end up with completely different results from the same code. This is because many machine learning and statistical algorithms are non-deterministic: randomness is an essential part of how they work.

If you come from a more traditional computer science or software engineering background, this can be pretty surprising: imagine if a sorting algorithm intentionally returned inputs in a slightly different order every time you ran it! In machine learning and statistical computing, however, randomness shows up in many places, including:

  • Bagging, where you train many small models on different randomly-sampled subsets of your dataset; and boosting, where only the first data subset is completely random and the rest are dependent on it

  • Initializing weights in a neural network, where the initial weights are sampled from a specific distribution (generally using the method proposed by He et al., 2015 for networks using ReLU activations)

  • Splitting data into testing, training and validation subsets

  • Any methods that rely on sampling, like Markov chain Monte Carlo-based methods used for Bayesian inference or Gaussian mixture models

  • Pretty much any method that talks about “random”, “stochastic”, “sample”, “normal” or “distribution” somewhere in the documentation is likely to have some random element in it

Randomness is your friend when it comes to things like escaping local minima, but it can throw a wrench in the works when you’re trying to reproduce those same results later. **To make machine learning results reproducible, you need to make sure that the random elements are the same every time you run your code.** In other words, you need to make sure you “randomly” generate the same set of random numbers. You can do this by setting every random seed that your code relies on.


Random seed: The number used to initialize a pseudorandom number generator, which determines the sequence of numbers it produces. If the same generator is given the same seed, it will generate the same sequence every time it restarts.
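To see what that means in practice, here’s a tiny sketch using Python’s built-in random module: re-seeding the generator replays exactly the same “random” numbers.

```python
import random

random.seed(123)
first = [random.random() for _ in range(3)]

random.seed(123)  # restart the generator with the same seed...
second = [random.random() for _ in range(3)]

assert first == second  # ...and you get exactly the same sequence
```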


Unfortunately, if you’re working from a project where the random seed was never set in the first place, you probably won’t be able to get those same results again. In that case, your best bet is to retrain a new model that does have a seed set. Here’s a quick primer on how to do that:


In R: Most packages depend on the global random seed, which you can set using set.seed(). The exceptions are packages that are actually wrappers around software written in other languages, like XGBoost or some of the packages that rely heavily on Rcpp.


In Python: You can set the global random seed in Python using seed() from the random module. Unfortunately, most packages in the Python data science ecosystem have their own internal random seed. My best piece of advice is to quickly search the documentation for each package you’re using and see if it has a function for setting the random seed. (I’ve compiled a list of the methods for some of the most common packages in the “Controlling randomness” section of this notebook.)
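As a minimal sketch of what that looks like (NumPy and scikit-learn are just my example packages here, not an exhaustive list):

```python
import random

import numpy as np
from sklearn.ensemble import RandomForestClassifier

SEED = 42

random.seed(SEED)     # Python's built-in generator
np.random.seed(SEED)  # NumPy's global generator, used by pandas and others

# Many scikit-learn estimators also accept an explicit random_state,
# which is safer than relying on a global seed
model = RandomForestClassifier(n_estimators=100, random_state=SEED)
```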


Differences in data

Another thing that can potentially break reproducibility is differences in the data. While this happens less often, it can be especially difficult to pin down. (This is one of the reasons Kaggle datasets have versioning and why we’ve recently added notes on what’s changed between versions. This dataset is a good example: scroll down to the “History” section.)


You might not be lucky enough to be working from a versioned dataset, however. If you have access to the original data files, there are some ways you can check to make sure that you’re working with the same data the original project used:


  • You can use cmp in Bash to make sure that all the bytes in two files are exactly the same.

  • You can hash the files and then compare the hashes. You can do this with the hashlib library in Python, or in R with either the UNF package (for tabular data) or the md5sum() function. (Do note that there’s a small chance that two different files might produce the same hash.)
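Here’s a minimal Python sketch of the hashing approach; the file paths below are placeholders for your own original and working copies.

```python
import hashlib

def file_md5(path, chunk_size=8192):
    """Compute the MD5 hash of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

# Placeholder paths: compare the original data against your working copy
if file_md5("original/train.csv") == file_md5("my_copy/train.csv"):
    print("Hashes match: almost certainly the same data")
else:
    print("Hashes differ: the files are not identical")
```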


Another thing that can be helpful is knowing what can introduce differences in your data files. By far **the biggest culprit here is opening data in word processing or spreadsheet software.** I’ve personally been bitten in the butt by this one more than once. A lot of the nice, helpful changes these programs make to improve the experience of working with data files can be just enough to break reproducibility. Here are two of the biggest sneaky problem areas.


Automatic date formatting: This is actually a huge problem for scientific researchers. One study found that gene names had been automatically converted to dates in one-fifth of published genomics papers. In addition to changing non-dates into dates like that, the format of your dates will sometimes be edited to match your computer’s locale, like 6/4/2018 being changed to 4/6/2018.
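One way to sidestep the ambiguity is to read everything as plain text and parse dates with an explicit format. A sketch using pandas; the file and column names here are made up for illustration:

```python
import pandas as pd

# Read everything as strings first so nothing gets silently converted
df = pd.read_csv("measurements.csv", dtype=str)  # placeholder file name

# Parse dates with an explicit format, so 6/4/2018 can never be
# reinterpreted as April 6 vs. June 4 depending on locale
df["collected_on"] = pd.to_datetime(df["collected_on"], format="%d/%m/%Y")
```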


Character encodings: This is an especially sneaky one because a lot of text editors will open files with different character encodings without any problems… but then save them using whatever your system’s default character encoding is. That means that your text might not look any different in the editor, but the underlying bytes have been completely changed. Of course, if you always use UTF-8 this generally isn’t a problem, but that’s not always an option.
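The safest habit in Python is to always pass an explicit encoding when reading and writing text, rather than relying on the system default. A small sketch (the file names and the Latin-1 assumption are illustrative):

```python
# Read with the encoding the file was actually written in (assumed
# here to be Latin-1), rather than whatever the system default is
with open("data.csv", encoding="latin-1") as f:
    text = f.read()

# Write an explicit UTF-8 copy so the bytes are predictable from now on
with open("data_utf8.csv", "w", encoding="utf-8") as f:
    f.write(text)
```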


Because of these problems, I strongly recommend that you don’t check or edit your data files in word processors or spreadsheet software. Or, if you do, do it with a second copy of your data that you can discard later. I tend to use a text editor to check out datasets instead. (I like Gedit and Notepad++, but I know better than to wade into an argument about which text editor is best. 😉) If you’re comfortable working in the command line, you can also check your data there.


Differences in environments

So you’ve triple-double-checked your code and data, and you’re 100% sure that they aren’t accounting for the differences between your runs. What’s left? The answer is the computational environment: everything needed to run the code, including which packages and package versions were used and, if your code references them, file directory structures.


Getting a lot of “File not found” errors? They’re probably showing up because you’re trying to run code that references a specific directory structure in a directory with a different structure. Problems related to file directory structures are pretty easy to fix: just make sure you’re using relative rather than absolute paths and set your working directory accordingly. (If you’re unsure what that means, this notebook goes into more detail.) You may have to go back and redo some things by hand, but it’s a pretty easy fix once you know what you’re looking for.
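For example, in Python you can build paths relative to the script’s own location instead of hard-coding absolute ones. A sketch, assuming a hypothetical project layout with a data/ folder next to the script:

```python
import pandas as pd
from pathlib import Path

# Resolve paths relative to this script's location, rather than
# hard-coding something like /home/alice/project/data/train.csv
PROJECT_ROOT = Path(__file__).resolve().parent

# "data/train.csv" is a placeholder for your own project layout
df = pd.read_csv(PROJECT_ROOT / "data" / "train.csv")
```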


It’s much harder to fix problems that show up because of dependency mismatches. If you’re trying to reproduce your own work, hopefully you still have access to the environment you originally ran your code in. If so, check out the “Computing environment” section of this notebook to learn how you can quickly get information on which packages, and which versions of them, you used to run your code. You can then use that list to make sure you’re using the same package versions in your current environment.
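If you’re in Python, one quick way to capture that list is to record every installed package pinned to its exact version, similar in spirit to running pip freeze. A minimal sketch:

```python
import importlib.metadata

# Write out every installed package pinned to its exact version,
# so the environment can be recreated later with `pip install -r`
with open("requirements.txt", "w") as f:
    for dist in importlib.metadata.distributions():
        f.write(f"{dist.metadata['Name']}=={dist.version}\n")
```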


Pro tip: Make sure you check the language version too! While major version changes will definitely break reproducibility (looking at you, Python 2 vs. Python 3), even minor version updates can introduce problems. In particular, differences in the minor version can make it difficult or impossible to load serialized data formats, like pickles in Python or .RData files in R.
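One cheap safeguard is to record the interpreter version alongside anything you serialize, so future you can spot a mismatch before debugging mysterious load errors. A sketch (the artifact contents are just an example):

```python
import pickle
import sys

# Store the Python version next to the object being pickled
artifact = {"python_version": sys.version, "data": [1, 2, 3]}
with open("artifact.pkl", "wb") as f:
    pickle.dump(artifact, f)

# When loading, check the recorded version against the current one
with open("artifact.pkl", "rb") as f:
    loaded = pickle.load(f)
if loaded["python_version"] != sys.version:
    print(f"Warning: pickled under {loaded['python_version']}")
```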


The amount of information you have about the environment used to run the code will determine how difficult it is to reproduce. You might have…


  • Information on what was in the environment, captured in something like a requirements or init file. This takes the most work to reproduce, since you need to get the environment set up yourself.

  • A complete environment using a container or virtual machine. These bundle together all the necessary information, and you just need to get it set up and run it.

  • A complete **hosted, runnable environment** (like, say, a Kaggle Kernel 😉). These are the easiest to use; you just need to point your browser at the address and run the code.


(This taxonomy is discussed in depth in this paper, if you’re curious.)


But what if you don’t already have access to any information about the original environment? If it’s your own project and you don’t have access to it anymore because you dropped it into a lake or something, you may be out of luck. If you’re trying to reproduce someone else’s work, though, the best advice I can give you is to reach out to the person whose work you’re trying to reproduce. I’ve actually had a fair amount of success reaching out to researchers on this front, especially when I’m very polite and specific about what I’m asking for.


In the majority of cases, you should be able to track down reproducibility problems to one of these three places: the code, the data or the environment. It can take a while to zero in on the exact problem with your project, but having a rough guide should help you narrow down your search.


That said, there are a small number of cases where it’s literally impossible to reproduce modeling results. Perhaps the best example is deep learning projects that rely on the cuDNN library, which is part of the NVIDIA Deep Learning SDK. Some of the key methods used for CNNs and bidirectional LSTMs are currently non-deterministic (check the documentation for up-to-date information). My advice is to consider not using CUDA if reproducibility is a priority. The tradeoff is that your models might take much longer to train without GPU acceleration, so if you’re prioritizing speed instead, this will be an important consideration for you.
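If you do want to stay on the GPU, some frameworks at least let you ask for deterministic behavior. The post doesn’t name a specific framework, so this sketch assumes PyTorch; these flags reduce (but don’t always eliminate) non-determinism, usually at some cost in speed:

```python
import torch

torch.manual_seed(42)

# Ask cuDNN to use deterministic kernels where they exist, and turn off
# its auto-tuner, which benchmarks and picks kernels non-deterministically
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```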

Reproducibility is one of those areas where the whole “an ounce of prevention is worth a pound of cure” idea is very applicable. The easiest way to avoid running into problems with reproducibility is for everyone to make sure their work is reproducible from the get-go. If this is something you’re willing to invest some time in, I’ve written a notebook that walks through best practices for reproducible research here to get you started.