Profiling Top Kagglers: Martin Henze (AKA Heads or Tails), World's First Kernels Grandmaster

Let me begin by introducing myself: My name is Martin. I’m an astrophysics postdoc working on understanding exploding stars in nearby galaxies. From the very beginning of my studies, I was using data analysis to try to unveil the mysteries of the universe. From deep images taken with ground- and space-based telescopes, through time series measuring the heartbeats of extreme stars, to population correlations probing the fundamental physics behind incredibly powerful eruptions: learning the secrets of a complex cosmos requires all the tools you can get your hands on.

One year ago I started my first Kaggle Kernel. I had always been aware of the website and finally decided to take the plunge into a new and exciting adventure. My immediate goal was to systematically improve my very rudimentary knowledge of machine learning tools and methods: to learn new skills in a rapidly growing field, to use them in astronomy research, to become a better scientist. But soon I discovered that there was much more to the Kaggle community, and to the data sets everyone had at their disposal. My curiosity was kindled. Different challenges pushed me to expand my repertoire. I had tons of fun.

Before all that, though, the very first challenge was to find a catchy username; probably the most difficult step of all. Despite my affinity for statistics and randomness it took me embarrassingly long to come up with ‘Heads or Tails’.

Martin Henze

I recently became the very first Kaggle Kernels Grandmaster thanks to the ongoing support of a great community. Like many of us, I started out with the famous Titanic competition which proved to be only the tip of the iceberg in terms of fascinating data sets. Encouraged by positive feedback from other Kagglers I joined new, live competitions. The slightly scary prospect of competing for real was made so much easier by an overwhelmingly friendly and supportive community. I made it my personal challenge to produce a fast and extensive EDA for each new challenge. Partly to give other competitors a head start, partly because I really enjoy exploring new data sets. Moreover, I discovered that I enjoy dissecting non-astronomical data sets just as much as my star-studded ones, and I’m becoming more and more interested in real-world data analysis.

In my view, Kaggle Kernels are a remarkable success story that allow truly reproducible data analysis and add a much more collaborative angle to any competition. Through Kernels, we are able to demonstrate a diverse set of problem solving methods, discuss technical intricacies, and most importantly learn from each other how to improve our skills in each competition and with each data set. I’m honoured and proud to be a part of this success story, and in the following I will happily respond to questions sent to me by the Kaggle team. If you would like to know more, or chat about data, just drop me a message or let me know in the comments. Have fun!

I think that an insightful kernel can be written on any topic. The kernel format is so flexible that it allows us to adapt to a diverse set of challenges. For me it’s all about how to approach a problem; not so much about the nature of the particular problem. The large diversity of popular Kernels on Kaggle underlines this.

**How long does it take you to write a kernel, on average? Do you do it all at once or do you break the work up into parts?**

My Kernels focus primarily on detailed EDA - ideally with a baseline prediction model derived from it. The full kernel takes me about a week, maybe two, depending on the complexity of the data set. Since I love exploring, I normally aim to make quick progress in the early days of the competition to have a comprehensive view of the data.

I normally have the fundamental properties of the data set covered within a day or two, by which time I have also defined a roadmap for the more detailed analysis of individual features. As this analysis progresses, other insights are likely to be revealed that merit a dedicated follow-up treatment. Learning new analysis tricks and methods takes up at least a couple of hours per kernel. There is always something new to learn, which is great.

I prefer to break my EDA into distinct but related parts, such as single- vs multi-parameter visualizations, correlation tests, or feature engineering. Those fundamental steps are similar from kernel to kernel, but their extent and importance can vary dramatically. This approach makes it easy to keep sight of the big picture. Attempting to write an entire analysis kernel in one go can be a daunting task, especially for beginners, and I would advise against it.
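
To make those steps concrete, here is a minimal sketch of what such distinct EDA parts could look like in R; the data frame and its columns are invented purely for illustration, not taken from any particular Kernel.

```r
library(ggplot2)

# Hypothetical toy data set standing in for a competition's training data
set.seed(42)
train <- data.frame(x1 = rnorm(500), x2 = runif(500))
train$y <- 2 * train$x1 + rnorm(500)

# Single-parameter visualization: distribution of one feature
ggplot(train, aes(x1)) + geom_histogram(bins = 30)

# Multi-parameter visualization: two features coloured by the target
ggplot(train, aes(x1, x2, colour = y)) + geom_point()

# Correlation test: how strongly does a feature relate to the target?
cor.test(train$x1, train$y, method = "spearman")
```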

This visualization is from the feature engineering section of Martin’s DonorsChoose.org competition EDA titled “An Educated Guess”.

**Any tips to share in terms of kernel format best practices?**

In my view, a good kernel should have a clear analysis flow that covers all the important aspects of the data: a step-by-step examination of the data’s characteristics. This not only greatly improves the reader’s understanding but also makes the kernel author’s life much easier. It is tempting to dive into specific details too quickly and lose yourself in the analysis. The aforementioned analysis steps help to prevent this and keep the big picture in mind.

If you find a significant insight during your detailed exploration, then I recommend tidying up the code and presentation immediately before moving on to the next step. This approach makes sure that specific aspects of your result don’t get lost during the course of the analysis. It also provides you with a clear and succinct picture of what you have found. Occasionally, upon closer examination, you might also have to change your interpretation.

I highly recommend documenting all of your ideas and approaches, including those that did not lead to useful results. That way, you won’t repeat an unsuccessful line of inquiry. In addition, other Kagglers can read up on your analysis and save time by avoiding those approaches; or possibly even get inspiration on how to tweak your ideas to make them more successful.

**Do you promote your kernels on Kaggle somehow? If so, how do you do it?**

Not in any significant way. I focus on competitions, so I sometimes post puzzling questions from my analysis to the corresponding discussion board to get feedback from a wider audience. Or I let the author of an external data set know that I included their contribution in my analysis. That’s about it.

**How important is telling a good story to the kernels that you write?**

A compelling narrative flow is an important feature of both an effective analysis and an engaging data presentation. Most Kaggle competitions have a specific aim; be it predicting taxi ride durations or forecasting website visits. Building a narrative around this goal helps to keep the big picture in mind and allows for a data-driven presentation. Having a story is good, as long as it organically arises from the data and underlines the main findings rather than overshadowing them. Exceptions might exist for Halloween-themed challenges 😉

**What do I have to do in a kernel to impress you?**

First off: I’m impressed by anyone who decides to write their first kernel on Kaggle and to share it with the community. This step can seem a bit scary, but it will provide you with valuable feedback to hone your skills as a data scientist. From there on, the most impressive skill is to be able to use this feedback to gradually improve your Kernels. Ideally, this would include learning not only from the comments on your own Kernels but also from related Kernels and the corresponding discussions.

Beyond that, I’m a big fan of data visualization and engaging narratives. The right plot says way more than the proverbial 1000 words. Stringing together a series of visuals to dive deep into the meaning behind the data is a feat that I will always be impressed with.

Example of elegant visualization contributing to a strong narrative. This is an alluvial plot from Martin’s NYC Taxi EDA. The vertical sizes of the blocks and the widths of the stripes (called “alluvia”) are proportional to the frequency. This nicely shows how the fast vs slow taxi groups fan out into the different categories (during work time, time of day, JFK airport trip, etc.).
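
For readers who want to try this style of plot themselves, here is a minimal sketch of an alluvial plot in R. It assumes the ggalluvial package and a small invented table of trip counts; the column names and numbers are purely illustrative and not taken from the actual Taxi EDA.

```r
library(ggplot2)
library(ggalluvial)

# Hypothetical summary table: trip counts per speed group and work-time flag
trips <- data.frame(
  speed     = c("fast", "fast", "slow", "slow"),
  work_time = c("yes", "no", "yes", "no"),
  n         = c(420, 180, 240, 360)
)

ggplot(trips, aes(axis1 = speed, axis2 = work_time, y = n)) +
  geom_alluvium(aes(fill = speed)) +  # stripe widths proportional to n
  geom_stratum() +                    # the vertical category blocks
  geom_text(stat = "stratum", aes(label = after_stat(stratum))) +
  scale_x_discrete(limits = c("speed", "work time")) +
  labs(y = "Number of trips")
```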

**Are there any other kernel authors on Kaggle whose work you enjoy reading?**

Allow me to shine a spotlight on three authors (in no particular order) whose work, I think, deserves more exposure. First, Jonathan Bouchet who constructs beautiful kernels with stunning visualizations to explore selected data sets. Second, Bukun who is getting better and better at the rapid exploration of competition data by providing EDA key points and prediction baselines. Third, Pranav Pandya and his polished highcharter visualizations with accompanying interpretation. These three Kagglers are definitely worth checking out. I’m certain you will continue to see more high-quality content from them in the near future.

I have learned a lot from SRK’s style of kernel design and Anisotropic’s comprehensive analyses. I admire the productivity and versatility of the1owl who can jump into any competition and provide robust benchmark code (and fun Kernel titles). I really like the Kernels of Philip Spachtholz and I hope he will find more time again to spend on Kaggle in the future. The various specialised Kernels of olivier, Andy Harless, Tilii, and Bojan Tunguz always provide great insights and benchmark code in competitions.

I could go on like this and still run out of space to list all the Kernel authors I enjoy reading. Every competition contains a mix of established favourites and new talented members of the community.

**What is your personal favorite kernel that you’ve written?**

My favourite one is probably The fast and the curious, my EDA for the Taxi Playground competition. I knew very little about geospatial visualization when it started and pushed myself to learn quickly about leaflet, maps, and how to dissect traffic patterns. This led to a challenging and fun contest for the first Kernels prize with Beluga, whose excellent modelling Kernel had him leading the prediction leaderboard from early on. In many ways, the competition was a lot of fun and showcased the immense potential of the Kernel format. It’s no coincidence that we have already had two other playground competitions since then.

**How about favorite kernels by other kernel authors?**

There are so many great kernels out there that it’s hard to pick specific ones. Here are five (again in no particular order) that I enjoyed reading. (If you’re interested in more, check out my list of upvoted Kernels.)

Jonathan Bouchet’s Flight Maps showcase R’s power for geospatial plots and contain one of my all-time favourite plots on Kaggle (you’ll know which one when you see it).

Anisotropic’s benchmark stacking Kernel for Titanic has taught many of us, including me, important fundamentals on how to improve the performance of individual classifiers by combining their predictions.

Selfish Gene’s narrative Kernel for the Taxi Playground Competition wove great visuals into a compelling exploration of the spatio-temporal features of traffic flows.

DrGuillermo’s NYC Dynamics animation Kernel for the same competition was a prime example of how to construct a focussed analysis around a specific, well-defined aspect of a data set.

Pranav Pandya’s Kernel on the Work Injuries data set first introduced me to the power of highcharter plots and is a great and succinct introduction to high-quality visualisations. He even makes pie charts look aesthetically pleasing - and I can’t stand pie charts 😉

**Is working on competitions and writing kernels complementary, or are they two different things?**

In my view, a successful understanding of the data - and therefore a successful prediction - is based on a thorough EDA. My kernels focus primarily on this EDA, with the goal of unveiling the impactful features in the data and their relations. It is of course possible to train a successful predictor without plotting anything at all. However, in my view, even in this case the interpretation of the findings can be greatly enhanced by studying visual representations of the interconnections within the data set.

In addition, I’m a very visual person. Plotting a data set from many different angles, and with many different styles and tools, helps me immensely in discovering patterns and correlations. Not everyone will appreciate data visualisations as much as I do; and that’s perfectly fine. The Kernel format gives us the flexibility to implement many different styles of data analysis. More importantly: A Kernel is a perfect lab book in which to document your approach and results - and therefore a great foundation for a successful competition contribution. In my view, learning how to plan, execute, and document your work is one of the most fundamental building blocks for the success of any data-related project.

**What would you say you have learned from writing kernels? From viewing other people’s kernels?**

Where to start? My data science journey, beyond my narrow academic field, pretty much began on Kaggle Kernels and is still in its early days. I’m learning a lot with every Kernel and every competition - and I enjoy diving into the intricacies of ML models and EDA styles.

From writing Kernels I learned how to apply my academic methods to a large diversity of data sets. I took joining Kaggle as an opportunity to learn more about the Python packages pandas and sklearn, and to extend my experience with the notebook format. In R, I had only ever used the base graphics, and I decided to learn about the tidyverse and ggplot2; which turned out to be a rather fortunate decision. I documented my learning process in my (early) Kernels, picking up Rmarkdown tricks along the way. Most of the tools and specific visualisation methods you see in my Kernels, I learned during the last year.

Reading other people’s Kernels was hugely important for my rapid progress on Kaggle in the first months, and continues to be a major driver for innovations and improvements in my analysis style. Kagglers like SRK, Anisotropic, or Philip Spachtholz played a large role in shaping my EDA approach. The Titanic competition was, and remains, a treasure trove of tutorials, code snippets, and visualisation tricks. Whenever I learned something new from a well-written Kernel, I tried to find an application for it in my analysis. One example is the compelling alluvial plots I first encountered in retrospectprospect’s work and then employed in my Taxi challenge kernel.

**How important is being one of the top three kernels on a dataset for visibility?**

It is important, but I believe that useful and successful Kernels will often gain popularity quickly and rise to the top. There certainly is a correlation between quality and popularity. However, I also see considerable variance around this general trend. One factor to take into account is that some Kernels will be excellent but only cater to a specific audience or aspect of the analysis. Those might not get the popularity they deserve. In contrast, most EDAs address a wide audience, ideally including beginners, and will as such receive more exposure. My Kernels have certainly benefitted from this, and it is important to stress that despite their popularity they are not necessarily the best Kernels in a certain competition, nor the most applicable to any situation or question you are facing. The large diversity of Kernel styles is a major strength of Kaggle and I encourage everyone to read more than just the most popular Kernels for each competition or dataset.

In general, instead of publishing many short Kernels, I recommend that authors build one or two more comprehensive Kernels that grow and evolve as the competition progresses (or as time passes after a data set’s publication). This includes adding interpretation, documentation, and context to your code. Those Kernels will naturally gather more visibility with every substantial update. I would like to take the opportunity to emphasise the word “substantial” here, and to discourage Kagglers from running an unchanged Kernel over and over (and over) for maximum visibility. Lately, there have been extreme cases of this behaviour and they create a lot of noise in the otherwise successful popularity ranking.

**Users occasionally share meta-analyses - kernels analyzing other users, or other kernels, or Kaggle itself. Do you read these, and if so, are there any in particular you enjoy?**

I like these meta-kernels, especially the ones that focussed on the 2017 Kaggle survey. The Kaggle community itself is a rich source of data from which to derive insights into our demographics and software preferences. Two examples I enjoyed reading were the comprehensive EDAs by I,Coder and Mhamed Jabri on the path from novice to grandmaster and Kaggler’s insights into data science. Those works focussed on a specific narrative to explore the available data.

At the same time, meta-kernels often face the danger of merely scratching the surface with descriptive charts, without offering a novel angle or in-depth interpretation. While it is always nice to find familiar names and works featured in overview plots, there is so much more that such meta-kernels can accomplish. I think there is substantial potential for a detailed analysis of many different aspects of our community. I encourage you to explore your meta-kernel ideas and let me know about them.

**Highly polished kernels tend to feature great data visualization. Do you have any data visualization libraries or tools that you are especially partial to?**

Without a doubt: Hadley Wickham’s ggplot2 package. For me, its style and versatility are head and shoulders above the competition, and it aligns so well with my natural analysis style that it’s almost a bit spooky. The ggplot2 notation and approach also interface well with the programming philosophies that underlie Hadley’s tidyverse collection of general data analysis packages. I can highly recommend checking out these tools if the analysis in my Kernels flows smoothly for you. Everyone has their own style of data treasure hunting and I think it is important to find tools that are a natural extension of this style.
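
As a flavour of that layered ggplot2 style, here is a minimal sketch using the mpg data set that ships with the package; the particular variables and geoms are illustrative choices, not taken from any specific Kernel.

```r
library(ggplot2)

# Layered grammar: start from data and aesthetic mappings,
# then add geoms and labels one layer at a time
ggplot(mpg, aes(x = displ, y = hwy)) +
  geom_point(aes(colour = class), alpha = 0.7) +  # raw observations, coloured by car class
  geom_smooth(method = "loess") +                 # overall trend with confidence band
  labs(x = "Engine displacement [l]", y = "Highway fuel economy [mpg]")
```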


In terms of more specialised R tools, there is the leaflet package, building on the eponymous JavaScript library, which was a revelation for me when dealing with geospatial maps for the first time. I also recommend the ggExtra package, an independent extension to ggplot2, for its multiple useful features. In terms of Python plotting libraries, I prefer seaborn. In general, I have a strong penchant for scripted tools over interactive ones because of their ease of reproducibility and documentation.
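
To illustrate, here is a minimal sketch of an interactive map with the R leaflet package; the pickup coordinates are invented for the example and only roughly correspond to Manhattan.

```r
library(leaflet)

# Hypothetical pickup locations (longitude/latitude)
pickups <- data.frame(
  lng = c(-73.985, -73.978, -73.990),
  lat = c(40.748, 40.752, 40.741)
)

leaflet(pickups) %>%
  addTiles() %>%                     # default OpenStreetMap background tiles
  addCircleMarkers(~lng, ~lat,
                   radius = 4, stroke = FALSE, fillOpacity = 0.8)
```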

**In many of your kernels you partake in extended discussions with other authors. Any particularly interesting or insightful comments you would highlight?**

The discussions within the comments are very useful and educational for me; be it for my Kernels or those of others. I truly appreciate everyone who takes the time to offer their thoughts or feedback on my work. Especially any suggestions for improving my code or visualisations. And of course the puns. I love the puns. Keep them coming. Occasionally I pose questions in my Kernels and I’m always delighted when people contribute their answers. In the same way, I’m happy to answer specific queries that other people might have about my Kernels. Often it simply makes my day when I receive a positive comment.

Some specific examples (many thanks to all!): I remember getting help with coding problems in my first Titanic Kernel. In several of my Kernels Oscar Takeshita has given me valuable feedback. Sometimes important additional information is first contributed in a comment. In the Mercedes Competition last year my Kernels got a lot of great feedback and contributions from Marcel Spitzer based on his own excellent Kernel.

Example of how feedback from Marcel Spitzer resulted in a better Kernel.

**Now that you’re a kernels grandmaster, what is your next goal on Kaggle?**

Honestly, I see myself still at the beginning of my Kaggle journey and I know that I have so much to learn. I’m planning to focus more on competition predictions and model building, which I had somewhat neglected by tackling many EDAs in a short time. Like many of us, I would like to gain more experience with deep learning and the corresponding, rapidly evolving analysis tools and libraries. Placing higher than bronze in a competition is my first short-term goal; reaching Competitions Master status is the next one.

I will focus on a few solo competitions to get my modelling skills up to speed, but I would also enjoy being part of a team. I foresee that collaborative work will become an even stronger focus on Kaggle due to the Kernel sharing features. Plus, working as a team is fun.

Of course, my love for exploring will also have me contributing more EDAs in the future. There are a number of visualisation tricks and tools that I would like to try out in future Kernels. And hopefully my work will continue to be useful to the community. There’s always more to explore.

**Do you think there will be another Kernels Grandmaster anytime soon? If so, any ideas on who it might be?**

It is a testament to the dynamic nature and fast progress of the Kaggle community that while I was pondering these questions, SRK became the next Kernels Grandmaster. Congratulations! His achievement is amazing: reaching the highest level in the Competitions and Kernels categories simultaneously. I hope to read a similar interview with him in the near future.

Even beyond SRK: based on the data I’ve seen, I predict that the next Kernels Grandmasters will claim their title relatively soon. I don’t want to mention specific names (since everyone sets their own targets) but there are several strong contenders. You know who you are 😉

In general, the popularity of kernels continues to rise and we’re seeing more and more high-quality contributions receive a large number of well-deserved upvotes. The key to reaching the Grandmaster level is continuity. There are several Kagglers who have displayed this continuity since way before I joined the community (and way before kernels were available). In addition, there are several other Kagglers who have yet to reach their full potential. Many more new contributors will join them over the coming months. It will be an exciting time for the Kaggle community and I’m very much looking forward to it.