Quarterly product update: Create your data science projects on Kaggle

We’re building Kaggle into a platform where you can collaboratively create all of your data science projects. This past quarter, we’ve expanded the breadth and scope of work you can do on our platform by launching many new features and increasing computational resources.

You can now upload the private datasets you’re working with, develop complex analyses on them in our cloud-based data science environment, and share the project with collaborators in a reproducible way.

Upload private datasets to Kaggle

We first launched Kaggle Kernels and Datasets as public products, where everything created and shared needed to be public. Last June, we enabled you to create private Kaggle Kernels. This transformed how many of you used Kaggle: 94.4% of kernels created since then have been private.

However, this story has been incomplete: you’ve been limited to running kernels on public data. This prevented you from using Kaggle for your own private projects.

This past quarter, we launched private datasets. This lets you upload private datasets to Kaggle and run Python or R code on them in kernels. You can upload an unlimited number of private datasets, subject to a 20GB total storage quota. All new datasets default to private. You can create a dataset by clicking “New Dataset” on www.kaggle.com/datasets or “Upload a Dataset” from the data tab of the kernel editor.

Once you’ve created the private dataset, you can keep it updated by publishing new versions through the Kaggle API, which we launched in January and extended in March. This API enables you to download data and make competition submissions from the command line as well.
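
For example, a typical command-line workflow might look like the sketch below (the dataset folder and competition name are placeholders, and you’ll need an API token configured first):

    # Install the Kaggle API client
    pip install kaggle

    # Publish a new version of a dataset from a local folder
    # (run "kaggle datasets init" in the folder once to create its metadata file)
    kaggle datasets version -p ./my-dataset -m "Add this quarter's data"

    # Download competition data and submit a predictions file
    kaggle competitions download -c <competition-name>
    kaggle competitions submit -c <competition-name> -f submission.csv -m "First attempt"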

A new editing experience for Kaggle Kernels

Now that you’ve created a private dataset, you can load it into Kaggle Kernels.

Kaggle Kernels enables you to create interactive Python/R coding sessions in the cloud with a click of a button. These coding sessions run in Docker containers, which provide versioned compute environments and include much of the Python and R analytics ecosystems.
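
As a quick illustration, here’s a minimal Python sketch of reading an attached dataset inside a kernel (the file name sales.csv is hypothetical; attached datasets are mounted read-only under ../input/):

    import pandas as pd

    # Files from attached datasets appear under ../input/
    # (or ../input/<dataset-name>/ when several datasets are attached)
    df = pd.read_csv("../input/sales.csv")
    print(df.shape)
    df.head()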

We have two distinct running modes for kernels: interactive and batch. Interactive sessions let you write Python or R code in a live session, so you can run a selection of code and see the output right away. Once you’re done with a session, you can click “Commit & Run” to save that version of the code and run it top-to-bottom in a clean environment as a batch run. You can close your laptop and walk away; the batch run will complete in the cloud.

When you come back, you’ll have the complete version history for all the batch runs you’ve created. If you didn’t “Commit & Run” at the end of your session, your latest edits will be saved as a working draft that you’ll see the next time you edit the kernel.

Notebooks have always been available in interactive mode, and this quarter we launched interactive support for scripts as well.

Alongside interactive scripts, we updated and unified the script and notebook editors for Kaggle Kernels. The new editor gives you access to a console, shows the variables currently in your session, and lets you monitor compute usage during an interactive session. It also lays the groundwork for many exciting future extensions.

Create more complex projects in Kaggle Kernels

We focused this past quarter on expanding the work you could do in Kaggle Kernels. Enabling you to work with private data was one part of this.

We expanded the compute limits in Kaggle Kernels from one hour to six hours. This increases the size and complexity of the models you can run and datasets you can analyze. These expanded compute limits apply to both interactive and batch sessions.

We added the ability to install custom packages in your kernel from the “Settings” tab of the kernel editor. In Python, run a “pip install” command for packages on PyPI or GitHub. In R, run a “devtools::install_github” command for packages on GitHub. This extends our base container to include the added package. Subsequent forks and edits of the kernel run in this custom container, making it easier for you and others to reproduce and build on your results.
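
For instance, the install commands look like the following sketch (the package and repository names are just placeholders):

    # Python: install from PyPI or straight from a GitHub repository
    pip install lightgbm
    pip install git+https://github.com/<user>/<repo>.git

    # R: install a package hosted on GitHub
    devtools::install_github("<user>/<repo>")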

Additionally, we focused on improving the robustness of Kaggle Kernels. The changes we’ve made behind the scenes will keep Kernels running more reliably and smoothly. If you experience any issues here, please let us know.

Share your projects with collaborators

Once you’ve uploaded a dataset or written a kernel to start a new project, you can share the work with collaborators. This enables them to see, comment on, and build on your project.

You can add collaborators as either viewers or editors.

Viewers on a dataset can view the data, download it, and write kernels on it. Editors can also create new dataset versions.

Viewers on a kernel can see the kernel and fork it. If they have access to all the underlying datasets, they can also reproduce and extend it. Editors on a kernel can edit the kernel directly, creating a new version.

When you create a kernel as part of a competition team, it is shared with the rest of your team by default. We’ve heard many competition teams have had a tough time collaborating due to different compute environments, and we hope this makes it easier for you to work together on a competition.

Additional updates

There are several more product updates I want to call out.

We launched Kaggle Learn as a fast, structured way for you to get more hands-on experience with analytics, machine learning, and data visualization. It includes a series of quick tutorials and exercises across six tracks that you can complete entirely in your browser.

We completed our second kernels competition, where all submissions had to be made through kernels. We were blown away by the participation: 2,384 teams took part. Thanks for all the thoughtful feedback on this new competition format. We learned that limiting compute acts as an incredibly effective regularizer on model complexity. We also heard about some frustrations with the kernels-only format, including variable compute performance. Overall, this second kernels competition was very successful, and we aim to keep iterating on the format while making improvements based on your feedback.

We launched an integration with BigQuery Public Datasets, which enables you to query larger and more complex datasets, like GitHub Repos and Bitcoin Blockchain, from kernels.
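
As a rough sketch (assuming the google-cloud-bigquery client is available in the kernel environment, and using the public GitHub Repos dataset), a query from a Python kernel could look like this:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Count commits per repository in the public GitHub sample
    query = """
    SELECT repo_name, COUNT(*) AS commit_count
    FROM `bigquery-public-data.github_repos.sample_commits`
    GROUP BY repo_name
    ORDER BY commit_count DESC
    LIMIT 10
    """
    results = client.query(query).to_dataframe()
    print(results)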

Many of you have told us that you want more control over content you previously published and to be able to delete it. We heard you. You can now delete datasets, kernels, topics, and comments that you’ve written on Kaggle. These leave a [deleted] shell, so that related kernels or comments still have some context.

We published an overview page of the different topics on Kaggle to make it easier for you to browse datasets, competitions, and kernels by topic.

Thanks

I’d like to give a huge thanks to Kaggle’s team, who worked hard to land these updates and continue to build the world’s best place to collaborate on data science projects.

Most of all, I want to thank you for being part of the Kaggle community. Our platform can’t exist without you. We’re constantly amazed at the creative solutions you build for competitions, the insights you share through kernels, and how you help each other grow into better data scientists and engineers.

Do you have feedback for us? We’d love to hear it—please share your thoughts in our Product Feedback forums.