The year is coming to an end. I did not write nearly as much as I had planned to. But I’m hoping to change that next year, with more tutorials around Reinforcement Learning, Evolution, and Bayesian Methods coming to WildML! And what better way to start than with a summary of all the amazing things that happened in 2017? Looking back through my Twitter history and the WildML newsletter, the following topics repeatedly came up. I’ll inevitably miss some important milestones, so please let me know about them in the comments!
Python Data Science jobs list into 2018
I’ve been building my data-science jobs list for a couple of years now. Almost 800 folk are on the list; they receive an email update once every two weeks containing around seven job ads. Many active members of PyDataLondon are on the list.
Linked Lists
Linked Lists are incredibly useful data structures; they store both data and order information in a dynamic way. Before delving too deep, however, let’s first examine another staple data structure: the array.
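To make the idea concrete before that detour, here is a minimal sketch (my own illustration, not code from the post) of how a singly linked list keeps data and order together: each node holds a value plus a reference to the next node, so the ordering lives in the links rather than in contiguous memory.

```python
class Node:
    """One element of a singly linked list: a value plus a link onward."""
    def __init__(self, data, next=None):
        self.data = data  # the stored value
        self.next = next  # the following node, or None at the tail

# Build the list 1 -> 2 -> 3 and walk it in order.
head = Node(1, Node(2, Node(3)))
node = head
while node is not None:
    print(node.data)
    node = node.next
```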
2017 Winners and Losers
At this point it is pretty clear that the stock market was a yuge winner in 2017. So was bitcoin. How did other assets do? Currencies? Energies? Let’s take a look.
Weekly Review: 12/23/2017
Happy Holidays, people! If you live in the Bay Area, then the next week is probably your time off, so I hope you have fun and enjoy the holiday season! As for Robotics, I just finished Week 2 of Perception, and will probably kick off Week 3 in 2018. I am excited for the last ‘real’ course (Estimation & Learning), and then building my own robot as part of the ‘Capstone’ project after that :-D.
Setting Up Selenium on RaspberryPi 2/3
Selenium is a great tool for web scraping and automated testing of websites. I personally use it for scraping dynamic-content websites, where the content is created by JavaScript routines. Lately, I also tried to run Selenium on a Raspberry Pi and found out that it is not easy to install all the requirements. Here I’d like to share my commands to make things easier for you.
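For context, a scraping session with Selenium’s Python bindings looks roughly like this (a minimal sketch; the URL is a placeholder, and the exact driver setup varies by Selenium version and the chromedriver build available for the Pi):

```python
from selenium import webdriver

# Run the browser headless, which suits a Raspberry Pi without a display.
options = webdriver.ChromeOptions()
options.add_argument("--headless")

driver = webdriver.Chrome(options=options)  # assumes chromedriver is installed
try:
    driver.get("https://example.com")  # placeholder for a JavaScript-heavy page
    # By this point the page's JavaScript has run, so the rendered DOM is available.
    print(driver.page_source[:500])
finally:
    driver.quit()
```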
Large-Scale Health Data Analytics with OHDSI
Data analytics is increasingly being brought to bear to treat human disease, but as more and more health data is stored in computer databases, one significant challenge is how to perform analyses across these disparate databases. In this post I take a look at the Observational Health Data Sciences and Informatics (or OHDSI, pronounced “Odyssey”) program that was formed to address this challenge, and which today accounts for 1.26 billion patient records collectively stored across 64 databases in 17 countries.
k-server, part 2: continuous time mirror descent
We continue our k-server series (see post 1 here). In this post we briefly discuss the concept of a fractional solution for k-server, which by analogy with MTS will in fact be a fractional “anti-solution”. Then we introduce the continuous-time version of MTS and explain how it applies to k-server. Finally, the most important part of the post is the description of the basic potential-based analysis of mirror descent and how to interpret it in the context of MTS.
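For a taste of that argument, here is the standard continuous-time potential computation for mirror descent (a generic sketch, not lifted from the post itself):

```latex
% Continuous-time mirror descent with mirror map Phi and costs g(t):
\[
  \frac{d}{dt}\,\nabla\Phi(x(t)) = -\,g(t).
\]
% The potential is the Bregman divergence to a fixed comparator y:
\[
  D_\Phi(y, x) = \Phi(y) - \Phi(x) - \langle \nabla\Phi(x),\, y - x \rangle.
\]
% Differentiating the potential along the dynamics gives
\[
  \frac{d}{dt}\, D_\Phi(y, x(t)) = -\,\langle g(t),\, x(t) - y \rangle,
\]
% so integrating over [0, T] and using D_Phi >= 0 bounds the regret:
\[
  \int_0^T \langle g(t),\, x(t) - y \rangle\, dt \;\le\; D_\Phi(y, x(0)).
\]
```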
Simulating Chutes & Ladders in Python
Instead of brute-force simulation, we might think about the game probabilistically. On any given turn, there are six equally probable options: rolling a 1, 2, 3, 4, 5, or 6. Depending on which space you start on, these lead to six well-defined results. For example, on the first turn the possibilities are the squares 38, 2, 3, 14, 5, or 6, each with equal probability. We could encode this set of probabilities as a vector of length 101, with 1/6 in each associated index (here the zeroth element represents the start of the game, off of the board):
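A sketch of that encoding in NumPy (my reconstruction, showing only the two first-turn ladders named above; the real board has many more chutes and ladders):

```python
import numpy as np

# Ladders reachable on the first roll: square 1 climbs to 38, square 4 to 14.
ladders = {1: 38, 4: 14}

v = np.zeros(101)                     # index 0 = off the board, before the game
for roll in range(1, 7):              # a fair six-sided die
    square = ladders.get(roll, roll)  # climb immediately if a ladder starts here
    v[square] += 1 / 6

print(v[[38, 2, 3, 14, 5, 6]])        # each of the six outcomes has mass 1/6
```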
Why mere Machine Learning cannot predict Bitcoin price
Lately, I have been studying time series to push past the limits of my experience. I decided to apply what I learned to cryptocurrency price prediction, with a hunch of getting rich. Kidding? Or not :). As I saw more of the intricacies of the problem, I dug deeper and found a new challenge in it. Now I am in the process of creating something new, spanning traditional machine learning to the latest reinforcement learning achievements.