The Price is Right
In the TV game show "The Price is Right", contestants who win individual games advance, in groups of three, to compete in another game called "The Big Wheel". The winner of this game advances to the "Showcase Showdown", where there is a chance to win very large prizes.

The Big Wheel is similar to a roulette wheel, but mounted on its edge. The wheel contains 20 sections showing cash values from $0.05 to $1.00 in 5¢ increments. Contestants spin the wheel with the aim of getting as close to $1.00 as possible without going over. After their first spin, they are given the option of spinning again. If they spin a second time, the value obtained on the second spin is added to the first, and this is their total. There is no option to spin more than twice.

Each contestant goes to the wheel and spins (once or twice). The player who gets closest to $1.00, without going over, advances to the "Showcase Showdown". In the event of a winning tie score, the tied players spin again (one spin only, Vasily) as a tie-break to determine the winner, repeating the process if necessary. This is called a "Spin-off".
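As a rough illustration of the rules above, here is a minimal Monte Carlo sketch (my own, not code from the article) of a single contestant's turn. The "spin again if below 70¢" threshold is purely an assumption for the example, not a claim about optimal play; values are in integer cents to avoid floating-point issues.

```python
import random

VALUES = list(range(5, 105, 5))   # the 20 wheel sections, in cents: 5c .. 100c

def play_turn(spin_again_below=70):
    """One contestant's turn: spin once, spin again only if the first
    spin is under the (assumed) threshold; a total over $1.00 busts."""
    first = random.choice(VALUES)
    if first < spin_again_below:
        total = first + random.choice(VALUES)
        return total if total <= 100 else 0   # going over $1.00 scores nothing
    return first

# estimate the score distribution for this simple threshold strategy
trials = [play_turn() for _ in range(100_000)]
print(f"mean score: {sum(trials) / len(trials):.1f} cents")
print(f"bust rate:  {sum(t == 0 for t in trials) / len(trials):.3f}")
```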
T-Shirts!!
The Becoming a Data Scientist tees are ready to sell! I ordered a couple myself before posting them for sale, to make sure the quality was good. They came out great!! And if you order from Teespring before April 1, 2017 using this link: Becoming a Data Scientist Store – Free Shipping, you'll get free shipping on your order!
Introduction to XGBoost
Before diving deep into XGBoost, let us first understand Gradient Boosting. Boosting means repeatedly taking samples of data from our dataset and fitting a weak learner (a predictor with not-so-great accuracy) to each. There is one catch, though: we give more weight to the data points that were misclassified by the previous learners. I know this may be making only a little sense to you so far, so let us clear it up with the help of an example and a visualization.
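To make the reweighting idea concrete, here is a minimal AdaBoost-style sketch (my own illustration, not code from the post): each round fits a shallow weak learner, then up-weights the points it misclassified so the next learner focuses on them.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
y_signed = np.where(y == 1, 1, -1)          # labels in {-1, +1}

n_rounds = 10
weights = np.full(len(X), 1 / len(X))       # start with uniform weights
learners, alphas = [], []

for _ in range(n_rounds):
    stump = DecisionTreeClassifier(max_depth=1)     # a weak learner
    stump.fit(X, y_signed, sample_weight=weights)
    pred = stump.predict(X)

    err = weights[pred != y_signed].sum()           # weighted error rate
    err = np.clip(err, 1e-10, 1 - 1e-10)            # guard the log below
    alpha = 0.5 * np.log((1 - err) / err)           # this learner's vote strength

    # up-weight misclassified points, down-weight correct ones
    weights *= np.exp(-alpha * y_signed * pred)
    weights /= weights.sum()

    learners.append(stump)
    alphas.append(alpha)

# the ensemble prediction is a weighted vote of the weak learners
ensemble = sum(a * s.predict(X) for a, s in zip(alphas, learners))
accuracy = (np.sign(ensemble) == y_signed).mean()
print(f"training accuracy after {n_rounds} rounds: {accuracy:.3f}")
```

Gradient boosting generalizes this idea by fitting each new learner to the gradient of a loss function rather than to reweighted samples; XGBoost is an optimized implementation of that scheme.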
Deconstruction with Discrete Embeddings
In my post Beyond Binary, I showed how easy it is to create trainable “one-hot” neurons with the straight-through estimator. My motivation for this is made clear in this post, in which I demonstrate the potential of discrete embeddings. In short, discrete embeddings allow for explicit deconstruction of inherently fuzzy data, which allows us to apply explicit reasoning and algorithms over the data, and communicate fuzzy ideas with concrete symbols. Using discrete embeddings, we can (1) create a language model over the embeddings, which immediately gives us access to RNN-based generation of internal embeddings (and sequences thereof), and (2) index sub-parts of the embeddings, instead of entire embedding vectors, which gives us (i.e., our agents) access to search techniques that go beyond cosine similarity, such as phrase search and search using lightweight structure.
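For readers who have not seen the earlier post, here is a minimal PyTorch sketch (my own, not code from either post) of the straight-through trick behind those trainable one-hot neurons: the forward pass emits a hard one-hot vector, while the backward pass passes gradients through unchanged, as if the op were the identity.

```python
import torch

class StraightThroughOneHot(torch.autograd.Function):
    @staticmethod
    def forward(ctx, logits):
        # hard one-hot of the argmax: a discrete, non-differentiable choice
        idx = logits.argmax(dim=-1, keepdim=True)
        return torch.zeros_like(logits).scatter_(-1, idx, 1.0)

    @staticmethod
    def backward(ctx, grad_output):
        # straight-through estimator: pretend forward was the identity
        return grad_output

logits = torch.randn(4, 8, requires_grad=True)
one_hot = StraightThroughOneHot.apply(logits)
one_hot.sum().backward()        # gradients still flow to the logits
print(one_hot)
print(logits.grad)
```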
10 famous TV shows related to Data science & AI (Artificial Intelligence)
“If you want to become one, first get inspired by one”
Reasons I left academia
I recently made the transition from academia to industry. Some people were surprised I decided to leave, since I had had a pretty successful academic career. So why did I want to leave? I've read a lot of articles and opinions on why people leave academia, but the reasons I most often saw cited didn't resonate with me. They are usually things like the academic job market being too tough, or industry paying better. The better pay is certainly nice, but it wasn't the impetus for my switch, and I had never really worried about the job market. So why did I decide to leave?
Facts and Fallacies of Software Engineering - Book Review
I read Facts and Fallacies of Software Engineering and quite enjoyed it. I enjoyed it so much that I wanted to write a review and jot down the quotes I took from it.
Extreme IO performance with parallel Apache Parquet in Python
Fri 10 February 2017
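As a taste of the topic, here is a minimal sketch (mine, not code from the linked post) of reading a Parquet file with pyarrow's multithreaded reader and converting it to a pandas DataFrame; "data.parquet" is a placeholder path.

```python
import pyarrow.parquet as pq

# read with multiple threads, then hand off to pandas
table = pq.read_table("data.parquet", use_threads=True)
df = table.to_pandas()
print(df.head())
```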