The Lean Startup Methodology
Top Stories, Oct 15-21: Graphs Are The Next Frontier In Data Science; The Main Approaches to Natural Language Processing Tasks
Update on the R Consortium Census Working Group
It’s been a while since I shared any information about the R Consortium’s Census Working Group.
Document worth reading: “Fractal AI: A fragile theory of intelligence”
Fractal AI is a theory for general artificial intelligence. It allows the derivation of new mathematical tools that constitute the foundations for a new kind of stochastic calculus, by modelling information using cellular automaton-like structures instead of smooth functions. The included repository presents a new agent, derived from the first principles of the theory, which is capable of solving Atari games several orders of magnitude more efficiently than similar techniques such as Monte Carlo Tree Search. The code provided shows how it is now possible to beat some of the current state-of-the-art benchmarks on Atari games, without prior learning and using fewer than 1,000 samples to calculate each action, whereas standard MCTS uses 3 million samples. Among other things, Fractal AI makes it possible to generate a huge database of top-performing examples with very little computation, transforming reinforcement learning into a supervised problem. The algorithm presented is capable of solving the exploration vs. exploitation dilemma in both the discrete and continuous cases, while maintaining control over any aspect of the agent's behavior. More generally, the techniques presented here have direct applications in other areas such as non-equilibrium thermodynamics, chemistry, quantum physics, economics, information theory, and non-linear control theory. Fractal AI: A fragile theory of intelligence
Don’t miss Big Data LDN 2018
Sponsored Post.
How to Define a Machine Learning Problem Like a Detective
By Spencer Norris, Data Scientist, Independent Journalist.
Maximized Monte Carlo Testing with MCHT
Introduction
I introduced MCHT two weeks ago and presented it as a package for Monte Carlo and bootstrap hypothesis testing. Last week, I delved into important technical details and showed how to make self-contained MCHTest objects that don't suffer side effects from changes in the global namespace. In this article I show how to perform maximized Monte Carlo hypothesis testing using MCHT, as described in [1].
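The core idea behind maximized Monte Carlo testing can be sketched in a few lines of Python. This is a hedged illustration of the general technique, not the MCHT package's R API: when the test statistic's null distribution depends on an unknown nuisance parameter, compute a Monte Carlo p-value at each candidate parameter value and report the maximum, so the test rejects only if even the worst case is small. (The grid search, example statistic, and all function names below are illustrative assumptions; the literature typically uses an optimizer rather than a grid.)

```python
import numpy as np

def mc_p_value(obs_stat, sim_stats):
    """Monte Carlo p-value: the proportion of simulated statistics at
    least as extreme as the observed one, with the usual +1 correction."""
    n = len(sim_stats)
    return (np.sum(sim_stats >= obs_stat) + 1) / (n + 1)

def maximized_mc_test(obs_stat, simulate_stat, nuisance_grid,
                      n_sim=1000, seed=123):
    """Maximized Monte Carlo test (sketch): simulate the statistic under
    the null at each nuisance-parameter value and take the largest
    Monte Carlo p-value over the grid."""
    rng = np.random.default_rng(seed)
    p_values = []
    for theta in nuisance_grid:
        sims = np.array([simulate_stat(theta, rng) for _ in range(n_sim)])
        p_values.append(mc_p_value(obs_stat, sims))
    return max(p_values)

# Hypothetical example: test whether a population mean exceeds 0 when
# the standard deviation sigma is an unknown nuisance parameter.
def simulate_stat(sigma, rng):
    x = rng.normal(0.0, sigma, size=30)          # draw under the null
    return np.sqrt(30) * x.mean()                # null law depends on sigma

obs = 2.5  # observed value of the statistic
p_max = maximized_mc_test(obs, simulate_stat, nuisance_grid=[0.5, 1.0, 2.0])
```

Rejecting when `p_max` falls below the significance level gives a test whose size is controlled regardless of the true nuisance-parameter value, at the cost of some power.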
Speak at Mega-PAW Vegas 2019 – on Machine Learning Deployment (Apply by Nov 15)
Distilled News
automl package: part 1/2 why and how
University of Rhode Island: Assistant Professor of Data Science [Kingston, RI]
At: University of Rhode Island
Location: Kingston, RI
Web: www.uri.edu
Position: Assistant Professor of Data Science