Model-free, Model-based, and General Intelligence
During the 60s and 70s, AI researchers explored intuitions about intelligence by writing programs that displayed intelligent behavior. Many good ideas came out of this work, but programs written by hand were not robust or general. After the 80s, research increasingly shifted to the development of learners capable of inferring behavior and functions from experience and data, and solvers capable of tackling well-defined but intractable models like SAT, classical planning, Bayesian networks, and POMDPs. The learning approach has achieved considerable success but results in black boxes that lack the flexibility, transparency, and generality of their model-based counterparts. Model-based approaches, on the other hand, require models and scalable algorithms. Model-free learners and model-based solvers have close parallels with Systems 1 and 2 in current theories of the human mind: the first, a fast, opaque, and inflexible intuitive mind; the second, a slow, transparent, and flexible analytical mind. In this paper, I review developments in AI and draw on these theories to discuss the gap between model-free learners and model-based solvers, a gap that needs to be bridged in order to have intelligent systems that are robust and general.
Redmonk Language Rankings, June 2018
The latest Redmonk Language Rankings are out. These rankings are published twice a year, and the top three positions in the June 2018 rankings remain unchanged from last time: JavaScript at #1, Java at #2, and Python at #3. The Redmonk rankings are based on GitHub repositories (as a proxy for developer activity) and StackOverflow activity (as a proxy for user discussion), as shown in the chart below.
Distilled News
Model evaluation, model selection, and algorithm selection in machine learning
R Packages worth a look
Check Text Files Content at a Glance (fpeek)
Tools to help with importing text files. The package can return the number of lines, print the first and last lines, and convert encoding. Operations are performed without …
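The idea behind fpeek can be sketched in a few lines of Python: inspect a large text file by streaming it rather than loading it whole. The helper names below (count_lines, peek_head, peek_tail) are hypothetical illustrations of the concept, not the fpeek API.

```python
from collections import deque
from itertools import islice

def count_lines(path):
    """Count lines by streaming the file, never holding it all in memory."""
    with open(path, "rb") as f:
        return sum(1 for _ in f)

def peek_head(path, n=5):
    """Return the first n lines (fewer if the file is shorter)."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in islice(f, n)]

def peek_tail(path, n=5):
    """Return the last n lines; deque(maxlen=n) keeps only the final n seen."""
    with open(path, encoding="utf-8") as f:
        return [line.rstrip("\n") for line in deque(f, maxlen=n)]
```

Because each helper iterates the file lazily, memory use stays constant no matter how large the file is, which is the point of a "peek at a glance" tool.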
What's new on arXiv
A Case Study on the Impact of Similarity Measure on Information Retrieval based Software Engineering Tasks
Cryptocurrency: Your Current Options
Since its development in 2009, cryptocurrency has occupied the financial space as both a threat and an innovation to the business and economic scene. Budget-conscious investors have been drawn to a virtual monetary instrument that offers anonymity, easy international transactions, and feasibility as an investment vehicle.
If you did not already know
Instrumental Variables Estimation
In statistics, econometrics, epidemiology, and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment. Intuitively, IV is used when an explanatory variable of interest is correlated with the error term, in which case ordinary least squares and ANOVA give biased results. A valid instrument induces changes in the explanatory variable but has no independent effect on the dependent variable, allowing a researcher to uncover the causal effect of the explanatory variable on the dependent variable. Instrumental variable methods allow for consistent estimation when the explanatory variables (covariates) are correlated with the error terms in a regression model. Such correlation may occur 1) when changes in the dependent variable change the value of at least one of the covariates ('reverse' causation), 2) when there are omitted variables that affect both the dependent and independent variables, or 3) when the covariates are subject to non-random measurement error. Explanatory variables that suffer from one or more of these issues in the context of a regression are sometimes referred to as endogenous. In this situation, ordinary least squares produces biased and inconsistent estimates. However, if an instrument is available, consistent estimates may still be obtained. An instrument is a variable that does not itself belong in the explanatory equation but is correlated with the endogenous explanatory variables, conditional on the value of other covariates. …
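The bias from an endogenous regressor, and how an instrument removes it, can be illustrated with a small NumPy simulation. This is a hedged sketch with made-up variable names: z is the instrument, u an unobserved confounder, and the simple-regression IV estimator cov(z, y) / cov(z, x) is compared against OLS.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                        # instrument: affects x, not y directly
u = rng.normal(size=n)                        # unobserved confounder
x = z + u + rng.normal(size=n)                # endogenous regressor (correlated with u)
y = 2.0 * x + u + rng.normal(size=n)          # true causal effect of x on y is 2.0

# Covariance matrix of (z, x, y); rows/cols indexed 0, 1, 2.
C = np.cov(np.vstack([z, x, y]))

beta_ols = C[1, 2] / C[1, 1]                  # cov(x, y) / var(x): biased upward by u
beta_iv = C[0, 2] / C[0, 1]                   # cov(z, y) / cov(z, x): consistent
```

With these parameters, the population OLS slope is 7/3 ≈ 2.33 (the confounder u inflates it), while the IV estimator recovers a value close to the true 2.0, because z is correlated with x but enters y only through x.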
Amazon Rekognition is now available in the Asia Pacific (Seoul) and Asia Pacific (Mumbai) Regions
Amazon Rekognition is now available in the Asia Pacific (Seoul) and Asia Pacific (Mumbai) AWS Regions.
Marinus Analytics fights human trafficking using Amazon Rekognition
Marinus Analytics is a woman-owned company, founded in 2014, that builds AI tools that turn big data into actionable intelligence. They are dedicated to using artificial intelligence, such as the facial recognition features available in Amazon Rekognition, to help find human trafficking victims and reunite them with their families.
Data Notes: From Hate Speech to Russian Troll Tweets
Enjoy these new, intriguing, and overlooked datasets and kernels.