System Zero: What Kind of AI have we Created?
Apparent rapid advances in artificial intelligence are plugging into deep-seated fears we have about the fate of humanity. These fears are not new; they go back at least as far as the future envisaged in Kubrick and Clarke’s “2001: A Space Odyssey”. Technical advice for that film was provided by Jack Good, one of the originators of the concept of the ‘technological singularity’: the idea that at some point machines will become so intelligent that they can design themselves, and will then begin to propagate ever more intelligent versions of themselves.
Give me five
Give me five is an open-source Chrome extension that recommends content you have pushed to Lateral based on the content of the page you’re currently visiting. It is built on the same code base as the NewsBot Chrome extension. The screencast shows the extension in action:
Some Observations on Winsorization and Trimming
Over the last few months, I’ve had a lot of conversations with people about the use of winsorization to deal with heavy-tailed data that is positively skewed because of large outliers. After a conversation with my friend Chris Said this past week, it became clear to me that I needed to do some simulation studies to understand the design space of techniques for dealing with outliers.
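To make the distinction concrete, here is a minimal sketch of the two techniques in Python with NumPy (my own illustration, not code from the post): winsorization clamps extreme values to a chosen quantile, while trimming discards them.

```python
import numpy as np

def winsorize(x, pct=0.05):
    """Clamp values below/above the pct and (1 - pct) quantiles to those quantiles."""
    lo, hi = np.quantile(x, [pct, 1 - pct])
    return np.clip(x, lo, hi)

def trim(x, pct=0.05):
    """Drop values outside the pct and (1 - pct) quantiles entirely."""
    lo, hi = np.quantile(x, [pct, 1 - pct])
    return x[(x >= lo) & (x <= hi)]

# A positively skewed sample with one large outlier.
x = np.array([1.0, 2.0, 2.5, 3.0, 100.0])
```

On a sample like this, winsorization keeps the sample size but pulls the outlier in toward the bulk of the data, whereas trimming shrinks the sample; that difference is exactly what simulation studies of the two estimators need to account for.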
Common Probability Distributions: The Data Scientist’s Crib Sheet
Data scientists have hundreds of probability distributions from which to choose. Where to start?
Conference on the Economics of Machine Intelligence, Dec 15
The Creative Destruction Lab at the University of Toronto is hosting a conference on the Economics of Machine Intelligence on December 15 in Toronto: “Machine Learning and the Market for Intelligence.”
Interactive association rules exploration app
Andrew Brooks (andrewbrooksct@gmail.com)
In a previous post, I wrote about what I use association rules for and mentioned a Shiny application I developed to explore and visualize rules. This post is about that app. The app is mainly a wrapper around the arules and arulesViz packages developed by Michael Hahsler.
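For readers unfamiliar with the measures such an app surfaces, here is a toy sketch of the standard association-rule metrics (support, confidence, lift) in plain Python; the transactions are made up, and this illustrates the concepts rather than the arules API:

```python
# Toy market-basket data: each transaction is a set of items bought together.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
    {"bread", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """P(rhs | lhs): how often the rule lhs -> rhs holds when lhs occurs."""
    return support(lhs | rhs) / support(lhs)

def lift(lhs, rhs):
    """Confidence relative to how often rhs occurs anyway; > 1 suggests association."""
    return confidence(lhs, rhs) / support(rhs)
```

Exploring rules interactively then amounts to filtering and sorting rules by thresholds on these three numbers, which is what the Shiny app makes visual.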
The TensorFlow perspective on neural networks
A few weeks ago, Google announced that it was open sourcing an internal system called TensorFlow that allows one to build neural networks, as well as other types of machine learning models. (Disclaimer: I work for Google.) Because TensorFlow is designed to be more general than just a neural network framework, it takes a fairly abstract perspective compared to the way we usually talk about neural networks. But (not coincidentally) this perspective is very close to what I described in my last post, with rows of neurons defining output vectors and the connections between these rows defining matrices of weights. In today’s post, I want to describe the TensorFlow perspective, explain how it matches up with the traditional way of thinking about neural networks, and explain how TensorFlow generalizes the vector and matrix approach to include more general structures called tensors.
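The vector-and-matrix view of a row of neurons can be sketched in a few lines of NumPy (an illustration of the general idea, not TensorFlow’s actual API):

```python
import numpy as np

def dense_layer(x, W, b):
    """One row of neurons: output vector y = f(Wx + b), here with a ReLU f."""
    return np.maximum(0.0, W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input row: 4 neurons, as a vector
W = rng.standard_normal((3, 4))   # weights connecting 4 inputs to 3 outputs
b = np.zeros(3)                   # one bias per output neuron
y = dense_layer(x, W, b)          # output row: 3 neurons
```

Stacking several inputs into a batch turns x into a matrix, and data such as images adds further axes; generalizing these arrays to arbitrary rank is exactly where tensors enter the picture.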
Mazes
In this article I’m going to take a look at one of the many algorithms that can be used to generate mazes. The technique I am going to describe uses a depth-first search strategy and is also known as recursive backtracking. Both of these names should give you clues as to how the technique works.
A maze is a complex structure of interconnected passageways. There should be (at least) one way to get from a designated start location to a designated end. Typically the path is convoluted and branched (and those branches can branch again, often leading to dead ends), so the correct path is not obvious to the naked eye, even from a God’s-eye view with all information exposed. Mazes are even more challenging to solve when you are inside one and can see only what is immediately around you!
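The depth-first carving strategy described above can be sketched as follows. This is a minimal illustration rather than the article’s own code: it uses an explicit stack instead of recursion, and the grid representation (a dict of cells mapping to open directions) is my own choice.

```python
import random

def generate_maze(width, height, seed=None):
    """Carve a maze with depth-first search (recursive backtracking).

    Each cell maps to the set of directions in which a passage is open.
    """
    rng = random.Random(seed)
    maze = {(x, y): set() for x in range(width) for y in range(height)}
    moves = {"N": (0, -1), "S": (0, 1), "E": (1, 0), "W": (-1, 0)}
    opposite = {"N": "S", "S": "N", "E": "W", "W": "E"}

    stack = [(0, 0)]
    visited = {(0, 0)}
    while stack:
        x, y = stack[-1]
        # Neighbours inside the grid that have not been carved into yet.
        options = [(d, (x + dx, y + dy)) for d, (dx, dy) in moves.items()
                   if (x + dx, y + dy) in maze and (x + dx, y + dy) not in visited]
        if not options:
            stack.pop()            # dead end: backtrack to an earlier cell
            continue
        d, nxt = rng.choice(options)
        maze[(x, y)].add(d)        # knock down the wall from both sides
        maze[nxt].add(opposite[d])
        visited.add(nxt)
        stack.append(nxt)          # go deeper, depth-first
    return maze
```

Because each new cell is reached by carving exactly one wall, the passages form a spanning tree of the grid: every cell is reachable from the start, and there is exactly one path between any two cells, which is why the result has the dead-end-riddled character described above.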
Ten Tips for Writing CS Papers, Part 1
Sebastian Nowozin
As a non-native English speaker, I can relate to the challenge of writing concise and clear English. Scientific writing is particularly challenging because the audience is only partially known at the time of writing: at best, the paper will still be read 10 or 20 years later by people from all over the world.