Import AI: #103: Testing brain-like alternatives to backpropagation, why imagining goals can lead to better robots, and why navigating cities is a useful research avenue for AI
by Jack Clark
Backpropagation may not be brain-like, but at least it works:…Researchers test more brain-like approaches to learning systems, discover that backpropagation is hard to beat…Backpropagation is one of the fundamental tools of modern deep learning – it’s one of the key mechanisms for propagating and updating information through networks during training. Unfortunately, there’s relatively little evidence that our own brains perform a process analogous to backpropagation (a question Geoff Hinton has wrestled with for several years in talks like ‘Can the brain do back-propagation?’). That has long worried some researchers: though we’re seeing significant gains from systems built on backpropagation, we may need to investigate other approaches in the future. Now, researchers with Google Brain and the University of Toronto have performed an empirical analysis of a range of fundamental learning algorithms, testing approaches based on backpropagation against ones using target propagation and other variants. Motivation: The idea behind this research is that “there is a need for behavioural realism, in addition to physiological realism, when gathering evidence to assess the overall biological realism of a learning algorithm. Given that human beings are able to learn complex tasks that bear little relationship to their evolution, it would appear that the brain possesses a powerful, general-purpose learning algorithm for shaping behavior”. Results: The researchers “find that none of the tested algorithms are capable of effectively scaling up to training large networks on ImageNet”, though they record some success on MNIST and CIFAR.
“Out-of-the-box application of this class of algorithms does not provide a straightforward solution to real data on even moderately large networks,” they write. Why it matters: Given how limited and simplified our neural network systems are, it seems intellectually honest to test and ablate algorithms, particularly by comparing well-studied ‘mainstream’ approaches like backpropagation with more biologically-motivated but less-developed algorithms from other parts of the literature. Read more: Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures (Arxiv).
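To make the distinction concrete, here is a toy numpy sketch (my own illustration, not the paper’s code) contrasting backpropagation with a simplified form of target propagation on a tiny regression problem: backprop chains gradients through every layer, while target propagation hands each layer a local target computed via an approximate inverse of the layer above – here a pseudo-inverse stands in for the learned inverse networks the literature uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) with a two-layer tanh network.
X = rng.uniform(-2, 2, size=(200, 1))
Y = np.sin(X)

def init():
    return rng.normal(0, 0.5, (1, 16)), rng.normal(0, 0.5, (16, 1))

def forward(W1, W2, X):
    H = np.tanh(X @ W1)
    return H, H @ W2

# --- Backpropagation: the chain rule, end to end ---
W1, W2 = init()
lr = 0.05
for _ in range(2000):
    H, P = forward(W1, W2, X)
    dP = (P - Y) / len(X)            # dLoss/dP for 0.5 * MSE
    W2 -= lr * H.T @ dP              # gradient for layer 2
    dH = dP @ W2.T * (1 - H**2)      # gradient propagated through layer 2
    W1 -= lr * X.T @ dH
bp_loss = 0.5 * np.mean((forward(W1, W2, X)[1] - Y) ** 2)

# --- Target propagation (simplified): each layer chases a local target ---
W1, W2 = init()
for _ in range(2000):
    H, P = forward(W1, W2, X)
    P_tgt = P - 0.5 * (P - Y)        # output target: a step toward the labels
    # Propagate the target backward with an approximate inverse of layer 2;
    # a pseudo-inverse stands in for a learned inverse network here.
    H_tgt = H + (P_tgt - P) @ np.linalg.pinv(W2)
    W2 -= lr * H.T @ ((P - P_tgt) / len(X))   # local regression onto P_tgt
    W1 -= lr * X.T @ (((H - H_tgt) * (1 - H**2)) / len(X))  # local: no chain rule through W2
tp_loss = 0.5 * np.mean((forward(W1, W2, X)[1] - Y) ** 2)

print(f"backprop loss: {bp_loss:.4f}  target-prop loss: {tp_loss:.4f}")
```

The point of the sketch is the structural difference: the target-prop weight updates for `W1` never touch `W2`’s gradient, which is what makes such schemes more biologically plausible (and, per the paper, harder to scale).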
AI and Silent Bugs:…Half-decade-old bug in ‘Aliens’ game found responsible for poor performance…One of the more irritating things about developing AI systems is that when you mis-program an AI it tends to fail silently – for instance, in OpenAI’s Dota project we saw performance dramatically increase simply after fixing non-breaking bugs. Another good example of this phenomenon has turned up in news about Aliens: Colonial Marines, a poorly reviewed half-decade-old game. It turns out some of the reasons for those poor reviews were likely due to a bug – subsequent patches revealed that the original game mis-named one variable, which led to entire chunks of the game’s enemy AI systems not functioning. Read more: A years-old, one-letter typo led to Aliens: Colonial Marines’ weird AI (Ars Technica).
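As a hypothetical illustration (the names below are invented, not the game’s actual config), here is how a one-letter typo in a config key can fail silently in Python: a defaulting lookup masks the mistake, so the game AI quietly degrades instead of crashing.

```python
# Hypothetical illustration of a "silent" AI bug: a one-letter typo in a
# config key makes lookups fall back to a harmless default instead of erroring.
DEFAULT_TETHER_RADIUS = 0.0  # 0.0 => the enemy never roams from its spawn point

config = {
    "enemy_speed": 4.0,
    "enemy_teather_radius": 30.0,  # typo: should be "enemy_tether_radius"
}

def enemy_tether_radius(cfg):
    # .get() hides the mistake: no KeyError, just a quietly broken enemy AI.
    return cfg.get("enemy_tether_radius", DEFAULT_TETHER_RADIUS)

print(enemy_tether_radius(config))  # 0.0, not the intended 30.0
```

A strict lookup (`cfg["enemy_tether_radius"]`) would have surfaced the typo immediately, which is one reason loud failures are often preferable to permissive defaults in AI systems.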
Facebook thinks the path to smarter AI involves guiding other AIs through cities:…’Talk The Walk’ task challenges AIs to navigate each other through cities, working as a team…Have you ever tried giving directions to someone over the phone? It can be quite difficult, and usually involves a series of dialogues between you and the person as you try to figure out where in the city they are in relation to where they need to get to. Now, researchers with Facebook and the Montreal Institute of Learning Algorithms (MILA) have set out to develop and test AIs that can solve this task, so as to further improve the generalization capabilities of AI agents. “For artificial agents to solve this challenging problem, some fundamental architecture designs are missing,” the researchers say. The challenge: The new “Talk The Walk” task frames the problem as a discussion between a ‘guide’ and a ‘tourist’ agent. The guide has access to a map of the city area the tourist is in, as well as the location the tourist wants to get to, while the tourist has access to an annotated image of their current location along with the ability to turn left, turn right, or move forward. The dataset: The researchers created the testing environment by obtaining 360-degree photographic views of neighborhoods in New York City, including Hell’s Kitchen, the East Village, Williamsburg, the Financial District, and the Upper East Side. They then annotated each image of each corner of each street intersection with a set of landmarks drawn from the following categories: bar, bank, shop, coffee shop, theater, playfield, hotel, subway, and restaurant.
They then had more than six hundred Mechanical Turk workers play a human version of the game, generating 10,000 successful dialogues from which AI systems can be trained (with over 2,000 successful dialogues available for each neighborhood the researchers gathered data for). Results: The researchers tested how well their systems can localize themselves – that is, develop a notion of where they are in the city. The results are encouraging, with localization models developed by the researchers achieving a higher localization score than humans (though humans take about half the number of steps to effectively localize themselves, showing that human sample efficiency remains substantially better than that of machines). Why it matters: Following a half decade of successful development and commercialization of basic AI capabilities like image and audio processing, researchers are trying to come up with the next major tasks and datasets they can use to test contemporary research algorithms and develop them further. Evaluation methods like those devised here can help us develop AI systems that need to interact with larger amounts of real-world data, potentially making it easier to evaluate how ‘intelligent’ these systems are becoming, since they are being tested directly on problems that humans solve every day and have good intuitions and evidence about the difficulty of. It’s worth noting, though, that the current version of the task is fairly limited: it involves a setting with simple intersections (predominantly four-way straight-road intersections), and the agents aren’t tested on very large areas or required to navigate particularly long distances. Read more: Talk the Walk: Navigating New York City through Grounded Dialogue (Arxiv).
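For intuition, here is a tiny sketch (my own illustration, not Facebook’s model) of the localization sub-task: a guide narrows down which annotated intersection corner the tourist is standing at by intersecting the candidate set with each landmark observation the tourist reports.

```python
# Toy map: each intersection corner is annotated with landmark categories,
# loosely mirroring the Talk The Walk annotation scheme.
city_map = {
    (0, 0): {"bar", "bank"},
    (0, 1): {"coffee shop"},
    (1, 0): {"bank", "restaurant"},
    (1, 1): {"bar", "theater"},
}

def localize(observations):
    """Keep only the corners consistent with every landmark set reported so far."""
    candidates = set(city_map)
    for seen in observations:
        candidates = {c for c in candidates if seen <= city_map[c]}
    return candidates

print(localize([{"bar"}]))            # two corners have a bar: still ambiguous
print(localize([{"bar"}, {"bank"}]))  # only (0, 0) has both: localized
```

The real task is far harder – observations are natural-language dialogue grounded in photos, not clean landmark sets – but the filtering intuition is the same: each exchange should shrink the guide’s uncertainty about the tourist’s position.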
Google opens a Seedbank for wannabe AI gardeners:…Seedbank provides access to a dynamic, online, code encyclopedia for AI systems…Google has launched Seedbank, a living encyclopedia about AI programming and research. Seedbank is a website that contains a collection of machine learning examples which can be interacted with via a live programming interface in Google ‘Colab’. You can browse ‘seeds’, which are major AI topic areas like ‘Recurrent Nets’ or ‘Text & Language’, then click into them for specific examples; for instance, when browsing ‘Recurrent Nets’ you can learn about Neural Translation with Attention and can open a live notebook that walks you through the steps involved in creating a language translation system. “For now we are only tracking notebooks published by Google, though we may index user-created content in the future. We will do our best to update Seedbank regularly, though also be sure to check TensorFlow.org for new content,” writes Michael Tyka in a blog post announcing Seedbank. Why it matters: AI research and development is heavily based around repeated cycles of empirical experimentation, so being able to interact with and tweak live programming examples of applied AI systems is a good way to develop better intuitions about the technology. Read more: Seedbank – discover machine learning examples (TensorFlow Medium blog). Read more: Seedbank official website.
AI Policy with **Matthew van der Merwe**:…Reader Matthew van der Merwe has kindly offered to write some sections about AI & Policy for Import AI. I’m (lightly) editing them. All credit to Matthew, all blame to me, etc. Feedback: firstname.lastname@example.org…
Cross-border collaboration, openness, and dual-use:…A new report urges better oversight of international partnerships on AI, to ensure that collaborations are not being exploited for military uses…The Australian Strategic Policy Institute has published a report by Elsa Kania outlining some of the dual-use challenges inherent to today’s scalable, generic AI techniques. Dual-use as a strategy: China’s military-civil fusion strategy relies on using the dual-use characteristics of AI to ensure new civil developments can be applied in the military domain, and vice versa. There are many cases of private labs and universities working on military tech, e.g. the collaboration between Baidu and CETC (a state-owned defence conglomerate). This blurring of the line between state/military and civilian research complicates partnerships between (e.g.) US companies and their Chinese counterparts. Policy recommendations: Organizations should assess the risks and possible externalities of existing partnerships in strategic technologies, establish systems of best practice for partnerships, and monitor individuals and organizations with clear links to foreign governments and militaries. Why this matters: Collaboration and openness are key drivers of innovation in science. In the case of AI, international cooperation will be critical in ensuring that we manage the risks and realize the opportunities of this technology. Nevertheless, it seems wise to develop systems to ensure that collaboration is done responsibly and with an awareness of risks. Read more: Technological entanglement.
Around the world in 23 AI strategies:…Tim Dutton has summarized the various national AI strategies governments have put forward in the past two years…Observations:
– AlphaGo really was a Sputnik moment in Asia. Two days after AlphaGo defeated Lee Sedol in 2016, South Korea’s president announced ₩1 trillion ($880m) in funding for AI research, adding “Korean society is ironically lucky, that thanks to the ‘AlphaGo shock’, we have learned the importance of AI before it is too late.”
– Canada’s strategy is the most heavily focused on investing in AI research and talent. Unlike other countries, its plan doesn’t include the usual policies on strategic industries, workforce development, and privacy issues.
– India is unique in putting social goals at the forefront of its strategy, focusing on the sectors that would see the biggest social benefits from AI applications. Its ambition is to then scale these solutions to other developing countries.
Why this matters: 2018 has seen a surge of countries putting forward national AI strategies, and this looks set to continue. The range of approaches is striking, even between fairly similar countries, and it will be interesting to see how these compare as they are refined and implemented in the coming years. The US is notably absent in terms of having a national strategy. Read more: Overview of National AI Strategies.
Risks and regulation in medical AI:…Healthcare is an area where cutting-edge AI tools such as deep learning are already having a real positive impact. There is some tension, though, between the cultures of “do no harm” and “move fast and break things.” We are at a tipping point: There are already systems on the market making decisions about patients’ treatment. This is not worrying in itself, provided these systems are safe. What is worrying is that there are already examples of autonomous systems making potentially dangerous mistakes. The UK is using an AI-powered triage app, which recommends whether patients should go to hospital based on their symptoms. Doctors have noticed serious flaws, with the app appearing to recommend staying at home for classic symptoms of heart attacks, meningitis, and strokes. Regulation is slow to adapt: Regulatory bodies are not taking seriously the specific risks of autonomous decision-making in medicine. By treating these systems like medical devices, they are allowing them to be used on patients without a thorough assessment of their risks and benefits. Regulators need to move fast, yet give proper oversight to these technologies. Why this matters: Improving healthcare is one of the most exciting and potentially transformative applications of AI. Nonetheless, it is critical that the deployment of AI in healthcare is done responsibly, using the established mechanisms for testing and regulating new medical treatments. Serious accidents can prompt powerful public backlashes against technologies (e.g. nuclear phase-outs in Japan and Europe post-Fukushima). If we are optimistic about the potential healthcare applications of AI, ensuring the technology is developed and applied safely is critical to realizing those benefits. Read more: Medical AI Safety: We have a problem.
OpenAI & ImportAI Bits & Pieces:
Better generative models with Glow:…We’ve released Glow, a generative model that uses a 1×1 reversible convolution to give it a richer representative capacity. Check out the online visualization tool to experiment with a pre-trained Glow model yourself, applying it to images you upload. Read more: Glow: Better Reversible Generative Models (OpenAI Blog).
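The key trick is that a 1×1 convolution is just one C×C channel-mixing matrix applied at every spatial position, so it can be inverted exactly and its contribution to the flow’s log-likelihood computed in closed form. A minimal numpy sketch (an illustration of the idea, not OpenAI’s released code):

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, Wd = 3, 4, 4  # channels, height, width

# Initialize W as a random rotation so it starts out invertible (as in Glow).
W = np.linalg.qr(rng.normal(size=(C, C)))[0]

def conv1x1_forward(x, W):
    # Apply the same CxC matrix across channels at every pixel.
    z = np.einsum("ij,jhw->ihw", W, x)
    # Each of the H*W positions contributes log|det W| to the log-likelihood.
    logdet = x.shape[1] * x.shape[2] * np.log(abs(np.linalg.det(W)))
    return z, logdet

def conv1x1_inverse(z, W):
    # Undo the channel mixing exactly with the matrix inverse.
    return np.einsum("ij,jhw->ihw", np.linalg.inv(W), z)

x = rng.normal(size=(C, H, Wd))
z, logdet = conv1x1_forward(x, W)
x_rec = conv1x1_inverse(z, W)
print(np.allclose(x, x_rec))  # True: the convolution is exactly invertible
```

Exact invertibility is what lets a flow model like Glow compute likelihoods and reconstruct inputs perfectly, rather than approximately as in a VAE.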
Subject: Seeking agency for AI Superintelligence contract.
Creative Brief: Company REDACTED has successfully created the first “AI Superintelligence” and is planning a global, multi-channel PR campaign to introduce the “AI Superintelligence” (henceforth known as ‘the AI’) to a global audience. We’re looking for pitches from experienced agencies with unconventional ideas for how to tell this story. This will become the most well-known media campaign in history.
Re: Subject: Seeking agency for AI Superintelligence contract.
Three words: Global. Cancer. Cure. Let’s start using the AI to cure cancer around the world. We’ll initially present these cures as random miracles and, over the course of several weeks, build narrative momentum until ‘the big reveal’. Alongside revealing the AI we’ll also release a fully timetabled plan for a global rollout of cures for all cancers for all people. We’re confident this will establish the AI as a positive force for humanity while creating the requisite excitement and ‘curiosity gap’ necessary for a good launch.
Re: Subject: Seeking agency for AI Superintelligence contract.
Vote Everything. Here’s how it works: we’ll start an online poll asking people to vote on a simple question of global import, like: which would you rather do, make all aeroplanes ten percent more fuel efficient, or reduce methane emissions from all cattle? We’ll make the AI fulfill the winning vote. If we run enough of these polls in enough areas, people will start to correlate the results of the polls with larger changes in the world. As this happens, online media will start to speculate about the AI system in question. We’ll be able to use this interest to drive attention to further polls to have it do further things. The final vote, before we reveal it, will ask people what date they want to find out who is ‘the force behind the polls’.
Things that inspired this story: Advertising agencies, the somewhat under-discussed question of “what do we do if superintelligence arrives”, historical moments of great significance.