Update: readers of the post have also pointed out this critique by Ernest Davis and this response to Davis by Rob Bensinger.
Update 2: Both Rob Bensinger and Michael Tetelman rightly pointed out that my definition of intelligence was sloppy. I’ve added a clarification that the definition is ‘for a given task’.
Update 3: The Future of Humanity Institute kindly invited me to give a seminar and have a discussion about these issues. You can find a recording of the talk here.
Update 4: In October Maciej Ceglowski gave a talk on “Superintelligence: The Idea that Eats Smart People” and provided the transcript here.
Cover of Superintelligence
This post is a discussion of Nick Bostrom’s book “Superintelligence”. The book has influenced the thinking of many of the world’s thought leaders, not just in artificial intelligence but across a range of domains (politicians, physicists, business leaders). In that light, and given this series of blog posts is about the “Future of AI”, it seemed important to read the book and discuss his ideas.
In an ideal world, this post would have contained more summary of the book’s arguments, and perhaps a later update will improve on that aspect. For the moment the review focuses on counter-arguments and perceived omissions (the post grew too long just covering those).
Bostrom considers various routes we have to forming intelligent machines and what the possible outcomes might be from developing such technologies. He is a professor of philosophy but has an impressive array of background degrees in areas such as mathematics, logic, philosophy and computational neuroscience.
So let’s start at the beginning and put the book in context by trying to understand what is meant by the term “superintelligence”.
Defining Intelligence
In common with many contributions to the debate on artificial intelligence, Bostrom never defines what he means by intelligence. Obviously, this can be problematic. Superintelligence, on the other hand, is defined: outperforming humans in every intelligent capability they express.
Personally, I’ve developed the following definition of intelligence: “Use of information to take decisions which save energy in pursuit of a given task”. Here by information I might mean data or facts or rules, and by saving energy I mean saving ‘free’ energy.
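To make that definition slightly more concrete, a hypothetical formalisation (my own notation, not anything that appears in Bostrom’s book) might read:

```latex
d^{*} = \operatorname*{arg\,min}_{d \in \mathcal{D}} \; \mathbb{E}\left[ F(d, T) \mid I \right]
```

where T is the given task, D is the set of available decisions, I is the information to hand, and F(d, T) is the free energy expended in pursuing the task under decision d. The point is only that intelligence, so defined, is relative to a task and to the information available.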
However, accepting Bostrom’s lack of definition of intelligence (and perhaps taking note of my own), we can still consider the routes to superintelligence Bostrom proposes. It is important to bear in mind that Bostrom is worried about the effect of intelligence on 30 year (and greater) timescales. These are timescales over which prediction is difficult. I think it is admirable that Nick is trying to address this, but I’m also keen to ensure that ideas which are at best implausible, and at worst a misrepresentation of current research, don’t become memes in the very important debate on the future of machine intelligence.
Technological Singularities
A technological singularity is when a technology becomes ‘transhuman’ in its possibilities, moving beyond our own capabilities through self improvement. It’s a simple idea, and often there’s nothing to be afraid of. For example, in mechanical engineering, we long ago began to make tools that could manufacture other tools. And indeed, the precision of the manufactured tools outperformed those that we could make by hand. This led to a ‘technological singularity’ of precision made tools. We developed ‘transhuman’ milling machines and lathes. We developed ‘superprecision’, precision that is beyond the capabilities of any human. Of course there are physical limits on how far this particular technological singularity has taken us. We cannot achieve infinitely precise machining tolerances.
In machining, the concept of ‘precision’ can be defined in terms of the ‘tolerance’ to which the resulting parts are made. Unfortunately, the lack of a definition of intelligence in Bostrom’s book makes it harder to ground the argument. In practice this means that the book often exploits different facets of intelligence and combines them in worst-case scenarios while simultaneously conflating conflicting principles.
Embodied Intelligence
The book gives little thought to the differing natures of machine and human intelligence. For example, there is no acknowledgment of the embodied nature of our intelligence. There are physical constraints on communication rates, and for humans these constraints are much stronger than for machines. Machine intelligences communicate with one another in gigabits per second; humans in bits per second. As for our relative computational abilities, the best estimates suggest that, in terms of the underlying computation in the brain, we compute much more quickly than machines. This means humans have a very high compute/communicate ratio. We might think of that as an embodiment factor. We can compute far more than we can communicate, leading to a backlog of conclusions within our own minds. Much of our human intelligence seems doomed to remain within ourselves. This dominates the nature of human intelligence. In contrast, this phenomenon is only weakly observed in computers, if at all. Computers can distribute the results of their intelligence at approximately the same rate that they compute them.
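A back-of-envelope sketch makes the asymmetry vivid. The specific figures below are rough assumptions of mine (estimates of brain computation in particular vary over several orders of magnitude); only the relative scales matter:

```python
# Rough comparison of compute/communicate ratios (the 'embodiment factor').
# All figures are order-of-magnitude assumptions for illustration only.

human_compute = 1e16        # ops/s: one common estimate of brain computation
human_communicate = 1e2     # bits/s: roughly the rate of speech or reading

machine_compute = 1e15      # ops/s: a large compute cluster
machine_communicate = 1e9   # bits/s: gigabit networking

print(f"human embodiment factor:   {human_compute / human_communicate:.0e}")     # ~1e14
print(f"machine embodiment factor: {machine_compute / machine_communicate:.0e}") # ~1e6
```

On these (admittedly crude) numbers the human ratio is some eight orders of magnitude larger: we can compute vastly more than we can ever share.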
Bostrom’s idea of superintelligence is an intelligence that outperforms us in all its facets. But if our emotional intelligence is a result of our limited communication ability, then it might be impossible to emulate it without also implementing the limited communication. Since communication also affects other facets of our intelligence we can see how it may, therefore, be impossible to dominate human abilities in the manner which the concept of superintelligence envisages. A better definition of intelligence would have helped resolve these arguments.
My own belief is that we became individually intelligent through a need to model each other (and ourselves) to perform better planning. So we evolved to undertake collaborative planning and developed complex social interactions. As a result our species, our collective intelligence, became increasingly complex (on evolutionary timescales) as we evolved greater intelligence within each of the individuals that made up our social group. Because of this process I find it difficult to fully separate our collective intelligence from our individual intelligences. I don’t think Bostrom suffers from this dichotomy, because my impression is that his book only views human intelligence as an individual characteristic. My feeling is that this is limiting, because any algorithms we create to emulate our intelligence will actually operate on societal scales, and the interaction of the artificial intelligence with our own should be considered in that context.
Prediction, Uncertainty and Intelligence Saturation
As humans, we are a complex society of interacting intelligences. Any predictions we make within that society would seem particularly fraught. Intelligent decision making relies on such predictions to quantify the value of a particular decision (in terms of the energy it might save). But when we want to consider future plausible scenarios we are faced with exponential growth of complexity in an already extremely complex system.
In practice we can make progress with our predictions by compressing the complex world into abstractions: simplifications of the world around us that are sufficiently predictive for our purposes, but retain tractability. However, using such abstractions involves introducing model uncertainty. Model uncertainty reflects the unknown way in which the actual world will differ from our simplifications.
Practitioners who have performed sensitivity analysis on time series prediction will know how quickly uncertainty accumulates as you try to look forward in time. There is normally a time frame ahead of which things become too misty to compute any more. Further computational power doesn’t help you in this instance, because uncertainty dominates. Reducing model uncertainty requires exponentially greater computation. We might try to handle this uncertainty by quantifying it, but even this can prove intractable.
So just like the elusive concept of infinite precision in mechanical machining, there is likely a limit on the degree to which an entity can be intelligent. We cannot predict with infinite precision, and this will render our predictions useless beyond some particular time horizon.
The limit on predictive precision is imposed by the exponential growth in complexity of exact simulation, coupled with the accumulation of error associated with the necessary abstraction of our predictive models. As we predict further forward, these uncertainties can saturate, dominating our predictions. As a result we often have only a very vague notion of what is to come. This limit on our predictive ability places a fundamental limit on our ability to make intelligent decisions.
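A minimal illustration of this accumulation and saturation, using a toy model of my own choosing rather than anything from the book: for a simple AR(1) time series, the k-step-ahead forecast variance grows with the horizon and then saturates at the stationary variance of the process.

```python
import numpy as np

# Forecast variance for the AR(1) process x_t = phi * x_{t-1} + eps_t,
# eps_t ~ N(0, sigma2). Parameter values are arbitrary illustrations.
phi, sigma2 = 0.95, 1.0

horizons = np.arange(1, 101)
forecast_var = sigma2 * (1 - phi ** (2 * horizons)) / (1 - phi ** 2)
stationary_var = sigma2 / (1 - phi ** 2)  # the saturation level

for k in (1, 10, 50, 100):
    pct = 100 * forecast_var[k - 1] / stationary_var
    print(f"k={k:3d}: variance={forecast_var[k - 1]:6.2f} ({pct:5.1f}% of saturation)")
```

Beyond a certain horizon the forecast is no better than the stationary distribution: more computation cannot help, because uncertainty, not compute, is what dominates.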
There was a time when people believed in perpetual motion machines (and quite a lot of effort was put into building them). The physical limitations of such machines were only understood in the 19th century (for example, the limit on the efficiency of heat engines was theoretically formulated by Carnot in 1824). We don’t yet know the theoretical limits of intelligence, but the intellectual gymnastics of some of the entities described in Superintelligence will likely be curtailed by the underlying mathematics. In practice the singularity will saturate; it’s just a question of where that saturation will occur relative to our current intelligence. Bostrom thinks it will be a long way ahead. I tend to agree, but I don’t think the results will be as unimaginable as is made out. Machines are already a long way ahead of us in many areas (weather prediction, for example), but I don’t find that unimaginable either.
Unfortunately, in his own analysis, Bostrom hardly makes any use of uncertainty when envisaging future intelligences. In practice, correct handling of uncertainty is critical in intelligent systems. By ignoring it, Bostrom can give the impression that a superintelligence would act with unnerving confidence. Indeed, the only point where I recollect uncertainty being mentioned is when it is used to unnerve us further. Bostrom refers to how he thinks a sensible Bayesian agent would respond to being given a particular goal. He suggests that, due to uncertainty, it would believe it might not have achieved its goal and would continue to consume the world’s resources in an effort to do so. In this respect the agent takes the inverse of the action suggested by the Greek skeptic Aenesidemus, who advocated suspension of judgment, or epoché, in the presence of uncertainty. Suspension of judgment (delay of decision making) here means specifically ‘refrain from action’. That is indeed the intelligent reaction to uncertainty: don’t needlessly expend energy when the outcome is uncertain (to do so would contradict my definition of intelligent behavior). This idea emerges as optimal behavior from a mathematical treatment of such systems when uncertainty is incorporated.
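A toy decision rule shows how ‘refrain from action’ falls out of the mathematics once uncertainty is priced in. This is my own sketch with invented parameters, not a model from the book: an agent with a risk-averse (exponential) utility acts only when the certainty equivalent of the uncertain payoff exceeds the energy cost of acting.

```python
# Toy epoché rule: act only when the certainty equivalent of acting is positive.
# With exponential utility and a Gaussian payoff N(mu, sigma^2), the certainty
# equivalent of acting at energy cost c is mu - c - 0.5 * a * sigma^2.
# All parameter values are invented for illustration.

def should_act(mu, sigma, cost, risk_aversion=1.0):
    certainty_equivalent = mu - cost - 0.5 * risk_aversion * sigma ** 2
    return certainty_equivalent > 0.0  # False: suspend judgement, expend nothing

print(should_act(mu=2.0, sigma=0.5, cost=1.0))  # True: confident enough to act
print(should_act(mu=2.0, sigma=2.0, cost=1.0))  # False: same mean, too uncertain
```

The same expected payoff no longer justifies action once the uncertainty is large: the agent’s optimal response is inactivity, not ever-greater resource consumption.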
This meme recurs throughout the book: the “savant idiot”, a gifted intelligence that does a particular thing really stupidly. As such it contradicts the concept of superintelligence. The superintelligence is better in all ways than us, but then somehow must also be taught values and morals. Values and morals are part of our complex emergent human behaviour, part of both our innate and our developed intelligence, both individually and collectively as a species. They are part of our natural conservatism that constrains extreme behavior. Constraints on extreme behaviour are necessary because of the general futility of absolute prediction. Just as in machining, we cannot achieve infinitely precise prediction.
Another way the savant idiot expresses itself in the book is through extreme confidence about its predictions of the future. The premise is that it will aggressively follow a strategy (potentially to the severe detriment of humankind) in an effort to fulfill a defined ‘final goal’. We’ll address the mistaken idea of a simplistic final goal below.
With a shallow reading Bostrom’s ideas seem to provide an interesting narrative. In the manner of an Ian Fleming novel, the narrative is littered with technical detail to increase the plausibility for the reader. However, in the same way that so many of Blofeld’s schemes are quite fragile when exposed to deeper analysis, many of Bostrom’s ideas are as well.
In reality, challenges associated with abstracting the world render the future inherently unpredictable, both to humans and to our computers. Even when many aspects of a system are broadly understood (such as our weather) prediction far into the future is untenable due to propagation of uncertainty through the system. Uncertainty tends to inflate as time passes rendering only near term prediction plausible. Inherent to any intelligent behavior is an understanding of the limits of prediction. Intelligent behaviour withdraws, when appropriate, to the “suspension of judgement”, inactivity, the epoché. This simple idea finesses many of the challenges of artificial intelligence that Bostrom identifies.
Whole Brain Emulation
Large sections of the book are dedicated to whole brain emulation, under the premise that this might be achievable before we have understood intelligence (superintelligence could then be achieved by hitting the turbo button and running those brains faster). Simultaneously, hybrid brain-machine systems are rejected as a route forward due to the perceived difficulty of developing such interfaces.
Such uneven-handed treatment of future possible paths to AI makes the book a very frustrating read. If we had the level of understanding we would need to fully emulate the brain, then we would know what is important to emulate in the brain to recreate intelligence. The path to that achievement would also involve improvements in our ability to directly interface with the brain. Given that there are immediate applications for patients, e.g. those with spinal problems or suffering from ALS, I think we will have developed hybrid systems that interface directly with the brain long before we have managed a full emulation of the human brain. Indeed, such applications may prove critical to developing our understanding of how the brain implements intelligence.
Perhaps Bostrom’s naive premise about the ease of brain emulation comes from a lack of understanding of what it would involve. It could not involve an exact simulation of each neuron in the brain down to the quantum level (and if it did, it would be many orders of magnitude more computationally demanding than is suggested in the text). Instead it would involve some level of abstraction: a determination of which aspects of the biochemistry and physics of the brain are important in generating our intelligence. Modelling and simulation of the brain would require that our simulations replace the actual mechanisms with the salient parts of those mechanisms that the brain makes use of for intelligence.
As we’ve mentioned in the context of uncertainty, an understanding of this sort of abstraction is missing from Superintelligence, but it is vital in modelling, and, I believe, it is vital in intelligence. Such abstractions require a deep understanding of how the brain is working, and such understandings are exactly what Bostrom says are impossible to determine for developing hybrid systems.
Over the 30 year time horizons that Bostrom is interested in, hybrid human-machine systems could become very important. They are highly likely to arise before a full understanding of the brain is developed, and if they did then they would change the way society would evolve. That’s not to say that we won’t experience societal challenges, but they are likely to be very different from the threats that Bostrom perceives. Importantly, when considering humans and computers, the line of separation between the two may not be as distinctly drawn as Bostrom suggests. It wouldn’t be human vs computer, but augmented human vs computer.
Dedication of Resources and Control of Progress
One aspect that, it seems, must be hard to understand if you’re not an active researcher is the nature of technological advance at the cutting edge. The impression Bostrom gives is that research in AI is a set of journeys with predefined goals: it is therefore merely a matter of assigning resources, planning, and navigating your way there. In his strategies for reacting to the potential dangers of AI, Bostrom suggests different areas in which we should focus our advances (which of these expeditions should we fund, and which should we impede). In reality, we cannot switch research directions on and off in such a simplistic manner. Most research in AI is less an organized journey and more an exploration of uncharted terrain. You set sail from Spain with government backing and a vague notion of a shortcut to the spice trade of Asia, but instead you stumble on an unknown continent of gold-ridden cities. Even then you don’t realize the truth of what you discovered within your own lifetime.
Even for the technologies that are within our reach, when we look to the past we see that people were normally overly optimistic about how rapidly new advances could be deployed and assimilated by society. In the 1970s Xerox PARC focused on the idea that the ‘office of the future’ would be paperless. It was a sensible projection, but before it came about (indeed it’s not quite here yet) there was an enormous proliferation in the use of paper: demand for paper actually increased.
Rather than the sudden arrival of the singleton, I suspect we’ll experience something very similar to our ‘journey to the paperless office’ with artificial intelligence technologies. As we develop AI further, we will likely require more sophistication from humans. For example, we won’t be able to replace doctors immediately, first we will need doctors who have a more sophisticated understanding of data. They’ll need to interpret the results of, e.g., high resolution genetic testing. They’ll need to assimilate that understanding with their other knowledge. The hybrid human-machine nature of the emergence of artificial intelligence is given only sparse treatment by Bostrom. Perhaps because the narrative of such co-evolution is much more difficult to describe than an independent evolution.
The explorative nature of research adds to the uncertainties about where we’ll be at any given time. Bostrom talks about how to control and guide our research in AI, but the inherent uncertainties require much more sophisticated thinking about control than Bostrom offers. In a stochastic system, a controller needs to be more intelligent and more reactive. The right action depends crucially on the time horizon. These horizons are unknown. Of course, that does not mean the research should be totally unregulated, but it means that those that suggest regulation need to be much closer to the nature of research and its capabilities. They need to work in collaboration with the community.
Arguments for large amounts of preparatory work on regulation are also undermined by the imprecision with which we can predict the nature of what will arrive and when it will come. In 1865 Jules Verne correctly envisaged that one day humans would reach the moon. However, the manner in which they reached the moon in his book proved very different from how we arrived in reality. Verne’s idea was that we’d do it using a very big gun. A good idea, but not correct. Verne was, however, correct that the Americans would get there first. One hundred and four years after he wrote, the goal was achieved through rocket power (and without any chickens inside the capsule).
This is not to say that we shouldn’t be concerned about the paths we are taking. There are many issues that the increasing use of algorithmic decision making raises, and they need to be addressed. It is to say that the particular concerns Bostrom raises are implausible because of the imprecision of our predictions over such time frames.
Final Goals and AI
Some of Bostrom’s perspectives may also come from a lack of experience in deploying systems in practice. The book focuses a great deal on the programmed ‘final goal’ of our artificial intelligences. It is true that most machine learning systems have objective functions, but an objective function doesn’t really map very nicely to the idea of a ‘final goal’ for an intelligent system. The objective functions we normally develop are really only effective for simplistic tasks, such as classification or regression. Perhaps the more complex notion of a reward in reinforcement learning is closer, but even then the reward tends to be task specific.
Arguably, if the system does have a simplistic ‘final goal’, then it is already failing the test of superintelligence: even the simplest human is a robust combination of, sometimes conflicting, goals that reflect the uncertainties around us. So if we are goal driven in our intelligence, then it is by sophisticated goals (akin to multi-objective optimisation), and each of us weights those goals according to sets of values that we each evolve, both across generations and within generations. We are sophisticated about our goals, rather than simplistic, because our environment itself is evolving, implying that our ways of behaving need to evolve as well. Any AI with a simplistic final goal would fail the test of being a ‘dominant intelligence’. It would not be a superintelligence because it would under-perform humans in one or more critical aspects.
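The contrast can be sketched in a few lines. The objectives and weights below are invented for illustration (the ‘paperclips’ objective is a nod to Bostrom’s own example):

```python
import numpy as np

# A simplistic 'final goal' versus a weighted combination of conflicting goals
# (a simple linear scalarisation of a multi-objective problem).

def final_goal(plan):
    return plan["paperclips"]  # one fixed objective, pursued regardless of cost

def value_weighted_score(plan, values):
    objectives = np.array([plan["paperclips"], plan["safety"], plan["leisure"]])
    return values @ objectives  # the weights play the role of evolving values

plan = {"paperclips": 10.0, "safety": -5.0, "leisure": -3.0}
values = np.array([0.2, 0.5, 0.3])  # values shift across and within generations

print(final_goal(plan))                    # 10.0: the savant idiot approves
print(value_weighted_score(plan, values))  # -1.4: the same plan scores badly
```

Even this crude weighting captures something the single objective cannot: a plan that is excellent on one axis can still be rejected because of what it costs elsewhere.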
Data and the Reality of Current Intelligence
One of the routes to superintelligence explored by Bostrom involves speeding up implementations of our own intelligence. Such speed would not necessarily bring significant advances in all domains of intelligence, due to fundamental limits on predictability: linear improvements in speed cannot counter exponential growth in the computation required. But Bostrom also seems to assume that speeding up intelligences will necessarily take them beyond our comprehension or control. In practice there are many examples where this is not the case. IBM’s Watson won Jeopardy!, but it did so by storing far more knowledge than we ever could and then using some simplistic techniques from language processing to recover those facts: it was a fancy search engine. These systems outperform us, but they are by no means beyond our comprehension. Still, that does not mean we shouldn’t fear this phenomenon.
Given the quantity of data we are making available about our own behaviors and the rapid ability of computers to assimilate and intercommunicate, it is already conceivable that machines can predict our behavior better than we can. Not by superintelligence but by scaling up of simple systems. They’ve finessed the uncertainty by access to large quantities of data. These are the advances we should be wary of, yet they are not beyond our understanding. Such speeding up of compute and acquisition of large data is exactly what has led to the recent revolution in convolutional neural networks and recurrent neural networks. All our recent successes are just more compute and more data.
This brings me to another major omission of the book, and this one is ironic, because it is the fuel for the current breakthroughs in artificial intelligence. Those breakthroughs are driven by machine learning. And machine learning is driven by data. Very often our personal data. Machines do not need to exceed our capabilities in intelligence to have a highly significant social effect. They outperform us so greatly in their ability to process large volumes of data that they are able to second guess us without expressing any form of higher intelligence. This is not the future of AI, this is here today.
Today’s deep neural networks do not perform well because someone did something new and clever. Those methods did not work with the amount of data we had available in the 1990s; they work with the quantity of data we have now. They require far more data than any human uses to perform similar tasks. So already, the nature of the intelligence around us is data dominated. Any future advances will capitalise further on this phenomenon.
The data we have comes about because of rapid interconnectivity and high storage (this is connected to the low embodiment factor of the computer). It is the consequence of the successes of the past and it will feed the successes of the future. Because current AI breakthroughs are based on accumulation of personal data, there is opportunity to control its development by reformation of our rules on data.
Unfortunately, this most obvious route to our AI futures is not addressed at all in the book.
Summary
Debates about the future of AI and machine learning are very important for society. People need to be well informed so that they continue to retain their individual agency when making decisions about their lives.
I welcome the entry of philosophers to this debate, but I don’t think Superintelligence is contributing as positively as it could have done to the challenges we face. In its current form many of its arguments are distractingly irrelevant.
I am not an apologist for machine learning, or a promoter of an unthinking march to algorithmic dominance. I have my own fears about how these methods will affect our society, and those fears are immediate. Bostrom’s book has the feel of an argument for doomsday prepping. But a challenge for all doomsday preppers is the quandary of exactly which doomsday they are preparing for. Problematically, if we become distracted by those images of Armageddon, we are in danger of ignoring existing challenges that urgently need to be addressed.
This is post 6 in a series. Previous post here