In 2012, physicist David Deutsch published an article in Aeon titled "Creative Blocks," exploring why, despite six decades of work on artificial intelligence, we are still nowhere close to creating an AGI: an artificial general intelligence.
We’ve made great leaps in making machines smarter in very specific areas: playing chess and Jeopardy, driving cars, recognizing spoken language, recognizing faces. But as impressive as these feats are, they are tasks for which there is a well-defined correct response to any given set of inputs or circumstances. These are all problems that can be addressed through traditional programming methods: devising algorithms that produce the desired result for any given input.
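The distinction matters because a task with a well-defined correct response can be captured as a checkable specification, which is exactly what Deutsch says an AGI's functionality cannot be. Here is a minimal sketch of the idea for a conventional task, sorting (the example and function names are mine, purely illustrative, not Deutsch's):

```python
from collections import Counter

def meets_spec(inp, out):
    """Specification for sorting: a short program that decides, for any
    input/output pair, whether the output is correct."""
    # 1. The output must be in nondecreasing order.
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    # 2. The output must be a rearrangement of the input.
    permutation = Counter(out) == Counter(inp)
    return ordered and permutation

# Any candidate implementation can be tested mechanically against the spec.
candidate = sorted  # stand-in for an implementation under test
for case in ([3, 1, 2], [], [5, 5, 1]):
    assert meets_spec(case, candidate(case))
```

For tasks like this, correctness is a property of the input/output relation alone, so both specification and testing are possible. Deutsch's claim, quoted below, is that the functionality of an AGI is precisely the kind that admits no such checker.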
Deutsch argues that what we consider to be General Intelligence – the kind of intelligence human beings possess – is qualitatively different, and cannot be produced by any programming technique we are familiar with. The key difference is that general intelligence involves creativity: the ability to produce new explanations. This is an extraordinarily difficult problem to solve, but in principle, not impossible:
Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation. This entails that everything that the laws of physics require a physical object to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.
I’ll summarize Deutsch’s argument with some excerpts from his article:
Unlike any functionality that has ever been programmed to date, this one can be achieved neither by a specification nor a test of the outputs. What is needed is nothing less than a breakthrough in philosophy, a new epistemological theory that explains how brains create explanatory knowledge and hence defines, in principle, without ever running them as programs, which algorithms possess that functionality and which do not.
Such a theory is beyond present-day knowledge. What we do know about epistemology implies that any approach not directed towards that philosophical breakthrough must be futile.
The prevailing misconception is that by assuming that ‘the future will be like the past’, it [an AGI] can ‘derive’ (or ‘extrapolate’ or ‘generalise’) theories from repeated experiences by an alleged process called ‘induction’… This casts thinking as a process of predicting that future patterns of sensory experience will be like past ones. That looks like extrapolation — which computers already do all the time (once they are given a theory of what causes the data). But in reality, only a tiny component of thinking is about prediction at all, let alone prediction of our sensory experiences. We think about the world: not just the physical world but also worlds of abstractions such as right and wrong, beauty and ugliness, the infinite and the infinitesimal, causation, fiction, fears, and aspirations — and about thinking itself.
The truth is that knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations and don’t need justification. Why? Because genuine knowledge, though by definition it does contain truth, almost always contains error as well. So it is not ‘true’ in the sense studied in mathematics and logic. Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data.
An AGI is qualitatively, not quantitatively, different from all other computer programs. The Skynet misconception likewise informs the hope that AGI is merely an emergent property of complexity, or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence). It is behind the notion that the unique abilities of the brain are due to its ‘massive parallelism’ or to its neuronal architecture, two ideas that violate computational universality. Expecting to create an AGI without first understanding in detail how it works is like expecting skyscrapers to learn to fly if we build them tall enough.
Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field. If one works towards programs whose ‘thinking’ is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity.
Clearing this logjam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.
I question Deutsch’s assumption that humans are the only animals with the capacity for creativity. I have read accounts of nonhuman animals devising novel behaviors, such as new techniques for getting access to food. Aside from that quibble, I think his main thesis is correct and important to understand.