After I read Vernor Vinge’s article in the Whole Earth Review, I wrote the following response. I don’t remember now (in 2013) whether I sent this as a letter to Whole Earth Review or posted it on The WELL, where I was very active at the time. Maybe both.
It’s odd that Vinge’s article hasn’t generated much discussion yet. I haven’t been able to get it out of my mind since I read it. It’s fascinating, and frightening. But I’ve had this nagging feeling that there’s something wrong with it. I’ll try to put some of my thoughts down in a semi-organized way.
I think the idea of machine consciousness might be a red herring. Consciousness is a slippery concept. We all know instinctively what it means in humans, but it’s much less clear what it means for other animals or for machines. I don’t know whether machine consciousness is possible. But I think it is neither necessary nor sufficient to produce the effects Vinge expects.
It’s easy to show that it’s not sufficient. Suppose we build a machine that is the functional equivalent of a human, including human-like consciousness. Well, we have several billion beings with human consciousness on Earth already; one more is not going to push us into a technological singularity, not unless that one more exceeds human abilities in some other way.
Is machine consciousness necessary? I don’t think so. Computers today solve staggeringly complex problems without a whiff of consciousness. Perhaps more to the point, our own unconscious minds are capable of amazing things. Consider the chemist (whose name escapes me) who discovered the molecular structure of benzene in his sleep. It’s plausible to me that computers could eventually design better computers without the benefit of anything we would consider consciousness or self-awareness.
When Vinge talks about superhuman intelligence, he falls into the trap of treating intelligence as a one-dimensional, quantitative measure, like an IQ score. Intelligence is another slippery concept, but whatever it is, it’s multi-dimensional.
Who was more intelligent, Einstein or Mozart? It’s a nonsensical question. Clearly they were both much more “intelligent” than an average human, but their intellectual skills were very different. And it’s doubtful that either of them would have been much good at designing computers.
Computers already exceed human intelligence in some obvious ways. We tend to discount the skills at which they excel because they seem mundane: digesting enormous amounts of information, performing computations, carrying out logical operations. But those are all aspects of what we consider intelligence in humans.
Computers can now defeat most humans at chess; it might not be long before they can reliably defeat all humans. Such a machine could fairly be said to have superhuman chess-playing intelligence, but it won’t be able to design a better version of itself.
Another example: computers are also now composing music. They aren’t challenging human composers yet, but that might be just a matter of time. I can imagine a computer someday composing a symphony worthy of Mozart, but in 1/1000th the time. It would have a superhuman musical intelligence, but it wouldn’t know how to design a better computer.
But what about a computer that is specifically built to design computers? I suspect that such a thing is even farther away than a Mozart computer, but suppose that eventually a computer is able, on its own, to design a better version of itself.
THAT MIGHT BE ALL IT CAN DO. It might result in nothing more than computers that build better computers that build better computers… endlessly. And even the 500th generation might not be able to design a toaster oven.
Actually, I don’t think we would reach a 500th generation without considerable human input. A computer designed to design better computers will have to work within the constraints of (1) the physical world, (2) what is currently known about the physical world, and (3) current engineering and manufacturing techniques. Would a computer-designing computer necessarily be able to perform the basic scientific and engineering research needed to break through various barriers?
Does any of this prove that Vinge’s technological singularity is impossible? No, but it makes me doubt that it’s as inevitable (or as close) as he says. But this brings me to my next point:
We might already be in a technological singularity, as Vinge defines it.
One of the means by which a singularity could occur, according to Vinge, is: “Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.”
I would argue that we’re already there. A human/computer team is already capable of intellectual feats far beyond anything a computerless human can do. It doesn’t matter that computers are not directly wired into our brains; if you look at the human/computer team as a black box, it can already outperform an unaided human at most tasks requiring intelligence.
The computers we have today probably could not exist without the CAD/CAM technology used to design and manufacture them. And the process is continuing. We might have crossed the event horizon a couple of decades ago, when we first started using computers to design better computers.
Despite all this, technological change is clearly happening faster and faster all the time. Where does it lead? It’s a scary question. I have no idea.