The June 2011 issue of Scientific American featured an article by Christof Koch and Giulio Tononi which put forth a test to determine whether a machine had become sentient – i.e., had reached a state of conscious awareness.
A preview of the article is here. (I guess you have to be a paying subscriber to read it in full.)
Essentially, they propose that consciousness is simply integrated information. It’s possible, they say, to calculate how much integrated information there is in a network, and they call this quantity phi. Testing for consciousness, then, is simply a matter of measuring phi. How is one to do that? The authors propose a test that consists of presenting an image that is somehow obviously “wrong” – for example, a man floating in the sky above a landscape, or an elephant perched on top of the Eiffel Tower. They presume that only a conscious machine would be able to identify “what’s wrong with this picture” because it would need to be conscious of many things about living in the world in order to do so.
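(The real phi of integrated information theory is notoriously expensive to compute, but the flavor of "measuring integration in a network" can be sketched with a much simpler, related quantity. The measure below is total correlation, not Tononi's phi, and every name in it is my own illustration, not anything from the article.)

```python
# Toy illustration (NOT Tononi's actual phi): total correlation, a
# simple information-theoretic measure of how "integrated" a system is.
# It is zero when the nodes are independent and grows as they become
# more tightly coupled.
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (in bits) of an empirical distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def total_correlation(states):
    """Sum of per-node entropies minus the joint entropy.
    states: list of equally likely tuples, one entry per node."""
    joint = entropy([tuple(s) for s in states])
    marginals = sum(entropy([s[i] for s in states])
                    for i in range(len(states[0])))
    return marginals - joint

# Three independent coin-flip nodes: no integration.
independent = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
# Three perfectly coupled nodes: highly integrated.
coupled = [(0, 0, 0), (1, 1, 1)]

print(total_correlation(independent))  # 0.0
print(total_correlation(coupled))      # 2.0
```

Even this toy shows why the measurement itself is uncontroversial: integration is a perfectly well-defined property of a network. The leap the authors make is from that number to consciousness.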
I don’t buy it.
Koch and Tononi state that “to be conscious, you need to be a single, integrated entity with a large repertoire of distinguishable states.” The computer sitting on my desk is such an entity, yet nobody would say that it is conscious. I suppose the authors would argue that this is because my computer’s level of “phi” is not high enough. But here’s the problem: they simply state that an entity with a sufficiently high level of “phi” is conscious, without providing any evidence or reason to believe that this is so. This is either a breathtaking leap of faith, or a new definition of the term “consciousness” that may not have much to do with our own experience of awareness of ourselves and our surroundings.
I know I am conscious because I experience my own awareness. I believe that other people are conscious because they’re made of the same stuff as me and it’s beyond credibility that I would be so unique as to be the only conscious human. Similarly, I believe that dogs and dolphins and chimpanzees have their own subjective experience because they behave as if they do, they are branches off the same evolutionary tree that produced me, and our biology is largely the same. Perhaps the phenomenon that we call consciousness is a product of that specific biology. How can we know otherwise?
Koch and Tononi propose testing for consciousness by asking a machine to determine whether the scene in a picture is “right” or “wrong”. But it seems to me that the ability to produce a correct response to this test is neither necessary nor sufficient for establishing consciousness. Why is it not necessary? Consider that a human child under a certain age would not be able to pass this test, nor would an adult in a dream state, or under the influence of hallucinogenic drugs. Yet nobody would deny that these humans are capable of subjective experience.
Why is it not sufficient? Because the test is simply a computational problem. Granted, it’s a problem that is beyond our current technology. But it is not hard to imagine that we could get there within a decade or two by extending current technology that can recognize faces, find the edges of objects in a scene, and so on. Combine these enhanced technologies with a sufficiently large library of recognizable objects and rules about how they typically behave, and you have a machine that can pass the test with no more need for conscious awareness than Watson (the computer that defeated the reigning Jeopardy! champions) has.
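To make the point concrete, here is a deliberately crude sketch of that kind of machine: an object library plus rules about where things typically appear. The hard part (actually recognizing the objects) is assumed away, and every object and rule below is invented for illustration.

```python
# Hypothetical sketch of a "what's wrong with this picture" checker:
# a library of objects paired with rules about where they plausibly
# appear. The recognizer that produces (object, location) pairs is
# assumed to exist; the rules here are invented for illustration.
RULES = {
    "man":      {"ground", "chair", "boat"},
    "elephant": {"ground", "water"},
    "bird":     {"ground", "sky", "tree"},
}

def scene_is_plausible(scene):
    """scene: list of (object, location) pairs from some (assumed)
    recognizer. Returns False if any object is somewhere implausible."""
    return all(loc in RULES.get(obj, set()) for obj, loc in scene)

print(scene_is_plausible([("bird", "sky")]))            # True
print(scene_is_plausible([("man", "sky")]))             # False: floating man
print(scene_is_plausible([("elephant", "tower_top")]))  # False: Eiffel elephant
```

A lookup table like this obviously has no inner life, yet scaled up far enough it would flag the floating man and the elephant on the Eiffel Tower just as the test demands. That is the whole objection in miniature.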
Machine consciousness may well be possible, and I’m actually very curious about this question. I just don’t believe that the test Koch and Tononi propose would answer it.
Still, Dr. Tononi has some interesting ideas about consciousness. He’s done research that shows some promise in detecting whether people who appear to be in a vegetative state have any conscious awareness. Tononi’s work is explored further in this article from the New York Times.
So… how COULD we know when a computer becomes conscious?