Let’s Digitize Our Minds and Live Forever! Or… Maybe Not.
“Millions long for immortality who don’t know what to do with themselves on a rainy Sunday afternoon.” – Susan Ertz
“The only thing wrong with immortality is that it tends to go on forever.” – Herb Caen
Some futurists and transhumanists, notably Ray Kurzweil, have proposed that eventually we will have the technology to upload the content and functionality of our brains into computers, and thereby live forever. Live forever after a fashion, that is, since the biological body would still die. But the idea is that the important things that give us our unique identities (our memories, personalities, and ongoing experience of living) would be preserved digitally, so we could go on being ourselves indefinitely.
I’m quite skeptical of this idea, for two reasons.
Reason 1: We are our entire bodies, not just our brains.
Granted, the brain is the center of rational thought, language, decision making, processing sensory input, memories of things that happened five seconds or fifty years ago, and much else that makes us who we are.
But there is no clean boundary between “brain” and “body.” It’s all one deeply interconnected system. Most of our conscious awareness of our body comes through neural pathways that let us feel everything from a pebble in our shoe to a mosquito landing on our neck. But even if a spinal cord injury cuts off all those neural connections, the brain still remains in active communication with the rest of the body by means of hormones carried through the bloodstream. Hormones are released within the brain itself by the pineal and pituitary glands. Others are produced in other parts of the body: the thyroid, the adrenal gland, the pancreas, testes and ovaries, to name a few. All of these, as well as neurally transmitted sensations from your head, face, limbs, torso, digestive tract, etc. contribute hugely to our experience of living and our sense of self. Oh, and let’s not forget the sexy bits! Living without all of that would be an impoverished experience indeed.
A fascinating BBC Future article, "The Mind-Bending Effects of Feeling Two Hearts," describes the disorienting psychological effects of having a mechanical heart implanted. One such patient, referred to as Carlos in the story, was studied by neuroscientist Agustin Ibanez.
Ibanez wondered what would happen when a person is fitted with an artificial heart. If Carlos experienced substantial changes, it would offer important new evidence that the mind extends well beyond the brain.
And that is exactly what he found. When Carlos tapped out his pulse, for instance, he followed the machine’s rhythms rather than his own heartbeat. The fact that this also changed other perceptions of his body – seeming to expand the size of his chest, for instance – is perhaps to be expected. But crucially, it also seemed to have markedly altered certain social and emotional skills. Carlos seemed to lack empathy when he viewed pictures of people having a painful accident, for instance. He also had more general problems with his ability to read others’ motives, and, crucially, intuitive decision making – all of which is in line with the idea that the body rules emotional cognition.
The article goes on to describe research showing that people with better body awareness, as determined by their ability to track their own heartbeats, have richer, more vivid emotional lives, and are better able to recognize the emotional states of others. Poor body awareness also seems to be linked to depression, a reduced ability to feel joy, more difficulty in making decisions, and a diminished sense of self.
It seems likely that a biological human brain, separated from its body but artificially kept alive, would rapidly descend into deep depression, perhaps even psychosis.
Of course, the idea here is not about keeping a human brain alive, but rather transferring the memories and functionality of a biological brain into a digital device. In order to preserve some semblance of a human experience it would be necessary to connect this digital brain to artificial sensory devices providing at least vision and hearing. I imagine most transhumanists consider this a given. But the evidence seems to suggest we would have to go much further, and construct artificial replacements not only for other neural sensory input, but for the effects that our biological endocrine systems provide, as well. Otherwise, while we might retain memories of our lives as humans, we would be leaving behind a huge part of what it feels like to be ourselves. The artificial entities that inherit our rational thoughts and memories would not quite BE us.
Now we’ve made the already difficult task of replacing our biological selves with artificial selves much, much harder.
But alright. Let’s suppose we are somehow able to develop the technology to build, or digitally simulate, an artificially constructed body that mimics every aspect of the human one. Great. Now we are confronted with…
Reason 2: We don’t know if conscious experience is possible in a non-biological brain.
The fact is, while we’re learning more all the time about how our brains work, we still have very little understanding of how we come by conscious experience. David Chalmers calls this the hard problem of consciousness. Why couldn’t we humans live our lives doing everything we do now but as automatons, simply behaving in response to environmental stimuli according to internal programming – albeit quite complex programming with the ability to self-modify as needed? We don’t know the answer to that. And yet most of us are conscious. Those of us who are not conscious are in a state we refer to as comatose, and are incapable of functioning.
Consider the vast diversity of creatures that we can imagine have some level of conscious experience. Whether we’re looking at dogs, snakes, sparrows, or fruit flies, their brains are made of more or less the same stuff as ours, and work in fundamentally similar ways.
It’s easy to imagine moving our minds to computers, because computers superficially seem rather like our own brains. They can manipulate information and perform many tasks that we can do in our heads. And the seemingly human-like intelligence of computers grows as they become more able to perform functions that used to be our exclusive domain: recognizing faces, parsing natural language, and so on.
But not only are computers made of completely different stuff than brains are, the way they work is actually nothing like the way brains work. Watson, IBM’s Jeopardy-playing computer, is able to handily defeat the best human players. But it’s really just some cleverly programmed algorithms linked to a massive database, designed for the specific purpose of winning at Jeopardy. When Watson correctly identifies Lady Madonna as the character linked to the Beatles lyric “Children at your feet wonder how you manage to make ends meet” the computer appears very smart. But Watson has no real understanding of the concept of what it means to “make ends meet” or even what children are – at least nothing resembling a human’s understanding of those words.
It is possible to program a computer to simulate human brain activity. But consider this: In the most accurate simulation of a human brain to date, a Japanese supercomputer required 40 minutes to simulate a single second’s worth of activity from just one percent of the neuronal network of a human brain.
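Taking the reported figures at face value, a back-of-the-envelope calculation shows just how far that simulation was from real time. The linear-scaling assumption below is mine, not the researchers', and it is optimistic, since connectivity grows faster than linearly with brain size:

```python
# Back-of-the-envelope scaling from the reported simulation figures.
# Assumption (mine): cost scales linearly with the fraction of the
# brain simulated -- optimistic, since connectivity grows faster
# than linearly.

sim_minutes = 40        # wall-clock time for the reported run
brain_fraction = 0.01   # 1% of the neuronal network
simulated_seconds = 1   # biological time covered by the run

# Wall-clock seconds needed per biological second, whole brain:
slowdown = (sim_minutes * 60) / simulated_seconds / brain_fraction
print(f"whole-brain slowdown: {slowdown:,.0f}x")               # 240,000x
print(f"one simulated day: about {slowdown / 365:.0f} years")  # ~658 years
```

In other words, even granting linear scaling, that machine would have needed centuries of wall-clock time to simulate a single day of whole-brain activity.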
Software-based artificial neural networks mimic some basic aspects of the way brains work, and have found use in applications like Google’s voice recognition application and Netflix’s movie recommendation engine. But they are still simply software programs running on very un-brain-like hardware. I’m a software developer myself. As confident as I am that a dog has conscious experience that is much like my own in many ways, I’m just as confident that neither these programs, nor the computers they run on, have any conscious experience whatsoever.
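To make the "simply software" point concrete, here is the basic unit these networks are built from: a weighted sum of inputs pushed through a squashing function, and nothing more. The weights and inputs below are arbitrary numbers chosen purely for illustration:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit of a software neural network: a weighted sum of its
    inputs passed through a sigmoid 'activation' function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squashes output to (0, 1)

# Arbitrary illustrative values; the "learning" in a real network is
# nothing more mysterious than the gradual adjustment of numbers like
# these weights.
print(artificial_neuron([0.5, 0.9], [0.4, -0.6], 0.1))
```

Stack enough of these units together and you get face recognition and movie recommendations, but at bottom it is ordinary arithmetic running on ordinary hardware.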
A Thought Experiment on the Pro-Digital-Brain Side
Here’s a thought experiment I’ve seen proposed a few times: Imagine that we replace a single neuron in a human brain with an artificial, digital neuron that performs exactly the same function. That is, it has an artificial axon and artificial dendrites that connect to all the same neurons the original cell did, and internally it employs artificial logic gates to convert its dendritic input into the same axonal output as the original. This artificial cell’s internal digital processor would have to use the same electrochemical means to communicate with its neighbor neurons as regular neurons do. While we don’t currently have the technology to do this, it seems like, in theory, it could work.
If we build out from there, converting neurons one by one from biological to digital, we could do away with the electrochemical interactions wherever two digital cells meet, replacing them with digital communication. Assuming we could find a way to do this without damaging the biological neurons in the process, presumably the human we’re operating on wouldn’t notice any difference. Little by little, more and more of the brain is replaced, until ultimately it is entirely digital and artificial. Now, so the thinking goes, we have a fully conscious digital person.
On the Other Hand, We Have These Rocks…
That’s one thought experiment, and as such, its outcome seems fairly plausible. But let me propose another. To introduce it, have a look at the XKCD cartoon “A Bunch of Rocks” by Randall Munroe:
The concept here is that our narrator is simulating the entire universe, the placement of rocks in the sand serving to digitally represent the state of every quantum of space, matter, and energy from moment to moment. And ultimately it is revealed that this arrangement of rocks isn’t just simulating the universe, it actually is the universe – or rather, it’s the hidden reality underlying everything.
While the cartoon guy uses rocks in the sand to represent reality, it would be logically equivalent (and a whole lot easier) to do so using bits in a digital computer. Either method is an implementation of a universal Turing machine, capable of performing any computation.
But representing everything in the entire universe at the quantum level in a computer is, ummm, rather beyond us. So let’s eliminate 99.9999999999999999999% of that task and consider representing just a single human brain.
Naturally, we’d be inclined to build this representation of a brain in a computer, because that would be the easiest. But as I pointed out before, it’s logically equivalent to do so by moving rocks around in the sand. For any given state that a computer’s memory can be in, we can represent that same state with some pattern of rocks. And for any transition from one state of a computer’s memory to another, we can represent the same transition by flipping some rocks around. It’s slower and a lot more work, but still logically identical.
Now the key question:
If we find it plausible that a sufficiently detailed digital representation of a human brain within a computer could actually be conscious, shouldn’t we be just as ready to accept that a representation of a brain consisting of a bunch of rocks in the sand could be conscious as well?
And yet… it’s pretty damn hard to imagine consciousness arising from a bunch of rocks being shuffled around in the sand, isn’t it? If we reject that notion, why shouldn’t we also reject the notion of consciousness arising in a manufactured device consisting of silicon-based semiconductors?
Accurately simulating the workings of a fully-functioning human brain in a computer is plenty difficult; we are still a long way from achieving it. But it seems to me that building a computer that has the power to simulate a human brain in real time is a prerequisite to even considering the possibility of transferring an existing human mind into a computer. Even when we do create a computer capable of such a simulation (and I think it’s likely we eventually will) it’s an open question whether that computer would, or could, have its own conscious experience – or even how we’d be able to tell if it did. If we asked it, it might tell us that yes, it’s awake. But would that mean that it really is?
It’s entirely possible that the phenomenon of consciousness is a property of certain configurations of living, organic matter and that it cannot be reproduced any other way. We simply don’t know.
A Difference That Makes a Difference
In a biological brain, the action of a given neuron has an inherent meaning. It’s easiest to recognize the inherent meaning of neurons involved with the lowest level of sensory perception: we know that there are neurons that specialize in detecting colors, edges of objects, etc. Presumably this carries up to higher levels of organization, such that certain patterns of firing have inherent meaning in the brain and to the organism as a whole.
I would theorize that this is a fundamental difference between an actual brain and any digital simulation of a brain, be it a simulation based in electronic circuitry or one based on rocks in the sand. In a computer, any bit of memory is exactly like any other bit. A given pattern of bits in a computer can represent an infinite number of different things: an image, a sound, text, numbers in a spreadsheet, whatever. The meaning of any particular pattern depends entirely and arbitrarily on how some programmer decided to encode data or program instructions. The same physical bits of computer memory might hold the text of an email in one instant, a picture of a walrus the next instant, and the digits of Pi a moment later. I don’t believe this is true of brains.
(Footnote: I recognize that in today’s computers there are specialized components that handle functions like display of graphics, production of sound, etc. But I’m speaking here of the computer’s general-purpose CPU and its associated random-access memory, which is where the main action takes place regardless of whether you’re running a web browser, a word processor, or a game of Tetris.)
The Question of Identity
Let’s assume for the moment that it’s possible to make a conscious digital version of yourself on a computer. It raises an interesting conundrum. The very fact that it’s digital means that any number of exact duplicates can be made. A thousand copies could be installed on a thousand computers, each of them with identical copies of your personality and memories. Each one would believe itself to be “you.”
Which one is you?
The Question of Continuity
Suppose that it’s possible to make a digital copy of your mind painlessly and nondestructively. You might decide to have it done while you’re still relatively young and at the peak of your intellectual prowess. Imagine then that this digital record is installed in a computer, so you can converse with it and assure yourself that it really is a perfect copy of your personality and memories.
Are you ready now to drink the hemlock? Bite the cyanide pill? Or do you still have the will to continue living your biological life?
Suppose that this digital version of you is not just a mind in a box, but is installed into an artificial body that is virtually indistinguishable from you in every way. This new you can do your job. It can take your place in your bowling league and your book club. It will love your spouse and your children exactly the way you do. It will take them on family vacations and camping trips. It’ll get angry when your teenage daughter takes the car without permission, and be proud when she graduates from college. It will experience all these things as richly and fully as you would. Does this change your answer?
If you’re not ready to give up your biological life and let the new you take over as your replacement, why not? And what does this say about the type of immortality you’ve achieved by copying your essence into a machine?
Final Thoughts
It’s been almost 50 years since we saw the HAL 9000 computer develop a mind of its own in 2001: A Space Odyssey. But despite the fact that we now have computers that can beat humans at chess and Jeopardy, and apps like Apple’s Siri that give us the illusion of communicating with a conscious thinking being like ourselves, building a computer that thinks like HAL is still far beyond us. It remains to be seen whether we ever will.
Even if we eventually manage to do that, digitizing the contents and functioning of a human mind and installing it in such a thinking computer is not exactly a slam dunk.
And even then, whether such a digital mind would actually be a self-aware and conscious being, rather than merely a convincing simulacrum, is not a given. I seriously doubt that it will happen.
I’d be happy to be proven wrong, though.
Further Reading
“Mind Uploading” – Wikipedia article
The Ravenous Brain, by Daniel Bor. Bor’s book comes closer to explaining the phenomenon of consciousness than anything else I’ve come across on the subject.
A few days after I posted this piece, Tim Wu published his essay “How To Live Forever” in the New Yorker. (http://www.newyorker.com/business/currency/live-forever)
I’ll quote the most interesting part of Wu’s essay:
“Perhaps a better approach for future Nicolas Flamels—or Ray Kurzweils or Dmitry Itskovs—is not copying our brains but, rather, trying to migrate the self to a new physical host. Like a hermit crab seeking a new shell, immortality may not really be about copying ourselves but about creating a process in which we slowly leave behind our current, biological homes and move somewhere more durable, a point made by Steven Novella, a clinical neurologist and an assistant professor at Yale.
How might this work? In the past two decades, scientists have gained a better understanding of neuroplasticity, or the idea that the brain is continually rewiring itself. Stroke victims, for example, sometimes recover lost functions after their brains reallocate control of certain actions from a damaged area. The idea would be to encourage the brain’s activities to slowly begin migrating to a massively interconnected electronic brain. Over time, if things went well, our intelligence and identity might be coaxed into leaving behind the old brain and taking refuge in a more durable unit (which would be attached to the robot body mentioned earlier).”
While this idea still presumes that consciousness in a nonbiological brain is possible, it does address one of my questions. If it’s possible to migrate one’s consciousness little by little from the original brain to an artificial one, it seems that continuity would be preserved.
That’s a big IF, though.