Future Computing: Incredibly Powerful, Incredibly Cheap, Incredibly Small, and Everywhere

Stanford engineering professor Jonathan Koomey points out that computing efficiency – the number of computations completed per kilowatt-hour of electricity used – has doubled about every 18 months since the 1940s. It’s analogous to Moore’s law, which says that computing performance – the number of computations performed per second – doubles every 18 months.

Extrapolating both of these trends out a few years, Koomey foresees computing devices that will use so little power they can scavenge what they need from ambient energy in the environment and run continuously for decades. Pointing the way toward this future, physicists have built a transistor from a single phosphorus atom.

So let’s imagine a future some decades from now in which computing devices are a billion times faster, a billion times smaller, a billion times cheaper, and a billion times more energy efficient compared to the best computers today. What does that future look like?

Read more about “Koomey’s Law” at Graphic Speak.

P.S. Jonathan Koomey gave this talk on March 15, 2012. Or in other words, 17 months ago – which means that processors should now be just about twice as fast and twice as efficient as they were when he spoke.
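The doubling arithmetic behind both claims is easy to check. Here’s a minimal Python sketch (assuming a clean 18-month doubling period, which the real trend only approximates):

```python
import math

DOUBLING_MONTHS = 18  # the doubling period Koomey observed

def improvement_factor(months):
    """Projected multiplier after `months`, at one doubling per 18 months."""
    return 2 ** (months / DOUBLING_MONTHS)

def months_to_reach(factor):
    """Months the trend needs to yield a given improvement factor."""
    return DOUBLING_MONTHS * math.log2(factor)

# 17 months after Koomey's talk: just about a 2x improvement.
print(f"{improvement_factor(17):.2f}x")          # 1.92x

# A billionfold improvement takes about 30 doublings, i.e. ~45 years.
print(f"{months_to_reach(1e9) / 12:.0f} years")  # 45 years
```

Thirty doublings give a factor of 2^30 ≈ 1.07 billion, so the billion-times-better future imagined above sits roughly 45 years out on the trend line – if the trend holds.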

PLATO Notes Released 40 Years Ago Today

On August 7, 1973, the first incarnation of Notes was released to the PLATO community. I was 17 years old at the time and had been developing the Notes software for the previous two or three months. It was what we would now call a “message board” or “forum,” where people could publicly post messages and others could read and respond – in other words, a many-to-many communications medium.

Since I had never seen an online message board (although a couple of other people were independently tinkering with similar ideas in other places) I had no model to work from, and only a vague notion of what might develop out of it. The concept of an online community did not yet exist.

David Woolley in the CERL PLATO computer room circa 1973


It was one thing to write a bunch of program code and debug it by writing test messages and responses to myself. It was another thing entirely to experience the social interaction that program enabled as large numbers of people began using it in ways I couldn’t have imagined.

By the end of 1974, PLATO also had real-time group chat (“Talkomatic”), one-to-one chat (“Term-Talk”), email (“Personal Notes” or “P-Notes”), and many multiplayer games – all of the infrastructure needed to support the world’s first online community. It was arguably the beginning of what we now think of as “social media”.

Mark Zuckerberg would be born ten years later.


The PLATO Notes opening screen as it appeared about 1976


Brian Dear has published an excellent 40-year anniversary piece about PLATO Notes at Medium.com.

Huw Price and the Existential Risk of Artificial Intelligence

Huw Price is the Bertrand Russell Professor of Philosophy at Cambridge University. In January 2013, the New York Times published an opinion piece he’d written, “Cambridge, Cabs and Copenhagen: My Route to Existential Risk”. Along with Jaan Tallinn and Martin Rees, he is working to establish the Centre for the Study of Existential Risk at Cambridge.

A few excerpts from Price’s article follow.

Regarding why it’s reasonable to think superhuman artificial intelligence is possible:

These are my reasons for thinking that at some point over the horizon, there’s a major tipping point awaiting us, when intelligence escapes its biological constraints; and that it is far from clear that that’s good news, from our point of view. To sum it up briefly, the argument rests on three propositions: (i) the level and general shape of human intelligence is highly contingent, a product of biological constraints and accidents; (ii) despite its contingency in the big scheme of things, it is essential to us – it is who we are, more or less, and it accounts for our success; (iii) technology is likely to give us the means to bypass the biological constraints, either altering our own minds or constructing machines with comparable capabilities, and thereby reforming the landscape.

Regarding the difficulty of defining what we mean by “intelligence”:

Don’t think about what intelligence is, think about what it does. Putting it rather crudely, the distinctive thing about our peak in the present biological landscape is that we tend to be much better at controlling our environment than any other species. In these terms, the question is then whether machines might at some point do an even better job (perhaps a vastly better job). If so, then all the above concerns seem to be back on the table, even though we haven’t mentioned the word “intelligence,” let alone tried to say what it means.

Regarding why super artificial intelligence is an unpredictable game changer:

The claim that we face this prospect may seem contestable. Is it really plausible that technology will reach this stage (ever, let alone soon)? I’ll come back to this. For the moment, the point I want to make is simply that if we do suppose that we are going to reach such a stage – a point at which technology reshapes our human Mount Fuji, or builds other peaks elsewhere – then it’s not going to be business as usual, as far as we are concerned. Technology will have modified the one thing, more than anything else, that has made it “business as usual” so long as we have been human.

Price concludes:

At present, then, I see no good reason to believe that intelligence is never going to escape from the head, or that it won’t do so in time scales we could reasonably care about. Hence it seems to me eminently sensible to think about what happens if and when it does so, and whether there’s something we can do to favor good outcomes over bad, in that case. That’s how I see what Rees, Tallinn and I want to do in Cambridge (about this kind of technological risk, as about others): we’re trying to assemble an organization that will use the combined intellectual power of a lot of gifted people to shift some probability from the bad side to the good.

Worrying about catastrophic risk may have similar image problems. We tend to be optimists, and it might be easier, and perhaps in some sense cooler, not to bother. So I finish with two recommendations. First, keep in mind that in this case our fate is in the hands, if that’s the word, of what might charitably be called a very large and poorly organized committee – collectively shortsighted, if not actually reckless, but responsible for guiding our fast-moving vehicle through some hazardous and yet completely unfamiliar terrain. Second, remember that all the children – all of them – are in the back. We thrill-seeking grandparents may have little to lose, but shouldn’t we be encouraging the kids to buckle up?

Unsatisfying Encounters with Academics of Renown, Episode 2: Russell Targ

It was 1983. February, I think. I was in the San Francisco Bay Area to see friends. This particular evening I was visiting my friend Mike in Palo Alto.

Mike’s a really great guy. The previous year he and his roommate had let me sleep on the floor of their tiny apartment at Stanford for several weeks while I worked on Project JANUS, John Lilly’s dolphin communication effort. Mike had moved out of that apartment, though, and was now renting a room in the home of his employer, Joan, who ran a computer literacy program for grade school kids.

Joan’s last name was Targ [1]. Mike had told me this. But it didn’t occur to me that she might be married to Russell Targ, even though I’d read one of Russell Targ’s books, and even though I knew he worked at Stanford Research Institute. I didn’t make the connection until I was actually in Russell Targ’s house and Mike was introducing me to him.

So I knew who he was, but of course he didn’t know me from Adam. And in any case he was busy making yogurt, and paying a lot more attention to his cultured dairy product than to the stranger in his kitchen. I said that I’d read his book, but that didn’t seem to make much of a dent in his consciousness. He just asked if I was a student. Which of course I wasn’t, having graduated from college six years prior.

By way of introduction, Mike mentioned that I’d been working on John Lilly’s project. THAT got a reaction. Targ said, “John Lilly’s a crackpot. There’s no reason to think that dolphins could ever use language.”

I didn’t feel like disputing the point, partly because his mind was obviously made up, and partly because I really had no idea myself whether dolphins could learn to use language. I had an open mind on the question, but I actually hadn’t seen much evidence one way or the other while working on Lilly’s project. So I just let it go, and Mike and I left Dr. Targ to his yogurt.

Well, so what, you may be thinking. Lots of people would feel that trying to teach dolphins a language is a nutty idea.

Indeed.

But here’s why this incident struck me as weird and slightly humorous: Russell Targ’s own field of study is the paranormal. You know, ESP. He had published at least half a dozen books on the subject. The one I’d read was called “Mind Reach: Scientists Look at Psychic Ability.” In it, Targ talks about experiments he’s done with what he calls “remote viewing.” One person goes off somewhere across town and randomly chooses some scene to look at — a bridge, say — while another person sits in the laboratory and tries to “receive” what the first person is seeing. Targ claims that the receivers, who are just ordinary, untrained blokes like you and me, are able to draw pretty accurate renditions of what the senders (also just regular joes) are looking at. Pretty interesting stuff – although to my knowledge, nobody has ever replicated Targ’s results. I tried it myself a couple of times with a friend who shares my interest in how minds work, and we got zilch.

As for dolphin language: dolphins have large brains, much larger than those of the apes that have been taught a form of sign language, and their brain-to-body-weight ratio is second only to that of humans. They demonstrate their intelligence in many ways, they obviously communicate with each other, and the sounds they make are mathematically as rich in information content as human speech. Yet to date we have made very little progress in understanding dolphin vocalizations, or how and what they communicate to each other.

None of this proves that dolphins do or can use language comparable to human language, but it does leave the question wide open.

Anyway, it struck me as strange that Russell Targ, of all people, would be so closed-minded about dolphin language as to believe that the question isn’t even worth considering. Talk about people who live in glass houses.

So why would he say such a thing?

One possibility that comes to mind is that it was a defensive reaction. His own research puts him so far out on a limb that his credibility as a scientist is pretty tenuous, so he avoids at all costs any appearance of being remotely associated with anything else that might seem flaky.

In any case, it’s another example of a scientist succumbing to the very human tendency to hold firm beliefs based on little more than a popular preconceived notion. Science dogma, in other words.

Here’s Russell Targ in a short video titled “An Open Mind”.

You can read more about Targ’s work and see videos of him speaking at his web site, ESPresearch.


  [1] I later learned that Joan Targ was the older sister of chess champion Bobby Fischer. Which has nothing to do with this story; it’s just an interesting tangent.

Science vs. Science Dogma

“There is a conflict in the heart of science between science as a method of inquiry based on reason, evidence, hypothesis, and collective investigation, and science as a belief system, or a world view. And unfortunately the world view aspect of science has come to inhibit and constrict the free inquiry which is the very lifeblood of the scientific endeavor.” – Rupert Sheldrake

Here is Rupert Sheldrake’s very controversial TEDx talk from January 2013, which he titled “The Science Delusion.”

Sheldrake says that since the 19th century, science has been conducted under the world view of philosophical materialism. He goes on to identify ten dogmas or assumptions of science:

1. Nature is mechanical, machine-like.
2. Matter is unconscious.
3. The laws of nature are fixed.
4. The total amount of matter & energy never changes (except at the moment of the big bang).
5. Nature is purposeless; evolution has no direction.
6. Biological heredity is material.
7. Memories are stored in your brain as material traces.
8. Your mind is entirely inside your head.
9. Psychic phenomena like telepathy are impossible because of assumption #8.
10. Mechanistic medicine is the only kind that works.

I would take issue with several claims Sheldrake makes in his talk, in two general areas. First, he depicts scientists as being more rigid than they actually are. For example, he ascribes to them a firm belief that the fundamental constants of physics never change, whereas this is actually the subject of considerable investigation and debate. Second, he mentions several beliefs of his own, such as the hypothesis of “morphic resonance” – the idea that everything has a collective memory – and claims that there is evidence for these things without actually presenting any.

Despite that, I’m convinced he is correct in his point that there is a very prevalent science dogma that tends to reject out of hand anything that does not fit the materialist world view. Scientists who investigate anything that smacks of “psychic” phenomena, for instance, are typically sidelined and widely regarded as quacks.

The scientific method asks us to be wary of fixed beliefs, and to remain open to new observations and data even if they run counter to what we have always assumed to be true – even if they run counter to a scientific theory that has been tested and retested for decades. It’s always possible that we simply haven’t discovered the theory’s flaws or limitations yet.

It seems very odd to me that most scientists can be so closed-minded about a subject like the location and limits of consciousness, while at the same time openly recognizing that we have very little understanding of how consciousness works, or even why we are conscious at all. I have personally had a few experiences that do not seem explainable by any mechanism known to science. (Here’s one example.)

It seems odder yet that a scientist who’s actually working in one of these off-limits areas would have a knee-jerk “that’s hokum” reaction to work another scientist is doing in some other off-limits area. But I’ve encountered this, as well. I’ll write about that experience in a future post.


The above video is labeled as “banned” by TED, which is not quite accurate. What actually happened is that the talk was taken out of the main library of TED talks at the recommendation of the TED science board due to factual problems, and moved to the TED blog where a lively discussion has taken place. You can read the entire controversy here:

Open for discussion: Graham Hancock and Rupert Sheldrake from TEDxWhitechapel