I got this in a fortune cookie a couple of years ago. Still not sure how to tell whether it has come true.
In the summer of 2000, my friend Sheila Kelleher asked me for help with a project. She was working at the Hennepin County Medical Center at the time, and saw firsthand how a lot of young children were terrified of doctor visits – and especially of getting shots, of course. She came up with the idea of making a photographic picture book for preschool kids showing everything a child would experience during a typical “well child” visit at the pediatric clinic. Being a skilled photographer, Sheila felt confident about her ability to create the illustrations. But she wasn’t a writer, and that’s why she asked for my help.
It sounded to me like a fun and worthwhile project. Besides, for a long time I’d had thoughts of writing books for children but hadn’t done much about it, and here was an opportunity landing in my lap.
We made some key decisions at the outset:
- We would use a teddy bear as a stand-in for the child and avoid any reference to the bear’s gender. That way the bear could represent a boy or a girl of any race.
- We would use actual medical professionals in their real clinical setting, so the pictures would depict, as accurately as possible, what a child would actually experience.
- We would confront the scariest part of the visit – the shots – honestly and head-on.
- We would have the bear narrate the story, so we could depict the experience from the child’s point of view.
- As much as possible, we would have the bear making choices and being an active participant, rather than just having things done to him/her.
- We would use a racially diverse cast of male and female professionals in the photographs.
- We would make the story fun to read for both kids and their parents.
Sheila did most of the heavy lifting on the project, which included selling the idea to Hennepin County Medical Center and securing its financial support and permission to use its staff and facilities. And of course, she did all the photography. Meanwhile, I just wrote the story.
“Well Bear, Brave Bear: My Visit to the Doctor” was finally completed in 2003, with HCMC printing 12,000 copies to distribute to patients in their pediatric clinic. We had the story translated into Spanish, and had the English and Spanish printed side-by-side on every page. In 2005, we printed another 6,000 copies for Hennepin County Baby Tracks, a program to promote and track child immunizations.
Then somehow another eight years went by. We thought about doing a follow-up project, like Well Bear Goes to the Dentist, but that presented more difficulties than the doctor visit. For one thing, our bear didn’t have a mouth.
For years, I’ve intended to put the whole book online for free viewing. I finally got around to it last week. And here it is:
Stanford engineering professor Jonathan Koomey points out that computing efficiency – the number of computations completed per kilowatt-hour of electricity used – has doubled about every 18 months since the 1940s. It’s analogous to Moore’s law, which says that computing performance – the number of computations performed per second – doubles every 18 months.
Extrapolating both of these trends out a few years, Koomey foresees computing devices that will use so little power they can scavenge what they need from ambient energy in the environment and run continuously for decades. Pointing the way toward this future, physicists have built a transistor from a single phosphorus atom.
So let’s imagine a future some decades from now in which computing devices are a billion times faster, a billion times smaller, a billion times cheaper, and a billion times more energy efficient compared to the best computers today. What does that future look like?
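The arithmetic behind that thought experiment is easy to check. Here's a back-of-the-envelope sketch in Python (assuming, as a pure extrapolation, that the 18-month doubling continues indefinitely – the function names are mine, just for illustration): a billion-fold improvement is about 30 doublings, which at 18 months per doubling works out to roughly 45 years.

```python
import math

DOUBLING_MONTHS = 18  # Koomey's observed doubling period for computations per kWh

def improvement_factor(months):
    """Cumulative improvement after `months` of steady 18-month doublings."""
    return 2 ** (months / DOUBLING_MONTHS)

def months_to_reach(factor):
    """Months of steady doubling needed to reach a given improvement factor."""
    return DOUBLING_MONTHS * math.log2(factor)

# log2(1e9) is about 29.9 doublings; at 18 months each, that's ~45 years.
print(round(months_to_reach(1e9) / 12, 1))  # ≈ 44.8 years

# And 17 months of doubling gives a factor of 2**(17/18), i.e. just shy of 2x.
print(round(improvement_factor(17), 2))  # ≈ 1.92
```

The same function also checks the postscript below: 17 months after Koomey's talk, the trend predicts a factor of about 1.9 – "just about twice", as claimed.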
P.S. Jonathan Koomey gave this talk on March 15, 2012. Or in other words, 17 months ago – which means that processors should now be just about twice as fast and twice as efficient as they were when he spoke.
On August 7, 1973, the first incarnation of Notes was released to the PLATO community. I was 17 years old at the time, and had been developing the Notes software for the past two or three months. It was what we would now call a “message board” or “forum” where people could publicly post messages and others could read and respond – in other words, a many-to-many communications medium.
Since I had never seen an online message board (although a couple of other people were independently tinkering with similar ideas in other places) I had no model to work from, and only a vague notion of what might develop out of it. The concept of an online community did not yet exist.
It was one thing to write a bunch of program code and debug it by writing test messages and responses to myself. It was another thing entirely to experience the social interaction that program enabled as large numbers of people began using it in ways I couldn’t have imagined.
By the end of 1974, PLATO also had real-time group chat (“Talkomatic”), one-to-one chat (“Term-Talk”), email (“Personal Notes” or “P-Notes”) and many multiplayer games: all the infrastructure needed to support the world’s first online community. It was arguably the beginning of what we now think of as “social media”.
Mark Zuckerberg would be born ten years later.
Brian Dear has published an excellent 40-year anniversary piece about PLATO Notes at Medium.com.
Huw Price is the Bertrand Russell professor of philosophy at Cambridge University. In January 2013, the New York Times published an opinion piece he’d written, “Cambridge, Cabs and Copenhagen: My Route to Existential Risk”. Along with Jaan Tallinn and Martin Rees, he is working to establish at Cambridge the Centre for the Study of Existential Risk.
A few excerpts from Price’s article follow.
Regarding why it’s reasonable to think superhuman artificial intelligence is possible:
These are my reasons for thinking that at some point over the horizon, there’s a major tipping point awaiting us, when intelligence escapes its biological constraints; and that it is far from clear that that’s good news, from our point of view. To sum it up briefly, the argument rests on three propositions: (i) the level and general shape of human intelligence is highly contingent, a product of biological constraints and accidents; (ii) despite its contingency in the big scheme of things, it is essential to us – it is who we are, more or less, and it accounts for our success; (iii) technology is likely to give us the means to bypass the biological constraints, either altering our own minds or constructing machines with comparable capabilities, and thereby reforming the landscape.
Regarding the difficulty of defining what we mean by “intelligence”:
Don’t think about what intelligence is, think about what it does. Putting it rather crudely, the distinctive thing about our peak in the present biological landscape is that we tend to be much better at controlling our environment than any other species. In these terms, the question is then whether machines might at some point do an even better job (perhaps a vastly better job). If so, then all the above concerns seem to be back on the table, even though we haven’t mentioned the word “intelligence,” let alone tried to say what it means.
Regarding why super artificial intelligence is an unpredictable game changer:
The claim that we face this prospect may seem contestable. Is it really plausible that technology will reach this stage (ever, let alone soon)? I’ll come back to this. For the moment, the point I want to make is simply that if we do suppose that we are going to reach such a stage – a point at which technology reshapes our human Mount Fuji, or builds other peaks elsewhere – then it’s not going to be business as usual, as far as we are concerned. Technology will have modified the one thing, more than anything else, that has made it “business as usual” so long as we have been human.
At present, then, I see no good reason to believe that intelligence is never going to escape from the head, or that it won’t do so in time scales we could reasonably care about. Hence it seems to me eminently sensible to think about what happens if and when it does so, and whether there’s something we can do to favor good outcomes over bad, in that case. That’s how I see what Rees, Tallinn and I want to do in Cambridge (about this kind of technological risk, as about others): we’re trying to assemble an organization that will use the combined intellectual power of a lot of gifted people to shift some probability from the bad side to the good.
Worrying about catastrophic risk may have similar image problems. We tend to be optimists, and it might be easier, and perhaps in some sense cooler, not to bother. So I finish with two recommendations. First, keep in mind that in this case our fate is in the hands, if that’s the word, of what might charitably be called a very large and poorly organized committee – collectively shortsighted, if not actually reckless, but responsible for guiding our fast-moving vehicle through some hazardous and yet completely unfamiliar terrain. Second, remember that all the children – all of them – are in the back. We thrill-seeking grandparents may have little to lose, but shouldn’t we be encouraging the kids to buckle up?