Huw Price and the Existential Risk of Artificial Intelligence
Huw Price is the Bertrand Russell Professor of Philosophy at the University of Cambridge. In January 2013, the New York Times published an opinion piece he’d written, “Cambridge, Cabs and Copenhagen: My Route to Existential Risk.” Along with Jaan Tallinn and Martin Rees, he is working to establish the Centre for the Study of Existential Risk at Cambridge.
A few excerpts from Price’s article follow.
Regarding why it’s reasonable to think superhuman artificial intelligence is possible:
These are my reasons for thinking that at some point over the horizon, there’s a major tipping point awaiting us, when intelligence escapes its biological constraints; and that it is far from clear that that’s good news, from our point of view. To sum it up briefly, the argument rests on three propositions: (i) the level and general shape of human intelligence is highly contingent, a product of biological constraints and accidents; (ii) despite its contingency in the big scheme of things, it is essential to us – it is who we are, more or less, and it accounts for our success; (iii) technology is likely to give us the means to bypass the biological constraints, either altering our own minds or constructing machines with comparable capabilities, and thereby reforming the landscape.
Regarding the difficulty of defining what we mean by “intelligence”:
Don’t think about what intelligence is, think about what it does. Putting it rather crudely, the distinctive thing about our peak in the present biological landscape is that we tend to be much better at controlling our environment than any other species. In these terms, the question is then whether machines might at some point do an even better job (perhaps a vastly better job). If so, then all the above concerns seem to be back on the table, even though we haven’t mentioned the word “intelligence,” let alone tried to say what it means.
Regarding why superhuman artificial intelligence would be an unpredictable game changer:
The claim that we face this prospect may seem contestable. Is it really plausible that technology will reach this stage (ever, let alone soon)? I’ll come back to this. For the moment, the point I want to make is simply that if we do suppose that we are going to reach such a stage – a point at which technology reshapes our human Mount Fuji, or builds other peaks elsewhere – then it’s not going to be business as usual, as far as we are concerned. Technology will have modified the one thing, more than anything else, that has made it “business as usual” so long as we have been human.
Price concludes:
At present, then, I see no good reason to believe that intelligence is never going to escape from the head, or that it won’t do so in time scales we could reasonably care about. Hence it seems to me eminently sensible to think about what happens if and when it does so, and whether there’s something we can do to favor good outcomes over bad, in that case. That’s how I see what Rees, Tallinn and I want to do in Cambridge (about this kind of technological risk, as about others): we’re trying to assemble an organization that will use the combined intellectual power of a lot of gifted people to shift some probability from the bad side to the good.
Worrying about catastrophic risk may have similar image problems. We tend to be optimists, and it might be easier, and perhaps in some sense cooler, not to bother. So I finish with two recommendations. First, keep in mind that in this case our fate is in the hands, if that’s the word, of what might charitably be called a very large and poorly organized committee – collectively shortsighted, if not actually reckless, but responsible for guiding our fast-moving vehicle through some hazardous and yet completely unfamiliar terrain. Second, remember that all the children – all of them – are in the back. We thrill-seeking grandparents may have little to lose, but shouldn’t we be encouraging the kids to buckle up?