“An Inferior Intelligence Will Always Depend On A Superior One For Its Survival”


As I mentioned last week, Elon Musk, among other Silicon Valley stalwarts, has been on a Nick Bostrom bender ever since the publication of Superintelligence. In a smart Guardian profile by Tim Adams, the Oxford philosopher is depicted as being of two minds, believing technology may prove the Holy Grail or read us our Last Rites. That's the dual reality of a Transhumanist who studies existential risk.

Bostrom tells his interviewer that he expects the risk of human extinction from AI to be largely ignored despite his clarion call. "It will come gradually and seamlessly without us really addressing it," he says.

There seem to be only two cautions regarding Bostrom's work: 1) Attention could shift from immediate crises (e.g., climate change) to longer-term ones, and 2) Rules developed today for a possible future explosion of machine intelligence will have to be very flexible, since there's so much information we don't yet possess or can't comprehend.

An excerpt:

Bostrom sees those implications as potentially Darwinian. If we create a machine intelligence superior to our own, and then give it freedom to grow and learn through access to the internet, there is no reason to suggest that it will not evolve strategies to secure its dominance, just as in the biological world. He sometimes uses the example of humans and gorillas to describe the subsequent one-sided relationship and – as last month’s events in Cincinnati zoo highlighted – that is never going to end well. An inferior intelligence will always depend on a superior one for its survival.

There are times, as Bostrom unfolds various scenarios in Superintelligence, when it appears he has been reading too much of the science fiction he professes to dislike. One projection involves an AI system eventually building covert “nanofactories producing nerve gas or target-seeking mosquito-like robots [which] might then burgeon forth simultaneously from every square metre of the globe” in order to destroy meddling and irrelevant humanity. Another, perhaps more credible vision, sees the superintelligence “hijacking political processes, subtly manipulating financial markets, biasing information flows, or hacking human-made weapons systems” to bring about the extinction.

Does he think of himself as a prophet?

He smiles. “Not so much. It is not that I believe I know how it is going to happen and have to tell the world that information. It is more I feel quite ignorant and very confused about these things but by working for many years on probabilities you can get partial little insights here and there. And if you add those together with insights many other people might have, then maybe it will build up to some better understanding.”

Bostrom came to these questions by way of the transhumanist movement, which tends to view the digital age as one of unprecedented potential for optimising our physical and mental capacities and transcending the limits of our mortality. Bostrom still sees those possibilities as the best case scenario in the superintelligent future, in which we will harness technology to overcome disease and illness, feed the world, create a utopia of fulfilling creativity and perhaps eventually overcome death. He has been identified in the past as a member of Alcor, the cryogenic initiative that promises to freeze mortal remains in the hope that, one day, minds can be reinvigorated and uploaded in digital form to live in perpetuity. He is coy about this when I ask directly what he has planned.

“I have a policy of never commenting on my funeral arrangements,” he says.•
