Oxford philosopher Nick Bostrom believes “superintelligence” (machines dwarfing our intellect) is the leading existential threat to humans in our era. He’s either wrong, and not alarmed enough by, say, climate change, or correct, and warning us of the biggest peril we’ll ever face. Most likely, such a scenario will be a real challenge in the long run, though it’s probably not currently the most pressing one.
In John Thornhill’s Financial Times article about Bostrom, the writer pays some mind to those pushing back against what they feel is needless alarmism attending the academic’s work. An excerpt:
Some AI experts have accused Bostrom of alarmism, suggesting that we remain several breakthroughs short of ever making a machine that “thinks”, let alone surpasses human intelligence. A sceptical fellow academic at Oxford, who has worked with Bostrom but doesn’t want to be publicly critical of his work, says: “If I were ranking the existential threats facing us, then runaway ‘superintelligence’ would not even be in the top 10. It is a second half of the 21st century problem.”
But other leading scientists and tech entrepreneurs have echoed Bostrom’s concerns. Britain’s most famous scientist, Stephen Hawking, whose synthetic voice is facilitated by a basic form of AI, has been among the most strident. “The development of full artificial intelligence could spell the end of the human race,” he told the BBC.
Elon Musk, the billionaire entrepreneur behind Tesla Motors and an active investor in AI research, tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”
Although Bostrom has a reputation as an AI doomster, he starts our discussion by emphasising the extraordinary promise of machine intelligence, in both the short and long term. “I’m very excited about AI and I think it would be a tragedy if this kind of superintelligence were never developed.” He says his main aim, both modest and messianic, is to help ensure that this epochal transition goes smoothly, given that humankind only has one chance to get it right.
“So much is at stake that it’s really worth doing everything we can to maximise the chances of a good outcome,” he says.•