Yuval Noah Harari writes this in his great book Sapiens:
Were, say, a Spanish peasant to have fallen asleep in A.D. 1000 and woken up 500 years later, to the din of Columbus’ sailors boarding the Nina, Pinta, and Santa Maria, the world would have seemed to him quite familiar. Despite many changes in technology, manners and political boundaries, this medieval Rip Van Winkle would have felt at home. But had one of Columbus’ sailors fallen into a similar slumber and woken up to the ringtone of a twenty-first century iPhone, he would have found himself in a world strange beyond comprehension. ‘Is this heaven?’ he might well have asked himself. ‘Or perhaps — hell?’
What kind of peasants will we be? Is the road forward a high-speed one that will render tomorrow unrecognizable? It would seem so, unless calamity sideswipes us and delays (or permanently forecloses) the next phase. But if we are fortunate enough to make the trip safely, will a ruin of our own making await us in the form of Strong AI? I doubt it’s right around the bend as some believe, but it can’t hurt to consider such a scenario. From philosopher Stephen Cave’s Financial Times review of a slate of recent books about the perils of superintelligence:
It is tempting to suppose that AI would be a tool like any other; like the wheel or the laptop, an invention that we could use to further our interests. But the brilliant British mathematician IJ Good, who worked with Alan Turing both on breaking the Nazis’ secret codes and subsequently in developing the first computers, realised 50 years ago why this would not be so. Once we had a machine that was even slightly more intelligent than us, he pointed out, it would naturally take over the intellectual task of designing further intelligent machines. Because it was cleverer than us, it would be able to design even cleverer machines, which could in turn design even cleverer machines, and so on. In Good’s words: “There would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
Good’s prophecy is at the heart of the book Our Final Invention: Artificial Intelligence and the End of the Human Era, in which writer and film-maker James Barrat interviews leading figures in the development of super-clever machines and makes a clear case for why we should be worried. It is true that progress towards human-level AI has been slower than many predicted — pundits joke that it has been 20 years away for the past half-century. But it has, nonetheless, achieved some impressive milestones, such as the IBM computers that beat grandmaster Garry Kasparov at chess in 1997 and won the US quiz show Jeopardy! in 2011. In response to Barrat’s survey, more than 40 per cent of experts in the field expected the invention of intelligent machines within 15 years from now and the great majority expected it by mid-century at the latest.
Following Good, Barrat then shows how artificial intelligence could become super-intelligence within a matter of days, as it starts fixing its own bugs, rewriting its own software and drawing on the wealth of knowledge now available online. Once this “intelligence explosion” happens, we will no longer be able to understand or predict the machine, any more than a mouse can understand or predict the actions of a human.•