Speaking of the emergence of really smart machines, philosopher Nick Bostrom's new book, Superintelligence, has just been published in the UK (the U.S. edition arrives later this year). Here's an excerpt from Clive Cookson's Financial Times review:
“Since the 1950s proponents of artificial intelligence have maintained that machines thinking like people lie just a couple of decades in the future. In Superintelligence – a thought-provoking look at the past, present and above all the future of AI – Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute, starts off by mocking the futurists.
We are still far from real AI despite last month’s widely publicised ‘Turing test’ stunt, in which a computer mimicked a 13-year-old boy with some success in a brief text conversation. About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075. Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of ‘superintelligence’ faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.
The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain, Bostrom argues. Machines could improve their own capabilities far faster than human computer scientists.
‘Machines have a number of fundamental advantages, which will give them overwhelming superiority,’ he writes. ‘Biological humans, even if enhanced, will be outclassed.’ He outlines various ways for AI to escape the physical bonds of the hardware in which it developed. For example, it might use its hacking superpower to take control of robotic manipulators and automated labs; or deploy its powers of social manipulation to persuade human collaborators to work for it. There might be a covert preparation stage in which microscopic entities capable of replicating themselves by nanotechnology or biotechnology are deployed worldwide at an extremely low concentration. Then at a pre-set time nanofactories producing nerve gas or target-seeking mosquito-like robots might spring forth (though, as Bostrom notes, superintelligence could probably devise a more effective takeover plan than him).
What would the world be like after the takeover? It would contain far more intricate and intelligent structures than anything we can imagine today – but would lack any type of being that is conscious or whose welfare has moral significance. ‘A society of economic miracles and technological awesomeness, with nobody there to benefit,’ as Bostrom puts it. ‘A Disneyland without children.’”
Tags: Clive Cookson, Nick Bostrom