The Economist has a good if brief review of three recent titles about Artificial Intelligence and what it means for humans, John Markoff’s Machines of Loving Grace, Pedro Domingos’ The Master Algorithm and Jerry Kaplan’s Humans Need Not Apply.
I quote the opening of the piece below because I think it gets at an error in judgement some people make about technological progress, regarding both Weak AI and Strong AI: the idea that humans are in charge and can regulate machine progress, igniting and controlling it as we do fire. I don’t believe that’s ultimately so, even if it’s our goal.
Such decisions aren’t made coolly and soberly in a vacuum but in a messy world full of competition and differing priorities. If the United States banned robots or gene editing while China used them and prospered, we would have to enter the race as well. It’s similar to how America was largely a non-militaristic country before WWII but has been armed to the teeth ever since.
The only thing that halts technological progress is a lack of knowledge. Once knowledge is attained, it will be used, because using it makes us feel clever and proud. And it gives us a sense of safety, even when it makes things more dangerous. That’s human nature as applied to Artificial Intelligence.
ARTIFICIAL INTELLIGENCE (AI) is quietly everywhere, powering Google’s search engine, Amazon’s recommendations and Facebook’s facial recognition. It is how post offices decipher handwriting and banks read cheques. But several books in recent years have spewed fire and brimstone, claiming that algorithms are poised to obliterate white-collar knowledge-work in the 21st century, just as automation displaced blue-collar manufacturing work in the 20th. Some people go further, arguing that artificial intelligence threatens the human race. Elon Musk, an American entrepreneur, says that developing the technology is “summoning the demon.”
Now several new books serve as replies. In Machines of Loving Grace, John Markoff of the New York Times focuses on whether researchers should build true artificial intelligence that replaces people, or aim for “intelligence augmentation” (IA), in which the computers make people more effective. This tension has been there from the start. In the 1960s, at one bit of Stanford University John McCarthy, a pioneer of the field, was gunning for AI (which he had named in 1955), while across campus Douglas Engelbart, the inventor of the computer mouse, aimed at IA. Today, some Google engineers try to improve search engines so that people can find information better, while others develop self-driving cars to eliminate drivers altogether.•