“The Field Of Artificial Intelligence Has A Long History Of Over-Promising And Under-Delivering”


John Markoff doesn’t think technology is prone to the work of a blind watchmaker, but I’m not so sure. It would be great if rational thinking governed this area, but technology seems to pull us as much as we push it. Competition, contrasting priorities and simple curiosity can drive us in directions that may not be best for us, even if they are best for progress in a larger sense. The progress of intelligence, I mean. We’re not moths to a flame, but it’s difficult for a mere human being to look away from an inferno.

In his latest New York Times article, Markoff argues that superintelligence is not upon us, that most if not all of us will not live to see the Singularity. On this point, I agree. Perhaps there’ll emerge a clever workaround that allows Moore’s Law to continue apace, but I don’t think that guarantees superintelligence in a few decades. Anyone alive in 2016 who’s planning their day around conscious machines or radical life extension, twin dreams of the Singularitarians, will likely wind up sorely disappointed.

An excerpt:

Recently several well-known technologists and scientists, including Stephen Hawking, Elon Musk and Bill Gates, have issued warnings about runaway technological progress leading to superintelligent machines that might not be favorably disposed to humanity.

What has not been shown, however, is scientific evidence for such an event. Indeed, the idea has been treated more skeptically by neuroscientists and a vast majority of artificial intelligence researchers.

For starters, biologists acknowledge that the basic mechanisms for biological intelligence are still not completely understood, and as a result there is not a good model of human intelligence for computers to simulate.

Indeed, the field of artificial intelligence has a long history of over-promising and under-delivering. John McCarthy, the mathematician and computer scientist who coined the term artificial intelligence, told his Pentagon funders in the early 1960s that building a machine with human levels of intelligence would take just a decade. Even earlier, in 1958, The New York Times reported that the Navy was planning to build a “thinking machine” based on the neural network research of the psychologist Frank Rosenblatt. The article forecast that it would take about a year to build the machine and cost about $100,000.

The notion of the Singularity is predicated on Moore’s Law, the 1965 observation by the Intel co-founder Gordon Moore that the number of transistors that can be etched onto a sliver of silicon doubles at roughly two-year intervals. This has fostered the notion of exponential change, in which technology advances slowly at first and then with increasing rapidity with each succeeding technological generation.

At this stage Moore’s Law seems to be on the verge of stalling.•
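The “exponential change” Markoff describes is easy to put in concrete terms. Here’s a rough back-of-the-envelope sketch (mine, not Markoff’s) of what a strict two-year doubling would imply, assuming an illustrative baseline of 2,300 transistors in 1971, roughly the Intel 4004; the point is how quickly doubling compounds, not precise chip counts.

```python
# Back-of-the-envelope sketch of Moore's Law as a two-year doubling.
# Assumption: an illustrative baseline of 2,300 transistors in 1971
# (roughly the Intel 4004) -- the figures are for intuition only.

BASELINE_YEAR = 1971
BASELINE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Transistor count if doubling continued uninterrupted from the baseline."""
    generations = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASELINE_TRANSISTORS * 2 ** generations

for year in (1971, 1981, 1991, 2001, 2011, 2016):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

By 2016 that’s roughly 22 doublings, a multi-million-fold increase, which is why even a modest slowdown in the doubling rate matters so much for Singularity timelines.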
