At the New Yorker's Elements blog, NYU psychology professor Gary Marcus has a post about the AI community, which seems more interested in creating machines that excel at sleight of hand than at depth of thought. An excerpt:
“In a terrific paper just presented at the premier international conference on artificial intelligence, [Hector] Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. He argues that his colleagues have forgotten about the ‘intelligence’ part of artificial intelligence.
Levesque starts with a critique of Alan Turing’s famous ‘Turing test,’ in which a human, through a question-and-answer session, tries to distinguish machines from people. You’d think that if a machine could pass the test, we could safely conclude that the machine was intelligent. But Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. Every year, a number of machines compete in the challenge for real, seeking something called the Loebner Prize. But the winners aren’t genuinely intelligent; instead, they tend to be more like parlor tricks, and they’re almost inherently deceitful. If a person asks a machine ‘How tall are you?’ and the machine wants to win the Turing test, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to use bluster and misdirection far more than anything approximating true intelligence. One program worked by pretending to be paranoid; others have done well by tossing off one-liners that distract interlocutors. The fakery involved in most efforts at beating the Turing test is emblematic: the real mission of A.I. ought to be building intelligence, not building software that is specifically tuned toward fixing some sort of arbitrary test.”
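To make the "parlor trick" point concrete, here's a minimal sketch, in Python, of the keyword-and-deflection style that Loebner-contest chatbots have historically leaned on. It's illustrative only, not taken from any actual entry; the `CANNED_ANSWERS` and `DEFLECTIONS` names and the specific responses are my own invention. The bot never tries to understand a question: it matches a few keywords, confabulates specifics (note the "How tall are you?" answer), and otherwise just changes the subject.

```python
import random

# A toy, ELIZA/PARRY-style chatbot. Illustrative only: not any real
# Loebner Prize entry. It "holds a conversation" by deflection, not understanding.
DEFLECTIONS = [
    "Why do you ask?",
    "Let's not get personal.",
    "Ha! You sound just like my therapist.",
    "I'd rather hear about you.",
]

CANNED_ANSWERS = {
    # Confabulated specifics: the bot invents facts to seem human.
    "how tall": "About five foot nine, last time I checked.",
    "your name": "Friends call me Charlie.",
    "where are you": "Stuck in a dull office, sadly.",
}

def reply(user_input: str) -> str:
    text = user_input.lower()
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in text:
            return answer
    # No keyword matched: change the subject rather than admit confusion.
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    for question in ["How tall are you?", "What should I read next, and why?"]:
        print(f"> {question}")
        print(reply(question))
```

A judge chatting casually might find this charming for a few exchanges; ask it anything that requires actual reasoning and the deflection shows through, which is exactly Levesque's complaint.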
Tags: Gary Marcus, Hector Levesque