I don’t anticipate human-level AI at any time in the near future, if at all. Silicon does some things incredibly well and so does carbon, but they’re not necessarily the same things. Even when they both successfully tackle the same problem, it’s executed differently. For instance: Machines haven’t started writing great film reviews but instead use algorithms that help people choose movies. It’s a different process, and a different experience.
I would guess that if machines are ever to truly understand in a human way, it will be because there’s been a synthesis of biology and technology, not because the latter has “learned” the ways of the former. In a New Yorker blog item, NYU psychologist Gary Marcus offers a riposte to the recent New York Times article which strongly suggested we’re at the dawn of a new age of human-like smart machines. An excerpt:
“There have been real innovations, like driverless cars, that may soon become commercially available. Neuromorphic engineering and deep learning are genuinely exciting, but whether they will really produce human-level A.I. is unclear—especially, as I have written before, when it comes to challenging problems like understanding natural language.
The brainlike I.B.M. system that the Times mentioned on Sunday has never, to my knowledge, been applied to language, or any other complex form of learning. Deep learning has been applied to language understanding, but the results are feeble so far. Among publicly available systems, the best is probably a Stanford project, called Deeply Moving, that applies deep learning to the task of understanding movie reviews. The cool part is that you can try it for yourself, cutting and pasting text from a movie review and immediately seeing the program’s analysis; you can even teach it to improve. The less cool thing is that the deep-learning system doesn’t really understand anything.
It can’t, say, paraphrase a review or mention something the reviewer liked, things you’d expect of an intelligent sixth-grader. About the only thing the system can do is so-called sentiment analysis, reducing a review to a thumbs-up or thumbs-down judgment. And even there it falls short; after typing in ‘better than Cats!’ (which the system correctly interpreted as positive), the first thing I tested was a Rotten Tomatoes excerpt of a review of the last movie I saw, American Hustle: ‘A sloppy, miscast, hammed up, overlong, overloud story that still sends you out of the theater on a cloud of rapture.’ The deep-learning system couldn’t tell me that the review was ironic, or that the reviewer thought the whole was more than the sum of the parts. It told me only, inaccurately, that the review was very negative. When I sent the demo to my collaborator, Ernest Davis, his luck was no better than mine. Ernie tried ‘This is not a book to be ignored’ and ‘No one interested in the subject can afford to ignore this book.’ The first came out as negative, the second neutral. If Deeply Moving is the best A.I. has to offer, true A.I.—of the sort that can read a newspaper as well as a human can—is a long way away.”
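The failure mode Marcus describes is easy to reproduce with an even simpler approach. The sketch below is a hypothetical lexicon-based sentiment scorer, not the Stanford system (which uses a neural network over parse trees); the word lists are invented for illustration. It shows why counting positive and negative words misreads a review whose ironic structure saves all the praise for the final clause:

```python
# Hypothetical lexicon-based sentiment scorer (illustrative only; NOT the
# Stanford "Deeply Moving" system). Scores a text by counting words from
# hand-picked positive and negative lists, then thresholding the total.

POSITIVE = {"great", "better", "cloud", "rapture", "good", "wonderful"}
NEGATIVE = {"sloppy", "miscast", "hammed", "overlong", "overloud",
            "bad", "ignore", "ignored"}

def sentiment(text: str) -> str:
    # Lowercase and strip surrounding punctuation before lookup.
    words = [w.strip(".,;:!?'\"").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# The American Hustle blurb: five negative words early, two positive at
# the end, so the raw count calls the review negative despite its irony.
review = ("A sloppy, miscast, hammed up, overlong, overloud story that "
          "still sends you out of the theater on a cloud of rapture.")
print(sentiment(review))

# Ernie's negated example fails the same way: "ignored" is counted as
# negative, and the "not ... to be" construction is invisible to the model.
print(sentiment("This is not a book to be ignored"))
```

A real deep-learning system is more sophisticated than this word counter, but Marcus’s point is that its observed behavior on these inputs is no better: without a representation of negation, irony, or discourse structure, surface cues dominate the judgment.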