“There Is Still No Machine Remotely Flexible Enough To Deal With The Real World”

Another really smart post by NYU psychology professor Gary Marcus at the New Yorker News Desk blog, this one entitled, “Why Making Robots Is So Darned Hard.” An excerpt:

“Meanwhile, whether a robot looks like a human or hockey puck, it is only as clever as the software within. And artificial intelligence is still very much a work-in-progress, with no machine approaching the full flexibility of the human mind. There is no shortage of strategies—ranging from simulations of biological brains to deep learning and to older techniques drawn from classical artificial intelligence—but there is still no machine remotely flexible enough to deal with the real world. The best robot-vision systems, for example, work far better with isolated objects than with complex scenes involving many objects; a robot can easily learn to tell the difference between a person and a basketball, but it’s far harder to learn why the people are passing the ball a certain way. Visual recognition of complex flexible objects, like strands of cooked spaghetti and opening and closing human hands, presents tremendous challenges, too. Even further away is a robust way of endowing computers with common sense.

In virtually every robot that’s ever been built, the key challenge is generalization, and moving things from the laboratory to the real world. It’s one thing to get a robot to fold a colorful towel in an empty room; it’s another to get it to succeed in a busy apartment with visual distractions that the machine can’t quite parse.”
