AI has not traditionally excelled at pattern recognition: software could identify single objects but could not decipher the meaning of actions or interactions. An advance on that front would make driverless cars and other oh-so-close wonders a reality. Stanford and Google have just announced breakthroughs. From John Markoff at the New York Times:
“During the past 15 years, video cameras have been placed in a vast number of public and private spaces. In the future, the software operating the cameras will not only be able to identify particular humans via facial recognition, experts say, but also identify certain types of behavior, perhaps even automatically alerting authorities.
Two years ago Google researchers created image-recognition software and presented it with 10 million images taken from YouTube videos. Without human guidance, the program trained itself to recognize cats — a testament to the number of cat videos on YouTube.
Current artificial intelligence programs in new cars already can identify pedestrians and bicyclists from cameras positioned atop the windshield and can stop the car automatically if the driver does not take action to avoid a collision.
But ‘just single object recognition is not very beneficial,’ said Ali Farhadi, a computer scientist at the University of Washington who has published research on software that generates sentences from digital pictures. ‘We’ve focused on objects, and we’ve ignored verbs,’ he said, adding that these programs do not grasp what is going on in an image.
Both the Google and Stanford groups tackled the problem by refining software programs known as neural networks, inspired by our understanding of how the brain works. Neural networks can ‘train’ themselves to discover similarities and patterns in data, even when their human creators do not know the patterns exist.”
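The idea that a network can "train itself" to find structure in unlabeled data can be illustrated in miniature. The sketch below is not the Google or Stanford system, just a hypothetical one-hidden-unit linear autoencoder in plain NumPy: it is shown 2-D points drawn from two clusters, is never told the clusters exist, and learns by minimizing reconstruction error a code that nonetheless separates them.

```python
import numpy as np

# Toy data: 200 unlabeled 2-D points drawn from two clusters.
rng = np.random.default_rng(0)
data = np.vstack([
    rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(100, 2)),
    rng.normal(loc=[2.0, 2.0], scale=0.3, size=(100, 2)),
])

# A tiny autoencoder: compress each 2-D point to one number, then
# reconstruct it. No human guidance, no labels -- the only training
# signal is how badly the network reproduces its own input.
W_enc = rng.normal(scale=0.1, size=(2, 1))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(1, 2))  # decoder weights
lr = 0.01

for _ in range(500):
    code = data @ W_enc            # encode: (200, 1)
    recon = code @ W_dec           # decode: (200, 2)
    err = recon - data             # reconstruction error
    # Gradient descent on mean squared reconstruction error.
    grad_dec = code.T @ err / len(data)
    grad_enc = data.T @ (err @ W_dec.T) / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# The learned 1-D codes end up with opposite signs for the two
# clusters: the network discovered the pattern on its own.
codes = (data @ W_enc).ravel()
print(codes[:100].mean(), codes[100:].mean())
```

A linear autoencoder like this ends up finding the direction of greatest variance in the data; the deep networks Markoff describes stack many nonlinear layers of the same basic mechanism, which is how, at YouTube scale, "cats" can fall out of raw video frames.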
___________________________
“Why not devote your powers to discerning patterns?”
Tags: John Markoff