“They’re Solving Things Like Speech Recognition, Face Recognition, Motion Recognition, Gesture Recognition, All Of This Kind Of Stuff”

Reading a new Phys.org article about Google moving more quickly than anticipated with its driverless dreams reminded me of a passage from a Five Books interview with Robopocalypse author Daniel H. Wilson. An excerpt from each piece follows.

__________________________

From Five Books:

Question:

Isn’t machine learning still at a relatively early stage?

Daniel H. Wilson:

I disagree. I think machine learning has actually pretty much ripened and matured. Machine learning arguably started in the 1950s, and the term artificial intelligence was coined by John McCarthy in 1956. Back then we didn’t know anything – but scientists were really convinced that they had this thing nipped in the bud, that pretty soon they were going to replace all humans. This was because whenever you are teaching machines to think, the lowest hanging fruit is to give them problems that are very constrained. For example, the rules of a board game. So if you have a certain number of rules and you can have a perfect model of your whole world and you know how everything works within this game, well, yes, a machine is going to kick the crap out of people at chess. 

What those scientists didn’t realise is how complicated and unpredictable and full of noise the real world is. That’s what mathematicians and artificial intelligence researchers have been working on since then. And we’re getting really good at it. In terms of applications, they’re solving things like speech recognition, face recognition, motion recognition, gesture recognition, all of this kind of stuff. So we’re getting there, the field is maturing.

__________________________

From Phys.org:

The head of self-driving cars for Google expects real people to be using them on public roads in two to five years.

Chris Urmson says the cars would still be test vehicles, and Google would collect data on how they interact with other vehicles and pedestrians.

Google is working on sensors to detect road signs and other vehicles, and software that analyzes all the data. The small, bulbous cars without steering wheels or pedals are being tested at a Google facility in California.

Urmson wouldn’t give a date for putting driverless cars on roads en masse, saying the system first has to be proven safe.

He told reporters Wednesday at the Automotive News World Congress in Detroit that Google doesn’t know yet how it will make money on the cars.

Urmson wants to reach the point where his test team no longer has to pilot the cars. “What we really need is to get to the point where we’re learning about how people interact with it, how they are using it, and how can we best bring that to market as a product that people care for,” he said.•
