I wonder if it’s necessity or ego telling us that AI has to think the same way we do to be on our level. Couldn’t it operate otherly and best us the way animals on four legs outrun humans on two? Is thinking only one thing or can it be another thing again? From “Unthinking Computers Perform Clever Parlor Tricks,” Richard Waters’ middling enthusiasm for deep learning in the Financial Times:
“The success of deep learning is a product of the times. The idea is decades old: that a batch of processors, fed with enough data, could be made to function like a network of artificial neurons. Grouping and sorting information in progressively more refined ways, they could ‘learn’ how to parse it in something akin to the way the human brain is believed to function.
It has taken the massive computing power concentrated in cloud data centres to train neural networks enough to make them useful. It sounds like a dream of artificial intelligence as conjured up by Google: ingest all the world’s data and apply enough processing power, and the secrets of the universe will reveal themselves to you.
Deep learning has produced some impressive results. In a project known as DeepFace, Facebook recently reported that it had reached 97.35 per cent accuracy in identifying the faces of 4,000 people in a collection of 4m images, far better than had been achieved before. Such feats of pattern recognition come naturally to humans, but they are hard for computer scientists to copy. Even trite-sounding results can point to important advances. Google’s report two years ago that it had designed a system that identified cats in YouTube videos still reverberates around the field.
Using the same techniques to ‘understand’ language or solve other problems that rely on pattern recognition could make machines far better at interpreting the world around them. By analysing what people are doing and comparing it to what they (and thousands of others) have done in similar situations in the past, they could also anticipate what they might do next.
The result could be behavioural systems that truly understand your behaviour and recommendation engines capable of suggesting things you actually want. These may sound eerie. But done properly, machines could come to anticipate our needs and act as lifetime guides.
But there is a risk of equating the output of systems such as these with the products of actual human intelligence. In reality, they are parlour tricks, albeit impressive ones. The important thing will be to know where to apply their skills – and how far to trust them.”
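For the curious, the "grouping and sorting information in progressively more refined ways" that Waters describes can be seen in miniature. Below is a toy sketch (my own illustration, not Facebook's or Google's actual systems, and many orders of magnitude smaller): a two-layer neural network, written with NumPy, that learns the XOR function by nudging its weights downhill on its error. The choice of XOR, the layer sizes, and the learning rate are all arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, a pattern that a single layer of
# artificial neurons cannot fit -- it takes a hidden layer that re-groups
# the inputs into intermediate features first.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random starting weights and biases for a 2 -> 8 -> 1 network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

losses = []
for _ in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: adjust weights to reduce the error (gradient descent).
    # This repeated small refinement is the "learning" in deep learning.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(f"mean squared error: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Scale the same loop up to millions of weights and millions of images and you get something like DeepFace; the mechanism is no more mysterious, which is roughly Waters' point about parlour tricks.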
Tags: Richard Waters