In Michael Crichton’s original 1973 Westworld, when the machines begin to rise, the operators are flummoxed about how to prevent calamity. One dejected scientist says of the robots run amok: “They’ve been designed by other computers…we don’t know exactly how they work.”
As algorithms grow more complex, they begin to escape us. Deep learning will deepen that opacity if it develops unchecked, which is currently the most likely scenario. Some who should know better have repeated the ridiculous idea that if things go wrong, we can just pull out one plug or another and all will be fine.
There will be no plug. Even if there were, in a highly technological society, yanking it from the wall would mean the end of us.
In “The Dark Secret at the Heart of AI,” an excellent MIT Technology Review article, Will Knight speaks to the problem of machines teaching themselves, a powerful tool and, perhaps, weapon. He warns that “we’ve never before built machines that operate in ways their creators don’t understand.”
The opening:
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.•
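To make the opacity concrete, here is a minimal sketch, assuming PyTorch, of the kind of end-to-end system the excerpt describes: camera pixels in, steering command out, no hand-written driving rules in between. The layer sizes loosely follow the architecture Nvidia has published, but every name and number here is illustrative, not the production system.

```python
# A minimal, illustrative sketch of end-to-end driving: the network
# maps raw camera frames to a steering angle. Layer sizes loosely echo
# Nvidia's published "End to End Learning for Self-Driving Cars" paper;
# all names and dimensions here are assumptions for demonstration.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional layers learn their own visual features (lane
        # edges, road texture) directly from pixels; no engineer tells
        # them what to look for.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
        )
        # Fully connected layers turn those features into one number:
        # the steering angle.
        self.control = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),
        )

    def forward(self, frames):
        return self.control(self.features(frames))

model = EndToEndDriver()

# "Taught itself to drive by watching a human": training is pure
# imitation, minimizing the gap between the network's steering angle
# and what the human driver actually did on each recorded frame.
frames = torch.randn(8, 3, 66, 200)   # a batch of (fake) camera frames
human_angles = torch.randn(8, 1)      # the human's recorded steering
loss = nn.functional.mse_loss(model(frames), human_angles)
loss.backward()                       # the learning step

# And the interpretability problem, concretely: the closest thing to
# asking "why did you steer left?" is a gradient probe -- which pixels,
# if nudged, would most change the command. A diagnostic trace, not an
# explanation.
frame = torch.randn(1, 3, 66, 200, requires_grad=True)
model(frame).sum().backward()
saliency = frame.grad.abs().max(dim=1).values   # per-pixel influence
```

Nothing in those few dozen lines contains a rule like “stop at red lights.” Whatever the car knows is smeared across thousands of learned weights, which is exactly why even the engineers who built such a system may struggle to isolate the reason for any single action.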
Tags: Will Knight