I agree with two very smart people working in Artificial Intelligence, Andrew Ng and Hod Lipson, when I say that I’m not worried about any near-term scenario in which Strong AI drives Homo sapiens extinct the way we did the Neanderthals. It’s not that such an outcome is theoretically impossible in the long run, but we would likely first need to know precisely how the human brain operates, and to understand the very nature of consciousness, before we could give “life” to our eliminators. While lesser AI than that could certainly be dangerous on a large scale, I don’t think it’s moving us back down the food chain today or tomorrow.
But like Ng and Lipson, I’m very concerned by the explosion of Weak AI throughout society in the form of autonomous machines. It’s an incredible victory of ingenuity that could become a huge loss if we aren’t able to politically reconcile free-market societies with highly autonomous ones. An excerpt from Robert Hof at Forbes’ horribly designed site:
“Historically technology has created challenges for labor,” [Ng] noted. But while previous technological revolutions also eliminated many types of jobs and created some displacement, the shift happened slowly enough to provide new opportunities to successive generations of workers. “The U.S. took 200 years to get from 98% to 2% farming employment,” he said. “Over that span of 200 years we could retrain the descendants of farmers.”
But he says the rapid pace of technological change today has changed everything. “With this technology today, that transformation might happen much faster,” he said. Self-driving cars, he suggested, could quickly put 5 million truck drivers out of work.
Retraining is a solution often suggested by the technology optimists. But Ng, who knows a little about education thanks to his cofounding of Coursera, doesn’t believe retraining can be done quickly enough. “What our educational system has never done is train many people who are alive today. Things like Coursera are our best shot, but I don’t think they’re sufficient. People in the government and academia should have serious discussions about this.”
His concerns were echoed by Hod Lipson, director of Cornell University’s Creative Machines Lab. “If AI is going to threaten humanity, it’s going to be through the fact that it does almost everything better than almost anyone,” he said.•