I’m not entirely convinced Elon Musk doesn’t have more in common with Donald Trump, politically, than we know. I’m not saying he’s a raging Libertarian monster like his pal Peter Thiel, but it’s not likely he’s the lovable billionaire his Iron Man cameo would have us believe he is.
Now that his harebrained attempt to “stage manage” the orange supremacist is happily over, the entrepreneur has fully returned to his normal chores, which are, of course, abnormal. There are two different Musks at work.
Good Elon creates gigafactories and gives people the opportunity to power their homes with solar. As these tools spread, through his efforts and those of his competitors, the Silicon Valley magnate will have made a major contribution to potentially saving our species from the existential threat of climate change.
Bad Elon is a sort of lower-case Nikola Tesla, whose name he borrowed, of course, for his EV company. And it’s the worst of the Serbian-American inventor that he emulates: grandiose, egotistical, desperate to awe with brilliance even when the logic doesn’t quite cohere. Like Tesla’s final patented invention, the Flivver Plane, which would never have been able to fly even if it had been built, Musk often concentrates his attention where it’s least needed, on things that won’t happen.
Much of this baffling overconfidence can be seen in his near-term plan to become a Martian. Some of it is also on view in his deathly fear of killer robots, a stance he developed after going on a Bostrom bender. Intelligent machines are a very-long-term risk for our species (if we’re not first done in by our own dimness or perhaps a solar flare), but they shouldn’t be a primary concern for anyone presently. Not when children, even in a wealthy country like America, still drink lead-contaminated water, relatively dumb AI can cause employment within industries to collapse, and new technological tools are exacerbating wealth inequality.
In a Wired piece, Tom Simonite contextualizes Musk’s foolhardy sci-fi AI fears as well as anyone has. The opening:
IMAGINE YOU HAD a chance to tell 50 of the most powerful politicians in America what urgent problem you think needs prompt government action. Elon Musk had that chance this past weekend at the National Governors Association Summer Meeting in Rhode Island. He chose to recommend the gubernatorial assembly get serious about preventing artificial intelligence from wiping out humanity.
“AI is a fundamental existential risk for human civilization and I don’t think people fully appreciate that,” Musk said. He asked the governors to consider a hypothetical scenario in which a stock-trading program orchestrated the 2014 missile strike that downed a Malaysian airliner over Ukraine—just to boost its portfolio. And he called for the establishment of a new government regulator that would force companies building artificial intelligence technology to slow down. “When the regulator’s convinced it’s safe to proceed then you can go, but otherwise slow down,” he said.
Musk’s remarks made for an enlivening few minutes on a day otherwise concerned with more quotidian matters such as healthcare and education. But Musk’s call to action was something of a missed opportunity. People who spend more time working on artificial intelligence than the car, space, and solar entrepreneur say his eschatological scenarios risk distracting from more pressing concerns as artificial intelligence technology percolates into every industry.
Pedro Domingos, a professor who works on machine learning at the University of Washington, summed up his response to Musk’s talk on Twitter with a single word: Sigh. “Many of us have tried to educate him and others like him about real vs. imaginary dangers of AI, but apparently none of it has made a dent,” Domingos says. America’s governmental chief executives would be better advised to consider the negative effects of today’s limited AI, such as how it is giving disproportionate market power to a few large tech companies, he says. Iyad Rahwan, who works on matters of AI and society at MIT, agrees. Rather than worrying about trading bots eventually becoming smart enough to start wars as an investment strategy, we should consider how humans might today use dumb bots to spread misinformation online, he says.
Rahwan doesn’t deny that Musk’s nightmare scenarios could eventually happen, but says attending to today’s AI challenges is the most pragmatic way to prepare. “By focusing on the short-term questions, we can scaffold a regulatory architecture that might help with the more unpredictable, super-intelligent AI scenarios.”•