Elon Musk says he’ll put humans on Mars within a decade, but perhaps it’s the Botox talking.
The SpaceX founder's fears about Artificial Intelligence have shifted somewhat since the Nick Bostrom bender he went on around the time Superintelligence was published. He's no longer so worried about conscious AI wiping us out; instead he thinks the concentration of such knowledge among a handful of countries and/or companies is highly dangerous. He wants to democratize AI so that it can be accessed by all. Otherwise, he fears, dictators or rogue states could steal the science and use it to try to dominate the world.
But it's possible Musk's plan will end up making things more dangerous. Sixty years ago, President Eisenhower launched "Atoms for Peace," sharing nuclear knowledge and supplies with the world, a move aimed at providing participating nations with relatively cheap energy and making the world less likely to end in Armageddon. The policy led to the building of the first nuclear reactors in some nations, Iran and Pakistan included, and to a proliferation of WMDs. Everyone having similar weapons and knowledge only precludes brinkmanship if all the actors involved are rational, and in that sense, the world is not flat.
From Y Combinator:
Speaking of really important problems, AI. You have been outspoken about AI. Could you talk about what you think the positive future for AI looks like and how we get there?
Okay, I mean I do want to emphasize that this is not really something that I advocate, or that this is prescriptive. This is simply, hopefully, predictive. Because some will hear this and say, well, this is something I want to occur, instead of something I think is probably the best of the available alternatives. The best of the available alternatives that I can come up with, and maybe someone else can come up with a better approach or better outcome, is that we achieve democratization of AI technology. Meaning that no one company or small set of individuals has control over advanced AI technology. I think that's very dangerous. It could also get stolen by somebody bad, like some evil dictator or country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation, I think, if you've got any incredibly powerful AI. You just don't know who's going to control that.
So it's not that I think the risk is that the AI would develop a will of its own right off the bat. I think the concern is that someone may use it in a way that is bad. Or even if they weren't going to use it in a way that's bad, somebody could take it from them and use it in a way that's bad. That, I think, is quite a big danger. So I think we must have democratization of AI technology to make it widely available. And that's the reason, obviously, that you, me, and the rest of the team created OpenAI: to help spread out AI technology so it doesn't get concentrated in the hands of a few. But then, of course, that needs to be combined with solving the high-bandwidth interface to the cortex.•