In the adrenaline rush to create a mind-blowing new technology (and profit from it directly or indirectly), ethical questions can be lost in an institutional fog and in competition among companies and countries. Richard Feynman certainly felt he’d misplaced his moral compass in just such a way during the Manhattan Project.
The attempt to create Artificial General Intelligence is something of a Manhattan Project for the mind, and while the point is the opposite of destruction, some believe that even if it doesn’t end humans with a bang, AGI may lead our species to a whimpering end. The main difference today is that those working on such projects seem keenly aware of the dangers that may arise as we harness the power of these incredible tools. That doesn’t mean the future is assured, since there’ll be twists and turns we can’t yet imagine, but it’s a hopeful sign.
Bloomberg Neural Net reporter Jack Clark conducted a smart Q&A with DeepMind CEO Demis Hassabis, discussing not only where his work fits into the scheme of Alphabet but also the larger implications of superintelligence. An excerpt:
You’ve said it could be decades before you’ve truly developed artificial general intelligence. Do you think it will happen within your lifetime?
Well, it depends on how much sleep deprivation I keep getting, I think, because I’m sure that’s not good for your health. So I am a little bit worried about that. I think it’s many decades away for full AI. I think it’s feasible. It could be done within our natural lifetimes, but it may be it’s the next generation. It depends. I’d be surprised if it took more than, let’s say, 100 years.
So once you’ve created a general intelligence, after having drunk the Champagne or whatever you do to celebrate, do you retire?
No. No, because …
You want to study science?
Yeah, that’s right. That’s what I really want to build the AI for. That’s what I’ve always dreamed about doing. That’s why I’ve been working on AI my whole life: I see it as the fastest way to make amazing progress in science.
Say you succeed and create a superintelligence. What happens next? Do you donate the technology to the United Nations?
I think it should be. We’ve talked about this a lot. Actually Eric Schmidt [executive chairman of Alphabet, Google’s parent] has mentioned this. We’ve talked to him. We think that AI has to be used for the benefit of everyone. It should be used in a transparent way, and we should build it in an open way, which we’ve been doing with publishing everything we write. There should be scrutiny and checks and balances on that.
I think ultimately the control of this technology should belong to the world, and we need to think about how that’s done. Certainly, I think the benefits of it should accrue to everyone. Again, there are some very tricky questions there and difficult things to go through, but certainly that’s our belief of where things should go.