In a short Washington Post Q&A conducted by Robert Gebelhoff, philosopher Nick Bostrom explains why he favors human enhancement. Implementing it will be thorny, but it is likely to happen if humans don't first succumb to an existential risk; in fact, cognitive enhancement may be the only way we avoid extinction. An excerpt:
Question:
You’ve written in favor of human enhancement — which includes everything from genetic engineering to “mind-uploading” — to curb the risks AI might bring. How should we balance the risks of human enhancement and artificial intelligence?
Nick Bostrom:
I don’t think human enhancement should be evaluated solely in terms of how it might influence the AI development trajectory. But it is interesting to think about how different technologies and capabilities could interact. For example, humanity might eventually be able to reach a high level of technology and scientific understanding without cognitive enhancement, but with cognitive enhancement we could get there sooner.
And the character of our progress might also be different if we were smarter: less like that of a billion monkeys hammering away furiously at a billion typewriters until something usable appears by chance, and more like the work of insight and purpose. This might increase the odds that certain hazards would be foreseen and avoided. If machine superintelligence is to be built, one may wish the folks building it to be as competent as possible.•
Tags: Nick Bostrom, Robert Gebelhoff