Nautilus has published its Consciousness issue, and one of the highlights is Steve Paulson’s Q&A with neuroscientist Christof Koch, which bubbles with bold ideas on the issue’s theme as well as related topics like evolution.
Koch argues that seeing the brain as analogous to a computer is a fraught enterprise, though he doesn’t wax poetic like a mystic about the existence of a soul made of some “special substance that can’t be tracked by science.” In a wider sense, he’s not romantic about humans: “We’re not the dominant species on the planet because we are wiser or swifter or more powerful. It’s because we’re more intelligent and ruthless.”
For all his skepticism about Homo sapiens, Koch retains a belief in the universe as “wonderful,” a place we can greatly enjoy if we manage not to annihilate ourselves, which he regards as a formidable challenge for a technological culture.
From an exchange about the existential threat of Strong AI:
You really believe artificial intelligence could develop a certain level of complexity and wipe us out?
This is independent of the question of computer consciousness. Yes, if you have an entity that has enough AI and deep machine learning and access to the Cloud, etc., it’s possible in our lifetime that we’ll see creatures that we can talk to with almost the same range of fluidity and depth of conversation that you and I have. Once you have one of them, you replicate them in software and you can have billions of them. If you link them together, you could get superhuman intelligence. That’s why I think it behooves all of us to think hard about this before it may be too late. Yes, there’s a promise of untold benefits, but we all know human nature. It has its dark side. People will misuse it for their own purposes.
How do we build in those checks to make sure computers don’t rule the world?
That’s a very good question. The only reason we don’t have a nuclear bomb in every backyard is because you can’t build it easily. It’s hard to get the material. It takes a nation state and tens of thousands of people. But that may be different with AI. If current trends accelerate, it may be that 10 programmers in Timbuktu could unleash something truly malevolent onto mankind. These days, I’m getting more pessimistic about the fate of a technological species such as ours. Of course, this might also explain the Fermi paradox.
Remind us what the Fermi paradox is.
We have yet to detect a single intelligent species, even though we know there are probably trillions of planets. Why is that? Well, one explanation is it’s just extremely unlikely for life to arise and we’re the only one. But I think a more likely possibility is that any time you get life that’s sufficiently complex, with advanced technology, it has somehow managed to annihilate itself, either by nuclear war or by the rise of machines.
You are a pessimist! You really think any advanced civilization is going to destroy itself?
If it’s very aggressive like ours and it’s based in technology.•