Christof Koch

Nautilus has published its Consciousness issue, and one of the highlights is Steve Paulson’s Q&A with neuroscientist Christof Koch, which bubbles with bold ideas not only on the issue’s theme but also on related topics like evolution.

The subject argues that seeing the brain as analogous to a computer is a fraught enterprise, though he doesn’t wax poetic like a mystic about the existence of a soul made of some “special substance that can’t be tracked by science.” In a wider sense, he’s not romantic about humans: “We’re not the dominant species on the planet because we are wiser or swifter or more powerful. It’s because we’re more intelligent and ruthless.”

For all his skepticism about Homo sapiens, Koch retains a belief in the universe as “wonderful,” a place we can greatly enjoy if we manage not to annihilate ourselves, which remains a formidable challenge for a technological culture.

From an exchange about the existential threat of Strong AI:

Question:

You really believe artificial intelligence could develop a certain level of complexity and wipe us out?

Christof Koch:

This is independent of the question of computer consciousness. Yes, if you have an entity that has enough AI and deep machine learning and access to the Cloud, etc., it’s possible in our lifetime that we’ll see creatures that we can talk to with almost the same range of fluidity and depth of conversation that you and I have. Once you have one of them, you replicate them in software and you can have billions of them. If you link them together, you could get superhuman intelligence. That’s why I think it behooves all of us to think hard about this before it may be too late. Yes, there’s a promise of untold benefits, but we all know human nature. It has its dark side. People will misuse it for their own purposes.

Question:

How do we build in those checks to make sure computers don’t rule the world?

Christof Koch:

That’s a very good question. The only reason we don’t have a nuclear bomb in every backyard is because you can’t build it easily. It’s hard to get the material. It takes a nation state and tens of thousands of people. But that may be different with AI. If current trends accelerate, it may be that 10 programmers in Timbuktu could unleash something truly malevolent onto mankind. These days, I’m getting more pessimistic about the fate of a technological species such as ours. Of course, this might also explain the Fermi paradox.

Question:

Remind us what the Fermi paradox is.

Christof Koch:

We have yet to detect a single intelligent species, even though we know there are probably trillions of planets. Why is that? Well, one explanation is it’s just extremely unlikely for life to arise and we’re the only one. But I think a more likely possibility is that any time you get life that’s sufficiently complex, with advanced technology, it has somehow managed to annihilate itself, either by nuclear war or by the rise of machines.

Question:

You are a pessimist! You really think any advanced civilization is going to destroy itself?

Christof Koch:

If it’s very aggressive like ours and it’s based in technology.•

The Scientific American piece “20 Big Questions about the Future of Humanity” is loads of fun, setting the huge issues (consciousness, space colonization, etc.) before top-shelf scientists. The only disappointment is University of New Mexico professor Carlton Caves stating that human extinction via machine intelligence “can be avoided by unplugging them.” One can only hope he was being flippant, though it’s not a useful response regardless. Three entries:

1. Does humanity have a future beyond Earth?
“I think it’s a dangerous delusion to envisage mass emigration from Earth. There’s nowhere else in the solar system that’s as comfortable as even the top of Everest or the South Pole. We must address the world’s problems here. Nevertheless, I’d guess that by the next century, there will be groups of privately funded adventurers living on Mars and thereafter perhaps elsewhere in the solar system. We should surely wish these pioneer settlers good luck in using all the cyborg techniques and biotech to adapt to alien environments. Within a few centuries they will have become a new species: the post-human era will have begun. Travel beyond the solar system is an enterprise for post-humans — organic or inorganic.”
—Martin Rees, British cosmologist and astrophysicist

3. Will we ever understand the nature of consciousness?
“Some philosophers, mystics and other confabulatores nocturni pontificate about the impossibility of ever understanding the true nature of consciousness, of subjectivity. Yet there is little rationale for buying into such defeatist talk and every reason to look forward to the day, not that far off, when science will come to a naturalized, quantitative and predictive understanding of consciousness and its place in the universe.”
Christof Koch, president and CSO at the Allen Institute for Brain Science; member of the Scientific American Board of Advisers

10. Can we avoid a “sixth extinction”?
“It can be slowed, then halted, if we take quick action. The greatest cause of species extinction is loss of habitat. That is why I’ve stressed an assembled global reserve occupying half the land and half the sea, as necessary, and in my book ‘Half-Earth,’ I show how it can be done. With this initiative (and the development of a far better species-level ecosystem science than the one we have now), it will also be necessary to discover and characterize the 10 million or so species estimated to remain; we’ve only found and named two million to date. Overall, an extension of environmental science to include the living world should be, and I believe will be, a major initiative of science during the remainder of this century.”
Edward O. Wilson, University Research Professor emeritus at Harvard University•

Yes, eventually you’ll have the implant, and those brain chips may arrive in two waves: initially for the treatment of chronic illness and then for performance enhancement. Because of the military’s interest in the latter, however, those waves might come crashing down together. From “The Future of Brain Implants,” an article by Gary Marcus and Christof Koch in the Wall Street Journal:

“Many people will resist the first generation of elective implants. There will be failures and, as with many advances in medicine, there will be deaths. But anybody who thinks that the products won’t sell is naive. Even now, some parents are willing to let their children take Adderall before a big exam. The chance to make a ‘superchild’ (or at least one guaranteed to stay calm and attentive for hours on end during a big exam) will be too tempting for many.

Even if parents don’t invest in brain implants, the military will. A continuing program at Darpa, a Pentagon agency that invests in cutting-edge technology, is already supporting work on brain implants that improve memory to help soldiers injured in war. Who could blame a general for wanting a soldier with hypernormal focus, a perfect memory for maps and no need to sleep for days on end? (Of course, spies might well also try to eavesdrop on such a soldier’s brain, and hackers might want to hijack it. Security will be paramount, encryption de rigueur.)

An early generation of enhancement implants might help elite golfers improve their swing by automating their mental practice. A later generation might allow weekend golfers to skip practice altogether. Once neuroscientists figure out how to reverse-engineer the end results of practice, “neurocompilers” might be able to install the results of a year’s worth of training directly into the brain, all in one go.

That won’t happen in the next decade or maybe even in the one after that. But before the end of the century, our computer keyboards and trackpads will seem like a joke; even Google Glass 3.0 will seem primitive.”
