“Um, The Virtual Reality Inside Our Head?”

In Carl Sagan’s 1969 article “Mr. X,” the astronomer wrote of his marijuana experience, summing it up most colorfully this way: “When I closed my eyes, I was stunned to find that there was a movie going on the inside of my eyelids.” Dude!

A movie going on inside the head is one metaphor the Australian philosopher David Chalmers uses to try to describe that inscrutable thing called consciousness. He comes up with several other analogies in a Reddit AMA about the hard problem of consciousness. Below are some of the more accessible exchanges.


Question:

In your TED talk you metaphorically characterized consciousness as “a movie playing inside your head,” and more comically in an IAI video as “that annoying thing between naps”. Do you have or have you come across any other metaphors of consciousness that you find fruitful when trying to get across just exactly what consciousness is?

David Chalmers:

um, the virtual reality inside our head? (probably better than a movie!) the thing mary wouldn’t know about from inside her black and white room, despite knowing all about the physical processes in the brain? the thing that makes us different from zombies or robots without an inner life? the first-person point of view?


Question:

When we talk about the dangers of AI, we may be talking about the danger of a self-driving car and its decision making, or a more general AI and whether it will lead to an AI+, and ultimately to a larger danger concerning all of us. I am interested in when philosophers (specifically) talk about the imminent dangers of the second type of AI, based on recent achievements (general Atari game playing, beating the Go champion, usage in medical environments, etc.), and my question is: What do you think should be the relationship between academic philosophers, who focus on how imminent the AI danger is, and the actual engineering behind the aforementioned achievements? Should academic philosophers incorporate into their arguments the specific modeling techniques or search algorithms (e.g., Monte Carlo tree search, back-propagation, deep neural nets) and how they work when they argue about how close to the possible danger we are? If not, is the imminence argued in a satisfactory way, in your opinion?

David Chalmers:

i don’t know if philosophers are the best judges of just how imminent human-level AI or AI+ is. in my own work on the topic (e.g., the paper on the singularity) i’ve stressed that a lot of the philosophical issues are fairly independent of timeframe. of course it’s true that the question of imminence is highly relevant for practical purposes. i think that to assess it one has to pay close attention to the current state of AI as well as related fields: e.g. in the current situation, to try to figure out just what deep learning can and can’t do, what are the main obstacles, and what are the prospects for overcoming them. but the fact is that even experts in this area have widely varied views about the timeframe and are wary of making confident predictions. i chaired a panel on just this topic at the recent asilomar conference on beneficial AI, with eight leading AI researchers, and few of them were willing to make confident predictions (though the consensus high-confidence range seems to be somewhere between 20 and 100 years). so i think that we should think about and plan for human-level AI in a way that is fairly robust over different timeframes.


Question:

Do you watch Westworld? It’s an amazing TV show that covers topics like AI and consciousness. If yes, what do you think about it?

David Chalmers:

i love westworld. it’s really well-done. i do think its reliance on julian jaynes’ long-discarded theory of consciousness (that it involves realizing the voices in your head are your own) is disappointing, though perhaps it’s somewhat cinematic. in general, although the show presents itself as a meditation on consciousness in AIs and others, i think it’s much more of an exploration of free will. it seems to me that the AIs in the show are pretty obviously conscious, but there are real questions of what sort of free will, if any, they might have, given the way their actions are grounded in routines. and the “journey” of the AIs seems more like a journey toward free will and perhaps toward greater self-consciousness than toward consciousness per se. of course there are also very rich materials in the show for thinking about the ethics of AI.


Question:

Do you think any substantial progress on the hard problem of consciousness will be made in time for the debate on AI rights? If by that time we still haven’t made any progress on the hard problem of consciousness, how should humanity value the life of an “apparently sentient” AI, especially relative to a human life?

David Chalmers:

i hope so, but there are no guarantees. on the other hand, we can have an informed discussion about the distribution of consciousness even without solving the hard problem. we’re doing that currently in the case of consciousness in non-human animals, where most people (including me) agree that there is strong evidence of consciousness in many species. i think it’s conceivable we could get into a situation like that with AI, though there would no doubt be many hard cases. i do think that when an AI is “apparently sentient” based on behavior, we should adopt a principle of assuming it is conscious, unless there’s some very good reason not to. and if it’s conscious in the way that we are, i think prima facie its life should have value comparable to ours (though perhaps there will also be all sorts of differences that make a moral difference).


Question:

Do you think sustained consciousness will be worth it without the pleasures of the body?

David Chalmers:

i hope we don’t have to choose!
