“If The Computer Thinks It’s You, Then Maybe It Really Is”

In “How to Live Forever,” a lively New Yorker blog post, Tim Wu considers whether the self would continue should we eventually be able to upload our consciousness into a computer. No, we certainly wouldn’t remain the same in any strict sense. Then again, we never remain the same. If we were somehow able to live indefinitely, we’d be markedly different as time went by. Even within our current relatively puny lifespans, great changes occur within us, and the through line we tell ourselves exists may be just a narrative trick. But I grant that some sort of container-based consciousness makes for a more radical departure than the mere depredations of time. From the second the changeover occurs, life, or something like it, is altered. From Wu:

Some people don’t consider that a problem. After all, if a copy thinks it is you, perhaps that would be good enough. David Chalmers, a philosopher at the Australian National University, points out that we lose consciousness every night when we go to sleep. When we regain it, we think nothing of it. “Each waking is really like a new dawn that’s a bit like the commencement of a new person,” Chalmers has said. “That’s good enough…. And if that’s so, then reconstructive uploading will also be good enough.”

If the self has no meaning, its death has less significance; if the computer thinks it’s you, then maybe it really is. The philosopher Derek Parfit captures this idea when he says that “my death will break the more direct relations between my present experiences and future experiences, but it will not break various other relations. This is all there is to the fact that there will be no one living who will be me.”

I suspect, however, that most people seeking immortality rather strongly believe that they have a self, which is why they are willing to spend so much money to keep it alive. They wouldn’t be satisfied knowing that their brains keep on living without them, like a clone. This is the self-preserving, or selfish, version of everlasting life, in which we seek to be absolutely sure that immortality preserves a sense of ourselves, operating from a particular point of view.

The fact that we cannot agree on whether our sense of self would survive copying is a reminder that our general understanding of consciousness and self-awareness is incredibly weak and limited. Scientists can’t define it, and philosophers struggle, too. Giulio Tononi, a theorist based at the University of Wisconsin, defines consciousness simply as “what fades when we fall into dreamless sleep.” In recent years, he and other scientists, like Christof Koch, at Caltech, have made progress in understanding when consciousness arises, namely from massive complexity and linkages between different parts of the brain. “To be conscious,” Koch has written, “you need to be a single, integrated entity with a large repertoire of highly differentiated states.” That is pretty abstract. And it still gives us little to no sense of what it would mean to transfer ourselves to some other vessel.

With just an uploaded brain and no body, would you even be conscious in a meaningful sense?•
