Nick Bostrom



When I was first exposed, in the aughts, to philosopher Nick Bostrom’s idea that we’re living inside a computer simulation rather than reality, I accepted the premise as a fun thought experiment, something perfectly fine to consider.

The notion has lost a lot of its charm since Elon Musk went on a Bostrom bender after reading Superintelligence in 2014, as he and some other Silicon Valley stalwarts have taken this idea, theoretically possible if unlikely, and transformed it into almost a sure thing. Musk, an aspirant Martian, has said that “there’s a billion to one chance we’re living in base reality.” It’s this certitude, an almost religious fervor, that seems the actual threat to reality.

It reminds me of when I worked for Internet companies during the tail end of Web 1.0, a time of supposedly self-fulfilling prophecies, when everyone, it seemed, was sure NASDAQ would soon leapfrog the Dow, right before the tech bubble burst.

In “Silicon Valley Questions the Meaning of Life,” a smart Vanity Fair “Hive” piece, Nick Bilton articulates exactly why a mere philosophical exercise has become so disquieting. An excerpt:

The theories espoused by many of the prominent figures in the tech industry can sometimes sound as though they were pulled from The Matrix. That’s not really as unusual as it sounds. Hollywood, after all, has been exploring strands of the simulation idea for decades. World on a Wire, Brainstorm, Inception, the entire Matrix franchise, Total Recall, and many other movies have envisioned this theory in one way or another. Most of the technologies we use on a daily basis were first envisioned by sci-fi writers many years ago, including smartphones, tablets, and even a version of Twitter.

But these ideas are often put forth for the purpose of entertainment—the movies end, and we all leave the seemingly real theater, and go back to our real, seemingly un-simulated lives. What’s fascinating, however, is the velocity with which the fictional premise has become a serious, and seriously considered, theory in the Valley. I have been asked, on more than one occasion, if I believe we’re in a simulation. And I have listened, on more than one occasion, as people carefully articulate how our very conversation could be taking place in a simulation. Like a lot of things in the Valley, I have lost track of the line between where the joke ends, if that line even existed at all.

Whatever the case, the conversation is moving from the confines of cubicles and research labs to the mainstream.•



Oxford philosopher Nick Bostrom believes “superintelligence”–machines dwarfing our intellect–is the leading existential threat to humans in our era. He’s either wrong and not alarmed enough by, say, climate change, or correct and warning us of the biggest peril we’ll ever face. Most likely, such a scenario will be a real challenge in the long run, though it’s probably not the most pressing one right now.

In John Thornhill’s Financial Times article about Bostrom, the writer pays some mind to those pushing back at what they feel is needless alarmism attending the academic’s work. An excerpt:

Some AI experts have accused Bostrom of alarmism, suggesting that we remain several breakthroughs short of ever making a machine that “thinks”, let alone surpasses human intelligence. A sceptical fellow academic at Oxford, who has worked with Bostrom but doesn’t want to be publicly critical of his work, says: “If I were ranking the existential threats facing us, then runaway ‘superintelligence’ would not even be in the top 10. It is a second half of the 21st century problem.”

But other leading scientists and tech entrepreneurs have echoed Bostrom’s concerns. Britain’s most famous scientist, Stephen Hawking, whose synthetic voice is facilitated by a basic form of AI, has been among the most strident. “The development of full artificial intelligence could spell the end of the human race,” he told the BBC.

Elon Musk, the billionaire entrepreneur behind Tesla Motors and an active investor in AI research, tweeted: “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.”

Although Bostrom has a reputation as an AI doomster, he starts our discussion by emphasising the extraordinary promise of machine intelligence, in both the short and long term. “I’m very excited about AI and I think it would be a tragedy if this kind of superintelligence were never developed.” He says his main aim, both modest and messianic, is to help ensure that this epochal transition goes smoothly, given that humankind only has one chance to get it right.

“So much is at stake that it’s really worth doing everything we can to maximise the chances of a good outcome,” he says.•



As I mentioned last week, Elon Musk, among other Silicon Valley stalwarts, has been on a Nick Bostrom bender ever since the publication of Superintelligence. In a smart Guardian profile by Tim Adams, the Oxford philosopher is depicted as being of two minds, believing technology may be the Holy Grail, or it may instead read us our Last Rites. That’s the dual reality of a Transhumanist and Existentialist.

Bostrom tells his interviewer he thinks the risk of human extinction by AI will likely be largely ignored despite his clarion call. “It will come gradually and seamlessly without us really addressing it,” he says.

There seem to be only two cautions in regard to Bostrom’s work: 1) Attention could shift from immediate crises (e.g., climate change) to longer-term ones, and 2) Rules developed today for a possible future explosion of machine intelligence will have to be very flexible, since there’s so much information we currently don’t possess/can’t comprehend.

An excerpt:

Bostrom sees those implications as potentially Darwinian. If we create a machine intelligence superior to our own, and then give it freedom to grow and learn through access to the internet, there is no reason to suggest that it will not evolve strategies to secure its dominance, just as in the biological world. He sometimes uses the example of humans and gorillas to describe the subsequent one-sided relationship and – as last month’s events in Cincinnati zoo highlighted – that is never going to end well. An inferior intelligence will always depend on a superior one for its survival.

There are times, as Bostrom unfolds various scenarios in Superintelligence, when it appears he has been reading too much of the science fiction he professes to dislike. One projection involves an AI system eventually building covert “nanofactories producing nerve gas or target-seeking mosquito-like robots [which] might then burgeon forth simultaneously from every square metre of the globe” in order to destroy meddling and irrelevant humanity. Another, perhaps more credible vision, sees the superintelligence “hijacking political processes, subtly manipulating financial markets, biasing information flows, or hacking human-made weapons systems” to bring about the extinction.

Does he think of himself as a prophet?

He smiles. “Not so much. It is not that I believe I know how it is going to happen and have to tell the world that information. It is more I feel quite ignorant and very confused about these things but by working for many years on probabilities you can get partial little insights here and there. And if you add those together with insights many other people might have, then maybe it will build up to some better understanding.”

Bostrom came to these questions by way of the transhumanist movement, which tends to view the digital age as one of unprecedented potential for optimising our physical and mental capacities and transcending the limits of our mortality. Bostrom still sees those possibilities as the best case scenario in the superintelligent future, in which we will harness technology to overcome disease and illness, feed the world, create a utopia of fulfilling creativity and perhaps eventually overcome death. He has been identified in the past as a member of Alcor, the cryogenic initiative that promises to freeze mortal remains in the hope that, one day, minds can be reinvigorated and uploaded in digital form to live in perpetuity. He is coy about this when I ask directly what he has planned.

“I have a policy of never commenting on my funeral arrangements,” he says.•



Elon Musk has been on a Nick Bostrom bender for a while now, spending big money hoping to counter Homo sapiens-eradicating AI, after devouring the Oxford philosopher’s book Superintelligence. This week, the Mars-positive mogul contended humans are almost definitely merely characters in a more advanced civilization’s video game, something Bostrom has theorized for quite some time. Two excerpts follow: 1) The opening of John Tierney’s excellent 2007 NYT article, “Our Lives, Controlled From Some Guy’s Couch,” and 2) Ezra Klein’s Vox piece about Musk’s Sims-friendly statements.


From Tierney:

Until I talked to Nick Bostrom, a philosopher at Oxford University, it never occurred to me that our universe might be somebody else’s hobby. I hadn’t imagined that the omniscient, omnipotent creator of the heavens and earth could be an advanced version of a guy who spends his weekends building model railroads or overseeing video-game worlds like the Sims.

But now it seems quite possible. In fact, if you accept a pretty reasonable assumption of Dr. Bostrom’s, it is almost a mathematical certainty that we are living in someone else’s computer simulation.

This simulation would be similar to the one in The Matrix, in which most humans don’t realize that their lives and their world are just illusions created in their brains while their bodies are suspended in vats of liquid. But in Dr. Bostrom’s notion of reality, you wouldn’t even have a body made of flesh. Your brain would exist only as a network of computer circuits.

You couldn’t, as in The Matrix, unplug your brain and escape from your vat to see the physical world. You couldn’t see through the illusion except by using the sort of logic employed by Dr. Bostrom, the director of the Future of Humanity Institute at Oxford.

Dr. Bostrom assumes that technological advances could produce a computer with more processing power than all the brains in the world, and that advanced humans, or “posthumans,” could run “ancestor simulations” of their evolutionary history by creating virtual worlds inhabited by virtual people with fully developed virtual nervous systems.•


From Klein:

By far the best moment of Recode’s annual Code Conference was when Elon Musk took the stage and explained that though we think we’re flesh-and-blood participants in a physical world, we are almost certainly computer-generated entities living inside a more advanced civilization’s video game.

Don’t believe me? Here’s Musk’s argument in full: 

The strongest argument for us being in a simulation probably is the following. Forty years ago we had Pong. Like, two rectangles and a dot. That was what games were.

Now, 40 years later, we have photorealistic, 3D simulations with millions of people playing simultaneously, and it’s getting better every year. Soon we’ll have virtual reality, augmented reality.

If you assume any rate of improvement at all, then the games will become indistinguishable from reality, even if that rate of advancement drops by a thousand from what it is now. Then you just say, okay, let’s imagine it’s 10,000 years in the future, which is nothing on the evolutionary scale.

So given that we’re clearly on a trajectory to have games that are indistinguishable from reality, and those games could be played on any set-top box or on a PC or whatever, and there would probably be billions of such computers or set-top boxes, it would seem to follow that the odds that we’re in base reality is one in billions.

Tell me what’s wrong with that argument. Is there a flaw in that argument?•
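
Musk’s “one in billions” is really just Bostrom’s counting argument in miniature, and it’s worth making the arithmetic explicit. Here is a minimal back-of-the-envelope version (my sketch, with N as a placeholder, not a figure from Musk or Bostrom): if base reality eventually hosts N simulated minds for every unsimulated one, and simulated observers can’t tell the difference from the inside, then a randomly chosen observer should reckon

$$P(\text{base reality}) = \frac{1}{N+1} \approx 10^{-9} \quad \text{when } N \approx 10^{9}.$$

The conclusion is only as strong as the premise that N is enormous, i.e., that such simulations are both feasible and actually run in vast numbers.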


“Greed is good,” proclaimed fictional robber baron Gordon Gekko in 1987, echoing a speech from a year earlier by the very real Ivan Boesky, who by the time Wall Street opened had traded the Four Seasons for the Graybar Hotel, his desires having pried him from the penthouse. The point is well-taken, however, when applied correctly: Unhealthy desires can be useful. You don’t get people to risk life and limb–emigrating to the “New” World or participating in the dangerous Manifest Destiny–unless there’s a potential for a better life, and, often, a bigger bank account.

I’ve posted previously about my queasiness over recent U.S. regulation which unilaterally allows its corporations to lay claim to bodies in space, but perhaps the quest to go for the gold out there has a silver lining. While it’s gross for those already fabulously wealthy to be wondering who will use asteroid mining to become the first trillionaire, Grayson Cary considers in a smart Aeon essay that perhaps avarice is a necessary evil if we are to colonize space and safeguard our species against single-planet calamity. As the writer states, past multinational treaties may inhibit unfettered speculation, but probably won’t. Private, public, U.S., China, etc.–it’s going to be a land rush that sorts itself out as we go, and go we will. As Cary writes, “There comes a point at which Earthbound opinions hardly matter.”

An excerpt:

Over the 2015 Thanksgiving holiday – which, in the spirit of appropriation, seems appropriate – President Barack Obama signed into law the Spurring Private Aerospace Competitiveness and Entrepreneurship (SPACE) Act. It had emerged from House and Senate negotiations with surprisingly robust protections for US asteroid miners. In May, the House had gone only so far as to say that ‘[a]ny asteroid resources obtained in outer space are the property of the entity that obtained them’. In the Senate, commercial space legislation had moved forward without an answer to the question of property. In the strange crucible of the committee process, the bill ended up broader, bolder and more patriotic than either parent.

‘A United States citizen,’ Congress resolved, ‘engaged in commercial recovery of an asteroid resource or a space resource under this chapter shall be entitled to any asteroid resource or space resource obtained, including to possess, own, transport, use and sell the asteroid resource or space resource obtained.’ It’s a turning point, maybe a decisive one, in a remarkable debate over the administration of celestial bodies. It’s an approach with fierce critics – writing for Jacobin magazine in 2015, Nick Levine called it a vision for ‘trickle-down astronomics’ – and the stakes, if you squint, are awfully high. A small step for 535 lawmakers could amount to one giant leap for humankind.

If you hew to the right frame of mind, decisions about space policy have enormous consequences for the future of human welfare. Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford, offered a stark version of that view in a paper called ‘Astronomical Waste: The Opportunity Cost of Delayed Technological Development’ (2003). By one estimate, he wrote, ‘the potential for approximately 10³⁸ human lives is lost every century that colonisation of our local supercluster is delayed; or, equivalently, about 10²⁹ potential human lives per second’. Suppose you accept that perspective, or for any other reason feel an urgent need to get humanity exploring space. How might a species hurry things up?

For a vocal chorus of pro-space, pro-market experts, the answer starts with property: to boldly go and buy and sell. ‘The only way to interest investors in building space settlements,’ writes the non-profit Space Settlement Institute on its website, ‘is to make doing so very profitable.’ In other words: show me the money.•
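
As an aside, the exponents in the Bostrom estimate quoted above are at least internally consistent. A quick unit conversion (my arithmetic, not the essay’s), using a century of roughly 3.16 × 10⁹ seconds:

$$\frac{10^{38}\ \text{lives}}{1\ \text{century}} = \frac{10^{38}\ \text{lives}}{3.16\times 10^{9}\ \text{s}} \approx 3\times 10^{28}\ \text{lives per second},$$

which rounds to the ‘about 10²⁹ potential human lives per second’ of the paper.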



Catastrophist philosopher Nick Bostrom believes machine superintelligence may be the greatest existential risk facing humankind, that it could, perhaps sooner rather than later, be the end of us if we’re not careful. There’s nothing theoretically impossible about that, though I seriously doubt the sooner part. First maybe McDonald’s will be fully automated, and then much, much, much later on we face a robot-inspired endgame. I actually think it’s more likely that such computer intelligence will help us engineer our own evolution into whatever it is we become in the long run, though miscalculation leading to a cascading disaster might become a plausible scenario at some point.

In a Washington Post piece, Joel Achenbach explores the so-called Artificial Intelligence threat and the professional worriers who analyze it and exhort us to shape the future. MIT computer scientist Daniela Rus is presented as a counterpoint to Bostrom, physicist Max Tegmark and other thinkers who fear an AI-inspired end is near. The opening:

The world’s spookiest philosopher is Nick Bostrom, a thin, soft-spoken Swede. Of all the people worried about runaway artificial intelligence, and Killer Robots, and the possibility of a technological doomsday, Bostrom conjures the most extreme scenarios. In his mind, human extinction could be just the beginning.

Bostrom’s favorite apocalyptic hypothetical involves a machine that has been programmed to make paper clips (although any mundane product will do). This machine keeps getting smarter and more powerful, but never develops human values. It achieves “superintelligence.” It begins to convert all kinds of ordinary materials into paper clips. Eventually it decides to turn everything on Earth — including the human race (!!!) — into paper clips.

Then it goes interstellar.

“You could have a superintelligence whose only goal is to make as many paper clips as possible, and you get this bubble of paper clips spreading through the universe,” Bostrom calmly told an audience in Santa Fe, N.M., earlier this year.

He added, maintaining his tone of understatement, “I think that would be a low-value future.”

Bostrom’s underlying concerns about machine intelligence, unintended consequences and potentially malevolent computers have gone mainstream. You can’t attend a technology conference these days without someone bringing up the A.I. anxiety. It hovers over the tech conversation with the high-pitched whine of a 1950s-era Hollywood flying saucer.•



Despite his intelligence–or perhaps because of it–philosopher Nick Bostrom could have just as readily fallen through the cracks as risen to prominence, making an unlikely space for himself with the headiest of endeavors: calculating the likelihood of humans escaping extinction. He’s a risk manager on the grandest scale.

Far from a crank screaming of catastrophes, the Oxford academic is a rigorous researcher and intellectual screaming of catastrophes, especially the one he sees as most likely to eradicate us: superintelligent machines. In fact, he thinks self-teaching AI of a soaring IQ is even scarier than climate change. In a New Yorker piece on Bostrom, the best profile yet of the philosopher, Raffi Khatchadourian writes that the Superintelligence author sees himself as a “cartographer rather than a polemicist,” though he’s clearly both.

In addition to attempting to name the threats that may be hurtling our way, Bostrom takes on the biggest of the other big questions. For example: What will life be like a million years from now? He argues that long-term forecasting is easier than the short- and mid-term types, because the assumption of continued existence means most visions will be realized. He refers to this idea as the “Technological Completion Conjecture,” saying that “if scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.”

My own thoughts on these matters remain the same: In the long run, we either become what those of us alive right now would consider a Posthuman species, the next evolution, or we’ll cease to be altogether. A museum city can linger for a long spell, beautiful in its languor, but humans doubling as statues from the past will eventually be toppled.

An excerpt:

Bostrom has a reinvented man’s sense of lost time. An only child, he grew up—as Niklas Boström—in Helsingborg, on the southern coast of Sweden. Like many exceptionally bright children, he hated school, and as a teen-ager he developed a listless, romantic persona. In 1989, he wandered into a library and stumbled onto an anthology of nineteenth-century German philosophy, containing works by Nietzsche and Schopenhauer. He read it in a nearby forest, in a clearing that he often visited to think and to write poetry, and experienced a euphoric insight into the possibilities of learning and achievement. “It’s hard to convey in words what that was like,” Bostrom told me; instead he sent me a photograph of an oil painting that he had made shortly afterward. It was a semi-representational landscape, with strange figures crammed into dense undergrowth; beyond, a hawk soared below a radiant sun. He titled it “The First Day.”

Deciding that he had squandered his early life, he threw himself into a campaign of self-education. He ran down the citations in the anthology, branching out into art, literature, science. He says that he was motivated not only by curiosity but also by a desire for actionable knowledge about how to live. To his parents’ dismay, Bostrom insisted on finishing his final year of high school from home by taking special exams, which he completed in ten weeks. He grew distant from old friends: “I became quite fanatical and felt quite isolated for a period of time.”

When Bostrom was a graduate student in Stockholm, he studied the work of the analytic philosopher W. V. Quine, who had explored the difficult relationship between language and reality. His adviser drilled precision into him by scribbling “not clear” throughout the margins of his papers. “It was basically his only feedback,” Bostrom told me. “The effect was still, I think, beneficial.” His previous academic interests had ranged from psychology to mathematics; now he took up theoretical physics. He was fascinated by technology. The World Wide Web was just emerging, and he began to sense that the heroic philosophy which had inspired him might be outmoded. In 1995, Bostrom wrote a poem, “Requiem,” which he told me was “a signing-off letter to an earlier self.” It was in Swedish, so he offered me a synopsis: “I describe a brave general who has overslept and finds his troops have left the encampment. He rides off to catch up with them, pushing his horse to the limit. Then he hears the thunder of a modern jet plane streaking past him across the sky, and he realizes that he is obsolete, and that courage and spiritual nobility are no match for machines.”•



In an excellent Five Books interview, writer Calum Chace suggests a quintet of titles on the topic of Artificial Intelligence, four of which I’ve read. In recommending The Singularity Is Near, he defends the author Ray Kurzweil against charges of techno-quackery, though the futurist’s predictions have grown more desperate and fantastic as he’s aged. It’s not that what he predicts can’t ever be done, but his timelines seem to me way too aggressive.

Nick Bostrom’s Superintelligence, another choice, is a very academic work, though an important one. Interesting that Bostrom thinks advanced AI is a greater existential threat to humans than even climate change. (I hope I’ve understood the philosopher correctly in that interpretation.) The next book is Martin Ford’s Rise of the Robots, which I enjoyed, but I prefer Chace’s fourth choice, Andrew McAfee and Erik Brynjolfsson’s The Second Machine Age, which covers the same terrain of technological unemployment with, I think, greater rigor and insight. The final suggestion is one I haven’t read, Greg Egan’s sci-fi novel Permutation City, which concerns intelligence uploading and wealth inequality.

An excerpt about Kurzweil:

Question:

Let’s talk more about some of these themes as we go through the books you’ve chosen. The first one on your list is The Singularity is Near, by Ray Kurzweil. He thinks things are moving along pretty quickly, and that a superintelligence might be here soon. 

Calum Chace:

He does. He’s fantastically optimistic. He thinks that in 2029 we will have AGI. And he’s thought that for a long time, he’s been saying it for years. He then thinks we’ll have an intelligence explosion and achieve uploading by 2045. I’ve never been entirely clear what he thinks will happen in the 16 years in between. He probably does have quite detailed ideas, but I don’t think he’s put them to paper. Kurzweil is important because he, more than anybody else, has made people think about these things. He has amazing ideas in his books—like many of the ideas in everybody’s books they’re not completely original to him—but he has been clearly and loudly propounding the idea that we will have AGI soon and that it will create something like utopia. I came across him in 1999 when I read his book, Are We Spiritual Machines? The book I’m suggesting here is The Singularity is Near, published in 2005. The reason why I point people to it is that it’s very rigorous. A lot of people think Kurzweil is a snake-oil salesman or somebody selling a religious dream. I don’t agree. I don’t agree with everything he says and he is very controversial. But his book is very rigorous in setting out a lot of the objections to his ideas and then tackling them. He’s brave, in a way, in tackling everything head-on, he has answers for everything. 

Question:

Can you tell me a bit more about what ‘the singularity’ is and why it’s near?

Calum Chace:

The singularity is borrowed from the world of physics and math where it means an event at which the normal rules break down. The classic example is a black hole. There’s a bit of radiation leakage but basically, if you cross it, you can’t get back out and the laws of physics break down. Applied to human affairs, the singularity is the idea that we will achieve some technological breakthrough. The usual one is AGI. The machine becomes as smart as humans and continues to improve and quickly becomes hundreds, thousands, millions of times smarter than the smartest human. That’s the intelligence explosion. When you have an entity of that level of genius around, things that were previously impossible become possible. We get to an event horizon beyond which the normal rules no longer apply.

I’ve also started using it to refer to a prior event, which is the ‘economic singularity.’ There’s been a lot of talk, in the last few months, about the possibility of technological unemployment. Again, it’s something we don’t know for sure will happen, and we certainly don’t know when. But it may be that AIs—and to some extent their peripherals, robots—will become better at doing any job than a human. Better, and cheaper. When that happens, many or perhaps most of us can no longer work, through no fault of our own. We will need a new type of economy.  It’s really very early days in terms of working out what that means and how to get there. That’s another event that’s like a singularity — in that it’s really hard to see how things will operate at the other side.•



In a short Washington Post Q&A conducted by Robert Gebelhoff, philosopher Nick Bostrom explains why he favors human enhancement. It will be a thorny thing to implement, but it’s going to happen if humans don’t succumb first to an existential risk. In fact, cognitive enhancement may be the only way we don’t become extinct. An excerpt:

Question:

You’ve written in favor of human enhancement — which includes everything from genetic engineering to “mind-uploading” — to curb the risks AI might bring. How should we balance the risks of human enhancement and artificial intelligence?

Nick Bostrom:

I don’t think human enhancement should be evaluated solely in terms of how it might influence the AI development trajectory. But it is interesting to think about how different technologies and capabilities could interact. For example, humanity might eventually be able to reach a high level of technology and scientific understanding without cognitive enhancement, but with cognitive enhancement we could get there sooner.

And the character of our progress might also be different if we were smarter: less like that of a billion monkeys hammering away furiously at a billion typewriters until something usable appears by chance, and more like the work of insight and purpose. This might increase the odds that certain hazards would be foreseen and avoided. If machine superintelligence is to be built, one may wish the folks building it to be as competent as possible.•

 


It’s certainly disingenuous that the UK publication the Register plastered the word “EXCLUSIVE” on Brid-Aine Parnell’s Nick Bostrom interview, since the philosopher, who’s become widely known for writing about existential risks in his book Superintelligence, has granted many interviews in the past. The piece is useful, however, for making it clear that Bostrom is not a confirmed catastrophist, but rather someone posing questions about challenges we may (and probably will) face should our species continue in the longer term. An excerpt:

Even if we come up with a way to control the AI and get it to do “what we mean” and be friendly towards humanity, who then decides what it should do, and who is to reap the benefits of the likely wild riches and post-scarcity resources of a superintelligence that can get us out into the stars and using the whole of the (uninhabited) cosmos?

“We’re not coming from a starting point of thinking the modern human condition is terrible, technology is undermining our human dignity,” Bostrom says. “It’s rather starting from a real fascination with all the cool stuff that technology can do and hoping we can get even more from it, but recognising that there are some particular technologies that also could bring risks that we really need to handle very carefully.

“I feel a little bit like humanity is a bit like an infant or a teenager: some fairly immature person who has got their hands on increasingly powerful instruments. And it’s not clear that our wisdom has kept pace with our increasing technological prowess. But the solution to that is to try to turbo-charge the growth of our wisdom and our ability to solve global coordination problems. Technology will not wait for us, so we need to grow up a little bit faster.”

Bostrom believes that humanity will have to collaborate on the creation of an AI and ensure its goal is the greater good of everyone, not just a chosen few, after we have worked hard on solving the control problem. Only then does the advent of artificial intelligence and subsequent superintelligence stand the greatest chance of coming up with utopia instead of paperclipped dystopia.

But it’s not exactly an easy task.•


I think human beings will eventually go extinct without superintelligence to help us ward off big-impact challenges, yet I understand that Strong AI brings its own perils. I just don’t feel incredibly worried about it at the present time, though I think it’s a good idea to start focusing on the challenge today rather than tomorrow. In his Medium essay “Russell, Bostrom and the Risk of AI,” Lyle Cantor wonders whether humans are to computers as chimps are to humans. An excerpt:

Consider the chimp. If we are grading on a curve, chimps are very, very intelligent. Compare them to any other species besides Homo sapiens and they’re the best of the bunch. They have the rudiments of language, use very simple tools and have complex social hierarchies, and yet chimps are not doing very well. Their population is dwindling, and to the extent they are thriving they are thriving under our sufferance not their own strength.

Why? Because human civilization is a little like the paperclip maximizer; we don’t hate chimps or the other animals whose habitats we are rearranging; we just see higher value arrangements of the earth and water they need to survive. And we are only ever-so-slightly smarter than chimps.

In many respects our brains are nearly identical. Yes, the average human brain is about three times the size of an average chimp’s, but we still share much of the same gross structure. And our neurons fire at about 100 times per second and communicate through saltatory conduction, just like theirs do.

Compare that with the potential limits of computing. …

In terms of intellectual capacity, there’s an awful lot of room above us. An AI could potentially think millions of times faster than us. Problems that take the smartest humans years to solve it could solve in minutes. If a paperclip maximizer (or value-of-Goldman Sachs-stock maximizer or USA hegemony maximizer or refined-gold maximizer) is created, why should we expect our fate then to be any different than that of chimps now?•


One tricky point about designing autonomous machines is that if we embed in them our current moral codes, we’ll unwittingly stunt progress. Our morality has a lot of room to develop, so theirs needs to as well. I don’t think Strong AI is arriving anytime soon, but it’s a question worth pondering. From Adrienne LaFrance at the Atlantic:

How do we build machines that will make the world better, even when they start running themselves? And, perhaps the bigger question therein, what does a better world actually look like? Because if we teach machines to reflect on their actions based on today’s human value systems, they may soon be outdated themselves. Here’s how MIRI researchers Luke Muehlhauser and Nick Bostrom explained it in a paper last year:

Suppose that the ancient Greeks had been the ones to face the transition from human to machine control, and they coded their own values as the machines’ final goal. From our perspective, this would have resulted in tragedy, for we tend to believe we have seen moral progress since the Ancient Greeks (e.g. the prohibition of slavery). But presumably we are still far from perfection.

We therefore need to allow for continued moral progress. One proposed solution is to give machines an algorithm for figuring out what our values would be if we knew more, were wiser, were more the people we wished to be, and so on. Philosophers have wrestled with this approach to the theory of values for decades, and it may be a productive solution for machine ethics.•


Nick Bostrom’s book Superintelligence: Paths, Dangers, Strategies is sort of a dry read with a few colorful flourishes, but its ideas have front-burnered the existential threat of Artificial Intelligence, causing Stephen Hawking, Elon Musk and other heady thinkers to warn of the perils of AI, “the last invention we will ever need to make,” in Bostrom-ian terms. The philosopher joined a very skeptical Russ Roberts for an EconTalk conversation about future machines so smart they have no use for us. Beyond playing the devil’s advocate, the host is perplexed by the idea that superintelligence can make the leap beyond our control, that it can become “God.” But I don’t think machines need be either human or sacred to slip from our grasp in the long-term future, to have “preferences” based not on emotion or intellect but simply on the results of deep learning inartfully programmed by humans in the first place. One exchange:

“Russ Roberts: 

So, let me raise, say, a thought that–I’m interested if anyone else has raised this with you in talking about the book. This is a strange thought, I suspect, but I want your reaction to it. The way you talk about superintelligence reminds me a lot about how medieval theologians talked about God. It’s unbounded. It can do anything. Except maybe create a rock so heavy it can’t move it. Has anyone ever made that observation to you, and what’s your reaction to that?

Nick Bostrom:

I think you might be the first, at least that I can remember.

Russ Roberts: 

Hmmm.

Nick Bostrom: 

Well, so there are a couple of analogies, and a couple of differences as well. One difference is we imagine that a superintelligence here will be bounded by the laws of physics, which can be important when we are thinking about how it might interact with other superintelligences that might exist out there in the vast universe. Another important difference is that we would get to design this entity. So, if you imagine a pre-existing superintelligence that is out there and that has created the world and that has full control over the world, there might be a different set of options available to humans in deciding how we relate to that. But in this case, there are additional options on the table in that we actually have to figure out how to design it. We get to choose how to build it.

Russ Roberts:

Up to a point. Because you raise the specter of us losing control of it. To me, it creates–inevitably, by the way, much of this is science fiction, movie material; there’s all kinds of interesting speculations in your book, some of which would make wonderful movies and some of which maybe less so. But to me it sounds like you are trying to question–you are raising the question of whether this power that we are going to unleash might be a power that would not care about us. And it would be the equivalent of saying, of putting a god in charge of the universe who is not benevolent. And you are suggesting that in the creation of this power, we should try to steer it in a positive direction.

Nick Bostrom: 

Yeah. So in the first type of scenario which I mentioned, where you have a singleton forming because the first superintelligence is so powerful, then, yes, I think a lot will depend on what that superintelligence would want. And, the generic [?] there, I think it’s not so much that you would get a superintelligence that’s hostile or evil or hates humans. It’s just that it would have some goal that is indifferent to humans. The standard example being that of a paper clip maximizer. Imagine an artificial agent whose utility function is, say, linear in the number of paper clips it produces over time. But it is superintelligent, extremely clever at figuring out how to mobilize resources to achieve this goal. And then you start to think through, how would such an agent go about maximizing the number of paper clips that will be produced? And you realize that it will have an instrumental reason to get rid of humans in as much as maybe humans would maybe try to shut it off. And it can predict that there will be much fewer paper clips in the future if it’s no longer around to build them. So that would already create the society effect, an incentive for it to eliminate humans. Also, human bodies consist of atoms. And a lot of juicy[?] atoms that could be used to build some really nice paper clips. And so again, a society effect–it might have reasons to transform our bodies and the ecosphere into things that would be more optimal from the point of view of paper clip production. Presumably, space probe launchers that are used to send out probes into space that could then transform the accessible parts of the universe into paper clip factories or something like that. If one starts to think through possible goals that an artificial intelligence can have, it seems that almost all of those goals if consistently maximally realized would lead to a world where there would be no human beings and indeed perhaps nothing that we humans would accord value to. And it only looks like a very small subset of all goals, a very special subset, would be ones that, if realized, would have anything that we would regard as having value. So, the big challenge in engineering an artificial motivation system would be to try to reach into this large space of possible goals and take out ones that would actually sufficiently match our human goals, that we could somehow endorse the pursuit of these goals by a superintelligence.”


The end is near, roughly speaking. A number of scientists and philosophers, most notably Martin Rees and Nick Bostrom, agonize over the existential risks to humanity that might obliterate us long before the sun dies. A school of thought has arisen over the End of Days. From Sophie McBain in the New Statesman (via 3 Quarks Daily):

“Predictions of the end of history are as old as history itself, but the 21st century poses new threats. The development of nuclear weapons marked the first time that we had the technology to end all human life. Since then, advances in synthetic biology and nanotechnology have increased the potential for human beings to do catastrophic harm by accident or through deliberate, criminal intent.

In July this year, long-forgotten vials of smallpox – a virus believed to be ‘dead’ – were discovered at a research centre near Washington, DC. Now imagine some similar incident in the future, but involving an artificially generated killer virus or nanoweapons. Some of these dangers are closer than we might care to imagine. When Syrian hackers sent a message from the Associated Press Twitter account that there had been an attack on the White House, the Standard & Poor’s 500 stock market briefly fell by $136bn. What unthinkable chaos would be unleashed if someone found a way to empty people’s bank accounts?

While previous doomsayers have relied on religion or superstition, the researchers at the Future of Humanity Institute want to apply scientific rigour to understanding apocalyptic outcomes. How likely are they? Can the risks be mitigated? And how should we weigh up the needs of future generations against our own?

The FHI was founded nine years ago by Nick Bostrom, a Swedish philosopher, when he was 32. Bostrom is one of the leading figures in this small but expanding field of study.”


Philosopher Nick Bostrom, who has AI on the brain these days, just did an Ask Me Anything at Reddit. One exchange about the future of labor:

Question:

Good evening from Australia Professor! I would really like to know what your opinion is on technological unemployment. There is a bit of a shift in public thought and awareness at the moment about the rapid advances in both software and hardware displacing human workers in numerous fields.

Do you believe this time is actually different compared to the past and we do have to worry about the economic effects of technology, and more specifically AI, in permanently displacing humans?

Nick Bostrom:

“It’s striking that so far we’ve mainly used our higher productivity to consume more stuff rather than to enjoy more leisure. Unemployment is partly about lack of income (fundamentally a distributional problem) but it is also about a lack of self-respect and social status.

I think eventually we will have technological unemployment, when it becomes cheaper to do most everything humans do with machines instead. Then we can’t make a living out of wage income and would have to rely on capital income and transfers instead. But we would also have to develop a culture that does not stigmatize idleness and that helps us cultivate interest in activities that are not done to earn money.”


Speaking of machines taking over, here’s one final excerpt from Nick Bostrom’s Superintelligence. It comes from one of the best passages, “Of Horses and Men.” The sequence I’m quoting is rather dire, though Bostrom later looks at the more positive side of technology handling labor for us and how extreme wealth disparity could be remedied. The excerpt:

“With cheaply copyable labor, market wages fall. The only place where humans would remain competitive may be where customers have a basic preference for work done by humans. Today, goods that have been handcrafted or produced by indigenous people sometimes command a price premium. Future consumers might similarly prefer human-made goods and human athletes, human artists, human lovers, and human leaders to functionally indistinguishable or superior artificial counterparts. It is unclear, however, just how widespread such preferences would be. If machine-made alternatives were sufficiently superior, perhaps they would be more highly prized.

One parameter that might be relevant to consumer choice is the inner life of the worker providing a service or product. A concert audience, for instance, might like to know that the performer is consciously experiencing the music and the venue. Absent phenomenal experience, the musician could be regarded as merely a high-powered jukebox, albeit one capable of creating the three-dimensional appearance of a performer interacting naturally with the crowd. Machines might then be designed to instantiate the same kinds of mental states that would be present in a human performing the same task. Even with perfect replication of subjective experiences, however, some people might simply prefer organic work. Such preferences could also have ideological or religious roots. Just as many Muslims and Jews shun food prepared in ways they classify as haram or treif, so there might be groups in the future that eschew products whose manufacture involved unsanctioned use of machine intelligence.

What hinges on this? To the extent that cheap machine labor can substitute for human labor, human jobs may disappear. Fears about automation and job loss are of course not new. Concerns about technological unemployment have surfaced periodically, at least since the Industrial Revolution; and quite a few professions have in fact gone the way of the English weavers and textile artisans who in the early nineteenth century united under the banner of the folkloric ‘General Ludd’ to fight against the introduction of mechanized looms. Nevertheless, although machinery and technology have been substitutes for many particular types of human labor, physical technology has on the whole been a complement to labor. Average human wages around the world have been on a long-term upward trend, in large part because of such complementarities. Yet what starts out as a complement to labor can at a later stage become a substitute for labor. Horses were initially complemented by carriages and ploughs, which greatly increased the horse’s productivity. Later, horses were substituted for by automobiles and tractors. These later innovations reduced the demand for equine labor and led to a population collapse. Could a similar fate befall the human species?

The parallel to the story of the horse can be drawn out further if we ask why it is that there are still horses around. One reason is that there are still a few niches in which horses have functional advantages; for example, police work. But the main reason is that humans happen to have peculiar preferences for the services that horses can provide, including recreational horseback riding and racing. These preferences can be compared to the preferences we hypothesized some humans might have in the future, that certain goods and services be made by human hand. Although suggestive, this analogy is, however, inexact, since there is still no complete functional substitute for horses. If there were inexpensive mechanical devices that ran on hay and had exactly the same shape, feel, smell, and behavior as biological horses — perhaps even the same conscious experiences — then demand for biological horses would probably decline further.

With a sufficient reduction in the demand for human labor, wages would fall below the human subsistence level. The potential downside for human workers is therefore extreme: not merely wage cuts, demotions, or the need for retraining, but starvation and death. When horses became obsolete as a source of moveable power, many were sold off to meatpackers to be processed into dog food, bone meal, leather, and glue. These animals had no alternative employment through which to earn their keep. In the United States, there were about 26 million horses in 1915. By the early 1950s, 2 million remained.”


Genetic enhancement in humans likely isn’t around the corner, but at some point in the future it will be pretty impossible to avoid even if you disagree with it, the way the online world is currently almost unavoidable. A brief passage from Nick Bostrom’s Superintelligence about how designer babies may sway people, even countries, to fall in line:

“Once the example has been set, and the results start to show, holdouts will have strong incentives to follow suit. Nations would face the prospect of becoming cognitive backwaters and losing out in economic, scientific, military, and prestige contests with competitors that embrace the new human enhancement technologies. Individuals within a society would see places at elite schools being filled with genetically selected children (who may also on average be prettier, healthier, and more conscientious), and will want their own offspring to have the same advantages. There is some chance that a large attitudinal shift could take place over a relatively short time, perhaps in as little as a decade, once the technology is proven to work and to provide a substantial benefit.”



A piece of Superintelligence that Nick Bostrom adapted for Slate which stresses that AI needn’t be like humans to surpass us:

“An artificial intelligence can be far less humanlike in its motivations than a green scaly space alien. The extraterrestrial (let us assume) is a biological creature that has arisen through an evolutionary process and can therefore be expected to have the kinds of motivation typical of evolved creatures. It would not be hugely surprising, for example, to find that some random intelligent alien would have motives related to one or more items like food, air, temperature, energy expenditure, occurrence or threat of bodily injury, disease, predation, sex, or progeny. A member of an intelligent social species might also have motivations related to cooperation and competition: Like us, it might show in-group loyalty, resentment of free riders, perhaps even a vain concern with reputation and appearance.

An AI, by contrast, need not care intrinsically about any of those things. There is nothing paradoxical about an AI whose sole final goal is to count the grains of sand on Boracay, or to calculate the decimal expansion of pi, or to maximize the total number of paper clips that will exist in its future light cone. In fact, it would be easier to create an AI with simple goals like these than to build one that had a humanlike set of values and dispositions. Compare how easy it is to write a program that measures how many digits of pi have been calculated and stored in memory with how difficult it would be to create a program that reliably measures the degree of realization of some more meaningful goal—human flourishing, say, or global justice.”


On a recent Guardian “Science Weekly” podcast, host Ian Sample interviewed Oxford philosopher Nick Bostrom, author of the new book, Superintelligence: Paths, Dangers, Strategies, which looks at whether AI, often called “the last invention,” will be the death of us. I always think that we’ll end up extinct without the development of superintelligence, but Bostrom believes we can survive everything nature throws at us, though perhaps not our own unnatural creations. He points out that AI would never enslave us, because if it moved past our abilities it would continue improving until it could execute any task we can do far better than we can. Listen here.


Commenting on philosopher Nick Bostrom’s new book, Elon Musk compared superintelligence to nuclear weapons in terms of the danger it poses us. From Adario Strange at Mashable:

“Nevertheless, the comparison of A.I. to nuclear weapons, a threat that has cast a worrying shadow over much of the last 30 years in terms of humanity’s longevity possibly being cut short by a nuclear war, immediately raises a couple of questions.

The first, and most likely from many quarters, will be to question Musk’s future-casting. Some may use Musk’s A.I. concerns — which remain fantastical to many — as proof that his predictions regarding electric cars and commercial space travel are the visions of someone who has seen too many science fiction films. ‘If Musk really thinks robots might destroy humanity, maybe we need to dismiss his long view thoughts on other technologies.’ Those essays are likely already being written.

The other, and perhaps more troubling, is to consider that Musk’s comparison of A.I. to nukes is apt. What if Musk, empowered by rare insight from his exclusive perch guiding the very real future of space travel and automobiles, really has an accurate line on the future of A.I.?

Later, doubling down on his initial tweet, Musk wrote, ‘Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.'”


Speaking of the emergence of really smart machines, philosopher Nick Bostrom’s new book, Superintelligence, has just been published in the UK (with the U.S. edition available later this year). Here’s a piece from Clive Cookson’s Financial Times review:

“Since the 1950s proponents of artificial intelligence have maintained that machines thinking like people lie just a couple of decades in the future. In Superintelligence – a thought-provoking look at the past, present and above all the future of AI – Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute, starts off by mocking the futurists.

We are still far from real AI despite last month’s widely publicised ‘Turing test’ stunt, in which a computer mimicked a 13-year-old boy with some success in a brief text conversation. About half the world’s AI specialists expect human-level machine intelligence to be achieved by 2040, according to recent surveys, and 90 per cent say it will arrive by 2075. Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of ‘superintelligence’ faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.

The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain, Bostrom argues. Machines could improve their own capabilities far faster than human computer scientists.

‘Machines have a number of fundamental advantages, which will give them overwhelming superiority,’ he writes. ‘Biological humans, even if enhanced, will be outclassed.’ He outlines various ways for AI to escape the physical bonds of the hardware in which it developed. For example, it might use its hacking superpower to take control of robotic manipulators and automated labs; or deploy its powers of social manipulation to persuade human collaborators to work for it. There might be a covert preparation stage in which microscopic entities capable of replicating themselves by nanotechnology or biotechnology are deployed worldwide at an extremely low concentration. Then at a pre-set time nanofactories producing nerve gas or target-seeking mosquito-like robots might spring forth (though, as Bostrom notes, superintelligence could probably devise a more effective takeover plan than him).

What would the world be like after the takeover? It would contain far more intricate and intelligent structures than anything we can imagine today – but would lack any type of being that is conscious or whose welfare has moral significance. ‘A society of economic miracles and technological awesomeness, with nobody there to benefit,’ as Bostrom puts it. ‘A Disneyland without children.'”


The three Transformations humans must make if we’re to attain a higher plane of living, à la philosopher Nick Bostrom’s “Letter from Utopia“:

“To reach Utopia, you must first discover the means to three fundamental transformations.

The First Transformation: Secure life!

Your body is a deathtrap. This vital machine and mortal vehicle, unless it jams first or crashes, is sure to rust anon. You are lucky to get seven decades of mobility; eight if you be fortune’s darling. That is not sufficient to get started in a serious way, much less to complete the journey. Maturity of the soul takes longer. Why, even a tree-life takes longer.

Death is not one but a multitude of assassins. Do you not see them? They are coming at you from every angle. Take aim at the causes of early death – infection, violence, malnutrition, heart attack, cancer. Turn your biggest gun on aging, and fire. You must seize the biochemical processes in your body in order to vanquish, by and by, illness and senescence. In time, you will discover ways to move your mind to more durable media. Then continue to improve the system, so that the risk of death and disease continues to decline. Any death prior to the heat death of the universe is premature if your life is good.

Oh, it is not well to live in a self-combusting paper hut! Keep the flames at bay and be prepared with liquid nitrogen, while you construct yourself a better habitation. One day you or your children should have a secure home. Research, build, redouble your effort!

The Second Transformation: Upgrade cognition!

Your brain’s special faculties: music, humor, spirituality, mathematics, eroticism, art, nurturing, narration, gossip! These are fine spirits to pour into the cup of life. Blessed you are if you have a vintage bottle of any of these. Better yet, a cask! Better yet, a vineyard!

Be not afraid to grow. The mind’s cellars have no ceilings!

What other capacities are possible? Imagine a world with all the music dried up: what poverty, what loss. Give your thanks, not to the lyre, but to your ears for the music. And ask yourself, what other harmonies are there in the air, that you lack the ears to hear? What vaults of value are you witlessly debarred from, lacking the key sensibility?

Had you but an inkling, your nails would be clawing at the padlock.

Your brain must grow beyond any genius of humankind, in its special faculties as well as its general intelligence, so that you may better learn, remember, and understand, and so that you may apprehend your own beatitude.

Mind is a means: for without insight you will get bogged down or lose your way, and your journey will fail.

Mind is also an end: for it is in the spacetime of awareness that Utopia will exist. May the measure of your mind be vast and expanding.

Oh, stupidity is a loathsome corral! Gnaw and tug at the posts, and you will slowly loosen them up. One day you’ll break the fence that held your forebears captive. Gnaw and tug, redouble your effort!

The Third Transformation: Elevate well-being!

What is the difference between indifference and interest, boredom and thrill, despair and bliss?

Pleasure! A few grains of this magic ingredient are worth more than a king’s treasure, and we have it aplenty here in Utopia. It pervades into everything we do and everything we experience. We sprinkle it in our tea.

The universe is cold. Fun is the fire that melts the blocks of hardship and creates a bubbling celebration of life.

It is the birthright of every creature, a right no less sacred for having been trampled on since the beginning of time.

There is a beauty and joy here that you cannot fathom. It feels so good that if the sensation were translated into tears of gratitude, rivers would overflow.

I reach in vain for words to convey to you what it all amounts to… It’s like a rain of the most wonderful feeling, where every raindrop has its own unique and indescribable meaning – or rather it has a scent or essence that evokes a whole world… And each such evoked world is subtler, richer, deeper, more multidimensional than the sum total of what you have experienced in your entire life.

I will not speak of the worst pain and misery that is to be got rid of; it is too horrible to dwell upon, and you are already cognizant of the urgency of palliation. My point is that in addition to the removal of the negative, there is also an upside imperative: to enable the full flourishing of enjoyments that are currently out of reach.

The roots of suffering are planted deep in your brain. Weeding them out and replacing them with nutritious crops of well-being will require advanced skills and instruments for the cultivation of your neuronal soil. But take heed, the problem is multiplex! All emotions have a natural function. Prune carefully lest you accidentally reduce the fertility of your plot.

Sustainable yields are possible. Yet fools will build fools’ paradises. I recommend you go easy on your paradise-engineering until you have the wisdom to do it right.

Oh, what a gruesome knot suffering is! Pull and tug on those loops, and you will gradually loosen them up. One day the coils will fall, and you will stretch out in delight. Pull and tug, and be patient in your effort!

May there come a time when rising suns are greeted with joy by all the living creatures they shine upon.”


No publication birthed on the Internet is better than Aeon, a provocative stream of essays about technology, consciousness, nature, the deep future, the deep past and other fundamental concerns of life on Earth. In a world of brief tweets and easy access, the site asks the long and hard questions. Two great recent examples: Michael Belfiore’s “The Robots Are Coming,” a look at society when our silicon sisters no longer have an OFF switch; and Ross Andersen’s “Hell on Earth,” an examination of how infinite life extension will impact the justice system. (And if you’ve never read Andersen’s work about philosopher Nick Bostrom, go here and here.) Excerpts from these essays follow.

From “The Robots Are Coming”:

“Robots in the real world usually look nothing like us. On Earth they perform such mundane chores as putting car parts together in factories, picking up our online orders in warehouses, vacuuming our homes and mowing our lawns. Farther afield, flying robots land on other planets and conduct aerial warfare by remote control.

More recently, we’ve seen driverless cars take to our roads. Here, finally, the machines veer toward traditional R.U.R. territory. Which makes most people, it seems, uncomfortable. A Harris Interactive poll sponsored by Seapine Software, for example, announced this February that 88 per cent of Americans do not like the idea of their cars driving themselves, citing fear of losing control over their vehicles as the chief concern.

The main difference between robots that have gone before and the newer variety is autonomy. Whether by direct manipulation (as when we wield power tools, or grip the wheel of a car) or via remote control (as with a multitude of cars and airplanes), machines have in the past remained firmly under human control at all times. That’s no longer true, and now autonomous robots have even begun to look like us.

I got a good, long look at the future of robotics at an event run by the Defense Advanced Research Projects Agency (known as the DARPA Robotics Challenge, or DRC Trials), outside Miami in December. What I saw by turns delighted, amused, and spooked me. My overriding sense was that, very soon, DARPA’s work will shift the technological ground beneath our feet yet again.”

____________________

From “Hell on Earth”:

“It is hard to avoid the conclusion that Hitler got off easy, given the scope and viciousness of his crimes. We might have moved beyond the Code of Hammurabi and ‘an eye for an eye’, but most of us still feel that a killer of millions deserves something sterner than a quick and painless suicide. But does anyone ever deserve hell?

That used to be a question for theologians, but in the age of human enhancement, a new set of thinkers is taking it up. As biotech companies pour billions into life extension technologies, some have suggested that our cruelest criminals could be kept alive indefinitely, to serve sentences spanning millennia or longer. Even without life extension, private prison firms could one day develop drugs that make time pass more slowly, so that an inmate’s 10-year sentence feels like an eternity. One way or another, humans could soon be in a position to create an artificial hell.

At the University of Oxford, a team of scholars led by the philosopher Rebecca Roache has begun thinking about the ways futuristic technologies might transform punishment. In January, I spoke with Roache and her colleagues Anders Sandberg and Hannah Maslen about emotional enhancement, ‘supercrimes’, and the ethics of eternal damnation. What follows is a condensed and edited transcript of our conversation.

Ross Andersen:

Suppose we develop the ability to radically expand the human lifespan, so that people are regularly living for more than 500 years. Would that allow judges to fit punishments to crimes more precisely?

Rebecca Roache:

When I began researching this topic, I was thinking a lot about Daniel Pelka, a four-year-old boy who was starved and beaten to death [in 2012] by his mother and stepfather here in the UK. I had wondered whether the best way to achieve justice in cases like that was to prolong death as long as possible. Some crimes are so bad they require a really long period of punishment, and a lot of people seem to get out of that punishment by dying. And so I thought, why not make prison sentences for particularly odious criminals worse by extending their lives?

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

The life-extension scenario may sound futuristic, but if you look closely you can already see it in action, as people begin to live longer lives than before. If you look at the enormous prison population in the US, you find an astronomical number of elderly prisoners, including quite a few with pacemakers. When I went digging around in medical journals, I found all these interesting papers about the treatment of pacemaker patients in prison.”


Oxford philosopher Nick Bostrom, who thinks the sky may be fake or falling, has a new title, Superintelligence, which is about the Singularity, out later this year. An excerpt from Bookseller about it, and then a passage from a 2007 New York Times article in which I first encountered Bostrom’s version of our world as a computer simulation.

_________________________

From Bookseller:

“’Everyone wonders when we’ll create a machine that’s as smart as us—or maybe just a little bit smarter than us. What people can fail to realise is that’s not the end of the process, rather [it’s] the beginning,’ [Oxford University Press science publisher Keith] Mansfield commented. ‘The smart machine will be capable of improving itself, becoming smarter still. Very quickly, we may see an intelligence explosion, with humanity left far behind.’

Just as the fate of gorillas now depends more on humans than on gorillas themselves, so the fate of our species would come to depend on the actions of machine superintelligence, Bostrom’s book will argue.”

_________________________

From “Our Lives, Controlled From Some Guy’s Couch,” by John Tierney:

“Dr. Bostrom assumes that technological advances could produce a computer with more processing power than all the brains in the world, and that advanced humans, or ‘posthumans,’ could run ‘ancestor simulations’ of their evolutionary history by creating virtual worlds inhabited by virtual people with fully developed virtual nervous systems.

Some computer experts have projected, based on trends in processing power, that we will have such a computer by the middle of this century, but it doesn’t matter for Dr. Bostrom’s argument whether it takes 50 years or 5 million years. If civilization survived long enough to reach that stage, and if the posthumans were to run lots of simulations for research purposes or entertainment, then the number of virtual ancestors they created would be vastly greater than the number of real ancestors.

There would be no way for any of these ancestors to know for sure whether they were virtual or real, because the sights and feelings they’d experience would be indistinguishable. But since there would be so many more virtual ancestors, any individual could figure that the odds made it nearly certain that he or she was living in a virtual world.

The math and the logic are inexorable once you assume that lots of simulations are being run. But there are a couple of alternative hypotheses, as Dr. Bostrom points out. One is that civilization never attains the technology to run simulations (perhaps because it self-destructs before reaching that stage). The other hypothesis is that posthumans decide not to run the simulations.

‘This kind of posthuman might have other ways of having fun, like stimulating their pleasure centers directly,’ Dr. Bostrom says. ‘Maybe they wouldn’t need to do simulations for scientific reasons because they’d have better methodologies for understanding their past. It’s quite possible they would have moral prohibitions against simulating people, although the fact that something is immoral doesn’t mean it won’t happen.’

Dr. Bostrom doesn’t pretend to know which of these hypotheses is more likely, but he thinks none of them can be ruled out.”
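The ‘inexorable math’ Tierney mentions is just a proportion: if simulated minds vastly outnumber real ones, and the two are subjectively indistinguishable, your credence that you are simulated should equal the simulated fraction. A minimal sketch in Python (the population figures are hypothetical, chosen purely for illustration):

```python
# The proportion behind the simulation argument (illustrative numbers only).
def odds_of_being_simulated(real_population, n_simulations, population_per_sim):
    """Fraction of all observers who are simulated, assuming simulated
    and real experiences are indistinguishable from the inside."""
    simulated = n_simulations * population_per_sim
    return simulated / (simulated + real_population)

# If posthumans ran 1,000 ancestor simulations, each as populous as real
# history (say, 100 billion lives), a random observer is simulated with
# probability 1000/1001, i.e. roughly 0.999.
print(odds_of_being_simulated(100e9, 1_000, 100e9))  # ~0.999000999
```

The conclusion is exactly as conditional as Dr. Bostrom says: if no civilization ever runs such simulations, the simulated count is zero and the fraction collapses to zero with it.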


The opening of “Where Are They?”, philosopher Nick Bostrom’s 2008 essay explaining why the discovery of extraterrestrial life may spell doom for earthlings:

“When water was discovered on Mars, people got very excited. Where there is water, there may be life. Scientists are planning new missions to study the planet up close. NASA’s next Mars rover is scheduled to arrive in 2010. In the decade following, a Mars Sample Return mission might be launched, which would use robotic systems to collect samples of Martian rocks, soils, and atmosphere, and return them to Earth. We could then analyze the sample to see if it contains any traces of life, whether extinct or still active. Such a discovery would be of tremendous scientific significance. What could be more fascinating than discovering life that had evolved entirely independently of life here on Earth? Many people would also find it heartening to learn that we are not entirely alone in this vast cold cosmos.

But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.”

