Anders Sandberg


Here are 50 ungated pieces of wonderful journalism from 2015, alphabetized by author name, which made me consider something new or reconsider old beliefs or just delighted me. (Some selections are from gated publications that allow a number of free articles per month.) If your excellent work isn’t on the list, that’s more my fault than yours.

  • “Who Runs the Streets of New Orleans?” (David Amsden, The New York Times Magazine) As private and public sector missions increasingly overlap, here’s an engaging look at the privatization of some policing in the French Quarter.
  • “In the Beginning” (Ross Andersen, Aeon) A bold and epic essay about the elusive search for the origins of the universe.
  • Ask Me Anything (Anonymous, Reddit) A 92-year-old German woman who was born into Nazism (and participated in it) sadly absolves herself of all blame while answering questions about that horrible time.
  • “Rethinking Extinction” (Stewart Brand, Aeon) The Whole Earth Catalog founder thinks the chance of climate-change catastrophe overrated, arguing we should utilize biotech to repopulate dwindling species.
  • “Anchorman: The Legend of Don Lemon” (Taffy Brodesser-Akner, GQ) A deeply entertaining look into the perplexing facehole of Jeff Zucker’s most gormless word-sayer and, by extension, the larger cable-news zeitgeist.
  • “How Social Media Is Ruining Politics” (Nicholas Carr, Politico) A lament that our shiny new tools have provided provocative trolls far more credibility than a centralized media ever allowed for.
  • “Clans of the Cathode” (Tom Carson, The Baffler) One of our best culture critics looks at the meaning of various American sitcom families through the medium’s history.
  • “The Black Family in the Age of Mass Incarceration” (Ta-Nehisi Coates, The Atlantic) The author examines the tragedy of the African-American community being turned into a penal colony, explaining the origins of the catastrophic policy failure.
  • “Perfect Genetic Knowledge” (Dawn Field, Aeon) The essayist thinks about a future in which we’ve achieved “perfect knowledge” of whole-planet genetics.
  • “A Strangely Funny Russian Genius” (Ian Frazier, The New York Review of Books) Daniil Kharms was a very funny writer, if you appreciate slapstick that ends in a body count.
  • “Tomorrow’s Advance Man” (Tad Friend, The New Yorker) Profile of Silicon Valley strongman Marc Andreessen and his milieu, an enchanted land in which adults dream of riding unicorns.
  • “Build-a-Brain” (Michael Graziano, Aeon) The neuroscientist’s ambitious thought experiment about machine intelligence is a piece I thought about continuously throughout the year.
  • Ask Me Anything (Stephen Hawking, Reddit) Among other things, the physicist warns that the real threat of superintelligent machines isn’t malice but relentless competence.
  • “Engineering Humans for War” (Annie Jacobsen, The Atlantic) War is inhuman, it’s been said, and the Pentagon wants to make it more so by employing bleeding-edge biology and technology to create super soldiers.
  • “The Wrong Head” (Mike Jay, London Review of Books) A look at insanity in 1840s France, which demonstrates that mental illness is often expressed in terms of the era in which it’s experienced.
  • “Death Is Optional” (Daniel Kahneman and Yuval Noah Harari, Edge) Two of my favorite big thinkers discuss the road ahead, a highly automated tomorrow in which medicine, even mortality, may not be an egalitarian affair.
  • “Where the Bodies Are Buried” (Patrick Radden Keefe, The New Yorker) Ceasefires, even treaties, don’t completely conclude wars, as evidenced by this haunting revisitation of the heartbreaking IRA era.
  • “Porntopia” (Molly Lambert, Grantland) The annual Adult Video News Awards in Las Vegas, the Oscars of oral, allows the writer to look into a funhouse-mirror reflection of America.
  • “The Robots Are Coming” (John Lanchester, London Review of Books) A remarkably lucid explanation of how quickly AI may remake our lives and labor in the coming decades.
  • “Last Girl in Larchmont” (Emily Nussbaum, The New Yorker) The great TV critic provides a postmortem of Joan Rivers and her singular (and sometimes disquieting) brand of feminism.
  • “President Obama & Marilynne Robinson: A Conversation, Part 1 & Part 2” (Barack Obama and Marilynne Robinson, New York Review of Books) Two monumental Americans discuss the state of the novel and the state of the union.
  • Ask Me Anything (Elizabeth Parrish, Reddit) The CEO of BioViva announces she’s patient zero for the company’s experimental age-reversing gene therapies. Strangest thing I read all year.
  • “Why Alien Life Will Be Robotic” (Sir Martin Rees, Nautilus) The astronomer argues that ETs in our inhospitable universe have likely already transitioned into conscious machines.
  • Ask Me Anything (Anders Sandberg, Reddit) Heady conversation about existential risks, Transhumanism, economics, space travel and future technologies conducted by the Oxford researcher. 
  • “Alien Rights” (Lizzie Wade, Aeon) Manifest Destiny will, sooner or later, become a space odyssey. What ethics should govern exploration of the final frontier?
  • “Peeling Back the Layers of a Born Salesman’s Life” (Michael Wilson, The New York Times) The paper’s gifted crime writer pens a posthumous profile of a protean con man, a Zelig on the make who crossed paths with Abbie Hoffman, Otto Preminger and Annie Leibovitz, among others.
  • “The Pop Star and the Prophet” (Sam York, BBC Magazine) Philosopher Jacques Attali, who predicted, back in the ’70s, the downfall of the music business, tells the writer he now foresees similar turbulence for manufacturing.


Dr. Anders Sandberg of the Future of Humanity Institute at Oxford just did one of the best Reddit AMAs I’ve ever read, a brilliant back-and-forth with readers on existential risks, Transhumanism, economics, space travel, future technologies, etc. He speaks wisely of trying to predict the next global crisis: “It will likely not be anything we can point to before, since there are contingency plans. It will be something obvious in retrospect.”

The whole piece is recommended, and some exchanges are embedded below.

_________________________

Question:

Will we start creating new species of animals (and plants, fungi, and microbes) any time soon?

What about fertilizing the oceans? Will we turn vast areas of ocean into monoculture like a corn field or a wood-pulp plantation?

When will substantial numbers of people live anywhere other than Earth? Where will it be?

What will we do about climate change?

Dr. Anders Sandberg:

I think we are already making new species, although releasing them into nature is frowned upon.

Ocean fertilization might be a way of binding carbon and getting good “ocean agriculture”, but the ecological price might be pretty big. Just consider how land monocultures squeeze out biodiversity. But if we needed to (say to feed a trillion population), we could.

I think we need to really lower the cost to orbit (beanstalks, anyone?) for mass emigration. Otherwise I expect the first real space colonists to be more uploads and robots than biological humans.

I think we will muddle through climate: technological innovations make us more green, but not before a lot of change will happen – which people will also get used to.

_________________________

Question:

What augmentations, if any, do you plan on getting?

Dr. Anders Sandberg:

I have long wanted to get a magnetic implant to sense magnetic fields, but since I want to be able to get close to MRI machines I have held off.

I think the first augmentations will be health related or sensory enhancement gene therapy – I would love to see ultraviolet and infrared. But life extension is likely the key area, which might involve gene therapy and implanting modified stem cells.

Further down the line I want to have implants in my hypothalamus so I can access my body’s “preferences menu” and change things like weight setpoint or manage pain. I am a bit scared of implants in the motivation system to help me manage my behavior, but it might be useful. And of course, a good neural link to my exoself of computers and gadgets would be useful – especially if it could allow me to run software-supported simulations in my mental workspace.

In the long run I hope to just make my body as flexible and modifiable as possible, although no doubt it would normally tend to be set to something like “idealized standard self”.

It is hard to tell which augmentations will arrive when. But I think going for general purpose goods – health, intelligence, the ability to control oneself – is a good heuristic for what to aim for.

_________________________

Question:

What major crises can we expect in the next few years? What is the world going to be like by 2025?

Dr. Anders Sandberg:

I am more of a long-term guy, so it might be better to ask the people at the World Economic Forum risk report (where I am on the advisory board): http://www.weforum.org/reports/global-risks-report-2015

One group of things is economic troubles – they are safe bets before 2025, since they happen every few years, but most are not major crises. Expect some asset bubbles or deflation in a major economy, energy price shocks, failure of a major financial mechanism or institution, fiscal crises, and/or some critical infrastructure failures.

Similarly there will be at least some extreme weather or natural disaster events that cause a nasty surprise (think Katrina or the Tohoku earthquake) – such things happen all the time, but the amount of valuable or critical stuff in the world is going up, and we are affected more and more systemically (think hard drive prices after the Thai floods – all the companies were located on the same flood plain). I would be more surprised by any major biodiversity loss or ecosystem collapse, but the oceans are certainly not looking good. Even with the scariest climate scenarios things in 2025 are not that different from now.

What to look out for is interstate conflicts with global consequences. We have never seen a “real” cyber war: maybe it is overhyped, maybe we underestimate the consequences (think something like the DARPA cyber challenge as persistent, adapting malware everywhere). Big conflicts are unfortunately not impossible, and we still have lots of nukes in the world. WMD proliferation looks worryingly doable.

If I were to make a scenario for a major crisis it would be something like a systemic global issue like the oil price causing widespread trouble in some unstable regions (think of past oil-food interactions triggering unrest leading to the Arab Spring, or Russia being under pressure now due to cheap oil), which spills over into some actual conflict that has long-range effects getting out of hand (say the release of nasty bio- or cyberweapons). But it will likely not be anything we can point to before, since there are contingency plans. It will be something obvious in retrospect.

And then we will dust ourselves off, swear to never let that happen again, and half forget it.

_________________________

Question:

As I understand it, regarding existential risk and our survival as a species, most if not all discussion has to happen under the umbrella of “if we don’t kill ourselves off first.” Surely, as a man who thinks so far ahead, you must have some hope that catastrophic self-inflicted harm won’t spell the end of our race, or at least that it won’t put us back irrevocably far technologically. In your estimation, what are the immediate self-inflicted harms we face, and will we have the capacity to face them when their destructive effects manifest? Will the climate change to the point of poisoning our planet, will uncontrolled pollution destroy our global ecology in some other way, will nuclear blasts destroy all but the cockroaches and bacteria on the planet? It seems to me that we needn’t think too far to see one of these scenarios come to pass if we don’t present a globally concerted effort to intervene.

Dr. Anders Sandberg:

I think climate change, like ecological depletion or poisons, is unlikely to spell radical disaster (still, there is enough of a tail to the climate change distribution to care about the extreme cases). But such problems can make the world much worse to live in, and cause strains in the global social fabric that make other risks more likely.

Nuclear war is still a risk with us. And nuclear winters are potential giga-killers; we just don’t know whether they are very likely or not, because of model uncertainty. I think the probability is way higher than most people think (because of both Bayesian estimation and observer selection effects).

I think bioengineered pandemics are also a potential stumbling block. There may not be many omnicidal maniacs, but the gain-of-function experiments show that well-meaning researchers can make potentially lethal pathogens, and the recent distribution of anthrax by the US military shows that amazingly stupid mistakes do happen with alarming regularity.

See also: https://theconversation.com/the-five-biggest-threats-to-human-existence-27053

_________________________

Question:

I have trouble imagining how our current economic structure could cope with tens of millions of driver/taxi/delivery jobs going.

The economic domino effect: the inability to pay debts/mortgages, the loss of secondary jobs they were supporting, a fall in demand for goods, etc.

It seems like the world never really got back to “normal” (whatever that is anymore in the 21st century) after the 2008 financial crisis, and never will.

I’m an optimist by nature, and I’m sure we will segue and transition into something we probably haven’t even imagined yet.

But it’s very hard to imagine our current hands-off, laissez-faire style of economy functioning in the 2020s in the face of so much unemployment.

Dr. Anders Sandberg:

Back in the 19th century it would have seemed absurd that the economy could absorb all those farmers. But historical examples may be misleading: the structure of the economy changes.

In many ways laissez-faire economics works perfectly fine in the super-unemployed scenario: we just form an internal economy, less effective than the official one sailing off into the stratosphere, and repeat the process (the problem might be if property rights make it impossible to freely set up a side economy). But clearly there is a lot of human capital wasted in this scenario.

Some people almost reflexively suggest a basic income guarantee as the remedy to an increasingly automated economy. I think we need to think much more creatively about other solutions; the BIG is just one possibility (and might not even be feasible in many nations).

_________________________

Question:

What is the most defining characteristic of transhumanism as an idea in the 10s compared with the 00s?

Dr. Anders Sandberg:

Back when I started in the 90s we were all early-Wired style tech enthusiasts. The future was coming, and it was all full of cyber! Very optimistic, very much based on the idea that if we could just organise better and convince society that transhumanism was a good idea, then we would win.

By the 00s we had learned that just having organisations does not mean your ideas get taken seriously. Although they were actually taken seriously to a far greater extent: the criticism from Fukuyama and others actually forced a very healthy debate about the ethics and feasibility of transhumanism. Also, the optimism had become tempered post-dotcom, post-9/11: progress is happening, but much more unevenly and slowly than we may have hoped for. It was by this point that the existential risk and AI safety strands came into their own.

Transhumanism in the 10s? Right now I think the cool thing is the posttranshumanist movements like the rationalists and the effective altruists: in many ways full of transhumanist ideas, yet not beholden to always proclaiming their transhumanism. We have also become part of institutions, and there are people that grew up with transhumanism who are now senior enough to fund things, make startups or become philanthropists.

_________________________

Question:

Which do you think is more important for the future of humanity: the exploration of outer space (planets, stars, galaxies, etc.) or the exploration of inner space (consciousness, intelligence, self, etc.)?

Dr. Anders Sandberg:

Both, but in different ways. Exploration of outer space is necessary for long term survival. Exploration of inner space is what may improve us.

Question:

What step would you take first? Would you first discover “everything” or as much as possible about inner space, or outer space?

Dr. Anders Sandberg:

I suspect safety first: getting off-planet is a good start. But one approach does not preclude working on the other at the same time.•



I think we’re pretty much done for without superintelligence, though it could also do us in. It’s just a gamble we’ll have to take. From Anders Sandberg’s Guardian article about doomsday scenarios, “The Five Biggest Threats to Human Existence”:

Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.

The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence itself will make something behave nicely and morally. In fact, it is possible to prove that certain types of superintelligent systems would not obey moral rules even if they were true.

Even more worrying is that in trying to explain things to an artificial intelligence we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do that we might not understand all the implications of what we wish for.

Software-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance.

It has been proposed that an “intelligence explosion” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set.•


In time, we’ll all be enhanced–and not just our bodies. It won’t be cheating but improvement. Necessary even, for survival. From Anders Sandberg’s new Practical Ethics post about Transhumanism and performance enhancement:

“If we were to make a choice behind a veil of ignorance between a world where there was more talent to go around and a world with less talent, it seems that the reasonable choice is to choose the world of talent. We would probably also want to choose a world where talent was more equally distributed than one where it was less equal. But even the less talented people in a talented but unequal world could benefit from the greater prosperity and creativity.

In practice talent needs plenty of help to develop: without support and good teachers innate potential is unlikely to matter. So the ability to help kids develop their potential (and help them overcome their less able sides) is important for actualizing that talent. Without it none of the above worlds would be preferable. But figuring out how to cultivate and stimulate kids is hard. Hence, any information that could help do this better would be welcome.

So my basic stance is that if genetic information could personalize education well, go for it!

But… I am less convinced than the geneticists that we can actually do it, at least in the near future.”


We worried for a long time about someone pushing the button, dropping the big one, ending the whole thing. But what if the buttons are pushing themselves? What if there are no buttons? From Anders Sandberg at Practical Ethics:

“Can we run warfare without anybody being responsible? I do not claim to understand just war theory or the other doctrines of ethics of war. But as a computer scientist I do understand the risks of relying on systems that (1) nobody is truly responsible for, and (2) cannot be properly investigated and corrected. Since presumably the internal software will be secret (because much of the military utility of autonomous systems will likely be due to their “smarts”), outside access or testing will be limited. The behavior of complex autonomous systems in contact with the real world can also be fundamentally unpredictable, which means that even perfectly self-documenting machines may not give us useful information to prevent future misbehaviors.

Getting redress against a ‘mistake’ appears far harder in the case of a drone killing a group of civilians than one committed by a gunship crew; if the mistake was due to an autonomous system, it is likely that the threshold will be even higher. Even from a pragmatic perspective of creating disincentives for sloppy warfare, the remote and diffused responsibility insulates the prosecuting state. In fact, we are perhaps obsessing too much about the robot part and too little about the extrajudicial part of heavily automated modern warfare.”
