“We Do Not Know How Superintelligences Would Behave”


If we’re lucky, Homo sapiens are not the living end.

If we snake through the Anthropocene, our species will accomplish some great things, perhaps even creating newer and more exciting species. They may be like us, but they won't be us, at least not in some essential ways. That could happen through bioengineering or space colonization. One way or another, machine superintelligence will likely be involved toward those ends, unless, of course, it pulls the plug on the process and starts one of its own. I believe it will be more merger than hostile takeover, but everything remains possible.

From a piece by Sidney Perkowitz in the Los Angeles Review of Books about Murray Shanahan's The Technological Singularity:

Shanahan argues that the obstacles to building such a brain are technological, not conceptual. A whole human brain is more than we can yet copy, but we can copy one a thousand times smaller. That is, we are on our way, because existing digital technology could simulate the 70 million neurons in a mouse brain. If we can also map those neurons, then, according to Shanahan, it is only a matter of time before we obtain a complete blueprint for an artificial mouse brain. Once that brain is built, Shanahan believes it would "kick-start progress toward human-level AI." We'd need to simulate billions of neurons, of course, and then qualitatively "improve" the mouse brain with refinements like modules for language, but Shanahan thinks we can do both through better technology for handling billions of digital elements and our rapidly advancing understanding of human cognition. To be sure, he recognizes that this argument relies on unspecified future breakthroughs.

But if we do manage to construct human-level AIs, Shanahan believes they would “almost inevitably” produce a next stage — namely, superintelligence — in part because an AI has big advantages over its biological counterpart. With no need to eat and sleep, it can operate nonstop; and, with its impulses transmitted electronically in nanoseconds rather than electrochemically in milliseconds, it can operate ultra-rapidly. Add the ability to expand and reproduce itself in silicon, and you have the seed of a scarily potent superintelligence.

Naturally, this raises fears of artificial masterminds generating a disruptive singularity. According to Shanahan, such fears are valid because we do not know how superintelligences would behave: “whether they will be friendly or hostile […] predictable or inscrutable […] whether conscious, capable of empathy or suffering.” This will depend on how they are constructed and the “reward function” that motivates them. Shanahan concedes that the chances of AIs turning monstrous are slim, but, because the stakes are so high, he believes we must consider the possibility.•
