Murray Shanahan



Will machines eventually grow intelligent enough to eliminate humans? One can only hope so. I mean, have you watched the orange-headed man discuss the length of his ding-dong at the GOP debates?

In all seriousness, I think the discussion about humans vs. machines is inherently flawed. It supposes that Homo sapiens as we know them will endlessly be the standard. Really unlikely. If our Anthropocene sins don’t doom us, we’ll likely have the opportunity to engineer a good part of our evolution, whether it’s here on Earth or in space. (Alien environments we try to inhabit will also change the nature of what we are.) Ultimately, it will be a contest between Humans 2.0 and Strong AI, though the two factions may reach détente and merge.

For the time being, really smart researchers teach computers to teach themselves, having them use Deep Learning to master Pac-Man and such, speeding the future here a “quarter” at a time. From an article about the contemporary London AI scene by Rob Davies in the Guardian:

Murray Shanahan, professor of cognitive robotics at Imperial, believes that while we should be thinking hard about the moral and ethical ramifications of AI, computers are still decades away from developing the sort of abilities they’d need to enslave or eliminate humankind and bring Hawking’s worst fears to reality. One reason for this is that while early artificial intelligence systems can learn, they do so only falteringly.

For instance, a human who picks up one bottle of water will have a good idea of how to pick up others of different shapes and sizes. But a humanoid robot using an AI system would need a huge amount of data about every bottle on the market. Without that, it would achieve little more than getting the floor wet.

Using video games as their testing ground, Shanahan and his students want to develop systems that don’t rely on the exhaustive and time-consuming process of elimination – for instance, going through every iteration of lifting a water bottle in order to perfect the action – to improve their understanding.

They are building on techniques used in the development of DeepMind, the British AI startup sold to Google in 2014 for a reported £400m. DeepMind’s systems were also developed using computer games, which they eventually learned to play to a “superhuman” level, and DeepMind programs are now able to play – and defeat – professional players of the Chinese board game Go.

Shanahan believes the research of his students will help create systems that are even smarter than DeepMind.•



If we’re lucky, Homo sapiens are not the living end.

If we snake through the Anthropocene, our species will accomplish some great things, perhaps even creating newer and more exciting species. They may be like us, but they won’t be us, not in some essential ways. That could happen through bioengineering or space colonization. One way or another, machine superintelligence will likely be involved toward those ends, unless, of course, it pulls the plug on the process and starts one of its own. I believe it will be more merger than hostile takeover, but everything remains possible.

From a piece by Sidney Perkowitz in the Los Angeles Review of Books about Murray Shanahan’s The Technological Singularity: 

Shanahan argues that the obstacles to building such a brain are technological, not conceptual. A whole human brain is more than we can yet copy, but we can copy one a thousand times smaller. That is, we are on our way, because existing digital technology could simulate the 70 million neurons in a mouse brain. If we can also map these neurons, then, according to Shanahan, it is only a matter of time before we can obtain a complete blueprint for an artificial mouse brain. Once that brain is built, Shanahan believes it would “kick-start progress toward human-level AI.” We’d need to simulate billions of neurons of course, and then qualitatively “improve” the mouse brain with refinements like modules for language, but Shanahan thinks we can do both through better technology that deals with billions of digital elements and our rapidly advancing understanding of the workings of human cognition. To be sure, he recognizes that this argument relies on unspecified future breakthroughs.

But if we do manage to construct human-level AIs, Shanahan believes they would “almost inevitably” produce a next stage — namely, superintelligence — in part because an AI has big advantages over its biological counterpart. With no need to eat and sleep, it can operate nonstop; and, with its impulses transmitted electronically in nanoseconds rather than electrochemically in milliseconds, it can operate ultra-rapidly. Add the ability to expand and reproduce itself in silicon, and you have the seed of a scarily potent superintelligence.

Naturally, this raises fears of artificial masterminds generating a disruptive singularity. According to Shanahan, such fears are valid because we do not know how superintelligences would behave: “whether they will be friendly or hostile […] predictable or inscrutable […] whether conscious, capable of empathy or suffering.” This will depend on how they are constructed and the “reward function” that motivates them. Shanahan concedes that the chances of AIs turning monstrous are slim, but, because the stakes are so high, he believes we must consider the possibility.•
