Demis Hassabis

Can’t say I’m unduly focused on superintelligence posing an existential threat to our species in the immediate future, especially since so-called Weak AI is already here and enabling its own alarming possibilities: ubiquitous surveillance, attenuated democracy and a social fabric strained by disappearing jobs. We may very well require these remarkably powerful tools to survive tomorrow’s challenges, but we’d be walking blind if we didn’t accept that they come with serious downsides.

Deep Learning will be particularly tricky, precisely because it’s an opaque method that doesn’t let us see how it makes its leaps and gains. Demis Hassabis, the brilliant DeepMind founder and the field’s most famous practitioner, has acknowledged being “pretty shocked,” for instance, by AlphaGo’s unpredictable gambits during last year’s demolition of Lee Sedol. Hassabis, who has sometimes compared his company to the Manhattan Project (in scope and ambition if not in impact), has touted AI’s potentially ginormous near-term benefits, but tomorrow isn’t all that’s in play. The day after also matters.

The neuroscientist is fairly certain we’ll have Artificial General Intelligence inside a century and is resolutely optimistic about carbon and silicon achieving harmonic convergence. Similarly sanguine on the topic these days is Garry Kasparov, the Digital Age John Henry who was too dour about computer intelligence at first and now might be too hopeful. The human-machine tandem he foresees may be just a passing fancy before a conscious uncoupling. By then, we’ll probably have built a reality we can’t survive without the constant support of our smart machines.

Hassabis, once a child chess prodigy himself, wrote a Nature review of Kasparov’s new book, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins. (I’m picking up the title tomorrow, so I’ll write more on it later.) An excerpt:

Chess engines have also given rise to exciting variants of play. In 1998, Kasparov introduced ‘Advanced Chess’, in which human–computer teams merge the calculation abilities of machines with a person’s pattern-matching insights. Kasparov’s embrace of the technology that defeated him shows how computers can inspire, rather than obviate, human creativity.

In Deep Thinking, Kasparov also delves into the renaissance of machine learning, an AI subdomain focusing on general-purpose algorithms that learn from data. He highlights the radical differences between Deep Blue and AlphaGo, a learning algorithm created by my company DeepMind to play the massively complex game of Go. Last year, AlphaGo defeated Lee Sedol, widely hailed as the greatest player of the past decade. Whereas Deep Blue followed instructions carefully honed by a crack team of engineers and chess professionals, AlphaGo played against itself repeatedly, learning from its mistakes and developing novel strategies. Several of its moves against Lee had never been seen in human games — most notably move 37 in game 2, which upended centuries of traditional Go wisdom by playing on the fifth line early in the game.

Most excitingly, because its learning algorithms can be generalized, AlphaGo holds promise far beyond the game for which it was created. Kasparov relishes this potential, discussing applications from machine translation to automated medical diagnoses. AI will not replace humans, he argues, but will enlighten and enrich us, much as chess engines did 20 years ago. His position is especially notable coming from someone who would have every reason to be bitter about AI’s advances.•
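The self-play regime the review describes — “AlphaGo played against itself repeatedly, learning from its mistakes” — can be illustrated in miniature. Below is a hypothetical sketch using tabular value learning on Nim, a far simpler game; the real system couples deep neural networks with tree search, but the improve-by-playing-yourself loop is the same in spirit.

```python
import random

# Toy self-play: tabular value learning on Nim (players alternately take
# 1-3 stones from a pile; whoever takes the last stone wins). One shared
# value table plays both sides and improves from its own games.

ACTIONS = (1, 2, 3)
Q = {}  # (stones_left, action) -> estimated value for the player to move

def choose(stones, eps):
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)                            # explore
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))   # exploit

def self_play_episode(eps=0.2, alpha=0.1):
    stones, history = 15, []
    while stones > 0:
        a = choose(stones, eps)
        history.append((stones, a))
        stones -= a
    # The player who took the last stone wins; walk the game backward,
    # crediting +1 to the winner's moves and -1 to the loser's.
    value = 1.0
    for state_action in reversed(history):
        old = Q.get(state_action, 0.0)
        Q[state_action] = old + alpha * (value - old)
        value = -value

random.seed(0)
for _ in range(50000):
    self_play_episode()

# Game theory: from 15 stones the winning move is to take 3, leaving the
# opponent a multiple of 4.
print(max(ACTIONS, key=lambda a: Q[(15, a)]))
```

With enough games, the table rediscovers the multiples-of-four strategy without ever being told it — a tiny echo of the move-37 phenomenon.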

Two quainter examples of technology crossing wires with chess:

In 1989, Kasparov, in London, played a remote match via telephone with David Letterman.

In 1965, Bobby Fischer, in NYC, played via Teletype in a chess tournament in Havana.


In the adrenaline rush to create a mind-blowing new technology (and profit from it directly or indirectly), ethical questions can be lost in an institutional fog and in competition among companies and countries. Richard Feynman certainly felt he’d misplaced his moral compass in just such a way during the Manhattan Project. 

The attempt to create Artificial General Intelligence is something of a Manhattan Project for the mind, and while the point is the opposite of destruction, some believe that even if it doesn’t end humans with a bang, AGI may lead our species to a whimpering end. The main difference today is those working on such projects seem keenly aware of the dangers that may arise while we’re harnessing the power of these incredible tools. That doesn’t mean the future is assured–there’ll be twists and turns we can’t yet imagine–but it’s a hopeful sign.

Bloomberg Neural Net reporter Jack Clark conducted a smart Q&A with DeepMind CEO Demis Hassabis, discussing not only where his work fits into the scheme of Alphabet but also the larger implications of superintelligence. An excerpt:


Jack Clark:

You’ve said it could be decades before you’ve truly developed artificial general intelligence. Do you think it will happen within your lifetime?

Demis Hassabis:

Well, it depends on how much sleep deprivation I keep getting, I think, because I’m sure that’s not good for your health. So I am a little bit worried about that. I think it’s many decades away for full AI. I think it’s feasible. It could be done within our natural lifetimes, but it may be it’s the next generation. It depends. I’d be surprised if it took more than, let’s say, 100 years.


Jack Clark:

So once you’ve created a general intelligence, after having drunk the Champagne or whatever you do to celebrate, do you retire?

Demis Hassabis:

No. No, because …


Jack Clark:

You want to study science?

Demis Hassabis:

Yeah, that’s right. That’s what I really want to build the AI for. That’s what I’ve always dreamed about doing. That’s why I’ve been working on AI my whole life: I see it as the fastest way to make amazing progress in science.


Jack Clark:

Say you succeed and create a superintelligence. What happens next? Do you donate the technology to the United Nations?

Demis Hassabis:

I think it should be. We’ve talked about this a lot. Actually Eric Schmidt [executive chairman of Alphabet, Google’s parent] has mentioned this. We’ve talked to him. We think that AI has to be used for the benefit of everyone. It should be used in a transparent way, and we should build it in an open way, which we’ve been doing with publishing everything we write. There should be scrutiny and checks and balances on that.

I think ultimately the control of this technology should belong to the world, and we need to think about how that’s done. Certainly, I think the benefits of it should accrue to everyone. Again, there are some very tricky questions there and difficult things to go through, but certainly that’s our belief of where things should go.•



Demis Hassabis’ DeepMind was 20 years in the making, and its trouncing of Go champion Lee Se-dol was a summit but still only prelude. “I think for perfect information games, Go is the pinnacle,” Hassabis has said, though the greater goal is to redirect these AI advances toward healthcare, virtual assistants, robotics, etc.

After the Google AI’s Game 1 triumph, Hassabis sat down with Sam Byford of the Verge for an interview, which is very much worth reading. Perhaps most interesting: even the developers didn’t know exactly where AlphaGo was going, which is promising and a little worrisome.

An excerpt:

Sam Byford:

So for someone who doesn’t know a lot about AI or Go, how would you characterize the cultural resonance of what happened yesterday?

Demis Hassabis:

There are several things I’d say about that. Go has always been the pinnacle of perfect information games. It’s way more complicated than chess in terms of possibility, so it’s always been a bit of a holy grail or grand challenge for AI research, especially since Deep Blue. And you know, we hadn’t got that far with it, even though there’d been a lot of efforts. Monte Carlo tree search was a big innovation ten years ago, but I think what we’ve done with AlphaGo is introduce with the neural networks this aspect of intuition, if you want to call it that, and that’s really the thing that separates out top Go players: their intuition. I was quite surprised that even on the live commentary Michael Redmond was having difficulty counting out the game, and he’s a 9-dan pro! And that just shows you how hard it is to write a valuation function for Go.

Sam Byford:

Were you surprised by any of the specific moves that you saw AlphaGo play?

Demis Hassabis:

Yeah. We were pretty shocked — and I think Lee Se-dol was too, from his facial expression — by the one where AlphaGo waded into the left deep into Lee’s territory. I think that was quite an unexpected move.

Sam Byford:

Because of the aggression?

Demis Hassabis:

Well, the aggression and the audacity! Also, it played Lee Se-dol at his own game. He’s famed for creative fighting and that’s what he delivered, and we were sort of expecting something like that. The beginning of the game he just started fights across the whole board with nothing really settled. And traditionally Go programs are very poor at that kind of game. They’re not bad at local calculations but they’re quite poor when you need whole board vision.•
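Hassabis cites Monte Carlo tree search as the big pre-AlphaGo innovation. Its core loop — select, expand, simulate, back up — fits in a page. Below is a minimal UCT sketch on a hypothetical toy subtraction game (take 1 or 2 counters; whoever takes the last one wins), a stand-in for Go; AlphaGo augments this same loop with neural networks that supply the “intuition” he describes.

```python
import math
import random

# Bare-bones UCT Monte Carlo tree search on a subtraction game:
# players alternately take 1 or 2 counters; taking the last counter wins.

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones, self.parent, self.move = stones, parent, move
        self.children, self.visits, self.wins = [], 0, 0.0

    def untried_moves(self):
        tried = {c.move for c in self.children}
        return [m for m in (1, 2) if m <= self.stones and m not in tried]

def uct_select(node, c=1.4):
    # Upper-confidence bound: exploit high win rates, explore rarely tried moves.
    return max(node.children, key=lambda ch:
               ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones):
    # Random playout to the end of the game; returns how many moves it took.
    moves = 0
    while stones > 0:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        moves += 1
    return moves

def mcts(root_stones, iters=4000):
    root = Node(root_stones)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried_moves() and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried child, if any.
        if node.untried_moves():
            m = random.choice(node.untried_moves())
            child = Node(node.stones - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new node.
        extra = rollout(node.stones)
        # 4. Backpropagation: the player who moved into `node` won iff the
        #    remaining playout had an even number of moves.
        win_for_mover = (extra % 2 == 0)
        while node.parent is not None:
            node.visits += 1
            node.wins += 1.0 if win_for_mover else 0.0
            win_for_mover = not win_for_mover
            node = node.parent
        node.visits += 1  # count the visit at the root
    return max(root.children, key=lambda ch: ch.visits).move

random.seed(1)
# Game theory: from 10 counters, the winning move leaves a multiple of 3.
print(mcts(10))
```

The random-rollout step is exactly the part that fails to scale to Go’s whole-board subtleties, which is why Michael Redmond’s difficulty “counting out the game” matters: a good valuation function is the hard part, and it’s what AlphaGo’s value network learned.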


Computer scientists long labored (and, ultimately, successfully) to make machines superior at backgammon and chess, but they had to teach that AI the rules and moves, engineering the brute force behind its play. Not so with Google’s new game-playing computer, which can muster a mean game of Space Invaders or Breakout with no coaching. It’s Deep Learning currently focused on retro pastimes, but soon enough it will be serious business. From Rebecca Jacobson at PBS NewsHour:

This isn’t the first game-playing A.I. program. IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov in 1997. In 2011, an artificial intelligence computer system named Watson won a game of Jeopardy against champions Ken Jennings and Brad Rutter.

Watson and Deep Blue were great achievements, but those computers were loaded with all the chess moves and trivia knowledge they could handle, [Demis] Hassabis said in a news conference Tuesday. Essentially, they were trained, he explained.

But in this experiment, designers didn’t tell DQN how to win the games. They didn’t even tell it how to play or what the rules were, Hassabis said.

“(Deep Q-network) learns how to play from the ground up,” Hassabis said. “The idea is that these types of systems are more human-like in the way they learn. Our brains make models that allow us to learn and navigate the world. That’s exactly the type of system we’re trying to design here.”•
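Hassabis’ “learns how to play from the ground up” can be made concrete with the simplest member of the same family: tabular Q-learning, sketched below on a hypothetical one-dimensional corridor. The agent is never told the rules — it only sees a state index, picks actions, and collects reward — which is the same contract DQN has with the Atari screen (DQN swaps the lookup table for a deep network and adds machinery such as experience replay).

```python
import random

# Tabular Q-learning on a corridor of 8 cells with a reward at the far
# right. The agent knows nothing about the task up front: it observes a
# state, acts, and receives a reward, learning entirely from experience.

N_STATES, GOAL = 8, 7            # corridor cells 0..7, reward at cell 7
ACTIONS = (-1, +1)               # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
alpha, gamma, eps = 0.5, 0.9, 0.1
for _ in range(500):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        nxt, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward the reward plus the
        # discounted value of the best action from the next state.
        target = r + (0.0 if done else gamma * max(Q[(nxt, b)] for b in ACTIONS))
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = nxt

# The learned policy should point right (toward the goal) in every cell.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

Nothing in the code names the goal or the layout; the rightward policy emerges purely from trial, error, and reward — the “ground up” learning Hassabis contrasts with Deep Blue’s hand-loaded chess knowledge.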


Demis Hassabis, the Google Deep Learning expert recently interviewed by Steven Levy, is also queried by Murad Ahmed in the Financial Times. He argues what I suspect to be true: machine consciousness isn’t anywhere on the horizon, though it’s not theoretically impossible. An excerpt:

A modern polymath, the 38-year-old’s career has already included spells as a child chess prodigy, master computer programmer, video games designer and neuroscientist. Four years ago, these experiences led him to start DeepMind, an AI company that, he says, has the aim of making “machines smart.”

For some, this is a utopic idea — a world aided by super-smart digital assistants working to solve humanity’s most pressing problems, from disease to climate change. Others warn of a grim Armageddon, with cognisant robots becoming all too aware of human limitations, then moving to crush their dumb creators without emotion.

Hassabis, wearing a figure-hugging black top and dark-rimmed glasses, blends in at Hakkasan, where the decor is mostly black and the lighting minimal. He tells me he knows the place well — it’s where he took executives from Google, during a series of meetings that led to the search giant paying £400m for his fledgling company a year ago. Google is betting Hassabis may be able to unlock the secrets of the mind.

“It’s quite possible there are unique things about humans,” he argues. “But, in terms of intelligence, it doesn’t seem likely. With the brain, there isn’t anything non-computable.” In other words, the brain is a computer like any other and can, therefore, be recreated. Traits previously considered innate to humans — imagination, creativity, even consciousness — may just be the equivalent of software programs. …

Hassabis argues that we’re getting ahead of ourselves. “It’s very, very far in the future from the kinds of things we’re currently dealing with, which is playing Pong on Atari,” he says. “I think the next four, five, 10 years, we’ll have a lot more information about what these systems do, what kind of computations they’re creating, how to specify the right goals. At the moment, these are science fiction stories. Yes, there’s no doubt that AI is going to be a hugely powerful technology. That’s why I work on it. It has the power to provide incredible advances for humanity.”

Too soon then, to be worrying about how to wage war with a sentient robot army? “In our research programme, there isn’t anything that says ‘program consciousness,’ ” he says.•


There’s a line near the end of 1973’s Westworld, after things have gone haywire, that speaks to concerns about Deep Learning. A technician, who’s asked why the AI has run amok and how order can be restored, answers: “They’ve been designed by other computers…we don’t know exactly how they work.”

At Google, search has never been the point. It’s been an AI company from the start, Roomba-ing up information to deploy in myriad automated ways. Deep Learning is clearly a large part of that ultimate search. On that topic, Steven Levy conducted a Backchannel interview with Demis Hassabis, the company’s Vice President of Engineering for AI projects and a brilliant computer-game designer in his own right. For now, it’s all just games. An excerpt:

Steven Levy:

I imagine that the more we learn about the brain, the better we can create a machine approach to intelligence.

Demis Hassabis:

Yes. The exciting thing about these learning algorithms is they are kind of meta level. We’re imbuing it with the ability to learn for itself from experience, just like a human would do, and therefore it can do other stuff that maybe we don’t know how to program. It’s exciting to see that when it comes up with a new strategy in an Atari game that the programmers didn’t know about. Of course you need amazing programmers and researchers, like the ones we have here, to actually build the brain-like architecture that can do the learning.

Steven Levy:

In other words, we need massive human intelligence to build these systems but then we’ll —

Demis Hassabis:

… build the systems to master the more pedestrian or narrow tasks like playing chess. We won’t program a Go program. We’ll have a program that can play chess and Go and Crosses and Drafts and any of these board games, rather than reprogramming every time. That’s going to save an incredible amount of time. Also, we’re interested in algorithms that can use their learning from one domain and apply that knowledge to a new domain. As humans, if I show you some new board game or some new task or new card game, you don’t start from zero. If you know to play bridge and whist and whatever, I could invent a new card game for you, and you wouldn’t be starting from scratch—you would be bringing to bear this idea of suits and the knowledge that a higher card beats a lower card. This is all transferable information no matter what the card game is.•
