Pedro Domingos’ book The Master Algorithm takes on many issues regarding machine learning, but as the title implies, it wonders chiefly about the possibility of a unified theory enabling an ultimate learning machine, which, the author recently told Russ Roberts of EconTalk, could perhaps figure out as much as 80% of any problem posed. Can’t say I’m expecting its development in my lifetime.
In one section of the interview, there’s a technical and philosophical exchange between host and guest about creating infantile robots that can grow and learn experientially as human babies do–gradually, with small steps becoming giant leaps. Two points about this section:
- I believe Domingos is right to say that philosophers who believe “standard models of biology, chemistry, and physics cannot explain human consciousness” are getting ahead of themselves. No one should be shocked if the keys to consciousness are located via knowledge developed within current frameworks. I think that’s actually the likely outcome. We’re not at some sort of “end of science” moment.
- Machines could theoretically someday possess the type of complicated emotions humans have, or maybe they won’t. It may not matter for some practical purposes. After all, a plane can fly without being a bird. Roberts’ consternation about a sort of robot consciousness sans emotions seems like a visceral and romantic concern on his part, but such a scenario could have profound implications. Not to say that emotions are a fail-safe against destruction–sometimes they get the best of us–but it does seem they’re essential in the long term to truly complex growth, though it’s impossible (for now) to be sure.
So, I’m going to read a somewhat lengthy paragraph that charmed me, from the book. And then I want to ask you a philosophical question about it. So here’s the passage:
If you’re a parent, the entire mystery of learning unfolds before your eyes in the first three years of your child’s life. A newborn baby can’t talk, walk, recognize objects, or even understand that an object continues to exist when the baby isn’t looking at it. But month after month, in steps large and small, by trial and error, great conceptual leaps, the child figures out how the world works, how people behave, how to communicate. By a child’s third birthday all this learning has coalesced into a stable self, a stream of consciousness that will continue throughout life. Older children and adults can time-travel–aka remember things past, but only so far back. If we could revisit ourselves as infants and toddlers and see the world again through those newborn eyes, much of what puzzles us about learning–even about existence itself–would suddenly seem obvious. But as it is, the greatest mystery in the universe is not how it begins or ends, or what infinitesimal threads it’s woven from. It’s what goes on in the small child’s mind–how a pound of gray jelly can grow into the seat of consciousness.
So, I thought that was very beautiful. And then you imagined something called Robby the Robot, that would somehow simulate the experience and learn from, in the same way a child learns. So, talk about how Robby the Robot might work; and then I’ll ask my philosophical question.
Yes. So, there are several approaches to solving the problem of [?]. So, how can we create robots and computers that are as intelligent as people? And, you know, one of them, for example, is to mimic evolution. Another one is to just build a big knowledge base. But in some ways the most intriguing one is this idea of building a robot baby. Right? It’s the existence proof of intelligence that we have as human beings–in fact, if we didn’t have that we wouldn’t even be trying for this. So, the idea is–so the path, one possible path to (AI) artificial intelligence, and the only one that we know is guaranteed to work, right? Is to actually have a real being in the real world learning from experience in the same way that a baby does. And so the idea of the robot baby is–let’s just create something that has a brain–but it doesn’t have to be at the level of neurons, it’s just at the level of capabilities–that has the same capabilities that the brain, that the mind, if you will, of a newborn baby has. And if it does have those capabilities and then we give it the same experience that a newborn baby has, then two or three years later we will have solved the problem. So, that’s the promise of this approach.
So, the thought, the philosophical thought that I had as I was down in the basement the other day with my wife: we were sorting through boxes of stuff that we don’t look at except once a year, when we go down to the basement and decide what to throw out and what to keep. And one of the boxes that we keep, even though we never examine it except on that yearly trip through the boxes, is a box of stuffed animals that our children had when they were babies. And we just–we don’t want to throw it out. I don’t know if our kids will ever want to use them with their children–if they have children; we don’t have any grandchildren, but I think we imagine the possibility that they would be used again. But I think something else is going on there. And if our children were in the basement with us, going through that, and they saw the animal or the stuffed item that they had when they were, say, 2 and a half or 3 years old, that was incredibly precious to them–and of course has no value to them whatsoever right now–they would have, just as we have as parents, an incredible stab of emotional reaction. A nostalgia. A feeling that I can’t imagine Robby the Robot would ever have. Am I wrong?
I don’t know. So, this is a good question. There are actually several good questions here. One is: Would Robby the Robot need to have emotions in order to learn? I actually think the answer is Yes. And: will it have those emotions? I think at a functional level we already know how to put the equivalent of emotions into a robot, because emotions are what guide us. Right? We were talking before about goals, right? Emotions are the way evolution in some sense programmed you to do the right things and not the wrong ones, right? The reason we have fear and pleasure and pain and happiness and all of these things is so that we can choose the right things to do. And we know how to do that in a robot. The technical term for that is the objective function–
Or the utility function. Now, whether at the end of the day–
But it’s not the same. It doesn’t seem the same. Maybe it would be. I don’t know. That’s a tough question.
Exactly. So, functionally, in terms of the input-output behavior, I think this could be indistinguishable from the robot having emotions. Whether the robot is really having emotions is probably something that we will never know for sure. But again, we don’t know if animals or if even other people have the same emotions that we do. We just give them credit for them because they are similar to us. And I think in practice what will happen, in fact, this is already happening, with all of these chatbots, for example, is that: If these robots and computers behave like they have emotions, we will treat them as if they have emotions and assume that they do. And often we assume that they have a lot more emotions than they do because we project our humanity into them. So, I think at a practical level [?] it won’t make that much difference. There remains this very fascinating philosophical question, which is: What is really going on in their minds? Or in our minds, for that matter. I’m not sure that we will ever really have an answer to that.
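Domingos’s point that emotions play the functional role of an objective (utility) function can be sketched in a few lines. Everything below is hypothetical and purely illustrative–the state fields, the weights, and the candidate actions are invented for this example, not anything from the book or interview:

```python
# Illustrative sketch: emotions as an objective (utility) function that
# steers action selection. All names and weights here are hypothetical.

def utility(state):
    """Toy objective: reward energy and safety, penalize pain."""
    return 2.0 * state["energy"] + 3.0 * state["safety"] - 5.0 * state["pain"]

def choose_action(current, actions):
    """Pick the action whose predicted outcome maximizes utility,
    the functional analogue of pleasure and fear guiding behavior."""
    return max(actions, key=lambda a: utility(a(current)))

# Hypothetical actions, each returning the robot's predicted next state.
def eat(s):
    return {**s, "energy": s["energy"] + 1.0}

def flee(s):
    return {**s, "safety": s["safety"] + 1.0, "energy": s["energy"] - 1.0}

def touch_fire(s):
    return {**s, "pain": s["pain"] + 2.0}

state = {"energy": 1.0, "safety": 1.0, "pain": 0.0}
best = choose_action(state, [eat, flee, touch_fire])
# The robot "prefers" eating and "avoids" the fire without feeling anything.
```

Whether maximizing such a function amounts to having emotions is exactly the philosophical question Roberts raises next; the sketch only shows the input-output behavior Domingos describes.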
I’ve raised the question recently on the program about whether consciousness is something which is amenable to scientific understanding. Certain philosophers–David Chalmers, Thomas Nagel–claim, and they are both atheists, but they claim that models of evolution and the standard models of biology, chemistry, and physics cannot explain human consciousness. Have you read that work? Have you thought about it at all?
Yeah. And I think that–I disagree with them at the following level. I think if you fast forward to 50 years from now, we will probably have a very good and very satisfying model of consciousness. It will probably be using different concepts than the ones that people have from the sciences right now. The problem is that we haven’t found the right concepts to pin down consciousness yet. But I think there will come a point at which we do, in the sense that all the psychological and neural correlates of consciousness will be explained by this model. And again, for practical purposes, maybe even for philosophical purposes, that will be good. Now, there is, I think, what is often called the hard question of consciousness. Which is: At the end of the day, because consciousness is a subjective experience, you cannot have an objective test of it. So in some sense once you get down to that hard core, consciousness is beyond the scope of science. Unless somebody comes up with something that I don’t quite imagine yet, I think again what will probably happen is that we will get to a point, probably not in the near future–it will be decades from now–where we understand consciousness well enough that we are satisfied with our understanding and we don’t ask ourselves these questions about it any more. And I can find analogies in the history of science where things that used to seem completely mysterious stopped seeming so–like, life itself used to be completely mysterious. And today it’s not that mysterious any more. There’s DNA (Deoxyribonucleic acid) and there’s proteins and there’s what’s called the central dogma of biology. At the end of the day, the mystery of life is still there. It’s just really not that prominent in our minds any more because we feel like we understand, you know, the essence of how life works. And I think chances are the same thing will happen with consciousness.•