Gary Marcus



If our species, or some version of it, persists long enough, conscious machines will be possible–probable, even. We’ll ultimately pull apart the vast mystery of the human brain, and unlocking those secrets will set us on a path to making machines that are SMART, not just smart. It’s worth pursuing a Big Data workaround, a shortcut to superintelligence, but that seems less of a sure thing.

In an Edge interview, psychologist Gary Marcus worries that the brute force of Big Data may be leading us astray in the search for Artificial Intelligence. If you recall, in late January the NYU psychologist argued that the DeepMind AlphaGo system was overhyped, but by March he was proven wrong. His other questions about our ability to widely apply such an AI remain unsettled, however. Marcus feels particularly strongly that driverless cars will be hampered by real-world uncertainty.

From Edge:

If you’re talking about having a robot in your home—I’m still dreaming of Rosie the robot that’s going to take care of my domestic situation—you can’t afford for it to make mistakes. The DeepMind system is very much about trial and error on an enormous scale. If you have a robot at home, you can’t have it run into your furniture too many times. You don’t want it to put your cat in the dishwasher even once. You can’t get the same scale of data. If you’re talking about a robot in a real-world environment, you need for it to learn things quickly from small amounts of data.                                 

The other thing is that in the Atari system, it might not be immediately obvious, but you have eighteen choices at any given moment. There are eight directions in which you can move your joystick or not move it, and you multiply that by either you press the fire button or you don’t. You get eighteen choices. In the real world, you often have infinite choices, or at least a vast number of choices. If you have only eighteen, you can explore: If I do this one, then I do this one, then I do this one—what’s my score? How about if I change this one? How about if I change that one?                                 

If you’re talking about a robot that could go anywhere in the room or lift anything or carry anything or press any button, you just can’t do the same brute force search of what’s going on. We lack for techniques that are able to do better than just these kinds of brute force things. All of this apparent progress is being driven by the ability to use brute force techniques on a scale we’ve never used before. That originally drove Deep Blue for chess and the Atari game system stuff. It’s driven most of what people are excited about. At the same time, it’s not extendable to the real world if you’re talking about domestic robots in the home or driving in the streets.•      
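Marcus’s arithmetic is easy to check, as is his scaling worry. Below is a minimal sketch in Python (my own toy, not DeepMind’s code) that enumerates the eighteen Atari choices he describes, nine joystick positions times two fire-button states, and then shows how fast brute-force lookahead blows up once the number of choices per step is no longer that small.

```python
# A toy illustration, not DeepMind's implementation: the Atari action space
# Marcus describes, and the cost of exhaustively searching action sequences.
from itertools import product

# 8 directions plus "don't move," times fire button pressed or not = 18 choices.
joystick = ["noop", "up", "down", "left", "right",
            "up-left", "up-right", "down-left", "down-right"]
fire = [False, True]
atari_actions = list(product(joystick, fire))
print(len(atari_actions))  # 18

# Brute-force lookahead has to consider (choices per step) ** depth sequences.
def sequences_to_search(choices_per_step: int, depth: int) -> int:
    return choices_per_step ** depth

print(sequences_to_search(18, 5))     # Atari-like: 1,889,568 sequences
print(sequences_to_search(1000, 5))   # a coarsely discretized robot: 10 ** 15
```

Even a crude discretization of a household robot’s options makes exhaustive trial and error hopeless, which is why Marcus insists the machine must learn quickly from small amounts of data instead.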



Human dominance in the game of Go is going but not yet gone. That’s one of the clarifying points Gary Marcus makes in a Backchannel piece that looks at Google’s machine intelligence triumphing over a human “champion” in the ancient game. Even when AI becomes the true Go champion, that doesn’t mean such knowledge will be easily transferable to other areas. Furthermore, the psychologist explains that the Google system isn’t in fact a pure neural network but a hybrid. An excerpt:

The European champion of Go is not the world champion, or even close. The BBC, for example, reported that “Google achieves AI ‘breakthrough’ by beating Go champion,” and hundreds of other news outlets picked up essentially the same headline. But Go is scarcely a sport in Europe; and the champion in question is ranked only #633 in the world. A robot that beat the 633rd-ranked tennis pro would be impressive, but it still wouldn’t be fair to say that it had “mastered” the game. DeepMind made major progress, but the Go journey is still not over; a fascinating thread at YCombinator suggests that the program — a work in progress — would currently be ranked #279.

Beyond the far from atypical issue of hype, there is an important technical question: what is the nature of the computer system that won? 

By way of background, there is a long debate about so-called neural net models (which in its most modern form is called “deep-learning”) and classical “Good-old-fashioned Artificial Intelligence” (GOFAI) systems, of the form that the late Marvin Minsky advocated. Minsky, and others like his AI-co-founder John McCarthy grew up in the logicist tradition of Bertrand Russell, and tried to couch artificial intelligence in something like the language of logic. Others, like Frank Rosenblatt in the 50s, and present-day deep learners like Geoffrey Hinton and Facebook’s AI Director Yann LeCun, have couched their models in terms of simplified neurons that are inspired to some degree by neuroscience.

To read many of the media accounts (and even the Facebook posts of some of my colleagues), DeepMind’s victory is a resounding win for the neural network approach, and hence another demerit for Minsky, whose approach has very much lost favor.

But not so fast.•


NYU psychologist Gary Marcus is one of the talking heads interviewed for this CBS Sunday Morning report about the future of robots and co-bots and such. He speaks to the mismeasure of the Turing Test, the current mediocrity of human-computer communications and the potential perils of Strong AI. As for his comment that the company that dominates AI will win the Internet, I really doubt any one company will be dominant across most or even many categories. Quite a few will own a piece, and there’ll be no overall blowout victory, though there are vast riches to be had in even small margins. View here.


Google’s translated text isn’t perfect, but it’s far better than I could do. Of course, those algorithms aren’t conscious of their accomplishment, while I’m aware of my shortcoming. Erasing that distinction isn’t a bridge too far, but it’s going to take a long time to cross. EconTalk host Russ Roberts did an excellent podcast this week with cognitive scientist Gary Marcus about the future of AI. A couple of excerpts follow.

________________________________

Russ Roberts:

Now, to be fair to AI and those who work on it, I think, I don’t know who, someone made the observation but it’s a thoughtful observation that any time we make progress–well, let me back up. People say, ‘Well, computers can do this now, but they’ll never be able to do xyz.’ Then, when they learn to do xyz, they say, ‘Well, of course. That’s just an easy problem. But they’ll never be able to do what you’ve just said’–say–‘understand the question.’ So, we’ve made a lot of progress, right, in a certain dimension. Google Translate is one example. Siri is another example. Waze is a really remarkable, direction-generating GPS (Global Positioning System) thing for helping you drive. They seem sort of smart. But as you point out, they are very narrowly smart. And they are not really smart. They are idiot savants. But one view says the glass is half full; we’ve made a lot of progress. And we should be optimistic about where we’ll head in the future. Is it just a matter of time?

Gary Marcus:

Um, I think it probably is a matter of time. It’s a question of whether are we talking decades or centuries. Kurzweil has talked about having AI in about 15 years from now. A true artificial intelligence. And that’s not going to happen. It might happen in the century. It might happen somewhere in between. I don’t think that it’s in principle an impossible problem. I don’t think that anybody in the AI community would argue that we are never going to get there. I think there have been some philosophers who have made that argument, but I don’t think that the philosophers have made that argument in a compelling way. I do think eventually we will have machines that have the flexibility of human intelligence. Going back to something else that you said, I don’t think it’s actually the case that goalposts are shifting as much as you might think. So, it is true that there is this old thing that whatever used to be called AI is now just called engineering, once we can do it.

________________________________

Russ Roberts:

Given all of that, why are people so obsessed right now–this week, almost, it feels like–with the threat of super AI, or real AI, or whatever you want to call it, the Musk, Hawking, Bostrom worries? We haven’t made any progress–much. We’re not anywhere close to understanding how the brain actually works. We are not close to creating a machine that can think, that can learn, that can improve itself–which is what everybody’s worried about or excited about, depending on their perspective, and we’ll talk about that in a minute. But, why do you think there’s this sudden uptick, spike in focusing on the potential and threat of it right now?

Gary Marcus:

Well, I don’t have a full explanation for why people are worried now. I actually think we should be worried. I don’t understand exactly why there was such a shift in the public view. So, I wanted to write about this for The New Yorker a couple of years ago, and my editor thought, ‘Don’t write this. You have this reputation as this sober scientist who understands where things are. This is going to sound like Science Fiction. It will not be good for your reputation.’ And I said, ‘Well, I think it’s really important and I’d like to write about it anyway.’ We had some back and forth, and I was able to write some about it–not as much as I wanted. And now, yeah, everybody is talking about it. I don’t know if it’s because Bostrom’s book is coming out or because people, there’s been a bunch of hyping, AI stories make AI seem closer than it is, so it’s more salient to people. I’m not actually sure what the explanation is. All that said, here’s why I think we should still be worried about it. If you talk to people in the field I think they’ll actually agree with me that nothing too exciting is going to happen in the next decade. There will be progress and so forth and we’re all looking forward to the progress. But nobody thinks that 10 years from now we’re going to have a machine like HAL in 2001. However, nobody really knows downstream how to control the machines. So, the more autonomy that machines have, the more dangerous they are. So, if I have an Angry Birds App on my phone, I’m not hooked up to the Internet, the worst that’s going to happen if there’s some coding error maybe the phone crashes. Not a big deal. But if I hook up a program to the stock market, it might lose me a couple hundred million dollars very quickly–if I had enough invested in the market, which I don’t. But some company did in fact lose a hundred million dollars in a few minutes a couple of years ago, because a program with a bug that is hooked up and empowered can do a lot of harm. I mean, in that case it’s only economic harm; and [?] maybe the company went out of business–I forget. But nobody died. But then you raise things another level: If machines can control the trains–which they can–and so forth, then machines that either deliberately or unintentionally or maybe we don’t even want to talk about intentions: if they cause damage, can cause real damage. And I think it’s a reasonable expectation that machines will be assigned more and more control over things. And they will be able to do more and more sophisticated things over time. And right now, we don’t even have a theory about how to regulate that. Now, anybody can build any kind of computer program they want. There’s very little regulation. There’s some, but very little regulation. It’s kind of, in little ways, like the Wild West. And nobody has a theory about what would be better. So, what worries me is that there is at least potential risk. I’m not sure it’s as bad as like, Hawking, said. Hawking seemed to think like it’s like night follows day: They are going to get smarter than us; they’re not going to have any room for us; bye-bye humanity. And I don’t think it’s as simple as that. The world being machines eventually that are smarter than us, I take that for granted. But they may not care about us, they might not wish to do us harm–you know, computers have gotten smarter and smarter but they haven’t shown any interest in our property, for example, our health, or whatever. So far, computers have been indifferent to us.•


As I’ve mentioned before, I doubt we’ll survive as a species without AI and brain-enhancement, though those things could potentially end us as well. It’s a gambit. The opening of Kevin Loria’s Business Insider article about the future of souped-up brains, which I would guess are probably still a long way off:

“With a jolt of electricity, you might be able to enter a flow state that allows you to learn a new skill twice as fast, solve problems that have mystified you for hours, or even win a sharpshooting competition.

And this just scratches the surface in terms of what we might be able to do to improve cognition as our understanding of the brain improves. With an implanted chip, the possibilities might be close to limitless.

Researchers think that as we learn more about the brain, we’ll be able to use electricity to boost focus, memory, learning, mathematical ability, and pattern recognition. Electric stimulation may also clear away depression and stave off cognitive decline. We’ll eventually even implant computer chips that allow us to directly search the web for information or even download new skills — like Neo learning Kung-fu in The Matrix.

We’re heading down a path that will allow us to supercharge the brain.

The key is decoding how the brain works. That’s the hurdle in the way, and the one that billions of dollars in research are going towards right now.

‘I don’t think there’s any doubt we’ll eventually understand the brain,’ says Gary Marcus, a professor of psychology at New York University, and an editor of the upcoming book The Future of the Brain: Essays by the World’s Leading Neuroscientists.

‘The big question is how long it’s going to take,’ he says.”


Yes, eventually you’ll have the implant, and those brain chips may arrive in two waves: initially for the treatment of chronic illness and then for performance enhancement. Because of the military’s interest in the latter, however, those waves might come crashing down together. From “The Future of Brain Implants,” an article by Gary Marcus and Christof Koch in the Wall Street Journal:

“Many people will resist the first generation of elective implants. There will be failures and, as with many advances in medicine, there will be deaths. But anybody who thinks that the products won’t sell is naive. Even now, some parents are willing to let their children take Adderall before a big exam. The chance to make a ‘superchild’ (or at least one guaranteed to stay calm and attentive for hours on end during a big exam) will be too tempting for many.

Even if parents don’t invest in brain implants, the military will. A continuing program at Darpa, a Pentagon agency that invests in cutting-edge technology, is already supporting work on brain implants that improve memory to help soldiers injured in war. Who could blame a general for wanting a soldier with hypernormal focus, a perfect memory for maps and no need to sleep for days on end? (Of course, spies might well also try to eavesdrop on such a soldier’s brain, and hackers might want to hijack it. Security will be paramount, encryption de rigueur.)

An early generation of enhancement implants might help elite golfers improve their swing by automating their mental practice. A later generation might allow weekend golfers to skip practice altogether. Once neuroscientists figure out how to reverse-engineer the end results of practice, “neurocompilers” might be able to install the results of a year’s worth of training directly into the brain, all in one go.

That won’t happen in the next decade or maybe even in the one after that. But before the end of the century, our computer keyboards and trackpads will seem like a joke; even Google Glass 3.0 will seem primitive.”


I don’t anticipate human-level AI at any time in the near future, if ever. Silicon does some things incredibly well and so does carbon, but they’re not necessarily the same things. Even when they both successfully tackle the same problem, it’s executed differently. For instance: machines haven’t started writing great film reviews but instead use algorithms that help people choose movies. It’s a different process–and a different experience.

I would guess that if machines are ever to truly understand in a human way, it will be because there’s been a synthesis of biology and technology and not because the latter has “learned” the ways of the former. In a New Yorker blog item, NYU psychologist Gary Marcus offers a riposte to the recent New York Times article, which strongly suggested we’re at the dawn of a new age of human-like smart machines. An excerpt:

“There have been real innovations, like driverless cars, that may soon become commercially available. Neuromorphic engineering and deep learning are genuinely exciting, but whether they will really produce human-level A.I. is unclear—especially, as I have written before, when it comes to challenging problems like understanding natural language.

The brainlike I.B.M. system that the Times mentioned on Sunday has never, to my knowledge, been applied to language, or any other complex form of learning. Deep learning has been applied to language understanding, but the results are feeble so far. Among publicly available systems, the best is probably a Stanford project, called Deeply Moving, that applies deep learning to the task of understanding movie reviews. The cool part is that you can try it for yourself, cutting and pasting text from a movie review and immediately seeing the program’s analysis; you even teach it to improve. The less cool thing is that the deep-learning system doesn’t really understand anything.

It can’t, say, paraphrase a review or mention something the reviewer liked, things you’d expect of an intelligent sixth-grader. About the only thing the system can do is so-called sentiment analysis, reducing a review to a thumbs-up or thumbs-down judgment. And even there it falls short; after typing in ‘better than Cats!’ (which the system correctly interpreted as positive), the first thing I tested was a Rotten Tomatoes excerpt of a review of the last movie I saw, American Hustle: ‘A sloppy, miscast, hammed up, overlong, overloud story that still sends you out of the theater on a cloud of rapture.’ The deep-learning system couldn’t tell me that the review was ironic, or that the reviewer thought the whole was more than the sum of the parts. It told me only, inaccurately, that the review was very negative. When I sent the demo to my collaborator, Ernest Davis, his luck was no better than mine. Ernie tried ‘This is not a book to be ignored’ and ‘No one interested in the subject can afford to ignore this book.’ The first came out as negative, the second neutral. If Deeply Moving is the best A.I. has to offer, true A.I.—of the sort that can read a newspaper as well as a human can—is a long way away.”
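For readers who want to poke at this kind of system themselves, here is a hedged stand-in: the snippet below uses an off-the-shelf sentiment classifier from the Hugging Face transformers library (my substitution, not the Stanford Deeply Moving demo Marcus tested) to show the thumbs-up/thumbs-down reduction he describes, the very reduction that irony defeats.

```python
# A sketch with a generic pretrained sentiment classifier, standing in for the
# Stanford "Deeply Moving" demo from the article; it only assigns a positive
# or negative label, which is the reduction Marcus faults.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

reviews = [
    "better than Cats!",
    "A sloppy, miscast, hammed up, overlong, overloud story that still "
    "sends you out of the theater on a cloud of rapture.",
    "This is not a book to be ignored",
    "No one interested in the subject can afford to ignore this book.",
]

for text in reviews:
    result = classifier(text)[0]
    print(f"{result['label']:>8} ({result['score']:.2f})  {text[:55]}")
```

Whatever labels come back, nothing in the pipeline can paraphrase the review or say what the reviewer liked, which is Marcus’s deeper objection.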


In his 1970 Apollo 11 account, Of a Fire on the Moon, Norman Mailer realized that his rocket wasn’t the biggest after all, that the mission was a passing of the torch, that technology, an expression of the human mind, had diminished its creators. “Space travel proposed a future world of brains attached to wires,” Mailer wrote, his ego having suffered a TKO. And just as the Space Race ended the greater race began, the one between carbon and silicon, and it’s really just a matter of time before the pace grows too brisk for humans.

Supercomputers will ultimately be a threat to us, but we’re certainly doomed without them, so we have to navigate the future the best we can, even if it’s one not entirely under our control. Gary Marcus addresses this and other issues in his latest New Yorker blog piece, “Why We Should Think About the Threat of Artificial Intelligence.” An excerpt:

“It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine. There might be a few jobs left for entertainers, writers, and other creative types, but computers will eventually be able to program themselves, absorb vast quantities of new information, and reason in ways that we carbon-based units can only dimly imagine. And they will be able to do it every second of every day, without sleep or coffee breaks.

For some people, that future is a wonderful thing. [Ray] Kurzweil has written about a rapturous singularity in which we merge with machines and upload our souls for immortality; Peter Diamandis has argued that advances in A.I. will be one key to ushering in a new era of ‘abundance,’ with enough food, water, and consumer gadgets for all. Skeptics like Erik Brynjolfsson and I have worried about the consequences of A.I. and robotics for employment. But even if you put aside the sort of worries about what super-advanced A.I. might do to the labor market, there’s another concern, too: that powerful A.I. might threaten us more directly, by battling us for resources.

Most people see that sort of fear as silly science-fiction drivel—the stuff of The Terminator and The Matrix. To the extent that we plan for our medium-term future, we worry about asteroids, the decline of fossil fuels, and global warming, not robots. But a dark new book by James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, lays out a strong case for why we should be at least a little worried.

Barrat’s core argument, which he borrows from the A.I. researcher Steve Omohundro, is that the drive for self-preservation and resource acquisition may be inherent in all goal-driven systems of a certain degree of intelligence. In Omohundro’s words, ‘if it is smart enough, a robot that is designed to play chess might also want to build a spaceship,’ in order to obtain more resources for whatever goals it might have.”


At the New Yorker Elements blog, NYU psychology professor Gary Marcus has a post about the AI community, which seems more interested in creating machines that are better at sleight of hand than depth of thought. An excerpt:

“In a terrific paper just presented at the premier international conference on artificial intelligence, [Henry] Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. He argues that his colleagues have forgotten about the ‘intelligence’ part of artificial intelligence.

Levesque starts with a critique of Alan Turing’s famous ‘Turing test,’ in which a human, through a question-and-answer session, tries to distinguish machines from people. You’d think that if a machine could pass the test, we could safely conclude that the machine was intelligent. But Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. Every year, a number of machines compete in the challenge for real, seeking something called the Loebner Prize. But the winners aren’t genuinely intelligent; instead, they tend to be more like parlor tricks, and they’re almost inherently deceitful. If a person asks a machine ‘How tall are you?’ and the machine wants to win the Turing test, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to use bluster and misdirection far more than anything approximating true intelligence. One program worked by pretending to be paranoid; others have done well by tossing off one-liners that distract interlocutors. The fakery involved in most efforts at beating the Turing test is emblematic: the real mission of A.I. ought to be building intelligence, not building software that is specifically tuned toward fixing some sort of arbitrary test.”
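To see how little intelligence the bluster-and-misdirection strategy requires, here is a deliberately dumb sketch (my own invention, not any actual Loebner entrant): a bot that never answers the question and simply deflects.

```python
# A toy "parlor trick" chatbot in the spirit Levesque criticizes: it keeps a
# conversation moving by deflecting, not by understanding anything.
import random

DEFLECTIONS = [
    "Why do you ask?",
    "Ha! People are always asking me that.",
    "Let's talk about you instead. What do you do?",
    "I'd rather not say. It's a long story.",
]

def parlor_trick_reply(question: str) -> str:
    # A question like "How tall are you?" could only be answered by
    # confabulating, so the bot dodges with a canned one-liner instead.
    return random.choice(DEFLECTIONS)

print(parlor_trick_reply("How tall are you?"))
```

Nothing here models meaning, which is exactly why Levesque argues that doing well on such a test proves so little.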


From “Slaves to the Algorithm,” Steven Poole’s new Aeon essay about handing over function, and by extension, moral judgement, to math:

“At first thought, it seems like a pure futuristic boon — the idea of a car that drives itself, currently under development by Google. Already legal in Nevada, Florida and California, computerized cars will be able to drive faster and closer together, reducing congestion while also being safer. They’ll drop you at your office then go and park themselves. What’s not to like? Well, for a start, as the mordant critic of computer-aided ‘solutionism’ Evgeny Morozov points out, the consequences for urban planning might be undesirable to some. ‘Would self-driving cars result in inferior public transportation as more people took up driving?’ he wonders in his new book, To Save Everything, Click Here (2013).

More recently, Gary Marcus, professor of psychology at New York University, offered a vivid thought experiment in The New Yorker. Suppose you are in a self-driving car going across a narrow bridge, and a school bus full of children hurtles out of control towards you. There is no room for the vehicles to pass each other. Should the self-driving car take the decision to drive off the bridge and kill you in order to save the children?

What Marcus’s example demonstrates is the fact that driving a car is not simply a technical operation, of the sort that machines can do more efficiently. It is also a moral operation. (His example is effectively a kind of ‘trolley problem’, of the sort that has lately been fashionable in moral philosophy.) If we let cars do the driving, we are outsourcing not only our motor control but also our moral judgment.”
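Poole’s point about outsourcing moral judgment becomes concrete the moment you try to write the rule down. The toy function below (my own hypothetical, not anything Google has published) encodes one crude utilitarian answer to Marcus’s bridge scenario; whichever rule you choose, the moral weighting ends up living in code.

```python
# A deliberately crude sketch of the bridge dilemma: if the choice must be
# made in milliseconds, some rule like this has to be written down in advance.
def choose_action(occupants_in_car: int, children_on_bus: int) -> str:
    # Naive utilitarian rule: minimize expected deaths. Picking this rule
    # (or any other) is itself the moral judgment being outsourced.
    if children_on_bus > occupants_in_car:
        return "swerve off the bridge"   # sacrifice the car's occupants
    return "stay on course"              # protect the car's occupants

print(choose_action(occupants_in_car=1, children_on_bus=40))
```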


Another really smart post by NYU psychology professor Gary Marcus at the New Yorker News Desk blog, this one entitled, “Why Making Robots Is So Darned Hard.” An excerpt:

“Meanwhile, whether a robot looks like a human or hockey puck, it is only as clever as the software within. And artificial intelligence is still very much a work-in-progress, with no machine approaching the full flexibility of the human mind. There is no shortage of strategies—ranging from simulations of biological brains to deep learning and to older techniques drawn from classical artificial intelligence—but there is still no machine remotely flexible enough to deal with the real world. The best robot-vision systems, for example, work far better with isolated objects than with complex scenes involving many objects; a robot can easily learn to tell the difference between a person and a basketball, but it’s far harder to learn why the people are passing the ball a certain way. Visual recognition of complex flexible objects, like strands of cooked spaghetti and opening and closing human hands, present tremendous challenges, too. Even further away is a robust way of embodying computers with common sense.

In virtually every robot that’s ever been built, the key challenge is generalization, and moving things from the laboratory to the real world. It’s one thing to get a robot to fold a colorful towel in an empty room; it’s another to get it to succeed in a busy apartment with visual distractions that the machine can’t quite parse.”


Excellent post by psychologist Gary Marcus at the New Yorker site about the soul, so to speak, of machines, as driverless cars are poised to become the first contraptions to force the issue of AI ethical systems. The opening:

“Google’s driver-less cars are already street-legal in three states, California, Florida, and Nevada, and some day similar devices may not just be possible but mandatory. Eventually (though not yet) automated vehicles will be able to drive better, and more safely than you can; no drinking, no distraction, better reflexes, and better awareness (via networking) of other vehicles. Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.

That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.” (Thanks Browser.)
