Gill Pratt


Industrial robots are built to be great (perfect, hopefully) at limited, repetitive tasks. But in Deep Learning experiments, the machines aren't programmed for chores; they're programmed to teach themselves, mastering tasks through experience. Since every situation in life can't be anticipated and pre-coded, truly versatile AI needs to autonomously conquer obstacles as they arise. In these trials, the journey means as much as the destination (more, really).

Of course, not everyone would agree that humans operate from such a blank slate, that we don't already have templates for many behaviors woven into our neurons–a collective unconscious of some sort. Even if that's so, I'd think there'll soon be a way for robots to transfer such knowledge across generations.

One current Deep Learning project is Berkeley's Brett robot, designed to learn like a small child, though it's a growing boy. The name stands for "Berkeley Robot for the Elimination of Tedious Tasks," and while you might be tempted to ask how many of them it would take to screw in a light bulb, it's already far beyond the joke stage. As usual with this tricky field, the emergence of such highly functional machines may take longer than we'd like, but perhaps not as long as we'd expect.

Jack Clark of Bloomberg visited the motherless “child” at Berkeley and writes of it and some of the other current bright, young things. An excerpt from his report:

What makes Brett’s brain tick is a combination of two technologies that have each become fundamental to the AI field: deep learning and reinforcement learning. Deep learning helps the robot perceive the world and its mechanical limbs using a technology called a neural network. Reinforcement learning trains the robot to improve its approach to tasks through repeated attempts. Both techniques have been used for many years; the former powers Google and other companies’ image and speech recognition systems, and the latter is used in many factory robots. While combinations of the two have been tried in software before, the two areas have never been fused so tightly into a single robot, according to AI researchers familiar with the Berkeley project. “That’s been the holy grail of robotics,” says Carlos Guestrin, the chief executive officer at AI startup Dato and a professor of machine learning at the University of Washington.

After years of AI and robotics research, Berkeley aims to devise a system with the intelligence and flexibility of Rosie from The Jetsons. The project entered a new phase in the fall of 2014 when the team introduced a unique combination of two modern AI systems—and a roomful of toys—to a robot. Since then, the team has published a series of papers that outline a software approach to let any robot learn new tasks faster than traditional industrial machines while being able to develop the sorts of broad knowhow for solving problems that we associate with people. These kinds of breakthroughs mean we’re on the cusp of an explosion in robotics and artificial intelligence, as machines become able to do anything people can do, including thinking, according to Gill Pratt, program director for robotics research at the U.S. Defense Advanced Research Projects Agency.
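For the curious, the marriage Clark describes can be boiled down to a toy sketch: a perception function that collapses raw sensor readings into states, and a reinforcement-learning loop (tabular Q-learning here, standing in for the deep variants Berkeley actually uses) that improves a policy through repeated attempts. Everything below, the one-dimensional toy world, the reward, the numbers, is invented for illustration and is not taken from the Brett project:

```python
# Toy sketch of the two ingredients: a perception stage and Q-learning.
# All specifics (world, reward, hyperparameters) are invented for illustration.
import random

random.seed(0)

N_STATES = 5            # 1-D toy world: gripper positions 0..4, goal at 4
ACTIONS = [-1, +1]      # move left or move right
GOAL = N_STATES - 1

def perceive(raw_reading):
    """Stand-in for the deep-learning perception stage: collapse a raw,
    continuous sensor reading into a discrete state the learner can use."""
    return max(0, min(GOAL, round(raw_reading)))

# Q-table: estimated return for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def run_episode(epsilon=0.3, alpha=0.5, gamma=0.9):
    """One 'repeated attempt': act, observe, and improve the value estimates."""
    state = perceive(random.uniform(0, GOAL))   # start somewhere at random
    for _ in range(25):
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                     # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])  # exploit
        nxt = perceive(state + action)
        reward = 1.0 if nxt == GOAL else 0.0
        # Standard Q-learning update from the observed transition.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt
        if reward:
            break

for _ in range(300):
    run_episode()

# The learned greedy policy: which way to move from each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

After a few hundred attempts the greedy policy heads straight for the goal from every state; swap the lookup table for a neural network and the toy world for camera images, and you have, very roughly, the shape of the approach Clark describes.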

 


The future usually arrives gradually, even frustratingly slowly, often wearing the clothes of the past, but what if it got here today or soon thereafter?

The benefits of profound technologies rushing headlong at us would be amazing, and amazingly challenging. Gill Pratt, who oversaw the DARPA Robotics Challenge, wonders in a new Journal of Economic Perspectives essay, “Is a Cambrian Explosion Coming for Robotics?,” whether the field is about to have a wild growth spurt, a synthetic analog to the biological eruption of the Cambrian Period. He thinks that once the “generalizable knowledge representation problem” is addressed, no easy feat, the field will speed forward. The opening:

About half a billion years ago, life on earth experienced a short period of very rapid diversification called the “Cambrian Explosion.” Many theories have been proposed for the cause of the Cambrian Explosion, with one of the most provocative being the evolution of vision, which allowed animals to dramatically increase their ability to hunt and find mates (for discussion, see Parker 2003). Today, technological developments on several fronts are fomenting a similar explosion in the diversification and applicability of robotics. Many of the base hardware technologies on which robots depend—particularly computing, data storage, and communications—have been improving at exponential growth rates. Two newly blossoming technologies—“Cloud Robotics” and “Deep Learning”—could leverage these base technologies in a virtuous cycle of explosive growth. In Cloud Robotics—a term coined by James Kuffner (2010)—every robot learns from the experiences of all robots, which leads to rapid growth of robot competence, particularly as the number of robots grows. Deep Learning algorithms are a method for robots to learn and generalize their associations based on very large (and often cloud-based) “training sets” that typically include millions of examples. Interestingly, Li (2014) noted that one of the robotic capabilities recently enabled by these combined technologies is vision—the same capability that may have played a leading role in the Cambrian Explosion.

How soon might a Cambrian Explosion of robotics occur? It is hard to tell. Some say we should consider the history of computer chess, where brute force search and heuristic algorithms can now beat the best human player, yet no chess-playing program inherently knows how to handle even a simple adjacent problem, like winning at tic-tac-toe (Brooks 2015). In this view, specialized robots will improve at performing well-defined tasks, but in the real world, there are far more problems yet to be solved than ways presently known to solve them.

But unlike computer chess programs, where the rules of chess are built in, today’s Deep Learning algorithms use general learning techniques with little domain-specific structure. They have been applied to a range of perception problems, like speech recognition and now vision. It is reasonable to assume that robots will in the not-too-distant future be able to perform any associative memory problem at human levels, even those with high-dimensional inputs, with the use of Deep Learning algorithms. Furthermore, unlike computer chess, where improvements have occurred at a gradual and expected rate, the very fast improvement of Deep Learning has been surprising, even to experts in the field. The recent availability of large amounts of training data and computing resources on the cloud has made this possible; the algorithms being used have existed for some time and the learning process has actually become simpler as performance has improved.•
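The Cloud Robotics dynamic Pratt describes, every robot learning from the experiences of all robots, reduces to a simple idea: a shared pool of experience that grows with the size of the fleet. Here's a minimal sketch of that idea; the `CloudStore` and `Robot` classes, the nearest-neighbour lookup (standing in for a deep model trained on the shared data), and the object names are all invented for illustration:

```python
# Invented sketch of Cloud Robotics: experiences gathered by any one robot
# become training data for every robot in the fleet.
class CloudStore:
    """Shared, cloud-hosted pool of (observation, label) experiences."""
    def __init__(self):
        self.experiences = []

    def upload(self, observation, label):
        self.experiences.append((observation, label))

class Robot:
    def __init__(self, cloud):
        self.cloud = cloud

    def record(self, observation, label):
        # Knowledge gathered locally becomes available fleet-wide.
        self.cloud.upload(observation, label)

    def recognize(self, observation):
        # Nearest-neighbour lookup over the *shared* pool stands in for
        # a deep model trained on the cloud-hosted training set.
        if not self.cloud.experiences:
            return None
        nearest = min(self.cloud.experiences,
                      key=lambda e: abs(e[0] - observation))
        return nearest[1]

cloud = CloudStore()
fleet = [Robot(cloud) for _ in range(3)]

fleet[0].record(1.0, "mug")      # robot 0 learns what a mug looks like
fleet[1].record(5.0, "bottle")   # robot 1 learns what a bottle looks like

# Robot 2 never saw either object, yet recognizes both via the shared pool.
print(fleet[2].recognize(1.2))   # "mug"
print(fleet[2].recognize(4.8))   # "bottle"
```

The virtuous cycle Pratt points to falls out of the structure: every robot added to the fleet enlarges the training set that all of them draw on.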


The DARPA Grand Challenge of 2004 quickly led private companies to invest heavily in driverless technology. Will the agency’s recently completed Robotics Challenge lead to a similar public-to-private shift? Gill Pratt, the outgoing DARPA program manager, explains to Sam Thielman of the Guardian how he believes that will occur. An excerpt:

Question:

Google, Daimler and Uber all have self-driving cars now; how do you anticipate humanoid robots reaching the private sector?

Gill Pratt:

I think the next big thing to conquer is cost. All the prototypes that you saw were in the hundreds of thousands to millions of dollars. And once a market is identified, whether it’s in manufacturing or agriculture or ageing society, once someone kind of finds the match between the technology and the market, the costs will go way down, and that will be an amazing thing. The next neat thing that’s going to happen is cloud robotics: that’s where when one robot learns something they all learn something.

Let’s say you have a group of robots used for ageing society and their job is to clean up within your house. As each machine does its work, eventually one of them will come across an object and not know what it is, and it’ll reach out to the cloud, through the internet, and say: “Does anyone know what this thing is?” Let’s say that no one does. Then it’ll reach out to a person and the person will say: “Oh, that’s a jar of oil, and that belongs in the cupboard next to the jar of vinegar.” And the robot will say: “Got it!” And now every single one of them knows. In this way, you can bootstrap up the confidence of all the machines throughout the world. I think that will be the next technology.•
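The exchange Pratt narrates is really a small protocol: ask the cloud first, fall back to a person, and write the person's answer back so every robot knows it from then on. A toy sketch of that loop, with the object names and the `ask_human` stub invented here rather than taken from the interview:

```python
# Toy sketch of Pratt's cloud-query protocol: cloud first, human fallback,
# answer shared fleet-wide. All names below are illustrative.
shared_knowledge = {}    # cloud-hosted map: object signature -> description

def ask_human(signature):
    """Stand-in for the human fallback ('Oh, that's a jar of oil...')."""
    return f"labeled by a person: {signature}"

def identify(robot_name, signature):
    if signature in shared_knowledge:       # "Does anyone know what this is?"
        return shared_knowledge[signature]
    answer = ask_human(signature)           # no robot knows: ask a person
    shared_knowledge[signature] = answer    # "Got it!" -- now they all know
    return answer

first = identify("robot-A", "jar-of-oil")    # triggers the human fallback
second = identify("robot-B", "jar-of-oil")   # answered from the cloud
assert first == second                       # robot B never needed a human
```

The bootstrapping Pratt predicts lives in that one write to `shared_knowledge`: each human answer is paid for once and reused by every machine thereafter.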


Google has withdrawn one of its recently purchased companies from the upcoming DARPA Robotics Challenge in Pomona, but the competition will continue apace, albeit with some accelerated marching orders (i.e., the cables have been cut). From Erico Guizzo at IEEE Spectrum:

In a call with reporters this afternoon, Gill Pratt, program manager for the DRC, said the tasks for the final challenge will be similar to the ones we saw at the trials. But this time the tasks will be “put together in a single mission” that teams have one hour to complete.

The robots will start in a vehicle, drive to a simulated disaster building, and then they’ll have to open doors, walk on rubble, and use tools. Finally they’ll have to climb a flight of stairs. But one more thing, Pratt said: there will be a surprise task waiting for the robots at the end.

Just when we thought the DRC couldn’t get any cooler—it just did. Naturally, Pratt declined to elaborate on what this mystery task might entail.

He also emphasized that now the robots will operate completely untethered. There won’t be cables to provide power and data—and to keep them from falling down. “They’ll have to get up on their own,” he said. “That’s raising the bar on how good the robots have to be.”•

___________________________

“Basically we have to cut the cord”:
