
Even this early in the game, autonomous vehicles are probably as safe as or safer than ones driven by humans. But the question is this: How much safer can they be? From Adam Fisher’s long-form PopSci look at Google’s fleet in beta mode:

“Right now, Chauffeur is undergoing what’s known in Silicon Valley as a closed beta test. In the language particular to Google, the researchers are ‘dogfooding’ the car—driving to work each morning in the same way that [Anthony] Levandowski does. It’s not so much a perk as it is a product test. Google needs to put the car in the hands of ordinary drivers in order to test the user experience. The company also wants to prove—in a statistical, actuarial sense—that the auto-drive function is safe: not perfect, not crash-proof, but safer than a competent human driver. ‘We have a saying here at Google,’ says Levandowski. ‘In God we trust—all others must bring data.’

Currently, the data reveal that so-called release versions of Chauffeur will, on average, travel 36,000 miles before making a mistake severe enough to require driver intervention. A mistake doesn’t mean a crash—it just means that Chauffeur misinterprets what it sees. For example, it might mistake a parked truck for a small building or a mailbox for a child standing by the side of the road. It’s scary, but it’s not the same thing as an accident.

The software also performs hundreds of diagnostic checks a second. Glitches occur about every 300 miles. This spring, Chris Urmson, the director of Google’s self-driving-car project, told a government audience in Washington, D.C., that the vast majority of those are nothing to worry about. ‘We’ve set the bar incredibly low,’ he says. For the errors worrisome enough to require human hands back on the wheel, Google’s crew of young testers have been trained in extreme driving techniques—including emergency braking, high-speed lane changes, and preventing and maneuvering through uncontrolled slides—just in case.

The best way to execute that robot-to-human hand-off remains an open question. How many seconds of warning should Chauffeur provide before giving back the controls? The driver would need a bit of time to gather situational awareness, to put down that coffee or phone, and refocus. ‘It could be 20 seconds; it could be 10 seconds,’ suggests Levandowski. The actual number, he says, will be ‘based on user studies and facts, as opposed to, “We couldn’t get it working and therefore decided to put a one-second [hand-off] time out there.”’

So far, Chauffeur has a clean driving record. There has been only one reported accident that can conceivably be blamed on Google. A self-driving car near Google’s headquarters rear-ended another Prius with enough force to push it forward and impact another two cars, falling-dominoes style. The incident took place two years ago—the Stone Age, in the foreshortened timelines of software development—and, according to Google spokespeople, the car was not in self-driving mode at the time, so the accident wasn’t Chauffeur’s fault. It was due to ordinary human error.

Human drivers get into an accident of one sort or another an average of once every 500,000 miles in the U.S. Accidents that cause injuries are even rarer, occurring about once every 1.3 million miles. And a fatality? Every 90 million miles. Considering that the Google self-driving program has already clocked half a million miles, the argument could be made that Google Chauffeur is already as safe as the average human driver.”
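As a rough illustration (mine, not Fisher’s): if we take the human rates quoted above at face value and assume accidents arrive as a simple Poisson process, a back-of-the-envelope calculation shows what a 500,000-mile clean record does and doesn’t establish.

```python
# A minimal sketch, not from the article: treating accidents as a Poisson
# process at the human rates quoted in the excerpt, how surprising is a
# 500,000-mile accident-free record? The Poisson assumption is mine.
import math

MILES_DRIVEN = 500_000          # miles Google's self-driving fleet has clocked
HUMAN_RATES = {                 # miles per event, per the excerpt
    "any accident": 500_000,
    "injury accident": 1_300_000,
    "fatality": 90_000_000,
}

for event, miles_per_event in HUMAN_RATES.items():
    expected = MILES_DRIVEN / miles_per_event   # events an average human would expect
    p_zero = math.exp(-expected)                # Poisson probability of zero events
    print(f"{event}: expected {expected:.2f}, "
          f"P(zero in {MILES_DRIVEN:,} mi) = {p_zero:.0%}")
```

On those figures, an average human driver would have roughly a one-in-three chance of covering the same 500,000 miles without any accident at all. So the clean record is consistent with “as safe as a competent human driver,” but it is still far too little mileage to prove “safer,” which is the statistical, actuarial case Levandowski says Google wants to build with more data.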
