Anthony Levandowski


The implied religiosity which often attends Artificial Intelligence, a dynamic identified by Jaron Lanier among other technological critics, becomes explicit in the Way of the Future, roboticist Anthony Levandowski’s new Silicon Valley spiritual-belief system in which the Four Horsemen, should they arrive, will do so in driverless cars.

To be perfectly accurate, Levandowski, the pivotal figure in the current legal scrum between Google and Uber over autonomous-vehicle intellectual property, isn’t prophesying End of Days scenarios but is rather preaching that we are in the process of transitioning from a planet ruled by humans (not great guardians, admittedly) to one governed by what he thinks will be superior machines. If we get on our knees and pray at his church today, he believes, we’ll be more likely to be accepted tomorrow as pets who sit at the feet of our masters.

His techno-theocracy sounds even less inviting than the religion promoted by the computer-savvy, self-described messiah Maharaj Ji, a teenage guru from India who briefly came to prominence in America during the disco-addled decade of the 1970s. From a 1974 profile of him by Marjoe Gortner:

The guru is much more technologically oriented, though. He spreads a lot of word and keeps tabs on who needs what through a very sophisticated Telex system that reaches out to all the communes or ashrams around the country. He can keep count of who needs how many T-shirts, pairs of socks–stuff like that. And his own people run this system; it’s free labor for the corporation.

The morning of the third day I was feeling blessed and refreshed, and I was looking forward to the guru’s plans for the Divine City, which was soon going to be built somewhere in the U.S. I wanted to hear what that was all about.

It was unbelievable. The city was to consist of “modular units adaptable to any desired shape.” The structures would have waste-recycling devices so that water could be drunk over and over. They even planned to have toothbrushes with handles you could squeeze to have the proper amount of paste pop up (the crowd was agog at this). There would be a computer in each communal house so that with just a touch of the hand you could check to see if a book you wanted was available, and if it was, it would be hand-messengered to you. A complete modern city of robots. I was thinking: whatever happened to mountains and waterfalls and streams and fresh air? This was going to be a technological, computerized nightmare! It repulsed me. Computer cards to buy essentials at a central storeroom! And no cheating, of course. If you flashed your card for an item you already had, the computer would reject it. The perfect turn-off. The spokesman for this city announced that the blueprints had already been drawn up and actual construction would be the next step. Controlled rain, light, and space. Bubble power! It was all beginning to be very frightening.•

Following up on his recent WTF Wired feature about Levandowski, Mark Harris offers a further interview with the Silicon Valley spiritualist, who, like many in the Singularity industry, worships at the altar of “intelligence,” a term that’s far more slippery to define than many in the sector are willing to admit. The algorithmic abbot believes unimaginable machine intelligence must equate to God. “If there is something a billion times smarter than the smartest human, what else are you going to call it?” Well, perhaps the devil?

An excerpt:

Levandowski has been working with computers, robots, and AI for decades. He started with robotic Lego kits at the University of California at Berkeley, went on to build a self-driving motorbike for a DARPA competition, and then worked on autonomous cars, trucks, and taxis for Google, Otto, and Uber. As time went on, he saw software tools built with machine learning techniques surpassing less sophisticated systems—and sometimes even humans.

“Seeing tools that performed better than experts in a variety of fields was a trigger [for me],” he says. “That progress is happening because there’s an economic advantage to having machines work for you and solve problems for you. If you could make something one percent smarter than a human, your artificial attorney or accountant would be better than all the attorneys or accountants out there. You would be the richest person in the world. People are chasing that.”

Not only is there a financial incentive to develop increasingly powerful AIs, he believes, but science is also on their side. Though human brains have biological limitations to their size and the amount of energy they can devote to thinking, AI systems can scale arbitrarily, housed in massive data centers and powered by solar and wind farms. Eventually, some people think that computers could become better and faster at planning and solving problems than the humans who built them, with implications we can’t even imagine today—a scenario that is usually called the Singularity.

Levandowski prefers a softer word: the Transition. “Humans are in charge of the planet because we are smarter than other animals and are able to build tools and apply rules,” he tells me. “In the future, if something is much, much smarter, there’s going to be a transition as to who is actually in charge. What we want is the peaceful, serene transition of control of the planet from humans to whatever. And to ensure that the ‘whatever’ knows who helped it get along.”

With the internet as its nervous system, the world’s connected cell phones and sensors as its sense organs, and data centers as its brain, the ‘whatever’ will hear everything, see everything, and be everywhere at all times. The only rational word to describe that ‘whatever’, thinks Levandowski, is ‘god’—and the only way to influence a deity is through prayer and worship.

“Part of it being smarter than us means it will decide how it evolves, but at least we can decide how we act around it,” he says. “I would love for the machine to see us as its beloved elders that it respects and takes care of. We would want this intelligence to say, ‘Humans should still have rights, even though I’m in charge.’”

Levandowski expects that a super-intelligence would do a better job of looking after the planet than humans are doing, and that it would favor individuals who had facilitated its path to power. Although he cautions against taking the analogy too far, Levandowski sees a hint of how a superhuman intelligence might treat humanity in our current relationships with animals. “Do you want to be a pet or livestock?” he asks. “We give pets medical attention, food, grooming, and entertainment. But an animal that’s biting you, attacking you, barking and being annoying? I don’t want to go there.” 

Enter Way of the Future. The church’s role is to smooth the inevitable ascension of our machine deity, both technologically and culturally. In its bylaws, WOTF states that it will undertake programs of research, including the study of how machines perceive their environment and exhibit cognitive functions such as learning and problem solving.


Even this early in the game, autonomous vehicles are probably as safe as or safer than ones driven by humans. But the question is this: How much safer can they be? From Adam Fisher’s long-form PopSci look at Google’s fleet in beta mode:

“Right now, Chauffeur is undergoing what’s known in Silicon Valley as a closed beta test. In the language particular to Google, the researchers are ‘dogfooding’ the car—driving to work each morning in the same way that [Anthony] Levandowski does. It’s not so much a perk as it is a product test. Google needs to put the car in the hands of ordinary drivers in order to test the user experience. The company also wants to prove—in a statistical, actuarial sense—that the auto-drive function is safe: not perfect, not crash-proof, but safer than a competent human driver. ‘We have a saying here at Google,’ says Levandowski. ‘In God we trust—all others must bring data.’

Currently, the data reveal that so-called release versions of Chauffeur will, on average, travel 36,000 miles before making a mistake severe enough to require driver intervention. A mistake doesn’t mean a crash—it just means that Chauffeur misinterprets what it sees. For example, it might mistake a parked truck for a small building or a mailbox for a child standing by the side of the road. It’s scary, but it’s not the same thing as an accident.

The software also performs hundreds of diagnostic checks a second. Glitches occur about every 300 miles. This spring, Chris Urmson, the director of Google’s self-driving-car project, told a government audience in Washington, D.C., that the vast majority of those are nothing to worry about. ‘We’ve set the bar incredibly low,’ he says. For the errors worrisome enough to require human hands back on the wheel, Google’s crew of young testers have been trained in extreme driving techniques—including emergency braking, high-speed lane changes, and preventing and maneuvering through uncontrolled slides—just in case.

The best way to execute that robot-to-human hand-off remains an open question. How many seconds of warning should Chauffeur provide before giving back the controls? The driver would need a bit of time to gather situational awareness, to put down that coffee or phone, and refocus. ‘It could be 20 seconds; it could be 10 seconds,’ suggests Levandowski. The actual number, he says, will be ‘based on user studies and facts, as opposed to, ‘We couldn’t get it working and therefore decided to put a one-second [hand-off] time out there.’

So far, Chauffeur has a clean driving record. There has been only one reported accident that can conceivably be blamed on Google. A self-driving car near Google’s headquarters rear-ended another Prius with enough force to push it forward and impact another two cars, falling-dominoes style. The incident took place two years ago—the Stone Age, in the foreshortened timelines of software development—and, according to Google spokespeople, the car was not in self-driving mode at the time, so the accident wasn’t Chauffeur’s fault. It was due to ordinary human error.

Human drivers get into an accident of one sort or another an average of once every 500,000 miles in the U.S. Accidents that cause injuries are even rarer, occurring about once every 1.3 million miles. And a fatality? Every 90 million miles. Considering that the Google self-driving program has already clocked half a million miles, the argument could be made that Google Chauffeur is already as safe as the average human driver.”
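The closing claim can be sanity-checked with back-of-the-envelope arithmetic. This sketch uses only the figures quoted in the excerpt above; it is illustrative, not fresh data:

```python
# Figures as quoted in the PopSci excerpt (U.S. averages for human drivers,
# plus Google's own numbers for the Chauffeur program).
human_miles_per_accident = 500_000        # any accident
human_miles_per_injury = 1_300_000        # injury-causing accident
human_miles_per_fatality = 90_000_000     # fatal accident

chauffeur_miles_per_intervention = 36_000 # severe misinterpretation, not a crash
google_total_miles = 500_000              # miles logged by the program so far

# How many accidents an average human driver would be expected to have
# over the same mileage Google has already logged.
expected_human_accidents = google_total_miles / human_miles_per_accident
print(expected_human_accidents)  # 1.0
```

With half a million miles driven and (per the article) no at-fault accident in self-driving mode, Chauffeur sits right at the one-expected-accident threshold for an average human driver, which is the statistical basis for the "already as safe" argument. Note that the 36,000-mile intervention figure measures perception mistakes, not crashes, so the two rates are not directly comparable.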
