Will Knight


In Michael Crichton’s original 1973 Westworld, when the machines begin to rise, the operators are flummoxed about what measures to take to prevent calamity. One dejected scientist says resignedly about the robots run amok: “They’ve been designed by other computers…we don’t know exactly how they work.”

As algorithms grow more complex, they begin to escape us. Deep learning, if it develops unchecked, which is currently the most likely scenario, will only deepen that opacity. Some who should know better have repeated the ridiculous idea that if things go wrong, we can simply pull out one plug or another and all will be fine.

There will be no plug. Even if there were, in a highly technological society, yanking it from the wall would mean the end of us.

In “The Dark Secret at the Heart of AI,” an excellent Technology Review article, Will Knight speaks to the problem of machines teaching themselves, a powerful tool and, perhaps, weapon. He warns that “we’ve never before built machines that operate in ways their creators don’t understand.”

The opening:

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users.•


The robots may be coming for our jobs, but they’re not coming for our species, not yet.

Anyone worried about AI driving humans extinct in the short term is buying into sci-fi hype far too much, and those quipping that we’ll eventually just unplug machines if they get too smart are underselling more distant dangers. But in the near term, Weak AI (e.g., automation) is far more a peril to society than Strong AI (e.g., conscious machines). It could move us into a post-scarcity tomorrow, or it could do great damage if it’s managed incorrectly. What happens if too many jobs are lost all at once? Will there be enough of a transition period to allow us to pivot?

In a Technology Review piece, Will Knight writes of a Stanford study on AI that predicts certain key disruptive technologies will not have cut a particularly wide swath by 2030. Of course, even this research, which takes a relatively conservative view of the future, suggests we start discussing social safety nets for those on the short end of what may become an even more imbalanced digital divide.

The opening:

The odds that artificial intelligence will enslave or eliminate humankind within the next decade or so are thankfully slim. So concludes a major report from Stanford University on the social and economic implications of artificial intelligence.

At the same time, however, the report concludes that AI looks certain to upend huge aspects of everyday life, from employment and education to transportation and entertainment. More than 20 leaders in the fields of AI, computer science, and robotics coauthored the report. The analysis is significant because the public alarm over the impact of AI threatens to shape public policy and corporate decisions.

It predicts that automated trucks, flying vehicles, and personal robots will be commonplace by 2030, but cautions that remaining technical obstacles will limit such technologies to certain niches. It also warns that the social and ethical implications of advances in AI, such as the potential for unemployment in certain areas and likely erosions of privacy driven by new forms of surveillance and data mining, will need to be open to discussion and debate.•

In Japan, Pepper is a dear, adorable thing, but the robot is being reprogrammed to be sort of a jerk for use in America. While that cultural reimagining is telling in a small way, the greater takeaway from Will Knight’s smart Technology Review piece about this particular machine is that truly flexible and multifaceted robot assistants still need a lot of work. Of course, Weak AI can do a lot of good (and wreak a lot of havoc on the economy) all by itself. An excerpt:

Brian Scassellati, a professor at Yale University who studies how people and robots can interact, says significant progress has been made in the area in the last 10 years. “Human-robot interaction has really started to home in on the kinds of behaviors that give you that feeling of presence,” he says. “A lot of these are small, subtle things.” For example, Pepper can crudely read your emotions by using software that analyzes facial expressions. I found the robot to be pretty good at telling whether I was smiling or frowning.

However, Scassellati does not believe robots are ready to become constant companions or even effective salespeople. The robots that succeed “are going to be for very limited use,” he suggests. “They’re going to be for targeted use, and probably not with the general population.”

My short time with Pepper makes me think that targeting limited applications is a sensible move.•


I previously posted some stuff about driverless-car testing in a mock cityscape in Ann Arbor, Michigan, which might seem unnecessary given Google’s regular runs on actual streets and highways. But here’s an update on the progress from “Town Built for Driverless Cars,” by Will Knight at Technology Review:

“A mocked-up set of busy streets in Ann Arbor, Michigan, will provide the sternest test yet for self-driving cars. Complex intersections, confusing lane markings, and busy construction crews will be used to gauge the aptitude of the latest automotive sensors and driving algorithms; mechanical pedestrians will even leap into the road from between parked cars so researchers can see if they trip up onboard safety systems.

The urban setting will be used to create situations that automated driving systems have struggled with, such as subtle driver-pedestrian interactions, unusual road surfaces, tunnels, and tree canopies, which can confuse sensors and obscure GPS signals.

‘If you go out on the public streets you come up against rare events that are very challenging for sensors,’ says Peter Sweatman, director of the University of Michigan’s Mobility Transformation Center, which is overseeing the project. ‘Having identified challenging scenarios, we need to re-create them in a highly repeatable way. We don’t want to be just driving around the public roads.'”


An excerpt from “The Robots Running This Way,” Will Knight’s long-form Technology Review article about Boston Dynamics, one of Google’s recently purchased robotics companies:

Many of the robots struggle to complete the tasks without malfunctioning, freezing up, or toppling over. Of all the challenges facing them, one of the most difficult, and potentially the most important to master, is simply walking over uneven, unsteady, or just cluttered ground. But the Atlas robots (several academic groups have entered versions of the Boston Dynamics machine) walk across such terrain with impressive confidence.

A couple of times each day, the crowd gets to see two other legged robots made by Boston Dynamics. In one demo, a four-legged machine about the size of a horse trots along the track carrying several large packs; it cleverly shuffles its feet to stay upright when momentarily unbalanced by a hefty kick from its operator. In another, a smaller, more agile four-legged machine revs up a loud diesel engine, then bounds maniacally along the racetrack like a big cat, quickly reaching almost 20 miles per hour.

The crowd, filled with robotics researchers from around the world and curious members of the public, gasps and applauds. But the walking and running technology found in the machines developed by Boston Dynamics is more than just dazzling. If it can be improved, then these robots, and others like them, might stride out of research laboratories and populate the world with smart mobile machines. That helps explain why a few days before the DARPA Challenge, Boston Dynamics was acquired by Google.•
