Patrick Lin


Now five decades old, the thought experiment known as the Trolley Problem is newly relevant thanks to the emergence of driverless cars and other robotized functions that require forethought about potential moral complications. Despite criticisms of the value of such exercises, I’ve always found them useful, and including them in the conversation about autonomous designs surely can’t hurt.

Lauren Cassani Davis of the Atlantic looks at the merging of a stately philosophical scenario and cutting-edge technology. An excerpt about Stanford mechanical engineer Chris Gerdes:

Gerdes has been working with a philosophy professor, Patrick Lin, to make ethical thinking a key part of his team’s design process. Lin, who teaches at Cal Poly, spent a year working in Gerdes’s lab and has given talks to Google, Tesla, and others about the ethics of automating cars. The trolley problem is usually one of the first examples he uses to show that not all questions can be solved simply through developing more sophisticated engineering. “Not a lot of engineers appreciate or grasp the problem of programming a car ethically, as opposed to programming it to strictly obey the law,” Lin said.

But the trolley problem can be a double-edged sword, Lin says. On the one hand, it’s a great entry point and teaching tool for engineers with no background in ethics. On the other hand, its prevalence, whimsical tone, and iconic status can shield you from considering a wider range of dilemmas and ethical considerations. Lin has found that delivering the trolley problem in its original form—streetcar hurtling towards workers in a strangely bare landscape—can be counterproductive, so he often re-formulates it in terms of autonomous cars:

You’re driving an autonomous car in manual mode—you’re inattentive and suddenly are heading towards five people at a farmer’s market. Your car senses this incoming collision, and has to decide how to react. If the only option is to jerk to the right, and hit one person instead of remaining on its course towards the five, what should it do?

It may be fortuitous that the trolley problem has trickled into the world of driverless cars: It illuminates some of the profound ethical—and legal—challenges we will face ahead with robots. As human agents are replaced by robotic ones, many of our decisions will cease to be in-the-moment, knee-jerk reactions. Instead, we will have the ability to premeditate different options as we program how our machines will act.•
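The premeditation Davis describes is, at bottom, the choice of an objective function. Here’s a deliberately minimal sketch, in Python, of what Lin’s farmer’s-market reformulation looks like if a car simply minimizes the number of people it endangers. Everything in it (the class, the names, the numbers) is invented for illustration; it isn’t anything Lin or Gerdes has proposed.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate evasive action and the people it would endanger (hypothetical)."""
    name: str
    people_at_risk: int

def least_harm(options: list[Maneuver]) -> Maneuver:
    """Naive 'minimize harm' rule: pick the maneuver that endangers the fewest people."""
    return min(options, key=lambda m: m.people_at_risk)

# Lin's reformulation: stay on course toward five, or jerk right toward one.
options = [
    Maneuver("stay on course", people_at_risk=5),
    Maneuver("swerve right", people_at_risk=1),
]
print(least_harm(options).name)  # -> swerve right
```

The ease of writing that min() is the point: the hard part isn’t the code, it’s deciding whether a raw head count is the right thing to minimize.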


The Trolley Problem as applied to autonomous cars is currently one of the most popular legal and philosophical exercises. I think its importance may be somewhat overstated, since the average human driver routinely tries to swerve to avoid a crash, and that may well be the default setting for robocars in what will hopefully be a world of far fewer potential crashes. But the issue of liability will still need to be worked out. From Patrick Lin’s Wired piece about the possibility of “adjustable ethics settings”:

“Do you remember that day when you lost your mind? You aimed your car at five random people down the road. By the time you realized what you were doing, it was too late to brake.

Thankfully, your autonomous car saved their lives by grabbing the wheel from you and swerving to the right. Too bad for the one unlucky person standing on that path, struck and killed by your car.

Did your robot car make the right decision? This scene, of course, is based on the infamous ‘trolley problem’ that many folks are now talking about in AI ethics. It’s a plausible scene, since even cars today have crash-avoidance features: some can brake by themselves to avoid collisions, and others can change lanes too.

The thought-experiment is a moral dilemma, because there’s no clearly right way to go. It’s generally better to harm fewer people than more, to have one person die instead of five. But the car manufacturer creates liability for itself in following that rule, sensible as it may be. Swerving the car directly results in that one person’s death: this is an act of killing. Had it done nothing, the five people would have died, but you would have killed them, not the car manufacturer, which in that case would merely have let them die.

Even if the car didn’t swerve, the car manufacturer could still be blamed for ignoring the plight of those five people, when it held the power to save them. In other words: damned if you do, and damned if you don’t.

So why not let the user select the car’s ‘ethics setting’?”
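Purely as a sketch of what that question might mean in practice, an “ethics setting” could be imagined as a knob that re-weights a crash-choice cost function. The profile names, weights, and harm numbers below are all invented for illustration; nothing here reflects an actual product or Lin’s own proposal.

```python
# Hypothetical sketch: an owner-selectable "ethics setting" expressed as
# different weightings in a crash-choice cost function. All values invented.

ETHICS_PROFILES = {
    # weight on harm to the occupant vs. harm to people outside the car
    "protect_occupant": {"occupant": 10.0, "others": 1.0},
    "utilitarian":      {"occupant": 1.0,  "others": 1.0},
    "altruistic":       {"occupant": 0.5,  "others": 1.0},
}

def crash_cost(option, profile):
    """Weighted harm estimate for one candidate maneuver."""
    w = ETHICS_PROFILES[profile]
    return w["occupant"] * option["occupant_harm"] + w["others"] * option["other_harm"]

def choose(options, profile="utilitarian"):
    """Pick the maneuver with the lowest weighted cost under the chosen profile."""
    return min(options, key=lambda o: crash_cost(o, profile))

options = [
    {"name": "brake only",   "occupant_harm": 0.1, "other_harm": 5.0},
    {"name": "swerve right", "occupant_harm": 2.0, "other_harm": 1.0},
]
print(choose(options, "protect_occupant")["name"])  # -> brake only
print(choose(options, "utilitarian")["name"])       # -> swerve right
```

The sketch also shows why the excerpt dwells on liability: whoever picks the weights is, in effect, picking who bears the harm.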


Research has shown that a gentrifying neighborhood rocked by a shocking headline murder doesn’t stop gentrifying. The glaring, but rare, tragedy isn’t enough to reverse progress. The rewards of having a piece of an interesting, but still affordable, community outweigh the risks. I think the same will be true of autonomous vehicles, which will make the streets and highways far safer even if occasionally there’s a loud crash.

One of the biggest moral quandaries about driverless cars is one on the margins: when a collision is imminent, software, not humans, would make the decision about who is likely to live and who is likely to die. I would think the fairest scenario would be to aim for the best outcome for the greatest number of those involved. But perhaps car owners will be able to opt into a “moral system” the way they can choose organ donation. Maybe there’ll be an insurance break for those who do. Who knows? It’s likely, though, that this decision, like the steering wheel itself, won’t be in our hands.

In Patrick Lin’s new Wired article, “The Robot Car of Tomorrow May Just Be Programmed to Hit You,” he analyzes all aspects of this ethical problem. An excerpt:

“Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others–a sensible goal–which way would you instruct it to go in this scenario?

As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.

But physics isn’t the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require the deliberate and systematic discrimination of, say, large vehicles to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?”
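A deliberately crude sketch (all numbers invented) shows why Lin likens crash optimization to a targeting algorithm: a rule that minimizes expected harm by preferring heavier vehicles with better safety ratings will select the Volvo every time, by construction.

```python
# Hypothetical crash-optimization sketch. A rule that minimizes expected harm
# by preferring heavier, safer target vehicles ends up systematically selecting
# exactly those vehicles, which is Lin's point. All numbers are invented.

candidates = [
    {"name": "Volvo SUV",   "mass_kg": 2100, "safety_rating": 0.95},
    {"name": "Mini Cooper", "mass_kg": 1200, "safety_rating": 0.80},
]

def expected_harm(vehicle):
    # Crude model: a heavier, safer target vehicle absorbs the same impact
    # energy with less harm to its occupants.
    impact_energy = 250_000  # joules, arbitrary for the sketch
    return impact_energy / vehicle["mass_kg"] * (1 - vehicle["safety_rating"])

target = min(candidates, key=expected_harm)
print(target["name"])  # -> Volvo SUV: the safety-conscious owner bears the cost
```

The discrimination isn’t a bug in the code; it falls straight out of the objective the code was asked to optimize.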


In his Wired article, Patrick Lin articulates the new driverless-car thought experiment, “the school-bus problem,” though like everyone else he doesn’t mention that hitting a bus head-on would likely kill the car’s driver as well. The example could use a more logical spin, but I get the point. An excerpt:

“Ethical dilemmas with robot cars aren’t just theoretical, and many new applied problems could arise: emergencies, abuse, theft, equipment failure, manual overrides, and many more that represent the spectrum of scenarios drivers currently face every day.

One of the most popular examples is the school-bus variant of the classic trolley problem in philosophy: On a narrow road, your robotic car detects an imminent head-on crash with a non-robotic vehicle — a school bus full of kids, or perhaps a carload of teenagers bent on playing ‘chicken’ with you, knowing that your car is programmed to avoid crashes. Your car, naturally, swerves to avoid the crash, sending it into a ditch or a tree and killing you in the process.

At least with the bus, this is probably the right thing to do: to sacrifice yourself to save 30 or so schoolchildren. The automated car was stuck in a no-win situation and chose the lesser evil; it couldn’t plot a better solution than a human could.”
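One way to see both the excerpt’s lesser-evil claim and my quibble about the driver’s odds is a hypothetical expected-deaths tally that includes the occupant in the head-on option. Every probability below is invented for illustration.

```python
# Hypothetical lesser-evil comparison for the school-bus variant. Probabilities
# are invented; the point is that once the occupant's own risk is counted in the
# head-on option, the swerve is not much of an added sacrifice.

options = {
    # each option lists (probability of death, number of people) per group
    "head-on with bus": [(0.9, 1),   # the car's occupant
                         (0.2, 30)], # the schoolchildren
    "swerve into tree": [(0.9, 1)],  # the occupant alone
}

def expected_deaths(groups):
    return sum(p * n for p, n in groups)

for name, groups in options.items():
    print(f"{name}: {expected_deaths(groups):.1f} expected deaths")

best = min(options, key=lambda name: expected_deaths(options[name]))
print("lesser evil:", best)  # -> swerve into tree
```

If the head-on crash would probably kill the occupant anyway, swerving costs little beyond what was already at stake, which is the quibble above.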
