I think an interesting concentration right now for law students would be the legality and ethics of autonomous machines. One question yet to be answered which falls within that purview is the liability of automakers and drivers when a robocar malfunctions. While these new machines will save a huge number of lives, they won’t be flawless. From Alex Brown at National Journal:
“What happens when something goes wrong? Robot cars may prevent thousands of accidents, but eventually, inevitably, there will be a crash.
‘Who’s responsible if the car crashes?’ Audi’s Brad Stertz said earlier this year. ‘That’s going to be an issue.’
It’s tough to argue the passenger (who may well be the victim) should be held responsible if a car controlled by a computer runs itself off the road. But should automakers face long, expensive lawsuits when life-saving technology suffers a rare glitch?
‘Automaker liability is likely to increase. Crashes are much more likely to be viewed as the fault of the car and the manufacturer,’ Anderson said. ‘If you’re an automaker and you know you’re going to be sued [more frequently], you’re going to have reservations.… The legal liability test doesn’t take into account the long-run benefits.’
In other words, even though a technology is an overall boon to the greater good, its rare instances of failure—and subsequent lawsuits—won’t take that into account. That could slow the movement of driverless cars to the mass market if automakers are wary of legal battles.”
Tags: Alex Brown, Brad Stertz