In an opinion piece for Wired, Patrick Lin, the director of the Ethics and Emerging Sciences Group at California Polytechnic State University, ponders how self-driving cars should make value calls when a crash is unavoidable: "Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others, a sensible goal, which way would you instruct it to go in this scenario?" And the Law of Unintended Consequences raises its head yet again…