Autonomous Cars May Be Programmed to Hit You
May 15, 2014
There has been a lot of talk about self-driving cars recently. News outlets are abuzz with stories about multiple companies trying their hand at developing the technology, and at the rate these companies are progressing, driving as we know it may soon be extinct. However, as more work is done, more issues arise. From legislation to insurance, there are numerous obstacles that need to be addressed. The latest issue to crop up is a doozy, and one the general public should be concerned about.
Imagine a scenario where a crash is unavoidable. The driverless car has two options: hit an SUV or hit a Smart car. If you were programming the vehicle, you would want it to choose the option that minimizes the risk of injury, and most would assume that means steering into the bigger, sturdier SUV, since its occupants are better protected in a collision. It's a sensible goal, but when you step back and think about it, programming a car to collide with one particular object over another is an awful lot like a targeting algorithm. That targeting algorithm takes the car industry down a morally dangerous path.
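To make the worry concrete, here is a minimal, purely hypothetical sketch of what such a crash-optimization choice could look like in code. Everything in it is invented for illustration: the CrashOption class, the choose_collision_target function, and the injury scores are assumptions, not a description of any real system.

```python
# Hypothetical sketch only: a toy "crash-optimization" chooser that picks the
# collision option with the lowest estimated injury score. The class, function,
# and scores are invented for illustration; no real vehicle is known to work this way.
from dataclasses import dataclass

@dataclass
class CrashOption:
    target: str              # what the car would collide with
    estimated_injury: float  # assumed injury-risk score; lower means less expected harm

def choose_collision_target(options):
    """Return the option with the lowest estimated injury score.

    This plain minimization is the 'targeting algorithm' worry: an ordinary
    cost function quietly decides who absorbs the crash.
    """
    return min(options, key=lambda option: option.estimated_injury)

if __name__ == "__main__":
    options = [
        CrashOption(target="SUV", estimated_injury=0.3),        # sturdier; occupants assumed better protected
        CrashOption(target="Smart car", estimated_injury=0.8),  # lighter; occupants assumed more exposed
    ]
    print(choose_collision_target(options).target)  # prints "SUV"
```

The point of the sketch is how unremarkable the code is: a single call to min over a made-up cost turns a routine engineering decision into a choice about who gets hit.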
This crash-optimization algorithm for autonomous vehicles would seem to require deliberate discrimination. If cars are programmed to hit the bigger target in an unavoidable crash, the owners of those larger, targeted vehicles would bear that burden constantly, through no fault of their own. How would that choice alone change the automotive landscape? For example, would we see a spike in sales of smaller cars because people don't want to assume the algorithm's risk?
What about two motorcyclists, one who is wearing a helmet and one who is not? Is the car programmed to steer toward the motorcyclist who is wearing the helmet? Is he or she supposed to be punished for being more cautious? If so, would more motorcyclists stop wearing helmets to avoid scenarios like this, and would motorcyclist deaths rise because more people are choosing to ride without one?
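Continuing the same hypothetical sketch (and reusing its invented CrashOption and choose_collision_target definitions), the motorcyclist case shows how the minimization turns a precaution into a liability: the helmeted rider's lower assumed injury score is precisely what makes the algorithm steer toward them.

```python
# Reuses CrashOption and choose_collision_target from the sketch above.
# The scores are again invented; they only illustrate the perverse incentive.
options = [
    CrashOption(target="rider with helmet", estimated_injury=0.5),     # precaution lowers the assumed score...
    CrashOption(target="rider without helmet", estimated_injury=0.9),  # ...so the cautious rider gets chosen
]
print(choose_collision_target(options).target)  # prints "rider with helmet"
```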
It’s easy to see how this discrimination could quickly become unethical. Programmers would essentially be determining who gets to live and who gets to die, and those who want to improve their odds of survival would have to make themselves riskier targets, an inversion of every incentive to be safe. There are more questions than answers at this point, but companies are barreling down the path toward autonomous vehicles, which means these questions will need answers sooner rather than later.