Self-driving car ethics
Posted Nov 1, 2015 15:39 UTC (Sun) by giraffedata (guest, #1954)
In reply to: Self-driving car ethics by dlang
Parent article: Security quotes of the week
I'd bet a large amount of money that cars will be programmed to brake in a straight line if a collision can't be avoided rather than to brake less and steer through any obstacle
I agree with this conclusion, but this programming is an ethical choice (and I don't care whether we characterize it as an ethical choice of the program or the programmer - it's the same thing to me). It's essentially one resolution of the trolley dilemma - better to let 5 people die than pull a switch and make 1 other person die, thereby becoming a murderer. Except with some uncertainty thrown in.
The fact that the car doesn't have perfect information doesn't make the ethical decision easy. Some would say the ethical thing is to play the probabilities, identifying what is more or less likely to be a pedestrian, which cars are most likely to contain children, and all that.
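As a concrete illustration of what "playing the probabilities" could mean, here is a minimal sketch. Nothing below comes from Google or any real system; the object classes, probabilities, and harm weights are invented purely to show the shape of the calculation.

    # Minimal sketch of probability-weighted harm, with invented numbers.
    # Each obstacle carries a class distribution (pedestrian? debris?) and a
    # per-maneuver chance of being hit; the planner picks the maneuver whose
    # expected harm is lowest.

    HARM = {"pedestrian": 100.0, "cyclist": 80.0, "car": 10.0, "debris": 1.0}

    def expected_harm(maneuver, obstacles):
        total = 0.0
        for obstacle in obstacles:
            p_hit = obstacle["p_hit"][maneuver]
            for cls, p_cls in obstacle["class_probs"].items():
                total += p_hit * p_cls * HARM[cls]
        return total

    def choose_maneuver(maneuvers, obstacles):
        return min(maneuvers, key=lambda m: expected_harm(m, obstacles))

    if __name__ == "__main__":
        obstacles = [
            {"class_probs": {"pedestrian": 0.7, "debris": 0.3},     # probably a person
             "p_hit": {"brake_straight": 0.2, "swerve_right": 0.6}},
            {"class_probs": {"car": 1.0},                           # another vehicle
             "p_hit": {"brake_straight": 0.5, "swerve_right": 0.1}},
        ]
        print(choose_maneuver(["brake_straight", "swerve_right"], obstacles))

With these made-up numbers braking in a straight line comes out best; the ethical content is entirely in the harm weights and in deciding that expected harm is the right thing to minimize.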
By the way, I believe in the real world of today, Google is dealing with a version of this question. Recent data shows that collisions are more likely when a Google self-driving car is involved. It appears to be because the car's normal safe-harbor reaction to a bad situation is to stop, where a human would not (either out of carelessness or reasoned strategy), causing a tailgater to hit the Google car. There are probably people saying Google's driving strategy is correct even though it causes more collisions, because the moral fault for those collisions lies with the tailgater. But I'll bet engineers are considering smarter strategies where not braking avoids a collision with the tailgater but increases the risk of some other collision. They know people don't want Google cars on the roads if having them will produce more collisions.
Posted Nov 1, 2015 23:04 UTC (Sun)
by dlang (guest, #313)
[Link] (8 responses)
Everything I've seen says they are involved in far fewer collisions per hour driven, per mile driven, etc. Also keep in mind that they are being tested in some of the worst road/traffic conditions available (Bay Area traffic).
If the problem is that they are being rear-ended because they stopped too soon, I expect that they will be changed to use all available space to stop rather than stopping as quickly as possible, and to flash their brake lights if the car behind them doesn't appear to be slowing fast enough.
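A rough sketch of that kind of logic, with made-up sensor values and thresholds (this is not how any shipping system is written, just the shape of the calculation):

    # Sketch of "use all the available space to stop" plus a tailgater warning.
    # Speeds in m/s, distances in metres; all numbers are illustrative.

    def plan_deceleration(speed, stop_distance, max_decel=8.0):
        # From v^2 = 2*a*d: the gentlest deceleration that still stops in time,
        # capped at what the brakes can actually deliver.
        needed = speed ** 2 / (2 * stop_distance)
        return min(needed, max_decel)

    def warn_tailgater(gap, closing_speed, margin_s=1.5):
        # Flash the brake lights if the car behind would close the gap
        # in less than the margin.
        return closing_speed > 0 and gap / closing_speed < margin_s

    if __name__ == "__main__":
        print(plan_deceleration(speed=20.0, stop_distance=40.0))   # 5.0, not the full 8.0
        print(warn_tailgater(gap=8.0, closing_speed=6.0))          # True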
But if you are the one driving, and your choice is between following the traffic laws and having the car behind you rear-end you, or hitting someone else and avoiding that collision: if you hit someone, you are at fault; if the car behind you hits you, they are at fault, whatever your "ethics" say.
Posted Nov 2, 2015 1:58 UTC (Mon)
by giraffedata (guest, #1954)
[Link] (7 responses)
You're right. I misremembered the stats. This article in the San Jose Mercury News raised eyebrows when it reported that DMV accident reports show 4 accidents in the most recent 100,000 miles for Google self-driving cars vs .3 per 100,000 for cars in general. But it's a tiny sample, and Google says the total number is 11 accidents in 1.8 million miles; if you charge half of the accidents to the other car (self-driving cars are only ever in two-car accidents), as with the .3 figure, you get the same .3 figure for self-driving cars.
But wait. The .3 figure is just for property-damage-only accidents, which someone thought was the appropriate comparison because self-driving cars had only that kind of accident. When you throw in injury accidents, the all-cars number climbs to .34.
Still, it appears that self-driving cars cause significantly more not-at-fault accidents than other cars (all of those .3/100,000 Google accidents are not-at-fault; only half of the all-car figure is), which suggests programming changes such as not reacting to every crisis by stopping as fast as possible.
But maybe not. There are scenarios where any strategy is going to end up failing. If the goal is to avoid blame instead of to avoid collisions, I can see the self-driving car looking better in the hindsight of an accident if its strategy was to stop than if it was to go (or speed up, or steer), even if stopping is known to cause more collisions. I guess that's one of those ethical considerations.
Posted Nov 2, 2015 2:30 UTC (Mon)
by dlang (guest, #313)
[Link] (6 responses)
If you want to make every case where a programmer has the machinery halt, instead of going off into a forest of rarely used/rarely tested other options, an ethical decision, then you can. But sane people are going to look at it as just normal defensive programming: "when things are going wrong, don't risk making them worse".
In no way is the car, or the programmer, making a morality-based judgement call about what the right thing to do in that particular situation is. Instead the programmer is trying to get back to a known-safe state. If you insist on a morality tag, how about 'first do no harm'?
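As a sketch of that "get back to a known-safe state" stance expressed as ordinary defensive programming (the planner interface and the fallback action here are hypothetical, not anything from a real vehicle):

    # Hypothetical sketch: fall back to the simplest, best-tested behaviour
    # whenever the normal planner cannot produce a plan.

    class Planner:
        def plan(self, sensor_frame):
            raise NotImplementedError   # stand-in for "something went wrong"

    def control_step(planner, sensor_frame):
        try:
            return planner.plan(sensor_frame)
        except Exception:
            # "When things are going wrong, don't risk making them worse":
            # no rarely-tested evasive maneuver, just the known-safe action.
            return {"action": "brake_straight", "hazard_lights": True}

    if __name__ == "__main__":
        print(control_step(Planner(), sensor_frame={}))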
Posted Nov 2, 2015 3:59 UTC (Mon)
by giraffedata (guest, #1954)
[Link] (5 responses)
I don't think we've even touched on reliability or rarely used and rarely tested options. At least I haven't. I've just been talking about making the car do, in every situation, whatever produces the best outcome. Or actually, since the outcome will not always be controllable, the best probabilistic expected value of the outcome.
When things are going wrong, don't make them worse. That's all it is, but how do you decide what is worse? Does braking make them worse?
There are going to be plenty of cases where only ethics can tell you which of the available outcomes is best (and different people's ethics will give different answers). I.e. there will be hard choices. Human drivers have to make these choices; obviously, programmers of driverless cars will too.
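A toy illustration of that point, with invented probabilities and severity weights: the comparison itself is mechanical, but choosing the weights is where the ethics lives.

    # Toy illustration: the arithmetic is mechanical, the weights are the ethics.

    SEVERITY = {"rear_ended_by_tailgater": 3.0, "hit_object_ahead": 9.0}

    def expected_badness(outcome_probs):
        return sum(p * SEVERITY[outcome] for outcome, p in outcome_probs.items())

    strategies = {
        "hard_stop":    {"rear_ended_by_tailgater": 0.30, "hit_object_ahead": 0.02},
        "gentle_brake": {"rear_ended_by_tailgater": 0.05, "hit_object_ahead": 0.10},
    }

    if __name__ == "__main__":
        for name, probs in strategies.items():
            print(name, expected_badness(probs))
        # With these numbers the gentle brake wins; raise the severity of
        # hitting the object ahead above about 9.4 and the hard stop wins instead.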
Posted Nov 2, 2015 8:30 UTC (Mon)
by dlang (guest, #313)
[Link] (4 responses)
Accidents of any kind are already rare, and any one type of accident (where a "type" means something that could be avoided by one specific avoidance technique) is going to be even more rare.
Any logic that has to decide between different avoidance techniques for what would otherwise be a guaranteed accident is only going to be exercised once every few hundred thousand hours of operation, and the conditions (and therefore the choices available) are going to vary drastically.
If this doesn't end up qualifying as rarely executed and therefore rarely tested code, I don't know what would.
Posted Nov 2, 2015 13:18 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (2 responses)
I'd think that anyone clever enough to build a self-driving car should be clever enough to build a simulator where accidents happen a lot more often than in real life, in order to exercise and debug the software that deals with emergencies. It's not as if you needed to actually crash cars to make that work. Also, reproducible conditions.
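A toy version of such a harness, just to make the idea concrete; the scenario generator and the planner under test are of course invented, and a real simulator would model physics, sensors, and timing rather than a lookup table:

    # Toy scenario-driven test harness: emergencies are generated by the
    # thousand, and a fixed seed makes every run reproducible.

    import random
    import unittest

    def emergency_planner(scenario):
        # Stand-in for the code under test: take a clear lane if one exists,
        # otherwise brake in a straight line.
        if scenario["clear_lanes"]:
            return {"action": "change_lane", "lane": scenario["clear_lanes"][0]}
        return {"action": "brake_straight"}

    def random_emergency(rng):
        lanes = [lane for lane in ("left", "right") if rng.random() < 0.3]
        return {"obstacle_ahead": True, "clear_lanes": lanes}

    class EmergencyTests(unittest.TestCase):
        def test_always_produces_a_sane_plan(self):
            rng = random.Random(42)        # fixed seed: reproducible conditions
            for _ in range(10_000):        # far more "accidents" than real driving
                plan = emergency_planner(random_emergency(rng))
                self.assertIn(plan["action"], ("brake_straight", "change_lane"))

    if __name__ == "__main__":
        unittest.main()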
Posted Nov 2, 2015 18:38 UTC (Mon)
by dlang (guest, #313)
[Link] (1 responses)
A simulator only exercises the situations somebody thought to model, and the model is not the real world. Otherwise system code could just be checked by code validation tools and there would be no problems dealing with real-world hardware.
Posted Nov 2, 2015 18:49 UTC (Mon)
by anselm (subscriber, #2796)
[Link]
We're interested in whether the car software does reasonable things in accident-like situations, and does them correctly as designed, precisely in order to avoid the problem where the accident-handling code is only very rarely exercised because (fortunately) so few accidents happen in real life. This is essentially automated testing, and it's not exactly a novel idea.
Dealing with random peripheral hardware failures is a different ball game altogether, and worth looking at separately once we're satisfied that the car software does the Right Thing in principle.
Posted Nov 3, 2015 2:59 UTC (Tue)
by giraffedata (guest, #1954)
[Link]
If this doesn't end up qualifying as rarely executed and therefore rarely tested code, I don't know what would.
And this is a pure straw man argument, because nobody ever said this code is not rarely executed and tested.
I just don't see what this has to do with what we were talking about - whether self driving car programs will have to make ethical choices. The post before was the first mention in this whole long thread of frequency of execution. Before that, we were talking about the choice to brake in a straight line because the car can't be sure what is and is not a pedestrian and because inaction generates less liability than action. And whether that choice is informed by ethics.
Posted Nov 8, 2015 20:28 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
One thing you've completely ignored is that self-driving cars can communicate. Once more cars have collision-avoidance braking or whatever (and can broadcast the fact), cars will be able to adapt. If a Google car knows the car behind has anti-collision capabilities, it will be "happy" to brake harder, knowing that the car behind will avoid it.
The same thing with a lane change to avoid an oncoming car - if the car in the next lane has self-driving or anti-collision ability, the Google car will be "happy" to get in its way, knowing it will also take evasive action.
And no, I do not expect cars to talk and negotiate amongst themselves. What I expect is that it will be more like IFF - cars will be aware of other cars, and will know which cars are capable of what because they will broadcast their capabilities. So a Google car will know if the car behind or in front is self-driving, and it will also know what those cars "want" to do. It will also know which cars are NOT self-driving; it won't know what those cars want, so it will give them a much wider berth.
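A sketch of what such a one-way capability beacon could look like; the message fields and the gap numbers are entirely hypothetical, and real vehicle-to-vehicle messaging uses standardised formats rather than anything like this:

    # Hypothetical capability beacon, IFF-style: broadcast what you can do,
    # and give silent or less capable cars a wider berth.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class CapabilityBeacon:
        vehicle_id: str
        self_driving: bool
        collision_avoidance_braking: bool
        intended_action: str              # e.g. "hold_lane", "hard_brake"

    def encode(beacon):
        return json.dumps(asdict(beacon)).encode()

    def following_gap_m(beacon_bytes, default_gap=40.0):
        try:
            other = json.loads(beacon_bytes)
        except (ValueError, TypeError):
            return default_gap            # silent car: assume nothing about it
        if other.get("collision_avoidance_braking"):
            return 15.0                   # it can cope if we brake hard
        return default_gap

    if __name__ == "__main__":
        beacon = CapabilityBeacon("AB123", True, True, "hold_lane")
        print(following_gap_m(encode(beacon)))   # 15.0
        print(following_gap_m(b"not json"))      # 40.0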
As others have said, the transition to totally self-driving cars will take a while, and this ability will allow fully manual, fully self driving, and more-or-less-intelligent cruise control cars to co-exist easily on the road.
And this mix of abilities will hopefully make sure that self-driving cars don't have to make those sorts of decisions. At the end of the day things go wrong: there are mechanical failures, suicidal maniacs and so on, and there's not a lot you can do about those. But if self-driving cars eliminate driver error as a cause of accidents for many/most vehicles, you'll eliminate most accidents.
Cheers,
Wol