
Security quotes of the week

Posted Oct 31, 2015 3:54 UTC (Sat) by rahvin (guest, #16953)
In reply to: Security quotes of the week by dlang
Parent article: Security quotes of the week

I'm not sure how we even get to this discussion. The car is never going to be in a position to kill the driver. About the only way a car could kill an occupant would be to hit a fixed object, roll the vehicle, leave the roadway, or cause a t-bone crash (these are the prime killers in traffic accidents). As others have said, the car will be programmed to simply try to avoid the obstacle while braking, or to stop before hitting it. This could be either a lane change while braking or just braking within the lane, and the computer would be able to instantly calculate whether braking in the lane is sufficient and verify whether a lane change is possible before a human driver would even have perceived the threat. Perception-reaction time (the time for the driver to perceive the threat and start the muscle movements to react to it) varies per driver; the average used in roadway design is 2.5 seconds, and even in the most alert young person it is probably never less than half a second, simply because of the slowness of nerve transmission. A computer could perceive the threat and begin reacting to it in milliseconds.
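
As a rough back-of-the-envelope sketch of the stopping-distance comparison described above (the speed and deceleration figures are assumptions for illustration, not from the comment):

    def stopping_distance_m(speed_mps, reaction_time_s, decel_mps2=7.0):
        # Total stopping distance = distance covered during the perception-
        # reaction delay + braking distance at constant deceleration.
        # 7 m/s^2 is an assumed dry-road full-braking figure.
        reaction_distance = speed_mps * reaction_time_s
        braking_distance = speed_mps ** 2 / (2.0 * decel_mps2)
        return reaction_distance + braking_distance

    speed = 30.0  # m/s, roughly 108 km/h (assumed)
    print(stopping_distance_m(speed, 2.5))    # human at the 2.5 s design value: ~139 m
    print(stopping_distance_m(speed, 0.05))   # ~50 ms automated reaction: ~66 m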

There is no way braking could injure an occupant, as the car can't decelerate fast enough to be a threat to the passengers. And they aren't ever going to program the car to hit a fixed object or leave the roadway, nor are they going to program the car to make a maneuver that could result in a rollover. There would be no reason at all for these types of maneuvers. About the only one that's even a possibility is a maneuver that would set the car up for a side impact, but that would be a freak accident and not anything that could be programmed for or against.

I recognize this is mostly a thought exercise on computer ethics, but I just don't see how a car's computer could ever cause the death of an occupant through an action taken to avoid killing someone outside the vehicle. Basic physics rules out braking as a threat, and as for dangerous maneuvers, there would never be a reason to program a vehicle to make one.



Self-driving car ethics

Posted Nov 1, 2015 4:05 UTC (Sun) by giraffedata (guest, #1954) [Link] (17 responses)

The car is never going to be a position to kill the driver. ... As others have said, the car will be programmed to simply try to avoid the obstacle while braking or stop before hitting it.

And that's not deciding whether to kill the driver (passenger)? Try this: an oncoming car has veered into your lane. To your left are more oncoming cars. To your right, a pedestrian. At full brake steering straight, you know you'll collide head-on with the out-of-control oncoming car and most likely your passenger will die. Same with steering to the left at full brake. Steering to the right at full brake, you'll most likely kill the pedestrian, but your passenger will most likely survive. You have a choice to make.

I'm not sure why you mention human reaction time. All I can take from your analysis of it is that a self-driving car has to make ethical choices that a human wouldn't.

Self-driving car ethics

Posted Nov 1, 2015 4:26 UTC (Sun) by dlang (guest, #313) [Link] (12 responses)

special cases are the enemy of reliability.

I don't believe that the car is going to be able to reliably tell the difference between a pedestrian and any other obstacle, let alone try to make 'ethical' judgements to deliberately harm anyone or anything.

Especially since inaction (even if that inaction doesn't avoid harm) inherently carries less liability than any action that causes harm.

I'd bet a large amount of money that cars will be programmed to brake in a straight line if a collision can't be avoided, rather than to brake less and steer around any obstacle (if a collision can be avoided only by extreme manoeuvres, I'm still not sure that the cars, especially early on, will be programmed to try).
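
A minimal sketch of the conservative policy being predicted here, purely illustrative (the function name and the 7 m/s^2 braking figure are assumptions, not anyone's actual implementation):

    def plan_response(distance_to_obstacle_m, speed_mps, max_decel_mps2=7.0):
        # Check whether full braking alone stops the car before the obstacle.
        braking_distance = speed_mps ** 2 / (2.0 * max_decel_mps2)
        if braking_distance <= distance_to_obstacle_m:
            return "full_brake_straight"            # collision avoided by braking
        # Collision not avoidable by braking alone; the predicted default is
        # still to brake hard in-lane and shed speed, not to attempt an
        # extreme evasive manoeuvre.
        return "full_brake_straight_unavoidable"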

Self-driving car ethics

Posted Nov 1, 2015 11:49 UTC (Sun) by flussence (guest, #85566) [Link]

Ethical judgements like that are hard enough for a human, but legal cost-minimization ones are easy. Making a best effort to drive safely and exercising the crumple zones works out far cheaper for the manufacturer than taking a high-speed detour into a bystander's knees, even if the second case avoids causing death.

Self-driving car ethics

Posted Nov 1, 2015 15:39 UTC (Sun) by giraffedata (guest, #1954) [Link] (10 responses)

I'd bet a large amount of money that cars will be programmed to brake in a straight line if a collision can't be avoided rather than to brake less and steer around any obstacle

I agree with this conclusion, but this programming is an ethical choice (and I don't care whether we characterize it as an ethical choice of the program or the programmer - it's the same thing to me). It's essentially one resolution of the trolley dilemma - better to let 5 people die than pull a switch and make 1 other person die, thereby becoming a murderer. Except with some uncertainty thrown in.

The fact that the car doesn't have perfect information doesn't make the ethical decision easy. Some would say the ethical thing is to play the probabilities, identifying what is more or less likely to be a pedestrian, which cars are most likely to contain children, and all that.
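
One illustrative shape "playing the probabilities" could take is weighting each candidate maneuver by an estimated probability and severity of harm. All maneuvers, probabilities, and weights below are invented for the example:

    candidate_maneuvers = {
        # maneuver: (probability the struck object is a person, assumed harm severity)
        "brake_straight":         (0.10, 1.0),
        "brake_and_swerve_right": (0.80, 1.0),   # likely a pedestrian
        "brake_and_swerve_left":  (0.05, 0.6),   # e.g. a parked, probably empty car
    }

    def expected_harm(p_person, severity):
        return p_person * severity

    best = min(candidate_maneuvers,
               key=lambda m: expected_harm(*candidate_maneuvers[m]))
    print(best)   # picks the maneuver with the lowest expected harm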

By the way, I believe in the real world of today, Google is dealing with a version of this question. Recent data shows that collisions are more likely when a Google self-driving car is involved. It appears to be because the car's normal safe-harbor reaction to a bad situation is to stop, where a human would not (either out of carelessness or reasoned strategy), causing a tailgater to hit the Google car. There are probably people saying Google's driving strategy is correct even though it causes more collisions, because the moral fault for those collisions lies with the tailgater. But I'll bet engineers are considering smarter strategies where not braking avoids a collision with the tailgater but increases the risk of some other collision. They know people don't want Google cars on the roads if having them will produce more collisions.

Self-driving car ethics

Posted Nov 1, 2015 23:04 UTC (Sun) by dlang (guest, #313) [Link] (8 responses)

where is the data that shows that Google cars get into MORE collisions (and more by what criteria) than human-driven cars?

Everything I've seen says they are involved in far fewer collisions per hour driven, per mile driven, etc. Also keep in mind that they are being tested in some of the worst road/traffic conditions available (Bay Area traffic).

If the problem is that they are being rear-ended because they stopped too soon, I expect that they will be changed to use all available space to stop rather than stopping as quickly as possible, and to flash their brake lights if the car behind them doesn't appear to be slowing fast enough.
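
A small sketch of the "use all available space" idea, with assumed numbers and names: brake only as hard as the gap ahead requires, and warn a tailgater that does not seem to be slowing:

    def required_decel_mps2(speed_mps, available_distance_m):
        # From v^2 = 2*a*d: the constant deceleration that uses the whole gap.
        return speed_mps ** 2 / (2.0 * available_distance_m)

    def plan_stop(speed_mps, gap_ahead_m, tailgater_closing_mps, max_decel_mps2=7.0):
        # Gentlest braking that still stops within the gap, capped at what the
        # tyres can deliver (7 m/s^2 is an assumption).
        decel = min(required_decel_mps2(speed_mps, gap_ahead_m), max_decel_mps2)
        flash_brake_lights = tailgater_closing_mps > 0.0   # tailgater still closing
        return decel, flash_brake_lights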

But if you are the one driving, and your choice is between following the traffic laws and having the car behind you rear-end you, or hitting someone else to avoid that collision: if you hit someone, you are at fault; if the car behind you hits you, they are at fault, whatever your "ethics" say.

Self-driving car ethics

Posted Nov 2, 2015 1:58 UTC (Mon) by giraffedata (guest, #1954) [Link] (7 responses)

You're right. I misremembered the stats. This article in the San Jose Mercury News raised eyebrows when it reported that DMV accident reports show 4 accidents in the most recent 100,000 miles for Google self-driving cars vs. .3 per 100,000 for cars in general. But it's a tiny sample, and Google says the total number is 11 accidents in 1.8 million miles; if you charge half of the accidents to the other car (self-driving cars are only ever in two-car accidents), as with the .3 figure, you get the same .3 figure for self-driving cars.

But wait. The .3 figure is just for property-damage-only accidents, which someone thought was the appropriate comparison because self-driving cars had only that kind of accident. When you throw in injury accidents, the all-cars number climbs to .34.
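
Working the quoted figures through (all numbers are taken from the two paragraphs above; units are accidents per 100,000 miles):

    google_total = 11 / (1.8e6 / 1e5)         # ~0.61: all 11 accidents over 1.8M miles
    google_charged_half = google_total / 2    # ~0.31: charging half to the other car
    recent_sample = 4 / 1.0                   # 4 accidents in the latest 100,000 miles
    all_cars_pdo = 0.3                        # property-damage-only baseline
    all_cars_with_injury = 0.34               # baseline including injury accidents
    print(round(google_total, 2), round(google_charged_half, 2))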

Still, it appears that self-driving cars cause significantly more not-at-fault accidents than other cars (all of those .3/100,000 Google accidents are not-at-fault; only half of the all-car figure is), which suggests programming changes such as not reacting to every crisis by stopping as fast as possible.

But maybe not. There are scenarios where any strategy is going to end up failing. If the goal is to avoid blame instead of to avoid collisions, I can see the self-driving car looking better in hindsight after an accident if its strategy was to stop than if it was to go (or speed up or steer), even if stopping is known to cause more collisions. I guess that's one of those ethical considerations.

Self-driving car ethics

Posted Nov 2, 2015 2:30 UTC (Mon) by dlang (guest, #313) [Link] (6 responses)

you keep trying to paint everything as an ethical decision, when it's really just the programmers trying to make things work reliably.

If you want to make every case where a programmer has the machinery halt, instead of going off into a forest of rarely used/rarely tested other options, an ethical decision, then you can. But sane people are going to look at it as just normal defensive programming: "when things are going wrong, don't risk making them worse".

In no way is the car, or the programmer, making a morality-based judgement call about what the right thing to do in that particular situation is. Instead the programmer is trying to get back to a known-safe state. If you insist on a morality tag, how about "first do no harm"?

Self-driving car ethics

Posted Nov 2, 2015 3:59 UTC (Mon) by giraffedata (guest, #1954) [Link] (5 responses)

I don't think we've even touched on reliability or rarely used and rarely tested options. At least I haven't. I've just been talking about making the car do, in every situation, whatever produces the best outcome. Or actually, since the outcome will not always be controllable, the probabilistic expected value of the outcome.

When things are going wrong, don't make them worse. That's all it is, but how do you decide what is worse? Does braking make them worse?

There are going to be plenty of cases where only ethics can tell you which of the available outcomes is best (and different people's ethics will give different answers). I.e. there will be hard choices. Human drivers have to make these choices; obviously, programmers of driverless cars will too.

Self-driving car ethics

Posted Nov 2, 2015 8:30 UTC (Mon) by dlang (guest, #313) [Link] (4 responses)

accidents are rare (0.34 per 100,000 miles according to a prior poster)

any one type of accident (a type being something that could be avoided by one specific avoidance technique) is going to be even rarer.

any logic that has to decide between different avoidance techniques for what would otherwise be a guaranteed accident is only going to be exercised once every few hundred thousand hours of operation, and the conditions (and therefore the choices available) are going to vary drastically.

If this doesn't end up qualifying as rarely executed and therefore rarely tested code, I don't know what would.

Self-driving car ethics

Posted Nov 2, 2015 13:18 UTC (Mon) by anselm (subscriber, #2796) [Link] (2 responses)

I'd think that anyone clever enough to build a self-driving car should be clever enough to build a simulator where accidents happen a lot more often than in real life, in order to exercise and debug the software that deals with emergencies. It's not as if you needed to actually crash cars to make that work. Also, reproducible conditions.
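
A toy sketch of the kind of scenario-based harness being suggested: replay many synthetic emergencies against the planning code, far more often than they occur on real roads, under reproducible conditions. The planner interface and scenario fields here are hypothetical stand-ins, not any real self-driving stack:

    import random

    def make_emergency_scenario(rng):
        return {
            "speed_mps": rng.uniform(5, 35),
            "obstacle_distance_m": rng.uniform(2, 80),
            "obstacle_kind": rng.choice(["car", "pedestrian", "debris"]),
        }

    def test_planner(planner, trials=100_000, seed=0):
        rng = random.Random(seed)              # fixed seed: reproducible conditions
        for _ in range(trials):
            scenario = make_emergency_scenario(rng)
            action = planner(scenario)
            # The harness only checks that the planner returns a sane action;
            # richer checks would simulate the outcome of that action.
            assert action in {"full_brake_straight", "brake_and_steer"}, action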

Self-driving car ethics

Posted Nov 2, 2015 18:38 UTC (Mon) by dlang (guest, #313) [Link] (1 responses)

simulators and reproducible conditions are a poor relation to real-world conditions.

Otherwise system code could just be checked by code validation tools and there would be no problems dealing with real-world hardware.

Self-driving car ethics

Posted Nov 2, 2015 18:49 UTC (Mon) by anselm (subscriber, #2796) [Link]

We're interested in whether the car software does reasonable things in accident-like situations, and does these correctly as designed, in order to avoid the problem where the accident-handling code is only very rarely exercised at all because (fortunately) so few accidents happen in real life. This is essentially automated testing, and it's not exactly a novel idea.

Dealing with random peripheral hardware failures is a different ball game altogether, and worth looking at separately once we're satisfied that the car software does the Right Thing in principle.

Self-driving car ethics

Posted Nov 3, 2015 2:59 UTC (Tue) by giraffedata (guest, #1954) [Link]

If this doesn't end up qualifying as rarely executed and therefore rarely tested code, I don't know what would.

And this is a pure straw-man argument, because nobody ever said this code is not rarely executed and tested.

I just don't see what this has to do with what we were talking about - whether self driving car programs will have to make ethical choices. The post before was the first mention in this whole long thread of frequency of execution. Before that, we were talking about the choice to brake in a straight line because the car can't be sure what is and is not a pedestrian and because inaction generates less liability than action. And whether that choice is informed by ethics.

Self-driving car ethics

Posted Nov 8, 2015 20:28 UTC (Sun) by Wol (subscriber, #4433) [Link]

> By the way, I believe in the real world of today, Google is dealing with a version of this question. Recent data shows that collisions are more likely when a Google self-driving car is involved. It appears to be because the car's normal safe-harbor reaction to a bad situation is to stop, where a human would not (either out of carelessness or reasoned strategy), causing a tailgater to hit the Google car. There are probably people saying Google's driving strategy is correct even though it causes more collisions, because the moral fault for those collisions lies with the tailgater. But I'll bet engineers are considering smarter strategies where not braking avoids a collision with the tailgater but increases the risk of some other collision. They know people don't want Google cars on the roads if having them will produce more collisions.

One thing you've completely ignored is that self-driving cars can communicate. Once more cars have collision-avoidance braking or whatever (and can broadcast that fact), cars will be able to adapt. If a Google car knows the car behind it has anti-collision capabilities, it will be "happy" to brake harder, knowing that the car behind will avoid it.

The same goes for a lane change to avoid an oncoming car: if the car in the next lane has self-driving or anti-collision ability, the Google car will be "happy" to get in its way, knowing that car will also take evasive action.

And no, I do not expect cars to talk and negotiate amongst themselves. What I expect is that it will be more like IFF - cars will be aware of other cars, and will know which cars are capable of what because they will broadcast their capabilities. So a Google car will know if the car behind or in front is self-driving, and it will also know what those cars "want" to do. It will also know which cars are NOT self-driving; it won't know what those cars want, and will give them a much wider berth.
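
An entirely hypothetical sketch of the IFF-style broadcast described above: each car announces its capabilities, and the planner gives a much wider berth to anything that doesn't broadcast (presumably a manually driven car). The message fields and gap values are invented for illustration:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CapabilityBeacon:
        vehicle_id: str
        self_driving: bool
        collision_avoidance_braking: bool

    def following_gap_s(beacon: Optional[CapabilityBeacon]) -> float:
        # Time gap to keep from a neighbouring car, based on what it broadcasts.
        if beacon is None:                       # no beacon: assume fully manual
            return 3.0                           # much wider berth
        if beacon.self_driving or beacon.collision_avoidance_braking:
            return 1.0                           # it will react quickly on its own
        return 2.0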

As others have said, the transition to totally self-driving cars will take a while, and this ability will allow fully manual, fully self driving, and more-or-less-intelligent cruise control cars to co-exist easily on the road.

And this mix of abilities will hopefully ensure that self-driving cars don't have to make those sorts of decisions. At the end of the day things go wrong, we have mechanical failures and so on, and there's not a lot you can do about unexpected mechanical failure or suicidal maniacs. But if self-driving cars eliminate driver error as a cause of accidents for many/most vehicles, you'll eliminate most accidents.

Cheers,
Wol

Self-driving car ethics

Posted Nov 2, 2015 16:43 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> Same with steering to the left at full brake

Depending on the vehicle, this can become "roll into the oncoming car".

Self-driving car ethics

Posted Nov 2, 2015 17:38 UTC (Mon) by raven667 (subscriber, #5198) [Link] (2 responses)

>You have a choice to make.

Not really. I don't think any jury would fault the car designers if the auto-car was unable to avoid an out-of-control car due to being surrounded by obstacles and the laws of physics; the fault would lie with whoever controls the out-of-control car in your scenario. I just don't see a machine as an actor to which ethics applies. Ethics is a concept that only applies to how people interact with other people, so the car doesn't make ethical choices; the designers do when making the rules, and the only choice they can make is to not interact with obstacles to whatever extent the laws of physics allow.

Self-driving car ethics

Posted Nov 3, 2015 3:36 UTC (Tue) by giraffedata (guest, #1954) [Link] (1 responses)

We really weren't talking about liability or fault, especially legal liability (except where we touched on the fact that acting in self-interest by limiting one's legal liability instead of in the interest of others is an ethical choice).

And I made it clear that I for one am considering the ethics of the machine just a convenient way of talking about the ethics of the designer, so we don't need to worry about whether a car is alive or whatever.

the only choice they can make is to not interact with obstacles to whatever extent the laws of physics allow.

There will be plenty of situations where the laws of physics will not allow avoiding interaction with obstacles, but will permit various forms of interacting (hit this car or that one, or a concrete wall, or the thing which is 80% likely to be a pedestrian). The programmer will choose one, and might have to use his sense of ethics to do it. Maybe his ethics allow him to just do what's easiest and cheapest -- brake in a straight line, maybe -- even if it increases the chance someone will die.

Self-driving car ethics

Posted Nov 3, 2015 4:41 UTC (Tue) by raven667 (subscriber, #5198) [Link]

>There will be plenty of situations where the laws of physics will not allow avoiding interaction with obstacles, but will permit various forms of interacting (hit this car or that one, or a concrete wall, or the thing which is 80% likely to be a pedestrian). The programmer will choose one, and might have to use his sense of ethics to do it. Maybe his ethics allow him to just do what's easiest and cheapest -- brake in a straight line, maybe -- even if it increases the chance someone will die.

I just don't foresee this situation coming up very often. Every design effort goes into never letting Newton take over the wheel, as far as humans can control that, so almost by definition any accident that does happen will be unavoidable: if the machine can't brake to avoid hitting something, then it unavoidably will hit that thing. If the rules are kept simple then ethical reasoning never comes into it.

For example, if a child jumps out from behind an occlusion with not enough room to stop and oncoming traffic in the opposite lane, then the kid is going to get run over, and I don't think there is an ethical problem with that; it's just an accident.

