
Somewhat OT: Bias and ethical issues in machine-learning models

Posted Sep 3, 2019 0:38 UTC (Tue) by dskoll (subscriber, #1630)
Parent article: Bias and ethical issues in machine-learning models

"Thou shalt not kill" is a bad translation. The original text is more like "Do not murder" which is a little less ambiguous.



Somewhat OT: Bias and ethical issues in machine-learning models

Posted Sep 3, 2019 7:54 UTC (Tue) by vadim (subscriber, #35271)

I would say it's a lot more ambiguous, actually.

"Kill" is a pretty clear term.

"Murder" on the other hand just stands for "whatever kinds of killing that we happen to disagree with", which can mean absolutely anything. It's almost as bad of a rule as "do the right thing".

Somewhat OT: Bias and ethical issues in machine-learning models

Posted Sep 8, 2019 19:23 UTC (Sun) by marcH (subscriber, #57642)

Yes, "Murder" is better because it doesn't try to hide the actual ambiguity of the real-world rule like "Kill" tries to.

