LWN: Comments on "Bias and ethical issues in machine-learning models" https://lwn.net/Articles/797880/ This is a special feed containing comments posted to the individual LWN article titled "Bias and ethical issues in machine-learning models". en-us Mon, 10 Nov 2025 14:34:33 +0000 Mon, 10 Nov 2025 14:34:33 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Bias and ethical issues in machine-learning models https://lwn.net/Articles/798890/ https://lwn.net/Articles/798890/ nix <div class="FormattedComment"> <font class="QuotedText">&gt; I was reading offline and couldn’t readily guess what twin cities were meant</font><br> <p> Obviously Ul Qoma and Besźel, right? (Right?)<br> </div> Tue, 10 Sep 2019 15:11:46 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798857/ https://lwn.net/Articles/798857/ robbe <div class="FormattedComment"> I found that the article was also pretty USA-centric (or -biased, if you like). More so than LWN typically is.<br> <p> For example, I was reading offline and couldn’t readily guess what twin cities were meant – and I am not well versed in how the mentioned universities typically earn money.<br> </div> Tue, 10 Sep 2019 12:38:38 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798715/ https://lwn.net/Articles/798715/ ballombe <div class="FormattedComment"> <font class="QuotedText">&gt; Biases in ML are the subject of this amusing and interesting article : </font><br> <a href="https://thegradient.pub/nlps-clever-hans-moment-has-arrived/">https://thegradient.pub/nlps-clever-hans-moment-has-arrived/</a><br> <p> What makes the article strange is that anybody that teach human students has encountered this issue (student guessing the correct answer to a test from lexical cue), but somehow it is new to machine learning researchers ?<br> <p> On the other hand, this can gives a pseudo-objective measure of how much student test answers can be guessed from lexical cue!<br> </div> Mon, 09 Sep 2019 18:56:04 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798731/ https://lwn.net/Articles/798731/ marcH <div class="FormattedComment"> tl;dr: garbage in, garbage out.<br> <p> So while AI is finally producing something useful after a few decades, there are apparently still a few jobs that need humans. These humans will likely be _assisted_ by AI more and more often.<br> <p> As this example shows, working with more and more machines will require being comfortable with data, science and numbers more than ever before. Too bad climate change and energy are for instance in the US questions of party affiliation much more than question of... degrees Farenheits and BTUs. Let's put all the blame on non-decimal units for the failed education ;-)<br> <p> </div> Sun, 08 Sep 2019 23:26:58 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798726/ https://lwn.net/Articles/798726/ marcH <div class="FormattedComment"> In such a "big typo" case I would not just strike the spurious sentence through but remove it completely. Strike through is useful to correct readers who previously internalized something wrong but logical enough to be believed. 
But here it is just noise; I wasted a fair amount of time trying to understand what correction had been made.<br> </div> Sun, 08 Sep 2019 19:29:20 +0000 Somewhat OT: Bias and ethical issues in machine-learning models https://lwn.net/Articles/798725/ https://lwn.net/Articles/798725/ marcH <div class="FormattedComment"> Yes, "Murder" is better because it doesn't try to hide the actual ambiguity of the real-world rule like "Kill" tries to.<br> </div> Sun, 08 Sep 2019 19:23:38 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798506/ https://lwn.net/Articles/798506/ kmweber <div class="FormattedComment"> Mar Hicks's article "Hacking the Cis-tem" in IEEE Annals of the History of Computing a few months ago is an excellent account of algorithmic bias as a social/historical issue, if anyone's interested. Historians and other STS scholars are increasingly turning attention to these issues.<br> </div> Thu, 05 Sep 2019 16:48:21 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798503/ https://lwn.net/Articles/798503/ madhatter <div class="FormattedComment"> I'm glad somebody made that point. When I read in the original article<br> <p> <font class="QuotedText">&gt; Although there's a difference between personal medical information and routine retail sales like potato chips, Target was probably within reasonable ethical bounds in sending the pregnancy information.</font><br> <p> my hackles went right up. That might be within reasonable ethical bounds in the US, but the EU is definitely moving away from the idea that retailers are entitled to make any use of data arising from customer transactions beyond what is needed to process the actual transaction.<br> </div> Thu, 05 Sep 2019 16:24:18 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798416/ https://lwn.net/Articles/798416/ dvdeug <div class="FormattedComment"> One wonders how you'll feel when the AI selects against people with multisyllable names (because all their programmers were named Wang, Li or Chen) or people with words like "Republican" on their resume.<br> <p> In any case, the law cares. If you're writing code that evaluates people, you may be raising legal issues for your company if you don't try to control for biases in the legally protected categories.<br> </div> Thu, 05 Sep 2019 05:40:32 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798307/ https://lwn.net/Articles/798307/ nix <div class="FormattedComment"> In this case it might well be genuine bias: the proportion of women in the output was not equivalent to the proportion in the general population, i.e. actual mathematical bias. (One wonders if you'd call it 'political bullshit' if it came out with a large bias in the opposite direction...)<br> </div> Wed, 04 Sep 2019 13:57:23 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798285/ https://lwn.net/Articles/798285/ edeloget <div class="FormattedComment"> Does that mean that the book is biased?<br> <p> (Ok, I'm leaving). <br> </div> Wed, 04 Sep 2019 09:17:46 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798273/ https://lwn.net/Articles/798273/ nilsmeyer <div class="FormattedComment"> And a lot of the time these algorithms are used in advertising, which isn't a particularly ethical pursuit either. 
<br> </div> Wed, 04 Sep 2019 06:29:18 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798239/ https://lwn.net/Articles/798239/ q3cpma <div class="FormattedComment"> <font class="QuotedText">&gt;After looking at results and realizing that the model was biased against women, the department created another model after changing obvious gender markers such as pronouns and names. But the model was still biased</font><br> <font class="QuotedText">&gt;But the model was still biased</font><br> "But the model wasn't aligning with our/my bias".<br> <p> Why do we need this political bullshit disguised as truism (i.e. ML doesn't erase biases), by the way?<br> </div> Tue, 03 Sep 2019 16:17:34 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798234/ https://lwn.net/Articles/798234/ nilsmeyer <div class="FormattedComment"> <font class="QuotedText">&gt; But it’s not like we can avoid that anyway - remember, the vast majority of these examples are not trained on objective data, like say exam results (which can be biased but obviously a lot less so than things which are just human choices)</font><br> <p> If you look at Exams and IQ tests, there are some clear distributions that don't really inspire comfort. Many people who argue for (unconscious) biases existing would probably have huge problems with the uneven distributions here. <br> <p> <font class="QuotedText">&gt; You might not think that, for example, women’s low rate of participation in engineering jobs is a bias. I don’t agree, but it’s a position one could hold.</font><br> <p> You would need to be able to prove a couple of things:<br> 1. rates of participation aren't (roughly) equal due to bias.<br> 2. (optional) that bias creates harmful outcomes<br> 3. this can be fixed by properly tweaked machine learning algorhitms <br> <p> I think that's a tall order, especially if you not only look at professions like engineering, there are some with even greater disparity, there are very few female plumbers or bricklayers, men are under-represented in healthcare. This all has to fit together somehow. Are there any jobs that have 50:50 parity between sexes? <br> </div> Tue, 03 Sep 2019 15:41:33 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798190/ https://lwn.net/Articles/798190/ jake <div class="FormattedComment"> <font class="QuotedText">&gt; Did I miss something ?</font><br> <p> No, we did. In a late-breaking edit, we eliminated one of the examples, but missed that it was referred to further on. Sigh ...<br> <p> jake<br> </div> Tue, 03 Sep 2019 13:45:40 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798188/ https://lwn.net/Articles/798188/ Paf <div class="FormattedComment"> Well, sure? But it’s not like we can avoid that anyway - remember, the vast majority of these examples are not trained on objective data, like say exam results (which can be biased but obviously a lot less so than things which are just human choices) - they’re trained on stuff that was generated through human choices.<br> <p> So, yes - we have to decide what we want these things to do, and that means thinking about what biases might be present in them. You might not think that, for example, women’s low rate of participation in engineering jobs is a bias. 
I don’t agree, but it’s a position one could hold.<br> <p> But consider a more extreme example: let’s say a criminal sentencing system that was trained only on sentencing information from the American South from 1900 to 1950, and is intended to tell a judge what a “reasonable” sentence is based on this data. If the input includes racial descriptors, I think *most* people would agree it’s not going to generate reasonable sentences for black defendants compared to white defendants.<br> <p> So what would you do with that? I mean, you’d presumably, idk, try to pick a different input set or blind it to racial information, etc, etc.<br> <p> And so you’re altering the model to fit your view of the world - your bias, one might say.<br> <p> From there on in, it’s just an argument about what the right view of the world is and what data sets and models are good enough/objective enough/whatever to generate trained systems that give “good” results.<br> <p> So yeah, you can obviously wrap yourself around the axle looking for your desired results and just bend the model/inputs until it gives them... but it also seems clear that the choice of input data etc. is hugely important. And now we’re in a *massive* grey area of moral choice.<br> <p> We’re not modeling physics here. We’re asking about human choices, so bias is everywhere.<br> </div> Tue, 03 Sep 2019 13:29:14 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798187/ https://lwn.net/Articles/798187/ farnz <p>It's slightly more subtle than that - it's "make sure your machine-learning biases match the biases our society claims to want, not those that it actually has". Tue, 03 Sep 2019 13:14:58 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798186/ https://lwn.net/Articles/798186/ clugstj <div class="FormattedComment"> A lot of this article sounds like "Make sure your machine-learning biases match our societal biases".<br> </div> Tue, 03 Sep 2019 12:32:54 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798184/ https://lwn.net/Articles/798184/ mimor <div class="FormattedComment"> For those who are looking for a plethora of examples where machine-learning models went wrong, I can recommend the following read:<br> <a href="https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction">https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction</a><br> The examples are mostly US-based, so if you're looking for an EU or other-continent example, this is not the book for you.<br> </div> Tue, 03 Sep 2019 09:30:08 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798183/ https://lwn.net/Articles/798183/ mimor <div class="FormattedComment"> <font class="QuotedText">&gt; just as a factory has a button any employee can push to stop the assembly line</font><br> <p> FYI: This is called an Andon cord/button: <a href="https://en.wikipedia.org/wiki/Andon_(manufacturing)">https://en.wikipedia.org/wiki/Andon_(manufacturing)</a><br> </div> Tue, 03 Sep 2019 09:27:44 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798182/ https://lwn.net/Articles/798182/ wazoox <div class="FormattedComment"> Biases in ML are the subject of this amusing and interesting article: <a href="https://thegradient.pub/nlps-clever-hans-moment-has-arrived/">https://thegradient.pub/nlps-clever-hans-moment-has-arrived/</a><br> </div> Tue, 03 Sep 2019 09:22:05 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798181/ 
https://lwn.net/Articles/798181/ laarmen <div class="FormattedComment"> I'm confused by the sentence "This stage is where the two companies mentioned earlier parted ways on the question of offering customer rewards." in the section "Modeling targets and business problems". Which companies are we talking about here? AFAICT the only companies mentioned before this are O'Reilly Media and Minne Analytics. Did I miss something?<br> </div> Tue, 03 Sep 2019 09:02:09 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798180/ https://lwn.net/Articles/798180/ nilsmeyer <div class="FormattedComment"> I think part of the problem is that reality doesn't always align with our biases about how it should be. <br> </div> Tue, 03 Sep 2019 08:45:21 +0000 Somewhat OT: Bias and ethical issues in machine-learning models https://lwn.net/Articles/798179/ https://lwn.net/Articles/798179/ vadim <div class="FormattedComment"> I would say it's a lot more ambiguous, actually.<br> <p> "Kill" is a pretty clear term.<br> <p> "Murder", on the other hand, just stands for "whatever kinds of killing we happen to disagree with", which can mean absolutely anything. It's almost as bad a rule as "do the right thing".<br> <p> </div> Tue, 03 Sep 2019 07:54:05 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798171/ https://lwn.net/Articles/798171/ scientes <div class="FormattedComment"> ...goes back to right swiping.<br> </div> Tue, 03 Sep 2019 01:14:39 +0000 Bias and ethical issues in machine-learning models https://lwn.net/Articles/798170/ https://lwn.net/Articles/798170/ scientes <div class="FormattedComment"> I, personally, find that having us humans be bred by machines is a relief.<br> </div> Tue, 03 Sep 2019 01:14:05 +0000 Somewhat OT: Bias and ethical issues in machine-learning models https://lwn.net/Articles/798169/ https://lwn.net/Articles/798169/ dskoll <div class="FormattedComment"> "Thou shalt not kill" is a bad translation. The original text is more like "Do not murder", which is a little less ambiguous.<br> </div> Tue, 03 Sep 2019 00:38:24 +0000
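
Several comments above turn on one technical point quoted from the article: removing obvious gender markers such as pronouns and names did not make the hiring model unbiased. A minimal sketch of the usual mechanism behind that, using scikit-learn on synthetic data (the scenario, feature names, and numbers here are illustrative assumptions, not taken from the article or any real system):

```python
# Sketch: dropping the protected attribute does not remove bias
# when a correlated proxy feature remains in the input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: 1 = group A, 0 = group B (never shown to the model).
group = rng.integers(0, 2, n)

# Hypothetical proxy feature correlated with group membership
# (e.g. an activity with skewed demographics).
proxy = (group + rng.normal(0, 0.5, n) > 0.5).astype(float)

# A genuinely job-relevant skill score, independent of group.
skill = rng.normal(0, 1, n)

# Historical hiring labels encode human bias: group A was favored.
label = (skill + 1.5 * group + rng.normal(0, 1, n) > 1.0).astype(int)

# Train WITHOUT the protected column -- only skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, label)

# Selection rates by (hidden) group remain far apart: the model
# reconstructed the protected attribute from the proxy feature.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: selection rate {pred[group == g].mean():.2f}")
```

Because the proxy correlates with the hidden group, the model recovers the attribute it was never shown, which is why "we changed the pronouns and names, but the model was still biased" is the expected outcome rather than a surprise.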