
This is EXACTLY what people are trying to raise awareness of. This is implicit bias. It doesn't matter if people have "good intentions" or made "honest mistakes" if the tool is implicitly biased. This is why having a diverse team working on a project is important, because it will help call out these issues early rather than after they have been put in production.

It's also not purely a technical problem: if you feed the model only pictures of white and Asian male college students, then it's no surprise when you get a model biased toward recognizing white and Asian male college students (which is exactly how several prominent models were trained).



Biased algorithms/models are particularly dangerous because they tend to provide a veneer of objectivity (plausible deniability if you want to put a more cynical lens on it) that could frustrate attempts to hold users accountable.


Agreed. A huge problem I haven't been able to think my way around is the (already happening en masse) practice of targeting black people because the data says so. Over time, police departments invest data-capture resources in "bad areas", which may have a lot of black residents. I wonder how to counter the defense that some black areas obviously have high crime, and why would a department put surveillance resources in a good neighborhood?

I personally feel it's wrong but that's one thing I've always got hung up on in building a critique.


Mathematician Cathy O'Neil's book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" is a good introduction to the implicit biases in machine learning:

https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction


That book is incredibly misleading.

This has nothing to do with machine learning. It is a simple correlational situation.

If African Americans have, on average, poorer credit ratings, then correlational models will begin to equate race with poor credit ratings, which impacts their ability to get credit and so feeds back into that mechanism.

...of course RACE isn't allowed to be factored into financial applications, so the applications will often use other data points, like zip code, that end up having a correlation to bad credit as well as race. ...often producing the same result.
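The proxy effect described above is easy to demonstrate with a toy simulation (everything here is hypothetical: two made-up zip codes, two made-up groups, and invented rates). A scoring rule that never sees group membership, only zip code, still produces sharply different approval rates by group whenever zip code and group are correlated:

```python
import random

random.seed(0)

# Hypothetical setup: zip "A" is 80% group X, zip "B" is 80% group Y,
# and (for historical reasons) default rates differ by zip, not by group.
def make_applicant():
    zip_code = random.choice(["A", "B"])
    if zip_code == "A":
        group = "X" if random.random() < 0.8 else "Y"
    else:
        group = "Y" if random.random() < 0.8 else "X"
    # Default depends only on zip code; group is never consulted.
    defaulted = random.random() < (0.30 if zip_code == "A" else 0.10)
    return zip_code, group, defaulted

applicants = [make_applicant() for _ in range(100_000)]

# A "race-blind" scorer: observed default rate per zip code.
def default_rate(zip_code):
    in_zip = [a for a in applicants if a[0] == zip_code]
    return sum(a[2] for a in in_zip) / len(in_zip)

score = {z: default_rate(z) for z in ("A", "B")}

# The approval rule looks only at zip code...
def approved(applicant):
    return score[applicant[0]] < 0.20

# ...yet approval rates still split along group lines.
for g in ("X", "Y"):
    members = [a for a in applicants if a[1] == g]
    rate = sum(approved(a) for a in members) / len(members)
    print(f"group {g}: approval rate {rate:.2f}")
```

With these invented numbers, roughly 80% of group Y is approved versus roughly 20% of group X, even though the rule never mentions group membership — exactly the "same result" the comment describes.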

The problem isn't with the models - it's with reality.

The author famously said "Math is Racist". It's hard to get over such stupidity.


I don't think you're making the case you think you're making.


It seems like they are making the case that if reality is biased then models of reality will retain that bias.

If so they seem to make the point well.

If the only information you have about a loan applicant is where they live, your decision will be 'biased' if where someone lives is correlated with other factors (as opposed to, say, denying the loan because they live on a flood plain).

In this context, saying "Math is Racist" is like saying "Physics hates Fat People" because gravity disproportionately affects heavier people. Accurately reporting what is happening is not biased, making decisions without considering [edit: or not making a decision because you didn't consider] the context is biased.

Maths is a tool (well, collection of tools), and the onus is first on the tool user to use it in a fair way. Yes it is important for educators and tool creators to be mindful of how these tools will be used in practice, but there is a big jump from that idea to "Math is Racist".


Isn’t this a similar argument to one that could be made against race- and gender-based affirmative action? I don’t understand how organizations like the ACLU are critical of face recognition tech because it reinforces implicit bias that engineers have, but then turn around and support race- and gender-based affirmative action that similarly reinforces implicit bias, where PoC (but not Asians, for some reason) and non-males are presumed to be disadvantaged purely due to their identity.


I'm not sure what argument you are referring to here (if it was one above).

I think these organisations are criticising the tool builders for creating tools that are easily misused (or are created with unreasonable limitations, like only being valid for university students at one university, but are sold as widely applicable).

Supporting affirmative action initiatives like the ones you list is trying to address the biases that exist in reality. I think this is often a bit backward (not addressing the root cause), but it can be expensive (in time, effort, money, politics) to address the actual root cause, so these programs aim to address the bias at the place it manifests.

This is a similar (dare I say pragmatic?) argument to "it would be cheaper and more effective to just give everyone a no-strings-attached payment each month than to provide means-tested payments to those who need help".

Determining whether these arguments are correct is a different thing altogether, and I have no idea if these programs are cheaper and more effective than dealing with the root problem, or if it's even possible to define and address the root problem in the first place!

The two things you contrast above are fundamentally different - one is criticising tools and tool builders, the other trying to address perceived biases in the world.


When you say "[non-white/non-males] are presumed to be disadvantaged", have you talked to or listened to black or female academics? I follow ~4 black academics on twitter, and each of them has contributed to the #BlackintheIvory topic. Their identity plays a huge role in how others treat them.

> but not Asians for some reasons

Asian people are distinct because so many of them have immigrated recently, and immigration requirements favor educated and well-off folks. That masks many issues because they should have better than average outcomes due to better than average education and skills.


That's why "racism" has been redefined. Because it makes it morally convenient in the quest to "undo" past injustice.

On a side note: welcome to the Twilight Zone.


> so the applications will often use other data points, like zip code, that end up having a correlation to bad credit as well as race. ...often producing the same result.

You realize this too is illegal right? The law doesn't say "you can't use race" - instead it says (paraphrased by the Brookings Institute): "Are people within a protected class being clearly treated differently than those of nonprotected classes, even after accounting for credit risk factors?"[1]

O'Neil points out that math is often used to obfuscate this (whether it be deliberately or not). This is a valid point, and one that people who think of math as a values neutral tool should consider.

I didn't love the book, but it's difficult to make the argument that she is stupid.

[1] https://www.brookings.edu/research/credit-denial-in-the-age-...



