Will AI bias be that hard to correct for?
I recently went to a showing of the film Bias and met the filmmaker Robin Hauser. Most of the film discusses our unconscious biases as revealed by the Implicit Association Test, which extracts bias information by having you quickly make associations between words and concepts and measuring your error rate.
The producer of the film took the test and was bothered by the biases it revealed in her, but also by the fact that no amount of conscious effort to correct them worked. You could take the test again and again, trying not to show the bias, but it doesn't go away. It did make her more aware of her biases in slower, more deliberate decision making, and she hopes she's improved.
A smaller section at the end of the film dealt with the way that human biases in training data have produced "AI" tools which incorporate those biases. For example, if you train on data about criminal activity, the much higher arrest and incarceration rates for black people will cause the resulting network to evaluate black individuals as more likely to be criminal. This has resulted in black defendants being denied parole essentially for the colour of their skin. It happens even if race is not a parameter in the training data, because networks will pick up correlations from zip code, education and many other factors and effectively infer race.
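To see how a "race-blind" model can still learn race, here is a minimal sketch with entirely synthetic data (the feature names and numbers are invented for illustration, not taken from the film or any real dataset): a simple classifier recovers the withheld attribute from its proxies well above chance.

```python
# Synthetic illustration: a protected attribute is withheld from the model,
# yet it can be recovered from correlated proxy features like zip code and
# education. All data here is invented purely for the demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)                               # withheld attribute (0/1)
zip_flag = (rng.random(n) < 0.2 + 0.6 * group).astype(float)     # zip-code indicator correlated with group
education = rng.normal(12 + 2 * group, 2.0, size=n)              # years of schooling correlated with group

X = np.column_stack([zip_flag, education])
X_tr, X_te, y_tr, y_te = train_test_split(X, group, random_state=0)

clf = LogisticRegression().fit(X_tr, y_tr)
print("accuracy recovering the withheld attribute:", round(clf.score(X_te, y_te), 3))
# Prints well above the 0.5 chance level, even though 'group' was never an input feature.
```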
Many other types of bias in AI networks have been found since then, and it has become an important issue.
Let me offer one note of optimism. Software is not like people in one key way here. The producer of the movie, once she discovered her bias, was unable to correct it, at least not any time soon. With software, when we find a flaw and correct it, the software does not make that mistake again. It may make other mistakes, and if we are not careful (as in the implicit race correlations) it may find other ways to make the mistake, but what is fixed generally stays fixed.
If our training data is biased and we can come to a conclusion as to how and why it is biased, we can usually correct for the bias.
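One standard way to do that, once the bias is diagnosed, is to reweight the training examples so that group membership and the label look independent before the model is fit. Here is a minimal sketch of that reweighing idea (in the spirit of Kamiran and Calders; the variable names and the final fit call are illustrative placeholders, not any particular library's API):

```python
# Sketch of correcting a diagnosed bias by reweighing training examples so
# that group membership and the label are statistically independent in the
# weighted data. Variable names are illustrative placeholders.
import numpy as np

def reweigh(labels: np.ndarray, groups: np.ndarray) -> np.ndarray:
    """Weight each (group, label) cell by P(group) * P(label) / P(group, label)."""
    weights = np.ones(len(labels), dtype=float)
    for g in np.unique(groups):
        for y in np.unique(labels):
            cell = (groups == g) & (labels == y)
            expected = (groups == g).mean() * (labels == y).mean()  # rate if independent
            observed = cell.mean()                                   # rate actually seen
            if observed > 0:
                weights[cell] = expected / observed
    return weights

# The weights then go to any learner that accepts per-sample weights, e.g.:
# model.fit(X, labels, sample_weight=reweigh(labels, groups))
```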
Bias in AI is important because people have this illusion of computers as inherently objective. People were spooked to realize that computers which learn from humans pick up biases we weren't expecting them to have. As such, we do tend to trust a score from software more than human judgment. We don't attribute the "flaws of humanity" to computers; we attribute other flaws to them instead.
But we're getting over that surprise quickly. It is now becoming established practice to look for bias in training data that might affect results. It will soon be part of the legal duty of care to do such analysis. The big question is: how do we make sure we've found all of it, and how do we correct for it?
We won't find all of it, but I think we can find a lot of it. If we can see it in people, we can see it in software.
Correcting it is the larger challenge. Some simple corrections are possible -- we have big data, after all. If an algorithm under-scores one race, you can deliberately re-enter race as a factor and program in opposite (classical) weightings. Crude, but the result will match the demographic scores you demand. However, it probably won't do so fairly (even if it is fairer than it was), so this is far from trivial, but it's a different problem from building the AI in the first place.
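As a concrete (and deliberately crude) sketch of that kind of correction, with made-up names and data: give each group its own cutoff so every group is selected at the same target rate. The demographic numbers come out as demanded, but as noted above this says nothing about whether the individual decisions are fair.

```python
# Crude post-hoc correction: re-introduce the group attribute and pick a
# per-group threshold so that every group is selected at the same target rate.
# Names and data are invented for illustration.
import numpy as np

def per_group_cutoffs(scores, groups, target_rate):
    """Cutoff for each group so roughly target_rate of that group is selected."""
    return {g: np.quantile(scores[groups == g], 1 - target_rate)
            for g in np.unique(groups)}

def select(scores, groups, target_rate):
    cuts = per_group_cutoffs(scores, groups, target_rate)
    return np.array([s >= cuts[g] for s, g in zip(scores, groups)])

rng = np.random.default_rng(1)
groups = rng.choice(["a", "b"], size=1000)
scores = rng.normal(0.0, 1.0, size=1000) - 0.5 * (groups == "b")  # scorer biased against "b"

chosen = select(scores, groups, target_rate=0.3)
for g in ("a", "b"):
    print(g, round(chosen[groups == g].mean(), 2))  # both come out close to the demanded 0.30
```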
In some cases, we may have to admit that statistical methods (which is what today's AI is) are not appropriate for certain applications. One example is predictive policing. At first glance, most predictive policing algorithms do something fairly obvious -- they allocate police to locations with a higher history of crime. We did that even before predictive policing algorithms came about.
The problem is that this allocation of police creates risk for the general population, because those who get extra police surveillance in their neighbourhood get arrested for minor violations that other people don't get arrested for. The poor residents of Vermont Square in LA may not speed any more than the movie stars of Bel Air, but they will get more speeding tickets because the algorithms allocate more police to them. This means punishing the innocent for what their neighbours did.
Let's hope that by now people realize that biases in training data will of course be imported into machine learning systems trained on them, and are ready to watch for those biases, document them and correct them. It's a problem, but not a blocker, for useful technology.
Comments
Anonymous
Fri, 2018-12-14 15:42
There's a lot of evidence
There's a lot of evidence that the Implicit Association Test is flawed and does not show what it purports to show. I find it troubling that people still discuss it as though there are no worries about its scientific validity.
brad
Fri, 2018-12-14 19:30
Quite probable
One thing I found when I was taking it: first it showed me the "common" association -- for example male/science vs. female/arts. Then it showed me the reversal of that. However, in effect this was training me to do the first, and then finding I had trouble doing the second. (Turns out I didn't, but most people do.)
However, the core thing is true -- that humans have unconscious bias, and find it difficult to eliminate it even with conscious thought.
Russell de silva
Sat, 2018-12-15 12:02
No correlation between implicit bias and actions
Not only does the implicit bias test not measure what it claims to measure; implicit bias has been found NOT to translate into conscious action.
Be very wary of sociology/anthropology/psychology results which support theories people want to believe for emotional/political reasons.
Psychology is littered with recent theories purporting to explain environmental causes for life outcomes which fail to replicate. Motivated reasoning and the resulting poor scholarship have left these soft sciences in a parlous state.
brad
Sat, 2018-12-15 12:45
Hmm
I want to learn more about that. There are obviously other measures of bias (like the change in hiring results for symphony orchestras when they switched to doing auditions behind curtains) which did translate into action. I don't know if they have done IAT-type tests on those people. The most surprising thing I read about the IAT was that it found higher implicit bias on ethnic measures among urban people compared to rural people, which is the opposite of what you might expect -- i.e. that exposure to lots of different people created more biases. Clearly urban/rural voting patterns seem to run differently, but not in the way you would expect from this.