From Sexism to Racism, A.I. Systems Learn Our Biases
Artificial intelligence, or AI, is increasingly integrated into our lives, used for everything from driving cars to assisting with medical decisions. But AI is only as good as its data, and if that data is biased, AI-based products and systems can perpetuate existing inequities.
One recent example: New York State regulators are investigating “potentially discriminatory treatment” after consumers complained that the algorithm behind the new Apple credit card is sexist. In one case, a husband’s credit limit was 20 times higher than his wife’s, even though her credit rating was higher and the couple files joint tax returns.
So how can this type of bias be mitigated? A new article in Nature Machine Intelligence co-authored by Graduate Center doctoral candidate Emanuel Moss recommends incorporating qualitative methods from the social sciences into AI design.
Engineers are already trying to weave “human values” into such systems. But the authors argue that human values are too fluid, too context-dependent, and too rooted in an individual’s history to be fairly evaluated by quantitative methods alone. The social sciences, by contrast, have long grappled with these concepts and have the tools to deal with them. Quantitative data will always embed a scientist’s biases, racial or otherwise, no matter how objective the scientist tries to be. Qualitative methods, on the other hand, require both researchers and research participants to reflect on their position in the world and the effects of their actions.
The article offers four cues and three questions to guide AI design. The cues: recognize that quantitative data can be biased; that behavior under artificial conditions may differ from lived experience; that individual behavior varies with social context; and that past data, which reflects historical inequities, may not match future outcomes. The questions: What do we know about society, and why? How do we know what we know about society? And who is designing a technological intervention for a social setting, who participates in it, and who is affected by it?