Technology Is Biased Too. How Do We Fix It?
Algorithms were supposed to free us from our unconscious mistakes. But now there’s a new set of problems to solve.
Whether it’s done consciously or subconsciously, racial discrimination continues to have a serious, measurable impact on the choices our society makes about criminal justice, law enforcement, hiring and financial lending. It might be tempting, then, to feel encouraged as more and more companies and government agencies turn to seemingly dispassionate technologies for help with some of these complicated decisions, which are often influenced by bias. Rather than relying on human judgment alone, organizations are increasingly asking algorithms to weigh in on questions that have profound social ramifications, like whether to recruit someone for a job, give them a loan, identify them as a suspect in a crime, send them to prison or grant them parole.
But an increasing body of research and criticism suggests that algorithms and artificial intelligence aren’t necessarily a panacea for ending prejudice, and they can have disproportionate impacts on groups that are already socially disadvantaged, particularly people of color. [...]
In 2014, a report from the Obama White House warned that automated decision-making “raises difficult questions about how to ensure that discriminatory effects resulting from automated decision processes, whether intended or not, can be detected, measured, and redressed.” [...]
Although AI decision-making is often regarded as inherently objective, the data and processes that inform it can invisibly bake inequality into systems that are intended to be equitable. Avoiding that bias requires an understanding of both very complex technology and very complex social issues.
Consider COMPAS, a widely used algorithm that assesses whether defendants and convicts are likely to commit crimes in the future. [...]
At first glance, COMPAS appears fair: White and black defendants given higher risk scores tended to reoffend at roughly the same rate. But an analysis by ProPublica found that, when you examine the types of mistakes the system made, black defendants were almost twice as likely to be mislabeled as likely to reoffend — and potentially treated more harshly by the criminal justice system as a result [...]
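To make that distinction concrete, here is a minimal sketch, using invented counts rather than ProPublica's actual data, of how a risk score can look calibrated (defendants labeled high risk reoffend at the same rate in both groups) while one group's non-reoffenders are still mislabeled roughly twice as often:

```python
# Hypothetical confusion-matrix counts for two groups of defendants.
# These numbers are invented for illustration; they are not ProPublica's data.
groups = {
    "group_a": {"tp": 60, "fp": 40, "fn": 20, "tn": 80},
    "group_b": {"tp": 30, "fp": 20, "fn": 40, "tn": 110},
}

for name, c in groups.items():
    # Calibration-style check: of those labeled high risk, how many reoffended?
    # Identical for both groups here (0.60), so the score "appears fair".
    precision = c["tp"] / (c["tp"] + c["fp"])
    # Error-rate check: of those who did NOT reoffend, how many were wrongly
    # labeled high risk? This is where the disparity ProPublica described shows up.
    false_positive_rate = c["fp"] / (c["fp"] + c["tn"])
    print(f"{name}: reoffense rate among high-risk labels = {precision:.2f}, "
          f"false positive rate = {false_positive_rate:.2f}")
```

In this toy example both groups reoffend at the same 60 percent rate once labeled high risk, yet group_a's non-reoffenders are flagged at roughly twice group_b's rate (0.33 versus 0.15). Research on risk scores has shown that when the groups' underlying reoffense rates differ, some version of this tension is mathematically unavoidable.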
An even stickier question is whether the data being fed into these systems might reflect and reinforce societal inequality. For example, critics suggest that at least some of the data used by systems like COMPAS is fundamentally tainted by racial inequalities in the criminal justice system. “If you’re looking at how many convictions a person has and taking that as a neutral variable — well, that’s not a neutral variable [...] The criminal justice system has been shown to have systematic racial biases.”
Black people are arrested more often than whites, even when they commit crimes at the same rates. Black people are also sentenced more harshly and are more likely to be searched or arrested during a traffic stop. That’s context that could be lost on an algorithm (or an engineer) taking those numbers at face value. [...]
“Part of the problem is that people trained as data scientists who build models and work with data aren’t well connected to civil rights advocates a lot of the time,” said Aaron Rieke of Upturn.
[...]
What does ‘fairness’ mean?
Once we move beyond the technical discussions about how to address algorithmic bias, there’s another tricky debate to be had: How are we teaching algorithms to value accuracy and fairness? And what do we decide “accuracy” and “fairness” mean? If we want an algorithm to be more accurate, what kind of accuracy do we decide is most important? If we want it to be more fair, whom are we most concerned with treating fairly? [...]
“In some cases, the most accurate prediction may not be the most socially desirable one, even if the data is unbiased, which is a huge assumption — and it’s often not,” Rieke said.
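As a rough illustration of that tension, consider the sketch below, in which the scores, outcomes, and groups are all invented: the decision threshold that maximizes overall accuracy can leave the two groups with very different false positive rates, and thresholds chosen to equalize those rates give up some accuracy.

```python
# Toy sketch with invented data: the threshold choice that maximizes overall
# accuracy can produce unequal false positive rates across groups, and
# equalizing those rates costs accuracy.

# (risk_score, actually_reoffended, group)
records = [
    (0.9, 1, "a"), (0.8, 0, "a"), (0.7, 1, "a"), (0.6, 1, "a"), (0.3, 0, "a"),
    (0.8, 1, "b"), (0.7, 0, "b"), (0.4, 1, "b"), (0.3, 0, "b"), (0.2, 0, "b"),
]

def evaluate(thresholds):
    """Label each record 'high risk' if its score meets its group's threshold,
    then report overall accuracy and per-group false positive rates."""
    correct, fp, neg = 0, {"a": 0, "b": 0}, {"a": 0, "b": 0}
    for score, reoffended, grp in records:
        pred = score >= thresholds[grp]
        correct += pred == bool(reoffended)
        if not reoffended:
            neg[grp] += 1
            fp[grp] += pred
    return correct / len(records), {g: fp[g] / neg[g] for g in fp}

# Thresholds tuned purely for accuracy on this toy data...
print(evaluate({"a": 0.50, "b": 0.75}))  # accuracy 0.80, FPR a=0.50, b=0.00
# ...versus thresholds tuned so both groups share the same false positive rate.
print(evaluate({"a": 0.85, "b": 0.75}))  # accuracy 0.70, FPR a=0.00, b=0.00
```

Neither choice is dictated by the math. Picking between them is exactly the kind of moral decision advocates want institutions to make explicitly.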
Advocates say the first step is to start demanding that the institutions using these tools make deliberate choices about the moral decisions embedded in their systems, rather than shifting responsibility to the faux neutrality of data and technology.
“It can’t be a technological solution alone,” Ajunwa said. “It all goes back to having an element of human discretion and not thinking that all tough questions can be answered by technology.” [...]
https://fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it