The algorithms are the real racists: Technology is "biased" too.

Technology Is Biased Too. How Do We Fix It?
Algorithms were supposed to free us from our unconscious mistakes. But now there’s a new set of problems to solve.

Whether it’s done consciously or subconsciously, racial discrimination continues to have a serious, measurable impact on the choices our society makes about criminal justice, law enforcement, hiring and financial lending. It might be tempting, then, to feel encouraged as more and more companies and government agencies turn to seemingly dispassionate technologies for help with some of these complicated decisions, which are often influenced by bias. Rather than relying on human judgment alone, organizations are increasingly asking algorithms to weigh in on questions that have profound social ramifications, like whether to recruit someone for a job, give them a loan, identify them as a suspect in a crime, send them to prison or grant them parole.

But an increasing body of research and criticism suggests that algorithms and artificial intelligence aren’t necessarily a panacea for ending prejudice, and they can have disproportionate impacts on groups that are already socially disadvantaged, particularly people of color. [...]

In 2014, a report from the Obama White House warned that automated decision-making “raises difficult questions about how to ensure that discriminatory effects resulting from automated decision processes, whether intended or not, can be detected, measured, and redressed.” [...]

Although AI decision-making is often regarded as inherently objective, the data and processes that inform it can invisibly bake inequality into systems that are intended to be equitable. Avoiding that bias requires an understanding of both very complex technology and very complex social issues.

Consider COMPAS, a widely used algorithm that assesses whether defendants and convicts are likely to commit crimes in the future. [...]

At first glance, COMPAS appears fair: White and black defendants given higher risk scores tended to reoffend at roughly the same rate. But an analysis by ProPublica found that, when you examine the types of mistakes the system made, black defendants were almost twice as likely to be mislabeled as likely to reoffend — and potentially treated more harshly by the criminal justice system as a result [...]
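
To see how both of those statements can be true at once, here is a minimal sketch with invented confusion-matrix counts (not ProPublica's actual figures): a score can look calibrated, in that people flagged high-risk reoffend at the same rate in both groups, while the group with the higher base rate still collects far more false positives.

Code:
# Invented, illustrative counts only -- not ProPublica's actual figures.
# Shows how equal reoffense rates among the flagged ("calibration") can
# coexist with very different false positive rates across groups.
groups = {
    "group_a": {"tp": 300, "fp": 200, "fn": 200, "tn": 300},  # base rate 0.5
    "group_b": {"tp": 150, "fp": 100, "fn": 150, "tn": 600},  # base rate 0.3
}

for name, c in groups.items():
    ppv = c["tp"] / (c["tp"] + c["fp"])   # reoffense rate among the flagged
    fpr = c["fp"] / (c["fp"] + c["tn"])   # non-reoffenders wrongly flagged
    print(f"{name}: reoffense rate among high-risk = {ppv:.2f}, "
          f"false positive rate = {fpr:.2f}")

# group_a: reoffense rate among high-risk = 0.60, false positive rate = 0.40
# group_b: reoffense rate among high-risk = 0.60, false positive rate = 0.14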

An even stickier question is whether the data being fed into these systems might reflect and reinforce societal inequality. For example, critics suggest that at least some of the data used by systems like COMPAS is fundamentally tainted by racial inequalities in the criminal justice system. “If you’re looking at how many convictions a person has and taking that as a neutral variable — well, that’s not a neutral variable [...] The criminal justice system has been shown to have systematic racial biases.”

Black people are arrested more often than whites, even when they commit crimes at the same rates. Black people are also sentenced more harshly and are more likely to be searched or arrested during a traffic stop. That’s context that could be lost on an algorithm (or an engineer) taking those numbers at face value. [...]

“Part of the problem is that people trained as data scientists who build models and work with data aren’t well connected to civil rights advocates a lot of the time,” said Aaron Rieke of Upturn
[...]

What does ‘fairness’ mean?
Once we move beyond the technical discussions about how to address algorithmic bias, there’s another tricky debate to be had: How are we teaching algorithms to value accuracy and fairness? And what do we decide “accuracy” and “fairness” mean? If we want an algorithm to be more accurate, what kind of accuracy do we decide is most important? If we want it to be more fair, whom are we most concerned with treating fairly? [...]
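
One way to make that question concrete is to write the competing definitions down. The sketch below uses invented toy labels (not data from any real system) to compute three standard candidates per group: the rate at which a group is flagged at all, the error rates among people with a given actual outcome, and the reoffense rate among the flagged. When base rates differ across groups, these generally cannot all be equalized at once, so choosing which one matters most is a policy decision rather than a technical one.

Code:
# Invented toy data, purely illustrative.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0, 1, 0])  # 1 = actually reoffended
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1, 1, 0])  # 1 = flagged high-risk
group  = np.array(list("aaaaabbbbb"))

def fairness_metrics(y_true, y_pred):
    flagged = y_pred == 1
    pos = y_true == 1
    return {
        # "Demographic parity": how much of the group is flagged at all?
        "flag_rate": flagged.mean(),
        # "Equalized odds": error rates conditioned on the true outcome.
        "false_pos_rate": (flagged & ~pos).sum() / max((~pos).sum(), 1),
        "false_neg_rate": (~flagged & pos).sum() / max(pos.sum(), 1),
        # "Predictive parity" / calibration: how often are the flagged right?
        "precision": (flagged & pos).sum() / max(flagged.sum(), 1),
    }

for g in np.unique(group):
    members = group == g
    print(g, fairness_metrics(y_true[members], y_pred[members]))

Which of these an institution commits to equalizing is exactly the "whom are we most concerned with treating fairly" question above.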

“In some cases, the most accurate prediction may not be the most socially desirable one, even if the data is unbiased, which is a huge assumption — and it’s often not,” Rieke said.

Advocates say the first step is to start demanding that the institutions using these tools make deliberate choices about the moral decisions embedded in their systems, rather than shifting responsibility to the faux neutrality of data and technology.

“It can’t be a technological solution alone,” Ajunwa said. “It all goes back to having an element of human discretion and not thinking that all tough questions can be answered by technology.” [...]
https://fivethirtyeight.com/features/technology-is-biased-too-how-do-we-fix-it
 
Shit like technology and algorithms is white people shit so of course it's racist lmao are you guys seriously trying to argue otherwise
 
Sounds like each race needs its own land mass.
 
Computers have no emotion. If you simply feed it crime statistics, then divide the categories into age, sex and race, it will objectively target certain groups more. If the decision was made by a human, all sorts of accusations about racism and sexism would fly. That leaves the question of whether we truly want to be objective and let the numbers do their work, or continue being in self-denial for the sake of political correctness.
 
Computers have no emotion. If you simply feed it crime statistics, then divide the categories into age, sex and race, it will objectively target certain groups more. If the decision was made by a human, all sorts of accusations about racism and sexism would fly. That leaves the question of whether we truly want to be objective and let the numbers do their work, or continue being in self-denial for the sake of political correctness.

Silicon chips have a long history of slavery and imperialism. All that woman-degrading porn is watched on an electronic device with, guess what? A fucking silicone chip. It wasn't enough to give women cancer from silicone fake breast funbags, but now they are spreading their cockman oppressor wafers and soldered connectors into our very mind flesh in some sort of processor rape, like that tree rape at that cabin all those years ago.

 
Computers have no emotion. If you simply feed it crime statistics, then divide the categories into age, sex and race, it will objectively target certainly groups more. If the decision was made by a human, all sorts of accusations about racism and sexism would fly. That leaves the question of whether we truly want to be objective and let numbers do its work, or continue being in self-denial for the sake of political correctness.

Logic is racist because you can't control the outcome when done in a pure manner. It won't abide by politically correct constraints.

Essentially it is 'unfiltered' and would not be invited back to a dinner party.
 
Definitive proof that Google (and all liberals) are the real racists:



Seriously though, maybe they didn't put enough black people in the training set. The bias is in how you select your training data, not in the outcomes that the algorithm predicts.
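
That claim is easy to demonstrate in miniature. The sketch below is a hypothetical simulation (using scikit-learn, with made-up groups whose feature/label relationships are assumed to differ) in which a perfectly "neutral" learner trained on a sample that underrepresents one group ends up far less accurate on that group.

Code:
# Hypothetical simulation of training-set selection bias.
# Assumes, purely for the demo, that two groups have different
# feature/label relationships; the learner itself is neutral.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, w):
    """n examples whose label depends on the features via weights w."""
    X = rng.normal(size=(n, 2))
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_a, y_a = make_group(5000, np.array([1.0, 1.0]))
X_b, y_b = make_group(5000, np.array([1.0, -1.0]))

# The "selection" step: the training sample is 95% group A, 5% group B.
X_train = np.vstack([X_a[:1900], X_b[:100]])
y_train = np.concatenate([y_a[:1900], y_b[:100]])

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on held-out examples from each group separately.
for name, X, y in [("group A", X_a[2000:], y_a[2000:]),
                   ("group B", X_b[2000:], y_b[2000:])]:
    print(f"accuracy on {name}: {model.score(X, y):.2f}")

# Typical result: around 0.9 on group A, roughly chance on group B --
# the underrepresented group absorbs most of the error.

Whether real training sets are skewed in this way is an empirical question; the point is only that the selection step, not the fitting step, is where this particular bias enters.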
 
They are working as intended. Of course they will affect different groups differently. That's the point. If they gave loans to everyone in Detroit they'd be out of business.
 
Code:
switch(race) {
    case "black":
    case "latino":
    case "muslim":
        discriminate();
        break;
    default:
         privilege++;
         enjoy(privilege);
}

Can anyone code review my racist code?
 
Lol, so an algorithm that considers how many times a person has been convicted is racist. Yeah...
 
I firmly believe that everyone who thinks white people are inherently racist should be deported to Liberia.
 
“Part of the problem is that people trained as data scientists who build models and work with data aren’t well connected to civil rights advocates a lot of the time,” said Aaron Rieke of Upturn

Sounds like a good thing.

Code:
switch(race) {
    case "black":
    case "latino":
    case "muslim":
        discriminate();
        break;
    default:
        privilege++;
        enjoy(privilege);
}

Can anyone code review my racist code?

Of course you'd use a switch on black people, you racist.
 
Funny how no one here has actually addressed the points made in the article:

At first glance, COMPAS appears fair: White and black defendants given higher risk scores tended to reoffend at roughly the same rate. But an analysis by ProPublica found that, when you examine the types of mistakes the system made, black defendants were almost twice as likely to be mislabeled as likely to reoffend — and potentially treated more harshly by the criminal justice system as a result [...]

An even stickier question is whether the data being fed into these systems might reflect and reinforce societal inequality. For example, critics suggest that at least some of the data used by systems like COMPAS is fundamentally tainted by racial inequalities in the criminal justice system. “If you’re looking at how many convictions a person has and taking that as a neutral variable — well, that’s not a neutral variable [...] The criminal justice system has been shown to have systematic racial biases.”

Black people are arrested more often than whites, even when they commit crimes at the same rates. Black people are also sentenced more harshly and are more likely to be searched or arrested during a traffic stop. That’s context that could be lost on an algorithm (or an engineer) taking those numbers at face value. [...]

Also interesting that everyone ignored the part where one of the people interviewed conceded that the data might produce politically incorrect information even off of unbiased data
“In some cases, the most accurate prediction may not be the most socially desirable one, even if the data is unbiased, which is a huge assumption — and it’s often not,” Rieke said.
 
Funny how no one here has actually addressed the points made in the article

I'd be interested to see how they define and measure what a re-offender is, and how the model could be off. Seems like one of those cases of affecting the results by observing them. If you act on an algorithm's prediction that someone will be a repeat offender and sentence them longer, how do you separate the effect of that sentence from their future actions?

Also interesting that everyone ignored the part where one of the people interviewed conceded that the data might produce politically incorrect information even off of unbiased data

Upturn is an activist organization around technology. They are fully in the camp of data being biased. I don't think "conceded" is the word you wanted to use.
 