Ethics (Artificial Intelligence) - Racist Robots
How Do We Eliminate AI Bias?
Anil Kumar Kummari
We have already seen glimpses of what might be on the horizon. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches; and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting anti-Semitic messages.
Tay Chatbot - Microsoft
In 2016, Microsoft released a “playful” chatbot named Tay onto Twitter, designed to show off the tech giant’s burgeoning artificial intelligence research. Within 24 hours, it had become one of the internet’s ugliest experiments. By learning from its interactions with other Twitter users, Tay quickly went from tweeting about how “humans are super cool” to claiming “Hitler was right I hate the Jews.” While it was a public relations disaster for Microsoft, Tay demonstrated an important issue with machine-learning systems: robots can be as racist, sexist and prejudiced as humans if they acquire knowledge from text written by humans.
US Risk Assessment System
In May last year, a report claimed that a computer program used by a US court for risk assessment was biased against black prisoners.
The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool was much more prone to mistakenly label black defendants as likely to reoffend, according to an investigation by ProPublica.
As part of a larger examination of the powerful, largely hidden effect of algorithms in American life, ProPublica obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014, and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm. The score proved remarkably unreliable in forecasting violent crime: only 20 percent of the people predicted to commit violent crimes actually went on to do so.
When a full range of crimes was taken into account, including misdemeanours such as driving with an expired license, the algorithm was only somewhat more accurate than a coin flip: of those deemed likely to reoffend, 61 percent were arrested for a subsequent crime within two years.
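That benchmark boils down to a single quantity: of the defendants the algorithm flagged as likely to reoffend, what share were actually charged with a new crime within two years? A minimal Python sketch of that check is below, assuming ProPublica's publicly released COMPAS data file (compas-scores-two-years.csv) and its score_text and two_year_recid columns; those names are assumptions about that dataset, not something quoted in this article.

```python
# A rough sketch of the "benchmark" check described above, assuming
# ProPublica's published COMPAS data (compas-scores-two-years.csv) with
# columns score_text ("Low"/"Medium"/"High") and two_year_recid (0/1).
# These names are assumptions based on that public dataset, not this article.
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")

# Defendants the tool deemed likely to reoffend (medium or high risk score).
flagged = df[df["score_text"] != "Low"]

# Share of flagged defendants actually charged with a new crime within two years.
ppv = flagged["two_year_recid"].mean()
print(f"Re-arrested within two years, of those flagged: {ppv:.0%}")
```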
The ProPublica investigation also turned up significant racial disparities, just as former US Attorney General Eric Holder had feared. In forecasting who would reoffend, the algorithm made mistakes with black and white defendants at roughly the same rate, but in very different ways. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labelling them this way at almost twice the rate of white defendants. White defendants, meanwhile, were mislabelled as low risk more often than black defendants.
BRISHA BORDEN - Prior offenses: 4 juvenile misdemeanors. Risk score: 8 (HIGH RISK). Subsequent offenses: none.
VERNON PRATER - Prior offenses: 2 armed robberies, 1 attempted armed robbery. Risk score: 3 (LOW RISK). Subsequent offenses: 1 grand theft.
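The asymmetry in error rates can be checked the same way: for each racial group, compute the false positive rate (flagged high risk but no new charges within two years, like Brisha Borden) and the false negative rate (labelled low risk but re-arrested, like Vernon Prater). A rough sketch, under the same assumptions about ProPublica's released data, follows.

```python
# A rough sketch of the error-rate comparison, per racial group, assuming
# ProPublica's published COMPAS data and column names (an assumption).
import pandas as pd

df = pd.read_csv("compas-scores-two-years.csv")
df["high_risk"] = (df["score_text"] != "Low").astype(int)

for race, group in df.groupby("race"):
    stayed_clean = group[group["two_year_recid"] == 0]
    reoffended = group[group["two_year_recid"] == 1]
    fpr = stayed_clean["high_risk"].mean()    # flagged high risk but did not reoffend
    fnr = 1 - reoffended["high_risk"].mean()  # labelled low risk but did reoffend
    print(f"{race}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
```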
Could defendants’ prior crimes or the type of crimes they were arrested for explain this disparity? No. ProPublica ran a statistical test that isolated the effect of race from criminal history and recidivism, as well as from defendants’ age and gender. Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.
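A test that isolates race from criminal history, recidivism, age and gender is, in essence, a logistic regression with those variables as controls; exponentiating the race coefficient gives the "x percent more likely" odds ratios quoted above. The sketch below shows the idea with statsmodels; the formula and column names are assumptions modelled on ProPublica's public dataset rather than a reproduction of their exact analysis.

```python
# A minimal sketch of a logistic regression that isolates race from criminal
# history, recidivism, age and gender when predicting a "high risk" label.
# Column names and the model formula are assumptions based on ProPublica's
# public COMPAS dataset; this is illustrative, not their exact analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("compas-scores-two-years.csv")
df["high_risk"] = (df["score_text"] != "Low").astype(int)

model = smf.logit(
    "high_risk ~ C(race, Treatment(reference='Caucasian'))"
    " + priors_count + two_year_recid + age + C(sex)",
    data=df,
).fit()

# Exponentiating a race coefficient gives an odds ratio: how much more likely
# that group is to be labelled high risk, holding the other factors constant.
print(np.exp(model.params))
```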
Maxine Mackintosh, a leading expert in health data, said the problem is mainly the fault of skewed data being fed to these systems. Research has shown that machine learning can be racist and sexist if it learns from humans, and will typically favour white men: machine-learning algorithms that mimic the actions of humans and society pick up an unfair bias against women and ethnic minorities.
In another pair of cases from the ProPublica data, the white suspect had a prior offence of attempted burglary and the black suspect a prior charge of resisting arrest. With no clear indication as to why, the black suspect was given a higher chance of reoffending while the white suspect was considered ‘low risk’. Yet over the next two years, the black suspect stayed clear of illegal activity and the white suspect was arrested three more times for drug possession.
Researchers at Boston University have demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News. When they asked the machine to complete the sentence “Man is to computer programmer as woman is to x”, the machine answered “homemaker”.
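That demonstration relies on word embeddings, vector representations of words in which analogies are answered by vector arithmetic. The sketch below shows how such a query is typically run with gensim against the pretrained Google News word2vec vectors; the file name and the phrase token "computer_programmer" are assumptions about that public model, not details given in this article.

```python
# A rough sketch of the word-embedding analogy described above, assuming the
# pretrained Google News word2vec vectors are available locally and that the
# phrase token "computer_programmer" exists in their vocabulary (an assumption).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "man" is to "computer_programmer" as "woman" is to ... ?
# Answered by vector arithmetic: computer_programmer - man + woman.
print(vectors.most_similar(
    positive=["woman", "computer_programmer"], negative=["man"], topn=5
))
```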
Stopping racist, sexist robots a challenge for AI
Health data expert Maxine Mackintosh said that the problem lies with society, and not the robots. She said: “These big data are really a social mirror – they reflect the biases and inequalities we have in society. If you want to take steps towards changing that you can’t just use historical information.”
“People expected AI to be unbiased; that’s just wrong”
This is the threat of AI in the near term. It is not some sci-fi scenario where robots take over the world. It is AI-powered services making decisions we do not understand, where those decisions turn out to hurt certain groups of people.
Reference:
3. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing


