Sunday, 26 November 2017
Ethics (Artificial Intelligence) – Defending Our Privacy
Anil Kumar Kummari
Nowadays everyone has access to high-end
technology in mobile phones, and for everything we do we hand over personal
information: fingerprints, eye scans (retina recognition), while health apps
track our movements as well. All of our information sits in one place; once
it is hacked or misused, it is at risk. As artificial intelligence
proliferates, companies and governments are aggregating enormous data sets to
feed their AI initiatives.
Although privacy is not a new concept in
computing, the growth of aggregated data magnifies privacy challenges and leads
to extreme ethical risks such as unintentionally building biased AI systems,
among many others. Privacy and artificial intelligence are both complex topics.
There are no easy or simple answers because solutions lie at the shifting and
conflicted intersection of technology, commercial profit, public policy, and
even individual and cultural attitudes. Data protection officials from more than
60 countries expressed their concerns over challenges posed by the emerging
fields of robotics, artificial intelligence and machine learning due to the new
tech's unpredictable outcomes. The global privacy regulators also discussed the
difficulties of regulating encryption standards and how to balance law
enforcement agency access to information with personal privacy rights.
Such
technological developments “pose challenges for a consent model of data
collection,” and may lead to an increase in data privacy risks, John Edwards,
New Zealand privacy commissioner, said at the 38th International Data
Protection and Privacy Commissioners' Conference, in Marrakesh, Morocco. For
example, decision-making machines may be used to “engender or manipulate the
trust of the user,” and could become “all seeing, all remembering in-house
guests” that collect personal data via numerous sensors. Peter Fleischer,
global privacy counsel at Alphabet Inc.'s Google, said that established privacy
principles would continue to be relevant for new technologies, but machine
learning raised particular problems, such as machines finding “ways to
re-identify data.”
The
emerging technologies may have a broad impact across various industries.
“Humans teaching machines to learn” was a “revolution in the making” that may
have broad societal consequences that could cut across numerous economic
sectors, Fleischer said. For example, data-driven machines may have the ability
to analyse sensitive medical data and make medical diagnoses, thereby potentially
revolutionizing the health-care industry, Fleischer said at the conference.
Machines that learn would act “like a chef: see the ingredients and comes up
with something new,” he said.
“Before
the prospect of an intelligence explosion, we humans are like small children
playing with a bomb. Such is the mismatch between the power of our plaything
and the immaturity of our conduct.”
— Nick Bostrom, Professor in AI Ethics and Philosophy at the University of Oxford
Google
CEO Sundar Pichai thinks we are now living in an “artificial intelligence-first
world.” He’s probably right. Artificial intelligence is all the rage in Silicon
Valley these days, as technology companies race to build the first killer app
that utilizes machine learning and image recognition. Today, Google announced
an AI-powered assistant built into its new Pixel phones. But there’s a pivotal
downside to the company’s latest creation: Because of the very nature of
artificial intelligence, our data is less secure than ever before, and
technology companies are now collecting even more personal information about
each one of us.
Re-Defining Privacy
Can we go back to a world of complete privacy? Unfortunately,
the answer is no. We cannot turn back time. There is no completely private
space available to us anymore. Most of the things we do are already registered
as data somewhere, as soon as we do them. Purpose limitation
is not always possible. We have fallen in love with the algorithmically driven
companies that utilize technology to deliver an instantly better user experience.
They pervade all aspects of everyday life. We already live in a world of big
data. In addition, we cannot stop the emergence of artificial intelligence. The
Internet of Things means that all of our devices are already connected (or will
be connected in the near future). Connected and smart cities will continue to
make our lives better. Our telephones already keep track of our moves and our
connections and favourite places. Smart fridges keep track of our groceries.
The list goes on and on. Moreover, perhaps most obviously, we love being
connected and sharing our lives with others, via social media and other online
platforms.
This
does not mean that privacy disappears or that it ceases to matter.
Privacy
is, and will continue to be, enormously important. Rather, privacy has been
transformed by the proliferation of network technologies and the new forms of
unmediated communication that such technologies facilitate.
In
particular, technology has changed the character of the “zone” of privacy that
people expect to be protected. There has been a shift from a settled space
based on a clear distinction between public and private life to a more
uncertain and dynamic zone that is constructed by and between individuals. Privacy
as a well-defined space over which a person has “ownership” has been replaced
by a more complex space that is constantly being negotiated and contested.
Work
is similarly transformed. Businesses are becoming more flexible ecosystems /
networks / platforms. “Lifetime” employment is no longer feasible or even
desirable in a digital world. Working relationships become looser and more
transitory as businesses are introducing more flexible work arrangements in
which “employees” are “hired” for well-defined, but successive “tours of duty”.
Keeping artificial intelligence data
in the shadows
One
way for IT to address data privacy issues with machine learning is to
"mask" the data collected, or anonymize it so that observers cannot
learn specific information about a specific user. Some companies take a similar
approach now with regulatory compliance, where blind enforcement policies use
threat detection to determine if a device follows regulations but do not glean
any identifying information.
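As a concrete illustration, here is a minimal sketch of that masking idea in Python. It is illustrative only, not any vendor's implementation; the salt value and field names are assumptions.

```python
# A minimal sketch of "masking" collected data: replace the direct
# identifier with a salted hash so usage patterns can still be analysed
# without revealing who the user is. The salt and field names are
# illustrative assumptions.
import hashlib

SALT = b"secret-salt-managed-by-IT"  # assumption: rotated and kept private

def mask(identifier: str) -> str:
    """Pseudonymize an identifier with a salted SHA-256 hash."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()[:16]

record = {"user": "alice@example.com", "app": "mail", "minutes_used": 42}
masked = {**record, "user": mask(record["user"])}
print(masked)  # usage data preserved, identity obscured
```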
“With
AI it becomes easier to correlate data ... and remove privacy.”
Brian Katz
Device
manufacturers have also sought to protect users in this way. For example, Apple
iOS 10 added differential privacy, which recognises app and data usage patterns
among groups of users while obscuring the identities of individuals. Tools such
as encryption are also important for IT to maintain data privacy and security. Another
best practice is to separate business and personal apps using technologies such
as containerisation. Enterprise mobility management tools can be set up to look
at only corporate apps but still be able to whitelist and blacklist apps to
prevent malware. That way, IT does not invade users' privacy on personal apps.
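To make the differential-privacy idea more concrete, below is a toy sketch of the underlying mechanism: adding calibrated noise to aggregate counts. Apple's actual iOS 10 implementation is more sophisticated; this only illustrates the principle.

```python
# Toy illustration of the principle behind differential privacy: release
# aggregate counts with calibrated Laplace noise so group-level patterns
# survive while any single user's contribution is obscured. Epsilon and
# the example count are arbitrary assumptions.
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Noisy count; the sensitivity of a counting query is 1."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

users_who_opened_app = 1280  # true aggregate, never released directly
print(dp_count(users_who_opened_app))  # noisy value, safer to share
```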
Reference:-
- https://gizmodo.com/googles-ai-plans-are-a-privacy-nightmare-1787413031
- http://searchmobilecomputing.techtarget.com/news/450419686/Artificial-intelligence-data-privacy-issues-on-the-rise
- https://medium.com/startup-grind/artificial-intelligence-is-taking-over-privacy-is-gone-d9eb131d6eca
Saturday, 25 November 2017
Ethics (Artificial Intelligence) – Bio-terrorism
Anil Kumar Kummari
As Stephen Hawking noted in 2014, “Whereas the
short-term impact of AI depends on who controls it, the long-term impact
depends on whether it can be controlled at all”.
THE
MEDICINE OF TOMORROW
As
the medical wearable and sensor market starts to truly boom, it is logical to
think ahead to what might follow this “wearable revolution.” I think that the
next step will be insideables, digestibles, and digital tattoos.
“Insideables”
means devices implanted into the body, generally just under the skin. In fact,
there are people who already have such implants, which they can use to open up
a laptop, a smartphone, or even the garage door. “Digestibles” are pills or tiny
gadgets that can be swallowed, which could do things like track digestion and
the absorption of drugs. “Digital tattoos” are tattoos with “smart”
capabilities. They might easily measure all of our health parameters and vital
signs.
All
of these teeny-tiny devices might be misused—some could be used to infuse
lethal drugs into an organism or strip a person of their privacy. That is the
reason why it is of the utmost importance to pay attention to the security
aspect of these devices. They can be vulnerable to attacks, and our life will
(quite literally) depend on the safety precautions of the company developing
the sensors. That may not sound too comforting—putting your health in the hands
of a company—but microchip implants are heavily regulated in the US, and so we
are already looking ahead to issues surrounding this advancement.
1) Hacking medical devices
It
has already been proven that pacemakers and insulin pumps can be hacked.
Security experts have warned that someone could be murdered through these
methods at any time. How can we prevent wearable devices that are connected to our
physiological system from being hacked and controlled from a distance?
2) Bioterrorism due to nanotechnology
In
the wildest futuristic scenarios, tiny nanorobots in our bloodstream could
detect diseases. After a few decades, they might even eradicate the word
“symptom”, since no one would experience symptoms any longer. These microscopic
robots would send alerts to our smartphones or digital contact lenses before
disease could develop in our body. If this becomes reality, and micro robots
swimming in bodily fluids are already out there, how can we prevent terrorists
from trying to hack these devices, which control not only our health but also
our lives?
THE TINY ROBOT REVOLUTION
In
the future, nanoscale robots could live in our bloodstream or in our eyes and
prevent any diseases by alerting the patient (or doctor) when a condition is
about to develop. They could interact with our organs and measure every health
parameter, intervening when needed.
Nanobots
are so tiny that it is almost impossible to discover when someone, for example,
puts one into your glass and you swallow it. Some people are afraid that, by
using such tiny devices, total surveillance would become feasible. There is
also the possibility of using nanobots to deliver toxic or even
lethal drugs to the organs.
By
researching ways to identify when these nanobots are being utilized now, we
could potentially prevent their misuse in the future.
AUGMENTING
INTELLIGENCE
In the future, brain implants may empower
humans with superpowers: chips that allow us to hear a
conversation from across a room, give us the ability to see in the dark, let us
control moods, restore our memories, or “download” skills like in The Matrix
movie trilogy. However, implantable neuro-devices might also be used as weapons
in the hands of the wrong people.
Conclusion
Bioterrorism remains a legitimate threat both from
domestic and international terrorist groups. From a public health perspective,
timely surveillance, awareness of syndromes resulting from bioterrorism,
epidemiologic investigation capacity, laboratory diagnostic capacity, the
ability to rapidly communicate critical information on a need-to-know basis and
the ability to manage public communication through the media are vital. Ensuring an
adequate supply of drugs, laboratory reagents, antitoxins and vaccines is essential.
Formulating and putting into practice SOPs/drills at all levels of health care
will go a long way in minimising mortality and morbidity in case of a
bioterrorist attack.
Friday, 24 November 2017
Ethics
(Artificial Intelligence) - Artificial
Stupidity
Anil Kumar Kummari
Futurists
worry about artificial intelligence becoming too intelligent for humanity’s
good. Here and now, however, artificial intelligence can be dangerously dumb.
When complacent humans become over-reliant on dumb AI, people can die. The
lethal track record runs from the Tesla Autopilot crash last year,
to the Air France 447 disaster that killed 228 people in 2009, to the Patriot
missiles that shot down friendly planes in 2003.
That is particularly problematic for the military, which, more than
any other potential user, would employ AI in situations that are literally life
or death. It needs code that can calculate the path to victory amidst the chaos
and confusion of the battlefield, the high-tech Holy Grail we might call the
War Algorithm. While the Pentagon has repeatedly promised it won’t build killer
robots — AI that can pull the trigger without human intervention — people will
still die if intelligence analysis software mistakes a hospital for a terrorist
hide-out, a “cognitive electronic warfare” pod doesn’t jam an incoming missile,
or if a robotic supply truck doesn’t deliver the right ammunition to soldiers
running out of bullets.
“Before
we work on artificial intelligence why don’t we do something about natural
stupidity?” —Steve Polyak
Should we worry about how quickly
artificial intelligence is advancing?
There
are people who are grossly overestimating the progress that has been made.
There are many, many years of small progress behind many of these things,
including mundane things like more data and computer power. The hype is not
about whether the stuff we are doing is useful or not—it is. However, people
underestimate how much more science needs to be done. Moreover, it is difficult
to separate the hype from the reality, because we are seeing these great things
and, to the naked eye, they look magical.
Artificial stupidity. How can we
guard against mistakes?
Intelligence
comes from learning, whether you are human or machine. Systems usually have a
training phase in which they "learn" to detect the right patterns and
act according to their input. Once a system is fully trained, it can then go
into test phase, where it is hit with more examples and we see how it performs.
Obviously,
the training phase cannot cover all possible examples that a system may deal
with in the real world. These systems can be fooled in ways that humans would
not be. For example, random dot patterns can lead a machine to “see” things
that are not there. If we rely on AI to bring us into a new world of labour,
security and efficiency, we need to ensure that the machine performs as
planned, and that people can’t overpower it to use it for their own ends.
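A small sketch can make this concrete. Below, a classifier that scores well on its test set still confidently labels pure random noise as a digit; the dataset and model choices are illustrative assumptions, not a specific system from the text.

```python
# Sketch: a model that passes its test phase can still be fooled by inputs
# the training phase never covered, e.g. random dot patterns "seen" as digits.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Test accuracy:", round(clf.score(X_test, y_test), 3))

# Random dot pattern: no digit is present, yet the model predicts one,
# often with high confidence.
noise = np.random.RandomState(0).uniform(0, 16, size=(1, 64))
probs = clf.predict_proba(noise)[0]
print("Predicted digit:", probs.argmax(), "confidence:", round(probs.max(), 3))
```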
Artificial stupidity as a limitation of artificial intelligence
Artificial stupidity is not just the deliberate introduction of errors into a
computer; it can also be seen as a limitation of artificial intelligence itself.
Dr. Jay Liebowitz argues that "if intelligence and stupidity naturally exist,
and if AI is said to exist, then is there something that might be called
'artificial stupidity'?"
Liebowitz pointed out that the
limitations are:
- Ability to possess and use common sense
- Development of deep reasoning systems
- Ability to vary an expert system's explanation capability
- Ability to get expert systems to learn
- Ability to have distributed expert systems
- Ability to easily acquire and update knowledge
— Liebowitz, 1989, p. 109
Once a system is fully trained, it goes into the test phase, but even then it
has blind spots. Google Maps, for example, shows only the shortest route; in
reality, it may not know that a road has been temporarily closed by the
government.
Since the training phase cannot cover all possible scenarios that a system may
come across, systems can be fooled in ways that humans would not be. It is
therefore important to ensure that machines perform as planned and that people
cannot overpower them for their own benefit. Safeguards such as Asimov's Three
Laws of Robotics should be built in at design time; otherwise, a machine's
artificial stupidity could be exploited if it falls into the hands of terror
groups.
Reference:-
- https://breakingdefense.com/2017/06/artificial-stupidity-when-artificial-intel-human-disaster/
- https://www.technologyreview.com/s/546301/will-machines-eliminate-us/
Thursday, 23 November 2017
Future of
Artificial Intelligence in Cyber Security
Anil Kumar Kummari
With
AI being introduced in every industry, the cyber security space would be no
stranger to it. With advancement, new exploits and vulnerabilities could be
easily identified and analysed to prevent further attacks. Incident response
systems could also benefit greatly from AI. When under attack, the system will
be able to identify the entry point and stop the attack as well as patch the
vulnerability.
Studies
show that in 2016 it took a company, on average, 99 days to realize
that it had been compromised. Although a marked improvement on the 146 days
recorded in 2015, this is still a very long time for attackers to gather all
the information they are looking for. This period is not only enough to steal data but also to manipulate
it without detection. This can have a great impact on the company as it makes
it very difficult for the company to differentiate between the fake and the
actual data.
With
the advancements in AI, hopefully, all of the above problems can be
mitigated.
“With AI it becomes
easier to correlate data… and remove privacy” — Brian Katz
Keeping artificial intelligence data in
the shadows
One
way for IT to address data privacy issues with machine learning is to
"mask" the data collected, or anonymize it so that observers cannot
learn specific information about a specific user. Some companies take a similar
approach now with regulatory compliance, where blind enforcement policies use
threat detection to determine if a device follows regulations but do not glean
any identifying information. Device manufacturers have also sought to protect
users in this way. For example, Apple iOS 10 added differential privacy, which
recognizes app and data usage patterns among groups of users while obscuring
the identities of individuals.
Amazon
Becomes the First to Turn to Artificial Intelligence to Protect Data in the
Cloud…
Given the scale of Amazon Web Services (AWS), it makes plenty of sense for
Amazon's team of engineers and programmers to continue to place a substantial
priority on keeping this sensitive info safe, secure, and out of sight of the
prying eyes of digital intruders. However, the fate of your dealership's data
(and that of countless other organizations) may not actually rest in human
hands at all anymore. As the editorial team over at Forbes magazine explains,
Amazon has blazed a new trail by becoming the first public cloud computing and
storage service provider to turn to artificial intelligence (A.I.) to safeguard
information held within AWS. Known as "Amazon Macie," this new safety measure
leverages the power of machine learning to automatically discover, classify,
and shield stored data on behalf of the service's users. In terms of how Amazon
Macie works, the system utilizes machine learning both to understand the nature
of potentially sensitive information and to find security flaws within user
accounts on AWS. From there, Macie analyses issues and reports them to
customers, including real-time alerts about usage that the A.I. flags as
suspicious.
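For readers who want to poke at this, the sketch below shows how Macie findings can be pulled programmatically with boto3. It assumes Macie is already enabled on the account and credentials are configured; it is a rough sketch, not Amazon's documented example.

```python
# Hedged sketch: list current Amazon Macie findings (e.g. sensitive data
# discovered in S3) and print their severity. Assumes Macie is enabled
# and AWS credentials/region are configured.
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

finding_ids = macie.list_findings(maxResults=25)["findingIds"]
if finding_ids:
    findings = macie.get_findings(findingIds=finding_ids)["findings"]
    for f in findings:
        print(f["severity"]["description"], "-", f["title"])
else:
    print("No findings reported.")
```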
Is artificial intelligence (AI) used to
detect cyber-attacks, and what is its success rate?
Of course,
AI can be used to detect cyber-attacks. There is plenty of academic research
on detecting cyber-attacks using artificial intelligence.
The success
rates reported in that research vary between 85% and 99%.
In
the last few years, in addition to academic research, some products have been
developed to detect cyber-attacks with the help of artificial intelligence,
such as Darktrace. Darktrace claims a success rate of more than 99% with a
very low rate of false positives. For more details, you can check the company’s
website.
AI Solutions for Cyber Security
Automation and false positives
Although
informatics systems are prone to failure and attacks, they are a necessary help
to overwhelmed security engineers. There is a growing shortage of cyber
security specialists, and the mix of high-value actions and routine tasks
should be divided between man and machine. Computers are expected to automatically
perform daily tasks like analysing network traffic, granting access based on
some set of rules and detecting abnormalities, while the cyber security
specialists can work on designing algorithms and studying emerging threats. Removing
false positives is also one of the main tasks that require human assistance and
one of the reasons why AI is not ready to take over security completely.
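As a sketch of the kind of routine task that can be handed to a machine, the snippet below flags abnormal network flows with an unsupervised model and leaves the flagged cases for an analyst to confirm or dismiss as false positives. The features and thresholds are assumptions for illustration.

```python
# Illustrative sketch: automate first-pass anomaly detection on network
# flows, leaving flagged cases for a human analyst. Feature choices
# (bytes, duration, distinct ports) are assumptions, not a standard.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated "normal" flows: [bytes transferred, duration (s), distinct ports]
normal_flows = rng.normal(loc=[500.0, 2.0, 3.0],
                          scale=[100.0, 0.5, 1.0], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

new_flows = np.array([
    [520.0, 2.1, 3.0],      # ordinary-looking traffic
    [90000.0, 0.2, 60.0],   # huge burst across many ports: likely a scan
])
print(model.predict(new_flows))  # 1 = normal, -1 = flag for human review
```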
Predictive analytics
Cyber
threats have become more and more complex. Just gathering data about attacks
like data breaches, malware types, and phishing activity and creating
signatures is no longer enough. The new approach is to monitor a wide number of
factors and identify patterns of what constitutes normal and abnormal activity,
without looking for specific traces of a particular malicious activity, but for
spikes or silent moments. Some companies even pair this with other AI-powered
tools including natural language processing to speed up this process. Staying a
step ahead of hackers will be increasingly difficult, as predictive analysis
can be tricked with randomization.
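A minimal sketch of that "spikes or silent moments" idea: score an activity metric against a known-clean baseline and flag departures in either direction. The baseline length and threshold are arbitrary assumptions.

```python
# Sketch: learn "normal" from a clean baseline window, then flag both
# spikes and silent moments. Baseline length and threshold are assumptions.
import pandas as pd

events = pd.Series([102, 98, 110, 105, 99, 400, 101, 3, 97],
                   name="events_per_minute")

baseline = events.iloc[:5]                  # assumed attack-free period
z = (events - baseline.mean()) / baseline.std()

# |z| > 3 catches the burst (400) and the near-silence (3) alike.
print(events[z.abs() > 3])
```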
Immunity
Learning
from nature is effective not only in engineering but in cyber security as well.
The body’s immune system is one of the best defensive lines in the living
world. AI could be trained to behave like the white cells and antibodies,
neutralizing threats that do not match known patterns without
shutting down the whole system. This approach could be the cure for the adaptive
malware previously discussed. The system learns from experience and becomes
stronger, just like an organism that has been exposed to a disease and
overcome it.
Hands-on Approach
Cybersecurity
powered by AI is just the natural step in protecting vulnerable data. The race
between those aiming to create safe systems and attackers is crossing into new
territory, but machines are far away from taking the lead. Currently, both
parties are restructuring their data and integrating systems. There are
numerous corrective actions necessary from humans. This is a process, composed
of multiple layers, not a one-time action. The defining factor remains the
education of the humans involved, first as users then as protectors.
Reference:-
- http://bigdata-madesimple.com/will-artificial-intelligence-take-over-cyber-security/
Wednesday, 22 November 2017
Ethics
(Artificial Intelligence) -
“Cybersecurity
of sensitive data”
Anil Kumar Kummari
Do
you use Siri, Google Now, Cortana or Alexa? They work by recording your voice,
uploading the recording to the cloud, then processing the words and sending
back the answer. After you have your answer, you forget about the query. But
your recorded voice, the text extracted from it, and the entire context of the
back-and-forth conversations you had are still doing work in the service of the
A.I. that makes virtual assistants work. Everything you say to your virtual
assistant is funnelled into the data-crunching A.I. engines and retained for
analysis. In fact, the artificial intelligence boom is as much about the
availability of massive data sets as it is about intelligent software. The
bigger the data sets, the smarter the A.I.
Where is Artificial Intelligence used right now?
- Siri: Apple’s personal assistant on iPhones and macOS
- Netflix: recommendation engine
- Nest: home automation
- Alexa: Amazon’s smart hub
- Gaming: games like Call of Duty and Far Cry rely heavily on AI
- News generation: companies like Yahoo and AP use AI to write small news stories such as financial summaries, sports recaps, etc.
- Fraud detection
- Customer support: companies have been using small-scale chatbots to automate this process
- Self-driving cars
- Speech recognition
- Robotics
Why Are Criminals
Targeting Sensitive Data?
Adapting
and responding to evolving cyber threats and protecting critical infrastructure
and proprietary business assets are essential for both government agencies and
businesses. “Post-mortem” analyses of breaches offer a treasure trove of
lessons learned and reveal attack tactics, techniques and procedures. Cyber
criminals leverage technology vulnerabilities and trickery to exploit the
human-technology gap — by targeting sensitive passwords, data and applications
regularly used by staff. Data theft is the goal of most recent breaches. Cyber
criminals typically break into vulnerable systems and pivot between systems
using stolen credentials or posing as a third-party contractor to gain access
to valuable data. Targeted confidential data comprises personnel records,
public billing information, credit card numbers, financial or health records
and more. The theft of your city’s legally protected data can result in
significant regulatory fines, loss of public trust and damage to the city’s
reputation. Fortune.com estimates that
in 2016, the cost of data breaches averaged $4 million, or $158 per
record. Medical history, credit card data and Social Security numbers have the
highest cost per stolen record at $355.
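A quick back-of-the-envelope check puts those figures in perspective (assuming the two averages can be combined this way):

```python
# Implied size of an "average" 2016 breach from the Fortune.com figures.
avg_breach_cost = 4_000_000   # USD
cost_per_record = 158         # USD per stolen record
print(round(avg_breach_cost / cost_per_record))  # about 25,300 records
```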
Sensitive Data
Risk Management
Data
is the new currency. Traditional currency and property risk-management
techniques also apply to protecting against cybercrime. Regulated or sensitive
data has monetary value and makes an attractive target for cybercriminals.
Reducing the amount of regulated data stored on hand is equivalent to cash
management practices, such as moving excess cash from registers to a hardened
safe or transporting it to a bank’s vault. Unrestricted and unmonitored
employee access to a large amount of cash is typically prohibited; however,
public agencies often fail to apply the same level of scrutiny for employee access
to regulated or sensitive data.
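As a sketch of applying cash-handling-style scrutiny to data access, the snippet below denies access by default, grants it only to explicitly entitled roles, and logs every attempt for audit. The roles and categories are hypothetical.

```python
# Hypothetical sketch: deny-by-default, role-based access to regulated
# data, with every attempt logged for audit (the data-world analogue of
# limiting and monitoring employee access to cash).
import logging

logging.basicConfig(level=logging.INFO)
ENTITLEMENTS = {
    "billing_clerk": {"public_billing"},
    "hr_officer": {"personnel_records"},
}

def read_record(role: str, category: str) -> bool:
    allowed = category in ENTITLEMENTS.get(role, set())
    logging.info("access %s: role=%s category=%s",
                 "GRANTED" if allowed else "DENIED", role, category)
    return allowed

read_record("billing_clerk", "personnel_records")  # denied, and audited
```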
Whatever
the motive, it is clear that governments are the highest-value targets for
hackers today. Thus, it is critical that agencies invest in strong cyber defences—stronger,
if anything, than those found in the private sector.
As
with modern-day terrorism, cybersecurity has proven daunting because the nature
of the threat is constantly evolving. Each major technological
development—mobile, social, cloud computing—brings a host of new risks.
“With
AI it becomes easier to correlate data ... and remove privacy”
Keeping artificial intelligence data in
the shadows
One
way for IT to address data privacy issues with machine learning is to
"mask" the data collected, or anonymize it so that observers cannot
learn specific information about a specific user. Some companies take a similar
approach now with regulatory compliance, where blind enforcement policies use
threat detection to determine if a device follows regulations but do not glean
any identifying information.
Device
manufacturers have also sought to protect users in this way. For example, Apple
iOS 10 added differential privacy, which recognises app and data usage patterns
among groups of users while obscuring the identities of individuals.
References:-
- https://www.computerworld.com/article/3035595/emerging-technology/artificial-intelligence-needs-your-data-all-of-it.html
- https://remora.com/blog/amazon-macie-machine-learning-cloud-storage-security
Tuesday, 21 November 2017
Ethics
(Artificial Intelligence) - Racist Robots
How Do We Eliminate AI Bias?
Anil Kumar Kummari
We have
already seen glimpses of what might be on the horizon. Programs developed by
companies at the forefront of AI research have resulted in a string of errors
that look uncannily like the darker biases of humanity: a Google image
recognition program labelled the faces of several black people as gorillas; a
LinkedIn advertising program showed a preference for male names in searches,
and a Microsoft chatbot called Tay spent a day learning from Twitter and began
spouting anti-Semitic messages.
Tay Chatbot - Microsoft
In 2016,
Microsoft released a “playful” chatbot named Tay onto Twitter designed to show
off the tech giant’s burgeoning artificial intelligence research. Within 24
hours, it had become one of the internet’s ugliest experiments. By learning
from its interactions with other Twitter users, Tay quickly went from tweeting
about how “humans are super cool,” to claiming “Hitler was right I hate the Jews.”
While it was a public relations disaster for Microsoft, Tay demonstrated an
important issue with machine learning artificial intelligence: That robots can
be as racist, sexist and prejudiced as humans if they acquire knowledge from
text written by humans.
US Risk-Assessment System
“If you want
to take steps towards changing that you can’t just use historical information.”
In May last year, a report claimed that a computer program used by US courts for
risk assessment was biased against black prisoners.
The
Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)
system was much more prone to mistakenly label black defendants as likely to
reoffend, according to an investigation by ProPublica.
As part of a larger examination of the powerful, largely hidden effect of
algorithms in American life, ProPublica obtained the risk scores assigned to more than
7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked
to see how many were charged with new crimes over the next two years, the same
benchmark used by the creators of the algorithm. The score proved remarkably
unreliable in forecasting violent crime: Only 20 percent of the people predicted
to commit violent crimes actually went on to do so.
When a full
range of crimes were taken into account — including misdemeanours such as
driving with an expired license — the algorithm was somewhat more accurate than
a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for
any subsequent crimes within two years.
ProPublica also
turned up significant racial disparities, just as former US Attorney General
Eric Holder had feared. In forecasting
who would re-offend, the algorithm made mistakes with black and white
defendants at roughly the same rate but in very different ways. The formula was
particularly likely to falsely flag black defendants as future criminals,
wrongly labelling them this way at almost twice the rate as white defendants. White
defendants were mislabelled as low risk more often than black defendants.
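To make that measurement concrete, here is an illustrative sketch (with toy numbers, not ProPublica's data) of how such a disparity is computed: compare false-positive rates, i.e. the share of defendants labelled high risk among those who did not re-offend, across groups.

```python
# Toy illustration of measuring the disparity: false-positive rate
# (labelled high risk but did not re-offend) per group. Numbers are
# invented to show the mechanics, not ProPublica's dataset.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black"] * 4 + ["white"] * 4,
    "high_risk":  [1, 1, 0, 1,  0, 1, 0, 0],   # algorithm's label
    "reoffended": [0, 1, 0, 0,  0, 1, 1, 0],   # observed over two years
})

for group, sub in df.groupby("group"):
    did_not_reoffend = sub[sub["reoffended"] == 0]
    fpr = did_not_reoffend["high_risk"].mean()
    print(group, "false-positive rate:", round(fpr, 2))
```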
BRISHA BORDEN: prior offenses: 4 juvenile misdemeanors; risk score: HIGH RISK (8); subsequent offenses: none.
VERNON PRATER: prior offenses: 2 armed robberies, 1 attempted armed robbery; risk score: LOW RISK (3); subsequent offenses: 1 grand theft.
Could defendants’
prior crimes or the type of crimes they were arrested for explain this
disparity? No. We ran a statistical test that isolated the effect of race from
criminal history and recidivism, as well as from defendants’ age and gender.
Black defendants were still 77 percent more likely to be pegged as at higher
risk of committing a future violent crime and 45 percent more likely to be
predicted to commit a future crime of any kind.
Maxine
Mackintosh, a leading expert in health data, said the problem is mainly the
fault of skewed data being used by robotic platforms. Machine learning may be
inherently racist and sexist if it learns from humans and will typically favour
white men, research has shown. Machine-learning algorithms, which will mimic
humans and society’s actions, will have an unfair bias against women and ethnic
minorities.
The white
suspect had a prior offence of attempted burglary and the black suspect a prior
offence of resisting arrest. Seemingly giving no indication as to why, the
algorithm gave the black suspect a higher chance of reoffending, while the
white suspect was considered ‘low risk’. But over the next two years, the black
suspect stayed clear of illegal activity while the white suspect was arrested
three more times for drug possession.
Researchers at Boston University have demonstrated the inherent bias in AI
algorithms by training a machine to analyse text collected from Google News.
When they asked the machine to complete the sentence “Man is to computer
programmer as woman is to x”, the machine answered “homemaker” (a rough sketch
of this kind of probe appears below).
Stopping racist, sexist robots: a challenge for AI
Health data expert Maxine Mackintosh said that the problem lies with society,
and not the robots. She said: “These big data are really a social mirror – they
reflect the biases and inequalities we have in society. If you want to take
steps towards changing that you can’t just use historical information.”
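The sketch below shows how such an analogy probe can be reproduced with gensim's pretrained Google News vectors; the model identifier is gensim's registered name, and the roughly 1.6 GB download is a real cost.

```python
# Hedged sketch of the analogy probe described above, using gensim's
# pretrained Google News embeddings (a large one-time download).
import gensim.downloader as api

model = api.load("word2vec-google-news-300")

# "Man is to computer_programmer as woman is to ...?"
results = model.most_similar(positive=["woman", "computer_programmer"],
                             negative=["man"], topn=3)
for word, score in results:
    print(word, round(score, 3))  # "homemaker" is the widely reported top answer
```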
“People expected AI to be unbiased; that’s just
wrong”
This is the
threat of AI in the near term. It is not some sci-fi scenario where robots take
over the world. It is AI-powered services making decisions we do not understand,
decisions that turn out to hurt certain groups of people.
Reference:-
- https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing