Ethics (Artificial Intelligence) – Defending Our Privacy
Anil Kumar Kummari
Nowadays everyone has access to high-end technology through their mobile phones, and in exchange we hand over our information: fingerprints, eye scans (retina recognition), and health apps that also track our movements. All of this information sits in one place; once it is hacked or misused, we are at risk. As artificial intelligence proliferates, companies and governments are aggregating enormous data sets to feed their AI initiatives.
Although privacy is not a new concept in
computing, the growth of aggregated data magnifies privacy challenges and leads
to extreme ethical risks such as unintentionally building biased AI systems,
among many others. Privacy and artificial intelligence are both complex topics.
There are no easy or simple answers because solutions lie at the shifting and
conflicted intersection of technology, commercial profit, public policy, and
even individual and cultural attitudes. Data protection officials from more than 60 countries have expressed concern over the challenges posed by the emerging fields of robotics, artificial intelligence and machine learning, whose outcomes remain hard to predict. The global privacy regulators also discussed the
difficulties of regulating encryption standards and how to balance law
enforcement agency access to information with personal privacy rights.
Such technological developments “pose challenges for a consent model of data collection,” and may lead to an increase in data privacy risks, John Edwards, New Zealand privacy commissioner, said at the 38th International Data Protection and Privacy Commissioners' Conference in Marrakesh, Morocco. For example, decision-making machines may be used to “engender or manipulate the trust of the user,” and could become “all seeing, all remembering in-house guests” that collect personal data via numerous sensors. Peter Fleischer,
global privacy counsel at Alphabet Inc.'s Google, said that established privacy
principles would continue to be relevant for new technologies, but machine
learning raised particular problems, such as machines finding “ways to
re-identify data.”
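Fleischer's re-identification concern is easy to illustrate. Below is a minimal, hypothetical Python sketch of a classic linkage attack: a “de-identified” health table and a public register are joined on quasi-identifiers (ZIP code, birth year, sex), and names fall out. All data and field names here are invented for illustration.

```python
# Hypothetical linkage attack: neither table pairs a name with a diagnosis,
# but joining them on shared quasi-identifiers re-identifies every record.

deidentified_health = [
    {"zip": "02139", "birth_year": 1984, "sex": "M", "diagnosis": "asthma"},
    {"zip": "94105", "birth_year": 1990, "sex": "F", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "Bob Smith", "zip": "02139", "birth_year": 1984, "sex": "M"},
    {"name": "Ann Jones", "zip": "94105", "birth_year": 1990, "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(health_rows, register_rows):
    """Join the two tables on quasi-identifiers alone."""
    for h in health_rows:
        for r in register_rows:
            if all(h[k] == r[k] for k in QUASI_IDENTIFIERS):
                yield r["name"], h["diagnosis"]

for name, diagnosis in link(deidentified_health, public_register):
    print(f"{name} -> {diagnosis}")  # Bob Smith -> asthma, Ann Jones -> diabetes
```

With only two tiny tables the join is trivial; machine learning simply automates the same correlation at the scale of millions of records and many more auxiliary data sets.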
The emerging technologies may have a broad impact across various industries. “Humans teaching machines to learn” was a “revolution in the making” that may have societal consequences cutting across numerous economic sectors, Fleischer said. For example, data-driven machines may be able to analyse sensitive medical data and make medical diagnoses, potentially revolutionizing the health-care industry, he said at the conference. Machines that learn would act “like a chef: see the ingredients and comes up with something new,” he said.
“Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct.”
Nick Bostrom, Professor in AI Ethics and Philosophy at the University of Oxford
Google CEO Sundar Pichai thinks we are now living in an “artificial intelligence-first world.” He's probably right. Artificial intelligence is all the rage in Silicon Valley these days, as technology companies race to build the first killer app that utilizes machine learning and image recognition. Recently, Google announced an AI-powered assistant built into its new Pixel phones. But there is a significant downside to the company's latest creation: because of the very nature of artificial intelligence, our data is less secure than ever before, and technology companies are now collecting even more personal information about each one of us.
Re-Defining Privacy
Can we turn back time? Unfortunately, the answer is no. There is no completely private space available to us anymore. Most of the things we do are already registered as data somewhere, as soon as we do them. Purpose limitation
is not always possible. We have fallen in love with the algorithmically driven
companies that utilize technology to deliver an instantly better user experience.
They pervade all aspects of everyday life. We already live in a world of big
data. In addition, we cannot stop the emergence of artificial intelligence. The
Internet of Things means that all of our devices are already connected (or will
be connected in the near future). Connected and smart cities will continue to
make our lives better. Our telephones already keep track of our movements, our connections and our favourite places. Smart fridges keep track of our groceries.
The list goes on and on. Moreover, perhaps most obviously, we love being
connected and sharing our lives with others, via social media and other online
platforms.
This does not mean that privacy disappears or that it ceases to matter. Privacy is, and will continue to be, enormously important. Rather, privacy has been transformed by the proliferation of network technologies and the new forms of unmediated communication that such technologies facilitate.
In particular, technology has changed the character of the “zone” of privacy that
people expect to be protected. There has been a shift from a settled space
based on a clear distinction between public and private life to a more
uncertain and dynamic zone that is constructed by and between individuals. Privacy
as a well-defined space over which a person has “ownership” has been replaced
by a more complex space that is constantly being negotiated and contested.
Work is similarly transformed. Businesses are becoming flexible ecosystems, networks and platforms. “Lifetime” employment is no longer feasible, or even desirable, in a digital world. Working relationships become looser and more transitory as businesses introduce flexible work arrangements in which “employees” are “hired” for well-defined but successive “tours of duty”.
Keeping artificial intelligence data in the shadows
One way for IT to address data privacy issues with machine learning is to
"mask" the data collected, or anonymize it so that observers cannot
learn specific information about a specific user. Some companies take a similar
approach now with regulatory compliance, where blind enforcement policies use
threat detection to determine if a device follows regulations but do not glean
any identifying information.
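As a rough illustration of what such masking can look like in practice, here is a minimal Python sketch with invented field names and records: direct identifiers are replaced with a keyed hash, and quasi-identifiers are coarsened before the data reaches an analytics pipeline. This sketches the general technique, not any particular vendor's implementation.

```python
import hashlib
import hmac

# Secret salt kept separate from the data set (illustrative; in practice
# it would live in a key-management system, never alongside the data).
SALT = b"replace-with-a-secret-key"

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a keyed hash: records can still be
    linked to each other, but not trivially back to the person."""
    return hmac.new(SALT, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def generalize_age(age: int) -> str:
    """Coarsen a quasi-identifier (exact age -> 10-year bucket)."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

def mask_record(record: dict) -> dict:
    """Strip or blur identifying fields before analytics sees the record."""
    return {
        "user": pseudonymize(record["user_id"]),
        "age_range": generalize_age(record["age"]),
        "steps": record["steps"],  # the signal we actually want to analyse
    }

if __name__ == "__main__":
    raw = {"user_id": "alice@example.com", "age": 34, "steps": 8214}
    print(mask_record(raw))  # keyed hash and age bucket instead of identifiers
```

Note that pseudonymisation alone is not full anonymisation: as Fleischer warned above, machine learning makes it easier to re-identify such records by correlating them with other data sets.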
“With AI it becomes easier to correlate data ... and remove privacy.”
Brian Katz
Device manufacturers have also sought to protect users in this way. For example, Apple iOS 10 added differential privacy, which recognises app and data usage patterns among groups of users while obscuring the identities of individuals; a minimal sketch of the idea follows below. Tools such as encryption are also important for IT to maintain data privacy and security. Another best practice is to separate business and personal apps using technologies such as containerisation. Enterprise mobility management tools can be set up to look only at corporate apps while still whitelisting and blacklisting apps to prevent malware. That way, IT does not invade users' privacy on personal apps.
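To make the differential privacy idea concrete, here is a minimal Python sketch of randomized response, the textbook mechanism rather than Apple's actual (more elaborate) implementation: each user perturbs their true answer before reporting it, so any individual report is deniable, yet the population-level rate can still be estimated from the aggregate.

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.75) -> bool:
    """Report the true bit with probability p_honest, otherwise a coin flip.
    No single report reveals the user's true answer with certainty."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports: list, p_honest: float = 0.75) -> float:
    """Invert the known noise to recover the population-level rate:
    observed = p_honest * true + (1 - p_honest) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_honest) * 0.5) / p_honest

if __name__ == "__main__":
    # Simulate 100,000 users, 30% of whom actually use a given app.
    truths = [random.random() < 0.30 for _ in range(100_000)]
    reports = [randomized_response(t) for t in truths]
    print(f"estimated usage rate: {estimate_true_rate(reports):.3f}")  # ~0.300
```

The parameter p_honest sets the trade-off: more noise gives each individual stronger deniability but makes the aggregate estimate less precise, which is precisely the kind of balance between utility and privacy the regulators in Marrakesh were debating.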
References:
- https://gizmodo.com/googles-ai-plans-are-a-privacy-nightmare-1787413031
- http://searchmobilecomputing.techtarget.com/news/450419686/Artificial-intelligence-data-privacy-issues-on-the-rise
- https://medium.com/startup-grind/artificial-intelligence-is-taking-over-privacy-is-gone-d9eb131d6eca