Ethics (Artificial Intelligence) - “Cybersecurity of sensitive data”
Anil Kumar Kummari
Do you use Siri, Google Now, Cortana or Alexa? They work by recording your voice, uploading the recording to the cloud, processing the words there and sending back the answer. After you have your answer, you forget about the query. But your recorded voice, the text extracted from it, and the entire context of the back-and-forth conversation are still doing work in the service of the A.I. that makes virtual assistants possible. Everything you say to your virtual assistant is funnelled into data-crunching A.I. engines and retained for analysis. In fact, the artificial intelligence boom is as much about the availability of massive data sets as it is about intelligent software: the bigger the data sets, the smarter the A.I.
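The request cycle described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual API; the names (`handle_query`, `retained_log`) and the string-based "transcription" are made up for the example:

```python
# Minimal sketch of a voice-assistant request cycle, showing that the query
# and its transcript outlive the answer the user sees.

retained_log = []  # everything the "cloud" keeps for later analysis


def transcribe(audio: bytes) -> str:
    # Stand-in for cloud speech-to-text.
    return audio.decode("utf-8")


def answer(text: str) -> str:
    # Stand-in for the assistant's response engine.
    return f"You asked: {text}"


def handle_query(audio: bytes) -> str:
    text = transcribe(audio)
    reply = answer(text)
    # The user only sees `reply`, but audio, transcript and reply are all retained.
    retained_log.append({"audio": audio, "transcript": text, "reply": reply})
    return reply


reply = handle_query(b"what is the weather")
print(reply)              # You asked: what is the weather
print(len(retained_log))  # 1 -- the service keeps what the user forgets
```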
Artificial Intelligence Right Now?
- Siri: Apple’s personal assistant on iPhones and macOS
- Netflix: recommendation engine
- Nest: home automation
- Alexa: Amazon’s smart hub
- Gaming: games like Call of Duty and Far Cry rely heavily on AI
- News generation: companies like Yahoo and AP use AI to write short news stories such as financial summaries and sports recaps
- Fraud detection
- Customer support: companies have been using small-scale chatbots to automate this process
- Self-driving cars
- Speech recognition
- Robotics
Why Are Criminals Targeting Sensitive Data?
Adapting and responding to evolving cyber threats and protecting critical infrastructure and proprietary business assets are essential for both government agencies and businesses. “Post-mortem” analyses of breaches offer a treasure trove of lessons learned and reveal attack tactics, techniques and procedures. Cyber criminals leverage technology vulnerabilities and trickery to exploit the human-technology gap, targeting sensitive passwords, data and applications regularly used by staff. Data theft is the goal of most recent breaches: cyber criminals typically break into vulnerable systems and pivot between them using stolen credentials, or pose as a third-party contractor to gain access to valuable data. Targeted confidential data includes personnel records, public billing information, credit card numbers, financial or health records and more. The theft of your city’s legally protected data can result in significant regulatory fines, loss of public trust and damage to the city’s reputation. Fortune.com estimates that in 2016 the cost of data breaches averaged $4 million, or $158 per record; medical history, credit card data and Social Security numbers have the highest cost per stolen record, at $355.
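The per-record figures above support a simple back-of-the-envelope estimate. The record counts in this sketch are illustrative, not from the article; only the $158 and $355 rates come from the cited estimate:

```python
# Back-of-the-envelope breach cost, using the 2016 per-record figures quoted above.
COST_PER_RECORD = 158     # average cost per stolen record (USD)
COST_PER_SENSITIVE = 355  # medical / credit card / Social Security records (USD)


def breach_cost(records: int, sensitive: int = 0) -> int:
    """Estimated cost: ordinary records at $158 each, sensitive records at $355 each."""
    return records * COST_PER_RECORD + sensitive * COST_PER_SENSITIVE


# A breach of ~25,000 ordinary records lands near the $4M average the article cites.
print(breach_cost(25_000))         # 3950000
print(breach_cost(10_000, 5_000))  # 3355000
```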
Sensitive Data Risk Management
Data is the new currency. Traditional currency and property risk-management techniques also apply to protecting against cybercrime. Regulated or sensitive data has monetary value and makes an attractive target for cybercriminals. Reducing the amount of regulated data kept on hand is analogous to cash-management practices such as moving excess cash from registers to a hardened safe or transporting it to a bank’s vault. Unrestricted and unmonitored employee access to large amounts of cash is typically prohibited; yet public agencies often fail to apply the same level of scrutiny to employee access to regulated or sensitive data.
Whatever the motive, it is clear that governments are the highest-value targets for hackers today. Thus, it is critical that agencies invest in cyber defences that are, if anything, stronger than those found in the private sector.

As with modern-day terrorism, cybersecurity has proven daunting because the nature of the threat is constantly evolving. Each major technological development (mobile, social, cloud computing) brings a host of new risks.
“With AI it becomes easier to correlate data ... and remove privacy”
Keeping artificial intelligence data in the shadows
One way for IT to address data privacy issues with machine learning is to "mask" the collected data, anonymizing it so that observers cannot learn specific information about any specific user. Some companies take a similar approach now with regulatory compliance, where blind enforcement policies use threat detection to determine whether a device follows regulations but do not glean any identifying information.
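A minimal sketch of such masking, assuming a simple salted-hash pseudonymization scheme. The field names and the hard-coded salt are illustrative only; this is not any particular compliance product, and a real deployment would keep the salt in a managed secret store:

```python
import hashlib
import hmac

# Illustrative secret; in practice this would come from a secrets manager.
SECRET_SALT = b"example-only-secret"


def pseudonymize(user_id: str) -> str:
    """Keyed hash of an identifier: stable per user (so events can still be
    grouped), but not reversible without the salt."""
    return hmac.new(SECRET_SALT, user_id.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


def mask_record(record: dict) -> dict:
    """Replace fields that identify the user; keep the analytic fields."""
    return {
        "user": pseudonymize(record["email"]),  # direct identifier replaced
        "event": record["event"],               # analytic signal kept
    }


masked = mask_record({"email": "alice@example.com", "event": "app_open"})
print("alice@example.com" in str(masked))  # False -- raw identifier is gone
print(masked["event"])                     # app_open
```

Because the hash is keyed and deterministic, analysts can still count events per (pseudonymous) user, which is the point of masking rather than deleting the data outright.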
Device manufacturers have also sought to protect users in this way. For example, Apple iOS 10 added differential privacy, which recognises app and data usage patterns among groups of users while obscuring the identities of individuals.
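The core idea behind this kind of local differential privacy can be illustrated with classic randomized response (a simplified sketch, not Apple's actual mechanism): each device randomly perturbs its true answer before reporting, so no single report is trustworthy, yet the population-level rate can still be estimated from the aggregate:

```python
import random


def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """With probability p report the truth, otherwise report the opposite.
    No individual report reveals the true answer with certainty."""
    return truth if random.random() < p else not truth


def estimate_true_rate(reports: list[bool], p: float = 0.75) -> float:
    """Invert the noise: observed = p*t + (1-p)*(1-t)  =>  t = (observed - (1-p)) / (2p - 1)."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)


random.seed(0)
# 10,000 simulated users, 30% of whom truly have the sensitive attribute.
population = [i < 3_000 for i in range(10_000)]
reports = [randomized_response(t) for t in population]
print(round(estimate_true_rate(reports), 2))  # close to 0.30
```

Each individual report is deniable (a "yes" is the opposite of the truth a quarter of the time), yet across many users the noise averages out, which is exactly the group-level/individual-level trade-off the paragraph describes.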
References:
- https://www.computerworld.com/article/3035595/emerging-technology/artificial-intelligence-needs-your-data-all-of-it.html
- https://remora.com/blog/amazon-macie-machine-learning-cloud-storage-security

