Ethics (Artificial Intelligence) - Artificial Stupidity
Anil Kumar Kummari
Futurists
worry about artificial intelligence becoming too intelligent for humanity’s
good. Here and now, however, artificial intelligence can be dangerously dumb.
When complacent humans become over-reliant on dumb AI, people can die. The
lethal track record runs from last year’s Tesla Autopilot crash, to the Air
France 447 disaster that killed 228 people in 2009, to the Patriot missiles
that shot down friendly aircraft in 2003.
War Algorithm
That is particularly problematic for the military, which, more than any other
potential user, would employ AI in situations that are literally life or
death. It needs code that can calculate the path to victory amid the chaos
and confusion of the battlefield, the high-tech Holy Grail we call the
War Algorithm. While the Pentagon has repeatedly promised it won’t build killer
robots — AI that can pull the trigger without human intervention — people will
still die if intelligence analysis software mistakes a hospital for a terrorist
hide-out, a “cognitive electronic warfare” pod doesn’t jam an incoming missile,
or if a robotic supply truck doesn’t deliver the right ammunition to soldiers
running out of bullets.
“Before
we work on artificial intelligence why don’t we do something about natural
stupidity?” —Steve Polyak
Should we worry about how quickly
artificial intelligence is advancing?
There
are people who are grossly overestimating the progress that has been made.
There are many, many years of small progress behind many of these things,
including mundane things like more data and computing power. The hype is not
about whether the work being done is useful; it clearly is. However, people
underestimate how much more science needs to be done. Moreover, it is difficult
to separate the hype from the reality because we are seeing these great things and
to the naked eye, they look magical.
Artificial stupidity. How can we
guard against mistakes?
Intelligence
comes from learning, whether you are human or machine. Systems usually have a
training phase in which they "learn" to detect the right patterns and
act according to their input. Once a system is fully trained, it can then go
into test phase, where it is hit with more examples and we see how it performs.
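The training and test phases described above can be sketched in a few lines. The 1-nearest-neighbour model and the toy data below are illustrative assumptions of my own, not anything from the systems discussed in this post:

```python
# Toy illustration of the training and test phases described above.
# A 1-nearest-neighbour "model" memorises labelled points during the
# training phase, then is evaluated on examples it has never seen.

def train(examples):
    """Training phase: this trivial model simply stores the labelled data."""
    return list(examples)

def predict(model, x):
    """Label a new point with the label of its nearest training point."""
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

def evaluate(model, held_out):
    """Test phase: measure accuracy on examples unseen during training."""
    correct = sum(1 for x, label in held_out if predict(model, x) == label)
    return correct / len(held_out)

training_data = [(0.1, "low"), (0.2, "low"), (0.8, "high"), (0.9, "high")]
held_out_data = [(0.15, "low"), (0.85, "high"), (0.5, "high")]

model = train(training_data)
print(evaluate(model, held_out_data))  # 2/3: the ambiguous 0.5 point is misclassified
```

Even this toy model shows the gap described next: the held-out point at 0.5 lies outside anything the training phase covered, and the model gets it wrong.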
Obviously,
the training phase cannot cover all possible examples that a system may deal
with in the real world. These systems can be fooled in ways that humans would
not be. For example, random dot patterns can lead a machine to “see” things
that are not there. If we rely on AI to bring us into a new world of labour,
security and efficiency, we need to ensure that the machine performs as
planned, and that people can’t overpower it to use it for their own ends.
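A concrete way to see how such fooling works is to attack the simplest possible model. The sketch below is an illustrative assumption of mine (a hand-rolled linear classifier and a sign-aligned perturbation, in the spirit of published "fast gradient sign" attacks), not code from any deployed system:

```python
# A linear classifier scores an input by a weighted sum. An attacker who
# knows the weights can add a tiny pattern, aligned with their signs,
# that flips the decision even though the input barely changes.

weights = [0.5, -1.0, 0.8, -0.3]

def classify(x):
    score = sum(w * xi for w, xi in zip(weights, x))
    return "positive" if score > 0 else "negative"

def perturb(x, eps):
    """Nudge every feature by eps in the direction that raises the score."""
    return [xi + eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

x = [0.1, 0.2, 0.1, 0.3]       # honest input; score is -0.16, so "negative"
print(classify(x))             # negative
x_adv = perturb(x, eps=0.2)    # each feature moves by at most 0.2
print(classify(x_adv))         # positive: a small crafted pattern flipped it
```

To a human the perturbed input looks essentially unchanged, which is exactly the failure mode the "random dot pattern" examples exploit.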
Artificial stupidity as a limitation of artificial intelligence
Artificial stupidity is not just the deliberate introduction of errors into a
computer; it can also be seen as a limitation of machine intelligence itself.
Dr. Jay Liebowitz argues that “if intelligence and stupidity naturally exist,
and if AI is said to exist, then is there something that might be called
‘artificial stupidity’?”
Liebowitz pointed out that the limitations are:
- Ability to possess and use common sense
- Development of deep reasoning systems
- Ability to vary an expert system’s explanation capability
- Ability to get expert systems to learn
- Ability to have distributed expert systems
- Ability to easily acquire and update knowledge
— Liebowitz, 1989, p. 109
Google Maps, for example, only shows the shortest route; in reality, it may
not know that a road has been closed by a temporary government order.
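To make the map example concrete: a route planner is only as good as the graph it searches. In the sketch below, the road network, place names, and distances are all invented for illustration; the point is that Dijkstra’s algorithm happily recommends a closed road until someone updates the data:

```python
import heapq

def shortest_path(graph, start, goal):
    """Plain Dijkstra: returns (distance, path) over the given graph."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, cost in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (dist + cost, nbr, path + [nbr]))
    return float("inf"), []

# Invented road network; distances in km.
roads = {
    "home": {"bridge": 2, "ring_road": 5},
    "bridge": {"office": 3},
    "ring_road": {"office": 6},
}

print(shortest_path(roads, "home", "office"))  # (5, ['home', 'bridge', 'office'])

# The bridge is closed by a temporary order, but until the map data is
# updated the planner keeps routing drivers over it. Removing the edge
# is the update the real system never received.
roads["home"].pop("bridge")
print(shortest_path(roads, "home", "office"))  # (11, ['home', 'ring_road', 'office'])
```

The algorithm is correct both times; the stupidity lies in the stale data it was given.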
It is important to ensure that machines perform as planned and that people
cannot overpower them for their own benefit. At the time of building, a
machine should be made to follow the Three Laws of Robotics; otherwise,
artificially stupid machines could end up in the hands of terror groups.
References:
- https://breakingdefense.com/2017/06/artificial-stupidity-when-artificial-intel-human-disaster/
- https://www.technologyreview.com/s/546301/will-machines-eliminate-us/


