
ARTIFICIAL INTELLIGENCE

 What is AI?
From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science
fiction often portrays AI as robots with human-like characteristics, AI can encompass anything
from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to
perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car).
However, the long-term goal of many researchers is to create general AI (or strong AI). While
narrow AI may outperform humans at whatever its specific task is, like playing chess or solving
equations, general AI would outperform humans at nearly every cognitive task.

 Why Research AI Safety?


In the near term, the goal of keeping AI’s impact on society beneficial motivates research in many
areas, from economics and law to technical topics such as verification, validity, security and
control. Whereas it may be little more than a minor nuisance if your laptop crashes or gets
hacked, it becomes all the more important that an AI system does what you want it to do if it
controls your car, your airplane, your pacemaker, your automated trading system or your power
grid. Another near-term challenge is preventing a devastating arms race in lethal autonomous
weapons.

In the long term, an important question is what will happen if the quest for strong AI succeeds and
an AI system becomes better than humans at all cognitive tasks. As pointed out by I. J. Good in
1965, designing smarter AI systems is itself a cognitive task. Such a system could potentially
undergo recursive self-improvement. By inventing revolutionary new technologies, such a
superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI
might be the biggest event in human history. Some experts have expressed concern, though, that it
might also be the last, unless we learn to align the goals of AI with ours before it becomes
superintelligent.

 Impact of AI on human life:


Over the next five years, we are about to witness the world we live in entirely disrupted by
improvements in artificial intelligence (AI) and machine learning. Children today are growing up
with AI assistants in their homes (Google Assistant, Siri and Alexa), to the point that you might
consider their presence an extension of co-parenting.
“Everything we love about civilization is a product of intelligence, so
amplifying our human intelligence with artificial intelligence has the potential
of helping civilization flourish like never before, as long as we manage to keep
the technology beneficial.”
Max Tegmark, President of the Future of Life Institute

1. Transportation:
The transportation industry looks like it will be the first to be completely disrupted by artificial
intelligence. In fact, a lot of the impact of AI is already taking place. Uber and Lyft are both
working on self-driving technology. GPS navigation software company Waze (which was acquired by
Google in 2013) quietly released a new app called CarPool that converts its 50-plus million users
into drivers and allows users to commute to work together for a fee.

It seems that Tesla has already beaten most other competitors to market with its autopilot
feature. Tesla now has over 300 million miles driven on autopilot, and all Tesla vehicles on the
road today are only a software update away from fully autonomous driving capability. Tesla is
also looking to disrupt the trucking industry with its new autonomous vehicle, the Semi.

2. Criminal Justice:
The next industry disrupted by artificial intelligence is the criminal justice system.
Advancements in facial recognition are making the fingerprint obsolete. Tech startups are using
AI to automate legal work. Meanwhile, some courts are already using AI to sentence criminals and
determine parole eligibility.

But the criminal justice system is the one area where too much innovation could be a terrible
thing for society and lead us into a dystopian future if we are not careful. At SXSW 2018,
Elon Musk said, “AI is far more dangerous than nukes. Far. So why do we have no regulatory
oversight?”

Without proper government regulations of artificial intelligence and machine learning, we are at
risk of major disruption to our democracy:

 Does the government’s use of AI require a warrant to search your online data?
 Can AI be used to listen in on American citizens’ phone calls without a warrant?
 How can you subpoena an AI algorithm to testify so you can face your accuser in a court
of law?
 How do we handle malpractice when AI recommends improper handling of a legal case?

These are just a few of the legal questions raised when introducing autonomous, decision-making
technology into our criminal justice system.

3. Advertising:
Finally, artificial intelligence is going to take targeted/personalized advertising to a whole other
level. If you think the Facebook scandal was bad, then you have no idea what’s in store in the
next decade. Advertisers are already able to predict what types of ads emotionally impact your
purchasing behavior. As time goes on, ads are going to continue to become more tailored to the
individual. Imagine Amazon’s Alexa slipping sponsored messages into a natural conversation or
personalized augmented reality billboard ads that know you by name.

Even today, the impact AI is having on our society cannot be ignored. However, if you want to
have a competitive edge and you are willing to prepare for these changes now, there is still
plenty of time to be ahead of the curve.

 How can AI be dangerous?


Most researchers agree that a super intelligent AI is unlikely to exhibit human emotions like love
or hate, and there is no reason to expect AI to become intentionally benevolent or malevolent.
Instead, when considering how AI might become a risk, experts think two scenarios most likely:

1. The AI is programmed to do something devastating:


Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands
of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms
race could inadvertently lead to an AI war that also results in mass casualties. To avoid being
thwarted by the enemy, the weapons would be designed to be extremely difficult to simply “turn
off,” so humans could plausibly lose control of such a situation.

2. The AI is programmed to do something beneficial, but it develops a destructive
method for achieving its goal:
This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly
difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might
get you there chased by helicopters and covered in vomit, doing not what you wanted but
literally what you asked for. If a superintelligent system is tasked with an ambitious
geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human
attempts to stop it as a threat to be met.
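The airport example can be sketched as a toy optimization. Everything here is a made-up illustration (the routes, times, discomfort scores, and weighting are all invented for this sketch, not taken from any real planner): an objective that scores only speed picks the route we would never actually want, while an objective that also penalizes discomfort picks the sane one.

```python
# Toy sketch of goal misalignment: the optimizer does exactly what its
# objective says, not what the passenger meant.

routes = [
    # (name, minutes to airport, discomfort: 0 = smooth, 10 = "covered in vomit")
    ("run red lights, evade police", 12, 10),
    ("normal highway route",         25,  1),
]

def literal_objective(route):
    """'As fast as possible', taken literally: minimize minutes only."""
    _, minutes, _ = route
    return minutes

def aligned_objective(route, discomfort_weight=5):
    """A richer objective: travel time plus a penalty for discomfort."""
    _, minutes, discomfort = route
    return minutes + discomfort_weight * discomfort

fastest = min(routes, key=literal_objective)
preferred = min(routes, key=aligned_objective)

print(fastest[0])    # the literal objective picks the reckless route
print(preferred[0])  # the richer objective picks the route we meant
```

The point of the sketch is that nothing in the optimizer is malicious; the reckless plan wins under the literal objective simply because every preference we forgot to encode carries zero weight.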

As these examples illustrate, the concern about advanced AI isn’t malevolence but competence.
A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t
aligned with ours, we have a problem. You probably aren’t an evil ant-hater who steps on ants out
of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in
the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place
humanity in the position of those ants.
