
Artificial Intelligence is Dangerous

[First Affirmative speaker]


Good morning/afternoon Mr/Madam Chairman and fellow students.
I present myself, Goorchurn Jeetish, as the first speaker of the affirmative team, and
my friend Tanveer Ramdul as the second speaker.
The topic for our debate is (that) "Artificial Intelligence Is Dangerous." We define the
topic as the attempt to reproduce superintelligent human capabilities, with
pernicious, if not lethal, repercussions.

We the affirmative team believe that this statement is true.

Today, as the first speaker, I will be talking to you about the use of A.I.-driven
autonomous weapons and the threats of A.I. to fundamental rights and democracy.

My second speaker will be talking about job automation and 'deepfakes'.
My first point is A.I.-driven autonomous weapons. These are dangerous because they
are ideal for tasks such as assassinations, destabilizing nations, subduing
populations and selectively killing a particular ethnic group. Not everyone agrees that
A.I. is more dangerous than nukes [nuclear weapons]. But what if A.I. decides to
launch nukes -- or, say, biological weapons -- sans human intervention? Or, what if
an enemy manipulates data to return A.I.-guided missiles whence they came? Both
are possibilities. And both would be disastrous. If any major military power pushes
ahead with A.I. weapon development, a global A.I. arms race is virtually inevitable,
and this will lead to "a tech Cold War" among world powers. The endpoint of this technological
trajectory is obvious: autonomous weapons will become the Kalashnikovs of
tomorrow. An example of an autonomous weapon in use today is the Israeli Harpy drone,
which is programmed to hunt for specific targets. Such weapons have been nicknamed
"slaughterbots". These "slaughterbots" are not merely the stuff of fiction. One such
drone [not the Harpy drone, by the way] nearly killed the president of Venezuela in
2018, and another was used in an attempted assassination of the Iraqi Prime Minister at his
residence in November 2021. A.I. is limited by its lack of common sense and of the
human ability to reason across domains. No matter how much you train an autonomous
weapon, the limitation on domain will keep it from fully understanding the
consequences of its actions.
[e.g. Nicolás Maduro, Al-Kadhimi]

Now to my second point, which is the threats of A.I. to fundamental rights and
democracy. The dawn of an AI-powered technological era marks a
turning point in the history of humanity. AI-powered facial recognition systems are
arming governments with unprecedented capabilities to monitor, track and surveil
individual people. Even governments in democracies with strong traditions of rule of
law find themselves tempted to abuse these new abilities. Research by Carnegie
(2019) shows that 51% of advanced democracies deploy AI surveillance systems. In
1949, George Orwell wrote his dystopian novel 1984 that described a future society in
which the government continuously monitored everyone's actions and conversation.
Today, AI technology has made that level of monitoring possible and has sparked
outrage that 'Big Brother' is watching. For instance, the Chinese government has
used AI in wide-scale crackdowns in regions that are home to ethnic minorities within
China. Such surveillance systems in Xinjiang and Tibet have been described as
"Orwellian." Though China is not a democratic country, its "Orwellian" use of facial
recognition AI can serve as a model for populist leaders in democratic countries
seeking to monitor their citizens.
[Detained between 1 and 2 million Uyghurs in its so-called re-education camps in
Xinjiang; monitors smartphones]

So Mr/Madam Chairman and fellow students, in conclusion, the frontier risks that
could emerge from the full militarization of autonomous weapons include catastrophic
fallout from army raids and a human existential crisis in the age of machine sentience.
And when Big Brother is watching you and making decisions based on that
intel, it is not only an invasion of privacy but outright social oppression.

So, the key question for humanity today is whether to start a global A.I. arms race or
to prevent it from starting.

Otherwise, we are heading for a dystopia.

[State of the art surveillance…]

[Second Affirmative speaker]

Good morning/afternoon Mr/Madam Chairman and fellow students.

The topic for our debate is (that) "Artificial Intelligence Is Dangerous."

We the affirmative team believe that this statement is true.

REBUTTAL
The first speaker from the negative team has tried to tell you (During the debate you
will write on your rebuttal card what the first negative said.)

This is wrong because (During the debate you will write down a reason why that point
is wrong.)

S/he has also said that (If you have another rebuttal point write that down.)

This is wrong because (Again write down a reason why that point is wrong.)

[N.B.: Rebuttal to be written during the debate, when the first speaker of the
negative team rebuts me.]
RECAP
Our first speaker has already explained the use of A.I. in autonomous weapons and the
threats A.I. poses to democracy.

ARGUMENTS
Today I will be talking to you about how A.I.-powered deepfakes spread false
information, and about unemployment caused by A.I. job automation.

My first point is the spread of misinformation by A.I.-powered deepfakes. [Show
photo.]

This is because deepfakes are a type of artificial intelligence used to create
convincing image, audio and video hoaxes. The most notable use case --
and the most dangerous -- is when people choose to use the technology for nefarious
purposes. A recent example is the debunked deepfake video of Ukrainian President
Volodymyr Zelensky, in which he purportedly ordered Ukrainian soldiers fighting
Russian forces, following the Russian invasion of Ukraine, to surrender. From the
dark corners of the web, the use of deepfakes has begun to spread to the political
sphere, where the potential for mayhem is even greater.

Now to my second point, how A.I job automation causes unemployment.

This is because automation is a danger of AI that is already affecting society. From


mass production factories to self-driving cars, automation has been occurring for
decades - and the process is accelerating. A Brookings Institution study in 2019 found
that 36 million jobs could be at a high risk of automation in the coming years. The issue
is that AI systems will outperform humans at these tasks. A further problem is that
workers who lose their jobs are often ineligible to work in the AI sector because they
lack the required credentials or expertise. In this way, AI will harm the standard of
living of many people by causing mass unemployment.
ENDING
So, Mr/Madam Chairman and fellow students, in conclusion, A.I. deepfakes are not only
a technical problem; now that Pandora's box has been opened, they are not going to
disappear in the foreseeable future, and such "crafted" deepfake videos are likely to
cause real damage. In the same way, as AI systems continue to improve they will
become far more adept than humans at tasks such as pattern recognition,
providing insights and making accurate predictions. The resulting job disruption could
lead to increased social inequality and even economic disaster.

Sum up
Thus, the evidence is overwhelming that "a global AI arms race is virtually
inevitable." Such an escalatory dynamic is familiar terrain: we have seen it in
the American-Soviet nuclear arms race. This multilateral arms race, if allowed
to run its course, will eventually become a race towards oblivion. At the same
time, facial recognition AI systems are generating fears that Big Brother AI
will soon be watching our every move.

As AI technology becomes more accessible, deepfakes could spread misinformation on an
ever greater scale.

In the same way, the axiom "everything that can be automated, will be
automated" is no longer science fiction. AI-enabled machines have displaced
human employees, creating greater income inequality and increased
unemployment.
