AI Assignment 2


EFFECTS OF AI ON WARFARE

The integration of AI into warfare is likely to alter the foundations of international relations and military affairs across every domain. Experts have repeatedly compared AI to electricity, and some have even compared its potential effect to that of nuclear weapons, which seems an overestimation. The last instrument that transformed the epistemology of war was the nuclear bomb. Although it would be unfair to equate AI with nuclear weapons, since its destructive strength remains largely speculative, it is fair to say that AI has created a comparable degree of panic and dilemma among the general public, as it is poised to transmute warfare strategies and systems forever. After gunpowder and nuclear weapons, autonomous weapons have been described as bringing the third revolution in warfare (Kai-Fu Lee, 2021).
States are rushing into an inconspicuous technological arms race over data and algorithms, channelling heavy funding into AI research and development. China and the U.S. have entered a discreet AI cold war, with China declaring its ambition to become "the center of artificial intelligence" by 2030 a national priority. In March 2023, U.S. President Joe Biden's administration disclosed a budget of $1.8 billion for AI development in fiscal year 2024; the Pentagon, a $760 billion organisation, is said to spend only half as much as China on AI, according to U.S. politician Seth Moulton. This huge leap of faith in AI shifts power from state actors to non-state actors, which will either revive or disrupt the future of wars and security (Berg, Chou, Marijan et al., 2023).

The rise of autonomous weapons is likely to shift the balance of power, as AI has the potential to deliver destruction at a lower cost. It could elevate the position of small powers by giving them the capability to defend their national security without spending huge amounts on arms and ammunition. On the other hand, it could threaten states that have not developed or invested in AI research, as traditional methods such as guerrilla tactics or guns and tanks will no longer be reliable on their own (Alex Wilner, 2022). One might ask why the U.S. and China maintain such huge budgets: AI is still an emerging technology that needs major investment in R&D, which should eventually make it cost-effective. In 2021, the US National Security Commission on Artificial Intelligence also cautioned that "the United States must act now to field AI systems and invest substantially more resources in AI innovation to protect its security, promote its prosperity, and safeguard the future of democracy". Such claims blur the line between the offensive and defensive implications of security, which will in turn reshape deterrence and engagement strategies. The result will be new branches of warfare, including political, economic, algorithmic, hybrid, and unrestricted warfare, backed by computational agendas and big data (Momani, Shull, Bélanger et al., 2022).

The commercial sector exerts a huge influence on defence systems with the emergence of AI. Most militaries are collaborating with tech companies for drones and for software to store big data. On data, countries like China and Russia have a huge advantage, as they face no restrictions on collecting and storing citizens' private data, which helps them track non-state actors, such as terrorists and spies, who could harm the country. General Jack Shanahan, the inaugural director of the U.S. Joint Artificial Intelligence Center, insisted, "What I don't want to see is a future where our potential adversaries have a fully AI-enabled force and we do not", since storing civilian data is a violation of law in the United States (The Economist, 2019). How far can we trust tech companies to build algorithms that are not manipulated to serve their own agendas? This remains a question of concern. Sharing military data with private firms is itself a risk, as it could weaken the military's control over that data. A counter-argument holds that the most talented engineers prefer to work for commercial companies like Facebook, Amazon, and Apple, which have higher AI development budgets than the military sphere, and are therefore an asset to the government and highly capable of eliminating biases (M. L. Cummings, 2017). Several questions remain unanswered, the most important being: if commercial companies are the ones providing these automated weapons, is it the country that becomes the superpower, or the company that supplies the technology? Some commercial companies have also faced backlash for entering the warfare space; employee protests at Google ultimately led to the termination of a $9M Pentagon contract in 2018 (The Economist, 2019).

Before discussing the risks and opportunities that incorporating AI into military systems has to offer, it is important to explain how these systems function. At the core of all AI systems lies autonomy, but it is important to distinguish between an automated and an autonomous system. In an automated system, the output for a given input never changes, because the computer's reasoning follows a rule-based construction. The functioning of an autonomous system, by contrast, can vary: its reasoning is based on making predictions, guessing the best attainable solution once the input is provided.
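The distinction can be illustrated with a minimal, hypothetical sketch: a rule-based "automated" routine always maps the same input to the same output, whereas a learning "autonomous" estimator predicts from accumulated observations, so its answer to the same input can change as it sees more data. All names and thresholds below are invented for illustration and are not drawn from any real system.

```python
def automated_threat_level(radar_range_km: float) -> str:
    """Automated system: fixed rules, so the same input always
    produces the same output."""
    if radar_range_km < 5:
        return "high"
    if radar_range_km < 20:
        return "medium"
    return "low"


class AutonomousThreatEstimator:
    """Autonomous system (sketch): guesses the best attainable answer
    from past observations, so its output for a given input can shift
    as its experience grows."""

    def __init__(self):
        self.observations: list[tuple[float, str]] = []

    def observe(self, radar_range_km: float, label: str) -> None:
        self.observations.append((radar_range_km, label))

    def predict(self, radar_range_km: float) -> str:
        if not self.observations:
            return "unknown"
        # Nearest-neighbour guess over everything seen so far.
        nearest = min(self.observations,
                      key=lambda obs: abs(obs[0] - radar_range_km))
        return nearest[1]
```

The automated function is fully deterministic; the estimator's answer for the same range can flip once new observations arrive, which is exactly what makes autonomous behaviour harder to predict and verify.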

The figure in the Chatham House report on the future of AI in warfare projects how an autonomous system should operate in a dynamic environment. The computer assesses the world through sensors such as cameras and microphones, analogous to the five human senses, which help it interpret its surroundings. For example, an autonomous UAV (unmanned aerial vehicle) system aids navigation by warning the user, through a radar augmentation model, of altitudes where obstacles lurk, while GPS guides the user through coordinates so the aircraft does not stray into restricted airspace. The appeal of such a model is clear: fighter pilots on high-altitude missions endure stress on both their mental and physical health and are subject to drowsiness, which affects their performance, whereas automated pilots would experience no such conditions. The U.S. Air Force also uses "predictive logistics" to identify malfunctions and fix its tools in advance. The same approach is far more complex in driverless cars, which must also track nearby vehicles through constant calculation; there, sensors like LIDAR (Light Detection and Ranging) are merged with traditional sensors and computer vision (M. L. Cummings, 2017).
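A toy sketch of how readings from independent sensors might be fused into warnings, in the spirit of the UAV model above. The field names, thresholds, and restricted zone are all invented for illustration; a real avionics stack is vastly more complex.

```python
from dataclasses import dataclass


@dataclass
class SensorReadings:
    """One snapshot of the aircraft's 'senses' (illustrative fields)."""
    lat: float
    lon: float
    radar_altitude_m: float     # height above terrain from the radar
    obstacle_distance_m: float  # nearest obstacle reported by LIDAR


# Hypothetical restricted zone: (min_lat, max_lat, min_lon, max_lon)
RESTRICTED_ZONE = (48.0, 49.0, 2.0, 3.0)


def fuse_and_warn(r: SensorReadings) -> list[str]:
    """Combine independent sensor streams into a single list of warnings."""
    warnings = []
    if r.radar_altitude_m < 150.0:
        warnings.append("terrain proximity")
    if r.obstacle_distance_m < 500.0:
        warnings.append("obstacle ahead")
    lo_lat, hi_lat, lo_lon, hi_lon = RESTRICTED_ZONE
    if lo_lat <= r.lat <= hi_lat and lo_lon <= r.lon <= hi_lon:
        warnings.append("restricted airspace")
    return warnings
```

Each warning comes from a different sensor, but the point of fusion is that one routine weighs them together, much as a pilot would, only without fatigue.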

The goals of these military systems can be divided into three categories according to Michael
Horowitz of the University of Pennsylvania (The Economist, 2019).
1. To permit machines to function even in the absence of human control.
2. To manage and interpret huge volumes of data.
3. To sustain, and even make, decisions during war.

In the coming years, as the use of AI-automated weapons and systems increases, there will be a clash between the human brain and the computer brain. The effect of these systems on human ethics and rights is unavoidable, irrespective of the benefits or supposed security AI-supported systems could provide. What makes these systems fragile and vulnerable is that their algorithms are still not free of bias. It is entirely possible that drones used to recognise targets are biased in their calculations, and if so, this could cost the lives of innocent people in a war-like situation. It can also be argued, however, that with the right data these systems can target the right individuals, reducing casualties and saving the lives of innocent civilians. Autonomous systems could also help identify targets in grey-zone areas that are often unreachable. According to a 2013 article in the Fiscal Times, the Department of Defense revealed that the Pentagon paid $850,000 annually for each soldier in Afghanistan, while a robot cost significantly less, about $230,000. Additionally, a drone's facial recognition capability, along with biometric and signature sensors, would make it difficult for enemy forces to camouflage themselves and their resources, and with sufficient time targets could also be reassigned smoothly (Etzioni and Etzioni, 2017).
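The bias concern can be made concrete with a simple audit: comparing the false-positive rate of a recognition model across two groups. The numbers below are entirely fabricated for illustration; in a real targeting system, a disparity of this kind is precisely what would cost innocent lives.

```python
def false_positive_rate(predictions: list[bool], truths: list[bool]) -> float:
    """Share of genuinely harmless cases the model nonetheless flags as targets."""
    false_pos = sum(1 for p, t in zip(predictions, truths) if p and not t)
    negatives = sum(1 for t in truths if not t)
    return false_pos / negatives if negatives else 0.0


def audit_disparity(results_by_group: dict[str, tuple[list[bool], list[bool]]]) -> float:
    """Gap between the highest and lowest per-group false-positive rate."""
    rates = [false_positive_rate(preds, truths)
             for preds, truths in results_by_group.values()]
    return max(rates) - min(rates)
```

A non-zero disparity means the model wrongly flags harmless people in one group more often than in another, which is the operational meaning of "biased in its calculations".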

Earlier paragraphs noted how the lines between defence and offence could blur with the use of automated systems; a concrete example is DARPA's (Defense Advanced Research Projects Agency) software RAID (Real-time Adversarial Intelligence and Decision-making), which hopes to predict the intentions, actions, and even sentiments of adversarial forces five hours into the future (The Economist, 2019). This practice is termed predictive analytics. Unfortunately, the same software that defends in extreme circumstances can also launch attacks and threaten the opposing side, and situations like these can cause extreme destabilisation and sometimes even escalation. On a positive note, AI could also be used at border checkpoints during migration between states that lack a clear borderline, and to assess the risk posed by unsafe migrants, through software like RAID (Ralph Thiele, 2020).

Russian President Vladimir Putin famously predicted that "whoever becomes the leader in the AI sphere will become the ruler of the world" (The Economist, 2019). This could prove particularly true as states deploy autonomous weapons, which could either escalate or contain the destruction of war. The Israeli government has ordered about 100 drones from the American company Skydio to handle the ongoing Israel-Gaza war, according to Mark Valentine, who handles government contracts at Skydio, and Israel is also said to be ordering self-piloting drones for internal combat. Notably, neither Ukraine nor Israel has signed the new U.S. foreign policy framework for keeping the responsible use of AI-automated systems bound by the international law of war, which 45 other countries have endorsed (Mohar Chatterjee, 2023). There is also a lively debate among Chinese policymakers about cognitive warfare: the ability to create favourable conditions that exert influence on the minds of the enemy and fulfil one's own agenda without having to fight, as China devises plans to win over Taiwan. China has even stated that "War is not only a material contest, but also a spiritual contest," aiming to create "a psychological support system" through smart sensor bracelets that examine the mental state of soldiers and officers, with their facial recognition data stored in real time (Gabriel Dominguez, 2023). All these actions by different states indicate that the use of AI in autonomous systems is bound to increase in the coming years. Especially with this cognitive factor in play, misperception and misjudgment are likely to increase confusion among states about one another's real strength (James Johnson, 2019).

To return to the statement in the introduction about the panic that the incorporation of AI into military systems has created among the general public and states, its cause can fairly be identified: governments could transfer power over critical decisions, including decisions over human life, into the hands of a robot or an artificial data system at high risk of manipulation by adversaries. The Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW) points out that "human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines" (Frank Sauer, 2023). The only way to reach common ground and safeguard citizens' interests, while not falling technologically behind and risking states' safety, is to initiate consistent discussions among researchers, academics, tech giants, government officials, and policymakers at national and international levels. Such negotiations could produce favourable agreements, including a commitment not to approve algorithms until their biases are eliminated, and would also bridge the gap between the commercial and defence sectors, rather than leaving the future of AI in warfare on a cliffhanger.

BIBLIOGRAPHY


Momani, B., Shull, A. and Bélanger, J.-F. (2022). Introduction: The Ethics of Automated
Warfare and AI. [online] Centre for International Governance Innovation. Available at:
https://www.cigionline.org/articles/introduction-the-ethics-of-automated-warfare-and-ai/.

Berg, M. and Chatterjee, M. (2023). AI vs. nukes: ‘This is much more dangerous’. [online]
POLITICO. Available at:
https://www.politico.com/newsletters/digital-future-daily/2023/05/25/ai-vs-nukes-this-is-
much-more-dangerous-00098862.

CHATTERJEE, M. (2023). Israel’s appetite for high-tech weapons highlights a Biden policy
gap. [online] POLITICO. Available at: https://www.politico.com/news/2023/11/25/israel-
hamas-war-ai-weapons-00128550.

Cummings, M.L. (2017). Artificial Intelligence and the Future of Warfare. [online] Available
at: https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-
artificial-intelligence-future-warfare-cummings-final.pdf.

Dominguez, G. (2023). Winning without fighting? Why China is exploring ‘cognitive
warfare.’ [online] The Japan Times. Available at:
https://www.japantimes.co.jp/news/2023/05/26/asia-pacific/china-pla-ai-cognitive-warfare/.
Etzioni, A. and Etzioni, O. (2017). Pros and Cons of Autonomous Weapons Systems. [online]
Available at: https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/
pros-and-cons-of-autonomous-weapons-systems.pdf.

Johnson, J. (2019). The AI-cyber nexus: implications for military escalation, deterrence and
strategic stability. Journal of Cyber Policy, 4(3), pp.1–19.
doi:https://doi.org/10.1080/23738871.2019.1701693.

Marijan, B. (2023). AI-Guided Weapons Must Be Curbed by Global Rules — and Soon.
[online] Centre for International Governance Innovation. Available at:
https://www.cigionline.org/articles/ai-guided-weapons-must-be-curbed-by-global-rules-and-
soon/ [Accessed 17 Nov. 2023].

Sauer, F. (2020). Stepping back from the brink: Why multilateral regulation of autonomy in
weapons systems is difficult, yet imperative and feasible. International Review of the Red
Cross, 102(913), pp.235–259. doi:https://doi.org/10.1017/s1816383120000466.

The Economist (2019). Artificial intelligence is changing every aspect of war. [online] The
Economist. Available at:
https://www.economist.com/science-and-technology/2019/09/07/artificial-intelligence-is-
changing-every-aspect-of-war.

Thiele, R. (2020). Artificial Intelligence - A key enabler of hybrid warfare Hybrid CoE
Working Paper 6 COI STRATEGY & DEFENCE. [online] Available at:
https://www.hybridcoe.fi/wp-content/uploads/2020/07/WP-6_2020_rgb-1.pdf [Accessed 17
Dec. 2023].

Wilner, A. (2022). AI and the Future of Deterrence: Promises and Pitfalls. [online] Centre
for International Governance Innovation. Available at: https://www.cigionline.org/articles/ai-
and-the-future-of-deterrence-promises-and-pitfalls/ [Accessed 17 Dec. 2023].
