

Index

Letter from the Secretary General
Letter from the Director General
Letter from the Committee Director
Letter from the Committee Co-Director
Introduction to the Topic
Introduction to the Committee
History and discussion of the topic
Definition of key terms
Past international actions
Current Situation
Bloc Positions
QARMAs
Position Paper
Bibliography

Letter from the Secretary General
Dear Delegates,

On behalf of the Secretariat, it is a great pleasure to welcome you to the first edition of Maria
Reina Marianistas Model United Nations.

My name is Gastón Mendoza, and it is an honor to be the first Secretary General in
Maria Reina's history. The whole Secretariat is thrilled to welcome you all and to
organize our school's first conference.

My journey in MUN started in 2022. I had no idea what MUN was; honestly, I thought it
was a place where you defend your personal ideals. I remember my first simulation: I
did terribly and was ready to be done with it. Thankfully, my mom did not let me quit
and told me to try one more time. I was sent to my first conference, Santa Maria Model
UN, and had a wonderful experience. I was lucky to have amazing faculty advisors and an
incredible partner, who became one of the most important people in my life and someone
I admire to this day. After that conference, MUN became a crucial part of my life, and
to this day I love debating and gaining experience as a delegate. For me, the best part
of Model UN is the capacity to connect with your team and become a family during the
days of preparation and debate. MUN has taught me about the world and its problems, but
the most important thing for me is all the amazing people I have met along the way and
how important each of them was to the development of this conference.

The idea of holding MRMUN has been around since 2022; however, for different reasons it
was never pitched to our school's board. Finally, this year, with the support of all
current team participants, former participants, and faculty advisors, we were able to
formally present MRMUN 2024 to our principal, and she accepted it immediately. I could
not be happier about the support and hard work that everyone involved in this
conference has given, and I am sure that this is the first of many editions of Maria
Reina's MUN.

I wish you the best of luck and hope you have an amazing experience in the first of many
MRMUN.

Sincerely,

Gastón Mendoza Rivera

Letter from the Director General
Dear Delegates,

Welcome to the first edition of Maria Reina Model United Nations. We are delighted to
have you at our conference this year and hope you enjoy each of your committees while
having a great and intense debate!

My name is Mara Podestá, Director General of this year's conference, and along with the
Secretariat I am completely thrilled to welcome you to what will surely be the first of
many editions of MRMUN.

My journey in MUN began in 2022. Initially, I was quite apprehensive and nervous, and due to
personal reasons, I missed my first simulation. Not really understanding what MUN was about, I
wanted to quit, but with the support of my family, I persevered. When I was finally selected for
my first debate, I prepared thoroughly, and alongside my partner, we managed to win an award.
Although it wasn’t the top prize, it was enough to motivate me to continue with the team for
three more years. Being part of this team has been an incredible experience—MUN feels like a
family to me, helping me grow in life and meet wonderful people. The most rewarding aspect of
Model UN for me is the strong connections you build with your team during preparation and
debates. MUN has expanded my understanding of global issues and introduced me to amazing
people who have been crucial to my growth in life.

Perseverance is key to success; it has taught me that every achievement, no matter how small, is
valuable. Each step forward, each effort put in, and each challenge overcome contributes to
progress. Model UN holds a special place in my heart, as it has not only broadened my
knowledge but also strengthened my character and resilience. The friendships and connections
I’ve made through MUN are invaluable, and the sense of accomplishment from our collective
efforts is profound. I hope to continue my MUN journey in university, further developing my
skills and passion for debate and diplomacy. I am immensely grateful for the dedication and
effort of everyone involved in making this conference possible, and I am confident that this is
just the beginning of many Maria Reina MUN editions.

I wish you all the best and hope you have an amazing experience,

Sincerely,

Mara Podestá

Letter from the Committee Director
Dear delegates,

I extend a warm welcome to each and every one of you to the Maria Reina Model United
Nations (MRMUN) conference. My name is Esteban Felix Morán, and I am currently a
freshman studying Computer Engineering at Purdue University. With a wealth of MUN experience
since 2019, including winning awards at prestigious conferences like ILMUNC and NAIMUN, I
am incredibly excited to guide our committee's discussions as your Director in the DISEC
committee.

Our chosen topic for this committee is Autonomous Weapons Systems (AWS) and AI in
Modern Warfare, a matter of utmost urgency and complexity that demands our collective
attention. In this digital age, the rapid advancement of technology has rendered AI a vital
concern for all member states. It has the power to both facilitate and threaten our way of life. As
we delve into the intricacies of this topic, I anticipate thought-provoking debates and innovative
solutions emerging from our committee sessions.

I am eager to witness a wealth of knowledge and high-level research, as well as the negotiation
and diplomatic skills that each of you will bring to the table. I’m also very passionate about all
fields of computer science, and can’t wait to hear your creative proposals on the regulation of
these disruptive technologies.

I am excited to meet all of you and embark on this remarkable MUN journey together. Brace
yourselves for an enriching experience filled with intellectual rigor, diplomacy, and lasting
friendships.

Esteban Felix
Co-Chair

Letter from the Committee Co-Director
My name is Vasco Soldevilla and I would like to warmly welcome you to MRMUN 2024! I am 22
years old, I’m currently studying Law at the Universidad de Lima, and I am very happy to be able
to lead this committee alongside Esteban! My career at MUN started in 2017 and I've loved it
ever since. It fills me with pride to see so many delegates debating with the passion and
determination with which I debated in high school.

In my free time I like to play Dota 2 and League of Legends while enjoying my favorite
snacks (Cuates and Tampico). I have to confess my addiction to playing guitar with my
band. As for music, Mozart and J Balvin are my favorite artists. Finally, one of the
things I like to do is dress my little Yorkie, "Tyson," in different costumes and enter
him in dog contests.

As a director, I would be very interested in seeing realistic and applicable solutions.
I would like to see all QARMAs discussed effectively, with delegates demonstrating
knowledge in the areas in which we encourage you to continue researching. There is
nothing better than a good leader and negotiator who knows how to present their ideas
while representing the people who believe in them.

I am sure that both you and the committee will learn a lot from this debate, and rest
assured that I will be there to help you at every stage of the process! It is a
challenge not only for you, but also for me as Chair, to make a comeback to the good
old-fashioned MUN style. I can bet, however, that we will help each other at this
conference and make it an unforgettable experience.

Best of luck to y’all!

Vasco Soldevilla.

Introduction to the Topic
Should an algorithm have the ability to make life-or-death decisions? Should a machine,
triggered by its environment, pass moral judgment on the most fundamental human right?

On January 3rd, 2020, Qasem Soleimani, an Iranian major general, was killed by an
American drone strike near Baghdad International Airport in Iraq. The attack was
carried out by a U.S. MQ-9 Reaper (a semi-autonomous military drone) that launched
several missiles at two vehicles in which Soleimani and several other pro-Iranian
paramilitary figures were leaving the airport.

The U.S. Department of Defense issued a statement which said that the strike was carried out “at
the direction of the president”. Trump asserted that Soleimani had been planning further attacks
on American diplomats and military personnel and had approved the attack on the American
embassy. The following day, the US announced the deployment of 3,500 paratroopers to the
region, and the UK announced support from the Royal Navy.

In the aftermath of the attack, the world witnessed the capabilities and consequences of remote
autonomous weapons. A single drone attack caused alarm around the world, raising fears that
Iranian retaliation could spiral into a far larger and global conflict.

Throughout history, wars and armed conflicts have always presented legally or morally blurred
situations. Decisions taken in command rooms or on the battlefields have often proved to be
illegal or unethical according to International Humanitarian Law (IHL). Technology has only
aggravated moral concerns as they pose the introduction of disruptive mechanisms for mass
destruction. Therefore, nations have been involved in a constant debate to settle international
standards for the use of new technologies in warfare.

Autonomous Weapons Systems (AWS) are one such technology subject to constant debate.
AWS are generally defined as weapons systems that use Artificial Intelligence (AI) to
automate some or all of the functions involved in carrying out military operations.
Within this type of system, Lethal Autonomous Weapons Systems (LAWS) are those that,
once activated, can select and engage targets with no human operator intervening.

The risks posed by LAWS involve ethical and security concerns that remain unaddressed
by the international community. These include legal and moral accountability, human
dignity and civilian protection, precision standards, human control of operations, and
the risk of proliferation and illicit trafficking. The absence of a strong framework
for oversight and control makes it difficult to address these ethical and humanitarian
concerns surrounding autonomous weapons. Moreover, the fast pace of technological
advancement often outpaces the formulation of appropriate legal and ethical guidelines.

It is imperative to consider as well the exponential growth of this industry. According
to Allied Market Research, the global autonomous weapons market is expected to reach
$30.15 billion by 2030, with a compound annual growth rate of 10.4% since 2021. The
Center for the Study of the Drone at Bard College stated that over 95 countries
possessed some form of AWS, with at least 30 countries having armed drones. The growing
presence of AWS is of international concern, as its use is almost guaranteed in the
vast number of contemporary conflicts.

This topic is a point of conflict in terms of technology, humanity, and politics,
driving important changes in the global community and the proliferation of these types
of weapons. The challenge for delegates in this committee is striking the balance
between two goals: anticipating future technological development by restricting
advancements that would violate international law, and creating an instrument that
won't become obsolete as the technology develops. Regulations proposed in this
committee would therefore need to be both narrow enough to be effective at their
inception and broad enough to apply to future iterations of weapons as technology
develops.

Introduction to the Committee


The Disarmament and International Security Committee (DISEC) is the first of the six
main committees of the General Assembly of the United Nations (UNGA). Like the other
committees, it includes all 193 Member States of the UN, with each country having one
vote in its sessions.

The role of the First Committee is essentially political: it deals with disarmament,
global challenges, and threats to peace and security that affect the international
community, and seeks out solutions to the challenges in the international security
regime. The purpose of DISEC is specified in Chapter IV, Article 11, of the United
Nations Charter, which states that "the General Assembly may consider the general
principles governing disarmament and the regulation of armaments, and may make
recommendations with regard to such principles to the Members or to the Security
Council or to both" (United Nations, n.d.).

During its early years, DISEC faced numerous challenges in addressing the global disarmament
agenda. The escalating tensions of the Cold War between the United States and the Soviet
Union, two superpowers with formidable nuclear arsenals, posed a significant obstacle to
disarmament efforts. Nevertheless, DISEC played a vital role in initiating dialogues and
negotiations to promote arms control, disarmament, and non-proliferation.

Most recent DISEC sessions address the use of disruptive technology, including the
implications of AWS and AI in warfare. Among the most relevant resolutions that
delegates could use as a basis for proposed regulations are Resolution 74/29 (2019),
"Towards a comprehensive arms control regime for conventional ammunition," and
Resolution 75/44 (2020), "Promoting international cooperation on peaceful uses of
artificial intelligence." These resolutions call for increased international
cooperation and dialogue on emerging technologies and emphasize the potential benefits
and risks of AI, urging member states to collaborate on ensuring AI technologies are
developed and used in ways that enhance global peace and security.

DISEC collaborates closely with multiple bodies of the UN, including the United Nations
Institute for Disarmament Research (UNIDIR), which provides in-depth research and analysis
on disarmament issues, including emerging technologies such as AWS and AI. For example, in
2020, UNIDIR reported that 61% of experts believe that a legally binding instrument on
LAWS is necessary. Despite this, few efforts have had a significant impact on
regulating this technology, the most important being the Convention on Certain
Conventional Weapons (CCW), which DISEC references and builds upon in most of its
discussions.

Finally, although DISEC does not have the power to impose rules on unwilling countries,
it must set a standard toward which countries can strive in balancing civilian safety
and innovation interests. In that sense, international cooperation on the peaceful uses
of technology in the context of international security is one of its main agenda items.
The regulation of deployment and management practices involving Autonomous Weapons
Systems directly concerns the First Committee, as it guarantees international peace and
security as well as the protection of civilians' rights.

History and discussion of the topic


The rapid evolution of technology used in armament has always been one of the most
crucial and concerning aspects of technological advancement in the world. While
revolutionary technologies hold much promise for humanity, when taken up for military
uses they can pose risks to international peace and security. The challenge is to build
understanding among stakeholders about the technology and develop responsive solutions
to mitigate such risks. That is where we find ourselves today with military
applications of AI.

AI is undoubtedly the main focus of the entire world today, and its use has affected all industries.
It is being implemented in every aspect of our lives, and of course, also in the military aspect. To
understand modern military technology including Autonomous Weapons Systems, we must first
have a thorough understanding of AI and its applications.

What is AI?
In simple terms, AI is a set of techniques that allow machines (arms, in our case) to
have minds of their own. Artificial intelligence enables computers and digital devices
to learn, read, write, talk, see, create, play, analyze, make recommendations, and do
other things humans do.

AI refers to the field of computer science focused on developing these human capabilities.
However, the intricacies of AI encompass sub-fields of machine learning and deep learning,
which illustrate just how complex and multifaceted the field truly is. The technical concepts in
this section will be further explained in the Definition of key terms section to facilitate the
reading.

AI was nothing but science fiction until the 1950s, when a generation of scientists
began to believe in its future. A few years later, the first computer program that used
the simplest form of AI to mimic the problem-solving skills of a human came into
existence. Since the 2000s, research to further improve this "mimicking" ability has
led to techniques based on the mechanism our brain uses to learn, called "neural
networks".

In computer science, neural networks refer to algorithms that allow a computer to learn
from previous data. They also allow AI systems to be trained on large datasets for
tasks such as detecting humans in a given image. Neural network structures have become
so complex, and draw on such extremely large datasets, that AI systems can now perform
these tasks with remarkable accuracy and efficiency.

The emergence of AI with large language modeling capabilities has introduced a new era of
possibilities in various domains, including education and cognitive enhancement. AI can process
vast amounts of data and information in a fraction of the time it would take a human. But when
this fast data analysis is implemented to arms, it can create disastrous results.

Heavy arms enhanced with fast analysis and decision-making technology could bring
catastrophe. At the same time, personalized data analysis is one of the biggest
advantages for the military, as it can produce better strategies and make weapons
easier to use.

The powers of AI are vast and still growing. The UN continuously encourages these
powers to be used for peacebuilding and humanitarian development, not in conflict.
Given such risks, many high-value companies like Google have pledged not to develop AI
weapons and have launched new policies accordingly. However, the use of AI in
"self-defense" continues to be debated among nations.

Militarization of AI
Steps to bring artificial intelligence into the military were taken almost immediately
after the field first formed in the 1950s. In 1958, the U.S. Department of Defense
created the Advanced Research Projects Agency (later renamed DARPA) to facilitate
research and development of military and industrial strategies. Later, the U.S.
Department of Defense began training computers with machine learning to mimic basic
human reasoning. In the 1970s and 80s, further research was conducted worldwide into
weaponization and military applications.

The US has continued to openly develop its AI technologies for military use. As
projects like DARPA's programs and the DART logistics tool advanced military assistance
systems, in 2014 the US Department of Defense unveiled the "Third Offset Strategy,"
which posits that rapid advances in AI will define the next generation of warfare, and
took further steps to facilitate development and research in this field. Countries like
the Russian Federation and China have also launched their own plans for the
militarization of AI, with the Russian army notably unveiling a "robot army" with guns.

Military AI capabilities include not only weapons but also decision-support systems
that help defense leaders at all levels make better and more timely decisions, from the
battlefield to the boardroom. They extend to systems for finance, payroll, and
accounting; the recruiting, retention, and promotion of personnel; and the collection
and fusion of intelligence, surveillance, and reconnaissance data.

Many countries endorse the free use of AI for military intelligence, surveillance, and
reconnaissance purposes in their regulations. Moreover, even the countries that
continuously voice concerns about the "negative use" of AI (specifically its use in
armament and military technology) choose to continue using it for their national
defense mechanisms. These include data processing and analysis, cybersecurity,
simulation, autonomous systems such as drones, and counter-terrorism investigations and
prosecutions.

Autonomous Weapons Systems


Autonomous weapons systems (AWS) represent a significant evolution in military technology,
combining AI detection and decision-making capabilities with weaponry to enable autonomous
action in combat scenarios. The history of this technology dates back to the early 20th century,
with advancements accelerating in recent decades.

In the mid-20th century, during the Cold War, nations began exploring semi-autonomous and
remotely operated systems for military purposes. These early systems laid the groundwork for
today's more advanced AWS. These leverage AI to enhance their operational capabilities:

● Sensors and Perception: AWS uses sensors such as high-powered cameras, radars, and
thermal imaging devices to perceive their environment. Computer vision analyzes this
data to identify targets and assess threats, which could also include AI-powered facial
recognition software.
● Decision-Making Algorithms: AI algorithms process sensor data in real-time to make
decisions autonomously. These algorithms may include machine learning models trained
to recognize patterns and predict outcomes based on historical data.
● Action and Response: Once a target is identified and a decision is made, AWS can
engage autonomously, potentially without direct human intervention.

The implications of AWS are profound and raise ethical, legal, and strategic concerns. Unlike
traditional weapons operated by humans, AWS operates with varying degrees of autonomy. The
types of AWS are:

● Semi-Autonomous: A human selects the target and launches the weapon. The weapon
autonomously identifies and engages targets.
● Supervised Autonomous: A human launches the weapon and can intervene at any time.
The weapon autonomously identifies, selects, and engages targets.
● Fully Autonomous: Humans may launch the weapon but do not interact further. The
weapon autonomously identifies, selects, and engages targets with no human
intervention.

Fully autonomous systems are the least used and the subject of the greatest legal and
ethical concern. However, all types of AWS raise questions about accountability and
compliance with international humanitarian law. Concerns persist regarding the
potential for unintended consequences, civilian casualties, and the escalation of
conflicts due to automated decision-making.

From a strategic perspective, AWS offer advantages such as increased response speed,
reduced risk to human soldiers, and enhanced operational efficiency. They can also
offer enhanced cybersecurity in operations, given that AWS communicate via encrypted
channels as part of a networked system.

However, the rapid development and deployment of these systems have sparked global debate
and calls for regulation to ensure responsible use and mitigate risks.

History of Autonomous Weapons Systems


Although the initial uses of AWS came after the Cold War, this technology took off
after the 9/11 attacks, when the United States and China began integrating AI and
machine learning into military technologies, enhancing the autonomy of UAVs and ground
robots.

In 2001, the United States deployed armed Predator drones for reconnaissance and
targeted strikes in Afghanistan. The increased proliferation of UAVs across military
forces globally, including armed and surveillance drones used in conflict zones such as
Iraq, Afghanistan, and Pakistan, caused massive civilian casualties.

Some of the most important LAWS used in conflict areas are the Predator and Reaper
drones, widely used by the US and its allies for surveillance and targeted strikes.
Incidents of civilian casualties reported in regions like Pakistan and Yemen sparked
international debate on collateral damage and legal implications.

Figure 1: US Reaper Drone

Other nations, such as South Korea and Russia, have used AI-enhanced robots such as the
Samsung SGR-A1 sentry gun and the Uran-9. AWS technology has also evolved into other
military (not necessarily warfare) applications, such as research, espionage, and
intelligence-gathering.

The different types of AWS used in military forces include:

11
● Unmanned Aerial Vehicles (UAVs)
● Unmanned Ground Vehicles (UGVs)
● Unmanned Surface Vehicles (USVs)
● Autonomous Underwater Vehicles (AUVs)
● Autonomous Sentry Guns
● Combat Drones and Swarm Systems
● Missile Defense Systems
● Artillery and Weapon Systems

Humanitarian Benefits
In 2018, the USA submitted a paper to the GGE identifying five ways that AWS and AI can
reduce risk to civilians, not by taking over the decision-making process themselves,
but by assisting human commanders in making more accurate and more efficient
operational decisions.

Incorporating autonomous self-destruct, deactivation, or neutralization mechanisms
could reduce the risk of unintended consequences, for example through time limits on
each operation. Missions could be set within time frames and, if unable to succeed,
would terminate engagements or seek additional human operator input before continuing.

Furthermore, AI can increase awareness of civilians and civilian objects on the
battlefield. AI-streamlined decision-making would enable commanders to better assess
the expected incidental loss to civilians and take proper precautions, in compliance
with the principles of International Humanitarian Law (IHL).

AWS can also improve assessments of the likely effects of military operations. Software
can predict the likely effects of different weapons, improving accuracy. It can assist
in carrying out proportionality and precaution assessments, making commanders' judgment
more effective. More accurate weapons pose less risk of collateral damage, and
increased accuracy can also mean that a smaller warhead can be used to generate the
same military effect. AWS could also reduce the need for immediate fire in
self-defense.

Characteristics to consider
In regulating AWS, it is important to consider the following characteristics:

● Predictability: How an AWS will function in any given circumstance of use, and the
effects likely to result.
● Explainability: Why an AWS makes a certain decision. Operators need to be able to
trust that the outcome will comply with international law.
● Reliability: How consistently the AWS will function as intended, without failure or
unintended effects, and what protocols apply if it is affected by human error.
● Ability to be subject to intervention: Human commanders need to be able to call back
the weapon if it is determined that the attack is no longer in compliance with IHL.
● Ability to self-adapt: Developers and operators should consider the ability of the
weapon to expand its own functions and capabilities based on interaction with the
environment.

Definition of key terms


Algorithm
A finite, well-defined sequence of computational instructions or steps, often expressed
in computer code, designed to solve a specific problem or perform a particular task.
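
The definition above can be made concrete with a classic example (a hypothetical
illustration, not taken from this guide): Euclid's algorithm, a finite sequence of
steps that computes the greatest common divisor of two numbers.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder reaches zero. The last nonzero
    value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

Because the steps are finite and well defined, any computer (or person) following them
will reach the same answer.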

Neural Networks
A "neural network" is a computational representation of the way individual neurons work
together to process data. Each node (neuron) applies a mathematical function to its
inputs and passes an output to the next node. The input to a neural network can be any
type of data, including images, audio, numbers, and text.

Figure 2: Neural Network for image processing

In Figure 2, we can see a graphical representation of four layers of nodes working
together to detect the number shown in the blue square. The first layer reads the color
value of each pixel in the blue square, either white (1) or black (0). The following
layers process this data to determine which number the figure as a whole corresponds
to.
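
The computation inside one node can be sketched in a few lines. This is a toy
illustration (the pixel values, weights, and bias below are invented for the example,
not drawn from Figure 2): each node multiplies its inputs by weights, adds a bias, and
squashes the result through an activation function.

```python
import math

def sigmoid(x: float) -> float:
    # A common activation function: squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # One node: weighted sum of inputs plus a bias, passed through sigmoid.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Four "pixel" values (1 = white, 0 = black) feeding one node of the next layer.
pixels = [1.0, 0.0, 0.0, 1.0]
weights = [0.5, -0.3, 0.8, 0.1]
print(neuron(pixels, weights, bias=-0.2))
```

"Training" a network means adjusting the weights and biases of thousands of such nodes
until the final layer's outputs match the correct labels.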

Machine Learning
Machine learning is a subset of AI focused on the development of algorithms that allow
computers to learn from, and make predictions based on, data. It uses neural networks
with specific types of data and can be categorized into three main types:

● Supervised Learning: Algorithms are trained on labeled datasets, meaning that each
training example is paired with an output label.
● Unsupervised Learning: Algorithms are used to identify patterns and structures in data
without explicit labels.
● Reinforcement Learning: Algorithms learn optimal actions through trial and error
interactions with an environment. This type of learning is used in scenarios requiring
decision making, such as robotics and game playing.
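
Supervised learning, the first type above, can be shown in miniature with a
1-nearest-neighbor classifier (a deliberately simple sketch; the labels and coordinates
are invented for illustration): given labeled training examples, a new point is
classified by the label of its closest neighbor.

```python
def nearest_neighbor(labeled_points, query):
    """Classify a query point by the label of its closest training
    example (1-nearest-neighbor, a minimal supervised learner)."""
    def dist(p, q):
        # Squared Euclidean distance between two feature vectors.
        return sum((a - b) ** 2 for a, b in zip(p, q))
    best = min(labeled_points, key=lambda item: dist(item[0], query))
    return best[1]

# Labeled training data: (features, label) pairs.
training = [((1, 1), "friendly"), ((1, 2), "friendly"),
            ((8, 9), "hostile"), ((9, 8), "hostile")]

print(nearest_neighbor(training, (2, 1)))  # -> friendly
print(nearest_neighbor(training, (9, 9)))  # -> hostile
```

Real systems use far larger datasets and more sophisticated models, but the pattern is
the same: labeled examples in, predictions out.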

Deep Learning
Deep learning is a subset of machine learning that involves neural networks with three
or more layers. It is used mainly to allow machines to learn from large datasets.

Computer Vision
Computer Vision is a field of artificial intelligence that enables computers to interpret and make
decisions based on visual data from the world, such as images and videos. It includes the
following subfields:

● Image Processing: Techniques like filtering, detection, and segmentation are used to
preprocess images and extract meaningful features.
● Object Detection: Use of algorithms to identify and localize objects within an image.
● Image Classification: Techniques that categorize an image into predefined classes
using trained models.
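
One of the simplest image-processing steps named above, segmentation, can be sketched
as thresholding: splitting an image into foreground and background by comparing each
pixel's brightness to a cutoff. The tiny 3x3 "image" below is invented for
illustration.

```python
def threshold(image, cutoff):
    """Segment a grayscale image into foreground (1) and background (0)
    by comparing each pixel's brightness to a cutoff value."""
    return [[1 if pixel >= cutoff else 0 for pixel in row] for row in image]

# A tiny 3x3 grayscale "image" (0 = black, 255 = white).
image = [[ 10, 200,  30],
         [220, 240, 210],
         [ 20, 190,  40]]

for row in threshold(image, cutoff=128):
    print(row)  # -> [0, 1, 0] / [1, 1, 1] / [0, 1, 0]
```

Object detection and classification build on preprocessing steps like this one, feeding
the extracted features into trained models.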

Encryption
Encryption is the process of converting plaintext data into a coded format to prevent
unauthorized access. It involves mathematical functions and a "key" to transform data
into ciphertext (coded text).
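
The idea of a key-driven transformation can be illustrated with a toy XOR cipher. This
is an invented teaching example only, not a secure scheme and not how real military
channels are encrypted: the same operation with the same key both encrypts and
decrypts.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy cipher (NOT secure): XOR each byte of the message with a
    repeating key. Applying the same key again restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"launch codes"
key = b"secret"

ciphertext = xor_cipher(message, key)   # unreadable without the key
recovered = xor_cipher(ciphertext, key) # the same operation decrypts
print(recovered)  # -> b'launch codes'
```

Real systems such as AES follow the same principle (a mathematical function plus a key)
with far stronger mathematics.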

Cybersecurity
Cybersecurity is the practice of protecting systems, networks, and data from digital attacks,
unauthorized access, damage, or theft. Common practices include securing data via encryption
and monitoring network traffic, as well as user authentication systems and security barriers.

Past international actions
As of now, there is no comprehensive regulatory scheme designed specifically to address
AWS. This does not mean that they are unregulated; rather, they are merely subject to
the principles of international humanitarian law that apply to all weapons systems.

There is constant debate on AWS in the international legal community, not only because
of their ethical implications but also because of their potential to violate
international law. Some call for an outright ban on the development and use of all AWS,
while others, such as the "Campaign to Stop Killer Robots," have called for the
prohibition of some types of AWS and the regulation of others deemed more legally and
ethically acceptable.

Figure 3: Campaign to stop killer robots

Nations around the world call for the creation of new international law that directly outlines how
autonomous weapons systems should be governed and how to identify any weapons or uses that
would be categorically unlawful.

The UN's Convention on Certain Conventional Weapons (CCW) held its first informal meeting
of experts in 2013, in order to discuss the issue of lethal autonomous weapons systems (LAWS).
This meeting marked the beginning of formal UN involvement in addressing the challenges of
AWS. A treaty regulating AWS has already been proposed by multiple states as a potential
protocol to the CCW, which places restrictions on the use of weapons that may be deemed to
be excessively injurious or to have indiscriminate effects.

Since 2017, the UN has convened a yearly meeting of the Group of Governmental Experts on
Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (GGE). Its mission
statement is to explore and agree on possible recommendations related to emerging technologies
in the context of the objectives and purposes of the CCW. Each year parties to the CCW and
NGOs are invited to submit Working Papers and statements addressing the topics of discussion
selected for that particular year.

In 2019 the GGE established 11 guiding principles to shape all future work by the group and
influence the development of a legal framework. It is highly recommended to further research
and acknowledge these principles as they serve as an excellent guide for discussions in the
DISEC committee.

The first problem faced by the GGE and international efforts is the lack of a common definition
of AWS. In order for any regulation to be effective, States need to agree on a universal
definition of AWS so that they can uniformly apply the substantive law. In seeking proposals
from States, the GGE has attempted to create a working definition that helps to draw the line
between acceptable and unacceptable AWS.

The general consensus in the international community appears to favor banning fully autonomous
lethal weapons that operate completely outside of human control and are designed to attack with
lethal force. This leaves the need to regulate semi-autonomous and non-lethal AWS, drawing the
line between acceptable and unacceptable systems and uses.

Categories of definitions proposed

● By Machine-Learning Capabilities:

“An intelligent weapon system with autonomous operation made.. capable of recognizing
patterns in combat environments and of learning to operate and making decisions regarding the
critical functions…based on uploaded databases acquired experiences and its own calculations
and conclusions” - Brazil GGE Submission (2020).

● By Level of Human-Machine Interaction:

“A weapon system that, once activated, can select and engage targets without further
intervention by an operator” - United States Department of Defense Directive (2020).

“Weapons systems that completely exclude the human factor from decisions about their
employment” - Germany GGE Submission (2020).

● By Autonomous Critical Functions:

“A weapon that incorporates autonomy into the critical functions of selecting and engaging to
apply force against targets without human intervention” - Group of 14 States GGE Submission
(2022).

Ethical Concerns
Under Article 36 of Additional Protocol I to the Geneva Conventions, state parties are under an
obligation to determine whether the employment of any new weapon would in some or all
circumstances be prohibited by international law. For this purpose, ethical concerns have been
raised by states and NGOs since before the first meeting of the GGE, which typically fall into
three categories:

● Legal accountability: Some established process for holding an individual accountable
for the consequences of their actions, whether the operator or commander, or the
programmers or manufacturers of the weapon in case of malfunction.
● Moral accountability: Human commander's intent must be directly linked to the
outcome of the attack so that we find them morally responsible for the result. Delegating
the decision to engage to a machine would also prevent considerations of humanity.
Even where something or someone is a legitimate target the human operator still has the
choice to forgo the attack based on considerations of humanity rooted in human
emotions that cannot be programmed into a machine.
● Human dignity, use of force & unintended harm: Delegating the decision to use
force to a machine undermines the human dignity of combatants and of any civilians put
at risk. As Christof Heyns (UN Special Rapporteur) put it: “To allow machines to
determine when and where to use force against humans is to reduce those humans to
objects.”

Current Situation
Huge progress in visual recognition tools, machine learning and robotics is making it far easier
for computers to navigate complex environments. The war in Ukraine has delivered an
opportunity for defense tech companies, and a surge in conflicts worldwide has revealed a
fragmented international community increasingly unable or unwilling to enforce humanitarian
laws.

The war in Ukraine has been characterized by drone deployment of unprecedented scale, with
thousands of UAVs used to track enemy forces, guide artillery, and bomb targets. One tiny,
inexpensive FPV (first-person view) drone has proved to be one of the most potent weapons in
this war, where conventional warplanes are relatively rare because of a dense concentration of
anti-aircraft systems near front lines.

FPVs - originally designed for civilian racers - are controlled by pilots on the ground and often
crash into targets, laden with explosives. The total cost of the drone’s components, including an
explosive warhead secured with cable ties, can be as little as $500.

Figure 4: FPV Remote Control system

Since 24 February 2022, the Action on Armed Violence organization has recorded at least 745
children among the 20,138 civilians killed and injured by explosive weapons, which include UAV
and FPV strikes, in Ukraine. Urban residential areas are the worst impacted locations for
child casualties of explosive violence, accounting for 52% (386) of children harmed since the
invasion.

In the Middle East, the Israeli army has pioneered the use of remote-controlled quadcopters
(drones with four rotors) equipped with machine guns and missiles to surveil, terrorize, and kill
civilians sheltering in tents, schools, hospitals, and residential areas. Residents of Gaza’s Nuseirat
Refugee Camp report that some drones broadcast sounds of babies and women crying, in order
to lure out and target Palestinians.

These AWS work alongside a targeting software system called “Lavender”. A key concern is that
this system was built and trained on flawed data. According to +972 Magazine, the training data
fed into the system included information on non-combatant employees of Gaza’s Hamas
government, resulting in Lavender mistakenly flagging as targets individuals with communication
or behavioral patterns similar to those of known Hamas militants. These included police and civil
defense workers, militants’ relatives, and even individuals who merely had the same name as
Hamas operatives.

As reported, even though Lavender had a 10% error rate when identifying an individual’s
affiliation with Hamas, this system got approval to automatically adopt its kill lists “as if it were a
human decision.” Soldiers reported not being required to thoroughly or independently check the
accuracy of Lavender’s outputs or its intelligence data sources.

These examples in contemporary conflicts showcase the lack of supervision and regulation to
ensure civilian safety and ethical considerations in the military applications of AI.

Industry
The development and sale of autonomous weapons systems have become a lucrative business,
with companies across the globe investing heavily in AWS technologies. Major defense
contractors like Lockheed Martin, Northrop Grumman, and BAE Systems have been at the
forefront, developing advanced drones, autonomous ground vehicles, and other AI-enabled
weaponry.

According to a report by the Stockholm International Peace Research Institute (SIPRI), global
military spending reached approximately $2 trillion in 2021, with a significant portion allocated
to the development and procurement of autonomous and AI-driven systems. This growth is
driven by increasing investments in autonomous systems by countries seeking to modernize their
military capabilities and gain a strategic edge.

In addition to drones, the market for other types of autonomous systems, including unmanned
ground vehicles (UGVs) and autonomous naval vessels, is also expanding rapidly. The global
market for military robotics is projected to grow from $16.5 billion in 2020 to $30.8 billion by
2025, a compound annual growth rate (CAGR) of 12.92%.

Illicit Trafficking
The proliferation of AWS has raised significant concerns about their regulation and the potential
for illicit trafficking. The relatively small size and ease of assembly of some AWS, particularly
drones, make them attractive to non-state actors, including terrorist organizations and criminal
gangs. Reports have surfaced of drones being used by groups like ISIS and various drug cartels
to conduct surveillance, deliver explosives, and smuggle contraband across borders.

While precise figures for the illicit market in AWS are challenging to determine, estimates suggest
that the black market for military-grade drones and autonomous systems could be worth
hundreds of millions of dollars annually. This market includes the trafficking of both complete
systems and components that can be assembled into functional weapons.

A report by the United Nations Office on Drugs and Crime (UNODC) highlights that the illicit
arms trade, including sophisticated weaponry like drones, is a growing concern, with annual
values estimated to be between $1 billion and $2 billion. This includes a variety of weapons, but
the inclusion of high-tech systems indicates the expanding scope of illicit trafficking.

The primary sources of illicit trafficking in AWS and military uses of AI include:

● Conflict Zones: Areas with ongoing conflicts, such as Syria, Libya, and Yemen, often
serve as hubs for the illicit arms trade due to the instability and lack of effective
governance.
● Corrupt Military Personnel: In some cases, military personnel in various countries may
engage in the illicit sale of autonomous weapons systems to supplement their income.
● Transnational Criminal Organizations: Drug cartels and other organized crime
groups have been known to procure AWS for use in their operations.
● Online Marketplaces: The dark web and other clandestine online platforms allow for
anonymous transactions, making it difficult for authorities to trace and regulate the trade.

Bloc Positions
United States and the West
The United States has been a key player in the development and deployment of Autonomous
Weapons Systems (AWS) and artificial intelligence (AI) technologies in modern warfare. The U.S.
military has invested heavily in research and development of these technologies with the aim of
gaining a strategic advantage on the battlefield.

The U.S. Department of Defense has outlined its position on AWS in its directive on
autonomous and semi-autonomous weapons. The directive emphasizes the importance of
maintaining human control over the use of force and ensuring that AWS are used in accordance
with international law. The United States has also been vocal about the need for transparency
and accountability in the development and deployment of AI technologies in warfare.

The American approach to AWS is characterized by a focus on leveraging these technologies to
enhance military capabilities while also recognizing the ethical and legal implications of their use.
The U.S. government has engaged in discussions with other countries and international
organizations to address concerns about the potential risks associated with AWS, such as the loss
of human control over military operations and the potential for unintended harm to civilians.

Overall, the United States sees AWS and AI as important tools for maintaining military
superiority and deterring potential adversaries, while also recognizing the need for international
cooperation and regulation to ensure their responsible use in modern warfare.

European Union
The European Union has taken a more cautious approach to the use of AI and AWS in modern
warfare compared to the United States. European countries have expressed concerns about the
ethical and legal implications of deploying autonomous weapons systems on the battlefield.

The EU has been actively engaged in discussions on the regulation of AI and AWS at both the
regional and international levels. The European Parliament has called for a ban on the
development and use of lethal autonomous weapons systems and has emphasized the need for
human oversight and accountability in military decision-making processes.

European countries have also been working on developing ethical guidelines for the use of AI in
warfare, with a focus on ensuring compliance with international humanitarian law and human
rights standards. The EU has advocated for a human-centric approach to the development and
deployment of AI technologies, emphasizing the importance of transparency, accountability, and
respect for human dignity.

While European countries recognize the potential military benefits of AI and AWS, they are also
mindful of the risks and challenges associated with their use. The EU is committed to promoting
a rules-based international order that governs the responsible use of these technologies in
warfare and ensures that human rights and humanitarian principles are upheld.

China and the East

China has emerged as a major player in the development and deployment of AI and AWS in
modern warfare. The Chinese military has been investing heavily in AI technologies with the aim
of enhancing its military capabilities and gaining a strategic advantage over potential adversaries.

The Chinese government has not been as vocal as Western countries in outlining its position on
AWS and AI in warfare. However, China has expressed its commitment to developing AI
technologies for both civilian and military applications, with an emphasis on leveraging these
technologies to enhance national security and defense capabilities.

China's approach to AWS is characterized by a focus on technological innovation and strategic
competition with other major powers. The Chinese military has been exploring the use of AI for
a wide range of military applications, including intelligence gathering, surveillance, and
autonomous weapons systems.

While China has not explicitly called for a ban on lethal autonomous weapons systems, it has
emphasized the importance of maintaining human control over the use of force and ensuring
that AI technologies are used in accordance with international law. Chinese officials have also
highlighted the need for international cooperation and dialogue on the regulation of AI and AWS
to address concerns about their potential risks and challenges.

In conclusion, China's stance on autonomous weapons systems reflects its strategic priorities and
ambitions in modern warfare, with a focus on leveraging AI technologies to enhance military
capabilities while also recognizing the need for responsible use and international cooperation in
this rapidly evolving field.

Questions a Resolution Must Answer (QARMAs)

1. What definitions and ethical considerations should guide the use of AI in warfare,
particularly in relation to the protection of civilian populations, adherence to
international humanitarian law, and respect for human rights?
2. What measures can be taken to ensure human control and accountability over the
development and deployment of Autonomous Weapons Systems (AWS) and artificial
intelligence (AI) technologies in modern warfare?
3. How can the international community promote transparency and information sharing
regarding the use of AI and AWS in military operations?
4. How can countries strike a balance between leveraging AI technologies for military
advantage and ensuring that these technologies are used in a manner that upholds ethical
principles and international norms?
5. What role should international organizations, such as the United Nations and regional
bodies, play in regulating the development and use of AWS and AI in modern warfare to
promote peace, security, and stability?
6. How can countries address concerns about the potential proliferation of autonomous
weapons systems and the implications this may have for arms control, disarmament, and
illicit trafficking?
7. In what ways can the international community foster cooperation and dialogue among
stakeholders in society to address the opportunities presented by AI technologies in
military applications?

Position Papers
A Position Paper is a policy statement in which delegates analyze and present their country’s view
on the issue being discussed, focusing on past national and international actions and the
development of viable proposals for the topic.

Your position paper should always include a heading with the title (“Position Paper”), your
delegation (the country you are representing), your committee (full name), the topic you are
discussing (as stated in your study guide), your full name and the name of your school.

Additionally, a standard position paper consists of three paragraphs:

1. Your first paragraph should include a brief introduction to the topic, always connecting
the issue to your country. Try to include statistics, data and phrases that may apply.
Always bear in mind that you should be focusing on answering the question “Why is the
issue relevant to my country?” and explain your country’s situation and policy about the
issue.
2. Your second paragraph should include a summary of past actions taken by the
international community related to the topic. Explain your country’s involvement,
comment on the effectiveness of the measures, and state how they can be improved.
3. Your third paragraph should focus on proposing solutions, always according to your
country’s policy. Try to be creative and propose original ideas that will help other
delegates (and your dais) remember your contribution to the debate. Finally, do not
forget to write a strong closing sentence.

The format for the position paper is the following:

● Font: Times New Roman
● Font Size: 12
● Spacing: 1.15
● Bibliography: APA 7th edition
● Margins: Standard

Each delegation is responsible for submitting a Position Paper to the committee’s email,
disec@mariareinamarianistas.net, by Tuesday, July 2nd (11:59 pm). It is important to mention
that delegates who do not submit a position paper will NOT be eligible for awards.

Bibliography
● https://vcdnp.org/laws-where-are-we/
● https://www.nature.com/articles/d41586-024-01029-0
● https://www.sciencedirect.com/science/article/abs/pii/S0160791X23001276
● https://www.linkedin.com/pulse/ai-modern-warfare-autonomous-weapons-object-drones-ologunde/
● https://www.geopoliticalmonitor.com/lethal-autonomous-weapon-systems-a-gamechanger-demanding-regulation/
● https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/May-June-2017/Pros-and-Cons-of-Autonomous-Weapons-Systems/
● https://www.cigionline.org/articles/the-united-states-quietly-kick-starts-the-autonomous-weapons-era/
● https://www.thebureauinvestigates.com/drone-war/data/afghanistan-reported-us-covert-actions-2020/
● https://www.cloudflare.com/learning/ai/what-is-neural-network/
● https://ccdcoe.org/uploads/2020/02/UN-191213_CCW-MSP-Final-report-Annex-III_Guiding-Principles-affirmed-by-GGE.pdf
● https://www.reuters.com/graphics/UKRAINE-CRISIS/DRONES/dwpkeyjwkpm/
● https://reliefweb.int/report/ukraine/ukraine-12-civilians-killed-and-20-injured-russian-drone-strike-apartment-building-odesa
● https://www.accessnow.org/publication/artificial-genocidal-intelligence-israel-gaza/
● https://www.unodc.org/unodc/en/firearms-protocol/firearms-study.html
