

BHOJ REDDY ENGINEERING COLLEGE FOR WOMEN

TECHNICAL SEMINAR ON
Artificial Intelligence Crime: An
Overview of Malicious Use and Abuse of
AI

INTERNAL GUIDE:
A Navila
Associate Professor
Department of ECE

PRESENTED BY:
K Srija Reddy
19321A04F4
IV B.Tech ECE-C

CONTENTS
Introduction
What is Artificial Intelligence
Advantages and Disadvantages of AI
Artificial Intelligence Crime
Predicting who will commit the crime
Malicious Abuse of AI
Malicious Use of AI
Conclusion
References

INTRODUCTION

• The capabilities of Artificial Intelligence (AI) evolve rapidly and affect almost all sectors of society.
• AI has been increasingly integrated into criminal and harmful activities, expanding existing vulnerabilities and introducing new threats.
• This seminar reviews representative incidents, which allow us to construct a typology of the malicious use and abuse of systems with AI capabilities.
• The main objective is to clarify the types of activities and the corresponding risks.

WHAT IS ARTIFICIAL INTELLIGENCE

• Artificial Intelligence is the ability of a computer program to learn and think.
• The goals of artificial intelligence include learning, reasoning, and perception.
• John McCarthy coined the term ‘Artificial Intelligence’ in 1956.
• AI is being used across different industries, including finance and healthcare.
• AI is used in many technologies today, for example:
  Self-driving cars
  Robotics
  Automation

ADVANTAGES OF AI:
• Reduction in human error.
• Takes risks on behalf of humans.
• Faster decisions.

DISADVANTAGES OF AI:
• Can make humans lazy.
• Unemployment.
• Lack of out-of-the-box thinking.

ARTIFICIAL INTELLIGENCE CRIME

• ‘‘Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions’’ frames the field: AI research and regulation seek to balance the benefits of innovation against any potential harms and disruption.

• AI used for crime detection:
  Detecting gunfire
  Detecting clues at a crime scene
  Detecting bombs

• AI used for crime prevention:
  Predicting crime spots
  Predicting who will commit a crime
  Detecting bombs
PREDICTING WHO WILL COMMIT THE CRIME

• The Chinese facial recognition company Cloud Walk Technology claims to predict whether an individual will commit a crime before it happens.
• The company intends to use facial recognition and gait analysis technology to help the government apply advanced AI to find and track people.
• The system will detect any suspicious changes in a person's behaviour or unusual movements.

MALICIOUS ABUSE OF AI

1) INTEGRITY ATTACKS:

• Machine learning (ML) has become more prevalent in recent years.
• Machine learning models are prone to integrity attacks.
• In integrity attacks, hackers attempt to inject false information into a system to corrupt the data, undermining its trustworthiness.
• Adversarial examples, a type of integrity attack, can be used to manipulate ML models, causing the algorithm to make mistakes (e.g., misclassification), as sketched below.
• Example: poisoning attack.
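A minimal sketch of how an adversarial example can be crafted with the Fast Gradient Sign Method (FGSM). It assumes PyTorch and a pretrained image classifier; the function name and the epsilon value are illustrative, not taken from any specific incident.

```python
# Minimal FGSM sketch (assumes PyTorch and a pretrained classifier).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a slightly perturbed copy of `image` that the model
    is more likely to misclassify."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # so the change stays invisible to a human observer.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The perturbation is bounded by epsilon, which is why the doctored image on the next slide looks identical to the original.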

Below are two images of a dog that are indistinguishable to the human eye. The image on the left is the original picture of a puppy; the one on the right is a slight modification, created by adding the small noise vector shown in the centre.

2) UNINTENDED OUTCOMES OF THE USE OF AI

• Models used to train AI systems can produce results different from what the developer expected, for various reasons.
• Algorithms can produce unexpected output due to, for instance, unintentional memorization by models based on neural networks.
• Example: Gmail's Smart Compose.
• When a user enters a text prefix such as ‘‘my ID number is’’, the model should not suggest a text completion containing the ID number of another user (a simple filtering sketch follows below).
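One possible safeguard, shown here as a hypothetical sketch rather than Gmail's actual mechanism, is to suppress any suggested completion that matches a sensitive-data pattern before it reaches the user:

```python
import re

# Patterns that look like sensitive data; purely illustrative.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{9,12}\b"),                 # long numeric IDs
    re.compile(r"\b\d{4}[- ]\d{4}[- ]\d{4}\b"),  # card-like numbers
]

def safe_suggestion(suggestion):
    """Return the suggestion, or None if it may leak memorized data."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(suggestion):
            return None  # suppress potentially memorized content
    return suggestion
```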

3) ALGORITHMIC TRADING/STOCK MARKET
MANIPULATION

• With the help of computers and AI-powered software programs, technology facilitates and accelerates the pace of financial analysis and decisions.
• The use of AI systems in market trading, which makes markets move ‘‘with lightning speed’’, has both positive and negative aspects.
• With the growth of algorithmic trading, the stock market has become susceptible to high-speed crashes.
• These incidents can be intentional or accidental.
• Example: the 2010 Flash Crash (see the toy simulation below).
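A toy simulation, purely illustrative and not a model of any real market, of how momentum-following bots can amplify a small dip into a cascade of sell orders:

```python
# Toy "flash crash": every bot sells whenever the price fell last tick,
# and each wave of selling pushes the price down further.
def simulate_crash(price=100.0, bots=50, steps=10, impact=0.002, dip=0.01):
    history = [price]
    price *= 1 - dip          # a small initial dip triggers the cascade
    history.append(price)
    for _ in range(steps):
        if history[-1] < history[-2]:      # price fell last tick...
            price *= (1 - impact) ** bots  # ...so all bots sell at once
        history.append(price)
    return history

print([round(p, 2) for p in simulate_crash()])  # collapses within a few ticks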

4) MEMBERSHIP INFERENCE ATTACKS

• Membership inference attacks try to uncover and reconstruct the data used to train machine learning models.
• These attacks can be effective against several kinds of systems, such as classification and sequence-to-sequence models.
• The attacks can target datasets containing, for instance, biometric and genetic data.
• Example: thispersondoesnotexist.com (a minimal sketch of the idea follows below).
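A minimal sketch of the simplest variant of the attack, assuming the attacker can query the target model for output probabilities. Because overfitted models tend to be unusually confident on their training data, a confidence threshold alone already leaks membership; the threshold value here is illustrative.

```python
import numpy as np

def infer_membership(probs, threshold=0.9):
    """Guess which queried examples were in the training set.

    probs: array of shape (n_examples, n_classes) holding the target
    model's output probabilities for each queried example.
    Returns a boolean array; True means "probably a training member".
    """
    # High confidence is treated as evidence that the model saw the
    # example during training.
    return np.asarray(probs).max(axis=1) > threshold
```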

MALICIOUS USE OF AI

1) SOCIAL ENGINEERING

• Social engineering attacks use deception techniques to manipulate people into sharing sensitive or personal information, which can then be used for fraudulent purposes.
• Such attacks are performed in different ways using an array of AI techniques.
• Using these techniques, cybercriminals can craft sophisticated manipulation tactics, increasing their chances of success and their gains.
• Two ways:
  A) Phishing
  B) Manipulation

A) PHISHING
• Phishing is a type of social engineering attack often used to steal user data, including login credentials and credit card numbers.
• Hackers use AI techniques to develop ‘social bots’, which help them deceive and manipulate people into complying with their requests (e.g., sharing personal information).
• Example: CyberLover.
• Phishing can be mitigated by techniques such as two-factor authentication (sketched below).
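As a concrete illustration of two-factor authentication, the sketch below verifies a time-based one-time password (TOTP). It assumes the third-party pyotp library; the enrolment flow is simplified and the secret is generated on the spot rather than stored.

```python
import pyotp

# At enrolment, the service stores a shared secret for the user
# (generated here on the fly; a real system would persist it securely).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current one-time code:", totp.now())

# At login, the code typed by the user is checked alongside the password,
# so a phished password alone is not enough to get in.
user_code = totp.now()  # stand-in for the code the user types
print("Code accepted:", totp.verify(user_code))
```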

B) MANIPULATION

• Malicious attackers can use AI techniques to develop algorithms or social bots that manipulate public opinion.
• Large numbers of bots are created to support actions with malicious intent.
• Bots can be used to create the perception of support for a cause in public consultations and to interfere with polls; they can potentially influence public opinion and the outcome of elections.
• Example: the Cambridge Analytica scandal.

2) MISINFORMATION AND FAKE NEWS
• AI systems can be used to accelerate the creation and spread of unsubstantiated content aimed at misinformation.
• The development and diffusion of technology, blogging platforms, and social media have changed the way individuals consume information, access news, and form opinions.
• The fast pace of the Internet also enables anyone to create and rapidly share content that can reach many people. This has created an environment that allows misinformation and fake news to flourish.
• Example: text-generation tools such as GPT-3 (see the sketch below).
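To illustrate how cheap machine-generated text has become, the sketch below uses GPT-2, a small, freely available predecessor of GPT-3, via the Hugging Face transformers library (GPT-3 itself is only reachable through a commercial API):

```python
from transformers import pipeline

# Mass-producing plausible-sounding text takes only a few lines.
generator = pipeline("text-generation", model="gpt2")
outputs = generator("Breaking news:", max_length=40,
                    num_return_sequences=3, do_sample=True)
for out in outputs:
    print(out["generated_text"])
```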

3) HACKING
• Hacking is the act of compromising digital devices and networks through unauthorized access to an account or computer system.
• It refers to the misuse of devices like computers, smartphones, tablets, and networks to damage or corrupt systems, gather information on users, steal data and documents, or disrupt data-related activity.
• Ways AI is used in hacking include:
  Forgery
  Repetitive tasks
  Malware

4) AUTONOMOUS WEAPONS SYSTEMS (AWS)
• Malicious actors could alter the software embedded in weapons systems, resulting in unintended casualties.
• Militaries have been exploring autonomy in weapons for some time, practically since the inception of AI in the 1950s.
• Because machines can process data, analyze information, and in some situations make decisions faster than humans, their use is particularly attractive in the context of defense.
• Example: hacking the GPS of drones.

CONCLUSION
• The threats posed by the use and abuse of AI systems must be well understood in order to create mechanisms that protect society and critical infrastructures from attacks.
• Mapping these threats enables informed reflection on the governance strategies, policies, and activities that can be developed or improved to minimize risks and avoid harmful consequences.
• Enhanced collaboration among governments, industry, and civil society actors is vital to increase preparedness and resilience against the malicious use and abuse of AI.

REFERENCES
• K. Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. London, U.K.: Yale Univ. Press, 2021.
• B. Hibbard, Ethical Artificial Intelligence, 1st ed. Madison, WI, USA, 2015.
• Y. Yadav, ‘‘The failure of liability in modern markets,’’ Virginia Law Review, vol. 102, pp. 1031–1100, May 2016.
• A. Bessi and E. Ferrara, ‘‘Social bots distort the 2016 U.S. Presidential election online discussion,’’ First Monday, vol. 21, no. 11, Nov. 2016.
• J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner, ‘‘Face2Face: Real-time face capture and reenactment of RGB videos,’’ in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Las Vegas, NV, USA, Jun. 2016.

