Case Cipher Case Study 2024


CASE CIPHER

Innovate, Integrate, Elevate: Unleashing the Power of Technology in Every Challenge


INSTRUCTIONS
● Teams must submit their presentation by 11:59 pm,
13th February 2024. Submissions should be in PDF
format (maximum 4 slides, excluding the introduction
and the final slide).

● The solutions should be emailed to
“casecipher.cmssrcc@gmail.com” with the subject line
<Team Name Case Cipher> and the attachment (PDF)
named <TeamName_CaseCipher>.

● Kindly mention the names and contact details of all
the team members in the body of the email.

● Relevant information from the web or other suitable
sources may be added to the analysis.

● Any form of plagiarism will be heavily penalized.

EVALUATION CRITERIA
● Quality of Analysis
● Feasibility of Solution
● Creativity
● Overall Presentation of the case
Deep Fake Technology
Introduction & Background

Deepfake is a term that refers to synthetic media that
have been digitally manipulated to replace one
person’s likeness convincingly with that of another.

The term deepfake combines deep, taken from AI
deep-learning technology (a type of machine learning
that involves multiple levels of processing), and fake,
indicating that the content is not real. The term came
to be used for synthetic media in 2017, when a Reddit
moderator created a subreddit called “deepfakes” and
began posting videos that used face-swapping
technology.

Deepfakes are created using powerful techniques from
machine learning and AI, such as Generative
Adversarial Networks (GANs). These models are trained
on large amounts of data about the targeted person and
are then used to generate fake media. Famous examples
of deepfakes include Mark Zuckerberg boasting about
having “total control of billions of people’s stolen
data” and Barack Obama labelling Donald Trump a
“complete dipshit.” A deepfake video purporting to show
Volodymyr Zelenskyy, the president of Ukraine, pleading
with his soldiers to surrender circulated in 2022.
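
For teams who want to see the mechanism behind this, the following is a minimal, illustrative sketch of adversarial (GAN-style) training in PyTorch. It is not the code behind any real deepfake system; the network sizes, data, and hyperparameters are placeholder assumptions chosen only to show the generator-versus-discriminator loop described above.

# Minimal, illustrative GAN training loop (PyTorch).
# All sizes, data, and hyperparameters are placeholder assumptions,
# not taken from any real deepfake system.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate real data from fakes.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

# Example usage with random stand-in "real" data:
if __name__ == "__main__":
    dummy_real = torch.rand(32, data_dim) * 2 - 1  # scaled to [-1, 1]
    print(train_step(dummy_real))

In a real face-swapping pipeline the small dense networks above would be replaced by convolutional encoder-decoder models trained on images of the target person, but the adversarial feedback loop is the same.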

There are at least four major types of deepfake
producers:

1) Communities of deepfake hobbyists.
2) Political players such as foreign governments and
various activists.
3) Other malevolent actors such as fraudsters.
4) Legitimate actors, such as television companies.

Online users can find amusement in hobbyists’ meme-like
deepfakes, but there are also more malevolent
parties participating. Deepfakes can be used by a variety
of political actors, such as political agitators, hacktivists,
terrorists, and foreign states, in misinformation
campaigns to sway public opinion and erode trust in the
institutions of a particular nation. Deepfakes are
weaponized misinformation used in modern hybrid
warfare to stoke unrest and disrupt elections.
LEGALISATION OF DEEPFAKES:
Deepfakes are prohibited by law in three US states. Texas
prohibits the use of deepfakes in an attempt to influence
elections, Virginia forbids the dissemination of deepfake
pornography, and California has laws prohibiting the use
of non-consensual deepfake pornography and political
deepfakes within 60 days of an election. While deepfakes
are not explicitly illegal in India, India’s IT Rules contain
guidelines for identifying and removing deepfakes from
social media and punishing offenders. While new laws
can be introduced to prevent deepfakes, they also need
enforcement mechanisms.

ECONOMIC BACKGROUND:
In recent years, we have seen a rise in deepfakes.
Between 2019 and 2020, the amount of deepfake content
online increased by 900%. Forecasts suggest that this
worrisome trend will continue in the years to come –
with some researchers predicting that “as much as 90%
of online content may be synthetically generated by
2026.” Oftentimes misused to deceive and conduct
social engineering attacks, deepfakes erode trust in
digital technology and increasingly pose a threat to
businesses.
The development of artificial intelligence (AI) has
significantly increased the risk of deepfakes. AI
algorithms, including generative models, can now create
media that are difficult to distinguish from real images,
videos or audio recordings. Moreover, these algorithms
can be acquired at a low cost and trained on easily
accessible datasets, making it easier for cybercriminals
to create convincing deepfakes for phishing attacks and
scam content.

Generative models like Generative Adversarial Networks
(GANs) and Variational Autoencoders (VAEs) have shown
remarkable capabilities in generating art, music, and
other creative content. These algorithms can analyze
vast datasets of existing artworks and then generate
new, often indistinguishable, pieces of art.
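
As a rough illustration of the second family of models mentioned above, here is a minimal variational autoencoder sketch in PyTorch: it compresses inputs into a latent space and can then decode random latent vectors into new, plausible samples. All dimensions and the loss weighting are assumptions made purely for illustration.

# Minimal, illustrative variational autoencoder (VAE) in PyTorch.
# Dimensions and loss weighting are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

data_dim, latent_dim = 784, 16  # e.g. flattened 28x28 images

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(data_dim, 128)
        self.to_mu = nn.Linear(128, latent_dim)      # latent mean
        self.to_logvar = nn.Linear(128, latent_dim)  # latent log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, data_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = F.relu(self.encoder(x))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Training objective: reconstruction term plus KL divergence
    # to a standard normal prior over the latent space.
    recon_loss = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Sampling new content: decode random latent vectors
# (only meaningful after the model has been trained).
model = VAE()
with torch.no_grad():
    new_samples = model.decoder(torch.randn(8, latent_dim))
print(new_samples.shape)  # torch.Size([8, 784])

Real art-generating systems are far larger and usually image- or text-conditioned, but the sample-from-latent-space idea shown in the last lines is the core of how "new" pieces are produced.
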
Problem statement 1:
When AI-generated art becomes commercially
successful, issues arise regarding ownership and profits.
Should the AI system's creators, or the owners of the
dataset, be entitled to financial gains? How can
copyright laws adapt to accommodate this new form of
creativity?

Problem statement 2:
Should deepfake technology be made legal, and if so,
might this have uncontrollable repercussions?
Furthermore, how likely is it that society will accept
content produced by deepfake technology?
COPYRIGHT NOTICE

This case is the intellectual property of CMS,
SRCC. All rights reserved. No part of this
publication may be reproduced, stored in a
retrieval system, or transmitted, in any form or
by any means, electronic, mechanical,
photocopying, recording, or otherwise, without
the prior written permission of The Computers
and Mathematics Society, SRCC.

For any further queries, mail us at:
casecipher.cmssrcc@gmail.com

OR Contact:
Jai Mehtani: 8527433813
Yashasvi Dhankar: 9910533063
Disha Chaprana: 9899653034
