
17441617, 2021, 2, Downloaded from https://onlinelibrary.wiley.com/doi/10.1111/fcre.12576 by Universidad Autonoma de Chile, Wiley Online Library on [04/11/2022].

See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
WHEN “SWEETIE” IS NOT SO SWEET: ARTIFICIAL INTELLIGENCE
AND ITS IMPLICATIONS FOR CHILD PORNOGRAPHY

Claudia Ratner

The production of child pornography using Artificial Intelligence is poised to evade current laws protecting children from abuse. Artificial Intelligence “DeepFakes” can be used to create indistinguishable videos and images of child abuse without any actual child abuse ever occurring. This Note proposes two solutions to this looming problem. First, Artificial Intelligence should fall under the “computer-generated” terminology found in the 18 U.S.C. § 2256(8) definition of child
pornography. Second, if Artificial Intelligence cannot be considered to fall under that definition, then 18 U.S.C. § 2256(8) should be amended to include “Artificial Intelligence-generation.”

Key Points for the Family Court Community:


• The United States is one of the largest producers and consumers of child abuse content in the world.
• In 2018, technology companies reported over 45 million images and videos of child abuse.
• The National Center for Missing and Exploited Children reviews 25 million images of child abuse annually, amounting to 480,769 images a week.
• Currently, there are no laws regulating Artificial Intelligence in the United States.
• DeepFake technology has progressed to allow even the most rudimentary computer user to create a pornographic DeepFake in just a few hours.

Keywords: AI; Artificial Intelligence; Child abuse; Child Pornography; Deepfake; Deep Learning; Family Law; First
Amendment; Internet; Machine Learning; Obscenity; Technology.

I. INTRODUCTION

“Sweetie,” a ten-year-old Filipina girl, logs online to a public chat room and turns on her
webcam.1 She smiles and chats with seemingly anonymous users in the room. But soon enough,
she has hundreds of adults messaging her to perform live sexual acts for money.2 Sweetie, however, is not actually real.3 Sweetie is a computer-generated, virtual child, run by children’s
rights organization, Terre des Hommes, in a research project to expose online sexual predators.4
In the two-and-a-half months that Sweetie was active, she helped the organization identify over
20,000 people attempting to chat with her online, including 1,000 adults from 71 countries who
were willing to pay the virtual child for webcam sex.5 Terre des Hommes aims to expose and
end the “webcam sex tourism” that has reached new levels of popularity on the internet.6
Sweetie was created in 20137 and was programmed not only to look like a ten-year-old girl, but to use tone, facial expressions, and movements replicating those of a real child.8 Now imagine
that this same type of technology is used to create a virtual child for the purpose of being used
in child abuse media, rather than trying to prevent it. This situation is a repulsive, yet realistic
possibility of how Artificial Intelligence technologies can be used to create pornographic
images and videos of children, without the children ever being recorded in sexually explicit
ways.9
Sweetie was created by a computer-generated software program that was controlled and commanded by a human.10 Computer-generation, like the CGI used in most blockbuster films,11 was

Corresponding: claudia@gotogallo.com

FAMILY COURT REVIEW, Vol. 59 No. 2, April 2021 386–401, doi: 10.1111/fcre.12576
© 2021 Association of Family and Conciliation Courts.

cutting-edge at the time, but is being replaced by Artificial Intelligence (“AI”).12 Merriam-Webster defines AI as “a branch of computer science dealing with the simulation of intelligent behavior in computers” and “the capability of a machine to imitate intelligent human behavior.”13 Further subsets of AI, i.e. Machine Learning and Deep Learning, aim to help the AI “learn” how to progressively get better at tasks without being explicitly programmed by a human.14 Thus, through Deep Learning techniques and training, AI has the capability to “create” all on its own.15
The average technology user comes in contact with AI nearly daily (e.g., “Hey Siri,” and “Alexa!”), but as of late, AI is being used for more malicious purposes.16 A specific type of Deep Learning AI, known as “DeepFakes,” for example, has the capability to create realistic, lifelike, and near indistinguishable visuals of people, celebrities, even children, without any human intervention or control.17 This technology, though useful in many aspects of the entertainment industry, is causing multitudes of legal and ethical problems, especially in the area of porn.18
With new innovations in technology, child pornography is now primarily consumed through online, virtual means like social media websites, file and photo sharing sites, mobile apps, and, more recently, “live streams.”19 The ability to quickly create, post, and share content, and, specifically, content depicting child abuse, is becoming a rampant problem for internet, media, and technology companies.20 For example, in 2018, technology companies reported over 45 million images and videos of child abuse,21 while The National Center for Missing and Exploited Children reported that it reviews 25 million images of child abuse annually, amounting to 480,769 images per week.22 As the U.S. dominates the position of being one of the largest producers and consumers of child abuse content in the world,23 and with the U.S. also being a leader in advancing the studies and applications of AI,24 the necessity to regulate this quickly evolving technology is inescapable. However, regulating a technology that can independently create child pornography presents a new crop of issues that current laws may not be able to keep up with.25
This Note will address the imminent effects that Artificial Intelligence will have on the production of child pornography and aims to increase awareness of the present and future dangers of unregulated technologies. The proposed solutions intend to provide a starting point for a nationwide recognition and definition of Artificial Intelligence as applied to child pornography laws and suggest a model basis for including Artificial Intelligence in child pornography definitions. Part II of this Note defines Artificial Intelligence, Machine Learning, Deep Learning, and “DeepFakes,” and explains how each aspect of the technology can be used and implemented in the creation and production of child pornography. Part III will dive into the history of child pornography laws in the United States and demonstrate how the current laws are used to deter the production of conventional child pornography. Part IV proposes the first solution, where Artificial Intelligence should be included within the 18 U.S.C. § 2256(8) definition of child pornography. Part V proposes the second solution, where if Artificial Intelligence cannot be included under the current definition of child pornography, then 18 U.S.C. § 2256(8) should be amended to incorporate Artificial Intelligence in the definition. Part VI presents two obstacles to regulating Artificial Intelligence-produced child pornography: the first debates whether current laws against child abuse already consider Artificial Intelligence technologies, and the second discusses the issues of Artificial Intelligence-created child pornography. Lastly, Part VII will conclude the Note by reiterating the dangers of unregulated Artificial Intelligence and the necessity for current child pornography laws to modernize in this technology-driven day and age.

II. WHAT IS ARTIFICIAL INTELLIGENCE?

In 1950, Alan Turing dissected the question “Can machines think?” in his groundbreaking paper,
Computing Machinery and Intelligence.26 As an esteemed mathematician, logician, cryptanalyst,

and most notably, computer scientist, Turing is considered one of the founding fathers of AI.27
Turing developed the “Imitation Game,” more commonly known as the “Turing Test.”28 The test
aimed to determine whether a computer can make a human believe that it [the computer] is actually
a human.29 If the human is unable to distinguish the computer from the “foil” human in the test,
then the computer will be considered an “intelligent entity.”30 Although this test faced bouts of
criticism and objection, Turing’s theory of computer sentience built the foundation for future
research and development of AI.31

A. ARTIFICIAL INTELLIGENCE

Artificial Intelligence is a branch of computer science aimed at replicating human intelligence and behavior.32 The objective of this science is to create AI that is capable of human intellectual processes, such as making decisions, finding meaning, applying reasoning, generalizing and recognizing patterns, and learning from experiences.33 More specific behaviors, like complex problem solving, speech recognition, visual interpretation, and forming responses, are similar to the way a human learns in that the AI must perceive, evaluate, and understand its environment and stimulus in order to make a decision.34 To accomplish this type of intelligent behavior, the AI (through Machine Learning) is given a task and fed large amounts of data that is then processed through a series of algorithms.35 From this, the AI is able to make a decision or predict an outcome based on the information it has assessed.36 The AI then repeats this scenario over and over again, until it is able to thoroughly detect its environment, recognize patterns in the data, and subsequently learn from those patterns in order to adjust its response or decision accordingly.37
At their inception, AI technologies were limited to use by computer scientists and researchers, who were only able to develop the AI to play checkers and chess, solve mazes, and translate language.38 In the 21st century, however, advances in AI technology have expanded exponentially because of tremendous increases in computer power; massive, inexpensive, and easily accessible data sets; and major developments in the field of Machine Learning.39 Now, AI is incorporated into nearly every facet of a person’s life.40 Siri, Alexa, and Google Assistant live in phones, laptops, and smart homes to help users in their day-to-day lives: scheduling appointments, ordering food, online shopping, even setting the thermostat or locking the front door.41 Self-driving cars are no longer a fantasy from science fiction novels, and IBM’s Watson defeated Jeopardy’s two greatest contestants.42 One of the most frequent, albeit frightening, uses of AI is actually being implemented by social media giant, Facebook.43 Facebook’s algorithm, dubbed “DeepFace,” claims to contain the “largest facial dataset to date.”44 DeepFace is an AI-based, Deep Learning facial recognition system.45 Deep Learning is a subset of Machine Learning, and is the basis for how AI technologies are able to learn and create without human intervention.46

B. MACHINE LEARNING AND DEEP LEARNING

Machine Learning is a subset of AI that teaches systems to automatically learn from experiences
without a human explicitly programming it to do so.47 Machine Learning algorithms let the AI
comb through enormous amounts of data and find patterns within it.48 This process implements
statistical-based learning rather than human involvement; it allows the AI to learn and adapt from
its own experiences to make “smarter” predictions on data.49 There are many types of learning processes, but the two most common forms are Supervised and Unsupervised Learning.50 In Supervised Learning, humans provide the AI with labeled input data or training sets and desired outputs.51 This directs the AI to recognize the specific patterns in the data and to produce the pre-chosen outcomes.52 In Unsupervised Learning, the data provided has no labels, and no human input or guidance.53 This forces the AI to organize the data by searching for related characteristics and patterns, altering its outcomes based only on its own internal knowledge of the data.54
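The contrast between the two regimes can be seen in a few lines of code. The toy example below is illustrative only (the data, the nearest-centroid classifier, and the k-means loop are all invented for the sketch): the supervised half is handed labels and desired outputs, while the unsupervised half must organize the very same points by their related characteristics alone.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated groups of 2-D points stand in for a data set.
group_a = rng.normal(loc=(0.0, 0.0), scale=0.3, size=(50, 2))
group_b = rng.normal(loc=(3.0, 3.0), scale=0.3, size=(50, 2))
data = np.vstack([group_a, group_b])

# Supervised Learning: a human supplies the labels (the "desired outputs"),
# and the model learns one centroid per labeled class.
labels = np.array([0] * 50 + [1] * 50)
centroids = np.array([data[labels == k].mean(axis=0) for k in (0, 1)])

def predict(point):
    """Assign a new point to the nearest labeled centroid."""
    return int(np.argmin(np.linalg.norm(centroids - point, axis=1)))

# Unsupervised Learning: the labels are withheld, so k-means must group
# the points purely by similarity (here, distance).
centers = data[[0, 50]].copy()          # deterministic starting guesses
for _ in range(10):
    nearest = np.argmin(
        np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1)
    centers = np.array([data[nearest == k].mean(axis=0) for k in (0, 1)])
```

Both halves recover essentially the same two group centers; the difference is only whether a human told the system in advance what the groups were.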

Deep Learning is a further subset of Machine Learning that is more closely based on the way that the human brain functions.55 Deep Learning uses something known as “artificial neural networks,” which are composed of several layers of “hidden” data (or neurons), each level containing information from the previous one.56 This process allows the AI to learn from one level of data and build upon it in the next level, continuing this process throughout the whole network until it has essentially “learned” and can provide a prediction or output.57 Chris Meserole of the Brookings Institution provides a description of how these neural networks function by illustrating how an algorithm detects facial features in an image.58
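The layered structure described above can be sketched in a few lines. This toy forward pass is an illustration, not Meserole’s actual example — the weights are random rather than trained — but it shows how each “hidden” layer consumes only the previous layer’s output, so information is re-represented level by level until a final score emerges:

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(inputs, n_neurons):
    """One layer of an artificial neural network: a weighted combination
    of the previous layer's output, passed through a ReLU nonlinearity."""
    weights = rng.standard_normal((n_neurons, inputs.shape[0])) * 0.1
    return np.maximum(0.0, weights @ inputs)

pixels = rng.random(64)       # a flattened 8x8 "image" is the input layer
hidden1 = layer(pixels, 32)   # early layers respond to simple patterns
hidden2 = layer(hidden1, 16)  # deeper layers combine those patterns
output = layer(hidden2, 1)    # final layer: a single prediction score
```

In a trained network the weights would be adjusted over many examples rather than drawn at random; the point here is only the level-by-level flow of information.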
Deep Learning in AI is how Siri, Alexa, and self-driving cars are able to function, but it can also be implemented in marketing research, speech recognition, and of course, image recognition.59 Be that as it may, Deep Learning and artificial neural networks need immense amounts of information to operate correctly, commonly known as “Big Data.”60 Big Data is mostly pulled from, unsurprisingly, the internet.61 It may be surprising, however, that Big Data is almost always generated by people willingly using, posting, and sharing on the internet.62 For example, browsing history, cookies, “app-tracking,” and online purchases are used by AI to predict a person’s shopping habits.63 Facebook’s DeepFace facial recognition software processes only the images users upload themselves.64 Facebook alleges that DeepFace has achieved 97.25% accuracy in facial recognition; compare that to a human’s accuracy of 97.53%.65
Deep Learning AI is partially able to progress so quickly because of its access to Big Data.66
Facial recognition technologies allow the AI to essentially “see,” also known as “computer
vision.”67 Computer vision aids the AI in identifying and interpreting the content of images, as well
as what the image represents.68 With this, the AI is then able to learn how to create images too.69
In creating its own visuals, AI uses Generative Adversarial Networks (GANs).70 This process consists of two AI neural networks: one network creates the image, while the other network (which has access to the “target” image) evaluates and critiques the “created” image for accuracy.71 The assessing network sends the image back to the creating network, where the creating network will improve on the image based on the assessing network’s estimation.72 This process is repeated until the assessing network determines that the AI-created image is “identical” to the target image.73 AI has progressed past the point of creating images of animals, cartoons, and representations of words.74 Now, AI can create its own artwork,75 its own language,76 and even create its own “child” AI.77 Nonetheless, one of the most controversial abilities of AI revolves around its ability to create realistic, near indistinguishable images and videos of people.78
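In a real GAN both sides are trained neural networks, and the assessing network learns from example data rather than holding a literal copy of one target image; the toy sketch below is illustrative only, collapsing the assessing side into a per-pixel critique so the create-critique-improve cycle described above can be run end to end:

```python
import numpy as np

rng = np.random.default_rng(2)
target = rng.random(16)    # the "target" image, as a row of 16 pixel values
created = rng.random(16)   # the creating side's first, random attempt

def assess(candidate):
    """Assessing side: compare the created image against the target and
    return a per-pixel error signal (all zeros would mean "identical")."""
    return candidate - target

# The creating side repeatedly improves its image using the assessor's
# feedback, looping until the two are effectively indistinguishable.
rounds = 0
while np.abs(assess(created)).max() > 1e-3:
    feedback = assess(created)
    created = created - 0.5 * feedback   # move toward the critiqued target
    rounds += 1
```

Each pass shrinks the remaining error, so after a handful of rounds the “created” pixels match the target to within the assessor’s tolerance — the same back-and-forth dynamic, in miniature, that lets a GAN refine an image until its critic can no longer tell it apart.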

C. DEEPFAKES

The term “DeepFake” was coined by Reddit user “deepfake,” who posted AI-generated videos
of female celebrities engaged in sex.79 By definition, DeepFakes are fabricated videos, images, and other media created by AI that appear to be real.80 To create this media, as explained above, the AI uses Deep Learning to analyze the target image from the Big Data images and videos it is fed, while also mimicking the patterns it observes.81 Then, the media is run through a GAN process until the desired DeepFake is created.82 DeepFakes can be created in just a few hours with the right tools.83 The AI software can be found in open source libraries across the internet, and Facebook, Instagram, YouTube, and Google Photos, where 24 billion selfies were uploaded in 2015–2016, can all be used as sources of input data.84 With the words “Fake News” on the tip of everyone’s tongue, DeepFakes are causing widespread fear of their ability to sway political elections, disturb stock markets, and even cause nuclear warfare.85 However, the most popular and prominent use of DeepFakes is in creating fake, non-consensual pornography.86
Cybersecurity company DeepTrace published a study named “The State of DeepFakes.”87 The study found that 96% of all online DeepFake videos (14,678) were non-consensual pornography.88 Soon after the “discovery” of these videos, Reddit, Twitter, Discord, and Pornhub all banned non-consensual, DeepFake pornography on their platforms.89 Unfortunately, non-consensual

pornographic DeepFakes are still widely created and circulated throughout the internet, while some savvy techies are launching apps that help the average person create their own DeepFakes.90 The DeepNude app, for example, was created by programmer “Alberto,” who developed AI to take photos of clothed women uploaded by the user and “strip” the women naked.91 The AI in the app was able to generate realistic, nude versions of the clothed women in just a few seconds.92 The app sold for $50 and eventually went viral, causing the owner to take it down after backlash.93 The rise of DeepFakes is most concerning because the media they produce is indistinguishable from real photos and videos.94 DeepFakes seem so realistic because the human eye is ill-equipped to determine whether they are real or fake; likewise, the only way to distinguish media created by AI is by actually using AI itself.95

D. THE TROUBLE WITH CHILDREN

Although current DeepFake concerns are centered around female objectification and revenge porn, some porn creators are worried about AIs creating child pornography.96 Big Data “face sets” of celebrities are traded amongst DeepFake porn creators; unbeknownst to them, however, some sets contain images of the celebrities as children.97 It is uncertain whether these images were inserted accidentally or deliberately, but the result of using a minor’s face or likeness on a “pornographic body” or in a pornographic setting could be serious jail time.98 The PROTECT Act of 2003 is the current body of law that governs “virtual” child pornography.99 However, the constitutionality of virtual child pornography has been in a legal grey area, as advancements in technology make it increasingly difficult to regulate effectively.100

III. HISTORY OF CHILD PORNOGRAPHY LAWS IN THE UNITED STATES

A. DEFINING OBSCENITY AND CHILD PORNOGRAPHY

The United States has a brief, yet significant history regarding the foundation of its current child pornography laws.101 Starting with the Constitution, the First Amendment pronounces that “Congress shall make no law… abridging the freedom of speech.”102 This coveted constitutional protection, unfortunately, has been the subject of multiple issues regarding its precedence over child pornography.103
In 1973, the Miller v. California Court established the current obscenity test for speech that can
be regulated without violating First Amendment rights:104

(1) [W]hether “the average person, applying contemporary community standards” would find that the
work, taken as a whole, appeals to the prurient interest; (2) whether the work depicts or describes, in a
patently offensive way, sexual conduct specifically defined by the applicable state law; and (3) whether
the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.105

The Miller test does not automatically qualify sexual materials as obscene: to be obscene, a work must meet all three prongs.106 Later, in 1977, the Protection of Children Against Sexual Exploitation Act became the first federal act to criminalize child pornography.107 The Act aimed to protect minors from being filmed, photographed, or recorded in sexual acts and prohibited transporting or mailing such material for “immoral purposes.”108 However, the Act only covered materials that were considered obscene under Miller.109
The first child pornography case, New York v. Ferber, partially rectified the child pornography and obscenity issue.110 The Court ruled that production and distribution of child pornographic materials was not a right protected under the First Amendment, as the material was directly related to child abuse.111 In this case, the defendant sold an undercover officer two videos of adolescent boys masturbating, violating two laws combating the distribution of child pornography.112 The Court recognized the thin line where “laws directed at the dissemination of child pornography run the risk of suppressing protected expression by allowing the hand of the censor to become unduly heavy.”113 Still, the Court reasoned that “the prevention of the sexual exploitation and abuse of children constitutes a government objective of surpassing importance,”114 and listed five factors for allowing state legislatures more leeway in their regulation of child pornography.115
Although the Court’s reasoning for broader legal standards regarding child pornography was evident, the Court was careful to distinguish actual child pornography from other delineations of it: “We note that the distribution of descriptions or other depictions of sexual conduct, not otherwise obscene, which do not involve live performance or photographic or other visual reproduction of live performances, retains First Amendment protection.”116 The Ferber Court was only concerned with protecting children who were actual victims of abuse; it did not necessarily criminalize mediums of child pornography where no abuse occurred.117
While New York v. Ferber addressed the issues of production and distribution of child pornography, Osborne v. Ohio addressed the issue of its possession.118 The defendant was found to be in possession of four photographs, each depicting a young, naked male in a sexually explicit position, in violation of an Ohio state law.119 The defendant argued that solely possessing child pornography was constitutionally protected under the precedent of the Supreme Court case Stanley v. Georgia.120 In Stanley, the Court ruled that a Georgia law proscribing the private possession of obscene materials was unconstitutional, as the regulation of obscenity “does not extend to mere possession by the individual in the privacy of his own home.”121 Despite this, the Osborne Court maintained that the Stanley decision was narrow and that states have a constitutionally compelling interest in criminalizing the possession of child pornography.122 The Court explained, citing Ferber, that the Ohio law was enacted to protect victims of child abuse, while also attempting to attack the underground and under-regulated child pornography market.123

B. “NEW AGE” CHILD PORNOGRAPHY

When Osborne was decided in 1990 (alongside the invention of the “World Wide Web”124), the Court recognized that the growing child pornography market presented a problem with almost no possible solution.125 In the past, child pornography was disseminated mostly through tangible means, but with the newfound ease and accessibility of the internet, Congress attempted to anticipate the advent of using computers in the production and circulation of child pornography.126 The Child Pornography Prevention Act (“CPPA”) of 1996 essentially broadened the definition of child pornography to include any sexually explicit “depictions” of minors, specifically:

(A) the production of such visual depiction involves the use of a minor engaging in sexually explicit conduct;
(B) such visual depiction is, or appears to be, of a minor engaging in sexually explicit conduct;
(C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is engaging in sexually explicit conduct; or
(D) such visual depiction is advertised, promoted, presented, described, or distributed in such a manner that conveys the impression that the material is or contains a visual depiction of a minor engaging in sexually explicit conduct.127

In sum, the CPPA aimed to proscribe all versions of virtual child pornography, regardless of whether a real child was ever actually involved in its production.128 Though the Act’s congressional findings were well-intentioned, the result of its application could not pass constitutional muster.129
The CPPA remained in effect until 2002, when the case Ashcroft v. Free Speech Coalition challenged the Act on First Amendment grounds, alleging that it was overbroad and vague.130 At issue

was that the Act criminalized speech that was “neither obscene under Miller, nor child pornography under Ferber.”131 Focusing on the “virtual” aspect, the Court found that the CPPA extended its reach to speech well beyond the arm of the Miller test, in effect banning all sexually explicit materials so long as the subjects “appear to be” minors.132 The government argued that the virtual child pornography it aimed to illegalize was indistinguishable from real child pornography.133 However, the Court clarified that, contrary to the speech addressed in Ferber, there is no “intrinsic relation” between virtual child pornography and child abuse, and that any harm that could follow from it was “contingent and indirect.”134 Thus, the Court ruled that the CPPA was facially unconstitutional, as it prohibited a significant amount of protected speech: “Congress may pass valid laws to protect children from abuse, and it has… The prospect of crime, however, by itself does not justify laws suppressing protected speech.”135

C. THE PROTECT ACT AND WHAT IT FAILS TO PROTECT

In an attempt to redeem its unsuccessful attack on virtual child pornography, Congress enacted the PROTECT Act of 2003 in the wake of the Ashcroft decision.136 The applicable part of the PROTECT Act focused on the pandering and solicitation of child pornography and amended key sections of the CPPA in order to encompass the grey area of virtual child pornography.137 The PROTECT Act has appeared before the Supreme Court on only one occasion: in a 2008 case named United States v. Williams.138 The case discussed the constitutionality of the PROTECT Act under overbreadth, vagueness, and First Amendment concerns.139 Specifically, the case questioned the “scienter” requirement found in § 2252A, under which a person will be punished if he knowingly:

Reproduces… distributes… advertises, promotes, presents, distributes, or solicits through the mails…
any material or purported material in a manner that reflects the belief, or that is intended to cause
another to believe, that the material or purported material is, or contains—
i. an obscene virtual depiction of a minor engaging in sexually explicit conduct; or
ii. a visual depiction of an actual minor engaging in sexually explicit conduct.140

The knowledge requirement established that a person may be found guilty of requesting or offering to provide child pornography regardless of whether the child pornography actually exists.141 One of the main issues addressed was whether the pandering provision skirted around the Ashcroft and Ferber holdings that drew lines between protected “virtual” and proscribed “actual” child pornography.142 The dissent believed that it did; however, the majority made clear that the pandering provision did not explicitly outlaw virtual child pornography under Ashcroft, recognizing that “… the child-protection rationale for speech restriction does not apply to materials produced without children”143 because “the defendant must believe that the picture contains certain material, and that material in fact (and not merely in his estimation) must meet the statutory definition [of child pornography].”144 The Williams Court upheld the constitutionality of the PROTECT Act’s pandering provision by distinguishing that the knowledge requirement did not proscribe virtual child pornography if the offeror (or offeree) knows that the virtual child pornography contains no actual children.145

D. CURRENT DEFINITION OF CHILD PORNOGRAPHY

The PROTECT Act amended the definition of child pornography in accordance with the congressional findings of new technologies that previous definitions failed to include.146 The current definition of virtual child pornography lies within 18 U.S.C. § 2256:

(A) the production of such visual depiction involves the use of a minor engaging in sexually explicit
conduct;
(B) such visual depiction is a digital image, computer image, or computer-generated image that is, or is
indistinguishable from, that of a minor engaging in sexually explicit conduct; or
(C) such visual depiction has been created, adapted, or modified to appear that an identifiable minor is
engaging in sexually explicit conduct.147

The definition still aims to protect actual victims of child abuse, though its inclusion of virtual child pornography broadens its scope.148 Penned in 2003 and amended in 2008, § 2256 has yet to be questioned on its constitutionality.149 However, nearly 20 years later, technology has evolved tenfold, and this definition is yet again outdated. The § 2256 definition is the crux of the solutions in this Note, which proposes that AI should either be read to fall under the “computer-generated” provision, or that the definition should be amended to include a separately defined AI provision.

IV. SOLUTION I: ARTIFICIAL INTELLIGENCE SHOULD FALL UNDER THE
“COMPUTER-GENERATED” PROVISION IN 18 U.S.C. § 2256

Artificial Intelligence, as explained previously, has the ability to create media that portrays a
designated subject based on large data sets, or to create media depicting its “own” desired subject
after processing those data sets.150 DeepFakes are a new and increasingly popular application of
this ability, with products that are indistinguishable from a real image or video.151 Applied to
child pornography, AI and DeepFake technologies can create virtual child pornography in two
ways: by conglomerating hundreds, if not thousands, of images and videos of child abuse into
combined but completely new media of child abuse, or by “studying” images and videos of child
abuse to create original media of child abuse.152 In these instances, AI is creating “visual depic-
tions” of child pornography that neither feature nor contain actual abuse of an actual child.
Instead, the AI has created indistinguishable, albeit fake, media of child abuse out of data and
pixels. If not addressed, this issue will curtail the PROTECT Act’s mission to prevent the dis-
semination of virtual child pornography.153 Thus, AI must be included as a “medium” of child
pornography within § 2256.
The first proposed solution to this problem is to simply read the definition as written to include
AI. Section 2256(8)(B) states that a visual depiction of child pornography is “a digital image, computer
image, or computer-generated image that is, or is indistinguishable from, that of a minor engaging
in sexually explicit conduct.”154 The “computer-generated” provision should be interpreted to
include AI-created child pornography. First, AI is a branch of computer science.155 Second, one of
the major goals of AI research is to make “machines” (read: computers) learn, adapt, decide, and
create.156 Finally, AI essentially “lives” in a computer: “AI is simply computer code running
software.”157 With these considerations in mind, “computer-generated” could plausibly be read as
“Artificial Intelligence-generated.” This solution would not only help the Courts address this new
technology that can create virtual child pornography under the § 2256(8)(B) definition, but would
also save the Legislature the time, money, and frustration of debating what “computer-generation”
constitutes or of having to create new law that singularly addresses AI-created child pornography.
Alas, reading a specific term to encompass another similar, but fundamentally different, term is
not always as easy as it seems. It is more than likely that AI cannot be construed as “computer-
generated.” A more persuasive argument can be made that AI requires its own distinction in the
§ 2256 definition, as proposed in the second solution.
394 FAMILY COURT REVIEW

V. SOLUTION II: IF ARTIFICIAL INTELLIGENCE CANNOT FALL UNDER THE
“COMPUTER-GENERATED” PROVISION, 18 U.S.C. § 2256 SHOULD BE AMENDED TO
INCLUDE ARTIFICIAL INTELLIGENCE

For Artificial Intelligence-created child pornography to be included within the child pornography
definition, § 2256 will have to be amended to include Artificial Intelligence.

A. THE DIFFERENCE BETWEEN ARTIFICIAL INTELLIGENCE AND COMPUTER GENERATION

It is first necessary to demonstrate the differences between AI and “computer-generation.” AI,
although it does “work” as code within computer software, is better likened to a series of algo-
rithms than to a physical “computer.”158 AI is also a science that consists of much more than
basic “image-generation.”159 Further, the defining aspect of AI is its ability to function without
human input or interaction,160 a stark contrast to computers, which are more or less programma-
ble devices.161
The term “computer-generation” can be further differentiated from AI. Within § 2256,
subsection (6) defines “computer” by reference to 18 U.S.C. § 1030, where:

The term “computer” means an electronic, magnetic, optical, electrochemical, or other high speed data
processing device performing logical, arithmetic, or storage functions, and includes any data storage
facility or communications facility directly related to or operating in conjunction with such device…162

On its face, the statutory definition of a computer that § 2256 employs clearly does not embody
AI technologies. In regard to “computer-generation” specifically, the congressional findings of the
PROTECT Act indicated the ways “computer-generation” was used:

(A) computer generate depictions of children that are indistinguishable from depictions of real children;
(B) use parts of images of real children to create an image that is unidentifiable as a particular child and in
a way that prevents even an expert from concluding that parts of images of real children were used; or
(C) disguise pictures of real children being abused by making the image look computer generated.163

It is evident from the findings that Congress was very much concerned with the ability of technol-
ogy to “computer generate realistic images of children.”164 However, “computer-generation” in
2003 (and as it is still defined today) pertains primarily to the technology used to create 3-D
images, special effects in movies and TV, graphic models, and computer animation, all of which
require direct human input, control, and creativity.165 AI as it is recognized today, as well as
DeepFakes, could not have been contemplated or anticipated to fall under the interpretation of
“computer-generation.” AI and DeepFakes are not “computer-generated images” as envisioned in
Section 501; rather, they are separate types of technology that can create without human control.
It therefore cannot be said that AI qualifies under the “computer-generated” provision of § 2256(8).

B. THE NECESSITY OF REQUIRING A SEPARATE DEFINITION OF ARTIFICIAL INTELLIGENCE

Artificial Intelligence needs its own separate provision within § 2256(8) as well as its own statu-
tory definition. The definition already includes the “mediums” of child pornography,
i.e. “photograph, film, video, picture, or computer or computer-generated image or picture… digital
image, computer image, or computer-generated image,”166 so an Amendment adding the term “Arti-
ficial Intelligence-generated” is not far-fetched. An Amendment that adds in an AI provision would
also clear up any “computer-generation” vs. AI defenses before they occur, as well as help prepare

the Courts and Legislature for future technologies that involve AI. The Congressional findings of
the PROTECT Act expound that the changes from the CPPA were made explicitly because of
technological advancements; it would therefore be unfounded not to recognize the astronomical
advancements since the Act’s creation in 2003.167
Further, the Amendment should also include a statutory definition of AI and of what constitutes
“Artificial Intelligence-generation” within the definition of child pornography. Section 2256 plainly
defines all the terms in the Act that could be construed differently from the Legislative
intent.168 AI is considered the technology of the future, and there are countless ways for the technol-
ogy to be utilized.169 Unambiguously defining its application to creating “indistinguishable”
versions of child pornography, as distinct from AI technology generally, is necessary to protect
the Act from further First Amendment attacks.170 In sum, an explanatory definition of what AI is
and how it can create virtual child pornography is essential.171

VI. COUNTER-ARGUMENTS

A. THE PROTECT ACT ALREADY COVERS CHILD PORNOGRAPHY CREATED BY
ARTIFICIAL INTELLIGENCE

It could be argued that the PROTECT Act was written broadly enough to embrace all mediums of
virtual child pornography, regardless of the “computer-generation” provision. This argument’s
foundation lies in United States v. Williams, where the Court explained that because of the scienter
requirement, real, virtual (and in rare cases, “no”) child pornography all fall within reach of the
Act if the material is known or pandered to contain actual children and actual child abuse.172
However, the dissent makes clear that this requirement is faulty when prohibiting virtual child
pornography that does not involve actual minors; if there is any doubt whether the media actually
contains real children, then there may be no illegality.173
The PROTECT Act is focused solely on knowingly pandering and soliciting child pornography;
it does not prohibit the actual production of virtual child pornography.174 For example, AI child
pornography could be pandered as just that: “AI child pornography.” The offeror and offeree
would then both have knowledge that the media contained no actual children or actual abuse, thus
circumventing the law. This hypothetical creates an issue: it may not matter to child pornographers
whether the media is “real” if the AI child pornography is that realistic.175 Accordingly, the
necessity of defining AI child pornography within the definition of child pornography is obvious
in order to hinder such pandering.

B. CHILD PORNOGRAPHY CREATED BY ARTIFICIAL INTELLIGENCE IS NOT “REAL”
CHILD PORNOGRAPHY

Since AI is able to create its own original images and videos, and even its own “humans,”176 it
can be argued that AI child pornography is not “real” child pornography under any definition.
In response, the “indistinguishable” requirement reinforces the idea that AI and DeepFake child
pornography are indiscernible from real child pornography:

[U]sed with respect to a depiction, means virtually indistinguishable, in that the depiction is such that an
ordinary person viewing the depiction would conclude that the depiction is of an actual minor engaged
in sexually explicit conduct.177

Here, it is evident that the “ordinary person” standard would consider AI child pornography as
“real” child pornography with an actual child. However, when coupled with the “identifiable minor”
provision, this interpretation changes:

“identifiable minor” means a person:
i. who was a minor at the time the visual depiction was created, adapted, or modified; or
ii. whose image as a minor was used in creating, adapting, or modifying the visual depiction; and
iii. who is recognizable as an actual person by the person’s face, likeness, or other distinguishing
characteristic, such as a unique birthmark or other recognizable feature.178

With the ability of AI to create media organically, an “identifiable minor” may never actually be
“used” in the creation of AI child pornography. Section 2252A also provides an affirmative defense
to the PROTECT Act if no child was used in the material’s production.179 In this case, it would be
even more necessary to have a definition of “Artificial Intelligence-generated” in order to regulate
child pornography that does not exploit actual children or actual child abuse.180

VII. CONCLUSION

Artificial Intelligence is a rapidly growing area of technology that will soon be intertwined with
every aspect of a person’s life. Unfortunately, new technology such as this can be used
uninhibitedly when there are no laws or regulations to govern it. In the area of child pornography,
Artificial Intelligence has the ability to create indistinguishable versions of such material without
ever featuring actual abuse of an actual child. Under the current definition of child pornography in
18 U.S.C. § 2256, there is conflict as to whether “Artificial Intelligence-generated” child pornogra-
phy would fall under the statute. As a remedy, this Note proposed that either Artificial
Intelligence should fall under the “computer-generated” terminology found in 18 U.S.C. § 2256
(8), or, if Artificial Intelligence cannot be considered to fall within the “computer-generated” provi-
sion, then 18 U.S.C. § 2256(8) should be amended to include “Artificial Intelligence-generation.”
These solutions aim to provide a foundation for incorporating new technologies into current law in
hopes of thwarting the production of child pornography.

ENDNOTES

1. Leslie Katz, Meet ‘Sweetie,’ a Virtual Girl Created to Target Child Predators, CNET (Nov. 5, 2013, 4:52 pm PST),
https://www.cnet.com/news/meet-sweetie-a-virtual-girl-created-to-target-child-predators/.
2. Id.
3. Id.
4. Id.
5. Id.
6. Sweetie 2.0: Stop Webcam Child Sex, TERRE DES HOMMES https://www.terredeshommes.nl/en/programmes/sweetie-20-
stop-webcam-child-sex (last visited Sept. 22, 2019).
7. Id.
8. Katz, supra note 1.
9. See Samantha Cole, AI-Assisted Fake Porn is Here and We’re All Fucked, VICE: MOTHERBOARD (Dec. 11, 2017,
2:18 pm), https://www.vice.com/en_us/article/gydydm/gal-gadot-fake-ai-porn.
10. Id.
11. Lindsay Rowntree, How AI Is Learning to Create Computer-Generated Imagery, EXCHANGEWIRE (Sept. 12, 2017),
https://www.exchangewire.com/blog/2017/09/12/ai-learning-create-computer-generated-imagery/.
12. Id.
13. Artificial Intelligence, MERRIAM-WEBSTER, https://www.merriam-webster.com/dictionary/artificial%20intelligence (last
visited Sept. 22, 2019).
14. Artificial Intelligence, BUILT IN, https://builtin.com/artificial-intelligence (last visited Dec. 3, 2019).
15. See Dom Galeon, Google’s Artificial Intelligence Built an AI That Outperforms Any Made by Humans, FUTURISM
(Dec. 1, 2017), https://futurism.com/google-artificial-intelligence-built-ai. See also Bernard Marr, Artificial Intelligence Can
Now Generate Amazing Images – What Does This Mean for Humans? FORBES (Apr. 15, 2019 12:23 am), https://www.forbes.
com/sites/bernardmarr/2019/04/15/artificial-intelligence-can-now-generate-amazing-images-what-does-the-mean-for-humans/
#46890a015077.
16. See Cole, supra note 9.

17. John Brandon, Terrifying High-Tech Porn: Creepy ‘DeepFake’ Videos are on the Rise, FOX NEWS (Feb. 16, 2018),
https://www.foxnews.com/tech/terrifying-high-tech-porn-creepy-deepfake-videos-are-on-the-rise.
18. Cole, supra note 9.
19. Child Pornography, THE UNITED STATES DEPT. OF JUSTICE, https://www.justice.gov/criminal-ceos/child-pornography
(last updated July 25, 2017).
20. Michael H. Keller and Gabriel J.X. Dance, The Internet is Overrun with Images of Child Sexual Abuse. What Went
Wrong?, THE NEW YORK TIMES (Sept. 28, 2019), https://www.nytimes.com/interactive/2019/09/28/us/child-sex-abuse.html.
21. Id.
22. Child Sexual Abuse Material, THORN, https://www.thorn.org/child-pornography-and-abuse-statistics/ (last visited
Sept. 22, 2019).
23. See Id. See also Keller and Dance, supra note 20.
24. See Jeff Loucks, Susanne Hupfer, David Jarvis, and Timothy Murphy, Future in the Balance? How Countries are
Pursuing an AI Advantage, DELOITTE (May 1, 2019), https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/
ai-investment-by-country.html/.
25. See 18 U.S.C. § 2251 (2003); S.151, 108th Cong. § 501 (2003).
26. Andrew Hodges, The Turing Test, 1950, UNIVERSITY OF OXFORD, https://www.turing.org/uk/scrapbook/test.html (last
visited Dec. 3, 2019).
27. B.J. Copeland, Alan Turing, ENCYCLOPEDIA BRITANNICA, https://www.britannica.com/biography/Alan-
Turing#ref214879 (last visited Dec. 3, 2019).
28. Turing Test, ENCYCLOPEDIA BRITANNICA, https://www.britannica.com/technology/Turing-test (last visited Dec. 3, 2019).
29. B.J. Copeland, Artificial Intelligence, ENCYCLOPEDIA BRITANNICA, https://www.britannica.com/technology/artificial-
intelligence/The-Turing-test (last updated Nov. 19, 2019).
30. Id.
31. Karla Adam, Alan Turing, A Founding Father of Computer Science, Revealed as New Face of British 50-Pound Note,
THE WASHINGTON POST (July 15, 2019, 2:27 pm), https://www.washingtonpost.com/world/europe/alan-turing-a-founding-
father-of-computer-science-revealed-as-new-face-of-british50-pound-note/2019/07/15/96a1e46a-a6ff-11e9-86dd-
d7f0e60391e9_story.html.
32. See MERRIAM-WEBSTER supra note 13.
33. Copeland, supra note 29.
34. The Exponential Guide to Artificial Intelligence, SINGULARITY UNIVERSITY, https://su.org/resources/exponential-guides/
the-exponential-guide-to-artificial-intelligence/ (last visited Dec. 13, 2019).
35. Id.
36. Darrell M. West, What is Artificial Intelligence?, BROOKINGS (Oct. 4, 2018), https://www.brookings.edu/research/
what-is-artificial-intelligence/.
37. SINGULARITY UNIVERSITY, supra note 34.
38. See Copeland, supra note 29.
39. SINGULARITY UNIVERSITY, supra note 34.
40. Ankit Rathi, The Impact of Artificial Intelligence, MEDIUM: TOWARDS DATA SCIENCE (Sept. 23, 2019), https://
towardsdatascience.com/the-impact-of-artificial-intelligence-8615d1d9b7ac.
41. See Megan Wollerton, Alexa, Google Assistant, and Siri Will Get Smarter This Year. Here’s How, CNET (June 7,
2019), https://www.cnet.com/news/alexa-vs-google-assistant-vs-siri-the-state-of-voice-after-google-io-and-wwdc-2019/.
42. Jo Best, IBM Watson: The Inside Story of How the Jeopardy-Winning Supercomputer was Born, and What it Wants to
do Next, TECH REPUBLIC (Sept. 9, 2013), https://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-
jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/.
43. April Glaser, Facebook’s Face-ID Database Could Be the Biggest in the World. Yes, It Should Worry Us, SLATE (July
9, 2019), https://slate.com/technology/2019/07/facebook-facial-recognition-ice-bad.html.
44. Id.
45. Id.
46. SINGULARITY UNIVERSITY, supra note 34.
47. Id.
48. Karen Hao, What is Machine Learning? MIT TECHNOLOGY REVIEW (Nov. 17, 2018), https://www.technologyreview.
com/s/612437/what-is-machine-learning-we-drew-you-another-flowchart/.
49. SINGULARITY UNIVERSITY, supra note 34.
50. Claudio Masolo, Supervised, Unsupervised, and Deep Learning, MEDIUM: TOWARDS DATA SCIENCE (May 7, 2017),
https://towardsdatascience.com/supervised-unsupervised-and-deep-learning-aa61a0e5471c.
51. Id.
52. Hao, supra note 48.
53. James Furbush, Machine Learning: A Quick and Simple Definition, O’REILLY (May 3, 2018), https://www.oreilly.com/
content/machine-learning-a-quick-and-simple-definition.
54. Masolo, supra note 50.
55. Id.
56. Id.

57. Id. See also A Beginners Guide to Neural Networks and Deep Learning, PATHMIND: AI WIKI, https://pathmind.ai/wiki/
neural-network (last visited, Dec. 4, 2019).
58. Chris Meserole, What is Machine Learning? BROOKINGS (Oct. 4, 2018), https://www.brookings.edu/research/
what-is-machine-learning/. (“[I]magine you wanted to build an algorithm to detect whether an image contained a human
face. A basic deep neural network would have several layers of thousands of neurons each. In the first layer, each neuron
might learn to look for one basic shape, like a curve or a line. In the second layer, each neuron would look at the first
layer, and learn to see whether the lines and curves it detects ever make up more advanced shapes, like a corner or a cir-
cle. In the third layer, neurons would look for even more advanced patterns, like a dark circle inside a white circle, as
happens in the human eye. In the final layer, each neuron would learn to look for still more advanced shapes, such as
two eyes and a nose. Based on what the neurons in the final layer say, the algorithm will then estimate how likely it is
that an image contains a face.”).
59. Id.
60. SINGULARITY UNIVERSITY, supra note 34.
61. SINGULARITY UNIVERSITY, supra note 34.
62. SINGULARITY UNIVERSITY, supra note 34.
63. See SINGULARITY UNIVERSITY, supra note 34.
64. Glaser, supra note 43 (Containing photos and videos from childhood, with different haircuts or tattoos, from various
angles, etc.).
65. asarafraz, DeepFace: Facebook’s Face Verification Algorithm, COMPUT. VISION ONLINE, https://computervisiononline.
com/blog/deepface-facebooks-face-verification-algorithm (last visited, Dec. 4, 2019).
66. SINGULARITY UNIVERSITY, supra note 34.
67. Bernard Marr, Artificial Intelligence Can Now Generate Amazing Images – What Does This Mean for Humans? FOR-
BES (Apr. 15, 2019, 12:23 AM), https://www.forbes.com/sites/bernardmarr/2019/04/15/artificial-intelligence-can-now-
generate-amazing-images-what-does-the-mean-for-humans/.
68. See Id. See also Silke Otte, How Does Artificial Intelligence Work? INNOPLEXUS, https://www.innoplexus.com/blog/
how-artificial-intelligence-works (last visited Dec. 4, 2019).
69. Marr, supra note 67.
70. Id.
71. Id.
72. Id.
73. Id.
74. Id.
75. Id.
76. Tony Bradley, Facebook AI Creates its Own Language in Creepy Preview of Our Potential Future FORBES (July
31, 2017, 11:20 AM), https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-
preview-of-our-potential-future/#1159a924292c.
77. See Galeon, supra note 15.
78. Bernard Marr, AI Can Now Create Artificial People – What Does That Mean for Humans? FORBES (July 12, 2019,
12:20 AM), https://www.forbes.com/sites/bernardmarr/2019/07/12/ai-can-now-create-artificial-people-what-does-that-mean-
humans/.
79. See Cole, supra note 18.
80. Grace Shao, What DeepFakes Are and How They May Be Dangerous, CNBC: TECH (Oct. 13, 2019, 9:40 PM),
https://www.cnbc.com/2019/10/14/what-is-deepfake-and-how-it-might-be-dangerous.html.
81. Id.
82. Id. See also What is a Deepfake? ECONOMIST (Aug. 17, 2019), https://www.economist.com/the-economist-explains/
2019/08/07/what-is-a-deepfake.
83. See Cole, supra note 18.
84. Id.
85. See J.M. Porup, How and Why DeepFake Videos Work – And What is at Risk, CSO (Apr. 10, 2019, 3:00 AM),
https://www.csoonline.com/article/3293002/deepfake-videos-how-and-why-they-work.html. See also Samantha Cole, There is
No Tech Solution to DeepFakes, VICE: MOTHERBOARD (Aug. 14, 2018, 1:26 PM), https://www.vice.com/en_us/article/594qx5/
there-is-no-tech-solution-to-deepfakes.
86. Joseph Cox, Most DeepFakes Are Used for Creating Non-Consensual Porn, Not Fake News, VICE: MOTHERBOARD
(Oct. 7, 2019, 8:47 AM), https://www.vice.com/en_us/article/7x57v9/most-deepfakes-are-porn-harassment-not-fake-news.
87. Id.
88. Id. (Of those pornographic DeepFakes, 99% of the subjects featured were women.).
89. Samantha Cole, Twitter is the Latest Platform to Ban AI-Generated Porn, VICE: MOTHERBOARD (Feb. 6, 2018,
6:12 PM), https://www.vice.com/en_us/article/ywqgab/twitter-bans-deepfakes. See also Samantha Cole, Reddit Just Shut
Down the DeepFakes Subreddit, VICE: MOTHERBOARD (Feb. 7, 2018, 1:35 PM), https://www.vice.com/en_us/article/neqb98/
reddit-shuts-down-deepfakes.
90. See Samantha Cole, We Are Truly Fucked: Everyone is Making AI-Generated Fake Porn Now, VICE: MOTHERBOARD
(Jan. 24, 2018, 1:13 PM), https://www.vice.com/en_us/article/bjye8a/reddit-fake-porn-app-daisy-ridley.

91. Samantha Cole, Creator of DeepNude, App That Undresses Photos of Women, Takes it Offline, VICE: MOTHERBOARD
(June 27, 2019, 3:03 pm), https://www.vice.com/en_us/article/qv7agw/deepnude-app-that-undresses-photos-of-women-takes-
it-offline.
92. Id.
93. Id.
94. Samantha Cole, There Is No Tech Solution to DeepFakes, VICE: MOTHERBOARD (Aug. 14, 2018, 1:26 PM), https://
www.vice.com/en_us/article/594qx5/there-is-no-tech-solution-to-deepfakes.
95. Id.
96. Samantha Cole, Fake Porn Makers Are Worried About Accidentally Making Child Porn, VICE: MOTHERBOARD (Feb.
27, 2018, 11:39 AM), https://vice.com/en_us/article/evmkxa/ai-fake-porn-deepfakes-child-pornography-emma-watson-elle-
fanning.
97. Id.
98. Id.
99. Prosecutorial Remedies and Other Tools to End the Exploitation of Children Today Act (“PROTECT”) of 2003,
18 U.S.C. § 2251 (2003); S.151, 108th Cong. (2003).
100. 18 U.S.C. § 2251 (2003); See S. 151, 108th Cong. § 503.
101. Jasmine V. Eggestein & Kenneth J. Knapp, Fighting Child Pornography: A Review of Legal and Technological
Developments, J. DIG. FORENSICS, SEC. & L. 29, 31 fig. 1 (2014).
102. U.S. Const. amend. 1.
103. See Miller v. California, 413 U.S. 15 (1973); New York v. Ferber, 458 U.S. 747 (1982); Osborne v. Ohio, 495 U.S.
103 (1990); Stanley v. Georgia, 394 U.S. 557 (1969).
104. See Miller, 413 U.S. at 19.
105. Id. at 24.
106. David L. Hudson Jr. Pornography & Obscenity, FREEDOM F. INST., https://www.freedomforuminstitute.org/first-
amendment-center/topics/freedom-of-speech-2/adult-entertainment/pornography-obscenity/ (July, 2009).
107. Eggestein & Knapp, supra note 101, at 32.
108. H.R. 9357, 95th Cong. (1978), https://www.congress.gov/bill/95th-congress/house-bill/9357.
109. Eggestein & Knapp, supra note 101, at 32.
110. New York v. Ferber, 458 U.S. 747 (1982).
111. Eggestein & Knapp, supra note 101, at 33.
112. Ferber, 458 U.S. at 752.
113. Id. at 756.
114. Id. at 757.
115. Id. at 757–764 ((1) there is an evident compelling State interest in “safeguarding the physical and psychological
well-being of a minor,” (2) the distribution of child pornography is related directly to child abuse, (3) there is an economic
motive in the advertisement and sales of child pornography that is integral to its production, (4) there is a de minimis value
for the allowance of live performances and photographic reproductions of minors engaged in sex acts, and (5) the recognition
that child pornography is material held outside of the First Amendment is not inconsistent with the Court’s earlier
decisions.).
116. Ferber, 458 U.S. at 764–765.
117. Ferber, 458 U.S. at 761; Ashcroft v. Free Speech Coal., 535 U.S. 234, 249 (2002) (“The production of the work,
not its content, was the target of the statute.”). See also Carl S. Kaplan, Supreme Court Set to Consider ‘Virtual’ Child Por-
nography, N.Y. TIMES (Oct. 19, 2001), https://www.nytimes.com/2001/10/19/technology/supreme-court-set-to-consider-
virtual-child-pornography.html.
118. Osborne v. Ohio, 495 U.S. 103 (1990).
119. Id. at 107.
120. Id. at 108.
121. Stanley v. Georgia, 394 U.S. 557, 568 (1969).
122. Osborne, 495 U.S. at 108, 111.
123. Id. at 110–111.
124. History of the Web, WORLD WIDE WEB FOUNDATION, https://webfoundation.org/about/vision/history-of-the-web/ (last
visited Jan. 13, 2020).
125. Osborne, 495 U.S. at 111.
126. Jacques N. Catudal, Censorship, the Internet, and the Child Pornography Law of 1996: A Critique, ETHICS AND INFO.
TECH., 108 (1999).
127. Child Pornography Prevention Act of 1996, 18 U.S.C. § 2256 (1996); H.R. 4123, 104th Cong. § 3 (2nd Sess. 1996)
(“Child pornography means any visual depiction, including any photograph, film, video, picture, or computer or computer-
generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually explicit
conduct…”).
128. Ashcroft v. Free Speech Coal., 535 U.S. 234, 239 (2002).
129. See Ashcroft, 535 U.S. at 239. See also H.R. 4123, 104th Cong. § 2 (1996).
130. Ashcroft, 535 U.S. at 239, 243.

131. Id. at 240.


132. Id. at 249.
133. Id. at 247.
134. Id. at 250.
135. Id. at 244, 245.
136. Prosecutorial Remedies and Other Tools to End the Exploitation of Children Today (PROTECT) Act, 18 U.S.C. §
2251 (2003); S.151, 108th Cong. (as passed by Senate, Apr. 30, 2003).
137. Fact Sheet: Protect Act, DEPT. OF JUSTICE (Apr. 30, 2003), https://www.justice.gov/archive/opa/pr/2003/April/03_ag_
266.htm.
138. United States v. Williams, 553 U.S. 285 (2008).
139. Id.
140. 18 U.S.C. § 2252(a)(3) (2003).
141. Williams, 553 U.S. at 293.
142. Id. at 303.
143. Id. at 289.
144. Id. at 301.
145. Id. at 303 (“An offer to provide or request to receive virtual child pornography is not prohibited by the statute. A
crime is committed only when the speaker believes or intends the listener to believe that the subject of the proposed transac-
tion depicts real children… Simulated child pornography will be as available as ever, so long as it is offered and sought as
such, and not as real child pornography.”).
146. 18 U.S.C. § 2251 (2003); S.151, 108th Cong. § 501 (2003).
147. 18 U.S.C. § 2256(8) (2020) (“Any visual depiction, including any photograph, film, video, picture, or computer, or
computer-generated image or picture, whether made or produced by electronic, mechanical, or other means, of sexually
explicit conduct…”).
148. See S.151, 108th Cong. § 501 (2003).
149. 18 U.S.C. § 2256(8).
150. See Marr supra note 67. See also Cade Metz & Keith Collins, How A.I. ‘Cat and Mouse Game’ Generates Believ-
able Fake Photos, N.Y. TIMES (Jan. 2, 2018), https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-
photos.html.
151. See Marr supra note 67 & 78.
152. See Marr supra note 67 & 78. See also Matthew Gault, This Website Uses AI to Generate the Faces of People Who
Do not Exist, VICE (Feb. 14, 2019), https://www.vice.com/en_us/article/7xn4wy/this-website-uses-ai-to-generate-the-
faces-of-people-who-don’t-exist.
153. See 18 U.S.C. § 2251 (2003); S.151, 108th Cong. (2003).
154. 18 U.S.C. § 2256(8)(B).
155. See MERRIAM-WEBSTER supra note 13.
156. SINGULARITY UNIVERSITY, supra note 34.
157. Id.
158. Id.
159. Id.
160. Id.
161. Computer Hope, Computer, https://www.computerhope.com/jargon/c/computer.htm (Dec. 30, 2019).
162. 18 U.S.C. § 1030(e)(1) (2020).
163. 18 U.S.C. § 2251 (2003); S.151, 108th Cong. § 501 (2003) (“Evidence submitted to the Congress, including from
the National Center for Missing and Exploited Children, demonstrates that technology already exists to disguise depictions of
real children to make them unidentifiable and to make depictions of real children appear computer-generated. The technology
will soon exist, if it does not already, to computer generate realistic images of children.”).
164. 18 U.S.C. § 2251; S.151, 108th Cong. § 501 (2003).
165. Ralph Huchtemann, Concept of Computer Generated Images and Their Application, https://www.streetdirectory.com/travel_guide/140579/computer/concept_of_computer_generated_images_and_their_application.html/ (last visited Jan. 14, 2019).
166. 18 U.S.C. § 2256.
167. 18 U.S.C. § 2251 (2003); S.151, 108th Cong. § 501 (2003).
168. 18 U.S.C. § 2256.
169. See DELOITTE supra note 24; SINGULARITY UNIVERSITY, supra note 34.
170. See Ashcroft v. Free Speech Coal., 535 U.S. 234 (2002).
171. Furthermore, a legislative report on Artificial Intelligence and/or DeepFakes’ ability to create child pornography
may also be required to properly define and explain the technology in the amendment.
172. Williams, 553 U.S. at 300.
173. Id. at 317 (“[I]n the proposed transaction in an identified pornographic image without the showing of a real child;
no matter what the parties believe, and no matter how exactly a defendant’s actions conform to his intended course of conduct
in completing the transaction he has in mind, if there turns out to be reasonable doubt that a real child was used to make the
photos, or none was, there could be, respectively, no conviction and no crime.”).
174. Id. at 310–11 (“The Act responds by avoiding any direct prohibition of transactions in child pornography, when no
actual minors may be pictured; instead, it prohibits proposals for transactions in pornography when a defendant manifestly
believes or would induce belief in a prospective party that the subject of an exchange or exhibition is or will be an actual
child, not an impersonated, simulated or “virtual” one, or the subject of a composite created from lawful photos spliced
together.”).
175. In this instance, the constitutionality of the PROTECT Act may be called into question again if child pornographers
are able to evade its protection. See Williams, 553 U.S. at 293, 323 (“Still, if I were convinced there was a real reason for the
Government’s fear stemming from computer simulation, I would be willing to reexamine Ferber. Conditions can change, and
if today’s technology left no other effective way to stop professional and amateur pornographers from exploiting children
there would be a fair claim that some degree of expressive protection had to yield to protect children.”).
176. See supra note 152.
177. 18 U.S.C. § 2256(11) (“This definition does not apply to depictions that are drawings, cartoons, sculptures, or paint-
ings depicting minors or adults.”).
178. 18 U.S.C. § 2256(9)(A).
179. 18 U.S.C. § 2252A(c).
180. See 18 U.S.C. § 2251; S.151, 108th Cong. § 501 (2003) (“The difficulties in enforcing the child pornography laws
will continue to grow increasingly worse. The mere prospect that the technology exists to create composite or computer-
generated depictions that are indistinguishable from depictions of real children will allow defendants who possess images of
real children to escape from prosecution; for it threatens to create a reasonable doubt in every case of computer images even
when a real child was abused. This threatens to render child pornography laws that protect real children unenforceable.”).
Claudia Ratner grew up in the beach town of Lewes, Delaware, where she developed an intense curiosity about the arts,
technology, and the world outside her small community. She graduated from the University of South Carolina, where she
majored in visual communications and minored in photography. She took her artistic and creative abilities to
New York, where she attends the Maurice A. Deane School of Law at Hofstra University. She will graduate in May
2021 with a completed concentration in Intellectual Property. Claudia is interested in the intersection of technology,
art, and the law, focusing on emerging technologies, like artificial intelligence, and their effects on society through a
legal lens.