AI in 2024: Monitoring New Regulation and Staying in Compliance With Existing Laws

Key Points

– AI regulation, including the EU's AI Act, is taking shape and may begin to affect nearly all industries.
– Expect more litigation over alleged copyright infringement in the training and use of AI systems.
– AI technology will increasingly generate cybersecurity challenges and privacy risks that companies utilizing AI systems must manage.
– The use of AI in employment decisions will be circumscribed by employment-related laws.
– As companies integrate AI into products and processes, robust internal governance policies will be needed to manage risks related to the use of proprietary and confidential information, as well as of customer and employee personal information for AI input, for example.

Contributing Partners
Ken D. Kumayama / Palo Alto
Michael E. Leiter / Washington, D.C.
William Ridgway / Chicago
Resa K. Schlossberg / New York
David E. Schwartz / New York
David A. Simon / Washington, D.C.

Counsel
Pramode Chiruvolu / Palo Alto
Eve-Christie Vermynck / London

Associate
Lisa V. Zivkovic / New York

This article is from Skadden's 2024 Insights. This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.

To stay in compliance with existing rules across jurisdictions and prepare for new ones in the making, companies will need to monitor regulatory developments and consider whether to submit comments or otherwise be involved in legislative or regulatory rulemaking on issues affecting their interests.

1. Upcoming AI Legislation

Governments across the world have only just begun to draft and pass laws tailored to AI technology. Heading into 2024, we expect both sector-specific and broader, omnibus AI regulations to impact nearly all industries as the use of AI expands.

European Union

The EU set rules for the use of AI, namely the EU Artificial Intelligence Act (EU AI Act) and the Artificial Intelligence Liability Directive (AILD).

EU AI Act. On December 8, 2023, EU policymakers reached a deal on the EU AI Act following three days of marathon negotiations. The EU AI Act still needs to go through final steps before it becomes law, but the political accord means that its key parameters have been set. The provisional agreement provides that the EU AI Act will apply two years after its entry into force, with some exceptions for specific provisions. The draft law needs the approval of the Council of the European Union and the European Parliament, which comprises representatives from the 27 countries in the EU.

The legislation, which would apply to providers placing AI systems in the EU market, would take a "risk-based" approach, classifying and regulating systems based on their risk levels. For example, new provisions were introduced to account for general purpose AI systems, to require specific transparency obligations for foundational models before being placed on the market, and to impose a stricter regime for high-impact foundational models.

The draft law also deems risks associated with certain use cases unacceptable, banning, for example, the scraping of faces from the internet or security footage to create facial recognition databases; emotion recognition in the workplace and educational institutions; cognitive behavioral manipulation; social scoring; biometric categorization to infer sensitive data, such as sexual orientation or religious beliefs; and certain cases of predictive policing for individuals. However, the draft law also gives broad exemptions for "open-source models."

The draft law specifies that the scope of the law shall not affect member states' competences in national security (or any entity entrusted with tasks in this area) or apply to AI systems used solely for the purpose of research and innovation.

AILD. In parallel, the EU is amending its product liability regime, updating the existing Product Liability Directive and adopting the new AILD to harmonize civil liability for AI among EU member states.

The amendments would address strict and fault-based liability regimes to provide recourse to EU citizens for defective or harmful AI products. Importantly, the AILD would establish a rebuttable presumption of a causal link between the fault of the provider of AI systems and the output of those systems.

These regimes will not only affect companies looking to launch and use AI models in the EU but may also help shape future legislation by EU member states and other jurisdictions.

United Kingdom

In contrast, the U.K. has taken an incremental, sector-led approach to AI regulation, as reflected in its March 2023 white paper. The U.K. government has undertaken consultations and invited feedback from the AI industry to guide its regulation of AI practices. In the coming year, it is expected to share high-level guidance and an initial regulatory road map with its sector-specific regulators. These regulators will then provide tailored recommendations for the financial, health care, competition and employment sectors. At that point, the U.K. government will assess whether specific AI regulation, or an AI regulator, is required, which will inform the business practices of companies placing AI systems in the U.K. in 2024.

United States

The U.S. has yet to adopt a comprehensive AI law. However, in October 2023, the Biden administration issued a sweeping executive order (AI Executive Order) that directed various U.S. government departments and agencies to evaluate the safety and security of AI technology and other associated risks, and implement processes and procedures regarding the adoption and use of AI. Federal and state agencies as well as lawmakers have also shown significant interest in regulating AI technology.

AI Executive Order. The executive order set deadlines for agencies and regulators, and proposed to impose obligations on companies to test and report on AI systems. It followed a suite of AI policies that the White House announced earlier in 2023. (For more on the executive order, see our November 3, 2023, client alert "Biden Administration Passes Sweeping Executive Order on Artificial Intelligence.") Heading into the new year, we expect accelerated efforts to enact AI regulation, particularly in light of the AI Executive Order, both at the federal and state levels.

CPPA's proposed regulations on automated decision-making. The California Privacy Protection Agency (CPPA) recently issued an initial draft of regulations governing the use of automated decision-making technology (ADMT) under the California Consumer Privacy Act (CCPA). The draft regulations broadly define ADMT as any system, software or process that handles the personal information of California residents and uses computation as a whole or part of a system to make or execute a decision or facilitate human decision-making.

The draft regulations propose granting consumers (including employees and business-to-business contacts) the right to receive pre-use notice regarding the use of ADMT and to opt out of certain automated decision-making activities. The CPPA is expected to begin the formal rulemaking process in 2024.

Copyright Office and Patent and Trademark Office. The two agencies have issued enforcement actions and guidance on various AI-related matters, taking the position that a sufficient degree of human input is needed to qualify for protections under copyright and patent law. Subsequent litigation has affirmed these positions. (See our August 28, 2023, client alert "District Court Affirms Human Authorship Requirement for the Copyrightability of Autonomously Generated AI Works.") We expect guidance from the two agencies to evolve as applicants seek to register new works and inventions that incorporate AI in 2024.

SEC's proposed AI rules. In July 2023, the Securities and Exchange Commission (SEC) proposed broad new rules to address conflicts of interest that the SEC believes are posed by the use of AI and other types of analytical technologies by broker-dealers and investment advisers. We expect the final rules to be released in 2024. (See our August 10, 2023, client alert "SEC Proposes New Conflicts of Interest Rule for Use of AI by Broker-Dealers and Investment Advisers.")

Other Jurisdictions

Beyond the EU and U.S., more than 37 countries — including China, India and Japan — have proposed AI-related legal frameworks.


– In August 2023, the International Association of Privacy Professionals published a list of AI legislation introduced around the world.
– In October 2023, the United Nations unveiled an AI advisory board aimed at creating global agreements on how to govern AI systems. The board plans to release final recommendations by mid-2024, which may influence worldwide regulatory efforts.
– On November 1, 2023, representatives from the EU, U.S., U.K., China and 25 other countries signed the Bletchley Declaration, largely echoing the statements of numerous national and international organizations in recognizing the importance of trustworthy AI and the potential dangers of general-purpose AI models. The declaration suggests international cooperation and an inclusive global dialogue that recognizes varying approaches based on national circumstances.

2. Current and Future Litigation

Copyright Infringement

Generative AI models, including ChatGPT and Google Bard, are generally trained on vast amounts of content and data. These may include copyrighted works extracted from publicly available websites. A number of content creators and owners have filed suits claiming that use of this content infringes the rights, including copyrights, of third parties.

Data compilers and model trainers have argued that their activities constitute "fair use" under copyright law. These cases are just beginning to work their way through the courts.

Cases to watch include:

– Thomson Reuters v. ROSS Intelligence: Legal publisher alleges infringement of copyrights in Westlaw material used to train an AI-based competitor.
– Getty Images v. Stability AI: Image licensor alleges infringement of copyrights in over 12 million photos and their captions used to build and promote the text-to-image generative AI systems Stable Diffusion and DreamStudio.
– Authors Guild v. OpenAI Inc.: Novelists in putative class action accuse OpenAI of using their works without permission to train the AI models powering ChatGPT.
– Tremblay/Silverman v. OpenAI Inc.: Two plaintiff groups allege that OpenAI infringed their copyrighted novels to train the AI models powering ChatGPT.
– Doe v. GitHub Inc.: Software developers allege that the AI-based coding tools OpenAI Codex and GitHub Copilot infringed rights and violated licensing terms relating to public code the developers published on GitHub.

3. Cybersecurity and Privacy Risks

Cybersecurity

AI technology has the capability to expand the arsenal of bad actors to carry out sophisticated cyberattacks (e.g., large language models can be used to write malicious code, engineer advanced phishing attacks or more effectively spread malware and ransomware). In addition, AI systems may themselves be vulnerable to data integrity attacks (e.g., through "model poisoning," an attack conducted by introducing malicious information into training data).

In response to those risks, the AI Executive Order in the U.S. calls for testing and reporting rules for companies developing certain AI tools. Companies must therefore ensure their cybersecurity policies adequately identify, measure and manage these risks.

Privacy

Companies that build or use AI models built on training sets that include personal information — whether purchased from third parties or extracted from publicly available websites — or that otherwise collect personal information, including through inputs by users (such as through note-taking and summarization technologies built into web conferencing software), may implicate privacy laws in various jurisdictions.

Applicability of comprehensive privacy laws. Comprehensive privacy laws, such as the EU's General Data Protection Regulation (GDPR), the CCPA and other U.S. state privacy laws, impose purpose limitation, data minimization, transparency, accountability and integrity principles that may be at odds with the use and development of such AI models. Therefore, companies will need to carefully consider whether they are providing individuals whose personal information is used for or by AI models with the requisite notices and are obtaining the necessary consent prior to such use. Companies will also need to be prepared to respond to requests by these individuals to exercise their rights — some of which, such as the right to delete, are complicated by the mechanisms of AI — under the privacy laws.

Lawsuits under biometric privacy laws. AI companies have faced suits under the Illinois Biometric Information Privacy Act, which prohibits private companies from collecting biometric data unless they follow stringent requirements. For example, recent lawsuits in state courts allege Clearview AI scraped billions of pictures from social media platforms to create a faceprint database that was then sold to police departments and other private organizations.

FTC's asserted authority over AI. The Federal Trade Commission (FTC) has enforced the FTC Act's prohibitions on unfair and deceptive acts or practices against AI companies (including requiring in some cases that AI models trained on data in violation of privacy commitments be deleted, a remedy termed "algorithmic disgorgement").


The FTC has also indicated that it will continue to enforce the FTC Act to protect against misuses of personal information in connection with AI models. (See "FTC Enforcement Trends in Consumer Protection Under the Biden Administration.")

In addition, the FTC recently approved, in a 3-0 vote, a resolution authorizing its ability to issue civil investigative demands (CIDs), which are a form of compulsory process similar to a subpoena, in non-public investigations involving products and services that use or claim to use AI.

4. Labor and Employment Issues

Many companies are already harnessing AI to improve efficiency in employment processes by, for example, preparing job descriptions, screening and evaluating job candidates, and analyzing data to predict the future success of job applicants. AI may also be used to analyze employee productivity, measure performance and identify candidates for promotion.

Hiring and Promotion

An employer's use of AI in hiring and promotion must comply with laws that prohibit discrimination based on race, color, religion, sex, national origin, disability or age, including Title VII of the Civil Rights Act of 1964, the Americans With Disabilities Act and the Age Discrimination in Employment Act in the U.S.

In 2022 and 2023, the U.S. Equal Employment Opportunity Commission (EEOC) issued guidance with examples of ways that AI tools may violate anti-discrimination laws and best practices to employ when integrating AI into HR policies and practices.

In addition, state and local legislators are focused on AI use in hiring and promotion. For example, New York City Local Law 144 requires employers using automated employment decision tools (AEDTs) to screen candidates for hiring or promotion to make public the date and summary of the results of an annual independent bias audit of any such AEDTs.

Illinois also enacted the Artificial Intelligence Video Interview Act in 2022, which imposes certain requirements on employers that use AI to analyze video interviews. California, New Jersey, New York, Vermont and Washington, D.C., have also proposed legislation to regulate AI use in hiring and promotion.

In addition, employers and employment agencies must provide a notice of the use of AEDTs to employees and candidates for employment who reside in New York City.

Termination

When laying off employees, employers must also be mindful of compliance not only with anti-discrimination laws but also the U.S. Worker Adjustment and Retraining Notification Act and equivalent state and local laws. Furthermore, pending legislation may impact employers; for example, the U.S. Algorithmic Accountability Act of 2022 would direct the FTC to require organizations to conduct impact assessments for bias when using AI systems to make critical decisions, such as whom to lay off. (See our June 2023 article "AI and the Workplace: Employment Considerations.")

5. AI Governance Practices

As AI becomes integrated more broadly into products and business practices, and as regulations affecting AI use take shape, companies should implement — and regularly update — clear and robust internal governance policies in order to minimize risk and liability.

Specifically, heading into 2024, companies should consider:

– Terms of use. Ensure public-facing terms of use sufficiently reflect internal policies regarding AI. For example, companies that make valuable intellectual property (IP) available on their websites should draft terms that make it clear whether use of that IP in connection with third-party AI products is prohibited.
– Employee policies. Outline the scope of permitted use of AI in the workplace in internal policies, and design protections for the use of confidential information or IP as inputs in AI tools, as well as processes governing the use of AI-generated content in products and services.
– Vendor agreements. Develop and implement policies and processes to address any AI-specific risks arising from their vendor agreements and ensure that negotiated agreements have sufficient protections regarding the use of confidential, proprietary or personally identifiable information, IP infringement risks and IP ownership.
– Updating processes to reflect AI risks. Integrate AI-related risks into compliance and governance processes, including through assessments of the impact of AI-related development or procurement, and through training and tabletop exercises to sensitize employees, management and boards to key AI issues likely to affect the organization.
