
A SUMMARY OF 2024 AI REGULATIONS

1. INTRODUCTION TO AI REGULATIONS IN 2024
   1.1 THE GROWING NEED FOR AI REGULATIONS
   1.2 AI REGULATIONS' COMMON CONCERNS TO ADDRESS
2. KEY HIGHLIGHTS AND DEVELOPMENTS IN AI REGULATIONS
   2.1 MAJOR AI REGULATORY INITIATIVES
   2.2 THE AI EXECUTIVE ORDER (UNITED STATES)
   2.3 THE NEW YORK CITY AI ORDINANCE
   2.4 SPAIN: THE AGENCY FOR THE SUPERVISION OF ARTIFICIAL INTELLIGENCE (AESIA)
   2.5 THE EU ARTIFICIAL INTELLIGENCE ACT
3. THE WHITE HOUSE BLUEPRINT FOR AN AI BILL OF RIGHTS
4. THE STEPS TO COMPLIANCE
   4.1 PREVENT OR MITIGATE AI BIAS
   4.2 PROACTIVE EQUITY ASSESSMENTS
   4.3 AI RESPONSIBLE & ETHICAL POLICY
   4.4 DATA PRIVACY PROTECTION
   4.5 ADOPT THE FOUR RISK LEVEL CLASSIFICATION MODEL
   4.6 HUMAN CONSIDERATION
   4.7 TRANSPARENCY
   4.8 ACCOUNTABILITY
   4.9 ADOPTION OF THE AI BLUEPRINT
   4.10 FOLLOW AI SECURE GUIDELINES
5. FINAL TAKEAWAYS


A Summary of 2024 AI Regulations
1. Introduction To AI Regulations In 2024
Artificial Intelligence (AI) has rapidly advanced in recent years, transforming numerous industries and
shaping the way we live and work. With this technological progress comes the need to establish
regulations and guidelines to ensure the responsible and ethical development, deployment, and use of
AI systems. As we enter 2024, this article provides a comprehensive summary of the key AI regulations
and developments that have emerged on a global scale. It explores the ethical considerations and
guidelines for AI systems, the importance of privacy and data protection, the concept of accountability
and liability, the impact of regulations on industry and economy, international collaboration and
standardization efforts, and finally, offers insights into the future outlook and challenges of regulating AI.
Stay tuned as we dive into the dynamic landscape of AI regulations in 2024.

1.1 The Growing Need For AI Regulations

In 2024, the pursuit of AI governance goals is taking center stage. The primary objective is to proactively mitigate any potential harm caused by AI systems. The following are the key goals that various regions and companies have committed to achieving:

1) Foster an equitable environment that actively addresses any biases related to protected
characteristics.

2) Establish a secure AI ecosystem that prioritizes safety and robustness.

3) Embrace privacy by design principles, ensuring that data practices are devoid of any abusive elements.

4) Promote transparency by providing clear information about the data used and the purpose behind
automated decisions.

5) Offer individuals the option to opt-out of AI-driven processes, allowing for a human review before any
final decision is made.

6) Develop a highly accurate AI system that can be held accountable for its actions.
These objectives reflect a collective commitment to harnessing the potential of AI while safeguarding
against any possible negative consequences. By prioritizing fairness, security, privacy, transparency, and
accountability, we aim to build a responsible and trustworthy AI landscape.

1.2 AI Regulations’ Common Concerns to Address

Within both existing AI regulations and upcoming bills, there are four common areas of concern that need to be addressed:

• Transparency: notices and the right to opt out; ensuring proper documentation; explainability.

• Accountability: independent audits; risk management systems (a continuous process of identification, evaluation, and remediation of risks); data governance practices (examination of data collection, design choices, and possible bias).

• Human Oversight: human in the loop, with direct involvement in decisions.

• Security: resistance to errors, faults, and inconsistencies; technical measures to prevent attacks.

2. Key Highlights And Developments In AI Regulations


During the last months of 2023 and the first quarter of 2024, we have witnessed the implementation of four
pivotal regulations that have significantly impacted various regions and countries. These regulations have
proven to be game changers, revolutionizing the way business is conducted and shaping the future
landscape.

2.1 Major AI Regulatory Initiatives

• Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence1


• U.S. Artificial Intelligence Safety Institute (USAISI) through Department of Commerce and NIST2
• New York City AI Ordinance3
• The EU AI Act4
• Israel’s Policy on Artificial Intelligence Regulation and Ethics5
• ISO/IEC 420016

We will analyze several of them below.

2.2 The AI Executive Order (United States)

The measures outlined in President Biden's AI Executive Order are founded upon new AI standards: developers of powerful AI systems must conduct comprehensive safety tests and promptly share the results with the Federal Government. Furthermore, the order underscores the imperative need to prevent bias in artificial intelligence systems and acknowledges the vast potential and advantages offered by AI technology.

2.2.1. The AI Executive Order Highlights

1. To ensure the safety, security, and trustworthiness of AI systems, NIST will establish guidelines and
standards for conducting red-teaming tests on such systems.
2. Upcoming requirements for federal contractors include using AI for hiring processes in a
nondiscriminatory manner.

3. Protection against AI-enabled fraud and deception by adopting standards and best practices for detecting AI-generated content and authenticating official content.
4. Adopting principles and best practices to mitigate harms and maximize the benefits of AI for workers.
5. Generate reports on AI’s potential labor-market impacts.
6. Finally, promote a fair, open, and competitive AI ecosystem.

1 https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/, visited January 11, 2024.
2 https://www.commerce.gov/news/press-releases/2023/11/direction-president-biden-department-commerce-establish-us-artificial, visited January 11, 2024.
3 https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9, visited January 11, 2024.
4 https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683, visited January 11, 2024.
5 https://www.gov.il/en/departments/policies/ai_2023, visited January 11, 2024.
6 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:42001:ed-1:v1:en, visited January 11, 2024.

2.3 The New York City AI Ordinance

The New York City AI Ordinance is in effect, and enforcement commenced on July 5, 2023. The legislation mandates the completion of a bias audit for any automated employment decision tool before its implementation. Additionally, the bill stipulates that individuals residing in the city must be informed about the utilization of such tools in the hiring or promotion process. They should also receive notification regarding the specific job qualifications and characteristics that will be assessed by the automated employment decision tool. Failure to comply with these provisions will result in the imposition of a civil penalty.

This legislation aims to ensure fairness and transparency in the use of automated employment decision
tools. By conducting a bias audit, potential biases within the tool can be identified and rectified,
promoting equal opportunities for all candidates. Furthermore, notifying candidates and employees
about the use of these tools and the criteria they will be evaluated against fosters a sense of trust and
understanding in the hiring and promotion processes.

The bill recognizes the importance of addressing potential biases that may arise from the use of
automated employment decision tools. By conducting a thorough bias audit, any discriminatory patterns
or tendencies can be identified and eliminated, ensuring that the tool operates in a fair and unbiased
manner.

Moreover, the legislation emphasizes the need for transparency and communication with candidates and
employees. By notifying individuals about the use of these tools and the specific qualifications and
characteristics that will be assessed, they can better prepare themselves and understand the criteria
against which they will be evaluated. This promotes a more informed and equitable hiring and
promotion process.
To uphold the integrity of this legislation, violations of its provisions will be met with civil penalties. This
serves as a deterrent against non-compliance and reinforces the importance of adhering to the
requirements set forth in the bill.

In conclusion, this bill seeks to establish a framework that ensures the fair and unbiased use of
automated employment decision tools. By conducting bias audits, informing candidates and employees,
and imposing penalties for non-compliance, this legislation aims to create a more equitable and
transparent hiring and promotion process.

2.3.1 The New York City AI Ordinance Highlights

1. Notice must be given of the right to opt out.
2. The details of the AI system can be disclosed to the applicant upon request.
3. Yearly independent bias audits must be conducted (a sketch of the kind of metric such audits report follows this list).
4. Results of the independent audits must be disclosed to the public on the employer's website.
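To make the audit requirement concrete, below is a minimal, illustrative Python sketch of the selection-rate impact ratio commonly reported in bias audits of automated employment decision tools. The sample data, column names, and the 0.8 review threshold are hypothetical assumptions rather than text from the ordinance; a real audit must follow the categories and rules defined by the law and be performed by an independent auditor.

```python
from collections import defaultdict

def impact_ratios(records, group_key="race_ethnicity", selected_key="selected"):
    """Compute per-group selection rates and their impact ratios.

    Impact ratio = group selection rate / highest group selection rate.
    `records` is a list of dicts, e.g. {"race_ethnicity": "Group A", "selected": True}.
    """
    totals, selections = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        selections[r[group_key]] += 1 if r[selected_key] else 0

    rates = {g: selections[g] / totals[g] for g in totals if totals[g] > 0}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical example: flag groups whose impact ratio falls below a 0.8 threshold.
sample = [
    {"race_ethnicity": "Group A", "selected": True},
    {"race_ethnicity": "Group A", "selected": True},
    {"race_ethnicity": "Group B", "selected": True},
    {"race_ethnicity": "Group B", "selected": False},
]
for group, ratio in impact_ratios(sample).items():
    print(group, round(ratio, 2), "review" if ratio < 0.8 else "ok")
```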

2.4. Spain. The Agency for the Supervision of Artificial Intelligence (AESIA)

Established in 2021, AESIA was created with the goal of ensuring that AI systems in Spain are developed
and used responsibly, with a focus on ethical considerations, safety, and fairness. The agency acts as a
guardian, preventing AI from going rogue and wreaking havoc on our lives. So, you can sleep well at
night, knowing that AESIA has got your back.

2.4.1. The Role of AESIA in Spain

AESIA's primary objective is to ensure that AI technologies are used ethically and responsibly. The agency
sets guidelines and standards to prevent AI systems from crossing ethical boundaries. These guidelines, now formalized as pillars, include:

• Transparency
• Privacy
• Regulatory Sandbox
• Point of Contact with EU and EC
• Control over High-Risk AI Systems
2.5. The EU Artificial Intelligence Act

Approved by political agreement7 in December 2023, this pivotal regulation8 seeks to become the leading AI rulebook in the world, much as its older sibling, the General Data Protection Regulation (GDPR), did for data protection.

The EU AI Act is mainly based on four pillars:

1. Data quality
2. Transparency
3. Human oversight
4. Accountability.

2.5.1. The AI Act Insights

The EU AI Act aims to achieve several goals:

• Managing Risks. AI can be wonderful, but it also has dangers. The EU AI Act concentrates on
managing these dangers. It wants to ensure that AI applications don't hurt people or businesses.
• Classifying High-Risk Applications. Some AI applications are more hazardous than others. The
EU AI Act will make a list of these high-risk applications. This way, everyone will be aware of
which AI systems need more care and attention.
• Establishing Clear Requirements. For high-risk AI applications, the EU AI Act will establish clear
guidelines. These guidelines will inform AI developers and users what they have to do to make
sure their AI systems are secure and dependable.
• Responsibilities for AI Users and Providers. The EU AI Act will also specify certain
responsibilities for people who use AI and those who provide high-risk AI applications. This
means that everyone involved will have to obey certain rules to safeguard people's rights and
safety.
• Compliance Assessment. Before an AI system is used or sold, it has to go through a compliance
assessment. This means that experts will verify if the AI system fulfills all the required standards.
It's like giving it a quality check before it enters the market.

• Enforcement. Once an AI system is in the market, the EU AI Act will ensure that it's being used properly. If someone violates the rules or doesn't follow the responsibilities, there will be penalties. This helps keep everyone responsible.
• Governance Structure. The EU AI Act also suggests a governance structure at both the European and national levels. This means that there will be organizations and authorities in charge of making sure everything is working safely and smoothly.

7 https://digital-strategy.ec.europa.eu/en/policies/ai-pact, visited January 15, 2024.
8 https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, visited January 15, 2024.

2.5.2. The Risk-Based Approach

All AI systems that pose a clear threat to the safety, livelihoods, and rights of individuals will be
prohibited. This includes government social scoring systems and voice-assisted toys that encourage
dangerous behavior.

Unacceptable Risk AI Systems

The AI models in this category are prohibited. Below we can see some examples:

• Real-time remote biometric identification in public spaces used by law-enforcement authorities.


• Use of subliminal techniques.
• Social scoring systems used by Public Authorities.

High-Risk AI Systems

High-risk AI systems encompass the following areas:

• Critical infrastructures (e.g., transportation), where the lives and well-being of citizens could be jeopardized.
• Educational and vocational training systems that may determine access to education and professional opportunities (e.g., exam scoring).
• Safety components of products (e.g., AI applications in robot-assisted surgery).
• Employment, worker management, and access to self-employment (e.g., resume-sorting software for recruitment).
• Essential private and public services (e.g., credit scoring systems that deny citizens the opportunity to obtain a loan).
• Law enforcement systems that may infringe upon people's fundamental rights (e.g., evaluating the reliability of evidence).
• Migration, asylum, and border control management (e.g., verifying the authenticity of travel documents).
• Administration of justice and democratic processes (e.g., applying the law to a specific set of facts).

Before high-risk AI systems can be introduced to the market, they must adhere to strict obligations,
including conducting adequate risk assessments and implementing mitigation systems. The datasets
used to train these systems must be of high quality to minimize risks and prevent discriminatory
outcomes. Activity logging is required to ensure traceability of results, and detailed documentation must
be provided to authorities, offering comprehensive information about the system and its purpose for
assessment of compliance. Clear and sufficient information must also be provided to users. Additionally,
appropriate human oversight measures must be implemented to minimize risks, and the systems must
exhibit a high level of robustness, security, and accuracy.

All remote biometric identification systems are considered high-risk and are subject to stringent
enforcement.

Models in this category are allowed subject to certain requirements and after running a conformity assessment. These are examples of high-risk models:

• Systems used in evaluation, hiring, and recruiting processes.
• Decisions about employee promotion, termination, and task allocation.
• Automated credit scoring.
• Automated insurance claims processing.
• All AI systems related to the education sector.


Figure 1. The Four Levels of Risk Approach

AI Limited Risk Systems

Under this category, users must be informed that their data is being processed by AI and, most importantly, users must be informed of the opportunity to request to communicate with a human instead of a machine. Some examples include:

• The use of chatbots.
• Deepfake generation.

AI Minimal Risk Systems

AI systems in this category are not subject to additional mandatory obligations under the Act, although providers are encouraged to adhere to voluntary codes of conduct. Examples include:

• Spam filters
• AI-enabled video games
• Inventory-management systems
3. The White House Blueprint for an AI Bill of Rights
In preparation for a future federal AI regulation, the United States, through the White House, has developed a comprehensive text in the format of a blueprint. Thanks to its versatility, this document can be adopted by any organization as a code of conduct.

These are the main pillars addressed by the Blueprint for an AI Bill of Rights:

1. Safe and Effective Systems. The development of automated systems must involve consultation
with diverse communities, stakeholders, and domain experts to ensure concerns, risks, and
potential impacts of the system are identified.
2. Algorithmic Discrimination Protections. Automated systems can result in algorithmic
discrimination, which involves treating individuals unfairly or unfavorably based on
characteristics such as race, color, and ethnicity.
3. Data Privacy. Built-in protections, included by default.
4. Notice and Explanation. Individuals must be informed by developers and companies about the utilization of automated systems and their ramifications. Explanations must be made public.
5. Human Alternatives, Consideration, and Fallback. It is crucial for developers and companies to notify individuals regarding the implementation of automated systems and the resulting outcomes, and to give them the right to opt out and to have a person remediate any possible issue or problem.

4. The Steps to Compliance


Complying with AI regulations poses a multifaceted and ever-evolving challenge for organizations
engaged in the use or development of AI systems. These regulations are designed to guarantee the
responsible, ethical, and secure utilization of AI, while also safeguarding the rights and interests of
individuals and society at large. To adhere to these regulations, organizations must diligently follow a set
of general steps:

4.1. Prevent or Mitigate AI Bias

Avoid unfair actions or outcomes that harm people because of their race, color, ethnicity, sex (including
pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual
orientation), religion, age, national origin, disability, veteran status, genetic information, or any other
legally protected category.

4.1.1. How To Prevent or Mitigate AI Bias9

a) Listen to Feedback. Gather input from end-users through surveys.
b) Review Training Data. Examine the data before feeding the algorithm (see the sketch after this list).
c) Review and Monitor Results. Audit in real time (internal and external), identifying and improving issues live.
d) Understand the Origin of Bias. Finding the source can help minimize its effects.
e) Adopt Privacy Principles. De-identify data so that it becomes agnostic information.
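As an illustration of step (b), here is a minimal sketch, assuming tabular training data held in a pandas DataFrame, that reports how each demographic group is represented before the data is fed to the algorithm. The column name and the 5% under-representation threshold are hypothetical; the appropriate threshold depends on the use case and applicable law.

```python
import pandas as pd

def group_representation(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Report how each group is represented in the training data.

    Flags any group whose share falls below a (hypothetical) 5% threshold,
    which usually warrants collecting more data or re-weighting.
    """
    counts = df[column].value_counts(dropna=False)
    report = pd.DataFrame({
        "count": counts,
        "share": (counts / len(df)).round(3),
    })
    report["flag_underrepresented"] = report["share"] < 0.05
    return report

# Hypothetical column name; adapt to your own schema.
train = pd.DataFrame({"gender": ["F", "M", "M", "M", "F", "X"]})
print(group_representation(train, "gender"))
```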

4.2. Proactive Equity Assessments

System design that includes proactive equity evaluations. Ongoing actions to ensure algorithm fairness.
Agnostic AI by design.

4.3. AI Responsible & Ethical Policy

The same mechanism and logic used when developing and publishing a Privacy Notice (Policy) should be applied when writing and posting an AI Responsible and Ethical Notice that contains guidelines on the correct use of artificial intelligence and notifies data subjects of their rights.

4.4. Data Privacy Protection

Privacy by Default and compliance with each applicable comprehensive regulation. Ensure the data used to develop and test AI systems is anonymized or de-identified.
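A minimal sketch of what de-identifying development and test data can look like in practice, assuming simple dictionary records; the field names and salt handling are hypothetical illustrations. Note that salted hashing is pseudonymization rather than full anonymization, so the resulting data generally remains personal data under regulations such as the GDPR.

```python
import hashlib

# Hypothetical field names; adapt to your own records.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the user id with a salted hash,
    so development/test data cannot be trivially linked back to a person."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in cleaned:
        digest = hashlib.sha256((salt + str(cleaned["user_id"])).encode()).hexdigest()
        cleaned["user_id"] = digest[:16]
    return cleaned

record = {"user_id": 42, "name": "Jane Doe", "email": "jane@example.com", "score": 0.87}
print(pseudonymize(record, salt="rotate-me-regularly"))
```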

4.5. Adopt the Four Risk Level Classification Model

Adapt AI systems to the EU AI Act's four-level risk classification and be ready to run a conformity assessment. The four risk levels are: minimal risk, limited risk, high risk, and unacceptable risk.
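Below is a minimal, illustrative sketch of how an organization might keep an internal triage table that maps its AI use cases onto the four risk levels. The use-case names and the conservative default are assumptions for illustration; the authoritative classification is the one set out in the EU AI Act and confirmed by any subsequent conformity assessment.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical internal triage table; the Act itself governs the real scoping.
USE_CASE_RISK = {
    "social_scoring_by_public_authority": RiskLevel.UNACCEPTABLE,
    "cv_screening_for_recruitment": RiskLevel.HIGH,
    "credit_scoring": RiskLevel.HIGH,
    "customer_service_chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def triage(use_case: str) -> RiskLevel:
    """Default to HIGH so unmapped use cases get a full conformity review."""
    return USE_CASE_RISK.get(use_case, RiskLevel.HIGH)

print(triage("credit_scoring"))          # RiskLevel.HIGH
print(triage("new_unmapped_use_case"))   # RiskLevel.HIGH (conservative default)
```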

9 https://www.transperfect.com/dataforce/blog/3-ways-to-minimize-ai-bias, visited January 15, 2024.
4.6. Human Consideration

Establish an opt-out request mechanism from automated systems in favor of human alternative consideration, based on reasonable expectations.
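One possible way to operationalize this step is sketched below: a routing function that sends a case to human review whenever the individual has opted out of automated processing, or the model is not confident enough in its own decision. The dataclass fields and the confidence threshold are hypothetical illustrations, not requirements taken from any specific regulation.

```python
from dataclasses import dataclass

@dataclass
class Case:
    applicant_id: str
    opted_out: bool          # set from the opt-out request form
    model_score: float       # model's confidence in its own decision

def route(case: Case, confidence_threshold: float = 0.9) -> str:
    """Route to a human reviewer when the person opted out or confidence is low."""
    if case.opted_out or case.model_score < confidence_threshold:
        return "human_review"
    return "automated_decision"

print(route(Case("A-001", opted_out=True, model_score=0.97)))   # human_review
print(route(Case("A-002", opted_out=False, model_score=0.72)))  # human_review
print(route(Case("A-003", opted_out=False, model_score=0.95)))  # automated_decision
```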

4.7. Transparency

AI systems should provide explanations of their decisions that are tailored to the stakeholders involved, as it is important for humans to understand when they are engaging with an AI system.

4.8. Accountability

It is of utmost importance to put in place proper mechanisms in order to ensure responsibility and
accountability regarding AI systems and the impact they have.

4.9. Adoption of the AI Blueprint

Adopting the White House AI Blueprint as a code of conduct is a very good step toward complying with AI regulations, as its pillars are well designed as a structured system.

4.10. Follow AI Secure Guidelines

Each government's secure AI guidelines are an effective tool for guidance when developing an AI system. In this particular case, I suggest taking a look at the “UK Guidelines for Secure AI System Development”10, as it recommends specific tips for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others.

5. Final Takeaways
Complying with AI regulations is essential for ensuring the responsible development and deployment of
artificial intelligence technologies. As an emerging field, AI poses unique risks and challenges that must
be addressed through comprehensive regulatory frameworks. To comply with AI regulations,
organizations and individuals should focus on three key areas: data governance, algorithmic
transparency, and ethical considerations.

10 https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development, visited January 15, 2024.
Firstly, data governance plays a pivotal role in AI compliance. Organizations must prioritize privacy,
security, and consent when handling personal data. They should establish robust processes for data
collection, storage, and management, ensuring compliance with relevant laws and regulations governing
data protection. Additionally, organizations should invest in technologies such as Federated Learning,
which allows AI models to be trained on decentralized data sources without compromising individual
privacy. By implementing stringent data governance practices, organizations can demonstrate
compliance and foster trust in their AI systems.
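Because Federated Learning is named here as a concrete privacy-preserving technique, a minimal numerical sketch of its core idea (federated averaging) follows: each client computes an update on its own data, and only model weights, never raw records, are shared and averaged. This is a toy linear-regression illustration with synthetic data and hypothetical parameters, not a production federated learning stack.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a client's local data only."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_w, clients):
    """FedAvg: each client trains locally; only model weights are shared and averaged."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_update(global_w.copy(), X, y) for X, y in clients])
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each keeping its raw data on-premises
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
print(np.round(w, 2))  # approaches [2.0, -1.0] without pooling the raw data
```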

Secondly, algorithmic transparency is crucial to ensure compliance with AI regulations. Organizations


should strive to enhance the explainability of their AI models and algorithms, particularly those deployed
in sensitive domains such as healthcare or finance. This can be achieved by using techniques like
interpretable machine learning to understand and present the logic behind the AI's decision-making
processes. By providing transparent explanations, organizations not only meet regulatory requirements
but also enable end-users to make informed decisions, reduce biases, and address potential concerns
related to fairness and accountability.
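As a small illustration of the interpretable-machine-learning point, the sketch below fits a plain logistic regression on synthetic data and reads its coefficients as a per-feature explanation of the decision logic. The feature names are hypothetical, and in practice the explanation technique must be matched to the model and audience (for example, surrogate models or feature-importance methods for more complex systems).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy, synthetic data: two features standing in for a credit-style decision.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The coefficients give a per-feature, human-readable account of the decision logic.
for name, coef in zip(["income_ratio", "late_payments"], model.coef_[0]):
    direction = "increases" if coef > 0 else "decreases"
    print(f"{name}: weight {coef:+.2f} ({direction} approval odds)")
```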

Lastly, ethical considerations should guide AI compliance efforts. Organizations should adhere to ethical
principles such as fairness, accountability, and inclusivity throughout the development and deployment
of AI technologies. They should engage in stakeholder consultations, promote diverse perspectives, and
actively mitigate any biases present in their data or algorithms. Ethical considerations also extend to
upholding human values and safeguarding against potential misuse of AI technology. By prioritizing
ethics, organizations can align their AI systems with the societal expectations outlined in regulatory
frameworks.

To conclude, complying with AI regulations requires a multidimensional approach encompassing data


governance, algorithmic transparency, and ethical considerations. Organizations and individuals should
implement robust data governance practices to protect privacy and security. They should also strive for
algorithmic transparency to enhance explainability and mitigate biases, while incorporating ethical
considerations to guide AI development and deployment. By following these guidelines, stakeholders can
foster responsible AI innovation and contribute to a more trustworthy and beneficial AI ecosystem.

Below we can find a summary of the proposed steps:

1. Prevent or Mitigate AI Bias


2. Proactive Equity Assessments

3. AI Responsible & Ethical Policy

4. Data Privacy Protection

5. The Four Risk Level Classification

6. Human Consideration

7. Transparency

8. Accountability

9. Adopt the AI Blueprint

10. Follow AI Secure Guidelines

Andres Saravia, PhD, CIPP/US, JD
