A Summary of 2024 AI Regulations
In 2024, the pursuit of AI goals is taking center stage. The primary objective is to proactively mitigate
any potential harm caused by AI systems. The following are the key goals that various regions and
companies have committed to achieving:
1) Foster an equitable environment that actively addresses any biases related to protected
characteristics.
2) Embrace privacy-by-design principles, ensuring that data practices are devoid of any abusive elements.
3) Promote transparency by providing clear information about the data used and the purpose behind
automated decisions.
4) Offer individuals the option to opt out of AI-driven processes, allowing for a human review before any
final decision is made.
5) Develop highly accurate AI systems that can be held accountable for their actions.
These objectives reflect a collective commitment to harnessing the potential of AI while safeguarding
against any possible negative consequences. By prioritizing fairness, security, privacy, transparency, and
accountability, we aim to build a responsible and trustworthy AI landscape.
Within both existing AI regulations and upcoming bills, there are four common areas of concern that
need to be addressed: data quality, transparency, human oversight, and accountability.
The measures outlined in Biden's AI Executive Order are founded upon the most up-to-date AI
standards: comprehensive safety tests must be conducted, and their results promptly shared with the
Federal Government. Furthermore, the order underscores the imperative need to prevent bias in
artificial intelligence systems and acknowledges the vast potential and advantages offered by AI
technology.
1. To ensure the safety, security, and trustworthiness of AI systems, NIST will establish guidelines and
standards for conducting red-teaming tests on such systems.
2. Federal contractors will be required to use AI in hiring processes in a nondiscriminatory
manner.
1 https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/, visited January 11, 2024
2 https://www.commerce.gov/news/press-releases/2023/11/direction-president-biden-department-commerce-establish-us-artificial, visited January 11, 2024
3 https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9, visited January 11, 2024
4 https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683, visited January 11, 2024
5 https://www.gov.il/en/departments/policies/ai_2023, visited January 11, 2024
6 https://www.iso.org/obp/ui/en/#iso:std:iso-iec:42001:ed-1:v1:en, visited January 11, 2024
3. Protecting against AI-enabled fraud and deception by adopting standards and best practices for
detecting AI-generated content and authenticating official content.
4. Adopting principles and best practices to mitigate harms and maximize the benefits of AI for workers.
5. Producing reports on AI's potential labor-market impacts.
6. Promoting a fair, open, and competitive AI ecosystem.
The New York City AI Ordinance is in effect, and enforcement commenced on July 5, 2023. The
legislation mandates the completion of a bias audit for any automated employment decision
tool before its implementation. Additionally, the bill stipulates that individuals residing in the city must
be informed about the use of such tools in the hiring or promotion process, and must be notified of the
specific job qualifications and characteristics that the automated employment decision tool will assess.
Failure to comply with these provisions will result in the imposition of a civil penalty.
This legislation aims to ensure fairness and transparency in the use of automated employment decision
tools. A bias audit allows discriminatory patterns or tendencies within a tool to be identified and
rectified before deployment, promoting equal opportunities for all candidates. Notifying candidates and
employees about the use of these tools, and about the qualifications and characteristics against which
they will be evaluated, lets individuals understand and prepare for the criteria being applied and fosters
trust in the hiring and promotion process. To uphold the integrity of the legislation, violations of its
provisions are met with civil penalties, which serve as a deterrent against non-compliance.
In conclusion, this bill establishes a framework for the fair and unbiased use of automated employment
decision tools: conduct bias audits, inform candidates and employees, and impose penalties for
non-compliance.
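The bias-audit requirement can be made concrete with one metric commonly used in such audits, the impact ratio: each group's selection rate divided by the selection rate of the most-selected group. The sketch below uses hypothetical data and group labels; a real audit follows the rules issued by the enforcing agency.

```python
# Minimal sketch of an impact-ratio computation for an automated
# employment decision tool. Data and group names are hypothetical.
from collections import defaultdict

def impact_ratios(decisions):
    """decisions: iterable of (group, selected) pairs; selected is True/False."""
    counts = defaultdict(lambda: [0, 0])  # group -> [num_selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: group A selected 40/100, group B 25/100.
sample = [("A", True)] * 40 + [("A", False)] * 60 + \
         [("B", True)] * 25 + [("B", False)] * 75
ratios = impact_ratios(sample)
# Group B's ratio is 0.25 / 0.40 = 0.625, below the common 0.8
# ("four-fifths") benchmark, which would flag the tool for review.
```

A ratio well below 1.0 for any group is a signal to investigate the tool's training data and features, not by itself proof of unlawful discrimination.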
2.4. Spain. The Agency for the Supervision of Artificial Intelligence (AESIA)
Established in 2021, AESIA was created with the goal of ensuring that AI systems in Spain are developed
and used responsibly, with a focus on ethical considerations, safety, and fairness. The agency acts as a
guardian, preventing AI from going rogue and wreaking havoc on our lives. So, you can sleep well at
night, knowing that AESIA has got your back.
AESIA's primary objective is to ensure that AI technologies are used ethically and responsibly. The agency
sets guidelines and standards to prevent AI systems from crossing ethical boundaries. These
guidelines (now pillars) include:
• Transparency
• Privacy
• Regulatory Sandbox
• Point of Contact with EU and EC
• Control over High-Risk AI Systems
2.5. The EU Artificial Intelligence Act
Approved by agreement7 in December 2023, this pivotal regulation8 seeks to become the leading global
standard for AI, much as its older sibling, the General Data Protection Regulation (GDPR), did for data
protection.
The Act focuses on the four common areas of concern noted earlier:
1. Data quality
2. Transparency
3. Human oversight
4. Accountability
• Managing Risks. AI can be wonderful, but it also has dangers. The EU AI Act concentrates on
managing these dangers. It wants to ensure that AI applications don't hurt people or businesses.
• Classifying High-Risk Applications. Some AI applications are more hazardous than others. The
EU AI Act will make a list of these high-risk applications. This way, everyone will be aware of
which AI systems need more care and attention.
• Establishing Clear Requirements. For high-risk AI applications, the EU AI Act will establish clear
guidelines. These guidelines will inform AI developers and users what they have to do to make
sure their AI systems are secure and dependable.
• Responsibilities for AI Users and Providers. The EU AI Act will also specify certain
responsibilities for people who use AI and those who provide high-risk AI applications. This
means that everyone involved will have to obey certain rules to safeguard people's rights and
safety.
• Compliance Assessment. Before an AI system is used or sold, it has to go through a compliance
assessment. This means that experts will verify if the AI system fulfills all the required standards.
It's like giving it a quality check before it enters the market.
7 https://digital-strategy.ec.europa.eu/en/policies/ai-pact, visited January 15, 2024
8 https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, visited January 15, 2024
• Enforcement. Once an AI system is in the market, the EU AI Act will ensure that it's being used
properly. If someone violates the rules or doesn't follow the responsibilities, there will be
penalties. This helps keep everyone responsible.
• Governance Structure. The EU AI Act also suggests a governance structure at both the European
and national levels. This means that there will be organizations and authorities in charge of
making sure everything is working safely and smoothly.
All AI systems that pose a clear threat to the safety, livelihoods, and rights of individuals fall into the
unacceptable-risk category and are prohibited. Examples include government social scoring systems and
voice-assisted toys that encourage dangerous behavior.
High-Risk AI Systems
High-risk AI systems encompass areas such as:
• Critical infrastructures (e.g., transportation), where the lives and well-being of citizens could be
jeopardized.
• Educational and vocational training systems that may determine one's access to education and
professional opportunities (e.g., exam scoring).
• Safety components of products (e.g., AI applications in robot-assisted surgery).
• AI systems used in employment, worker management, and self-employment (e.g., resume-sorting
software for recruitment).
• Essential private and public services (e.g., credit scoring systems that deny citizens the opportunity to
obtain loans).
• Law enforcement systems that may infringe upon people's fundamental rights (e.g., evaluating the
reliability of evidence).
• AI systems used in migration, asylum, and border control management (e.g., verifying the authenticity
of travel documents).
• Administration of justice and democratic processes (e.g., applying the law to specific sets of facts).
Before high-risk AI systems can be introduced to the market, they must adhere to strict obligations,
including conducting adequate risk assessments and implementing mitigation systems. The datasets
used to train these systems must be of high quality to minimize risks and prevent discriminatory
outcomes. Activity logging is required to ensure traceability of results, and detailed documentation must
be provided to authorities, offering comprehensive information about the system and its purpose for
assessment of compliance. Clear and sufficient information must also be provided to users. Additionally,
appropriate human oversight measures must be implemented to minimize risks, and the systems must
exhibit a high level of robustness, security, and accuracy.
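The activity-logging obligation above can be sketched in a few lines: record each automated decision with enough context to reconstruct it later. This is a minimal illustration under stated assumptions; the field names, the JSON Lines storage, and the choice to hash inputs rather than store them are all mine, not mandated by the Act.

```python
# Minimal sketch of decision logging for traceability. Hypothetical
# file name, model name, and record fields.
import datetime
import hashlib
import json

def log_decision(log_file, model_version, inputs, output):
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs instead of storing them raw, so the audit log
        # does not duplicate personal data.
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
    }
    with open(log_file, "a") as f:           # append-only audit trail
        f.write(json.dumps(record) + "\n")
    return record

rec = log_decision("decisions.jsonl", "scorer-v1",
                   {"applicant_id": 123, "score_inputs": [0.2, 0.7]},
                   {"decision": "review"})
```

Keeping the log append-only and content-addressed (via the digest) is one simple way to support the "traceability of results" that regulators ask documentation to demonstrate.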
All remote biometric identification systems are considered high-risk and are subject to stringent
enforcement.
Models in the high-risk category are allowed under certain requirements and after running a conformity
assessment; the areas listed above are examples.
Under the next category, users must be informed that their data is being processed by AI and, most
importantly, that they can request to communicate with a human instead of a machine.
Under the lowest-risk category, users must simply be informed that their data is being processed by AI.
Examples include:
• Spam filters
• AI-enabled video games
• Inventory-management systems
3. The White House Blueprint for an AI Bill of Rights
In preparation for a future federal AI regulation, the United States through the White House, has
developed a very comprehensive text in a format of blueprint. This document can be adopted by any
organization as a code of conduct due to its versatility.
These are the main pillars addressed by the Blueprint for an AI Bill of Rights:
1. Safe and Effective Systems. The development of automated systems must involve consultation
with diverse communities, stakeholders, and domain experts to ensure concerns, risks, and
potential impacts of the system are identified.
2. Algorithmic Discrimination Protections. Automated systems can result in algorithmic
discrimination, which involves treating individuals unfairly or unfavorably based on
characteristics such as race, color, and ethnicity.
3. Data Privacy. Protections must be built in and included by default.
4. Notice and Explanation. Developers and companies must inform individuals about the use of
automated systems and their ramifications, and explanations must be made public.
5. Human Alternatives, Consideration, and Fallback. Developers and companies must notify
individuals about the implementation of automated systems and the resulting outcomes, give
them the right to opt out, and let them choose a person to remediate any possible issue or
problem.
Avoid unfair actions or outcomes that harm people because of their race, color, ethnicity, sex (including
pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual
orientation), religion, age, national origin, disability, veteran status, genetic information, or any other
legally protected category.
System design should include proactive equity evaluations, ongoing actions to ensure algorithmic
fairness, and agnostic AI by design.
The same mechanism and logic used to develop and publish a Privacy Notice (Policy) should be applied
when writing and posting a Responsible and Ethical AI Notice containing guidelines on the correct use
of artificial intelligence and notifying data subjects of their rights.
Apply Privacy by Default and comply with each applicable comprehensive regulation. Ensure that data
used for development and testing is anonymized or deidentified.
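A minimal sketch of what deidentifying a development dataset can look like: drop direct identifiers, pseudonymize stable keys, and generalize quasi-identifiers. The field names and rules below are hypothetical; real deidentification should follow a recognized standard and the applicable regulation.

```python
# Hypothetical deidentification pass over a single record.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}  # dropped outright

def deidentify(record, salt):
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue                              # remove direct identifiers
        elif key == "user_id":
            # Pseudonymize with a salted hash: stable, so joins still work,
            # but not reversible without the salt.
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:12]
        elif key == "birth_year":
            out[key] = (value // 10) * 10         # generalize to a decade
        else:
            out[key] = value
    return out

row = {"name": "Jane Doe", "email": "jane@example.com",
       "user_id": 42, "birth_year": 1987, "score": 0.91}
clean = deidentify(row, salt="s3cret")
# clean keeps score, coarsens birth_year to 1980, pseudonymizes user_id,
# and drops name and email.
```

Note that pseudonymized data may still count as personal data under regulations like the GDPR, so this reduces risk rather than eliminating compliance obligations.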
Adapt AI systems to the EU AI Act's four-level risk classification and be ready to run a conformity
assessment. The four risk levels are: minimal risk, limited risk, high risk, and unacceptable risk.
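The four-level classification can be pictured as a simple triage function. The keyword sets below are illustrative simplifications drawn from the examples earlier in this document (plus a hypothetical limited-risk entry), not the Act's legal definitions.

```python
# Toy triage of use cases against the EU AI Act's four risk levels.
# The sets are illustrative, not legally complete.
UNACCEPTABLE = {"social scoring", "dangerous voice-assisted toy"}
HIGH = {"credit scoring", "recruitment", "exam scoring", "border control"}
LIMITED = {"customer service assistant"}  # hypothetical example

def classify(use_case):
    if use_case in UNACCEPTABLE:
        return "unacceptable risk: prohibited"
    if use_case in HIGH:
        return "high risk: conformity assessment required"
    if use_case in LIMITED:
        return "limited risk: transparency obligations"
    return "minimal risk: no additional obligations"

print(classify("recruitment"))   # high risk: conformity assessment required
print(classify("spam filter"))   # minimal risk: no additional obligations
```

In practice, classification turns on the system's intended purpose and context as defined in the Act's annexes, so a lookup like this is only a first-pass screen before legal review.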
9 https://www.transperfect.com/dataforce/blog/3-ways-to-minimize-ai-bias, visited January 15, 2024
4.6. Human Consideration
Establish an opt-out request form from automated systems in favor of a human alternative, based on
reasonable expectations.
4.7. Transparency
AI systems should provide explanations of their decisions that are tailored to the stakeholders involved,
and it is important for humans to understand when they are engaging with an AI system.
4.8. Accountability
It is of utmost importance to put in place proper mechanisms in order to ensure responsibility and
accountability regarding AI systems and the impact they have.
Adopting the White House AI Blueprint as a code of conduct is a very good step toward complying with
AI regulations, as its pillars are well designed and form a structured system.
Each government's secure-AI guidelines are an efficient tool for guidance on developing an AI system. In
this particular case, I suggest taking a look at the “UK Guidelines for Secure AI System Development”10, as
it offers specific tips to providers of any systems that use artificial intelligence (AI), whether those
systems have been created from scratch or built on top of tools and services provided by others.
5. Final Takeaways
Complying with AI regulations is essential for ensuring the responsible development and deployment of
artificial intelligence technologies. As an emerging field, AI poses unique risks and challenges that must
be addressed through comprehensive regulatory frameworks. To comply with AI regulations,
organizations and individuals should focus on three key areas: data governance, algorithmic
transparency, and ethical considerations.
10 https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development, visited January 15, 2024
Firstly, data governance plays a pivotal role in AI compliance. Organizations must prioritize privacy,
security, and consent when handling personal data. They should establish robust processes for data
collection, storage, and management, ensuring compliance with relevant laws and regulations governing
data protection. Additionally, organizations should invest in technologies such as Federated Learning,
which allows AI models to be trained on decentralized data sources without compromising individual
privacy. By implementing stringent data governance practices, organizations can demonstrate
compliance and foster trust in their AI systems.
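The Federated Learning idea mentioned above can be sketched in a few lines: each data holder computes a model update on its own records, and only the updates are averaged centrally, so raw data never leaves its source. This toy example fits y = 2x with a single shared parameter; real systems add secure aggregation, differential privacy, and far richer models.

```python
# Toy federated averaging: two sites, one shared parameter w for y ~ w*x.
def local_update(w, records, lr=0.1):
    # One gradient-descent step on least squares, computed entirely locally.
    grad = sum(2 * (w * x - y) * x for x, y in records) / len(records)
    return w - lr * grad

def federated_average(global_w, partitions, rounds=20):
    for _ in range(rounds):
        # Each site returns only an updated parameter, never its records.
        updates = [local_update(global_w, part) for part in partitions]
        global_w = sum(updates) / len(updates)
    return global_w

# Hypothetical sites whose data follows y = 2x; the rows are never pooled.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0), (4.0, 8.0)]
w = federated_average(0.0, [site_a, site_b])
# w converges toward 2.0 without any site disclosing its raw data.
```

The privacy benefit here is architectural: the aggregator sees parameter updates, not records, which is why the main text lists federated learning as a data-governance technology rather than just a training trick.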
Lastly, ethical considerations should guide AI compliance efforts. Organizations should adhere to ethical
principles such as fairness, accountability, and inclusivity throughout the development and deployment
of AI technologies. They should engage in stakeholder consultations, promote diverse perspectives, and
actively mitigate any biases present in their data or algorithms. Ethical considerations also extend to
upholding human values and safeguarding against potential misuse of AI technology. By prioritizing
ethics, organizations can align their AI systems with the societal expectations outlined in regulatory
frameworks.