
Dear HCAI Group,

The Director of the U.S. White House Office of Management and Budget, Shalanda
Young, issued a 34-page memo for Executive Branch agencies, “Advancing
Governance, Innovation, and Risk Management for Agency Use of AI” (March 28, 2024).
The memo, which responds to Biden’s October 2023 Executive Order, says: “All leaders
from government, civil society and the private sector have a moral, ethical and societal
duty to make sure that artificial intelligence is adopted and advanced in a way that
protects the public from potential harm, while ensuring everyone is able to enjoy its full
benefit.” Furthermore, it “directs agencies to advance AI governance and innovation
while managing risks from the use of AI in the Federal Government, particularly those
affecting the rights and safety of the public.”

A key part of the memo requires federal agencies to “independently evaluate their uses
of AI, monitor them for mistakes and failures and guard against risks of discrimination.”
It then requires agencies to (1) Complete an AI impact assessment, (2) Test the AI for
performance in a real-world context, (3) Independently evaluate the AI, (4) Conduct
ongoing monitoring, (5) Regularly evaluate risks from the use of AI, (6) Mitigate
emerging risks to rights and safety, (7) Ensure adequate human training and
assessment, (8) Provide additional human oversight, intervention, and accountability as
part of decisions or actions that could result in a significant impact on rights or safety,
and (9) Provide public notice and plain-language documentation. These are strong
commitments, which is very good news, but the implementation of these requirements
will take staff, resources, and new processes. It’s a wonderful start.

A second event is that the U.S. National Telecommunications and Information
Administration released a 75-page AI Accountability Policy Report (March 2024) whose
principal author is Ellen P. Goodman. This report seems to include AI uses by
companies, not just Federal Government uses, which means this report could be widely
influential.

The Executive Summary claims: “To promote innovation and adoption of trustworthy AI,
we need to incentivize and support pre- and post-release evaluation of AI systems, and
require more information about them as appropriate. Robust evaluation of AI
capabilities, risks, and fitness for purpose is still an emerging field. To achieve real
accountability and harness all of AI’s benefits, the United States – and the world –
needs new and more widely available accountability tools and information, an
ecosystem of independent AI system evaluation, and consequences for those who fail
to deliver on commitments or manage risks properly.”

Their AI Accountability Chain emphasizes three issues: (1) Access to information by
appropriate means and parties is important throughout the AI lifecycle, from early
development of a model to deployment and successive uses, as recognized in federal
government efforts already underway pursuant to President Biden’s Executive Order
14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial
Intelligence, (2) Independent evaluation, including red-teaming, audits, and performance
evaluations, and (3) Consequences for responsible parties… will require the application
and/or development of levers – such as regulation, market pressures, and/or legal
liability – to hold AI entities accountable for imposing unacceptable risks or making
unfounded claims.

The closing section makes eight recommendations and reiterates the three issues: “The
public, consumers, customers, workers, regulators, shareholders, and others need
reliable information to make choices about AI systems. To justify public trust in, and
reduce potential harms from, AI systems, it will be important to develop “accountability
inputs” including better information about AI systems as well as independent
evaluations of their performance, limitations, and governance. AI actors should be held
accountable for claims they make about AI systems and for meeting established
thresholds for trustworthy AI. Government should advance the AI accountability
ecosystem by encouraging, supporting, and/or compelling these inputs.”

I like that the report calls for national registries of (1) high-risk AI deployments, (2)
adverse incident reporting databases, and (3) disclosable AI system audits. I’m
pleased that “responsibility” gets repeated mentions; however, “user interfaces” appears
only three times, and “design” is mostly reserved for the design of processes rather
than User Experience Design. While the report is a step forward in making specific
recommendations, it would be stronger if the processes for implementing them were laid
out more clearly, with timetables for pilot approaches, mechanisms for refining them, and
clarification of which bodies make the decisions. It closes with a useful compact
glossary.

When I asked Marc Rotenberg to comment on these reports, he wrote: “Your
assessment and instinct are correct. Two excellent reports! Key phrases from the initial
OMB Guidance and the final Order are “rights-impacting” and “safety-impacting” AI
systems in the federal agencies. These must meet “minimum practices” or else they
must not be deployed and, if operational, must be decommissioned.

This is significant for at least three reasons: (1) these are actual prohibitions on AI (not
simply calls to make AI more ethical or more responsible); (2) they align with the key
goals of the EU AI Act - to protect both fundamental rights and public safety - and
indicate a high degree of convergence on transatlantic AI governance; and (3) they
provide an actual test for implementation - will the federal agencies, in practice, prohibit
AI systems that fail to comply with the OMB Order? That should keep AI experts focused
on federal agencies for many years to come!
Agree also that the NTIA report is significant. A strong framework, though of course one
must always ask hard questions about oversight, implementation, and enforcement.
Drafting recommendations is the easy part. Ensuring compliance is a whole separate
project!”

He also remarked that the 1600+ page 2023 AI and Democratic Values Index was
released last week with detailed metrics and reviews for 80 countries. “One of the key
findings includes the rapid improvement of AI governance in the US. The US score will
go up significantly in our new report from mid-tier to the second tier.”

Best wishes… Ben

Other items:

Larry Medsker, Chair of ACM’s US Technology Policy Committee, wrote a new 4-page
ACM TechBrief, Issue #10: Automated Vehicles, which focuses on the problem
that “Deficiencies in critical testing data and automated vehicle technology are impeding
informed regulation and possible deployment of demonstrably safe automated vehicles.”
The TechBrief offers these policy implications: “Regulators should not assume that fully
automated vehicles will necessarily reduce road injuries and fatalities. It is unclear that
fully automated vehicles will be able to operate safely without a human driver’s
attention, except on limited roadways and under controlled conditions. Improved safety
outcomes depend on appropriately regulating the safety engineering, testing, and
ongoing performance of automated vehicles.”

Dongsong Zhang, Pallab Sanyal, Fiona Fui-Hoon Nah, and Raghava
Mukkamala are the Guest Editors for a Special Issue of the journal Decision Support
Systems on Generative AI: Transforming Human, Business, and Organizational
Decision Making. Their website says: “This special issue aims to curate and present
state-of-the-art theoretical, technical, behavioral, and organizational research on GenAI
in support of decision making and problem solving” with topics such as: Impact of GenAI
applications in organizations and society, Best practices in prompt engineering, Dark
sides of GenAI, Responsible, trustworthy, and ethical GenAI, and Regulation and data
protection associated with GenAI. The deadline for submissions is November 30, 2024.

Hannes Werthner will host Marc Rotenberg (Center for AI and Digital Policy) to speak
on “AI Governance: An Abundance of Norms” via Zoom or YouTube on
Tuesday, April 16, 2024 at 5:00 pm Central European Summer Time (8:00 am U.S.
Eastern Time). The abstract says: “Over the past year, there was a dramatic increase in the
number of AI governance frameworks adopted by national governments and
international organizations. From the EU AI Act and the US Executive Order to the
Bletchley Declaration and the UN Resolution, policymakers have been busy. Now that
legislation has been adopted, the focus will shift to implementation and enforcement.
But there are also challenging questions emerging. Will there be convergence or
divergence on the norms for AI governance? What can be done at this stage to promote
harmonization while still leaving open the possibility of responding to new challenges?”

Data & Society presents Generative AI’s Impacts on Labor, Part Three, on April 18, 2024
at 1:30pm Eastern Time. “The relationship workers have with technology is more dynamic,
contested, and layered than predominant narratives suggest. Casting workers as
replaceable, for example, obscures the active and complex ways that workers are
responding to generative AI. While many build new skills and use these tools and
systems to their advantage, others sabotage, counteract, and otherwise circumvent
them… Livia Garofalo, Jeff Freitas, and Quinten Steenhuis will join Data & Society
host Aiha Nguyen to discuss the ways workers are reshaping their relationship with
generative AI tools — and with work itself.”

The ACM’s U.S. Technology Policy Committee presents a webinar: "Death by Algorithm:
The Use, Control, and Legality of Lethal and Other Autonomous Weapons Systems"
on Thursday, April 25, 2024 from 12:30 - 2:00pm EDT. Larry Medsker will moderate
the panel that covers topics such as “The integration of autonomous weapon systems
into military and intelligence operations, including lethal autonomous weapon systems
(LAWS)” and “The use of autonomous weapons in ongoing conflicts in Ukraine and
Gaza will be discussed, as well as approaches to risk management for AI-enabled
military systems.”

The Boston Global Forum will honor Dr. Alondra Nelson with the 2024 World Leader in
AI World Society Award. Nelson is the former Deputy Assistant to President Joe Biden,
former Acting Director of the White House Office of Science and Technology Policy, and
Harold F. Linder Professor at the Institute for Advanced Study. “During her tenure at the
White House Office of Science and Technology Policy (OSTP), Dr. Nelson
spearheaded the development of the “Blueprint for an AI Bill of Rights,” which was
incorporated into both President Biden’s historic executive order on artificial
intelligence… The ceremony honoring Dr. Nelson will take place at 1:00 pm on April 30,
2024 at Harvard University Loeb House. The event will feature a keynote address,
“Governing the Future: AI, Public Policy and Democracy,” by Dr. Nelson.”
ABOUT: This Group on Human-Centered AI, run by Ben Shneiderman, sends
occasional notes devoted to ensuring human control while increasing the level of
automation. The goal is to design AI-infused supertools that amplify, augment,
empower, and enhance human performance. The Group, now with 3500+ members, is
meant to carry forward the ideas in Ben’s book on Human-Centered AI.
