
AIIM
Comments on AI Accountability
The key tenets of AIIM’s position are:
1. Not all AI is equal and public policy should reflect the
different levels of risk.
Unlike generative AI, some AI is simple, easy to comprehend,
and can be audited to determine how decisions were made
by the technology. Regulators should classify AI into different
categories and establish policy accordingly.
2. Need a flexible, universal framework.
Stakeholders need a framework to better understand their
obligations and ensure compliance. A framework would also
encourage further innovation and adoption of AI.
3. Accuracy is key to advancing AI accountability.
“Trustworthiness” of AI output is unattainable. Accuracy is
a more worthwhile and plausible ambition. It establishes
credibility, currency of the information, completeness, and
chain of control.
4. Transparency will ensure accountability.
AIIM supports the principle in the U.S. Administration's AI Bill of Rights that consumers must know when, how, and why AI is being used.
5. Responsibility is shared.
The developers and the organizations who use AI share responsibility and liability for AI output.
6. Mandatory auditing of some AI may be implausible.
Narrow AI tools are auditable and largely defensible. It's important to recognize the difficulty of auditing generative AI tools, which makes mandatory auditing difficult and potentially impossible.
7. The volume and quality of AI output should influence regulations.
When considering recordkeeping obligations, regulators should keep in mind the enormous volume of data AI can produce and that the data quality may be subpar and not worth retaining.

Why AIIM Responded to U.S. NTIA's Request for Comment.

Governments around the world have started the arduous process of developing regulations and standards for artificial intelligence. In June 2023, AIIM formally responded to a request for comment from the U.S. National Telecommunications and Information Administration (NTIA) on AI accountability. According to the NTIA website, more than 1,400 responses were submitted.

Information management is not an insular profession; it is most successful in an organization when it is focused outward on the needs of stakeholders and business outcomes. As such, AIIM's leadership strongly believes that we must take a strong, active stance on how AI tools use and produce information.

AIIM believes it's vital that regulators develop flexible and practical guardrails for how information is treated during AI development and use. Guardrails will empower innovation, boost adoption, and ensure accountability.

The reality is the pace of change in AI development has
exceeded the pace at which regulations and standards are being
developed. And as AI adoption and use increases within our own
organizations, it is our responsibility as information management
professionals to protect our organizations and stakeholders.
What follows is a complimentary version of AIIM’s letter to NTIA;
we encourage information management professionals to use the
letter as a tool to help guide conversations, decisions, and policy
about using AI in their own organizations.

AIIM thanks the authors and editors of its letter to NTIA:


Authors.
Jed Cawthorne, MBA, CIP, IG, Principal Evangelist, Shinydocs Corporation.
Tori Miller Liu, MBA, FASAE, CAE, President & CEO, AIIM.
Jennifer Ortega, Ulman Public Policy.
Alan Pelz-Sharpe, Founder, Deep Analysis.
Editors.
Ron Cameron, CEO, KnowledgeLake.
Jason Cassidy, CEO, Shinydocs Corporation.
Rikkert Engels, CEO and Founder, Xillio.
Karen Hobert, Future of Work Thought Leadership & Research, Cisco.
Kramer Reeves, Executive Vice President, Work-Relay.

AIIM looks forward to continuing to participate in regulatory conversations about information management in the age of AI.

AIIM is the world’s leading association dedicated to the
information management industry. Information management
practitioners focus on the collection, processing, storage, security,
retention, and accessibility of information in an organization.
Information managers are also responsible for the accuracy
and transparency of information. Since 1944, AIIM has been
on a mission to transform the way organizations manage their
information, ultimately improving their performance. With
over 66,000 community members representing various fields,
including IT, records management, and knowledge management,
AIIM fosters a vibrant community of information management
leaders. Through independent research, comprehensive training,
and professional certification programs, AIIM empowers these
professionals to enhance their skills and better serve their
organizations. Additionally, AIIM serves as a hub for innovation,
connecting intelligent information management solutions providers
with practitioners and driving advancements in the field.
AI Accountability Policy Request for Comment, 88 FR 22433 (RIN: 0660-XC057).

The Association for Intelligent Information Management ("AIIM") writes in response to the National Telecommunications and Information Administration's ("NTIA") request for comment on AI accountability policy ("RFC"), located at 88 FR 22433, in order to provide AIIM organizations' insights and expertise in this critical and rapidly developing technology. Artificial intelligence ("AI") technology is altering our economy in profound ways, but it is essential that the U.S. government establish the principles and guidelines needed to ensure accountability among the users and providers of these systems. As AIIM's comments will explain, our economy urgently needs a flexible, universal framework at the federal level to boost adoption of and innovation in AI while ensuring accountability among its users and developers.

It is important to recognize the seemingly limitless potential of AI. In business operations specifically, it can enable organizations to perform tasks that simply could not be done without AI. Microsoft, for example, recently said over 2 billion files are added to its SharePoint, OneDrive, and Teams systems every single day.1 It would be impossible for humans to manage this volume of data without AI. The technology has enabled companies to expand their operations beyond what was physically possible only a few years ago. AI has the potential to radically alter the economy for the better, but the key to allowing the technology to flourish without putting the consumer and the general public at risk will be to put guardrails in place to ensure the technology is appropriately developed and used. Without these guardrails, AI could cease being a force for good and could instead have devastating consequences for the public, the economy, and our culture. The longer we wait to act, the more likely these potential negative outcomes could come to fruition.

1 Shesha Mani, "New era in content management and security in SharePoint, OneDrive, and Teams," Microsoft 365 Blog (May 2, 2023).
Not All AI Is Equal, and Public Policy Should Reflect the Different Levels of Risk.

The RFC uses the term AI generically to encompass all artificial intelligence. NTIA does not acknowledge the significant differences in the types of AI that exist today and seems to imply all AI comes with the same level of risk. This is not the case. Not all AI is created equally or should be given the same degree of attention – and thereby regulated to the same extent – by policymakers. Different types of AI exist and should be treated according to the level of risk they pose to the public or the consumer.

Some AI is simple, easy to comprehend, and can be audited to determine how decisions were made by the technology. These systems, herein referred to as Narrow AI, do not run massive neural networks, and humans have control over what and how the AI learns (i.e., supervised learning). They apply specified or defined rules that have little to no risk associated with them. Examples include document or email readers that can scan the information to either validate the text or sort the item appropriately, technology that can identify and protect sensitive personally identifiable information, or tools that can automate non-sensitive administrative tasks, such as back-office activities or supply chain work. This type of technology has been around for years and simply does not pose a significant threat to consumers or the general public, as both its development processes and the "training sets" of data used to train it can be audited.

Generative AI, on the other hand, like ChatGPT, could result in significantly more damage or detrimental harm if used inappropriately or trained on large datasets that produce harmful or injurious patterns. These systems, often referred to as "black-box AI," have internal processes and decision-making mechanisms that cannot be determined or audited, making it difficult to hold the system and the entity using it accountable for the decisions made. Moreover, developers of Generative AI are not being transparent about what data was used to train their systems, leaving even more uncertainty. Likewise, the output of Generative AI could be harmful in and of itself (e.g., biased, misleading, incorrect, made up, plagiarized), regardless of intent. Consider the lawyer who used ChatGPT to write a legal brief that, while authoritative-sounding, cited cases that never existed. The lawyer used the output innocently, if not naively. Generative AI is trained to always provide the best-sounding answer; it is not trained to give honest or real answers.

Narrow AI should not be regulated in the same manner as Generative AI. The limited risk Narrow AI poses means the technology, the entities who use it, and the general public would benefit most from a framework that provides guidance on the principles critical to ensuring the AI is accountable and the entity's use of the AI abides by appropriate guardrails.

Such a framework would help encourage entities to adopt the
technology and help them understand what to look for when
trying to obtain accountable and responsible AI. Generative AI,
however, deserves increased scrutiny and more stringent rules
over its use to ensure the higher risk and dangers associated
with the technology are fully addressed. A more complex and
comprehensive policy approach is needed, including limits on
how entities can use such systems.
This approach – classifying AI into different categories and establishing policy accordingly – aligns with the European Union's AI Act, which is currently working its way through the E.U.'s legislative process. While AIIM is neither indicating its support for this legislation nor advocating for the U.S. government to adopt similar policy, the premise is commendable. The E.U. has recognized that different kinds of AI come with very different risk levels to the public and the entities or individuals using the technology, and that risk should result in different levels of caution and attention. NTIA would be wise to recognize those differences as well.
For the remainder of our comments, we will focus on Narrow AI and the priorities that any federal agency looking to regulate in this space should address.

The Stakeholder Community Urgently Needs a Flexible, Universal Framework at the Federal Level to Boost Adoption of Narrow AI and Ensure Accountability.

Interested stakeholders need a framework to better understand their obligations and ensure compliance with acceptable principles and best practices when utilizing Narrow AI. The current lack of guidance has chilled innovation and AI adoption. Organizations are unwilling to integrate available products and technology out of fear of the very real risk of unintended harm, loss, or liability from the use of AI or its resulting outputs. They are reluctant to implement new technology when they do not know their liabilities, don't know if or how they will be audited or who will be auditing them, and are unclear about who may have access to their data, among other things. This uncertainty around such significant questions leaves entities unwilling to adopt the technology, even though they are fully aware of the potential benefits of using AI in operations and decision-making.

This chilling effect is not hypothetical, and examples abound. For instance, insurance companies have had AI for years that can analyze images of crashes or other incidents to help make determinations about fault or awards, but companies have been afraid to use it out of fear of the potential liability if an AI-made decision is contested.

Entities' unwillingness to adopt and take full advantage of AI technology is also hurting our ability to compete at the global level. According to a report commissioned by IBM,2 the U.S. is behind other countries in adopting AI. Only 25% of U.S. companies have adopted and deployed AI technology, while the global average is 35%. Meanwhile, China has an adoption rate of 58%. Our lack of a public policy strategy or legal framework has resulted in U.S. companies not keeping pace with the expansion and availability of the technology. Our reluctance to adopt AI is leaving us behind in the race, and we're quickly losing the ability to perfect, utilize, and set guardrails for this rapidly developing, ubiquitous technology.

In order to combat this uncertainty, a framework is desperately needed to provide entities and the general public with clear guidelines on how to procure and use AI in a responsible manner. It is critical that the government not attempt to implement regulations but instead focus on a framework that outlines key principles and characteristics that will aid entities in identifying appropriate AI. Regulations, however well-intentioned or developed, cannot keep up with rapidly evolving technology like AI. We are seeing progress in AI technology on a day-by-day basis. This pace is simply too fast for the government, and regulations would be obsolete even before going into effect.

2 IBM and Morning Consult, "IBM Global AI Adoption Index 2022," May 2022.
Because of this pace of change, AIIM urges the government to conduct regular and ongoing assessments of the framework and implement updates whenever necessary. This will ensure the framework does not become outdated or irrelevant in the face of advancements in the technology or changes in its potential uses and risks.

AIIM strongly advises that any framework be developed and implemented at the federal level. If states are allowed to regulate in their own manner, entities will be forced to abide by and monitor inevitable policy changes on a state-by-state basis. This patchwork would be impossible to implement, and compliance would be unmanageable and too costly. The sheer volume of data impacted by these regulations would make attempting to alter operations to accommodate changes at each state border unfeasible. We're already seeing the consequences of this approach with regard to data privacy. Companies are attempting to understand and monitor state laws and regulations, but it's a herculean task to keep up and adapt operations to accommodate the different approaches, especially as they become obsolete as the technology adapts beyond what the policies were intended to regulate.

The framework must be universally applied to every sector and industry. Attempting to design a framework for each sector or industry would be overly complicated and unnecessarily burdensome. It may also result in policy that does not easily adapt to changing technology or circumstances, or to the growth of the technology into new areas of the economy. The more complicated the approach is, the more likely something will go wrong. Instead, AIIM is advocating for a framework that can be easily applied to the broad array of entities and industries that want to use and would benefit from AI technology. This approach should focus on key principles that can guide entities in how to determine which AI services or products are responsible and how to use the AI in a conscientious manner. These principles should include, for example, answers to who can use the AI system, what the system or data can be used for, and whether and when entities should declare they are using AI, so consumers are fully aware that AI was used in a decision that impacts them.

The framework must be flexible. As we related above, the government cannot keep up with AI technology and its rapid advancements. Over the past five years, AI has exploded. The technology has advanced exponentially, and deployment has accelerated to the point that AI is pervasive throughout our economy, our culture, and our day-to-day lives. The government simply cannot keep pace, and any rigid public policy that includes strict requirements would be obsolete in a matter of months, if not weeks or even days. Flexibility must, therefore, be the priority for any public policy initiative, and in order to achieve this flexibility, the government should focus on establishing overarching principles that allow the technology to progress.

The National Institute of Standards and Technology's ("NIST") Cybersecurity Framework is a good model to follow, but AIIM cautions that this framework should not be adopted whole cloth by other agencies, which is unfortunately what we are seeing occur with NIST's framework. The framework is good in theory and NIST has applied it appropriately; however, when other agencies attempt to require compliance with the framework, it stops being a guidepost for the regulated community and, in essence, becomes a rigid regulation that is no longer a living document able to grow and adapt to new technology or advancements.

AIIM urges NTIA to establish a framework based on the above listed factors – universality and flexibility at the federal level – in order to provide the basic guidance to the regulated community that is so desperately needed today. This framework would provide the surety entities need to adopt the technology and guarantee accountability among the entities who want to procure, utilize, and benefit from AI.3

3 This comment does not reflect on the needs of accountability in AI development, but importantly, if the users of AI are aware of and recognize the basic principles or tenets of responsible AI, they will only procure and use AI that meets the criteria. This will in turn force the market to develop responsible AI. AI developers will be strongly incentivized to meet the market's demands, further promoting responsible AI.
Accuracy Is Key to Advancing AI Accountability.

NTIA's focus for any policy pursuits moving forward should be on the accuracy of AI technology, not on obtaining the potentially unattainable "trustworthiness." Attempting to achieve this abstract goal would be a nearly impossible task and would only result in the government getting bogged down in a quagmire. Focusing on trustworthiness would be a distraction from the larger, more critical quest of accountability – ensuring the government can hold entities responsible for the AI they use and how they use it.

Accuracy, on the other hand, is a more worthwhile and plausible ambition. We as a society want to know critical facts when engaging with AI: Is the information I've received and/or the source from which it was obtained credible? Is this the most up-to-date information? Is the information complete, or were critical details ignored, overlooked, or omitted? Answering these questions will provide users of AI with more certainty and awareness and enable users to hold entities accountable for using AI that does not provide accurate, credible outputs.

Transparency Will Help Ensure Accountability.

Transparency will be critical to ensuring accountability in AI use. For the public to fully accept and trust AI, consumers must know when, how, and why AI is being used. This was an important tenet of the White House's Blueprint for an AI Bill of Rights, and AIIM supports the principle.

In order to attain this transparency, NTIA should consider including declaration requirements to inform consumers when AI is used to make decisions and how those decisions were made. Knowing AI is used and how it is being used will be key in promoting AI, ensuring accountability, and achieving adoption on the scale AI is capable of obtaining.

Another potential solution would be to develop a rating system that determines the reliability of the information provided by an AI system. A confidence rating system, for example, as advocated by Harvey Spencer, President and Founder of Factorum LLC, in a recent blog post,4 would explain the reliability of the sources or algorithms used. Importantly, Narrow AI already has such systems built in, but Generative AI does not. While this would likely be impossible at this point in time, the government should consider devoting funding to research how to build such a rating system into all AI systems.

Transparency and accountability would give the public and entities a level of understanding of how AI is impacting them and their interests. That transparency will spur greater uptake of the technology both among the public and the entities that want to use AI to help streamline or improve their operations, products, and services.

4 Harvey Spencer, LinkedIn blog post (May 2023).
Specific Concerns and Challenges of AI Accountability Policy Must Be Acknowledged.

NTIA recognized in the RFC that there are numerous concerns about AI and challenges in establishing accountability in its use. AIIM applauds NTIA for recognizing the difficulties of regulating in this space. All of these concerns and obstacles must be dealt with cautiously and thoughtfully. Fortunately, many of these difficulties can be resolved – or at least alleviated – by pursuing a federal, flexible, universal framework, as AIIM is calling for in this comment. That said, there are specific areas flagged by NTIA that AIIM believes are worth discussing in more detail and providing specific recommendations.

Liability.
Questions were raised in the RFC regarding who should be held liable if AI is misused. AIIM believes both the developers of AI systems and the entity using those systems should be responsible for the outcomes they produce. Importantly, this should apply to both the organization and its employees. The purchaser of or entity using the tool must be responsible for the manner in which they use the technology, and when purchasing that product or service, they should recognize and be held accountable for that risk. If an entity uses Generative AI and other high-risk products or services and cannot identify or explain the reasons behind the decision the AI system has made, that liability is and should be on the entity. Additionally, the entity is oftentimes the only one who can audit both the data going in and the results coming out of the AI system, making it uniquely capable of ensuring the AI system is meeting appropriate standards.

Auditing.
There are several practical realities that would make mandatory auditing difficult. Primarily, there are not enough auditors available in the U.S. to mandate nationwide auditing for every step of the AI lifecycle or value chain. As long as the supply of individuals capable of auditing AI systems is insufficient, the ability of entities to perform such assessments will inevitably be limited.

Only Narrow AI can be audited for what goes into an AI system and what comes out. Generative AI, on the other hand, cannot be audited for how decisions are made or what happens between the input and output stages. This shortcoming must be recognized in any public policy initiative moving forward.

Recordkeeping.
Any recordkeeping obligations must recognize the enormous volume of data that AI can produce. Importantly, much of the information is junk and not actually used by the entity in any way – whether meaningfully or otherwise. NTIA and the government should be cognizant of that when considering any recordkeeping obligations.

AIIM acknowledges that there could be serious consequences if AI is misused or allowed to develop with no guardrails in place, but that is why it is critical we continue to design, develop, promote, and adopt accountable AI based on sound principles that prioritize the public wellbeing.

Global Competitiveness Must Be Considered.

The U.S. needs to act and be a leader on the issue of AI accountability; the U.S. does not want to cede its role in this space and allow other countries to set the ground rules for AI. The U.S. should be the one to determine the best practices and principles governing AI development and use. The consequences of not doing so and not establishing strong protections from the outset could be – and likely would be – devastating.

AIIM wants to take this opportunity to caution against any pause of AI development as was recently advocated in a letter circulated by the Future of Life Institute.5 The letter called on AI labs to pause training of AI systems for at least six months, but this pause would be misguided. It would hurt the development, training, and deployment of AI, hindering innovation and adoption. Such a pause would also hurt our global competitiveness. Other countries are not pausing and will see our pause as an opportunity to step in as leaders in AI. We will be left behind, hurting our place on the world stage and, perhaps more importantly, potentially exacerbating all the risks of AI that are recognized in the letter.

Conclusion.

AI is a rapidly developing technology that has the potential to change the world. If developed properly, with its risks recognized and basic principles in place to mitigate those risks, AI can be a force for good. It can better our lives, our economy, and our world.

Government's role should be to incentivize adoption of accountable, responsible AI. A flexible, universal framework applied at the federal level would provide entities with the information and surety they need to adopt the technology and fully incorporate it into their operations soundly and safely. NTIA should immediately take the first steps in developing the guidance that would allow the technology to safely advance and flourish in the U.S. and set the ground rules for responsible AI development and adoption.

AIIM thanks NTIA for providing the opportunity to weigh in on this critical issue. We look forward to working with NTIA moving forward.

Tori Miller Liu, MBA, FASAE, CAE.
President & CEO, AIIM.

5 Future of Life Institute, "Pause Giant AI Experiments: An Open Letter," March 22, 2023.
AIIM helps organizations
improve their performance
by transforming the way they
manage their information.

© 2023 AIIM.
+1 301 587 8202
hello@aiim.org
www.aiim.org
