
Decision Tree for the Responsible Application of Artificial Intelligence (v1.0)

BEFORE YOU BEGIN: This decision tree is intended to be a guide that assists the user and their organization in structuring the decision-making process on whether to develop or deploy AI solutions. Users should recognize that the model, while designed to be generally applicable, will not capture the nuances of every possible scenario. In any particular context, the answers may be difficult to judge, and despite the binary nature of the decision nodes, many questions may not have clear yes/no answers in real life. In such cases, collect as much information as possible, and use caution if deciding to proceed. The answers to many of these questions may be difficult to judge at the outset of a project, so this chart should be consulted periodically throughout the development and deployment of an AI solution. Additionally, this tree is meant for AI solutions, but many of its core principles also apply to non-AI solutions.

Inclusive stakeholder engagement is central to this framework. Before applying the tree, consider who in your organization - or outside of it - would be best qualified to answer these questions. In most cases, the necessary skill set will be spread across multiple individuals. Always consider whether those providing responses are trustworthy, independent, and competent to do so. Assess the incentive structures that may be present for those providing input into decision-making, and at each step along the way, ask who would benefit and who might be harmed by the actions to be taken.

** Pages 5 & 6 contain the legend (color guide), definitions (of bolded italicized terms), and additional resources to accompany this decision tree. Familiarize yourself with those pages before you begin for optimal use.

START HERE: "I am looking for a technological solution to my problem."

Ask: Who are the stakeholders in your problem? How will they be included in this process? (See Stakeholder Engagement on page 5.)
• If you don't know: Pause. A comprehensive and inclusive stakeholder participation strategy is critical to developing responsible solutions. Review our Stakeholder Engagement guide and complete this step before proceeding.
• Once you have identified the stakeholders and, with their guidance, developed an inclusive participation strategy: proceed to Step 1.

Step 1: Consider Your Solutions

Develop a list of possible solutions to your problem along with each solution's benefits and limitations, including the relative risk of using AI in this application. (See AI Risk Management Resources on page 5.)
• Once you have a clear list of solutions and would like to evaluate an AI solution: continue.

Ask: How would artificial intelligence be superior to another approach?
• If you don't know: Return to Step 1. Consider other potential solutions and whether they may be superior to an AI system.
• Once you have identified key metrics that indicate AI is superior to other solutions: continue.

Ask: Does this tool already exist?
• NO (you will develop a new tool): Ask: Will this tool rely on training data? If NO, go to Step 3; if YES, go to Step 2.
• YES (the tool already exists): Ask: Does this tool rely on training data? If NO, go to Step 3; if YES, go to Step 2.



Step 2: Consider the Training Data

Work through the questions below. The first form of each question applies if you are developing a new tool; the second applies if the tool already exists. If the answer to any question is No/Unsure, or you are not confident in it, pause the project until the issue can be resolved.

Ask: What kind of training data will be required? Can they be sufficiently representative of the use case to solve the problem they intend to address? How do you know? Are you confident in this?
For an existing tool, ask: What kind of training data were used to train the algorithm? Are the data applicable to my situation? How do you know? Are you confident in this?
• Proceed only if you are very confident.

Ask: Can the training data be gathered in a way that respects ethics and human rights? How will you know? Should data depict victimization, can the rights and dignity of the victims be protected? (See Fundamental Rights and Harms from Automated Systems on page 6.)
For an existing tool, ask: Were the training data gathered in a way that respects ethics and human rights? Can you confirm this? Should data depict victimization, can the rights and dignity of the victims be protected?
• Proceed only if robust safeguards have been designed (or, for an existing tool, were in place).

Ask: Will the training data and their outputs be handled in a way that mitigates risks to participants? Consider data privacy.
For an existing tool, ask: Are the training data and their outputs being handled in a way that mitigates risks to participants? Consider data privacy.
• Proceed only if data privacy and security safeguards are (or were) in place.

Ask: Are the training data verifiable? Based on the answer, how confident can one be of their accuracy?
• Proceed only if you are very confident; pause if you are not so confident.

Ask: Will the applicability of the training data to the problem change over time, or according to other variables like geography? Can these changes be addressed?
For an existing tool, ask: When and where were the training data collected? Are they still relevant to the problem to be solved? Will their relevance change over time, or according to other variables like geography?
• Proceed only if there is a plan to account for this.

Develop procedures to periodically review the relevance and applicability of the training data, then go to Step 3.
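The instruction above calls for procedures to periodically review the relevance and applicability of the training data. As a purely illustrative sketch of one such procedure, the Python snippet below compares the distribution of selected features in the original training data against recent data from the deployment context; the file names, feature names, and significance threshold are hypothetical placeholders, and any real review should use checks chosen with your stakeholders and domain experts.

# Illustrative sketch only: periodically compare the original training data with
# recent data from the deployment context to check whether the training data
# remain applicable. File names, column names, and the threshold are hypothetical.
import pandas as pd
from scipy.stats import ks_2samp

def review_training_data_relevance(training_csv, recent_csv, features, alpha=0.01):
    """Return the features whose recent distribution differs from the training data."""
    train = pd.read_csv(training_csv)
    recent = pd.read_csv(recent_csv)
    drifted = {}
    for col in features:
        # Two-sample Kolmogorov-Smirnov test between training-era and recent values.
        result = ks_2samp(train[col].dropna(), recent[col].dropna())
        if result.pvalue < alpha:
            drifted[col] = result.pvalue
    return drifted

if __name__ == "__main__":
    flagged = review_training_data_relevance(
        "training_data.csv",            # hypothetical original training data
        "recent_production_data.csv",   # hypothetical recent data from deployment
        features=["age", "income"],     # hypothetical numeric features to check
    )
    if flagged:
        print("Possible drift detected; review training data relevance:", flagged)
    else:
        print("No distributional drift detected in the checked features.")

A statistical check like this only signals that the data may no longer reflect the context in which the tool is used; it does not replace the human review and stakeholder consultation that the tree calls for.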



Step 3: Consider the Tool Itself

(See Scientific Resources on page 5.)

If you are developing a new tool, choose a developer/vendor to create your AI tool; if the tool already exists, contact the developer/vendor of the AI tool. Then work through the questions below (the first form of each question applies to a tool you are developing, the second to an existing tool). If the answer to any of the first three questions is NO, pause the project until extensive testing can be performed in an applicable context with safeguards in place for human subjects; if the issue(s) cannot be resolved, consider other solutions and return to Step 1.

Ask: Is it possible to develop a set of extensive tests in a context that is applicable to the intended use case?
For an existing tool, ask: Has this tool been extensively tested in a context that is applicable to the intended use case? (Consider prior uses.)

Ask: Does the test data collection process have safeguards in place for human subjects?
For an existing tool, ask: Did the test data collection process have safeguards in place for human subjects?

Ask: Can the tests be carried out in a way that respects the rights of any human subjects? (See Fundamental Rights on page 6.)
For an existing tool, ask: Were the tests carried out in a way that respects the rights of any human subjects?

If all answers are YES: for a new tool, proceed to develop the tool and conduct extensive tests.

Carefully scrutinize the results of the testing. Did they reveal any anomalous behaviors or outcomes that would be problematic if used as the basis for real-world decisions? (See Harms from Automated Systems on page 6.)
• If testing revealed no anomalous or problematic behavior: proceed to Step 4.
• If YES: ask whether you can explain what caused the anomalous outcomes and whether the tool or its training data can be modified to correct such behavior. If the anomalous results can be explained or addressed, proceed to Step 4; if they cannot be explained or addressed, consider other solutions and return to Step 1.



Step 4: Consider the Application

Attempt to anticipate future risk. Consider the various ways things could go wrong, legally, ethically, and morally. Research analogous situations where automated decision making has failed. (See Harms from Automated Systems on page 6.)
• Proceed once stakeholders have been consulted, are aware of, and approve of the risks.

Ask: What are the possible disparate impacts of your tool's application? Develop context-specific safeguards that address stakeholder concerns. Pay particular attention to vulnerable populations. (See Disparate Impacts on page 6.)
• If unsure: Pause. Work with stakeholders to understand the full impact of your tool's application. Develop context-specific safeguards for all risks before proceeding.
• Proceed once safeguards are in place to address all stakeholder concerns.
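Where the tool's outputs are decisions about individuals, one concrete starting point for examining possible disparate impacts is to compare rates of favorable outcomes across groups. The sketch below is only an illustration under assumed inputs: the log file, column names, and the commonly cited four-fifths (0.8) threshold are hypothetical placeholders, and the appropriate measures and groups should be chosen together with your stakeholders.

# Illustrative sketch only: compare rates of favorable outcomes across groups as a
# rough screen for disparate impact. The input file, column names, and the
# four-fifths (0.8) threshold are hypothetical placeholders.
import pandas as pd

def selection_rates(df, group_col, outcome_col):
    """Share of favorable outcomes (outcome_col == 1) within each group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratios(rates):
    """Each group's selection rate divided by the most favored group's rate."""
    return rates / rates.max()

if __name__ == "__main__":
    decisions = pd.read_csv("tool_decisions.csv")  # hypothetical log of the tool's decisions
    rates = selection_rates(decisions, group_col="demographic_group", outcome_col="favorable_outcome")
    ratios = disparate_impact_ratios(rates)
    print(ratios)
    flagged = ratios[ratios < 0.8]  # groups falling below the four-fifths rule of thumb
    if not flagged.empty:
        print("Potential disparate impact for:", list(flagged.index))

A single ratio cannot capture every form of disparate impact; it is an input to the stakeholder conversation described above, not a substitute for it.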
Ask: Could the application involve emergency limitations on fundamental rights? (See Fundamental Rights on page 6.)
• If, after extensive collaboration with stakeholders, you have concluded that no emergency limitations are possible in any foreseeable scenario: go to the final question below.
• If emergency limitations are possible, work through the following questions. Whenever an answer falls short, pause and work with stakeholders to adjust your strategy to modify or eliminate the limitations to fundamental rights before proceeding:
  - Ask: Are these limitations necessary? Can the goal be reached in another way? Continue only if the limitations cannot be avoided; if there are other options, pursue an approach that avoids the limitations.
  - Ask: Are stakeholders aware of and consenting to these limitations? Continue only if they are; pause if stakeholders are not aware or do not consent.
  - Ask: Are these limitations proportionate to the risk? Continue only if the limitations are proportionate.
  - Ask: Can these limitations be constrained to the shortest possible time frame? Is that time frame bounded (i.e., not open-ended)? Continue only if the limitations can be constrained in time.
  - Ask: Are oversight mechanisms in place to govern these limitations? Do stakeholders have a voice in these mechanisms? Continue only if stakeholders have oversight; pause if there are no oversight mechanisms or stakeholders have no oversight.

Ask: In light of the above steps, do the benefits of this application likely outweigh the risks?
• If NO: Consider other solutions to this problem. Return to Step 1.
• If YES: Move forward with implementation. Regularly monitor outcomes post-deployment to meet effectiveness, compliance, and equity goals. (See Post-Deployment Resources on page 6.) Note that post-deployment guidance is beyond the scope of this decision tree; however, monitoring and auditing are crucial to mitigating unintended harms in the future. See page 6 for preliminary resources on post-deployment best practices.
End of Decision Tree. Resources follow.


RESOURCES accompanying the Decision Tree for the Responsible Application of Artificial Intelligence (v1.0)

Decision Tree Legend
• # - You have reached a new step in the tree
• Refer to the Resources for additional information
• You should be consulting your stakeholders
• Stop and follow the instructions before proceeding
• Be mindful of these recommendations
• You should follow the directions in the box

Stakeholder Engagement
Stakeholders are individuals and/or groups that are impacted by the problem you are hoping to address. These same stakeholders are likely to also be affected by the AI solution you are evaluating. The term "stakeholders" does not refer to a homogenous group. Stakeholders can include multiple individuals/groups with unique (and possibly conflicting) values and interests.

While we use the term "stakeholders" throughout the decision tree, different steps in the tree may involve different sets of stakeholders. When you are identifying the stakeholders, be sure to include all possible impacted individuals and/or groups and continuously review your list to reflect changes across time.

Partnership on AI (PAI)'s white paper "Making AI Inclusive" provides four guiding principles for ethical stakeholder engagement in AI/ML development:
1. All participation is a form of labor that should be recognized
2. Stakeholder engagement must address inherent power asymmetries
3. Inclusion and participation can be integrated across all stages of the development lifecycle
4. Inclusion and participation must be integrated into the application of other responsible AI principles

Additionally, PAI offers three recommendations aligned with these principles:
1. Allocate time and resources to promote inclusive development
2. Adopt inclusive strategies before development begins
3. Train towards an integrated understanding of ethics

Inspired by these principles and recommendations, this decision tree places inclusive stakeholder engagement at the center of the responsible AI framework. The yellow boxes refer to (suggested) points in the tree that call for stakeholder engagement. Once you have identified all the stakeholders, work together with them to create an inclusive stakeholder engagement strategy centered around stakeholder preference (e.g., consider when they would - or would not - like to be consulted, via which channels of communication, with what kind of compensation, etc.).

For additional guidance, read the full PAI paper here.

AI Risk Management Resources
The National Institute of Standards and Technology (NIST) produced the AI Risk Management Framework (AI RMF 1.0) as a resource for organizations designing, developing, deploying, or using AI systems, to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The RMF describes seven traits of trustworthy AI: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with their harmful biases managed.

The RMF outlines four functions for managing risk throughout the lifecycle of AI: mapping the context of the AI system and identifying associated risks; measuring those risks; managing risks through prioritization and regular monitoring; and governing to ensure compliance, evaluation, and leadership to cultivate a culture of risk management within an organization and decrease the likelihood of negative impacts.

For additional guidance, see the full NIST Framework here.

The NIST AI RMF 1.0 acknowledges that risk cannot be eliminated, necessitating risk prioritization. AI systems that directly interact with or impact humans, or those which were trained on large datasets containing sensitive or protected information, might warrant higher initial prioritization for risk assessment. Further consideration of risk prioritization can be found in the European Union's recent proposal on the regulation of AI, which introduces a tiered evaluation of risks:
• Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods, and rights of people.
• High risk: Includes, but is not limited to, technologies involving critical infrastructures, educational or vocational training, safety components of products, employment, management of workers and access to self-employment, essential services, law enforcement that may interfere with fundamental rights, migration, asylum and border control movement, and administration of justice and democratic processes.
• Limited risk: AI systems which should comply with minimal transparency requirements that would allow users to make informed decisions.
• Minimal risk: Systems with high transparency and minimal threats to people, such as AI-enabled spam filters.

For additional information, read the full policy here.

Definitions
The AAAS "Artificial Intelligence and the Courts: Materials for Judges" includes a Foundational Issues and Glossary that provides definitions for key terms (bolded-italicized) used in this decision tree:

Accuracy: The ability to produce a correct or true value relative to a defined parameter.

Artificial Intelligence (AI): No widely agreed upon definition. AI is both a concept and a category of technology tools that are powered by advanced mathematical models and data and that can augment, replicate, or improve upon the type of human cognitive task that otherwise requires thinking, beyond calculating.

Test Data: The data used to evaluate how well a trained model is performing once it is built and before it is released.

Training Data: The historical data used to develop and teach an AI model the logic and pattern recognition to generate desired predictions in the future.

For additional guidance, read the full paper here.
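As a small, hedged illustration of how the Training Data, Test Data, and Accuracy definitions above fit together, the sketch below trains a model on one portion of a dataset and measures accuracy on a held-out test portion; the synthetic data and the choice of a logistic regression model are assumptions made only for this example.

# Illustrative sketch only: training data teach the model, test data are held out
# to evaluate it, and accuracy is measured on the held-out test data. The synthetic
# dataset and the logistic regression model are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic example data standing in for real, responsibly collected data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Training data (80%) versus test data (20%) held back for evaluation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy: the proportion of correct predictions on the held-out test data.
print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))

Note that a good accuracy score on held-out test data says nothing about whether the data were gathered ethically or remain representative of your situation; those questions are addressed in Step 2 of the tree.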
Scientific Resources
For general questions, contact AAAS at srj@aaas.org. If you need explicit scientific guidance, reach out to On-Call Scientists at oncall@aaas.org and we will attempt to match you with an expert who can assist in your specific case. Additionally, consider reaching out to your local scientific community or university resources.



Fundamental Rights
The AAAS Framework for the Responsible Development of AI Research identified four overall guiding principles for evaluating AI in the context of human rights. These are:
• Informed Consent - individuals have autonomy to make their own choices about participation
• Beneficence - AI applications must benefit both the individual and the group
• Nonmaleficence - AI applications must not harm participants
• Justice - participants must be treated equally, and not be subject to disparate impacts

Additionally, the following rights must always be respected:
• Privacy
• Data Confidentiality
• Non-Discrimination
• Security of the person
• Freedom of Movement
• Freedom from being subjected to experimentation
• The right to enjoy the benefits of science
• Freedom from cruel, inhuman, or degrading treatment

For additional information, see the White House Blueprint for an AI Bill of Rights here.

Disparate Impacts
When evaluating AI applications in a human rights context, one should pay special attention to actions that may disproportionately impact marginalized and/or vulnerable populations.

In particular, consider the scale of the impact and the timing of the implementation. For example, are the tools deployed extensively? Are they already in use? Will they be soon? These factors may influence the presence or degree of disparate impact.

The following list of traits, although not exhaustive, provides examples of groups that are commonly subject to disproportionate impacts:
• Race and ethnicity
• Disability status
• Geographic location
• Immigration status
• Contact with the criminal justice system
• Socioeconomic status
• Dependence on safety nets
• Age
• Sexual orientation/Gender identity
• Religion, particularly if a minority

Types of Harm from Automated Systems
The same properties of speed, versatility, and flexibility that make AI a useful tool for automated decision making also have the potential to magnify the harm that such systems may cause when carelessly or improperly developed and deployed. By examining the harms that AI has been known to produce in previous circumstances, you can be better informed about the risks that may be present as you consider deploying AI tools in your own work. Throughout this process, keep in mind the precautionary principle: actions which present an uncertain potential for harm must be accompanied by measures to minimize the threat of that harm unless it can be shown that they present no appreciable risk.

This list outlines some common ways that AI can lead to harm. A more detailed elaboration of this list is available from Microsoft* here.
• Over-reliance on safety features
• Inadequate fail-safes
• Over-reliance on automation**
• Distortion of reality or gaslighting
• Reduced self-esteem/reputation damage
• Addiction/attention hijacking
• Identity theft
• Misattribution
• Economic Exploitation
• Devaluation of Expertise
• Dehumanization
• Public Shaming
• Loss of Liberty
• Loss of Privacy
• Environmental Impact
• Erosion of Social & Democratic Structures
• Discrimination in: Employment, Housing, Insurance/Benefits, Education, Access to Technology, Credit, Access to / Pricing of Goods and Services

**For more on Overreliance on AI, read the Microsoft report here.

Post-Deployment Resources
The resources below, though not exhaustive, are intended to serve as guidance for regular post-deployment monitoring and auditing of AI systems.
• "Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing," Raji et al.
• "AI Auditing Framework," Institute of Internal Auditors
• "AI Audit-Washing and Accountability," German Marshall Fund
• "Using Algorithm Audits to Understand AI," Stanford University Human-Centered AI

Acknowledgements
The AAAS Center for Scientific Responsibility and Justice (CSRJ) would like to thank the following individuals for contributing their knowledge to this project. Without their invaluable insights, the creation of this tool would not have been possible.
• Ronald Arkin, Regents' Professor Emeritus, School of Interactive Computing, Georgia Institute of Technology
• B Cavello, Director of Emerging Technologies, Aspen Institute
• Clarice Chan, Former White House Presidential Innovation Fellow
• Fredy Cumes
• Michelle Ding, Brown University
• Jonathan Drake, AAAS CSRJ
• Scott Edwards, Program Director, Research and Advocacy, Amnesty International
• Joel Ericsen, AAAS CSRJ
• Juan E. Gilbert, Andrew Banks Family Preeminence Endowed Professor & Chair, Computer & Information Science & Engineering Department, Herbert Wertheim College of Engineering, University of Florida
• Danielle Grey-Stewart, University of Oxford, AAAS Center for Scientific Evidence in Public Issues
• Theresa Harris, AAAS CSRJ
• Nick Hesterberg, Executive Director, Environmental Defender Law Center
• Jennifer Kuzma
• Daniela Orozco Ramelli, Senior Forensic Professional, EQUITAS
• Jake Porway
• Kate Stoll, AAAS Center for Scientific Evidence in Public Issues
• Mihaela Vorvoreanu, Director, UX Research & RAI education, Aether, Microsoft

Disclosure
*This decision tree was funded by AAAS through the (AI)2: Artificial Intelligence - Applications/Implications initiative, which is supported by Microsoft. The interpretations and conclusions contained in this document are those of the authors and do not necessarily represent the views of the AAAS Board of Directors, its council and membership, or Microsoft.

