Decision Tree for the Responsible Application of AI (v1.0)
Step 1: Consider Your Solutions

Develop a list of possible solutions to your problem along with each solution's benefits and limitations, including the relative risk of using AI in this application. (See AI Risk Management Resources on page 5.)

Ask: Will this tool rely on training data? (For an existing tool: does it rely on training data?)
- NO: Go to Step 3.
- YES: Go to Step 2.

Step 2: Consider the Training Data

Ask: Are the training data verifiable? Based on the answer, how confident can one be of their accuracy?
- Not so confident: Consider other solutions. Return to Step 1.

Ask: When and where were the training data collected? Are they still relevant to the problem to be solved? Will their applicability change over time, or according to other variables like geography? Can these changes be addressed?
- No/Unsure: Consider other solutions. Return to Step 1.
- There is a plan to account for this: Develop procedures to periodically review the relevance and applicability of the training data, then go to Step 3.

Step 3

Ask: Can the tests be carried out (for an existing tool: were the tests carried out) in a way that respects the rights of any human subjects? (See Fundamental Rights on page 6.)
- NO: Consider other solutions. Return to Step 1.
- YES: Proceed to develop the tool and conduct extensive tests.

Carefully scrutinize the results of the testing. Did they reveal any anomalous behaviors or outcomes that would be problematic if used as the basis for real-world decisions? (See Harms from Automated Systems on page 6.)
- YES, and the anomalous results cannot be explained or addressed: Consider other solutions. Return to Step 1.
- NO, testing revealed no anomalous or problematic behavior: Continue below.

Attempt to anticipate future risk. Consider the various ways things could go wrong, legally, ethically, and morally. Research analogous situations where automated decision making has failed. Continue once stakeholders have been consulted, are aware of, and approve of the risks.

Ask: What are the possible disparate impacts of your tool's application? Develop context-specific safeguards that address stakeholder concerns. Pay particular attention to vulnerable populations. (See Disparate Impacts on page 6.)
- Unsure: Pause. Work with stakeholders to understand the full impact of your tool's application. Develop context-specific safeguards for all risks before proceeding.
- Safeguards are in place to address all stakeholder concerns: Continue below.

Ask: Could the application involve emergency limitations on fundamental rights? (See Fundamental Rights on page 6.)
- Emergency limitations are possible, and the limitations can be constrained in time: Ask: Are oversight mechanisms in place to govern these limitations? Do stakeholders have a voice in these mechanisms?
  - There are no oversight mechanisms, or stakeholders have no oversight: Consider other solutions. Return to Step 1.
  - Stakeholders have oversight: Continue below.

Ask: In light of the above steps, do the benefits of this application likely outweigh the risks?
- NO: Consider other solutions to this problem. Return to Step 1.
- YES: Move forward with implementation. Regularly monitor outcomes post-deployment to meet effectiveness, compliance, and equity goals. (See Post-Deployment Resources on page 6.) Note that post-deployment guidance is beyond the scope of this decision tree; however, monitoring and auditing are crucial to mitigating unintended harms in the future. See page 6 for preliminary resources on post-deployment best practices.
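The decision flow above can be sketched as a simple checklist function. This is an illustrative sketch only: the field names, labels, and the reduction of each judgment call to a boolean are this sketch's own assumptions, not part of the source flowchart, which relies on human deliberation at every branch.

```python
# Illustrative sketch of the decision tree as a checklist.
# All field names and outcome strings are invented for this sketch.
from dataclasses import dataclass

RECONSIDER = "Consider other solutions. Return to Step 1."
PROCEED = "Move forward with implementation; monitor post-deployment."

@dataclass
class Assessment:
    relies_on_training_data: bool
    training_data_verifiable: bool = True
    relevance_plan_in_place: bool = True
    tests_respect_rights: bool = True
    anomalies_unresolved: bool = False
    safeguards_for_disparate_impacts: bool = True
    emergency_limits_possible: bool = False
    stakeholders_have_oversight: bool = True
    benefits_outweigh_risks: bool = True

def decide(a: Assessment) -> str:
    # Step 2 applies only when the tool relies on training data.
    if a.relies_on_training_data:
        if not a.training_data_verifiable or not a.relevance_plan_in_place:
            return RECONSIDER
    # Step 3: tests must respect human subjects' rights, and any
    # anomalous results must be explained or addressed.
    if not a.tests_respect_rights or a.anomalies_unresolved:
        return RECONSIDER
    # Stakeholder safeguards, and oversight of any emergency
    # limitations on fundamental rights.
    if not a.safeguards_for_disparate_impacts:
        return RECONSIDER
    if a.emergency_limits_possible and not a.stakeholders_have_oversight:
        return RECONSIDER
    # Final weighing of benefits against risks.
    return PROCEED if a.benefits_outweigh_risks else RECONSIDER
```

For example, `decide(Assessment(relies_on_training_data=True, training_data_verifiable=False))` returns the "reconsider" outcome, mirroring the "Not so confident" branch of Step 2.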
3. Inclusion and participation can be integrated across all stages of the development lifecycle

4. Inclusion and participation must be integrated into the application of other responsible AI principles

Additionally, PAI offers three recommendations aligned with these principles:

1. Allocate time and resources to promote inclusive development

2. Adopt inclusive strategies before development begins

3. Train towards an integrated understanding of ethics

Inspired by these principles and recommendations, this decision tree places inclusive stakeholder engagement at the center of the responsible AI framework. The yellow boxes refer to (suggested) points in the tree that call for stakeholder engagement. Once you have identified all the stakeholders, work together with them to create an inclusive stakeholder engagement strategy centered around stakeholder preference (e.g., consider when they would, or would not, like to be consulted, via which channels of communication, and with what kind of compensation).

For additional guidance, read the full PAI paper here.

For additional information, read the full policy here.

Definitions

The AAAS "Artificial Intelligence and the Courts: Materials for Judges" includes a Foundational Issues and Glossary section that provides definitions for key terms (bolded-italicized) used in this decision tree:

Accuracy: The ability to produce a correct or true value relative to a defined parameter.

Artificial Intelligence (AI): No widely agreed upon definition. AI is both a concept and a category of technology tools that are powered by advanced mathematical models and data and that can augment, replicate, or improve upon the type of human cognitive task that otherwise requires thinking, beyond calculating.

Test Data: The data used to evaluate how well a trained model is performing once it is built and before it is released.

Training Data: The historical data used to develop and teach an AI model the logic and pattern recognition to generate desired predictions in the future.

For additional guidance, read the full paper here.

Scientific Resources

For general questions, contact AAAS at srj@aaas.org. If you need explicit scientific guidance, reach out to On-Call Scientists at oncall@aaas.org and we will attempt to match you with an expert who can assist in your specific case. Additionally, consider reaching out to your local scientific community or university resources.
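The glossary terms training data, test data, and accuracy fit together in a standard pattern: a model is built from training data, then evaluated on held-out test data, and accuracy is the fraction of correct predictions. The toy sketch below illustrates that pattern under its own assumptions; the data and the deliberately trivial "model" (predict the most common training label) are invented for illustration and are not from the source.

```python
# Toy illustration of the glossary terms: training data teach a model,
# held-out test data evaluate it, and accuracy is the fraction of
# correct predictions. Data and "model" are made up for illustration.
from collections import Counter

# Labeled examples: (feature, label).
examples = [(1, "low"), (2, "low"), (3, "low"), (4, "high"),
            (5, "low"), (6, "high"), (7, "low"), (8, "low"),
            (9, "low"), (10, "high")]

split = int(len(examples) * 0.8)
training_data = examples[:split]   # used to build the model
test_data = examples[split:]       # held back to evaluate it

# "Training": learn the majority label from the training data.
majority_label = Counter(label for _, label in training_data).most_common(1)[0][0]

# "Testing": accuracy = correct predictions / total test examples.
correct = sum(1 for _, label in test_data if label == majority_label)
accuracy = correct / len(test_data)
print(majority_label, accuracy)  # prints: low 0.5
```

Keeping the test data separate from the training data is what makes the reported accuracy meaningful: a model scored on the same data it was trained on can look far more accurate than it will be on new cases.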
Page 6