ATARC AIDA Guidebook - FINAL 4l
As the designers and builders of the system communicate with stakeholders, they need to
include the ongoing HCAI attributes. By addressing these considerations up front,
stakeholders who may be resistant to change can be assured that the team is following an
approach that builds trust. By demonstrating the AI system as it is built and including these
considerations in the demonstrations, stakeholders can be confident that the team is covering
all the bases in building a trustworthy AI system.
For further consideration, MITRE’s HMT Systems Engineering Guide identifies the following
HMT leverage points that can inform success (see the link below for definitions):31
• Observability
• Predictability
• Directing Attention
• Exploring the Solution Space
• Adaptability
• Directability
• Calibrated Trust
Example: Computer Vision
Models designed to translate visual data based on features and contextual information
identified during training. This enables models to interpret images and video and apply
those interpretations to predictive or decision-making tasks.
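The computer vision example above can be illustrated with a toy sketch. This is not from the guidebook: the "images," the features, and the hand-tuned rule standing in for a trained model are all illustrative assumptions, but they show the pipeline the sidebar describes, with raw pixel data reduced to features that drive a prediction.

```python
# Toy illustration (not from the guidebook): a vision model reduces raw
# pixel data to features and maps those features to a prediction.

def extract_features(image):
    """Compute simple features from a 2D grayscale image (0-255 values)."""
    pixels = [p for row in image for p in row]
    mean_brightness = sum(pixels) / len(pixels)
    # Count strong horizontal intensity changes as a crude "edge" feature.
    edges = sum(
        1
        for row in image
        for a, b in zip(row, row[1:])
        if abs(a - b) > 100
    )
    return {"brightness": mean_brightness, "edges": edges}

def classify(image):
    """Label the image with a hand-tuned rule standing in for a trained model."""
    f = extract_features(image)
    if f["edges"] > 0:
        return "object"  # sharp contrast suggests a foreground object
    return "bright scene" if f["brightness"] > 127 else "dark scene"

# A flat dark frame vs. a frame containing a bright stripe (a strong edge).
dark = [[10] * 4 for _ in range(4)]
striped = [[10, 10, 200, 200] for _ in range(4)]
print(classify(dark))     # "dark scene"
print(classify(striped))  # "object"
```

A real model learns its features and decision rule from training data rather than from hand-written thresholds, but the interpret-then-decide structure is the same.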
Tools are available for evaluating some of these human-centered metrics. MITRE’s Calibrated
Trust Evaluation Toolkit (https://comm.mitre.org/calibrated-trust-toolkit/), for example, helps
ensure that users’ expectations match the system’s actual capabilities. Value cards can be used
to estimate whether models would be accepted or preferred by various stakeholders (Shen et
al. 2021). There are also a variety of tools for exploring the fairness of machine learning models.
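The intuition behind calibrated trust can be sketched numerically. The function and values below are illustrative assumptions, not part of MITRE's toolkit: trust is well calibrated when users' expected success rates track the system's observed success rates, and a large gap signals over- or under-trust.

```python
# Illustrative sketch (not MITRE's toolkit): calibrated trust means users'
# expectations of system success track the system's actual performance.

def calibration_gap(expected_success, observed_success):
    """Mean absolute gap between users' expected and observed success rates.

    A small gap suggests well-calibrated trust; a large gap signals
    over-trust (expectations too high) or under-trust (too low).
    """
    pairs = list(zip(expected_success, observed_success))
    return sum(abs(e - o) for e, o in pairs) / len(pairs)

# Users expect the AI to succeed 90% of the time on these task types,
# but it actually succeeds 60% of the time: trust is poorly calibrated.
overtrust = calibration_gap([0.9, 0.9], [0.6, 0.6])

# Expectations close to observed performance: trust is well calibrated.
calibrated = calibration_gap([0.8, 0.7], [0.78, 0.72])

print(round(overtrust, 2))   # 0.3
print(round(calibrated, 2))  # 0.02
```

An evaluation toolkit would gather the expected rates from user surveys and the observed rates from system logs; the comparison itself is this simple.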
31 https://www.mitre.org/sites/default/files/publications/pr-17-4208-human-machine-teaming-systems-engineering-guide.pdf, pp. 1-3.