change rapidly. With few physical limitations, software is modified easily, remotely and, in addition to regular updates, intelligent systems are continually gathering new data, learning and reprogramming themselves.
• Scale. While traditional approaches assess impact within communities or regions with geopolitical boundaries, software has no physical boundaries and, as a society, we have witnessed impacts crossing geographic, political, legal and cultural borders.
• Responsiveness. While the emphasis in traditional impact assessment is on the anticipation of impact, and monitoring to check for compliance, the scale and malleability of software impact make it harder to predict accurately. Therefore, while anticipation is still essential, an effective HIAT process will have to lean more heavily on ongoing evaluation and improvement in response.
• Humans as resource. As noted above, while traditional EIA centres on the natural environment as the source of resource extraction, new intelligent systems centre on humans. Many of these systems mine human data and extract human attention, using these to change human behaviour. Such an approach has novel social and philosophical implications that need to be considered.
• Practice and training. While traditional impact assessment is standard to physical engineering, policy and government practice, software designers and engineers currently have no impact assessment built into their processes or training. This must change. To make this change possible, a HIAT must be designed with consideration for the iterative and agile processes common in these fields.

A handful of recent efforts have already contributed to impact assessment processes for specific areas related to AI. These include data protection impact assessments (DPIAs) required by EU regulations, and algorithmic impact assessments (AIAs) focusing on automated decision-making in government (for example, for policing, resource allocation and so on). For example, the AI Now Institute outlines an AIA process for government procurement of automated decision systems10 and the Canadian government provides a scorecard tool for evaluating such systems. The AIA framework is intended to help public agencies to critically assess automated systems that impact justice and fair distribution. AI Now states that the AIA is “designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where — or if — their use is acceptable.” As such, it is at the time of procurement that agencies conduct an AIA. In contrast, the HIAT aims to become part of the design and development process from early stages, and to involve technology-makers themselves.

A HIAT would introduce a framework large enough in scope to incorporate all uses of intelligent technologies while being sensitive to the factors unique to these systems. In the next section we briefly outline some practical approaches towards achieving this.

Defining and measuring impact
A HIAT should aim to predict and evaluate the impact that new digital technologies have on all stakeholders. It would acknowledge that these systems have psycho-social impacts on individuals and society, and that some of these may be negative for health, wellbeing and values such as privacy, autonomy and democracy.

While the EIA deals with physical and observable measures, the most salient effects of intelligent systems are often on the subjective experience of human beings. Social scientists have established means for measuring this kind of impact. For example, the World Happiness Report published by the UN Sustainable Development Solutions Network11 measures population-wide wellbeing across countries using measures from psychology and behavioural economics. Human–computer interaction researchers also use a combination of qualitative and quantitative methods from psychology for the evaluation of software systems with respect to psychological experience12. Some of these draw on measures of autonomy and wellbeing developed over several decades in psychology research13.

For a HIAT report, qualitative and quantitative measures could be given for a range of categories determined by the ethical frameworks for intelligent systems already in development, such as the IEEE Global Initiative on Ethical AI. This large-scale international initiative is producing a set of industry standards akin to ISO 14001:2015 (a management system for environmental responsibility). The IEEE specifications will include assurance procedures for technical issues affecting privacy, security, psychological wellbeing and other ethical concerns. While these specifications promise detailed technical standards, other ethical frameworks, such as the Ethics Guidelines for Trustworthy AI14, may provide more workable summary structures for an assessment and report. While there are many AI ethics frameworks available, recent analysis shows that they share common value structures and equate to other ethical frameworks in health15, providing helpful corroboration and alignment with other forms of professional practice and regulation.

In short, we would make the following practical recommendations for developing a first version of a HIAT:

• Use an existing ethical technology framework such as the European Union’s Ethics Guidelines for Trustworthy AI as a foundation to guide assessment and report structure.
• Employ social science methods for measuring impact against the framework — for example, measures of autonomy and wellbeing developed in psychology.
• Include existing assessment methods and tools developed for specific technologies such as automated decision systems (AIA) and data privacy (DPIA) as appropriate to the context.
• Require compliance with related technical standards, such as IEEE specifications as they emerge.
• Gather input from all stakeholders, including a diverse representative sample of end-users, in order to inform the assessment.

Building trust
While pressure for greater regulation of intelligent systems is mounting, the dominant argument against is that it slows innovation agility. Such concerns apply to all industries, but agility has been of special concern in the software industry, which has until recently favoured the ‘move fast and break things’ motto. But as this industry comes under increasing criticism and scrutiny, it is realizing the importance of systems that build trust. Other industries know that values such as quality assurance, safety, security and traceability are essential to this. Impact assessments are an important tool for embedding and validating these values and have been successfully used in many industries including mining, agriculture, civil engineering and industrial engineering. Other sectors too, such as pharmaceuticals, are accustomed to innovating within strong regulatory environments, and there would be little trust in their products without this framework. As AI matures, we need frameworks such as HIAT to give citizens confidence that this powerful new technology will be broadly beneficial to all.
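As a purely illustrative sketch of how the first two recommendations might be operationalized, a HIAT report could be represented as a structured record keyed to the seven requirements of the EU Ethics Guidelines for Trustworthy AI, holding both quantitative measures (for example, survey scores) and qualitative findings per category. The category names below follow the EU guidelines; all field names, scoring conventions and example values are hypothetical, not part of any published HIAT specification.

```python
from dataclasses import dataclass, field
from statistics import mean

# The seven requirements of the EU Ethics Guidelines for Trustworthy AI,
# used here as the report's assessment categories.
CATEGORIES = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental wellbeing",
    "accountability",
]

@dataclass
class CategoryAssessment:
    """One category of a hypothetical HIAT report."""
    category: str
    quantitative: list = field(default_factory=list)  # e.g. 1-7 Likert survey scores
    qualitative: list = field(default_factory=list)   # e.g. interview themes

    def summary_score(self):
        """Mean of the quantitative measures, or NaN if none were collected."""
        return mean(self.quantitative) if self.quantitative else float("nan")

# Build an empty report, then record hypothetical stakeholder-survey results
# for one category.
report = {c: CategoryAssessment(c) for c in CATEGORIES}
wellbeing = report["societal and environmental wellbeing"]
wellbeing.quantitative.extend([5.2, 4.8, 6.0])
wellbeing.qualitative.append(
    "end-users report reduced autonomy over notification settings"
)

print(round(wellbeing.summary_score(), 2))  # → 5.33
```

A real HIAT would of course need validated instruments behind each quantitative entry (such as the autonomy and wellbeing measures from psychology cited above) rather than raw numbers; the point of the sketch is only that an assessment structured by an existing ethical framework yields a report that is both auditable and comparable across systems.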
Rafael A. Calvo 1,2*, Dorian Peters 1,2 and Stephen Cave 2

1 Dyson School of Design Engineering, Imperial College London, London, UK. 2 Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK.
*e-mail: r.calvo@imperial.ac.uk

Published online: 3 February 2020
https://doi.org/10.1038/s42256-020-0151-z

References
1. Huxley, T. H. Nature 1, 9–11 (1869).
2. Kramer, A. D. I., Guillory, J. E. & Hancock, J. T. Proc. Natl Acad. Sci. USA 111, 8788–8790 (2014).
3. Wu, T. The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Vintage, 2017).
4. Bond, R. M. et al. Nature 489, 295–298 (2012).
5. Burr, C., Cristianini, N. & Ladyman, J. Minds Mach. 28, 735–774 (2018).
6. Crawford, K. & Calo, R. Nat. News 538, 311–313 (2016).
7. Kross, E. et al. PLOS ONE 8, e69841 (2013).
8. Kemm, J., Parry, J., Palmer, S. & Palmer, S. R. (eds) Health Impact Assessment (Oxford Univ. Press, 2004).
9. Becker, H. A. Eur. J. Oper. Res. 128, 311–321 (2001).
10. Reisman, D., Schultz, J., Crawford, K. & Whittaker, M. Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability (AI Now Institute, 2018).
11. Sachs, J., Becchetti, L. & Annett, A. World Happiness Report 2016 Vol. 2 (UN Sustainable Development Solutions Network, 2016).
12. Peters, D., Calvo, R. A. & Ryan, R. M. Front. Psychol. 9, 797 (2018).
13. Deci, E. L. & Ryan, R. M. Psychol. Inq. 11, 227–268 (2000).
14. Ethics Guidelines for Trustworthy AI (European Commission, 2019).
15. Floridi, L. et al. Minds Mach. 28, 689–707 (2018).

Competing interests
The authors declare no competing interests.