

comment

Advancing impact assessment for intelligent systems
With the rise of AI technologies in society, we need a human impact assessment for technology.

Rafael A. Calvo, Dorian Peters and Stephen Cave

It is the 50th anniversary of the environmental impact assessment (EIA), a significant step towards making engineering more socially responsible. But a growing number of policymakers are now voicing the need for an approach to assess the human and social impacts of intelligent systems. We discuss how the EIA provides a partial blueprint for what we call a human impact assessment for technology (HIAT), and how more recent algorithmic and data protection impact assessment initiatives can contribute. We also discuss how ethical frameworks for such a human impact assessment could draw on recently established artificial intelligence (AI) ethics principles. We argue that this approach will help build trust in an industry facing increasing criticism and scrutiny.

[Figure: inputs (depletion) flow from natural resources into the economy, and outputs plus waste (impacts) flow back to the environment (land, water, air, materials), the subject of EIA, and to the human and technology (attention, behaviour, data), the subject of HIAT. Caption: Outputs and costs of data-driven resource extraction and impact.]

Human and environment as subjects of industries
In 1869, amid the enthusiasm of the first industrial revolution, T. H. Huxley called on readers of the first issue of Nature to rejoice in the progress of the previous 50 years1. In 1969, exactly 100 years later, the environmental costs of that progress had become salient enough to prompt radical changes to policy through formal environmental assessment and regulation. Speeding forward another 50 years, we now face a fourth industrial revolution and our relationship with technology is again in flux. While we benefit from this progress, we are also witnessing its ethical, psychological and social costs. Some of the by-products of our systems include manipulation of emotions2, attention3 and voting behaviours at scale4, as well as interactions designed for deception and coercion5. AI systems can bias legal, educational and employment decisions6 and have unintended negative impacts on health and wellbeing7.

But we can learn from the history of the EIA to improve the impacts of AI. Like environmental impact, the human impact of intelligent technologies is difficult to model, with consequences that are hard to predict. It also requires a framework for strong multidisciplinary collaboration and multi-sector engagement involving industry, regulators and the general public. Moreover, the 50-year history of experience with EIA can teach us to anticipate likely issues, including disagreement about measures, the need for an evolving process, and conflicts between industry and regulation.

Traditional EIA evaluates the effects of human intervention on the biophysical environment by considering the intended (products) and unintended (waste) consequences of industry. The process allows for the identification of resource costs (for example, depletion of fossil fuels) and pollutants (for example, chemical run-off) so that these can be weighed against potential benefits. New, data-driven industries differ from conventional engineering projects in that human activity itself is considered the resource as well as the product, while individual or societal ill-being is a potential ‘waste’ effect. Engineers and other stakeholders, such as policymakers, need an assessment framework that takes this into account. While the natural environment is an involuntary recruit to human industry, and has no voice of its own, data industries rely on human resources to generate their data, and it is a general democratic principle that humans are given a say with regard to actions that will affect them. As such, we will need to improve the ways human needs and values are integrated into the design processes of AI. Measuring impact will help make these values explicit and thus provide a platform for defending them.

Distinguishing HIAT
While the EIA has inspired a number of human- and society-focused processes, such as health impact assessment8 and social impact assessment9, none is sufficient for addressing the far-reaching and unprecedented effects of intelligent systems, due to a number of unique characteristics. Specifically, an effective HIAT will need to manage the following:

• Dynamic systems. While traditional impact assessment approaches evaluate interventions that are relatively static once in place, intelligent systems can change rapidly. With few physical limitations, software is modified easily and remotely, and, in addition to regular updates, intelligent systems are continually gathering new data, learning and reprogramming themselves.
• Scale. While traditional approaches assess impact within communities or regions with geopolitical boundaries, software has no physical boundaries and, as a society, we have witnessed impacts crossing geographic, political, legal and cultural borders.
• Responsiveness. While the emphasis in traditional impact assessment is on the anticipation of impact, and on monitoring to check for compliance, the scale and malleability of software make its impact harder to predict accurately. Therefore, while anticipation is still essential, an effective HIAT process will have to lean more heavily on ongoing evaluation and improvement in response.
• Humans as resource. As noted above, while traditional EIA centres on the natural environment as the source of resource extraction, new intelligent systems centre on humans. Many of these systems mine human data and extract human attention, using these to change human behaviour. Such an approach has novel social and philosophical implications that need to be considered.
• Practice and training. While traditional impact assessment is standard in physical engineering, policy and government practice, software designers and engineers currently have no impact assessment built into their processes or training. This must change. To make this change possible, a HIAT must be designed with consideration for the iterative and agile processes common in these fields.

A handful of recent efforts have already contributed to impact assessment processes for specific areas related to AI. These include data protection impact assessments (DPIAs) required by EU regulations, and algorithmic impact assessments (AIAs) focusing on automated decision-making in government (for example, for policing, resource allocation and so on). For example, the AI Now Institute outlines an AIA process for government procurement of automated decision systems10, and the Canadian government provides a scorecard tool for evaluating such systems. The AIA framework is intended to help public agencies critically assess automated systems that affect justice and fair distribution. AI Now states that the AIA is “designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where — or if — their use is acceptable.” As such, it is at the time of procurement that agencies conduct an AIA. In contrast, the HIAT aims to become part of the design and development process from the early stages, and to involve technology-makers themselves.

A HIAT would introduce a framework large enough in scope to incorporate all uses of intelligent technologies while being sensitive to the factors unique to these systems. In the next section we briefly outline some practical approaches towards achieving this.

Defining and measuring impact
A HIAT should aim to predict and evaluate the impact that new digital technologies have on all stakeholders. It would acknowledge that these systems have psycho-social impacts on individuals and society, and that some of these may be negative for health, wellbeing and values such as privacy, autonomy and democracy.

While the EIA deals with physical and observable measures, the most salient effects of intelligent systems are often on the subjective experience of human beings. Social scientists have established means for measuring this kind of impact. For example, the World Happiness Report published by the UN Sustainable Development Solutions Network11 measures population-wide wellbeing across countries using measures from psychology and behavioural economics. Human–computer interaction researchers also use a combination of qualitative and quantitative methods from psychology for the evaluation of software systems with respect to psychological experience12. Some of these draw on measures of autonomy and wellbeing developed over several decades of psychology research13.

For a HIAT report, qualitative and quantitative measures could be given for a range of categories determined by the ethical frameworks for intelligent systems already in development, such as the IEEE Global Initiative on Ethical AI. This large-scale international initiative is producing a set of industry standards akin to ISO 14001:2015 (a management system for environmental responsibility). The IEEE specifications will include assurance procedures for technical issues affecting privacy, security, psychological wellbeing and other ethical concerns. While these specifications promise detailed technical standards, other ethical frameworks, such as the Ethics Guidelines for Trustworthy AI14, may provide more workable summary structures for an assessment and report. While there are many AI ethics frameworks available, recent analysis shows that they share common value structures and equate to other ethical frameworks in health15, providing helpful corroboration and alignment with other forms of professional practice and regulation.

In short, we would make the following practical recommendations for developing a first version of a HIAT:

• Use an existing ethical technology framework, such as the European Union’s Ethics Guidelines for Trustworthy AI, as a foundation to guide assessment and report structure.
• Employ social science methods for measuring impact against the framework — for example, measures of autonomy and wellbeing developed in psychology.
• Include existing assessment methods and tools developed for specific technologies, such as automated decision systems (AIA) and data privacy (DPIA), as appropriate to the context.
• Require compliance with related technical standards, such as the IEEE specifications, as they emerge.
• Gather input from all stakeholders, including a diverse, representative sample of end-users, in order to inform the assessment.

Building trust
While pressure for greater regulation of intelligent systems is mounting, the dominant argument against it is that regulation slows innovation. Such concerns apply to all industries, but agility has been of special concern in the software industry, which has until recently favoured the ‘move fast and break things’ motto. But as this industry comes under increasing criticism and scrutiny, it is realizing the importance of systems that build trust. Other industries know that values such as quality assurance, safety, security and traceability are essential to this. Impact assessments are an important tool for embedding and validating these values and have been used successfully in many industries, including mining, agriculture, civil engineering and industrial engineering. Other sectors too, such as pharmaceuticals, are accustomed to innovating within strong regulatory environments, and there would be little trust in their products without this framework. As AI matures, we need frameworks such as HIAT to give citizens confidence that this powerful new technology will be broadly beneficial to all. ❐
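The recommendations above imply a report built from framework-derived categories, each backed by qualitative and quantitative measures. As a purely illustrative sketch (the class names, fields and example system are hypothetical, not part of the authors' proposal), such a structure could be represented as follows, using the seven requirement areas of the EU's Ethics Guidelines for Trustworthy AI as the category list:

```python
from dataclasses import dataclass, field

# Category list taken from the seven requirements of the EU's Ethics
# Guidelines for Trustworthy AI; everything else (class names, fields,
# the example system) is hypothetical and purely illustrative.
TRUSTWORTHY_AI_CATEGORIES = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental wellbeing",
    "accountability",
]

@dataclass
class Measure:
    """One qualitative or quantitative measure applied to a category."""
    name: str            # e.g. a validated psychological scale (hypothetical)
    kind: str            # "qualitative" or "quantitative"
    value: object = None

@dataclass
class CategoryAssessment:
    """All measures gathered for one framework category."""
    category: str
    measures: list = field(default_factory=list)

@dataclass
class HIATReport:
    """Sketch of a HIAT report: one assessment per framework category."""
    system_name: str
    stakeholders: list
    assessments: list = field(default_factory=list)

    def category_names(self):
        return [a.category for a in self.assessments]

# Usage: scaffold a report for a hypothetical system, then attach a
# (hypothetical) quantitative autonomy measure to the first category.
report = HIATReport(
    system_name="example recommender system",
    stakeholders=["end-users", "developers", "regulators"],
    assessments=[CategoryAssessment(c) for c in TRUSTWORTHY_AI_CATEGORIES],
)
report.assessments[0].measures.append(
    Measure(name="validated autonomy scale", kind="quantitative")
)
```

The point of the sketch is only that the framework supplies the categories, while the measures attached to each category come from the social science methods discussed above.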

Rafael A. Calvo1,2*, Dorian Peters1,2 and Stephen Cave2

1Dyson School of Design Engineering, Imperial College London, London, UK. 2Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK.
*e-mail: r.calvo@imperial.ac.uk

Published online: 3 February 2020
https://doi.org/10.1038/s42256-020-0151-z

References
1. Huxley, T. H. Nature 1, 9–11 (1869).
2. Kramer, A. D. I., Guillory, J. E. & Hancock, J. T. Proc. Natl Acad. Sci. USA 111, 8788–8790 (2014).
3. Wu, T. The Attention Merchants: The Epic Scramble to Get Inside Our Heads (Vintage, 2017).
4. Bond, R. M. et al. Nature 489, 295–298 (2012).
5. Burr, C., Cristianini, N. & Ladyman, J. Minds Mach. 28, 735–774 (2018).
6. Crawford, K. & Calo, R. Nat. News 538, 311–313 (2016).
7. Kross, E. et al. PLOS ONE 8, e69841 (2013).
8. Kemm, J., Parry, J., Palmer, S. & Palmer, S. R. (eds.) Health Impact Assessment (Oxford Univ. Press, 2004).
9. Becker, H. A. Eur. J. Oper. Res. 128, 311–321 (2001).
10. Reisman, D., Schultz, J., Crawford, K. & Whittaker, M. Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability (AI Now Institute, 2018).
11. Sachs, J., Becchetti, L. & Annett, A. World Happiness Report 2016 Vol. 2 (UN Sustainable Development Solutions Network, 2016).
12. Peters, D., Calvo, R. A. & Ryan, R. M. Front. Psychol. 9, 797 (2018).
13. Deci, E. L. & Ryan, R. M. Psychol. Inq. 11, 227–268 (2000).
14. Ethics Guidelines for Trustworthy AI (European Commission, 2019).
15. Floridi, L. et al. Minds Mach. 28, 689–707 (2018).

Competing interests
The authors declare no competing interests.

Nature Machine Intelligence | VOL 2 | February 2020 | 89–91 | www.nature.com/natmachintell
