Trustworthy Swarms
may help to demonstrate high proficiency in a task, and thus build trust towards the overall swarm. However, while a swarm that is tolerant to multiple failures might perform a task proficiently, a human user may still observe faults taking place. Subsequently, the user may perceive that the capabilities of a faulty individual are representative of the whole swarm, impacting the user’s perception of the swarm’s trustworthiness.

The failure of individual robots within a swarm can sometimes do more damage than merely altering a human user’s perception of the overall system. While it is assumed that a failed robot will have little negative impact on the swarm, in some instances failed robots may have a significant impact on swarm performance. For example, a single robot may restrict or “tether” the entire swarm, causing other robots to fail at their task [12]. Research into these types of failures is needed in swarm engineering so that they can be mitigated or avoided. Deploying systems with weaknesses in robustness can shape a human’s negative perception of the swarm and diminish public trust in swarm systems.

2.3 Swarm Scalability
The replication and scaling up of individual robots in a swarm could, in theory, improve its overall performance and encourage humans to trust the system. However, increasing the number of robots within a system comes with increased complexity for a user, as it becomes more difficult to cognitively process what each robot is doing in parallel. In addition, increased performance with increased robot numbers is not always guaranteed. In most systems there will be a maximum performance attainable by increasing swarm size, meaning that there will be a point at which performance will no longer increase and will most likely decline as robots are added to the system [38]. Such a drop in performance could result in distrust, especially among people who are developing the system and those who are operating or interacting with it. Increasing the scale of the swarm may also make it difficult for humans to assign clear accountability to specific robots which fail or cause harm within the swarm.
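This saturation effect can be illustrated with a toy model (our sketch; the constants and the quadratic interference term are illustrative assumptions, not taken from [38]): each added robot contributes useful work but also adds interference that grows with the number of robot pairs.

```python
def swarm_performance(n_robots, work_per_robot=1.0, interference=0.01):
    """Toy model of swarm performance vs. swarm size (illustrative only)."""
    gross = n_robots * work_per_robot
    # Interference scales with the number of robot pairs, so beyond some
    # swarm size it outweighs the work contributed by an extra robot.
    loss = interference * n_robots * (n_robots - 1) / 2
    return gross - loss

# Performance rises, peaks, then declines as robots keep being added.
best = max(range(1, 301), key=swarm_performance)
print(best, round(swarm_performance(best), 1))
```

Under these made-up constants the peak sits near 100 robots; the point is only the qualitative shape: more robots do not guarantee more performance.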
2.4 Swarm Adaptability
Swarms and the algorithms that dictate their behaviour can result in “adaptability”. The use of algorithms that dictate pheromone navigation [17], for example, has been shown to generate adaptive properties in swarm systems. Such algorithms are said to orchestrate swarm navigation in terms of finding the shortest path between two points through continuous travel. Paths are adjusted by the algorithm in the event of obstacles that block the path of the swarm. In some cases, the swarm may even identify multiple paths, splitting the traffic across two routes. This is especially the case if the swarm is too large to travel along the single shortest route
without congestion [66].
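As a minimal illustration of this mechanism (a deterministic, mean-field sketch of reinforcement and evaporation written for this article; it is not the controller of [17] or the model of [66], and the routes, costs and rates are invented), pheromone on a fast route is reinforced more strongly than on a slow one, so traffic gradually concentrates on the shorter path:

```python
# Two candidate routes with different travel costs (arbitrary numbers).
costs = {"short": 1.0, "long": 2.0}
pheromone = {"short": 1.0, "long": 1.0}   # start with no preference
EVAPORATION = 0.05

for _ in range(200):
    total = sum(pheromone.values())
    shares = {route: level / total for route, level in pheromone.items()}
    for route in pheromone:
        # Robots deposit pheromone in proportion to how many take the
        # route and how quickly they traverse it; evaporation gradually
        # forgets routes that stop being used (e.g. blocked by obstacles).
        deposit = shares[route] / costs[route]
        pheromone[route] = (1 - EVAPORATION) * pheromone[route] + deposit

print(round(pheromone["short"], 1), round(pheromone["long"], 1))
```

The positive feedback drives almost all pheromone onto the shorter route; adding a congestion penalty to `costs` would reproduce the traffic-splitting behaviour described above.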
In recent research, we see discussions on how robot diversity amongst heterogeneous swarms can lead to systems being significantly more “adaptive” [39]. A good example of this can be seen in [80], where Wilson’s methodology shows a heterogeneous swarm exploring a diverse environment. The swarm leverages its differing morphology and adjusts the preferences of individual robots through the use of a hormone-inspired system. The resulting effect means that the individual robots self-organise themselves to environments that are best suited to their material and algorithmic properties. The adaptation in the systems we discuss here stems from the quantity of robots within the swarm. Robots can test alternative methods or compare their performance with one another, directly or indirectly, gradually adapting to changes in environment or task. Such adaptability to volatile or unpredictable environments may be seen as a positive element when considering trust, or may make the swarm seem unpredictable, resulting in lower trust of the system.

3 HUMAN-SWARM INTERACTIONS
Human trust in swarm systems has not yet been studied in great detail. Robotics research often lacks qualitative case studies on the configuration and influence of technical tools or technologies for technology-mediated knowing. When qualitative research is produced as part of a research design, it tends to be subsumed under technical and engineering detail. For these reasons, it is difficult to conclusively say what factors help to build trust, or whether actively faulty robots will cause issues of mistrust. In the next section, we draw attention to the literature on Human-Swarm Interaction in order to give us a better picture of how humans and swarms interact, with a focus on how humans can monitor, understand, and control swarms.

3.1 Understanding Swarms
3.1.1 The Challenge. A substantial source of distrust in swarm systems may stem from the complexity and unpredictability of a swarm’s behaviour. Unpredictability can generate gaps and uncertainties in human knowledge, which makes it increasingly difficult for us to understand the motive or goal of a swarm. In addition, the distributed nature of a swarm challenges developers in terms of knowing what information is actually valuable and how to present this information to humans, placing greater emphasis on establishing meaningful linkages between humans and robot swarms. Unpredictability, in particular, has already been noted as a large factor in human distrust of autonomous systems [69] and the wider field of Human-Robot Interaction (HRI) more generally [10].

These problems highlight the need for some form of understanding between human operators and swarms. The Human-Swarm Interaction literature typically highlights the importance of human operators being able to make observations and predictions about the swarm, allowing them to understand system behaviour: what the system is doing, why, and what it will do next [61]. The challenge here, then, is to develop tools that allow human operators to easily understand the state of the swarm, its performance, and its emergent behaviours across many different tasks or problems [15]. Such tools may also be used to inform humans of potential collisions, and faulty or defective robots, as well as to allow enough time to stop accidents from happening [20]. If a swarm has been developed without some form of human understanding of what the system is doing, it becomes difficult for human operators to trust the system and predict its behaviour.

3.1.2 Operator interactions: monitoring swarms. Building a transparent system in terms of its observability and predictability is a good starting point to understand what the swarm is doing, why and what it will do next.
TAS ’23, July 11–12, 2023, Edinburgh, United Kingdom Sabine Hauert et al.
For example, it can be argued that the swarm will produce a substantial amount of information about its behaviour, allowing human operators some insight into what is going on. Yet, given the swarm’s distributed nature and adaptability, which in effect lead to a substantial increase in different types of interactions and unpredictable behaviour, human operators will find it increasingly difficult to be aware of everything and process this information. In this instance, too much information might in fact be harmful, confuse and overwhelm operators, or even erode their trust, especially in high-risk settings. Reiterating the importance of system transparency, information about the swarm needs to be human “readable”, which can help make swarm functionality less opaque, and actionable: “actionable” in the sense that the operator has knowledge about swarm performance, can change the behaviour of the swarm to improve performance, and can detect and mitigate faults.

This matter of monitoring and controlling swarms becomes even more challenging outside controlled laboratory settings, in real-world swarm scenarios. Take, for instance, a swarm having to detect or extinguish a forest fire: the number of robots involved, the dynamics of the fire and environment, and high-pressure operation are likely to have adverse impacts on the “cognitive workload” of human operators [40, 47]. For example, according to Kolling et al. [47], the monitoring of a large number of robots can create an overwhelming level of “cognitive complexity” for an operator due to the observation of robots moving in different directions.

Consequently, there has been a surge in human-swarm interaction research on the management of human cognitive workload and control strategies [56, 62, 68]. For example, Nunnally et al. [62] show how the display of certain interface-related elements – such as a viewport of the open environment and swarm state displayed to the human “from a birds eye, orthographic view” – function as props for operators monitoring their swarms. Other attempts to simplify swarm information for human operators include: heat maps displaying the belief and confidence of received swarm information [24], augmented reality systems providing live robot information and gesture-based swarm control through haptic interfaces [7, 20], as well as interfaces that display the percentage of swarm distribution defined in terms of the operator’s phenomena
of interest [36].
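Interfaces like these typically compress raw swarm state into a coarse spatial summary that an operator can scan at a glance. A minimal sketch of that idea (ours, not the actual interfaces of [24] or [36]), binning robot positions into a grid:

```python
from collections import Counter

def swarm_heatmap(positions, cell=1.0, width=4, height=4):
    """Bin robot (x, y) positions into a coarse occupancy grid.

    Illustrative only: real interfaces aggregate richer state, such as
    the belief and confidence values mentioned above, not just counts.
    """
    counts = Counter((int(x // cell), int(y // cell)) for x, y in positions)
    return [[counts.get((col, row), 0) for col in range(width)]
            for row in range(height)]

robots = [(0.2, 0.7), (0.8, 0.1), (2.5, 3.1), (2.9, 3.8), (3.2, 3.4)]
for row in reversed(swarm_heatmap(robots)):  # northernmost row first
    print(row)
```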
Similar research has provided human operators the ability to learn swarm dynamics through the use of automatically generated explanations which make explicit the decisions made by autonomous robots [76]. This work found that explanations of swarm behaviour helped build transparency and trust in autonomous robots. By giving operators the option to inspect the decisions of individual robots, robots that were deemed to have made poor decisions in the past were no longer ignored by operators due to their prior failures. Giving operators an explanation of the decisions made by the robot helped to build trust in poorly performing systems.

Elsewhere, other research has addressed problems of human-swarm interaction through the adoption of behaviour trees – a hierarchical method for visualising the system’s emergent behaviours [43]. Behaviour trees are formed from modular branches displaying behaviours and sub-behaviours. These behaviours can be easily adapted and re-ordered, and additional sub-behaviours can be included [23]. In this instance, behaviour trees represent a human-readable form that helps humans to understand robot behaviours more effectively than they could with traditional black-box-style neural network controllers [43]. With a more human-readable format, humans interacting with a swarm can gain insight into individual robot behaviour, which may make the swarm more interpretable and provide understanding of such behaviour. Equipping operators with such visual information may help to build a degree of trust towards the system and, during operation, could enable them to make better decisions about whether to continue to trust the system. Behaviour trees and the issue of control will be covered in more detail later in this article, in the section “Swarm Control”.

3.1.3 Public Interactions: Adapting to swarms. In the previous section we highlighted a variety of methods for monitoring and understanding the emergent behaviour of swarms. Specifically, we outlined a variety of interfaces swarm developers are designing to establish effective, meaningful and interpretable interactions with their specific swarm system (e.g. [24, 40, 69]). However, while there are great efforts underway to improve human-swarm interactions, most of this research is geared towards human users and operators (operators who may even have a role in developing the swarms). This means there is a lack of research on human-swarm interactions from a public perspective. Nor is there much research that tackles the challenges of public(s) and different cultures adapting to real-world applications of swarms [37]. Initial work exploring human-swarm interaction in public settings has focused on art, outreach, and opinion polling [8, 9]. The identification of meaning-making processes from a public perspective may contribute useful knowledge to human-swarm interaction research, which could inform more effective trust practices in understanding the behaviour of swarms. For instance, research on the adoption of Autonomous Vehicles (AVs) has identified the value of external human-machine interfaces (eHMIs) as a solution to the loss of social interactions by human drivers (such as hand gestures or walking behaviour), opening possibilities for research on meaningful interface design that could help make these vehicles safer for the public (as well as safe for the driver) (e.g., [31, 49, 58]). Here, members of the public are invited into an inclusive design process that aims to put their voices and lived experience at the heart of the design process. For example, the Swedish company Semcon¹ has put forward the idea of a ‘smiling car’ so that the human is likely to understand the vehicle’s intent: when the vehicle detects a pedestrian, a “smile” lights up on the front grille, confirming that it will stop. Elsewhere, Ackermans et al. [5] are developing a type of LED light strip to support the interaction between AVs and pedestrians through the use of colours or animations to express the vehicle’s intention, with members of the public having a preference for a “uniformly flashing or pulsing animation” when AVs communicate a yielding intention.

¹ The Smiling Car - Self driving car that sees you - https://semcon.com/smilingcar/

Taking the AV research into consideration, if swarms are to be located in the public arena, then new modes of interface design will be required for public interaction. If this is the case, a combination of these insights can help inform a design response to better facilitate public interactions with swarms. This may generate a variety of new interaction questions: How much information does
a member of the public need to know, and how can the swarm system convey information with clarity that is immediately recognisable? How can multiple swarm systems from different companies (who have different technical, political and commercial interests) operate harmoniously and interact with each other (and their public) in society?

3.1.4 What’s Next for Human-Swarm Interaction? The previous sections have highlighted how human-swarm interactions influence trust in the behaviour of a swarm system, and what could cause distrust and uncertainty. Future Human-Swarm Interaction research should develop methodologies for collecting swarm information and visualising swarm properties, and deliver this information in a human-readable format that is interpretable to the user. However, the balance between visualising information and improving operator performance without increasing operator workload remains a challenge [7]. In addition to interfaces, a human-understandable answer to what the swarm is doing can be mediated by the design of the swarm system itself. As previously mentioned, there are methods through which swarm behaviour and configuration can be designed or evolved which make systems more human-readable. This, in turn, will help make the job of interfaces conveying swarm intent simpler and easier to understand. Although, as work by Walsh [75] reminds us, this is easier said than done, as designing human-swarm interactions requires a good understanding of how people think, their preferences, and the nature of their day-to-day work.

Furthermore, it is also important to consider the type of training human operators and users require in using swarm systems. Appropriate training not only helps in the responsible use and management of swarm behaviours (e.g. fault detection), but it may also prevent cognitive overload, manage stress, and improve the adoption of the swarm in workplaces or wider society. Moreover, it will be important to consider who will be consuming swarm information. For example, in the context of professional work, a warehouse operator is not going to require the same information as a nuclear decommissioning engineer. Developers who recognise such requirements will encourage adoption and potential trust in swarm systems. However, the use of swarms in professional working environments can also be associated with other undesirable ramifications, including complacency, deskilling of professional expertise, and the degradation of work in the long term [40, 41, 51]. For example, long periods of monitoring swarm systems may undermine skills and diminish operators’ ability to generate expectations of correct behaviour. Matters such as deskilling (as well as “overtrust” and the tendency for humans to over-rely on swarm information) require further attention and must be addressed if operators and society are to fully trust and accept swarm systems in the future [51].

3.2 Swarm Control
3.2.1 The Challenge. Some swarm systems, particularly those interacting closely with humans, require an element of human control. Human input can benefit the swarm’s output by providing context for the activity that is not available to the swarm. However, when it comes to providing context and understanding of the activities of the swarm, the volume of robots within a swarm can make this challenging. As we have already discussed in earlier sections, multiple individual robots can complicate the human operator’s decision-making processes (their cognitive abilities are harmed when monitoring, for example [55]). Not only this, but in centralising the swarm to a human operator and introducing a single point of failure, the robot system may lose its swarm-like properties. This concept has created a large field of research, giving insight into a multitude of control options that allow humans to interact with swarms.

Human-swarm interactions can take the form of control at various resolutions, from manual control (i.e. control over all individual robots) to decentralised control (i.e. control that preserves the swarm’s distributed and emergent nature). Prior work often refers to these forms of control in terms of “Levels of Autonomy” (LOA). According to the literature, LOA tends to fall into three categories [61]:

Manual: Total human control of the swarm system.
Mixed-initiative: A hybrid approach in which swarms and humans collaborate.
Fully autonomous: A swarm which acts with no human input to its behaviour.

Existing research presents arguments for both hands-on and hands-off approaches. An argument for a fully autonomous swarm can be seen in work by Walker et al. [74], which coins the term “neglect benevolence” in reference to the idea that humans can benefit from reducing their input and leaving full autonomy to the swarm. However, Human-Swarm Interaction studies have also found that human intervention can be key to success in some scenarios. Such interventions are said to alleviate shortcomings in autonomy and concerns around changes in the intent of the swarm [47]. The right balance between these options will most likely change because of differences in work practices and other social, cultural, and organisational factors. We note here that swarm developers will need to work closely with operators/users of these swarms in order to establish preferred levels of autonomy and interactions when these
systems are deployed in work environments.
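One way to read these levels of autonomy is as a gate deciding whose commands reach the robots. A minimal sketch (ours, not a scheme from [61]; the command names are invented):

```python
from enum import Enum

class LOA(Enum):
    MANUAL = "manual"                # total human control
    MIXED_INITIATIVE = "mixed"       # human and swarm collaborate
    FULLY_AUTONOMOUS = "autonomous"  # no human input to behaviour

def next_command(loa, swarm_proposal, human_command=None):
    """Pick the command that drives the swarm under a level of autonomy.

    Illustrative only; real LOA schemes are far richer than one gate.
    """
    if loa is LOA.MANUAL:
        return human_command         # nothing happens without the human
    if loa is LOA.MIXED_INITIATIVE:
        # Human input takes priority when present; otherwise defer
        # to the swarm's own proposal.
        return human_command if human_command is not None else swarm_proposal
    return swarm_proposal            # fully autonomous: human input ignored

print(next_command(LOA.MIXED_INITIATIVE, "explore"))  # explore
```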
3.2.2 Tools for swarm control. In this section we focus attention on the different tools being developed to allow human operators to control robot swarms.

First, research into operator preferences for the control of robot swarms reveals a strong preference for Swarm-User Interfaces. For example, a study by Kim et al. [45] reveals a preference for “tangible” user interfaces which provide tangible links between human operators and the real world, allowing specific observations and interactions to be linked with a variety of control commands such as gesture, voice, or physical touch. Such modalities are seen as a means to establish effective control of swarms through user-defined interaction. In this work, gesture was found to be the most commonly used interaction (66% of the time), followed by verbal interaction (26% of the time) and then touch (23% of the time) for swarm control. Despite the benefits that these interaction methodologies bring, the authors also noted a number of shortfalls. For example, operators had a strong preference for gesture- and touch-based interfaces, yet the usefulness of these interfaces is likely to diminish in dark rooms, because such vision systems were designed for light or bright environments. In addition to this, a sparse swarm spread
out across a large environment would not be in view of the operator. Although gesture or touch may not be useful for total control of a system, it could function well as a method of control for a semi-autonomous swarm.

A similar approach to swarm control considers the design of an augmented reality human-swarm interface [64]. In this example a human operator was equipped with a tablet-based augmented reality interface allowing them to virtually select, handle, and drag objects to their intended destination within a digital twin of the physical environment. In this instance, the augmented reality application allowed human operators the opportunity to control large numbers of robots by focusing their gaze on the movement of virtual objects (“environment-orientated action”), rather than the many robots of the swarm (“robot-orientated interaction”). Interestingly, Patel et al. [64] found that virtual object manipulation through environment-orientated commands significantly reduced the cognitive load of human operators. This suggests that human control of swarms may be effective if the operator is focused more towards environment-orientated interactions and swarm-wide goals rather than robot-orientated commands.

Second, there is a growing interest in designing behaviour trees as a strategy for controlling robot swarms. For example, Hogg et al. [35] present a methodology for generating human-readable control for swarm behaviour through the use of behaviour trees. In this instance, behaviour trees are artificially evolved to generate human-understandable and human-readable swarm behaviour options. In theory, this methodology should allow human operators to better interpret the behavioural options available to the swarm. In turn, this allows the operators to pass high-level commands to the agents of the swarm, providing influence while preserving swarm
properties. remains: how to identify the right amount of human interactions
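To illustrate why behaviour trees read as human-readable (a generic, hand-written sketch; the node names and behaviours are invented and this is not the evolved controller of [35] or [43]): the tree composes small, named behaviours with explicit sequence and fallback logic that an operator can inspect and re-order directly.

```python
# Generic behaviour-tree sketch; each node returns "success" or "failure".

def sequence(*children):
    """Succeed only if every child succeeds, in order."""
    def tick(state):
        for child in children:
            if child(state) == "failure":
                return "failure"
        return "success"
    return tick

def fallback(*children):
    """Try children in order until one succeeds."""
    def tick(state):
        for child in children:
            if child(state) == "success":
                return "success"
        return "failure"
    return tick

# Leaf behaviours: small, named, individually legible.
battery_ok = lambda s: "success" if s["battery"] > 0.2 else "failure"
forage = lambda s: s.setdefault("log", []).append("forage") or "success"
go_charge = lambda s: s.setdefault("log", []).append("charge") or "success"

# Reads as: "forage while the battery lasts, otherwise go and charge".
robot = fallback(sequence(battery_ok, forage), go_charge)

state = {"battery": 0.1}
robot(state)
print(state["log"])  # ['charge'] -- the low battery triggered the fallback
```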
Importantly, the technique of artificial evolution has been a powerful tool within the context of swarm robotics because of its ability to discover robot controllers resulting in desired swarm behaviours [13, 48]. Like the example above, we can see how evolutionary algorithms can meet bespoke user specifications and discover solutions the engineer might not have thought of, such as commands or controls that a human can operate or design. By iterating through many generated designs, evolutionary algorithms can identify not just one strong solution to a problem but a multitude of solutions. Multiple-solution evolution has been used in novelty search [53] and more recently in the MAP-Elites algorithm [27, 60], which generates a large number of specialised, high-performing solutions across a wide range of characteristic subsets. These methods provide human operators who interact with swarms with pre-evolved, high-performing options, among which they can choose to fit their specific needs. Human operators can also effectively edit swarm behaviours ad hoc as attitudes or tasks change in a given scenario. For example, in a warehouse logistics scenario - where the job of the swarm is to collect items, store them, and deliver them back upon request - human operators could choose a swarm configuration that prioritises energy efficiency over the number of items delivered in a certain period of time. In this way, users are given the opportunity to calibrate and optimise swarms for the characteristics they personally value, including the configuration they trust the most, in
their specific application of the system [79]. build safety cases for swarm robots in order to make these systems
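The “menu of options” idea behind MAP-Elites can be caricatured in a few lines (a toy written for this article; the solution encoding, fitness and behaviour characteristic are invented and this is not the algorithm of [27, 60]): keep the best solution found in each cell of a behaviour-characteristic grid, so the search returns several qualitatively different high performers rather than a single optimum.

```python
import random

random.seed(1)  # deterministic for illustration

def fitness(sol):
    speed, thrift = sol
    return speed * thrift                 # made-up quality measure

def behaviour_cell(sol):
    speed, _ = sol
    return int(speed * 10) // 2           # 5 cells along a "speed" axis

archive = {}                              # cell -> (fitness, solution) elite
for _ in range(2000):
    sol = (random.random(), random.random())
    cell = behaviour_cell(sol)
    if cell not in archive or fitness(sol) > archive[cell][0]:
        archive[cell] = (fitness(sol), sol)

# One high-performing option per behaviour class: an operator could pick,
# say, the slowest but most energy-thrifty configuration they trust most.
print(sorted(archive))
```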
The above three approaches are a step towards prioritising human operators’ preferences, a principle that is recognised as a core requirement of a desirable and trustworthy technology [6]. However, this also raises the concern that efforts to integrate users’ preferences in the design of swarm systems may end up shifting responsibility from swarm developers onto the human operators who have expressed those preferences. In this way, the operator could become a “moral crumple zone” [26]. Just as crumple zones in cars are designed to crumple in a crash and so absorb some of the energy of the impact, this term refers to human operators or end users who end up bearing the moral and legal responsibility for the malfunction of a complex autonomous system, despite having little control over that system.

This concept applies to cases where swarm behaviour has been engineered to provide operators with multiple performance options [79]. Even if operators are given the option to choose their preferred swarm configuration, their choice is limited to the finite number of configuration options pre-designed by the developers. On the one hand, this means that these configurations can be individually verified prior to use, creating greater reassurance that the system will perform as requested. On the other hand, the implication is that developers do make critical choices in terms of swarm performance and safety at the stage of system development and verification, and so could hold some level of responsibility in the event of system failure. These considerations highlight the need for the development of clear rules on accountability and liability if these “personalised” robot swarms are to be adopted, and trusted, in the real world.

These examples highlight preferences for Human-Swarm Interactions as well as methods for making these interactions more visual and meaningful to human operators. However, a key challenge remains: how to identify the amount of human interaction that is optimal for the swarm (in the sense that it maximises trust).

3.2.3 What’s Next for Swarm Control? Having explored some approaches to the control of swarms and highlighted how interactions may affect trust in swarm systems, it seems that Human-Swarm Interaction will be required if swarms are to be trusted to operate in mainstream settings. At the bare minimum, a human should have the ability to deactivate a system that operates unsafely. In addition, swarm systems that are deployed in the public domain should also have some means of engaging in some form of meaningful interaction. For example, how should a swarm react to the presence of a human? How should a swarm behave in a safe manner and ensure that no harm comes to those in its proximity? Having identified human-swarm safety as a key component in building trust, the next section builds on this concern with safety by focusing more specifically on technical methods that help developers and operators perform quality control and manage the safety of swarm applications.

4 SWARM TESTING
Before robot swarms are deployed in the real world they need to be tested in a way that follows accepted technical standards in swarm engineering. Such testing methods are designed to help developers build safety cases for swarm robots, so that these systems address safety concerns before they are deployed in society. Examples
of swarm applications in which swarm safety is crucial include search and rescue [65], construction [78], and space exploration [67, 73].

Hunt and Hauert [37], in particular, have provided a “safety checklist” for swarm engineers to consider a number of safety-related items that range from ethics and regulations to the possibility of physical or behavioural harm from a swarm. As a swarm is a decentralised and complex system, it is hard to know whether the emergent behaviour arising from individual interactions will be safe. This is one of the main challenges we face in guaranteeing swarm safety. In the following subsections, we focus on two popular methods that swarm engineers are using to develop safe swarms: fault detection, diagnosis and recovery; and security.

4.1 Fault Detection, Diagnosis and Recovery
Existing Fault Detection, Diagnosis and Recovery (FDDR) methods typically address systems simpler than swarms [63]. Methods of fault detection and diagnosis may be classed as either endogenous or exogenous: the former means that a robot detects and diagnoses its own state, and the latter refers to a robot detecting the state of others. A single-robot system is only able to perform endogenous fault detection by definition, but a robot in a swarm may perform both. Graham Miller and Gandhi [30] present a survey of exogenous fault detection and diagnosis methods. In the survey, fault detection methods are categorised as data modelling-based, immune system-inspired or blockchain-based, whereas fault diagnosis methods are categorised as local sensing-based. As a point of interest, Graham Miller and Gandhi [30] point to the work of Christensen et al. [22] on firefly-based exogenous fault detection and claim this to be the beginning of swarm-specific work on fault detection. They go on to highlight how the immune system is analogous to a swarm system and consider exogenous fault detection as an artificial immune system model for fault detection in engineered systems [63]. Similarly, Tarapore et al. [71] present a fault detection method based on the adaptive immune system of vertebrates.
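As a generic illustration of exogenous detection (our sketch; none of the surveyed methods [30] reduce to this), a robot can flag a neighbour whose reported behaviour deviates strongly from the group consensus:

```python
from statistics import median

def flag_outliers(reported_speeds, tolerance=3.0):
    """Flag robots whose reported speed deviates strongly from the group.

    Illustrative sketch of exogenous fault detection only; the surveyed
    data-driven and immune-inspired methods use far richer models.
    """
    centre = median(reported_speeds.values())
    spread = median(abs(v - centre) for v in reported_speeds.values()) or 1e-9
    return sorted(robot for robot, v in reported_speeds.items()
                  if abs(v - centre) / spread > tolerance)

reported = {"r1": 1.0, "r2": 1.1, "r3": 0.9, "r4": 1.05, "r5": 0.0}
print(flag_outliers(reported))  # ['r5'] -- the stalled robot stands out
```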
Additionally, an immune-inspired fault recovery method is pro- approaches in order to consider the social and ethical implications
posed by Ismail et al. [42] where robots may “self-heal” through of the technology.
simulated trophallaxis (a term borrowed from behavioural biology A major challenge in the verification of this holistic and so-
to describe the mutual exchange of food between adult social insects ciotechnical approach is the lack of standards and regulations by
or between them and their larvae) to take power from failed robots. which they can be evaluated against. While technical verification
Beyond immune inspiration, blockchain-based methods may be methods exist to help developers assess what the technology should
used to detect malicious or faulty robots in a swarm. Strobel et al. do (i.e. in terms of its functionality), a pertinent question remains as
[70] propose a blockchain protocol where a single robot must per- to whether such strategies - typically focused on the technical prop-
form “proof of work” in order to prove that its sensors adequately erties of the system - are suitable for the social, ethical, regulatory
work before sensor information is shared to the collective. Although and legal dimensions of testing. Technical methods of verification
blockchain methods do not specifically detect faults, blockchain and validation can only go so far in proving whether a system
approaches can be used to diminish and block the contributions of is “safe enough” before it is released into the wild. For example,
faulty or malicious robots [30]. Finally, data-driven methods have while there are well defined rules to determine driver conduct [72]
also be explored to systematically extract local metrics to detect which is often verified using technical methods of simulation-based
faults in robot swarms [52]. verification [34], the standards for evaluating not-so-static social
It would be useful to consider the FDDR framework holistically as interactions or ethical behaviour are either impossible to capture
each step should inform the others in an iterative process. Bjerknes or extremely difficult to model due to the dynamic nature of ev-
and Winfield [12] demonstrate that the assumption that a swarm eryday activity and the contextual relevance of the ever changing
is robust is false. If this is the case, engineers should decide on the social environment [29]. Such consideration of human and social
number and severity of faults that should be tolerated in the system. factors are important if we are to build and maintain user trust in
On the other hand, a swarm may prove robust to malicious attacks the system [21], especially in the context of different moral and
TAS ’23, July 11–12, 2023, Edinburgh, United Kingdom Sabine Hauert et al.

5.2 Swarm Trustworthiness Properties & Assessment

Robustness, scalability and adaptability are considered particularly useful properties for swarm trustworthiness, although this list could be extended to cover other properties². These properties are examined below in regard to swarm trustworthiness.

5.2.1 Trust Property: Robustness. “Robustness” is often described as the degree to which a swarm system can function correctly in the presence of failed robots. Failures may come in many forms, from hardware and software failure to malicious actors subverting the behaviour of a robot. A suitable verification task may be to find the limits of robustness, either through simulation software and/or hardware trials of whether a task is successful (or not), in order to ensure that the task in question meets the specifications of human developers, operators and/or users. For example, does the swarm still successfully complete assigned tasks when a specific number of robots fail or are removed from the swarm?
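As a hedged illustration of the robustness-limit search described above (not taken from the paper; `run_task` is a toy stand-in for a real swarm simulation, and all parameters are assumptions), such a verification task might be sketched as:

```python
import random

def run_task(n_active, seed, n_items=20, max_steps=400, p_retrieve=0.01):
    """Toy stand-in for one swarm trial: n_active robots must collect
    n_items items; each robot retrieves an item with probability
    p_retrieve per step. Returns True if the task finishes in time."""
    rng = random.Random(seed)
    collected = 0
    for _ in range(max_steps):
        collected += sum(rng.random() < p_retrieve for _ in range(n_active))
        if collected >= n_items:
            return True
    return False

def robustness_limit(swarm_size, trials=30, required_rate=0.9):
    """Largest number of failed (removed) robots for which the task
    success rate over repeated trials stays above required_rate."""
    tolerated = 0
    for n_failed in range(1, swarm_size):
        wins = sum(run_task(swarm_size - n_failed, seed=t) for t in range(trials))
        if wins / trials >= required_rate:
            tolerated = n_failed
        else:
            break
    return tolerated

print(robustness_limit(swarm_size=10))
```

In a real assessment, `run_task` would be replaced by a swarm simulator or hardware trial, with the same outer search answering “how many robots can fail before the task specification is no longer met?”.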
5.2.2 Trust Property: Scalability. “Scalability” is defined here as the ability of the system to complete assigned tasks with a larger number of robots, i.e. “scaling up”. This property is somewhat integrated into swarm engineers’ conceptualisation of a swarm and has become a key parameter in the optimisation of performance and other non-functional properties, dependent on the specific application or context. Scalability is also central to the development of metrics that provide a quantitative measure of coverage and spatial self-organisation. For example, established metrics such as “spatial density” (i.e. the amount of space that is provided per robot within its own area) could be used to develop a measure of trustworthy density. We imagine the metric of “trustworthy density” being useful to human operators if they need to assess whether their swarm system requires a minimum or maximum density of robots to perform optimally. Such a metric may also be used by the swarm itself to self-select its own volume based on “operational feedback”, where idle robots may be considered surplus to needs, or, conversely, an over-utilised swarm may require more robots to complete assigned tasks efficiently. Another scalability metric could focus on performance and may incorporate other metrics, such as task completion rate and network connectivity. In theory, metrics such as these could help engineers build trustworthy swarms. The interplay between density and performance can be crucial, especially when the completion of a task favours a low swarm density (allowing robots free movement) but network connectivity is optimal at the highest density (highest network redundancy).
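To make the density metrics above concrete, here is a minimal sketch; the formula and the band thresholds are our assumptions, not an established definition of “trustworthy density”:

```python
from dataclasses import dataclass

@dataclass
class DensityReport:
    robots_per_m2: float
    within_band: bool

def spatial_density(n_robots, area_m2):
    """Robots per unit area of the operating region."""
    return n_robots / area_m2

def trustworthy_density(n_robots, area_m2, d_min, d_max):
    """Check the swarm sits in an operator-chosen band: above d_min
    for network redundancy, below d_max so robots can still move
    freely (d_min/d_max are hypothetical, application-specific)."""
    d = spatial_density(n_robots, area_m2)
    return DensityReport(d, d_min <= d <= d_max)

report = trustworthy_density(n_robots=50, area_m2=200.0, d_min=0.1, d_max=0.5)
print(report)
```

An operator dashboard could surface only the boolean flag, keeping the interface readable at scale while the underlying density drives decisions about adding or withdrawing robots.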
As we have previously mentioned, coordinating a large group of robots may cause human operators to become overwhelmed with information, including metrics or data that unhelpfully distract the operator and potentially lead to unsafe situations. In this instance, we repeat our argument from earlier in this article: Human-Swarm Interfaces should be designed to ensure that human-swarm interaction is concise and readable irrespective of swarm scale. In addition, user experiences should not only be collected, but also be readily accessible and shared within organisations and across entire industries as a means to improve safety [57]. The operational experiences of operators or users interacting with these swarms may be collected in the form of interviews, questionnaires or surveys as a means to effectively explore the safety implications of human-swarm experiences, preferably during developmental, testing, or operational activities [57].
5.2.3 Trust Property: Adaptability. “Adaptability” is the degree to which a system can effectively and efficiently adapt to different or evolving environments³. For a swarm application, this could mean operating with reduced robot numbers or around environmental obstructions, such as a blocked path in a warehouse creating the need for another route. In this case, effective metrics could be based around robot “idle times” (time not spent doing something useful) and “recovery rates” (the speed at which a new route is found) when the swarm encounters obstructions. Similarly, performance metrics - such as “task success rate” in a find-and-fetch operation - could be used to assess the adaptability of a swarm to a particular problem, such as a new building layout.
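The idle-time and recovery-rate metrics above could be computed from logged robot states along the following lines (a sketch only; the log format and field names are our assumptions):

```python
from statistics import mean

def idle_fraction(state_log):
    """Fraction of logged timesteps a robot spent idle; entries are
    the strings 'idle' or 'busy' (a hypothetical log format)."""
    return state_log.count("idle") / len(state_log)

def recovery_rate(obstruction_step, reroute_step):
    """Inverse of the time taken to find a new route after an
    obstruction appears (higher is better)."""
    return 1.0 / (reroute_step - obstruction_step)

# Two robots' logs over four timesteps
logs = [["busy", "idle", "busy", "busy"],
        ["idle", "idle", "busy", "busy"]]
swarm_idle = mean(idle_fraction(log) for log in logs)
print(swarm_idle, recovery_rate(obstruction_step=100, reroute_step=120))
```

Tracking how these numbers shift when an obstruction is introduced gives a simple, quantitative handle on the adaptability property described above.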
5.3 Swarm Assessment Methods

Formal verification can be used to prove that a swarm property holds across an entire parameter space. This is in contrast to sampling the parameter space when using a testing method, such as a simulation technique. While formal verification can help provide proof of assurance for single properties of a system, the technique is not always suitable for complex systems operating in unbounded domains with a high-dimensional parameter space. However, a swarm system operating in a bounded environment with a fixed number of actions may be more easily verifiable in practice than a self-driving vehicle. In this case, a swarm of robots that uses a fixed set of actions - such as behaviour trees - is perhaps better suited to formal verification. For example, formal methods such as Probabilistic Finite State Machine (PFSM) verification have been used to demonstrate the robustness of a swarm even when network connectivity is lost through local network failure [81].

Formal verification methods can help build notions of resilience and assurance across a large range of parameters. However, such methods only give assurance of the model, and there may be distinct differences between the model and the real system. This is where testing methods, such as simulation-based verification, can help build evidence by bridging the gap between simulation and reality when making assurance arguments.
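To illustrate the flavour of a PFSM-style analysis (a toy Markov chain of our own invention, not the model used in [81]), one can compute the long-run fraction of time a robot spends operational and check it against a robustness threshold:

```python
# Per-robot PFSM states: 0 = searching, 1 = homing, 2 = lost-link
# (transition probabilities are illustrative assumptions)
P = [
    [0.90, 0.08, 0.02],  # searching -> searching / homing / lost-link
    [0.30, 0.65, 0.05],  # homing    -> ...
    [0.50, 0.00, 0.50],  # lost-link recovers by re-searching
]

def stationary(P, iters=5000):
    """Long-run state distribution of the chain by power iteration."""
    dist = [1.0, 0.0, 0.0]
    for _ in range(iters):
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dist

dist = stationary(P)
operational = dist[0] + dist[1]  # time not spent with a lost link
assert operational > 0.9, "robustness property violated"
print(round(operational, 3))  # prints 0.951
```

A genuine PFSM verification would derive the transition probabilities from the controller and check properties symbolically, but the shape of the claim - a bound on long-run behaviour - is the same.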
A corroborative approach to verification and validation, in particular, seeks to combine mutually consistent evidence from multiple and diverse assessment methods in order to raise engineers’ confidence in their system’s trustworthiness [34]. For example, a combination of both physical and simulation-based verification methods can be used to help build a test suite. Harper et al. [34] claim that their approach is useful because it complements both the reliability of existing formal assessment methods and the bounds of the operational domain, such as edge cases.

Many verification and validation techniques can be used to build a suitably diverse test suite. These techniques may include test cases written manually by humans on the basis of historical evidence (“accidentology”) [1], or fault-based test selection based on requirements coverage and robot-based test generation [19]. There are also assertion-based techniques that can be used to prove that the system complies with a set of provided rules, such as the driving conduct of self-driving vehicles [34], which could be adapted to give a set of assertions covering swarm behaviour.

Runtime verification techniques can also be used as a post-design strategy to build assurance, where an oracle (a system capable of predicting or generating anticipated outcomes) is used to compare current and expected behaviour [54]. Runtime verification requires a suitable oracle to be defined, which sets the expected bounds of operation. Such a technique can be especially useful for complex systems that are expected to change regularly and require frequent software updates without time for a complete system re-verification.

Taken together, verification of swarm robustness, scalability and adaptability is suited to a mix of formal and testing-based verification. A Probabilistic Finite State Machine (PFSM) verification approach, in particular, could be suitable for proving system robustness in terms of network connectivity and task completion rate [81]. Complementary to this, simulation should be used for assertion-based analysis of the system to ensure validation and specifications are met, especially when it comes to investigating edge cases. Additionally, simulation may be used to capture scalability information by increasing the swarm size, with reliability information compiled from long-horizon observations. However, if long-horizon testing is not feasible but high reliability is needed for low-probability faults (such as safety-critical aspects of the system), then reliability models can be used to estimate future fault rates, which can be assessed against a specified fault tolerance level [16]. As a next step, physical and hardware testing should be conducted to validate the formal and simulation-based methods and, potentially, to explore scenarios not possible in either of the non-physical methods. As physical testing is usually the more costly method, a test plan should set out and prioritise the most important trust properties to analyse. In doing so, validation of simulated results will help to build a strong and corroborative argument for swarm trustworthiness.
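A minimal oracle-style runtime monitor in the spirit of the runtime verification and assertion-based checks discussed in this section might look like the following; the operating envelope and the monitored signal are our assumptions, not taken from [54]:

```python
def oracle(speed):
    """Expected bounds of operation: robot speed within a
    hypothetical 0-2 m/s envelope."""
    return 0.0 <= speed <= 2.0

def monitor(speed_samples):
    """Compare observed behaviour against the oracle at runtime,
    returning the indices of samples that breach the envelope."""
    return [i for i, v in enumerate(speed_samples) if not oracle(v)]

violations = monitor([0.4, 1.9, 2.6, 1.1, -0.2])
print(violations)  # samples 2 and 4 breach the envelope
```

Because the oracle is decoupled from the controller, the same monitor can keep running across software updates, flagging drift from the expected bounds without a full re-verification.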
6 CONCLUSION

As robot swarms leave the laboratory to enter real-world applications, swarm engineers will need to design these systems in safe and useful ways that allow humans to trust them. However, developing trustworthy swarms is fraught with challenges. For example, the emergent behaviour of a swarm may appear at sporadic and unpredictable times, with behaviour changing across different environments. In order to address concerns about the unpredictability of a swarm system, we have highlighted a variety of areas researching the “properties” of swarms as a means to produce information and build trust in their behaviour. These properties can be tested by a suite of both existing and emerging formal specification, verification and validation methods as a means of building assurance. Such techniques, along with emerging best practice in Human-Swarm Interactions research, may in turn incorporate the human element of trustworthiness. Going forward, innovative approaches that consider both technical and social factors (such as human-readable interfaces and visualisations) are essential if we are to build trust in swarms before deployment.

REFERENCES
[1] 2012. A taxonomy of model-based testing approaches. Software Testing, Verification and Reliability 22, 5 (2012), 297–312.
[2] 2015 - 2014. Big Hero 6 (Disney DVD). Disney, California.
[3] Dhaminda B. Abeywickrama, Amel Bennaceur, Greg Chance, Yiannis Demiris, Anastasia Kordoni, Mark Levine, Luke Moffat, Luc Moreau, Mohammad Reza Mousavi, Bashar Nuseibeh, Subramanian Ramamoorthy, Jan Oliver Ringert, James Wilson, Shane Windsor, and Kerstin Eder. 2022. On Specifying for Trustworthiness. (2022). arXiv:2206.11421
[4] Dhaminda B. Abeywickrama, James Wilson, Suet Lee, Greg Chance, Peter D. Winter, Arianna Manzini, Ibrahim Habli, Shane Windsor, Sabine Hauert, and Kerstin Eder. 2023. AERoS: Assurance of Emergent Behaviour in Autonomous Robotic Swarms. arXiv:2302.10292 [cs.RO]
[5] Sander Ackermans, Debargha Dey, Peter Ruijten, Raymond H Cuijpers, and Bastian Pfleging. 2020. The effects of explicit intention communication, conspicuous sensors, and pedestrian attitude in interactions with automated vehicles. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[6] HLEG AI. 2019. High-level expert group on artificial intelligence. Ethics Guidelines for Trustworthy AI, 6.
[7] C Kristopher Alder, Samuel J McDonald, Mark B Colton, and Michael A Goodrich. 2015. Toward haptic-based management of small swarms in cordon and patrol. In 2015 Swarm/Human Blended Intelligence Workshop (SHBI). IEEE, 1–8.
[8] Merihan Alhafnawi, Edmund R. Hunt, Severin Lemaignan, Paul O’Dowd, and Sabine Hauert. 2022. Deliberative Democracy with Robot Swarms. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 7296–7303.
[9] Merihan Alhafnawi, Edmund R. Hunt, Severin Lemaignan, Paul O’Dowd, and Sabine Hauert. 2022. MOSAIX: a Swarm of Robot Tiles for Social Human-Swarm Interaction. In 2022 International Conference on Robotics and Automation (ICRA). 6882–6888.
[10] Victoria Alonso and Paloma De La Puente. 2018. System transparency in shared autonomy: A mini review. Frontiers in Neurorobotics 12 (2018), 83.
[11] Dan Amir and Ofra Amir. 2018. Highlights: Summarizing agent behavior to people. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. 1168–1176.
[12] Jan Dyre Bjerknes and Alan FT Winfield. 2013. On fault tolerance and scalability of swarm robotic systems. In Distributed Autonomous Robotic Systems. Springer, 431–444.
[13] Josh C Bongard. 2013. Evolutionary robotics. Commun. ACM 56, 8 (2013), 74–83.
[14] Manuele Brambilla, Eliseo Ferrante, Mauro Birattari, and Marco Dorigo. 2013. Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence 7, 1 (2013), 1–41.
[15] Daniel S Brown, Michael A Goodrich, Shin-Young Jung, and Sean Kerman. 2016. Two invariants of human-swarm interaction. Technical Report. Air Force Research Lab, Rome, NY, United States.
[16] Ricky W. Butler and George B. Finelli. 1993. The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software. IEEE Transactions on Software Engineering 19, 1 (1993), 3–12.
[17] S Camazine, JL Deneubourg, NR Franks, J Sneyd, G Theraulaz, and E Bonabeau. 2001. Self-Organization in Biological Systems. (2001).
[18] Daniel Carrillo-Zapata, Emma Milner, Julian Hird, Georgios Tzoumas, Paul J Vardanega, Mahesh Sooriyabandara, Manuel Giuliani, Alan FT Winfield, and Sabine Hauert. 2020. Mutual shaping in swarm robotics: User studies in fire and rescue, storage organization, and bridge inspection. Frontiers in Robotics and AI 7 (2020), 53.
[19] Greg Chance, Abanoub Ghobrial, Severin Lemaignan, Tony Pipe, and Kerstin Eder. 2020. An agency-directed approach to test generation for simulation-based autonomous vehicle verification. In 2020 IEEE International Conference on Artificial Intelligence Testing (AITest). IEEE, 31–38.
[20] Mingxuan Chen, Ping Zhang, Xiaodan Chen, Yuliang Zhou, Fang Li, and Guanglong Du. 2018. A Human-swarm Interaction Method Based on Augmented Reality. In 2018 WRC Symposium on Advanced Robotics and Automation (WRC SARA). IEEE, 108–114.
[21] Erin K. Chiou and John D. Lee. 2021. Trusting Automation: Designing for Responsivity and Resilience. Human Factors (2021).
[22] Anders Lyhne Christensen, Rehan O’Grady, and Marco Dorigo. 2009. From Fireflies to Fault-Tolerant Swarms of Robots. IEEE Transactions on Evolutionary Computation 13, 4 (2009), 754–766.
[23] Michele Colledanchise and Petter Ögren. 2018. Behavior Trees in Robotics and AI: An Introduction. CRC Press.
[24] Mohammad Divband Soorati, Jediah Clark, Javad Ghofrani, Danesh Tarapore, and Sarvapali D Ramchurn. 2021. Designing a User-Centered Interaction Interface for Human–Swarm Teaming. Drones 5, 4 (2021), 131.
[25] Marco Dorigo, Guy Theraulaz, and Vito Trianni. 2021. Swarm Robotics: Past, Present, and Future [Point of View]. Proc. IEEE 109, 7 (2021), 1152–1165.
[26] Madeleine Clare Elish. 2019. Moral crumple zones: Cautionary tales in human-robot interaction (pre-print). Engaging Science, Technology, and Society (2019).
[27] Sondre A Engebråten, Jonas Moen, Oleg Yakimenko, and Kyrre Glette. 2018. Evolving a repertoire of controllers for a multi-function swarm. In International Conference on the Applications of Evolutionary Computation. Springer, 734–749.
[28] Luciano Floridi. 2019. Establishing the rules for building trustworthy AI. Nature Machine Intelligence 1, 6 (2019), 261–262.
[29] Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena. 2018. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines 28, 4 (2018), 689–707.
[30] Olivier Graham Miller and Vaibhav Gandhi. 2021. A survey of modern exogenous fault detection and diagnosis methods for swarm robotics. Journal of King Saud University - Engineering Sciences 33, 1 (2021), 43–53.
[31] Azra Habibovic, Victor Malmsten Lundgren, Jonas Andersson, Maria Klingegård, Tobias Lagström, Anna Sirkka, Johan Fagerlönn, Claes Edgren, Rikard Fredriksson, Stas Krupenia, et al. 2018. Communicating intent of automated vehicles to pedestrians. Frontiers in Psychology (2018), 1336.
[32] Heiko Hamann and Heinz Wörn. 2007. A space- and time-continuous model of self-organizing robot swarms for design support. In First International Conference on Self-Adaptive and Self-Organizing Systems (SASO 2007). IEEE, 23–23.
[33] Peter A Hancock, Deborah R Billings, Kristin E Schaefer, Jessie YC Chen, Ewart J De Visser, and Raja Parasuraman. 2011. A meta-analysis of factors affecting trust in human-robot interaction. Human Factors 53, 5 (2011), 517–527.
[34] Christopher Harper, Greg Chance, Abanoub Ghobrial, Saquib Alam, Tony Pipe, and Kerstin Eder. 2021. Safety Validation of Autonomous Vehicles using Assertion-based Oracles. arXiv preprint arXiv:2111.04611 (2021).
[35] Elliott Hogg, Sabine Hauert, David Harvey, and Arthur Richards. 2020. Evolving behaviour trees for supervisory control of robot swarms. Artificial Life and Robotics 25, 4 (2020), 569–577.
[36] Elliott Hogg, Sabine Hauert, and Arthur G Richards. 2021. Evolving Robust Supervisors for Robot Swarms in Uncertain Complex Environments. DARS-SWARM 2021: Joint conference between Distributed Autonomous Robotic Systems (DARS) and the SWARM robotics conference; conference date: 01-06-2021 to 04-06-2021.
[37] Edmund R. Hunt and Sabine Hauert. 2020. A checklist for safe robot swarms. Nature Machine Intelligence 2 (07 2020).
[38] Edmund R Hunt, Simon W Jones, and Sabine Hauert. 2019. Testing the limits of pheromone stigmergy in high-density robot swarms. Royal Society Open Science 6, 11 (6 Nov. 2019).
[39] Edmund R Hunt. 2020. Phenotypic plasticity provides a bioinspiration framework for minimal field swarm robotics. Frontiers in Robotics and AI 7 (2020), 23.
[40] Aya Hussein and Hussein Abbass. 2018. Mixed initiative systems for human-swarm interaction: Opportunities and challenges. In 2018 2nd Annual Systems Modelling Conference (SMC). IEEE, 1–8.
[41] Aya Hussein, Leo Ghignone, Tung Nguyen, Nima Salimi, Hung Nguyen, Min Wang, and Hussein A Abbass. 2018. Towards bi-directional communication in human-swarm teaming: A survey. arXiv preprint arXiv:1803.03093 (2018).
[42] Amelia Ritahani Ismail, Jan Bjerknes, Jon Timmis, and Alan Winfield. 2015. An Artificial Immune System for Self-Healing in Swarm Robotic Systems. 61–74.
[43] Simon Jones, Matthew Studley, Sabine Hauert, and Alan Winfield. 2018. Evolving behaviour trees for swarm robotics. In Distributed Autonomous Robotic Systems. Springer, 487–501.
[44] Simon Jones, Alan F Winfield, Sabine Hauert, and Matthew Studley. 2019. Onboard evolution of understandable swarm behaviors. Advanced Intelligent Systems 1, 6 (2019), 1900031.
[45] Lawrence H Kim, Daniel S Drew, Veronika Domova, and Sean Follmer. 2020. User-defined swarm robot control. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[46] Bing Cai Kok and Harold Soh. 2020. Trust in robots: Challenges and opportunities. Current Robotics Reports 1, 4 (2020), 297–309.
[47] Andreas Kolling, Phillip Walker, Nilanjan Chakraborty, Katia Sycara, and Michael Lewis. 2015. Human interaction with robot swarms: A survey. IEEE Transactions on Human-Machine Systems 46, 1 (2015), 9–26.
[48] John Koza. 1994. Evolution of emergent cooperative behavior using genetic programming. Computing with Biological Metaphors (1994), 280–297.
[49] Tobias Lagström and Victor Malmsten Lundgren. 2016. AVIP - Autonomous vehicles’ interaction with pedestrians: An investigation of pedestrian-driver communication and development of a vehicle external interface. Master’s thesis.
[50] John D. Lee and Katrina A. See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors 46, 1 (2004), 50–80.
[51] John D Lee and Bobbie D Seppelt. 2009. Human factors in automation design. In Springer Handbook of Automation. Springer, 417–436.
[52] Suet Lee, Emma Milner, and Sabine Hauert. 2022. A Data-Driven Method for Metric Extraction to Detect Faults in Robot Swarms. IEEE Robotics and Automation Letters 7, 4 (2022), 10746–10753.
[53] Joel Lehman and Kenneth O Stanley. 2011. Evolving a diversity of virtual creatures through novelty search and local competition. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation. 211–218.
[54] Martin Leucker and Christian Schallhart. 2009. A brief account of runtime verification. Journal of Logic and Algebraic Programming 78, 5 (2009), 293–303.
[55] Michael Lewis. 2013. Human interaction with multiple remote robots. Reviews of Human Factors and Ergonomics 9, 1 (2013), 131–174.
[56] Sebastian Lindner and Axel Schulte. 2021. Evaluation of Swarm Supervision Complexity. In International Conference on Intelligent Human Systems Integration. Springer, 50–55.
[57] Carl Macrae. 2022. Learning from the failure of autonomous and intelligent systems: Accidents, safety, and sociotechnical sources of risk. Risk Analysis 42, 9 (2022), 1999–2025.
[58] Karthik Mahadevan, Sowmya Somanath, and Ehud Sharlin. 2018. Communicating awareness and intent in autonomous vehicle-pedestrian interaction. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–12.
[59] Clark A Miller and Ira Bennett. 2008. Thinking longer term about technology: is there value in science fiction-inspired approaches to constructing futures? Science and Public Policy 35, 8 (2008), 597–606.
[60] Jean-Baptiste Mouret and Jeff Clune. 2015. Illuminating search spaces by mapping elites. arXiv preprint arXiv:1504.04909 (2015).
[61] Changjoo Nam, Huao Li, Shen Li, Michael Lewis, and Katia Sycara. 2018. Trust of humans in supervisory control of swarm robots with varied levels of autonomy. In 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 825–830.
[62] Steven Nunnally, P Walker, Andreas Kolling, Nilanjan Chakraborty, Michael Lewis, K Sycara, and M Goodrich. 2012. Human influence of robotic swarms with bandwidth and localization issues. In 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 333–338.
[63] James O’Keeffe, Danesh Tarapore, Alan G. Millard, and Jonathan Timmis. 2018. Adaptive online fault diagnosis in autonomous robot swarms. Frontiers in Robotics and AI 5 (November 2018).
[64] Jayam Patel, Yicong Xu, and Carlo Pinciroli. 2019. Mixed-granularity human-swarm interaction. In 2019 International Conference on Robotics and Automation (ICRA). IEEE, 1059–1065.
[65] Jacques Penders, Lyuba Alboul, Ulf Witkowski, Amir Naghsh, Joan Saez-Pons, Stefan Herbrechtsmeier, and Mohamed Habbal. 2011. A Robot Swarm Assisting a Human Fire-Fighter. Advanced Robotics 25 (01 2011), 93–117.
[66] Thomas Schmickl and Karl Crailsheim. 2006. Trophallaxis among swarm-robots: A biologically inspired strategy for swarm robotics. In The First IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2006). IEEE, 377–382.
[67] Melanie Schranz, Martina Umlauft, Micha Sende, and Wilfried Elmenreich. 2020. Swarm Robotic Behaviors and Current Applications. Frontiers in Robotics and AI 7 (2020).
[68] Mohammad Divband Soorati, Javad Ghofrani, Payam Zahadat, and Heiko Hamann. 2018. Robust and adaptive robot self-assembly based on vascular morphogenesis. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 4282–4287.
[69] Daniel P Stormont. 2008. Analyzing human trust of autonomous systems in hazardous environments. In Proc. of the Human Implications of Human-Robot Interaction Workshop at AAAI. 27–32.
[70] Volker Strobel, Eduardo Castelló Ferrer, and Marco Dorigo. 2020. Blockchain Technology Secures Robot Swarms: A Comparison of Consensus Protocols and Their Resilience to Byzantine Robots. Frontiers in Robotics and AI 7 (2020).
[71] Danesh Tarapore, Pedro Lima, Jorge Carneiro, and Anders Christensen. 2015. To err is robotic, to tolerate immunological: Fault detection in multirobot systems. Bioinspiration and Biomimetics 10 (02 2015), 016014.
[72] UK Driving Standards Agency. 2012. The Official Highway Code. Her Majesty’s Stationery Office.
[73] Emil Vassev, Roy Sterritt, Christopher Rouff, and Mike Hinchey. 2012. Swarm Technology at NASA: Building Resilient Systems. IT Professional 14, 2 (2012), 36–42.
[74] Phillip Walker, Steven Nunnally, Mike Lewis, Andreas Kolling, Nilanjan Chakraborty, and Katia Sycara. 2012. Neglect benevolence in human control of swarms in the presence of latency. In 2012 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 3009–3014.
[75] Toby Walsh. 2017. Android Dreams: The past, present and future of artificial intelligence. Oxford University Press.
[76] Ning Wang, David V Pynadath, and Susan G Hill. 2016. Trust calibration within a human-robot team: Comparing automatically generated explanations. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 109–116.
[77] Stop Autonomous Weapons. 2017. Slaughterbots. YouTube. https://www.youtube.com/watch?v=9CO6M2HsoIA
[78] Justin Werfel, Kirstin Petersen, and Radhika Nagpal. 2014. Designing Collective Behavior in a Termite-Inspired Robot Construction Team. Science (New York, N.Y.) 343 (02 2014), 754–8.
[79] James Wilson. 2022. Search Space Illumination of Robot Swarm Parameters for Trustworthiness. In Swarm Intelligence: 13th International Conference, ANTS 2022, Málaga, Spain, November 2–4, 2022, Proceedings, Vol. 13491. Springer Nature, 378.
[80] James Wilson, Jon Timmis, and Andy Tyrrell. 2018. A Hormone-Inspired Arbitration System For Self Identifying Abilities Amongst A Heterogeneous Robot Swarm. In 2018 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 843–850.
[81] Alan F.T. Winfield, Wenguo Liu, Julien Nembrini, and Alcherio Martinoli. 2008. Modelling a wireless connected swarm of mobile robots. Swarm Intelligence 2, 2-4 (2008), 241–266.
[82] Jeannette M. Wing. 2021. Trustworthy AI. Commun. ACM 64, 10 (2021), 64–71.