
Autonomous robot teams for search and rescue missions in complex terrains

Ivan Nikolov, Mile Maliminov, Andrej Petrovski
March 2024

Abstract - Autonomous robot teams are the next step in rescue operations in challenging terrains. In this paper, we explore how to utilise coordinated robotic systems for navigating complex environments and how to effectively execute search and rescue missions. Through a combination of theoretical analysis and practical demonstration, we highlight the potential of such systems to revolutionize search and rescue efforts, improving response times, operational efficiency, and ultimately, saving lives. Our findings underscore the importance of continued research and development in this domain to further refine and deploy autonomous robot teams as assets in emergency response scenarios.

Keywords - Autonomous Robots, Search and Rescue, Complex Terrains, Artificial Intelligence, Sensor Fusion, Advanced Locomotion, Machine Learning, Parallel Computing.

I. Introduction

In search and rescue (SAR) operations, every second is of the utmost importance and is often the difference between life and death. SAR missions can happen anywhere and at any time, and more often than not they happen in complex terrains that present formidable challenges to traditional response teams due to the inherent dangers and difficulties associated with such environments. These challenges often include hard-to-navigate landscapes, hazardous conditions, and limited accessibility, which can impede timely and effective rescue efforts.

In recent years, there has been a growing interest in leveraging autonomous robot teams to augment and enhance SAR capabilities, offering the potential to overcome many of the limitations of conventional approaches.

This paper investigates the application of coordinated autonomous robot teams for SAR missions in complex terrains. We aim to explore how advancements in robotics, artificial intelligence, and sensor fusion can be harnessed to develop robust and adaptable systems capable of navigating challenging environments and executing search and rescue tasks autonomously.

The significance of our work lies in improving response times, operational efficiency, and ultimately, saving lives in emergency situations. By harnessing the collective intelligence and capabilities of autonomous robot teams, we can mitigate risks to human responders and enhance overall mission success rates. However, we also acknowledge the need for ongoing research and development efforts to address remaining technical hurdles and ensure the practical deployment of these systems in real-world scenarios.

II. Individual systems

Mobile Robots in Mine Rescue and Recovery
Mining accidents, spanning from the early days of mining to modern times, have claimed thousands of lives [6].

To address the dangers of rescue operations in hazardous underground environments, mine rescue teams have been crucial, risking their lives to save others. However, advancements in robotics offer a promising solution to enhance safety and efficiency in such operations. Since 2001, various robot models have been utilized in nine responses across the U.S., primarily in mine rescues and recovery operations. These robots assist by navigating ahead of human teams, providing crucial video footage and atmospheric data, and identifying potential hazards, thus mitigating risks faced by rescue teams.

Figure 1: Physical challenges emerging from a SE (slope entry is shown).

Figure 2: Physical challenges emerging from a void entry.

The deployment scenarios for mine rescue robots typically fall into three categories: surface entry (SE), borehole entry (BE), and void entry (VE). SE and BE scenarios are more common, while VE deployments are rare due to the challenges posed by underground voids. Despite not having found survivors to date, mine rescue robots have proven invaluable by supplying vital information to human teams. For instance, during the Excel mine fire recovery, robots provided essential video records of conditions within the mine, enabling rescue teams to prepare adequately for the hostile environment they would encounter. Similarly, at the Storm Decline exploration, robots detected dangerous atmospheres without endangering human lives, highlighting their significant role in risk mitigation.

Figure 3: Physical challenges emerging from a BE.

The tasks associated with underground mine rescue fall into three categories: nominal search and rescue, mine recovery, and deployment.

• The search and rescue task involves both visual inspection and air quality testing to determine human safety and the state of the mine.

• Mine recovery focuses on both evaluating the state of the mine and conducting any necessary repairs in order to save or reopen the mine, including manipulation of the environment by moving debris, opening doors or fans, etc.

• The deployment task is implicit, but critical, as rescue robot deployment delays have led to robots being deployed later in the incident timeline than desired.
  – One deployment task is transportation and logistics.
  – A second deployment task is the construction or modification of site-specific systems, such as the lowering systems designed and built on-site.
The types of subterranean robots for underground applications can be categorized into five main groups, ordered by decreasing technical maturity.

• Tracked or wheeled crawlers, demonstrated in mines, represent the most mature form of mobility. Despite some challenges, tracked or wheeled crawlers are well-suited for navigating rubble. Pipe crawlers, on the other hand, are limited to boreholes and cannot operate on mine floors.

• Swimmers, such as the Minefish prototype, explore flooded or underwater voids, but are not relevant for rescue operations due to their amphibious nature.

• Small unmanned aerial vehicles (UAVs) have potential for flying in underground structures, especially in surface entry scenarios, although practical challenges such as control over long distances and operation in darkness remain unanswered.

• Snake robots, highly flexible and capable of traveling on floors and in pipes, are suitable for borehole and void entry scenarios.

• Legged robots offer promise for stepping over obstacles but face challenges with power demands, particularly in surface entry scenarios requiring long travel distances.

Snake robots [1], along with legged robots, are considered essential for future advancements, overcoming the limitations of pipe crawlers for borehole-based mine rescue.

Figure 4: The snake robot enters the simulated coal mine tunnel.

Multi-Robot Exploration and Fire Searching

Marjovi et al. [5] in their paper present a novel approach to cooperative multi-robot exploration and fire searching in unknown environments. At its core, the system aims to optimize exploration time while effectively localizing fire sources, thus improving response capabilities in hazardous scenarios.

To achieve this, the system employs a decentralized decision-making mechanism where robots autonomously select exploration frontiers based on cost-gain ratios. This ensures that each robot contributes efficiently to the exploration effort, minimizing the risk of collisions and redundant exploration.

One of the key components of the system is its decentralized frontier-based exploration strategy, which allows robots to navigate through the environment while avoiding obstacles. By utilizing potential field motion control, the robots are guided along optimal paths towards their designated exploration frontiers. This approach not only enhances navigation efficiency but also ensures robustness in dynamic environments where obstacles may appear unexpectedly.
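As an illustration of the potential-field motion control mentioned above, the sketch below combines an attractive pull toward the assigned frontier with repulsive pushes from nearby obstacles. This is only a minimal example, not the authors' implementation; the gains, the influence radius, and the point-obstacle representation are assumptions made for the sketch.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles,
                         k_att=1.0, k_rep=100.0, influence=2.0, step=0.1):
    """One motion step toward `goal` using an attractive/repulsive potential field.

    pos, goal: (x, y); obstacles: list of (x, y) obstacle points.
    Gains and the obstacle influence radius are illustrative values only.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)

    # Attractive force pulls the robot toward its assigned frontier.
    force = k_att * (goal - pos)

    # Repulsive forces push the robot away from obstacles inside the influence radius.
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3

    # Move a small step along the resulting force direction.
    norm = np.linalg.norm(force)
    return pos if norm < 1e-9 else pos + step * force / norm

# Example: a robot at the origin heading to a frontier while skirting an obstacle.
p = np.array([0.0, 0.0])
for _ in range(50):
    p = potential_field_step(p, goal=(5.0, 5.0), obstacles=[(2.5, 2.4)])
print(p)
```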
The main goal of the exploration process is to cover
Snake robots [1], along with legged robots, are the whole environment in the minimum possible time.
considered essential for future advancements, over- Therefore, it is essential for the robots to share their
coming limitations of pipe crawlers for borehole based tasks and individually reach the objectives through
mine rescue. optimal paths. In an unknown environment the

3
Frontiers are the borders of the explored area with the unexplored area, where robots can expect to gain knowledge of the environment. Most of the time when the robots are exploring the area, there are several unexplored regions; this poses the problem of how to assign specific frontiers to the individual robots. We want to avoid sending several robots to the same frontier, which may result in collision concerns. The other issue is that we do not want to have a central station for moderating the robots.

Figure 5: Whole system schematic.

Figure 6: Algorithm that is used.
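To make the frontier-assignment problem described above concrete, the following minimal sketch has each robot independently score the known frontiers by a cost-gain ratio (travel cost over expected information gain) and skip frontiers already claimed by teammates, so that no central station is needed. The scoring formula, the Euclidean travel cost, and the data layout are assumptions made for illustration rather than the paper's exact algorithm.

```python
import math

def choose_frontier(robot_pos, frontiers, claimed):
    """Pick the frontier with the lowest cost/gain ratio.

    frontiers: list of dicts {"pos": (x, y), "gain": expected_new_area}
    claimed:   set of frontier positions already selected by other robots
               (learned from the shared map / broadcast messages).
    """
    best, best_score = None, math.inf
    for f in frontiers:
        if f["pos"] in claimed or f["gain"] <= 0:
            continue                      # avoid sending two robots to one frontier
        cost = math.dist(robot_pos, f["pos"])
        score = cost / f["gain"]          # lower is better
        if score < best_score:
            best, best_score = f, score
    return best

frontiers = [{"pos": (4, 1), "gain": 12.0}, {"pos": (1, 6), "gain": 20.0}]
print(choose_frontier((0, 0), frontiers, claimed={(4, 1)}))
```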
Furthermore, the system addresses challenges related to map merging and information integration, crucial for maintaining a coherent representation of the environment across all robots. By sharing map data and coordinating exploration efforts, the robots collectively build a comprehensive map of the environment, enabling efficient decision-making and task allocation.
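A minimal sketch of the map-sharing idea: each robot keeps a local occupancy grid, and the merged map is built cell by cell, preferring whichever robot has actually observed a cell. The three-state encoding and the conservative conflict rule are assumptions chosen for this example; the paper does not specify the underlying data structure.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def merge_maps(grids):
    """Merge occupancy grids from several robots (same size, same frame).

    Cells observed by any robot override UNKNOWN; if robots disagree,
    OCCUPIED is kept as the conservative choice for navigation.
    """
    merged = np.full(grids[0].shape, UNKNOWN, dtype=int)
    for g in grids:
        observed = g != UNKNOWN
        # Take over cells that were unknown so far.
        take = observed & (merged == UNKNOWN)
        merged[take] = g[take]
        # Conservative conflict resolution: occupied wins over free.
        merged[observed & (g == OCCUPIED)] = OCCUPIED
    return merged

a = np.array([[FREE, UNKNOWN], [OCCUPIED, UNKNOWN]])
b = np.array([[FREE, FREE], [FREE, UNKNOWN]])
print(merge_maps([a, b]))
```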
Experimental validation of the system is conducted in both simulation and real-world environments, demonstrating its effectiveness and reliability. Results indicate that the proposed approach performs well across various scenarios, particularly in complex environments with multiple robots. The system's performance is further evaluated based on exploration time and the number of repeated nodes during traversal, showing promising results for enhancing operational efficiency in search and rescue missions.

Figure 7: Real maze experiment; visual positioning application screenshot.

Overall, the described system represents a significant advancement in autonomous robotic systems for search and rescue operations. By leveraging decentralized decision-making and cooperative exploration strategies, the system offers a practical solution for navigating challenging terrains and locating fire sources in emergency situations, ultimately contributing to saving lives and mitigating risks to human responders.

Use of robots in Maritime Search and Rescue missions

The use of autonomous robots in search and rescue missions in land- or air-based scenarios has been a staple for a longer time, with their use in water emergencies trailing behind. The dangers to the lives of the humans involved in the accident, as well as the unpredictable consequences of the traumatic events on these humans' mental and physical capabilities, dictated that some degree of human involvement was to be included in every single one of these missions. But then, a new issue arises regarding the safety of the humans setting out to rescue the survivors. For a while, the application of these systems was only theoretical, and was deemed too dangerous and unreliable for any real-world use. Even today, it's not as widespread as its airborne and ground-based counterparts.
Maritime search and rescue missions utilize AMVs (autonomous marine vehicles), split into ASVs (autonomous surface marine vehicles) and AUVs (autonomous underwater vehicles), whose development began in the 1970s. Since then, major advancements in computer memory, efficiency and compactness have allowed AMVs to carve out a legitimate use in the field. These advancements have ensured that they can autonomously handle tasks which were previously thought inconceivable and which required manned vehicles (or remote-controlled navigation). From here on out we'll mainly focus on the AUVs, with the ASVs taking a backseat.

AMVs, just like their respective counterparts in the UAVs and UGVs, can be used to perform a plethora of tasks, all of which require localization within the environment as well as navigation through unpredictable terrain. Both of these issues are something the community has grappled with for a while because of the limitations of the methods of communication. It's simple enough while the vehicle is above the surface thanks to radio and spread-spectrum communication, but underwater it's an entirely different experience. Propagation of the aforementioned signals is seriously hindered by the environment they're in, and thus it's necessary to prioritize contactless automation, limiting communication to a minimum and only utilizing information transfer when necessary. Additionally, SLAM techniques initially used for ground robots are repurposed in order to improve performance (lessening cost and overhead).

Simultaneous localization and mapping

Simultaneous localization and mapping, or SLAM for short, is a technique where the robot simultaneously localizes itself within an environment while also mapping out everything as it goes. Depending on our goals and needs, we can utilize either feature-based SLAM (extracting environmental features thanks to sensors and keeping them in the state space) or view-based SLAM (storing "views" in the state space, where positioning is paired with certain measurements from that position and compared with previous positions for reference). Another dichotomy present in the SLAM approach is "online" SLAM as opposed to full SLAM: online SLAM only takes into account the current pose and its map, while full SLAM estimates the entire trajectory.

Some popular SLAM approaches are:

• EKF–SLAM: utilizes the extended Kalman filter (EKF), which borrows techniques from calculus and linearizes the system model using the Taylor expansion.
Using predict–update cycles it estimates the pose and map, and includes the map features and pose in its state vector. It is applicable to both view-based SLAM and feature-based SLAM. However, this approach isn't feasible for larger-scale maps due to the larger number of features present in the environment, which makes it too computationally expensive. (A minimal predict–update sketch is given after this list.)

• SEIF–SLAM: Sparse extended information filter (SEIF) and exactly sparse extended information filter (ESEIF) are two popular SLAM approaches using the information filter. The main issue in these techniques is the necessity of keeping a sparse information matrix at all times in order to not disrupt the Gaussian distribution. This is done through so-called "sparsification" strategies.

• FastSLAM: A SLAM approach relying on particle filters, where non-linear filtering is done without approximating the system models. FastSLAM displays poses and features as particles in the state space. This is the only solution where both online and full SLAM are done together, which means it simultaneously estimates both its current pose and the full trajectory. The aforementioned particles each contain an estimate of the pose and all features, with each feature represented and updated through a separate EKF. It should come as no surprise that, just like EKF–SLAM, it is applicable to both view-based SLAM and feature-based SLAM.

• GraphSLAM: GraphSLAM also uses approximation by Taylor expansion; however, unlike EKF–SLAM it's a method where the entire trajectory and map are estimated, and thus it's considered an offline, or "full", version of SLAM. Poses in this approach are represented as nodes in a graph, with the edges representing motion and observation constraints. Optimizing these constraints is a must in GraphSLAM because it's necessary in order to calculate the spatial distribution of the nodes. Some solutions to this technique are: square root smoothing and mapping (SAM), incremental smoothing and mapping (iSAM), relaxation on a mesh, multilevel relaxation, hierarchical optimization for pose graphs on manifolds (HOGMAN) and iterative alignment. All of the aforementioned solutions are similar in theory but differ in implementation.

• Artificial intelligence (AI) SLAM: relies on neural networks and fuzzy logic for its goals. RatSLAM is an example of neural-network-based data fusion done with an odometer and a camera, where neural networks are used to model rodent brains. Another example of AI SLAM is the use of self-organizing maps (SOMs) to perform multi-agent SLAM. SOMs are neural networks trained without supervision.

Figure 8: SLAM methods comparison.

Figure 9: SLAM methods comparison.
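Referring back to the EKF–SLAM entry in the list above, the sketch below shows the bare predict–update cycle on a state vector that stacks the robot pose with the map features. The motion and measurement models here are deliberately simple linear 2-D placeholders so that no Jacobian linearization is needed; in a real EKF–SLAM these would be the Taylor-linearized nonlinear models described above, and the values are invented for the example.

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """Predict step: move the pose part of the state by odometry `u`.

    x = [px, py, f1x, f1y, ...]; only the pose entries change, so the
    motion Jacobian is the identity for this simplified linear model.
    """
    x = x.copy()
    x[:2] += u
    return x, P + Q

def ekf_update(x, P, z, feature_idx, R):
    """Update step: `z` is a relative (dx, dy) observation of one feature."""
    i = 2 + 2 * feature_idx
    H = np.zeros((2, len(x)))
    H[:, 0:2] = -np.eye(2)          # measurement depends on the pose ...
    H[:, i:i + 2] = np.eye(2)       # ... and on the observed feature
    y = z - (x[i:i + 2] - x[:2])    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Pose (0, 0) plus one landmark believed to be at (2, 1).
x = np.array([0.0, 0.0, 2.0, 1.0])
P = np.eye(4)
Q = np.diag([0.01, 0.01, 0.0, 0.0])     # process noise only on the pose
x, P = ekf_predict(x, P, u=np.array([1.0, 0.0]), Q=Q)
x, P = ekf_update(x, P, z=np.array([1.2, 0.9]), feature_idx=0, R=0.05 * np.eye(2))
print(x)
```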
AUV navigation and localization techniques

Inertial/dead reckoning: When the AUV autonomously positions itself without acoustic support from a ship or transponders, it dead reckons. In dead reckoning (DR), the AUV calculates its position based on its orientation, velocity, or acceleration vector. Traditional DR isn't too popular as a navigation method, yet modern AUVs widely use navigation systems dependent on DR. A significant downside to DR is that the errors accumulate over time. To improve upon DR, inertial systems integrate measurements from accelerometers and gyroscopes, offering higher-frequency measurements compared to acoustic sensors based on time-of-flight.

However, over time the drift still adds up. One solution to this is incorporating drift as part of the state space and using slower-rate sensors to calibrate the inertial sensors. Another source of error we can handle is the speed of sound in water, in order to reduce systemic noise. This can be done by using a Doppler Velocity Log (DVL) for the velocity of the water relative to the AUV, as well as calculating the local current using an Acoustic Doppler Current Profiler or an ocean model. If we have no access to the ocean velocity, then a transponder on a surface buoy is also capable of doing a satisfactory job at estimating it.

Lastly, there is terrain-aided navigation, utilizing particle filtering with bathymetric maps if available, showing comparable results to DVL and multibeam sonars. It is recommended to proceed with caution if this method is used in an area with little to no bathymetric variance.
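A minimal sketch of the dead-reckoning integration just described: a body-frame velocity (for example from a DVL) is rotated by the heading and integrated over time, optionally adding an estimated local current. The flat 2-D model, the fixed time step, and the numbers in the example are simplifying assumptions; the point is only to show how heading, velocity, and current errors accumulate into drift.

```python
import math

def dead_reckon(x, y, heading, v_body, current, dt):
    """Advance a 2-D position estimate by one time step.

    v_body:  (surge, sway) velocity relative to the water, e.g. from a DVL
    current: (vx, vy) local water current in the world frame, e.g. from an
             ADCP or an ocean model (use (0, 0) if unavailable)
    Any bias in these inputs is integrated too, which is the drift problem.
    """
    c, s = math.cos(heading), math.sin(heading)
    vx = c * v_body[0] - s * v_body[1] + current[0]
    vy = s * v_body[0] + c * v_body[1] + current[1]
    return x + vx * dt, y + vy * dt

x = y = 0.0
for _ in range(600):                      # ten minutes at 1 Hz
    x, y = dead_reckon(x, y, heading=math.radians(45),
                       v_body=(1.5, 0.0), current=(0.05, -0.02), dt=1.0)
print(x, y)
```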
• Ultrashort (super short) baseline (USBL/SSBL): USBL navigation enables an AUV to localize itself relative to a surface ship, with relative range and bearing determined by time-of-flight and phase differencing across a transceiver array. The ship supports the AUV. There's also the short baseline (SBL), where beacons placed on each side of the ship's hull triangulate the AUV's position. Downsides to these two methods are the USBL's limited range and the SBL's accuracy being dependent on the ship's baseline length. Still, they represent reliable techniques that shine in tricky environments.

• Long baseline (LBL) and GPS intelligent buoys (GIBs): Beacons are placed over a wide mission area; in the case of LBL they are placed on the seafloor, while GIBs work with surface-level beacons. LBL navigation uses the two-way travel time of acoustic signals to determine location (see the trilateration sketch after this list). In order to offset the complexity of this technique, we can take advantage of the particle-filter-based FastSLAM, as well as Doppler sonar sensors. All things considered, LBL systems are valuable for their robustness, which makes them extremely useful in high-risk scenarios like surveys under ice.

• Single fixed beacon: The single fixed beacon, also known as virtual LBL (VLBL), represents a means to counter the complexity of installing enough beacons for LBL by using only a single beacon that propagates signals to simulate the baseline. The main issue with this approach is the incremental error growth as the AUV deviates further and further from the intended path. The only way to combat this is through careful path planning that ensures the vehicle stays within the observable range of the beacon.

• Acoustic modem: Advances in acoustic modems have allowed for new techniques to be developed. Beacons no longer have to be stationary, and full AUV autonomy can be achieved with support from autonomous surface vehicles equipped with acoustic modems, or by communicating and ranging in underwater teams. Communication between AUVs in order to cooperatively
navigate is done through the time-division multiple access scheme, where each vehicle is allotted a time slot in which it can share its information. The downside of this is that in bigger multi-AUV teams the time between one vehicle's slots increases.
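Referring back to the LBL/GIB entry above, ranges obtained from the two-way travel time of acoustic pings can be turned into a position fix by least-squares trilateration. The sketch below uses a simple Gauss–Newton solver; the nominal sound speed, the beacon layout, and the 2-D formulation are assumptions made for the example, not a production navigation filter.

```python
import numpy as np

SOUND_SPEED = 1500.0   # m/s, nominal value; in practice it varies with depth

def lbl_fix(beacons, two_way_times, guess, iters=10):
    """Estimate a 2-D position from two-way travel times to seafloor beacons.

    beacons: (N, 2) array of known beacon positions
    two_way_times: N two-way travel times in seconds
    Solved by Gauss-Newton iterations starting from `guess`.
    """
    ranges = np.asarray(two_way_times) * SOUND_SPEED / 2.0
    p = np.asarray(guess, float)
    for _ in range(iters):
        diff = p - beacons                      # (N, 2)
        dist = np.linalg.norm(diff, axis=1)
        residual = dist - ranges
        J = diff / dist[:, None]                # Jacobian of range w.r.t. position
        step, *_ = np.linalg.lstsq(J, -residual, rcond=None)
        p = p + step
    return p

beacons = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0]])
truth = np.array([180.0, 240.0])
times = 2.0 * np.linalg.norm(beacons - truth, axis=1) / SOUND_SPEED
print(lbl_fix(beacons, times, guess=[100.0, 100.0]))
```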

Geophysical: Techniques that use external environmental information as references for navigation. This must be done with sensors and processing that are capable of detecting, identifying, and classifying some environmental features. Nearly all methods in the geophysical category use some variation of SLAM. Geophysical navigation is subdivided into:

• Magnetic – while only explored in theory, it's possible to use magnetic field maps for localization.

• Optical – using a camera and visual odometry, pictures of the seabed are captured and identified in order to navigate.

• Sonar – using acoustic signals to detect and classify environmental features.

Multipurpose UAV for search and rescue operations in mountain avalanche events

Within the last century there has been a sharp increase in the number of individuals engaging in winter sport activities. However, this surge in participation has also heightened the risk of avalanches, which remain a significant hazard for both recreational enthusiasts and professionals. Avalanche-related deaths, primarily due to asphyxiation and lack of oxygen, underscore the critical need to minimize burial time to enhance survival chances, especially considering the sharp decline in survival probability after 10 minutes of burial [7].

Figure 10: Survival curve for people completely buried in an avalanche as a function of the duration of burial.

To address the challenge of reducing rescue time, modern avalanche beacons (ARTVA) have proven effective in guiding rescuers during search operations, thereby enhancing efficiency, particularly in auto-rescue scenarios where stress can impede operations. Despite their reliability, the operational range of these beacons remains a limiting factor, exacerbated by environmental conditions and the morphological complexities of avalanches.

In response to these challenges, there is growing interest in leveraging unmanned aerial vehicles (UAVs) for avalanche rescue missions. Drones equipped with sophisticated localization and navigation algorithms offer the potential for rapid and accurate victim detection, minimizing exposure to additional risks for rescuers, such as secondary avalanches. While several research projects and commercial initiatives have explored the feasibility of UAVs in avalanche rescue, practical implementations and comprehensive performance evaluations are still limited in the scientific literature.

The initial exploration employed traditional Search and Rescue (SAR) techniques utilizing live visual and thermal imaging of the surveyed region. While the outcomes were in line with anticipated results, notable constraints emerged. These encompassed the necessity for proficient individuals to decipher video footage from the field and the demand for several operators to oversee both aerial maneuvers and observational duties. Moreover, existing thermal imaging technologies, as demonstrated by the FLIR Tau 2 IR camera, encounter difficulties in detecting thermal signatures when an individual is engulfed
beneath a snow cover exceeding 10-20 cm in thickness.

Figure 11: Aerial images shot with a FLIR Tau 2 IR camera at the emergency site; at the bottom, the partially buried subject with only the uncovered head, arm and knee visible.

The selection of a commercially available UAV was guided by specific criteria tailored to the requirements of the search task. These criteria included the capability for complete autonomous flight, the ability to operate independently without reliance on a ground station, a payload capacity of approximately 1-1.5 kg to accommodate an avalanche beacon and thermal camera, ease of transportability and operation even in mountainous environments, stable flight performance even in windy conditions, endurance of at least 10-15 minutes under maximum load, and flexibility for system integration and customization to facilitate mission planning and management.

The UAV chosen for this purpose, the Venture UAV by PROS3, underwent customization to suit the unique demands of the application and to accommodate the necessary payload. Equipped with a MicroPilot 2128g2Heli autopilot, it allows for the integration of custom payloads and the configuration of tailored flight operations. The autopilot incorporates a temperature-compensated inertial measurement unit (IMU) that automatically corrects for temperature-induced drift and bias during operation, streamlining on-site deployment without the need for calibration procedures. Additionally, it offers automated warning and failure procedures for various contingencies, thereby enhancing operational safety and reliability.

Navigation for autonomous flight relies on data from the IMU and GPS, necessitating a consistent GPS signal. However, recognizing the challenges of maintaining a steady GPS signal, particularly in mountainous terrain, the authors acknowledged alternative techniques explored in existing literature to address navigation issues in areas with limited or absent GPS coverage. These strategies aim to enhance the system's reliability and usability in adverse conditions.
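To illustrate the dependence on a consistent GPS signal mentioned above, the sketch below shows one common pattern, not the MicroPilot autopilot's actual logic: the position is propagated from an IMU-derived velocity at every step and nudged toward a GPS fix whenever one is available, so the estimate degrades gracefully during short outages. The blending weight and the 2-D model are illustrative assumptions.

```python
def fuse_step(pos, vel_imu, gps_fix, dt, alpha=0.2):
    """One navigation update combining IMU dead reckoning with optional GPS.

    pos, vel_imu, gps_fix: (x, y) tuples; gps_fix is None when no signal.
    `alpha` is an illustrative blending weight, not a tuned value.
    """
    # Propagate with the IMU-derived velocity (always available).
    x = pos[0] + vel_imu[0] * dt
    y = pos[1] + vel_imu[1] * dt
    # Correct toward the GPS fix when the signal is present.
    if gps_fix is not None:
        x += alpha * (gps_fix[0] - x)
        y += alpha * (gps_fix[1] - y)
    return (x, y)

pos = (0.0, 0.0)
fixes = [(1.0, 0.1), None, None, (4.1, 0.3)]   # GPS drops out for two steps
for fix in fixes:
    pos = fuse_step(pos, vel_imu=(1.0, 0.05), gps_fix=fix, dt=1.0)
print(pos)
```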
III. The robot rescue projects

The ICARUS Project

After a significant disaster strikes, one of the primary tasks for crisis management teams is to locate and rescue any human survivors present at the scene. The primary objective of the ICARUS project [2][3] is to design a compact and lightweight camera system capable of detecting these survivors. To achieve this goal, the project will focus on developing an infrared sensor with exceptionally high sensitivity specifically in the mid-infrared wavelength range, which is deemed most effective for detecting human presence. Photovoltaic low-noise detectors like the quantum cascade detector (QCD) are well-suited for this purpose due to their ability to efficiently detect human survivors amidst challenging conditions. Consequently, the ICARUS project will prioritize the development of such detectors.

Aquatic search and rescue (SAR) operations encounter natural obstacles due to the limited survival time of individuals in water, even in mild climates. Additionally, the safety of SAR teams must be considered when deploying resources, especially during night-time, low visibility, or harsh weather conditions, where a rapid response could pose risks to the teams. To address these challenges, unmanned surface vehicles (USVs) are proposed for SAR tasks. These vehicles can carry search equipment and deploy initial aid devices, reducing the time it takes for basic life support equipment to reach the incident area. Moreover, traditional life rafts can be automated to move independently towards survivors, speeding up the recovery process. The project outlines two main approaches to meet these needs: enhancing survival capsules to move towards surface survivors and modifying medium-sized USVs for SAR operations. By equipping survival capsules with power generation, essential instruments, communication tools, and motion capabilities, their effectiveness in rescuing individuals in hard-to-reach scenarios can be improved. USVs, being unmanned, offer the advantage of conducting SAR missions in adverse weather conditions without risking additional human lives. The adaptation of USVs for SAR focuses on sensing and perception, mission planning and control, and developing a system for deploying life-saving rafts from the USVs in the incident area.

Figure 12: USVs developed within the ICARUS project.

Advanced technological tools are ineffective for crisis managers if they lack the necessary training. Therefore, a comprehensive training and support infrastructure is crucial. A significant trend involves the development of PC-based trainer simulators and online e-training. However, for Search and Rescue (SAR) operations, such infrastructure is currently lacking and urgently needed. The ICARUS project involves the use of various unmanned vehicles, emphasizing the need to provide software tools capable of simulating these systems for training purposes. Different simulation environments, including ground, air, and water scenarios, will be developed to train future ICARUS operators effectively. These training tools will simulate specific scenarios where virtual robots transmit sensor data to the Command and Control Component operated by rescue services, enabling them to evaluate simulated emergencies and respond appropriately. Additionally, past event scenarios will be recorded and replayed for training using this tool. The Command and Control Component supporting rescue services will integrate spatial information from various sources such as area maps, satellite images, and sensor data from unmanned robots to provide a comprehensive situational overview to rescue teams, aiding decision-making. An interactive human-machine interface utilizing semantic information will be utilized for rescue operations. The Command and Control Component will allocate ICARUS robots to rescue teams and coordinate control decisions to minimize risks during task execution.

Figure 13: Large Unmanned Ground Vehicle.

The Large Unmanned Ground Vehicle (LUGV) planned for the ICARUS project will play several crucial roles in emergency response scenarios. Firstly, it will serve as a mobile sensor platform, gathering precise data necessary for navigating challenging environments and supporting emergency teams. Equipped with various sensor systems, it will adapt to different environment types. Secondly, it will function as a platform for a powerful manipulator, capable of clearing obstacles and lifting objects, including deploying smaller UGVs to inaccessible areas.
Thirdly, it will act as a transport platform for Small UGVs (SUGVs), carrying multiple sensor systems and providing access to areas inaccessible to the LUGV itself.

Figure 14: Real-life example of a LUGV.

Figure 15: A second real-life example of a LUGV.

The LUGV will be powered by a combustion engine and employ a chain drive for maneuverability on difficult terrain, with a maximum speed of 25 km/h. It will feature inertial measurement and satellite-based localization systems for navigation. Sensor systems include bumpers, laser systems for terrain analysis and obstacle detection, stereo systems, time-of-flight cameras, and victim detection sensors. The manipulator will be a 6-axis robotic arm capable of lifting up to 250 kg, controlled by an exoskeleton for force feedback. The LUGV will operate semi-autonomously, utilizing a behavior-based control framework to fuse sensor inputs for real-time cognitive reasoning while maintaining human oversight.

The SHERPA Project

Figure 16: Sketch from the SHERPA team.

The SHERPA project [4] focuses on a combined aerial and ground robotic platform designed to aid human operators in surveillance and rescue missions, particularly in harsh environments like alpine scenarios. The core "SHERPA team" comprises:

• A human rescuer, typically a skilled professional such as a mountain guide or forest guard, who is an integral part of the team. The human transmits their position wirelessly to the robotic platform and communicates with it using user-friendly technological devices, facilitating natural interaction without distracting from critical tasks. Vocal and gestural interactions are emphasized in the SHERPA system.

• Small-scale rotary-wing Unmanned Aerial Vehicles (UAVs) equipped with cameras and sensors extend the coverage area beyond what the human rescuer can manage alone, providing visual information and monitoring emergency signals. These UAVs are engineered for high autonomy and can be supervised by the human in a straightforward manner, essentially acting as additional "eyes" for the rescuer. They are designed to be safe near humans and can potentially be deployed manually by the rescuer. However, they have limited autonomy, payload capacity, and operational range.
• A ground rover serves as a transportation module for the rescuer's equipment, a hardware station with computational and communication capabilities, and a recovery and recharging module for the small-scale unmanned aerial vehicles introduced above. It operates autonomously with long endurance and a payload calibrated for relevant hardware. Wirelessly connected to the rescuer, it follows movements and interacts naturally. A multi-functional robotic arm enhances autonomous capabilities, aiding in small-scale UAV deployment. Key elements include computational and communication hardware, UAV recovery/recharge stations, and equipment storage, all enclosed in the "SHERPA box" for potential decoupling and transport via high-payload UAVs.

• Long-endurance, high-altitude, and high-payload aerial vehicles, complementing the small-scale UAVs, complete the SHERPA team. They construct 3D maps, serve as communication hubs, and patrol large areas, flying at around 50-100 meters above ground. Two complementary technological platforms from the SHERPA research groups include a fixed-wing UAV from ETH Zurich and a rotary-wing UAV, the Yamaha RMAX, from Linköping University.

Figure 17: SHERPA rover.

The small-scale UAVs have limited autonomy and onboard intelligence due to payload constraints, yet they offer incomparable capabilities for capturing data, such as visual information, from advantageous positions. With high maneuverability, they can hover over targets and follow the rescuer in inaccessible areas. Their operational radius is confined to the vicinity of the hosting ground rover due to battery limitations. They serve as the "trained wasps" of the team.

Figure 18: Trained wasp.

The main objective is to equip the SHERPA team with cognition and control capabilities to support the rescuer. The technological platform feeds relevant information, such as detailed 3D maps, visual data, emergency signals, and suggested patrolling directions. Overall, it aims to improve human perception in the demanding rescue environment by providing filtered information. Remarkable cognitive capabilities are necessary for fusing and filtering multi-source information from heterogeneous SHERPA actors. Autonomy is instrumental in achieving this goal.

To confine the research activity for objective evaluation, five benchmarks will drive SHERPA activities with both "academic" and "industrial" significance. These benchmarks aim to test all SHERPA actors' features, cognition, control capabilities, and complementarities. Specific metrics to assess the robotic platform's performance in these benchmarks will be developed within the project. The first two benchmarks, "Virtual Ground Leashing" and "Virtual Aerial Leashing," focus on human-platform interaction, with the ground and aerial platforms as the main focus, respectively.
The third and fourth benchmarks, "Ground and Aerial Deployment" and "Team Coordination," highlight collaborative features within and between SHERPA teams. Finally, the fifth benchmark, "The Avalanche," is designed to be highly realistic, involving all SHERPA actors and testing most platform functionalities from previous benchmarks. Each benchmark will have associated demonstration activities.

Figure 19: Visualisation of the SHERPA search area.

IV. Conclusion

The application of autonomous robot teams for search and rescue missions is a field of research that is still in its infant phase and is the only logical next step to advance the field. It hasn't had its breakthrough moment yet for it to see wide use around the world and is mostly used in experimental situations. The funding and the research that it has garnered in the last few decades need to be increased for significant advancements to be made. The already achieved milestones are still significant and help the individual systems or institutions that utilise them in their missions, but are insignificant in the grand scheme of things. The potential reward of using autonomous robot teams is extremely high because it reduces the risk of involving and putting humans (rescue teams) in already dangerous environments.

References

[1] Yun Bai and Yuan Bin Hou. Research of environmental modeling method of coal mine rescue snake robot based on information fusion. In 2017 20th International Conference on Information Fusion (Fusion), pages 1–8, 2017.

[2] Geert De Cubber, Daniela Doroftei, Yvan Baudoin, Daniel Serrano, Karsten Berns, Christopher Armbrust, Keshav Chintamani, Rui Sabino, Stephane Ourevitch, and Tommaso Flamma. Search and rescue robots developed by the European ICARUS project. October 2013.

[3] Geert De Cubber, Daniela Doroftei, Daniel Serrano, Keshav Chintamani, Rui Sabino, and Stephane Ourevitch. The EU-ICARUS project: Developing assistive robotic tools for search and rescue operations. In 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pages 1–4, 2013.

[4] L. Marconi, C. Melchiorri, M. Beetz, D. Pangercic, R. Siegwart, S. Leutenegger, R. Carloni, S. Stramigioli, H. Bruyninckx, P. Doherty, A. Kleiner, V. Lippiello, A. Finzi, B. Siciliano, A. Sala, and N. Tomatis. The SHERPA project: Smart collaboration between humans and ground-aerial robots for improving rescuing activities in alpine environments. In 2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pages 1–4, 2012.

[5] Ali Marjovi, João Gonçalo Nunes, Lino Marques, and Aníbal de Almeida. Multi-robot exploration and fire searching. In 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1929–1934, 2009.

[6] Robin R. Murphy, Jeffery Kravitz, Samuel L. Stover, and Rahmat Shoureshi. Mobile robots in mine rescue and recovery. IEEE Robotics & Automation Magazine, 16(2):91–103, 2009.

[7] Mario Silvagni, Andrea Tonoli, Enrico Zenerino, and Marcello Chiaberge. Multipurpose UAV for search and rescue operations in mountain avalanche events. Geomatics, Natural Hazards and Risk, 8(1):18–33, 2017.
