J Intell Robot Syst

DOI 10.1007/s10846-012-9759-5

A Ground Control Station for a Multi-UAV Surveillance System
Design and Validation in Field Experiments

Daniel Perez · Ivan Maza · Fernando Caballero · David Scarlatti · Enrique Casado · Anibal Ollero

Received: 29 June 2012 / Accepted: 26 July 2012
© Springer Science+Business Media B.V. 2012

Abstract This paper presents the ground control station developed for a platform composed of multiple unmanned aerial vehicles for surveillance missions. The software application is fully based on open source libraries and has been designed as a robust and decentralized system. It allows the operator to dynamically allocate different tasks to the UAVs and to show their operational information in a 3D realistic environment in real time. The ground control station has been designed to assist the operator in the challenging task of managing a system with multiple UAVs, trying to reduce his workload. The multi-UAV surveillance system has been demonstrated in field experiments using two quadrotors equipped with visual cameras.

Keywords Multi-UAS platforms · Ground control station · Decentralized architectures · Graphical interfaces

D. Perez · I. Maza (B) · F. Caballero · A. Ollero
Grupo de Robotica, Vision y Control, Universidad de Sevilla, Seville, Spain
e-mail: imaza@us.es

D. Perez
e-mail: dprodriguez.grvc@gmail.com

F. Caballero
e-mail: fcaballero@us.es

A. Ollero
e-mail: aollero@cartuja.us.es

D. Scarlatti · E. Casado
Boeing Research & Technology Europe, Avenida Sur del Aeropuerto de Barajas 38, Bldg 4, Madrid 28042, Spain

D. Scarlatti
e-mail: David.Scarlatti@boeing.com

E. Casado
e-mail: Enrique.Casado@boeing.com

A. Ollero
Center for Advanced Aerospace Technology (CATEC), Parque Tecnológico y Aeronáutico de Andalucía, La Rinconada, Spain
e-mail: aollero@catec.aero

1 Introduction

The progress on miniaturization technologies, together with new sensors, embedded control systems and communications, has boosted the development of many new small and relatively low-cost Unmanned Aerial Vehicles (UAVs). However, constraints such as power consumption, weight and size play an important role in the UAVs' performance, particularly in small, light and low-cost UAVs. Therefore, the cooperation of many of these vehicles is the most suitable approach for many applications. A single powerful aerial vehicle equipped with a large array of sensors of different modalities is limited at any one time to a single view point.
However, a team of aerial vehicles can simultaneously collect information from multiple locations and exploit the information derived from multiple disparate points to build models that can be used to take decisions. Team members can exchange sensor information, collaborate to identify and track targets, and perform detection and monitoring activities, among other tasks [1, 2]. Thus, for example, a team of aerial vehicles can be used for exploration, detection, precise localization, monitoring and measuring the evolution of natural disasters, such as forest fires. Furthermore, the multi-UAV approach leads to redundant solutions offering greater fault tolerance and flexibility.

An efficient, user-friendly Ground Control Station (GCS) is a crucial component in any Unmanned Aerial System (UAS) based platform. The GCS gathers all the information about the UAV status and allows the operator to send commands according to the specified missions. It is remarkable that most stations include a common set of components, such as artificial horizons, battery and IMU indicators and, lately, 3D environments [3–5], as generally accepted useful elements for the operators.

The operator's workload grows exponentially with the number of UAVs operating in the platform. Due to the critical nature of unmanned flights, there has been a continuous effort to improve the capabilities of GCSs managing multiple UAVs. The use of multimodal technologies is becoming usual in current GCSs [6, 7], involving several modalities such as positional sound, speech recognition, text-to-speech synthesis or head-tracking. The level of interaction between the operator and the GCS increases with the number of information channels, but these channels should be properly arranged in order to avoid overloading the operator.

In [8] some of the emerging input modalities for human-computer interaction (HCI) are presented, and the fundamental issues in integrating them at various levels—from the early "signal" level to the intermediate "feature" level and the late "decision" level—are discussed. The different computational approaches that may be applied at the different levels of modality integration are presented, along with a brief review of several demonstrated multimodal HCI systems and applications.

Reference [9] provides a survey on relevant aspects such as the perceptual and cognitive issues related to the interface of the UAV operator, including the application of multimodal technologies to compensate for the dearth of sensory information available. The decrease in the reaction time of the operator when adding modalities such as auditory information (3D audio), speech synthesis, haptic feedback and touch screens is analyzed in [10].

It is possible to find commercial GCSs for multi-UAV systems ranging from the advanced proprietary and closed solution by Boeing for the X-45 [11, 12] to open source solutions such as QGroundControl [5] or the Paparazzi System [13] used in [14].

This paper presents the ground control station developed for a platform composed of multiple unmanned aerial vehicles in surveillance missions demonstrated in field experiments. The application is fully based on open source libraries and has been designed as a robust and decentralized system. It allows the operator to allocate different tasks to the UAVs and shows their operational information in a 3D realistic environment in real time.

2 Multi-UAV Surveillance System Architecture

The system used to validate the ground control station is composed of a team of UAVs, a network of ground cameras and the station itself. The overall software architecture of the system is shown in Fig. 1, where different independent processes can be identified. Each software component can be transparently executed on the same machine or on different computers thanks to the communication functions provided by the YARP library. The only exception is the UAV position/orientation controller, which must be executed on its corresponding on-board computer.

In each UAV there are two main levels: the Automatic Decision Level (ADL) and the proprietary Executive Level (EL). The former deals with high-level decision-making whereas the latter is in charge of the execution of the so-called elementary tasks (land, take-off and go-to). In the interface between both levels, the ADL sends task requests and receives the execution state of each task and the UAV state.
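To make the ADL–EL interface above more concrete, the following minimal C++ sketch shows how a task request could be published over YARP ports, the communication layer named above. The port names (/uav1/adl/task_out, /uav1/el/task_in) and the message layout (task type plus target coordinates) are illustrative assumptions, not the actual protocol of the platform.

    // Minimal YARP sketch of a task request from the ADL to the EL.
    // Port names and message fields are assumptions for illustration.
    #include <yarp/os/all.h>

    int main() {
        yarp::os::Network yarp;  // initialize the YARP network

        yarp::os::BufferedPort<yarp::os::Bottle> taskOut;
        taskOut.open("/uav1/adl/task_out");
        yarp::os::Network::connect("/uav1/adl/task_out", "/uav1/el/task_in");

        // Encode an elementary GO-TO task as a simple Bottle message.
        yarp::os::Bottle& msg = taskOut.prepare();
        msg.clear();
        msg.addString("GO-TO");   // elementary task type
        msg.addDouble(37.4103);   // latitude (illustrative)
        msg.addDouble(-5.9507);   // longitude (illustrative)
        msg.addDouble(30.0);      // altitude in meters (illustrative)
        taskOut.write();          // the EL would report execution state back

        return 0;
    }

In the real platform the EL replies with the execution state of each task and the UAV state through the same kind of port-based interface, as described above.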
Fig. 1 Decentralized software architecture. The boxes and ellipses represent the ground and flying segment processes respectively. In each UAV, there are two main levels: the Automatic Decision Level (ADL) and the proprietary Executive Level (EL). The gray arrows represent XBee communication links. Finally, the YARP based communications are represented by black (video transmission) and white (non-video) arrows

For distributed decision-making purposes, interactions among the ADLs of different UAVs are required. Finally, the position/orientation controller runs on the hardware on-board the UAV. The main elements of this architecture are described in [1], which summarizes the functionalities provided at each level.

The ground control station allows the user to specify the missions and tasks to be executed by the platform, and also to monitor the execution state of the tasks and the status of the different components of the platform. On the other hand, the ground camera network provides images from the area of interest for surveillance purposes.

A useful capability of the system is the possibility to operate in simulation mode. The EL and ADL can simulate a virtual UAV that is treated like a real one by the GCS. From the point of view of the GCS and its operator, it is transparent whether the UAV is real or simulated on a computer. In simulation mode, the operator can increase his skills with the platform and the GCS, improving his reaction times and situation awareness during the design and execution of surveillance missions.

The next section is focused on the design and implementation of the GCS and its interaction with the rest of the components.

3 Ground Control Station Design and Implementation

The main goal in the design of the GCS was to simplify the command and control of a multi-UAV surveillance system in order to keep the workload of the operator below a certain level. The system was designed to be controlled by a single user, who must be able to command several interacting UAVs in order to perform coordinated missions. In addition, the following aspects have been considered in the design:

– The GCS should automatically detect insertions or removals of UAVs in the system, following a plug-and-play paradigm.
– The GCS should provide visual alerts as a response to different events that may happen during the operation of the system, such as a low battery level or a problem with the video transmission of a ground camera.
– The GCS should display all the information required to be aware of the state of each UAV. However, the information must be carefully selected in order to avoid overloading the operator.
– A certain level of decisional autonomy is assumed on-board each UAV (see Section 2), so that the user does not need to pay continuous attention to the state of all the UAVs in the system.
Fig. 2 Ground control station main layout. Four main areas have been numbered: 1 UAV selector, 2 detailed UAV information, 3 map area and 4 tab widget

The resulting layout of the GCS is depicted in Fig. 2, where four main areas have been numbered:

1. UAV selector. One click on the UAV's identifier (or its shortcut) selects the UAV, updating area 2 in Fig. 2 with its information and making an automatic zoom over its location on the map. The layout of the current version of the GCS has been designed to support four UAVs in terms of selectors, shortcuts, etc. However, the software can be easily adapted to support more UAVs; the workload of the operator is the limiting factor regarding the maximum number of UAVs that can be handled from the GCS.
2. Detailed information of the selected UAV. These indicators show all the information required to localize the selected UAV at any point on the Earth: the GPS latitude, longitude and altitude, the heading, the barometric altitude and the number of available GPS satellites being used by the UAV on-board controller. The UAV status shows its current mode of operation and the level of the battery (voltage and current). A visual/sound alarm is triggered when the battery reaches a critical level and a red LED is turned on. Additionally, it shows information coming from the Inertial Measurement Unit (IMU).
3. Map area. This is an interactive map that displays important information such as the locations of UAVs, alarms or motion tasks.
4. Tab widget. It contains all the widgets of the application (logging, sensor visualization, etc.), except the map widget.

The common cycle of operation is simple. After the initial setup (detecting the screens attached to the station, activating the sound, etc.), an automatic update process is executed every ten seconds in order to detect the addition or removal of components (such as UAVs and cameras) in the system. If a new component is detected, a thread showing its information is launched at a refresh rate of 10 Hz, while in parallel the GCS updates the different widgets. The overall process is shown in Fig. 3.
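As a rough sketch of this cycle (assuming hypothetical detectComponents() and refreshWidgets() routines, since the actual GCS source is not listed in the paper), two Qt timers can drive the ten-second discovery step and the 10 Hz widget refresh:

    // Sketch of the GCS operation cycle with two QTimer objects.
    // detectComponents() and refreshWidgets() are hypothetical slots.
    #include <QApplication>
    #include <QObject>
    #include <QTimer>

    class GcsLoop : public QObject {
        Q_OBJECT
    public slots:
        void detectComponents() { /* scan for added/removed UAVs and cameras */ }
        void refreshWidgets()   { /* update indicators with the latest data  */ }
    };

    int main(int argc, char* argv[]) {
        QApplication app(argc, argv);
        GcsLoop loop;

        QTimer discovery;                 // plug-and-play detection
        QObject::connect(&discovery, SIGNAL(timeout()),
                         &loop, SLOT(detectComponents()));
        discovery.start(10000);           // every ten seconds

        QTimer refresh;                   // widget update
        QObject::connect(&refresh, SIGNAL(timeout()),
                         &loop, SLOT(refreshWidgets()));
        refresh.start(100);               // 10 Hz

        return app.exec();
    }

    #include "main.moc"  // required when built as a single file with qmake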
Fig. 3 A scheme of the main execution loop of the ground control station software. The different events or tasks are treated in their specific slots, in parallel to the main loop execution

The selection criteria for the libraries used in the software implementation were based on:

– Widely contrasted usage in the community and reliability.
– Well documented APIs.
– Open source.
– Multiplatform.
– C++ based.

According to these prerequisites, the following decisions were taken in the design:

– The Graphical User Interface (GUI) was developed using the Qt library [15], which allows an easy development of user interfaces thanks to its signals and slots connection method [16]. It provides its own IDE and easy integration with other C++ libraries.
– The video image processing was based on Intel's OpenCV library [17], a de facto standard in digital image processing.
– The integration of the map was solved using the Marble library [18], which provides a fully compatible Qt widget.
– A realistic 3D environment was developed using the OpenSceneGraph [19] and OSG Earth [20] libraries.
– Finally, interprocess communication was solved by using the YARP library [21]. YARP is an open source, multiplatform framework, mostly written in C++, that supports distributed computation in a very efficient way.

During the software development stage, the main difficulties were found in the asynchronous nature of the system, with multiple UAVs, tasks and associated events. In addition, several issues related to the integration of the different libraries had to be solved.
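The signals and slots mechanism mentioned above is also what keeps these asynchronous events manageable: each event source emits a signal and the GUI reacts in a slot, with Qt queueing the call automatically when the emitter lives in another thread. The sketch below is a hedged illustration with hypothetical UavProxy and AlarmPanel classes and an illustrative battery threshold; it is not code from the actual GCS.

    // Hedged illustration of the signal/slot pattern for asynchronous
    // events. UavProxy and AlarmPanel are hypothetical class names.
    #include <QCoreApplication>
    #include <QObject>

    class UavProxy : public QObject {
        Q_OBJECT
    public:
        void onTelemetry(double voltage) {
            if (voltage < 10.5)                 // illustrative threshold
                emit batteryCritical(voltage);  // asynchronous event
        }
    signals:
        void batteryCritical(double voltage);
    };

    class AlarmPanel : public QObject {
        Q_OBJECT
    public slots:
        void showBatteryAlarm(double voltage) {
            // switch on the red LED widget and play the alarm sound
        }
    };

    int main(int argc, char* argv[]) {
        QCoreApplication app(argc, argv);
        UavProxy uav;
        AlarmPanel panel;
        // Queued automatically by Qt if uav lives in a worker thread.
        QObject::connect(&uav, SIGNAL(batteryCritical(double)),
                         &panel, SLOT(showBatteryAlarm(double)));
        uav.onTelemetry(10.1);  // would trigger the alarm slot
        return 0;
    }

    #include "main.moc"  // single-file build with qmake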
Finally, it should be mentioned that the design and implementation of the GCS have no limitation in terms of the number of UAVs that can be managed. In this paper, the screenshots correspond to a platform composed of four UAVs, but the layout of the interface can be directly and easily configured for a higher number of UAVs. At this point, the limit is given by the capabilities of the human operator when dealing with an increasing number of UAVs.

In the following subsections, the main widgets of the GCS are described along with their usage by the operator to control several UAVs.

3.1 Marble Map Widget

The map widget is an important component in any GCS: it shows the position and orientation of the UAVs on a global map, updating them in real time. It also shows additional information, such as areas of interest, intruders, waypoints, trajectories, video streaming from the UAVs, etc. The widget is based on the Marble library, and the geographical information is provided by the Open Street Map project [22]. This combination has proven to be an interesting option compared to Google Maps, with an important advantage: it does not require access to the Internet during the operation of the system (it is possible to pre-cache the areas of interest on disk).

The map widget is completely interactive and the user can perform different operations: zoom in/out, move around, follow the selected UAV, generate missions and pathways, choose the elements to be displayed (UAVs, waypoints, camera projections, intruders or surveillance zones), get coordinates, etc.

3.2 SVG Widgets

Some information components, such as the battery levels, are represented using SVG widgets (see Fig. 4, left) to simulate a cockpit environment that can be more intuitive for the operator. These widgets are based on the Qt Embedded Widgets Demo [15].

3.3 OSG Earth Widget

The OSG Earth widget shows a 3D real-size model of the Earth, with the geographical information also provided by the Open Street Map project. Over the Earth model, the system draws the UAVs and other elements relevant to the mission, such as the waypoints, lines from the UAV to the next scheduled waypoint, planes perpendicular to the surface of the Earth showing the remaining trajectory, etc.

Additional 3D models of buildings, bridges, monuments, etc. can be displayed on the terrain to increase the realism of the environment and to improve the situational awareness of the remote operator. These objects can be loaded from a COLLADA model [23]. Figure 5 shows a screenshot of the widget.
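Returning to the map widget of Section 3.1, the sketch below shows the minimal Marble calls needed to obtain an OpenStreetMap view centered on the operation area. The UAV icons, waypoints and camera projections drawn by the GCS, as well as the pre-cached tile handling, are application-specific code not shown here; the coordinates and zoom value are illustrative.

    // Minimal Marble sketch: an OpenStreetMap view centered near Seville.
    // Overlays (UAVs, waypoints, intruders) are GCS-specific and omitted.
    #include <QApplication>
    #include <marble/MarbleWidget.h>

    int main(int argc, char* argv[]) {
        QApplication app(argc, argv);

        Marble::MarbleWidget map;
        map.setMapThemeId("earth/openstreetmap/openstreetmap.dgml");
        map.setProjection(Marble::Mercator);
        map.centerOn(-5.9869, 37.3828);  // lon, lat (illustrative)
        map.zoomView(2600);              // Marble's logarithmic zoom value
        map.resize(800, 600);
        map.show();

        return app.exec();
    }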

Fig. 4 Left Example of the SVG widgets used in the ground control. Right UAV control panel that integrates buttons for the different tasks, a mini-HUD and a classical mission time counter
Fig. 5 Screenshot of the OSG Earth widget used in the ground control station. The camera can be configured to be controlled by the operator or to follow the UAV automatically from a chase or top view. It is also possible to switch to a virtual view of any of the available ground cameras
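As a companion to the OSG Earth widget of Section 3.3 shown in Fig. 5, a bare-bones osgEarth viewer might look like the sketch below. The map definition file world.earth is a hypothetical name standing for an OpenStreetMap-based configuration; embedding the view in a Qt widget and drawing the UAV and waypoint models are left out.

    // Bare-bones osgEarth viewer sketch; "world.earth" is a hypothetical
    // map definition file. The Qt embedding used by the GCS is omitted.
    #include <osgDB/ReadFile>
    #include <osgViewer/Viewer>
    #include <osgEarthUtil/EarthManipulator>

    int main(int, char**) {
        osgViewer::Viewer viewer;

        osg::Node* globe = osgDB::readNodeFile("world.earth");
        if (!globe)
            return 1;  // map definition not found

        viewer.setSceneData(globe);
        viewer.setCameraManipulator(new osgEarth::Util::EarthManipulator());
        return viewer.run();
    }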

3.4 Multi-UAV Control Panel Widget

When a UAV is added to the system, a new UAV control panel widget is created with a unique color code identifier. The control panel shows the UAV number and a LED with the status of the communication link. There are also eight buttons to send the following tasks to the UAV (from left to right according to Fig. 4, right):

– Row 1: TAKE-OFF, LAND, RETURN-HOME, GO-TO.
– Row 2: SURVEILLANCE, WAIT, TAKE-IMAGES, RECHARGE.

It is important to highlight that the disposition of the command buttons and the information simplifies the work of the operator. The task buttons included in the panel allow the operator to build complex missions based on the combination of different tasks. In addition, the ADL and EL on-board the UAV (see Section 2) check the specified missions against inconsistencies and unsafe trajectories.

Finally, a mini Head-Up Display (HUD) with a classical artificial horizon has been developed and integrated. It shows the orientation of the UAV with respect to the horizon, and indicators of altitude, heading and speed. This information is overlaid on the images from an on-board camera if it is available. This widget has been developed from scratch using the graphical capabilities provided by the Qt library [15].

3.5 Graphs Widgets

The graphs widget plots data from different sensors of the selected UAV in real time. Available measurements are the yaw, pitch and roll angles, the speed vector and the battery levels. It also plots the trajectory followed by the UAV in Cartesian coordinates [x, y] (as can be seen in Fig. 6), so the operator can track its flight path during the execution of the mission.
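The paper does not detail how the graphs widget is implemented, but a rough plain-Qt sketch of the [x, y] trajectory plot could simply accumulate samples in a polyline and repaint, as below (scaling from meters to pixels is omitted and the class name is hypothetical):

    // Rough sketch of a trajectory plot widget; not the actual GCS code.
    #include <QPainter>
    #include <QPolygonF>
    #include <QWidget>

    class TrajectoryPlot : public QWidget {
    public:
        void addSample(double x, double y) {
            path << QPointF(x, y);
            update();                      // schedule a repaint
        }
    protected:
        void paintEvent(QPaintEvent*) {
            QPainter p(this);
            p.fillRect(rect(), Qt::white);
            p.setPen(Qt::blue);
            p.drawPolyline(path);          // meters-to-pixels scaling omitted
        }
    private:
        QPolygonF path;                    // trajectory followed by the UAV
    };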
Fig. 6 An example of a graph widget showing the trajectory followed by the UAV in Cartesian coordinates [x,y]

3.6 OSD Notifications and Alarms

The different events that may occur during the execution of a mission are notified to the operator in different ways. At the top level, an On Screen Display (OSD) system has been developed, popping up the notifications to the user in a partially transparent window, as can be seen in Fig. 7. Events with higher priority, or which need an interactive answer from the operator, are displayed in a pop-up window, as shown in Fig. 12. Sounds can also be linked to both types of alarms.

Fig. 7 An example of the OSD notification system that uses a partially transparent window to keep the operator informed about the different events

3.7 Log Player Widget

The log player widget is a very useful tool to replay previous flights and experiments for mission debriefing purposes. The operator only has to select a log file, corresponding to a previous mission, to replay it using the common multimedia controls (play, stop, etc.).

4 GCS Testing in Field Experiments

The ground control station was tested in field experiments in November 2011 in Seville. The overall multi-UAV surveillance system architecture described previously, along with the developed GCS, was used in two types of missions with two customized quadrotors.

4.1 Hardware Setup

The UAVs integrated in the system were two quadrotors of the model Pelican manufactured by Ascending Technologies [24]. This quadrotor had enough available payload (500 g) for executing different missions. It is equipped with an on-board Intel Atom computer, which allows add-on sensors and cameras to be connected, mainly via mini USB interfaces. The UAV has a flight autonomy of approximately fifteen minutes at full payload.

A customized Ubuntu Linux distribution was installed on the Atom board and configured with several watchdog processes in order to guarantee the wireless LAN communication and the allocation of identifiers to the devices on-board. Each quadrotor was equipped with a high definition webcam, an XBee-PRO radio data link and a GPS receiver for telemetry, all of them connected to the Atom board with customized mini USB connections (see Fig. 8). The captured images were transmitted to the ground through the wireless LAN connection, whereas the on-board position and orientation controller used the XBee radio link to send the telemetry data to its EL/ADL processes (running on a laptop). This hardware setup is summarized in Fig. 9.
Due to the decentralized software design, it is also possible to run all the processes (GCS and both EL/ADLs) on a single laptop. However, to improve supervision during the field experiments, it was decided to use three different laptops: two of them running the EL/ADL associated with each quadrotor and the third one executing the GCS.

The next sections describe the two types of missions carried out in Seville in November 2011 to test the developed multi-UAV GCS.

Fig. 8 Customized quadrotor of the model Pelican by Ascending Technologies equipped with a high definition webcam, an XBee-PRO radio data link and a GPS receiver for telemetry, all of them connected to the Atom board with customized mini USB connections

Fig. 9 Hardware setup used in the field experiments. The captured images were transmitted to the ground through the wireless LAN connection, whereas the on-board position and orientation controller used the XBee radio link to send the telemetry data to its EL/ADL processes (running on a laptop). The Linksys E4200 dual band router was used to support LAN communication between the ground computers/cameras and wireless communication with the on-board IP cameras

4.2 Mission 1: Camera Network Dynamic Repairing

The objective of this mission is to use the IP cameras on-board the UAVs to provide images of an area in which a ground camera is damaged or temporarily offline. Thus, the system must be able to detect the potential failures of the ground cameras and to schedule the required tasks to allow the UAV to cover the area around the non-working camera location.

If the signal from one camera of the network (or its datalink) is lost, an alarm is triggered on the GCS map, as shown in Fig. 10. Then, the operator is asked to confirm an automatic task for the UAVs in order to cover the area lost by the camera.
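The paper does not give the failure-detection code, but the behaviour described above can be illustrated with a simple heartbeat watchdog: a camera is declared lost when no frame has arrived within a timeout, and an alarm callback is raised so the operator can confirm the repair task. The five-second threshold and the identifiers below are assumptions.

    // Illustrative camera-network watchdog (not the platform's real code).
    #include <chrono>
    #include <map>
    #include <string>

    using Clock = std::chrono::steady_clock;

    struct CameraWatchdog {
        std::map<std::string, Clock::time_point> lastFrame;
        std::chrono::seconds timeout{5};  // assumed threshold

        void frameReceived(const std::string& cameraId) {
            lastFrame[cameraId] = Clock::now();
        }

        // Called periodically from the GCS loop; raiseAlarm() would put
        // the alarm on the map widget and ask for operator confirmation.
        template <class RaiseAlarm>
        void check(RaiseAlarm raiseAlarm) const {
            const Clock::time_point now = Clock::now();
            for (const auto& entry : lastFrame)
                if (now - entry.second > timeout)
                    raiseAlarm(entry.first);
        }
    };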
Fig. 10 If a camera of the ground network (or its datalink) is lost, an alarm is triggered on the GCS map. The operator is asked to confirm an automatic task for the UAVs in order to cover the area lost by the camera

The task is allocated to one of the UAVs according to its available capabilities, which depend on its location, flight endurance and on-board sensors. The UAV takes off, flies to the selected location and sends the images to the GCS until the ground camera link is restored (see Fig. 11). If the UAV's autonomy is below a certain threshold, the GCS will detect it and will communicate a RETURN-HOME task to get the UAV back. In this case, the original task can be automatically transferred to another UAV in order to continue with the mission.

Fig. 11 UAV-1 is hovering on the area previously covered by the lost camera to restore the surveillance

4.3 Mission 2: Cooperative Area Surveillance

In this mission, the area of interest was automatically split into subareas taking into account the number of available UAVs in order to efficiently patrol it. The mission was launched by the GCS (under the operator's supervision) when an intruder was detected and localized by the ground camera network. The intruder alarm was displayed on the interactive map of the GCS, as shown in Fig. 12. Then the operator had two options:

1. Allow the automatic generation of a surveillance task around the area in which the intruder was detected.
2. Set different waypoints manually around the area of interest.

In the field experiments conducted, the operator always selected the first option in order to test the autonomous capabilities of the system. Thus, a surveillance area task was automatically requested by the GCS to the ADLs, which replied with the subareas and waypoints for each UAV considering the operational constraints, such as battery levels or distances to the subareas. These subareas and waypoints were displayed on the GCS to be validated (see Fig. 13). Then, the operator could have modified or removed the different tasks allocated to each UAV. Once the plan was completed, the operator validated it and the autonomous execution of the mission started.
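The actual subarea allocation is negotiated by the ADLs and also weighs battery levels and distances, as described above and in [1]; the sketch below only illustrates the simplest possible partition, an even split of a rectangular area of interest into one vertical strip per available UAV.

    // Illustrative only: an even split of a rectangular area of interest
    // into vertical strips, one per available UAV. The real allocation is
    // computed by the ADLs and considers battery levels and distances.
    #include <vector>

    struct Rect { double xMin, yMin, xMax, yMax; };

    std::vector<Rect> splitArea(const Rect& area, int numUavs) {
        std::vector<Rect> subareas;
        const double width = (area.xMax - area.xMin) / numUavs;
        for (int i = 0; i < numUavs; ++i)
            subareas.push_back({area.xMin + i * width, area.yMin,
                                area.xMin + (i + 1) * width, area.yMax});
        return subareas;
    }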
The IP cameras on-board were pointing downwards, sending images through the wireless link to the GCS, and the operator could supervise the area searching for other potential intruders. The workload of the operator was low during the execution, as he only had to attend to the images from the cameras displayed on the map.

Fig. 12 Automatic intruder alarm window. The operator has the option to allow the automatic generation of a surveillance task around the area in which the intruder was detected

Fig. 13 Resulting automatic area partition and list of waypoints

Table 1 contains web links to the following videos:

– Widgets: Video showing all the widgets described in Sections 3.1–3.6.
– Simulation: A simulation of a mission involving four UAVs.
– Mission 1: Mission described in Section 4.2.
– Mission 2: Mission described in Section 4.3.

Table 1 Web links to several videos showing different GCS capabilities: widgets described in Sections 3.1–3.6, simulation involving four UAVs and field missions (Mission 1 and Mission 2) carried out in November 2011 using a team of two quadrotors

Widgets      http://grvc.us.es/GCS/gcs_widgets.wmv
Simulation   http://grvc.us.es/GCS/gcs_sim.avi
Mission 1    http://grvc.us.es/GCS/gcs_mission1.wmv
Mission 2    http://grvc.us.es/GCS/gcs_mission2.wmv

5 Conclusions and Future Work

The design adopted for the system architecture has been demonstrated to be an efficient solution to achieve a rapid deployment of a multi-UAV surveillance system. The design of the GCS exploited the autonomous capabilities of the multi-UAV system to decrease the workload of the operator. Thus, the developed GCS allowed the operator to use most of the capabilities that the autonomous multi-UAV system could provide, while at the same time offering a user-friendly and easy-to-operate interface.

Future work will focus on the following lines:

– Integration of information from other sensors in the GCS, such as LIDAR, seismic sensors or smoke detectors.
– Possibility of executing the GCS from a remote location through the Internet.
– Customization of several GCS interfaces in order to satisfy different user profiles: engineer, surveillance operator or strategic mission planner.
Acknowledgements This work has been developed in the framework of the ADAM and CLEAR (DPI2011-28937-C02-01) Spanish National Research projects. The authors would also like to thank the Boeing Research and Technology Europe company for their financial and technical support.

References

1. Maza, I., Caballero, F., Capitan, J., de Dios, J.M., Ollero, A.: A distributed architecture for a robotic platform with aerial sensor transportation and self-deployment capabilities. J. Field Robot. 28(3), 303–328 (2011). doi:10.1002/rob.20383
2. Bernard, M., Kondak, K., Maza, I., Ollero, A.: Autonomous transportation and deployment with aerial robots for search and rescue missions. J. Field Robot. 28(6), 914–931 (2011). doi:10.1002/rob.20401
3. Jovanovic, M., Starcevic, D.: Software architecture for ground control station for unmanned aerial vehicle. In: Tenth International Conference on Computer Modeling and Simulation, 2008. UKSIM 2008, pp. 284–288 (2008)
4. Dong, M., Chen, B., Cai, G., Peng, K.: Development of a real-time onboard and ground station software system for a UAV helicopter. JACIC 4(8), 933–955 (2007)
5. QGroundControl: Open source MAV ground control station. http://qgroundcontrol.org (2010). Accessed 15 May 2012
6. Lemon, O., Bracy, A., Gruenstein, A., Peters, S.: The WITAS multi-modal dialogue system I. In: Proceedings of the 7th European Conference on Speech Communication and Technology (EUROSPEECH), pp. 1559–1562, Aalborg, Denmark (2001)
7. Ollero, A., Maza, I. (eds.): Multiple Heterogeneous Unmanned Aerial Vehicles. Springer Tracts in Advanced Robotics, ch. Teleoperation Tools, pp. 189–206. Springer, New York (2007)
8. Sharma, R., Pavlovic, V.I., Huang, T.S.: Toward multimodal human-computer interface. Proc. IEEE 86(5), 853–869 (1998)
9. McCarley, J.S., Wickens, C.D.: Human Factors Implications of UAVs in the National Airspace. Institute of Aviation, Aviation Human Factors Division, University of Illinois at Urbana-Champaign, AHFD-05-5/FAA-05-1. Tech. Rep. (2005)
10. Maza, I., Caballero, F., Molina, R., Pena, N., Ollero, A.: Multimodal interface technologies for UAV ground control stations. A comparative analysis. J. Intell. Robot. Syst. 57(1–4), 371–391 (2010). doi:10.1007/s10846-009-9351-9
11. Boeing: X-45 Unmanned aerial combat system. http://www.boeing.com/history/boeing/x45_jucas.html (2012). Accessed 15 May 2012
12. Boeing: Ground control station for multiple X-45 Unmanned Aerial Combat Systems. http://www.youtube.com/watch?v=ilyUNkjKlPM (2012). Accessed 15 May 2012
13. The Paparazzi Project: Free and open-source hardware and software autopilot system. http://paparazzi.enac.fr (2012). Accessed 15 May 2012
14. Brisset, P., Hattenberger, G.: Multi-UAV control with the Paparazzi system. In: Proceedings of the First Conference on Humans Operating Unmanned Systems (HUMOUS'08), Brest, France, 3–4 September 2008
15. Nokia Corporation: Qt reference documentation. http://doc.qt.nokia.com/4.6 (2012). Accessed 4 Mar 2012
16. Blanchette, J., Summerfield, M.: C++ GUI Programming with Qt 4. Prentice Hall, Englewood Cliffs, NJ (2006)
17. OpenCV: Open source computer vision. http://opencv.willowgarage.com (2012). Accessed 15 May 2012
18. The Marble Project. KDE Educational Project (2005–2011). Available online: http://edu.kde.org/marble (2012). Accessed 4 Mar 2012
19. OpenSceneGraph. Open source 3D graphics toolkit (2005–2011). Available online: http://www.openscenegraph.org (2012). Accessed 4 Mar 2012
20. OSG Earth. Terrain rendering toolkit. Pelican Mapping (2005–2011). Available online: http://osgearth.org/ (2012). Accessed 4 Mar 2012
21. Metta, G., Fitzpatrick, P., Natale, L.: YARP: yet another robot platform. Int. J. Adv. Robot. Syst. 3(1), 043–048 (2006)
22. The Open Street Map Project (2005–2011). Available online: http://www.openstreetmap.org (2012). Accessed 15 May 2012
23. COLLADA - Digital Asset and FX Exchange Schema. Available online: https://collada.org (2012). Accessed 15 May 2012
24. Ascending Technologies (2011). Available online: www.asctec.de (2012). Accessed 4 Mar 2012
