User Manual
by Helmut Puhr
Version 0.7.0 Valiant Viscacha
Alpha Release
Short contents

Short contents
Contents
List of Figures
List of Tables
Introduction
Installation
UI Overview
Filters
Evaluation
Troubleshooting
Appendix
Contents

Short contents
Contents
List of Figures
List of Tables
Introduction
    User Manual
    Feature Highlights
        Application modes, Import, Analysis, Visualization, Evaluation, Semi-automated Processing
    General Aspects
    Acknowledgments
        Contributors, Test Data
    Key Concepts
Installation
    Prerequisites
        Operating System, Recommended Hardware, Graphics Cards & Drivers
    Using the AppImage
    Building from Source Code
    Running the Application
    Configuration Upgrade
UI Overview
    Main Window
        Main Menubar, Main Tab Area, Main Statusbar
    Main Menu
        File Menu, Import Menu, Configuration Menu, Process Menu
    Import Data
        Import ASTERIX Recording
            Main Tab
            Override Tab
            Mappings Tab
            Running
        Import ASTERIX from Network, Import GPS Trails
            Main Tab
            Config Tab
            Running
        Import View Points
            Notes
    Configuration
        Configure Data Sources
            Import/Export of Configuration Data Sources
        Show Meta Variables, Configure Sectors
            Import Tab
    Postprocess
        Calculate Radar Plot Positions, Calculate Associations
            Reference/Tracker UTN Creation
            Sensor UTN Creation
            Dubious Targets
            Discussion
            Running
        Calculate Associations for ARTAS
    Data Sources
    Filters
    Views
Filters
    Default Filters
        Aircraft Address Filter, Aircraft Identification Filter, ADSB Quality Filter, ADSB MOPS Filter, ARTAS Hash Code Filter, Detection Type Filter, Position Filter, Time of Day Filter, Track Number Filter, Mode 3/A Codes Filter, Mode C Codes Filter, Primary Only, UTN Filter
    Adding a New Filter
Evaluation
    Pre-Requisites & Limitations
        Target Report Associations, Sector Altitude Filtering, Reference Data
    Overview
    Configuration
        Main Tab
            Data Selection
            Standard
            Sector Layer/Requirement Mapping
        Targets Tab, Filter Tab, Standard Tab
            Current Standard
        Results Tab
    Running
Troubleshooting
    Known Issues
        CentOS Fuse Usermount Permissions, Missing glibc Library Versions, White OSGView & Shader Errors in Console Log, Graphical Issues
    Reporting Issues
        Already Reported Issues, Collect Information
            For Application Crashes
        Issue Reporting
Appendix
    Appendix: Configuration
        v0.6.0 and later, Pre-v0.6.0 and older, Configuration Folder, Data Folder
    Appendix: Data Sources
        Current Format, Deprecated Format
    Appendix: View Points
        File Content, Custom Attributes, Version 0.2
            View Point Context
            View Point
            View Point Filters
    Appendix: Algorithms
        Positions Accuracy Ellipses, ADS-B
            V0 Transponders
            V1 & V2 Transponders
        MLAT, Radar, RefTraj, Tracker
    Appendix: LaTeX Installation
        Ubuntu & Debian Variants, CentOS & Fedora Variants, Testing
    Appendix: Utilities
        jASTERIX, SDDL, ADS-B exchange
    Appendix: Licensing
    Appendix: Disclaimer
    Appendix: Used Libraries
    Appendix: ADS-B exchange
List of Figures

1 Main Window
2 File Menu
3 Main Window After Opening a Database
4 File Menu
5 Configuration Menu
6 Process Menu
7 Import ASTERIX data
8 Task: Import ASTERIX data override
9 Task: Import ASTERIX data mappings
10 Task: Import ASTERIX Data DBContent Variable Details
11 Import ASTERIX data status
12 Import ASTERIX Data from network
13 Import GPS Trail
14 Import GPS Trail Config
15 Import View Points
16 Configure Data Sources
17 Configure Data Sources: Radar details
18 Show Meta Variables
19 Configure Sectors
20 Configure Sectors with example file
21 Configure Sectors import dialog
22 Configure Sectors Manage tab
23 Configure Sectors with example sector
24 Configure Sectors editing result
25 Calculate Radar Plot positions
26 Calculate Radar Plot positions done
27 Associate Target Reports
28 Calculate Associations from ARTAS
29 Data Sources Overview
30 Filters Overview
31 Filters Overview
32 Target Address filter
33 Callsign filter
34 ADSB quality filter
35 ADSB quality filter
46 Evaluation tab
47 Evaluation Main tab
48 Evaluation Targets tab
49 Evaluation Targets tab with loaded data
50 Evaluation Filter tab
51 Evaluation Standard tab
52 Evaluation Standard tab: Add requirement
53 Evaluation Results tab
54 Evaluation: Post-processing after loading
55 Evaluation Targets tab after loading
56 Evaluation Filter UTNs dialog
57 Evaluation Targets tab after filtering
58 Evaluation: Running evaluation status
59 Evaluation results: Overview
60 Evaluation results: Sector PD errors in OSGView
61 Evaluation results: Sector Detail Example
62 Evaluation results: Target PD Errors in OSGView
63 Evaluation results: Per-Target PD Detail Example
64 Evaluation results: Target Single PD Error in OSGView
65 Evaluation results: Generate report dialog
66 Evaluation results: Generate report in progress
67 Evaluation Detection requirement
68 Evaluation Dubious Targets
69 Evaluation Dubious Tracks
70 Evaluation Extra Data requirement
71 Evaluation Extra Track requirement
72 Evaluation Identification Correct requirement
73 Evaluation Identification False requirement
74 Evaluation Mode 3/A False requirement
75 Evaluation Mode 3/A Present requirement
76 Evaluation Mode C False requirement
77 Evaluation Mode C Present requirement
78 Evaluation Position Across requirement
79 Evaluation Position Along requirement
80 Evaluation Position Distance requirement for correct positions
81 Evaluation Position Distance requirement for false positions

List of Tables

1 SQL operators
Introduction
The OpenATS COMPASS tool aims at providing a generalized framework for ATC surveil-
lance data inspection and analysis.
Its name being an abbreviation for compliance assessment, the OpenATS COMPASS
tool allows air traffic surveillance recordings to be imported into a database for analysis,
visualization and evaluation.
The application is highly configurable and quite complex, and is developed for usage
by air traffic surveillance professionals. To support new users, a user manual as well as
YouTube videos are supplied.
The C++ code is released under the GPL-3.0, while the Linux AppImage and the user
manual are released under CC BY 4.0.
OpenATS COMPASS is publicly available and free for anyone to use (including com-
mercial usage), under the previously stated licenses.
User Manual
This document focuses on the interaction and working procedures required to make use
of the existing functionality. In this introduction, feature highlights and acknowledgments
are listed, followed by a brief summary of important aspects of COMPASS in Key Concepts.
In the section Installation, prerequisites are listed and installation instructions are
given.
In the larger section UI Overview, the steps required to run the application, create
or open a database, import and process data, and load data into Views are described.
The application commonly uses the ’Offline’ mode. When ASTERIX data is imported
from the network, the application switches into ’Live’ mode, which is described in Live
Mode.
In the section Troubleshooting, details about reported issues are collected. It also contains
instructions on how to report new issues.
In the last section Appendix, additional details are listed. The section Appendix: Utilities
explains how data can be manually imported into COMPASS. In Appendix: Licensing,
information is given about the conditions under which COMPASS can be used, and which
libraries, under which licenses, are used in the background.
Feature Highlights
Application modes
• Offline: Recordings can be imported and larger amounts of data can be inspected
and evaluated
• Live: Data can be read directly from network interfaces and immediately displayed
to show the current airspace situation
Import
• Support of SQLite3 database system
• Dynamic ASTERIX import using jASTERIX
• Import of (D)GPS trails from NMEA files
• Import of polygons from GML, KML, ESRI Shapefiles
• Supported data source types
– Radar
– MLAT & WAM
– ADS-B
– System Tracker
– Reference Trajectory
Analysis
• High performance processing, low memory footprint
• Filtering for detailed analysis
• Simple custom filter generation
• General target report association (find unique targets)
• ARTAS track association (TRI) analysis
Visualization
• Textual data inspection using ListBox View
– Display of data as text tables
– Configurable loading of data of interest
– Export of data as CSV
• Data distribution inspection using Histogram View
– Display of any numeric variable or evaluation result as histogram
– Linear or logarithmic axis
• Graphical data inspection using OSG View
– Automatic labeling of data
Evaluation
• Standard compliance evaluation
– Definition of standards based on configurable requirements
– Generalized comparison of test data vs. reference data
– Calculation of requirements/performance indicators
– Investigation/display of results at several levels of detail
– Automatic removal of targets (short tracks, VFR, . . . ) possible
– Manual removal of specific targets possible
– Export of results as report PDF
Semi-automated Processing
• Command line options for common processing features
General Aspects
COMPASS is a highly specialized surveillance data processing framework with a strong
focus on high performance and a low memory footprint, enabling the processing of large
quantities of data. Surveillance data is fetched from a database (limited by a filter system),
then processed and displayed using so-called Views (specific visualizations of the result
set).
There are two application modes: Offline and Live. In Offline mode an offline recording
can be imported and larger amounts of data can be inspected and evaluated. In Live
mode data can be read directly from network interfaces, inserted into the database, and
immediately displayed.
After a previously generated database is opened, data can be loaded using a database
query. A filter configuration may restrict the loaded data to a result set. Such a result set is
displayed for analysis using the various existing Views.
Each View defines which contents of the database are required to fulfill its purpose,
and only such parts are loaded. During a loading process from the database, subsets of the
query result are immediately added to the current result set and all views are updated.
Acknowledgments
The following libraries are used in the project (list not exhaustive):
• Qt5
• Boost
• SQLite3
• GDAL
• Log4Cpp
• LibArchive
• Eigen3
• OpenSceneGraph
• OSGEarth
• nlohmann::json
• Intel Threading Building Blocks
• Catch2
• OpenSSL
• NemaTode
Contributors
Several people also contribute to the development and testing of COMPASS. Many
thanks to them as well; if you would like to be named here, please contact the author.
Test Data
Special thanks are in order for Malta Air Traffic Services (https://www.maltats.com/)
for providing the project with a test dataset, from which the screenshots in this manual were made.
Key Concepts
In this section, a few key concepts are introduced to convey a somewhat deeper understanding
of COMPASS, and to allow the reader to understand some main design choices
made by the author. This should also give an indication of the strengths and drawbacks
of the chosen approach.
Database Systems
A database allows for storage, retrieval and filtering of the data of interest. While SQLite3,
being a stand-alone database, has some drawbacks, it was chosen for its performance
and ease of use (compared to e.g. NoSQL databases or MySQL variants).
SQLite3 encapsulates a database in a single file container, which is read from a storage
medium (e.g. a hard drive). It therefore does not require the installation of a database
service, which is one of the advantages of the current solution.
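The single-file property can be illustrated with a short sketch, using Python's built-in sqlite3 module purely for illustration (COMPASS itself is written in C++, and the table and column names below are hypothetical, not the actual COMPASS schema):

```python
import os
import sqlite3
import tempfile

# The entire database is one ordinary file; no server process is involved.
db_file = os.path.join(tempfile.mkdtemp(), "example.db")
conn = sqlite3.connect(db_file)
conn.execute("CREATE TABLE target_reports (tod REAL, lat REAL, lon REAL)")
conn.execute("INSERT INTO target_reports VALUES (36000.0, 35.9, 14.5)")
conn.commit()

# Retrieval and filtering are plain SQL queries against that single file.
rows = conn.execute("SELECT lat, lon FROM target_reports WHERE tod > 0").fetchall()
print(rows)  # [(35.9, 14.5)]
conn.close()
```

Copying or deleting that one file copies or deletes the whole database, which is why no database service installation is needed.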
Configuration
At startup numerous configuration files are loaded, and at shutdown the current configu-
ration state of COMPASS is saved.
The configuration not only stores simple parameters of components, but also records
which components exist. To give an example: each existing View is saved, and when
the program is started again, the previously active Views are restored. The same is true
for almost all components of COMPASS.
Using this mechanism, a user can keep a specific program configuration for a specific
usage situation (e.g. a particular View or filter setup), which can be instantly reused for
a different dataset. This allows for a high degree of flexibility and supports numerous
use-cases.
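The principle of saving which components exist, not just their parameters, can be sketched as a simple serialization round-trip (a simplified illustration; the actual COMPASS configuration file format is not shown here):

```python
import json

# At shutdown, the existing components (e.g. Views) and their parameters
# are written out as part of the configuration state...
state = {
    "views": [
        {"type": "ListBoxView", "name": "ListBox View 1"},
        {"type": "OSGView", "name": "OSG View 2", "map": "europe"},
    ]
}
saved = json.dumps(state)

# ...and at the next startup the same components are re-created from it,
# restoring the previously active Views.
restored = json.loads(saved)
for view in restored["views"]:
    print("restoring", view["type"], view["name"])
```

Because the component list itself is part of the saved state, restoring the configuration restores the full working environment, not just parameter values.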
For each data source, up to 4 different lines can be used (L1, L2, L3, L4). Such lines
can be used either to distinguish data recorded from different network lines, or to
distinguish different recordings of the same data source. For instance, different tracker
runs can be imported and analyzed by importing them into different data source lines.
DBContent
Imported data is stored as so-called DBContent; when DBContent is present, target reports
can be loaded from the database and displayed.
During ASTERIX import, each record is decoded into JSON by the jASTERIX parser.
The resulting JSON data is then mapped to DBContent variables stored in the database.
The mapping between JSON keys and DBContent variables is configurable, allowing for a
broad usage spectrum.
Meta Variables
To allow displaying data from different DBContent in the same system, so-called Meta
variables were introduced, which hold variables that are present in some or all DBContent
(with a possibly different name or unit).
For example, the meta variable ’Time of Day’ is a collection of sub-variables, one
for each existing DBContent, referencing the respective ’Time of Day’ variable.
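Conceptually, a meta variable simply maps each DBContent type to the name of its own underlying variable; a minimal sketch (all variable names here are chosen for illustration only, not taken from COMPASS):

```python
# Hypothetical meta variable 'Time of Day': one sub-variable per DBContent type,
# possibly with a different name in each.
time_of_day = {
    "Radar": "tod",
    "ADSB": "time_of_day",
    "Tracker": "tod",
}

def resolve(meta_mapping, dbcontent):
    """Return the DBContent-specific variable behind the meta variable."""
    return meta_mapping.get(dbcontent)

print(resolve(time_of_day, "ADSB"))  # time_of_day
```

A View can then refer to the single meta variable and still display data from every DBContent type that provides a matching sub-variable.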
Data Loading
In COMPASS, a unified data loading process was chosen, meaning that only exactly one
common dataset is loaded, which can be inspected using multiple Views.
When started, data is incrementally read from the database, stored in the resulting
dataset, and distributed to the active Views. Each time such a loading process is triggered,
all Views clear their dataset and gradually update.
This makes working with the data somewhat easier to understand, since only one
dataset exists, while on the other hand it does not allow several independent datasets (e.g.
with different filters) to be loaded at the same time.
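The unified loading process described above can be sketched as one dataset distributed incrementally to all Views (a structural sketch under stated assumptions, not the actual C++ implementation):

```python
class View:
    """Any visualization of the one common result set."""
    def __init__(self, name):
        self.name = name
        self.data = []

    def clear(self):
        self.data = []

    def update(self, chunk):
        # Each subset of the query result is added as soon as it arrives.
        self.data.extend(chunk)

def load(views, chunks):
    # A new loading process first clears every View's dataset...
    for v in views:
        v.clear()
    # ...then distributes each incremental chunk to all Views.
    for chunk in chunks:
        for v in views:
            v.update(chunk)

views = [View("ListBox View 1"), View("OSG View 1")]
load(views, chunks=[[1, 2], [3]])
print([v.data for v in views])  # [[1, 2, 3], [1, 2, 3]]
```

The sketch also shows the stated limitation: since there is only one dataset, two Views cannot hold independently filtered result sets at the same time.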
Live Mode
Commonly, COMPASS is used in the ’Offline’ mode, which allows inspection of larger
amounts of recorded ASTERIX data.
To analyze live data from the network, a ’Live’ mode exists which records data from
UDP network streams, decodes the content, imports the data to the database, and imme-
diately displays the data in the OSGView.
The most current data (up to 5 minutes) is stored in main memory (RAM), to inspect
cases of interest in the immediate past.
The ’Live’ mode can be quit, after which the application returns to the ’Offline’ mode,
allowing inspection of the full database as usual.
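Keeping only the most recent data (up to 5 minutes) in RAM amounts to a time-bounded buffer; a minimal sketch of that behavior (the 300-second window matches the 5 minutes stated above, everything else is illustrative, not COMPASS code):

```python
from collections import deque

MAX_AGE_S = 300.0  # keep at most the last 5 minutes in memory

buffer = deque()  # (timestamp, report) pairs, oldest first

def insert(ts, report):
    buffer.append((ts, report))
    # Drop everything older than the window, relative to the newest data.
    while buffer and ts - buffer[0][0] > MAX_AGE_S:
        buffer.popleft()

for t in range(0, 601, 60):          # one report per minute for 10 minutes
    insert(float(t), {"tod": t})
print([ts for ts, _ in buffer])      # [300.0, 360.0, 420.0, 480.0, 540.0, 600.0]
```

Old reports fall out of the window automatically, so memory usage stays bounded while the immediate past remains inspectable.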
Installation
COMPASS can either be built from source (which does not include the OSGView, but
allows development of additional features) or downloaded as an AppImage (a binary
distributable which includes the OSGView). The recommended option for non-developers is
the AppImage.
However, there are a few prerequisites that should be taken into consideration.
Prerequisites
Operating System
The binary is currently only provided for (somewhat current) 64-bit Linux operating
systems. For the following distributions, the AppImage was reported to work correctly:
• CentOS 7.3
• Ubuntu 14.04, 16.04, 17.04, 18.04, 19.04, 20.04
• Linux Mint 18.3
For the following distributions, the AppImage was reported to have issues:
• CentOS 6.*: Unfortunately the operating system’s glibc versions are too old.
If you have tried operating systems other than the ones listed here, it would be appreciated
if you could provide feedback about your experience to compass@openats.at.
Other operating systems (e.g. Windows or Mac) are currently not supported.
Recommended Hardware
The application will perform best on a workstation with at least the following minimum
requirements:
• CPU with at least 2 physical cores
• Dedicated NVidia or ATI graphics card
Depending on the loaded data size, more RAM or a better graphics card might be of
advantage.
Please note that other graphics cards (Intel, Matrox) will most probably also work,
but are not supported.
Unsupported Graphics Cards or Drivers If different graphics cards or drivers are in use,
the output of glxinfo might be similar to this:
OpenGL vendor string: nouveau
OpenGL renderer string: Gallium 0.4 on NVE6
OpenGL core profile version string: 3.1 (Core Profile) Mesa 9.2.5
OpenGL core profile shading language version string: 1.40
OpenGL core profile context flags: (none)
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 9.2.5
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
Or this:
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Desktop
OpenGL core profile version string: 3.3 (Core Profile) Mesa 11.0.6
OpenGL core profile shading language version string: 3.30
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
OpenGL version string: 3.0 Mesa 11.0.6
OpenGL shading language version string: 1.30
OpenGL context flags: (none)
OpenGL extensions:
OpenGL ES profile version string: OpenGL ES 3.0 Mesa 11.0.6
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.00
OpenGL ES profile extensions:
In such cases, graphical issues can be expected; these will currently not be addressed.
If you encounter a white-only OSGView and shader errors are present in the console
log, please refer to White OSGView & Shader Errors in Console Log.
Using the AppImage
To obtain the COMPASS AppImage, download the latest version from
https://github.com/hpuhr/COMPASS/releases.
Before starting, it has to be made executable (once after download) using the following
command:
chmod +x COMPASS-x86_64_release.AppImage
This is it. The application can then be run using the following command:
./COMPASS-x86_64_release.AppImage
The following points should be considered:
• The AppImage should run under any Linux distribution of similar date to Ubuntu
14.04 or later, but no guarantees can be made.
• To the author’s knowledge, running in virtualized operating systems is possible with
some solutions, but requires additional setup (GPU acceleration).
• OSGView rendering is performed according to the local graphics card and driver,
and might be limited by their capabilities.
Building from Source Code
If building from source is a requirement, please contact the author for support
via compass@openats.at.
Configuration Upgrade
If the used version of COMPASS has never been run on the workstation, the configuration
(and additional application data) will be copied into a subfolder of the home directory
(e.g. ~/.compass/v0.7.0/).
For more details about the configuration, please refer to section Appendix: Configura-
tion.
UI Overview
In this section, the main GUI components are described, to give an overview and introduce
the different workflow options.
Main Window
When the application is started for the first time, the main window is shown as follows.
In the simplest use case, the main menubar allows opening or creating a database and
importing data. After this step, data of interest can be chosen using the main tab area and
loaded from the database using the ’Load’ button in the main statusbar.
The loaded data is then shown in Views located either in the main tab area or in other
windows.
The values set in the ’Data Sources’ and ’Filter’ tabs define the dataset loaded from the
database into memory (RAM). Only the data required by the application is loaded into
memory.
Only one dataset can exist at a time. At the beginning of a loading process, the old
dataset is cleared, and a new one is filled sequentially from the database. This single
dataset is then distributed to all existing views.
When new Views are added, or Views require additional data, a manual re-load has to
be performed by the user.
To close the application either the ’File’ menu can be used, or the close button in the
main window decoration.
Please note that using the close button in windows other than the main window only
removes the respective Views, but does not close the application.
Please note that depending on the application status some parts of the UI are inacces-
sible, e.g. are only available when a database was opened.
Main Menubar
A common workflow is to open or create a database using the ’File’ menu. If new data is
to be imported, this can be done using the ’Import’ menu. After all data has been imported,
the ’Process’ menu can be used to post-process the data (if desired).
At the top, a main menubar exists, which allows access to the following groups:
• File Menu: Open/close a database, quit application
• Import Menu: Import ASTERIX, NMEA data
• Configuration Menu: Configure data sources, sectors
• Process Menu: Various processing tasks for imported data
Main Tab Area
To add additional Views, the button can be used. Views can be added either to the
window in which the button was clicked (’Add Here’) or to a new window (’Add In New
Window’).
A single View can be removed by clicking the button in its tab (next to the Views
name) and selecting ’Close’.
Main Statusbar
The main statusbar at the bottom shows general information and contains the ’Load’
button used to trigger a loading process (only visible when database was opened).
Main Menu
File Menu
Databases can be created, opened and closed using the File menu.
After a database was opened, the ’Import’ menu becomes available, and the main win-
dow looks as follows:
Import Menu
Data can be imported into the database using the Import menu. This menu is only acces-
sible if a database has been opened.
Configuration Menu
Data sources and sectors can be configured using the Configuration menu. Further, the
currently defined Meta Variables can be inspected.
Process Menu
Post-processing tasks can be performed using the Process menu. This menu is only acces-
sible if a database was opened.
Import Data
Import ASTERIX Recording
This task allows importing of ASTERIX data recording files into the opened database.
Please note that only certain ASTERIX categories, editions, reserved expansion fields
and special purpose fields are currently supported.
Please note that sensor status messages can be decoded, but are not inserted into the
database. Decoding of ASTERIX CAT002 is recommended if CAT001 data is imported,
since the timestamps are derived from it (if not available in CAT001).
Main Tab
At the top, the recording file to be imported is shown. Below, a combobox
defines into which line the data should be written.
Framing Using the ’Framing’ drop-down menu, the current framing can be selected. Us-
ing the ’Edit’ button, the current framing definition is opened in a text editor.
• Edition Edit Button : Opens the current edition definition in a text editor
• REF: Drop-down menu to select the Reserved Expansion Field definition (if avail-
able)
• REF Edit Button : Opens the current REF definition in a text editor
• SPF: Drop-down menu to select the Special Purpose Field definition (if available)
• SPF Edit Button : Opens the current SPF definition in a text editor
Additional Using the ’Edit Data Block’ button, the ASTERIX data block definition is
opened in a text editor.
Using the ’Edit Categories’ button, the ASTERIX definitions file is opened in a text
editor.
Using the ’Refresh’ button, all of the jASTERIX definitions are re-loaded from hard disk (e.g. to update after file changes were made).
The ’Cancel’ button aborts the import. Using the ’Test Import’ button, the import process can be tested without inserting the data into the database. The ’Import’ button triggers the import of the selected file into the database with the given options. This function can be run multiple times.
Override Tab
This feature should only be used in very specific circumstances. When activated, the function adds a Time of Day offset to all imported data, to compensate for time offsets (e.g. when importing data from a replay).
• ’Override Time of Day Active’ checkbox: Determines if the override is active. Dis-
abled by default.
• Time of Day Offset: Positive or negative offset to be added, in seconds. If the result-
ing value is out of bounds, it is adjusted to the [0, 86400] interval.
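As a sketch of this adjustment, assuming a simple modulo wrap into the day interval (the application may adjust out-of-bounds values differently):

```python
def apply_tod_override(tod_s: float, offset_s: float) -> float:
    """Add a Time of Day offset (in seconds) and wrap the result back
    into the [0, 86400) day interval using modulo arithmetic."""
    SECONDS_PER_DAY = 86400.0
    return (tod_s + offset_s) % SECONDS_PER_DAY
```

E.g. an update shortly before midnight with a positive offset wraps around to the start of the next day.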
Mappings Tab
At the top, the GUI elements can be used to show/add/remove ASTERIX JSON parsers.
Below, the currently selected ASTERIX JSON parser is shown and can be configured.
An ASTERIX JSON parser in this context is the function that parses the JSON content
created by the jASTERIX parser, and creates DBContent from it. For each ASTERIX cate-
gory a dedicated parser defines the mapping from JSON to a DBContent.
For the common user, interaction with these settings is normally not recommended, but sometimes it might be interesting to know which DBContent is created from which ASTERIX data.
Top Elements Using the drop-down menu, the parser to be shown can be selected. The buttons allow for adding and removing ASTERIX JSON object parsers.
Parser GUI Elements The exact definition of how the parsing works is out of scope for
this document, so only a short summary is given here. For more information please contact
the author.
In the mappings list, the following columns are given:
• Active checkbox: Defines if the specific mapping is used
• JSON Key: JSON location and name of the data to be mapped, commonly in ’Data
Item Number’.’Variable’ or ’Data Item Number’.’Sub Item’.’Variable’ format
• DBContent Variable: Target variable to which this data is mapped
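To illustrate how such a dot-separated JSON key could be resolved against a decoded record, here is a hedged sketch (the record content and key names are invented for illustration; the real lookup is internal to the ASTERIX JSON parsers):

```python
def resolve_json_key(record: dict, key: str):
    """Walk a dot-separated JSON key such as '010.SAC' through nested
    dicts; returns None if any part of the path is missing."""
    node = record
    for part in key.split('.'):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

# invented fragment of a decoded ASTERIX record, for illustration only
record = {"010": {"SAC": 50, "SIC": 7}, "220": {"target_address": 0xFEFE10}}
```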
Whenever a mapping is selected, the details are shown on the right hand side, pro-
viding information about the ASTERIX datum (based on the jASTERIX specifications) and
the DBContent variable it is mapped to. Using the ’Show DBContent Variable’ button the
variable can be inspected in detail.
Running
Using the ’Import’ button the import task can be performed. During import a status
indication will be shown.
If a decoding error occurs, a brief message box is shown, after which the application
has to be closed. Please make sure that the correct framing and edition versions are se-
lected, or contact the author for support if this does not resolve the issue.
Comments The time needed for the import strongly depends on the available CPU per-
formance (multi-threading being very beneficial), but an import of 5 million target reports
takes about 3:30 minutes on the author’s hardware.
This task can be run several times, e.g. if multiple ASTERIX recordings from different
data sources are to be imported.
Please note that currently not all data fields (as shown in the JSON object parsers) are imported.
Except for the differing ASTERIX source at the top, everything works the same as
described in Import ASTERIX Recording.
When the import is started, data is continuously read from the network (according to the data source network lines specification) and processed according to Live Mode.
Main Tab
At the top, a label exists indicating the file to be imported.
Below, a text field is given, which after selection of an NMEA file displays the content
information and/or error messages.
Config Tab
Running
After selecting an NMEA file, the task can be performed using the ’Import’ button.
Please note that reference trajectory updates are skipped in the following two cases:
• The time is the same as the previous update
• The quality is set to 0 (invalid position).
After selection of a view point file, a text field is shown, which displays the view point
context information or an error message.
At the bottom, an ’Import’ button exists, which is only enabled if a valid file has been selected.
Notes
For a detailed specification of a view point file, please refer to Appendix: View Points.
For each of the datasets, an ’Import ASTERIX Recording’ task is started, using the
previously set configuration. Please make sure that the import configuration is valid for
such processing.
For the filename in each dataset, the absolute path is tried first. If no such file can be found, a file with the same name in the directory of the view point file is searched next. If the referenced file can not be found, an error message is displayed.
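The lookup order above can be sketched as follows (function and variable names are illustrative, not the application's API):

```python
import os

def resolve_dataset_file(filename: str, view_point_path: str):
    """Resolve a dataset filename: try the absolute path first, then
    fall back to a file of the same name next to the view point file.
    Returns None if neither exists (an error message is then shown)."""
    if os.path.isfile(filename):
        return filename
    candidate = os.path.join(os.path.dirname(view_point_path),
                             os.path.basename(filename))
    return candidate if os.path.isfile(candidate) else None
```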
It is not recommended to import multiple view point files; such a use case is not fully supported yet.
Using the ’Import’ button a process is started to import the view points and ASTERIX
data recordings.
Configuration
Configure Data Sources
This dialog allows management of data sources, as stored in the configuration as well as
in the database.
If data is imported from ASTERIX, data source information might be missing, so this
information must either be edited manually, or loaded from a previous configuration (or a
previous database containing the same information).
A data source is always stored in the configuration, which means it is persistent and
can be used in all new databases. It is useful to define all existing data sources in the
configuration, since they are immediately used during import of data.
A data source is stored in the database only if data from said source is imported. Dur-
ing the import, if a data source can be found (matching SAC/SIC) in the configuration, it
is automatically added to the database.
Please note that currently the position of data sources is only required for Radar data
sources (in the plot position calculation), for all other data sources it would suffice to have
SAC/SIC and name information for display purposes.
When a data source is selected in the table, additional details are shown on the right side, where editing is also possible.
Please note that all changes to a data source are always written to the database as well as the configuration.
Depending on the DSType, additional information can be set in a source. For a non-
Radar source, the following information is given:
• ID: Numeric identifier (unique)
• Network Line information
– 4 different lines are possible, each with an ’IP:Port’ syntax, e.g.
* ’1.2.3.4:5’
For sources of DSType ’Radar’, the following additional information should be provided:
• Latitude: Source center position as WGS-84 latitude, as floating point number in
degrees, e.g. 42.0001
• Longitude: Source center position as WGS-84 longitude, as floating point number in
degrees, e.g. 17.01
• Altitude: Source center altitude above MSL, in meters
There are two versions of the data sources JSON file used for import/export. Please
refer to Appendix: Data Sources.
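As a purely illustrative sketch of what one data source entry might carry (the field names here are assumptions; the two actual JSON schema versions are specified in Appendix: Data Sources):

```python
import json

# invented example entry; see Appendix: Data Sources for the real schema
radar_source = {
    "name": "Example Radar",
    "ds_type": "Radar",
    "sac": 50,
    "sic": 7,
    "latitude": 42.0001,   # WGS-84 degrees
    "longitude": 17.01,    # WGS-84 degrees
    "altitude": 120.0,     # meters above MSL
}
serialized = json.dumps(radar_source, indent=2)
```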
Using these functions, the configuration data sources can be changed for sensor context switches, or e.g. exported before a COMPASS version upgrade.
Configure Sectors
This dialog allows management of sectors (as 2.5D polygons) stored in the database.
This step is recommended if the sectors are to be used for evaluation purposes and/or
if the altitude information is of use. For 2D display-only polygons it is recommended to
add such information to the used map files, as described in Adding/Changing Map Files.
Please note that importing at least one sector is required for using the evaluation fea-
ture.
Import Tab
In the ’File Selection’ list, a list of available files is provided. Entries can be added using
the ’Add’ button, or removed using either the ’Remove’ or ’Remove All’ buttons.
Files are imported using the GDAL library, which can read a number of GIS files, and all
encapsulated polygons or multi-polygons are written to the database with unique names.
Supported common file-formats are e.g.:
• ESRI Shapefile
• GML
• KML
Note that only polygonal information is added, and all information is assumed to be
stored in the WGS84 coordinate system.
Below a text field is given which, after selection of a file, displays the content informa-
tion and/or error messages.
Usage After adding an example file, the ’Import’ tab looks as follows:
To import a sector, select the desired file in the file list and press the ’Import’ button,
after which an import dialog is shown.
Manage Tab In the ’Manage’ tab, a table listing the existing sectors is given, which
allows editing/deleting of the information stored in the database.
The ’Exclude’ sector is a special case: e.g. a sector layer ’Example’ has at least one normal sector ’Area’, giving the 2.5D polygon in which surveillance data of interest exists. To exclude target reports in a specific region inside the bigger ’Area’ sector, one or several smaller sectors can be imported (again into sector layer ’Example’) and marked as ’Exclude’ sectors, ideally using a different color.
It is recommended to export all sectors after creation, for later usage (e.g. in another
database).
Postprocess
Calculate Radar Plot Positions
This dialog allows (re-)calculation of Radar plot latitude/longitude position information
based on the defined data sources.
Please note that for this step the Radar data source positions have to be set in the database (see Configure Data Sources), otherwise no plot position can be calculated.
There are two projection methods (radar polar coordinates to WGS-84 coordinates)
available. The RS2G projection is the currently recommended option.
OGR Projection The EPSG code for the projection has to be chosen according to your
needs, please refer to http://spatialreference.org/ref/epsg/ for a list of possible
codes.
The WGS84 latitude/longitude coordinates are then calculated using the radar positions in the database, the range and the azimuth. Please note that currently there will be offsets in the projected coordinates compared to e.g. the ARTAS projection. The reason for this is under investigation.
RS2G Projection For this projection, no additional attributes have to be given. Please note that this projection is based on a common ’radar slant to geodesic transformation’ and should be equivalent to the ARTAS projection. A verification is still needed; please contact the author if you would be willing to support this.
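For intuition only, a heavily simplified flat-earth sketch of projecting a radar plot (slant range, azimuth) to latitude/longitude follows. It is neither the OGR nor the RS2G method used by the application, and the target altitude handling is an assumption:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean earth radius, spherical approximation

def radar_polar_to_latlon(radar_lat, radar_lon, radar_alt_m,
                          slant_range_m, azimuth_deg, target_alt_m=0.0):
    """Reduce slant range to ground range via the height difference,
    then offset the radar position on a locally flat earth."""
    dh = target_alt_m - radar_alt_m
    ground = math.sqrt(max(slant_range_m ** 2 - dh ** 2, 0.0))
    az = math.radians(azimuth_deg)
    north = ground * math.cos(az)
    east = ground * math.sin(az)
    lat = radar_lat + math.degrees(north / EARTH_RADIUS_M)
    lon = radar_lon + math.degrees(
        east / (EARTH_RADIUS_M * math.cos(math.radians(radar_lat))))
    return lat, lon
```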
Usage Using the ’OK’ button the task can be performed. During the calculation a status indication will be shown:
Please note that this task can be re-run with different projections if wanted.
Calculate Associations
This task allows creation of UTNs (Unique Target Numbers) and target report association
based on Mode S Addresses, Mode A/C codes and position information.
Each target found is identified by a UTN, which groups together all reference/tracker/sensor target reports that can be associated to this target.
Please note that if usage of UTNs is not needed, this step does not have to be performed. If usage of the evaluation feature is wanted, running this task is required.
The configuration of the task requires knowledge of the internals of how the associa-
tions are made. As a brief description, the following steps are performed:
• Delete all previously existing associations
• Load all target reports, per data source
• Add UTNs based on Reference (RefTraj) data sources
• Add UTNs based on Tracker data sources
• Add UTNs based on remaining sensor data sources
Common parameters:
The following steps are performed for each Reference/Tracker data source:
• Per-source target creation
– Create new list of targets based on
* Track number
* Mode S address
* Mode A/C, time difference, position difference (track continuation)
– Find dubious targets (dubious target detection)
– Clean dubious targets (if configured)
– Per-source target to common target association (target/target association)
– Score-based approach
– Associates new per-source targets to existing targets based on
* Mode S address
* Time overlap
* Mode A code(s) similarity (only if minimum time overlap is given)
* Mode C code(s) similarity (only if Mode A similarity is given)
* Position similarity (only if Mode A/C similarity is given)
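The cascade above might look roughly like the following sketch. The dict layout, parameter values and thresholds are all assumptions; the actual score-based method and its parameters are internal to the task:

```python
def can_associate(a, b, dist_m, min_overlap=0.5,
                  max_mode_c_diff_ft=300.0, max_dist_m=500.0):
    """Cascaded checks: a Mode S address match short-circuits; otherwise
    time overlap gates the Mode A, Mode C and position similarity checks."""
    if a["mode_s"] is not None and a["mode_s"] == b["mode_s"]:
        return True
    overlap = min(a["t1"], b["t1"]) - max(a["t0"], b["t0"])
    shorter = min(a["t1"] - a["t0"], b["t1"] - b["t0"])
    if shorter <= 0 or overlap / shorter < min_overlap:
        return False
    if not set(a["mode_a"]) & set(b["mode_a"]):        # Mode A similarity
        return False
    if abs(a["mode_c_ft"] - b["mode_c_ft"]) > max_mode_c_diff_ft:
        return False
    return dist_m <= max_dist_m                        # position similarity
```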
The exact method discussion will be added at a later time, since the algorithms will be
improved to include position accuracy information (error standard deviation etc.) in the
near future.
After these steps, each target created by the simple score-based association method is used to additionally associate sensor target reports (target reports not based on Reference/Tracker data), as discussed in the next section.
The following steps are performed for each sensor data source:
• Find possible target in existing target list based on
– Mode S address
Dubious Targets
The dubious target check is solely based on a simple maximum speed given all associated
Reference/Tracker target reports. It should be used to detect/mark possibly wrong associ-
ations, to investigate such targets later and, if possible, resolve such issues using different
parameter values.
Please note that in the special case where target reports from different Reference/Tracker sources are very close in time, this check can generate a large number of false positives and should be disabled. For this reason the default value was set to a disproportionately high value.
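A sketch of such a maximum-speed check (the field names and the Cartesian simplification are assumptions):

```python
import math

def is_dubious(updates, max_speed_ms=1000.0):
    """Flag a target if any pair of consecutive updates implies a speed
    above the threshold; near-simultaneous updates are skipped, since
    they would otherwise produce false positives."""
    for prev, cur in zip(updates, updates[1:]):
        dt = cur["time_s"] - prev["time_s"]
        if dt <= 0.0:
            continue
        dist = math.hypot(cur["x_m"] - prev["x_m"], cur["y_m"] - prev["y_m"])
        if dist / dt > max_speed_ms:
            return True
    return False
```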
Discussion
The user should be aware that, while this association feature is quite an improvement over
the previous method, it is still somewhat limited. It strongly depends on the correctness of
Mode S addresses, as well as the Reference/Tracker information (track number, secondary
information and position information). If the mentioned information is erroneous, the resulting associations will be sub-optimal or plainly wrong.
For the Sensor UTN Creation, the correctness of the associations again strongly depends on the Mode S address as well as the quality of the associations made in the Reference/Tracker UTN Creation.
Also, for association of non Mode S sensor target reports a trade-off has to be made in the ’Maximum Acceptable Distance’ parameter, especially if they are primary-only. It should be set within the limits of the Reference/Tracker error plus the maximum sensor error (which can still include radar biases) and the used target separation minima. This is of course not well-suited for strongly different sensor accuracies and separations (e.g. when mixing ground and air surveillance data).
Running
To run the task, click the ’Run’ button. After the associations are saved, the task is done:
Please note that if no ARTAS TRI SPF information exists in the database, these steps do
not have to be performed.
In this task, the ARTAS association information stored in system track updates can be used to create UTNs which associate each system track update to the used sensor target reports.
• Association Time Past (s): Time window length (in seconds) into the past where sen-
sor target reports are considered for association.
• Association Time Future (s): Time window length (in seconds) into the future where
sensor target reports are considered for association.
• Acceptable Time for Misses (s): Time window length at the beginning and end of the
recording (in seconds) where misses (not found hash codes) are acceptable.
• Dubious Association Distant Time (s): Maximum age of associations (in seconds); if older, they are considered dubious associations.
• Dubious Association Close Time Past (s): Time window length (in seconds) into the
past where made associations are considered as dubious if multiple sensor hashes
exist.
• Dubious Association Close Time Future (s): Time window length (in seconds) into
the future where made associations are considered as dubious if multiple sensor
hashes exist.
• Ignore Track End Associations: If set, no associations are created for system track updates where the track end flag is set.
• Mark Track End Associations Dubious: If set, associations for system track updates where the track end flag is set are counted as dubious.
• Ignore Track Coasting Associations: If set, no associations are created for system track updates where the track coasted flag is set.
• Mark Track Coasting Associations Dubious: If set, associations for system track updates where the track coasted flag is set are counted as dubious.
UTN Creation from System Track Numbers and End Track Time :
The task should create a unique target number (UTN) for every ARTAS track. The track begin/end flags are therefore used to create new UTNs or finalize existing ones. To cover the case where such information is wrong or missing, the ’End Track Time’ is used to check the time between track updates of one track number. If the gap is larger than the defined time, a new UTN is created even if no track begin/end flag was set.
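The gap-based UTN creation can be sketched as follows (the record layout is an assumption):

```python
def assign_utns(track_updates, end_track_time_s=30.0):
    """Per track number: start a new UTN on a track-begin flag, on the
    first update, or when the gap since the previous update of the same
    track number exceeds the 'End Track Time'."""
    next_utn = 0
    last_seen = {}  # track number -> (utn, time of last update)
    utns = []
    for upd in track_updates:
        tn, t = upd["track_num"], upd["time_s"]
        prev = last_seen.get(tn)
        if upd.get("begin") or prev is None or t - prev[1] > end_track_time_s:
            utn, next_utn = next_utn, next_utn + 1
        else:
            utn = prev[0]
        last_seen[tn] = (utn, t)
        utns.append(utn)
    return utns
```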
Association Time Window Consider the following figure: In a timeline, a system track update exists at time 0, while referenced sensor target reports exist at times 1, 2 and 3. In this description, it is assumed that all sensor target reports have the same referenced ARTAS MD5 hash value.
The time window defined by [Association Time Past, Association Time Future] defines
which referenced sensor target reports are considered for association. The ’Past’ time is
the time difference into the past (default 20s), the ’Future’ time is the time difference into
the future (default 2s).
In this example, 1 and 2 are considered, while 3 is disregarded. Since 1 is closer in time
to the system track update, the association between (0,1) is made.
The time defined by Dubious Association Distant Time is used to mark associations which are older than the defined time as dubious. The associations are still created, but counted as dubious. Since 2 is still in the association time window, the association (0,2) is made but counted as dubious.
The time window defined by [Dubious Association Close Time Past, Dubious Association Close Time Future] defines when associations are counted as dubious if multiple referenced sensor target reports exist in it. In this case, 1 and 2 are considered for association, and since 1 is closer in time to the system track update, the association between (0,1) is made. But, since 2 also falls into this time window, the association (0,1) is counted as a dubious association.
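The window logic described above can be sketched like this (the ’close’ window defaults here are invented; only the 20s/2s association window defaults come from the text):

```python
def associate_by_time(track_time_s, candidate_times_s,
                      time_past_s=20.0, time_future_s=2.0,
                      close_past_s=5.0, close_future_s=1.0):
    """Pick the candidate sensor report (same hash) closest in time
    within [t - past, t + future]; mark the association dubious if more
    than one candidate falls into the 'close' window."""
    cands = [t for t in candidate_times_s
             if track_time_s - time_past_s <= t <= track_time_s + time_future_s]
    if not cands:
        return None, False
    best = min(cands, key=lambda t: abs(t - track_time_s))
    close = [t for t in cands
             if track_time_s - close_past_s <= t <= track_time_s + close_future_s]
    return best, len(close) > 1
```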
Data Sources
In this tab, the data sources existing in the database are shown. Data sources are added to the database if, during the import process, data was associated to the respective data source (and the respective data source line).
Data sources are grouped by DSType (data source type, e.g. Radar, MLAT, ...), and can
have up to 4 active lines (L1-L4). Each line for which data exists in the database is shown
as a button.
At the bottom, the ’Associations’ label indicates if association information exists, and
from which data source it was generated.
Filters
At the top, the ’Use Filters’ checkbox defines whether filtering is active.
Each filter consists of a checkbox, defining if a filter is active (contributes to the search
query), a triangle-button (to show/hide the filter configuration elements), a unique name,
and a manage button (activates a context menu).
Please note that the filter configuration will be saved at program shutdown, which is also true for new filters. At startup, all filters from the configuration are generated and shown in the filter list.
Please also note that active filters, at the moment, are always combined with a logical
AND. Therefore, when two filters are active, only the intersection of data which both filters
allow is loaded.
As an example, the ’Time of Day’ filter limits the loaded data to a specific time window,
to load only time slices of the dataset. The ’Mode 3/A Codes’ filter restricts to a list of
(comma-separated) Mode 3/A codes, to single out specific flights.
Views
The ’Add View’ button on the top right of each window allows adding Views to the current window or to a new window.
Each View is contained in a tab within a parent window. At startup, by default only the main window exists, which also holds a ListBox view. If the main window is closed, the COMPASS client shuts down. New Views can be added using the ’Add View’ button, which opens a pull-down menu. Each View can either be added to the main window (’Add Here’) or into a new window (’Add in New Window’). When added, a new tab appears in the containing window.
A window can be closed using the close button in the window decoration, which discards all Views contained in the window.
To close a single View, one can use the button in the tab header, which frees up all
its allocated resources.
Each View adds its required variables to the loading list for the database. During a
loading process, the loading status of a View is shown in the management tab.
Default Filters
FILTERS 63
When active, this filter forces loading of data with the given Mode S address(es),
so it is possible to give multiple values (in hexadecimal notation, irrespective of up-
per or lower case characters, separated by commas). E.g. ’FEFE10’ is possible, or
’FEFE10,FEFE11,FEFE12’. Target reports without a given Mode S address will not be
loaded unless the value ’NULL’ is (also) given.
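Parsing such a filter value could look like the following sketch (the column name and generated SQL are assumptions, not the application's actual query):

```python
def mode_s_condition(filter_value: str, column: str = "target_addr") -> str:
    """Parse a comma-separated, case-insensitive hex list (optionally
    containing 'NULL') into an SQL condition string."""
    values, with_null = [], False
    for token in filter_value.split(','):
        token = token.strip()
        if token.upper() == 'NULL':
            with_null = True
        elif token:
            values.append(int(token, 16))  # hex, case-insensitive
    parts = []
    if values:
        parts.append(f"{column} IN ({','.join(str(v) for v in values)})")
    if with_null:
        parts.append(f"{column} IS NULL")
    return ' OR '.join(parts)
```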
When active, this filter forces loading of data only from aircraft identifications matching the given expression. The percent character acts as an ’any characters’ placeholder, so e.g. ’%TEST%’ will match ’TEST123’, ’TEST123 ’ (with spaces) or ’MYTEST’. Target reports without a given aircraft identification are not restricted by this filter.
When active, this filter restricts the loaded ADS-B data based on the transponder MOPS
version and various quality indicators.
When active, this filter restricts the loaded ADS-B data based on the transponder MOPS
version. Multiple values can also be given, e.g. ’0’, ’0,1’, etc.
When active, this filter forces loading of data only from target reports with a specific AR-
TAS MD5 hash code, or system track updates referencing this hash code (in their TRI in-
formation). If no hash information is available (e.g. in SASS-C Verif databases or when
this information was not present in the ASTERIX data), this filter should not be used.
When active, this filter forces loading of Radar and Tracker data with the given detection
type, so it is possible to give multiple values (separated by commas). E.g. ’1’ is possible,
or ’1,2,3’. Tracker target reports without a given detection type will not be loaded.
Please note that for CAT62 data the detection type reflects the most recent detection
type used to update the track (last measured detection type).
Position Filter
When active, this filter forces loading of data with latitude/longitude inside the given
thresholds (in degrees).
When active, this filter forces loading of data with the time-of-day inside the given thresh-
olds (in HH:MM:SS.SSS).
When active, this filter forces loading of data with the given track numbers, so it is possible
to give multiple values (separated by commas). E.g. ’1’ is possible, or ’1,2,3’. Target reports
without a given track number will not be loaded unless the value ’NULL’ is (also) given.
Please note that ADS-B target reports can also contain a track number in ASTERIX, but
since the information can not currently be mapped to the database (missing in schema),
this filter does not influence ADS-B data loading.
When active, this filter forces loading of data with the given Mode A code(s), so it is
possible to give multiple values (in octal notation, separated by commas). E.g. ’7000’ is
possible, or ’7000,7777’. Target reports without a given Mode A will not be loaded unless
the value ’NULL’ is (also) given.
Please note that ADS-B target reports can also contain Mode 3/A code information.
When active, this filter forces loading of data with a barometric altitude inside the given thresholds (in feet). Whether target reports without a barometric altitude should be loaded can be set using the ’NULL Values’ checkbox.
Primary Only
When active, this filter forces loading of data without any secondary attributes.
UTN Filter
This filter is only available if target report associations have been generated (see Calculate
Associations).
When active, this filter forces loading of data with the given unique target numbers
(UTNs), so it is possible to give multiple values (separated by commas). E.g. ’1’ is possible,
or ’1,2,3’. Target reports without an associated UTN will not be loaded.
A new filter can be added by clicking the button in the filter tab.
First, one has to give the filter a new (unique) name. Then, conditions have to be
defined and added. A condition consists of a DBContent variable, an operator, a value,
and a reset value.
When the triangular button is clicked, a sub-menu is opened, where one can choose
a DBContent variable. The selected variable restricts data of all DBContents if it is of
type ’Meta’, or just data from one DBContent if it is not. Additionally, the mathematical
operator ’ABS’ can be selected. If so, not the value of the variable but the absolute value of
the variable is used: ’ABS(var)>value’ is equivalent to ’var>value OR var<-value’.
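The stated equivalence can be sketched directly as the generated SQL fragment (the exact clause produced by the application may differ):

```python
def abs_condition_sql(variable: str, value: str) -> str:
    """Expand 'ABS(var) > value' into the equivalent plain comparison
    'var > value OR var < -value'."""
    return f"({variable} > {value} OR {variable} < -{value})"
```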
An operator can be chosen with the drop-down menu, the supplied operators are com-
mon SQL operators.
Operator Description
= Equal
!= Not equal
> Greater than
>= Greater than or equal
< Less than
<= Less than or equal
IN Matches a value in a comma-separated list
LIKE Pattern matching with % and _
IS Value NULL: No value exists
IS NOT Value NULL: Value exists
Table 1: SQL operators
A reset value also has to be supplied, which can be the chosen value or a minimum/-
maximum value set from the database. Whenever a database different from the previous
one is opened, all filters are reset, since previous values may have become invalid.
After a condition is defined, it has to be added using the ’Add condition’ button.
Existing conditions are shown in the ’Current conditions’ list. Please note that for now
added conditions can not be removed.
Now the described process can be repeated until a usable filter emerges, which is added
using the ’Add’ button. The process of adding a new filter can be canceled by using the
’Cancel’ button, which discards all settings. When added, a new filter shows up immedi-
ately in the filter list and is saved to the configuration for persistence.
Evaluation
While it is possible to manually remove single targets from the evaluation, the usage
of correct reference data is paramount for the significance of the evaluation results.
Please note that the evaluation feature should not be used as a sole basis for
decision making - especially not without manually verifying the evaluation results.
There will be improvements in the next releases, and further verification of the results
by the author and other users.
The ’inside-sector’ check is always performed on the reference data only; it is therefore important to use only reference data with existing Mode C code data.
EVALUATION 71
Reference Data
The assumption used in the tool is that the reference data is always correct. Therefore,
sub-optimal reference data can cause errors, which will be attributed to the test data in the
evaluation.
Also, since target secondary attributes (currently only Mode S address) are also used in
the ’Target Report Association’ task, errors in these attributes might also lead to imperfect
data association. This would result in wrong evaluation results in almost all requirements.
Overview
Configuration
Main Tab
Data Selection
At the top, the ’Data Selection’ can be performed, by selecting:
• Reference Data:
– DBContent: Any DBContent existing in the database
– Data source checkboxes: Which data sources to use
Since ’any’ type of data can be selected for evaluation, this allows for the following
use-cases:
• Tracker as reference, sensor as test data: Evaluation of sensor
• Tracker as reference, tracker as test data: Evaluation/comparison of different track-
ers/tracker runs
Of course it is also possible to use e.g. an imported GPS trail as reference (see Import GPS Trails), although this is currently untested for lack of test data. If you are able to provide such test data, please contact the author.
Standard
In the center, using the ’Standard’ drop-down menu, the current standard can be selected.
To create/configure the standard please use the ’Standard’ tab.
On the left, all existing sector layers are listed, in the shown example:
• fir_cut_sim: DOI altitude limitation
For each sector layer, the requirement groups (defined in the Standard tab) can be ac-
tive/disabled. In the shown example, the existing requirement group ’Mandatory’ is active
in the sector layer.
Targets Tab
Before the data is loaded, the table is empty. Each target (defined by the UTN) is shown in a dedicated row.
Unless otherwise specified, the column content reflects the values from both reference
and test data.
Filter Tab
In the Filter tab, the data loaded for evaluation can be filtered.
Standard Tab
In the Standard tab, at the top the current standard can be selected.
At the bottom, the ’Reference Maximum Time Difference [s]’ field can be edited to adjust the maximum time difference between reference updates. This value is used to find time-adjacent reference target reports for test target reports. Please adjust this value to e.g. the reference update period plus 1 second.
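Finding time-adjacent reference updates for a test target report can be sketched with a binary search over sorted reference times (an illustration, not the tool's implementation):

```python
import bisect

def adjacent_refs(ref_times_s, test_time_s, max_diff_s):
    """Return the reference times directly before and after the test
    time, each only if within the maximum time difference; ref_times_s
    must be sorted ascending."""
    i = bisect.bisect_left(ref_times_s, test_time_s)
    before = ref_times_s[i - 1] if i > 0 else None
    after = ref_times_s[i] if i < len(ref_times_s) else None
    if before is not None and test_time_s - before > max_diff_s:
        before = None
    if after is not None and after - test_time_s > max_diff_s:
        after = None
    return before, after
```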
Please note that, while the EUROCAE ED-117A has been tested using simulator data,
the following limitations exist:
• In the Stands area, the position accuracy should be calculated using a 5 second po-
sition averaging. This is currently not performed, since the averaging method is not
specified, and is currently being discussed with users.
• Some MLAT sensors use a different update rate for non-moving targets. This is currently not regarded, since what constitutes non-movement is not (generally) specified, and will lower the probability of detection.
Please note that these limitations will be corrected in the near future.
The other standards have not yet been tested to the fullest degree, which is work in
progress.
Current Standard
Below that, the current standard is shown. On the left side, a tree-view exists showing:
• Standard name
– Requirement Group(s)
* Requirement(s)
When clicking on the standard name, a menu is shown allowing adding new require-
ment groups (’Add Group’).
When clicking on a requirement group, a menu is shown allowing the following func-
tions:
• Delete Group: Delete the selected requirement group
• Add Requirement: Add a requirement of the selected type
• Delete Requirement: Delete the selected requirement
The following requirement types exist:
• Detection: Calculates the probability of detection
• Dubious Targets: Calculates the probability of dubious targets based on physical movement (based on test data only)
• Dubious Tracks: Calculates the probability of dubious tracks from Trackers (based on test data only)
• Extra Data: Calculates the probability of unused test data (based on test data only)
• Extra Track: Calculates the probability of undetected tracks (based on reference data only)
• Identification Correct: Calculates the probability of correct secondary identification
• Identification False: Calculates the probability of false secondary identification
• Mode 3/A False: Calculates the probability of a false Mode 3/A code
• Mode 3/A Present: Calculates the probability of a Mode 3/A code being present
• Mode C False: Calculates the probability of a false Mode C code
• Mode C Present: Calculates the probability of a Mode C code being present
• Position Across: Calculates the probability of the across-track position error being within a threshold
• Position Along: Calculates the probability of the along-track position error being within a threshold
• Position Distance: Calculates the probability of the position error being within or outside a threshold
• Position Latency: Calculates the probability of the position latency being within a threshold
• Speed: Calculates the probability of the speed error being within or outside a threshold or percentage
If a requirement is clicked, its configuration widget is shown on the right-hand side.
For detailed information about each requirement please refer to section Requirements.
Results Tab
There are several levels of detail for the results, and each sub-result is shown in a
tree-view on the left side, grouped into sections. Using this tree-view, the results can be
"navigated", and the contents of the currently selected result are shown on the right side.
More details will be described in the following section Results Inspection & Analysis.
Running
Load Data
After the desired configuration has been set in the Main tab, the ’Load Data’ button can be clicked. This loads the reference/test data, after which a post-processing step is performed.
Please note that the post-processing step uses all available cores on the CPU.
The post-processing step only pre-calculates which reference target reports can be used for direct comparison with specific test target reports.
Therefore, please note that re-loading the data is only required when changes to the
reference/test data settings in the Main tab have been made. Changing requirements or
removing targets from evaluation does not require re-loading.
After the loading and the post-processing have been performed, all targets are shown
in the Targets tab.
Filtering Targets
The Targets tab is useful for removing certain targets from the evaluation (’Use’ checkbox)
and inspecting already removed ones.
Single rows can be selected by clicking on them, which triggers a loading process
showing this exact target (with all associated data) in the available Views. Please note that
this does not require re-loading the evaluation data, but can be used at all times during
the evaluation.
The ’Change Usage’ button can be used for the following actions:
• Use All: Enable usage for all UTNs
• Use None: Disable usage for all UTNs
The ’Filter UTNs’ dialog can be used to dynamically filter UTNs based on the configured values:
• Remove Short Targets: Removes targets with a small number of target reports or
duration
• Remove Primary-Only Targets: Removes primary-only targets (without secondary attributes)
• Remove Mode A/C Code onlys: Removes targets without Mode S attributes
• Remove by Mode A Code: Removes targets having a Mode A code given in the list
• Mode A Code Blacklist: Whether the Mode A codes should be used as blacklist or
whitelist
• Remove by Mode C Code: Removes targets having a Mode C code smaller than the
given value
• Remove by Target Address: Removes targets having a Mode S target address given
in the list
• Target Address Blacklist: Whether the Target Addresses should be used as blacklist
or whitelist
• Remove by Non-Detection of DBContent: Removes targets not being detected by a
given DBContent
When the ’Run’ button is clicked, all (enabled) targets are checked and are disabled if
any of the selected filters apply.
Please note that the ’Change Usage’ button can also be used after the ’Evaluate’ button has been used; this automatically updates the evaluation results.
Evaluation
After the data has been loaded, the configuration relating to the current standard, requirements and sector/requirement group usage can be adapted. After that, the evaluation can be (re-)run using the ’Evaluate’ button.
This triggers evaluation of the requirements in all sectors (as configured). The requirement values are calculated for each target (whether it is to be used or not). Then, for each requirement and sector, the results are summed up as a per-sector average (including only the targets marked to be used).
Please note that the post-processing step uses all available cores on the CPU.
The uppermost level is ’Requirements→Overview’, giving the sector sums for all requirements.
The next level of detail comprises the sector sum details, located in ’Sectors→Sector Layer Name→Requirement Group Name→Requirement Name’.
The lowest level contains the per-target details, located in ’Targets→UTN’, and the respective per-target results, located in ’Targets→UTN→Sector Layer Name→Requirement Group Name→Requirement Name’.
By default, when single-clicking a row in a table the respective results are shown in the
existing Views. When double-clicking, a step into the next level of detail is performed (if
available).
Navigation can be made more efficient by returning to the last sub-result using the ’Back’ button in the top-left corner.
Overview
Please note that the results are given as an example only and are not an indication of the performance of any system currently in operation.
When single-clicking a row, the respective result errors are shown in the existing Views.
When double-clicking a row, a step into the respective sector sum details is performed.
Sector Details
On the left side, the current position in the results sections is shown. On the right, the
current results are shown. At the top, there is an overview table giving the details of the
calculation results in the respective sector layer and requirement.
At the bottom, further result details are listed per-target, sorted in this example by the
Probability of Detection (PD).
When single-clicking a row, the respective target data and result errors are shown in
the existing OSGViews.
When double-clicking a row, a step into the respective target details is performed.
Per-Target Details
On the left side, the current position in the results sections is shown. On the right, the
current results are shown. At the top, there is an overview table giving the details of the
calculation results for the target in the respective sector layer and requirement.
At the bottom, further result details are listed per-target-report, sorted in this example
by time.
When single-clicking a row, the respective target data and the respective single result
error are shown in the existing OSGViews.
Generate Report
Using the "Export PDF" button, a PDF can be generated. A PDF can only be generated
if a Latex environment is installed on the workstation, as described in Appendix: Latex
Installation.
• Run PDFLatex: Automatically runs the pdflatex compile process, creating a PDF immediately after the export has finished. Disabled if the command could not be found.
• Open Created PDF: Automatically opens the created PDF. Disabled if the pdflatex command could not be found.
Please note that the two ’Include ... Details’ options can produce very large PDF reports (10,000+ pages), and may even overload the Latex sub-system (resulting in a ’TeX capacity exceeded, sorry’ error). It is therefore recommended to activate these options only for small datasets with very few sector layers.
The ’Run’ button starts the export process. At the bottom, status information and a cancel button exist.
If the export process was successful, the dialog is closed automatically. The report Latex file is written into the report directory, with screenshots in the ’screenshots’ subfolder. If the respective options were set, a PDF is automatically generated and opened using the default PDF application.
If a Latex error occurred, a message box with the error message is shown. If the ’TeX
capacity exceeded, sorry’ error is shown, disable one or both of the ’Include ... Details’
options.
Please note that the generated report can of course be edited by the user and re-generated using pdflatex, which allows for better customization (adding e.g. details, corporate identity etc.).
Requirements
Please note that the exact requirement calculation methods are quite complex and will be documented in detail at a later point.
Detection
Configuration
The ’Detection’ requirement can be used to check whether targets are detected at all.
For each target existing in the reference data (within the current sector), the target must be detected within a given interval, e.g. each test update interval. Missed update intervals are called misses or gaps, and are used to calculate a probability of detection, which has to fulfill a given threshold.
Calculation
As a summary, the reference is used to calculate the number of expected update intervals
inside the sector layer (#EUI). Then, for the test data, if the reference exists at the time,
time differences between target reports are checked and the number of misses/gaps is calculated as the number of missed update intervals (#MUI).
If a minimum or maximum gap length is configured, gaps are only counted if they fulfill these thresholds.
The ratio of #MUI to #EUI gives the probability of a missed update interval; its counter-probability gives the Probability of Detection (PD). The PD must be greater than or equal to the defined ’Probability’ for the requirement to pass.
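As a sketch, assuming a constant reference update interval and ignoring the sector-layer intersection logic, the PD calculation could look as follows (the function and parameter names are illustrative, not the actual implementation):

```python
def probability_of_detection(ref_times, test_times, update_interval,
                             min_gap=None, max_gap=None):
    """Illustrative sketch: PD from expected vs. missed update intervals."""
    # expected update intervals (#EUI) from the reference time period
    duration = max(ref_times) - min(ref_times)
    eui = duration / update_interval

    # missed update intervals (#MUI) from the test data time differences
    mui = 0.0
    test_times = sorted(test_times)
    for t0, t1 in zip(test_times, test_times[1:]):
        gap = t1 - t0
        if gap <= update_interval:
            continue  # detected in time, no miss
        # optional gap-length thresholds: only count gaps inside the limits
        if min_gap is not None and gap < min_gap:
            continue
        if max_gap is not None and gap > max_gap:
            continue
        mui += gap / update_interval - 1  # missed intervals inside the gap

    # counter-probability of the missed-update-interval ratio gives the PD
    return 1.0 - mui / eui
```

For example, a reference covering 100 s at a 1 s update interval with one 5 s hole in the test data yields a PD of 0.96.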
Result Values
Sector
Single Target
Dubious Targets
Configuration
The ’Dubious Targets’ requirement can be used to check for dubious movement from a data-source. This requirement is based on test data only, so the reference data is of no importance.
For each track number (existing in a UTN), a number of checks are performed, and a probability of dubious target reports is calculated. The checks to be used can be configured as follows; they focus on short tracks and physically dubious movement.
• Minimum Comparison Time [s]: Skip movement checks if time between updates is
smaller than the defined time
• Maximum Comparison Time [s]: Skip movement checks if time between updates is
larger than the defined time
• Mark Primary-Only: Checkbox if all primary-only tracks should be counted as dubi-
ous
• Use Minimum Updates: Checkbox if tracks with less than the defined number of
updates should be counted as dubious
• Minimum Updates [1]: Minimum number of updates
• Use Minimum Duration: Checkbox if tracks with a duration less than the defined
time should be counted as dubious
• Minimum Duration [s]: Minimum duration
• Use Maximum Groundspeed: Checkbox if maximum groundspeed should be used
• Maximum Groundspeed [kts]: Maximum groundspeed to be considered
• Use Maximum Acceleration: Checkbox if maximum acceleration should be used
• Maximum Acceleration [m/s2 ]: Maximum acceleration to be considered
• Use Maximum Turnrate: Checkbox if maximum turnrate should be used
• Maximum Turnrate [deg/s]: Maximum turnrate to be considered
• Use Maximum ROCD: Checkbox if the maximum rate of climb/descent should be used
• Maximum ROCD [ft/s]: Maximum rate of climb/descent to be considered
• Dubious Probability [1]: Probability of dubious target report to classify a track as
dubious
Calculation
As a summary, the test data is used to calculate the number of dubious targets in relation to the total number of targets, which gives the probability of a dubious target (PDT). If this probability is larger than the required one, the requirement fails.
All updates of a track are marked as dubious if any of the following cases hold:
• ’Mark Primary-Only’ is used, and the track is always primary-only (no secondary
attributes)
• ’Minimum Updates’ are used, and the number of track updates is smaller than the
required threshold
• ’Minimum Duration’ is used, and the duration of the track is smaller than the required threshold
For the movement checks, the movement is checked for each track update. If the Minimum/Maximum Comparison Time check fails, the respective track update is skipped (not checked).
For each track, the number of target reports failing the movement checks (PDU) is calculated, along with its ratio to the total number of track updates. If this ratio is larger than the ’Dubious Probability’, the track is marked as dubious.
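The checks above can be sketched as follows (a simplified illustration; only the groundspeed movement check is shown, and all names are illustrative, not the actual implementation):

```python
def is_dubious_track(updates, primary_only, duration,
                     mark_primary_only=True, min_updates=10, min_duration=30.0,
                     max_speed_kts=None, dubious_probability=0.05):
    """Illustrative sketch: classify one track as dubious.
    `updates` is a list of (time, groundspeed_kts) tuples (simplified)."""
    # whole-track checks: primary-only or short tracks are dubious outright
    if mark_primary_only and primary_only:
        return True
    if min_updates is not None and len(updates) < min_updates:
        return True
    if min_duration is not None and duration < min_duration:
        return True

    # movement checks: count updates failing e.g. the groundspeed limit
    failing = 0
    for _, speed in updates:
        if max_speed_kts is not None and speed > max_speed_kts:
            failing += 1
    # track is dubious if the failing ratio exceeds the ’Dubious Probability’
    return failing / len(updates) > dubious_probability
```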
Result Values
Sector
Single Target
Dubious Tracks
Configuration
The ’Dubious Tracks’ requirement can be used to check for dubious tracks generated by a
Tracker data-source (using the attributed track number). This requirement is based on test data only, so the reference data is of no importance.
For each track number (existing in a UTN), a number of checks are performed, and a probability of dubious target reports is calculated. The checks to be used can be configured as follows; they focus on short tracks and physically dubious movement.
Calculation
As a summary, the test data is used to calculate the number of dubious tracks in relation to the total number of tracks, which gives the probability of a dubious track (PDT). If this probability is larger than the required one, the requirement fails.
All updates of a track are marked as dubious if any of the following cases hold:
• ’Mark Primary-Only’ is used, and the track is always primary-only (no secondary
attributes)
• ’Minimum Updates’ are used, and the number of track updates is smaller than the
required threshold
• ’Minimum Duration’ is used, and the duration of the track is smaller than the required threshold
For the movement checks, the movement is checked for each track update. If the Minimum/Maximum Comparison Time check fails, the respective track update is skipped (not checked).
For each track, the number of target reports failing the movement checks (PDU) is calculated, along with its ratio to the total number of track updates. If this ratio is larger than the ’Dubious Probability’, the track is marked as dubious.
Result Values
Sector
Single Target
Extra Data
Configuration
While the ’Detection’ requirement detects "missing" test data, it ignores test data for which no reference exists, which might indicate issues in the reference data that could be of interest in the evaluation.
The ’Extra Data’ requirement detects "extra" test data, i.e. test data for which no reference exists (while fulfilling possible constraints), and calculates the number of extra target reports. Based on the number of target reports which are extra and the number of target reports which are also detected by the reference, the Probability of Extra (PEx) data is calculated. The PEx must be less than or equal to the defined ’Probability’ for the requirement to pass.
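As a minimal sketch (assuming PEx is the ratio of extra reports to all test target reports; the names are illustrative, not the actual implementation):

```python
def extra_data_check(num_extra, num_detected, probability):
    """Illustrative sketch of the PEx calculation and check.
    num_extra: test target reports without a matching reference
    num_detected: test target reports also covered by the reference"""
    pex = num_extra / (num_extra + num_detected)
    return pex, pex <= probability  # requirement passes if PEx <= ’Probability’
```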
Result Values
Sector
Single Target
Extra Track
Configuration
The ’Extra Track’ requirement is useful for Tracker evaluation, and detects whether more than one test track exists for a target.
First, the time period of each track (by occurrence of the track number, with a maximum time difference of 5 minutes) is calculated. Then, for each test target report, it is checked whether multiple track number periods match; if more than one matches, the report is counted as an extra update. Based on the number of target reports which are extra and the number of target reports which are also detected by the reference, the Probability of Extra (PEx) data is calculated. The PEx must be less than or equal to the defined ’Probability’ for the requirement to pass.
Please note that currently, in the case of multiple tracks existing at the same time, the requirement does not decide which track is correct and which one is extra; therefore all target reports are counted as being extra.
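The period-building and matching steps can be sketched as follows (an illustrative simplification, assuming `reports` is a list of (time, track number) tuples):

```python
from collections import defaultdict

def track_periods(reports, max_break=300.0):
    """Illustrative sketch: build time periods per track number, splitting
    whenever two consecutive occurrences are more than `max_break` seconds
    (5 minutes) apart."""
    times_by_tn = defaultdict(list)
    for time, track_num in reports:
        times_by_tn[track_num].append(time)

    periods = []  # (track_number, begin, end)
    for tn, times in times_by_tn.items():
        times.sort()
        begin = prev = times[0]
        for t in times[1:]:
            if t - prev > max_break:  # gap too large: close the period
                periods.append((tn, begin, prev))
                begin = t
            prev = t
        periods.append((tn, begin, prev))
    return periods

def count_extra(reports, periods):
    """A test target report counts as extra if more than one period matches."""
    extra = 0
    for time, _ in reports:
        matches = sum(1 for _, b, e in periods if b <= time <= e)
        if matches > 1:
            extra += 1
    return extra
```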
Result Values
Sector
Single Target
Identification Correct
Configuration
• Use Mode 3/A Code: If the Mode 3/A code should be checked
• Use Mode S Target Address: If the Mode S target address should be checked
• Use Mode S Target Identification: If the Mode S target identification should be
checked
Result Values
Sector
Single Target
Identification False
Configuration
The ’Identification False’ requirement is used to calculate the probability of a target report
having a false (secondary) identification. False in this context means there is identification
data available, and it is not the same as in the reference.
Result Values
Sector
Single Target
Mode 3/A False
Configuration
The ’Mode 3/A False’ requirement is used to calculate the probability of a target report having a false Mode 3/A code. False in this context means there is Mode 3/A information available, and it is not the same as in the reference. Only values which are valid and not garbled are used.
Result Values
Sector
Single Target
Mode 3/A Present
Configuration
The ’Mode 3/A Present’ requirement is used to calculate the probability of a target report having any Mode 3/A code. Present in this context means there is Mode 3/A information available, irrespective of whether it is correct or not.
Result Values
Sector
Single Target
Mode C False
Configuration
The ’Mode C False’ requirement is used to calculate the probability of a target report having a false Mode C code. False in this context means there is Mode C information available, and the absolute difference between the test and the reference is larger than the given threshold.
Result Values
Sector
Single Target
Mode C Present
Configuration
The ’Mode C Present’ requirement is used to calculate the probability of a target report having any Mode C code. Present in this context means there is Mode C information available, irrespective of whether it is correct or not.
Result Values
Sector
Single Target
Position Across
Configuration
The ’Position Across’ requirement is used to calculate the probability of a target report having an across-track error smaller than a defined threshold. The offset of the position (test vs. linearly interpolated reference position) is used to calculate the error component across the track angle of the reference at that time. If the absolute value of this across-track position error is smaller than or equal to the defined threshold, the target report is counted for the calculated probability PACOK. The PACOK must be greater than or equal to the defined ’Probability’ for the requirement to pass.
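The across-track decomposition can be sketched as follows (a simplified illustration in a local metric frame; names and sign conventions are illustrative, not the actual implementation):

```python
import math

def along_across_error(test_pos, ref_pos, ref_track_angle_deg):
    """Illustrative sketch: decompose the position offset (test minus
    interpolated reference) into along- and across-track components.
    Positions are (x east, y north); track angle is clockwise from north."""
    dx = test_pos[0] - ref_pos[0]
    dy = test_pos[1] - ref_pos[1]
    a = math.radians(ref_track_angle_deg)
    along = dx * math.sin(a) + dy * math.cos(a)   # component in flight direction
    across = dx * math.cos(a) - dy * math.sin(a)  # perpendicular component
    return along, across

def pacok(across_errors, threshold):
    # share of target reports whose |across error| is within the threshold
    ok = sum(1 for e in across_errors if abs(e) <= threshold)
    return ok / len(across_errors)
```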
Result Values
Sector
Single Target
Position Along
Configuration
The ’Position Along’ requirement is used to calculate the probability of a target report having an along-track error smaller than a defined threshold. The offset of the position (test vs. linearly interpolated reference position) is used to calculate the error component along the track angle of the reference at that time. If the absolute value of this along-track position error is smaller than or equal to the defined threshold, the target report is counted for the calculated probability PALOK. The PALOK must be greater than or equal to the defined ’Probability’ for the requirement to pass.
Result Values
Sector
Single Target
Position Distance
Configuration
This requirement can be used in two variations:
The offset of the position (test vs. linearly interpolated reference position) is used to calculate the Euclidean error distance. If the value of this position error fails the defined comparison threshold, the target report is counted for the calculated probability PCP (probability of check passed). The PCP must in turn pass the check for the requirement to pass.
The ’Failed Values are of Interest’ checkbox defines whether the target reports passing or failing the check are of interest.
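A minimal sketch of the check (names and the check-type handling are illustrative, not the actual implementation):

```python
import math

def position_distance_pcp(test_positions, ref_positions, threshold,
                          check_type="<="):
    """Illustrative sketch of the ’Position Distance’ check: Euclidean error
    per target report, compared against the threshold; PCP is the share of
    reports passing the check."""
    passed = 0
    for (tx, ty), (rx, ry) in zip(test_positions, ref_positions):
        dist = math.hypot(tx - rx, ty - ry)
        passed += (dist <= threshold) if check_type == "<=" else (dist > threshold)
    return passed / len(test_positions)
```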
Result Values
Sector
Single Target
Position Latency
Configuration
The ’Position Latency’ requirement is used to calculate the probability of a target report having a time-latency error smaller than a defined threshold. The offset of the position (test vs. linearly interpolated reference position) is used to calculate the error component along the track angle of the reference at that time, which, divided by the negative speed, gives the latency. If the absolute value of this latency is smaller than or equal to the defined threshold, the target report is counted for the calculated probability PLTOK. The PLTOK must be greater than or equal to the defined ’Probability’ for the requirement to pass.
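As a minimal sketch of the latency computation (names are illustrative, not the actual implementation):

```python
def position_latency(along_error_m, ref_speed_mps):
    """Illustrative sketch: the along-track error divided by the negative
    reference speed gives the latency. A target lagging behind the reference
    has a negative along-track error, so the latency comes out positive."""
    return along_error_m / -ref_speed_mps

def pltok(latencies, threshold_s):
    # share of target reports whose |latency| is within the threshold
    ok = sum(1 for lat in latencies if abs(lat) <= threshold_s)
    return ok / len(latencies)
```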
Result Values
Sector
Single Target
Speed
Configuration
The ’Speed’ requirement is used to calculate the probability of a target report having a speed error smaller than a defined threshold. The difference of the speed (test speed vs. speed based on reference positions) is calculated, and if its absolute value is smaller than or equal to the defined threshold, the target report is counted for the calculated probability PCP (probability of check passed). The PCP must be greater than or equal to the defined ’Probability’ for the requirement to pass.
In another variation, if the ’Use Percent Threshold if Higher’ checkbox is set, the check is changed for faster speeds. If the ’Threshold Percent’ value times the calculated speed is larger than or equal to the defined threshold value, the threshold value is changed to the ’Threshold Percent’ value times the calculated speed. This means that for higher speeds, an accuracy within the given percentage is required.
• Threshold Value Check Type: ≤, the speed difference must be less than or equal to the given threshold
• Failed Values are of Interest: If checked, the speed values of interest are the ones not passing the check
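The threshold selection can be sketched as follows (assuming the ’Threshold Percent’ value is given in percent, e.g. 5 for 5 %; names are illustrative, not the actual implementation):

```python
def effective_speed_threshold(base_threshold, ref_speed,
                              use_percent=False, threshold_percent=0.0):
    """Illustrative sketch of the ’Use Percent Threshold if Higher’ logic:
    for higher speeds the percentage-based value replaces the fixed
    threshold, but only if it is the larger of the two."""
    if use_percent:
        percent_value = threshold_percent / 100.0 * ref_speed
        if percent_value >= base_threshold:
            return percent_value
    return base_threshold
```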
Result Values
Sector
Single Target
The ’View Points’ tab displays existing view points and allows selecting, stepping through and editing them. Additionally, view points can be imported, removed and exported.
VIEW POINTS 142
At the top, a toolbar is shown. In the middle, a table shows all existing view points. At the bottom, general function buttons exist.
View Point
A view point is a point of interest in the data persisted in the database. It can have the
following attributes:
Not all attributes are shown in the table, since some are more processing-related than relevant to the user.
Additional attributes can also be shown: if further attributes exist in the view point information, they are automatically shown in the table. For additional information please refer to Custom Attributes.
When a view point is selected, the dataset defined by the view point is loaded auto-
matically and the active Views show the relevant data. Using the elements described in
the following sections a user can quickly step through view points, assess the information
shown and change status and comment information.
Please note that changes to view points are saved immediately to the database and no
undo function exists.
Toolbar
All of the actions can be triggered using the keyboard shortcut listed in square brackets. Up/Down refers to the keyboard arrow keys.
Table
In the view points table, all view points are listed, each in a separate row. Columns can be used for ordering (simply click on the column name) and can be resized as wanted.
Selecting a view point automatically triggers loading of the data and its display in the existing Views.
The status can only be set using the toolbar buttons or keyboard shortcuts, while edit-
ing the comment can also be triggered by a double-click on the respective cell.
Showing/Hiding Columns
Using the ’Edit Columns’ button in the toolbar, unwanted columns can be hidden. Click
on the button to activate the following menu:
Function Buttons
There exist four buttons for general functions:
• Import: Imports a view point file selected by the user (only recommended if no view
points are already defined)
• Delete All: Deletes all existing view points
• Export: Exports all existing view points as a view point file
• Export PDF: Exports view points as a PDF file
Data Selection
When the loading process is finished, data is automatically selected using the ’time’ and
’time_window’ attributes.
ListBox View
In the ListBox View, the variables set in the ’context_variables’ attribute are temporarily added to the list of variables. Loaded data is presented as always. When the loading process is finished, the selected data is highlighted.
OSGView
Loaded data is presented as always and can be adapted to a user's needs. After loading, the presented data is centered/zoomed according to the ’position_latitude’, ’position_longitude’, ’position_window_latitude’ and ’position_window_longitude’ attributes. If these are not set, the center/zoom is adapted to encompass all of the loaded data.
After Selection
After data loading, the application can be used as in any other situation, therefore changing
filters, adapting the OSGView style or re-loading the dataset is possible.
Assessment
In the ’View Points’ tab, after assessing the view point, a user can add a comment and
change the status to annotate the view point with additional information.
The ’Run’ button starts the export process. At the bottom, status information and a cancel button exist.
If pdflatex is to be run, this is indicated in the ’Status’ text; pdflatex might have to be run several times. For large documents this might take several minutes, and this time is not included in the ’Remaining Time’ estimate.
The export speed of course depends on the View Points, the number of Views, the hardware etc. However, even exporting a (somewhat unreasonable) 1500 View Points takes only about 15 minutes on the author's hardware, so it should be adequate for any reasonable use case.
If the export process was successful, a message is shown and the dialog is closed automatically. The report Latex file is written into the report directory, with screenshots in the ’screenshots’ subfolder. If the respective options were set, a PDF is automatically generated and opened using the default PDF application.
If a Latex error occurred, a message box with the error message is shown.
Please note that the generated report can of course be edited by the user and re-generated using pdflatex, which allows for better customization (adding e.g. details, corporate identity etc.).
ListBox View
A ListBox View displays DBContent data as text in tables to allow textual data inspection.
When started, it presents itself in the following manner.
LISTBOX VIEW 158
Layout
On the left side a number of tabs exist, one for each type of DBContent and an additional
’All’ tab, each of which contains a table.
On the right side resides the configuration area, which allows configuring what data is
loaded and how it is displayed. The ’Reload’ button on the bottom can be used to trigger
a reload of the view’s data.
Data Loading
To load the data the ’Reload’ button or the mechanism described in Section UI Overview
can be used. To filter the dataset, the mechanism described in Section Filters can be used.
Once updated, the tables are filled with text representing the values of the chosen DBContent variables. If a value is undefined, its cell remains empty. For each type of
DBContent a dedicated table is shown, as well as the ’All’ table, where data from all
DBContent types is shown collectively.
Please note that since a specific variable might only exist in certain DBContents, the
number of columns in the various tables might differ.
Usage
Selection
In the first column of each table, checkboxes are shown, indicating whether the respective target report is currently selected. The selection may be changed by selecting/de-selecting the respective checkboxes, or by altering the selection in other views (cross-selection). If the selection is changed in one of the other views, this view is updated automatically.
Variable Lists
In the ’Variable Lists’ section of the ’Config’ tab, a variable list preset can be selected via a
combo box. Further, custom variable lists can be added, copied and removed by the user.
Variables
For the selected variable list, all DBContent variables which are loaded from the database
are shown in the ’Variables’ list. This list is ordered and, like all configuration elements, persistent. The ordering in the list can be changed by selecting a certain variable and using the up/down buttons to move it in the respective direction.
When pressing the ’Remove’ button, a selected variable is removed. Pressing the ’Add’
button allows appending a variable to the list using a context-menu.
If Meta variables are used, they are displayed for all DBContents they exist in. If a
DBContent variable is used, it is only displayed in its native content.
After adding a variable, the dataset has to be reloaded to include the additional data; therefore the ’Reload’ button becomes active.
Use Presentation
When this checkbox is checked, the so-called presentation mode is used. In the database, the variables might be stored with different units or in a data representation which is not easy to read. For this purpose, a presentation mode was introduced, e.g. to show a Mode A code in octal, or a Time of Day not in seconds since midnight but in HH:MM:SS.SS format.
When the ’Use Presentation’ checkbox is not checked, the original database values are
presented (and exported).
Exporting
The data from the current table can be exported to a comma-separated value (CSV) text,
either as file or copied to the clipboard.
For this example a Mode 3/A code filter was used to load only target reports and
system track updates from a single target.
To copy to the clipboard, select data to be copied in the table using the Shift key and
the left mouse button, then press Ctrl-C. The CSV text data can then be pasted in other
applications.
To export the complete loaded dataset, click the ’Export’ button, so that a dialog is
opened.
Choose a filename, and press ’Save’ to save the data. If the ’Overwrite Exported File’
checkbox was checked, an existing file is automatically overwritten. Please note that
exporting might take some time for larger datasets, and currently no status indication is
given.
After export, a dialog is shown indicating that the export was completed.
The exported file can be opened in any editor, or for example imported into LibreOffice
Calc.
Reload
Additionally, if a change is made that requires re-loading of the data (e.g. additional data should be displayed), the ’Reload’ button becomes available and can be used to trigger a loading process as described in Section UI Overview.
Histogram View
HISTOGRAM VIEW 165
Layout
On the left side resides the plot area in which the histogram is shown (if data has been
loaded). The tool bar at the top shows the currently selected mouse interaction mode and
the available actions.
On the right side resides the configuration area, which allows configuring what data is
loaded and how it is displayed. The ’Reload’ button on the bottom can be used to trigger
a reload of the view’s data.
Data Loading
To load the data the mechanism described in Section UI Overview or the ’Reload’ button
can be used. To filter the dataset, the mechanism described in Section Filters can be used.
On the x-axis the selected variable’s data range is discretized into 20 bins. An additional
bin represents all NULL values.
On the y-axis the bin sizes per DBContent are shown, either in linear or logarithmic
scale.
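The binning can be sketched as follows (a simplified illustration; the actual bin-edge handling may differ):

```python
# Illustrative sketch: 20 equal-width bins over the variable's data range,
# plus one extra count for NULL (None) values.
def histogram_bins(values, num_bins=20):
    non_null = [v for v in values if v is not None]
    null_count = len(values) - len(non_null)
    lo, hi = min(non_null), max(non_null)
    width = (hi - lo) / num_bins or 1.0  # avoid zero width for constant data
    counts = [0] * num_bins
    for v in non_null:
        # clamp the maximum value into the last bin
        idx = min(int((v - lo) / width), num_bins - 1)
        counts[idx] += 1
    return counts, null_count
```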
Below the histogram a legend is shown, giving the total counts of all data points.
In the current example the meta-variable ’Time of Day’ is used, showing the overall
data rate per DBContent.
Usage
Toolbar
The first tool buttons can be used to switch between mouse interaction modes; currently only one mode is available.
Config Tab
The elements on the top define which data is visualized in the histogram.
Any numerical variable can be visualized by checking the ’Show Variable Data’ box and selecting a variable in the selection control below. A reload operation might be required for the selection to take effect.
Result data can be visualized by checking the ’Show Evaluation Result Data’ box. In this case the ’Requirement’ and ’Result’ fields indicate which evaluation result is presented.
The ’Logarithmic Y Scale’ checkbox can be used to switch between linear and logarith-
mic scale of the y-axis.
Histogram
Zoom
The mouse wheel can be used to zoom in or out of the presented data. In the current
presentation this is only useful in limited circumstances. The space key can be used to
reset to the default zoom level (equivalent to the respective toolbar action).
Selection Mode
In ’Select’ mode, data can be selected. The first left mouse-button click starts selection
(showing a red rectangle), the second click finalizes the selection. The data contained in all
intersected bins is selected.
The selected data is then presented in an extra ’Selected’ entry in the legend, showing
the count of all selected data points.
This enables selection of parts of the data based on the presented variable, allowing
deeper analysis e.g. of dubious data.
The ’Invert Selection’ or ’Delete Selection’ actions allow for easier selection of
the wanted target reports.
By pressing the ’Control’ key during the second click, the newly selected data is added
to any previous selection. This can be used to select data incrementally, making more
complex selections possible.
Evaluation Result
If - using the Evaluation feature - a requirement result is presented, the respective data is
shown in the histogram.
Please note that currently evaluation result data cannot be selected. This will be
improved in one of the next versions.
OSG View
The OSG View allows a graphical representation of target reports from the DBContent.
After creation, it displays a world map in the map widget on the left, and a configuration
panel on the right hand side.
Layout
The map window will automatically move to the center location of the data in the
current database, and later pan/zoom to the loaded data (on the first load only).
Please note that the Data Widget and the Configuration panel can be resized and
hidden if wanted.
To load the data, the mechanism described in Section UI Overview or the ’Reload’ button
can be used. To filter the dataset, the mechanism described in Section Filters can be used.
How the data is presented is defined in the Configuration panel, please refer to Con-
figuration Panel for details.
There exist two display modes, 2D (default) and 3D, which are set based on the used
map:
• 2D: Displays map in a 2D projection and (per default) disables usage of height in
geometry data drawing
• 3D: Displays map as a globe and (per default) enables usage of height in geometry
data drawing
Toolbar
The first 4 symbols switch between the mouse action modes; the others provide general
display modes or operations (shortcut refers to keyboard shortcut):
Data Widget
Mouse/Keyboard Operations
In the data widget, several mouse and key operations are supported. The following terms
are used:
• LMB: Left mouse button
• MMB: Middle mouse button
• RMB: Right mouse button
The following operations exist, depending on the mouse action modes (set in the tool-
bar):
Status Information
In the lower-left corner, the data status information is given:
• Status
– Idle: Nothing to do
– Loading: Loading in progress
– Done: Loading/redraw done
– Redrawing: Redraw in progress
• Time Begin: First timestamp in the data, in HH:MM:SS
• Time End: Last timestamp in the data, in HH:MM:SS
• Loaded: Number of loaded target reports (from the database)
• Skipped: Number of not-drawn target reports
• Drawn: Number of drawn target reports
• Selected: Number of selected target reports
In the upper-right corner, the COMPASS version is shown. In the lower-right corner,
the current coordinates (map coordinates under the mouse cursor) are shown.
Data Operations
Data Labeling
Labels can be shown for target reports, when the Label mode is active. Simply do a
LMB click on a target report symbol.
Geometry Menu
Several data-related operations can be also performed in the data widget when the Label
mode is active, by a RMB click on a target report symbol.
Label Multiple
Multiple target reports can be selected when the Label Multiple mode is active.
The usage of this mode works exactly like the Selection mode, only that the target
reports are not selected but each one is labeled. If more than 100 target reports are labeled,
only the first 100 are labeled and an error message is shown.
For details about the rectangular target report labeling please refer to Selection.
Distance Measurement
Distance measurements can be made when the Measure mode is active. If ’Use height’
(section Height) is not checked, measurements are done as follows:
Simply do a LMB click on a target report or a position on the map to start the measure-
ment.
During the measurement, the following information is shown in the top-left corner:
• Distance (km): Great-circle distance using the haversine formula, in meters or kilometers.
• Distance (nm): Great-circle distance using the haversine formula, in nautical miles.
• Bearing (deg): Bearing from point 1 to point 2, in degrees from true north.
Then do a second LMB click on the map on a point of interest to finish the measurement.
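The displayed values can be reproduced with a short sketch (Python, not part of COMPASS; the Earth radius constant is an assumption, so results may differ slightly from the displayed values):

```python
# Illustrative sketch (not COMPASS source): great-circle distance via the
# haversine formula and the initial bearing from true north.
import math

EARTH_RADIUS_M = 6371000.0  # assumed mean Earth radius

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS-84 points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, degrees from true north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

# points taken from the measurement example shown later in this section
d = haversine_m(36.56370614, 15.74786596, 36.56522404, 15.74880269)
b = bearing_deg(36.56370614, 15.74786596, 36.56522404, 15.74880269)
```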
If ’Use height’ (section Height) is checked, measurements are calculated in the same
way as previously (ground distance), but for each target report with height a connecting
line to the respective ground position is displayed.
For a measurement between two target reports with height information, the measurement
will be displayed as follows:
The measurement is shown in the Layers tab, and is identified using a number. For
details about the measurement layer operations please refer to Measurement Operations.
Please note that the measurement number background color can be set as with the label
background color.
Selection
Target reports can be selected when the Select mode is active. If ’Use height’ is not
checked, selection is done as follows:
Simply do a LMB click on a target report or a position on the map to start the selection.
Move the mouse to another location to create a red selection rectangle.
With a second LMB click the selection is finalized and all target reports in the created
latitude/longitude rectangle are selected. This is shown by a different color (yellow by
default), and the ’Selected’ counter in the lower-left corner shows the current selection
size.
If another selection is done, the previous one is cleared by default. If another selection
should be added to the current one, hold down the Control key when doing the second
LMB click.
This adds the target reports within the new red rectangle to the current selection.
If ’Use height’ is checked, selection can be done with height information, which is
shown as a box. Simply do a LMB click on the map for a target report, and move the
cursor to another target report or map location.
If both locations have zero height (map location or no height information in target
report), again a rectangle is shown. If one or both locations have a non-zero height, a 3D
box is displayed.
Time Filter
Once activated using the symbol in the toolbar, the time filter ensures that only target
reports within a specific time window are shown.
• Use Opacity: Checkbox and slider to set opacity (older target reports are made trans-
parent)
• Scrollbar: Manual time-scrolling
During usage of the time filter, most changes in the Configuration panel are not avail-
able (except for changes in the Layers tab and current style).
Please note that the time filter is automatically deactivated if a reload is triggered.
Depth Check
Once activated using the symbol in the toolbar, during drawing it is checked whether
data is occluded in the depicted scene. E.g. if target reports are ’below ground’, they are
not shown.
Please note that if the depth check is activated for geometry without height display,
bad rendering can occur. For this reason, this mode is only recommended for geometry
display with height and is disabled in 2D display mode.
Label Deletion
All existing labels can be deleted using the symbol in the toolbar.
Measurement Deletion
All existing measurements can be deleted using the symbol in the toolbar.
Selection Color
The color for selection highlighting can be configured using the symbol in the toolbar.
Selection Invert
The current selection can be inverted using the symbol in the toolbar.
Selection Deletion
The current selection can be deleted using the symbol in the toolbar.
Display Mode
The display modes can be switched using the symbols in the toolbar, or by pressing
the D-key.
Zoom to Home
The currently viewed area can be set to encompass all data in the database using the
symbol in the toolbar.
Configuration Panel
• Layers: Allows configuration of what data is shown, and access to the map and mea-
surement functions
• Style: Allows configuration of how data is shown
• Labels: Allows configuration of how data is labeled
• Evaluations: Allows configuration of how evaluation results are colored
• Others: Allows configuration of the height usage in the shown geometry
Additionally, at the bottom the ’Update’ button allows redrawing/reloading of the
geometry data. This button becomes available if changes in the Style or Height tabs
require a redraw or a reload of the data.
Layers Tab
In the ’Layers’ tab, a tree view is given to configure the display of the existing elements.
How the geometry is layered is defined by the Layer mode in the Style tab, please refer
to Layer Mode for details.
For all geometry layers, a second column is shown, giving the number of target
reports in the (sub-)layer (as loaded from the database).
In the Layers widget, a number of operations are possible for each tree item.
The context menu allows several actions to be performed on an item. If an item has
sub-items, the same action will automatically be performed on the child items.
Geometry Operations
The geometry menu is triggered by clicking on the respective symbol of the layer.
Selection Operations
• Select Data: Select (only) target reports in the group
• Add Data to Selection: Add target reports in the group to selection
• Remove Data from Selection: Remove target reports in the group from selection
Ground Lines Operations If height information is used, the following operations exist:
• Show Ground Lines: Shows the connection lines to the ground for all target reports
in the group.
• Clear Ground Lines: Clears the connection lines to the ground for all target reports
in the group.
Ground Speed Vector Operations If ground speed information is loaded (see Ground
Speed), the following operations exist:
• Show Ground Speed Vectors: Shows ground speed vectors for all target reports in
the group.
• Clear Ground Speed Vectors: Clears ground speed vectors for all target reports in the group.
Special Root Geometry Layer Operations This layer has special operations present:
• Reset All Styles: Sets all style information to default (including making all data visible)
• Clear All Associations: Removes display of all shown associations of all data
Measurement Operations
To access the measurement operations, click the measurement symbol; then the following
operations are available:
• Copy All Texts: Copies measurement data as text to the clipboard
• Delete All: Removes all measurements
1
Distance (m): 188.59
Distance (nm): 0.1018
Bearing (deg): 26.366
Point1: 36.56370614, 15.74786596
Point2: 36.56522404, 15.74880269
2
Distance (m): 242.09
Distance (nm): 0.1307
Bearing (deg): 141.243
Point1: 36.55986876, 15.72751885
Point2: 36.55817286, 15.72921376
Radars Operations
How to define radar attributes is defined in Configure Data Sources.
The radars layer display configuration is stored in the configuration and restored upon
startup.
To access the map operations, click the (top) radars symbol , then the following
operations are available:
• Show All: Shows all radars
• Hide All: Hides all radars
• Toggle Labels: Shows/hides labels for all radars
• Toggle Range Rings: Shows/hides all range rings for all radars
Sectors Operations
How to define sector attributes is defined in Configure Sectors.
For each defined sector, a colored polygon is shown, grouped by its layer. Such a
structure could look as follows:
• Layer A
– Sector A1
– Sector A2
• Layer B
– Sector B1
Each layer or sector can be shown/hidden. The sectors layer display configuration is
stored in the configuration and restored upon startup.
For the examples shown in the respective task, the following display can be shown:
Map Operations
To access the map operations, click the globe symbol . Please note that only the main
Map item has a context menu, and only allows setting of map files and changing of the
opacity.
• Map File: Change the background map
• Opacity: Change the background map opacity
Changing the Background Map Please note that, while the default background map
is supplied with COMPASS, the other background map types are downloaded from public
Internet sources and therefore require an Internet connection. They are then cached locally
to facilitate faster access.
For each map layer defined in the background map, a checkbox is shown to disable the
layer, and by clicking on the map layer symbol its opacity can be changed.
To change the background map, click the globe symbol in the root Map layer to access
the map selection.
Please note that for each map marked with * a 3D version (as listed) and a 2D version
(with filename suffix ’_2d’) exists. Each of them contains similar content, but changes the
display mode to 3D/2D upon selection.
The map loading and display is based on the osgEarth library (http://osgearth.org/),
as are the map file definitions.
The maps which can be set using this dialog are simply a file list from the folder
’~/.compass/data/maps’. So, changes can be made to the supplied ones, or custom user
maps can be added to this folder.
Please refer to section Adding/Changing Map Files for further details.
ArcGIS Map As supplied in the osgEarth example files, this map data is obtained
from ArcGIS Online (https://doc.arcgis.com/en/arcgis-online/reference/what-is-agol.htm).
It shows satellite imagery, supplied with elevation data from ReadyMap.
Minimal Map This minimal map shows national borders based on an ESRI shapefile,
provided by Bjorn Sandvik on thematicmapping.org, and European airports, provided by
https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/transport-networks.
Open Street Map This very useful map shows map data from https://www.openstreetmap.org/.
It is possible to zoom in to a very high level of detail, to even inspect airport layouts.
Open Street Map German This very useful map shows map data from https://www.openstreetmap.de/.
It is possible to zoom in to a very high level of detail, to even inspect airport layouts.
ReadyMap & ReadyMap Detailed This map also shows satellite data, from
http://web.pelicanmapping.com/readymap-tiles/.
The detailed version shows the same data as ReadyMap, but to a higher detail level.
Please note that this map includes an elevation layer, so mountains are modeled in 3D.
In this folder, the previously discussed maps exist as osgEarth ’.earth’ files. If new .earth
files are added, or the content of such files is changed, they can be used from the OSGView
after a restart.
The easiest example is the ’minimal_new.earth’ file, which uses an ESRI shapefile from
the subfolder ’shapefiles’ to display the national borders.
<map name="Worldwide Line Vectors" type="geocentric">
<options>
<lighting>false</lighting>
<terrain>
<min_tile_range_factor>8</min_tile_range_factor>
<color>#000000FF</color>
</terrain>
</options>
<feature_model name="world">
<features name="world" driver="ogr">
<!-- national borders shapefile from the 'shapefiles' subfolder;
     the filename shown here is illustrative -->
<url>shapefiles/world.shp</url>
<build_spatial_index>true</build_spatial_index>
<ogr_driver>ESRI Shapefile</ogr_driver>
<convert type="line"/>
</features>
<styles>
<style type="text/css">
world {
stroke: #ffff00;
stroke-width: 2px;
stroke-tessellation-size: 1km;
render-lighting: false;
altitude-clamping: none;
render-depth-test: false;
}
</style>
</styles>
</feature_model>
</map>
The world background colour is set using the ’terrain color’ tag. The name of the
shapefile is given using the ’url’ tag, the line colour and width is set in the style below.
Basically a user can add their own files to the ’shapefiles’ folder, and simply duplicate the
’feature_model’ part with their own ESRI shapefiles. This could then look like this:
<map name="Worldwide Line Vectors" type="geocentric">
<options>
<lighting>false</lighting>
<terrain color="#101010ff"/>
</options>
<feature_model name="world">
<features name="world" driver="ogr">
<!-- illustrative filename for the borders shapefile -->
<url>shapefiles/world.shp</url>
<build_spatial_index>true</build_spatial_index>
<ogr_driver>ESRI Shapefile</ogr_driver>
<convert type="line"/>
</features>
<layout tile_size="100000">
<level max_range="1e10"/>
</layout>
<styles>
<style type="text/css">
world {
stroke: #ffff00;
stroke-width: 2px;
stroke-tessellation-size: 1km;
render-lighting: false;
altitude-clamping: none;
render-depth-test: false;
}
</style>
</styles>
</feature_model>
<feature_model name="doi">
<features name="world" driver="ogr">
<url>shapefiles/doi.shp</url>
<build_spatial_index>true</build_spatial_index>
<ogr_driver>ESRI Shapefile</ogr_driver>
<convert type="line"/>
</features>
<layout tile_size="100000">
<level max_range="1e10"/>
</layout>
<styles>
<style type="text/css">
states {
stroke: #00ff00;
stroke-width: 2px;
render-depth-test: false;
}
</style>
</styles>
</feature_model>
</map>
While a number of formats are supported, to add a KML file, add the following part to
a ’map’ (as previously):
<feature_model name="wam_area">
<features name="wam_area" driver="ogr">
<url>shapefiles/wam_area.kml</url>
<ogr_driver>LIBKML</ogr_driver>
<build_spatial_index>true</build_spatial_index>
</features>
<styles>
<style type="text/css">
states {
stroke: #0000ff;
stroke-width: 2px;
render-depth-test: false;
}
</style>
</styles>
</feature_model>
To add a GML file, add the following part to a ’map’ (as previously):
<model driver="feature_geom" name="gml" cache_enabled="false">
<features driver="ogr">
<ogr_driver>GML</ogr_driver>
<url>shapefiles/example.gml</url>
<caching_policy usage="no_cache"/>
</features>
</model>
To add a GeoTIFF file, add the following part to a ’map’ (as previously):
<image driver="gdal" name="tiff" cache_enabled="false" visible="false">
<url>/usr/share/osgearth/data/world.tif</url>
<caching_policy usage="no_cache"/>
</image>
To add a graticule (latitude/longitude grid), add the following part to a ’map’ (as pre-
viously):
<geodetic_graticule name="Graticule" visible="true">
<color>#ffff007f</color>
<label_color>#ffffffff</label_color>
<grid_lines>20</grid_lines>
<resolutions>10 5.0 2.0 1.0 0.5 0.25 0.125 0.0625 0.03125</resolutions>
</geodetic_graticule>
For further information please refer to the osgEarth user manual
(https://buildmedia.readthedocs.org/media/pdf/osgearth/latest/osgearth.pdf), e.g.
in Section Features & Symbology.
Style Tab
• Layer Mode: Defines how layers are generated. Please refer to Section Layer Mode
for details.
• Connect Last Layer: Whether grouped target reports in the last layer should be con-
nected using lines
• Connect None Height: If groups are connected, whether target reports without
height information should be connected
• Blend Mode: Defines the drawing blend mode, allowing clearer symbols (contour)
or colors (src-over).
• Style: Defines how geometry is styled. Please refer to Section Style for details.
• Render Order: Defines the drawing order of DBContents. The bottom one is drawn
first, the top one last (over all others)
• Update button: Triggers a redraw or reload of the geometry, becomes available after
a change if needed.
Layer Mode
In this selection the way layers are generated can be changed.
The layer mode defines what layers are generated: e.g. for ’A’, layers for all values
of ’A’ are created; for ’A:B’, layers for all values of ’A’ are created, each with sub-layers for
all values of ’B’. In this case, ’A’ is the parent, ’B’ is the child.
If no values exist in the data for a layer, the data is grouped in the layer ’None’.
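The layer generation can be sketched as follows (Python, not part of COMPASS; the record and key names are illustrative):

```python
# Illustrative sketch (not COMPASS source): hierarchical layer generation for a
# layer mode such as 'A:B' - one layer per value of A, with sub-layers per B.
# Records missing a value are grouped under 'None'.
from collections import defaultdict

def build_layers(records, keys):
    """Group records into nested layers along the given key list."""
    if not keys:
        return records
    groups = defaultdict(list)
    for rec in records:
        groups[rec.get(keys[0], "None")].append(rec)
    return {value: build_layers(recs, keys[1:]) for value, recs in groups.items()}

reports = [
    {"DBContent": "Radar", "DS ID": "Radar1"},
    {"DBContent": "Radar", "DS ID": "Radar2"},
    {"DBContent": "Tracker"},  # no DS ID -> grouped under 'None'
]
layers = build_layers(reports, ["DBContent", "DS ID"])
```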
• DBContent:DS ID
• DBContent:DS ID:Line ID
• DBContent:DS ID:Line ID:Aircraft Address
• DBContent:DS ID:Line ID:Track Number
• DBContent:DS ID:Aircraft Identification
• DBContent:DS ID:Aircraft Address
• DBContent:DS ID:Track Number
• DBContent:DS ID:Mode 3/A Code
• UTN:DBContent:DS ID
• UTN:DBContent:DS ID:Line ID
• UTN:DBContent:DS ID:Aircraft Identification
• UTN:DBContent:DS ID:Aircraft Address
• UTN:DBContent:DS ID:Track Number
• UTN:DBContent:DS ID:Mode 3/A Code
• Aircraft Identification:DBContent:DS ID
• Mode 3/A Code:DBContent:DS ID
• Aircraft Address:DBContent:DS ID
• Aircraft Address:DBContent:DS ID:Line ID
Please note that the UTN layer modes only exist when association information is
present.
Please also note that after a change in the Layer mode a redraw has to be triggered
before the changes take effect.
DBContent:DS ID In this layer mode the DBContent name is used to create the first
layer, with sub-layers for each data source.
Mode 3/A Code:DBContent:DS ID In this layer mode the Mode 3/A code is used to
create the first layer, with sub-layers for each DBContent and data source.
UTN:DBContent:DS ID In this layer mode the UTN is used to create the first layer, with
sub-layers for each DBContent. All target reports without a UTN (not used by ARTAS) are
grouped into layer ’None’.
Please note that connection lines for target reports with a time-of-day difference larger
than 5 minutes will be omitted.
If ’Connect Last Layer’ is activated for a Layer mode in which the last layer is not
target-specific, this will lead to a sub-optimal representation.
This mode (normally) makes sense in one of the following Layer modes:
• DBContent:DS ID:Line ID:Aircraft Address
• DBContent:DS ID:Line ID:Track Number
• DBContent:DS ID:Aircraft Identification
• DBContent:DS ID:Aircraft Address
• DBContent:DS ID:Track Number
• UTN:DBContent:DS ID:Aircraft Identification
• UTN:DBContent:DS ID:Aircraft Address
• UTN:DBContent:DS ID:Track Number
• Aircraft Identification:DBContent:DS ID
• Aircraft Address:DBContent:DS ID
• Aircraft Address:DBContent:DS ID:Line ID
If the height information is used (3D view) and target reports without height infor-
mation are connected, the lines clutter the display. The ’Connect None Height’ checkbox
allows setting this behaviour.
Please note that changing these values requires a manual redraw using the ’Redraw’
button.
Style
There exist 3 main elements for styling:
Please note that some style presets (e.g. layer style per ACID, ACAD, Track Num-
ber, Mode 3/A Code) generate lots of different (persistent) styling rules, which decreases
startup speed. After using such styles it is possible to reset the styles using the ’Reset
Styles’ button to increase application startup speed.
Please note that after changing the style to one of these values a redraw has to be
triggered.
Please also note that for such presets the data from which the style is derived has
to be present in the Layer mode, otherwise the layer is styled with a common base style.
Target Report based Style Presets The following Target report based style presets exist:
• Color by ADS-B MOPS: Color is defined by the ADS-B MOPS version of the
transponder
• Color by ADS-B Position Quality: Color is defined by the ADS-B NUCp/NACp
value of the target report
• Color by Flight Level: Color is defined by the Mode C code/Flight level
• Color by Speed: Color is defined by groundspeed value
• Color by Detection Type: Color is defined by Radar detection type (PSR, SSR,
PSR+SSR, ...)
• Color by Track Angle: Color is defined by the direction of movement value
• Color by Track Age Type: Color is defined by the track age of a variable
Please note that after changing the style to one of these values a reload has to be
triggered.
Customized Styling
While the presets come with (mostly) reasonable default values, adaptation can be per-
formed by clicking on the value that should be changed, either in the Default Style or the
Generated rules. Any changes are applied immediately to the geometry.
Style Examples
To give a few examples, some interesting Layer modes and Style preset combinations are
given:
Figure 143: OSG View Layer color per Mode 3/A Code
Please note that in this style one can set the following parameters (by clicking the
respective value):
• Variable used (naming as in ASTERIX)
• Time Interval used
• Colors for the 4 intervals (+ None = not set color)
Render Order
In the render order widget, the drawing order of the drawn geometry is specified. The
one at the top is drawn last (over all others), so it is useful to move the most important
DBContent (for the current inspection) to the top.
Using the ’Order By’ checkbox, the drawing order can be defined based on:
• DBContent: Data source type (e.g. Tracker, Radar, ...)
• ds_id: Data source (e.g. Tracker1, Tracker2, Radar3, ...)
To change the drawing order click on the item to move and use the buttons on the right
side. Please note that no redraw is required and that the drawing order is persisted in the
configuration.
Labels Tab
When the auto-label feature is active and one or more data sources are selected for
labeling, automatic labels are generated in the OSG View according to the content and
label filter settings.
Label Contents
When clicking the ’Edit Label Contents’ button , the label content specific to a DBCon-
tent can be selected.
The label contents are organized in a 3x3 matrix; the contents of the first 2 rows and
columns are fixed, while row 3 and column 3 can be edited.
Multiple levels of detail (LoDs) are defined (as indicated with border lines in the edit
dialog):
• LoD 1: 1x1 matrix with the best available identification
• LoD 2: 2x2 matrix, LoD 1 with the most important secondary information
• LoD 3: 3x3 matrix, LoD 2 with some additional information
Automatic Labeling
If a data source to be labeled is activated, automatic labels are generated (in the
configured direction) and updated once per second.
Please note that when automatic labeling is active, manually created labels are removed
once per second. This will be improved in the future.
For each target (as in the last layer item according to Layer Mode) a single label is
created for the latest shown target report.
Depending on the number of visible labels, the LoD is chosen by Auto LoD; if (e.g. with
zoom or time window filter operations) the number of visible labels is reduced, a higher
LoD is shown.
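A possible Auto LoD rule can be sketched as follows (Python, not part of COMPASS; the thresholds are purely illustrative assumptions):

```python
# Illustrative sketch (not COMPASS source): fewer visible labels allow a higher
# level of detail. The threshold values are assumptions, not the actual ones.
def auto_lod(num_visible_labels, thresholds=(10, 50)):
    """Return 3 (most detail) for few labels, down to 1 (least) for many."""
    if num_visible_labels <= thresholds[0]:
        return 3
    if num_visible_labels <= thresholds[1]:
        return 2
    return 1
```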
The automatic labeling can be most useful when choosing a target-specific layer mode
(e.g. based on unique secondary identification or UTN) and activating the time window
filter (or in Live mode).
Label Filters
When automatic labeling is activated, labeling of targets can be restricted using the label
filters. Each filter can be activated and its filter value set to label only targets with specific
attributes. The filters work similarly to the ones defined in Filters.
Mode 3/A Codes When active, this filter forces labeling of data with the given Mode A
code(s), so it is possible to give multiple values (in octal notation, separated by commas).
E.g. ’7000’ is possible, or ’7000,7777’. Target reports without a given Mode A code will not
be labeled unless the value ’NULL’ is (also) given.
Mode C When active, based on the Mode C Min and Mode C Max values, target reports
are only labeled if the flight level lies within the specified thresholds. Target reports
without a Mode C code will not be labeled unless the NULL checkbox is checked.
Aircraft Identification When active, this filter forces labeling of data only from aircraft
identifications matching the given expression. The percent operator denotes an ’any
characters’ placeholder. So e.g. ’%TEST%’ will match ’TEST123’ or ’TEST123 ’ (with spaces) or
’MYTEST’. Target reports without a given aircraft identification will not be labeled unless
the value ’NULL’ is (also) given.
Aircraft Address When active, this filter forces labeling of the given Mode S address(es),
so it is possible to give multiple values (in hexadecimal notation, irrespective of upper
or lower case characters, separated by commas). E.g. ’FEFE10’ is possible, or
’FEFE10,FEFE11,FEFE12’. Target reports without a given Mode S address will not be
labeled unless the value ’NULL’ is (also) given.
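The filter value handling described above can be sketched as follows (Python, not part of COMPASS; the helper names are illustrative, assuming SQL-LIKE semantics for the percent operator):

```python
# Illustrative sketch (not COMPASS source): how the label filter values could be
# interpreted - '%' as an 'any characters' placeholder, and comma-separated
# octal (Mode A) or case-insensitive hex (Mode S) code lists.
import re

def matches_callsign(pattern, callsign):
    """'%' matches any characters, as in SQL LIKE; spaces are kept."""
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("%")) + "$"
    return re.match(regex, callsign) is not None

def parse_mode_a(text):
    """Comma-separated Mode A codes in octal notation -> set of integers."""
    return {int(code, 8) for code in text.split(",")}

def parse_mode_s(text):
    """Comma-separated Mode S addresses in hex, case-insensitive."""
    return {int(addr, 16) for addr in text.split(",")}
```

For example, `matches_callsign("%TEST%", "MYTEST")` is true, mirroring the ’%TEST%’ example above.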
Evaluation Tab
In the Evaluation tab, labels exist to denote which evaluation results are currently shown.
Others Tab
Height
Per default, a target report’s height is not used for display, which is common in current
air-traffic displays. However, in certain situations a true 3D display might be of interest to
a user, and therefore several options were incorporated:
• Use Height: Use the height based on Mode C hc , transformed to meters
• Offset Factor: If height information is used, this factor ho is added to the height
• Scale Factor: If height information is used, this factor hs is used to multiply the height
• Null Offset: If height information is used, this factor hn is used for target reports
without height information
Generally, if no height information is given (no Mode C code), the height is either 0
or the height offset (if used). That means that those target reports appear to be on the
ground. If connection lines are drawn between the ones in the air and those without, a lot
of annoying lines are shown.
If height information is given:
h = ho + hs · hc [m]
If no height information is given:
h = hn [m]
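A sketch of the height calculation (Python, not part of COMPASS; the helper name is illustrative):

```python
# Illustrative sketch (not COMPASS source): display height of a target report,
# following the formulas above (offset ho, scale hs, null offset hn, Mode C
# height hc transformed to meters).
def display_height_m(hc_m, ho=0.0, hs=1.0, hn=0.0):
    """Return the drawn height in meters; hc_m is None if no Mode C is given."""
    if hc_m is None:
        return hn
    return ho + hs * hc_m
```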
Please note that upon changes to the height usage, a manual redraw has to be per-
formed using the ’Redraw’ button.
Ground Speed
• Load Ground Speed Variables: Defines if ground speed variables should be loaded
from the database. Must be enabled to enable display of ground speed vectors.
• Ground Speed Period [s]: Length of ground speed vectors, in seconds.
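The vector length can be sketched as follows (Python, not part of COMPASS; the flat-earth approximation and the meters-per-degree constant are illustrative assumptions):

```python
# Illustrative sketch (not COMPASS source): the drawn ground speed vector spans
# the distance the target would travel within the configured period.
import math

def speed_vector_end(lat_deg, lon_deg, speed_mps, track_deg, period_s):
    """Approximate flat-earth endpoint of the ground speed vector."""
    dist_m = speed_mps * period_s
    north_m = dist_m * math.cos(math.radians(track_deg))
    east_m = dist_m * math.sin(math.radians(track_deg))
    dlat = north_m / 111320.0  # approx. meters per degree of latitude
    dlon = east_m / (111320.0 * math.cos(math.radians(lat_deg)))
    return lat_deg + dlat, lon_deg + dlon

# 100 m/s due east for a 60 s period -> 6 km vector
end_lat, end_lon = speed_vector_end(0.0, 0.0, 100.0, 90.0, 60.0)
```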
If loading is enabled (a reload might be required), the ground speed vectors can be shown
using Geometry Operations:
Position Accuracy
• Load Position Accuracy Variables: Defines if position accuracy variables should be
loaded from the database. Must be enabled to enable display of position accuracy
ellipses.
• Position Accuracy Scale [sigma]: Size of accuracy ellipses, defined in standard devi-
ations:
– 1.0 σ: 68.268 %
– 1.5 σ: 86.638 %
– 2.0 σ: 95.44 %
– 2.5 σ: 98.75 %
– 3.0 σ: 99.73 %
– 3.5 σ: 99.95 %
– 4.0 σ: 99.9936 %
– 4.5 σ: 99.999320 %
– 5.0 σ: 99.99994266 %
• Accuracy Ellipse Num Points Factor [1]: Multiplication factor to calculate number of
ellipse points based on size
• Accuracy Ellipse Max. Num Points [1]: Maximum number of ellipse points
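The listed sigma coverage percentages follow the one-dimensional normal distribution and can be reproduced as follows (Python, not part of COMPASS):

```python
# Illustrative sketch (not COMPASS source): probability mass within +-k standard
# deviations of a 1D normal distribution, P(|x| <= k*sigma) = erf(k / sqrt(2)).
import math

def sigma_coverage_percent(k):
    """Coverage within +-k standard deviations, in percent."""
    return 100.0 * math.erf(k / math.sqrt(2.0))

for k in (1.0, 2.0, 3.0):
    print(f"{k:.1f} sigma: {sigma_coverage_percent(k):.3f} %")
```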
If loading is enabled (a reload might be required), the position accuracy ellipses can be
shown using Geometry Operations:
How the position accuracy ellipses are generated is defined in Positions Accuracy
Ellipses.
These values are only used if the associated data source does not have custom accuracy
values set (see Configure Data Sources).
How the Radar default accuracies are used is defined in Positions Accuracy Ellipses.
ScatterPlot View
A ScatterPlot View displays the distribution of two numerical variables as points. When
started, it presents itself in the following manner.
Layout
On the left side resides the plot area in which the data is visualized (if data has been
loaded). The tool bar at the top shows the currently selected mouse interaction mode and
the available actions.
On the right side resides the configuration area, which allows configuring what data is
loaded and how it is displayed. The ’Reload’ button on the bottom can be used to trigger
a reload of the view’s data.
Data Loading
To load the data the mechanism described in Section UI Overview or the ’Reload’ button
can be used. To filter the dataset, the mechanism described in Section Filters can be used.
The values of the selected variables are used for positioning on the x-axis and y-axis.
For each value pair, a data point is generated. In the current example the WGS-84 meta-
variables ’Longitude’ and ’Latitude’ are used, showing a view similar to the OSG View.
The color of each data point is defined by the DBContent it belongs to.
On the bottom of the plot a legend is shown, giving the total counts of all data points.
Usage
Toolbar
The first tool buttons can be used to switch between the various mouse interaction modes:
The others provide general actions by which the view can be modified (shortcut refers
to keyboard shortcut):
Config Tab
The selection controls on the top define which data variables are used to generate data
points for the x/y-axis. Such a variable can be any numerical variable. A reload operation
might be required for the selection to take effect.
Please note that visualization of evaluation result data is currently not implemented.
Scatterplot
General Zoom
The mouse wheel can be used to zoom in or out of the presented data; the space key can
be used to reset to the default zoom level.
Navigation Mode
In ’Navigate’ mode, the left mouse-button can be used to pan the shown data.
Selection Mode
In ’Select’ mode, data can be selected. The first left mouse-button click starts selection
(showing a red rectangle), the second click finalizes the selection. All data points inside
the rectangular area are selected.
The selected data is then presented in an extra ’Selected’ entry in the legend, showing
the count of all selected data points.
This enables selection of parts of the data based on the presented variables, allowing
deeper analysis e.g. of dubious data.
The ’Invert Selection’ or ’Delete Selection’ actions allow for easier selection of
the wanted target reports.
By pressing the ’Control’ key during the second click, the newly selected data is added
to any previous selection. This can be used to select data incrementally, making more
complex selections possible.
Live Mode
The application switches to Live mode when ASTERIX data is imported from the network,
as described in Import ASTERIX from Network.
The data sources’ network lines have to be defined as described in Data Sources
Table Content.
When Live mode is enabled and the correct network lines are set up (and active),
the main window is shown as follows.
In Live mode, most application components are the same as in Offline mode (although
some are deactivated), but in the main status bar the Live mode is indicated and a ’Stop’
button allows stopping the network recording and returning to Offline mode.
The line button can also be toggled - if a bold border is shown, the data from the
respective line is stored not only in the database but also in main memory (RAM). The most
recent 5 minutes of data are kept in main memory, and can be visualized in the
existing Views.
Please note that currently only the OSG View displays data in Live mode; for perfor-
mance reasons, the other Views are inactive.
OSG View
In Live mode, the OSG View automatically shows the same elements as with the time filter,
and the main components are the same as in Offline mode (although some are deactivated).
In the time filter elements, a maximum of 5 minutes of past data is available. Newly
received network data is shown in 1-second updates, and the labels are updated (if automatic
labeling is enabled).
Using the time scrollbar, past data can be inspected; it is kept until it becomes
outdated (older than 5 minutes) or the scrollbar is moved to the rightmost position. In
this position, the displayed time window will again follow the most recent time.
Command Line Options
Several command line options have been added to allow for semi-automated running of
tasks or convenient usage. This also allows for (limited) automated batch processing of
data.
Please note that configuration of the application still has to be performed using
the GUI; therefore, it is required to set up the application correctly before command line
options can be used successfully.
Please also note that error or warning messages (or related confirmations) will still
halt the automatic running of tasks, to ensure that the user is always aware of occurring
issues.
Allowed options:
  --help                               produce help message
  -r [ --reset ]                       reset user configuration and data
  --expert_mode                        set expert mode
  --create_db arg                      creates and opens new SQLite3 database
                                       with given filename, e.g. '/data/file1.db'
  --open_db arg                        opens existing SQLite3 database with
                                       given filename, e.g. '/data/file1.db'
  --import_data_sources_file arg       imports data sources JSON file with
                                       given filename, e.g. '/data/ds1.json'
  --import_view_points arg             imports view points JSON file with
                                       given filename, e.g. '/data/file1.json'
  --import_asterix_file arg            imports ASTERIX file with given
                                       filename, e.g. '/data/file1.ff'
  --import_asterix_file_line arg       imports ASTERIX file with given line,
                                       e.g. 'L2'
  --import_asterix_network             imports ASTERIX from defined network
                                       UDP streams
  --import_asterix_network_time_offset arg
                                       used time offset during ASTERIX network
                                       import, in HH:MM:SS.ZZZ
  --import_asterix_network_max_lines arg
                                       maximum number of lines per data source
                                       during ASTERIX network import, 1..4
  --asterix_framing arg                sets ASTERIX framing, e.g. 'none',
                                       'ioss', 'ioss_seq', 'rff'
  --asterix_decoder_cfg arg            sets ASTERIX decoder config using JSON
                                       string, e.g. ''{"10":{"edition":"0.31"}}''
                                       (including one pair of single quotes)
  --import_gps_trail arg               imports GPS trail NMEA with given
                                       filename, e.g. '/data/file2.txt'
  --import_gps_parameters arg          import GPS parameters as JSON string,
                                       e.g. ''{"callsign": "ENTRPRSE",
                                       "ds_name": "GPS Trail", "ds_sac": 0,
                                       "ds_sic": 0, "mode_3a_code": 961,
                                       "set_callsign": true,
                                       "set_mode_3a_code": true,
                                       "set_target_address": true,
                                       "target_address": 16702992,
                                       "tod_offset": 0.0}'' (including one
                                       pair of single quotes)
  --import_sectors_json arg            imports exported sectors JSON with
                                       given filename, e.g. '/data/sectors.json'
  --associate_data                     associate target reports
  --load_data                          load data after start
  --export_view_points_report arg      export view points report after start
                                       with given filename, e.g.
                                       '/data/db2/report.tex'
  --evaluate                           run evaluation
  --evaluation_parameters arg          evaluation parameters as JSON string,
                                       e.g. ''{"current_standard": "test",
                                       "dbcontent_name_ref": "CAT062",
                                       "dbcontent_name_tst": "CAT020"}''
                                       (including one pair of single quotes)
  --evaluate_run_filter                run evaluation filter before evaluation
  --export_eval_report arg             export evaluation report after start
                                       with given filename, e.g.
                                       '/data/eval_db2/report.tex'
  --no_cfg_save                        do not save configuration upon quitting
  --quit                               quit after finishing all previous steps
If additional command line options are wanted please contact the author.
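As an illustration, several of the above options can be combined into a single semi-automated batch run. The database and recording paths below are placeholders, and the AppImage filename follows the one used elsewhere in this manual; adapt both to your installation:

```shell
# Create a new database, import a recording, create associations,
# then quit (placeholder paths):
./COMPASS-release.AppImage \
    --create_db /data/batch.db \
    --import_asterix_file /data/recording.ff \
    --asterix_framing ioss \
    --associate_data \
    --quit
```

The steps run in the listed order; with '--quit' the application terminates after the last task finishes.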
Options
--create_db filename
Creates and opens a new SQLite3 database with the supplied filename.
--open_db filename
Opens an existing SQLite3 database with the supplied filename.
--import_data_sources_file filename
Adds the data sources defined in the JSON file to the configuration, as described in
Import/Export of Configuration Data Sources.
--import_view_points filename
After a database was opened, adds the supplied filename and starts an import using the
task described in Import View Points.
--import_asterix_file filename
After a database was opened, adds the supplied filename and starts an import using the
task described in Import ASTERIX Recording.
--import_asterix_file_line arg
If an import using the task described in Import ASTERIX Recording is started, it will use
the given line identifier (L1 ... L4).
--import_asterix_network
After a database was opened, an import using the task described in Import ASTERIX from
Network is started.
--import_asterix_network_time_offset arg
If an import using the task described in Import ASTERIX from Network is started, it will
use the given time offset, in HH:MM:SS.ZZZ.
--import_asterix_network_max_lines arg
If an import using the task described in Import ASTERIX from Network is started, it will
use the given maximum number of input lines (and deactivate the others).
--asterix_framing framing
When an Import ASTERIX task is started, the given framing is used; the following options
exist:
• none: Raw, unframed ASTERIX data blocks, equivalent to the ’empty’ value
in the GUI
• ioss: IOSS Final Format
• ioss_seq: IOSS Final Format with sequence numbers
• rff: Comsoft RFF format
--asterix_decoder_cfg 'str'
When an Import ASTERIX task is started, the given configuration is used, in which the
editions and mapping can be specified for each category using a JSON string.
Using the following string, the edition 0.31 can be set for category 010:
'{"10":{"edition":"0.31"}}' (including one pair of single quotes)
Please note that the naming must be exactly as in the GUI, otherwise the application
quits with an error message.
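Because the quoting of such JSON strings is easy to get wrong on the command line, it can help to validate the string before passing it to the application (a sketch; the python3 tooling is not part of COMPASS):

```shell
# Validate the decoder configuration JSON before use; the outer single
# quotes belong to the shell, the inner braces to the JSON itself.
CFG='{"10":{"edition":"0.31"}}'
echo "$CFG" | python3 -m json.tool
```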
--import_gps_trail filename
After a database was opened, adds the supplied filename and starts an import using
the task described in Import GPS Trails.
--import_gps_parameters 'str'
When an Import GPS Trail task is started, the given configuration is used, in which the data
source and secondary parameters for the GPS trail are defined using a JSON string.
Nicely formatted, the string looks like this:
’{
"callsign":"ENTRPRSE",
"ds_name":"GPS Trail",
"ds_sac":0,
"ds_sic":0,
"mode_3a_code":961,
"set_callsign":true,
"set_mode_3a_code":true,
"set_target_address":true,
"target_address":16702992,
"tod_offset":0.0
}’
Please note that both ’mode_3a_code’ and ’target_address’ must be given as decimal
values.
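Mode 3/A codes are conventionally written in octal and Mode S target addresses in hexadecimal, so a conversion to decimal is usually needed before building the JSON string. The decimal values below match the example above; their octal/hex renderings are shown for illustration:

```shell
# Convert an octal Mode 3/A code and a hex target address to the
# decimal values required by the JSON string:
printf 'mode_3a_code: %d\n' 01701      # octal 1701 -> 961
printf 'target_address: %d\n' 0xFEDE10 # hex FEDE10 -> 16702992
```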
--import_sectors_json filename
After a database was opened, adds the sectors defined in the supplied file using the task
described in Manage Tab.
--associate_data
After a database with imported content exists, create target report associations using the
task described in Calculate Associations.
--load_data
Triggers a load process after opening the database.
--export_view_points_report filename
After starting the application into the management window, an export View Points as PDF
process is triggered, as described in Exporting View Points to PDF.
The given argument filename defines the report filename, with the report directory as
the parent directory of the given filename.
--evaluate
After opening a database, a pre-configured evaluation run is triggered as described in
Evaluation.
--evaluation_parameters 'str'
When an Evaluation task is started, the given configuration is used, in which all parameters
of the evaluation can be defined using a JSON string.
Nicely formatted, such a string can look like this:
’{
"active_sources_ref":{
"CAT062":{
"1234":true
}
},
"active_sources_tst":{
"CAT021":{
"2345":true
}
},
"current_standard":"Dubious Targets",
"dbcontent_name_ref":"CAT062",
"dbcontent_name_tst":"CAT021",
"use_grp_in_sector":{
"Dubious Targets":{
"SectorName1":{
"Optional":false
},
"SectorName2":{
"Optional":false
},
"SectorName3":{
"Optional":true
},
"SectorName4":{
"Optional":false
}
}
}
}’
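Since such parameter strings are easy to mis-quote on the command line, one option is to generate them with a small helper and substitute the result. This is a sketch, not a COMPASS feature; it uses a subset of the names and values from the example above:

```shell
# Build the evaluation parameter JSON with a helper to guarantee valid
# quoting, then pass the variable to --evaluation_parameters.
PARAMS=$(python3 -c 'import json; print(json.dumps({
    "current_standard": "Dubious Targets",
    "dbcontent_name_ref": "CAT062",
    "dbcontent_name_tst": "CAT021"}))')
echo "$PARAMS"
```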
--evaluate_run_filter
When an Evaluation task is started, an automatic filtering of targets is performed, as con-
figured using the ’Filter UTNs’ dialog defined in Filtering Targets.
--export_eval_report filename
After a pre-configured evaluation run was performed, a report PDF is generated as de-
scribed in Evaluation.
The given argument filename defines the report filename, with the report directory as
the parent directory of the given filename.
--no_cfg_save
When quitting the application later, no configuration changes are saved.
--quit
After the other tasks were run, automatically quits the application.
Troubleshooting
Known Issues
CentOS Fuse Usermount Permissions
On some operating systems, the following error message is shown when starting the
AppImage:
“...inside the AppImage requires a newer glibc version than is present on your target
system (CentOS 6.7 in your example). The recommended way to produce an AppImage ...”
Unfortunately this means that for e.g. CentOS 6.* (or older) an AppImage produced
using Ubuntu 14.04 will not run, and currently there are no plans to build one (unless
requested by a large number of users).
This is due to the used graphics libraries OpenSceneGraph and osgEarth depend-
ing on shaders defined in GLSL 3.30 (shading language version), which the graphics
card or driver in use may not support.
There is a workaround which might work for you: in some cases the shaders are
supported by the Mesa graphics driver in use, but are not detected correctly. One can
override the reported OpenGL version for one application using a system variable and
start the app in that mode using:
MESA_GL_VERSION_OVERRIDE=3.3 ./COMPASS-release.AppImage
At least on the author’s workstation with an Intel graphics card, and for some other
users, this resolved the issue.
Graphical Issues
For issues of the following nature:
• Application might not even start (OpenGL version error)
• Slow display performance
• Graphical display errors (wrong colours, artefacts, etc)
If such a display issue exists, additional information is required.
Please verify that the installed graphics card and driver in use match the supported
version stated in Graphics Cards & Drivers.
Only if this is the case, please collect the output of glxinfo and create an issue report as
stated below.
Reporting Issues
There are several ways of reporting issues. The following steps have to be taken:
• Check if the issue was already reported
• Collect all required information
• Report the issue
Please also make sure that you are using the latest version of COMPASS, since the issue
might already have been corrected in the current version.
Please look through the issues and make sure that yours isn’t already listed. If so, you
can comment on it to indicate the severity for you. If not, please proceed to the next step.
Collect Information
Basically, everything that is needed to reproduce the error should be submitted. Depending
on the type of the error this can differ, but at least the following information should be
given:
• Console log of the application
• Exact steps taken until the error occurred
Issue Reporting
Please either create a new issue on GitHub (https://github.com/hpuhr/COMPASS/
issues) or send a mail to compass@openats.at, and include all of the previously collected
information.
If the supplied information is not enough you will be contacted as soon as time allows,
with a request for further detail.
Appendix
Appendix: Configuration
The application configuration is stored in the user home directory in a sub-folder ’.compass’.
If such a folder does not exist, or the configuration is outdated, the default configuration
is copied from the AppImage into said folder.
Configuration Folder
All application configuration is stored as JSON files, which are read during startup and
saved during (correct) shutdown of the application.
Data Folder
The data folder contains e.g. used images, 3rd party (static) configuration etc.
• fonts: Used fonts
• gdal: GDAL library data & configuration
• icons: Application icons
• jasterix_definitions: jASTERIX library definitions
• maps: OSGView map files
• textures: OSGView symbol textures
Current Format
{
"content_type": "data_sources",
"content_version": "0.2",
"data_sources": [
{
"ds_type": "Tracker",
"name": "MRTS",
"sac": 1,
"sic": 3
},
{
"ds_type": "Tracker",
"name": "ARTAS",
"sac": 1,
"sic": 4
},
{
"ds_type": "ADSB",
"info": {
"network_lines": {
"L1": "2.192.31.24:8600",
"L3": "3.192.31.25:8600"
}
},
"name": "ADSB",
"sac": 1,
"short_name": "WADS",
"sic": 5
},
{
"ds_type": "MLAT",
"info": {
"network_lines": {
"L3": "2.192.21.24:8600",
"L4": "3.192.21.25:8600"
}
},
"name": "MLAT1",
"sac": 50,
"short_name": "MLAT1",
"sic": 70
}
]
}
Deprecated Format
{
"Radar": [
{
"altitude": 3.0,
"dbo_name": "Radar",
"latitude": 1.0,
"longitude": 2.0,
"name": "Luqa",
"sac": 120,
"short_name": "LQ",
"sic": 1
},
{
"altitude": 6.0,
"dbo_name": "Radar",
"latitude": 4.0,
"longitude": 5.0,
"name": "Dingli",
"sac": 120,
"short_name": "DG",
"sic": 2
},
{
"altitude": 9.0,
"dbo_name": "Radar",
"latitude": 7.0,
"longitude": 8.0,
"name": "Hal-Far",
"sac": 120,
"short_name": "HF",
"sic": 3
}
],
"Tracker": [
{
"dbo_name": "Tracker",
"name": "ARTAS",
"sac": 120,
"short_name": "ATS",
"sic": 49
}
]
}
View point files are written in the JSON format and include a version number. The
current version number is 0.2; the previous version 0.1 is not supported.
Please note that basic JSON format knowledge is required to understand the topics in
this section.
Please note that the format definition has to be fulfilled rigorously, otherwise the ap-
plication may not show the view point or (in rare cases) crash.
Please note that two different use cases were considered: a general one and one for
tracker run comparison. The latter has additional parameters defined, to be used only in
such circumstances.
File Content
The following content is commonly stored in a view point file:
• View points context information
– Information about all later view points
– Defines version
– Can be used for importing the relevant data files
– Also giving contextual information
• View point collection, each consisting of information about
– Unique identifier
– View Point type
– Description text
– A point in time with a time window
– A position with a position window
– A list of DSTypes to be loaded
– A list of data sources to be loaded
– A list of filters
– A list of context variables
Custom Attributes
The view point context, as well as view points, can contain additional information. For
the context, this additional information is shown in the ’Import View Points’ task and not
stored.
Additional data in each view point is read in as is, and persisted in the database. In the
’View Points’ tab, all primitive variables (non-object, non-list) are shown as columns.
Therefore, when generating view points, it might be useful to add such variables (e.g.
for ordering according to error magnitude, priority etc.). This is recommended and will be
supported further.
Version 0.2
The most simple example of a view points file is as follows:
{
"content_type": "view_points",
"content_version": "0.2",
"view_points": [
{
"id": 1,
"name": "All",
"status": "open",
"type": "Saved"
}
]
}
There is an encompassing JSON object, which contains the view point context (object)
and view points (list with one entry).
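A minimal well-formedness check along these lines can be scripted (a sketch, not a COMPASS tool; it verifies the content type, the version, and the mandatory ’id’/’type’ attributes of each view point):

```shell
python3 - <<'EOF'
import json

# Parse the minimal example file content from above and check the
# attributes the application requires.
doc = json.loads('''
{
  "content_type": "view_points",
  "content_version": "0.2",
  "view_points": [
    { "id": 1, "name": "All", "status": "open", "type": "Saved" }
  ]
}''')
assert doc["content_type"] == "view_points"
assert doc["content_version"] == "0.2"
assert all("id" in vp and "type" in vp for vp in doc["view_points"])
print("ok")
EOF
```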
In the context, datasets can optionally be added, which define related data to be im-
ported.
{
"content_type": "view_points",
"content_version": "0.2",
"view_point_context": {
"datasets": [
{
"filename": "/home/sk/data/test/asterix/test.ff"
}
]
},
...
}
Each of the defined ASTERIX datasets will be automatically imported when using the
’Import View Points’ task.
Please note that the ASTERIX decoder settings have to be set correctly in the configu-
ration and are the same for all files to be imported.
For the tracker run comparison case, additional attributes are used. The assumption
is that there are two recordings, each containing a tracker run, each of which will be
imported into a separate line. The tracker runs will possibly also have the same SAC/SIC
and a time shift, which can be corrected during ASTERIX import using the features de-
scribed in Override Tab.
In this case, the first dataset will be imported into line L1 (index 0 as L1, the default
line), while the second dataset will be imported into L2 and time shifted by the ’time_offset’
(as Time of Day in seconds).
View Point
View points are stored in the ’view_points’ attribute, which is a simple list.
A view point only has to contain an ’id’ and a ’type’ attribute, but additional attributes
make it more meaningful.
{
...
{
"id":0,
"type":"any string",
"text":"any string",
"position_latitude":49.5,
"position_longitude":12.2,
"position_window_latitude":0.05,
"position_window_longitude":0.02,
"time":666.0,
"time_window":4.0,
"data_sources": [
[
12750,
[
0,
1
]
],
[
12759,
[
2,
3
]
]
],
"data_source_types": [
"Radar",
"RefTraj"
],
"filters": {
"Time of Day": {
"Time of Day Maximum": "16:02:00.00",
"Time of Day Minimum": "16:00:00.00"
},
"UTNs": {
"utns": "4"
}
},
"context_variables": {
"Meta": [
"Ground Bit",
"Track Groundspeed"
]
}
},
...
}
In each View Point object, the following values can be defined:
If the ’data_sources’, ’data_source_types’ or ’filters’ attributes are not defined, all data
will be loaded.
All possible filters existing in COMPASS can be set using view points, e.g.:
{
...
{
"id": 0,
"name": "all filters",
"position_latitude": 47.550205341739996,
"position_longitude": 14.672879071070001,
"position_window_latitude": 4.913833554478789,
"position_window_longitude": 7.054962643339518,
"status": "open",
"type": "Saved",
"db_objects": [
"Tracker"
],
"filters": {
"Barometric Altitude": {
...
}
}
},
...
}
One exception are the data source filters, which need an ’active_sources’ condition list
with integer values for all sources to be loaded, each identified by its number.
For the ’Tracker Track Number’ filter, the data source condition is identified by its data
source name. Using the dataset in the context information, the view point file can ensure
that the same name is used.
Please note that setting custom filters (created by the user) is also possible using view
points. Please contact the author for further information.
Appendix: Algorithms
Positions Accuracy Ellipses
According to the 68–95–99.7 rule, 68.27%, 95.45% and 99.73% of the values lie within one,
two and three standard deviations of the mean, respectively.
Please note that currently only the Cartesian variables are used, and are assumed to be
valid for the target report’s position (and north direction), not for a system center.
The usage of WGS-84 variables is included; they are converted to Cartesian values at the
position of the target report.
ADS-B
The accuracy values were taken from The 1090MHz Riddle online book.
V0 Transponders
For the NUCp values, the values taken for display are listed in the σ column, resulting in
error circles (same value for both coordinates). Please note that for the NUCp value 0, no
error ellipse is displayed, since the accuracy value is unknown.
V1 & V2 Transponders
For the NACp values, the values taken for display are listed in the σ column, resulting in
error circles (same value for both coordinates). Please note that for the NACp value 0, no
error ellipse is displayed, since the accuracy value is unknown.
MLAT
The description of the accuracy values was taken from the EUROCONTROL Specification
for Surveillance Data Exchange ASTERIX Part 14 Category 020 Multilateration Target Reports
Appendix A: Reserved Expansion Field (EUROCONTROL-SPEC-0149-14A) document.
Notes:
• XY covariance component = sign(Cov(X,Y)) · sqrt(|Cov(X,Y)|)
• WGS-84 covariance component = sign(Cov(Lat,Long)) · sqrt(|Cov(Lat,Long)|)
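The signed square root above can be reproduced directly; for example, a covariance of -25 m² yields a component of -5 m (a sketch using python3 via the shell, with a hypothetical value):

```shell
# XY covariance component = sign(Cov) * sqrt(|Cov|), e.g. Cov(X,Y) = -25 m^2:
python3 -c "import math; c = -25.0; print(math.copysign(math.sqrt(abs(c)), c))"
```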
Radar
For each Radar plot, the DBContent Variables ’DS ID’ and ’Detection Type’ are used to
define from which data source the plot originated, and what type of plot was measured.
From the data source information (see Data Sources Table Content) the radar standard
deviations are collected. If no data source specific values are set, the default values (see
Radar Default Accuracies) are used.
Based on the plot type defined by ’Detection Type’, one (single technology) or the min-
imum/maximum of the standard deviations (for combined plots, defined by ’Use Radar
Minimum StdDev’ flag) is used. If the ’Detection Type’ is not set or is 0 (no detection), the
PSR values are used.
The standard deviations used for display purposes are calculated based on the range
standard deviation and the azimuth standard deviation (converted to an arc length
at the given range), and are rotated by the measurement azimuth.
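As an illustration of the azimuth conversion (the range and standard deviation values are hypothetical): an azimuth standard deviation of 0.1° at a range of 100 km corresponds to a cross-range standard deviation of about 175 m:

```shell
# Cross-range standard deviation = range * azimuth stddev (in radians):
python3 -c "import math; rng = 100000.0; az_deg = 0.1; print(round(rng * math.radians(az_deg), 1))"
```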
RefTraj
Calculated in the same way as the Tracker APW values.
Tracker
The description of the accuracy values was taken from the EUROCONTROL Specifica-
tion for Surveillance Data Exchange ASTERIX Part 9 Category 062 SDPS Track Messages
(EUROCONTROL-SPEC-0149-9) document.
• APC.APC values: Estimated accuracy (i.e. standard deviation) of the calculated po-
sition of a target expressed in Cartesian co-ordinates, in meters
• COV.COV: XY Covariance Component; the unit is listed as meters, but should be
in m² and is assumed as such
• APW.APW values: Standard Deviation of Position of the target expressed in WGS-
84, in degrees
Notes:
• XY covariance component = sign(Cov(X,Y)) · sqrt(|Cov(X,Y)|)
Depending on which Linux operating system is used, the commands will differ. In this
section, installation instructions for a few selected OSs are provided.
Testing
After installation, the following command should run and generate similar output:
$ pdflatex --version
pdfTeX 3.14159265-2.6-1.40.21 (TeX Live 2020/Debian)
kpathsea version 6.3.2
Copyright 2020 Han The Thanh (pdfTeX) et al.
There is NO warranty. Redistribution of this software is
covered by the terms of both the pdfTeX copyright and
the Lesser GNU General Public License.
For more information about these matters, see the file
named COPYING and the pdfTeX source.
Primary author of pdfTeX: Han The Thanh (pdfTeX) et al.
Compiled with libpng 1.6.37; using libpng 1.6.37
Compiled with zlib 1.2.11; using zlib 1.2.11
Compiled with xpdf version 4.02
Appendix: Utilities
jASTERIX
For usage of the jASTERIX library/tool please refer to jASTERIX.
SDDL
The SDDL tool is an open-source ASTERIX decoder and lister; please refer to SDDL for
more information. This text only aims at providing a short usage guide.
While it is possible to download and build from the source code, this usage guide
recommends downloading a release AppImage from Releases. Please make sure that
at least version 1.1.0 with included JSON functions is used.
After downloading, set the executable flag on the file:
$ chmod +x SDDL-json-x86_64.AppImage
After this, the file can be executed. The help text can be obtained with the following
command:
./SDDL-json-x86_64.AppImage
*** Surveillance Data Decoder and Lister v1.1.0 ***
2000-2018 by Helmut Kobelbauer, Sinabelkirchen/Austria.
...
...
Our ’sddl’ utility at the moment supports the following data formats:
...
...
ADS-B exchange
For information about ADS-B exchange please refer to section Appendix: ADS-B exchange.
It is usually possible to obtain a dataset from any day prior to the current one, covering
the whole world.
Please be aware that using the produced data for commercial purposes (without a
specific agreement) would violate the ADSB exchange terms & conditions. Please refer to
https://www.adsbexchange.com/legal-and-privacy/ for additional information.
The script expects to find an ADS-B exchange database in the current working directory
and generates a SQLite3 database ready to be loaded into COMPASS containing the traffic
between 08Z and 09Z of the given day.
Appendix: Licensing
The COMPASS source code (database backend & GUI) is released under GNU GPLv3:
https://www.gnu.org/licenses/gpl-3.0.en.html
The AppImage binary is released under Creative Commons Attribution 4.0 Interna-
tional (CC BY 4.0)
Human readable: https://creativecommons.org/licenses/by/4.0/
Legal: https://creativecommons.org/licenses/by/4.0/legalcode
The OSGView is not released as open-source and can only be used in the AppImage
binary.
Appendix: Disclaimer
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EX-
PRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGE-
MENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Please note that this list is not exhaustive, and that a large number of other, smaller
libraries are used in the background, for which the licenses were not checked. Please use
the ’ldd’ tool to get details.