Linköping University
Department of Computer and Information Science
Final Thesis
2010-05-10
The Integrated Modular Avionics architecture, IMA, provides means for running
multiple safety-critical applications on the same hardware. ARINC 653 is a
specification for this kind of architecture: it specifies space and time
partitioning in safety-critical real-time operating systems to ensure each
application's integrity. This Master's thesis describes how databases can be
implemented and used in an ARINC 653 system. The addressed issues are
interpartition communication, deadlocks and database storage. Two alternative
embedded databases are integrated in an IMA system to be accessed by
multiple clients from different partitions. Performance benchmarking was used to
study the differences in terms of throughput, number of simultaneous clients, and
scheduling. The databases implemented and benchmarked are SQLite and Raima. The
studies indicated a clear speed advantage in favor of SQLite when Raima was
integrated using the ODBC interface. Both databases perform quite well and seem
good enough for use in embedded systems. However, since neither SQLite
nor Raima has any real-time support, their use in safety-critical systems is
limited. The testing was performed in a simulated environment, which makes the
results somewhat unreliable. To validate the benchmark results, further studies
must be performed, preferably in a real target environment.
Keywords: ARINC 653, INTEGRATED MODULAR AVIONICS, EMBEDDED
DATABASES, SAFETY-CRITICAL, REAL-TIME OPERATING
SYSTEM, VXWORKS
Contents
1 Introduction 7
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Problem description ......................... 8
1.3.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.3 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Document structure ......................... 9
2 Background 10
2.1 Safety-critical airplane systems . . . . . . . . . . . . . . . . . . . 10
2.1.1 DO-178B ........................... 10
2.2 Avionics architecture . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Federated architecture . . . . . . . . . . . . . . . . . . . . 12
2.2.2 IMA architecture . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 ARINC 653 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3.1 Part 1 - Required Services . . . . . . . . . . . . . . . . . . 14
2.3.2 Part 2 and 3 - Extended Services and Test Compliance . . 18
2.4 VxWorks ............................... 19
2.4.1 Configuration and building ................. 19
2.4.2 Configuration record . . . . . . . . . . . . . . . . . . . . . 19
2.4.3 System image . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.4 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.4.5 Partitions and partition OSes . . . . . . . . . . . . . . . . 21
2.4.6 Port protocol . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.4.7 Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5 Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5.1 ODBC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.2 MySQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.3 SQLite . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.4 Mimer SQL . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.5 Raima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4 Benchmarking 46
4.1 Environment ............................. 46
4.1.1 Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1.2 Variables ........................... 46
4.1.3 Measurement . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 Benchmark graphs . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2.1 SQLite Insert . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2.2 SQLite task scheduling . . . . . . . . . . . . . . . . . . . . 50
4.2.3 SQLite select . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2.4 Raima select ......................... 52
4.3 Benchmark graphs analysis ..................... 53
4.3.1 Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.2 Average calculation issues . . . . . . . . . . . . . . . . . . 53
4.3.3 Five clients top . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.4 Scaling ............................ 55
5 Comparisons between SQLite and Raima 56
5.1 Insert performance . . . . . . . . . . . . . . . . . . . . . . . . 56
5.2 Update performance . . . . . . . . . . . . . . . . . . . . . . . . 58
5.3 Select performance . . . . . . . . . . . . . . . . . . . . . . . . 59
5.4 No primary key . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.5 Task scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.6 Response sizes . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
A Benchmark graphs 70
A.1 Variables ............................... 70
A.2 SQLite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
A.2.1 Insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
A.2.2 Update ............................ 74
A.2.3 Select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
A.2.4 Alternate task scheduling . . . . . . . . . . . . . . . . . . 80
A.2.5 No primary key . . . . . . . . . . . . . . . . . . . . . . . . 82
A.2.6 Large response sizes . . . . . . . . . . . . . . . . . . . . . 85
A.3 Raima . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.3.1 Insert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
A.3.2 Update ............................ 90
A.3.3 Select . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
A.3.4 Alternate task scheduling . . . . . . . . . . . . . . . . . . 97
A.3.5 No primary key . . . . . . . . . . . . . . . . . . . . . . . . 99
A.3.6 Large response sizes . . . . . . . . . . . . . . . . . . . . . 102
List of Figures
1.1 An ARINC 653 system. ....................... 8
4.1 Average inserts processed during one timeslot for different number of
client partitions. . . . . . . . . . . . . . . . . . . . . . . . . 48
4.2 Average number of inserts processed during one timeslot of vari-
ous length. .............................. 49
4.3 Average selects processed during one timeslot for different numbers of
client partitions and timeslot lengths. Task scheduling
used is Yield only. .......................... 50
4.4 Average selects processed during one timeslot for different numbers of
client partitions. . . . . . . . . . . . . . . . . . . . . . . . 51
4.5 Average selects processed during one timeslot for different number of
client partitions. The lines represent the average processed queries
using different timeslot lengths. . . . . . . . . . . . . . . . 52
4.6 With one client, the server manages to process all 1024 queries
in one time frame. . . . . . . . . . . . . . . . . . . . . . . . . 54
4.7 With two clients, the server cannot process 2*1024 queries in two
timeslots due to context switches. An extra time frame is required. 54
4.8 The average processing speed is faster than 1024 queries per timeslot,
but it is not fast enough to earn an entire timeslot. . . . . . . . 54
5.5 Comparison between average select queries using different task
schedules in SQLite and Raima. Timeslot length is 50 ms. . . . . . 61
5.6 Comparison between average selects with and without sorting in
SQLite and Raima. The resulting rowsets are of size 128 rows and
timeslot length is 50 ms. . . . . . . . . . . . . . . . . . . . . . 62
5.7 Comparison between single row select and 128 rows select queries
in SQLite and Raima. Average values are shown in the graph with
timeslot length of 50 ms. . . . . . . . . . . . . . . . . . . . . . 62
A.1 Average inserts processed during one timeslot for different num-
ber of client partitions. ....................... 71
A.2 Average no. inserts processed during one timeslot of various
length. ................................ 72
A.3 Maximum inserts processed during one timeslot for different number of
client partitions. . . . . . . . . . . . . . . . . . . . . . . . . 73
A.4 Maximum inserts processed for various timeslot lengths. . . . . . 73
A.5 Average updates processed during one timeslot for different num-
ber of client partitions. ....................... 74
A.6 Average no. updates processed during one timeslot of various
length. ................................ 75
A.7 Maximum updates processed during one timeslot for different number of
client partitions. . . . . . . . . . . . . . . . . . . . . . 76
A.8 Maximum updates processed for various timeslot lengths. . . . . 76
A.9 Average selects processed during one timeslot for different numbers of
client partitions. . . . . . . . . . . . . . . . . . . . . . . . 77
A.10 Average no. selects processed during one timeslot of various
length. ................................ 78
A.11 Maximum selects processed during one timeslot for different numbers of
client partitions. . . . . . . . . . . . . . . . . . . . . . . . 79
A.12 Maximum no. selects processed during one timeslot of various
length. ................................ 79
A.13 Average selects processed during one timeslot for different numbers of
client partitions and timeslot lengths. . . . . . . . . . . . 80
A.14 Maximum selects processed during one timeslot for different numbers of
client partitions and timeslot lengths. . . . . . . . . . . . 81
A.15 Average selects processed during one timeslot for different numbers of
client partitions and timeslot lengths. . . . . . . . . . . . 82
A.16 Average no. selects processed during one timeslot of various length. 83
A.17 Maximum selects processed during one timeslot for different numbers of
client partitions and timeslot lengths. . . . . . . . . . . . 84
A.18 Maximum no. selects processed during one timeslot of various
length. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
A.19 Average and maximum processed select queries. These selects
ask for 128 rows. No sorting is applied. . . . . . . . . . . . . . 85
A.20 Average and maximum processed selects are displayed. Each
query asks for a 128 rows response. Results are sorted in an
ascending order by an unindexed column. . . . . . . . . . . . . . . 86
A.21 Average inserts processed during one timeslot for different
number of client partitions. . . . . . . . . . . . . . . . . . . . 87
A.22 Average inserts processed for various timeslot lengths. . . . . . 88
A.23 Maximum inserts processed during one timeslot for different
number of client partitions. . . . . . . . . . . . . . . . . . . . 89
A.24 Maximum inserts processed for various timeslot lengths. . . . . . 89
A.25 Average number of updates processed during one timeslot for
different number of client partitions. . . . . . . . . . . . . . . 90
A.26 Average number of updates processed for various timeslot lengths. 91
A.27 Maximum updates processed during one timeslot for different
Chapter 1
Introduction
This document is a Master's thesis report written by two final-year Computer
Science and Engineering students. It corresponds to 60 ECTS credits, 30 ECTS
each. The work has been carried out at Saab Aerosystems in Linköping and
examined at the Department of Computer and Information Science at Linköping
University.
The report starts with a description of the thesis and some background
information. Following this, the system design and work results are described.
Finally, the document ends with an analysis, a discussion and a conclusion
section.
1.1 Background
Safety-critical aircraft systems often run on a single computer to prevent memory
inconsistency and to ensure that real-time deadlines are met. If multiple
applications run on the same hardware, they may affect each other's memory
or time constraints. However, a need for multiple applications to share hardware
has arisen, mostly because modern aircraft are full of electronics; no space is
left for new systems. One solution to this problem is to use an Integrated
Modular Avionics, IMA, architecture. The IMA architecture provides means for
running multiple safety-critical applications on the same hardware. The use of
Integrated Modular Avionics is an increasing trend among aircraft manufacturers,
and Saab Aerosystems is no exception.
ARINC 653 is a software specification for space and time partitioning in the
IMA architecture. Each application is run inside its own partition which isolates it
from other applications. Real-time operating systems implementing this standard
will be able to cope with the memory and real-time constraints. This increases the
flexibility but also introduces problems such as how to communicate between
partitions and how the partition scheduling will influence the design. A brief
overview of an ARINC 653-system can be seen in figure 1.1.
1.2 Purpose
As described in the previous section, the trend in avionics software has shifted
towards Integrated Modular Avionics systems. ARINC 653 is a specification for
this kind of system and is used in many modern aircraft [8]. Saab is interested
Figure 1.1: An ARINC 653 system.
in knowing how databases can be used for sharing data among partitions of an
ARINC 653-system. Therefore the purpose of this master thesis is to implement
different databases in an ARINC 653-compliant system and study their behavior.
1.3 Problem description
This thesis will study how databases can be embedded in an ARINC 653
environment. Saab Aerosystems has not previously used databases together
with ARINC 653 and now wants to know what such a system is capable of.
There will be multiple applications of various safety-criticality levels that may
want to use the database services. The database itself would not contain safety-
critical data. The problem to solve is how to implement databases in an ARINC
653-compliant system and how to make them communicate with applications.
Important aspects to consider are design and configuration for efficient database
usage, performance implications, and the usefulness of this kind of ARINC 653 system.
1.3.1 Objectives
Goals and objectives for this master thesis:
• Port and integrate alternative databases within an ARINC 653 partition.
• … the database, that will hide the ARINC layer from the application and the
database.
• … an ARINC 653 system.
1.3.2 Method
The workload will be divided into two parts since there are two participating
students in this thesis.
1.3.3 Limitations
Limitations for this project are:
Chapter 2
Background
This section will provide the reader with essential background information.
2.1.1 DO-178B
DO-178B, Software Considerations in Airborne Systems and Equipment
Certification, is industry-accepted guidance on how to satisfy airworthiness
requirements for aircraft. It focuses only on the software engineering methods
and processes used when developing aircraft software, not on the actual result.
This means that if an airplane is certified with DO-178B, you know that the
process prescribed for developing airworthy software has been followed, but you
do not really know whether the airplane can actually fly. A system with a poor
requirement specification can go through the entire product development life
cycle and fulfill all of the DO-178B requirements. However, the only thing you
know about the result is that it fulfills this poor requirement specification. In
other words, bad input gives bad, but certified, output. [3] [5]
Failure categories
Failures in an airborne system can be categorized in five different types according
to the DO-178B document:
Catastrophic A failure that prevents continued safe flight and landing. The
results of a catastrophic failure are multiple fatalities among the occupants and
probably loss of the aircraft.
Major A failure that would reduce the capabilities of the aircraft and the ability of
the crew to do their work to any extent of:
Minor A failure that would not significantly reduce the aircraft safety and the
crew’s workload are still within their capabilities. Minor failures may include
slight reduction of safety margins and functional capabilities, a slight increase in
workload or some inconvenience for the occupants.
Level A Software of level A that doesn’t work as intended may cause a failure of
catastrophic type.
Level B Software of level B that doesn’t work as intended may cause a failure of
Hazardous/Severe-Major type.
Level C Software of level C that doesn’t work as intended may cause a failure of
Major type.
Level D Software of level D that doesn’t work as intended may cause a failure of
Minor type.
Level E Software of level E that doesn't work as intended may cause a failure of
No Effect type. [3] [4]
Figure 2.1: A federated architecture and an Integrated Modular Avionics architecture. [6]
• Traditionally used
• ”Easy” to certify
nature of IMA systems. But once this has been done and the platform, that is,
the real-time operating system, RTOS, together with the functions provided by
the module OS, is certified, new modules are easy to certify. Advantages of an
IMA architecture are:
The required services part defines the minimum functionality provided by the
RTOS to the applications. This is considered to be the industry standard interface.
The extended services define optional functionality while the test compliance part
defines how to establish compliance to part 1. [1]
Currently, no RTOS supports all three parts. In fact, none fully supports
part 2 or 3. The existing RTOSs supporting ARINC 653 only
support part 1. Some RTOSs have implemented some services defined
in part 2, extended services, but these solutions are vendor specific and do not
fully comply with the specification. [5]
• Partition management
• Process management
• Time management
• Memory allocation
• Interpartition communication
• Intrapartition communication
There is also a section about the fault handler called Health Monitor. [1]
Partition management
Partitions are an important part of the ARINC 653 specification. They are used to
separate the memory space of applications so each application has its own
”private” memory space.
Time
Figure 2.2: One cycle using the round robin partition scheduling. [6]
Modes A partition can be in four different modes: Idle, normal, coldstart and
warmstart.
IDLE When in this mode, the partition is not initialized and it is not executing any
processes. However, the partition’s assigned timeslots are unchanged.
NORMAL All processes have been created and are ready to run.
COLDSTART In this mode, the partition is in the middle of the initialization process.
Process management
Inside a partition resides an application, which consists of one or more processes.
A process has its own stack, priority, and deadline. The processes in a partition
run concurrently and can be scheduled both periodically and aperiodically.
It is the partition OS that is responsible for controlling the processes inside its
partition.
States A process can be in four different states: Dormant, Ready, Running and
Waiting.
Dormant Cannot receive resources. Processes are in this state before they are
started and after they have been terminated.
Waiting Not allowed to use resources because the process is waiting for an event.
E.g. waiting on a delay.
If two processes have the same priority during a rescheduling event, the scheduler
chooses the oldest process in ready mode to be executed.
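As a sketch of this selection rule (illustrative Python, not the APEX interface; the process records and field names are invented for the example):

```python
# Illustrative model of the process selection rule: the highest
# priority wins, and on a tie the process that entered the ready
# state earliest (the "oldest") is chosen.

def select_process(ready_processes):
    """Pick the next process: max priority, ties broken by the
    earliest time of entering the ready state."""
    return max(ready_processes,
               key=lambda p: (p["priority"], -p["ready_since"]))

processes = [
    {"name": "A", "priority": 10, "ready_since": 5},
    {"name": "B", "priority": 20, "ready_since": 8},
    {"name": "C", "priority": 20, "ready_since": 3},  # ties with B, but older
]
print(select_process(processes)["name"])  # C
```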
Time management
Time management is extremely important in ARINC 653 systems. One of the main
points of ARINC 653 is that systems can be constructed so that applications will
be able to complete before their deadlines. This is possible since partition
scheduling is, as already mentioned, time deterministic. Time-deterministic
scheduling means that the time each partition will be assigned to the CPU is
known in advance. This knowledge can be used to predict the system's behavior
and thereby create systems that will fulfill their deadline requirements. [1]
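The time-deterministic schedule can be sketched as follows; the partition names and timeslot lengths are made-up values, but the point is that every activation time within the major frame is known before the system runs:

```python
# Sketch of time-deterministic partition scheduling: given each
# partition's configured timeslot length, every activation offset
# within the major frame is known in advance.

def build_schedule(timeslots):
    """Return (partition, start, end) tuples for one major frame,
    plus the total major frame length."""
    schedule, t = [], 0
    for name, length in timeslots:
        schedule.append((name, t, t + length))
        t += length
    return schedule, t

frame, major_frame = build_schedule([("P1", 50), ("P2", 25), ("P3", 25)])
print(major_frame)  # 100 (ms): the cycle repeats with this period
print(frame[1])     # ('P2', 50, 75): P2 always runs at offset 50
```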
Memory allocation
An application can only use memory in its own partition. This memory allocation
is defined during configuration and initialized during start-up. There are no
memory routines specified in the core OS that can be called during runtime. [1]
Interpartition communication
The interpartition communication category contains definitions for how to
communicate between different partitions in the same core module.
Communication between different core modules and to external devices is also
covered.
Interpartition communication is performed by message passing: a message of
finite length is sent from a single source to one or more destinations. This is done
through ports and channels.
Ports and Channels A channel is a logical link between a source and one or more
destinations. The channel also defines the transfer mode of the messages.
To access a channel, partitions need to use ports which work as access points.
Ports can be of source or destination type and they allow partitions to send or
receive messages to/from another partition through the channel. A source port
can be connected to one or more destination ports. Each port has its own buffer
and a message queue, both of predefined sizes. Data larger than
this buffer size must be fragmented before sending and then merged when
receiving.
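The fragmentation step can be illustrated with a small sketch; the buffer size and function names here are invented for the example:

```python
# Sketch of message fragmentation: a message larger than the port's
# buffer size is split into buffer-sized fragments before sending and
# merged back together on reception.

BUFFER_SIZE = 4  # bytes; a hypothetical port buffer size

def fragment(message: bytes, size: int = BUFFER_SIZE):
    """Split a message into fragments of at most `size` bytes."""
    return [message[i:i + size] for i in range(0, len(message), size)]

def merge(fragments):
    """Reassemble the original message at the receiving end."""
    return b"".join(fragments)

msg = b"0123456789"
parts = fragment(msg)
print(parts)                 # [b'0123', b'4567', b'89']
print(merge(parts) == msg)   # True
```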
All channels and all ports must be configured by the system integrator before
execution. It is not possible to change these during runtime.
Transfer modes There are two transfer modes available to choose from when
configuring a channel: sampling mode and queuing mode.
Queuing mode In queuing mode, messages are stored in a queue at the port. The
size of this queue and its element sizes are configured before execution.
A message sent from the source partition is buffered in the port’s message
queue while waiting to be transmitted by the channel. At the receiving end,
incoming messages are stored in the destination port’s message queue until its
partition receives the message.
Message queues in queuing mode follow the First In, First Out, FIFO,
protocol. This allows ports to indicate overflow occurrences. However, it is the
application's job to manage overflow situations and make sure no messages
are lost. [1]
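A minimal model of a queuing-mode port might look like this; the class and method names are invented, and real ports are configured statically before execution rather than created at runtime:

```python
# Illustrative model of a queuing-mode port: messages queue FIFO up to
# a configured depth, and sending beyond that depth is reported as an
# overflow that the application itself must handle.
from collections import deque

class QueuingPort:
    def __init__(self, depth):
        self.queue = deque()
        self.depth = depth
        self.overflow = False

    def send(self, msg):
        if len(self.queue) >= self.depth:
            self.overflow = True   # message lost; the application must react
            return False
        self.queue.append(msg)
        return True

    def receive(self):
        return self.queue.popleft() if self.queue else None

port = QueuingPort(depth=2)
port.send("m1"); port.send("m2")
print(port.send("m3"))   # False: queue full, overflow flagged
print(port.receive())    # m1 (FIFO order)
```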
Intrapartition communication
Intrapartition communication is about how to communicate within a partition.
This can also be called interprocess communication because it is about how
processes communicate with each other. Mechanisms defined here are buffers,
blackboards, semaphores and events.
Buffers and Blackboards Buffers and blackboards work like a protected data
area that only one process can access at a given time. Buffers can store multiple
messages in FIFO queues, which are configured before execution, while
blackboards have only one spot for messages but have the advantage of
immediate updates. Processes waiting for the buffer or blackboard are queued in
FIFO or priority order. Overall, buffers and blackboards have quite a lot in
common with the queuing and sampling modes, respectively, of the interpartition
communication section.
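The single-slot blackboard semantics can be sketched like this (names invented for the illustration; the real blackboard is an OS service, not an application class):

```python
# Illustrative model of a blackboard: one message slot that each
# display overwrites, so readers always see the most recent value
# immediately (unlike a buffer, which queues every message).

class Blackboard:
    def __init__(self):
        self.message = None

    def display(self, msg):
        self.message = msg   # overwrite: the update is immediate

    def read(self):
        return self.message

bb = Blackboard()
bb.display("altitude=1000")
bb.display("altitude=1010")   # replaces the previous message
print(bb.read())              # altitude=1010
```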
Semaphores and Events Semaphores and events are used for process
synchronization. Semaphores control access to shared resources while events
coordinate the control flow between processes.
Semaphores in ARINC 653 are counting semaphores, used to control access to
partition resources. The count represents the number of resources available.
A process has to wait on the semaphore before accessing a resource, and when
finished it must release the semaphore. If multiple processes wait for the same
semaphore, they are queued in FIFO or priority order depending on the
configuration.
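The counting-semaphore behavior can be modeled as follows; this is a single-threaded illustrative model with invented names, not the APEX semaphore API:

```python
# Model of a counting semaphore: the count tracks available resources,
# and processes that wait while the count is zero are queued FIFO
# (priority queuing being the configurable alternative).
from collections import deque

class CountingSemaphore:
    def __init__(self, count):
        self.count = count
        self.waiters = deque()   # FIFO queuing discipline

    def wait(self, process):
        if self.count > 0:
            self.count -= 1
            return True          # resource acquired
        self.waiters.append(process)
        return False             # process is now blocked

    def release(self):
        if self.waiters:
            self.waiters.popleft()   # first waiter acquires; count unchanged
        else:
            self.count += 1

sem = CountingSemaphore(count=1)
print(sem.wait("P1"))   # True: P1 acquires the resource
print(sem.wait("P2"))   # False: P2 is queued
sem.release()           # P2 is dequeued and gets the resource
print(sem.count)        # 0
```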
Events are used to notify other processes of special occurrences and they
consist of a bi-valued state variable and a set of waiting processes. The state
variable can be either ”up” or ”down”. A process calling the event ”up” will put all
processes in the waiting processes set into the ready mode. A process calling the
event ”down” will be put into the waiting processes set.[1]
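The event semantics can be modeled in the same spirit (illustrative only; the real mechanism is an OS service, and the names here are invented):

```python
# Model of an ARINC 653 event: a bi-valued state with a set of waiting
# processes. Setting the event "up" moves every waiter to the ready
# set; a process waiting while the event is "down" joins the waiting set.

class Event:
    def __init__(self):
        self.up = False
        self.waiting = set()

    def wait(self, process, ready_set):
        if self.up:
            ready_set.add(process)   # event already up: stays ready
        else:
            self.waiting.add(process)

    def set_up(self, ready_set):
        self.up = True
        ready_set.update(self.waiting)   # all waiters become ready
        self.waiting.clear()

    def set_down(self):
        self.up = False

ready = set()
ev = Event()
ev.wait("P1", ready)
ev.wait("P2", ready)
ev.set_up(ready)
print(sorted(ready))   # ['P1', 'P2']
```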
Health Monitor
Fault handling in ARINC 653 is performed by an OS function called the Health
Monitor (HM). The health monitor is responsible for finding and reporting faults
and failures.
Errors that are found have different levels depending on where they occurred.
The levels are process level, partition level and OS level. Responses and actions
taken are different depending on which error level the failure has and what has
been set in the configuration. [1] [10]
• File Systems
The only relevant topic among these is file systems. However, this file system
specification will not be used in this master thesis, see 3.4.1 for more information.
[1] [2]
Part 3 of the ARINC 653 specification defines how to test part 1, required
services. This is out of the scope of this master thesis.
2.4 VxWorks
VxWorks 653 is an ARINC 653 compatible real-time operating system. This
section mostly consists of information regarding VxWorks’ configuration. Details
below are described in VxWorks 653 Configuration and Build Guide 2.2
[11].
must be given in the configuration file for the module and that resources are
preallocated for each application, shared library and shared data region. Since the
configuration record is a separate component of the module, it can be certified
separately.
• To build applications, stub files for shared libraries used by the applications
must be created. This is done as a part of the shared library build.
2.4.4 Memory
The memory configuration is made up of two parts; the physical memory
configuration and the virtual memory configuration. An example of the memory
organization can be seen in figure 2.3.
Figure 2.3: Memory organization of the applications (App-1, App-2), partition
OSes (POS-1, POS-2), configuration record, and core OS (COS) in ROM and RAM.
Physical memory
The physical memory is made up of the read-only memory, ROM, and the random-
access memory, RAM.
As figure 2.3 illustrates, applications and the core OS consume more memory
in RAM than in ROM. This is because each application requires additional memory
for the stack and the heap; this also applies to the core OS. Each application has
its own stack and heap, since no memory sharing is allowed between
applications. If an application uses any shared libraries, it also needs to set
aside memory for the libraries' stacks and heaps. How much memory each
application gets allocated is specified in the configuration record.
Partition OSes, POS, and shared libraries, SL, require no additional space in
RAM. This is because the application that is using the POS/SL is responsible for
the stack and the heap space.
SDR-Blackboard is a shared data region (SDR). It is a memory area set aside for
two or more applications as a place to exchange data.
App-1 and App-2, seen in figure 2.3, are loaded into separate RAM areas.
They share no memory except for the SDR areas.
Virtual memory
Every component, except for the applications, has a fixed, unique address in the
virtual memory. All applications have the same address. This makes it possible to
configure an application as if it is the only application in the system. Each
application exists in a partition, which is a virtual container. The partition
configuration controls which resources that are available to the application.
vThreads
VxWorks 653 comes with a partition OS called vThreads. vThreads is a
multithreading technology based on VxWorks 5.5. It consists of a kernel and a
subset of the libraries supported in VxWorks 5.5. vThreads runs under the core
OS in user mode.
The threads in vThreads are scheduled by the partition scheduler; the core OS
is not involved in this scheduling. To communicate with other vThreads domains,
the threads make system calls to the core OS.
vThreads gets its memory heap from the core OS during startup. The heap is
used by vThreads to manage memory allocations for its objects. This heap
memory is the only memory available to vThreads; it is unable to access any other
memory. All attempts to access memory outside the given range will be trapped
by the core OS.
that partitions get coupled to each other, meaning that one application may block
all the others.
The other protocol, receiver discard, drops messages when a destination
port is full. This prevents one application with a full destination buffer from
blocking all the others. If all destination ports are full, an overflow flag is set
to notify the application that the message was lost.
2.4.7 Simulator
VxWorks comes with a simulator. It makes it possible to run VxWorks 653
applications on a host computer. Since it’s a simulator, not an emulator, there are
some limitations compared to the target environment. The simulator’s
performance is affected by the host hardware and other software running on it.
The only clock available in the simulator is the host system clock. The
simulator's internal tick counter is updated at either 60 Hz or 100 Hz, which
gives very low-resolution measurements. 60 Hz implies that partitions cannot be
scheduled with a timeslot shorter than about 16 ms; with 100 Hz it is possible to
schedule timeslots as short as 10 ms.
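These timeslot limits follow directly from the tick period: a partition switch can only happen on a tick, so the shortest usable timeslot is one tick period.

```python
# The shortest schedulable timeslot equals one tick period.

def min_timeslot_ms(tick_hz):
    """Tick period in milliseconds for a given tick rate."""
    return 1000 / tick_hz

print(round(min_timeslot_ms(60), 1))  # 16.7 ms at 60 Hz
print(min_timeslot_ms(100))           # 10.0 ms at 100 Hz
```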
One feature that isn’t available in the simulator is the performance monitor,
which is used for monitoring CPU usage.
2.5 Databases
Today Saab uses its own custom storage solutions in its avionics software.
They are often specialized for a particular application and therefore not very
general. A more general solution would save both time and money, since it would
be easier to reuse.
This chapter contains information about the databases that have been
evaluated for implementation in the system. It also contains some general
information that may be good to know for better understanding of the system
implementation.
2.5.1 ODBC
Open Database Connectivity, ODBC, is an interface specification for accessing data
in databases. It was created by Microsoft to make it easier for companies to
develop Database Management System, DBMS, independent applications.
Applications call functions in the ODBC interface, which are implemented in
DBMS-specific modules called drivers.
ODBC is designed to expose the database capabilities, not to supplement them.
Using an ODBC interface to access a simple database does not transform that
database into a fully featured relational database engine. If the driver is made
for a DBMS that does not use SQL, the developer of the driver must implement
at least some minimal SQL functionality. [17]
2.5.2 MySQL
MySQL is one of the world's most popular open source databases. It is a high-
performance, reliable relational database with powerful transactional support.
2.5.3 SQLite
SQLite is an open source embedded database that is reliable, small and fast.
These three factors result from the main goal of SQLite: to be a simple
database. One can look at SQLite as a replacement for fopen() and other
file system functions, almost like an abstraction of the file system. There is no
need for any configuration or any server to start or stop.
SQLite is serverless. This means that it does not need any communication
between processes. Every process that needs to read or write to the database
opens the database file and reads/writes directly from/to it. One disadvantage
with a serverless database is that it does not allow more complicated and finer
grained locking methods.
SQLite supports in-memory databases. However, it’s not possible to open a
memory database more than once since a new database is created at every
opening. This means that it’s not possible to have two separate sessions to one
memory database.
The database is written entirely in ANSI-C, and makes use of a very small
subset of the standard C library. It’s very well tested; with tests that have 100%
branch coverage. The source has over 65.000 lines of code while test code and test
scripts have over 45.000.000 lines of code.
It’s possible to get the source code as a single C file. This makes it very easy to
compile and link into an application. When compiled, SQLite only takes about 300
kB memory space. To make it even smaller, it’s possible to compile it without
some features to make it as small as 190 kB.
Data storage
All data is stored in a single database file. As long as the file is readable for the
process, SQLite can perform lookups in the database. If the file is writable for the
process, SQLite can store or change things in the database. The database file
format is cross-platform, which means that the database can be moved around
among different hardware and software systems. Many other DBMS require that
the database is dumped and restored when moved to another system.
SQLite is using a manifest typing, not static typing as most other DBMS. This
means that any type of data can be stored in any column, except for an integer
primary key column. The data type is a property of the data value itself.
Records have variable lengths. This means that only the amount of disk space
needed to store data in a record is used. Many other DBMS’ have fixed length
records, which mean that a text column that can store up to 100 bytes, always
takes 100 bytes of disk space.
23
CHAPTER 2. BACKGROUND
One thing that is still experimental but could be useful, is that it is possible to
specify which memory allocation routines SQLite should use. This is probably
necessary if one wants to be able to certify a system that is using SQLite.
Locking technique
A single database file can be in five different locking states. To keep track of the
locks, SQLite uses a page of the database file. It uses standard file locks to lock
different bytes in this page. One byte for each locking state. The lock page is never
read or written by the database. In a POSIX system setting locks is done via fcntl()
calls.[13]
2.5.5 Raima
Raima is designed to be an embedded database. It has support for both inmemory
and on-disk storage. It also has native support for VxWorks, and should therefore
needs less effort to get up and running in VxWorks 653. It is widely used, among
others is the aircraft vendor Boeing.
Raima has both an SQL API and a native C API. The SQL API conforms to a subset
of ODBC 3.5.
Raima uses a data definition language, DDL, to specify the database design.
Each database needs its own DDL file. The DDL file is parsed with a command line
tool that generates a database file and a C header file. The header file contains C
struct definitions for the database records defined in the DDL and constants for
the field names. These structs and constants are used with Raima’s C API. The DDL
parsing tool also generates index files for keys. Each index file must be specified in
the DDL.
There are two types of DDL in Raima. One which has a C like syntax, DDL, and
one with a SQL like syntax, SDDL. Both types must be parsed by a special tool. Its
24
CHAPTER 2. BACKGROUND
not possible to create databases or tables during runtime with SQL queries.
Fortunately, its possible to link Raima’s DDL/SDDL parsers into applications. This
allows creation of DDL/SDDL files during runtime, that could be parsed by the
linked code.
Raima is storing data in fixed length records. This is not per table basis, but file
basis. If there are multiple tables in the same file, the record size will be the size of
the biggest record needed by any of the tables. This means that there may be a lot
of space wasted, but it should result in better performance.
[15]
25
Chapter 3
26
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
Adapter
Application
Database
27
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
The benefit with this design is the application independence. E.g. if one port
buffer is overflowed, only this port’s connection will be affected. The other
connections can still continue to function. Another benefit is that the server will
know which client sent a message. This is because all channels are configured
before the system is run.
The drawback with this approach is that it requires many ports and channels to
be configured which requires a lot of memory.
3.2 Database modules
There are two different database modules. One is for client partitions and provide
a database API to applications. The other one is for server partitions, and handle
the database adapter connections.
Database API
The database API follows the ODBC standard interface. However, not all routines
specified in the ODBC interface are implemented, only a subset of the ODBC
routines that provide enough functionality to get the system to work.
The names and short descriptions of the implemented routines are listed in table
3.1.
Design
ODBC specifies three different datatypes. They are as follows:
Statements Contains a query and when returned, the result of the
query.
Connections Every connection contains handles to the ports
associated with this connection. It also holds all
28
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
1 1 1 1
multiple statements can be created. A statement will contain the query and, when the
response comes from the server, a rowset.
Send queries When a statement has been created, it can be loaded with a query
and then sent by calling function SQLExecDirect with the specific statement as a
parameter.
Efficiency would be greatly diminished if the execute/send routine would be
of a ”blocking” type, i.e. the execution would halt while waiting for a response to
return. If this was the case, every client would only be able to execute one query
per timeslot. This is due to the fact that a query answer cannot be returned to the
sender application before the query has been executed in the database. The
database partition must therefore be assigned to the CPU and process the query
before a response can be sent back. Since the partitions are scheduled with a
round robin scheduling with only one timeslot per partition, the sender
application will get the response at earliest at its next scheduled timeslot.
Our design does not use a ”blocking” execute. Instead it works as ODBC
describes and passes a handle to the execute routine which sends the query and
then continue its execution. The handle is later needed by the fetch/receive
routine for knowing which answer should be fetched. This approach supports
multiple sent queries during one timeslot. However, the fetch routine gets a bit
more complicated.
29
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
repeated until a matching result is found. When this happens, the result is moved
into the specified statement. The data is then moved into the bound buffers before
the routine exit.
A short summary : All queries with results stored earlier in the inport queue,
in front of the matching query, will get their results moved into their statements
for faster access in the future. The result belonging to the specified query is
moved into the statement’s bound columns and are made available for layer three
applications. All queries who have their result stored behind the specified query
in the inport queue will not get their result moved into their statements. These
results have to stay in the port queue until another SQLFetch routine is called in
which these operations are repeated.
30
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
when using this method due to the inappropriate combination of retransmits and
the partition scheduling algorithm.
A client’s processing performance is an important issue. An idea to speed this
up is to send multiple partial rowsets instead of sending the entire rowset at one
time. Benefits would be that clients could start their result processing earlier, i.e.
start processing the first part directly instead of having to wait for the entire
rowset to be transmitted. This is a good idea if the partitions would have been
running concurrently. However since they do run in a round-robin scheduling this
idea loses its advantage. A client which receives a partial row set can start to
process directly, but when the processing has finished it has to wait a whole cycle
to get the next part of the partial row set before it can continue its processing.
This leads to an inefficient behavior where the client application can only process
one partial rowset per timeslot. However, it is possible to achieve good
performance using this method but then the client’s performance must be known
in advance. If that is the case, the server can send precise sized rowsets which the
client would just be able to cope with.
Multitasking
The server module must be able to be connected to multiple client applications.
The server process will therefore spawn additional tasks to handle these
applications. Each server-client connection is run inside its own single task. This
is to deny connections from occupying too much time and to prevent the database
from being locked down in case of a connection deadlock.
A task is similar to a thread. They are stored in the partition memory and can
access most of the resources inside the partition.
Each task handles one and only one of the connections. Its job is to manage
this connection, i.e. receive queries, processes queries and return answers until
the task is terminated. See figure 3.5 for a task job flowchart.
The advantage of a threaded environment is that if a deadlock occurs in one
connection, it won’t affect any other connections. Also, threads can be scheduled
to make the server process queries in different ways depending on the system’s
purpose. One disadvantage with threading is that the system becomes more
complex. Added issues are for example synchronization and multitasking design.
31
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
Scheduling
In ARINC, scheduling of tasks is done in a preemptive priority-based way. In the
implemented system the priority is set to the same number for all tasks. This
implies that the tasks will be scheduled in a first in first out, FIFO, order.
There are two schedules of interest here; Round robin with yield and Yield only.
Round robin with yield works as a round robin scheduling but has the
possibility to release the CPU earlier at its own command. This scheduling doesn’t
force the task to occupy its entire timeslot. Instead it will automatically relinquish
the processor at send and receive timeouts. Timeslot lengths are determined by
the number of ticks set by the KernelTimeSlice routine. Since all tasks have the
same priority, all tasks will enjoy about the same execution time. If the partition
timeslot is large enough and the KernelTimeSlice is small enough, all clients will
get some responses every cycle.
Another available scheduling is Yield only. This scheduling lets a task run until
itself relinquishes the CPU. The tasks do not have a maximum continuous
execution time. When used in this system, the CPU is released only at send and
receive timeouts.
Header
The header contains an id, a statement pointer, a data size and a command type as
showed in table 3.2.
The statement pointer is used to distinguish to which statement a query
belongs. This is to make sure that database responses are moved into the correct
statements. An id is required because there are some special occurrences where
the statement pointer isn’t enough to determine which statement is the correct
one. The id is a sequence number that is increased and added to the header at
every send called by SQLExecDirect in the client application.
Id Unique id of a sent query.
32
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
Statement pointer Pointer to the statement holding the query in the client application.
Data size Size of the data following the header.
Command type Type of the following data.
NONE is set when nothing should be sent, i.e. the module will skip sending messages of this
type.
QUERY indicates that the package contains a SQL query. Messages of this type will
always generate a response.
Data is transferred in different ways depending on these command types. See table 3.3.
END TRAN < int >∗Commit or Rollback
< int > Nr rows, < int > Nr cols
<name(1)int > length colname(1), < char∗ >
col-
33
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
34
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
3.3.1 Ports
The implementation of our test system uses ports of queuing type. The reason for
this is that no messages are allowed to be lost, as might be the case when using
sampling type ports. The transmission module shall also be able to transfer data
of large sizes. This also requires the usage of queuing ports since large data must
be fragmented to fit inside port buffers. This is impossible with sampling ports.
Fragmentation
Buffer size and queue length are two important settings for a queuing port. Total
size required by a port is the buffer size times the queue length, Portsize = buffer
sizeLarge data streams that cannot fit inside one port bu∗ queue length. ffer must
be split into several smaller parts. There are three different cases that can occur
during the fragmentation:
Case 3 Message is too large and fills all queue buffers. This means that the queue
is full and the port is overflowed. The port’s behavior in this case depends
on which port protocol that is used.
Note that several large messages can be fragmented into the queue at the same time
which increases the risk of port overflow occurrences.
Port protocols
As mention before in section 2.4.6 VxWorks supports two different port protocols:
Sender Block and Receiver Discard.
The transmission module uses the Sender Block port protocol. The reason for
this is that it removes the issue of retransmissions entirely for our test system.
Another reason is that if the Receiver Discard port protocol were to be used, some
35
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
Deadlocks
Deadlocks that regard the entire server’s uptime has been taken care of by special
design and configuration at another level. A multithreaded server and the channel
design will prevent one client to block the entire server because only a single
thread becomes locked. See section 3.1.1 and section 3.2.2 for more information.
However, deadlocks inside a single database thread might still occur. This kind of
deadlock prevents the specific database thread and its corresponding application
to execute successfully.
This deadlock type occurs when the client application executes a lot of queries
and thereby overflowing the server thread’s port buffer. This causes the client
application to block itself waiting for the occupied spots in the database port
queue to diminish. Then the server partition is switched in and it will receive and
execute a few of the queries in the database and send back the results to the client
application. If the database responses are large enough to overflow the client
application’s port buffer the server will be blocked too. Now the client application
is switched back in and since the database did some receives before getting
blocked, the client is no longer blocked. The client application therefore resumes
its execution at the same place where it was stopped before, i.e. it continues
sending a lot of queries. This causes the client application to be blocked again
soon. At this point both the client application and the server thread are blocked
since both of their inbound ports are overflowed. See figure
3.6.
To handle single thread deadlocks as described above, there are three options:
prevent, avoid, or detect and recover. In this system deadlock prevention has been
used.
36
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
Deadlock recovery Deadlock recovery is used when a deadlock has already occurred. One
solution to recover is to restart one of the involved processes.
Another solution is to introduce timeouts. A send routine which got blocked
due to a full inport queue at receiver, will timeout after a specific time. This will
break the block and the execution continues. However, the execution cannot
continue as normal since the failed send’s data must be transferred. A retransmit
won’t solve this issue as the task will just be blocked and cause a deadlock again.
The data must therefore be discarded. This will be similar to the receiver discard
port protocol in VxWorks.
Both of the above recovery methods results in data loss, which is not wanted in
our system.
37
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
3.4 Databases
SQLite and Raima has been implemented and benchmarked. Unfortunately, it was
impossible to get Mimer up and running, since they only had pre-compiled
binaries for standard VxWorks. When linking these binaries with an application
there ware many linking errors because of missing libraries in VxWorks 653.
MySQL fell off since the embedded version only exists under a commercial license.
38
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
3.4.1 Filesystem
The ARINC 653 specification part 2 specifies a filesystem. This is not implemented
in VxWorks 653. However, there is a filesystem implemented in VxWorks. This
filesystem is not used in this project. One reason for this is to avoid too much
coupling to VxWorks 653-specific implementations. Another reason is the
VxWorks simulator. Since it’s a simulator, not an emulator, the benchmarking may
be even more inaccurate if disk read and write accesses are done via the host
operating system.
The filesystem that is used is an in-ram filesystem. It has been implemented by
overloading filesystem related functions like open() and close(). The overloading
can be made since VxWorks lets you specify which functions that should be linked
from shared libraries and which shouldn’t.
Filesystem usage
Before using the filesystem it must be initialized. The initialization needs an array
with pointers to memory regions, one region per file, and the size of the region.
These memory regions can be of arbitrary sizes and are configured by the
application developer.
When creating a file, using open() or creat(), the first unused memory region
that was given during the initialization is used. This means that one must open the
files in the correct order to make sure that a file gets the wanted memory region.
E.g. if one needs two files in an application, 1MB.txt and 1kB.txt, the filesystem
needs to be initialized with two pointers in the initialization array. If the first
pointer points to a 1MB memory region and the second pointer points to a 1kB
memory region, the 1MB.txt file must be created first since the 1MB memory
region is the first element of the pointer array.
Filesystem structure
To keep track of open files two different structs are used, as seen in figure 3.7. The
first one, file t, is the actual file struct. It has a pointer to where in the memory the
file data is located. This struct also holds all locks, the filename, and the capacity
and the size of the file. The second struct is ofile t. The o is for open. This struct
keeps track of access mode flags and the current file pointer position. The last
struct, fd t, is the file descriptor struct. It holds information about operation
mode flags.
The file descriptor struct and the open file struct could be merged to one, but
then it would be impossible to implement file descriptor duplication functionality.
39
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
The filesystem can have fixed numbers of files, open files and file descriptors.
Since there is a fixed maximum number of each struct, they are allocated in fixed
sized C arrays.
To avoid unnecessary performance issues, stacks are used to keep track of free
file descriptors and free files. With stacks, free descriptors can be found in a
constant time since there is no need of iterating the array with descriptors. The
filesystem is using files with static sizes. This makes reading and writing fast,
since all data is stored in one big chunk, and not spread over blocks. However, this
may lead to unused memory or full files. This means that the filesystem must be
configured carefully.
File locking
At first, file level locking was implemented. An entire file could be locked, not
parts of the file. However, it turned out that SQLite needs byte level locking, and
the ability to use both read and write locks. All locking related operations are
done via the fcntl() and the flock() functions.
40
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
3.5.1 Interface
The interface for the adapters is very simple. It basically provides support for
running queries and fetching results. dba init() Initializes the adapter.
41
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
The status code is used to determine whether the query is a select or not. If the
return value is SQLITE ROW the query must have been a select, or something else
that resulted in one or more rows. However, if the query is a select that returns no
rows, its impossible to determine whether the query is a select or not. Luckily, the
database module only needs to know if there are any rows, not what type of query
that just got executed.
Transactions
SQLite is using pessimistic concurrency control. This, together with it’s locking techniques
makes transactions inefficient when the database is used by multiple connections.
When a query that results in a write to the database is executed in a
transaction, this transaction will lock the entire database with an exclusive lock
until it’s committed. This means that all other transactions have to wait for the
lock to be released before they can execute any more queries.
If there is one transaction per task in the database module and one of these
transactions holds an exclusive lock, and the task suddenly is switched out, the
other tasks won’t be able to perform any database queries. This will decrease the
database module’s performance. A solution could be to use mutex locks and
semaphores in the database module or disabling task switching to prevent nested
transactions.
Result handling
When a select query has been executed the result should be stored in a rowset. To
build the rowset the result from the database is iterated. For each iteration a row
is created and added to the rowset. To be able to add data to the rowset, the size
of the data must be known. This is done by checking the data type for each column
in the database.
Initialization
Since Raima is unable to create tables via normal SQL queries, a specialized
initialization function must be used. This initialization function takes a pointer to
a pointer to a string containing Raima SDDL statements. The SDDL cannot parsed
right away. It has be written to a file first.
Normally, this file would be parsed with Raima’s command-line SDDL tool,
which is not possible in VxWorks since you cant use exec() to execute other
applications. However, it is possible to link Raima’s binaries with your own and
make a call to the SDDL tool’s main function sddl main().
When the SDDL has been parsed the database must be initialized. This is also,
usually, made with a command-line tool, but it’s possible to link these into the
application too and call the main function initdb main(). The result of the parsing
is the database files specified by the SDDL.
42
CHAPTER 3. SYSTEM DESIGN AND IMPLEMENTATION
Files
Raima is using more files than SQLite, so the filesystem must be planned more
than with SQLite. At least three files is used during runtime, after the database has
been created; database, index and log. During database creation the SDDL file is
needed among others.
Result handling
To fetch the result from a query in Raima is very alike the way it’s done i SQLite.
The result must be iterated, column value types checked to get the size needed in
the row etc.
43
Chapter 4
Benchmarking
Benchmarking is performed to find the implemented system’s thresholds
according to throughput, number of simultaneously clients and scheduling. This is
required to be able to evaluate the usage of databases in an ARINC 653
environment.
In the following sections, the benchmarking environment and results are
described and analyzed. Comparisons between SQLite and Raima can be found in
the next chapter.
4.1 Environment
The benchmarking is performed in a VxWorks 653 simulator on Windows
desktop computer. This section describes how measurements are performed and
what default values are used for the benchmarking variables.
4.1.1 Simulator
The simulator simulates the VxWorks 653 real-time operating system, i.e.
partitions, scheduling and the interpartition communication.
The low resolution of the simulator’s tick counter forces us to mostly use long
timeslots. We use 50 ms, 100 ms and 150 ms as our standard lengths but in a real
system these intervals are very long. A real system often uses partition timeslots
of length 5 ms or shorter. These short timeslots cannot be simulated, but have to
be run on a target hardware.
4.1.2 Variables
Table 4.1 shows the default values for the variables used during benchmarking. If
nothing else is mentioned in each test case, these were the values that were used.
Number of client partitions describes how many client partitions are scheduled
by the round robin partition scheduling in the core OS. Timeslot length is the
length in milliseconds of each partition window in the above scheduling. Task
scheduling describes the process scheduling inside a partition. Primary key is if the
query is using the primary column for selecting single rows. Port buffer size and
Port queue length are port attributes needed for communication between
partitions. Table size is the initialization size of the table used in the benchmarks.
Query response size is the size of the resulting rowset that will be sent back to the
client. Sort result is if the resulting rowset is sorted. Client send quota is the
number of queries each client will send.
Variable Value
Number of client partitions 1 .. 8.
44
CHAPTER 4. BENCHMARKING
4.1.3 Measurement
The benchmarks are created by setting different numbers of clients sending a
predefined number of queries to the server. The server will then process these
queries as fast as it can and measure its average and maximum number of
processed queries for each of its timeslots. The minimum value was skipped since
it will almost always show the value from the last timeslot where the final
remaining queries are processed.
The number of queries sent by each client is 1024. This value is large enough to
cause full load on the database, but low enough to not need too much memory.
Benchmarks in this thesis measure the server’s throughput, that is the number
of queries fully processed in the server during one timeslot. A fully processed
query is a query that has been received from the in port, executed in the database
and its result has been sent to the out port. A query that is processed in the
database when a context switch occur, will be recorded in the next timeslot. The
average number of the processed queries are calculated at the end of a test run by
dividing the total number of queries sent to the server by the total number of
timeslots the server required to finish, see equation (4.1).
This measurement approach was chosen because of the simulator’s inaccurate
time measurement and that queries processed is an easily understandable unit.
Every benchmark configuration has been run five times. A mean value has
then been calculated from these five runs. For example, a mean value of ”partition
switches required” is used in equation 4.1 to calculate the average number of
processed queries during one timeslot.
For both databases the same benchmark configurations and the same queries
have been used. The only thing different between the database benchmarks is the
database adapter. The difference between the adapters are very small and adds as
almost no overhead to the database performance and should not affect the result.
45
CHAPTER 4. BENCHMARKING
benchmarks. All benchmarks can be found in appendix A. In all graphs below the x
axis shows the number of client partitions in the experiment, and the y axis shows
the average number of queries as given in equation 4.1 earlier.
900
800
700
Queries
600
500
400
300
200
100
0
1 2 3 4 5 6 7 8
Clients
Figure 4.1: Average inserts processed during one timeslot for different number of client
partitions.
1000
900
800
Queries
700
600 1 Client
2 Clients
3 Clients
4 Clients
500
5 Clients
6 Cleints
7 Clients
400 8 Clients
300
50 100 150
Timeslot [ms]
46
CHAPTER 4. BENCHMARKING
Figure 4.2: Average number of inserts processed during one timeslot of various length.
1000
50 ms
900 100 ms
150 ms
800
700
Queries
600
500
400
300
200
100
0
1 2 3 4 5 6 7 8
Clients
Figure 4.3: Average selects processed during one timeslot for different numbers of client
partitions and timeslot lengths. Task scheduling used is Yield only.
47
CHAPTER 4. BENCHMARKING
900
800
700
Queries
600
500
400
300
200
100
0
1 2 3 4 5 6 7 8
Clients
Figure 4.4: Average selects processed during one timeslot for different numbers of client
partitions.
600
500
400
Queries
50 ms
100 ms
300 150 ms
200
100
0
1 2 3 4 5 6 7 8
Clients
Figure 4.5: Average selects processed during one timeslot for different number of client
partitions. The lines represent the average processed queries using different timeslot
lengths.
48
CHAPTER 4. BENCHMARKING
4.3.1 Deviation
Five runs with the same configuration were run to ensure better values. The
number of partition switches required in the benchmarks were almost always the
same during the five runs. When the values did differ, it was only by one partition
switch and this only occurred when many partitions were scheduled.
The worst deviation case that occurred during the benchmarks were that
three test runs returned the value 11 while two test runs gave 12 partition
switches. The mean value for this case is 11,4 partition switches, which gives the
standard deviation 0,49. The relative standard deviation is 4,3%.
The maximum performance benchmarks do not rely on partition switches.
Instead they use the mean value of the maximum number of queries processed in
one timeslot. The relative standard deviation here is lower than 5,3%.
Dip at beginning
The ”dip at beginning” appearance occurs in the SQLite insert, update and select
graphs (See figures A.1, A.5 and A.9). This strange curve is explained by the
average calculation method, see equation (4.1) on page 47. Since this method uses
the number of sent queries, divided by the number of required timeslots, the
calculation is very dependent on the timeslot length. A longer timeslot length
leads to more queries getting processed per timeslot and thereby fewer partition
switches are needed. With few clients, a small change in the number of required
partition switches will have huge affect on the calculation result.
If the average number of processed queries is less than a client’s total send
quota, in the benchmarks set to 1024, the server will need more than one timeslot
per client to finish. This means that the denominator in the average calculation
increases and thereby the average performance will be reduced.
With only one client running, the server manages to perform all of the client’s
1024 queries in one timeslot. When the number of clients is more than one,
context switches will occur in the server that adds some overhead. This overhead
makes, in the case of two clients, the server unable to perform all 2 ∗ 1024 queries
in two timeslots. This means that an extra timeslot is needed, and the average
performance is therefore heavily decreased. The calculations for the two cases are
1024/1 and (2 ∗ 1024)/3, see figures 4.6 and 4.7. For every additional client, the
effect of the extra required partition switch becomes smaller and smaller.
CHAPTER 4. BENCHMARKING
Figure 4.6: With one client, the server manages to process all 1024 queries in one time
frame.
Figure 4.7: With two clients, the server cannot process 2*1024 queries in two timeslots due
to context switches. An extra time frame is required.
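The two cases can be sketched directly from the average metric; the helper below is a minimal restatement of equation (4.1), using the benchmark's send quota of 1024 queries per client.

```python
SEND_QUOTA = 1024  # each client's total send quota (benchmark default)

def average_per_timeslot(clients: int, timeslots_needed: int) -> float:
    # Equation (4.1): total queries sent divided by the number of
    # server timeslots needed to process them.
    return clients * SEND_QUOTA / timeslots_needed

one_client = average_per_timeslot(1, 1)   # server finishes 1024 queries in one timeslot
two_clients = average_per_timeslot(2, 3)  # context-switch overhead forces a third timeslot
print(one_client, round(two_clients, 1))
```

This is why the average drops so sharply from one client to two: the extra timeslot grows the denominator by 50 % while the numerator only doubles.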
Always max
The average calculation can also explain some other 150 ms curves. For example
figure 4.3 shows select with Yield only scheduling. Its 150 ms curve is a horizontal
straight line which always has the value 1024.
In this case, the average value is in reality slightly above the calculated average
value of 1024. This means that the server finishes its processing faster.
However, it is not fast enough to gain an entire timeslot and skip the last partition
switch. The number of partition switches is therefore the same, which leads to an
erroneously calculated average value. See figure 4.8.
Figure 4.8: The average processing speed is faster than 1024 queries per timeslot, but it is
not fast enough to earn an entire timeslot.
the task scheduling in the simulator, which seems to work better with five threads
in combination with the partition timeslot length. This is just a guess from our
side since we have no verified explanation.
4.3.4 Scaling
Looking at the insert and update curves, it can be seen how they scale compared
to the timeslot length. Generally, it seems that between 50 ms and 150 ms they
are linear. However, SQLite does not double the number of queries processed
when the timeslot length is doubled, which Raima does.
Chapter 5
Figure 5.1: Comparison between average insert values in SQLite and Raima. Timeslots used
in the graphs are 50 ms and 100ms.
CHAPTER 5. COMPARISONS BETWEEN SQLITE AND RAIMA
Figure 5.2: Comparison between average update values in SQLite and Raima. Timeslot
lengths are 50 ms and 100 ms.
Select comparison
Figure 5.3: Comparison between average select values in SQLite and Raima. Timeslot
lengths are 50 ms and 100 ms.
Figure 5.4: Comparison between average select values in SQLite and Raima with and
without primary key. Timeslot length is 50 ms.
the 50 ms average select benchmarks. Figure 5.5 indicates quite similar curves for
the different scheduling approaches.
For SQLite, Yield only is a bit faster than Round robin and yield, but the
variation is quite small. This slight improvement can be credited to fewer task
switches in the database partition. For SQLite, when using round robin, the
number of processed queries in a task timeslot varies more than for Raima. This
leads to tasks finishing their processing in different round robin task cycles. A
finished task will then be switched in again the next cycle, even though it has
nothing to do. The task will yield and hand over to the next task. This adds
overhead and extra partition switches are needed.
For Raima, the tasks process queries with a uniform performance, which makes
the tasks finish in consecutive order. This leads to less overhead and small
differences between the curves.
Task scheduling comparison
Figure 5.5: Comparison between average select queries using different task schedules in
SQLite and Raima. Timeslot length is 50 ms.
Sort comparison
Figure 5.6: Comparison between average selects with and without sorting in SQLite and
Raima. The resulting rowsets are of size 128 rows and timeslot length is 50 ms.
Figure 5.7 compares the number of rows fetched from the database for single-row
queries and 128-row queries. As expected, both databases can fetch many
more rows during one timeslot when requesting larger rowsets as results. The
database fetches rows faster and the interpartition communication becomes more
effective, since fewer sends decrease the overhead.
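The effect of fewer sends can be sketched with a back-of-the-envelope calculation. The per-message cost below is a hypothetical figure chosen for illustration, not a measured value from the benchmarks.

```python
ROWS_NEEDED = 1280       # total rows a client wants to fetch
COST_PER_SEND_MS = 0.5   # hypothetical fixed cost per interpartition message

for rows_per_query in (1, 128):
    sends = ROWS_NEEDED // rows_per_query  # one response message per query
    overhead_ms = sends * COST_PER_SEND_MS
    print(f"{rows_per_query:3d} rows/query: {sends:4d} sends, {overhead_ms:6.1f} ms overhead")
```

Fetching in 128-row batches cuts the number of interpartition sends, and with it the fixed messaging overhead, by two orders of magnitude for the same amount of data.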
Large responses comparison
Figure 5.7: Comparison between single-row select and 128-row select queries in SQLite
and Raima. Average values are shown in the graph, with a timeslot length of 50 ms.
5.7 Summary
In this section we summarize our observations for the selected timeslots and
scheduling approaches given the interpartition communication protocols and
adapters implemented.
• In general, it appears that the system performs at its best with five clients, both
in SQLite and Raima, given the selected client loads and timeslots.
• The 150 ms measurements are often strange and unexpected. This can be
explained by the average metric used (equation 4.1), which depends on the length
of the timeslot. The 150 ms curves are therefore less accurate than the 50 ms
and 100 ms curves.
• Most graphs are linear, but a doubled timeslot will not double the number of
processed queries.
• SQLite is overall much faster than Raima. Raima can only match SQLite in the
update performance benchmark.
The above summary is based on benchmarks made in a simulator with low
time resolution, and the simulation was done in Windows on a desktop
computer. This implies that the benchmark results are not entirely reliable.
The benchmarked system is a system with certain properties. The results and
analyses are produced for this system and might not be applicable to other
systems.
Chapter 6
6.1 Performance
The relation between performance and scalability, database comparisons, and
simulation details are discussed here.
6.1.2 Scalability
Most benchmarking graphs are quite linear, which is good since it makes it
possible to estimate results. If we extrapolate the curves, we notice that the
curves in the insert and update graphs will not cross the origin. This implies that
the linearity does not hold for low timeslot values. If the linearity did hold, it
would be possible to use timeslots of zero length and still be able to process
queries, which is obviously impossible. Raima does not show this behavior at all,
which further strengthens our conclusion that Raima's performance is easier to estimate.
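The extrapolation argument can be illustrated with a two-point linear fit. The query counts below are hypothetical readings in the spirit of the insert graphs, not actual benchmark values.

```python
# Hypothetical (timeslot in ms, queries per timeslot) readings for one client count.
t1, q1 = 50, 300
t2, q2 = 100, 550

slope = (q2 - q1) / (t2 - t1)   # extra queries per extra millisecond
intercept = q1 - slope * t1     # value the fitted line predicts at t = 0

# A zero-length timeslot would still "process" queries according to this line,
# which is impossible, so the linearity must break down for short timeslots.
print(slope, intercept)
```

A nonzero intercept at t = 0 is exactly the sign that the straight-line trend cannot extend all the way down to very short timeslots.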
It should be noted that only a few different timeslot lengths were measured,
which makes the above discussion a bit uncertain. To validate these results, more
timeslots must be tested. The three timeslot lengths used in the benchmark were
chosen because they are large enough to work with the simulator’s low
resolution, while they are low enough to avoid inappropriately large port buffers.
The timeslot lengths are also multiples of 50, which makes it easy to see
scalability issues.
As seen in most of the graphs, there is a peak at five clients. Even though it is
not verified, this peak might be due to the combination of timeslot lengths in the
task and partition schedules and the VxWorks simulator. To clarify this, new
benchmarks in a target environment are needed to get more trustworthy
values.
CHAPTER 6. DISCUSSION AND CONCLUSION
waiting for a result which will not arrive before its next timeslot. Instead, fetches
should be called sparingly, or only when really necessary.
This limits the usefulness of these databases since no important data can be
stored inside and no data can be sent to high safety-critical applications.
Bibliography
[1] Airlines Electronic Engineering Committee (AEEC). March 2006.
Avionics application software standard interface - ARINC
specification 653 Part 1.
[12] Alan Burns, Andy Wellings. 2001. Real-Time Systems and
Programming Languages. Third edition. Pages 486 - 488.
Glossary
ACID Atomicity, consistency, isolation, durability. A set
of properties that guarantees that transactions are
processed reliably
API Application programming interface
ARINC 653 A specification for time- and space partitioning
in safety-critical real-time operating systems
Core OS Core operating system for a VxWorks 653
module. Provides OS services and schedules
partitions
COTS Commercial Off-The-Shelf
DDL Data Definition Language. Used in Raima to
specify databases
DO-178B Software Considerations in Airborne Systems
and Equipment Certification. A guideline for
avionics software, published by RTCA.
Appendix A Benchmark
graphs
A.1 Variables
Table A.1 lists the default values for the variables used during benchmarking. If
nothing else is mentioned in a test case, these were the values that were used.
Variable Value
Number of client partitions 1 .. 8
Timeslot length 50, 100, 150 ms
Task scheduling Round robin
Primary key Yes
Port buffer size 1024 bytes
Port queue length 512
Table size 1000 rows, 16 columns
Query response size 1 row (16 cols)
Sort result No
Client send quota 1024
Table A.1: Variables - default values
A.2 SQLite
This section shows benchmarks concerning SQLite. They are divided into six
categories: Insert, Update, Select, No primary key, Alternate task scheduling and
Large response sizes.
A.2.1 Insert
Insert query benchmarks are discussed here. Table A.2 shows non-default
variable values for these benchmarks.
APPENDIX A. BENCHMARK GRAPHS
Variable Value
Query type Insert
Query response size 0
Table A.2: Variables that differ from the default values
Average performance
This benchmark shows the average performance regarding inserts. In figure A.1
the lines represent the average processed queries for one timeslot using different
timeslot lengths. In figure A.2 the lines represent the average processed queries
for different number of clients.
SQLite: Average insert performance
Figure A.1: Average inserts processed during one timeslot for different number of client
partitions.
Figure A.2: Average no. inserts processed during one timeslot of various length.
Maximum performance
This benchmark shows the maximum performance regarding the insert queries.
In figure A.3 the lines represent the maximum processed queries for one timeslot
using different timeslot lengths. In figure A.4 the lines represent the maximum
processed queries during one timeslot for a specific number of clients.
SQLite: Maximum insert performance
Figure A.3: Maximum inserts processed during one timeslot for different number of client
partitions.
Figure A.4: Maximum no. inserts processed during one timeslot of various lengths.
A.2.2 Update
The update query benchmarks are shown here. Table A.3 shows non-default
variable settings for these benchmarks.
Variable Value
Query type Update
Query response size 0
Table A.3: Variables that differ from the default values.
Average performance
This benchmark shows the average performance regarding updates. In figure A.5
the lines represent the average processed queries for one timeslot using different
timeslot lengths. In figure A.6 the lines represent the average processed queries
during one timeslot for a specific number of clients.
SQLite: Average update performance
Figure A.5: Average updates processed during one timeslot for different number of client
partitions.
Figure A.6: Average no. updates processed during one timeslot of various length.
Maximum performance
This benchmark shows the maximum performance regarding update queries. In
figure A.7 the lines represent the maximum processed queries for one timeslot
using different timeslot lengths. In figure A.8 the lines represent the maximum
processed queries during one timeslot for a specific number of clients.
Figure A.7: Maximum updates processed during one timeslot for different number of client
partitions.
Figure A.8: Maximum no. updates processed during one timeslot of various lengths.
A.2.3 Select
The select query benchmarks are shown here. Table A.4 lists the non-default
variable values for these benchmarks.
Average performance
This benchmark shows the average performance regarding selects. In figure A.9
the lines represent average processed queries for one timeslot using different
timeslot lengths. In figure A.10 the lines represent the average processed queries
during one timeslot for a specific number of clients.
SQLite: Average select performance
Figure A.9: Average selects processed during one timeslot for different numbers of client
partitions.
Figure A.10: Average no. selects processed during one timeslot of various length.
Maximum performance
This benchmark shows the maximum performance regarding selects. In figure
A.11 the lines represent maximum processed queries using different timeslot
lengths. In figure A.12 the lines represent the maximum processed queries during
one timeslot for a specific number of clients.
SQLite: Maximum select performance
Figure A.11: Maximum selects processed during one timeslot for different numbers of
client partitions.
Figure A.12: Maximum no. selects processed during one timeslot of various length.
Figure A.13: Average selects processed during one timeslot for different numbers of client
partitions and timeslot lengths.
Figure A.14: Maximum selects processed during one timeslot for different numbers of
client partitions and timeslot lengths.
Average performance
This benchmark shows the average performance obtained when using selects
without a primary key. See figure A.15 and figure A.16 for the results.
SQLite: Average select performance - no primary key
Figure A.15: Average selects processed during one timeslot for different numbers of client
partitions and timeslot lengths.
Figure A.16: Average no. selects processed during one timeslot of various length.
Maximum performance
Figures A.17 and A.18 display the maximum numbers of queries processed in
the server during one timeslot.
Figure A.17: Maximum selects processed during one timeslot for different numbers of
client partitions and timeslot lengths.
Figure A.18: Maximum no. selects processed during one timeslot of various length.
Without sort
This benchmark shows average and maximum performance regarding large
response sizes. No sorting is applied to the results in the database. See table A.7
for non-default variable values and figure A.19 for the benchmark result.
Figure A.19: Average and maximum processed select queries. These selects ask for 128
rows. No sorting is applied.
With sort
This benchmark shows average and maximum performance regarding large
response sizes when sorting is applied, see table A.8 for non-default variable
values. Resulting rows are sorted in ascending order by an unindexed column. See
figure A.20 for the benchmark result.
Variable Value
Query type Select
Query response size 128 rows
Sort result Yes
Table A.8: Variables that differ from the default values.
SQLite: Average and maximum select performance, 128 sorted rows response
Figure A.20: Average and maximum processed selects are displayed. Each query asks for a
128-row response. Results are sorted in ascending order by an unindexed column.
A.3 Raima
This section contains benchmark results for Raima. The benchmarks are divided
into six categories: Insert, Update, Select, No primary key, Alternate task
scheduling and Large response sizes.
A.3.1 Insert
The benchmarks in this section show Raima's performance with respect to insert
queries. Table A.9 shows the non-default values for variables used during the
insert benchmarks.
Variable Value
Query type Insert
Query response size 0
Table A.9: Variables that differ from the default values.
Average performance
This benchmark shows the average performance regarding inserts. In figure A.21
the lines represent the average processed queries for one timeslot using different
timeslot lengths. In figure A.22 the lines represent the average number of queries
processed during one timeslot for a specific number of clients.
Figure A.21: Average inserts processed during one timeslot for different number of client
partitions.
Figure A.22: Average no. inserts processed during one timeslot of various lengths.
Maximum performance
This benchmark shows the maximum performance regarding inserts. In figure
A.23 the lines represent the maximum processed queries using different timeslot
lengths. In figure A.24 the lines represent the maximum number of queries
processed during one timeslot for a specific number of clients.
Figure A.23: Maximum inserts processed during one timeslot for different number of client
partitions.
Figure A.24: Maximum no. inserts processed during one timeslot of various lengths.
A.3.2 Update
In this section, update queries have been benchmarked. Table A.10 shows
non-default values for variables used during the update benchmarks.
Variable Value
Query type Update
Query response size 0
Table A.10: Variables that differ from the default values.
Average performance
This benchmark shows the average performance regarding updates. In figure
A.25 the lines represent the average processed queries for one timeslot using
different timeslot lengths. In figure A.26 the lines represent the average number
of queries processed during one timeslot for a specific number of clients.
Raima: average update performance
Figure A.25: Average number of updates processed during one timeslot for different
number of client partitions.
Figure A.26: Average number of updates processed for various timeslot lengths.
Maximum performance
This benchmark shows the maximum performance regarding updates. In figure
A.27 the lines represent the maximum processed queries for one timeslot using
different timeslot lengths. In figure A.28 the lines represent the maximum
number of queries processed during one timeslot for a specific number of clients.
Figure A.27: Maximum updates processed during one timeslot for different number of
client partitions.
Figure A.28: Maximum no. updates processed during one timeslot of various lengths.
A.3.3 Select
In this section, the select performance has been benchmarked. It is performed
with the non-default variable values seen in table A.11.
Average performance
This benchmark shows the average performance regarding selects. In
figure A.29 the lines represent the average processed queries for one timeslot
using different timeslot lengths. In figure A.30 the lines represent the number of
select queries processed during one timeslot for different numbers of clients.
Raima: average select performance
Figure A.29: Average selects processed during one timeslot for different number of client
partitions.
Figure A.30: Average no. selects processed during one timeslot of various lengths.
Maximum performance
This benchmark shows the maximum performance regarding selects. In figure
A.31 the lines represent the maximum processed queries for one timeslot using
different timeslot lengths. In figure A.32 the lines represent the maximum
processed queries for one timeslot using different number of clients.
Raima: maximum select performance
Figure A.31: Maximum selects processed during one timeslot for different number of client
partitions.
Figure A.32: Maximum no. selects processed during one timeslot of various lengths.
Figure A.33: Average selects processed during one timeslot for different number of client
partitions.
Figure A.34: Maximum selects processed during one timeslot for different number of client
partitions. The lines represent the maximum processed queries using different timeslot
lengths.
Average performance
This benchmark shows the average queries processed when no primary key is
used. In figure A.35 the lines represent the average processed queries for one
timeslot using different timeslot lengths. In figure A.36 the lines represent the
number of queries processed during one timeslot using different numbers of
clients.
Figure A.35: Average selects processed during one timeslot for different number of client
partitions. No primary key is used.
Figure A.36: Average selects processed for various timeslot lengths. No primary key is
used.
Maximum performance
This benchmark shows the maximum performance regarding selects when no
primary key is used. In figure A.37 the lines represent the maximum processed
queries for one timeslot using different timeslot lengths. In figure A.38 the lines
represent the maximum number of queries processed during one timeslot for
different number of clients.
Figure A.37: Maximum selects processed during one timeslot for different number of client
partitions. No primary key is used.
Figure A.38: Maximum selects processed for various timeslot lengths. No primary key is
used.
Without sort
This benchmark shows average and maximum performance regarding large
response sizes when no sorting is applied, see figure A.39. Table A.14 shows
non-default variables used in this benchmark.
Figure A.39: Average and maximum processed selects are displayed. Each query asks for
128 rows.
With sort
This benchmark shows average and maximum performance regarding large
response sizes when sorting is applied, see table A.15 for non-default variable
values. Resulting rows are sorted in ascending order by a non-indexed column.
See figure A.40 for the benchmark result.
Variable Value
Query type Select
Query response size 128 Rows (16 cols)
Sort result Yes
Table A.15: Variables that differ from the default values.
Raima: Average and maximum select performance, 128 sorted rows response
Figure A.40: Average and maximum processed selects are displayed. Each query asks for
128 rows. Results are sorted in ascending order by a non-indexed column.
The publishers will keep this document online on the Internet - or its possible
replacement - for a considerable time from the date of publication barring
exceptional circumstances.
The online availability of the document implies a permanent permission for
anyone to read, to download, to print out single copies for your own use and to
use it unchanged for any non-commercial research and educational purpose.
Subsequent transfers of copyright cannot revoke this permission. All other uses
of the document are conditional on the consent of the copyright owner. The
publisher has taken technical and administrative measures to assure authenticity,
security and accessibility.
According to intellectual property law the author has the right to be
mentioned when his/her work is accessed as described above and to be protected
against infringement.
For additional information about the Linköping University Electronic Press
and its procedures for publication and for assurance of document integrity,
please refer to its WWW home page: http://www.ep.liu.se/
Language: English
Type of publication: Master's thesis (Examensarbete)
ISRN: LIU-IDA/LITH-EX-A--10/010--SE
Pages: 105
Title: Usage of databases in ARINC 653-compatible real-time system
Authors: Martin Fri and Jon Börjesson