
APPENDIX F 1

Simulator description
For feasibility checking, the designed system model and its program model are joined with a time- and event-triggered simulator, which enables the representation and co-simulation of HW and SW nodes.

An internal representation of the HW and SW architectures has been defined, both for reference by the simulator and to present them as "architecture data" (A-data) to the CM (configuration manager) and its inherent RTOS (real-time operating system).

To prepare the TSTD structure for simulation, its hierarchical structure needs to be flattened by applying an internal state enumeration. The start states are matched with the task trigger conditions and form both the initialisation part and the code for the initial state in the task code. A single control program is built to be executed inside the simulator with respect to the specified trigger conditions and durations (timeout intervals, which are critical for determining the next critical moment in time). The internal state switching mechanism represents both the execution progress and the context switches among the tasks. In the simulation, the pre-emption points for task switching are the states of the task, whereas in the execution of the program on the target platform the pre-emption points are instruction-oriented.
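The flattening described above can be sketched as follows. This is a hypothetical illustration (all names are assumptions, not the actual tool's code): the enumerated integer state is the only execution context that must survive a pre-emption, which is exactly what makes states usable as pre-emption points.

```python
# Hypothetical sketch: a flattened TSTD run as a single control program.
# Each task is a table of state handlers; the integer state index serves
# both as the execution-progress marker and as the pre-emption point.

def make_task():
    def s0(ctx):          # initial state, matched with the trigger condition
        ctx["log"].append("init")
        return 1          # next enumerated state
    def s1(ctx):
        ctx["log"].append("work")
        return 2
    def s2(ctx):
        ctx["log"].append("done")
        return 0          # back to the initial state, awaiting re-trigger
    return {0: s0, 1: s1, 2: s2}

def run_control_program(task, ctx, steps):
    """Execute `steps` state transitions; a context switch between any two
    states only needs ctx["state"] to be saved and restored."""
    for _ in range(steps):
        ctx["state"] = task[ctx["state"]](ctx)
    return ctx

ctx = run_control_program(make_task(), {"state": 0, "log": []}, 3)
```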

Internal state and external event monitoring is built around the main simulation engine, so that states can be triggered based on their trigger conditions. The parameters of system service calls are sent to the CM module, where they are processed accordingly. The context switches resulting from these calls affect the simulation units' next critical moments and hence the order in which they are executed. The simulation engine has to consider these changes and dispatch the simulation units accordingly.

The internal representation of external events in the simulation needs special attention. Their control is enabled by specifying their occurrence time and kind (periodic, sporadic) in the simulation tool. To enable this feature, a console is foreseen for each (KERNEL) station in the simulation window, which enables the input of external interrupt occurrences.

The resolution of the internal simulation clock is the same as the resolution of the RTC (real-time clock) the RTOS operates with, so the representation of time remains the same in both cases. It is referenced by issuing the NOW system call, which returns the current value of the (simulation) real-time clock.

For practical purposes, the station/collection/task configuration tables are maintained in the A-data separately from the TCBs (task control blocks), which are used by the RTOS for task scheduling. There, they are accessible to the CM and the simulation engine. In the simulation model, the register context is replaced by a reference to the simulated task and its current state. If the task is switched in with its initial context, the initial state (0) is enforced; otherwise, the current state information is used. This dualism is also maintained in the target platform contexts. If the task's simulated execution reaches its end, the current context points back to the initial state of the task, awaiting its trigger condition to be fulfilled.
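The context handling above can be sketched as a minimal data structure (a hypothetical illustration; class and method names are assumptions):

```python
# Hypothetical sketch of the simulated task context: instead of a register
# set, the context holds only a task reference and its current TSTD state.

class SimContext:
    def __init__(self, task_id):
        self.task_id = task_id
        self.state = 0            # initial state (0) for the initial context

    def switch_in(self, initial=False):
        """On a context switch, either enforce the initial state (0) or
        resume from the remembered current state."""
        if initial:
            self.state = 0
        return self.state

    def finish(self):
        """At task end the context points back to the initial state,
        awaiting the task's trigger condition."""
        self.state = 0

ctx = SimContext("T1")
ctx.state = 3                     # some execution progress was made
resumed = ctx.switch_in()         # resume: current state is kept
ctx.finish()                      # task end: back to the initial state
```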

In the following, the structure and the mechanisms of the simulator are described. After that, the mentioned data structures for the representation of internal state transition conditions and external events are presented.
TSimUnit, TSimUnitList, TSimulator, TStorage
The structures above are only the data structures of the simulator itself. The simulation units are inherited from the TSimUnit class (STATIONs, COLLECTIONs, TASKs and TSTD states). The HW configuration parameters serve for the parameterisation of the virtual machine, the RTOS and the CM. The structure of the simulation model is shown in Fig. 1.

Figure 1: Simulation units structure (a tree with the Simulator at the root; Station0…Stationn below it; each station holding Collection0…Collectionn; each collection holding Task0…Taskn; and each task referencing its TSTD representation)

The top-level simulation units are STATION nodes; they are the parents of the COLLECTION nodes. All COLLECTION nodes attached to the same STATION node form a linked list, with each COLLECTION's TASK nodes also forming a linked list. The basic simulation units are TASKs, which reference TSTD states through the internal state enumeration in the program code. External stimuli (e.g., interrupts) can affect stations; hence, stations are associated with special tasks representing their software triggers in the absence of appropriate hardware devices.
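The station/collection/task hierarchy with linked sibling lists can be sketched as follows (a hypothetical illustration; the class name and methods are assumptions, not the tool's actual TSimUnit interface):

```python
# Hypothetical sketch of the simulation unit tree: STATION -> COLLECTION
# -> TASK, with sibling units kept as singly linked lists.

class SimUnit:
    def __init__(self, name):
        self.name = name
        self.next = None          # next sibling in the linked list
        self.first_child = None   # head of the child list

    def attach(self, child):
        """Prepend a child unit to this unit's child list."""
        child.next = self.first_child
        self.first_child = child
        return child

    def children(self):
        node = self.first_child
        while node:
            yield node
            node = node.next

station = SimUnit("Station0")
coll = station.attach(SimUnit("Collection0"))
coll.attach(SimUnit("Task0"))
coll.attach(SimUnit("Task1"))
names = [t.name for t in coll.children()]   # most recently attached first
```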

Verification method
Our verification method is based on co-simulation with EDF scheduling and time boundaries. It is primarily meant for checking the timing properties of the modelled system in order to determine its feasibility. The designed system model is transformed into an internal representation for (co-)simulation, whose primary result is a successful execution or a failure; the secondary result is the execution trace, from which additional information can be extracted. This information is then used for discovering bottlenecks as well as unreachable states and resources in the designed prototypes.

For a successful verification, it is assumed that the system model is complete and consistent.
Intermediate checks must be done during the design of the system architecture to ensure this:
• Completeness check (all components which are referred to are present and fully described);
• Range and compatibility check (some parameters of components may be range-checked to discover obvious mistakes and possible incompatibilities in the parameters of the interconnected components);
• Software-to-hardware mapping check (every COLLECTION must be mapped onto a STATION node, considering the node's and/or its supervisors' resources; e.g., the number of tasks which may be handled by a single RTOS processing node is limited).

These checks are the preparation for integration verification, which is described in the forthcoming
sections.

System model
Our system model is an internal representation of the system, which is joined with the simulator's structures for verification. The internal hardware model consists of processing nodes representing STATIONs with the specified properties. The components of STATIONs are not modelled separately; instead, they are represented by the properties of the processing nodes. The COLLECTIONs of the internal software model are mapped to the STATION nodes of the hardware model based on their (initial) state. The COLLECTION nodes are linked lists of TASK-representing nodes with references to their internal TSTD representation.

Verification is done based on the following presumptions:

• There is only one global simulation clock in the system and all real-time clocks (RTCs) are synchronised with it;
• Time events relate to the corresponding STATION's RTC;
• Tasks are assigned deadlines for their execution (the only exceptions are short initialisation tasks);
• Task states (TSTD elements) have the following timing property: a time frame (minT, maxT) for the activity being performed within the state (in RTC time units); it is set by the designer/programmer based on his/her experience, or by employing an execution time analyser for the specified STATION, in which case a much more precise estimate is gained.
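The (minT, maxT) time frame directly yields the two critical-moment variants used later in the course of the simulation. A minimal sketch (function name and values are assumptions for illustration):

```python
# Hypothetical sketch: a TSTD state's time frame (minT, maxT), in RTC time
# units, determines the two critical-moment variants for the state.

def critical_moments(rtc_now, min_t, max_t):
    """Return (earliest precondition-check time, latest transition time)."""
    assert 0 <= min_t <= max_t, "time frame must satisfy 0 <= minT <= maxT"
    return rtc_now + min_t, rtc_now + max_t

# Example: a state entered at RTC time 100 with time frame (5, 20).
check_at, transit_at = critical_moments(100, 5, 20)
```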

The RTOS has an important role in the co-simulation, even though the emphasis is not on it. Our RTOS supports the tasking model and system calls of the PEARL programming language. It supports deadline-driven (Earliest Deadline First) scheduling; the possibility to upgrade it with other scheduling strategies is left open. Its resources (e.g., the number of tasks, synchronisers, signals, events, queued events, etc.) are parameterised by the properties of the KERNEL station.
RTOS model

A small RTOS kernel was developed in the framework of the virtual machine; it executes the system calls of the program tasks being analysed on the given architecture. The executive role of the operating system in the system model has been retained. The RTOS is assumed to operate within the KERNEL station nodes.

Each RTOS processing node maintains a real-time clock (RTC) for its client nodes. All of these clocks need to be synchronised with the global simulation clock during verification. The RTOS services the system calls, whereas the time needed to do this is considered to be included in the time frame of the calling TASK's state. Its sole function is to change the system state and to trigger task states whose trigger conditions relate to the internal data structures of the (operating) system.

The time required to execute the operating system itself (the schedule-and-dispatch cycle) is assumed to be constant and is not considered separately. It is assumed not to affect the rate of application program processing. This assumption is not too strong, because we may have a separate OS processing node for each application program node, and this time may be considered part of the system call service time, since every system call generally requires both mentioned operations.

Criteria function

Every verification or validation method requires the definition of a criteria function, which tells when a system fails, i.e., what the limits of the "normal" execution of the system being checked are.
The concept of correctness for the described method of verification has been defined as follows:

"The system fails in the case when, during co-simulation:

• the system reaches an undefined state, or
• its predefined time frame is violated and no timeout-action is defined."

By taking the shortest and longest transition times through the TASK states (TSTDs) of the system, it is assumed that enough of the time domain can be covered to generalise the results to arbitrary transition times of every state, and herewith also of the system as a whole.

Integration of simulation with EDF scheduling for verification

For verification, simulation with the next-critical-event technique and EDF scheduling is used. The time instant of the next critical moment is always determined by the simulation unit whose activation time is the closest. This time is forwarded to all its parent units and finally becomes the next global critical moment.
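The propagation of the closest activation time up the unit tree can be sketched as a simple minimum over children (a hypothetical illustration; the tuple representation of units is an assumption):

```python
# Hypothetical sketch: the next global critical moment is the minimum of
# the leaf units' activation times, forwarded up through the parent units.

def next_critical_moment(unit):
    """unit = (activation_time, [child units]); only leaf units carry an
    activation time, inner units forward the minimum of their children."""
    time, children = unit
    if not children:
        return time
    return min(next_critical_moment(c) for c in children)

# A station with two collections; the tasks' activation times are 17, 9, 12.
station = (None, [(None, [(17, []), (9, [])]),
                  (None, [(12, [])])])
t_next = next_critical_moment(station)
```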

At each step it is checked whether timing or synchronisation errors have occurred. A "timeout-action" (performed upon violation of the state's timeout condition) represents a controlled program fault: a transition is performed into a final state, from which there is no further transition. If, upon such an error, this action is not defined for the current state, the system fails and the error is logged. Otherwise, the transition into the next state is tried at the minimum time variant and performed at the maximum time variant, provided the pre-conditions for the transition are fulfilled. The transitions through the task states executing at their stations are performed breadth-first for all nodes which share the current critical moment, and the previous state is remembered on every transition. The execution protocol is logged instantly for all simulation nodes.

EDF next event scheduling

Earliest Deadline First next event scheduling is based on the following timing information (see Fig. 2):
A: task activation time;
R: accumulated task run time (updated with the next critical event);
E: task end time (the time when the normal task end is expected, based on its maximum run time; upon a context switch, the current time t1 needs to be remembered, because when the task is re-run this parameter needs to be reset based on the current time t2 and the formula E' = E + (t2 - t1));
D: task deadline (set when A is known).
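The adjustment of E across a context switch can be sketched directly from the formula above (a hypothetical illustration; function name and values are assumptions):

```python
# Hypothetical sketch of the EDF bookkeeping from Fig. 2: on a context
# switch at t1, the expected end time E is frozen; when the task is re-run
# at t2, E is shifted by the time the task spent switched out.

def end_time_after_rerun(e, t1, t2):
    """E' = E + (t2 - t1): the task was off the processor for t2 - t1."""
    assert t2 >= t1, "re-run cannot precede the context switch"
    return e + (t2 - t1)

# Example: E = 50, switched out at t1 = 30, re-run at t2 = 42.
e_prime = end_time_after_rerun(50, 30, 42)
```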

Rescheduling is done when a task is activated due to a scheduled event or on request. The task with the earliest deadline is chosen for execution, and its current state is assigned the current time t (NOW) as its next critical moment. Tasks are scheduled to be executed on events which represent internal states of the system or of its operating system; these represent the tasks' trigger conditions.
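The EDF selection at a rescheduling point can be sketched with a priority queue ordered by deadline (a hypothetical illustration; the data layout is an assumption):

```python
# Hypothetical sketch of EDF selection: among the ready tasks, the one
# with the earliest deadline D is dispatched, and its current state is
# assigned the current time (NOW) as its next critical moment.

import heapq

def edf_pick(ready, now):
    """ready: list of (deadline, task_name) pairs. Returns the dispatched
    task with its next critical moment set to `now`."""
    heapq.heapify(ready)                      # min-heap ordered by deadline
    deadline, name = heapq.heappop(ready)
    return {"task": name, "deadline": deadline, "next_critical_moment": now}

dispatched = edf_pick([(30, "T1"), (25, "T2"), (40, "T3")], now=12)
```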

Initially, all tasks are inactive (short initialisation tasks are the only exception). They are activated explicitly or as the result of a fulfilled schedule. When active, their current (TSTD) states possess a critical activation time and are executed by the simulator. In addition, associated with each task is a "critical activation time indicator" Z, which represents the last moment in time when the task has to be (re-)activated in order to meet its deadline. If it is missed, the task's deadline is probably missed as well. A potential "reserve" comes from the difference D - E (slack time). The task is (re-)activated through the RTOS when its activation conditions are fulfilled and it is elected the currently most urgent task by the RTOS scheduler. Then its current state is assigned the current time as the next critical moment and hence also becomes the first in the queue of the active simulation units.

During rescheduling, the following criteria (failure conditions) need to be checked for all tasks:
t < Z = D - (E - (A + R)), where Z represents the last time when the task needs to start/continue in order to meet its deadline; and
t < E ≤ D must hold for all active tasks, since otherwise they have missed their deadlines.
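The two failure conditions can be sketched as a per-task check (a hypothetical illustration; the function name, argument names and return values are assumptions):

```python
# Hypothetical sketch of the per-task failure check performed on every
# rescheduling step: t < Z = D - (E - (A + R)) and t < E <= D.

def check_task(t, a, r, e, d):
    """Return None if the task can still meet its deadline, otherwise a
    description of the violated condition."""
    z = d - (e - (a + r))            # last moment to start/continue
    if not t < z:
        return "missed latest (re)activation time Z"
    if not (t < e <= d):
        return "deadline missed"
    return None

# Example with A=0, R=5, E=20, D=30, so Z = 30 - (20 - 5) = 15.
ok = check_task(t=10, a=0, r=5, e=20, d=30)    # t < Z: still feasible
bad = check_task(t=16, a=0, r=5, e=20, d=30)   # t >= Z: failure
```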

Figure 2: Task run with a single context switch (a timeline showing the activation time A, the accumulated run time R, a context switch CS at t1, a re-run at t2, and the end time shifted from E to E' = E + (t2 - t1) before the deadline D)

Tasks can also be scheduled to be executed on interrupts. For simulation purposes, interrupts are assigned occurrence times. They are represented by a special simulation unit, which takes its next-critical-moment data from the occurrence table. When these events occur, they trigger the appropriate events through system calls to the station's CM/RTOS, which results in waking up the appropriate tasks through the RTOS.

The course of simulation

During co-simulation (Fig. 3), the time of progression to the next state is calculated in two variants for
each state:
1. RTC + minT (to check the pre-conditions), and
2. RTC + maxT (for transition to a new state).
Figure 3: The course of simulation (the simulation units, i.e. task TSTD representations, supply time parameters, PEARL system calls and natural language comments to the simulator, which is parameterised by the HW architecture model and interacts with the CM/RTOS)

If, at critical moment (2), the pre-condition for the transition to any further state is not fulfilled, the on-timeout action is executed. If it is not defined, the system fails.
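One simulation step over the two variants, including the timeout fallback, can be sketched as follows (a hypothetical illustration; function and parameter names are assumptions):

```python
# Hypothetical sketch of one simulation step for a state with time frame
# (minT, maxT): the pre-condition is checked at RTC + minT and, if still
# unfulfilled at RTC + maxT, the on-timeout action (or a system failure)
# applies.

def step_state(rtc, min_t, max_t, precondition, on_timeout=None):
    if precondition(rtc + min_t):          # variant (1): early check
        return ("transition", rtc + max_t)
    if precondition(rtc + max_t):          # variant (2): last chance
        return ("transition", rtc + max_t)
    if on_timeout is not None:
        return ("timeout-action", on_timeout())
    return ("system-failure", None)

# Assumed example: the pre-condition becomes true only from time 115 on.
result = step_state(100, 5, 20, lambda t: t >= 115)
failed = step_state(100, 5, 20, lambda t: False)   # never fulfilled
```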

During simulation, the E and D parameters are set for each task when it is activated (i.e., when the A parameter is set). When a critical moment is reached, it is checked whether the time frame given for the task has been violated, with one of the following consequences: (1) the overhead is subtracted from the task's slack time, or (2) the system fails because the task deadline is missed.

The simulation results are logged during the execution of every simulation unit, and each step is also accounted for within all of its parent simulation units.

This means that every task state logs its action into the task log, whereas a task logs its beginning and end into the module/collection log. A collection logs into the station log the time when it was first allocated at the station, possible subsequent re-loads, and the changes of state which triggered them. The stations also log the times when they were communicating with each other.
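The hierarchical accounting of log entries can be sketched as a walk up the parent chain (a hypothetical illustration; the dict-based unit representation is an assumption):

```python
# Hypothetical sketch of the hierarchical logging: every step is logged by
# the executing unit and accounted for in all of its parent units' logs.

def log_step(unit, entry, logs):
    """Append `entry` to the unit's log and to every ancestor's log."""
    node = unit
    while node is not None:
        logs.setdefault(node["name"], []).append(entry)
        node = node.get("parent")

station = {"name": "Station0", "parent": None}
coll = {"name": "Collection0", "parent": station}
task = {"name": "Task0", "parent": coll}
logs = {}
log_step(task, "state s1 entered", logs)
```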

Interpretation of Results

The simulation logs are checked manually for irregularities, which could represent faults in the original design or timing/synchronisation errors that might have occurred during the virtual "execution" of the system model. Busy and idle times are considered for each station and, if necessary and possible, load-balancing actions are taken.

The process of analysing and fine-tuning, also known as the profiling process, cannot be unified across very different designs; hence, it must be done manually and remains the responsibility of the designer.

A feasible design model produced by the presented modelling and profiling process retains its value if the foreseen execution time frames have been chosen correctly (i.e., the circumstances on the time scale shall not change when the program part is extended to a fully functional program and run on the target platform).
