STRESS: A SIMULATOR FOR HARD REAL-TIME SYSTEMS

N. C. Audsley, A. Burns, M. F. Richardson and A. J. Wellings


Real-Time Systems Research Group, Department of Computer Science, University of York, UK.

SUMMARY
The STRESS environment is a collection of CASE tools for analysing and
simulating the behaviour of hard real-time safety-critical applications. It is
primarily intended as a means by which various scheduling and resource
management algorithms can be evaluated, but can also be used to study
the general behaviour of applications and real-time kernels. This paper
describes the structure of the STRESS language, its environment and
gives examples of its use.

KEYWORDS: Simulation; Hard real-time; Safety-critical; CASE

INTRODUCTION
Increasingly, computer systems are being used in applications where a failure of the system can
result in loss of life or catastrophic damage. Consequently, the need has arisen for a theoretical
basis and practical methodology by which the correct design and construction of such systems can
be achieved. These systems are often referred to as hard real-time systems, where real-time
reflects the fact that they must directly interact with a changing physical environment and hard
refers to the fact that at least some system functions must be performed within specific timing
constraints.
For hard real-time systems, the verification of the correct and safe functioning of the
system, specifically the guaranteed meeting of timing requirements, is of paramount importance.
Past engineering practices have dictated that systems are verified on a testbed: arguments
regarding the timeliness of the system are established by using test results. This approach is not
ideal. Typically, systems fail, in a timing sense, in very constrained circumstances, with the
average-case, as often seen on the testbed, performing adequately.
To counter this problem, much research has been concerned with developing design
methodologies and feasibility theory to ensure timing requirements can be verified offline. A
number of problems are apparent with this theoretical approach:
• if timing requirements cannot be met, little or no guidance is given as to how the
system design and/or implementation may be modified so that timing
requirements are met;
• design methodologies and scheduling theories do not, in the main, cater for
the effects of actual hardware, i.e. network controllers, DMA, interrupts,
sensors and actuators etc.;
• the timing characteristics of the kernel, and its effect on the application tasks,
are often ignored;
• tools are required to validate new scheduling theory.
Whilst theoretical solutions are required for all the above problems, simulation of both application
and kernel software on various architectures would aid the engineering of hard real-time systems.
For example, by observing where timing constraints are violated, systems engineers are guided
towards the software requiring modification.
The remainder of this paper describes STRESS, a general purpose simulation environment
for hard real-time systems. It comprises a single generic language for architecture description,
kernel software and application software; analysis facilities (within the bounds of current
scheduling theory) for kernel and application software; simulation facilities, incorporating
graphical display of simulation results. STRESS can be used in a number of different ways,
including the construction and simulation of applications and kernels for the hard real-time
systems domain.
Background
Hard real-time systems are generally viewed as comprising a set of tasks which when invoked (or
released) must complete their execution before a specified deadline. Processes may be released in
one of three general ways. Periodic tasks are released at regular intervals. Sporadic tasks are
released at variable intervals, although there exists a minimum time between successive releases of
the task. Aperiodic tasks can be released at any time, with no minimum time between successive
releases.
Feasibility analysis determines whether a set of tasks will meet their deadlines at run-time.
If analysis indicates that the deadlines of tasks will be met, those tasks are termed guaranteed.
Conventionally, such analysis occurs offline, before a system executes. Obviously, aperiodic
processes cannot be guaranteed since they may place, in the worst-case, an infinite load on the
processor.
A common approach to the implementation of hard real-time systems is to use a cyclic
executive, where tasks are executed according to a predefined schedule which is held in the
system, and which is repeated indefinitely1. The execution order is calculated off-line in advance.
Such an approach has a number of problems2: schedules are inflexible and may become
prohibitively long. More flexible approaches include:
• static priority scheduling: all tasks are assigned (offline) a priority; with the
highest priority runnable task executed at any time.
• dynamic priority scheduling: tasks are assigned priorities at run-time, with
those priorities able to change dynamically.
For both approaches, a priority pre-emptive dispatcher is assumed (i.e. the highest priority
runnable task executes at any point in time).
Within hard real-time systems, tasks are (conventionally) able to interact in a number of
ways. The provision of semaphores, to enable mutually exclusive access to shared resources (i.e.
synchronous), and mailboxes, to enable message-based communication (i.e. asynchronous) allows
most methods of interaction to be modelled.
The STRESS environment permits scheduling via dynamic or static priorities with both
synchronous and asynchronous inter-task communication.
Existing Simulation Systems
Conventionally, hard real-time system simulators are individually designed for a specific
architecture and application. For example, Stoyenko describes a schedulability analyser
("schedalyzer") for the Real-Time Euclid language3. This specifies a set of tasks, each of which is
executed periodically, or at a pre-defined time. Distributed systems are supported with single-
processor nodes communicating via mailbox memory accessed over a parallel data bus, with
processes scheduled using the earliest deadline algorithm. The schedulability analysis is tied
specifically to the Real-Time Euclid language, its run time system, and the hardware on which it is
run.
The ARTS kernel4 has been developed as a testbed for the ART (Advanced Real-Time
Technology) project. This comprises single processor nodes connected by token ring and Ethernet
network. The scheduler supplies a range of scheduling algorithms, which can be selected as
desired. As part of the system, Scheduler 1-2-3 provides schedulability analysis for the system,
and can also generate synthetic workloads5.

The PERTS environment enables the simulation of applications, together with a number of
kernels (all based upon a defined virtual kernel)6. Applications are described using a simulation
language (based upon C++). The motivation of PERTS is the direct generation of application
code from the simulation tool rather than the experimentation with novel kernel designs and
scheduling approaches.
In summary we note that the Schedalyzer is aimed at a specific language; both it and
Scheduler 1-2-3 are aimed at specific hardware environments. PERTS removes the single
architecture and operating system constraint, although the scope for experimentation in terms of
scheduling theory and kernel design is limited. None of the above tools permit hardware
architectures, application software and kernel software to be described using the same simulation
language.

OVERVIEW OF STRESS
The STRESS environment comprises a number of separate tools contributing to the simulation of
hard real-time systems, built upon a single simulation language. This language is specifically
designed with respect to hard real-time systems, to enable the expression of hardware description,
kernel software and application software.
An example STRESS program is given in Figure 1. The hardware architecture comprises a
single processing node (node_1), which in turn contains a single processor (proc_1). The
processor contains a single binary semaphore (S0) and two periodic tasks (J0 and J1). Each task
has a specified period, deadline and offset. Hence, task J0 will be released at times 4, 19, 34,...
with respective deadlines 19, 34, 49,... Each task executes code which consumes processor time,
and locks and unlocks semaphores. The notation [n,m] indicates that the task should consume
between n and m (inclusive) units (also referred to as ticks) of processor time. In all cases in this
example, n and m are the same, to specify exact amounts of processor time; otherwise, a value is
chosen randomly in the range [n,m] (NB only such timing statements consume simulation, i.e.
processor, time). The instructions p(S0) and v(S0) lock and unlock (respectively) the semaphore
S0. Static priority scheduling is prescribed (via scheduler pripre) assuming deadline-monotonic
priority assignment (via order dma)7 and priority inheritance resource allocation8 (via inherit).
Further details of the STRESS language, including message passing, object types, construction of
kernel features etc., are given later in the paper.
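The timing behaviour quoted for J0 can be reproduced with a few lines of Python. This is a minimal sketch; the offset of 4, period of 15 and relative deadline of 15 are inferred from the release times and deadlines stated above, since Figure 1 is not reproduced here:

```python
def release_times(offset, period, n):
    """First n release times of a periodic task: offset + k*period."""
    return [offset + k * period for k in range(n)]

def absolute_deadlines(offset, period, rel_deadline, n):
    """Each release at time t must complete by t + rel_deadline."""
    return [t + rel_deadline for t in release_times(offset, period, n)]

# Values inferred for J0 from the timeline quoted in the text:
print(release_times(4, 15, 3))           # [4, 19, 34]
print(absolute_deadlines(4, 15, 15, 3))  # [19, 34, 49]
```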
After a STRESS program has been written, three other functions can be performed within
the STRESS environment:
(i) analyse task sets for feasibility given specified scheduling algorithms and
underlying kernel;
(ii) simulate the execution of such task sets in conjunction with the hardware
architecture and kernel;
(iii) display the output of the simulation in a useful and concise form.
These are illustrated in Figure 2, showing a STRESS session in progress. The following sections
give a brief overview of the front-end, feasibility analysis, execution simulator and display tool.

The STRESS Front-End


Starting the front-end of the STRESS environment produces the graphical interface shown in top-
left of Figure 2. Initially, the filename of the STRESS program is entered. Then, any of the
individual functions to analyse, simulate or display may be selected via buttons.
Feasibility Analysis
The feasibility analysis tool makes two main contributions to the STRESS environment:
(i) provision of information required by the simulator;
(ii) determination of whether the tasks specified in the STRESS program can
be guaranteed offline to meet their deadlines.
Point (i) reflects the fact that information required by the simulator is provided by the feasibility
analysis tool. For example, task priorities are set (the example STRESS program in Figure 1
requests that priorities are assigned in a deadline-monotonic manner). If the Priority Ceiling
Protocol8 is being used for resource control (in conjunction with static priority pre-emptive
scheduling), for each semaphore in the system, the simulator needs to know the priority of the
highest priority task that accesses that semaphore. This is determined as a side-effect of feasibility
analysis. The second point above refers to the main function of analysis of hard real-time systems,
that of determining feasibility.
Feasibility analysis is initiated by pressing the Analyse button in the front-end (see Figure
2). This causes a window to appear (see Figure 3) which enables the user to choose one of a
number of available analyses (we return to available analyses later in the paper). Once an analysis
has been selected, it is initiated via the Start button. Output from the analysis appears in a separate
window. For example, the STRESS program given in Figure 1 will result in output given in
Figure 4: the exact (i.e. sufficient and necessary) feasibility analysis reports that the task set can
be feasibly scheduled. If the analysis reports failure, details of the tasks that have not passed the
test are given.
Execution Simulator
The execution simulator is initiated by selecting the Simulate option on the front-end (see Figure
2). This brings up the STRESS Simulator window, illustrated at the top right of Figure 2. By
default, simulation will run for 100 ticks. If this is not adequate, different values can be entered in
the execution start and stop boxes. Other options include the logging of sporadic task release
times, executing all processes at their best or worst case execution times.
The simulation is started by selecting the simulate option in the STRESS Simulator
window. The simulation results are written to a file for later use by the display tool. Essentially,
all events occurring in the simulation are recorded, with the contents encoded to keep the file
size reasonably small.
Display Tool
The display tool is run by selecting the Display button on the front-end (see Figure 2). This
creates a new window, as shown at the bottom of Figure 2 (displaying the simulation of the
STRESS program in Figure 1). The display tool shows the execution of each task, with time
running horizontally. Boxes represent task execution, low-level circles task release, high-level
circles task completion, and vertical arrows a task's deadline. Whilst a task is runnable
but pre-empted, a horizontal line is drawn along the timeline (for example, between times 5 and 7
of task J0 in the figure). Whilst a task is blocked, a horizontal line is drawn above the timeline (for
example, between times 7 and 8 of task J0 in the figure). There are several other symbols which do
not appear in the example, but which will be explained later in the paper.

THE STRESS LANGUAGE


One important feature of the STRESS environment is the language and its inherent ability to
express features required in hard real-time systems. This section describes the STRESS language
in detail, and the structure and form of systems which can be described in the language.
The basic architecture of systems which STRESS can simulate is intended to be as broad
as possible, and to impose a minimum of restrictions. The restrictions which inevitably exist in real
systems, for instance network or data bus bandwidth, can be modelled within the simulation
language, and are not enforced on the user.
The underlying structure is shown in Figure 5. Starting from the outermost level, a
STRESS system is a collection of processing nodes connected via a network. Nodes contain
mailboxes (for inter-processor communication), and processors. Processors contain tasks which
form the unit of concurrency in the system. Tasks are the only elements in the system that actually
execute and consume processor time.
Processors also contain mailboxes and semaphores (for task synchronisation). Also, both
nodes and processors may contain objects. Objects contain state and methods. Objects permit the
sharing of data and code. At the node level, this corresponds to shared memory within a
multiprocessor node. At the processor level, objects permit the modelling of shared logical
resources.
The following sections discuss some features of the STRESS language, with a full syntax
of the language given in Appendix A.
Networks, Mailboxes, Nodes And Processors
The nodes in a system are fully connected via a point-to-point network. This imposes few
limitations: any network topology can be simulated as a restriction upon total connectivity. Actual
communication over the network is represented by messages. A message is sent by a task to a
mailbox; and a task may receive a message from a mailbox. There are two restrictions; firstly, a
message which is sent over the network (i.e. from one node to another) may only be sent to a
mailbox at the node level; and secondly, a task may only receive from a node level mailbox on the
same node as it is executing. Mailboxes have an associated size, which by default is one, but
which can be explicitly declared in the STRESS program. The size represents the number of
messages which may be stored in the mailbox at any given time.
Each processing node contains one or more processors, and may also have objects,
semaphores and mailboxes which are global to that node. A wide variety of systems may be
constructed using these components. For instance, a single processor system has a single
processor on a single node; a set of uniprocessor nodes connected by a ring would be simulated as
a set of nodes (each with a single processor) with communication restricted to a ring within the
point-to-point network.
A simple example program illustrating the use of mailboxes and message passing between
nodes is shown in Figure 6. The system comprises two nodes, each with two processors. On the
first, a task periodically sends a message to a mailbox on the second node. There, the message
arrival triggers the execution of a task (via trigger mbox_2), which receives the message. When a
message is transmitted, a communication delay (in ticks) is specified, to represent the time taken
to cross the network. This is the significance of delay [4,8] in the send command. A delay is
chosen randomly (with uniform distribution) in the interval [4,8] inclusive, and the message will
not arrive at the mailbox for that time. There is a minimum delay time of one tick, except in one
special case which is noted in the next section. Also, a time-out may be associated with a
reception request, as by expiry 3 in the example. A token passing example is given later in this
paper.
Tasks
A task is the basic unit of concurrency. Tasks fall into three basic types; periodic, aperiodic and
sporadic. Periodic tasks are embedded in the periodic and endper keywords (likewise aperiodic
tasks in aperiodic ... endape and sporadic tasks in sporadic ... endspo). Depending on the task
type, various characteristics may be set (some of which are optional). For example, periodic tasks
require the values period and deadline to be set.
Aperiodic tasks can be released in one of three ways: by chance (i.e. chance 10 specifies
that there is a one-in-ten chance of release at a given tick); by message (trigger box77 indicates
that the task should be released when a message arrives in the mailbox box77); by event (event
startit causes the task to be released whenever the event startit occurs). An offset is interpreted to
mean that the task will definitely not execute until the offset time into the simulation. Sporadic
tasks are exactly as aperiodic tasks, except that arrival specifies a minimum time between
successive releases.
The keyword priority allows specific priorities to be assigned to tasks, where lower
numbers represent higher priorities; however, the analysis tool can specify its own priorities,
based on its evaluation of the task set.

Each task may have local variables and code which it executes. Local variables are either
scalar values or arrays, and can contain either integers or references to tasks, processors, etc.
Values stored in local variables are preserved between successive executions of tasks, with
variable initialisation occurring once, before the task ever executes. Two control constructs are
provided: if..then..else and loop (the else part is optional). The latter construct has associated
with it a constant integer which represents the maximum number of iterations. Examples of
variables and control statements are given in Figure 7.
Task types are provided to avoid repetitive declaration of code. For example, a network
interface process, common to many processors in the system, could be modelled as a task type,
with the task instantiated on each processor in the STRESS system, avoiding duplicate code.
Semaphores
Semaphores provide a basic method of implementing mutual exclusion between tasks on the same
processor or node. Semaphores located on a node can be accessed from any processor on the
node; a semaphore on a processor can only be accessed from that processor.
The basic mechanism is that of the Dijkstra semaphore, except that they may be initialised
so that up to n tasks may lock the semaphore at any one time. If a semaphore is fully locked when
a further task attempts to lock it, then that task becomes blocked on the semaphore. When a task
releases a semaphore, all tasks blocked on that semaphore become unblocked, and all will retry
the locking request. This allows arbitrary rescheduling to be performed when a semaphore is
released.
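The semaphore model described above can be sketched in Python. This is an illustrative reconstruction, not the simulator's implementation; note how a release hands back the whole waiting list, leaving the rescheduling decision open:

```python
class Semaphore:
    """n-ary semaphore sketch: up to n tasks may hold it at once; a
    release wakes *all* blocked tasks, which then retry the lock, so
    the scheduler is free to decide who succeeds."""
    def __init__(self, n=1):
        self.free = n            # remaining lock slots
        self.locking = []        # tasks currently holding the semaphore
        self.waiting = []        # tasks blocked on the semaphore

    def p(self, task):
        if self.free > 0:
            self.free -= 1
            self.locking.append(task)
            return True          # lock acquired
        self.waiting.append(task)
        return False             # task blocks

    def v(self, task):
        self.locking.remove(task)
        self.free += 1
        retriers, self.waiting = self.waiting, []
        return retriers          # all blocked tasks retry the lock

s = Semaphore(1)
s.p("J0")            # J0 locks the semaphore
s.p("J1")            # J1 blocks
print(s.v("J0"))     # ['J1'] -- J1 retries the lock
```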
It is also possible to explicitly block a task on a semaphore, even if the task is not
requesting the semaphore, or the semaphore is not fully locked. This is needed in order to
implement resource control schemes such as the Priority Ceiling Protocol 8.
Objects And Methods
Objects provide shared memory communication between tasks within a single node. An object
cannot be accessed from a task running on a different node. Objects are accessed via methods,
and can contain local variables and local semaphores.
Objects may be declared at the node level or at the processor level, the only difference
being the scope. An object which is declared at the node level can be accessed from any task on
any processor on that node; an object declared at the processor level can only be accessed from
tasks on that processor.
A method invocation is effectively a function or subroutine call; it may be passed
arguments, and may return a result. The method contains code in exactly the same manner as a
task, and can access its arguments, the local variables and local semaphores. A method of an
object at the node level can also access the mailboxes and semaphores declared on that node; a
method of an object declared at the processor level can, in addition, access the mailboxes and
semaphores declared on that processor.
Methods can be invoked from one another, with the restriction that recursion amongst
methods is not allowed. This is necessary in order that worst-case execution times can always be
bounded. The scoping rules are the same as for access to mailboxes and semaphores; a method of
an object at node level can access any method of any other object on that node, while a method of
an object at processor level can also access methods of objects on that processor.
Object types are provided to avoid repetitive declaration of code (in the same manner as
task types). For example, some kernel facilities could be modelled as methods within an object,
with the object instantiated on each processor in the STRESS system, avoiding duplicate code.

LANGUAGE EXTENSIONS FOR MODELLING KERNEL FACILITIES
At the lowest level, each processor is scheduled separately, and is not affected by events on other
processors, other than through semaphores and mailboxes. Note that, at the level of the STRESS
simulator, all scheduling is performed locally to each processor, and there is no global scheduling.
Each task has associated with it two priorities, the base priority baspri, and the effective
priority effpri. The base priority may be set in one of two main ways. Firstly, it may be set by use
of the priority keyword (within a STRESS program), setting the priority to a specific numerical
value. Secondly, it may be set by running the feasibility analyser; this will assign priorities based
on the scheduling algorithm which is to be used.
When the simulator is started, the effective priorities are set to the base priority values.
Then, at each tick, the task with the highest effective priority, which is free to run, will be
executed. If the effective priorities are left unchanged, the scheduling is effectively pre-emptive
static priority, with semaphore access occurring on a first-come first-served basis.
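A minimal sketch of this per-tick dispatching rule, assuming the convention stated earlier that lower numbers represent higher priorities (the task dictionaries are illustrative, not the simulator's data structures):

```python
def dispatch(tasks):
    """Per-tick dispatch sketch: run the runnable task with the
    highest effective priority (lowest effpri number)."""
    runnable = [t for t in tasks if t["runnable"]]
    if not runnable:
        return None              # processor idles this tick
    return min(runnable, key=lambda t: t["effpri"])

tasks = [
    {"name": "J0", "effpri": 2, "runnable": True},
    {"name": "J1", "effpri": 1, "runnable": True},
    {"name": "J2", "effpri": 0, "runnable": False},  # blocked
]
print(dispatch(tasks)["name"])   # J1
```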
Other kernel facilities can be placed inside an object, with methods made available for
application tasks to make kernel calls. More sophisticated processor and resource scheduling can
be achieved within a STRESS program itself. This is detailed in the following sections.
Additional Language Features
To enable schedulers and resource managers to be written within a STRESS program, a number
of additional language features are made available in terms of a number of state values
(associated with each task) which can be accessed by STRESS programs. For example, the wcet
value defines the maximum execution time of a task; the waiting and locking values define the tasks
blocked on a semaphore and the task locking a semaphore respectively. Also, a number of
variables are provided, for example, tick defines the current simulation tick; mynode provides the
name of the node on which a task is executing.
Finally, an additional construct (for) and keyword (of) are introduced to allow lists to be
scanned, as in expressions such as tasklist of myproc (the list of tasks on the local
processor) or waiting of sema (the list of tasks blocked on a semaphore).
Implementing Schedulers
As an example, a scheduler for the earliest-deadline algorithm will be shown9. Under this
algorithm, the task which is closest to its deadline (and free to run) will be executed. The code for
this is shown in Figure 8. The scheduler is implemented as a task, early, which runs on every tick,
but does not consume any processor time. When it runs, it can adjust the effective priority values,
and then terminate. This leaves the low-level scheduler free to run whichever other task is free to
run and has the highest effective priority. It is noted that schedulers that respond to events (such
as task release), and have non-zero overheads, can also be implemented within the STRESS
language.
It is necessary to arrange that the scheduler task executes before any other task, in order
that it can set the effective priorities correctly. This is managed by setting its priority to zero.
Provided that other tasks on the processor are not given priorities which are negative, the
scheduler will have the highest priority, and will execute first.
When the scheduler task runs, it scans the list of tasks on the processor. For each task,
other than the scheduler task itself, the effective priority is set to the next task deadline (relative to
the current time). The closer the deadline is to the current time, the smaller this value will be, and
hence the higher priority.
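The scheduler's per-tick action can be sketched as follows (an illustrative Python rendering of the logic in Figure 8, not the STRESS code itself):

```python
def edf_scheduler_tick(tasks, tick):
    """Sketch of the 'early' scheduler task's per-tick action: set
    each task's effective priority to its next absolute deadline
    relative to now, so the nearest deadline gets the smallest
    value, i.e. the highest priority."""
    for t in tasks:
        t["effpri"] = t["next_deadline"] - tick

tasks = [{"name": "A", "next_deadline": 30},
         {"name": "B", "next_deadline": 12}]
edf_scheduler_tick(tasks, tick=10)
# B is 2 ticks from its deadline, A is 20, so B runs first:
print(min(tasks, key=lambda t: t["effpri"])["name"])  # B
```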
Since we probably do not want to see the scheduler task when a simulation run is
displayed by the display tool, the task is marked hidden.
Libraries of schedulers may be created, with each file containing a single scheduler task (e.g. the
code in Figure 8). A shorthand may then be used for including a scheduler onto a processor. For
example, the code in Figure 1 contains scheduler pripre indicating the scheduler task of name
pripre should be included at this point.
Implementing Resource Management
This section describes how resource management algorithms can be implemented in the STRESS
language. Essentially, additional code may be executed before the in-built p and v operations. This
is structured as an object containing methods named p and v. Consider Figure 9 (containing the
code for simple priority inheritance resource control8). The method p in object inherit will be
executed whenever a p operation is executed by a task. Then, the in-built code within the
simulator for p is executed. Similarly, a method for the v operation is available.
In Figure 9, the p method checks whether the semaphore is locked; if it is, then cnt of sema
will be zero. If it is locked, then the list of tasks locking the semaphore is scanned, and, if any has
a lower effective priority than the calling task (which is about to become blocked) then its
effective priority is increased to that of the caller. Hence, if the caller is blocked by a lower
priority task, then the effective priority of that task will be raised to that of the caller. Finally, on
return from the routine, the actual semaphore operation (i.e. in-built p) is invoked.
The corresponding v method initially resets the caller's effective priority to its base priority.
It then scans all semaphores locked by the caller (other than the semaphore which is being
released), and each of the tasks blocked on such semaphores. The caller's effective priority is
raised if any blocked task has a higher effective priority.
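The p side of this protocol can be sketched in Python. This is an illustrative reconstruction of the logic of Figure 9; the field names effpri, cnt and locking mirror the state values described above:

```python
def inherit_p(caller, sema):
    """Priority-inheritance p sketch: if the semaphore is fully
    locked (cnt == 0), raise any lower-priority locker's effective
    priority to the caller's (lower number = higher priority).
    The in-built p operation then runs after this returns."""
    if sema["cnt"] == 0:
        for locker in sema["locking"]:
            if locker["effpri"] > caller["effpri"]:
                locker["effpri"] = caller["effpri"]

low = {"name": "L", "effpri": 5}
high = {"name": "H", "effpri": 1}
s0 = {"cnt": 0, "locking": [low]}   # L holds S0; H is about to block
inherit_p(high, s0)
print(low["effpri"])                # 1 -- L inherits H's priority
```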
Libraries of resource managers may be built up in a similar manner to schedulers. Also, a
shorthand is available for inclusion of a resource manager on a processor. For example, the line
resource inherit within a processor declaration would ensure that the resource manager given in
Figure 9 is included.

FEASIBILITY ANALYSIS
Within the context of the STRESS environment, feasibility analysis determines whether the
tasks in a STRESS program will meet their deadlines. Since many different models of scheduling
(and resource management) can be supported, provision for adding extra analysis modules for
new scheduling schemes is provided. This was illustrated in Figure 3, where the user was able to
select an appropriate analysis for the STRESS program. All feasibility analysis modules perform
initial task timing analysis, priority assignment and resource usage analysis.
For static priority scheduling a number of basic analyses are currently supported:
(i) the sufficient test of Liu et al9;
(ii) the sufficient and necessary test of Lehoczky et al10;
(iii) the sufficient test of Audsley et al11;
(iv) the sufficient and necessary test of Audsley et al11;
(v) the arbitrary priority assignment and sufficient and necessary test of
Audsley12.
To be appropriate for tests (i) and (ii), individual tasks must have deadlines equal to periods.
Tests (iii) and (iv) allow task deadlines to be less than their periods. Finally, (v) allows optimal
priority assignment across tasks with arbitrary timing constraints.
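Test (i) reduces to the well-known rate-monotonic utilisation bound, which can be sketched directly (an illustrative rendering, not the tool's code):

```python
def liu_layland_feasible(utilisations):
    """Sufficient (not necessary) test of Liu et al for rate-monotonic
    scheduling: n tasks with deadlines equal to periods are feasible if
    sum(C_i/T_i) <= n * (2**(1/n) - 1)."""
    n = len(utilisations)
    return sum(utilisations) <= n * (2 ** (1 / n) - 1)

# Two tasks: the bound is 2*(sqrt(2) - 1), roughly 0.828.
print(liu_layland_feasible([0.4, 0.3]))   # True  (U = 0.70)
print(liu_layland_feasible([0.5, 0.45]))  # False (U = 0.95)
```

A False result here does not imply infeasibility; the sufficient and necessary tests (ii), (iv) and (v) give an exact answer.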
The analyses are complicated by the incorporation of sporadic tasks. Tests (i) and (ii)
above (rate-monotonic scheduling) require extra periodic server tasks to be introduced13. These
tasks reserve time for sporadic tasks to execute. STRESS programs containing sporadic tasks
wishing to use rate-monotonic scheduling are required to provide server tasks.
For tests (iii), (iv) and (v) above, the underlying model (deadline less than or equal to
period for individual tasks) enables sporadic tasks to be incorporated directly, without server
tasks or extensions to the feasibility tests11.
Two forms of analysis for dynamic scheduling are provided, earliest deadline9 and least-
laxity14. The analysis for both schemes is identical. Both assume that task deadlines are equal to
periods.
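Under the stated assumption that deadlines equal periods, a standard utilisation condition applies for earliest-deadline scheduling on a single processor; a sketch:

```python
def edf_feasible(utilisations):
    """Sufficient and necessary test for earliest-deadline scheduling
    on one processor when deadlines equal periods: the total
    utilisation must not exceed one."""
    return sum(utilisations) <= 1.0

# A set that fails the rate-monotonic bound is fine under EDF:
print(edf_feasible([0.5, 0.45]))   # True  (U = 0.95)
print(edf_feasible([0.6, 0.5]))    # False (U = 1.10)
```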
One main form of resource allocation is provided, namely the Priority Ceiling Protocol8.
This has minimal effect upon the feasibility analyses above. The maximum blocking times
provided by the resource analysis are easily incorporated into all of the analyses: for feasibility
purposes, blocking time can be treated as additional computation time. Thus, whilst considering
the feasibility of an individual task, the worst-case execution time of that task is temporarily
increased by its worst-case blocking time.
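One standard way of carrying out such a calculation is response-time analysis, in which the blocking term is simply added to the computation time. The following Python sketch is illustrative and not necessarily the exact formulation used by the tests above:

```python
import math

def response_time(C, B, higher):
    """Worst-case response time iteration with blocking folded in as
    extra computation: R = C + B + sum(ceil(R/Tj) * Cj) over the
    higher-priority tasks. A real implementation would abandon the
    iteration once R exceeded the task's deadline."""
    R = C + B
    while True:
        interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj in higher)
        R_next = C + B + interference
        if R_next == R:
            return R
        R = R_next

# Task with C=3, blocking B=1, one higher-priority task (C=2, T=10):
# R = 3 + 1 + ceil(R/10)*2 converges to 6.
print(response_time(3, 1, [(2, 10)]))   # 6
```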
The feasibility phase provides a detailed assessment of the tasks in a program. Information
from previous phases, worst and best-case execution times, priority, resource usage and blocking
times, are combined with the results of the feasibility analysis to give the user a detailed system
analysis. If tasks are not feasible, the analysis guides the user toward which resource accesses or
task execution times need to be changed to enable the system to become feasible. When the
system is feasible, the analysis provides likely processor utilisation estimations for that system
(both worst and best-case).

EXPERIENCES WITH STRESS


In this section we relate a number of experiences with the STRESS environment. Three separate
simulations are described, showing the ability to develop new feasibility theory, model
applications and experiment with novel kernel designs.
Developing New Feasibility Theory: The Token Ring
Figure 10 gives the STRESS code for simulating a token ring network, its interface hardware and
device drivers. The simulation of token ring networks has led to an understanding of their
behavioural characteristics and their impact on feasibility of systems using token rings for inter-
task communication. The effects of changing buffer sizes, message sizes, token rotation times,
packet transmission times and task timing characteristics have been studied, enabling feasibility
analysis of token ring networks to be developed15. In a similar vein, TDMA networks have also
been studied, culminating in accurate feasibility analysis15.
Within Figure 10, a token ring interface object type (ringface) is described together with
an interface task type (ringfacetask). These form the representation of the hardware (token ring
network and interface hardware). Two methods are provided: transmit, which registers a message transmission request, and got_tok, which is invoked when a token arrives. The former (called by application tasks) places messages into a queue. The latter (called by the interface task) checks whether there are any outstanding message transmission requests, transmitting if there are. The ring interface task (type) ringfacetask loops, each time waiting for the token to arrive (using got_tok); when it does, it checks for outstanding message transmission requests.
On each node in the system an object of type ringface is instantiated. Also, a processor is declared (e.g. ringproc_1) on which a task of type ringfacetask is instantiated (i.e. simulating an intelligent network interface processor). A second processor is declared on each node to act as an applications processor.
applications processor. Tasks resident on this processor send messages to other such tasks on the
other nodes in the system. Delays (i.e. token rotation) across the network are modelled by
assuming that if no messages are to be sent by a node when holding the token, the time for the
token to arrive at the next node is 3 time units; if a message is sent (as well as the token) 122 time
units are required.
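Under these assumptions the worst-case token rotation time is easy to bound; a small Python sketch (a hypothetical helper, using the delay figures above):

```python
EMPTY_HOP = 3    # time units for the token to pass an idle node
MSG_HOP = 122    # time units when a node also transmits a message

def rotation_time(sending):
    """Time for one full token rotation; sending holds one flag
    per node, True if that node has a message queued."""
    return sum(MSG_HOP if s else EMPTY_HOP for s in sending)

# Three nodes, one with a pending message: 122 + 3 + 3.
print(rotation_time([True, False, False]))  # 128
```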
In Figure 11, the results of a simulation involving three nodes (node_1, node_2 and node_3) connected via a token ring are given, although only node_1 and node_2 are shown in the
figure. The figure shows the token passing (in the form of a message) from ringface_1 to
ringface_2 (and subsequently onto ringface_3) with the other tasks on the nodes executing and
placing messages into the transmit queue. Messages are shown arriving at a task by an arrow
below the timeline; messages sent by a task appear as arrows above (e.g. ringface_1 receives a
message at time 1, sends a message at time 3).
Application Simulation: The Mine Drainage Case Study
An oft-cited example of a hard real-time system (it contains many characteristics which typify hard
real-time systems) is the software necessary to manage a pump control system in a mining
environment16,17. The system is used to pump water, which collects in a sump at the bottom of
the shaft, to the surface.
The system contains two stations, one controlling the pump, the other monitoring the
environment. The former controls the level of water in the sump, turning a pump on if the water
level is too high. The environment monitor detects methane (CH4) levels in the sump: if a
threshold level is surpassed, the pump is unsafe to operate until the level falls again.
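The resulting interlock, pump on only when the water level is high and the methane level is safe, condenses to a single predicate; a Python restatement (hypothetical names, not the STRESS code of Figure 12):

```python
def pump_should_run(water_high, methane_high):
    """The mine pump safety rule: run the pump only when the
    water level is high and no methane lockout is in force."""
    return water_high and not methane_high

print(pump_should_run(True, False))  # True: pump switched on
print(pump_should_run(True, True))   # False: methane lockout
```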
In Figure 12 a STRESS implementation of this system is given. Two objects are defined, pump and environment, which model the pump station and the environment station respectively. The
pump object allows the current status of the pump (ON or OFF) and water level (HIGH or LOW)
to be stored. Likewise, the environment station allows the status of methane level to be stored.
The sensor detecting high methane levels is modelled as the sporadic task interrupt. This sends a message which causes the interrupt handler to execute (the sporadic task handler is triggered by a message arriving in mailbox handler_start); the handler sets the status of the methane level and turns off the pump. A periodic task (ch4_sensor) executes to simulate the methane level falling back to a safe level. The task pump_control turns on the pump if the water level is high (provided the methane level is not high), and turns off the pump if the water level is low. The water level is set by the task water_level.
In Figure 13 a display of the simulation of the mine pump STRESS program is given. The
interrupt for methane level high (interrupt) executes at time 3, causing handler to execute at time
4. Whilst executing, handler becomes blocked due to ch4_sensor executing a critical region (i.e.
method environment::get_status). By priority inheritance resource control8, ch4_sensor executes
the remainder of its critical region, freeing the resource and permitting handler to execute to
completion. Tasks water_level and pump_control can be seen to execute periodically.
The simulation of the mine pump drainage application has enabled the design of the
system to be validated formally, by feasibility analysis of the STRESS program, and informally via
the display of the simulation, noting that no deadlines are missed. Modifications to the system
could be made based upon the behaviour seen in Figure 13. For example, the periods of the pump_control and water_level tasks could be reduced to increase the frequency of pump control.
New Scheduler Design: Dual Priority Pre-Emptive
The dual priority pre-emptive scheduling approach was developed using the STRESS
environment18. It prescribes that each task is assigned two (fixed) priorities. Individual tasks commence execution using one priority, changing to the other at a fixed point during their execution (i.e. one change of priority per task execution).
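The rule can be expressed as a function of the time since a task's release; a Python sketch (illustrative only, with hypothetical parameters; it mirrors the per-tick scheduler shown in Figure 14):

```python
def effective_priority(base_pri, promoted_pri, release_time, now,
                       switch_offset):
    """Dual-priority rule: a task runs at its base priority until
    switch_offset time units after each release, then switches to
    its promoted priority (numerically lower = more urgent here)."""
    if now - release_time >= switch_offset:
        return promoted_pri
    return base_pri

# A task released at time 0 with base priority 6, promoted to 4
# ten units after release (cf. task_2 of Figure 14).
print(effective_priority(6, 4, 0, 5, 10))   # 6: still at base
print(effective_priority(6, 4, 0, 12, 10))  # 4: promoted
```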
Figure 14 contains a STRESS program with the dual priority scheduler declared within it:
periodic task dual_priority_scheduler. The scheduler task executes every tick, changing the
priorities of application tasks at fixed points within their execution. For example, task_2
commences execution at priority 6, but 10 time units after each release of the task, the effective
priority of the task (effpri) is changed to 4.
The results of simulating the STRESS program in Figure 14 are given in Figure 15. For
example, task_3 commences execution with a lower priority than task_1 and is pre-empted by the
latter task at time 16. However, at time 17 the priority of task_3 changes to be higher than task_1
and so pre-empts the latter task and executes to completion. It is noted that if the tasks had only
one fixed priority then some task deadlines would be missed.

CONCLUSIONS
The STRESS simulator provides a flexible environment in which various scheduling and resource
management algorithms can be evaluated and tested. It also provides a means of evaluating and
testing task sets and kernel implementations corresponding to actual applications. The
combination of a flexible simulated architecture, and a simple but fairly powerful programming
language allows its application to a wide range of target architectures and systems. The flexibility of STRESS has been illustrated by the development of new feasibility theory for token ring networks, the modelling of a standard real-time application and the development of a new scheduling policy (dual priority). This flexibility is not available in the other simulators discussed previously3,4,6.
A number of possibly desirable features are not currently implemented. These include
clock drift at the node and at the processor level, and mode changes. However, these features
could be added without much difficulty. Finally, it is hoped that the STRESS environment will be
extended to enable code generation (or translation to another high-level language, e.g. C++) for
executing applications and kernels upon a testbed currently under development.
Having gone through many incarnations, STRESS is implemented upon two main
platforms: Sun 3 workstations and PC-compatible Windows machines. The STRESS environment
is reasonably mature and is currently being used by many academic and industrial research
institutions.

ACKNOWLEDGEMENTS
This work has been supported, in part, by the Information Engineering Advanced Technology
Programme grant GR/F 35920/4/1/1214 and by SERC grant number GR/H 39611.

APPENDIX A: STRESS LANGUAGE SYNTAX
This appendix describes the syntax of STRESS programs in EBNF. For brevity, expressions are
not defined, but contain the standard arithmetic and relational operators with the usual precedence
relationships, and may also include method invocation. Keywords are given in bold typeface, with terminal symbols in capitals (e.g. NAME, NUMBER).

system ::= system [ objtype ]* [ tasktype ]* node* endsys


objtype ::= objtyp TYPENAME objitem* endobj
tasktype ::= tasktyp task
node ::= node NODENAME nodeitem* endnod
nodeitem ::= semadecl | mboxdecl | objdecl | processor
processor ::= proc PROCNAME procstyle procitem* endpro
procstyle ::= [ordertype | schedulertype | resourcetype]*
ordertype ::= order ORDERNAME
schedulertype::= scheduler SCHEDULERNAME
schedulertype::= scheduler task
resourcetype ::= resource RESOURCENAME
resourcetype ::= resource objdecl
procitem ::= semadecl | objdecl | taskdecl
semadecl ::= semaphore SEMANAME [ = NUMBER ]
mboxdecl ::= mailbox MAILBOXNAME [ = NUMBER ]
objdecl ::= object OBJNAME objitem* endobj
objdecl ::= object type TYPENAME NAME
objitem ::= semadecl | vardecl | methdecl
vardecl ::= variable VARNAME [ = NUMBER ]
methdecl ::= method METHODNAME "(" parlist ")" statement* endmet
parlist ::= | VARNAME [ "," VARNAME ]*
taskdecl ::= task | task type TYPENAME NAME
task ::= pertask | sportask | apertask
pertask ::= periodic TASKNAME period NUMBER deadline NUMBER peropt* vardecl*
except* statement* endper
peropt ::= offset NUMBER | priority NUMBER
sportask ::= sporadic TASKNAME arrivaltype aperstart aperopt* vardecl* except*
statement* endspo
arrivaltype ::= arrival NUMBER
apertask ::= aperiodic TASKNAME aperstart aperopt* vardecl* except* statement*
endape
aperstart ::= chance NUMBER | trigger MAILBOXNAME | event EVENTNAME
aperopt ::= offset NUMBER | priority NUMBER | deadline NUMBER
except ::= except ( overrun | budget | failure | expiry ) block
statement ::= timing | vop | pop | send | recv | fail | block | ifthen | loop |
forloop | returns | assign | calls | output
timing ::= "[" NUMBER "," NUMBER "]"
pop ::= p "(" SEMANAME ")"
vop ::= v "(" SEMANAME ")"
send ::= send expr to NODENAME "." MAILBOXNAME
delay "[" NUMBER "," NUMBER "]"
recv ::= recv MAILBOXNAME to VARNAME [ expiry NUMBER ]
fail ::= fail NUMBER
block ::= "{" statement [ statement ]* "}"
ifthen ::= if expr then statement [ else statement ]
loop ::= loop expr max NUMBER statement
forloop ::= for VARNAME in VARNAME [of VARNAME] max NUMBER statement
returns ::= return expr
assign ::= VARNAME ":=" expr
calls ::= OBJNAME "::" METHODNAME "(" explist ")"
explist ::= | expr [ "," expr ]*
output ::= output expr

REFERENCES
1. H. Kopetz, A. Damm, C. Koza, M. Mulazzani, W. Schwabl, C. Senft and R. Zainlinger,
"Distributed Fault-Tolerant Real-Time Systems: The Mars Approach", IEEE Micro, pp.
25-40 (February 1989).
2. C. D. Locke, "Software Architecture for Hard Real-Time Applications: Cyclic
Executives vs. Fixed Priority Executives", Journal of Real-Time Systems, 4(1), pp. 37-53
(March 1992).
3. A. D. Stoyenko, "A Schedulability Analyser for Real-Time Euclid", Proc. 8th IEEE
Real-Time Systems Symposium, pp. 218-227 (December 1987).
4. H. Tokuda and C. W. Mercer, "ARTS: A Distributed Real-Time Kernel", ACM
Operating Systems Review Special Issue, pp. 29-53 (1989).
5. H. Tokuda and M. Kotera, "A Real-Time Tool Set for the ARTS Kernel", Proc. 9th
IEEE Real-Time Systems Symposium, pp. 289-299 (December 1988).
6. J. W. S. Liu, J. L. Redondo, Z. Deng, T. S. Tia, R. Bettati, A. Silberman, M. Storch, R.
Ha and W. K. Shih, "PERTS: A Prototyping Environment for Real-Time Systems",
Proc. 14th IEEE Real-Time Systems Symposium, pp. 184-188 (December 1993).
7. J. Y. T. Leung and J. Whitehead, "On the Complexity of Fixed Priority Scheduling of
Periodic Real-Time Tasks", Performance Evaluation (Netherlands), 2(4), pp. 237-250
(December 1982).
8. L. Sha, R. Rajkumar and J. P. Lehoczky, "Priority Inheritance Protocols: An Approach
to Real-Time Synchronisation", IEEE Trans. on Computers, 39(9), pp. 1175-1185
(September 1990).
9. C. L. Liu and J. W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard
Real-Time Environment", JACM, 20(1), pp. 46-61 (1973).
10. J. P. Lehoczky, L. Sha and Y. Ding, "The Rate-Monotonic Scheduling Algorithm: Exact
Characterisation and Average Case Behaviour", Proc. 10th IEEE Real-Time Systems
Symposium, pp. 166-171 (December 1989).
11. N. C. Audsley, A. Burns, M. F. Richardson and A. J. Wellings, "Hard Real-Time
Scheduling: The Deadline-Monotonic Approach", Proc. 8th IEEE Workshop on Real-
Time Operating Systems and Software (May 1991).
12. N. C. Audsley, "Flexible Scheduling of Hard Real-Time Systems", DPhil. Thesis, Dept.
Computer Science, University of York, UK (August 1993).
14. L. Sha, R. Rajkumar and J. P. Lehoczky, "Aperiodic Task Scheduling for Hard Real-
Time Systems", Journal of Real-Time Systems, 1, pp. 27-69 (1989).
15. K. Tindell and J. Clark, "Holistic Schedulability Analysis for Distributed Hard Real-Time
Systems", Euromicro Journal (Special Issue on Parallel Embedded Real-Time Systems)
(February 1994).
16. J. Kramer, J. Magee, M. Sloman and A. M. Lister, "CONIC: An Integrated Approach to
Distributed Computer Control Systems", IEE Proceedings (Part E), 130(1), pp. 1-10
(1983).
17. A. Burns and A. J. Wellings, "Real-Time Systems and Their Programming Languages",
pub. Addison-Wesley (1989).
18. A. Burns and A. J. Wellings, "Dual Priority Assignment: A Practical Method for
Increasing Processor Utilisation", Proc. 5th Euromicro Workshop on Real-Time
Systems, pp. 48-55 (1992).

system
node node_1
processor proc_1
order dma
scheduler pripre
resource inherit

semaphore S0

periodic J0
period 15 deadline 15 offset 4
[1,1] p(S0) [1,1] v(S0) [1,1]
endper

periodic J1
period 20 deadline 20 offset 0
[2,2] p(S0) [4,4] v(S0) [1,1]
endper

endpro
endnod
endsys
Figure 1: Simple STRESS Program.

Figure 2: The STRESS Environment.

Figure 3: Selecting an Analysis.

Exact Deadline Monotonic Analysis Selected.
Analysis starting ....
Exact Deadline Monotonic Analysis : proc_1@node_1 : Passes.
analysis returning.

Figure 4: Part of the Analysis Output.

[Diagram: a network comprises nodes; each node contains processors, objects (with methods, semaphores and variables) and mailboxes; each processor contains tasks, semaphores, variables and mailboxes.]

Figure 5: Architecture of Simulated Systems.

system
node node_1
processor proc_1

periodic task_1
period 10
offset 0
deadline 10
[2, 3]
send 1 to node_2.mbox_2
delay [4,8]
[2, 2]
endper

endpro
endnod

node node_2
mailbox mbox_2
processor proc_2

sporadic task_2
arrival 4
deadline 4
trigger mbox_2
variable var_2
[2, 3]
recv mbox_2 to var_2
expiry 3
endspo

endpro
endnod
endsys

Figure 6: Message Communication.

periodic task1
period 10
deadline 5
variable var
variable tri
if tri > 100 then
tri := 0
var := 10
loop var > 0 max 20
{ tri := tri + var * var
var := var - 1
}
endper
Figure 7: Simple Task.

scheduler periodic early
period 1
deadline 0
offset 0
priority 0
hidden
variable task
for task in tasklist of myproc max 999999
if task != mytask
then effpri of task := deadline of task
endper
Figure 8: Earliest Deadline Scheduler.

resource object inherit
variable locker
variable waiter
variable myused

method p (sema)
if cnt of sema = 0 then
for locker in locking of sema max 999999
{ if effpri of locker > effpri of mytask then
effpri of locker := effpri of mytask
}
endmet

method v (sema)
effpri of mytask := baspri of mytask
for myused in locking of mytask max 999999
{ if myused != sema then
for waiter in waiting of sema max 999999
{ if effpri of waiter < effpri of mytask then
effpri of mytask := effpri of waiter
}
}
endmet
endobj
Figure 9: Priority Inheritance Resource Management.

system

objtyp ringface
variable dest[32] variable data[32]
variable nmsg = 0 variable idx

method transmit (tobox, value)
if nmsg >= 32 then fail 100
dest[nmsg] := tobox
data[nmsg] := value
nmsg := nmsg + 1
endmet

method got_tok ()
if nmsg > 0 then
{ send data[0] to dest[0] delay [12,12]
idx := 1
loop idx <= nmsg max 999
{ dest[idx - 1] := dest[idx]
data[idx - 1] := data[idx]
idx := idx + 1
}
nmsg := nmsg - 1
[2,2]
return 1
}
return 0
endmet
endobj

tasktyp periodic ringfacetask
period 99999 deadline 99999
priority 1
variable dummy

if mynode = node_1 then
send 0 to node_1.tokb delay [0,0]
loop 1 max 99999
{ recv tokb to dummy
[2,2]
if ringface::got_tok ()
then
send 0 to next::next_box ()
delay [12,12]
else
send 0 to next::next_box ()
delay [3, 3]
}
endper

node node_1
mailbox tokb mailbox box_1
mailbox box_2

object type ringface ringface

processor ringproc_1
object next
method next_box ()
return node_2.tokb
endmet
endobj
task type ringfacetask ringfacetask_1
endpro

processor proc_1
sporadic task_1
arrival 80 chance 40
deadline 20 priority 1
[2,2]
ringface::transmit (node_2.box_1, 0)
endspo
sporadic task_2
arrival 80 chance 40
deadline 20 priority 2
[2,2]
ringface::transmit (node_3.box_1, 0)
endspo
endpro
endnod

node node_2
mailbox tokb mailbox box_1
mailbox box_2

object type ringface ringface

processor ringproc_2
object next
method next_box ()
return node_3.tokb
endmet
endobj
task type ringfacetask ringfacetask_2
endpro

processor proc_1
sporadic task_1
arrival 80 chance 40
deadline 20 priority 1
[2,2]
ringface::transmit (node_1.box_1, 0)
endspo
sporadic task_2
arrival 80 chance 40
deadline 20 priority 2
[2,2]
ringface::transmit (node_3.box_1, 0)
endspo
endpro
endnod

node node_3
mailbox tokb mailbox box_1
mailbox box_2

object type ringface ringface

processor ringproc_3
object next
method next_box ()
return node_1.tokb
endmet
endobj
task type ringfacetask ringfacetask_3
endpro

processor proc_1
sporadic task_1
arrival 80 chance 40
deadline 20 priority 1
[2,2]
ringface::transmit (node_1.box_1, 0)
endspo
sporadic task_2
arrival 80 chance 40
deadline 20 priority 2
[2,2]
ringface::transmit (node_2.box_1, 0)
endspo
endpro
endnod

endsys

Figure 10: Token Ring Implementation.

Figure 11: STRESS Display of Token Ring Simulation.

system
node node_1
mailbox handler_start

processor proc_1
order program
resource inherit
scheduler pripre

object pump
semaphore mutex_pump = 1
semaphore mutex_level = 1
variable status = 0
variable water_level = 0
variable tmp1
variable tmp2

method get_status ()
p (mutex_pump)
[1, 1]
tmp1 := status
v (mutex_pump)
return tmp1
endmet

method switch (newval)
p (mutex_pump)
if newval = 1 then
{ [4, 5]
status := 1
}
else
{ [2, 3]
status := 0
}
v (mutex_pump)
endmet

method get_level ()
p (mutex_level)
[1, 1]
tmp2 := water_level
v (mutex_level)
return tmp2
endmet

method set_level (newlevel)
p (mutex_level)
[2, 3]
water_level := newlevel
v (mutex_level)
endmet
endobj

object environment
semaphore mutex = 1
variable status = 1
variable tmp

method get_status ()
p (mutex)
[1, 2]
tmp := status
v (mutex)
return tmp
endmet

method set_status (newstatus)
p (mutex)
[2, 3]
status := newstatus
v (mutex)
endmet
endobj

sporadic interrupt
arrival 30 chance 1 deadline 5
offset 2 priority 1
send 0 to handler_start
delay [0,0]
endspo

sporadic handler
arrival 30 trigger handler_start
deadline 20 priority 2
variable tmp = 0
[1, 2]
recv handler_start to tmp
environment::set_status (1)
pump::switch (0)
endspo

periodic ch4_sensor
period 40 deadline 25 priority 3
[1, 2]
if environment::get_status() = 1
then
{ [1, 2]
if rand_10 % 5 = 1 then
environment::set_status (0)
}
endper

periodic water_level
period 50 deadline 50 priority 4
[1, 2]
if rand_5 % 10 = 1 then
pump::set_level (1)
endper

periodic pump_control
period 50 deadline 50 offset 25
priority 5
[1, 2]
if environment::get_status() = 0
then
{ [2, 3]
if pump::get_status() = 0 &&
pump::get_level() = 1
then
pump::switch (1)
}
endper
endpro

endnod
endsys

Figure 12: Mine Pump Implementation.

Figure 13: STRESS Display of Mine Pump Simulation.

system
node node_1
processor proc_1

order program
scheduler periodic dual_priority_scheduler
period 1 deadline 0 offset 0 priority 0 hidden
if tick % 20 = 0 then effpri of task_2 := baspri of task_2
if tick % 20 = 10 then effpri of task_2 := 4
if tick % 28 = 0 then effpri of task_3 := baspri of task_3
if tick % 28 = 17 then effpri of task_3 := 3
if tick % 56 = 0 then effpri of task_4 := baspri of task_4
if tick % 56 = 50 then effpri of task_4 := 2
endper
periodic task_1
period 16 deadline 16 offset 0 priority 5
[4,4]
endper
periodic task_2
period 20 deadline 20 offset 0 priority 6
[5,5]
endper
periodic task_3
period 28 deadline 28 offset 0 priority 7
[11,11]
endper
periodic task_4
period 56 deadline 56 offset 0 priority 8
[6,6]
endper
endpro
endnod
endsys

Figure 14: Dual Priority Scheduler.

Figure 15: STRESS Display of Dual Priority Scheduler.

