STRESS: A Simulator for Hard Real-Time Systems
SUMMARY
The STRESS environment is a collection of CASE tools for analysing and
simulating the behaviour of hard real-time safety-critical applications. It is
primarily intended as a means by which various scheduling and resource
management algorithms can be evaluated, but can also be used to study
the general behaviour of applications and real-time kernels. This paper
describes the structure of the STRESS language and its environment, and gives examples of their use.
INTRODUCTION
Increasingly, computer systems are being used in applications where a failure of the system can
result in loss of life or catastrophic damage. Consequently, the need has arisen for a theoretical
basis and practical methodology by which the correct design and construction of such systems can
be achieved. These systems are often referred to as hard real-time systems, where real-time
reflects the fact that they must directly interact with a changing physical environment and hard
refers to the fact that at least some system functions must be performed within specific timing
constraints.
For hard real-time systems, the verification of the correct and safe functioning of the
system, specifically the guaranteed meeting of timing requirements, is of paramount importance.
Past engineering practices have dictated that systems are verified on a testbed: arguments
regarding the timeliness of the system are established by using test results. This approach is not
ideal. Typically, systems fail, in a timing sense, in very constrained circumstances, with the
average-case, as often seen on the testbed, performing adequately.
To counter this problem, much research has been concerned with developing design
methodologies and feasibility theory to ensure timing requirements can be verified offline. A
number of problems are apparent with this theoretical approach:
• if timing requirements cannot be met, little or no guidance is given as to how the system design and/or implementation may be modified so that they are met;
• design methodologies and scheduling theories do not, in the main, cater for
the effects of actual hardware, i.e. network controllers, DMA, interrupts,
sensors and actuators etc.;
• the timing characteristics of the kernel, and its effect on the application tasks,
are often ignored;
• tools are required to validate new scheduling theory.
Whilst theoretical solutions are required for all the above problems, simulation of both application
and kernel software on various architectures would aid the engineering of hard real-time systems.
For example, by observing where timing constraints are violated, systems engineers are guided
towards the software requiring modification.
The remainder of this paper describes STRESS, a general purpose simulation environment
for hard real-time systems. It comprises a single generic language for architecture description,
kernel software and application software; analysis facilities (within the bounds of current scheduling theory) for kernel and application software; and simulation facilities, incorporating graphical display of simulation results. STRESS can be used in a number of different ways,
including the construction and simulation of applications and kernels for the hard real-time
systems domain.
Background
Hard real-time systems are generally viewed as comprising a set of tasks which, when invoked (or released), must complete their execution before a specified deadline. Tasks may be released in one of three general ways. Periodic tasks are released at regular intervals. Sporadic tasks are released at variable intervals, although there is a minimum time between successive releases of the task. Aperiodic tasks can be released at any time, with no minimum time between successive releases.
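The three release models can be sketched as simple time-line generators (an illustrative Python sketch; the function names and the random bounds are ours, not part of STRESS):

```python
import random

def periodic_releases(period, offset=0, n=5):
    """Release times of a periodic task: offset, offset + period, ..."""
    return [offset + k * period for k in range(n)]

def sporadic_releases(min_gap, n=5, jitter=10):
    """Releases at variable intervals, separated by at least min_gap ticks."""
    t, out = 0, []
    for _ in range(n):
        out.append(t)
        t += min_gap + random.randint(0, jitter)
    return out

def aperiodic_releases(n=5, max_gap=10):
    """Releases with no minimum separation (successive gaps may be zero)."""
    t, out = 0, []
    for _ in range(n):
        out.append(t)
        t += random.randint(0, max_gap)
    return out

print(periodic_releases(15, offset=4))  # [4, 19, 34, 49, 64]
```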
Feasibility analysis determines whether a set of tasks will meet their deadlines at run-time.
If analysis indicates that the deadlines of tasks will be met, those tasks are termed guaranteed.
Conventionally, such analysis occurs offline, before a system executes. Obviously, aperiodic tasks cannot be guaranteed, since in the worst case they may place an infinite load on the processor.
A common approach to the implementation of hard real-time systems is to use a cyclic executive, where tasks are executed according to a predefined schedule which is held in the system and repeated indefinitely1; the execution order is calculated offline. Such an approach has a number of problems2: schedules are inflexible and may become prohibitively long. More flexible approaches include:
• static priority scheduling: all tasks are assigned a priority offline, with the highest-priority runnable task executed at any time.
• dynamic priority scheduling: tasks are assigned priorities at run-time, with
those priorities able to change dynamically.
For both approaches, a priority pre-emptive dispatcher is assumed (i.e. the highest priority
runnable task executes at any point in time).
Within hard real-time systems, tasks are (conventionally) able to interact in a number of
ways. The provision of semaphores, to enable mutually exclusive access to shared resources (i.e.
synchronous), and mailboxes, to enable message-based communication (i.e. asynchronous) allows
most methods of interaction to be modelled.
The STRESS environment permits scheduling via dynamic or static priorities with both
synchronous and asynchronous inter-task communication.
Existing Simulation Systems
Conventionally, hard real-time system simulators are individually designed for a specific
architecture and application. For example, Stoyenko describes a schedulability analyser
("schedalyzer") for the Real-Time Euclid language3. This specifies a set of tasks, each of which is
executed periodically, or at a pre-defined time. Distributed systems are supported with single-
processor nodes communicating via mailbox memory accessed over a parallel data bus, with
processes scheduled using the earliest deadline algorithm. The schedulability analysis is tied
specifically to the Real-Time Euclid language, its run time system, and the hardware on which it is
run.
The ARTS kernel4 has been developed as a testbed for the ART (Advanced Real-Time
Technology) project. This comprises single processor nodes connected by token ring and Ethernet
network. The scheduler supplies a range of scheduling algorithms, which can be selected as
desired. As part of the system, Scheduler 1-2-3 provides schedulability analysis for the system,
and can also generate synthetic workloads5.
The PERTS environment enables the simulation of applications, together with a number of
kernels (all based upon a defined virtual kernel)6. Applications are described using a simulation
language (based upon C++). The motivation of PERTS is the direct generation of application
code from the simulation tool rather than the experimentation with novel kernel designs and
scheduling approaches.
In summary we note that the Schedalyzer is aimed at a specific language; both it and
Scheduler 1-2-3 are aimed at specific hardware environments. PERTS removes the single
architecture and operating system constraint, although the scope for experimentation in terms of
scheduling theory and kernel design is limited. None of the above tools permit hardware
architectures, application software and kernel software to be described using the same simulation
language.
OVERVIEW OF STRESS
The STRESS environment comprises a number of separate tools contributing to the simulation of
hard real-time systems, built upon a single simulation language. This language is specifically
designed with respect to hard real-time systems, to enable the expression of hardware description,
kernel software and application software.
An example STRESS program is given in Figure 1. The hardware architecture comprises a
single processing node (node_1), which in turn contains a single processor (proc_1). The
processor contains a single binary semaphore (S0) and two periodic tasks (J0 and J1). Each task
has a specified period, deadline and offset. Hence, task J0 will be released at times 4, 19, 34,...
with respective deadlines 19, 34, 49,... Each task executes code which consumes processor time,
and locks and unlocks semaphores. The notation [n,m] indicates that the task should consume between n and m (inclusive) units (also referred to as ticks) of processor time. In this example n and m are always equal, specifying exact amounts of processor time; otherwise, a value is chosen randomly within the range. (Note that only such timing statements consume simulation, i.e. processor, time.) The instructions p(S0) and v(S0) lock and unlock (respectively) the semaphore
S0. Static priority scheduling is prescribed (via scheduler pripre) assuming deadline-monotonic
priority assignment (via order dma)7 and priority inheritance resource allocation8 (via inherit).
Further details of the STRESS language, including message passing, object types, construction of
kernel features etc., are given later in the paper.
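The timing behaviour quoted for task J0 follows directly from its parameters: a periodic task is released at offset + k*period, with absolute deadline release + deadline. A small illustrative Python check (the helper name is ours, not part of the STRESS toolset):

```python
def releases(period, deadline, offset, n=3):
    """(release time, absolute deadline) pairs for a periodic task."""
    return [(offset + k * period, offset + k * period + deadline)
            for k in range(n)]

# Task J0 from Figure 1: period 15, deadline 15, offset 4.
print(releases(15, 15, 4))  # [(4, 19), (19, 34), (34, 49)]
```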
After a STRESS program has been written, three other functions can be performed within
the STRESS environment:
(i) analyse task sets for feasibility given specified scheduling algorithms and
underlying kernel;
(ii) simulate the execution of such task sets in conjunction with the hardware
architecture and kernel;
(iii) display the output of the simulation in a useful and concise form.
These are illustrated in Figure 2, showing a STRESS session in progress. The following sections
give a brief overview of the front-end, feasibility analysis, execution simulator and display tool.
Each task may have local variables and code which it executes. Local variables are either
scalar values or arrays, and can contain either integers or references to tasks, processors, etc.
Values stored in local variables are preserved between successive executions of tasks, with
variable initialisation occurring once, before the task ever executes. Two control constructs are
provided: if..then..else and loop (the else part is optional). The latter construct has associated
with it a constant integer which represents the maximum number of iterations. Examples of
variables and control statements are given in Figure 7.
Task types are provided to avoid repetitive declaration of code. For example, a network
interface process, common to many processors in the system, could be modelled as a task type,
with the task instantiated on each processor in the STRESS system, avoiding duplicate code.
Semaphores
Semaphores provide a basic method of implementing mutual exclusion between tasks on the same
processor or node. Semaphores located on a node can be accessed from any processor on the
node; a semaphore on a processor can only be accessed from that processor.
The basic mechanism is that of the Dijkstra semaphore, except that a semaphore may be initialised so that up to n tasks may lock it at any one time. If a semaphore is fully locked when
a further task attempts to lock it, then that task becomes blocked on the semaphore. When a task
releases a semaphore, all tasks blocked on that semaphore become unblocked, and all will retry
the locking request. This allows arbitrary rescheduling to be performed when a semaphore is
released.
It is also possible to explicitly block a task on a semaphore, even if the task is not
requesting the semaphore, or the semaphore is not fully locked. This is needed in order to
implement resource control schemes such as the Priority Ceiling Protocol8.
Objects And Methods
Objects provide shared memory communication between tasks within a single node. An object
cannot be accessed from a task running on a different node. Objects are accessed via methods,
and can contain local variables and local semaphores.
Objects may be declared at the node level or at the processor level, the only difference
being the scope. An object which is declared at the node level can be accessed from any task on
any processor on that node; an object declared at the processor level can only be accessed from
tasks on that processor.
A method invocation is effectively a function or subroutine call; it may be passed
arguments, and may return a result. The method contains code in exactly the same manner as a
task, and can access its arguments, the local variables and local semaphores. A method of an
object at the node level can also access the mailboxes and semaphores declared on that node; a
method of an object declared at the processor level can, in addition, access the mailboxes and
semaphores declared on that processor.
Methods can be invoked from one another, with the restriction that recursion amongst methods is not allowed. This is necessary in order that worst-case execution times can always be
bounded. The scoping rules are the same as for access to mailboxes and semaphores; a method of
an object at node level can access any method of any other object on that node, while a method of
an object at processor level can also access methods of objects on that processor.
Object types are provided to avoid repetitive declaration of code (in the same manner as
task types). For example, some kernel facilities could be modelled as methods within an object,
with the object instantiated on each processor in the STRESS system, avoiding duplicate code.
LANGUAGE EXTENSIONS FOR MODELLING KERNEL FACILITIES
At the lowest level, each processor is scheduled separately, and is not affected by events on other
processors, other than through semaphores and mailboxes. Note that, at the level of the STRESS
simulator, all scheduling is performed locally to each processor, and there is no global scheduling.
Each task has associated with it two priorities, the base priority baspri, and the effective
priority effpri. The base priority may be set in one of two main ways. Firstly, it may be set by use
of the priority keyword (within a STRESS program), setting the priority to a specific numerical
value. Secondly, it may be set by running the feasibility analyser; this will assign priorities based
on the scheduling algorithm which is to be used.
When the simulator is started, the effective priorities are set to the base priority values.
Then, at each tick, the task with the highest effective priority, which is free to run, will be
executed. If the effective priorities are left unchanged, the scheduling is effectively pre-emptive
static priority, with semaphore access occurring on a first-come first-served basis.
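This default rule can be sketched as a simple tick loop (an illustrative Python sketch, not the simulator's implementation; following the convention of the scheduler examples later in the paper, a smaller priority value means a higher priority):

```python
def dispatch(tasks, ticks):
    """tasks: {name: {"effpri": int, "work": int}}; smaller effpri wins.
    At each tick, run (consume one unit of work of) the highest-priority
    runnable task; record None for an idle tick."""
    trace = []
    for _ in range(ticks):
        runnable = [n for n, t in tasks.items() if t["work"] > 0]
        if not runnable:
            trace.append(None)
            continue
        chosen = min(runnable, key=lambda n: tasks[n]["effpri"])
        tasks[chosen]["work"] -= 1
        trace.append(chosen)
    return trace

tasks = {"J0": {"effpri": 1, "work": 2}, "J1": {"effpri": 2, "work": 3}}
print(dispatch(tasks, 5))  # ['J0', 'J0', 'J1', 'J1', 'J1']
```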
Other kernel facilities can be placed inside an object, with methods made available for
application tasks to make kernel calls. More sophisticated processor and resource scheduling can
be achieved within a STRESS program itself. This is detailed in the following sections.
Additional Language Features
To enable schedulers and resource managers to be written within a STRESS program, a number of additional language features are made available. Each task has a set of state values which can be accessed by STRESS programs: for example, the wcet value gives the maximum (worst-case) execution time of a task, while the waiting and locking values give, respectively, the tasks blocked on a semaphore and the task locking a semaphore. A number of variables are also provided: for example, tick holds the current simulation tick, and mynode provides the name of the node on which a task is executing.
Finally, an additional construct (for) and keyword (of) are introduced to allow lists to be scanned. These are used in expressions such as tasklist of myproc (the list of tasks on the local processor) or waiting of sema (the list of tasks blocked on a semaphore).
Implementing Schedulers
As an example, consider a scheduler for the earliest-deadline algorithm9. Under this algorithm, the task which is closest to its deadline (and free to run) will be executed; the code is shown in Figure 8. The scheduler is implemented as a task, early, which runs on every tick but does not consume any processor time. When it runs, it adjusts the effective priority values and then terminates. This leaves the low-level scheduler free to run whichever other task is free to run and has the highest effective priority. Note that schedulers that respond to events (such as task release), and have non-zero overheads, can also be implemented within the STRESS language.
It is necessary to arrange that the scheduler task executes before any other task, in order
that it can set the effective priorities correctly. This is managed by setting its priority to zero.
Provided that other tasks on the processor are not given priorities which are negative, the
scheduler will have the highest priority, and will execute first.
When the scheduler task runs, it scans the list of tasks on the processor. For each task,
other than the scheduler task itself, the effective priority is set to the next task deadline (relative to
the current time). The closer the deadline is to the current time, the smaller this value will be, and hence the higher the priority.
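The per-tick rule can be restated in a couple of lines (an illustrative Python restatement of the rule, not the STRESS code of Figure 8; the deadline values are hypothetical):

```python
def edf_effective_priorities(now, next_deadlines):
    """Set each task's effective priority to its next absolute deadline
    relative to the current time: the nearer the deadline, the smaller
    the value, and hence the higher the priority."""
    return {name: d - now for name, d in next_deadlines.items()}

# Next absolute deadlines at tick 10 (hypothetical values):
print(edf_effective_priorities(10, {"J0": 19, "J1": 20}))  # {'J0': 9, 'J1': 10}
```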
Since we probably do not want to see the scheduler task when a simulation run is
displayed by the display tool, the task is marked hidden.
Libraries of schedulers may be created, with each file containing a single scheduler task (e.g. the code in Figure 8). A shorthand may then be used for including a scheduler on a processor: for example, the code in Figure 1 contains scheduler pripre, indicating that the scheduler task named pripre should be included at this point.
Implementing Resource Management
This section describes how resource management algorithms can be implemented in the STRESS language. Essentially, additional code may be executed before the in-built p and v operations. This is structured as an object containing methods named p and v. Consider Figure 9 (containing the code for simple priority inheritance resource control8). The method p in object inherit will be executed whenever a p operation is executed by a task; then the in-built code within the simulator for p is executed. Similarly, a method for the v operation is available.
In Figure 9, the p method checks whether the semaphore is locked; if it is, then cnt of sema will be zero. If it is locked, then the list of tasks locking the semaphore is scanned and, if any has a lower effective priority than the calling task (which is about to become blocked), its effective priority is increased to that of the caller. Hence, if the caller is blocked by a lower priority task, then the effective priority of that task will be raised to that of the caller. Finally, on return from the routine, the actual semaphore operation (i.e. the in-built p) is invoked.
The corresponding v method initially resets the caller's effective priority to its base priority. It then scans all semaphores locked by the caller (other than the semaphore which is being released), and each of the tasks blocked on those semaphores. The caller's effective priority is raised if any blocked task has a higher effective priority.
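The p and v behaviour just described can be restated compactly (an illustrative Python sketch of the inheritance rules, with smaller effpri values meaning higher priority; the function names are ours):

```python
def inherit_on_p(caller, lockers, effpri):
    """On a p() that blocks: any locker with lower priority (larger
    effpri value) than the caller inherits the caller's priority."""
    for t in lockers:
        if effpri[t] > effpri[caller]:
            effpri[t] = effpri[caller]

def restore_on_v(caller, baspri, effpri, blocked_elsewhere):
    """On v(): reset the caller to its base priority, then re-inherit
    from tasks still blocked on other semaphores the caller holds."""
    effpri[caller] = baspri[caller]
    for w in blocked_elsewhere:
        if effpri[w] < effpri[caller]:
            effpri[caller] = effpri[w]

effpri = {"low": 5, "high": 1}
inherit_on_p("high", ["low"], effpri)   # low inherits high's priority
print(effpri["low"])  # 1
```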
Libraries of resource managers may be built up in a similar manner to schedulers. Also, a
shorthand is available for inclusion of a resource manager on a processor. For example, the line
resource inherit within a processor declaration would ensure that the resource manager given in
Figure 9 is included.
FEASIBILITY ANALYSIS
Within the context of the STRESS environment, feasibility analysis means determining whether the tasks in a STRESS program will meet their deadlines. Since many different models of scheduling (and resource management) can be supported, provision is made for adding extra analysis modules for new scheduling schemes. This was illustrated in Figure 3, where the user was able to select an appropriate analysis for the STRESS program. All feasibility analysis modules perform initial task timing analysis, priority assignment and resource usage analysis.
For static priority scheduling a number of basic analyses are currently supported:
(i) the sufficient test of Liu and Layland9;
(ii) the sufficient and necessary test of Lehoczky et al10;
(iii) the sufficient test of Audsley et al11;
(iv) the sufficient and necessary test of Audsley et al11;
(v) the arbitrary priority assignment and sufficient and necessary test of
Audsley12.
For tests (i) and (ii) to be applicable, individual tasks must have deadlines equal to their periods. Tests (iii) and (iv) allow task deadlines to be less than their periods. Finally, (v) allows optimal priority assignment across tasks with arbitrary timing constraints.
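Test (i), for example, is the classic utilisation bound: a task set with deadlines equal to periods is schedulable under rate-monotonic priorities if U = sum(Ci/Ti) does not exceed n(2^(1/n) - 1). A minimal Python sketch of this test (the task-set values are hypothetical):

```python
def ll_bound_feasible(tasks):
    """Sufficient rate-monotonic test of Liu and Layland: utilisation
    U = sum(C/T) must not exceed n(2^(1/n) - 1).
    tasks: list of (C, T) pairs, with deadline == period assumed."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return u <= bound

# Two tasks, C=1/T=4 and C=2/T=8: U = 0.5, bound ~ 0.828.
print(ll_bound_feasible([(1, 4), (2, 8)]))  # True
```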
The analyses are complicated by the incorporation of sporadic tasks. Tests (i) and (ii)
above (rate-monotonic scheduling) require extra periodic server tasks to be introduced13. These
tasks reserve time for sporadic tasks to execute. STRESS programs that contain sporadic tasks and use rate-monotonic scheduling are required to provide such server tasks.
For tests (iii), (iv) and (v) above, the underlying model (deadline less than or equal to
period for individual tasks) enables sporadic tasks to be incorporated directly, without server
tasks or extensions to the feasibility tests11.
Two forms of analysis for dynamic scheduling are provided, earliest deadline9 and least-
laxity14. The analysis for both schemes is identical. Both assume that task deadlines are equal to
periods.
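Under the deadline-equals-period assumption, this analysis reduces to checking that total utilisation does not exceed one (a minimal Python sketch; the task values are hypothetical):

```python
def edf_feasible(tasks):
    """Feasibility test for earliest-deadline (and, as used here, for
    least-laxity): total utilisation sum(C/T) <= 1, assuming each
    task's deadline equals its period. tasks: list of (C, T) pairs."""
    return sum(c / t for c, t in tasks) <= 1

# Three tasks with U = 0.4 + 0.3 + 0.2 = 0.9.
print(edf_feasible([(2, 5), (3, 10), (3, 15)]))  # True
```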
One main form of resource allocation is provided, namely the Priority Ceiling Protocol8. This has minimal effect upon the feasibility analyses above. The maximum blocking times
provided by the resource analysis are easily incorporated into all of the analyses: for feasibility
purposes, blocking time can be treated as additional computation time. Thus, whilst considering
the feasibility of an individual task, the worst-case execution time of that task is temporarily
increased by its worst-case blocking time.
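This treatment fits the standard fixed-point response-time formulation used by the sufficient-and-necessary static-priority tests: iterate R = C + B + sum over higher-priority tasks of ceil(R/Tj)*Cj until R stops changing. A hedged Python sketch (task values are hypothetical; convergence is assumed, as it is for any task set with utilisation below one):

```python
from math import ceil

def response_time(c, b, higher):
    """Worst-case response time of a task with execution time c and
    blocking time b, under interference from higher-priority tasks
    given as (C_j, T_j) pairs; blocking is treated as extra computation."""
    r = c + b
    while True:
        r_next = c + b + sum(ceil(r / tj) * cj for cj, tj in higher)
        if r_next == r:
            return r
        r = r_next

# Lowest-priority task: C=2, B=1, higher-priority tasks (C=1,T=4), (C=2,T=8).
print(response_time(2, 1, [(1, 4), (2, 8)]))  # 7
```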
The feasibility phase provides a detailed assessment of the tasks in a program. Information from previous phases (worst and best-case execution times, priorities, resource usage and blocking times) is combined with the results of the feasibility analysis to give the user a detailed system analysis. If tasks are not feasible, the analysis guides the user toward the resource accesses or task execution times that need to be changed for the system to become feasible. When the system is feasible, the analysis provides likely processor utilisation estimates for that system (both worst and best-case).
CONCLUSIONS
The STRESS simulator provides a flexible environment in which various scheduling and resource
management algorithms can be evaluated and tested. It also provides a means of evaluating and
testing task sets and kernel implementations corresponding to actual applications. The
combination of a flexible simulated architecture, and a simple but fairly powerful programming
language allows its application to a wide range of target architectures and systems. The flexibility
of STRESS has been illustrated by the development of new feasibility theory for token ring networks, the modelling of a standard real-time application and the development of a new scheduling policy (dual priority). This flexibility is not available in the other simulators discussed
previously3,4,6.
A number of possibly desirable features are not currently implemented. These include
clock drift at the node and at the processor level, and mode changes. However, these features
could be added without much difficulty. Finally, it is hoped that the STRESS environment will be
extended to enable code generation (or translation to another high-level language, e.g. C++) for
executing applications and kernels upon a testbed currently under development.
Having gone through many incarnations, STRESS is implemented on two main platforms: Sun 3 workstations and PC-compatible Windows machines. The STRESS environment is reasonably mature and is currently being used by a number of academic and industrial research institutions.
ACKNOWLEDGEMENTS
This work has been supported, in part, by the Information Engineering Advanced Technology
Programme grant GR/F 35920/4/1/1214 and by SERC grant number GR/H 39611.
APPENDIX A: STRESS LANGUAGE SYNTAX
This appendix describes the syntax of STRESS programs in EBNF. For brevity, expressions are
not defined, but contain the standard arithmetic and relational operators with the usual precedence
relationships, and may also include method invocation. Keywords are given in bold typeface, with
terminal symbols in capitals (i.e. NAME, NUMBER).
REFERENCES
1. H. Kopetz, A. Damm, C. Koza, M. Mulazzani, W. Schwabl, C. Senft and R. Zainlinger, "Distributed Fault-Tolerant Real-Time Systems: The Mars Approach", IEEE Micro, pp. 25-40 (February 1989).
2. C. D. Locke, "Software Architecture for Hard Real-Time Applications: Cyclic
Executives vs. Fixed Priority Executives", Journal of Real-Time Systems, 4(1), pp. 37-53
(March 1992).
3. A. D. Stoyenko, "A Schedulability Analyser for Real-Time Euclid", Proc. 8th IEEE
Real-Time Systems Symposium, pp. 218-227 (December 1987).
4. H. Tokuda and C. W. Mercer, "ARTS: A Distributed Real-Time Kernel", ACM
Operating Systems Review Special Issue, pp. 29-53 (1989).
5. H. Tokuda and M. Kotera, "A Real-Time Tool Set for the ARTS Kernel", Proc. 9th
IEEE Real-Time Systems Symposium, pp. 289-299 (December 1988).
6. J. W. S. Liu, J. L. Redondo, Z. Deng, T. S. Tia, R. Bettati, A. Silberman, M. Storch, R.
Ha and W. K. Shih, "PERTS: A Prototyping Environment for Real-Time Systems",
Proc. 14th IEEE Real-Time Systems Symposium, pp. 184-188 (December 1993).
7. J. Y. T. Leung and J. Whitehead, "On the Complexity of Fixed Priority Scheduling of
Periodic Real-Time Tasks", Performance Evaluation (Netherlands), 2(4), pp. 237-250
(December 1982).
8. L. Sha, R. Rajkumar and J. P. Lehoczky, "Priority Inheritance Protocols: An Approach
to Real-Time Synchronisation", IEEE Trans. on Computers, 39(9), pp. 1175-1185
(September 1990).
9. C. L. Liu and J. W. Layland, "Scheduling Algorithms for Multiprogramming in a Hard
Real-Time Environment", JACM, 20(1), pp. 40-61 (1973).
10. J. P. Lehoczky, L. Sha and Y. Ding, "The Rate-Monotonic Scheduling Algorithm: Exact
Characterisation and Average Case Behaviour", Proc. 10th IEEE Real-Time Systems
Symposium, pp. 166-171 (December 1989).
11. N. C. Audsley, A. Burns, M. F. Richardson and A. J. Wellings, "Hard Real-Time
Scheduling: The Deadline-Monotonic Approach", Proc. 8th IEEE Workshop on Real-
Time Operating Systems and Software (May 1991).
12. N. C. Audsley, "Flexible Scheduling of Hard Real-Time Systems", DPhil. Thesis, Dept.
Computer Science, University of York, UK (August 1993).
14. L. Sha, R. Rajkumar and J. P. Lehoczky, "Aperiodic Task Scheduling for Hard Real-
Time Systems", Journal of Real-Time Systems, 1, pp. 27-69 (1989).
15. K. Tindell and J. Clark, "Holistic Schedulability Analysis for Distributed Hard Real-Time Systems", Euromicro Journal (Special Issue on Parallel Embedded Real-Time Systems) (February 1994).
16. J. Kramer, J. Magee, M. Sloman and A. M. Lister, "CONIC: An Integrated Approach to Distributed Computer Control Systems", IEE Proceedings (Part E), 130(1), pp. 1-10 (1983).
17. A. Burns and A. J. Wellings, "Real-Time Systems and Their Programming Languages",
pub. Addison-Wesley (1989).
18. A. Burns and A. J. Wellings, "Dual Priority Assignment: A Practical Method for
Increasing Processor Utilisation", Proc. 5th Euromicro Workshop on Real-Time
Systems, pp. 48-55 (1992).
system
node node_1
processor proc_1
order dma
scheduler pripre
resource inherit
semaphore S0
periodic J0
period 15 deadline 15 offset 4
[1,1] p(S0) [1,1] v(S0) [1,1]
endper
periodic J1
period 20 deadline 20 offset 0
[2,2] p(S0) [4,4] v(S0) [1,1]
endper
endpro
endnod
endsys
Figure 1: Simple STRESS Program.
Figure 2: The STRESS Environment.
Figure 3: Selecting an Analysis.
Exact Deadline Monotonic Analysis Selected.
Analysis starting ....
Exact Deadline Monotonic Analysis : proc_1@node_1 : Passes.
analysis returning.
Figure: Structure of a STRESS System (network, nodes, processors, objects with methods, tasks, mailboxes).
system
node node_1
processor proc_1
periodic task_1
period 10
offset 0
deadline 10
[2, 3]
send 1 to node_2.mbox_2
delay [4,8]
[2, 2]
endper
endpro
endnod
node node_2
processor proc_2
mailbox mbox_2
sporadic task_2
arrival 4
deadline 4
trigger mbox_2
variable var_2
[2, 3]
recv mbox_2 to var_2
expiry 3
endspo
endpro
endnod
endsys
periodic task1
period 10
deadline 5
variable var
variable tri
if tri > 100 then
tri := 0
var := 10
loop var > 0 max 20
{ tri := tri + var * var
var := var - 1
}
endper
Figure 7: Simple Task.
scheduler periodic early
period 1
deadline 0
offset 0
priority 0
hidden
variable task
for task in tasklist of myproc max 999999
if task != mytask
then effpri of task := deadline of task
endper
Figure 8: Earliest Deadline Scheduler.
resource object inherit
variable locker
variable waiter
variable myused
method p (sema)
if cnt of sema = 0 then
for locker in locking of sema max 999999
{ if effpri of locker > effpri of mytask then
effpri of locker := effpri of mytask
}
endmet
method v (sema)
effpri of mytask := baspri of mytask
for myused in locking of mytask max 999999
{ if myused != sema then
for waiter in waiting of myused max 999999
{ if effpri of waiter < effpri of mytask then
effpri of mytask := effpri of waiter
}
}
endmet
endobj
Figure 9: Priority Inheritance Resource Management.
system
objtyp ringface
variable dest[32]
variable data[32]
variable nmsg = 0
variable idx
method transmit (tobox, value)
if nmsg >= 32 then fail 100
dest[nmsg] := tobox
data[nmsg] := value
endmet
...
arrival 80 chance 40
deadline 20 priority 2
[2,2]
ringface::transmit (node_3.box_1, 0)
endspo
endpro
endnod
node node_2
mailbox tokb
mailbox box_1
mailbox box_2
...
Figure 11: STRESS Display of Token Ring Simulation.
system
node node_1
mailbox handler_start
processor proc_1
order program
resource inherit
scheduler pripre
...
endmet
method set_status (newstatus)
p (mutex)
[2, 3]
status := newstatus
v (mutex)
endmet
endobj
...
Figure 13: STRESS Display of Mine Pump Simulation.
system
node node_1
processor proc_1
order program
scheduler periodic dual_priority_scheduler
period 1 deadline 0 offset 0 priority 0 hidden
if tick % 20 = 0 then effpri of task_2 := baspri of task_2
if tick % 20 = 10 then effpri of task_2 := 4
if tick % 28 = 0 then effpri of task_3 := baspri of task_3
if tick % 28 = 17 then effpri of task_3 := 3
if tick % 56 = 0 then effpri of task_4 := baspri of task_4
if tick % 56 = 50 then effpri of task_4 := 2
endper
periodic task_1
period 16 deadline 16 offset 0 priority 5
[4,4]
endper
periodic task_2
period 20 deadline 20 offset 0 priority 6
[5,5]
endper
periodic task_3
period 28 deadline 28 offset 0 priority 7
[11,11]
endper
periodic task_4
period 56 deadline 56 offset 0 priority 8
[6,6]
endper
endpro
endnod
endsys
Figure 15: STRESS Display of Dual Priority Scheduler.