Jfair: A Scheduling Algorithm to Stabilize Control Applications

Amir Aminifar, Petru Eles, Zebo Peng


Linköping University, Sweden
Abstract: Control applications are considered to be among the core applications in cyber-physical and embedded real-time systems, for which jitter is typically an important factor. This paper investigates whether it is possible to guarantee a certain amount of jitter for a given set of applications on a shared platform. The effect of jitter on the stability of control applications and its relation to the latency will be discussed. The importance arises from the fact that it is considerably easier to manage the constant part of the delay (known as latency), while coping with the varying part of the delay (known as jitter) is more involved. The proposed solution guarantees certain jitter limits and, at the same time, does not lead to overly pessimistic latency values. The results are later used in a design optimization problem to minimize the resource utilized.

I. INTRODUCTION AND RELATED WORK

The majority of controllers are implemented by software tasks which read sensor information, process this information to compute the control input, and then apply this input to the plant to be controlled. When several tasks share the same computing unit, the execution of a task is affected by the execution of the other tasks competing for the shared resource. This, in turn, leads to latency and jitter in the execution of the task that can affect the stability of the application.

Formally, the latency is defined as the constant part of the delay experienced by the task, while the varying part of the delay is referred to as jitter. Today, the effect of the latency from the sensing to the actuation, or the effect of the jitter in the task completion, on control performance and stability is well understood [1], [2].

Over the past decade, there has been a considerable amount of research in the area of control-scheduling co-design [3]–[14]. Typically, response-time analysis is used to calculate the latency and jitter, and to investigate the stability [5], [13], [14].

The approach in this paper is of another nature: given a set of applications and the maximum amount of jitter which can be tolerated by each application, is it possible to schedule the system such that all applications satisfy their jitter constraints? Here, we focus on the notion of lag [15], i.e., the difference between the amount of time allocated to a task on a shared processor and on a dedicated processor with speed equal to the task utilization. Towards this, we model the schedule as a system where the state of the system is the lag of all tasks. To bound the jitter of a task, the lag should be kept within a specified limit, which is obtained by considering the relation between the jitter and the lag. It will be shown that it is possible to limit the lags for a set of applications with a total utilization not exceeding one, for arbitrary positive lag limits. In order to limit the lags, and in turn the jitters, we propose a simple online scheduling policy.

In other words, our task is to design a scheduling policy to guarantee the required lag limits. The complexity and the context-switch overhead of the proposed scheduling policy will be discussed. In particular, it is shown that, in the worst case, the number of preemptions by our proposed policy is at most three times the number of preemptions by the optimal policy. The proposed policy is then improved to have even fewer preemptions. Finally, the bound on the jitter for control applications is translated into stability guarantees. It will also be demonstrated how to assign processor shares and select lag limits in designing such systems in order to guarantee stability for a set of control applications.

An engineering solution to avoid jitter is to buffer the last part of the task execution until just before the worst-case response time of the task [16]. An important advantage of our proposed approach is that the fairness mechanism avoids bursty behavior by the competing applications. In other words, the interference by other applications is kept within certain limits when the concept of lag is considered. This leads to the omission of the extreme values of the best-case and, more importantly, worst-case response times, as the interference caused by other tasks is more regulated. While the former can also be achieved by postponing the execution of the last portion of each job (i.e., buffering), the latter is special to our approach and cannot be achieved by simple buffering. This is crucial since, for the stability of a control application, the latency is of significant importance in addition to the jitter.

The Pfair algorithm [15] addresses the problem of proportional progress for task execution. In other words, it guarantees that the difference between the amount of time allocated to each task and the corresponding value in the fluid model of execution does not exceed one time unit of execution. The schedule may, however, experience a preemption every time unit. As opposed to Pfair, in the proposed approach, the constraint on the lag limit of each application can be individually and arbitrarily selected. The concept of regularity, similar to the lag concept, is discussed in [17]. The authors also discuss the schedulability conditions when the concept of regularity is considered. A closely related technique is the server-based resource reservation mechanism, which uses the (α, Δ) model [18] to design servers that bound the output jitter. It will be shown that this solution may suffer from considerably more preemptions compared to our proposed algorithm.

In [19], Baruah et al. develop two algorithms to minimize the output jitter, as opposed to limiting the jitter. Di Natale and Stankovic [20] propose a framework based on simulated annealing to synthesize static schedules with minimum output jitter for non-preemptive tasks and messages in distributed systems. Westmijze et al. [21] study the
effectiveness of a number of scheduling heuristics intended to reduce the latency and jitter, taking into account the execution times of tasks as well as the dependencies between the tasks, the data structures accessed by the tasks, and the memory hierarchy. However, the complexity of the model considered in [21] renders it impossible to provide any kind of guarantees. As opposed to our approach, jitter minimization requires complete information about the tasks' parameters during the design phase [19]–[21].
Hong et al. [22] propose a heuristic based on elastic scheduling to reduce the jitter through adaptive deadline adjustment. Recently, Phan et al. [23] proposed to reduce the output jitter of a task under the fixed-priority policy using shapers. Essentially, the authors propose to delay an already released job for a certain amount of time to bound the resource demands. However, the solutions proposed in [22] and [23] are similar to the buffering mechanism discussed previously. Mochocki et al. [24] addressed the problem of guaranteeing jitter constraints, but using the dynamic voltage scaling (DVS) technique.
The fairness concept is defined by [15] in the sense that tasks progress roughly proportionally to their allocated resource share. However, as discussed before, the algorithm can only guarantee equal lag limits. As opposed to Pfair, our proposed algorithm provides guarantees on the independent amount of jitter each application can tolerate, and therefore the algorithm is referred to as Jfair. Note that fairness does not necessarily translate into equality, as applications may tolerate different amounts of variation in the response time in order to remain stable.
To the best of our knowledge, the interplay between the latency and jitter of online scheduling policies and the connection to the stability of control applications has not been discussed in the literature. The contributions and novelty of the paper are summarized in the following:
- We prove that it is possible to guarantee any jitter limits for tasks as long as the utilization does not exceed one. The proof is constructive, i.e., a scheduling policy is proposed to synthesize such a schedule.
- It is also proved that the number of preemptions needed by the proposed scheduling policy will be at most three times that needed by any other valid scheduling policy, including the optimal one. This issue has not been discussed in previous work in the context of online scheduling algorithms for limiting the output jitter.
- It is shown, both theoretically and experimentally, that the proposed approach outperforms the server-based approach.
- We address the analysis problem of selecting the lag limits, given the constraints on the latency and jitter required to guarantee stability.
- Based on the proposed online scheduling policy, a design optimization problem is formulated to minimize the resource utilized to guarantee the stability of the control applications. The design parameters are the lag limits and the processor shares.
- Finally, we propose an improved scheduling policy to reduce the number of preemptions even further.


II. SYSTEM MODEL AND BACKGROUND
We are given n independent periodic tasks, each of which is denoted by τ_i, with computation time (execution time) c_i and period h_i (for task periods we adopt a notation popular in control systems). Each task τ_i has a utilization, denoted by u_i = c_i/h_i, which is defined as the portion of the resource allocated to this task. The set of tasks is denoted by T. The j-th instance of τ_i is referred to as a job and is denoted by τ_{i,j}.

We denote by r_{i,j} and f_{i,j} the release and finishing time instants of the j-th job of τ_i, respectively. The jitter J_i of τ_i is defined as

    J_i = max_j { f_{i,j} − r_{i,j} } − min_j { f_{i,j} − r_{i,j} }.    (1)

Such a quantity is also called output jitter, as it measures the variation in the response time of a task.

Let us define the execution function e_i(t),

    e_i(t) = 1 if τ_i is executing at time t, and e_i(t) = 0 otherwise.

For a given task schedule, the lag of task τ_i is defined [15] as

    λ_i(t) = u_i·t − ∫_0^t e_i(x) dx.    (2)
The definition captures the difference between the amount of time that should have been allocated to task τ_i until time t according to its utilization u_i (i.e., u_i·t) and the amount that is actually allocated (i.e., ∫_0^t e_i(x) dx).

The lag at any instant t_2 may also be written as a function of the lag at any previous instant t_1 (≤ t_2), as follows:

    λ_i(t_2) = u_i·t_2 − ∫_0^{t_2} e_i(x) dx
             = u_i·t_1 + u_i·(t_2 − t_1) − ∫_0^{t_1} e_i(x) dx − ∫_{t_1}^{t_2} e_i(x) dx
             = λ_i(t_1) + u_i·(t_2 − t_1) − ∫_{t_1}^{t_2} e_i(x) dx.    (3)

The lag limit is defined as the maximum allowed deviation in the absolute value of the lag of task τ_i, and is denoted by Λ_i. A schedule is feasible if the lag limits are satisfied. That is, a feasible schedule guarantees that

    |λ_i(t)| ≤ Λ_i,    ∀ t ≥ 0,    i = 1, ..., n.
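To make the lag of Equation (2) and the feasibility condition above concrete, the short Python sketch below computes λ_i(t) from a recorded execution trace (a list of intervals during which the task ran) and checks |λ_i(t)| ≤ Λ_i at a given instant. The function names and the trace format are illustrative assumptions, not part of the paper.

    def lag(u_i, trace, t):
        """Lag of a task at time t (Equation (2)).

        u_i   -- utilization of the task
        trace -- list of (start, end) intervals during which the task executed
        t     -- time instant of interest
        """
        # Time actually allocated to the task in [0, t]: the integral of e_i.
        allocated = sum(max(0.0, min(end, t) - min(start, t)) for start, end in trace)
        return u_i * t - allocated

    def respects_lag_limit(u_i, trace, t, lag_limit):
        """Feasibility check |lag_i(t)| <= Lambda_i at time t."""
        return abs(lag(u_i, trace, t)) <= lag_limit

    # A task with u_i = 0.5 that executed during [0, 2] and [4, 6]:
    # at t = 8 its lag is 0.5 * 8 - 4 = 0, within a lag limit of 1.
    assert respects_lag_limit(0.5, [(0, 2), (4, 6)], 8, 1.0)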

The relation between the jitter and the lag will be clarified in the following. Assuming that the lag λ_i(t_2) of task τ_i is within the lag limits at time t_2, we obtain

    |λ_i(t_2)| ≤ Λ_i,
    |λ_i(t_1) + u_i·(t_2 − t_1) − k·c_i| ≤ Λ_i,
    (k·c_i − λ_i(t_1) − Λ_i)/u_i ≤ t_2 − t_1 ≤ (k·c_i − λ_i(t_1) + Λ_i)/u_i,
    t_1 + (k·c_i − λ_i(t_1) − Λ_i)/u_i ≤ t_2 ≤ t_1 + (k·c_i − λ_i(t_1) + Λ_i)/u_i,

where t_2 is the instant at which the k-th job of task τ_i may finish its execution and t_1 is the release time of the first job of task τ_i. The second inequality is obtained using Equation (3). Therefore, the output jitter J_i, which is equivalent to the variation in t_2, is bounded as follows,

    J_i ≤ 2·Λ_i/u_i.    (4)

This indicates that the limit on the output jitter for task τ_i translates into a constraint on the lag,

    |λ_i(t)| ≤ Λ_i = (J_i/2)·u_i,    (5)

where J_i is the maximum amount of jitter which task τ_i can tolerate.

We shall also define the notion of pending work w_i(t) for a periodic task τ_i at time t,

    w_i(t) = (⌊t/h_i⌋ + 1)·c_i − ∫_0^t e_i(x) dx.    (6)

By definition, the pending work may not be negative at any time. Clearly, a task τ_i may not execute when w_i(t) = 0.

III. MOTIVATIONAL EXAMPLE

In this section, we shall motivate the need for the proposed scheduling policy. Let us consider a set of three tasks T = {τ_1, τ_2, τ_3}. The execution times of all tasks are c_i = 5. The sampling period of task τ_1 is h_1 = 10, while for tasks τ_2 and τ_3 the periods are h_2 = h_3 = 20.

Figure 1 illustrates the task schedule under several scheduling policies: the Pfair algorithm [15], the server-based algorithm discussed in Section IV, and our proposed Jfair algorithms. To compare our proposed approach against the Pfair algorithm, we set the lag limits |λ_i(t)| ≤ 1, although our policy does not require such a condition. The results are summarized in Table I. The preemption density for a taskset T is defined as

    η = Σ_{i=1}^n m_i/h_i,    (7)

where m_i is the number of preemptions of task τ_i in one period h_i.

Figure 1. Example: three tasks with execution times c_i = 5, periods h_1 = 10, h_2 = h_3 = 20, lag limits |λ_i(t)| ≤ 1; (a) Pfair, (b) Server-based, (c) Jfair, (d) Improved Jfair.

The three tasks τ_1, τ_2, and τ_3 are depicted in green, red, and blue, respectively. The x-axis is the time axis and the lags are shown on the y-axis. The upward arrows indicate the release times of the corresponding jobs. The lags are also depicted with piecewise linear lines in the same color as the task. However, for the sake of illustration, the lags are displaced such that |λ_i(t) − i| ≤ 1. For example, the lag of the red task τ_2 is drawn in red and the lag λ_2(t) = 0 at time t if it intersects the line y = 2. Therefore, the lags are valid as long as the displaced λ_2(t) is between y = 1 and y = 3, i.e., |λ_2(t) − 2| ≤ 1.

Figure 1(a) shows the schedule by the Pfair algorithm [15]. In every time unit, the task executing is preempted, which leads to 20 preemptions in total.

The schedule by the server-based approach (resource reservation mechanism) is shown in Figure 1(b). As in the previous case, the servers (or shapers) are designed to guarantee the lag limits of |λ_i(t)| ≤ 1 (a more detailed discussion will be presented in Section IV). It can be seen that the number of preemptions is 24, i.e., more than the Pfair algorithm, for this particular example.

The proposed Jfair algorithm (discussed in Section IV) is shown in Figure 1(c). Notice that while the lag limits are
still respected, the number of preemptions is reduced to 14.

Table I. Summary of the characteristics of the schedule.

    policy            preemption density    lag limit
    Pfair             1.0                   1
    Server            1.2                   1
    Jfair             0.7                   1
    Improved Jfair    0.5                   1

The improved Jfair algorithm will be discussed in Section VI and the schedule generated by it for this example is shown in Figure 1(d). Similar to Jfair, the lag limits are respected, while the number of preemptions is reduced to 10. As will be discussed later, the number of preemptions by the optimal algorithm, for this example, is bounded from below by 8. This indicates that the schedule generated by the Jfair policy, while respecting the lag limits, leads to considerably fewer preemptions when compared to Pfair or servers.
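As a quick check of the numbers in Table I, the snippet below converts the preemption counts reported in the text for Figure 1 (20 for Pfair, 24 for the server-based approach, 14 for Jfair, and 10 for the improved Jfair) into the preemption density of Equation (7). It is a trivial illustration, not part of the paper's evaluation.

    # For strictly periodic tasks, sum_i m_i/h_i equals the total number of
    # preemptions in one hyperperiod divided by the hyperperiod length.
    HYPERPERIOD = 20  # lcm of the periods 10, 20, 20 in the example

    preemptions = {"Pfair": 20, "Server": 24, "Jfair": 14, "Improved Jfair": 10}

    for policy, count in preemptions.items():
        print(policy, count / HYPERPERIOD)
    # Reproduces Table I: 1.0, 1.2, 0.7, 0.5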

In the next section, our goal is to design a scheduling policy to guarantee the lag limits for all tasks.

IV. JFAIR SCHEDULING POLICY

In this section, we discuss the Jfair scheduling policy and the theoretical guarantees provided by the proposed approach. Without loss of generality, we assume that the resource is fully utilized, i.e., Σ_{i=1}^n u_i = 1. This is simply possible by introducing an extra task with utilization u_0 = 1 − Σ_{i=1}^n u_i and implicit deadline, for instance, by choosing the period and deadline equal to the hyperperiod and the execution time in such a way that the task utilization is equal to u_0.

A. Scheduling policy

A job of task τ_i has execution time c_i and period h_i. Our proposed algorithm divides each job into several smaller subjobs, for each of which the execution time and artificial deadline at time t are denoted by c_i(t) and d_i(t), respectively. In fact, our objective is to assign the execution times c_i(t) and deadlines d_i(t) in such a way that the lag limits are not violated.

The scheduling policy is as follows (a simulation sketch follows the example below):
- The scheduler is activated at the subjobs' deadlines (and at time 0) and when a subjob finishes its execution.
- The scheduler is an EDF-based scheduler, i.e., the subjob with the earliest deadline has the highest priority.
- The new deadline and execution time of each subjob are assigned at its own old deadline (and at time 0).
- The deadlines d_i(t) of the subjobs are set according to Equation (11). In the case where w_i(t) < d_i(t)·u_i, the deadline is modified to d_i(t) = w_i(t)/u_i.
- The subjob's execution time at time t is assigned to be c_i(t) = d_i(t)·u_i.

To have a better understanding of the algorithm, let us first have a closer look at how our simple scheduling policy works. Intuitively, the scheduler breaks a job into several quasi-periodic subjobs.

Starting with λ_i(0) = 0, the deadline and execution time are d_i(0) = Λ_i/(u_i·(1 − u_i)) and c_i(0) = Λ_i/(1 − u_i), respectively. At time t = d_i(0), the lag λ_i(t) = 0 and the relative deadline and execution time are again d_i(t) = Λ_i/(u_i·(1 − u_i)) and c_i(t) = Λ_i/(1 − u_i), respectively. Therefore, the schedule is similar to a quasi-periodic task τ̄_i with period and deadline h̄_i = d̄_i = Λ_i/u_i + Λ_i/(1 − u_i) = Λ_i/(u_i·(1 − u_i)) and execution time c̄_i = Λ_i/(1 − u_i). The last subjob in each period might have a shorter deadline and execution time. However, note that the utilization of the subjobs is kept the same as that of the original task, c̄_i/h̄_i = u_i.

An example is shown in Figure 1(c). Let us consider task τ_1, shown in green. At time t = 0, the first subjob is assigned d_1(0) = Λ_1/(u_1·(1 − u_1)) = 1/(0.5·(1 − 0.5)) = 4 and c_1(0) = d_1(0)·u_1 = 2. At time t = d_1(0) = 4, the next subjob is assigned d_1(4) = 1/(0.5·(1 − 0.5)) = 4 and c_1(4) = d_1(4)·u_1 = 2, but since the subjob of task τ_3 has a shorter deadline, this subjob may not execute immediately. At time t = 4 + d_1(4) = 8, only 1 unit of the job execution is left and therefore the subjob has d_1(8) = w_1(t)/u_1 = 2 and c_1(8) = d_1(8)·u_1 = 1.
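The rules above are simple enough to simulate directly. The following Python sketch is a minimal, illustrative implementation of the policy as described (EDF over artificial subjob deadlines, with each task's deadline and budget refreshed at its own old deadline using Equations (6), (10) and (11)); it is our reconstruction for illustration, not the authors' implementation, and it uses exact rational arithmetic to avoid rounding artifacts.

    from dataclasses import dataclass
    from fractions import Fraction as F

    @dataclass
    class Task:
        name: str
        c: F                 # execution time per job
        h: F                 # period
        lag_limit: F         # Lambda_i
        executed: F = F(0)   # total processor time received so far
        deadline: F = F(0)   # absolute deadline of the current subjob
        budget: F = F(0)     # remaining execution time of the current subjob

        @property
        def u(self):
            return self.c / self.h

        def lag(self, t):                      # Equation (2)
            return self.u * t - self.executed

        def pending(self, t):                  # Equation (6)
            return (t // self.h + 1) * self.c - self.executed

        def assign_subjob(self, t):
            """New artificial deadline and budget at time t (Equations (10)/(11))."""
            d = self.lag_limit / (self.u * (1 - self.u)) - self.lag(t) / self.u
            if self.pending(t) < d * self.u:   # shorter last subjob of the current job
                d = self.pending(t) / self.u
            self.deadline = t + d
            self.budget = d * self.u           # c_i(t) = d_i(t) * u_i

    def jfair(tasks, horizon):
        """Event-driven sketch of the Jfair policy; returns execution slices."""
        t, slices = F(0), []
        for task in tasks:
            task.assign_subjob(t)
        while t < horizon:
            for task in tasks:
                # Refresh a subjob at its own old deadline; Theorem 1 guarantees its
                # budget has been consumed by now when total utilization <= 1.
                if task.deadline <= t:
                    task.assign_subjob(t)
            next_deadline = min(task.deadline for task in tasks)
            ready = [task for task in tasks if task.budget > 0]
            if not ready:
                t = next_deadline              # idle until the next scheduler activation
                continue
            running = min(ready, key=lambda task: task.deadline)   # EDF among subjobs
            run = min(running.budget, next_deadline - t)
            slices.append((t, t + run, running.name))
            running.executed += run
            running.budget -= run
            t += run
        return slices

    # Taskset of the motivational example: c_i = 5, h_1 = 10, h_2 = h_3 = 20, lag limits 1.
    tasks = [Task("t1", F(5), F(10), F(1)),
             Task("t2", F(5), F(20), F(1)),
             Task("t3", F(5), F(20), F(1))]
    for start, end, name in jfair(tasks, F(20)):
        print(f"{float(start):6.2f}-{float(end):6.2f}  {name}")

Running the sketch on the example taskset (total utilization one) should produce a schedule that respects the lag limits of one; the exact slice boundaries depend on how ties between equal deadlines are broken, so the output need not coincide slice-for-slice with Figure 1(c).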

B. Theoretical guarantees

We shall now show that the system can be scheduled by our algorithm with any positive lag limits, as long as the utilization of the task set does not exceed one.

1) Non-violation of the upper lag limit: In order to avoid violating the upper lag limits, the concept of deadline is considered and the scheduler should allocate the proper amount of resource share before the deadline of the subjob expires.

The relative deadline d_i(t) of a subjob of task τ_i is the time before (or at) which the task must execute in order to respect the lag constraint defined in Equation (5).

Let us assume that we would like to find the deadline at time t_1, with lag λ_i(t_1). Therefore, the objective is to find the time instant t_2 at which the lag λ_i(t_2) reaches the upper lag limit Λ_i if the task does not execute in the interval [t_1, t_2] (i.e., ∫_{t_1}^{t_2} e_i(x) dx = 0),

    λ_i(t_2) = λ_i(t_1) + u_i·(t_2 − t_1) − ∫_{t_1}^{t_2} e_i(x) dx,    (8)
    Λ_i = λ_i(t_1) + u_i·(t_2 − t_1).

Note that (t_2 − t_1) is the relative deadline found at time t_1, i.e., d_i(t_1),

    Λ_i = λ_i(t_1) + u_i·d_i(t_1).    (9)

Finally, the relative deadline at time t_1 is given by

    d_i(t_1) = (Λ_i − λ_i(t_1))/u_i.    (10)

Of significant importance, the deadline is the instant at which the lag limits will be violated if the subjob does not execute (at all) before or at that instant. This is different from the typical notion of deadline in the real-time systems community, where the job should finish its execution before
the deadline. This means that the deadline can be modified to include the execution time of the subjobs as well,

    d_i(t) = (Λ_i − λ_i(t))/u_i + Λ_i/(1 − u_i) = Λ_i/(u_i·(1 − u_i)) − λ_i(t)/u_i,    (11)

where we have considered the execution time c_i(t) = Λ_i/(1 − u_i). Note that the resource allocated in the interval [0, t + d_i(t)] is u_i·(t + d_i(t)).

If a subjob of task τ_i does not violate its deadline d_i(t), the lag constraint λ_i(t) ≤ Λ_i will not be violated before its deadline. In other words, as long as the deadlines are met, the lag will not exceed the upper limit, i.e., λ_i(t) ≤ Λ_i.

2) Non-violation of the lower lag limit: To avoid violating the lower lag limits, the scheduling policy should limit the subjobs' execution times. Assuming that at time t a task τ_i has lag λ_i(t), the maximum execution time c̄_i(t) before hitting the lower lag limit −Λ_i is given by

    −Λ_i = λ_i(t) + u_i·c̄_i(t) − c̄_i(t),
    −Λ_i = λ_i(t) + c̄_i(t)·(u_i − 1),
    c̄_i(t) = (Λ_i + λ_i(t))/(1 − u_i).    (12)

This indicates that the lower lag limits will not be violated as long as

    c_i(t) ≤ c̄_i(t).    (13)

Note that t is the instant at which the execution of the subjob is assigned to it by the scheduler.

In our scheduling policy we want to enforce c_i(t) ≤ c̄_i(t) and c_i(t) = d_i(t)·u_i, which leads to

    c_i(t) = min { c̄_i(t), d_i(t)·u_i }.    (14)

Substituting c̄_i(t) from Equation (12) and d_i(t) from Equation (11) in the above equation, we have

    c_i(t) = min { Λ_i/(1 − u_i) + λ_i(t)/(1 − u_i),  Λ_i/(1 − u_i) − λ_i(t) },    (15)

which obtains its maximum when λ_i(t) = 0.

In general, the longer the subjobs, the fewer the subjobs and the preemptions. Therefore, the special case of λ_i(t) = 0 is used in our algorithm by carefully choosing the instant t at which the deadlines and execution times are assigned. This leads to a quasi-periodic pattern for subjobs with deadline d_i(t) = Λ_i/(u_i·(1 − u_i)) and execution time c_i(t) = d_i(t)·u_i. Since λ_i(t) = 0, the execution time satisfies Equation (13). The last subjob might have a shorter deadline, and accordingly a shorter execution time, and therefore also satisfies Equation (13).

To sum up, the condition in Equation (13) is satisfied, and there is no need to consider it explicitly.
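For the parameters of the example task τ_1 (u_1 = 0.5, Λ_1 = 1, λ_1 = 0), the formulas above can be checked numerically; the short sketch below reproduces d_1 = 4 and c_1 = 2 and confirms that the assigned execution time respects the bound of Equation (12). It is an illustration only.

    u, lag_limit, lam = 0.5, 1.0, 0.0

    d = lag_limit / (u * (1 - u)) - lam / u     # Equation (11): relative deadline
    c_max = (lag_limit + lam) / (1 - u)         # Equation (12): bound from the lower lag limit
    c = min(c_max, d * u)                       # Equation (14): assigned execution time

    assert d == 4.0 and c == 2.0 and c <= c_max   # matches the subjobs of tau_1 in Figure 1(c)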

3) Scheduling properties: It should now be clear that the lag limits will not be violated as long as the subjob execution times satisfy c_i(t) ≤ c̄_i(t) and the deadlines d_i(t) are met. The next theorem guarantees that all lag limits will be respected, provided Σ_{i=1}^n u_i ≤ 1.

Theorem 1: Given a periodic taskset and positive and independent lag limits for each task, there exists a scheduling policy that respects the lag limits if and only if the utilization of the taskset does not exceed one. The Jfair algorithm is an instance of such a scheduling policy.

Proof: We shall first focus on the proof of the if part. It has been shown that the lag constraints are satisfied as long as all scheduled tasks satisfy c_i(t) ≤ c̄_i(t) and all deadlines d_i(t) are met. It has been shown, in the previous subsection, that the former property is satisfied by itself under the Jfair policy. The latter is proved by contradiction in the following, for the proposed policy.

Let us assume that we have the first deadline miss at time t_1, while Σ_{i=1}^n u_i ≤ 1. Let us further consider the longest contiguous busy interval before time t_1, where all the subjobs executed in the interval [t_0, t_1] have their release times (or activation times) and deadlines in this interval. Let us denote the start time of this interval by t_0 and its length by l = t_1 − t_0. Since we considered the longest contiguous busy interval ending at t_1, just before time t_0 the resource is not processing subjobs with deadlines less than or equal to t_1. This means that t_0 should be the release time (or activation time) of a subjob with deadline before t_1. Since the assigned execution times in Jfair are computed based on c_i(t) = d_i(t)·u_i, the demand of each task τ_i in this interval, denoted by x_i, is less than or equal to l·u_i, i.e., l·u_i ≥ x_i. Taking all tasks (with both activation and deadline in this interval) into consideration, we obtain Σ_i l·u_i ≥ Σ_i x_i. Having a deadline miss at time t_1 indicates that Σ_i x_i > l. From Σ_i l·u_i ≥ Σ_i x_i and Σ_i x_i > l, we conclude Σ_i l·u_i > l. This indicates that Σ_{i=1}^n u_i > 1, which is a contradiction.

The only if part follows simply because no taskset with utilization above one is schedulable (the deadlines will be violated).

The next theorem discusses the number of preemptions when using the Jfair algorithm, compared to the optimal scheduling policy. We assume that each job experiences one preemption when it finishes.

Theorem 2: The number of preemptions by the proposed scheduling policy is at most three times the number of preemptions by any feasible schedule under the given lag limits.

Proof: First, we shall find a lower bound on the number of preemptions experienced by task τ_i in one period h_i, considering any feasible scheduling policy. Note that the set of feasible scheduling policies also includes the optimal scheduling policy, where optimality is defined in terms of the number of preemptions.

The proof is based on the observation that an upper bound on the maximum possible contiguous execution time, denoted by x, for each task can be computed. The upper

bound is obtained in the following scenario: the task has lag λ_i(t) = Λ_i and it executes without preemption until the lag is λ_i(t + x) = −Λ_i,

    −Λ_i = Λ_i + u_i·x − x,    x = 2·Λ_i/(1 − u_i).    (16)

From this, it is clear that the optimal number of preemptions for a task may not be less than ⌈c_i/x⌉ = ⌈c_i·(1 − u_i)/(2·Λ_i)⌉ in a period h_i. This is a lower bound on the minimum number of preemptions of task τ_i in one period h_i.

Secondly, let us compute the number of preemptions caused by task τ_i in one period h_i. The proof is based on the fact that the number of preemptions in a scheduling policy based on EDF is not more than the number of jobs. In short, this is due to the fact that a task is only preempted when a job of another task is released.

We shall now discuss the number of subjobs generated by our proposed policy for task τ_i in one period h_i. Considering the quasi-periodic pattern of execution, the number of subjobs in one period h_i will be ⌈c_i·(1 − u_i)/Λ_i⌉. This is an upper bound on the maximum number of preemptions caused by task τ_i in one period h_i, when Jfair is used. Further, we shall consider one extra preemption when the job finishes its execution, i.e., ⌈c_i·(1 − u_i)/Λ_i⌉ + 1.

Finally, observe that the following inequality holds between a lower bound on the minimum number of preemptions of task τ_i by any algorithm (including the optimal one) and an upper bound on the maximum number of preemptions caused by task τ_i in our Jfair algorithm, in one period h_i,

    ( ⌈c_i·(1 − u_i)/Λ_i⌉ + 1 ) / ⌈c_i·(1 − u_i)/(2·Λ_i)⌉ ≤ 3.    (17)

Therefore, the number of preemptions by our algorithm may not be more than three times that of the optimal policy, where optimality is defined in terms of the number of preemptions.
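Plugging the taskset of the motivational example into these bounds gives the figure quoted in Section III: over the 20-time-unit hyperperiod, the per-period lower bounds ⌈c_i·(1 − u_i)/(2·Λ_i)⌉ sum to 8, while the per-task Jfair upper bounds stay within the factor of three. The check below is a small illustration using the example parameters.

    from math import ceil

    tasks = [(5, 10, 1.0), (5, 20, 1.0), (5, 20, 1.0)]   # (c_i, h_i, Lambda_i)
    hyperperiod = 20

    optimal_lb = jfair_ub = 0
    for c, h, lag_limit in tasks:
        u = c / h
        jobs = hyperperiod // h
        optimal_lb += jobs * ceil(c * (1 - u) / (2 * lag_limit))   # lower bound, any policy
        jfair_ub += jobs * (ceil(c * (1 - u) / lag_limit) + 1)     # upper bound for Jfair
        assert (ceil(c * (1 - u) / lag_limit) + 1) <= 3 * ceil(c * (1 - u) / (2 * lag_limit))

    print(optimal_lb, jfair_ub)   # 8 and 18; the observed Jfair schedule in Figure 1(c) uses 14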

C. Comparison against server-based approach

We shall now briefly discuss an approach based on the resource reservation mechanism to guarantee a certain amount of jitter. We consider the important case of periodic servers [25]–[28] and further assume that for each control task there exists a dedicated server. The periodic server provides Q amount of time to the task associated with it, every period P it is activated.

Given the server period P and budget Q, the linear bounds on the best-case and worst-case response times based on the (α, Δ) model are [14]

    R^w = Δ + c/α,    R^b = max { c/α − Δ, c },    (18)

where the server bandwidth and delay are defined as follows,

    α = Q/P,    Δ = 2·(P − Q).    (19)

The jitter is then given by

    J = R^w − R^b = min { 2·Δ, Δ + c/α − c }.    (20)

Now we formulate an optimization problem to obtain the optimal periodic server that guarantees the jitter and maximizes the server budget (or alternatively the server period) in order to minimize the number of preemptions. Note that the budget corresponds to the length of the subjobs of a task, and therefore is related to the number of preemptions. From Equation (19), the server budget is Q = α·Δ/(2·(1 − α)). The optimization problem is as follows,

    max_{α, Δ}  α·Δ/(2·(1 − α))
    s.t.  min { 2·Δ, Δ + c/α − c } ≤ 2·Λ/u,    (21)

where the maximum tolerable jitter is written based on the lag limit as 2·Λ/u (see Equation (4)). Note that we consider Σ_{i=1}^n u_i = 1, and for the worst-case response time to be finite we need α ≥ u for each server [14]. This leads to the special case of α = u.

The above optimization problem can be transformed into two simpler optimization problems by transforming the minimum function in the constraint into two disjunctive constraints. Due to the disjunction in the constraint of (21), the solution is the best one among the solutions of the two optimization problems.

The first problem leads to the following budget,

    Q = Λ/(2·(1 − u)),    (22)

which is smaller by a factor of 2 when compared to Λ/(1 − u) of our algorithm (for all but the last subjob of each job).

The second problem leads to the following budget,

    Q = Λ/(1 − u) − c/2,    (23)

which is smaller than what is provided by our algorithm (for all but the last subjob of each job), i.e., Λ/(1 − u).

We conclude that the budget obtained by the resource reservation mechanism is less than what is obtained by our proposed algorithm.
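The closed-form budgets above can be compared directly with the length Λ/(1 − u) of the Jfair subjobs. For a task with, say, u = 0.25, Λ = 1, and c = 5 (the parameters of τ_2 in the example), the sketch below evaluates Equations (22) and (23) under α = u and shows that the admissible server budget is smaller than the Jfair subjob length, i.e., the server incurs more preemptions. It is an illustration under these assumptions only.

    def server_budget(c, u, lag_limit):
        """Admissible periodic-server budgets from Equations (22) and (23), with alpha = u."""
        q1 = lag_limit / (2 * (1 - u))       # from the constraint 2*Delta <= 2*Lambda/u
        q2 = lag_limit / (1 - u) - c / 2     # from the constraint Delta + c/alpha - c <= 2*Lambda/u
        # The better (larger) of the two disjunctive cases; a negative value means
        # that case yields no usable budget.
        return max(q1, q2)

    def jfair_subjob(c, u, lag_limit):
        """Execution time of a Jfair subjob (all but the last one of each job)."""
        return lag_limit / (1 - u)

    c, u, lag_limit = 5.0, 0.25, 1.0
    print(server_budget(c, u, lag_limit))    # max(0.666..., -1.166...) = 0.666...
    print(jfair_subjob(c, u, lag_limit))     # 1.333..., twice the server budget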

V. STABILITY, ANALYSIS, AND DESIGN

It is well known that the latency and jitter in the execution of the control tasks can lead to poor performance and may even jeopardize the stability of the control applications. To quantify the impact of the latency and jitter on the stability of the plant, we use the Jitter Margin toolbox [30], [5], [2], which provides sufficient stability conditions for a closed-loop system with a linear continuous-time plant and a linear discrete-time controller.

A. Control stability and Jfair

The Jitter Margin toolbox computes the maximum tolerable response-time jitter J_i for a given latency L_i. The solid curves in Figure 2 are examples of the stability curves generated by the Jitter Margin toolbox. Any latency-jitter combination below the stability curve is guaranteed to be stable. The graph is generated for the plant with transfer function 1000/(s² + s) and a discrete-time Linear-Quadratic-Gaussian (LQG) controller. The upper and lower solid curves correspond to sampling periods 6 ms and 12 ms, respectively.

Figure 2. The stability curves generated by the Jitter Margin toolbox and their linear lower bounds (the area below the curves is the stable area); the axes are the nominal delay L and the response-time jitter J [29].

Given a sampling period, the stability curve can safely and efficiently be approximated by a linear function of the latency and worst-case response-time jitter. The linear stability condition for a control application is of the form L_i + a_i·J_i ≤ b_i, where a_i ≥ 1 and b_i ≥ 0. The latency L_i is the constant part of the delay that the control application experiences, whereas the worst-case response-time jitter J_i captures the varying part of the delay (see Figure 3). The linear lower bounds on the original curves generated by the Jitter Margin toolbox, depicted by the dashed lines, are also shown in Figure 2.

Figure 3. Graphical interpretation of the latency and worst-case response-time jitter.

In order to use the stability condition introduced above, the values of the latency (L_i) and worst-case response-time jitter (J_i) of the control task should be computed. As mentioned before, the stability constraint is formulated as

    L_i + a_i·J_i ≤ b_i.    (24)

The latency and jitter are given by

    L_i = (c_i − Λ_i)/u_i = c_i/u_i − J_i/2,    J_i = 2·Λ_i/u_i.    (25)

Given the control and real-time parameters of an application, the lag limit is constrained by

    L_i + a_i·J_i ≤ b_i,
    (c_i − Λ_i)/u_i + a_i·(2·Λ_i)/u_i ≤ b_i,
    Λ_i ≤ (b_i·u_i − c_i)/(2·a_i − 1),    (26)

and the control task should be scheduled for this lag limit in order to guarantee the stability of the plant.

It is worth noting that reducing the lag limit Λ_i of an already stable task leads to an increase in the latency L_i, but a decrease in the jitter J_i (see Equation (25)). This does not lead to instability, as shown in the following:

    L_i + a_i·J_i ≤ b_i,
    (c_i − Λ_i)/u_i + a_i·(2·Λ_i)/u_i ≤ b_i,
    c_i/u_i + (2·a_i − 1)·Λ_i/u_i ≤ b_i,    (27)

since the coefficient of the lag limit Λ_i, i.e., (2·a_i − 1)/u_i, is positive.

B. Design for stability and Jfair

In this section, we shall formulate the design-for-stability problem. Given a set of control tasks with execution times c_i and periods h_i, we would like to achieve the minimum resource utilized to guarantee stability of the plants associated with the control tasks. The optimization variables are the resource share of each task and the lag limits.

Often in many systems, the processing unit is not fully utilized, and this can be exploited to guarantee the stability of control applications or alternatively to achieve better performance. In this section, we discuss the former by assigning more processing share to tasks.

As for the design problem, we should find the optimal values of the processor share inflation factors s_i and lag limits Λ_i for all applications. The stability constraint considering the inflation factor s_i is

    L_i + a_i·J_i ≤ b_i,
    (c_i − Λ_i)/(u_i·s_i) + a_i·(2·Λ_i)/(u_i·s_i) ≤ b_i,
    c_i + (2·a_i − 1)·Λ_i − b_i·u_i·s_i ≤ 0.    (28)

There are numerous optimization objectives that could be discussed. One common objective is to minimize the processor utilization required to guarantee the stability of the
plants. This could simply be formulated as follows for the proposed scheduling policy,

    min_{s_1,...,s_n, Λ_1,...,Λ_n}  Σ_{i=1}^n [ u_i·s_i + σ·c_i·(1 − u_i·s_i)/Λ_i ]
    s.t.  c_i + (2·a_i − 1)·Λ_i − b_i·u_i·s_i ≤ 0,    i = 1, ..., n,    (29)

where σ is the switching overhead. We assume that the overhead is proportional to c_i/c̄_i = c_i·(1 − u_i)/Λ_i (see Section IV), c̄_i being the execution time of each subjob.

The problem can then be reformulated as n smaller problems, one for each task τ_i,

    min_{s_i, Λ_i}  u_i·s_i + σ·c_i·(1 − u_i·s_i)/Λ_i
    s.t.  c_i + (2·a_i − 1)·Λ_i − b_i·u_i·s_i ≤ 0.    (30)

Since the focus is on task τ_i, for the sake of presentation, we drop the index i in the following.

To solve each problem, the KKT (Karush-Kuhn-Tucker) necessary conditions for optimality [31] are considered. The optimal solution, denoted by x*, of the problem

    min f(x)    s.t.    g(x) ≤ 0

must necessarily satisfy the following conditions:

    ∇f(x*) + μ·∇g(x*) = 0,
    μ·g(x*) = 0,
    μ ≥ 0.

The following equalities are obtained based on the KKT conditions,

    u − σ·c·u/Λ − μ·b·u = 0,    (31)
    −σ·c·(1 − u·s)/Λ² + μ·(2·a − 1) = 0,    (32)
    μ·(c + (2·a − 1)·Λ − b·u·s) = 0.    (33)

Considering Equation (32), if μ = 0, then u·s = 1, which is not a valid solution when the platform is shared. Therefore, without loss of generality, we consider μ > 0, which leads to

    c + (2·a − 1)·Λ − b·u·s = 0,    (34)

if we consider Equation (33). This leads to

    s = (c + (2·a − 1)·Λ)/(b·u).    (35)

From Equation (31), we find μ,

    μ = (Λ − σ·c)/(b·Λ).    (36)

Substituting μ and s in Equation (32), we obtain a quadratic equation with its positive root as the optimal value of Λ,

    Λ* = √( σ·c·(b − c)/(2·a − 1) ).    (37)

Substituting in Equation (35), we obtain the optimal value of s,

    s* = ( c + √( (2·a − 1)·σ·c·(b − c) ) )/(b·u).    (38)

Finally, the taskset is schedulable and all plants are guaranteed to be stable on a uniprocessor platform if

    Σ_{i=1}^n [ u_i·s_i* + σ·c_i·(1 − u_i·s_i*)/Λ_i* ] ≤ 1.
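As a worked instance of the per-task design problem, the sketch below takes hypothetical control parameters (a, b are assumptions chosen for illustration, not taken from the paper), computes the largest stability-preserving lag limit of Equation (26) for the non-inflated case, and then the KKT-based optimum (Λ*, s*) of Equations (37) and (38).

    from math import sqrt

    def max_lag_limit(a, b, c, u):
        """Largest lag limit allowed by the linear stability condition, Equation (26)."""
        return (b * u - c) / (2 * a - 1)

    def optimal_design(a, b, c, u, sigma):
        """Optimal lag limit and inflation factor, Equations (37) and (38)."""
        lag_limit = sqrt(sigma * c * (b - c) / (2 * a - 1))
        s = (c + sqrt((2 * a - 1) * sigma * c * (b - c))) / (b * u)
        return lag_limit, s

    def cost(c, u, sigma, lag_limit, s):
        """Per-task term of the objective in Equation (29)."""
        return u * s + sigma * c * (1 - u * s) / lag_limit

    # Hypothetical control task: c = 2, h = 8 (u = 0.25), stability bound L + 1.5*J <= 10,
    # switching overhead sigma = 0.05.
    a, b, c, u, sigma = 1.5, 10.0, 2.0, 0.25, 0.05

    plain_lag = max_lag_limit(a, b, c, u)                  # 0.25: no share inflation (s = 1)
    best_lag, best_s = optimal_design(a, b, c, u, sigma)   # about 0.632 and 1.306

    print(cost(c, u, sigma, plain_lag, 1.0))    # about 0.55
    print(cost(c, u, sigma, best_lag, best_s))  # about 0.43: inflating the share pays off

For these assumed numbers, inflating the processor share allows a larger lag limit (fewer subjobs, less overhead) while keeping the plant stable, which is exactly the trade-off the objective of Equation (29) captures.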

VI. AN IMPROVED SCHEDULING POLICY

In this section, the proposed Jfair scheduling policy is improved to have fewer preemptions. We shall refer to the algorithm in this section as iJfair (improved Jfair). The difference originates from how the execution times and artificial deadlines are assigned.

The scheduling policy is modified as follows:
- The deadlines of the tasks are assigned according to Equation (10). For the task τ_i with the earliest deadline in the set S(t), however, the deadline is modified to

    d_i(t) = min { c̄_i(t), min_{τ_j ∈ T\{τ_i}} d_j(t) },    (39)

where c̄_i(t) = (Λ_i + λ_i(t))/(1 − u_i) is the bound of Equation (12), and the set S(t) is defined as

    S(t) = { τ_i ∈ T | w_i(t) > 0 ∧ λ_i(t) ≥ 0 }.    (40)

- A task has an execution time c_i(t),

    c_i(t) = min { d_i(t)·(1 − Σ_{τ_k ∈ D_i(t)} u_k),  w_i(t) },    (41)

with

    D_i(t) = { τ_k ∈ T\{τ_i} | d_k(t) ≤ d_i(t) + x },    (42)

and x being a precision constant.
- The new deadlines are assigned at the old deadlines or when a task finishes its execution.

The improved scheduling policy (iJfair) respects the lag limits as long as the utilization is not above one.

To show this, first, we shall prove that the lower limit on the lags is never violated. This follows from the fact that the execution time c_i(t) is less than or equal to the maximum execution time c̄_i(t) (since d_i(t) ≤ c̄_i(t)). Therefore, the lower lag limit will not be violated.

Next, it should be noticed that as long as the deadlines are met, the upper lag limits are respected. Now, we shall show that all deadlines will be met under the improved scheduling policy. Since in any interval the demand (as in the demand bound function) is less than the length of the interval, all deadlines will be satisfied.

We have discussed the correctness of the algorithm, in the sense that the jitter limits are not violated. However, as opposed to Jfair, here we could not prove any limits on the number of preemptions.
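A minimal sketch of the modified step is given below. It follows the reconstruction of Equations (39)–(42) above, so the set definitions and the role of the precision constant x should be treated as assumptions rather than the paper's exact formulation: for the earliest-deadline task in S(t), the deadline is stretched up to the next deadline of the other tasks (or to the lower-lag-limit bound), and the execution time takes the capacity not claimed by tasks with nearby deadlines.

    def ijfair_extend(tasks, i, t, x=1e-9):
        """Extended deadline and execution time for task i (earliest deadline in S(t)).

        Each task is a dict with keys: u, lag, lag_limit, deadline (relative,
        from Equation (10)) and pending (Equation (6)). Illustrative only;
        based on the reconstruction of Equations (39)-(42).
        """
        me = tasks[i]
        others = [tasks[k] for k in range(len(tasks)) if k != i]
        c_bar = (me["lag_limit"] + me["lag"]) / (1 - me["u"])          # Equation (12)
        d = min(c_bar, min(o["deadline"] for o in others))             # Equation (39)
        near = sum(o["u"] for o in others if o["deadline"] <= d + x)   # D_i(t), Equation (42)
        c = min(d * (1 - near), me["pending"])                         # Equation (41)
        return d, c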

VII. EXPERIMENTAL RESULTS

In this section, we present experimental results to demonstrate the efficiency of the proposed algorithms. First, the comparison between the Jfair algorithm and a server-based approach is performed. Second, we compare the number of preemptions in the schedules synthesized by different scheduling policies.

A. Stability

To evaluate our proposed algorithm experimentally, we have generated 500 benchmarks with a number of control applications from 2 to 10. The plants considered are chosen from a database consisting of inverted pendulums, ball and beam processes, DC servos, and harmonic oscillators [1], [5]. Such plants are considered to be representative of realistic control problems and are extensively used for experimental evaluation. The UUniFast algorithm [32] is used to generate a set of random tasks for a given utilization. The switching overhead is σ = r·min_{i=1...n}{c_i}, where r is a random variable uniformly distributed in the interval [0.01, 0.10].

We compare our proposed approach against a server-based approach similar to Section IV-C. However, the approach based on the resource reservation mechanism essentially minimizes the bandwidth required for guaranteeing the stability of the control applications. The experiments are repeated for several values of total task utilization (Σ_{i=1}^n u_i) and the results are shown in Figure 4. The metric used for this comparison is the relative quality, defined as (N_Jfair − N_Server)/N_Jfair · 100, where N_X is the number of benchmarks for which approach X could find a valid solution, i.e., a solution for which all control applications are stable and the system is schedulable.

Figure 4. Comparison between Jfair and server approaches (invalid solutions [%] versus utilization [%]).

For low task utilization, our approach outperforms the server-based approach only in 10% of the benchmarks, in terms of the number of valid solutions. This percentage increases to 59% for high task utilization, indicating that in 59% of the benchmarks our proposed Jfair approach guarantees a stable and schedulable solution, while the approach based on the resource reservation mechanism fails to do so. The trend is also clearly visible, i.e., increasing the task utilization leads to an increase in the number of invalid solutions for the server-based approach, compared to Jfair.

B. Number of preemptions

To evaluate the efficiency of the proposed scheduling policies in Section VI, we compare our approach against the Pfair algorithm [15] and a theoretical lower bound. To this end, 100 benchmarks are generated with utilization one, for a number of tasks varying between 2 and 6. The lag limits are set to one for all tasks, i.e., Λ_i = 1, since this is a limitation of the Pfair algorithm.

The metric of comparison is the preemption density and the results are shown in Figure 5. The minimal density of preemptions then is

    Σ_{i=1}^n u_i·(1 − u_i)/(2·Λ_i).    (43)

The theoretical lower bound on the number of preemptions may be obtained using a similar idea as in Theorem 2. To obtain the preemption density for Jfair, iJfair, and Pfair, we use simulation.

Figure 5. Preemption density of Pfair, Jfair, iJfair, and a theoretical lower bound (average preemption density versus number of tasks).

Note that both Jfair and iJfair outperform the Pfair algorithm in terms of the average number of preemptions. Further, the gap between iJfair and the theoretical lower bound is small, when compared to the Pfair algorithm.
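For reference, the bound of Equation (43) is straightforward to evaluate. For the three-task example of Section III it gives 0.3125, which sits below the densities of Table I and below the 0.4 obtained from the integer per-period bound of Theorem 2 (8 preemptions over the 20-unit hyperperiod), since Equation (43) drops the rounding to whole preemptions. A one-line check:

    def min_preemption_density(tasks):
        """Lower bound on the preemption density, Equation (43)."""
        return sum(u * (1 - u) / (2 * lag_limit) for u, lag_limit in tasks)

    # (u_i, Lambda_i) for the example of Section III.
    print(min_preemption_density([(0.5, 1.0), (0.25, 1.0), (0.25, 1.0)]))   # 0.3125
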
VIII. CONCLUSIONS

It is well known that the temporal behavior of the control tasks affects the control quality and the stability of the plants. In particular, the latency and jitter are two decisive factors in analyzing and designing embedded control systems or cyber-physical systems. We have proposed an online scheduling policy to limit the variation in the response time of the tasks, while also considering the latency. The correctness of the algorithms is formally proved and the efficiency of the algorithms is evaluated both theoretically and experimentally. Finally, the results are used in a design optimization problem in order to guarantee the stability of the control applications, while minimizing the resource usage.
ACKNOWLEDGEMENTS

The authors would like to acknowledge Prof. Enrico Bini for our fruitful discussions and his valuable comments on an earlier draft of this paper, and Prof. Anton Cervin and Dr. Bo Lincoln for providing the Jitter Margin toolbox. The authors would also like to acknowledge Prof. Jian-Jia Chen for kindly accepting the shepherding process of the paper.

REFERENCES

[1] K. J. Åström and B. Wittenmark, Computer-Controlled Systems, 3rd ed. Prentice Hall, 1997.
[2] A. Cervin, "Stability and worst-case performance analysis of sampled-data control systems with input and output jitter," in Proceedings of the 2012 American Control Conference (ACC), 2012.
[3] D. Seto, J. P. Lehoczky, L. Sha, and K. G. Shin, "On task schedulability in real-time control systems," in Proceedings of the 17th IEEE Real-Time Systems Symposium, 1996, pp. 13–21.
[4] H. Rehbinder and M. Sanfridson, "Integration of off-line scheduling and optimal control," in Proceedings of the 12th Euromicro Conference on Real-Time Systems, 2000, pp. 137–143.
[5] A. Cervin, B. Lincoln, J. Eker, K.-E. Årzén, and G. Buttazzo, "The jitter margin and its application in the design of real-time control systems," in Proceedings of the 10th International Conference on Real-Time and Embedded Computing Systems and Applications, 2004.
[6] T. Nghiem, G. J. Pappas, R. Alur, and A. Girard, "Time-triggered implementations of dynamic controllers," in Proceedings of the 6th ACM & IEEE International Conference on Embedded Software, 2006, pp. 2–11.
[7] E. Bini and A. Cervin, "Delay-aware period assignment in control systems," in Proceedings of the 29th IEEE Real-Time Systems Symposium, 2008, pp. 291–300.
[8] F. Zhang, K. Szwaykowska, W. Wolf, and V. Mooney, "Task scheduling for control oriented requirements for cyber-physical systems," in Proceedings of the 29th IEEE Real-Time Systems Symposium, 2008, pp. 47–56.
[9] P. Naghshtabrizi and J. P. Hespanha, "Analysis of distributed control systems with shared communication and computation resources," in Proceedings of the 2009 American Control Conference (ACC), 2009.
[10] R. Majumdar, I. Saha, and M. Zamani, "Performance-aware scheduler synthesis for control systems," in Proceedings of the 9th ACM International Conference on Embedded Software, 2011, pp. 299–308.
[11] D. Goswami, M. Lukasiewycz, R. Schneider, and S. Chakraborty, "Time-triggered implementations of mixed-criticality automotive software," in Proceedings of the 15th Conference for Design, Automation and Test in Europe (DATE), 2012.
[12] P. Kumar, D. Goswami, S. Chakraborty, A. Annaswamy, K. Lampka, and L. Thiele, "A hybrid approach to cyber-physical systems verification," in Proceedings of the 49th Design Automation Conference, 2012.
[13] A. Aminifar, S. Samii, P. Eles, Z. Peng, and A. Cervin, "Designing high-quality embedded control systems with guaranteed stability," in Proceedings of the 33rd IEEE Real-Time Systems Symposium, 2012, pp. 283–292.
[14] A. Aminifar, E. Bini, P. Eles, and Z. Peng, "Designing bandwidth-efficient stabilizing control servers," in Proceedings of the 34th IEEE Real-Time Systems Symposium, 2013, pp. 298–307.
[15] S. K. Baruah, N. K. Cohen, C. G. Plaxton, and D. A. Varvel, "Proportionate progress: A notion of fairness in resource allocation," Algorithmica, vol. 15, no. 6, pp. 600–625, 1996.
[16] D. Verma, H. Zhang, and D. Ferrari, "Delay jitter control for real-time communication in a packet switching network," in Proceedings of the IEEE Conference on Communications Software: Communications for Distributed Applications and Systems, 1991, pp. 35–43.
[17] A. K. Mok and X. A. Feng, "Towards compositionality in real-time resource partitioning based on regularity bounds," in Proceedings of the 22nd IEEE Real-Time Systems Symposium, 2001, pp. 129–138.
[18] X. Feng and A. Mok, "A model of hierarchical real-time virtual resources," in Proceedings of the 23rd IEEE Real-Time Systems Symposium, 2002, pp. 26–35.
[19] S. Baruah, G. Buttazzo, S. Gorinsky, and G. Lipari, "Scheduling periodic task systems to minimize output jitter," in Proceedings of the 6th IEEE Embedded and Real-Time Computing Systems and Applications (RTCSA) Conference, 1999, pp. 62–69.
[20] M. D. Natale and J. A. Stankovic, "Scheduling distributed real-time tasks with minimum jitter," IEEE Transactions on Computers, vol. 49, no. 4, pp. 303–316, 2000.
[21] M. Westmijze, M. Bekooij, G. Smit, and M. Schrijver, "Evaluation of scheduling heuristics for jitter reduction of real-time streaming applications on multi-core general purpose hardware," in Proceedings of the 9th IEEE Symposium on Embedded Systems for Real-Time Multimedia (ESTIMedia), Oct. 2011, pp. 140–146.
[22] S. Hong, X. Hu, and M. Lemmon, "Reducing delay jitter of real-time control tasks through adaptive deadline adjustments," in Euromicro Conference on Real-Time Systems, 2010, pp. 229–238.
[23] L. T. X. Phan and I. Lee, "Improving schedulability of fixed-priority real-time systems using shapers," in Proceedings of the 19th IEEE Real-Time and Embedded Technology and Applications Symposium, 2013, pp. 217–226.
[24] B. Mochocki, X. Hu, R. Racu, and R. Ernst, "Dynamic voltage scaling for the schedulability of jitter-constrained real-time embedded systems," in IEEE/ACM International Conference on Computer-Aided Design, 2005, pp. 446–449.
[25] S. Saewong, R. Rajkumar, J. Lehoczky, and M. Klein, "Analysis of hierarchical fixed-priority scheduling," in Proceedings of the 14th Euromicro Conference on Real-Time Systems, 2002, pp. 152–160.
[26] G. Lipari and E. Bini, "Resource partitioning among real-time applications," in Proceedings of the 15th Euromicro Conference on Real-Time Systems, 2003, pp. 151–158.
[27] I. Shin and I. Lee, "Periodic resource model for compositional real-time guarantees," in Proceedings of the 24th IEEE Real-Time Systems Symposium, 2003, pp. 2–13.
[28] L. Almeida and P. Pedreiras, "Scheduling within temporal partitions: response-time analysis and server design," in Proceedings of the 4th ACM International Conference on Embedded Software, 2004, pp. 95–103.
[29] A. Aminifar, P. Eles, Z. Peng, and A. Cervin, "Stability-aware analysis and design of embedded control systems," in Proceedings of the 11th ACM International Conference on Embedded Software, 2013, pp. 23:1–23:10.
[30] C.-Y. Kao and B. Lincoln, "Simple stability criteria for systems with time-varying delays," Automatica, vol. 40, pp. 1429–1434, 2004.
[31] M. Bazaraa, H. Sherali, and C. Shetty, Nonlinear Programming: Theory and Algorithms. Wiley, 2006.
[32] E. Bini and G. C. Buttazzo, "Measuring the performance of schedulability tests," Real-Time Systems, vol. 30, no. 1–2, pp. 129–154, 2005.
