Using Integer Programming Techniques in Real-Time Scheduling Analysis

Abhishek Singh

Abstract—Real-time scheduling theory assists developers of embedded systems in verifying that the timing constraints required by
critical software tasks can be feasibly met on a given hardware platform. Fundamental problems in the theory are often formulated as
search problems for fixed points of functions and are solved by fixed-point iterations. These fixed-point methods are used widely
because they are simple to understand, simple to implement, and seem to work well in practice. These fundamental problems can also
be formulated as integer programs and solved with algorithms that are based on theories of linear programming and cutting planes
amongst others. However, such algorithms are harder to understand and implement than fixed-point iterations. In this research, we
show that ideas like linear programming duality and cutting planes can be used to develop algorithms that are as easy to implement as
existing fixed-point iteration schemes but have better convergence properties. We evaluate the algorithms on synthetically generated
problem instances to demonstrate that the new algorithms are faster than the existing algorithms.

1 INTRODUCTION

IN real-time scheduling theory, a task with hard timing constraints receives an infinite stream of requests and must respond to each request within a fixed amount of time. We consider a system comprising n independent preemptible sporadic tasks, labeled 1, 2, . . . , n. Each sporadic task i ∈ [n] has the following (integral) characteristics:

Symbol | Task Characteristic
C_i    | worst-case execution time (wcet)
T_i    | minimum duration between successive request arrivals (period)
D_i    | maximum duration between request arrival and response (relative deadline)

If the k-th request for task i arrives at time t, then the following statements must be true:
• The request must be completed by time t + D_i in a feasible schedule. In other words, t + D_i is the absolute deadline for the completion of the response to the request.
• The (k + 1)-th request for task i cannot arrive before t + T_i.

If D_i = T_i, then the deadline D_i is said to be implicit; if D_i ≤ T_i, then the deadline D_i is said to be constrained. If all tasks in a system have implicit deadlines, then the system is an implicit-deadline system. If all tasks in a system have constrained deadlines, then the system is a constrained-deadline system; otherwise, it is an arbitrary-deadline system.

We assume that the tasks run on a single processor and are scheduled using either a fixed-priority (FP) scheduler or an earliest-deadline-first (EDF) scheduler. An FP scheduler assigns static priorities to tasks. An EDF scheduler dynamically assigns the highest priority to any task with the earliest absolute deadline.

Given a collection of tasks and a scheduler, the problem of deciding whether timing constraints are met for all sequences of request arrivals is called the schedulability problem. When the scheduler is an FP (resp., EDF) scheduler, the problem is called FP schedulability (resp., EDF schedulability). An algorithm which solves the problem is called a schedulability test. FP schedulability and EDF schedulability are fundamental problems in real-time scheduling theory, with countless papers devoted to finding faster exact schedulability tests or sufficient schedulability tests with higher accuracy¹.

A constrained-deadline preemptive uniprocessor system with tasks [n] is FP schedulable if and only if the subsystem with tasks [n − 1] is schedulable² and

∃t ∈ (0, D_n] : rbf(t) ≤ t

holds [1], [2]. Here, rbf, the request bound function, is given by

t ↦ Σ_{j∈[n]} ⌈t/T_j⌉ C_j.   (1)

The above condition is satisfied if and only if the problem

min{t ∈ (0, D_n] ∩ Z | rbf(t) ≤ t}   (2)

has an optimal solution. The optimal solution must satisfy the fixed point equation rbf(t) = t. The solution to this equation can be found by using fixed-point iteration, i.e., by starting with a safe lower bound t for the fixed point and iteratively updating t to rbf(t). This algorithm, called Response Time Analysis (RTA), is an exact FP schedulability test for constrained-deadline preemptive uniprocessor systems [1]. We show the execution of RTA for a small example in Figure 1; the up arrows denote the computation of rbf(t) and the right arrows denote the update t ← rbf(t). Geometrically, the algorithm resembles climbing steps.

1. Given a problem instance, a sufficient test either declares that all timing constraints are satisfied or it declares nothing, so the constraints may or may not be violated.
2. For FP schedulability, we assume that tasks are listed in decreasing order of priority.

A. Singh is with the Department of Computer Science and Engineering at Washington University in Saint Louis, MO, 63130. E-mail: abhishek.s@wustl.edu.
Manuscript received October, 2022; revised.
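To make the fixed-point iteration concrete, here is a minimal Python sketch of RTA as described above. The task representation, the helper names, and the choice of starting point (the sum of the wcets, which is a valid lower bound on the least fixed point) are illustrative assumptions, not taken from the paper.

import math

def rbf(tasks, t):
    # Request bound function (1): work requested by time t, tasks given as (C, T) pairs.
    return sum(C * math.ceil(t / T) for (C, T) in tasks)

def rta(tasks, D_n):
    # RTA (sketch): find the least t in (0, D_n] with rbf(t) <= t, i.e., the least
    # fixed point of rbf, by iterating t <- rbf(t) from a safe lower bound.
    t = sum(C for (C, T) in tasks)      # any lower bound on the least fixed point works
    while t <= D_n:
        nxt = rbf(tasks, t)
        if nxt == t:
            return t                    # fixed point reached: task n meets its deadline
        t = nxt                         # climb to the next step of rbf
    return None                         # the iterate exceeded D_n: unschedulable

# Figure 1 example: rta([(2, 4), (1, 5), (3.3, 15)], 15) climbs to the fixed point below 15.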
Fig. 1: rbf(t) (blue, dashed) for tasks {(2, 4), (1, 5), (3.3, 15)} and progress of RTA (red, solid arrows) from initial value 2. (Only the caption is reproduced here.)

An arbitrary-deadline preemptive uniprocessor system is not EDF schedulable iff Σ_{j∈[n]} U_j > 1 or

∃t ∈ [0, L) : t < dbf(t)

holds [3], [4]. Here, dbf, the demand bound function, is given by

t ↦ Σ_{j∈[n], t≥D_j−T_j} ⌊(t + T_j − D_j)/T_j⌋ C_j,   (3)

and L is a large constant like lcm_i T_i (more details on L are provided later). The above condition is satisfied if and only if the problem

max{t ∈ [D_1, L) ∩ Z : t < dbf(t)}   (4)

has an optimal solution. The optimal solution must satisfy the fixed point equation dbf(t) − 1 = t. The solution can be found by starting with a safe upper bound t for the fixed point and iteratively updating t to dbf(t) − 1. This algorithm, called Quick Processor-Demand Analysis (QPA), is an exact EDF schedulability test for arbitrary-deadline preemptive uniprocessor systems [5]. The inventors of QPA use a slightly more elaborate iterative step [5, Th. 5], but the above update works assuming that all data are integral. Geometrically, QPA looks like climbing down the steps of dbf. Similarities and differences between RTA and QPA are summarized in Table 1.

We show that problem (2) (FP schedulability) and problem (4) (EDF schedulability) can be efficiently reduced to a common problem (8). The reduction from FP schedulability is strong in the sense that the source instance is transformed once to the target instance to find the solution. The reduction from EDF schedulability is weak because we may need to create up to n instances of the target problem to find the solution for the source problem.³ The strong reduction from FP schedulability is possible because the target, problem (8), is a close generalization of FP schedulability.

[Diagram: FP schedulability reduces to problem (8) via a strong reduction, and EDF schedulability reduces to problem (8) via a weak reduction; problem (8) is then formulated as an integer program.]

We formulate problem (8) as an integer program and solve it by using the cutting plane methodology, which involves relaxing the integer program to a linear program, solving the linear program, updating the integer program with a cut if the solution is not integral, and repeating the process (described in more detail in the next section). Two questions must be answered to apply this technique:
• How is the cut generated?
• How is the linear program solved?
We show that for our integer program cuts can be generated so that they only change bounds of variables in the integer program; thus, the number of constraints in the integer program does not increase after incorporating the cut. To solve the linear program, we design a specialized algorithm which uses O(n log(n)) arithmetic ops. We could also use the ellipsoid method, for instance, to solve the program in weakly polynomial time, but such sophistication is excessive for the problem at hand: our algorithm is conceptually simple, easy to implement, and strongly polynomial-time. By composing the strong (resp., weak) reduction with our cutting plane algorithm, we get an iterative algorithm for FP schedulability (resp., EDF schedulability) which does almost linear work in each iteration.

We show that RTA and QPA may also be viewed as cutting plane algorithms that generate cuts the same way that we do. However, instead of solving the linear relaxation of the integer program in each iteration, they solve a further relaxation of the linear relaxation. This over-relaxation allows each iteration of RTA and QPA to complete using only O(n) arithmetic ops, but it also leads to slower convergence compared to our algorithm. We measure the number of iterations required by the algorithms in experiments on synthetically generated problem instances for FP schedulability (resp., EDF schedulability) to show that our algorithm is faster and more predictable than RTA (resp., QPA).

3. The distinction between strong and weak reductions is similar to the distinction between Karp and Cook reductions for decision problems but we are working with optimization problems.
                     | RTA                                                    | QPA
tasks                | constrained-deadline, preemptive                       | arbitrary-deadline, preemptive
processor            | single                                                 | single
scheduler            | fixed-priority                                         | earliest-deadline-first
functional iteration | update t to rbf(t)                                     | update t to dbf(t) − 1
final value          | least fixed point of rbf(t), if system is schedulable  | greatest fixed point of dbf(t) − 1, if system is unschedulable

TABLE 1: Comparing RTA and QPA.
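To complement the RTA sketch above, here is a minimal Python sketch of the QPA iteration summarized in Table 1, under the paper's assumption of integral task data and a precomputed upper bound L; the names and packaging are illustrative.

def dbf(tasks, t):
    # Demand bound function (3): tasks are (C, D, T) triples with integral data.
    return sum(C * ((t + T - D) // T) for (C, D, T) in tasks if t >= D - T)

def qpa(tasks, L):
    # QPA (sketch): find the largest t in [D_1, L) with t < dbf(t) by iterating
    # t <- dbf(t) - 1 downward from a safe upper bound (assumes total utilization <= 1).
    D_1 = min(D for (C, D, T) in tasks)
    t = L - 1
    while t >= D_1:
        demand = dbf(tasks, t)
        if demand > t:
            return t                    # deadline miss: the system is not EDF schedulable
        t = demand - 1                  # climb down the steps of dbf
    return None                         # no deadline miss in [D_1, L): EDF schedulable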

2 BACKGROUND

2.1 Integer Programs and Cutting Plane Methods

An optimization problem over variable x that can be expressed as

min{c · x | x ∈ R^n_{≥0}, Ax ≥ b},   (5)

for some A ∈ R^{m×n}, b ∈ R^m, c ∈ R^n is a linear program. Here c · x is the dot product of the vectors. If x is restricted to be integral, then the problem is a (linear) integer program:

min{c · x | x ∈ Z^n_{≥0}, Ax ≥ b}.   (6)

A relaxation of an optimization problem with a maximization (resp., minimization) objective is a simpler optimization problem with optimal value at least as large (resp., small) as the optimal value of the original problem. Problems can be relaxed in many ways but we will primarily be concerned with relaxations of integer programs in which the set of feasible solutions is expanded by
• allowing variables to be continuous instead of integral; and/or
• dropping constraints.
If the relaxation only involves making the variables continuous, then it is called a linear relaxation. Thus, program (5) is a linear relaxation of program (6).

Many optimization problems, including integer programs, can be solved using the cutting-plane methodology, which consists of repeating the following steps:
1) Solve a relaxation of the problem.
2) If the relaxation is found to be infeasible, then the original problem must be infeasible and we exit the algorithm.
3) If the optimal solution for the relaxation is feasible for the original problem, then the solution must also be optimal for the original problem and we return the solution.
4) Find one or more valid inequalities⁴ that separate the current (nonintegral) solution from the convex hull of the set of feasible solutions to the original problem. Such separating inequalities, or cuts, are guaranteed to exist when the set of feasible solutions of the relaxation contains the convex hull of the set of feasible solutions to the original problem.
5) Add the inequalities to the problem description and go to the first step.

After each iteration, we have a better approximation for the convex hull of the set of feasible solutions to the original problem, but the problem description, in general, is longer. Since the problem is more restricted in each iteration, the optimal values found in each iteration, denoted Z¹, Z², . . . , must satisfy

Z¹ ≤ Z² ≤ · · ·

assuming that the direction of optimization is minimization. These values are sometimes called dual bounds for the optimal value, denoted Z. More details about integer programs can be found in standard textbooks [6].

4. An inequality is valid for a problem if it is satisfied by all feasible solutions to the problem.

2.2 Linear Programming Duality

The linear program

max{b · y | y ∈ R^m_{≥0}, Aᵀy ≤ c}   (7)

is called the dual of program (5), which is called the primal program in such a context. Linear programming duality is the idea that for such a pair of programs exactly one of the following statements is true:
• Both programs are infeasible.
• One program is unbounded, and the other program is infeasible.
• Both programs have optimal solutions with the same objective value.
More details about linear programming duality can be found in standard textbooks [7].

3 A GENERALIZATION OF FP SCHEDULABILITY

In this section, we consider the following problem:

min{t ∈ [a, b] ∩ Z | Σ_{j∈[n]} ⌈(t + α_j)/T_j⌉ C_j + β ≤ t},   (8)

where
• n is a positive integer.
• For any i ∈ [n], C_i, T_i ∈ Z_{>0} are constants, and U_i denotes the ratio C_i/T_i. We assume that Σ_{j∈[n]} U_j ≤ 1.
• α ∈ Z^n, β ∈ Z, a ∈ Z, and b ∈ Z are constants.
It is useful to view C_i, T_i, and U_i as the wcet, period, and utilization, respectively, of task i, but our methods do not require us to adhere to this interpretation. If (α, β, a, b) is equal to (0, 0, 1, D_n), then problem (8) is identical to problem (2). Thus, problem (8) is a close generalization of problem (2).

If t = a is a feasible solution to the problem, then it is also the optimal solution. We assume that we are not in this trivial case and t = a is infeasible, i.e.,

Σ_{j∈[n]} ⌈(a + α_j)/T_j⌉ C_j + β > a.   (9)

Now, t ≥ a iff for all i ∈ [n],

⌈(t + α_i)/T_i⌉ ≥ ⌈(a + α_i)/T_i⌉.

The forward direction is trivial; the backward direction uses the inequality in problem (8) and assumption (9). Using the equivalence, the problem can be rewritten as

min  t
s.t. t ≤ b
     t − Σ_{j∈[n]} C_j x_j(t) ≥ β
     x_i(t) ≥ x̲_i,   i ∈ [n]
     t ∈ Z
where, for all i ∈ [n], we have

x_i(t) = ⌈(t + α_i)/T_i⌉,
x̲_i = ⌈(a + α_i)/T_i⌉.

x_i(t) is a function of the variable t, and x̲_i is a constant lower bound for x_i(t).

The above problem can be formulated as an integer program:

min  t
s.t. t ≤ b
     t − Σ_{j∈[n]} C_j x_j ≥ β
     T_i x_i − t ≥ α_i,   i ∈ [n]            (10)
     x_i ≥ x̲_i,   i ∈ [n]
     t ∈ Z
     x ∈ Z^n

Here, x_i is an integer variable (not a function), but the relationship between x_i and t is preserved in this transformation from the previous problem because any optimal solution (t, x) to the integer program satisfies

x_i = max{⌈(t + α_i)/T_i⌉, x̲_i} = ⌈(t + α_i)/T_i⌉
t = Σ_{j∈[n]} C_j x_j

In the second equality on the first line, we use the definition of x̲_i and the fact that t ≥ a.

To apply the cutting plane methodology on program (10), we relax it by making the variables continuous and dropping the upper bound b for t:

min  t
s.t. t − Σ_{j∈[n]} C_j x_j ≥ β
     T_i x_i − t ≥ α_i,   i ∈ [n]            (11)
     x_i ≥ x̲_i,   i ∈ [n]
     t ∈ R
     x ∈ R^n

This is almost a linear relaxation of program (10). The decision to drop t ≤ b from the program is not consequential because one of the following three conditions must be true:
• The linear relaxation and program (11) are both infeasible.
• Both programs are feasible and have the same optimal value.
• The linear relaxation is infeasible, but program (11) is feasible and has optimal value greater than b.
The last condition can be checked after solving program (11). So, from here on, we refer to program (11) as a linear relaxation. In any optimal solution (t*, x*) to this linear program, we must have

t* = Σ_{j∈[n]} C_j x*_j
∀i ∈ [n] : x*_i = max{(t* + α_i)/T_i, x̲_i}

If x* is integral, then t* is also integral and we have found an optimal solution to program (10). Otherwise, there must exist a j ∈ [n] where x*_j is fractional. x*_j cannot equal x̲_j, which is integral; therefore, it must equal (t* + α_j)/T_j. Using the third constraint in program (10) and the fact that t* is a strict lower bound for an optimal t, any optimal solution (t, x) to program (10) must satisfy

x_j ≥ (t + α_j)/T_j > (t* + α_j)/T_j = x*_j.

Since x_j is integral, the stronger inequality

x_j ≥ ⌈(t* + α_j)/T_j⌉

is also valid. Moreover, x*_j does not satisfy the inequality x*_j ≥ ⌈(t* + α_j)/T_j⌉ because x*_j = (t* + α_j)/T_j is fractional. Therefore, x_j ≥ ⌈(t* + α_j)/T_j⌉ is a cut⁵. This cut increases the lower bound for x_j in the original program but does not increase the number of constraints. There are at most n such cuts, all of which are incorporated into program (10) by updating x̲_i to

⌈(t* + α_i)/T_i⌉

for all i ∈ [n]. These updates maintain the integrality of x̲, which is utilized in generating cuts in the next iteration.

Algorithm 1 shows the cutting plane algorithm that we have developed for problem (8).

Algorithm 1: A cutting plane algorithm for problem (8).
 1: Initialize x̲_i = ⌈(a + α_i)/T_i⌉ for all i ∈ [n].
 2: if x̲ · C + β ≤ a then            ▷ a is feasible
 3:     return a
 4: end if
 5: repeat
 6:     Solve linear program (11).
 7:     if no solutions then
 8:         return “infeasible”
 9:     end if
10:     Update t to the optimal value.
11:     Update x̲_i to ⌈(t + α_i)/T_i⌉ for all i ∈ [n].
12: until x̲ stabilizes or t > b.
13: if t > b then
14:     return “infeasible”
15: else
16:     return t
17: end if

For any i ∈ [n], the minimum (resp., maximum) value of x̲_i is ⌈(a + α_i)/T_i⌉ (resp., ⌈(b + α_i)/T_i⌉), and x̲_i can assume O(⌈(b − a)/T_i⌉) values. Since at least one element in x̲ is incremented in each iteration, the loop runs O(⌈(b − a)/min_j T_j⌉ n) times. Inside each loop, the linear program can be solved in time polynomial in the size of the representation of the program [8]. The size of the representation of the program is linearly bounded by |I|, the size of the representation of the original instance of problem (8). Thus, the algorithm has O(⌈(b − a)/min_j T_j⌉ poly(|I|)) running time.

Theorem 1. Algorithm 1 is a cutting plane algorithm for problem (8) which uses O(⌈(b − a)/min_{j∈[n]} T_j⌉ poly(|I|)) arithmetic ops.

5. Similar logic is used in generating Gomory cuts [6, Ch. 8].
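As a concrete illustration, here is a minimal Python sketch of Algorithm 1. It assumes a routine solve_relaxation(C, T, alpha, beta, xlb) that returns the optimal value of program (11) for the current lower bounds xlb, or None if the program is infeasible; one such routine, modeled on Algorithm 2, is sketched in the next subsection. All names are illustrative.

import math

def cutting_plane(C, T, alpha, beta, a, b, solve_relaxation):
    # Algorithm 1 (sketch): solve problem (8) with cuts that only raise the lower bounds xlb.
    n = len(C)
    xlb = [math.ceil((a + alpha[i]) / T[i]) for i in range(n)]
    if sum(C[i] * xlb[i] for i in range(n)) + beta <= a:
        return a                                     # t = a is already feasible
    while True:
        t = solve_relaxation(C, T, alpha, beta, xlb)
        if t is None or t > b:
            return None                              # relaxation infeasible or bound b exceeded
        new_xlb = [math.ceil((t + alpha[i]) / T[i]) for i in range(n)]
        if new_xlb == xlb:
            return t                                 # xlb stabilized: t is optimal for (8)
        xlb = new_xlb                                # apply the cuts x_i >= ceil((t + alpha_i)/T_i)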
3.1 An Efficient Algorithm For Solving The Relaxation

Let v, w, z be the variables in the dual of the linear program (11) such that they correspond, in order, to the three inequalities in the primal. Then, the dual is given by:

max  βv + α · w + x̲ · z
s.t. v − Σ_{j∈[n]} w_j = 1
     −C_i v + T_i w_i + z_i = 0,   i ∈ [n]
     v ∈ R_{≥0}
     w, z ∈ R^n_{≥0}

By substituting z_i/T_i for z_i for all i ∈ [n] and eliminating w, we get

max  (β + α · U)v + Σ_{j∈[n]} (x̲_j T_j − α_j) z_j
s.t. (1 − Σ_{j∈[n]} U_j)v + Σ_{j∈[n]} z_j = 1
     U_i v − z_i ≥ 0,   i ∈ [n]            (12)
     v ∈ R_{≥0}
     z ∈ R^n_{≥0}

For a fixed v, program (12) is identical to a fractional knapsack problem with n items where z_i is the weight of item i in the knapsack, U_i v is the total weight of item i available to the thief, x̲_i T_i − α_i is the value of the item per unit weight, and a full knapsack can weigh at most 1 − (1 − Σ_{j∈[n]} U_j)v. The fractional knapsack problem admits a greedy solution in which the thief fills the knapsack with items in nonincreasing order of value until the knapsack is full. At this point in our analysis, we assume that the tasks 1, 2, . . . , n are sorted in nonincreasing order of value, i.e., x̲_i T_i − α_i. Then, there must exist an optimal solution such that for some k ∈ [n],

z_j = U_j v,   j ∈ [k − 1]
z_k ≤ U_k v
z_j = 0,   j ∈ [n] \ [k]

For each k ∈ [n], we can create a simpler version of linear program (12) by adding the above constraints:

max  (β + Σ_{j∈[n]\[k−1]} α_j U_j + Σ_{j∈[k−1]} x̲_j C_j)v + (x̲_k T_k − α_k) z_k
s.t. (1 − Σ_{j∈[n]\[k−1]} U_j)v + z_k = 1            (13)
     U_k v − z_k ≥ 0
     v, z_k ∈ R_{≥0}

If k = 1, Σ_{j∈[n]} U_j = 1, and the coefficient of v, i.e., β + α · U, is positive, then the program is unbounded. If the dual program is unbounded, then the primal program, program (11), must be infeasible (see Section 2.2). Since the primal program is a relaxation of integer program (10), the integer program must also be infeasible (see Section 2.1). From now on, we assume that we have ruled out this cause of infeasibility by ensuring that Σ_{j∈[n]} U_j < 1 or β + α · U ≤ 0. Now, if k = 1 and Σ_{j∈[n]} U_j = 1, then the coefficient of v must be zero or negative. In both cases, the optimal value is

(β + Σ_{j∈[n]\[1]} α_j U_j + x̲_1 C_1)/U_1.

We introduce a map f : [n] → Q given by

k ↦ (β + Σ_{j∈[n]\[k]} α_j U_j + Σ_{j∈[k]} x̲_j C_j)/(1 − Σ_{j∈[n]\[k]} U_j).   (14)

It may be verified that the above optimal value for k = 1 and Σ_{j∈[n]} U_j = 1 is equal to f(1). If k > 1 or Σ_{j∈[n]} U_j < 1, then z_k can be eliminated from the above program to get

max  (1 − Σ_{j∈[n]\[k]} U_j)(f(k) − (x̲_k T_k − α_k))v + x̲_k T_k − α_k
s.t. v ∈ [(1 − Σ_{j∈[n]\[k]} U_j)^{−1}, (1 − Σ_{j∈[n]\[k−1]} U_j)^{−1}]

From our assumption Σ_{j∈[n]} U_j ≤ 1, it follows that the left end of the interval is positive; the constraint v ≥ 0, which was present in program (13), is redundant and not included above. From the assumption that k > 1 or Σ_{j∈[n]} U_j < 1, it follows that the right end of the interval is well defined and greater than the left end of the interval. Thus, program (13) is feasible and bounded. Using the assumption that k > 1 or Σ_{j∈[n]} U_j < 1, we can also infer that f(k − 1) is well defined. Then, f satisfies the following identity:

f(k)(1 − Σ_{j∈[n]\[k]} U_j) = f(k − 1)(1 − Σ_{j∈[n]\[k−1]} U_j) + (x̲_k T_k − α_k)U_k.

Using the identity, the objective can also be written as

(1 − Σ_{j∈[n]\[k−1]} U_j)(f(k − 1) − (x̲_k T_k − α_k))v + x̲_k T_k − α_k.

Substituting the two ends of the interval for v in the above expressions for the objective, we get that the optimal value is f(k) or f(k − 1). The optimal values from the n linear programs can be combined to get the optimal value for the linear program (12) as follows: if Σ_{j∈[n]} U_j = 1, then the optimal value is max_{k∈[n]} f(k); otherwise the optimal value is max_{k∈{0}∪[n]} f(k).

The above analysis is summarized in Algorithm 2. We assert that the utilization does not exceed 1 on line 1. We check unboundedness of the dual program on line 2; if the condition is true, then we simply return the string “infeasible” on line 3. We enforce the order required by the analysis on line 5. We compute max_k f(k) on lines 6–9, taking the well-definedness of f(0) into account. Sorting on line 5 takes O(n log(n)) arithmetic ops. Checking infeasibility on line 2, and finding the maximum values on lines 6 and 8, use O(n) arithmetic ops. Other lines use a constant number of arithmetic ops. Therefore, Algorithm 2 uses O(n log(n)) arithmetic ops. The next theorems follow.

Algorithm 2: An efficient algorithm for problem (11).
1: assert Σ_{j∈[n]} U_j ≤ 1
2: if β + α · U > 0 and Σ_{j∈[n]} U_j = 1 then
3:     return “infeasible”
4: end if
5: Sort the tasks and x̲ in a nonincreasing order using the key x̲_j T_j − α_j for each j ∈ [n].
6: if Σ_{j∈[n]} U_j = 1 then
7:     return max_{k∈[n]} {f(k)}
8: end if
9: return max_{k∈{0}∪[n]} {f(k)}

Theorem 2. The linear program (11) can be solved by Algorithm 2, using O(n log(n)) arithmetic ops.
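The following is a minimal Python sketch of Algorithm 2, written so that it can serve as the solve_relaxation routine assumed by the Algorithm 1 sketch above. It evaluates all f(k) incrementally after sorting by the key x̲_j T_j − α_j; the floating-point tolerance used to test whether total utilization equals 1 is an implementation choice of ours.

def solve_relaxation(C, T, alpha, beta, xlb):
    # Algorithm 2 (sketch): optimal value of the relaxation (11), or None if infeasible.
    n = len(C)
    U = [C[i] / T[i] for i in range(n)]
    full = abs(sum(U) - 1.0) < 1e-9              # total utilization (numerically) equal to 1?
    assert sum(U) <= 1 + 1e-9
    if full and beta + sum(alpha[i] * U[i] for i in range(n)) > 0:
        return None                              # dual unbounded, so program (11) is infeasible
    # Line 5: sort tasks by the key xlb_j * T_j - alpha_j, nonincreasing.
    order = sorted(range(n), key=lambda j: xlb[j] * T[j] - alpha[j], reverse=True)
    # Evaluate f(k) for k = 0, 1, ..., n incrementally (definition (14)).
    tail_alpha_u = sum(alpha[j] * U[j] for j in range(n))   # sum of alpha_j U_j over j not in [k]
    tail_u = sum(U)                                          # sum of U_j over j not in [k]
    head_xc = 0.0                                            # sum of xlb_j C_j over j in [k]
    best = None
    if not full:
        best = (beta + tail_alpha_u) / (1 - tail_u)          # f(0), well defined when sum U < 1
    for j in order:                                          # k = 1, 2, ..., n
        tail_alpha_u -= alpha[j] * U[j]
        tail_u -= U[j]
        head_xc += xlb[j] * C[j]
        fk = (beta + tail_alpha_u + head_xc) / (1 - tail_u)
        best = fk if best is None else max(best, fk)
    return best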
Theorem 3. If Algorithm 1 calls Algorithm 2 to solve linear program (11) on line 6, then it solves problem (8) using O(⌈(b − a)/min_{j∈[n]} T_j⌉ n² log(n)) arithmetic ops.

3.2 An efficient algorithm for finding max f

By refining our previous analysis, we can find the maximum value of f without necessarily computing f for all n (or n + 1) values in its domain. Previously, we observed that when f(k − 1) is well defined, the objective of the k-th subproblem is to maximize

(1 − Σ_{j∈[n]\[k]} U_j)(f(k) − (x̲_k T_k − α_k))v + x̲_k T_k − α_k
= (1 − Σ_{j∈[n]\[k−1]} U_j)(f(k − 1) − (x̲_k T_k − α_k))v + x̲_k T_k − α_k,

and the only constraint is that v is restricted to

[(1 − Σ_{j∈[n]\[k]} U_j)^{−1}, (1 − Σ_{j∈[n]\[k−1]} U_j)^{−1}].

The coefficient of v in the objective is nonnegative iff f(k) ≥ x̲_k T_k − α_k iff f(k − 1) ≥ x̲_k T_k − α_k. If the coefficient is nonnegative, then v must equal the right end of the interval, i.e., (1 − Σ_{j∈[n]\[k−1]} U_j)^{−1}, and, hence, the optimal value is f(k − 1). Similarly, the coefficient of v in the objective is nonpositive iff f(k) ≤ x̲_k T_k − α_k iff f(k − 1) ≤ x̲_k T_k − α_k, and if the coefficient is nonpositive then the optimal value is f(k). Therefore, at least one of the following conditions must be true:

f(k − 1) ≤ f(k) ≤ x̲_k T_k − α_k,
x̲_k T_k − α_k ≤ f(k) ≤ f(k − 1).   (15)

f has a local maximum point at j iff f(j) ≥ f(j − 1) and f(j) ≥ f(j + 1). The first condition defaults to true if f(j − 1) is not well defined; this can happen at j = 0, or at j = 1 if Σ_{j∈[n]} U_j = 1. The second condition defaults to true if j = n because f(n + 1) is not well defined. Assuming that these corner cases are handled appropriately, f has a local maximum point at j iff

x̲_{j+1} T_{j+1} − α_{j+1} ≤ f(j) ≤ x̲_j T_j − α_j.

Assume, for the sake of contradiction, that k₁ and k₂ are two distinct local maxima of f with k₁ < k₂ and f(k₂) > f(k₁). Then, we immediately get a contradiction:

f(k₂) ≤ x̲_{k₂} T_{k₂} − α_{k₂} ≤ x̲_{k₁+1} T_{k₁+1} − α_{k₁+1} ≤ f(k₁).

The first and third inequalities follow from the above characterization of local maxima, and the middle inequality uses the fact that the tasks are sorted in a nonincreasing order using the key x̲_j T_j − α_j for all j ∈ [n]. Therefore, the first local maximum point must be the global maximum point. Lines 6–9 in Algorithm 2 can be replaced by a call to Algorithm 3. This optimization has no effect on the upper bound for the worst-case running time of Algorithm 1, but it reduces the average running time of Algorithm 2, which is the main workhorse in Algorithm 1.

Algorithm 3: An efficient algorithm for finding max f.
1: k ← (1 if Σ_{j∈[n]} U_j = 1 else 0)
2: while k < n and f(k + 1) ≥ f(k) do
3:     k ← k + 1
4: end while
5: return f(k)
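Since the first local maximum of f is also its global maximum, Algorithm 3 can be sketched in a few lines of Python; f here is any callable implementing definition (14) (for example, a memoized variant of the incremental computation used in the solve_relaxation sketch above), and the flag name is illustrative.

def find_max_f(f, n, utilization_is_one):
    # Algorithm 3 (sketch): scan k upward and stop at the first decrease of f.
    k = 1 if utilization_is_one else 0
    while k < n and f(k + 1) >= f(k):
        k += 1
    return f(k)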
4 NEW SCHEDULABILITY TESTS

4.1 FP Scheduling

Problem (2) can be reduced to problem (8) by initializing (α, β, a, b) in the target instance to (0, 0, 1, D_n). However, we consider a subtly different reduction which, when composed with Algorithm 1, yields a simpler algorithm. Problem (2) is a question about the behavior of rbf in (0, D_n] for constrained-deadline systems, and in this interval ⌈t/T_n⌉ must equal 1. Thus, the condition rbf(t) ≤ t may be written as

Σ_{j∈[n−1]} ⌈t/T_j⌉ C_j + C_n ≤ t.   (16)

If Σ_{j∈[n−1]} U_j ≥ 1, then the problem is infeasible. So, we assume that this condition is checked initially, and now Σ_{j∈[n−1]} U_j < 1 holds. Using inequality (16), problem (2) can be reduced to problem (8) by initializing (n, α, β, a, b) in the target instance to equal (n − 1, 0, C_n, C_n, D_n). We solve the target instance by using Algorithm 1 and Algorithm 2 (see Theorem 3). Since the total utilization is less than one, only lines 5 and 9 in Algorithm 2 do meaningful work and f(0) is well defined. f is given by

k ↦ (C_n + Σ_{j∈[k]} x̲_j C_j)/(1 − Σ_{j∈[n−1]\[k]} U_j),   (17)

and we have

f(0) = C_n/(1 − Σ_{j∈[n−1]} U_j).

f(0) is independent of x̲, and since line 9 in Algorithm 2 is bound to run at least once, f(0) is a lower bound for t. Note that f(0) has been proposed as an initial value for RTA before using a different analysis method [9]; in our case, the bound pops out of Algorithm 2 when it is simplified. Since f(0) is a lower bound for t, when reducing problem (2) to problem (8), (n, α, β, a, b) in the target instance can be initialized to (n − 1, 0, C_n, ⌈f(0)⌉, D_n) and f(0) can be ignored on line 9 in Algorithm 2. Algorithm 1 and Algorithm 2 can be combined and simplified, as described above, to get Algorithm 4. The next theorem follows from Theorem 3.

Theorem 4. Algorithm 4 solves problem (2) using O(⌈(D_n − C_n/(1 − Σ_{j∈[n−1]} U_j))/min_{j∈[n−1]} T_j⌉ n² log(n)) arithmetic ops.

Algorithm 4: A cutting plane algorithm for problem (2).
 1: if Σ_{j∈[n−1]} U_j ≥ 1 then
 2:     return “infeasible”
 3: end if
 4: Initialize t = ⌈f(0)⌉ (see definition (17)).
 5: repeat
 6:     Update x̲_i to ⌈t/T_i⌉ for all i ∈ [n − 1].
 7:     Sort the tasks and x̲ in a nonincreasing order using the key x̲_j T_j for each j ∈ [n − 1].
 8:     Update t to max_{k∈[n−1]} {f(k)}.
 9: until x̲ stabilizes or t > D_n.
10: if t > D_n then
11:     return “infeasible”
12: else
13:     return t
14: end if

RTA starts with a lower bound t for the optimal t, and iteratively updates t to rbf(t) until t stabilizes or t > D_n. This seems vaguely similar to the sequence of lower bounds produced by cutting-plane methods. RTA uses the same cut generation technique as Algorithm 1, but it does not solve the linear program (11). Instead, it solves the further relaxation of program (11) created by omitting the second constraint in the program:

min  t
s.t. t − Σ_{j∈[n]} C_j x_j ≥ 0
     x_i ≥ x̲_i,   i ∈ [n]
     t ∈ R
     x ∈ R^n

The optimal value for this simple program simply equals

Σ_{j∈[n]} C_j x̲_j = Σ_{j∈[n]} ⌈t/T_j⌉ C_j = rbf(t)

and can be computed by using O(n) arithmetic ops, but this efficiency inside the loop comes at the cost of over-relaxing the original problem, finding smaller dual bounds in each iteration, and eventually using a large number of iterations. In contrast, Algorithm 4 does not over-relax the problem and finds larger dual bounds at each step. The reduction in the number of iterations comes at the cost of an increase in the worst-case number of arithmetic ops in the loop by a factor of log(n). We believe that in this trade-off, Algorithm 4 is the winner because log(n) is not a high cost to pay for faster convergence using the proper dual bounds implied by linear relaxation. In Section 5, we compare the number of iterations used by RTA to the number of iterations used by Algorithm 4 for synthetically generated task systems to show that the latter has better convergence properties.

A function like f is used by Lu et al. in a heuristic method to solve problem (2) [10]. However, their method, being heuristic, tries to guess a good k ∈ [n] and tries f(k) as the next dual bound. If the guess turns out to be lower than the current dual bound, they backtrack to using RTA. In contrast, Theorem 1 guarantees that Algorithm 1 always finds the best dual bound implied by linear relaxation. Nguyen et al. also use a similar function in a linear-time algorithm for problem (2) in the special case of harmonic periods [11]. Algorithm 1 may be viewed as a generalization of their algorithm to arbitrary periods.
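A minimal Python sketch following the shape of Algorithm 4 is given below. It inlines the incremental evaluation of f from definition (17); the names and the use of floating-point arithmetic are illustrative choices rather than the paper's.

import math

def fp_schedulability(hp_tasks, C_n, D_n):
    # Algorithm 4 (sketch): cutting plane test for problem (2).
    # hp_tasks: list of (C_j, T_j) for the higher-priority tasks 1..n-1.
    # Returns the least t in (0, D_n] with rbf(t) <= t, or None if task n is unschedulable.
    C = [c for (c, p) in hp_tasks]
    T = [p for (c, p) in hp_tasks]
    U = [C[i] / T[i] for i in range(len(C))]
    if sum(U) >= 1:
        return None                                  # higher-priority tasks already overload the CPU
    t = math.ceil(C_n / (1 - sum(U)))                # line 4: t = ceil(f(0))
    xlb = None
    while t <= D_n:
        new_xlb = [math.ceil(t / T[i]) for i in range(len(C))]
        if new_xlb == xlb:
            return t                                 # x stabilized: t is the least fixed point of rbf
        xlb = new_xlb
        order = sorted(range(len(C)), key=lambda j: xlb[j] * T[j], reverse=True)   # line 7
        tail_u = sum(U)
        head = float(C_n)
        best = None
        for j in order:                              # line 8: t <- max over k of f(k), per (17)
            tail_u -= U[j]
            head += xlb[j] * C[j]
            fk = head / (1 - tail_u)
            best = fk if best is None else max(best, fk)
        t = best
    return None                                      # t exceeded D_n: unschedulable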
4.2 EDF Scheduling

4.2.1 The upper bound L

The interval of interest in problem (4) is [D_1, L). The hyperperiod of the system, denoted T, equals lcm{T_i | i ∈ [n]} and can be used as L. A stronger bound

L_a = min{t ∈ (0, T] | rbf(t) ≤ t}   (18)

can be used as L, but it is harder to compute. Finally, if Σ_{j∈[n]} U_j < 1, then

L_b = max{max_{j∈[n]} {D_j − T_j}, Σ_{j∈[n]} (T_j − D_j)U_j / (1 − Σ_{j∈[n]} U_j)}   (19)

can also be used as L. Many researchers have contributed to these bounds; more details are provided, for instance, by George et al. [12]. The next theorem follows from the definition of L_a and Theorem 3.

Theorem 5. If (α, β, a, b) in Algorithm 1 is initialized to (0, 0, 0, T), then Algorithm 1, combined with Algorithm 2, computes the upper bound L_a using O(⌈T/min_{j∈[n]} T_j⌉ n² log(n)) arithmetic ops.

4.2.2 Solving EDF Schedulability

Baruah et al. [13, Thm. 3.5] propose the following integer program for finding a feasible solution for problem (4) if L is ignored⁶:

max  0
s.t. T_i x_i ≤ t − D_i,   i ∈ [n]
     Σ_{j∈[n]} C_j x_j ≥ t + 1
     t ∈ Z_{≥0}
     x ∈ Z^n

The single task (C_1, D_1, T_1) = (2, 1, 100) misses its first deadline, but the integer program

max  0
s.t. 100x ≤ t − 1
     2x ≥ t + 1
     t ∈ Z_{≥0}
     x ∈ Z

is infeasible and does not find any deadline misses. This example demonstrates that the formulation contains an off-by-one error: if we want x_i to model ⌊(t + T_i − D_i)/T_i⌋ in the integer program, then we must have

T_i x_i ≤ t + T_i − D_i.

However, the new formulation is still incorrect. Consider the task set

i    C_i    D_i    T_i
1    5      10     13
2    6      10     17
3    1      31     20

It misses a deadline at time 10, but the integer program

max  0
s.t. 13x_1 ≤ t + 3
     17x_2 ≤ t + 7
     20x_3 ≤ t − 11
     5x_1 + 6x_2 + x_3 ≥ t + 1
     t ∈ Z_{≥0}
     x ∈ Z³

and its linear relaxation are infeasible. Intuitively, we expect (t, x_1, x_2, x_3) = (10, 1, 1, 0) to be feasible, but it is not feasible because it violates the third inequality. The cause of the error is that the condition t ≥ D_j − T_j in the subscript in the definition of dbf (definition (3)) is not accounted for in the formulation. Therefore, a little more care is needed to formulate the problem as an integer program.

Instead of trying to formulate problem (4) as a single integer program, we divide it into O(n) subproblems with simple integer programming formulations. We assume that task 1 has the smallest deadline. We assume that tasks [n] \ [1] are listed in a nondecreasing order using the key D_j − T_j for each task j. Now, we consider n subproblems of problem (4) where t is restricted to the following n intervals:

[D_1, max{D_1, D_2 − T_2}),
[max{D_1, D_2 − T_2}, max{D_1, D_3 − T_3}), . . . ,
[max{D_1, D_{n−1} − T_{n−1}}, max{D_1, D_n − T_n}),
[max{D_1, D_n − T_n}, L).

Note that some of these intervals may be empty; for instance, if all deadlines are constrained, then only the last interval is nonempty. In the k-th interval, the dbf function can be simplified to

Σ_{i∈[k]} ⌊(t + T_i − D_i)/T_i⌋ C_i

because t < D_{k+1} − T_{k+1} ≤ D_{k+2} − T_{k+2} ≤ · · · ≤ L. Thus, the k-th subproblem is given by:

max{t ∈ [a_k, b_k] ∩ Z | t < dbf_k(t)}   (20)

where

dbf_k(t) = Σ_{i∈[k]} ⌊(t + T_i − D_i)/T_i⌋ C_i,
a_k = max{D_1, D_k − T_k},
b_k = max{D_1, D_{k+1} − T_{k+1}} − 1 if k < n, and b_k = L − 1 if k = n.

By replacing t with −t, problem (20) can be rewritten as

min{t ∈ [−b_k, −a_k] ∩ Z | t + dbf_k(−t) ≥ 1}.

If an optimal value is found, then it must be negated to get the value of the original instance. Using the identity ⌊−x⌋ = −⌈x⌉, we have

dbf_k(−t) = Σ_{i∈[k]} ⌊(−t + T_i − D_i)/T_i⌋ C_i = −Σ_{i∈[k]} ⌈(t + D_i − T_i)/T_i⌉ C_i.

Thus, the problem can be rewritten again as

min{t ∈ [−b_k, −a_k] ∩ Z | Σ_{i∈[k]} ⌈(t + D_i − T_i)/T_i⌉ C_i + 1 ≤ t}.

This problem can be reduced to problem (8) by choosing (n, α, β, a, b) in the target of the reduction to equal (k, [D_1 − T_1, . . . , D_k − T_k], 1, −b_k, −a_k). After the reduction, f is given by

ℓ ↦ (1 + Σ_{j∈[k]\[ℓ]} (D_j − T_j)U_j + Σ_{j∈[ℓ]} x̲_j C_j)/(1 − Σ_{j∈[k]\[ℓ]} U_j)   (21)

and denoted f_k. For any k ∈ [n], if Σ_{j∈[k]} U_j < 1 then Algorithm 2 can be simplified to just lines 5 and 9, and the bound

−f_k(0) = (Σ_{j∈[k]} (T_j − D_j)U_j − 1)/(1 − Σ_{j∈[k]} U_j)

pops out of the algorithm (we provide fewer details here because the simplification steps are identical to the ones described in Section 4.1). Since Σ_{j∈[k]} U_j < 1 for all k ∈ [n − 1], we may modify b_k to

b_k = D_1 − 1        if k < n and D_{k+1} − T_{k+1} ≤ D_1
b_k = −⌈f_k(0)⌉      if k < n and D_{k+1} − T_{k+1} > D_1
b_k = −⌈f_k(0)⌉      if k = n and Σ_{j∈[k]} U_j < 1
b_k = L_a − 1        if k = n and Σ_{j∈[k]} U_j = 1.

The bound L_b is implicitly present in this definition of b_k. The next theorems follow from the above discussion and from Theorem 3.

Algorithm 5: A divide and conquer approach for problem (4).
 1: Let task 1 denote the task with the smallest deadline.
 2: Sort the remaining tasks in a nondecreasing order using the key D_j − T_j for each j.
 3: k ← n
 4: while k > 0 do
 5:     a_k ← max{D_1, D_k − T_k}
 6:     if k = n then
 7:         b_k ← (L_a − 1 if Σ_{j∈[k]} U_j = 1 else −⌈f_k(0)⌉)
 8:     else
 9:         b_k ← (D_1 − 1 if D_{k+1} − T_{k+1} ≤ D_1 else −⌈f_k(0)⌉)
10:     end if
11:     if b_k < a_k then
12:         k ← k − 1
13:         continue
14:     end if
15:     Solve problem (20) by using Algorithm 1 with (n, α, β, a, b) initialized to (k, [D_1 − T_1, . . . , D_k − T_k], 1, −b_k, −a_k).
16:     if an optimal solution exists then
17:         return the negation of the optimal value
18:     end if
19:     k ← k − 1
20: end while
21: return “infeasible”

Theorem 6. If (n, α, β, a, b) in Algorithm 1 is initialized with (k, [D_1 − T_1, . . . , D_k − T_k], 1, −b_k, −a_k), then Algorithm 1, combined with Algorithm 2, solves problem (20) using O(⌈(b_k − a_k)/min_{j∈[k]} T_j⌉ k² log(k)) arithmetic ops.

6. The integer program has been adapted for synchronous systems by setting t_1 and k_i to 0. The notation has been changed to match our notation.
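The divide-and-conquer structure of Algorithm 5 can be sketched as follows, assuming the cutting_plane and solve_relaxation sketches from Section 3 are in scope; the packaging and helper names are ours, not the paper's.

import math

def f_k_zero(C, D, T, k):
    # f_k(0) from definition (21); used to tighten the upper bound b_k when sum of U_j < 1.
    U = [C[i] / T[i] for i in range(k)]
    return (1 + sum((D[i] - T[i]) * U[i] for i in range(k))) / (1 - sum(U))

def edf_deadline_miss(tasks, L_a):
    # Algorithm 5 (sketch): latest deadline miss for problem (4), or None if EDF schedulable.
    # tasks: list of (C_i, D_i, T_i) triples; L_a: upper bound on the interval of interest.
    tasks = sorted(tasks, key=lambda x: x[1])                              # task 1 has the smallest deadline
    tasks = [tasks[0]] + sorted(tasks[1:], key=lambda x: x[1] - x[2])      # rest sorted by D_j - T_j
    C = [c for (c, d, p) in tasks]
    D = [d for (c, d, p) in tasks]
    T = [p for (c, d, p) in tasks]
    U = [C[i] / T[i] for i in range(len(C))]
    n = len(tasks)
    for k in range(n, 0, -1):
        a_k = max(D[0], D[k - 1] - T[k - 1])
        if k == n:
            b_k = L_a - 1 if abs(sum(U) - 1.0) < 1e-9 else -math.ceil(f_k_zero(C, D, T, k))
        else:
            b_k = D[0] - 1 if D[k] - T[k] <= D[0] else -math.ceil(f_k_zero(C, D, T, k))
        if b_k < a_k:
            continue                                                       # empty interval: skip this subproblem
        alpha = [D[i] - T[i] for i in range(k)]
        t = cutting_plane(C[:k], T[:k], alpha, 1, -b_k, -a_k, solve_relaxation)
        if t is not None:
            return -t                                                      # negate back to the original time scale
    return None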
Theorem 7. Algorithm 5 solves problem (4) using O(⌈(min{L_a, L_b} − D_1)/min_{j∈[n]} T_j⌉ n² log(n)) arithmetic ops.

A sufficient test for schedulability can be derived from Algorithm 5 and Algorithm 2 by observing that, for the k-th subproblem,
• if Σ_{j∈[k]} U_j = 1 and Σ_{j∈[k]} (T_j − D_j)U_j < 1, then infeasibility is detected on line 2 of Algorithm 2; and
• if Σ_{j∈[k]} U_j < 1 and Σ_{j∈[k]} (T_j − D_j)U_j < 1, then b_k < a_k and infeasibility is detected on line 11 of Algorithm 5.
The next theorem follows from these observations, and it subsumes known tractable cases of EDF schedulability like implicit-deadline systems or systems with D_i ≥ T_i for all i.

Theorem 8. Problem (4) is infeasible if for all k ∈ [n] we have Σ_{j∈[k]} (T_j − D_j)U_j < 1.

QPA is similar to RTA in that it over-relaxes the original problem to do less work inside the loop at the cost of a larger number of iterations. In the next section, we will compare the number of iterations used by QPA to the number of iterations used by Algorithm 1 for synthetically generated task systems.

5 EMPIRICAL EVALUATION

5.1 FP Scheduling

We compare the number of iterations used by RTA to the number of iterations used by Algorithm 4 for synthetically generated instances of problem (2). For both algorithms, we use the initial value

C_n/(1 − Σ_{j∈[n−1]} U_j).

The number of tasks in the system, n, is fixed to one of the values

25, 50, 75, 100,

and the total utilization of the subsystem [n − 1] is fixed to one of the values

0.70, 0.80, 0.90, 0.99.

For each total utilization value, we sample m utilization vectors uniformly using the Randfixedsum algorithm [14], where m = 10000. Each utilization vector contains the utilizations of the first n − 1 tasks of a system. Task n has a very small utilization because we choose an arbitrarily large value for D_n = T_n so that the generated instances are not too easy for Algorithm 1; note that D_n is in the numerator in the characterization of running time in Theorem 4. To complete the specification of the system, we sample n wcets uniformly in [1, 1000] ∩ Z. The fixed interval reflects the idea that we do not expect a single reaction to an event to be too long in real systems⁷.

7. The synchrony hypothesis, used by reactive languages like Esterel, assumes that reactions are small enough to fit within a tick so that they appear to be instantaneous to the programmer [15].

For n = 25 and variable utilization, we show normalized histograms of the running times of Algorithm 4 and RTA in Figure 2. For lower utilizations, histograms have small spreads and are skewed right, indicating that most instances are solved quickly by the two algorithms. For higher utilizations, histograms have large spreads and are more bell-shaped, suggesting that instances are more challenging for both algorithms. For each utilization, the histogram for Algorithm 4 is to the left of the histogram for RTA; this is true for both the peak of the histogram and the right end of the spread of the histogram where the frequency is nearly zero. Thus, Algorithm 4 is faster on average and in the worst case than RTA. For each utilization, the histogram for Algorithm 4 is thinner and taller than the histogram for RTA. Therefore, the running times of Algorithm 4 are more predictable than those of RTA. We provide precise values for average running times, standard deviations, and maximum running times in Table 2.

Σ_{j∈[n−1]} U_j | Mean RTA | Mean Alg 4 | Std. Dev. RTA | Std. Dev. Alg 4 | Max RTA | Max Alg 4
0.70            | 7.57     | 4.61       | 1.87          | 1.66            | 18      | 14
0.80            | 10.93    | 6.63       | 2.67          | 2.29            | 26      | 19
0.90            | 19.49    | 11.36      | 4.40          | 3.71            | 43      | 28
0.99            | 125.48   | 60.11      | 21.90         | 17.74           | 211     | 140

TABLE 2: Statistics for n = 25 and variable utilization.

For utilization 0.90 and variable n, we show normalized histograms of the running times of Algorithm 4 and RTA in Figure 3. Similar to the previous paragraph, the histogram for Algorithm 4 is to the left of the histogram for RTA in all cases. The histograms do not vary as much as they did in the previous paragraph: while the total utilization is bounded from above by 1, n is not, and harder instances may be generated more frequently for much higher n, but we decided that 100 is a reasonable upper bound for n in our experiments. From the figure, it is evident that Algorithm 4 is faster on average and in the worst case than RTA for all values of n.

Although we show results for two slices of our experiment, similar trends were observed for all other slices.
Fig. 2: Normalized histograms of #iterations required by RTA and Algorithm 4 for n = 25 and variable utilization. (Only the caption is reproduced; the panels show utilizations 0.70, 0.80, 0.90, and 0.99.)

Fig. 3: Normalized histograms of #iterations required by RTA and Algorithm 4 for Σ_{j∈[n−1]} U_j = 0.90 and variable n. (Only the caption is reproduced; the panels show n = 25, 50, 75, and 100.)

5.2 EDF Scheduling

We compare the number of iterations used by QPA to the number of iterations used by Algorithm 5 for synthetically generated instances of problem (4). Recall that Algorithm 5 uses a divide-and-conquer approach to create n subproblems where the k-th subproblem is the same as the original problem except that t is restricted to [a_k, b_k] ⊆ [D_1, L). Although we present Algorithm 5 as a sequential algorithm with subproblems solved in the order n, n − 1, . . . , the algorithm can also proceed in parallel. Moreover, the k-th subproblem can be further divided into smaller subproblems by considering subintervals of [a_k, b_k]. If the sole concern is to find a feasible (not necessarily optimal) solution to problem (4), then once a feasible solution is found in one branch, the other branches may be aborted. Such branching strategies are also applicable to QPA, and some branching heuristics have been proposed for the feasibility variant of problem (4) by Zhang and Burns [16]. In our evaluation, we use our sequential divide-and-conquer approach for QPA too so that the comparison is really between Algorithm 1 and QPA. We measure the running time by adding the number of iterations for each subproblem.

The number of tasks in the system, n, is fixed to one of the values

25, 50, 75, 100,

the total utilization of the system is fixed to one of the values

0.70, 0.80, 0.90, 0.99,

and the total density⁸ of the system is fixed to one of the values

1.25, 1.50, 1.75, 2.00.

Since the total utilization is less than one in all our configurations, we use the initial value L_b for both algorithms. For each total utilization (resp., density) value, we sample m utilization vectors (resp., density vectors) of length n uniformly using the Randfixedsum algorithm. To complete the specification of the system, we sample n wcets uniformly in [1, 1000] ∩ Z.

8. The ratio δ_i = C_i/D_i is called the density of task i.

For n = 50, Σ_{j∈[n]} δ_j = 0.95 and variable utilization, we show normalized histograms of the running times of QPA and Algorithm 1 in Figure 4, and provide summary statistics in Table 3. Like the histograms in the previous section, histograms have large spreads at higher utilizations, suggesting that harder instances are generated more frequently at higher utilizations. However, the x-axis extends beyond the points where the tails of the histograms seem to disappear: unlike the histograms in the previous section, the histograms in this section contain outliers which consume a very large number of iterations for at least one of the algorithms. From the maximum running time columns in Table 3, we can see that the outliers belong to QPA. This suggests that the worst-case running times of Algorithm 1 may be bounded by a smaller function than the equivalent function for QPA. In the analysis of Algorithm 1 in this paper, the worst-case bound is the same as that of RTA/QPA, but we conjecture that tighter bounds may be found by better analysis. For each utilization, the histogram for Algorithm 1 is taller, thinner, and to the left of the histogram for QPA (note the mean and standard deviation values in Table 3). Therefore, the running times of Algorithm 1 are smaller and more predictable than those of QPA.

Σ_{j∈[n]} δ_j | Mean QPA | Mean Alg 1 | Std. Dev. QPA | Std. Dev. Alg 1 | Max QPA | Max Alg 1
0.65          | 20.72    | 10.98      | 7.53          | 4.68            | 73      | 45
0.75          | 24.58    | 12.61      | 9.21          | 5.61            | 81      | 51
0.85          | 29.19    | 14.44      | 12.14         | 6.99            | 137     | 76
0.95          | 35.76    | 16.81      | 18.96         | 9.60            | 231     | 112

TABLE 3: Statistics for n = 50, Σ_{j∈[n]} δ_j = 0.95, and variable utilization.

For n = 50, Σ_{j∈[n]} U_j = 0.90, and variable density, we show normalized histograms of the running times of QPA and Algorithm 1 in Figure 5. Similar to the previous paragraph, the histogram for Algorithm 1 is to the left of the histogram for QPA in all cases. The peaks of the histograms for Algorithm 1 are more pronounced for lower densities. It is reasonable to expect easier instances to be generated when density is closer to 1 because when density is less than or equal to one the instances are always schedulable and solved by both algorithms within a constant number of steps.
Fig. 4: Normalized histograms of #iterations required by QPA and Algorithm 1 for n = 50, Σ_{j∈[n]} δ_j = 1.75, and variable utilization. (Only the caption is reproduced; the panels show utilizations 0.70, 0.80, 0.90, and 0.99.)

Fig. 5: Normalized histograms of #iterations required by QPA and Algorithm 1 for n = 50, Σ_{j∈[n]} U_j = 0.90, and variable density. (Only the caption is reproduced; the panels show densities 1.25, 1.50, 1.75, and 2.00.)

Although we show results for a density between 1 and 2, we observe similar results when the total density is greater than 2.

For Σ_{j∈[n]} U_j = 0.90, Σ_{j∈[n]} δ_j = 1.75, and variable n, we show normalized histograms of the running times of QPA and Algorithm 1 in Figure 6. Similar to the previous paragraphs, the histogram for Algorithm 1 is taller, thinner, and to the left of the histogram for QPA in all cases.

Fig. 6: Normalized histograms of #iterations required by QPA and Algorithm 1 for Σ_{j∈[n]} U_j = 0.90, Σ_{j∈[n]} δ_j = 1.75, and variable n. (Only the caption is reproduced; the panels show n = 25, 50, 75, and 100.)

We have shown results for three slices of our experiment. For all other slices, we found Algorithm 1 to be faster and more predictable than QPA.

6 CONCLUSION

In this paper, we reduced FP schedulability and EDF schedulability to a common target problem. Both source problems are difficult problems, and the reductions are polynomial-time. Therefore, the target problem is a good abstraction of the source problems that captures the essence of their hardness. Faster algorithms for the abstract problem immediately yield faster algorithms for FP and EDF schedulability; we hope that this fact will encourage researchers to study the abstract problem in greater detail and from different perspectives.

We used techniques from the theory of integer programming to develop a cutting plane algorithm for the abstract problem and showed that fixed-point iteration algorithms like RTA and QPA are suboptimal special cases of this algorithm. We hope that these newly discovered connections between fixed-point iteration methods and cutting plane methods will stimulate work on incorporating more advanced theories from integer programming into real-time scheduling theory.

ACKNOWLEDGMENTS

The author would like to thank Pontus Ekberg and Sanjoy Baruah for discussions on the subject.
REFERENCES

[1] M. Joseph and P. Pandya, “Finding Response Times in a Real-Time System,” The Computer Journal, vol. 29, no. 5, pp. 390–395, Jan. 1986.
[2] J. Lehoczky, L. Sha, and Y. Ding, “The rate monotonic scheduling algorithm: Exact characterization and average case behavior,” in Proceedings of the Real-Time Systems Symposium, Dec. 1989, pp. 166–171.
[3] S. Baruah, A. Mok, and L. Rosier, “Preemptively scheduling hard-real-time sporadic tasks on one processor,” in Proceedings of the 11th Real-Time Systems Symposium, Dec. 1990, pp. 182–190.
[4] I. Ripoll, A. Crespo, and A. K. Mok, “Improvement in feasibility testing for real-time tasks,” Real-Time Systems, vol. 11, no. 1, pp. 19–39, Jul. 1996.
[5] F. Zhang and A. Burns, “Schedulability Analysis for Real-Time Systems with EDF Scheduling,” IEEE Transactions on Computers, vol. 58, no. 9, pp. 1250–1258, Sep. 2009.
[6] L. A. Wolsey, Integer Programming, ser. Wiley-Interscience Series in Discrete Mathematics and Optimization. New York: J. Wiley, 1998.
[7] J. Matoušek and B. Gärtner, Understanding and Using Linear Programming, 1st ed., ser. Universitext. Springer Berlin, Heidelberg, 2007.
[8] L. G. Khachiyan, “Polynomial algorithms in linear programming,” USSR Computational Mathematics and Mathematical Physics, vol. 20, no. 1, pp. 53–72, Jan. 1980.
[9] M. Sjodin and H. Hansson, “Improved response-time analysis calculations,” in Proceedings of the 19th IEEE Real-Time Systems Symposium, Dec. 1998, pp. 399–408.
[10] W.-C. Lu, J.-W. Hsieh, and W.-K. Shih, “A precise schedulability test algorithm for scheduling periodic tasks in real-time systems,” in Proceedings of the 2006 ACM Symposium on Applied Computing (SAC ’06). New York, NY, USA: Association for Computing Machinery, Apr. 2006, pp. 1451–1455.
[11] T. H. C. Nguyen, W. Grass, and K. Jansen, “Exact Polynomial Time Algorithm for the Response Time Analysis of Harmonic Tasks,” in Combinatorial Algorithms, ser. Lecture Notes in Computer Science, C. Bazgan and H. Fernau, Eds. Cham: Springer International Publishing, 2022, pp. 451–465.
[12] L. George, N. Rivierre, and M. Spuri, “Preemptive and Non-Preemptive Real-Time UniProcessor Scheduling,” p. 56, Jan. 1996.
[13] S. K. Baruah, L. E. Rosier, and R. R. Howell, “Algorithms and complexity concerning the preemptive scheduling of periodic, real-time tasks on one processor,” Real-Time Systems, vol. 2, no. 4, pp. 301–324, Nov. 1990.
[14] P. Emberson, R. Stafford, and R. I. Davis, “Techniques For The Synthesis Of Multiprocessor Tasksets,” in 1st International Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems (WATERS), Jul. 2010, pp. 6–11.
[15] F. Boussinot and R. de Simone, “The ESTEREL language,” Proceedings of the IEEE, vol. 79, no. 9, pp. 1293–1304, Sep. 1991.
[16] F. Zhang and A. Burns, “Improvement to Quick Processor-Demand Analysis for EDF-Scheduled Real-Time Systems,” in 2009 21st Euromicro Conference on Real-Time Systems, Jul. 2009, pp. 76–86.
