Using Integer Programming Techniques in Real-Time Scheduling Analysis
arXiv:2210.11185v1 [cs.OS] 20 Oct 2022

Abstract—Real-time scheduling theory assists developers of embedded systems in verifying that the timing constraints required by critical software tasks can be feasibly met on a given hardware platform. Fundamental problems in the theory are often formulated as search problems for fixed points of functions and are solved by fixed-point iterations. These fixed-point methods are used widely because they are simple to understand, simple to implement, and seem to work well in practice. These fundamental problems can also be formulated as integer programs and solved with algorithms based on, among others, theories of linear programming and cutting planes. However, such algorithms are harder to understand and implement than fixed-point iterations. In this research, we show that ideas like linear programming duality and cutting planes can be used to develop algorithms that are as easy to implement as existing fixed-point iteration schemes but have better convergence properties. We evaluate the algorithms on synthetically generated problem instances to demonstrate that the new algorithms are faster than the existing algorithms.
1 INTRODUCTION
[Fig. 1: rbf(t) (blue, dashed) for tasks {(2, 4), (1, 5), (3.3, 15)} and progress of RTA (red, solid arrows) from initial value 2.]

t ← rbf(t). Geometrically, the algorithm feels like climbing steps.

An arbitrary-deadline preemptive uniprocessor system is not EDF schedulable iff Σ_{j∈[n]} Uj > 1 or

  ∃t ∈ [0, L) : t < dbf(t)

holds [3], [4]. Here, dbf, the demand bound function, is given by

  t ↦ Σ_{j∈[n], t ≥ Dj−Tj} ⌊(t + Tj − Dj)/Tj⌋ Cj,   (3)

and L is a large constant like lcm_i Ti (more details on L are provided later). The above condition is satisfied if and only if the problem

  max{t ∈ [D1, L) ∩ Z : t < dbf(t)}   (4)

has an optimal solution. The optimal solution must satisfy the fixed point equation

  dbf(t) − 1 = t.

The solution can be found by starting at a safe upper bound t for the fixed point and iteratively updating t to dbf(t) − 1. This algorithm, called Quick Processor-Demand Analysis (QPA), is an exact EDF schedulability test for arbitrary-deadline preemptive uniprocessor systems [5]. The inventors of QPA use a slightly more elaborate iterative step [5, Th. 5], but the above update works assuming that all data are integral. Geometrically, QPA looks like climbing down the steps of dbf. Similarities and differences between RTA and QPA are summarized in Table 1.

We show that problem (2) (FP schedulability) and problem (4) (EDF schedulability) can be efficiently reduced to a common problem (8). The reduction from FP schedulability is strong in the sense that the source instance is transformed once to the target instance to find the solution. The reduction from EDF schedulability is weak because we may need to create up to n instances of the target problem to find the solution.³

3. The distinction between strong and weak reductions is similar to the distinction between Karp and Cook reductions for decision problems but we are working with optimization problems.

[Diagram: FP schedulability reduces strongly, and EDF schedulability reduces weakly, to problem (8).]

We formulate problem (8) as an integer program and solve it by using the cutting plane methodology, which involves relaxing the integer program to a linear program, solving the linear program, updating the integer program with a cut if the solution is not integral, and repeating the process (described in more detail in the next section). Two questions must be answered to apply this technique:

• How is the cut generated?
• How is the linear program solved?

We show that for our integer program, cuts can be generated so that they only change bounds of variables in the integer program—thus, the number of constraints in the integer program does not increase after incorporating the cut. To solve the linear program, we design a specialized algorithm which uses O(n log(n)) arithmetic ops. We could also use the ellipsoid method, for instance, to solve the program in weakly polynomial time, but such sophistication is excessive for the problem at hand—our algorithm is conceptually simple, easy to implement, and strongly polynomial-time. By composing the strong (resp., weak) reduction with our cutting plane algorithm, we get an iterative algorithm for FP schedulability (resp., EDF schedulability) which does almost linear work in each iteration.

We show that RTA and QPA may also be viewed as cutting plane algorithms that generate cuts the same way that we do. However, instead of solving the linear relaxation of the integer program in each iteration, they solve a further relaxation of the linear relaxation. This over-relaxation allows each iteration of RTA and QPA to complete using only O(n) arithmetic ops, but it also leads to slower convergence compared to our algorithm. We measure the number of iterations required by the algorithms in experiments on synthetically generated problem instances for FP schedulability (resp., EDF schedulability) to show that our algorithm is faster and more predictable than RTA (resp., QPA).

2 BACKGROUND

2.1 Integer Programs and Cutting Plane Methods

An optimization problem over variable x that can be expressed as

  min{ c · x | x ∈ Rⁿ≥0, Ax ≥ b },   (5)
TABLE 1: Similarities and differences between RTA and QPA.

                        RTA                                                 QPA
  tasks                 constrained-deadline, preemptive                    arbitrary-deadline, preemptive
  processor             single                                              single
  scheduler             fixed-priority                                      earliest-deadline-first
  functional iteration  update t to rbf(t)                                  update t to dbf(t) − 1
  final value           least fixed point of rbf(t), if system is           greatest fixed point of dbf(t) − 1, if system is
                        schedulable                                         unschedulable
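The two functional iterations in Table 1 can be sketched concretely. This is an illustrative Python rendering, not the authors' implementation: tasks are (C, T, D) triples, rbf(t) = Σ_j ⌈t/Tj⌉ Cj as in Fig. 1 and condition (16), dbf follows definition (3), and the QPA variant assumes integral data, as the text notes:

```python
from math import ceil, floor

def rbf(t, tasks):
    """Request bound function: sum over tasks of ceil(t / T) * C."""
    return sum(ceil(t / T) * C for (C, T, _D) in tasks)

def dbf(t, tasks):
    """Demand bound function, definition (3)."""
    return sum(floor((t + T - D) / T) * C for (C, T, D) in tasks if t >= D - T)

def rta(tasks, t0, limit):
    """RTA: climb up the steps of rbf from initial value t0 (cf. Fig. 1)."""
    t = t0
    while t <= limit:
        nxt = rbf(t, tasks)
        if nxt == t:
            return t            # least fixed point reached
        t = nxt
    return None                 # no fixed point at or below the limit

def qpa(tasks, t0, d_min):
    """QPA (integral data): climb down via t <- dbf(t) - 1 from a safe
    upper bound t0; d_min plays the role of D1 in problem (4)."""
    t = t0
    while t >= d_min:
        nxt = dbf(t, tasks) - 1
        if nxt >= t:
            return t            # t < dbf(t): witness of unschedulability
        t = nxt
    return None                 # no t in [d_min, t0] with t < dbf(t): schedulable
```

For the two-task set {(2, 4, 4), (1, 5, 5)}, RTA from initial value 2 stabilizes at 3, and QPA started from 19 descends below D1 = 4 without finding a fixed point, i.e., the set is EDF schedulable.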
where, for all i ∈ [n], we have

  xi(t) = ⌈(t + αi)/Ti⌉,
  x̲i = ⌈(a + αi)/Ti⌉.

xi(t) is a function of the variable t, and x̲i is a constant lower bound for xi(t).

The above problem can be formulated as an integer program:

  min  t
  s.t. t ≤ b
       t − Σ_{j∈[n]} Cj xj ≥ β
       Ti xi − t ≥ αi,   i ∈ [n]          (10)
       xi ≥ x̲i,          i ∈ [n]
       t ∈ Z
       x ∈ Zⁿ

Here, xi is an integer variable (not a function), but the relationship between xi and t is preserved in this transformation from the previous problem because any optimal solution (t, x) to the integer program satisfies

  xi = max{⌈(t + αi)/Ti⌉, x̲i} = ⌈(t + αi)/Ti⌉,
  t = Σ_{j∈[n]} Cj xj + β.

In the second equality on the first line, we use the definition of x̲i and the fact that t ≥ a.

To apply the cutting plane methodology on program (10), we relax it by making the variables continuous and dropping the upper bound b for t:

  min  t
  s.t. t − Σ_{j∈[n]} Cj xj ≥ β
       Ti xi − t ≥ αi,   i ∈ [n]          (11)
       xi ≥ x̲i,          i ∈ [n]
       t ∈ R
       x ∈ Rⁿ

This is almost a linear relaxation of program (10). The decision to drop t ≤ b from the program is not consequential because one of the following three conditions must be true:

• The linear relaxation and program (11) are both infeasible.
• Both programs are feasible and have the same optimal value.
• The linear relaxation is infeasible, but program (11) is feasible and has optimal value greater than b.

The last condition can be checked after solving program (11). So, from hereon, we refer to program (11) as a linear relaxation. In any optimal solution (t*, x*) to this linear program, we must have

  t* = Σ_{j∈[n]} Cj x*j + β,
  ∀i ∈ [n] : x*i = max{(t* + αi)/Ti, x̲i}.

If x* is integral, then t* is also integral and we have found an optimal solution to program (10). Otherwise, there must exist a j ∈ [n] where x*j is fractional. x*j cannot equal x̲j, which is integral; therefore, it must equal (t* + αj)/Tj. Using the third constraint in program (10) and the fact that t* is a strict lower bound for an optimal t, any optimal solution (t, x) to program (10) must satisfy

  xj ≥ (t + αj)/Tj > (t* + αj)/Tj = x*j.

Since xj is integral, the stronger inequality

  xj ≥ ⌈(t* + αj)/Tj⌉

is also valid. Moreover, x*j does not satisfy the inequality x*j ≥ ⌈(t* + αj)/Tj⌉ because x*j = (t* + αj)/Tj is fractional. Therefore, xj ≥ ⌈(t* + αj)/Tj⌉ is a cut⁵. This cut increases the lower bound for xj in the original program but does not increase the number of constraints. There are at most n such cuts, all of which are incorporated into program (10) by updating x̲i to

  ⌈(t* + αi)/Ti⌉

for all i ∈ [n]. These updates maintain the integrality of x̲, which is utilized in generating cuts in the next iteration.

Algorithm 1: A cutting plane algorithm for problem (8).
 1: Initialize x̲i = ⌈(a + αi)/Ti⌉ for all i ∈ [n].
 2: if x̲ · C + β ≤ a then    ▷ a is feasible
 3:     return a
 4: end if
 5: repeat
 6:     Solve linear program (11).
 7:     if no solutions then
 8:         return "infeasible"
 9:     end if
10:     Update t to the optimal value.
11:     Update x̲i to ⌈(t + αi)/Ti⌉ for all i ∈ [n].
12: until x̲ stabilizes or t > b.
13: if t > b then
14:     return "infeasible"
15: else
16:     return t
17: end if

Algorithm 1 shows the cutting plane algorithm that we have developed for problem (8). For any i ∈ [n], the minimum (resp., maximum) value of x̲i is ⌈(a + αi)/Ti⌉ (resp., ⌈(b + αi)/Ti⌉), and x̲i can assume O(⌈(b − a)/Ti⌉) values. Since at least one element in x̲ is incremented in each iteration, the loop runs O(⌈(b − a)/min_j Tj⌉ n) times. Inside each loop, the linear program can be solved in time polynomial in the size of the representation of the program [8]. The size of the representation of the program is linearly bounded by |I|, the size of the representation of the original instance of problem (8). Thus, the algorithm has O(⌈(b − a)/min_j Tj⌉ poly(|I|)) running time.

Theorem 1. Algorithm 1 is a cutting plane algorithm for problem (8) which uses

  O( ⌈(b − a)/min_{j∈[n]}{Tj}⌉ poly(|I|) )

arithmetic ops.

5. Similar logic is used in generating Gomory cuts [6, Ch. 8].
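Algorithm 1's control flow can be rendered in Python. This is an illustrative sketch, not the authors' code: the paper solves program (11) with its specialized Algorithm 2, whereas the stand-in below simply iterates the optimality conditions t = β + Σ_j Cj xj, xi = max((t + αi)/Ti, x̲i) to a fixed point (the "+ β" reflects the binding constraint t − Σ Cj xj ≥ β), which converges when Σ_j Cj/Tj < 1:

```python
from math import ceil

def solve_lp_fixed_point(C, T, alpha, beta, xlb, tol=1e-9):
    """Illustrative stand-in for a solver of linear program (11).

    Iterates the optimality conditions upward from the bound point; returns
    None when sum_i C_i/T_i >= 1, in which case the relaxation has no
    optimal solution."""
    if sum(c / p for c, p in zip(C, T)) >= 1:
        return None
    t = beta + sum(c * x for c, x in zip(C, xlb))
    while True:
        nxt = beta + sum(c * max((t + a) / p, x)
                         for c, p, a, x in zip(C, T, alpha, xlb))
        if abs(nxt - t) <= tol:
            return nxt
        t = nxt

def cutting_plane(C, T, alpha, beta, a, b, solve_lp=solve_lp_fixed_point):
    """Sketch of Algorithm 1 (cutting plane algorithm for problem (8))."""
    n = len(C)
    x = [ceil((a + alpha[i]) / T[i]) for i in range(n)]         # line 1
    if sum(c * xi for c, xi in zip(C, x)) + beta <= a:          # line 2: a is feasible
        return a
    while True:                                                 # lines 5-12
        t = solve_lp(C, T, alpha, beta, x)
        if t is None:
            return None                                         # "infeasible"
        new_x = [max(ceil((t + alpha[i]) / T[i]), x[i]) for i in range(n)]  # cuts
        if new_x == x or t > b:
            break
        x = new_x
    return None if t > b else round(t)                          # lines 13-17
```

On the toy instance C = [2, 1], T = [4, 5], α = [0, 0], β = 0, a = 1, b = 20, the sketch returns 3, the least t with t = 2⌈t/4⌉ + ⌈t/5⌉.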
Theorem 3. If Algorithm 1 calls Algorithm 2 to solve linear program (11) on line 6, then it solves problem (8) using

  O( ⌈(b − a)/min_{j∈[n]}{Tj}⌉ n² log(n) )

arithmetic ops.

Algorithm 3: An efficient algorithm for finding max f.
1: k ← (1 if Σ_{j∈[n]} Uj = 1 else 0)
2: while k < n and f(k + 1) ≥ f(k) do
3:     k ← k + 1
4: end while
5: return f(k)

3.2 An efficient algorithm for finding max f

By refining our previous analysis, we can find the maximum value of f without necessarily computing f for all n (or n + 1) values in its domain. Previously, we observed that when f(k − 1) is well-defined, the objective of the k-th subproblem is to maximize

  (1 − Σ_{j∈[n]\[k]} Uj)(f(k) − (x̲k Tk − αk))v + x̲k Tk − αk
    = (1 − Σ_{j∈[n]\[k−1]} Uj)(f(k − 1) − (x̲k Tk − αk))v + x̲k Tk − αk,

and the only constraint is that v is restricted to

  [ (1 − Σ_{j∈[n]\[k]} Uj)⁻¹ , (1 − Σ_{j∈[n]\[k−1]} Uj)⁻¹ ].

4 NEW SCHEDULABILITY TESTS

4.1 FP Scheduling

Problem (2) can be reduced to problem (8) by initializing (α, β, a, b) in the target instance to (0, 0, 1, Dn). However, we consider a subtly different reduction which, when composed with Algorithm 1, yields a simpler algorithm. Problem (2) is a question about the behavior of rbf in [0, Dn) for constrained-deadline systems, but in this interval ⌈t/Tn⌉ must equal 1. Thus, the condition rbf(t) ≤ t may be written as

  Σ_{j∈[n−1]} ⌈t/Tj⌉ Cj + Cn ≤ t.   (16)
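Condition (16) is cheap to evaluate for a given t; a minimal sketch (tasks as (C, T) pairs, with task n last, as in the text):

```python
from math import ceil

def fp_condition(t, tasks):
    """Condition (16): sum_{j in [n-1]} ceil(t / T_j) * C_j + C_n <= t.

    tasks -- [(C, T), ...] with task n (the task under analysis) last."""
    *others, (c_n, _t_n) = tasks
    return sum(ceil(t / T) * C for (C, T) in others) + c_n <= t
```

For example, with tasks (2, 4), (1, 5) and Cn = 1, the condition first holds at t = 7, since 2⌈7/4⌉ + 1⌈7/5⌉ + 1 = 7.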
misses a deadline at time 10 but the integer program

  max  0
  s.t. 13x1 ≤ t + 3
       17x2 ≤ t + 7
       20x3 ≤ t − 11
       5x1 + 6x2 + x3 ≥ t + 1
       t ∈ Z≥0
       x ∈ Z³

and its linear relaxation are infeasible. Intuitively, we expect (t, x1, x2, x3) = (10, 1, 1, 0) to be feasible, but it is not feasible because it violates the third inequality. The cause of the error is that the condition t ≥ Dj − Tj in the subscript in the definition of dbf (definition (3)) is not accounted for in the formulation. Therefore, a little more care is needed to formulate the problem as an integer program.

Instead of trying to formulate problem (4) as a single integer program, we divide it into O(n) subproblems with simple integer programming formulations. We assume that task 1 has the smallest deadline. We assume that tasks [n] \ [1] are listed in a nondecreasing order using the key Dj − Tj for each task j. Now, we consider n subproblems of problem (4) where t is restricted to the following n intervals:

  [D1, max{D1, D2 − T2}),
  [max{D1, D2 − T2}, max{D1, D3 − T3}), . . . ,
  [max{D1, Dn−1 − Tn−1}, max{D1, Dn − Tn}),
  [max{D1, Dn − Tn}, L).

Note that some of these intervals may be empty—for instance, if all deadlines are constrained, then only the last interval is nonempty. In the k-th interval, the dbf function can be simplified to

  Σ_{i∈[k]} ⌊(t + Ti − Di)/Ti⌋ Ci

because t < Dk+1 − Tk+1 ≤ Dk+2 − Tk+2 ≤ · · · ≤ L. Thus, the k-th subproblem is given by:

  max{t ∈ [ak, bk] ∩ Z | t < dbf_k(t)}   (20)

where

  dbf_k(t) = Σ_{i∈[k]} ⌊(t + Ti − Di)/Ti⌋ Ci,
  ak = max{D1, Dk − Tk},
  bk = max{D1, Dk+1 − Tk+1} − 1   if k < n,
       L − 1                      if k = n.

By replacing t with −t, problem (20) can be rewritten as

  min{t ∈ [−bk, −ak] ∩ Z | t + dbf_k(−t) ≥ 1}.

If an optimal value is found, then it must be negated to get the value of the original instance. Using the identity ⌊−x⌋ = −⌈x⌉, we have

  dbf_k(−t) = Σ_{i∈[k]} ⌊(−t + Ti − Di)/Ti⌋ Ci
            = −Σ_{i∈[k]} ⌈(t + Di − Ti)/Ti⌉ Ci.

Thus, the problem can be rewritten again as

  min{ t ∈ [−bk, −ak] ∩ Z | Σ_{i∈[k]} ⌈(t + Di − Ti)/Ti⌉ Ci + 1 ≤ t }.

This problem can be reduced to problem (8) by choosing (n, α, β, a, b) in the target of the reduction to equal (k, [D1 − T1, . . . , Dk − Tk], 1, −bk, −ak). After the reduction, f is given by

  ℓ ↦ (1 + Σ_{j∈[k]\[ℓ]} (Dj − Tj)Uj + Σ_{j∈[ℓ]} x̲j Cj) / (1 − Σ_{j∈[k]\[ℓ]} Uj)   (21)

and denoted fk. For any k ∈ [n], if Σ_{j∈[k]} Uj < 1, then Algorithm 2 can be simplified to just lines 5 and 9, and the bound

  −fk(0) = (Σ_{j∈[k]} (Tj − Dj)Uj − 1) / (1 − Σ_{j∈[k]} Uj)

pops out of the algorithm (we provide fewer details here because the simplification steps are identical to the ones described in Section 4.1). Since Σ_{j∈[k]} Uj < 1 for all k ∈ [n − 1], we may modify bk to

  bk = D1 − 1     if k < n and Dk+1 − Tk+1 ≤ D1,
       −⌈fk(0)⌉   if k < n and Dk+1 − Tk+1 > D1,
       −⌈fk(0)⌉   if k = n and Σ_{j∈[k]} Uj < 1,
       La − 1     if k = n and Σ_{j∈[k]} Uj = 1.

The bound Lb is implicitly present in this definition of bk.

Algorithm 5: A divide and conquer approach for problem (4).
 1: Let task 1 denote the task with the smallest deadline.
 2: Sort the remaining tasks in a nondecreasing order using the key Dj − Tj for each j.
 3: k ← n
 4: while k > 0 do
 5:     ak ← max{D1, Dk − Tk}
 6:     if k = n then
 7:         bk ← (La − 1 if Σ_{j∈[k]} Uj = 1 else −⌈fk(0)⌉)
 8:     else
 9:         bk ← (D1 − 1 if Dk+1 − Tk+1 ≤ D1 else −⌈fk(0)⌉)
10:     end if
11:     if bk < ak then
12:         k ← k − 1
13:         continue
14:     end if
15:     Solve problem (20) by using Algorithm 1 with (n, α, β, a, b) initialized to (k, [D1 − T1, . . . , Dk − Tk], 1, −bk, −ak).
16:     if an optimal solution exists then
17:         return the negation of the optimal value
18:     end if
19:     k ← k − 1
20: end while
21: return "infeasible"

The next theorems follow from the above discussion and from Theorem 3.

Theorem 6. If (n, α, β, a, b) in Algorithm 1 is initialized with (k, [D1 − T1, . . . , Dk − Tk], 1, −bk, −ak), then Algorithm 1, combined with Algorithm 2, solves problem (20) using

  O( ⌈(bk − ak)/min_{j∈[k]}{Tj}⌉ k² log(k) )

arithmetic ops.
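The interval partition driving the divide-and-conquer scheme is easy to compute; a minimal sketch (tasks as (C, T, D) triples, with task 1 first and the remaining tasks already sorted by Dj − Tj, as the text assumes):

```python
def subproblem_intervals(tasks, L):
    """Half-open intervals restricting t in the k-th subproblem of
    problem (4): [a_1, a_2), ..., [a_n, L) with a_k = max{D1, D_k - T_k}.

    tasks -- [(C, T, D), ...], task 1 first (smallest deadline), the rest
             in nondecreasing order of D - T; L as in the text."""
    n = len(tasks)
    d1 = tasks[0][2]
    # a_1 = D1 automatically, since D1 - T1 < D1 for any positive period
    a = [max(d1, D - T) for (_C, T, D) in tasks]
    return [(a[k], a[k + 1] if k + 1 < n else L) for k in range(n)]
```

As the text notes, some intervals may be empty; for a constrained-deadline task set, every interval but the last collapses.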
Theorem 7. Algorithm 5 solves problem (4) using

  O( ⌈(min{La, Lb} − D1)/min_{j∈[n]}{Tj}⌉ n² log(n) )

arithmetic ops.

A sufficient test for schedulability can be derived from Algorithm 5 and Algorithm 2 by observing that for the k-th subproblem,

• if Σ_{j∈[k]} Uj = 1 and Σ_{j∈[k]} (Tj − Dj)Uj < 1, then infeasibility is detected on line 2 of Algorithm 2; and
• if Σ_{j∈[k]} Uj < 1 and Σ_{j∈[k]} (Tj − Dj)Uj < 1, then bk < ak and infeasibility is detected on line 11 of Algorithm 5.

The next theorem follows from these observations, and it subsumes known tractable cases of EDF schedulability like implicit-deadline systems or systems with Di ≥ Ti for all i.

Theorem 8. Problem (4) is infeasible if for all k ∈ [n] we have

  Σ_{j∈[k]} (Tj − Dj)Uj < 1.

QPA is similar to RTA in the aspect that it over-relaxes the original problem to do less work inside the loop at the cost of a larger number of iterations. In the next section, we compare the number of iterations used by QPA to the number of iterations used by Algorithm 1 for synthetically generated task systems.

5 EMPIRICAL EVALUATION

5.1 FP Scheduling

We compare the number of iterations used by RTA to the number of iterations used by Algorithm 4 for synthetically generated instances of problem (2). For both algorithms, we use the initial value

  Cn / (1 − Σ_{j∈[n−1]} Uj).

The number of tasks in the system, n, is fixed to one of the values 25, 50, 75, 100, and the total utilization of the subsystem [n − 1] is fixed to one of the values 0.70, 0.80, 0.90, 0.99. For each total utilization value, we sample m = 10000 utilization vectors uniformly using the Randfixedsum algorithm [14]. Each utilization vector contains the utilizations of the first n − 1 tasks of a system. Task n has a very small utilization because we choose an arbitrarily large value for Dn = Tn so that the generated instances are not too easy for Algorithm 1—note that Dn is in the numerator in the characterization of running time in Theorem 4. To complete the specification of the system, we sample n wcets uniformly in [1, 1000] ∩ Z. The fixed interval reflects the idea that we do not expect a single reaction to an event to be too long in real systems⁷.

7. The synchrony hypothesis, used by reactive languages like Esterel, assumes that reactions are small enough to fit within a tick so that they appear to be instantaneous to the programmer [15].

TABLE 2: Statistics for n = 25 and variable utilization.

  Σ_{j∈[n−1]} Uj    Mean (RTA / Alg 4)    Std. Dev. (RTA / Alg 4)    Max (RTA / Alg 4)
  0.70              7.57 / 4.61           1.87 / 1.66                18 / 14
  0.80              10.93 / 6.63          2.67 / 2.29                26 / 19
  0.90              19.49 / 11.36         4.40 / 3.71                43 / 28
  0.99              125.48 / 60.11        21.90 / 17.74              211 / 140

For n = 25 and variable utilization, we show normalized histograms of the running times of Algorithm 4 and RTA in Figure 2. For lower utilizations, histograms have small spreads and are skewed right, indicating that most instances are solved quickly by the two algorithms. For higher utilizations, histograms have large spreads and are more bell-shaped, suggesting that instances are more challenging for both algorithms. For each utilization, the histogram for Algorithm 4 is to the left of the histogram for RTA—this is true for both the peak of the histogram and the right end of the spread of the histogram where the frequency is nearly zero. Thus, Algorithm 4 is faster on average and in the worst case than RTA. For each utilization, the histogram for Algorithm 4 is thinner and taller than the histogram for RTA. Therefore, the running times of Algorithm 4 are more predictable than those of RTA. We provide precise values for average running times, standard deviations, and maximum running times in Table 2.

For utilization 0.90 and variable n, we show normalized histograms of the running times of Algorithm 4 and RTA in Figure 3. Similar to the previous paragraph, the histogram for Algorithm 4 is to the left of the histogram for RTA in all cases. The histograms do not vary as much as they did in the previous paragraph—while the total utilization is bounded from above by 1, n is not, and harder instances may be generated more frequently for much higher n, but we decided that 100 is a reasonable upper bound for n in our experiments. From the figure, it is evident that Algorithm 4 is faster on average and in the worst case than RTA for all values of n.

Although we show results for two slices of our experiment, similar trends were observed for all other slices.

5.2 EDF Scheduling

We compare the number of iterations used by QPA to the number of iterations used by Algorithm 5 for synthetically generated instances of problem (4). Recall that Algorithm 5 uses a divide-and-conquer approach to create n subproblems where the k-th subproblem is the same as the original problem except that t is restricted to [ak, bk] ⊆ [D1, L). Although we present Algorithm 5 as a sequential algorithm with subproblems solved in the order n, n − 1, . . . , the algorithm can also proceed in parallel. Moreover, the k-th subproblem can be further divided into smaller subproblems by considering subintervals of [ak, bk]. If the sole concern is to find a feasible (not necessarily optimal) solution to problem (4), then once a feasible solution is found in one
[Fig. 2: Normalized histograms of #iterations required by RTA and Algorithm 4 for n = 25 and variable utilization.]

[Fig. 3: Normalized histograms of #iterations required by RTA and Algorithm 4 for Σ_{j∈[n−1]} Uj = 0.90 and variable n.]
branch, then other branches may be aborted. Such branching strategies are also applicable to QPA, and some branching heuristics have been proposed for the feasibility variant of problem (4) by Zhang and Burns [16]. In our evaluation, we use our sequential divide-and-conquer approach for QPA too so that the comparison is really between Algorithm 1 and QPA. We measure the running time by adding the number of iterations for each subproblem.

The number of tasks in the system, n, is fixed to one of the values 25, 50, 75, 100, the total utilization of the system is fixed to one of the values 0.70, 0.80, 0.90, 0.99, and the total density⁸ of the system is fixed to one of the values 1.25, 1.50, 1.75, 2.00. Since the total utilization is less than one in all our configurations, we use the initial value Lb for both algorithms. For each total utilization (resp., density) value, we sample m utilization vectors (resp., density vectors) of length n uniformly using the Randfixedsum algorithm. To complete the specification of the system, we sample n wcets uniformly in [1, 1000] ∩ Z.

8. The ratio δi = Ci/Di is called the density of task i.

TABLE 3: Statistics for n = 50, Σ_{j∈[n]} δj = 0.95, and variable utilization.

  Σ_{j∈[n]} Uj    Mean (QPA / Alg 1)    Std. Dev. (QPA / Alg 1)    Max (QPA / Alg 1)
  0.65            20.72 / 10.98         7.53 / 4.68                73 / 45
  0.75            24.58 / 12.61         9.21 / 5.61                81 / 51
  0.85            29.19 / 14.44         12.14 / 6.99               137 / 76
  0.95            35.76 / 16.81         18.96 / 9.60               231 / 112

For n = 50, Σ_{j∈[n]} δj = 0.95, and variable utilization, we show normalized histograms of the running times of QPA and Algorithm 1 in Figure 4, and provide summary statistics in Table 3. Like the histograms in the previous section, histograms have large spreads at higher utilizations, suggesting that harder instances are generated more frequently at higher utilizations. However, the x-axis extends beyond the points where the tails of the histograms seem to disappear—unlike the histograms in the previous section, the histograms in this section contain outliers which consume a very large number of iterations for at least one of the algorithms. From the maximum running time columns in Table 3, we can see that the outliers belong to QPA. This suggests that the worst-case running times of Algorithm 1 may be bounded by a smaller function than the equivalent function for QPA. In the analysis of Algorithm 1 in this paper, the worst-case bound is the same as that of RTA/QPA, but we conjecture that tighter bounds may be found by better analysis. For each utilization, the histogram for Algorithm 1 is taller, thinner, and to the left of the histogram for QPA (note the mean and standard deviation values in Table 3). Therefore, the running times of Algorithm 1 are smaller and more predictable than those of QPA.

For n = 50, Σ_{j∈[n]} Uj = 0.90, and variable density, we show normalized histograms of the running times of QPA and Algorithm 1 in Figure 5. Similar to the previous paragraph, the histogram for Algorithm 1 is to the left of the histogram for QPA in all cases. The peaks of the histograms for Algorithm 1 are more pronounced for lower densities. It is reasonable to expect easier instances to be generated when density is closer to 1 because when density is less than or equal to one the instances are always schedulable and solved by both algorithms within a constant number of iterations.
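The task-set generation just described can be sketched as follows. This is an assumption-laden stand-in: the paper draws fixed-sum vectors uniformly with the Randfixedsum algorithm [14], which we do not reproduce here; the normalized-exponential draw below has the right support but not the exact uniform distribution, and the function name is illustrative:

```python
import random

def sample_task_set(n, total_util, total_density, wcet_range=(1, 1000)):
    """Sketch of the Section 5.2 task-set generation.

    Stand-in sampler (NOT Randfixedsum): normalize exponential draws so the
    vector sums to the requested total. Periods and deadlines then follow
    from U_i = C_i / T_i and, per footnote 8, delta_i = C_i / D_i."""
    def fixed_sum_vector(total):
        raw = [random.expovariate(1.0) for _ in range(n)]
        s = sum(raw)
        return [total * r / s for r in raw]

    utils = fixed_sum_vector(total_util)
    densities = fixed_sum_vector(total_density)
    C = [random.randint(*wcet_range) for _ in range(n)]   # wcets in [1, 1000] ∩ Z
    T = [c / u for c, u in zip(C, utils)]                 # U_i = C_i / T_i
    D = [c / d for c, d in zip(C, densities)]             # delta_i = C_i / D_i
    return C, T, D
```

By construction the sampled system matches the configured totals exactly: the utilizations sum to total_util and the densities to total_density.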
[Fig. 4: Normalized histograms of #iterations required by QPA and Algorithm 1 for n = 50, Σ_{j∈[n]} δj = 1.75, and variable utilization.]

[Fig. 5: Normalized histograms of #iterations required by QPA and Algorithm 1 for n = 50, Σ_{j∈[n]} Uj = 0.90, and variable density.]

[Fig. 6: Normalized histograms of #iterations required by QPA and Algorithm 1 for Σ_{j∈[n]} Uj = 0.90, Σ_{j∈[n]} δj = 1.75, and variable n.]