
2nd IFAC Workshop on Distributed Estimation and Control in Networked Systems
Annecy, France, September 13-14, 2010

Energy-Aware Consensus Algorithms in Networked Sampled Systems ⋆

M. Lopez-Martinez ∗, J.-C. Delvenne ∗∗ and Vincent D. Blondel ∗∗

∗ Dept. of Systems and Automation Engineering, University of Seville, Spain
(e-mail: mlm@esi.us.es)
∗∗ Dept. of Applied Mathematics, Université Catholique de Louvain, Belgium
(e-mail: Jean-Charles.Delvenne@uclouvain.be, vincent.blondel@uclouvain.be)

Abstract: This work presents a method to analyze the convergence to consensus of a network of first order linear systems when the signals associated with the interconnections are sampled from the continuous time systems. In order to minimize the energy consumed in the communication process, we look for the optimal sampling time such that consensus is reached in a minimum number of iterations (communications). The analysis is performed by minimizing several objective functions that take into account a measure of the convergence rate to consensus. These objective functions mainly depend on the eigenvalues of the sampled transition matrix of the system. Finally, we present a case study based on the torus topology, where a simple communication scheme is analyzed and the optimal sampling time to reach a consensus is obtained.

Keywords: Consensus Algorithm, Networked Control Systems, Sampled Systems.

1. INTRODUCTION

This paper presents a method to analyze the best sampling time in a network such that consensus is reached with minimal energy consumption in the communication procedure.

In the literature there are many works that study different aspects of the consensus problem. Since the seminal work by Tsitsiklis (1984), where the consensus problem was first studied, many researchers have focused their investigations on this field. In the survey paper Olfati-Saber et al. (2007) and in the references therein, we can find the main problems studied in relation to consensus. In this survey, the authors compare several problems studied in continuous time and in discrete time, paying special attention to switching networks and time delays (Olfati-Saber et al. (2004)). More recently, in Seuret et al. (2008) and Seuret et al. (2009) the consensus problem is analyzed in continuous time introducing delays in the communication. With respect to the influence of the topology on the consensus problem, in Carli et al. (2008) it is shown that randomly time-varying (switching) graphs allow very fast convergence rates for consensus. Moreover, when every agent is restricted to communicate with a given small number of other agents, it has been proved that the optimal communication topology is given by a de Bruijn graph (Delvenne et al. (2009)).

On the other hand, the consensus problem for sampled continuous time systems has been scarcely studied, and very few works are referenced in the literature. In Gao et al. (2009), a study of the consensus problem for multi-agent systems is presented using sampled information, but the analysis is performed considering the sampling period to be small enough. The work of Moreau (2005) reveals that more communication does not necessarily lead to faster convergence and may eventually even lead to a loss of convergence. In this sense, the main novelty of the work we present is the idea of finding an optimal sampling time depending on an objective function to be designed. In this case, the objective is to find the best sampling time such that consensus is reached with a minimum number of iterations and thus with minimal energy consumed in the communication process. We compare several objective functions using some recent results developed in Carli et al. (2009).

The paper is organized as follows. In Section 3, a very simple motivating example introduces the problem and Section 4 presents the problem description. In Section 5, several optimization problems are defined in order to minimize the energy consumed in the communications, which is mainly related to the number of iterations. In Sections 6 and 7, we introduce a method to solve the optimization problems for the case of first order systems and then several examples are analyzed. Finally, the major contributions of the work are summarized in Section 8.

⋆ The authors gratefully acknowledge MEC (Spanish Ministry of Education and Science) for funding this work under grants JC2009-00258 and DPI2007-64697, and the EU STREP FP7 program for funding this work under project FeedNetBack ICT-2007-2. The authors also acknowledge the support given by the Concerted Research Action (ARC) "Large Graphs and Networks" of the French Community of Belgium, and by the Belgian Programme on Interuniversity Attraction Poles (PAI) initiated by the Belgian Federal Science Policy Office.

2. NOTATION

Throughout the paper the following notation will be used:

• z(Ak) is a column vector with the eigenvalues of Ak sorted from highest to lowest magnitude, and zi represents its i-th coordinate.


• |zi| represents the magnitude of the i-th coordinate.
• log represents the natural logarithm.
• ⊤ represents the complex conjugate transpose.

3. MOTIVATING EXAMPLE

Consider the discrete time feedback interconnection of two identical continuous time first order linear systems, sampled and held (ZOH) at sampling time Tk (see Fig. 1). The dynamics of the continuous time closed loop system can then be expressed as

ẋ1 = −x1 + x2   (1)
ẋ2 = −x2 + x1   (2)

or, in matrix form,

ẋ = Ax,  with  A = [−1  1; 1  −1]   (3)

and for the sampled closed loop system

x1,k+1 = a x1,k + b x2,k   (4)
x2,k+1 = a x2,k + b x1,k   (5)

or, in matrix form,

xk+1 = Ak xk,  with  Ak = [a  b; b  a]   (6)

where a = e^(−Tk) and b = 1 − a.

Fig. 1. Feedback interconnection of systems Σ1 (state x1) and Σ2 (state x2).

From this last relation, it is easy to see that Ak has one eigenvalue at z1 = 1. The other eigenvalue can be obtained by solving the characteristic polynomial

(z − a)² − b² = 0,

which yields z2 = 2a − 1 = 2e^(−Tk) − 1. The magnitude of this second eigenvalue is |z2| < 1 for all 0 < Tk < ∞, which implies that the sampled closed loop system is critically stable. The eigenvector associated with z1 is the vector of ones. Thus, for any given initial state x(0), it is possible to reach a consensus x(∞), where each component is equal to the mean of the components of x(0), that is, (x1(0) + x2(0))/2.

In order to reach consensus with a minimum consumption of energy spent in the communications, we have to minimize the number of iterations. The rate of convergence to consensus (z1 = 1) depends on the remaining eigenvalues, so it can be increased by reducing their magnitude; in this case there is only the second eigenvalue, which depends on the sampling time. Thus, the minimum number of iterations is achieved when z2 = 0, which requires a specific value of the sampling time, namely Tk = log 2.

Figure 2 shows the evolution of the response of each system with respect to the number of iterations, for Tk = 0.1, Tk = log 2 and Tk = 1.5. In this case the optimal sampling time to reduce the number of iterations, and hence the energy consumed in the communication, is given by Tk = log 2. It is also interesting to note that the number of iterations to reach the consensus (within a prescribed error) for Tk = 0.1 is larger than for Tk = 1.5. On the other hand, Fig. 3 shows the same simulation as in Fig. 2 but with respect to time. As in the previous case, the minimum time is reached for Tk = log 2. However, with respect to the other sampling times, it can now be noticed that the time to reach the consensus for Tk = 0.1 is shorter than for Tk = 1.5.

Fig. 2. Number of iterations versus the sampling time (x(k) for Tk = 0.1, Tk = log 2 and Tk = 1.5).

Fig. 3. Time response versus the sampling time (x(t) for Tk = 0.1, Tk = log 2 and Tk = 1.5).

The main question that arises at this point is the following: What are the optimal sampling times in a consensus problem where the systems involved are sampled?
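As an illustration, the following Python sketch simulates this two-agent interconnection for several sampling times and counts the iterations needed until both states are within 5% of the consensus value; the tolerance, the initial state and the helper name iterations_to_consensus are arbitrary choices of the sketch.

import numpy as np

def iterations_to_consensus(Tk, x0, tol=0.05, max_iters=1000):
    """Iterate x_{k+1} = A_k x_k for the two-agent example and count the
    iterations until every state is within tol of the consensus value."""
    a = np.exp(-Tk)
    b = 1.0 - a
    Ak = np.array([[a, b],
                   [b, a]])
    target = np.mean(x0)            # consensus value (x1(0) + x2(0)) / 2
    x = np.array(x0, dtype=float)
    for k in range(1, max_iters + 1):
        x = Ak @ x
        if np.all(np.abs(x - target) <= tol * abs(target)):
            return k
    return max_iters

x0 = [0.1, 0.5]                     # arbitrary initial condition
for Tk in (0.1, np.log(2.0), 1.5):
    n = iterations_to_consensus(Tk, x0)
    print(f"Tk = {Tk:.3f}: {n} iterations, elapsed time = {n * Tk:.2f} s")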
Remark 1. Considering first order sampled systems, we can approximate the settling time¹ (within 5%) by ts = ns Tk ≃ 3τ, where τ is the characteristic time of the system. Then, from z2 = e^(−Tk/τ), we can say that z2 ≃ e^(−3/ns) and hence ns ≃ −3/log z2. In this way we can analyze the number of iterations as a function of the eigenvalue z2, which in turn depends on the sampling time Tk. In the case of minimizing the settling time, we can then study the evolution of ts = ns Tk ≃ −3Tk/log z2 with respect to the sampling time.

Figure 4 shows, for these interconnected systems, the relation between the number of iterations ns needed to achieve the consensus and the time ts that it takes (settling time), both as functions of the sampling time.

Fig. 4. Number of iterations (ns ≃ −3/log |z2|) and settling time (ts ≃ −3Tk/log |z2|) with respect to the sampling time.

¹ The settling time is defined as the time at which the system output has entered and remained within a specified error band, usually symmetrical around the steady state value, and normally defined by 2% or 5% of that value.


4. PROBLEM DESCRIPTION

Consider a critically-stable continuous time interconnection of N first order linear systems Σi, where the continuous time dynamics of each system can be expressed in the following form:

Σi : ẋi = Ai xi + Bi ui   (7)

where xi ∈ R is the state and ui ∈ R is the input. Consider also that they are interconnected such that

ui = Σ_j kij xj,  j ≠ i ∈ [1, N].   (8)

Then, the dynamics of the whole system can be expressed as

ẋ = (A + BK)x = Āx   (9)

and the following theorem can be applied.

Theorem 1. If Ā is critically stable (λ1 = 0, ℜ(λi) < 0, i ∈ [2, N]) and the sum of the elements of each row and each column is equal to 0, Ā1 = 1⊤Ā = 0, then

lim_{t→∞} x(t) = ( (1/N) Σ_{j=1}^{N} xj(0) ) 1,

where 1 ∈ R^N is a column vector of N ones.

In case the signals of each system are sampled and held (ZOH) before exchanging the information, the dynamics of each system can then be written as

Σi,k : xi,k+1 = Ai,k xi,k + Bi,k ui,k   (10)

where Ai,k = e^(Ai Tk) and Bi,k = Ai^(−1)(Ai,k − I)Bi. Consider now that they are interconnected such that

ui,k = Σ_j kij xj,k,  j ≠ i ∈ [1, N].   (11)

Then, the dynamics of the whole system can be expressed as

xk+1 = (Ak + Bk K)xk = Āk xk   (12)

and the following theorem can be used.

Theorem 2. If Āk is critically stable (|z1| = 1, |zi| < 1, i ∈ [2, N]) and the sum of the elements of each row and each column is equal to 1, Āk 1 = 1⊤ Āk = 1, then

lim_{k→∞} xk = ( (1/N) Σ_{j=1}^{N} xj,0 ) 1.

At this point of the exposition, the problem to solve can be stated as follows:

Given the set of continuous time systems described by (7), verifying Theorem 1, find the best sampling time such that the discrete consensus algorithm converges (verifying Theorem 2) following a criterion of minimum energy (minimum number of iterations).
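The statement of Theorem 2 can be checked numerically with any doubly stochastic matrix whose remaining eigenvalues lie strictly inside the unit circle; the 4-agent matrix in the sketch below is an arbitrary example chosen only to satisfy these hypotheses.

import numpy as np

# A doubly stochastic matrix (row and column sums equal to 1) with
# |z1| = 1 and the remaining eigenvalues strictly inside the unit circle.
A_bar = np.array([[0.5, 0.3, 0.1, 0.1],
                  [0.1, 0.5, 0.3, 0.1],
                  [0.1, 0.1, 0.5, 0.3],
                  [0.3, 0.1, 0.1, 0.5]])

x = np.array([1.0, -2.0, 4.0, 0.5])
average = x.mean()

for _ in range(200):                  # iterate x_{k+1} = A_bar x_k
    x = A_bar @ x

print("eigenvalue magnitudes:", np.round(np.abs(np.linalg.eigvals(A_bar)), 3))
print("limit state:", np.round(x, 6), "  average of x0:", average)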
5. OPTIMAL SAMPLING TIME FOR MINIMUM ENERGY CONSUMPTION

When analyzing the discrete-time interconnections, it is known that the rate of convergence of the consensus algorithm depends on the eigenvalues of the matrix Āk. It is clear that changing the sampling time also changes the eigenvalues. The problem is to determine the optimal sampling time such that the consensus is reached in a minimum number of iterations (minimum energy consumption).

In order to obtain the optimal sampling time we have to specify the criterion to minimize, which obviously depends on the eigenvalues of the discrete system described by Āk. Next, we will describe several criteria: one group depending on the second eigenvalue of Āk, and a second group depending on a combination of all the eigenvalues. This separation of criteria is justified when the distance between the second eigenvalue and the others is not large, i.e., when the second eigenvalue is not dominant.

5.1 Second-eigenvalue criteria

In this group, we can state two criteria. The first one is the magnitude of the second eigenvalue, that is, J1 = |z2(Āk)|; the second one is related to the number of iterations needed to reach 95% of the consensus state (steady state), as defined in Remark 1. This second index can be posed as J2 = −3/log |z2(Āk)|. Since J2 is strictly increasing with respect to J1, minimizing J1 and minimizing J2 give rise to the same value of |z2|. A possible strategy is therefore to minimize J1 and then compute the value of J2 in order to obtain the estimated number of iterations. Formally, the problem is to find the optimal sampling time T∗ by solving

T∗ = arg min_{Tk>0} {J1} = arg min_{Tk>0} {|z2(Āk)|}   (13)

or

T∗ = arg min_{Tk>0} {J2} = arg min_{Tk>0} {−3/log |z2(Āk)|}   (14)
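As an illustrative sketch, a simple grid search over Tk approximates (13) whenever z2(Āk) can be evaluated; applied to the two-agent example of Section 3, for which z2(Āk) = 2e^(−Tk) − 1, the minimizer should come out close to log 2. The grid resolution is an arbitrary choice.

import numpy as np

def J1_two_agents(Tk):
    """|z2(A_bar_k)| for the two-agent example, where z2 = 2*exp(-Tk) - 1."""
    return abs(2.0 * np.exp(-Tk) - 1.0)

Tk_grid = np.linspace(1e-3, 3.0, 5000)
J1_values = np.array([J1_two_agents(Tk) for Tk in Tk_grid])

T_star = Tk_grid[np.argmin(J1_values)]
J2_star = -3.0 / np.log(J1_values.min()) if J1_values.min() > 0 else 0.0
print(f"T* ≈ {T_star:.4f} (log 2 = {np.log(2):.4f}), estimated iterations J2 ≈ {J2_star:.2f}")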
Specifying the maximum number of iterations: The previous indexes J1 and J2 can also be used to determine a range of sampling times for which the number of iterations needed to almost reach the consensus does not exceed a prescribed value n∗s. In this way, the problem is to find the set of sampling times ΩT such that the following inequality is satisfied


ΩT = {Tk : J2 < n∗s} = {Tk : −3/log |z2(Āk)| < n∗s} = {Tk : J1 < e^(−3/n∗s)}.   (15)

If we are also interested in minimizing the time consumed, tm, we will take the minimum value of the sampling time Tm in ΩT, such that tm = n∗s Tm, with Tm defined as

Tm = min {Tk : J2 < n∗s} = min {Tk : J1 < e^(−3/n∗s)}.   (16)

On the other hand, if the bandwidth is limited, we could choose the maximum value of Tk,

TM = max {Tk : J2 < n∗s} = max {Tk : J1 < e^(−3/n∗s)}.   (17)
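A possible numerical treatment of (15)-(17) is sketched below for the two-agent example with an arbitrarily prescribed n∗s = 5: evaluate J1 on a grid of sampling times, keep the values satisfying J1 < e^(−3/n∗s), and read off the smallest and largest admissible sampling times as approximations of Tm and TM.

import numpy as np

n_s_star = 5                                       # prescribed maximum number of iterations
threshold = np.exp(-3.0 / n_s_star)                # J1 must stay below e^(-3/n_s*)

Tk_grid = np.linspace(1e-3, 3.0, 10000)
J1_values = np.abs(2.0 * np.exp(-Tk_grid) - 1.0)   # |z2| for the two-agent example

admissible = Tk_grid[J1_values < threshold]        # grid approximation of Omega_T
if admissible.size > 0:
    Tm, TM = admissible.min(), admissible.max()
    print(f"Omega_T ≈ [{Tm:.3f}, {TM:.3f}],  tm ≈ n_s* Tm = {n_s_star * Tm:.3f} s")
else:
    print("No sampling time on the grid satisfies the requirement")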
5.2 All-eigenvalues criteria

As in the previous section, we can look for optimal sampling times that minimize other indexes commonly used in the discrete-time consensus literature (see, e.g., Carli et al. (2009)). Related to J2, which represents the number of iterations such that the error with respect to the consensus value is less than 5%, we can use the expected value of the mean quadratic error at iteration k and also the sum of these values from the initial condition until iteration k.

Mean quadratic error: From J2, we can derive the following criteria. Assume that we have a state vector xk with N components xk,i, and that the mean value of the N components is the scalar xm = (1/N) Σ_i x0,i. We can impose that the error of each component with respect to the steady state value is less than a certain value ε (e.g. ε = 5%), i.e. |xk,i − xm|² < ε². We can relax this criterion by adding up all the components, and then we get Σ_i |xk,i − xm|² < N ε², which is equivalent to ||xk − xm 1||² < N ε². If we define the error vector ek = xk − xm 1 and substitute it in the expression, this yields ek⊤ ek < N ε². Now, assuming that Āk verifies Theorem 2, then xk = Āk xk−1 and also xm 1 = Āk xm 1. Hence, ek = (I − (1/N)11⊤) Āk xk−1 = (I − (1/N)11⊤) Āk^k x0. Substituting in ek⊤ ek = x0⊤ (Āk^k)⊤ (I − (1/N)11⊤)⊤ (I − (1/N)11⊤) Āk^k x0, and if we assume that Āk is normal, i.e. Āk⊤ Āk = Āk Āk⊤, and take into account that I − (1/N)11⊤ is symmetric and idempotent, then we get the following expression

ek⊤ ek = x0⊤ (Āk⊤)^k (I − (1/N)11⊤) Āk^k x0 < N ε².   (18)

Finally, if we consider x0 as a random vector whose components are independent and identically distributed with mean µ and standard deviation σ, and we compute the expected value of the above expression, we obtain

E[ek⊤ ek] = (µ² + σ²) trace( (Āk⊤)^k (I − (1/N)11⊤) Āk^k ).   (19)

Since Āk is normal by assumption, the eigenvalues of Āk (I − (1/N)11⊤) are the same as the eigenvalues of Āk, but without the eigenvalue whose magnitude is 1 and with a new eigenvalue 0. Now, taking into account that the trace of a matrix is the sum of its eigenvalues, this expression can be written as

E[ek⊤ ek] = (µ² + σ²) Σ_{i≠1} |zi|^(2k) < N ε²   (20)

where the eigenvalue of Āk with magnitude |z1| = 1 is not included. In order to simplify the expression, we can also assume that µ² + σ² = 1, which occurs for example when the mean is zero and the standard deviation is one, yielding

E[ek⊤ ek] = Σ_{i≠1}^{N} |zi|^(2k) < N ε².   (21)

At this point we can define a new index as

J3 = Σ_{i≠1}^{N} |zi|^(2k)

or, equivalently,

J3 = ||z||_p^p − 1

with p = 2k, such that we can state several optimization problems:

• Determine the optimal sampling time such that a desired mean quadratic error is fulfilled in the minimum number of iterations:

T∗ = arg min_{k>0} { k : J3 < (N/(µ² + σ²)) ε² }

• Given the value of k, determine the optimal sampling time such that the mean quadratic error, and thus J3, is minimum at this iteration k:

T∗ = arg min J3.

This is also equivalent to minimizing J4 = ||z||_p^p with p = 2k. A special case is given when we fix k = 1, which means that, to obtain the best evolution in the first iteration, we have to minimize the 2-norm of the vector of eigenvalues, J4 = ||z||_2^2.

Example: In the case of just two agents and fixing k = 1, the index to minimize is J3 = |z2|², which is equivalent to minimizing J1.

Sum of mean quadratic errors: From the previous section, the indexes J3 and J4 can be extended to the case of considering not just the value of the mean quadratic error at a certain instant k, but the sum of all of them until instant kmax. In this way, from

Σ_{k=0}^{kmax} E[ek⊤ ek] = (µ² + σ²) Σ_{k=0}^{kmax} Σ_{i≠1} |zi|^(2k)   (22)

we can derive the index

J5 = Σ_{k=0}^{kmax} Σ_{i≠1} |zi|^(2k).   (23)

Finally, from this index the following optimization problem can be stated:

• Given kmax ∈ [1, ∞], obtain the optimal sampling time such that the sum of the mean quadratic errors is minimized. This is equivalent to solving


T∗ = arg min_{Tk>0} J5.

Remark 2. For the special case of considering kmax = ∞, another index can be obtained by taking into account the sum of the infinite geometric series Σ_{k=0}^{∞} |zi|^(2k) = 1/(1 − |zi|²), which gives

J6 = Σ_{i≠1} 1/(1 − |zi|²).   (24)
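All of the all-eigenvalues indexes can be evaluated directly from the eigenvalue magnitudes. The sketch below computes J3 (and hence J4) for a fixed k, J5 for a finite kmax and J6 for kmax = ∞, using the two-agent eigenvalue vector z(Āk) = [1, 2e^(−Tk) − 1]⊤ as an illustrative test case; the chosen Tk and kmax are arbitrary.

import numpy as np

def J3(z, k):
    """Sum over i != 1 of |z_i|^(2k); J4 = ||z||_{2k}^{2k} is J3 plus |z1|^(2k)."""
    mags = np.abs(np.asarray(z))[1:]           # drop the consensus eigenvalue z1
    return float(np.sum(mags ** (2 * k)))

def J5(z, k_max):
    """Sum of J3(z, k) for k = 0, ..., k_max."""
    return float(sum(J3(z, k) for k in range(k_max + 1)))

def J6(z):
    """Limit of J5 for k_max -> infinity: sum over i != 1 of 1/(1 - |z_i|^2)."""
    mags = np.abs(np.asarray(z))[1:]
    return float(np.sum(1.0 / (1.0 - mags ** 2)))

Tk = 0.9                                        # arbitrary sampling time
z = [1.0, 2.0 * np.exp(-Tk) - 1.0]              # eigenvalues of the two-agent A_bar_k
print(f"J3(k=1) = {J3(z, 1):.4f}, J5(kmax=20) = {J5(z, 20):.4f}, J6 = {J6(z):.4f}")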
6. CONSENSUS OF SAMPLED FIRST ORDER SYSTEMS

Consider a critically-stable continuous time interconnection of N first order linear systems Σi, where the continuous time dynamics of each system can be expressed in the following form:

Σi : ẋi = −xi + ui   (25)

where xi ∈ R is the state and ui ∈ R is the input signal. Consider also that they are interconnected such that

ui = Σ_j kij xj,  j ≠ i ∈ [1, N].   (26)

Then, the dynamics of the system can be expressed as

ẋ = (−I + K)x,   (27)

where I is the identity matrix and K is the adjacency matrix excluding the elements of the main diagonal, which are set to zero. From this expression, in order to verify Theorem 1, the following conditions must be satisfied:

Σ_{i=1}^{n} kij = 1 ∀j,   Σ_{j=1}^{n} kij = 1 ∀i.   (28)

Assuming that the signals of each system are sampled and held (ZOH) before exchanging the information, the dynamics of each system can be written as

Σi,k : xi,k+1 = ai xi,k + bi ui,k   (29)

where ai = e^(−Tk) and bi = 1 − ai. Consider now that they are interconnected such that

ui,k = Σ_j kij xj,k,  j ≠ i ∈ [1, N].   (30)

Then, the dynamics of each system yields

Σi,k : xi,k+1 = ai xi,k + (1 − ai) Σ_j kij xj,k,  j ≠ i ∈ [1, N].

Finally, grouping the N equations, we can express the dynamics of the whole system as

xk+1 = (aI + (1 − a)K)xk = Āk xk.   (31)

6.1 Optimal sampling time

As shown in Section 5, we can state our problem as the optimization of an index (minimization of Ji, i = 1, ..., 6) which depends on the eigenvalues of the matrix Āk. In the case of sampled first order systems, the matrix Āk = Ak + Bk K takes the values Ak = aI, Bk = (1 − a)I and a = e^(−Tk). In this way the eigenvalues of the matrix Āk can be calculated from

z(Āk) = z(aI + (1 − a)K) = a1 + (1 − a)z(K).   (32)

Notice that the matrix K represents the weighted adjacency matrix of the graph excluding the elements of the main diagonal, which are set to zero. By assumption, the matrix K verifies Σ_{i=1}^{n} kij = 1 ∀j and Σ_{j=1}^{n} kij = 1 ∀i, which implies that at least one eigenvalue has magnitude one. On the other hand, this matrix does not depend on the sampling time, and thus its eigenvalues only have to be computed once. Hence, each eigenvalue in (32) depends linearly on the parameter a, so the minimization can be carried out over a, which enables an easy computation; once the optimal a is found, the optimal sampling time is simply Tk = − log a.
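This procedure can be sketched in a few lines: compute z(K) once, map it through (32) for each candidate a ∈ (0, 1), evaluate the chosen index and convert the best a back into a sampling time via Tk = −log a. In the sketch below the index is J1 and the test matrix is the two-agent K = [0 1; 1 0]; the function name, the grid and the default index are choices of the sketch, and any of the other indexes Ji could be plugged in instead.

import numpy as np

def optimal_sampling_time(K, index=None, a_grid=None):
    """Grid search over a = e^(-Tk), using z(A_bar_k) = a + (1 - a) z(K), eq. (32)."""
    zK = np.linalg.eigvals(K)
    zK = zK[np.argsort(np.abs(zK - 1.0))]          # put the consensus eigenvalue z1(K) = 1 first
    if index is None:                              # default index: J1 = max_{i>=2} |z_i(A_bar_k)|
        index = lambda z: np.max(np.abs(z[1:]))
    if a_grid is None:
        a_grid = np.linspace(1e-4, 1.0 - 1e-4, 20000)
    costs = [index(a + (1.0 - a) * zK) for a in a_grid]
    a_star = a_grid[int(np.argmin(costs))]
    return -np.log(a_star)                         # Tk = -log a

# Two-agent example: K = [[0, 1], [1, 0]] is doubly stochastic.
K = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(f"T* ≈ {optimal_sampling_time(K):.4f} (log 2 = {np.log(2):.4f})")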
7. CASE STUDY: TORUS TOPOLOGY

The graph that describes the communication topology as a torus is represented in Fig. 5. We assume that all nodes have self-loops even though these are not plotted in the figure. The matrix K is built using this graph without taking the self-loops into account. We study the case of a torus of 10x10 agents, where there are 10 agents per ring and every agent belongs to two perpendicular rings. We consider that every agent communicates with itself and with an adjacent agent in each of its rings (in-degree = out-degree = 3) as shown in Fig. 5.

Fig. 5. Agent interconnection in a torus graph (Source: FeedNetBack EU FP7 Project).

Hence, the matrix K can be written as the 100x100 block matrix

K = [ R O O O O O O O O I
      I R O O O O O O O O
      O I R O O O O O O O
      O O I R O O O O O O
      O O O I R O O O O O
      O O O O I R O O O O
      O O O O O I R O O O
      O O O O O O I R O O
      O O O O O O O I R O
      O O O O O O O O I R ]   (33)

where I is a 10x10 identity matrix, O is a 10x10 zero matrix, and R is given by

R = [ 0 0 0 0 0 0 0 0 0 k
      k 0 0 0 0 0 0 0 0 0
      0 k 0 0 0 0 0 0 0 0
      0 0 k 0 0 0 0 0 0 0
      0 0 0 k 0 0 0 0 0 0
      0 0 0 0 k 0 0 0 0 0
      0 0 0 0 0 k 0 0 0 0
      0 0 0 0 0 0 k 0 0 0
      0 0 0 0 0 0 0 k 0 0
      0 0 0 0 0 0 0 0 k 0 ],   with k = 1/2.   (34)
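As an illustrative sketch, a coupling matrix with this torus structure can be assembled with Kronecker products and passed to the grid search of Section 6.1. So that condition (28) holds, the sketch below weights both neighbours of each agent (the same-ring one, as in R, and the perpendicular-ring one, corresponding to the identity blocks) by 1/2; this equal weighting is an assumption of the sketch and is not claimed to reproduce the value T∗ = log 3 reported below.

import numpy as np

n = 10                                             # agents per ring (10x10 torus)
S = np.roll(np.eye(n), -1, axis=1)                 # cyclic shift: previous agent in a ring
I = np.eye(n)

# Assumed weighting: 1/2 on the same-ring neighbour and 1/2 on the
# perpendicular-ring neighbour, so that rows and columns of K sum to 1 (eq. (28)).
K = 0.5 * np.kron(I, S) + 0.5 * np.kron(S, I)      # 100x100 coupling matrix
print("row sums:", np.unique(np.round(K.sum(axis=1), 12)))

zK = np.linalg.eigvals(K)
zK = zK[np.argsort(np.abs(zK - 1.0))]              # consensus eigenvalue first
a_grid = np.linspace(1e-4, 1.0 - 1e-4, 20000)
J1 = [np.max(np.abs(a + (1.0 - a) * zK[1:])) for a in a_grid]   # via eq. (32)
a_star = a_grid[int(np.argmin(J1))]
print("T* ≈", round(-np.log(a_star), 4), " (a* =", round(a_star, 4), ")")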


For this case, we obtain that the optimal sampling time that minimizes the indexes J1, J4 and J6 is the same, and it is given by T∗ = log 3. Next, two comparisons are made taking into account different values of the sampling time. Figure 6 shows that the number of iterations needed to achieve the consensus with sampling times Tk = log 3 and Tk = 3 is smaller than with sampling time Tk = 0.1 s. This implies that there is a range of sampling times where the network has low energy consumption (fewer iterations) and requires low bandwidth to reach a consensus.

Fig. 6. Torus: Evolution of the states with respect to the number of iterations (Tk = 0.1, Tk = log 3 and Tk = 3).

On the other hand, Fig. 7 shows that, with respect to the time consumption, the best of the three options is to take Tk = 0.1. If we minimize the index J = Tk J2 = −3Tk/log |z2|, which represents an estimate of the settling time of a system with one real dominant eigenvalue, then the optimal sampling time can be computed as T⋆ = arg min_{Tk>0} J. For this example, this value is T⋆ → 0, which means that the best option to minimize the time consumption is to take the minimum sampling time, taking into account the bandwidth constraints of the network.

Fig. 7. Torus: Evolution of the states with respect to time (Tk = 0.1, Tk = log 3 and Tk = 3).

8. CONCLUSIONS

This work has shown the importance of choosing an appropriate sampling time when using consensus algorithms in networked sampled systems. The implications of this sampling time selection are mainly two: a reduction of the energy consumption and a reduction of the bandwidth requirements. The main drawback that may appear is that these reductions may be achieved at the expense of increasing the convergence time (in seconds) of the consensus algorithm, which will also have to be taken into account depending on the application. For that reason, our future development will be to define and study new indexes that minimize a weighted combination of both, such as J = J2(1 + r Tk), where J2 = −3/log |z2| is used to minimize the number of iterations and the term Tk J2 is introduced to minimize the settling time.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge the Université Catholique de Louvain and the University of Seville for making possible the stay of the first author in Louvain-La-Neuve.

REFERENCES

R. Carli, F. Fagnani, A. Speranzon and S. Zampieri. Communication constraints in the average consensus problem. Automatica, 44:671–684, 2008.
R. Carli, F. Garin and S. Zampieri. Quadratic indices for the analysis of consensus algorithms. Proceedings of the 4th International Workshop on Information Theory and Applications, 2009.
J.-C. Delvenne, R. Carli and S. Zampieri. Optimal strategies in the average consensus problem. Systems & Control Letters, 58:759–765, 2009.
A. Seuret, D.V. Dimarogonas and K. Johansson. Consensus under communication delays. 47th IEEE Conference on Decision and Control, 2008.
A. Seuret, D.V. Dimarogonas and K. Johansson. Consensus of double integrator multi-agents under communication delay. 8th IFAC Workshop on Time Delay Systems, 2009.
Y. Gao, L. Wang, G. Xie and B. Wu. Consensus of multi-agent systems based on sampled-data control. International Journal of Control, 82(12):2193–2205, 2009.
R. Olfati-Saber, J.A. Fax and R.M. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215–233, 2007.
R. Olfati-Saber and R.M. Murray. Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9):1520–1533, 2004.
L. Moreau. Stability of multiagent systems with time-dependent communication links. IEEE Transactions on Automatic Control, 50(2):169–182, 2005.
J. Tsitsiklis. Problems in decentralized decision making and computation. Ph.D. thesis, Department of EECS, MIT, 1984.
L. Xiao, S. Boyd and S.J. Kim. Distributed average consensus with least-mean-square deviation. Journal of Parallel and Distributed Computing, 67:33–46, 2007.
