
Signal Processing 134 (2017) 214–223


Distributed target localization using quantized received signal strength



Zeyuan Li, Pei-Jung Chung, Bernard Mulgrew
Institute for Digital Communications, University of Edinburgh, Edinburgh, United Kingdom

Keywords: Target localization; Quantization; Received signal strength

Abstract: In this paper, we propose a distributed gradient algorithm for received signal strength (RSS) based target localization using only quantized data. The maximum likelihood estimator for the quantized RSS is derived, and particle swarm optimization is used to provide an initial estimate for the gradient algorithm. A practical quantization threshold design method is presented for RSS data. To derive a distributed algorithm using only the quantized signal, the local estimate at each node is also quantized; the RSS measurements and the local estimate at each sensor node are quantized in different ways. By using a quantization elimination scheme, a quantized distributed gradient method is proposed in which the quantization noise in the local estimate is gradually eliminated with each iteration. Section 5 shows that the performance of the centralized algorithm can reach the Cramer Rao Lower Bound, and that the proposed distributed algorithm using a small number of bits can achieve the performance of the distributed gradient algorithm using unquantized data.

1. Introduction

The deployment of a wireless sensor network (WSN) in a certain area provides a powerful tool to monitor events or environmental conditions (e.g. temperature, sound, pressure). WSNs are characterized by large numbers of spatially distributed autonomous sensors with limited computation and power resources [1]. These limitations restrict the application of centralized algorithms based on a single fusion centre. One of the fundamental tasks that WSNs need to perform is to localize the position of a target. Thus, the development of distributed target localization algorithms for WSNs is an important issue.

Most WSN positioning systems rely on received signal strength (RSS), angle of arrival (AOA), time of arrival (TOA) or time difference of arrival (TDOA) measurements, or a combination of them. However, localization using TOA or TDOA measurements requires complicated synchronization [2,3], which makes the localization hardware expensive and unsuitable for small, cheap sensors. For AOA based localization, an antenna array is needed at each sensor, which is also expensive. Therefore, in this work, RSS measurement is considered, since it is practically simple and inexpensive to implement [4]. Location estimation using RSS measurements has been researched and simulated for WSNs in [5-8].

It is well known that sensor nodes in WSNs are characterized by limited resources, such as energy and communication bandwidth. One way to save energy is to limit the data transmitted in the network, so it is desirable that only multibit quantized data is transmitted. However, the majority of existing works assume analog data are available for localization. Motivated by this, the Quantized RSS (QRSS) model and its corresponding Cramer Rao Lower Bound (CRLB) are proposed in [9]. In the QRSS model, measurements are quantized before being sent to the centre. There are many quantization methods, such as uniform quantization and vector quantization. In any quantization scheme, the thresholds are the most important parameters, especially when a small number of bits is used. In [10], a heuristic method to determine the optimum quantization thresholds for target localization using quantized acoustic energy measurements is presented. In [11], target localization using quantized measurements considering the wireless channel statistics is presented. Target localization using quantized data combined with coding is also presented in [12]. The well known proximity measurement localization [13,14] can be considered as a special case of the QRSS model where only one-bit data is used.

Another way to preserve energy is to avoid long range wireless transmission. Distributed processing that requires only local communications and processing helps to reduce the transmission energy. Rabbat et al. [15] introduced an incremental gradient optimization method for energy based acoustic source localization in WSNs. A distributed projection onto convex sets method, which is similar to the incremental gradient method, has also been implemented for target localization [16]. In [17], a consensus based distributed algorithm has been used to localize a source whose energy measurements follow a contaminated Gaussian distribution. In [18], the RSS-based location estimation problem is relaxed into a semidefinite programming (SDP) problem and further solved by a consensus based distributed SDP method [19].


Corresponding author.
E-mail address: z.li@ed.ac.uk (Z. Li).

http://dx.doi.org/10.1016/j.sigpro.2016.12.003
Received 31 March 2016; Received in revised form 1 December 2016; Accepted 4 December 2016
Available online 08 December 2016
0165-1684/ © 2016 Elsevier B.V. All rights reserved.
Distributed iterative search techniques such as the distributed gradient method, the distributed Gauss-Newton method and distributed expectation maximization (EM) can also be applied to optimize the maximum likelihood (ML) function in a distributed manner. However, these techniques have limitations because of the multimodal nature of the ML function. In this work, we propose a distributed localization method consisting of two steps. In the first step, each sensor node exchanges its QRSS measurement with its neighbours and then estimates the target position by optimizing a local ML function. If this initial estimate lies in the near vicinity of the global minimum, where the local ML function is approximately convex, a distributed algorithm such as the distributed gradient or distributed Gauss-Newton method can use it as an initialization and take over from there. In this paper, particle swarm optimization (PSO) is used for initialization, as it can handle multimodal cost functions and has been employed to optimize the ML function for the RSS-based localization problem [8]. To save communication, the data exchanged in the distributed process also needs to be quantized. The quantized distributed gradient (QDG) method [20] is applied to conduct this process, as only the target position needs to be exchanged between neighbours in the distributed gradient algorithm. Moreover, the target position is easy to quantize, since the target is located in an area of interest whose range can be known to the system. Unlike in the distributed gradient algorithm, the dynamic range of the intermediate parameters exchanged in algorithms such as the distributed Gauss-Newton method [21] is difficult to acquire, which makes them challenging to quantize. An improved QDG algorithm using fewer quantization bits is also proposed in the paper. The performance of the distributed algorithms is compared with the centralized PSO-ML algorithm via simulation. Note that the RSS measurements and the local estimate at each sensor node are quantized in different ways.

The rest of the paper is organized as follows. In Section 2, the problem formulation with the RSS and QRSS signal models is introduced. In Section 3, the quantization thresholds are designed and the ML problem using quantized data is solved using PSO. In Section 4, we propose a distributed algorithm using a gradient method with a quantization error compensation term. Simulation results are provided in Section 5. Concluding remarks are given in Section 6.

2. Signal model

Consider a wireless sensor network, as illustrated in Fig. 1, consisting of N sensor nodes deployed over a certain area. Nodes are static and able to communicate with adjacent nodes that lie within a given range. Assume that a target emits signals which can be heard by all nodes in the network. The goal is to determine the location of the target.

Fig. 1. A target in a uniformly deployed sensor network with 80 sensor nodes.

To estimate the target location, we employ RSS measurements. Without loss of generality, we assume that the target and sensor nodes are placed in a 2-dimensional space. Let u = [x, y]^T denote the coordinates of the target and c_i = [x_i, y_i]^T the coordinates of sensor node i, where i = 1, ..., N. Then the path loss P_i (in dBm) from the source to node i under log-normal shadowing is modeled as [22,23]

    P_i = P_0 − 10α log10(d_i / d_0) + n_i,   i = 1, ..., N,   (1)

where P_0 is the path loss measured at a reference distance d_0, d_i = ||u − c_i|| is the distance between u and c_i, α is the path loss exponent which varies between 2 and 6 depending on the environment, and n_i represents the log-normal shadowing noise modeled as a zero-mean Gaussian variable with standard deviation σ dB. For simplicity, we assume d_0 = 1 in the following derivations.

To save communication bandwidth and sensor energy, P_i is quantized [9]. Denote the quantized version of P_i as K_i (i = 1, ..., N), where K_i can take any discrete value from 0 to L − 1, with L = 2^M quantization levels for an M-bit quantizer. For simplicity, we assume all the sensor nodes employ the same quantization thresholds. In the quantization process, with the quantization thresholds s = [s_0, s_1, ..., s_L], the raw RSS measurement is quantized into the discrete datum K_i,

    K_i =
      0,      if s_0 < P_i < s_1,
      1,      if s_1 < P_i < s_2,
      ...
      L − 1,  if s_{L−1} < P_i < s_L,   (2)

where s_0 = −∞ and s_L = ∞. The QRSS measurement K_i will be used to estimate the target position.

3. Localization strategies

In this section, we present localization strategies using the QRSS measurements at the nodes. We first consider the centralized ML estimate, which will then be used to calculate an initial value for the distributed method described in the next section.

3.1. Maximum likelihood estimation

Using the RSS measurement model (1) and the quantization method (2), under the Gaussian shadowing noise assumption, the probability that node i takes the decision K_i is (see [9])

    p_i(K_i | u) =
      Q((s_0 − z_i)/σ) − Q((s_1 − z_i)/σ),        K_i = 0,
      Q((s_1 − z_i)/σ) − Q((s_2 − z_i)/σ),        K_i = 1,
      ...
      Q((s_{L−1} − z_i)/σ) − Q((s_L − z_i)/σ),    K_i = L − 1,   (3)

where z_i = P_0 − 10α log10(d_i) and Q(·) is the tail probability (Q-function) of the zero-mean unit-variance normal distribution,

    Q(x) = ∫_x^∞ (1/√(2π)) exp(−t²/2) dt.   (4)

Consider the presence of a central unit that gathers all measurements coming from the sensor nodes. After collecting the data K = [K_1, ..., K_N], the fusion centre estimates the parameter vector u = [x, y]^T. Based on the notation and assumptions above, the likelihood function at the fusion centre is

    p(K | u) = ∏_{i=1}^{N} p_i(K_i | u),   (5)
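As a concrete illustration of the measurement model, the following minimal NumPy/SciPy sketch (our own illustration, not the authors' code) generates QRSS data according to (1)-(2) and evaluates the log of the likelihood (5) for a candidate target position. The function names, the numerical guard inside the logarithm and the clipping of the example to a 100 m square are our additions; the parameter values are taken from Section 5 and the quaternary thresholds of Fig. 2.

```python
import numpy as np
from scipy.stats import norm

def quantize_rss(P, s):
    """Quantizer (2): map RSS values P (dBm) to levels 0..L-1 given
    thresholds s = [s_0, ..., s_L] with s_0 = -inf and s_L = +inf."""
    return np.searchsorted(s, P) - 1          # cell index of each measurement

def qrss_log_likelihood(u, C, K, s, P0, alpha, sigma):
    """Log of the likelihood (5) of quantized data K at candidate target u.
    C is an (N, 2) array of sensor coordinates, K an integer array of levels."""
    d = np.linalg.norm(C - u, axis=1)                  # distances d_i
    z = P0 - 10.0 * alpha * np.log10(d)                # noise-free path loss z_i
    Q = norm.sf((np.asarray(s)[:, None] - z) / sigma)  # Q((s_l - z_i)/sigma), shape (L+1, N)
    idx = np.arange(len(K))
    p = Q[K, idx] - Q[K + 1, idx]                      # per-node probability (3)
    return np.sum(np.log(np.maximum(p, 1e-300)))       # guard against log(0)

# Example with the parameters of Section 5 and the quaternary thresholds of Fig. 2
rng = np.random.default_rng(0)
C = rng.uniform(0.0, 100.0, size=(40, 2))              # 40 nodes on a 100 m square
u_true = np.array([40.0, 50.0])
P0, alpha, sigma = -10.0, 3.0, 6.0
P = P0 - 10 * alpha * np.log10(np.linalg.norm(C - u_true, axis=1)) \
    + sigma * rng.standard_normal(len(C))
s = np.array([-np.inf, -65.78, -58.33, -49.85, np.inf])
K = quantize_rss(P, s)
print(qrss_log_likelihood(u_true, C, K, s, P0, alpha, sigma))
```

A search over this log-likelihood (for example by PSO, as described in Section 3.3) gives the centralized estimate in (7).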
where the product of probabilities is due to the assumption of independent RSS measurements.

Using (5), it is easy to derive the log-likelihood function of K,

    f(K | u) = Σ_{i=1}^{N} ln[p_i(K_i | u)].   (6)

Therefore, the ML estimate û is the solution of the maximization problem

    û = argmax_u f(K | u).   (7)

As (7) is non-convex and non-linear [4], finding the global optimum is not an easy task. This is mostly overcome in PSO-based solutions [24].

Notice that the only parameters the fusion centre needs to know to locate the target are the sensor node locations, the quantization thresholds and the RSS model parameters, i.e., P_0, α and σ². All of these parameters can be pre-determined by performing off-line experiments. The CRLB of the QRSS model, derived in [9], is summarized as follows.

Let û be an unbiased estimator of the coordinates of the target u and define the location variance of the estimator as σ_u². The CRLB asserts that

    σ_u² ≥ [J⁻¹]_{11} + [J⁻¹]_{22} = (J_{11} + J_{22}) / (J_{11} J_{22} − J_{12}²),   (8)

where [·]_{ij} represents the (i, j) entry of the matrix and the Fisher information matrix (FIM) J is

    J = [ J_{11}  J_{12} ; J_{21}  J_{22} ].   (9)

The elements of J are

    J_{11} = 50α² Σ_i ξ_i (x − x_i)² / ((ln 10)² π d_i⁴ σ²),
    J_{22} = 50α² Σ_i ξ_i (y − y_i)² / ((ln 10)² π d_i⁴ σ²),
    J_{12} = 50α² Σ_i ξ_i (x − x_i)(y − y_i) / ((ln 10)² π d_i⁴ σ²),   (10)

where

    ξ_i = Σ_{l=0}^{L−1} [exp(−(s_l − z_i)²/2σ²) − exp(−(s_{l+1} − z_i)²/2σ²)]² / [Q((s_l − z_i)/σ) − Q((s_{l+1} − z_i)/σ)].   (11)

3.2. Threshold design

Before solving the optimization problem in (7), we must determine a set of quantization thresholds s for the RSS measurements. As shown in [9], a natural way to choose the optimal thresholds is to minimize the location estimation error of û with respect to s; in other words, minimizing the right hand side of (8) gives the optimal thresholds. However, as shown in (10), J_{11}, J_{12} and J_{22} are all functions of the target location, which begs the question because it is the target location that must be estimated. Furthermore, J also contains parameters such as the locations of the sensor nodes. For many WSNs, the sensors will be deployed randomly in the surveillance area, and due to the uncertainty of the node positions it is difficult to set the thresholds before deployment. A possible solution is to assume that the target position u and the sensor positions c_i follow a uniform distribution over the surveillance area. Then we can calculate the probability density function (pdf) of the received signal strength z_i at a random location in the surveillance area and use this pdf to determine the thresholds [10].

Firstly, assume that x_i, y_i, x and y are i.i.d. and follow a uniform distribution in the interval [−b/2, b/2]. Denote v_i as the squared distance between node i and the target. As shown in [10], the pdf of v_i is

    f(v_i) =
      π/b² + v_i/b⁴ − 4√(v_i)/b³,                                        0 < v_i ≤ b²,
      (2/b²) arcsin((2b² − v_i)/v_i) − v_i/b⁴ + 4√(v_i − b²)/b³ − 2/b²,   b² < v_i ≤ 2b²,
      0,                                                                  otherwise.   (12)

According to the RSS model, the distance between the target and each node is at least 1 m. Assuming d_0 = 1, the probability that v_i is greater than 1 is

    γ = 1 − ∫_0^1 f(v_i) dv_i   (13)
      = 1 + 8/(3b³) − π/b² − 1/(2b⁴).   (14)

Hence, if v_i is greater than or equal to 1, the conditional pdf is

    f_v(v_i | v_i ≥ 1) = (1/γ) f_v(v_i).   (15)

With the transmit power P_0 and path loss exponent α known a priori, the probability transformation rule gives the pdf of the received signal strength z_i at a random location in the surveillance area as

    f(z_i) = (ln 10 / (5αγ)) g_i ×
      π/b² + g_i/b⁴ − 4√(g_i)/b³,                                        ϕ_1 < z_i ≤ P_0,
      (2/b²) arcsin((2b² − g_i)/g_i) − g_i/b⁴ + 4√(g_i − b²)/b³ − 2/b²,   ϕ_2 < z_i ≤ ϕ_1,
      0,                                                                  otherwise,   (16)

where

    g_i = 10^{(P_0 − z_i)/(5α)},   (17)

    γ = 1 + 8/(3b³) − π/b² − 1/(2b⁴),   (18)

    ϕ_1 = P_0 − 5α log10(b²)   (19)

and

    ϕ_2 = P_0 − 5α log10(2b²).   (20)
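The threshold design described above only needs the densities in (12)-(20). As a rough numerical sketch (our own illustration under the stated uniform-deployment assumption, not the authors' code), the conditional pdfs of the squared distance and of the received signal strength can be written directly from (12), (14) and (16); the function names mirror the symbols in the text, and the explicit support check in the last function is our addition.

```python
import numpy as np

def pdf_sq_distance(v, b):
    """Pdf (12) of the squared distance v between two points drawn
    uniformly on a b-by-b square."""
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    m1 = (v > 0) & (v <= b**2)
    m2 = (v > b**2) & (v <= 2 * b**2)
    out[m1] = np.pi / b**2 + v[m1] / b**4 - 4 * np.sqrt(v[m1]) / b**3
    out[m2] = (2 / b**2 * np.arcsin((2 * b**2 - v[m2]) / v[m2])
               - v[m2] / b**4 + 4 * np.sqrt(v[m2] - b**2) / b**3 - 2 / b**2)
    return out

def gamma_const(b):
    """Normalization (14): probability that the squared distance exceeds 1."""
    return 1 + 8 / (3 * b**3) - np.pi / b**2 - 1 / (2 * b**4)

def pdf_rss(z, b, P0, alpha):
    """Pdf (16) of the noise-free received signal strength z at a random
    location, conditioned on the squared distance being at least 1."""
    g = 10 ** ((P0 - np.asarray(z, dtype=float)) / (5 * alpha))   # g_i of (17)
    dens = np.log(10) / (5 * alpha * gamma_const(b)) * g * pdf_sq_distance(g, b)
    return np.where(g >= 1.0, dens, 0.0)   # support is phi_2 < z <= P_0, i.e. 1 <= g <= 2 b^2

# Example: with b = 100 m, P0 = -10 dBm and alpha = 3 (Section 5), pdf_rss can be
# tabulated on a grid of z between phi_2 and P_0 and used to weight the per-sensor
# Fisher information when optimizing the thresholds via (23)-(24).
```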
All the information about the target position is contained in the received signal strength z_i. If all the RSS measurements could be recovered precisely from the QRSS measurements, the target position could be estimated accurately. Therefore, the optimum quantization thresholds can be chosen so that the QRSS measurement K retains the maximum information about z_i. Similar to the derivation of (6), the log-likelihood function of the QRSS datum K received at any sensor is ln p(K | z_i), where

    p(K | z_i) =
      Q((s_0 − z_i)/σ) − Q((s_1 − z_i)/σ),        K = 0,
      Q((s_1 − z_i)/σ) − Q((s_2 − z_i)/σ),        K = 1,
      ...
      Q((s_{L−1} − z_i)/σ) − Q((s_L − z_i)/σ),    K = L − 1.   (21)

Then the Fisher information with respect to z_i of the threshold estimation problem is

    J_z = −E[∂² ln p(K | z_i) / ∂z_i²]
        = Σ_{K=0}^{L−1} [exp(−(s_K − z_i)²/2σ²) − exp(−(s_{K+1} − z_i)²/2σ²)]² / (2πσ² p(K | z_i)),   (22)

where E[·] denotes expectation and ∂ denotes the partial derivative. The derivation of (22) is given in Appendix A. The average Fisher information that z_i carries over the surveillance area is then

    F(s) = ∫_{ϕ_2}^{P_0} J_z f(z_i) dz_i.   (23)

Maximizing (23) with respect to s solves the threshold design problem,

    argmax_s F(s).   (24)

When M = 1, 2, (24) can be solved using a method such as grid search or a genetic algorithm [25]. When M becomes larger, finding a maximum in 2^M − 1 dimensions is difficult. By using a uniform quantization threshold, the high dimensional optimization problem can be simplified to a two-parameter search: the second threshold s_1 and the uniform quantization step size Δ = s_{l+1} − s_l for l = 1, ..., L − 2.

Note that the only parameter the system needs to know is the size of the surveillance area, so the thresholds for the RSS measurements can be determined off-line.

3.3. Particle swarm optimization

Both the objective function for threshold design (24) and that for location estimation (7) are non-convex and multimodal, which makes them challenging to optimize. Iterative search methods such as expectation maximization (EM) and local search techniques such as gradient descent or Newton's method can be applied, but all of these methods require an initialization near the global optimum. An alternative is to apply global optimization techniques, such as stochastic optimization algorithms including the genetic algorithm (GA), PSO and simulated annealing (SA). PSO is a simple optimization technique introduced by Eberhart and Kennedy [26] and has been widely used in optimization problems. It is shown in [27] that PSO is more computationally efficient than the GA in most tests, and it appears that PSO outperforms the GA when used to solve unconstrained nonlinear problems. Thus, in this work, PSO is applied to optimize (24) to determine a set of thresholds for the RSS data. After the thresholds are decided, PSO is also employed to optimize (7) to estimate the target location.

The PSO algorithm starts with random positions of particles that represent random guesses in the search space. These particles are candidate solutions to the problem under consideration. Each particle of the swarm is associated with a random velocity and updates its position and velocity according to its own and the group's flying experience, that is, the particle's best and the global best information. The velocity v_i of the ith particle at the tth iteration is updated by

    v_i(t+1) = ω v_i(t) + φ_1 r_1 ⊙ (p_i(t) − u_i(t)) + φ_2 r_2 ⊙ (g − u_i(t)),   i = 1, ..., P,   (25)

where ⊙ denotes the element-wise product, r_1 and r_2 are random vectors uniformly distributed in [0, 1], p_i(t) is the best position that particle i has found so far, and g is the swarm best so far. φ_1 and φ_2 are acceleration constants; in this work φ_1 = φ_2 = 2. P is the number of particles in the swarm. ω is a time-decreasing inertia weight that balances local search and global search during the optimization process [28]. Based on various tests [29], and in this case, ω is set to decrease linearly from 0.9 to 0.4. At each step, the position of the ith particle is updated as

    u_i(t+1) = u_i(t) + v_i(t+1).   (26)

According to (25), when all three terms on the right hand side are small enough, the velocity becomes negligible and all particles converge to the global optimum or a local optimum. In this work, the program terminates the optimization after a fixed number of iterations T.
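For illustration, here is a compact NumPy sketch (ours, not the authors' implementation) of the update rules (25)-(26) applied to an arbitrary cost function such as the negative log-likelihood in (7). The parameter values follow the text (φ_1 = φ_2 = 2, ω decreasing linearly from 0.9 to 0.4); the clipping of particle positions to the search box is our own addition and is not part of (25)-(26).

```python
import numpy as np

def pso_minimize(cost, lower, upper, n_particles=30, n_iter=100, seed=0):
    """Particle swarm optimization of `cost` over the box [lower, upper],
    following the velocity/position updates (25)-(26)."""
    rng = np.random.default_rng(seed)
    dim = len(lower)
    u = rng.uniform(lower, upper, size=(n_particles, dim))     # positions
    v = np.zeros_like(u)                                        # velocities
    p_best = u.copy()                                           # particle bests p_i
    p_cost = np.array([cost(x) for x in u])
    g_best = p_best[np.argmin(p_cost)].copy()                   # swarm best g
    phi1 = phi2 = 2.0
    for t in range(n_iter):
        omega = 0.9 - (0.9 - 0.4) * t / max(n_iter - 1, 1)      # linear inertia decay
        r1 = rng.uniform(size=(n_particles, dim))
        r2 = rng.uniform(size=(n_particles, dim))
        v = omega * v + phi1 * r1 * (p_best - u) + phi2 * r2 * (g_best - u)
        u = np.clip(u + v, lower, upper)                        # keep inside the area (our choice)
        c = np.array([cost(x) for x in u])
        improved = c < p_cost
        p_best[improved], p_cost[improved] = u[improved], c[improved]
        g_best = p_best[np.argmin(p_cost)].copy()
    return g_best

# Example (hypothetical): minimize the negative QRSS log-likelihood over a
# 100 m x 100 m area, reusing qrss_log_likelihood from the earlier sketch.
# u_hat = pso_minimize(lambda x: -qrss_log_likelihood(x, C, K, s, -10.0, 3.0, 6.0),
#                      lower=np.array([0.0, 0.0]), upper=np.array([100.0, 100.0]))
```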
4. Distributed algorithm

The distributed nature of WSNs allows in-network processing [30]. Assume that nodes communicate only with their one-hop neighbours. Through local communication exchanges alone, nodes can agree on some desired quantity using a consensus algorithm [31]. Compared with the centralized approach, the distributed approach is more robust against changes in the network topology due to mobility or node failure.

As can be seen, (6) expresses the centralized cost function as a sum of local cost functions, which can be implemented in a distributed manner [32]. If the local ML function

    f_i = ln p_i(K_i | u)   (27)

is convex, then distributed optimization methods using consensus algorithms converge to the global optimum [33]. For a multimodal function, the PSO algorithm can search for the region in which the global solution lies. With the local measurement and those from its neighbours, a sensor node can compute an initialization near the minimum. When this initialization is in the near vicinity of the global minimum, where the local ML function is approximately convex, the distributed gradient method can take over and solve the problem. Here, we describe a distributed implementation of the ML cost function (7).

The proposed algorithm involves two phases: a QRSS measurement sharing phase and a distributed estimation phase. In the first phase, each node exchanges its quantized measurement data with its neighbours. Assuming each node knows the positions of its neighbours, an initialization can be found using PSO from at least 3 measurements. In the second phase, at each iteration t, the location estimate û_i(t) at node i is quantized and then transmitted to its neighbours for the update. Eventually, the target location estimate at every node converges close to the global optimum. Note that a distributed PSO (DPSO) is introduced in [34,35], which can also solve this problem; however, the distributed gradient method is simpler and faster than DPSO, and we compare their performance in the next section.

Different quantization methods could be applied to the local estimate û(t), e.g. uniform quantization or exponential quantization [36]. Without loss of generality, we use uniform quantization, since it is easy to implement in hardware. For an n-bit quantizer, the uniformly quantized value can be expressed as

    x_Q = ⌊(x − x_min)/Δ⌋ Δ + Δ/2 + x_min,   (28)

where x_min and x_max represent the minimum and maximum values of the quantization range and Δ = (x_max − x_min)/2^n is the uniform quantization step size, which determines the quantization error. Since we have no prior information about the target location, we set x_min and x_max to the minimum and maximum coordinates of the surveillance area, i.e. (−b/2, b/2).

Define the quantized estimate of the target position at node i as û_i^Q. At the tth iteration, node i computes the update

    û_i(t+1) = Σ_{j=1}^{N} W_ij û_j^Q(t) − β ζ_i(t),   (29)

where β > 0 is a diminishing step size and ζ_i(t) denotes the gradient of the node i cost function f_i(x) at x = û_i(t). W is a square matrix of dimension N × N whose element W_ij is the factor by which the estimate of node j contributes to node i. In order to preserve the average of the initial values used in the distributed gradient algorithm, we use the Metropolis weights [37]

    W_ij =
      1/(1 + max{τ_i, τ_j}),        if i ≠ j and nodes i and j are connected,
      1 − Σ_{j ∈ N_i, j ≠ i} W_ij,   if i = j,
      0,                             if i and j are not connected,   (30)

where τ_i is the number of neighbours of node i and N_i denotes the set of neighbours of node i including i itself. Update (29) is referred to as Quantized Distributed Gradient (QDG-I) in the simulations. The gradient of f_i in (27) is given in Appendix B. The QDG algorithm is built under the assumption that the network follows a synchronous communication protocol; an asynchronous protocol would require a gossip algorithm [38], which is outside the scope of this paper.

The effect of quantized communication on the distributed gradient method has been studied in detail in [20]. When the initialization is in the near vicinity of the global minimum, the quantized distributed gradient algorithm converges to the optimal objective value within some error which depends on the number of quantization levels.

In order to reduce the quantization error introduced in (28), we utilize the local unquantized data [39]. The update in each iteration becomes

    û_i(t+1) = Σ_{j=1}^{N} W_ij û_j^Q(t) + û_i(t) − û_i^Q(t) − β ζ_i(t).   (31)

This algorithm is referred to as QDG-II in the simulations. To better understand how (31) works, we write it in matrix form,

    Û(t+1) = W Û^Q(t) + Û(t) − Û^Q(t) − β D(t),   (32)

where Û^Q = [û_1^Q, ..., û_N^Q] and D(t) = [ζ_1(t), ..., ζ_N(t)]. Define e(t) = Û^Q(t) − Û(t) as the quantization noise in Û^Q(t). Then (32) can be rewritten as

    Û(t+1) = W Û(t) − (I − W) e(t) − β D(t).   (33)

Expanding (33) over the iterations gives

    Û(t+1) = W^t Û(1) − Σ_{k=1}^{t} W^{t−k} (I − W) e(k) − β Σ_{k=1}^{t} W^{t−k} D(k).   (34)

It has been proved in [37] that

    lim_{t→∞} W^t = (1/N) 1 1^T.   (35)

Hence, for any fixed k,

    lim_{t→∞} W^{t−k} (I − W) e(k) = 0.   (36)

As shown in (36), as the iterations proceed, the quantization error introduced at and before the kth iteration gradually vanishes; however, recent quantization noise remains.
The distributed localization algorithm discussed above can be summarized as follows.

1. Initialize the WSN with the system parameters b, P_0, α and σ. Determine the thresholds for the RSS measurements by optimizing (24) using PSO.
2. The ith node broadcasts its QRSS measurement to all neighbouring nodes and receives the broadcasts from its neighbours.
3. The ith node calculates the initial value û_i(1) using PSO.
4. The ith node quantizes its current estimate û_i(t) using (28), broadcasts it to its neighbours, and uses the quantized distributed gradient update (29) or (31) to update its local estimate until convergence.

5. Numerical results

In this section, we simulate several scenarios for both the centralized formulation (7) and the distributed algorithms (29) and (31). For every realization in the simulations, the transmit power P_0 and the path loss exponent α are −10 dBm and 3, respectively. The same parameters are also used to design the quantization thresholds using the method presented in Section 3.2.

5.1. Simulations of the centralized algorithm

In this subsection, we compare the ML-PSO method of Section 3.3 with the CRLB. The number of particles is 30 and the maximum number of iterations is 100 in the centralized simulations.

A 100 m by 100 m surveillance area is considered. A target is placed at [40, 50] m. The nodes in the network are assumed to be deployed similarly to the configuration shown in Fig. 1. Notice that the threshold design method does not depend on the target and node positions. We assume that all nodes employ identical threshold values and that the shadowing noise σ is 6 dB.

In Fig. 2, the RMSE of the proposed estimator using binary data (1 bit) and quaternary data (2 bits) is compared with the corresponding CRLB and the NLS method. For each scenario, 600 Monte Carlo simulations are performed. The root mean square error (RMSE) is plotted as a function of the number of nodes N.

Fig. 2. RMSE of the PSO estimator and the QRSS CRLB. The quantization threshold for binary data is s = −60.32 dBm; the quantization thresholds for quaternary data are s = [−49.85, −58.33, −65.78] dBm.

When binary data is used, the QRSS data are known as proximity or connectivity measurements. If the RSS measurement is above the threshold, the node and target are within communication range and the node transmits 1 to the fusion centre; if the RSS measurement is below the threshold, they are out of communication range and 0 is transmitted. Evidently, with a proper quantization threshold, if a large number of nodes transmit 1 s (in range) and a large number transmit 0 s (out of range), the target position will be located well. Therefore, as shown in Fig. 2, when the number of nodes N increases, the performance of both the ML-PSO estimator and the NLS estimator improves significantly. Similar results can be seen in Fig. 2 when quaternary data is employed.

In Fig. 3, we evaluate the performance of the proposed estimator using different numbers of quantization bits. The performance of the optimal threshold is added for comparison; the optimal threshold is obtained by minimizing the right hand side of (8) using PSO. Again, the simulation is based on 600 independent realizations. As can be seen in Fig. 3, the PSO estimator using quantized RSS data is very close to its CRLB. As the number of quantization bits increases, the QRSS-CRLB improves and converges to the RSS-CRLB; using only 5 bits, the QRSS-CRLB differs little from the RSS-CRLB. It can also be clearly seen that the performance of PSO attains the QRSS-CRLB in all scenarios. With the optimal threshold, the QRSS-CRLB is closer to the RSS-CRLB, especially at low numbers of quantization bits, because the optimal thresholds are designed specifically for this configuration.
Fig. 3. RMSE of the PSO estimator using quantized data.

5.2. Simulations of the distributed algorithm

In the next simulations, we evaluate the performance of the distributed algorithm proposed in Section 4. In order to study the performance of the distributed algorithm using different numbers of quantization bits in the data exchange phase, we reduce the quantization error in the RSS measurements by using 7-bit QRSS data in the following simulations.

First, the network is initialized such that each node is aware of its one-hop neighbours' positions and their 7-bit QRSS measurements; hence each node can determine its Metropolis weights and compute an initial estimate using the PSO estimator. The initial estimate is then quantized using (28) for further distributed processing. As we have no prior information about the target location, the quantization range for the simulations is the range of the surveillance area. Note that, in practice, if prior information about the target location were available, the quantization range could be narrowed so that the same performance is achieved using fewer quantization bits in (28). In this example, we consider a network containing 50 nodes deployed over an area of 100 m by 100 m. The target is located at [40, 50] m. The communication range d_c is 20 m for each node. The network topology is shown in Fig. 4.

Fig. 4. Network topology with 50 nodes.

PSO is generally costly when used to solve an optimization problem completely; however, the initialization does not need high accuracy. In this work, 20 particles and 40 iterations are used for initialization. The diminishing step size in both QDG-I and QDG-II is β = 1/t unless stated otherwise. The average RMSE (ARMSE) in the following simulations is calculated as

    ARMSE = sqrt( (1/(N M_run)) Σ_{l=1}^{M_run} Σ_{i=1}^{N} ||û_i(l) − u||² ),   (37)

where M_run is the number of Monte Carlo runs.
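The ARMSE in (37) is simply a pooled RMSE over nodes and Monte Carlo runs; a short NumPy version (our own, assuming the square-root form of (37)) reads:

```python
import numpy as np

def armse(estimates, u_true):
    """ARMSE of (37): estimates has shape (M_run, N, 2), u_true shape (2,)."""
    err2 = np.sum((estimates - u_true) ** 2, axis=-1)   # squared error per node and run
    return np.sqrt(err2.mean())
```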
Fig. 5. RMSE convergence of the local estimates at 50 nodes in the distributed algorithms: (a) QDG-I, (b) QDG-II.

Fig. 5a and Fig. 5b show how QDG-I and QDG-II converge within 100 iterations. Each line represents the RMSE of the local estimate at one node over the iterations. Both algorithms are given the same initializations. It can also be noticed that, after a certain number of consensus iterations using QDG-I, the estimate at each node becomes inactive. In QDG-II, however, every node remains active and keeps converging to better values, since the consensus part of QDG-II can remove part of the quantization error.

Fig. 6 shows the average RMSE of the location estimate for QDG-I and QDG-II using different numbers of quantization bits versus the variance of the shadowing. For comparison purposes, the distributed gradient algorithm using RSS measurements (DG-RSS), DPSO, the RSS-CRLB and the QRSS-CRLB are also included. The DG-RSS algorithm is computed using (29) with RSS measurements and unquantized communication. The DPSO also uses RSS measurements and unquantized communication; the number of particles at each sensor node is 20 and the maximum number of iterations is 100. All results in Fig. 6 are averaged over 600 Monte Carlo simulations.

As expected, the two CRLBs are very close to each other since 7-bit QRSS data is used. QDG-I with 8 bits has performance similar to the DG algorithm using RSS measurements at different levels of shadowing noise, because the quantization level is high enough. The performance of QDG-II with 5 bits is almost the same as that of QDG-I with 8 bits, since QDG-II uses a quantization-error-reducing consensus algorithm. When 3-bit quantization is employed in QDG-II, the quantization noise is too large and has more influence on the localization accuracy than the shadowing noise; therefore, as shown in Fig. 6, QDG-II with 3 bits has a very large RMSE when the variance of the shadowing is relatively low. Both QDG-I and QDG-II outperform DPSO, especially when the noise is high.

Fig. 6. RMSE versus variance of the shadowing noise for location estimation using different algorithms.

The convergence behavior of the distributed algorithms at shadowing noise σ = 5 is depicted in Fig. 7. Each line represents the average RMSE over 300 independent trials during the iterations. QDG-I with 5 bits converges fastest but with the worst performance. QDG-I with 8 bits has a convergence rate similar to DG-RSS; thus, to reach a certain accuracy, the quantization level needs to be increased. However, QDG-II with 5 bits also has a convergence rate similar to QDG-I with 8 bits, which means that, for a given accuracy, QDG-II costs less communication than QDG-I.

Fig. 7. RMSE variation using QDG-I and QDG-II with different quantization bits.

In the next simulations, with the same scenario settings as in Fig. 4, we test the convergence of the QDG-II algorithm with node communication range d_c = 35 m. In Fig. 8, we plot the localization RMSE convergence when the node communication range is 35 m. For comparison purposes, we use the same initialization as in Fig. 5a. Compared with Fig. 5a and Fig. 5b, the estimates at the local nodes converge much faster.

Fig. 8. RMSE convergence of the local estimates at 50 nodes using QDG-II.

The same experiment as in Fig. 6 is carried out in Fig. 9 with a communication range of 35 m. Compared with the results in Fig. 6, the performance of both algorithms gets closer to the CRLB, especially for QDG-I.

Fig. 9. RMSE versus variance of the shadowing noise for location estimation using QDG-I and QDG-II with 5 bits.

It is important to test the performance of the distributed algorithms for various network configurations. Another network with 25 nodes, shown in Fig. 10, is also simulated. The communication range d_c in this network is 30 m, and the other system parameters are the same as in the previous simulation. The performance in terms of RMSE is shown in Fig. 11. Similarly, with the help of the quantization elimination scheme, QDG-II is close to the DG algorithm with unquantized data and outperforms QDG-I.

Fig. 10. Network topology with 25 nodes.

Fig. 11. RMSE versus variance of the shadowing noise for location estimation using QDG-I and QDG-II with 5 bits in a 25-node network.

6. Conclusion
In this paper, we present a distributed target location estimation method for WSNs that uses quantized data and is based on a statistical RSS model. A practical threshold design method for RSS data is presented; using this method, the CRLB of the QRSS model converges to the CRLB of RSS data as the quantization level increases. We then solve the ML cost function of the QRSS model using PSO. The PSO estimator can reach the CRLB with a relatively small number of quantization bits and a sufficient number of nodes. Using this estimator, each node is initialized with local QRSS data using PSO and transmits quantized local estimates at each iteration, and the quantized distributed gradient algorithm is applied to solve the ML cost function in a distributed manner. Simulations show that, with sufficient quantization levels in the local estimate, there is little difference between the distributed algorithm using quantized data and the one using unquantized data. To reduce the number of bits used for the local estimate, a quantization compensation term is added to the distributed algorithm. Simulations show that, with this compensation term, the QDG method has a convergence rate and performance similar to the distributed gradient algorithm using unquantized data.

Appendix A. Derivation of the Fisher information

To derive (22), note that

    ∂ ln p(K | z_i) / ∂z_i = (1/p(K | z_i)) ∂p(K | z_i)/∂z_i   (A.1)

and

    ∂² ln p(K | z_i) / ∂z_i² = −(1/p(K | z_i)²) [∂p(K | z_i)/∂z_i]² + (1/p(K | z_i)) ∂²p(K | z_i)/∂z_i².   (A.2)

Taking the expectation of (A.2) with respect to p(K | z_i),

    E[∂² ln p(K | z_i) / ∂z_i²]
      = Σ_{K=0}^{L−1} p(K | z_i) { −(1/p(K | z_i)²) [∂p(K | z_i)/∂z_i]² + (1/p(K | z_i)) ∂²p(K | z_i)/∂z_i² }
      = Σ_{K=0}^{L−1} { −(1/p(K | z_i)) [∂p(K | z_i)/∂z_i]² + ∂²p(K | z_i)/∂z_i² }.   (A.3)

Noticing that Σ_{K=0}^{L−1} p(K | z_i) = 1, the second term of (A.3) is

    Σ_{K=0}^{L−1} ∂²p(K | z_i)/∂z_i² = (∂²/∂z_i²) [ Σ_{K=0}^{L−1} p(K | z_i) ] = 0.   (A.4)

With

    ∂Q((s_l − z_i)/σ)/∂z_i = exp(−(s_l − z_i)²/2σ²) / (√(2π) σ),   (A.5)

it is easy to show that, for K = l,

    [∂p(K | z_i)/∂z_i]² = [exp(−(s_l − z_i)²/2σ²) − exp(−(s_{l+1} − z_i)²/2σ²)]² / (2πσ²).   (A.6)

Combining (A.3), (A.4) and (A.6) gives

    −E[∂² ln p(K | z_i) / ∂z_i²] = Σ_{K=0}^{L−1} [exp(−(s_K − z_i)²/2σ²) − exp(−(s_{K+1} − z_i)²/2σ²)]² / (2πσ² p(K | z_i)).   (A.7)
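As a quick numerical companion to Appendix A (our own sketch, not part of the paper), the per-sensor Fisher information J_z of (22)/(A.7) can be evaluated directly from a noise-free RSS value, a threshold vector and the shadowing standard deviation; the small guard on the cell probabilities is our addition.

```python
import numpy as np
from scipy.stats import norm

def fisher_info_z(z, s, sigma):
    """Per-sensor Fisher information J_z of (22)/(A.7) for noise-free RSS z,
    thresholds s = [s_0, ..., s_L] (s_0 = -inf, s_L = +inf) and noise std sigma."""
    s = np.asarray(s, dtype=float)
    q = norm.sf((s - z) / sigma)                      # Q((s_l - z)/sigma), l = 0..L
    p = q[:-1] - q[1:]                                # cell probabilities p(K|z)
    e = np.exp(-(s - z) ** 2 / (2 * sigma ** 2))      # Gaussian kernels; zero at +-inf
    num = (e[:-1] - e[1:]) ** 2
    return np.sum(num / (2 * np.pi * sigma ** 2 * np.maximum(p, 1e-300)))

# Example (hypothetical values): J_z at z = -55 dBm with the quaternary thresholds
# of Fig. 2 and sigma = 6 dB.
# print(fisher_info_z(-55.0, [-np.inf, -65.78, -58.33, -49.85, np.inf], 6.0))
```

Averaging this quantity against the density f(z_i) of (16) over [ϕ_2, P_0] gives the threshold-design objective F(s) in (23).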

Appendix B. Derivation of the gradient

The gradient of the local ML cost function (27) is derived as follows. From (27), for K_i = l we have

    ∂(−f_i)/∂x = −(1/p(K_i | u)) ∂p(K_i | u)/∂x
               = −(1/p(K_i | u)) [ ∂Q((s_l − P_0 + 10α log10 d_i)/σ)/∂x − ∂Q((s_{l+1} − P_0 + 10α log10 d_i)/σ)/∂x ].   (B.1)

With

    ∂Q((s_l − P_0 + 10α log10 d_i)/σ)/∂x = 10α (x − x_i) exp(−(s_l − P_0 + 10α log10 d_i)²/2σ²) / (√(2π) σ ln 10 d_i²),   (B.2)

the gradient is expressed as

    ∂(−f_i)/∂x = −10α (x − x_i) [exp(−(s_l − P_0 + 10α log10 d_i)²/2σ²) − exp(−(s_{l+1} − P_0 + 10α log10 d_i)²/2σ²)] / (√(2π) σ ln 10 d_i² p(K_i | u)).   (B.3)

The partial derivative with respect to y follows by replacing (x − x_i) with (y − y_i).
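The following is a minimal NumPy/SciPy sketch (ours, not the authors' code) of the per-node gradient ζ_i used in the updates (29) and (31), following (B.3). The function name and the numerical guard on the cell probability are our additions; the sign convention matches (B.3), i.e. it is the gradient of the negative local log-likelihood, which is what the descent updates subtract.

```python
import numpy as np
from scipy.stats import norm

def local_gradient(u_hat, c_i, K_i, s, P0, alpha, sigma):
    """Gradient zeta_i of the local negative log-likelihood at u_hat,
    following (B.3); returns a length-2 array [d/dx, d/dy]."""
    diff = np.asarray(u_hat, dtype=float) - np.asarray(c_i, dtype=float)
    d2 = np.sum(diff ** 2)                        # d_i^2
    d = np.sqrt(d2)
    z = P0 - 10 * alpha * np.log10(d)             # noise-free RSS at u_hat
    a_low = (s[K_i] - z) / sigma                  # (s_l - P0 + 10 alpha log d_i)/sigma
    a_high = (s[K_i + 1] - z) / sigma
    p = max(norm.sf(a_low) - norm.sf(a_high), 1e-300)   # cell probability p(K_i|u)
    e_low = np.exp(-a_low ** 2 / 2) if np.isfinite(a_low) else 0.0
    e_high = np.exp(-a_high ** 2 / 2) if np.isfinite(a_high) else 0.0
    coeff = -10 * alpha * (e_low - e_high) / (np.sqrt(2 * np.pi) * sigma
                                              * np.log(10) * d2 * p)
    return coeff * diff
```

This is the callable assumed by the `qdg_step` sketch after Section 4, e.g. `local_gradient(U[i], C[i], K[i], s, P0, alpha, sigma)` wrapped in a small lambda.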

References

[1] I.F. Akyildiz, I.H. Kasimoglu, Wireless sensor and actor networks: research challenges, Ad Hoc Netw. 2 (4) (2004) 351-367.
[2] B. Xu, G. Sun, R. Yu, Z. Yang, High-accuracy TDOA-based localization without time synchronization, IEEE Trans. Parallel Distrib. Syst. 24 (8) (2013) 1567-1576.
[3] P. Cheong, A. Rabbachin, J.P. Montillet, K. Yu, I. Oppermann, Synchronization, TOA and position estimation for low-complexity LDR UWB devices, in: Proc. 2005 IEEE International Conference on Ultra-Wideband, 2005, pp. 480-484.
[4] N. Patwari, J. Ash, S. Kyperountas, A. Hero, R. Moses, N. Correal, Locating the nodes: cooperative localization in wireless sensor networks, IEEE Signal Process. Mag. 22 (4) (2005) 54-69.
[5] N. Patwari, A. Hero, M. Perkins, N. Correal, R. O'Dea, Relative location estimation in wireless sensor networks, IEEE Trans. Signal Process. 51 (8) (2003) 2137-2148.
[6] H.C. So, L. Lin, Linear least squares approach for accurate received signal strength based source localization, IEEE Trans. Signal Process. 59 (8) (2011) 4035-4040.
[7] R. Ouyang, A.-S. Wong, C.-T. Lea, V. Zhang, Received signal strength-based wireless localization via semidefinite programming, in: Proc. IEEE Global Telecommunications Conference (GLOBECOM 2009), 2009, pp. 1-6.
[8] H.A. Nguyen, H. Guo, K.S. Low, Real-time estimation of sensor node's position using particle swarm optimization with log-barrier constraint, IEEE Trans. Instrum. Meas. 60 (11) (2011) 3619-3628.
[9] N. Patwari, A.O. Hero III, Using proximity and quantized RSS for sensor localization in wireless networks, in: Proc. 2nd ACM International Conference on Wireless Sensor Networks and Applications (WSNA '03), ACM, New York, NY, USA, 2003, pp. 20-29.
[10] R. Niu, P. Varshney, Target location estimation in sensor networks with quantized data, IEEE Trans. Signal Process. 54 (12) (2006) 4519-4528.
[11] X. Yang, R. Niu, E. Masazade, P. Varshney, Channel-aware tracking in multi-hop wireless sensor networks with quantized measurements, IEEE Trans. Aerosp. Electron. Syst. 49 (4) (2013) 2353-2368.
[12] O. Ozdemir, R. Niu, P. Varshney, Channel aware target localization with quantized data in wireless sensor networks, IEEE Trans. Signal Process. 57 (3) (2009) 1190-1202.
[13] N. Sundaram, P. Ramanathan, Connectivity based location estimation scheme for wireless ad hoc networks, in: Proc. IEEE Global Telecommunications Conference (GLOBECOM '02), vol. 1, 2002, pp. 143-147.
[14] N. Bulusu, J. Heidemann, D. Estrin, GPS-less low-cost outdoor localization for very small devices, IEEE Pers. Commun. 7 (5) (2000) 28-34.
[15] M. Rabbat, R. Nowak, Decentralized source localization and tracking [wireless sensor networks], in: Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '04), vol. 3, 2004, pp. iii-921-iii-924.
[16] D. Blatt, A.O. Hero, Energy-based sensor network source localization via projection onto convex sets, IEEE Trans. Signal Process. 54 (9) (2006) 3614-3619.
[17] Y. Liu, Y.H. Hu, Q. Pan, Distributed, robust acoustic source localization in a wireless sensor network, IEEE Trans. Signal Process. 60 (8) (2012) 4350-4359.
[18] B. Bejar, S. Zazo, A practical approach for outdoors distributed target localization in wireless sensor networks, EURASIP J. Adv. Signal Process. 2012 (1) (2012) 1-11.
[19] J. Li, E. Elhamifar, I.J. Wang, R. Vidal, Consensus with robustness to outliers via distributed optimization, in: Proc. 49th IEEE Conference on Decision and Control (CDC), 2010, pp. 2111-2117.
[20] A. Nedic, A. Olshevsky, A. Ozdaglar, J. Tsitsiklis, Distributed subgradient methods and quantization effects, in: Proc. 47th IEEE Conference on Decision and Control (CDC 2008), 2008, pp. 4177-4184.
[21] B. Bejar, P. Belanovic, S. Zazo, Distributed Gauss-Newton method for localization in ad-hoc networks, in: Proc. 44th Asilomar Conference on Signals, Systems and Computers, 2010, pp. 1452-1454.
[22] N. Patwari, J. Ash, S. Kyperountas, A. Hero, R. Moses, N. Correal, Locating the nodes: cooperative localization in wireless sensor networks, IEEE Signal Process. Mag. 22 (4) (2005) 54-69.
[23] S. Gezici, A survey on wireless position estimation, Wirel. Pers. Commun. 44 (3) (2008) 263-282.
[24] T. Panigrahi, G. Panda, B. Mulgrew, B. Majhi, Maximum likelihood source localization in wireless sensor network using particle swarm optimization, in: Proc. International Conference on Electronics Systems (ICES), 2011, pp. 111-115.
[25] P.J.M. Laarhoven, E.H.L. Aarts (Eds.), Simulated Annealing: Theory and Applications, Kluwer Academic Publishers, Norwell, MA, USA, 1987.
[26] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proc. IEEE International Conference on Neural Networks, vol. 4, 1995, pp. 1942-1948.
[27] R. Hassan, B. Cohanim, O. de Weck, G. Venter, A comparison of particle swarm optimization and the genetic algorithm, in: Proc. 46th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference.
[28] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Proc. 1998 IEEE International Conference on Evolutionary Computation (IEEE World Congress on Computational Intelligence), 1998, pp. 69-73.
[29] Y. Shi, R. Eberhart, Empirical study of particle swarm optimization, in: Proc. 1999 Congress on Evolutionary Computation (CEC 99), vol. 3, 1999, p. 1950.
[30] I. Schizas, G. Mateos, G. Giannakis, Distributed LMS for consensus-based in-network adaptive processing, IEEE Trans. Signal Process. 57 (6) (2009) 2365-2382.
[31] R. Olfati-Saber, R. Murray, Consensus problems in networks of agents with switching topology and time-delays, IEEE Trans. Autom. Control 49 (9) (2004) 1520-1533.
[32] B. Johansson, C. Carretti, M. Johansson, On distributed optimization using peer-to-peer communications in wireless sensor networks, in: Proc. 5th Annual IEEE Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks (SECON '08), 2008, pp. 497-505.
[33] A. Nedic, A. Ozdaglar, Cooperative distributed multi-agent optimization, in: Convex Optimization in Signal Processing and Communications, Cambridge University Press, 2008, pp. 240-386.
[34] T. Panigrahi, G. Panda, B. Mulgrew, Distributed bearing estimation technique using diffusion particle swarm optimisation algorithm, IET Wirel. Sens. Syst. 2 (4) (2012) 385-393.
[35] T. Panigrahi, G. Panda, B. Mulgrew, B. Majhi, Distributed DOA estimation using clustering of sensor nodes and diffusion PSO algorithm, Swarm Evolut. Comput. 9 (2013) 47-57.
[36] D. Thanou, E. Kokiopoulou, Y. Pu, P. Frossard, Distributed average consensus with quantization refinement, IEEE Trans. Signal Process. 61 (1) (2013) 194-205.
[37] L. Xiao, S. Boyd, S. Lall, Distributed average consensus with time-varying Metropolis weights, 2006.
[38] S. Boyd, A. Ghosh, B. Prabhakar, D. Shah, Randomized gossip algorithms, IEEE Trans. Inf. Theory 52 (6) (2006) 2508-2530.
[39] P. Frasca, R. Carli, F. Fagnani, S. Zampieri, Average consensus on networks with quantized communication, Int. J. Robust Nonlinear Control 19 (2008) 1787-1816.

