Agreement Among Decentralized Decision Makers


Mathematics and Engineering 4th Year Project

Agreement Among Decentralized Decision Makers


Ryan Shrott

10016608

Ariel Hubert

10018729

Victor Li

10025708

Payton Karch

10002247

Professor Serdar Yüksel
Thursday 16th April, 2015

Acknowledgments

We would like to extend our sincerest gratitude to Professor Serdar Yüksel for his unwavering
support throughout the last eight months. His guidance led us to a better understanding
of the background knowledge required to study interaction dynamics. Professor Yüksel
provided us with support and direction and kept us on track throughout the entire
process. We would also like to thank our classmates of Apple Math 15 for their support.
Finally, we would like to thank the rest of the professors in the engineering faculty at Queen's
for giving us the tools we needed to obtain our results.

Abstract

We study the convergence of opinions in a multi-agent system under various assumptions
on interaction dynamics. We begin by studying the opinion dynamics of a network with
constant, doubly stochastic transition kernels. A general result is shown which provides
conditions for agreement. We extend this model by introducing a finite number of stubborn
agents. Convergence is shown to a convex combination of the initial opinions of the stubborn
agents. A user-prescribed state dependent update algorithm is introduced to model systems
with limited communication capabilities. We then move to study the convergence of opinions
in a random model that allows for stochastic agreement. Motivated by a cost reduction
phenomenon, a transition kernel is designed to force agents to interact more when their
opinions are widely dispersed and interact less when their opinions are tightly spread. The
complexity increases with each model to reflect more realistic behaviour. We obtain many
convergence results from these models, whose implications are explored in depth.

Contents

1 Acknowledgments

2 Abstract

3 Introduction
  3.1 General Overview
  3.2 Historical Background
  3.3 Opinion Dynamics & Social Learning
  3.4 Engineering Applications
      3.4.1 Swarm Robotics
      3.4.2 Problem: Landmine Removal
      3.4.3 Problem: Forest Fires
      3.4.4 Problem: Environmental Concerns
  3.5 Other Applications
      3.5.1 Problem: Load Balancing

4 Problem Description & Approach

5 Design & Engineering Models
  5.1 Preliminaries
  5.2 Notation and Definitions
  5.3 Basic Model
  5.4 Stubborn Agent Model
  5.5 Communication Radius Model
  5.6 Random Realization Model

6 Testing & Results

7 Discussion
  7.1 Application of Models
      7.1.1 Landmine Removal
      7.1.2 Forest Fires
      7.1.3 Environmental Concerns
      7.1.4 Load Balancing

8 Conclusion
  8.1 Summary
  8.2 Future Work

9 References

3 Introduction

3.1 General Overview

Decentralized agreement problems can be studied by examining the convergence of opinions.


In a general setting, the dynamics of a social network can be described by equations of the
form:
\[
x_{t+1} = x_t P(x_t) + w_t
\]
Systems of this form have been extensively covered in [3] by Condello. This paper will extend
these results by studying a complex update algorithm which has applications in engineering
and sociology. Most notably, the modelling has significant energy efficiency applications in
numerous interaction systems. Leading up to this model, three other systems will also be
studied. The basic model will be introduced to show agreement to the average initial opinion
under certain conditions. The role and implications of stubborn agents will be studied. The
existence of clusters will be demonstrated by studying a state dependent update algorithm
which models interaction dynamics through a prescribed interaction radius.
The thesis is formatted as follows:
In Chapter 4, the problem description will be formulated and the engineering approaches used to solve the problem will be discussed.
In Chapter 5, the notation and definitions needed to understand the modelling will be introduced, and the four aforementioned models will be studied in detail.
In Chapter 6, the simulated results for each of the four models will be shown.
In Chapter 7, the results will be discussed along with their relation to many engineering
and sociological applications.

3.2 Historical Background

The study of consensus, agreement and social learning has been present in mathematical
and philosophical literature over recent centuries. In 1788, Marquis de Condorcet proved
that truthful reporting of information by a large group of individuals, each holding
an opinion (correlated to some state), is sufficient for the aggregation of information [1].
In 1907, Galton advocated the idea that a large group of relatively uninformed individuals
would have significantly more information than each individual separately. To defend his
theory, he went to a carnival where people were guessing an animal's mass. He found that
although most individuals were completely wrong, the median guess was 1197 pounds (the
animal weighed 1198 pounds). Although Galton's claim is not true in general, it can be
viewed as a foundational constituent in the advancement of social dynamics [2].
More recently, the mathematics developed by J. Hajnal and M. S. Bartlett in the study
of weak ergodicity of non-homogeneous Markov chains, and by J. Wolfowitz in "Products
of Indecomposable, Aperiodic, Stochastic Matrices", has been foundational to the study of
many social learning questions [3]. Will social learning lead to a consensus? Will social
learning effectively aggregate dispersed information? How can one optimize his/her decisions to most effectively spread information? In areas such as coordination and distributed
control, it is important to study the consensus processes that a group of agents employ.
It is often desirable for a group of agents to reach some sort of agreement and have some
common knowledge regarding the state of the system [4].
The focus of this project will involve a randomly changing interaction graph. When considering an undirected graph, one may wish to compute the average (or some function) of
the initial values of the nodes. For example, the DeGroot model uses a weighted average of
the opinions of the nodes, wherein the weights are the respective trust levels of the agents.
Other averaging methods may also be employed. A simple method involves each node exchanging information with its neighbours using a basic averaging algorithm. Furthermore,
under specific connectivity assumptions, one can show that an agreement to the average
value will eventually occur.

3.3 Opinion Dynamics & Social Learning

The study of opinion dynamics is important because an individual's actions are controlled
by his/her beliefs. But how do these beliefs actually form? We are certainly not born
with opinions. Our beliefs are formed by our interactions with our social environment. For
example, our opinions on politics are dependent on our individual values. We vote for the
politician whom we deem to have our own best interests in mind. We look for a politician
that is, in a sense, fair. But what exactly is fairness? It is certainly not the same notion in
all of us; otherwise everyone would vote for the same politician. In order to answer such
questions, the spread of information within a community must be studied [5].
Social learning is the process of updating one's beliefs as a function of one's own experiences,
others' experiences, news from media sources, and propaganda from high-level sources. It is
important to note that even though each individual is updating his/her beliefs as a function
of his/her own opinions, there is an inherent social character to the process. For example,
a holistic view of a society may model the learning process like the spread of a virus: when
a group obtains a certain piece of information, it may immediately change its opinion
based on the new information. Another holistic model of social learning would be to examine social circles instead of individuals. For example, one would need to study the mutual
exclusivity and independence of such circles in order to determine the interaction properties
and the associated probability distributions [5].
In order to investigate the conditions and effectiveness of social learning, a number of questions must be addressed. What constitutes a consensus? Whether it be unanimous agreement, majority agreement, or just majority consent (i.e. where an individual may allow a
decision, but not necessarily actively support it), what are the conditions for a consensus
to form? How can we determine the consensus if it does exist? How does uncertainty play
a role in the aggregation of information? What are the conditions which guarantee the
dispersal of misinformation? The group will attempt to transcribe these basic questions to
a mathematical setting in which consensus can actually be proved.

3.4 Engineering Applications

There are many engineering applications that involve communication between decentralized
entities. A decentralized system is one in which multiple agents use their local knowledge of the system to make decisions that coincide with the goals of the team. A decentralized
system can only achieve maximum efficiency when the agents reach agreement. Without
agreement, the agents will undertake tasks redundantly, or certain aspects may be outright
omitted. That is why the models developed in this paper all focus on the conditions under
which the system reaches consensus, with consensus being defined differently for different
systems. It is also important to note that the agents must agree on certain variables prior to
interacting. For example, in a situation where a group of robots is mapping a plot of land,
they must all agree on which direction is north, or else they will produce maps that have
different orientations and will never be able to achieve agreement. They must also have the
same notion of agreement so that they are all working towards the same objective.
One type of decentralized system is a sensor network. Sensor networks have many potential uses in environmental monitoring programs. For example, sensors could be spread
throughout forests to monitor for forest fires, or placed in oceans to track algae levels. The
possibilities are endless, which makes developing smarter models a priority for the advancement of sensor networks. There are many different models for the communication between
the individual sensors. One such model is detailed in a paper from the Computer Science
Department at the University of California [9]. It details a cluster-based decentralized system in which the sensors are arranged in clusters and these clusters send their data to a
cluster head that compiles the data and removes redundant information before sending
the data to a central location.

3.4.1 Swarm Robotics

One of the main engineering applications of the models developed in this paper is in the
field of swarm robotics. Swarm robotics involves a group of autonomous agents, such as
robots or drones, completing a task by working together and sharing information. These
robots are all self-controlled and interact only with each other, never with an
outside source. Furthermore, none of the robots has any sense of superiority over any of
the other robots. This allows the system to be very robust, as the failure of one robot will
not significantly affect the group as a whole. Agents can easily be taken out and added back
into the swarm with little to no disruption to the system. Swarm robotics also has many
applications for implementation in areas where the conditions are too dangerous for humans
to operate. Robots can be programmed to work together to complete tasks that would be
of great risk to humans. A few of these tasks will be discussed in the following applications.
This illustrates the importance and practicality of studying interactions in a decentralized
system.

3.4.2 Problem: Landmine Removal

One specific use of swarm robotics is for landmine removal. This is a problem that has
been previously studied [13]. There are areas in the world where landmines lie just below
surface level, a potentially lethal threat to anyone navigating the terrain. These areas are
essentially off-limits to everyone except the locals, who know the safe paths through the
minefields. Removal of landmines is a two-step process: first, the location of the landmines
must be identified, and second, the landmines must be removed from the ground safely. A
swarm robotics network is an ideal way to locate the mines. Theoretically, the swarm could
be given an area to map, and fly over the area identifying the GPS locations of where mines
are located. Following this, the mines could be safely removed by some other means. Since
landmines are very dangerous, it would be insufficient to have a single drone map an area, as
the drone could make an error due to design or due to noise. For that reason, multiple drones
would be needed to map every part of the area to ensure that no errors were made. This is
where the notion of agreement comes in. For example, say that for a location to be deemed
mapped, three different drones must observe the location and agree on either the presence
or absence of a mine. If the three drones do not have a consensus on whether or not a mine
exists at a certain location, then the area must be observed by more agents to verify the
status of the location. When the initial observances of a location do not agree, the agents
will need to communicate with other agents that the location requires further inspection.
When the initial robots have a consensus, then there is no need for communication to other
agents. The problem is to develop a communication protocol for these drones so that they
can reliably and efficiently find the locations of the mines.

3.4.3 Problem: Forest Fires

Another task that is quite dangerous to humans that can be completed by drones is fighting
forest fires. The drones could be equipped with the necessary hardware to fight the fire
and deployed into the affected area. If implemented effectively, this method of dealing with
forest fires eliminates the need for humans to be anywhere near the fire. In 2013, there were
34 fatalities related to fighting forest fires in the United States [10]. Having drones fight the
fire could reduce that number. Drones could also be equipped with sensors that provide data
on conditions of the fire that humans cannot perceive. This data could potentially make
the drones more effective firefighters than humans could ever be. However, for the drones
to be able to extinguish or contain the fire, they would need an appropriate communication
system.

3.4.4 Problem: Environmental Concerns

One of the main uses for swarm robotics is the mapping/exploration of various terrains.
There are many scenarios where it would be advantageous to use robots to map a terrain,
such as mapping the sea floor. Depending on the type of robot and the type of terrain that
is being mapped, there are also many environmental issues that must be considered. For
example, say there is a mapping task that could be completed by humans with a high rate of
accuracy and no damage to the environment at all. This may take a long time and could be

very expensive. Conversely, the same task could be completed by drones in a much shorter
time and at a lower cost, but the drones may only achieve the same rate of accuracy as the
humans if each area was mapped by multiple drones. Additionally, when the drones map an
area, they may slightly damage the environment as well. If it is decided that the area is to
be mapped by drones, it is preferable to have the drones map the location as efficiently as
possible with minimal redundancy. The communication model for the robots must therefore
prevent excess trips over a location to minimize environmental damage.

3.5 Other Applications

3.5.1 Problem: Load Balancing

In addition to swarm robotics, there are many other potential applications for the developed
models, and decentralized systems in general. Load balancing, sensor networks and computer networks are engineering applications in which the models developed are applicable.
One particular application is load balancing over wireless networks [8]. The idea is to create
the fastest speeds for everyone who is using a device connected to the same network at any
given time. As the amount of usage fluctuates between areas, the system must be able to
adapt to handle more or less of the load. All parts of the network need to communicate with
each other to identify which areas have the largest usage, and which areas have relatively
low usage.

4 Problem Description & Approach

We wish to study agreement in decentralized systems, in particular by treating it as a
convergence problem and studying convergence under various conditions. The opinions of
agents in a typical decentralized system are constantly evolving over time. By examining
the conditions under which these agents interact, we are able to model these systems and
observe their behaviour in the limit as time goes to infinity. In this paper we will go into
depth on four interaction models, proving results and drawing conclusions about their
convergence behaviour.
We approached the problem by formulating four distinct interaction settings which we
chose to model. We will refer to the first interaction model from here on as the basic
model. The basic model allowed us to achieve a general understanding of the problem
and of how opinions change over time. In this model, each agent interacts with every other
agent at every time step, and updates its opinion based on these interactions. An even
weighting is applied to each of these interactions, but a higher weighting is assigned to each
individual's own opinion. The opinions of agents evolve by taking the weighted average
of their own opinion and the opinions of the agents with which they interact. An issue
we found with this model is that it operates under an idealistic setting and is thus fairly
unrealistic. However, it provided a good starting point to build on for the rest of our models.
The second model introduces stubborn agents and is aptly named the stubborn agent
model. Stubborn agents are agents whose opinions remain static over time and do not
evolve after interaction with other agents. The normal agents in this model function similarly to the basic model, except that additional distinct weightings are assigned to each of the
stubborn agents present in the model. The normal agents still interact with all other agents,
including stubborn agents, at each time step. We expect that agreement is not generally
achieved in a model that involves multiple stubborn agents with distinct opinions.
The two models discussed up to this point have involved state independent interaction
settings. To create a more practical model, we incorporated state dependence into the
dynamics of our model. In reality, individuals do not just talk to the same set of people
constantly. As opinions change, people begin to explore new areas and meet new people.
This led us to the creation of the first state dependent model, called the communication
radius model. In this model, agents no longer communicate with all other agents at each
time step. We introduce a fixed opinion radius that will define which agents communicate
with each other at each time step. For a particular agent, only agents who have an opinion
that falls within the opinion radius around his own opinion will communicate with him. This
is a more realistic model that incorporates the fact that agents are likely to interact only
with other agents who share a similar opinion. The issue with this model is that clusters
of opinions form over time, and thus general consensus is not achieved. Once clusters form,
individuals communicate only with other agents that have the exact same
asymptotic opinions as themselves. The next step we took in our solution process was to
improve upon this state dependent model to reflect more realistic interaction behaviour by
incorporating randomness.
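The simulations in this thesis were written in MATLAB; as a rough illustration of the radius-based update just described, the following Python sketch is a minimal stand-in. The radius r and the self-weighting are hypothetical parameters chosen for illustration, not values taken from the thesis.

```python
import numpy as np

def radius_update(x, r, self_weight=0.5):
    """One step of a communication-radius update (illustrative sketch).

    Each agent averages the opinions of all agents (itself included)
    lying within radius r of its own opinion, keeping an extra weight
    on its own opinion. Both r and self_weight are hypothetical.
    """
    x = np.asarray(x, dtype=float)
    new_x = np.empty_like(x)
    for i, xi in enumerate(x):
        neighbours = x[np.abs(x - xi) <= r]   # agents inside the opinion radius
        new_x[i] = self_weight * xi + (1 - self_weight) * neighbours.mean()
    return new_x

# Two well-separated groups of opinions: each group reaches an internal
# consensus, but no global consensus forms -- clusters persist.
x = np.array([0.0, 0.1, 0.2, 0.9, 1.0])
for _ in range(100):
    x = radius_update(x, r=0.3)
print(x)   # first three agents near 0.1, last two near 0.95
```

With r large enough to connect all agents, the same update drives the whole population to a single consensus, matching the clustering behaviour described above.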
The final and most complex model was motivated by a new notion of convergence called
stochastic agreement, as presented by Condello [12]. It is a random model that allows for
stochastic agreement, which we call the random realization model. We came up with
this model because we wanted to introduce randomness into the interaction dynamics since
real interactions are not perfect. In this model, we define multiple distinct possible realizations of interaction possibilities at each time step. Each of these realizations is assigned a
probability depending on the current spread of opinions among agents at each time step.
Each realization varies by the number of other agents an agent will interact with. The
model still incorporates state dependence, but the state simply determines the probability
distribution on the realizations. If the spread of opinions is large, a realization that has
more agents communicating has a greater likelihood of being implemented. On the other
hand, if the spread is minimal, a realization with fewer agents communicating has a greater
likelihood of being implemented. The reasoning behind this is that when opinions are very
spread out, agents in the system will have very different opinions from each other and will
want to communicate more to be able to learn from all these different opinions. When the
opinions are all fairly similar, agents will have a lower desire to communicate with each
other because there will be little to learn from interacting with an agent that has a very
similar opinion to their own. The probability distribution for the realizations is based on
a normal distribution. The center of the distribution (i.e. the point of highest probability)
is determined by the spread of opinions at any time step, as mentioned before. We came
up with this distribution because it was an effective way to distribute the probabilities to
accurately reflect the outcomes we desired based on the opinion spread. After a realization is
selected, each agent interacts with its n closest opinionated neighbours, where n depends
on the realization. Furthermore, to increase the randomness in the model we also applied
independent and identically distributed stochastic noise to the system.
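As a hedged sketch of the mechanism described above, a Python version might look like the following. The exact mapping from opinion spread to the distribution's center, the value of sigma, and the noise level are illustrative assumptions, not the thesis's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_realization_step(x, sigma=1.0, noise=0.0):
    """One step of a random-realization-style update (illustrative sketch).

    The number of neighbours k that each agent talks to is drawn from a
    normal distribution whose center grows with the current spread of
    opinions, so dispersed opinions trigger more interaction. The
    spread-to-center mapping, sigma and noise are hypothetical choices.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    spread = x.max() - x.min()
    center = min(n - 1, 1 + spread * (n - 1))            # more spread -> more neighbours
    k = int(np.clip(round(rng.normal(center, sigma)), 1, n - 1))
    new_x = np.empty_like(x)
    for i, xi in enumerate(x):
        nearest = x[np.argsort(np.abs(x - xi))[:k + 1]]  # k closest opinions plus self
        new_x[i] = nearest.mean()
    return new_x + rng.normal(0.0, noise, size=n)        # i.i.d. noise on every agent

x = np.linspace(0.0, 1.0, 10)      # widely dispersed initial opinions
for _ in range(20):
    x = random_realization_step(x, noise=0.0)
# The spread of opinions shrinks as agents repeatedly average, and with
# it the typical number of interactions per step.
```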
To study convergence in each of these models, we first came up with the necessary theory and definitions needed to mathematically derive the convergence results. Condello's
work [12] presented us with many of the mathematical notions we needed to prove our results. The next step we took was to mathematically model the interaction dynamics in
MATLAB and see if they matched our proven results. The MATLAB simulations worked
well and confirmed the results we had proven for each model.
We worked through this process for each model individually to obtain the desired results.
Modelling in MATLAB allowed us to experiment with many different variables and see
how this affected convergence results. We tested the models for anywhere from 5 to 500
agents and changed variables such as the opinion radius in the communication radius model,
weightings assigned to agents in the stubborn agent model, and probability distributions in
the random realization model. This allowed us to test the robustness of our code and verify
that our convergence results still hold regardless of the variable changes. Most notably, the
changes in variables affected the speed at which convergence was achieved depending on the
model. Rate of convergence is something we wish to study in the future but has not been
discussed in depth in this paper.


5 Design & Engineering Models

5.1 Preliminaries

Let $(x_t)_{t=1}^{\infty}$ be a sequence of row vectors where $x_t^T \in \mathbb{R}^n \ \forall t \in \mathbb{Z}_+$:
\[
x_t = (x_t^1 \ x_t^2 \ \ldots \ x_t^n)
\]
$x_t$ is referred to as the state of opinions at time $t$; $x_t^i$ is referred to as the opinion of the $i$th
agent at time $t$.
The initial value of the state of opinions is assumed to be given as
\[
x_0 = (x_0^1 \ x_0^2 \ \ldots \ x_0^n)
\]
$x_t^T$ forms a Markov chain on $\mathbb{R}^n$.

5.2 Notation and Definitions

Definition 5.1. The spectrum of a matrix $M \in \mathbb{R}^{n \times n}$ is given by
\[
\sigma(M) = \{\lambda \mid \lambda \text{ is an eigenvalue of } M\}
\]

Definition 5.2. A matrix $M \in \mathbb{R}^{n \times n}$ is said to be stochastic if
\[
\sum_{j=1}^{n} M_{i,j} = 1, \quad M_{i,j} \geq 0
\]

Definition 5.3. A matrix $M \in \mathbb{R}^{n \times n}$ is said to be column stochastic if
\[
\sum_{i=1}^{n} M_{i,j} = 1, \quad M_{i,j} \geq 0
\]

Definition 5.4. A matrix $M \in \mathbb{R}^{n \times n}$ is said to be doubly stochastic if it is both column
stochastic and row stochastic:
\[
\sum_{j=1}^{n} M_{i,j} = \sum_{i=1}^{n} M_{i,j} = 1, \quad M_{i,j} \geq 0
\]

Definition 5.5 (Dobrushin's Ergodic Coefficient). Let $P$ be a finite square matrix. We
define Dobrushin's coefficient, denoted $\delta(P)$, by
\[
\delta(P) = \min_{i,j} \sum_{k} \min(P_{i,k}, P_{j,k})
\]

Remark. Note that $\delta(P) > 0$ if and only if, for every two rows, there exists one column for
which both terms are positive.
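The coefficient is straightforward to compute numerically; the short Python helper below (illustrative, not part of the thesis) makes the definition concrete:

```python
import numpy as np

def dobrushin(P):
    """Dobrushin's ergodic coefficient: the minimum over pairs of distinct
    rows of the summed elementwise minimum of the two rows."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    return min(np.minimum(P[i], P[j]).sum()
               for i in range(n) for j in range(n) if i != j)

P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
print(dobrushin(P))          # 0.75: every pair of rows overlaps in every column
print(dobrushin(np.eye(3)))  # 0.0: distinct rows of the identity never overlap
```

The second example illustrates the remark: the identity matrix has no column in which two distinct rows are both positive, so its coefficient is zero.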
Definition 5.6. The average vector associated with $x$, where $x^T \in \mathbb{R}^n$, is
\[
\bar{x}(x) = \frac{\|x\|_1}{n} [1 \ 1 \ \cdots \ 1]
\]

Definition 5.7. Let $x_t = (x_t^1 \ x_t^2 \ \ldots \ x_t^n)$ be the state of opinions at time $t$, evolving by
some prescribed deterministic transition kernel. Agent $i \leq n$ is said to be stubborn if
\[
x_{t+1}^i = x_t^i = \cdots = x_0^i \quad \forall t > 0, \ x_0^T \in \mathbb{R}^n
\]

5.3 Basic Model

The first model of opinion dynamics studied in this thesis involves transition kernels that are
doubly stochastic. The state of opinions at any discrete time $t$ can be found by computing
the product of $t$ doubly stochastic matrices.

Model 1 (Doubly Stochastic Transition Kernel). Let $(x_t)_{t=0}^{\infty}$ be a sequence of row vectors
where $x_t^T \in \mathbb{R}^n \ \forall t \in \mathbb{Z}_+$, evolving by the recursive relationship
\[
x_{t+1} = x_t P_t
\]
where $(P_t)_{t=0}^{\infty}$ is a sequence of doubly stochastic matrices, $P_t \in M_n(\mathbb{R}) \ \forall t \in \mathbb{Z}_+$:
\[
P_t = \begin{pmatrix}
p_{1,1}^t & p_{1,2}^t & \cdots & p_{1,n}^t \\
p_{2,1}^t & p_{2,2}^t & \cdots & p_{2,n}^t \\
\vdots & \vdots & \ddots & \vdots \\
p_{n,1}^t & p_{n,2}^t & \cdots & p_{n,n}^t
\end{pmatrix}
\]
Let the initial state be given by
\[
x_0 = (x_0^1 \ x_0^2 \ \ldots \ x_0^n)
\]
Next we study a theorem which shows an interesting and integral property of the system
when the transition kernel is stochastic: the conservation of the average opinion in the
Markov chain. Between any two adjacent states of the model, the distribution of opinions among
the agents will vary, although the average value of the agents' opinions will remain constant.
Theorem 5.1. Let $P_t$ be a row stochastic matrix and let $x_{t+1} = x_t P_t$. Then the state of
opinions will be average conserving, i.e.
\[
\sum_{i=1}^{n} x_{t+1}^i = \sum_{i=1}^{n} x_t^i = \cdots = \sum_{i=1}^{n} x_0^i
\]

Proof.
\[
(x_{t+1}^1 \ x_{t+1}^2 \ \ldots \ x_{t+1}^n) = (x_t^1 \ x_t^2 \ \ldots \ x_t^n)
\begin{pmatrix}
p_{1,1}^t & p_{1,2}^t & \cdots & p_{1,n}^t \\
p_{2,1}^t & p_{2,2}^t & \cdots & p_{2,n}^t \\
\vdots & \vdots & \ddots & \vdots \\
p_{n,1}^t & p_{n,2}^t & \cdots & p_{n,n}^t
\end{pmatrix}
\]
Thus $x_{t+1}^i = \sum_{k=1}^{n} x_t^k P_{k,i} \ \forall i \leq n$. Now, exchanging the order of summation and using
the row sums of $P_t$,
\[
\sum_{i=1}^{n} x_{t+1}^i
= \sum_{i=1}^{n} \sum_{k=1}^{n} x_t^k P_{k,i}
= \sum_{k=1}^{n} \sum_{i=1}^{n} x_t^k P_{k,i}
= \sum_{k=1}^{n} x_t^k \sum_{i=1}^{n} P_{k,i}
= \sum_{k=1}^{n} x_t^k
\]
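Theorem 5.1 is easy to check numerically; the Python snippet below (an illustrative check, not part of the thesis) iterates a randomly generated row stochastic kernel and confirms that the sum of opinions never changes:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random row stochastic kernel: nonnegative entries, each row summing to 1.
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)

x = rng.random(5)       # initial opinions x_0 (a row vector)
total = x.sum()
for _ in range(50):
    x = x @ P           # x_{t+1} = x_t P
    assert np.isclose(x.sum(), total)   # the sum (hence the average) is conserved
```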

We now introduce a lemma that will be useful in determining the conditions for the
Markov chain introduced in Model 1 to reach convergence.

Lemma 5.2. Let $P \in M_n(\mathbb{R})$ be a doubly stochastic matrix with
\[
\delta(P) = \min_{i,k} \sum_{j=1}^{n} \min(P(i,j), P(k,j))
\]
If $\mu^T, \nu^T \in \mathbb{R}^n$, where $\|\mu\|_1 = \|\nu\|_1$, then
\[
\|\mu P - \nu P\|_1 \leq (1 - \delta(P)) \, \|\mu - \nu\|_1
\]
Another important theorem is introduced, which discusses an essential characteristic of
the eigenvalues of a stochastic matrix.

Theorem 5.3. Let $P \in M_n(\mathbb{R})$ be a stochastic matrix. Then $\max(\sigma(P)) = 1$.

Proof. First we will show that $1 \in \sigma(P)$, with associated eigenvector $[1 \ 1 \ \cdots \ 1]^T$:
\[
P [1 \ 1 \ \cdots \ 1]^T =
\begin{pmatrix}
p_{1,1} & p_{1,2} & \cdots & p_{1,n} \\
p_{2,1} & p_{2,2} & \cdots & p_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
p_{n,1} & p_{n,2} & \cdots & p_{n,n}
\end{pmatrix}
\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}
=
\begin{pmatrix}
p_{1,1} + p_{1,2} + \cdots + p_{1,n} \\
p_{2,1} + p_{2,2} + \cdots + p_{2,n} \\
\vdots \\
p_{n,1} + p_{n,2} + \cdots + p_{n,n}
\end{pmatrix}
= 1 \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}
\]
Now we will show that $|\lambda| \leq 1$ for all $\lambda \in \sigma(P)$. If $\lambda \in \sigma(P)$ then $\exists v \in \mathbb{R}^n$ such that $P v = \lambda v$:
\[
\begin{pmatrix}
p_{1,1} & p_{1,2} & \cdots & p_{1,n} \\
p_{2,1} & p_{2,2} & \cdots & p_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
p_{n,1} & p_{n,2} & \cdots & p_{n,n}
\end{pmatrix}
\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}
=
\begin{pmatrix}
p_{1,1} v_1 + p_{1,2} v_2 + \cdots + p_{1,n} v_n \\
p_{2,1} v_1 + p_{2,2} v_2 + \cdots + p_{2,n} v_n \\
\vdots \\
p_{n,1} v_1 + p_{n,2} v_2 + \cdots + p_{n,n} v_n
\end{pmatrix}
= \lambda
\begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix}
\]
Define $j \leq n$ such that $|v_j| = \|v\|_{\infty} = v_{\max}$. Thus we have the following:
\[
|\lambda| v_{\max} = |\lambda v_j| = |p_{j,1} v_1 + \cdots + p_{j,n} v_n| \leq (p_{j,1} + \cdots + p_{j,n}) v_{\max}
\]
By the definition of a stochastic matrix, $p_{j,1} + \cdots + p_{j,n} = 1$. Thus $|\lambda| v_{\max} \leq v_{\max}$,
which occurs only if $|\lambda| \leq 1$.
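Theorem 5.3 can also be verified numerically; the snippet below (illustrative, not part of the thesis) confirms that the spectral radius of a randomly generated stochastic matrix is 1:

```python
import numpy as np

rng = np.random.default_rng(2)

# A random (row) stochastic matrix.
P = rng.random((6, 6))
P /= P.sum(axis=1, keepdims=True)

eigvals = np.linalg.eigvals(P)
print(np.max(np.abs(eigvals)))   # ~1.0 up to floating-point error
```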


Of particular interest are the conditions for convergence of the Markov chain described
in Model 1. Further, it is important to also study the conditions for consensus to eventually
form in the model, i.e. for the opinions of all the agents to converge to the same value.
Intuitively, the amount of communication between the agents will determine whether or not
consensus is achieved. Finally, we will extend the idea that the average opinion is conserved
and show that this is also true in the limit of the sequence. We have now established all the
tools necessary to prove the following theorem.

Theorem 5.4. Let $(P_t)_{t=0}^{\infty}$, $P_t \in M_n(\mathbb{R}) \ \forall t \in \mathbb{Z}_+$, be a sequence of doubly stochastic matrices.
Let the state of opinions at time $t + 1$ be given by
\[
x_{t+1} = x_t P_t
\]
with initial opinion $x_0$. If $\prod_{t=0}^{\infty} (1 - \delta(P_t)) = 0$, then a consensus will be reached in the model, given by
\[
\lim_{t \to \infty} x_t = \frac{\|x_0\|_1}{n} [1 \ 1 \ \cdots \ 1]
\]

Proof. Let $\tilde{x}_t^T, x_t^T \in \mathbb{R}^n$, where $\|\tilde{x}_0\|_1 = \|x_0\|_1$.
The states of each opinion at time $t$ are then given by
\[
\tilde{x}_{t+1} = \tilde{x}_t P_t, \qquad x_{t+1} = x_t P_t
\]
These opinions are equivalently given by
\[
\tilde{x}_t = \tilde{x}_0 \prod_{k=0}^{t-1} P_k, \qquad x_t = x_0 \prod_{k=0}^{t-1} P_k
\]
We can then write, applying Lemma 5.2 repeatedly,
\[
\|\tilde{x}_t - x_t\|_1
= \left\| \tilde{x}_0 \prod_{k=0}^{t-1} P_k - x_0 \prod_{k=0}^{t-1} P_k \right\|_1
= \left\| \left( \tilde{x}_0 \prod_{k=0}^{t-2} P_k \right) P_{t-1} - \left( x_0 \prod_{k=0}^{t-2} P_k \right) P_{t-1} \right\|_1
\]
\[
\leq (1 - \delta(P_{t-1})) \left\| \tilde{x}_0 \prod_{k=0}^{t-2} P_k - x_0 \prod_{k=0}^{t-2} P_k \right\|_1
\leq \cdots \leq \prod_{k=0}^{t-1} (1 - \delta(P_k)) \, \|\tilde{x}_0 - x_0\|_1
\]
Taking limits of both sides of the inequality yields
\[
\lim_{t \to \infty} \|\tilde{x}_t - x_t\|_1 \leq \prod_{k=0}^{\infty} (1 - \delta(P_k)) \, \|\tilde{x}_0 - x_0\|_1 = 0
\]
Thus both sequences of opinions have equal limits:
\[
\lim_{t \to \infty} \tilde{x}_t = \lim_{t \to \infty} x_t
\]
Now let $x_t$ be an arbitrary state of opinions at time $t$ with initial condition $x_0$, so that
$x_{t+1} = x_t P_t$, and let $x'_t$ be the state of opinions at time $t$ with initial condition
\[
x'_0 = \frac{\|x_0\|_1}{n} [1 \ 1 \ \cdots \ 1], \qquad x'_{t+1} = x'_t P_t
\]
Note that $\|x_0\|_1 = \|x'_0\|_1$; thus by the above result $\lim_{t \to \infty} x_t = \lim_{t \to \infty} x'_t$.
We can write $x'_t$ as follows:
\[
x'_t = x'_0 \prod_{k=0}^{t-1} P_k = \frac{\|x_0\|_1}{n} [1 \ 1 \ \cdots \ 1] \prod_{k=0}^{t-1} P_k
\]
Note that the row vector $[1 \ 1 \ \cdots \ 1]$ is a left eigenvector of every doubly stochastic matrix with
associated eigenvalue 1, so
\[
x'_t = \frac{\|x_0\|_1}{n} [1 \ 1 \ \cdots \ 1] (1^t) = \frac{\|x_0\|_1}{n} [1 \ 1 \ \cdots \ 1]
\]
Thus
\[
\lim_{t \to \infty} x_t = \lim_{t \to \infty} x'_t = \frac{\|x_0\|_1}{n} [1 \ 1 \ \cdots \ 1]
\]
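Theorem 5.4 can be illustrated with a concrete kernel. In the Python sketch below (an illustrative choice, not the thesis's simulation), each agent keeps half of its own opinion and takes the rest from the uniform average, giving a constant doubly stochastic kernel with Dobrushin coefficient 1/2, so the product of the terms (1 − δ(P_t)) tends to 0 and the theorem's hypothesis holds:

```python
import numpy as np

n = 8
# Constant doubly stochastic kernel: keep half of one's own opinion and
# take the other half from the uniform average of all agents.
P = 0.5 * np.eye(n) + 0.5 * np.ones((n, n)) / n

rng = np.random.default_rng(3)
x0 = rng.random(n)
x = x0.copy()
for _ in range(200):
    x = x @ P

print(np.allclose(x, x0.mean()))   # True: consensus at the average initial opinion
```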

This concludes our discussion of the model with doubly stochastic transition kernels.
We have proven that in a network with full sufficient communication between agents, the
opinions will converge to the average of the initial opinions of all agents. We now investigate
a model which incorporates stubborn agents, whose opinions are static and not affected by
other agents in the model. Stubborn agents are able to affect the opinions of non-stubborn
agents in the system. The following model involves a stationary transition matrix which is
column stochastic. Some new machinery will need to be introduced in order to prove the
convergence result for this model.
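The averaging dynamics of Theorem 5.4 are easy to check numerically. The figures in this thesis were produced in MATLAB; the sketch below is a minimal Python/NumPy stand-in, with an illustrative doubly stochastic matrix and initial opinions of our own choosing.

```python
import numpy as np

# Illustrative doubly stochastic matrix: rows and columns each sum to 1.
P = np.array([
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
])

x = np.array([1.0, 5.0, 9.0])   # initial opinions x_0
target = x.mean()               # Theorem 5.4 predicts convergence to this value

for _ in range(200):            # iterate x_{t+1} = x_t P
    x = x @ P

print(x)  # every entry is (numerically) the average opinion, 5.0
```

Because P is constant with \delta(P) > 0, the product condition of Theorem 5.4 holds and all opinions collapse to the average, which the doubly stochastic structure preserves at every step.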

5.4 Stubborn Agent Model

Model 2 (Stubborn Agents). Let (x_t)_{t=0}^\infty be a sequence of row vectors, where
x_t \in R^n \; \forall t \geq 0, evolving by the recursive relationship

x_{t+1} = x_t P

Let agents 1, 2, \ldots, m be stubborn, and let the stationary transition kernel
P \in M_n(R) be column stochastic and given by

P = \begin{pmatrix}
1 &        &   & a_1         & \cdots & a_1       \\
  & \ddots &   & \vdots      &        & \vdots    \\
  &        & 1 & a_m         & \cdots & a_m       \\
  &        &   & p_{m+1,m+1} & \cdots & p_{m+1,n} \\
  &        &   & \vdots      & \ddots & \vdots    \\
  &        &   & p_{n,m+1}   & \cdots & p_{n,n}
\end{pmatrix}

where a_1, \ldots, a_m > 0, all unmarked entries are zero, and the p_{m+i,m+j}, with
1 \leq i, j \leq n - m, are chosen to satisfy the definition of a column stochastic
matrix.
In a model with multiple stubborn agents, it is not expected that consensus will be
reached, since the stubborn agents' opinions remain constant and will, in general, never
agree with each other. However, consensus will be achieved by the subset of non-stubborn
agents, whose opinions converge to a convex combination of the stubborn agents' opinions.
Theorem 5.5. Let (x_t)_{t=0}^\infty evolve according to the relationship prescribed in
Model 2. Then the opinions of the stubborn agents will remain constant and the opinions
of the n - m non-stubborn agents will converge to a convex combination of the opinions
of the stubborn agents. In particular, the state vector will converge to

\lim_{t \to \infty} x_t = \Big( x_0^1 \;\; x_0^2 \;\; \cdots \;\; x_0^m \;\;
\frac{a_1 x_0^1 + a_2 x_0^2 + \cdots + a_m x_0^m}{a_1 + a_2 + \cdots + a_m} \;\; \cdots \;\;
\frac{a_1 x_0^1 + a_2 x_0^2 + \cdots + a_m x_0^m}{a_1 + a_2 + \cdots + a_m} \Big)

Proof. First we will show that \lim_{t \to \infty} x_t exists. We have

x_t = x_0 P^t

P can be decomposed into Jordan normal form, P = S J S^{-1}, where

J = \mathrm{diag}(1, \ldots, 1, J_{m+1,m+1}, \ldots, J_{n,n})

with m ones on the diagonal. Exponentiating the matrix J to a power t \in N yields

J^t = \mathrm{diag}(1, \ldots, 1, J_{m+1,m+1}^t, \ldots, J_{n,n}^t)

Since the eigenvalues associated with the Jordan blocks J_{m+1,m+1}, \ldots, J_{n,n} are
strictly less than 1 in modulus, it follows that

\lim_{t \to \infty} J_{m+1,m+1}^t = \cdots = \lim_{t \to \infty} J_{n,n}^t = 0

This implies that

\lim_{t \to \infty} J^t = \mathrm{diag}(1, \ldots, 1, 0, \ldots, 0)

Thus

\lim_{t \to \infty} x_t = x_0 \lim_{t \to \infty} P^t = x_0 \lim_{t \to \infty} S J^t S^{-1}
= x_0 S \big( \lim_{t \to \infty} J^t \big) S^{-1}

so the limit exists. The limit must solve \lim_{t \to \infty} x_t = (\lim_{t \to \infty} x_t) P.
Explicitly solving this yields:

\lim_{t \to \infty} x_t = \Big( x_0^1 \;\; x_0^2 \;\; \cdots \;\; x_0^m \;\;
\frac{a_1 x_0^1 + a_2 x_0^2 + \cdots + a_m x_0^m}{a_1 + a_2 + \cdots + a_m} \;\; \cdots \;\;
\frac{a_1 x_0^1 + a_2 x_0^2 + \cdots + a_m x_0^m}{a_1 + a_2 + \cdots + a_m} \Big)

We have thus proven that in the stubborn agent model, the opinions of the normal agents
will converge to a weighted average of the opinions of the stubborn agents. In networks
with more than one stubborn agent, overall consensus is generally not achieved because
the stubborn agents' opinions do not change over time.
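Theorem 5.5 can be illustrated with a short simulation. The Python/NumPy sketch below (a stand-in for the thesis's MATLAB code; the weights a_1 = 0.2, a_2 = 0.3 and the remaining kernel entries are illustrative choices, not values from the thesis) builds a Model 2 kernel with two stubborn agents and iterates the dynamics.

```python
import numpy as np

# Model 2 kernel (column stochastic): agents 1 and 2 are stubborn (m = 2).
# a1 = 0.2 and a2 = 0.3 weight the stubborn opinions; the lower-right block
# entries are illustrative values that make each column sum to 1.
a1, a2 = 0.2, 0.3
P = np.array([
    [1.0, 0.0, a1,   a1  ],
    [0.0, 1.0, a2,   a2  ],
    [0.0, 0.0, 0.30, 0.25],
    [0.0, 0.0, 0.20, 0.25],
])

x = np.array([2.0, 7.0, 0.0, 10.0])  # stubborn opinions: 2 and 7
for _ in range(200):                 # iterate x_{t+1} = x_t P
    x = x @ P

# Theorem 5.5 predicts the non-stubborn agents converge to the convex
# combination (a1*2 + a2*7) / (a1 + a2) = 2.5 / 0.5 = 5.0
print(x)  # → approximately [2.0, 7.0, 5.0, 5.0]
```

The stubborn opinions stay fixed, while both non-stubborn agents settle at 5.0, the weighted average of the stubborn opinions.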

5.5 Communication Radius Model

We now introduce the first state-dependent model in our thesis, where the transition
kernel is an explicit function of the state variable, x_t. This model is motivated by
practical opinion dynamics applications: two agents communicate if and only if the
distance between their opinions is within a certain fixed radius.

Model 3 (Communication Radius). Let (x_t)_{t=0}^\infty be a sequence of row vectors,
where x_t \in R^n \; \forall t \geq 0, evolving by the recursive relationship

x_{t+1} = x_t P(x_t)

where P : R^n \to M_n(R) is given by

p_{i,j}(x_t) = \begin{cases} a_{i,j} > 0, & \text{if } |x^i - x^j| < r \\ 0, & \text{if } |x^i - x^j| \geq r \end{cases}

where r > 0 and the a_{i,j} are chosen so that P(x_t) will be column stochastic. Note
that this is always possible since |x^i - x^i| = 0 < r.
We now show that the communication radius model does not lead to consensus in general,
and that clusters of opinions may form under the dynamics specified in the model.
Theorem 5.6. Let (x_t)_{t=0}^\infty evolve according to the relationship outlined in
Model 3. There exists x_0 \in R^n such that no consensus will be reached in the model,
i.e.

\lim_{t \to \infty} x_t \neq a (1 \; 1 \; \cdots \; 1) \quad \forall a \in R

Proof. For any given r > 0, choose x_0 such that |x_0^i - x_0^j| > r for i \neq j. Thus

p_{i,j}(x_t) = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j \end{cases}

This will yield

\lim_{t \to \infty} x_t = (x_0^1 \; \cdots \; x_0^n) = x_0 \neq a (1 \; 1 \; \cdots \; 1)

Clusters of opinions will likely form in this model, where groups of agents each converge to a different opinion. Thus, in general, this model does not lead to overall consensus.
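The clustering behaviour can be reproduced with a short simulation. The Python/NumPy sketch below (the radius, the equal weights a_{i,j}, and the initial opinions are illustrative choices of ours) builds the state-dependent kernel of Model 3 and shows two well-separated groups converging to two distinct opinions.

```python
import numpy as np

def radius_kernel(x, r):
    """Model 3 kernel: p_{i,j} > 0 iff |x^i - x^j| < r, columns sum to 1."""
    M = (np.abs(x[:, None] - x[None, :]) < r).astype(float)  # adjacency, incl. self
    return M / M.sum(axis=0)  # equal weights within each column (column stochastic)

r = 1.0
x = np.array([0.0, 0.4, 0.8, 5.0, 5.3])  # two well-separated opinion groups
for _ in range(100):                     # iterate x_{t+1} = x_t P(x_t)
    x = x @ radius_kernel(x, r)

print(x)  # → two clusters: [0.4, 0.4, 0.4, 5.15, 5.15]
```

Each group collapses to its own average, but the two clusters never interact because their mutual distances exceed r, so no overall consensus forms.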

5.6 Random Realization Model

We now move to study the convergence of opinions with the addition of independent and
identically distributed stochastic noise. In order to study such a problem, we must
introduce a new notion of convergence: stochastic agreement, which will allow us to
study noisy, possibly state-dependent agreement processes. We are interested in
determining whether a process has a bounded expected return time. Before we can
precisely discuss this further, some preliminary definitions are required. The
Foster-Lyapunov criteria for stability of Markov chains will be heavily used in
studying such systems. We wish to study the stability of a Markov chain evolving with
symmetric noise. In particular, we wish to ensure that all entries of the state vector
return infinitely often to some consensus set. This will be made precise.
Definition 5.8 (Stochastic Agreement; Condello 2013). Let \{x_t\}_{t \geq 0} be a
sequence of random variables taking values in R^n. We define a consensus set S_A by

S_A = \{ x : \|x - \bar{x}(x)\|_1 \leq A \}

for some A \in R, where \bar{x}(x) denotes the vector whose every entry equals the
average of the entries of x. Let us define a sequence of stopping times for a process
x_t by:

\tau_{z+1} = \min(t \geq \tau_z : x_t \in S_A)

with \tau_0 = 0. We say that the process achieves stochastic agreement if

\sup_z E[\tau_{z+1} - \tau_z \mid \mathcal{F}_{\tau_z}] < \infty

and \forall x \in R^n

P_x(\min(t \geq 0 : x_t \in S_A) < \infty) = 1

That is, x_t will almost surely return to the consensus set in finite time.
Definition 5.9 (Stochastic Absolute Agreement; Condello 2013). Let \{x_t\}_{t \geq 0} be
a sequence of random variables taking values in R^n. We define a consensus set S_C by

S_C = \{ x : \|x\|_1 \leq C \}

for some C \in R. Let us define a sequence of stopping times for a process x_t by:

\tau_{z+1} = \min(t \geq \tau_z : x_t \in S_C)

with \tau_0 = 0. We say that the process achieves stochastic absolute agreement if

\sup_z E[\tau_{z+1} - \tau_z \mid \mathcal{F}_{\tau_z}] < \infty

and \forall x \in R^n

P_x(\min(t \geq 0 : x_t \in S_C) < \infty) = 1

That is, x_t will almost surely return to a consensus set containing the origin.
Remark. Note that stochastic agreement means that the process has a bounded expected
return time to an agreement set. Stochastic absolute agreement is identical except that
the agreement set must contain the origin. Note that stochastic absolute agreement is a
stronger notion than stochastic agreement, and therefore the former implies the latter.
The conditions under which these definitions of agreement are met will now be studied.
In particular, the criterion for stability in noisy chains is necessary in order to
study recurrence.

Theorem 5.7 (Foster-Lyapunov Criterion for Stability of Markov Chains). Let x_t be an
irreducible Markov chain. Let V : X \to R_{\geq 0}, \epsilon > 0, b < \infty, and let S
be a small set. If

E[V(x_{t+1}) \mid x_t = x] \leq V(x) - \epsilon + b \mathbf{1}_{\{x \in S\}}

then \{x_t\} is positive Harris recurrent.
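As a toy illustration of the drift condition (our own example, not a model from the thesis), consider the scalar chain x_{t+1} = a x_t + w_t with |a| < 1 and standard Gaussian noise. With V(x) = |x|, we have E[V(x_{t+1}) | x_t = x] \leq |a||x| + E|w|, which falls below V(x) - 1 once |x| is large enough, so the criterion holds with S a bounded interval. The Monte Carlo sketch below checks the drift inequality at a state outside S.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.5          # contraction factor, |a| < 1
x0 = 10.0        # a state well outside the small set S = {|x| <= c}
w = rng.normal(0.0, 1.0, 100_000)  # i.i.d. N(0, 1) noise samples

# Monte Carlo estimate of E[V(x_{t+1}) | x_t = x0] with V(x) = |x|
drift = np.mean(np.abs(a * x0 + w))

print(drift)  # ≈ 5.0, comfortably below V(x0) - 1 = 9
```

The estimated conditional expectation sits near |a| x0 = 5, confirming the negative drift toward the small set that positive Harris recurrence requires.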
We are now well equipped to study the problem of agreement in a stochastic, state
dependent system. The engineering model we shall consider is one in which an arbitrary
number of agents interact and each agent only interacts with some subset of the agents
whose opinions are closest to themselves. The system will be state dependent in the sense
that there will be different realizations of the probability transition kernel at each time stage.
A natural probability distribution on the possible realizations of the transition kernel will
emerge. In order to minimize communication costs, agents should interact more often when
their opinions are widely dispersed and interact less often when their opinions are close.


Definition 5.10 (Neighbouring agents). Let \{x_t\} be a Markov chain evolving according
to a state-dependent probability transition kernel P. Let there exist n - 1 possible
realizations of P, namely \{P_i\}, 1 \leq i \leq n - 1, at each time step. Let
\tilde{x}_t denote the state vector sorted in the sense of magnitude, i.e., the vector
whose entries are those of x_t sorted from smallest to greatest. For P_i with i odd, we
say that the a-th agent is a neighbour of the b-th agent if the following condition is
met:

a \in \{ j : \tilde{x}_{b-(i-1)/2} \leq \tilde{x}_j \leq \tilde{x}_{b+(i-1)/2} \}

For P_i with i even, we say that the a-th agent is a neighbour of the b-th agent if the
following condition is met:

a \in \{ j : \tilde{x}_{b-i/2+1} \leq \tilde{x}_j \leq \tilde{x}_{b+i/2} \}

We also say that each agent is a neighbour of themselves.

Remark. For a realization P_i with i odd, each agent chooses themselves, their
\frac{i-1}{2} closest opinionated agents to the right in \tilde{x}_t, and their
\frac{i-1}{2} closest opinionated agents to the left in \tilde{x}_t. If i is even, the
agent proceeds with a similar process, except that they additionally choose one extra
agent i/2 entries to the right of themselves in \tilde{x}_t. Our definition ensures that
each agent has exactly i neighbours for any realization P_i. The example below
illustrates the process of determining an agent's neighbours.
Example: Consider a group of four agents (n = 4). Using the algorithm written in
Definition 5.10, calculate P_i, 1 \leq i \leq n - 1, at some t = t_0. Suppose the
opinions of the agents at t = t_0 are x^1 = 3, x^2 = 9, x^3 = 5, x^4 = 1. The opinion
vector is:

x_{t_0} = [x^1 \; x^2 \; x^3 \; x^4]

and the sorted state vector is:

\tilde{x}_{t_0} = [x^4 \; x^1 \; x^3 \; x^2].

Then the possible realizations of the state transition kernel are:

P_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad
P_2 = \begin{pmatrix} 1/2 & 0 & 1/2 & 0 \\ 0 & 1/2 & 0 & 1/2 \\ 0 & 1/2 & 1/2 & 0 \\ 1/2 & 0 & 0 & 1/2 \end{pmatrix}, \quad
P_3 = \begin{pmatrix} 1/3 & 0 & 1/3 & 1/3 \\ 0 & 1/3 & 1/3 & 1/3 \\ 1/3 & 1/3 & 1/3 & 0 \\ 1/3 & 1/3 & 0 & 1/3 \end{pmatrix}
Definition 5.11 (Realization Distribution). Let \{x_t\} be a Markov chain evolving
according to a state-dependent probability transition kernel P. Let there exist n - 1
possible realizations of P, namely \{P_i\}, 1 \leq i \leq n - 1, at each time step.
Define \beta_i = \Pr(P_i = P). Write x^1, \ldots, x^n as the entries of the state vector
x_t. Let the variables \sigma_t and \mu_t denote the standard deviation and average of
this data set, respectively. That is:

\mu_t = \frac{1}{n} \sum_{i=1}^{n} x^i, \qquad
\sigma_t = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (x^i - \mu_t)^2 }

Now consider the function f : R \to R defined as:

f(\sigma_t) = e^{-\sigma_t}

Note that \mathrm{Im}(f) = (0, 1] \subset R. Also note that as \sigma_t \to \infty we
have that f(\sigma_t) \to 0. One can partition \mathrm{Im}(f) into n - 1 equal
subintervals. In particular, we may write:

\mathrm{Im}(f) = \Big[ 0, \frac{1}{n-1} \Big) \cup \Big[ \frac{1}{n-1}, \frac{2}{n-1} \Big)
\cup \cdots \cup \Big[ \frac{n-3}{n-1}, \frac{n-2}{n-1} \Big) \cup \Big[ \frac{n-2}{n-1}, 1 \Big].

We say that P_d has a dominated occurrence if f(\sigma) \in \Big[ \frac{n-(d+1)}{n-1}, \frac{n-d}{n-1} \Big).
Now let p \in (0, 1) and let w_j denote the unnormalized weight of occurrence of P_j. An
explicit formula for calculating w_j is:

w_j = p^{|d-j|+1}

To find \beta_i \; \forall i, we simply normalize:

\beta_i = \frac{w_i}{\sum_{k=1}^{n-1} w_k}

Note that this geometrically decaying distribution fully characterizes the
probabilities of occurrence of the possible realizations of P. With this discussion,
\Pr(P_i = P) is known for each i.
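Definition 5.11 can be sketched in a few lines of Python/NumPy (a stand-in for the thesis's MATLAB; the decay parameter p = 0.5 and the clamping at the interval endpoints are illustrative assumptions). The function computes \sigma_t, locates the dominated index d from f(\sigma_t) = e^{-\sigma_t}, and returns the normalized probabilities \beta_i.

```python
import numpy as np

def realization_distribution(x, p=0.5):
    """Probabilities beta_i over realizations P_1..P_{n-1} per Definition 5.11
    (p in (0, 1) is an assumed decay parameter)."""
    n = len(x)
    sigma = x.std()            # population standard deviation sigma_t
    f = np.exp(-sigma)         # f(sigma_t) in (0, 1]
    # Dominated index d: f(sigma) lies in [(n-(d+1))/(n-1), (n-d)/(n-1))
    d = n - 1 - int(f * (n - 1))
    d = max(1, min(d, n - 1))  # clamp to the valid indices 1..n-1
    j = np.arange(1, n)        # realization indices 1..n-1
    w = p ** (np.abs(d - j) + 1)   # geometric weights w_j = p^{|d-j|+1}
    return w / w.sum()         # normalize to obtain beta_i

# Widely dispersed opinions: sigma large, so P_3 (most interaction) dominates.
print(realization_distribution(np.array([0.0, 10.0, 20.0, 30.0])))
# Tightly clustered opinions: sigma small, so P_1 (least interaction) dominates.
print(realization_distribution(np.array([5.0, 5.1, 5.2, 5.3])))
```

The two calls show the intended cost-reduction behaviour: dispersed opinions push probability mass toward the high-interaction realizations, tight opinions toward the low-interaction ones.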




Remark. Definition 5.11 is motivated by a cost reduction phenomenon. In particular, the
agents interact more when their opinions are widely dispersed and interact less when
their opinions are similar. We proceed by investigating the agreement properties of
such a random realization model. In order to ascertain conditions for agreement
properties, we first state the following theorem by Condello. For the proof of this
theorem, refer to [12].
Theorem 5.8 (Condello 2013). If, \forall x_t, E_{x_t}[\delta(F(X_t))] > \epsilon for
some \epsilon > 0, and every realization of F(X_t) is doubly stochastic almost surely
\forall x, then X_t achieves stochastic agreement.
Theorem 5.9 (Neighbouring Agreement with i.i.d. Noise). Let \{x_t\} be a Markov chain
evolving according to a state-dependent probability transition kernel P. Let there
exist n - 1 possible realizations of P, namely \{P_i\}, 1 \leq i \leq n - 1, at each
time step. Suppose that \Pr(P_i = P) = \beta_i, where \beta_i is chosen according to
Definition 5.11. Suppose further that P_i is the realization where each agent shares
1/i of its opinion with its closest i neighbours (by Definition 5.10). The evolution of
the state can be written as:

x_{t+1} = x_t P(x_t) + w_t

where \{w_t\} is an i.i.d. noise process. Then, under these conditions, x_t achieves
stochastic agreement.

Proof. Note that when n is even, \delta(P_i) > 0 for every i \geq \frac{n}{2}, and when
n is odd, \delta(P_i) > 0 for every i \geq \frac{n+1}{2}. By the definition of the
realization distribution \beta_i, any realization of P occurs with nonzero probability
(bounded below). Therefore, for every x_t, E[\delta(P(x_t))] > \epsilon for some
\epsilon > 0. Further, since every possible realization of P(x_t) at any time stage is
doubly stochastic, we have that P(x_t) is doubly stochastic for all x_t. The result
follows by Theorem 5.8.
The random realization model achieves stochastic agreement while incorporating
randomness into the network. It is the most complex of the four models we have studied,
and the most practical. Explicit simulated results for all four models will be
presented in the following section with MATLAB plots.
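An end-to-end simulation of such a noisy, state-dependent system is also short to write. The Python/NumPy sketch below is a simplified stand-in: instead of the neighbour kernels of Definition 5.10 it uses three doubly stochastic matrices of increasing mixing strength, but it keeps the dispersion-dependent realization distribution of Definition 5.11 (with an assumed p = 0.5) and the additive i.i.d. noise, so the qualitative behaviour (opinions hovering around a drifting average) matches Figure 4.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3                      # n agents, m = n - 1 realizations
avg = np.full((n, n), 1.0 / n)
# Stand-in realizations: doubly stochastic, with increasing mixing strength
# (these replace the neighbour kernels of Definition 5.10 for simplicity).
realizations = [(1 - c) * np.eye(n) + c * avg for c in (0.1, 0.5, 0.9)]

def step(x):
    f = np.exp(-x.std())                        # f(sigma_t) as in Definition 5.11
    d = max(0, min(m - 1, m - 1 - int(f * m)))  # dominated realization index
    w = 0.5 ** (np.abs(d - np.arange(m)) + 1)   # geometric weights, p = 0.5
    i = rng.choice(m, p=w / w.sum())            # sample a realization
    return x @ realizations[i] + rng.normal(0.0, 0.1, n)  # add i.i.d. noise w_t

x = np.array([0.0, 3.0, 6.0, 9.0])
spread = [np.abs(x - x.mean()).sum()]
for _ in range(500):
    x = step(x)
    spread.append(np.abs(x - x.mean()).sum())

# Stochastic agreement: the spread never settles at zero, but it keeps
# returning to (and hovering near) a small consensus set.
print(max(spread[100:]))
```

When the opinions are dispersed, strong mixing dominates and pulls them together quickly; once they are tight, the weak-mixing realization dominates and the noise keeps the spread small but nonzero.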


6 Testing & Results

Below are the testing results produced in MATLAB for the models previously described.
Figure 1 shows the convergence of a finite number of agents that update their opinions
based on interactions with all the other agents in the network. The agents converge to an
opinion that is the average of the initial opinions.
Figure 2 shows a stubborn agent model with two stubborn agents. The two stubborn
agent opinions are clear on the graph as the constant lines. All the agents converge to an
opinion that is a convex combination of the initial opinions of the two stubborn agents.
Figure 3 demonstrates the formation of opinion clusters in the communication radius model.
All the agents within a certain cluster will converge to a single opinion. General consensus
is not achieved.
Figure 4 shows the stochastic agreement of a finite number of agents in the random realization model. The random noise incorporated into the model is clearly visible in the
MATLAB plot. The average opinion at each time step is tracked by the line of circles. Note
that the average opinion is constantly changing, but all the agents maintain opinions around
the average. This demonstrates stochastic agreement.

Basic Model
Figure 1: Shows agreement to the average initial opinion


Stubborn Agents Model


Figure 2: Shows agreement to a convex combination of the initial opinions of the two
stubborn agents

Radius Model
Figure 3: The existence of clusters shows that this model does not converge in general


Random Realization Model


Figure 4: Shows stochastic agreement


7 Discussion

We have looked at four key interaction models and have studied the convergence of opinions
in each model, both in the sense of approaching the same opinion in the limit and in the
sense of stochastic agreement. The results presented in the previous section from our MATLAB simulations demonstrate the conditions under which opinions will converge, and what
they converge to. A conclusion based on our results in each interaction model can be applied to many real life interaction dynamics, which will be made explicit later in this section.
In the basic model, we examined an interaction model in which all agents communicate
sufficiently. A first result that we have proved mathematically in this model is that the
opinions converge to the average of all the initial opinions of the agents. This is a very
idealistic and crude result, but can still be applied generally. In a society where individuals
trust each other equally and continuously communicate, each individual will tend to have
their opinion influenced equally by all the other individuals. This results in an overall convergence of each individual to the average of all the opinions. The average was conserved
in our model, so the average of all the opinions in the initial state is exactly the average of
the opinions at all other time steps as well.
In the stubborn agent model, a key result we proved is that in interaction settings involving
agents whose opinions are static, the opinions will gradually converge to a weighted average
of the stubborn agents' opinions. The stubborn agents in this model communicate with
all other agents and are thus a constant source of influence on the other agents' opinions.
One can thus conclude that stubborn agents with widespread communication play a very
strong role and can heavily influence others in a society. This is particularly important when
handling misinformation. If a network involves a single stubborn agent that is misinformed,
eventually all agents in the network will acquire the same misinformed opinion. Therefore,
when dealing with stubborn agents it is of great importance to ensure that they are correctly
informed.
The communication radius model presented the possibility of clusters of opinions forming.
The agents, in general, do not converge to the same opinion, but rather we have clusters of
agents each converging to a distinct opinion. The purpose of this model is to better reflect
real life interaction dynamics where agents will not necessarily be able to communicate with
all the agents in a society due to geographic, social, or financial barriers. We created a
model where agents only interact with other agents who share a similar opinion to their own. In society, this interaction model can be exemplified by a country with multiple states.
own. In society, this interaction model can be exemplified by a country with multiple states.
Due to geographical barriers, individuals in different states will most likely not communicate
with each other. Within a certain state there may be full communication and thus, as in the
basic model, the opinions will converge to the average of all the opinions. It is then possible
for clusters of opinions to form for each state. There is convergence to an opinion within
the states, but as a whole, there is no general consensus of opinions across the states.
Finally, in the random realization model we introduced noise and the notion of stochastic agreement. This is more representative of real scenarios since networks are not perfect
in practice. In this model, the number of agents that each individual communicates with
is selected randomly based on a state dependent probability distribution. The average of
the opinions at any point in time will tend to fluctuate due to the added noise;
however, it is still clearly observed in Figure 4 that the opinions tend to stay around
the average. This model is representative of hive mentality in society. In
a society, the average opinion of individuals will indeed fluctuate over time, but there are
many individuals that will conform to the average. At the very least, individuals will have
a tendency not to stray too far from the norm. This particular notion of hive mentality
is observed both in natural and artificial settings. In particular, hive mentality is observed
in swarms of animals, such as bees or ants. Artificially, hive mentality can be observed in
swarm robotics which is one of the most important engineering applications of the study
of agreement among decentralized decision makers. The minds of humans actually function
very similarly to a swarm of animals. In the mind of a human, instead of individual bees,
we have individual neurons collectively collaborating and eventually reaching a consensus
resulting in an action to be executed [11]. This observation is significant in linking the
results thus proven to important sociological outcomes.
In a sociological setting, the study of opinions and the convergence of those opinions is
important to determine an optimal action for a given situation. In a stubborn agent
example, if it is known beforehand that the intentions of a single stubborn agent are
malicious, then it could be decided in advance that the other agents should not
interact with that stubborn agent. We know that given a single stubborn agent, the
opinions of all agents in the system will eventually converge to the stubborn agent's
opinion, provided that all
agents communicate with each other. Therefore, we try to cut off the malicious agent in
advance before it can spread its opinion to other agents.
In an economic setting, spending and market trends are very much influenced by popular opinion and can thus be predicted. For example, being able to forecast
the shift of consumer preferences can give a retailer a huge advantage in being able to meet
the needs of consumers much earlier than its competitors. These shifts can be predicted
through big data analytics and the study of how preferences converge over time.

7.1 Application of Models

In the design section of the report, problems were introduced that were considered when
creating the models. This section discusses the application of the models and results proven
thus far to solve these problems.

7.1.1 Landmine Removal

The problem of landmine removal was introduced in Section 3.4.2. The random realization
model can be applied to this model to generate an effective solution to the problem. As
discussed in the random realization model, when the agents are closer to agreement they are
less likely to communicate with a large number of agents, and when the agents are farther
away from agreement, they are more likely to communicate with a large number of agents.
The benefit of this model is the reduced communications between the agents when they are
close to agreement. When the agents are far apart from agreement, there will likely be more
communication between them, resulting in a rapid convergence to a state where the agents


are at least almost in agreement. With the agents communicating less when they are close
to agreement, the communication costs for the system will be reduced and the amount of
power used by the drones will decrease as a result. The drones will be able to run longer,
or have the capacity to be equipped with more sophisticated surveying technology. The
random realization model would thus be a robust choice for the communication protocol
of a team of landmine-clearing drones, since the power saved on communication could
support better sensors to locate the landmines.

7.1.2 Forest Fires

The problem of fighting forest fires was introduced in Section 3.4.3. The radius model could
be applied in swarm robotics as the communication system for the robots in this situation.
The robots could identify the location of fire hotspots and then communicate this information to the nearby robots, which could then converge and aid in putting out the hotspot.
The robots would cluster around the hotspot at the same time other robots clustered around
other hotspots. Clusters would form based on whichever robots were closest to the hotspots.
This model would be effective because many different hotspots could be dealt with at the
same time.

7.1.3 Environmental Concerns

The problem of environmental damage was introduced in Section 3.4.4. One concern with
swarm robotics is that the drones will damage the terrain that they are traversing. The
random realization model described in this paper would prevent observation redundancy.
When there is disagreement about a location, there will be more communication between
the agents and more agents will observe an area. Conversely, when there is agreement,
there will be less communication and no additional agents will observe an area. This
would prevent
the drones from damaging the environment more than necessary.

7.1.4 Load Balancing

The problem of load balancing was introduced in Section 3.5.1. The random realization model
developed could be applied to a load balancing system by establishing a notion of agreement
defined as when a section of the network has a similar load to the surrounding sections of
the network. If there is a large difference between portions of the network, the network will
communicate more often to balance the load between areas of the network. For example,
if there was an event with a large number of people using wireless electronic devices, the
network in close physical proximity to the event would see a large increase in traffic. This
would result in an increase in communication among the areas of the network surrounding the event, and the load would ripple outwards from the event until it was more evenly
distributed. All of this would happen very quickly, with users experiencing little to no lag
in the system. When the load is evenly distributed throughout the network, the network
will communicate very little, only monitoring the loads across the network in case of a load

spike that needs to be balanced across the network. The load balancing system described,
like most decentralized systems, would be robust against failures in the network because if a
certain portion of the network was to fail the load could easily be redistributed throughout
the network. By the same principle, it would be easy to expand the network and integrate
new sections into the network.


8 Conclusion

8.1 Summary

Our thesis began by investigating opinion dynamics systems with constant, doubly
stochastic transition kernels, which preserve the average opinion in the system. A general
result was shown which provided a condition for consensus in the system. The model then
evolved to incorporate stubborn agents, who do not update their opinions but are able to
influence the opinions of non-stubborn agents in the system. Further complexity was added
to our system by investigating a system with a state dependent transition kernel, where
agents only communicate with other agents with whom they have similar opinions. This
model was introduced to demonstrate the dynamics under which clusters of opinions may
form. The final model introduced in this thesis is the Random Realization model, which
is non-deterministic, incorporates i.i.d Gaussian noise and is state dependent. This model
was motivated by the practical engineering applications discussed earlier. The dynamics for
this model were designed to minimize the amount of communication between agents, yet
still ensure that the opinions of the agents remain stable and do not deviate greatly
from the average opinion.

8.2 Future Work

In practice, decisions must be made in finite time. Most forms of consensus and
convergence discussed in this thesis are asymptotic. However, it would be useful to
study the rates of convergence in the various models, and to provide some type of
measure on the differences between agents' opinions at each time. Further, it would be
useful to aim to design a system
which converges quickly, especially from an engineering point of view. Surely, an
increased rate of convergence would require an increased amount of communication between
agents. It would be necessary to evaluate this trade off if this problem was studied in the
future.
This thesis only studied models of opinion dynamics that were strictly non-Bayesian. Another area of future study is to form models for opinion dynamics that are governed by
Bayesian probability theory. In a Bayesian model, all agents would be perfectly rational
and each have a complete probabilistic model based on past events that took place in the
system. In reality, humans are not perfectly rational, and thus do not operate under the
dynamics prescribed by Bayes. Studying a hybrid between the systems proposed in this
thesis and Bayesian models would also be of interest and could provide a better model for
human interactions.


References

[1] Schell, Barbara A Boyt (2007). Clinical And Professional Reasoning in Occupational
Therapy. Lippincott Williams and Wilkins. p. 372. ISBN 0-7817-5914-5
[2] Nndb.com, Marquis de Condorcet, 2014. [Online]. Available:
http://www.nndb.com/people/882/000093603/. [Accessed: 03- Oct- 2014].
[3] J. Wolfowitz, Products of indecomposable aperiodic stochastic matrices, Proc. Am.
Math. Soc., vol.15, pp. 733-736, 1963
[4] J. Hajnal, Weak ergodicity in non-homogeneous Markov chains, Proc. Cambridge
Philos. Soc., vol.54, pp. 233-247, 1958.
[5] D. Acemoglu and A. Ozdaglar, Opinion Dynamics and Learning in Social Networks,
Dynamic Games and Applications, vol.1, no.1, 2011.
[6] Mathworld.wolfram.com, Doubly Stochastic Matrix from Wolfram MathWorld, 2014.
[Online]. Available:
http://mathworld.wolfram.com/DoublyStochasticMatrix.html. [Accessed: 03- Oct- 2014].
[7] M. DeGroot, Reaching a consensus, Journal of the American Statistical Association,
vol. 69, no. 345, pp. 118-121, 1974.
[8] Oddi, Guido; Pietrabissa, Antonio; Priscoli, Francesco Delli; Suraci, Vincenzo, A decentralized load balancing algorithm for heterogeneous wireless access networks, WTC 2014;
World Telecommunications Congress 2014; Proceedings of , vol., no., pp.1,6, 1-3 June 2014.
[9] N. Amini, A. Vahdatpour, W. Xu, M. Gerla and M. Sarrafzadeh, Cluster size optimization in sensor networks with decentralized cluster-based protocols, Computer Communications, vol. 35, no. 2, pp. 207-220, 2012.
[10] National Interagency Fire Center, Wildland Fire Fatalities by Year, 2014.
[11] J. Castro, You Have a Hive Mind, Scientificamerican.com, 2015. [Online]. Available: http://www.scientificamerican.com/article/you-have-a-hive-mind/. [Accessed: 05-Apr-2015].
[12] A. Condello, Stability of Agreement in State-Dependent Interaction Environments, 1st
ed. Kingston: Queen's University Department of Mathematics and Statistics, 2013.
[13] Kumar, V.; Sahin, F., Cognitive maps in swarm robots for the mine detection application, Systems, Man and Cybernetics, 2003. IEEE International Conference on , vol.4,
no., pp.3364,3369 vol.4, 5-8 Oct. 2003.
