Session7Slides Draft Nov2 2020 2pm
“sum of many small independent random effects...” (EK 18.1)
2^(-k) vs. k^(-2)
Writing the power law (scale-free) distribution in linear form:

Y = Beta_0 + Beta_1 * X + epsilon, where Y is the log of the fraction of nodes and X is the log of the degree
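The linear form above is what makes the exponent estimable by ordinary least squares on log-log values. A minimal sketch, using synthetic, noiseless k^(-2) data (not the Broder et al. in-link data):

```python
import numpy as np

# Synthetic power-law data f(k) ~ k^(-2) (illustrative, not real in-link counts).
k = np.arange(1, 101)
f = k ** -2.0

# Taking logs linearizes the power law: log f(k) = Beta_0 + Beta_1 * log k,
# so least squares on (log k, log f) recovers the exponent as Beta_1.
X, Y = np.log(k), np.log(f)
beta_1, beta_0 = np.polyfit(X, Y, 1)
print(round(beta_1, 6))  # -2.0 (the power-law exponent, with sign)
```

On real data the high-degree tail is noisy, so in practice the fit is often done on binned or cumulative counts; this sketch only shows the mechanics of the log-log regression.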
Figure 18.2. A power law distribution (such as this one for the number of Web page in-links, from Broder et al. [79]) shows up as a straight line on a log-log plot.
Erdős–Rényi random graph
• Start with n nodes
• Link each pair of nodes independently with probability p
• Serves as a benchmark
Random Graph: Degree Distribution
Poisson distribution: random graphs

P(d) = e^(-(n-1)p) * ((n-1)p)^d / d!

[Plot: Poisson degree distribution P(d) for d = 1, ..., 20]
Poisson
Scale-free
Figure 18.3. The distribution of popularity.
The x-axis is ordered by the popularity of the book.
What do they add?
• They capture something about the real world that is not captured by the static models…
• They are dynamic models…
Benchmark model: uniformly random growth
Degree distribution
• Start with a complete graph of m nodes (i.e. fully
connected)
Distribution of expected degrees
• Expected degree at time t for node i (born at time i), where m < i < t, is:
  d_i(t) = m + m/(i+1) + m/(i+2) + ... + m/t
[Two plots: the x-axis corresponds to the red dotted line in one and to the green dotted line in the other]
Based on Matthew Jackson ch. 4
• Example (m = 20, t = 100): nodes whose expected degree is below 35 are those i with 20(1 + log(100/i)) < 35
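The inequality uses the approximation m + m/(i+1) + ... + m/t ≈ m(1 + log(t/i)). A sketch checking both the approximation and the cutoff for the m = 20, t = 100 example:

```python
from math import log

m, t = 20, 100  # values from the slide's example

def exact(i):
    # m links at birth, plus an expected m/k new links at each later step k
    return m + sum(m / k for k in range(i + 1, t + 1))

def approx(i):
    # Integral (mean-field) approximation of the harmonic sum
    return m * (1 + log(t / i))

for i in (25, 50, 75):
    print(i, round(exact(i), 2), round(approx(i), 2))

# Smallest birth date i satisfying 20(1 + log(100/i)) < 35:
cutoff = min(i for i in range(m + 1, t) if approx(i) < 35)
print(cutoff)  # 48
```

So under the approximation, only nodes born after time 47 have expected degree below 35 at t = 100: early-born nodes accumulate the most links.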
MEAN FIELD APPROXIMATION
Mean field approximation
• Continuous time approximation
• Distribution of expected degrees
• Check by simulation?
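The "check by simulation" step can be sketched directly: grow a network by uniformly random attachment and compare a node's average simulated degree with the mean-field prediction m(1 + log(t/i)). All sizes below are illustrative assumptions:

```python
import random
from math import log

random.seed(1)
m, T, runs = 4, 400, 200  # illustrative sizes, not from the slides

def grow(m, T):
    # Uniformly random growth: start from a complete graph on m nodes;
    # each newborn node links to m existing nodes chosen uniformly at random.
    degree = {v: m - 1 for v in range(1, m + 1)}
    for t in range(m + 1, T + 1):
        for v in random.sample(list(degree), m):
            degree[v] += 1
        degree[t] = m   # added after sampling, so a newborn never links to itself
    return degree

i = 20  # track the node born at time 20
avg = sum(grow(m, T)[i] for _ in range(runs)) / runs
print(round(avg, 2), round(m * (1 + log(T / i)), 2))  # simulation vs mean field
```

The simulated average lands close to the continuous-time prediction, which is the sanity check the slide asks for.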
Mean-field approximation
RANDOM GROWTH
Distribution of expected degrees (random growth)
• d d_i(t)/dt = m/t, with d_i(i) = m
• Solution (simple differential equation):
  d_i(t) = m + m log(t/i)
Mean-field approximation
PREFERENTIAL ATTACHMENT
GROWTH
Distribution of expected degrees (preferential attachment)
• d d_i(t)/dt = m * (d_i(t)/(2tm)) = d_i(t)/(2t), with d_i(i) = m
• Solution: d_i(t) = m (t/i)^(1/2)
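Since m * d_i(t)/(2tm) simplifies to d_i(t)/(2t), the closed form m(t/i)^(1/2) can be verified by a crude Euler integration of the differential equation (m, i, T below are illustrative):

```python
from math import sqrt

m, i, T = 3, 10, 1000   # illustrative values
dt = 0.001              # Euler step size

# Integrate d d_i(t)/dt = d_i(t) / (2*t) forward from t = i, with d_i(i) = m.
d, t = float(m), float(i)
while t < T:
    d += d / (2 * t) * dt
    t += dt

print(round(d, 2), m * sqrt(T / i))  # Euler result vs closed form m*(T/i)^(1/2)
```

The two numbers agree to the step-size error, confirming that the square-root growth is the solution of the mean-field equation.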
[Plot: m = 1]
By Matthew Jackson (Fall 2015 MOOC)
Distribution of expected degrees
• F_t(d) = 1 - (m/d)^2 and f_t(d) = 2m^2/d^3
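The CDF follows by counting nodes: node i has expected degree m(t/i)^(1/2) < d exactly when i > t*m^2/d^2, so the fraction below d is 1 - (m/d)^2. A quick check with illustrative m and t:

```python
from math import sqrt

m, t = 2, 100000  # illustrative values

for d in (4, 8, 16):
    # Fraction of nodes i = 1..t whose expected degree m*sqrt(t/i) is below d,
    # compared with the closed form F_t(d) = 1 - (m/d)^2.
    frac = sum(1 for i in range(1, t + 1) if m * sqrt(t / i) < d) / t
    print(d, frac, 1 - (m / d) ** 2)
```

The density f_t(d) = 2m^2/d^3 is just the derivative dF_t/dd, i.e. the power law with exponent 3 that preferential attachment is known for.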
HYBRID MODELS
Model of hybrid attachment
• A fraction a of incoming links is made uniformly at random; the remaining 1 - a are made by searching neighborhoods of friends (friends of friends).
Relation to preferential attachment
• In a network where half the individuals have degree k and half have degree 2k:
• Randomly select a link and then a node on one end of it: 2/3 chance that it has degree 2k, 1/3 chance that it has degree k

First example network:
p(neighbor deg 2) = (1/3)(1/2 + 1 + 1/2) = 2/3
p(neighbor deg 1) = (1/3)(1/2 + 0 + 1/2) = 1/3

Second example network:
p(neighbor deg 2) = (1/2)(1/2 + 1/2) = 1/2
p(neighbor deg 1) = (1/2)(1/2 + 1/2) = 1/2
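The 2/3 vs. 1/3 split is pure size bias: each node of degree d contributes d link ends, so choosing an end of a uniformly random link hits a node with probability proportional to its degree. With half the nodes at degree k and half at 2k (k and n below are illustrative):

```python
k, n = 5, 1000            # illustrative degree and network size
half = n // 2

# Link ends contributed by each group: degree-k nodes give half*k ends,
# degree-2k nodes give half*2k ends.
total_ends = half * k + half * 2 * k
p_deg_2k = half * 2 * k / total_ends
p_deg_k = half * k / total_ends
print(p_deg_2k, p_deg_k)  # 2/3 and 1/3, independent of k and n
```

This degree-proportional selection is exactly the preferential-attachment rule, which is why friend-of-friend search mimics it.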
Relation to preferential attachment
• Consider a similar model: Randomly select a node,
and then look at its neighbor. Over a very large
network, this will approximate the prior model of
selecting a link first.
Friends of friends
• Randomly find a node
Simple Hybrid
• Fraction a uniformly at random, 1 - a via preferential attachment:
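A minimal simulation sketch of this hybrid rule (all parameter values are illustrative assumptions): with probability a a newborn's link goes to a uniformly random existing node, otherwise to an end of a uniformly random existing link, which is equivalent to degree-proportional (preferential) selection.

```python
import random

random.seed(2)

def hybrid_growth(a, m, T):
    """Grow a network to T nodes; each newborn makes m links, each uniformly
    at random with probability a, else preferentially. Assumes m >= 2."""
    degree = {v: m - 1 for v in range(1, m + 1)}          # complete graph on m nodes
    ends = [v for v in range(1, m + 1) for _ in range(m - 1)]
    for t in range(m + 1, T + 1):
        existing = list(degree)                            # captured before t joins
        degree[t] = 0
        new_ends = []
        for _ in range(m):
            # Uniform with prob. a; otherwise an end of a random existing link.
            j = random.choice(existing) if random.random() < a else random.choice(ends)
            degree[j] += 1
            degree[t] += 1
            new_ends += [j, t]
        ends += new_ends   # recorded after the loop, so a newborn never picks itself
    return degree

deg = hybrid_growth(a=0.5, m=3, T=2000)
print(max(deg.values()), sorted(deg.values())[len(deg) // 2])  # max vs. median degree
```

The gap between the maximum and the median degree widens as a shrinks, since more preferential attachment produces a heavier tail.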
Degree distribution
• Nodes that have expected degree less than d at some
time t are those i, such that:
Degree distribution
• F(d) = (t - i)/t
• F(d) = 1 - ((m + amx)/(d + amx))^x, where x = 2/(1 - a)
• log(1 - F(d)) = c - x log(d + amx)
• Estimate m directly (see last slide)
• Select a to minimize the distance between the actual distribution and the model's distribution
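The last two bullets can be sketched as a grid search: fix m (estimated directly), then pick the a whose model CDF is closest to the empirical one. The degree sample below is hypothetical, purely for illustration:

```python
def model_cdf(d, m, a):
    # F(d) = 1 - ((m + a*m*x) / (d + a*m*x))^x, with x = 2 / (1 - a)
    x = 2 / (1 - a)
    s = a * m * x
    return 1 - ((m + s) / (d + s)) ** x

degrees = [2, 2, 3, 3, 3, 4, 5, 5, 6, 8, 9, 12, 15, 22, 40]  # hypothetical data
m = 2  # assume m was estimated directly (e.g. the minimum degree)

def empirical_cdf(d):
    return sum(1 for v in degrees if v <= d) / len(degrees)

def distance(a):
    # Max absolute gap between empirical and model CDFs (a KS-style distance)
    return max(abs(empirical_cdf(d) - model_cdf(d, m, a)) for d in set(degrees))

best_a = min((j / 100 for j in range(1, 100)), key=distance)
print(round(best_a, 2), round(distance(best_a), 3))
```

Grid search over a in (0, 1) is only one option; any scalar minimizer works, and other distance notions (e.g. least squares on the log(1 - F) line) fit the same framework.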
Spans Extremes
• F(d) = 1 - ((m + amx)/(d + amx))^x, where x = 2/(1 - a)
• As a → 0 this approaches the preferential-attachment power law 1 - (m/d)^2; as a → 1 it approaches the distribution of uniformly random growth
Fitting hybrid models to the data
• F(d) = 1 - ((m + amx)/(d + amx))^x
• log(1 - F(d)) = c - x log(d + amx)