
Knowledge-Based Systems 212 (2021) 106588

Contents lists available at ScienceDirect

Knowledge-Based Systems
journal homepage: www.elsevier.com/locate/knosys

Understanding the game behavior with sentiment and unequal status in cooperation network✩

Mengmeng Liu, Yinghong Ma ∗, Le Song, Changyu Liu
Business School, Shandong Normal University, Jinan, 250014, China

Article info

Article history:
Received 18 May 2020
Received in revised form 29 October 2020
Accepted 30 October 2020
Available online 9 November 2020

Keywords:
Cooperation network
Unequal status
Cooperative behavior
Evolutionary game
Reputation score

Abstract

The cooperation network is one of the structural social relationships naturally formed in the evolution of human societies. Previous research has focused on well-mixed structures, and yet most individuals in real interactions have sentiments and unequal statuses that persist and change over time. This raises the question of whether cooperation can persist despite the different sentiments and unequal statuses of individuals. In this paper, sentiments are divided into the positive and the negative, and unequal statuses are represented by the small node and the big node based on the node degree. We develop a game model to study cooperative behaviors based on unequal statuses and sentiments, and experimentally examine the model on digital and real networks. Surprisingly, we find that small nodes are more prone to choose positive cooperation than comparable big nodes, on the promise of enough profits from the tacit knowledge and the excess return. Our results reveal that unequal status is a hidden mechanism behind cooperative behaviors, and they provide a new perspective from which to investigate the evolution of cooperation in more realistic environments.

© 2020 Elsevier B.V. All rights reserved.

1. Introduction

Cooperative behaviors are widely rooted in many scenarios, such as enterprises, actors, scholars, as well as animals. Scholars cooperate to meet their respective needs based on social contacts, trust and the sharing of complementary resources [1], forming cooperation networks in science. The contributions of scientific cooperation are not only breakthroughs in otherwise unattainable achievements but also the mutual transfer and fusion of knowledge. The social and professional collaborative relationships of scientists were proved to be popular cooperative behaviors [2]. The cooperation game is an effective tool to uncover the cooperative behaviors of individuals. The conditions for the emergence and sustainability of cooperative behaviors can be identified by generalized classical game models, such as the snowdrift game [3,4], the public goods game [5,6] and the prisoner's dilemma game [7–10]. All these classical games are favorite mathematical methods to simulate social dilemmas and identify Nash equilibria.

In many real scenarios, the statuses of players are different and asymmetric in power, wealth, influence and so on, such as in social networks [5,11] and biological systems [12,13]. In academic cooperation, scholars' statuses are often unequal: some scholars are academic leaders who have honorary titles, sufficient financial funds and other resources, while other scholars are novices without financial support or human resources. Hence, it is reasonable to assume that players have different abilities or influences on their neighbors' evolving traits [14]. High-prestige players spread their strategies easily and influence their neighbors' strategy choices [15], and the popularity of an individual also affects the evolution of cooperation [16]. Players' connections and their weights can speed up the co-evolution of strategies and the network structure [3,7,8,17].

Influence factors and structures of cooperation are two important aspects of evolutionary games. Influence factors impact the strategy choice of the player. The effect of variations of the cost-to-benefit ratio on the evolution of cooperative behaviors was investigated through novel classical games [3], a three-player game model [18] and a three-strategy game model [19]. The continuous supporting policy for cooperators in the donation game [20] and an increasing number of players who consider the effects of strategies and the environment together [21] can improve the cooperation level.

Dynamic rewards, such as award factors and penalties that account for the benefit of including innovators in a group and the cost of unsuccessful insights over time, also influence the cooperation level in scientific publications [22,23]. One of the most important structural factors of a player is its connection with the neighborhood, i.e., the node degree, which reflects the number of resources and cooperators. The power of a player measured by its degree qualifies the games between players [24].

✩ This work is supported by the National Natural Science Foundation of China (No. 71471106).
∗ Corresponding author. E-mail address: yhma@sdnu.edu.cn (Y. Ma).


The degree of knowledge sharing or knowledge spillover was measured by the networked evolutionary game [25] and represented by knowledge graphs [26]. There are also many other factors affecting the evolution of strategies or partners, such as the historical behaviors of individuals [27] and the related memory length [28].

In the process of cooperation, some scholars show positive attitudes: they devote many resources, costs and energies to cooperative research. Meanwhile, some scholars agree to cooperate but display negative manners: they do not give enough support, or even provide nothing, if the other side does not provide direct profits or maximize their interests. Such opportunistic behaviors of negative players resemble free riding in the snowdrift game [3,4]. Those sentiments were analyzed or detected by a semi-supervised learning model [29], and the degree of intensity of sentiments was predicted using a stacked ensemble method [30] for big social data. The positive or negative sentiments of scholars might impact the willingness and the process of cooperation. This raises the question of whether cooperation can persist despite different sentiments and unequal statuses of individuals. However, the sentiments of scholars in their cooperation have attracted little attention in previous research. As a result, cooperation sentiments, individual statuses and influence factors, including the involved members, the game rules and the social economic environment, are worth investigating.

In this paper, we develop a game model to study cooperative behaviors based on unequal statuses and sentiments. The equilibrium points and stable points are analyzed by this model. The feasibility and efficiency of the model are shown by numerical simulations and real data analyses. The contributions of this paper include four aspects: the individual status represented by the node degree is defined as the big node or the small node according to whether the node degree is greater than the average degree of the network or not; game strategies are parted into positive or negative based on cooperation sentiments; unequal status players with asymmetric payoffs push the game evolving, and small nodes are more prone to choose positive cooperation relative to comparable big nodes on the promise of enough profits from the tacit knowledge and the excess return; and the tacit knowledge encourages small nodes to cooperate with big ones positively. Our results provide a new perspective to investigate the evolution of cooperation in more realistic environments.

The rest of this work is arranged as follows: the evolutionary game model, including the parameters, the model and the evaluating indexes, is presented in Section 2; in Section 3, a theorem for equilibrium points and the numerical simulations on the game model are analyzed; the experiments on real data of General Relativity and Quantum Cosmology (GR-QC) are taken in Section 4, where the effects on the evaluating indexes are simulated with GR-QC data; and in the final Section 5, discussion and conclusion are given.

2. Modeling a novel evolutionary game

In this work, we suppose that scholars work together to increase their profits. The cooperation can only arise from a certain number of scholars. Because of limited ability or bounded rationality, it is impossible for scholars to choose the best strategy each time to get the maximum benefit; instead they optimize their strategies through continuous trial and error. The evolutionary game is based on such assumptions.

A network is supposed to be an unweighted and undirected graph, denoted by G = (V, E), where V is the set of nodes, E is the set of connections or links of nodes i and j cooperating in projects, and the sizes of V and E are denoted by |V| = n and |E| = m, respectively. Usually, G is represented by an adjacency matrix A = (aij), where aij = 1 if there is a connection between nodes i and j; otherwise aij = 0. The degree of node i in G is defined as the number of its neighbors, denoted by ki, ki = Σj aij. 2m = Σi∈V ki, since each link is counted twice by its two connected nodes. In the following, the parameters or influence factors and the evolutionary game model are presented.

2.1. Influence factors

In this model, we define five influence factors: the excess return R and its distribution coefficient α, the development cost c, the reputation value f(ki) of node vi with degree ki, and the tacit knowledge value S.

• Excess return R and its distribution coefficient α. Two scholars cooperate in a project that would increase profits, which is defined as an excess return R, and the two sides of the cooperation share R with the distribution coefficients α and 1 − α respectively, where α is a human-controllable factor of the players, 0 < α < 1. The return R, evaluating the benefit of cooperation with others in scientific research projects or publications, is similar to the award factor in real world scenarios [22]. The return for the focal nodes and their neighbors is one of the most important factors for the dynamics of cooperative behaviors. Rational selection implies that a higher return has a higher value and a higher attractiveness [31]. We set α to be the proportion of R allocated to the big degree node, and 1 − α to the small one. The specific definitions of the big degree node and the small degree node are introduced in Section 2.2.

• Development cost c, c ≥ 0. The development cost c is the sum of the cost of development of the scientific project and the cost of unsuccessful insights over time, including the penalty introduced in real world scenarios of innovation and development [22]; for example, the employer's salary, the cost of consumable materials, the penalty, etc. Here, we assume that the positive cooperator costs c and the negative cooperator costs nothing. A natural assumption is max{αR − c, (1 − α)R − c} ≥ 0.

• Tacit knowledge value S, S ≥ 0. The spillover of tacit knowledge is the process from the experienced scholars to the novices [25], and concerns skills, ideas, and experiences that people have but may not easily express [32]. Sometimes, the diffusion of tacit knowledge happens unconsciously through practice in a particular context of social networks [33]. To some extent, the knowledge is "captured" when the knowledge holder joins a network or a community of practice [34]. Therefore, the tacit knowledge S, described as "know-how" from neighbors [35], is also a human-controllable factor, and 0 ≤ S ≤ 1.

• The reputation score. The reputation value of a scholar in a network changes with the length of his academic research career. When cooperating with others, a scholar obtains the corresponding reputation value from his/her collaborators, so as to increase his/her own profit. The reputation score is defined in different ways. For example, Zhang and Zhen found that the value of reputation has a saturation effect [15,36], and it was defined as a random real number in the interval [0, 1] or [1, 100] by Zhen and Wang [36,37]. In this work, we define a reputation value function corresponding to the degree of nodes:

f(ki) = (e^(−(ki − 20)/5) + 1)^(−1),

where ki is the degree of node i. The reputation function of a node is thus a non-linear function of its degree.
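As a concrete illustration of Section 2.1 and the status split used throughout the paper, the short Python sketch below implements the reputation function f(ki) and the SN/BN classification by average degree. It is only a minimal sketch; the function names and the example graph are illustrative choices, not taken from the paper.

```python
import numpy as np
import networkx as nx

def reputation(k, k0=20.0, s=5.0):
    """Reputation score f(k) = (exp(-(k - k0)/s) + 1)^(-1); saturates for large degrees."""
    return 1.0 / (1.0 + np.exp(-(k - k0) / s))

def split_by_status(G):
    """Label each node SN (small) or BN (big) by comparing its degree with the average degree."""
    avg_k = 2.0 * G.number_of_edges() / G.number_of_nodes()
    return {v: ("BN" if d > avg_k else "SN") for v, d in G.degree()}

# Illustrative example on a synthetic scale-free graph (not the GR-QC data).
G = nx.barabasi_albert_graph(200, 3, seed=1)
status = split_by_status(G)
print(round(reputation(9), 3), round(reputation(40), 3))  # ~0.100 vs ~0.982
```

For a degree of 9 the reputation score is close to 0.1, while a degree of 40 gives a score close to 1, which matches the saturation behavior described above.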

2.2. The novel evolving game model

In this model, we divide players into two parts according to the node degree and the average degree: one part is the small degree node (SN) and the other is the big degree one (BN). That is, we say that the two players have unequal statuses. If the degree of a node is less than the average degree, it is defined as an SN, and a node whose degree is greater than the average is called a BN.

Because BNs have more neighbors than SNs, we suppose that SNs desire to cooperate with BNs, based on the 'rich club' phenomenon in cooperation networks. We also suppose that players have bounded rationality in choosing strategies and pursuing payoffs. We define two strategies based on the sentiments of players: positive cooperation (PC) and negative cooperation (NC), where PC means that the player cooperates with others and is willing to provide human resources, financial support, technical instruction and tacit knowledge in the process of cooperation, while the NC player does not afford enough support or even no support. A natural assumption is that the cost of PC is greater than that of NC and the cost of NC is zero.

If both the SN and the BN take the PC strategy, the BN gets the α proportion of the excess return R minus the development cost c, and the SN gets the remainder. Each player gets the reputation score from the PC player. In this case, the SN gets the tacit knowledge value S from the BN. If they both take NC, they get nothing. If one player takes PC and the other takes NC, then the NC node gets the reputation score from the PC node. If the BN takes NC and the SN takes PC, the SN still gets part of the tacit knowledge, supposed to be S/2. If the BN takes PC and the SN takes NC, then the negative SN does not care about or learn the tacit knowledge from the BN, and the tacit knowledge of the SN is 0. Based on the above analysis, Table 1 can be obtained.

2.3. Rules for the model

Evolution rules greatly influence cooperative behaviors in games. The Fermi evolutionary rule, which comes from the Fermi–Dirac distribution function in statistical physics [38,39], can be adopted for implementing the strategy revision phase, in which players randomly choose one of their neighbors to compare their payoffs and decide whether or not to learn the neighbors' strategies. The rules for a player to update his strategies are based on payoffs and strategies converging to an evolutionarily stable strategy equilibrium [40,41]. Suppose that two players with unequal status take the strategy PC with probability x and y respectively, and denote the expected revenue of node i at time t by POi; then

POi∈BN = Σj∈SN [ xy(α(R − c) + f(kj)) + x(1 − y)(αR − c) + (1 − x)y(αR + f(kj)) ],   (1)

or

POi∈SN = Σj∈BN [ xy((1 − α)(R − c) + f(kj) + S) + x(1 − y)((1 − α)R + S/2 − c) + (1 − x)y((1 − α)R + f(kj)) ].   (2)

We also denote by xi(t) the probability that player i positively cooperates with one of his neighbors at time t. Hence, at time t + 1, the strategy of player i evolves by observing and learning the strategy of its neighbor j, and then updating its strategy with probability xi(t + 1),

xi(t + 1) = xi(t)(1 − p(si ← sj)) + xj(t) p(si ← sj),   (3)

where p(si ← sj) is the transition probability with the Fermi rule,

p(si ← sj) = 1 / (1 + exp((fiti − fitj)/λ)),   (4)

where si ∈ {PC, NC} is the strategy that node i adopts, and fiti = β POi + (1 − β) POi / ki. fiti is the combination of the accumulated payoff and the averaged payoff of node i, presented in Ref. [42]. We call β the mixing degree of fiti, 0 ≤ β ≤ 1. When β → 0, fiti is determined by the averaged payoff of node i, while when β → 1, fiti depends on the accumulated payoff of node i. In this work, we set β = 0.01, which means fiti heavily depends on the averaged payoff of node i. Some other simulation results comparing different β are added in Appendix C.

Eqs. (3) and (4) are the evolutionary rule of the players, where λ is the irrationality of a player adapting its strategy based on his payoff and his neighbors'. λ → 0 means that players are rational: whether fiti ≥ fitj or not, node i will choose a strategy that is conducive to increasing its own payoff. In the contrary case, λ → ∞, player i will update the strategy randomly regardless of the earnings of player i or its neighbors, and the player is irrational. In this work, the scenario is cooperation between scholars who have had long-term education, which indicates that they are more rational in making strategy decisions. So we set λ = 0.1 to describe players with limited rationality [43]. Some other irrationality values are also compared with λ = 0.1 in Appendix D.

2.4. Evaluating metrics

There are three indexes chosen to measure the cooperation level: the average probability p̄ of positive cooperation over all nodes, the average payoff Ū of the network, and the proportion g of linked pairs in which the player and its neighbor are both positive cooperators.

p̄ = lim(t→+∞) (1/n) Σi∈V xi(t),   (5)

Ū = lim(t→+∞) (1/(2m)) Σi∈V POi(t),   (6)

g = (1/m) Σ1≤i<j≤n 1(xi > 0.5, xj > 0.5, aij = 1),   (7)

where 1(xi > 0.5, xj > 0.5, aij = 1) counts the number of links with aij = 1 for which xi > 0.5 and xj > 0.5 hold for nodes i and j.

For a given adjacency matrix A of the scientific network, the evolutionary game on scientific cooperation networks evolves according to Eqs. (3) to (7) by the following steps: set the evolving time length t ≤ T; calculate the degree of each node for the initial time t = 0; set xi(0) for each node randomly and then compute p̄, Ū and g by Eqs. (5)–(7); then update xi(t + 1) from xi(t), t = 0, 1, . . ..
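To make the update rule of Section 2.3 concrete, here is a minimal Python sketch of Eqs. (3) and (4) under the parameter choices stated above (λ = 0.1, β = 0.01). The names are illustrative, and the clamp on the exponent is an implementation detail added here to avoid numerical overflow, not part of the model.

```python
import math

LAM = 0.1    # irrationality lambda used in the paper
BETA = 0.01  # mixing degree beta used in the paper

def fitness(po_i, k_i, beta=BETA):
    """fit_i = beta * PO_i + (1 - beta) * PO_i / k_i (mix of accumulated and averaged payoff)."""
    return beta * po_i + (1.0 - beta) * po_i / k_i

def fermi_prob(fit_i, fit_j, lam=LAM):
    """Transition probability p(s_i <- s_j) of Eq. (4); the clamp only prevents overflow."""
    z = max(min((fit_i - fit_j) / lam, 700.0), -700.0)
    return 1.0 / (1.0 + math.exp(z))

def update_probability(x_i, x_j, fit_i, fit_j):
    """Eq. (3): move x_i toward the neighbor's probability x_j with weight p(s_i <- s_j)."""
    p = fermi_prob(fit_i, fit_j)
    return x_i * (1.0 - p) + x_j * p

# A node with lower fitness than its neighbor imitates that neighbor with high probability.
print(update_probability(0.2, 0.9, fitness(0.5, 10), fitness(2.0, 10)))
```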

Table 1
The payoff matrix of SN and BN with two strategies. Each cell lists the BN payoff first and the SN payoff second.

                         SN(j): PC                                                SN(j): NC
BN(i): PC    α(R − c) + f(kj),  (1 − α)(R − c) + f(ki) + S          αR − c,  (1 − α)R + f(ki)
BN(i): NC    αR + f(kj),  (1 − α)R + S/2 − c                         0,  0

3. Theoretical analysis on the model

In this section, the theoretical analysis of the model on stable points and the numerical simulations of the trends of stable points under different conditions are discussed.

3.1. Theory and the proof

Theorem (BN vs. SN Game). In the game model of SN vs. BN shown in Table 1, there are five equilibrium points: (0, 0), (1, 0), (0, 1), (1, 1) and (x∗, y∗), where x∗ = ((1 − α)R + S/2 − c) / ((1 − α)R − αc − S/2) and y∗ = (αR − c) / (αR − (1 − α)c). If S < (1 − α)c, the evolutionary stable point is (0, 1) or (1, 0), (x∗, y∗) is the saddle point, and (0, 0) and (1, 1) are not stable points; if S ≥ (1 − α)c, the only possible evolutionary stable point is (0, 1), if L has one.

Proof. Suppose that the probabilities of the BN i and the SN j taking positive cooperation with others are x and y respectively when the SN and the BN play the game. Then the expected payoffs of the BN i taking strategy PC or NC are E_bp^i or E_bn^i, respectively. They are

E_bp^i = y(α(R − c) + f(kj)) + (1 − y)(αR − c), and E_bn^i = y(αR + f(kj)).

Hence the mean payoff of the BN i is

E_b^i = x E_bp^i + (1 − x) E_bn^i.

By the Malthusian replication dynamic equation [44], the growth probability of a BN i taking the positive cooperation strategy is proportional to x and E_bp^i − E_b^i. Therefore, the replication dynamic equation of BN i interacting with SN j is

ω(x) = x(E_bp^i − E_b^i) = x(1 − x)(((1 − α)c − αR)y + αR − c).

Similarly, the expected payoffs of SN j taking strategies PC or NC respectively are

E_sp^j = x((1 − α)(R − c) + f(ki) + S) + (1 − x)((1 − α)R − c + S/2), and
E_sn^j = x((1 − α)R + f(ki)).

Then the mean payoff of the SN j is

E_s^j = y E_sp^j + (1 − y) E_sn^j.

And therefore, the replication dynamic equation of the SN j interacting with BN i is

ξ(y) = y(E_sp^j − E_s^j) = y(1 − y)((αc + S/2 − (1 − α)R)x + (1 − α)R + S/2 − c).

By the above analysis, the two-dimensional dynamic system L formed by j ∈ SN and i ∈ BN is

L :  ω(x) = x(1 − x)(((1 − α)c − αR)y + αR − c),
     ξ(y) = y(1 − y)((αc + S/2 − (1 − α)R)x + (1 − α)R + S/2 − c).   (8)

Let (x, y) be the variable of L. Then the local equilibrium points of L are (0, 0), (1, 0), (0, 1), (1, 1) and (x∗, y∗), where x∗ = ((1 − α)R + S/2 − c)/((1 − α)R − αc − S/2) and y∗ = (αR − c)/(αR − (1 − α)c). We discuss the stable points by the two equations of the system L.

Case 1. By the partial derivative dω(x)/dx = (1 − 2x)(αR − c − (αR − (1 − α)c)y) in the system L, it is found that: if y > (αR − c)/(αR − (1 − α)c), then dω(x)/dx|x=0 < 0 and dω(x)/dx|x=1 > 0, so x = 0 is an evolutionary stable point of BN; and if y < (αR − c)/(αR − (1 − α)c), then dω(x)/dx|x=0 > 0 and dω(x)/dx|x=1 < 0, so x = 1 is an evolutionary stable point of BN.

Case 2. By the partial derivative dξ(y)/dy = (1 − 2y)((1 − α)R + S/2 − c − ((1 − α)R − αc − S/2)x) in the system L and the payoff matrix, there are two subcases based on the signs of (1 − α)R + S/2 − c and (1 − α)R − αc − S/2.

Case 2.1. (1 − α)R + S/2 − c > 0 in Table 1. Then, we discuss the sign of (1 − α)R − αc − S/2.

Case 2.1.1. If (1 − α)R − αc − S/2 < 0, then (1 − α)R + S/2 − c > (1 − α)R − αc − S/2 and ((1 − α)R + S/2 − c − ((1 − α)R − αc − S/2)x) > 0 hold. Hence dξ(y)/dy|y=1 < 0 and dξ(y)/dy|y=0 > 0. So y = 1 is an evolutionary stable point of SN.

Case 2.1.2. If (1 − α)R − αc − S/2 > 0 and (1 − α)R + S/2 − c > (1 − α)R − αc − S/2 holds, then dξ(y)/dy|y=1 < 0 and dξ(y)/dy|y=0 > 0. So y = 1 is also an evolutionary stable point of SN.

Case 2.1.3. If (1 − α)R − αc − S/2 > 0 and (1 − α)R + S/2 − c < (1 − α)R − αc − S/2: if x > ((1 − α)R + S/2 − c)/((1 − α)R − αc − S/2), then dξ(y)/dy|y=0 < 0 and dξ(y)/dy|y=1 > 0, so y = 0 is an evolutionary stable point of SN; if x < ((1 − α)R + S/2 − c)/((1 − α)R − αc − S/2), then dξ(y)/dy|y=0 > 0 and dξ(y)/dy|y=1 < 0, so y = 1 is the evolutionary stable point of SN.

The evolving traces of the player to his stable points are illustrated by the phase diagrams shown in Fig. 1. Let the point (x, y) be any point in the region [0, 1] × [0, 1]. The four corners of this region and (x∗, y∗) are equilibria. Fig. 1(a) shows that if y < y∗, x will evolve to 1; if y > y∗, x will evolve to 0. The evolving trend of y is shown in Fig. 1(b). The dynamic process of the SN vs. BN game can be described by Fig. 1(c), and (x∗, y∗) falls in the square area of [0, 1] × [0, 1]. If the initial state is in part I, the system will converge to the stable point (0, 1); if the initial state is in part II, the system will first evolve to (x∗, y∗), then turn to (0, 1) or (1, 0); if the initial state is in part III, the system will first evolve to (x∗, y∗), then turn to (0, 1) or (1, 0). If the initial state is in part IV, the system will converge to the stable point (1, 0). From the phase perspective, the evolutionary stable point is (0, 1) or (1, 0), and (x∗, y∗) is not a stable point.

Case 2.2. (1 − α)R + S/2 − c < 0 in Table 1. The discussion is similar to Case 2.1.

Case 2.2.1. If (1 − α)R + S/2 − c < (1 − α)R − αc − S/2, then ((1 − α)R + S/2 − c − ((1 − α)R − αc − S/2)x) < 0 holds. Hence dξ(y)/dy|y=1 > 0 and dξ(y)/dy|y=0 < 0. So y = 0 is an evolutionary stable point of SN.

Case 2.2.2. If (1 − α)R + S/2 − c > (1 − α)R − αc − S/2: if x > ((1 − α)R + S/2 − c)/((1 − α)R − αc − S/2), then dξ(y)/dy|y=0 > 0 and dξ(y)/dy|y=1 < 0, so y = 1 is an evolutionary stable point of SN; if x < ((1 − α)R + S/2 − c)/((1 − α)R − αc − S/2), then dξ(y)/dy|y=0 < 0 and dξ(y)/dy|y=1 > 0, so y = 0 is the evolutionary stable point of SN. In this circumstance, there is no evolutionary stable point, because for x > x∗, y → 1; for x < x∗, y → 0; and for y > y∗, x → 0; for y < y∗, x → 1, so the trend of (x, y) rotates around (x∗, y∗).

The evolving traces of the player to his stable points are illustrated by the phase diagrams shown in Fig. 2. Let the point (x, y) be any point in the region [0, 1] × [0, 1]. The four corners of this region and (x∗, y∗) are equilibria. Fig. 2(a) shows that if y < y∗, x will evolve to 1; if y > y∗, x will evolve to 0. The evolving trend of y is shown in Fig. 2(b). The dynamic process of the SN vs. BN game can be described by Fig. 2(c), and (x∗, y∗) falls in the square area of [0, 1] × [0, 1]. From Fig. 2(c), we can see that there is no evolutionary stable point. By the above analysis, the theorem holds. □

Fig. 1. The phase diagrams of the players' evolution for the model of Table 1 in Case 2.1.

Fig. 2. The phase diagrams of the players' evolution for the model of Table 1 in Case 2.2.
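The trajectories sketched in Figs. 1 and 2 (and explored numerically in Section 3.2 below) can be reproduced by integrating the replicator system L of Eq. (8) directly. The following Python sketch uses a simple forward-Euler step and the parameter values quoted in the text; it is an illustrative reimplementation, not the authors' code, and it also evaluates the interior equilibrium, e.g. (x∗, y∗) = (1/3, 1/3) for α = 0.5, R = 1, c = 0.4, S = 0.

```python
def replicator_step(x, y, alpha, R, c, S, dt=0.01):
    """One Euler step of system L in Eq. (8)."""
    dx = x * (1 - x) * (((1 - alpha) * c - alpha * R) * y + alpha * R - c)
    dy = y * (1 - y) * ((alpha * c + S / 2 - (1 - alpha) * R) * x
                        + (1 - alpha) * R + S / 2 - c)
    return x + dt * dx, y + dt * dy

def trajectory(x0, y0, alpha=0.5, R=1.0, c=0.4, S=0.0, steps=5000):
    x, y = x0, y0
    for _ in range(steps):
        x, y = replicator_step(x, y, alpha, R, c, S)
    return x, y

# Interior equilibrium of Eq. (8) for the quoted parameters:
alpha, R, c, S = 0.5, 1.0, 0.4, 0.0
x_star = ((1 - alpha) * R + S / 2 - c) / ((1 - alpha) * R - alpha * c - S / 2)
y_star = (alpha * R - c) / (alpha * R - (1 - alpha) * c)
print(x_star, y_star)          # 1/3, 1/3 as quoted in Fig. 3
print(trajectory(0.95, 0.10))  # an initial state in region IV drifts toward (1, 0)
```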

3.2. Numerical simulations of influence factors on stable points

For evolutionary game models of the dynamic mechanism of cooperative behaviors, numerical simulation, Monte Carlo simulation and the replication dynamic equation have been used to analyze stability and present optimal strategies [23,39,45,46]. Numerical simulation is one of the computational methods to implement numerical calculation for individuals of any structure; Monte Carlo (MC) simulation is a numerical simulation method which can be adopted for computing the quantities of interest. The underlying idea of MC is to generate a configuration subset from the whole phase space that is representative of the entire ensemble. We use replication dynamics to analyze the system through the equivalence of payoff and fitness, that is, the relation between the individual payoff and its strategy. In this work, we use replication dynamics and numerical simulations to simulate the two-dimensional dynamic system L of Eq. (8); the method of MC is used as a complementary approach, and the corresponding analysis is given in Fig. 16 of Appendix B.

To verify the feasibility of the model in Table 1 and the influences of the parameters, we fix three of the four parameters to discuss the dynamic trends of (x, y) toward stable points in the system L, and arrows show the trace of (x, y). Figs. 3–6 show the influences of the parameters S and α, and all results are iterated 50 times independently.

3.2.1. The dynamic impact of S

We suppose that S flows from the big degree node to the SN. In order to detect the effect of S, let S be a variable. According to the theorem of BN vs. SN, we discuss two cases: S < (1 − α)c and S ≥ (1 − α)c. We set α = 0.5, c = 0.4, R = 1 for system L, and the evolutionary processes of L in the two cases are shown in Figs. 3 and 4 respectively, where the horizontal axis x and the vertical axis y represent the probabilities of the BN and the SN taking the positive cooperation strategy, respectively.

Fig. 3 shows the results when S < (1 − α)c. Here S varies from 0 to 0.16. In Fig. 3(a), the initial state is (x0, y0) = (0.1, 0.1). When S = 0, the values of x and y increase slowly, which means that all nodes prefer to improve their probability of positive cooperation. The (x, y) will evolve to (0, 1) when S takes other values. In Fig. 3(b), the initial state is (x0, y0) = (0.1, 0.95). When S varies from 0 to 0.16, the (x, y) will evolve to (0, 1). In Fig. 3(d), the initial state is (x0, y0) = (0.95, 0.95). When S = 0, the values of x and y decrease slowly, which means that all nodes prefer to reduce their probability of positive cooperation. When S takes other values, the (x, y) will also evolve to (0, 1). So when the initial state is at (0.1, 0.1), (0.1, 0.95) or (0.95, 0.95), the (x, y) could evolve to the stable point (0, 1) (shown in Figs. 3(a), 3(b) and 3(d)). The SN prefers positive cooperation, while the BN prefers negative cooperation. When S = 0 in Figs. 3(a) and 3(d), the evolution tends to the saddle point (x∗, y∗). When the initial state is at (0.95, 0.1), the evolutionary stable point is (1, 0), as shown in Fig. 3(c). If the BN initially has a high probability of positive cooperation and the SN a low one, then the system will evolve to (1, 0): the BN will hold a high probability of positive cooperation, and the SN decreases its probability to 0. We can see that (0, 1) or (1, 0) is the evolutionary stable point when S < (1 − α)c and (x∗, y∗) = (1/3, 1/3) is the saddle point, which is completely consistent with the theorem of BN vs. SN.

Fig. 4 shows the results when S ≥ (1 − α)c. In Fig. 4(a), the initial state is (x0, y0) = (0.1, 0.1). When S varies, the curves show little difference. These curves tend to (0, 1), which means the SN is prone to positively cooperate, and the BN tends to negatively cooperate. Letting the initial state be (x0, y0) = (0.1, 0.95) as shown in Fig. 4(b), the curves all tend to (0, 1) after 50 iterations whatever S is. In this case, the SN prefers to positively cooperate, and the BN is averse to cooperation with the SN.

In Fig. 4(c), the initial state is (x0, y0) = (0.95, 0.1) and all the curves evolve to the stable point (0, 1). When S = 0.4, S = 0.6, S = 0.8, and S = 1, y increases rapidly with little variation of x, and converges to 1 before x decreases to 0.8, which means that the SN tends to positively cooperate with the BN early, once the BN shows its willingness to positively cooperate. There is a special case: when S = 0.2, α = 0.5, R = 1, c = 0.4, then dω(x)/dx = (1 − 2x)(αR − c − (αR − (1 − α)c)y) = (1 − 2x)(0.1 − 0.3y) and dξ(y)/dy = (1 − 2y)((1 − α)R + S/2 − c − ((1 − α)R − αc − S/2)x) = (1 − 2y)(0.2 − 0.2x). With the initial state (x0, y0) = (0.95, 0.1), y0 = 0.1 < 0.1/0.3, and based on dω(x)/dx < 0, x → 1.

Fig. 3. Simulations on (x, y) in system L, where S is a variable and S < (1 − α)c. The solid point marks the value of the equilibrium point (x∗, y∗) in the region 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1. For example, when S = 0, (x∗, y∗) = (1/3, 1/3).

Fig. 4. Simulations on (x, y) in system L, where S is a variable and S ≥ (1 − α)c. The solid point marks the value of the equilibrium point (x∗, y∗) in the region 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1. For example, when S = 0.2, (x∗, y∗) = (1, 1/3); when S = 0.4, 0.6, 0.8, 1, the values of (x∗, y∗) exceed the region 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1.

But when x reaches 1, dξ(y)/dy = 0, so at this time it is impossible to judge whether y should go in the direction of 0 or 1, and it remains stationary at (1, 0.1). But (1, 0.1) is not an equilibrium point, so it is not an evolutionary stable point. Letting the initial state be (x0, y0) = (0.95, 0.95) as shown in Fig. 4(d), the curves all tend to (0, 1) after 50 iterations when S varies from 0.2 to 1. Based on the above analysis, when S ≥ (1 − α)c, if the system L has an evolutionary stable point, the stable point of evolution is (0, 1), which is completely consistent with the theorem of BN vs. SN.

By Figs. 3 and 4, if S < (1 − α)c, (0, 1) or (1, 0) is the evolutionary stable point and there is a saddle point (x∗, y∗). When S ≥ (1 − α)c, the SN will take the positive cooperation strategy. Even if the SN can only obtain a small value of tacit knowledge spillover, the SN is willing to choose the positive cooperation strategy whatever strategy the BN adopts. And when S ≥ (1 − α)c, the evolutionary stability strategy of (x, y) is (0, 1) no matter what the initial probability is. In summary, the simulation results for the two cases of S are completely consistent with the theorem of BN vs. SN. More simulations are shown in Figs. 11 and 12 in Appendix A.

3.2.2. The dynamic impact of α

We detect the impact of the distribution coefficient α on the stable evolutionary points. Figs. 5 and 6 show the evolutionary process of L when α varies.

First we set S = 0, R = 1, c = 0.4. By Fig. 5, we can see that (0, 1) or (1, 0) is the evolutionary stable point when S < (1 − α)c. In Fig. 5(a), the initial state is (x0, y0) = (0.1, 0.1). When α is small, such as α = 0.01, α = 0.2, α = 0.4, the BN does not like positive cooperation, while the SN likes positive cooperation, and (x, y) tends to (0, 1). On the contrary, when α is big, such as α = 0.6, 0.8, 0.99, (x, y) tends to (1, 0), the BN takes positive cooperation, but the SN takes negative cooperation. A special case is α = 0.6 in Fig. 5(b): when S = 0, R = 1, c = 0.4, then dω(x)/dx = (1 − 2x)(αR − c − (αR − (1 − α)c)y) = (1 − 2x)(0.2 − 0.44y) and dξ(y)/dy = (1 − 2y)((1 − α)R + S/2 − c − ((1 − α)R − αc − S/2)x) = (1 − 2y)(−0.16x). The initial state is (x0, y0) = (0.1, 0.95), y0 = 0.95 > 0.2/0.44, so x → 0 based on dω(x)/dx < 0. But when x reaches 0, dξ(y)/dy = 0, so at this time it is impossible to judge whether y should go in the direction of 0 or 1, and it remains stationary at (0, 0.95). But (0, 0.95) is not an equilibrium point, so it is not an evolutionary stable point. In Fig. 5(c) there is also a special case, α = 0.4: then dω(x)/dx = (1 − 2x)(αR − c − (αR − (1 − α)c)y) = (1 − 2x)(−0.16y) and dξ(y)/dy = (1 − 2y)((1 − α)R + S/2 − c − ((1 − α)R − αc − S/2)x) = (1 − 2y)(0.2 − 0.44x). The initial state is (x0, y0) = (0.95, 0.1), x0 = 0.95 > 0.2/0.44, and based on dξ(y)/dy < 0, y → 0. But when y reaches 0, dω(x)/dx = 0, so at this time it is impossible to judge whether x should go in the direction of 0 or 1, and it remains stationary at (0.95, 0). In the same way, it is not an evolutionary stable point. If there is a stable point, Figs. 5(b)–5(d) show that the evolutionary stable point is (0, 1) when α is small; otherwise, the evolutionary stable point is (1, 0) if α is big enough. The simulation results are consistent with the theorem of BN vs. SN.

Then we consider the circumstance of S > (1 − α)c. Let α vary from 1 − S/c to 1, and S = 0.4, c = 0.4, R = 1. The setting of these parameters ensures that the tacit knowledge can spill over from the BN to the SN.

In Fig. 6(a), the initial state is (x0, y0) = (0.1, 0.1). When α increases from 0.01 through α = 0.2 to α = 0.4, y rapidly increases to 1, while x stays less than 0.1. When α = 0.8 and α = 0.99, x rapidly increases to 1 and then decreases rapidly after y increases to 0.8. This means the BN at first positively cooperates due to the high proportion of excess return, but tends to negatively cooperate after many rounds of games. In Fig. 6(b), (x0, y0) = (0.1, 0.95), and five of the six curves of different α tend to (0, 1), which is not much different from the initial state. Only the curve of α = 0.99 has a similar trace to Fig. 6(a). When the SN's cooperation intention is high, the BN's intention will decrease, and when the SN's cooperation intention is low, the BN's intention will increase. When the SN's cooperation intention is between [0.2, 0.9], the BN will keep unchanged. This shows that the changes of the cooperation intention of BN and SN are affected not only by the ratio of profit distribution, but also by the cooperation intention of the opponent.

Fig. 5. Simulations on (x, y) in system L, where S = 0, R = 1, c = 0.4 and S < (1 − α )c, α is a variable.

Fig. 6. Simulations on (x, y) in system L, where S = 0.4, R = 1, c = 0.4, α is a variable and S > (1 − α )c. The solid points are the values of the equilibrium point
(x∗ , y∗ ) in the region of 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1 with α varying. For example, α = 0.99, (x∗ , y∗ ) = (95/288, 295/493).

Fig. 7. Variations of p̄, Ū and g with S varying, where α = 0.5, R = 1 and c = 0.4. The trends of the three panels coincide. They are all increasing with the increase
of S. p̄ and Ū are always greater than zeros. While the ratio of the player and its neighbors all positive cooperation is going to less than 0.1 rapidly when S = 0,
which means the proportion of players who are willing to positively cooperate with their neighbors (whose positive cooperation probability is greater than 0.5) will
tend to less than 0.1 in the absence of tacit knowledge.

Fig. 8. The evaluations of the average probability of positive cooperation on GR-QC, where S is a variable, and α = 0.5, R = 1, c = 0.4. The horizontal axes in panels (a) and (b) are degrees and reputations respectively. The curves of average probabilities in the two panels are clearly distinct and steady when the degrees are less than 9 or the reputations are less than 0.08, respectively, while the curves decrease and mix as the value of S decreases, and finally they tend to steady when the degrees are more than 40. The reputation of a node is defined in Section 2.1.

Therefore, based on the changing trend of α = 0.99 in Figs. 6(a) and 6(b), system L dynamically maintains the cooperation ratio of BN and SN. In Fig. 6(c), the initial state is (x0, y0) = (0.95, 0.1). In the three cases of α = 0.4, α = 0.6, α = 0.8, the speed of y increasing to 1 is faster than in the two cases of α = 0.01 and 0.2. When the BN has a higher proportion of excess return and a higher positive cooperation probability, so does the payoff of the BN. Hence the SN would follow the strategy of the BN, increasing its

Fig. 9. The values of p̄, Ū and g are varying with α , where R = 1, S = 0.4 and c = 0.4. The results are the average of 4000 iterations.

Fig. 10. The evolutions of the average probability of positive cooperation on the degree and the reputation score of GR-QC respectively when α is a variable, and
S = 0.4, R = 1, c = 0.4. The curves in two panels are distinct when the degrees are less than 9 or the reputations are less than 0.08 respectively, where the
reputation score of the node is defined in Table 2.

Fig. 11. α = 0.8, R = 1, c = 0.4 and S is a variable. In the case of S < (1 − α )c, the distribution of excess return to big degree nodes is high, and the cost is lower
than excess return. In the case of S < (1 − α )c, (x, y) tends to (1, 0). The small degree nodes do not want to positively cooperate with big degree nodes, and the big
degree nodes take contrary strategy.

Fig. 12. α = 0.8, R = 1, c = 0.4 and S > (1 − α)c. S is a variable. The small degree nodes prefer to positively cooperate with big degree nodes.

cooperation probability over time. As time evolves, the BN finds that even while its positive cooperation probability is decreasing, it still gets a higher payoff. Then the BN decreases its positive cooperation probability until it reaches 0. In Fig. 6(d), the initial state is (x0, y0) = (0.95, 0.95). When α = 0.01, the values of x and y are higher than 0.9 after many generations. But when α = 0.2, α = 0.4, α = 0.6, α = 0.8, the values of x decline to 0 after about 50 generations, which might show that the BN does not like to positively cooperate with the SN.

Fig. 13. S is a variable, α = 0.5, R = 1, c = 0.1. (a)–(d) are the case of S < (1 − α )c, and (e)–(h) are the case of S > (1 − α )c. SNs are more prefer to positively
cooperate with BNs when S > (1 − α )c in panels (e)–(h), (x, y) tends to (0, 1).

Fig. 14. S is a variable, α = 0.5, R = 0.85, c = 0.4. Panels (a)–(d) are the case of S < (1 − α)c, and (e)–(h) are the case of S > (1 − α)c, respectively. Results in panels (a)–(d) are similar to Fig. 3, where the solid points are the values of an equilibrium point (x∗, y∗) in the region 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1; for example, when S = 0, (x∗, y∗) = (1/9, 1/9). Results in panels (e)–(h) are very similar to the results in Fig. 4.

Figs. 6(a) and 6(b) imply that the best strategy is not (0, 1) when α = 0.8, while the best strategy is (0, 1) in Figs. 6(c) and 6(d). So there is no evolutionary stable point when α = 0.8. Fig. 6 shows that (x, y) rotates around (x∗, y∗) = (95/288, 295/493) when α = 0.99. Hence, there is no evolutionary stable point. Therefore, when S ≥ (1 − α)c, (0, 1) is the only evolutionary stable point if system L has one. By the analysis of Fig. 6, it is found that the SN is more willing to positively cooperate than the BN. It is also shown that under certain circumstances the BN is more prone to positively cooperate when the proportion of excess return α for the BN is high enough. The simulation results are consistent with the conclusion of the theorem of BN vs. SN. More comparative simulation results about the theorem of BN vs. SN are presented in Figs. 11–15 in Appendix A.

To sum up, when S < (1 − α)c, (1, 0) or (0, 1) is the evolutionary stable point, and neither (0, 0) nor (1, 1) is a stable point; when S ≥ (1 − α)c, (0, 1) is the unique evolutionary stable point if system L has one. So the simulation results are consistent with the theorem of BN vs. SN.

4. Experiments on real cooperation networks

In order to study the conditions under which a node has the highest probability of positive cooperation, the best profits and the most neighbors with positive cooperation, we examine the influence factors by the metrics p̄, Ū and g on the real data GR-QC. In addition, we also explore the influence of the unequal position caused by the network structure on the positive cooperation probability of nodes.

Fig. 15. α is a variable. Panels (a)–(h) and (i)–(p) show the results in cases of S < (1 −α )c and S > (1 −α )c respectively. (a)–(d) S = 0, R = 1, c = 0.1, (x, y) evolves to
(0, 1) or (1, 0); (e)–(h) S = 0, R = 0.9, c = 0.4, (x, y) evolves to (0, 1) or (1, 0); (i)–(l) S = 0.4, R = 1, c = 0.1, (x, y) evolves to (0, 1); (m)–(p) S = 0.4, R = 0.9, c = 0.4,
(x, y) displays similar to Fig. 6.
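Before turning to the experiments on the real network, the evolution procedure of Section 2.4 (Eqs. (3)–(7)) can be sketched end to end. The Python below is a simplified, illustrative implementation under two explicit assumptions that the paper does not spell out: games are played only on BN–SN edges, and a small Barabási–Albert graph stands in for the real cooperation network. All function names are ours, not the authors'.

```python
import math
import random
import networkx as nx

ALPHA, R, C, S, LAM, BETA = 0.5, 1.0, 0.4, 0.4, 0.1, 0.01

def reputation(k):
    return 1.0 / (1.0 + math.exp(-(k - 20.0) / 5.0))

def edge_payoff(xi, xj, kj, i_is_big):
    """Expected payoff of player i against neighbor j from the Table 1 matrix."""
    fj = reputation(kj)
    if i_is_big:       # i plays the BN role
        return (xi * xj * (ALPHA * (R - C) + fj)
                + xi * (1 - xj) * (ALPHA * R - C)
                + (1 - xi) * xj * (ALPHA * R + fj))
    return (xi * xj * ((1 - ALPHA) * (R - C) + fj + S)   # i plays the SN role
            + xi * (1 - xj) * ((1 - ALPHA) * R + S / 2 - C)
            + (1 - xi) * xj * ((1 - ALPHA) * R + fj))

def metrics(G, x, payoff):
    n, m = G.number_of_nodes(), G.number_of_edges()
    p_bar = sum(x.values()) / n                                          # Eq. (5)
    u_bar = sum(payoff.values()) / (2 * m)                               # Eq. (6)
    g = sum(1 for u, v in G.edges() if x[u] > 0.5 and x[v] > 0.5) / m    # Eq. (7)
    return p_bar, u_bar, g

def sweep(G, x, big):
    deg = dict(G.degree())
    # Expected payoff, summed over unequal-status neighbors only (our assumption).
    payoff = {i: sum(edge_payoff(x[i], x[j], deg[j], big[i])
                     for j in G[i] if big[i] != big[j]) for i in G}
    fit = {i: BETA * payoff[i] + (1 - BETA) * payoff[i] / deg[i] for i in G}
    new_x = dict(x)
    for i in G:
        j = random.choice(list(G[i]))
        z = max(min((fit[i] - fit[j]) / LAM, 700.0), -700.0)
        p = 1.0 / (1.0 + math.exp(z))                                    # Fermi rule, Eq. (4)
        new_x[i] = x[i] * (1 - p) + x[j] * p                             # Eq. (3)
    return new_x, payoff

G = nx.barabasi_albert_graph(500, 3, seed=0)          # stand-in for the real network
avg_k = 2 * G.number_of_edges() / G.number_of_nodes()
big = {v: d > avg_k for v, d in G.degree()}
x = {v: random.random() for v in G}
for t in range(400):
    x, payoff = sweep(G, x, big)
print(metrics(G, x, payoff))
```

Swapping in the GR-QC graph loaded in Section 4.1 would allow the p̄, Ū and g trends discussed in Figs. 7–10 to be examined under the same sketch.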

4.1. Data

The data, taken from SNAP (Stanford Network Analysis Project), is the collaboration relationship between scholars of papers published in the discipline of General Relativity and Quantum Cosmology (GR-QC) [47]. Authors and relations are represented by nodes and links in the GR-QC network, respectively. Nodes with only one neighbor, together with their links, are deleted. Finally, |V| = 2434 and |E| = 10939 for GR-QC.
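A minimal preprocessing sketch for this data set is given below, assuming a local copy of the SNAP edge list (the file name ca-GrQc.txt is an assumption, and the single-pass pruning of degree-one nodes is our reading of the description above).

```python
import networkx as nx

# Assumed local copy of the SNAP GR-QC edge list (file name is an assumption).
G = nx.read_edgelist("ca-GrQc.txt", comments="#", nodetype=int)
G.remove_edges_from(nx.selfloop_edges(G))

# Drop nodes that have only one neighbor (single pass), as described in Section 4.1.
pendants = [v for v, d in G.degree() if d <= 1]
G.remove_nodes_from(pendants)

n, m = G.number_of_nodes(), G.number_of_edges()
avg_k = 2.0 * m / n
big = [v for v, d in G.degree() if d > avg_k]
print(n, m, avg_k, len(big))   # the paper reports n = 2434, m = 10939, average degree ~8.99
```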

Table 2
The degrees with their intervals, and the simulation of the reputation function f(ki).

According to the average degree of nodes of GR-QC, 8.9885 in Table 2, scholars are parted into two parts: the BN set is the node set whose degrees are greater than 8, and the remainder is the SN set. There are 1724 small degree nodes, so more than 70% of the total nodes are below the average, and fewer than 30% of the nodes, namely 710 big degree nodes, have degrees bigger than the average. In Table 2, the small node set is taken into V1, and V2 to V8 are the big nodes, where V2 ∪ V3 is the largest portion of the BN. That is to say, most degrees of the BN are between 9 and 30. The simulation of the reputation function is shown in Table 2. When the node degree is smaller than 9, the reputation score is very small, and f(ki) increases with the growth of the node's degree. The trend is especially significant, with the degree increasing rapidly, in the interval from 10 to 30. When the degree is higher than 40, the increasing trend becomes steady. When the degree of a node reaches 40, it means that the scholar has reached the academic peak position and has a high academic influence, so his reputation score is high. But when the number of a node's cooperating authors reaches a threshold, its reputation influence also reaches a threshold.

4.2. Impacts of S and the node degree on the metric

It is believed that BNs in a cooperation network are eminent scholars who have their own research fields. When authors cooperate, there will be tacit knowledge spillover from the BN to the SN. So SNs prefer to cooperate with BNs, because the tacit knowledge would enhance their research ability. In this way, the behavior of nodes will display different phenomena when the tacit knowledge S varies in Table 1.

Fig. 7 shows the influence of S: the values of p̄, Ū and g all increase as S increases. When S = 0, the average positive cooperation probability p̄ is close to 0.35 after 400 evolution steps whatever the degrees of nodes. This phenomenon can also be seen in Figs. 8(a) and 8(b). It is because the absence of the tacit knowledge value does not appeal to small degree nodes to positively cooperate, and the BN has not much interest in positively cooperating with the SN. With S increasing from 0 to 0.6, p̄ grows higher than 0.7, and g increases to oscillate around 0.7. When S reaches 1, Ū reaches the highest value, followed by p̄ which can reach 0.75 and g which can reach 0.72. The effects of S on p̄ and g show little difference when S increases from 0.6 to 1.

To further discuss the consistency of the effect on the average probability of positive cooperation, we list Figs. 8(a) and 8(b) to show the influence of structures. Fig. 8(a) shows that with the growth of S, the average positive cooperation probability grows too when the degrees of nodes are not greater than 30, while the curves of average positive cooperation probability are disordered when the degrees are greater than 30. But for the same S, the average probability of positive cooperation of the SN is overall higher than that of the BN, and the same holds for the reputations. It means that when there is more tacit knowledge or a possibility of fast growth, SNs display more willingness to cooperate, while BNs have no obvious intention of cooperation since the reputations of the BNs are high and they have many more resources and followers. From the third column in Table 2, the sum of the nodes' degrees, it is found that BNs have more neighbors than SNs. The number of BNs is small but the number of their neighbors is large. That causes BNs not to prefer to cooperate with SNs. However, with S increasing, the probabilities of nodes to positively cooperate show overall upward trends.

Fig. 16. Simulations on the system L by the MC method. Panels (a) S = 0.02, α = 0.5, c = 0.4, R = 1 and (b) S = 0.6, α = 0.5, c = 0.4, R = 1: (0, 1) or (1, 0) is the evolutionary stable strategy profile when S < (1 − α)c, and so is (0, 1) when S > (1 − α)c. By panels (c) S = 0, α = 0.2, c = 0.4, R = 1 and (d) S = 0, α = 0.8, c = 0.4, R = 1, (0, 1) or (1, 0) is the evolutionary stable strategy set when S < (1 − α)c. In panels (e) S = 0.4, α = 0.2, c = 0.4, R = 1 and (f) S = 0.4, α = 0.6, c = 0.4, R = 1, (0, 1) is the evolutionary stable point when S > (1 − α)c.

4.3. Impacts of α and the node degree on the metric

The two sides of the BN vs. SN model in Table 1 have many disparities in resources or reputations, so the proportions of the return of the two sides might be unequal when the SN plays the game with the BN. How to determine the proportion of payoffs is another important problem. Hence, it needs to be detected which side depends on the proportion.

The influence of the proportion α on p̄, Ū and g is shown in Fig. 9. In Fig. 9(a), the average probability p̄ increases with α increasing. In Section 2.2, we made a basic assumption about this model: positive cooperation is to increase the profits of collaborators. According to this basic assumption and Fig. 9(b), the average payoff Ū shows that choosing α = 0.99 can maximize the network's payoff. And, according to Fig. 9(c), it is found that the proportion g of two connected nodes both cooperating positively at α = 0.99 is large. What is more, when α = 0.8 is selected, the

Fig. 17. Simulation under β = 0.1 in the Fermi rule in Eq. (4). Results on p̄, Ū, g with S or α varying respectively are very similar to Figs. 7 to 10.

average payoff Ū of the system is almost the same as that of α = 0.99, and the largest g can also be obtained. Taking all of this into consideration, when α = 0.99, almost all of the return is allocated to the BN, while when α = 0.8, 20% of the return can be allocated to the SN. So α = 0.8 is a reasonable profile which can not only achieve a high level of system payoff, but also ensure that the BN and the SN both have considerable returns. That is, when the distribution ratio of the BN to the excess return reaches 80%, the partners in the system can achieve a reasonable income level.

Fig. 10(a) shows the evolution of the average probability that nodes with different degrees positively cooperate with others, based on the different proportions of excess return. When α = 0.01, the average probability of the SN is higher than 0.9, but the probability of the BN tends to 0. Because the excess return R assigned to the BN is too low, the BN is not willing to positively cooperate. And BNs have many small degree neighbors willing to positively cooperate, so the BNs get a high network payoff by free-riding. The average probabilities of SNs in Figs. 10(a) and 10(b) show declining trends when α increases from 0.01 to 0.99, while the average probabilities of BNs, especially in the set V2 ∪ V3 shown in Table 2, display increasing trends overall. By Figs. 10(a) and 10(b), the average probabilities of nodes with small reputations or of the SNs are higher than 0.5 and overall higher than the BNs', which means that the SNs and the low reputation score nodes are willing to positively cooperate whatever the proportion distributed to them.

Fig. 18. Simulation under β = 0.5 in the Fermi rule in Eq. (4). Results on p̄, Ū, g with S or α varying respectively are very similar to Figs. 7 to 10.

5. Discussion and conclusion

The theoretical significance of this work is to investigate the evolutionary game based on unequal status players. This evolutionary game model provides an effective tool to understand the cooperative behaviors between two unequal status players. The controllable influence factors on cooperative behaviors are also analyzed, such as the distribution of excess return and tacit knowledge. The positive or negative sentiments behind cooperative behaviors should be recognized correctly. In the literature, [48] and [49] presented cooperation intention recognition, which promoted the emergence of cooperation. On the basis of previous studies, we set up nodes to identify their neighbors' cooperative intentions. We use the confidence level [49], which defines the confidence of participants in correctly predicting the intention of partners. If it is greater than a given threshold ρ, he/she will positively cooperate if the intention of the partner is identified as positive cooperation in the current interaction; otherwise, he/she takes negative cooperation. Given a threshold ρ, the values of p̄, Ū or g show similar trends when S or α varies. The average probabilities of positive cooperation are relatively higher when ρ is bigger. More simulation results on the cooperation intention recognition for a given threshold ρ are shown in Fig. 22 of Appendix F.

By the analysis of the Malthusian replication dynamic equations and the two-dimensional dynamic system L, five equilibrium points and two evolutionary stable points are analyzed, and the probabilities of the positive cooperation strategy of this game model are simulated and quantified by the numerical simulations and experiments on real data under three metrics: the average positive cooperation probability p̄, the average payoff Ū and the proportion g. The numerical simulation on the novel evolutionary game model shows that the two strategies, positive cooperation and negative cooperation, coexist in the game. It also shows

Fig. 19. Implemented simulations of the irrationality parameter λ. Simulations on p̄, Ū, g with S or α varying respectively; λ = 1 for panels (a)–(f), and λ = 0.01 for panels (g)–(l). The results are very similar to Figs. 7 to 10.

that the SNs are more prone to choose positive cooperation than BNs, and that the BNs are aroused by a larger distribution of excess return. By computing the real network data of GR-QC, we found that SNs prefer to cooperate with big ones. The results on the real network have shown that a high proportion of excess return can promote BNs to select positive cooperation, but acts negatively on SNs. The excess return allocation for nodes is heterogeneous, which promotes cooperative behavior; that is, high proportions of excess return to BNs would benefit collaboration between unequal nodes. The success of cooperation relies heavily on the correlation between nodes' costs and the payoff allocated to them from the game [50]. If the core nodes focus on maintaining the income balance between the node and its neighbors, the optimal distribution coefficient of payoff can be produced, which minimizes the overall probability of opportunistic or defection behavior [51]. The cooperation level can be improved if players obtain the lowest guarantee of payoff as a protective mechanism [52]. The heterogeneous allocation mechanism enables nodes to form clusters around several rich cooperating neighbors initially [53]. Through the disproportion of excess return, large-scale nodes exert influence to attract other nodes to improve the level of cooperation [15]. Simulations on GR-QC showed that the tacit knowledge has a positive effect on the average probability of cooperation, especially for the SNs, which agrees with previous research [54,55]. In order to prove the generality of the conclusions drawn from the real networks, we compared the simulation results of GR-QC with two other cooperation networks, CondMat and HepTh [47]. The CondMat network has 4002 nodes, including 2989 small degree nodes, and 12755 edges, and the average degree is 6.3743; the HepTh network has 2834 nodes and 6006 edges, with an average degree of 4.2385, and there are 1984 small degree nodes and 850 big degree nodes. We implemented the same experiments on CondMat and HepTh networks

Fig. 20. Simulation results on CondMat network: the variations of p̄, Ū, g with S or α varying respectively, the degree distribution, and the probabilities of positive
cooperations of players.

as GR-QC. The results display similar phenomenon with GR-QC, tacit knowledge and the network structure. However, coopera-
shown in Figs. 20 and 21 of Appendix E. tions in real world are also related to social environments and
In addition, the rationality of players is a question concerned the comprehensive situations of players. The real GR-QC data in
with the evolutionary rules. λ is the irrationality of a player in this paper is static while the strategies in evolutionary games are
the evolutionary rule of Eqs. (3) and (4), where λ → 0 means dynamics. Then the data cannot show the evolution of strategy
that players are rational, otherwise, λ → ∞ shows the player and the evolution process of decision clearly. On the other hand,
will update his strategy randomly regardless his earnings nor his the influence factors, such as the tacit knowledge are assumed
neighbors’, which indicates the player is irrational. That is, the be the value from 0 to 1 which cannot present the real cases. So
larger of λ value, the more randomly when updating strategy. We researches of evolutionary strategies with time series and data
complement cases of λ = 1 and λ = 0.01 in Appendix D, as with real influence factors are some of the future research works.
shown in Figs. 19. When λ = 1, the evolution process needs a lot
of iterations (about 3000 times) to be stable, shown in Figs. 19(a)– Declaration of competing interest
19(f). In the real world, the two sides of the game cannot play so
many games, so it is not appropriate. While results in Figs. 19(g)– The authors declare that they have no known competing finan-
19(l) display good when λ = 0.01. Therefore, it is reasonable to cial interests or personal relationships that could have appeared
take λ = 0.1, which not only ensures the hypothesis of bounded to influence the work reported in this paper.
rationality, but also avoids too many iterations.
The sentiment of cooperation attitude highlights the impor- Acknowledgments
tance in the process collaboration, and sentiment recognition or
emotion processing of players is another important aspect for We would like to thank the anonymous reviewers for the
the closely related task of polarity detection, in which sentiment constructive comments and suggestions, which undoubtedly im-
analysis leverages human–computer interaction [56] and so on. proved the presentation of this paper. We also show our great
We will consider the future work for sentiment analysis on the appreciation to all the authors who collected and shared the data
networked evolutionary game. sets of GR-QC, CondMat and HepTh, we also show our thanks
Limitations of this work are inevitable. Evolving strategies for for National natural Science Foundation of China (No. 71471106)
the model are focused on the distribution of excess return, the supporting partly for this work.

Fig. 21. Simulation results on the HepTh network.

Appendix A

Numerical simulations on (x, y) in the system L are shown in Figs. 11–15 for the two cases S < (1 − α)c and S > (1 − α)c, respectively, where R and c are fixed, S or α is a variable, and arrows show the trends of the evolution.

Appendix B

The numerical simulations on the system L by the MC method are shown in Fig. 16.
There are countless initial states of BN and SN, and four special initial states are simulated in this manuscript. We randomly select 100 combinations of strategy states of BN and SN. The MC method is used to simulate (x, y) in the two-dimensional dynamic system L by iteration.

Appendix C

Comparisons of simulation results under different values of β in the Fermi rule are provided. We simulate GR-QC with different values of β, as shown in Figs. 17 and 18.

Appendix D

Simulations of the irrationality parameter λ are shown in Fig. 19.
The parameter λ, referenced in [57,58], measures the irrationality of a player adapting his strategy based on his payoff and his neighbors'; it is simulated in the evolutionary rule of Eq. (3). λ → 0 means that players are rational, while as λ tends to infinity the player updates his strategy randomly, regardless of his earnings and his neighbors', and the player is irrational. We simulate p̄, Ū and g with the variable S or α when λ = 1 and λ = 0.01, respectively.

Appendix E

Simulation results on the CondMat and HepTh networks are shown in Figs. 20 and 21.
The experiments are complemented on two real networks; the CondMat and HepTh data sets are taken from SNAP [47]. We fix the values of the influence parameters, α = 0.5, c = 0.4, R = 1, and simulate the effects on the two networks in terms of the three metrics p̄, Ū and g.

Appendix F

The simulations of cooperation intention recognition for a given threshold ρ are shown in Fig. 22.
We set different values of the threshold ρ, and the other parameters are the same as in Figs. 7 and 9. The complementary simulations of p̄, Ū and g are shown in Fig. 22: ρ = 0.2, α = 0.5, R = 1, c = 0.4 in Figs. 22(a), (e), (i); ρ = 0.8, α = 0.5, R = 1, c = 0.4 in Figs. 22(b), (f), (j); ρ = 0.2, S = 0.4, R = 1, c = 0.4 in Figs. 22(c), (g), (k); ρ = 0.8, S = 0.4, R = 1, c = 0.4 in Figs. 22(d), (h), (l). These figures display the results of p̄, Ū and g with S or α varying when the threshold is ρ = 0.2 and ρ = 0.8, respectively. Most values of p̄, Ū and g increase as S increases; that is, a high tacit knowledge value S can enhance the positive cooperation probability, the payoffs and the connections between nodes. The results are consistent with Fig. 7, and similar analyses on α also agree with the results in Fig. 9.
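Appendix F classifies cooperation intentions against a threshold ρ. A minimal sketch of that thresholding step is given below; the score each player is compared against (for example, a reputation- or history-based estimate of the probability of positive cooperation) is a hypothetical placeholder, since its exact definition appears earlier in the paper and is not restated here.

```python
def recognize_intentions(scores, rho):
    """Label a player's intention as positive cooperation when the
    estimated score reaches the threshold rho.

    `scores` maps player id -> a value in [0, 1]; how such a score is
    computed in the paper is not restated here, so this is only an
    illustrative placeholder.
    """
    return {player: score >= rho for player, score in scores.items()}

# Sweeping the threshold as in Fig. 22 (rho = 0.2 versus rho = 0.8),
# with purely hypothetical scores for three players:
scores = {1: 0.15, 2: 0.55, 3: 0.90}
for rho in (0.2, 0.8):
    print(rho, recognize_intentions(scores, rho))
```

A larger ρ makes the recognizer stricter, so fewer players are treated as positive cooperators, which is the effect compared across the columns of Fig. 22.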

Fig. 22. Parameters in the first two columns are α = 0.5, R = 1, c = 0.4, and in the last two columns S = 0.4, R = 1, c = 0.4; ρ = 0.2 in the first and third columns, ρ = 0.8 in the second and fourth.
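Appendix B's Monte Carlo procedure iterates the two-dimensional state (x, y) of the system L from randomly chosen initial strategy combinations of BN and SN. The exact map defining L is given earlier in the paper and is not restated here, so the sketch below only illustrates the sampling-and-iteration skeleton and takes one iteration step as a caller-supplied placeholder function.

```python
import random

def mc_trajectories(step, n_samples=100, n_iter=500, tol=1e-6, seed=0):
    """Iterate a two-dimensional map (x, y) -> step(x, y) from random
    initial states, in the spirit of the MC procedure of Appendix B.

    `step` is a placeholder for one iteration of the system L, whose
    definition is not reproduced here.
    """
    rng = random.Random(seed)
    trajectories = []
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()  # random initial state in [0, 1]^2
        path = [(x, y)]
        for _ in range(n_iter):
            x_new, y_new = step(x, y)
            path.append((x_new, y_new))
            if abs(x_new - x) < tol and abs(y_new - y) < tol:
                break  # numerically stationary
            x, y = x_new, y_new
        trajectories.append(path)
    return trajectories

# Exercising the skeleton with a toy contraction map (not the paper's dynamics):
# trajs = mc_trajectories(lambda x, y: (0.5 * x + 0.1, 0.5 * y + 0.2))
```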

References

[1] C.-J. Chen, Y.-C. Hsiao, M.-A. Chu, Transfer mechanisms and knowledge transfer: The cooperative competency perspective, J. Bus. Res. 67 (12) (2014) 2531–2541.
[2] M.E. Newman, Coauthorship networks and patterns of scientific collaboration, Proc. Natl. Acad. Sci. USA 101 (Supplement 1) (2004) 5200–5205.
[3] A. Bandyopadhyay, S. Kar, Coevolution of cooperation and network structure in social dilemmas in evolutionary dynamic complex network, Appl. Math. Comput. 320 (2018) 710–730.
[4] Y. Zhu, J. Zhang, Q. Sun, Z. Chen, Evolutionary dynamics of strategies for threshold snowdrift games on complex networks, Knowl.-Based Syst. 130 (2017) 51–61.
[5] C. Huang, W. Han, H. Li, H. Cheng, Q. Dai, J. Yang, Public cooperation in two-layer networks with asymmetric interaction and learning environments, Appl. Math. Comput. 340 (2019) 305–313.
[6] Y. Zhang, J. Wang, C. Ding, C. Xia, Impact of individual difference and investment heterogeneity on the collective cooperation in the spatial public goods game, Knowl.-Based Syst. 136 (2017) 150–158.
[7] H. Takesue, Evolutionary prisoner's dilemma games on the network with punishment and opportunistic partner switching, Europhys. Lett. 121 (4) (2018) 48005.
[8] H. Takesue, Effects of updating rules on the coevolving prisoner's dilemma, Physica A 513 (2019) 399–408.
[9] C. Liu, J. Shi, T. Li, J. Liu, Aspiration driven coevolution resolves social dilemmas in networks, Appl. Math. Comput. 342 (2019) 247–254.
[10] M.A. Javarone, A.E. Atzeni, The role of competitiveness in the Prisoner's Dilemma, Comput. Soc. Netw. 2 (1) (2015) 15.
[11] L. Wang, T. Chen, X. You, Y. Wang, The effect of wealth-based anti-expectation behaviors on public cooperation, Physica A 493 (2018) 84–93.
[12] A. McAvoy, C. Hauert, Asymmetric evolutionary games, PLoS Comput. Biol. 11 (8) (2015).
[13] M.A. Javarone, The host-pathogen game: An evolutionary approach to biological competitions, Front. Phys. 6 (2018) 94.
[14] Z. Wang, T. Chen, X. Wang, J. Jin, M. Li, Evolution of co-operation among mobile agents with different influence, Physica A 392 (19) (2013) 4655–4662.
[15] S. Zhang, Z. Zhang, Y. Wu, Y. Li, Y. Xie, Coevolution of teaching ability and cooperation in spatial evolutionary games, Sci. Rep. 8 (1) (2018) 14097.
[16] C. Liu, C. Shen, Y. Geng, S. Li, C. Xia, Z. Tian, L. Shi, R. Wang, S. Boccaletti, Z. Wang, Popularity enhances the interdependent network reciprocity, New J. Phys. 20 (12) (2018) 123012.
[17] J.E. Bahbouhi, N. Moussa, Prisoner's dilemma game model for e-commerce, Appl. Math. Comput. 292 (2017) 128–144.
[18] E. El-Seidy, E.M. Elshobaky, K.M. Soliman, Two population three-player prisoner's dilemma game, Appl. Math. Comput. 277 (2016) 44–53.
[19] J. Zhang, Z. Chen, Z. Liu, Fostering cooperation of selfish agents through public goods in relation to the loners, Phys. Rev. E 93 (3) (2016) 032320.
[20] H. Yang, Z. Wu, Enhancement of cooperation by giving high-degree neighbors more help, J. Stat. Mech. Theory Exp. 2018 (6) (2018) 063407.
[21] Y. Zhang, G. Shu, Y. Li, Strategy-updating depending on local environment enhances cooperation in prisoner's dilemma game, Appl. Math. Comput. 301 (2017) 224–232.
[22] G. Armano, M.A. Javarone, The beneficial role of mobility for the emergence of innovation, Sci. Rep. 7 (2017) 1781.
[23] K. Lu, S. Wang, L. Xie, Z. Wang, M. Li, A dynamic reward-based incentive mechanism: Reducing the cost of P2P systems, Knowl.-Based Syst. 112 (2016) 105–113.
[24] H. Xu, C. Tian, X. Xiao, S. Fan, Evolutionary investors' power-based game on networks, Appl. Math. Comput. 330 (2018) 125–133.
[25] E. Ozkan-Canbolat, A. Beraha, Evolutionary knowledge games in social networks, J. Bus. Res. 69 (5) (2016) 1807–1811.
[26] S. Ji, S. Pan, E. Cambria, P. Marttinen, P.S. Yu, A survey on knowledge graphs: Representation, acquisition and applications, 2020, arXiv:2002.00388v2 [cs.CL] 9 Aug 2020.


[27] M.B. Shareh, H. Navidi, H.H.S. Javadi, M. HosseinZadeh, Preventing Sybil attacks in P2P file sharing networks based on the evolutionary game model, Inform. Sci. 470 (2019) 94–108.
[28] F. Shu, Y. Liu, X. Liu, X. Zhou, Memory-based conformity enhances cooperation in social dilemmas, Appl. Math. Comput. 346 (2019) 480–490.
[29] A. Hussain, E. Cambria, Semi-supervised learning for big social data analysis, Neurocomputing 275 (2018) 1662–1673.
[30] M.S. Akhtar, A. Ekbal, E. Cambria, How intense are you? Predicting intensities of emotions and sentiments using stacked ensemble, IEEE Comput. Intell. Mag. 15 (1) (2020) 64–75.
[31] W. Ye, S. Fan, Evolutionary snowdrift game with rational selection based on radical evaluation, Appl. Math. Comput. 294 (2017) 310–317.
[32] R. Chugh, Do Australian universities encourage tacit knowledge transfer? in: International Conference on Knowledge Management and Information Sharing, 2015, pp. 128–135.
[33] F.L. Schmidt, J.E. Hunter, Tacit knowledge, practical intelligence, general mental ability, and job knowledge, Curr. Dir. Psychol. Sci. 2 (1) (1993) 8–9.
[34] K. Goffin, U. Koners, Tacit knowledge, lessons learnt, and new product development, J. Prod. Innov. Manage. 28 (2) (2011) 300–318.
[35] G. Ryle, Knowing how and knowing that: The presidential address, Proc. Aristot. Soc. 46 (9) (1945) 1–16.
[36] W. Zhen, C. Tong, Y. Wang, Leadership by example promotes the emergence of cooperation in public goods game, Chaos Solitons Fractals 101 (2017) 100–105.
[37] C. Wang, L. Wang, J. Wang, S. Sun, C. Xia, Inferring the reputation enhances the cooperation in the public goods game on interdependent lattices, Appl. Math. Comput. 293 (2017) 18–29.
[38] C. Zhang, Q. Li, Z. Xu, J. Zhang, Stochastic dynamics of division of labor games in finite populations, Knowl.-Based Syst. 155 (2018) 11–21.
[39] M.A. Javarone, Statistical Physics and Computational Methods for Evolutionary Game Theory, Springer International Publishing, 2018.
[40] K.A. Gemeda, G. Gianini, M. Libsie, An evolutionary cluster-game approach for wireless sensor networks in non-collaborative settings, Pervasive Mob. Comput. 42 (2017) 209–225.
[41] F.C. Santos, J.M. Pacheco, A new route to the evolution of cooperation, J. Evol. Biol. 19 (3) (2006) 726.
[42] A. Szolnoki, M. Perc, Z. Danku, Towards effective payoffs in the prisoner's dilemma game on scale-free networks, Physica A 387 (8) (2008) 2075–2082.
[43] K. Zhang, H. Cheng, Co-evolution of payoff strategy and interaction strategy in prisoner's dilemma game, Physica A 461 (2016) 439–445.
[44] P.D. Taylor, L.B. Jonker, Evolutionarily stable strategies and game dynamics, Math. Biosci. 40 (1–2) (1978) 145–156.
[45] Q. Su, A. Li, L. Zhou, L. Wang, Interactive diversity promotes the evolution of cooperation in structured populations, New J. Phys. 18 (10) (2016) 103007.
[46] T. Hadzibeganovic, C.-Y. Xia, Cooperation and strategy coexistence in a tag-based multi-agent system with contingent mobility, Knowl.-Based Syst. 112 (2016) 1–13.
[47] J. Leskovec, J. Kleinberg, C. Faloutsos, Graph evolution: Densification and shrinking diameters, ACM Trans. Knowl. Discov. Data 1 (1) (2006) 2.
[48] T.A. Han, L. Pereira, F. Santos, Intention recognition promotes the emergence of cooperation, Adapt. Behav. 3 (2011) 264–279.
[49] T.A. Han, F.C. Santos, T. Lenaerts, L.M. Pereira, Synergy between intention recognition and commitments in cooperation dilemmas, Sci. Rep. 5 (1) (2015) 9312.
[50] Q. Su, L. Wang, H. Stanley, Understanding spatial public goods games on three-layer networks, New J. Phys. 20 (10) (2018) 103030.
[51] J. Zhu, M. Fang, Q. Shi, P. Wang, Q. Li, Contractor cooperation mechanism and evolution of the Green supply chain in mega projects, Sustainability 10 (11) (2018) 4306.
[52] C. Luo, Z. Jiang, Coevolving allocation of resources and cooperation in spatial evolutionary games, Appl. Math. Comput. 311 (2017) 47–57.
[53] B. Xu, Y. Lan, The distribution of wealth and the effect of extortion in structured populations, Chaos Solitons Fractals 87 (2016) 276–280.
[54] T. Khoo, F. Fu, S. Pauls, Spillover modes in multiplex games: Double-edged effects on cooperation and their coevolution, Sci. Rep. 8 (1) (2018) 6922.
[55] B. Shen, The influence of endogenous knowledge spillovers on open innovation cooperation modes selection, Wirel. Pers. Commun. 102 (4) (2018) 2701–2713.
[56] E. Cambria, Affective computing and sentiment analysis, IEEE Intell. Syst. 31 (2) (2016) 102–107.
[57] I. Zisis, S. Guida, T.A. Han, G. Kirchsteiger, T. Lenaerts, Generosity motivated by acceptance - evolutionary analysis of an anticipation game, Sci. Rep. 5 (1) (2015) 18076.
[58] D.G. Rand, C.E. Tarnita, H. Ohtsuki, M.A. Nowak, Evolution of fairness in the one-shot anonymous Ultimatum Game, Proc. Natl. Acad. Sci. USA 110 (7) (2013) 2581–2586.
