
Combining Bayesian reconstruction entropy with maximum entropy method for analytic continuations of matrix-valued Green's functions

Songlin Yang,1 Liang Du,1, ∗ and Li Huang1, †
1 College of Physics and Technology, Guangxi Normal University, Guilin, Guangxi 541004, China
(Dated: January 2, 2024)
arXiv:2401.00018v1 [hep-lat] 27 Dec 2023

∗ liangdu@gxnu.edu.cn
† lihuang.dmft@gmail.com
The Bayesian reconstruction entropy is considered an alternative to the Shannon-Jaynes entropy, as it does not exhibit the asymptotic flatness characteristic of the Shannon-Jaynes entropy and obeys scale invariance. It is commonly utilized in conjunction with the maximum entropy method to derive spectral functions from Euclidean time correlators produced by lattice QCD simulations. This study expands the application of the Bayesian reconstruction entropy to the reconstruction of spectral functions for Matsubara or imaginary-time Green's functions in quantum many-body physics. Furthermore, it extends the Bayesian reconstruction entropy to implement the positive-negative entropy algorithm, enabling the analytic continuations of matrix-valued Green's functions in an element-wise manner. Both the diagonal and off-diagonal components of the matrix-valued Green's functions are treated equally. Benchmark results for the analytic continuations of synthetic Green's functions indicate that the Bayesian reconstruction entropy, when combined with the preblur trick, demonstrates performance comparable to that of the Shannon-Jaynes entropy. Notably, it exhibits greater resilience to noise in the input data, particularly when the noise level is moderate.

I. INTRODUCTION

In quantum many-body physics, the single-particle Green's function holds significant importance [1, 2]. It is typically computed using numerical methods such as quantum Monte Carlo [3–5] and the functional renormalization group [6, 7], or many-body perturbative techniques like the random phase approximation [8, 9] and the GW approximation [10–13], within the framework of the imaginary-time or Matsubara frequency formulation. This quantity provides valuable insights into the dynamic response of the system. However, it cannot be directly compared to real-frequency experimental observables. Therefore, it is necessary to convert the calculated values to real frequencies, which presents the fundamental challenge of the analytic continuation problem. In theory, the retarded Green's function G_R(ω) and the spectral function A(ω) can be derived from the imaginary-time Green's function G(τ) or the Matsubara Green's function G(iω_n) by performing an inverse Laplace transformation [1, 2]. Due to the ill-conditioned nature of the transformation kernel, solving it directly is nearly impossible. In fact, the output of this inverse problem is highly sensitive to the input. Even minor fluctuations or noise in G(τ) or G(iω_n) can result in nonsensical changes in A(ω) or G_R(ω), making analytic continuations extremely unstable.

In recent decades, numerous techniques have been developed to address the challenges associated with analytic continuation problems. These methods include the Padé approximation (PA) [14–17], the maximum entropy method (MaxEnt) [18–20], stochastic analytic continuation (SAC) [21–30], the stochastic optimization method (SOM) [31–35], stochastic pole expansion (SPX) [36, 37], sparse modeling (SpM) [38, 39], Nevanlinna analytical continuation (NAC) [40, 41], causal projections [42], Prony fits [43–45], machine learning assisted approaches [46–52], and so on. Each method has its own advantages and disadvantages. There is no doubt that the MaxEnt method is particularly popular due to its ability to maintain a balance between computational efficiency and accuracy [18–20]. Several open source toolkits, such as ΩMAXENT [53], QMEM [54], MAXENT by Kraberger et al. [55], MAXENT by Levy et al. [56], ALF [57], ACFlow [58], eDMFT [59], and ana_cont [60], have been developed to implement this method.

The MaxEnt method is rooted in Bayesian inference [61]. Initially, the spectral function A(ω) is constrained to be non-negative and is interpreted as a probability distribution. Subsequently, the MaxEnt method endeavors to identify the most probable spectrum by maximizing a functional Q[A], which comprises the misfit functional L[A] and the entropic term S[A] [see Eq. (10)]. Here, L[A] quantifies the difference between the input and reconstructed Green's functions, while S[A] is employed to regulate the spectrum [18]. The MaxEnt method is well established for the diagonal components of matrix-valued Green's functions, as the corresponding spectral functions are non-negative (for fermionic systems). However, the non-negativity of the spectral functions cannot be assured for the off-diagonal components of matrix-valued Green's functions. As a result, the conventional MaxEnt method fails in this situation.

Currently, there is a growing emphasis on obtaining the full matrix representation of the Green's function (or the corresponding self-energy function) on the real axis [54, 62]. This is necessary for calculating lattice quantities of interest in Dyson's equation [59]. Therefore, there is a strong demand for a reliable method to perform analytic continuation of the entire Green's function matrix [41–43]. To meet this requirement, several novel approaches have been suggested in recent years as enhancements to the conventional MaxEnt method. The three most significant advancements are as follows: (1) Auxiliary Green's function algorithm. It is possible to create an auxiliary Green's function by combining the off-diagonal and diagonal elements of the matrix-valued Green's function to ensure the positivity of the auxiliary spectrum [63–68]. The analytic continuation of the auxiliary Green's function can then be carried out using the standard MaxEnt method.
(2) Positive-negative entropy algorithm. Kraberger et al. proposed that the off-diagonal spectral functions can be seen as a subtraction of two artificial positive functions. They extended the entropic term to relax the non-negativity constraint presented in the standard MaxEnt method [55, 60]. This enables simultaneous treatment of both the diagonal and off-diagonal elements, thereby restoring crucial constraints on the mathematical properties of the resulting spectral functions, including positive semi-definiteness and Hermiticity. (3) Maximum quantum entropy algorithm. Recently, Sim and Han generalized the MaxEnt method by reformulating it with the quantum relative entropy, maintaining the Bayesian probabilistic interpretation [18, 61]. The matrix-valued Green's function is directly continued as a single object without any further approximation or ad hoc treatment [54].

Although various algorithms have been proposed to enhance the usefulness of the MaxEnt method [53–55], further improvements in algorithms and implementations are always beneficial. As mentioned above, the essence of the MaxEnt method lies in the definition of the entropic term S[A]. Typically, it takes the form of the Shannon-Jaynes entropy (dubbed S_SJ) in the realm of quantum many-body physics [69]. However, alternative forms such as the Tikhonov regulator [70], the positive-negative entropy [55], and the quantum relative entropy [54] are also acceptable. It is worth noting that in the context of lattice QCD simulations, the extraction of spectral functions from Euclidean time correlators is of particular importance. The MaxEnt method is also the primary tool for the analytic continuation of lattice QCD data [71]. Burnier and Rothkopf introduced an enhanced dimensionless prior distribution, known as the Bayesian reconstruction entropy (dubbed S_BR) [72, 73], which demonstrates superior asymptotic behavior compared to S_SJ [69]. They discovered that by using S_BR in conjunction with the MaxEnt method, a significant improvement in the reconstruction of spectral functions for Euclidean time correlators can be achieved. They later extended S_BR to support Bayesian inference of non-positive spectral functions in quantum field theory [74]. But to the best of our knowledge, the application of S_BR has been limited to post-processing of lattice QCD data thus far. Hence, some questions naturally arise. How effective is S_BR for solving analytic continuation problems in quantum many-body simulations? Is it truly superior to S_SJ? Keeping these questions in mind, the primary objective of this study is to broaden the application scope of S_BR. We first verify whether S_BR can handle Matsubara (or imaginary-time) Green's functions. Then, we generalize S_BR to implement the positive-negative entropy algorithm [55] for analytic continuations of matrix-valued Green's functions. The simulated results indicate that S_BR works quite well in transforming imaginary-time or Matsubara Green's function data to real frequency, no matter whether the Green's function is a matrix or not. We find that S_BR has a tendency to sharpen the peaks in spectral functions, but this shortcoming can be largely overcome by the preblur trick [55, 60]. Overall, the performance of S_BR is comparable to that of S_SJ [69] for the examples involved.

The rest of this paper is organized as follows. Section II introduces the spectral representation of the single-particle Green's function. Furthermore, it explains the fundamental formalisms of the MaxEnt method, the Bayesian reconstruction entropy, and the principle of the positive-negative entropy algorithm. Section III is dedicated to various benchmark examples, including analytic continuation results for synthetic single-band Green's functions and multi-orbital matrix-valued Green's functions. The spectral functions obtained by S_SJ and S_BR are compared with the exact spectra. In Section IV, we further examine the preblur trick and the auxiliary Green's function algorithm for S_SJ and S_BR. The robustness of S_BR with respect to noisy Matsubara data is discussed and compared with that of S_SJ. A concise summary is presented in Section V. Finally, Appendix A introduces the goodness-of-fit functional. The detailed mathematical derivations for S_SJ and S_BR are provided in Appendices B and C, respectively.

II. METHOD

A. Spectral representation of single-particle Green's function

It is well known that the single-particle imaginary-time Green's function G(τ) can be defined as follows:

G(τ) = ⟨T_τ d(τ) d†(0)⟩, (1)

where τ is the imaginary time, T_τ denotes the time-ordering operator, and d (d†) denotes the annihilation (creation) operator [1, 2]. Given G(τ), the Matsubara Green's function G(iω_n) can be constructed via direct Fourier transformation:

G(iω_n) = ∫_0^β dτ e^{−iω_n τ} G(τ), (2)

where β is the inverse temperature of the system (β ≡ 1/T), and ω_n is the Matsubara frequency. Note that ω_n is equal to (2n+1)π/β for fermions and 2nπ/β for bosons (n is an integer).

Let us assume that the spectral function of the single-particle Green's function is A(ω); then the spectral representations of G(τ) and G(iω_n) read:

G(τ) = ∫_{−∞}^{+∞} K(τ, ω) A(ω) dω, (3)

and

G(iω_n) = ∫_{−∞}^{+∞} K(iω_n, ω) A(ω) dω, (4)

respectively. Here, K(τ, ω) and K(iω_n, ω) are called the kernel functions. Their definitions are as follows:

K(τ, ω) = e^{−τω} / (1 ± e^{−βω}), (5)

and

K(iω_n, ω) = 1 / (iω_n − ω). (6)
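As a quick illustration of why Eqs. (3) and (4) are hard to invert, the two kernels can be tabulated numerically and their conditioning inspected. The sketch below is our own illustration (it is not taken from ACFlow); the mesh sizes, β, and the grid bounds are arbitrary choices.

```python
import numpy as np

beta = 20.0                                   # inverse temperature (illustrative)
omega = np.linspace(-10.0, 10.0, 501)          # real-frequency mesh
tau = np.linspace(0.0, beta, 101)              # imaginary-time mesh
wn = (2 * np.arange(50) + 1) * np.pi / beta    # fermionic Matsubara frequencies

# Fermionic imaginary-time kernel, Eq. (5): K(tau, omega) = exp(-tau*omega) / (1 + exp(-beta*omega)).
# log K = -tau*omega - log(1 + exp(-beta*omega)) is evaluated with logaddexp for stability.
K_tau = np.exp(-tau[:, None] * omega[None, :]
               - np.logaddexp(0.0, -beta * omega[None, :]))

# Matsubara kernel, Eq. (6): K(i w_n, omega) = 1 / (i w_n - omega).
K_iw = 1.0 / (1j * wn[:, None] - omega[None, :])

# The singular values of K decay (nearly) exponentially, which is the origin of the
# ill-conditioning discussed below.
s = np.linalg.svd(K_tau, compute_uv=False)
print("largest / smallest singular value of K(tau, omega):", s[0] / s[-1])
```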
In the right-hand side of Eq. (5), + is for fermionic correlators and − is for bosonic correlators [18]. In the subsequent discussion, our focus will be on fermionic correlators.

We observe that Eqs. (3) and (4) are classified as Fredholm integral equations of the first kind. When A(ω) is given, it is relatively straightforward to compute the corresponding G(τ) and G(iω_n) by numerical integration methods. However, the inverse problem of deducing A(ω) from G(τ) or G(iω_n) is challenging due to the ill-conditioned nature of the kernel function K. To be more specific, the condition number of K is very large because of the exponential decay of K with ω and τ. Consequently, direct inversion of K becomes impractical from a numerical standpoint [19]. Furthermore, the G(τ) or G(iω_n) data obtained from finite temperature quantum Monte Carlo simulations are not free from errors [3–5], further complicating the solution of Eqs. (3) and (4).

B. Maximum entropy method

The cornerstone of the MaxEnt method is Bayes' theorem. Let us treat the input Green's function G and the spectrum A as two events. According to Bayes' theorem, the posterior probability P[A|G] can be calculated by:

P[A|G] = P[G|A] P[A] / P[G], (7)

where P[G|A] is the likelihood function, P[A] is the prior probability, and P[G] is the evidence [61]. P[G|A] is assumed to be in direct proportion to e^{−L[A]}, where the misfit functional L[A] reads:

L[A] = (1/2) χ²[A] = (1/2) Σ_{i=1}^{N} [(G_i − G̃_i[A]) / σ_i]². (8)

Here, N is the number of input data points, σ denotes the standard deviation of G, G̃[A] is the reconstructed Green's function via Eqs. (3) and (4), and χ²[A] is called the goodness-of-fit functional (see Appendix A for more details). On the other hand, P[A] is supposed to be in direct proportion to e^{αS[A]}, where α is a regulation parameter and S is the entropy. Since the evidence P[G] can be viewed as a normalization constant, it is ignored in what follows. Thus,

P[A|G] ∝ e^{Q}, (9)

where Q is a functional of A [18]. It is defined as follows:

Q[A] = αS[A] − L[A] = αS[A] − (1/2) χ²[A]. (10)

The basic idea of the MaxEnt method is to identify the optimal spectrum Â that maximizes the posterior probability P[A|G] (or equivalently Q[A]). In essence, our goal is to determine the most favorable Â that satisfies the following equation:

∂Q/∂A |_{A = Â} = 0. (11)

Eq. (11) can be easily solved by the standard Newton's algorithm [75]. In Appendices A, B, and C, all terms in the right-hand side of Eq. (10) are elaborated in detail. We also explain how to solve Eq. (11) efficiently.

C. Bayesian reconstruction entropy

The entropy S is sometimes also known as the Kullback-Leibler distance. In principle, there are multiple choices for it. Perhaps S_SJ is the most frequently used in quantum many-body physics and condensed matter physics [18, 69]. It reads:

S_SJ[A] = ∫ dω [A(ω) − D(ω) − A(ω) log(A(ω)/D(ω))]. (12)

Here, D(ω) is called the default model, providing the essential features of the spectra. Usually D(ω) is a constant or a Gaussian function, but it can also be determined by making use of the high-frequency behavior of the input data [53]. If both A(ω) and D(ω) have the same normalization, Eq. (12) reduces to:

S_SJ[A] = −∫ dω A(ω) log(A(ω)/D(ω)). (13)

The S_BR introduced by Burnier and Rothkopf [72, 73] is dominant in high-energy physics and particle physics. It reads:

S_BR[A] = ∫ dω [1 − A(ω)/D(ω) + log(A(ω)/D(ω))]. (14)

Note that S_SJ is constructed from four axioms [18], while for S_BR two of these axioms are replaced [72]. First of all, scale invariance is incorporated in S_BR. It means that S_BR only depends on the ratio between A and D. The integrand in Eq. (14) is dimensionless, such that the choice of units for A and D will not change the result of the spectral reconstruction. Second, S_BR favors smooth spectra: spectra that deviate between two adjacent frequencies, ω_1 and ω_2, are penalized.

D. Positive-negative entropy

The formulas discussed above are only correct for positive definite spectra with a finite norm, i.e.,

∫ dω A(ω) > 0. (15)

However, the norm could be zero for off-diagonal elements of matrix-valued Green's functions due to the fermionic anti-commutation relation [1, 42]:

∫ dω A(ω) = 0. (16)

This suggests that the spectral function is not positive definite any more. The equations for S_SJ and S_BR [i.e., Eqs. (12) and (14)] should be adapted to this new circumstance. The positive-negative entropy algorithm proposed by Kraberger et al. is a graceful solution to this problem [55, 60]. They rewrite the off-diagonal spectral function A(ω) as the subtraction of two positive definite spectra:

A(ω) = A⁺(ω) − A⁻(ω). (17)
Here A⁺(ω) and A⁻(ω) are independent, but have the same norm. Then the resulting entropy can be split into two parts:

S^±[A] = S[A⁺, A⁻] = S[A⁺] + S[A⁻]. (18)

S^±[A] (or S[A⁺, A⁻]) is called the positive-negative entropy, which was first used for the analysis of NMR spectra [76, 77]. The expressions of the positive-negative entropy for S_SJ and S_BR are as follows:

S_SJ^±[A] = ∫ dω [√(A² + 4D²) − 2D − A log((√(A² + 4D²) + A) / (2D))], (19)

and

S_BR^±[A] = ∫ dω [2 − (√(A² + D²) + D)/D + log((√(A² + D²) + D) / (2D))]. (20)

The default models for A⁺ and A⁻ are D⁺ and D⁻, respectively. To derive Eqs. (19) and (20), D⁺ = D⁻ = D is assumed. For a detailed derivation, please see Appendices B and C. Similar to S_BR[A], S_BR^±[A] also exhibits scale invariance. In other words, it depends on A/D only.

E. Entropy density

The integrand in the expression for the entropy S is referred to as the entropy density s, i.e.,

S[A] = ∫ dω s[A(ω)]. (21)

Next, we would like to make a detailed comparison of the entropy densities of the different entropic terms. Suppose that x = A/D; the expressions for the various entropy densities are collected as follows:

s_SJ(x) = x − 1 − x log x, (22)

s_BR(x) = 1 − x + log x, (23)

s_SJ^±(x) = √(x² + 4) − 2 − x log((√(x² + 4) + x) / 2), (24)

s_BR^±(x) = 2 − (√(x² + 1) + 1) + log((√(x² + 1) + 1) / 2). (25)

They are visualized in Figure 1.

FIG. 1. Comparison of entropy densities for different entropic terms. The entropy S is related to the entropy density s via Eq. (21). For the positive-negative entropy S^±, D⁺ = D⁻ = D is assumed.

Clearly, all the entropy densities are strictly concave and non-positive (i.e., s ≤ 0 and s^± ≤ 0). The ordinary entropy density s(x) is only defined for positive x. s_BR(x) becomes maximal at x = 1, and exhibits a similar quadratic behavior around its maximum as s_SJ(x). In the case x → 0 (A ≪ D), s_BR(x) is not suppressed, while s_SJ(x) → −1. Thus, s_BR(x) avoids the asymptotic flatness inherent in s_SJ(x) [72]. The positive-negative entropy density s^±(x) is valid for any x (x ∈ R). Both s_BR^±(x) and s_SJ^±(x) are even functions. They also exhibit quadratic behaviors around x = 0, and s_BR^±(x) ≥ s_SJ^±(x).

In the limit of α → ∞, the goodness-of-fit functional χ²[A] has negligible weight, and the entropic term αS[A] becomes dominant [53]. Thus, the maximization of Q[A] (equivalently, the maximization of S[A]) yields Â = D (at x = 1) for the conventional entropy, in contrast to Â = 0 (at x = 0) for the positive-negative entropy [55].

III. RESULTS

A. Computational setups

When the conventional MaxEnt method is combined with S_SJ (or S_SJ^±) [69], it is called the MaxEnt-SJ method. On the other hand, a combination of the MaxEnt method with S_BR (or S_BR^±) [72–74] is called the MaxEnt-BR method. We implemented both methods in the open source analytic continuation toolkit ACFlow [58]. All the benchmark calculations involved in this paper were done by this toolkit. The simulated results for the MaxEnt-SJ and MaxEnt-BR methods will be presented in this section. Next, we will explain the computational details.
FIG. 2. Analytic continuations of single-band Green's functions. (a)-(c) For the single off-centered peak case. (d)-(f) For the two Gaussian peaks with a gap case. In panels (a) and (d), the calculated and exact spectra are compared. In panels (b) and (e), the log10(χ²) − log10(α) curves are plotted. The inverted triangle symbols indicate the optimal α parameters, which are determined by the χ²-kink algorithm [53]. The deviations of the reproduced Green's functions from the raw ones are shown in panels (c) and (f). Note that in panel (c), R is a function of Matsubara frequency, while R is τ-dependent in panel (f).

We always start from an exact spectrum A(ω), which consists of one or more Gaussian-like peaks. Then A(ω) is used to construct the imaginary-time Green's function G(τ) via Eq. (3), or the Matsubara Green's function G(iω_n) via Eq. (4). Here, for the sake of simplicity, the kernel function K is assumed to be fermionic. Since realistic Green's function data from finite temperature quantum Monte Carlo simulations are usually noisy, multiplicative Gaussian noise will be manually added to the clean G(τ) or G(iω_n) to imitate this situation. The following formula is adopted:

G_noisy = G_clean [1 + δ N_C(0, 1)], (26)

where δ denotes the noise level of the input data and N_C is the complex-valued normal Gaussian noise [42]. Unless stated otherwise, the standard deviation of G is fixed to 10⁻⁴. The numbers of input data points for G(τ) and G(iω_n) are 1000 and 50, respectively. During the MaxEnt simulations, the χ²-kink algorithm [53] is used to determine the regulation parameter α. The Bryan algorithm [78] is also tested. It always gives similar results, which will not be presented in this paper. Once S_BR and S_BR^± are chosen, the preblur trick [55, 60] is used to smooth the spectra. The blur parameter is adjusted case by case. Finally, the reconstructed spectrum is compared with the true solution. We also adopt the following quantity to quantify the deviation of the reconstructed Green's function from the input one:

R_i = 100 ΔG_i / G_i = 100 (G_i − G̃_i[Â]) / G_i, (27)

where G̃ is evaluated by Â via Eq. (3) or Eq. (4).

B. Single-band Green's functions

At first, the analytic continuations of single-band Green's functions are examined. The exact spectral functions are constructed by a superposition of some Gaussian peaks. We consider two representative spectra. (i) Single off-centered peak. The spectral function is:

A(ω) = exp[−(ω − ϵ)² / (2Γ²)], (28)

where ϵ = 0.5, Γ = 1.0, δ = 10⁻⁴, σ = 10⁻⁴, and β = 20.0. The input data is a Matsubara Green's function, which contains 50 Matsubara frequencies (N = 50). The blur parameter for the MaxEnt-BR method is 0.45. (ii) Two Gaussian peaks with a gap. The spectral function is:

A(ω) = f₁ / (√(2π) Γ₁) exp[−(ω − ϵ₁)² / (2Γ₁²)] + f₂ / (√(2π) Γ₂) exp[−(ω − ϵ₂)² / (2Γ₂²)], (29)

where f₁ = f₂ = 1.0, ϵ₁ = −ϵ₂ = 2.0, Γ₁ = Γ₂ = 0.5, δ = 10⁻⁴, σ = 10⁻³, and β = 5.0. The synthetic data is an imaginary-time Green's function, which contains 1000 imaginary time slices (N = 1000). The blur parameter for the MaxEnt-BR method is 0.30.
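For readers who wish to reproduce such benchmarks, the following standalone sketch generates the synthetic imaginary-time data of case (ii) along the lines of Eqs. (29), (3), and (26). It is our own illustration (grid bounds and random seed are arbitrary); for Matsubara data a complex-valued noise N_C(0, 1) would be drawn instead of the real noise used here.

```python
import numpy as np

rng = np.random.default_rng(42)
beta, ntau, delta = 5.0, 1000, 1e-4            # case (ii) parameters
f1 = f2 = 1.0
eps1, eps2, gamma = 2.0, -2.0, 0.5

omega = np.linspace(-8.0, 8.0, 801)
domega = omega[1] - omega[0]

# Exact spectrum, Eq. (29): two Gaussian peaks separated by a gap.
def gauss(w, f, eps, gam):
    return f / (np.sqrt(2.0 * np.pi) * gam) * np.exp(-(w - eps) ** 2 / (2.0 * gam ** 2))

A_exact = gauss(omega, f1, eps1, gamma) + gauss(omega, f2, eps2, gamma)

# Fermionic kernel of Eq. (5) in a numerically stable form, then G(tau) via Eq. (3).
tau = np.linspace(0.0, beta, ntau)
K = np.exp(-tau[:, None] * omega[None, :] - np.logaddexp(0.0, -beta * omega[None, :]))
G_clean = K @ (A_exact * domega)

# Multiplicative Gaussian noise as in Eq. (26).
G_noisy = G_clean * (1.0 + delta * rng.standard_normal(ntau))
```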
FIG. 3. Analytic continuations of two-band matrix-valued Green's functions. For the analytic continuations of the diagonal elements of the Green's function, S_SJ and S_BR are used in the calculations, while for the off-diagonal elements of the Green's functions, S_SJ^± and S_BR^± are employed. In the MaxEnt-BR simulations, the preblur trick is always applied, and the blur parameter b is fixed to 0.2. The default models for the off-diagonal calculations are constructed by A11 and A22. See main text for more details. (a)-(c) θ = 0.1; (d)-(f) θ = 0.5; (g)-(i) θ = 0.9. In each row, the panels show A11, A22, and A12.

The analytic continuation results are shown in Fig. 2. For case (i), it is clear that the exact spectrum and the Matsubara Green's function are well reproduced by both the MaxEnt-SJ method and the MaxEnt-BR method [see Fig. 2(a) and (c)]. For case (ii), the MaxEnt-SJ method works quite well. The MaxEnt-BR method can resolve the gap and the positions of the two peaks accurately. However, it slightly overestimates their heights. We also observe that the difference between G(τ) and G̃(τ) becomes relatively apparent in the vicinity of τ = β/2 [see Fig. 2(d) and (f)]. Figure 2(b) and (e) plot log10(χ²) as a function of log10(α). The plots can be split into three distinct regions [53]: (i) Default model region (α → ∞). χ² is relatively flat and goes to its global maximum. The entropic term αS is much larger than χ². The obtained spectrum A(ω) resembles the default model D(ω). (ii) Noise-fitting region (α → 0). χ² approaches its global minimum, but it is larger than αS. In this region, the calculated spectrum A(ω) tends to fit the noise in G(τ) or G(iω_n). (iii) Information-fitting region. χ² increases with the increment of α. It is comparable with αS. The optimal α parameter is located at the crossover between the noise-fitting region and the information-fitting region. For S_BR and S_SJ, their χ²(α) curves almost overlap in the default model region and the noise-fitting region, but differ in the information-fitting region. It indicates that the optimal α parameters for S_SJ are larger than those for S_BR.

C. Matrix-valued Green's functions

Next, the analytic continuations of matrix-valued Green's functions are examined. We consider a two-band model. The initial spectral function is a diagonal matrix:

A′(ω) = [ A′₁₁(ω)   0
           0         A′₂₂(ω) ]. (30)
Here A′₁₁(ω) and A′₂₂(ω) are constructed by using Eq. (29). For A′₁₁(ω), the parameters are: f₁ = f₂ = 0.5, ϵ₁ = 1.0, ϵ₂ = 2.0, Γ₁ = 0.20, and Γ₂ = 0.70. For A′₂₂(ω), the parameters are: f₁ = f₂ = 0.5, ϵ₁ = −1.0, ϵ₂ = −2.1, Γ₁ = 0.25, and Γ₂ = 0.60. The true spectral function is a general matrix with non-zero off-diagonal components. It reads:

A(ω) = [ A₁₁(ω)   A₁₂(ω)
          A₂₁(ω)   A₂₂(ω) ]. (31)

The diagonal matrix A′(ω) is rotated by a rotation matrix R to generate A(ω). The rotation matrix R is defined as follows:

R = [ cos θ    sin θ
      −sin θ   cos θ ], (32)

where θ denotes the rotation angle. In the present work, we consider three different rotation angles. They are: (i) θ = 0.1, (ii) θ = 0.5, and (iii) θ = 0.9. Then A(ω) is used to construct the matrix-valued Green's function G(iω_n) by using Eq. (4). The essential parameters are: δ = 0.0, σ = 10⁻⁴, and β = 40.0. The input data contains 50 Matsubara frequencies. For analytic continuations of the diagonal elements of G(iω_n), the default models are Gaussian-like. However, for the off-diagonal elements, the default models are quite different. They are evaluated by D₁₂(ω) = √(A₁₁(ω) A₂₂(ω)), where A₁₁(ω) and A₂₂(ω) are the calculated spectral functions for the diagonal elements [55, 60]. This means that we have to carry out analytic continuations for the diagonal elements at first, and then use the diagonal spectra to prepare the default models for the off-diagonal elements. For the MaxEnt-BR method, the preblur algorithm is used to smooth the spectra. The blur parameter is fixed to 0.2.

The simulated results are illustrated in Fig. 3. For the diagonal spectral functions (A₁₁ and A₂₂), both the MaxEnt-SJ and MaxEnt-BR methods can accurately resolve the peaks that are close to the Fermi level. However, for the high-energy peaks, the MaxEnt-BR method tends to overestimate their heights and yield sharper peaks [see the peaks around ω = 2.0 in Fig. 3(a), (d), and (g)]. For the off-diagonal spectral functions, only A₁₂ is shown, since A₂₁ is equivalent to A₁₂. We observe that the major features of A₁₂ are well captured by both the MaxEnt-SJ and MaxEnt-BR methods. Undoubtedly, the MaxEnt-SJ method works quite well in all cases. The MaxEnt-BR method exhibits good performance when the rotation angle is small [θ = 0.1, see Fig. 3(c)]. But some wiggles emerge around ω = ±2.0 when the rotation angle is large [θ = 0.5 and θ = 0.9, see Fig. 3(f) and (i)]. We test more rotation angles (0.0 < θ < 2.0). It seems that these wiggles won't be enhanced unless the blur parameter b is decreased.

FIG. 4. Test of the preblur trick. A single-band Green's function is treated. The exact spectral function is generated by using Eq. (29). The model parameters are the same as those used in Sec. III B. Only the analytic continuation results obtained by the MaxEnt-BR method are shown [A(ω) versus ω for b = 0.0, 0.1, 0.2, 0.3, 0.4, together with the exact spectrum].

FIG. 5. Test of the preblur trick. A matrix-valued Green's function is treated via the positive-negative entropy approach. The initial spectral function is generated by using Eq. (29). The model parameters are the same as those used in Sec. III C. The rotation angle θ is 0.1. Only the off-diagonal spectral functions (A₁₂) obtained by the MaxEnt-BR method are shown [for b = 0.0, 0.1, 0.2, together with the exact spectrum].

IV. DISCUSSIONS

In the previous section, analytic continuations for single-band Green's functions and matrix-valued Green's functions by using the MaxEnt-BR method have been demonstrated. However, there are still some important issues that need to be clarified. In this section, we would like to further discuss the preblur algorithm, the auxiliary Green's function algorithm, and the noise tolerance of the MaxEnt-BR method.

A. Effect of preblur

The benchmark results shown in Section III imply that the MaxEnt-BR method has a tendency to generate sharp peaks or small fluctuations in the high-energy regions of the spectra. The preblur algorithm is helpful to alleviate this phenomenon. In the context of analytic continuation, the preblur algorithm was introduced by Kraberger et al. [55].
The kernel function K(iω_n, ω) [see Eq. (6)] is "blurred" by using the following expression:

K_b(iω_n, ω) = ∫_{−∞}^{∞} dω′ K(iω_n, ω′) g_b(ω − ω′). (33)

Here, K_b(iω_n, ω) is the blurred kernel, which is then used in Eq. (8) to evaluate the χ²-term. g_b(ω) is a Gaussian function:

g_b(ω) = exp(−ω² / 2b²) / (√(2π) b), (34)

where b is the blur parameter.

In Figures 4 and 5, we analyze the effects of the preblur trick for two typical scenarios: (i) a positive spectral function with two separate Gaussian peaks, and (ii) a complicated spectral function for an off-diagonal Green's function. Note that the model parameters for generating the exact spectra are taken from Sections III B and III C, respectively. It is evident that the MaxEnt-BR method without the preblur trick (b = 0) has trouble resolving the spectra accurately. It usually favors sharp peaks (see Fig. 4). Sometimes it may lead to undesirable artifacts, such as the side peaks around ω = ±1.5 in Fig. 5. The preblur algorithm can remedy this problem to some extent. The major peaks are smoothed, and the artificial side peaks are suppressed upon increasing b. But it's not the case that bigger is always better. There is an optimal b. For case (i), the optimal b is 0.3. A larger b (b > 0.3) will destroy the gap, inducing a metallic state. For case (ii), the optimal b is 0.2. If b is further increased, it is difficult to obtain a stable solution. Here, we should emphasize that these unphysical features seen in the spectra are not unique to the MaxEnt-BR method. Actually, a similar tendency was already observed in the early applications of the MaxEnt-SJ method in image processing tasks. In order to address this problem, Skilling suggested that the spectrum can be expressed as a Gaussian convolution: A = g_b ⋆ h, where h is a "hidden" function [79]. Then the entropy is evaluated from h(ω), instead of A(ω). In fact, Skilling's approach is equivalent to the preblur trick.

FIG. 6. Analytic continuations of matrix-valued Green's functions via the auxiliary Green's function approach. The model parameters are the same as those used in Sec. III C. For the MaxEnt-BR simulations, the preblur trick is always applied, and the blur parameter b is fixed to 0.2. Only the results for the off-diagonal elements (A₁₂) are presented. The upper panels show the spectral functions for the auxiliary Green's functions [see Eq. (35)]. The lower panels show the off-diagonal spectral functions evaluated by Eq. (36). (a)-(b) θ = 0.1; (c)-(d) θ = 0.5; (e)-(f) θ = 0.9.

B. Auxiliary Green's functions

As stated before, in addition to the positive-negative entropy method, the auxiliary Green's function algorithm can be used to continue the off-diagonal elements of the Green's functions [63–68]. Its idea is quite simple. At first, an auxiliary Green's function is constructed as follows:

G_aux(iω_n) = G₁₁(iω_n) + G₂₂(iω_n) + 2 G₁₂(iω_n). (35)

Suppose that A_aux(ω) is the corresponding spectral function for G_aux(iω_n). It is easy to prove that A_aux(ω) is positive semi-definite. So, it is safe to perform analytic continuation for G_aux(iω_n) by using the traditional MaxEnt-SJ or MaxEnt-BR method. Next, the off-diagonal spectral function A₁₂(ω) can be evaluated as follows:

A₁₂(ω) = [A_aux(ω) − A₁₁(ω) − A₂₂(ω)] / 2. (36)
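A compact sketch of this workflow is given below. The routine continue_with_maxent is a hypothetical placeholder for any MaxEnt solver that maps Matsubara data to a non-negative spectrum (for instance one of the toolkits cited in the Introduction); only the array algebra of Eqs. (35) and (36) is spelled out.

```python
import numpy as np

def continue_with_maxent(G_iw, sigma, omega):
    """Hypothetical placeholder: plug in any MaxEnt solver that returns A(omega) >= 0."""
    raise NotImplementedError("supply your favourite MaxEnt implementation here")

def offdiagonal_via_auxiliary(G11, G22, G12, sigma, omega):
    # Eq. (35): the auxiliary combination has a positive semi-definite spectrum,
    # so the standard (non-negative) MaxEnt method applies to it.
    G_aux = G11 + G22 + 2.0 * G12
    A_aux = continue_with_maxent(G_aux, sigma, omega)
    A11 = continue_with_maxent(G11, sigma, omega)
    A22 = continue_with_maxent(G22, sigma, omega)
    # Eq. (36): recover the off-diagonal spectrum from the three non-negative spectra.
    return 0.5 * (A_aux - A11 - A22)
```

Note that the errors of the three separately continued spectra add up in Eq. (36), which is the main source of the numerical instability discussed below.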
FIG. 7. Robustness of the MaxEnt method with respect to noisy Matsubara data. Here we treat the off-diagonal elements (G₁₂) of the two-band matrix-valued Green's functions only, via the positive-negative entropy method. The model parameters are the same as those used in Sec. III C. The rotation angle θ is 0.5. The blur parameter for the MaxEnt-BR method is fixed to 0.2. (a)-(g) Off-diagonal spectral functions (A₁₂) extracted from noisy Matsubara data. δ denotes the noise level of the input data. It varies from 10⁻⁸ (tiny noise) to 10⁻² (large noise). (h) Integrated real axis error as a function of δ. It measures the deviations from the exact spectral function, see Eq. (37).

This algorithm is not restricted to the particle-hole symmetric case, as assumed in Ref. [64]. Here, we employ the two-band example presented in Section III C again to test the combination of the auxiliary Green's function algorithm and the MaxEnt-BR method (and the MaxEnt-SJ method). The model and computational parameters are kept, and the analytic continuation results are plotted in Fig. 6.

Three rotation angles are considered in the simulations. We find that the auxiliary Green's function algorithm is numerically unstable. (i) θ = 0.1. When the rotation angle is small, the amplitude (absolute value) of A₁₂(ω) is small. Both the MaxEnt-SJ method and the MaxEnt-BR method can resolve the major peaks around ω = ±1.0. But they will produce apparent oscillations at higher energies. Increasing the b parameter further could lead to wrong peaks. It seems that these unphysical features are likely due to the superposition of errors in A_aux(ω), A₁₁(ω), and A₂₂(ω). The performance of the MaxEnt-BR method is worse than that of the MaxEnt-SJ method. (ii) θ = 0.5 and θ = 0.9. When the rotation angle is moderate or large, the spectra obtained by the MaxEnt-SJ method are well consistent with the exact solutions. By using the MaxEnt-BR method, though the fluctuations in the high-energy regions are greatly suppressed, small deviations from the exact spectra still exist. Overall, the auxiliary Green's function algorithm is inferior to the positive-negative entropy method in the examples studied. Especially when the rotation angle is small, the auxiliary Green's function algorithm usually fails, irrespective of which form of entropy is adopted.

C. Robustness with respect to noisy input data

Analytic continuation is commonly used on noisy Monte Carlo data. The precision of the Monte Carlo data, which strongly depends on the sampling algorithm and the estimator used, is rarely better than 10⁻⁵ [3–5]. The distributions of inherent errors in the Monte Carlo data are often Gaussian.
This poses a strict requirement on the noise tolerance of the analytic continuation methods. Previous studies have demonstrated that the MaxEnt-SJ method is robust to noise. Here, we would like to examine the robustness of the MaxEnt-BR method with respect to noisy Matsubara data. We reconsider the analytic continuation of the off-diagonal elements of matrix-valued Green's functions. As mentioned above, the synthetic Matsubara data for this case is assumed to be noiseless (δ = 0.0). Now the noise is manually added by Eq. (26) and the noise level δ is changed from 10⁻⁸ to 10⁻². The other computational parameters are the same as those used in Section III C. In order to estimate the sensitivity of the analytic continuation method to the noisy data, a new quantity, namely the integrated real axis error, is introduced:

err(δ) = ∫ dω |A(ω) − Â_δ(ω)|. (37)

Here, Â_δ(ω) means the reconstructed (optimal) spectral function under the given noise level δ.

In Figure 7(a)-(g), the convergence of the spectral functions obtained by the MaxEnt-BR method for simulated Gaussian errors with varying magnitude is shown. When δ = 10⁻², the main peaks at ω ≈ ±1.0 are roughly reproduced, while the side peaks at ω ≈ 2.0 are completely smeared out. When δ = 10⁻³, the major characteristics of the off-diagonal spectrum are successfully captured. As the simulated errors decrease further (δ < 10⁻³), the MaxEnt-BR method rapidly converges to the exact solution. For comparison, the results for the MaxEnt-SJ method are also presented. It is evident that the MaxEnt-SJ method is less robust than the MaxEnt-BR method when the noise level is moderate. It fails to resolve the satellite peaks around ω = ±2.0. Only when δ ≤ 10⁻⁶ can the MaxEnt-SJ method recover the exact spectrum. Figure 7(h) exhibits the integrated error err(δ). When 10⁻⁶ < δ < 10⁻³, the MaxEnt-BR method exhibits better robustness with respect to noise than the MaxEnt-SJ method.

V. CONCLUDING REMARKS

In summary, we extend the application scope of S_BR to analytic continuations of imaginary-time Green's functions and Matsubara Green's functions in quantum many-body physics and condensed matter physics. It is further generalized to the form of the positive-negative entropy to support analytic continuation of matrix-valued Green's functions, in which the positive semi-definiteness of the spectral function is broken. We demonstrate that the MaxEnt-BR method, in conjunction with the preblur algorithm and the positive-negative entropy algorithm, is capable of capturing the primary features of the diagonal and off-diagonal spectral functions, even in the presence of moderate levels of noise. Overall, its performance is on par with that of the MaxEnt-SJ method in the examples studied. Possible applications of the MaxEnt-BR method in the future include analytic continuations of anomalous Green's functions and self-energy functions [64, 68], bosonic response functions (such as the optical conductivity and spin susceptibility) [80, 81], and frequency-dependent transport coefficients with non-positive spectral weight [65, 66], etc. Further investigations are highly desirable.

Finally, the MaxEnt-BR method, together with the MaxEnt-SJ method, has been integrated into the open source software package ACFlow [58], which may be useful for the analytic continuation community.

ACKNOWLEDGMENTS

This work is supported by the National Natural Science Foundation of China (No. 12274380 and No. 11934020), and the Central Government Guidance Funds for Local Scientific and Technological Development (No. GUIKE ZY22096024).

Appendix A: Goodness-of-fit functional

Given the spectral function A(ω), the Matsubara Green's function G̃[A] could be reconstructed by using the following equation (the ˜ symbol is used to distinguish the reconstructed Green's function from the input Green's function):

G̃_n[A] = ∫ dω K(iω_n, ω) A(ω), (A1)

where K(iω_n, ω) denotes the kernel function, and n is the index for the Matsubara frequency. For the sake of simplicity, Eq. (A1) is reformulated into its discretized form:

G̃_n[A] = Σ_m K_nm A_m Δ_m, (A2)

where K_nm ≡ K(iω_n, ω_m), A_i ≡ A(ω_i), and Δ_i means the weight of the mesh on the real axis.

The goodness-of-fit functional χ²[A] measures the distance between the input Green's function G and the reconstructed Green's function G̃[A]. Its expression is as follows:

χ²[A] = Σ_{n=1}^{N} (G_n − G̃_n[A])² / σ_n². (A3)

Here we just assume that there are N data points, and σ_n denotes the standard deviation (i.e., the error bar) of G_n. Substituting Eq. (A2) into Eq. (A3), we arrive at:

χ²[A] = Σ_{n=1}^{N} (G_n − Σ_m K_nm A_m Δ_m)² / σ_n². (A4)

The first derivative of χ²[A] with respect to A_i reads:

∂χ²[A]/∂A_i = 2 Σ_{n=1}^{N} (K_ni Δ_i / σ_n²) (Σ_m K_nm A_m Δ_m − G_n). (A5)

Appendix B: Shannon-Jaynes entropy It is actually:

± ∂S SJ [A] 1 ∂χ2 [A]


In this appendix, the technical details for S SJ , S SJ , and the α − = 0. (B9)
MaxEnt-SJ method are reviewed. Most of the equations pre- ∂Ai 2 ∂Ai
sented in this appendix have been derived in Refs. [18], [55], Substituting Eqs. (B3) and (A5) into the above equation:
and [60]. We repeat the mathematical derivation here so that
this paper is self-contained. N
 
Kni ∆i X
! X
Ai
α∆i log + Knm Am ∆m − Gn  = 0. (B10)

σn2

Di n=1 m
1. Entropy
Eliminating ∆i :
The Shannon-Jaynes entropy is defined as follows: N
! X  
Ai Kni X
α log + Knm Am ∆m − Gn  = 0.

(B11)
σn m
2

Di
Z " !#
A(ω)
S SJ [A] = dω A(ω) − D(ω) − A(ω) log , (B1) n=1
D(ω)
Substituting Eqs. (B6) and (B5) into Eq. (B11):
The discretization form of Eq. (B1) is: X
X " !# α Vim um +
Ai
S SJ [A] = ∆i Ai − Di − Ai log . (B2) m
Di N
 
i X 1 X X
Unm ξm Vim  Knl Al ∆l − Gn  = 0.(B12)

The first derivative of S SJ [A] with respect to Ai reads: n=1
σn m
2
l

∂S SJ [A]
!
Ai
P
Now m Vim can be removed from the two terms in the left-
= −∆i log . (B3)
∂Ai Di hand side. Finally we get:
N
 
X 1 X
αum + ξm Unm  Knl Al ∆l − Gn  = 0.

2. Parameterization of spectral function (B13)
σn

2
n=1 l

A singular value decomposition can be readily performed Since Al depends on {um } as well, Eq. (B13) is actually a non-
for the kernel function K: linear equation about {um }. In the ACFlow package [58], the
Newton’s method is adopted to solve it. So we have to evalu-
K = UξV T , (B4) ate the following two variables and pass them to the Newton’s
where U and V are column-orthogonal matrices and ξ is the algorithm [75]:
vector of singular values. So, the matrix element of K reads: N
 
X 1 X
fm = αum + ξm Knl Al ∆l − Gn  ,

X Unm  (B14)
Kni = Unm ξm Vim . (B5) n=1
σn2
l
m

The columns of V can be understood as basis functions for the N


∂ fm X 1 X
spectral function. That is to say they span a so-called singular Jmi = = αδmi + ξm Unm Knl Al ∆l Vli , (B15)
value space. And in the singular value space the spectrum can ∂ui n=1
σn
2
l
be parameterized through:
  where Jmi can be considered as a Jacobian matrix. Note that
X Eq. (B7) is applied to derive Eq. (B15).
Al = Dl exp  Vlm um  .

(B6) Since the calculations for fm and Jmi are quite complicated
m
and time-consuming, the Einstein summation technology is
It is easy to prove that: used to improve the computational efficiency. The following
  three variables should be precomputed and stored during the
∂Al X initialization stage:
= Dl exp  Vlm um  Vli = Al Vli .

(B7)
∂ui m N
X 1
Bm = ξm UnmGn , (B16)
n=1
σ2n
3. Newton’s method
X 1
Next, we will derive the equations for {um }. Let us recall Wml = Unm ξm Unp ξ p Vlp ∆l Dl , (B17)
the stationary condition [see also Eq. (11)]: pn
σ2n

∂Q[A]
= 0. (B8) Wmli = Wml Vli .
∂Ai (B18)
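A possible realization of this precomputation step with numpy's einsum is sketched below (our own illustration, not the ACFlow code). For simplicity it assumes a real kernel, e.g., the imaginary-time kernel; complex Matsubara data would first be split into real and imaginary parts.

```python
import numpy as np

def precompute(G, sigma, K, delta, D):
    """Precompute B_m, W_ml, and W_mli of Eqs. (B16)-(B18)."""
    # Singular value decomposition of the kernel, Eq. (B4): K = U xi V^T.
    U, xi, Vt = np.linalg.svd(K, full_matrices=False)
    V = Vt.T
    w2 = 1.0 / sigma ** 2
    # Eq. (B16): B_m = xi_m sum_n U_nm G_n / sigma_n^2.
    B = xi * np.einsum('n,nm,n->m', w2, U, G)
    # Eq. (B17): W_ml = sum_{pn} U_nm xi_m U_np xi_p V_lp Delta_l D_l / sigma_n^2.
    W = np.einsum('n,nm,m,np,p,lp,l->ml', w2, U, xi, U, xi, V, delta * D, optimize=True)
    # Eq. (B18): W_mli = W_ml V_li.
    Wmli = np.einsum('ml,li->mli', W, V)
    return U, xi, V, B, W, Wmli
```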
Clearly, the three variables only depend on the input Green's function G, the default model D, and the singular value decomposition of the kernel function K. Now f_m and J_mi can be reformulated in terms of them:

f_m = α u_m + Σ_l W_ml w_l − B_m, (B19)

J_mi = α δ_mi + Σ_l W_mli w_l, (B20)

where

w_l = exp(Σ_m V_lm u_m). (B21)

4. Positive-negative entropy

For matrix-valued Green's functions, the spectral functions of the off-diagonal components could exhibit negative weight. However, the ordinary MaxEnt method is only rigorous for non-negative spectra [18]. In order to remedy this problem, one could imagine that the off-diagonal spectral functions originate from a subtraction of two artificial positive functions [55],

A = A⁺ − A⁻, (B22)

A⁻ = A⁺ − A. (B23)

Assuming the independence of A⁺ and A⁻, the resulting entropy S_SJ[A⁺, A⁻] is the sum of the respective entropies:

S_SJ[A⁺, A⁻] = ∫ dω [A⁺ − D − A⁺ log(A⁺/D)] + ∫ dω [A⁻ − D − A⁻ log(A⁻/D)]. (B24)

Thus, S_SJ[A⁺, A⁻] is called the positive-negative entropy in the literature [55, 60]. We first eliminate A⁻,

S_SJ[A, A⁺] = ∫ dω [A⁺ − D − A⁺ log(A⁺/D)] + ∫ dω [(A⁺ − A) − D − (A⁺ − A) log((A⁺ − A)/D)], (B25)

and write:

Q[A, A⁺] = α S_SJ[A, A⁺] − (1/2) χ²[A]. (B26)

Since we are searching for a maximum of Q with respect to A and A⁺, we also apply:

∂Q[A, A⁺]/∂A⁺ = ∂S_SJ[A, A⁺]/∂A⁺ = 0, (B27)

to eliminate A⁺. The following equations show some intermediate steps:

∫ dω [log(A⁺/D) + log((A⁺ − A)/D)] = 0, (B28)

log(A⁺/D) + log((A⁺ − A)/D) = 0, (B29)

A⁺ (A⁺ − A) / D² = 1. (B30)

Finally, we obtain:

A⁺ = (√(A² + 4D²) + A) / 2, (B31)

A⁻ = (√(A² + 4D²) − A) / 2. (B32)

Substituting Eqs. (B31) and (B32) into Eq. (B24) to eliminate A⁺ and A⁻, we obtain:

S_SJ[A⁺, A⁻] = ∫ dω [√(A² + 4D²) − 2D − A log((√(A² + 4D²) + A) / (2D))]. (B33)

Its discretized form becomes:

S_SJ[A⁺, A⁻] = Σ_m [√(A_m² + 4D_m²) − 2D_m − A_m log((√(A_m² + 4D_m²) + A_m) / (2D_m))] Δ_m. (B34)

Note that Eq. (B33) is exactly the same as Eq. (19). S_SJ[A⁺, A⁻] is just S_SJ^±[A]. The first derivative of S_SJ[A⁺, A⁻] with respect to A_i is:

∂S_SJ[A⁺, A⁻]/∂A_i = −Δ_i log(A_i⁺/D_i). (B35)

5. Parameterization for positive-negative spectral functions

According to Eq. (B31) and Eq. (B32), we find:

A⁺ A⁻ = D². (B36)
Inspired by Eq. (B6), it is natural to parameterize A⁺ and A⁻ in the singular value space as well:

A_i⁺ = D_i exp(Σ_m V_im u_m), (B37)

A_i⁻ = D_i exp(−Σ_m V_im u_m). (B38)

Hence,

A_i = D_i exp(Σ_m V_im u_m) − D_i exp(−Σ_m V_im u_m), (B39)

or equivalently

A_i = D_i (w_i − 1/w_i). (B40)

The corresponding derivatives with respect to u_i read:

∂A_l⁺/∂u_i = A_l⁺ V_li, (B41)

∂A_l⁻/∂u_i = −A_l⁻ V_li, (B42)

∂A_l/∂u_i = (A_l⁺ + A_l⁻) V_li = D_l (w_l + 1/w_l) V_li. (B43)

By using the Einstein summation notations as defined in Section B 3, we immediately obtain:

f_m = α u_m + Σ_l W_ml (w_l − 1/w_l) − B_m, (B44)

J_mi = α δ_mi + Σ_l W_mli (w_l + 1/w_l). (B45)

Appendix C: Bayesian reconstruction entropy

In this appendix, the technical details for S_BR, S_BR^±, and the MaxEnt-BR method are discussed.

1. Entropy

The Bayesian reconstruction entropy reads:

S_BR[A] = ∫ dω [1 − A(ω)/D(ω) + log(A(ω)/D(ω))]. (C1)

Its discretized form is:

S_BR[A] = Σ_i Δ_i [1 − A_i/D_i + log(A_i/D_i)]. (C2)

Its first derivative with respect to A_i is:

∂S_BR[A]/∂A_i = −Δ_i (1/D_i − 1/A_i). (C3)

2. Parameterization of spectral function

The original parameterization of A [see Eq. (B6)] cannot be used here. A new parameterization scheme for A is necessary. Assume that

1/D_i − 1/A_i = Σ_m V_im u_m; (C4)

then we have:

A_l = D_l / (1 − D_l Σ_m V_lm u_m). (C5)

So, the first derivative of A_l with respect to u_i reads:

∂A_l/∂u_i = A_l A_l V_li. (C6)

3. Newton's method

Next, we would like to derive f_m and J_mi for the Bayesian reconstruction entropy. Substituting Eq. (C3) into Eq. (B9):

α Δ_i (1/D_i − 1/A_i) + Σ_{n=1}^{N} (K_ni Δ_i / σ_n²) (Σ_m K_nm A_m Δ_m − G_n) = 0. (C7)

Eliminating Δ_i:

α (1/D_i − 1/A_i) + Σ_{n=1}^{N} (K_ni / σ_n²) (Σ_m K_nm A_m Δ_m − G_n) = 0. (C8)

Substituting Eq. (C4) into Eq. (C8):

α Σ_m V_im u_m + Σ_{n=1}^{N} (1/σ_n²) Σ_m U_nm ξ_m V_im (Σ_l K_nl A_l Δ_l − G_n) = 0. (C9)

Eliminating Σ_m V_im again:

α u_m + ξ_m Σ_{n=1}^{N} (U_nm / σ_n²) (Σ_l K_nl A_l Δ_l − G_n) = 0. (C10)

Clearly, Eq. (C10) is the same as Eq. (B13). Thus, the equations for f_m and J_mi are as follows:

f_m = α u_m + ξ_m Σ_{n=1}^{N} (U_nm / σ_n²) (Σ_l K_nl A_l Δ_l − G_n), (C11)

J_mi = ∂f_m/∂u_i = α δ_mi + ξ_m Σ_{n=1}^{N} (1/σ_n²) U_nm Σ_l K_nl A_l A_l Δ_l V_li. (C12)

Note that Eq. (C11) is the same as Eq. (B14). However, Eq. (C12) differs from Eq. (B15) by a factor A_l in the second term of the right-hand side. To derive Eq. (C12), Eq. (C6) is used.
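The extra factor of A_l can be checked quickly by differentiating the parameterization of Eq. (C5) numerically. The snippet below is a self-contained sanity check with arbitrary test values (our own choices for D, V, and u):

```python
import numpy as np

rng = np.random.default_rng(0)
M, k = 40, 10                                        # mesh points and singular vectors
D = np.full(M, 0.5)                                  # a flat default model
V = np.linalg.qr(rng.standard_normal((M, k)))[0]     # column-orthogonal V
u = 0.05 * rng.standard_normal(k)

# Eq. (C5): A_l = D_l / (1 - D_l * sum_m V_lm u_m).
spectrum = lambda u: D / (1.0 - D * (V @ u))

# Eq. (C6): dA_l/du_i = A_l^2 V_li, compared against central finite differences.
A = spectrum(u)
J_analytic = (A ** 2)[:, None] * V
eps = 1e-7
J_fd = np.column_stack([
    (spectrum(u + eps * np.eye(k)[i]) - spectrum(u - eps * np.eye(k)[i])) / (2.0 * eps)
    for i in range(k)
])
print("max deviation:", np.abs(J_analytic - J_fd).max())
```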
Now the Einstein summation notation is adopted to simplify the calculations of f_m and J_mi again:

f_m = α u_m + Σ_l W_ml w_l − B_m, (C13)

J_mi = α δ_mi + Σ_l W_mli D_l w_l w_l, (C14)

where

w_l = 1 / (1 − D_l Σ_m V_lm u_m). (C15)

Here, the definitions of W_ml, W_mli, and B_m can be found in Section B 3.

4. Positive-negative entropy

Next, we would like to generalize the Bayesian reconstruction entropy to realize the positive-negative entropy algorithm, so as to support the analytic continuation of matrix-valued Green's functions [55]. The positive-negative entropy for the Bayesian reconstruction entropy is defined as follows:

S_BR[A⁺, A⁻] = ∫ dω [1 − A⁺/D + log(A⁺/D)] + ∫ dω [1 − A⁻/D + log(A⁻/D)]. (C16)

Since A = A⁺ − A⁻, Eq. (C16) can be transformed into:

S_BR[A, A⁺] = ∫ dω [1 − A⁺/D + log(A⁺/D)] + ∫ dω [1 − (A⁺ − A)/D + log((A⁺ − A)/D)]. (C17)

Then we should eliminate A⁺ in Eq. (C17). Because

∂S_BR[A, A⁺]/∂A⁺ = 0 = ∫ dω [1/A⁺ + 1/(A⁺ − A) − 2/D], (C18)

we immediately get:

A⁺ = (√(A² + D²) + D + A) / 2, (C19)

and

A⁻ = (√(A² + D²) + D − A) / 2. (C20)

From the definitions of A⁺ and A⁻, it is easy to prove:

2 A⁺ A⁻ = (A⁺ + A⁻) D, (C21)

and

A⁺ + A⁻ = √(A² + D²) + D. (C22)

We would like to express S_BR[A⁺, A⁻] in terms of A:

S_BR[A⁺, A⁻] = ∫ dω [2 − A⁺/D − A⁻/D + log(A⁺ A⁻ / D²)]. (C23)

Substituting Eq. (C21) into Eq. (C23):

S_BR[A⁺, A⁻] = ∫ dω [2 − (A⁺ + A⁻)/D + log((A⁺ + A⁻) / (2D))]. (C24)

Substituting Eq. (C22) into Eq. (C24):

S_BR[A⁺, A⁻] = ∫ dω [2 − (√(A² + D²) + D)/D + log((√(A² + D²) + D) / (2D))]. (C25)

Its discretized form reads:

S_BR[A⁺, A⁻] = Σ_m [2 − (√(A_m² + D_m²) + D_m)/D_m + log((√(A_m² + D_m²) + D_m) / (2D_m))] Δ_m. (C26)

Note that Eq. (C25) is exactly the same as Eq. (20). S_BR[A⁺, A⁻] is just S_BR^±[A]. Then the first derivative of S_BR[A⁺, A⁻] with respect to A_i is:

∂S_BR[A⁺, A⁻]/∂A_i = −Δ_i (1/D_i − 1/A_i⁺). (C27)
This equation is very similar to Eq. (C3).

5. Parameterization for positive-negative spectral functions

Now we assume that:

A_l⁺ = D_l / (1 − D_l Σ_m V_lm u_m), (C28)

and

A_l⁻ = D_l / (1 + D_l Σ_m V_lm u_m). (C29)

We also introduce:

w_l⁺ = 1 / (1 − D_l Σ_m V_lm u_m), (C30)

and

w_l⁻ = 1 / (1 + D_l Σ_m V_lm u_m). (C31)

So A_l⁺ = D_l w_l⁺, A_l⁻ = D_l w_l⁻, and A_l = D_l (w_l⁺ − w_l⁻). It is easy to verify that the definitions of A_l⁺ and A_l⁻ obey Eq. (C21). The first derivatives of A_l⁺ and A_l⁻ with respect to u_i are:

∂A_l⁺/∂u_i = A_l⁺ A_l⁺ V_li, (C32)

and

∂A_l⁻/∂u_i = −A_l⁻ A_l⁻ V_li. (C33)

The first derivative of A_l with respect to u_i is:

∂A_l/∂u_i = A_l⁺ A_l⁺ V_li + A_l⁻ A_l⁻ V_li. (C34)

After some simple algebra, we easily get:

f_m = α u_m + Σ_l W_ml (w_l⁺ − w_l⁻) − B_m, (C35)

J_mi = α δ_mi + Σ_l W_mli D_l (w_l⁺ w_l⁺ + w_l⁻ w_l⁻). (C36)
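To make the whole procedure concrete, a minimal Newton iteration for the MaxEnt-BR case without the positive-negative extension is sketched below; it uses the quantities of Eqs. (B16)-(B18) [e.g., from the hypothetical precompute() sketched after Eq. (B18)] together with Eqs. (C13)-(C15). The MaxEnt-SJ case differs only in the definitions of w_l, f_m, and J_mi [Eqs. (B19)-(B21)]. In practice the Newton step is damped, the positivity of 1 − D_l Σ_m V_lm u_m is guarded, and the procedure is repeated for a grid of α values (e.g., for the χ²-kink analysis); those details are omitted here.

```python
import numpy as np

def solve_maxent_br(alpha, V, D, B, W, Wmli, max_iter=200, tol=1e-10):
    """Newton search for the stationary point of Q[A] with the BR entropy.

    V, D       : singular vectors of the kernel and the default model
    B, W, Wmli : precomputed arrays of Eqs. (B16)-(B18)
    Returns the singular-space coefficients u_m and the spectrum A(omega).
    """
    u = np.zeros(B.size)                                  # start from A = D (u = 0)
    for _ in range(max_iter):
        w = 1.0 / (1.0 - D * (V @ u))                     # Eq. (C15)
        f = alpha * u + W @ w - B                         # Eq. (C13)
        J = alpha * np.eye(B.size) + np.einsum('mli,l->mi', Wmli, D * w * w)  # Eq. (C14)
        u = u + np.linalg.solve(J, -f)                    # undamped Newton step
        if np.max(np.abs(f)) < tol:
            break
    A = D / (1.0 - D * (V @ u))                           # Eq. (C5)
    return u, A
```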

[1] John W. Negele and Henri Orland, Quantum Many-Particle Systems (Perseus Books, 1998).
[2] Piers Coleman, Introduction to Many-Body Physics (Cambridge University Press, 2016).
[3] Emanuel Gull, Andrew J. Millis, Alexander I. Lichtenstein, Alexey N. Rubtsov, Matthias Troyer, and Philipp Werner, "Continuous-time Monte Carlo methods for quantum impurity models," Rev. Mod. Phys. 83, 349–404 (2011).
[4] W. M. C. Foulkes, L. Mitas, R. J. Needs, and G. Rajagopal, "Quantum Monte Carlo simulations of solids," Rev. Mod. Phys. 73, 33–83 (2001).
[5] James Gubernatis, Naoki Kawashima, and Philipp Werner, Quantum Monte Carlo Methods: Algorithms for Lattice Models (Cambridge University Press, 2016).
[6] Walter Metzner, Manfred Salmhofer, Carsten Honerkamp, Volker Meden, and Kurt Schönhammer, "Functional renormalization group approach to correlated fermion systems," Rev. Mod. Phys. 84, 299–352 (2012).
[7] C. Platt, W. Hanke, and R. Thomale, "Functional renormalization group for multi-orbital Fermi surface instabilities," Adv. Phys. 62, 453–562 (2013).
[8] Judith Harl and Georg Kresse, "Accurate Bulk Properties from Approximate Many-Body Techniques," Phys. Rev. Lett. 103, 056401 (2009).
[9] Guo P. Chen, Vamsee K. Voora, Matthew M. Agee, Sree Ganesh Balasubramani, and Filipp Furche, "Random-Phase Approximation Methods," Annu. Rev. Phys. Chem. 68, 421–445 (2017).
[10] Lars Hedin, "New Method for Calculating the One-Particle Green's Function with Application to the Electron-Gas Problem," Phys. Rev. 139, A796–A823 (1965).
[11] F. Aryasetiawan and O. Gunnarsson, "The GW method," Rep. Prog. Phys. 61, 237 (1998).
[12] Giovanni Onida, Lucia Reining, and Angel Rubio, "Electronic excitations: density-functional versus many-body Green's-function approaches," Rev. Mod. Phys. 74, 601–659 (2002).
[13] Jan Gukelberger, Li Huang, and Philipp Werner, "On the dangers of partial diagrammatic summations: Benchmarks for the two-dimensional Hubbard model in the weak-coupling regime," Phys. Rev. B 91, 235114 (2015).
[14] K. S. D. Beach, R. J. Gooding, and F. Marsiglio, "Reliable Padé analytical continuation method based on a high-accuracy symbolic computation algorithm," Phys. Rev. B 61, 5147–5157 (2000).
[15] Johan Schött, Inka L. M. Locht, Elin Lundin, Oscar Grånäs, Olle Eriksson, and Igor Di Marco, "Analytic continuation by averaging Padé approximants," Phys. Rev. B 93, 075104 (2016).
[16] H. J. Vidberg and J. W. Serene, "Solving the Eliashberg equations by means of N-point Padé approximants," J. Low Temp. Phys. 29, 179–192 (1977).
[17] Gergely Markó, Urko Reinosa, and Zsolt Szép, "Padé approximants and analytic continuation of Euclidean Φ-derivable approximations," Phys. Rev. D 96, 036002 (2017).
[18] Mark Jarrell and J. E. Gubernatis, "Bayesian inference and the analytic continuation of imaginary-time quantum Monte Carlo data," Phys. Rep. 269, 133–195 (1996).
[19] J. E. Gubernatis, Mark Jarrell, R. N. Silver, and D. S. Sivia, "Quantum Monte Carlo simulations and maximum entropy: Dynamics from imaginary-time data," Phys. Rev. B 44, 6011–6029 (1991).
[20] R. N. Silver, D. S. Sivia, and J. E. Gubernatis, "Maximum-entropy method for analytic continuation of quantum Monte Carlo data," Phys. Rev. B 41, 2380–2389 (1990).
[21] Anders W. Sandvik, "Stochastic method for analytic continuation of quantum Monte Carlo data," Phys. Rev. B 57, 10287–10290 (1998).
[22] Anders W. Sandvik, "Constrained sampling method for analytic continuation," Phys. Rev. E 94, 063308 (2016).
[23] Hui Shao and Anders W. Sandvik, "Progress on stochastic analytic continuation of quantum Monte Carlo data," Phys. Rep. 1003, 1–88 (2023).
[24] K. S. D. Beach, "Identifying the maximum entropy method as a special limit of stochastic analytic continuation," (2004), arXiv:0403055 [cond-mat.str-el].
[25] Sebastian Fuchs, Thomas Pruschke, and Mark Jarrell, "Analytic continuation of quantum Monte Carlo data by stochastic analytical inference," Phys. Rev. E 81, 056701 (2010).
[26] K. Vafayi and O. Gunnarsson, "Analytical continuation of spectral data from imaginary time axis to real frequency axis using statistical sampling," Phys. Rev. B 76, 035115 (2007).
[27] David R. Reichman and Eran Rabani, "Analytic continuation average spectrum method for quantum liquids," J. Chem. Phys. 131, 054502 (2009).
[28] Khaldoon Ghanem and Erik Koch, "Extending the average spectrum method: Grid point sampling and density averaging," Phys. Rev. B 102, 035114 (2020).
[29] Khaldoon Ghanem and Erik Koch, "Average spectrum method for analytic continuation: Efficient blocked-mode sampling and dependence on the discretization grid," Phys. Rev. B 101, 085111 (2020).
[30] Olav F. Syljuåsen, "Using the average spectrum method to extract dynamics from quantum Monte Carlo simulations," Phys. Rev. B 78, 174429 (2008).
[31] A. S. Mishchenko, N. V. Prokof'ev, A. Sakamoto, and B. V. Svistunov, "Diagrammatic quantum Monte Carlo study of the Fröhlich polaron," Phys. Rev. B 62, 6317–6336 (2000).
[32] F. Bao, Y. Tang, M. Summers, G. Zhang, C. Webster, V. Scarola, and T. A. Maier, "Fast and efficient stochastic optimization for analytic continuation," Phys. Rev. B 94, 125149 (2016).
[33] Olga Goulko, Andrey S. Mishchenko, Lode Pollet, Nikolay Prokof'ev, and Boris Svistunov, "Numerical analytic continuation: Answers to well-posed questions," Phys. Rev. B 95, 014102 (2017).
[34] Igor Krivenko and Malte Harland, "TRIQS/SOM: Implementation of the stochastic optimization method for analytic continuation," Comput. Phys. Commun. 239, 166–183 (2019).
[35] Igor Krivenko and Andrey S. Mishchenko, "TRIQS/SOM 2.0: Implementation of the stochastic optimization with consistent constraints for analytic continuation," Comput. Phys. Commun. 280, 108491 (2022).
[36] Li Huang and Shuang Liang, "Stochastic pole expansion method for analytic continuation of the Green's function," Phys. Rev. B 108, 235143 (2023).
[37] Li Huang and Shuang Liang, "Reconstructing lattice QCD spectral functions with stochastic pole expansion and Nevanlinna analytic continuation," (2023), arXiv:2309.11114 [hep-lat].
[38] Junya Otsuki, Masayuki Ohzeki, Hiroshi Shinaoka, and Kazuyoshi Yoshimi, "Sparse modeling approach to analytical continuation of imaginary-time quantum Monte Carlo data," Phys. Rev. E 95, 061302 (2017).
[39] Yuichi Motoyama, Kazuyoshi Yoshimi, and Junya Otsuki, "Robust analytic continuation combining the advantages of the sparse modeling approach and the Padé approximation," Phys. Rev. B 105, 035139 (2022).
[40] Jiani Fei, Chia-Nan Yeh, and Emanuel Gull, "Nevanlinna Analytical Continuation," Phys. Rev. Lett. 126, 056402 (2021).
[41] Jiani Fei, Chia-Nan Yeh, Dominika Zgid, and Emanuel Gull, "Analytical continuation of matrix-valued functions: Carathéodory formalism," Phys. Rev. B 104, 165111 (2021).
[42] Zhen Huang, Emanuel Gull, and Lin Lin, "Robust analytic continuation of Green's functions via projection, pole estimation, and semidefinite relaxation," Phys. Rev. B 107, 075151 (2023).
[43] Lei Zhang and Emanuel Gull, "Minimal Pole Representation and Controlled Analytic Continuation of Matsubara Response Functions," (2023), arXiv:2312.10576 [cond-mat.str-el].
[44] Lexing Ying, "Pole Recovery From Noisy Data on Imaginary Axis," J. Sci. Comput. 92, 107 (2022).
[45] Lexing Ying, "Analytic continuation from limited noisy Matsubara data," J. Comput. Phys. 469, 111549 (2022).
[46] Romain Fournier, Lei Wang, Oleg V. Yazyev, and QuanSheng Wu, "Artificial Neural Network Approach to the Analytic Continuation Problem," Phys. Rev. Lett. 124, 056401 (2020).
[47] Hongkee Yoon, Jae-Hoon Sim, and Myung Joon Han, "Analytic continuation via domain knowledge free machine learning," Phys. Rev. B 98, 245101 (2018).
[48] Dongchen Huang and Yi-feng Yang, "Learned optimizers for analytic continuation," Phys. Rev. B 105, 075112 (2022).
[49] Rong Zhang, Maximilian E. Merkel, Sophie Beck, and Claude Ederer, "Training biases in machine learning for the analytic continuation of quantum many-body Green's functions," Phys. Rev. Res. 4, 043082 (2022).
[50] Juan Yao, Ce Wang, Zhiyuan Yao, and Hui Zhai, "Noise enhanced neural networks for analytic continuation," Mach. Learn.: Sci. Technol. 3, 025010 (2022).
[51] Louis-François Arsenault, Richard Neuberg, Lauren A. Hannah, and Andrew J. Millis, "Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics," Inv. Prob. 33, 115007 (2017).
[52] Nathan S. Nichols, Paul Sokol, and Adrian Del Maestro, "Parameter-free differential evolution algorithm for the analytic continuation of imaginary time correlation functions," Phys. Rev. E 106, 025312 (2022).
[53] Dominic Bergeron and A.-M. S. Tremblay, "Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation," Phys. Rev. E 94, 023303 (2016).
[54] Jae-Hoon Sim and Myung Joon Han, "Maximum quantum entropy method," Phys. Rev. B 98, 205102 (2018).
[55] Gernot J. Kraberger, Robert Triebl, Manuel Zingl, and Markus Aichhorn, "Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions," Phys. Rev. B 96, 155128 (2017).
[56] Ryan Levy, J. P. F. LeBlanc, and Emanuel Gull, "Implementation of the maximum entropy method for analytic continuation," Comput. Phys. Commun. 215, 149–155 (2017).
[57] F. F. Assaad, M. Bercx, F. Goth, A. Götz, J. S. Hofmann, E. Huffman, Z. Liu, F. Parisen Toldin, J. S. E. Portela, and J. Schwab, "The ALF (Algorithms for Lattice Fermions) project release 2.0. Documentation for the auxiliary-field quantum Monte Carlo code," SciPost Phys. Codebases 1 (2022).
[58] Li Huang, "ACFlow: An open source toolkit for analytic continuation of quantum Monte Carlo data," Comput. Phys. Commun. 292, 108863 (2023).
[59] Kristjan Haule, Chuck-Hou Yee, and Kyoo Kim, "Dynamical mean-field theory within the full-potential methods: Electronic structure of CeIrIn5, CeCoIn5, and CeRhIn5," Phys. Rev. B 81, 195107 (2010).
[60] Josef Kaufmann and Karsten Held, "ana_cont: Python package for analytic continuation," Comput. Phys. Commun. 282, 108519 (2023).
[61] Athanasios Papoulis, Probability and Statistics (Prentice Hall, 1989).
[62] M. De Raychaudhury, E. Pavarini, and O. K. Andersen, "Orbital Fluctuations in the Different Phases of LaVO3 and YVO3," Phys. Rev. Lett. 99, 126402 (2007).
[63] Jan M. Tomczak and Silke Biermann, "Effective band structure of correlated materials: the case of VO2," J. Phys.: Condens. Matter 19, 365206 (2007).
[64] E. Gull and A. J. Millis, "Pairing glue in the two-dimensional Hubbard model," Phys. Rev. B 90, 041110 (2014).
[65] A. Reymbaut, D. Bergeron, and A.-M. S. Tremblay, "Maximum entropy analytic continuation for spectral functions with nonpositive spectral weight," Phys. Rev. B 92, 060509 (2015).
[66] A. Reymbaut, A.-M. Gagnon, D. Bergeron, and A.-M. S. Tremblay, "Maximum entropy analytic continuation for frequency-dependent transport coefficients with nonpositive spectral weight," Phys. Rev. B 95, 121104 (2017).
[67] Emanuel Gull, Olivier Parcollet, and Andrew J. Millis, "Superconductivity and the Pseudogap in the Two-Dimensional Hubbard Model," Phys. Rev. Lett. 110, 216405 (2013).
[68] Changming Yue and Philipp Werner, "Maximum entropy analytic continuation of anomalous self-energies," (2023), arXiv:2303.16888 [cond-mat.supr-con].
[69] E. T. Jaynes, "Information Theory and Statistical Mechanics," Phys. Rev. 106, 620–630 (1957).
[70] A. N. Tikhonov, A. V. Goncharsky, V. V. Stepanov, and A. G. Yagola, Numerical Methods for the Solution of Ill-Posed Problems (Springer Netherlands, 1995).
[71] M. Asakawa, Y. Nakahara, and T. Hatsuda, "Maximum entropy analysis of the spectral functions in lattice QCD," Prog. Part. Nucl. Phys. 46, 459–508 (2001).
[72] Yannis Burnier and Alexander Rothkopf, "Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories," Phys. Rev. Lett. 111, 182003 (2013).
[73] Alexander Rothkopf, "Bayesian inference of real-time dynamics from lattice QCD," Front. Phys. 10, 1028995 (2022).
[74] Alexander Rothkopf, "Bayesian inference of nonpositive spectral functions in quantum field theory," Phys. Rev. D 95, 056016 (2017).
[75] Jorge Nocedal and Stephen J. Wright, Numerical Optimization (Springer New York, NY, 2006).
[76] E. D. Laue, J. Skilling, and J. Staunton, "Maximum entropy reconstruction of spectra containing antiphase peaks," J. Magn. Reson. 63, 418–424 (1985).
[77] Sibusiso Sibisi, "Quantified Maxent: An NMR Application," in Maximum Entropy and Bayesian Methods, edited by Paul F. Fougère (Springer Netherlands, Dordrecht, 1990) pp. 351–358.
[78] R. K. Bryan, "Maximum entropy analysis of oversampled data problems," Eur. Biophys. J. 18, 165–174 (1990).
[79] J. Skilling, "Fundamentals of MaxEnt in data analysis," in Maximum Entropy in Action: A Collection of Expository Essays (Oxford University Press, 1991).
[80] O. Gunnarsson, M. W. Haverkort, and G. Sangiovanni, "Analytical continuation of imaginary axis data for optical conductivity," Phys. Rev. B 82, 165125 (2010).
[81] J. Schött, E. G. C. P. van Loon, I. L. M. Locht, M. I. Katsnelson, and I. Di Marco, "Comparison between methods of analytical continuation for bosonic functions," Phys. Rev. B 94, 245140 (2016).
