Supplementary Material

Chenyang Li,1,∗ Li Qian,1 and Hoi-Kwong Lo1,2

1 Center for Quantum Information and Quantum Control, Department of Electrical & Computer Engineering, University of Toronto, Toronto, M5S 3G4, Canada
2 Department of Physics, University of Toronto, Toronto, M5S 3G4, Canada

In this appendix, we first recall the parameter estimation and key rate computation of a perfect continuous variable (CV) quantum key distribution (QKD) system in Sec. I A. Then, in Secs. I B and I C, we calculate the equivalent transmittance and excess noise for intensity fluctuating sources based on Alice's different recorded data, corresponding to cases (2A) and (2B). Finally, we compute the secret key rate with finite-size effects in Sec. I D.


I. SUPPLEMENTARY NOTES

A. Parameter estimation of a perfect CV QKD system

In this section, we briefly review the parameter estimation and key rate computation process for a perfect CV QKD system. With a series of correlated data X_A and X_B, Alice and Bob can estimate the channel parameters. Our model of choice is a Gaussian channel with fixed transmittance T and excess noise ε. These values satisfy [1, 2]:
V_A = \langle X_A^2 \rangle,    (1)

V_B = \langle X_B^2 \rangle = T\eta (V_A + \varepsilon) + 1 + v_{el},

\mathrm{COV}(X_A, X_B) = \langle X_A X_B \rangle = \sqrt{T\eta}\, V_A,

where η and v_el are the detection efficiency and electronic noise of the homodyne detector that Bob needs to calibrate in advance.
Then the parameter estimates for T and ε take the forms

\sqrt{T} = \frac{\langle X_A X_B \rangle}{\sqrt{\eta}\, \langle X_A^2 \rangle},    (2)

\varepsilon = \frac{\langle (X_B - \sqrt{T\eta}\, X_A)^2 \rangle - 1 - v_{el}}{T\eta}.
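For concreteness, the estimators in Eqs. (1)-(2) can be checked numerically. The following is a minimal sketch (not part of the original analysis): it simulates the Gaussian channel of Eq. (1) with assumed, illustrative values of V_A, T, η, ε and v_el, and then recovers T and ε from the correlated data.

```python
# Minimal sketch of the estimators in Eqs. (1)-(2); all parameter values
# (V_A, T, eta, eps, v_el, n) are illustrative assumptions, not values
# taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
V_A, T, eta, eps, v_el = 4.0, 0.5, 0.6, 0.05, 0.1   # shot-noise units

X_A = rng.normal(0.0, np.sqrt(V_A), n)
# Gaussian channel + homodyne detection: the added noise variance is
# T*eta*eps + 1 + v_el, so that V_B matches Eq. (1).
X_B = np.sqrt(T * eta) * X_A + rng.normal(0.0, np.sqrt(T * eta * eps + 1.0 + v_el), n)

# Eq. (2): sqrt(T) = <X_A X_B> / (sqrt(eta) <X_A^2>)
T_hat = (np.mean(X_A * X_B) / (np.sqrt(eta) * np.mean(X_A**2)))**2
eps_hat = (np.mean((X_B - np.sqrt(T_hat * eta) * X_A)**2) - 1.0 - v_el) / (T_hat * eta)
print(T_hat, eps_hat)    # approximately recovers the assumed T = 0.5 and eps = 0.05
```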

Considering the realistic model, i.e., Bob's detector is not accessible to Eve, the mutual information I_{AB} and the Holevo bound χ_{BE} take the forms [1, 3]

I_{AB} = \frac{1}{2} \log_2 \frac{V_A + 1 + \chi_{tot}}{1 + \chi_{tot}},    (3)

\chi_{BE} = G\!\left(\frac{\lambda_1 - 1}{2}\right) + G\!\left(\frac{\lambda_2 - 1}{2}\right) - G\!\left(\frac{\lambda_3 - 1}{2}\right) - G\!\left(\frac{\lambda_4 - 1}{2}\right),    (4)

∗ Electronic address: chenyangli@ece.utoronto.ca


with

G(x) = (x + 1)\log_2 (x + 1) - x \log_2 x; \qquad \chi_{tot} = \chi_{line} + \chi_{hom}/T;    (5)

\chi_{line} = 1/T - 1 + \varepsilon; \qquad \chi_{hom} = (1 + v_{el})/\eta - 1;    (6)

\lambda_{1,2}^2 = \frac{1}{2}\left(A \pm \sqrt{A^2 - 4B}\right); \qquad \lambda_{3,4}^2 = \frac{1}{2}\left(C \pm \sqrt{C^2 - 4D}\right);    (7)
A = V^2 (1 - 2T) + 2T + T^2 (V + \chi_{line})^2; \qquad B = T^2 (V\chi_{line} + 1)^2;    (8)
C = \frac{V\sqrt{B} + T(V + \chi_{line}) + A\chi_{hom}}{T(V + \chi_{tot})}; \qquad D = \sqrt{B}\, \frac{V + \sqrt{B}\,\chi_{hom}}{T(V + \chi_{tot})}.    (9)
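As an illustration of Eqs. (3)-(9), the sketch below evaluates the asymptotic reverse-reconciliation rate I_AB − χ_BE under the realistic (trusted-detector) model. It assumes the usual convention V = V_A + 1 and perfect reconciliation (no β factor); these choices and all numerical values are our illustrative assumptions, not part of the original analysis.

```python
# Sketch of Eqs. (3)-(9): asymptotic reverse-reconciliation key rate
# I_AB - chi_BE. Assumes V = V_A + 1 and perfect reconciliation;
# the parameter values below are illustrative, not from the paper.
import numpy as np

def G(x):
    # G(x) = (x + 1) log2(x + 1) - x log2(x), with G(0) = 0, Eq. (5)
    return (x + 1.0) * np.log2(x + 1.0) - (x * np.log2(x) if x > 0 else 0.0)

def asymptotic_rate(V_A, T, eps, eta, v_el):
    V = V_A + 1.0
    chi_line = 1.0 / T - 1.0 + eps                     # Eq. (6)
    chi_hom = (1.0 + v_el) / eta - 1.0
    chi_tot = chi_line + chi_hom / T                   # Eq. (5)

    I_AB = 0.5 * np.log2((V_A + 1.0 + chi_tot) / (1.0 + chi_tot))       # Eq. (3)

    A = V**2 * (1.0 - 2.0 * T) + 2.0 * T + T**2 * (V + chi_line)**2     # Eq. (8)
    B = T**2 * (V * chi_line + 1.0)**2
    C = (V * np.sqrt(B) + T * (V + chi_line) + A * chi_hom) / (T * (V + chi_tot))  # Eq. (9)
    D = np.sqrt(B) * (V + np.sqrt(B) * chi_hom) / (T * (V + chi_tot))

    lam12 = [np.sqrt(0.5 * (A + s * np.sqrt(A**2 - 4.0 * B))) for s in (1, -1)]    # Eq. (7)
    lam34 = [np.sqrt(0.5 * (C + s * np.sqrt(C**2 - 4.0 * D))) for s in (1, -1)]
    chi_BE = (G((lam12[0] - 1) / 2) + G((lam12[1] - 1) / 2)
              - G((lam34[0] - 1) / 2) - G((lam34[1] - 1) / 2))                     # Eq. (4)
    return I_AB - chi_BE

# Positive rate (about 0.14 bits per channel use) for these illustrative values.
print(asymptotic_rate(V_A=4.0, T=0.5, eps=0.05, eta=0.6, v_el=0.1))
```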

B. Equivalent transmittance and excess noise in the source for case (2A)

Here, we show the parameter estimation process for case (2A). If Alice's recorded data is X_A and the actual encoded data is \sqrt{k}\, X_A, the equivalent transmittance can be obtained according to Eq. (2):

\sqrt{T_s} = \frac{\langle x_{A1} x_{A0} \rangle}{\langle x_{A0}^2 \rangle} = \frac{\langle \sqrt{k}\, X_A X_A \rangle}{\langle X_A^2 \rangle} = \langle \sqrt{k} \rangle.
Recall that the average E[k] = 1 (see the intensity fluctuation model in the main text). By using a Taylor expansion, we obtain

\sqrt{k} = [1 + (k - 1)]^{1/2} = 1 + \frac{k - 1}{2} - \frac{(k - 1)^2}{8} + O((k - 1)^3).    (10)
Now, the equivalent transmittance can be shown as
T_s = \langle \sqrt{k} \rangle^2 \simeq \left(1 - \frac{1}{8} V_k\right)^2.    (11)
Next, the equivalent excess noise εs can be obtained from

1 + T_s (V_A + \varepsilon_s) = \langle x_{A1}^2 \rangle + 1 = \langle k X_A^2 \rangle + 1 = V_A + 1.    (12)

Therefore,
\varepsilon_s = \frac{V_A}{T_s} - V_A = \frac{V_A}{\left(1 - \frac{1}{8} V_k\right)^2} - V_A \simeq V_A \left(1 + \frac{1}{4} V_k\right) - V_A = \frac{1}{4} V_A V_k.
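The approximations T_s ≃ (1 − V_k/8)^2 and ε_s ≃ V_A V_k/4 can be checked with a short Monte Carlo simulation. The sketch below assumes, purely for illustration, a uniform intensity-fluctuation distribution with E[k] = 1; the specific distribution and all numbers are our assumptions, not the model of the main text.

```python
# Monte Carlo check of Eqs. (11)-(12) for case (2A): Alice records X_A but
# actually encodes sqrt(k)*X_A. The uniform choice of k and all numerical
# values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, V_A = 10**6, 4.0
k = rng.uniform(0.8, 1.2, n)             # E[k] = 1, V_k = 0.4**2/12 ~ 0.0133
V_k = np.var(k)
x_A0 = rng.normal(0.0, np.sqrt(V_A), n)  # recorded data
x_A1 = np.sqrt(k) * x_A0                 # actually encoded data

T_s = (np.mean(x_A1 * x_A0) / np.mean(x_A0**2))**2          # from Eq. (2)
eps_s = (np.mean(x_A1**2) - T_s * np.mean(x_A0**2)) / T_s   # from Eq. (12)
print(T_s, (1.0 - V_k / 8.0)**2)         # both ~0.997
print(eps_s, V_A * V_k / 4.0)            # both ~0.013
```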

C. Equivalent transmittance and excess noise in the source for case (2B)


Here, we show the parameter estimation process for case (2B). If Alice's recorded data is \sqrt{k_{max}}\, X_A while the actual encoded data is \sqrt{k}\, X_A, the equivalent transmittance can be obtained according to Eq. (2):
\sqrt{T_s} = \frac{\langle x_{A1} x_{A0} \rangle}{\langle x_{A0}^2 \rangle} = \frac{\langle \sqrt{k}\, X_A \cdot \sqrt{k_{max}}\, X_A \rangle}{\langle k_{max} X_A^2 \rangle} = \frac{\sqrt{k_{max}}\, \langle \sqrt{k}\, X_A X_A \rangle}{k_{max} \langle X_A^2 \rangle} = \langle \sqrt{k} \rangle / \sqrt{k_{max}}.
Recall that the average E[k] = 1. By using a Taylor expansion, we obtain

\sqrt{k} = [1 + (k - 1)]^{1/2} = 1 + \frac{k - 1}{2} - \frac{(k - 1)^2}{8} + O((k - 1)^3).    (13)

Now, the equivalent transmittance can be shown as


T_s = \langle \sqrt{k} \rangle^2 / k_{max} \simeq \left(1 - \frac{1}{8} V_k\right)^2 / k_{max}.    (14)
Next, the equivalent excess noise εs can be obtained from
1 + T_s (k_{max} V_A + \varepsilon_s) = \langle x_{A1}^2 \rangle + 1 = \langle k X_A^2 \rangle + 1 = V_A + 1.    (15)
Therefore,
\varepsilon_s = \frac{V_A}{T_s} - k_{max} V_A = \frac{k_{max} V_A}{\left(1 - \frac{1}{8} V_k\right)^2} - k_{max} V_A \simeq k_{max} V_A \left(1 + \frac{1}{4} V_k\right) - k_{max} V_A = \frac{1}{4} V_A V_k k_{max}.

D. Finite-size scenario

In this section, we compute the key rate under the finite-size scenario. Without loss of generality, we consider case (2B) as an example. The secret key rate R_{2B}^{R} with finite-size effects can be written as [2, 4]:

R_f = \frac{n}{N} \left\{ R_{2B}^{R}(m^L, T^L, \varepsilon^U) - \Delta(m^L) \right\},    (16)
where n is the number of Gaussian states used for secret key transmission, m^L is the lower bound on the number of untagged Gaussian states, N is the total number of received Gaussian states, and \Delta(m^L) is related to the security of the privacy amplification in the finite case.
For simplicity, we use the same approximate formula for \Delta(m^L) as obtained in [2, 4]:

\Delta(m^L) \approx 7 \sqrt{\frac{\log_2 (2/\epsilon_{PA})}{m^L}},    (17)

where \epsilon_{PA} is the probability of error during privacy amplification.
With the failure probability \epsilon_{PE} of parameter estimation, the bounds T^L and \varepsilon^U can be expressed as [2, 4]

T^L \approx \left\{ \sqrt{T} - z_{\epsilon_{PE}/2} \sqrt{\frac{1 + T\varepsilon}{q V_A}} \right\}^2,    (18)

\varepsilon^U \approx \varepsilon + z_{\epsilon_{PE}/2} \frac{\sqrt{2}}{T\sqrt{q}},    (19)

where q is the number of Gaussian states used for parameter estimation, and z_{\epsilon_{PE}/2} is such that \left(1 - \mathrm{erf}(z_{\epsilon_{PE}/2}/\sqrt{2})\right)/2 = \epsilon_{PE}/2, where \mathrm{erf} is the error function defined as \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt. For example, if \epsilon_{PE} = 10^{-10}, one has z_{\epsilon_{PE}/2} \approx 6.5.
Note that the values of T and ε here are the overall transmittance and excess noise, including the source imperfection effect and a channel model with fixed parameters. For an example, see Eq. (16) in the main text.
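The bounds above are straightforward to evaluate. The sketch below computes z_{\epsilon_{PE}/2} from the erf relation quoted above and then T^L and ε^U from Eqs. (18)-(19); the block size and channel parameters are illustrative assumptions.

```python
# Sketch of Eqs. (18)-(19): worst-case transmittance and excess noise for a
# finite parameter-estimation block. All numerical values are assumptions.
import numpy as np
from scipy.special import erfinv

def z_of(eps):
    # z such that (1 - erf(z / sqrt(2))) / 2 = eps / 2
    return np.sqrt(2.0) * erfinv(1.0 - eps)

eps_PE = 1e-10
z = z_of(eps_PE)                          # ~6.5, as quoted in the text

q = 5 * 10**8                             # states used for parameter estimation
V_A, T, eps = 4.0, 0.5, 0.05              # assumed channel parameters

T_L = (np.sqrt(T) - z * np.sqrt((1.0 + T * eps) / (q * V_A)))**2   # Eq. (18)
eps_U = eps + z * np.sqrt(2.0) / (T * np.sqrt(q))                  # Eq. (19)
print(z, T_L, eps_U)
```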
Next, we need to obtain the lower bound m^L on the number of untagged Gaussian states. Without statistical fluctuations, the probability of an untagged Gaussian state is p_s. The number of successes m follows a binomial distribution with parameters n and p_s. The mean value of m is n p_s and the variance of m is n p_s (1 - p_s). With the failure probability \epsilon_{ugs} of the untagged Gaussian states, the lower bound m^L can be expressed as [5]

m^L = n p_s - z_{\epsilon_{ugs}/2} \sqrt{n p_s (1 - p_s)}.    (20)
The lower bound of the probability, p_s^L, can be expressed as [5]

p_s^L = \frac{m^L}{n} = p_s - z_{\epsilon_{ugs}/2} \sqrt{\frac{p_s (1 - p_s)}{n}}.    (21)
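Finally, a short sketch of Eqs. (20), (21) and (17), showing how the binomial lower bounds and the privacy-amplification penalty enter the finite-size rate of Eq. (16). The values of N, n, p_s and the failure probabilities are illustrative assumptions; R_{2B}^{R}(m^L, T^L, ε^U) itself would be evaluated from Eqs. (3)-(9) with the worst-case parameters.

```python
# Sketch of Eqs. (20)-(21) and of the finite-size penalty Delta(m^L) of
# Eq. (17). N, n, p_s and the epsilons are illustrative assumptions.
import numpy as np
from scipy.special import erfinv

def z_of(eps):
    # z such that (1 - erf(z / sqrt(2))) / 2 = eps / 2
    return np.sqrt(2.0) * erfinv(1.0 - eps)

N = 10**9                  # total received Gaussian states
n = N // 2                 # states kept for key transmission
p_s = 0.95                 # assumed probability of an untagged Gaussian state
eps_ugs, eps_PA = 1e-10, 1e-10

z = z_of(eps_ugs)
m_L = n * p_s - z * np.sqrt(n * p_s * (1.0 - p_s))        # Eq. (20)
p_s_L = p_s - z * np.sqrt(p_s * (1.0 - p_s) / n)          # Eq. (21)
Delta = 7.0 * np.sqrt(np.log2(2.0 / eps_PA) / m_L)        # Eq. (17)

# Eq. (16): R_f = (n/N) * (R_2B(m_L, T_L, eps_U) - Delta); only the
# correction terms are printed here.
print(m_L, p_s_L, Delta)
```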

[1] J. Lodewyck et al., Quantum key distribution over 25 km with an all-fiber continuous-variable system, Phys. Rev. A 76, 042305 (2007).
[2] A. Leverrier, F. Grosshans, and P. Grangier, Phys. Rev. A 81, 062343 (2010).
[3] S. Fossier, E. Diamanti, T. Debuisschert, A. Villing, R. Tualle-Brouri, and P. Grangier, New Journal of Physics 11, 045023 (2009).
[4] L. Ruppert, V. C. Usenko, and R. Filip, Phys. Rev. A 90, 062310 (2014).
[5] Binomial proportion confidence interval, https://en.wikipedia.org/wiki/Binomial_distribution.
