
Electrical Engineering in Japan, Vol. 163, No. 4, 2008
Translated from Denki Gakkai Ronbunshi, Vol. 126-C, No. 12, December 2006, pp. 1474–1482

Properties of Quantum Gain of Coding with Information Criterion by Binary Linear Codes

YUKI ISHIDA,1 SHOGO USAMI,2 TSUYOSHI SASAKI USUDA,1 and ICHI TAKUMI3
1 Aichi Prefectural University, Japan
2 Meijo University, Japan
3 Nagoya Institute of Technology, Japan

SUMMARY

In quantum information theory, superadditivity of the capacity of a quantum channel occurs as a special property. We present a method of calculating the mutual information analytically for binary linear codes by using square-root measurement as the decoding process. Many examples of codes showing the existence of superadditivity in capacity have been given in the past, but the scope of the method was not clarified. In the present paper, we show that the method can be applied to any binary linear code. We also show that the quantum channel capacity is almost fully attained at a finite codeword length by using an approximation for simplex codes. © 2008 Wiley Periodicals, Inc. Electr Eng Jpn, 163(4): 48–57, 2008; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20646

Key words: quantum information theory; quantum channel capacity; superadditivity; square-root measurement; quantum gain by coding.

Contract grant sponsor: Ministry of Education, Culture, Sports, Science and Technology, as a part of the SCOPE project of the Ministry of Internal Affairs and Communications.

1. Introduction

In contrast to conventional information theory, quantum information theory includes superadditivity of the channel capacity [1, 2]. Usually, when a channel is used twice, the amount of transmittable information increases by no more than a factor of two; with superadditivity, however, more than twice as much information may be transmitted. That is, the superadditivity of a communication channel capacity may be attributed to an essential increase of the amount of transmittable information due to information source extension.

Recently, superadditivity was demonstrated clearly by the existence of codes exhibiting superadditivity at finite codeword length in the case of binary pure-state signals [3, 4]. On the other hand, no matter how ingenious the coding scheme, superadditivity cannot be realized in the case of individual decoding. Entanglement must be involved in the decoding process in order to obtain superadditivity; that is, quantum combined measurement (entanglement measurement) is required. In addition, as shown in Ref. 3, even if entanglement is involved in the decoding process, an increase in the amount of transmittable information cannot be achieved so long as all signal sequences derived by the n-th order extension of the information source are used with equal probability. This means that the signal sequences must be selected from the n-th order extension of the information source in order to increase the information content. Since selection of the signal sequences is equivalent to coding, the increase of information content due to superadditivity of the communication channel capacity is called the quantum gain of coding.

Usually, an effort is made to reduce the decoding error rate in coding. However, when the discussion is conducted in terms of superadditivity of the quantum channel capacity, it is the information content that is estimated. The information content does not increase unless entanglement, a specifically quantum effect, is employed in the decoding scheme; therefore, such an information increase is called a quantum gain rather than merely a coding gain.

In addition, the world's first proof-of-principle experiments on superadditive quantum gain were carried out at the National Institute of Information and Communications Technology (NICT, formerly the Communications Research Laboratory) [5]. Linearly dependent signals of a single-photon system were used in the experiments.


In the future, the experiments must be extended to linearly independent signals (coherent-state systems) in order to implement ultrafast, ultrareliable quantum communication. However, even in the case of binary signals (the simplest linearly independent system), such basic properties as the relationship between the quantum gain and the code structure have not yet been made clear. In addition, the quantum gain implemented so far has been very small, only about 1.1 (versus uncoded transmission) in most cases. The probable reason is that matrix eigenvalues and eigenvectors must be calculated when finding the mutual information for a specific code by quantum combined measurement.

By using the group covariance of signals [6], the authors have proposed a method [7] of finding the mutual information analytically for the case that the decoding process is square-root measurement (SRM). Although we expected this method to be applicable to arbitrary binary linear codes, in Ref. 7 we could demonstrate its applicability only to single parity check codes, and its range of application remained unclear.

In this paper, we prove that the method of Ref. 7 allows the analytic solution of the mutual information for arbitrary binary linear codes. In addition, we consider a further simplification of the analytic solution for BCH codes and their extensions, and the achievement factor of the quantum gain for codewords as long as about 10^10. We also use an approximation to find the mutual information of simplex codes, and examine the quantum gain properties for extremely long codes with lengths of about 10^300 to show that the quantum channel capacity is almost fully attained.

Below, base-2 logarithms are assumed unless otherwise specified.

2. Superadditivity of Quantum Channel Capacity and Mutual Information

2.1 Superadditivity of quantum channel capacity

Here we deal with a transmission model of classical information via a quantum channel. Let {i | i = 0, 1, . . . , M − 1} denote the alphabet of a classical information source, and {ξi} denote its a priori probabilities. The classical output information j for classical input information i is obtained by quantum measurement of quantum states (signals) ρ̂i in one-to-one correspondence with the inputs. Generally, the quantum measurement process can be considered as a noisy channel without one-to-one correspondence of the classical input–output information. The communication channel is characterized by the conditional probability P(j|i) as follows:

P(j|i) = Tr(ρ̂i Π̂j)    (1)

Here {Π̂j} denotes the quantum measurement process and is called the decision operator. P(j|i) is the probability that j is obtained when i is sent. Thus, the mutual information is defined just as in conventional information theory:

I(X;Y) = Σ_i ξi Σ_j P(j|i) log [ P(j|i) / Σ_{i′} ξ_{i′} P(j|i′) ]    (2)

The maximum of Eq. (2) with respect to the quantum measurement process and the a priori probabilities is the channel capacity at a codeword length of 1 (or the maximum mutual information without coding), C1 [8, 9]:

C1 = max_{ {ξi}, {Π̂j} } I(X;Y)    (3)

The channel capacity Cn at a codeword length of n for the n-th order extension of the information source (or the maximum mutual information with coding) is defined similarly.

The superadditivity of the channel capacity is expressed as follows, with Cn denoting the capacity of the quantum channel at a codeword length of n:

Cn ≥ nC1    (4)

Strictly speaking, superadditivity is recognized if the strict inequality applies in the above expression. Now, according to the quantum channel coding theorem, the maximum rate C of errorless transmission is determined as the following limit value:

C = lim_{n→∞} Cn / n    (5)

This is called the quantum channel capacity [2].

Thus, the simplest condition of quantum superadditivity is nC1 < Cn. In all examples of superadditivity reported so far [3, 4, 10], the left inequality sign in the following expression applies:

0 < In(X;Y)/n − C1 ≤ Cn/n − C1    (6)

Here In(X;Y)/n − C1 is a lower bound of Cn/n − C1. Therefore, In/n − C1 > 0 means that Cn/n − C1 > 0, which is indicative of strict superadditivity. In this study, we refer to In(X;Y)/n − C1 as the quantum gain. Since the highest quantum gain is C − C1, it is meaningful to compare the actual quantum gain In(X;Y)/n − C1 of specific codes with this highest possible value.

2.2 Estimation of mutual information by square-root measurement

Appropriate coding and decoding must be performed to show the superadditivity of the quantum channel capacity. Here we follow the previous papers on superadditivity [3, 4].

In particular, we perform coding by appropriate selection of codewords (the a priori probabilities of the selected codewords are assumed equal), while using square-root measurement (SRM) for decoding. SRM is a decoding process intended to minimize the error rate for arbitrary binary linear codes [3, 11]. Thus, the SRM for an M-ary pure-state signal system {|ψi〉 | i = 0, 1, . . . , M − 1} with equal a priori probabilities is {Π̂j^(srm) = |µj〉〈µj| | j = 0, 1, . . . , M − 1}. Here the measurement quantum state |µj〉 is defined as follows:

|µj〉 = ( Σ_{i=0}^{M−1} |ψi〉〈ψi| )^{−1/2} |ψj〉    (7)

According to Ref. 12, the inner product of the signal |ψi〉 and the measurement quantum state |µj〉 is equal to the i-th-row, j-th-column element of the square root of the Gram matrix ΓM = [〈ψi|ψj〉] of the signal system:

〈ψi|µj〉 = (ΓM^{1/2})_{i,j}    (8)

Therefore, when SRM is applied, the channel matrix is

P(j|i) = |〈µj|ψi〉|² = [(ΓM^{1/2})_{i,j}]²    (9)

Now suppose that codeword states of length n are composed of binary letter states {|0〉, |1〉}, and the code is produced by selecting 2^k codewords. The elements of the Gram matrix can be expressed as follows, using the inner product of the letter states κ = 〈0|1〉 and the Hamming distance dH(ψi, ψj) between the corresponding classical codewords:

(Γ2^k)_{i,j} = κ^{dH(ψi, ψj)}    (10)

The mutual information in the case of SRM can be found from the channel matrix as follows:

I(X;Y) = (1/2^k) Σ_{i=0}^{2^k−1} Σ_{j=0}^{2^k−1} P(j|i) log [ 2^k P(j|i) / Σ_{i′=0}^{2^k−1} P(j|i′) ]    (11)

However, finding the eigenvalues and eigenvectors remains a difficult task, and the square-root calculation of a high-degree Gram matrix is difficult.

3. Analytic Solution for Mutual Information of Binary Linear Codes

3.1 Derivation of analytic solution for mutual information

Here we show that the SRM formula for the mutual information proposed in Ref. 7 holds true for arbitrary binary linear codes. In the previous paper, the formula was applied to single parity check codes as an example of codes meeting condition (29) in Ref. 7. Here we examine the range of application of the formula in another way.

Consider linear codes composed of the corresponding classical code when codeword states of length n are composed of binary letter states {|0〉, |1〉}, and M = 2^k codewords are selected. From the group covariance of linear codes, the mutual information In(X;Y) is found as follows [7]:

In(X;Y) = Σ_{j=0}^{2^k−1} P(j|0) log [2^k P(j|0)]    (12)

Thus, the mutual information can be found by calculating the 0-th row P(j|0) of the channel matrix, that is, the 0-th row (Γ2^k^{1/2})_{0,j} of the square root of the Gram matrix. In addition, the following is true for the 0-th row of the square root of the Gram matrix:

(Γ2^k^{1/2})_{0,j} = (1/2^k) Σ_{k′=0}^{2^k−1} (−1)^{ωH(j∧k′)} √[ Σ_{l=0}^{2^k−1} (−1)^{ωH(k′∧l)} (Γ2^k)_{0,l} ]    (13)

Here ωH(i) denotes the Hamming weight of i expressed as a binary number, and ∧ denotes the bitwise logical product. Therefore, the first and second exponents represent the bit counts of the binary logical products of j and k′, and of k′ and l, respectively.

Proposition 1
Equation (13) gives an analytic solution of the square root of the Gram matrix for arbitrary binary linear codes.

Lemma 1
The eigenvalues λj and eigenvectors lj of the Gram matrix Γ2^k of the covariant signals for a group with exclusive OR ⊕ are expressed as follows:

λj = Σ_{l=0}^{2^k−1} (−1)^{ωH(j∧l)} (Γ2^k)_{0,l}    (14)

(lj)_i = (1/√(2^k)) (−1)^{ωH(i∧j)},  i = 0, 1, . . . , 2^k − 1    (15)

Here j = 0, 1, . . . , 2^k − 1.

Proof
To prove the lemma, it will suffice to show that

Γ2^k lj = λj lj    (16)

First, the i-th-row element of the column vector Γ2^k lj of the Gram matrix of the covariant signals for a group with exclusive OR ⊕ is

(Γ2^k lj)_i = (1/√(2^k)) Σ_l (−1)^{ωH(l∧j)} (Γ2^k)_{0,i⊕l} = (1/√(2^k)) (−1)^{ωH(i∧j)} Σ_l (−1)^{ωH(l∧j)} (Γ2^k)_{0,l}    (17)

The i-th-row element of the column vector λj lj is

(λj lj)_i = (1/√(2^k)) (−1)^{ωH(i∧j)} Σ_l (−1)^{ωH(j∧l)} (Γ2^k)_{0,l}    (18)

Thus, Eq. (16) is true, as is evident from Eqs. (17) and (18).

All eigenvectors must be orthonormalized in order to calculate the square root of the Gram matrix using eigenvalues and eigenvectors. In general, the eigenvalues (14) may be degenerate. However, the eigenvectors (15) are mutually orthogonal and normalized.

Lemma 2
Eigenvectors (15) are orthonormal.

Proof

lj^T lj′ = (1/2^k) Σ_{i=0}^{2^k−1} (−1)^{ωH(i∧j)} (−1)^{ωH(i∧j′)} = (1/2^k) Σ_{i=0}^{2^k−1} (−1)^{ωH(i∧(j⊕j′))}    (19)

If j = j′, then j ⊕ j′ = 0; therefore,

lj^T lj = (1/2^k) Σ_{i=0}^{2^k−1} (−1)^0 = 1    (20)

Thus, the eigenvectors are normalized.

On the other hand, if j ≠ j′, then j″ = j ⊕ j′ ≠ 0. In this case, at least one nonzero bit exists in the binary notation of j″. Now suppose that the u-th bit is nonzero (i.e., is 1). If the u-th bit is zero when a nonnegative integer v smaller than 2^k − 1 is represented by k bits, then the u-th bit of v + 2^u is 1. Then,

(−1)^{ωH(v∧j″)} + (−1)^{ωH((v+2^u)∧j″)} = 0    (21)

In addition, when arbitrary nonnegative integers smaller than 2^k − 1 are represented by k bits, there are always pairs that differ only in the u-th bit. Therefore,

lj^T lj′ = (1/2^k) Σ_{i=0}^{2^k−1} (−1)^{ωH(i∧j″)} = 0    (22)

Thus, different eigenvectors are mutually orthogonal.

Therefore, Eqs. (12) and (13) give an analytic solution of the mutual information of covariant signals for a group with exclusive OR ⊕. Since covariant signals for a group with exclusive OR ⊕ form binary linear codes, Proposition 1 is true.

It should be noted that the expression under the radical on the right-hand side of Eq. (13) is an eigenvalue of the Gram matrix, and therefore is nonnegative, because the Gram matrix is positive definite.

3.2 Quantum pseudo-cyclic code

3.2.1 Derivation

Analytic solution (12), (13) for the mutual information can be applied so long as the corresponding classical code is a binary linear code. Now consider a binary pseudo-cyclic code. Let G(x) denote the generator polynomial, and Ai(x) denote a polynomial with weights corresponding to the binary notation of i. Consider the following i-th codeword polynomial with codeword length n:

Wi(x) = Ai(x)G(x) = Σ_{k=0}^{n−1} wi,k x^k    (23)

Here wi,k ∈ {0, 1}; therefore, assuming |wi,k〉 ∈ {|0〉, |1〉}, the corresponding quantum codeword is

|Ψi〉 = |wi,n−1〉|wi,n−2〉 · · · |wi,0〉    (24)

Thus, a quantum pseudo-cyclic code can be obtained. Since a pseudo-cyclic code is a linear code, the mutual information can be found analytically as explained above. When the generator polynomial G(x) is of degree m, the number of codewords is 2^{n−m} and (n, n − m) codes can be generated. When G(x) = 1, selection of codewords is not performed. In this case, the mutual information per symbol is equal to the maximum mutual information without coding, C1, regardless of the codeword length. When G(x) = x + 1, the quantum code corresponds to an (n, n − 1) single parity code for a codeword length of n; in this case, a simplified analysis and the characteristics have already been obtained [7].
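As a numerical illustration (our own sketch, not code from the paper; the function names and the (3, 2) single parity check example are our choices), the analytic solution (12), (13) can be implemented directly and cross-checked against a brute-force square root of the Gram matrix:

```python
import numpy as np

def wt(x):
    """Hamming weight of the integer x."""
    return bin(x).count("1")

def encode(i, rows):
    """Binary linear encoding: XOR of the generator-matrix rows selected
    by the bits of the message index i (codewords stored as integers)."""
    c = 0
    for b, row in enumerate(rows):
        if (i >> b) & 1:
            c ^= row
    return c

def srm_mutual_information(rows, kappa):
    """I_n(X;Y) of Eq. (12) for equiprobable codewords decoded by SRM,
    with the 0-th row of the square root of the Gram matrix taken from
    the analytic expression, Eq. (13)."""
    M = 2 ** len(rows)
    cw = [encode(i, rows) for i in range(M)]
    g0 = [kappa ** wt(c) for c in cw]              # 0-th Gram row, Eq. (10)
    lam = [sum((-1) ** wt(kp & l) * g0[l] for l in range(M))
           for kp in range(M)]                     # eigenvalues, Eq. (14)
    row = [sum((-1) ** wt(j & kp) * np.sqrt(max(lam[kp], 0.0))
               for kp in range(M)) / M for j in range(M)]
    p0 = [r * r for r in row]                      # P(j|0), Eq. (9)
    return sum(p * np.log2(M * p) for p in p0 if p > 1e-15)

def srm_mi_direct(rows, kappa):
    """Same quantity via numerical eigendecomposition of the Gram matrix."""
    M = 2 ** len(rows)
    cw = [encode(i, rows) for i in range(M)]
    G = np.array([[kappa ** wt(cw[i] ^ cw[j]) for j in range(M)]
                  for i in range(M)])
    w, V = np.linalg.eigh(G)
    P = (V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T) ** 2
    return sum(p * np.log2(M * p) for p in P[0] if p > 1e-15)
```

For the (3, 2) single parity check code (generator rows 101 and 011) at κ = 0.95, both routes give I3 ≈ 0.227, so I3/3 ≈ 0.0757 exceeds C1 ≈ 0.0715: a quantum gain of roughly 1.06 over uncoded transmission, consistent with the figures quoted in the Introduction.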

Fig. 1. Ratios of mutual information per symbol of codes generated by G(x) = x² + 1 to uncoded channel capacity C1 as function of κ.

When applying the analytic solution of the mutual information to generator polynomials of second and higher orders, we found that many pseudo-cyclic codes have quantum gain (except for some cases with low coding efficiency), which offers the potential of superadditivity (see Fig. 1).

3.2.2 Pseudo-cyclic code splitting and mutual information

In a classical pseudo-cyclic code, splitting into multiple codes with generator polynomials of lower order may prove possible. In such cases, the mutual information of the corresponding quantum pseudo-cyclic code found by SRM is the sum of the mutual information values of the quantum codes corresponding to the split codes. Therefore, there is no need for data about codes obtained by synthesis. In addition, a synthesized code has the same degree of quantum gain as the component codes, which imposes a lower limit on the maximum mutual information for long codes. But the characteristics of the mutual information for short codes are included in those for long codes, and the degree of achievement of high quantum gain increases with the codeword length.

Below we give examples of splittable classical codes.

(1) For G(x) = x^n + 1

Suppose that the i-th codeword of the generator polynomial G(x) = x^n + 1 (n ≤ m) of an (m + n + 1, m + 1) pseudo-cyclic code is represented as Wi(x) = Ai(x)G(x), Ai(x) = ai,m x^m + · · · + ai,1 x + ai,0. Then,

(25)
(26)
(27)

Writing x^n = X for every D_i^(j)(x)/x^j, we have

(28)

which is a (k + 2, k + 1) single parity code. Thus, the pseudo-cyclic code generated by G(x) = x^n + 1 can be split into n single parity codes. In addition, the form of D_i^(j)(x)/x^j denotes the extraction of the (ln + j)-th bit (l = 0, 1, . . . , k + 1) from the LSD (least significant digit) of the original code.

(2) For G(x) = x^{2n} + x^n + 1

Similarly, consider the case of G(x) = x^{2n} + x^n + 1. Suppose that the i-th codeword of the generator polynomial G(x) = x^{2n} + x^n + 1 (n ≤ m) of an (m + 2n + 1, m + 1) pseudo-cyclic code is represented as Wi(x) = Ai(x)G(x), Ai(x) = ai,m x^m + · · · + ai,1 x + ai,0. Then,

(29)
(30)

Writing x^n = X for every D_i^(j)(x)/x^j, we have

(31)

which is a (k + 3, k + 1) code generated by G(x) = x² + x + 1. Thus, the pseudo-cyclic code generated by G(x) = x^{2n} + x^n + 1 can be split into n (k + 3, k + 1) codes.

(3) For G(x) = Gr(x)²

Generally, the following holds true for polynomials over the binary field:

[Gr(x)]² = Gr(x²)    (32)

Using this relationship when G(x) = [Gr(x)]², the pseudo-cyclic code generated from G(x) can be split into two pseudo-cyclic codes generated from Gr(x). By repeated application of this procedure, splitting into 2^n pseudo-cyclic codes can be implemented for G(x) = [Gr(x)]^{2^n}.

3.3 Simplified analytic solutions of mutual information for specific codes

3.3.1 Single parity code and simplex code

For single parity codes, a simplified form of analytic solution (13) has already been proposed in Ref. 7. Such simplified solutions can be obtained only if the Hamming weights are small in number and their distribution is known. From Eq. (10), the Gram matrix can be found using the Hamming distances between codewords; however, the 0-th-row elements of the matrix can be found from just the Hamming distance between every codeword and the zero codeword 00 . . . 0, that is, the Hamming weight of every codeword. Therefore, a small number of Hamming weights is equivalent to a small number of distinct elements in the 0-th row of the Gram matrix. An analytic solution for simplex codes is proposed in Ref. 3. Specific code features are utilized, such as the uniform distribution of the Hamming weights: there are only two Hamming weights, 0 and M/2, and hence there are only two distinct elements in the 0-th row of the Gram matrix, 1 and κ^{M/2}.

3.3.2 BCH code

We attempted to simplify the analytic solution for the (2^m − 1, m + 1) code. This code is a primitive BCH code that exists for arbitrary m ≥ 2. It offers high error correction capability, but the coding efficiency decreases drastically with the codeword length. As regards the Hamming weight distribution, generally there are four values: 2^{m−1} − 1 and 2^{m−1} [(2^m − 1) times each], 0, and 2^m − 1. The mutual information can be expressed as follows:

(33)

3.3.3 Extended BCH code

For binary linear codes whose minimum Hamming distance is an odd number, the minimum Hamming distance can be augmented by 1 by adding a parity bit to every codeword. The (2^m − 1, m + 1) code considered above is a primitive BCH code with an odd minimum Hamming distance 2^{m−1} − 1, thus meeting the above condition. We also obtained a simplified analytic solution for the (2^m, m + 1) code derived by adding a parity check bit to the (2^m − 1, m + 1) code. This is an extension of the primitive BCH code that exists for arbitrary m ≥ 2 [m = 2 gives just a (4, 3) parity code]. As regards the Hamming weight distribution, there are three values: 2^{m−1} [(2^{m+1} − 2) times], 0, and 2^m; that is, the minimum Hamming weight is 2^{m−1}. The mutual information for this code can be expressed as follows:

(34)
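The simplification described above can be sketched for the simplex code, whose two distinct Hamming weights give a 0-th Gram row with only two distinct entries, collapsing Eq. (13) to a closed form with just two probabilities (our own sketch, not the authors' code; the function name is illustrative):

```python
import numpy as np

def simplex_mi(m, kappa):
    """Mutual information of the (2^m - 1, m) simplex code under SRM,
    using that all M - 1 nonzero codewords have Hamming weight 2^(m-1),
    so the 0-th Gram row contains only the entries 1 and kappa**(M/2)."""
    M = 2 ** m
    t = kappa ** (M // 2)
    lam0 = 1 + (M - 1) * t                   # nondegenerate eigenvalue
    lam1 = 1 - t                             # (M - 1)-fold degenerate eigenvalue
    a = (np.sqrt(lam0) + (M - 1) * np.sqrt(lam1)) / M   # (Gamma^(1/2))_{0,0}
    b = (np.sqrt(lam0) - np.sqrt(lam1)) / M             # (Gamma^(1/2))_{0,j}, j != 0
    p_a, p_b = a * a, b * b                  # P(0|0) and P(j|0), Eq. (9)
    mi = p_a * np.log2(M * p_a)              # Eq. (12), two-term form
    if p_b > 0:
        mi += (M - 1) * p_b * np.log2(M * p_b)
    return mi
```

At κ = 0 the codeword states are orthogonal and the code carries its full m bits; at κ = 1 the states coincide and the mutual information vanishes.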

4. Properties of Quantum Gain

4.1 On the existence of quantum gain

In quantum state coding with n-th order extension, the mutual information does not exceed nC1 if all 2^n codewords are used as quantum codewords, without selection. This is because the result of optimizing the decision operator is equivalent to that of individual measurement [3]. Therefore, for codes using the entire codeword space, that is, codes of the largest scale (with the greatest number of codewords), the mutual information per symbol in the whole range 0 ≤ κ ≤ 1 after optimum decoding is equal to C1. The measurement process providing such optimum decoding is individual measurement, in which SRM is applied to every letter.

Now consider the n-th order extension of binary quantum signals {|0〉, |1〉} with equal a priori probabilities. Consider in particular a code that selects only two quantum codewords |ψ0〉 = |0〉|0〉 · · · |0〉 and |ψ1〉 = |1〉|1〉 · · · |1〉, that is, the code of smallest scale (with the smallest number of codewords). In this case, the Gram matrix Γ2, its square root (Γ2)^{1/2}, and the channel matrix are as follows:

Γ2 = [ 1, κ^n ; κ^n, 1 ],
(Γ2)^{1/2} = (1/2) [ √(1+κ^n) + √(1−κ^n), √(1+κ^n) − √(1−κ^n) ; √(1+κ^n) − √(1−κ^n), √(1+κ^n) + √(1−κ^n) ],
P(j|i) = [(Γ2^{1/2})_{i,j}]²    (35)

In the channel matrix, all diagonal elements are the same, as are the off-diagonal elements. That is,

P(0|0) = P(1|1) = [1 + √(1 − κ^{2n})]/2    (36)

P(1|0) = P(0|1) = [1 − √(1 − κ^{2n})]/2    (37)

From Eq. (12), the mutual information per symbol (1/n)In(X;Y) is

(1/n)In(X;Y) = (1/n)[1 + P(0|0) log P(0|0) + P(1|0) log P(1|0)]    (38)

Figure 2 shows a comparison between the mutual information per symbol of a two-word quantum code with codeword length (extension order) n = 2, 3, 4 and the channel capacity C1 for a unit codeword length. As is evident from the diagram, the mutual information per symbol decreases with higher extension order (greater Hamming distance between the codewords). In addition, Eq. (38) indicates that without redundant bits (n = 1), the mutual information per symbol is equal to C1. Therefore, the quantum gain cannot be achieved in a binary code with two codewords of length n. Thus, the following is true (equality occurs at n = 1):

(1/n)In(X;Y) ≤ C1    (39)

Fig. 2. Mutual information per symbol of quantum code with two codewords and C1 as function of κ.

A combination of such codes would also not provide quantum gain. Thus, although the (minimum) Hamming distance between codewords has a strong effect on the quantum gain, one cannot simply say that the greater the minimum Hamming distance, the higher the quantum gain that is achieved. Since two-word quantum codes cannot offer quantum gain, the simplest linear code that demonstrates superadditivity is a single parity code with 4 codewords of length 3.

On the other hand, binary linear codes, except for the two cases mentioned above, offer quantum gain in many cases, although the gain properties vary with the generator polynomial and the coding efficiency.

Table 1. Achievement factor of quantum gain for primitive BCH codes with codeword length of 15 and 31 as function of κ.
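The two-codeword analysis reduces to a classical binary symmetric channel, so Eqs. (36)–(39) can be checked numerically. The sketch below is ours, with C1 taken as the standard optimal (Helstrom-limit) mutual information for symmetric binary pure states, which coincides with the n = 1 case of Eq. (38):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def two_codeword_mi_per_symbol(n, kappa):
    """(1/n) I_n(X;Y) for the code {|0...0>, |1...1>} under SRM: a binary
    symmetric channel with error probability (1 - sqrt(1 - kappa^(2n)))/2,
    Eqs. (36)-(38)."""
    p_err = (1.0 - math.sqrt(1.0 - kappa ** (2 * n))) / 2.0
    return (1.0 - h2(p_err)) / n

def c1(kappa):
    """Maximum mutual information without coding for binary pure states;
    identical to the n = 1 case above."""
    return two_codeword_mi_per_symbol(1, kappa)
```

Scanning n and κ confirms Eq. (39): the per-symbol mutual information never exceeds C1, with equality only at n = 1, so this repetition-type code yields no quantum gain.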
4.2 Achievement of quantum gain

Quantum gain occurs when the mutual information of a code of length n satisfies the following condition:

(1/n)In(X;Y) − C1 > 0    (40)

The objective of our research is the achievement of the quantum channel capacity C. The achievement factor AF of the quantum gain is defined as follows:

AF = [(1/n)In(X;Y) − C1] / (C − C1)    (41)

AF = 1 means that the highest quantum gain is achieved. In addition, let us denote the maximum achievement factor over κ at a given codeword length as

AFmax = max_κ AF    (42)

For BCH and other codes with lengths up to 127, the maximum achievement factor shows nearly the same value for a given codeword length (see Fig. 3). If this tendency holds true for longer codes, finding the maximum achievement factor for a certain class of codes is very significant. At the least, a lower limit of the achievement factor with respect to the codeword length can be obtained.

We found the maximum value of the achievement factor for (2^m − 1, m) simplex codes, (2^m − 1, m + 1) BCH codes, and (2^m, m + 1) extended BCH codes up to a certain codeword length (see Table 1). An achievement factor of about 2/3 is obtained for any of the three codes at a codeword length of 2^30.

4.3 Limiting characteristic of quantum gain achievement

One of our objectives is to investigate how the mutual information approaches the quantum channel capacity. This issue is considered in reliability function theory [16, 17]. In the preceding section, we considered AFmax for codes of a given length. However, dealing with longer codes is difficult because the κ giving the maximum achievement factor becomes very close to 1, and the mutual information becomes extremely small. Thus, we applied a Maclaurin series approximation to simplex codes (similar to BCH codes in their characteristics) in order to calculate AFmax. Specifically, the following approximation formula was used for the analytic solution of the mutual information of a simplex code [3]:

(43)

Here

(44)
(45)

Fig. 3. Achievement factor of quantum gain for primitive BCH codes with codeword length of 15 and 31 as function of κ.
Fig. 4. Maximum achievement factor of quantum gain for simplex codes as a function of code scale m.

Figure 4 presents the exact and approximate AFmax for simplex codes as functions of the code scale parameter m (which corresponds to the number of information symbols). In a simplex code, x decreases as m increases, which corresponds to the approximation conditions. The approximation agrees closely with the strict solution at m > 10. AFmax increases with the code length, without saturation, and the full quantum channel capacity can be almost reached when κ is very close to 1. In simplex codes, the κ resulting in AFmax approaches 1 as m increases. This is because the parameter m in a simplex code governs not only the codeword length but also the coding efficiency. On the other hand, the codeword length and coding efficiency can be designed independently for BCH codes. Thus, one can assume that in a BCH code, the quantum channel capacity can be achieved over nearly the entire range of κ.

5. Conclusions

We have shown in this paper that the previously proposed analytic solution for the mutual information applies to arbitrary binary linear codes. We applied this analytic solution to various quantum pseudo-cyclic codes and verified its effectiveness. In addition, we proposed simplified analytic solutions for certain code classes and examined the achievement factor of the quantum gain for long codes. Specifically, we introduced the Maclaurin series approximation for the mutual information of simplex codes, and examined the maximum achievement factor of the quantum gain for very long codes. We found that the achievement factor does not saturate, and that the full quantum channel capacity can be nearly reached.

Considering more realistic situations, mixed quantum states with classical noise must be considered rather than pure states. However, for an attenuated channel, the most important quantum channel model for representing optical fiber channels and space communication channels, it was shown in 2004 that the quantum channel capacity under energy constraints is achieved via coherent states [18], thus reconfirming the importance of coherent states. Since coherent-state properties are maintained even in the case of attenuation, an attenuated channel outputs pure states. Thus, a study limited to pure states has some significance.

In this study, we have investigated the quantum gain of coding from the viewpoint of the information criterion. However, coding is used primarily to minimize the error probability, and hence, in the future, the quantum gain properties must be explored in terms of the error probability criterion.

Acknowledgment

The present study was supported by financial aid from the Ministry of Education, Culture, Sports, Science and Technology, as a part of the SCOPE project of the Ministry of Internal Affairs and Communications.

REFERENCES

1. Kholevo AS. Information-theoretical aspects of quantum measurement. Problemy Peredachi Informatsii 1973;9:31–42.
2. Kholevo AS. Capacity of a quantum communication channel. Problemy Peredachi Informatsii 1979;15:3–11.
3. Sasaki M, Kato K, Izutsu M, Hirota O. Quantum channels showing superadditivity in classical capacity. Phys Rev 1998;A58:146–158.
4. Kato K, Osaki M, Hirota O. Calculation of mutual information for quantum code words with very long length. Abstracts of 4th International Conference on Quantum Communication, Measurement, and Computing (QCM 98), p 30.
5. Takeoka M, Fujiwara M, Mizuno J, Sasaki M. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit. Phys Rev 2004;A69:052329.
6. Usuda TS, Takumi I. Group covariant signals in quantum information theory. In: Kumar P, D'Ariano GM, Hirota O (editors). Quantum communication, computing, and measurement 2. Plenum Press; 2000. p 37–42.
7. Usami S, Usuda TS, Takumi I, Hata M. Trans IEICE 1999;E82-A:2185–2190.
8. Ban M, Kurokawa K, Hirota O. Cut-off rate performance of quantum communication channels with symmetric states. Quantum Semiclass Opt 1999;1:206–218.
9. Osaki M, Ban M, Hirota O. On maximum mutual information without coding. In: Kumar P, D'Ariano GM, Hirota O (editors). Quantum communication, computing, and measurement 2. Plenum Press; 2000. p 17–26.
10. Hirota O. A foundation of quantum channels with super additiveness for Shannon information. Appl Algebra Eng Commun Comput 2000;10:401–423.
11. Usuda TS, Takumi I, Hata M, Hirota O. Minimum error detection of classical linear code sending through a quantum channel. Phys Lett 1999;A256:104–108.
12. Hausladen P, Jozsa R, Schumacher B, Westmoreland M, Wootters WK. Classical information capacity of a quantum channel. Phys Rev 1996;A54:1869–1876.
13. Usami S, Usuda TS, Takumi I, Nakano R, Hata M. Superadditivity in capacity of quantum channel by classical pseudo-cyclic codes. In: Tombesi P, Hirota O (editors). Quantum communication, computing, and measurement 3. Plenum Press; 2001. p 35–38.
14. Peres A, Wootters WK. Optimal detection of quantum information. Phys Rev Lett 1991;66:1119–1122.
15. Ban M, Yamazaki K, Hirota O. Accessible information in combined and sequential quantum measurement on a binary-state signal. Phys Rev 1997;A55:22–26.
16. Burnashev MV, Holevo AS. On reliability function of quantum communication channel. quant-ph/9703013, 1997.
17. Kurokawa K, Hirota O. Properties of quantum reliability function and its applications to several quantum signals. Trans IEICE 2000;J83-A:57–66.
18. Giovannetti V, Guha S, Lloyd S, Maccone L, Shapiro JH, Yuen HP. Classical capacity of the lossy bosonic channel: The exact solution. Phys Rev Lett 2004;92:027902.
19. Hirota O. Foundation of quantum information science. Morikita; 2002.

AUTHORS (from left to right)

Yuki Ishida (nonmember) graduated in 2005 from the Faculty of Information Science and Technology, Aichi Prefectural
University, and completed the master’s program in 2007. While in the master’s program, he pursued research on quantum
information theory. He is a member of IEICE.

Shogo Usami (nonmember) graduated in 1997 from the Faculty of Engineering, Nagoya Institute of Technology, majoring
in AI and computer science, completed the doctoral program in 2002, and was appointed a postdoctoral researcher in 2002.
Since 2004 he has been an assistant professor in the Department of Information Engineering at Meijo University. His current
research interests are coding theory and quantum information theory. He holds a D.Eng. degree, and is a member of IEICE,
IPSJ, and SITA.

Tsuyoshi Sasaki Usuda (member) graduated in 1990 from the Faculty of Engineering, Tamagawa University, majoring
in information and communications, completed the doctoral program in 1995, and was appointed a research associate at Nagoya
Institute of Technology. Since 2002 he has been an associate professor in the Department of Applied Information Technology
at Aichi Prefectural University. His current research interests are quantum information theory and its applications. He holds a
D.Eng. degree, and is a member of IEICE and SITA.

Ichi Takumi (member) graduated in 1982 from the Faculty of Engineering, Nagoya Institute of Technology, majoring in electronic engineering, completed the master's program in 1984, and was appointed a research associate in 1985. He is currently a professor in the Department of Computer Science, Nagoya Institute of Technology. His research interests include digital communications and adaptive system processing. He holds a D.Eng. degree, and is a member of IEICE and SICE.

