Essays on Coding Theory

Critical coding techniques have developed over the past few decades for data storage,
retrieval and transmission systems, significantly mitigating costs for governments and
corporations that maintain server systems containing large amounts of data. This book
surveys the basic ideas of these coding techniques, which tend not to be covered in the
graduate curricula, including pointers to further reading. Written in an informal style,
it avoids detailed coverage of proofs, making it an ideal refresher or brief introduction
for students and researchers in academia and industry who may not have the time to
commit to understanding them deeply. Topics covered include fountain codes designed
for large file downloads; LDPC and polar codes for error correction; network,
rank-metric and subspace codes for the transmission of data through networks;
post-quantum computing; and quantum error correction. Readers are assumed to have
taken basic courses on algebraic coding and information theory.
Ian F. Blake is Honorary Professor in the Department of Electrical and
Computer Engineering at the University of British Columbia, Vancouver. He is a
fellow of the Royal Society of Canada, the Institute for Combinatorics and its
Applications, the Canadian Academy of Engineers and a Life Fellow of the IEEE. In
2000, he was awarded an IEEE Millennium Medal. He received his undergraduate
degree at Queen's University, Canada, and his doctorate at Princeton University in
1967. He also worked in industry, spending sabbatical leaves with IBM and M/A-Com
Linkabit, and working with Hewlett-Packard Labs from 1996 to 1999. His research
interests include cryptography and algebraic coding theory, and he has written several
books in these areas.

Published online by Cambridge University Press


“This book is an essential resource for graduate students, researchers and professionals
delving into contemporary topics in coding theory not always covered in textbooks.
Each expertly-crafted essay offers a clear explanation of the fundamental concepts,
summarizing key results with a consistent notation and providing valuable references
for further exploration.”
Frank R. Kschischang, University of Toronto
“This volume lives up to its title: it explores many modern research directions in coding
theory without always insisting on complete proofs. Professor Blake nevertheless
manages to explain not only the results themselves, but also why things work the way
they do. This volume will be a wonderful supplement to in-depth presentations of the
topics that it covers.”
Alexander Barg, University of Maryland
“This book provides an excellent and comprehensive presentation of 16 major topics
in coding theory. The author brings the highly mathematical subjects down to a level
that can be understood with basic knowledge in combinatorial mathematics, modern
algebra, and coding and information theory. It can be used as a textbook for a graduate
course in electrical engineering, computer sciences and applied mathematics. It is also
an invaluable reference for researchers and practitioners in the areas of communications
and computer sciences.”
Shu Lin, retired from University of California, Davis
“This very unique contribution by Professor Blake consists of a collection of essays on
coding theory that can be read independently and yet are coherently written. It covers a
comprehensive list of topics of interest and is an excellent reference for anyone who is
not an expert on all of these topics.”
Raymond W. Yeung, The Chinese University of Hong Kong



Essays on Coding Theory

Ian F. Blake
University of British Columbia



Shaftesbury Road, Cambridge CB2 8EA, United Kingdom
One Liberty Plaza, 20th Floor, New York, NY 10006, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
103 Penang Road, #05–06/07, Visioncrest Commercial, Singapore 238467

Cambridge University Press is part of Cambridge University Press & Assessment,


a department of the University of Cambridge.
We share the University’s mission to contribute to society through the pursuit of
education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781009283373
DOI: 10.1017/9781009283403
© Ian F. Blake 2024
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press & Assessment.
First published 2024
A catalogue record for this publication is available from the British Library
A Cataloging-in-Publication data record for this book is available from the Library of Congress
ISBN 978-1-009-28337-3 Hardback
Cambridge University Press & Assessment has no responsibility for the persistence or
accuracy of URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.



To Betty, always

Contents

Preface page xi
1 Introduction 1
1.1 Notes on Finite Fields and Coding Theory 1
1.2 Notes on Information Theory 16
1.3 An Overview of the Chapters 21
2 Coding for Erasures and Fountain Codes 26
2.1 Preliminaries 27
2.2 Tornado Codes and Capacity-Achieving Sequences 30
2.3 LT and Raptor Codes 44
3 Low-Density Parity-Check Codes 66
3.1 Gallager Decoding Algorithms A and B for the BSC 71
3.2 Performance of LDPC Codes on the BIAWGN Channel 77
3.3 Thresholds, Concentration, Gaussian Approximation, EXIT
Charts 87
4 Polar Codes 97
4.1 Preliminaries and Notation 98
4.2 Polar Code Construction 102
4.3 Subchannel Polarization and Successive Cancellation Decoding 111
5 Network Codes 128
5.1 Network Flows 129
5.2 Network Coding 135
5.3 Construction and Performance of Network Codes 142
6 Coding for Distributed Storage 157
6.1 Performance Limits on Coding for Distributed Storage 158


6.2 Regenerating Codes for Distributed Storage and Subpacketization 167


6.3 Array-Type Constructions of Regenerating Codes 174
7 Locally Repairable Codes 182
7.1 Locally Repairable Codes 182
7.2 Maximally Recoverable Codes 193
7.3 Other Types of Locally Repairable Codes 195
8 Locally Decodable Codes 206
8.1 Locally Decodable Codes 207
8.2 Coding for Local Decodability 211
8.3 Yekhanin’s 3-Query LDCs 223
9 Private Information Retrieval 230
9.1 The Single-Server Case 233
9.2 The Multiple-Server Case 237
9.3 Coding for PIR Storage 241
10 Batch Codes 255
10.1 Batch Codes 255
10.2 Combinatorial Batch Codes 265
10.3 The Relationship of Batch Codes to LDCs and PIR Codes 271
11 Expander Codes 275
11.1 Graphs, Eigenvalues and Expansion 275
11.2 Tanner Codes 283
11.3 Expander Graphs and Their Codes 290
12 Rank-Metric and Subspace Codes 298
12.1 Basic Properties of Rank-Metric Codes 299
12.2 Constructions of MRD Rank-Metric Codes 309
12.3 Subspace Codes 313
13 List Decoding 327
13.1 Combinatorics of List Decoding 331
13.2 The Sudan and Guruswami–Sudan Algorithms for RS Codes 334
13.3 On the Construction of Capacity-Achieving Codes 346
14 Sequence Sets with Low Correlation 361
14.1 Maximum-Length Feedback Shift Register Sequences 361
14.2 Correlation of Sequences and the Welch Bound 368
14.3 Gold and Kasami Sequences 375
15 Postquantum Cryptography 380
15.1 Classical Public-Key Cryptography 382


15.2 Quantum Computation 386


15.3 Postquantum Cryptography 394
16 Quantum Error-Correcting Codes 404
16.1 General Properties of Quantum Error-Correcting Codes 405
16.2 The Standard Three-, Five- and Nine-Qubit Codes 408
16.3 CSS, Stabilizer and F4 Codes 413
17 Other Types of Coding 424
17.1 Snake-in-the-Box, Balanced and WOM Codes 424
17.2 Codes for the Gaussian Channel and Permutation Codes 429
17.3 IPP, Frameproof and Constrained Codes 434
Appendix A Finite Geometries, Linearized Polynomials and
Gaussian Coefficients 443
Appendix B Hasse Derivatives and Zeros of Multivariate
Polynomials 450

Index 454

Preface

The subject of algebraic coding theory arose in response to Shannon's remarkable
work on information theory and the notion of capacity of communication
channels, which showed how the structured introduction of redundancy into
a message can be used to improve the error performance on the channel. The
first example of an error-correcting code was a Hamming code in Shannon’s
1948 paper. This was followed by the Peterson book on error-correcting codes
in 1961. The books of Berlekamp in 1968 and the joint volume of Peterson
and Weldon in 1972 significantly expanded access to the developing subject.
An impressive feature of these books was a beautiful treatment of the theory of
finite fields – at a time when most engineers had virtually no training in such
algebraic concepts.
Coding theory developed through the 1960s, although there were concerns
that the encoding and decoding algorithms were of such complexity that
their use in practice might be limited, given the state of electronic circuits
at that time. These concerns were dispelled during the 1970s by the increasing
capabilities of microelectronics. Error-correcting codes are now included in many
communications and storage applications and standards and form a critical part
of such systems.
Coding theory has expanded beyond the original algebraic coding theory
with very significant achievements in systems such as LDPC coding, polar
coding and fountain codes; these systems are capable of achieving capacity
on their respective channels and have been incorporated into numerous
standards and applications. It has also embraced new avenues of interest
such as locally decodable codes, network codes, list decoding and codes for
distributed storage, among many others.
While topics such as LDPC coding and polar coding are of great impor-
tance, only a few graduate departments will be able to devote entire courses
to them or even partially cover them in more general courses. Given the


pressure departments face, many of the other topics considered here may not
fare so well.
The idea of this book was to create a series of presentations of modest
length and depth in these topics to facilitate access to them by graduate
students and researchers who may have an interest in them but hesitate to
make the commitment of time and effort required for a deeper understanding. Each
chapter is designed to acquaint the reader with an introduction to the main
results and possibilities without many of the proofs and details. They can
be read independently and a prerequisite is a basic course on algebraic
coding and information theory, although some of the topics present technical
challenges.
There are as many reasons not to write such a book as to write it. A few of
the areas have either excellent monographs or tutorials available on the web.
Also, it might be argued that an edited book on these topics with chapters
written by acknowledged experts would be of more value. Indeed, such a
volume is A Concise Encyclopedia of Coding Theory, W.C. Huffman, J.-L.
Kim and P. Solé, eds., 2021, CRC Press. However, the entries in such a volume,
as excellent as they usually are, are often of an advanced nature, designed for
researchers in the area to bring them abreast of current research directions. It
was felt that a series of chapters, written from a fairly consistent point of view
and designed to introduce readers to the areas covered, rather than provide
a deep coverage, might be of interest. I hope some readers of the volume will
agree. For many of the areas covered, the influence of the seminal papers on the
subjects is impressive. The attempt here is to explain and put into context these
important works, but for a serious researcher in an area, it does not replace the
need to read the original papers.
Choosing the level of the presentation was an interesting challenge. On
the one hand it was desired to achieve as good an appreciation of the results
and implications of an area as possible. The emphasis is on describing and
explaining contributions rather than proving and deriving, as well as providing
a few examples drawn mainly from the literature. On the other hand the
inclusion of too much detail and depth might discourage reading altogether.
It is hoped the compromise reached is satisfactory. While efforts were made to
render readable accounts for the topics, many readers might still find some of
the topics difficult.
Another problem was to choose a consistent notation when describing
results from different authors. Since one of the goals of the work was to provide
an entrée to the main papers of an area, an effort was made to use the notation
of the seminal works. Across the chapters, compromises in notation had to be
made and the hope is that these were reasonable. Generally there was a bias


toward describing code construction techniques for the areas which tended to
make some sections technically challenging.
I would like to thank the many colleagues around the world who provided
helpful and useful comments on many parts of the manuscript. First among
these are Shu Lin and Frank Kschischang, who read virtually all of the work
and consistently supported the effort. I cannot thank them enough for their
comments and suggestions. I would also like to thank Raymond Yeung, Vijay
Kumar, Eitan Yaakobi, Rob Calderbank, Amir Tasbihi and Lele Wang, who
read several of the chapters and provided expert guidance on many issues.
I would also like to thank the Department of Electrical and Computer
Engineering at the University of British Columbia and my colleagues there,
Vijay Bhargava, Lutz Lampe and Lele Wang, for providing such a hospitable
environment in which to pursue this work.
Finally, I would like to thank my wife Betty without whose love, patience
and understanding this book would not have been written.

1
Introduction

Since the early 2000s we have seen the basic notions of coding theory expand
beyond the role of error correction and algebraic coding theory. The purpose
of this volume is to provide a brief introduction to a few of the directions that
have been taken as a platform for further reading. Although the approach is to
be descriptive with few proofs, there are parts which are unavoidably technical
and more challenging.
It was mentioned in the Preface that the prerequisite for this work is a basic
course on algebraic coding theory and information theory. In fact only a few
aspects of finite fields, particularly certain properties of polynomials over finite
fields, Reed–Solomon codes and Reed–Muller codes and their generalizations
are considered to provide a common basis and establish the notation to be used.
The trace function on finite fields makes a few appearances in the chapters
and its basic properties are noted. Most of the information will be familiar
and stated informally without proof. A few of the chapters use notions of
information theory and discrete memoryless channels and the background
required for these topics is also briefly reviewed in Section 1.2. The final
Section 1.3 gives a brief description of the chapters that follow.

1.1 Notes on Finite Fields and Coding Theory


Elements of Finite Fields
A few basic notions from integers and polynomials will be useful in several
of the chapters as well as considering properties of finite fields. The greatest
common divisor (gcd) of two integers or polynomials over a field will be a
staple of many computations needed in several of the chapters. Abstractly, an
integral domain is a commutative ring in which the product of two nonzero
elements is nonzero, sometimes stated as a commutative ring with identity
which has no zero divisors (i.e., two nonzero elements a, b such that ab = 0).
A Euclidean domain is an integral domain which is furnished with a norm
function, in which the division of one element by another with a remainder of
smaller norm can be formulated. Equivalently the Euclidean algorithm (EA)
can be formulated in a Euclidean domain.
Recall that the gcd of two integers a,b ∈ Z is the largest integer d
that divides both a and b. Let F be a field and denote by F[x] the ring of
polynomials over F with coefficients from F. The gcd of two polynomials
a(x),b(x) ∈ F[x] is the monic polynomial (coefficient of the highest power
of x is unity) of the greatest degree, d(x), that divides both polynomials. The
EA for polynomials is an algorithm that produces the gcd of polynomials a(x)
and b(x) (the one for integers is similar) by finding polynomials u(x) and v(x)
such that
d(x) = u(x)a(x) + v(x)b(x). (1.1)

It is briefly described as follows. Suppose without loss of generality that
deg b(x) < deg a(x) and consider the sequence of polynomial division steps
producing quotient and remainder polynomials:

a(x) = q_1(x)b(x) + r_1(x),      deg r_1 < deg b
b(x) = q_2(x)r_1(x) + r_2(x),    deg r_2 < deg r_1
r_1(x) = q_3(x)r_2(x) + r_3(x),  deg r_3 < deg r_2
  ...
r_k(x) = q_{k+2}(x)r_{k+1}(x) + r_{k+2}(x),  deg r_{k+2} < deg r_{k+1}
r_{k+1}(x) = q_{k+3}(x)r_{k+2}(x),           d(x) = r_{k+2}(x).

That d(x), the last nonzero remainder, is the required gcd is established by
tracing back divisibility conditions. Furthermore, tracing back shows how two
polynomials u(x) and v(x) are found so that Equation 1.1 holds.
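The division cascade above is easy to mechanize. The following is a minimal Python sketch (function names are illustrative, prime fields only) of the extended EA for polynomials over F_p, with a polynomial stored as a coefficient list, lowest degree first; it returns d(x) together with the u(x) and v(x) of Equation (1.1).

```python
def poly_trim(f, p):
    """Reduce coefficients mod p and strip leading zeros."""
    f = [c % p for c in f]
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_mul(a, b, p):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return poly_trim(out, p)

def poly_sub(a, b, p):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return poly_trim([(x - y) % p for x, y in zip(a, b)], p)

def poly_divmod(a, b, p):
    """Quotient q and remainder r with a = q*b + r, deg r < deg b."""
    r = poly_trim(a, p)
    b = poly_trim(b, p)
    q = [0] * max(len(r) - len(b) + 1, 0)
    inv = pow(b[-1], p - 2, p)        # inverse of leading coefficient (Fermat)
    while len(r) >= len(b):
        shift = len(r) - len(b)
        c = (r[-1] * inv) % p
        q[shift] = c
        r = poly_sub(r, [0] * shift + [c * x % p for x in b], p)
    return poly_trim(q, p), r

def poly_ext_gcd(a, b, p):
    """Return (d, u, v), d monic, with u(x)a(x) + v(x)b(x) = d(x) = gcd(a, b)."""
    r0, r1 = poly_trim(a, p), poly_trim(b, p)
    u0, u1, v0, v1 = [1], [], [], [1]
    while r1:
        qq, rr = poly_divmod(r0, r1, p)
        r0, r1 = r1, rr
        u0, u1 = u1, poly_sub(u0, poly_mul(qq, u1, p), p)
        v0, v1 = v1, poly_sub(v0, poly_mul(qq, v1, p), p)
    c = pow(r0[-1], p - 2, p)         # scale so the gcd is monic
    return ([c * x % p for x in r0],
            [c * x % p for x in u0],
            [c * x % p for x in v0])
```

For example, over F_5 the gcd of x^2 − 1 and x^3 − 1 (encoded as [4, 0, 1] and [4, 0, 0, 1]) is x − 1, i.e., [4, 1].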
A similar argument holds for integers. The gcd is denoted (a,b) or
(a(x),b(x)) for integers and polynomials, respectively. If the gcd of two
integers or polynomials is unity, they are referred to as being relatively prime
and denoted (a,b) = 1 or (a(x),b(x)) = 1.
If the prime factorization of n is

n = p_1^{e_1} p_2^{e_2} · · · p_k^{e_k},   p_1, p_2, . . . , p_k distinct primes,

then the number of integers less than n that are relatively prime to n is given
by the Euler totient function φ(n) where

φ(n) = ∏_{i=1}^{k} p_i^{e_i − 1}(p_i − 1).    (1.2)
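Equation (1.2) translates directly into code; a minimal Python sketch using trial-division factorization (adequate for the sizes met here):

```python
def phi(n):
    """Euler totient via the prime factorization of Equation (1.2)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            result *= d ** (e - 1) * (d - 1)   # p^{e-1} (p - 1) factor
        d += 1
    if n > 1:                                  # a remaining prime factor, e = 1
        result *= n - 1
    return result
```

For example φ(63) = φ(9 · 7) = 6 · 6 = 36, a value that appears again in Example 1.1.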


A field is a commutative ring with identity in which elements have additive


inverses (0 denotes the additive identity) and nonzero elements have multi-
plicative inverses (1 denotes the multiplicative identity). It may also be viewed
as an integral domain in which the nonzero elements form a multiplicative
group.
A finite field is a field with a finite number of elements. For a finite field,
there is a smallest integer c such that each nonzero element of the field added
to itself a total of c times yields 0. Such an integer is called the characteristic
of the field. If c is not finite, the field is said to have characteristic 0. Notice
that in a finite field of characteristic 2, addition and subtraction are identical in
that 1 + 1 = 0. Denote the set of nonzero elements of the field F by F∗ .
Denote by Zn the set of integers modulo n, Zn = {0, 1, 2, . . . , n − 1}. It is
a finite field iff n is a prime p, since if n = ab, a, b > 1, is composite, then Zn
has zero divisors and hence is not a field. Thus the characteristic of any finite
field is a prime and the symbol p is reserved for some arbitrary prime integer.
In a finite field Zp, arithmetic is modulo p. If a ∈ Zp, a ≠ 0, the inverse of a
can be found by applying the EA to a < p and p, which yields two integers
u, v ∈ Z such that

ua + vp = 1 in Z

and so ua + vp (mod p) ≡ ua ≡ 1 (mod p) and a^{−1} ≡ u (mod p). The
field will be denoted Fp. In any finite field there is a smallest subfield, a set
of elements containing and generated by the unit element 1, referred to as the
prime subfield, which will be Fp for some prime p.
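The inverse computation just described can be sketched in Python with the integer extended EA (the function name is illustrative):

```python
def inv_mod(a, p):
    """Extended EA on integers: find u with u*a + v*p = 1, so a^{-1} = u (mod p)."""
    r0, r1 = a % p, p
    u0, u1 = 1, 0
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1      # remainder cascade, as for polynomials
        u0, u1 = u1, u0 - q * u1      # carry the Bezout coefficient along
    assert r0 == 1, "a must be nonzero modulo the prime p"
    return u0 % p
```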
Central to the notion of finite fields and their applications is the role of
polynomials over the field. Denote the ring of polynomials in the indeterminate
x over a field F by F[x] and note that it is a Euclidean domain (although the
ring of polynomials with two variables F[x,y] is not). A polynomial f (x) =
fn x n + fn−1 x n−1 + · · · + f1 x + f0 ∈ F[x],fi ∈ F is monic if the leading
coefficient fn is unity.
A polynomial f (x) ∈ F[x] is called reducible if it can be expressed as the
product of two nonconstant polynomials and irreducible if it is not the product
of two nonconstant polynomials, i.e., there do not exist two nonconstant
polynomials a(x),b(x) ∈ F[x] such that f (x) = a(x)b(x). Let f (x) be a
monic irreducible polynomial over the finite field Fp and consider the set of
p^n polynomials taken modulo f(x), which will be denoted

Fp[x]/(f(x)) = {a_{n−1}x^{n−1} + a_{n−2}x^{n−2} + · · · + a_1x + a_0, a_i ∈ Fp}

where (f(x)) is the ideal in Fp[x] generated by f(x). Addition of two polynomials
is obvious and multiplication of two polynomials is taken modulo the
irreducible polynomial f(x), i.e., the remainder after division by f(x). The
inverse of a nonzero polynomial a(x) ∈ Fp[x]/(f(x)) is found via the EA as
before. That is, since by definition (a(x), f(x)) = 1 there exist polynomials
u(x), v(x) such that

u(x)a(x) + v(x)f(x) = 1

and the inverse of a(x) ∈ Fp[x]/(f(x)) is u(x). Algebraically this structure
might be described as the factor field of the ring Fp[x] modulo the maximal
ideal (f(x)).
It follows that the set Fp[x]/(f(x)) forms a finite field with p^n elements. It is
conventional to denote q = p^n and the field of p^n elements as either F_{p^n} or
Fq. Every finite field can be shown to have a number of elements of the form
q = p^n for some prime p and positive integer n, and any two finite fields of
the same order are isomorphic. It will be noted that an irreducible polynomial
of degree n will always exist (see Equation 1.4) and so all finite fields can be
constructed in this manner.
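As a concrete instance of this construction, here is a minimal Python sketch of arithmetic in F_8 = F_2[x]/(x^3 + x + 1), with an element stored as a 3-bit integer whose bit i is the coefficient of x^i; the choice of modulus and the names are illustrative.

```python
MOD = 0b1011             # x^3 + x + 1, irreducible (indeed primitive) over F_2

def gf8_mul(a, b):
    """Shift-and-add multiplication, reducing whenever the degree reaches 3."""
    r = 0
    while b:
        if b & 1:
            r ^= a       # addition in characteristic 2 is XOR
        b >>= 1
        a <<= 1
        if a & 0b1000:   # degree-3 term appeared: subtract (XOR) the modulus
            a ^= MOD
    return r

def gf8_inv(a):
    """Brute-force inverse; possible since f(x) irreducible makes F_8 a field."""
    for x in range(1, 8):
        if gf8_mul(a, x) == 1:
            return x
    raise ZeroDivisionError("0 has no inverse")
```

For instance x · x^2 = x^3 ≡ x + 1 (mod x^3 + x + 1), i.e., gf8_mul(0b010, 0b100) gives 0b011.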
In general, suppose q = p^m and let f(x) be a monic irreducible polynomial
over Fq of degree m (which will be shown to always exist). The set of q^m
polynomials over Fq of degree less than m with multiplication modulo f(x)
will then be a finite field with q^m elements and designated F_{q^m}. For future
reference denote the set of polynomials of degree less than m by Fq^{<m}[x]
and those of degree less than or equal to m by Fq^{≤m}[x]. Since it involves no
more effort, this general finite field F_{q^m} will be examined for basic properties.
The subset Fq ⊆ F_{q^m} is a field, i.e., a subset that has all the properties of a
field, a subfield of F_{q^m}.
The remainder of the subsection contains a brief discussion of the structure
of finite fields and polynomials usually found in a first course of coding theory.
It is straightforward to show that over any field F, (x^m − 1) divides (x^n − 1)
iff m divides n, written as

(x^m − 1) | (x^n − 1) iff m | n.

Further, for any prime p,

(p^m − 1) | (p^n − 1) iff m | n.

The multiplicative group of a finite field, F∗_{q^m}, can be shown to be cyclic
(generated by a single element). The order of a nonzero element α in a field F
is the smallest positive integer ℓ such that α^ℓ = 1, denoted as ord(α) = ℓ and
referred to as the order of α. Similarly if ℓ is the smallest integer such that the
polynomial f(x) | (x^ℓ − 1), the polynomial is said to have order ℓ over the
understood field. The order of an irreducible polynomial is also the order of
its zeros.
If β has order ℓ, then β^i has order ℓ/(i,ℓ). Similarly if β has order ℓ and γ
has order κ and (ℓ,κ) = 1, then the order of βγ is ℓκ.
An element α ∈ F_{q^m} of maximum order q^m − 1 is called a primitive element.
If α is primitive, then α^i is also primitive iff (i, q^m − 1) = 1 and there are
φ(q^m − 1) primitive elements in F_{q^m}.
A note on the representation of finite fields is in order. The order of
an irreducible polynomial f(x) over Fq of degree k can be determined by
successively dividing the polynomial (x^n − 1) by f(x) over Fq as n increases.
If the smallest such n is q^k − 1, the polynomial is primitive. To effect the
division, arithmetic in the field Fq is needed. If f(x) is primitive of degree k
over Fq, one could then take the field as the elements

F_{q^k} = {0, 1, x, x^2, . . . , x^{q^k − 2}}.

By definition the elements are distinct. Each of these elements could be taken
modulo f (x) (which is zero in the field) which would result in the field
elements being all polynomials over Fq of degree less than k. Multiplication
in this field would be polynomials taken modulo f (x). The field element x
is a primitive element. While this is a valid presentation, it is also common to
identify the element x by an element α with the statement “let α be a zero of the
primitive polynomial f (x) of degree k over Fq .” The two views are equivalent.
There are φ(q^k − 1) primitive elements in F_{q^k} and since the degree of
an irreducible polynomial with one of these primitive elements as a zero is
necessarily k, there are exactly φ(q^k − 1)/k primitive polynomials of degree
k over Fq.
Suppose f(x) is an irreducible nonprimitive polynomial of degree k over
Fq. Suppose it is of order n < q^k − 1, i.e., f(x) | (x^n − 1). One can define the
field F_{q^k} as the set of polynomials of degree less than k

F_{q^k} = {a_{k−1}x^{k−1} + a_{k−2}x^{k−2} + · · · + a_1x + a_0, a_i ∈ Fq}

with multiplication modulo f(x). The element x is not primitive if n < (q^k − 1)
but is an element of order n, n | (q^k − 1) (although there are still φ(q^k − 1)
primitive elements in the field).
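The successive-division test for the order of a polynomial described above can be sketched for F_2, with the polynomial stored as an integer bitmask (bit i holding the coefficient of x^i); this naive sketch tracks x^n mod f(x) and stops at the first n with x^n ≡ 1, which equals the smallest n with f(x) | (x^n − 1) when f is irreducible with nonzero constant term.

```python
def poly_order(f, deg):
    """Order of the irreducible binary polynomial f (bitmask, degree deg, f(0) != 0):
    the smallest n with f(x) | x^n - 1, found as the first n with x^n = 1 mod f."""
    r, n = 1, 0          # r holds x^n reduced modulo f
    while True:
        r <<= 1          # multiply by x
        if r >> deg & 1:
            r ^= f       # reduce: f has its bit deg set, clearing the top term
        n += 1
        if r == 1:
            return n
```

For example x^2 + x + 1 (0b111) has order 3, x^3 + x + 1 (0b1011) has order 7 and the primitive x^6 + x + 1 (0b1000011) has order 63.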
Let α ∈ F_{q^m} be an element of maximum order (q^m − 1) (i.e., primitive) and
denote the multiplicative group of nonzero elements as

F∗_{q^m} = ⟨α⟩ = {1, α, α^2, . . . , α^{q^m − 2}}.

Let β ∈ F∗_{q^m} be an element of order ℓ which generates a cyclic multiplicative
subgroup of F∗_{q^m} of order ℓ and for such a subgroup ℓ | (q^m − 1). The
order of any nonzero element in F_{q^m} divides (q^m − 1). Thus

x^{q^m} − x = ∏_{β ∈ F_{q^m}} (x − β),   x^{q^m − 1} − 1 = ∏_{β ∈ F∗_{q^m}} (x − β)    (1.3)

is a convenient factorization (over F_{q^m}).


Suppose F_{q^m} has a subfield F_{q^k} – a subset of elements which is itself a field.
The number of nonzero elements in F_{q^k} is (q^k − 1) and this set must form a
multiplicative subgroup of F∗_{q^m}; hence (q^k − 1) | (q^m − 1) and this implies
that k | m and that F_{q^k} is a subfield of F_{q^m} iff k | m. Suppose F_{q^k} is a subfield
of F_{q^m}. Then

β ∈ F_{q^m} is in F_{q^k} iff β^{q^k} = β

and β = α^j ∈ F_{q^m} is a zero of the monic irreducible polynomial f(x) of
degree k over Fq. Thus

f(x) = x^k + f_{k−1}x^{k−1} + · · · + f_1x + f_0,  f_i ∈ Fq, i = 0, 1, 2, . . . , k − 1

and f(α^j) = 0. Notice that

f(x)^q = (x^k + f_{k−1}x^{k−1} + · · · + f_1x + f_0)^q
       = x^{kq} + f_{k−1}^q x^{q(k−1)} + · · · + f_1^q x^q + f_0^q
       = x^{kq} + f_{k−1}x^{q(k−1)} + · · · + f_1x^q + f_0,  as f_i^q = f_i for f_i ∈ Fq
       = f(x^q)

and since β = α^j is a zero of f(x) so is β^q. Suppose ℓ is the smallest integer
such that β^{q^ℓ} = β (since the field is finite there must be such an ℓ) and let

C_j = {α^j = β, β^q, β^{q^2}, . . . , β^{q^{ℓ−1}}}

referred to as the conjugacy class of β. Consider the polynomial

g(x) = ∏_{i=0}^{ℓ−1} (x − β^{q^i})

and note that

g(x)^q = ∏_{i=0}^{ℓ−1} (x − β^{q^i})^q = ∏_{i=0}^{ℓ−1} (x^q − β^{q^{i+1}}) = ∏_{i=0}^{ℓ−1} (x^q − β^{q^i}) = g(x^q),

where the third equality uses β^{q^ℓ} = β. As above, g(x) has coefficients in Fq,
i.e., g(x) ∈ Fq[x]. It follows that g(x) must divide f(x) and since f(x) was
assumed monic and irreducible it must be that g(x) = f(x). Thus if one zero
of the irreducible f(x) is in F_{q^m}, all are. Each conjugacy class of the finite
field corresponds to an irreducible polynomial over Fq.
By similar reasoning it can be shown that if f(x) is irreducible of degree k
over Fq, then f(x) | (x^{q^m} − x) iff k | m. It follows that the polynomial x^{q^m} − x
is the product of all monic irreducible polynomials whose degrees divide m.
Thus

x^{q^m} − x = ∏_{f(x) irreducible over Fq, deg f(x) = k, k | m} f(x).

This allows a convenient enumeration of the polynomials. If N_q(m) is the
number of monic irreducible polynomials of degree m over Fq, then by the
above equation

q^m = ∑_{k|m} k N_q(k)

which can be inverted using standard combinatorial techniques as

N_q(m) = (1/m) ∑_{k|m} μ(m/k) q^k    (1.4)

where μ(n) is the Möbius function, equal to 1 if n = 1, (−1)^s if n is the
product of s distinct primes and zero otherwise. It can be shown that N_q(k) is
at least one for all prime powers q and all positive integers k. Thus irreducible
polynomials of degree k over a field of order q exist for all allowable
parameters and hence finite fields exist for all allowable parameter sets.
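Equation (1.4) is easy to check numerically; a short Python sketch (function names are illustrative):

```python
def mobius(n):
    """Mobius function: 1 for n = 1, (-1)^s for a product of s distinct primes, else 0."""
    s, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0      # repeated prime factor
            s += 1
        d += 1
    if n > 1:                 # one remaining prime factor
        s += 1
    return -1 if s % 2 else 1

def num_irreducible(q, m):
    """N_q(m) = (1/m) * sum over k | m of mu(m/k) q^k, Equation (1.4)."""
    total = sum(mobius(m // k) * q**k for k in range(1, m + 1) if m % k == 0)
    return total // m         # the sum is always divisible by m
```

For q = 2 this reproduces the counts used in Example 1.1 below, in particular N_2(6) = 9.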
Consider the following example.
Example 1.1 Consider the field extension F_{2^6} over the base field F_2. The
polynomial x^{2^6} − x factors into all irreducible polynomials of degree dividing
6, i.e., those of degrees 1, 2, 3 and 6. From the previous formula

    N_2(1) = 2,  N_2(2) = 1,  N_2(3) = 2,  N_2(6) = 9.
For a primitive element α the conjugacy classes of F_{2^6} over F_2 are (obtained by
raising elements to successive powers of 2, exponents taken mod 63, with tentative
polynomials associated with the classes designated):

    {α^1, α^2, α^4, α^8, α^16, α^32} ∼ f_1(x)
    {α^3, α^6, α^12, α^24, α^48, α^33} ∼ f_2(x)
    {α^5, α^10, α^20, α^40, α^17, α^34} ∼ f_3(x)
    {α^7, α^14, α^28, α^56, α^49, α^35} ∼ f_4(x)
    {α^9, α^18, α^36} ∼ f_5(x)
    {α^11, α^22, α^44, α^25, α^50, α^37} ∼ f_6(x)


    {α^13, α^26, α^52, α^41, α^19, α^38} ∼ f_7(x)
    {α^15, α^30, α^60, α^57, α^51, α^39} ∼ f_8(x)
    {α^21, α^42} ∼ f_9(x)
    {α^23, α^46, α^29, α^58, α^53, α^43} ∼ f_10(x)
    {α^27, α^54, α^45} ∼ f_11(x)
    {α^31, α^62, α^61, α^59, α^55, α^47} ∼ f_12(x).
By the above discussion a set with ℓ integers corresponds to an irreducible
polynomial over F_2 of degree ℓ. Further, the order of the polynomial is the
order of the conjugates in the corresponding conjugacy class.
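The conjugacy classes above are exactly the cyclotomic cosets of 2 modulo 63 and can be generated mechanically; a sketch (the function name is illustrative):

```python
def cyclotomic_cosets(q, n):
    # Cosets {j, j*q, j*q^2, ...} of multiplication by q modulo n; each
    # coset of size l corresponds to an irreducible polynomial of degree l.
    seen, cosets = set(), []
    for j in range(1, n):
        if j not in seen:
            coset, e = [], j
            while e not in coset:
                coset.append(e)
                e = (e * q) % n
            seen.update(coset)
            cosets.append(coset)
    return cosets

cosets = cyclotomic_cosets(2, 63)
sizes = sorted(len(c) for c in cosets)
```

The coset sizes recover the degree profile of the twelve polynomials listed above: nine of degree 6, two of degree 3 and one of degree 2.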
Notice there are φ(63) = φ(9 · 7) = φ(9)φ(7) = 6 · 6 = 36 primitive elements in F_{2^6}
and hence there are 36/6 = 6 primitive polynomials of degree 6 over F_2. If α
is chosen as a zero of the primitive polynomial f_1(x) = x^6 + x + 1, then the
correspondence of the above conjugacy classes with irreducible polynomials is

    Poly. No.    Polynomial                        Order
    f_1(x)       x^6 + x + 1                       63
    f_2(x)       x^6 + x^4 + x^3 + x^2 + x + 1     21
    f_3(x)       x^6 + x^5 + x^2 + x + 1           63
    f_4(x)       x^6 + x^3 + 1                     9
    f_5(x)       x^3 + x^2 + 1                     7
    f_6(x)       x^6 + x^5 + x^3 + x^2 + 1         63
    f_7(x)       x^6 + x^4 + x^3 + x + 1           63
    f_8(x)       x^6 + x^5 + x^4 + x^2 + 1         21
    f_9(x)       x^2 + x + 1                       3
    f_10(x)      x^6 + x^5 + x^4 + x + 1           63
    f_11(x)      x^3 + x + 1                       7
    f_12(x)      x^6 + x^5 + 1                     63

The other three irreducible polynomials of degree 6 are of orders 21 (two of
them, f_2(x) and f_8(x)) and nine (f_4(x)). The primitive element α in the above
conjugacy classes could have been chosen as a zero of any of the primitive
polynomials. The choice determines arithmetic in F26 but all choices will
lead to isomorphic representations. Different choices would have resulted in
different associations between conjugacy classes and polynomials.
Not included in the above table is the conjugacy class {α^63} = {1}, which
corresponds to the polynomial x + 1, and the class {0}, which corresponds to
the polynomial x. The product of all these polynomials is x^{2^6} − x.
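This factorization can be verified by brute force over F_2[x], representing polynomials as Python integers with bit i holding the coefficient of x^i; an illustrative sketch, not from the text:

```python
def pmul(a, b):
    # carry-less (GF(2)) polynomial product
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, m):
    # remainder of a divided by m in GF(2)[x]
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def irreducibles_by_degree(max_deg):
    # monic irreducibles found by trial division by lower-degree irreducibles
    irr = {d: [] for d in range(1, max_deg + 1)}
    for d in range(1, max_deg + 1):
        for p in range(1 << d, 1 << (d + 1)):   # monic of degree d
            if all(pmod(p, f) != 0
                   for dd in range(1, d // 2 + 1) for f in irr[dd]):
                irr[d].append(p)
    return irr

irr = irreducibles_by_degree(6)
product = 1
for d in (1, 2, 3, 6):          # the degrees dividing 6
    for f in irr[d]:
        product = pmul(product, f)
```

The accumulated product should equal x^64 + x, i.e., x^{2^6} − x over F_2, and the per-degree counts match N_2(m) above.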

The notion of a minimal polynomial of a field element is of importance for
coding. The minimal polynomial m_β(x) of an element β ∈ F_{q^n} over F_q is
the monic irreducible polynomial of least degree that has β as a zero. From
the above discussion, every element in a conjugacy class has the same minimal
polynomial.
Further notions of finite fields that will be required include that of a
polynomial basis of F_{q^n} over F_q, which is one of the form {1, α, α^2, ..., α^{n−1}}
for some α ∈ F_{q^n} for which the elements are linearly independent over F_q.
A basis of F_{q^n} over F_q of the form {α, α^q, α^{q^2}, ..., α^{q^{n−1}}} is called a normal
basis and such bases always exist. In the case that α ∈ F_{q^n} is primitive (of
order q^n − 1) it is called a primitive normal basis.

The Trace Function of Finite Fields


Further properties of the trace function that are usually discussed in a first
course on coding will prove useful at several points in the chapters. Let F_{q^n}
be an extension field of degree n over F_q. For an element α ∈ F_{q^n} the trace
function of F_{q^n} over F_q is defined as

    Tr_{q^n|q}(α) = ∑_{i=0}^{n−1} α^{q^i}.

The function enjoys many properties, most notably that [8]

(i) Tr_{q^n|q}(α + β) = Tr_{q^n|q}(α) + Tr_{q^n|q}(β), α, β ∈ F_{q^n}
(ii) Tr_{q^n|q}(aα) = a Tr_{q^n|q}(α), a ∈ F_q, α ∈ F_{q^n}
(iii) Tr_{q^n|q}(a) = na, a ∈ F_q
(iv) Tr_{q^n|q} is an onto map.
To show property (iv), that the trace map is onto F_q, it is sufficient to show
that there exists an element α of F_{q^n} for which Tr_{q^n|q}(α) ≠ 0, since if
Tr_{q^n|q}(α) = b ≠ 0, b ∈ F_q, then (property (ii)) Tr_{q^n|q}(b^{−1}α) = 1 and
hence all elements of F_q are mapped onto. Consider the polynomial equation

    x^{q^{n−1}} + x^{q^{n−2}} + ··· + x = 0

which can have at most q^{n−1} solutions in F_{q^n}. Hence there must exist elements
β ∈ F_{q^n} for which Tr_{q^n|q}(β) ≠ 0. An easy argument shows that in fact
exactly q^{n−1} elements of F_{q^n} have trace a for each element a ∈ F_q.
Notice that it also follows from these observations that

    x^{q^n} − x = ∏_{a∈F_q} ( x^{q^{n−1}} + x^{q^{n−2}} + ··· + x − a )

since each element of F_{q^n} is a zero of the LHS and of exactly one term of
the RHS.
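These facts are easy to confirm in a small field; a sketch in F_{2^6}, built from the primitive polynomial f_1(x) = x^6 + x + 1 of Example 1.1 (field elements as 6-bit integers; helper names illustrative):

```python
MOD = 0b1000011  # x^6 + x + 1

def gf_mul(a, b):
    # multiply in F_{2^6} = F_2[x]/(x^6 + x + 1)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    while r.bit_length() - 1 >= 6:   # reduce modulo x^6 + x + 1
        r ^= MOD << (r.bit_length() - 1 - 6)
    return r

def trace(a):
    # Tr(a) = a + a^2 + a^4 + ... + a^(2^5)
    t, x = 0, a
    for _ in range(6):
        t ^= x
        x = gf_mul(x, x)
    return t

traces = [trace(a) for a in range(64)]
```

The trace lands in F_2 = {0, 1}, is additive, and takes each value on exactly q^{n−1} = 32 of the 64 field elements.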


Also, suppose [8] L(·) is a linear function from F_{q^n} to F_q in the sense that
for all a_1, a_2 ∈ F_q and all α_1, α_2 ∈ F_{q^n}

    L(a_1 α_1 + a_2 α_2) = a_1 L(α_1) + a_2 L(α_2).

Then L(·) must be of the form

    L(α) = Tr_{q^n|q}(βα) = L_β(α)

for some β ∈ F_{q^n}. Thus the set of such linear functions is precisely the set

    { L_β(·), β ∈ F_{q^n} }

and these are distinct functions for distinct β.
A useful property of the trace function ([8], lemma 3.51, [11], lemma 9.3)
is that if u_1, u_2, ..., u_n is a basis of F_{q^n} over F_q and if

    Tr_{q^n|q}(α u_i) = 0 for i = 1, 2, ..., n,  α ∈ F_{q^n},

then α = 0. Equivalently if for α ∈ F_{q^n}

    Tr_{q^n|q}(αu) = 0  ∀ u ∈ F_{q^n},                                   (1.5)

then α = 0. This follows from the trace map being onto. It will prove a useful
property in the sequel. It also follows from the fact that the matrix

    [ u_1   u_1^q   ···   u_1^{q^{n−1}} ]
    [ u_2   u_2^q   ···   u_2^{q^{n−1}} ]
    [  ⋮      ⋮     ⋱         ⋮        ]
    [ u_n   u_n^q   ···   u_n^{q^{n−1}} ]

is nonsingular iff u_1, u_2, ..., u_n ∈ F_{q^n} are linearly independent over F_q.
A formula for the determinant of this matrix is given in [8].
If μ = {μ_1, μ_2, ..., μ_n} is a basis of F_{q^n} over F_q, then a basis ν =
{ν_1, ν_2, ..., ν_n} is called a trace dual basis if

    Tr_{q^n|q}(μ_i ν_j) = δ_{i,j} = { 1 if i = j,  0 if i ≠ j }

and for a given basis a unique dual basis exists. It is noted that if μ =
{μ_1, ..., μ_n} is a dual basis for the basis {ν_1, ..., ν_n}, then given

    y = ∑_{i=1}^n a_i μ_i,  a_i ∈ F_q,   then   y = ∑_{i=1}^n Tr_{q^n|q}(y ν_i) μ_i.      (1.6)

Thus an element y ∈ F_{q^n} can be represented in the basis μ by the traces
Tr_{q^n|q}(y ν_j), j = 1, 2, ..., n.


It can be shown that for a given normal basis, the dual basis is also normal.
A convenient reference for such material is [8, 12].
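As a tiny worked check of the dual-basis representation (Equation 1.6): in F_4 = F_2[x]/(x^2 + x + 1) the normal basis {ω, ω^2}, with ω the class of x, turns out to be its own trace dual. An illustrative sketch:

```python
MOD = 0b111  # x^2 + x + 1, defining F_4 over F_2

def mul(a, b):
    # multiply in F_4 (elements are 2-bit integers)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    while r.bit_length() - 1 >= 2:
        r ^= MOD << (r.bit_length() - 1 - 2)
    return r

def tr(a):
    # Tr(a) = a + a^2 for F_4 over F_2
    return a ^ mul(a, a)

omega, omega_sq = 0b10, 0b11     # omega and omega^2 = omega + 1
basis = [omega, omega_sq]        # a normal basis {alpha, alpha^q}

# Gram matrix Tr(mu_i mu_j): the identity matrix means the basis is self-dual
gram = [[tr(mul(bi, bj)) for bj in basis] for bi in basis]

# Equation 1.6 with nu = mu: y = Tr(y omega) omega + Tr(y omega^2) omega^2
recon = {}
for y in range(4):
    a0, a1 = tr(mul(y, omega)), tr(mul(y, omega_sq))
    recon[y] = mul(a0, omega) ^ mul(a1, omega_sq)
```

Every element of F_4 is recovered from its two trace coordinates, as Equation 1.6 asserts.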

Elements of Coding Theory


A few comments on BCH, Reed–Solomon (RS), Generalized Reed–Solomon
(GRS), Reed–Muller (RM) and Generalized Reed–Muller (GRM) codes are
noted. Recall that a cyclic code of length n, dimension k and minimum
distance d over F_q, designated as an (n,k,d)_q code, is defined by a polynomial
g(x) ∈ F_q[x] of degree (n − k) with g(x) | (x^n − 1), or alternatively as a principal
ideal ⟨g(x)⟩ in the factor ring R = F_q[x]/⟨x^n − 1⟩.
Consider a BCH code of length n | (q^m − 1) over F_q. Let β be a primitive
n-th root of unity (an element of order exactly n). Let

    g(x) = lcm( m_β(x), m_{β^2}(x), ..., m_{β^{2t}}(x) )

be the minimum degree monic polynomial with the sequence β, β^2, ..., β^{2t} of
2t elements as zeros (among other elements as zeros). Define the BCH code
with length n and designed distance 2t + 1 over F_q as the cyclic code C = ⟨g(x)⟩
or equivalently as the code with null space over F_q of the parity-check matrix

    H = [ 1   β      β^2       ···   β^{(n−1)}    ]
        [ 1   β^2    β^4       ···   β^{2(n−1)}   ]
        [ ⋮    ⋮      ⋮        ⋱        ⋮         ]
        [ 1   β^{2t}  β^{2(2t)} ···   β^{2t(n−1)} ].

The minimum distance bound of this code, d ≥ 2t + 1, follows since any
2t × 2t submatrix of H formed from 2t of its columns is a Vandermonde-type
matrix, nonsingular since the elements of its first row are distinct.
A cyclic Reed–Solomon (n, k, d = n − k + 1)_q code can be generated by
choosing a generator polynomial of the form

    g(x) = ∏_{i=1}^{n−k} (x − α^i),  α ∈ F_q a primitive n-th root of unity (order n).

That the code has minimum distance d = n − k + 1 follows easily from the
above discussion.
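A small worked instance (the parameters are an illustrative choice of mine): over F_7 take n = 6, α = 3 (an element of order 6) and n − k = 3. The sketch builds g(x) and checks that it divides x^6 − 1 and has the prescribed zeros:

```python
P = 7                      # the field F_7; alpha = 3 has order 6
n, k, alpha = 6, 3, 3

def polymul(a, b):
    # coefficient lists, lowest degree first, over F_P
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % P
    return r

def polydivmod(a, b):
    # long division of a by the monic polynomial b over F_P
    a = a[:]
    q = [0] * (len(a) - len(b) + 1)
    for i in range(len(q) - 1, -1, -1):
        c = a[i + len(b) - 1] % P
        q[i] = c
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - c * bj) % P
    return q, a[:len(b) - 1]

g = [1]
for i in range(1, n - k + 1):                   # g(x) = prod (x - alpha^i)
    g = polymul(g, [(-pow(alpha, i, P)) % P, 1])

quotient, rem = polydivmod([P - 1] + [0] * (n - 1) + [1], g)  # x^6 - 1
```

Since α is primitive of order n, x^n − 1 = ∏_{i=0}^{n−1} (x − α^i) and g(x) picks out n − k of these factors.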
A standard simple construction of Reed–Solomon codes over a finite field
F_q of length n that will be of use in this volume is as follows. Let u =
{u_1, u_2, ..., u_n} be a set, referred to as the evaluation set (and viewed as a
set rather than a vector – we use boldface lowercase letters for both sets and
vectors), of n ≤ q distinct evaluation elements of F_q. As noted, F_q^{<k}[x] is the
set of polynomials over F_q of degree less than k. Then another incarnation of
a Reed–Solomon code can be taken as

    RS_{n,k}(u,q) = { c_f = (f(u_1), f(u_2), ..., f(u_n)), f ∈ F_q^{<k}[x] }

where c_f is the codeword associated with the polynomial f. That this is an
(n, k, d = n − k + 1)_q code follows readily from the fact that a nonzero polynomial
of degree less than k over F_q can have at most k − 1 zeros. As the code satisfies
the Singleton bound d ≤ n − k + 1 with equality it is referred to as a maximum
distance separable (MDS) code and the dual of such a code is also MDS.
Of course the construction is valid for any finite field, e.g., F_{q^ℓ}.
The dual of an RS code is generally not an RS code.
A slight but useful generalization of this code is the Generalized Reed–
Solomon (GRS) code denoted GRS_{n,k}(u,v,q), where u is the evaluation set
of distinct field elements as above and v = {v_1, v_2, ..., v_n}, v_i ∈ F_q^*
(referred to as the multiplier set), is a set of not necessarily distinct nonzero
elements of F_q. Then GRS_{n,k}(u,v,q) is the (linear) set of codewords

    GRS_{n,k}(u,v,q) = { c_f = (v_1 f(u_1), v_2 f(u_2), ..., v_n f(u_n)), f ∈ F_q^{<k}[x] }.

Since the minimum distance of this linear set of codewords is n − k + 1,
for the same reason noted above, the code is MDS. Clearly an RS code is a
GRS_{n,k}(u,v,q) code with v = (1, 1, ..., 1).
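The MDS property can be confirmed by exhaustive search in a small case; the parameters below (F_7, n = 6, k = 2, an arbitrary nonzero multiplier set) are an illustrative choice:

```python
from itertools import product

P = 7
n, k = 6, 2
u = [1, 2, 3, 4, 5, 6]      # distinct evaluation elements of F_7
v = [1, 3, 2, 6, 4, 5]      # nonzero multipliers (arbitrary choice)

def codeword(f):
    # c_f = (v_1 f(u_1), ..., v_n f(u_n)) for coefficient tuple f
    return [vi * sum(c * pow(ui, e, P) for e, c in enumerate(f)) % P
            for ui, vi in zip(u, v)]

# Hamming weights of all nonzero codewords
weights = [sum(1 for s in codeword(f) if s != 0)
           for f in product(range(P), repeat=k) if any(f)]
min_weight = min(weights)
```

Every nonzero polynomial of degree < 2 has at most one zero among the six evaluation points, so the minimum weight is n − k + 1 = 5, meeting the Singleton bound.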
The dual of any MDS code is MDS. It is also true [7, 9, 10] that the dual
of a GRS code is also a GRS code. In particular, given GRS_{n,k}(u,v,q) there
exists a set w ∈ (F_q^*)^n such that

    GRS⊥_{n,k}(u,v,q) = GRS_{n,n−k}(u,w,q)
                      = { (w_1 g(u_1), w_2 g(u_2), ..., w_n g(u_n)), g ∈ F_q^{<n−k}[x] }.
                                                                         (1.7)

In other words, for any f(x) ∈ F_q^{<k}[x] and g(x) ∈ F_q^{<n−k}[x], for a given
evaluation set u = {u_1, ..., u_n} and multiplier set v = {v_1, ..., v_n} there is
a multiplier set w = {w_1, ..., w_n} such that the associated codewords c_f ∈
GRS_{n,k}(u,v,q) and c_g ∈ GRS_{n,n−k}(u,w,q) satisfy

    (c_f, c_g) = v_1 f(u_1) w_1 g(u_1) + ··· + v_n f(u_n) w_n g(u_n) = 0.

Indeed the multiplier set w can be computed as

    w_i = v_i^{−1} ∏_{j≠i} (u_i − u_j)^{−1}.                             (1.8)


To see this, for a given evaluation set u (distinct elements), denote

    e(x) = ∏_{i=1}^n (x − u_i)   and   e_i(x) = e(x)/(x − u_i) = ∏_{k≠i} (x − u_k),

a monic polynomial of degree (n − 1). It is clear that

    e_i(u_j)/e_i(u_i) = { 1 if j = i,  0 if j ≠ i }.

It follows that any polynomial h(x) ∈ F_q[x] of degree less than n that takes
on the values h(u_i) on the evaluation set u = {u_1, u_2, ..., u_n} can be expressed as

    h(x) = ∑_{i=1}^n h(u_i) e_i(x)/e_i(u_i).

To verify Equation 1.7 consider applying this interpolation formula
to f(x)g(x) where f(x) ∈ F_q^{<k}[x] is a codeword polynomial
(in GRS_{n,k}(u,v,q)) and g(x) ∈ F_q^{<n−k}[x] (in GRS⊥_{n,k}(u,v,q) =
GRS_{n,n−k}(u,w,q)), where it is claimed that the two multiplier sets v =
{v_1, v_2, ..., v_n} and w = {w_1, w_2, ..., w_n} are related as in Equation 1.8.
Using the above interpolation formula on the product f(x)g(x) (of degree
at most (n − 2)) gives

    f(x)g(x) = ∑_{k=1}^n f(u_k) g(u_k) e_k(x)/e_k(u_k).

The coefficient of x^{n−1} on the left side is 0 while in each e_k(x) it is 1 (as e_k(x)
is monic of degree (n − 1)) and hence

    0 = ∑_{k=1}^n f(u_k) g(u_k) (1/e_k(u_k))
      = ∑_{k=1}^n (v_k f(u_k)) (v_k^{−1}/e_k(u_k)) g(u_k)
      = ∑_{k=1}^n (v_k f(u_k)) (w_k g(u_k))         (by Equation 1.8)
      = (c_f, c_g).

It is noted in particular that

    RS⊥_{n,k}(u,q) = GRS_{n,n−k}(u,w,q)

for the multiplier set w_i = ∏_{j≠i} (u_i − u_j)^{−1}.
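The multiplier set of Equation 1.8 can be validated exhaustively in a small case (F_7, n = 5, k = 2; the parameters and multipliers are an illustrative choice):

```python
from itertools import product

P = 7
n, k = 5, 2
u = [1, 2, 3, 4, 5]          # distinct evaluation elements
v = [1, 2, 3, 4, 5]          # nonzero multipliers for the primal code

def modinv(a):
    return pow(a, P - 2, P)

# Equation 1.8: w_i = v_i^{-1} * prod_{j != i} (u_i - u_j)^{-1}
w = []
for i in range(n):
    prod = 1
    for j in range(n):
        if j != i:
            prod = prod * (u[i] - u[j]) % P
    w.append(modinv(v[i]) * modinv(prod) % P)

def evalp(coeffs, x):
    return sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P

# every codeword of GRS_{n,k}(u,v,q) is orthogonal to every codeword
# of GRS_{n,n-k}(u,w,q)
ok = all(
    sum(v[i] * evalp(f, u[i]) * w[i] * evalp(g, u[i]) for i in range(n)) % P == 0
    for f in product(range(P), repeat=k)
    for g in product(range(P), repeat=n - k)
)
```

All 7^2 × 7^3 codeword pairs come out orthogonal, in line with the coefficient-of-x^{n−1} argument above.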


Reed–Muller Codes
Reed–Muller (RM) codes are discussed in some depth in most books on coding
(e.g., [3, 4]) with perhaps the most comprehensive being [2] which considers
their relationship to Euclidean geometries and combinatorial designs. The
properties of RM codes are most easily developed for the binary field but the
general case will be considered here – the Generalized Reed–Muller (GRM)
codes (generalized in a different sense than the GRS codes). The codes are
of most interest in this work for the construction of locally decodable codes
(Chapter 8) and their relationship to multiplicity codes introduced there.
Consider m variables x_1, x_2, ..., x_m and the ring F_q[x_1, x_2, ..., x_m] = F_q[x]
of multivariate polynomials over F_q (see also Appendix B). The monomials
in the m variables and their degrees are of the form

    x^i = x_1^{i_1} x_2^{i_2} ··· x_m^{i_m},  i ∼ (i_1, i_2, ..., i_m),  degree = ∑_j i_j.      (1.9)

A multivariate polynomial f(x) ∈ F_q[x] is a sum of monomials over F_q and
the degree of f is the largest of the degrees of any of its monomials. Notice that
over the finite field F_q, x_i^q = x_i and so only degrees of any variable less than q
are of interest.
In the discussion of these codes we will need two simple enumerations:
(i) the number of monomials in m variables of degree exactly d and (ii) the
number of monomials of degree at most d. These problems are equivalent to
counting the partitions of the integer d into at most m parts and the partitions
of all integers at most d into at most m parts. They are easily addressed as
“balls in cells” problems as follows.

For the first problem, place d balls in a row and add a further m balls. There
are d + m − 1 spaces between the d + m balls. Choose m − 1 of these spaces
in which to place markers (in \binom{d+m−1}{m−1} ways). Add markers to the left of the
row and to the right of the row. Place the balls between two markers into a
“bin” – there are m such bins. Subtract a ball from each bin. If the number of
balls in bin j is i_j, then the process determines a partition of d in the sense
that i_1 + i_2 + ··· + i_m = d and all such partitions arise in this manner. Thus
the number of monomials in m variables of degree equal to d is given by

    \binom{d+m−1}{m−1} = | { (i_1, ..., i_m) : i_1 + i_2 + ··· + i_m = d, i_j ∈ Z_{≥0} } |.      (1.10)
To determine the number of monomials in (at most) m variables of total
degree at most d, consider the setup as above except now add another ball
to the row to have d + m + 1 balls and choose m of the spaces between
the balls (in \binom{d+m}{m} ways) in which to place markers corresponding to m + 1
bins. As before subtract a ball from each bin. The contents of the last bin are
regarded as superfluous and discarded to take into account the “at most” part
of the enumeration. The contents of the first m bins correspond to a partition
and the number of monomials in m variables of total degree at most d is

    \binom{d+m}{m} = | { (i_1, ..., i_m) : i_1 + i_2 + ··· + i_m ≤ d, i_j ∈ Z_{≥0} } |.          (1.11)

Note that it follows that

    ∑_{j=0}^{d} \binom{j+m−1}{m−1} = \binom{d+m}{m}

(i.e., the number of monomials of degree at most d is the sum of the numbers
of monomials of degree exactly j for j = 0, 1, ..., d), as is easily shown by
induction.
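Both counts, and the summation identity, can be checked by direct enumeration; a sketch with an illustrative choice m = 3, d = 4:

```python
from itertools import product
from math import comb

def monomials_exact(m, d):
    # number of exponent vectors (i_1, ..., i_m) with i_1 + ... + i_m = d
    return sum(1 for e in product(range(d + 1), repeat=m) if sum(e) == d)

m, d = 3, 4
exact = monomials_exact(m, d)
at_most = sum(monomials_exact(m, j) for j in range(d + 1))
```

For m = 3 and d = 4 the counts are \binom{6}{2} = 15 monomials of degree exactly 4 and \binom{7}{3} = 35 of degree at most 4.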
Consider the code of length q^m, denoted GRM_q(d,m), generated by the
monomials of degree at most d in m variables for d < q − 1, i.e., let f(x) ∈ F_q[x] be
an m-variate polynomial of degree at most d (the degree of a polynomial
is the largest degree of its monomials and no variable is of degree greater than
q − 1). The corresponding codeword is denoted

    c_f = ( f(a), a ∈ F_q^m ),  f ∈ F_q[x], f of degree at most d,

i.e., a codeword of length q^m with coordinate positions labeled by the
elements of F_q^m, the coordinate labeled a ∈ F_q^m having value f(a). It is
straightforward to show that the codewords corresponding to the monomials
are linearly independent over F_q and hence the code has

    code length n = q^m   and   code dimension k = \binom{m+d}{d}.
To determine a bound on the minimum distance of the code, the theorem
([8], theorem 6.13) is used which states that the maximum number of zeros of a
multivariate polynomial in m variables of degree d over F_q is at most dq^{m−1}.
Thus the maximum fraction of a codeword that can have zero coordinates is
d/q and hence the normalized minimum distance of the code (code distance
divided by length) is at least

    1 − d/q,  d < q − 1.

(Recall d here is the maximum degree of the monomials used, not code
distance.) The normalized (sometimes referred to as fractional or relative)
distance of a code will be designated Δ = 1 − d/q. (Many works use δ to
denote this; here δ is used for the erasure probability on the BEC.) Thus, e.g., for
m = 2 (bivariate polynomials) this subclass of GRM codes has the parameters

    ( q^2, \binom{d+2}{2}, q^2 − dq )_q.

Note that the rate of the code is

    \binom{d+2}{2} / q^2 ≈ d^2/(2q^2) = (1 − Δ)^2/2.

Thus the code can have rate at most 1/2.
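These parameters can be verified by brute force for a small member of the family; an illustrative choice is q = 5, m = 2, d = 2, giving n = 25, k = 6 and minimum distance q^2 − dq = 15:

```python
from itertools import product

q, m, d = 5, 2, 2
points = list(product(range(q), repeat=m))
monos = [(i, j) for i in range(d + 1) for j in range(d + 1 - i)]  # i + j <= d

# value of each monomial x^i y^j at each evaluation point, precomputed
vals = [[pow(x, i, q) * pow(y, j, q) % q for (i, j) in monos]
        for (x, y) in points]

def codeword(coeffs):
    return [sum(c * v for c, v in zip(coeffs, val)) % q for val in vals]

min_wt = min(sum(1 for s in codeword(c) if s != 0)
             for c in product(range(q), repeat=len(monos)) if any(c))
```

The polynomial x(x − 1), with zeros on the 10 points having x ∈ {0, 1}, shows the bound dq^{m−1} on zeros is attained here, so the minimum distance is exactly 15.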
For a more complete analysis of the GRM codes the reader should consult
([2], section 5.4). Properties of GRS and GRM codes will be of interest in
several of the chapters.

1.2 Notes on Information Theory


The probability distribution of the discrete random variable X, Pr(X = x),
will be denoted P_X(x) or as P(x) when the random variable is understood.
Similarly a joint discrete random variable X × Y (or XY) is denoted Pr(X =
x, Y = y) = P_{XY}(x,y). The conditional probability distribution is denoted
Pr(Y = y | X = x) = P(y | x). A probability density function (pdf) for a
continuous random variable will be designated similarly as p_X(x) or a similar
lowercase function.
Certain notions from information theory are required. The entropy of a
discrete ensemble {P(x_i), i = 1, 2, ...} is given by

    H(X) = − ∑_i P(x_i) log P(x_i)

and unless otherwise specified all logs will be to the base 2. It represents the
amount of uncertainty in the outcome of a realization of the random variable.
A special case will be important for later use, that of a binary ensemble
{p, (1 − p)} which has an entropy of

    H_2(p) = −p log_2 p − (1 − p) log_2 (1 − p)                          (1.12)

referred to as the binary entropy function. It is convenient to introduce the
q-ary entropy function here, defined as

    H_q(x) = { x log_q(q − 1) − x log_q x − (1 − x) log_q(1 − x),  0 < x ≤ θ = (q − 1)/q
             { 0,                                                  x = 0
                                                                         (1.13)

an extension of the binary entropy function. Notice that H_q(p) is the entropy
associated with the q-ary discrete symmetric channel and also the entropy of
the probability ensemble {1 − p, p/(q − 1), ..., p/(q − 1)} (a total of q values).
The binary entropy function of Equation 1.12 is obtained with q = 2.
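The two entropy functions translate directly into code, with log_q computed via log2; a sketch:

```python
from math import log2

def H2(p):
    # binary entropy function, Equation 1.12
    if p in (0, 1):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def Hq(x, q):
    # q-ary entropy function, Equation 1.13, using log_q(z) = log2(z)/log2(q)
    if x == 0:
        return 0.0
    return (x * log2(q - 1) - x * log2(x) - (1 - x) * log2(1 - x)) / log2(q)
```

H_q attains its maximum value 1 at θ = (q − 1)/q, and q = 2 recovers H_2.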



Similarly the entropy of a joint ensemble {P(x_i, y_i), i = 1, 2, ...} is
given by

    H(X,Y) = − ∑_{x_i, y_i} P(x_i, y_i) log P(x_i, y_i)

and the conditional entropy of X given Y is given by

    H(X | Y) = − ∑_{x_i, y_i} P(x_i, y_i) log P(x_i | y_i) = H(X,Y) − H(Y)

which has the interpretation of being the expected amount of uncertainty


remaining about X after observing Y , sometimes referred to as equivocation.
The mutual information between ensembles X and Y is given by

    I(X;Y) = ∑_{x_i, y_j} P(x_i, y_j) log [ P(x_i, y_j) / (P(x_i)P(y_j)) ]   (1.14)

and measures the amount of information that one of the variables gives
about the other. The notation {X;Y} is viewed as a joint ensemble. It will
often be the case that X will represent the input to a discrete memoryless
channel (to be discussed) and Y the output of the channel and this notion
of mutual information has played a pivotal role in the development of
communication systems over the past several decades.
Similarly for three ensembles it follows that

    I(X;Y,Z) = ∑_{i,j,k} P(x_i, y_j, z_k) log [ P(x_i, y_j, z_k) / (P(x_i)P(y_j, z_k)) ].

The conditional information of the ensemble {X;Y} given Z is

    I(X;Y | Z) = ∑_{i,j,k} P(x_i, y_j, z_k) log [ P(x_i, y_j | z_k) / (P(x_i | z_k)P(y_j | z_k)) ]

or alternatively

    I(X;Y | Z) = ∑_{i,j,k} P(x_i, y_j, z_k) log [ P(x_i, y_j, z_k)P(z_k) / (P(x_i, z_k)P(y_j, z_k)) ].

The process of conditioning observations of X, Y on a third random variable Z
may increase or decrease the mutual information between X and Y, but it can
be shown that the conditional information is always nonnegative. There are numerous
relationships between these information-theoretic quantities. Thus

    I(X;Y) = H(X) − H(X | Y) = H(X) + H(Y) − H(X,Y).
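These identities are easy to sanity-check on a small joint distribution (the numbers below are arbitrary illustrative values):

```python
from math import log2

# an arbitrary joint distribution of (X, Y)
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {x: sum(p for (xx, _), p in pxy.items() if xx == x) for x in (0, 1)}
py = {y: sum(p for (_, yy), p in pxy.items() if yy == y) for y in (0, 1)}

def H(dist):
    # entropy of a distribution given as a dict of probabilities
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# mutual information, Equation 1.14
I_xy = sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items() if p > 0)
```

The computed I(X;Y) agrees with H(X) + H(Y) − H(X,Y) and is positive, as the dependence between the two variables suggests.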
Our interest in these notions is to define the notion of capacity of certain
channels.


A discrete memoryless channel (DMC) is a finite set of inputs X and a
discrete set of outputs Y such that at each instance of time the channel accepts
an input x ∈ X and with probability W(y | x) outputs y ∈ Y, each use of
the channel is independent of other uses, and

    ∑_{y∈Y} W(y | x) = 1 for each x ∈ X.

Thus if a vector x = (x_1, x_2, ..., x_n), x_i ∈ X, is transmitted in n uses of the
channel, the probability of receiving the vector y = (y_1, y_2, ..., y_n) is given by

    P(y | x) = ∏_{i=1}^n W(y_i | x_i).

At times the DMC might be designated simply by the set of transition
probabilities W = {W(y | x)}.
For the remainder of this chapter it will be assumed the channel input is
binary, referred to as a binary-input DMC or BDMC, where X = {0,1}
and Y is finite. Important examples of such channels include the binary
symmetric channel (BSC) and the binary erasure channel (BEC), shown in
Figure 1.1 (a) and (b), while (c) represents the more general BDMC case.

Often, rather than a general BDMC, the additional constraint of symmetry is
imposed, i.e., a binary-input discrete memoryless symmetric channel, by which
is meant a binary-input (X = {0,1}) channel with a channel transition probability
{W(y | x), x ∈ X, y ∈ Y} which satisfies a symmetry condition, noted later.
The notion of mutual information introduced above is applied to a DMC
with the X ensemble representing the channel input and the Y ensemble the
output. The function I(X;Y) then represents the amount of information
the output gives about the input. In the communication context it would be
desirable to maximize this function. Since the channel, represented by the
channel transition matrix W(y | x), is fixed, the only variable that can be
adjusted is the set of input probabilities P(x_i), x_i ∈ X. Thus the maximum

1−p 1−δ
0 0 0 0
p δ . ..
E X = {0,1} .. . Y
p δ
1 1 1 1
1−p 1−δ

BSC, C = 1 − H (p) BEC, C = 1 − δ BDMC W (y | x)

Figure 1.1 Binary-input DMCs


amount of information that on average can be transmitted through the channel


in each channel use is found by determining the set of input probabilities that
maximizes the mutual information between the channel input and output.
It is intuitive to define the channel capacity of a DMC as the maximum rate
at which it is possible to transmit information through the channel, per channel
use, with an arbitrarily low error probability:

    channel capacity = C = max_{P(x), x∈X} I(X;Y) = I(W)
                         = max_{P(x), x∈X} ∑_{x∈X, y∈Y} W(y | x)P(x) log [ W(y | x)P(x) / (P(x)P(y)) ]

where P(y) = ∑_{x∈X} W(y | x)P(x). For general channels, determining channel
capacity can be a challenging optimization problem. When the channels exhibit
a certain symmetry, however, the optimization is achieved with equally likely
inputs:
Definition 1.2 ([6]) A DMC is symmetric if the set of outputs can be
partitioned into subsets in such a way that for each subset, the matrix of
transition probabilities (with rows as inputs and columns as outputs) has the
property that each row is a permutation of each other row and each column of
a partition (if more than one) is a permutation of each other column in the
partition.
A consequence of this definition is that for a symmetric DMC the channel
capacity is achieved with equally probable inputs ([6], theorem 4.5.2). It is
simple to show that both the BSC and the BEC are symmetric by this definition.
The capacities of the BSC (with crossover probability p) and BEC (with
erasure probability δ) are

    C_BSC = 1 + p log_2 p + (1 − p) log_2 (1 − p)   and   C_BEC = 1 − δ.    (1.15)

The first relation is often written

    C_BSC = 1 − H_2(p)

where H_2(p) is the binary entropy function of Equation 1.12.
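A numerical check that uniform inputs recover C_BSC = 1 − H_2(p); a sketch with illustrative function names:

```python
from math import log2

def H2(p):
    return -p * log2(p) - (1 - p) * log2(1 - p) if 0 < p < 1 else 0.0

def bsc_mutual_info(p, p0=0.5):
    # I(X;Y) for a BSC with crossover p and input distribution (p0, 1 - p0)
    W = {(0, 0): 1 - p, (0, 1): p, (1, 0): p, (1, 1): 1 - p}
    px = {0: p0, 1: 1 - p0}
    py = {y: sum(W[x, y] * px[x] for x in px) for y in (0, 1)}
    return sum(px[x] * W[x, y] * log2(W[x, y] / py[y])
               for x in px for y in (0, 1) if W[x, y] > 0)

p = 0.11
```

Skewing the input distribution away from uniform only lowers the mutual information, consistent with the symmetric-channel result above.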
For channels with continuous inputs and/or outputs the mutual information
between channel input and output is given by

    I(X;Y) = ∫_x ∫_y p(x,y) log [ p(x,y) / (p(x)p(y)) ] dx dy

for probability density functions p(·,·) and p(·).
Versions of the Gaussian channel where Gaussian-distributed noise is added
to the signal in transmission are among the few such channels that offer


N ∼ N(0,σ 2 )

+
X ∈ {±1} Y =X+N
Figure 1.2 The binary-input additive white Gaussian noise channel

tractable solutions and are designated additive white Gaussian noise (AWGN)
channels. The term “white” here refers to a flat power spectral density of
the noise with frequency. The binary-input AWGN (BIAWGN) channel,
where one of two continuous-time signals is chosen for transmission during
a given time interval (0,T) and experiences AWGN in transmission, can be
represented as in Figure 1.2:

    Y_i = X_i + N_i,  X_i ∈ {±1},   BIAWGN,

where N_i is a Gaussian random variable with zero mean and variance σ²,
denoted N_i ∼ N(0, σ²). The joint distribution of (X,Y) is a mixture of discrete
and continuous, with P(X = +1) = P(X = −1) = 1/2 (which achieves
capacity on this channel) and with N ∼ N(0, σ²). The pdf p(y) of the
channel output is

    p(y) = (1/2) · (1/√(2πσ²)) e^{−(y+1)²/2σ²} + (1/2) · (1/√(2πσ²)) e^{−(y−1)²/2σ²}
         = (1/√(8πσ²)) [ exp(−(y+1)²/2σ²) + exp(−(y−1)²/2σ²) ]           (1.16)
and maximizing the expression for mutual information of the channel (equally
likely inputs) reduces to

    C_BIAWGN = − ∫_y p(y) log_2 p(y) dy − (1/2) log_2 (2πeσ²).           (1.17)
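Equation 1.17 can be evaluated numerically; a midpoint-rule sketch (the truncation limit and step count are ad hoc choices of mine):

```python
from math import e, exp, log2, pi, sqrt

def biawgn_capacity(sigma, lim=30.0, steps=100000):
    s2 = sigma * sigma
    def p(y):
        # output pdf of Equation 1.16
        return (exp(-(y + 1)**2 / (2 * s2))
                + exp(-(y - 1)**2 / (2 * s2))) / sqrt(8 * pi * s2)
    dy = 2 * lim / steps
    h_out = 0.0                       # differential entropy of the output
    for i in range(steps):
        y = -lim + (i + 0.5) * dy
        py = p(y)
        if py > 0:
            h_out -= py * log2(py) * dy
    return h_out - 0.5 * log2(2 * pi * e * s2)
```

The capacity decreases with the noise variance σ² and approaches 1 bit per use as σ → 0, matching the curve in Figure 1.3(b).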
The general shape of these capacity functions is shown in Figure 1.3 where
SNR denotes signal-to-noise ratio.
Another Gaussian channel of fundamental importance in practice is that of
the band-limited AWGN channel (BLAWGN). In this model a band-limited
signal x(t),t ∈ (0,T ) with signal power ≤ S is transmitted on a channel
band-limited to W Hz, i.e., (−W,W ) and white Gaussian noise with two-sided
power spectral density level No /2 is added to the signal in transmission. This
channel can be discretized via orthogonal basis signals and the celebrated and
much-used formula for the capacity of it is


[Figure 1.3 Shape of capacity curves for (a) the BSC, C_BSC as a function of
the crossover probability p ∈ (0,1), and (b) the BIAWGN, C_BIAWGN as a
function of SNR ∼ 1/σ².]

    C_BLAWGN = W log_2 (1 + S/(N_o W)) bits per second.                  (1.18)


The importance of the notion of capacity, and perhaps the crowning
achievement of information theory, is the following coding theorem, informally
stated, which says that k information bits can be encoded into n coded bits, code rate
R = k/n, such that the bit error probability P_e of the decoded bits at the
output of a DMC of capacity C can be upper bounded by

    P_e ≤ e^{−nE(R)},  R < C                                             (1.19)

where E(R), the error exponent, is positive for all R < C. The result implies
that for any code rate R < C there exists a code of some length n capable
of transmitting information with negligible error probability. Thus reliable
communication is possible even though the channel is noisy, as long as one
does not transmit at too high a rate.
The information-theoretic results discussed here have driven research into
finding efficient codes, encoding and decoding algorithms for the channels
noted over many decades. The references [5, 15] present a more comprehensive
discussion of these and related issues.

1.3 An Overview of the Chapters


A brief description of the following chapters is given.
The chapter on coding for erasures is focused on the search for erasure-
correcting algorithms that achieve linear decoding complexity. It starts with a
discussion of Tornado codes. Although these codes did not figure prominently
in subsequent work, they led to the notion of codes from random graphs with
irregular edge distributions that led to very efficient decoding algorithms. In
turn these can be viewed as leading to the notion of fountain codes which


are not erasure-correcting codes. Rather they are codes that can efficiently
recreate a file from several random combinations of subfiles. Such codes led
to the important concept of Raptor codes which have been incorporated into
numerous standards for the download of large files from servers in a multicast
network while not requiring requests for retransmissions of portions of a file
that a receiver may be missing, a crucial feature in practice.
Certain aspects of low-density parity-check (LDPC) codes are then dis-
cussed. These codes, which derive from the work of Gallager from the early
1960s, have more recently assumed great importance for applications as
diverse as coding for flash memories as well as a wide variety of communi-
cation systems. Numerous books have been written on various aspects of the
construction and analysis of these codes. This chapter focuses largely on the
paper of [13] which proved crucial for more recent progress for the analytical
techniques it introduced.
The chapter on polar codes arose out of the seminal paper [1]. In a deep
study of information-theoretic and analytical techniques it produced the first
codes that provably achieve rates approaching capacity. From a binary-input
channel with capacity C ≤ 1, through iterative transformations, it derived
a channel with N = 2^n inputs and outputs and produced a series of approximately
NC sub-channels that are capable of transmitting data with arbitrarily small error
probability, thus achieving capacity. The chapter discusses the central notions
to assist with a deeper reading of the paper.
The chapter on network coding is devoted to the somewhat surprising idea
that allowing nodes (servers) in a packet network to process and combine
packets as they traverse the network can substantially improve throughput of
the network. This raises the question of the capacity of such a network and
how to code the packets in order to achieve the capacity. This chapter looks at
a few of the techniques that have been developed for multicast channels.
With the wide availability of the high-speed internet, access to information
stored on widely distributed databases became more commonplace. Huge
amounts of information stored in distributed databases made up of standard
computing and storage elements became ubiquitous. Such elements fail with
some regularity and methods to efficiently restore the contents of a failed
server are required. Many of these early distributed storage systems simply
replicated data on several servers and this turned out to be an inefficient
method of achieving restoration of a failed server, both in terms of storage
and transmission costs. Coding the stored information greatly improved
the efficiency with which a failed server could be restored and Chapter 6
reviews the coding techniques involved. The concepts involved are closely
related to locally repairable codes considered in Chapter 7 where an erased
coordinate in a codeword can be restored by contacting a few other coordinate
positions.

https://doi.org/10.1017/9781009283403.002 Published online by Cambridge University Press
Chapter 8 considers coding techniques which allow a small amount of
information to be recovered from errors in a codeword without decoding
the entire codeword, termed locally decodable codes. Such codes might find
application where very long codewords are used and users make frequent
requests for modest amounts of information. The research led to numerous
other variations such as locally testable codes where one examines a small
portion of the data and must determine, with some probability of error, whether
it is a portion of a codeword of some code.
Private information retrieval considers techniques for users to access infor-
mation on servers in such a manner that the servers are unaware of which infor-
mation is being sought. The most common scenario is one where the servers
contain the same information and users query information from individual
servers and perform computations on the responses to arrive at the desired
information. More recent contributions have shown how coded information
stored on the servers can achieve the same ends with greatly improved storage
efficiency. Observations on this problem are given in Chapter 9.
The notion of a batch code addresses the problem of storing information
on servers in such a way that no matter which information is being sought no
single server has to serve more than a specified amount of information.
It is a technique to ensure load balancing between servers. Some techniques to
achieve this are discussed in Chapter 10.
Properties of expander graphs find wide application in several areas of
computer science and mathematics and the notion has been applied to the
construction of error-correcting codes with efficient decoding algorithms.
Chapter 11 introduces this topic of considerable current interest.
Algebraic coding theory is based on the notion of packing spheres in a
space of n-tuples over a finite field Fq according to the Hamming metric.
Rank-metric codes consider the vector space of matrices of a given shape
over a finite field with a different metric, namely the distance between two
such matrices is given by the rank of the difference of the matrices which
can be shown to be a metric on such a space. A somewhat related (although
quite distinct) notion is to consider a set of subspaces of a vector space over
a finite field with a metric defined between such subspaces based on their size
and intersection. Such codes of subspaces have been shown to be of value in
the network coding problem. The rank-metric codes and subspace codes are
introduced in Chapter 12.
A problem that was introduced in the early days of information theory was
the notion of list decoding where, rather than the decoding algorithm producing
a unique closest codeword to the received word, it was acceptable to produce a
list of closest words. Decoding was then viewed as successful if the transmitted
codeword was on the list. The work of Sudan [14] introduced innovative
techniques for this problem which influenced many aspects of coding theory.
This new approach led to numerous other applications and results to achieve
capacity on such a channel. These are overviewed in Chapter 13.
Shift register sequences have found important applications in numerous
synchronization and ranging systems as well as code-division multiple access
(CDMA) communication systems. Chapter 14 discusses their basic properties.
The advent of quantum computing is likely to have a dramatic effect
on many storage, computing and transmission technologies. While still in
its infancy it has already altered the practice of cryptography in that the
US government has mandated that future deployment of crypto algorithms
should be quantum-resistant, giving rise to the subject of “postquantum
cryptography.” A brief discussion of this area is given in Chapter 15. While
experts in quantum computing may differ in their estimates of the time frame
in which it will become significant, there seems little doubt that it will have a
major impact.
An aspect of current quantum computing systems is their inherent instabil-
ity as the quantum states interact with their environment causing errors in the
computation. The systems currently implemented or planned will likely rely
on some form of quantum error-correcting codes to achieve sufficient system
stability for their efficient operation. The subject is introduced in Chapter 16.
The final chapter considers a variety of other coding scenarios in an effort
to display the width of the areas embraced by the term “coding” and to further
illustrate the scope of coding research that has been ongoing for the past few
decades beyond the few topics covered in the chapters.
The two appendices cover some useful background material on finite
geometries and multivariable polynomials over finite fields.

References
[1] Arıkan, E. 2009. Channel polarization: a method for constructing capacity-
achieving codes for symmetric binary-input memoryless channels. IEEE Trans.
Inform. Theory, 55(7), 3051–3073.
[2] Assmus, Jr., E.F., and Key, J.D. 1992. Designs and their codes. Cambridge Tracts
in Mathematics, vol. 103. Cambridge University Press, Cambridge.
[3] Blahut, R.E. 1983. Theory and practice of error control codes. Advanced Book
Program. Addison-Wesley, Reading, MA.
[4] Blake, I.F., and Mullin, R.C. 1975. The mathematical theory of coding. Academic
Press, New York/London.




[5] Forney, G.D., and Ungerboeck, G. 1998. Modulation and coding for linear
Gaussian channels. IEEE Trans. Inform. Theory, 44(6), 2384–2415.
[6] Gallager, R.G. 1968. Information theory and reliable communication. John Wiley
& Sons, New York.
[7] Huffman, W.C., and Pless, V. 2003. Fundamentals of error-correcting codes.
Cambridge University Press, Cambridge.
[8] Lidl, R., and Niederreiter, H. 1997. Finite fields, 2nd ed. Encyclopedia of Mathe-
matics and Its Applications, vol. 20. Cambridge University Press, Cambridge.
[9] Ling, S., and Xing, C. 2004. Coding theory. Cambridge University Press,
Cambridge.
[10] MacWilliams, F.J., and Sloane, N.J.A. 1977. The theory of error-correcting
codes: I and II. North-Holland Mathematical Library, vol. 16. North-Holland,
Amsterdam/New York/Oxford.
[11] McEliece, R.J. 1987. Finite fields for computer scientists and engineers. The
Kluwer International Series in Engineering and Computer Science, vol. 23.
Kluwer Academic, Boston, MA.
[12] Menezes, A.J., Blake, I.F., Gao, X.H., Mullin, R.C., Vanstone, S.A., and
Yaghoobian, T. 1993. Applications of finite fields. The Kluwer International Series
in Engineering and Computer Science, vol. 199. Kluwer Academic, Boston, MA.
[13] Richardson, T.J., and Urbanke, R.L. 2001. The capacity of low-density parity-
check codes under message-passing decoding. IEEE Trans. Inform. Theory, 47(2),
599–618.
[14] Sudan, M. 1997. Decoding of Reed Solomon codes beyond the error-correction
bound. J. Complexity, 13(1), 180–193.
[15] Ungerboeck, G. 1982. Channel coding with multilevel/phase signals. IEEE Trans.
Inform. Theory, 28(1), 55–67.



2
Coding for Erasures and Fountain Codes

A coordinate position in a received word is said to be an erasure if the
receiver is using a detection algorithm that is unable to decide which symbol
was transmitted in that position and outputs an erasure symbol such as E
rather than risk making an error, i.e., outputting an incorrect symbol. One
might describe an erasure as an error whose position is known. For binary
information symbols the two most common discrete memoryless channels
are shown in Figure 1.1, the binary erasure channel (BEC) and the binary
symmetric channel (BSC), introduced in Chapter 1. The symbols p and δ will
generally refer to the channel crossover probability for the BSC and erasure
probability for the BEC, respectively. Both symbols sometimes occur with
other meanings as will be noted. Each such channel has a capacity associated
with it which is the maximum rate at which information (per channel use)
can be sent through the channel error-free, as discussed in Chapter 1. The
subject of error-correcting codes arose to meet the challenge of realizing such
performance.
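The capacities referred to here have standard closed forms (classical results stated for concreteness, not derived in this passage): C = 1 − δ for the BEC and C = 1 − h(p) for the BSC, where h is the binary entropy function. A quick numerical sketch, with function names of my own:

```python
from math import log2

def h2(p):
    """Binary entropy function h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def capacity_bec(delta):
    """Capacity of the binary erasure channel, erasure probability delta."""
    return 1 - delta

def capacity_bsc(p):
    """Capacity of the binary symmetric channel, crossover probability p."""
    return 1 - h2(p)

# capacity_bec(0.1) is 0.9; capacity_bsc(0.11) is roughly 0.5.
```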
It is to be emphasized that two quite different channel error models are used
in this chapter. The BEC will be the channel of interest in the first part of
this chapter. Thus a codeword (typically of a linear code) is received which
contains a mix of correct received symbols and erased symbols. The job of the
code design and decoder algorithm is then to “fill in” or interpolate the erased
positions with original transmitted symbols, noting that in such a model the
unerased positions are assumed correct.
Codes derived for the BEC led to the notion of irregular distribution codes
where the degrees of variable and check nodes of the code Tanner graph, to
be introduced shortly, are governed by a probability distribution. These in turn
led to fountain codes, which are the subject of the last section of the chapter.
In such channels each transmitted packet is typically a linear combination
of information packets over some fixed finite field. The receiver gathers a
sufficient number of transmitted packets (assumed without errors) until it is
able to retrieve the information packets by, e.g., some type of matrix inversion
algorithm on the set of packets received.

https://doi.org/10.1017/9781009283403.003 Published online by Cambridge University Press

The retriever then does not care which
particular packets are received, just that they receive a sufficient number of
them to allow the decoding algorithm to decode successfully. This is often
described as a “packet loss” channel, in that coded packets transmitted may be
lost in transmission due to a variety of network imperfections such as buffer
overflow or failed server nodes, etc. Such a packet loss situation is not modeled
by the DMC models considered.
While the two channel models examined in this chapter are quite different,
it is their common heritage that suggested their discussion in the same chapter.

2.1 Preliminaries
It is convenient to note a few basic results on coding and DMCs for future
reference. Suppose C = (n, k, d)_q is a linear block code over the finite field
of q elements F_q, designating a linear code that has k information symbols
(dimension k) and (n − k) parity-check symbols and minimum distance d.
Much of this volume is concerned with binary-input channels and q = 2.
Suppose a codeword c = (c_1, c_2, . . . , c_n) is transmitted on a BEC and the
received word is r = (r_1, r_2, . . . , r_n) which has e erasures in positions E ⊂
{1, 2, . . . , n}, |E| = e. Then r_i = E for i ∈ E, where E is the erasure symbol. The
unerased symbols received are assumed correct.
A parity-check matrix of the code is an (n − k) × n matrix H over F_q
such that

H · c^t = 0^t_{n−k}

for any codeword c where 0_{n−k} is the all-zero (n − k)-tuple, a row vector over
F_q. If the columns of H are denoted by h(i), i = 1, 2, . . . , n, then H · c^t is
the sum of columns of H multiplied by the corresponding coordinates of the
codeword, adding to the all-zero column (n − k)-vector 0^t. Similarly let H_e be
the (n − k) × e submatrix of columns of H corresponding to the erased positions
and c_e be the e-tuple of the transmitted codeword on the erased positions. Then

H_e · c_e^t = −y_e^t

where y_e^t is the (n − k)-tuple corresponding to the weighted sum of columns
of H in the nonerased positions, i.e., columns of H multiplied by the known
(unerased) positions of the received codeword.

As long as e ≤ d − 1 the above matrix equation can be solved uniquely
for the erased word positions, i.e., c_e. However, this is generally a task of
cubic complexity in codeword length, i.e., O(n^3). The work of this chapter
will show how a linear complexity with codeword length can be achieved with
high probability.
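The matrix equation above can be made concrete with a toy example of my own (not from the text): the generic cubic-complexity approach is Gaussian elimination over F_2 on the submatrix H_e, where the minus sign vanishes in characteristic 2.

```python
# Toy sketch: recover erased coordinates by solving H_e c_e = y_e over F_2
# with Gaussian elimination, the generic cubic-complexity approach.
def solve_erasures(H, r, erased):
    """H: parity-check rows over F_2; r: received word, entries listed in
    `erased` unknown.  Returns the completed codeword as a list of bits."""
    n = len(H[0])
    known = [j for j in range(n) if j not in erased]
    # Syndrome contribution of the known positions (signs vanish over F_2).
    y = [sum(row[j] * r[j] for j in known) % 2 for row in H]
    # Augmented system [H_e | y], fully reduced over F_2.
    A = [[row[j] for j in erased] + [yi] for row, yi in zip(H, y)]
    for col in range(len(erased)):
        pivot = next(i for i in range(col, len(A)) if A[i][col])
        A[col], A[pivot] = A[pivot], A[col]
        for i in range(len(A)):
            if i != col and A[i][col]:
                A[i] = [a ^ b for a, b in zip(A[i], A[col])]
    c = list(r)
    for idx, j in enumerate(erased):
        c[j] = A[idx][-1]
    return c

# The Hamming (8,4,4)_2 matrix of Figure 2.1 and one of its codewords;
# with d = 4, any e <= 3 erasures are uniquely correctable.
H = [[0, 0, 0, 1, 1, 1, 1, 0],
     [0, 1, 1, 0, 0, 1, 1, 0],
     [1, 0, 1, 0, 1, 0, 1, 0],
     [1, 1, 1, 1, 1, 1, 1, 1]]
sent = [1, 1, 1, 0, 0, 0, 0, 1]
received = [0, 1, 0, 0, 0, 0, 0, 1]       # positions 0, 2 and 5 erased
decoded = solve_erasures(H, received, [0, 2, 5])
```

For e ≤ d − 1 erasures the pivot search always succeeds, since any d − 1 columns of H are linearly independent.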
This chapter will deal exclusively with binary codes and the only arithmetic
operation used will be that of XOR (exclusive or), either of elements of F_2 or
of packets of n bits in F_2^n. Thus virtually all of the chapter will refer to packets
or binary symbols (bits) equally, the context being clear from the problem of
interest.
It is emphasized that there are two different types of coding considered in
this chapter. The first is the use of linear block codes for erasure correction
while the second involves the use of fountain codes on a packet loss channel.
Virtually all of this chapter will use the notion of a bipartite graph to
represent the various linear codes considered, a concept used by Tanner in his
prescient works [37, 38, 39]. A bipartite graph is one with two sets of vertices,
say U and V and an edge set E, with no edges between vertices in the same
set. The graph will be called (c,d)-regular bipartite if the vertices in U have
degree c and those of V have degree d. Since |U| = n, counting edges gives |V| = (c/d)n.
The U set of vertices will be referred to as the left vertices and V the right
vertices. Bipartite graphs with irregular degrees will also be of interest later in
the chapter.
There is a natural connection between a binary linear code and an (n−k)×n
parity-check matrix and a bipartite graph. Often the left vertices of the bipartite
code graph are associated with the entire n codeword coordinate positions and
referred to as the variable or information nodes or vertices. Equivalently they
represent the columns of the parity-check matrix. Similarly the (n − k) right
nodes or vertices are the constraint or check nodes which represent the rows
of the parity-check matrix. The edges of the graph correspond to the ones in
the check matrix in the corresponding rows and columns. Such a graphical
representation of the code is referred to as the Tanner graph of the code, a
notion that will feature prominently in many of the chapters.
The binary parity-check matrix of the code is an alternate view of the
incidence matrix of the bipartite graph. The following illustrates the Tanner
graph associated with the parity-check matrix for a Hamming (8,4,4)_2 code
which is used in Example 2.3 shown also in Figures 2.1 and 2.3.
       x1 x2 x3 x4 x5 x6 x7 x8
     [ 0  0  0  1  1  1  1  0 ]  c1
 H = [ 0  1  1  0  0  1  1  0 ]  c2
     [ 1  0  1  0  1  0  1  0 ]  c3
     [ 1  1  1  1  1  1  1  1 ]  c4

Figure 2.1 (a) The Hamming (8,4,4)_2 code and (b) its Tanner graph

As a second graph representation of a binary linear code, it is equally
possible to have the left nodes of the graph as the k information nodes and
the (n − k) right nodes as the check nodes and this is the view for most of the
next section. As a matter of convenience this representation is referred to as the
normal graph representation of a code in this work, although some literature
on coding has a different meaning for the term “normal.” The Tanner graph
representation seems more common in current research literature.
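The correspondence just described, with columns of H as variable nodes, rows as check nodes and ones as edges, can be read off mechanically. A small sketch (the function name is my own) using the matrix of Figure 2.1:

```python
# Read the Tanner graph off a parity-check matrix: there is an edge between
# variable node x_j and check node c_i exactly when H[i][j] = 1.
H = [[0, 0, 0, 1, 1, 1, 1, 0],
     [0, 1, 1, 0, 0, 1, 1, 0],
     [1, 0, 1, 0, 1, 0, 1, 0],
     [1, 1, 1, 1, 1, 1, 1, 1]]

def tanner_graph(H):
    """Neighbor lists: check node -> variables, variable node -> checks."""
    check_nbrs = [[j for j, bit in enumerate(row) if bit] for row in H]
    var_nbrs = [[i for i, row in enumerate(H) if row[j]]
                for j in range(len(H[0]))]
    return check_nbrs, var_nbrs

check_nbrs, var_nbrs = tanner_graph(H)
# check_nbrs[0] is [3, 4, 5, 6]: c1 is joined to x4, x5, x6, x7
# (0-based indices against the 1-based labels of Figure 2.1).
```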
The next section describes a class of linear binary codes, the Tornado codes,
which use a cascade of (normal) bipartite graphs and a very simple decoding
algorithm for correcting erasures. To ensure the effectiveness of decoding it
is shown how the graphs in the cascade can be chosen probabilistically and
this development introduced the notion of irregular distributions of vertex/edge
degrees of the left and right vertices of each graph in the cascade. This notion
has proved important in other coding contexts, e.g., in the construction of
LDPC codes to be considered in Chapter 3.
Section 2.3 introduces the notion of LT codes, standing for Luby trans-
form, the first incarnation of the important notion of a fountain code where
coded packets are produced at random by linearly XORing a number of
information packets, according to a probability law designed to ensure efficient
decodability. That section also considers Raptor codes, a small but important
modification of LT codes that has been standardized as the most effective way
to achieve large downloads over the Internet.
The notion of Tornado codes introduced the idea of choosing random bipar-
tite graphs to effect erasure decoding. Such a notion led to decoding algorithms
of fountain codes where a file is transmitted as randomly linearly encoded
pieces of the file. These decoding algorithms achieve linear complexity, rather
than the cubic complexity normally associated with Gaussian elimination, with
a certain probability of failure. As noted, there is no notion of “erasure”
with fountain codes as there is with Tornado codes. A significant feature of
these fountain codes is that they do not require requests for retransmission
of missing packets. This can be a crucial feature in some systems since such
requests could overwhelm the source trying to satisfy requests from a large
number of receivers, a condition referred to as feedback implosion. This is the
multicast situation where a transmitter has a fixed number of packets (binary
n-tuples) which it wishes to transmit to a set of receivers through a network.
It is assumed receivers are not able to contact the transmitter to make requests
for retransmissions of missing packets. The receiver is able to construct the
complete set of transmitted information packets from the reception of any
sufficiently large set of coded packets, not a specific set. Typically the size
of the set of received packets is just slightly larger than the set of information
packets itself, to ensure successful decoding with high probability, leading to
a very efficient transmission and decoding process.
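At the packet level this can be caricatured in a few lines. The sketch below is my own simplification: real fountain codes draw the number of packets combined from a carefully designed degree distribution rather than uniformly, but the principle is the same. Each coded packet is the XOR of a random nonempty subset of the k information packets, sent with the indices of that subset; the receiver decodes once its collected combinations determine all k packets, by matrix inversion over F_2.

```python
import random

# Toy fountain-style encoder: packets are modeled as integers (blocks of
# bits), and each coded packet is the XOR of a random nonempty subset.
def encode_stream(packets, seed=0):
    rng = random.Random(seed)
    k = len(packets)
    while True:
        idx = [i for i in range(k) if rng.random() < 0.5]
        if not idx:
            continue                       # resample empty subsets
        combo = 0
        for i in idx:
            combo ^= packets[i]
        yield idx, combo

stream = encode_stream([0b0101, 0b1001, 0b1100], seed=1)
idx, combo = next(stream)
# combo equals the XOR of the information packets listed in idx; any receiver
# holding k independent combinations can invert the system over F_2.
```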
Most of the algorithms in the chapter will have linear complexity (in
codeword length or the number of information symbols), making them very
attractive for implementation.

2.2 Tornado Codes and Capacity-Achieving Sequences


The notion of Tornado codes was first noted in [7] and further commented on in
[2] with a more complete account in [10] (an updated version of [7]). While not
much cited in recent works they introduced novel and important ideas that have
become of value in LDPC coding and in the formulation of fountain codes. At
the very least they are an interesting chapter in coding theory and worthy of
some note.
For this section it is assumed transmission is on the BEC. Since only the
binary case is of interest the only arithmetic operation will be the XOR between
code symbols and thus the code symbols (coordinate positions) can be assumed
to be either bits or sequences of bits (packets). Any received packet is assumed
correct – no errors in it. Packets that are erased will be designated with a special
symbol, e.g., E (either a bit or packet) when needed.
Tornado codes can be described in three components: a cascade of a
sequence of bipartite graphs; a (very simple) decoding algorithm for each
stage, as decoding proceeds from the right to the left; and a probabilistic design
algorithm for each of the bipartite graphs involved. As mentioned, the design
algorithm has proven influential in other coding contexts.
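The very simple decoding rule is, in essence, a peeling process. The rendering below is my own minimal sketch with hypothetical names, not the authors' code: any check node whose value is known and which has exactly one erased variable neighbor determines that neighbor as the XOR of the check value and the remaining neighbors.

```python
# Peeling sketch: repeatedly find a check node with exactly one unknown
# variable neighbor and recover it by XOR; stop when no check makes progress.
def peel(check_nbrs, check_vals, known_bits):
    """check_nbrs[i]: variable neighbors of check i; check_vals[i]: its XOR
    value; known_bits: dict position -> bit, extended in place."""
    progress = True
    while progress:
        progress = False
        for nbrs, val in zip(check_nbrs, check_vals):
            missing = [v for v in nbrs if v not in known_bits]
            if len(missing) == 1:
                acc = val
                for v in nbrs:
                    if v in known_bits:
                        acc ^= known_bits[v]
                known_bits[missing[0]] = acc
                progress = True
    return known_bits

# Toy run: checks c0 = x0^x1, c1 = x1^x2, c2 = x2^x3 over bits 1, 0, 1, 1,
# with x1 and x3 erased; peeling recovers both.
bits = peel([[0, 1], [1, 2], [2, 3]], [1, 1, 0], {0: 1, 2: 1})
```

Whether peeling finishes depends on the graph: decoding stalls if every remaining check has two or more erased neighbors, which is why the degree design discussed next matters.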
Consider the first bipartite graph B_0 of Figure 2.2 with k left vertices,
associated with the k information packets and βk, β < 1, right nodes, the check
nodes (so each code in the cascade is a normal graph – one of the few places
in these chapters using normal graphs). It is assumed the codes are binary and,
as noted, whether bits or packets are used for code symbols is immaterial.

https://doi.org/10.1017/9781009283403.003 Published online by Cambridge University Press


Another random document with
no related content on Scribd:
“Hullo, Mark! What ages you have been!” exclaimed his cousin.
“We can make room at this corner—come along, old man.”
Mark and his companion found themselves posted at the two
corners at the end of the table, and were for the moment the
cynosure of all eyes.
In a few seconds, as soon as the newcomers had been looked
after and given the scraps, the party continued their interrupted
conversation with redoubled animation. They all appeared to know
one another intimately. Captain Waring had evidently fallen among
old friends. They discussed people and places—to which the others
were strangers—and Mrs. Bellett was particularly animated, and
laughed incessantly—chiefly at her own remarks.
“And so Lalla Paske is going to her Aunt Ida? I thought Ida
Langrishe hated girls. I wonder if she will be able to manage her
niece, and what sort of a chaperon she will make?”
“A splendid one, I should say,” responded a man in a suit like a
five-barred gate—“on the principle of set a thief to catch a thief.”
“And old Mother Brande, up at Shirani, is expecting a niece too.
What fun it will be! What rivalry between her and Ida! What husband-
hunting, and scheming, and match-making! It will be as good as one
of Oscar Wilde’s plays. I am rather sorry that I shall not be there to
see. I shall get people to write to me—you for one, Captain Waring,”
and she nodded at him graciously.
Mark noticed his companion, who had been drinking water
(deluded girl—railway station water), put down her glass hastily, and
fix her eyes on Mrs. Bellett. No one could call her pale now.
“I wonder what Mrs. Brande’s niece will be like?” drawled her
sister. “I wonder if she, like her aunt, has been in domestic service.
He, he, he!” she giggled affectedly.
There was a general laugh, in the midst of which a clear treble
voice was heard—
“If you particularly wish to know, I can answer that question.” It
was the pale girl who was speaking.
Mrs. Coote simply glared, too astounded to utter a syllable.
“I was not aware that my aunt had ever been in domestic service;
but I can relieve you at once of all anxiety about myself. I have never
been in any situation, and this is the nearest approach I have ever
made to the servants’ hall!”
If the lamp in front of them had suddenly exploded, there could
scarcely have been more general consternation. Mrs. Bellett gasped
like a newly-landed fish; Captain Waring, purple with suppressed
laughter, was vainly cudgelling his brain for some suitable and
soothing remark, when the door was flung back by the guard,
bawling—
“Take your seats—take your seats, please, passengers by the
Cawnpore mail.”
Undoubtedly the train had never arrived at a more propitious
moment. The company rose with one consent, thrust back their
chairs, snatched up their parcels, and hurried precipitately out of the
room, leaving Honor and her escort vis-à-vis and all alone.
“If those are specimens of Englishwomen in India,” she exclaimed,
“give me the society of the natives; that dear old creature in the hut
was far more of a lady.”
“Oh, you must not judge by Mrs. Bellett! I am sure she must be
unique. I have never seen any one like her, so far,” he remarked
consolingly.
“I told you,” becoming calmer, and rising as she spoke, “that I
could not hold my tongue. I can not keep quiet. You see I have lost
no time—I have begun already. Of course, the proper thing for me to
have done would have been to sit still and make no remark, instead
of hurling a bombshell into the enemy’s camp. I have disgraced
myself and you; they will say, ‘Evil communications corrupt good
manners.’ I can easily find a carriage. Ah, here is my treasure of a
chuprassi. You have been extremely kind; but your friends are
waiting for you, and really you had better not be seen with me any
longer.”
She was very tall; and when she drew herself up their eyes were
nearly on a level. She looked straight at him, and held out her hand
with a somewhat forced smile.
He smiled also as he replied, “I consider it an honour to be seen
with you, under any circumstances, and I shall certainly see you off.
Our train is not leaving for five minutes. A ladies’ compartment, I
presume, and not with Mrs. Bellett?”
They walked slowly along the platform, past the carriage in which
Mrs. Bellett and her sister were arranging their animals and parcels
with much shrill hilarity.
Miss Gordon was so fortunate as to secure a compartment to
herself—the imbecile chuprassi gibbering and gesticulating, whilst
the sahib handed in her slender stock of belongings. As the train
moved away, she leant out of the window and nodded a smiling
farewell.
How good-looking he was as he stood under the lamp with his hat
off! How nice he had been to her—exactly like a brother! She drew
back with a long breath, that was almost a sigh, as she said to
herself, “Of course I shall never see him again.”
CHAPTER XIII.
TOBY JOY.

Letter from Mrs. Brande, Allahabad,to Pelham Brande, Esq.,


Shirani:—
“Dear P.
“She arrived yesterday, so you may expect us on Saturday.
Send Nubboo down to Nath Tal Dâk Bungalow on Thursday,
to cook our dinner, and don’t allow him more than six coolies
and one pony. Honor seems to feel the heat a great deal,
though she is thin, and not fat like me. At first sight, I must tell
you, I was terribly disappointed. When I saw her step out of
the railway carriage, a tall girl in a crumpled white dress, with
a hideous bazar topee, and no puggaree—her face very pale
and covered with smuts, I felt ready to burst into tears. She
looked very nervous and surprised too. However, of course I
said nothing, she wasn’t to know that I had asked for the
pretty one, and we drove back to the Hodsons’ both in the
very lowest spirits. She was tired, the train had broken down,
and they had all to get out and walk miles in the middle of the
night. After a while, when she had had her tea, and a bath,
and a real good rest, and changed her dress, I declare I
thought it was another person, when she walked into the
room. I found her uncommonly good-looking, and in five
minutes’ time she seemed really pretty. She has a lovely
smile and teeth to match, and fine eyes, and when she
speaks her face lights up wonderfully. Her hair is brown, just
plain brown, no colour in it, but very thick and fine. I know you
will be awfully disappointed in her complexion, as you were
such a one for admiring a beautiful skin. She has not got any
at all.
“Just a pale clear colour and no more, but her figure is most
beautiful. Indeed, every time I look at her I notice something
new; now the nape of her neck, now her ears—all just so
many models. She is, of course, a little shy and strange, but is
simple and easily pleased; and, thank goodness, has no
grand airs. I took her to Madame Peter (such stuff her calling
herself Pierre) to order some gowns for dinner parties. I
thought of a figured yellow satin and a ruby plush, she being
dark; but she would not hear of them, and all she would take
was a couple of cottons. I can see she wants to choose her
own clothes, and that she would like to have a say in mine
too; and knows a good deal about dress, and fashions, and is
clever at milinery (I always forget if there are two ‘L’s,’ but you
won’t mind). She says she is fond of dancing and tennis, but
cannot ride or sing, which is a pity.
“She has brought a fiddle with her, and she plays on it, she
tells me. It reminds me of a blind beggar with a dog for
coppers, but the Hodsons say it is all the go at home; they
admire Honor immensely.
“I suppose Mrs. Langrishe’s girl has arrived. I hear she is
no taller than sixpence worth of half-pence, but the biggest
flirt in India.
“Yours affectionately,
“Sarabella Brande.
“P.S.—I hope Ben is well, and that he will take to her.”
Honor had also written home announcing her arrival, dwelling on
her aunt’s kindness, and making the best of everything, knowing that
long extracts from her letter would be read aloud to inquiring friends.
She felt dreadfully home-sick, as she penned her cheerful epistle.
How she wished that she could put herself into the envelope, and
find herself once more in that bright but faded drawing-room, with its
deep window-seats, cosy chairs, and tinkling cottage piano. Every
vase and bowl would be crammed with spring flowers. Jessie would
be pouring out tea, whilst her mother was telling her visitors that she
had had a nice long letter from Honor, who was in raptures with
India, and as happy as the day was long!
She took particular care that her tears did not fall upon the paper,
as she penned this deceitful effusion. It was dreadful not to see one
familiar face or object. This new world looked so wide, and so
strange. She felt lost in the immense bedroom in which she was
writing—with its bare lofty walls, matted floor, and creaking punkah.
A nondescript dog from the stables had stolen in behind one of the
door chicks. She called to it, eager to make friends. Surely dogs
were dogs the whole world over!—but the creature did not
understand what she said, simply stared interrogatively and slunk
away. She saw many novel sights, as she drove in the cool of the
evening in Mrs. Hodson’s roomy landau, along the broad planted
roads of Allahabad, and watched the bheesties watering the
scorching white dust, which actually appeared to steam and bubble;
she beheld rattling ekkas, crammed with passengers, and drawn by
one wicked-looking, ill-used pony; orderlies on trotting camels; fat
native gentlemen in broughams, lean and pallid English sahibs in
dog-carts. It was extremely warm; the so-called “evening breeze”
consisted of puffs of hot wind, with a dash of sand. Most of the
Allahabad ladies were already on the hills.
Mrs. Brande was far too well-seasoned an Anglo-Indian not to
appreciate the wisdom of travelling in comfort. She had her own
servants in attendance, and plenty of pillows, fans, ice, fruit, and
eau-de-Cologne; far be it from her to journey with merely a hand-bag
and parasol!
Honor in a comfortable corner, with several down cushions at her
back, and a book on her knee, sat staring out on the unaccustomed
prospect that seemed to glide slowly past the carriage windows.
Here was a different country to that which she had already traversed:
great tracts of grain, poppies, and sugar cane, pointed to the
principal products of the North West. She was resolved to see and
note everything—even to the white waterfowl, and the long-legged
cranes which lounged among the marshes—so as to be able to write
full details in the next home letter.
As they passed through the Terai—that breathless belt of jungle—
the blue hills began to loom largely into the view. Finally, the train
drew up at a platform almost at the foot of them, and one phase of
the journey was over.
Honor could not help admiring her aunt, as she stepped out with
an air that betokened that she was now monarch of all she surveyed
(she was encased in a cream-coloured dust-cloak and topee to
match, and looked like an immense button mushroom). She briefly
disposed of clamouring coolies, gave orders to her attendants in
vigorous Hindustani, and led the way to the back of the station,
where were a collection of long open boxes—each box had a seat,
and was tied to two poles—and all were assembled in the midst of a
maddening din and accents of an unknown tongue.
“We go in these jampans,” explained Mrs. Brande, briskly. “Get in,
Honor, and I’ll pack you up; tie on your veil, put your rug over your
knees, and you will be very comfortable.”
But Honor felt quite the reverse, when she found herself suddenly
hoisted up on men’s shoulders, and borne rapidly away in the wake
of her aunt, who seemed perfectly at home under similar
circumstances.
For some time they kept to a broad metalled road lined with great
forest trees, then they went across a swing-bridge, up a narrow
steep path, that twisted among the woods, overhanging the rocky
bed of an almost dry river. This so-called bridle-path wound round
the hills for miles, every sharp curve seemed to bring them higher;
once they encountered a drove of pack ponies thundering down on
their return journey to the plains, miserable thin little beasts, who
never seem to have time to eat—or, indeed, anything to eat, if they
had leisure. Mrs. Brande and her party met but few people, save
occasionally some broad-shouldered coolie struggling upward with a
huge load bound on his back, and looking like a modern Atlas. Once
they passed a jaunty native girl, riding a pony, man fashion, and
exchanging gibes and repartees with her companions, and once they
met a European—a young man dressed in flannels and a blazer,
clattering down at break-neck speed, singing at the top of his voice,
“Slattery’s Mounted Foot”—a curly-headed, sunburnt, merry youth,
who stopped his song and his steed the moment he caught sight of
Mrs. Brande.
“Hallo!” he shouted. “Welcome back! Welcome the coming.
Speed,” laying his hand on his heart, “the parting guest.”
“Where are you off to?” inquired the lady imperiously.
“Only to the station. We are getting up grand theatricals; and in
spite of coolies, and messages, and furious letters, none of our
properties have been forwarded, and I began to suspect that the
Baboo might be having a play of his own, and I am going down to
look him up. Am I not energetic? Don’t I deserve a vote of public
thanks?”
“Pooh! Your journey is nothing,” cried Mrs. Brande, with great
scorn. “Why, I’ve been to Allahabad, where the thermometer is 95° in
the shade.”
“Yes, down in all the heat, and for a far more worthy object,”
glancing at Honor. “You may rely on me, I shall see that you are
recommended for a D.S.O.”
“What an impudent boy you are!” retorted the matron; and half
turning her head, she said to her companion, “Honor, this is Mr. Joy
—he is quite mad. Mr. Joy, this is my niece, Miss Gordon, just out
from England” (her invariable formula).
Mr. Joy swept off his topee to his saddle-bow.
“And what’s the news?” continued Mrs. Brande. “Has Mrs.
Langrishe’s niece come up?” she asked peremptorily.
“Yes, arrived two days ago—the early bird, you see,” he added,
with a malicious twinkle of his little eyes.
“I don’t see; and every one knows that the worm was a fool. What
is she like?”
“Like a fairy, and dances to match,” replied Mr. Joy, with
enthusiasm.
“Come, come; what do you know about fairies? Is she pretty?”
“Yes, and full of life, and go, and chic.”
“Cheek! I’m not surprised at that, seeing she is Mrs. Langrishe’s
own niece.”
“Chic is a French word, don’t you know? and means—well, I can’t
exactly explain. Anyway, Miss Paske will be a great acquisition.”
“How?”
“Oh, you will soon be able to judge for yourself. She acts first
class, and plays the banjo like an angel.”
“What nonsense you talk, Toby Joy! Whoever heard of an angel
playing anything but a ’arp.”
“By the way, Miss Gordon,” said Toby, turning suddenly to her, “I
hope you act.”
“No; I have never acted in my life.”
“Oh, that is nothing! All women are born actresses. Surely, then,
you sing—you have a singing face?”
“I am sorry to say that, in that case, my face belies me.”
“Well, at any rate,” with an air of desperation, “you could dance in
a burlesque?”
“Get away!” screamed Mrs. Brande. “Dance in a burlesque! I am
glad her mother does not hear you. Never mind him, Honor; he is
crazy about acting and dancing, and thinks of nothing else.”
“All work and no plays, make Jack a dull boy,” he retorted.
“Who else is up?” demanded Mrs. Brande, severely.
“Oh, the usual set, I believe. Lloyds, Clovers, Valpys, Dashwoods,
a signalling class, a standing camp, a baronet; there is also a
millionaire just about half way. You’ll find a fellow called Waring at
Nath Tal Dâk Bungalow—he was in the service once, and has now
come in for tons of money, and is a gentleman at large—very keen
about racing and sport. I expect he will live at our mess.”
“Then he is not married?” said Mrs. Brande, in a tone of unaffected
satisfaction.
“Not he! Perish the thought! He has a companion, a young chap
he takes about with him, a sort of hanger-on and poor relation.”
“What is he like? Of course I mean the millionaire?”
“Oh, of course,” with an affable nod; “cheery, good-looking sort of
chap, that would be an A1 hero of a novel.”
Mrs. Brande glanced swiftly at Honor, and heaved a gentle sigh of
contentment as she exclaimed—
“Well, I suppose we ought to be moving on.”
“Yes, for you will find the bungalow crammed with Tommies and
their wives. Give the millionaire my love. Au revoir, Mrs. Brande. Au
revoir, Miss Gordon. You’ll think over the burlesque, and help us in
some way, won’t you?” and with a valedictory wave of his hand he
dashed off.
“He is a harmless lunatic, my dear,” explained the aunt to her
niece, as they were carried forward side by side. “Thinks of nothing
but play-acting, and always in hot water with his colonel; but no one
is ever really angry with Toby, he is such a mere boy.”
“He must be three and twenty, and——”
“Look at the baggage just in front,” interrupted Mrs. Brande,
excitedly. “These must be Captain Waring’s coolies,” and to Honor’s
amazement she imperiously called a halt, and interrogated them
sharply.
“Yes, for a sahib—two sahibs at Nath Tal,” grunted the hill men.
“What a quantity,” she cried, shamelessly passing each load in
solemn review. “See what a lovely dressing-bag and a tiffin-basket. I
believe”—reckoning—“no less than five portmanteaus, all solid
leather, Captain C. Waring; and look at the gun-cases, and that big
box between two men is saddlery—I know the shape.”
“Oh, Aunt Sara, do you not think we ought to get on?” urged her
companion. “We are delaying his men.”
“My dear child, learn to know that there is nothing a coolie likes
better than being delayed. There is no hurry, and I am really
interested in this young man. I want to see where he has been,
where he has come from.” In answer to an imperative sentence in a
tongue unknown to Honor, a grinning coolie turned his back, on
which was strapped a portmanteau, for Mrs. Brande’s deliberate
inspection.
It proved to be covered with labels, and she read aloud with much
unction and for Honor’s benefit—
“Victoria—that’s New South Wales—Paris, Brindisi, Bombay,
Poonah, Arkomon, Calcutta, Galle, Lucknow. Bless us and save us,
he has been staying at Government House, Calcutta, and been half
over the world! See what it is to have money!” and she made a sign
to her jampannis to continue her journey. Presently they passed two
more coolies, lightly loaded with a rather meagre kit; these she did
not think it necessary to question.
“Those are the cousin’s things,” she explained contemptuously,
“M. J., the hanger-on. Awful shabby, only a bag, and a couple of
boxes. You could tell the owner was a poor man.”
Honor made no reply. She began to have an idea that she had
seen this poor young man before; and were not two cousins travelling
together for travelling's sake a common feature in India? It would not
surprise her much were she to find her companion of that three-mile
walk awaiting his slender baggage at Nath Tal Bungalow.
As Mrs. Brande was borne upwards, her spirits seemed to rise
simultaneously with her body. She was about to make the
acquaintance of a millionaire, and could cultivate his friendship
comfortably, undisturbed by the machinations of her crafty rival. She
would invite him to be her guest for the two days they would be
journeying together, and by this means steal a nice long march (in
every sense of the word) on Mrs. Langrishe!
CHAPTER XIV.
STEALING A MARCH.

As the sun died down, the moon arose above the hills and lighted
the travellers along a path winding by the shores of an irregular
mountain lake, and overhung by a multitude of cherry trees in full
blossom.
“Look!” cried Mrs. Brande, joyfully, “there in front you see the lights
of the Dâk Bungalow at last. You will be glad of your dinner, and I’m
sure I shall.”
Two men, who sat in the verandah of the same rest-house, would
also have been most thankful for theirs. The straggling building
appeared full of soldiers and their wives, and there seemed no
immediate prospect of a meal. The kitchen had been taken
possession of by the majestic cook of a burra mem sahib, who was
shortly expected, and the appetites of a couple of insignificant
strangers must therefore be restrained.
These travellers were, of course, Captain Waring and Mark Jervis,
whom the former invariably alluded to as “his cousin.” It was a
convenient title, and accounted for their close companionship. At first
Mark had been disposed to correct this statement, and murmur, “Not
cousins, but connections,” but had been silenced by Clarence
petulantly exclaiming—
“Cousins and connections are the same thing. Who cares a straw
what we are? And what’s the good of bothering?”
“I’m nearly mad with hunger,” groaned Captain Waring. “I’ve eaten
nothing for ten hours but one hard-boiled egg.”
“Smoke, as the Indians do,” suggested his comrade unfeelingly,
“or draw in your belt a couple of holes. Anyway, a little starvation will
do you no harm—you are getting fat.”
“I wonder, if I went and sat upon the steps with a placard round my
neck, on which was written, ‘I am starving,’ if this good lady would
give us a dinner? Hunger is bad enough, but the exquisite smell of
her roast mutton aggravates my pangs.”
“You have only to show yourself, and she will invite you.”
“How do you know, and why do you cruelly raise my hopes?”
“Because I hear that she is the soul of hospitality, and that she has
the best cook on the hills.”
“May I ask how you discovered this really valuable piece of
information?”
“From the harum-scarum youth who passed this afternoon. He
forgot to mention her name.”
“Here she comes along by the weir,” interrupted Waring. “Mark the
excitement among the servants—her meal will be ready to the
minute. She must be truly a great woman, and has already earned
my respect. If she asks me to dinner, I shall love her. What do you
say, Mark?”
“Oh, I think, since you put it in that way, that I should find it easier
to love the young lady!”
“I thought you fought shy of young ladies; and you must have cat’s
eyes if you can see one at this distance.”
“I have the use of my ears, and I have had nothing to do, but
concentrate my attention on what is evidently to be the only meal of
the evening. I heard the cook telling the khitmatghar to lay a place
for the ‘Miss Sahib.’”
“What a thing it is to be observant!” cried Captain Waring. “And
here they are. By George! she is a heavy weight!” alluding to Mrs.
Brande, who was now let down with a dump, that spoke a whole
volume of relief.
The lady ascended the verandah with slow and solid steps, cast a
swift glance at the famishing pair, and went into her own well-
warmed room, where a table neatly laid, and adorned with cherry-
blossoms, awaited her.
“Lay two more places,” were her first commands to the salaaming
Khitmatghar; then to her niece, “I am going to ask those two men to
dinner.”
“But you don’t know them, Aunt Sara!” she expostulated rather
timidly.
“I know of them, and that is quite enough at a dâk bungalow. We
are not so stiff as you are in England; we are all, as it were, in the
same set out here; and I am sure Captain Waring will be thankful to
join us, unless he happens to be a born idiot. In this bungalow there
is nothing to be had but candles and jam. I know it of old. People
who pass up, are like a swarm of locusts, and leave nothing behind
them, but empty tins and bottles. Now I can give him club mutton
and champagne.”
Having carefully arranged her dress, put on her two best diamond
rings, and a blue cap (N.B.—Blue had always been her colour), Mrs.
Brande sailed out into the verandah, and thus accosted the
strangers—
“I shall be very happy if you two gentlemen will dine with me in my
rooms.”
“You are really too good,” returned Captain Waring, springing to
his feet and making a somewhat exaggerated bow. “We shall be
delighted, for there seems no prospect of our getting anything to eat
before to-morrow.”
“You shall have something to eat in less than five minutes,” was
Mrs. Brande’s reassuring answer, as she led the way to her own
apartment.
“This,” waving her hand towards Honor, “is my niece, Miss
Gordon, just out from England. I am Mrs. Brande—my husband is in
the Council.”
“We have had the pleasure of meeting Miss Gordon before,” said
Captain Waring; “this will not be the first time we have sat at the
same table,” and he glanced at her, with sly significance.
“Yes,” faltered Honor, with a heightened colour, as she bowed and
shook hands with Mark. “This is the gentleman of whom I told you,
Aunt Sara, who rescued me when I was left alone in the train.”
“Ah! indeed,” said Mrs. Brande, sitting down as she spoke, and
deliberately unfolding her serviette, “I’m sure I’m greatly obliged to
him,” but she secretly wished that on that occasion Honor had been
befriended by his rich associate.
“Let me introduce him to you, Mrs. Brande—his name is Jervis,”
said Captain Waring, with his most jovial air. “He is young, idle, and
unmarried. My name is Waring. I was in the Rutlands, but I chucked
the service some time ago.”
“Well, now we know all about each other” (oh, deluded lady!) “let
us begin our dinner,” said Mrs. Brande. “I am sure we are all
starving.”
Dinner proved to be excellent, and included mahseer from the
lake, wild duck from the marshes, and club mutton. No! Mrs.
Brande’s “chef” had not been over praised. At first every one
(especially the hostess and Clarence Waring) was too frankly hungry
to talk, but after a time they began to discuss the weather, the local
insects, and their journey—not in the formal manner common to
Britons on their mournful travels—but in a friendly, homely fashion,
suitable to a whitewashed apartment, with the hostess’s bed in one
corner.
Whilst the two men conversed with her niece, Mrs. Brande
critically surveyed them, “took stock” as she said to herself. Captain
Waring was a man of five or six and thirty, well set up, and soldierly
looking; he had dark cropped hair, bold merry eyes, and was
handsome, though sunburnt to a deep tan, and his face was deeply
lined—those in his forehead looking as if they had been ruled and
cut into the very bone—nevertheless, his habitual expression was as
gay and animated as that of Toby Joy himself. He had an extremely
well-to-do air (undoubtedly had never known a money care in his
life), he wore his clothes with ease, they fitted him admirably, his
watch, studs, and linen were of the finest quality; moreover, he
appreciated a good dinner, seemed to accept the best of everything
as a matter of course, and looked about intelligently for peppers and
sauces, which were fortunately forthcoming.
“The companion,” as Mrs. Brande mentally called him, was a
younger man, in fact a mere youth of about two and twenty, well set
up, squarely built, with good shoulders and a determined mouth and
chin. He wore a suit of flannels, a silver watch, with a leather chain,
and looked exactly what he was—an idle, poor hanger-on!
Mrs. Brande left him to talk to Honor, and indeed entirely
neglected him for his more important kinsman. Her niece was
secretly aware of (and resented) her aunt’s preference, and
redoubled her efforts to entertain her slighted fellow-traveller. She
had a fellow-feeling for him also. Were they not both dependents—
both poor relations?
“Well, Captain Waring, so you are coming up to see Shirani?” said
Mrs. Brande, with her most gracious air.
“Yes, and I rather want to recall old times out here, and have a
nice lazy summer in the hills.”
“Then you have been in India all the winter?” (The inspection of his
kit the crafty lady kept to herself.)
“Yes. We came out in October. Had a bit of a shoot in Travancore,
and had a couple of months in Calcutta.”
“Then perhaps you came across a Miss Paske, there? Though I
don’t suppose she was in the Government House set. Her uncle is a
nobody.”
“To be sure. We know Miss Paske, don’t we, Mark? She was very
much in the Government House set. All the A.D.C.’s adored her. A
little bit of a thing, with tow-coloured, fluffy hair, and a nez retroussé.”
“I know nothing about her nose or hair, but she is at Shirani now.”
“You don’t say so! I am delighted to hear it. She is capital fun!”
Mrs. Brande’s face fell. She sat crumbling her bread for some
seconds, and then said absently, “Did you notice those monkeys on
the way up?”
She had a peculiar habit of suddenly jumping from one topic to
another, figuratively, at the opposite pole. She declared that her
ideas travelled at times faster than her speech. Possibly she had her
own consecutive, if rapid, train of thought, and may thus have
connected Miss Paske with apes.
“Yes, swarms of those old grey fellows with black faces. I suppose
they have a fair club at Shirani, and keep up the whist-room? Are
there many men who play?”
“Only too many. I don’t approve of cards—at least gambling. I do
love a game of whist—I play a half-anna stamp on the rubber, just to
give it a little interest.”
“Do they play high at Shirani?” he asked with a touch of
impatience.
“Yes, I believe they do; and that horrid old Colonel Sladen is the
worst of all.”
“What! is he still up here? he used to play a first-class rubber.”
“He will play anything—high or low stakes—at either night or day
—he pays—his wife pays,” concluded Mrs. Brande, looking quite
ferocious.
“Oh, is she out again? Nice little woman.”
“Out again! She has never been home yet,” and she proceeded to
detail that lady’s grievances, whilst her companion’s roving eyes
settled on his cousin and Miss Gordon.
She was a remarkable-looking, even fascinating girl, quite different
to his impression of her at first sight. She had a radiant smile,
wonderfully expressive eyes (those eyes alone made her beautiful,
and lifted her completely out of the commonplace), and a high-bred
air. Strange that she should be related to this vulgar old woman, and
little did the vulgar old woman guess how she had been championed
by her English niece. The moon shining full on the lake tempted the
whole party out of doors. Captain Waring made a basely ungrateful
(but wholly vain) attempt to exchange ladies with his friend. Mrs.
Brande, however, loudly called upon him to attend her, as she paced
slowly down to the road; and as he lit his cigar at his cousin’s, he
muttered angrily under his moustache—
“I call this beastly unfair. I had the old girl all dinner time. You’ve
got six to four the best of it!”
CHAPTER XV.
A PROUD MOMENT.

Captain Waring envied his comrade, who, with Miss Gordon,
sauntered on a few paces ahead of him and, what he mentally
termed, “his old woman of the sea.” She never ceased talking, and
could not endure him out of her sight. The others appeared to get on
capitally; they had plenty to say to one another, and their frequent
laughter excited not alone his envy, but his amazement.
Mark was not a ladies’ man; this squiring of dames was a new
departure. Such an avocation was far more in his own line, and by all
the laws of the fitness of things, he should be in Mark’s place—
strolling by moonlight with a pretty girl along the shores of this lovely
mountain tarn. What were they talking about? Mark never could find
much to say to girls—straining his ears, not from the ungentlemanly
wish to listen, but merely from pure friendly curiosity—he paid but
scant attention to Mrs. Brande’s questions, and gave her several
misleading answers.
“His cousin had no profession—he was a gentleman at large—yes
—his protégé—yes. He himself was a man of leisure—yes.” Yes—
yes—yes; he said “Yes” to everything indiscriminately; it is so easy to
say “Yes!”
“It is strange that we should come across one another twice on the
same journey,” remarked Jervis to his companion.
“If you had not come across me the first time, I suppose I should
be sitting in that train still!”
“Oh no; not quite so long as all that.”
“You won’t say anything to aunt about——”
“Good gracious, Miss Gordon! Do you think I look like a lunatic?”
“You see, I have such a dreadful way of coming out with things,
that I imagine that what is an irresistible temptation to me, might be
the same to other people!”
“You need not be afraid, as far as I am concerned. I can answer
for myself that I can hold my tongue. And how are you getting on?
Still counting the hours until your departure?” with an air of gay
interrogation.
“No, indeed. At first I was desperately home-sick; but I am getting
over that now.”
And gradually she was led on to talk of Jessie’s stories, of their
celebrated mulberry tree, and of the various quaint local characters.
Surely there was some occult influence in the scene; or was it the
frank air and pleasant voice of this young man, that thus unlocked
her lips? She felt as if she had known him quite a long time; at any
rate, he was her first acquaintance in India, and she once more
repeated to herself the comforting fact that he was also a poor
relation—that alone was a strong bond of sympathy. As they paced
the narrow road that edged the lake of Nath Tal, they laughed and
talked with a mutual enjoyment that filled the mind of Captain Waring
and Mrs. Brande (who were not so happily paired) with dismay on
the part of the lady, and disgust on the side of the gentleman.
Captain Waring would no doubt have found their conversation insipid
to the last degree; it contained no sugared compliments, and not the
smallest spice of sentiment or flirtation.
“I have a bargain to propose to you two gentlemen,” said Mrs.
Brande, ere they parted for the night. “We are going the same
marches, and to the same place; I shall be happy to provide the
commissariat, if you will be our escort and protect us. What do you
say?” appealing to Captain Waring with a smirk.
“My dear madam, I say that we close with your offer on the spot; it
is altogether in our favour,” was his prompt reply.
Mrs. Brande beamed still more effulgently. There was no occasion
to consult the other young man.