



Hadamard Transforms
Sos Agaian
Hakob Sarukhanyan
Karen Egiazarian
Jaakko Astola

Bellingham, Washington USA



To our families for their love, affection, encouragement, and understanding.



Library of Congress Cataloging-in-Publication Data

Hadamard transforms / Sos Agaian ... [et al.].


p. cm. – (SPIE Press monograph ; 207)
Includes bibliographical references and index.
ISBN 978-0-8194-8647-9
1. Hadamard matrices. I. Agaian, S. S.
QA166.4.H33 2011
512.9'434–dc22
2011002632

Published by

SPIE
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360.676.3290
Fax: +1 360.647.1445
Email: Books@spie.org
Web: http://spie.org

Copyright © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE)
All rights reserved. No part of this publication may be reproduced or distributed in
any form or by any means without written permission of the publisher.
The content of this book reflects the work and thoughts of the author(s). Every
effort has been made to publish reliable and accurate information herein, but the
publisher is not responsible for the validity of the information or for any outcomes
resulting from reliance thereon. For the latest updates about this title, please visit
the book’s page on our website.

Printed in the United States of America.


First printing



Contents

Preface
Acknowledgments

Chapter 1 Classical Hadamard Matrices and Arrays
1.1 Sylvester or Walsh–Hadamard Matrices
1.2 Walsh–Paley Matrices
1.3 Walsh and Related Systems
    1.3.1 Walsh system
    1.3.2 Cal–Sal orthogonal system
    1.3.3 The Haar system
    1.3.4 The modified Haar "Hadamard ordering"
    1.3.5 Normalized Haar transforms
    1.3.6 Generalized Haar transforms
    1.3.7 Complex Haar transform
    1.3.8 k^n-point Haar transforms
1.4 Hadamard Matrices and Related Problems
1.5 Complex Hadamard Matrices
    1.5.1 Complex Sylvester–Hadamard transform
    1.5.2 Complex WHT
    1.5.3 Complex Paley–Hadamard transform
    1.5.4 Complex Walsh transform
References

Chapter 2 Fast Classical Discrete Orthogonal Transforms
2.1 Matrix-Based Fast DOT Algorithms
2.2 Fast Walsh–Hadamard Transform
2.3 Fast Walsh–Paley Transform
2.4 Cal–Sal Fast Transform
2.5 Fast Complex HTs
2.6 Fast Haar Transform
References

Chapter 3 Discrete Orthogonal Transforms and Hadamard Matrices
3.1 Fast DOTs via the WHT
3.2 FFT Implementation
3.3 Fast Hartley Transform
3.4 Fast Cosine Transform
3.5 Fast Haar Transform
3.6 Integer Slant Transforms
    3.6.1 Slant HTs
    3.6.2 Parametric slant HT
3.7 Construction of Sequential Integer Slant HTs
    3.7.1 Fast algorithms
    3.7.2 Examples of slant-transform matrices
    3.7.3 Iterative parametric slant Haar transform construction
References

Chapter 4 "Plug-In Template" Method: Williamson–Hadamard Matrices
4.1 Williamson–Hadamard Matrices
4.2 Construction of 8-Williamson Matrices
4.3 Williamson Matrices from Regular Sequences
References

Chapter 5 Fast Williamson–Hadamard Transforms
5.1 Construction of Hadamard Matrices Using Williamson Matrices
5.2 Parametric Williamson Matrices and Block Representation of Williamson–Hadamard Matrices
5.3 Fast Block Williamson–Hadamard Transform
5.4 Multiplicative-Theorem-Based Williamson–Hadamard Matrices
5.5 Multiplicative-Theorem-Based Fast Williamson–Hadamard Transforms
5.6 Complexity and Comparison
    5.6.1 Complexity of block-cyclic, block-symmetric Williamson–Hadamard transform
    5.6.2 Complexity of the HT from the multiplicative theorem
References

Chapter 6 Skew Williamson–Hadamard Transforms
6.1 Skew Hadamard Matrices
    6.1.1 Properties of the skew-symmetric matrices
6.2 Skew-Symmetric Williamson Matrices
6.3 Block Representation of Skew-Symmetric Williamson–Hadamard Matrices
6.4 Fast Block-Cyclic, Skew-Symmetric Williamson–Hadamard Transform
6.5 Block-Cyclic, Skew-Symmetric Fast Williamson–Hadamard Transform in Add/Shift Architectures
References

Chapter 7 Decomposition of Hadamard Matrices
7.1 Decomposition of Hadamard Matrices by (+1, −1) Vectors
7.2 Decomposition of Hadamard Matrices and Their Classification
7.3 Multiplicative Theorems of Orthogonal Arrays and Hadamard Matrix Construction
References

Chapter 8 Fast Hadamard Transforms for Arbitrary Orders
8.1 Hadamard Matrix Construction Algorithms
8.2 Hadamard Matrix Vector Representation
8.3 FHT of Order n ≡ 0 (mod 4)
8.4 FHT via Four-Vector Representation
8.5 FHT of Order N ≡ 0 (mod 4) on Shift/Add Architectures
8.6 Complexities of Developed Algorithms
    8.6.1 Complexity of the general algorithm
    8.6.2 Complexity of the general algorithm with shifts
References

Chapter 9 Orthogonal Arrays
9.1 ODs
    9.1.1 ODs in the complex domain
9.2 Baumert–Hall Arrays
9.3 A Matrices
9.4 Goethals–Seidel Arrays
9.5 Plotkin Arrays
9.6 Welch Arrays
References

Chapter 10 Higher-Dimensional Hadamard Matrices
10.1 3D Hadamard Matrices
10.2 3D Williamson–Hadamard Matrices
10.3 3D Hadamard Matrices of Order 4n + 2
10.4 Fast 3D WHTs
10.5 Operations with Higher-Dimensional Complex Matrices
10.6 3D Complex HTs
10.7 Construction of (λ, μ) High-Dimensional Generalized Hadamard Matrices
References

Chapter 11 Extended Hadamard Matrices
11.1 Generalized Hadamard Matrices
    11.1.1 Introduction and statement of problems
    11.1.2 Some necessary conditions for the existence of generalized Hadamard matrices
    11.1.3 Construction of generalized Hadamard matrices of new orders
    11.1.4 Generalized Yang matrices and construction of generalized Hadamard matrices
11.2 Chrestenson Transform
    11.2.1 Rademacher functions
    11.2.2 Example of Rademacher matrices
        11.2.2.1 Generalized Rademacher functions
        11.2.2.2 The Rademacher–Walsh transforms
        11.2.2.3 Chrestenson functions and matrices
11.3 Chrestenson Transform Algorithms
    11.3.1 Chrestenson transform of order 3^n
    11.3.2 Chrestenson transform of order 5^n
11.4 Fast Generalized Haar Transforms
    11.4.1 Generalized Haar functions
    11.4.2 2^n-point Haar transform
    11.4.3 3^n-point generalized Haar transform
    11.4.4 4^n-point generalized Haar transform
    11.4.5 5^n-point generalized Haar transform
References

Chapter 12 Jacket Hadamard Matrices
12.1 Introduction to Jacket Matrices
    12.1.1 Example of jacket matrices
    12.1.2 Properties of jacket matrices
12.2 Weighted Sylvester–Hadamard Matrices
12.3 Parametric Reverse Jacket Matrices
    12.3.1 Properties of parametric reverse jacket matrices
12.4 Construction of Special-Type Parametric Reverse Jacket Matrices
12.5 Fast Parametric Reverse Jacket Transform
    12.5.1 Fast 4 × 4 parametric reverse jacket transform
        12.5.1.1 One-parameter case
        12.5.1.2 Case of three parameters
    12.5.2 Fast 8 × 8 parametric reverse jacket transform
        12.5.2.1 Case of two parameters
        12.5.2.2 Case of three parameters
        12.5.2.3 Case of four parameters
        12.5.2.4 Case of five parameters
        12.5.2.5 Case of six parameters
References

Chapter 13 Applications of Hadamard Matrices in Communication Systems
13.1 Hadamard Matrices and Communication Systems
    13.1.1 Hadamard matrices and error-correction codes
    13.1.2 Overview of error-correcting codes
    13.1.3 How to create a linear code
    13.1.4 Hadamard code
    13.1.5 Graphical representation of the (7, 3, 4) Hadamard code
    13.1.6 Levenshtein constructions
    13.1.7 Uniquely decodable base codes
    13.1.8 Shortened code construction and application to data coding and decoding
13.2 Space–Time Codes from Hadamard Matrices
    13.2.1 The general wireless system model
    13.2.2 Orthogonal array and linear processing design
    13.2.3 Design of space–time codes from the Hadamard matrix
References

Chapter 14 Randomization of Discrete Orthogonal Transforms and Encryption
14.1 Preliminaries
    14.1.1 Matrix forms of DHT, DFT, DCT, and other DOTs
    14.1.2 Cryptography
14.2 Randomization of Discrete Orthogonal Transforms
    14.2.1 The theorem of randomization of discrete orthogonal transforms
    14.2.2 Discussions on the square matrices P and Q
    14.2.3 Examples of randomized transform matrix Ms
    14.2.4 Transform properties and features
    14.2.5 Examples of randomized discrete orthogonal transforms
14.3 Encryption Applications
    14.3.1 1D data encryption
    14.3.2 2D data encryption and beyond
    14.3.3 Examples of image encryption
        14.3.3.1 Key space analysis
        14.3.3.2 Confusion property
        14.3.3.3 Diffusion property
References

Appendix
A.1 Elements of Matrix Theory
A.2 First Rows of Cyclic Symmetric Williamson-Type Matrices of Order n, n = 3, 5, ..., 33, 37, 39, 41, 43, 49, 51, 55, 57, 61, 63 [2]
A.3 First Block Rows of the Block-Cyclic, Block-Symmetric (BCBS) Williamson–Hadamard Matrices of Order 4n, n = 3, 5, ..., 33, 37, 39, 41, 43, 49, 51, 55, 57, 61, 63 [2]
A.4 First Rows of Cyclic Skew-Symmetric Williamson-Type Matrices of Order n, n = 3, 5, ..., 33, 35
A.5 First Block Rows of Skew-Symmetric Block Williamson–Hadamard Matrices of Order 4n, n = 3, 5, ..., 33, 35
References

Index



Preface
The Hadamard matrix and Hadamard transform are fundamental problem-solving
tools used in a wide spectrum of scientific disciplines and technologies including
communication systems, signal and image processing (signal representation,
coding, filtering, recognition, and watermarking), and digital logic (Boolean
function analysis and synthesis, and fault-tolerant system design). They are thus
a key component of modern information technology. In communication, the
most important applications include error-correcting codes, spreading sequences,
and cryptography. Other relevant applications include analysis of stock market
data, combinatorics, experimental design, quantum computing, environmental
monitoring, and many problems in chemistry, physics, optics, and geophysical
analysis.
Hadamard matrices have attracted close attention in recent years, owing to their
numerous known and new promising applications. In 1893, Jacques Hadamard
conjectured that for any integer m divisible by 4, there is a Hadamard matrix of
order m. Despite the efforts of a number of individuals, this conjecture remains
unproved, even though it is widely believed to be true. Historically, the problem
goes back to James Joseph Sylvester in 1867.
The purpose of this book is to bring together different topics concerning current
developments in Hadamard matrices, transforms, and their applications. Hadamard
Transforms distinguishes itself from other books on the same topic because it
achieves the following:
• Explains the state of our knowledge of Hadamard matrices, transforms, and their
important generalizations, emphasizing intuitive understanding while providing
the mathematical foundations and description of fast transform algorithms.
• Provides a concise introduction to the theory and practice of Hadamard matrices
and transforms. The full appearance of this theory has been realized only
recently, as the authors have pioneered, for example, multiplication theorems,
4m-point fast Hadamard transform algorithms, and the decomposition of Hadamard
matrices by vectors.
• Offers a comprehensive and unified coverage of Hadamard matrices with a
balance between theory and implementation. Each chapter is designed to begin
with the basics of the theory, progressing to more advanced topics, and then
discussing cutting-edge implementation techniques.




Acknowledgments
This work has been achieved through long-term research collaboration among the
following three institutions:
• Department of Electrical Engineering, University of Texas at San Antonio, USA.
• Institute for Informatics and Automation Problems of the National Academy of
Sciences of Armenia (IIAP NAS RA), Yerevan, Armenia.
• The Tampere International Center for Signal Processing (TICSP), Tampere
University of Technology, Tampere, Finland.
This work is partially supported by NSF Grant No. HRD-0932339. The authors
are grateful to these organizations.
Special thanks are due to Mrs. Zoya Melkumyan (IIAP) and to Mrs. Pirko
Ruotsalainen of the Office for International Affairs of TICSP for their great
help in organizing several of S. Agaian's and H. Sarukhanyan's trips and visits to
Finland. Thanks go to Mrs. Carol Costello for her careful editing of our manuscript.
We also express our appreciation to Tim Lamkins, Dara Burrows, Gwen Weerts,
Beth Kelley, and Scott McNeill, members of the SPIE editing team.




Chapter 1
Classical Hadamard Matrices and Arrays
In this chapter, we introduce the primary nonsinusoidal orthogonal transforms,
such as Hadamard, Haar, etc., which are extensively reported in the literature.1–81
The basic advantages of the Hadamard transform (HT) are as follows:
• Multiplication by HT involves only an algebraic sign assignment.
• Digital circuits can generate Hadamard functions because they assume only the
values +1 and −1.
• Computer processing can be accomplished using addition, which is very fast,
rather than multiplication.
• The convergence of these systems is very good for representing piecewise-
constant or continuous functions.
• The simplicity and efficiency of the transforms is found in a variety of
practical applications.1–20 These include, for example, digital signal and
image processing, such as compact signal representation, filtering, cod-
ing, data compression, and recognition;4,14,31,40,54,55,57,60,61,66,69,73–75,77 speech
and biomedical signal analysis;1,13,14,17,31,35,46,48,67,68 and digital communica-
tion.3,4,11,22,31,45,49,65,70,74,76,78,79 A prime example is the code-division multiple-
access (CDMA) cellular standard IS-95, which uses a Hadamard matrix of
order 64. The transforms are also used in experimental and combinatorial
designs, digital logic, statistics, error-correcting codes, masks for spectroscopic
analysis, encryption, and several other fields.3,5,6,8,9,15,18,55,68 Among other possible applications,
which can be added to this list, are analysis of stock market data, combina-
torics, error-correcting codes, spreading sequences, experimental design, quan-
tum computing, environmental monitoring, chemistry, physics, optics, combi-
natorial designs, and geophysical analysis.1,3,6,7,12,14–19,25,27,33,34,38,48,49 In this
chapter, we introduce the commonly used Sylvester, Walsh–Hadamard, Walsh,
Walsh–Paley, and complex HTs.

1.1 Sylvester or Walsh–Hadamard Matrices


In this section, we present the Walsh–Hadamard transform (WHT), the simplest
example of an HT. The concepts introduced have close analogs in other




Figure 1.1 James Joseph Sylvester (1814–1897, London, England) is known especially
for his work on matrices, determinants, algebraic invariants, and the theory of numbers. In
1878, he founded the American Journal of Mathematics, the first mathematical journal in the
United States (from: www.gap-system.org/~history/Biographies).

transforms. The WHT, which is of considerable practical importance, is based on
the Sylvester matrix.
In 1867, in Sylvester's (see Fig. 1.1) seminal paper, "Thoughts on inverse
orthogonal matrices, simultaneous sign-successions and tessellated pavements in
two or more colors with application to Newton's rule, ornamental tile work and
the theory of numbers,"20 he constructed special matrices (later called Sylvester
matrices). He constructed these matrices recursively as

$$H_{2^n} = \begin{pmatrix} H_{2^{n-1}} & H_{2^{n-1}} \\ H_{2^{n-1}} & -H_{2^{n-1}} \end{pmatrix}, \quad n = 1, 2, 3, \ldots, \quad \text{where } H_1 = [1], \tag{1.1}$$

which means that a Hadamard matrix of order 2n may be obtained from a known
Hadamard matrix H of order n simply by juxtaposing four copies of H and negating
one of them. It is easy to confirm that $H_{2^k}$, $k = 1, 2, 3, \ldots$, is a square $2^k \times 2^k$ matrix
whose elements are +1 or −1, and that $H_{2^k} H_{2^k}^T = 2^k I_{2^k}$.
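To make the recursion concrete, here is a minimal NumPy sketch (our own illustration, not from the text; the helper name `sylvester` is hypothetical) that builds $H_{2^n}$ by Eq. (1.1) and checks the defining property of Eq. (1.2):

```python
import numpy as np

def sylvester(n):
    """Sylvester-Hadamard matrix H_{2^n} via the recursion (1.1):
    H_{2^n} = [[H, H], [H, -H]] with H = H_{2^{n-1}} and H_1 = [1]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H8 = sylvester(3)
# Defining property (1.2): H_N H_N^T = N * I_N, here with N = 8.
assert np.array_equal(H8 @ H8.T, 8 * np.eye(8, dtype=int))
```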

Definition: A square matrix $H_n$ of order n with elements −1 and +1 is called a
Hadamard matrix if the following equation holds true:

$$H_n H_n^T = H_n^T H_n = nI_n, \tag{1.2}$$

where $H_n^T$ is the transpose of $H_n$, and $I_n$ is the identity matrix of order n.


Sometimes, the Walsh–Hadamard system is called the natural-ordered
Hadamard system. We present the Sylvester-type matrices of orders 2, 4, 8, and
16, as follows:

$$H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \qquad H_4 = \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix}, \tag{1.3a}$$


Figure 1.2 Sylvester-type Hadamard matrices of orders 2, 4, 8, 16, and 32.

$$H_8 = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix}, \tag{1.3b}$$

$$H_{16} = \begin{pmatrix}
+ & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - & + & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - & + & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + & + & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - & + & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + & + & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + & + & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & - & + & - & - & + & - & + & + & - \\
+ & + & + & + & + & + & + & + & - & - & - & - & - & - & - & - \\
+ & - & + & - & + & - & + & - & - & + & - & + & - & + & - & + \\
+ & + & - & - & + & + & - & - & - & - & + & + & - & - & + & + \\
+ & - & - & + & + & - & - & + & - & + & + & - & - & + & + & - \\
+ & + & + & + & - & - & - & - & - & - & - & - & + & + & + & + \\
+ & - & + & - & - & + & - & + & - & + & - & + & + & - & + & - \\
+ & + & - & - & - & - & + & + & - & - & + & + & + & + & - & - \\
+ & - & - & + & - & + & + & - & - & + & + & - & + & - & - & +
\end{pmatrix}. \tag{1.3c}$$

The symbols + and − denote +1 and −1, respectively, throughout the book.
Figure 1.2 displays the Sylvester-type Hadamard matrices of order 2, 4, 8, 16,
and 32. The black squares correspond to the value of +1, and the white squares
correspond to the value of −1.



Table 1.1 Construction of the (1, 3) element of the Sylvester matrix.

m\k | 00  01  10  11
----+----------------
00  |  •   •   •   •
01  |  •   •   •  −1
10  |  •   •   •   •
11  |  •   •   •   •

Sylvester matrices can be constructed from two Sylvester matrices using
the Kronecker product (see the Appendix concerning the Kronecker product of
matrices and their properties):

$$H_4 = H_2 \otimes H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix} = \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix}, \tag{1.4a}$$

$$H_8 = H_2 \otimes H_4 = \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix} = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix}, \tag{1.4b}$$

and so on,

$$H_{2^n} = H_2 \otimes H_2 \otimes \cdots \otimes H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \cdots \otimes \begin{pmatrix} + & + \\ + & - \end{pmatrix}. \tag{1.5}$$

We now present another approach to Hadamard matrix construction. We start
from a simple example. Let us consider the element $\text{wal}_h(1, 3)$ of the matrix
$H_4 = [\text{wal}_h(m, k)]_{m,k=0}^{3}$ at the intersection of its second row (m = 1) and fourth
column (k = 3). The binary representations of m = 1 and k = 3 are (01) and (11);
hence, $v = \sum_i m_i k_i = 0 \cdot 1 + 1 \cdot 1 = 1$, and $\text{wal}_h(1, 3) = (-1)^1 = -1$. In other
words, the value of the element $\text{wal}_h(1, 3)$ of the Sylvester matrix $H_4$ is −1 raised
to the power of the sum of the element-wise products of the binary (mod 2)
expansions of 1 and 3 (see Table 1.1). Similarly, we find the remaining elements in
Table 1.2.

In general, the element $\text{wal}_h(m, k)$ of the Sylvester matrix can be expressed as

$$\text{wal}_h(m, k) = (-1)^{\sum_{i=0}^{n-1} m_i k_i}, \tag{1.6}$$



Figure 1.3 The first 16 discrete Walsh–Hadamard basis functions.

Table 1.2 The constructed 4 × 4 Sylvester matrix.

m\k | 00  01  10  11
----+----------------
00  |  1   1   1   1
01  |  1  −1   1  −1
10  |  1   1  −1  −1
11  |  1  −1  −1   1

where $m_i$, $k_i$ are the binary expansions of $m, k = 0, 1, \ldots, 2^n - 1$:

$$m = m_{n-1} 2^{n-1} + m_{n-2} 2^{n-2} + \cdots + m_1 2^1 + m_0 2^0,$$
$$k = k_{n-1} 2^{n-1} + k_{n-2} 2^{n-2} + \cdots + k_1 2^1 + k_0 2^0. \tag{1.7}$$
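Equation (1.6) reduces each matrix element to the parity of a bitwise AND, which is easy to verify in code. A short sketch (ours; the helper name `wal_h` simply mirrors the text's notation):

```python
import numpy as np

def wal_h(m, k):
    # Eq. (1.6): wal_h(m, k) = (-1)^(sum_i m_i k_i); the sum of the bitwise
    # products of m and k is the popcount of (m AND k).
    return -1 if bin(m & k).count("1") % 2 else 1

# Rebuild the 4 x 4 Sylvester matrix element by element (compare Table 1.2).
H4 = np.array([[wal_h(m, k) for k in range(4)] for m in range(4)])
print(H4)  # rows: (1,1,1,1), (1,-1,1,-1), (1,1,-1,-1), (1,-1,-1,1)
```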

The set of functions $\{\text{wal}_h(0, k), \text{wal}_h(1, k), \ldots, \text{wal}_h(n-1, k)\}$, where

$$\begin{aligned}
\text{wal}_h(0, k) &= \text{wal}_h(0, 0)\;\text{wal}_h(0, 1)\;\ldots\;\text{wal}_h(0, n-1), \\
\text{wal}_h(1, k) &= \text{wal}_h(1, 0)\;\text{wal}_h(1, 1)\;\ldots\;\text{wal}_h(1, n-1), \\
&\;\;\vdots \\
\text{wal}_h(n-1, k) &= \text{wal}_h(n-1, 0)\;\text{wal}_h(n-1, 1)\;\ldots\;\text{wal}_h(n-1, n-1),
\end{aligned} \tag{1.8}$$

is called the discrete Walsh–Hadamard system, or the discrete Walsh–Hadamard
basis. The first 16 Walsh–Hadamard functions are shown in Fig. 1.3.



Figure 1.4 The first eight continuous Walsh–Hadamard functions on the interval [0, 1).

The set of functions $\{\text{wal}_h(0, t), \text{wal}_h(1, t), \ldots, \text{wal}_h(N-1, t)\}$ is called the
continuous Walsh–Hadamard system. The discrete Walsh–Hadamard system can
be generated by sampling the continuous Walsh–Hadamard functions $\{\text{wal}_h(k, t)$,
$k = 0, 1, 2, \ldots, N-1\}$ at $t = 0, 1/N, 2/N, \ldots, (N-1)/N$. The first eight continuous
Walsh–Hadamard functions are shown in Fig. 1.4.
The discrete 1D forward and inverse WHTs of the signal $x[k]$, $k = 0, 1, \ldots, N-1$,
are defined in matrix form as

$$y = \frac{1}{\sqrt{N}} H_N x \;\; \text{(forward WHT)}, \qquad x = \frac{1}{\sqrt{N}} H_N y \;\; \text{(inverse WHT)}, \tag{1.9}$$

where $H_N$ is a Walsh–Hadamard matrix of order N, and the rows of this matrix
represent the discrete Walsh–Hadamard basis functions. In scalar form, the forward
and inverse transforms read, respectively,

$$y[k] = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} x[n]\, \text{wal}_h[n, k], \quad k = 0, 1, \ldots, N-1, \tag{1.10}$$

$$x[n] = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} y[k]\, \text{wal}_h[n, k], \quad n = 0, 1, \ldots, N-1. \tag{1.11}$$
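As a sketch of Eqs. (1.9)–(1.11) (ours, assuming SciPy's `scipy.linalg.hadamard`, which returns the Sylvester/natural ordering), note that because $H_N$ is symmetric, the very same operation implements both the forward and the inverse transform:

```python
import numpy as np
from scipy.linalg import hadamard  # Sylvester-ordered H_N, N a power of two

def wht(x):
    # Eq. (1.9)/(1.10): y = H_N x / sqrt(N).
    x = np.asarray(x, dtype=float)
    return hadamard(len(x)) @ x / np.sqrt(len(x))

x = np.array([9.0, 7.0, 7.0, 5.0])
y = wht(x)                     # forward WHT, Eq. (1.10)
assert np.allclose(wht(y), x)  # applying it again inverts it, Eq. (1.11)
```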



Figure 1.5 Example of the representation of a 2D signal (image) by a 2D Walsh–Hadamard system.

Example of the representation of a signal by the Hadamard system

Let the signal vector $x[n]$ be $(9, 7, 7, 5)^T$. Then $x[n]$ may be expressed as a
combination of the Hadamard basis functions:

$$\begin{pmatrix} 9 \\ 7 \\ 7 \\ 5 \end{pmatrix} = 7 \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \end{pmatrix} + 1 \begin{pmatrix} 1 \\ 1 \\ -1 \\ -1 \end{pmatrix} + 0 \begin{pmatrix} 1 \\ -1 \\ -1 \\ 1 \end{pmatrix} + 1 \begin{pmatrix} 1 \\ -1 \\ 1 \\ -1 \end{pmatrix}, \;\text{or}\; \begin{pmatrix} 9 \\ 7 \\ 7 \\ 5 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{pmatrix} \begin{pmatrix} 7 \\ 1 \\ 0 \\ 1 \end{pmatrix}. \tag{1.12}$$

2D forward and inverse WHTs are defined as

$$y[k, m] = \frac{1}{N^2} \sum_{n=0}^{N-1} \sum_{j=0}^{N-1} x[n, j]\, \text{wal}_h[n, k]\, \text{wal}_h[j, m], \quad k, m = 0, 1, \ldots, N-1,$$
$$x[n, j] = \sum_{k=0}^{N-1} \sum_{m=0}^{N-1} y[k, m]\, \text{wal}_h[n, k]\, \text{wal}_h[j, m], \quad n, j = 0, 1, \ldots, N-1. \tag{1.13}$$

Or, in matrix form, the discrete 2D forward and inverse WHTs of a 2D signal X
are defined as

$$Y = \frac{1}{N^2} H_N X H_N^T, \qquad X = H_N^T Y H_N. \tag{1.14}$$

The 2D WHT can be computed via the 1D WHTs. In other words, the 1D WHT
is evaluated for each column of the input data (array) X to produce a new array A.
Then, the 1D WHT is evaluated for each row of A to produce Y, as in Fig. 1.5.

Let a 2D signal have the form

$$X = \begin{pmatrix} 9 & 7 \\ 5 & 3 \end{pmatrix}. \tag{1.15}$$




We want to show that the signal X may be expressed as a combination of the 2D
Walsh–Hadamard basis functions. First, we define

$$Y = \frac{1}{4} H_2 X H_2^T = \frac{1}{4} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 9 & 7 \\ 5 & 3 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 6 & 1 \\ 2 & 0 \end{pmatrix}. \tag{1.16}$$

Thus, the 2D Walsh–Hadamard discrete basis functions are obtained from the
1D basis functions as outer products:

$$\begin{pmatrix} 1 \\ 1 \end{pmatrix} (1 \;\; 1) = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 \\ 1 \end{pmatrix} (1 \; -1) = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix},$$
$$\begin{pmatrix} 1 \\ -1 \end{pmatrix} (1 \;\; 1) = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}, \qquad \begin{pmatrix} 1 \\ -1 \end{pmatrix} (1 \; -1) = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}. \tag{1.17}$$

Therefore, the signal X may be expressed as

$$X = \begin{pmatrix} 9 & 7 \\ 5 & 3 \end{pmatrix} = 6 \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} + 1 \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} + 2 \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix} + 0 \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}. \tag{1.18}$$
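These numbers can be reproduced in a few lines. The sketch below (our own) evaluates the matrix form of Eq. (1.14) on the signal of Eq. (1.15) and confirms that two passes of the 1D transform (columns, then rows, as in Fig. 1.5) give the same result:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
X = np.array([[9, 7], [5, 3]])

Y = H2 @ X @ H2.T / 4     # matrix form, Eq. (1.14); gives [[6, 1], [2, 0]]

A = H2 @ X                # 1D WHT of each column of X
Y2 = (H2 @ A.T).T / 4     # 1D WHT of each row of A, then normalize
assert np.allclose(Y, Y2) # matches Eq. (1.16)
```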

Graphically, the representation of the 2D signal by a combination of the
Walsh–Hadamard functions may be depicted as a weighted sum of four 2 × 2
black-and-white patterns (1.19), where +1 is white and −1 is black.

The basis function $\left(\begin{smallmatrix} 1 & 1 \\ 1 & 1 \end{smallmatrix}\right)$ represents the average intensity level of the four
signal elements. The basis function $\left(\begin{smallmatrix} 1 & 1 \\ -1 & -1 \end{smallmatrix}\right)$ represents the detail consisting of one
zero crossing in the vertical direction. The basis function $\left(\begin{smallmatrix} 1 & -1 \\ 1 & -1 \end{smallmatrix}\right)$ represents the
detail consisting of one zero crossing in the horizontal direction. The basis function
$\left(\begin{smallmatrix} 1 & -1 \\ -1 & 1 \end{smallmatrix}\right)$ represents one zero crossing in both the horizontal and vertical directions.

Selected Properties
• The row vectors of H define a complete set of orthogonal functions.
• The elements in the first column and the first row are all equal to one (all positive).
The elements in all of the other rows and columns are evenly divided between
positive and negative.
• The WHT matrix is orthogonal; this means that the inner product of any two of its
distinct rows is equal to zero. This is equivalent to $HH^T = NI_N$. For example,

$$H_2 H_2^T = \begin{pmatrix} + & + \\ + & - \end{pmatrix} \begin{pmatrix} + & + \\ + & - \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = 2I_2. \tag{1.20}$$
+ − + − 0 2




• The Walsh–Hadamard matrix is symmetric [i.e., $H_N^T = H_N$, so $H_N^{-1} = (1/N)H_N$].
• $|\det H_N| = N^{N/2}$.
• For example, we have $\det \begin{pmatrix} + & + \\ + & - \end{pmatrix} = (1)(-1) - (1)(1) = -2$, and, expanding
$\det(H_4)$ along its first row,

$$\det(H_4) = \det\begin{pmatrix} - & + & - \\ + & - & - \\ - & - & + \end{pmatrix} - \det\begin{pmatrix} + & + & - \\ + & - & - \\ + & - & + \end{pmatrix} + \det\begin{pmatrix} + & - & - \\ + & + & - \\ + & - & + \end{pmatrix} - \det\begin{pmatrix} + & - & + \\ + & + & - \\ + & - & - \end{pmatrix} = 16. \tag{1.21}$$

• There is a very simple method to generate the Hadamard matrix $H_N$ of order
N ($N = 2^n$) directly.79 Let us use a binary counting matrix $B_n$ that has $N = 2^n$
rows and n columns, whose m'th row is the n-bit binary expansion of m. For
example, the first four counting matrices (transposed) are

$$B_1^T = (0\;\; 1),$$
$$B_2^T = \begin{pmatrix} 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \end{pmatrix},$$
$$B_3^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}, \tag{1.22}$$
$$B_4^T = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{pmatrix}.$$

It can be shown that if in the matrix $B_n B_n^T$ (with entries reduced modulo 2) we
replace each 0 with +1 and each 1 with −1, we obtain the Hadamard matrix $H_{2^n}$
of order $2^n$. For n = 1, we obtain

$$B_1 B_1^T = \begin{pmatrix} 0 \\ 1 \end{pmatrix} (0\;\; 1) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \;\Rightarrow\; \text{replace } 0 \to +1,\; 1 \to -1 \;\Rightarrow\; \begin{pmatrix} +1 & +1 \\ +1 & -1 \end{pmatrix} = H_2, \tag{1.23}$$

$$B_2 B_2^T = \begin{pmatrix} 0 & 0 \\ 0 & 1 \\ 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \end{pmatrix} \bmod 2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 \end{pmatrix} \;\Rightarrow\; \text{replace } [0, 1] \to [+1, -1] \;\Rightarrow\; \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix} = H_4. \tag{1.24}$$
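The counting-matrix construction of Eqs. (1.22)–(1.24) is easy to check numerically. In the sketch below (ours; the function name `hadamard_from_counting` is hypothetical), the mod-2 reduction of $B_n B_n^T$ is made explicit:

```python
import numpy as np

def hadamard_from_counting(n):
    rows = np.arange(2 ** n)
    # B_n: row m holds the n-bit binary expansion of m (MSB first).
    B = (rows[:, None] >> np.arange(n - 1, -1, -1)) & 1
    # Reduce B_n B_n^T modulo 2, then map 0 -> +1 and 1 -> -1.
    return 1 - 2 * ((B @ B.T) % 2)

print(hadamard_from_counting(2))  # reproduces H_4 of Eq. (1.24)
```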
• The elements of the WHT matrix can be calculated as

$$\text{wal}_h(m, k) = \exp\left(j\pi \sum_{i=0}^{n-1} m_i k_i\right) = (-1)^{\sum_{i=0}^{n-1} m_i k_i}, \quad j = \sqrt{-1}, \tag{1.25}$$

since $\exp(j\pi r) = \cos(\pi r) + j \sin(\pi r)$.




• The discrete system $\{\text{wal}_h(m, k)\}$, $m, k = 0, 1, \ldots, N-1$, is called the
Walsh–Hadamard system. It can be shown that

$$\text{wal}_h(n, r + N) = \text{wal}_h(n, r). \tag{1.26}$$

Also, the Walsh–Hadamard system forms a complete system of orthogonal
functions, i.e.,

$$\frac{1}{N} \sum_{n=0}^{N-1} \text{wal}_h(k, n)\, \text{wal}_h(s, n) = \begin{cases} 1, & \text{if } k = s, \\ 0, & \text{otherwise}, \end{cases} \qquad k, s = 0, 1, 2, \ldots, N-1. \tag{1.27}$$

To verify the orthogonality of a matrix experimentally, it suffices to multiply
every row of the matrix by every other row, element by element, and examine the
result. If the rows are different, the sum should be zero (that is, they have nothing
in common, because everything cancels out).

• The eigenvalues of the matrix $H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}$ are $+\sqrt{2}$ and $-\sqrt{2}$; then, using
the properties of Kronecker products of matrices, we conclude that the
eigenvalues of a Walsh–Hadamard matrix of order n equal $\pm\sqrt{n}$. The eigen
decomposition of the matrix $H_2$ is given by

$$H_2 = UDU^{-1}, \quad \text{where } U = \begin{pmatrix} \cos\dfrac{\pi}{8} & -\sin\dfrac{\pi}{8} \\ \sin\dfrac{\pi}{8} & \cos\dfrac{\pi}{8} \end{pmatrix}, \quad D = \begin{pmatrix} \sqrt{2} & 0 \\ 0 & -\sqrt{2} \end{pmatrix}, \tag{1.28}$$

or

$$U = \frac{1}{2} \begin{pmatrix} \sqrt{2 + \sqrt{2}} & -\sqrt{2 - \sqrt{2}} \\ \sqrt{2 - \sqrt{2}} & \sqrt{2 + \sqrt{2}} \end{pmatrix}. \tag{1.29}$$

It has been shown (see the proof in the Appendix) that if A is an N × N matrix
with $Ax_n = a_n x_n$, n = 1, 2, ..., N, and B is an M × M matrix with $By_m = b_m y_m$,
m = 1, 2, ..., M, then

$$(A \otimes B)(x_n \otimes y_m) = a_n b_m (x_n \otimes y_m). \tag{1.30}$$

This means that if $\{x_n\}$ is a Karhunen–Loeve transform (KLT)46 basis for A, and
$\{y_m\}$ is a KLT basis for B, then $\{x_n \otimes y_m\}$ is the KLT basis for $A \otimes B$. Using this
fact, we may find the eigenvalues and the eigen decomposition of the matrix $H_n$.
• If $Hf$ and $Hg$ are WHTs of the vectors f and g, respectively, then $H(f * g) =
Hf \cdot Hg$, where * is the dyadic convolution of the two vectors f and g, defined by
$v(m) = \sum_{k=0}^{N-1} f(k) g(m \oplus k)$; here $m \oplus k$ is the decimal number whose binary
expansion is $[(m_0 + k_0) \bmod 2, (m_1 + k_1) \bmod 2, \ldots, (m_{n-1} + k_{n-1}) \bmod 2]$,
and $m_i$, $k_i$ are given by Eq. (1.7). (A numerical check of this theorem appears
after this list.)




An example serves to illustrate these relationships. Let $f^T = (f_0, f_1, f_2, f_3)$ and
$g^T = (g_0, g_1, g_2, g_3)$. Compute

$$F = H_4 f = \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix} \begin{pmatrix} f_0 \\ f_1 \\ f_2 \\ f_3 \end{pmatrix} = \begin{pmatrix} f_0 + f_1 + f_2 + f_3 \\ f_0 - f_1 + f_2 - f_3 \\ f_0 + f_1 - f_2 - f_3 \\ f_0 - f_1 - f_2 + f_3 \end{pmatrix} = \begin{pmatrix} F_0 \\ F_1 \\ F_2 \\ F_3 \end{pmatrix}, \tag{1.31a}$$

$$G = H_4 g = \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix} \begin{pmatrix} g_0 \\ g_1 \\ g_2 \\ g_3 \end{pmatrix} = \begin{pmatrix} g_0 + g_1 + g_2 + g_3 \\ g_0 - g_1 + g_2 - g_3 \\ g_0 + g_1 - g_2 - g_3 \\ g_0 - g_1 - g_2 + g_3 \end{pmatrix} = \begin{pmatrix} G_0 \\ G_1 \\ G_2 \\ G_3 \end{pmatrix}. \tag{1.31b}$$

Now, compute $v_m = \sum_{k=0}^{3} f_k\, g(m \oplus k)$. We find that

$$\begin{aligned}
v_0 &= f_0 g_0 + f_1 g_1 + f_2 g_2 + f_3 g_3, & v_1 &= f_0 g_1 + f_1 g_0 + f_2 g_3 + f_3 g_2, \\
v_2 &= f_0 g_2 + f_1 g_3 + f_2 g_0 + f_3 g_1, & v_3 &= f_0 g_3 + f_1 g_2 + f_2 g_1 + f_3 g_0.
\end{aligned} \tag{1.32}$$

Now, we can check that

$$H_4 v = \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix} \begin{pmatrix} v_0 \\ v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} v_0 + v_1 + v_2 + v_3 \\ v_0 - v_1 + v_2 - v_3 \\ v_0 + v_1 - v_2 - v_3 \\ v_0 - v_1 - v_2 + v_3 \end{pmatrix} = \begin{pmatrix} F_0 G_0 \\ F_1 G_1 \\ F_2 G_2 \\ F_3 G_3 \end{pmatrix}. \tag{1.33}$$

Let x be an integer vector. Then $y = (y_0, y_1, \ldots, y_{N-1})^T = H_N x$ is also an
integer vector. Moreover, if $y_0$ is odd (even), then all $y_i$, $i = 1, 2, \ldots, N-1$, are
odd (even).
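As promised at the dyadic-convolution property above, the theorem $H(f * g) = Hf \cdot Hg$ can be confirmed numerically. A sketch (ours, assuming SciPy's `scipy.linalg.hadamard`; no $1/\sqrt{N}$ normalization is applied, matching the unnormalized form used in Eqs. (1.31)–(1.33)):

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
N = 8
f = rng.integers(-5, 5, N)
g = rng.integers(-5, 5, N)

# Dyadic convolution: v[m] = sum_k f[k] g[m XOR k].
v = np.array([sum(f[k] * g[m ^ k] for k in range(N)) for m in range(N)])

H = hadamard(N)
assert np.array_equal(H @ v, (H @ f) * (H @ g))  # H(f * g) = Hf . Hg
```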

1.2 Walsh–Paley Matrices

The Walsh–Paley system (sometimes called the dyadic-ordered Walsh–Hadamard
matrix), introduced by Walsh in 1923,21 is constructed recursively by

$$P_N = \begin{pmatrix} P_{N/2} \otimes (1 \;\;\; 1) \\ P_{N/2} \otimes (1 \; -1) \end{pmatrix}, \quad \text{where } P_1 = (1),\; N = 2^n,\; n = 1, 2, \ldots. \tag{1.34}$$
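The recursion of Eq. (1.34) translates directly into code; a minimal sketch (ours; the helper name `paley` is hypothetical):

```python
import numpy as np

def paley(n):
    # Eq. (1.34): stack P_{N/2} (x) (1 1) on top of P_{N/2} (x) (1 -1).
    P = np.array([[1]])
    for _ in range(n):
        P = np.vstack([np.kron(P, [1, 1]), np.kron(P, [1, -1])])
    return P

print(paley(2))  # reproduces P_4 of Eq. (1.36)
```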

Below, we present the Paley matrices of orders 2, 4, 8, and 16 (see Fig. 1.6).
For n = 1, we have

$$P_2 = \begin{pmatrix} P_1 \otimes (+\;+) \\ P_1 \otimes (+\;-) \end{pmatrix} = \begin{pmatrix} + & + \\ + & - \end{pmatrix}. \tag{1.35}$$




Figure 1.6 Walsh–Paley matrices of orders 2, 4, 8, 16, and 32.

For n = 2, from the definition of the Kronecker product, we obtain

$$P_4 = \begin{pmatrix} P_2 \otimes (+\;+) \\ P_2 \otimes (+\;-) \end{pmatrix} = \begin{pmatrix} + & + & + & + \\ + & + & - & - \\ + & - & + & - \\ + & - & - & + \end{pmatrix}. \tag{1.36}$$

For n = 3, we have

$$P_8 = \begin{pmatrix} P_4 \otimes (+\;+) \\ P_4 \otimes (+\;-) \end{pmatrix} = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & + & + & + & - & - & - & - \\
+ & + & - & - & + & + & - & - \\
+ & + & - & - & - & - & + & + \\
+ & - & + & - & + & - & + & - \\
+ & - & + & - & - & + & - & + \\
+ & - & - & + & + & - & - & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix}. \tag{1.37}$$

Similarly, for n = 4, we obtain

$$P_{16} = \begin{pmatrix} P_8 \otimes (+\;+) \\ P_8 \otimes (+\;-) \end{pmatrix} = \begin{pmatrix}
+ & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + \\
+ & + & + & + & + & + & + & + & - & - & - & - & - & - & - & - \\
+ & + & + & + & - & - & - & - & + & + & + & + & - & - & - & - \\
+ & + & + & + & - & - & - & - & - & - & - & - & + & + & + & + \\
+ & + & - & - & + & + & - & - & + & + & - & - & + & + & - & - \\
+ & + & - & - & + & + & - & - & - & - & + & + & - & - & + & + \\
+ & + & - & - & - & - & + & + & + & + & - & - & - & - & + & + \\
+ & + & - & - & - & - & + & + & - & - & + & + & + & + & - & - \\
+ & - & + & - & + & - & + & - & + & - & + & - & + & - & + & - \\
+ & - & + & - & + & - & + & - & - & + & - & + & - & + & - & + \\
+ & - & + & - & - & + & - & + & + & - & + & - & - & + & - & + \\
+ & - & + & - & - & + & - & + & - & + & - & + & + & - & + & - \\
+ & - & - & + & + & - & - & + & + & - & - & + & + & - & - & + \\
+ & - & - & + & + & - & - & + & - & + & + & - & - & + & + & - \\
+ & - & - & + & - & + & + & - & + & - & - & + & - & + & + & - \\
+ & - & - & + & - & + & + & - & - & + & + & - & + & - & - & +
\end{pmatrix}. \tag{1.38}$$




The elements of the Walsh–Paley matrix of order $N = 2^n$ can be expressed as

$$\text{wal}_p(j, k) = (-1)^{\sum_{m=0}^{n-1} (k_{n-m} + k_{n-m-1}) j_m}, \tag{1.39}$$

where $j_m$, $k_m$ are the m'th bits in the binary representations of j and k (with $k_n = 0$).

Let us consider an example. Let n = 3; then, from Eq. (1.39), we obtain

$$\text{wal}_p(j, k) = (-1)^{k_2 j_0 + (k_2 + k_1) j_1 + (k_1 + k_0) j_2}. \tag{1.40}$$

Because $3 = 0 \cdot 2^2 + 1 \cdot 2^1 + 1 \cdot 2^0$, we have $j_2 = 0$, $j_1 = 1$, $j_0 = 1$, i.e., 3 = (011)
and 5 = (101); then $\text{wal}_p(3, 5) = (-1)^2 = 1$. Similarly, we can generate the other
elements of a Walsh–Paley matrix of order 8.
Walsh–Paley matrices have properties similar to those of Walsh–Hadamard
matrices. The set of functions $\{\text{wal}_p(0, k), \text{wal}_p(1, k), \ldots, \text{wal}_p(n-1, k)\}$, where

$$\begin{aligned}
\text{wal}_p(0, k) &= \text{wal}_p(0, 0)\;\text{wal}_p(0, 1)\;\ldots\;\text{wal}_p(0, n-1), \\
\text{wal}_p(1, k) &= \text{wal}_p(1, 0)\;\text{wal}_p(1, 1)\;\ldots\;\text{wal}_p(1, n-1), \\
&\;\;\vdots \\
\text{wal}_p(n-1, k) &= \text{wal}_p(n-1, 0)\;\text{wal}_p(n-1, 1)\;\ldots\;\text{wal}_p(n-1, n-1),
\end{aligned} \tag{1.41}$$

is called the discrete Walsh–Paley function system, or the discrete Walsh–Paley
basis. The set of functions $\{\text{wal}_p(0, t), \text{wal}_p(1, t), \ldots, \text{wal}_p(n-1, t)\}$ is
called the continuous Walsh–Paley system.46 The 16-point discrete Walsh–Paley
basis functions and the first eight continuous Walsh–Paley functions are shown
in Figs. 1.7 and 1.8, respectively. The 16 discrete Walsh–Paley basis functions
given in Fig. 1.7 can be generated by sampling the continuous Walsh–Paley
functions at $t = 0, 1/16, 2/16, 3/16, \ldots, 15/16$.

Comparing Figs. 1.8 and 1.4, we find the following relationships between the
Walsh–Hadamard and the Walsh–Paley basis functions:

$$\begin{aligned}
\text{wal}_h(0, t) &= \text{wal}_p(0, t), & \text{wal}_h(4, t) &= \text{wal}_p(1, t), \\
\text{wal}_h(1, t) &= \text{wal}_p(4, t), & \text{wal}_h(5, t) &= \text{wal}_p(5, t), \\
\text{wal}_h(2, t) &= \text{wal}_p(2, t), & \text{wal}_h(6, t) &= \text{wal}_p(3, t), \\
\text{wal}_h(3, t) &= \text{wal}_p(6, t), & \text{wal}_h(7, t) &= \text{wal}_p(7, t).
\end{aligned} \tag{1.42}$$

This means that most properties of the Walsh–Hadamard matrices and functions
carry over to the Walsh–Paley basis functions.

Figure 1.7 16-point discrete Walsh–Paley basis functions.

Figure 1.8 The first eight continuous Walsh–Paley functions in the interval [0, 1).

1.3 Walsh and Related Systems

The Walsh system differs from the Walsh–Hadamard system in the order of
the rows. Furthermore, we present the construction of the Walsh matrix and
the Walsh system. On the basis of this system, we derive two important
orthogonal systems, namely the Cal–Sal and Haar systems, to be discussed in
the following sections. Both of these systems have applications in signal/image
processing, communication, and digital logic.1–79 The Walsh system was
introduced in 1923 by Walsh.21
1.3.1 Walsh system
Walsh matrices are often described as discrete analogues of the cosine and sine
functions. The Walsh matrix is constructed recursively by

$$W_N = \left[ W_2 \otimes A_1,\; (W_2 R) \otimes A_2,\; \ldots,\; W_2 \otimes A_{(N/2)-1},\; (W_2 R) \otimes A_{N/2} \right], \tag{1.43}$$

where $W_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}$, $R = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, and $A_i$ is the i'th column of the Walsh matrix
of order N/2.
Example: Walsh matrices of orders 4 and 8 have the following structures:

$$W_4 = \begin{pmatrix} + & + & + & + \\ + & + & - & - \\ + & - & - & + \\ + & - & + & - \end{pmatrix}, \qquad W_8 = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & + & + & + & - & - & - & - \\
+ & + & - & - & - & - & + & + \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & - & - & + & - & + & + & - \\
+ & - & + & - & - & + & - & + \\
+ & - & + & - & + & - & + & -
\end{pmatrix}. \tag{1.44}$$
Indeed,

$$W_4 = \left[ W_2 \otimes \begin{pmatrix} + \\ + \end{pmatrix},\; (W_2 R) \otimes \begin{pmatrix} + \\ - \end{pmatrix} \right] = \left[ \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes \begin{pmatrix} + \\ + \end{pmatrix},\; \begin{pmatrix} + & + \\ - & + \end{pmatrix} \otimes \begin{pmatrix} + \\ - \end{pmatrix} \right] = \begin{pmatrix} + & + & + & + \\ + & + & - & - \\ + & - & - & + \\ + & - & + & - \end{pmatrix}, \tag{1.45a}$$

$$W_8 = \left[ W_2 \otimes \begin{pmatrix} + \\ + \\ + \\ + \end{pmatrix},\; (W_2 R) \otimes \begin{pmatrix} + \\ + \\ - \\ - \end{pmatrix},\; W_2 \otimes \begin{pmatrix} + \\ - \\ - \\ + \end{pmatrix},\; (W_2 R) \otimes \begin{pmatrix} + \\ - \\ + \\ - \end{pmatrix} \right]. \tag{1.45b}$$
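A standard equivalent of the column construction of Eq. (1.43) — not the book's derivation, but easy to check against Eq. (1.44) — is to permute the rows of the Sylvester matrix by the bit-reversed Gray code of the row index. A sketch of ours, assuming SciPy's `scipy.linalg.hadamard`:

```python
import numpy as np
from scipy.linalg import hadamard

def walsh(n):
    N = 2 ** n
    H = hadamard(N)  # Sylvester (natural) ordering
    def sigma(j):
        g = j ^ (j >> 1)                          # binary -> Gray code
        return int(format(g, f"0{n}b")[::-1], 2)  # reverse the n bits
    return H[[sigma(j) for j in range(N)]]

W8 = walsh(3)  # reproduces W_8 of Eq. (1.44); row i has i sign changes
```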




Figure 1.9 The first eight continuous Walsh functions in the interval [0, 1).

The elements of Walsh matrices can also be expressed as

$$\text{wal}_w(j, k) = (-1)^{\sum_{i=0}^{n-1} (j_{n-i-1} + j_{n-i}) k_i}, \tag{1.46}$$

where $N = 2^n$, $j_m$ and $k_m$ are the m'th bits in the binary representations of j and k,
respectively, and $j_n = 0$.
The set of functions $\{\text{wal}_w(0, k), \text{wal}_w(1, k), \ldots, \text{wal}_w(n-1, k)\}$, where

$$\begin{aligned}
\text{wal}_w(0, k) &= \{\text{wal}_w(0, 0), \text{wal}_w(0, 1), \text{wal}_w(0, 2), \ldots, \text{wal}_w(0, n-1)\}, \\
\text{wal}_w(1, k) &= \{\text{wal}_w(1, 0), \text{wal}_w(1, 1), \text{wal}_w(1, 2), \ldots, \text{wal}_w(1, n-1)\}, \\
&\;\;\vdots \\
\text{wal}_w(n-1, k) &= \{\text{wal}_w(n-1, 0), \text{wal}_w(n-1, 1), \ldots, \text{wal}_w(n-1, n-1)\},
\end{aligned} \tag{1.47}$$

is called a discrete Walsh system, or discrete Walsh basis. The set of functions
$\{\text{wal}_w(0, t), \text{wal}_w(1, t), \ldots, \text{wal}_w(n-1, t)\}$, $t \in [0, 1)$, is called the continuous
Walsh system (Fig. 1.9).
The continuous Walsh functions can be defined by the recursion

$$\text{wal}_w(2m + p, t) = \text{wal}_w[m, 2(t + 1/2)] + (-1)^{m+p}\, \text{wal}_w[m, 2(t - 1/2)], \quad t \in [0, 1), \tag{1.48}$$

where $m = 0, 1, 2, \ldots$, $p \in \{0, 1\}$, and $\text{wal}_w(0, t) = 1$ for all $t \in [0, 1)$.




Note that the Walsh functions may also be constructed via

$$\text{wal}_w(n \oplus m, t) = \text{wal}_w(n, t)\, \text{wal}_w(m, t), \quad t \in [0, 1), \tag{1.49}$$

where the symbol ⊕ denotes the logic operation Exclusive OR applied bitwise,
i.e., 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, and 1 ⊕ 1 = 0.

For example, let $n = 3 = 0 \cdot 2^2 + 1 \cdot 2^1 + 1 \cdot 2^0$ and $m = 5 = 1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0$;
then

$$3 \oplus 5 = (011) \oplus (101) = (0 \oplus 1)(1 \oplus 0)(1 \oplus 1) = (110) = 1 \cdot 2^2 + 1 \cdot 2^1 + 0 \cdot 2^0 = 6. \tag{1.50}$$

Hence, we obtain

$$\text{wal}_w(3, t)\, \text{wal}_w(5, t) = \text{wal}_w(3 \oplus 5, t) = \text{wal}_w(6, t). \tag{1.51}$$

1.3.2 Cal–Sal orthogonal system

A Cal–Sal function system can be defined as

$$\begin{aligned}
\text{cal}(j, k) &= \text{wal}_w(2j, k), & j &= 0, 1, 2, \ldots, \\
\text{sal}(j, k) &= \text{wal}_w(2j - 1, k), & j &= 1, 2, \ldots,
\end{aligned} \tag{1.52}$$

where $\text{wal}_w(j, k)$ is the (j, k)'th element of the Walsh matrix defined in
Eq. (1.46).

The Cal–Sal matrix elements can be calculated by $T(j, k) = (-1)^{\sum_{i=0}^{n-1} p_i k_i}$,
where $j, k = 0, 1, \ldots, 2^n - 1$ and $p_0 = j_{n-1}$, $p_1 = j_{n-2} + j_{n-1}$, $\ldots$, $p_{n-2} = j_1 + j_2$,
$p_{n-1} = j_0 + j_1$.
Cal–Sal Hadamard matrices of orders 2, 4, and 8 have the following form:

$$T_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}, \qquad T_4 = \begin{pmatrix} + & + & + & + \\ + & - & - & + \\ + & - & + & - \\ + & + & - & - \end{pmatrix}, \qquad T_8 = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & + & - & - & + \\
+ & - & + & - & - & + & - & + \\
+ & - & + & - & + & - & + & - \\
+ & - & - & + & - & + & + & - \\
+ & + & - & - & + & + & - & - \\
+ & + & + & + & - & - & - & -
\end{pmatrix}. \tag{1.53}$$

Cal–Sal matrices of order 2, 4, 8, 16, and 32 are shown in Fig. 1.10, and the first
eight continuous Cal–Sal functions are shown in Fig. 1.11.




Figure 1.10 Cal–Sal matrices of order 2, 4, 8, 16, and 32.

Figure 1.11 The first eight continuous Cal–Sal functions in the interval [0, 1).

The following example shows the relationship between a 4 × 4 Cal–Sal
Hadamard matrix and the continuous Cal–Sal functions:

$$T_4 = \begin{pmatrix} + & + & + & + \\ + & - & - & + \\ + & - & + & - \\ + & + & - & - \end{pmatrix} \quad \begin{matrix} \rightarrow \text{wal}_w(0, t) \\ \rightarrow \text{cal}(1, t) \\ \rightarrow \text{sal}(2, t) \\ \rightarrow \text{sal}(1, t). \end{matrix} \tag{1.54}$$
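Equation (1.52) says the Cal–Sal matrix is just the sequency-ordered Walsh matrix with its even-indexed rows taken in ascending order (cal) followed by its odd-indexed rows in descending order (sal). A sketch of that reordering (ours; the sequency order is recovered here by sorting the Sylvester rows by their sign-change counts):

```python
import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N)
changes = lambda r: int(np.sum(r[:-1] != r[1:]))
W = H[np.argsort([changes(r) for r in H])]  # sequency (Walsh) order
# cal rows wal_w(0), wal_w(2), ... ascending, then sal rows descending:
T = W[list(range(0, N, 2)) + list(range(N - 1, 0, -2))]
# T reproduces the Cal-Sal matrix T_8 of Eq. (1.53).
```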

The Cal–Sal system has many useful properties. The Walsh functions can be
constructed by

$$\text{wal}_w(n \oplus m, t) = \text{wal}_w(n, t)\, \text{wal}_w(m, t), \tag{1.55}$$

where the symbol ⊕ denotes the logic operation Exclusive OR, i.e.,

$$0 \oplus 0 = 0, \quad 0 \oplus 1 = 1, \quad 1 \oplus 0 = 1, \quad 1 \oplus 1 = 0. \tag{1.56}$$
For example, let

$$n = 3 = 0 \cdot 2^2 + 1 \cdot 2^1 + 1 \cdot 2^0 \quad \text{and} \quad m = 5 = 1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0; \tag{1.57}$$

then

$$n \oplus m = (0 \oplus 1) \cdot 2^2 + (1 \oplus 0) \cdot 2^1 + (1 \oplus 1) \cdot 2^0 = 1 \cdot 2^2 + 1 \cdot 2^1 + 0 \cdot 2^0 = 6; \tag{1.58}$$

thus,

$$\text{wal}_w(n \oplus m, t) = \text{wal}_w(3 \oplus 5, t) = \text{wal}_w(3, t)\, \text{wal}_w(5, t) = \text{wal}_w(6, t). \tag{1.59}$$

In particular, from this expression we have

$$\text{wal}_w(m \oplus m, t) = \text{wal}_w(m, t)\, \text{wal}_w(m, t) = \text{wal}_w(0, t). \tag{1.60}$$

Furthermore,

$$\begin{aligned}
\text{cal}(0, k) &= \text{wal}_w(0, k), \\
\text{cal}(j, k)\, \text{cal}(m, k) &= \text{cal}(j \oplus m, k), \\
\text{sal}(j, k)\, \text{sal}(m, k) &= \text{cal}((j - 1) \oplus (m - 1), k), \\
\text{sal}(j, k)\, \text{cal}(m, k) &= \text{sal}(m \oplus (j - 1) + 1, k), \\
\text{cal}(j, k)\, \text{sal}(m, k) &= \text{sal}(j \oplus (m - 1) + 1, k), \\
\text{cal}(2^m, k - 2^{-m-2}) &= \text{sal}(2^m, k), \\
\text{sal}(2^m, k - 2^{-m-2}) &= -\text{cal}(2^m, k).
\end{aligned} \tag{1.61}$$
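Several identities of Eq. (1.61) are easy to spot-check numerically for the order-8 system. The sketch below is ours; `cal` and `sal` are read off the sequency-ordered rows per Eq. (1.52):

```python
import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N)
sc = lambda r: int(np.sum(r[:-1] != r[1:]))
W = H[np.argsort([sc(r) for r in H])]  # sequency-ordered Walsh rows
cal = lambda j: W[2 * j]               # cal(j, .) = wal_w(2j, .)
sal = lambda j: W[2 * j - 1]           # sal(j, .) = wal_w(2j - 1, .)

# Indices are kept small enough to stay inside the order-8 system:
assert np.array_equal(cal(1) * cal(2), cal(1 ^ 2))
assert np.array_equal(sal(2) * sal(3), cal((2 - 1) ^ (3 - 1)))
assert np.array_equal(sal(1) * cal(2), sal((2 ^ (1 - 1)) + 1))
```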

Also note the sequency-ordered Hadamard functions. In Fourier analysis,
"frequency" can be interpreted physically as the number of cycles per unit of time,
which may also be interpreted as one half of the number of zero crossings per unit
time. In analogy to the relationship of frequency to the number of zero crossings
or sign changes in periodic functions, Harmuth22 defines sequency as one half of
the average number of zero crossings. The sequency $s_i$ of the Walsh basis system
is given by

$$s_0 = 0, \qquad s_i = \begin{cases} \dfrac{i}{2}, & \text{if } i \text{ is even}, \\[4pt] \dfrac{i+1}{2}, & \text{if } i \text{ is odd}. \end{cases} \tag{1.62}$$
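In code, Eq. (1.62) is a one-liner (our helper name); its values for i = 0, ..., 7 are exactly the sequency column listed beside the sequency-ordered matrix in Eq. (1.63b) below:

```python
def sequency(i):
    # Eq. (1.62): s_0 = 0; s_i = i/2 for even i, (i + 1)/2 for odd i.
    return i // 2 if i % 2 == 0 else (i + 1) // 2

print([sequency(i) for i in range(8)])  # [0, 1, 1, 2, 2, 3, 3, 4]
```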




We may classify the Hadamard systems with respect to sequency as follows:
• Sequency (Walsh) order is directly related to frequency and is well suited to
communication and signal processing applications such as filtering, spectral
analysis, recognition, and others.
• Paley or dyadic order has analytical and computational advantages and is used
for most mathematical investigations.
• Hadamard or natural order has computational benefits and is simple to generate
and understand.

Sometimes the number of sign changes along a column/row of a Hadamard
matrix is called the sequency of that column/row. Below, we present examples
of differently ordered Walsh–Hadamard matrices with the corresponding
sequencies listed on the right side.
(a) Natural-ordered Walsh–Hadamard matrix:

$$H_h(8) = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix} \quad \begin{matrix} 0 \\ 7 \\ 3 \\ 4 \\ 1 \\ 6 \\ 2 \\ 5 \end{matrix} \tag{1.63a}$$

(b) Sequency-ordered Walsh matrix:

$$H_w(8) = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & + & + & + & - & - & - & - \\
+ & + & - & - & - & - & + & + \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & - & - & + & - & + & + & - \\
+ & - & + & - & - & + & - & + \\
+ & - & + & - & + & - & + & -
\end{pmatrix} \quad \begin{matrix} 0 \\ 1 \\ 1 \\ 2 \\ 2 \\ 3 \\ 3 \\ 4 \end{matrix} \tag{1.63b}$$

(c) Dyadic-ordered Paley matrix

$$
H_p(8) =
\begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & + & + & + & - & - & - & - \\
+ & + & - & - & + & + & - & - \\
+ & + & - & - & - & - & + & + \\
+ & - & + & - & + & - & + & - \\
+ & - & + & - & - & + & - & + \\
+ & - & - & + & + & - & - & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix}
\begin{matrix} 0\\ 1\\ 2\\ 1\\ 4\\ 3\\ 2\\ 3 \end{matrix}
\qquad (1.63c)
$$


(d) Cal–Sal-ordered Hadamard matrix

$$
H_{cs}(8) =
\begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & + & - & - & + \\
+ & - & + & - & - & + & - & + \\
+ & - & + & - & + & - & + & - \\
+ & - & - & + & - & + & + & - \\
+ & + & - & - & + & + & - & - \\
+ & + & + & + & - & - & - & -
\end{pmatrix}
\begin{matrix}
\text{wal}(0, t) & 0\\
\text{cal}(1, t) & 1\\
\text{cal}(2, t) & 2\\
\text{cal}(3, t) & 3\\
\text{sal}(4, t) & 4\\
\text{sal}(3, t) & 3\\
\text{sal}(2, t) & 2\\
\text{sal}(1, t) & 1
\end{matrix}
\qquad (1.63d)
$$

where + and − indicate +1 and −1, respectively.


The relationships among the different orderings of Hadamard systems have been
discussed in the literature; see, in particular, Refs. 11 and 14. We will show that
each of the above Hadamard matrices is the Walsh–Hadamard matrix with its rows
shuffled. The relations among the orderings of Hadamard systems are shown
schematically in Fig. 1.12, where
• BGC = Binary to gray code conversion
• GBC = Gray to binary code conversion
• GIC = Gray to binary inverse code conversion
• IGC = Binary inverse to Gray code conversion
• IBC = Binary inverse to binary code conversion
• BIC = Binary to binary inverse code conversion
The Gray code is a binary numeral system, or base-2 number system, in which
two successive values differ in only one bit; that is, it is an encoding of numbers
such that adjacent numbers differ in a single bit. Gray codes were applied
to mathematical puzzles before they became known to engineers. The French
engineer Émile Baudot used Gray codes in telegraphy in 1878. He received the
French Legion of Honor medal for his work.80 Frank Gray, who became famous for
inventing the signaling method that came to be used for compatible color television,
invented a method to convert analog signals to reflected binary code groups using
a vacuum-tube-based apparatus. The method and apparatus were patented in 1953,
and the name of Gray stuck to the codes.79,80

Figure 1.12 Schematic of Hadamard systems.


Table 1.3 Binary Gray Codes.

Decimal 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Binary 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
Gray 0000 0001 0011 0010 0110 0111 0101 0100 1100 1101 1111 1110 1010 1011 1001 1000

Mathematically, these relations are given as follows. Let $b = (b_{n-1}, b_{n-2}, \ldots, b_0)$,
$c = (c_{n-1}, c_{n-2}, \ldots, c_0)$, and $g = (g_{n-1}, g_{n-2}, \ldots, g_0)$ denote code words in the n-bit
binary, binary-inverse, and Gray code representations, respectively. Below, we give a
more detailed description of the conversion operations that are given in the above
scheme.
(a) Binary-to-Gray code conversion (BGC) is given by

$$g_{n-1} = b_{n-1}, \qquad g_i = b_i \oplus b_{i+1}, \quad i = 0, 1, \ldots, n-2, \qquad (1.64)$$

where the symbol ⊕ denotes addition modulo 2.


Example: Let b = (1, 0, 1, 0, 0, 1), then g = (1, 1, 1, 1, 0, 1). The schematic
presentation for this conversion is given as
Binary Code 1 ⊕ 0 ⊕ 1 ⊕ 0 ⊕ 0 ⊕ 1
↓ ↓ ↓ ↓ ↓ ↓ (1.65)
Gray Code 1 1 1 1 0 1
Thus, the binary code b = (1, 0, 1, 0, 0, 1) corresponds to the Gray code
g = (1, 1, 1, 1, 0, 1). Table 1.3 lists the Gray codes for the binary codes of the
decimal numbers 0, 1, 2, . . . , 15.
(b) As can be seen from Table 1.3, each Gray code differs from its neighbors
by only one bit.
(c) Conversion from Gray code to natural binary: Let {gk , k = 0, 1, . . . , n − 1}
be an n-bit Gray code and {bk , k = 0, 1, . . . , n − 1} the corresponding binary code
word. Gray-to-binary code conversion (GBC) can be done by

bn−1 = gn−1 ,
(1.66)
bn−i = gn−1 ⊕ gn−2 ⊕ · · · ⊕ gn−i , i = 2, 3, . . . , n.

The conversion from a Gray-coded number to binary can be achieved by using the
following scheme:
• To find the binary next-to-MSB (most significant bit), add (modulo 2) the binary
MSB and the Gray-code next-to-MSB.
• Record the sum.
• Continue this computation down to the last bit.
Note that both the binary and the Gray-coded numbers will have a similar
number of bits, and the binary MSB (left-hand bit) and Gray-code MSB are always
the same.


Example: Let g = (1, 0, 1, 0, 1, 1).


b5 = g5 = 1,
b4 = g4 ⊕ b5 = 0 ⊕ 1 = 1,
b3 = g3 ⊕ b4 = 1 ⊕ 1 = 0,
(1.67)
b2 = g2 ⊕ b3 = 0 ⊕ 0 = 0,
b1 = g1 ⊕ b2 = 1 ⊕ 0 = 1,
b0 = g0 ⊕ b1 = 1 ⊕ 1 = 0.
Thus, b = (1, 1, 0, 0, 1, 0).
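For readers who wish to experiment, here is a small Python sketch (our own, not
from the book) of the BGC and GBC conversions of Eqs. (1.64) and (1.66). Bit
vectors are written MSB first, as in the text, and both worked examples above are
reproduced; the function names are our assumptions.

def binary_to_gray(b):
    # BGC, Eq. (1.64): the MSB is unchanged; every other Gray bit is the
    # XOR of the corresponding binary bit and its more significant neighbor.
    g = b[:]
    for i in range(1, len(b)):
        g[i] = b[i] ^ b[i - 1]
    return g

def gray_to_binary(g):
    # GBC, Eq. (1.66): running XOR of the Gray bits, starting from the MSB.
    b = [g[0]]
    for i in range(1, len(g)):
        b.append(b[i - 1] ^ g[i])
    return b

# The examples from the text:
assert binary_to_gray([1, 0, 1, 0, 0, 1]) == [1, 1, 1, 1, 0, 1]
assert gray_to_binary([1, 0, 1, 0, 1, 1]) == [1, 1, 0, 0, 1, 0]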
Binary-to-binary-inverse code conversion (BIC): the formation ci is given by

ci = bn−i−1 , i = 0, 1, . . . , n − 1. (1.68)

Example: Let b = (1, 0, 1, 0, 0, 1), then we have c0 = b5 = 1, c1 = b4 = 0,


c2 = b3 = 1, c3 = b2 = 0, c4 = b1 = 0, c5 = b0 = 1. Hence, we obtain
c = (1, 0, 0, 1, 0, 1).
(d) Binary-inverse-to-binary code conversion (IBC): the formation bi is given by

bi = cn−i−1 , i = 0, 1, . . . , n − 1. (1.69)

Example: Let c = (1, 0, 0, 1, 0, 1), then b = (1, 0, 1, 0, 0, 1).


Gray to binary inverse code conversion (GIC): the formation ci , i = 0, 1, . . . , n−1
is initiated from the most significant bit as

c0 = gn−1 ,
(1.70)
ci−1 = gn−1 ⊕ gn−2 ⊕ · · · ⊕ gn−i , i = 2, 3, . . . , n.

Example: Let g = (1, 0, 1, 1, 0, 1), then we have


c0 = g5 = 1,
c1 = g5 ⊕ g4 = 1 ⊕ 0 = 1,
c2 = g5 ⊕ g4 ⊕ g3 = 1 ⊕ 0 ⊕ 1 = 0,
(1.71)
c3 = g5 ⊕ g4 ⊕ g3 ⊕ g2 = 1 ⊕ 0 ⊕ 1 ⊕ 1 = 1,
c4 = g5 ⊕ g4 ⊕ g3 ⊕ g2 ⊕ g1 = 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 = 1,
c5 = g5 ⊕ g4 ⊕ g3 ⊕ g2 ⊕ g1 ⊕ g0 = 1 ⊕ 0 ⊕ 1 ⊕ 1 ⊕ 0 ⊕ 1 = 0.
Hence, we have c = (0, 1, 1, 0, 1, 1).
Binary-inverse-to-Gray code conversion (IGC): the formation gi , i = 0, 1, . . . ,
n − 1 is given by

gn−1 = c0 ,
(1.72)
gi = cn−i−1 ⊕ cn−i−2 , i = 0, 1, . . . , n − 2.


Example: Let c = (1, 0, 1, 1, 1, 0). A schematic presentation of this conversion is


given as

Binary Inverse 1 0 1 1 1 0
↓ ↓ ↓ ↓ ↓ ↓
⊕ ⊕ ⊕ ⊕ ⊕ ↓ (1.73)
Output Code 1 1 0 0 1 0
g0 g1 g2 g3 g4 g5

Thus, we obtain g = (0, 1, 0, 0, 1, 1).


Concerning signal representation/decomposition, the theory of communication
systems has traditionally been based on orthogonal systems such as the sine and
cosine systems. The Cal–Sal system is similar to the Fourier system: the sinusoids
of the Fourier system are characterized by their frequency of oscillation, i.e., the
number of complete cycles they make.22,46
Because the Walsh functions form an orthogonal system, the Walsh series is
defined by

$$f(x) = \sum_{m=0}^{\infty} c_m\,\text{wal}(m, x), \qquad (1.74)$$

where $c_m = \int_0^1 f(x)\,\text{wal}(m, x)\,dx$, $m = 0, 1, 2, \ldots$.
For the Cal–Sal system, the analogy with the Fourier series motivates the
following representation:

$$f(x) = a_0 + \sum_{m=1}^{\infty} \big[a_m\,\text{cal}(m, x) + b_m\,\text{sal}(m, x)\big], \qquad (1.75)$$

where $a_m = \int_{-1/2}^{1/2} f(x)\,\text{cal}(m, x)\,dx$ and $b_m = \int_{-1/2}^{1/2} f(x)\,\text{sal}(m, x)\,dx$,
$m = 0, 1, 2, \ldots$.
Defining $c_m = \sqrt{a_m^2 + b_m^2}$ and $\alpha_m = \tan^{-1}(b_m/a_m)$ and plotting them versus the
sequency $m$ yields plots similar to Fourier magnitude and phase spectra. Here, $c_m$
provides an analogy to the modulus, while the artificial phase $\alpha_m$ is analogous to a
classical phase.
Any signal $f(x)$ that is square integrable over $[0, 1]$ can be represented by a
Walsh–Fourier series, and the Parseval identity is also valid.
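A discrete analogue of this series is immediate: sampling at N = 2^n points turns
Eq. (1.74) into a matrix-vector product. The following NumPy sketch (ours; the
helper name walsh_matrix is an assumption) computes the Walsh coefficients of a
sampled signal, reconstructs the signal, and checks a discrete Parseval identity.

import numpy as np

def walsh_matrix(n):
    """Sequency-ordered Walsh matrix of order N = 2**n (sampled wal(m, t))."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.kron(H, np.array([[1, 1], [1, -1]]))
    changes = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(changes)]

N = 8
W = walsh_matrix(3)
f = np.random.randn(N)            # samples of f(x) on [0, 1)

c = W @ f / N                     # discrete coefficients c_m
f_rec = W.T @ c                   # finite Walsh series reconstruction
assert np.allclose(f, f_rec)

# Discrete analogue of the Parseval identity: (1/N) sum f^2 = sum c_m^2
assert np.allclose((f @ f) / N, c @ c)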

1.3.3 The Haar system


The Haar transform, almost 100 years old, was introduced by the Hungarian
mathematician Alfred Haar in 1910 (see Fig. 1.13).42,50–53 In the discrete Fourier
transform (DFT) and WHT, each transform coefficient is a function of all
coordinates in the original data space (global), whereas this is true only for


Figure 1.13 Alfréd Haar (1885–1933), Hungarian mathematician (http://www.gap-system.org/~history/Mathematicians/Haar.html).

the first two Haar coefficients. The Haar transform is real, allowing simple
implementation as well as simple visualization and interpretation. The advantages
of these basis functions are that they are well localized in time, may be
very easily implemented, and are by far the fastest among unitary transforms.
The Haar transform provides a transform domain in which a type of differential
energy is concentrated in localized regions. This kind of property is very useful in
image processing applications such as edge detection and contour extraction. The
Haar transform is the simplest example of an orthonormal wavelet transform. The
orthogonal Haar functions are defined as follows:42,46

$$H_0^0(k) = 1,$$

$$
H_i^q(k) =
\begin{cases}
2^{(i-1)/2}, & \text{if } \dfrac{q}{2^{i-1}} \le k < \dfrac{q + 0.5}{2^{i-1}},\\[2mm]
-2^{(i-1)/2}, & \text{if } \dfrac{q + 0.5}{2^{i-1}} \le k < \dfrac{q + 1}{2^{i-1}},\\[2mm]
0, & \text{at all other points},
\end{cases}
\qquad (1.76)
$$

where $i = 1, 2, \ldots, n$ and $q = 0, 1, \ldots, 2^{i-1} - 1$.
Note that for any n there will be $2^n$ Haar functions. Discrete sampling of the
set of Haar functions gives an orthogonal matrix of order $2^n$. The Haar transform
matrix is defined as

$$
[\text{Haar}]_{2^n} = H(2^n) =
\begin{pmatrix}
H(2^{n-1}) \otimes (+1\ \ +1)\\
\sqrt{2^{n-1}}\, I(2^{n-1}) \otimes (+1\ \ -1)
\end{pmatrix},
\qquad n = 2, 3, \ldots,
\qquad (1.77)
$$

where $H(2) = \begin{pmatrix} +1 & +1\\ +1 & -1 \end{pmatrix}$, $\otimes$ is the Kronecker product, and $I(2^n)$ is the identity matrix
of order $2^n$.
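The recursion of Eq. (1.77) translates directly into code. The sketch below (ours,
not the authors') builds $[\text{Haar}]_{2^n}$ exactly as in Eq. (1.77) and confirms that the
rows of the order-8 matrix of Eq. (1.78c) are orthogonal with squared norm 8.

import numpy as np

def haar(n):
    """Haar matrix of order 2**n built by the recursion of Eq. (1.77)."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]])           # H(2)
    for k in range(2, n + 1):
        m = 2 ** (k - 1)
        top = np.kron(H, [1.0, 1.0])                  # H(2^{k-1}) ⊗ (+1 +1)
        bot = np.sqrt(m) * np.kron(np.eye(m), [1.0, -1.0])
        H = np.vstack([top, bot])
    return H

H8 = haar(3)                        # reproduces Eq. (1.78c) with s = sqrt(2)
assert np.allclose(H8 @ H8.T, 8 * np.eye(8))   # rows orthogonal, norm^2 = 8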


Below are the Haar matrices of orders 2, 4, 8, and 16 (here $s = \sqrt{2}$).

$$[\text{Haar}]_2 = \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix}, \qquad (1.78a)$$

$$[\text{Haar}]_4 = \begin{pmatrix} 1 & 1 & 1 & 1\\ 1 & 1 & -1 & -1\\ s & -s & 0 & 0\\ 0 & 0 & s & -s \end{pmatrix}, \qquad (1.78b)$$

$$
[\text{Haar}]_8 =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\
s & s & -s & -s & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & s & s & -s & -s\\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix},
\qquad (1.78c)
$$

$$
[\text{Haar}]_{16} =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1\\
s & s & s & s & -s & -s & -s & -s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & s & s & s & s & -s & -s & -s & -s\\
2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 2 & -2 & -2\\
2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2s & -2s
\end{pmatrix}.
\qquad (1.78d)
$$
Figure 1.14 shows the structure of Haar matrices of different orders, and
Fig. 1.15 shows the structure of continuous Haar functions.
The discrete Haar basis system can be generated by sampling Haar systems at
t = 0, 1/N, 2/N, . . . , (N − 1)/N. The 16-point discrete Haar functions are shown in
Fig. 1.16.
Properties:
(1) The Haar transform Y = [Haar]2n X (where X is an input signal) provides a
domain that is both globally and locally sensitive. The first two functions reflect
the global character of the input signal; the rest of the functions reflect the local
characteristics of the input signal. A local change in the data signal results in a
local change in the Haar transform coefficients.
(2) The Haar transform is real (not complex like a Fourier transform), so real data
give real Haar transform coefficients.


Figure 1.14 The structure of Haar matrices of order 2, 4, 8, 16, 32, 64, 128, and 256.

Figure 1.15 The first eight continuous Haar functions in the interval [0, 1).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


28 Chapter 1

Figure 1.16 The first 16 discrete Haar functions.

(3) The Haar matrix is orthogonal: $H_N H_N^T = H_N^T H_N = I_N$, where $I_N$ is the $N \times N$
identity matrix. Its rows are sequency ordered. Whereas the trigonometric
basis functions differ only in frequency, the Haar functions vary in both scale
(width) and position.
(4) The Haar transform is one of the fastest of the orthogonal transforms. To
define the standard 2D Haar decomposition in terms of the 1D transform, first
apply the 1D Haar transform to each row, then apply the 1D Haar transform to
each column of the result (a code sketch is given after this list). (See Fig. 1.17
for Haar-transformed images of 2D input images.) In other words, the 2D Haar
function is defined from the 1D Haar functions as follows:

$$h_{m,n}(x, y) = h_m(x)\,h_n(y), \qquad m, n = 0, 1, 2, \ldots. \qquad (1.79)$$
(5) Similar to the Hadamard functions, the Haar system can be presented in
different ways, for example, sequency ordering, or Haar ordering (below
$s = \sqrt{2}$),

$$
[\text{Haar}]_8 =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\
s & s & -s & -s & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & s & s & -s & -s\\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix},
\qquad (1.80)
$$


Figure 1.17 Two images (left) and their 2D Haar transform images (right).

and the natural ordering

$$
[\text{Haar}]_8 =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
2 & -2 & 0 & 0 & 0 & 0 & 0 & 0\\
s & s & -s & -s & 0 & 0 & 0 & 0\\
0 & 0 & 2 & -2 & 0 & 0 & 0 & 0\\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\
0 & 0 & 0 & 0 & 2 & -2 & 0 & 0\\
0 & 0 & 0 & 0 & s & s & -s & -s\\
0 & 0 & 0 & 0 & 0 & 0 & 2 & -2
\end{pmatrix}.
\qquad (1.81)
$$
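As promised in property (4), here is a minimal sketch (ours) of the row-column
2D Haar decomposition. For an orthonormal matrix H, the forward transform is
Y = H X Hᵀ and the inverse is X = Hᵀ Y H.

import numpy as np

def haar(n):
    # Recursion of Eq. (1.77), identical to the earlier sketch.
    H = np.array([[1.0, 1.0], [1.0, -1.0]])
    for k in range(2, n + 1):
        m = 2 ** (k - 1)
        H = np.vstack([np.kron(H, [1.0, 1.0]),
                       np.sqrt(m) * np.kron(np.eye(m), [1.0, -1.0])])
    return H

H = haar(3) / np.sqrt(8)           # orthonormal 8x8 Haar matrix (H H^T = I)

X = np.random.randn(8, 8)          # a 2D input "image"
Y = H @ X @ H.T                    # 1D transform along both axes (row-column)
assert np.allclose(H.T @ Y @ H, X) # the separable inverse recovers X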

1.3.4 The modified Haar “Hadamard ordering”


The modified Haar "Hadamard ordering" matrix of order $2^n \times 2^n$ is generated
recursively as

$$
H_{2^n} = H(2^n) =
\begin{pmatrix}
H(2^{n-1}) & H(2^{n-1})\\
\sqrt{2^{n-1}}\, I(2^{n-1}) & -\sqrt{2^{n-1}}\, I(2^{n-1})
\end{pmatrix},
\qquad n = 2, 3, \ldots,
\qquad (1.82)
$$

where $H(2) = \begin{pmatrix} + & +\\ + & - \end{pmatrix}$, and $I(2^n)$ is the identity matrix of order $2^n$.



Example: For n = 2 and n = 3, we have (recall that $s = \sqrt{2}$)

$$
H(4) =
\begin{pmatrix}
1 & 1 & 1 & 1\\
1 & -1 & 1 & -1\\
s & 0 & -s & 0\\
0 & s & 0 & -s
\end{pmatrix},
\qquad (1.83a)
$$

$$
H(8) =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\\
s & 0 & -s & 0 & s & 0 & -s & 0\\
0 & s & 0 & -s & 0 & s & 0 & -s\\
2 & 0 & 0 & 0 & -2 & 0 & 0 & 0\\
0 & 2 & 0 & 0 & 0 & -2 & 0 & 0\\
0 & 0 & 2 & 0 & 0 & 0 & -2 & 0\\
0 & 0 & 0 & 2 & 0 & 0 & 0 & -2
\end{pmatrix}.
\qquad (1.83b)
$$

1.3.5 Normalized Haar transforms


The unnormalized Haar binary spectrum was used as a tool for the detection and
diagnosis of physical faults in practical MOS (metal-oxide semiconductor) digital
circuits and for self-test purposes.64,77 The normalized forward and inverse Haar
transform matrices of order $N = 2^n$ can be generated recursively by

$$
[\text{Haar}]_{2^n} =
\begin{pmatrix}
[\text{Haar}]_{2^{n-1}} \otimes (+1\ \ +1)\\
I_{2^{n-1}} \otimes (+1\ \ -1)
\end{pmatrix},
\qquad (1.84a)
$$

$$
[\text{Haar}]_{2^n}^{-1} = \frac{1}{2}
\left[
[\text{Haar}]_{2^{n-1}}^{-1} \otimes \binom{+1}{+1},\;
I_{2^{n-1}} \otimes \binom{+1}{-1}
\right],
\qquad (1.84b)
$$

where $[\text{Haar}]_2 = \begin{pmatrix} + & +\\ + & - \end{pmatrix}$ and $[\text{Haar}]_2^{-1} = \frac{1}{2}\begin{pmatrix} + & +\\ + & - \end{pmatrix}$. Figure 1.18 shows the basis vectors of a
modified Haar matrix.
The normalized forward and inverse Haar orthogonal transform matrices of
orders 4 and 8 are given as follows:

$$
[\text{Haar}]_4 =
\begin{pmatrix}
\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} \otimes (+1\ \ +1)\\[1mm]
\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \otimes (+1\ \ -1)
\end{pmatrix}
=
\begin{pmatrix}
+ & + & + & +\\
+ & + & - & -\\
+ & - & 0 & 0\\
0 & 0 & + & -
\end{pmatrix},
$$

$$
[\text{Haar}]_4^{-1} = \frac{1}{2}\left[\frac{1}{2}\begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} \otimes \binom{+1}{+1},\;
\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \otimes \binom{+1}{-1}\right]
= \frac{1}{4}
\begin{pmatrix}
1 & 1 & 2 & 0\\
1 & 1 & -2 & 0\\
1 & -1 & 0 & 2\\
1 & -1 & 0 & -2
\end{pmatrix},
\qquad (1.85a)
$$


Figure 1.18 The modified Haar 8 × 8 basis vectors.

$$
[\text{Haar}]_8 =
\begin{pmatrix}
[\text{Haar}]_4 \otimes (+\ \ +)\\
I_4 \otimes (+\ \ -)
\end{pmatrix}
=
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\
1 & 1 & -1 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 1 & -1 & -1\\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1
\end{pmatrix},
$$

$$
[\text{Haar}]_8^{-1} = \frac{1}{2}\left[\frac{1}{4}
\begin{pmatrix}
1 & 1 & 2 & 0\\
1 & 1 & -2 & 0\\
1 & -1 & 0 & 2\\
1 & -1 & 0 & -2
\end{pmatrix} \otimes \binom{+1}{+1},\;
I_4 \otimes \binom{+1}{-1}\right]
\qquad (1.85b)
$$

$$
= \frac{1}{8}
\begin{pmatrix}
1 & 1 & 2 & 0 & 4 & 0 & 0 & 0\\
1 & 1 & 2 & 0 & -4 & 0 & 0 & 0\\
1 & 1 & -2 & 0 & 0 & 4 & 0 & 0\\
1 & 1 & -2 & 0 & 0 & -4 & 0 & 0\\
1 & -1 & 0 & 2 & 0 & 0 & 4 & 0\\
1 & -1 & 0 & 2 & 0 & 0 & -4 & 0\\
1 & -1 & 0 & -2 & 0 & 0 & 0 & 4\\
1 & -1 & 0 & -2 & 0 & 0 & 0 & -4
\end{pmatrix}.
$$
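The forward/inverse pair of Eqs. (1.84a) and (1.84b) can be generated and verified
numerically. The sketch below is ours (the helper name haar_pair is an assumption);
it checks that the order-8 inverse of Eq. (1.85b) indeed inverts the forward matrix.

import numpy as np

def haar_pair(n):
    """Forward/inverse pair of Eqs. (1.84a) and (1.84b), orders 2 to 2**n."""
    F = np.array([[1.0, 1.0], [1.0, -1.0]])       # [Haar]_2
    G = F / 2                                     # [Haar]_2^{-1}
    for k in range(2, n + 1):
        m = 2 ** (k - 1)
        F = np.vstack([np.kron(F, [1.0, 1.0]),
                       np.kron(np.eye(m), [1.0, -1.0])])
        G = 0.5 * np.hstack([np.kron(G, [[1.0], [1.0]]),
                             np.kron(np.eye(m), [[1.0], [-1.0]])])
    return F, G

F8, G8 = haar_pair(3)
assert np.allclose(G8 @ F8, np.eye(8))   # G8 matches the matrix of Eq. (1.85b)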


1.3.6 Generalized Haar transforms


The procedure of Eq. (1.77) presents an elegant method to obtain the Haar
transform matrix of order 2n . Various applications have motivated modifications
and generalizations of the Haar transform. In particular, an interesting variation on
the Haar system uses more than one mother function.

1.3.7 Complex Haar transform


In Ref. 44, the authors developed a so-called complex Haar transform,

$$
[HC]_{2^n} = [HC](2^n) =
\begin{pmatrix}
[HC](2^{n-1}) \otimes (+1\ \ -j)\\
\sqrt{2^{n-1}}\, I(2^{n-1}) \otimes (+1\ \ +j)
\end{pmatrix},
\qquad n = 2, 3, \ldots,
\qquad (1.86)
$$

where $[HC](2) = \begin{pmatrix} +1 & -j\\ +1 & +j \end{pmatrix}$, $\otimes$ is the Kronecker product, and $I(2^n)$ is the identity
matrix of order $2^n$. Note that instead of the initial matrix $[HC](2)$ in the
above recursion, we can also use the matrix $\begin{pmatrix} 1 & j\\ j & 1 \end{pmatrix}$.

1.3.8 $k^n$-point Haar transforms

In Ref. 76, Agaian and Matevosyan developed a class of $k^n$-point Haar transforms,
using $k - 1$ mother functions, for an arbitrary integer k. The $k^n$-point Haar transform
matrix can be recursively constructed by

$$
[AH](k^n) =
\begin{pmatrix}
[AH](k^{n-1}) \otimes e_k\\
\sqrt{k^{n-1}}\, I(k^{n-1}) \otimes A_k^1
\end{pmatrix},
\qquad n = 2, 3, 4, \ldots,
\qquad (1.87)
$$

where $e_k$ is the all-ones row vector of length k, $I(m)$ is the identity matrix of order
m, $\otimes$ denotes the Kronecker product, and $[AH](k) = A(k)$ is an orthogonal
matrix of order k of the following form:

$$
A(k) =
\begin{pmatrix}
1 & 1 & \cdots & 1 & 1\\
& & A_k^1 & &
\end{pmatrix}.
\qquad (1.88)
$$

In particular, A(k) can be any orthogonal matrix (such as Fourier, Hadamard,
cosine, and others) whose first row is a constant discrete function and whose
remaining rows are either sinusoidal or nonsinusoidal functions. This method makes
it possible to construct a new class of Haar-like orthogonal transforms.
Some examples are provided as follows:
(1) If k = 2 and $A_2^1 = (+1\ \ -1)$, we obtain the classical Haar transform matrices
of order 4, 8, and so on.
(2) If k = 2 and $A_2^1 = [\exp(j\alpha)\ \ -\exp(j\alpha)]$, we can generate new complex Haar
transform matrices of order $2^n$. The $[AH](2)$, $[AH](4)$, and $[AH](8)$ matrices are
given as follows:


 
$$[AH](2) = \begin{pmatrix} 1 & 1\\ e^{j\alpha} & -e^{j\alpha} \end{pmatrix}, \qquad (1.89a)$$

$$
[AH](4) =
\begin{pmatrix}
[AH](2) \otimes (1\ \ 1)\\
\sqrt{2}\, I(2) \otimes (e^{j\alpha}\ \ -e^{j\alpha})
\end{pmatrix}
=
\begin{pmatrix}
1 & 1 & 1 & 1\\
e^{j\alpha} & e^{j\alpha} & -e^{j\alpha} & -e^{j\alpha}\\
\sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} & 0 & 0\\
0 & 0 & \sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha}
\end{pmatrix},
\qquad (1.89b)
$$

$$
[AH](8) =
\begin{pmatrix}
[AH](4) \otimes (1\ \ 1)\\
2 I(4) \otimes (e^{j\alpha}\ \ -e^{j\alpha})
\end{pmatrix}
=
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
e^{j\alpha} & e^{j\alpha} & e^{j\alpha} & e^{j\alpha} & -e^{j\alpha} & -e^{j\alpha} & -e^{j\alpha} & -e^{j\alpha}\\
\sqrt{2}e^{j\alpha} & \sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & \sqrt{2}e^{j\alpha} & \sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha} & -\sqrt{2}e^{j\alpha}\\
2e^{j\alpha} & -2e^{j\alpha} & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 2e^{j\alpha} & -2e^{j\alpha} & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 2e^{j\alpha} & -2e^{j\alpha} & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 2e^{j\alpha} & -2e^{j\alpha}
\end{pmatrix}.
\qquad (1.89c)
$$

(3) A new Haar-like system matrix is generated based on the Haar matrices of
orders 3 and 9:

$$
[AH](3) =
\begin{pmatrix}
1 & 1 & 1\\[1mm]
\dfrac{\sqrt{2}}{2} & \dfrac{\sqrt{2}}{2} & -\sqrt{2}\\[2mm]
\dfrac{\sqrt{6}}{2} & -\dfrac{\sqrt{6}}{2} & 0
\end{pmatrix},
\qquad (1.90a)
$$

$$
[AH](9) =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\[1mm]
\frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & -\sqrt{2} & -\sqrt{2} & -\sqrt{2}\\[1mm]
\frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & -\frac{\sqrt{6}}{2} & -\frac{\sqrt{6}}{2} & -\frac{\sqrt{6}}{2} & 0 & 0 & 0\\[1mm]
\frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & -\sqrt{6} & 0 & 0 & 0 & 0 & 0 & 0\\[1mm]
\frac{\sqrt{18}}{2} & -\frac{\sqrt{18}}{2} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\[1mm]
0 & 0 & 0 & \frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & -\sqrt{6} & 0 & 0 & 0\\[1mm]
0 & 0 & 0 & \frac{\sqrt{18}}{2} & -\frac{\sqrt{18}}{2} & 0 & 0 & 0 & 0\\[1mm]
0 & 0 & 0 & 0 & 0 & 0 & \frac{\sqrt{6}}{2} & \frac{\sqrt{6}}{2} & -\sqrt{6}\\[1mm]
0 & 0 & 0 & 0 & 0 & 0 & \frac{\sqrt{18}}{2} & -\frac{\sqrt{18}}{2} & 0
\end{pmatrix}.
\qquad (1.90b)
$$
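The construction of Eq. (1.87) is short to implement. The sketch below (ours)
seeds the recursion with A(3) from Eq. (1.90a), produces the 9 × 9 matrix of
Eq. (1.90b), and checks its orthogonality; the function name ah is an assumption.

import numpy as np

s2, s6 = np.sqrt(2), np.sqrt(6)
A3 = np.array([[1,       1,       1  ],
               [s2 / 2,  s2 / 2, -s2 ],
               [s6 / 2, -s6 / 2,  0  ]])      # Eq. (1.90a)

def ah(A, n):
    """k^n-point Haar-like matrix from Eq. (1.87), seeded with A = A(k)."""
    k = A.shape[0]
    M = A
    for i in range(2, n + 1):
        m = k ** (i - 1)
        top = np.kron(M, np.ones(k))               # [AH](k^{i-1}) ⊗ e_k
        bot = np.sqrt(m) * np.kron(np.eye(m), A[1:])   # sqrt(k^{i-1}) I ⊗ A_k^1
        M = np.vstack([top, bot])
    return M

A9 = ah(A3, 2)                      # the 9x9 matrix of Eq. (1.90b)
assert np.allclose(A9 @ A9.T, 9 * np.eye(9))   # rows orthogonal, norm^2 = 9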


1.4 Hadamard Matrices and Related Problems


More than a hundred years ago, in 1893, the French mathematician Jacques
Hadamard (see Fig. 1.19) constructed orthogonal matrices of orders 12 and 20
with entries ±1. In Ref. 30, it was shown that for any real matrix $B = (b_{i,j})_{i,j=1}^n$
of order n with $-1 \le b_{i,j} \le +1$, the following inequality holds:

$$(\det B)^2 \le \prod_{i=1}^{n} \sum_{j=1}^{n} b_{i,j}^2, \qquad (1.91)$$

where equality is achieved when B is an orthogonal matrix. In the case $b_{i,j} = \pm 1$,
the determinant attains its maximum absolute value, and B is a Hadamard
matrix; equality in this bound is attained for a real ±1 matrix M if and only if M
is a Hadamard matrix. A square matrix $H_n$ of order n with elements −1 and +1
having a maximal determinant is known as a Hadamard matrix.72 The geometrical
interpretation of the maximum determinant problem is to look for n vectors from
the origin contained within the cube $-1 \le b_{i,j} \le +1$, $i, j = 1, 2, \ldots, n$, forming
a rectangular parallelepiped of maximum volume.

Definition: A square matrix $H_n$ of order n with elements −1 and +1 is called a
Hadamard matrix if the following equation holds:

$$H_n H_n^T = H_n^T H_n = n I_n, \qquad (1.92)$$

where $H_n^T$ is the transpose of $H_n$, and $I_n$ is the identity matrix of order n.

Equivalently, a Hadamard matrix is a square matrix with elements −1 and +1 in
which any two distinct rows agree in exactly n/2 positions (and thus disagree in
exactly n/2 positions).
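The defining property is straightforward to test numerically. The following sketch
(ours, not from the book) checks Eq. (1.92) for Sylvester matrices; the function
name is an assumption.

import numpy as np

def is_hadamard(H):
    """Check Eq. (1.92): entries are ±1 and H H^T = n I_n."""
    n = H.shape[0]
    return (np.all(np.abs(H) == 1)
            and np.array_equal(H @ H.T, n * np.eye(n, dtype=int)))

# Sylvester matrices of order 2^k are Hadamard:
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)
assert is_hadamard(H4) and is_hadamard(np.kron(H4, H2))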

Figure 1.19 Jacques Salomon Hadamard: 1865–1963.35


We have seen that the origin of the Hadamard matrix goes back to 1867, when
Sylvester constructed Hadamard matrices of order $2^n$. It is obvious
that the Sylvester, Walsh–Hadamard, Cal–Sal, and Walsh matrices are classical
examples of equivalent Hadamard matrices. We now provide an example of a
Hadamard matrix of order 12, which cannot be constructed from the above-
defined classical Hadamard matrices:

$$
H_{12} =
\begin{pmatrix}
+ & + & + & + & + & - & - & - & + & - & - & -\\
- & + & - & + & + & + & + & - & + & + & + & -\\
- & + & + & - & + & - & + & + & + & - & + & +\\
- & - & + & + & + & + & - & + & + & + & - & +\\
+ & - & - & - & + & + & + & + & + & - & - & -\\
+ & + & + & - & - & + & - & + & + & + & + & -\\
+ & - & + & + & - & + & + & - & + & - & + & +\\
+ & + & - & + & - & - & + & + & + & + & - & +\\
+ & - & - & - & + & - & - & - & + & + & + & +\\
+ & + & + & - & + & + & + & - & - & + & - & +\\
+ & - & + & + & + & - & + & + & - & + & + & -\\
+ & + & - & + & + & + & - & + & - & - & + & +
\end{pmatrix}.
\qquad (1.93)
$$

The expression in Eq. (1.92) is equivalent to the statement that any two distinct
rows (columns) in a matrix Hn are orthogonal. It is clear that rearrangement
of rows (columns) in Hn and/or their multiplication by −1 will preserve this
property.

Definition of Equivalent Hadamard Matrices: Two matrices H1 and H2 are called


equivalent matrices if H2 = PH1 Q, where P and Q are permutation matrices.
These matrices have exactly one nonzero element in each row and column, and
this nonzero element is equal to +1 or −1.
It is not difficult to show that for a given Hadamard matrix it is always possible
to find an equivalent matrix having only +1 in the first row and column. Such a
matrix is called a normalized Hadamard matrix. On the other hand, it has been a
considerable challenge to classify the Hadamard matrices by equivalence.
Note that there are five known equivalence classes of Hadamard matrices of
order 16,23 three of order 20,24 60 of order 24,25,26 486 of order 28,27 and 109 of
order 36.28 The lower bounds for the numbers of equivalence classes are at least
500 for n = 44 and at least 638 for n = 52. Below, we present two nonequivalent
Hadamard matrices of order 16:


$$
\begin{pmatrix}
+ & + & + & + & + & + & + & + & + & + & + & + & + & + & + & +\\
+ & - & + & - & + & - & + & - & + & - & + & - & + & - & + & -\\
+ & + & - & - & + & + & - & - & + & + & - & - & + & + & - & -\\
+ & - & - & + & + & - & - & + & + & - & - & + & + & - & - & +\\
+ & + & + & + & - & - & - & - & + & + & + & + & - & - & - & -\\
+ & - & + & - & - & + & - & + & + & - & + & - & - & + & - & +\\
+ & + & - & - & - & - & + & + & + & + & - & - & - & - & + & +\\
+ & - & - & + & - & + & + & - & + & - & - & + & - & + & + & -\\
+ & + & + & + & + & + & + & + & - & - & - & - & - & - & - & -\\
+ & - & + & - & + & - & + & - & - & + & - & + & - & + & - & +\\
+ & + & - & - & + & + & - & - & - & - & + & + & - & - & + & +\\
+ & - & - & + & + & - & - & + & - & + & + & - & - & + & + & -\\
+ & + & + & + & - & - & - & - & - & - & - & - & + & + & + & +\\
+ & - & + & - & - & + & - & + & - & + & - & + & + & - & + & -\\
+ & + & - & - & - & - & + & + & - & - & + & + & + & + & - & -\\
+ & - & - & + & - & + & + & - & - & + & + & - & + & - & - & +
\end{pmatrix}
$$

and

$$
\begin{pmatrix}
+ & + & + & + & + & + & + & + & + & + & + & + & + & + & + & +\\
+ & + & + & + & + & + & + & + & - & - & - & - & - & - & - & -\\
+ & + & + & + & - & - & - & - & + & + & + & + & - & - & - & -\\
+ & + & + & + & - & - & - & - & - & - & - & - & + & + & + & +\\
+ & + & - & - & + & + & - & - & + & + & - & - & + & + & - & -\\
+ & + & - & - & + & + & - & - & - & - & + & + & - & - & + & +\\
+ & + & - & - & - & - & + & + & + & + & - & - & - & - & + & +\\
+ & + & - & - & - & - & + & + & - & - & + & + & + & + & - & -\\
+ & - & + & - & + & - & + & - & + & - & + & - & + & - & + & -\\
+ & - & + & - & + & - & + & - & - & + & - & + & - & + & - & +\\
+ & - & + & - & - & + & - & + & + & - & + & - & - & + & - & +\\
+ & - & + & - & - & + & - & + & - & + & - & + & + & - & + & -\\
+ & - & - & + & + & - & - & + & + & - & - & + & + & - & - & +\\
+ & - & - & + & + & - & - & + & - & + & + & - & - & + & + & -\\
+ & - & - & + & - & + & + & - & + & - & - & + & - & + & + & -\\
+ & - & - & + & - & + & + & - & - & + & + & - & + & - & - & +
\end{pmatrix}.
\qquad (1.94)
$$

A list of nonequivalent Hadamard matrices can be found, for example, in
Ref. 29. In particular, it is known that there is only one equivalence class of
Hadamard matrices for each of the orders n = 4, 8, and 12.
Let us prove that if $H_n$ is a normalized Hadamard matrix of order n, n ≥ 4, then
n = 4t, where t is a positive integer. Three rows of this matrix can be represented
as

$$
\begin{matrix}
+ + \cdots + & + + \cdots + & + + \cdots + & + + \cdots +\\
+ + \cdots + & + + \cdots + & - - \cdots - & - - \cdots -\\
+ + \cdots + & - - \cdots - & + + \cdots + & - - \cdots -
\end{matrix}.
$$


Denoting the numbers of each type of column by $t_1$, $t_2$, $t_3$, and $t_4$, respectively,
the orthogonality condition gives

$$
\begin{aligned}
t_1 + t_2 + t_3 + t_4 &= n,\\
t_1 + t_2 - t_3 - t_4 &= 0,\\
t_1 - t_2 + t_3 - t_4 &= 0,\\
t_1 - t_2 - t_3 + t_4 &= 0.
\end{aligned}
\qquad (1.95)
$$

The solution gives $t_1 = t_2 = t_3 = t_4 = n/4$. This implies that if $H_n$ is a Hadamard
matrix of order n, then n = 4t, i.e., n ≡ 0 (mod 4). The converse problem is stated
as follows.36

The Hadamard–Paley conjecture: Construct a Hadamard matrix of order n for
every natural number n with n ≡ 0 (mod 4). Despite the efforts of many
mathematicians, this conjecture remains unproved, even though it is widely
believed to be true. It is one of the longest-standing open problems in mathematics
and computer science.
There are many approaches to the construction of Hadamard matrices.31,33 The
simplest one is the direct product construction: the Kronecker product of two
Hadamard matrices of order m and n is the Hadamard matrix of order mn.
The survey by Seberry and Yamada33 indicates the progress that was made
during the last 100 years. Here, we give a brief survey on the construction of
Hadamard matrices, presented by Seberry.35 At present, the smallest unknown
orders are n = 4 · 167 and n = 4 · 179. Currently, several basic infinite classes
of Hadamard matrix construction are known, as follows:
• “Plug-in template” methods:31,33,39,47 The basic idea is based on the constriction
of a class of “special-component” matrices that can be plugged into arrays
(templates) of variables to generate Hadamard matrices. This is an extremely
productive method of construction. Several approaches for the construction of
special components and templates have been developed. In 1944, Williamson32
first constructed “suitable matrices” (Williamson matrices) that were used to
replace the variables in a formally orthogonal matrix. Generally, the templates
into which suitable matrices are plugged are orthogonal designs. They have
formally orthogonal rows (and columns), but may have variations such as
Goethals–Seidel arrays, Wallis–Whiteman arrays, Spence arrays, generalized
quaternion arrays, and Agayan (Agaian) families.31
• Paley's methods: Paley's "direct" construction, presented in 1933,36 gives
Hadamard matrices of order $\prod_{i,j}(p_i + 1)(q_j + 1)$, where $p_i \equiv 3 \pmod{4}$ and
$q_j \equiv 1 \pmod{4}$ are prime powers. Paley's theorem states that Hadamard matrices can
be constructed for all positive orders divisible by 4 except those in the following
sequence: multiples of 4 not equal to a power of 2 multiplied by q + 1, for some
power q of an odd prime.
• Multiplicative methods:31,47 Hadamard’s original construction of Hadamard
matrices seems to be a “multiplication theorem” because it uses the fact that the


Kronecker product of Hadamard matrices of orders $2^a m$ and $2^b n$ is a Hadamard
matrix of order $2^{a+b}mn$.47 In Ref. 31, Agayan (Agaian) shows how to multiply
these Hadamard matrices to obtain a Hadamard matrix of order $2^{a+b-1}mn$
(a result that lowers the curve in our graph except for q, a prime). This result
has been extended by Craigen et al.,39 who have shown that this astonishing
ability to reduce the powers of 2 in multiplication could also be extended to the
multiplication of four matrices at a time.39
• Sequences approach:31,33,45,77 Several Hadamard matrix construction methods
have been developed based on Turyn, Golay, and m sequences, and also on δ and
generalized δ codes. For instance, it has been shown that there exist Hadamard
matrices of orders $4\cdot 3^m$, $4\cdot 13^m$, $4\cdot 17^m$, $4\cdot 29^m$, $4\cdot 37^m$, $4\cdot 53^m$, and $4q^m$, where
$q = 2^a 10^b 26^c + 1$, and a, b, and c are nonnegative integers.
• Other methods: Kharaghani’s methods, or regular s-sets of regular matrices that
generate new matrices. In 1976, Wallis,37 in her classic paper, “On the existence
of Hadamard matrices,” showed that for any given odd number q, there exists
a $t \le [2\log_2(q-3)]$ such that there is a Hadamard matrix of order $2^t q$ (and
hence of all orders $2^s q$, $s \ge t$). That was the first time a general bound had been
given for Hadamard matrices of all orders. This result has been improved by
Craigen and Kharaghani.38,39 In fact, as it was shown by Seberry and Yamada,33
Hadamard matrices are known to exist, of order 4q, for most q < 3000 (we have
results up to 40,000 that are similar). In many other cases, there exist Hadamard
matrices of order 23 q or 24 q. A quick look shows that the most difficult cases
are for q = 3 (mod 4).
• Computers approach: Seberry and her students have made extensive use of
computers to construct Hadamard matrices of various types.78

Problems for Exploration


(1) Show that if H1 and H2 are Hadamard matrices of order n and m, then there
exist Hadamard matrices of order mn/4.
(2) For any natural number n, how many equivalent classes of Hadamard matrices
of order n exist?
(3) For any natural number n, how many equivalent classes of specialized (for
example Williamson) Hadamard matrices of order n exist?

1.5 Complex Hadamard Matrices


In this section, we present some generalizations of the Sylvester matrix. We define
and present three recursive algorithms to construct the complex HTs, such as the
complex Sylvester–Hadamard, complex Paley, and complex Walsh transforms.

Definition: A matrix C of order n with elements $\{\pm 1, \pm j\}$, $j = \sqrt{-1}$, that satisfies
$CC^* = nI_n$ is called a complex Hadamard matrix, where $C^*$ is the conjugate
transpose of C.


Hadamard’s inequality applies to complex matrices with elements in the unit


disk and thus also to matrices with entries ±1, ± j, and pairwise orthogonal rows
(and columns). Complex Hadamard matrices were first studied by Turyn.41 In
general, a complex Hadamard matrix may be obtained from another by one or
more of the following procedures: (1) multiply a row or a column by an element
of unit modulus, (2) replace a row or a column by its (elementwise) conjugate, and
(3) permute rows or columns.
It has been shown that if H is a complex Hadamard matrix of order N, then
N = 2t. The problem of constructing complex Hadamard matrices of even orders
is still open.
Complex Hadamard Matrix Problem: Show that a complex Hadamard matrix of
order n exists for every even number n.

Problems for exploration:

(1) Show that if H1 and H2 are complex Hadamard matrices of order n and m, then
there exists an Hadamard matrix of order mn/2.
(2) For any natural number n, how many equivalent classes of complex Hadamard
matrices of order n exist?

Properties of complex Hadamard matrices:

• Any two distinct columns or rows of a complex Hadamard matrix are pairwise
orthogonal.
• A complex Hadamard matrix can be reduced to a normal form (i.e., the first
row and the first column contain only elements equal to +1) via elementary
operations.
• The sum of the elements in every row and column, except the first ones, of a
normalized complex Hadamard matrix is zero.

1.5.1 Complex Sylvester–Hadamard transform


First, we define a parametric Sylvester (PS) matrix recursively by

$$
[PS]_{2^k}(a, b) =
\begin{pmatrix}
[PS]_{2^{k-1}}(a, b) & [PS]_{2^{k-1}}(a, b)\\
[PS]_{2^{k-1}}(a, b) & -[PS]_{2^{k-1}}(a, b)
\end{pmatrix},
\qquad (1.96)
$$

where

$$[PS]_2(a, b) = \begin{pmatrix} 1 & a\\ b & -1 \end{pmatrix}.$$

Note that if a = b = 1, the parametric $[PS]_{2^k}(a, b)$ Sylvester matrix is the
classical Sylvester matrix. Also, if $a = j$ and $b = -j$, $j = \sqrt{-1}$, the parametric
$[PS]_{2^k}(j, -j)$ matrix becomes a so-called complex Sylvester–Hadamard matrix.


Complex Sylvester–Hadamard matrices of orders 2, 4, and 8 are given as
follows:

$$[PS]_2 = \begin{pmatrix} 1 & 1\\ j & -j \end{pmatrix}, \qquad (1.97a)$$

$$
[PS]_4 = \begin{pmatrix} [PS]_2 & [PS]_2\\ [PS]_2 & -[PS]_2 \end{pmatrix} =
\begin{pmatrix}
1 & 1 & 1 & 1\\
j & -j & j & -j\\
1 & 1 & -1 & -1\\
j & -j & -j & j
\end{pmatrix},
\qquad (1.97b)
$$

$$
[PS]_8 = \begin{pmatrix} [PS]_4 & [PS]_4\\ [PS]_4 & -[PS]_4 \end{pmatrix} =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
j & -j & j & -j & j & -j & j & -j\\
1 & 1 & -1 & -1 & 1 & 1 & -1 & -1\\
j & -j & -j & j & j & -j & -j & j\\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\
j & -j & j & -j & -j & j & -j & j\\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1\\
j & -j & -j & j & -j & j & j & -j
\end{pmatrix}.
\qquad (1.97c)
$$

In analogy with the Sylvester matrix, $[PS]_{2^k}(j, -j)$ can be represented as a
Kronecker product of k Sylvester matrices of order 2:

$$[PS]_{2^k}(j, -j) = [PS]_2(j, -j) \otimes [PS]_2(j, -j) \otimes \cdots \otimes [PS]_2(j, -j). \qquad (1.98)$$

The (i, k)'th element of the complex Sylvester–Hadamard matrix $[PS]_{2^n}$ may be
defined by

$$h(i, k) = (-1)^{\sum_{t=0}^{n-1} \left[ i_t + (i_t \oplus k_t)/2 \right]}, \qquad (1.99)$$

where $(i_{n-1}, i_{n-2}, \ldots, i_0)$ and $(k_{n-1}, k_{n-2}, \ldots, k_0)$ are the binary representations of
i and k, respectively.40
For instance, from Eq. (1.98), for n = 2, we obtain

$$
[PS]_4 = \begin{pmatrix} 1 & j\\ -j & -1 \end{pmatrix} \otimes \begin{pmatrix} 1 & j\\ -j & -1 \end{pmatrix} =
\begin{pmatrix}
1 & j & j & -1\\
-j & -1 & 1 & -j\\
-j & 1 & -1 & -j\\
-1 & j & j & 1
\end{pmatrix}.
\qquad (1.100)
$$

The element $h_{1,3}$ of $[PS]_4$, in the second row [i = (01)] and the fourth column
[k = (11)], is $h_{1,3} = (-1)^{1 + (1\oplus 1)/2 + 0 + (0\oplus 1)/2} = (-1)^{1 + 1/2} = -j$.
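Equation (1.99) can be checked against the Kronecker form of Eq. (1.98); since the
exponent of (−1) is a half-integer, it is convenient to write $(-1)^x = j^{2x}$. A small
sketch (ours, with the function name ps_element as an assumption):

import numpy as np

def ps_element(i, k, n):
    """Entry h(i, k) of [PS]_{2^n}(j, -j) from Eq. (1.99), with the
    half-integer exponent handled as (-1)^x = j^(2x)."""
    e = 0
    for t in range(n):
        it, kt = (i >> t) & 1, (k >> t) & 1
        e += 2 * it + (it ^ kt)        # 2 * [i_t + (i_t XOR k_t)/2]
    return 1j ** (e % 4)

# Cross-check against the Kronecker-product form of Eq. (1.98):
PS2 = np.array([[1, 1j], [-1j, -1]])
PS4 = np.kron(PS2, PS2)
calc = np.array([[ps_element(i, k, 2) for k in range(4)] for i in range(4)])
assert np.allclose(calc, PS4)
assert ps_element(1, 3, 2) == -1j      # the worked h_{1,3} example above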


The following matrix is also a complex Hadamard matrix:

$$
F_4 =
\begin{pmatrix}
1 & 1 & 1 & 1\\
1 & -1 & 1 & -1\\
1 & -j & -1 & j\\
1 & j & -1 & -j
\end{pmatrix}.
\qquad (1.101)
$$

1.5.2 Complex WHT


Consider the recurrence

$$
[WH]_m^c =
\begin{pmatrix}
[WH]_{m-1}^c & [WH]_1^c \otimes [WH]_{m-2}^c\\
[WH]_{m-1}^c & -[WH]_1^c \otimes [WH]_{m-2}^c
\end{pmatrix},
\qquad m \ge 3,
\qquad (1.102)
$$

where

$$
[WH]_1^c = \begin{pmatrix} 1 & 1\\ -j & j \end{pmatrix},
\qquad
[WH]_2^c = \begin{pmatrix}
1 & 1 & 1 & 1\\
1 & -1 & -j & j\\
1 & 1 & -1 & -1\\
1 & -1 & j & -j
\end{pmatrix}.
$$

This recurrent relation gives a complex Hadamard matrix of order 2m . (It is also
called a complex Walsh–Hadamard matrix.)
Note that if H and Q1 = (A1 , A2 ) and Q2 = (B1 , B2 , B3 , B4 ) are complex
Hadamard matrices of orders m and n, respectively, then the matrices C1 and C2
are complex Hadamard matrices of order mn:

C1 = [H ⊗ A1 , (HR) ⊗ A2 ] ,
C2 = [H ⊗ B1 , (HR) ⊗ B2 , H ⊗ B3 , (HR) ⊗ B4 ] , (1.103)

where R is the back-diagonal identity matrix.


Example: Let

$$
H = [WH]_1 = \begin{pmatrix} 1 & 1\\ j & -j \end{pmatrix}
\quad\text{and}\quad
Q_2 = [WH]_2 = \begin{pmatrix}
1 & 1 & 1 & 1\\
1 & -1 & -j & j\\
1 & 1 & -1 & -1\\
1 & -1 & j & -j
\end{pmatrix}.
\qquad (1.104)
$$

Then, the following matrix is a complex Walsh–Hadamard matrix of order 8:

$$
C_2 = \left[
\begin{pmatrix} 1 & 1\\ j & -j \end{pmatrix} \otimes
\begin{pmatrix} 1\\ 1\\ 1\\ 1 \end{pmatrix},\;
\begin{pmatrix} 1 & 1\\ -j & j \end{pmatrix} \otimes
\begin{pmatrix} 1\\ -1\\ 1\\ -1 \end{pmatrix},\;
\begin{pmatrix} 1 & 1\\ j & -j \end{pmatrix} \otimes
\begin{pmatrix} 1\\ -j\\ -1\\ j \end{pmatrix},\;
\begin{pmatrix} 1 & 1\\ -j & j \end{pmatrix} \otimes
\begin{pmatrix} 1\\ j\\ -1\\ -j \end{pmatrix}
\right],
\qquad (1.105)
$$


or

$$
[WH]_3 =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & -1 & -1 & -j & -j & j & j\\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1\\
1 & 1 & -1 & -1 & j & j & -j & -j\\
j & -j & -j & j & j & -j & -j & j\\
j & -j & j & -j & 1 & -1 & 1 & -1\\
j & -j & -j & j & -j & j & j & -j\\
j & -j & j & -j & -1 & 1 & -1 & 1
\end{pmatrix}.
\qquad (1.106)
$$

Figure 1.20 illustrates parts of discrete complex Hadamard functions.

1.5.3 Complex Paley–Hadamard transform


The following matrix is a complex Hadamard matrix of order 8 and is called a
Paley complex Hadamard matrix:

$$
W_3^p =
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & -j & -j & -1 & -1 & j & j\\
1 & -j & -1 & j & 1 & -j & -1 & j\\
1 & 1 & j & j & -1 & -1 & -j & -j\\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\\
1 & -1 & -j & j & -1 & 1 & j & -j\\
1 & j & -1 & -j & 1 & j & -1 & -j\\
1 & -1 & j & -j & -1 & 1 & -j & j
\end{pmatrix}.
\qquad (1.107)
$$

Figure 1.21 illustrates parts of continuous complex Hadamard functions.

1.5.4 Complex Walsh transform


The complex Walsh transform matrix can be generated as

$$
W_m =
\begin{pmatrix}
(1\ \ 1) \otimes W_{m-1}\\
(1\ \ -1) \otimes H(m-1)\,\mathrm{diag}\{I_{m-2},\, jI_{m-2}\}
\end{pmatrix},
\qquad (1.108)
$$

where $m > 1$, $H(1) = \begin{pmatrix} + & +\\ + & - \end{pmatrix}$, and $\mathrm{diag}\{A, B\} = \begin{pmatrix} A & 0\\ 0 & B \end{pmatrix}$.
This recurrence yields a complex Hadamard matrix of order $2^m$ (also
called the complex Walsh matrix). For m = 3, we obtain

$$
W_3 =
\begin{pmatrix}
(1\ \ 1) \otimes W_2\\
(1\ \ -1) \otimes H(2)\,\mathrm{diag}\{I_1,\, jI_1\}
\end{pmatrix},
\qquad (1.109)
$$


Figure 1.20 The first eight real (left) and imaginary (right) parts of discrete complex
Hadamard functions corresponding to the matrix [WH]3 .

Figure 1.21 The first eight real (left) and imaginary (right) parts of continuous complex
Hadamard functions corresponding to the Paley complex Hadamard matrix W3p .


Figure 1.22 The first eight real (left) and imaginary (right) parts of continuous complex
Walsh–Hadamard functions corresponding to the complex Walsh matrix W3 .

or

$$
W_3 =
\begin{pmatrix}
(1\ \ 1) \otimes
\begin{pmatrix}
1 & 1 & 1 & 1\\
1 & -1 & 1 & -1\\
1 & -j & -1 & j\\
1 & j & -1 & -j
\end{pmatrix}\\[2mm]
(1\ \ -1) \otimes
\begin{pmatrix}
1 & 1 & 1 & 1\\
1 & -1 & 1 & -1\\
1 & 1 & -1 & -1\\
1 & -1 & -1 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & j & 0\\
0 & 0 & 0 & j
\end{pmatrix}
\end{pmatrix}
=
\begin{pmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & -1 & 1 & -1 & 1 & -1 & 1 & -1\\
1 & -j & -1 & j & 1 & -j & -1 & j\\
1 & j & -1 & -j & 1 & j & -1 & -j\\
1 & 1 & j & j & -1 & -1 & -j & -j\\
1 & -1 & j & -j & -1 & 1 & -j & j\\
1 & 1 & -j & -j & -1 & -1 & j & j\\
1 & -1 & -j & j & -1 & 1 & j & -j
\end{pmatrix}.
\qquad (1.110)
$$

Figure 1.22 illustrates parts of continuous complex Walsh–Hadamard functions.


References

1. K. J. Horadam, Hadamard Matrices and Their Applications, illustrated ed.,


Princeton University Press, Princeton (2006).
2. W. D. Wallis, A. P. Street, and J. S. Wallis, Combinatorics: Room Squares,
Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics, 292,
Springer, New York (1972).
3. Y. X. Yang, Theory and Applications of Higher-Dimensional Hadamard
Matrices, Kluwer Academic and Science Press, Beijing/New York (2001).
4. R. Damaschini, “Binary encoding image based on original Hadamard
matrices,” Opt. Commun. 90, 218–220 (1992).
5. D. C. Tilotta, R. M. Hammaker, and W. G. Fateley, “A visible near-infrared
Hadamard transform in spectrometer based on a liquid crystal spatial light
modulator array: a new approach in spectrometry,” Appl. Spectrosc. 41,
727–734 (1987).
6. M. Harwit and N. J. A. Sloane, Hadamard Transform Optics, Academic Press,
New York (1979).
7. W. D. Wallis, Ed., Designs 2002, Further Computational and Constructive
Design Theory, 2nd ed., Kluwer Academic, Dordrecht (2003).
8. J. A. Decker Jr. and M. Harwit, “Experimental operation of a Hadamard
spectrometer,” Appl. Opt. 8, 2552–2554 (1969).
9. E. Nelson and M. Fredman, “Hadamard spectroscopy,” J. Opt. Soc. Am. 60,
1664–1669 (1970).
10. Ch. Koukouvinos and J. Seberry, “Hadamard matrices orthogonal designs and
construction algorithms,” available at Research Online, http://ro.uow.edu.au/
infopapers/308.
11. K. Beauchamp, Applications of Walsh and Related Functions, Academic Press,
New York (1984).
12. W. D. Wallis, Introduction to Combinatorial Designs, 2nd ed., Chapman &
Hall/CRC, Boca Raton (2007).
13. R. Gareth, The Remote Sensing Data Book, Cambridge University Press,
Cambridge, England (1999).
14. S. Agaian, J. Astola, and K. Egiazarian, Binary Polynomial Transforms and
Nonlinear Digital Filters, Marcel Dekker, New York (1995).
15. M. Nakahara and T. Ohmi, Quantum Computing, CRC Press, Boca Raton
(2008).
16. M. Nakahara, R. Rahimi and A. SaiToh, Eds., Mathematical Aspects of
Quantum Computing, Kinki University Series on Quantum Computing, Japan
(2008).


17. B. Rezaul, T. Daniel, H. Lai, and M. Palaniswami, Computational Intelligence


in Biomedical Engineering, CRC Press, Boca Raton (2007).
18. Y. J. Kim and U. Platt, Advanced Environmental Monitoring, Springer, New
York (2008).
19. M. C. Hemmer, Expert Systems in Chemistry Research, CRC Press, Boca
Raton (2007).
20. J. J. Sylvester, “Thoughts on inverse orthogonal matrices, simultaneous sign
successions and tesselated pavements in two or more colors, with applications
to Newton’s rule, ornamental til-work, and the theory of numbers,” Phil. Mag.
34, 461–475 (1867).
21. J. L. Walsh, “A closed set of normal orthogonal functions,” Am. J. Math. 45,
5–24 (1923).
22. H. F. Harmuth, Sequency Theory: Functions and Applications, Academic
Press, New York (1977).
23. M. Hall Jr., “Hadamard matrices of order 16,” Res. Summary, No. 36-10,
pp. 21–26 (1961).
24. M. Hall Jr., “Hadamard matrices of order 20,” Res. Summary, No. 36-12,
pp. 27–35 (1961).
25. N. Ito, J. S. Leon, and J. Q. Longyear, “Classification of 3-(24,12,5) designs
and 24-dimensional Hadamard matrices,” J. Comb. Theory, Ser. A 31, 66–93
(1981).
26. H. Kimura, “New Hadamard matrix of order 24,” Graphs Combin. 5, 235–242
(1989).
27. H. Kimura, “Classification of Hadamard matrices of order 28 with Hall sets,”
Discrete Math. 128 (1–3), 257–268 (1994).
28. J. Cooper, J. Milas and W. D. Wallis, “Hadamard Equivalence,” in
Combinatorial Mathematics, Springer, Berlin/Heidelberg, 686, 126–135
(1978).
29. N. J. A. Sloane, A Library of Hadamard Matrices. www.research.att.com/
∼njas/hadamard.

30. J. Hadamard, “Resolution d’une question relative aux determinants,” Bull. Sci.
Math. 17, 240–246 (1893).
31. S. S. Agaian, Hadamard Matrices and their Applications, Lecture Notes in
Mathematics, 1168, Springer, New York (1985).
32. J. Williamson, “Hadamard determinant theorem and sum of four squares,”
Duke Math. J. 11, 65–81 (1944).
33. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Surveys in Contemporary Design Theory, John Wiley & Sons,
Hoboken, NJ (1992).


34. J. Seberry and A. L. Whiteman, “A new construction for conference matrices,”


Ars. Combinatoria 16, 119–127 (1983).
35. http://www.cs.uow.edu.au/people/jennie/lifework.html.
36. R. E. A. G. Paley, “On orthogonal matrices,” J. Math. Phys. 12 (3), 311–320
(1933).
37. J. S. Wallis, “On the existence of Hadamard matrices,” J. Combin. Theory, Ser.
A 21, 188–195 (1976).
38. H. Kharaghani, “A construction for block circulant orthogonal designs,” J.
Combin. Designs 4 (6), 389–395 (1998).
39. R. Craigen, J. Seberry, and X. Zhang, “Product of four Hadamard matrices,”
J. Comb. Theory, Ser. A 59, 318–320 (1992).
40. S. Rahardja and B. J. Falkowski, “Digital signal processing with complex
Hadamard transform,” in Proc. of Fourth ICSP-98, pp. 533–536 (1998).
41. R. J. Turyn, “Complex Hadamard matrices,” in Combinatorial Structures and
Applications, pp. 435–437, Gordon and Breach, London (1970).
42. A. Haar, “Zur Theorie der Orthogonalen Funktionensysteme,” Math. Ann. 69,
331–371 (1910).
43. K. R. Rao, M. Narasimhan, and K. Reveluni, “A family of discrete Haar
transforms,” Comput. Elect. Eng. 2, 367–368 (1975).
44. K. Rao, K. Reveluni, M. Narasimhan, and N. Ahmed, “Complex Haar
transform,” IEEE Trans. Acoust. Speech Signal Process. 2 (1), 102–104 (1976).
45. H. G. Sarukhanyan, “Hadamard matrices: construction methods and
applications,” Proc. 1st Int. Workshop on Transforms and Filter Banks, TICSP
Ser. 1, Tampere University, Finland, pp. 95–130 (1998).
46. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal
Processing, Springer-Verlag, New York (1975).
47. S. S. Agaian and H. G. Sarukhanyan, “Recurrent formulae of the construction
Williamson type matrices,” Math. Notes 30 (4), 603–617 (1981).
48. S. S. Agaian, “Advances and problems of fast orthogonal transforms for signal-
images processing applications (Parts 1 and 2),” in Ser. Pattern Recognition,
Classification, Forecasting Yearbook, Russian Academy of Sciences, Nauka,
Moscow, pp. 146–215 (1990).
49. S. Agaian, J. Astola, and K. Egiazarian, Polynomial Transforms and
Applications (Combinatorics, Digital Logic, Nonlinear Signal Processing),
Tampere University, Finland (1993).
50. A. Haar, “Zur Theorie der Orthogonalen Funktionensysteme,” Math. Ann. 69,
331–371 (1910).
51. B. S. Nagy, Alfréd Haar: Gesammelte Arbeiten, Budapest, Hungary (1959).


52. A. B. Németh, “On Alfred Haar’s original proof of his theorem on best
approximation,” in Proc. A. Haar Memorial Conf. I, II, Amsterdam, New York,
pp. 651–659 (1987).
53. B. S. Nagy, “Alfred Haar (1885–1933),” Resultate Math. 8 (2), 194–196
(1985).
54. K. J. L. Ray, “VLSI computing architectures for Haar transform,” Electron.
Lett. 26 (23), 1962–1963 (1990).
55. T. J. Davis, “Fast decomposition of digital curves into polygons using the Haar
transform,” IEEE Trans. Pattern Anal. Mach. Intell. 218, 786–790 (1999).
56. B. J. Falkowski and S. Rahardja, “Sign Haar Transform,” in Proc. of IEEE Int.
Symp. Circuits Syst., ISCAS ’94 2, 161–164 (1994).
57. K.-W. Cheung, C.-H. Cheung and L.-M. Po, “A novel multi wavelet-based
integer transform for lossless image coding,” in Proc. Int. Conf. Image
Processing, ICIP 99 1, 444–447, City Univ. of Hong Kong, Kobe (1999).
58. B. J. Falkowski and S. Rahardja, “Properties of Boolean functions in spectral
domain of sign Haar transform,” Inf. Commun. Signal Process 1, 64–68 (1997).
59. B. J. Falkowski and C.-H. Chang, “Properties and applications of paired Haar
transform,” Inf. Commun. Signal Process. 1997, ICICS 1, 48–51 (1997).
60. S. Yu and R. Liu, “A new edge detection algorithm: fast and localizing to a
single pixel,” in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS ’93
1, 539–542 (1993).
61. T. Lonnestad, “A new set of texture features based on the Haar transform,”
Proc. 11th IAPR Int. Conf. on Pattern Recognition, Image, Speech and Signal
Analysis, (The Hague, 30 Aug.–3 Sept., 1992), 3, 676–679 (1992).
62. G. M. Megson, “Systolic arrays for the Haar transform,”in IEE Proc. of
Computers and Digital Techniques, vol. 145, pp. 403–410 (1998).
63. G. A. Ruiz and J. A. Michell, “Memory efficient programmable processor
chip for inverse Haar transform,” IEEE Trans. Signal Process 46 (1), 263–268
(1998).
64. M. A. Thornton, “Modified Haar transform calculation using digital
circuit output probabilities,” Proc. of IEEE Int. Conf. on Information,
Communications and Signal Processing 1, 52–58 (1997).
65. J. P. Hansen and M. Sekine, “Decision diagram based techniques for the Haar
wavelet transform,” in Proc. of Int. Conf. on Information, Communications and
Signal Processing 1, pp. 59–63 (1997).
66. Y.-D. Wang and M. J. Paulik, “A discrete wavelet model for target recognition,”
in Proc. of IEEE 39th Midwest Symp. on Circuits and Systems 2, 835–838
(1996).
67. K. Egiazarian and J. Astola, “Generalized Fibonacci cubes and trees for DSP
applications,” in Proc. of IEEE Int. Symp. on Circuits and Systems, ISCAS ’96
2, 445–448 (1996).


68. L. M. Kaplan and J. C.-C. Kuo, “Signal modeling using increments of extended
self-similar processes,” in Proc. of IEEE Int. Conf. on Acoustics, Speech, and
Signal Processing, ICASSP-94 4, 125–128 (1994).
69. L. Prasad, “Multiresolutional Fault Tolerant Sensor Integration and Object
Recognition in Images,” Ph.D. dissertation, Louisiana State University (1995).
70. B. J. Falkowski and C. H. Chang, “Forward and inverse transformations
between Haar spectra and ordered binary decision diagrams of Boolean
functions,” IEEE Trans. Comput. 46 (11), 1272–1279 (1997).
71. G. Ruiz, J. A. Michell and A. Buron, “Fault detection and diagnosis for MOS
circuits from Haar and Walsh spectrum analysis: on the fault coverage of
Haar reduced analysis,” in Theory and Application of Spectral Techniques,
C. Moraga, Ed., Dortmund University Press, pp. 97–106 (1988).
72. J. Brenner and L. Cummings, “The Hadamard maximum determinant
problem,” Am. Math. Mon. 79, 626–630 (1972).
73. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12, 311–320
(1933).
74. W. K. Pratt, J. Kane, and H. C. Andrews, “Hadamard transform image
coding,” Proc. IEEE 57, 58–68 (1969).
75. R. K. Yarlagadda and J. E. Hershey, Hadamard Matrix Analysis and Synthesis,
Kluwer Academic Publishers, Boston (1996).
76. S. Agaian and A. Matevosian, “Haar transforms and automatic quality test of
printed circuit boards,” Acta Cybernet. 5 (3), 315–362 (1981).
77. S. Agaian and H. Sarukhanyan, “Generalized δ-codes and Hadamard
matrices,” Prob. Inf. Transmission 16 (3), 203–211 (1980).
78. S. Georgiou, C. Koukouvinos and J. Seberry, “Hadamard matrices, orthogonal
designs and construction algorithms,” available at Research Online, http://ro.
uow.edu.au/infopapers/308.
79. G. Ruiz, J. A. Michell and A. Buron, “Fault detection and diagnosis for MOS
circuits from Haar and Walsh spectrum analysis: on the fault coverage of
Haar reduced analysis,” in Theory and Application of Spectral Techniques,
C. Moraga, Ed., University Dortmund Press, pp. 97–106 (1988).
80. http://www.websters-online-dictionary.org/Gr/Gray+code.html.
81. F. Gray, “Pulse code communication,” U.S. Patent No. 2,632,058 (March 17
1953).



Chapter 2
Fast Classical Discrete
Orthogonal Transforms
The computation of unitary transforms is a complicated and time-consuming task.
However, it would not be possible to use orthogonal transforms in signal and
image processing applications without effective algorithms to calculate them. Note
that both complexity issues—efficient software and circuit implementations—are
the heart of the most applications. An important question in many applications
is how to achieve the highest computation efficiency of the discrete orthogonal
transforms (DOTs).1 The suitability of unitary transforms in each of the above
applications depends on the properties of their basis functions as well as on the
existence of fast algorithms, including parallel ones. A fast DOT is an efficient
algorithm for computing the DOT and its inverse with an essentially smaller
number of operations than direct matrix multiplication. The problem of computing
a transform has been extensively studied.2–45
Historically, the first efficient DFT algorithm, for lengths $2^M$, was described
by Gauss in 1805 and developed by Cooley and Tukey in 1965.45–64 Since
the introduction of the fast Fourier transform (FFT), Fourier analysis has
become one of the most frequently used tools in signal/image processing and
communication systems; other discrete transforms and different fast algorithms
for computing transforms have been introduced as well. In the past decade, fast
DOTs have been widely used in many areas such as data compression, pattern
recognition and image reconstruction, interpolation, linear filtering, spectral
analysis, watermarking, cryptography, and communication systems. The HTs, such
as the WHT and Walsh–Paley transform, are important members of the class of
DOTs.1 These matrices are known as nonsinusoidal orthogonal transform matrices
and have found applications in digital signal processing and communication
systems1–3,7–11,34,36,39,65 because they do not require any multiplication operations
in their computation. A survey of the literature of fast HTs (FHTs) and their
hardware implementations is found in Refs. 2, 4, 14–22, and 66–74. There are
many other practical problems where one needs to have an N-point FHT algorithm,
where N is an arbitrary 4t integer. We have seen that despite the efforts of several
mathematicians, the Hadamard conjecture remains unproved even though it is
widely believed to be true.


This chapter describes efficient (in terms of space and time) computational
procedures for a commonly used class of $2^n$-point HT and Haar transforms. There
are many distinct fast HT algorithms involving a wide range of mathematics. We
will focus mostly on a matrix approach. Section 2.1 describes a general concept
of matrix-based fast DOT algorithms. Section 2.2 presents the $2^n$-point WHT.
Section 2.3 presents the fast Walsh–Paley transform. Section 2.4 presents fast Cal-
Sal transforms. Sections 2.5 and 2.6 describe the complex HTs and the fast Haar
transform algorithm.

2.1 Matrix-Based Fast DOT Algorithms


Recall that the DOT of the sequence f(n) is given by

$$Y[k] = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} f[n]\,\phi_n[k], \qquad k = 0, 1, \ldots, N-1, \qquad (2.1)$$

where $\{\phi_n[k]\}$ is an orthogonal system. Or, in matrix form, $Y = (1/\sqrt{N}) H_N f$, and
Eq. (2.1) can be written as

$$Y[k] = f[0]\phi_0[k] + f[1]\phi_1[k] + \cdots + f[N-1]\phi_{N-1}[k], \qquad k = 0, 1, \ldots, N-1. \qquad (2.2)$$

It follows that the determination of each Y[k] requires N multiplications and N − 1
additions. Because we have to evaluate Y[k] for k = 0, 1, . . . , N − 1, the direct
determination of the DOT requires N(N − 1) operations, which means that the
number of multiplications and additions/subtractions is proportional to $N^2$, i.e.,
the complexity of the direct DOT is $O(N^2)$.
How can one reduce the computational complexity of an orthogonal transform?
The choice of a particular algorithm depends on a number of factors, namely,
complexity, memory/space, very large scale integration (VLSI) implementation,
and other considerations.
Complexity: It is obvious that any practical algorithm for the DOT depends
on the transform length N. There are many measures of the efficiency of an
implementation. We will use the linear combination of the number of arithmetic
operations [multiplications C × (N) and additions/subtractions C + (N)] needed to
compute the DOT as a measure of computational complexity,

$$C(N) = \mu_+ C^+(N) + \mu_\times C^\times(N), \qquad (2.3)$$

where $\mu_+$ and $\mu_\times$ are weight constants.


$C^+(N)$ is called the additive complexity, and $C^\times(N)$ is called the multiplicative
complexity. This measure is very important for VLSI implementation (the
implementation cost of the multiplication is much higher than the implementation
cost of the addition operations). This is one of the basic reasons these weights are
used.


The idea of a fast algorithm is to map the given computational problem into
several subproblems, which leads to a reduction of the order of complexity of the
problem:

Cost(problem) = Sum{cost(mapping)} + Sum{cost(subproblem)}.

Usually, the fast DOT algorithm is based on decomposition of the computation


of the DOT of a signal into successively smaller DOTs. The procedure, which
reduces the computational complexity of the orthogonal transform, is known as a
fast discrete unitary (orthogonal) transform algorithm or fast transform.
The main problem when calculating the DOT relates to construction of
the decomposition, namely, the transition to the short DOT with minimal
computational complexity. There are several algorithms for efficient evaluation of
the DOT.2,38–64 The efficiencies of these algorithms are related to the following
question: How close are they to the respective lower bound? Realizable lower
bounds (the knowledge of the lower bounds tell us that it is impossible to
develop an algorithm with better performance than the lower bound) are not so
easily obtained. Another point in the comparison of algorithms is the memory
requirement.

General Concept in the Design of Fast DOT Algorithms: A fast transform $T_N f$ may
be achieved by factoring the transform matrix $T_N$ into a product of $k$ sparse
matrices. Typically, $N = 2^n$, $k = \log_2 N = n$, and

$$T_{2^n} = F_n F_{n-1} \cdots F_2 F_1, \qquad (2.4)$$

where the $F_i$ are very sparse matrices, so that the complexity of multiplying by $F_i$ is
$O(N)$, $i = 1, 2, \ldots, n$.
An $N = 2^n$-point inverse transform matrix can be represented as

$$T_{2^n}^{-1} = T_{2^n}^{T} = (F_n F_{n-1} \cdots F_2 F_1)^{T} = F_1^{T} F_2^{T} \cdots F_{n-1}^{T} F_n^{T}. \qquad (2.5)$$

Thus, one can implement the transform T N f via the following consecutive
computations:

f → F1 f → F2 (F1 f ) → · · · → Fn [· · · F2 (F1 f )]. (2.6)

On the basis of the factorization of Eq. (2.4), the computational complexity is
reduced from $O(N^2)$ to $O(N \log N)$. Because $F_i$ contains only a few nonzero terms
per row, the transformation can be efficiently accomplished by operating on $f$ $n$
times. For Fourier, Hadamard, and slant transforms, $F_i$ contains only two nonzero
terms in each row. Thus, an $N$-point 1D transform with the decomposition of Eq. (2.4)
can be implemented in $O(N \log N)$ operations, which is far fewer than $N^2$ operations.


Figure 2.1 2D transform flow chart.

The general algorithm for the fast DOT is given as follows (a sketch in code follows the list):

• Input signal $f$ of length $N$.
• Precompute the constant $1/\sqrt{N}$ associated with the transform $T_N f$.
• For $i = 1, 2, \ldots, n$, compute $f_i = F_i f_{i-1}$, with $f_0 = f$.
• Multiply $f_n$ by $1/\sqrt{N}$.
• Output $T_N f = (1/\sqrt{N}) f_n$.
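The following NumPy sketch (not from the book; `fast_dot` is a hypothetical helper) implements this pipeline for any list of precomputed sparse factors satisfying Eq. (2.4); the example builds WHT factors of the form $I \otimes H_2 \otimes I$ (cf. Section 2.2):

```python
import numpy as np

def fast_dot(f, factors):
    """Apply a factored transform T = F_n ... F_2 F_1 to the signal f.

    `factors` holds the sparse matrices F_1, ..., F_n in the order in which
    they act on the signal; the result is scaled by 1/sqrt(N).
    """
    y = np.asarray(f, dtype=float)
    for F in factors:              # y <- F_i y, i = 1, ..., n
        y = F @ y
    return y / np.sqrt(len(y))

# Example: N = 8 WHT factors F_i = I_{2^{i-1}} (x) H_2 (x) I_{2^{n-i}}
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
n = 3
factors = [np.kron(np.eye(2**(i - 1)), np.kron(H2, np.eye(2**(n - i))))
           for i in range(1, n + 1)]
y = fast_dot(np.arange(8), factors)
```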

2D DOTs: The simplest and most common 2D DOT algorithm, known as the row–column
algorithm, consists of first performing 1D fast DOTs (by any of the 1D DOT
algorithms) on all the rows and then on all the columns, or vice versa. A 2D
transform can thus be performed in two steps, as follows:
Step 1. Compute the N-point 1D DOT on the columns of the data.
Step 2. Compute the N-point 1D DOT on the rows of the intermediate result.
This idea can be easily extended to the multidimensional case (see Fig. 2.1); a row–column sketch in code is given below.
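A minimal sketch of the row–column idea, assuming the `fast_dot` helper and WHT `factors` from the previous snippet:

```python
import numpy as np

def fast_dot_2d(image, factors):
    """2D DOT via the row-column algorithm.

    Step 1 transforms every column; step 2 transforms every row of the
    intermediate result. Works for any separable orthogonal transform.
    """
    cols = np.column_stack([fast_dot(col, factors) for col in image.T])
    return np.vstack([fast_dot(row, factors) for row in cols])

X = np.arange(64, dtype=float).reshape(8, 8)
Y = fast_dot_2d(X, factors)
```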

2.2 Fast Walsh–Hadamard Transform


The HT, which is known primarily as the WHT, is one of the widely used
transforms in signal and image processing. Nevertheless, the WHT is just a
particular case of a general class of transforms based on Hadamard matrices.1,2,66,67
Fast algorithms have been developed1,3–11,65–67,75–79 for efficient computation of
these transforms. It is known that the discrete HT (DHT) is computationally
advantageous over the FFT. Being orthonormal and taking values +1 or −1 at each
point, the Hadamard functions can be used for series expansions of the signal.
Because the Walsh–Hadamard matrix consists of ±1s, the computation of the
WHT does not require any multiplication operations, and consequently requires no
floating-point operations at all. The WHT is useful in signal and image processing,
communication systems, image coding, image enhancement, pattern recognition,
etc. The traditional fast $N = 2^n$-point DHT needs $N \log_2 N = n2^n$ addition
operations. Note that the implementation of the WHT with straightforward matrix
multiplication requires $N(N-1) = 2^{2n} - 2^n$ additions.
Now, let $X = (x_0, x_1, \ldots, x_{N-1})^T$ be an input signal. The forward and inverse 1D
WHTs of a vector $X$ are defined as$^{1,2,11}$

$$Y = \frac{1}{\sqrt{N}} H_N X, \qquad (2.7)$$

$$X = \frac{1}{\sqrt{N}} H_N Y. \qquad (2.8)$$


It has been shown that a fast WHT algorithm exists with $C(N) = N \log_2 N$
addition/subtraction operations.$^1$ To understand the concept of the construction of
fast transform algorithms, we start with the 8-point WHT

$$F = \frac{1}{\sqrt{8}} H_8 f, \qquad (2.9)$$

where $f = (a, b, c, d, e, f, g, h)^T$ is the input signal/vector.


Recall that the HT matrix of order 8 is of the following form:

$$H_8 = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix}, \qquad (2.10)$$

where “+” denotes 1 and “−” denotes −1.


It is easy to check the following:
(1) The direct evaluation of the HT $F = H_8 f$ requires $7 \times 8 = 56$ additions:

$$F = H_8 f = \begin{pmatrix}
a+e+c+g+b+f+d+h \\
a+e+c+g-b-f-d-h \\
a+e-c-g+b+f-d-h \\
a+e-c-g-b-f+d+h \\
a-e+c-g+b-f+d-h \\
a-e+c-g-b+f-d+h \\
a-e-c+g+b-f-d+h \\
a-e-c+g-b+f+d-h
\end{pmatrix}. \qquad (2.11)$$

(2) The Hadamard matrix H8 can be expressed as the product of the following
three matrices:

H8 = B3 B2 B1 , (2.12)

where

B1 = H2 ⊗ I4 , (2.13)
B2 = (H2 ⊗ I2 ) ⊕ (H2 ⊗ I2 ), (2.14)
B3 = I4 ⊗ H2 , (2.15)


or, in block form,

$$B_1 = \begin{pmatrix} I_4 & I_4 \\ I_4 & -I_4 \end{pmatrix}, \qquad (2.16)$$

$$B_2 = \begin{pmatrix} H_2 \otimes I_2 & O_4 \\ O_4 & H_2 \otimes I_2 \end{pmatrix}, \quad \text{with} \quad
H_2 \otimes I_2 = \begin{pmatrix} + & 0 & + & 0 \\ 0 & + & 0 & + \\ + & 0 & - & 0 \\ 0 & + & 0 & - \end{pmatrix}, \qquad (2.17)$$

$$B_3 = \operatorname{diag}\{H_2, H_2, H_2, H_2\}, \quad \text{with} \quad H_2 = \begin{pmatrix} + & + \\ + & - \end{pmatrix}. \qquad (2.18)$$

(3) Using this factorization, the 8-point 1D HT can be implemented in $24 = 8\log_2 8$ operations. The proof follows.
The FHT algorithm can be realized via the following three steps:
Step 1. Calculate $B_1 f$:

$$B_1 f = \begin{pmatrix} I_4 & I_4 \\ I_4 & -I_4 \end{pmatrix}
\begin{pmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{pmatrix}
= \begin{pmatrix} a+e \\ b+f \\ c+g \\ d+h \\ a-e \\ b-f \\ c-g \\ d-h \end{pmatrix}. \qquad (2.19)$$


Step 2. Calculate $B_2(B_1 f)$:

$$B_2(B_1 f) = \begin{pmatrix}
(a+e)+(c+g) \\ (b+f)+(d+h) \\ (a+e)-(c+g) \\ (b+f)-(d+h) \\
(a-e)+(c-g) \\ (b-f)+(d-h) \\ (a-e)-(c-g) \\ (b-f)-(d-h)
\end{pmatrix}. \qquad (2.20)$$

Step 3. Calculate $B_3[B_2(B_1 f)]$:

$$B_3[B_2(B_1 f)] = \begin{pmatrix}
(a+e)+(c+g)+[(b+f)+(d+h)] \\
(a+e)+(c+g)-[(b+f)+(d+h)] \\
(a+e)-(c+g)+[(b+f)-(d+h)] \\
(a+e)-(c+g)-[(b+f)-(d+h)] \\
(a-e)+(c-g)+[(b-f)+(d-h)] \\
(a-e)+(c-g)-[(b-f)+(d-h)] \\
(a-e)-(c-g)+[(b-f)-(d-h)] \\
(a-e)-(c-g)-[(b-f)-(d-h)]
\end{pmatrix}. \qquad (2.21)$$

The comparison of Eq. (2.21) with Eq. (2.11) shows the following:
• The factorization of Eq. (2.12), used to compute the DHT, produces exactly the same
result as evaluating the DHT definition directly [see Eq. (2.11)].
• The direct calculation of an 8-point HT H8 f requires 56 operations. However,
for calculation of H8 f via the fast algorithm, only 24 operations are required.
This is because each product of the matrix and vector requires only eight
additions or subtractions, since each sparse matrix has only two nonzero
elements in each row. Thus, all operations (additions, subtractions) that are
required for the H8 f calculation equal 24 = 8 log2 8. The difference in speed can
be significant, especially for long data sets, where N may be in the thousands or
millions.
• To perform the 8-point HT requires only eight storage locations.



Figure 2.2 All steps of the 8-point WHT shown simultaneously.


Figure 2.3 The flow graph of the 8-point fast WHT.

• The inverse 8-point HT matrix can be expressed by the product of the same three
matrices in reverse order: $H_8^{-1} = \frac{1}{8}B_1 B_2 B_3$.
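A quick NumPy check (a sketch, not from the book) confirms that the sparse factors of Eqs. (2.13)–(2.15) reproduce $H_8$:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])

# Sparse factors of Eq. (2.12): H8 = B3 B2 B1
B1 = np.kron(H2, np.eye(4))                      # Eq. (2.13)
B2 = np.kron(np.eye(2), np.kron(H2, np.eye(2)))  # Eq. (2.14)
B3 = np.kron(np.eye(4), H2)                      # Eq. (2.15)

# Build H8 by the Sylvester recursion and compare
H8 = np.kron(H2, np.kron(H2, H2))
assert np.array_equal(B3 @ B2 @ B1, H8)
```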
The fast WHT algorithms are best explained using signal flow diagrams, as shown
in Fig. 2.2. These diagrams consist of a series of nodes, each representing a
variable, which is itself expressed as the sum of other variables originating from
the left of the diagram, with the node block connected by means of solid lines.
A dashed connecting line indicates a term to be subtracted. Figure 2.2 shows the
signal flow graph illustrating the computation of the WHT coefficients for N = 8
and shows the flow graph of all steps simultaneously.
In general, the flow graph is used without the node block (Fig. 2.3).


The matrices $B_1$, $B_2$, and $B_3$ can be expressed as

$$B_1 = \begin{pmatrix} I_4 & I_4 \\ I_4 & -I_4 \end{pmatrix} = \begin{pmatrix} + & + \\ + & - \end{pmatrix} \otimes I_4 = H_2 \otimes I_4 = I_1 \otimes H_2 \otimes I_4, \qquad (2.22)$$

$$B_2 = \begin{pmatrix} H_2 \otimes I_2 & O_4 \\ O_4 & H_2 \otimes I_2 \end{pmatrix} = I_2 \otimes H_2 \otimes I_2, \qquad (2.23)$$

$$B_3 = \operatorname{diag}\{H_2, H_2, H_2, H_2\} = I_4 \otimes H_2 = I_4 \otimes H_2 \otimes I_1. \qquad (2.24)$$

The order-$2^n$ WHT matrix $H_{2^n}$ can be factored as follows:

$$H_{2^n} = F_n F_{n-1} \cdots F_2 F_1, \qquad (2.25)$$

where $F_i = I_{2^{i-1}} \otimes (H_2 \otimes I_{2^{n-i}})$, $i = 1, 2, \ldots, n$.


For instance, the 16-point WHT matrix can be factored as $H_{16} = F_4 F_3 F_2 F_1$, where

$$F_1 = H_2 \otimes I_8, \qquad (2.26)$$
$$F_2 = I_2 \otimes (H_2 \otimes I_4) = (H_2 \otimes I_4) \oplus (H_2 \otimes I_4), \qquad (2.27)$$
$$F_3 = I_4 \otimes (H_2 \otimes I_2) = (H_2 \otimes I_2) \oplus (H_2 \otimes I_2) \oplus (H_2 \otimes I_2) \oplus (H_2 \otimes I_2), \qquad (2.28)$$
$$F_4 = I_8 \otimes H_2. \qquad (2.29)$$

In Fig. 2.4, the flow graph of a 1D WHT for N = 16 is given.
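The factorization translates directly into an in-place butterfly implementation. The following sketch (hypothetical helper `fwht`, not from the book) performs the unnormalized $2^n$-point WHT with exactly $n2^n$ additions/subtractions:

```python
def fwht(x):
    """In-place fast Walsh-Hadamard transform (Hadamard ordering).

    Each pass applies one sparse factor I (x) H2 (x) I of Eq. (2.25);
    n = log2(N) passes give N*log2(N) additions/subtractions in total.
    """
    x = list(x)
    h = 1
    while h < len(x):                  # one pass per factor F_i
        for start in range(0, len(x), 2 * h):
            for j in range(start, start + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

assert fwht([1, 0, 1, 0, 0, 1, 1, 0]) == [4, 2, 0, -2, 0, 2, 0, 2]
```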



Figure 2.4 Flow graph of the fast WHT.

Now, using the properties of the Kronecker product, we obtain the desired
results. From this, it is not difficult to show that the WHT matrix of order $2^n$ can
be factored as

$$H_{2^n} = \prod_{m=1}^{n} (I_{2^{m-1}} \otimes H_2 \otimes I_{2^{n-m}}). \qquad (2.30)$$


Lemma 2.2.1: The Walsh–Hadamard matrix of order $N = 2^n$ can be represented as

$$H_{2^n} = (H_2 \otimes I_{2^{n-1}}) \begin{pmatrix} H_{2^{n-1}} & O_{2^{n-1}} \\ O_{2^{n-1}} & H_{2^{n-1}} \end{pmatrix} = (H_2 \otimes I_{2^{n-1}})(I_2 \otimes H_{2^{n-1}}), \qquad (2.31)$$

where $O_{2^{n-1}}$ is the zero matrix of order $2^{n-1}$.

Proof: Indeed, because

$$H_{2^n} = \begin{pmatrix} H_{2^{n-1}} & H_{2^{n-1}} \\ H_{2^{n-1}} & -H_{2^{n-1}} \end{pmatrix} = H_2 \otimes H_{2^{n-1}} = (H_2 I_2) \otimes (I_{2^{n-1}} H_{2^{n-1}}), \qquad (2.32)$$

we have

$$H_{2^n} = (H_2 \otimes I_{2^{n-1}})(I_2 \otimes H_{2^{n-1}}) = (H_2 \otimes I_{2^{n-1}}) \begin{pmatrix} H_{2^{n-1}} & O_{2^{n-1}} \\ O_{2^{n-1}} & H_{2^{n-1}} \end{pmatrix}. \qquad (2.33)$$

Using the Kronecker product property, we obtain the following:


Theorem 2.2.1: Let $f$ be a signal of length $N = 2^n$. Then,
(1) The Walsh–Hadamard matrix of order $N = 2^n$ can be factored as

$$H_{2^n} = \prod_{m=1}^{n} (I_{2^{m-1}} \otimes H_2 \otimes I_{2^{n-m}}). \qquad (2.34)$$

(2) The WHT of the signal $f$ can be computed with $n2^n$ addition/subtraction
operations.

Proof: From the definition of the Walsh–Hadamard matrix, we have

$$H_{2^n} = \begin{pmatrix} H_{2^{n-1}} & H_{2^{n-1}} \\ H_{2^{n-1}} & -H_{2^{n-1}} \end{pmatrix}. \qquad (2.35)$$

Using Lemma 2.2.1, we may rewrite this equation in the following form:

$$H_{2^n} = (H_2 \otimes I_{2^{n-1}})(I_2 \otimes H_{2^{n-1}}) = (I_{2^0} \otimes H_2 \otimes I_{2^{n-1}})(I_2 \otimes H_{2^{n-1}}), \quad I_{2^0} = 1. \qquad (2.36)$$

Using the same procedure with the Walsh–Hadamard matrix of order $2^{n-1}$, we obtain

$$H_{2^{n-1}} = \begin{pmatrix} H_{2^{n-2}} & H_{2^{n-2}} \\ H_{2^{n-2}} & -H_{2^{n-2}} \end{pmatrix} = (I_{2^0} \otimes H_2 \otimes I_{2^{n-2}})(I_2 \otimes H_{2^{n-2}}). \qquad (2.37)$$

Thus, from the above two relations, we get

$$H_{2^n} = (I_{2^0} \otimes H_2 \otimes I_{2^{n-1}})\{I_2 \otimes [(I_{2^0} \otimes H_2 \otimes I_{2^{n-2}})(I_2 \otimes H_{2^{n-2}})]\}. \qquad (2.38)$$


Table 2.1 Additions/subtractions of 1D WHTs.

Order N | Addition/subtraction (N log₂N) | Direct transform N(N − 1) | (N − 1)/(log₂N)
4       | 8       | 12        | 3/2
8       | 24      | 56        | 7/3
16      | 64      | 240       | 15/4
32      | 160     | 992       | 31/5
64      | 384     | 4,032     | 63/6
128     | 896     | 16,256    | 127/7
256     | 2,048   | 65,280    | 255/8
512     | 4,608   | 261,632   | 511/9
1024    | 10,240  | 1,047,552 | 1023/10

Thus, we have

$$H_{2^n} = (I_{2^0} \otimes H_2 \otimes I_{2^{n-1}})(I_2 \otimes H_2 \otimes I_{2^{n-2}})(I_4 \otimes H_{2^{n-2}}). \qquad (2.39)$$

After $n$ iterations, we obtain the desired result. The theorem is proved.


In Table 2.1, the number of additions/subtractions of 1D WHTs is given.

2.3 Fast Walsh–Paley Transform


In this section, we present factorizations of Walsh–Paley transform matrices.

Theorem 2.3.1: The Walsh–Paley matrix of order $N = 2^n$ can be factored as

$$[WP]_{2^n} = (I_2 \otimes [WP]_{2^{n-1}}) \begin{pmatrix} I_{2^{n-1}} \otimes (+\;+) \\ I_{2^{n-1}} \otimes (+\;-) \end{pmatrix}. \qquad (2.40)$$

Proof: From the definition of a Walsh–Paley matrix of order $N = 2^n$, we have

$$[WP]_{2^n} = \begin{pmatrix} [WP]_{2^{n-1}} \otimes (+\;+) \\ [WP]_{2^{n-1}} \otimes (+\;-) \end{pmatrix}. \qquad (2.41)$$

Note that $[WP]_1 = (1)$. Using the properties of the Kronecker product, we obtain

$$[WP]_{2^n} = \begin{pmatrix} [WP]_{2^{n-1}} \otimes (+\;+) \\ [WP]_{2^{n-1}} \otimes (+\;-) \end{pmatrix}
= \begin{pmatrix} ([WP]_{2^{n-1}} I_{2^{n-1}}) \otimes [I_1 (+\;+)] \\ ([WP]_{2^{n-1}} I_{2^{n-1}}) \otimes [I_1 (+\;-)] \end{pmatrix}
= \begin{pmatrix} ([WP]_{2^{n-1}} \otimes I_1)[I_{2^{n-1}} \otimes (+\;+)] \\ ([WP]_{2^{n-1}} \otimes I_1)[I_{2^{n-1}} \otimes (+\;-)] \end{pmatrix}. \qquad (2.42)$$

Then, from Eq. (2.42) and from the following identity:

$$\begin{pmatrix} AB \\ CD \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix} \begin{pmatrix} B \\ D \end{pmatrix}, \qquad (2.43)$$


we obtain

$$[WP]_{2^n} = \begin{pmatrix} [WP]_{2^{n-1}} & 0 \\ 0 & [WP]_{2^{n-1}} \end{pmatrix} \begin{pmatrix} I_{2^{n-1}} \otimes (+\;+) \\ I_{2^{n-1}} \otimes (+\;-) \end{pmatrix}. \qquad (2.44)$$

Thus,

$$[WP]_{2^n} = (I_2 \otimes [WP]_{2^{n-1}}) \begin{pmatrix} I_{2^{n-1}} \otimes (+\;+) \\ I_{2^{n-1}} \otimes (+\;-) \end{pmatrix}. \qquad (2.45)$$
Example 2.3.1: The Walsh–Paley matrices of orders 4, 8, and 16 can be factored as

$$N = 4{:}\quad [WP]_4 = (I_2 \otimes H_2)\,P_1 = \begin{pmatrix} + & + & 0 & 0 \\ + & - & 0 & 0 \\ 0 & 0 & + & + \\ 0 & 0 & + & - \end{pmatrix} \begin{pmatrix} + & + & 0 & 0 \\ 0 & 0 & + & + \\ + & - & 0 & 0 \\ 0 & 0 & + & - \end{pmatrix}, \qquad (2.46)$$

$$N = 8{:}\quad [WP]_8 = (I_4 \otimes H_2)(I_2 \otimes P_1)\,P_2, \qquad (2.47)$$

where

$$P_1 = \begin{pmatrix} + & + & 0 & 0 \\ 0 & 0 & + & + \\ + & - & 0 & 0 \\ 0 & 0 & + & - \end{pmatrix} = \begin{pmatrix} I_2 \otimes (+\;+) \\ I_2 \otimes (+\;-) \end{pmatrix}, \quad
P_2 = \begin{pmatrix} I_4 \otimes (+\;+) \\ I_4 \otimes (+\;-) \end{pmatrix}. \qquad (2.48)$$
  
$$N = 16{:}\quad [WP]_{16} = (I_8 \otimes H_2)(I_4 \otimes P_1)(I_2 \otimes P_2)\,P_3, \qquad (2.49)$$

where

$$P_3 = \begin{pmatrix} I_8 \otimes (+\;+) \\ I_8 \otimes (+\;-) \end{pmatrix} \qquad (2.50)$$

is the $16 \times 16$ matrix whose first eight rows place the pair $(+\;+)$ and whose last eight rows place the pair $(+\;-)$ along the diagonal blocks.



Figure 2.5 Flow graph of a fast 8-point Walsh–Paley transform.

Note that an 8-point Walsh–Paley transform of the vector $x = (x_0, x_1, \ldots, x_7)^T$,

$$y = [WP]_8\, x = (I_4 \otimes H_2)(I_2 \otimes P_1)\,P_2\, x, \qquad (2.51)$$

can be calculated using the graph in Fig. 2.5; a small code sketch follows.
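A compact NumPy sketch (hypothetical helper names, not from the book) builds the Walsh–Paley matrix by the recursion of Eq. (2.41) and checks the factorization of Eq. (2.51):

```python
import numpy as np

def walsh_paley(n):
    """Walsh-Paley matrix of order 2**n via the recursion of Eq. (2.41)."""
    WP = np.array([[1]])
    for _ in range(n):
        WP = np.vstack([np.kron(WP, [1, 1]), np.kron(WP, [1, -1])])
    return WP

H2 = np.array([[1, 1], [1, -1]])
P1 = np.vstack([np.kron(np.eye(2), [1, 1]), np.kron(np.eye(2), [1, -1])])
P2 = np.vstack([np.kron(np.eye(4), [1, 1]), np.kron(np.eye(4), [1, -1])])

# Factorization of Eq. (2.51): [WP]_8 = (I4 (x) H2)(I2 (x) P1) P2
lhs = walsh_paley(3)
rhs = np.kron(np.eye(4), H2) @ np.kron(np.eye(2), P1) @ P2
assert np.array_equal(lhs, rhs)
```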


Theorem 2.3.2: The Walsh–Paley matrix of order $N = 2^n$ can be factored as

$$[WP]_{2^n} = \prod_{m=0}^{n-1} \left[ I_{2^{n-1-m}} \otimes \begin{pmatrix} I_{2^m} \otimes (+\;+) \\ I_{2^m} \otimes (+\;-) \end{pmatrix} \right]. \qquad (2.52)$$

Proof: From Theorem 2.3.1, we have

$$[WP]_{2^n} = (I_2 \otimes [WP]_{2^{n-1}}) \begin{pmatrix} I_{2^{n-1}} \otimes (+\;+) \\ I_{2^{n-1}} \otimes (+\;-) \end{pmatrix}. \qquad (2.53)$$

Using Theorem 2.3.1 once again, we obtain

$$[WP]_{2^{n-1}} = (I_2 \otimes [WP]_{2^{n-2}}) \begin{pmatrix} I_{2^{n-2}} \otimes (+\;+) \\ I_{2^{n-2}} \otimes (+\;-) \end{pmatrix}. \qquad (2.54)$$

From Eqs. (2.53) and (2.54), the result is

$$[WP]_{2^n} = \left\{ I_2 \otimes \left[ (I_2 \otimes [WP]_{2^{n-2}}) \begin{pmatrix} I_{2^{n-2}} \otimes (+\;+) \\ I_{2^{n-2}} \otimes (+\;-) \end{pmatrix} \right] \right\} \begin{pmatrix} I_{2^{n-1}} \otimes (+\;+) \\ I_{2^{n-1}} \otimes (+\;-) \end{pmatrix}
= (I_4 \otimes [WP]_{2^{n-2}}) \left[ I_2 \otimes \begin{pmatrix} I_{2^{n-2}} \otimes (+\;+) \\ I_{2^{n-2}} \otimes (+\;-) \end{pmatrix} \right] \left[ I_{2^0} \otimes \begin{pmatrix} I_{2^{n-1}} \otimes (+\;+) \\ I_{2^{n-1}} \otimes (+\;-) \end{pmatrix} \right]. \qquad (2.55)$$

After performing $n$ iterations, we obtain Eq. (2.52).
Theorem 2.3.3: The Walsh matrix of order $N = 2^n$ can be expressed as

$$W_{2^n} = G_{2^n} [WP]_{2^n}, \qquad (2.56)$$

where $G_{2^n}$ is the Gray code permutation matrix, i.e.,

$$G_{2^n} = \prod_{m=0}^{n-1} I_{2^m} \otimes \operatorname{diag}\{I_{2^{n-m-1}}, R_{2^{n-m-1}}\} \qquad (2.57)$$

and

$$R_{2^n} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix}. \qquad (2.58)$$
This matrix can also be expressed as

$$W_{2^n} = Q_{2^n} H_{2^n}, \qquad (2.59)$$

where $H_{2^n}$ is a Walsh–Hadamard matrix, and

$$Q_{2^n} = \prod_{m=0}^{n-1} \left[ I_{2^{n-m-1}} \otimes \begin{pmatrix} I_{2^m} \otimes (+\;0) \\ R_{2^m} \otimes (0\;+) \end{pmatrix} \right]. \qquad (2.60)$$
Example 2.3.2: A factorization of Walsh matrices of orders 4, 8, and 16, using the
relation of Eq. (2.56), is obtained as follows:
(1) For $N = 4$, we have [see Eq. (2.52)]

$$[WP]_2 = H_2, \qquad (2.61)$$

$$[WP]_4 = \begin{pmatrix} H_2 & 0 \\ 0 & H_2 \end{pmatrix} \begin{pmatrix} I_2 \otimes (+\;+) \\ I_2 \otimes (+\;-) \end{pmatrix} \qquad (2.62)$$

and

$$G_4 = \left[ I_1 \otimes \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix} \right] \left[ I_2 \otimes \begin{pmatrix} I_1 & 0 \\ 0 & R_1 \end{pmatrix} \right] = \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix}; \qquad (2.63)$$


then, we obtain

$$W_4 = \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix} \begin{pmatrix} H_2 & 0 \\ 0 & H_2 \end{pmatrix} \begin{pmatrix} I_2 \otimes (+\;+) \\ I_2 \otimes (+\;-) \end{pmatrix}. \qquad (2.64)$$

Adding more detail, we have $W_4 = A_0 A_1 A_2$, where

$$A_0 = \begin{pmatrix} + & 0 & 0 & 0 \\ 0 & + & 0 & 0 \\ 0 & 0 & 0 & + \\ 0 & 0 & + & 0 \end{pmatrix}, \qquad (2.65)$$

$$A_1 = \begin{pmatrix} + & + & 0 & 0 \\ + & - & 0 & 0 \\ 0 & 0 & + & + \\ 0 & 0 & + & - \end{pmatrix}, \qquad (2.66)$$

$$A_2 = \begin{pmatrix} + & + & 0 & 0 \\ 0 & 0 & + & + \\ + & - & 0 & 0 \\ 0 & 0 & + & - \end{pmatrix}. \qquad (2.67)$$

(2) For $N = 8$, from Eq. (2.52) we have

$$[WP]_8 = (I_4 \otimes H_2) \left[ I_2 \otimes \begin{pmatrix} I_2 \otimes (+\;+) \\ I_2 \otimes (+\;-) \end{pmatrix} \right] \begin{pmatrix} I_4 \otimes (+\;+) \\ I_4 \otimes (+\;-) \end{pmatrix} \qquad (2.68)$$

and

$$G_8 = \begin{pmatrix} I_4 & 0 \\ 0 & R_4 \end{pmatrix} \left[ I_2 \otimes \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix} \right] (I_4 \otimes I_2). \qquad (2.69)$$

Then, we obtain $W_8 = A_0 A_1 A_2 A_3$, where

$$A_0 = \begin{pmatrix}
+&0&0&0&0&0&0&0 \\ 0&+&0&0&0&0&0&0 \\ 0&0&0&+&0&0&0&0 \\ 0&0&+&0&0&0&0&0 \\
0&0&0&0&0&0&+&0 \\ 0&0&0&0&0&0&0&+ \\ 0&0&0&0&0&+&0&0 \\ 0&0&0&0&+&0&0&0
\end{pmatrix}, \qquad (2.70a)$$

$$A_1 = I_4 \otimes H_2 = \begin{pmatrix}
+&+&0&0&0&0&0&0 \\ +&-&0&0&0&0&0&0 \\ 0&0&+&+&0&0&0&0 \\ 0&0&+&-&0&0&0&0 \\
0&0&0&0&+&+&0&0 \\ 0&0&0&0&+&-&0&0 \\ 0&0&0&0&0&0&+&+ \\ 0&0&0&0&0&0&+&-
\end{pmatrix}, \qquad (2.70b)$$

$$A_2 = I_2 \otimes P_1 = \begin{pmatrix}
+&+&0&0&0&0&0&0 \\ 0&0&+&+&0&0&0&0 \\ +&-&0&0&0&0&0&0 \\ 0&0&+&-&0&0&0&0 \\
0&0&0&0&+&+&0&0 \\ 0&0&0&0&0&0&+&+ \\ 0&0&0&0&+&-&0&0 \\ 0&0&0&0&0&0&+&-
\end{pmatrix}, \qquad (2.70c)$$

$$A_3 = P_2 = \begin{pmatrix}
+&+&0&0&0&0&0&0 \\ 0&0&+&+&0&0&0&0 \\ 0&0&0&0&+&+&0&0 \\ 0&0&0&0&0&0&+&+ \\
+&-&0&0&0&0&0&0 \\ 0&0&+&-&0&0&0&0 \\ 0&0&0&0&+&-&0&0 \\ 0&0&0&0&0&0&+&-
\end{pmatrix}. \qquad (2.70d)$$

(3) For $N = 16$, from Eq. (2.52) we obtain

$$[WP]_{16} = (I_8 \otimes H_2) \left[ I_4 \otimes \begin{pmatrix} I_2 \otimes (+\;+) \\ I_2 \otimes (+\;-) \end{pmatrix} \right] \left[ I_2 \otimes \begin{pmatrix} I_4 \otimes (+\;+) \\ I_4 \otimes (+\;-) \end{pmatrix} \right] \begin{pmatrix} I_8 \otimes (+\;+) \\ I_8 \otimes (+\;-) \end{pmatrix} \qquad (2.71)$$

because

$$G_{16} = \begin{pmatrix} I_8 & 0 \\ 0 & R_8 \end{pmatrix} \left[ I_2 \otimes \begin{pmatrix} I_4 & 0 \\ 0 & R_4 \end{pmatrix} \right] \left[ I_4 \otimes \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix} \right] (I_8 \otimes I_2). \qquad (2.72)$$

Then we have $W_{16} = A_0 A_1 A_2 A_3 A_4$, where $A_0 = G_{16}$ is the Gray code permutation matrix of Eq. (2.72),

$$A_0 = \begin{pmatrix} I_8 & 0 \\ 0 & R_8 \end{pmatrix} \left[ I_2 \otimes \begin{pmatrix} I_4 & 0 \\ 0 & R_4 \end{pmatrix} \right] \left[ I_4 \otimes \begin{pmatrix} I_2 & 0 \\ 0 & R_2 \end{pmatrix} \right], \qquad (2.73)$$


$$A_1 = I_8 \otimes H_2, \qquad (2.74)$$

$$A_2 = I_4 \otimes \begin{pmatrix} I_2 \otimes (+\;+) \\ I_2 \otimes (+\;-) \end{pmatrix}, \qquad (2.75)$$

$$A_3 = I_2 \otimes \begin{pmatrix} I_4 \otimes (+\;+) \\ I_4 \otimes (+\;-) \end{pmatrix}, \qquad (2.76)$$


$$A_4 = \begin{pmatrix} I_8 \otimes (+\;+) \\ I_8 \otimes (+\;-) \end{pmatrix}. \qquad (2.77)$$

Example 2.3.3: A factorization of Walsh matrices of orders 4, 8, and 16, using the
relation of Eq. (2.59), is obtained as follows. Because

$$H_{2^n} = \prod_{m=0}^{n-1} I_{2^m} \otimes (H_2 \otimes I_{2^{n-m-1}}), \qquad (2.78)$$

we have

$$Q_{2^n} = \prod_{m=0}^{n-1} \left[ I_{2^{n-m-1}} \otimes \begin{pmatrix} I_{2^m} \otimes (+\;0) \\ R_{2^m} \otimes (0\;+) \end{pmatrix} \right]. \qquad (2.79)$$

Then, using Eq. (2.59), the Walsh matrix $W_{2^n}$ can be factored as

$$W_{2^n} = B_0 B_1 \cdots B_{n-1} A_0 A_1 \cdots A_{n-1}, \qquad (2.80)$$

where $A_m = I_{2^m} \otimes (H_2 \otimes I_{2^{n-m-1}})$ and $B_m = I_{2^{n-m-1}} \otimes \begin{pmatrix} I_{2^m} \otimes (+\;0) \\ R_{2^m} \otimes (0\;+) \end{pmatrix}$, $m = 0, 1, 2, \ldots, n-1$.

The factorizations of Walsh matrices of orders 4 and 8 are given as follows:

$$W_4 = (I_2 \otimes I_2) \begin{pmatrix} I_2 \otimes (+\;0) \\ R_2 \otimes (0\;+) \end{pmatrix} (H_2 \otimes I_2)(I_2 \otimes H_2)
= \begin{pmatrix} +&0&0&0 \\ 0&0&+&0 \\ 0&0&0&+ \\ 0&+&0&0 \end{pmatrix}
\begin{pmatrix} +&0&+&0 \\ 0&+&0&+ \\ +&0&-&0 \\ 0&+&0&- \end{pmatrix}
\begin{pmatrix} +&+&0&0 \\ +&-&0&0 \\ 0&0&+&+ \\ 0&0&+&- \end{pmatrix}. \qquad (2.81)$$


$$W_8 = (I_4 \otimes I_2)\left[ I_2 \otimes \begin{pmatrix} I_2 \otimes (+\;0) \\ R_2 \otimes (0\;+) \end{pmatrix} \right] \begin{pmatrix} I_4 \otimes (+\;0) \\ R_4 \otimes (0\;+) \end{pmatrix} (H_2 \otimes I_4)\,[I_2 \otimes (H_2 \otimes I_2)]\,(I_4 \otimes H_2). \qquad (2.82)$$

2.4 Cal–Sal Fast Transform

The elements of a Cal–Sal transform matrix of order $N$ ($N = 2^n$), $H_{cs} = (h_{u,v})_{u,v=0}^{N-1}$, can be defined as$^{74}$

$$h_{u,v} = (-1)^{p_0 v_0 + p_1 v_1 + \cdots + p_{n-1} v_{n-1}}, \qquad (2.83)$$

where $u = 2^{n-1}u_{n-1} + 2^{n-2}u_{n-2} + \cdots + u_0$, $v = 2^{n-1}v_{n-1} + 2^{n-2}v_{n-2} + \cdots + v_0$, and
$p_{n-1} = u_0$, $p_i = u_{n-i-1} + u_{n-i-2}$, $i = 0, 1, \ldots, n-2$.
Let $x = (x_0, x_1, \ldots, x_{N-1})^T$ be an input signal vector; then, the forward and
inverse Cal–Sal transforms can be expressed as

$$y = \frac{1}{N} H_{cs}\, x, \qquad x = H_{cs}\, y. \qquad (2.84)$$

The Cal–Sal matrices of orders 4 and 8 are given as follows (the Walsh function and its sequency are listed at the right of each row):

$$H_{cs}(4) = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \end{pmatrix}
\begin{matrix} \mathrm{wal}(0,t)\;\;0 \\ \mathrm{cal}(1,t)\;\;1 \\ \mathrm{sal}(2,t)\;\;2 \\ \mathrm{sal}(1,t)\;\;1 \end{matrix} \qquad (2.85)$$


$$H_{cs}(8) = \begin{pmatrix}
1&1&1&1&1&1&1&1 \\
1&1&-1&-1&-1&-1&1&1 \\
1&-1&-1&1&1&-1&-1&1 \\
1&-1&1&-1&-1&1&-1&1 \\
1&-1&1&-1&1&-1&1&-1 \\
1&-1&-1&1&-1&1&1&-1 \\
1&1&-1&-1&1&1&-1&-1 \\
1&1&1&1&-1&-1&-1&-1
\end{pmatrix}
\begin{matrix} \mathrm{wal}(0,t)\;\;0 \\ \mathrm{cal}(1,t)\;\;1 \\ \mathrm{cal}(2,t)\;\;2 \\ \mathrm{cal}(3,t)\;\;3 \\ \mathrm{sal}(4,t)\;\;4 \\ \mathrm{sal}(3,t)\;\;3 \\ \mathrm{sal}(2,t)\;\;2 \\ \mathrm{sal}(1,t)\;\;1 \end{matrix} \qquad (2.86)$$

Similar to other HT matrices, the Cal–Sal matrix $H_{cs}(N)$ of order $N$ can be
factored into sparse matrices, leading to a fast algorithm. For example, we have

$$H_{cs}(4) = \begin{pmatrix} I_2 & I_2 \\ R_2 & -R_2 \end{pmatrix} \begin{pmatrix} H_2 & O_2 \\ O_2 & H_2 R_2 \end{pmatrix}, \qquad (2.87)$$

$$H_{cs}(8) = \begin{pmatrix}
+&0&0&0&+&0&0&0 \\ 0&0&+&0&0&0&+&0 \\ 0&+&0&0&0&+&0&0 \\ 0&0&0&+&0&0&0&+ \\
0&0&0&+&0&0&0&- \\ 0&+&0&0&0&-&0&0 \\ 0&0&+&0&0&0&-&0 \\ +&0&0&0&-&0&0&0
\end{pmatrix}
\begin{pmatrix} \begin{pmatrix} I_2 & I_2 \\ I_2 & -I_2 \end{pmatrix} & O_4 \\ O_4 & \begin{pmatrix} I_2 & I_2 \\ -I_2 & I_2 \end{pmatrix} \end{pmatrix}
(I_4 \otimes H_2), \qquad (2.88)$$

$$H_{cs}(16) = B_1 B_2 B_3 B_4, \qquad (2.89)$$


where

$$B_1 = \begin{pmatrix} [\mathrm{CBR}](I_8) & [\mathrm{CBR}](I_8) \\ [\mathrm{HR}]\{[\mathrm{CBR}](I_8)\} & -[\mathrm{HR}]\{[\mathrm{CBR}](I_8)\} \end{pmatrix}, \qquad (2.90)$$

$$B_2 = \begin{pmatrix} \begin{pmatrix} I_4 & I_4 \\ I_4 & -I_4 \end{pmatrix} & O_8 \\ O_8 & \begin{pmatrix} I_4 & I_4 \\ -I_4 & I_4 \end{pmatrix} \end{pmatrix}, \qquad (2.91)$$

$$B_3 = \operatorname{diag}\left\{ \begin{pmatrix} I_2 & I_2 \\ I_2 & -I_2 \end{pmatrix},\; \begin{pmatrix} I_2 & I_2 \\ -I_2 & I_2 \end{pmatrix},\; \begin{pmatrix} I_2 & I_2 \\ I_2 & -I_2 \end{pmatrix},\; \begin{pmatrix} I_2 & I_2 \\ -I_2 & I_2 \end{pmatrix} \right\}, \qquad (2.92)$$

$$B_4 = \operatorname{diag}\left\{ H_2,\; [\mathrm{HR}](H_2),\; H_2,\; [\mathrm{HR}](H_2),\; H_2,\; [\mathrm{HR}](H_2),\; H_2,\; [\mathrm{HR}](H_2) \right\} \qquad (2.93)$$

(the CBR and HR operations are introduced next; compare Eq. (2.101)).


We will now introduce the column bit reversal (CBR) operation. Let $A$ be an
$m \times m$ ($m$ a power of 2) matrix. $[\mathrm{CBR}](A)$ is the $m \times m$ matrix obtained from
matrix $A$ by rearranging its columns in bit-reversed order. For example, consider
the following $4 \times 4$ matrix:

$$A = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix}; \quad \text{then} \quad
[\mathrm{CBR}](A) = \begin{pmatrix} a_{11} & a_{13} & a_{12} & a_{14} \\ a_{21} & a_{23} & a_{22} & a_{24} \\ a_{31} & a_{33} & a_{32} & a_{34} \\ a_{41} & a_{43} & a_{42} & a_{44} \end{pmatrix}. \qquad (2.94)$$

The horizontal reflection (HR) operation for a matrix of any size is defined as

$$[\mathrm{HR}](A) = \begin{pmatrix} a_{14} & a_{13} & a_{12} & a_{11} \\ a_{24} & a_{23} & a_{22} & a_{21} \\ a_{34} & a_{33} & a_{32} & a_{31} \\ a_{44} & a_{43} & a_{42} & a_{41} \end{pmatrix}. \qquad (2.95)$$
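In code, both operations are simple column permutations. A minimal NumPy sketch (hypothetical helper names, not from the book):

```python
import numpy as np

def cbr(A):
    """Column bit reversal: reorder columns in bit-reversed index order."""
    m = A.shape[1]
    bits = m.bit_length() - 1
    perm = [int(format(j, f'0{bits}b')[::-1], 2) for j in range(m)]
    return A[:, perm]

def hr(A):
    """Horizontal reflection: reverse the column order."""
    return A[:, ::-1]

A = np.arange(16).reshape(4, 4)
print(cbr(A))   # columns in order 0, 2, 1, 3, as in Eq. (2.94)
print(hr(A))    # columns in order 3, 2, 1, 0, as in Eq. (2.95)
```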

Similarly, we can define the block horizontal reflection (BHR) operation. Using
these notations, we can represent the Cal–Sal matrices Hcs (4), Hcs (8), and Hcs (16)
as follows:
For N = 4, we have

$$H_{cs}(4) = B_1 B_2, \qquad (2.96)$$

where

$$B_1 = \begin{pmatrix} [\mathrm{CBR}](I_2) & [\mathrm{CBR}](I_2) \\ [\mathrm{HR}]\{[\mathrm{CBR}](I_2)\} & -[\mathrm{HR}]\{[\mathrm{CBR}](I_2)\} \end{pmatrix}, \quad
B_2 = \begin{pmatrix} H_2 & O_2 \\ O_2 & [\mathrm{HR}](H_2) \end{pmatrix}. \qquad (2.97)$$

For N = 8, we have

$$H_{cs}(8) = B_1 B_2 B_3, \qquad (2.98)$$

where

$$B_1 = \begin{pmatrix} [\mathrm{CBR}](I_4) & [\mathrm{CBR}](I_4) \\ [\mathrm{HR}]\{[\mathrm{CBR}](I_4)\} & -[\mathrm{HR}]\{[\mathrm{CBR}](I_4)\} \end{pmatrix}, \quad
B_2 = \begin{pmatrix} H_2 \otimes I_2 & O_4 \\ O_4 & [\mathrm{BHR}](H_2 \otimes I_2) \end{pmatrix}, \quad
B_3 = I_4 \otimes H_2. \qquad (2.99)$$


For N = 16, we have

$$H_{cs}(16) = B_1 B_2 B_3 B_4, \qquad (2.100)$$

where

$$B_1 = \begin{pmatrix} [\mathrm{CBR}](I_8) & [\mathrm{CBR}](I_8) \\ [\mathrm{HR}]\{[\mathrm{CBR}](I_8)\} & -[\mathrm{HR}]\{[\mathrm{CBR}](I_8)\} \end{pmatrix}, \quad
B_2 = \begin{pmatrix} H_2 \otimes I_4 & O_8 \\ O_8 & [\mathrm{BHR}](H_2 \otimes I_4) \end{pmatrix},$$

$$B_3 = I_2 \otimes \begin{pmatrix} H_2 \otimes I_2 & O_4 \\ O_4 & [\mathrm{BHR}](H_2 \otimes I_2) \end{pmatrix}, \quad
B_4 = I_4 \otimes \begin{pmatrix} H_2 & O_2 \\ O_2 & [\mathrm{HR}](H_2) \end{pmatrix}. \qquad (2.101)$$

It can be shown that a Cal–Sal matrix of order $N = 2^n$ can be factored as follows.
For even $n$, $n \ge 2$,

$$H_{cs}(2^n) = B_1 B_2 \cdots B_n, \qquad (2.102)$$

where

$$B_1 = \begin{pmatrix} [\mathrm{CBR}](I_{2^{n-1}}) & [\mathrm{CBR}](I_{2^{n-1}}) \\ [\mathrm{HR}]\{[\mathrm{CBR}](I_{2^{n-1}})\} & -[\mathrm{HR}]\{[\mathrm{CBR}](I_{2^{n-1}})\} \end{pmatrix},$$

$$B_i = I_{2^{i-2}} \otimes \begin{pmatrix} H_2 \otimes I_{2^{n-i}} & O_{2^{n-i+1}} \\ O_{2^{n-i+1}} & [\mathrm{BHR}](H_2 \otimes I_{2^{n-i}}) \end{pmatrix}, \quad i = 2, 3, \ldots, n. \qquad (2.103)$$

For odd $n$, $n \ge 3$,

$$H_{cs}(2^n) = B_1 B_2 \cdots B_n, \qquad (2.104)$$

where

$$B_1 = \begin{pmatrix} [\mathrm{CBR}](I_{2^{n-1}}) & [\mathrm{CBR}](I_{2^{n-1}}) \\ [\mathrm{HR}]\{[\mathrm{CBR}](I_{2^{n-1}})\} & -[\mathrm{HR}]\{[\mathrm{CBR}](I_{2^{n-1}})\} \end{pmatrix}, \quad B_n = I_{2^{n-1}} \otimes H_2,$$

$$B_i = I_{2^{i-2}} \otimes \begin{pmatrix} H_2 \otimes I_{2^{n-i}} & O_{2^{n-i+1}} \\ O_{2^{n-i+1}} & [\mathrm{BHR}](H_2 \otimes I_{2^{n-i}}) \end{pmatrix}, \quad i = 2, 3, \ldots, n-1. \qquad (2.105)$$


2.5 Fast Complex HTs

In this section, we present the factorization of complex Hadamard matrices. As
mentioned above, a complex Hadamard matrix $H$ is a unitary matrix with
elements $\pm 1, \pm j$, i.e.,

$$HH^* = H^*H = NI_N, \qquad (2.106)$$

where $H^*$ represents the complex conjugate transpose of the matrix $H$, and $j = \sqrt{-1}$.
It can be proved that if $H$ is a complex Hadamard matrix of order $N$, then $N$ is
even. The matrix $[CS]_2 = \begin{pmatrix} 1 & j \\ -j & -1 \end{pmatrix}$ is an example of a complex Hadamard matrix of
order 2. Complex Hadamard matrices of higher orders can be generated recursively
by using the Kronecker product, i.e.,

$$[CS]_{2^n} = H_2 \otimes [CS]_{2^{n-1}}, \quad n = 2, 3, \ldots. \qquad (2.107)$$
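A short NumPy sketch (hypothetical helper `complex_sylvester`, not from the book) builds these matrices by the recursion of Eq. (2.107) and checks the unitarity condition of Eq. (2.106):

```python
import numpy as np

def complex_sylvester(n):
    """Complex Sylvester-Hadamard matrix [CS]_{2**n} via Eq. (2.107)."""
    H2 = np.array([[1, 1], [1, -1]])
    CS = np.array([[1, 1j], [-1j, -1]])   # [CS]_2
    for _ in range(n - 1):
        CS = np.kron(H2, CS)
    return CS

CS8 = complex_sylvester(3)
N = CS8.shape[0]
assert np.allclose(CS8 @ CS8.conj().T, N * np.eye(N))   # Eq. (2.106)
assert np.allclose(CS8, CS8.conj().T)                   # Hermitian
```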

Theorem 2.5.1: The complex Sylvester matrix of order $2^n$ [see Eq. (2.107)] can be
factored as

$$[CS]_{2^n} = \left[ \prod_{m=1}^{n-1} (I_{2^{m-1}} \otimes H_2 \otimes I_{2^{n-m}}) \right] (I_{2^{n-1}} \otimes [CS]_2). \qquad (2.108)$$

Proof: Indeed, from the definition of the complex Sylvester matrix in Eq. (2.107), we have

$$[CS]_{2^n} = \begin{pmatrix} [CS]_{2^{n-1}} & [CS]_{2^{n-1}} \\ [CS]_{2^{n-1}} & -[CS]_{2^{n-1}} \end{pmatrix} = H_2 \otimes [CS]_{2^{n-1}}. \qquad (2.109)$$

Rewriting Eq. (2.109) in the form

$$[CS]_{2^n} = (H_2 I_2) \otimes (I_{2^{n-1}} [CS]_{2^{n-1}}) \qquad (2.110)$$

and using the Kronecker product property, we obtain

$$[CS]_{2^n} = (H_2 \otimes I_{2^{n-1}})(I_2 \otimes [CS]_{2^{n-1}}). \qquad (2.111)$$

Using Eq. (2.109) once again, we obtain

$$[CS]_{2^n} = (H_2 \otimes I_{2^{n-1}})\{I_2 \otimes [(H_2 \otimes I_{2^{n-2}})(I_2 \otimes [CS]_{2^{n-2}})]\}. \qquad (2.112)$$

After performing $n - 1$ iterations, we obtain the required result.

Note that $[CS]_{2^n}$ is a Hermitian matrix, i.e., $[CS]^*_{2^n} = [CS]_{2^n}$.


 
Because $[CS]_2 = \begin{pmatrix} 1 & j \\ -j & -1 \end{pmatrix}$, it follows from Eq. (2.109) that the complex
Sylvester–Hadamard matrices of orders 4 and 8 are of the form

$$[CS]_4 = \begin{pmatrix}
1 & j & 1 & j \\ -j & -1 & -j & -1 \\ 1 & j & -1 & -j \\ -j & -1 & j & 1
\end{pmatrix}, \qquad (2.113)$$

$$[CS]_8 = \begin{pmatrix}
1 & j & 1 & j & 1 & j & 1 & j \\
-j & -1 & -j & -1 & -j & -1 & -j & -1 \\
1 & j & -1 & -j & 1 & j & -1 & -j \\
-j & -1 & j & 1 & -j & -1 & j & 1 \\
1 & j & 1 & j & -1 & -j & -1 & -j \\
-j & -1 & -j & -1 & j & 1 & j & 1 \\
1 & j & -1 & -j & -1 & -j & 1 & j \\
-j & -1 & j & 1 & j & 1 & -j & -1
\end{pmatrix}. \qquad (2.114)$$

Now, according to Eq. (2.112), the matrix in Eq. (2.114) can be expressed as the
product of two matrices,

$$[CS]_8 = AB = A(B_1 + jB_2), \qquad (2.115)$$

where

$$A = H_2 \otimes I_4 = \begin{pmatrix}
+&0&0&0&+&0&0&0 \\ 0&+&0&0&0&+&0&0 \\ 0&0&+&0&0&0&+&0 \\ 0&0&0&+&0&0&0&+ \\
+&0&0&0&-&0&0&0 \\ 0&+&0&0&0&-&0&0 \\ 0&0&+&0&0&0&-&0 \\ 0&0&0&+&0&0&0&-
\end{pmatrix},$$

$$B_1 = \begin{pmatrix}
+&0&+&0&0&0&0&0 \\ 0&-&0&-&0&0&0&0 \\ +&0&-&0&0&0&0&0 \\ 0&-&0&+&0&0&0&0 \\
0&0&0&0&+&0&+&0 \\ 0&0&0&0&0&-&0&- \\ 0&0&0&0&+&0&-&0 \\ 0&0&0&0&0&-&0&+
\end{pmatrix}, \quad
B_2 = \begin{pmatrix}
0&+&0&+&0&0&0&0 \\ -&0&-&0&0&0&0&0 \\ 0&+&0&-&0&0&0&0 \\ -&0&+&0&0&0&0&0 \\
0&0&0&0&0&+&0&+ \\ 0&0&0&0&-&0&-&0 \\ 0&0&0&0&0&+&0&- \\ 0&0&0&0&-&0&+&0
\end{pmatrix}. \qquad (2.116)$$

Let $F = (a, b, c, d, e, f, g, h)^T$ be a column vector of length 8. The fast complex
Sylvester–Hadamard transform algorithm can be realized via the following steps:


Step 1. Calculate $B_1 F$:

$$B_1 F = (a+c,\; -b-d,\; a-c,\; -b+d,\; e+g,\; -f-h,\; e-g,\; -f+h)^T. \qquad (2.117)$$

Step 2. Calculate $A(B_1 F)$:

$$A(B_1 F) = \begin{pmatrix}
(a+c)+(e+g) \\ -(b+d)-(f+h) \\ (a-c)+(e-g) \\ -(b-d)-(f-h) \\
(a+c)-(e+g) \\ -(b+d)+(f+h) \\ (a-c)-(e-g) \\ -(b-d)+(f-h)
\end{pmatrix}. \qquad (2.118)$$

Step 3. Calculate $B_2 F$:

$$B_2 F = (b+d,\; -a-c,\; b-d,\; -a+c,\; f+h,\; -e-g,\; f-h,\; -e+g)^T. \qquad (2.119)$$

Step 4. Calculate $A(B_2 F)$:

$$A(B_2 F) = \begin{pmatrix}
(b+d)+(f+h) \\ -(a+c)-(e+g) \\ (b-d)+(f-h) \\ -(a-c)-(e-g) \\
(b+d)-(f+h) \\ -(a+c)+(e+g) \\ (b-d)-(f-h) \\ -(a-c)+(e-g)
\end{pmatrix}. \qquad (2.120)$$


Figure 2.6 Flow graph of fast 8-point complex Sylvester–Hadamard transform: (a) real
part; (b) imaginary part.

Step 5. Output the 8-point complex Sylvester–Hadamard transform coefficients,
real and imaginary parts (see Fig. 2.6):

$$A(B_1 F) + jA(B_2 F) = \begin{pmatrix}
(a+c)+(e+g) \\ -(b+d)-(f+h) \\ (a-c)+(e-g) \\ -(b-d)-(f-h) \\
(a+c)-(e+g) \\ -(b+d)+(f+h) \\ (a-c)-(e-g) \\ -(b-d)+(f-h)
\end{pmatrix} + j \begin{pmatrix}
(b+d)+(f+h) \\ -(a+c)-(e+g) \\ (b-d)+(f-h) \\ -(a-c)-(e-g) \\
(b+d)-(f+h) \\ -(a+c)+(e+g) \\ (b-d)-(f-h) \\ -(a-c)+(e-g)
\end{pmatrix}. \qquad (2.121)$$

The flow graph of the 8-point complex Sylvester–Hadamard transform of the
vector $(a, b, c, d, e, f, g, h)^T$, with split real and imaginary parts, is given in Fig. 2.6.
From Eq. (2.109), it follows that to perform a $[CS]_N$ ($N = 2^n$) transform, it is
necessary to perform two $N/2$-point complex Sylvester–Hadamard transforms. Hence, the
additive complexity of the complex Sylvester–Hadamard transform is

$$C^+([CS]_N) = N \log_2(N/2) = (n-1)2^n. \qquad (2.122)$$

For example, $C^+([CS]_4) = 4$, $C^+([CS]_8) = 16$, and $C^+([CS]_{16}) = 48$. From
Theorem 2.5.1, it follows that the complex Hadamard matrix of order 16 can be
represented as

$$[CS]_{16} = A_1 A_2 A_3 B_1 + jA_1 A_2 A_3 B_2, \qquad (2.123)$$

where $A_1 = H_2 \otimes I_8$,



Figure 2.7 Flow graph of the fast 16-point complex Sylvester–Hadamard transform (real
part).

$A_2 = (H_2 \otimes I_4) \oplus (H_2 \otimes I_4)$,
$A_3 = (H_2 \otimes I_2) \oplus (H_2 \otimes I_2) \oplus (H_2 \otimes I_2) \oplus (H_2 \otimes I_2)$, and

$$B_1 = I_8 \otimes T_1, \quad B_2 = I_8 \otimes T_2, \quad \text{where} \quad T_1 = \begin{pmatrix} + & 0 \\ 0 & - \end{pmatrix}, \quad T_2 = \begin{pmatrix} 0 & + \\ - & 0 \end{pmatrix}. \qquad (2.124)$$

Flow graphs of the 16-point complex Sylvester–Hadamard transform of the


vector (x0 , x1 , . . . , x15 ), with split real and imaginary parts, are given in Figs. 2.7
and 2.8.

2.6 Fast Haar Transform


This section presents a fast Haar transform computation algorithm,

$$X = \frac{1}{N}[\mathrm{Haar}]_N f = \frac{1}{N}H(N) f, \qquad (2.125)$$

where $[\mathrm{Haar}]_N = H(N)$ is the Haar transform matrix of order $N$, and $f$ is the signal
vector of length $N$.



Figure 2.8 Flow graph of the fast 16-point complex Sylvester–Hadamard transform
(imaginary part).

First, consider an example. Let $N = 8$, and let the input data vector be
$f = (a, b, c, d, e, f, g, h)^T$. It is easy to check that the direct evaluation of the Haar
transform (below, $s = \sqrt{2}$)

$$H(8) f = \begin{pmatrix}
1&1&1&1&1&1&1&1 \\ 1&1&1&1&-1&-1&-1&-1 \\ s&s&-s&-s&0&0&0&0 \\ 0&0&0&0&s&s&-s&-s \\
2&-2&0&0&0&0&0&0 \\ 0&0&2&-2&0&0&0&0 \\ 0&0&0&0&2&-2&0&0 \\ 0&0&0&0&0&0&2&-2
\end{pmatrix}
\begin{pmatrix} a \\ b \\ c \\ d \\ e \\ f \\ g \\ h \end{pmatrix}
= \begin{pmatrix}
a+b+c+d+e+f+g+h \\ a+b+c+d-e-f-g-h \\ s(a+b-c-d) \\ s(e+f-g-h) \\
2(a-b) \\ 2(c-d) \\ 2(e-f) \\ 2(g-h)
\end{pmatrix} \qquad (2.126)$$

requires 56 operations.
The Haar matrix $H(8)$ of order $N = 8$ may be expressed as the product of three
matrices,

$$H(8) = H_1 H_2 H_3, \qquad (2.127)$$


where

$$H_1 = \begin{pmatrix}
1&1&0&0&0&0&0&0 \\ 1&-1&0&0&0&0&0&0 \\ 0&0&s&0&0&0&0&0 \\ 0&0&0&s&0&0&0&0 \\
0&0&0&0&1&0&0&0 \\ 0&0&0&0&0&1&0&0 \\ 0&0&0&0&0&0&1&0 \\ 0&0&0&0&0&0&0&1
\end{pmatrix}, \quad
H_2 = \begin{pmatrix}
1&1&0&0&0&0&0&0 \\ 0&0&1&1&0&0&0&0 \\ 1&-1&0&0&0&0&0&0 \\ 0&0&1&-1&0&0&0&0 \\
0&0&0&0&2&0&0&0 \\ 0&0&0&0&0&2&0&0 \\ 0&0&0&0&0&0&2&0 \\ 0&0&0&0&0&0&0&2
\end{pmatrix},$$

$$H_3 = \begin{pmatrix}
1&1&0&0&0&0&0&0 \\ 0&0&1&1&0&0&0&0 \\ 0&0&0&0&1&1&0&0 \\ 0&0&0&0&0&0&1&1 \\
1&-1&0&0&0&0&0&0 \\ 0&0&1&-1&0&0&0&0 \\ 0&0&0&0&1&-1&0&0 \\ 0&0&0&0&0&0&1&-1
\end{pmatrix}. \qquad (2.128)$$

Consider the fast Haar transform algorithm step by step.

Step 1. Calculate

$$H_3 f = (a+b,\; c+d,\; e+f,\; g+h,\; a-b,\; c-d,\; e-f,\; g-h)^T. \qquad (2.129)$$

Step 2. Calculate

$$H_2(H_3 f) = \bigl(a+b+(c+d),\; e+f+(g+h),\; a+b-(c+d),\; e+f-(g+h),\; 2(a-b),\; 2(c-d),\; 2(e-f),\; 2(g-h)\bigr)^T. \qquad (2.130)$$



Figure 2.9 Signal flow diagram of the fast 8-point 1D Haar transform.

Step 3. Calculate

$$H_1[H_2(H_3 f)] = \begin{pmatrix}
a+b+c+d+(e+f+g+h) \\ a+b+c+d-(e+f+g+h) \\ s[a+b-(c+d)] \\ s[e+f-(g+h)] \\
2(a-b) \\ 2(c-d) \\ 2(e-f) \\ 2(g-h)
\end{pmatrix}. \qquad (2.131)$$

Thus, the 8-point Haar transform may be performed via $14 = 8 + 4 + 2$ additions
and subtractions, two multiplications by $\sqrt{2}$, and four multiplications by 2, where
the latter can be done via a binary shift operation. By analogy with the HT, the
Haar transform can be represented by the flow diagram of Fig. 2.9.
We can see that these matrices can be expressed compactly (with the scale factors
collected into the first factor) as

$$H_1 = \operatorname{diag}\left\{ \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\; \sqrt{2}\,I_2,\; 2I_4 \right\}, \qquad (2.132)$$

$$H_2 = \operatorname{diag}\left\{ \begin{pmatrix} I_2 \otimes (1\;\;1) \\ I_2 \otimes (1\;-1) \end{pmatrix},\; I_4 \right\}, \qquad (2.133)$$

$$H_3 = \begin{pmatrix} I_4 \otimes (1\;\;1) \\ I_4 \otimes (1\;-1) \end{pmatrix}. \qquad (2.134)$$

Theorem 2.6.1: The Haar matrix of order $2^n$ can be generated recursively as

$$H(2^n) = \begin{pmatrix} H(2^{n-1}) & 0 \\ 0 & \sqrt{2^{n-1}}\,I(2^{n-1}) \end{pmatrix}
\begin{pmatrix} I(2^{n-1}) \otimes (+1\;+1) \\ I(2^{n-1}) \otimes (+1\;-1) \end{pmatrix}, \quad n = 2, 3, \ldots, \qquad (2.135)$$

where

$$H = H(2) = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad (2.136)$$

$\otimes$ is the Kronecker product, and $I(2^{n-1})$ is the identity matrix of order $2^{n-1}$.

Proof: From the definition of the Haar matrix, we have

$$H(2^n) = \begin{pmatrix} H(2^{n-1}) \otimes (+1\;+1) \\ \sqrt{2^{n-1}}\,I(2^{n-1}) \otimes (+1\;-1) \end{pmatrix}, \quad n = 2, 3, \ldots. \qquad (2.137)$$

Using the property of the Kronecker product, from Eq. (2.137) we obtain

$$H(2^n) = \begin{pmatrix} [H(2^{n-1}) \otimes I(2^0)][I(2^{n-1}) \otimes (+1\;+1)] \\ [\sqrt{2^{n-1}}\,I(2^{n-1}) \otimes I(2^0)][I(2^{n-1}) \otimes (+1\;-1)] \end{pmatrix}, \quad n = 2, 3, \ldots. \qquad (2.138)$$

Then, from Eq. (2.138) and from the following property of matrix algebra:

$$\begin{pmatrix} AB \\ CD \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & C \end{pmatrix} \begin{pmatrix} B \\ D \end{pmatrix},$$

we obtain

$$H(2^n) = \begin{pmatrix} H(2^{n-1}) & 0 \\ 0 & \sqrt{2^{n-1}}\,I(2^{n-1}) \end{pmatrix} \begin{pmatrix} I(2^{n-1}) \otimes (+1\;+1) \\ I(2^{n-1}) \otimes (+1\;-1) \end{pmatrix}, \quad n = 2, 3, \ldots. \qquad (2.139)$$

Examples:
(1) Let $n = 2$; then, the Haar matrix of order 4 can be represented as a product
of two matrices:

$$H(4) = \begin{pmatrix} H(2) & 0 \\ 0 & \sqrt{2}\,I(2) \end{pmatrix} \begin{pmatrix} I(2) \otimes (+1\;+1) \\ I(2) \otimes (+1\;-1) \end{pmatrix} = H_1 H_2, \qquad (2.140)$$

where ($s = \sqrt{2}$)

$$H_1 = \begin{pmatrix} 1&1&0&0 \\ 1&-1&0&0 \\ 0&0&s&0 \\ 0&0&0&s \end{pmatrix}, \quad
H_2 = \begin{pmatrix} 1&1&0&0 \\ 0&0&1&1 \\ 1&-1&0&0 \\ 0&0&1&-1 \end{pmatrix}. \qquad (2.141)$$

(2) Let $n = 3$; then, the Haar matrix of order 8 can be expressed as a product of
three matrices,

$$H(8) = H_1 H_2 H_3, \qquad (2.142)$$

where

$$H_1 = \operatorname{diag}\left\{ \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\; \sqrt{2}\,I_2,\; 2I_4 \right\}, \quad
H_2 = \operatorname{diag}\left\{ \begin{pmatrix} I_2 \otimes (1\;\;1) \\ I_2 \otimes (1\;-1) \end{pmatrix},\; I_4 \right\}, \quad
H_3 = \begin{pmatrix} I_4 \otimes (1\;\;1) \\ I_4 \otimes (1\;-1) \end{pmatrix}. \qquad (2.143)$$

To prove this statement, from Eq. (2.139) we have

$$H(8) = \begin{pmatrix} H(4) & 0 \\ 0 & 2I(4) \end{pmatrix} \begin{pmatrix} I(4) \otimes (+1\;+1) \\ I(4) \otimes (+1\;-1) \end{pmatrix}. \qquad (2.144)$$

Now, using Eq. (2.140), this can be presented as

$$H(8) = \begin{pmatrix} \begin{pmatrix} H(2) & 0 \\ 0 & \sqrt{2}\,I(2) \end{pmatrix} \begin{pmatrix} I(2) \otimes (+1\;+1) \\ I(2) \otimes (+1\;-1) \end{pmatrix} & 0 \\ 0 & 2I(4) \end{pmatrix} \begin{pmatrix} I(4) \otimes (+1\;+1) \\ I(4) \otimes (+1\;-1) \end{pmatrix}. \qquad (2.145)$$

Now, from Eq. (2.145), and following the property of matrix algebra

$$\begin{pmatrix} AB & 0 \\ 0 & \alpha I(M) \end{pmatrix} = \begin{pmatrix} A & 0 \\ 0 & \alpha I(M) \end{pmatrix} \begin{pmatrix} B & 0 \\ 0 & I(M) \end{pmatrix},$$

we obtain

$$H(8) = \begin{pmatrix} H(2) & 0 & 0 \\ 0 & \sqrt{2}\,I(2) & 0 \\ 0 & 0 & 2I(4) \end{pmatrix}
\begin{pmatrix} I(2) \otimes (+1\;+1) & 0 \\ I(2) \otimes (+1\;-1) & 0 \\ 0 & I(4) \end{pmatrix}
\begin{pmatrix} I(4) \otimes (+1\;+1) \\ I(4) \otimes (+1\;-1) \end{pmatrix} = H_1 H_2 H_3. \qquad (2.146)$$


Now we can formulate the general theorem.

Theorem 2.6.2: Let $H(N) = H(2^n)$ be a Haar transform matrix of order $N = 2^n$. Then,

(1) The Haar matrix of order $N = 2^n$ can be represented as a product of $n$ sparse
matrices:

$$H(2^n) = H_n H_{n-1} \cdots H_1, \qquad (2.147)$$

where

$$H_n = \operatorname{diag}\left\{ H(2),\; 2^{1/2}I(2),\; 2I(4),\; 2^{3/2}I(8),\; \ldots,\; 2^{(n-1)/2}I(2^{n-1}) \right\}, \qquad (2.148)$$

$$H_1 = \begin{pmatrix} I(2^{n-1}) \otimes (1\;\;1) \\ I(2^{n-1}) \otimes (1\;-1) \end{pmatrix}, \qquad (2.149)$$

$$H_m = \begin{pmatrix} I(2^{m-1}) \otimes (1\;\;1) & 0 \\ I(2^{m-1}) \otimes (1\;-1) & 0 \\ 0 & I(2^n - 2^m) \end{pmatrix}, \quad m = 2, 3, \ldots, n-1. \qquad (2.150)$$

(2) The Haar transform may be calculated via $2(2^n - 1)$ operations, i.e., via $O(N)$
operations.
(3) Only $2^n$ storage locations are required to perform the $2^n$-point Haar transform.
(4) The inverse $2^n$-point Haar transform matrix can be represented as

$$H^{-1}(2^n) = \frac{1}{2^n}H^{T}(2^n) = \frac{1}{2^n}H_1^T H_2^T \cdots H_n^T. \qquad (2.151)$$

Note that each $H_m$ [see Eq. (2.150)] has $2^m$ rows with only two nonzero
elements and $2^n - 2^m$ rows with only one nonzero element, so the products by
$H_2, \ldots, H_{n-1}$ together require only $2^n - 4$ addition operations; an $H_1$ transform
[see Eq. (2.149)] requires $2^n$ additions; and an $H_n$ transform requires 2
additions and $2^n - 2$ multiplications.
So a $2^n$-point Haar transform requires $2 \cdot 2^n - 2$ addition and $2^n - 2$ multiplication
operations; a code sketch follows.
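The recursive structure lends itself to a simple in-place implementation. The sketch below (hypothetical helper `fast_haar`, not from the book; it follows the unnormalized convention of the 8-point example) performs the butterfly stages on a shrinking head of the array and then applies the diagonal scalings of Eq. (2.148):

```python
def fast_haar(f):
    """Unnormalized fast Haar transform (cf. the 8-point example of
    Eq. (2.126)): butterfly stages on a shrinking head of the array,
    followed by the diagonal scale factors of Eq. (2.148)."""
    x = list(f)
    h = len(x)
    while h > 1:                       # butterfly the first h entries
        half = h // 2
        sums = [x[2 * i] + x[2 * i + 1] for i in range(half)]
        diffs = [x[2 * i] - x[2 * i + 1] for i in range(half)]
        x[:h] = sums + diffs           # 2N - 2 additions in total
        h = half
    start, k = 2, 1
    while start < len(x):              # detail band [2^k, 2^(k+1)) gets 2^(k/2)
        for i in range(start, 2 * start):
            x[i] *= 2 ** (k / 2)
        start, k = 2 * start, k + 1
    return x

assert fast_haar([1, 1, 1, 1, 1, 1, 1, 1]) == [8, 0, 0, 0, 0, 0, 0, 0]
```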
From Eqs. (2.148) and (2.150), we obtain the following factors of the Haar
transform matrix of order 16:

$$H_1 = \begin{pmatrix} I_8 \otimes (+\;+) \\ I_8 \otimes (+\;-) \end{pmatrix}, \quad
H_2 = \begin{pmatrix} I_2 \otimes (+\;+) & O_{2 \times 12} \\ I_2 \otimes (+\;-) & O_{2 \times 12} \\ O_{12 \times 4} & I_{12} \end{pmatrix}, \quad
H_3 = \begin{pmatrix} I_4 \otimes (+\;+) & O_{4 \times 8} \\ I_4 \otimes (+\;-) & O_{4 \times 8} \\ O_{8 \times 8} & I_8 \end{pmatrix},$$

$$H_4 = \operatorname{diag}\left\{ \begin{pmatrix} + & + \\ + & - \end{pmatrix},\; \sqrt{2}\,I_2,\; 2I_4,\; \sqrt{8}\,I_8 \right\}, \qquad (2.152)$$

where $O_{m \times n}$ is the zero matrix of size $m \times n$.


References
1. S. Agaian, Advances and problems of the fast orthogonal transforms
for signal-image processing applications (Part 1), Pattern Recognition,
Classification, Forecasting, Yearbook, 3, Russian Academy of Sciences,
Nauka, Moscow (1990) 146–215 (in Russian).
2. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal
Processing, Springer-Verlag, New York (1975).
3. G. R. Reddy and P. Satyanarayana, “Interpolation algorithm using
Walsh–Hadamard and discrete Fourier/Hartley transforms,” Circuits and
Systems 1, 545–547 (1991).
4. C.-F. Chan, “Efficient implementation of a class of isotropic quadratic filters
by using Walsh–Hadamard transform,” in Proc. of IEEE Int. Symp. on Circuits
and Systems, June 9–12, Hong Kong, 2601–2604 (1997).
5. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and Synthesis
with Applications and Signal/Image Processing, Kluwer Academic Publishers,
Boston (1996).
6. L. Chang and M. Wu, “A bit level systolic array for Walsh–Hadamard
transforms,” IEEE Trans. Signal Process 31, 341–347 (1993).
7. P. M. Amira and A. Bouridane, “Novel FPGA implementations of
Walsh–Hadamard transforms for signal processing,” IEE Proc. of Vision,
Image and Signal Processing 148, 377–383 (2001).
8. S. K. Bahl, “Design and prototyping a fast Hadamard transformer for
WCDMA,” in Proc. of 14th IEEE Int. Workshop on Rapid Systems
Prototyping, 134–140 (2003).
9. S. V. J. C. R. Hashemian, “A new gate image encoder; algorithm, design
and implementation,” in Proc. of 42nd IEEE Midwest Symp. Circuits and
Systems 1, 418–421 (1999).
10. B. J. Falkowski and T. Sasao, “Unified algorithm to generate Walsh functions
in four different orderings and its programmable hardware implementations,”
IEE Proc.-Vis. Image Signal Process. 152 (6), 819–826 (2005).
11. S. Agaian, Advances and problems of the fast orthogonal transforms
for signal-image processing applications (Part 2), Pattern Recognition,
Classification, Forecasting, Yearbook, 4, Russian Academy of Sciences,
Nauka, Moscow (1991) 156–246 (in Russian).
12. S. Agaian, K. Tourshan, and J. Noonan, “Generalized parametric slant-
Hadamard transforms,” Signal Process 84, 1299–1307 (2004).
13. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamard
transforms,” Multiple Valued Logic Soft Comput. J. 10 (2), 173–187 (2004).
14. S. Agaian, K. Tourshan, and J. Noonan, “Performance of parametric
Slant-Haar transforms,” J. Electron. Imaging 12 (3), 539–551 (2003)
[doi:10.1117/1.1580494].


15. S. Agaian, K. P. Panetta, and A. M. Grigoryan, “Transform based image


enhancement algorithms with performance measure,” IEEE Trans. Image
Process 10 (3), 367–380 (2001).
16. A. M. Grigoryan and S. Agaian, “Method of fast 1-D paired transforms for
computing the 2-D discrete Hadamard transform,” IEEE Trans. Circuits Syst.
II 47 (10), 1098–1104 (2000).
17. S. Agaian and A. Grigorian, “Discrete unitary transforms and their relation
to coverings of fundamental periods. Part 1,” Pattern Recog. Image Anal. 1,
16–24 (1994).
18. S. Agaian and A. Grigorian, “Discrete unitary transforms and their relation to
coverings of fundamental periods. Part 2,” Pattern Recogn. Image Anal. 4 (1),
25–31 (1994).
19. S. Agaian and D. Gevorkian, “Synthesis of a class of orthogonal transforms,
parallel SIMD algorithms and specialized processors,” Pattern Recogn. Image
Anal. 2 (4), 396–408 (1992).
20. S. Agaian, D. Gevorkian, and H. Bajadian, “Stability of orthogonal series,”
Kibernetica VT (Cybernet. Comput. Technol.) 6, 132–170 (1991).
21. S. Agaian and D. Gevorkian, “Complexity and parallel algorithms of the
discrete orthogonal transforms,” Kibernetika VT (Cybernet. Comput. Technol.)
5, 124–171 (1990).
22. S. Agaian and A. Petrosian, Optimal Zonal Compression Method Using Ortho-
gonal Transforms, Armenian National Academy Publisher, 3–27 (1989).
23. S. Agaian and H. Bajadian, “Stable summation of Fourier–Haar series with
approximate coefficients,” Mat. Zametky (Math. Note) 39 (1), 136–146 (1986).
24. S. Agaian, “Adaptive images compression via orthogonal transforms,” in Proc.
of Colloquium on Coding Theory between Armenian Academy of Sciences and
Osaka University, Yerevan, 3–9 (1986).
25. S. Agaian and D. Gevorkian, “Parallel algorithms for orthogonal transforms,”
Colloquium Math. Soc. Janos Bolyai, Theory of Algorithms (Hungary) 44,
15–26 (1984).
26. S. Agaian, A. Matevosian, and A. Muradian, “Digital filters with respect to a
family of Haar systems,” Akad. Nauk. Arm. SSR. Dokl. 77, 117–121 (1983).
27. S. Agaian and A. Matevosian, “Fast Hadamard transform,” Math. Prob.
Cybernet. Comput. Technol. 10, 73–90 (1982).
28. S. Agaian and A. Matevosian, “Haar transforms and automatic quality test of
printed circuit boards,” Acta Cybernetica 5 (3), 315–362 (1981).
29. S. S. Agaian, C. L. Philip and C. Mei-Ching, “Fibonacci Fourier transform and
sliding window filtering,” in Proc. of IEEE Int. Conf. on System of Systems
Engineering, SoSE’07, April 16–18, 1–5 (2007).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


88 Chapter 2

30. S. S. Agaian and O. Caglayan, “Super fast Fourier transform,” presented at


IS&T/SPIE 18th Annual Symp. on Electronic Imaging Science and Techno-
logy, Jan. 15–19, San Jose, CA (2006).
31. S. S. Agaian and O. Caglayan, “Fast encryption method based on new FFT
representation for the multimedia data system security,” presented at IEEE
SMC, Taiwan (Oct. 2006).
32. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, “Reversible
Hadamard transforms,” in Proc. of 2005 Int. TICSP Workshop on Spectral
Methods and Multirate Signal Processing, June 20–22, Riga, Latvia, 33–40
(2005).
33. S.S. Agaian and O. Caglayan, “New fast Fourier transform with linear
multiplicative complexity I,” in IEEE 39th Asilomar Conf. on Signals, Systems
and Computers, Oct. 30–Nov. 2, Pacific Grove, CA (2005).
34. S. Agaian, K. Tourshan, and J. Noonan, “Parametric Slant-Hadamard
transforms with applications,” IEEE Trans. Signal Process. Lett. 9 (11),
375–377 (2002).
35. N. Brenner and C. Rader, “A new principle for fast Fourier transformation,”
IEEE Acoust. Speech Signal Process 24, 264–266 (1976).
36. E. O. Brigham, The Fast Fourier Transform, Prentice-Hall, Englewood Cliffs,
NJ (2002).
37. J. W. Cooley and J. W. Tukey, “An algorithm for the machine calculation of
complex Fourier series,” Math. Comput. 19, 297–301 (1965).
38. T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein, Introduction to
Algorithms, 2nd ed., MIT Press, Cambridge, MA and McGraw-Hill, New York
(especially Ch. 30, Polynomials and the FFT) (2001).
39. P. Duhamel, “Algorithms meeting the lower bounds on the multiplicative
complexity of length-2n DFTs and their connection with practical algorithms,”
IEEE Trans. Acoust. Speech Signal Process. 38, 1504–1511 (1990).
40. P. Duhamel and M. Vetterli, “Fast Fourier transforms: a tutorial review and a
state of the art,” Signal Process 19, 259–299 (1990).
41. A. Edelman, P. McCorquodale, and S. Toledo, “The future fast Fourier
transforms,” SIAM J. Sci. Comput. 20, 1094–1114 (1999).
42. M. Frigo and S. G. Johnson, “The design and implementation of FFTW3,”
Proc. of IEEE 93 (2), 216–231 (2005).
43. W. M. Gentleman and G. Sande, “Fast Fourier transforms: for fun and profit,”
Proc. AFIPS 29 (ACM), 563–578 (1966).
44. H. Guo and C. S. Burrus, “Fast approximate Fourier transform via wavelets
transform,” Proc. SPIE 2825, 250–259 (1996) [doi:10.1117/12.255236].
45. H. Guo and G. A. Sitton, “The quick discrete Fourier transform,” Proc. of IEEE
Conf. Acoust. Speech and Signal Processing (ICASSP) 3, pp. 445–448 (1994).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Fast Classical Discrete Orthogonal Transforms 89

46. M. T. Heideman, D. H. Johnson, and C. S. Burrus, “Gauss and the history of


the fast Fourier transform,” IEEE ASSP Mag. 1 (4), 14–21 (1984).
47. M. T. Heideman and C. S. Burrus, “On the number of multiplications
necessary to compute a length-2n DFT,” IEEE Trans. Acoust. Speech. Signal
Process 34, 91–95 (1986).
48. S. G. Johnson and M. Frigo, “A modified split-radix FFT with fewer arithmetic
operations,” IEEE Trans. Signal Process 55 (1), 111–119 (2007).
49. T. Lundy and J. Van Buskirk, “A new matrix approach to real FFTs and
convolutions of length 2k ,” Computing 80 (1), 23–45 (2007).
50. J. Morgenstern, “Note on a lower bound of the linear complexity of the fast
Fourier transform,” J. ACM 20, 305–306 (1973).
51. C. H. Papadimitriou, “Optimality of the fast Fourier transform,” J. ACM 26,
95–102 (1979).
52. D. Potts, G. Steidl, and M. Tasche, “Fast Fourier transforms for nonequispaced
data: A tutorial,” in Modern Sampling Theory: Mathematics and Applications,
J. J. Benedetto and P. Ferreira, Eds., 247–270 Birkhauser, Boston (2001).
53. V. Rokhlin and M. Tygert, “Fast algorithms for spherical harmonic
expansions,” SIAM J. Sci. Comput. 27 (6), 1903–1928 (2006).
54. J. C. Schatzman, “Accuracy of the discrete Fourier transform and the fast
Fourier transform,” SIAM J. Sci. Comput. 17, 1150–1166 (1996).
55. O. V. Shentov, S. K. Mitra, U. Heute, and A. N. Hossen, “Subband DFT.
I. Definition, interpretations and extensions,” Signal Process 41, 261–277
(1995).
56. S. Winograd, “On computing the discrete Fourier transform,” Math. Comput.
32, 175–199 (1978).
57. H. G. Sarukhanyan, “Hadamard matrices: construction methods and appli-
cations,” in Proc. of Workshop on Transforms and Filter Banks, Feb. 21–27,
Tampere, Finland, 95–130 (1998).
58. H. G. Sarukhanyan, “Decomposition of the Hadamard matrices and fast
Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture
Notes in Computer Science, 1296 (1997).
59. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Surveys in Contemporary Design Theory, John Wiley & Sons,
Hoboken, NJ (1992).
60. S. S. Agaian, “Apparatus for Walsh–Hadamard Transform,” in cooperation
with D. Gevorkian and A. Galanterian, USSR Patent No. SU 1832303 A1
(1992).
61. S. S. Agaian, “Parallel Haar Processor,” in cooperation with D. Gevorkian,
A. Galanterian, and A. Melkumian, USSR Patent No. SU 1667103 A1 (1991).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


90 Chapter 2

62. S. S. Agaian, “Hadamard Processor for Signal Processing,” Certificate of


Authorship No. 1098005, USSR (1983).
63. S. S. Agaian, “Haar Type Processor,” in cooperation with K. Abgarian,
Certificate of Authorship No. 1169866, USSR (1985).
64. S. S. Agaian, “Haar Processor for Signal Processing,” in cooperation with
A. Sukasian, Certificate of Authorship No. 1187176, USSR (1985).
65. S. S. Agaian, “Generalized Haar Processor for Signal Processing,” in
cooperation with A. Matevosian and A. Melkumian, Certificate of Authorship
No. 1116435, USSR (1984).
66. K. R. Rao, V. Devarajan, V. Vlasenko, and M. A. Arasimhan, “CalSal
Walsh–Hadamard transform,” IEEE Transactions on ASSP ASSP-26, 605–607
(1978).
67. J. J. Sylvester, “Thoughts on inverse orthogonal matrices, simultaneous sign
successions and tesselated pavements in two or more colors, with applications
to Newton’s Rule, ornamental till-work, and the theory of numbers,” Phil.
Mag. 34, 461–475 (1867).
68. Z. Li, H. V. Sorensen, and C. S. Burus, “FFT and convolution algorithms an
DSP micro processors,” in Proc. of IEEE Int. Conf. Acoust., Speech, Signal
Processing, 289–294 (1986).
69. R. K. Montoye, E. Hokenek, and S. L. Runyon, “Design of the IBM RISC
System/6000 floating point execution unit,” IBM J. Res. Dev. 34, 71–77 (1990).
70. S. S. Agaian and H. G. Sarukhanyan, “Hadamard matrices representation by
(−1, +1)-vectors,” in Proc. of Int. Conf. Dedicated to Hadamard Problem’s
Centenary, Australia (1993).
71. S. Y. Kung, VLSI Array Processors, Prentice-Hall, Englewood Cliffs, NJ
(1988).
72. D. Coppersmith, E. Feig, and E. Linzer, “Hadamard transforms on multiply/
add architectures,” IEEE Trans. Signal Process. 42 (4), 969–970 (1994).
73. S. Samadi, Y. Suzukake and H. Iwakura, “On automatic derivation of fast
Hadamard transform using generic programming,” in Proc. of 1998 IEEE
Asia–Pacific Conf. on Circuit and Systems, Thailand, 327–330 (1998).
74. M. Barazande-Pour and J. W. Mark, “Adaptive MHDCT coding of images,” in
Proc. IEEE Image Proces. Conf., ICIP-94 1, Austin, TX, 90–94 (Nov. 1994).
75. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics, 1168, Springer-Verlag, New York (1985).
76. R. Stasinski and J. Konrad, “A new class of fast shape-adaptive orthogonal
transforms and their application to region-based image compression,” IEEE
Trans. Circuits Syst. Video Technol. 9 (1), 16–34 (1999).
77. B. K. Harms, J. B. Park, and S. A. Dyer, “Optimal measurement techniques
utilizing Hadamard transforms,” IEEE Trans. Instrum. Meas. 43 (3), 397–402
(June 1994).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Fast Classical Discrete Orthogonal Transforms 91

78. C. Anshi, Li Di and Zh. Renzhong, “A research on fast Hadamard transform


(FHT) digital systems,” in Proc. of IEEE TENCON 93, Beijing, 541–546
(1993).
79. S. Agaian, Optimal algorithms for fast orthogonal transforms and their
realization, Cybernetics and Computer Technology, Yearbook, 2, Nauka,
Moscow (1986) 231–319.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Chapter 3
Discrete Orthogonal Transforms
and Hadamard Matrices
The increasing importance of large vectors in processing and parallel computing
in many scientific and engineering applications requires new ideas for designing
superefficient algorithms of the transforms and their implementations. In the
past decade, fast orthogonal transforms have been widely used in areas such
as data compression, pattern recognition and image reconstruction, interpolation,
linear filtering, spectral analysis, watermarking, cryptography, and communication
systems. The computation of unitary transforms is complicated and time
consuming. However, it would not be possible to use orthogonal transforms in
signal and image processing applications without effective algorithms to calculate
them. The increasing requirements of speed and cost in many applications have
stimulated the development of new fast unitary transforms such as Fourier, cosine,
sine, Hartley, Hadamard, and slant transforms.1–100
A class of HTs (such as the Hadamard matrices ordered by Walsh and
Paley) plays an imperfect role among these orthogonal transforms. These
matrices are known as nonsinusoidal orthogonal transform matrices and have
been applied in digital signal processing.1–9,12,14,20–23,25–27,31,32,38,39,41,43,50,54
Recently, HTs and their variations have been widely used in audio and
video processing.2,10,12,19,33,70,73,74,80,82,83,85,87,89,100 For efficient computation of
these transforms, fast algorithms were developed.3,7,9,11,15,28,41,42,45,51,53,54,59–61,79,81
These algorithms require only N log2 N addition and subtraction operations (N =
2k , N = 12 · 2k , N = 4k , and several others). Alternatively, the achievement
of commonly used transforms has motivated many researchers in recent years to
generalize and parameterize these transforms in order to expand the range of their
applications and provide more flexibility in representing, encrypting, interpreting,
and processing signals.
Many of today’s advanced workstations (for example, IBM RISC/system
6000, model 530) and other signal processors are designed for efficient, fused
multiply/add operations15–17 in which the primitive operation is a multiply/add
±a ±bc operation, where a, b, and c are real numbers.
In Ref. 17, the decimation-in-time “radix-4” HT was developed with the support
of multiply/add instruction. The authors have shown that the routine of the new
“radix-4” algorithm is 5.6–7.4% faster than a regular “radix-4” algorithm routine.15

93

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


94 Chapter 3

In this chapter, we present the WHT based on the fast discrete orthogonal
algorithms such as Fourier, cosine, sine, slant, and others. The basic idea of these
algorithms is the following: first we compute the WHT coefficients, then using
the so-called correction matrix, we convert these coefficients to transform domain
coefficients. These algorithms are useful for development of integer-to-integer
DOTs and for new applications, such as data hiding and signal/image encryption.

3.1 Fast DOTs via the WHT


An N-point DOT can be defined as

X = F N x, (3.1)

where x = (x0 , x1 , . . . , xN−1 ) and X = (X0 , X1 , . . . , XN−1 ) denote the input and
output column vectors, respectively, and F N is an arbitrary DOT matrix of order N.
We can represent Eq. (3.1) in the following form:

1
X = FN x = F N HN HNT x, (3.2)
N

where HN is an HT matrix of order N = 2n . Denote AN = (1/N)F N HN or


F N = AN HN (recall that HN is a symmetric matrix). Then, Eq. (3.2) takes the
form

X = AN HN x. (3.3)

In other words, the HT coefficients are computed first and then they are used to
obtain the coefficients of discrete transform F N . This is achieved by the transform
matrix AN , which is orthonormal and has a block-diagonal structure. We will
call AN a correction transform. Thus, any transform can be decomposed into two
orthogonal transforms, namely, (1) an HT and (2) a correction transform.

Lemma 3.1.1: Let the orthogonal transform matrix F N = F2n have the following
representation:
 
F 2n−1 F 2n−1
F2n = , (3.4)
B2n−1 −B2n−1

where F 2n−1 stands for an appropriate permutation of F2n−1 and B2n−1 is an


2n−1 × 2n−1 submatrix of F2n . Then,

A2n = 2n−1 I2 ⊕ 2n−2 B2 H2 ⊕ 2n−3 B4 H4 ⊕ · · · ⊕ 2B2n−2 H2n−2 ⊕ B2n−1 H2n−1 , (3.5)

that is, the AN matrix has a block-diagonal structure, where ⊕ denotes the direct
sum of matrices.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 95

Proof: Clearly, this is true for n = 1. Let us assume that Eq. (3.5) is valid for
N = 2k−1 ; i.e.,

A2k−1 = 2k−2 I2 ⊕ 2k−3 B2 H2 ⊕ 2k−4 B4 H4 ⊕ · · · ⊕ 2B2k−3 H2k−3 ⊕ B2k−2 H2k−2 , (3.6)

and show that it takes place for N = 2k .


From the definition of the correction transform matrix AN , we have
  
F 2k−1 F 2k−1 H2k−1 H2k−1
A2k = F2k H2k =
B2k−1 −B2k−1 H2k−1 −H2k−1
= 2F 2k−1 H2k−1 ⊕ 2B2k−1 H2k−1 . (3.7)

Using the definitions of F2k−1 and H2k−1 once again, we can rewrite Eq. (3.7) as

A2k = 4F 2k−2 H2k−2 ⊕ 4B2k−2 H2k−2 ⊕ B2k−1 H2k−1 . (3.8)

From Eq. (3.6), we conclude that

A2k = 2k−1 I2 ⊕ 2k−2 B2 H2 ⊕ 2k−3 B4 H4 ⊕ · · · ⊕ 2B2k−2 H2k−2 ⊕ B2k−1 H2k−1 . (3.9)

For example, Hadamard-based discrete transforms of order 16 can be represented


as the following (see Refs. 18 and 79–82), where X denotes nonzero elements:
⎛ ⎞
⎜⎜⎜X 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ⎟⎟⎟
⎜⎜⎜⎜0 X 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 X X 0 0 0 0 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 X X 0 0 0 0 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 X X X X 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 X X X X 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 X X X X 0 0 0 0 0 0 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 X X X X 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 0 0 0 X X X X X X X X ⎟⎟⎟⎟⎟ .
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 0 0 0 X X X X X X X X ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 0 0 0 X X X X X X X X ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 0 0 0 0 X X X X X X X X ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 0 0 0 X X X X X X X X ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 0 0 0 X X X X X X X X ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 0 0 0 X X X X X X X X ⎟⎟⎟⎟⎟
⎝ ⎠
0 0 0 0 0 0 0 0 X X X X X X X X

3.2 FFT Implementation


Now we want to compute the Fourier transform9,33,40,80–82 using the HT. The
N-point DFT can be defined as

X = F N x, (3.10)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


96 Chapter 3

where x = (x0 , x1 , . . . , xN−1 )T and X = (X0 , X1 , . . . , XN−1 )T denote the input and
output column vectors, respectively, and
 N−1
F N = WNkm (3.11)
k,m=0

is the Fourier transform matrix of order N = 2k , where


 
2π 2π 2π √
WN = exp − j = cos − j sin , j= −1. (3.12)
N N N

We can check that for any integers r, p, and for k, m = 0, 1, . . . , N/2 − 1,

WNk+N/2 = WNk , WNk+N/2 = WNk , (3.13)


WN2km = WN2k(m+N/2) , WN(2k+1)m = −WN(2k+1)(m+N/2) . (3.14)

Now we represent Eq. (3.10) in the following form:


1
X = F N IN x = F N HN HNT x, (3.15)
N
where HN in Eq. (3.13) is a Sylvester–Hadamard matrix of order N = 2n , i.e.,

HNT = HN , HN HNT = NIN , (3.16)

and
 
H2n−1 H2n−1
H 2n = , H1 = (1). (3.17)
H2n−1 −H2n−1

Denoting AN = (1/N)F N HN or F N = AN HN and using Eq. (3.15), we obtain

X = AN HN x = AN (HN x). (3.18)

This means that first, the HT coefficients are computed, and then they are used to
obtain the DFT coefficients. Using Eqs. (3.13) and (3.14), we can represent the
DFT matrix by Eq. (3.4). Hence, according to Lemma 3.1.1, the matrix

AN = (1/N)F N HN (3.19)

can be represented as a block-diagonal structure [see Eq. (3.5)].


We show the procedure in Fig. 3.1 as a generalized block diagram.
Remark: The expression in Eq. (3.15) is true for any orthogonal transform with
 
B2n−1 B2n−1
K2n = , K1 = (1). (3.20)
S 2n−1 −S 2n−1
(See, for example, the modified Haar transform.)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 97

Figure 3.1 Generalized block diagram of the procedure for obtaining HT coefficients.

Without losing the generalization, we prove it for the cases N = 4, 8, and 16.
Case N = 4: The Fourier matrix of order 4 is
⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟
 3 ⎜⎜⎜1 − j −1 j ⎟⎟⎟⎟
F4 = W4km = ⎜⎜⎜⎜ ⎟.
⎜⎜⎝1 −1 1 −1⎟⎟⎟⎟⎠
(3.21)
k,m=0
1 j −1 − j
Using the permutation matrix
⎛ ⎞
⎜⎜⎜1 0 0 0⎟⎟

⎜⎜⎜0 0 1 0⎟⎟⎟⎟
P1 = ⎜⎜⎜⎜ ⎟,
0⎟⎟⎟⎠⎟
(3.22)
⎜⎜⎝0 1 0
0 0 0 1
we can represent the matrix F4 in the following equivalent form:
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 0 0 0⎟⎟⎟ ⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜1 1 1 1⎟⎟⎟  
⎜⎜⎜0 0 1 0⎟⎟⎟ ⎜⎜⎜1 − j −1 j ⎟⎟⎟ ⎜⎜⎜1 −1 1 −1⎟⎟⎟ H2 H2
F 4 = P1 F4 = ⎜⎜⎜ ⎜ ⎟
⎟⎜ ⎜ ⎟
⎟=⎜ ⎜ ⎟
⎟= . (3.23)
⎜⎜⎝0 1 0 0⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝1 −1 1 −1⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝1 − j −1 j ⎟⎟⎟⎟⎠ B2 −B2
0 0 0 1 1 j −1 − j 1 j −1 − j
Then, we obtain
   
1 0 1 0
A4 = (1/4)F4 H4 = (1/4) (2H2 H2 ⊕ 2H2 B2 ) = ⊕ , (3.24)
0 1 0 −j
i.e., A4 is the block-diagonal matrix.
Case N = 8: The Fourier matrix of order 8 is
⎛ 0 ⎞
⎜⎜⎜W8 W80 W80 W80 W80 W80 W80 W80 ⎟⎟⎟
⎜⎜⎜⎜ 0 ⎟⎟
⎜⎜⎜W8 W81 W82 W83 −W80 −W81 −W82 −W83 ⎟⎟⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
⎜⎜⎜W8 W82 −W80 −W82 W80 W82 −W80 −W82 ⎟⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
⎜⎜W W83 −W82 W81 −W80 −W83 W82 −W81 ⎟⎟⎟⎟
F8 = ⎜⎜⎜⎜ 80 ⎟⎟ . (3.25)
⎜⎜⎜W −W 0 W 0 −W 0 W 0
⎜⎜⎜ 8 −W80 W80 −W80 ⎟⎟⎟⎟⎟
8 8
⎜⎜⎜W 0 −W 1 W 2 −W 3 −W 0
8 8
⎟⎟
⎜⎜⎜ 8 W81 −W82 W83 ⎟⎟⎟⎟
8 8 8
⎜⎜⎜W 0 −W 2 −W 0 W 2 W 0
8 ⎟⎟
⎜⎜⎝ 8 −W82 −W80 W82 ⎟⎟⎟⎟
8 8 8 8 ⎟⎠
W80 −W83 −W82 −W81 −W80 W83 W82 W81

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


98 Chapter 3

Using the permutation matrix


⎛ ⎞
⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 1 0 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜0 0⎟⎟⎟⎟
Q1 = ⎜⎜⎜⎜⎜
0 0 0 0 0 1
⎟, (3.26)
⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 1 0 0 0 0⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟⎟⎟
⎝ ⎠
0 0 0 0 0 0 0 1

represent the matrix F8 in the following equivalent form:


 
F4 F4
F8 = , (3.27)
B4 −B4

where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜W 0 W 0 W 0 W 0 ⎟⎟⎟ ⎜⎜⎜W 0 W 1 W 2 W 3 ⎟⎟⎟
⎜⎜⎜⎜ 8 8 8 8⎟ ⎟ ⎜⎜⎜⎜ 8 8 8 8⎟ ⎟
⎜⎜⎜W 0 W 2 −W 0 −W 2 ⎟⎟⎟⎟⎟ ⎜⎜⎜W 0 W 3 −W 2 W 1 ⎟⎟⎟⎟⎟
F4 = ⎜⎜⎜⎜ 8 8 8 8⎟ ⎟, B4 = ⎜⎜⎜⎜ 8 8 8 8⎟ ⎟ . (3.28)
⎜⎜⎜W 0 −W 0 W 0 −W 0 ⎟⎟⎟⎟⎟ ⎜⎜⎜W 0 −W 1 W 2 −W 3 ⎟⎟⎟⎟⎟
⎜⎜⎜⎝ 8 8 8 8⎟ ⎟⎟⎠ ⎜⎜⎜⎝ 8 8 8 8⎟ ⎟⎟

W80 −W82 −W80 W82 W80 −W83 −W82 −W81

Now, using the permutation matrix Q2 = P1 ⊕ I4 , represent the matrix F8 in the


following equivalent form:
⎛ ⎞
⎜⎜⎜F2 F2 F2 F2 ⎟⎟⎟
⎜ ⎟
F8 = ⎜⎜⎜⎜ B2 −B2 B2 −B2 ⎟⎟⎟⎟ , (3.29)
⎝ ⎠
B4 −B4

where
⎛ ⎞
⎜⎜⎜1 a − j −a∗ ⎟⎟⎟ √
    ⎜⎜⎜ ⎟
1 1 1 −j ⎜1 −a∗ j a ⎟⎟⎟⎟⎟ 2
F2 = H2 = , B2 = , B4 = ⎜⎜⎜⎜ , a= (1 − j).
1 −1 1 j ⎜⎜⎜1 −a − j a∗ ⎟⎟⎟⎟⎟ 2
⎝ ⎠
1 a∗ j −a
(3.30)

We can show that the correction matrix of order 8 has the following form:

1
A8 = (D0 ⊕ D1 ⊕ D2 ) , (3.31)
8

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 99

where
 
1− j 1+ j
D0 = 8I2 , D1 = 4 ,
1+ j 1− j
⎛ ⎞
⎜⎜⎜(1 − j) + (a − a∗ ) (1 − j) − (a − a∗ ) (1 + j) + (a + a∗ ) (1 + j) − (a + a∗ )⎟⎟
⎜⎜⎜(1 + ∗ ⎟
j) + (a − a∗ ) (1 + j) − (a − a∗ ) (1 − j) − (a + a∗ ) (1 − j) + (a + a )⎟⎟⎟⎟
D2 = 2 ⎜⎜⎜⎜ ⎟.
⎜⎜⎝(1 + j) − (a − a∗ ) (1 − j) + (a − a∗ ) (1 + j) − (a + a∗ ) (1 + j) + (a + a∗ )⎟⎟⎟⎟⎠
(1 − j) − (a − a∗ ) (1 + j) + (a − a∗ ) (1 − j) + (a + a∗ ) (1 − j) − (a + a∗ )
(3.32)
√ √
Because a − a∗ = − j 2 and a + a∗ = 2,
⎛ √ √ ⎞ ⎛ √ √ ⎞
⎜⎜⎜1 1 + √2 1 − ⎟ ⎜⎜⎜ 1 + − ⎟
⎜⎜⎜
1 √2⎟⎟⎟⎟ ⎜⎜⎜ √ 2 1 √2 −1 −1⎟⎟⎟⎟
⎜1 1 − √2 1 + ⎟
⎟ ⎜ ⎟⎟
D2 = 2 ⎜⎜⎜⎜⎜
1 √2⎟⎟⎟⎟ − 2 j ⎜⎜⎜⎜−1 + √2 −1 − √2 1 1⎟⎟⎟⎟ . (3.33)
⎜⎜⎜1 ⎟⎟⎟ ⎜⎜⎜ 1 − ⎟

1 1 − √2 1 + 2
√ ⎟⎠ ⎜⎝ √2 1 + √2 −1 −1⎟⎟⎟⎠
1 1 1+ 2 1− 2 −1 − 2 −1 + 2 1 1
√ √
We introduce the notations: b = (1/4) + ( 2/4), c = (1/4) − ( 2/4). Now the
correction matrix A8 = Ar8 + jAi8 can be written as
⎛ ⎞
   ⎜⎜⎜⎜1/4 1/4 b c ⎟⎟⎟⎟
1 0 1/2 1/2 ⎜⎜⎜1/4 1/4 c b⎟⎟⎟
Ar8 = ⊕ ⊕⎜ ⎟,
1/2 1/2 ⎜⎜⎜⎜⎝1/4 1/4 c b⎟⎟⎟⎟⎠
(3.34)
0 1
1/4 1/4 b c
⎛ ⎞
    ⎜⎜⎜⎜ b c −1/4 −1/4⎟⎟⎟⎟
0 0 −1/2 1/2 ⎜⎜⎜−c −b 1/4 1/4⎟⎟⎟
Ai8 = ⊕ ⊕⎜ ⎟.
1/2 −1/2 ⎜⎜⎜⎜⎝ c b −1/4 −1/4⎟⎟⎟⎟⎠
(3.35)
0 0
−b −c 1/4 1/4

Now, the 8-point Fourier transform z = F8 x can be realized as follows. First, we


perform the 8-point HT y = H8 x, then we compute the 8-point correction transform
z = Ar8 y + jAi8 y. The flow graphs corresponding to the real and imaginary parts of
the correction transform are given in Fig. 3.2.
In Fig. 3.2, we see that an 8-point correction transform needs 14 real addition, 8
real multiplication, and 4 shift operations.
 15
Case N = 16: Let F16 = W16 mn
be a Fourier transform matrix of order 16.
m,n=0
Denoting the rows of the Fourier matrix F16 by 0, 1, . . . , 15, we can represent
the matrix F16 in the following equivalent form with rows 0, 2, . . . , 12, 14;
1, 3, . . . , 13, 15:
 
F8 F8
F16 = , (3.36)
B8 −B8

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


100 Chapter 3

y0 z0r y0 z0i = 0

y1 z1r y1 z1i = 0

1/2 1/2
y2 z2r = z3r y2 z2i = – z3i

y3 y3

1/4 a
y4 z4r = z7r y4 z4i

b b

y5 y5 z5i
a

a 1/4 i
y6 z5r = z6r y6 z6

a b

y7 y7 z7i
b

Figure 3.2 Flow graph (real and imaginary parts) of an 8-point correction transform.

where
⎛ ⎞
⎜⎜⎜W 0 0
W16 0
W16 0
W16 0
W16 0
W16 0
W16 0 ⎟
W16 ⎟⎟⎟
⎜⎜⎜⎜ 16 ⎟⎟
⎜⎜⎜W 0 6 ⎟ ⎟⎟⎟
⎜⎜⎜ 16
2
W16 4
W16 6
W16 −W16
0
−W16
2
−W16
4
−W16 ⎟⎟⎟
⎜⎜⎜ 0 4 ⎟ ⎟⎟⎟⎟
⎜⎜⎜W16 4
W16 −W16
0
−W16
4 0
W16 4
W16 −W16
0
−W16 ⎟⎟
⎜⎜⎜
⎜⎜W 0 2 ⎟ ⎟⎟⎟
6
−W16
4 2
−W16
0
−W16
6 4
−W16
F8 = ⎜⎜⎜⎜⎜ 16 ⎟⎟⎟ ,
W16 W16 W16
⎜⎜⎜W 0 0 ⎟

⎜⎜⎜ 16 −W16
0 0
W16 −W16
0 0
W16 −W16
0 0
W16 −W16 ⎟⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
6 ⎟ ⎟⎟⎟
⎜⎜⎜W16 −W16
0 4
W16 −W16
6
−W16
0 2
W16 −W16
4
W16 ⎟⎟⎟
⎜⎜⎜ 0 4 ⎟ ⎟⎟⎟
⎜⎜⎜W16 −W16
2
−W16
0 4
W16 −W16
0
−W16
4
−W16
0
W16 ⎟⎟⎟
⎜⎜⎝
2 ⎠
0
W16 −W16
6
−W16
4
−W16
2 0
W16 6
W16 4
W16 W16
⎛ ⎞ (3.37)
⎜⎜⎜W 0 1
W16 2
W16 3
W16 4
W16 5
W16 6
W16 7 ⎟
W16 ⎟⎟⎟
⎜⎜⎜⎜ 16 ⎟⎟
⎜⎜⎜W 0 5 ⎟
⎜⎜⎜ 16
3
W16 6
W16 −W16
1
−W16
4
−W16
7 2
W16 W16 ⎟⎟⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
3 ⎟
⎜⎜⎜W16 5
W16 −W16
2
−W16
7 4
W16 −W16
1
−W16
6
W16 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜W 0 1 ⎟
7
−W16
6 5
−W16
4 3
−W16
2
W16 ⎟⎟⎟⎟
B8 = ⎜⎜⎜⎜⎜ 16
W16 W16 W16
⎟⎟ .
⎜⎜⎜W 0 7 ⎟ ⎟⎟⎟
⎜⎜⎜ 16 −W16
1 2
W16 −W16
3 4
W16 −W16
5 6
W16 −W16 ⎟⎟⎟
⎜⎜⎜ 0 5 ⎟ ⎟⎟⎟
⎜⎜⎜W16 −W16
3 6
W16 1
W16 −W16
4 7
W16 2
W16 −W16 ⎟⎟⎟
⎜⎜⎜ 0 3 ⎟ ⎟⎟⎟
⎜⎜⎜W16 −W16
5
−W16
2 7
W16 4
W16 1
W16 −W16
6
−W16 ⎟⎟⎟
⎜⎜⎝
1 ⎠
0
W16 −W16
7
−W16
6
−W16
5
−W16
4
−W16
3
−W16
2
−W16

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 101

Similarly, we obtain

    ⎛ ⎞
⎜⎜⎜W 0 W16 0 ⎟
⎟⎟⎟
F
F8 = 4
F4
, F4 =
F2 F2
, F2 = ⎜⎜⎝ ⎜ 16 ⎟⎟ ,
B4 −B4 B2 −B2 0 ⎠
0
W16 −W16
⎛ ⎞
⎜⎜⎜W 0 2
W16 4
W16 6 ⎟
W16 ⎟⎟⎟
⎛ ⎞ ⎜
⎜⎜⎜ 16 ⎟⎟ (3.38)
⎜⎜⎜W 0 4 ⎟⎟⎟⎟ ⎜⎜⎜W 0 2 ⎟ ⎟⎟⎟
W16 6
W16 −W16
4
W16
B2 = ⎜⎜⎜⎝ 16 ⎟⎟⎠ , B4 = ⎜⎜⎜ ⎜ 16 ⎟⎟⎟ .
⎜ 6 ⎟
0
W16 −W16
4
⎜⎜⎜W16 −W16 W16 −W16 ⎟⎟⎟⎟⎟
⎜ 0 2 4
⎜⎝ 0 2 ⎠

W16 −W166
−W16
4
−W16

Therefore, the Fourier transform matrix of order 16 from Eq. (3.36) can be
represented in the following equivalent form:
⎛ ⎞
⎜⎜⎜F2 F2 F2 F2 F2 F2 F2 F2 ⎟⎟⎟
⎜⎜⎜ ⎟

⎜ B2 −B2 B2 −B2 B2 −B2 B2 −B2 ⎟⎟⎟⎟
F16 = ⎜⎜⎜ ⎟⎟⎟ . (3.39)
⎜⎜⎝ B4 −B4 B4 −B4 ⎟⎟⎠
B8 −B8

Using the properties of the exponential function W, we obtain

π π
1
W16 = cos − j sin = c − js = b,
8 8 √
π π 2
2
W16 = cos − j sin = (1 − j) = a,
4 4 2
3π 3π
3
W16 = cos − j sin = s − jc = − jb∗ ,
8 8
π π
4
W16 = cos − j sin = − j, (3.40)
2 2
5π 5π
5
W16 = cos − j sin = s + jc = jb,
8 8 √
3π 3π 2
6
W16 = cos − j sin = (1 + j) = a∗ ,
4 4 2
7π 7π
7
W16 = cos − j sin = −c − js = −b∗ .
8 8

Using Eq. (3.40), we obtain

B2 = B12 + jB22 , (3.41)

where
     
1 0 0 −1 1 1
B12 = , B22 = , F2 = . (3.42)
1 0 0 1 1 −1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


102 Chapter 3

We can also check that

B4 = B14 + jB24 , (3.43)

where
⎛ √ √ ⎞ ⎛ √ √ ⎞
⎜⎜⎜ 2 2 ⎟⎟⎟ ⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜1 0 ⎟
⎟⎟⎟ ⎜
⎜⎜⎜0 − −1 ⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟
⎟ ⎜
⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟⎟ ⎜
⎜⎜⎜0 2 2 ⎟⎟⎟

⎜⎜⎜ 1 0 ⎟

⎟ ⎜ 1 − ⎟⎟
B4 = ⎜⎜⎜
1
√2 ⎟
√2 ⎟⎟⎟⎟ , B4 = ⎜⎜⎜⎜2 ⎜ √2 √2 ⎟⎟⎟⎟⎟ . (3.44)
⎜⎜⎜ 2 2 ⎟⎟⎟ ⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜1 − 0 − ⎟⎟⎟ ⎜⎜⎜0 −1 − ⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟⎟ ⎟ ⎜
⎜⎜⎜ √2 √2 ⎟⎟⎟⎟
⎜⎜⎜ ⎜⎜⎜ ⎟
⎜⎝ 2 2 ⎟⎟⎠ ⎝0 − 2 1 2 ⎟⎟⎟⎠
1 − 0 −
2 2 2 2
⎛ ⎞
⎜⎜⎜1 b a − jb∗ − j jb a∗ −b∗ ⎟⎟
⎜⎜⎜⎜1 − jb∗ a∗ −b ⎟
⎜⎜⎜ j b∗ a jb ⎟⎟⎟⎟⎟
⎜⎜⎜1 jb −a b∗ − j −b −a∗ − jb∗ ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟

1 −b −a ∗
j − jb∗ −a b ⎟⎟⎟⎟
B8 = ⎜⎜⎜⎜⎜
jb
∗ ⎟ . (3.45)
⎜⎜⎜1 −b a ∗
jb − j − jb a ∗
b ⎟⎟⎟⎟
⎜⎜⎜1 jb∗ a∗ b ⎟
⎜⎜⎜ j −b∗ a − jb ⎟⎟⎟⎟
⎜⎜⎜1 − jb −a −b∗ − j b −a∗ jb∗ ⎟⎟⎟⎟⎟
⎝ ⎠
1 b∗ −a∗ − jb j jb∗ −a −b

We can see that

B8 = B18 + jB28 , (3.46)

where
⎛ √ √ ⎞
⎜⎜⎜ 2 2 ⎟⎟
⎜⎜⎜1
⎜⎜⎜
c s 0 s −c⎟⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜1 s −c 0 c s⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜1 2 2 ⎟
⎜⎜⎜ s − c 0 −c − s⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
2 2 ⎟
⎜⎜⎜1 −c − s 0 s − c⎟⎟⎟⎟
B8 = ⎜⎜⎜⎜⎜
1
√2 √2 ⎟⎟⎟ ,
⎟⎟⎟
⎜⎜⎜ 2 2 ⎟
⎜⎜⎜1 −c −s 0 −s c⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟
⎜⎜⎜1 −s c 0 −c −s⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜1 2 2 ⎟
⎜⎜⎜ −s − −c 0 c − −s⎟⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎝ ⎟⎟⎟
2 2 ⎟
1 c − −s 0 −s − −c⎠
2 2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 103

⎛ √ √ ⎞
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 −s −
2
c −1
2
−s⎟⎟⎟⎟⎟
⎜⎜⎜ c
⎟⎟⎟
⎜⎜⎜ √2 √2
⎟⎟⎟
⎜⎜⎜ 2 2 ⎟
⎜⎜⎜0 −c s 1 s − c⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜0 2 2 ⎟⎟
⎜⎜⎜ c s −1 s − −c⎟⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟
√ √ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟
⎜⎜⎜0 −s − c 1 −c −s⎟⎟⎟⎟

B18 = ⎜⎜⎜⎜ √2 √2 ⎟⎟⎟ .
⎟⎟⎟ (3.47)
⎜⎜⎜ ⎟
⎜⎜⎜0 −
2
c −1 −c
2
s⎟⎟⎟⎟⎟
⎜⎜⎜ s
⎟⎟⎟
⎜⎜⎜ √2 √2 ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟
⎜⎜⎜0 c −s 1 −s − −c⎟⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜ √ √ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟
⎜⎜⎜0 −c −s −1 −s − c⎟⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎜⎜ √ √ ⎟⎟⎟
⎜⎜⎜ 2 2 ⎟⎟
⎝0 s − −c 1 c s⎠
2 2

Now, using Eq. (3.39), the Fourier transform matrix can be represented in the
following equivalent form:

F16 = F16
1
+ jF16
2
, (3.48)

where
⎛ ⎞
⎜⎜⎜H2 H2 H2 H2 H2 H2 H2 H2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ B1 −B12 B12 −B12 B12 −B12 B12 −B12 ⎟⎟⎟⎟
1
F16 = ⎜⎜⎜⎜ 2 ⎟⎟⎟ ,
⎜⎜⎜ B14 −B14 B14 −B14 ⎟⎟⎟⎟
⎜⎜⎝ ⎟⎠
B18 −B18
⎛ ⎞ (3.49)
⎜⎜⎜O2 O2 O2 O2 O2 O2 O2 O2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ B2 −B22 B22 −B22 B22 −B22 B22 −B22 ⎟⎟⎟⎟
2
F16 = ⎜⎜⎜⎜ 2 ⎟⎟⎟ ,
⎜⎜⎜ B24 −B24 B24 −B24 ⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠
B28 −B28

where O2 is the zero matrix of order 2.


According to Lemma 3.1.1, the correction transform matrix takes the following
form:

A16 = A116 + jA216 , (3.50)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


104 Chapter 3

where

1  
A116 = 8I2 ⊕ 8B12 H2 ⊕ 4B14 H4 ⊕ 2B18 H8 ,
16 (3.51)
1  
A216 = O2 ⊕ 8B22 H2 ⊕ 4B24 H4 ⊕ 2B28 H8 .
16

Now, using the following notations:


√ √
u = 1 + 2, v = 1 − 2, s1 = 1 + 2s, s2 = 1 − 2s,
c1 = 1 + 2c, c2 = 1 − 2c, e = u + 2s, f = v + 2s,
(3.52)
g = u − 2s, h = v − 2s, r = u + 2c, t = v + 2c,
p = u − 2c, q = v − 2c,

we can represent the blocks of the correction matrix as


   
1 1 −1 1
B12 H2 = , B2 H2 =
2
,
1 1 1 −1
⎛ ⎞ ⎛ ⎞
⎜⎜⎜u v 1 1⎟⎟⎟ ⎜⎜⎜−1 −1 v u⎟⎟⎟
⎜⎜⎜u v 1 1⎟⎟⎟ ⎜⎜⎜ 1 1 −v −u⎟⎟⎟
B14 H4 = ⎜⎜⎜⎜ ⎟⎟ , B2 H4 = ⎜⎜⎜
⎜⎜⎜−1 −1 u v ⎟⎟⎟⎟⎟ ,

⎜⎜⎝v u 1 1⎟⎟⎟⎟⎠ 4
⎝ ⎠
v u 1 1 1 1 −u −v
⎛ ⎞
⎜⎜⎜e g t q c1 c2 s2 s1 ⎟⎟⎟
⎜⎜⎜e g t q c c s s ⎟⎟⎟
⎜⎜⎜ 2 1 1 2⎟
⎜⎜⎜ f h p r c1 c2 s1 s2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜ f h p r c2 c1 s2 s1 ⎟⎟⎟⎟⎟
B18 H8 = ⎜⎜⎜⎜⎜ ⎟,
⎜⎜⎜g e q t c2 c1 s1 s2 ⎟⎟⎟⎟⎟ (3.53)
⎜⎜⎜g e q t c1 c2 s2 s1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜h f r p c2 c1 s2 s1 ⎟⎟⎟⎟⎟
⎝ ⎠
h f r p c1 c2 s1 s2
⎛ ⎞
⎜⎜⎜−s1 −s2 −c2 −c1 q t g e ⎟⎟⎟
⎜⎜⎜⎜ s1 s2 c2 c1 −t −q − f −h ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−s2 −s1 −c2 −c1 r p h f ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
s1 c2 c1 −p −r − f −h ⎟⎟⎟⎟
B28 H8 = ⎜⎜⎜⎜⎜ 2
s
⎟⎟ .
⎜⎜⎜⎜−s2 −s1 −c1 −c2 t q e g ⎟⎟⎟⎟⎟
⎜⎜⎜ s2 s1 c1 c2 −q −t −g −e ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝−s1 −s2 −c1 −c2 p r f h ⎟⎟⎟⎟

s1 s2 c1 c2 −r −p −h − f

Now we want to show that the transform can be realized via fast algorithm. We
denote y = H16 x. Then, X = (1/16)A16 y. We perform the transform as

H16 y = A116 y + jA216 y. (3.54)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 105

Let z = (z0 , z1 , . . . , z15 ) and y = (y0 , y1 , . . . , y15 ). First we compute a real part of
this transform. Using the following notations:

A1 = y2 + y3 , B1 = y6 + y7 , B2 = uy4 + vy5 , B3 = vy4 + uy5 ,


C1 = c1 y12 + c2 y13 , C2 = c2 y12 + c1 y13 , S 1 = s1 y14 + s2 y15 , S 2 = s2 y14 + s1 y15 ,
(3.55)
E = ey8 + gy9 + ty10 + qy11 , F = f y8 + hy9 + py10 + ry11 ,
G = gy8 + ey9 + qy10 + ty11 , H = hy8 + f y9 + ry10 + py11 ,

we obtain

zr0 = y0 , zr1 = y1 , zr2 = A1 , zr3 = A1 ,


zr4 = B1 + B2 , zr5 = zr4 , zr6 = B1 + B3 , zr7 = zr6 ,
(3.56)
zr8 = E + C1 + S 2 , zr9 = E + C2 + S 1 , zr10 = F + C1 + S 2 , zr11 = F + C2 + S 2 ,
zr12 = G + C2 + S 1 , z13 = G + C1 + S 2 , zr14 = H + C2 + S 2 , zr15 = H + C1 + S 1 .
r

The imaginary part of a 16-point Fourier correction transform can be realized as


follows: Denoting

A1 = y2 − y3 , Bi1 = y4 + y5 ,
Bi2 = uy6 + vy7 , Bi3 = vy6 + uy7 ,
(3.57)
C1i = c1 y10 + c2 y11 , C2i = c2 y10 + c1 y11 ,
S 1i = s1 y8 + s2 y9 , S 2i = s2 y8 + s1 y9 ,
Q = qy12 + ty13 + gy14 + ey15 ,
T = ty12 + qy13 + hy14 + f y15 ,
(3.58)
R = ry12 + py13 + hy14 + f y15 ,
P = py12 + ry13 + f y14 + hy15 ,

we obtain

zi0 = 0, zi1 = 0, zi2 = −Ai1 , zi3 = Ai1 ,


zi4 = −Ai1 + Bi2 , zi5 = −zi4 , zi6 = −Ai1 + Bi1 , zi7 = −zi6 ,
(3.59)
zi8 = Q − S 1i − C2i , zi9 = −T + S 1i + C2i , zi10 = R − S 2i − C2i , zi11 = −P + S 2i + C2i ,
zi12 = T − S 2i − C1i , zi13 = −Q + S 2i + C1i , zi14 = P + S 1i − C1i , zi15 = −R + S 1i + C1i .

It can be shown that the complexity of a 16-point correction transform is 68


real addition and 56 real multiplication operations. Therefore, the 16-point Fourier

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


106 Chapter 3

Figure 3.3 Flow graph of the real part of 16-point Fourier correction transform.

transform, if using the correction transform, needs only 68+64 = 132 real addition
and 56 real multiplication operations (see Figs. 3.3 and 3.4).

3.3 Fast Hartley Transform


, -N−1
Let [Hart]N = ak,n k,n=0 be a discrete Hartley transform9,33,66,75,81,82 matrix of order
N = 2r , with
 
1 2π
ak,n = √ Cas kn , (3.60)
N N

where

Cas (α) = cos(α) + sin(α). (3.61)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 107

Figure 3.4 Flow graph of the imaginary part of a 16-point Fourier correction transform.

The N-point forward discrete Hartley transform of vector x can be expressed as

1
z = [Hart]N x = [Hart]N HN HN x = BN x, (3.62)
N

where
BN = (1/N)[Hart]N HN , (3.63)

and HN is the Sylvester–Hadamard matrix of order N,


 
H2n−1 H2n−1
H2n = , H1 = (1), n ≥ 2. (3.64)
H2n−1 −H2n−1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


108 Chapter 3

Thus, an N-point Hartley transform can be calculated by two steps as follows:


Step 1: y = H2n−1 x.
Step 2: z = B2n−1 y.
Denote
, -N−1 , -N−1
[Hart]N = C N + S N , C N = ck,n k,n=0 , S N = sk,n k,n=0 , (3.65)

where

ck,n = cos(2π/N)kn, sk,n = sin(2π/N)kn, k, n = 0, 1, . . . , N − 1. (3.66)

We can check that

c2k,N/2+n = c2k,n , c2k+1,N/2+n = −c2k+1,n ,


(3.67)
S 2k,N/2+n = s2k,n , s2k+1,N/2+n = −s2k+1,n .

Using Eq. (3.67), we can represent a discrete Hartley transform matrix by


Eq. (3.4). Hence, according to Lemma 3.1.1, the matrix

1
BN = [Hart]N HN (3.68)
N
can be represented as a block-diagonal structure [see Eq. (3.5)]. Without losing the
generalization, we can prove it for the cases N = 4, 8, and 16.
Case N = 4: The discrete Hartley transform matrix of order 4 is
⎛ ⎞
⎜⎜⎜c0,0 + s0,0 c0,1 + s0,1 c0,2 + s0,2 c0,3 + s0,3 ⎟⎟

⎜⎜⎜c + s c1,1 + s1,1 c1,2 + s1,2 c1,3 + s1,3 ⎟⎟⎟⎟
[Hart]4 = ⎜⎜⎜⎜ 1,0 1,0
⎟.
c2,3 + s2,3 ⎟⎟⎟⎠⎟
(3.69)
⎜⎝⎜c2,,0 + s2,0 c2,1 + s2,1 c2,2 + s2,2
c3,0 + s3,0 c3,1 + s3,1 c3,2 + s3,2 c3,3 + s3,3

By using the relations in Eq. (3.67) and ordering the rows of [Hart]4 as 0, 2, 1, 3,
we obtain
⎛ ⎞
⎜⎜⎜c0,0 + s0,0 c0,1 + s0,1 c0,0 + s0,0 c0,1 + s0,1 ⎟⎟ 
⎟ 
⎜⎜⎜c + s c2,1 + s2,1 c2,0 + s2,0 c2,1 + s2,1 ⎟⎟⎟⎟
[Hart]4 = ⎜⎜⎜⎜ 2,0 2,0
⎟⎟⎟ = A2 A2 , (3.70)
⎜⎜⎝c1,0 + s1,0 c1,1 + s1,1 −(c1,0 + s1,0 ) −(c1,1 + s1,1 )⎟⎟⎠ P2 −P2
c3,0 + s3,0 c3,1 + s3,1 −(c3,0 + s3,0 ) −(c3,1 + s3,1 )

where

A2 = P2 = H2 , (3.71)

i.e., [Hart]4 is the Hadamard matrix; therefore, the correction transform in this case
(B4 ) is the identity matrix.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 109

Case N = 8: The discrete Hartley transform matrix of order 8 can be represented


as
⎛ ⎞
⎜⎜⎜H2 H2 H2 H2 ⎟⎟⎟
⎜ ⎟
[Hart]8 = ⎜⎜⎜⎜⎜H2 −H2 H2 −H2 ⎟⎟⎟⎟⎟ , (3.72)
⎝ ⎠
P4 −P4

where
⎛ √ ⎞
⎜⎜⎜1 ⎟⎟
√ ⎟⎟⎟⎟
2 1 0
  ⎜⎜⎜⎜
1 1 ⎜⎜1 0 −1 2⎟⎟⎟⎟
H2 = , P4 = ⎜⎜⎜⎜ √ ⎟.
⎜⎜⎜1 − 2 1 0 ⎟⎟⎟⎟⎟
(3.73)
1 −1
⎜⎜⎝ √ ⎟⎟⎠
1 0 −1 − 2

Note that

[Hart]8 = Q2 Q1 [Hart]8 , (3.74)

where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜⎜0 ⎟
0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜0 ⎟
0⎟⎟⎟⎟⎟
⎜⎜⎜ 0 1 0 0 0 0
⎟ ⎜⎜⎜ 0 1 0 0 0 0

⎜⎜⎜0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0⎟⎟⎟⎟⎟
⎜⎜⎜ 0 0 0 1 0 0
⎟ ⎜⎜⎜ 1 0 0 0 0 0

⎜⎜⎜0 0 0 0 0 0 1 0⎟⎟⎟⎟ ⎜⎜⎜0 0 0 1 0 0 0 0⎟⎟⎟⎟
Q1 = ⎜⎜⎜⎜ ⎟⎟ , Q2 = ⎜⎜⎜⎜ ⎟⎟ . (3.75)
⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 1 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜⎝0 0 0 0 0 1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎝0 0 0 0 0 0 1 0⎟⎟⎟⎟⎟
⎠ ⎠
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1

The correction matrix in this case will be B8 = (1/8)[4I2 ⊕ 4I2 ⊕ 2P4 H4 ], i.e.,

⎡ ⎛ ⎞⎤
⎢⎢⎢ ⎜⎜⎜ b a s −s ⎟⎟⎟⎥⎥⎥
1 ⎢⎢ ⎢ ⎜
⎜⎜ s −s a b⎟⎟⎟⎟⎥⎥⎥⎥
B8 = ⎢⎢⎢⎢I4 ⊕ ⎜⎜⎜⎜ ⎟⎥ ,
⎜⎜⎝ a b −s s ⎟⎟⎟⎟⎠⎥⎥⎥⎥⎦
(3.76)
8 ⎢⎢⎣
−s s b a

where

s= 2, a = 2 − s, b = 2 + s. (3.77)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


110 Chapter 3

Figure 3.5 Flow graph of the 8-point Hartley correction transform.

We can see that the third block of matrix B8 may be factorized as (see Fig. 3.5)
⎛ ⎞ ⎛ ⎞⎛ ⎞⎛ ⎞
⎜⎜⎜ b a s −s ⎟⎟⎟ ⎜⎜⎜1 1 0 1⎟⎟⎟ ⎜⎜⎜2 0 0 0⎟⎟ ⎜⎜1 1 0 0⎟⎟
⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ s −s a b⎟⎟⎟ ⎜⎜⎜0 1 1 −1⎟⎟⎟ ⎜⎜⎜0 s 0 0⎟⎟ ⎜⎜1 −1 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟⎜
⎜⎜⎜ a b −s s ⎟⎟⎟⎟⎟ = ⎜⎜⎜⎜⎜1 −1 0 −1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 0 2
⎟⎟⎟ ⎜⎜⎜ ⎟.
0⎟⎟⎠ ⎜⎜⎝0 0 1 −1⎟⎟⎟⎟⎠
(3.78)
⎝ ⎠ ⎝ ⎠⎝
−s s b a 0 −1 1 1 0 0 0 s 0 0 1 −1

Case N = 16: Using the properties of the elements of a Hartley matrix [see
Eq. (3.67) ], the Hartley transform matrix of order 16 can be represented as
⎛ ⎞
⎜⎜⎜A2 A2 A2 A2 A2 A2 A2 A2 ⎟⎟⎟
⎜⎜⎜⎜P2 −P P2 −P2 P2 −P2 P2

−P2 ⎟⎟⎟⎟
A16 = ⎜⎜⎜⎜ 2
⎟⎟⎟ , (3.79)
⎜⎜⎝ P4 −P4 P4 −P4 ⎟⎟⎠
P8 −P8

where
 
1 1
A2 = C 2 + S 2 = ,
1 −1
 
1 1
P2 = Pc2 + P2s = ,
1 −1
⎛ √ ⎞ (3.80)
⎜⎜⎜1 2 1 0√ ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
1 0√ −1 2⎟⎟⎟⎟
P4 = Pc4 + P4s = ⎜⎜⎜⎜⎜ ⎟,
⎜⎜⎜1 − 2 1 0√ ⎟⎟⎟⎟⎟
⎝ ⎠
1 0 −1 − 2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 111

and P8 = Pc8 + P8s [here we use the notations ci = cos(iπ/8) and si = sin(iπ/8)]:

⎛ ⎞
⎜⎜⎜1 c1 c2 c3 0 −c3 −c2 −c1 ⎟⎟
⎜⎜⎜1 ⎟
⎜⎜⎜ c3 −c2 −c1 0 c1 c2 −c3 ⎟⎟⎟⎟⎟
⎜⎜⎜1 ⎟
⎜⎜⎜ −c3 −c2 c1 0 −c1 c2 c3 ⎟⎟⎟⎟

⎜⎜1 −c1 c2 −c3 0 c3 −c2 c1 ⎟⎟⎟⎟
P8 = ⎜⎜⎜⎜
c ⎟,
c1 ⎟⎟⎟⎟⎟
(3.81)
⎜⎜⎜1 −c1 c2 −c3 0 c3 −c2
⎜⎜⎜ ⎟
⎜⎜⎜⎜1 −c3 −c2 c1 0 −c1 c2 c3 ⎟⎟⎟⎟

⎜⎜⎜1 c3 −c2 −c1 0 c1 c2 −c3 ⎟⎟⎟⎟
⎝ ⎠
1 c1 c2 c3 0 −c3 −c2 −c1
⎛ ⎞
⎜⎜⎜0 s1 s2 s3 1 s3 s2 s1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 s3 s2 −s1 1 −s1 s2 s3 ⎟⎟⎟⎟
⎜⎜⎜0 ⎟
⎜⎜⎜ s3 −s2 −s1 1 −s1 −s2 s3 ⎟⎟⎟⎟⎟
⎜⎜⎜0 ⎟
s1 −s2 s3 1 s3 −s2 s1 ⎟⎟⎟⎟
P8 = ⎜⎜⎜⎜
c ⎟⎟ . (3.82)
⎜⎜⎜0 −s1 s2 −s3 1 −s3 s2 −s1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 −s3 s2 s1 1 s1 s2 −s3 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝0 −s3 −s2 s1 1 s1 −s2 −s3 ⎟⎟⎟⎟

0 −s1 −s2 −s3 1 −s3 −s2 −s1

From Eq. (3.79) and Lemma 3.1.1, we obtain the Hartley correction matrix as

1  
B16 = 8A2 H2 ⊕ 8P2 H2 ⊕ 4P4 H4 ⊕ 2P8 H8 ; (3.83)
16

denoted by

s = 2, a = 2 − s, b = 2 + s,
e = 1 − s, f = 1 + s,
   
π 3π π 3π
c+ = 2 cos + cos , c− = 2 cos − cos , (3.84)
8 8 8 8
   
π 3π π 3π
s+ = 2 sin + sin , s− = 2 sin − sin .
8 8 8 8

And using Eqs. (3.80)–(3.82), we can compute

A2 H2 = P2 H2 = 2I2 ,
⎛ ⎞
⎜⎜⎜ b a s −s ⎟⎟⎟
⎜⎜⎜ s −s a b⎟⎟⎟
P4 H4 = ⎜⎜⎜⎜ ⎟⎟ ,
⎜⎜⎝ a b −s s ⎟⎟⎟⎟⎠
(3.85)
−s s b a
P8 H8 = Pc8 H8 + P8s H8 .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


112 Chapter 3

After several mathematical manipulations, we obtain


⎛ ⎞
⎜⎜⎜1 1 1 + c− 1 − c− f + c+ f − c+ e e ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 + c+ 1 − c+ e − c− e + c− f f ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 − c+ 1 + c+ e + c− e − c− f f ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜
⎜1 1 1 − c− 1 + c− f − c+ f + c+ e e ⎟⎟⎟⎟
Pc8 H8 = ⎜⎜⎜⎜ ⎟⎟⎟ , (3.86)
⎜⎜⎜1 1 1 − c− 1 + c− f − c+ f + c+ e e ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 − c+ 1 + c+ e + c− e − c− f f ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ + + −
⎜⎜⎜⎝1 1 1 + c 1 − c e − c e + c

f f ⎟⎟⎟⎟
⎟⎠
1 1 1 + c 1 − c f + c f − c+
− − +
e e
⎛ ⎞
⎜⎜⎜ f + s+ f − s+ e e −1 −1 −1 + s− −1 − s− ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ f − s− f + s− e e −1 −1 −1 + s+ −1 − s+ ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜e − s− e + s− f f −1 −1 −1 + s+ −1 − s+ ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜
⎜e + s+ e − s+ f f −1 −1 −1 + s− −1 − s− ⎟⎟⎟⎟
P8 H8 = ⎜⎜⎜⎜
s ⎟⎟⎟ , (3.87)
⎜⎜⎜ f − s+ f + s+ e e −1 −1 −1 − s− −1 + s− ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ f + s− f − s− e e −1 −1 −1 − s+ −1 + s+ ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜e + s e − s f f −1 −1 −1 − s
− − +
−1 + s+ ⎟⎟⎟⎟
⎝ ⎟⎠
e − s+ e + s+ f f −1 −1 −1 − s− −1 + s−
⎛ ⎞
⎜⎜⎜b + s+ b − s+ a + c− a − c− s + c+ s − c+ −s + s− −s − s− ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜b − s− b + s− a + c+ a − c+ −s − c− −s + c− s + s+ s − s+ ⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜a − s− a + s− b − c+ b + c+ −s + c− −s − c−
⎜⎜⎜ s + s+ s − s+ ⎟⎟⎟⎟⎟
⎜⎜⎜a + s+ a − s+ b − c− b + c− s − c+ s + c+ ⎟⎟
−s + s− −s − s− ⎟⎟⎟⎟
P8 H8 = ⎜⎜⎜⎜ ⎟⎟⎟ .
⎜⎜⎜b − s+ b + s+ a − c− a + c− s − c+ s + c+ −s − s− −s + s− ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜b + s− b − s− a − c+ a + c+ −s + c− −s − c− s − s+ s + s+ ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ −
⎜⎜⎜⎝a + s a − s b + c b − c −s − c −s + c
− + + − +
s − s+ s + s+ ⎟⎟⎟⎟
+ + − − +
⎟⎠
a−s a+s b+c b−c s+c s − c+ −s − s− −s + s−
(3.88)

Now, we wish to show that the Hartley transform can be realized via fast
algorithms. The 16-point Hartley transform z = [Hart]16 x can be realized as
follows. First, we perform the 16-point HT y = H16 x, then we compute the 16-point
correction transform. Using Eq. (3.83), we find that

⎛ ⎞ ⎛ ⎞
⎜⎜⎜y4 ⎟⎟⎟ ⎜⎜⎜y8 ⎟⎟⎟
    ⎜ ⎟ ⎜⎜⎜y ⎟⎟⎟

⎜y ⎟ ⎟ ⎜ 9⎟
z = 8A2 H2 0 ⊕ 8P2 H2 2 ⊕ 4P4 H4 ⎜⎜⎜⎜ 5 ⎟⎟⎟⎟ ⊕ 2P8 H8 ⎜⎜⎜⎜⎜.. ⎟⎟⎟⎟⎟ .
y y
(3.89)
y1 y3 ⎜⎜⎝y6 ⎟⎟⎠ ⎜⎜⎜. ⎟⎟⎟
y7 ⎝ ⎠
y15

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 113

From Eq. (3.22), we obtain


     
z0 y 2y0
= A2 H 2 0 = ,
z1 y1 2y1
     
z2 y 2y2
= P2 H2 2 = ,
z3 y3 2y3
⎛ ⎞ ⎛ ⎞ ⎛ ⎞ (3.90)
⎜⎜⎜z4 ⎟⎟⎟ ⎜⎜⎜y4 ⎟⎟⎟ ⎜⎜⎜2(y4 + y5 ) + s(y4 − y5 + y6 − y7 )⎟⎟⎟
⎜⎜⎜z ⎟⎟⎟ ⎜⎜⎜y ⎟⎟⎟ ⎜⎜⎜2(y + y ) + s(y − y − y + y )⎟⎟⎟
⎜⎜⎜ 5 ⎟⎟⎟ = P4 H4 ⎜⎜⎜ 5 ⎟⎟⎟ = ⎜⎜⎜ 6 7 ⎟
⎜⎜⎜y6 ⎟⎟⎟ ⎜⎜⎜2(y4 + y5 ) − s(y4 − y5 + y6 − y7 )⎟⎟⎟⎟⎟ .
7 4 5 6
⎜⎜⎜z6 ⎟⎟⎟
⎝ ⎠ ⎝ ⎠ ⎝ ⎠
z7 y7 2(y6 + y7 ) − s(y4 − y5 − y6 + y7 )

The coefficients
⎛ ⎞ ⎛ ⎞
⎜⎜⎜z8 ⎟⎟⎟ ⎜⎜⎜y8 ⎟⎟⎟
⎜⎜⎜z ⎟⎟⎟ ⎜⎜⎜y ⎟⎟⎟
⎜⎜⎜ 9 ⎟⎟⎟ ⎜⎜ 9 ⎟⎟
⎜⎜⎜⎜.. ⎟⎟⎟⎟ = P8 H8 ⎜⎜⎜⎜⎜.. ⎟⎟⎟⎟⎟ (3.91)
⎜⎜⎝. ⎟⎟⎠ ⎜⎜⎝. ⎟⎟⎠
z15 y15

can be calculated by the following formulas:

z8 = A1 + B1 + C1 + D,
z9 = A3 + B3 − C3 + D,
z10 = A5 + B5 + C3 − D,
z11 = A7 + B7 − C4 + D,
(3.92)
z12 = A2 + B2 − C2 + D,
z13 = A4 + B4 + C2 − D,
z14 = A6 + B6 − C3 − D,
z15 = A8 + B8 + C4 + D,

where

A1 = b(y8 + y9 ) + s+ (y8 − y9 ), A2 = b(y8 + y9 ) − s+ (y8 − y9 ),


A3 = b(y8 + y9 ) − s− (y8 − y9 ), A4 = b(y8 + y9 ) + s− (y8 − y9 ),
A5 = b(y10 + y11 ) − c+ (y10 − y11 ), A6 = b(y10 + y11 ) + c+ (y10 − y11 ),
A7 = b(y10 + y11 ) − c− (y10 − y11 ), A8 = b(y10 + y11 ) + c− (y10 − y11 ),
B1 = a(y10 + y11 ) + c− (y10 − y11 ), B2 = a(y10 + y11 ) − c− (y10 − y11 ),
B3 = a(y10 + y11 ) + c+ (y10 − y11 ), B4 = a(y10 + y11 ) − c+ (y10 − y11 ), (3.93)
B5 = a(y8 + y9 ) − s− (y8 − y9 ), B6 = a(y8 + y9 ) + s− (y8 − y9 ),
B7 = a(y8 + y9 ) + s+ (y8 − y9 ), B8 = a(y8 + y9 ) − s+ (y8 − y9 ),
C1 = c+ (y12 − y13 ) + s− (y14 − y15 ), C2 = c− (y12 − y13 ) − s+ (y14 − y15 ),
C3 = c− (y12 − y13 ) + s+ (y14 − y15 ), C4 = c+ (y12 − y13 ) − s− (y14 − y15 ),
D = s(y12 + y13 − y14 − y15 ).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


114 Chapter 3

Figure 3.6 Flow graph for P4 H4 y transform and computation of Ai , Bi+4 , i = 1, 4.

A5 y12 C1
c+
–1
A6 y13 C3
c

b B3 y14 C2
s+
y8 a B1 y15 C4
s

c+ s
y9 B4 y12 D

c B2 y13

A7 y14

–1
A8 y15

Figure 3.7 Flow graphs for computation of Ai+4 , Bi , D, and Ci , i = 1, 4.

Algorithm: The 16-point Hartley transform algorithm using HT is formulated as


follows:
Step 1. Input column vector x = (x0 , x1 , . . . , x15 ).
Step 2. Perform 16-point HT y = H16 x.
Step 3. Compute zi , i = 0, 1, . . . , 15 using Eqs. (3.90)–(3.93).
Step 4. Output spectral coefficients zi , i = 0, 1, . . . , 15.
It can be shown that the complexity of a 16-point correction transform is
61 additions, 15 multiplications, and 6 shifts. Therefore, the 16-point Hartley
transform using a correction transform needs only 61 + 64 = 125 addition, 15
multiplication, and 6 shift operations. Figures 3.6–3.8 are the flow graphs for
Hartley correction coefficients calculation.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 115

Figure 3.8 Flow graphs of the 16-point Hartley correction transform.

3.4 Fast Cosine Transform


Let C N be the N × N transform matrix of a discrete cosine transform of type 2
(DCT-2), i.e.,
 N−1
(2n + 1)kπ
C N = ak cos , (3.94)
2N k,n=0

where

2
a0 = , ak = 1, k  0. (3.95)
2
For more detail on DCT transforms, see also Refs. 9, 19, 32, 33, 40, 49, 80–82,
and 98.
We can check that C N is an orthogonal matrix, i.e., C N C NT = (N/2)IN . We denote
the elements of the DCT-2 matrix (without normalizing coefficients ak ) by

(2n + 1)kπ
ck,n = cos , k, n = 0, 1, . . . , N − 1. (3.96)
2N

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


116 Chapter 3

One can show that

N
c2k,n = c2k,N−n−1 , c2k+1,n = c2k+1,N−n−1 , k, n = 0, 1, . . . , − 1. (3.97)
2

From Eq. (3.97), it follows that the matrix C N can be represented as


 
C N/2 C N/2
CN ≡ . (3.98)
DN/2 −DN/2

Hence, according to Lemma 3.1.1, the matrix AN = (1/N)C N HN has a block-


diagonal structure, where HN is a Sylvester–Hadamard matrix of order N. Without
losing the generalization, we can prove it for the cases N = 4, 8, and 16.
Case N = 4: The DCT matrix of order 4 has the form [here we use the notation
ci = cos(iπ/8)]

⎛ ⎞
⎜⎜⎜1 1 1 1 ⎟⎟

⎜⎜⎜c c3 −c3 −c1 ⎟⎟⎟⎟
C4 = ⎜⎜⎜⎜ 1 ⎟.
⎜⎜⎝c2 −c2 −c2 c2 ⎟⎟⎟⎟⎠
(3.99)
c3 c1 −c1 −c3

Using the following permutation matrices,

⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 0 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 0⎟⎟⎟
⎜⎜⎜⎜ ⎟ ⎜⎜⎜⎜ ⎟
0⎟⎟⎟⎟⎟ 0⎟⎟⎟⎟⎟
P1 = ⎜⎜⎜⎜⎜ P2 = ⎜⎜⎜⎜⎜
0 0 1 0 1 0
⎟, ⎟, (3.100)
⎜⎜⎜0 1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 1⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
0 0 0 1 0 0 1 0

we obtain
⎛ ⎞
⎜⎜⎜1 1 1 1 ⎟⎟  
⎜⎜⎜c −c2 c2 −c2 ⎟⎟⎟⎟⎟
C4 = P1C4 P2 = ⎜⎜⎜⎜ 2 ⎟⎟⎟ = C2 C2 . (3.101)
⎜⎜⎝c1 c3 −c1 −c3 ⎟⎟⎠ D2 −D2
c3 c1 −c3 −c1

Therefore, the correction matrix in this case takes the form

1  2 0  2(c1 + c3 ) 2(c1 − c3 )
A4 = 2C2 H2 ⊕ 2D2 H2 = ⊕ . (3.102)
4 0 2c2 2(c1 + c3 ) −2(c1 − c3 )

A flow graph of a 4-point cosine correction transform is given in Fig. 3.9.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 117

Figure
√ 3.9 Flow graph of the 4-point cosine correction transform (r1 = c1 + c3 , r2 = c1 − c3 ,
s = 2).

Case N = 8: The DCT matrix of order 8 has the form [here we use the notation
ci = cos(iπ/16)]
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟

⎜⎜⎜c c3 c5 c7 −c7 −c5 −c3 −c1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1
⎜⎜⎜c2
⎜⎜⎜ c6 −c6 −c2 −c2 −c6 c6 c2 ⎟⎟⎟⎟⎟

⎜c −c7 −c1 −c5 c5 c1 c7 −c3 ⎟⎟⎟⎟
C8 = ⎜⎜⎜⎜⎜ 3 ⎟. (3.103)
⎜⎜⎜c4 −c4 −c4 c4 c4 −c4 −c4 c4 ⎟⎟⎟⎟
⎜⎜⎜c5 ⎟
⎜⎜⎜ −c1 c7 c3 −c3 −c7 c1 −c5 ⎟⎟⎟⎟

⎜⎜⎜c6 −c2 c2 −c6 −c6 c2 −c2 c6 ⎟⎟⎟⎟
⎝ ⎠
c7 −c5 c3 −c1 c1 −c3 c5 −c7

Let
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟ ⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 1 0 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 0 1 0 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
0⎟⎟⎟⎟⎟ 0⎟⎟⎟⎟⎟
P1 = ⎜⎜⎜⎜⎜ P2 = ⎜⎜⎜⎜⎜
0 0 0 0 0 0 1 0 0 0 1 0 0 0
⎟, ⎟,
⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 0 0 1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 1 0 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 0 1 0⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 0 1 0 0⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0
⎛ ⎞
⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 1 0 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎛ ⎞
⎜⎜⎜0 1 0 0 0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜1 0 0 0⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟  
⎜0 0 0⎟⎟⎟⎟⎟ ⎜0 0⎟⎟⎟⎟⎟
P3 = ⎜⎜⎜⎜⎜ Q = ⎜⎜⎜⎜⎜
0 1 0 0 0 1 0 Q 0
⎟, ⎟, P4 = . (3.104)
⎜⎜⎜0 0 0 0 1 0 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 0 1⎟⎟⎟⎟⎟ 0 Q
⎜⎜⎜⎜0 0 0⎟⎟⎟⎟⎟
⎟ ⎝ ⎠
⎜⎜⎜ 0 0 0 1 0

0 0 1 0
⎜⎜⎜0 0 0⎟⎟⎟⎟⎟
⎜⎜⎝ 0 0 0 0 1

0 0 0 0 0 0 0 1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


118 Chapter 3

Using the above-given matrices, we obtain the block representation for the DCT
matrix of order 8 as
⎛ ⎞
⎜⎜⎜C2 C2 C2 C2 ⎟⎟⎟
⎜⎜⎜ ⎟
C8 = P3 P1C8 P2 P4 = ⎜⎜ B2 −B2 B2 −B2 ⎟⎟⎟⎟ , (3.105)
⎝ ⎠
D4 Q −D4 Q
where
⎛ ⎞
    ⎜⎜⎜c1 c3 c7 c5 ⎟⎟⎟
1 1 c2 c6 ⎜⎜⎜c3 −c7 −c5 −c1 ⎟⎟⎟
C2 = , B2 = , D4 Q = ⎜⎜⎜⎜c −c ⎟
c3 c7 ⎟⎟⎟⎟⎠ . (3.106)
c4 −c4 c6 −c2 ⎜⎝ 5 1
c7 −c5 −c1 c3

Therefore, the correction matrix can take the following block-diagonal form:
⎡ ⎛ ⎞⎤
⎢⎢⎢     ⎜⎜⎜⎜ a1 a2 a3 a4 ⎟⎟⎟⎟⎥⎥⎥⎥

1 ⎢⎢ 1 0 r 1 r2 ⎜⎜−b b2 b3 −b4 ⎟⎟⎟⎥⎥⎥
A8 = ⎢⎢⎢⎢8 ⊕4 ⊕ ⎜⎜⎜⎜ 1 ⎟⎥ ,
⎜⎝⎜−b4 b3 −b2 b1 ⎟⎟⎟⎠⎟⎥⎥⎥⎦⎥
(3.107)
8 ⎢⎣⎢ 0 c4 −r2 r1
−a4 −a3 a2 a1

where
a1 = c1 + c3 + c5 + c7 , a2 = c1 − c3 − c5 + c7 ,
a3 = c1 + c3 − c5 − c7 , a4 = c1 − c3 + c5 − c7 ,
b1 = c1 − c3 + c5 + c7 , b2 = c1 + c3 − c5 + c7 , (3.108)
b3 = c1 + c3 + c5 − c7 , b4 = c1 − c3 − c5 − c7 ,
r1 = c2 + c6 , r2 = c2 − c6 .

Case N = 16: Denote rk = cos(kπ/32). From the cosine transform matrix C16 of
order 16 we generate a new matrix by the following operations:
(1) Rewrite the rows of the matrix C16 in the following order: 0, 2, 4, 6, 8, 10, 14,
1, 3, 5, 7, 9, 11, 13, 15.
(2) Rewrite the first eight rows of the new matrix as 0, 2, 4, 6, 1, 3, 5, 7.
(3) Reorder the columns of this matrix as follows: 0, 1, 3, 2, 4, 5, 7, 6, 8, 9, 11, 10,
12, 13, 15, 14.
Finally, the DCT matrix of order 16 can be represented by the equivalent block
matrix as
⎛ ⎞
⎜⎜⎜C2 C2 C2 C2 C2 C2 C2 C2 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜A2 −A2 A2 −A2 A2 −A2 A2 −A2 ⎟⎟⎟⎟
⎜⎜⎜⎜ B11 B12 −B11 −B12 B11 B12 −B11 −B12 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜B B22 −B21 −B22 B21 B22 −B21 −B22 ⎟⎟⎟⎟
C16 = ⎜⎜⎜⎜⎜ 21 ⎟, (3.109)
⎜⎜⎜ B31 B32 B34 B33 −B31 −B32 −B34 −B33 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ B41 B42 B44 B43 −B41 −B42 −B44 −B43 ⎟⎟⎟⎟⎟
⎜⎜⎜ B ⎟
⎜⎝ 51 B52 B54 B53 −B51 −B52 −B54 −B53 ⎟⎟⎟⎠
B61 B62 B64 B63 −B61 −B62 −B64 −B63

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 119

where
   
1 1 r4 r12
C2 = r −r , A2 = r −r ;
8 8 12 4
       
r r r r r −r r r
B11 = r2 −r6 , B12 = −r14 −r10 , B21 = r10 −r2 , B22 = −r6 r14 ,
6 14 10 2 14 10 2 6
       
r r r r r r r r
B31 = r1 r3 , B32 = −r7 r5 , B33 = −r9 −r11 , B34 = −r15 −r13 ,
3 9 11 15 5 1 13 7
       
r5 r15 −r3 −r7 −r13 r9 r11 r1
B41 = r −r , B42 = r −r , B43 = r r , B22 = −r −r , (3.110)
7 11 15 3 1 13 9 5
       
r −r r −r13 −r15 −r3 r7 r11
B51 = r9 −r5 , B52 = r1 r9 , B53 = −r3 r7 , B54 = −r5 r15 ,
11 1 13
       
r −r −r r r r r −r
B61 = r13 −r7 , B62 = −r5 r1 , B63 = r11 −r15 , B64 = −r3 r9 .
15 13 9 11 7 5 1 3

Now, the matrix in Eq. (3.109) can be presented as


⎛ ⎞
⎜⎜⎜C2 C2 C2 C2 C2 C2 C2 C2 ⎟⎟⎟
⎜⎜⎜⎜A2 −A A2 −A2 A2 −A2 A2

−A2 ⎟⎟⎟⎟
C16 = ⎜⎜⎜⎜ 2
⎟⎟⎟ , (3.111)
⎜⎜⎝ D4 −D4 D4 −D4 ⎟⎟⎠
D8 −D8
where
⎛ ⎞
  ⎜⎜⎜ B31 B32 B34 B33 ⎟⎟

B11 B12 ⎜⎜⎜ B B42 B44 B43 ⎟⎟⎟⎟
D4 = , D8 = ⎜⎜⎜⎜ 41 ⎟.
B53 ⎟⎟⎟⎠⎟
(3.112)
B21 B22 ⎜⎝⎜ B51 B52 B54
B61 B62 B64 B63

Now, according to Lemma 3.1.1, we have


1 * +
A16 = 8C2 H2 ⊕ 8A2 H2 ⊕ 4D4 H4 ⊕ 2D8 H8 . (3.113)
16
We introduce the notations
q1 = r2 + r14 , q2 = r6 + r10 , t1 = r2 − r14 , t2 = r6 − r10 ;
a1 = r1 + r15 , a2 = r3 + r13 , a3 = r5 + r11 , a4 = r7 + r9 ; (3.114)
b1 = r1 − r15 , b2 = r3 − r13 , b3 = r5 − r11 , b4 = r7 − r9 .

Using Eq. (3.112) and the above-given notations, we find that


   
2 0 r4 + r12 r4 − r12
C2 H2 = , D2 H2 = ,
0 2r8 −r4 + r12 r4 + r12
⎛ ⎞
⎜⎜⎜ d1,1 d1,2 d1,3 d1,4 ⎟⎟⎟ (3.115)
⎜⎜⎜ d d2,2 d2,3 d2,4 ⎟⎟⎟⎟
D4 H4 = ⎜⎜⎜⎜ 2,1 ⎟,
⎜⎜⎝ 2,4 d2,3 −d2,2 −d2,1 ⎟⎟⎟⎟⎠
d
−d1,4 −d1,3 d1,2 d1,1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


120 Chapter 3

where

d1,1 = q1 + q2 , d1,2 = q1 − q2 , d1,3 = t1 + t2 , d1,4 = t1 − t2 ,


(3.116)
d2,1 = −q1 + t2 , d2,2 = q1 + t2 , d2,3 = t1 + q2 , d2,4 = −t1 + q2 .

The elements of matrix D8 H8 can be presented as

P1,1 = a1 + a2 + a3 + a4 , P1,2 = a1 − a2 − a3 + a4 ,
P1,3 = a1 + a2 − a3 − a4 , P1,4 = a1 − a2 + a3 − a4 ,
P1,5 = b1 + b2 + b3 + b4 , P1,6 = b1 − b2 − b3 + b4 ,
P1,7 = b1 + b2 − b3 − b4 , P1,8 = b1 − b2 + b3 − b4 ;

P2,1 = −b1 + b2 − b4 − a3 , P2,2 = b1 + b2 + b4 − a3 ,


P2,3 = b1 + b2 − b4 + a3 , P2,4 = −b1 + b2 + b4 + a3 ,
P2,5 = a1 + a2 + a4 + b3 , P2,6 = −a1 + a2 − a4 + b3 ,
P2,7 = −a1 + a2 + a4 − b3 , P2,8 = a1 + a2 − a4 − b3 ;

P3,1 = a1 − a2 + a3 − b4 , P3,2 = −a1 − a2 + a3 + b4 ,


P3,3 = a1 + a2 + a3 + b4 , P3,4 = −a1 + a2 + a3 − b4 ,
P3,5 = −b1 − b2 + b3 − a4 , P3,6 = b1 − b2 + b3 + a4 ,
P3,7 = −b1 + b2 + b3 + a4 , P3,8 = b1 + b2 + b3 − a4 ;

P4,1 = a1 − a3 − b2 + b4 , P4,2 = a1 + a2 + b2 + b4 , (3.117)


P4,3 = −a1 − a3 + b2 + b4 , P4,4 = −a1 + a3 − b2 + b4 ,
P4,5 = −b1 + b3 − a2 + a4 , P4,6 = −b1 − b3 + a2 + a4 ,
P4,7 = b1 + b3 + a2 + a4 , P4,8 = b1 − b3 − a2 + a4 ;

P5,1 = P4,8 , P5,2 = P4,7 , P5,3 = P4,6 , P5,4 = P4,5 ,


P5,5 = −P4,4 , P5,6 = −P4,3 , P5,7 = −P4,2 , P5,8 = −P4,1 ;

P6,1 = −P3,8 , P6,2 = −P3,7 , P6,3 = −P3,6 , P6,4 = −P3,5 ,


P6,5 = P3,4 , P6,6 = P3,3 , P6,7 = P3,2 , P6,8 = P3,1 ;

P7,1 = P2,8 , P7,2 = P2,7 , P7,3 = P2,6 , P7,4 = P2,5 ,


P7,5 = −P2,4 , P6,6 = −P2,3 , P7,7 = −P2,2 , P7,8 = −P2,1 ;

P8,1 = −P1,8 , P8,2 = −P1,7 , P8,3 = −P1,6 , P8,4 = −P1,5 ,


P8,5 = P1,4 , P8,6 = P1,3 , P8,7 = P1,2 , P8,8 = P1,1 .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 121

Therefore, the matrix P = D8 H8 is given by

⎛ ⎞
⎜⎜⎜ P1,1 P1,2 P1,3 P1,4 P1,5 P1,6 P1,7 P1,8 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 2,1 P P 2,2 P 2,3 P 2,4 P2,5 P 2,6 P 2,7 P 2,8 ⎟ ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ P3,1 P3,2 P3,3 P3,4 P3,5 P3,6 P3,7 P3,8 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ P4,1 P4,2 P4,3 P4,4 P4,5 P4,6 P4,7 P4,8 ⎟⎟⎟⎟⎟
P = ⎜⎜⎜ ⎜ ⎟⎟ . (3.118)
⎜⎜⎜ P4,8 P4,7 P4,6 P4,5 −P4,4 −P4,3 −P4,2 −P4,1 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−P −P −P −P P3,4 P3,3 P3,2 P3,1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 3,8 3,7 3,6 3,5
⎟⎟
⎜⎜⎜
⎜⎜⎜ P2,8 P2,7 P2,6 P2,5 −P2,4 −P2,3 −P2,2 −P2,1 ⎟⎟⎟⎟⎟
⎜⎝ ⎟⎠
−P1,8 −P1,7 −P1,6 −P1,5 P1,4 P1,3 P1,2 P1,1

The following shows that the cosine transform can be done via a fast algorithm.
Denote y = H16 x. Then, z = A16 y. Using Eq. (3.113), we find that

⎛ ⎞ ⎛ ⎞
⎜ y ⎟ ⎜⎜⎜y8 ⎟⎟⎟
    ⎜
⎜⎜⎜ ⎟⎟⎟4 ⎟ ⎜⎜⎜y ⎟⎟⎟
⎜ ⎟ ⎜ 9⎟
⊕ 4D4 H4 ⎜⎜⎜⎜ 5 ⎟⎟⎟⎟ ⊕ 2D8 H8 ⎜⎜⎜⎜⎜.. ⎟⎟⎟⎟⎟ .
y0 y2 y
z = 8C2 H2 ⊕ 8D2 H2 (3.119)
y1 y3 ⎜⎜⎝y6 ⎟⎟⎠ ⎜⎜⎜⎝. ⎟⎟⎟⎠
y7 y15


From Eqs. (3.115) and (3.116), we obtain (here s = 2)

z0 = 2y0 ,
z1 = sy1 ,
z2 = r4 (y2 + y3 ) + r12 (y2 − y3 ),
z3 = r4 (y2 − y3 ) + r12 (y2 + y3 ),
(3.120)
z4 = q1 (y4 + y5 ) + q2 (y4 − y5 ) + t1 (y6 + y7 ) + t2 (y6 − y7 ),
z5 = −q1 (y4 − y5 ) + t2 (y4 + y5 ) + t1 (y6 − y7 ) + q2 (y6 + y7 ),
z6 = −t1 (y4 − y5 ) + q2 (y4 + y5 ) − q1 (y6 − y7 ) − t2 (y6 + y7 ),
z7 = −t1 (y4 + y5 ) + t2 (y4 − y5 ) + q1 (y6 + y7 ) − q2 (y6 − y7 ).

Now, using the following notations:

n1 = y8 + y9 , n2 = y10 + y11 , n3 = y12 + y13 , n4 = y14 + y15 ,


(3.121)
m1 = y8 − y9 , m2 = y10 − y11 , m3 = y12 − y13 , m4 = y14 − y15 ,

we obtain

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


122 Chapter 3

z8 = a1 (n1 + n2 ) + a2 (m1 + m2 ) + a3 (m1 − m2 ) + a4 (n1 − n2 )


+ b1 (n3 + n4 ) + b2 (m3 + m4 ) + b3 (m3 − m4 ) + b4 (n3 − n4 ),
z9 = −b1 (m1 − m2 ) + b2 (n1 + n2 ) − a3 (n1 − n2 ) − b4 (m1 + m2 )
+ a1 (m3 − m4 ) + a2 (n3 + n4 ) + b3 (n3 − n4 ) + a4 (m3 + m4 ),
z10 = a1 (m1 + m2 ) − a2 (n1 − n2 ) + a3 (n1 + n2 ) − b4 (m1 − m2 )
− b1 (m3 + m4 ) − b2 (n3 − n4 ) + b3 (n3 + n4 ) − a4 (m3 − m4 ),
z11 = a1 (n1 − n2 ) + b4 (n1 + n2 ) − a3 (m1 + m2 ) − b2 (m1 − m2 )
− b1 (n3 − n4 ) + a4 (n3 + n4 ) + b3 (m3 + m4 ) − a2 (m3 − m4 ),
(3.122)
z12 = b1 (n1 − n2 ) − b3 (m1 + m2 ) − a2 (m1 − m2 ) + a4 (n1 + n2 )
+ a1 (n3 − n4 ) − a3 (m3 + m4 ) + b2 (m3 − m4 ) − b4 (n3 + n4 )
z13 = −b1 (m1 + m2 ) − b2 (n1 − n2 ) − b3 (n1 + n2 ) + a4 (m1 − m2 )
− a1 (m3 + m4 ) + a2 (n3 − n4 ) + a3 (n3 + n4 ) − b4 (m3 − m4 ),
z14 = a1 (m1 − m2 ) + a2 (n1 + n2 ) − a4 (m1 + m2 ) − b3 (n1 − n2 )
+ b1 (m3 − m4 ) − b2 (n3 + n4 ) − b4 (m3 + m4 ) − a3 (n3 − n4 ),
z15 = −b1 (n1 + n2 ) + b2 (m1 + m2 ) − b3 (m1 − m2 ) + b4 (n1 − n2 )
+ a1 (n3 + n4 ) − a2 (m3 + m4 ) + a3 (m3 − m4 ) − a4 (n3 − n4 ).

Algorithm: The 16-point cosine transform algorithm using the HT is formulated


as follows:
Step 1. Input column vector x = (x0 , x1 , . . . , x15 ).
Step 2. Perform a 16-point HT, y = H16 x.
Step 3. Compute the coefficients zi , I = 0, 1, . . . , 15.
Step 4. Perform shift operations (three bits for z0 , . . . , z3 , two bits for z4 , . . . , z7 ,
and one bit for z8 , . . . , z15 ).
Output the results of Step 4.
The flow graphs of the cosine correction transform are given in Figs. 3.10–3.13.

3.5 Fast Haar Transform

Let H2 be a Hadamard matrix of order 2. The Haar matrix9,33,81,82,84 of order 2n+1


can be represented as
 
X2n ⊗ (1 1)
X2n+1 = n/2 . (3.123)
2 I2n ⊗ (1 − 1)

We can check that XN is an orthogonal matrix of order N (N = 2n ), i.e.,

XN XNT = NIN . (3.124)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 123

2
y0 z0

s
y1 z1

r4
y2 z2

r12

r12
y3 z3
–r4
z4

t1
q1 q2
t2

q2 z6 t2
y4 y6
t1 q1

q2 t2
q1 t1
y5 y7
z5 q1
t1
q2
t2

z7

Figure 3.10 Flow graph of the computation of components zi , i = 0, 1, . . . , 7.

From Eq. (3.123), it follows that the matrix X2n can be represented as
⎛ ⎞
⎜⎜⎜X n−1 X2n−1 ⎟⎟⎟
X2n ≡ ⎜⎝ ⎜
⎜ 2 ⎟⎟⎟ . (3.125)

2(n−1)/2 I2n−1 −2(n−1)/2 I2n−1

Hence, according to Lemma 3.1.1, the correction matrix

1
AN = XN HN (3.126)
N

has a block-diagonal structure, where HN is a Sylvester–Hadamard matrix of


order N.
Without losing the generalization, we can prove it for the cases N = 4, 8,
and 16.
Case N = 4: The discrete Haar transform matrix of order 4 has the form
⎛ ⎞
⎜⎜⎜X X2 ⎟⎟⎟⎟

X4 = ⎜⎜⎝ √ 2
√ ⎟⎟⎠ , (3.127)
2I2 − 2I2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


124 Chapter 3

Figure 3.11 Flow graph of the computation of Ai , Bi , i = 1, 2, 3, 4, and components z8


and z9 .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 125

Figure 3.12 Flow graph of the computation of components zi , i = 10, 11, 12, 13.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


126 Chapter 3

Figure 3.13 Flow graph of the computation of components z14 and z15 .

where X2 = H2 . We can see that the correction matrix, i.e., A4 = X4 H4 , has a


block-diagonal form:
⎛ ⎞
⎜⎜⎜1 0 0 0 ⎟⎟


 1 ⎜⎜⎜0 1 0 0√ ⎟⎟⎟⎟⎟
1 √
⎜ √
A4 = 2I2 ⊕ 2 2H2 = ⎜⎜⎜ ⎟⎟ . (3.128)
4 2 ⎜⎜⎜0 0 √2 √2⎟⎟⎟⎟
⎝ ⎠
0 0 2 − 2

Case N = 8: From Eq. (3.125), we obtain


⎛ ⎞
⎜⎜⎜H H H √2 ⎟⎟⎟⎟
H
⎜⎜⎜ √ √2 √2
2
X8 = ⎜⎜ 2I2 − 2I2 2I2 − 2I2 ⎟⎟⎟⎟ . (3.129)
⎝ ⎠
2I4 −2I4

In this case, the correction matrix is represented as


⎛ ⎞
⎜⎜⎜1 0 0 0 0 0 0 0⎟⎟

⎜⎜⎜0 1 0 0√ 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ √ ⎟⎟
⎜⎜⎜0 0 2 2 0 0 0 0⎟⎟⎟⎟

1 √  1 ⎜⎜⎜⎜0 0 √2 − √2 0 0 0 0⎟⎟⎟⎟ .
⎟⎟
A8 = 4I2 ⊕ 4 2H2 ⊕ 4H4 = ⎜⎜⎜⎜ ⎟ (3.130)
8 2 ⎜⎜⎜0 0 0 0 1 1 1 1⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜⎜0 0 0 0 1 −1 1 −1⎟⎟⎟⎟

⎜⎜⎜0 0 0
⎝ 0 1 1 −1 −1⎟⎟⎟⎟⎠
0 0 0 0 1 −1 −1 1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 127

Case N = 16: Consider a Haar matrix of order 16. For n = 4 from Eq. (3.33), we
obtain
⎛ ⎞
⎜⎜⎜X X8 ⎟⎟⎟⎟
X16 = ⎜⎝ ⎜
⎜ 8
⎟⎟⎠ ,
23/2 I8 −23/2 I8
⎛ ⎞
⎜⎜⎜X4 X4 ⎟⎟⎟
X8 = ⎝⎜ ⎟⎠ , (3.131)
2I4 −2I4
⎛ ⎞
⎜⎜⎜X2 X2 ⎟⎟⎟⎟
X4 = ⎜⎝ ⎜ √ √ ⎟⎠ .
2I2 − 2I2

Note that
 
+ +
X2 = H2 = . (3.132)
+ −

Hence, using Eq. (3.131), the Haar transform matrix X16 of order 16 is represented
as
⎛ ⎞
⎜⎜⎜H H2 ⎟⎟⎟⎟⎟
⎜⎜⎜ 2 H2 H2 H2 H2 H2 H2
⎜⎜⎜ √ √ √ √ √ √ √ √ ⎟⎟⎟
⎜⎜⎜ 2I2 − 2I2 2I2 − 2I2 2I2 − 2I2 2I2 − 2I2 ⎟⎟⎟⎟
X16 = ⎜⎜⎜ ⎟⎟⎟ ,
⎜⎜⎜⎜ 2I4 −2I4 2I4 −2I4
⎟⎟⎟
⎟⎟⎟
⎜⎜⎜ ⎟⎠

23/2 I8 −23/2 I8
(3.133)

or as (here s = 2)
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜⎜1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 1 −1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ s 0 −s 0 s 0 −s 0 s 0 −s 0 s 0 −s 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 −s 0 −s 0 −s s 0 −s ⎟⎟⎟⎟⎟
⎜⎜⎜ s 0 s 0 s 0
⎜⎜⎜2 0 0 0 −2 0 0 0 2 0 0 0 −2 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 2 0 0 0 −2 0 0 0 2 0 0 0 −2 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 2 0 0 0 −2 0 0 0 2 0 0 0 −2 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 2 0 0 0 −2 0 0 0 2 0 0 0 −2 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜2s 0 0 0 0 0 0 0 −2s 0 0 0 0 0 0 0 ⎟⎟⎟⎟⎟ .
⎜⎜⎜ ⎟
⎜⎜⎜0 2s 0 0 0 0 0 0 0 −2s 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 2s 0 0 0 0 0 0 0 −2s 0 0 0 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 2s 0 0 0 0 0 0 0 −2s 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 2s 0 0 0 0 0 0 0 −2s 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 2s 0 0 0 0 0 0 0 −2s 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 0 0 2s 0 0 0 0 0 0 0 −2s 0 ⎟⎟⎟
⎜⎝ ⎟⎠
0 0 0 0 0 0 0 2s 0 0 0 0 0 0 0 −2s

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


128 Chapter 3

y0 z0

8
l2

y1 z1

z2
y2

8 √2
H2
x0

y3
H16 z3

y4 z4

x15 4
H4

y7 z7

z8
y8

4 √2
H8

y15 z15

Figure 3.14 Flow graph of a 16-point Haar transform algorithm.

The corresponding correction matrix takes the following block-diagonal form:

1
A16 = (4I2 ⊕ 2sH2 ⊕ 2H4 ⊕ sH8 ) . (3.134)
4

Now we want to show that the Haar transform can be realized via a fast
algorithm. Denote y = H16 x and z = A16 y. Using Eq. (3.134), we find that
⎡ ⎛ ⎞ ⎛ ⎞⎤
⎢⎢⎢     ⎜⎜⎜y4 ⎟⎟⎟ ⎜⎜⎜y8 ⎟⎟⎟⎥⎥⎥
1 ⎢⎢ y ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟⎥⎥
z = ⎢⎢⎢⎢4 0 ⊕ 2sH2 2 ⊕ 2H4 ⎜⎜⎜⎜... ⎟⎟⎟⎟ ⊕ sH8 ⎜⎜⎜⎜... ⎟⎟⎟⎟⎥⎥⎥⎥ .
y
(3.135)
4 ⎣⎢ y1 y3 ⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠⎥⎦
y7 y15

Algorithm: A 16-point Haar transform algorithm using an HT is formulated as


follows:
Step 1. Input the column vector x = (x0 , x1 , . . . , x15 ).
Step 2. Perform the 16-point HT y = H16 x.
Step 3. Perform the 2-, 4-, and 8-point HT of vectors (y2 , y3 ), (y4 , . . . , y7 ), and
(y8 , . . . , y15 ), respectively.
Step 4. Output the results of step 3 (see Fig. 3.14).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 129

3.6 Integer Slant Transforms


In the past decade, fast orthogonal transforms have been widely used in such areas
as data compression, pattern recognition and image reconstruction, interpolation,
linear filtering, and spectral analysis. The increasing requirements of the speed
and cost in many applications have stimulated the development of new fast unitary
transforms such as HT and slant transforms. We can observe a considerable interest
in many applications of the slant transform.
A phenomenon characteristic of digital images is the presence of approximately
constant or uniformly changing gray levels over a considerable distance or
area. The slant transform is specifically defined for efficient representation of
such images. Intel uses the slant transform in their “Indeo” video compression
algorithm.
Historically, Enomoto and Shibata conceived the first 8-point slant transform
in 1971. The slant vector, which can properly follow gradual changes in the
brightness of natural images, was a major innovation.19 Since its development, the
slant transform has been generalized by Pratt, Kane, and Welch,20 who presented
the procedure for computing the slant transform matrix of order 2n . The slant
transform has the best compaction performance among the nonsinusoidal fast
orthogonal transforms—Haar, Walsh–Hadamard, and slant—but is not optimum
in its performance measured among the all-sinusoidal transforms such as the
DCT and the KLT9 for the first-order Markov models. In general, there is a
tradeoff between the performance of an orthogonal transform and its computational
complexity. The KLT is an optimal transform but has a high computational
complexity. Therefore, the need arises for slant transform improvement schemes
that yield performance comparable to that of the KLT and the DCT, without
incurring their computational complexity.
To improve the performance of the slant HT, we introduce in this chapter a
construction concept for a class of parametric slant transforms that includes, as a
special case, the commonly used slant transforms and HTs. Many applications have
motivated modifications and generalizations of the slant transform. Agaian and
Duvalian21 developed two new classes of kn -point slant transforms for the arbitrary
integer k. The first class was constructed via HTs and the second one via Haar
transforms. The same authors have also investigated Walsh–Hadamard, dyadic-
ordered, and sequency-ordered slant transforms. Recently, Agaian27 introduced a
new class of transforms called the multiple β slant-HTs. This class of transforms
includes, as a special case, the Walsh–Hadamard and the classical slant HTs.
Agaian and Duvalian have shown that this new technique outperforms the classical
slant-HT.
In the application of wavelet bases to denoising and image compression, for
example, the time localization and the approximation order (number of vanishing
moments) of the basis are both important. Good time-localization properties lead
to the good representation of edges. The slant transform is the prototype of the
slantlet wavelet transform, which could be based on the design of wavelet filter
banks with regard to both time localization and approximation order.22

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


130 Chapter 3

Most linear transforms, however, yield noninteger outputs even when the inputs
are integers, making them unsuitable for many applications such as lossless
compression. In general, the transformed coefficients require theoretically infinite
bits for perfect representation. In such cases, the transform coefficients must be
rounded or truncated to a finite precision that depends on the number of bits
available for their representation. This, of course, introduces an error, which in
general degrades the performance of the transform. Recently, reversible integer-to-
integer wavelet transforms have been introduced.23 An integer-to-integer transform
is an attractive approach to solving the rounding problem, and it offers easier
hardware implementation. This is because integer transforms can be exactly
represented by finite bits.
The purpose of Section 3.6 is to show how to construct an integer slant transform
and reduce the computational complexity of the algorithm for computing the 2D
slant transform. An effective algorithm for computing the 1D slant transform via
Hadamard is also introduced.

3.6.1 Slant HTs


This subsection briefly reviews slant HTs. Currently, slant transforms are usually
constructed via Hadamard or Haar transforms.9,33–35,90 The slant transform
satisfies the following properties:9,86
• Its first-row vector is of constant value.
• Its second-row vector represents the parametric slant vector.
• Its basis vectors are orthonormal.
• It can be calculated using a fast algorithm.
The forward and inverse slant HTs of order N = 2n , n = 1, 2, . . ., are defined as

X = S 2n x, x = S 2Tn X, (3.136)

where S 2n is generated recursively:20


 
1 S n−1 O2n−1 1
S 2n = √ Q 2n 2 = √ Q2n (I2 ⊗ S 2n−1 ) , (3.137)
2 O 2 n−1 S 2n−1
2
√  
where Om denotes the m × m zero matrix, S 2 = (1/ 2) ++ +− , and Q2n is the 2n × 2n
matrix defined as
⎛ ⎞
⎜⎜⎜1 0 O0 1 0 O0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜a n b n O −a ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟⎟⎟
2 2 0 2n b2n O0

⎜⎜⎜O0 O0 I n−1 O0 O0 I n−1 ⎟⎟⎟


Q2n = ⎜⎜⎜⎜ 2 −2 2 −2 ⎟⎟⎟ ,
⎟⎟⎟ (3.138)
⎜⎜⎜0 1 O 0 −1 O ⎟
⎜⎜⎜⎜ 0 0
⎟⎟⎟⎟
⎜⎜⎜⎜−b2n a2n O0 b2n a2n O0 ⎟⎟⎟⎟
⎜⎝ 0 ⎟⎠
O 0
O I2n−1 −2 O 0
O −I2n−1 −2
0

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 131

where O0 and O0 are row and column zero vectors, respectively, and the parameters
a2n and b2n are defined recursively by
 −(1/2)
b2n = 1 + 4a22n−1 , a2n = 2b2n a2n−1 , a2 = 1. (3.139)

From Eq. (3.138), it follows that Q2n QT2n is the diagonal matrix, i.e.,

Q2n QT2n = diag 2, 2(a22n + a22n ), 2I2n−1 −2 , 2, 2(a22n + a22n ), 2I2n−1 −2 . (3.140)

Because a22n + b22n = 1, Q2n is the orthogonal matrix and Q2n QT2n = 2I2n .

Example 3.6.1: Sequency-ordered slant-Hadamard matrices of orders 4 and 8:


(1) Slant Hadamard matrix of order 4:
⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟
⎜⎜⎜ ⎟
1 ⎜⎜⎜3 1 −1 −3⎟⎟⎟⎟⎟
1
·√
S 4 = ⎜⎜⎜ ⎟ 5
. (3.141)
2 ⎜⎜⎜1 −1 −1 1⎟⎟⎟⎟⎟
⎝ ⎠ 1
·√
1 −3 3 −1 5

(2) Slant Hadamard matrix of order 8:


⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜7 −7⎟⎟⎟⎟⎟
1
⎜⎜⎜ 5 3 1 −1 −3 −5 ·√

⎜⎜⎜ ⎟⎟⎟ 21

⎜⎜⎜3 1 −1 −3 −3 −1 1 3⎟⎟⎟⎟ 1
·√
⎜⎜⎜ ⎟⎟⎟ 5

1 ⎜⎜⎜⎜7 −1 −9 −17 17 9 1 −7⎟⎟⎟⎟ ·√
1
S 8 = √ ⎜⎜⎜ ⎟⎟⎟ 105 . (3.142)
8 ⎜⎜⎜⎜1 −1 −1 1 1 −1 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 −1 −1 1 −1 1 1 −1⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
1⎟⎟⎟⎟
1
⎜⎜⎜1 −3 3 −1 −1 3 −3 ·√
⎜⎜⎝ ⎟⎟⎟ 5

1 −3 3 −1 1 −3 3 −1⎠ 1
·√
5

3.6.2 Parametric slant HT


Construction 191–93 : We introduce the following matrices:
⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟
⎜⎜⎜⎜ ⎟
a b −b −a⎟⎟⎟⎟⎟
[PS ]4 (a, b) = ⎜⎜⎜⎜⎜ ⎟, (3.143)
⎜⎜⎜1 −1 −1 1⎟⎟⎟⎟⎟
⎝ ⎠
b −a a −b

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


132 Chapter 3

⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜a b c d −d −c −b −a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜e f − f −e −e − f f e ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜b −d −a −c a d −b ⎟⎟⎟⎟
[PS ]8 (a, b, c, d, e, f ) = ⎜⎜⎜⎜⎜
c
⎟. (3.144)
⎜⎜⎜1 −1 −1 1 1 −1 −1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜c −a d b −b −d a −c ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ f −e e − f −f e −e f ⎟⎟⎟⎟⎟
⎝ ⎠
d −c b −a a −b c −d

We call the matrices in Eqs. (3.143) and (3.144) parametric slant Hadamard
matrices. Note that [PS ]4 (1, 1) and [PS ]8 (1, 1, 1, 1, 1, 1) are Hadamard matrices of
order 4 and 8, respectively. Note also that the matrix in Eq. (3.144) is a slant-type
matrix if it satisfies the following conditions:

a ≥ b ≥ c ≥ d, e ≥ f, and ab = ac + bd + cd.

We can check that


* +
[PS ]4 (a, b)[PS ]T4 (a, b) = I2 ⊗ 4 ⊕ 2(a2 + b2 ) ,
[PS ]8 (a, . . . , f )[PS ]T8 (a, . . . , f ) (3.145)
* +
= I2 ⊗ 8 ⊕ 2(a2 + · · · + d2 ) ⊕ 4(e2 + f 2 ) ⊕ 2(a2 + · · · + d2 ) .

Note that a parametric orthonormal slant Hadamard matrix of order 8 can be


defined as

⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟
⎟⎟⎟
⎜⎜⎜⎜ ⎟ 2
−a ⎟⎟⎟⎟
·√
⎜⎜⎜a b c d −d −c −b a2 + · · · + d 2
⎜⎜⎜ ⎟⎟⎟ .
⎜⎜⎜ ⎟
⎜⎜⎜e f − f −e −e − f f e ⎟⎟⎟⎟ ·
2
⎜⎜⎜ ⎟⎟⎟ e2 + f 2
⎜ ⎟
1 ⎜⎜⎜⎜b −d −a −c c a d −b ⎟⎟⎟⎟ ·√
2

[PS ]8 (a, b, c, d, e, f ) = √ ⎜⎜⎜ ⎟⎟⎟ a2 + · · · + d 2 . (3.146)


8 ⎜⎜⎜⎜1 −1 −1 1 1 −1 −1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
−c ⎟⎟⎟⎟⎟
2
⎜⎜⎜c −a d b −b −d a ·√
⎜⎜⎜ ⎟⎟⎟ a2 + · · · + d 2
⎜⎜⎜ ⎟ .
⎜⎜⎜ f −e e −f −f e −e f ⎟⎟⎟⎟ 2
⎜⎜⎜ ⎟⎟⎟ e2 + f 2
⎝d ⎟
−c b −a a −b c −d ⎠ ·√
2
a2 + · · · + d 2

It is not difficult to verify if a = 4, b = c = e = 2, f = 1, and d = 0, because the


orthonormal slant Hadamard matrix obtained from Eq. (3.146) has the following
form:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 133

⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜4 −4⎟⎟⎟⎟⎟
1
⎜⎜⎜ 2 2 0 0 −2 −2 ·√

⎜⎜⎜ ⎟⎟⎟ /6
⎜⎜⎜2 1 −1 −2 −2 −1 1 2⎟⎟⎟⎟ ·
2
⎜⎜⎜ ⎟⎟⎟ 5

1 ⎜⎜⎜⎜⎜2 0 −4 −2 −2⎟⎟⎟⎟ 1
2 4 0 ·√
√ ⎜⎜⎜ ⎟⎟⎟ 6
. (3.147)
8 ⎜⎜⎜1
⎜⎜⎜ −1 −1 1 1 −1 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜2 ⎟⎟
⎜⎜⎜ −4 0 2 −2 0 4 −2⎟⎟⎟⎟ ·√
1

⎜⎜⎜ ⎟⎟⎟ /6

⎜⎜⎜1 −2 2 −1 −1 2 −2 1⎟⎟⎟⎟ ·
2
⎜⎜⎜ ⎟⎟⎟ 5
⎝0 −2 2 −4 4 −2 2 0⎠ 1
·√
6

Construction 2:25–27 Introduce the following expressions for a2n and b2n [see Eq.
(3.139)] to construct parametric slant HTs of order 2n :
. .
3 · 22n−2 22n−2 − β2n
a2n = , b2 n = , (3.148)
4 · 22n−2 − β2n 4 · 22n−2 − β2n

where

a2 = 1− 22n−2 ≤ β2n ≤ 22n−2 . and (3.149)


00 00
It can be shown that for β2n > 022n−2 0, slant HTs lose their orthogonality.
The parametric slant-transform matrices fulfill the requirements of the classical
slant-transform matrix outlined in previous sections (see also Ref. 27). However,
the parametric slant-transform matrix is a parametric matrix with parameters
β4 , β8 , . . . , β2n .
Properties:
(1) The parametric slant transform falls into one of at least four different
categories, depending on the β2n value chosen, and they include as special cases
the slant-HT and HTs of order 2n . Particularly,
• For β4 = β8 = · · · = β2n = 1, we obtain the classical slant transform.20,26,27
• For β2n = 22n−2 for all n ≥ 2, we obtain the WHT.9
• For β4 = β8 = · · · = β2n = β, β ≤ |4|, we refer to this case as the constant-β
slant transform.27
• For β4  β8  · · ·  β2n , −22n−2 ≤ β2n ≤ 22n−2 , n = 2, 3, 4, . . ., we refer
to this case as the multiple-β slant transform. In this case, some of the β2n
values can be equal, but not all of them.
(2) Parametric slant-transform matrices fulfill the following requirements of the
classical slant transform:
• Its first-row vector is of constant value.
• Its second-row vector represents the parametric slant vector.
• Its basis vectors are orthonormal.
• It has a fast algorithm (see Section 1.2).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


134 Chapter 3

(3) It is easily verified that the parametric slant-transform matrix can be


represented as

M4 H4 for n = 2
S 2n = (3.150)
C2n H2n for n > 2
with  
C2n−1 O2n−1
C2 = M2
n n ,
O2n−1 C2n−1
and
⎛ ⎞
⎜⎜⎜1 0 O0 0 0 O0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜⎜0 b2n O0 a2n 0 O0 ⎟⎟⎟⎟
⎜⎜⎜ 0 0 0 0 ⎟⎟⎟
⎜ O O I O O O 2 −2 ⎟
M2n = ⎜⎜⎜⎜ 2 −2 ⎟⎟⎟ ,
n−1 n−1

⎟⎟⎟ (3.151)
⎜⎜⎜0 0 O0 0 1 O ⎟⎟⎟
⎜⎜⎜ 0
⎟⎟⎟
⎜⎜⎜⎜0 a2n O0 −b2n 0 O0 ⎟⎟⎠
⎝ 0 0
O O O2n−1 −2 O0 O0 I2n−1 −2
where M2 = I2 · Om denotes a zero matrix of order m, Im denotes an identity
matrix of order m, H2n is the Hadamard-ordered Walsh–Hadamard matrix of
size 2n , the parameters a2n and b2n are given in Eq. (3.148), and O0 and O0
denote the zero row and zero column, both of length 2n−1 − 2, respectively.
Example: For 2n = 8 we have, respectively, classical case (β2n = 1), constant-β
case (β2n = 1.7), multiple-β case (β4 = 1.7 and β8 = 8.1), and Hadamard case
(β4 = 4, β8 = 16). Figure 3.15 shows the basis vectors for this example.
(1) Classical case (β2n = 1):
⎛ ⎞
⎜⎜⎜1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.5275 1.0911 0.6547 0.2182 −0.2182 −0.6547 −1.0911 −1.5275⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.0000 −1.0000 −1.0000 1.0000 1.0000 −1.0000 −1.0000 1.0000⎟⎟⎟⎟
⎜ ⎟⎟
1 ⎜⎜⎜⎜0.4472 −1.3416 1.3416 −0.4472 0.4472 −1.3416 1.3416 −0.4472⎟⎟⎟⎟
S Classical = √ ⎜⎜⎜ ⎟⎟.
8 ⎜⎜⎜⎜1.3416 0.4472 −0.4472 −1.3416 −1.3416 −0.4472 0.4472 1.3416⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0.6831 −0.0976 −0.8783 −1.6590 1.6590 0.8783 0.0976 −0.6831⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝1.0000 −1.0000 −1.0000 1.0000 −1.0000 1.0000 1.0000 −1.0000⎟⎟⎟⎟

0.4472 −1.3416 1.3416 −0.4472 −0.4472 1.3416 −1.3416 0.4472
(3.152)
(2) Constant-β case (β2n = 1.7):
⎛ ⎞
⎜⎜⎜1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.5088 1.1245 0.6310 0.2467 −0.2467 −0.6310 −1.1245 −1.5088⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.0000 −1.0000 −1.0000 1.0000 1.0000 −1.0000 −1.0000 1.0000⎟⎟⎟⎟
⎜ ⎟⎟
1 ⎜⎜⎜⎜0.5150 −1.3171 1.3171 −0.5150 0.5150 −1.3171 1.3171 −0.5150⎟⎟⎟⎟
S Const = √ ⎜⎜⎜ ⎟⎟.
8 ⎜⎜⎜⎜1.3171 0.5150 −0.5150 −1.3171 −1.3171 −0.5150 0.5150 1.3171⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0.6770 −0.0270 −0.9312 −1.6352 1.6352 0.9312 0.0270 −0.6770⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝1.0000 −1.0000 −1.0000 1.0000 −1.0000 1.0000 1.0000 −1.0000⎟⎟⎟⎟

0.5150 −1.3171 1.3171 −0.5150 −0.5150 1.3171 −1.3171 0.5150
(3.153)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 135

Figure 3.15 Parametric slant-transform basis vectors for (2n = 8): (a) classical case,
(b) constant-β case (β2n = 1.7), (c) multiple-β case (β4 = 1.7 and β8 = 8.1), and (d) Hadamard
case (β4 = 4, β8 = 16).

(3) Multiple-β case (β4 = 1.7, β8 = 8.1):


⎛ ⎞
⎜⎜⎜1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000 1.0000⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.4218 1.1203 0.7330 0.4315 −0.4315 −0.7330 −1.1203 −1.4218⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.0000 −1.0000 −1.0000 1.0000 1.0000 −1.0000 −1.0000 1.0000⎟⎟⎟⎟
⎜ ⎟⎟
1 ⎜⎜⎜⎜0.5150 −1.3171 1.3171 −0.5150 0.5150 −1.3171 1.3171 −0.5150⎟⎟⎟⎟
S Multiple = √ ⎜⎜⎜ ⎟⎟.
8 ⎜⎜⎜⎜1.3171 0.5150 −0.5150 −1.3171 −1.3171 −0.5150 0.5150 1.3171⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0.8446 0.1013 −0.8532 −1.5964 1.5964 0.8532 −0.1013 −0.8446⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝1.0000 −1.0000 −1.0000 1.0000 −1.0000 1.0000 1.0000 −1.0000⎟⎟⎟⎟

0.5150 −1.3171 1.3171 −0.5150 −0.5150 1.3171 −1.3171 0.5150
(3.154)
(4) Hadamard case (β4 = 4, β8 = 16):
⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟

⎜⎜⎜⎜+ + + + − − − −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜ + − − − − + +⎟⎟⎟⎟
1 ⎜⎜⎜⎜⎜+ ⎟
+ − − + + + −⎟⎟⎟⎟
S Had = √ ⎜⎜⎜ ⎟. (3.155)
8 ⎜⎜⎜+ − − + + − − +⎟⎟⎟⎟

⎜⎜⎜+
⎜⎜⎜ − + + − − + −⎟⎟⎟⎟

⎜⎜⎝+ − + − − + − +⎟⎟⎟⎠
+ − + − + − + −

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


136 Chapter 3

3.7 Construction of Sequential Integer Slant HTs


This section presents a new class of sequential integer slant HTs. The sequential
number of a function is the number of sign inversions or “zero crossings” on the
interval of definition. A matrix is said to be sequential if the sequential number of
its rows grows with the number of the rows.
Sequency ordering, used originally by Ref. 23, is most popular in signal theory
because it ranks the transform coefficients roughly according to their variances for
signal statistics commonly encountered in practice and because of the analogies
with the frequency ordering of the Fourier functions.
Lemma 3.7.1: 25,26 Let S N and S −1
N be forward and inverse sequential slant-
transform matrices. Then

S 2N = [H2 ⊗ A1 , H1 ⊗ A2 , . . . , H2 ⊗ AN−1 , H1 ⊗ AN ] ,
⎡ −1 ⎤
⎢⎢⎢H2 ⊗ B1 ⎥⎥⎥
⎢⎢⎢ −1 ⎥
⎢⎢⎢H1 ⊗ B2 ⎥⎥⎥⎥⎥
⎢ ⎥⎥⎥⎥ (3.156)
−1
S 2N = ⎢⎢⎢⎢⎢... ⎥⎥⎥
⎢⎢⎢ −1
⎢⎢⎢H2 ⊗ BN−1 ⎥⎥⎥⎥⎥
⎣ −1 ⎦
H1 ⊗ B N

are the forward and inverse sequential slant HT matrices of order 2N, where Ai
−1
and Bi are the i’th column
+ +and i’th row of the S N and S N matrices, respectively,
+ +
H2 = + − , and H1 = − + .
The construction will be based on the parametric sequential integer slant
matrices and Lemma 3.7.1. Examples of parametric sequential slant matrices and
their inverse matrices of order 3 and 5 are given below:
⎛ ⎞
⎜⎜⎜ 1 1 1 ⎟⎟⎟⎟
⎜⎜

⎛ ⎞ ⎜⎜⎜ 3 2a 6b ⎟⎟⎟⎟⎟
⎜⎜⎜1 1 1⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜ ⎟
[PS ]3 (a, b) = ⎜⎜⎜⎜a 0 −a⎟⎟⎟⎟ , [PS ]−1 (a, b) = ⎜⎜⎜⎜⎜ 1 0 − 1 ⎟⎟⎟⎟⎟ ,
⎝ ⎠ 3
⎜⎜⎜ 3 3b ⎟⎟⎟⎟
b −2b b ⎜⎜⎜ ⎟
⎜⎜⎝ 1 1 1 ⎟⎟⎟⎠

3 2a 6b
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − ⎟
⎜⎜⎜ 5 5b
⎜⎜⎜ 6a 10b 15c ⎟⎟⎟⎟⎟
⎛ ⎞ ⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1 1 ⎟⎟ ⎜
⎜ − ⎟
10c ⎟⎟⎟⎟⎟
⎟ ⎜⎜⎜ 5 10b 0
⎜⎜⎜ 2b b 0 −b −2b⎟⎟⎟⎟ ⎜ 5b
⎜⎜⎜ ⎟ ⎜
⎜⎜⎜ 1 ⎟
[PS ]5 (a, b, c) = ⎜⎜⎜⎜ a 0 −2a 0 a ⎟⎟⎟⎟ , [PS ]−1 1 1 ⎟⎟⎟⎟ .
⎟ 5 =⎜ ⎜
⎜ 0 − 0 ⎟⎟
⎜⎜⎜−b 2b 0 −2b b ⎟⎟⎟⎟⎠ ⎜⎜⎜ 5 3a 15c ⎟⎟⎟⎟
⎜⎝ ⎜⎜⎜ ⎟
2c −3c 2c −3c 2c ⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − 0 − − ⎟⎟
⎜⎜⎜ 5 10b 5b 10c ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎝ 1 1 1 1 1 ⎟⎟⎟⎠

5 5b 6a 10b 15c
(3.157)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 137

Remark 1: The slant transform matrices in Eqs. (3.143), (3.144), and (3.157)
possess the sequency property in ordered form.
Remark 2: One can construct a class of slant HTs of order 3 · 2n , 4 · 2n , 5 · 2n ,
and 8 · 2n , for n = 1, 2, . . ., by utilizing Lemma 3.7.1 and the parametric integer
slant-transform matrices in Eqs. (3.143), (3.144), (3.156), and (3.157).
Example 3.7.1: (a) Using Eqs. (3.156) and (3.157), for N = 3 and n = 1, we have
the forward integer slant HT matrix [PS ]6 and inverse slant HT matrix [PS ]−1
6
of order 6:
⎡ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞⎤
⎢⎢⎢  ⎜1⎟   ⎜ 1 ⎟⎟  
⎟⎟⎟ + + ⎜⎜⎜⎜⎜ 1⎟⎟⎟⎟⎟⎥⎥⎥⎥⎥
⎢⎢ + + ⎜⎜⎜⎜⎜ ⎟⎟⎟⎟⎟ +
⎢ + ⎜⎜⎜⎜
[PS ]6 (a, b) = ⎢⎢ ⊗ a , ⊗ ⎜ 0 ⎟⎟⎟ , ⊗ ⎜−a⎟⎥
⎣ + − ⎜⎜⎝ ⎟⎟⎠ − + ⎜⎝⎜ ⎠ + − ⎜⎜⎝ ⎟⎟⎠⎥⎥⎦
b −2b b
⎛1 1 1 ⎞
⎜⎜⎜ 1 1 1⎟⎟
⎜⎜⎜a a 0
⎜⎜⎜ 0 −a −a⎟⎟⎟⎟⎟

⎜⎜b b −2b −2b
⎜ b b⎟⎟⎟⎟
= ⎜⎜⎜ ⎟. (3.158)
⎜⎜⎜⎜1 −1 −1 1 1 −1⎟⎟⎟⎟

⎜⎜⎝a −a 0 0 −a a⎟⎟⎟⎠
b −b 2b −2b b −b
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 3 2a 6b 3 2a
⎜⎜⎜ 6b ⎟⎟⎟⎟⎟
⎛  % & ⎞ ⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ + + 1 1 1 ⎟⎟⎟ ⎜⎜⎜ − − − ⎟⎟⎟⎟
⎜⎜⎜⎜ + ⊗ ⎟ ⎜
⎜⎜⎜ 3 2a 6b 3 2a 6b ⎟⎟⎟
⎜⎜⎜ − 3 2a 6b ⎟⎟⎟⎟ ⎜ 1 ⎟⎟⎟⎟

⎜⎜⎜ +  % & ⎟⎟⎟⎟ 1 ⎜⎜⎜⎜ 1 0 − 1 − 1 0 ⎟⎟
⎜⎜⎜ − 1 1 ⎟⎟⎟ ⎜⎜ 3b ⎟⎟⎟⎟.
[PS ]−1 (a, b) = ⎜⎜⎜ + ⊗ 0 − ⎟⎟⎟ = ⎜⎜⎜⎜ 3 3b 3

+ ⎜⎜⎜⎜ 1 1 ⎟⎟⎟
6
⎜⎜⎜ 3 3b ⎟
⎟ 2 1 1
⎜⎜⎜ +  % &⎟ ⎟ ⎜⎜⎜ 0 − − ⎟⎟⎟⎟
1 1 ⎟⎟⎟⎟
0
⎜⎝ + 1
⎠ ⎜⎜⎜ 3 3b 3 3b ⎟⎟⎟
⊗ − ⎜ ⎟
+ − 3 2a 6b ⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − − ⎟⎟
⎜⎜⎜ 3 2a 6b 3 2a 6b ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎝ 1 1 1 1 1 1 ⎟⎟⎟
− − − ⎠
3 2a 6b 3 2a 6b
(3.159)
(b) Using Eqs. (3.143) and (3.157), for N = 4 and n = 1, we obtain, using the
notation c = 2(a2 + b2 ),
⎡ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞⎤
⎢⎢⎢  ⎜⎜1⎟⎟   ⎜⎜ 1⎟⎟   ⎜⎜ 1⎟⎟   ⎜⎜ 1⎟⎟⎥⎥
⎢⎢⎢⎢ + + ⎜⎜⎜⎜⎜a⎟⎟⎟⎟⎟ + + ⎜⎜⎜⎜⎜ b⎟⎟⎟⎟⎟ + + ⎜⎜⎜⎜⎜−b⎟⎟⎟⎟⎟ + + ⎜⎜⎜⎜⎜−a⎟⎟⎟⎟⎟⎥⎥⎥⎥⎥
[PS ]8 (a, b) = ⎢⎢⎢ ⊗ ⎜ ⎟, ⊗ ⎜ ⎟, ⊗ ⎜ ⎟, ⊗ ⎜ ⎟⎥
⎢⎢⎣ + − ⎜⎜⎜⎜⎝1⎟⎟⎟⎟⎠ − + ⎜⎜⎜⎜⎝−1⎟⎟⎟⎟⎠ + − ⎜⎜⎜⎜⎝−1⎟⎟⎟⎟⎠ − + ⎜⎜⎜⎜⎝ 1⎟⎟⎟⎟⎠⎥⎥⎥⎥⎦
b −a a −b
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜a a b b −b −b −a −a⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜⎜1 1 −1 −1 −1 −1 1 1⎟⎟⎟⎟⎟
⎜b b −a −a a a −b −b⎟⎟⎟
= ⎜⎜⎜⎜⎜ ⎟. (3.160)
⎜⎜⎜1 −1 −1 1 1 −1 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜a −a −b b −b b a −a⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝1 −1 1 −1 −1 1 −1 1⎟⎟⎟⎟⎠
b −b a −a a −a b −b

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


138 Chapter 3

⎛⎛ ⎞ ⎞
⎜⎜⎜⎜⎜⎜+ +⎟⎟⎟ % 1 a 1 b ⎟⎟⎟&
⎜⎜⎜⎜⎜⎝ ⎜ ⎟⎟⎠ ⊗ ⎟⎟⎟
⎜⎜⎜ + − 4 c 4 c ⎟⎟⎟
⎜⎜⎜⎛ ⎟⎟
⎜⎜⎜⎜⎜+ −⎞⎟⎟ % 1 & ⎟⎟
b 1 a ⎟⎟⎟⎟⎟
⎜⎜⎜⎜⎜⎜ ⎟⎟⎟ ⊗ − − ⎟
⎜⎜⎜⎝ ⎠ c 4 c ⎟⎟⎟⎟⎟
1 ⎜
⎜⎜⎜⎛ + + 4
[PS ]−1 = ⎟
(a, b)
2 ⎜⎜⎜⎜⎜+ +⎟⎟ ⎜
⎜ ⎞ % & ⎟⎟⎟
b 1 a ⎟⎟⎟⎟
8
⎜⎜⎜⎜⎜⎜⎝ ⎟⎟ 1
⎟⎟
⎜⎜⎜ + −⎟⎠ ⊗ 4 − −
c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎛ ⎞ & ⎟⎟⎟⎟
⎜⎜⎜⎜⎜⎜+ −⎟⎟⎟ % 1 a 1 b ⎟⎟
⎜⎝⎜⎜⎝ ⎟⎟⎠ ⊗ − − ⎟⎠
+ + 4 c 4 c
⎛ ⎞
⎜⎜⎜ 1 a 1 b 1 a 1 b ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 a 1 b 1 a 1 b ⎟⎟⎟⎟
⎜⎜⎜ − − − − ⎟⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 b 1 a 1 b 1 a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 4 c − 4 −
c

4

c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 b 1 a 1 b 1 a ⎟⎟⎟⎟

1 ⎜⎜⎜⎜⎜ 4 c − 4 −
c 4 c

4
− ⎟⎟⎟
c ⎟⎟⎟ .
= ⎜⎜⎜ ⎟⎟ (3.161)
2 ⎜⎜⎜ 1 b 1 a 1 b 1 a ⎟⎟⎟⎟
⎜⎜⎜ − − − − ⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜ 1 b 1 a 1 b 1 a ⎟⎟⎟⎟
⎜⎜⎜ − − − − ⎟⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 a 1 b 1 a 1 b ⎟⎟⎟⎟
⎜⎜⎜ − − − − ⎟⎟
⎜⎜⎜ 4 c 4 c 4 c 4 c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎝ 1 a 1 b 1 a 1 b ⎟⎟⎟⎠
− − − −
4 c 4 c 4 c 4 c

(c) For N = 5 and n = 1, we have integer slant HT matrix [PS ]10 of order 10:
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 2b −b −b −2b −2b⎟⎟⎟⎟⎟
⎜⎜⎜ 2b b b 0 0
⎟⎟
⎜⎜⎜
⎜⎜⎜ a a 0 0 −2a −2a 0 0 a a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−b −b 2b 2b 0 0 −2b −2b b b ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜ 2c 2c −3c −3c 2c −3c 2c ⎟⎟⎟⎟⎟
[PS ]10 (a, b, c) = ⎜⎜⎜⎜⎜
2c 3c 2c
⎟⎟ , (3.162)
⎜⎜⎜ 1
⎜⎜⎜ −1 −1 1 1 −1 −1 1 1 −1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 2b ⎟⎟⎟
⎜⎜⎜ −2b −b b 0 0 b −b −2b 2b⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ a −a 0 0 −2a 2a 0 0 a −a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜−b b −2b 2b 0 0 2b −2b b −b ⎟⎟⎟⎟⎟
⎜⎝ ⎟⎠
2c −2c 3c −3c 2c −2c 3c −3c 2c −2c

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 139

⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − − ⎟⎟
⎜⎜⎜ 5 5a 6a 10a 15c 5 5a 6a 10a 15c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − − − − − ⎟⎟
⎜⎜⎜ 5 5a 6a 10a 15c 5 5a 6a 10a 15c ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 1
⎜⎜⎜ 1 1 1 1 1 1 1 ⎟⎟⎟⎟⎟
0 − − − 0 − ⎟
⎜⎜⎜ 5
⎜⎜⎜ 10b 5b 10c 5 10b 5b 10c ⎟⎟⎟⎟⎟

⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ 0 − 0 − ⎟⎟
⎜⎜⎜ 5 10b 5b 10c 5 10b 5b 10c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟⎟
−1 1 ⎜⎜⎜⎜ 5 0 − 0 0 − 0
15c ⎟⎟⎟⎟⎟ .

[PS ]10 (a, b, c) = ⎜⎜⎜ 3a 15c 5 3a

2 ⎜⎜⎜ 1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 0 −
1
0
1

1
0
1
0 − ⎟
⎜⎜⎜ 5 3a 15c 5 3a 15c ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − 0 − − − 0 ⎟⎟
⎜⎜⎜ 5 10b 5b 10c 5 10b 5b 10c ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − 0 − − − 0 − − ⎟
⎜⎜⎜ 5
⎜⎜⎜ 10b 5b 10c 5 10b 5b 10c ⎟⎟⎟⎟⎟

⎜⎜⎜ 1
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟⎟
− − ⎟
⎜⎜⎜ 5
⎜⎜⎜ 5b 6a 10b 15c 5 5b 6a 10b 15c ⎟⎟⎟⎟⎟

⎜⎝⎜ 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟⎠
− − − − −
5 5b 6a 10b 15c 5 5b 6a 10b 15c
(3.163)
Some useful properties of the integer slant HT matrix are given below.
Properties:
(a) The slant HT matrix S 2N is an orthogonal matrix only if N is a power of two.
(b) If S N is sequential, then S 2N is also a sequential integer slant HT matrix [see
Eq. (3.156)].
Proof: Let Ri and R1i be i’th rows of S N and S 2N , respectively, and let ui, j be
an i’th and j’th element of S N , i, j = 0, 1, . . . , N − 1. The top half of S 2N ,
R1i i = 0, 1, . . . , N − 1, is obtained from (1, 1) ⊗ ui, j , which does not alter the
sequential number of the rows.
Thus, the sequential number of R1i is equal to the sequential number of Ri ,
i = 0, 1, . . . , N − 1. The bottom half of S 2N , R1i , i = N, N + 1, . . . , 2N − 1, is obtained
from (1, −1) ⊗ ui, j , and (−1, 1) ⊗ ui, j . This causes the sequential number of each row
to increase by N. Thus, the sequential number of each R1i , i = N, N + 1, . . . , 2N − 1,
is equal to the sequential number of its corresponding Ri , i = 0, 1, . . . , N − 1 plus
N. This implies that the sequential number of R1i i = 0, 1, . . . , 2N − 1 grows with
its index and S 2N is sequential, as can be seen from the examples given above.
(c) One can construct the same size slant-transform matrix in different ways.
Indeed, the slant-transform matrix of order N = 16 can be obtained by two
ways using Lemma 3.7.1 with initial matrix [PS ]4 (a, b) [see Eq. (3.143)]
or using Lemma 3.7.1 once with the initial matrix [PS ]8 (a, b, c, d, e, f ) [see
Eq. (3.144)]. It shows that we can construct an integer slant transform of
order 2n .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


140 Chapter 3

(d) The integer slant matrices [PS ]4 (a, b) and [PS ]−1 4 (a, b) = Q4 (a, b) can be
factored as
⎛ ⎞⎛ ⎞
⎜⎜⎜1 1 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 1⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜0 0 b a⎟⎟⎟ ⎜⎜⎜0 1 1 0⎟⎟⎟⎟⎟
[PS ]4 (a, b) = S 2 S 1 = ⎜⎜⎜⎜ ,
⎜⎜⎜1 −1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 1 −1 0⎟⎟⎟⎟⎟
⎝ ⎠⎝ ⎠
0 0 −a b 1 0 0 −1
⎛ ⎞⎛ ⎞ (3.164)
⎜⎜⎜1 0 1 0⎟⎟⎟ ⎜⎜⎜c 0 c 0⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟
⎜0 1 0 1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜c 0 −c 0⎟⎟⎟⎟⎟
Q4 (a, b) = Q2 Q1 = ⎜⎜⎜⎜ ⎟⎜ ⎟,
⎜⎜⎜⎝0 1 0 0⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝0 a 0 b⎟⎟⎟⎟⎠
1 0 −1 −1 0 b 0 −a

where
c = (a2 + b2 )/2. (3.165)

Let S N be an integer slant matrix of order N. We introduce the following


matrix:
S 2N = [H2 ⊗ A1 , H1 ⊗ A2 , . . . , H2 ⊗ AN−1 , H1 ⊗ AN ] , (3.166)
   
where Ai is the i’th column of S N , and H1 = +− ++ , H2 = ++ +− .
We can see that the matrix S 2N is an integer slant matrix of order 2N. In reality,
we have
T
S 2N S 2N = H2 H2T ⊗ A1 AT1 + H1 H1T ⊗ A2 AT2 + · · ·
+ H2 H2T ⊗ AN−1 ATN−1 + H1 H1T ⊗ AN ATN
N
(3.167)
= 2I2 ⊗ Ai ATi = diag {a1 , a2 , . . . , a2N } .
i=1

For N = 4, we have
* +
S 8 S 8T = 2 I4 ⊕ 2(a2 + b2 ) . (3.168)

We can also check that the inverse matrix of S 4 (a, b) has the following form:
⎛ ⎞
⎜⎜⎜c a c b⎟⎟⎟
⎜ ⎟
1 ⎜⎜⎜⎜c b −c −a⎟⎟⎟⎟ a2 + b2
Q4 (a, b) = ⎜⎜⎜ ⎟⎟⎟ , c = , (3.169)
4c ⎜⎜⎜c −b −c a⎟⎟⎟ 2
⎝ ⎠
c −a c −b
i.e., S 4 (a, b)Q4 (a, b) = Q4 (a, b)S 4 (a, b) = I4 , and if parameters a and b are both
even or odd, the matrix in Eq. (3.169) is an integer matrix without granting a
coefficient.
One can verify that the following matrices are mutually inverse matrices of
order 8:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 141

S 8 (a, b) = [H2 ⊗ A1 , H1 ⊗ A2 , H2 ⊗ A3 , H1 ⊗ A4 ] ,
1 (3.170)
Q8 (a, b) = [H2 ⊗ Q1 , H1 ⊗ Q2 , H2 ⊗ Q3 , H1 ⊗ Q4 ] ,
4c
where Ai and Qi are the i’th column and row of the matrices S 4 (a, b) and Q4 (a, b),
respectively.

3.7.1 Fast algorithms


It is not difficult to show that the slant matrix in Eq. (3.137) can be represented as

S (2n ) = M2n ⊗ I2 ⊗ S (2n−1 ), (3.171)

where
⎛ ⎞
⎜⎜⎜1 0 O0 0 0 ⎟⎟⎟ O0
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 bn O0 an 0 O0 ⎟⎟⎟
⎜⎜⎜ 0 ⎟⎟
⎜O O0 I2n−1 −2 O0 O0 O2n−1 −2 ⎟⎟⎟⎟
M2n = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , (3.172)
⎜⎜⎜⎜0 0 O0 0 1 O0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 an O0 −bn 0 O0 ⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠
O0 O0 O2n−1 −2 O0 O0 I2n−1 −2

where Om denotes a zero matrix of order m and M2 = I2 . One can show that a slant
matrix of order 2n can be factored as

S (2n ) = S n S n−1 · · · S 1 , (3.173)

where

S i = (I2n−i ⊗ M2i ) (I2n−i ⊗ H2 ⊗ I2i−1 ) . (3.174)

It is easy to prove that the fast algorithm based on decomposition in Eq. (3.173)
requires C + (2n ) addition and C × (2n ) multiplication operations,

C + (2n ) = (n + 1)2n − 2, C × (2n ) = 2n+1 − 4. (3.175)

We see that the integer slant matrices in Eqs. (3.143) and (3.169) can be factored
as
⎛ ⎞⎛ ⎞
⎜⎜⎜1 1 0 0⎟⎟⎟ ⎜⎜⎜1 0 0 1⎟⎟⎟
⎜⎜⎜⎜ ⎟⎜ ⎟
a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 1 1 0⎟⎟⎟⎟⎟
S 4 (a, b) = S 2 S 1 = ⎜⎜⎜⎜⎜
0 0 b
⎟⎜ ⎟. (3.176)
⎜⎜⎜1 −1 0 0⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 1 −1 0⎟⎟⎟⎟⎟
⎝ ⎠⎝ ⎠
0 0 −a b 1 0 0 −1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


142 Chapter 3

⎛ ⎞⎛ ⎞
⎜⎜⎜1 0 1 0⎟⎟⎟ ⎜⎜⎜c 0 c 0⎟⎟⎟
⎜⎜⎜⎜0 ⎟⎜
1 0 1⎟⎟⎟⎟ ⎜⎜⎜⎜c

0 −c 0⎟⎟⎟⎟
Q4 (a, b) = Q2 Q1 = ⎜⎜⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ . (3.177)
⎜⎜⎜0 1 0 −1⎟⎟⎟⎟ ⎜⎜⎜⎜0 a 0 b⎟⎟⎟⎟
⎝ ⎠⎝ ⎠
1 0 −1 0 0 b 0 −a
Now, using the above-given representation of matrices S 4 (a, b), Q4 (a, b), and the
formula in Eq. (3.170), we can find the following respective complexities:
• 2n+1 additions and 2n multiplications for forward transform.
• 2n+1 additions and 2n+1 multiplications for inverse transform.

3.7.2 Examples of slant-transform matrices


In this section, we give some sequency-ordered slant-transform matrices obtained
from parametric transforms.
⎛ ⎞
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜1 1 ⎟⎟⎟⎟ ⎛ ⎞
⎜⎜⎜ 1
⎟⎟⎟ ⎜⎜⎜1 1 1 1⎟⎟⎟
⎜ √ √ ⎜⎜⎜3 1 −1 −3⎟⎟⎟
1 ⎜⎜ 6 ⎟⎟⎟⎟⎟ ,
1
⎜ ⎟⎟⎟ · √
S 3 = √ ⎜⎜⎜⎜⎜ 6 − ⎟ S 4 = ⎜⎜⎜⎜ ⎟ 5
(3.178)
2 ⎟⎟⎟ ⎜⎜⎜⎝1 −1 −1 1⎟⎟⎟⎟⎠
0
3 ⎜⎜⎜ 2
√ ⎟⎟⎟⎟
1
⎜⎜⎜ √ 1 −1 3 −1
· √
⎜⎜⎝ 2 √ 2 ⎟⎟⎠ 5
− 2
2 2
⎛ ⎞
⎛ ⎞ ⎜⎜⎜1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜2 2 0 0 −2 −2⎟⎟⎟⎟⎟
/
⎜⎜⎜ 1 1 1 1 1⎟⎟⎟ 3
⎜⎜⎜ ⎟ ·
⎜⎜⎜ 2 1 0 −1 −2⎟⎟⎟⎟⎟ 1 ⎜⎜⎜
⎜⎜⎜
⎟⎟⎟
⎟ /
8

⎜⎜⎜1 1 −2 −2 1 1⎟⎟⎟⎟⎟
·
⎜⎜⎜ ⎟⎟⎟ /
2
·
1

S 5 = ⎜⎜⎜ 1 0 −2 0 1⎟⎟⎟⎟ ·
5
S 6 = ⎜⎜⎜ ⎟⎟ 2
(3.179)
⎜⎜⎜
⎜⎜⎜−1 2 0 −2 1⎟⎟⎟⎟⎟
⎟ 6
⎜⎜⎜⎜1 −1 −1 1 1 −1⎟⎟⎟⎟⎟ /
1
⎜⎜⎜ ⎟
⎜⎜⎜2 −2 0 0 −2 2⎟⎟⎟⎟⎟
· 3
⎜⎜⎝ ⎟⎟⎠ 2 ·
8
2 −3 2 −3 2 ⎜⎜⎜ ⎟
⎝1 −1 2 −2 1 −1⎟⎟⎠
1 /
· 1
6 ·
2

⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟
⎟⎟⎟
⎜⎜⎜⎜4 2 2 0 0 −2 −2 −4⎟⎟⎟⎟
/
⎜⎜⎜ ⎟⎟⎟ ·
1
⎜⎜⎜ 6
2⎟⎟⎟⎟⎟
/
⎜⎜⎜2 1 −1 −2 −2 −1 1 2
⎜⎜⎜ ⎟⎟⎟ ·
⎜⎜⎜ 5

−2⎟⎟⎟⎟⎟
/
⎜⎜⎜2 0 −4 −2 2 4 0 ·
1
S 8 = ⎜⎜⎜⎜ ⎟⎟ (3.180)
1⎟⎟⎟⎟⎟
6
⎜⎜⎜⎜1 −1 −1 1 1 −1 −1 /
⎜⎜⎜ ⎟⎟
−2⎟⎟⎟⎟
1
⎜⎜⎜2 −4 0 2 −2 0 4 ·
⎜⎜⎜ ⎟⎟⎟ /
6

⎜⎜⎜1 ⎟
1⎟⎟⎟⎟
2
⎜⎜⎜ −2 2 −1 −1 2 −2 ·

⎜⎜⎝ ⎟⎟⎟ /
5

0⎠
1
0 −2 2 −4 4 −2 2 ·
6

⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1⎟⎟
⎟⎟⎟
⎜⎜⎜⎜ /
⎜⎜⎜4 3 2 1 0 −1 −2 −3 −4⎟⎟⎟⎟ ·
3
⎜⎜⎜ ⎟⎟⎟ 20
⎜⎜⎜1 ⎟ /

⎜⎜⎜ 1 1 −2 −2 −2 1 1 1⎟⎟⎟⎟ ·
1

⎜⎜⎜ ⎟⎟⎟ /
2

⎜⎜⎜1 0 −1 −2 0 2 1 0 −1⎟⎟⎟⎟ ·
3
⎜⎜⎜ ⎟⎟⎟ 4
⎜ 3⎟⎟⎟⎟
S 9 = ⎜⎜⎜⎜3 0 −3 0 −3 1
0 0 0 · (3.181)
⎜⎜⎜ ⎟⎟⎟ 2
/
⎜⎜⎜2
⎜⎜⎜ −1 −4 3 0 −3 4 1 −2⎟⎟⎟⎟⎟ ·
3

⎜⎜⎜ ⎟⎟⎟ /
20

−1⎟⎟⎟⎟⎟
3
⎜⎜⎜1 −2 1 0 0 0 −1 2 ·
⎜⎜⎜ ⎟⎟⎟ /
4
⎜⎜⎜ ⎟
1⎟⎟⎟⎟
1
⎜⎜⎜1 −2 1 1 −2 1 1 −2 ·
2
⎜⎝ ⎟⎟
1⎠
1
1 −2 1 −2 4 −2 1 −2 ·
2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 143

⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
−13⎟⎟⎟⎟⎟
/
⎜⎜⎜13 9 15 11 5 1 7 3 −3 −7 −1 −5 −11 −15 −9 1
⎜⎜⎜ ⎟⎟ ·
⎜⎜⎜ 1 −1 −1 −1 −1 −1 −1 −1 −1 1 ⎟⎟⎟⎟⎟
85
⎜⎜⎜ 1 1 1 1 1 1
⎟⎟
⎜⎜⎜ /
⎜⎜⎜ 1 1 1 1 −3 −3 −3 −3 3 3 3 3 −1 −1 −1 −1 ⎟⎟⎟⎟⎟ ·
1
⎜⎜⎜ ⎟⎟ 5
⎜⎜⎜ 3
⎜⎜⎜ 1 −1 −3 −9 −3 3 9 9 3 −3 −9 −3 −1 1 3 ⎟⎟⎟⎟⎟ ·
1

⎜⎜⎜ ⎟⎟⎟ 5
/
⎜⎜⎜ 3 1 −1 −3 −3 −1 1 3 −3 −1 1 3 3 1 −1 −3 ⎟⎟⎟⎟ ·
1
⎜⎜⎜ ⎟⎟⎟ 5
⎜⎜⎜ 9
⎜⎜⎜ 3 −3 −9 3 1 −1 −3 −3 −1 1 3 −9 −3 3 9 ⎟⎟⎟⎟⎟ ·
1

⎜⎜ ⎟⎟⎟ 5
−13⎟⎟⎟⎟
/
1 ⎜⎜⎜⎜⎜11 5 −3 −13 11 5 −3 −13 11 5 −3 −13 11 5 −3
⎟⎟⎟ ·
1
S 16 = ⎜⎜⎜ 85
.
4 ⎜⎜⎜ 1 1 −1 −1 1 1 −1 −1 1 1 −1 −1 1 1 −1 −1 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ /
⎜⎜⎜ 3 −3 ⎟⎟⎟⎟
1
−3 −3 3 1 −1 −1 1 −1 1 1 −1 −3 3 3 ·
⎜⎜⎜ ⎟⎟⎟ 5
⎜⎜⎜ 1 −1 −1 −1 1 −1 −1 1 −1 −1 −1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1
⎟⎟
/
⎜⎜⎜
−1 ⎟⎟⎟⎟
1
·
⎜⎜⎜ 1 −1 −1 1 −3 3 3 −3 3 −3 −3 3 −1 1 1 5
⎜⎜⎜ ⎟⎟⎟
1 ⎟⎟⎟⎟⎟
1
⎜⎜⎜ 1 −3 3 −1 −3 9 −9 3 3 −9 9 −3 −1 3 −3 ·
⎜⎜⎜ ⎟⎟⎟
5
/
⎜⎜⎜
−1 ⎟⎟⎟⎟
1
⎜⎜⎜ 1 −3 3 −1 −1 3 −3 1 −1 3 −3 1 1 −3 3 ·
⎜⎜⎜ ⎟⎟⎟ 5

⎜⎜⎜ 3 3 ⎟⎟⎟⎟⎟
1
⎜⎜⎜ −9 9 −3 1 −3 3 −1 −1 3 −3 1 −3 9 −9 ·
⎟⎟⎟
5
/
⎜⎝
−1 ⎠
1
1 −3 3 −1 1 −3 3 −1 1 −3 3 −1 1 −3 3 ·
5

(3.182)
3.7.3 Iterative parametric slant Haar transform construction
The forward and inverse parametric slant Haar transforms of order 2n (n ≥ 1) with
parameters β22 , β23 , . . . , β2n are defined as29
X = S 2n (β22 , β23 , . . . , β2n )x,
(3.183)
x = S 2Tn (β22 , β23 , . . . , β2n )X,
where x is an input data vector of length 2n and S 2n is generated recursively as
 
A2 ⊗ S 2,2n−1
S 2n = S 2n (β22 , β23 , . . . , β2n ) = Q2n , (3.184)
I2 ⊗ S 2n −2,2n−1
where S 2,2n−1 is a matrix of the dimension 2 × 2n−1 comprising the first two rows of
S 2n−1 , and S 2n−1 −2,2n−1 is a matrix of the dimension 2n−1 − 2 × 2n−1 comprising the
third to the 2n−1 rows of S 2n−1 , ⊗ denotes the operator of the Kronecker product, and
 
1 1 1
A2 = √ . (3.185)
2 1 −1
S 4 is the 4-point parametric slant HT constructed in the previous chapter. Q2n is
the recursion kernel matrix defined as
⎡ ⎤
⎢⎢⎢1 0 0 0 · · · 0⎥
⎢⎢⎢0 b n a n 0 · · · 0⎥⎥⎥⎥⎥
⎢⎢⎢ 2 2 ⎥⎥
⎢⎢⎢⎢0 a2n −b2n 0 · · · 0⎥⎥⎥⎥⎥
Q2n = ⎢⎢⎢⎢0 0 0 1 · · · 0⎥⎥⎥⎥⎥ , (3.186)
⎢⎢⎢ ⎥

⎢⎢⎢.. .. .. . .⎥
⎢⎢⎣. . . 0 . . .. ⎥⎥⎥⎥

0 0 0 0 ··· 1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


144 Chapter 3

and a2n , b2n are defined as in Eq. (3.148):


. .
3 · 22n−2 22n−2 − β2n
a 2n = , b2 n = , (3.187)
4 · 22 − β2n
n−2
4 · 22n−2 − β2n

where
n−2 n−2
−22 ≤ β2n ≤ 22 , n ≥ 3. (3.188)

The following remarks are relevant.


Properties: The parametric slant Haar transform fulfills the following require-
ments of the classical slant transform:
• Its first-row vector is of constant value.
• Its second-row vector represents the parametric slant vector.
• It has a fast algorithm (see Section 2.2).
• Its basis vectors are orthonormal.
Proof of orthonormality: Let R1 = A2 ⊗ S 2,2n−1 and R2 = I2 ⊗ S 2n−1 −2,2n−1 . Then,

R1 RT1 = (A2 ⊗ S 2,2n−1 )(A2 ⊗ S 2,2n−1 )T = A2 AT2 ⊗ S 2,2n−1 S 2,2


T
n−1 = I2 ⊗ I2 = I4 , (3.189)

R1 RT2 = (A2 ⊗ S 2,2n−1 )(I2 ⊗ S 2n−1 −2,2n−1 )T = A2 I2T ⊗ S 2,2n−1 S 2Tn−1 −2,2n−1
= A2 ⊗ O2,2n−1 −2 = O4,2n −4 , (3.190)
R2 RT1 = (I2 ⊗ S 2n−1 −2,2n−1 )(A2 ⊗ S 2,2n−1 ) = T
I2 AT2 ⊗ T
S 2n−1 −2,2n−1 S 2,2 n−1

= AT2 ⊗ O2n−1 −2 = O2n −4,4 , (3.191)


R2 RT2 = (I2 ⊗ S 2n−1 −2,2n−1 )(I2 ⊗ S 2n−1 −2,2n−1 ) = T
I2 I2T ⊗ S 2n−1 −2,2n−1 S 2Tn−1 −2,2n−1
= I2 ⊗ I2n−1 −2 = I2n −4 , (3.192)

but
⎡ .. ⎤
⎡ ⎤ ⎢⎢⎢ R RT . R1 RT2 ⎥⎥⎥⎥⎥
⎢⎢⎢R1 ⎥⎥⎥ 1 2 ⎢⎢⎢ 1 1 ⎥⎥⎥ T

S 2n S 2Tn = Q2n ⎢⎢⎢⎢⎣−−⎥⎥⎥⎥⎦ RT ... RT QT2n = Q2n ⎢⎢⎢⎢− − − ..
. − − −⎥⎥⎥⎥ Q2n
1 2 ⎢⎢⎢ ⎥⎥⎦
R2 ⎣ ..
R2 RT1 . R2 RT2
⎡ .. ⎤
⎢⎢⎢⎢ I4 . O4,2n −4 ⎥⎥⎥⎥⎥
⎢⎢⎢ ⎥⎥
= Q2n ⎢⎢⎢⎢− − −− ... − − −−⎥⎥⎥⎥ QT2n (3.193)
⎢⎢⎢ ⎥⎥⎥
⎣ . ⎦
O2n −4,4 .. I2n −4,4

where On,m is an n × m zero matrix. Hence, S 2n is orthonormal, and S 2n S 2Tn =


Q2n I2n QT2n = I2n . This completes the proof.
0 n−2 0
It is easily verified that for β n > 0022 00, parametric slant-transform matrices
2
lose their orthogonality. Slant-transform matrix S 2n is a parametric matrix with
(β4 , β8 , . . . , β2n ) parameters.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 145

Figure 3.16 Parametric slant Haar transform basis vectors for (2n = 8): (a) classical case
(β4 = 1, β8 = 1), (b) multiple-β case (β4 = 4, β8 = 16), (c) constant-β case (β4 = 1.7, β8 = 1.7),
and (d) multiple-β case (β4 = 1.7, β8 = 7).

The parametric slant-Haar transform falls into one of at least three different
categories according to β2n values:
• For β4 = β8 = · · · = β2n = β = 1, we obtain the classical slant Haar transform.20
• For β4 = β8 = · · · = β2n = β for β ≤ |4|, we refer to this as the constant-β slant
Haar transform.
n−2 n−2
• For β4  β8  · · ·  β2n for −22 ≤ β2n ≤ 22 , n = 2, 3, 4, . . ., we refer to this
as the multiple-β slant Haar transform; some of the β2n values can be equal, but
not all of them.

Example: The parametric slant-Haar transforms of order 8 yield, respectively, for


the following cases: The classical case, (β4 = β8 = · · · = β2n = β = 1), gives in
ordered form, the multiple-β case (β4 = 4, β8 = 16), the constant-β case (β4 = 1.7,
β8 = 1.7), and the multiple-β case (β4 = 1.7, β8 = 7). Their basis vectors are
shown, respectively, in Fig. 3.16.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


146 Chapter 3

(a) The classical case, (β4 = β8 = 1):


⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟
⎟⎟⎟
⎜⎜⎜⎜
⎜⎜⎜7 5 3 1 −1 −3 −5 −7⎟⎟⎟⎟ ·√
1
⎜⎜⎜ ⎟⎟⎟ 21
⎜⎜⎜3 1 −1 −3 −3 ⎟
⎜⎜⎜ −1 1 3⎟⎟⎟⎟ ·√
1

⎜⎜⎜ ⎟⎟⎟ 5

1 ⎜⎜⎜⎜7 −1 −9 −17 17 9 1 −7⎟⎟⎟⎟ ·√
1
S Classical = √ ⎜⎜⎜ ⎟⎟⎟ 105 . (3.194)
8 ⎜⎜⎜⎜1 −1 −1 1 0 0 0 0⎟⎟⎟⎟ √
· 2
⎜⎜⎜⎜0 0 0 0 ⎟⎟
⎜⎜⎜ 1 −1 −1 1⎟⎟⎟⎟ √
· 2
⎜⎜⎜ ⎟⎟⎟ /

⎜⎜⎜1 −3 3 −1 0 0 0 0⎟⎟⎟⎟ ·
2
⎜⎜⎜ ⎟⎟⎟ /
5
⎜⎝ ⎟
−3 −1⎠
2
0 0 0 0 1 3 ·
5

(b) The multiple-β case (β4 = 4, β8 = 16). Note that this is a special case of Haar
transform:
⎛ ⎞
⎜⎜⎜ ⎟
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜1 ⎟
⎜⎜⎜ 1 1 1 −1 −1 −1 −1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟

⎜⎜⎜1 −1 1 −1 1 −1 1 −1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟

⎜ ⎟
1 ⎜1 −1 1 −1 −1 −1 1 ⎟⎟⎟⎟
= √ ⎜⎜⎜⎜⎜ √
1
S Multiple
⎜ √ √ √ ⎟⎟⎟. (3.195)
8 ⎜⎜ 2
⎜⎜⎜ 2 − 2 − 2 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜⎜ √ √ √ √ ⎟⎟⎟
⎜⎜⎜ 2 − 2 − 2 2 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ √ √ √ √ ⎟⎟⎟
⎜⎜⎜0 − 2 − 2⎟⎟⎟⎟⎟
⎜⎜⎜ 0 0 0 2 2
⎜⎝ √ √ √ √ ⎟⎟⎠
0 0 0 0 2 − 2 − 2 2

(c) The constant-β case (β4 = 1.7, β8 = 1.7):


⎛ ⎞
⎜⎜⎜1 ⎟⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.5087 1.1246 0.6310 0.2466 −0.2466 −0.6310 −1.1246 −1.5087⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0.6771 −0.0272 −0.9311 −1.6351 1.6351 0.9311 0.0272 −0.6771⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
1 ⎜⎜1.3172 0.5151 −0.5151 −1.3172 −1.3172 −0.5151 0.5151 1.3172⎟⎟⎟⎟⎟
S Constant = √ ⎜⎜⎜⎜⎜ √ √ √ √ ⎟⎟⎟.
8 ⎜⎜⎜ 2 − 2 − 2 ⎟⎟⎟
⎜⎜⎜ 2 0 0 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0.7283 −1.8628 1.8628 −0.7283 0 0 0 0 ⎟⎟⎟
⎜⎜⎜ √ √ √ √ ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 2 − 2 − 2 2 ⎟⎟⎟⎟
⎜⎝ ⎟⎟⎠
0 0 0 0 0.7283 −1.8628 1.8628 −0.7283
(3.196)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 147

(d) The multiple-β case (β4 = 1.7, β8 = 7):


⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1.4410 1.1223 0.7130 0.3943 −0.3943 −0.7130 −1.1223 −1.4410⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0.8113 0.0752 −0.8700 −1.6060 1.6060 0.8700 −0.0752 −0.8113⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
1 ⎜⎜1.3171 0.5150 −0.5150 −1.3171 −1.3171 −0.5150 0.5150 1.3171⎟⎟⎟⎟⎟
S Multiple = √ ⎜⎜⎜⎜⎜ √ √ √ √ ⎟⎟⎟.
8 ⎜⎜⎜ 2 − 2 − 2 2 0 0 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0.7283 −1.8627 1.8627 −0.7283 0 ⎟⎟⎟
⎜⎜⎜⎜ √
0

0

0
√ ⎟⎟⎟

⎜⎜⎜0 − 2 − 2 2 ⎟⎟⎟⎟⎟
⎜⎜⎜ 0 0 0 2
⎟⎠

0 0 0 0 0.7283 −1.8627 1.8627 −0.7283
(3.197)

References
1. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics, 1168, Springer-Verlag, Berlin (1985).
2. R. Stasinski and J. Konrad, “A new class of fast shape-adaptive orthogonal
transforms and their application to region-based image compression,” IEEE
Trans. Circuits Syst. Video Technol. 9 (1), 16–34 (1999).
3. M. Barazande-Pour and J.W. Mark, “Adaptive MHDCT coding of images,”
in Proc. IEEE Image Proces. Conf., ICIP-94 1, 90–94 (Nov. 1994).
4. G.R. Reddy and P. Satyanarayana, “Interpolation algorithm using Walsh–
Hadamard and discrete Fourier/Hartley transforms,” in Circuits and Systems
1990, Proc.33rd Midwest Symp. 1, 545–547 (Aug. 1990).
5. Ch.-Fat Chan, “Efficient implementation of a class of isotropic quadratic
filters by using Walsh–Hadamard transform,” in Proc. of IEEE Int. Symp.
on Circuits and Systems, Hong Kong, 2601–2604 (June 9–12, 1997).
6. B. K. Harms, J. B. Park, and S. A. Dyer, “Optimal measurement techniques
utilizing Hadamard transforms,” IEEE Trans. Instrum. Meas. 43 (3), 397–402
(1994).
7. C. Anshi, Li Di and Z. Renzhong, “A research on fast Hadamard transform
(FHT) digital systems,” in Proc. of IEEE TENCON 93, Beijing, 541–546
(1993).
8. H.G. Sarukhanyan, “Hadamard matrices: construction methods and
applications,” in Proc. of Workshop on Transforms and Filter Banks,
Tampere, Finland, 95–130 (Feb. 21–27, 1998).
9. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal
Processing, Springer-Verlag, New York (1975).
10. S.S. Agaian and H.G. Sarukhanyan, “Hadamard matrices representation by
(−1, +1)-vectors,” in Proc. Int. Conf. Dedicated to Hadamard Problem’s
Centenary, Australia, (1993).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


148 Chapter 3

11. H. G. Sarukhanyan, “Decomposition of the Hadamard matrices and fast


Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture
Notes in Computer Science, 1296 575–581 (1997).
12. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and
Synthesis with Applications and Signal/Image Processing, Kluwer Academic
Publishers, Boston (1996).
13. J. Seberry and M. Yamada, Hadamard Matrices, Sequences and Block
Designs, Surveys in Contemporary Design Theory, John Wiley & Sons,
Hoboken, NJ (1992).
14. S. Samadi, Y. Suzukake and H. Iwakura, “On automatic derivation of fast
Hadamard transform using generic programming,” in Proc. 1998 IEEE Asia-
Pacific Conf. on Circuit and Systems, Thailand, 327–330 (1998).
15. D. Coppersmith, E. Feig, and E. Linzer, “Hadamard transforms on
multiply/add architectures,” IEEE Trans. Signal Process. 42 (4), 969–970
(1994).
16. Z. Li, H.V. Sorensen and C.S. Burus, “FFT and convolution algorithms and
DSP microprocessors,” in Proc. of IEEE Int. Conf. Acoustic, Speech, Signal
Processing, 289–294 (1986).
17. R. K. Montoye, E. Hokenek, and S. L. Runyon, “Design of the IBM RISC
System/6000 floating point execution unit,” IBM J. Res. Dev. 34, 71–77
(1990).
18. J.-L. Wu, “Block diagonal structure in discrete transforms,” Computers and
Digital Techniques, Proc. IEEE 136 (4), 239–246 (1989).
19. H. Enomoto and K. Shibata, “Orthogonal transform coding system for
television signals,” in Proc. of Symp. Appl. Walsh Func., 11–17 (1971).
20. W. K. Pratt, J. Kane, and L. Welch, “Slant transform image coding,” IEEE
Trans. Commun. 22 (8), 1075–1093 (1974).
21. S. Agaian and V. Duvalyan, “On slant transforms,” Pattern Recogn. Image
Anal. 1 (3), 317–326 (1991).
22. I. W. Selesnick, “The slantlet transform,” IEEE Trans. Signal Process. 47
(5), 1304–1313 (1999).
23. I.W. Selesnick, “The slantlet transform time-frequency and time-scale
analysis,” in Proc. of IEEE-SP Int. Symp., 53–56 (1998).
24. J. L. Walsh, “A closed set of normal orthogonal functions,” Am. J. Math. 45,
5–24 (1923).
25. S. Agaian, H. Sarukhanyan and K. Tourshan, “Integer slant transforms,” in
Proc. of CSIT Conf., 303–306 (Sept. 17–20, 2001).
26. S. Agaian, K. Tourshan, and J. P. Noonan, “Parametric slant-Hadamard
transforms with applications,” IEEE Signal Process. Lett. 9 (11), 375–377
(2002).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 149

27. S. Agaian, K. Tourshan, and J. Noonan, “Generalized parametric slant


Hadamard transforms,” Signal Process. 84, 1299–1307 (2004).
28. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson-Hadamard
transforms,” Multiple Valued Logic Soft Comput. J. 10 (2), 173–187 (2004).
29. S. Agaian, K. Tourshan, and J. Noonan, “Performance of parametric
slant-Haar transforms,” J. Electron. Imaging 12 (3), 539–551 (2003)
[doi:10.1117/1.1580494].
30. W. K. Pratt, L. R. Welch, and W. H. Chen, “Slant transform for image
coding,” IEEE Trans. Commun. COM-22 (8), 1075–1093 (1974).
31. P. C. Mali, B. B. Chaudhuri, and D. D. Majumber, “Some properties and
fast algorithms of slant transform in image processing,” Signal Process. 9,
233–244 (1985).
32. L. R. Rabiner and B. Gold, Theory and Application of Digital Signal
Processing, Prentice-Hall, Englewood Cliffs, NJ (1975).
33. A. Jain, Fundamentals of Digital Image Processing, Prentice-Hall,
Englewood Cliffs, NJ (1989).
34. B. J. Fino and V. R. Algazi, “Slant Haar transform,” Proc. of IEEE 62,
653–654 (1974).
35. K. R. Rao, J. G. K. Kuo, and M. A. Narasimhan, “Slant-Haar transform,” Int.
J. Comput. Math. B 7, 73–83 (1979).
36. J. F. Yang and C. P. Fang, “Centralized fast slant transform algorithms,”
IEICE Trans. Fundam. Electron. Commun. Comput. Sci. E80-A (4), 705–711
(1997).
37. Z. D. Wang, “New algorithm for the slant transform,” IEEE Trans. Pattern
Anal. Mach. Intell. 4 (5), 551–555 (1982).
38. S. Agaian, H. Sarukhanyan and Kh. Tourshan, “New classes of sequential
slant Hadamard transform,” in Proc. of Int. TICSP Workshop on Spectral
Methods and Multirate Signal Processing, SMMSP’02, Toulouse, France
(Sept. 7–8, 2002).
39. S. Minasyan, D. Guevorkian, S. Agaian and H. Sarukhanyan, “On ‘slant-like’
fast orthogonal transforms of arbitrary order,” in Proc of VIPromCom-2002,
4th EURASIP–IEEE Region 8 Int. Symp. on Video/Image Processing and
Multimedia Communications, Zadar, Croatia, 309–314 (June 16–19, 2002).
40. Z. Wang, “Fast algorithms for the discrete W transform and for the discrete
Fourier transform,” IEEE Trans. on Acoust. Speech Signal Process. ASSP-32
(4), 803–816 (1984).
41. S. Venkataraman, V. R. Kanchan, K. R. Rao, and M. Mohanty, “Discrete
transforms via the Walsh–Hadamard transform,” Signal Process. 14 (4),
371–382 (1988).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


150 Chapter 3

42. M. M. Anguh and R. R. Martin, “A 2-dimensional inplace truncation


Walsh transform method,” J. Visual Comm. Image Represent. (JVCIR) 7 (2),
116–125 (1996).
43. K. Rao, K. Revuluri, M. Narasimhan, and N. Ahmed, “Complex Haar
transform,” IEEE Trans. Acoust. Speech Signal Process. 24 (1), 102–104
(1976).
44. K. Rao, V. Devarajan, V. Vlasenko, and M. Narasimhan, “Cal-Sal Walsh–
Hadamard transform,” IEEE Trans. Acoust. Speech Signal Process. 26 (6),
605–607 (1978).
45. P. Marti-Puig, “Family of fast Walsh Hadamard algorithms with identical
sparse matrix factorization,” IEEE Signal Process. Lett. 13 (11), 672–675
(2006).
46. L. Nazarov and V. Smolyaninov, “Use of fast Walsh–Hadamard trans-
formation for optimal symbol-by-symbol binary block codes decoding,”
Electron. Lett. 34, 261–262 (1998).
47. M. Bossert, E.M. Gabidulin and P. Lusina, “Space-time codes based on
Hadamard matrices proceedings,” in Proc. IEEE Int. Symp. Information
Theory, p. 283 (June 25–30, 2000).
48. Y. Beery and J. Snyders, “Optimal soft decision block decoders based on fast
Hadamard transformation,” IEEE Trans. Inf. Theory IT-32, 355–364 (1986).
49. J. Astola and D. Akopian, “Architecture-oriented regular algorithms for
discrete sine and cosine transforms,” IEEE Trans. Signal Process. 47, 11–19
(1999).
50. A. Amira, P. Bouridane, P. Milligan, and M. Roula, “Novel FPGA imple-
mentations of Walsh–Hadamard transforms for signal processing,” Proc. Inst.
Elect. Eng., Vis., Image, Signal Process. 148, 377–383 (Dec. 2001).
51. S. Boussakta and A. G. J. Holt, “Fast algorithm for calculation of both
Walsh–Hadamard and Fourier transforms,” Electron. Lett. 25, 1352–1354
(1989).
52. J.-L. Wu, “Block diagonal structure in discrete transforms,” Proc. of Inst.
Elect. Eng., Comput. Digit. Technol. 136 (4), 239–246 (1989).
53. M. H. Lee and Y. Yasuda, “Simple systolic array algorithm for Hadamard
transform,” Electron. Lett. 26 (18), 1478–1480 (1990).
54. M. H. Lee and M. Kaven, “Fast Hadamard transform based on a simple
matrix factorization,” IEEE Trans. Acoust. Speech Signal Process. ASP-34
(6), 166–667 (1986).
55. A. Vardy and Y. Beery, “Bit-level soft decision decoding of Reed-Solomon
codes,” IEEE Trans. Commun. 39, 440–445 (1991).
56. A. Aung, B. P. Ng, and S. Rahardja, “Sequency-ordered complex Hadamard
transform: properties, computational complexity and applications,” Signal
Process. IEEE Trans. 56 (8, Pt. 1), 3562–3571 (2008).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 151

57. B. Guoan, A. Aung, and B. P. Ng, “Pipelined hardware structure for


sequency-ordered complex Hadamard transform,” IEEE Signal Process. Lett.
15, 401–404 (2008).
58. I. Dumer, G. Kabatiansky, and C. Tavernier, “List decoding of biorthogonal
codes and the Hadamard transform with linear complexity,” IEEE Trans. Inf.
Theory 54 (10), 4488–4492 (2008).
59. D. Sundararajan and M. O. Ahmad, “Fast computation of the discrete Walsh
and Hadamard transforms,” IEEE Trans. Image Process. 7 (6), 898–904
(1998).
60. J.-D. Lee and Y.-H. Chiou, “A fast encoding algorithm for vector quantization
based on Hadamard transform,” in Proc. of Industrial Electronics, IECON
2008, 34th Annual Conf. of IEEE, 1817–1821 (Nov. 10–13, 2008).
61. P. Knagenhjelm and E. Agrell, “The Hadamard transform—a tool for index
assignment,” IEEE Trans. Inf. Theory 42 (4), 1139–1151 (July 1996).
62. K. Zeger and A. Gersho, “Pseudo-Gray coding,” IEEE Trans. Commun. 38
(12), 2147–2158 (1990).
63. S.-C. Pei and W.-L. Hsue, “The multiple-parameter discrete fractional
Fourier transform,” IEEE Signal Process. Lett. 13 (6), 329–332 (2006).
64. J. M. Vilardy, J. E. Calderon, C. O. Torres, and L. Mattos, “Digital images
phase encryption using fractional Fourier transform,” Proc. IEEE Conf.
Electron., Robot. Automotive Mech. 1, 15–18 (Sep. 2006).
65. J. Guo, Z. Liu, and S. Liu, “Watermarking based on discrete fractional
random transform,” Opt. Commun. 272 (2), 344–348 (2007).
66. V. Kober, “Fast algorithms for the computation of sliding discrete Hartley
transforms,” IEEE Trans. Signal Process. 55 (6), 2937–2944 (2007).
67. P. Dita, “Some results on the parameterization of complex Hadamard
matrices,” J. Phys. A 37 (20), 5355–5374 (2004).
68. W. Tadej and K. Kyczkowski, “A concise guide to complex Hadamard
matrices,” Open Syst. Inf. Dyn. 13 (2), 133–177 (2006).
69. F. Szollosi, “Parametrizing complex Hadamard matrices,” Eur. J. Comb. 29
(5), 1219–1234 (2007).
70. V. Senk, V. D. Delic, and V. S. Milosevic, “A new speech scrambling
concept based on Hadamard matrices,” IEEE Signal Process. Lett. 4 (6),
161–163 (1997).
71. J. A. Davis and J. Jedwab, “Peak-to-mean power control in OFDM, Golay
complementary sequences, and Reed-Muller codes,” IEEE Trans. Inf. Theory
45 (7), 2397–2417 (1999).
72. G. Guang and S. W. Golomb, “Hadamard transforms of three-term
sequence,” IEEE Trans. Inf. Theory 45 (6), 2059–2060 (1999).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


152 Chapter 3

73. W. Philips, K. Denecker, P. de Neve, and S. van Assche, “Lossless quanti-


zation of Hadamard transform coefficients,” IEEE Trans. Image Process. 9
(11), 1995–1999 (2000).
74. M. Ramkumar and A. N. Akansu, “Capacity estimates for data hiding in
compressed images,” IEEE Trans. Image Process. 10 (8), 1252–1263 (2001).
75. S.S. Agaian and O. Caglayan, “New fast Hartley transform with linear multi-
plicative complexity,” presented at IEEE Int. Conf. on Image Processing,
Atlanta, GA (Oct. 8–11, 2006).
76. S.S. Agaian and O. Caglayan, “Fast encryption method based on new FFT
representation for the multimedia data system security,” presented at IEEE
Int. Conference on Systems, Man, and Cybernetics, Taipei, Taiwan (Oct.
8–11, 2006).
77. S.S. Agaian and O. Caglayan, “New fast Fourier transform with linear
multiplicative complexity,” presented at IEEE 39th Asilomar Conf. on
Signals, Systems and Computers, Pacific Grove, CA (Oct. 30–Nov. 2, 2005).
78. S.S. Agaian and O. Caglayan, “Super fast Fourier transform,” presented
at IS&T/SPIE 18th Annual Symp. on Electronic Imaging Science and
Technology, San Jose, CA (Jan. 15–19, 2006).
79. D. F. Elliot and K. R. Rao, Fast Transforms, Algorithms, Applications,
Academic Press, New York (1982).
80. R. J. Clarke, Transform Coding of Image, Academic Press, New York (1985).
81. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal
Processing, Springer-Verlag, New York (1975).
82. A. K. Jain, Fundamentals of Digital Image Processing, Prentice Hall,
Englewood Cliffs, NJ (1989).
83. E. R. Dougherty, Random Processes for Image and Signal Processing,
SPIE Press, Bellingham, WA, and IEEE Press, Piscataway, NJ (1999)
[doi:10.1117/3.268105].
84. P. C. Mali, B. B. Chaudhuri, and D. D. Majumder, “Properties and some
fast algorithms of the Haar transform in image processing and pattern
recognition,” Pattern Recogn. Lett. 2 (5), 319–327 (1984).
85. H. Enomoto and K. Shibata, “Orthogonal transform system for television
signals,” IEEE Trans. Electromagn. Comput. 13, 11–17 (1971).
86. P. C. Mali, B. B. Chaudhuri, and D. D. Majumder, “Some properties and
fast algorithms of slant transform in image processing,” Signal Process. 9,
233–244 (1985).
87. P. Bahl, P.S. Gauthier and R.A. Ulichney, “PCWG’s INDEO-C video com-
pression algorithm,” http://www.Europe.digital.com/info/DTJK04/ (11 April
1996).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Discrete Orthogonal Transforms and Hadamard Matrices 153

88. I. W. Selesnick, “The slantlet transform,” IEEE Trans. Signal Process. 47


(5), 1304–1313 (1999).
89. C.E. Lee and J. Vaisey, “Comparison of image transforms in the coding of the
displaced frame difference for block-based motion compensation,” in Proc. of
Canadian Conf. on Electrical and Computer Engineering 1, 147–150 (1993).
90. B. J. Fino and V. R. Algazi, “Slant-Haar transform,” Proc. IEEE Lett. 62,
653–654 (1974).
91. S. Agaian, K. Tourshan, and J. P. Noonan, “Parametric slant-Hadamard
transforms with applications,” IEEE Signal Process. Lett. 9 (11), 375–377
(2002).
92. S. Agaian, K. Tourshan and J.P. Noonan, “Parametric slant-Hadamard
transforms,” presented at IS&T/SPIE 15th Annual Symp. Electronic Imaging
Science and Technology, Image Processing: Algorithms and Systems II,
Santa Clara, CA (Jan. 20–24, 2003).
93. S. Agaian, K. Tourshan, and J. Noonan, “Partially signal dependent slant
transforms for multispectral classification,” J. Integr. Comput.-Aided Eng. 10
(1), 23–35 (2003).
94. M. D. Adam and F. Kossentni, “Reversible integer-to-integer wavelet trans-
forms for image compression: performance evaluation and analysis,” IEEE
Trans. Image Process. 9 (6), 1010–1024 (2000).
95. M. M. Anguh and R. R. Martin, “A truncation method for computing slant
transforms with applications to image processing,” IEEE Trans. Commun. 43
(6), 2103–2110 (1995).
96. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge
University Press, Cambridge, England (1991).
97. P. C. Mali and D. D. Majumder, “An analytical comparative study of a class
of discrete linear basis transforms,” IEEE Trans. Syst. Man. Cybernet. 24 (3),
531–535 (1994).
98. N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE
Trans. Comput. C-23, 90–93 (1974).
99. W. K. Pratt, “Generalized Wiener filtering computation techniques,” IEEE
Trans. Comput. C-21, 636–641 (1972).
100. J. Pearl, “Walsh processing of random signals,” IEEE Trans. Electromagn.
Comput. EMC-13 (4), 137–141 (1971).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Chapter 4
“Plug-In Template” Method:
Williamson–Hadamard Matrices
We have seen that one of the basic methods used to build Hadamard matrices
is based on construction of a class of “special-component” matrices that can
be plugged into arrays (templates) of variables to generate Hadamard matrices.
Several approaches for constructing special-component matrices and templates
have been developed.1–36 In 1944, Williamson1,2 first constructed “suitable
matrices” (Williamson matrices) to replace the variables in a formally orthogonal
matrix. Generally, the arrays into which suitable matrices are plugged are
orthogonal designs, which have formally orthogonal rows (and columns) but may
have variations, such as Goethals–Seidel arrays, Wallis–Whiteman arrays, Spence
arrays, generalized quaternion arrays, Agayan (Agaian) families, Kharaghani’s
methods, and regular s-sets of regular matrices that give new matrices.3–35,37,38
This is an extremely prolific construction method.34 There are several interesting
schemes for constructing the Williamson matrices and the Williamson arrays.1–83
In addition, it has been found that the Williamson–Hadamard sequences possess
very good autocorrelation properties that make them amenable to synchronization
requirements, and they can thus be used in communication systems.42 In addition,
Seberry, her students, and many other authors have made extensive use of
computers for relevant searches.35,76,79,80 For instance, Djokovic found the first
odd number, n = 31, for which symmetric circulant Williamson matrices exist.79,80
There are several interesting papers concerning the various types of Hadamard
matrix construction.82–98 A survey of the applications of Williamson matrices can
be found in Ref. 78.
In this chapter, two “plug-in template” methods of the construction of Hadamard
matrices are presented. The first method is based on Williamson matrices and
the Williamson array “template”; the second one is based on the Baumert–Hall
array “template.” Finally, we will give customary sequences based on construction
of new classes of Williamson and generalized Williamson matrices. We start the
chapter with a brief description of the construction of Hadamard matrices from
Williamson matrices. Then we construct a class of Williamson matrices. Finally,
we show that if Williamson–Hadamard matrices of order 4m and 4n exist, then
Williamson–Hadamard matrices of order mn/2 exist.

155

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


156 Chapter 4

4.1 Williamson–Hadamard Matrices


First, we briefly describe the Williamson approach to construction of the Hadamard
matrices. It is the first and simplest plug-in template method for generating
Hadamard matrices.
Theorem 4.1.1: (Williamson1,2 ) If four (+1, −1) matrices An , Bn , Cn , Dn of order
n exist that satisfy both of the following conditions:

PQT = QPT , P, Q ∈ {An , Bn , Cn , Dn } ,


(4.1)
An ATn + Bn BTn + CnCnT + Dn DTn = 4nIn ,

then
⎛ ⎞
⎜⎜⎜ An Bn Cn Dn ⎟⎟⎟
⎜⎜⎜−B ⎟
⎜⎜⎜ n An −Dn Cn ⎟⎟⎟⎟⎟ (4.2)
⎜⎜⎜⎝−Cn Dn An −Bn ⎟⎟⎠⎟
−Dn −Cn Bn An

is a Hadamard matrix of order 4n. This theorem can be proved by direct


verification.
Definitions: The “template” [matrix (4.2)] is called a Williamson array. The four
symmetric cyclic (+1, −1) matrices A, B, C, D with the condition of Eq. (4.1) are
called Williamson matrices.1–3
The cyclic matrix Q of order m is defined as

Q = a0 U 0 + a1 U 1 + · · · + am−1 U m−1 , (4.3)

where U is the (0, 1) matrix of order m with first row (0 1 0 · · · 0), second row
obtained by one-bit cyclic shifts, third row obtained by 2-bit cyclic shifts, and so
on. For m = 5, we have the following matrices:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜0 1 0 0 0⎟⎟⎟ ⎜⎜⎜0 0 1 0 0⎟⎟⎟
⎜⎜⎜0 0 1 0 0⎟⎟⎟ ⎜⎜⎜0 0 0 1 0⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
U = ⎜⎜⎜⎜⎜0 0 0 1 0⎟⎟⎟⎟⎟ , U 2 = ⎜⎜⎜⎜⎜0 0 0 0 1⎟⎟⎟⎟⎟ ,
⎜⎜⎜0 0 0 0 1⎟⎟⎟ ⎜⎜⎜1 0 0 0 0⎟⎟⎟
⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠
1 0 0 0 0 0 1 0 0 0
⎛ ⎞ ⎛ ⎞ (4.4)
⎜⎜⎜0 0 0 1 0⎟⎟⎟ ⎜⎜⎜0 0 0 0 1⎟⎟⎟
⎜⎜⎜⎜0 0 0 0 1⎟⎟⎟⎟ ⎜⎜⎜1 0 0 0 0⎟⎟⎟
⎜⎜ ⎟⎟
⎜⎜⎜ ⎟⎟⎟
U = ⎜⎜⎜1 0 0 0 0⎟⎟⎟ , U = ⎜⎜⎜⎜⎜0 1 0 0 0⎟⎟⎟⎟⎟ .
3 4
⎜⎜⎜0 1 0 0 0⎟⎟⎟ ⎜⎜⎜0 0 1 0 0⎟⎟⎟
⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠
0 0 1 0 0 0 0 0 1 0

Note that the matrix U satisfies the following conditions:

U 0 = Im , U p U q = U p+q , U m = Im . (4.5)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 157

Therefore, the cyclic matrix of order n with first row (a0 a1 a2 · · · an−1 ) has the
form
⎛ ⎞
⎜⎜⎜a0 a1 · · · an−1 ⎟⎟⎟
⎜⎜⎜⎜an−1 a0 · · · an−2 ⎟⎟⎟⎟⎟
C(a0 , a1 , . . . , an−1 ) = ⎜⎜⎜⎜⎜.. .. . . .. ⎟⎟⎟⎟ . (4.6)
⎜⎜⎜. . . . ⎟⎟⎟
⎝ ⎠
a1 a2 · · · a0

In other words, each row of A is equal to the previous row rotated downward by
one element. Thus, a cyclic matrix of order n is specified (or generated) by its
first row and denoted by C(a0 , a1 , . . . , an−1 ). For example, starting with the vector
(a, b, c, d), we can form the 4 × 4 cyclic matrix
⎛ ⎞
⎜⎜⎜a b c d⎟⎟

⎜⎜⎜⎜d a b c ⎟⎟⎟⎟
⎜⎜⎜ ⎟.
b⎟⎟⎟⎟⎠
(4.7)
⎜⎜⎝c d a
b c d a

It can be shown that the multiplication of two cyclic matrices is also cyclic. This
can be proved by direct verification. For N = 4, we obtain the multiplication
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜a0 a1 a2 a3 ⎟⎟ ⎜⎜b0
⎟⎜
b1 b2 b3 ⎟⎟ ⎜⎜c0
⎟ ⎜
c1 c2 c3 ⎟⎟

⎜⎜⎜⎜a3 a0 a1 a2 ⎟⎟⎟⎟ ⎜⎜⎜⎜b3 b0 b1 b2 ⎟⎟⎟⎟ ⎜⎜⎜⎜c3 c0 c1 c2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟=⎜ ⎟.
a1 ⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝b2 b1 ⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝c2 c1 ⎟⎟⎟⎟⎠
(4.8)
⎜⎜⎝a2 a3 a0 b3 b0 c3 c0
a1 a2 a3 a0 b1 b2 b3 b0 c1 c2 c3 c0

If A, B, C, D are cyclic symmetric (+1, −1) matrices of order n, then the first
relation of Eq. (4.1) is automatically satisfied, and the second condition becomes

A2 + B2 + C 2 + D2 = 4nIn . (4.9)

Examples of the symmetric cyclic Williamson matrices of orders 1, 3, 5, and 7


are as follows:
(1) Williamson matrices of order 1: A1 = B1 = C1 = D1 = (1).
(2) The first rows and Williamson matrices of order 3 are given as follows:

A3 = (1, 1, 1), B3 = C3 = D3 = (1, −1, −1); (4.10)


⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟
A3 = ⎜⎜⎜⎜⎝+ + +⎟⎟⎟⎟⎠ , B3 = C3 = D3 = ⎜⎜⎜⎜⎝− + −⎟⎟⎟⎟⎠ . (4.11)
+ + + − − +
(3) The first rows and Williamson matrices of order 5 are given as follows:

A5 = B5 = (1, −1, −1, −1, −1), C5 = (1, 1, −1, −1, 1),


(4.12)
D5 = (1, −1, 1, 1, −1);

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


158 Chapter 4

⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ − − − −⎟⎟
⎟⎟⎟ ⎜⎜⎜+ + − − +⎟⎟
⎟⎟⎟ ⎜⎜⎜+ − + + −⎟⎟

⎜⎜⎜⎜− + − − −⎟⎟⎟ ⎜⎜⎜+
⎜⎜⎜ + + − −⎟⎟⎟ ⎜⎜⎜− +
⎜⎜⎜ − + +⎟⎟⎟⎟⎟
⎜⎜ ⎟ ⎟ ⎟
A5 = B5 = ⎜⎜⎜⎜− − + − −⎟⎟⎟⎟ , C5 = ⎜⎜⎜⎜− + + + −⎟⎟⎟⎟ , D5 = ⎜⎜⎜⎜+ − + − +⎟⎟⎟⎟ . (4.13)
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎝− − − + −⎟⎟ ⎜⎜⎝− − + + +⎟⎟ ⎜⎜⎝+ + − + −⎟⎟⎟⎟
⎠ ⎠ ⎠
− − − − + + − − + + − + + − +

(4) The first rows and Williamson matrices of order 7 are given as follows:

A7 = B7 = (1, 1, −1, 1, 1, −1, 1), C7 = (1, −1, 1, 1, 1, 1, −1),


(4.14)
D7 = (1, 1, −1, −1, −1, −1, 1);
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + − + + − +⎟⎟⎟ ⎜⎜⎜+ − + + + + −⎟⎟

⎜⎜⎜⎜+ + + − + + −⎟⎟⎟⎟ ⎜⎜⎜⎜− + − + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜
⎜⎜⎜− + + + − + +⎟⎟⎟⎟⎟ ⎜⎜⎜+ − + − + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟
A7 = B7 = ⎜⎜+ − + + + − +⎟⎟⎟⎟ , C7 = ⎜⎜⎜⎜+ + − + − + +⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜⎜+ + − + + + −⎟⎟⎟⎟⎟ ⎜⎜⎜+ + + − + − +⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜− + + − + + +⎟⎟⎟ ⎜⎝ + + + − + −⎟⎟⎟⎟
⎝ ⎠ ⎠
+ − + + − + + −+ + + + − +
⎛ ⎞ (4.15)
⎜⎜⎜+ + − − − − +⎟⎟

⎜⎜⎜+ + + − − − −⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜− + +
⎜⎜ + − − −⎟⎟⎟⎟⎟

D7 = ⎜⎜⎜⎜− − + + + − −⎟⎟⎟⎟ .
⎜⎜⎜ ⎟
⎜⎜⎜− − − + + + −⎟⎟⎟⎟
⎜⎜⎜− − − ⎟
⎜⎝ − + + +⎟⎟⎟⎟

+ − − − − + +

The first rows of cyclic symmetric Williamson matrices 34 of orders n = 3, 5, . . . ,


33, 37, 39, 41, 43, 49, 51, 55, 57, 61, 63 are given in Appendix A.2.
By plugging in the above-presented Williamson matrices of orders 3 and 5
into Eq. (4.2), we obtain a Williamson–Hadamard matrix of order 12 and 20,
respectively:
⎛ ⎞
⎜⎜⎜+ + + + − − + − − + − −⎟⎟

⎜⎜⎜+ + + − + − − + − − + −⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜+
⎜⎜⎜ + + − − + − − + − − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜− + + + + + − + + + − −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ − + + + + + − + − + −⎟⎟⎟⎟⎟
⎜⎜⎜+ + − + + + + + − − − +⎟⎟⎟⎟⎟
H12 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , (4.16)
⎜⎜⎜− + + + − − + + + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ − + − + − + + + + − +⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ + − − − + + + + + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜− + + − + + + − − + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝+ − + + − + − + − + + +⎟⎟⎟⎟

+ + − + + − − − + + + +

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 159

⎛ ⎞
⎜⎜⎜+ − − − − + − − − − + + − − + + − + + −⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − − − + − − − + + + − − − + − + +⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − + − − − − + − − − + + + − + − + − +⎟⎟⎟⎟

⎜⎜⎜−
⎜⎜⎜ − − + − − − − + − − − + + + + + − + −⎟⎟⎟⎟⎟

⎜⎜⎜⎜− − − − + − − − − + + − − + + − + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−
⎜⎜⎜ + + + + + − − − − − + − − + + + − − +⎟⎟⎟⎟⎟

⎜⎜⎜+ − + + + − + − − − + − + − − + + + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + − + + − − + − − − + − + − − + + + −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ + + − + − − − + − − − + − + − − + + +⎟⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜ + + + − − − − − + + − − + − + − − + +⎟⎟⎟⎟
H20 = ⎜⎜⎜⎜ ⎟⎟⎟. (4.17)

⎜⎜⎜⎜− − + + − + − + + − + − − − − − + + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− − − + + − + − + + − + − − − + − + + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ − − − + + − + − + − − + − − + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ + − − − + + − + − − − − + − + + + − +⎟⎟⎟⎟

⎜⎜⎜− + + − − − + + − + − − − − + + + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − + − − + + − + − − − − + − − − −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ − + − − − − − + + − + − − − − + − − −⎟⎟⎟⎟

⎜⎜⎜−
⎜⎜⎜ + − + − + − − − + − − + − − − − + − −⎟⎟⎟⎟⎟

⎜⎜⎜− − + − + + + − − − − − − + − − − − + −⎟⎟⎟⎟
⎝ ⎠
+ − − + − − + + − − − − − − + − − − − +

In Table 4.1, we give some well-known classes of Williamson matrices.


We denote the set of orders of Williamson matrices given in 1–8 by L. Note
that there are no Williamson matrices for order 155 by a complete computer
search, and no Williamson-type matrices are known for the orders 35, 155, 171,
203, 227, 291, 323, 371, 395, 467, 483, 563, 587, 603, 635, 771, 875, 915, 923,
963, 1131, 1307, 1331, 1355, 1467, 1523, 1595, 1643, 1691, 1715, 1803, 1923,
and 1971.
Table 4.1 Classes of Williamson matrices.

No. Orders of cyclic symmetric Williamson matrices

1 n, where n ∈ W = {3, 5, 7, . . . , 29, 31, 43}3,7


2 (p + 1)/2, where p ≡ 1(mod 4) is a prime power8

No. Orders of Williamson matrices

1 n, where n ≤ 100 except 35, 39, 47, 53, 67, 73, 83, 89, and 949
2 3a , where a is a natural number10
3 (p + 1)pr /2, where is a prime power, and r is a natural number11,12
4 n(4n + 3), n(4n − 1), where n ∈ {1, 3, 5, . . . , 25}13
5 (p + 1)(p + 2), where p ≡ 1(mod 4) is a prime number and p + 3 is an order of symmetric Hadamard
matrix9
6 2n(4n + 7), where 4n + 1 is a prime number and n ∈ {1, 3, 5, . . . , 25}9
7 2.39, 2.103, 2.303, 2.333, 2.669, 2.695, 2.160911
8 2n, where n is an order of Williamson-type matrices9

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


160 Chapter 4

Lemma 4.1.1: (Agaian, Sarukhanyan14 ) If A, B, C, D are Williamson matrices


of order n, then the matrices
 
1 A+B C+D
X= ,
2 C + D −A − B 
(4.18)
1 A−B C−D
Y= ,
2 −C + D A − B
are (0, ±1) matrices of order 2n and satisfy the conditions
X ∗ Y = 0, ∗ is an Hadamard product,
XY T = Y X T ,
(4.19)
X ± Y is a (+1, −1) matrix,
XX T + YY T = 2nI2n ,
where the Hadamard product of two matrices A = (ai, j ) and B = (bi, j ) of the same
dimension is defined as A ∗ B = (ai, j , bi, j ).
Example 4.1.1: X, Y matrices of order 2n for n = 3, 5.
For n = 3:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ 0 0 + − −⎟⎟⎟ ⎜⎜⎜0 + + 0 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 + 0 − + −⎟⎟⎟ ⎜⎜⎜+ 0 + 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜⎜0 0 + − − +⎟⎟⎟⎟ ⎜⎜ ⎟
⎟⎟⎟ , Y = ⎜⎜⎜⎜⎜+ + 0 0 0 0 ⎟⎟⎟⎟⎟
X = ⎜⎜⎜⎜ ⎟. (4.20)
⎜⎜⎜+ − − − 0 0 ⎟⎟⎟⎟ ⎜⎜⎜0 0 0 0 + +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝− + − 0 − 0 ⎟⎟⎠ ⎜⎜⎜0 0 0 + 0 +⎟⎟⎟⎟
⎝ ⎠
− − + 0 0 − 0 0 0 + + 0
For n = 5:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ − − − − + 0 0 0 0 ⎟⎟⎟ ⎜⎜⎜ 0 0 0 0 0 0 + − − +⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜− + − − − 0 + 0 0 0 ⎟⎟⎟⎟ ⎜⎜⎜ 0 0 0 0 0 + 0 + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜− − + − − 0 0 + 0 0 ⎟⎟⎟⎟ ⎜⎜⎜ 0 0 0 0 0 − + 0 − −⎟⎟⎟⎟
⎜⎜⎜− − ⎟ ⎟
⎜⎜⎜ − + − 0 0 0 + 0 ⎟⎟⎟⎟⎟ ⎜⎜⎜ 0
⎜⎜⎜ 0 0 0 0 − − + 0 −⎟⎟⎟⎟⎟
⎜⎜⎜− − ⎟ ⎟
− − + 0 0 0 0 +⎟⎟⎟⎟ ⎜⎜⎜ 0 0 0 0 0 + − − + 0 ⎟⎟⎟⎟
X = ⎜⎜⎜⎜ ⎟⎟ , Y = ⎜⎜⎜⎜ ⎟⎟ . (4.21)
⎜⎜⎜+ 0 0 0 0 − + + + +⎟⎟⎟⎟ ⎜⎜⎜ 0 − + + − 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ 0 + 0 0 0 + − + + +⎟⎟⎟⎟⎟ ⎜⎜⎜− 0 − + + 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ 0 0 + 0 0 + + − + +⎟⎟⎟⎟ ⎜⎜⎜+ − 0 − + 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝ 0 0 0 + 0 + + + − +⎟⎟⎟⎟ ⎜⎜⎝+ + − 0 − 0 0 0 0 0 ⎟⎟⎟⎟
⎠ ⎠
0 0 0 0 + + + + + − − + + − 0 0 0 0 0 0

Theorem 4.1.2: (Agaian–Sarukhanyan multiplicative theorem14,15 ) Let there be


Williamson–Hadamard matrices of order 4m and 4n. Then Williamson–Hadamard
matrices exist of order 4(2m)i n, i = 1, 2, . . . .
Proof: Let A, B, C, D and A0 , B0 , C0 , D0 be Williamson matrices of orders m and
n, respectively. Note that according to Lemma 4.1.1, the (+1, −1) matrices X, Y

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 161

satisfy the conditions of Eq. (4.19). Consider the following matrices:

Ai = Ai−1 ⊗ X + Bi−1 ⊗ Y,
Bi = Bi−1 ⊗ X − Ai−1 ⊗ Y,
(4.22)
Ci = Ci−1 ⊗ X + Di−1 ⊗ Y,
Di = Di−1 ⊗ X − Ci−1 ⊗ Y,

where ⊗ is the Kronecker product.


We want to prove that for any natural number i the matrices Ai , Bi , Ci , and Di
are Williamson matrices of order (2m)i n. Let us consider two cases, namely, case
(a) when i = 1, and case (b) when i is any integer.
Case (a): Let i = 1. From Eq. (4.22), we obtain

A1 AT1 = A0 AT0 ⊗ XX T + B0 BT0 ⊗ YY T + A0 BT0 ⊗ XY T + B0 AT0 ⊗ Y X T ,


(4.23)
B1 BT1 = B0 BT0 ⊗ XX T + A0 AT0 ⊗ YY T − B0 AT0 ⊗ XY T − A0 BT0 ⊗ Y X T .

Taking into account the conditions of Eqs. (4.1) and (4.19) and summarizing the
last expressions, we find that

A1 AT1 + B1 BT1 = (A0 AT0 + B0 BT0 ) ⊗ (XX T + YY T ). (4.24)

Similarly, we obtain

C1C1T + D1 DT1 = (C0C0T + D0 DT0 ) ⊗ (XX T + YY T ). (4.25)

Now, summarizing the last two equations and taking into account that A0 , B0 ,
C0 , D0 are Williamson matrices of order n, and X and Y satisfy the conditions of
Eq. (4.19), we have

A1 AT1 + B1 BT1 + C1C1T + D1 DT1 = 8mnI2mn . (4.26)

Let us now prove equality of A1 BT1 = B1 AT1 . From Eq. (4.22), we have

A1 BT1 = A0 BT0 ⊗ XX T − A0 AT0 ⊗ XY T + B0 BT0 ⊗ Y X T − B0 AT0 ⊗ YY T ,


(4.27)
B1 AT1 = B0 AT0 ⊗ XX T + B0 BT0 ⊗ XY T − A0 AT0 ⊗ Y X T − A0 BT0 ⊗ YY T .

Comparing both expressions, we conclude that A1 BT1 = B1 AT1 . Similarly, it can be


shown that

PQT = QPT , (4.28)

where

P, Q ∈ {A1 , B1 , C1 , D1 } . (4.29)

Thus, the matrices A1 , B1 , C1 , D1 are Williamson matrices of order 2mn.


Case (b): Let i be any integer; we assume that the theorem is correct for
k = i > 1, i.e., Ak , Bk , Ck , Dk are Williamson matrices of order (2m)k n. Let us

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


162 Chapter 4

prove that Ai+1 , Bi+1 , Ci+1 , and Di+1 also are Williamson matrices. Check only the
second condition of Eq. (4.1). By computing

Ai+1 ATi+1 = Ai ATi ⊗ XX T + Ai BTi ⊗ XY T + Bi ATi ⊗ Y X T + Bi BTi ⊗ YY T ,


Bi+1 BTi+1 = Bi BTi ⊗ XX T − Bi ATi ⊗ XY T − Ai BTi ⊗ Y X T + Ai ATi ⊗ YY T ,
(4.30)
T
Ci+1Ci+1 = CiCiT ⊗ XX T + Ci DTi ⊗ XY T + DiCiT ⊗ Y X T + Di DTi ⊗ YY T ,
Di+1 DTi+1 = Di DTi ⊗ XX T − DiCiT ⊗ XY T − Ci DTi ⊗ Y X T + CiCiT ⊗ YY T

and summarizing the obtained equations, we find that

Ai+1 ATi+1 + Bi+1 BTi+1 + Ci+1Ci+1


T
+ Di+1 DTi+1
  
= Ai+1 ATi+1 + Bi+1 BTi+1 + Ci+1Ci+1
T
+ Di+1 DTi+1 XX T + YY T . (4.31)

Because Ai , Bi , Ci , Di are Williamson matrices of order (2m)i n, and matrices X, Y


satisfy the conditions of Eq. (4.19), we can conclude that

Ai+1 ATi+1 + Bi+1 BTi+1 + Ci+1Ci+1


T
+ Di+1 DTi+1 = 4(2m)i+1 nI(2m)i+1 n . (4.32)

From the Williamson Theorem (Theorem 4.1.1), we obtain the Williamson–


Hadamard matrix of order 4(2m)i n.
Theorem 4.1.2 is known as the Multiplicative Theorem because it is related
with multiplication of orders of two Hadamard matrices. It shows that if
Williamson–Hadamard matrices of order 4m and 4n exist, then Williamson–
Hadamard matrices of order (4m · 4n)/2 = 8mn exist. Remember that the first
representative of a multiplicative theorem is as follows: if H4m and H4n are
Hadamard matrices of order 4m and 4n, then the Kronecker product H4m ⊗ H4n
is a Hadamard matrix of order 16mn.
Proof: First of all, the Kronecker product is a (+1, −1) matrix. Second, it is
orthogonal:
 
(H4m ⊗ H4n ) (H4m ⊗ H4n )T = (H4m ⊗ H4n ) H4m T
⊗ H4n
T
   
= H4m H4m T
⊗ H4n H4n
T
= (4mI4m ) ⊗ (4nI4n ) = 16mnI4nm . (4.33)

Problems for Exploration


• Show that Williamson-type matrices of order n exist, where n is an integer.
• Show that if W1 and W2 are Williamson-type matrices of order n and m, then
Williamson-type matrices of order mn exist.
• Show that if two Williamson–Hadamard matrices of order n and m exist, then
Williamson–Hadamard matrices of order mn/4 exist.

Algorithm 4.1.1: The algorithm for the generation of the Williamson–Hadamard


matrix of order 4mn comes from the Williamson–Hadamard matrices of orders 4m
and 4n.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 163

Input: Williamson matrices A, B, C, D and A0 , B0 , C0 , D0 of orders m and n.


Step 1. Construct the matrices
   
1 A+B C+D 1 A−B C−D
X= , Y= . (4.34)
2 C + D −A − B 2 −C + D A − B
Step 2. Construct matrices A1 , B1 , C1 , D1 of order 2mn, according to
Eq. (4.22).
Step 3. For I = 2, 3, . . ., construct Williamson matrices Ai , Bi , Ci , Di of
order (2m)i n using recursion [Eq. (4.22)]:
Ai = Ai−1 ⊗ X + Bi−1 ⊗ Y, Bi = Bi−1 ⊗ X − Ai−1 ⊗ Y,
(4.35)
Ci = Ci−1 ⊗ X + Di−1 ⊗ Y, Di = Di−1 ⊗ X − Ci−1 ⊗ Y.
Step 4. Construct the Williamson–Hadamard matrix as
⎛ ⎞
⎜⎜⎜ Ai Bi Ci Di ⎟⎟⎟
⎜⎜⎜ −B Ai −Di Ci ⎟⎟⎟⎟
[WH]4(2m)i n = ⎜⎜⎜⎜ i ⎟.
⎜⎜⎝ −Ci Di Ai −Bi ⎟⎟⎟⎟⎠
(4.36)
−Di −Ci Bi Ai
Output: The Williamson–Hadamard matrix [WH]4(2m)i n of order 4(2m)i n.
Example 4.1.2: Construction of Williamson–Hadamard matrix of order 24.
Step 1. Input the matrices A0 = (1), B0 = (1), C0 = (1), D0 = (1), and
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟

A = ⎜⎜⎜⎝+ + +⎟⎟⎟⎟⎠ , B = C = D = ⎜⎜⎜⎝− + −⎟⎟⎟⎟⎠ .
⎜ (4.37)
+ + + − − +

Step 2. Using Eq. (4.22), construct the following matrices:


% & % &
A C B D
A1 = A3 = D −B , A2 = A4 = C −A , (4.38)
⎛+ + + + − −⎞⎟⎟ ⎛+ − − + − −⎞⎟⎟
⎜⎜⎜ ⎜⎜⎜
⎜⎜⎜+ + + − + −⎟⎟⎟⎟ ⎜⎜⎜− + − − + −⎟⎟⎟⎟
⎜⎜⎜+ + + − − +⎟⎟⎟⎟⎟ ⎜⎜⎜− − + − − +⎟⎟⎟⎟⎟
⎜ ⎜
A1 = A3 = ⎜⎜⎜⎜⎜ ⎟⎟ , A2 = A4 = ⎜⎜⎜⎜⎜ ⎟⎟ . (4.39)
⎜⎜⎜+ − − − + +⎟⎟⎟⎟⎟ ⎜⎜⎜+ − − − − −⎟⎟⎟⎟⎟
⎜⎜⎜− + − + − +⎟⎟⎟⎠ ⎜⎜⎜− + − − − −⎟⎟⎟⎠
⎝ ⎝
− − + + + − − − + − − −

Step 3. Substitute matrices A1 , A2 , A3 , A4 into the Williamson array:


⎛ ⎞
⎜⎜⎜ A1 A2 A1 A2 ⎟⎟⎟
⎜⎜⎜−A A1 −A2 A1 ⎟⎟⎟⎟
= ⎜⎜⎜⎜ 2 ⎟.
⎜⎜⎝−A1 A2 A1 −A2 ⎟⎟⎟⎟⎠
H24 (4.40)
−A2 −A1 A2 A1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


164 Chapter 4

Output: The Williamson–Hadamard matrix H24 of order 24.

Example 4.1.3: Construction of Williamson–Hadamard matrix of order 40.


Step 1. Input the matrices A0 = (1), B0 = (1), C0 = (1), D0 = (1) and
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ − −
− −⎟⎟
⎟ ⎜⎜⎜+ + − − +⎟⎟

⎜⎜⎜− + − −⎟⎟⎟⎟
− ⎜⎜⎜+ + + − −⎟⎟⎟⎟
⎜⎜ ⎟ ⎜⎜ ⎟
A = B = ⎜⎜⎜⎜⎜− − − −⎟⎟⎟⎟ ,
+
⎟ C = ⎜⎜⎜⎜⎜− + + + −⎟⎟⎟⎟ ,

⎜⎜⎜− − + −⎟⎟⎟⎟⎠
− ⎜⎜⎜− − + + +⎟⎟⎟⎟⎠
⎜⎝ ⎜⎝
− − −
− + + − − + +
⎛ ⎞ (4.41)
⎜⎜⎜+ − + + −⎟⎟

⎜⎜⎜⎜− + − + +⎟⎟⎟⎟
⎜ ⎟
D = ⎜⎜⎜⎜⎜+ − + − +⎟⎟⎟⎟ .

⎜⎜⎜+ + − + −⎟⎟⎟⎟⎠
⎜⎝
− + + − +

Step 2. Construct matrices


   
A C B D
A1 = A3 = , A2 = A4 = , (4.42)
D −B C −A
⎛ ⎞
⎜⎜⎜+ − − − − + + − − +⎟⎟

⎜⎜⎜⎜− + − − − + + + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− − + − − − + + + −⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − − + − − − + + +⎟⎟⎟⎟⎟
⎜⎜⎜− − − − + + − − + +⎟⎟⎟⎟⎟
A1 = A3 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ ,
⎜⎜⎜+ − + + − − + + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − + + + − + + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ − + − + + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜+
⎝ + − + − + + + − +⎟⎟⎟⎟⎠
− + + − + + + + + −
⎛ ⎞ (4.43)
⎜⎜⎜+ − − − − + − + + −⎟⎟

⎜⎜⎜− + − − − − + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− − + − − + − + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜⎜− − − + − + + − + −⎟⎟⎟⎟⎟
⎜⎜⎜− − − − + − + + − +⎟⎟⎟⎟⎟
A2 = A4 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ .
⎜⎜⎜+ + − − + − + + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + − − + − + + +⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ + + + − + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜−
⎝ − + + + + + + − +⎟⎟⎟⎟⎠
+ − − + + + + + + −

Step 3. Substitute matrices A1 , A2 , A3 , A4 into the Williamson array.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 165

Table 4.2 Orders of Williamson matrices of order 2n.

35, 37, 39, 43, 49, 51, 55, 63, 69, 77, 81, 85, 87, 93, 95, 99, 105, 111, 115, 117, 119, 121, 125, 129, 133, 135,
143, 145, 147, 155, 161, 165, 169, 171, I75, 185, 187, 189, 195, 203, 207, 209, 215, 217, 221, 225, 231, 243,
247, 253, 255, 259, 261, 273, 275, 279, 285, 289, 297, 299, 301, 315, 319, 323, 333, 335, 341, 345, 351, 357,
361, 363, 377, 387, 391, 403, 405, 407, 425, 429, 437, 441, 455, 459, 465, 473, 475, 481, 483, 495, 513, 525,
527, 529, 551, 559, 561, 567, 575, 589, 609, 621, 625, 627, 637, 645, 651, 667, 675, 693, 713, 725, 729, 731,
751, 759, 775, 777, 783, 817, 819, 825, 837, 851, 891, 899, 903, 925, 957, 961, 989, 1023, 1073, 1075, 1081,
1089, 1147, 1161, 1221, 1247, 1333, 1365, 1419, 1547, 1591, 1729, 1849, 2013

Step 4. Output the Williamson–Hadamard matrix of order 40:


⎛ ⎞
⎜⎜⎜ A1 A2 A1 A2 ⎟⎟⎟
⎜⎜⎜−A A1 −A2 A1 ⎟⎟⎟⎟
H40 = ⎜⎜⎜⎜ 2 ⎟.
⎜⎜⎝−A1 A2 A1 −A2 ⎟⎟⎟⎟⎠
(4.44)
−A2 −A1 A2 A1
Corollary 4.1.1:14,15 Williamson matrices of orders 2i−1 n1 n2 · · · ni exist where
ni ∈ L, i = 1, 2, . . . . In particular, Williamson matrices of order 2n exist, where
the values of n are given in Table 4.2. From Corollary 4.1.1 and Williamson’s
theorem, 2,3 the following emerges:
Corollary 4.1.2: If Williamson matrices of orders n1 , n2 , . . . , nk exist, then a
Williamson–Hadamard matrix of order 2k+1 n1 n2 · · · nk exists.
Note that as follows from the list of orders of existing Williamson matrices,
the existence of Williamson-type matrices of order n also implies the existence
of Williamson matrices of order 2n. The value of Corollary 4.1.1 is as follows:
Although the existence of Williamson matrices of order n can be unknown, accord-
ing to Corollary 4.1.1, Williamson-type matrices of order 2n nevertheless exist.
Lemma 4.1.2: If p = 1(mod 4) is a power of a prime number, then symmetric
(0, ±1) matrices of order p + 1 satisfying conditions of Eq. (4.19) exist.
Proof: In Ref. 8, the existence of cyclic symmetric Williamson matrices was
proved, represented as I + A1 , I − A1 , B1 , B1 , and having an order (p + 1)/2, if
p = 1(mod 4) is a prime power. In this case, we can represent the matrices in
Eq. (4.18) as
   
I B1 A1 0
X= , Y= . (4.45)
B1 −I 0 A1
It is evident that these are symmetric matrices, and we can easily check that they
satisfy all conditions of Eq. (4.19). Now, from Theorem 4.1.2 and Lemma 4.1.2,
we obtain the following.
Corollary 4.1.3: If symmetric Williamson matrices of order n exist, then symme-
tric Williamson matrices of order n(p+1) also exist, where p = 1(mod 4) is a prime
power. Below, several orders of existing symmetric Williamson-type matrices of
order 2n are given, where n ∈ W2 = {k1 · k2 }, and where k1 ∈ W, k2 ∈ {5, 9,
13, 17, 25, 29, 37, 41, 49, 53, 61, 73, 81}.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


166 Chapter 4

Examples of symmetric Williamson-type matrices of order 10 and 18 are given


as follows:
⎛ ⎞
⎜⎜⎜+ + − − + + − − − −⎟⎟

⎜⎜⎜+ + + − − − + − − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + + + − − − + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜⎜− − + + + − − − + −⎟⎟⎟⎟⎟
⎜⎜⎜+ − − + + − − − − +⎟⎟⎟⎟⎟
A10 = B10 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ ,
⎜⎜⎜+ − − − − − + − − +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − − + − + − −⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − + − − − + − + −⎟⎟⎟⎟⎟
⎜⎜⎜−
⎝ − − + − − − + − +⎟⎟⎟⎟⎠
− − − − + + − − + −
⎛ ⎞ (4.46)
⎜⎜⎜+ − + + − + − − − −⎟⎟

⎜⎜⎜− + − + + − + − − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ − + − + − − + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + − + − − − − + −⎟⎟⎟⎟⎟
⎜⎜⎜− + + − + − − − − +⎟⎟⎟⎟⎟

C 10 = D10 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ ,
⎜⎜⎜⎜+ − − − − − − + + −⎟⎟⎟⎟
⎜⎜⎜− ⎟
+ − − − − − − + +⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟
⎜⎜⎜− − + − − + − − − +⎟⎟⎟⎟⎟
⎜⎜⎜− − − + − + + − − −⎟⎟⎟⎟⎠
⎜⎝
− − − − + − + + − −

⎛ ⎞
⎜⎜⎜+ + + + − − + + + + + − + − − + − +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + − − + + + + + − + − − + −⎟⎟⎟⎟

⎜⎜⎜+
⎜⎜⎜ + + + + + − − + − + + + − + − − +⎟⎟⎟⎟

⎜⎜⎜+
⎜⎜⎜ + + + + + + − − + − + + + − + − −⎟⎟⎟⎟⎟

⎜⎜⎜− + + + + + + + − − + − + + + − + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− − + + + + + + + − − + − + + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ − − + + + + + + + − − + − + + + −⎟⎟⎟⎟

⎜⎜⎜+
⎜⎜⎜ + − − + + + + + − + − − + − + + +⎟⎟⎟⎟

⎜⎜⎜+ + + − − + + + + + − + − − + − + +⎟⎟⎟⎟⎟
A18 = B18 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , (4.47a)
⎜⎜⎜+ + − + − − + − + − + + + − − + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + − + − − + − + − + + + − − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + + + − + − − + + + − + + + − − +⎟⎟⎟⎟

⎜⎜⎜+
⎜⎜⎜ − + + + − + − − + + + − + + + − −⎟⎟⎟⎟⎟

⎜⎜⎜−
⎜⎜⎜ + − + + + − + − − + + + − + + + −⎟⎟⎟⎟

⎜⎜⎜− − + − + + + − + − − + + + − + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ − − + − + + + − + − − + + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝− + − − + − + + + + + − − + + + − +⎟⎟⎟⎟

+ − + − − + − + + + + + − − + + + −

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 167

C18 = D18
⎛ ⎞
⎜⎜⎜+ − − − + + − − − + + − + − − + − +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − − + + − − + + + − + − − + −⎟⎟⎟⎟
⎜⎜⎜− − ⎟
⎜⎜⎜ + − − − + + − − + + + − + − − +⎟⎟⎟⎟⎟
⎜⎜⎜− − ⎟
⎜⎜⎜ − + − − − + + + − + + + − + − −⎟⎟⎟⎟
⎜⎜⎜+ − ⎟⎟
⎜⎜⎜ − − + − − − + − + − + + + − + −⎟⎟⎟⎟

⎜⎜⎜+ +
⎜⎜⎜ − − − + − − − − − + − + + + − +⎟⎟⎟⎟⎟

⎜⎜⎜− + + − − − + − − + − − + − + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− − + + − − − + − − + − − + − + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜− − − + + − − − + + − + − − + − + +⎟⎟⎟⎟⎟
= ⎜⎜⎜⎜⎜ ⎟⎟⎟ . (4.47b)
⎜⎜⎜+ +
⎜⎜⎜ − + − − + − + − − − − + + − − −⎟⎟⎟⎟⎟

⎜⎜⎜+ + + − + − − + − − − − − − + + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− + + + − + − − + − − − − − − + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ − + + + − + − − − − − − − − − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − + + + − + − + − − − − − − − +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− − + − + + + − + + + − − − − − − −⎟⎟⎟⎟
⎜⎜⎜+ − ⎟
⎜⎜⎜ − + − + + + − − + + − − − − − −⎟⎟⎟⎟⎟
⎜⎜⎜− + ⎟
⎜⎝ − − + − + + + − − + + − − − − −⎟⎟⎟⎟

+ − + − − + − + + − − − + + − − − −

Theorem 4.1.3: If A, B, C, D are cyclic Williamson matrices of order n, then the


matrix
⎛ ⎞
⎜⎜⎜ A A A B −B C −C −D B C −D −D⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ A −A B −A B −D D −C −B −D −C −C ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ A −B −A A −D D −B B −C −D C −C ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ B
⎜⎜⎜ A −A −A D D D C C −B −B −C ⎟⎟⎟⎟⎟

⎜⎜⎜ B
⎜⎜⎜ −D D D A A A C −C B −C B⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜ B
⎜⎜⎜ C −D D A −A C −A −D C B −B⎟⎟⎟⎟
⎟⎟ (4.48)
⎜⎜⎜ D
⎜⎜⎜ −C B −B A −C −A A B C D −D⎟⎟⎟⎟
⎟⎟
⎜⎜⎜ −C
⎜⎜⎜ −D −C −D C A −A −A −D B −B −B⎟⎟⎟⎟
⎟⎟
⎜⎜⎜ D
⎜⎜⎜ −C −B −B −B C C −D A A A D⎟⎟⎟⎟
⎟⎟
⎜⎜⎜−D
⎜⎜⎜ −B C C C B B −D A −A D −A⎟⎟⎟⎟
⎟⎟
⎜⎜⎜ C
⎜⎜⎝ −B −C C D −B −D −B A −D −A A⎟⎟⎟⎟⎟

−C −D −D −D C −C −B B B D −A −A

is a Williamson–Hadamard matrix of order 12n.

Corollary 4.1.4: A Williamson–Hadamard matrix of order 12n exists, where n


takes a value from Tables 4.1 and 4.2.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


168 Chapter 4

4.2 Construction of 8-Williamson Matrices

8-Williamson matrices are defined similar to Williamson matrices as follows.

Definition: Square (+1, −1) matrices Ai , i = 1, 2, . . . , 8 of order n, which are called


8-Williamson matrices of order n, exist if the following conditions are satisfied:

Ai ATj = A j ATi , i, j = 1, 2, . . . , 8,
8 (4.49)
Ai ATi = 8nIn .
i=1

The Williamson array of order 8 is also known, making it possible to construct a


Hadamard matrix of order 8n, if 8-Williamson matrices of order n exist.

Theorem 4.2.1: 3 If A, B, . . . , G, H are 8-Williamson matrices of order n, then


⎛ ⎞
⎜⎜⎜ A B C D E F G H ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −B A D −C F −E −H G⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −C −D A B G H −E −F ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −D C −B A H −G F −E ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −E −F −G −H A B C D⎟⎟⎟⎟⎟ (4.50)
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ −F E −H G −B A −D C ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −G H E −F −C D A −B⎟⎟⎟⎟⎟
⎝ ⎠
−H −G F E −D −C B A

is a Williamson–Hadamard matrix of order 8n.

Theorem 4.2.2: (Multiplicative theorem for 8-Williamson matrices14–16 ) If


Williamson–Hadamard matrices of orders 4n and 4m exist, then a Williamson–
Hadamard matrix of order 8mn also exists.

Proof: Let A1 , A2 , A3 , A4 and P1 , P2 , P3 , P4 be Williamson matrices of orders n


and m, respectively. Consider the following matrices:

A1 + A2 A1 − A2
X1 = P1 ⊗ − P2 ⊗ ,
2 2
A 1 − A2 A 1 + A2
X2 = P1 ⊗ + P2 ⊗ ,
2 2
A1 + A2 A1 − A2
X3 = P3 ⊗ − P4 ⊗ ,
2 2
A 1 − A2 A 1 + A2
X4 = P3 ⊗ + P4 ⊗ , (4.51)
2 2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 169

A3 + A4 A 3 − A4
X5 = P1 ⊗ − P2 ⊗ ,
2 2
A3 − A4 A3 + A4
X6 = P1 ⊗ + P2 ⊗ ,
2 2
A3 − A4 A 3 + A4
X7 = P3 ⊗ − P4 ⊗ ,
2 2
A 3 + A4 A 3 − A4
X8 = P3 ⊗ + P4 ⊗ .
2 2
Below, we check that Xi , i = 1, 2, . . . , 8 are 8-Williamson matrices of order mn,
i.e., the conditions of Eq. (4.49) are satisfied. Check the first condition,

A1 AT1 − A2 AT2 A1 AT1 + 2A1 AT2 + A2 AT2


X1 X2T = P1 PT1 ⊗ + P1 PT2 ⊗
4 4
A A
1 1
T
− A A
2 2
T
A A
1 1
T
− 2A 1 A2 + A 2 A2
T T
− P2 PT2 ⊗ − P2 PT1 ⊗ ,
4 4 (4.52)
A1 AT1 − A2 AT2 A1 AT1 − 2A1 AT2 + A2 AT2
X2 X1 = P1 P1 ⊗
T T
− P 1 P2 ⊗
T
4 4
A A
1 1
T
− A A
2 2
T
A A
1 1
T
+ 2A 1 A2 + A 2 A2
T T
− P2 PT2 ⊗ + P2 PT1 ⊗ .
4 4
Comparing the obtained expressions, we conclude that X1 X2T = X2 X1T . Similarly, it
can be shown that

Xi X Tj = X j XiT , i, j = 1, 2, . . . , 8. (4.53)

Now we check the second condition of Eq. (4.49). With this purpose, we calculate

(A1 + A2 )(A1 + A2 )T A1 AT1 − A2 AT2


X1 X1T = P1 PT1 ⊗ − P1 PT2 ⊗
4 2
(A1 − A2 )(A1 − A2 )T
+ P2 PT2 ⊗ ,
4
(A1 − A2 )(A1 − A2 )T A1 AT1 − A2 AT2
X2 X2T = P1 PT1 ⊗ + P1 PT2 ⊗
4 2
(A1 + A2 )(A1 + A2 )T
+ P2 PT2 ⊗ ,
4 (4.54)
(A1 + A2 )(A1 + A2 )T A1 AT1 − A2 AT2
X3 X3T = P3 PT3 ⊗ − P3 PT4 ⊗
4 2
(A1 − A2 )(A1 − A2 )T
+ P4 PT4 ⊗ ,
4
(A1 − A2 )(A1 − A2 )T A1 AT1 − A2 AT2
X4 X4T = P3 PT3 ⊗ + P3 PT4 ⊗
4 2
(A1 + A2 )(A1 + A2 )T
+ P4 PT4 ⊗ .
4

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


170 Chapter 4

Summarizing the above expressions, we find that


8 4
(A1 + A2 )(A1 + A2 )T + (A1 − A2 )(A1 − A2 )T
Xi XiT = Pi PTi ⊗ . (4.55)
i=1 i=1
4

But, (A1 + A2 )(A1 + A2 )T + (A1 − A2 )(A1 − A2 )T = 2(A1 AT1 + A2 AT2 ). Thus, from
Eq. (4.55) we have
4 4
1
Xi XiT = Pi PTi ⊗ (A1 AT1 + A2 AT2 ). (4.56)
i=1
2 i=1

From Eq. (4.51), we obtain


8 4
1
Xi XiT = Pi PTi ⊗ (A3 AT3 + A4 AT4 ). (4.57)
i=5
2 i=1

Summarizing both parts of equalities [Eqs. (4.56) and (4.57)], we find that
8 4 4
1
Xi XiT = Pi PTi ⊗ Ai ATi . (4.58)
i=1
2 i=1 i=1

Because Pi and Ai , i = 1, 2, 3, 4 are Williamson matrices of order n and m,


respectively,
4 4
Pi PTi = 4nIn , Ai ATi = 4mIm . (4.59)
i=1 i=1

Now, substituting the last expressions into Eq. (4.58), we conclude that
8
Xi XiT = 8mnImn . (4.60)
i=1

Substituting matrices Xi into Eq. (4.50), we obtain a Williamson–Hadamard matrix


of order 8mn.

Algorithm 4.2.1: Construction of a Williamson–Hadamard matrix of order 24n.

Input: Take matrices


⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟
A1 = ⎜⎜⎜⎝+ + +⎟⎟⎟⎠⎟ ,
⎜ A2 = A3 = A4 = ⎜⎜⎜⎝− + −⎟⎟⎟⎠⎟
⎜ (4.61)
+ + + − − +

and Williamson matrices P1 , P2 , P3 , P4 of order n.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 171

Step 1. Substitute the matrices Ai and Pi , i = 1, 2, 3, 4 into the formula in


Eq. (4.51) to find the matrices
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ P1 −P2 −P2 ⎟⎟⎟ ⎜⎜⎜P2 P1 P1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
X1 = ⎜⎜⎜−P2 P1 −P2 ⎟⎟⎟ , X2 = ⎜⎜⎜P1 P2 P2 ⎟⎟⎟⎟⎟ ,
⎝ ⎠ ⎝ ⎠
−P2 −P2 P1 P1 P1 P2
⎛ ⎞
⎜⎜⎜P2 P2 P2 ⎟⎟⎟
⎜ ⎟
X4 = ⎜⎜⎜⎜⎜P2 P2 P2 ⎟⎟⎟⎟⎟ ,
⎝ ⎠
P2 P2 P2
⎛ ⎞ (4.62)
⎜⎜⎜ P1 −P1 −P1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
X5 = ⎜⎜⎜−P1 P1 −P1 ⎟⎟⎟ ,
⎝ 1 ⎠
−P −P1 P1
⎛ ⎞
2

⎜⎜⎜ P2 −P2 −P2 ⎟⎟⎟


⎜⎜⎜ ⎟
X3 = X6 = X7 = X8 = ⎜⎜⎜−P2 P2 −P2 ⎟⎟⎟⎟⎟ .
⎝ ⎠
−P2 −P2 P2

Step 2. Substitute the matrices Xi into the array in Eq. (4.50) to obtain a
Williamson–Hadamard matrix of order 24n:
⎛ ⎞
⎜⎜⎜ X1 X2 X3 X4 X5 X3 X3 X3 ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜−X2 X1 X4 −X3 X3 −X5 −X3 X3 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜−X3 −X4 X1 X2 X3 X3 −X5 −X3 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜−X4 X3 −X2 X2 X3 −X3 X3 −X5 ⎟⎟⎟⎟
⎜⎜⎜−X −X −X −X ⎟. (4.63)
⎜⎜⎜ 5 3 3 3 X 1 X2 X3 X4 ⎟⎟⎟⎟⎟
⎜⎜⎜−X ⎟
⎜⎜⎜ 3 X 5 −X 3 X 3 −X 2 X1 −X4 X3 ⎟⎟⎟⎟⎟
⎜⎜⎜−X ⎟
⎜⎜⎝ 3 X3 X5 −X3 −X3 X4 X1 −X2 ⎟⎟⎟⎟⎟

−X3 −X3 X3 X5 −X4 −X3 X2 X 1

Output: A Williamson–Hadamard matrix of order 24n.

In particular, 8-Williamson matrices of order 9 are represented as


⎛ ⎞
⎜⎜⎜+ + + − + + − + +⎟⎟

⎜⎜⎜+ + + + − + + − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + − + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜− + + + + + − + +⎟⎟⎟⎟⎟
⎜ ⎟
X1 = ⎜⎜⎜⎜+ − + + + + + − +⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟
⎜⎜⎜+ + − + + + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ + + − + + + + +⎟⎟⎟⎟

⎜⎜⎜+
⎝ − + + − + + + +⎟⎟⎟⎟⎠
+ + − + + − + + +

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


172 Chapter 4

⎛ ⎞
⎜⎜⎜+ − − + + + + + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − + + + + + +⎟⎟⎟⎟
⎜⎜⎜− − + ⎟
⎜⎜⎜ + + + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + +
⎜⎜ + − − + + +⎟⎟⎟⎟⎟

X2 = ⎜⎜⎜⎜+ + + − + − + + +⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟
⎜⎜⎜+ + + − − + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + + + + + + − −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + + − + −⎟⎟⎟⎟
⎜⎝ ⎠
+ + + + + + − − +
⎛ ⎞
⎜⎜⎜+ − − + − − + − −⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − − + − − + −⎟⎟⎟⎟
⎜⎜⎜− − + ⎟
⎜⎜⎜ − − + − − +⎟⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ − − + − − + − −⎟⎟⎟⎟⎟
⎜ ⎟
X4 = ⎜⎜⎜⎜− + − − + − − + −⎟⎟⎟⎟ ,
⎜⎜⎜⎜− − + − − + − − +⎟⎟⎟⎟⎟

⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜+ − −
⎜⎜⎜ + − − + − −⎟⎟⎟⎟⎟

⎜⎜⎜− + − − + − − + −⎟⎟⎟⎟
⎝ ⎠
− − + − − + − − +
⎛ ⎞
⎜⎜⎜+ + + + + + + + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟
⎜⎜⎜+ + + ⎟
⎜⎜⎜ + + + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + +
⎜⎜ + + + + + +⎟⎟⎟⎟⎟

X5 = ⎜⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + + + + + +⎟⎟⎟⎟
⎜⎝ ⎠
+ + + + + + + + +
⎛ ⎞
⎜⎜⎜+ − − − + + − + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + − + − + + − +⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − + + + − + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−
⎜⎜ + + + − − − + +⎟⎟⎟⎟⎟

X3 = X6 = X7 = X8 = ⎜⎜⎜⎜+ − + − + − + − +⎟⎟⎟⎟ . (4.64)
⎜⎜⎜⎜+ + − − − + + + −⎟⎟⎟⎟⎟

⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜
⎜⎜⎜−
⎜⎜⎜ + + − + + + − −⎟⎟⎟⎟⎟

⎜⎜⎜+ − + + − + − + −⎟⎟⎟⎟
⎝ ⎠
+ + − + + − − − +
From Theorem 4.2.2 and Corollary 4.1.3, we have the following.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 173

Table 4.3 Orders of 8-Williamson matrices.

3, 5, . . . , 39, 43, 45, 49, 51, 55, 57, 63, 65, 69, 75, 77, 81, 85, 87, 91, 93, 95, 99, 105, 111, 115, 117, 119, 121,
125, 129, 133, 135, 143, 145, 147, 153, 155, 161, 165, 169, 171, 175, 185, 187, 189, 195, 203, 207, 209, 215,
217, 221, 225, 231, 243, 247, 253, 255, 259, 261, 273, 275, 279, 285, 289, 297, 299, 301, 315, 319, 323, 325,
333, 341, 345, 351, 361, 375, 377, 387, 391, 399, 403, 405, 407, 425, 435, 437, 441, 455, 459, 473, 475, 481,
483, 493, 495, 513, 525, 527, 529, 551, 555, 559, 567, 575, 589, 609, 621, 625, 629, 637, 645, 651, 667, 675,
703, 713, 725, 729, 731, 775, 777, 783, 817, 819, 837, 841, 851, 899, 903, 925, 961, 989, 999, 1001, 1073

Corollary 4.2.1: (a) Symmetric 8-Williamson matrices of order mn exist, where


 $
p+1
m, n ∈ W ∪ , p ≡ 1 (mod 4) is a prime power. (4.65)
2

(b) 8-Williamson matrices of order mn exist, where


 $
p+1
m, n ∈ W ∪ L ∪ , p ≡ 1 (mod 4) is a prime power.
2

See Table 4.3 for examples. The following theorem is correct.

Theorem 4.2.3: Let A, B, C, D and A0 , B0 , C0 , D0 , E0 , F0 , G0 , H0 be Williamson


and 8-Williamson matrices of orders n and m, respectively. Then, the following
matrices

Ai = Ai−1 ⊗ X + Bi−1 ⊗ Y, Bi = Bi−1 ⊗ X − Ai−1 ⊗ Y,


Ci = Ci−1 ⊗ X + Di−1 ⊗ Y, Di = Di−1 ⊗ X − Ci−1 ⊗ Y,
(4.66)
Ei = Ei−1 ⊗ X + Fi−1 ⊗ Y, Fi = Fi−1 ⊗ X − Ei−1 ⊗ Y,
Gi = Gi−1 ⊗ X + Hi−1 ⊗ Y, Hi = Hi−1 ⊗ X − Gi−1 ⊗ Y,

are 8-Williamson matrices of order (2n)i m, i = 1, 2, . . . , where X, Y has the form


of Eq. (4.18).

From Corollary 4.2.1 and Theorem 4.2.3, we conclude that there are eight
Williamson-type matrices of order 2mn, where

m ∈ W8 , n ∈ W ∪ L. (4.67)

4.3 Williamson Matrices from Regular Sequences

In this section, we describe the construction of Williamson and generalized


Williamson matrices based on regular sequences.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


174 Chapter 4

Definition 4.3.1:17 A sequence of (+1, −1) matrices {Qi }2s


i=1 of order m is called a
semiregular s-sequence if the following conditions are satisfied:

Qi QTj = J, i − j  0, ±s, i, j = 1, 2, . . . , 2s,

Qi QTi+s = Qi+s QTi , i = 1, 2, . . . , s, (4.68)


2s
Qi QTi = 2smIm .
i=1

Definition 4.3.2:18,19 A sequence of (+1, −1) matrices {Ai }i=1


s
of order m is called
a regular s-sequence if

Ai A j = J, i  j, i, j = 1, 2, . . . , s,
ATi A j = A j ATi , i = 1, 2, . . . , s, (4.69)
s
(Ai ATi + ATi Ai ) = 2smIm .
i=1

Remark: From the conditions of Eq. (4.67), we can obtain matrices Ai , i = 1,


2, . . . s that also satisfy Ai J = JATj = aJ, i, j = 1, 2, . . . , s, where a is an integer.

Lemma 4.3.1: If a regular s-sequence exists, then a semiregular s-sequence also


exists.

Proof: Let {Ai }i=1


s
be a regular s-sequence of matrices of order m. It is not difficult
to check that the sequence {Qi }2s i=1 is a semiregular s-sequence, where Qi = Ai , and
Qi+s = ATi , i = 1, 2, . . . , s.

Remark: A regular two-sequence (A1 , A2 ) exists of the form19


⎛ ⎞ ⎛ ⎞
⎜⎜⎜ B1 B1 U B1 U 2 ⎟⎟⎟ ⎜⎜⎜ B2 B B U 2⎟
⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜⎜ 2 1

A1 = ⎜⎜⎜⎜⎜ B1 U B1 U 2 B1 ⎟⎟⎟⎟⎟ , A2 = ⎜⎜⎜ B2 U B2 U B2 U ⎟⎟⎟⎟⎟ ,
⎜ (4.70)
⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠
B1 U 2 B1 B1 U B2 U 2 B2 U 2 B2 U 2

where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + −⎟⎟⎟ ⎜⎜⎜+ + −⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
B1 = ⎜⎜⎜⎜− + +⎟⎟⎟⎟⎟ , B2 = ⎜⎜⎜⎜+ + −⎟⎟⎟⎟⎟ ; (4.71)
⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠
+ − + + + −

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 175

i.e.,
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + − + + − + + −⎟⎟
⎟ ⎜⎜⎜+ + − + + − + + −⎟⎟

⎜⎜⎜− + + − + + − + +⎟⎟⎟⎟⎟ ⎜⎜⎜+ + − + + − + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎜⎜⎜
⎜⎜⎜+
⎜⎜⎜ − + + − + + − +⎟⎟⎟⎟⎟ ⎜⎜⎜+
⎜⎜⎜ + − + + − + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜− + + + − + + + −⎟⎟⎟⎟ ⎜⎜⎜− + + − + + − + +⎟⎟⎟⎟
⎟ ⎟
A1 = ⎜⎜⎜⎜⎜+ − + + + − − + +⎟⎟⎟⎟ ,
⎟ A2 = ⎜⎜⎜⎜⎜− + + − + + − + +⎟⎟⎟⎟ .

⎜⎜⎜+ + − − + + + − +⎟⎟⎟⎟⎟ ⎜⎜⎜− + + − + + − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎜⎜⎜
⎜⎜⎜+ − + + + − − + +⎟⎟⎟⎟ ⎜⎜⎜+ − + + − + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝+ + − − + + + − +⎟⎟⎟⎟ ⎜⎜⎝+ − + + − + + − +⎟⎟⎟⎟
⎠ ⎠
− + + + − + + + − + − + + − + + − +
(4.72)

Some known results are now provided.

Theorem 4.3.1: Let p ≡ 1(mod 4) and q ≡ 3(mod 4) be prime powers. Then,


(a) a semiregular (p + 1)-sequence of matrices of order p2 exists 18 and
(b) a regular (q + 1)/2-sequence of matrices of order q2 exists. 19
In particular, from this theorem, we show the existence of a semiregular (n + 1)-
sequence of matrices of order n2 and regular (m + 1)/2-sequence of matrices of
order m2 , where

n ∈ R1 = {5, 9, 13, 17, 25, 29, 37, 41, 49, 53, 61, 73, 81, 89, 97} ,
m ∈ R2
= {3, 7, 11, 19, 23, 27, 31, 43, 47, 59, 67, 71, 79, 83, 103, 107, 119,
127, 131, 139, 151, 163, 167, 179, 191} . (4.73)

Theorem 4.3.2: 17,19 If a regular s-sequence of matrices of order m and a regular


sm-sequence of matrices of order n exists, then a regular s-sequence of matrices of
order mn exists.
Proof: Let A1 = (a1i, j )m
i, j=1 , A2 = (ai, j )i, j=1 , . . . , A s = (ai, j )i, j=1 be a regular s-sequ-
2 m s m

ence of matrices of order m, and {B1 , B2 , . . . , Bt } (t = sm) be a regular t-sequence


of matrices of order n. Denoting
   m
Ck = cki, j = aki, j B(k−1)m+i+ j−1 , k = 1, 2, . . . , s, (4.74)
i, j=1

or
⎛ k ⎞
⎜⎜⎜a11 B(k−1)m+1 ak12 B(k−1)m+2 ··· ak1m Bkm ⎟⎟⎟
⎜⎜⎜⎜ k ⎟⎟⎟
⎜⎜a21 B(k−1)m+2 ak22 B(k−1)m+3 ··· a2m B(k−1)m+1 ⎟⎟⎟⎟
k
Ck = ⎜⎜⎜⎜⎜ .. .. ..
⎟⎟⎟ ,
⎟⎟⎟ (4.75)
⎜⎜⎜ ..
⎜⎜⎝ . . . . ⎟⎟⎟

akm1 Bkm akm2 B(k−1)m+1 ··· akmm Bkm−1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


176 Chapter 4

we can show that {Ck }k=1


s
is a regular s-sequence of matrices of order mn. From
Lemma 4.3.1 and Theorem 4.3.2, we also obtain the following.

Corollary 4.3.1: (a) If a semiregular s-sequence of matrices of order m and a


semiregular (regular) sm-sequence of matrices of order n exist, then a semiregular
s-sequence of matrices of order mn exist.
(b) If q ≡ 3(mod 4) and (q + 1)q2 − 1 ≡ 3(mod 4) are prime powers, then a regular
(1/2)s-sequence of matrices of order [(q + 1)q2 − 1]2 q2 exists.

Proof: Actually, according to Theorem 4.3.1, a regular (1/2)s-sequence of


matrices of order q2 and a regular [(q + 1)/2]q2 -sequence of matrices of order
[(q + 1)q2 − 1]2 exist. Now, according to Theorem 4.3.2, it is possible to assert
that a regular (q + 1)/2-sequence of matrices of order [(q + 1)q2 − 1]2 q2 exists.
In particular, there are regular 12-, 20-, and 28-sequences of matrices of orders
112 · 14512 , 192 · 72192 , and 272 · 20,4112 , respectively. Note that if q ≡ 3(mod 4)
is a prime power, then (q + 1)q2 − 1 ≡ 3(mod 4).

Theorem 4.3.3: If two regular 2-sequences of matrices of orders m and n exist,


respectively, then a regular 2-sequence of matrices of order mn also exists.

Proof: Let {A1 , A2 } and {B1 , B2 } be regular 2-sequences of matrices of orders m


and n, respectively. We can show that matrices
B1 + B2 B1 − B2
P1 = A1 ⊗ + A2 ⊗ ,
2 2 (4.76)
B1 + B2 B1 − B2
P2 = A2 ⊗ + A1 ⊗
2 2
form a regular sequence of matrices of order mn, i.e., they satisfy the conditions of
Eq. (4.69).

Corollary 4.3.2: A regular 2-sequence of matrices of order 9t , t = 1, 2, . . . exists.


In reality, according to Theorem 4.3.1, a regular 2-sequence of matrices of order
9 exists. It is easy to see that from the previous theorem we have Corollary 4.3.2.
Now we will construct Williamson matrices using regular sequences.

Theorem 4.3.4:17,19 If Williamson matrices of order n and a regular (semiregular)


2n-sequence of matrices of order m exist, then Williamson matrices of order mn
exist.
Next, we prove a theorem similar to Theorem 4.3.4 for the construction of
8-Williamson matrices.

Theorem 4.3.5: Let 8-Williamson matrices of order n and the regular 4n-sequ-
ence of matrices of order m exist. Then 8-Williamson matrices of order mn also
exist.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 177

Proof: Let Ai = (ait, j )nt, j=1 , i = 1, 2, . . . , 8 be 8-Williamson matrices of order n, and


(Qi )4n
i=1 be a regular sequence of matrices of order m. We introduce the following
(+1, −1) matrices of order mn:
 n−1  n−1
X1 = a1i+1, j+1 Qi+ j , X2 = a2i+1, j+1 Qn+i+ j ,
i, j=0 i, j=0
 n−1  n−1
X3 = a3i+1, j+1 QTi+ j , X4 = a4i+1, j+1 QTn+i+ j ,
i, j=0 i, j=0
 n−1  n−1 (4.77)
X5 = a5i+1, j+1 Q2n+i+ j , X6 = a6i+1, j+1 Q3n+i+ j ,
i, j=0 i, j=0
 n−1  n−1
X7 = a7i+1, j+1 QT2n+i+ j , X8 = a8i+1, j+1 QT3n+i+ j ,
i, j=0 i, j=0

where the subscript r is calculated by the formula r(mod n).


Prove that matrices Xi are 8-Williamson matrices of order mn, i.e., the conditions
of Eq. (4.50) are satisfied. Calculate the i’th and j’th element of a matrix X1 X2T :
n−1 n−1
X1 X2T (i, j) = a1i+1,k+1 a2j+1,k+1 Qi+k QTn+ j+k = Jm a1i+1,k+1 a2j+1,k+1 . (4.78)
k=0 k=0

We can see that X1 X2T = X2 X1T . We can also show that Xi X Tj = X j XiT , for all i,
j = 1, 2, . . . , 8. Now, we will prove the second condition of Eq. (4.49). With this
purpose, we calculate the i’th and j’th element P(i, j) of the matrix 8i=1 Xi XiT :
n 
P(i, j) = a1i,r a1j,r Qi+r−1 QTj+r−1 + a2i,r a2j,r Qn+i+r−1 QTn+ j+r−1
r=1
+ a3i,r a3j,r Qi+r−1 QTj+r−1 + a4i,r a4j,r Qn+i+r−1 QTn+ j+r−1
+ a5i,r a5j,r Q2n+i+r−1 QT2n+ j+r−1 + a6i,r a6j,r Q3n+i+r−1 QT3n+ j+r−1

+ a7i,r a7j,r Q2n+i+r−1 QT2n+ j+r−1 + a8i,r a8j,r Q3n+i+r−1 QT3n+ j+r−1 . (4.79)

From the conditions of Eqs. (4.49) and (4.69), and from the above relation, we
obtain
8 n
P(i, j) = Jm ati,r atj,r = 0, i  j,
t=1 r=1
(4.80)
4n  
P(i, i) = Q j QTj + QTj Q j = 8mnIm .
j=1

This means that matrices Xi , i = 1, 2, . . . , 8 are 8-Williamson matrices of order mn.


From Theorems 4.3.1 and 4.3.5, we can also conclude that there are 8-Williamson
matrices of order 3 · 232 , 9 · 712 , 9 · 712 , 13 · 1032 , 15 · 1192 , 19 · 1512 , and 21 · 1672 .
Now we will construct generalized Williamson matrices.15,16 The Williamson
method was modified in Refs. 3–6. Thus, instead of using the array in Eq. (4.2),

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


178 Chapter 4

the method used the so-called Geothals–Seidel array,


⎛ ⎞ ⎛ ⎞
⎜⎜⎜ An Bn R ⎜⎜⎜0 0 ···0 1⎟⎟
Dn ⎟⎟⎟⎟ ⎜⎜⎜0 ⎟
1 0⎟⎟⎟⎟⎟
⎜⎜⎜ Cn
⎟⎟ ⎜⎜⎜ 0 ···
⎜⎜⎜ −Bn R An −DTn R CnT R⎟⎟⎟⎟ ⎜ .. ⎟⎟⎟⎟ .
⎜⎜⎜⎜ ⎟⎟ , where R = ⎜⎜⎜⎜ ... ..
.
..
.. 0⎟⎟⎟ (4.81)
⎜⎜⎜ −Cn R DTn R An −BTn R⎟⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎝ ⎟⎠ ⎜⎜⎜0 1 · · · 0 0⎟⎟⎟⎟
−Dn R −CnT R BTn R An ⎝ ⎠
1 0 ··· 0 0
Cyclic matrices An , Bn , Cn , Dn satisfying the second condition of Eq. (4.1) are
called matrices of the Geothals–Seidel type.3
Theorem 4.3.6: (Geothals-Seidel3,6 ) If An , Bn , Cn , and Dn are Geothals–Seidel-
type matrices, then the Geothals–Seidel array gives a Hadamard matrix of order
4n.
Definition: Square (+1, −1) matrices A, B, C, D of order n are called generalized
Williamson matrices if
PQ = QP, PRQT = QRPT , P, Q ∈ {A, B, C, D} ,
(4.82)
AAT + BBT + CC T + DDT = 4nIn .
Note that from the existence of Williamson matrices of order m and T -matrices of
order k, one can construct generalized Williamson matrices of order km.14
Definition: 3,14,33 Cyclic (0, −1, +1) matrices X1 , X2 , X3 , X4 of order k are called
T -matrices if the conditions
Xi ∗ X j = 0, i  j, i, j = 1, 2, 3, 4,
Xi X j = X j Xi , i, j = 1, 2, 3, 4,
(4.83)
X1 + X2 + X3 + X4 is a (−1, +1) − matrix,
X1 X1T + X2 X2T + X3 X3T + X3 X3T = kIk ,
are satisfied, where * is a Hadamard (pointwise) product.
It can be proved that if X1 , X2 , X3 , X4 are T -matrices of order k, then substitution
of matrices

X = A ⊗ X1 + B ⊗ X2 + C ⊗ X3 + D ⊗ X4 ,
Y = −B ⊗ X1 + A ⊗ X2 − D ⊗ X3 + C ⊗ X4 ,
(4.84)
Z = −C ⊗ X1 + D ⊗ X2 + A ⊗ X3 − B ⊗ X4 ,
W = −D ⊗ X1 − C ⊗ X2 + B ⊗ X3 + A ⊗ X4

into the following array (called a Geothals–Seidel array):


⎛ ⎞
⎜⎜⎜ X YR ZR WR ⎟⎟
⎜⎜⎜ −YR T ⎟
⎜⎜⎜ X −W R −Z R⎟⎟⎟⎟⎟
T
⎜⎜⎜ −ZR W T R ⎟ (4.85)
⎜⎝ X −Y T R⎟⎟⎟⎟

−WR −Z R Y R
T T
X

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 179

gives a Baumert–Hall array (for more detail, see forthcoming chapters). There are
infinite classes of T -matrices of orders 2a 10b 26c + 1, where a, b, c are nonnegative
integers.

Theorem 4.3.7: If generalized Williamson matrices of order n and a regular 2n-


sequence of matrices of order m exist, then generalized Williamson matrices of
order mn exist.
Proof: Let A = (ai, j ), B = (bi, j ), C = (ci, j ), D = (di, j ) be generalized Williamson
matrices of order n, and {Qi }2n−1
i=0 be a regular sequence of matrices of order m. The
matrices A, B, C, D are represented as
   
AT = AT1 , AT2 , . . . , ATn , BT = BT1 , BT2 , . . . , BTn ,
    (4.86)
C T = C1T , C2T , . . . , CnT , DT = DT1 , DT2 , . . . , DTn .

Now, we can rewrite the conditions of Eq. (4.82) as


n n
pi,k qk, j = qi,k pk, j , pi, j , qi, j ∈ ai, j , bi, j , di, j , di, j ,
k=1 k=1

P1i Q j = Q1i P j , Pi , Qi ∈ {Ai , B⎧i , Ci , Di } , (4.87)





⎨0, if i  j,
Ai A j + Bi B j + CiC j + Di D j = ⎪ ⎪ i, j = 1, 2, . . . , n,

⎩4n, if i = j,

where

P1 = (pn , pn−1 , . . . , p1 ), if P = (p1 , p2 , . . . , pn ). (4.88)

We introduce the following matrices:


 n−1  n−1
X = ai, j Q(n−i+ j)(mod n) , Y = bi, j Qn+(n−i+ j)(mod n) ,
i, j=0 i, j=0
 n−1  n−1 (4.89)
Z = ci, j QT(n−i+ j)(mod n) , W = di, j QTn+(n−i+ j)(mod n) .
i, j=0 i, j=0

First, we prove that X, Y, Z, W are generalized Williamson matrices, i.e., the


conditions of Eq. (4.87) are satisfied. Furthermore, we omit (mod n) in an index. It
is not difficult to see that the i’th block row and j’th block column of matrices X
and Y have the form
ai,0 Q(n−i) ai,1 Q(n−i+1) · · · ai,n−1 Q(2n−i−1) ,
(4.90)
b0, j Qn+(n+ j) b1, j Qn+(n+ j−1) · · · bn−1, j Qn+( j+1) .

Hence, the i’th, j’th block element of matrix XY is represented as


n−1
XY(i, j) = ai,k bk, j Q(n−i+k) Qn+(n+ j−k) . (4.91)
k=0

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


180 Chapter 4

On the other hand, we also find that


n−1
Y X(i, j) = bi,k ak, j Qn+(n−i+k) Q(n+ j−k) . (4.92)
k=0

According to Eq. (4.69), we can rewrite the two last equations as

n−1
XY(i, j) = Jm ai,k bk, j ,
k=0
n−1
(4.93)
Y X(i, j) = Jm bi,k ak, j , i, j = 1, 2, . . . , n − 1.
k=0

According to the first condition of Eq. (4.87), we find that XY = Y X. Other


conditions such as PQ = QP can be proved in a similar manner.
Now, prove the condition XRY T = YRX T . Let us represent the i’th block row of
matrix XR and the j’th block column of the matrix Y T as

ai,n Q(n−i−1) , ai,n−1 Q(n−i−2) , . . . , ai,2 Q(−i+1) , ai,1 Q(−i) ,


(4.94)
b j,1 QTn+(n− j) , b j,2 QTn+(n− j+1) , ..., b j,n−1 QTn+(2n− j−2) , b j,n QTn+(2n− j−1) .

Hence, the i’th, j’th block elements of matrices XRY T and YRX T have the
following form:

n−1
XRY T (i, j) = ai,n−k−1 b j,k Q(n−i−1−k) QTn+(n− j+k) ,
k=0
n−1
(4.95)
YRX (i, j) =
T
bi,n−k−1 a j,k Qn+(2n−i−1−k) QT(n− j+k) .
k=0

According to the second condition of Eq. (4.69), we obtain

n−1
XRY T (i, j) = Jm ai,n−k−1 b j,k ,
k=0
n−1
(4.96)
YRX (i, j) = Jm
T
bi,n−k−1 a j,k .
k=0

Thus, the second condition of Eq. (4.87) is satisfied, which means that we have

XRY T (i, j) = Jm A1i B j = Jm B1i A j = YRX T (i, j). (4.97)

Other conditions such as PRQT = QRPT may be similarly proved.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 181

Table 4.4 Williamson matrices of various types.

Types of matrices Conditions Order of matrices

Williamson n ≡ 3 (mod 4) is a prime power, (n + 1)/4 is n2 (n + 1)/4


the order of Williamson matrices
Williamson n is the order of Williamson matrices, k is a 2 · 9k n
natural number
8-Williamson m ≡ 7 (mod 8) is a prime power, (m + 1)/8 is m2 (m + 1)/8
the order of 8-Williamson matrices
8-Williamson n, m ≡ 3(mod 4) is a prime power, k, kn2 (n + 1)/4, [n2 m2 (n + 1)(m + 1)]/16
(n + 1)/4, (m + 1)/4 is the order of
8-Williamson matrices
8-Williamson n ≡ 3 (mod 4) is prime power, n, (m + 1)/4 is [9k m2 n(m + 1)]/2
the order of Williamson matrices, k is a
natural number
Generalized n ≡ 3 (mod 4) is the prime power, (n + 1)/4 is n2 (n + 1)/4
Williamson the order of generalized Williamson matrices

Now we are going to prove the third condition of Eq. (4.87). We can see that the
i’th block rows of matrices X, Y, Z, W, have the following forms, respectively:
ai,1 Q(n−i) ai,2 Q(n−i+1) · · · ai,n Q(2n−i−1) ;
bi,1 Qn+(n−i) bi,2 Qn+(n−i+1) · · · bi,n Qn+(2n−i−1) ;
(4.98)
ci,1 QT(n−i) ci,2 QT(n−i+1) · · · ci,n QT(2n−i−1) ;
di,1 QTn+(n−i) di,2 QTn+(n−i+1) · · · di,n QTn+(2n−i−1) .

Calculating the i’th, j’th block element of a matrix XX T + YY T + ZZ T + WW T , we


find that
n−1 
P(i, j) = ai,k a j,k Q(n−i+k) QT(n− j+k) + bi,k b j,k Qn+(n−i+k) QTn+(n− j+k)
k=0

+ ci,k c j,k QT(n−i+k) Q(n− j+k) + di,k d j,k QTn+(n−i+k) Qn+(n− j+k) . (4.99)

From the conditions of Eqs. (4.69) and (4.87), we conclude


n−1
P(i, j) = Jm (ai,k a j,k + bi,k b j,k + ci,k c j,k + di,k d j,k ),
k=0
n−1 (4.100)
P(i, i) = (Qk QTk + QTk Qk ) = 4mnIm .
k=0

From Theorems 4.3.1 and 4.3.4–4.3.7, it follows that the existence of


Williamson matrices of various types are given in Table 4.4. In particular, we
conclude the existence of
(1) generalized Williamson matrices of orders [n2 (n + 1)]/4, where
n ∈ G = {19, 27, 43, 59, 67, 83, 107, 131, 163, 179, 211, 227,
251, 283, 307, 331, 347, 379, 419}. (4.101)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


182 Chapter 4

(2) 8-Williamson matrices of orders [m2 (m + 1)]/8, where

m ∈ W81 = {23, 71, 103, 119, 151, 167, 263, 311, 359, 423, 439}. (4.102)

References
1. J. Williamson, “Hadamard determinant theorem and sum of four squares,”
Duke Math. J. 11, 65–81 (1944).
2. J. Williamson, “Note on Hadamard’s determinant theorem,” Bull. Am. Math.
Soc. (53), 608–613 (1947).
3. W. D. Wallis, A. P. Street, and J. S. Wallis, Combinatorics: Room Squares,
Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics, 292,
Springer, Berlin/Heidelberg (1972) 273–445.
4. J. S. Wallis, “Some matrices of Williamson type,” Utilitas Math. 4, 147–154
(1973).
5. J. M. Geothals and J. J. Seidel, “Orthogonal matrices with zero diagonal,”
Can. J. Math. 19, 1001–1010 (1967).
6. J. M. Geothals and J. J. Seidel, “A skew Hadamard matrix of order 36,”
J. Austral. Math. Soc. 11, 343–344 (1970).
7. M. Hall Jr., Combinatorial Theory, Blaisdell Publishing Co., Waltham, MA
(1970).
8. R. J. Turyn, “An infinitive class of Williamson matrices,” J. Comb. Theory,
Ser. A 12, 319–322 (1972).
9. J. S. Wallis, “On Hadamard matrices,” J. Comb. Theory, Ser. A 18, 149–164
(1975).
10. A. G. Mukhopodhyay, “Some infinitive classes of Hadamard matrices,”
J. Comb. Theory, Ser. A 25, 128–141 (1978).
11. J. S. Wallis, “Williamson matrices of even order,” in Combinatorial Mathe-
matics, Proc. 2nd Austral. Conf., Lecture Notes in Mathematics, 403 132–142
Springer, Berlin/Heidelberg (1974).
12. E. Spence, “An infinite family of Williamson matrices,” J. Austral. Math. Soc.,
Ser. A 24, 252–256 (1977).
13. J. S. Wallis, “Construction of Williamson type matrices,” Lin. Multilin.
Algebra 3, 197–207 (1975).
14. S. S. Agaian and H. G. Sarukhanian, “Recurrent formulae of the construction
Williamson type matrices,” Math. Notes 30 (4), 603–617 (1981).
15. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics, 1168, Springer, Berlin/Heidelberg (1985).
16. H.G. Sarukhanyan, “Hadamard Matrices and Block Sequences”, Doctoral
thesis, Institute for Informatics and Automation Problems NAS RA, Yerevan,
Armenia (1998).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


“Plug-In Template” Method: Williamson–Hadamard Matrices 183

17. X. M. Zhang, “Semi-regular sets of matrices and applications,” Australas. J.


Combinator 7, 65–80 (1993).
18. J. Seberry and A. L. Whiteman, “New Hadamard matrices and conference
matrices obtained via Mathon’s construction,” Graphs Combinator 4, 355–377
(1988).
19. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Surveys in Contemporary Design Theory, Wiley-Interscience
Series in Discrete Mathematics, John Wiley & Sons, Hoboken, NJ (1992).
Chapter 5
Fast Williamson–Hadamard Transforms
Hadamard matrices have recently received attention due to their numerous known and promising applications.^{1–27} FHT algorithms have been developed for the orders N = 2^n, 12·2^n, and 4^n. The difficulty of constructing an N ≡ 0 (mod 4)-point HT is related to the problem of the existence of Hadamard matrices (the so-called Hadamard problem).
In this chapter, we utilize Williamson's construction of parametric Hadamard matrices to develop efficient computational algorithms for a special type of HT: the Williamson–Hadamard transform. Several algorithms for its fast computation are presented, including an efficient algorithm for the 4t-point (t an "arbitrary" integer) Williamson–Hadamard transform. Comparative estimates revealing the efficiency of the proposed algorithms with respect to known ones are given, and the results of numerical examples are presented.
Section 5.1 describes the Hadamard matrix construction from Williamson matrices. Sections 5.2 and 5.3 present the block representation of parametric Williamson–Hadamard matrices, the fast Williamson–Hadamard block transform algorithm, and a Williamson–Hadamard transform algorithm for the add/shift architecture. Sections 5.4 and 5.5 present Williamson–Hadamard matrix constructions and fast transform algorithms based on multiplicative theorems. In Section 5.6, the complexities of the developed algorithms and comparative estimates are presented, revealing the efficiency of the proposed algorithms with respect to known ones.

5.1 Construction of Hadamard Matrices Using Williamson Matrices

In this section, we describe a fast algorithm for the generation of Williamson–Hadamard matrices and transforms. We have seen that if four (+1, −1) matrices A, B, C, D of order n exist with

$$PQ^T = QP^T, \quad P, Q \in \{A, B, C, D\}, \qquad AA^T + BB^T + CC^T + DD^T = 4nI_n, \tag{5.1}$$

then

$$W_{4n} = \begin{pmatrix} A & B & C & D \\ -B & A & -D & C \\ -C & D & A & -B \\ -D & -C & B & A \end{pmatrix} \tag{5.2}$$

is a Hadamard matrix of order 4n.


Note that any cyclic symmetric matrix A of order n can be represented as

$$A = \sum_{i=0}^{n-1} a_i U^i, \tag{5.3}$$

where U is the cyclic matrix of order n whose first row is (0, 1, 0, \ldots, 0), and U^{n+i} = U^i, a_i = a_{n-i}, for i = 1, 2, \ldots, n-1.
Thus, the four cyclic symmetric Williamson matrices A ⇔ (a_0, a_1, \ldots, a_{n-1}), B ⇔ (b_0, b_1, \ldots, b_{n-1}), C ⇔ (c_0, c_1, \ldots, c_{n-1}), D ⇔ (d_0, d_1, \ldots, d_{n-1}) can be represented as

$$A(a_0, \ldots, a_{n-1}) = \sum_{i=0}^{n-1} a_i U^i, \quad B(b_0, \ldots, b_{n-1}) = \sum_{i=0}^{n-1} b_i U^i, \quad C(c_0, \ldots, c_{n-1}) = \sum_{i=0}^{n-1} c_i U^i, \quad D(d_0, \ldots, d_{n-1}) = \sum_{i=0}^{n-1} d_i U^i, \tag{5.4}$$

where the coefficients a_i, b_i, c_i, and d_i, 0 ≤ i ≤ n − 1, satisfy the conditions

$$a_i = a_{n-i}, \quad b_i = b_{n-i}, \quad c_i = c_{n-i}, \quad d_i = d_{n-i}. \tag{5.5}$$

Additionally, if a_0 = b_0 = c_0 = d_0 = 1, then

$$A = A^+ - A^-, \quad B = B^+ - B^-, \quad C = C^+ - C^-, \quad D = D^+ - D^-, \tag{5.6}$$

where Q^+ denotes the (0, 1) matrix obtained from the (+1, −1) matrix Q by replacing −1 with 0, and Q^- denotes the (0, 1) matrix obtained from Q by replacing −1 with +1 and +1 with 0, respectively.

Thus, the equation

$$A^2 + B^2 + C^2 + D^2 = 4nI_n \tag{5.7}$$

can be expressed as

$$\left(2A^+ - J\right)^2 + \left(2B^+ - J\right)^2 + \left(2C^+ - J\right)^2 + \left(2D^+ - J\right)^2 = 4nI_n, \tag{5.8}$$

where J = A^+ + A^- = B^+ + B^- = C^+ + C^- = D^+ + D^-, i.e., J is the matrix of ones.


Below, we state an algorithm to construct Williamson–Hadamard matrices.

Algorithm 5.1.1: Hadamard matrix construction via cyclic symmetric parametric Williamson matrices.

Input: (a_0, a_1, \ldots, a_{n-1}), (b_0, b_1, \ldots, b_{n-1}), (c_0, c_1, \ldots, c_{n-1}), and (d_0, d_1, \ldots, d_{n-1}).

Step 1. Construct matrices A, B, C, D by

$$A = \sum_{i=0}^{n-1} a_i U^i, \quad B = \sum_{i=0}^{n-1} b_i U^i, \quad C = \sum_{i=0}^{n-1} c_i U^i, \quad D = \sum_{i=0}^{n-1} d_i U^i. \tag{5.9}$$

Step 2. Substitute matrices A, B, C, D into the array

$$W_{4n} = \begin{pmatrix} A & B & C & D \\ -B & A & -D & C \\ -C & D & A & -B \\ -D & -C & B & A \end{pmatrix}. \tag{5.10}$$

Output: Parametric Williamson–Hadamard matrix W_{4n}:

$$W_{4n}(a_0, \ldots, a_{n-1}, b_0, \ldots, b_{n-1}, c_0, \ldots, c_{n-1}, d_0, \ldots, d_{n-1}) =
\begin{pmatrix}
A(a_0, \ldots, a_{n-1}) & B(b_0, \ldots, b_{n-1}) & C(c_0, \ldots, c_{n-1}) & D(d_0, \ldots, d_{n-1}) \\
-B(b_0, \ldots, b_{n-1}) & A(a_0, \ldots, a_{n-1}) & -D(d_0, \ldots, d_{n-1}) & C(c_0, \ldots, c_{n-1}) \\
-C(c_0, \ldots, c_{n-1}) & D(d_0, \ldots, d_{n-1}) & A(a_0, \ldots, a_{n-1}) & -B(b_0, \ldots, b_{n-1}) \\
-D(d_0, \ldots, d_{n-1}) & -C(c_0, \ldots, c_{n-1}) & B(b_0, \ldots, b_{n-1}) & A(a_0, \ldots, a_{n-1})
\end{pmatrix}. \tag{5.11}$$

It has been shown that for any a_i, b_i, c_i, d_i, 0 ≤ i ≤ n − 1, with |a_i| = |b_i| = |c_i| = |d_i| = 1 and a_i = a_{n-i}, b_i = b_{n-i}, c_i = c_{n-i}, d_i = d_{n-i}, the matrix W_{4n}(a_0, \ldots, a_{n-1}, \ldots, d_0, \ldots, d_{n-1}) is a Williamson–Hadamard matrix of order 4n.

The following is an example. Let (a_0, a_1, a_1), (b_0, b_1, b_1), (c_0, c_1, c_1), and (d_0, d_1, d_1) be the first rows of parametric Williamson-type cyclic symmetric matrices of order 3. Using Algorithm 5.1.1, we can construct the following parametric matrix of order 12:

$$\begin{pmatrix}
a_0 & a_1 & a_1 & b_0 & b_1 & b_1 & c_0 & c_1 & c_1 & d_0 & d_1 & d_1 \\
a_1 & a_0 & a_1 & b_1 & b_0 & b_1 & c_1 & c_0 & c_1 & d_1 & d_0 & d_1 \\
a_1 & a_1 & a_0 & b_1 & b_1 & b_0 & c_1 & c_1 & c_0 & d_1 & d_1 & d_0 \\
-b_0 & -b_1 & -b_1 & a_0 & a_1 & a_1 & -d_0 & -d_1 & -d_1 & c_0 & c_1 & c_1 \\
-b_1 & -b_0 & -b_1 & a_1 & a_0 & a_1 & -d_1 & -d_0 & -d_1 & c_1 & c_0 & c_1 \\
-b_1 & -b_1 & -b_0 & a_1 & a_1 & a_0 & -d_1 & -d_1 & -d_0 & c_1 & c_1 & c_0 \\
-c_0 & -c_1 & -c_1 & d_0 & d_1 & d_1 & a_0 & a_1 & a_1 & -b_0 & -b_1 & -b_1 \\
-c_1 & -c_0 & -c_1 & d_1 & d_0 & d_1 & a_1 & a_0 & a_1 & -b_1 & -b_0 & -b_1 \\
-c_1 & -c_1 & -c_0 & d_1 & d_1 & d_0 & a_1 & a_1 & a_0 & -b_1 & -b_1 & -b_0 \\
-d_0 & -d_1 & -d_1 & -c_0 & -c_1 & -c_1 & b_0 & b_1 & b_1 & a_0 & a_1 & a_1 \\
-d_1 & -d_0 & -d_1 & -c_1 & -c_0 & -c_1 & b_1 & b_0 & b_1 & a_1 & a_0 & a_1 \\
-d_1 & -d_1 & -d_0 & -c_1 & -c_1 & -c_0 & b_1 & b_1 & b_0 & a_1 & a_1 & a_0
\end{pmatrix}. \tag{5.12}$$
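To make the construction concrete, the following is a minimal Python sketch of Algorithm 5.1.1 (our own illustrative code, assuming NumPy; the helper names `cyclic` and `williamson_hadamard` are not from the text). It assembles W_{4n} from the four first rows and checks the Hadamard condition for the order-12 case of Eq. (5.12) with a = (1, 1, 1) and b = c = d = (1, −1, −1):

```python
import numpy as np

def cyclic(first_row):
    # Cyclic matrix of Eq. (5.3): row i is the first row shifted right by i
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)])

def williamson_hadamard(a, b, c, d):
    # Algorithm 5.1.1: build W_{4n} of Eq. (5.10) from the first rows
    # (a_0,...,a_{n-1}), ..., (d_0,...,d_{n-1}) of the Williamson matrices
    A, B, C, D = (cyclic(r) for r in (a, b, c, d))
    return np.block([[ A,  B,  C,  D],
                     [-B,  A, -D,  C],
                     [-C,  D,  A, -B],
                     [-D, -C,  B,  A]])

# Order-12 instance of Eq. (5.12) with a = (1,1,1), b = c = d = (1,-1,-1)
W = williamson_hadamard([1, 1, 1], [1, -1, -1], [1, -1, -1], [1, -1, -1])
assert (W @ W.T == 12 * np.eye(12)).all()  # Hadamard condition of Eq. (5.1)
```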

5.2 Parametric Williamson Matrices and Block Representation of Williamson–Hadamard Matrices

In this section, we present an approach to block Hadamard matrix construction that is equivalent to the Williamson–Hadamard matrices. This approach is useful for designing fast transform algorithms and generating new Hadamard matrices (for more details, see Chapter 2). Now we want to use the concepts of Algorithm 5.1.1 to build an equivalent block-cyclic matrix.

The first block, P_0, is formed as follows: (1) from the first row of the matrix in Eq. (5.12), taking the first, fourth, seventh, and tenth elements (a_0, b_0, c_0, d_0), we form the first row of block P_0; (2) from the fourth row of the above matrix, taking the first, fourth, seventh, and tenth elements (−b_0, a_0, −d_0, c_0), we construct the second row of block P_0, and so on. Hence, we obtain

$$P_0 = \begin{pmatrix} a_0 & b_0 & c_0 & d_0 \\ -b_0 & a_0 & -d_0 & c_0 \\ -c_0 & d_0 & a_0 & -b_0 \\ -d_0 & -c_0 & b_0 & a_0 \end{pmatrix}. \tag{5.13}$$

We form the second (and third) block P_1 as follows: (1) from the second, fifth, eighth, and eleventh elements of the first row, we form the first row (a_1, b_1, c_1, d_1) of block P_1; (2) from the second, fifth, eighth, and eleventh elements of the fourth row, we form the second row (−b_1, a_1, −d_1, c_1) of block P_1, and so on. Hence, we obtain

$$P_1 = \begin{pmatrix} a_1 & b_1 & c_1 & d_1 \\ -b_1 & a_1 & -d_1 & c_1 \\ -c_1 & d_1 & a_1 & -b_1 \\ -d_1 & -c_1 & b_1 & a_1 \end{pmatrix}. \tag{5.14}$$

From Eqs. (5.12)–(5.14), we obtain

$$[BW]_{12} = \begin{pmatrix}
a_0 & b_0 & c_0 & d_0 & a_1 & b_1 & c_1 & d_1 & a_1 & b_1 & c_1 & d_1 \\
-b_0 & a_0 & -d_0 & c_0 & -b_1 & a_1 & -d_1 & c_1 & -b_1 & a_1 & -d_1 & c_1 \\
-c_0 & d_0 & a_0 & -b_0 & -c_1 & d_1 & a_1 & -b_1 & -c_1 & d_1 & a_1 & -b_1 \\
-d_0 & -c_0 & b_0 & a_0 & -d_1 & -c_1 & b_1 & a_1 & -d_1 & -c_1 & b_1 & a_1 \\
a_1 & b_1 & c_1 & d_1 & a_0 & b_0 & c_0 & d_0 & a_1 & b_1 & c_1 & d_1 \\
-b_1 & a_1 & -d_1 & c_1 & -b_0 & a_0 & -d_0 & c_0 & -b_1 & a_1 & -d_1 & c_1 \\
-c_1 & d_1 & a_1 & -b_1 & -c_0 & d_0 & a_0 & -b_0 & -c_1 & d_1 & a_1 & -b_1 \\
-d_1 & -c_1 & b_1 & a_1 & -d_0 & -c_0 & b_0 & a_0 & -d_1 & -c_1 & b_1 & a_1 \\
a_1 & b_1 & c_1 & d_1 & a_1 & b_1 & c_1 & d_1 & a_0 & b_0 & c_0 & d_0 \\
-b_1 & a_1 & -d_1 & c_1 & -b_1 & a_1 & -d_1 & c_1 & -b_0 & a_0 & -d_0 & c_0 \\
-c_1 & d_1 & a_1 & -b_1 & -c_1 & d_1 & a_1 & -b_1 & -c_0 & d_0 & a_0 & -b_0 \\
-d_1 & -c_1 & b_1 & a_1 & -d_1 & -c_1 & b_1 & a_1 & -d_0 & -c_0 & b_0 & a_0
\end{pmatrix} \tag{5.15}$$

or

$$[BW]_{12} = \begin{pmatrix} P_0 & P_1 & P_1 \\ P_1 & P_0 & P_1 \\ P_1 & P_1 & P_0 \end{pmatrix}, \tag{5.16}$$

which is a block-cyclic, block-symmetric Hadamard matrix.

Using the properties of the Kronecker product, we can rewrite Eq. (5.15) as

$$[BW]_{12} = \begin{pmatrix} P_0 & P_1 & P_1 \\ P_1 & P_0 & P_1 \\ P_1 & P_1 & P_0 \end{pmatrix} = I_3 \otimes P_0 + U \otimes P_1 + U^2 \otimes P_1. \tag{5.17}$$

In general, any Williamson–Hadamard matrix of order 4n can be represented as

$$[BW]_{4n} = \sum_{i=0}^{n-1} U^i \otimes Q_i, \tag{5.18}$$

where

$$Q_i(a_i, b_i, c_i, d_i) = \begin{pmatrix} a_i & b_i & c_i & d_i \\ -b_i & a_i & -d_i & c_i \\ -c_i & d_i & a_i & -b_i \\ -d_i & -c_i & b_i & a_i \end{pmatrix}, \tag{5.19}$$

with Q_i = Q_{n-i}, a_i, b_i, c_i, d_i = ±1, and ⊗ denoting the Kronecker product.^{13}

Hadamard matrices of the form in Eq. (5.18) are called block-cyclic, block-symmetric Hadamard matrices.^{13} The Williamson–Hadamard matrix W_{12} (see Section 5.1) can be represented as a block-cyclic, block-symmetric matrix,
$$[BW]_{12} = \begin{pmatrix}
+ & + & + & + & + & - & - & - & + & - & - & - \\
- & + & - & + & + & + & + & - & + & + & + & - \\
- & + & + & - & + & - & + & + & + & - & + & + \\
- & - & + & + & + & + & - & + & + & + & - & + \\
+ & - & - & - & + & + & + & + & + & - & - & - \\
+ & + & + & - & - & + & - & + & + & + & + & - \\
+ & - & + & + & - & + & + & - & + & - & + & + \\
+ & + & - & + & - & - & + & + & + & + & - & + \\
+ & - & - & - & + & - & - & - & + & + & + & + \\
+ & + & + & - & + & + & + & - & - & + & - & + \\
+ & - & + & + & + & - & + & + & - & + & + & - \\
+ & + & - & + & + & + & - & + & - & - & + & +
\end{pmatrix} \tag{5.20}$$

or

$$[BW]_{12} = \begin{pmatrix}
Q_0(+1,+1,+1,+1) & Q_4(+1,-1,-1,-1) & Q_4(+1,-1,-1,-1) \\
Q_4(+1,-1,-1,-1) & Q_0(+1,+1,+1,+1) & Q_4(+1,-1,-1,-1) \\
Q_4(+1,-1,-1,-1) & Q_4(+1,-1,-1,-1) & Q_0(+1,+1,+1,+1)
\end{pmatrix}. \tag{5.21}$$
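The block-cyclic form of Eq. (5.18) is equally easy to program. The sketch below (again our own NumPy code; `block_cyclic_bw` and `Q` are illustrative names) builds [BW]_{12} from its first block row (Q_0, Q_4, Q_4) and verifies that it is a Hadamard matrix:

```python
import numpy as np

def Q(a, b, c, d):
    # Williamson-type block of Eq. (5.19)
    return np.array([[ a,  b,  c,  d],
                     [-b,  a, -d,  c],
                     [-c,  d,  a, -b],
                     [-d, -c,  b,  a]])

def block_cyclic_bw(blocks):
    # [BW]_{4n} = sum_i U^i (x) Q_i of Eq. (5.18), U the cyclic shift matrix
    n = len(blocks)
    U = np.roll(np.eye(n, dtype=int), 1, axis=1)  # first row (0, 1, 0, ..., 0)
    return sum(np.kron(np.linalg.matrix_power(U, i), blocks[i]) for i in range(n))

# [BW]_12 of Eqs. (5.20)-(5.21): first block row (Q_0, Q_4, Q_4)
BW12 = block_cyclic_bw([Q(1, 1, 1, 1), Q(1, -1, -1, -1), Q(1, -1, -1, -1)])
assert (BW12 @ BW12.T == 12 * np.eye(12)).all()
```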

From Eq. (5.18), we can see that all of the blocks are Hadamard matrices of Williamson type of order 4. In Ref. 14, it was proved that cyclic symmetric Williamson–Hadamard block matrices can be constructed using only five different blocks, for instance,

$$Q_0 = \begin{pmatrix} + & + & + & + \\ - & + & - & + \\ - & + & + & - \\ - & - & + & + \end{pmatrix}, \quad
Q_1 = \begin{pmatrix} + & + & + & - \\ - & + & + & + \\ - & - & + & - \\ + & - & + & + \end{pmatrix}, \quad
Q_2 = \begin{pmatrix} + & + & - & + \\ - & + & - & - \\ + & + & + & - \\ - & + & + & + \end{pmatrix},$$
$$Q_3 = \begin{pmatrix} + & - & + & + \\ + & + & - & + \\ - & + & + & + \\ - & - & - & + \end{pmatrix}, \quad
Q_4 = \begin{pmatrix} + & - & - & - \\ + & + & + & - \\ + & - & + & + \\ + & + & - & + \end{pmatrix}. \tag{5.22}$$

For example, the Williamson–Hadamard block matrix [BW]_{12} was constructed using only the matrices Q_0 and Q_4.

Note that, once the first block is fixed, at most four further blocks are needed to design any Williamson–Hadamard block matrix, and these four blocks are defined uniquely up to sign. Thus, if the first row of the first block contains an even number of +1's, then the first rows of the other four blocks contain an odd number of +1's. This means that if n is odd, Q_i = Q_{n-i}, a_i, b_i, c_i, d_i = ±1, and a_0 + b_0 + c_0 + d_0 = 4, then a_i + b_i + c_i + d_i = ±2. Similarly, if the first row of the first block contains an odd number of +1's, then the first rows of the other four blocks contain an even number of +1's; i.e., if n is odd, Q_i = Q_{n-i}, a_i, b_i, c_i, d_i = ±1, and a_0 + b_0 + c_0 + d_0 = 2, then a_i + b_i + c_i + d_i = 0 or 4.

The set of blocks with a fixed first block with an odd number of +1's is as follows:

$$Q_0^1 = \begin{pmatrix} + & + & + & - \\ - & + & + & + \\ - & - & + & - \\ + & - & + & + \end{pmatrix}, \quad
Q_1^1 = \begin{pmatrix} + & - & - & + \\ + & + & - & - \\ + & + & + & + \\ - & + & - & + \end{pmatrix}, \quad
Q_2^1 = \begin{pmatrix} - & + & - & + \\ - & - & - & - \\ + & + & - & - \\ - & + & + & - \end{pmatrix},$$
$$Q_3^1 = \begin{pmatrix} - & - & + & + \\ + & - & - & + \\ - & + & - & + \\ - & - & - & - \end{pmatrix}, \quad
Q_4^1 = \begin{pmatrix} + & + & + & + \\ - & + & - & + \\ - & + & + & - \\ - & - & + & + \end{pmatrix}. \tag{5.23}$$

The first block rows of Williamson–Hadamard block matrices are given in Appendix A.3 (see Refs. 13 and 14).

5.3 Fast Block Williamson–Hadamard Transform


In this section, we describe two algorithms for the calculation of the 4n-point forward block Williamson–Hadamard transform,

$$F = [BW]_{4n} f. \tag{5.24}$$

Let us split the column vector f into n four-dimensional vectors as

$$f = \sum_{i=0}^{n-1} P_i \otimes X_i, \tag{5.25}$$

where P_i is the column vector of dimension n whose i'th element equals 1 and whose remaining elements equal 0, and

$$X_i = (f_{4i}, f_{4i+1}, f_{4i+2}, f_{4i+3})^T, \quad i = 0, 1, \ldots, n-1. \tag{5.26}$$

Now, using Eq. (5.18), we have

$$[BW]_{4n} f = \left(\sum_{i=0}^{n-1} U^i \otimes Q_i\right)\left(\sum_{j=0}^{n-1} P_j \otimes X_j\right) = \sum_{i,j=0}^{n-1} U^i P_j \otimes Q_i X_j. \tag{5.27}$$

We can verify that U^i P_j = P_{n-i+j}, j = 0, 1, \ldots, n-1, i = j+1, \ldots, n-1. Hence, Eq. (5.27) can be presented as

$$[BW]_{4n} f = \sum_{i,j=0}^{n-1} U^i P_j \otimes Q_i X_j = \sum_{j=0}^{n-1} B_j, \tag{5.28}$$

where B_j = \sum_{i=0}^{n-1} U^i P_j \otimes Q_i X_j.

From Eq. (5.28), we see that in order to perform the fast Williamson–Hadamard transform, we need to calculate the spectral coefficients of block transforms of the form Y_i = Q_i X. Here, Q_i, i = 0, 1, 2, 3, 4, have the form of Eq. (5.22), and

$$X = (x_0, x_1, x_2, x_3)^T, \quad Y = (y_0, y_1, y_2, y_3)^T \tag{5.29}$$

are the input and output column vectors, respectively.

Algorithm 5.3.1: Joint computation of five four-point Williamson–Hadamard transforms.

Input: Signal column vector X = (x_0, x_1, x_2, x_3)^T.

Step 1. Compute a = x_0 + x_1, b = x_2 + x_3, c = x_0 − x_1, d = x_2 − x_3.

Step 2. Compute the transforms Y_i = Q_i X, i = 0, 1, 2, 3, 4 [where Q_i has the form of Eq. (5.22)] as

$$\begin{aligned}
&y_0^{(0)} = a + b, \quad y_1^{(0)} = -c - d, \quad y_2^{(0)} = -c + d, \quad y_3^{(0)} = -a + b, \\
&y_0^{(1)} = a + d, \quad y_1^{(1)} = -c + b, \quad y_2^{(1)} = -a + d, \quad y_3^{(1)} = c + b, \\
&y_0^{(2)} = -y_2^{(1)}, \quad y_1^{(2)} = -y_3^{(1)}, \quad y_2^{(2)} = y_0^{(1)}, \quad y_3^{(2)} = y_1^{(1)}, \\
&y_0^{(3)} = y_3^{(1)}, \quad y_1^{(3)} = -y_2^{(1)}, \quad y_2^{(3)} = y_1^{(1)}, \quad y_3^{(3)} = -y_0^{(1)}, \\
&y_0^{(4)} = -y_1^{(1)}, \quad y_1^{(4)} = y_0^{(1)}, \quad y_2^{(4)} = y_3^{(1)}, \quad y_3^{(4)} = -y_2^{(1)}.
\end{aligned} \tag{5.30}$$

Output: The transform (spectral) coefficients

$$Y_i = \left(y_0^{(i)}, y_1^{(i)}, y_2^{(i)}, y_3^{(i)}\right), \quad i = 0, 1, 2, 3, 4. \tag{5.31}$$
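A direct transcription of Eq. (5.30) makes the 12-operation count explicit. The following sketch (our own code and names; NumPy is used only for the check) computes all five four-point transforms jointly and compares them against the direct products Q_i X with the blocks of Eq. (5.22):

```python
import numpy as np

def joint_q_transforms(x):
    # Algorithm 5.3.1 / Eq. (5.30): 12 additions/subtractions in total
    x0, x1, x2, x3 = x
    a, b, c, d = x0 + x1, x2 + x3, x0 - x1, x2 - x3   # 4 operations
    y0 = (a + b, -c - d, -c + d, -a + b)              # 4 operations: Q_0 X
    y1 = (a + d, -c + b, -a + d, c + b)               # 4 operations: Q_1 X
    y2 = (-y1[2], -y1[3], y1[0], y1[1])               # Q_2 X: signs/permutations only
    y3 = (y1[3], -y1[2], y1[1], -y1[0])               # Q_3 X
    y4 = (-y1[1], y1[0], y1[3], -y1[2])               # Q_4 X
    return [np.array(t) for t in (y0, y1, y2, y3, y4)]

def Q(a, b, c, d):  # block of Eq. (5.19)
    return np.array([[a, b, c, d], [-b, a, -d, c], [-c, d, a, -b], [-d, -c, b, a]])

# Check against the direct products with the five blocks of Eq. (5.22)
x = np.array([1.0, 2.0, 3.0, 4.0])
Qs = [Q(1, 1, 1, 1), Q(1, 1, 1, -1), Q(1, 1, -1, 1), Q(1, -1, 1, 1), Q(1, -1, -1, -1)]
for Yi, Qi in zip(joint_q_transforms(x), Qs):
    assert np.allclose(Yi, Qi @ x)
```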

The flow graph for the joint computation of Q_i X, i = 0, 1, 2, 3, 4, is given in Fig. 5.1. From Eqs. (5.30) and (5.31), we can see that the joint computation of the four-point transforms Q_i X, i = 0, 1, 2, 3, 4, requires only 12 addition/subtraction operations, whereas their separate calculation requires 40 addition/subtraction operations. Indeed, from Fig. 5.1 we can check that the transform Q_0 X requires eight addition/subtraction operations, the transform Q_1 X requires four more, and the remaining transforms are obtained by sign changes and permutations only. Now we present a detailed description of the 36-point block Williamson–Hadamard fast transform algorithm.

Example 5.3.1: The 36-point Williamson–Hadamard transform can be calculated using 396 operations.

Figure 5.1 Flow graph for the joint Q_i X transforms, i = 0, 1, 2, 3, 4.

Input: Column vector F_{36} = (f_i)_{i=0}^{35} and blocks Q_0, Q_1, and Q_2:

$$Q_0 = \begin{pmatrix} + & + & + & + \\ - & + & - & + \\ - & + & + & - \\ - & - & + & + \end{pmatrix}, \quad
Q_1 = \begin{pmatrix} + & + & + & - \\ - & + & + & + \\ - & - & + & - \\ + & - & + & + \end{pmatrix}, \quad
Q_2 = \begin{pmatrix} + & + & - & + \\ - & + & - & - \\ + & + & + & - \\ - & + & + & + \end{pmatrix}. \tag{5.32}$$

Step 1. Split vector F_{36} into nine parts as F_{36} = (X_0, X_1, \ldots, X_8)^T, where

$$X_i^T = (f_{4i}, f_{4i+1}, f_{4i+2}, f_{4i+3}), \quad i = 0, 1, \ldots, 8. \tag{5.33}$$

Step 2. Compute the vectors Y_i, i = 0, 1, \ldots, 8, as shown in Fig. 5.2. Note that the sub-blocks A(Q_0, Q_1, Q_2) in Fig. 5.2 can be computed using Algorithm 5.3.1 (see Fig. 5.1).

Step 3. Evaluate the vector Y = Y_0 + Y_1 + \cdots + Y_8.

Output: 36-point Williamson–Hadamard transform coefficients, i.e., vector Y.


From Eqs. (5.30) and (5.31), it follows that the joint computation of the transforms Q_0 X_i, Q_1 X_i, and Q_2 X_i requires only 12 addition/subtraction operations. From Eq. (5.22), we can see that only these transforms are present in each vector Y_i. Hence, for all nine of these vectors, it is necessary to perform 108 operations. Finally, the 36-point HT requires only 396 addition/subtraction operations, whereas direct computation requires 1260 addition/subtraction operations.

Note that we have developed a fast Williamson–Hadamard transform algorithm without knowing the explicit form of any Williamson–Hadamard matrices; the algorithm can be made more efficient if a construction of these matrices is used.

The first block rows of the block-cyclic, block-symmetric (BCBS) Hadamard matrices of Williamson type of order 4n, n = 3, 5, \ldots, 25,^{13,15} with marked cyclic congruent circuits (CCCs), are given in the Appendix.
In addition, we describe the add/shift architecture for the Williamson–Hadamard transform. Denoting z_1 = x_1 + x_2 + x_3 and z_2 = z_1 − x_0, and using Eq. (5.22), we can calculate Y_i = Q_i X as follows:

$$\begin{aligned}
&y_0^{(0)} = z_1 + x_0, \quad y_1^{(0)} = z_2 - 2x_2, \quad y_2^{(0)} = z_2 - 2x_3, \quad y_3^{(0)} = z_2 - 2x_1; \\
&y_0^{(1)} = y_0^{(0)} - 2x_3, \quad y_1^{(1)} = z_2, \quad y_2^{(1)} = y_2^{(0)} - 2x_1, \quad y_3^{(1)} = y_0^{(0)} - 2x_1; \\
&y_0^{(2)} = -y_2^{(1)}, \quad y_1^{(2)} = -y_3^{(1)}, \quad y_2^{(2)} = y_0^{(1)}, \quad y_3^{(2)} = y_1^{(1)}; \\
&y_0^{(3)} = y_3^{(1)}, \quad y_1^{(3)} = -y_2^{(1)}, \quad y_2^{(3)} = y_1^{(1)}, \quad y_3^{(3)} = -y_0^{(1)}; \\
&y_0^{(4)} = -y_1^{(1)}, \quad y_1^{(4)} = y_0^{(1)}, \quad y_2^{(4)} = y_3^{(1)}, \quad y_3^{(4)} = -y_2^{(1)}.
\end{aligned} \tag{5.34}$$

It is easy to check that the joint four-point transform computation requires fewer operations than separate computation. The separate computations of the transforms Q_0 X and Q_1 X require 14 addition/subtraction operations and six one-bit shifts; their joint computation needs only 10 addition/subtraction operations and three one-bit shifts. Using this fact, the complexity of the fast Williamson–Hadamard transform can be reduced; it is discussed next.
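For integer inputs, Eq. (5.34) can be coded with explicit one-bit shifts. A minimal sketch (our own; `joint_q_transforms_shift` is an illustrative name, and the multiplications by 2 are realized as left shifts):

```python
def joint_q_transforms_shift(x):
    # Eq. (5.34): Y_0 and Y_1 via 10 additions/subtractions and 3 one-bit
    # shifts (integer inputs assumed); Y_2, Y_3, Y_4 then cost only signs
    x0, x1, x2, x3 = x
    z1 = x1 + x2 + x3                        # 2 additions
    z2 = z1 - x0                             # 1 subtraction
    t1, t2, t3 = x1 << 1, x2 << 1, x3 << 1   # 3 one-bit shifts (2 x_i)
    y0 = (z1 + x0, z2 - t2, z2 - t3, z2 - t1)      # 4 operations: Q_0 X
    y1 = (y0[0] - t3, z2, y0[2] - t1, y0[0] - t1)  # 3 operations: Q_1 X
    y2 = (-y1[2], -y1[3], y1[0], y1[1])
    y3 = (y1[3], -y1[2], y1[1], -y1[0])
    y4 = (-y1[1], y1[0], y1[3], -y1[2])
    return y0, y1, y2, y3, y4
```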

Figure 5.2 Flow graphs for the computation of the 36-dimensional vector components Y_i, i = 0, 1, \ldots, 8.

5.4 Multiplicative-Theorem-Based Williamson–Hadamard Matrices

In this section, we describe Williamson–Hadamard matrix construction based on the following multiplicative theorems:

Theorem 5.4.1: (Agaian–Sarukhanyan multiplicative theorem^{16}) Let there be Williamson–Hadamard matrices of orders 4m and 4n. Then, Williamson–Hadamard matrices of order 4(2m)^i n, i = 1, 2, \ldots, exist.
Theorem 5.4.2: Let there be Williamson matrices of order n and a Hadamard matrix of order 4m. Then, a Hadamard matrix of order 8mn exists.

Algorithm 5.4.1: Generation of Williamson–Hadamard matrices of order 4(2m)^i n from Williamson matrices of orders m and n.

Input: Williamson matrices A, B, C, D and A_0, B_0, C_0, D_0 of orders m and n, respectively.

Step 1. Construct matrices X and Y as follows:

$$X = \frac{1}{2}\begin{pmatrix} A+B & C+D \\ C+D & -A-B \end{pmatrix}, \quad Y = \frac{1}{2}\begin{pmatrix} A-B & C-D \\ -C+D & A-B \end{pmatrix}. \tag{5.35}$$

Step 2. For i = 1, 2, \ldots, k, recursively construct the following matrices:

$$\begin{aligned}
A_i &= A_{i-1} \otimes X + B_{i-1} \otimes Y, \quad B_i = B_{i-1} \otimes X - A_{i-1} \otimes Y, \\
C_i &= C_{i-1} \otimes X + D_{i-1} \otimes Y, \quad D_i = D_{i-1} \otimes X - C_{i-1} \otimes Y.
\end{aligned} \tag{5.36}$$

Step 3. For i = 1, 2, \ldots, k, construct the Williamson–Hadamard matrix as follows:

$$[WH]_i = \begin{pmatrix} A_i & B_i & C_i & D_i \\ -B_i & A_i & -D_i & C_i \\ -C_i & D_i & A_i & -B_i \\ -D_i & -C_i & B_i & A_i \end{pmatrix}. \tag{5.37}$$

Output: Williamson–Hadamard matrices [WH]_i of order 4(2m)^i n, i = 1, 2, \ldots, k.
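A sketch of Algorithm 5.4.1 in NumPy (our own helper names `cyc`, `xy_pair`, and `step`) follows. It runs one recursion of Eq. (5.36) for the order-3 Williamson matrices, producing the order-6 matrices of Eq. (5.41), and checks the Williamson condition of Eq. (5.1):

```python
import numpy as np

cyc = lambda r: np.array([np.roll(r, i) for i in range(len(r))])

def xy_pair(A, B, C, D):
    # X and Y of Eq. (5.35); integer division is exact here
    X = np.block([[A + B, C + D], [C + D, -(A + B)]]) // 2
    Y = np.block([[A - B, C - D], [-(C - D), A - B]]) // 2
    return X, Y

def step(A0, B0, C0, D0, X, Y):
    # one recursion of Eq. (5.36)
    return (np.kron(A0, X) + np.kron(B0, Y), np.kron(B0, X) - np.kron(A0, Y),
            np.kron(C0, X) + np.kron(D0, Y), np.kron(D0, X) - np.kron(C0, Y))

# Williamson matrices of order 3; X, Y are then the matrices of Eq. (5.38)
A = cyc([1, 1, 1]); B = C = D = cyc([1, -1, -1])
X, Y = xy_pair(A, B, C, D)
one = np.eye(1, dtype=int)
A1, B1, C1, D1 = step(one, one, one, one, X, Y)   # order-6 matrices, Eq. (5.41)
S = A1 @ A1.T + B1 @ B1.T + C1 @ C1.T + D1 @ D1.T
assert (S == 4 * 6 * np.eye(6)).all()             # Williamson condition, Eq. (5.1)
```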

Example 5.4.1: Construction of Williamson matrices. Using Williamson matrices of orders 3 and 5 in Algorithm 5.1.1 and Eq. (5.35), we obtain the following. For n = 3,

$$X = \begin{pmatrix}
+ & 0 & 0 & + & - & - \\
0 & + & 0 & - & + & - \\
0 & 0 & + & - & - & + \\
+ & - & - & - & 0 & 0 \\
- & + & - & 0 & - & 0 \\
- & - & + & 0 & 0 & -
\end{pmatrix}, \quad
Y = \begin{pmatrix}
0 & + & + & 0 & 0 & 0 \\
+ & 0 & + & 0 & 0 & 0 \\
+ & + & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & + & + \\
0 & 0 & 0 & + & 0 & + \\
0 & 0 & 0 & + & + & 0
\end{pmatrix}. \tag{5.38}$$

For n = 5,

$$X = \begin{pmatrix}
+ & - & - & - & - & + & 0 & 0 & 0 & 0 \\
- & + & - & - & - & 0 & + & 0 & 0 & 0 \\
- & - & + & - & - & 0 & 0 & + & 0 & 0 \\
- & - & - & + & - & 0 & 0 & 0 & + & 0 \\
- & - & - & - & + & 0 & 0 & 0 & 0 & + \\
+ & 0 & 0 & 0 & 0 & - & + & + & + & + \\
0 & + & 0 & 0 & 0 & + & - & + & + & + \\
0 & 0 & + & 0 & 0 & + & + & - & + & + \\
0 & 0 & 0 & + & 0 & + & + & + & - & + \\
0 & 0 & 0 & 0 & + & + & + & + & + & -
\end{pmatrix},$$
$$Y = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & + & - & - & + \\
0 & 0 & 0 & 0 & 0 & + & 0 & + & - & - \\
0 & 0 & 0 & 0 & 0 & - & + & 0 & + & - \\
0 & 0 & 0 & 0 & 0 & - & - & + & 0 & + \\
0 & 0 & 0 & 0 & 0 & + & - & - & + & 0 \\
0 & - & + & + & - & 0 & 0 & 0 & 0 & 0 \\
- & 0 & - & + & + & 0 & 0 & 0 & 0 & 0 \\
+ & - & 0 & - & + & 0 & 0 & 0 & 0 & 0 \\
+ & + & - & 0 & - & 0 & 0 & 0 & 0 & 0 \\
- & + & + & - & 0 & 0 & 0 & 0 & 0 & 0
\end{pmatrix}. \tag{5.39}$$

Let A_0 = B_0 = C_0 = D_0 = (1), A = (1, 1, 1), and B = C = D = (1, −1, −1). Then, from Eq. (5.36), we obtain Williamson matrices of order 6, i.e.,

$$A_1 = A_3 = \begin{pmatrix} A & C \\ D & -B \end{pmatrix}, \quad A_2 = A_4 = \begin{pmatrix} B & D \\ C & -A \end{pmatrix}, \tag{5.40}$$

or

$$A_1 = A_3 = \begin{pmatrix}
+ & + & + & + & - & - \\
+ & + & + & - & + & - \\
+ & + & + & - & - & + \\
+ & - & - & - & + & + \\
- & + & - & + & - & + \\
- & - & + & + & + & -
\end{pmatrix}, \quad
A_2 = A_4 = \begin{pmatrix}
+ & - & - & + & - & - \\
- & + & - & - & + & - \\
- & - & + & - & - & + \\
+ & - & - & - & - & - \\
- & + & - & - & - & - \\
- & - & + & - & - & -
\end{pmatrix}. \tag{5.41}$$

Let A_0 = B_0 = C_0 = D_0 = (1) and let A = B = (1, −1, −1, −1, −1), C = (1, 1, −1, −1, 1), D = (1, −1, 1, 1, −1) be cyclic symmetric matrices of orders 1 and 5, respectively. Then, from Eq. (5.36), we obtain Williamson matrices of order 10, i.e.,

$$A_1 = A_3 = \begin{pmatrix}
+ & - & - & - & - & + & + & - & - & + \\
- & + & - & - & - & + & + & + & - & - \\
- & - & + & - & - & - & + & + & + & - \\
- & - & - & + & - & - & - & + & + & + \\
- & - & - & - & + & + & - & - & + & + \\
+ & - & + & + & - & - & + & + & + & + \\
- & + & - & + & + & + & - & + & + & + \\
+ & - & + & - & + & + & + & - & + & + \\
+ & + & - & + & - & + & + & + & - & + \\
- & + & + & - & + & + & + & + & + & -
\end{pmatrix},$$
$$A_2 = A_4 = \begin{pmatrix}
+ & - & - & - & - & + & - & + & + & - \\
- & + & - & - & - & - & + & - & + & + \\
- & - & + & - & - & + & - & + & - & + \\
- & - & - & + & - & + & + & - & + & - \\
- & - & - & - & + & - & + & + & - & + \\
+ & + & - & - & + & - & + & + & + & + \\
+ & + & + & - & - & + & - & + & + & + \\
- & + & + & + & - & + & + & - & + & + \\
- & - & + & + & + & + & + & + & - & + \\
+ & - & - & + & + & + & + & + & + & -
\end{pmatrix}. \tag{5.42}$$

Now the Williamson–Hadamard matrix of order 40 can be synthesized as

$$[WH]_{40} = \begin{pmatrix}
A_1 & A_2 & A_1 & A_2 \\
-A_2 & A_1 & -A_2 & A_1 \\
-A_1 & A_2 & A_1 & -A_2 \\
-A_2 & -A_1 & A_2 & A_1
\end{pmatrix}. \tag{5.43}$$

5.5 Multiplicative-Theorem-Based Fast Williamson–Hadamard Transforms

In this section, we present fast transform algorithms based on Theorems 5.4.1 and 5.4.2. First, we present an algorithm for generation of a Hadamard matrix based on Theorem 5.4.2.

Algorithm 5.5.1: Generation of a Hadamard matrix via Theorem 5.4.2.

Input: Williamson matrices A, B, C, D of order n and Hadamard matrix H_1 of order 4m.

Step 1. Construct the matrices X and Y according to Eq. (5.35).

Step 2. Construct a Hadamard matrix as

$$P = X \otimes H_1 + Y \otimes S_{4m} H_1, \tag{5.44}$$

where S_{4m} is a monomial matrix satisfying the conditions

$$S_{4m}^T = -S_{4m}, \quad S_{4m}^T S_{4m} = I_{4m}. \tag{5.45}$$
Output: Hadamard matrix P of order 8mn.

An example of a monomial matrix of order 8 is given below:

$$S_8 = \begin{pmatrix}
0 & + & 0 & 0 & 0 & 0 & 0 & 0 \\
- & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & + & 0 & 0 & 0 & 0 \\
0 & 0 & - & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & 0 & 0 & - & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & + \\
0 & 0 & 0 & 0 & 0 & 0 & - & 0
\end{pmatrix}. \tag{5.46}$$
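The construction of Eq. (5.44) is straightforward to check numerically. In the sketch below (our own code and names), S is built as block-diagonal copies of [[0, 1], [−1, 0]], exactly as in Eq. (5.46), and P is verified to be a Hadamard matrix of order 8mn = 24 for n = 3, m = 1:

```python
import numpy as np

cyc = lambda r: np.array([np.roll(r, i) for i in range(len(r))])

def monomial_s(order):
    # Monomial matrix satisfying Eq. (5.45): S^T = -S, S^T S = I,
    # block-diagonal copies of [[0, 1], [-1, 0]] as in Eq. (5.46)
    return np.kron(np.eye(order // 2, dtype=int), np.array([[0, 1], [-1, 0]]))

def hadamard_from_williamson(X, Y, H1):
    # Algorithm 5.5.1, Eq. (5.44): P = X (x) H1 + Y (x) S_{4m} H1
    S = monomial_s(H1.shape[0])
    return np.kron(X, H1) + np.kron(Y, S @ H1)

# X, Y of Eq. (5.38) (n = 3) and the order-4 Hadamard matrix H1 (m = 1)
A, B = cyc([1, 1, 1]), cyc([1, -1, -1])
X = np.block([[(A + B) // 2, B], [B, -(A + B) // 2]])
Y = np.block([[(A - B) // 2, 0 * B], [0 * B, (A - B) // 2]])
H4 = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
P = hadamard_from_williamson(X, Y, H4)
assert (P @ P.T == 24 * np.eye(24)).all()   # Hadamard matrix of order 8mn = 24
```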

Algorithm 5.5.2: Fast transform with the matrix in Eq. (5.44).

Input: Column vector F^T = (f_1, f_2, \ldots, f_{8mn}) and the Hadamard matrix P of Eq. (5.44).

Step 1. Factor P as

$$P = (X \otimes I_{4m} + Y \otimes S_{4m})(I_{2n} \otimes H_1). \tag{5.47}$$

Step 2. Split vector F as F = (F_1, F_2, \ldots, F_{2n}), where

$$F_j = \left(f_{4m(j-1)+1}, f_{4m(j-1)+2}, \ldots, f_{4m(j-1)+4m}\right). \tag{5.48}$$

Step 3. Compute the transforms Q_i = H_1 F_i, i = 1, 2, \ldots, 2n.

Step 4. Split the vector Q = (Q_1, Q_2, \ldots, Q_{2n}) into 4m vectors of dimension 2n as

$$Q = (P_1, P_2, \ldots, P_{4m}), \tag{5.49}$$

where P_j = (q_{2n(j-1)+1}, q_{2n(j-1)+2}, \ldots, q_{2n(j-1)+2n}).

Step 5. Compute the transforms XP_j and Y P_j.

Output: Transform coefficients.
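The point of Algorithm 5.5.2 is that P f can be evaluated through the sparse factors of Eq. (5.47) rather than through a dense matrix–vector product. A compact NumPy sketch (our own; it uses the row-major identity (A ⊗ B) vec(G) = vec(A G B^T), and the test data are arbitrary):

```python
import numpy as np

cyc = lambda r: np.array([np.roll(r, i) for i in range(len(r))])

def fast_p_transform(X, Y, H1, f):
    # Algorithm 5.5.2: evaluate P f through the factorization of Eq. (5.47),
    # using the row-major identity (A (x) B) vec(G) = vec(A G B^T)
    n2, m4 = X.shape[0], H1.shape[0]
    S = np.kron(np.eye(m4 // 2, dtype=int), np.array([[0, 1], [-1, 0]]))
    G = f.reshape(n2, m4) @ H1.T          # Step 3: row j of G is H1 F_j
    return (X @ G + Y @ G @ S.T).ravel()  # Steps 4-5: apply (X (x) I + Y (x) S)

# Consistency check against the dense matrix P of Eq. (5.44) for n = 3, m = 1
A, B = cyc([1, 1, 1]), cyc([1, -1, -1])
X = np.block([[(A + B) // 2, B], [B, -(A + B) // 2]])
Y = np.block([[(A - B) // 2, 0 * B], [0 * B, (A - B) // 2]])
H4 = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
S4 = np.kron(np.eye(2, dtype=int), np.array([[0, 1], [-1, 0]]))
P = np.kron(X, H4) + np.kron(Y, S4 @ H4)
f = np.arange(24.0)
assert np.allclose(fast_p_transform(X, Y, H4, f), P @ f)
```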
Let us present an example of the computation of the transforms XF and Y F, F = (f_1, f_2, \ldots, f_6), where A, B, C, D are the Williamson matrices of order 3 and X, Y are the matrices of Eq. (5.38). First, we compute

$$XF = \begin{pmatrix}
f_1 + f_4 - (f_5 + f_6) \\
f_2 + f_5 - (f_4 + f_6) \\
f_3 + f_6 - (f_4 + f_5) \\
f_1 - f_4 - (f_2 + f_3) \\
f_2 - f_5 - (f_1 + f_3) \\
f_3 - f_6 - (f_1 + f_2)
\end{pmatrix}, \quad
YF = \begin{pmatrix}
f_2 + f_3 \\
f_1 + f_3 \\
f_1 + f_2 \\
f_5 + f_6 \\
f_4 + f_6 \\
f_4 + f_5
\end{pmatrix}. \tag{5.50}$$


Figure 5.3 Flow graph for the joint computation of XF and Y F transforms.

From Eq. (5.50), it follows that the joint computation of XF and Y F requires only 18 additions/subtractions (see Fig. 5.3). Then, from Eq. (5.47), we can conclude that the complexity of the P F transform algorithm is

$$C(24m) = 48m(2m + 1). \tag{5.51}$$

Note that if X, Y are matrices of order k defined by Eq. (5.35), H_m is a Hadamard matrix of order m, and S_m is a monomial matrix of order m, then, for any integer n,

$$H_{mk^n} = X \otimes H_{mk^{n-1}} + Y \otimes S_{mk^{n-1}} H_{mk^{n-1}} \tag{5.52}$$

is a Hadamard matrix of order mk^n.


Remark 5.5.1: For A = B = C = D = (1), Eq. (5.35) gives

$$X = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}, \tag{5.53}$$

and if H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, then the matrix in Eq. (5.52) is the Walsh–Hadamard matrix of order 2^{n+1}.

Algorithm 5.5.3: Construction of Hadamard matrices of order m(2n)^k.

Input: Williamson matrices A, B, C, D of order n and a Hadamard matrix H_m of order m.

Step 1. Construct matrices X and Y according to Eq. (5.35); set i = 1.

Step 2. Construct the matrix H_{2mn} = X \otimes H_m + Y \otimes S_m H_m.

Step 3. If i < k, set i \leftarrow i + 1, replace H_m and S_m by the newly constructed matrix H_{m(2n)^i} and a monomial matrix S_{m(2n)^i} of matching order, and go to Step 2.

Output: Hadamard matrix H_{m(2n)^k} of order m(2n)^k.
Let us represent the matrix H_{mk^n} as a product of sparse matrices:

$$H_{mk^n} = \left(X \otimes I_{mk^{n-1}} + Y \otimes S_{mk^{n-1}}\right)\left(I_k \otimes H_{mk^{n-1}}\right) = A_1\left(I_k \otimes H_{mk^{n-1}}\right). \tag{5.54}$$

Continuing this factorization process for all matrices H_{mk^{n-i}}, i = 1, 2, \ldots, n, we obtain

$$H_{mk^n} = A_1 A_2 \cdots A_n \left(I_{k^n} \otimes H_m\right), \tag{5.55}$$

where A_i = I_{k^{i-1}} \otimes \left(X \otimes I_{mk^{n-i}} + Y \otimes S_{mk^{n-i}}\right), i = 1, 2, \ldots, n.

Example 5.5.1: Let H_m be a Hadamard matrix of order m, let X and Y be as in Eq. (5.38), and let F = (f_i)_{i=1}^{6m} be an input vector. Then we have a Hadamard matrix of order 6m of the form H_{6m} = X \otimes H_m + Y \otimes S_m H_m. As in Eq. (5.55), we have H_{6m} = A_1(I_6 \otimes H_m), where A_1 = X \otimes I_m + Y \otimes S_m, and

$$X \otimes I_m = \begin{pmatrix}
I_m & O_m & O_m & I_m & -I_m & -I_m \\
O_m & I_m & O_m & -I_m & I_m & -I_m \\
O_m & O_m & I_m & -I_m & -I_m & I_m \\
I_m & -I_m & -I_m & -I_m & O_m & O_m \\
-I_m & I_m & -I_m & O_m & -I_m & O_m \\
-I_m & -I_m & I_m & O_m & O_m & -I_m
\end{pmatrix},$$
$$Y \otimes S_m = \begin{pmatrix}
O_m & S_m & S_m & O_m & O_m & O_m \\
S_m & O_m & S_m & O_m & O_m & O_m \\
S_m & S_m & O_m & O_m & O_m & O_m \\
O_m & O_m & O_m & O_m & S_m & S_m \\
O_m & O_m & O_m & S_m & O_m & S_m \\
O_m & O_m & O_m & S_m & S_m & O_m
\end{pmatrix}. \tag{5.56}$$

The input column vector is represented as F = (F_1, F_2, \ldots, F_6), where F_i is an m-dimensional vector. Now we estimate the complexity of the transform

$$H_{6m} F = A_1 (I_6 \otimes H_m) F = A_1\, \mathrm{diag}\{H_m F_1, H_m F_2, \ldots, H_m F_6\}. \tag{5.57}$$

Denote T = (I_6 \otimes H_m)F. Computing A_1 T with T = (T_1, T_2, \ldots, T_6), from Eq. (5.56) we obtain

$$(X \otimes I_m)T = \begin{pmatrix}
T_1 + T_4 - (T_5 + T_6) \\
T_2 + T_5 - (T_4 + T_6) \\
T_3 + T_6 - (T_4 + T_5) \\
T_1 - T_4 - (T_2 + T_3) \\
T_2 - T_5 - (T_1 + T_3) \\
T_3 - T_6 - (T_1 + T_2)
\end{pmatrix}, \quad
(Y \otimes S_m)T = \begin{pmatrix}
S_m(T_2 + T_3) \\
S_m(T_1 + T_3) \\
S_m(T_1 + T_2) \\
S_m(T_5 + T_6) \\
S_m(T_4 + T_6) \\
S_m(T_4 + T_5)
\end{pmatrix}. \tag{5.58}$$

From Eqs. (5.57) and (5.58), it follows that the computational complexity of the transform H_{6m}F is C(H_{6m}) = 24m + 6C(H_m), where C(H_m) is the complexity of an m-point HT.

5.6 Complexity and Comparison

5.6.1 Complexity of the block-cyclic, block-symmetric Williamson–Hadamard transform

Because every block row of the block-cyclic, block-symmetric Hadamard matrix contains the block Q_0, and the other blocks are from the set {Q_1, Q_2, Q_3, Q_4} (see Appendix A.3), it is not difficult to find that the complexity of the block Williamson–Hadamard transform of order 4n is given by

$$C(H_{4n}) = 4n(n + 2). \tag{5.59}$$

From the representation of a block Williamson–Hadamard matrix (see the Appendix), we can see that some block pairs are repeated. Two block sequences of length k (k < n) in the first block row of the block Williamson–Hadamard matrix of order 4n,

$$\{(-1)^{p_1} Q_i, (-1)^{p_2} Q_i, \ldots, (-1)^{p_k} Q_i\} \quad \text{and} \quad \{(-1)^{q_1} Q_j, (-1)^{q_2} Q_j, \ldots, (-1)^{q_k} Q_j\}, \tag{5.60}$$

where p_t, q_t \in \{0, 1\} and, for all t = 1, 2, \ldots, k, either p_t = q_t or q_t = \bar{p}_t (\bar{1} = 0, \bar{0} = 1), are called cyclic congruent circuits if

$$\mathrm{dist}\left[(-1)^{p_t} Q_i, (-1)^{p_{t+1}} Q_i\right] = \mathrm{dist}\left[(-1)^{q_t} Q_j, (-1)^{q_{t+1}} Q_j\right] \tag{5.61}$$

for all t = 1, 2, \ldots, k − 1, where \mathrm{dist}[A_i, A_j] = j − i for A = (A_i)_{i=1}^{m}. For example, in the first block row of the block-cyclic, block-symmetric Hadamard matrix of order 36,

$$Q_0, \; Q_1, \; -Q_2, \; Q_1, \; -Q_1; \; -Q_1, \; Q_1, \; -Q_2, \; Q_1, \tag{5.62}$$

there are three cyclic congruent circuits of length 2.

Table 5.1 Values of the parameters n, m, t_m, N_{m,j} and the complexity of the Williamson-type HT of order 4n.

n     4n     m    t_m    N_{m,j}      C_r(H_{4n})    Direct comp.
3     12     0    0      0            60             132
5     20     0    0      0            140            380
7     28     2    1      2            224            756
9     36     2    1      3            324            1260
11    44     2    1      2            528            1892
13    52     3    1      2            676            2652
15    60     2    3      3, 2, 2      780            3540
17    68     2    2      2, 3         1088           4556
19    76     2    3      2, 4, 3      1140           5700
21    84     2    3      2, 2, 5      1428           6972
23    92     2    3      4, 2, 2      1840           8372
25    100    2    3      2, 7, 2      1900           9900

With this observation, one can save several operations in summing the vectors Y_i (see Step 3 of the above example and its corresponding flow graphs). Let m be the maximal length of the cyclic congruent circuits of the first block row of the block-cyclic, block-symmetric Hadamard matrix of order 4n, t_i be the number of distinct cyclic congruent circuits of length i, and N_{i,j} be the number of cyclic congruent circuits of type j and length i. Then, the complexity of the HT of order 4n takes the form

$$C_r(H_{4n}) = 4n\left[n + 2 - \sum_{i=2}^{m}\sum_{j=1}^{t_i}(N_{i,j} - 1)(i - 1)\right]. \tag{5.63}$$

The values of the parameters n, m, t_m, N_{m,j} and the complexity of the Williamson-type HT of order 4n are given in Table 5.1. The complexity of the block Williamson–Hadamard transform on the add/shift architecture can be calculated from

$$C^{\pm} = 2n(2n + 3), \qquad C^{sh} = 3n, \tag{5.64}$$

where C^{\pm} is the number of additions/subtractions, and C^{sh} is the number of shifts. Now, using repetitions of additions of the vectors Y_i and the same notation as in Eq. (5.63), the complexity of the Williamson–Hadamard transform can be presented as

$$C_r^{\pm} = 2n\left[2n + 3 - 2\sum_{i=2}^{m}\sum_{j=1}^{t_i}(N_{i,j} - 1)(i - 1)\right], \qquad C^{sh} = 3n. \tag{5.65}$$

Formulas for the complexities of the fast Williamson–Hadamard transforms without repetitions of blocks, and with repetitions and shifts, are collected in Eq. (5.66); the corresponding numerical results are given in Table 5.2:

$$\begin{aligned}
C &= 4n(n + 2), \\
C_r &= 4n\left[n + 2 - \sum_{i=2}^{m}\sum_{j=1}^{t_i}(N_{i,j} - 1)(i - 1)\right], \\
C^{\pm} &= 2n(2n + 3), \qquad C^{sh} = 3n, \\
C_r^{\pm} &= 2n\left[2n + 3 - 2\sum_{i=2}^{m}\sum_{j=1}^{t_i}(N_{i,j} - 1)(i - 1)\right], \qquad C^{sh} = 3n.
\end{aligned} \tag{5.66}$$

Table 5.2 Complexities of Williamson–Hadamard transforms without repetitions of blocks, and with repetitions and shifts.

n     4n     C      C^±    C^sh    C_r     C_r^±    Direct comp.
3     12     60     54     9       60      54       132
5     20     140    130    15      140     130      380
7     28     252    238    21      224     210      756
9     36     396    378    27      324     306      1260
11    44     572    550    33      528     506      1892
13    52     780    754    39      676     650      2652
15    60     1020   990    45      780     750      3540
17    68     1292   1258   51      1088    1054     4556
19    76     1596   1558   57      1140    1102     5700
21    84     1932   1890   63      1428    1386     6972
23    92     2300   2254   69      1840    1794     8372
25    100    2700   2650   75      1900    1850     9900
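The closed-form entries of Table 5.2 are easy to regenerate. A small sketch (our own helper name `wh_complexity`) evaluates Eqs. (5.59) and (5.64) together with the direct cost 4n(4n − 1) of the matrix–vector product:

```python
def wh_complexity(n):
    # Complexities of the 4n-point block Williamson-Hadamard transform:
    # C of Eq. (5.59), the add/shift counts of Eq. (5.64), and the cost
    # of direct computation, as tabulated in Table 5.2
    N = 4 * n
    C = N * (n + 2)                            # Eq. (5.59), block algorithm
    C_pm, C_sh = 2 * n * (2 * n + 3), 3 * n    # Eq. (5.64), add/shift version
    direct = N * (N - 1)                       # direct matrix-vector product
    return C, C_pm, C_sh, direct

# n = 9 (the 36-point transform of Example 5.3.1): (396, 378, 27, 1260)
print(wh_complexity(9))
```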

5.6.2 Complexity of the HT from the multiplicative theorem

Recall that if X, Y are the matrices of order k from Eq. (5.35) and H_m is a Hadamard matrix of order m, then the recursively constructed Hadamard matrix

$$H_{mk^n} = X \otimes H_{mk^{n-1}} + Y \otimes S_{mk^{n-1}} H_{mk^{n-1}} \tag{5.67}$$

can be factorized as

$$H_{mk^n} = A_1 A_2 \cdots A_n (I_{k^n} \otimes H_m), \tag{5.68}$$

where

$$A_i = I_{k^{i-1}} \otimes \left(X \otimes I_{mk^{n-i}} + Y \otimes S_{mk^{n-i}}\right). \tag{5.69}$$

Let us now evaluate the complexity of the transform

$$H_{mk^n} F, \quad F^T = (f_1, f_2, \ldots, f_{mk^n}). \tag{5.70}$$

Table 5.3 Complexity of m-point HTs.

Transform H_m                                                   Complexity
H_m = X = H_2 (see Remark 5.5.1)                                (n + 1)2^{n+1}
Walsh–Hadamard (W–H)                                            (C_X + C_Y + k)mnk^{n-1} + mk^n log_2 m
BCBS W–H with block repetition                                  (C_X + C_Y + k)mnk^{n-1} + m[(m/4) + 2]k^n
BCBS W–H with block repetition and congruent circuits           (C_X + C_Y + k)mnk^{n-1} + mk^n[(m/4) + 2 − Σ_{i=2}^{r} Σ_{j=1}^{t_i}(N_{i,j} − 1)(i − 1)]
BCBS W–H with block repetition and shifts                       (C_X + C_Y + k)mnk^{n-1} + (m/2)[(m/2) + 3]k^n additions, (3m/4)k^n shifts
BCBS W–H with block repetition, congruent circuits, and shifts  (C_X + C_Y + k)mnk^{n-1} + k^n(m/2)[(m/2) + 3 − 2Σ_{i=2}^{r} Σ_{j=1}^{t_i}(N_{i,j} − 1)(i − 1)] additions, (3m/4)k^n shifts

First, we find the number of operations required for the transform

$$A_i P, \quad P = (p_1, p_2, \ldots, p_{mk^n}). \tag{5.71}$$

We represent P^T = (P_1, P_2, \ldots, P_{k^{i-1}}), where

$$P_j = \left(p_{(j-1)mk^{n-i+1}+t}\right)_{t=1}^{mk^{n-i+1}}, \quad j = 1, 2, \ldots, k^{i-1}. \tag{5.72}$$

Then, from Eq. (5.69), we have

$$A_i P = \mathrm{diag}\left\{\left(X \otimes I_{mk^{n-i}} + Y \otimes S_{mk^{n-i}}\right)P_1, \ldots, \left(X \otimes I_{mk^{n-i}} + Y \otimes S_{mk^{n-i}}\right)P_{k^{i-1}}\right\}. \tag{5.73}$$

We denote the complexities of the transforms XQ and Y Q by C_X and C_Y, respectively. We have

$$C_X < k(k - 1), \quad C_Y < k(k - 1). \tag{5.74}$$

From Eq. (5.73), we obtain the complexity of the transform \prod_{i=1}^{n} A_i P as (C_X + C_Y + k)mnk^{n-1}. Hence, the total complexity of the transform H_{mk^n} F is

$$C(H_{mk^n}) < (C_X + C_Y + k)mnk^{n-1} + k^n C(H_m), \tag{5.75}$$

where C(H_m) is the complexity of an m-point HT (see Table 5.3).

For given matrices X and Y, we can compute the exact values of C_X and C_Y, and therefore the exact complexity of the transform. For example, for k = 6, from Eq. (5.50) we see that C_X + C_Y = 18; hence, the 6^n m-point HT requires only 24 · 6^{n-1}mn + 6^n C(H_m) operations.

References
1. N. Ahmed and K. Rao, Orthogonal Transforms for Digital Signal Processing,
Springer-Verlag, New York (1975).


2. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices,


decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122
(2003) [doi:10.1117/12.473134].
3. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamard
transforms,” J. Multiple-Valued Logic Soft Comput. 10 (2), 173–187 (2004).
4. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem based
fast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002)
[doi:10.1117/12.467969].
5. H. Sarukhanyan, A. Anoyan, S. Agaian, K. Egiazarian, and J. Astola, “Fast Hadamard transforms,” in Proc. Int. TICSP Workshop on Spectral Methods and Multirate Signal Processing, SMMSP’2001, Pula, Croatia, Jun. 16–18, 33–40 (2001).
6. H. Sarukhanyan and A. Anoyan, “On fast Hadamard transform,” Math. Prob.
Comput. Sci. 21, 7–16 (2000).
7. S. Agaian, “Williamson family and Hadamard matrices,” in Proc. 5th All-Union Conf. on Problems of Cybernetics (1977) (in Russian).
8. S. Agaian and A. Matevosian, “Fast Hadamard transform,” Math. Prob.
Cybernet. Comput. Technol. 10, 73–90 (1982).
9. S. Agaian, “A unified construction method for fast orthogonal transforms,”
Prog. Cybernet. Syst. Res. 8, 301–307 (1982).
10. S. Agaian, “Advances and problems of the fast orthogonal transforms for signal-images processing applications (Part 1),” Pattern Recognition, Classification, Forecasting, Yearbook, 3, Russian Academy of Sciences, 146–215, Nauka, Moscow (1990) (in Russian).
11. S. Agaian, “Advances and problems of the fast orthogonal transforms for signal-images processing applications (Part 2),” Pattern Recognition, Classification, Forecasting, Yearbook, 4, Russian Academy of Sciences, 156–246, Nauka, Moscow (1991) (in Russian).
12. S. Agaian, “Optimal algorithms for fast orthogonal transforms and their
realization,” Cybernetics and Computer Technology, Yearbook, 2, 231–319,
Nauka, Moscow (1986).
13. S. S. Agaian, Hadamard Matrices and their Applications, Lecture Notes in
Mathematics, 1168, Springer-Verlag, New York (1985).
14. S. S. Agaian, “Construction of spatial block Hadamard matrices,” Math. Prob.
Cybernet. Comput. Technol. 12, 5–50 (1984) (in Russian).
15. H. Sarukhanyan, S. Agaian, K. Egiazarian, and J. Astola, “On fast Hadamard transforms of Williamson type,” in Proc. EUSIPCO-2000, Tampere, Finland, Sept. 4–8, 2, 1077–1080 (2000).
16. S. S. Agaian and H. G. Sarukhanian, “Recurrent formulae of the construction
Williamson type matrices,” Math. Notes 30 (4), 603–617 (1981).


17. G. R. Reddy and P. Satyanarayana, “Interpolation algorithm using Walsh–Hadamard and discrete Fourier/Hartley transforms,” in Proc. IEEE 33rd Midwest Symp. on Circuits and Systems 1, 545–547 (1991).
18. C.-F. Chan, “Efficient implementation of a class of isotropic quadratic filters by using Walsh–Hadamard transform,” in Proc. IEEE Int. Symp. on Circuits and Systems, June 9–12, Hong Kong, 2601–2604 (1997).
19. A. Chen, D. Li, and R. Zhou, “A research on fast Hadamard transform (FHT) digital systems,” in Proc. IEEE TENCON ’93, Beijing, 541–546 (1993).
20. H. G. Sarukhanyan, “Hadamard matrices: construction methods and applications,” in Proc. Workshop on Transforms and Filter Banks, Tampere, Finland, 95–130 (Feb. 21–27, 1998).
21. H. G. Sarukhanyan, “Decomposition of the Hadamard matrices and fast
Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture
Notes in Computer Science, 1296, 575–581 (1997).
22. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and Synthesis
with Applications and Signal/Image Processing, Kluwer Academic Publishers,
Boston (1996).
23. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Contemporary Design Theory, 431–554, John Wiley & Sons,
Hoboken, NJ (1992).
24. S. Samadi, Y. Suzukake, and H. Iwakura, “On automatic derivation of fast Hadamard transform using generic programming,” in Proc. 1998 IEEE Asia-Pacific Conf. on Circuits and Systems, Thailand, 327–330 (1998).
25. D. Coppersmith, E. Feig, and E. Linzer, “Hadamard transforms on multiply/
add architectures,” IEEE Trans. Signal Process 42 (4), 969–970 (1994).
26. http://www.cs.uow.edu.au/people/jennie/lifework.html.
27. S. Agaian, H. Sarukhanyan, K. Egiazarian, and J. Astola, “Williamson–Hadamard transforms: design and fast algorithms,” in Proc. 18th Int. Scientific Conf. on Information, Communication and Energy Systems and Technologies, ICEST-2003, Oct. 16–18, Sofia, Bulgaria, 199–208 (2003).

Chapter 6
Skew Williamson–Hadamard Transforms
Skew Hadamard matrices are of special interest due to their uses, among others, in constructing orthogonal designs. In this chapter, fast computational algorithms for skew Williamson–Hadamard transforms are constructed; fast algorithms for two groups of transforms based on skew-symmetric Williamson–Hadamard matrices are designed using the block structures of these matrices.

6.1 Skew Hadamard Matrices


Many constructions of Hadamard matrices are known, but not all of them give skew Hadamard matrices.^{1–39} In Ref. 1, the authors provide a survey on the existence and equivalence of skew Hadamard matrices. In addition, they present some new skew Hadamard matrices of order 52 and improve the known lower bound on the number of skew Hadamard matrices of this order. As of August 2006, skew Hadamard matrices were known to exist for all n ≤ 188 with n divisible by 4. A survey of known results about skew Hadamard matrices is given in Ref. 33. It is conjectured that skew Hadamard matrices exist for n = 1, 2 and all n divisible by 4.^{8,11,13,14}

Definition 6.1.1: A matrix A_m of order m is called symmetric if A_m^T = A_m, and skew symmetric if A_m^T = −A_m. The following matrices are examples of skew-symmetric matrices of orders 2, 3, and 4:

$$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 1 & -1 \\ -1 & 0 & 1 \\ 1 & -1 & 0 \end{pmatrix}, \quad
\begin{pmatrix} 0 & 1 & 1 & 1 \\ -1 & 0 & -1 & 1 \\ -1 & 1 & 0 & -1 \\ -1 & -1 & 1 & 0 \end{pmatrix}. \tag{6.1}$$

6.1.1 Properties of skew-symmetric matrices

• If A = (a_{i,j}) is a skew-symmetric matrix, then a_{i,i} = 0.
• If A = (a_{i,j}) is a skew-symmetric matrix, then a_{i,j} = −a_{j,i}.
• Sums and scalar multiples of skew-symmetric matrices are again skew symmetric; i.e., if A and B are skew-symmetric matrices of the same order and c is a scalar, then A + B and cA are skew-symmetric matrices.


• If A is a skew-symmetric matrix of order n, the determinant of A satisfies det(A) = det(A^T) = det(−A) = (−1)^n det(A); in particular, det(A) = 0 when n is odd.

Definition 6.1.2: A Hadamard matrix H_{4n} of order 4n of the form H_{4n} = I_{4n} + S_{4n} is called skew-symmetric type, skew symmetric, or skew if S_{4n}^T = −S_{4n}.^{34,35}

We can see that if H_{4n} = I_{4n} + S_{4n} is a skew-symmetric Hadamard matrix of order 4n, then

$$S_{4n}^T S_{4n} = -S_{4n}^2 = (4n - 1)I_{4n}. \tag{6.2}$$

Indeed,

$$H_{4n} H_{4n}^T = (I_{4n} + S_{4n})(I_{4n} - S_{4n}) = I_{4n} - S_{4n} + S_{4n} - S_{4n}^2 = I_{4n} - S_{4n}^2 = 4nI_{4n}, \tag{6.3}$$

from which we obtain S_{4n}^2 = (1 − 4n)I_{4n}.
A skew Hadamard matrix H_m of order m can always be written in skew-normal form as

$$H_m = \begin{pmatrix} 1 & e \\ -e^T & C_{m-1} + I_{m-1} \end{pmatrix}, \tag{6.4}$$

where e is the row vector of ones of length m − 1, C_{m−1} is a skew-symmetric (0, −1, +1) matrix of order m − 1, and I_{m−1} is the identity matrix of order m − 1. Equivalently, a Hadamard matrix H_m is skew Hadamard if H_m + H_m^T = 2I_m. For example, the following matrices are skew-symmetric Hadamard matrices of orders 2 and 4:

$$H_2 = \begin{pmatrix} + & + \\ - & + \end{pmatrix} = \begin{pmatrix} 0 & + \\ - & 0 \end{pmatrix} + \begin{pmatrix} + & 0 \\ 0 & + \end{pmatrix} = \begin{pmatrix} 0 & + \\ - & 0 \end{pmatrix} + I_2,$$
$$H_4 = \begin{pmatrix} + & + & + & + \\ - & + & - & + \\ - & + & + & - \\ - & - & + & + \end{pmatrix} = \begin{pmatrix} 0 & + & + & + \\ - & 0 & - & + \\ - & + & 0 & - \\ - & - & + & 0 \end{pmatrix} + I_4. \tag{6.5}$$

A simple skew Hadamard matrix construction method is based on the following recursive formula. Suppose that H_n = S_n + I_n is a skew Hadamard matrix of order n. Then

$$H_{2n} = \begin{pmatrix} S_n + I_n & S_n + I_n \\ S_n - I_n & -S_n + I_n \end{pmatrix} \tag{6.6}$$

is a skew Hadamard matrix of order 2n. Examples of skew Hadamard matrices of orders 8 and 16 are given as follows:

$$H_8 = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
- & + & - & + & - & + & - & + \\
- & + & + & - & - & + & + & - \\
- & - & + & + & - & - & + & + \\
- & + & + & + & + & - & - & - \\
- & - & - & + & + & + & + & - \\
- & + & - & - & + & - & + & + \\
- & - & + & - & + & + & - & +
\end{pmatrix}, \tag{6.7}$$

$$H_{16} = \begin{pmatrix}
+ & + & + & + & + & + & + & + & + & + & + & + & + & + & + & + \\
- & + & - & + & - & + & - & + & - & + & - & + & - & + & - & + \\
- & + & + & - & - & + & + & - & - & + & + & - & - & + & + & - \\
- & - & + & + & - & - & + & + & - & - & + & + & - & - & + & + \\
- & + & + & + & + & - & - & - & - & + & + & + & + & - & - & - \\
- & - & - & + & + & + & + & - & - & - & - & + & + & + & + & - \\
- & + & - & - & + & - & + & + & - & + & - & - & + & - & + & + \\
- & - & + & - & + & + & - & + & - & - & + & - & + & + & - & + \\
- & + & + & + & + & + & + & + & + & - & - & - & - & - & - & - \\
- & - & - & + & - & + & - & + & + & + & + & - & + & - & + & - \\
- & + & - & - & - & + & + & - & + & - & + & + & + & - & - & + \\
- & - & + & - & - & - & + & + & + & + & - & + & + & + & - & - \\
- & + & + & + & - & - & - & - & + & - & - & - & + & + & + & + \\
- & - & - & + & + & - & + & - & + & + & + & - & - & + & - & + \\
- & + & - & - & + & - & - & + & + & - & + & + & - & + & + & - \\
- & - & + & - & + & + & - & - & + & + & - & + & - & - & + & +
\end{pmatrix}. \tag{6.8}$$
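The doubling rule of Eq. (6.6) is a one-line recursion. The sketch below (our own NumPy code; `skew_double` is an illustrative name) regenerates skew Hadamard matrices of orders 8 and 16 from H_2 and checks both the Hadamard and skew conditions:

```python
import numpy as np

def skew_double(H):
    # Doubling construction of Eq. (6.6): from the skew Hadamard matrix
    # H_n = S_n + I_n of order n, build the skew Hadamard matrix H_{2n}
    n = H.shape[0]
    I = np.eye(n, dtype=int)
    S = H - I
    return np.block([[S + I, S + I], [S - I, -S + I]])

H2 = np.array([[1, 1], [-1, 1]])
H8 = skew_double(skew_double(H2))     # order 8, cf. Eq. (6.7)
H16 = skew_double(H8)                 # order 16, cf. Eq. (6.8)
for H in (H8, H16):
    n = H.shape[0]
    assert (H @ H.T == n * np.eye(n)).all()              # Hadamard condition
    assert (H + H.T == 2 * np.eye(n, dtype=int)).all()   # skew: H + H^T = 2I
```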

6.2 Skew-Symmetric Williamson Matrices

Similarly, many constructions of Williamson–Hadamard matrices are known, but not all of them give skew Williamson–Hadamard matrices. For instance, in Ref. 36 Blatt and Szekeres use symmetric balanced incomplete block design (SBIBD) configurations with parameters [q^2(q + 2), q(q + 1), q], where q is a prime power, to construct Williamson-type matrices. In Ref. 40, J. Seberry shows that there are Williamson-type matrices of order

• (1/2)q^2(q + 1), where q ≡ 1 (mod 4) is a prime power, and
• (1/2)q^2(q + 1), where q ≡ 3 (mod 4) is a prime power and there are Williamson-type matrices of order (1/2)(q + 1).

This gives Williamson-type matrices for the new orders 363, 1183, 1805, 2601, 3174, and 5103. Other related results on Williamson matrices can be found in Refs. 41–49.

Let A, B, C, D be cyclic (+1, −1) matrices of order n satisfying the conditions

$$\begin{aligned}
&A = I_n + A_1, \quad A_1^T = -A_1, \\
&B^T = B, \quad C^T = C, \quad D^T = D, \\
&AA^T + BB^T + CC^T + DD^T = 4nI_n.
\end{aligned} \tag{6.9}$$

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


216 Chapter 6

Then the matrix


⎛ ⎞
⎜⎜⎜ A B C D⎟⎟⎟
⎜⎜⎜−B A D −C ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎝−C −D A B ⎟⎟⎟⎟⎠
(6.10)
−D C −B A

will be a skew-symmetric Hadamard matrix of Williamson type of order 4n.


The following theorem is correct:
Theorem 6.2.1:47,48 Let A = (ai, j ), B = (bi, j ), C = (ci, j ), and D = (di, j ) be (+1, −1)
matrices of order n. Furthermore, let A be a skew-type cyclic matrix, and B, C, D
be back-cyclic matrices whose first rows satisfy the following equations:
a0,0 = b0,0 = c0,0 = d0,0 = 1,
a0, j = −a0,n− j , b0, j = b0,n− j , c0, j = c0,n− j , d0, j = d0,n− j , (6.11)
j = 1, 2, . . . , n − 1.

If AAT + BBT + CC T + DDT = 4nIn , then Eq. (6.10) is a skew-Hadamard matrix


of order 4n.
Four matrices satisfying the conditions of Eq. (6.9) are called skew-symmetric
Williamson-type matrices of order n, and the matrix Eq. (6.10) is called the skew-
symmetric Hadamard matrix of the Williamson type.47,48 Let us give an example
of a skew-symmetric Hadamard matrix of the Williamson type of order 12. Skew-
symmetric Williamson matrices of order 3 have the following forms:
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ − +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟ ⎜⎜⎜+ + +⎟⎟⎟
⎜ ⎟ ⎜ ⎟ ⎜
A = ⎜⎜⎜⎝+ + −⎟⎟⎟⎠ , B = C = ⎜⎜⎜⎝− + −⎟⎟⎟⎠ , D = ⎜⎜⎜⎝+ + +⎟⎟⎟⎟⎠ . (6.12)
− + + − − + + + +

Thus, the skew-symmetric Hadamard matrix of the Williamson type of order 12


obtained from Eq. (6.10) will be represented as
⎛ ⎞
⎜⎜⎜+ − + + − − + − − + + +⎟⎟

⎜⎜⎜+ + − − + − − + − + + +⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜−
⎜⎜⎜ + + − − + − − + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜− + + + − + + + + − + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ − + + + − + + + + − +⎟⎟⎟⎟⎟
⎜⎜⎜+
⎜⎜⎜ + − − + + + + + + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ . (6.13)
⎜⎜⎜− + + − − − + − + + − −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
⎜⎜⎜ − + − − − + + − − + −⎟⎟⎟⎟

⎜⎜⎜+
⎜⎜⎜ + − − − − − + + − − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜− − − + − − − + + + − +⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎝ − − − + − + − + + + −⎟⎟⎟⎟

− − − − − + + + − − + +

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Skew Williamson–Hadamard Transforms 217

The first rows of the Williamson-type skew-symmetric cyclic matrices A, B, C, D


of order n, n = 3, 5, . . . , 31 [47, 48, 52] are given in Appendix A.4.

6.3 Block Representation of Skew-Symmetric


Williamson–Hadamard Matrices

In this section, we present a construction of block-cyclic Hadamard matrices, i.e.,


Hadamard matrices that can be defined by the first block rows. We demonstrate
that the Williamson-type skew-symmetric cyclic matrices of orders n exist, and
thus block-cyclic, skew-symmetric Hadamard matrices of order 4n exist, whose
blocks are also skew-symmetric Hadamard matrices of order 4.
Let H4n be a skew-symmetric Hadamard matrix of the Williamson type of order
4n and let A, B, C, D be the Williamson-type cyclic skew-symmetric matrices.
We have seen that the Williamson-type cyclic skew-symmetric matrices can be
represented as

n−1 n−1 n−1 n−1


A= ai U i , B= bi U i , C= ci U i , D= di U i , (6.14)
i=0 i=0 i=0 i=0

where U is the cyclic matrix of order n, with the first row (0, 1, 0, . . . , 0), U 0 =
U n = In being an identity matrix of order n, and U n+i = U i , ai = −an−i , bi = bn−i ,
ci = cn−i , di = dn−i , for i = 1, 2, . . . , n − 1. Now, the skew-symmetric Williamson-
type Hadamard matrix H4n can be represented as

n−1
H4n = U i ⊗ Pi , (6.15)
i=0

where
⎛ ⎞
⎜⎜⎜ ai bi ci di ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−bi ai di −ci ⎟⎟⎟⎟⎟
Pi = ⎜⎜⎜ ⎟, i = 0, 1, . . . , n − 1, (6.16)
⎜⎜⎜−ci −di ai bi ⎟⎟⎟⎟⎟
⎝ ⎠
−di ci −bi ai

and ai , bi , ci , di = ±1.
We will call the Hadamard matrices of the form of Eq. (6.15) skew-symmetric,
block-cyclic Williamson–Hadamard matrices.
An example of a skew-symmetric, block-cyclic Williamson–Hadamard matrix
of order 12 is given as follows:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


218 Chapter 6

⎛ ⎞
⎜⎜⎜+ + + + − − − + + − − +⎟⎟⎟
⎜⎜⎜⎜− + + − + − + + + + + +⎟⎟⎟⎟

⎜⎜⎜ ⎟⎟
⎜⎜⎜− − + + + − − − + − + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−
⎜⎜⎜ + − + − − + − − − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ − − + + + + + − − − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + − + + − + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ . (6.17)
⎜⎜⎜+ − + − − − + + + − − −⎟⎟⎟⎟
⎜⎜⎜− ⎟
⎜⎜⎜ − + + − + − + − − + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−
⎜⎜⎜ − − + + − − + + + + +⎟⎟⎟⎟⎟

⎜⎜⎜+ − + + + + + + − + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ − − − + − + − − − + +⎟⎟⎟⎟
⎝ ⎠
− − + − − − + + − + − +

Note that block-cyclic, skew-symmetric Hadamard matrices are synthesized from


eight different blocks of order 4; the first ones have been used once in the first
position.
The following skew-symmetric Hadamard matrices of Williamson type of order
4 can be used to design block-cyclic, skew-symmetric Hadamard matrices of the
Williamson type of order 4n:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + +⎟⎟⎟ ⎜⎜⎜+ + + −⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜− + + −⎟⎟⎟⎟ ⎜− + − −⎟⎟⎟⎟
P0 = ⎜⎜⎜⎜ ⎟⎟ , P1 = ⎜⎜⎜⎜ ⎟⎟ ,
⎜⎜⎜− − + +⎟⎟⎟⎟ ⎜⎜⎜− + + +⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
− + − + + + − +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + − +⎟⎟⎟ ⎜⎜⎜+ + − −⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜− + + +⎟⎟⎟⎟ ⎜− + − +⎟⎟⎟⎟
P2 = ⎜⎜⎜⎜ ⎟⎟ , P3 = ⎜⎜⎜⎜ ⎟⎟ ,
⎜⎜⎜+ − + +⎟⎟⎟⎟ ⎜⎜⎜+ + + +⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
− − − + + − − +
⎛ ⎞ ⎛ ⎞ (6.18)
⎜⎜⎜+ − + +⎟⎟⎟ ⎜⎜⎜+ − + −⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜+ + + −⎟⎟⎟⎟ ⎜+ + − −⎟⎟⎟⎟
P4 = ⎜⎜⎜⎜ ⎟⎟ , P5 = ⎜⎜⎜⎜ ⎟⎟ ,
⎜⎜⎜− − + −⎟⎟⎟⎟ ⎜⎜⎜− + + −⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
− + + + + + + +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ − − +⎟⎟⎟ ⎜⎜⎜+ − − −⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜+ + + +⎟⎟⎟⎟ ⎜+ + − +⎟⎟⎟⎟
P6 = ⎜⎜⎜⎜ ⎟⎟ , P7 = ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎜+ − + −⎟⎟⎟⎟ ⎜⎜⎜+ + + −⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
− − + + + − + +

In Appendix A.4, the first block rows of the block-cyclic, skew-symmetric


Hadamard matrices of Williamson type of order 4n are given, e.g., n = 3, 5,
. . . , 35.48–53

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Skew Williamson–Hadamard Transforms 219

6.4 Fast Block-Cyclic, Skew-Symmetric Williamson–Hadamard


Transform
In this section, we present a fast algorithm for calculation of a block-cyclic,
skew-symmetric Williamson–Hadamard Transform of order 4n. As follows from
Section 6.3, all block-cyclic, skew-symmetric Williamson–Hadamard matrices
contain several blocks from the set of matrices {P0 , P1 , . . . , P7 }. Therefore, the
realization of block-cyclic, skew-symmetric Williamson–Hadamard transforms
can be accomplished by calculating several specialized 4-point HTs such as Yi =
Pi X, where Pi , i = 0, 1, . . . , 7.
Let us calculate the spectral coefficients of transforms from Eq. (6.18). Let
X = (x0 , x1 , x2 , x3 )T and Yi = (y0i , y1i , y2i , y3i )T be input and output column vectors,
respectively. We obtain

y00 = (x0 + x1 ) + (x2 + x3 ), y10 = (x0 − x1 ) + (x2 − x3 ),


y20 = −(x0 + x1 ) + (x2 + x3 ), y30 = −(x0 − x1 ) − (x2 − x3 );
(6.19a)
y01 = (x0 + x1 ) + (x2 − x3 ), y11 = −(x0 − x1 ) − (x2 + x3 ),
y21 = −(x0 − x1 ) + (x2 + x3 ), y31 = (x0 + x1 ) − (x2 − x3 );
y02 = (x0 + x1 ) − (x2 − x3 ), y12 = −(x0 − x1 ) + (x2 + x3 ),
y22 = (x0 − x1 ) + (x2 + x3 ), y32 = −(x0 + x1 ) − (x2 − x3 );
(6.19b)
y03 = (x0 + x1 ) − (x2 + x3 ), y13 = −(x0 − x1 ) − (x2 − x3 ),
y23 = (x0 + x1 ) + (x2 + x3 ), y33 = (x0 − x1 ) − (x2 − x3 );
y04 = (x0 − x1 ) + (x2 + x3 ), y14 = (x0 + x1 ) + (x2 − x3 ),
y24 = −(x0 + x1 ) + (x2 − x3 ), y34 = −(x0 − x1 ) + (x2 + x3 );
(6.19c)
y05 = (x0 − x1 ) + (x2 − x3 ), y15 = (x0 + x1 ) − (x2 + x3 ),
y25 = −(x0 − x1 ) + (x2 − x3 ), y35 = (x0 + x1 ) + (x2 + x3 );
y06 = (x0 − x1 ) − (x2 − x3 ), y16 = (x0 + x1 ) + (x2 + x3 ),
y26 = (x0 − x1 ) + (x2 − x3 ), y36 = −(x0 + x1 ) + (x2 + x3 );
(6.19d)
y07 = (x0 − x1 ) − (x2 + x3 ), y17 = (x0 + x1 ) − (x2 − x3 ),
y27 = (x0 + x1 ) + (x2 − x3 ), y37 = (x0 − x1 ) + (x2 + x3 ).

From the equations above, it follows that

y02 = y31 , y12 = y21 , y22 = −y11 , y32 = −y01 ;


y03 = −y20 , y13 = y30 , y23 = y00 , y33 = −y10 ;
y04 = −y11 , y14 = y01 , y24 = −y31 , y34 = y21 ;
(6.20)
y05 = −y30 , y15 = −y20 , y25 = y10 , y35 = y00 ;
y06 = −y10 , y16 = y00 , y26 = −y30 , y36 = y20 ;
y07 = −y21 , y17 = y31 , y27 = y01 , y37 = −y11 .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


220 Chapter 6

x0 0
y0

1
x1 y0
P0 X
2
x2 y0

x3 3
y0

x0 0
y1

x1 y1
1

P1X
x2 2
y1

x3 3
y1

0 1 2 3 0 1 2 3
y0 y0 y0 y0 y1 y1 y1 y1

x0 0 0
y0 x0 y0
1 1
x1 y0 x1 y0
P3X P2 X
2 2
x2 y0 x2 y0
3 3
x3 y0 x3 y0

x0 0 x0 0
y0 y0

x1 1 x1 1
y0 y0
P5 X P4X
x2 2 x2 2
y0 y0

x3 3 3
y0 x3 y0

x0 0 x0 0
y1 y1

x1 1 x1 1
y1 y1
P6 X P7X
x2 2 x2 2
y1 y1

x3 3 x3 3
y1 y1

Figure 6.1 Flow graphs of the joint Pi X transforms, i = 0, 1, . . . , 7.

Now, from Eqs. (6.19a)–(6.20), we can see that the joint computation of 4-point
transforms Pi X, i = 0, 1, . . . , 7 requires only 12 addition/subtraction operations. In
Fig. 6.1, the joint Pi X transforms, i = 0, 1, . . . , 7, are shown.
Let us give an example. The block-cyclic, skew-symmetric Hadamard matrix of
the Williamson type of order 36 has the following form:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Skew Williamson–Hadamard Transforms 221

⎛ ⎞
⎜⎜⎜ P0 −P3 −P1 −P2 P2 −P5 P5 P6 P4 ⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ P4 P0 −P3 −P1 −P2 P2 −P5 P5 P6 ⎟⎟⎟⎟
⎜⎜⎜ P ⎟
⎜⎜⎜ 6 P4 P0 −P3 −P1 −P2 P2 −P5 P5 ⎟⎟⎟⎟⎟
⎜⎜⎜ P5 ⎟
⎜⎜ P6 P4 P0 −P3 −P1 −P2 P2 −P5 ⎟⎟⎟⎟

H36 = ⎜⎜⎜⎜⎜−P5 P5 P6 P4 P0 −P3 −P1 −P2 P2 ⎟⎟⎟⎟⎟ . (6.21)
⎜⎜⎜ ⎟
⎜⎜⎜ P2 −P5 P5 P6 P4 P0 −P3 −P1 −P2 ⎟⎟⎟⎟
⎜⎜⎜−P ⎟
⎜⎜⎜ 2 P2 −P5 P5 P6 P4 P0 −P3 −P1 ⎟⎟⎟⎟⎟
⎜⎜⎜−P ⎟
⎜⎝ 1 −P2 P2 −P5 P5 P6 P4 P0 −P3 ⎟⎟⎟⎟

−P3 −P1 −P2 P2 −P5 P5 P6 P4 P0

The input vector F36 can be represented as


* +
T
F36 = X0T , X1T , . . . , X8T , (6.22)

where
 
XiT = f4i , f4i+1 , f4i+2 , f4i+3 , i = 0, 1, . . . , 8. (6.23)

Now, the 36-point HT is represented as follows:

H36 F36 = Y0 + Y1 + · · · + Y8 , (6.24)

where Yi , i = 0, 1, . . . , 8 has the following form, respectively:


⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜ P0 X0 ⎟⎟⎟ ⎜⎜⎜−P3 X1 ⎟⎟⎟ ⎜⎜⎜−P1 X2 ⎟⎟⎟ ⎜⎜⎜−P2 X3 ⎟⎟⎟ ⎜⎜⎜ P2 X4 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ P4 X0 ⎟⎟⎟ ⎜⎜⎜ P0 X1 ⎟⎟⎟ ⎜⎜⎜−P3 X2 ⎟⎟⎟ ⎜⎜⎜−P1 X3 ⎟⎟⎟ ⎜⎜⎜−P2 X4 ⎟⎟⎟⎟⎟
⎜⎜⎜ P X ⎟⎟⎟ ⎜⎜⎜ P X ⎟⎟⎟ ⎜⎜⎜ P X ⎟⎟⎟ ⎜⎜⎜−P X ⎟⎟⎟ ⎜⎜⎜−P X ⎟⎟⎟
⎜⎜⎜ 6 0 ⎟⎟⎟ ⎜⎜⎜ 4 1 ⎟⎟⎟ ⎜⎜⎜ 0 2 ⎟⎟⎟ ⎜⎜⎜ 3 3 ⎟⎟⎟ ⎜⎜⎜ 1 4 ⎟⎟⎟
⎜⎜⎜ P5 X0 ⎟⎟⎟ ⎜⎜⎜ P6 X1 ⎟⎟⎟ ⎜⎜⎜ P4 X2 ⎟⎟⎟ ⎜⎜⎜ P0 X3 ⎟⎟⎟ ⎜⎜⎜−P3 X4 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜ ⎟⎟
Y0 = ⎜⎜⎜−P5 X0 ⎟⎟⎟ , Y1 = ⎜⎜⎜ P5 X1 ⎟⎟⎟ , Y2 = ⎜⎜⎜ P6 X2 ⎟⎟⎟ , Y3 = ⎜⎜⎜ P4 X3 ⎟⎟⎟ , Y4 = ⎜⎜⎜⎜⎜ P0 X4 ⎟⎟⎟⎟⎟ ,
⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ P2 X0 ⎟⎟⎟⎟⎟ ⎜⎜⎜−P5 X1 ⎟⎟⎟⎟⎟ ⎜⎜⎜ P5 X2 ⎟⎟⎟⎟⎟ ⎜⎜⎜ P6 X3 ⎟⎟⎟⎟⎟ ⎜⎜⎜ P4 X4 ⎟⎟⎟⎟⎟
⎜⎜⎜−P X ⎟⎟⎟ ⎜⎜⎜ P X ⎟⎟⎟ ⎜⎜⎜−P X ⎟⎟⎟ ⎜⎜⎜ P X ⎟⎟⎟ ⎜⎜⎜ P X ⎟⎟⎟
⎜⎜⎜ 2 0 ⎟⎟⎟ ⎜⎜⎜ 2 1 ⎟⎟⎟ ⎜⎜⎜ 5 2 ⎟⎟⎟ ⎜⎜⎜ 5 3 ⎟⎟⎟ ⎜⎜⎜ 6 4 ⎟⎟⎟
⎜⎜⎜−P X ⎟⎟⎟ ⎜⎜⎜−P X ⎟⎟⎟ ⎜⎜⎜ P X ⎟⎟⎟ ⎜⎜⎜−P X ⎟⎟⎟ ⎜⎜⎜ P X ⎟⎟⎟
⎝⎜ 1 0
⎠⎟ ⎜⎝ 2 1 ⎟⎠ ⎜⎝ 2 2 ⎟⎠ ⎜⎝ 5 3 ⎟⎠ ⎜⎝ 5 4 ⎟⎠
−P3 X0 −P1 X1 −P2 X2 P2 X3 −P5 X4
⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜−P5 X5 ⎟⎟⎟ ⎜⎜⎜ P5 X6 ⎟⎟⎟ ⎜⎜⎜ P5 X7 ⎟⎟⎟ ⎜⎜⎜ P4 X8 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ P2 X5 ⎟⎟⎟ ⎜⎜⎜−P5 X6 ⎟⎟⎟ ⎜⎜⎜ P5 X7 ⎟⎟⎟ ⎜⎜⎜ P5 X8 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−P2 X5 ⎟⎟⎟ ⎜⎜⎜ P2 X6 ⎟⎟⎟ ⎜⎜⎜−P5 X7 ⎟⎟⎟ ⎜⎜⎜ P5 X8 ⎟⎟⎟⎟⎟
⎜⎜⎜⎜−P1 X5 ⎟⎟⎟⎟ ⎜⎜⎜⎜−P2 X6 ⎟⎟⎟⎟ ⎜⎜⎜
⎜⎜⎜ P2 X7 ⎟⎟⎟⎟⎟
⎟ ⎜⎜⎜
⎜⎜⎜−P5 X8 ⎟⎟⎟⎟⎟

⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
Y5 = ⎜⎜⎜⎜−P3 X5 ⎟⎟⎟⎟ , Y6 = ⎜⎜⎜⎜−P1 X6 ⎟⎟⎟⎟ , Y7 = ⎜⎜⎜⎜−P2 X7 ⎟⎟⎟⎟ , Y8 = ⎜⎜⎜⎜ P2 X8 ⎟⎟⎟⎟⎟ . (6.25) ⎜ ⎟ ⎜
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜ P0 X5 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜−P3 X6 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜−P1 X7 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜−P2 X8 ⎟⎟⎟⎟⎟
⎜⎜⎜ P X ⎟⎟⎟ ⎜⎜⎜ P X ⎟⎟⎟ ⎜⎜⎜−P X ⎟⎟⎟ ⎜⎜⎜−P X ⎟⎟⎟
⎜⎜⎜ 4 5 ⎟⎟⎟ ⎜⎜⎜ 0 6 ⎟⎟⎟ ⎜⎜⎜ 3 7 ⎟⎟⎟ ⎜⎜⎜ 1 8 ⎟⎟⎟
⎜⎜⎜ P6 X5 ⎟⎟⎟ ⎜⎜⎜ P4 X6 ⎟⎟⎟ ⎜⎜⎜ P0 X7 ⎟⎟⎟ ⎜⎜⎜−P3 X8 ⎟⎟⎟
⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠ ⎜⎝ ⎟⎠
P5 X5 P 5 X6 P4 X7 P 0 X8

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


222 Chapter 6

From Eqs. (6.19a)–(6.19d) and the above-given equalities for Yi , we can see that
in order to compute all transforms Pi X j i = 0, 1, . . . , 6, j = 0, 1, . . . , 8 resulting
in Yi , i = 0, 1, . . . , 8, 108 addition operations are necessary, as in the block-cyclic,
block-symmetric case. Hence, the complexity of the block-cyclic, skew-symmetric
HT can be calculated by the formula

C s (H4n ) = 4n(n + 2). (6.26)

The computation of the vectors Yi , i = 0, 1, . . . , 8 is given schematically in Fig. 6.2.


Now, using repetitions of additions of vectors Yi and the same notations as in
the previous subsection, the complexity of the HT can be represented as
⎡ ⎤
⎢⎢⎢ m tm ⎥⎥⎥
C rs (H4n ) = 4n ⎢⎢⎢⎣n + 2 − (Nm, j − 1)(i − 1)⎥⎥⎥⎦ . (6.27)
i=2 j=1

In Appendix A.5, the first block rows of block-cyclic, skew-symmetric


Hadamard matrices of the Williamson type of order 4n, n = 3, 5, . . . , 25 [47–50]
with marked cyclic congruent circuits are reflected.

6.5 Block-Cyclic, Skew-Symmetric Fast Williamson–Hadamard


Transform in Add/Shift Architectures
Let us introduce the notations r1 = x1 + x2 + x3 , r2 = r1 − x0 . From these notations
and Eqs. (6.19a)–(6.19d), it follows that

y00 = r1 + x0 , y10 = r2 − 2x3 , y20 = r2 − 2x1 , y30 = r2 − 2x2 ;


y01 = y00 − 2x3 , y11 = −y00 + 2x1 , y21 = r2 , y31 = y00 − 2x2 ;
y02 = y31 , y12 = r2 , y22 = −y11 , y32 = −y01 ;
y03 = −y20 , y13 = y30 , y23 = y00 , y33 = −y10 ;
(6.28)
y04 = −y11 , y14 = y01 , y24 = −y02 , y34 = r2 ;
y05 = −y10 , y15 = −y20 , y25 = y10 , y35 = y00 ;
y06 = −y10 , y16 = y00 , y26 = −y30 , y36 = y20 ;
y07 = −r2 , y17 = y02 , y27 = y01 , y37 = −y11 .

Analysis of the 4-point transforms given above shows that their joint com-
putation requires fewer operations than does their separate computations. For
example, the transforms P0 X and P1 X require 14 addition/subtraction operations
and three one-bit shifts; however, for their joint computation, only 10 addition/
subtraction operations and three one-bit shifts are necessary.
One can show that formulas of the complexity, in this case, are similar to ones
in the case of symmetric Williamson–Hadamard matrices, i.e.,

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Skew Williamson–Hadamard Transforms 223

Figure 6.2 Flow graphs of the computation of Yi vectors.

C ±s (H4n ) = 2n(2n + 3),


C ssh (H4n ) = 3n;
⎡ ⎤
⎢⎢⎢ m tm ⎥⎥⎥ (6.29)
C rs (H4n )± = 2n ⎢⎢⎢⎣2n + 3 − 2 (Nm, j − 1)(i − 1)⎥⎥⎥⎦ ,
i=2 j=1

C ssh (H4n ) = 3n.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


224 Chapter 6

Table 6.1 Complexities of block-cyclic, skew-symmetric fast Williamson–Hadamard


transforms in add/shift architectures.
n C s (H4n ) C sh (H4n ) C rs (H4n ) C rs (H4n )±

3 60 9 60 54
5 140 15 140 130
7 252 21 252 238
9 396 27 360 342
11 572 33 484 462
13 780 39 676 650
15 1020 45 900 870
17 1292 51 1088 1054
19 1596 57 1216 1178
21 1932 63 1596 1554
23 2300 69 1840 1794
25 2700 75 2100 2050

Some numerical results of complexities of block-cyclic, skew-symmetric fast


Williamson–Hadamard transforms in add/shift architectures are given in Table 6.1.

References
1. R. Craigen, “Hadamard matrices and designs,” in The CRC Handbook of
Combinatorial Designs, C. J. Colbourn and J. H. Dinitz, Eds., pp. 370–377
CRC Press, Boca Raton (1996).
2. D. Z. Djokovic, “Skew Hadamard matrices of order 4 × 37 and 4 × 43,”
J. Combin. Theory, Ser. A 61, 319–321 (1992).
3. D. Z. Djokovic, “Ten new orders for Hadamard matrices of skew type,” Univ.
Beograd. Pupl. Electrotehn. Fak., Ser. Math. 3, 47–59 (1992).
4. D. Z. Djokovic, “Construction of some new Hadamard matrices,” Bull.
Austral. Math. Soc. 45, 327–332 (1992).
5. D. Z. Djokovic, “Good matrices of order 33, 35 and 127 exist,” J. Combin.
Math. Combin. Comput. 14, 145–152 (1993).
6. D. Z. Djokovic, “Five new orders for Hadamard matrices of skew type,”
Australas. J. Combin. 10, 259–264 (1994).
7. D. Z. Djokovic, “Six new orders for G-matrices and some new orthogonal
designs,” J. Combin. Inform. System Sci. 20, 1–7 (1995).
8. R. J. Fletcher, C. Koukouvinos, and J. Seberry, “New skew-Hadamard
matrices of order 4 · 49 and new D-optimal designs of order 2 · 59,” Discrete
Math. 286, 251–253 (2004).
9. S. Georgiou and C. Koukouvinos, “On circulant G-matrices,” J. Combin.
Math. Combin. Comput. 40, 205–225 (2002).
10. S. Georgiou and C. Koukouvinos, “Some results on orthogonal designs and
Hadamard matrices,” Int. J. Appl. Math. 17, 433–443 (2005).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Skew Williamson–Hadamard Transforms 225

11. S. Georgiou, C. Koukouvinos, and J. Seberry, “On circulant best matrices and
their applications,” Linear Multilin. Algebra 48, 263–274 (2001).
12. S. Georgiou, C. Koukouvinos, and S. Stylianou, “On good matrices, skew
Hadamard matrices and optimal designs,” Comput. Statist. Data Anal. 41,
171–184 (2002).
13. S. Georgiou, C. Koukouvinos, and S. Stylianou, “New skew Hadamard
matrices and their application in edge designs,” Utilitas Math. 66, 121–136
(2004).
14. S. Georgiou, C. Koukouvinos, and S. Stylianou, “Construction of new skew
Hadamard matrices and their use in screening experiments,” Comput. Stat.
Data Anal. 45, 423–429 (2004).
15. V. Geramita and J. Seberry, Orthogonal Designs: Quadratic Forms and
Hadamard Matrices, Marcel Dekker, New York (1979).
16. J. M. Goethals and J. J. Seidel, “A skew Hadamard matrix of order 36,”
J. Austral. Math. Soc. 11, 343–344 (1970).
17. H. Kharaghani and B. Tayfeh-Rezaie, “A Hadamard matrix of order 428,”
J. Combin. Des. 13, 435–440 (2005).
18. C. Koukouvinos and J. Seberry, “On G-matrices,” Bull. ICA 9, 40–44 (1993).
19. S. Kounias and T. Chadjipantelis, “Some D-optimal weighing designs for
n ≡ 3 (mod 4),” J. Statist. Plann. Inference 8, 117–127 (1983).
20. R. E. A. C. Paley, “On orthogonal matrices,” J. Math. Phys. 12, 311–320
(1933).
21. J. Seberry Wallis, “A skew-Hadamard matrix of order 92,” Bull. Austral. Math.
Soc. 5, 203–204 (1971).
22. J. Seberry Wallis, “On skew Hadamard matrices,” Ars Combin. 6, 255–275
(1978).
23. J. Seberry Wallis and A. L. Whiteman, “Some classes of Hadamard matrices
with constant diagonal,” Bull. Austral. Math. Soc. 7, 233–249 (1972).
24. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Contemporary Design Theory—A Collection of Surveys, J. H.
Dinitz and D. R. Stinson, Eds., 431–560 Wiley, Hoboken, NJ (1992).
25. E. Spence, “Skew-Hadamard matrices of order 2(q + 1),” Discrete Math. 18,
79–85 (1977).
26. G. Szekeres, “A note on skew type orthogonal ±1 matrices,” in Combinatorics,
Colloquia Mathematica Societatis, Vol. 52, J. Bolyai, A. Hajnal, L. Lovász,
and V. T. Sòs, Eds., 489–498 North-Holland, Amsterdam (1988).
27. W. D. Wallis, A. P. Street and J. Seberry Wallis, Combinatorics: Room
Squares, Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics,
292, Springer, New York, 1972.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


226 Chapter 6

28. A. L. Whiteman, “An infinite family of skew-Hadamard matrices,” Pacific J.


Math. 38, 817–822 (1971).
29. A. L. Whiteman, “Skew Hadamard matrices of Goethals–Seidel type,”
Discrete Math. 2, 397–405 (1972).
30. H. Baartmans, C. Lin, and W. D. Wallis, “Symmetric and skew equivalence
of Hadamard matrices of order 28,” Ars. Combin. 41, 25–31 (1995).
31. http://rangevoting.org/SkewHad.html.
32. K. Balasubramanian, “Computational strategies for the generation of equiva-
lence classes of Hadamard matrices,” J. Chem. Inf. Comput. Sci. 35, 581–589
(1995).
33. K. B. Reid and E. Brown, “Doubly regular tournaments are equivalent to skew
Hadamard matrices,” J. Combinatorial Theory, Ser. A 12, 332–338 (1972).
34. P. Solé and S. Antipolis, “Skew Hadamard designs and their codes,” http://
www.cirm.univ-mrs.fr/videos/2007/exposes/13/Sole.pdf (2007).
35. C. J. Colbourn and J. H. Dinitz, Handbook of Combinatorial Designs, 2nd ed.,
CRC Press, Boca Raton (2006).
36. D. Blatt and G. Szekeres, “A skew Hadamard matrix of order 52,” Can. J.
Math. 21, 1319–1322 (1969).
37. J. Seberry, “A new construction for Williamson-type matrices,” Graphs
Combin. 2, 81–87 (1986).
38. J. Wallis, “Some results on configurations,” J. Aust. Math. Soc. 12, 378–384
(1971).
39. A. C. Mukopadhyay, “Some infinite classes of Hadamard matrices,” J. Comb.
Theory Ser. A 25, 128–141 (1978).
40. J. Seberry, “Some infinite classes of Hadamard matrices,” J. Aust. Math. Soc.,
Ser. A 25, 128–141 (1980).
41. J. S. Wallis, “Some matrices of Williamson type,” Utilitas Math. 4, 147–154
(1973).
42. J. S. Wallis, “Williamson matrices of even order”, in Combinatorial Matrices,
Lecture Notes in Mathematics 403, Springer-Verlag Berlin 1974.
43. J. S. Wallis, “Construction of Williamson type matrices,” Linear Multilinear
Algebra 3, 197–207 (1975).
44. M. Yamada, “On the Williamson type j matrices of order 4.29, 4.41, and 4.37,”
J. Comb. Theory, Ser. A 27, 378–381 (1979).
45. M. Yamada, “On the Williamson matrices of Turyn’s type and type j,”
Comment. Math. Univ. St. Pauli 31 (1), 71–73 (1982).
46. C. Koukouvinos and S. Stylianou, “On skew-Hadamard matrices,” Discrete
Math. 308 (13), 2723–2731 (2008).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Skew Williamson–Hadamard Transforms 227

47. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in


Mathematics 1168, Springer-Verlag, New York, 1985.
48. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, “On fast Hadamard
transforms of Williamson type,” in Proc. EUSIPCO-2000, Tampere, Finland
Sept. 4–8, 2, pp. 1077–1080 (2000).
49. H. Sarukhanyan, A. Anoyan, S. Agaian, K. Egiazarian and J. Astola, “Fast
Hadamard transforms,” in Proc of. Int. TICSP Workshop on Spectral Methods
and Multirate Signal Processing, SMSP-2001, June 16–18, Pula, Croatia,
TICSP Ser. 13, pp. 33–40 (2001).
50. S. Agaian, H. Sarukhanyan, K. Egiazarian and J. Astola, “Williamson–
Hadamard transforms: design and fast algorithms,” in Proc. of 18 Int. Scientific
Conf. on Information, Communication and Energy Systems and Technologies,
ICEST-2003, Oct. 16–18, Sofia, Bulgaria, 199–208 (2003).
51. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices,
decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122
(2003) [doi:10.1117/12.473134].
52. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamard
transforms,” J. Multiple-Valued Logic Soft Comput. 10 (2), 173–187 (2004).
53. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem based
fast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002)
[doi:10.1117/12.467969].

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Chapter 7
Decomposition of Hadamard
Matrices
We have seen in Chapter 1 that Hadamard’s original construction of Hadamard ma-
trices states that the Kronecker product of Hadamard matrices of orders m and n is
a Hadamard matrix of order mn. The multiplicative theorem was proposed in 1981
by Agaian and Sarukhanyan1 (see also Ref. 2). They demonstrated how to multiply
Williamson–Hadamard matrices in order to obtain a Williamson–Hadamard matrix
of order mn/2. This result has been extended by the following:

• Craigen et al.3 show how to multiply four Hadamard matrices of orders m, n, p,


q in order to obtain a Hadamard matrix of order mnpq/16.
• Agaian2 and Sarukhanyan et al.4 show how to multiply several Hadamard
matrices of orders ni , i = 1, 2, . . . , k + 1, to obtain a Hadamard matrix of
order (n1 n2 . . . nk+1 )/2k , k = 1, 2, . . . . They obtained a similar result for A(n, k)-
type Hadamard matrices and for Baumert–Hall, Plotkin, and Geothals–Seidel
arrays.5
• Seberry and Yamada investigated the multiplicative theorem of Hadamard
matrices of the generalized quaternion type using the M-structure.6
• Phoong and Chang7 show that the Agaian and Sarukhanyan theorem results can
be generalized to the case of antipodal paraunitary (APU) matrices. A matrix
function H(z) is said to be paraunitary (PU) if it is unitary for all values of
the parameters z, H(z)H T (1/z) = nIn n ≥ 2. One attractive feature of these
matrices is their energy preservation properties that can reduce the noise or error
amplification problem. For further details of PU matrices and their applications,
we refer the reader to Refs. 8–10. A PU matrix is said to be an APU matrix
if all of its coefficient matrices have ±1 as their entries. For the special case
of constant (memoryless) matrices, APU matrices reduce to the well-known
Hadamard matrices.
The analysis of the above-stated results relates to the solution of the following
problem:
Problem 1:2,11 Let Xi and Ai , i = 1, 2, . . . , k be (0, ±1) and (+1, −1) matrices of
dimensions p1 × p2 and q1 × q2 , respectively, and p1 q1 = p2 q2 = n ≡ 0 (mod 4).

229

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


230 Chapter 7

(a) What conditions must matrices Xi and Ai satisfy for


k
H= Xi ⊗ Ai (7.1)
i=1

to be a Hadamard matrix of order n, and


(b) How are these matrices constructed?

In this chapter, we develop methods for constructing matrices Xi and Ai , making


it possible to construct new Hadamard matrices and orthogonal arrays. We also
present a classification of Hadamard matrices based on their decomposability
by orthogonal (+1, −1) vectors. We will present multiplicative theorems of
construction of a new class of Hadamard matrices and Baumert–Hall, Plotkin,
and Geothals–Seidel arrays. Particularly, we will show that if there be k
Hadamard matrices of order m1 , m2 , . . . , mk , then a Hadamard matrix of order
(m1 m2 . . . mk )/2k+1 exists. As an application of multiplicative theorems, one may
find an example in Refs. 12–14.

7.1 Decomposition of Hadamard Matrices by (+1, −1) Vectors


In this section, a particular case of the problem given above is studied, i.e., the case
when Ai is (+1, −1) vectors.
Theorem 7.1.1: For matrix H [see Eq. (7.1)] to be an Hadamard matrix of order n,
it is necessary and sufficient that there be (0, ±1) matrices Xi and (+1, −1) matrices
Ai , i = 1, 2, . . . , k of dimensions p1 × p2 and q1 × q2 , respectively, satisfying the
following conditions:

1. p1 q1 = p2 q2 = n ≡ 0 (mod 4),
2. Xi ∗ X j = 0, i  j, i, j = 1, 2, . . . , k, * is Hadamard product,
k
3. Xi is a (+1, −1) matrix,
i=1
k k
4. Xi XiT ⊗ Ai ATi + Xi X Tj ⊗ Ai ATj = nIn , i  j,
i=1 i, j=1
k k
5. XiT Xi ⊗ ATi Ai + XiT X j ⊗ ATi A j = nIn , i  j.
i=1 i, j=1

The first three conditions are evident. The two last conditions are jointly equivalent
to conditions

HH T = H T H = nIn . (7.2)

Now, let us consider the case where Ai are (+1, −1) vectors. Note that any
Hadamard matrix Hn of order n can be represented as

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 231

(a) Hn = (++) ⊗ X + (+−) ⊗ Y,


8
(7.3)
(b) Hn = vi ⊗ A i ,
i=1

where X, Y are (0, ±1) matrices of dimension n × (n/2), Ai are (0, ±1) matrices of
dimension n × (n/4), and vi are the following four-dimensional (+1, −1) vectors:

v1 = (+ + ++), v2 = (+ + −−), v3 = (+ − −+), v4 = (+ − +−),


(7.4)
v5 = (+ − −−), v6 = (+ − ++), v7 = (+ + +−), v8 = (+ + −+).

Here, we give the examples of decomposition of the following Hadamard matrices:


⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟

⎜⎜⎜⎜+ − + − + − + −⎟⎟⎟⎟⎟
⎛ ⎞ ⎜⎜⎜
⎜⎜⎜+ + + +⎟⎟ ⎜⎜⎜+ + − − + + − −⎟⎟⎟⎟⎟
⎟ ⎜⎜⎜
⎜⎜⎜+ − + −⎟⎟⎟⎟ + − − + + − − +⎟⎟⎟⎟⎟
H4 = ⎜⎜⎜⎜ ⎟, H8 = ⎜⎜⎜⎜⎜ ⎟.
−⎟⎟⎟⎟⎠
(7.5)
⎜⎜⎝+ + − ⎜⎜⎜⎜+ + + + − − − −⎟⎟⎟⎟

+ − − + ⎜⎜⎜+ − + − − + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝+ + − − − − + +⎟⎟⎟⎟

+ − − + − + + −

We use the following notations:

w1 = (++), w2 = (+−), v1 = (+ + ++), v2 = (+ − +−),


(7.6)
v3 = (+ + −−), v4 = (+ − −+).

Example 7.1.1: The Hadamard matrix H4 and H8 can be decomposed as follows:


(1) Via two vectors:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ +⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜0 0 ⎟⎟ + +⎟⎟⎟⎟
H4 = w1 ⊗ ⎜⎜⎜⎜ ⎟⎟⎟ + w2 ⊗ ⎜⎜⎜⎜⎜ ⎟⎟ ,
⎜⎜⎜+ −⎟⎟ ⎟ ⎜
⎜⎜⎝0 0 ⎟⎟⎟⎟
⎝ ⎠ ⎠
0 0 + −
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + +⎟⎟⎟ ⎜⎜⎜0 0 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 ⎟⎟⎟⎟⎟ ⎜⎜⎜+ + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜⎜0 ⎟
⎜⎜⎜+ − + −⎟⎟⎟⎟ ⎜⎜⎜ 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜+ ⎟⎟
0 ⎟⎟⎟ − + −⎟⎟⎟⎟
H8 = w1 ⊗ ⎜⎜⎜⎜⎜ ⎟⎟⎟ + w2 ⊗ ⎜⎜⎜⎜⎜
0 0 0
⎟. (7.7)
⎜⎜⎜⎜+ + − −⎟⎟⎟ ⎜⎜⎜⎜0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 ⎟ ⎜⎜⎜+ ⎟
⎜⎜⎜ 0 ⎟⎟⎟⎟ ⎜⎜⎜ + − −⎟⎟⎟⎟
⎟⎟⎟ ⎟
⎜⎜⎜+ − −
⎜⎝ +⎟⎟⎟ ⎜⎜⎜0
⎜⎝ 0 0 0 ⎟⎟⎟⎟⎟
⎠ ⎠
0 0 0 0 + − − +

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


232 Chapter 7

(2) Via four vectors:


⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+⎟⎟⎟ ⎜⎜⎜0 ⎟⎟⎟ ⎜⎜⎜0 ⎟⎟⎟ ⎜⎜⎜0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 ⎟⎟⎟ ⎜⎜⎜+⎟⎟⎟ ⎜⎜⎜0 ⎟⎟⎟ ⎜0 ⎟
H4 = v1 ⊗ ⎜⎜ ⎟⎟ + v2 ⊗ ⎜⎜ ⎟⎟ + v3 ⊗ ⎜⎜ ⎟⎟ + v4 ⊗ ⎜⎜⎜⎜ ⎟⎟⎟⎟ ,
⎜⎜⎜⎝0 ⎟⎟⎟⎠ ⎜⎜⎜0 ⎟⎟⎟
⎝ ⎠
⎜⎜⎜+⎟⎟⎟
⎝ ⎠
⎜⎜⎜0 ⎟⎟⎟
⎝ ⎠
0 0 0 +
⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ +⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟
⎜⎜⎜⎜0 0 ⎟⎟⎟⎟ ⎜⎜⎜⎜+ +⎟⎟⎟⎟ ⎜⎜⎜⎜0 0 ⎟⎟⎟⎟ ⎜⎜⎜
⎜⎜⎜0

0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜⎜+ +⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎟⎟
⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟ ⎜⎜ +⎟⎟⎟⎟
H8 = v1 ⊗ ⎜⎜⎜⎜ ⎟⎟⎟ + v2 ⊗ ⎜⎜⎜ ⎟⎟⎟ + v3 ⊗ ⎜⎜⎜ ⎟⎟⎟ + v4 ⊗ ⎜⎜⎜⎜⎜+ ⎟. (7.8)
⎜⎜⎜+ −⎟⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0
⎜⎜⎜ 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜+ −⎟⎟⎟⎟⎟ ⎜⎜⎜0 0 ⎟⎟⎟⎟⎟ ⎟
⎜⎜⎜0 0 ⎟⎟⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0
⎜⎜⎜
0 ⎟⎟⎟⎟

⎜⎜⎜0 0 ⎟⎟⎟ ⎜ ⎟ ⎜ + − ⎟ ⎜⎜⎜0 0 ⎟⎟⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠ ⎜⎜⎝ 0 0 ⎟⎟⎠ ⎜⎜⎝ ⎟⎟⎠ ⎝ ⎠
0 0 0 0 0 0 + −

Now, let us introduce the following matrices:

B1 = A1 + A2 + A7 + A8 , B2 = A3 + A4 + A5 + A6 ,
(7.9)
B3 = A1 − A2 − A5 + A6 , B4 = −A3 + A4 + A7 − A8 .

Theorem 7.1.2:15 For the existence of Hadamard matrices of order n, the existence
of (0, ±1) matrices Bi , i = 1, 2, 3, 4 of dimension n × (n/4) is necessary and
sufficient, satisfying the following conditions:

1. B1 ∗ B2 = 0, B3 ∗ B4 = 0,
2. B1 ± B2 , B3 ± B4 are (+1, −1)-matrices,
4
n
3. Bi BTi = In , (7.10)
i=1
2
4. BTi B j= 0, i  j, i, j = 1, 2, 3, 4,
n
5. BTi Bi = In/4 , i, j = 1, 2, 3, 4.
2

Proof: Necessity: Let Hn be a Hadamard matrix of order n. According to Eq. (7.1),


we have

Hn = v1 ⊗ A1 + v2 ⊗ A2 + · · · + v8 ⊗ A8 . (7.11)

From this representation, it follows that

Ai ∗ A j = 0, i  j, i, j = 1, 2, . . . , 8,
(7.12)
A1 + A2 + · · · + A8 is a (+1, −1)-matrix.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 233

On the other hand, it is not difficult to show that the matrix Hn can also be presented
as

Hn = [(++) ⊗ B1 + (+−) ⊗ B2 , (++) ⊗ B3 + (+−) ⊗ B4 ] . (7.13)


Now, let us show that matrices Bi satisfy the conditions of Eq. (7.10). From the
representation of Eq. (7.13) and from Eq. (7.12) and Hn HnT = nIn , the first three
conditions of Eq. (7.10) will follow. Because Hn is a Hadamard matrix of order
n, from the representation of Eq. (7.13), we find the following system of matrix
equations:

BT1 B1 + BT1 B2 + BT2 B1 + BT2 B2 = nIn/4 ,


BT1 B1 − BT1 B2 + BT2 B1 − BT2 B2 = 0,
(7.14a)
BT1 B3 + BT1 B4 + BT2 B3 + BT2 B4 = 0,
BT1 B3 + BT1 B4 − BT2 B3 − BT2 B4 = 0;
BT1 B1 + BT1 B2 − BT2 B1 − BT2 B2 = 0,
BT1 B1 − BT1 B2 − BT2 B1 + BT2 B2 = nIn/4 ,
(7.14b)
BT1 B3 + BT1 B4 − BT2 B3 − BT2 B4 = 0,
BT1 B3 − BT1 B4 − BT2 B3 + BT2 B4 = 0;
BT3 B1 + BT3 B2 + BT4 B1 + BT4 B2 = 0,
BT3 B1 − BT3 B2 + BT4 B1 − BT4 B2 = 0,
(7.14c)
BT3 B3 + BT3 B4 + BT4 B3 + BT4 B4 = nIn/4 ,
BT3 B3 − BT3 B4 + BT4 B3 − BT4 B4 = 0;
BT3 B1 + BT3 B2 − BT4 B1 − BT4 B2 = 0,
BT3 B1 − BT3 B2 − BT4 B1 + BT4 B2 = 0,
(7.14d)
BT3 B3 + BT3 B4 − BT4 B3 − BT4 B4 = 0,
BT3 B3 − BT3 B4 − BT4 B3 + BT4 B4 = nIn/4 ;
which are equivalent to
n
BTi Bi = In/4 , i = 1, 2, 3, 4,
2 (7.15)
BTi B j = 0, i  j, i = 1, 2, 3, 4.
Sufficiency: Let (0, ±1) matrices Bi , i = 1, 2, 3, 4 of dimensions n × (n/4) satisfy
the conditions of Eq. (7.10). We can directly verify that Eq. (7.13) is a Hadamard
matrix of order n.
Corollary 7.1.1: The (+1, −1) matrices

Q1 = (B1 + B2 )T , Q2 = (B1 − B2 )T ,
(7.16)
Q3 = (B3 + B4 )T , Q4 = (B3 − B4 )T

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


234 Chapter 7

of dimensions n/4 × n satisfy the conditions

Qi QTj = 0, i  j, i = 1, 2, 3, 4,
(7.17)
Qi QTi = nIn/4 , i = 1, 2, 3, 4.

Corollary 7.1.2:3 If there be Hadamard matrices of order n, m, p, q, then the


Hadamard matrix of order mnpq/16 also exists.
Proof: According to Theorem 7.1.2, there are (0, ±1) matrices Ai and Bi , i =
1, 2, 3, 4 of dimensions m × (m/4) and n × (n/4), respectively, satisfying the
conditions in Eq. (7.10).
We introduce the following (+1, −1) matrices of orders mn/4:

X = A1 ⊗ (B1 + B2 )T + A2 ⊗ (B1 − B2 )T ,
(7.18)
Y = A3 ⊗ (B3 + B4 )T + A4 ⊗ (B3 − B4 )T .

It is easy to show that matrices X, Y satisfy the conditions

XY T = X T Y = 0,
mn (7.19)
XX T + YY T = X T X + Y T Y = Imn/4 .
2
Again, we rewrite matrices X, Y in the following form:

X = [(++) ⊗ X1 + (+−) ⊗ X2 , (++) ⊗ X3 + (+−) ⊗ X4 ],


(7.20)
Y = [(++) ⊗ Y1 + (+−) ⊗ Y2 , (++) ⊗ Y3 + (+−) ⊗ Y4 ],

where Xi , Yi , i = 1, 2, 3, 4 are (0, ±1) matrices of dimensions (mn/4) × (mn/16),


satisfying the conditions

X1 ∗ X2 = X3 ∗ X4 = Y1 ∗ Y2 = Y3 ∗ Y4 = 0,
X1 ± X2 , X3 ± X4 , Y1 ± Y2 , Y3 ± Y4 are (+1, −1) matrices,
4 4
Xi YiT = XiT Yi = 0, (7.21)
i=1 i=1
4   4   mn
Xi XiT + Yi YiT = XiT Xi + YiT Yi = Imn/4 .
i=1 i=1
4

(+1, −1) matrices P and Q of orders pq/4 can be constructed in a manner similar
to the construction of Hadamard matrices of order p and q, with the conditions of
Eq. (7.19).
Now, consider the following (0, ±1) matrices:
P+Q P−Q
Z= , W= ,
2 2 (7.22)
Ci = Xi ⊗ Z + Yi ⊗ W, i = 1, 2, 3, 4.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 235

It is not difficult to show that matrices Z and W satisfy the conditions

Z ∗ W = 0,
ZW T = WZ T , (7.23)
pq
ZZ T = Z T Z = WW T = W T W = I pq/4 ,
8
assuming that matrices Ci of dimension (mnpq/16) × (mnpq/64) satisfy the
conditions of Eq. (7.10).
Hence, according to Theorem 7.1.2, the matrix

[(++) ⊗ C1 + (+−) ⊗ C2 , (++) ⊗ C3 + (+−) ⊗ C4 ] (7.24)

is a Hadamard matrix of order mnpq/16.

Corollary 7.1.3: If Hadamard matrices of orders ni , i = 1, 2, . . . , k + 1, exist, then


there are also Hadamard matrices of orders (n1 n2 . . . nk+1 )/2k , k = 1, 2, . . ..
Proof: By Theorem 7.1.2, (0, ±1) matrices Ai and Bi , i = 1, 2, 3, 4 of dimensions
n1 × (n1 /4) and n2 × (n2 /4), respectively, exist, satisfying the conditions of
Eq. (7.10).
Consider the following representations of matrices:

Q1 = (B1 + B2 )T = (++) ⊗ X1 + (+−) ⊗ X2 ,


Q2 = (B1 − B2 )T = (++) ⊗ Y1 + (+−) ⊗ Y2 ,
(7.25)
Q3 = (B3 + B4 )T = (++) ⊗ Z1 + (+−) ⊗ Z2 ,
Q4 = (B3 − B4 )T = (++) ⊗ W1 + (+−) ⊗ W2 ,

where Xi , Yi , Zi , Wi , i = 1, 2 are (0, ±1) matrices of the dimension (n2 /4) × (n2 /2).
From the condition of Eq. (7.17) and the representation of Eq. (7.25), we find
that
X1 ∗ X2 = Y1 ∗ Y2 = Z1 ∗ Z2 = W1 ∗ W2 = 0,
X1 ± X2 , Y1 ± Y2 , Z1 ± Z2 , W1 ± W2 are (+1, −1) matrices,
X1 Y1T + X2 Y2T = 0, X1 Z1T + X2 Z2T = 0, X1 W1T + X2 W2T = 0, (7.26)
Y1 Z1T + Y2 Z2T = 0, Y1 W1T + Y2 W2T = 0, Z1 W1T + Z2 W2T = 0,
n2
X1 X1T + X2 X2T = Y1 Y1T + Y2 Y2T = Z1 Z1T + Z2 Z2T = W1 W1T + W2 W2T = In /4 .
2 2
Now, we define the following matrices:
   
X ⊗ A1 + Y1 ⊗ A2 X ⊗ A1 + Y2 ⊗ A2
C1 = 1 , C2 = 2 ,
Z1 ⊗ A1 + W1 ⊗ A2 Z2 ⊗ A1 + W2 ⊗ A2
    (7.27)
X ⊗ A3 + Y1 ⊗ A4 X ⊗ A3 + Y2 ⊗ A4
C3 = 1 , C4 = 2 .
Z1 ⊗ A3 + W1 ⊗ A4 Z2 ⊗ A3 + W2 ⊗ A4

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


236 Chapter 7

It is easy to show that (0, ±1) matrices Ci , i = 1, 2, 3, 4 of dimensions (n1 n2 /2) ×


(n1 n2 /8) satisfy the conditions of Eq. (7.10). Hence, according to Theorem 7.1.2,
a Hadamard matrix of order n1 n2 /2 exists. Subsequently, Corollary 7.1.2 implies
Corollary 7.1.3.

Corollary 7.1.4: If there are Hadamard matrices of order ni , i = 1, 2, . . ., then


there is also a Hadamard matrix of order (n1 n2 . . . nk+3 )/2k+3 , k = 1, 2, . . ..

Proof: The case for k = 1 was proved in Corollary 7.1.2. According to


Corollary 7.1.3, from Hadamard matrices of orders n1 n2 n3 n4 /16, n5 , n6 , . . . , nk , we
can construct a Hadamard matrix of the order (n1 n2 . . . nk )/2k , k = 4, 5, . . ..

Theorem 7.1.3: For any natural numbers k and t, there is a Hadamard matrix
of order [n1 n2 . . . nt(k+2)+1 ]/2t(k+3) , where ni ≥ 4 are orders of known Hadamard
matrices.

Proof: The case for t = 1 and k = 1, 2, . . . was proved in Corollary 7.1.4. Let t > 1
and assume that the assertion is correct for t = t0 > 1, i.e., there is a Hadamard
matrix of order [n1 n2 · · · nt0 (k+3) ]/2t0 (k+3) .
Prove the theorem for t = t0 + 1. We have k + 3 Hadamard matrices of the following
orders:
n1 n2 . . . nt0 (k+2)+1
m1 = , nt0 (k+2)+2 , . . . , nt0 (k+2)+k+3 . (7.28)
2t0 (k+3)

According to Corollary 7.1.4, we can construct a Hadamard matrix of the order

n1 n2 . . . n(t0 +1)(k+2)+1
. (7.29)
2(t0 +1)(k+3)

Now, prove the following lemma.

Lemma 7.1.1: There are no Hadamard matrices of order n represented as


k
Hn = wi ⊗ A i , k  4, 8, wi ∈ {v1 , v2 , . . . , v8 }. (7.30)
i=1

Proof: We prove the lemma for k = 3, 5. For the other value k, the proof is similar.
For k = 3, allow a Hadamard matrix Hn of order n of the type in Eq. (7.30) to exist,
i.e.,

Hn = (+ + ++) ⊗ A1 + (+ + −−) ⊗ A2 + (+ − +−) ⊗ A3 . (7.31)

This matrix can be written as

Hn = [(++) ⊗ (A1 + A2 ) + (+−) ⊗ A3 , (++) ⊗ (A1 − A2 ) + (+−) ⊗ (−A3 )]. (7.32)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 237

According to Theorem 7.1.2, the (0, ±1) matrices

B1 = A1 + A2 , B2 = A3 , B3 = A1 − A2 , B4 = −A3 (7.33)

of dimension n × (n/4) must satisfy all the conditions in Eq. (7.10). In particular,
the following conditions should be satisfied:

n
BT2 B4 = 0, BT2 B2 = In/4 . (7.34)
2

That is, on the one hand AT3 A3 = 0, and on the other hand AT3 A3 = (n/2)In/4 , which
is impossible. Now we consider the case k = 5.
Let

Hn = (+ + ++) ⊗ A1 + (+ + −−) ⊗ A2 + (+ − −+) ⊗ A3 + (+ − +−) ⊗ A4 + (+ − −+)A5 .


(7.35)
The matrix Hn can be easily written as

Hn = [(++) ⊗ (A1 + A2 ) + (+−) ⊗ (A3 + A4 + A5 ),


(++) ⊗ (A1 − A2 − A5 ) + (+−) ⊗ (−A3 + A4 )]. (7.36)

According to Theorem 7.1.2, the (0, ±1) matrices

B1 = A1 + A2 , B2 = A3 + A4 + A5 , B3 = A1 − A2 − A5 , B4 = −A3 + A4
(7.37)

must satisfy the conditions of Eq. (7.10). We can see that the conditions

n
BT1 B1 = BT3 B3 = In/4 (7.38)
2

mean that any column of matrices B1 and B3 contains precisely n/2 nonzero
elements. From this point, we find that A5 = 0, which contradicts the condition
of Lemma 7.1.1.

7.2 Decomposition of Hadamard Matrices and Their Classification


In this section, we consider the possibility of decomposing Hadamard matrices
using four orthogonal vectors of length 4. Let vi , i = 1, 2, . . . , k be mutually
orthogonal k-dimensional (+1, −1) vectors. We investigate Hadamard matrices of
order n that have the following representation:

Hn = v1 ⊗ B1 + v2 ⊗ B2 + · · · + vk ⊗ Bk . (7.39)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


238 Chapter 7

We call the Hadamard matrices having the representation in Eq. (7.39) an A(n, k)-
type Hadamard matrix or simply an A(n, k)-matrix.

Theorem 7.2.1:16,17 A matrix of order n is an A(n, k)-type Hadamard matrix if and


only if, nonzero (0, ±1) matrices Bi , i = 1, 2, . . . , k of dimensions n × n/k satisfying
the following conditions exist:

Bi ∗ B j = 0, i  j, i, j = 1, 2, . . . , k,
k
Bi is a (+1, −1) matrix,
i=1
k
n (7.40)
Bi BTi = In ,
i=1
k
BTi B j = 0, i  j, i, j = 1, 2, . . . , k,
n
BTi Bi = In/k , i = 1, 2, . . . , k.
k
Proof: Necessity: To avoid excessive formulas, we prove the theorem for the case
k = 4. The general case is then a direct extension. Let Hn be a Hadamard matrix of
type A(n, 4), i.e., Hn has the form of Eq. (7.39), where

vi vTj = 0, i  j, i, j = 1, 2, 3, 4,
(7.41)
vi vTi = 4, i = 1, 2, 3, 4.

We shall prove that (0, ±1) matrices Bi , i = 1, 2, 3, 4 of dimensions n × n/4


satisfy the conditions of Eq. (7.40). First, two conditions are obvious. The third
condition follows from the relationship
4
HH T = 4 Bi BTi = nIn . (7.42)
i=1

Consider the last two conditions of Eq. (7.40). Note that the Hadamard matrix Hn
has the form

Hn = (+ + ++) ⊗ B1 + (+ + −−) ⊗ B2 + (+ − −+) ⊗ B3 + (+ − +−) ⊗ B4 . (7.43)

We can also rewrite Hn as

Hn = [(++) ⊗ C1 + (+−) ⊗ C2 , (++) ⊗ C3 + (+−) ⊗ C4 ], (7.44)

where, by Theorem 7.1.2,

C1 = B1 + B2 , C2 = B3 + B4 , C3 = B1 − B2 , C4 = B3 − B4 (7.45)

satisfies the conditions of Eq. (7.10).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 239

Hence, taking into account the last two conditions of Eq. (7.40), we can see that
the matrices Bi satisfy the following equations:

BT1 B1 + BT1 B2 + BT2 B1 + BT2 B2 = nIn/4 , BT1 B1 + BT1 B2 − BT2 B1 − BT2 B2 = 0,


BT1 B1 − BT1 B2 + BT2 B1 − BT2 B2 = 0, BT1 B1 − BT1 B2 − BT2 B1 + BT2 B2 = nIn/4 ,
BT1 B3 + BT1 B4 + BT2 B3 + BT2 B4 = 0, BT1 B3 + BT1 B4 − BT2 B3 − BT2 B4 = 0,
BT1 B3 + BT1 B4 − BT2 B3 − BT2 B4 = 0; BT1 B3 − BT1 B4 − BT2 B3 + BT2 B4 = 0;
(7.46)
BT3 B1 + BT3 B2 + BT4 B1 + BT4 B2 = 0, BT3 B1 + BT3 B2 − BT4 B1 − BT4 B2 = 0,
BT3 B1 − BT3 B2 + BT4 B1 − BT4 B2 = 0, BT3 B1 − BT3 B2 − BT4 B1 + BT4 B2 = 0,
BT3 B3 + BT3 B4 + BT4 B3 + BT4 B4 = nIn/4 , BT3 B3 + BT3 B4 − BT4 B3 − BT4 B4 = 0,
BT3 B3 − BT3 B4 + BT4 B3 − BT4 B4 = 0; BT3 B3 − BT3 B4 − BT4 B3 + BT4 B4 = nIn/4 .

Solving these systems, we find that

BTi B j = 0, i  j, i, j = 1, 2, 3, 4,
n (7.47)
BTi Bi = In/4 , i = 1, 2, 3, 4.
4
Sufficiency: Let (0, ±1) matrices Bi , i = 1, 2, 3, 4 satisfy the conditions of
Eq. (7.40). We shall show that the matrix in Eq. (7.43) is a Hadamard matrix.
Indeed, calculating Hn HnT and HnT Hn , we find that
4 4
n
Hn HnT = 4 Bi BTi = HnT Hn = vTi vi ⊗ In/4 = nIn . (7.48)
i=1 i=1
4

From Theorems 7.1.2 and 7.2.1, the following directly follows:

Corollary 7.2.1: Any Hadamard matrix of order n is an A(n, 2) matrix.


From the condition of mutual orthogonality of k-dimensional (+1, −1) vectors
vi , i = 1, 2, . . . , k and from the condition of Eq. (7.48), it follows that k = 2 or
k ≡ 0 (mod 4).
Theorem 7.2.2 reveals a relationship between the order of the A(n, k) matrix and
the dimension of vectors vi .

Theorem 7.2.2: 15,17 Let Hn be an A(n, k) matrix . Then, n ≡ 0 (mod 2k).

Proof: According to Theorem 7.2.1, the matrix Hn can be written as

Hn = v1 ⊗ B1 + v2 ⊗ B2 + · · · + vk ⊗ Bk , (7.49)

where (0, ±1) matrices Bi of dimension n × n/k satisfy the conditions of Eq. (7.40).
Note that the fifth condition of Eq. (7.40) means that BTi are orthogonal (0, ±1)
matrices and any row of this matrix contains n/k nonzero elements. Hence, for

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


240 Chapter 7

T
matrix BTi , a matrix Bi corresponds to it having the following form:
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜/// ··· ···
· · ·⎟⎟
⎟ ⎜⎜⎜· · · /// ···· · ·⎟⎟
⎟ ⎜⎜⎜· · · ··· ··· ///⎟⎟
⎜⎜⎜⎜.. .. .... ⎟⎟⎟⎟ ⎜⎜⎜.
⎜⎜.. .. .... ⎟⎟⎟⎟ ⎜⎜⎜.
⎜⎜.. .. .. .. ⎟⎟⎟⎟⎟
⎜. . . . ⎟⎟⎟ . . . ⎟⎟⎟ . . . ⎟⎟⎟
B1 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , B2 = ⎜⎜⎜⎜⎜ ⎟⎟⎟ , . . . , Bk = ⎜⎜⎜⎜⎜
T T T
⎟,
⎜⎜⎜/// · · · · · · · · ·⎟⎟ ⎜⎜⎜· · · /// · · · · · ·⎟⎟ ⎜⎜⎜· · · · · · · · · ///⎟⎟⎟⎟
⎜⎝.
.. .. .. .. ⎟⎟⎠ ⎜⎝.
.. .. .. .. ⎟⎟⎠ ⎜⎝.
.. .. .. .. ⎟⎟⎠
. . . . . . . . .
(7.50)

where the shaded portions of rows contain ±1, and other parts of these rows are
filled with zeros.
T
From the condition Bi Bi = (n/k)In/k , it follows that the shaded pieces of i’th
T
rows of matrices Bi contain an even number of ±1s, and from the condition
T
Bi B j = 0, i  j, (7.51)

it follows that other parts of the i’th row also contain an even number of ±1s. It
follows that n/k is an even number, i.e., n/k = 2l; hence, n ≡ 0 (mod 2k).
Naturally, the following problem arises:
For any n, n ≡ 0 (mod 2k), construct an A(n, k)-type Hadamard matrix.
Next, we present some properties of A(n, k)-type Hadamard matrices.

Property 7.2.1: (a) If A(n, k)- and A(m, r)-type Hadamard matrices exist, then an
A(mn, kr)-type Hadamard matrix also exists.
(b) If a Hadamard matrix of order n exists, then there also exists an A(2i−1 n, 2i )-
type Hadamard matrix, i = 1, 2, . . ..
(c) If Hadamard matrices of order ni , i = 1, 2, . . . exist, then a Hadamard matrix
of type A{[n1 n2 . . . nt(r+2)+2 ]/2t(k+3) , 4} exists, where k, t = 1, 2, . . ..

Theorem 7.2.3:15,17 Let there be a Hadamard matrix of order m and an A(n, k)


matrix . Then, for any even number r such as m, n ≡ 0 (mod r), there are (0, ±1)
matrices Bi, j , i = 1, 2, . . . , (r/2), j = 1, 2, . . . , k, of dimension (mn/r) × (mn/r)
satisfying the following conditions:
r
B p,i ∗ B p, j = 0, i  j, i, j = 1, 2, . . . , k, p = 1, 2, . . . , ,
2
k
r
B p,i is a (+1, −1) matrix, p = 1, 2, . . . , ,
i=1
2
k (7.52)
r
Bi,p BTj,p = 0, i  j, i, j = 1, 2, . . . , ,
p=1
2
k/2 k
mn
B p,i BTp.i = Imn/r .
p=1 i=1
2k

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 241

 
Proof: Represent the A(n, k) matrix Hn as HnT = PT1 , PT2 , . . . , PTr , where (+1, −1)
matrices Pi of dimension n/r × n have the form
k
Pi = vt ⊗ Ai,t , i = 1, 2, . . . , r, (7.53)
t=1

and vt are mutually orthogonal k-dimensional (+1, −1) vectors.


We can show that (0, ±1) matrices Ai, j of order n/r satisfy the following
conditions:

At,i ⊗ At, j = 0, i  j, t = 1, 2, . . . , r, i, j = 1, 2, . . . , k,
k
At,i is a (+1, −1) matrix, t = 1, 2, . . . , r,
i=1
k
(7.54)
Ai,t ATj,t = 0, i  j, i, j = 1, 2, . . . , r,
t=1
k
n
Ai,t ATi,t = In/r , i = 1, 2, . . . , r.
t=1
r

Now, for the Hadamard matrix Hm of order m, we present as Hm = (Q1 , Q2 ,


. . . , Qk ), where it is obvious that (+1, −1) matrices Qi have a dimension m × m/k
and satisfy the condition
k
Qi QTi = mIm . (7.55)
i=1

Let us introduce the following matrices:

Q2i−1 + Q2i Q2i−1 − Q2i k


U2i−1 = , U2i = , i = 1, 2, . . . , . (7.56)
2 2 2
We can show that these matrices satisfy the conditions

k
U2i−1 ∗ U2i = 0, i = 1, 2, . . . , ,
2
U2i−1 ± U2i is a (+1, −1) matrix,
(7.57)
k
m
Ui UiT = Im .
i=1
2

Now, we consider (0, ±1) matrices of dimension (mn/r) × (mn/rk):


r
Bt,i = U2t−1 ⊗ A2t−1,i + U2t ⊗ A2t,i , t = 1, 2, . . . , , i = 1, 2, . . . , k. (7.58)
2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


242 Chapter 7

By using the conditions of Eqs. (7.54) and (7.57), we can verify that these matrices
satisfy the conditions of Eq. (7.52). From Theorem 7.2.3, some useful corollaries
follow.
Corollary 7.2.2:1,2,18 The existence of Hadamard matrices of orders m and n
implies the existence of a Hadamard matrix of order mn/2.
Indeed, according to Theorem 7.2.3, for k = r = 2, there are (0, ±1) matrices
B1,1 and B1,2 , satisfying the conditions of Eq. (7.52). Now it is not difficult to show
that (++) ⊗ B1,1 + (+−) ⊗ B1,2 is a Hadamard matrix of order mn/2.

Corollary 7.2.3:19 If Hadamard matrices of order m and n exist, then there are
(0, ±1) matrices X, Y of order mn/4, satisfying the conditions

XY T = 0,
mn (7.59)
XX T + YY T = Imn/4 .
2
According to Theorem 7.2.3, for k = 2 and r = 4, we have two pairs of (0,
±1) matrices B1,1 , B1,2 and B2,1 , B2,2 of dimension mn/4 × mn/8 satisfying the
conditions of Eq. (7.52). We can show that matrices

X = (++) ⊗ B1,1 + (+−) ⊗ B1,2 ,


(7.60)
Y = (++) ⊗ B2,1 + (+−) ⊗ B2,2

satisfy the conditions of Eq. (7.59).

Corollary 7.2.4: If an A(n, 4) matrix and a Hadamard matrix of order m exist,


then there are (0, ±1) matrices X, Y of order mn/4 of the form
4 4
X= vi ⊗ B1,i , Y= vi ⊗ B2,i (7.61)
i=1 i=1

satisfying the conditions of Eq. (7.59). Here, vi are mutually orthogonal four-
dimensional (+1, −1) vectors. The proof of this corollary follows from Theorem
7.2.3 for r = k = 4. As mentioned, the length of k mutually orthogonal (+1, −1)-
vectors is equal to 2 or k ≡ 0 (mod 4).
Below, we consider vectors of the dimension k = 2t . Denote the set of all
Hadamard matrices by C and the set of A(n, k)-type Hadamard matrices by Ck .
From Theorem 7.2.1, it follows that C = C2 , and from Corollary 7.2.1 it directly
follows that

C = C 2 ⊃ C 4 ⊃ C 8 ⊃ · · · ⊃ C 2k . (7.62)

Now, from Theorem 7.2.2, the following can be derived:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 243

Corollary 7.2.5: If Hn ∈ C2k , then n ≡ 0 (mod 2k+1 ).


Theorem 7.2.4: Let Hn ∈ Ck , Hm ∈ Cr , and k ≤ r. Then, there are Hmn/k ∈ Cr .
Proof: According to Theorem 7.2.1, (0, ±1) matrices Bi , i = 1, 2, . . . , k exist of
dimensions n × n/k satisfying the conditions of Eq. (7.40). The matrix Hm can be
written as
⎛ r ⎞
⎜⎜⎜ ⎟
⎜⎜⎜ vi ⊗ A1,i ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜⎜ i=1 r ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ vi ⊗ A2,i ⎟⎟⎟⎟⎟
Hm = ⎜⎜⎜⎜ i=1 ⎟⎟⎟ ,
⎟⎟⎟ (7.63)
⎜⎜⎜⎜ .. ⎟

⎜⎜⎜ . ⎟⎟⎟
⎜⎜⎜ r ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎝ vi ⊗ Ak,i ⎟⎟⎟⎠
i=1

where (0, ±1) matrices Ai, j of dimensions n/k × m/r satisfy the conditions

Ai,t ∗ Ai,p = 0, i = 1, 2, . . . , k, t  p, t, p = 1, 2, . . . , r,
r
Ai,t is a (+1, −1)-matrix, i = 1, 2, . . . , k,
t=1
r
(7.64)
Ai,t ATp,t = 0, i  p, i, p = 1, 2, . . . , k,
t=1
r
m
Ai,t ATi,t = Im/k , i = 1, 2, . . . , k.
t=1
r

Now, we introduce (0, ±1) matrices Di of dimension mn/k × mn/r:


k
Di = Bt ⊗ At,i , i = 1, 2, . . . , r. (7.65)
t=1

One can show that matrices Di satisfy the conditions of Eq. (7.40). According
to Theorem 7.2.1, this means that there is a Hadamard matrix of type A(mn/k, r),
where Hmn/k ∈ Cr , thus proving the theorem.

7.3 Multiplicative Theorems of Orthogonal Arrays and Hadamard


Matrix Construction
Now, we move on to orthogonal arrays and Hadamard matrix construction using
properties of Hadamard matrix decomposability.
Theorem 7.3.1: If there is an A(n, k) matrix and a Hadamard matrix of order
m [m ≡ 0 (mod k)], then a Hadamard matrix of order mn/k exists.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


244 Chapter 7

Proof: According to Theorem 7.2.1, there are matrices Bi , i = 1, 2, . . . , k of


dimensions n×n/k satisfying
 the conditions of Eq. (7.40). Represent the Hadamard
matrix Hm as HmT = H1T , H2T , . . . , HkT , where Hi are (+1, −1) matrices of
dimension n/k × m, satisfying the conditions

Hi H Tj = 0, i  j, i, j = 1, 2, . . . , k,
k
(7.66)
Hi HiT = mIm .
i=1

Now, it is not difficult to show that the matrix k


i=1 Hi ⊗ Bi is the Hadamard matrix
of order mn/k.
Theorem 7.3.2: Let there be an A(n, k) matrix and Hadamard matrices of orders
m, p, q. Then, an A[(mnpq/16), k] matrix also exists.
Proof: We present the Hadamard matrix H1 of type A(n, k) in the following form:
⎛ k ⎞
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜ vi ⊗ A1,i ⎟⎟⎟⎟⎟
⎜⎜⎜ i=1 ⎟⎟⎟
⎜⎜⎜ k ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ vi ⊗ A2,i ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
H1 = ⎜⎜⎜⎜⎜ i=1 ⎟⎟⎟ ,
⎟⎟⎟ (7.67)
⎜⎜⎜ k


⎜⎜⎜ vi ⊗ A3,i ⎟⎟⎟
⎜⎜⎜⎜ i=1 ⎟⎟⎟⎟
⎜⎜⎜ k ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝ vi ⊗ A4,i ⎟⎟⎟⎟⎠
i=1

where vi are mutually orthogonal k-dimensional (+1, −1) vectors, and Ai, j are (0,
±1) matrices of the dimension n/4 × n/k satisfying the conditions

At,i ∗ At, j = 0, t = 1, 2, 3, 4, i  j, i, j = 1, 2, . . . , k,
k
At,i is a (+1, −1) matrix, t = 1, 2, 3, 4,
i=1
k
(7.68)
At,i ATr,i = 0, t  r, t, r = 1, 2, 3, 4,
i=1
k
n
At,i ATt,i = In/4 , t = 1, 2, 3, 4.
i=1
k

Now, we represent the Hadamard matrix H2 of order m as H2 = [P1 , P2 , P3 , P4 ],


and introduce the following (0, ±1) matrices of dimension m × m/4:

P1 + P2 P1 − P2 P3 + P4 P3 − P4
U1 = , U2 = , U3 = , U4 = . (7.69)
2 2 2 2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 245

We can show that these matrices satisfy the conditions

U1 ∗ U2 = U3 ∗ U4 = 0,
U1 ± U2 , U3 ± U4 are (+1, −1) matrices,
4 (7.70)
m
Ui UiT = Im .
i=1
2

Furthermore, we introduce (+1, −1) matrices of order mn/k:


k
, -
X= vi ⊗ U1 ⊗ A1,i + U1 ⊗ A2,i ,
i=1
k
(7.71)
, -
Y= vi ⊗ U3 ⊗ A3,i + U4 ⊗ A4,i .
i=1

One can show that these matrices satisfy the conditions of Eq. (7.59). According
to Corollary 7.2.3, from the existence of Hadamard matrices of orders p and q,
the existence of (+1, −1) matrices X1 , Y1 of order pq/4 follows, satisfying the
conditions of Eq. (7.59). Now we can show that (0, ±1) matrices

X1 + Y1 X1 − Y1
Z= , W= , (7.72)
2 2
satisfy the conditions

Z ∗ W = 0,
Z ± W is a (+1, −1) matrix,
ZW T = WZ T , (7.73)
pq
ZZ T = WW T = I pq/4 .
8
Finally, we introduce (0, ±1) matrices Bi , i = 1, 2, . . . , k of dimensions (mnpq/16)×
(mnpq/16):
, - , -
Bi = U1 ⊗ A1,i + U2 ⊗ A2,i ⊗ Z + U3 ⊗ A3,i + U4 ⊗ A4,i ⊗ W. (7.74)

We can show that the matrices Bi satisfy the conditions of Theorem 7.2.1. Hence,
there is a Hadamard matrix of type A[(mnpq/16), k].
From Corollary 7.2.2 and Theorems 7.3.1 and 7.3.2, the following ensues:
Corollary 7.3.1: (a) If there is an A(n1 , k) matrix and Hadamard matrices of
orders ni , i = 2, 3, 4, . . ., then a Hadamard matrix also exists of type
%n n . . . n &
1 2 3t+1
A 4t
, k , t = 1, 2, . . . . (7.75)
2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


246 Chapter 7

(b) If there are Hadamard matrices of orders ni , i = 1, 2, . . ., then there are also
%n n . . . n & %n n . . . n &
1 2 3t+1 1 2 3t+1
A , 4 and A , 8 (7.76)
24t−1 24t−2
matrices, t = 1, 2, . . ..
(c) If Hadamard matrices of orders ni , i = 1, 2, . . ., exist, then there is also a
Hadamard matrix of order (n1 n2 . . . n3i+2 )/24i+1 .

Theorem 7.3.3: If there is an A(n, k) matrix and orthogonal design OD m; mk ,
  
k , . . . , k , then orthogonal design OD k ; k2 , k2 , . . . , k2 exists.
m m mn mn mn mn

The proof of this theorem is similar to the proof of Theorem 7.3.1.

Corollary 7.3.2: (a) If there is an A(n, 4) matrix and Baumert–Hall (Geothals–


Seidel) array of order m, then a Baumert–Hall (Geothals–Seidel) array of
order mn/4 exists.
(b) If an A(n, 8) matrix and Plotkin array of order m exists, then a Plotkin array of
order mn/8 exists.

Theorem 7.3.4: (Baumert–Hall20 ) If there are Williamson matrices of order n


and a Baumert–Hall array of order 4t, then there is a Hadamard matrix of order
4nt.
From Theorems 7.1.3 and 7.3.4, and Corollaries 7.2.3, 7.3.1, and 7.3.2 we find:

Corollary 7.3.3: Let wi be orders of known Williamson-type matrices and ti be


orders of known T matrices. Then, there are
(a) Baumert–Hall arrays of order

2n(k+1)+4 w1 w2 . . . wn(k+2)+2 t1 t2 . . . tn(k+2)+3 ,


(7.77)
22k+1 w1 w2 . . . w3k+1 t1 t2 . . . t3k+2 ;

(b) Plotkin arrays of order

22(k+2) w1 w2 . . . w3k+1 t1 t2 . . . t3k+1 ,


(7.78)
3 · 22(k+2) w1 w2 . . . w3k+1 t1 t2 . . . t3k+1 ;

(c) Hadamard matrices of order

2n(k+1)+4 w1 w2 . . . wn(k+2)+3 t1 t2 . . . tn(k+2)+3 ,


22k+1 w1 w2 . . . w3k+2 t1 t2 . . . t3k+2 ;
(7.79)
22(k+2) w1 w2 . . . w3k+3 t1 t2 . . . t3k+1 ,
3 · 22(k+2) w1 w2 . . . w3k+3 t1 t2 . . . t3k+1 .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Decomposition of Hadamard Matrices 247

References
1. S. S. Agaian and H. G. Sarukhanyan, “Recurrent formulae for construction of
Williamson type matrices,” Math. Notes 30 (4), 603–617 (1981).
2. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics 1168, Springer-Verlag, Berlin, (1985).
3. R. Craigen, J. Seberry, and X. Zhang, “Product of four Hadamard matrices,”
J. Combin. Theory, Ser. A 59, 318–320 (1992).
4. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices,
decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122
(2003) [doi:10.1117/12.473134].
5. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, “Construction of
Williamson type matrices and Baumert–Hall, Welch and Plotkin arrays,” in
Proc. First Int. Workshop on Spectral Techniques and Logic Design for Future
Digital Systems, Tampere, Finland SPECLOG’2000, TICSP Ser. 10, 189–205
(2000).
6. J. Seberry and M. Yamada, “On the multiplicative theorem of Hadamard
matrices of generalize quaternion type using M-structure,” http://www.uow.
edu.au/∼jennie/WEB/WEB69-93/max/183_1993.pdf.
7. S. M. Phoong and K. Y. Chang, “Antipodal paraunitary matrices and
their application to OFDM systems,” IEEE Trans. Signal Process. 53 (4),
1374–1386 (2005).
8. W. A. Rutledge, “Quaternions and Hadamard matrices,” Proc. Am. Math. Soc.
3 (4), 625–630 (1952).
9. M.J.T. Smith and T.P. Barnwell III, “A procedure for designing exact
reconstruction filter banks for tree-structured subband coders,” in Proc. of
IEEE Int. Conf. Acoust. Speech, Signal Process, San Diego, 27.11–27.14 (Mar.
1984).
10. P. P. Vaidyanathan, “Theory and design of M-channel maximally decimated
quadrature mirror filters with arbitrary M, having perfect reconstruction
property,” IEEE Trans. Acoust., Speech, Signal Process. ASSP-35, 476–492
(Apr. 1987).
11. S.S. Agaian, “Spatial and high dimensional Hadamard matrices,” in Mathe-
matical Problems of Computer Science (in Russian), NAS RA, Yerevan,
Armenia, 12, 5–50 (1984).
12. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, “Decomposition of
Hadamard matrices,” in Proc. of First Int. Workshop on Spectral Techniques
and Logic Design for Future Digital Systems, 2–3 June 2000 Tampere, Finland
SPECLOG’2000, TICSP Ser. 10, pp. 207–221 (2000).
13. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem based
fast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002)
[doi:10.1117/12.467969].


14. http://www.uow.edu.au/∼jennie.
15. H.G. Sarukhanyan, “Hadamard Matrices and Block Sequences,” Doctoral
thesis, Institute for Informatics and Automation Problems of NAS RA,
Yerevan, Armenia (1998).
16. H. G. Sarukhanyan, “Decomposition of Hadamard matrices by orthogonal
(−1, +1) vectors and algorithm of fast Hadamard transform,” Rep. Acad. Sci.
Armenia 97 (2), 3–6 (1997) (in Russian).
17. H. G. Sarukhanyan, “Decomposition of the Hadamard matrices and fast
Hadamard transform”, Computer Analysis of Images and Patterns, Lecture
Notes in Computer Science 1296, pp. 575–581 (1997).
18. H. G. Sarukhanyan, S. S. Agaian, J. Astola, and K. Egiazarian, “Decomposi-
tion of binary matrices and fast Hadamard transforms,” Circuits, Systems, and
Signal Processing 24 (4), 385–400 (1993).
19. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs”, in Surveys in Contemporary Design Theory, Wiley-Interscience
Series in Discrete Mathematics, 431–560 John Wiley & Sons, Hoboken, NJ
(1992).
20. J. S. Wallis, “On Hadamard matrices,” J. Combin. Theory, Ser. A 18, 149–164
(1975).



Chapter 8
Fast Hadamard Transforms for
Arbitrary Orders
Hadamard matrices have received much attention in recent years, owing to their
numerous known and promising applications. The FHT algorithm was developed
for N = 2^n, 12 · 2^n, 4^n. In this chapter, a general and efficient algorithm to compute
4t-point (t is an "arbitrary" integer) HTs is developed. The proposed scheme
requires no zero padding of the input data to make its size equal to 2^n. The
difficulty of constructing the N ≡ 0 (mod 4)-point HT is related to the
Hadamard problem, namely, we do not know whether, for every integer n, there
is an orthogonal 4n × 4n matrix of plus and minus ones. The number of real
operations is reduced from O(N²) to O(N log₂ N). Comparative estimates revealing
the efficiency of the proposed algorithms with respect to those known are given. In
particular, it is demonstrated that, in typical applications, the proposed algorithm
is significantly more efficient than the conventional WHT. Note that the general
algorithm is more efficient for Hadamard matrices of orders ≥96 than the classical
WHT, whose order is a power of 2. The algorithm renders a simple and symmetric
structure. Note that there are many approaches and algorithms concerning HTs.1–49
In this chapter, we present new algorithms for fast computation of HTs of any
existing order. Additionally, using the structures of those matrices, we reduce
the number of operations. The chapter is organized as follows. Section 8.1
presents three algorithms of Hadamard matrix construction. Sections 8.2 and 8.3
present the decomposition of the arbitrary Hadamard matrix by {(1, 1), (1, −1)}
and by the {(1, 1, 1, 1), (1, 1, −1, −1), (1, −1, −1, 1), (1, −1, 1, −1)} vector system.
Section 8.4 describes N ≡ 0 (mod 4)-point FHT algorithms based on these
decompositions. Section 8.5 describes a multiply/add instruction-based FHT algorithm
that primarily uses shifted operations. Section 8.6 presents the complexity of
developed algorithms, as well as comparative estimates, revealing the efficiency
of the proposed algorithms with respect to those known.

8.1 Hadamard Matrix Construction Algorithms


In this section, we describe the Hadamard matrix construction algorithms. The
first algorithm is based on Sylvester (Walsh–Hadamard) matrix construction. The
second and the third algorithms are based on the multiplicative theorem.1–4


Algorithm 8.1.1: Sylvester matrix construction.


Input: H₁ = (1).
Step 1. For k = 1, 2, . . . , n construct
$$H_{2^k} = \begin{pmatrix} H_{2^{k-1}} & H_{2^{k-1}} \\ H_{2^{k-1}} & -H_{2^{k-1}} \end{pmatrix}, \quad k = 1, 2, \ldots, n. \eqno(8.1)$$
Output: Hadamard matrix of order 2^n.
Below, the Sylvester-type matrices of orders 2 and 4, respectively, are given:
$$\begin{pmatrix} + & + \\ + & - \end{pmatrix}, \qquad \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix}. \eqno(8.2)$$
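The recursion of Eq. (8.1) takes only a few lines of code. Below is a minimal sketch in Python/NumPy; the function name sylvester_hadamard is ours, not from the text.

    import numpy as np

    def sylvester_hadamard(n):
        """Hadamard matrix of order 2**n built by the recursion of Eq. (8.1)."""
        H = np.array([[1]])
        for _ in range(n):
            H = np.block([[H, H], [H, -H]])  # H_{2^k} from H_{2^{k-1}}
        return H

    H4 = sylvester_hadamard(2)
    assert (H4 @ H4.T == 4 * np.eye(4)).all()  # Hadamard property: H H^T = n I_n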

Algorithm 8.1.2: Hadamard matrix construction algorithm via two Hadamard


matrices.
Input: Hadamard matrices H1 and H2 of orders m and n, respectively.
Step 1. Split the matrix H1 as
 
$$H_1 = \begin{pmatrix} P \\ Q \end{pmatrix}. \eqno(8.3)$$
Step 2. Decompose H2 as
H2 = (++) ⊗ A1 + (+−) ⊗ A2 . (8.4)
Step 3. Construct the matrix via
Hmn/2 = P ⊗ A1 + Q ⊗ A2 . (8.5)
Note that Hmn/2 is a Hadamard matrix of order mn/2.
Step 4. For a given number k, k = 2, 3, . . ., using steps 1–3, construct a
Hadamard matrix of order n(m/2)^k.
Output: A Hadamard matrix of order n(m/2)^k.
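The following is a hedged Python/NumPy sketch of Algorithm 8.1.2. The helper decompose2 recovers A1 and A2 of Eq. (8.4) from sums and differences of adjacent column pairs [this is how the (++)/(+−) decomposition acts in Eq. (8.27)]; the function names and the chosen Kronecker ordering are our reading of the book's v ⊗ A convention, not part of the text.

    import numpy as np

    def decompose2(H):
        """Return A1, A2 with H = (++) ⊗ A1 + (+-) ⊗ A2 [Eqs. (8.4), (8.26)]."""
        return (H[:, 0::2] + H[:, 1::2]) // 2, (H[:, 0::2] - H[:, 1::2]) // 2

    def multiply_construct(H1, H2):
        """Steps 1-3: Hadamard matrices of orders m and n give one of order mn/2."""
        m = H1.shape[0]
        P, Q = H1[: m // 2, :], H1[m // 2 :, :]   # Step 1, Eq. (8.3)
        A1, A2 = decompose2(H2)                   # Step 2, Eq. (8.4)
        return np.kron(A1, P) + np.kron(A2, Q)    # Step 3, Eq. (8.5)

For example, with H1 = H2 = sylvester_hadamard(2), the sketch returns a Hadamard matrix of order 8; iterating Step 4 multiplies the order by m/2 each time.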
Algorithm 8.1.3: Hadamard matrix construction algorithm via four Hadamard
matrices.
Input: Hadamard matrices H1 , H2 , H3 , and H4 of orders m, n, p, and q,
respectively.
Step 1. Split each H1 and H2 into two parts,
H1 = [D1 , D2 ], H2 = [D3 , D4 ]. (8.6)

Step 2. Decompose H1 and H2 as


H1 = [(++) ⊗ A1 + (+−) ⊗ A2 , (++) ⊗ A3 + (+−) ⊗ A4 ] ,
(8.7)
H2 = [(++) ⊗ B1 + (+−) ⊗ B2 , (++) ⊗ B3 + (+−) ⊗ B4 ] .


Step 3. Construct (+1, −1) matrices via

X = B1 ⊗ (A1 + A2 )T + B2 ⊗ (A1 − A2 )T ,
(8.8)
Y = B3 ⊗ (A3 + A4 )T + B4 ⊗ (A3 − A4 )T .

Step 4. Split matrices H3 and H4 into two parts,

H3 = [F1 , F2 ], H4 = [F3 , F4 ]. (8.9)

Step 5. Decompose H3 and H4 as

H3 = [(++) ⊗ P1 + (+−) ⊗ P2 , (++) ⊗ P3 + (+−) ⊗ P4 ] ,


(8.10)
H4 = [(++) ⊗ Q1 + (+−) ⊗ Q2 , (++) ⊗ Q3 + (+−) ⊗ Q4 ] .

Step 6. Construct (+1, −1) matrices

Z = Q1 ⊗ (P1 + P2 )T + Q2 ⊗ (P1 − P2 )T ,
(8.11)
W = Q3 ⊗ (P3 + P4 )T + Q4 ⊗ (P3 − P4 )T .

Step 7. Design the following matrices:

$$P = \frac{Z+W}{2}, \qquad Q = \frac{Z-W}{2}. \eqno(8.12)$$
Step 8. Construct the Hadamard matrix as

Hmnpq/16 = X ⊗ P + Y ⊗ Q. (8.13)

Output: The Hadamard matrix Hmnpq/16 of order mnpq/16.
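Analogously, the following is a hedged sketch of Algorithm 8.1.3, assuming all four orders are divisible by 4 and reusing decompose2 from the previous sketch; the helper names and the Kronecker orderings are again our reading of the text, not prescribed by it.

    import numpy as np

    def four_parts(H):
        """A1..A4 of Eqs. (8.6)-(8.7): apply decompose2 to each half of H."""
        n = H.shape[1]
        return decompose2(H[:, : n // 2]) + decompose2(H[:, n // 2 :])

    def four_matrix_construct(H1, H2, H3, H4):
        """Hedged sketch of Algorithm 8.1.3 for orders m, n, p, q (each divisible by 4)."""
        A1, A2, A3, A4 = four_parts(H1)
        B1, B2, B3, B4 = four_parts(H2)
        X = np.kron(B1, (A1 + A2).T) + np.kron(B2, (A1 - A2).T)   # Eq. (8.8)
        Y = np.kron(B3, (A3 + A4).T) + np.kron(B4, (A3 - A4).T)
        P1, P2, P3, P4 = four_parts(H3)
        Q1, Q2, Q3, Q4 = four_parts(H4)
        Z = np.kron(Q1, (P1 + P2).T) + np.kron(Q2, (P1 - P2).T)   # Eq. (8.11)
        W = np.kron(Q3, (P3 + P4).T) + np.kron(Q4, (P3 - P4).T)
        P, Q = (Z + W) // 2, (Z - W) // 2                          # Eq. (8.12)
        return np.kron(X, P) + np.kron(Y, Q)                       # Eq. (8.13)

With four copies of the Sylvester matrix of order 4 as input, this sketch returns a Hadamard matrix of order 4·4·4·4/16 = 16.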

8.2 Hadamard Matrix Vector Representation


In this section, we consider a representation of the Hadamard matrix Hn of order
n by (+1, −1) vectors as follows. Let vi , i = 1, 2, . . . , k be k-dimensional mutually
orthogonal (+1, −1) vectors. The Hadamard matrix of order n of the form

Hn = v1 ⊗ A1 + v2 ⊗ A2 + · · · + vk ⊗ Ak (8.14)

is called the Hadamard matrix of type A(n, k), or the A(n, k) matrix,1,2,4–6 where
vi are orthogonal (+1, −1) vectors of length k, and Ai are (0, ±1) matrices of
dimension n × n/k.
Theorem 8.2.1: A matrix Hn of order n is an A(n, k)-type Hadamard matrix if and
only if there are nonzero (0, ±1) matrices Ai, i = 1, 2, . . . , k of size n × n/k satisfying
the following conditions:


$$\begin{aligned}
&A_i * A_j = 0, \quad i \neq j, \quad i, j = 1, 2, \ldots, k,\\
&\sum_{i=1}^{k} A_i \ \text{is a } (+1, -1) \text{ matrix},\\
&\sum_{i=1}^{k} A_i A_i^T = \frac{n}{k} I_n,\\
&A_i^T A_j = 0, \quad i \neq j, \quad i, j = 1, 2, \ldots, k,\\
&A_i^T A_i = \frac{n}{k} I_{n/k}, \quad i = 1, 2, \ldots, k.
\end{aligned} \eqno(8.15)$$
Proof: Necessity: In order to avoid excessive formulas, we prove the theorem for
the case k = 4. The general case is then a straightforward extension of the proof.
Let Hn be a Hadamard matrix of type A(n, k), i.e., Hn has the form of Eq. (8.14),
where

$$v_i v_i^T = 4, \qquad v_i v_j^T = 0, \quad i \neq j, \quad i, j = 1, 2, 3, 4. \eqno(8.16)$$


We shall prove that (0, ±1) matrices Ai , i = 1, 2, . . . , k of size n × n/k satisfy
the conditions of Eq. (8.15). The first two conditions are obvious. The third condition
follows from the relationship
$$H_n H_n^T = 4 \sum_{i=1}^{4} A_i A_i^T = n I_n. \eqno(8.17)$$

Consider the last two conditions of Eq. (8.15). Note that the Hadamard matrix Hn
has the form

Hn = (+ + ++) ⊗ A1 + (+ + −−) ⊗ A2 + (+ − −+) ⊗ A3 + (+ − +−) ⊗ A4. (8.18)

We can also write Hn as

Hn = [(++) ⊗ C1 + (+−) ⊗ C2 , (++) ⊗ C3 + (+−) ⊗ C4 ] , (8.19)

where, by Theorem 7.1.1 (see Chapter 7),

C1 = A1 + A2 , C2 = A3 + A4 , C3 = A1 − A2 , C4 = A3 − A4 (8.20)

satisfy the conditions of Eq. (7.12).


Hence, taking into account the last two conditions of Eq. (7.10), the matrices Ai
satisfy the following equations:
$$\begin{aligned}
A_1^T A_1 + A_1^T A_2 + A_2^T A_1 + A_2^T A_2 &= \tfrac{n}{2} I_{n/4},\\
A_1^T A_3 + A_1^T A_4 + A_2^T A_3 + A_2^T A_4 &= 0,\\
A_1^T A_1 - A_1^T A_2 + A_2^T A_1 - A_2^T A_2 &= 0,\\
A_1^T A_3 - A_1^T A_4 + A_2^T A_3 - A_2^T A_4 &= 0;
\end{aligned} \eqno(8.21a)$$


$$\begin{aligned}
A_3^T A_1 + A_3^T A_2 + A_4^T A_1 + A_4^T A_2 &= 0,\\
A_3^T A_3 + A_3^T A_4 + A_4^T A_3 + A_4^T A_4 &= \tfrac{n}{2} I_{n/4},\\
A_3^T A_1 - A_3^T A_2 + A_4^T A_1 - A_4^T A_2 &= 0,\\
A_3^T A_3 - A_3^T A_4 + A_4^T A_3 - A_4^T A_4 &= 0;
\end{aligned} \eqno(8.21b)$$

$$\begin{aligned}
A_1^T A_1 + A_1^T A_2 - A_2^T A_1 - A_2^T A_2 &= 0,\\
A_1^T A_3 + A_1^T A_4 - A_2^T A_3 - A_2^T A_4 &= 0,\\
A_1^T A_1 - A_1^T A_2 - A_2^T A_1 + A_2^T A_2 &= \tfrac{n}{2} I_{n/4},\\
A_1^T A_3 - A_1^T A_4 - A_2^T A_3 + A_2^T A_4 &= 0;
\end{aligned} \eqno(8.21c)$$

$$\begin{aligned}
A_3^T A_1 + A_3^T A_2 - A_4^T A_1 - A_4^T A_2 &= 0,\\
A_3^T A_3 + A_3^T A_4 - A_4^T A_3 - A_4^T A_4 &= 0,\\
A_3^T A_1 - A_3^T A_2 - A_4^T A_1 + A_4^T A_2 &= 0,\\
A_3^T A_3 - A_3^T A_4 - A_4^T A_3 + A_4^T A_4 &= \tfrac{n}{2} I_{n/4}.
\end{aligned} \eqno(8.21d)$$
Solving these systems, we find that
$$A_i^T A_j = 0, \quad i \neq j, \qquad A_i^T A_i = \frac{n}{4} I_{n/4}, \quad i, j = 1, 2, 3, 4. \eqno(8.22)$$
Sufficiency: Let nonzero (0, ±1) matrices Ai, i = 1, 2, 3, 4 satisfy the conditions of
Eq. (8.15). We shall show that the matrix in Eq. (8.18) is a Hadamard matrix.
Indeed, calculating Hn HnT and HnT Hn , we find that

$$H_n H_n^T = 4 \sum_{i=1}^{4} A_i A_i^T = H_n^T H_n = \sum_{i=1}^{4} v_i^T v_i \otimes \frac{n}{4} I_{n/4} = n I_n. \eqno(8.23)$$

Now, we formulate a Hadamard matrix construction theorem, which makes it
possible to decompose such a matrix by 2^n orthogonal vectors of size 2^n, with n = 1, 2, 3, . . . , k.

Theorem 8.2.2: The Kronecker product of k Hadamard matrices H1 ⊗H2 ⊗· · ·⊗Hk


may be decomposed by 2^k orthogonal (+1, −1) vectors of size 2^k.

Proof: Let Hi, i = 1, 2, . . . , k be Hadamard matrices of orders ni. According to
Eq. (8.4), the matrices H1 and H2 can be represented as

H1 = (++) ⊗ A11 + (+−) ⊗ A12 ,


(8.24)
H2 = (++) ⊗ A21 + (+−) ⊗ A22 .


We can see that

H1 ⊗ H2 = [(++) ⊗ A11 + (+−) ⊗ A12 ] ⊗ [(++) ⊗ A21 + (+−) ⊗ A22 ]


= (+ + ++) ⊗ [A11 ⊗ A21 ] + (+ + −−) ⊗ [A11 ⊗ A22 ]
+ (+ − −+) ⊗ [A12 ⊗ A22 ] + (+ − +−) ⊗ [A12 ⊗ A21 ]
= (+ + ++) ⊗ D1 + (+ + −−) ⊗ D2
+ (+ − −+) ⊗ D3 + (+ − +−) ⊗ D4 , (8.25)

where D1 = A11 ⊗ A21 , D2 = A11 ⊗ A22 , D3 = A12 ⊗ A22 , D4 = A12 ⊗ A21 .


This means that H1 ⊗ H2 is an A(n1 n2 , 4)-type Hadamard matrix. Continuing the
above construction for 3, 4, . . . , k matrices, we prove Theorem 8.2.2.

Below, we give an algorithm based on this theorem. Note that any Hadamard
matrix Hn of order n can be presented as

Hn = (++) ⊗ X + (+−) ⊗ Y, (8.26)

where X, Y are (0, ±1) matrices with dimension n × n/2. Examples of the
decomposition of Hadamard matrices are given below.

Example 8.2.1: (1) The following Hadamard matrix of order 4 can be decom-
posed:
(a) via two vectors (+ +), (+ −),

$$H_4 = \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix} = (++) \otimes \begin{pmatrix} + & + \\ 0 & 0 \\ + & - \\ 0 & 0 \end{pmatrix} + (+-) \otimes \begin{pmatrix} 0 & 0 \\ + & + \\ 0 & 0 \\ + & - \end{pmatrix}, \eqno(8.27)$$

(b) via four vectors (+ + + +), (+ − + −), (+ + − −), (+ − − +),

$$\begin{aligned}
H_4 = {}& (++++) \otimes \begin{pmatrix} + \\ 0 \\ 0 \\ 0 \end{pmatrix} + (+-+-) \otimes \begin{pmatrix} 0 \\ + \\ 0 \\ 0 \end{pmatrix}\\
&+ (++--) \otimes \begin{pmatrix} 0 \\ 0 \\ + \\ 0 \end{pmatrix} + (+--+) \otimes \begin{pmatrix} 0 \\ 0 \\ 0 \\ + \end{pmatrix}.
\end{aligned} \eqno(8.28)$$


(2) The following Hadamard matrix of order 8 can be decomposed:


$$H_8 = \begin{pmatrix}
+ & + & + & + & + & + & + & + \\
+ & - & + & - & + & - & + & - \\
+ & + & - & - & + & + & - & - \\
+ & - & - & + & + & - & - & + \\
+ & + & + & + & - & - & - & - \\
+ & - & + & - & - & + & - & + \\
+ & + & - & - & - & - & + & + \\
+ & - & - & + & - & + & + & -
\end{pmatrix} \eqno(8.29)$$

(a) via two vectors (+ +), (+ −),


$$H_8 = (++) \otimes \begin{pmatrix}
+ & + & + & + \\
0 & 0 & 0 & 0 \\
+ & - & + & - \\
0 & 0 & 0 & 0 \\
+ & + & - & - \\
0 & 0 & 0 & 0 \\
+ & - & - & + \\
0 & 0 & 0 & 0
\end{pmatrix} + (+-) \otimes \begin{pmatrix}
0 & 0 & 0 & 0 \\
+ & + & + & + \\
0 & 0 & 0 & 0 \\
+ & - & + & - \\
0 & 0 & 0 & 0 \\
+ & + & - & - \\
0 & 0 & 0 & 0 \\
+ & - & - & +
\end{pmatrix}, \eqno(8.30)$$

(b) via four vectors (+ + + +), (+ − + −), (+ + − −), (+ − − +),


$$\begin{aligned}
H_8 = {}& (++++) \otimes \begin{pmatrix} + & + \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & - \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{pmatrix} + (+-+-) \otimes \begin{pmatrix} 0 & 0 \\ + & + \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & - \\ 0 & 0 \\ 0 & 0 \end{pmatrix}\\
&+ (++--) \otimes \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ + & + \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & - \\ 0 & 0 \end{pmatrix} + (+--+) \otimes \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & + \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ + & - \end{pmatrix}.
\end{aligned} \eqno(8.31)$$

Algorithm 8.2.1: Construct a Hadamard matrix via a decomposition by four orthogonal vectors.
Input: The Hadamard matrices H1 and H2 of orders m and n, respectively.


Step 1. Represent matrices H1 and H2 as follows:


H1 = (++) ⊗ X + (+−) ⊗ Y,
(8.32)
H2 = (++) ⊗ Z + (+−) ⊗ W.
Step 2. Construct the following matrices:
P1 = X ⊗ Z, P2 = X ⊗ W, P3 = Y ⊗ Z, P4 = Y ⊗ W. (8.33)
Step 3. Design the matrix of order mn as follows:
Hmn = (+ + ++) ⊗ P1 + (+ + −−) ⊗ P2 + (+ − −+) ⊗ P3 + (+ − +−) ⊗ P4 . (8.34)

Output: The Hadamard matrix Hmn of order mn.
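A hedged sketch of Algorithm 8.2.1, again reusing decompose2 from Section 8.1; the Kronecker orderings follow the same reading of the book's v ⊗ A convention.

    import numpy as np

    def construct_by_four_vectors(H1, H2):
        """Hedged sketch of Algorithm 8.2.1 for Hadamard H1, H2 of orders m, n."""
        X, Y = decompose2(H1)                    # Step 1, Eq. (8.32)
        Z, W = decompose2(H2)
        P1, P2 = np.kron(X, Z), np.kron(X, W)    # Step 2, Eq. (8.33)
        P3, P4 = np.kron(Y, Z), np.kron(Y, W)
        v = [np.array(t) for t in
             [(1, 1, 1, 1), (1, 1, -1, -1), (1, -1, -1, 1), (1, -1, 1, -1)]]
        # Step 3, Eq. (8.34): expand the columns of each Pi by its vector
        return sum(np.kron(Pi, vi) for Pi, vi in zip([P1, P2, P3, P4], v))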

8.3 FHT of Order n ≡ 0 (mod 4)


In this section, fast algorithms for more general HTs will be derived. As was
mentioned above, a classical fast WHT operates only with 2^k-dimensional vectors.
Below, we give an FHT algorithm for the cases when the order of the Hadamard
matrix is not a power of 2.2,5,6 The forward HT is defined as F = HX. We
derive the FHT algorithm based on Theorem 8.2.1.
Algorithm 8.3.1: General FHT algorithm.
Input: An A(n, k)-type Hadamard matrix Hn , X = (x1, x2 , . . . , xn )T signal vector
and Pi column vectors of dimension n/k, whose i’th element is equal to 1,
and whose remaining elements are equal to 0.
Step 1. Decompose Hn as
Hn = v1 ⊗ A1 + v2 ⊗ A2 + · · · + vk ⊗ Ak . (8.35)
Step 2. Split the input vector X into n/k parts as follows:
$$X = \sum_{i=1}^{n/k} X_i \otimes P_i, \eqno(8.36)$$

where Xi is a column vector of the form


$$X_i^T = \left( f_{k(i-1)+1}, f_{k(i-1)+2}, \ldots, f_{k(i-1)+k} \right), \quad i = 1, 2, \ldots, \frac{n}{k}. \eqno(8.37)$$
Step 3. Perform the fast WHTs:
$$C_i = H_k X_i = \left( c_1^i, c_2^i, \ldots, c_k^i \right)^T, \quad i = 1, 2, \ldots, \frac{n}{k}, \eqno(8.38)$$
where H_k is the k × k matrix whose rows are the vectors v_1, v_2, . . . , v_k.
Step 4. Compute
$$B_i = \sum_{j=1}^{k} c_j^i A_j^i, \quad i = 1, 2, \ldots, \frac{n}{k}, \eqno(8.39)$$

where Aij is the i’th column of matrix A j .


Step 5. Compute the spectral elements of transform as

F = B1 + B2 + · · · + Bn/k . (8.40)

Output: The n ≡ 0 (mod 4)-point HT coefficients.
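The algorithm is compact in code. Below is a hedged sketch, assuming the decomposition matrices A1, . . . , Ak and a k × k matrix V whose rows are v1, . . . , vk are given; the variable names are ours.

    import numpy as np

    def general_fht(A_list, V, x):
        """Hedged sketch of Algorithm 8.3.1 for an A(n, k)-type Hadamard matrix."""
        k, n = V.shape[0], x.size
        F = np.zeros(n, dtype=x.dtype)
        for j in range(n // k):
            Xj = x[k * j : k * (j + 1)]          # Step 2: j'th block of the input
            C = V @ Xj                           # Step 3: k-point WHT of the block
            for i in range(k):                   # Steps 4-5: scatter C[i] into the
                F += C[i] * A_list[i][:, j]      # j'th column of A_i and accumulate
        return F

For Example 8.3.1 below, A_list = [A1, A2] from Eq. (8.43) and V = [[1, 1], [1, −1]]; the loop then reproduces the vectors B_j of Table 8.1.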


Now we give an example of the HT derived above.

Example 8.3.1: The 12-point FHT algorithm. Consider the block-cyclic Hada-
mard matrix H12 of order 12 with first block row (Q0 , Q1 , Q1 ), i.e.,

$$H_{12} = Q_0 \otimes I_3 + Q_1 \otimes U + Q_1 \otimes U^2, \eqno(8.41)$$

where
$$Q_0 = \begin{pmatrix} + & + & + & + \\ - & + & - & + \\ - & + & + & - \\ - & - & + & + \end{pmatrix}, \qquad Q_1 = \begin{pmatrix} + & - & - & - \\ + & + & + & - \\ + & - & + & + \\ + & + & - & + \end{pmatrix}. \eqno(8.42)$$
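The block-cyclic structure of Eq. (8.41) is easy to reproduce numerically. In the sketch below, we assume U is the cyclic shift matrix of order 3, so that the first block row of H12 is (Q0, Q1, Q1).

    import numpy as np

    Q0 = np.array([[1, 1, 1, 1], [-1, 1, -1, 1], [-1, 1, 1, -1], [-1, -1, 1, 1]])
    Q1 = np.array([[1, -1, -1, -1], [1, 1, 1, -1], [1, -1, 1, 1], [1, 1, -1, 1]])
    U = np.roll(np.eye(3, dtype=int), 1, axis=1)   # cyclic shift matrix of order 3

    # Eq. (8.41): block-cyclic H12 with first block row (Q0, Q1, Q1)
    H12 = np.kron(np.eye(3, dtype=int), Q0) + np.kron(U, Q1) + np.kron(U @ U, Q1)
    assert (H12 @ H12.T == 12 * np.eye(12)).all()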

Algorithm 8.3.2:
Input: An A(12, 2)-type Hadamard matrix H12 , X = (x1 , x2 , . . . , x12 )T signal
vector and Pi column vectors of dimension 12/2 = 6, whose i’th element
is equal to 1, and whose remaining elements are equal to 0.
Step 1. Decompose H12 as H12 = (+ +) ⊗ A1 + (+−) ⊗ A2 , where

$$A_1 = \begin{pmatrix}
+ & + & 0 & - & 0 & - \\
0 & 0 & + & 0 & + & 0 \\
0 & 0 & 0 & + & 0 & + \\
- & + & + & 0 & + & 0 \\
0 & - & + & + & 0 & - \\
+ & 0 & 0 & 0 & + & 0 \\
0 & + & 0 & 0 & 0 & + \\
+ & 0 & - & + & + & 0 \\
0 & - & 0 & - & + & + \\
+ & 0 & + & 0 & 0 & 0 \\
0 & + & 0 & + & 0 & 0 \\
+ & 0 & + & 0 & - & +
\end{pmatrix}, \qquad
A_2 = \begin{pmatrix}
0 & 0 & + & 0 & + & 0 \\
- & - & 0 & + & 0 & + \\
- & + & + & 0 & + & 0 \\
0 & 0 & 0 & - & 0 & - \\
+ & 0 & 0 & 0 & + & 0 \\
0 & + & - & - & 0 & + \\
+ & 0 & - & + & + & 0 \\
0 & - & 0 & 0 & 0 & - \\
+ & 0 & + & 0 & 0 & 0 \\
0 & + & 0 & + & - & - \\
+ & 0 & + & 0 & - & + \\
0 & - & 0 & - & 0 & 0
\end{pmatrix}. \eqno(8.43)$$

Step 2. Split the input vector X into six parts as

f = X1 ⊗ P1 + X2 ⊗ P2 + · · · + X6 ⊗ P6 , (8.44)


Table 8.1 Computations of 12-dimensional vectors B j , j = 1, 2, 3, 4, 5, 6.

B1 B2 B3 B4 B5 B6

f1 + f2 f3 + f 4 f5 − f6 − f7 − f8 f9 − f10 − f11 − f12


− f1 + f2 − f3 + f4 f5 + f6 f7 − f 8 f9 + f10 f11 − f12
− f1 + f2 f3 − f 4 f5 − f6 f7 + f 8 f9 − f10 f11 + f12
− f1 − f2 f3 + f 4 f5 + f6 − f7 + f8 f9 + f10 − f11 + f12
f1 − f2 − f3 − f4 f5 + f6 f7 + f 8 f9 − f10 − f11 − f12
f1 + f2 f3 − f 4 − f5 + f6 − f7 + f8 f9 + f10 f11 − f12
f1 − f2 f3 + f 4 − f5 + f6 f7 − f 8 f9 − f10 f11 + f12
f1 + f2 − f3 + f4 − f5 − f6 f7 + f 8 f9 + f10 − f11 + f12
f1 − f2 − f3 − f4 f5 − f6 − f7 − f8 f9 + f10 f11 + f12
f1 + f2 f3 − f 4 f5 + f6 f7 − f 8 − f9 + f10 − f11 + f12
f1 − f2 f3 + f 4 f5 − f6 f7 + f 8 − f9 + f10 f11 − f12
f1 + f2 f3 + f 4 f5 + f6 − f7 + f8 − f9 − f10 f11 + f12

where
$$X_1 = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}, \quad X_2 = \begin{pmatrix} f_3 \\ f_4 \end{pmatrix}, \quad X_3 = \begin{pmatrix} f_5 \\ f_6 \end{pmatrix}, \quad X_4 = \begin{pmatrix} f_7 \\ f_8 \end{pmatrix}, \quad X_5 = \begin{pmatrix} f_9 \\ f_{10} \end{pmatrix}, \quad X_6 = \begin{pmatrix} f_{11} \\ f_{12} \end{pmatrix}. \eqno(8.45)$$
Step 3. Perform the fast WHTs
$$\begin{pmatrix} + & + \\ + & - \end{pmatrix} X_i \quad \text{for } i = 1, 2, \ldots, 6. \eqno(8.46)$$
Step 4. Compute
B j = (++)X j ⊗ A1 P j + (+−)X j ⊗ A2 P j , j = 1, 2, . . . , 6, (8.47)
where the values of B j are shown in Table 8.1.
Step 5. Compute the spectral elements of the transform as F = B1 + B2 +
· · · + B6 .
Output: The 12-point HT coefficients.
Flow graphs of 12-dimensional vectors B j , j = 1, 2, . . . , 6 computations are
given in Fig. 8.1.
Note that A1 and A2 are block-cyclic matrices of dimension 12 × 6 with the first
block rows represented as (R0 , R1 , R1 ) and (T 0 , T 1 , T 1 ), where
$$R_0 = \begin{pmatrix} + & + \\ 0 & 0 \\ 0 & 0 \\ - & + \end{pmatrix}, \quad R_1 = \begin{pmatrix} 0 & - \\ + & 0 \\ 0 & + \\ + & 0 \end{pmatrix}, \quad T_0 = \begin{pmatrix} 0 & 0 \\ - & - \\ - & + \\ 0 & 0 \end{pmatrix}, \quad T_1 = \begin{pmatrix} + & 0 \\ 0 & + \\ + & 0 \\ 0 & - \end{pmatrix}. \eqno(8.48)$$
Thus,
$$A_1 = \begin{pmatrix} R_0 & R_1 & R_1 \\ R_1 & R_0 & R_1 \\ R_1 & R_1 & R_0 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} T_0 & T_1 & T_1 \\ T_1 & T_0 & T_1 \\ T_1 & T_1 & T_0 \end{pmatrix}. \eqno(8.49)$$


Figure 8.1 Flow graphs of the 12-dimensional vectors B j , j = 1, 2, . . . , 6 computations.

Note that above we ignored the interior structure of a Hadamard matrix. Now
we examine it in more detail. We see that (1) the Hadamard matrix H12 is a block-
cyclic, block-symmetric matrix; (2) the matrices A1 and A2 of Eq. (8.43) are also block-cyclic,
block-symmetric matrices; and (3) the 12-point HT requires only 60 addition
operations.
Let us prove the last statement. Indeed, to compute all elements of the vectors
B1 and B2 , it is necessary to perform four addition operations, i.e., two 2-point
HTs are necessary. Then, it is not difficult to see that the computation of the sum
B1 + B2 requires only 12 additions because there are four repetition pairs, as well as
B1 (4+i)+B2 (4+i) = B1 (8+i)+B2 (8+i), for i = 1, 2, 3, 4. A similar situation occurs
for computing B3 + B4 and B5 + B6 . Hence, the complete 12-point HT requires only
60 addition operations.
Now, we continue Example 8.3.1 for an inverse transform. Note that the 12-point
inverse HT can be computed as
$$X = \frac{1}{12} H_{12}^T Y. \eqno(8.50)$$


Algorithm 8.3.3: The 12-point inverse HT.


Input: An A(12, 2)-type Hadamard matrix $H_{12}^T$, Y = (y_1, y_2, . . . , y_{12})^T signal
vector (or spectral coefficients) and Pi column-vectors of dimension
12/2 = 6, whose i’th element is equal to 1, and whose remaining elements
are equal to 0.
Step 1. Decompose $H_{12}^T$ as $H_{12}^T = (++) \otimes B_1 + (+-) \otimes B_2$, where
$$B_1 = \begin{pmatrix}
0 & - & + & + & + & + \\
+ & 0 & 0 & 0 & 0 & 0 \\
0 & + & 0 & 0 & 0 & 0 \\
+ & 0 & - & + & - & + \\
+ & + & 0 & - & + & + \\
0 & 0 & + & 0 & 0 & 0 \\
0 & 0 & 0 & + & 0 & 0 \\
- & + & + & 0 & - & + \\
+ & + & + & + & 0 & - \\
0 & 0 & 0 & 0 & + & 0 \\
0 & 0 & 0 & 0 & 0 & + \\
- & + & - & + & + & 0
\end{pmatrix}, \qquad
B_2 = \begin{pmatrix}
+ & 0 & 0 & 0 & 0 & 0 \\
0 & + & - & - & - & - \\
+ & 0 & - & + & - & + \\
0 & - & 0 & 0 & 0 & 0 \\
0 & 0 & + & 0 & 0 & 0 \\
- & - & 0 & + & - & - \\
- & + & + & 0 & - & + \\
0 & 0 & 0 & + & 0 & 0 \\
0 & 0 & 0 & 0 & + & 0 \\
- & - & - & - & 0 & + \\
- & + & - & + & + & 0 \\
0 & 0 & 0 & 0 & 0 & +
\end{pmatrix}. \eqno(8.51)$$

Step 2. Split the input vector Y into six parts as Y = Y1 ⊗P1 +Y2 ⊗P2 +· · ·+
Y6 ⊗ P6 , where
     
$$Y_1 = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}, \quad Y_2 = \begin{pmatrix} y_3 \\ y_4 \end{pmatrix}, \quad Y_3 = \begin{pmatrix} y_5 \\ y_6 \end{pmatrix}, \quad Y_4 = \begin{pmatrix} y_7 \\ y_8 \end{pmatrix}, \quad Y_5 = \begin{pmatrix} y_9 \\ y_{10} \end{pmatrix}, \quad Y_6 = \begin{pmatrix} y_{11} \\ y_{12} \end{pmatrix}. \eqno(8.52)$$

Step 3. Perform the WHTs,


 
$$\begin{pmatrix} + & + \\ + & - \end{pmatrix} Y_i \quad \text{for } i = 1, 2, \ldots, 6. \eqno(8.53)$$

Step 4. Compute D j = (+ +)Y j ⊗ B1 P j + (+ −)Y j ⊗ B2 P j , j = 1, 2, . . . , 6,


where the values of D j are shown in Table 8.2.
Step 5. Compute F = D1 + D2 + · · · + D6 .
Output: The 12-point inverse HT coefficients (i.e., input signal x).
Note that
$$B_1 = \begin{pmatrix} R_1 & R_0 & R_0 \\ R_0 & R_1 & R_0 \\ R_0 & R_0 & R_1 \end{pmatrix}, \qquad B_2 = \begin{pmatrix} T_1 & T_0 & T_0 \\ T_0 & T_1 & T_0 \\ T_0 & T_0 & T_1 \end{pmatrix}, \eqno(8.54)$$
where R0, R1 and T0, T1 have the form of Eq. (8.48). Note also that the flow graphs
of the computation of the vectors D_i have a structure similar to that shown in Fig. 8.1.


Table 8.2 Computations of 12-dimensional vectors D j , j = 1, 2, 3, 4, 5, 6.

D1 D2 D3 D4 D5 D6

y1 − y 2 −y3 − y4 y5 + y6 y7 + y 8 y9 + y10 y11 + y12


y1 + y 2 y3 − y 4 −y5 + y6 −y7 + y8 −y9 + y10 −y11 + y12
y1 − y 2 y3 + y 4 −y5 + y6 y7 − y 8 −y9 + y10 y11 − y12
y1 + y 2 −y3 + y4 −y5 − y6 y7 + y 8 −y9 − y10 y11 + y12
y1 + y 2 y3 + y 4 y5 − y6 −y7 − y8 y9 + y10 y11 + y12
y1 − y 2 −y3 + y4 −y5 − y6 y7 − y 8 −y9 + y10 −y11 + y12
y1 − y 2 y3 − y 4 y5 − y6 y7 + y 8 −y9 + y10 y11 − y12
−y1 − y2 y3 + y 4 y5 + y6 −y7 − y8 −y9 − y10 y11 + y12
y1 + y 2 y3 + y 4 y5 + y6 y7 + y 8 y9 − y10 −y11 − y12
y1 − y 2 −y3 + y4 −y5 + y6 −y7 + y8 y9 + y10 y11 − y12
y1 − y 2 y3 − y 4 −y5 + y6 y7 − y 8 y9 − y10 y11 + y12
−y1 − y2 y3 + y 4 −y5 − y6 y7 + y 8 y9 + y10 −y11 + y12

Example 8.3.2: The 20-point FHT algorithm.


Consider block-cyclic Hadamard matrix H20 of order 20 with the first block row
(Q0 , Q1 , Q2 , Q2 , Q1 ), where
$$Q_0 = \begin{pmatrix} + & + & + & + \\ - & + & - & + \\ - & + & + & - \\ - & - & + & + \end{pmatrix}, \quad Q_1 = \begin{pmatrix} - & - & + & - \\ + & - & + & + \\ - & - & - & + \\ + & - & - & - \end{pmatrix}, \quad Q_2 = \begin{pmatrix} - & - & - & + \\ + & - & - & - \\ + & + & - & + \\ - & + & - & - \end{pmatrix}. \eqno(8.55)$$

Input: An A(20, 2)-type Hadamard matrix H20 , f = ( f1 , f2 , . . . f20 )T signal vector


and Pi column vectors of length 10, whose i’th element is equal to 1, and
whose remaining elements are equal to 0.
Step 1. Decompose H20 by H20 = (+ +)⊗A1 +(+ −)⊗A2 , where A1 and A2
are block-cyclic matrices with first block row (R0 , R1 , R2 , R2 , R1 )
and (T 0 , T 1 , T 2 , T 2 , T 1 ), respectively, and
$$R_0 = \begin{pmatrix} + & + \\ 0 & 0 \\ 0 & 0 \\ - & + \end{pmatrix}, \quad R_1 = \begin{pmatrix} - & 0 \\ 0 & + \\ - & 0 \\ 0 & - \end{pmatrix}, \quad R_2 = \begin{pmatrix} - & 0 \\ 0 & - \\ + & 0 \\ 0 & - \end{pmatrix},$$
$$T_0 = \begin{pmatrix} 0 & 0 \\ - & - \\ - & + \\ 0 & 0 \end{pmatrix}, \quad T_1 = \begin{pmatrix} 0 & + \\ + & 0 \\ 0 & - \\ + & 0 \end{pmatrix}, \quad T_2 = \begin{pmatrix} 0 & - \\ + & 0 \\ 0 & - \\ - & 0 \end{pmatrix}. \eqno(8.56)$$
Step 2. Split the input vector f into 10 parts as f = X1 ⊗ P1 + X2 ⊗ P2 +
· · · + X10 ⊗ P10 , where
 
$$X_i = \begin{pmatrix} f_{2i-1} \\ f_{2i} \end{pmatrix}, \quad i = 1, 2, \ldots, 10. \eqno(8.57)$$
Step 3. Perform the fast WHTs $\begin{pmatrix} + & + \\ + & - \end{pmatrix} X_i$.


Table 8.3 Computations of 20-dimensional vectors B j , j = 1, 2, 3, 4, 5.

B1 B2 B3 B4 B5

f1 + f2 f3 + f 4 − f5 − f6 f7 − f 8 − f9 − f10
− f1 + f2 − f3 + f4 f5 − f6 f7 + f 8 f9 − f10
− f1 + f2 f3 − f 4 − f5 − f6 − f7 + f8 f9 + f10
− f1 − f2 f3 + f 4 f5 − f6 − f7 − f8 − f9 + f10
− f1 − f2 f3 − f 4 f5 + f6 f7 + f 8 − f9 − f10
f1 − f2 f3 + f 4 − f5 + f6 − f7 + f8 f9 − f10
− f1 − f2 − f3 + f4 − f5 + f6 f7 − f 8 − f9 − f10
f1 − f2 − f3 − f4 − f5 − f6 f7 + f 8 f9 − f10
− f1 − f2 − f3 + f4 − f5 − f6 f7 − f 8 f9 + f10
f1 − f2 − f3 − f4 f5 − f6 f7 − f 8 − f9 + f10
f1 + f2 − f3 + f4 − f5 − f6 − f7 + f8 − f9 + f10
− f1 + f2 − f3 − f4 f5 − f6 − f7 − f8 − f9 − f10
− f1 − f2 − f3 + f4 − f5 − f6 − f7 + f8 − f9 − f10
f1 − f2 − f3 − f4 f5 − f6 − f7 − f8 f9 − f10
f1 + f2 − f3 + f4 f5 + f6 − f7 + f8 − f9 − f10
− f1 + f2 − f3 − f4 − f5 + f6 − f7 − f8 f9 − f10
− f1 − f2 f3 − f 4 − f5 − f6 − f7 + f8 − f9 − f10
f1 − f2 f3 + f 4 f5 − f6 − f7 − f8 f9 − f10
− f1 − f2 − f3 + f4 f5 + f6 − f7 + f8 f9 + f10
f1 − f2 − f3 − f4 − f5 + f6 − f7 − f8 − f9 + f10

Table 8.4 Computations of 20-dimensional vectors B j , j = 6, 7, 8, 9, 10.

B6 B7 B8 B9 B10

− f11 + f12 − f13 − f14 − f15 + f16 − f17 − f18 f19 − f20
− f11 − f12 f13 − f14 − f15 − f16 f17 − f18 f19 + f20
− f11 + f12 f13 + f14 − f15 + f16 − f17 − f18 − f19 + f20
− f11 − f12 − f13 + f14 − f15 − f16 f17 − f18 − f19 − f20
f11 − f12 − f13 − f14 − f15 + f16 − f17 − f18 f19 + f20
f11 + f12 f13 − f14 − f15 − f16 f17 − f18 − f19 − f20
− f11 + f12 f13 + f14 − f15 + f16 f17 + f18 − f19 + f20
− f11 − f12 − f13 + f14 − f15 − f16 − f17 − f18 − f19 − f20
f11 + f12 − f13 − f14 f15 − f16 − f17 − f18 − f19 + f20
− f11 + f12 f13 − f14 f15 + f16 f17 − f18 − f19 − f20
f11 − f12 − f13 − f14 − f15 + f16 f17 + f18 − f19 + f20
f11 + f12 f13 − f14 − f15 − f16 − f17 + f18 − f19 − f20
f11 − f12 f13 + f14 f15 + f16 − f17 − f18 f19 − f20
f11 + f12 − f13 + f14 − f15 + f16 f17 − f18 f19 + f20
− f11 + f12 − f13 + f14 f15 − f16 − f17 − f18 − f19 + f20
− f11 − f12 − f13 − f14 f15 + f16 f17 − f18 − f19 − f20
− f11 + f12 − f13 − f14 f15 − f16 f17 + f18 f19 + f20
− f11 − f12 f13 − f14 f15 + f16 − f17 + f18 − f19 + f20
− f11 + f12 − f13 − f14 − f15 + f16 − f17 + f18 f19 − f20
− f11 − f12 f13 − f14 − f15 − f16 − f17 − f18 f19 + f20

Step 4. Compute B j = (+ +)X j ⊗ A1 P j +(+ −)X j ⊗ A2 P j , where the values


of B j are given in Tables 8.3 and 8.4.
Step 5. Compute the spectral elements of the transform as F = B1 + B2 +
· · · + B10 .
Output: 20-point HT coefficients.


Figure 8.2 Flow graphs of 20-dimensional vectors C1 and C2 computations.

Note that above we ignored the interior structure of the Hadamard matrix. Now
we examine it in more detail. We can see that to compute all of the elements of
vector Bi , it is necessary to perform two addition operations, i.e., 2-point HTs are
necessary. Then, it is not difficult to see that the computation of the sum B_{2i−1} + B_{2i}
requires only eight additions. Hence, the complete 20-point HT requires only 140
addition operations.
We introduce the notation C_i = B_{2i−1} + B_{2i}, i = 1, 2, 3, 4, 5; the spectral elements
of the vector F can be calculated as F = C1 + C2 + · · · + C5 . The flow graphs of
computation of Ci are given in Figs. 8.2–8.4.

8.4 FHT via Four-Vector Representation


In this section, we present the four-vector-based N-point FHT algorithm. We
demonstrate this algorithm for N = 24, building on the example above.
Algorithm 8.4.1:
Input: The signal vector f = ( f1 , f2 , . . . , f24 )T .


Figure 8.3 Flow graphs of 20-dimensional vectors C3 and C4 computations.

Step 1. Construct the following matrices:


       
A O A O12
D1 = 1 , D2 = 12 , D3 = 2 , D4 = , (8.58)
O12 A1 O12 A2

where O₁₂ is a zero matrix of dimension 12 × 6, and the matrices A₁
and A₂ have the form of Eq. (8.43).
Step 2. Decompose H24 matrix as
H24 = v1 ⊗ D1 + v2 ⊗ D2 + v3 ⊗ D3 + v4 ⊗ D4 . (8.59)
Step 3. Make vectors
$$X_1 = \begin{pmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \end{pmatrix}, \quad X_2 = \begin{pmatrix} f_5 \\ f_6 \\ f_7 \\ f_8 \end{pmatrix}, \quad X_3 = \begin{pmatrix} f_9 \\ f_{10} \\ f_{11} \\ f_{12} \end{pmatrix}, \quad X_4 = \begin{pmatrix} f_{13} \\ f_{14} \\ f_{15} \\ f_{16} \end{pmatrix}, \quad X_5 = \begin{pmatrix} f_{17} \\ f_{18} \\ f_{19} \\ f_{20} \end{pmatrix}, \quad X_6 = \begin{pmatrix} f_{21} \\ f_{22} \\ f_{23} \\ f_{24} \end{pmatrix}. \eqno(8.60)$$



Figure 8.4 Flow graph of 20-dimensional vectors C5 computations.

Step 4. Perform the 4-point fast WHTs on the vectors Xi ,


⎛ ⎞
⎜⎜⎜+ + + +⎟⎟⎟
⎜⎜⎜+ − + −⎟⎟⎟
Ci = H4 Xi = ⎜⎜⎜⎜ ⎟⎟ X , i = 1, 2, . . . , 6.
⎜⎝⎜+ + − −⎟⎟⎟⎠⎟ i
(8.61)
+ − − +
Step 5. Calculate

$$R_1(j) = v_1 X_j \otimes D_1 P_j, \quad R_2(j) = v_2 X_j \otimes D_2 P_j, \quad R_3(j) = v_3 X_j \otimes D_3 P_j, \quad R_4(j) = v_4 X_j \otimes D_4 P_j. \eqno(8.62)$$


Table 8.5 Computations of vectors B j , j = 1, 2, 3.

B1 B2 B3

f1 + f2 + f3 + f4 f5 + f 6 + f 7 + f 8 f9 + f10 − f11 − f12


− f1 − f2 + f3 + f4 − f5 − f6 + f7 + f8 f9 + f10 + f11 + f12
− f1 − f2 + f3 + f4 f5 + f 6 − f 7 − f 8 f9 + f10 − f11 − f12
− f1 − f2 − f3 − f4 f5 + f 6 + f 7 + f 8 f9 + f10 + f11 + f12
f1 + f 2 − f 3 − f 4 − f5 − f 6 − f 7 − f 8 f9 + f10 + f11 + f12
f1 + f 2 + f 3 + f 4 f5 + f 6 − f 7 − f 8 − f9 − f10 + f11 + f12
f1 + f 2 − f 3 − f 4 f5 + f6 + f7 + f8 − f9 − f10 + f11 + f12
f1 + f 2 + f 3 + f 4 − f5 − f6 + f7 + f8 f9 + f10 + f11 + f12
f1 + f 2 − f 3 − f 4 − f5 − f 6 − f 7 − f 8 f9 + f10 − f11 − f12
f1 + f 2 + f 3 + f 4 f5 + f6 − f7 − f8 f9 + f10 + f11 + f12
f1 + f 2 − f 3 − f 4 f5 + f6 + f7 + f8 f9 + f10 − f11 − f12
f1 + f 2 + f 3 + f 4 − f5 − f6 + f7 + f8 f9 + f10 + f11 + f12
f1 − f 2 + f 3 − f 4 f5 − f 6 + f 7 − f 8 f9 − f10 − f11 + f12
− f1 + f 2 + f 3 − f 4 − f5 + f6 + f7 − f8 f9 − f10 + f11 − f12
− f1 + f 2 + f 3 − f 4 f5 − f 6 − f 7 + f 8 f9 − f10 − f11 + f12
− f1 + f 2 − f 3 + f 4 f5 − f 6 + f 7 − f 8 f9 − f10 + f11 − f12
f1 − f 2 − f 3 + f 4 − f5 + f6 − f7 + f8 f9 − f10 + f11 − f12
f1 − f 2 + f 3 − f 4 f5 − f 6 − f 7 + f 8 − f9 + f10 + f11 − f12
f1 − f2 − f3 + f4 f5 − f 6 + f 7 − f 8 − f9 + f10 + f11 − f12
f1 − f2 + f3 − f4 − f5 + f6 + f7 − f8 f9 − f10 + f11 − f12
f1 − f2 − f3 + f4 − f5 + f6 − f7 + f8 f9 − f10 − f11 + f12
f1 − f2 + f3 − f4 f5 − f 6 − f 7 + f 8 f9 − f10 + f11 − f12
f1 − f2 − f3 + f4 f5 − f 6 + f 7 − f 8 f9 − f10 − f11 + f12
f1 − f2 + f3 − f4 − f5 + f6 + f7 − f8 f9 − f10 + f11 − f12

Step 6. Compute B j = R1 ( j)+R2 ( j)+R3 ( j)+R4 ( j), j = 1, 2, . . . , 6, where


the values of B j are given in Tables 8.5 and 8.6.
Step 7. Compute the spectral elements of transform as

F = B1 + B2 + · · · + B6 . (8.63)
Output: 24-point HT coefficients.

8.5 FHT of Order N ≡ 0 (mod 4) on Shift/Add Architectures


In this section, we describe multiply/add instruction-based fast 2^n-point and
N ≡ 0 (mod 4)-point HT algorithms. These algorithms are similar to the general FHT
algorithm (see Algorithm 8.3.1).
The difference is only in step 3, which we now perform via multiply/add
architecture. We will start with an example. Let X = (x0 , x1 , x2 , x3 )T and Y =
(y0 , y1 , y2 , y3 )T be the input and output vectors, respectively. Consider the 4-point
HT
$$Y = H_4 X = \begin{pmatrix} + & + & + & + \\ + & - & + & - \\ + & + & - & - \\ + & - & - & + \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_0 + x_1 + x_2 + x_3 \\ x_0 - x_1 + x_2 - x_3 \\ x_0 + x_1 - x_2 - x_3 \\ x_0 - x_1 - x_2 + x_3 \end{pmatrix}. \eqno(8.64)$$


Table 8.6 Computations of vectors B j , j = 4, 5, 6.

B4 B5 B6

− f13 − f14 − f15 − f16 f17 + f18 − f19 − f20 − f21 − f22 − f23 − f24
f13 + f14 − f15 − f16 f17 + f18 + f19 + f20 f21 + f22 − f23 − f24
f13 + f14 + f15 + f16 f17 + f18 − f19 − f20 f21 + f22 + f23 + f24
− f13 − f14 + f15 + f16 f17 + f18 + f19 + f20 − f21 − f22 + f23 + f24
f13 + f14 + f15 + f16 f17 + f18 − f19 − f20 − f21 − f22 − f23 − f24
− f13 − f14 + f15 + f16 f17 + f18 + f19 + f20 f21 + f22 − f23 − f24
f13 + f14 − f15 − f16 f17 + f18 − f19 − f20 f21 + f22 + f23 + f24
f13 + f14 + f15 + f16 f17 + f18 + f19 + f20 − f21 − f22 + f23 + f24
− f13 − f14 − f15 − f16 f17 + f18 + f19 + f20 f21 + f22 + f23 + f24
f13 + f14 − f15 − f16 − f17 − f18 + f19 + f20 − f21 − f22 + f23 + f24
f13 + f14 + f15 + f16 − f17 − f18 + f19 + f20 f21 + f22 − f23 − f24
f13 + f14 − f15 − f16 − f17 − f18 − f19 − f20 f21 + f22 + f23 + f24
− f13 + f14 − f15 + f16 f17 − f18 − f19 + f20 − f21 + f22 − f23 + f24
f13 − f14 − f15 + f16 f17 − f18 + f19 − f20 f21 − f22 − f23 + f24
f13 − f14 + f15 − f16 f17 − f18 − f19 + f20 f21 − f22 + f23 − f24
− f13 + f14 + f15 − f16 f17 − f18 + f19 − f20 − f21 + f22 + f23 − f24
f13 − f14 + f15 − f16 f17 − f18 − f19 + f20 − f21 + f22 − f23 + f24
− f13 + f14 + f15 − f16 f17 − f18 + f19 − f20 f21 − f22 − f23 + f24
f13 − f14 − f15 + f16 f17 − f18 − f19 + f20 f21 − f22 + f23 − f24
f13 − f14 + f15 − f16 f17 − f18 + f19 − f20 − f21 + f22 + f23 − f24
− f13 + f14 + f15 − f16 f17 − f18 + f19 − f20 f21 − f22 + f23 − f24
f13 − f14 − f15 + f16 − f17 + f18 + f19 − f20 − f21 + f22 + f23 − f24
f13 − f14 + f15 − f16 − f17 + f18 + f19 − f20 f21 − f22 − f23 + f24
− f13 + f14 + f15 − f16 − f17 + f18 − f19 + f20 f21 − f22 + f23 − f24

Denoting z0 = x1 + x2 + x3 and z1 = x0 − z0, we can rewrite Eq. (8.64) as follows:
$$\begin{pmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} x_0 + x_1 + x_2 + x_3 \\ x_0 - x_1 + x_2 - x_3 \\ x_0 + x_1 - x_2 - x_3 \\ x_0 - x_1 - x_2 + x_3 \end{pmatrix} = \begin{pmatrix} z_0 + x_0 \\ z_1 + 2x_2 \\ z_1 + 2x_1 \\ z_1 + 2x_3 \end{pmatrix}. \eqno(8.65)$$

Thus, the 4-point WHT can be computed by seven addition/subtraction operations
and three one-bit shifts (two additions to calculate z0, one subtraction for z1, four
additions for y0, y1, y2, y3, and three one-bit shifts for the doublings 2x1, 2x2, 2x3).
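In code, Eq. (8.65) becomes the following straight-line program (a sketch; x << 1 is the one-bit left shift that doubles an integer):

    def wht4_shift_add(x0, x1, x2, x3):
        """4-point WHT by Eq. (8.65): 7 additions/subtractions, 3 one-bit shifts."""
        z0 = x1 + x2 + x3        # two additions
        z1 = x0 - z0             # one subtraction
        y0 = z0 + x0
        y1 = z1 + (x2 << 1)      # z1 + 2*x2
        y2 = z1 + (x1 << 1)      # z1 + 2*x1
        y3 = z1 + (x3 << 1)      # z1 + 2*x3
        return y0, y1, y2, y3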
We next demonstrate the full advantages of shift/add architecture on the 16-point
FHT algorithm.
Algorithm 8.5.1: 16-point FHT.
Input: The signal vector X = (x0 , x1 , . . . , x15 )T .
Step 1. Split input vector X as
$$X^0 = \begin{pmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{pmatrix}, \quad X^1 = \begin{pmatrix} x_4 \\ x_5 \\ x_6 \\ x_7 \end{pmatrix}, \quad X^2 = \begin{pmatrix} x_8 \\ x_9 \\ x_{10} \\ x_{11} \end{pmatrix}, \quad X^3 = \begin{pmatrix} x_{12} \\ x_{13} \\ x_{14} \\ x_{15} \end{pmatrix}. \eqno(8.66)$$


Step 2. Perform 4-point FHT with shift operations

Pi = H4 X i , i = 0, 1, 2, 3. (8.67)
Step 3. Define the vectors

r0 = P1 + P2 + P3 , r1 = P0 − r0 . (8.68)
Step 4. Compute the vectors
Y 0 = (y0 , y1 , y2 , y3 )T = r0 + P0 , Y 1 = (y4 , y5 , y6 , y7 )T = r1 + 2P2 ,
Y 2 = (y8 , y9 , y10 , y11 )T = r1 + 2P1 , Y 3 = (y12 , y13 , y14 , y15 )T = r1 + 2P3 .
(8.69)
Output: The 16-point FHT coefficients, i.e., (Y^0, Y^1, Y^2, Y^3)^T.
We conclude that a 1D WHT of order 16 requires only 56 addition/subtraction
operations and 24 one-bit shifts. In Fig. 8.5, the flow graph of a 1D WHT with
shifts for N = 16 is given.
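A hedged sketch of Algorithm 8.5.1, reusing wht4_shift_add from above (in hardware, the multiplications by 2 in Step 4 would again be one-bit shifts):

    import numpy as np

    def wht16_shift_add(x):
        """16-point WHT per Algorithm 8.5.1; x is an integer vector of length 16."""
        X = x.reshape(4, 4)                                  # Step 1, Eq. (8.66)
        P = [np.array(wht4_shift_add(*Xi)) for Xi in X]      # Step 2, Eq. (8.67)
        r0 = P[1] + P[2] + P[3]                              # Step 3, Eq. (8.68)
        r1 = P[0] - r0
        return np.concatenate([r0 + P[0],                    # Y^0, Eq. (8.69)
                               r1 + 2 * P[2],                # Y^1
                               r1 + 2 * P[1],                # Y^2
                               r1 + 2 * P[3]])               # Y^3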

8.6 Complexities of Developed Algorithms


8.6.1 Complexity of the general algorithm
We now calculate the complexity of the n ≡ 0 (mod 4)-point forward HT algorithm
of Section 8.3. The forward HT of the vector f is given by
$$H_n f = \sum_{i=1}^{k} \sum_{j=1}^{n/k} v_i X_j \otimes A_i P_j. \eqno(8.70)$$

Now, let us consider the j'th item of the sum of Eq. (8.70),

B j = v1 X j ⊗ A1 P j + v2 X j ⊗ A2 P j + · · · + vk X j ⊗ Ak P j . (8.71)

From the definition of the vector P_j, it follows that A_i P_j is the j'th column
of the matrix A_i, which has n/k nonzero elements according to the conditions of
Eq. (8.15). The product v_i X_j ⊗ A_i P_j means that the i'th element of the WHT of
the vector X_j is placed in the n/k nonzero positions of the j'th column of the matrix A_i. Because
of the conditions of Eq. (8.15), only k log₂ k additions are needed to compute all of
the elements of the n-dimensional vector of Eq. (8.71). Hence, for a realization of
the HT given in Eq. (8.70), it is necessary to perform
$$D_1 = n \log_2 k + n\left(\frac{n}{k} - 1\right) \eqno(8.72)$$
addition operations. Note that the complexity of the inverse transform is the same
as that of the forward transform [see Eq. (8.72)].
In general, let Hn be an A(n, 2^k) matrix. We can see that in order to obtain all
of the elements of vector B_i (see Algorithm 8.3.1), we need only 2^k log₂ 2^k = k·2^k

Figure 8.5 Flow graph of fast WHT with shifts.

operations, and in order to obtain each sum B_{2i−1} + B_{2i}, i = 1, 2, . . . , n/2^{k+1}, we need
only 2^{k+2} operations. Hence, the complexity of the H_n f transform can be calculated
as
$$C(n, 2^k) = \frac{n}{2^k} k 2^k + \frac{n}{2^{k+1}} 2^{k+2} + n\left(\frac{n}{2^{k+1}} - 1\right) = n\left(k + \frac{n}{2^{k+1}} + 1\right). \eqno(8.73)$$

Now, if n = m·2^{k+1}, where m is odd and k ≥ 1, then we have C(m·2^{k+1}, 2^k) = m(k + m + 1)2^{k+1}.
Denote by D = N log₂ N the number of operations for a fast Walsh–Hadamard
transform (here N is a power of 2, N = 2^p). In Table 8.7, several values
of n = m·2^{k+1}, m, N, p, k, and the corresponding numbers of additions for the
fast Walsh–Hadamard algorithm and the algorithm developed above are given.
From this table, we see that for n = 3·2^{k+1} and n = 5·2^{k+1}, the new algorithm is
more effective than the classical version. We can also see that instead of using the
72-point transform, it is better to use the 80-point HT.
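The entries of Table 8.7 are easy to reproduce; a small sketch follows (the function names are ours):

    from math import log2

    def D(N):
        """Additions for the classical fast WHT of order N = 2**p."""
        return int(N * log2(N))

    def C(n, k):
        """Eq. (8.73): additions for an A(n, 2**k)-type HT, n = m * 2**(k + 1)."""
        return n * (k + n // 2 ** (k + 1) + 1)

    # Row n = 12 of Table 8.7: C(12, 1) == 60, versus D(16) == 64 after zero padding
    assert C(12, 1) == 60 and D(16) == 64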


Table 8.7 Complexity of the general algorithm.

n m k p N D D1 C(n, 2^k) Direct comp.

12 3 1 4 16 64 72 60 132
24 3 2 5 32 160 168 144 552
48 3 3 6 64 384 384 336 2256
96 3 4 7 128 896 864 768 9120
20 5 1 5 32 160 200 140 380
40 5 2 6 64 384 440 320 1560
80 5 3 7 128 896 960 720 6320
160 5 4 8 256 2048 2080 1600 25,440
320 5 5 9 512 4608 4480 3520 102,080
28 7 1 5 32 160 392 252 766
56 7 2 6 64 384 840 560 3080
112 7 3 7 128 896 1792 1232 12,432
224 7 4 8 256 2048 3808 2688 49,952
36 9 1 6 64 384 648 396 1260
72 9 2 7 128 896 1458 864 5112
144 9 3 8 256 2048 2880 1872 20,592
288 9 4 9 512 4608 6048 4032 82,656

8.6.2 Complexity of the general algorithm with shifts


We recall that the N = 2^k-point FHT algorithm with shift operations has complexity.7
$$C(N) = \begin{cases} 7k \cdot 2^{k-3}, & k \text{ even}, \\ (7k+1)2^{k-3}, & k \text{ odd}, \end{cases} \qquad C_s(N) = \begin{cases} 3k \cdot 2^{k-3}, & k \text{ even}, \\ 3(k-1)2^{k-3}, & k \text{ odd}, \end{cases} \eqno(8.74)$$
where C(N) denotes the number of addition/subtraction operations, and C_s(N)
denotes the number of shifts. Now, we use the concept of multiply/add or
addition/subtraction shift architectures for the A(n, 2^k)-type HT. Denote the
complexity of this transform by C(n, 2^k) for addition/subtraction operations and
by C_s(n, 2^k) for shifts. Using Eqs. (8.73) and (8.74) for n = m·2^{k+1} (m is odd), we
obtain
$$C(n, 2^k) = \begin{cases} m(7k + 8m + 8)2^{k-2}, & k \text{ even}, \\ m(7k + 8m + 9)2^{k-2}, & k \text{ odd}, \end{cases} \qquad C_s(n, 2^k) = \begin{cases} 3mk \cdot 2^{k-2}, & k \text{ even}, \\ 3m(k-1)2^{k-2}, & k \text{ odd}. \end{cases} \eqno(8.75)$$

References
1. S. S. Agaian and H. G. Sarukhanyan, Hadamard matrices representation by
(−1, +1)-vectors, in Proc. of Int. Conf. Dedicated to Hadamard Problem’s
Centenary, Australia, (1993).


2. H. G. Sarukhanyan, “Decomposition of the Hadamard matrices and fast


Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture
Notes in Computer Science 1296, 575–581, Springer, Berlin (1997).
3. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Surveys in Contemporary Design Theory, Wiley-Interscience
Series in Discrete Mathematics, Wiley, Hoboken, NJ (1992).
4. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, Decomposition of
Hadamard matrices, in Proc. of 1st Int. Workshop on Spectral Techniques
and Logic Design for Future Digital Systems, Tampere, Finland, Jun. 2–3,
pp. 207–221 (2000).
5. H. G. Sarukhanyan, Hadamard matrices: construction methods and applica-
tions, in Proc. of Workshop on Transforms and Filter Banks, Feb. 21–27,
Tampere, Finland, 95–129 (1998).
6. H. Sarukhanyan, “Decomposition of Hadamard matrices by orthogonal
(−1, +1)-vectors and algorithm of fast Hadamard transform,” Rep. Acad. Sci.
Armenia 97 (2), 3–6 (1997) (in Russian).
7. D. Coppersmith, E. Feig, and E. Linzer, “Hadamard transforms on multiply/
add architectures,” IEEE Trans. Signal Process. 42 (4), 969–970 (1994).
8. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics, 1168, Springer-Verlag, Berlin (1985).
9. R. Stasinski and J. Konrad, “A new class of fast shape-adaptive orthogonal
transforms and their application to region-based image compression,” IEEE
Trans. Circuits Syst. Video Technol. 9 (1), 16–34 (1999).
10. M. Barazande-Pour and J. W. Mark, Adaptive MHDCT, in Proc. Image
Process. IEEE Int. Conf. ICIP-94., IEEE International Conference, Nov.
13–16, Austin, TX, pp. 90–94 (1994).
11. G. R. Reddy and P. Satyanarayana, Interpolation algorithm using Walsh–
Hadamard and discrete Fourier/Hartley transforms, in IEEE Proc. 33rd
Midwest Symp. Circuits and Systems, Vol. 1, 545–547 (1991).
12. C.-F. Chan, Efficient implementation of a class of isotropic quadratic filters by
using Walsh–Hadamard transform, in Proc. of IEEE Int. Symp. on Circuits and
Systems, Jun. 9–12, Hong Kong, 2601–2604 (1997).
13. B. K. Harms, J. B. Park, and S. A. Dyer, “Optimal measurement techniques
utilizing Hadamard transforms,” IEEE Trans. Instrum. Meas. 43 (3), 397–402
(1994).
14. A. Chen, D. Li and R. Zhou, A research on fast Hadamard transform (FHT)
digital systems, in Proc. of IEEE TENCON 93, Beijing, 541–546 (1993).
15. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal Process-
ing, Springer-Verlag, Berlin (1975).
16. R. R. K. Yarlagadda and E. J. Hershey, Hadamard Matrix Analysis and
Synthesis: With Applications to Communications and Signal/Image Processing, Kluwer (1997).


17. J. J. Sylvester, “Thoughts on inverse orthogonal matrices, simultaneous sign


successions and tesselated pavements in two or more colours, with applications
to Newton's rule, ornamental tile-work, and the theory of numbers," Phil. Mag.
34, 461–475 (1867).
18. K. G. Beauchamp, Walsh Functions and Their Applications, Academic Press,
London (1975).
19. S. Samadi, Y. Suzukake and H. Iwakura, On automatic derivation of fast
Hadamard transform using generic programming, in Proc. of 1998 IEEE Asia-
Pacific Conf. on Circuit and Systems, Thailand, 327–330 (1998).
20. http://www.cs.uow.edu.au/people/jennie/lifework.html.
21. Z. Li, H. V. Sorensen, and C. S. Burrus, FFT and convolution algorithms on
DSP microprocessors, in Proc. of IEEE Int. Conf. Acoust., Speech, Signal
Processing, 289–294 (1986).
22. R. K. Montoye, E. Hokenek, and S. L. Runyon, “Design of the IBM RISC
System/6000 floating point execution unit,” IBM J. Res. Dev. 34, 71–77 (1990).
23. A. Amira, A. Bouridane and P. Milligan, An FPGA based Walsh Hadamard
transforms, in Proc of IEEE Int. Symp. on Circuits and Systems, ISCAS 2001,
2, 569–572 (2001).
24. W. Philips, K. Denecker, P. de Neve, and S. van Asche, “Lossless quantization
of Hadamard transform coefficients,” IEEE Trans. Image Process. 9 (11),
1995–1999 (2000).
25. A. M. Grigoryan and S. S. Agaian, “Method of fast 1D paired transforms for
computing the 2D discrete Hadamard transform,” IEEE Trans. Circuits Syst. II
47 (10), 1098–1103 (2000).
26. I. Valova and Y. Kosugi, “Hadamard-based image decomposition and compre-
ssion,” IEEE Trans. Inf. Technol. Biomed. 4 (4), 306–319 (2000).
27. A. M. Grigoryan and S. S. Agaian, “Split manageable efficient algorithm
for Fourier and Hadamard transforms,” IEEE Trans. Signal Process. 48 (1),
172–183 (2000).
28. J. H. Jeng, T. K. Truong, and J. R. Sheu, "Fast fractal image compression
using the Hadamard transform," IEE Proc. Vision, Image Signal Process. 147
(6), 571–574 (2000).
29. H. Bogucka, Application of the new joint complex Hadamard-inverse Fourier
transform in a OFDM/CDMA wireless communication system, in Proc.
of IEEE 50th Vehicular Technology Conference, VTS 1999, 5, 2929–2933
(1999).
30. R. Hashemian and S. V. J. Citta, A new gate image encoder: algorithm, design
and implementation, in Proc. of 42nd Midwest Symp. Circuits and Systems, 1,
418–421 (2000).


31. M. Skoglund and P. Hedelin, “Hadamard-based soft decoding for vector


quantization over noisy channels,” IEEE Trans. Inf. Theory 45 (2), 515–532
(1999).
32. S.-Y. Choi and S.-I. Chae, “Hierarchical motion estimation in Hadamard
transform domain,” Electron. Lett. 35 (25), 2187–2188 (1999).
33. P. Y. Cochet and R. Serpollet, Digital transform for a selective channel
estimation (application to multicarrier data transmission), in Proc. of IEEE
Int. Conf. on Communications, ICC 98 Conf. Record, 1, 349–354 (1998).
34. S. Muramatsu, A. Yamada and H. Kiya, The two-dimensional lapped
Hadamard transform, in Proc. of IEEE Int. Symp. on Circuits and Systems,
ISCAS ’98, Vol. 5, 86–89 (1998).
35. L. E. Nazarov and V. M. Smolyaninov, “Use of fast Walsh–Hadamard
transformation for optimal symbol-by-symbol binary block-code decoding,”
Electron. Lett. 34 (3), 261–262 (1998).
36. D. Sundararajan and M. O. Ahmad, “Fast computation of the discrete Walsh
and Hadamard transforms,” IEEE Trans. Image Process. 7 (6), 898–904
(1998).
37. Ch.-P. Fan and J.-F. Yang, “Fixed-pipeline two-dimensional Hadamard
transform algorithms,” IEEE Trans. Signal Process. 45 (6), 1669–1674 (1997).
38. Ch.-Fat Chan, Efficient implementation of a class of isotropic quadratic filters
by using Walsh–Hadamard transform, in Proc. of IEEE Int. Symp. Circuits and
Systems, ISCAS ’97, 4, 2601–2604 (1997).
39. A. R. Varkonyi-Koczy, Multi-sine synthesis and analysis via Walsh–Hadamard
transformation, in Proc. of IEEE Int. Symp. Circuits and Systems, ISCAS ’96,
Connecting the World, 2, 457–460 (1996).
40. M. Colef and B. J. Vision, “NTSC component separation via Hadamard
transform,” IEEE Image Signal Process. 141 (1), 27–32 (1994).
41. T. Beer, “Walsh transforms,” Am. J. Phys 49 (5), 466–472 (1981).
42. G.-Z. Xiao and J. L. Massey, “A spectral characterization of correlation-
immune combining functions,” IEEE Trans. Inf. Theory 34 (3), 569–571
(1988).
43. C. Yuen, “Testing random number generators by Walsh transform,” IEEE
Trans. Comput. C-26 (4), 329–333 (1977).
44. H. Larsen, “An algorithm to compute the sequency ordered Walsh transform,”
IEEE Trans. Acoust. Speech Signal Process. ASSP-24, 335–336 (1976).
45. S. Agaian, H. Sarukhanyan, K. Egiazarian and J. Astola, Williamson-
Hadamard transforms: design and fast algorithms, in Proc. of 18 Int. Scientific
Conf. on Information, Communication and Energy Systems and Technologies,
ICEST-2003, Sofia, Bulgaria, Oct. 16–18, pp. 199–208 (2003).


46. H. Sarukhanyan, S. Agaian, J. Astola, and K. Egiazarian, “Binary matrices,


decomposition and multiply-add architectures,” Proc. SPIE 5014, 111–122
(2003) [doi:10.1117/12.473134].
47. S. Agaian, H. Sarukhanyan, and J. Astola, “Skew Williamson–Hadamard
transforms,” J. Multiple-Valued Logic Soft Comput. 10 (2), 173–187 (2004).
48. S. Agaian, H. Sarukhanyan, and J. Astola, “Multiplicative theorem based
fast Williamson–Hadamard transforms,” Proc. SPIE 4667, 82–91 (2002)
[doi:10.1117/12.467969].
49. H. Sarukhanyan, A. Anoyan, S. Agaian, K. Egiazarian and J. Astola, Fast
Hadamard transforms, in Proc of. Int. TICSP Workshop on Spectral Methods
and Multirate Signal Processing, SMMSP-2001, June 16–18, Pula, Croatia,
TICSP Ser. 13, 33–40 (2001).

Chapter 9
Orthogonal Arrays
We have seen that one of the basic Hadamard matrix building methods is based
on the construction of a class of “special-component” matrices that can be
plugged into arrays (templates) to generate Hadamard matrices. In Chapter 4, we
discussed how to construct these special-component matrices. In this chapter, we
focus on the second component of the plug-in template method: construction of
arrays/templates. Generally, the arrays into which suitable matrices are plugged are
orthogonal designs (ODs), which have formally orthogonal rows (and columns).
The theory of ODs dates back over a century.1–3 ODs have several variations
such as the Goethals–Seidel arrays and Wallis–Whiteman arrays. Numerous
approaches for construction of these arrays/templates have been developed.4–101
A survey of OD applications, particularly space–time block coding, can be found
in Refs. 3–7, 23, 24, 34, 87–91.
The space–time block codes are particularly attractive because they can
provide full transmit diversity while requiring a very simple decoupled maximum-
likelihood decoding method.80–91 The combination of space and time diversity has
moved the capacity of wireless communication systems toward theoretical limits;
this technique has been adopted in the 3G standard in the form of an Alamouti code
and in the newly proposed standard for wireless LANs IEEE 802.11n.87
In this chapter, two plug-in template methods of construction of Hadamard
matrices are presented. We focus on construction of only Baumert–Hall, Plotkin,
and Welch arrays, which are the subsets of ODs.

9.1 ODs
The original definition of OD was proposed by Geramita et al.6 Dr. Seberry
(see Fig. 9.1), a co-author of that paper, is world renowned for her discoveries
on Hadamard matrices, ODs, statistical designs, and quaternion OD (QOD).
She also did important work on cryptography. Her studies of the application
of discrete mathematics and combinatorial computing via bent functions and
S -box design have led to the design of secure crypto algorithms and strong
hashing algorithms for secure and reliable information transfer in networks and
telecommunications. Her studies of Hadamard matrices and ODs are also applied
in CDMA technologies.11
Figure 9.1 Dr. Jennifer Seberry.

An OD of order n and type (s_1, s_2, . . . , s_k), denoted by OD(n; s_1, s_2, . . . , s_k), is
an n × n matrix D with entries from the set (0, ±x_1, ±x_2, . . . , ±x_k), where each x_i
occurs si times in each row and column, such that the distinct rows are pairwise
orthogonal, i.e.,
$$D(x_1, x_2, \ldots, x_k) D^T(x_1, x_2, \ldots, x_k) = \sum_{i=1}^{k} s_i x_i^2 I_n, \eqno(9.1)$$

where I_n is the identity matrix of order n, and the superscript T denotes transposition.


Examples of ODs of orders 2, 4, 4, 4, and 8 and of types (1, 1), (1, 1, 1, 1), (1, 1, 2), (1, 1), and (1, 1, 1, 1, 1, 1, 1, 1), respectively, are given as follows:
$$OD(2; 1, 1) = \begin{pmatrix} a & b \\ b & -a \end{pmatrix}, \qquad OD(4; 1, 1, 1, 1) = \begin{pmatrix} a & -b & -c & -d \\ b & a & -d & c \\ c & d & a & -b \\ d & -c & b & a \end{pmatrix},$$
$$OD(4; 1, 1, 2) = \begin{pmatrix} a & b & b & d \\ -b & a & d & -b \\ -b & -d & a & b \\ -d & b & -b & a \end{pmatrix}, \qquad OD(4; 1, 1) = \begin{pmatrix} a & 0 & -c & 0 \\ 0 & a & 0 & c \\ c & 0 & a & 0 \\ 0 & -c & 0 & a \end{pmatrix},$$
$$OD(8; 1, 1, 1, 1, 1, 1, 1, 1) = \begin{pmatrix}
a & b & c & d & e & f & g & h \\
-b & a & d & -c & f & -e & -h & g \\
-c & -d & a & b & g & h & -e & -f \\
-d & c & -b & a & h & -g & f & -e \\
-e & -f & -g & -h & a & b & c & d \\
-f & e & -h & g & -b & a & -d & c \\
-g & h & e & -f & -c & d & a & -b \\
-h & -g & f & e & -d & -c & b & a
\end{pmatrix}. \eqno(9.2)$$

It is well known that the maximum number of variables that may appear in an
OD is given by Radon's function ρ(n), which is defined by ρ(n) = 8c + 2^d, where
n = 2^a b, b is an odd number, and a = 4c + d, 0 ≤ d < 4 (see, for example, Ref. 5).
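Radon's function is straightforward to compute; a small sketch follows (the function name is ours):

    def radon(n):
        """Radon's function: rho(n) = 8c + 2**d, n = 2**a * b, b odd, a = 4c + d."""
        a = 0
        while n % 2 == 0:        # extract the power of two in n
            n //= 2
            a += 1
        c, d = divmod(a, 4)
        return 8 * c + 2 ** d

    # e.g. radon(8) == 8, matching the full OD(8; 1, 1, 1, 1, 1, 1, 1, 1) of Eq. (9.2)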


Now we present two simple OD construction methods.5 Let two cyclic matrices
A1 , A2 of order n exist, satisfying the condition

$$A_1 A_1^T + A_2 A_2^T = f I_n. \eqno(9.3)$$

If f is the quadratic form $f = s_1 x_1^2 + s_2 x_2^2$, then an OD, OD(2n; s_1, s_2), exists.

Proof: It can be verified that the matrices


   
$$\begin{pmatrix} A_1 & A_2 \\ -A_2^T & A_1^T \end{pmatrix} \qquad \text{or} \qquad \begin{pmatrix} A_1 & A_2 R \\ -A_2 R & A_1 \end{pmatrix} \eqno(9.4)$$

are OD(2n; s1 , s2 ), where R is the back-diagonal identity matrix of order n.


Similarly, if B_i, i = 1, 2, 3, 4 are cyclic matrices of order n with entries (0, ±x_1, ±x_2, . . . , ±x_k) satisfying the condition
$$\sum_{i=1}^{4} B_i B_i^T = \sum_{i=1}^{k} s_i x_i^2 I_n, \eqno(9.5)$$

then the Goethals–Seidel array


$$\begin{pmatrix}
B_1 & B_2 R & B_3 R & B_4 R \\
-B_2 R & B_1 & -B_4^T R & B_3^T R \\
-B_3 R & B_4^T R & B_1 & -B_2^T R \\
-B_4 R & -B_3^T R & B_2^T R & B_1
\end{pmatrix} \eqno(9.6)$$

is an OD(4n; s1 , s2 , . . . , sk ) (see Ref. 4, p. 107, for more details).
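The Goethals–Seidel array is easy to assemble numerically from the four first rows of the cyclic matrices. Below is a hedged sketch for the (0, −1, +1) case, with R the back-diagonal identity; the helper names are ours.

    import numpy as np

    def circulant(row):
        """Cyclic matrix whose first row is `row`."""
        return np.array([np.roll(row, i) for i in range(len(row))])

    def goethals_seidel(b1, b2, b3, b4):
        """Hedged numeric sketch of the Goethals-Seidel array, Eq. (9.6)."""
        B1, B2, B3, B4 = (circulant(np.asarray(b)) for b in (b1, b2, b3, b4))
        R = np.fliplr(np.eye(len(b1), dtype=int))      # back-diagonal identity
        return np.block([
            [B1,       B2 @ R,      B3 @ R,     B4 @ R],
            [-B2 @ R,  B1,          -B4.T @ R,  B3.T @ R],
            [-B3 @ R,  B4.T @ R,    B1,         -B2.T @ R],
            [-B4 @ R,  -B3.T @ R,   B2.T @ R,   B1],
        ])

    # Williamson matrices of order 1 (all entries +1) give a Hadamard matrix of order 4:
    H = goethals_seidel([1], [1], [1], [1])
    assert (H @ H.T == 4 * np.eye(4)).all()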


Below we present the well-known OD construction methods:5,76,92

• If there is an OD, OD(n; s1 , s2 , . . . , sk ), then an OD, OD(2n; s1 , s1 , es2 , . . . , esk ),


exists, where e = 1 or 2.
• If there is an OD, OD(n; s1 , s2 , . . . , sk ), on the commuting variables
(0, ±x1 , ±x2 , . . . , ±xk ), then there is an OD, OD(n; s1 , s2 , . . . , si + s j , . . . , sk )
and OD(n; s1 , s2 , . . . s j−1 , s j+1 , . . . , sk ), on the k − 1 commuting variables
(0, ±x1 , ±x2 , . . . , ±x j−1 , ±x j+1 , . . . , ±xk ).
• If n ≡ 0 (mod 4), then the existence of W(n, n − 1) implies the existence of
a skew-symmetric W(n, n − 1). The existence of a skew-symmetric W(n, k) is
equivalent to the existence of an OD(n; 1, k).
• An OD, OD(n; 1, k), can only exist in order n ≡ 4 (mod 8) if k is the sum of
three squares. An OD, OD(n; 1, n − 2), can only exist in order n ≡ 4 (mod 8) if
n − 2 is the sum of two squares.
• If four cyclic matrices A, B, C, D of order n exist satisfying

$$AA^T + BB^T + CC^T + DD^T = f I_n, \eqno(9.7)$$


then
$$\begin{pmatrix}
A & BR & CR & DR \\
-BR & A & D^T R & -C^T R \\
-CR & -D^T R & A & B^T R \\
-DR & C^T R & -B^T R & A
\end{pmatrix} \eqno(9.8)$$

is a W(4n, f) when A, B, C, D are (0, −1, +1) matrices, and an OD, OD(4n; s_1, s_2, . . . , s_k), on x_1, x_2, . . . , x_k when A, B, C, D have entries from (0, ±x_1, ±x_2, . . . , ±x_k) and $f = \sum_{i=1}^{k} s_i x_i^2$. Here, R is a back-diagonal identity matrix of order n.
• If there are four sequences A, B, C, D of length n with entries from (0, ±x1 ,
±x2 , ±x3 , ±x4 ) with zero periodic or nonperiodic autocorrelation functions, then
these sequences can be used as the first rows of cyclic matrices that can be used
in the Goethals–Seidel array to form an OD(4n; s1 , s2 , s3 , s4 ). Note that if there
are sequences of length n with zero nonperiodic autocorrelation functions, then
there are sequences of length n + m for all m ≥ 0.
• OD of order 2t = (m − 1)n and type (1, m − 1, mn − m − n) exist.
• If two Golay sequences of length m and a set of two Golay sequences of length
k exist, then a three-variable full OD, OD[4(m + 2k); 4m, 4k, 4k], exists.76

Recently, Koukouvinos and Simos have constructed equivalent Hadamard


matrices based on several new and old full ODs, using circulant and symmetric
block matrices. In addition, they have provided several new constructions for ODs
derived from sequences with zero autocorrelation. The ODs used to construct
the equivalent Hadamard matrices are produced from theoretical and algorithmic
constructions.76
Problem for exploration: An OD, OD(4t; t, t, t, t), exists for every positive integer t.
Several generalizations of real square ODs have followed, including generalized
real ODs, complex ODs (CODs), generalized CODs, generalized complex linear
processing ODs, and QODs.

9.1.1 ODs in the complex domain


Geramita and Geramita first studied ODs in the complex domain.6 Complex ODs COD(n; s_1, s_2, . . . , s_k) of type (s_1, s_2, . . . , s_k) are n × n matrices C with entries in the set {0, ±x_1, ±jx_1, ±x_2, ±jx_2, . . . , ±x_k, ±jx_k} (j = √−1) satisfying the conditions

$$C C^H = C^H C = \sum_{i=1}^{k} s_i x_i^2 \, I_n, \qquad (9.9)$$

where H denotes the Hermitian transpose (the transpose complex conjugate).


An orthogonal complex array of size n is an n × n matrix S with entries (z_1, z_2, . . . , z_n), their conjugates, or their products by j (j = √−1), such that

$$S^H S = \sum_{i=1}^{n} |z_i|^2 \, I_n. \qquad (9.10)$$

For example, Alamouti's 2 × 2 matrix is defined by

$$\begin{pmatrix} z_1 & z_2\\ z_2^* & -z_1^* \end{pmatrix}. \qquad (9.11)$$
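Equations (9.10) and (9.11) are easy to check numerically; the following sketch (with arbitrary complex test values of our choosing) verifies Alamouti's matrix:

```python
import numpy as np

z1, z2 = 1.0 + 2.0j, -0.5 + 0.3j               # arbitrary complex test values
S = np.array([[z1,          z2],
              [np.conj(z2), -np.conj(z1)]])    # Alamouti's matrix, Eq. (9.11)
print(np.allclose(S.conj().T @ S,
                  (abs(z1)**2 + abs(z2)**2) * np.eye(2)))  # True, per Eq. (9.10)
```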

This definition can be generalized to include r × n rectangular designs. The


r × n rectangular designs apply to space–time block coding for multiple-antenna
wireless communications.
Finally, in Refs. 87, 88, 93, the authors generalized the definition of complex
ODs by introducing ODs over the quaternion domain. They made the first step in
building a theory of these novel quaternion ODs. The noncommutative quaternions,
invented by Hamilton in 1843, can be viewed as a generalization of complex
numbers. The noncommutative quaternions Q = [±1, ±i, ± j, ±k] satisfy i2 = j2 =
k2 = ijk = −1. A quaternion variable a = a1 + a2 i + a3 j + a4 k, where a1 , a2 , a3 , a4
are real variables, has a quaternion conjugate defined by a∗ = a1 − a2 i − a3 j − a4 k.
More information about quaternions and their properties can be found in Ref. 94.
Several construction methods for obtaining QODs over quaternion variables have
been introduced.3,87,88,93,95,96
Next, we present a definition of the quaternion orthogonal array and a simple
example:
Definition 9.1.1:93 A QOD for commuting real variables x_1, x_2, . . . , x_u of type (s_1, s_2, . . . , s_u) is an r × n matrix A with entries from {0, ±q_1 x_1, ±q_2 x_2, . . . , ±q_u x_u}, q_h ∈ Q, that satisfies

$$A^Q A = \sum_{h=1}^{u} s_h x_h^2 \, I_n, \qquad (9.12)$$

where A^Q denotes the quaternion conjugate transpose of A.

This design is denoted by QOD(r, n; s_1, s_2, . . . , s_u). When r = n, we have

$$A^Q A = A A^Q = \sum_{h=1}^{u} s_h x_h^2 \, I_n. \qquad (9.13)$$

Similarly, we define a QOD for commuting complex variables z_1, z_2, . . . , z_u of type (s_1, s_2, . . . , s_u) as an n × r matrix A with entries from a set {0, ±q_1 z_1, ±q_1^* z_1^*, . . . , ±q_u z_u, ±q_u^* z_u^*}, q_h ∈ Q, that satisfies

$$A^Q A = \sum_{h=1}^{u} s_h |z_h|^2 \, I_n. \qquad (9.14)$$


Finally, we define a QOD for quaternion variables a_1, a_2, . . . , a_u of type (s_1, s_2, . . . , s_u) as an n × r matrix A with entries from a set {0, ±a_1, ±a_1^Q, ±a_2, ±a_2^Q, . . . , ±a_u, ±a_u^Q} that satisfies

$$A^Q A = \sum_{h=1}^{u} s_h |a_h|^2 \, I_n. \qquad (9.15)$$

We can generalize these definitions to allow the design entries to be real linear
combinations of the permitted variables and their quaternion multipliers, in which
case we say the design is by linear processing.
Examples:
• The matrix $X = \begin{pmatrix} -x_1 & ix_2\\ -jx_2 & kx_1 \end{pmatrix}$ is a QOD on real variables x_1, x_2.
• The matrix $Z = \begin{pmatrix} iz_1 & iz_2\\ -jz_2^* & jz_1^* \end{pmatrix}$ is a QOD on complex variables z_1, z_2.
• The matrix $A = \begin{pmatrix} a & 0\\ 0 & a \end{pmatrix}$ is the most obvious example of a QOD on a quaternion variable a. Note that QODs on quaternion variables are the most difficult to construct.
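These quaternion examples can be checked by representing each quaternion a + bi + cj + dk as the 2 × 2 complex matrix [[a + bi, c + di], [−c + di, a − bi]]; under this standard representation, the quaternion conjugate transpose becomes the ordinary complex conjugate transpose. A numpy sketch (the representation choice is ours, not the book's) verifies the first example:

```python
import numpy as np

def quat(a, b, c, d):
    """2x2 complex representation of the quaternion a + b*i + c*j + d*k."""
    return np.array([[a + 1j*b,  c + 1j*d],
                     [-c + 1j*d, a - 1j*b]])

ONE, I, J, K = quat(1,0,0,0), quat(0,1,0,0), quat(0,0,1,0), quat(0,0,0,1)

x1, x2 = 1.3, -0.7                       # arbitrary real test values
X = np.block([[-x1*ONE, x2*I],           # the QOD X on real variables above
              [-x2*J,   x1*K]])
# X^Q X corresponds to the conjugate transpose of the 4x4 representation:
print(np.allclose(X.conj().T @ X, (x1**2 + x2**2) * np.eye(4)))  # True
```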
Theorem 9.1.1:93 Let A and B be CODs, COD(n, n; s_1, s_2, . . . , s_k) and COD(n, n; t_1, t_2, . . . , t_k), respectively, on commuting complex variables z_1, z_2, . . . , z_k. If A^H B is symmetric, then A + jB is a QOD, QOD(n, n; s_1 + t_1, s_2 + t_2, . . . , s_k + t_k), on the complex variables z_1, z_2, . . . , z_k, where A^H denotes the Hermitian transpose of A.

9.2 Baumert–Hall Arrays


Baumert–Hall arrays admit generalizations of Williamson's theorem. Unfortunately, it is in general very difficult to find a Baumert–Hall array of order n, even for small n. The Baumert–Hall array of order 12 given below is the first Baumert–Hall array constructed in Refs. 7 and 97. The class of Baumert–Hall arrays of order 4t was constructed using T matrices and the Goethals–Seidel array of order 4.
Definition 9.2.1:7,97 A square matrix H(a, b, c, d) of order 4t is called a
Baumert–Hall array of order 4t if it satisfies the following conditions:
(1) Each element of H(a, b, c, d) has the form ±x, x ∈ {a, b, c, d}.
(2) In any row (column), each element ±x, x ∈ {a, b, c, d}, appears exactly t times.
(3) The rows (columns) of H(a, b, c, d) are formally orthogonal.

Example 9.2.1: (a) Baumert–Hall array of order 4 (also a Williamson array):


$$\begin{pmatrix} x & y & z & w\\ -y & x & -w & z\\ -z & w & x & -y\\ -w & -z & y & x \end{pmatrix}, \qquad (9.16)$$
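The formal orthogonality of this array is a one-line symbolic check; for instance, with sympy:

```python
from sympy import symbols, Matrix, simplify, eye

x, y, z, w = symbols('x y z w')
H = Matrix([[ x,  y,  z,  w],
            [-y,  x, -w,  z],
            [-z,  w,  x, -y],
            [-w, -z,  y,  x]])
# Rows are formally orthogonal and each variable appears once per row (t = 1):
print(simplify(H*H.T - (x**2 + y**2 + z**2 + w**2)*eye(4)))  # zero matrix
```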


(b) Baumert–Hall array of order 12:


⎛ ⎞
⎜⎜⎜ y x x x −z z w y −w w z −y⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ −x y x −x w −w z −y −z z −w −y⎟⎟⎟⎟
⎜⎜⎜ −x ⎟
⎜⎜⎜ −x y x w −y −y w z z w −z⎟⎟⎟⎟
⎟⎟
⎜⎜⎜⎜ −x x −x y −w −w −z w −z −y −y −z⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −y −y −z −w z x x x −w −w z −y⎟⎟⎟⎟
⎜⎜⎜−w ⎟
−w −z −x −x −z −w⎟⎟⎟⎟⎟
A(x, y, z, w) = ⎜⎜⎜⎜⎜
y z x y y
⎟ . (9.17)
⎜⎜⎜ w −w w −y −x −x z x y −z −y −z⎟⎟⎟⎟
⎜⎜⎜−w ⎟
⎜⎜⎜ −z w −z −x x −x z −y y −y w⎟⎟⎟⎟⎟
⎜⎜⎜ −y ⎟
⎜⎜⎜ y −z −w −z −z w y w x x x⎟⎟⎟⎟

⎜⎜⎜ z
⎜⎜⎜ −z −y −w −y −y −w −z −x w x −x⎟⎟⎟⎟⎟

⎜⎜⎜ −z −z y z −y −w y −w −x −x w x⎟⎟⎟⎟
⎝ ⎠
z −w −w z y −y y z −x x −x w

Theorem 9.2.1: If a Baumert–Hall array of order t and Williamson matrices of


order n exist, then a Hadamard matrix of order 4nt also exists.

Definition 9.2.2:10,92 Square (0, ±1) matrices X1 , X2 , X3 , X4 of orders k are called


T matrices if the following conditions are satisfied:

$$\begin{aligned} &X_i * X_j = 0, \quad i \neq j, \ i, j = 1, 2, 3, 4;\\ &X_i X_j = X_j X_i, \quad i, j = 1, 2, 3, 4;\\ &X_i R X_j^T = X_j R X_i^T, \quad i, j = 1, 2, 3, 4;\\ &\sum_{i=1}^{4} X_i \ \text{is a } (+1, -1) \ \text{matrix};\\ &\sum_{i=1}^{4} X_i X_i^T = k I_k. \end{aligned} \qquad (9.18)$$

Note that in Refs. 10 and 92, only cyclic T matrices were constructed. In this case,
the second and the third conditions of Eq. (9.18) are automatically satisfied.
The first rows of some examples of cyclic T matrices of orders 3, 5, 7, 9 are
given as follows:

n = 3: X1 = (1, 0, 0), X2 = (0, 1, 0), X3 = (0, 0, 1);


n = 5: X1 = (1, 1, 0, 0, 0), X2 = (0, 0, 1, −1, 0), X3 = (0, 0, 0, 0, 1);
n = 7: X1 = (1, 0, 1, 0, 0, 0, 0), X2 = (0, 0, 0, −1, −1, 1, 0),
(9.19)
X3 = (0, 0, 0, 0, 0, 0, 1), X4 = (0, 1, 0, 0, 0, 0, 0);
n = 9: X1 = (1, 0, 1, 0, 1, 0, −1, 0, 0), X2 = (0, 1, 0, 1, 0, −1, 0, 1, 0),
X3 = (0, 0, 0, 0, 0, 0, 0, 0, 1).
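Given such first rows, the corresponding circulant matrices and the conditions of Eq. (9.18) can be checked mechanically. A numpy sketch for the n = 5 case (with the fourth T matrix taken as the zero matrix, as the listing implies):

```python
import numpy as np

def circulant(row):
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

n = 5
X = [circulant(r) for r in
     ([1, 1, 0, 0, 0], [0, 0, 1, -1, 0], [0, 0, 0, 0, 1], [0, 0, 0, 0, 0])]

disjoint = all(np.all(X[i] * X[j] == 0) for i in range(4) for j in range(4) if i != j)
pm_one = np.all(np.abs(sum(X)) == 1)               # the sum is a (+1, -1) matrix
gram = sum(Xi @ Xi.T for Xi in X)                  # must equal n * I_n
print(disjoint, pm_one, np.array_equal(gram, n * np.eye(n, dtype=int)))
```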


Remark 9.2.1: There are T matrices of order n, n ∈ M = {1, 3, 5, . . . , 59, 61, 2^a 10^b 26^c + 1}, where a, b, and c are non-negative integers.10,11,92 See also Theorem 9.3.2, by which Baumert–Hall arrays of orders 335 and 603 are also constructed.

Theorem 9.2.2: (Cooper-Seberry92,97,98 ) Let X1 , X2 , X3 , X4 be T matrices of


order k. Then, the matrix
$$\begin{pmatrix} A & BR & CR & DR\\ -BR & A & -D^T R & C^T R\\ -CR & D^T R & A & -B^T R\\ -DR & -C^T R & B^T R & A \end{pmatrix} \qquad (9.20)$$

is a Baumert–Hall array of order 4k, where

A = aX1 + bX2 + cX3 + dX4 ,


B = −bX1 + aX2 − dX3 + cX4 ,
(9.21)
C = −cX1 + dX2 + aX3 − bX4 ,
D = −dX1 − cX2 + bX3 + aX4 .
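Theorem 9.2.2 can be illustrated end to end using the cyclic T matrices of order 3 from Eq. (9.19) (again with the fourth matrix taken as zero). The sympy sketch below assembles Eqs. (9.21) and (9.20) and confirms a Baumert–Hall array of order 12:

```python
from sympy import symbols, Matrix, eye, zeros, simplify

a, b, c, d = symbols('a b c d')
P = Matrix([[0, 1, 0], [0, 0, 1], [1, 0, 0]])   # cyclic shift
X1, X2, X3, X4 = eye(3), P, P*P, zeros(3, 3)    # T matrices of order 3, Eq. (9.19)
R = Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])   # back-diagonal identity

A = a*X1 + b*X2 + c*X3 + d*X4                   # Eq. (9.21)
B = -b*X1 + a*X2 - d*X3 + c*X4
C = -c*X1 + d*X2 + a*X3 - b*X4
D = -d*X1 - c*X2 + b*X3 + a*X4

H = Matrix([[A,     B*R,     C*R,    D*R],      # Eq. (9.20)
            [-B*R,  A,      -D.T*R,  C.T*R],
            [-C*R,  D.T*R,   A,     -B.T*R],
            [-D*R, -C.T*R,   B.T*R,  A]])
print(simplify(H*H.T - 3*(a**2 + b**2 + c**2 + d**2)*eye(12)))  # zero matrix
```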

Analyzing the expression of Eq. (9.21), we see that if instead of parameters a,


b, c, d we substitute parametric commutative matrices of order t, then the matrix
in Eq. (9.20) will be a Baumert–Hall array of order 4kt. Furthermore, we shall call
such matrices parametric Williamson matrices. The methods of their construction
are given in the next chapter.
It is also obvious that if there is a Baumert–Hall array of the form (A_{i,j})_{i,j=1}^{4} of order 4k, where A_{i,j} are parametric commutative matrices, it is possible to construct similar matrices [Eq. (9.21)] suitable for the array in Eq. (9.20). The representations
of matrices in a block form and their properties and methods of construction
are considered in the subsequent chapters. Below, we give two orthogonal
arrays of orders 20 and 36 constructed by Welch and Ono–Sawade–Yamamoto
(see p. 363).7

9.3 A Matrices

Definition 9.3.1:92,98 A square matrix H(x_1, x_2, . . . , x_l) of order m with elements ±x_i is called an A matrix depending on l parameters if it satisfies the condition

$$H(x_{11}, x_{21}, \ldots, x_{l1})\, H^T(x_{12}, x_{22}, \ldots, x_{l2}) + H(x_{12}, x_{22}, \ldots, x_{l2})\, H^T(x_{11}, x_{21}, \ldots, x_{l1}) = \frac{2m}{l} \sum_{i=1}^{l} x_{i1} x_{i2}\, I_m. \qquad (9.22)$$

Note that the concept of an A matrix of order m depending on l parameters


coincides with the following:


• Baumert–Hall array if l = 4.
• Plotkin array if l = 8.
• Yang array if l = 2.
The A matrix of order 12 depending on three parameters is given as follows:
⎛ ⎞
⎜⎜⎜ a b c b −c a c b −a c a −b⎟⎟⎟
⎜⎜⎜⎜ c a b −c a b b −a c a −b c⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ b c a a b −c −a c b −b c a⎟⎟⎟⎟⎟
⎜⎜⎜−b c −a a b c −c b −a c −a b⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ c −a −b c a b b −a −c −a b c⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−a −b c b c a −a −c b b c −a⎟⎟⎟⎟⎟
A(a, b, c) = ⎜⎜⎜ ⎟ . (9.23)
⎜⎜⎜ −c −b a c −b a a b c −b −a c⎟⎟⎟⎟⎟
⎜⎜⎜−b a −c −b a c c a b −a c −b⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ a −c −b a c −b b c a c −b −a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −c −a b −c a −b b a −c a b c⎟⎟⎟⎟⎟
⎜⎜⎜−a b −c a −b −c a −c b c a b⎟⎟⎟
⎜⎝ ⎟⎠
b −c −a −b −c a −c b a b c a
Note that for a, b, c = ±1, the above-given matrix is the Hadamard matrix of
order 12. For a = 1, b = 2, and c = 1, this matrix is the integer orthogonal matrix
⎛ ⎞
⎜⎜⎜ 1 2 1 2 −1 1 1 2 −1 1 1 −2⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 1 1 2 −1 1 2 2 −1 1 1 −2 1⎟⎟⎟⎟
⎜⎜⎜ 2 ⎟
⎜⎜⎜ 1 1 1 2 −1 −1 1 2 −2 1 1⎟⎟⎟⎟⎟
⎜⎜⎜−2 ⎟
⎜⎜⎜ 1 −1 1 2 1 −1 2 −1 1 −1 2⎟⎟⎟⎟

⎜⎜⎜ 1
⎜⎜⎜ −1 −2 1 1 2 2 −1 −1 −1 2 1⎟⎟⎟⎟⎟

⎜⎜−1 −2 1 2 1 1 −1 −1 2 2 1 −1⎟⎟⎟⎟
A(1, 2, 1) = ⎜⎜⎜⎜ ⎟⎟ . (9.24)
⎜⎜⎜−1 −2 1 1 −2 1 1 2 1 −2 −1 1⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−2 1 −1 −2 1 1 1 1 2 −1 1 −2⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜ 1 −1 −2 1 1 −2 2 1 1 1 −2 −1⎟⎟⎟⎟

⎜⎜⎜−1
⎜⎜⎜ −1 2 −1 1 −2 2 1 −1 1 2 1⎟⎟⎟⎟⎟

⎜⎜⎜−1 2 −1 1 −2 −1 1 −1 2 1 1 2⎟⎟⎟⎟
⎝ ⎠
2 −1 −1 −2 −1 1 −1 2 −1 2 1 1

We can see that if H(x_1, x_2, . . . , x_l) is an A matrix, then H(±1, ±1, . . . , ±1) is a Hadamard matrix.
Theorem 9.3.1:14,15,98 For the existence of an A matrix of order m depending on l parameters, it is necessary and sufficient that there are (0, ±1) matrices K_i, i = 1, 2, . . . , l, satisfying the conditions

$$\begin{aligned} &K_i * K_j = 0, \quad i \neq j, \ i, j = 1, 2, \ldots, l,\\ &K_i K_j^T + K_j K_i^T = 0, \quad i \neq j, \ i, j = 1, 2, \ldots, l,\\ &K_i K_i^T = \frac{m}{l} I_m, \quad i = 1, 2, \ldots, l. \end{aligned} \qquad (9.25)$$


The set of matrices K_i, i = 1, 2, . . . , l, satisfying the conditions of Eq. (9.25) is called an l-basic frame.

Lemma 9.3.1:14,98 If an A matrix of order m depending on l parameters (l = 2, 4, 8) exists, then there are A matrices H_i(x_1, x_2, . . . , x_l), i = 1, 2, . . . , l, satisfying the conditions

Hi (x1 , x2 , . . . , xl )H Tj (x1 , x2 , . . . , xl ) + H j (x1 , x2 , . . . , xl )HiT (x1 , x2 , . . . , xl )


= 0, i  j. (9.26)

Proof: By Theorem 9.3.1, there is an l-basic frame K_i, i = 1, 2, . . . , l. Now we provide A matrices that satisfy the conditions of Eq. (9.26).
Case 1. l = 2,

H1 (x1 , x2 ) = x1 K1 + x2 K2 , H2 (x1 , x2 ) = H1 (−x2 , x1 ). (9.27)

Case 2. l = 4,

H1 (x1 , x2 , x3 , x4 ) = x1 K1 + x2 K2 + x3 K3 + x4 K4 ,
H2 (x1 , x2 , x3 , x4 ) = H1 (−x2 , x1 , −x4 , x3 ),
(9.28)
H3 (x1 , x2 , x3 , x4 ) = H1 (−x3 , x4 , x1 , −x2 ),
H4 (x1 , x2 , x3 , x4 ) = H1 (−x4 , −x3 , x2 , x1 ).

Case 3. l = 8,

H1 (x1 , x2 , . . . , x8 ) = x1 K1 + x2 K2 + · · · + x8 K8 ,
H2 (x1 , x2 , . . . , x8 ) = H1 (−x2 , x1 , x4 , −x3 , x6 , −x5 , −x8 , x7 ),
H3 (x1 , x2 , . . . , x8 ) = H1 (−x3 , −x4 , x1 , x2 , x7 , x8 , −x5 , −x6 ),
H4 (x1 , x2 , . . . , x8 ) = H1 (−x4 , x3 , −x2 , x1 , x8 , −x7 , x6 , −x5 ),
(9.29)
H5 (x1 , x2 , . . . , x8 ) = H1 (−x5 , −x6 , −x7 , −x8 , x1 , x2 , x3 , x4 ),
H6 (x1 , x2 , . . . , x8 ) = H1 (−x6 , x5 , −x8 , x7 , −x2 , x1 , −x4 , x3 ),
H7 (x1 , x2 , . . . , x8 ) = H1 (−x7 , x8 , x5 , −x6 , −x3 , x4 , x1 , −x2 ),
H8 (x1 , x2 , . . . , x8 ) = H1 (−x8 , −x7 , x6 , x5 , −x4 , −x3 , x2 , x1 ).
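For l = 2, the smallest basic frame already illustrates the mechanism: K1 = I_2 and K2 = [[0, 1], [−1, 0]] satisfy Eq. (9.25) with m = 2, and the pair H1, H2 of Eq. (9.27) is then amicable in the sense of Eq. (9.26). A short sympy sketch (the frame choice is ours, for illustration):

```python
from sympy import symbols, Matrix, simplify, eye

x1, x2 = symbols('x1 x2')
K1, K2 = eye(2), Matrix([[0, 1], [-1, 0]])       # a 2-basic frame of order 2

def H(u, v):
    return u*K1 + v*K2                            # H1 of Eq. (9.27)

H1, H2 = H(x1, x2), H(-x2, x1)                    # H2(x1, x2) = H1(-x2, x1)
print(simplify(H1*H2.T + H2*H1.T))                # zero matrix, Eq. (9.26)
print(simplify(H1*H1.T - (x1**2 + x2**2)*eye(2))) # A-matrix condition, zero
```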

The following lemma relates to the construction of a 4-basic frame using T matrices.

Lemma 9.3.2: Let X1 , X2 , X3 , X4 be T matrices of order n. Then, the following


matrices form a 4-basic frame of order 4n:
$$K_1 = \begin{pmatrix} X_1 B_n & X_2 & X_3 & X_4\\ -X_2 & X_1 B_n & -X_4^T & X_3^T\\ -X_3 & -X_4^T & -X_1 B_n & X_2^T\\ -X_4 & X_3^T & -X_2^T & -X_1 B_n \end{pmatrix}, \qquad K_2 = \begin{pmatrix} X_2 B_n & -X_1 & X_4 & -X_3\\ X_1 & X_2 B_n & X_3^T & X_4^T\\ X_4 & X_3^T & -X_2 B_n & -X_1^T\\ X_3 & X_4^T & X_1^T & -X_2 B_n \end{pmatrix}, \qquad (9.30a)$$

$$K_3 = \begin{pmatrix} X_3 B_n & -X_4 & -X_1 & X_2\\ X_4 & X_3 B_n & -X_2^T & -X_1^T\\ -X_1 & -X_2^T & -X_3 B_n & -X_4^T\\ X_2 & -X_1^T & X_4^T & -X_3 B_n \end{pmatrix}, \qquad K_4 = \begin{pmatrix} X_4 B_n & X_3 & -X_2 & -X_1\\ -X_3 & X_4 B_n & X_1^T & -X_2^T\\ -X_2 & X_1^T & -X_4 B_n & X_3^T\\ -X_1 & -X_2^T & -X_3^T & -X_4 B_n \end{pmatrix}. \qquad (9.30b)$$

Example 9.3.1: The 4-basic frame of order 12. Using the following T matrices:
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ 0 0 ⎟⎟⎟ ⎜⎜⎜0 + 0 ⎟⎟⎟ ⎜⎜⎜0 0 +⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
X1 = ⎜⎜⎜0 + 0 ⎟⎟⎟⎟ , X2 = ⎜⎜⎜0 0 +⎟⎟⎟⎟ , X3 = ⎜⎜⎜+ 0 0 ⎟⎟⎟⎟ , (9.31)
⎝⎜ ⎠⎟ ⎝⎜ ⎟⎠ ⎝⎜ ⎠⎟
0 0 + + 0 0 0 + 0

we obtain
⎛ ⎞
⎜⎜⎜0
⎜⎜⎜ 0 + 0 + 0 0 0 + 0 0 0 ⎟⎟⎟⎟

⎜⎜⎜0
⎜⎜⎜ + 0 0 0 + + 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜+
⎜⎜⎜ 0 0 + 0 0 0 + 0 0 0 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ − 0 0 0 + 0 0 0 0 + 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 − 0 + 0 0 0 0 0 0 +⎟⎟⎟⎟⎟
⎜⎜⎜− ⎟⎟
+ + 0 ⎟⎟⎟⎟
K1 = ⎜⎜⎜⎜⎜
0 0 0 0 0 0 0 0
⎟⎟ , (9.32a)
⎜⎜⎜0 0 − 0 0 0 0 0 − 0 0 +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− 0 0 0 0 0 0 − 0 + 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0
⎜⎜⎜ − 0 0 0 0 − 0 0 0 + 0 ⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 0 0 + 0 0 0 − 0 0 −⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎝ 0 0 0 0 + − 0 0 0 − 0 ⎟⎟⎟⎟
⎟⎠
0 0 0 + 0 0 0 − 0 − 0 0
⎛ ⎞
⎜⎜⎜0
⎜⎜⎜ + 0 − 0 0 0 0 0 0 0 −⎟⎟⎟⎟

⎜⎜⎜+
⎜⎜⎜ 0 0 0 − 0 0 0 0 − 0 0 ⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 + 0 0 − 0 0 0 0 − 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜+
⎜⎜⎜ 0 0 0 + 0 0 + 0 0 0 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ + 0 + 0 0 0 0 + 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 ⎟⎟
+ + + 0 ⎟⎟⎟⎟
K2 = ⎜⎜⎜⎜⎜
0 0 0 0 0 0 0
⎟⎟ , (9.32b)
⎜⎜⎜0 0 0 0 + 0 0 − 0 − 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 0 0 + − 0 0 0 − 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 0 + 0 0 0 0 − 0 0 −⎟⎟⎟⎟⎟
⎟⎟
⎜⎜⎜0
⎜⎜⎜ 0 + 0 0 0 + 0 0 0 − 0 ⎟⎟⎟⎟
⎟⎟
⎜⎜⎜+
⎜⎜⎝ 0 0 0 0 0 0 + 0 − 0 0 ⎟⎟⎟⎟
⎟⎠
0 + 0 0 0 0 0 0 + 0 0 −


⎛ ⎞
⎜⎜⎜+ 0 0 0 0 0 − 0 0 0 + 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 + 0 0 0 0 − 0 0 0 +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 + 0 0 0 0 0 0 − + 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 + 0 0 0 0 − − 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 + − 0 0 0 − 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜0 0 0 0 + 0 0 − 0 0 0 −⎟⎟⎟⎟
K3 = ⎜⎜⎜⎜ ⎟⎟ , (9.32c)
⎜⎜⎜− 0 0 0 0 − 0 0 − 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 − 0 − 0 0 0 − 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 − 0 − 0 − 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜0 + 0 − 0 0 0 0 0 − 0 0 ⎟⎟⎟⎟

⎜⎜⎜0
⎜⎜⎝ 0 + 0 − 0 0 0 0 0 0 −⎟⎟⎟⎟⎟

+ 0 0 0 0 − 0 0 0 0 − 0
⎛ ⎞
⎜⎜⎜0 0 0 0 0 + 0 − 0 − 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 + 0 0 0 0 − 0 − 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 + 0 − 0 0 0 0 −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 − 0 0 0 + 0 0 0 0 −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− 0 0 0 0 0 0 + 0 − 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜0 − 0 0 0 0 0 0 + 0 − 0 ⎟⎟⎟⎟
K4 = ⎜⎜⎜⎜ ⎟⎟ . (9.32d)
⎜⎜⎜0 − 0 + 0 0 0 0 0 0 + 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 − 0 + 0 0 0 0 0 0 +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− 0 0 0 0 + 0 0 0 + 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜− 0 0 0 0 − 0 − 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜⎝0 − 0 − 0 0 0 0 − 0 0 0 ⎟⎟⎟⎟⎟

0 0 − 0 − 0 − 0 0 0 0 0

Theorem 9.3.2: Let A_0, B_0, C_0, D_0 and X_1, X_2 be T matrices of orders m and n, respectively. Then, the following matrices are T matrices of order mn^i, i = 1, 2, . . .:

$$\begin{aligned} A_i &= X_1 \otimes A_{i-1} - X_2 \otimes B_{i-1}^T, & B_i &= X_1 \otimes B_{i-1} + X_2 \otimes A_{i-1}^T,\\ C_i &= X_1 \otimes C_{i-1} - X_2 \otimes D_{i-1}^T, & D_i &= X_1 \otimes D_{i-1} + X_2 \otimes C_{i-1}^T. \end{aligned} \qquad (9.33)$$
Proof: We show that the matrices in Eq. (9.33) satisfy the conditions of Eq. (9.18), proceeding by induction. Let i = 1. Compute

A1 ∗ B1 = (X1 ∗ X1 ) ⊗ (A0 ∗ B0 ) + (X1 ∗ X2 ) ⊗ (A0 ∗ AT0 )


−(X2 ∗ X1 ) ⊗ (BT0 ∗ B0 ) − (X2 ∗ X2 ) ⊗ (BT0 ∗ AT0 ). (9.34)

Since A_0 ∗ B_0 = 0, X_1 ∗ X_2 = 0, and B_0^T ∗ A_0^T = 0, we conclude that A_1 ∗ B_1 = 0. [Recall that ∗ denotes the Hadamard (pointwise) product.] In a similar


manner, we can determine that

P ∗ Q = 0, \quad P ≠ Q, \quad P, Q ∈ {A_1, B_1, C_1, D_1}. \qquad (9.35)

Prove the second condition of Eq. (9.18):

$$\begin{aligned} A_1 B_1 &= X_1^2 \otimes A_0 B_0 + X_1 X_2 \otimes A_0 A_0^T - X_2 X_1 \otimes B_0^T B_0 - X_2^2 \otimes B_0^T A_0^T,\\ B_1 A_1 &= X_1^2 \otimes B_0 A_0 - X_1 X_2 \otimes B_0 B_0^T + X_2 X_1 \otimes A_0^T A_0 - X_2^2 \otimes A_0^T B_0^T. \end{aligned} \qquad (9.36)$$

The comparison of both relations gives A_1 B_1 = B_1 A_1. Now, compute A_1 R_{mn} B_1^T, taking into account that R_{mn} = R_n ⊗ R_m:

$$\begin{aligned} A_1 R_{mn} B_1^T &= X_1 R_n X_1^T \otimes A_0 R_m B_0^T + X_1 R_n X_2^T \otimes A_0 R_m A_0\\ &\quad - X_2 R_n X_1^T \otimes B_0^T R_m B_0^T - X_2 R_n X_2^T \otimes B_0^T R_m A_0,\\ B_1 R_{mn} A_1^T &= X_1 R_n X_1^T \otimes B_0 R_m A_0^T - X_1 R_n X_2^T \otimes B_0 R_m B_0\\ &\quad + X_2 R_n X_1^T \otimes A_0^T R_m A_0^T - X_2 R_n X_2^T \otimes A_0^T R_m B_0. \end{aligned} \qquad (9.37)$$

Hence, we have A_1 R_{mn} B_1^T = B_1 R_{mn} A_1^T. Similarly, we can determine that

$$P R_{mn} Q^T = Q R_{mn} P^T, \quad P \neq Q, \quad P, Q \in \{A_1, B_1, C_1, D_1\}. \qquad (9.38)$$

The fourth condition of Eq. (9.18) follows from the equation

$$A_1 + B_1 + C_1 + D_1 = X_1 \otimes (A_0 + B_0 + C_0 + D_0) + X_2 \otimes (A_0^T - B_0^T + C_0^T - D_0^T). \qquad (9.39)$$

Now let us prove the fifth condition:

$$\begin{aligned} A_1 A_1^T &= X_1 X_1^T \otimes A_0 A_0^T - X_1 X_2^T \otimes A_0 B_0 - X_2 X_1^T \otimes B_0^T A_0^T + X_2 X_2^T \otimes B_0^T B_0,\\ B_1 B_1^T &= X_1 X_1^T \otimes B_0 B_0^T + X_1 X_2^T \otimes B_0 A_0 + X_2 X_1^T \otimes A_0^T B_0^T + X_2 X_2^T \otimes A_0^T A_0,\\ C_1 C_1^T &= X_1 X_1^T \otimes C_0 C_0^T - X_1 X_2^T \otimes C_0 D_0 - X_2 X_1^T \otimes D_0^T C_0^T + X_2 X_2^T \otimes D_0^T D_0,\\ D_1 D_1^T &= X_1 X_1^T \otimes D_0 D_0^T + X_1 X_2^T \otimes D_0 C_0 + X_2 X_1^T \otimes C_0^T D_0^T + X_2 X_2^T \otimes C_0^T C_0. \end{aligned} \qquad (9.40)$$

Summing these expressions, we find that

$$A_1 A_1^T + B_1 B_1^T + C_1 C_1^T + D_1 D_1^T = (X_1 X_1^T + X_2 X_2^T) \otimes (A_0 A_0^T + B_0 B_0^T + C_0 C_0^T + D_0 D_0^T) = mn I_{mn}. \qquad (9.41)$$

Now, assuming that the matrices A_i, B_i, C_i, D_i are T matrices for all i ≤ k, we prove the theorem for i = k + 1. We verify only the fifth condition of Eq. (9.18),

$$A_{k+1} A_{k+1}^T + B_{k+1} B_{k+1}^T + C_{k+1} C_{k+1}^T + D_{k+1} D_{k+1}^T = (X_1 X_1^T + X_2 X_2^T) \otimes (A_k A_k^T + B_k B_k^T + C_k C_k^T + D_k D_k^T). \qquad (9.42)$$


Because X_1, X_2 and A_k, B_k, C_k, D_k are T matrices of orders n and mn^k, respectively, from the above-given equation we obtain

$$A_{k+1} A_{k+1}^T + B_{k+1} B_{k+1}^T + C_{k+1} C_{k+1}^T + D_{k+1} D_{k+1}^T = mn^{k+1} I_{mn^{k+1}}. \qquad (9.43)$$

The theorem is proved.
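The recursion of Eq. (9.33) is also easy to exercise numerically. Taking the order-3 T matrices of Eq. (9.19) for A0, . . . , D0 and the trivial order-2 pair X1 = circ(1, 0), X2 = circ(0, 1) (a small example of our choosing, only for illustration), one step of the recursion yields T matrices of order 6:

```python
import numpy as np

def circulant(row):
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

A0, B0, C0, D0 = (circulant(r) for r in
                  ([1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]))
X1, X2 = circulant([1, 0]), circulant([0, 1])

A1 = np.kron(X1, A0) - np.kron(X2, B0.T)    # Eq. (9.33) with i = 1
B1 = np.kron(X1, B0) + np.kron(X2, A0.T)
C1 = np.kron(X1, C0) - np.kron(X2, D0.T)
D1 = np.kron(X1, D0) + np.kron(X2, C0.T)

gram = sum(M @ M.T for M in (A1, B1, C1, D1))
print(np.array_equal(gram, 6 * np.eye(6, dtype=int)))  # fifth condition of Eq. (9.18)
print(np.all(np.abs(A1 + B1 + C1 + D1) == 1))          # fourth condition of Eq. (9.18)
```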

Lemma 9.3.3: If A_0, B_0, C_0, D_0 are T matrices of order m, then the following matrices:

$$\begin{aligned} A_i &= I_6 \otimes B_{i-1} + X_1 \otimes C_{i-1} + X_2 \otimes D_{i-1},\\ B_i &= I_6 \otimes A_{i-1} - X_1 \otimes D_{i-1} + X_2^T \otimes C_{i-1},\\ C_i &= I_6 \otimes D_{i-1} + X_1 \otimes A_{i-1} - X_2^T \otimes B_{i-1},\\ D_i &= I_6 \otimes C_{i-1} + X_1 \otimes B_{i-1} + X_2 \otimes A_{i-1} \end{aligned} \qquad (9.44)$$

are T matrices of order 6^i m, where i = 1, 2, . . . , and


⎛ ⎞ ⎛ ⎞
⎜⎜⎜0 0 0 + 0 0 ⎟⎟ ⎜⎜⎜0 + − 0 − −⎟⎟
⎜⎜⎜0 ⎟ ⎟
⎜⎜⎜ 0 0 0 + 0 ⎟⎟⎟⎟ ⎜⎜⎜−
⎜⎜⎜ 0 + − 0 −⎟⎟⎟⎟
⎟ ⎟
⎜⎜0 0 0 0 0 +⎟⎟⎟⎟ ⎜⎜− − 0 + − 0 ⎟⎟⎟⎟
X1 = ⎜⎜⎜⎜ ⎟, X2 = ⎜⎜⎜⎜ ⎟.
0 ⎟⎟⎟⎟⎟ −⎟⎟⎟⎟⎟
(9.45)
⎜⎜⎜+ 0 0 0 0 ⎜⎜⎜0 − − 0 +
⎜⎜⎜0 + 0 ⎟⎟⎟⎟⎠ ⎜⎜⎜− − − +⎟⎟⎟⎟⎠
⎜⎝ 0 0 0 ⎜⎝ 0 0
0 0 + 0 0 0 + − 0 − − 0

For example, let


⎛ ⎞ ⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ 0 0 ⎟⎟⎟ ⎜⎜⎜0 + 0 ⎟⎟⎟ ⎜⎜⎜0 0 +⎟⎟⎟ ⎜⎜⎜0 0 0⎟⎟⎟
A0 = ⎜⎜⎝0 + 0 ⎟⎟⎟⎟⎠ ,

⎜ B0 = ⎜⎜⎝0 0 +⎟⎟⎟⎟⎠ ,

⎜ C0 = ⎜⎜⎝+ 0 0 ⎟⎟⎟⎟⎠ ,

⎜ D0 = ⎜⎜⎝0 0 0⎟⎟⎟⎟⎠ . (9.46)


0 0 + + 0 0 0 + 0 0 0 0

From Lemma 9.3.3, we obtain T matrices of order 18,


⎛ ⎞ ⎛ ⎞
⎜⎜⎜ B0 O0 O0 C0 O0 O0 ⎟⎟⎟ ⎜⎜⎜ A0 −C0 −C0 O0 −C0 C0 ⎟⎟⎟
⎜⎜⎜⎜O B O O ⎟
O0 ⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ C0 A0 −C0 −C0 O0 −C0 ⎟⎟⎟⎟⎟

⎜⎜⎜ 0 0 0 0 C0
⎟ ⎜⎜⎜ ⎟
⎜⎜⎜O O B O C0 ⎟⎟⎟⎟⎟ ⎜−C C0 A0 −C0 −C0 O0 ⎟⎟⎟⎟⎟
A1 = ⎜⎜⎜⎜⎜ 0 0 0 0 B1 = ⎜⎜⎜⎜⎜ 0
O0
⎟⎟ , ⎟,
⎜⎜⎜C0 O0 O0 B0 O0 O0 ⎟⎟⎟⎟ ⎜⎜⎜ O0 −C0 C0 A0 −C0 −C0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜O0 C0 O0 O0 B0 O0 ⎟⎟⎟⎟ ⎜⎜⎜−C0 O0 −C0 C0 A0 −C0 ⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
O0 O0 C0 O0 O0 B0 −C0 −C0 O0 −C0 C0 A0
⎛ ⎞ ⎛ (9.47)

⎜⎜⎜ D0 B0 B0 O0 B0 −B0 ⎟⎟⎟ ⎜⎜⎜ C0 A0 −A0 B0 −A0 −A0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−B0 D0 B0 B0 O0 B0 ⎟⎟
⎟⎟⎟ ⎜⎜⎜−A0 C0 A0 −A0 B0 −A0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎜
⎜⎜⎜−A −A ⎟
⎜ B −B0 D0 B0 B0 O0 ⎟⎟⎟ C0 A0 −A0 B0 ⎟⎟⎟⎟
C1 = ⎜⎜⎜⎜ 0 ⎟⎟⎟ , D1 = ⎜⎜⎜⎜ 0 0
⎟⎟ .
⎜⎜⎜ O0 B0 −B0
⎜⎜⎜
D0 B0 B0 ⎟⎟⎟
⎟⎟⎟ ⎜⎜⎜⎜ B0 −A0 −A0 C0 A0 −A0 ⎟⎟⎟⎟⎟
⎜⎜⎜ B0 O0 B0 −B0 D0 B0 ⎟⎟⎟ ⎜⎜⎜⎜−A0 B0 −A0 −A0 C0 A0 ⎟⎟⎟⎟
⎝ ⎠ ⎜⎝ ⎟⎠
B0 B0 O0 B0 −B0 D0 A0 −A0 B0 −A0 −A0 C0


From Theorem 9.3.2 and Lemma 9.3.3, the existence of T matrices of order 2n follows, where n takes values from the following set of numbers:

{63, 65, 75, 77, 81, 85, 87, 91, 93, 95, 99, 111, 115, 117, 119, 123,
125, 129, 133, 135, 141, 148, 145, 147, 153, 155, 159, 161, 165, 169, 171,
175, 177, 185, 189, 195, 203, 205, 209, 215, 221, 225, 231, 235, 243, 245,
247, 255, 259, 265, 273, 275, 285, 287, 295, 297, 299, 301, 303, 305, 315,
323, 325, 329, 343, 345, 351, 357, 361, 371, 375, 377, 385, 387, 399, 403,
405, 413, 425, 427, 429, 435, 437, 441, 455, 459, 465, 475, 481, 483, 495,
505, 507, 513, 525, 533, 551, 555, 559, 567, 575, 585, 589, 603, 609, 611,
615, 621, 625, 627, 637, 645, 651, 663, 665, 675, 689, 693, 703, 705, 707,
715, 725, 729, 735, 741, 765, 767, 771, 775, 777, 779, 783, 793, 805, 817,
819, 825, 837, 845, 855, 861, 875, 885, 891, 893, 903, 915, 925, 931, 945,
963, 969, 975, 987, 999, 1005, 1007, 1025, 1029, 1045, 1053, 1071, 1075,
1083, 1107, 1113, 1121, 1125, 1127, 1155, 1159, 1161, 1175, 1197, 1203,
1215, 1225, 1235, 1239, 1251, 1269, 1275, 1281, 1285, 1305, 1313, 1323,
1325, 1365, 1375, 1377, 1407, 1425, 1431, 1463, 1475, 1485, 1515, 1525,
1539, 1563, 1575, 1593, 1605, 1625, 1647, 1677, 1701, 1755, 1799, 1827,
1919, 1923, 1935, 2005, 2025, 2085, 2093, 2121, 2187, 2205, 2243, 2403,
2415, 2451, 2499, 2525, 2565, 2613, 2625, 2709, 2717, 2727, 2807, 2835,
2919, 3003, 3015, 3059}.

9.4 Goethals–Seidel Arrays


Goethals and Seidel17 provide the most useful tool for constructing ODs. The Goethals–Seidel array is of the form

$$\begin{pmatrix} A & BR & CR & DR\\ -BR & A & -D^T R & C^T R\\ -CR & D^T R & A & -B^T R\\ -DR & -C^T R & B^T R & A \end{pmatrix}, \qquad (9.48)$$

where R is the back-diagonal identity matrix and A, B, C, and D are cyclic (−1, +1)
matrices of order n satisfying

A A^T + B B^T + C C^T + D D^T = 4n I_n. \qquad (9.49)

If A, B, C, and D are cyclic symmetric (−1, +1) matrices, then one obtains the Williamson array

$$W = \begin{pmatrix} A & B & C & D\\ -B & A & -D & C\\ -C & D & A & -B\\ -D & -C & B & A \end{pmatrix}. \qquad (9.50)$$

The Goethals–Seidel array is thus a generalization of the Williamson array.
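For a concrete run of Eq. (9.48), take the cyclic (−1, +1) matrices A = circ(1, 1, 1) and B = C = D = circ(1, −1, −1), which satisfy Eq. (9.49) with n = 3 (a standard small example of our choosing, not taken from the text). The resulting array is a Hadamard matrix of order 12:

```python
import numpy as np

def circulant(row):
    n = len(row)
    return np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

n = 3
A = circulant([1, 1, 1])
B = C = D = circulant([1, -1, -1])        # AA^T + BB^T + CC^T + DD^T = 12 I_3
R = np.fliplr(np.eye(n, dtype=int))       # back-diagonal identity

H = np.block([[ A,       B @ R,    C @ R,    D @ R],   # Eq. (9.48)
              [-B @ R,   A,       -D.T @ R,  C.T @ R],
              [-C @ R,   D.T @ R,  A,       -B.T @ R],
              [-D @ R,  -C.T @ R,  B.T @ R,  A]])
print(np.array_equal(H @ H.T, 4 * n * np.eye(4 * n, dtype=int)))  # True
```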


For detailed discussions of more Goethals–Seidel arrays, we recommend


Refs. 28, 30, 33, 39, 40, 92, 97–99. In Ref. 26, the authors have constructed
an infinite family of Goethals–Seidel arrays. Particularly, they prove that if q =
4n − 1 ≡ 3 (mod 8) is a prime power, then there is a Hadamard matrix of order 4n
of the Goethals–Seidel type.
Cooper and Wallis32 first defined T matrices of order t (see Definition 9.2.2)
to construct OD(4t; t, t, t, t) (which at that time they called Hadamard arrays). The
following important theorem is valid:

Theorem 9.4.1: (Cooper–Seberry–Turyn) Suppose there are T matrices T 1 , T 2 ,


T 3 , T 4 of order t (assumed to be cyclic or block cyclic = type 1). Let a, b, c, d be
commuting variables. Then,

A = aT 1 + bT 2 + cT 3 + dT 4 ,
B = −bT 1 + aT 2 + dT 3 − cT 4 ,
(9.51)
C = −cT 1 − dT 2 + aT 3 + bT 4 ,
D = −dT 1 + cT 2 − bT 3 + aT 4 .

These can be used in the Goethals–Seidel array (or Seberry Wallis–Whiteman


array for block-cyclic, i.e., type 1 and type 2 matrices)
⎛ ⎞
⎜⎜⎜ A BR CR DR⎟⎟

⎜⎜⎜ −BR A DT R −C T R⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ (9.52)
⎜⎜⎜⎜ −CR −DT R A BT R⎟⎟⎟⎟
⎝ ⎠
−DR C T R −BT R A

to form an OD(4t; t, t, t, t), where R is the permutation matrix that transforms cyclic to back-cyclic matrices, or type 1 to type 2 matrices.

Theorem 9.4.2: (Cooper–Seberry–Turyn) Suppose there are T matrices T 1 , T 2 ,


T 3 , T 4 of order t (assumed to be cyclic or block cyclic = type 1). Let A, B, C, D be
Williamson matrices of order m. Then,

X = T 1 ⊗ A + T 2 ⊗ B + T 3 ⊗ C + T 4 ⊗ D,
Y = −T 1 ⊗ B + T 2 ⊗ A + T 3 ⊗ D − T 4 ⊗ C,
(9.53)
Z = −T 1 ⊗ C − T 2 ⊗ D + T 3 ⊗ A + T 4 ⊗ B,
W = −T 1 ⊗ D + T 2 ⊗ C − T 3 ⊗ B + T 4 ⊗ A

can be used in the Goethals–Seidel array


⎛ ⎞
⎜⎜⎜ X YR ZR WR⎟⎟⎟
⎜⎜⎜ ⎟
⎜ −YR X W T R −Z T R⎟⎟⎟⎟
GS = ⎜⎜⎜⎜ ⎟⎟ (9.54)
⎜⎜⎜ −ZR −W T R X Y T R⎟⎟⎟⎟
⎝ ⎠
−WR Z T R −Y T R X


to form a Hadamard matrix of order 4mt. Geramita and Seberry provide a


number of 8 × 8 arrays that can be treated as part Williamson and part
Goethals–Seidel (see Ref. 5, p. 102).
Definition 9.4.1:92 We will call a square matrix H(X_1, X_2, X_3, X_4) of order 4t a Goethals–Seidel array if the following conditions are satisfied:
(1) Each element of H has the form ±X_i, ±X_i^T, ±X_i B_n, or ±X_i^T B_n.
(2) In each row (column) of H, the elements ±X_i, ±X_i^T, ±X_i B_n, ±X_i^T B_n enter t times.
(3) X_i X_j = X_j X_i and X_i B_n X_j^T = X_j B_n X_i^T.
(4) H(X_1, X_2, X_3, X_4) H^T(X_1^T, X_2^T, X_3^T, X_4^T) = t \left( \sum_{i=1}^{4} X_i X_i^T \right) \otimes I_{4t}.
Note that for t = 1 and Bn = R, the defined array coincides with Eq. (9.20), and
for t = 1 and Bn = Rm ⊗ Ik (n = mk), the defined array coincides with the Wallis
array,13
⎛ ⎞
⎜⎜⎜ A1 ⊗ B1 A2 R ⊗ B2 A3 R ⊗ B3 A4 R ⊗ B4 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜−A R ⊗ B A1 ⊗ B1 −AT4 R ⊗ B4 AT3 R ⊗ B3 ⎟⎟⎟⎟
⎜⎜⎜ 2 2
⎟⎟⎟ . (9.55)
⎜⎜⎜
⎜⎜⎜−A3 R ⊗ B3 A4 R ⊗ B4
T
A1 ⊗ B1 −AT2 R ⊗ B2 ⎟⎟⎟⎟
⎝ ⎟⎠
−A4 R ⊗ B4 −AT3 R ⊗ B3 AT2 R ⊗ B2 A1 ⊗ B1

Here we give an example of a Hadamard matrix of order 12 constructed using a


Wallis array. Let

B1 = B2 = B3 = B4 = 1,
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟
A1 = ⎜⎜⎜⎜⎝+ + +⎟⎟⎟⎟⎠ , A2 = A3 = A4 = ⎜⎜⎜⎜⎝− + −⎟⎟⎟⎟⎠ . (9.56)
+ + + − − +

Then, the following is the Wallis-type Hadamard matrix of order 12:


⎛ ⎞
⎜⎜⎜+ + + − − + − − + − − +⎟⎟

⎜⎜⎜+ + + − + − − + − − + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + + + − − + − − + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + − + + + + + − − − +⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜+
⎜⎜⎜ − + + + + + − + − + −⎟⎟⎟⎟⎟

⎜⎜⎜− + + + + + − + + + − −⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ . (9.57)
⎜⎜⎜+ + − − − + + + + + + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ − + − + − + + + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− + + + − − + + + − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ + − + + − − − + + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝+ − + + − + − + − + + +⎟⎟⎟⎟⎠
− + + − + + + − − + + +


Theorem 9.4.3: (Generalized Goethals–Seidel Theorem97 ) If there are William-


son-type matrices of order n and a Goethals–Seidel array of order 4t, then a
Hadamard matrix of order 4tn exists.
Lemma 9.4.1: Let A, B, C, D be Williamson-type matrices of order k and let
⎛ ⎞
⎜⎜⎜ X YG ZG WG⎟⎟

⎜⎜⎜ −YG X −W T G Z T G⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ (9.58)
⎜⎜⎜⎜ −ZG W T G X −Y T G⎟⎟⎟⎟
⎝ ⎠
−WG −Z T G Y T G X

be a Goethals–Seidel array of order 4. Then, the matrix


⎛ ⎞
⎜⎜⎜ X⊗E YG ⊗ E ZG ⊗ E WG ⊗ E X⊗F YG ⊗ F ZG ⊗ F WG ⊗ F ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −YG ⊗ E X ⊗ E −W G ⊗ E Z G ⊗ E
T T
YG ⊗ F −X ⊗ F −W G ⊗ F Z G ⊗ F ⎟⎟⎟⎟
T T
⎜⎜⎜ −ZG ⊗ E W T G ⊗ E ⎟⎟⎟
⎜⎜⎜ X ⊗ E −Y T
G ⊗ E ZG ⊗ F W T
G ⊗ F −X ⊗ F −Y T
G ⊗ F ⎟⎟⎟
⎜⎜⎜−WG ⊗ E −Z T G ⊗ E ⎟
⎜⎜⎜ Y G⊗E
T
X ⊗ E WG ⊗ F −Z G ⊗ F
T
Y G⊗F
T
−X ⊗ F ⎟⎟⎟⎟

⎜⎜⎜ −X ⊗ F −YG ⊗ F −ZG ⊗ F −WG ⊗ F X⊗E YG ⊗ E ZG ⊗ E WG ⊗ E ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −YG ⊗ F X ⊗ F −W G ⊗ F Z G ⊗ F −YG ⊗ E
T T
X ⊗ E −W G ⊗ E Z G ⊗ E ⎟⎟⎟⎟⎟
T T
⎜⎜⎜ ⎟
⎜⎜⎝ −ZG ⊗ F W G ⊗ F
T
X ⊗ F −Y G ⊗ F −ZG ⊗ E W G ⊗ E
T T
X ⊗ E −Y G ⊗ E ⎟⎟⎟⎟
T

−WG ⊗ F −Z T G ⊗ F YT G ⊗ F X ⊗ F −WG ⊗ E −Z T G ⊗ E YT G ⊗ E X⊗E
(9.59)
is the Goethals–Seidel array of order 16k, where
   
$$E = \begin{pmatrix} A & B\\ -B & A \end{pmatrix}, \qquad F = \begin{pmatrix} C & D\\ D & -C \end{pmatrix}. \qquad (9.60)$$

Consider a (0, ±1) matrix P = (p_{i,j}) of order 4n with elements p_{i,j} defined as

$$p_{2i-1,2i} = 1, \quad p_{2i,2i-1} = -1, \quad i = 1, 2, \ldots, 2n; \qquad p_{i,j} = 0 \ \text{otherwise}. \qquad (9.61)$$

We can verify that P^T = −P and P P^T = I_{4n}.
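In block terms, Eq. (9.61) simply says P = I_{2n} ⊗ [[0, 1], [−1, 0]], which makes both properties immediate:

```python
import numpy as np

n = 2
P = np.kron(np.eye(2 * n, dtype=int), np.array([[0, 1], [-1, 0]]))  # Eq. (9.61)
print(np.array_equal(P.T, -P),
      np.array_equal(P @ P.T, np.eye(4 * n, dtype=int)))            # True True
```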


Lemma 9.4.2: If there is a Goethals–Seidel array of order 4t, then there are two
Goethals–Seidel arrays H1 and H2 of order 4t satisfying the condition

H1 H2T + H2 H1T = 0. (9.62)

Proof: Let H(X1 , X2 , X3 , X4 ) be a Goethals–Seidel array of order 4t. We prove that


matrices H1 = H(X1 , X2 , X3 , X4 ) and H2 = H(X1 , X2 , X3 , X4 )(Ik ⊗ P) satisfy the
condition of Eq. (9.62), where matrix P is defined by Eq. (9.61),

$$H_1 H_2^T + H_2 H_1^T = H(I_k \otimes P^T)H^T + H(I_k \otimes P)H^T = -H(I_k \otimes P)H^T + H(I_k \otimes P)H^T = 0. \qquad (9.63)$$

Theorem 9.4.4: Let a Goethals–Seidel array of order 4t and Williamson-type


matrices of order k exist. Then, there is a Goethals–Seidel array of order 4kt.


Proof: Let A, B, C, and D be Williamson matrices of order k and H(X1 , X2 , X3 , X4 )


be a Goethals–Seidel array of order 4t. According to Lemma 9.4.2, there are two
Goethals–Seidel arrays H1 and H2 satisfying the condition of Eq. (9.62). Consider
the array

S (X1 , X2 , X3 , X4 ) = X ⊗ H1 + Y ⊗ H2 , (9.64)

where matrices
   
1 A+B C+D 1 A−B C−D
X= , Y= , (9.65)
2 C + D −A − B 2 −C + D A − B
satisfy the conditions

X ∗ Y = 0,
X ± Y is a (+1, −1) matrix,
(9.66)
XY T = Y X T ,
XX T + YY T = 2kI2k .

We can prove that S (X1 , X2 , X3 , X4 ) is the Goethals–Seidel array of order 4kt,


i.e., it satisfies the conditions of Definition 9.4.1. Let us check only the fourth
condition,

$$\begin{aligned} S S^T &= X X^T \otimes H_1 H_1^T + Y Y^T \otimes H_2 H_2^T + X Y^T \otimes (H_1 H_2^T + H_2 H_1^T)\\ &= t(X X^T + Y Y^T) \otimes \sum_{i=1}^{4} X_i X_i^T \otimes I_{4t} + X Y^T \otimes (H_1 H_2^T + H_2 H_1^T)\\ &= kt \sum_{i=1}^{4} X_i X_i^T \otimes I_{4kt}. \end{aligned} \qquad (9.67)$$

In particular, from Theorem 9.4.4, the existence of Goethals–Seidel arrays of order 8n follows, where n ∈ {3, 5, 7, . . . , 33, 37, 39, 41, 43, 49, 51, 55, 57, 61, 63}.

9.5 Plotkin Arrays


Plotkin's array100 is defined similarly to the Baumert–Hall array and depends on eight parameters. Plotkin's result is that if there is a Hadamard matrix of order 4n, then there are Plotkin arrays of orders 4n, 8n, and 16n depending on two, four, and eight symbols, obtained as follows. Let H_{4n} be the Hadamard matrix of order 4n. Introduce the following (0, −1, +1) matrices of order 4n:

$$\begin{aligned} S &= \frac{1}{2}\begin{pmatrix} I_{2n} & -I_{2n}\\ I_{2n} & I_{2n} \end{pmatrix} H_{4n}, & T &= \frac{1}{2}\begin{pmatrix} I_{2n} & I_{2n}\\ -I_{2n} & I_{2n} \end{pmatrix} H_{4n},\\ U &= \frac{1}{2}\begin{pmatrix} I_{2n} & -I_{2n}\\ -I_{2n} & -I_{2n} \end{pmatrix} H_{4n}, & V &= \frac{1}{2}\begin{pmatrix} I_{2n} & I_{2n}\\ I_{2n} & -I_{2n} \end{pmatrix} H_{4n}. \end{aligned} \qquad (9.68)$$


Then, we obtain the following:

• Plotkin array of order 4n,

H4n (a, b) = aS + bT ; (9.69)

• Plotkin array of order 8n,

$$H_{8n}(a, b, c, d) = \begin{pmatrix} H_{4n}(a, b) & H_{4n}(c, d)\\ H_{4n}(-c, d) & H_{4n}(a, -b) \end{pmatrix}; \qquad (9.70)$$

• Plotkin array of order 16n,

$$H_{16n}(a, b, \ldots, g, h) = \begin{pmatrix} H_{8n}(a, b, c, d) & B_{8n}(e, f, g, h)\\ B_{8n}(-e, f, g, h) & -H_{8n}(-a, b, c, d) \end{pmatrix}, \qquad (9.71)$$

where

$$B_{8n}(a, b, c, d) = \begin{pmatrix} aS + bT & cU + dV\\ -cU - dV & aS + bT \end{pmatrix}. \qquad (9.72)$$
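The chain of Eqs. (9.68)–(9.72) can be exercised numerically; starting from the Sylvester Hadamard matrix of order 4 (so n = 1, an assumed small example), the two-symbol array of Eq. (9.69) satisfies H_{4n}(a, b) H_{4n}(a, b)^T = 2n(a² + b²) I_{4n}:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
H4n = np.kron(H2, H2)                      # Hadamard matrix of order 4n, here n = 1
m = H4n.shape[0]
I = np.eye(m // 2, dtype=int)
S = np.block([[I, -I], [I,  I]]) @ H4n // 2   # Eq. (9.68)
T = np.block([[I,  I], [-I, I]]) @ H4n // 2

a, b = 2, 5                                # arbitrary integer test values
Pl = a * S + b * T                         # Plotkin array of order 4n, Eq. (9.69)
print(np.array_equal(Pl @ Pl.T,
                     (m // 2) * (a**2 + b**2) * np.eye(m, dtype=int)))  # True
```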

In Ref. 100, a Plotkin array of order 24 was presented, and the following conjecture was given.
Problem for exploration (Plotkin conjecture): There are Plotkin arrays of every order 8n, where n is a positive integer. Only two Plotkin arrays of order 8t are known at this time. These arrays, of orders 8 and 24, are given below.92,100

Example 9.5.1: (a) Plotkin (Williamson) array of order 8:


⎛ ⎞
⎜⎜⎜ a b c d e f g h⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −b a d −c f −e −h g⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ −c −d a b g h −e − f ⎟⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟
−d c −b a h −g f −e⎟⎟⎟⎟⎟
P8 (a, b, . . . , h) = ⎜⎜⎜⎜⎜ ⎟. (9.73)
⎜⎜⎜ −e − f −g −h a b c d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− f e −h g −b a −d c⎟⎟⎟⎟⎟
⎜⎜⎜⎜ −g h e − f −c d a −b⎟⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠
−h −g f e −d −c b a

(b) Plotkin array of order 24:


 
A(x1 , x2 , x3 , x4 ) B(x5 , x6 , x7 , x8 )
P24 (x1 , x2 , . . . , x8 ) = , (9.74)
B(−x5 , x6 , x7 , x8 ) −A(−x1 , x2 , x3 , x4 )

where A(x1 , x2 , x3 , x4 ) is the Baumert–Hall array of order 12 from Section 9.2 [see Example 9.2.1(b)], and


⎛ ⎞
⎜⎜⎜ y x x x −w w z y −z z w −y⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ −x y x −x −z z −w −y w −w z −y⎟⎟⎟⎟
⎜⎜⎜⎜ −x ⎟
−x y x −y −w y −w −z −z w z⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ −x x −x y w w −z −w −y z y z⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜−w −w −z −y z x x x −y −y z −w⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
−z −w −x −x −w −w −z y⎟⎟⎟⎟
B(x, y, z, w) = ⎜⎜⎜⎜⎜
y y z x
⎟ . (9.75)
⎜⎜⎜−w w −w −y −x −x z x z y y z⎟⎟⎟⎟
⎜⎜⎜ z ⎟
⎜⎜⎜⎜ −w −w z −x x −x z y −y y w⎟⎟⎟⎟⎟

⎜⎜⎜ z −z y −w y y w −z w x x x⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ y −y −z −w −z −z −w −y −x w x −x⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝ z z y −z w −y −y w −x −x w x⎟⎟⎟⎟

−w −z w −z −y y −y z −x x −x w

9.6 Welch Arrays


In this section, we describe a special subclass of OD(n; s_1, s_2, . . . , s_k) known as Welch-type ODs. Welch arrays were originally defined over cyclic groups, and the definition was extended to matrices over finite Abelian groups.101 Next, we present two examples of Welch arrays.
(a) A Welch array of order 20 has the following form:97 [Welch]_{20} = (W_{i,j})_{i,j=1}^{4},
where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−d b −c −c −b⎟⎟⎟ ⎜⎜⎜ c a −d −d −a⎟⎟⎟
⎜⎜⎜⎜−b −d b −c −c⎟⎟⎟⎟ ⎜⎜⎜⎜−a c a −d −d⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜ ⎟⎟
W1,1 = ⎜⎜⎜⎜ −c −b −d b −c⎟⎟⎟⎟ , W1,2 = ⎜⎜⎜⎜⎜−d −a c a −d⎟⎟⎟⎟⎟ , (9.76a)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ −c −c −b −d b⎟⎟⎟⎟⎠ ⎜⎜⎝−d −d −a c a⎟⎟⎟⎟⎠
b −c −c −b −d a −d −d −a c
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−b −a c −c −a⎟⎟⎟ ⎜⎜⎜ a −b −d d −b⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a −b −a c −c⎟⎟⎟ ⎜⎜⎜−b a −b −d d⎟⎟⎟⎟⎟

W1,3 = ⎜⎜⎜⎜ −c −a −b −a c⎟⎟⎟⎟ , ⎟ W1,4 = ⎜⎜⎜⎜ d −b a −b −d⎟⎟⎟⎟⎟ , (9.76b)

⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝ c −c −a −b −a⎟⎟⎠ ⎜⎜⎝−d d −b a −b⎟⎟⎟⎠
−a c −c −a −b −b −d d −b a
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ −c a d d −a⎟⎟⎟ ⎜⎜⎜−d −b −c −c b⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a −c a d d⎟⎟⎟ ⎜⎜⎜ b −d −b −c −c⎟⎟⎟⎟⎟
W2,1 = ⎜⎜⎜⎜⎜ d −a −c a d⎟⎟⎟⎟⎟ , W2,2 = ⎜⎜⎜⎜⎜ −c b −d −b −c⎟⎟⎟⎟⎟ , (9.76c)
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ d d −a −c a⎟⎟⎠ ⎜⎜⎝ −c −c b −d −b⎟⎟⎟⎟⎠
a d d −a −c −b −c −c b −d
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−a b −d d b⎟⎟⎟ ⎜⎜⎜−b −a −c c −a⎟⎟⎟
⎜⎜⎜⎜ b −a b −d d⎟⎟⎟⎟ ⎜⎜⎜⎜−a −b −a −c c⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜ ⎟⎟
W2,3 = ⎜⎜⎜⎜ d b −a b −d⎟⎟⎟⎟ , W2,4 = ⎜⎜⎜⎜⎜ c −a −b −a −c⎟⎟⎟⎟⎟ , (9.76d)
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝−d d b −a b⎟⎟⎟⎠ ⎜⎜⎝ −c c −a −b −a⎟⎟⎟⎠
b −d d b −a −a −c c −a −b


⎛ ⎞ ⎛ ⎞
⎜⎜⎜−b −a −c c −d⎟⎟⎟ ⎜⎜⎜ a b −d d b⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−d −b −a −c c⎟⎟⎟⎟ ⎜⎜⎜ b a b −d d⎟⎟⎟⎟
⎟ ⎟
W3,1 = ⎜⎜⎜⎜⎜ c −d −b −a −c⎟⎟⎟⎟⎟ , W3,2 = ⎜⎜⎜⎜⎜ d b a b −d⎟⎟⎟⎟⎟ , (9.76e)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ −c c −d −b −a⎟⎟⎟⎟ ⎜⎜⎝−d d b a b⎟⎟⎟⎟
⎠ ⎠
−a −c c −d −b b −d d b a
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−d −b c c b⎟⎟⎟ ⎜⎜⎜ −c a −d −d −a⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b −d −b c c⎟⎟⎟⎟ ⎜⎜⎜−a −c a −d −d⎟⎟⎟⎟
⎟ ⎟
W3,3 = ⎜⎜⎜⎜⎜ c b −d −b c⎟⎟⎟⎟⎟ , W3,4 = ⎜⎜⎜⎜⎜−d −a −c a −d⎟⎟⎟⎟⎟ , (9.76f)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ c c b −d −b⎟⎟⎟⎟ ⎜⎜⎝−d −d −a −c a⎟⎟⎟⎟
⎠ ⎠
−b c c b −d a −d −d −a −c
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−a −b −d d −b⎟⎟⎟ ⎜⎜⎜ b −a c −c −a⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −a −b −d d⎟⎟⎟⎟ ⎜⎜⎜−a b −a c −c⎟⎟⎟⎟
⎟ ⎟
W4,1 = ⎜⎜⎜⎜⎜ d −b −a −b −d⎟⎟⎟⎟⎟ , W4,2 = ⎜⎜⎜⎜⎜ −c −a b −a c⎟⎟⎟⎟⎟ , (9.76g)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝−d d −b −a −b⎟⎟⎟⎟ ⎜⎜⎝ c −c −a b −a⎟⎟⎟⎟
⎠ ⎠
−b −d d −b −a −a c −c −a b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ c a d d −a⎟⎟⎟ ⎜⎜⎜−d b c c −b⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a c a d d⎟⎟⎟⎟ ⎜⎜⎜−b −d b c c⎟⎟⎟⎟
⎟ ⎟
W4,3 = ⎜⎜⎜⎜⎜ d −a c a d⎟⎟⎟⎟⎟ , W4,4 = ⎜⎜⎜⎜⎜ c −b −d b c⎟⎟⎟⎟⎟ . (9.76h)
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ d d −a c a⎟⎟⎟⎟ ⎜⎜⎝ c c −b −d b⎟⎟⎟⎟
⎠ ⎠
a d d −a c b c c −b −d

(b) A Welch array of order 36, constructed by Ono–Sawade–Yamamoto,92,97 has the following form:
⎛ ⎞
⎜⎜⎜A1 A2 A3 A4 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ B1 B2 B3 B4 ⎟⎟⎟⎟
⎜⎜⎜⎜C ⎟, (9.77)
⎜⎜⎝ 1 C2 C3 C4 ⎟⎟⎟⎟⎟

D1 D2 D3 D4

where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a a a b c d −b −d −c⎟⎟⎟ ⎜⎜⎜ b −a a b c −d b d −c⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a a a d b c −c −b −d⎟⎟⎟⎟ ⎜⎜⎜ a b −a −d b c −c b d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a a a c d b −d −c −b⎟⎟⎟⎟ ⎜⎜⎜−a a b c −d b d −c b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −d −c a a a b c d⎟⎟⎟⎟⎟ ⎜⎜⎜ b c −d b −a a b c −d⎟⎟⎟⎟⎟
⎜ ⎟⎟ ⎜⎜⎜ ⎟
A1 = ⎜⎜⎜⎜⎜ −c −b −d a a a d b c⎟⎟⎟⎟ , A2 = ⎜⎜⎜−d b c a b −a −d b c⎟⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−d −c −b a a a c d b⎟⎟⎟⎟ ⎜⎜⎜ c −d b −a a b c −d b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b c d −b −d −c a a a⎟⎟⎟⎟ ⎜⎜⎜ b c −d −a a b b −a a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ d b c −c −b −d a a a⎟⎟⎟⎟ ⎜⎜⎜−d b c b −a a a b −a⎟⎟⎟⎟⎟
⎝ ⎟⎠ ⎝ ⎠
c d b −d −c −b a a a c −d b a b −a −a a b
(9.78a)


⎛ ⎞ ⎛ ⎞
⎜⎜⎜ c −a a −b c d b −d c⎟⎟⎟ ⎜⎜⎜ d −a a b −c d −b d c⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a c −a d −b c c b −d⎟⎟⎟⎟⎟ ⎜⎜⎜ a d −a d b −c c −b d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a a c c d −b −d c b⎟⎟⎟⎟⎟ ⎜⎜⎜−a a d −c d b d c −b⎟⎟⎟⎟
⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜
⎜⎜⎜ b −d c c −a a −b c d⎟⎟⎟⎟⎟ ⎜⎜⎜−b d c d −a a b −c d⎟⎟⎟⎟

⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟
A3 = ⎜⎜⎜⎜ c b −d a c −a d −b c⎟⎟⎟⎟ , A4 = ⎜⎜⎜⎜ c −b d a d −a d b −c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜−d c b −a a c c d −b⎟⎟⎟⎟⎟ ⎜⎜⎜ d c −b −a a d −c d b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜⎜−b c d b −d c c −a a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜ b −c d −b d c d −a a⎟⎟⎟⎟
⎜⎜⎜ d −b c c b −d a c −a⎟⎟⎟ ⎜⎜⎜ d ⎟⎟
⎜⎝ ⎟⎠ ⎜⎝ b −c c −b d a d −a⎟⎟⎟⎟

c d −b −d c b −a a c −c d b d c −b −a a d
(9.78b)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−b a −a −b c −d −b d −c⎟⎟⎟ ⎜⎜⎜ a a a b −c −d −b d −c⎟⎟⎟
⎜⎜⎜⎜−a ⎟
−b a −d −b c −c −b d⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ a a

a −d b −c −c −b d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a −a −b c −d −b d −c −b⎟⎟⎟⎟ ⎜⎜⎜ a a a −c −d b d −c −b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜−b ⎟
⎜⎜⎜−b d −c −b a −a −b c −d⎟⎟⎟⎟ d −c a a a b −c −d⎟⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
B1 = ⎜⎜⎜⎜ −c −b d −a −b a −d −b c⎟⎟⎟⎟ , B2 = ⎜⎜⎜⎜ −c −b d a a a −d b −c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜ d −c −b a −a −b c −d −b⎟⎟⎟⎟ ⎜⎜⎜ d −c −b a a a −c −d b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b c −d −b d −c −b a −a⎟⎟⎟⎟ ⎜⎜⎜ b −c −d −b d −c a a a⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝−d −b c −c −b d −a −b a⎟⎟⎟⎟ ⎜⎜⎝−d b −c −c −b d a a a⎟⎟⎟⎟
⎠ ⎠
c −d −b d −c −b a −a −b −c −d b d −c −b a a a
(9.78c)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−d −a a b −c −d −b −d −c⎟⎟⎟ ⎜⎜⎜ c a −a b c d −b −d c⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a −d −a −d b −c −c −b −d⎟⎟⎟⎟⎟ ⎜⎜⎜−a c a d b c c −b −d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a a −d −c −d b −d −c −b⎟⎟⎟⎟ ⎜⎜⎜ a −a c c d b −d c −b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −d −c −d −a a b c −d⎟⎟⎟⎟
⎟ ⎜⎜⎜−b −d c c a −a b c d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟⎟
B3 = ⎜⎜⎜⎜ −c −b −d a −d −a −d b c⎟⎟⎟⎟ , B4 = ⎜⎜⎜⎜ c −b −d −a c a d b c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−d −c −b −a a −d c −d b⎟⎟⎟⎟ ⎜⎜⎜−d c −b a −a c c d b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b c −d −b −d −c −d −a a⎟⎟⎟⎟ ⎜⎜⎜ b c d −b −d c c a −a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎝−d b c −c −b −d a −d −a⎟⎟⎟⎟ ⎜⎜⎝ d b c c −b −d −a c a⎟⎟⎟⎟⎠

c −d b −d −c −b −a a −d c d b −d c −b a −a c
(9.78d)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ −c a −a −b −c d b −d −c⎟⎟⎟ ⎜⎜⎜ d a −a b c d −b d −c⎟⎟⎟
⎜⎜⎜⎜−a ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ −c a d −b −c c b −d⎟⎟⎟⎟⎟ ⎜⎜⎜−a d a d b c −c −b d⎟⎟⎟⎟⎟
⎟ ⎜⎜⎜⎜ a ⎟
⎜⎜⎜ a
⎜⎜⎜ −a −c −c d −b −d c b⎟⎟⎟⎟ ⎜⎜⎜ −a d c d b d −c −b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎟ ⎟
⎜⎜⎜ b −d −c −c a −a −b −c d⎟⎟⎟⎟ ⎜⎜⎜−b d −c d a −a b c d⎟⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
C1 = ⎜⎜⎜⎜ −c b −d −a −c a d −b −c⎟⎟⎟⎟ , C2 = ⎜⎜⎜⎜ −c −b d −a d a d b c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜−d −c b a −a −c −c d −b⎟⎟⎟⎟ ⎜⎜⎜ d −c −b a −a d c d b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −c d b −d −c −c a −a⎟⎟⎟⎟ ⎜⎜⎜ b c d −b d −c d a −a⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝ d −b −c −c b −d −a −c a⎟⎟⎟⎟ ⎜⎜⎝ d b c −c −b d −a d a⎟⎟⎟⎟
⎠ ⎠
−c d −b −d −c b a −a −c c d b d −c −b a −a d
(9.78e)


⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a a a −b c −d b d −c⎟⎟⎟ ⎜⎜⎜−b −a a −b c d −b −d −c⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a a a −d −b c −c b d⎟⎟⎟⎟⎟ ⎜⎜⎜ a −b −a d −b c −c −b −d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a a a c −d −b d −c b⎟⎟⎟⎟⎟ ⎜⎜⎜−a a −b c d −b −d −c −b⎟⎟⎟⎟
⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜
⎜⎜⎜ b d −c a a a −b c −d⎟⎟⎟⎟⎟ ⎜⎜⎜−b −d −c −b −a a −b c d⎟⎟⎟⎟

⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟
C3 = ⎜⎜⎜⎜ −c b d a a a −d −b c⎟⎟⎟⎟ , C4 = ⎜⎜⎜⎜ −c −b −d a −b −a d −b c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜ d −c b a a a c −d −b⎟⎟⎟⎟⎟ ⎜⎜⎜−d −c −b −a a −b c d −b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b c −d b d −c a a a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜−b c d −b −d −c −b −a a⎟⎟⎟⎟
⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝−d −b c −c b d a a a⎟⎟⎟⎟⎠ ⎜⎜⎜ d
⎜⎝ −b c −c −b −d a −b −a⎟⎟⎟⎟

c −d −b d −c b a a a c d −b −d −c −b −a a −b
(9.78f)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−d a −a b −c −d −b −d c⎟⎟⎟ ⎜⎜⎜ −c −a a b −c d −b −d −c⎟⎟⎟
⎜⎜⎜⎜−a ⎟
−d a −d b −c c −b −d⎟⎟⎟⎟⎟ ⎜⎜⎜⎜ a ⎟
−c −a d b −c −c −b −d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a −a −d −c −d b −d c −b⎟⎟⎟⎟ ⎜⎜⎜−a a −c −c d b −d −c −b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−b −d c −d a −a b −c −d⎟⎟⎟⎟ ⎜⎜⎜−b −d −c −c −a a b −c d ⎟⎟⎟⎟
⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
D1 = ⎜⎜⎜⎜ c −b −d −a −d a −d b −c⎟⎟⎟⎟ , D2 = ⎜⎜⎜⎜ −c −b −d a −c −a d b −c⎟⎟⎟⎟ ,
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜−d c −b a −a −d −c −d b⎟⎟⎟⎟ ⎜⎜⎜−d c −b −a a −c −c d b⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b −c −d −b −d c −d a −a⎟⎟⎟⎟ ⎜⎜⎜ b −c d −b −d −c −c −a a⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎝−d b −c c −b −d −a −d a⎟⎟⎟⎟ ⎜⎜⎝ d b −c −c −b −d a −c −a⎟⎟⎟⎟
⎠ ⎠
−c −d b −d c −b a −a −d −c d b −d −c −b −a a −c
(9.78g)
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ b a −a b c d b −d −c⎟⎟⎟ ⎜⎜⎜ a a a −b −c d b −d c⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−a b a d b c −c b −d⎟⎟⎟⎟⎟ ⎜⎜⎜ a a a d −b −c c b −d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ a −a b c d b −d −c b⎟⎟⎟⎟⎟ ⎜⎜⎜ a a a −c d −b −d c b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b −d −c b a −a b c d⎟⎟⎟⎟⎟ ⎜⎜⎜ b −d c a a a −b −c d⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
D3 = ⎜⎜⎜⎜ −c b −d −a b a d b c⎟⎟⎟⎟ , D4 = ⎜⎜⎜⎜ c b −d a a a d −b −c⎟⎟⎟⎟ .
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜−d c b a −a b c d b⎟⎟⎟⎟⎟ ⎜⎜⎜−d c b a a a −c d −b⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ b c d b −d −c b a −a⎟⎟⎟⎟⎟ ⎜⎜⎜−b −c d b −d c a a a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝ d b c −c b −d −a b a⎟⎟⎟⎟⎠ ⎜⎜⎝ d −b −c c b −d a a a⎟⎟⎟⎟⎠
c d b −d −c b a −a b −c d −b −d c b a a a
(9.78h)
Now, if X_i, i = 1, 2, 3, 4, are T matrices of order k, then by substituting the matrices

$$A = \sum_{i=1}^{4} A_i \otimes X_i, \quad B = \sum_{i=1}^{4} B_i \otimes X_i, \quad C = \sum_{i=1}^{4} C_i \otimes X_i, \quad D = \sum_{i=1}^{4} D_i \otimes X_i \qquad (9.79)$$

into the array in Eq. (9.20), we obtain the Baumert–Hall array of order 4 · 9k. Using
the Welch array we can obtain the Baumert–Hall array of order 4 · 5k. Hence, from
Remark 9.2.1 and Theorem 9.2.1, we have the following:
Corollary 9.6.1: There are Baumert–Hall arrays of orders 4k, 20k, and 36k,
where k ∈ M.


Corollary 9.6.2: (a) The matrices in Eq. (9.79) are the generalized parametric
Williamson matrices of order 9k.
(b) Matrices {Ai }4i=1 , {Bi }4i=1 , {Ci }4i=1 , and {Di }4i=1 are generalized parametric
Williamson matrices of order 9. Furthermore, the array in Eq. (9.77), where
Ai , Bi , Ci , Di are mutually commutative parametric matrices, will be called a
Welch-type array.
Theorem 9.6.1: Let there be a Welch array of order 4k. Then, there is also a Welch
array of order 4k(p + 1), where p ≡ 1 (mod 4) is a prime power.
Proof: Let Eq. (9.77) be a Welch array of order 4k, i.e., {A_i}_{i=1}^4, {B_i}_{i=1}^4, {C_i}_{i=1}^4, {D_i}_{i=1}^4 are parametric matrices of order k satisfying the conditions

$$\begin{aligned} &PQ = QP, \quad P, Q \in \{A_i, B_i, C_i, D_i\},\\ &\sum_{i=1}^{4} A_i B_i^T = \sum_{i=1}^{4} A_i C_i^T = \sum_{i=1}^{4} A_i D_i^T = \sum_{i=1}^{4} B_i C_i^T = \sum_{i=1}^{4} B_i D_i^T = \sum_{i=1}^{4} C_i D_i^T = 0,\\ &\sum_{i=1}^{4} A_i A_i^T = \sum_{i=1}^{4} B_i B_i^T = \sum_{i=1}^{4} C_i C_i^T = \sum_{i=1}^{4} D_i D_i^T = k(a^2 + b^2 + c^2 + d^2) I_k. \end{aligned} \qquad (9.80)$$

Now, let p ≡ 1 (mod 4) be a prime power. According to Ref. 8, there exist cyclic
symmetric Williamson matrices of orders (p + 1)/2 of the form I + A, I − A, B, B.
Consider the matrices
   
I B A 0
X= , Y= . (9.81)
B −I 0 A

We can verify that (0, ±1) matrices X, Y of orders p + 1 satisfy the conditions

X ∗ Y = 0,
X T = X, Y T = Y,
XY = Y X, (9.82)
X ± Y is a (+1, −1) matrix,
X 2 + Y 2 = (p + 1)I p+1 .
Now we introduce the following matrices:

X1 = X ⊗ A 1 + Y ⊗ A 2 , X2 = X ⊗ A2 − Y ⊗ A1 ,
X3 = X ⊗ A3 + Y ⊗ A4 , X4 = X ⊗ A4 − Y ⊗ A3 ;
Y1 = X ⊗ B1 + Y ⊗ B2 , Y2 = X ⊗ B2 − Y ⊗ B1 ,
Y3 = X ⊗ B3 + Y ⊗ B4 , Y4 = X ⊗ B4 − Y ⊗ B3 ;
(9.83)
Z1 = X ⊗ C1 + Y ⊗ C2 , Z2 = X ⊗ C2 − Y ⊗ C1 ,
Z3 = X ⊗ C3 + Y ⊗ C4 , Z4 = X ⊗ C4 − Y ⊗ C3 ;
W1 = X ⊗ D1 + Y ⊗ D2 , W2 = X ⊗ D2 − Y ⊗ D1 ,
W3 = X ⊗ D3 + Y ⊗ D4 , W4 = X ⊗ D4 − Y ⊗ D3 .


Let us prove that the parametric matrices in Eq. (9.83) of order k(p + 1) satisfy the conditions of Eq. (9.80). The first condition is evident. We will prove the second condition of Eq. (9.80):

$$\begin{aligned} X_1 Y_1^T &= X^2 \otimes A_1 B_1^T + XY \otimes A_1 B_2^T + YX \otimes A_2 B_1^T + Y^2 \otimes A_2 B_2^T,\\ X_2 Y_2^T &= X^2 \otimes A_2 B_2^T - XY \otimes A_2 B_1^T - YX \otimes A_1 B_2^T + Y^2 \otimes A_1 B_1^T. \end{aligned} \qquad (9.84)$$

Summing the above expressions and taking into account the conditions of Eq. (9.82), we find that

$$X_1 Y_1^T + X_2 Y_2^T = (p + 1)(A_1 B_1^T + A_2 B_2^T) \otimes I_{p+1}. \qquad (9.85)$$

By similar calculations, we obtain

$$X_3 Y_3^T + X_4 Y_4^T = (p + 1)(A_3 B_3^T + A_4 B_4^T) \otimes I_{p+1}. \qquad (9.86)$$

Now, summing the last two equations, we have

$$\sum_{i=1}^{4} X_i Y_i^T = (p + 1)\Big(\sum_{i=1}^{4} A_i B_i^T\Big) \otimes I_{p+1} = 0. \qquad (9.87)$$

Similarly, we can prove the validity of the second condition of Eq. (9.80) for all
matrices Xi , Yi , Zi , Wi , i = 1, 2, 3, 4.
Now, prove the third condition of Eq. (9.80). We obtain

$$\begin{aligned} X_1 X_1^T &= X^2 \otimes A_1 A_1^T + XY \otimes A_1 A_2^T + YX \otimes A_2 A_1^T + Y^2 \otimes A_2 A_2^T,\\ X_2 X_2^T &= X^2 \otimes A_2 A_2^T - XY \otimes A_2 A_1^T - YX \otimes A_1 A_2^T + Y^2 \otimes A_1 A_1^T. \end{aligned} \qquad (9.88)$$

Summing, we obtain

$$X_1 X_1^T + X_2 X_2^T = (X^2 + Y^2) \otimes (A_1 A_1^T + A_2 A_2^T) = (p + 1)(A_1 A_1^T + A_2 A_2^T) \otimes I_{p+1}. \qquad (9.89)$$

Then, we find that

X3 X3T + X4 X4T = (p + 1)(A3 AT3 + A4 AT4 ) ⊗ I p+1 . (9.90)

Hence, taking into account the third condition of Eq. (9.80), we have
$$\sum_{i=1}^{4} X_i X_i^T = k(p + 1)(a^2 + b^2 + c^2 + d^2) I_{k(p+1)}. \qquad (9.91)$$

Other conditions can be similarly checked.

Remark 9.6.1: There are Welch arrays of orders 20(p + 1) and 36(p + 1), where
p ≡ 1 (mod 4) is a prime power.


References
1. A. Hurwitz, “Uber die komposition der quadratischen formen,” Math. Ann.
88 (5), 1–25 (1923).
2. J. Radon, Lineare scharen orthogonaler matrizen, Abhandlungen aus dem,
presented at Mathematischen Seminar der Hamburgischen Universitat, 1–14,
1922.
3. T.-K. Woo, “A novel complex orthogonal design for space–time coding in
sensor networks,” Wireless Pers. Commun. 43, 1755–1759 (2007).
4. R. Craigen, “Hadamard matrices and designs,” in The CRC Handbook of
Combinatorial Design, Ch. J. Colbourn and J. H. Dinitz, Eds., 370–377
CRC Press, Boca Raton (1996).
5. A. V. Geramita and J. Seberry, Orthogonal Designs. Quadratic Forms and
Hadamard Matrices, in Lecture Notes in Pure and Applied Mathematics 45,
Marcel Dekker, New York (1979).
6. A. V. Geramita, J. M. Geramita, and J. Seberry Wallis, “Orthogonal designs,”
Linear Multilinear Algebra 3, 281–306 (1976).
7. A. S. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays: Theory
and Applications, Springer-Verlag, New York (1999).
8. R. J. Turyn, “An infinite class of Williamson matrices,” J. Combin. Theory, Ser. A 12, 319–321 (1972).
9. M. Hall Jr., Combinatorics, Blaisdell Publishing Co., Waltham, MA (1970).
10. R. J. Turyn, “Hadamard matrices, Baumert–Hall units, four-symbol sequences, pulse compression, and surface wave encoding,” J. Combin. Theory, Ser. A 16, 313–333 (1974).
11. http://www.uow.edu.au/∼jennie.
12. S. Agaian and H. Sarukhanyan, “Generalized δ-codes and construction of
Hadamard matrices,” Prob. Transmission Inf. 16 (3), 50–59 (1982).
13. J. S. Wallis, “On Hadamard matrices,” J. Comb. Theory Ser. A 18, 149–164
(1975).
14. S. Agaian and H. Sarukhanyan, “On Plotkin’s hypothesis,” Dokladi NAS RA
LXVI (5), 11–15 (1978) (in Russian).
15. S. Agaian and H. Sarukhanyan, “Plotkin hypothesis about D(4k, 4) decompo-
sition,” J. Cybernetics and Systems Analysis 18 (4), 420–428 (1982).
16. H. Sarukhanyan, “Construction of new Baumert–Hall arrays and Hadamard
matrices,” J. of Contemporary Mathematical Analysis, NAS RA, Yerevan 32
(6), 47–58 (1997).
17. J. M. Goethals and J. J. Seidel, “Orthogonal matrices with zero diagonal,”
Can. J. Math. 19, 1001–1010 (1967).


18. K. Sawade, “A Hadamard matrix of order 268,” Graphs Combin. 1, 185–187


(1985).
19. J. Bell and D. Z. Djokovic, “Construction of Baumert–Hall–Welch arrays
and T-matrices,” Australas. J. Combin. 14, 93–107 (1996).
20. W. Tadej and K. Zyczkowski, “A concise guide to complex Hadamard
matrices,” Open Syst. Inf. Dyn 13, 133–177 (2006).
21. H. G. Gadiyar, K. M. S. Maini, R. Padma, and H. S. Sharatchandra, “Entropy
and Hadamard matrices,” J. Phys. Ser. A 36, 109–112 (2003).
22. W. B. Bengtsson, A. Ericsson, J. Larsson, W. Tadej and K. Zyczkowski,
“Mutually unbiased bases and Hadamard matrices of order six,” J. Math.
Phys. 48 (5), 21 (2007).
23. A. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays: Theory and
Applications, Springer-Verlag, Berlin (1999).
24. M. Y. Xia, “Some new families of SDSS and Hadamard matrices,” Acta
Math. Sci. 16, 153–161 (1996).
25. D. Ž Djoković, “Ten Hadamard matrices of order 1852 of Goethals–Seidel
type,” Eur. J. Combin. 13, 245–248 (1992).
26. J. M. Goethals and J. J. Seidel, “A skew Hadamard matrix of order 36,”
J. Austral. Math. Soc. 11, 343–344 (1970).
27. J. Seberry and A. L. Whiteman, “New Hadamard matrices and conference
matrices obtained via Mathon’s construction,” Graphs Combin. 4, 355–377
(1988).
28. S. Agaian and H. Sarukhanyan, Williamson type M-structures, in Proc.
2nd Int. Workshop on Transforms and Filter Banks, Brandenburg, Germany,
223–249 (1999).
29. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, Construction of
Williamson type matrices and Baumert–Hall, Welch and Plotkin arrays,
in Proc. Int. Workshop on Spectral Transforms and Logic Design for
Future Digital Systems (SPECLOG-2000), Tampere, Finland, TICSP Ser. 10,
189–205 (2000).
30. W. H. Holzmann and H. Kharaghani, “On the amicability of orthogonal
designs,” J. Combin. Des. 17, 240–252 (2009).
31. S. Georgiou, C. Koukouvinos, and J. Seberry, “Hadamard matrices, orthogo-
nal designs and construction algorithms,” in DESIGNS 2002: Further Com-
putational and Constructive Design Theory, W. D. Wallis, Ed., 133–205
Kluwer Academic Publishers, Dordrecht (2003).
32. J. Cooper and J. Wallis, “A construction for Hadamard arrays,” Bull. Austral.
Math. Soc. 7, 269–278 (1972).
33. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and
Hadamard matrices,” Combinatorial Mathematics VII, Lecture Notes in
Mathematics, Springer 829, 220–223 (1980).


34. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and


applications,” IEEE Trans. Inf. Theory 27 (6), 772–779 (1981).
35. M. Xia, T. Xia, J. Seberry, and J. Wu, “An infinite family of Goethals–Seidel
arrays,” Discrete Appl. Math. 145 (3), 498–504 (2005).
36. Ch. Koukouvinos and J. Seberry, “Orthogonal designs of Kharaghani type:
II,” Ars. Combin. 72, 23–32 (2004).
37. S. Georgiou, Ch. Koukouvinos, and J. Seberry, “On full orthogonal designs
in order 56,” Ars. Combin. 65, 79–89 (2002).
38. S. Georgiou, C. Koukouvinos, and J. Seberry, “Hadamard matrices, orthogo-
nal designs and construction algorithms,” in DESIGNS 2002: Further Com-
putational and Constructive Design Theory, W. D. Wallis, Ed., 133–205
Kluwer Academic Publishers, Dordrecht (2003).
39. L. D. Baumert, “Hadamard matrices of orders 116 and 232,” Bull. Am. Math.
Soc. 72 (2), 237. (1966).
40. L. D. Baumert, Cyclic Difference Sets, Lecture Notes in Mathematics, 182,
Springer-Verlag, Berlin (1971).
41. L. D. Baumert and M. Hall Jr., “Hadamard matrices of the Williamson type,”
Math. Comput. 19, 442–447 (1965).
42. T. Chadjipantelis and S. Kounias, “Supplementary difference sets and
D-optimal designs for n ≡ 2 (mod4),” Discrete Math. 57, 211–216 (1985).
43. R. J. Fletcher, M. Gysin, and J. Seberry, “Application of the discrete
Fourier transform to the search for generalized Legendre pairs and Hadamard
matrices,” Australas. J. Combin. 23, 75–86 (2001).
44. S. Georgiou and C. Koukouvinos, “On multipliers of supplementary
difference sets and D-optimal designs for n ≡ 2 (mod4),” Utilitas Math. 56,
127–136 (1999).
45. S. Georgiou and C. Koukouvinos, “On amicable sets of matrices and
orthogonal designs,” Int. J. Appl. Math. 4, 211–224 (2000).
46. S. Georgiou, C. Koukouvinos, M. Mitrouli, and J. Seberry, “Necessary
and sufficient conditions for two variable orthogonal designs in order 44:
addendum,” J. Combin. Math. Combin. Comput. 34, 59–64 (2000).
47. S. Georgiou, C. Koukouvinos, M. Mitrouli, and J. Seberry, “A new algorithm
for computer searches for orthogonal designs,” J. Combin. Math. Combin.
Comput. 39, 49–63 (2001).
48. M. Gysin and J. Seberry, “An experimental search and new combinatorial
designs via a generalization of cyclotomy,” J. Combin. Math. Combin.
Comput. 27, 143–160 (1998).
49. M. Gysin and J. Seberry, “On new families of supplementary difference sets
over rings with short orbits,” J. Combin. Math. Combin. Comput. 28, 161–186
(1998).


50. W. H. Holzmann and H. Kharaghani, “On the Plotkin arrays,” Australas. J.


Combin. 22, 287–299 (2000).
51. W. H. Holzmann and H. Kharaghani, “On the orthogonal designs of order
24,” Discrete Appl. Math. 102 (1–2), 103–114 (2000).
52. W. H. Holzmann and H. Kharaghani, “On the orthogonal designs of order
40,” J. Stat. Plan. Inference 96, 415–429 (2001).
53. N. Ito, J. S. Leon, and J. Q. Longyear, “Classification of 3-(24, 12, 5)
designs and 24-dimensional Hadamard matrices,” J. Combin. Theory, Ser. A
31, 66–93 (1981).
54. Z. Janko, “The existence of a Bush-type Hadamard matrix of order 36 and
two new infinite classes of symmetric designs,” J. Combin. Theory, Ser. A 95
(2), 360–364 (2001).
55. H. Kimura, “Classification of Hadamard matrices of order 28 with Hall sets,”
Discrete Math. 128 (1–3), 257–268 (1994).
56. H. Kimura, “Classification of Hadamard matrices of order 28,” Discrete
Math. 133 (1–3), 171–180 (1994).
57. C. Koukouvinos, “Some new orthogonal designs of order 36,” Utilitas Math.
51, 65–71 (1997).
58. C. Koukouvinos, “Some new three and four variable orthogonal designs in
order 36,” J. Stat. Plan. Inference 73, 21–27 (1998).
59. C. Koukouvinos, M. Mitrouli, and J. Seberry, “Necessary and sufficient
conditions for some two variable orthogonal designs in order 44,” J. Combin.
Math. Combin. Comput. 28, 267–286 (1998).
60. C. Koukouvinos, M. Mitrouli, J. Seberry, and P. Karabelas, “On suffici-
ent conditions for some orthogonal designs and sequences with zero auto-
correlation function,” Australas. J. Combin. 13, 197–216 (1996).
61. C. Koukouvinos and J. Seberry, “New weighing matrices and orthogonal de-
signs constructed using two sequences with zero autocorrelation function—
a review,” J. Statist. Plan. Inference 81, 153–182 (1999).
62. C. Koukouvinos and J. Seberry, “New orthogonal designs and sequences
with two and three variables in order 28,” Ars. Combin. 54, 97–108 (2000).
63. C. Koukouvinos and J. Seberry, “Infinite families of orthogonal designs: I,”
Bull. Inst. Combin. Appl 33, 35–41 (2001).
64. C. Koukouvinos and J. Seberry, “Short amicable sets and Kharaghani type
orthogonal designs,” Bull. Austral. Math. Soc. 64, 495–504 (2001).
65. C. Koukouvinos, J. Seberry, A. L. Whiteman, and M. Y. Xia, “Optimal
designs, supplementary difference sets and multipliers,” J. Statist. Plan.
Inference 62, 81–90 (1997).


66. S. Kounias, C. Koukouvinos, N. Nikolaou, and A. Kakos, “The non-


equivalent circulant D-optimal designs for n ≡ 2 (mod4), n ≤ 54, n = 66,”
J. Combin. Theory, Ser. A 65 (1), 26–38 (1994).
67. C. Lam, S. Lam, and V. D. Tonchev, “Bounds on the number of affine,
symmetric, and Hadamard designs and matrices,” J. Combin. Theory, Ser.
A 92 (2), 186–196 (2000).
68. J. Seberry and R. Craigen, “Orthogonal designs,” in Handbook of Combina-
torial Designs, C. J. Colbourn and J. H. Dinitz, Eds., 400–406 CRC Press,
Boca Raton (1996).
69. S. A. Tretter, Introduction to Discrete-time Signal Processing, John Wiley &
Sons, Hoboken, NJ (1976).
70. R. J. Turyn, “An infinite class of Williamson matrices,” J. Combin. Theory,
Ser. A 12, 319–321 (1972).
71. A. L. Whiteman, “An infinite family of Hadamard matrices of Williamson
type,” J. Combin. Theory, Ser. A 14, 334–340 (1973).
72. I. S. Kotsireas, C. Koukouvinos, and J. Seberry, “Hadamard ideals and
Hadamard matrices with two circulant cores,” Eur. J. Combin. 27 (5),
658–668 (2006).
73. I. Bouyukliev, V. Fack, and J. Winne, “2-(31,15,7), 2-(35,17,8) and 2-
(36,15,6) designs with automorphisms of odd prime order, and their related
Hadamard matrices and codes,” Des. Codes Cryptog. 51 (2), 105–122 (2009).
74. S. Georgiou, C. Koukouvinos, and J. Seberry, “Hadamard Matrices, Ortho-
gonal Designs and Construction Algorithms,” in DESIGNS 2002: Further
Computational and Constructive Design Theory, W. D. Wallis, Ed., Kluwer
Academic Publishers, Dordrecht (2003).
75. V. D. Tonchev, Combinatorial Configurations, Designs, Codes, Graphs, John
Wiley & Sons, Hoboken, NJ (1988).
76. C. Koukouvinos and D. E. Simos, “Improving the lower bounds on
inequivalent Hadamard matrices, through orthogonal designs and meta-
programming techniques,” Appl. Numer. Math. 60 (4), 370–377 (2010).
77. J. Seberry, K. Finlayson, S. S. Adams, T. A. Wysocki, T. Xia, and B. J.
Wysocki, “The theory of quaternion orthogonal designs,” IEEE Trans. Signal
Proc. 56 (1), 256–265 (2009).
78. S. M. Alamouti, “A simple transmit diversity technique for wireless
communications,” IEEE J. Sel. Areas Commun. 16 (8), 1451–1458 (1998).
79. X.-B. Liang, “Orthogonal designs with maximal rates,” IEEE Trans. Inform.
Theory 49 (10), 2468–2503 (2003).
80. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, “Space-time block codes
from orthogonal designs,” IEEE Trans. Inf. Theory 45 (5), 1456–1467 (1999).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


306 Chapter 9

81. C. Yuen, Y. L. Guan and T. T. Tjhung, Orthogonal space-time block code


from amicable complex orthogonal design, in Proc, of IEEE Int. Conf. on
Acoustics, Speech, and Signal Processing (ICASSP), Vol. 4, pp. 469–472
(2004).
82. Z. Chen, G. Zhu, J. Shen, and Y. Liu, “Differential space-time block codes
from amicable orthogonal designs,” IEEE Wireless Commun. Networking 2,
768–772 (2003).
83. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, Space-time codes
from Hadamard matrices, presented at URSI/FWCW’01, Finnish Wireless
Communications Workshop, 23–24 Oct., Tampere, Finland (2001).
84. S. L. Altmann, Rotations, Quaternions, and Double Groups, Clarendo Press,
Oxford (1986).
85. A. R. Calderbank, S. Das, N. Al-Dhahir, and S. N. Diggavi, “Construction
and analysis of a new quaternionic space-time code for 4 transmit antennas,”
Commun. Inf. Syst. 5 (1), 1–26 (2005).
86. B. S. Collins, “Polarization-diversity antennas for compact base stations,”
Microwave J. 43 (1), 76–88 (2000).
87. C. Charnes, J. Pieprzyk and R. Safavi-Naini, Crypto Topics and Applications
II, Faculty of Informatics—Papers, 1999, http://en.scientificcommons.org/j_
seberry.
88. I. Oppermann and B. S. Vucetic, “Complex spreading sequences with a wide
range of correlation properties,” IEEE Trans. Commun. COM-45, 365–375
(1997).
89. B. J. Wysocki and T. Wysocki, “Modified Walsh-Hadamard sequences for
DS CDMA wireless systems,” Int. J. Adapt. Control Signal Process. 16,
589–602 (2002).
90. S. Tseng and M. R. Bell, “Asynchronous multicarrier DS-CDMA using
mutually orthogonal complementary sets of sequences,” IEEE Trans.
Commun. 48, 53–59 (2000).
91. L. C. Tran, Y. Wang, B. J. Wysocki, T. A. Wysocki, T. Xia and Y. Zhao, Two
complex orthogonal space-time codes for eight transmit antennas, Faculty of
Informatics—Papers, 2004, http://en.scientificcommons.org/j_seberry.
92. J. Seberry and M. Yamada, Hadamard Matrices, Sequences and Block
Designs. Surveys in Contemporary Design Theory, Wiley-Interscience Series
in Discrete Mathematics, Wiley, Hoboken, NJ (1992).
93. J. Seberry, K. Finlayson, S. S. Adams, T. Wysocki, T. Xia, and B. Wysocki,
“The theory of quaternion orthogonal designs,” IEEE Trans. Signal Process.
56 (1), 256–265 (2008).
94. S. L. Altmann, Rotations, Quaternions, and Double Groups, Clarendo Press,
Oxford (1986).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Orthogonal Arrays 307

95. S. S. Adams, J. Seberry, N. Karst, J. Pollack, and T. Wysocki, “Quanternion


orthogonal designs,” Linear Algebra Appl. 428 (4), 1056–1071 (2008).
96. B. J. Wysocki and T. A. Wysocki, “On an orthogonal space-time-polarization
block code,” J. Commun. 4 (1), 20–25 (2009).
97. W. D. Wallis, A. P. Street and J. S. Wallis, Combinatorics: Room Squares,
Sum-Free Sets, Hadamard Matrices, Lecture Notes in Mathematics 292,
Springer, Berlin (1972).
98. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Mathematics, Springer, Berlin 1168 (1985).
99. H. G. Sarukhanyan, “On Goethals–Seidel arrays,” Sci. Notes YSU, Armenia
1, 12–19 (1979) (in Russian).
100. M. Plotkin, “Decomposition of Hadamard matrices,” J. Comb. Theory, Ser.
A 12, 127–130 (1972).
101. J. Bell and D. Z. Djokovic, “Construction of Baumert–Hall-Welch arrays and
T-matrices,” Australas. J. Combin. 14, 93–109 (1996).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Chapter 10
Higher-Dimensional Hadamard
Matrices
High-dimensional Hadamard matrices can be found in nature; e.g., a typical model
of a rock salt crystal is a 3D Hadamard matrix of order 4 (see Fig. 10.1).
Higher-dimensional Hadamard matrices were introduced several decades
ago. Shlichta was the first to construct examples of n-dimensional Hadamard
matrices.3,5 He proposed the procedures for generating the simplest 3D, 4D, and
5D Hadamard matrices. In particular, he put special emphasis on construction of
the “proper” matrices, which have a dimensional hierarchy of orthogonalities. This
property is a key for many applications such as error-correction codes and security
systems. Shlichta also suggests a number of unsolved problems and unproven
conjectures, as follows:
• The algebraic approach to the derivation of 2D Hadamard matrices (see
Chapters 1 and 4) suggests that a similar procedure may be feasible for 3D or
higher matrices.
• Just as families of 2D Hadamard matrices (such as skew and Williamson
matrices) have been defined, it may be possible to identify families of higher-

Figure 10.1 Rock salt crystal: The black circles represent sodium atoms; the white circles
represent chlorine atoms.1,2

309

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


310 Chapter 10

dimensional matrices, especially families that extend over a range of dimensions


as well as orders.
• An algorithm may be developed for deriving a completely proper n3 (or nm )
Hadamard matrix from one that is n2 .
• Two-dimensional Hadamard matrices exist only in orders of 1, 2, or 4t. No
such restriction has yet been established for higher dimensions. There may be
absolutely improper n3 or nm Hadamard matrices of order n = 2s  4t.
• Shlichta’s work prompted a study of higher-dimensional Hadamard matrix
designs. Several articles and books on higher-dimensional Hadamard matrices
have been published.1,5–35 In our earlier work,1,5,6,8,27 we have presented
the higher-dimensional Williamson–Hadamard, generalized (including Butson)
Hadamard matrix construction methods, and also have introduced (λ, μ)-
dimensional Hadamard matrices.
Several interesting methods to construct higher-dimensional Hadamard matrices
have been developed,1–30 including the following:
• Agaian and de Launey submitted a different way to construct an n-dimensional
Hadamard matrix for a given 2D Hadamard matrix.8,25
• Agaian and Egiazarian35 presented (λ, μ)-dimensional Hadamard matrix
construction methods.
• Hammer and Seberry developed a very interesting approach to construct high-
dimensional orthogonal designs and weighted matrices.21,22
• de Launey et al.19 derived first principles of automorphism and equivalence
for higher-dimensional Hadamard matrices. In addition, Ma9 investigated the
equivalence classes of n-dimensional proper Hadamard matrices.
• de Launey et al.19 constructed proper higher-dimensional Hadamard matrices
for all orders 4t ≤ 100, and conference matrices of order q + 1, where q is
an odd prime power. We conjecture that such Hadamard matrices exist for all
orders v = 0 (mod 4).
The first application of 3D WHTs in signal processing was shown by Harmuth.2
Recently, in Ref. 18, Testoni and Costa created a fast embedded 3D Hadamard
color video codec, which was developed to be executed by a set-top box device on
a broadband network. The applicability of this codec is best directed to systems
with complexity and storage limitations, possibly using fixed-point processes, but
enjoying high-bit-rate network connections (a low-cost codec that makes use of
high-performance links). A survey of the higher-dimensional Hadamard matrices
and 3D Walsh transforms can be found in Refs. 14, 17, 19, 25, and 32.
This chapter is organized as follows: Section 10.1 presents the mathematical
definition and properties of the 3D Hadamard matrices; Section 10.2 provides
the 3D Williamson–Hadamard matrix construction procedure; Section 10.3 gives
a construction method for 3D Hadamard matrices of order 4n + 2; Section 10.4
presents a fast 3D WHTs algorithm. Finally, Sections 10.5 and 10.6 cover 3D
complex HT construction processes.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 311

Figure 10.2 Three-dimensional Hadamard matrix of size (2 × 2 × 2).

10.1 3D Hadamard Matrices

Definition 10.1.1:3,4 The 3D matrix H = (hi, j,k )ni, j,k=1 is called a Hadamard matrix
if all elements hi, j,k = ±1 and

hi,c,r hi,b,r = hi,c,r hi,b,r = hi,c,r hi,b,r = n2 δa,b , (10.1)


i j j k i k

where δa,b is a Kronecker function, i.e., δa,a = 1, δa,b = 0, if a  b.


Later, Shlichta narrowed this definition and included only those matrices in
which all 2D layers (hi0 , j,k )nj,k=1 , (hi, j0 ,k )ni,k=1 , and (hi, j,k0 )ni, j=1 in all axis normal
orientations are themselves Hadamard matrices of order n.
Definition 10.1.2:4,8,17,32 A 3D Hadamard matrix H = (hi, j,k )ni, j,k=1 of order n is
called a regular 3D Hadamard matrix if the following conditions are satisfied:
n n
hi,c,r hi,b,r = ha, j,r hb, j,r = nδa,b ,
i=1 j=1
n n
hi,q,a hi,q,b = ha,q,k hb,q,k = nδa,b , (10.2)
i=1 k=1
n n
h p, j,a h p, j,b = h p,a,k h p,b,k = nδa,b .
j=1 k=1

Matrices satisfying Eq. (10.2) are called “proper” or regular Hadamard matrices.
Matrices satisfying Eq. (10.1) but not all of Eq. (10.2) are called “improper.”
A 3D Hadamard matrix of order 2 [or size (2 × 2 × 2)] is presented in Fig. 10.2.
Three-dimensional Hadamard matrices of order 2m (see Figs. 10.3 and 10.4) can
be generated as follows:
(1) From m − 1 successive direct products (see Appendix A.1 of 23 Hadamard
matrices.
(2) The direct product of three 2D Hadamard matrices of order m in different
orientations.5

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


312 Chapter 10

Figure 10.3 Illustrative example of Hadamard matrices as direct products of 2m Hadamard


matrices (courtesy of IEEE5 ).

Figure 10.4 Illustrative example of generation of 43 improper Hadamard matrices by the


direct product of three mutually perpendicular 2D Hadamard matrices (courtesy of IEEE5 ).

Shlichta noted that 3D matrices are 3D Hadamard matrices if the following


hold:
• All layers in one direction are 2D Hadamard matrices that are orthogonal to each
other.
• All layers in two directions are Hadamard matrices.
• In any direction, all layers are orthogonal in at least one layer direction so that
collectively there is at least one correlation vector in each axial direction.
Definition 10.1.3:5 A general n-dimensional Hadamard matrix H = (hi, j,k,...,m ) is
a binary matrix in which all parallel (m − 1)-dimensional sections are mutually
orthogonal, that is, all hi, j,k,...,m and

.... h pqr...ya h pqr...yb = n(m−1) δa,b , (10.3)


p q r y

where (pqr . . . yz) represents all permutations of (i jk . . . m).


This means that a completely proper n-dimensional Hadamard matrix is one
in which all 2D sections, in all possible axis-normal orientations, are Hadamard
matrices. As a consequence, all intermediate-dimensional sections are also
completely proper Hadamard matrices, or an m-dimensional Hadamard matrix is
specified if either all (m − l)-dimensional sections in one direction are Hadamard
matrices and also are mutually orthogonal, or if all (m − l)-dimensional sections in
two directions are Hadamard matrices.

10.2 3D Williamson–Hadamard Matrices


This section presents 3D Williamson–Hadamard matrices; first, we define the 3D
Williamson array.
Definition 10.2.1:8,11 The 3D matrix H(a, b, c, d) = (hi, j,k )4i, j,k=1 is called the 3D
Williamson array if all 2D matrices parallel to planes (X, Y), (X, Z), and (Y, Z) are
Williamson arrays.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 313

The 3D Williamson array is given in Fig. 10.5.


This matrix also can be represented in the following form:
33 3
33−d −c b a −c d a −b −b a −d c a b c d 333
33 c −d −a b −d −c b a −a −b −c −d −b a −d c 33
33 3 (10.4)
33−b a −d c −a −b −c −d d c −b −a −c d a −b 333
3−a −b −c −d b −a d −c −c d a −b −d −c b a 3

Example 10.2.1: It can be shown that from the 3D Williamson array (see
Fig. 10.5) we obtain the following:
(1) The matrices parallel to plane (X, Y) are Williamson arrays
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a b c d⎟⎟⎟ ⎜⎜⎜−b a −d c ⎟⎟⎟
⎜⎜⎜−b a −d c ⎟⎟⎟ ⎜⎜⎜−a −b −c −d⎟⎟⎟⎟
AX,Y = ⎜⎜⎜⎜ ⎟⎟ , BX,Y = ⎜⎜⎜⎜ ⎟,
⎜⎜⎝−c d a −b⎟⎟⎟⎟⎠ ⎜⎜⎝ d c −b −a⎟⎟⎟⎟⎠
(10.5a)
−d −c b a −c d a −b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−c d a −b⎟⎟⎟ ⎜⎜⎜−d −c b a⎟⎟⎟
⎜⎜⎜−d −c b a⎟⎟⎟ ⎜⎜⎜ c −d −a b⎟⎟⎟
C X,Y = ⎜⎜⎜⎜ ⎟⎟ , DX,Y = ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎝−a −b −c −d⎟⎟⎟⎟⎠ ⎜⎜⎝−b a −d c ⎟⎟⎟⎟⎠
(10.5b)
b −a d −c −a −b −c −d

(2) Similarly, the matrices parallel to plane (X, Z) are Williamson arrays
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a b c d⎟⎟⎟ ⎜⎜⎜−b a −d c ⎟⎟⎟
⎜⎜⎜−b a −d c ⎟⎟⎟ ⎜⎜⎜−a −b −c −d⎟⎟⎟⎟
AX,Z = ⎜⎜⎜⎜ ⎟⎟ , BX,Z = ⎜⎜⎜⎜ ⎟,
⎜⎜⎝−c d a −b⎟⎟⎟⎟⎠ ⎜⎜⎝−d −c b a⎟⎟⎟⎟⎠
(10.6a)
−d −c b a c −d −a b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜−c d a −b⎟⎟⎟ ⎜⎜⎜ −d −c b a⎟
⎜⎜⎜ d c −b −a⎟⎟⎟ ⎜⎜⎜−c d a −b⎟⎟⎟⎟⎟
C X,Z = ⎜⎜⎜⎜ ⎟⎟ , DX,Z = ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎝−a −b −c −d⎟⎟⎟⎟⎠ ⎜⎜⎝ b −a d −c ⎟⎟⎟⎟⎠
(10.6b)
−b a −d c −a −b −c −d

(3) Similarly, the following matrices that are parallel to plane (Y, Z) are Williamson
arrays:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ a −b −c −d⎟⎟ ⎜⎜⎜ b a d −c ⎟⎟
⎜⎜⎜−b ⎟ ⎟
−a d −c ⎟⎟⎟⎟ ⎜⎜⎜ a −b c d⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟, BY,Z = ⎜⎜⎜⎜ ⎟,
−d −a b⎟⎟⎟⎟⎠ −c −b −a⎟⎟⎟⎟⎠
AY,Z (10.7a)
⎜⎜⎝−c ⎜⎜⎝ d
−d c −b −a −c −d a −b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ c −d a b⎟⎟ ⎜⎜⎜ d c −b a⎟⎟
⎜⎜⎜−d ⎟ ⎟
−c −b a⎟⎟⎟⎟ ⎜⎜⎜ c −d −a −b⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟, DY,Z = ⎜⎜⎜⎜ ⎟.
b −c d⎟⎟⎟⎟⎠ a −d −c ⎟⎟⎟⎟⎠
CY,Z (10.7b)
⎜⎜⎝ a ⎜⎜⎝−b
b −a −d −c a b c −d

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


314 Chapter 10

The 3D Williamson–Hadamard matrix of order 4 obtained from Fig. 10.5 by


substituting a = b = c = d = 1 is given in Fig. 10.6.

Example 10.2.2: Sylvester–Hadamard matrices obtained from the 3D Sylvester–


Hadamard matrix of order 4 (see Fig. 10.7).

b c d
a
–d
–b a c

–c d a –b
–c b a
–d –b a –d c
–a –b –c
–d
d c –b
–a
d a
–c –b
–c –b
d a
–d
–c b a
–a
–b –d
–c
b d –c
–d –a
a
–c b
c
–d –a b
–b
a c
–d
–a
–b –c –d

Figure 10.5 A 3D Williamson array.

Figure 10.6 3D Williamson–Hadamard matrix of order 4.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 315

Figure 10.7 3D Sylvester–Hadamard matrix of order 4.

Sylvester–Hadamard matrices parallel to planes (X, Y), (X, Z), and (Y, Z) have
the following forms, respectively:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + +⎟⎟
⎟⎟⎟ ⎜⎜⎜+ − + −⎟⎟

⎜⎜⎜+ − + −⎟⎟ ⎜⎜− − − −⎟⎟⎟⎟
HX,Y = HX,Z = HY,Z = ⎜⎜⎜⎜
1 1 1 ⎟⎟⎟ , HX,Y = HX,Z = HY,Z = ⎜⎜⎜⎜⎜
2 2 2 ⎟ , (10.8a)
⎜⎜⎝+ + − −⎟⎟⎠ ⎜⎜⎝+ − − +⎟⎟⎟⎠⎟
+ − − + − − + +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + − −⎟⎟
⎟ ⎜⎜⎜+ − − +⎟⎟

⎜⎜⎜+ − − +⎟⎟⎟⎟ ⎜⎜− − + +⎟⎟⎟⎟
HX,Y = HX,Z = HY,Z = ⎜⎜⎜⎜
3 3 3 ⎟ , HX,Y = HX,Z = HY,Z = ⎜⎜⎜⎜⎜
4 4 4 ⎟ . (10.8b)
⎜⎜⎝− − − −⎟⎟⎟⎟⎠ ⎜⎜⎝− + − +⎟⎟⎟⎟⎠
− + − + + + + +

Definition 10.2.2:9,11 3D matrices A, B, C, and D of order n are called 3D


Williamson-type matrices or Williamson cubes if all 2D matrices parallel to planes
(X, Y), (X, Z), and (Y, Z) are Williamson-type matrices of order n, i.e.,

AX,Y , BX,Y , C X,Y , DX,Y ;


AX,Z , BX,Z , C X,Z , DX,Z ; (10.9)
AY,Z , BY,Z , CY,Z , DY,Z

are Williamson-type matrices.

Illustrative Example 10.2.3: 2D Williamson-type matrices can be obtained from


Williamson cubes (see Fig. 10.8).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


316 Chapter 10

Figure 10.8 Williamson cubes A, B = C = D of order 3.

(1) Williamson-type matrices parallel to plane (X, Y),


⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟
⎜ ⎟ ⎜ ⎟
A1X,Y = ⎜⎜⎜⎜+ + +⎟⎟⎟⎟ , B1X,Y = C X,Y
1
= D1X,Y = ⎜⎜⎜⎜− + −⎟⎟⎟⎟ ;
⎝ ⎠ ⎝ ⎠
+ + + − − +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜− + −⎟⎟⎟
⎜ ⎟ ⎜ ⎟
A2X,Y = ⎜⎜⎜⎜+ + +⎟⎟⎟⎟ , B2X,Y = C X,Y
2
= D2X,Y = ⎜⎜⎜⎜− − +⎟⎟⎟⎟ ; (10.10)
⎝ ⎠ ⎝ ⎠
+ + + + − −
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟
⎜ ⎟ ⎜ ⎟
A3X,Y = ⎜⎜⎜⎜+ + +⎟⎟⎟⎟ , B3X,Y = C X,Y
3
= D3X,Y = ⎜⎜⎜⎜− + −⎟⎟⎟⎟ .
⎝ ⎠ ⎝ ⎠
+ + + − − +

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 317

(2) Williamson-type matrices parallel to plane (X, Z):


⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟
AX,Z = ⎜⎜⎜⎝+ + +⎟⎟⎟⎠ , BX,Z = C X,Z = DX,Z = ⎜⎜⎝⎜− + −⎟⎟⎟⎟⎠ ;
1 ⎜ ⎟ 1 1 1 ⎜
+ + + − − +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜− + −⎟⎟⎟
A2X,Z = ⎜⎜⎜⎜⎝+ + +⎟⎟⎟⎟⎠ , B2X,Z = C X,Z
2
= D2X,Z = ⎜⎜⎜⎜⎝− − +⎟⎟⎟⎟⎠ ; (10.11)
+ + + + − −
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜− − +⎟⎟⎟
⎜ ⎟
A3X,Z = ⎜⎜⎜⎝+ + +⎟⎟⎟⎠ , B3X,Z = C X,Z
3
= D3X,Z = ⎜⎜⎜⎝+ − −⎟⎟⎟⎟⎠ .

+ + + − + −
(3) Williamson-type matrices parallel to plane (Y, Z):
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜+ − −⎟⎟⎟
AY,Z = ⎜⎜⎜⎝+ + +⎟⎟⎟⎠ , BY,Z = CY,Z = DY,Z = ⎜⎜⎝⎜− + −⎟⎟⎟⎟⎠ ;
1 ⎜ ⎟ 1 1 1 ⎜
+ + + − − +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜− + −⎟⎟⎟
A2Y,Z = ⎜⎜⎜⎜⎝+ + +⎟⎟⎟⎟⎠ , B2Y,Z = CY,Z
2
= D2Y,Z = ⎜⎜⎜⎜⎝+ − −⎟⎟⎟⎟⎠ ; (10.12)
+ + + − − +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟ ⎜⎜⎜− − +⎟⎟⎟
AY,Z = ⎜⎜⎜⎝+ + +⎟⎟⎟⎠ , BY,Z = CY,Z = DY,Z = ⎜⎜⎜⎝− + −⎟⎟⎟⎟⎠ .
3 ⎜ ⎟ 3 3 3 ⎜
+ + + + − −
Let us denote the 3D Williamson array by S 3 (a, b, c, d) (see Fig. 10.5). The
following matrices are 3D Williamson–Hadamard matrices of order 4:

P0 = S 3 (−1, −1, −1, −1), P1 = S 3 (−1, −1, −1, +1),


P2 = S 3 (−1, −1, +1, −1), P3 = S 3 (−1, −1, +1, +1),
(10.13a)
P4 = S 3 (−1, +1, −1, −1), P5 = S 3 (−1, +1, −1, +1),
P6 = S 3 (−1, +1, +1, −1), P7 = S 3 (−1, +1, +1, +1),
P8 = S 3 (+1, −1, −1, −1), P9 = S 3 (+1, −1, −1, +1),
P10 = S 3 (+1, −1, +1, −1), P11 = S 3 (+1, −1, +1, +1),
(10.13b)
P12 = S 3 (+1, +1, −1, −1), P13 = S 3 (+1, +1, −1, +1),
P14 = S 3 (+1, +1, +1, −1), P15 = S 3 (+1, +1, +1, +1).

Let
 
V0 = R, U, U 2 , . . . , U n−2 , U n−1 ,
 
V1 = U, U 2 , . . . , U n−1 , R ,
 
V2 = U 2 , U 3 , . . . , U n−1 , R, U , (10.14)
...,
 
Vn−1 = U n−1 , R, U, . . . , U n−3 , U n−2 ,

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


318 Chapter 10

or
33 0 0 0 3
331 0 0 ... 0 0 000 1 0 ... 0 000 000 0 0 ... 0 0 1 33
330 0 0 0 3
1 0 ... 0 0 000 0 1 ... 0 000 001 0 0 ... 0 0 0 33
33 0 0 0 3
0 0 1 ... 0 0 00. . . ... . . 00 000 1 0 ... 0 0 0 33
V0 = 33 ··· . (10.15)
33. . . ... . . 000. . . ... . . 000 000. . . ... . . . 333
330 0 0 ... 1 0 0000 0 0 ... 0 100 0000
0 0 0 ... 1 0 0 333
33
0 0 0 ... 0 1 01 0 0 ... 0 00 00 0 0 ... 0 0 03

We can design the following matrix:

[S H]mn = V0 ⊗ Π0 + V1 ⊗ Π1 + V2 ⊗ Π2 + · · · + Vm−1 ⊗ Πm−1 , (10.16)

where Πi = 0, 1, 2, . . . , m − 1 are (+1, −1) matrices of order n. What conditions


must satisfy matrices Πi , i = 0, 1, 2, . . . , m−1 in order for [S H]mn to be a Hadamard
matrix?
The following statement holds true:

Statement 10.2.1:4 Let m in Eq. (10.16) be an odd number, and Πi = Πm−i ,


i = 0, 1, 2, . . . , m − 1 and Πi ∈ {P0 , P1 , . . . , P15 }.
If Π0 ∈ P1 = {P0 , P3 , P5 , P6 , P9 , P10 , P12 , P15 }, then Πi ∈ P2 = {P1 , P2 , P4 , P7 ,
P8 , P11 , P13 , P14 }, and vice versa; if Π0 ∈ P2 , then Πi ∈ P1 .

Theorem 10.2.1: (Generalized Williamson Theorem4,11 ) If there are spatial Willi-


amson-type matrices A, B, C, and D of order m, then the matrix H(A, B, C, D) (see
Fig. 10.9) is a spatial Williamson–Hadamard matrix of order 4m.

10.3 3D Hadamard Matrices of Order 4n + 2


Recently, Xian32 presented a very simple method of construction of a 3D
Hadamard matrix of the order n = 4k+2, k ≥ 1. Here, we will use Xian’s definitions
and construction method.32 First, we prove the following:

Theorem 10.3.1:32 If {H(i, j, k)}n−1


i, j,k=0 is a 3D Hadamard matrix, then n must be an
even number.

i, j,k=0 be a 3D Hadamard matrix, i.e., H(i, j, k) = ±1, and


Let {H(i, j, k)}n−1
H (i, j, k) = 1. Then, using the orthogonality condition, we obtain
2

n−1 n−1
H(i, j, 0)H(i, j, 1) = 0,
i=0 j=0
n−1 n−1
(10.17)
H(i, j, 0)H(i, j, 0) = n . 2

i=0 j=0

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 319

B C D
A

A –D C
–B

D A –B
–C

–C B A
–D
–B A –D
C

–A –B –C
–D

D C –B
–A
D A
–C –B
–C
D A –B

–D
B A
–C
–A
–D
–B –C
B
D –C
–D –A
–C B A
C
–D –A B

–B A –D C
–A
–B –C –D

Figure 10.9 3D Williamson–Hadamard matrix of order 4m.

Thus,

n−1 n−1 n−1 n−1


H(i, j, 0) {H(i, j, 0) + H(i, j, 1)} = H 2 (i, j, 0)
i=0 j=0 i=0 j=0
n−1 n−1
+ H(i, j, 0)H(i, j, 1) = n2 . (10.18)
i=0 j=0

However,

H(i, j, 0) + H(i, j, 1) = (±1) + (±1) = even number. (10.19)

Thus, the number n2 must be even.


Now, we return to the construction of a 3D Hadamard matrix, particularly as an
illustrative example of a 3D Hadamard matrix order 6.

Definition 10.3.1:32 A (−1, +1) matrix A = [A(i, j)], 0 ≤ i ≤ n − 1, 0 ≤


j ≤ m − 1 is called a (2D) perfect binary array of dimension n × m and is
denoted by PBA(n, m), if and only if, its 2D autocorrelation is a δ-function,

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


320 Chapter 10

i.e.,
n−1 n−1
 
RA (s, t) = A(i, j)A (i + s)mod n, ( j + s)mod m = 0, (s, t)  (0, 0).
i=0 j=0
(10.20)
An example of PBA(6, 6) is given by (see Ref. 31)
⎛ ⎞
⎜⎜⎜− + + + + −⎟⎟

⎜⎜⎜+ − + + + −⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜+ + − + + −⎟⎟⎟⎟
A = ⎜⎜⎜⎜⎜ ⎟. (10.21)
⎜⎜⎜+ + + − + −⎟⎟⎟⎟
⎜⎜⎜+ ⎟
+ + + − −⎟⎟⎟⎟
⎝⎜ ⎠
− − − − − +

Theoremm−1 10.3.2: (See more detail in Ref. 31). If A is PBA(m, m), then B =
B(i, j, k) i, j,k=0 is a 3D Hadamard matrix of order m, where
 
B(i, j, k) = A (k + i)mod m, (k + j)mod m, m , 0 ≤ i, j, k ≤ m − 1. (10.22)

Now, using Theorem 10.3.2 and Eq. (10.21), we present a 3D Hadamard matrix of
order 6. Because B(i, j, k) = B( j, i, k), which means that the layers of the x and y
directions are the same, we need give only the layers in the z and y directions as
follows:
Layers in the z direction:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜− + + + + −⎟⎟
⎟ ⎜⎜⎜− + + + − +⎟⎟

⎜⎜⎜⎜+ − + + + −⎟⎟⎟⎟⎟ ⎜⎜⎜⎜+ − + + − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜+ + − + + −⎟⎟⎟⎟ ⎜+ + − + − +⎟⎟⎟⎟
[B(i, j, 0)] = ⎜⎜⎜⎜⎜ ⎟, [B(i, j, 1)] = ⎜⎜⎜⎜⎜ ⎟,
⎜⎜⎜+ + + − + −⎟⎟⎟⎟ ⎜⎜⎜+ + + − − +⎟⎟⎟⎟
⎜⎜⎜+ ⎟ ⎟
⎜⎝ + + + − −⎟⎟⎟⎟ ⎜⎜⎜−
⎜⎝ − − − + −⎟⎟⎟⎟
⎠ ⎠
− − − − − + + + + + − −
⎛ ⎞ ⎛ ⎞
⎜⎜⎜− + + − + +⎟⎟
⎟ ⎜⎜⎜− + − + + +⎟⎟

⎜⎜⎜+ − + − + +⎟⎟⎟⎟⎟ ⎜⎜⎜+ − − + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜+ + − − + +⎟⎟⎟⎟ ⎜⎜− − + − − −⎟⎟⎟⎟
[B(i, j, 2)] = ⎜⎜⎜⎜⎜ ⎟, [B(i, j, 3)] = ⎜⎜⎜⎜⎜ ⎟ , (10.23)
⎜⎜⎜− − − + − −⎟⎟⎟⎟ ⎜⎜⎜+ + − − + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟ ⎟
⎜⎝ + + − − +⎟⎟⎟⎟ ⎜⎜⎜+
⎜⎝ + − + − +⎟⎟⎟⎟
⎠ ⎠
+ + + − + − + + − + + −
⎛ ⎞ ⎛ ⎞
⎜⎜⎜− − + + + +⎟⎟
⎟ ⎜⎜⎜ + − − − − −⎟⎟

⎜⎜⎜− + − − − −⎟⎟⎟⎟⎟ ⎜⎜⎜− − + + + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜+ − − + + +⎟⎟⎟⎟ ⎜⎜− + − + + +⎟⎟⎟⎟
[B(i, j, 4)] = ⎜⎜⎜⎜⎜ ⎟, [B(i, j, 5)] = ⎜⎜⎜⎜⎜ ⎟.
⎜⎜⎜+ − + − + +⎟⎟⎟⎟ ⎜⎜⎜− + + − + +⎟⎟⎟⎟
⎜⎜⎜+ ⎟ ⎟
⎜⎝ − + + − +⎟⎟⎟⎟ ⎜⎜⎜−
⎜⎝ + + + − +⎟⎟⎟⎟
⎠ ⎠
+ − + − + − − + + + + −

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 321

Layers in the y direction:


⎛ ⎞ ⎛ ⎞
⎜⎜⎜− − − − − +⎟⎟
⎟ ⎜⎜⎜+ + + + − −⎟⎟

⎜⎜⎜+ +
⎜⎜⎜ + + − −⎟⎟⎟⎟ ⎜⎜⎜−
⎜⎜⎜ − − − + −⎟⎟⎟⎟
⎟ ⎟
⎜+ + + − + −⎟⎟⎟⎟ ⎜+ + + − − +⎟⎟⎟⎟
[B(i, 0, k)] = ⎜⎜⎜⎜⎜ , [B(i, 1, k)] = ⎜⎜⎜⎜⎜ ,
⎜⎜⎜+ + − + + −⎟⎟⎟⎟⎟ ⎜⎜⎜+ + − + − +⎟⎟⎟⎟⎟
⎜⎜⎜+ − + + + −⎟⎟⎠ ⎟
⎟ ⎜⎜⎜+ − + + − +⎟⎟⎠ ⎟

⎝ ⎝
− + + + + − − + + + − +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + − + −⎟⎟
⎟ ⎜⎜⎜+ + − + + −⎟⎟

⎜⎜⎜+ + + − − +⎟⎟⎟⎟ ⎜⎜⎜+ + − + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜− − − + − −⎟⎟⎟⎟ ⎜+ + − − + +⎟⎟⎟⎟
[B(i, 2, k)] = ⎜⎜⎜⎜⎜ , [B(i, 3, k)] = ⎜⎜⎜⎜⎜ , (10.24)
⎜⎜⎜+ + − − + +⎟⎟⎟⎟⎟ ⎜⎜⎜− − + − − −⎟⎟⎟⎟⎟
⎜⎜⎜+ − + − + +⎟⎟⎠ ⎟
⎟ ⎜⎜⎜+ − − + + +⎟⎟⎠ ⎟

⎝ ⎝
− + + − + + − + − + + +
⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ − + + + −⎟⎟
⎟ ⎜⎜⎜ − + + + + −⎟⎟

⎜⎜⎜+ − + + − +⎟⎟⎟⎟ ⎜⎜⎜− + + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜+ − + − + +⎟⎟⎟⎟ ⎜− + + − + +⎟⎟⎟⎟
[B(i, 4, k)] = ⎜⎜⎜⎜⎜ , [B(i, 5, k)] = ⎜⎜⎜⎜⎜ .
⎜⎜⎜+ − − + + +⎟⎟⎟⎟⎟ ⎜⎜⎜− + − + + +⎟⎟⎟⎟⎟
⎜⎜⎜− + − − − −⎟⎟⎟⎠⎟ ⎜⎜⎜− − + + + +⎟⎟⎟⎠⎟
⎝ ⎝
− − + + + + + − − − − −

In Fig. 10.10, a 3D Hadamard matrix of size 6 × 6 × 6 obtained from Eq. (10.21)


using Eq. (10.22) is given.
Example 10.3.1:32 The following matrices Am , m = 2, 4, 8, 12 are perfect binary
arrays of size m × m:
⎛ ⎞
⎜⎜⎜− + + + − + + +⎟⎟

⎜⎜⎜+ − + + − + − −⎟⎟⎟⎟
⎛ ⎞ ⎜⎜⎜ ⎟
  ⎜⎜⎜+ + + −⎟⎟⎟ ⎜⎜⎜+ + + − + + + −⎟⎟⎟⎟
⎜⎜⎜+ + + −⎟⎟⎟ ⎜⎜ ⎟
+ + ⎟⎟⎟ , A8 = ⎜⎜⎜⎜⎜+ + − − − − + +⎟⎟⎟⎟
A2 = , A4 = ⎜⎜⎜⎜ , (10.25a)
+ − ⎜⎝+ + + −⎟⎟⎠ ⎜⎜⎜− − + − − − + −⎟⎟⎟⎟⎟
− − − + ⎜⎜⎜+ + + − − − − +⎟⎟⎟ ⎟

⎜⎜⎜
⎜⎜⎜+ − + + + − +
⎝ +⎟⎟⎟⎟⎠
+ − − + − + + −
⎛ ⎞
⎜⎜⎜+ − + − + − − + − − − −⎟⎟⎟
⎜⎜⎜+ − − − + − − − − − + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + − + + + + − − + − −⎟⎟⎟⎟⎟
⎜⎜⎜− + − + − + + − + + + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ − − − + − − − − − + +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜+ + − + + + + − − + − −⎟⎟⎟⎟⎟
A12 = ⎜⎜⎜⎜ ⎟. (10.25b)
⎜⎜⎜⎜+ − + − + − − + − − − −⎟⎟⎟⎟⎟
⎜⎜⎜− + + + − + + + + + − −⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + − + + + + − − + − −⎟⎟⎟⎟⎟
⎜⎜⎜+ − + − + − − + − − − −⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝+ − − − + − − − − − + +⎟⎟⎟⎟⎠
− − + − − − − + + − + +

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


322 Chapter 10

Figure 10.10 3D Hadamard matrix of size 6 × 6 × 6 (dark circles denote +1).

To prove this statement, we can verify the correctness of the condition Eq. (10.20)
only for the matrix A4 , i.e., we can prove that
3
 
RA (s, t) = A(i, j)A (i + s)mod 4, ( j + t)mod 4 , (s, t)  (0, 0). (10.26)
i, j=0

Let us consider the following cases:


Case for s = 0, t = 1, 2, 3:

RA (0, 1) = A(0, 0)A(0, 1) + A(0, 1)A(0, 2) + A(0, 2)A(0, 3) + A(0, 3)A(0, 0)


+ A(1, 0)A(1, 1) + A(1, 1)A(1, 2) + A(1, 2)A(1, 3) + A(1, 3)A(1, 0)
+ A(2, 0)A(2, 1) + A(2, 1)A(2, 2) + A(2, 2)A(2, 3) + A(2, 3)A(2, 0)
+ A(3, 0)A(3, 1) + A(3, 1)A(3, 2) + A(3, 2)A(3, 3) + A(3, 3)A(3, 0),
RA (0, 2) = A(0, 0)A(0, 2) + A(0, 1)A(0, 3) + A(0, 2)A(0, 0) + A(0, 3)A(0, 1)
+ A(1, 0)A(1, 2) + A(1, 1)A(1, 3) + A(1, 2)A(1, 0) + A(1, 3)A(1, 1)
(10.27)
+ A(2, 0)A(2, 2) + A(2, 1)A(2, 3) + A(2, 2)A(2, 0) + A(2, 3)A(2, 1)
+ A(3, 0)A(3, 2) + A(3, 1)A(3, 3) + A(3, 2)A(3, 0) + A(3, 3)A(3, 1),

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 323

RA (0, 3) = A(0, 0)A(0, 3) + A(0, 1)A(0, 0) + A(0, 2)A(0, 1) + A(0, 3)A(0, 2)


+ A(1, 0)A(1, 3) + A(1, 1)A(1, 0) + A(1, 2)A(1, 1) + A(1, 3)A(1, 2)
+ A(2, 0)A(2, 3) + A(2, 1)A(2, 0) + A(2, 2)A(2, 1) + A(2, 3)A(2, 2)
+ A(3, 0)A(3, 3) + A(3, 1)A(3, 0) + A(3, 2)A(3, 1) + A(3, 3)A(3, 2).

By substituting the elements of matrix A4 into these expressions, we obtain

RA (0, 1) = 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 = 0,
RA (0, 2) = 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 = 0, (10.28)
RA (0, 3) = −1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 = 0.

Case for s = 1, t = 0, 1, 2, 3:

RA (1, 0) = A(0, 0)A(1, 0) + A(0, 1)A(1, 1) + A(0, 2)A(1, 2) + A(0, 3)A(1, 3)


+ A(1, 0)A(2, 0) + A(1, 1)A(2, 1) + A(1, 2)A(2, 2) + A(1, 3)A(2, 3)
+ A(2, 0)A(3, 0) + A(2, 1)A(3, 1) + A(2, 2)A(3, 2) + A(2, 3)A(3, 3)
+ A(3, 0)A(0, 0) + A(3, 1)A(0, 1) + A(3, 2)A(0, 2) + A(3, 3)A(0, 3),
RA (1, 1) = A(0, 0)A(1, 1) + A(0, 1)A(1, 2) + A(0, 2)A(1, 3) + A(0, 3)A(1, 0)
+ A(1, 0)A(2, 1) + A(1, 1)A(2, 2) + A(1, 2)A(2, 3) + A(1, 3)A(2, 0)
+ A(2, 0)A(3, 1) + A(2, 1)A(3, 2) + A(2, 2)A(3, 3) + A(2, 3)A(3, 0)
+ A(3, 0)A(0, 1) + A(3, 1)A(0, 2) + A(3, 2)A(0, 3) + A(3, 3)A(0, 0),
(10.29)
RA (1, 2) = A(0, 0)A(1, 2) + A(0, 1)A(1, 3) + A(0, 2)A(1, 0) + A(0, 3)A(1, 1)
+ A(1, 0)A(2, 2) + A(1, 1)A(2, 3) + A(1, 2)A(2, 0) + A(1, 3)A(2, 1)
+ A(2, 0)A(3, 2) + A(2, 1)A(3, 3) + A(2, 2)A(3, 0) + A(2, 3)A(3, 1)
+ A(3, 0)A(0, 2) + A(3, 1)A(0, 3) + A(3, 2)A(0, 0) + A(3, 3)A(0, 1),
RA (1, 3) = A(0, 0)A(1, 3) + A(0, 1)A(1, 0) + A(0, 2)A(1, 1) + A(0, 3)A(1, 2)
+ A(1, 0)A(2, 3) + A(1, 1)A(2, 0) + A(1, 2)A(2, 1) + A(1, 3)A(2, 2)
+ A(2, 0)A(3, 3) + A(2, 1)A(3, 0) + A(2, 2)A(3, 1) + A(2, 3)A(3, 2)
+ A(3, 0)A(0, 3) + A(3, 1)A(0, 0) + A(3, 2)A(0, 1) + A(3, 3)A(0, 2).

By substituting the elements of matrix A4 into these expressions, we obtain

RA (1, 0) = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 − 1 − 1 − 1 − 1 = 0,
RA (1, 1) = 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 = 0,
(10.30)
RA (1, 2) = 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 = 0,
RA (1, 3) = −1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 + 1 = 0.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


324 Chapter 10

Case for s = 2, t = 0, 1, 2, 3:
RA (2, 0) = A(0, 0)A(2, 0) + A(0, 1)A(2, 1) + A(0, 2)A(2, 2) + A(0, 3)A(2, 3)
+ A(1, 0)A(3, 0) + A(1, 1)A(3, 1) + A(1, 2)A(3, 2) + A(1, 3)A(3, 3)
+ A(2, 0)A(0, 0) + A(2, 1)A(0, 1) + A(2, 2)A(0, 2) + A(2, 3)A(0, 3)
+ A(3, 0)A(1, 0) + A(3, 1)A(1, 1) + A(3, 2)A(1, 2) + A(3, 3)A(1, 3),
(10.31a)
RA (2, 1) = A(0, 0)A(2, 1) + A(0, 1)A(2, 2) + A(0, 2)A(2, 3) + A(0, 3)A(2, 0)
+ A(1, 0)A(3, 1) + A(1, 1)A(3, 2) + A(1, 2)A(3, 3) + A(1, 3)A(3, 0)
+ A(2, 0)A(0, 1) + A(2, 1)A(0, 2) + A(2, 2)A(0, 3) + A(2, 3)A(0, 0)
+ A(3, 0)A(1, 1) + A(3, 1)A(1, 2) + A(3, 2)A(1, 3) + A(3, 3)A(1, 0),
RA (2, 2) = A(0, 0)A(2, 2) + A(0, 1)A(2, 3) + A(0, 2)A(2, 0) + A(0, 3)A(2, 1)
+ A(1, 0)A(3, 2) + A(1, 1)A(3, 3) + A(1, 2)A(3, 0) + A(1, 3)A(3, 1)
+ A(2, 0)A(0, 2) + A(2, 1)A(0, 3) + A(2, 2)A(0, 0) + A(2, 3)A(0, 1)
+ A(3, 0)A(1, 2) + A(3, 1)A(1, 3) + A(3, 2)A(1, 0) + A(3, 3)A(1, 1),
(10.31b)
RA (2, 3) = A(0, 0)A(2, 3) + A(0, 1)A(2, 0) + A(0, 2)A(2, 1) + A(0, 3)A(2, 2)
+ A(1, 0)A(3, 3) + A(1, 1)A(3, 0) + A(1, 2)A(3, 1) + A(1, 3)A(3, 2)
+ A(2, 0)A(0, 3) + A(2, 1)A(0, 0) + A(2, 2)A(0, 1) + A(2, 3)A(0, 2)
+ A(3, 0)A(1, 3) + A(3, 1)A(1, 0) + A(3, 2)A(1, 1) + A(3, 3)A(1, 2).
By substituting the elements of matrix A4 into these expressions, we obtain
RA (2, 0) = 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 = 0,
RA (2, 1) = 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 = 0,
(10.32)
RA (2, 2) = 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 = 0,
RA (2, 3) = −1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 = 0.

Case for s = 3, t = 0, 1, 2, 3:

RA (3, 0) = A(0, 0)A(3, 0) + A(0, 1)A(3, 1) + A(0, 2)A(3, 2) + A(0, 3)A(3, 3)


+ A(1, 0)A(0, 0) + A(1, 1)A(0, 1) + A(1, 2)A(0, 2) + A(1, 3)A(0, 3)
+ A(2, 0)A(1, 0) + A(2, 1)A(1, 1) + A(2, 2)A(1, 2) + A(2, 3)A(1, 3)
+ A(3, 0)A(2, 0) + A(3, 1)A(2, 1) + A(3, 2)A(2, 2) + A(3, 3)A(2, 3),
RA (3, 1) = A(0, 0)A(3, 1) + A(0, 1)A(3, 2) + A(0, 2)A(3, 3) + A(0, 3)A(3, 0)
+ A(1, 0)A(0, 1) + A(1, 1)A(0, 2) + A(1, 2)A(0, 3) + A(1, 3)A(0, 0)
+ A(2, 0)A(1, 1) + A(2, 1)A(1, 2) + A(2, 2)A(1, 3) + A(2, 3)A(1, 0)
+ A(3, 0)A(2, 1) + A(3, 1)A(2, 2) + A(3, 2)A(2, 3) + A(3, 3)A(2, 0),
(10.33)
RA (3, 2) = A(0, 0)A(3, 2) + A(0, 1)A(3, 3) + A(0, 2)A(3, 0) + A(0, 3)A(3, 1)
+ A(1, 0)A(0, 2) + A(1, 1)A(0, 3) + A(1, 2)A(0, 0) + A(1, 3)A(0, 1)
+ A(2, 0)A(1, 2) + A(2, 1)A(1, 3) + A(2, 2)A(1, 0) + A(2, 3)A(1, 1)
+ A(3, 0)A(2, 2) + A(3, 1)A(2, 3) + A(3, 2)A(2, 0) + A(3, 3)A(2, 1),
RA (3, 3) = A(0, 0)A(3, 3) + A(0, 1)A(3, 0) + A(0, 2)A(3, 1) + A(0, 3)A(3, 2)
+ A(1, 0)A(0, 3) + A(1, 1)A(0, 0) + A(1, 2)A(0, 1) + A(1, 3)A(0, 2)
+ A(2, 0)A(1, 3) + A(2, 1)A(1, 0) + A(2, 2)A(1, 1) + A(2, 3)A(1, 2)
+ A(3, 0)A(2, 3) + A(3, 1)A(2, 0) + A(3, 2)A(2, 1) + A(3, 3)A(2, 2).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 325

Figure 10.11 3D Hadamard matrix from PBA(4, 4).

By substituting the elements of matrix A4 into these expressions, we obtain

RA (3, 0) = −1 − 1 − 1 − 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 − 1 − 1 − 1 − 1 = 0,
RA (3, 1) = −1 − 1 + 1 + 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 − 1 − 1 − 1 + 1 + 1 = 0,
(10.34)
RA (3, 2) = −1 + 1 − 1 + 1 + 1 − 1 + 1 − 1 + 1 − 1 + 1 − 1 − 1 + 1 − 1 + 1 = 0,
RA (3, 3) = 1 − 1 − 1 + 1 − 1 + 1 + 1 − 1 − 1 + 1 + 1 − 1 + 1 − 1 − 1 + 1 = 0.

The 3D Hadamard matrix of order 4 obtained from PBA(4, 4) is given in


Fig. 10.11.

10.4 Fast 3D WHTs


We have seen that the transform technique based on sinusoidal functions has
been successfully applied in signal processing and in communications over a
considerable period of time. A mathematical reason for this success is the fact
that these functions constitute the eigenfunctions of linear operators, which are
modeled by means of ordinary linear differential equations. One nonsinusoidal
system is given by the Walsh functions (as they were first studied in 1923 by Walsh)
and their different variations and generalizations. Chrestenson33 has investigated
the “generalized” Walsh functions. An original work on this topic is reported
in Ahmed and Rao.26 The first application of 3D Walsh transforms in signal
processing is given by Harmuth.2 Finally, surveys of 3D Walsh transforms are also
in Refs. 14, 17, and 32.
In this section, we consider some higher-dimensional orthogonal transforms that
can be used to process 2D (e.g., images) and/or 3D (e.g., seismic waves) digital

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


326 Chapter 10

signals. Let W = [W(i, j, k)], 0 ≤ i, j, k ≤ 2n − 1 be a 3D (−1, +1)-matrix of size


2n × 2n × 2n such that

1
W −1 = W. (10.35)
2n

A 2D digital signal f = [ f (i, j)], 0 ≤ i, j ≤ 2n − 1 can be treated as a 3D matrix of


size 2n × 1 × 2n . Thus,

1
F = W f, f = WF (10.36)
2n

are a pair of orthogonal forward and inverse transforms.


Equation (10.35) can be satisfied by the following matrices:
* +
W = [W(i, j, k)] = (−1)i, j+i,k+ j,k+an+b(i,i+ j, j)+ck,k , (10.37)

where 0 ≤ i, j, k ≤ 2n − 1, a, b, c ∈ {0, 1}, i, j is the inner product of the vectors


i = (i0 , i1 , . . . , in−1 ) and j = ( j0 , j1 , . . . , jn−1 ), which are the binary expanded vectors
of integers i = n−1 t=0 it 2 and j =
t n−1 t
t=0 jt 2 , respectively.

Theorem 10.4.1:32 Let W = [W(i, j, k)] be the 3D Hadamard matrix in Eq. (10.37)
and f = [ f (i, j)], 0 ≤ i, j ≤ 2n − 1, an image signal. Then, the transform

1
F = [F(i, j)] = W f and f = WF (10.38)
2n
can be factorized as

"
n−1
F =Wf = (I2i ⊗ A ⊗ I2n−1−i ) f, (10.39)
i=0

where A = [A(p, q, r)], 0 ≤ p, q, r ≤ 1, is a 3D matrix of size 2 × 2 × 2 defined by

A(p, q, r) = (−1) pq+pr+qr+a+b(p+q)cr (10.40)

and can be implemented using n4n addition operations.

Example 10.4.1: A 3D forward HT of images of size 4 × 4.

Let n = 2 and a = b = c = 0. From Eqs. (10.37) and (10.40), we obtain a 3D


Hadamard matrix of the size 4 × 4 × 4 given in Fig. 10.12.
Denote the image matrix of order 4 by f = [ f (i, j)]. Realization of an F = W f
transform can be obtained from equations given in Example 10.3.1.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 327

Figure 10.12 3D Hadamard matrix of size 4 × 4 × 4 from Eq. (10.37).

F(0, 0) = f (0, 0) + f (1, 0) + f (2, 0) + f (3, 0), F(1, 0) = f (0, 0) − f (1, 0) + f (2, 0) − f (3, 0),
F(0, 1) = f (0, 1) − f (1, 1) + f (2, 1) − f (3, 1), F(1, 1) = − f (0, 1) − f (1, 1) − f (2, 1) − f (3, 1),
F(0, 2) = f (0, 2) + f (1, 2) − f (2, 2) − f (3, 2), F(1, 2) = f (0, 2) − f (1, 2) − f (2, 2) + f (3, 2),
F(0, 3) = f (0, 3) − f (1, 3) − f (2, 3) + f (3, 3); F(1, 3) = − f (0, 3) − f (1, 3) + f (2, 3) + f (3, 3);
(10.41)
F(2, 0) = f (0, 0) + f (1, 0) − f (2, 0) − f (3, 0), F(3, 0) = f (0, 0) − f (1, 0) − f (2, 0) + f (3, 0),
F(2, 1) = f (0, 1) − f (1, 1) − f (2, 1) + f (3, 1), F(3, 1) = − f (0, 1) − f (1, 1) + f (2, 1) + f (3, 1),
F(2, 2) = − f (0, 2) − f (1, 2) − f (2, 2) − f (3, 2), F(3, 2) = − f (0, 2) + f (1, 2) − f (2, 2) + f (3, 2),
F(2, 3) = − f (0, 3) + f (1, 3) − f (2, 3) + f (3, 3); F(3, 3) = f (0, 3) + f (1, 3) + f (2, 3) + f (3, 3).

Example 10.4.2: Factorization of 3D Hadamard matrix of size 4.


Consider the case where n = 2 and a = b = c = 0. From Eqs. (10.37) and (10.40),
we obtain
W = [W(i, j, k)] = (−1)i, j+i,k+ j,k ,
(10.42)
A = [A(p, q, r)] = (−1) pq+pr+qr .

From Theorem 10.4.1, we obtain

W = (I1 ⊗ A ⊗ I2 ) (I2 ⊗ A ⊗ I1 ) = (A ⊗ I2 ) (I2 ⊗ A) , (10.43)

where A, I2 , A ⊗ I2 and I2 ⊗ A are given in Figs. 10.13 and 10.14.


Example 10.4.3: A 3D HT using factorization.
Let f = [ f (i, j)] be an input image of size 4 × 4. A 3D HT of this image can be
implemented as

F = W f = (A ⊗ I2 ) (I2 ⊗ A) f = W1 W2 f = W1 R, (10.44)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


328 Chapter 10

Figure 10.13 The structure of the matrix A × I2 .

where W1 and W2 are taken from Figs. 10.13 and 10.14, respectively. From
Example 10.3.1, we have R = {R(p, q)} = W2 f , where

R(0, 0) = f (2, 0) + f (3, 0), R(0, 1) = f (2, 1) − f (3, 1),


R(0, 2) = f (2, 2) + f (3, 2), R(0, 3) = f (2, 3) − f (3, 3);
R(1, 0) = f (2, 0) − f (3, 0), R(1, 1) = − f (2, 1) − f (3, 1),
R(1, 2) = f (2, 2) − f (3, 2), R(1, 3) = − f (2, 3) − f (3, 3);
(10.45)
R(2, 0) = f (0, 0) + f (1, 0), R(2, 1) = f (0, 1) − f (1, 1),
R(2, 2) = f (0, 2) + f (1, 2), R(2, 3) = f (0, 3) − f (1, 3);
R(3, 0) = f (0, 0) + f (1, 0), R(3, 1) = − f (0, 1) − f (1, 1),
R(3, 2) = f (0, 2) + f (1, 2), R(3, 3) = − f (0, 3) − f (1, 3).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 329

Figure 10.14 The structure of the matrix I2 × A.

and F = {F(p, q)} = W1 R,

F(0, 0) = R(1, 0) + R(3, 0), F(0, 1) = R(1, 1) + R(3, 1),


F(0, 2) = R(1, 2) − R(3, 2), F(0, 3) = R(1, 3) − R(3, 3);
F(1, 0) = R(0, 0) + R(2, 0), F(1, 1) = R(0, 1) + R(2, 1),
F(1, 2) = R(0, 2) − f (2, 2), F(1, 3) = R(0, 3) − R(2, 3);
(10.46)
F(2, 0) = R(1, 0) − R(3, 0), F(2, 1) = R(1, 1) − R(3, 1),
F(2, 2) = −R(1, 2) − f (3, 2), F(2, 3) = −R(1, 3) − R(3, 3);
F(3, 0) = R(0, 0) − R(2, 0), F(3, 1) = R(0, 1) − R(2, 1),
F(3, 2) = −R(0, 2) − R(2, 2), F(3, 3) = −R(0, 3) − R(2, 3).

10.5 Operations with Higher-Dimensional Complex Matrices


In this section, we define some useful operations with higher-dimensional complex
matrices that can be additionally used for realization of multidimensional complex
HTs. Let A = [A(i1 , i2 , . . . , in )] be an n-dimensional complex matrix of size
m1 × m2 × · · · × mn and 0 ≤ ik ≤ mk − 1, where k = 1, 2, . . . , n. Complex matrix A
can be presented as A = A1 + √ jA2 , where A1 , A2 are real n-dimensional matrices of
size m1 × m2 × · · · × mn , j = −1.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


330 Chapter 10

(1) Scalar multiplication. sA = [sA(i1 , i2 , . . . , in )], s is a complex scalar number.


Since s = s1 + js2 , the real and imaginary parts of the sA matrix, respectively,
are given by s1 A1 − s2 A2 , s2 A1 + s1 A1 .

(2) Equality of two complex matrices. A = B means A(i1 , i2 , . . . , in ) = B(i1 , i2 ,


. . . , in ), where A and B have the same size. If A = Ai + jA2 and B = Bi + jB2 ,
then equality A = B means A1 = B1 , A2 = B2 .

(3) Addition of two complex matrices. C = A ± B = C1 + jC2 , where

C1 = [A1 (i1 , i2 , . . . , in ) ± B1 (i1 , i2 , . . . , in )],


C2 = [A2 (i1 , i2 , . . . , in ) ± B2 (i1 , i2 , . . . , in )].

(4) Multiplication of two complex matrices. Let A = [A(i1 , i2 , . . . , in )] and B =


[B(i1 , i2 , . . . , in )] be two n-dimensional complex matrices of sizes a1 × a2 ×
· · · × an and b1 × b2 × · · · × bn , respectively.

(a) If n = 2m and (am+1 , am+2 , . . . , an ) = (bm+1 , bm+2 , . . . , bm ), then the complex


matrix C[(k1 , k2 , . . . , kn )] = [C1 (k1 , k2 , . . . , kn ) + jC2 (k1 , k2 , . . . , kn )] = AB
of size a1 × a2 × · · · × am × bm+1 × · · · × bn is defined as

b1 −1 b2 −1 bm −1
C1 (k1 , k2 , . . . , kn ) = ··· A1 (k1 , . . . , km , t1 , . . . , tm )
t1 =0 t2 =0 tm =0

× B1 (t1 , . . . , tm , km+1 , . . . , kn )
b1 −1 b2 −1 bm −1
− ··· A2 (k1 , . . . , km , t1 , . . . , tm )
t1 =0 t2 =0 tm =0

× B2 (t1 , . . . , tm , km+1 , . . . , kn ),
b1 −1 b2 −1 bm −1
(10.47)
C2 (k1 , k2 , . . . , kn ) = ··· A1 (k1 , . . . , km , t1 , . . . , tm )
t1 =0 t2 =0 tm =0

× B2 (t1 , . . . , tm , km+1 , . . . , kn )
b1 −1 b2 −1 bm −1
+ ··· A2 (k1 , . . . , km , t1 , . . . , tm )
t1 =0 t2 =0 tm =0

× B1 (t1 , . . . , tm , km+1 , . . . , kn ).

(b) If n = 2m + 1 and (am+1 , am+2 , . . . , a2m ) = (b1 , b2 , . . . , bm ) and an = bn ,


then the complex matrix C[(k1 , k2 , . . . , kn )] = [C1 (k1 , k2 , . . . , kn ) + jC2 (k1 ,
k2 , . . . , kn )] = AB of size a1 × a2 × · · · × am × bm+1 × · · · × bn is then defined
as

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 331

b1 −1 b2 −1 bm −1
C1 (k1 , k2 , . . . , kn ) = ··· A1 (k1 , . . . , km , t1 , . . . , tm , kn )
t1 =0 t2 =0 tm =0

× B1 (t1 , . . . , tm , km+1 , . . . , kn )
b1 −1 b2 −1 bm −1
− ··· A2 (k1 , . . . , km , t1 , . . . , tm , kn )
t1 =0 t2 =0 tm =0

× B2 (t1 , . . . , tm , km+1 , . . . , kn ),
b1 −1 b2 −1 bm −1
(10.48)
C2 (k1 , k2 , . . . , kn ) = ··· A1 (k1 , . . . , km , t1 , . . . , tm , kn )
t1 =0 t2 =0 tm =0

× B2 (t1 , . . . , tm , km+1 , . . . , kn )
b1 −1 b2 −1 bm −1
+ ··· A2 (k1 , . . . , km , t1 , . . . , tm , kn )
t1 =0 t2 =0 tm =0

× B1 (t1 , . . . , tm , km+1 , . . . , kn ).

(5) Conjugate transpose of complex matrices. Let A = [A(i1 , i2 , . . . in )] be an


n-dimensional matrix. The conjugate transpose matrix of A is defined as
follows:
(a) If n = 2m, then A∗ = [B( j1 , j2 , . . . , jn )] = [A∗ ( jm+1 , jm+2 , . . . , jn , j1 , . . . , jm )].
(b) If n = 2m+1, then A∗ = [B( j1 , j2 , . . . , jn )] = [A∗ ( jm+1 , jm+2 , . . . , j2m , j1 , . . . ,
jm , jn )].
(6) Identity matrix. An n-dimensional matrix I of size a1 × a2 × · · · × an is called an
identity matrix if, for any n-dimensional matrix, A is satisfied by IA = AI = A.
Note the following:
(a) If n = 2m, then the size of matrix I should satisfy

(a1 , a2 , . . . , am ) = (am+1 , am+2 , . . . , an ). (10.49)

(b) If n = 2m + 1, then the size of matrix I should satisfy

(a1 , a2 , . . . , am ) = (am+1 , am+2 , . . . , an−1 ). (10.50)

Note also that for every given integer n and the size a1 × a2 × · · · × an
satisfying conditions (a) and (b), there is one, and only one, identity matrix.
That identity matrix is defined as follows:
(c) If n = 2m, then the identity matrix I = [I(i1 , i2 , . . . , in )] of size a1 × · · · ×
am × a1 × · · · × am is defined as

1, if (i1 , i2 , . . . , im ) = (im+1 , im+2 , . . . , in ),
I(i1 , i2 , . . . , in ) = (10.51)
0, otherwise.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


332 Chapter 10

(d) If n = 2m + 1, then the identity matrix I = [I(i1 , i2 , . . . , in )] of size


a1 × · · · × am × a1 × · · · × am × an is defined as

1, if (i1 , i2 , . . . , im ) = (im+1 , im+2 , . . . , in−1 ),
I(i1 , i2 , . . . , in ) = (10.52)
0, otherwise.

Let m and k be two integers, and A, B, and C be n-dimensional complex matrices.


The following identities hold:

A(B + C) = AB + AC, (B + C)A = BA + CA,


(m + k)A = mA + kA, m(A + B) = mA + mB,
m(AB) = (mA)B = A(mB), A(BC) = (AB)C (10.53)
∗ ∗
(A ) = A, (A + B) = A + B∗ ,
∗ ∗

∗ ∗ ∗
(AB) = B A .

10.6 3D Complex HTs


Recall that from the definition of a 2D complex Hadamard matrix, it follows that
a complex Hadamard matrix is a matrix such that its (2 − 1)-dimensional layers
(rows or columns) in each normal orientation of the axes are orthogonal to each
other. Similarly, we can define a 3D complex Hadamard matrix as follows:

Definition 10.6.1: A 3D complex Hadamard matrix H = (hi, j,k )ni, j,k=1 of order n
is called a regular 3D complex Hadamard matrix if the following conditions are
satisfied:
n n
hi,a,r h∗i,b,r = ha, j,r h∗b, j,r = nδa,b ,
i=1 j=1
n n
hi,q,a h∗i,q,b = ha,q,k h∗b,q,k = nδa,b , (10.54)
i=1 k=1
n n
h p, j,a h∗p, j,b = h p,a,k h∗p,b,k = nδa,b ,
j=1 k=1


where ha,b,c ∈ {−1, +1, − j, + j}, j = −1, δa,b is a Kronecker function, i.e., δa,a = 1
and δa,b = 0 if a  b.
From the conditions of Eq. (10.54), it follows that for fixed i0 , j0 , k0 , the matrices
(hi0 , j,k )nj,k=1 , (hi, j0 ,k )ni,k=1 , and (hi, j,k 0 )ni, j=1 are 2D complex Hadamard matrices of
order n. In Fig. 10.15, two 3D complex Hadamard matrices of size 2 × 2 × 2 are
given.
The higher size of 3D complex Hadamard matrices can be obtained by the
Kronecker product. The 3D complex Hadamard matrix of size 4 × 4 × 4 constructed

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 333

Figure 10.15 Two 3D complex Hadamard matrices.

Figure 10.16 3D complex Hadamard matrices of size 4 × 4 × 4.

by the Kronecker product of 3D Hadamard matrix of size 2 × 2 × 2 (see Fig. 10.2)


and the 3D complex Hadamard matrix of size 2 × 2 × 2 from Fig. 10.15 (left side
matrix) are given in Fig. 10.16.
Denote by C = [C(m, n, k)] the 3D complex Hadamard matrix of size 4 × 4 × 4
given in Fig. 10.16, and let f be a 2D image matrix of order 4, which can be
regarded as a 3D matrix f = [ f (i, 0, k)] of size 4 × 1 × 4. By this definition, matrix
D = [D(d1 , 0, d3 )] = C f can be obtained from the following equation:

3
D(m, 0, k) = C(m, n, k) f (m, 0, k), m, k = 0, 1, 2, 3. (10.55)
n=0

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


334 Chapter 10

From this equation, we obtain

D(000) = − j f (000) + f (100) + j f (200) + f (300),


D(001) = f (001) + j f (101) − f (201) − j f (301),
D(002) = − j f (002) − f (102) − j f (202) − f (302),
D(003) = f (003) + j f (103) + f (203) + j f (303);
D(100) = f (000) + j f (100) − f (200) − j f (300),
D(101) = − j f (001) − f (101) + j f (201) + f (301),
D(102) = f (002) + j f (102) + f (202) + j f (302),
D(103) = − j f (003) − f (103) − j f (203) − f (303);
(10.56)
D(200) = j f (000) + f (100) + j f (200) + f (300),
D(201) = − f (001) − j f (101) − f (201) − j f (301),
D(202) = − j f (002) − f (102) + j f (202) + f (302),
D(203) = f (003) + j f (103) − f (203) − j f (303);
D(300) = − f (000) − j f (100) − f (200) − j f (300),
D(301) = j f (001) + f (101) + j f (201) + f (301),
D(302) = f (002) + j f (102) − f (202) − j f (302),
D(303) = − j f (003) − f (103) + j f (203) + f (303).

Or, by ignoring the second coordinate, we obtain

D(0, 0) = [ f (1, 0) + f (3, 0)] − j[ f (0, 0) − f (2, 0)],


D(0, 1) = [ f (0, 1) − f (2, 1)] + j[ f (11) − f (3, 1)],
D(0, 2) = −[ f (1, 2) + f (3, 2)] − j[ f (0, 2) + f (2, 2)],
D(0, 3) = [ f (0, 3) + f (2, 3)] + j[ f (1, 3) + f (3, 3)];
D(1, 0) = [ f (0, 0) − f (2, 0)] + j[ f (1, 0) − f (3, 0)],
D(1, 1) = −[ f (1, 1) − f (3, 1)] − j[ f (0, 1) − f (2, 1)],
D(1, 2) = [ f (0, 2) + f (2, 2)] + j[ f (1, 2) + f (3, 2)],
D(1, 3) = −[ f (1, 3) + f (3, 3)] − j[ f (0, 3) + f (2, 3)];
(10.57)
D(2, 0) = [ f (1, 0) + f (3, 0)] + j[ f (0, 0) + f (2, 0)],
D(2, 1) = −[ f (0, 1) + f (2, 1)] − j[ f (1, 1) + f (3, 1)],
D(2, 2) = −[ f (1, 2) − f (3, 2)] − j[ f (0, 2) − f (2, 2)],
D(2, 3) = [ f (0, 3) − f (2, 3)] + j[ f (1, 3) − f (3, 3)];
D(3, 0) = −[ f (0, 0) + f (2, 0)] − j[ f (1, 0) + f (3, 0)],
D(3, 1) = [ f (1, 1) + f (3, 1)] + j[ f (0, 1) + f (2, 1)],
D(3, 2) = [ f (0, 2) − f (2, 2)] + j[ f (1, 2) − f (3, 2)],
D(3, 3) = −[ f (1, 3) − f (3, 3)] − j[ f (0, 3) − f (2, 3)].

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 335

10.7 Construction of (λ, μ) High-Dimensional Generalized


Hadamard Matrices
An n-dimensional matrix of order m is defined as

A = [Ai1 ,i2 ,...,in ], i1 , i2 , . . . , in = 1, 2, . . . , m. (10.58)

The set of elements of the matrix in Eq. (10.58) with fixed values i j1 , i j2 , . . . , i jk of
indices i j1 , i j2 , . . . , i jk (1 ≤ jr ≤ n, 1 ≤ k ≤ n − 1) defines a k-tuple section of the
orientation (i j1 , i j2 , . . . , i jk ), and is given by the (n − k)-dimensional matrix of order
m. The matrix

Bi1 ,i2 ,...,in = Ai j1 ,i j2 ,...,i jk (10.59)

is called a transposition of the matrix in Eq. (10.58) according to a substitution


 
i1 i2 · · · in
. (10.60)
i j1 i j2 · · · i jn

The transposed matrix will be denoted by34


⎛ ⎞
⎜⎜⎜i
⎜⎜⎜⎜ 1
i2 · · · in ⎟⎟⎟⎟⎟

A i j1
⎝ i j2 · · · i jn ⎟⎠ . (10.61)

Let [A]n = [Ai1 ,i2 ,...,in ] and [B]r = [B j1 , j2 ,..., jr ] be n- and r-dimensional matrices of
order m, respectively, (i1 , i2 , . . . , in , j1 , . . . , jr = 1, 2, . . . , m).

Definition 10.7.1:34 The (λ, μ) convolute product of the matrix [A]n to the matrix
[B]r with decomposition by indices s and c is called a t-dimensional matrix [D]t of
order m, defined as
⎡ ⎤
⎢⎢⎢ ⎥⎥⎥
[D]t = [Dl,s,k ] = ([A]n [B]r ) = ⎢⎣⎢
(λ,μ) ⎢ Al,s,c Bc,s,k ⎥⎥⎦⎥ , (10.62)
(c)

where
n = k + λ + μ, r = ν + λ + μ, t = n + r − (λ + 2μ),
l = (l1 , l2 , . . . , lk ), s = (s1 , s2 , . . . , sλ ), c = (c1 , c2 , . . . , cμ ), (10.63)
k = (k1 , k2 , . . . , kν ).

Now we introduce the concept of a multidimensional generalized Hadamard


matrix.
Definition 10.7.2:8,35 An n-dimensional matrix [H]n = [Hi1 ,i2 ,...,in ] (i1 , i2 , . . . , in =
1, 2, . . . , m) with elements of p’th root of unity, here will be called an n-dimensional

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


336 Chapter 10

generalized Hadamard matrix of order m, if all (n−1)-dimensional parallel sections


of orientation (il ) 1 ≤ l ≤ n are mutually orthogonal matrices, i.e.,

··· ··· Hr,...,α,...,z Hr,...,β,...,z = mn−1 δα,β , (10.64)
r t z

where (r, . . . , t, . . . , z) represents all of the scrambling (i1 , . . . , il , . . . , in ), and δα,β is


the Kronecker symbol.

Let H  be an n-dimensional matrix of order m, and H  be a conjugate transpose


of matrix H 1 by several given indexes.

Definition 10.7.3:8,35 The matrix H  of order m will be called the (λ, μ) orthogonal
matrix by all axial-oriented directions with parameters λ, μ, if, for fixed values λ, μ
(μ  0), the following conditions hold:
λ,μ
Ht Ht = mμ E(λ, k), t = 1, 2, . . . , N, (10.65)

where k = n − λ − μ, E(λ, k) is a (λ, k)-dimensional identity matrix,34 and


⎧ n!




⎪ μ!k!, if μ  k,
⎨ λ!
N=⎪
⎪ (10.66)


⎪ n!
⎩ μ!k!, if μ = k
2λ!
are satisfied.

Remark 10.7.1: The concept of the multidimensional (λ, μ)-orthogonal matrix


coincides with the concept of the multidimensional generalized Hadamard matrix
[H(p, m)]n if λ + μ = n − 1.

We emphasize the following two special cases:


• For λ = 0 and μ = n−1, we have a general n-dimensional generalized Hadamard
matrix [H(p, m)]n . In this case, Eq. (10.65) can be rewritten as
0,n−1
Ht Ht = mn−1 E(0, 1), t = 1, 2, . . . , n, (10.67)

where
⎛ ⎞ ⎛ ⎞
⎜⎜⎜i
⎜⎜⎜ 1 i2 · · · it−1 it ⎟⎟⎟⎟⎟ ⎜⎜⎜i
⎜⎜⎜ t it+1 · · · in−1 in ⎟⎟⎟⎟⎟
⎜⎝ ⎟ ⎟
Ht =  i2
(H ) i3 · · · it i1 ⎟⎠ , Ht =  in
(H )
⎜⎝
it · · · in−2 in−1 ⎟⎠ . (10.68)

• For λ = n − 2 and μ = 1, we obtain a regular n-dimensional generalized


Hadamard matrix [H(p, m)]n satisfying the following relationships:
 
n−2,1
(Ht,q Ht,q ) = mE(n − 2, 1), t = 1, 2, . . . , n, q = t + 1, t + 2, . . . , n, (10.69)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 337

where
⎛ ⎞
⎜⎜⎜i
⎜⎜⎜ 1 · · · it−1 it iq iq+1 · · · in ⎟⎟⎟⎟⎟
⎜⎝ ⎟

Ht,q =  i2
(H ) · · · it i1 in iq · · · in−1 ⎟⎠ ,
⎛ ⎞ (10.70)
⎜⎜⎜i
⎜⎜⎜ 1 · · · it−1 it iq iq+1 · · · in ⎟⎟⎟⎟⎟
⎜⎝ ⎟

Ht,q =  i2
(H ) · · · it i1 in iq · · · in−1 ⎟⎠ .

Theorem 10.7.1: 35 If a generalized Hadamard matrix H(p, m) exists, then there is


a 3D generalized Hadamard matrix [H(p, m)]3 .

Proof: First, we define a generalized Hadamard matrix. A square matrix H(p, m)


of order m with elements of the p’th root of unity is called a generalized Hadamard
matrix if HH ∗ = H ∗ H = NIN , where H ∗ is the conjugate-transpose matrix of H
(for more details about such matrices, see Chapter 11).

Now, let H1 = H(p, m) be a generalized Hadamard matrix

φ(i, j)
H1 = {hi, j } = γ p , i, j = 0, 1, 2, . . . , m − 1. (10.71)

According to this definition, we have

m−1 
φ(i1 , j)−φ(i2 , j) m, if i1 = i2 ,
γp = (10.72)
0, if i1  i2 .
j=0

m2 −1
The matrix H1(2) = H1 ⊗ H1 = h(2)
i, j can be defined as
i, j=0

φ(i1 , j1 )+φ(i0 , j0 )
h(2) (2)
i, j = hmi1 +i0 ,m j1 + j0 = hi1 , j1 hi0 , j0 = γ p , (10.73)

where i, j = 0, 1, . . . , m2 − 1, i0 , i1 , j0 , j1 = 0, 1, . . . , m − 1.
Now, consider the 3D matrix A = [H(p, m)]3 with elements Ai1 ,i2 ,i3 (i1 , i2 , i3 =
0, 1, . . . , m − 1),

φ(i1 ,i2 )+φ(i1 ,i3 )


A = {Ai1 ,i2 ,i3 } = h2i1 (m+1),mi12 +i3 = γ p . (10.74)

In other words, any section of a matrix A = {Ai1 ,i2 ,i3 } of the orientation i1 is the
i1 (m + 1)’th row of the matrix H1(2) .
Prove that A is the 3D generalized matrix A = [H(p, m)]3 . For this, we can check
the matrix system in Eq. (10.67), which can be represented as

0,2
(A1t A2t ) = m2 E(0, 1), t = 1, 2, 3, (10.75)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


338 Chapter 10

where E(0, 1) is an identity matrix of order m, and


⎛ ⎞
⎜⎜⎜i i2 i3 ⎟⎟⎟⎟⎟
⎜⎜⎜⎜ 1 ⎟
A11 = A, A21 = ∗ i3
(A )
⎝ i1 i2 ⎟⎠ ,
⎛ ⎞ ⎛ ⎞
⎜⎜⎜i i2 ⎟⎟⎟⎟⎟ ⎜⎜⎜i i3 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 ⎟ ⎜⎜⎜ 2 ⎟
⎜⎝
A12 = A i2 i1 ⎟⎠ , A21 = ∗ i3
(A )
⎜⎝
i2 ⎟⎠ , (10.76)
⎛ ⎞
⎜⎜⎜i i2 i3 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 ⎟
⎜⎝
A13 = A i2 i3 i1 ⎟⎠ , A23 = A.

Now we will check the system in Eq. (10.75) for defining the matrix A by
Eq. (10.74).
(1) ⎛ ⎛
⎜⎜⎜i i i ⎟⎟⎟ ⎟
⎞⎞
⎜⎜⎜ ⎜⎜⎜ 1 2 3 ⎟⎟⎟ ⎟
⎜ ⎟⎟
⎜⎜⎜(AA∗ )⎝i3 i1 i2 ⎠ ⎟⎟⎟⎟⎟ = m2 E(0, 1),
m−1 m−1
0,2 ⎜

⎜⎜⎜ ⎟⎟⎟ i.e., Ai1 ,i2 ,i3 A j1 ,i2 ,i3 = m2 δi1 , j1 ,
⎝ ⎠ i2 =0 i3 =0

(10.77)
or according to Eqs. (10.72) and (10.74),
m−1 m−1
φ(i1 ,i2 )+φ(i1 ,i3 )−φ( j1 ,i2 )−φ( j1 ,i3 )
γp = m2 δi1 , j1 . (10.78)
i2 =0 i3 =0

(2) ⎛ ⎛⎜ ⎞ ⎛ ⎞⎞
⎜⎜⎜ ⎜⎜⎜⎜⎜i1 i2 ⎟⎟⎟⎟⎟⎟ ⎜⎜⎜i i ⎟⎟⎟ ⎟
⎜⎜⎜ 2 3 ⎟⎟⎟ ⎟
0,2 ⎜
⎜⎜ ⎜⎝i i ⎟⎠ ∗ ⎜⎝i i ⎟⎠ ⎟⎟⎟⎟ m−1 m−1
⎜⎜⎜A 2 1 (A ) 3 2 ⎟⎟⎟ = m2 E(0, 1), i.e. Ai1 ,i2 ,i3 Ai1 , j2 ,i3 = m2 δi2 , j2 ,
⎜⎜⎝ ⎟⎟⎠
i1 =0 i3 =0

(10.79)
or, according to Eqs. (10.72) and (10.74),
m−1 m−1
φ(i1 ,i2 )+φ(i1 ,i3 )−φ(i1 , j2 )−φ( j1 ,i2 )
γp = m2 δi2 , j2 . (10.80)
i1 =0 i3 =0

(3) ⎛ ⎛⎜ ⎞ ⎞
⎜⎜⎜ ⎜⎜⎜⎜⎜i1 i2 i3 ⎟⎟⎟⎟⎟⎟ ⎟⎟⎟
0,2 ⎜
⎜⎜ ⎜⎝i i i ⎟⎠ ∗ ⎟⎟⎟ m−1 m−1
⎜⎜⎜A 2 3 1 A ⎟⎟⎟ , i.e., Ai1 ,i2 ,i3 Ai1 , j2 , j3 = m2 δi3 , j3 , (10.81)
⎜⎜⎝ ⎟⎟⎠
i1 =0 i2 =0

or according to Eqs. (10.72) and (10.74),


m−1 m−1
φ(i1 ,i2 )+φ(i1 ,i3 )−φ(i1 ,i2 )−φ(i1 , j3 )
γp = m2 δi3 , j3 . (10.82)
i1 =0 i2 =0

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 339

Hence, the matrix A defined by Eq. (10.74) is the 3D generalized Hadamard matrix
[H(p, m)]3 .
The generalized matrices contained in the 3D generalized Hadamard matrix
[H(3, 3)]3 of order 3 are given below.
• Generalized Hadamard matrices parallel to the flat (X, Y) are
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜⎜ x1 x2 1⎟⎟⎟ ⎜⎜⎜ x1 1 x2 ⎟⎟⎟
⎜⎜⎜⎜1 x1 x2 ⎟⎟⎟⎟ , ⎜⎜⎜⎜ x2 x1 1⎟⎟⎟⎟ , ⎜⎜⎜⎜1 1 1 ⎟⎟⎟⎟ . (10.83)
⎝ ⎠ ⎝ ⎠ ⎝ ⎠
1 x 2 x1 1 1 1 x2 1 x 1

• Generalized Hadamard matrices parallel to the flat (X, Z) are


⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜⎜1 x1 x2 ⎟⎟⎟ ⎜⎜⎜1 x2 x1 ⎟⎟⎟
⎜⎜⎜ x x 1 ⎟⎟⎟ , ⎜⎜⎜ x x 1 ⎟⎟⎟ , ⎜⎜⎜1 1 1 ⎟⎟⎟ . (10.84)
⎜⎝ 1 2 ⎠⎟ ⎜⎝ 2 1 ⎠⎟ ⎜⎝ ⎠⎟
x1 1 x2 1 1 1 x2 1 x1

• Generalized Hadamard matrices parallel to the flat (Y, Z) are


⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜⎜ x2 x1 1 ⎟⎟⎟ ⎜⎜⎜ x1 x2 1 ⎟⎟⎟
⎜⎜⎜⎜ x2 x1 1 ⎟⎟⎟⎟ , ⎜⎜⎜⎜1 x1 x2 ⎟⎟⎟⎟ , ⎜⎜⎜⎜1 1 1 ⎟⎟⎟⎟ . (10.85)
⎝ ⎠ ⎝ ⎠ ⎝ ⎠
x2 1 x1 1 1 1 x1 1 x2

References
1. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes of
Mathematics, 1168, Springer-Verlag, Berlin (1985).
2. H. Harmuth, Sequency Theory, Foundations and Applications, Academic
Press, New York (1977).
3. P. J. Shlichta, “Three- and four-dimensional Hadamard matrices,” Bull. Am.
Phys. Soc., Ser. 11 16, 825–826 (1971).
4. S. S. Agaian, “On three-dimensional Hadamard matrix of Williamson type,”
(Russian–Armenian summary) Akad. Nauk Armenia, SSR Dokl. 72, 131–134
(1981).
5. P. J. Slichta, “Higher dimensional Hadamard matrices,” IEEE Trans. Inf.
Theory IT-25 (5), 566–572 (1979).
6. S.S. Agaian, “A new method for constructing Hadamard matrices and the
solution of the Shlichta problem,” in Proc. of 6th Hungarian Coll. Comb.,
Budapesht, Hungary, 6–11, pp. 2–3 (1981).
7. A. M. Trachtman and B. A. Trachtman, (in Russian) Foundation of the Theory
of Discrete Signals on Finite Intervals, Nauka, Moscow (1975).
8. S. S. Agaian, “Two and high dimensional block Hadamard matrices,” (In
Russian) Math. Prob. Comput. Sci. 12, 5–50 (1984) Yerevan, Armenia.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


340 Chapter 10

9. K. Ma, “Equivalence classes of n-dimensional proper Hadamard matrices,”


Austral. J. Comb. 25, 3–17 (2002).
10. Y. X. Yang, “The proofs and some conjectures on higher dimensional
Hadamard matrices,” Kexue Tongbao 31, 1662–1667 (1986) (English trans.).
11. S. S. Agaian and H. Sarukhanyan, “Three dimensional Hadamard matrices,”
in Proc. of CSIT-2003, 271–274 NAS RA, Yerevan, Armenia (2003).
12. W. de Launey, “A note on n-dimensional Hadamard matrices of order 2t and
Reed–Muller codes,” IEEE Trans. Inf. Theory 37 (3), 664–667 (1991).
13. X.-B. Liang, “Orthogonal designs with maximal rates,” IEEE Trans. Inf.
Theory 49 (10), 2468–2503 (2003).
14. K. J. Horadam, Hadamard Matrices and Their Applications, Princeton
University Press, Princeton (2006).
15. Q. K. Trinh, P. Fan, and E. M. Gabidulin, “Multilevel Hadamard matrices and
zero correlation zone sequences,” Electron. Lett. 42 (13), 748–750 (2006).
16. H. M. Gastineau-Hills and J. Hammer, “Kronecker products of systems
of higher-dimensional orthogonal designs,” in Combinatorial Mathematics
X, Adelaide, 1982, Lecture Notes in Math., 1036 206–216 Springer, Berlin
(1983).
17. X. Yang and Y. X. Yang, Theory and Applications of Higher-Dimensional
Hadamard Matrices, Kluwer, Dordrecht (2001).
18. V. Testoni and M.H.M. Costa, “Fast embedded 3D-Hadamard color video
codec,” presented at XXV Simpósio Brasileiro de Telecomunicações—
SBrT’2007, Recife, PE, Brazil (Sept. 2007).
19. W. de Launey and R. M. Stafford, “Automorphisms of higher-dimensional
Hadamard matrices,” J. Combin. Des. 16 (6), 507–544 (2008).
20. W. de Launey, “(0, G)-designs and applications,” Ph.D. thesis, University of
Sydney, (1987).
21. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and
Hadamard matrices,” Congr. Numer. 31, 95–108 (1981).
22. J. Seberry, “Higher dimensional orthogonal designs and Hadamard matrices,”
in Combinatorial Mathematics VII, Lecture Notes in Math., 829 220–223
Springer, New York (1980).
23. Y.X. Yang, “On the classification of 4-dimensional 2 order Hadamard matri-
ces,” preprint (in English) (1986).
24. W. de Launey, “On the construction of n-dimensional designs from 2-
dimensional designs,” Australas. J. Combin. 1, 67–81 (1990).
25. K. Nyberg and M. Hermelin, “Multidimensional Walsh transform and a
characterization of bent functions,” in Proc. of IEEE Information Theory
Workshop, Information Theory for Wireless Networks, July 1–6, pp. 1–4
(2007).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Higher-Dimensional Hadamard Matrices 341

26. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal


Processing, Springer-Verlag, Berlin (1975).
27. N. J. Vilenkin, “On a class of complete orthogonal systems,” (in Russian) Izv.
AN, SSSR 11, 363–400 (1947).
28. S. S. Agaian and A. Matevosian, “Fast Hadamard Transform,” Math. Prob.
Cybern. Comput. Technol. 10, 73–90 (1982).
29. K. Egiazarian, J. Astola and S. Agaian, “Orthogonal transforms based on
generalized Fibonacci recursions,” in Proc. of Workshop on Spectral Transform
and Logic Design for Future Digital Systems, June, Tampere, Finland,
pp. 455–475 (2000).
30. S.S. Agaian and H. Sarukhanyan, “Williamson type M-structure,” in Proc. of
2nd Int. Workshop on Transforms and Filter Banks, Brandenburg Der Havel,
Germany, TICSP Ser. 4, pp. 223–249 (2000).
31. J. Astola, K. Egiazarian, K. Öktem and S. Agaian, “Binary polynomial
transforms for nonlinear signal processing,” in Proc. of IEEE Workshop
on Nonlinear Signal and Image Processing, Sept., Mackinac Island, MI,
pp. 132–141 (1997).
32. X. Yang and X. Y. Yang, Theory and Applications of Higher-Dimensional
Hadamard Matrices, Kluwer Academic Publications, Dordrecht (2001).
33. H. E. Chrestenson, “A class of generalized Walsh functions,” Pacific J. Math.
5 (1), 17–31 (1955).
34. N. P. Sokolov, (in Russian) Introduction in Multidimensional Matrix Theory,
Naukova Dumka, Kiev (1972).
35. S.S. Agaian and K.O. Egiazarian, “Generalized Hadamard matrices,” Math.
Prob. Comput. Sci.12, 51–88 (in Russian), Yerevan, Armenia (1984).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Chapter 11
Extended Hadamard Matrices
11.1 Generalized Hadamard Matrices
The generalized Hadamard matrices were introduced by Butson in 1962.1
Generalized Hadamard matrices arise naturally in the study of error-correcting
codes, orthogonal arrays, and affine designs (see Refs. 2–4). In general, generalized
Hadamard matrices are used in digital signal/image processing in the form of the
fast transform by Walsh, Fourier, and Vilenkin–Chrestenson–Kronecker systems.
The survey of generalized Hadamard matrix construction can be found in Refs. 2
and 5–12.

11.1.1 Introduction and statement of problems

Definition 11.1.1.1:1 A square matrix H(p, N) of order N with elements of the p’th
root of unity is called a generalized Hadamard matrix if

HH ∗ = H ∗ H = NIN ,

where H ∗ is the conjugate-transpose matrix of H.


Remarks: The generalized Hadamard matrices contain the following:
• A Sylvester–Hadamard matrix if p = 2, N = 2n .13
• A real Hadamard matrix if p = 2, N = 4t.5
• A complex Hadamard matrix if p = 4, N = 2t.14
• A Fourier matrix if p = N, N = N.
Note: Vilenkin–Kronecker systems are generalized Hadamard H(p, p) and
H(p, pn) matrices, respectively.8
Example 11.1.1.1: A generalized Hadamard matrix H(3, 6) has the following
form:
⎛ ⎞
⎜⎜⎜ x2 x0 x1 x1 x0 x2 ⎟⎟⎟
⎜⎜⎜⎜ x0 x2 x1 x0 x1 x2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜ x x x x x x ⎟⎟⎟
H(3, 6) = ⎜⎜⎜⎜ 0 0 0 0 0 0 ⎟⎟⎟⎟ , (11.1)
⎜⎜⎜⎜ x2 x0 x2 x0 x1 x1 ⎟⎟⎟⎟
⎜⎜⎜ x0 x2 x2 x1 x0 x1 ⎟⎟⎟
⎝ ⎠
x2 x2 x0 x1 x1 x0

343

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


344 Chapter 11

√ √ √
where x0 = 1, x1 = −(1/2) + j( 3/2), x2 = −(1/2) − j( 3/2), j = −1. A
generalized Hadamard matrix H(p, N) with the first row and first column of the
form (11 . . . .1) is called a normalized matrix.
For example, from H(3, 6), one can generate a normalized matrix by two stages.
At first, multiplying the columns with numbers 1, 3, 4, and 6 of the matrix H(3, 6)
by x1 , x2 , x2 , and x1 , respectively, we obtain the matrix
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ x1 x2 1 x2 x1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜⎜ x 1 x2 x2 1

x1 ⎟⎟⎟⎟
H 1 (3, 6) = ⎜⎜⎜⎜ 1 ⎟⎟ . (11.2)
⎜⎜⎜ 1 1 x1 x2 x1 x2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ x1 x2 x1 1 1 x2 ⎟⎟⎟⎟
⎝ ⎠
1 x2 x2 1 x1 x1

Then, multiplying the rows with numbers 2, 3, and 5 of the matrix H 1 (3, 6) by
x2 , we obtain the normalized matrix corresponding to the generalized Hadamard
matrix H(3, 6) of the following form:
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 x1 x2 x1 1 x2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜1 x2 x1 x1 x2 1 ⎟⎟⎟⎟
Hn (3, 6) = ⎜⎜⎜⎜ ⎟⎟ . (11.3)
⎜⎜⎜1 1 x1 x2 x1 x2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x1 1 x2 x2 x1 ⎟⎟⎟⎟
⎝ ⎠
1 x2 x2 1 x1 x1

Note that generalized Hadamard matrices also can be defined as the matrix with
one of the elements being the p’th root of unity. For example, the matrix Hn (3, 6)
can be represented as follows:
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 x x2 x 1 x2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜1 x2 x x x2 1 ⎟⎟⎟⎟
Hn (3, 6) = ⎜⎜⎜⎜ 2⎟
⎟, (11.4)
⎜⎜⎜1 1 x x2 x x ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x 1 x2 x2 x ⎟⎟⎟⎟
⎝ ⎠
1 x2 x2 1 x x

where x = −(1/2) + j( 3/2).
In Refs. 1 and 13 it was proven that for any prime p, nonnegative integer m,
and any natural number k (m ≤ k), there exists an H(p2m , pk ) matrix. If an
H(2, N) matrix exists, then for any nonzero natural number p, an H(2p, N) matrix
exists.
The Kronecker product of two generalized Hadamard matrices is also a
generalized Hadamard matrix. For example,

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 345

⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜⎜1 1 1 ⎟⎟⎟
⎜ ⎟ ⎜ ⎟
H(3, 3) ⊗ H(3, 3) = ⎜⎜⎜⎜⎜1 x x2 ⎟⎟⎟⎟⎟ ⊗ ⎜⎜⎜⎜⎜1 x x2 ⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
1 x2 x 1 x2 x
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x x2 1 x x2 1 x x2 ⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x2 x 1 x2 x 1 x2 x ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 1 x x x x2 x2 x2 ⎟⎟⎟⎟⎟
⎜ ⎟⎟
= ⎜⎜⎜⎜⎜1 x x2 x x2 1 x2 1 x ⎟⎟⎟⎟ . (11.5)
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x2 x x 1 x2 x2 x 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 x2 x2 x2 x x x ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 x x2 x2 1 x x x2 1 ⎟⎟⎟⎟
⎜⎝ ⎟⎠
1 x2 x x2 x 1 x 1 x2

• If p is prime, then the generalized Hadamard matrix H(p, N) can exist only for
N = pt, where t is a natural number.
• If p = 2, then the generalized Hadamard matrix H(p, 2p) can exist,
• If pn is a prime power, then a generalized Hadamard matrix H(pn , N) can exist
only for N = pt, where t is a positive integer.

Problems for exploration: The inverse problem, i.e., the problem of construction
or proof of the existence of the generalized Hadamard matrix H(p, pt) for any
prime p, remains open.
More complete information about construction methods and applications of
generalized Hadamard matrices can be obtained from Refs. 2,11, and 15–30.

Definition 11.1.1.2:12 A square matrix H of order N with elements of H jk is called


a complex Hadamard matrix if
• |H jk | = 1 has unimodularity.
• HH ∗ = H ∗ H = NI N has orthogonality, where H ∗ is the conjugate-transpose
matrix of H.

Definition 11.1.1.3: A square matrix H(p, N) of order N with elements xk e jαk is


called a parametric generalized Hadamard matrix if

HH ∗ = H ∗ H = NIN ,

where xk is a p’th root of unity, αk is a parameter, and H ∗ is the conjugate-transpose


matrix of H.
Problems for exploration: Investigate the construction of the parametric genera-
lized Hadamard matrices. The parametric generalized Hadamard matrices may
play a key role in the theory of quantum information and encryption systems.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


346 Chapter 11

Example: N = 4,
⎛ ⎞
⎜⎜⎜1 1 1 1⎟
⎜⎜⎜1 je jα −1 − je jα ⎟⎟⎟⎟⎟ √
H4 = ⎜⎜⎜⎜ ⎟⎟ , where α ∈ [0, π), j = −1.
−1⎟⎟⎟⎠⎟
(11.6)
⎜⎜⎝1 −1 1
1 − je jα −1 je jα

Any complex Hadamard matrix is equivalent to a dephased Hadamard matrix, in


which all elements in the first row and first column are equal to unity. For N = 2, 3,
and 5, all complex Hadamard matrices are equivalent to the Fourier matrix F N .
For N = 4, there is a continuous, one-parameter family of inequivalent complex
Hadamard matrices.

11.1.2 Some necessary conditions for the existence of generalized Hadamard


matrices
First, we give some useful properties of generalized Hadamard matrices from
Ref. 1.
Properties:
• The condition HH ∗ = NI N is equivalent to condition H ∗ H = NI N , i.e., the
condition of orthogonality of distinct rows of matrix H implies the orthogonality
of distinct columns of matrix H.
• The permutation of rows (columns) of matrix H(p, N) and multiplication of
rows (columns) by the fixed root of unity does not change a condition of the
matrix to a generalized Hadamard matrix. Note that if H1 = H(p1 , N) is the
generalized Hadamard matrix and r p2 is the primitive p2 ’th root of unity, then
H2 = r p2 H1 = H(p3 , N), where p3 = l.c.m.(p1 , p2 ) (l.c.m. stands for the least
common multiple).
• If H = (hi, j )i,Nj=1 is the normalized generalized Hadamard matrix, then
N N
hi, j = h∗i, j = 0, j = 2, 3, . . . , N,
i=1 i=1
N N (11.7)
hi, j = h∗i, j = 0, i = 2, 3, . . . , N.
j=1 j=1

• Let H1 = H(p1 , N1 ) and H2 = H(p2 , N2 ) be generalized Hadamard matrices;


then, the matrix
H(p3 , N3 ) = H(p1 , N1 ) ⊗ H(p2 , N2 ) (11.8)

is the generalized Hadamard matrix of order N3 = N1 N2 , where p3 = l.c.m.


(p1 , p2 ).
Now, we want to construct a generalized Hadamard matrix H(p, N) for any
nonprime number p. Note that if H(p, N) is the normalized generalized Hadamard
matrix and p is a prime number, then N = pt, but similar to the classical Hadamard
matrix H(2, N), the following conditions are not necessary:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 347

 
N = p2 t, N = 2pt,
or (11.9)
N=p N = p.

The normalized generalized


√ Hadamard matrices H(6, 10) and H(6, 14) are given as
follows [z = 1/2 + j( 3/2)]:
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜⎜1 z4 z z5 z3 z z3 z3 z5

z ⎟⎟⎟⎟
⎜⎜⎜
3⎟
⎜⎜⎜1
⎜⎜⎜ z z2 z3 z5 z5 z z3 z5 z ⎟⎟⎟⎟⎟

⎜⎜⎜1 z5 z3 z2 z z5 z3 z5 z3 z ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜1 z3 z5 z4 z5 z3 z3 ⎟⎟⎟⎟
H(6, 10) = ⎜⎜⎜⎜⎜
z z z
⎟, (11.10a)
⎜⎜⎜1 z3 z3 z3 z3 z3 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 z z z5 z5 z4 z3 1 z2 z4 ⎟⎟⎟⎟
⎜⎜⎜1 ⎟
⎜⎜⎜ z z5 z3 z3 z2 z4 z3 z2 1 ⎟⎟⎟⎟⎟
⎜⎜⎜1 ⎟
⎜⎝ z5 z3 z5 z5 z2 1 z2 z3 z4 ⎟⎟⎟⎟
3⎠
1 z3 z5 z z z4 z4 z2 1 z
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ 3⎟
⎜⎜⎜1 1 z
5
z5 z3 z z z4 z2 z3 z z3 z5 z ⎟⎟⎟⎟
⎜⎜⎜1 z5 z4 ⎟
⎜⎜⎜ z3 z z3 z 1 z5 z4 z3 z3 z z ⎟⎟⎟⎟⎟
⎜⎜⎜1 z5 z3 ⎟
⎜⎜⎜ z2 z5 z z3 z4 z z z4 z5 z z3 ⎟⎟⎟⎟

⎜⎜⎜1 z3 z
⎜⎜⎜ z5 z2 z3 z5 z4 z3 z z5 z4 z z ⎟⎟⎟⎟⎟

⎜⎜⎜⎜1 z z3 z z3 z4 z5 1 z z z3 z3 z4 z5 ⎟⎟⎟⎟
⎜⎜⎜ 2⎟ ⎟
z3 z5 z5 z4 z3 z5 z3 z3 z ⎟⎟⎟⎟
H(6, 14) = ⎜⎜⎜⎜⎜ 2 4
1 z z 1 z
5⎟ . (11.10b)
⎜⎜⎜1 z z z4 z4 z4 z2 z3 z5 z z z z z ⎟⎟⎟⎟
⎜⎜⎜ 4 5 2⎟ ⎟⎟
⎜⎜⎜⎜1 z3 z z z5 z3 z3 z z3 1 1 z2 z4 z ⎟⎟⎟

⎜⎜⎜1 z 1
⎜⎜⎜ 5 z z z3 z z3 1 z3 z4 1 z4 z4 ⎟⎟⎟⎟⎟

⎜⎜⎜1 z z z4 z3 z z3 z z4 z2 z3 1 z4 1 ⎟⎟⎟⎟
⎜⎜⎜ 3 ⎟
⎜⎜⎜⎜1 z z z3 z4 z z5 z 1 z4 1 z3 z2 z4 ⎟⎟⎟⎟⎟
⎜⎜⎜1 z z3 ⎟
⎜⎝ z z 1 z3 z3 z4 z4 1 z4 z3 1 ⎟⎟⎟⎟
3⎠
1 z3 z3 z5 z z5 z4 z z2 z4 z2 1 1 z

11.1.3 Construction of generalized Hadamard matrices of new orders


Now we consider the recurrent algorithm of construction of generalized Hadamard
matrix H(pn , pn ), where p is a prime, and n is a natural number.
Definition 11.1.3.1:11 We will call square matrices X and Y of order k with
elements zero and the p’th root of unity generalized two-element hyperframes and
denote this by Γ(p, k) = {X(p, k), Y(p, k)}, if the following conditions are satisfied:
X ∗ Y = 0,
X ± Y is the matrix with elements of p’th root of unity,
(11.11)
XY H = Y X H ,
XX H + YY H = kIk ,

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


348 Chapter 11

where * is a sign of Hadamard (pointwise) product, and H denotes the Hermitian


transpose.
Lemma 11.1.3.1: If there is a generalized Hadamard matrix H(p, 2m), then there
is also a generalized two-element hyperframe Γ(2p, 2m).
Proof: Represent the matrix H(p, 2m) by
 
A B
H(p, 2m) = D C (11.12)

and denote   

A 0 0 B
X= 0 C , Y = −D 0 . (11.13)

Prove that {X, Y} is a generalized hyperframe. The first two conditions of


Eq. (11.11) are evident. Prove the next two conditions. Because H(p, 2m) is the
generalized Hadamard matrix, the following conditions hold:
AAH + BBH = DDH + CC H = 2mIm ,
(11.14)
ADH + BC H = 0.
Now it is not difficult to check the accuracy of the next two conditions. This
completes the proof.
Lemma 11.1.3.2: If there is a generalized Hadamard matrix H(p, n), then there
is a generalized two-element hyperframe Γ(p, n).
n−1
Proof: Let H(p, n) = hi, j . Consider following matrices:
i, j=0
⎛ ⎞ ⎛ ⎞
⎜⎜⎜a0 h0 ⎟⎟⎟ ⎜⎜⎜b0 h0 ⎟⎟⎟
⎜⎜⎜a h ⎟⎟⎟ ⎜⎜⎜b h ⎟⎟⎟
⎜⎜⎜ 1 1 ⎟⎟⎟ ⎜⎜⎜ 1 1 ⎟⎟⎟
X = ⎜⎜⎜ .. ⎟⎟ , Y = ⎜⎜⎜ ..
⎟ ⎟⎟⎟ , (11.15)
⎜⎜⎜ . ⎟⎟⎟ ⎜⎜⎜ . ⎟⎟⎟
⎝ ⎠ ⎝ ⎠
an−1 hn−1 bn−1 hn−1
where hi is the i’th row of matrix H(p, n), and numbers ai , bi satisfy the conditions

ai bi = 0, ai + bi = 1, ai , bi ∈ {0, 1}, i = 0, 1, . . . , n − 1. (11.16)

Now it is not difficult to prove that X and Y satisfy the conditions in Eq. (11.11);
i.e., Γ(p, n) = {X, Y} is the generalized hyperframe.
Now, let H and G be generalized Hadamard matrices H(p, m) of the following
form:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ h0 ⎟⎟⎟ ⎜⎜⎜ h1 ⎟⎟⎟
⎜⎜⎜ h ⎟⎟⎟ ⎜⎜⎜ −h ⎟⎟⎟
⎜⎜⎜ 1 ⎟⎟⎟ ⎜⎜⎜ 0 ⎟
⎜⎜⎜ h2 ⎟⎟⎟ ⎜⎜⎜ h3 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
H = ⎜⎜⎜⎜ h3 ⎟⎟⎟⎟ , G = ⎜⎜⎜⎜ −h2 ⎟⎟⎟⎟ . (11.17)
⎜⎜⎜ .. ⎟⎟⎟ ⎜⎜⎜ .. ⎟⎟⎟
⎜⎜⎜ . ⎟⎟⎟ ⎜⎜⎜ . ⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎝hm−2 ⎟⎟⎟⎟⎠ ⎜⎜⎝ hm−1 ⎟⎟⎟⎟⎠
hm−1 −hm−2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 349

It is evident that

HG H + GH H = 0. (11.18)

Theorem 11.1.3.1: Let H0 = H(p1 , m) and G0 = H(p1 , m) be generalized


Hadamard matrices satisfying the condition Eq. (11.18) and let Γ(p2 , k) = {X, Y}
be a generalized hyperframe. Then, the matrices

Hn = X ⊗ Hn−1 + Y ⊗ Gn−1 ,
(11.19)
Gn = X ⊗ Gn−1 − Y ⊗ Hn−1 , n≥1
are
• generalized Hadamard matrices H(2p, mkn ), where p = l.c.m.(p1 , p2 ), if
l.c.m.(p1 , 2) = 1 and l.c.m.(p2 , 2) = 1;
• generalized Hadamard matrices H(p, mkn ), where p = l.c.m.(p1 , p2 ), if p1
and/or p2 , are even.
Proof: First, let us prove that Hn HnH = mkn Imkn . Using the properties of the
Kronecker product, from Eq. (11.19), we obtain

Hn HnH = (X ⊗ Hn−1 + Y ⊗ Gn−1 )(X H ⊗ Hn−1


H
+ Y H ⊗ Gn−1
H
)
= XX ⊗ Hn−1 Hn−1 + XY ⊗ Hn−1Gn−1 + Y X
H H H H H

⊗ Gn−1 Hn−1
H
+ YY H ⊗ Gn−1Gn−1
H

= (XX + YY ) ⊗ mk Imkn−1 + XY H ⊗ (Hn−1Gn−1


H H n−1 H
+ Gn−1 Hn−1
H
)
= kIk ⊗ mk Imkn−1 = mk Imkn .
n−1 n
(11.20)

Similarly, we can show that GnGnH = mkn Imkn . Now prove that HnGnH + Gn HnH = 0.
Indeed,

HnGnH + Gn HnH = XX H ⊗ (Hn−1Gn−1


H
+ Gn−1 Hn−1
H
) − XY H
⊗ (Hn−1 Hn−1
H
− Gn−1Gn−1
H
)
+ Y X ⊗ (Gn−1Gn−1 − Hn−1 Hn−1
H H H
) − YY H
⊗ (Hn−1Gn−1
H
+ Gn−1 Hn−1
H
)
= (XX − YY ) ⊗ (Hn−1Gn−1
H H H
+ Gn−1 Hn−1
H
)
− (XY + Y X ) ⊗ (Hn−1 Hn−1 − Gn−1Gn−1 ) = 0. (11.21)
H H H H

From Theorem 11.1.3.1, the following is evident:


Corollary 11.1.3.1: Let pi , i = 1, 2, . . . , k be prime numbers; then, there is a
generalized Hadamard matrix H(2pr11 pr22 · · · prkk , mk1t1 k2t2 · · · kktk ), where ri , ti are
natural numbers.
Theorem 11.1.3.2: Let Hn defined as in Theorem 11.1.3.1 satisfy the condition of
Eq. (11.19). Then, Hn can be factorized by sparse matrices as

Hn = M1 M2 · · · Mn+1 , (11.22)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


350 Chapter 11

where

Mn+1 = Ikn ⊗ H0 ,
Mi = Imi−1 ⊗ (X ⊗ Imki−1 + Y ⊗ Pmki−1 ) ,
⎛ ⎞ (11.23)
⎜⎜⎜ 0 1⎟⎟⎟
Pmki−1 = I(mki−1 )/2 ⊗ ⎜⎜⎝ ⎜ ⎟⎟⎟ , i = 1, 2, . . . , n.

−1 0
It is easy to show that H(p, pn ) exists where p is a prime number. Indeed, these
matrices can be constructed using the Kronecker product. Let us give an example.
Let p = 3; then, we have
⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟
⎜⎜ ⎟⎟
H(3, 3) = ⎜⎜⎜⎜1 a a2 ⎟⎟⎟⎟ ,
⎝⎜ ⎟⎠
1 a2 a
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 a a2 1 a a2 1 a a2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 a2 a 1
⎜⎜⎜ a2 a 1 a2 a ⎟⎟⎟⎟⎟

⎜⎜⎜1 1 1 a
⎜⎜ a a a2 a2 a2 ⎟⎟⎟⎟⎟
⎟⎟
H(3, 9) = ⎜⎜⎜⎜⎜1 a a2 a a2 1 a2 1 a ⎟⎟⎟⎟ . (11.24)
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 a2 a a 1 a2 a2 a 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 a2 a2 a2 a a a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 a a2 a2
⎜⎝ 1 a a a2 1 ⎟⎟⎟⎟⎟

1 a2 a a2 a 1 a 1 a2

11.1.4 Generalized Yang matrices and construction of generalized


Hadamard matrices
In this section, we introduce the concept of generalized Yang matrices and consider
the generalized case of the Yang theorem. First, we give a definition.
Definition 11.1.4.1:11 Square matrices A(p, n) and B(p, n) of order n with elements
of the p’th roots of unity are called generalized Yang matrices if

ABH = BAH ,
(11.25)
AAH + BBH = 2nIn .
Note that for p = 2, the generalized Yang matrix coincides with classical Yang
matrices.11 Now, let us construct generalized Yang matrices. We will search A and
B as cyclic matrices, i.e.,

A = a0 U 0 + a1 U 1 + a2 U 2 + · · · + an−1 U n−1 ,
(11.26)
B = b0 U 0 + b1 U 1 + b2 U 2 + · · · + bn−1 U n−1 .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 351

We see that

AH = a∗0 U n + a∗n−1 U 1 + a∗n−2 U 2 + · · · + a∗1 U n−1 ,


(11.27)
BH = b∗0 U n + b∗n−1 U 1 + b∗n−2 U 2 + · · · + b∗1 U n−1 .

It can be shown that Eq. (11.25) is equivalent to

n−1
(ai a∗i + bi b∗i ) = 2n,
i=0
(11.28)
n−1 * + %n&
ai a∗(i−t)mod n + bi b∗(i−t)mod n = 0, t = 0, 1, . . . , .
i=0
2

Therefore, the condition of Eq. (11.25) or (11.28) is a necessary and sufficient


condition for the existence of generalized Yang matrices.
Examples of cyclic generalized Yang matrices (only the first rows of the matrices
are presented) are as follows:

• A(3, 3) : (a, 1, a), B(3, 3) : (a2 , a2 , 1), where a = − 12 + j 23 ; √
• A(3, 6) : (1, a2 , a2 , a, a2 , a2 ), B(3, 6) : (1, a2 , a, a, a, a2 ),√ where a = − 12 + j 23 ;
• A(4, 4) : (1, j, j, −1), B(4, 4) : (−1, j, j, 1), where j = −1; √
• A(6, 5) : (1, a, a2 , a2 , a), B(6, 5) : (1, −a, −a2 , −a2 , −a), where a = − 12 + j 23 ;

Theorem 11.1.4.1: Let A0 = A0 (p1 , n) and B0 = B0 (p1 , n) be generalized Yang


matrices , and let Γ(p2 , k) = {X(p2 , k), Y(p2 , k)} be a generalized hyperframe. Then,
the following matrices:

Ai+1 = X ⊗ Ai + Y ⊗ Bi ,
(11.29)
Bi+1 = X ⊗ Bi − Y ⊗ Ai , i = 0, 1, . . .

are generalized Yang matrices A(2p, nki+1 ) and B(2p, nki+1 ), where p = l.c.m.(p1 ,
p2 ), l.c.m.(p1 , 2) = 1, and l.c.m.(p2 , 2) = 1, i ≥ 1. We find generalized Yang
matrices A(2p, nki+1 ) and B(2p, nki+1 ), where p = l.c.m.(p1 , p2 ), and p1 and/or
p2 are even numbers.
Corollary 11.1.4.1: The following matrix is the generalized Hadamard matrix :
 
Ai Bi
H(2p, n2 ) =
i+1
. (11.30)
−Bi Ai

11.2 Chrestenson Transform


11.2.1 Rademacher functions
The significance of the Rademacher function system (see Fig. 11.1) among
various discontinuous functions is that it is a subsystem of the Walsh function

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


352 Chapter 11

Figure 11.1 Dr. Hans Rademacher (from


www.apprendre-math.info/hindi/historyDetail.htm).

system, and the latter plays an important role in Walsh–Fourier analysis. The
Rademacher functions {rn (x)} form an incomplete set of orthogonal, normalized,
periodic square wave functions with their period equal to 1. Using the Rademacher
system functions, one may generate the Walsh–Hadamard, Walsh–Paley, and
Harmuth function systems, as well the Walsh–Rademacher function systems. The
Rademacher function is defined in Refs. 2, 5–8, 31, and 32.

Definition: The n’th Rademacher function rn (x) is defined as follows:

r0 (x) = 1, x ∈ [0, 1),


 


⎪ 2i 2i + 1


⎪ 1, if x ∈ , ,

⎨ 2n 2n (11.31)
rn (x) = ⎪
⎪  


⎪ 2i + 1 2i + 2

⎩−1, if x ∈ , ,
2n 2n
where i = 0, 1, 2, . . . , 2n − 1.
The sequence {rn (x)} will be called the system of Rademacher functions, or the
Rademacher function system; or, the Rademacher function rn (x) may be defined as
a family of functions {rn (x)} defined by the unit interval by the formula



⎪ r (x) = 1, if x ∈ [0, 1), 
⎨0

⎪ i−1 i (11.32)

⎩nr (x) = (−1)i+1
, if x ∈ , ,
2n 22

where n = 0, 1, 2, 3, . . . which means



+1, for i even,
rn (x) = (11.33)
−1, for i odd.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 353

Rademacher function systems {rn (x)} may be also defined by the formula
rn (x) = sign sin(2 j πx) , (11.34)

where x is taken over the continuous interval 0 to 1.


Selected properties:
(1) |rn (x)| ≤ 1.
(2) rn (x)| = rn (x + 1).
(3) rn (x) = r1 (2n−1 x). ⎧  



1
⎨+1, if x ∈ 0, 2  ,
where r0 (x) = 1, r1 (x) = ⎪ ⎪

⎩−1, if x ∈ 21 , 1 .
(4) rn+m (x) = rn (2m x) = rm (2n x), m, n = 0, 1, 2, . . ..
(5) The Rademacher functions may be constructed as
 
rn (x) = exp( jπxn+1 ) = (e jπ ) xn+1 = cos(π) + j sin(π) xn+1 = (−1) xn+1 ,

where j = −1, x = n=0 N−1
xn 2 n .
(6) All Rademacher functions except r0 (x) are odd, i.e., rn (−x) = rn (x). Thus, it is
impossible to represent even functions by any combination of the Rademacher
functions, which means that the Rademacher functions do not form a complete
set with respect to the L2 norm.
(7) For the representation of even functions by combination of the Rademacher
functions, we may introduce even Rademacher functions

reven (n, x) = sgn[cos(2n πx)], n = 1, 2, 3...
reven (0, x) = r1 (x).

However, now it is impossible to represent odd functions by any combination


of the even Rademacher functions. It is also easy to check that the discrete
Rademacher functions Rad( j, k) may be generated by sampling the Rademacher
functions at times x = 0, 1/N, 2/N, . . . , (N − 1)/N.

11.2.2 Example of Rademacher matrices


For n = 3, 4, N = 2n , the Rademacher matrices R3,8 and R4,16 are taken to be
⎡ ⎤
⎢⎢⎢+ + + + + + + +⎥⎥⎥ ⇒ Rad(0, k)
⎢⎢⎢+ + + + − − − −⎥⎥⎥⎥ ⇒ Rad(0, k)
R3,8 = ⎢⎢⎢⎢ ⎥
⎢⎢⎣+ + − − + + − −⎥⎥⎥⎥⎦ ⇒ Rad(0, k)
(11.35)
+ − + − + − + − ⇒ Rad(0, k)
⎡ ⎤
⎢⎢⎢+ + + + + + + + + + + + + + + +⎥⎥⎥ ⇒ Rad(0, k)
⎢⎢⎢+ + + + + + + + − − − − − − − −⎥⎥⎥ ⇒ Rad(1, k)
⎢⎢ ⎥⎥
R4,16 = ⎢⎢⎢⎢⎢+ + + + − − − − + + + + − − − −⎥⎥⎥⎥⎥ ⇒ Rad(2, k) ,
⎢⎢⎢+ + − − + + − − + + − − + + − −⎥⎥⎥ ⇒ Rad(3, k)
⎢⎣ ⎥⎦
+ − + − + − + − + − + − + − + − ⇒ Rad(4, k)
where + and − indicate +1 and −1, respectively.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


354 Chapter 11

Properties:
• The Rademacher matrix is a rectangular (n + 1) × 2n matrix with (+1, −1)
elements where the first row has +1 elements,

Rn,N RTn,N = 2n In , (11.36)

where In is the identity matrix of order n.


• The Rademacher matrix uniquely determines the system of Rademacher
functions of the interval [0, 1).
There are several other ways to construct new Rademacher function systems.
For example, take any solution of the following equations:



⎪ α(x + 1) = α(x),




1 (11.37)
⎩α(x) + α x + 2 = 0, where x ∈ [0, 1), α(x) ∈ L (0, 1),
2

then define Rademacher functions Rad(n, x) via the dilation operation α(2nx).

11.2.2.1 Generalized Rademacher functions


The well-known system of Rademacher functions was generalized to a system of
functions whose values are ω f c, k = 0, 1, 2, . . . a − 1, where a is a natural number
and ω is one of the primitive c’th roots of 1, by Levy,31 and the latter was made
to be a complete orthonormal system, i.e., the W ∗ system of generalized Walsh
functions by Chrestenson.32 These systems are known to preserve some essential
properties of the original functions. Similarly, the analogous generalization has
been performed for the Haar system.
Now, we may extend the definition of Rademacher functions from the case p = 2
to an arbitrary p,



Rn (x) = exp j xn+1 , where x = xn pn . (11.38)
p n

Let p be an integer p ≥ 2, and w = exp[ j(2π/p)]. Then, the Rademacher functions


of order p are defined by
 
k k+1
ϕ0 (k) = w , k
x∈ , , k = 0, 1, 2, . . . , p − 1, (11.39)
p p

and for n  0, ϕn (x) = ϕn (x + 1) = ϕn (pn x). These functions form a set of


orthonormal functions. The Walsh functions of order p are defined by

φ0 (x) = 1,
(11.40)
φn (x) = ϕan11 (x)ϕan22 (x) · · · ϕanmm (x),

where n = nm
k=0 ak pnk , 0 < ak < 1, n1 > n2 > · · · > nm .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 355

11.2.2.2 The Rademacher–Walsh transforms


The Walsh functions33–35 are a closed set of two-valued orthogonal functions given
by
"
n−1
Wal( j, k) = (−1)(kn−t +kn−1−t ) jt , (11.41)
t=0

where jt , kt are determined by the binary expansions of j, k, respectively, and


j, k = 0, 1, . . . , 2n − 1, where j = jn−1 2n−1 + jn−1 2n−2 + · · · + j1 21 + j0 20 ,
k = kn−1 2n−1 + kn−2 2n−2 + · · · + k1 21 + k0 20 .
The Walsh transform matrix with corresponding Walsh functions is given below.

k/ j 000 001 010 011 100 101 110 111


000 +1 +1 +1 +1 +1 +1 +1 +1 Wal(0, k)
001 +1 +1 +1 +1 −1 −1 −1 −1 Wal(1, k)
010 +1 +1 −1 −1 −1 −1 +1 +1 Wal(2, k)
011 +1 +1 −1 −1 +1 +1 −1 −1 Wal(3, k) (11.42)
100 +1 −1 −1 +1 +1 −1 −1 +1 Wal(4, k)
101 +1 −1 −1 +1 −1 +1 +1 −1 Wal(5, k)
110 +1 −1 +1 −1 −1 +1 −1 +1 Wal(6, k)
111 +1 −1 +1 −1 +1 −1 +1 −1 Wal(7, k)

Note that the 2n Walsh functions for any n constitute a closed set of orthogonal
functions; the multiplication of any two functions always generates a function
within this set. However, the Rademacher functions are an incomplete set of n + 1
orthogonal functions, which is a subset of Walsh functions, and from which all
2n Walsh functions can be generated by multiplication. The Rademacher functions
may be defined as follows:

Rad( j, k) = sign sin(2 j πk) , (11.43)

where k is taken over the continuous interval 0 to 1.


The Rademacher and the Walsh functions are related by Rad( j, k) = Wal(2 j −
1, k). Taking the Rademacher functions as a basis set, the complete set of Walsh
functions is generated in an alternative order from the original Walsh order, as iden-
tified below (∗ indicates Hadamard product or element by element multiplication).
00 0
00+1 +1 +1 +1 +1 +1 +1 +1000 Rad(0, k)
00+1
00 +1 +1 +1 _1 −1 −1 −1000 Rad(1, k)
00+1 +1 −1 −1 +1 +1 −1 −100 Rad(2, 0)
0
00+1 −1 +1 −1 +1 −1 +1 −100 Rad(3, k)
00 0 (11.44)
00+1 +1 −1 −1 −1 −1 +1 +100 Rad(1, k) ∗ Rad(2, k)
0
00+1 −1 +1 −1 −1 +1 −1 +100 Rad(1, k) ∗ Rad(3, k)
00 0
00+1 −1 −1 +1 +1 −1 −1 +100 Rad(2, k) ∗ Rad(3, k)
0
0+1 −1 −1 +1 −1 +1 +1 −10 Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


356 Chapter 11

We see that
Rad(0, k) = Wal(0, k),
Rad(1, k) = Wal(1, k),
Rad(2, k) = Wal(3, k),
Rad(3, k) = Wal(7, k),
(11.45)
Rad(1, k) ∗ Rad(2, k) = Wal(2, k),
Rad(1, k) ∗ Rad(3, k) = Wal(6, k),
Rad(2, k) ∗ Rad(3, k) = Wal(4, k),
Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k) = Wal(5, k).

It can be shown also that the natural-ordered Walsh–Hadamard matrix order


n, sequency-ordered Walsh matrix order n, dyadic-ordered Paley matrix order n,
and Cal–Sal-ordered Hadamard matrix order n can be represented by discrete
Rademacher function systems. For example, the natural-ordered Walsh–Hadamard
matrix can be represented by discrete Rademacher function systems using the
following rules:
⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟⎟ Rad(0, k)
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜+ − + − + − + −⎟⎟⎟⎟ Rad(3, k)
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ + − − + + − −⎟⎟⎟⎟ Rad(1, k) ∗ Rad(2, k)
⎜⎜⎜ ⎟
+ − − + + − − +⎟⎟⎟⎟⎟ Rad(2, k) ∗ Rad(3, k)
Hh (8) = ⎜⎜⎜⎜⎜ ⎟
⎜⎜⎜+ + + + − − − −⎟⎟⎟⎟⎟ Rad(1, k)
⎜⎜⎜ ⎟ (11.46)
⎜⎜⎜+ − + − − + − +⎟⎟⎟⎟⎟ Rad(1, k) ∗ Rad(3, k)
⎜⎜⎜⎜+ + − − − − + +⎟⎟⎟⎟⎟

⎜⎜⎝ ⎠
Rad(2, k)
+ − − + − + + − Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k).

Natural-ordered Walsh–Hadamard matrix.

The dyadic-ordered Paley matrix can be represented by discrete Rademacher


function systems using the following rules:
⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟⎟ Rad(0, k)
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ + + + − − − −⎟⎟⎟⎟ Rad(1, k)
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ + − − + + − −⎟⎟⎟⎟ Rad(2, 0)
⎜⎜⎜ ⎟
⎜+ + − − − − + +⎟⎟⎟⎟⎟ Rad(1, k) ∗ Rad(2, k)
H p (8) = ⎜⎜⎜⎜⎜ ⎟
⎜⎜⎜+ − + − + − + −⎟⎟⎟⎟⎟ Rad(3, k)
⎜⎜⎜ ⎟ (11.47)
⎜⎜⎜+ − + − − + − +⎟⎟⎟⎟⎟ Rad(1, k) ∗ Rad(3, k)
⎜⎜⎜ ⎟
⎜⎜⎜+ − − + + − − +⎟⎟⎟⎟⎟ Rad(2, k) ∗ Rad(3, k)
⎝ ⎠
+ − − + − + + − Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k).

Dyadic-ordered Paley matrix

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 357

The Cal–Sal-ordered Hadamard matrix can be represented by discrete


Rademacher function systems using the following rules:
⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟⎟ Rad(0, k)
⎜⎜⎜ ⎟
⎜⎜⎜+ + − − − − + +⎟⎟⎟⎟⎟ Rad(1, k) ∗ Rad(2, k)
⎜⎜⎜⎜+ − − + + − − +⎟⎟⎟⎟ Rad(2, k) ∗ Rad(3, k)
⎜⎜⎜ ⎟
⎜+ − + − − + − +⎟⎟⎟⎟⎟ Rad(1, k) ∗ Rad(3, k)
Hcs (8) = ⎜⎜⎜⎜⎜ ⎟
⎜⎜⎜+ − + − + − + −⎟⎟⎟⎟⎟ Rad(3, k)
⎜⎜⎜+ − − + − + + −⎟⎟⎟ Rad(1, k) ∗ Rad(2, k) ∗ Rad(3, k) (11.48)
⎜⎜⎜ ⎟
⎜⎜⎜+ + − − + + − −⎟⎟⎟⎟⎟ Rad(2, k)
⎝⎜ ⎠⎟
+ + + + − − − − Rad(1, k).

Cal–Sal-ordered Hadamard matrix

11.2.2.3 Chrestenson functions and matrices


Chrestenson functions are orthogonal p-valued functions defined over the interval
[0, pn ) by
 $

Ch(p) (k, t) = exp j C(k, t) ,
p
n−1 (11.49)
C(k, t) = km t m ,
m=0

where km , and tm are the p-ary expansions of k and t, respectively, i.e.,

k = kn−1 pn−1 + kn−2 pn−2 + · · · + k0 p0 , t = tn−1 pn−1 + tn−2 pn−2 + · · · + t0 p0 .

For p = 3 and n = 1, Chrestenson matrices of order 3 have the form


⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟
Ch(1) ⎜⎜⎜1 a a2 ⎟⎟⎟⎟ .
3 =⎜ ⎜⎝ ⎟⎠ (11.50)
1 a2 a

For p = 3, n = 2, we obtain the following transform matrix:


 
00 01 02 10 11 12 20 21 22
⎛ ⎞ ⎛⎜1 1 1 1 1 1 1 1 1 ⎞⎟
⎜⎜⎜00⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜01⎟⎟⎟ ⎜⎜⎜1 a a2 1 a a2 1 a a2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜02⎟⎟⎟ ⎜⎜⎜1 a2 a 1 a2 a 1 a2 a ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜10⎟⎟⎟ ⎜⎜⎜1 1 1 a a a a2 a2 a2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟ (11.51)
Ch(2)
3 =⎜ ⎜⎜⎜11⎟⎟⎟⎟ ⎜⎜⎜⎜1 a2 a a a 12 a2 1 a ⎟⎟⎟⎟⎟ .
2 2 2

⎜⎜⎜⎜12⎟⎟⎟⎟ ⎜⎜⎜⎜1 a a a 1 a a a 1 ⎟⎟⎟⎟


⎜⎜⎜20⎟⎟⎟ ⎜⎜⎜1 1 1 a2 a2 a2 a a a ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜21⎟⎟⎟ ⎜⎜⎜1 a a2 a2 1 a a a2 1 ⎟⎟⎟⎟⎟
⎝ ⎠ ⎜⎝ ⎟
22 1 a2 a a2 a 1 a 1 a2 ⎠

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


358 Chapter 11

The important characteristics of this complete orthogonal matrix are as follows:


• Its dimensions are pn × pn .
• It is symmetric, i.e., (Ch(n) (n)
p ) = Ch p .
T
−1
• Its inverse is given by (Ch(n)
p ) = p1n (Ch(n)
p ) , where superscript H indicates the
H

transposed conjugate or Hermitian of Ch(n) p .


• It has the recursive structure for the ternary case, i.e.,
⎛ (n−1) ⎞
⎜⎜⎜Ch Ch(n−1)
Ch (n−1) ⎟
⎟⎟⎟
⎜⎜⎜⎜ 3 3 3

Ch(n) = ⎜⎜⎜Ch(n−1) aCh(n−1) a2Ch(n−1) ⎟⎟⎟⎟⎟ , (11.52)
3 ⎜⎜⎜ 3 3 3 ⎟⎟⎟
⎝ (n−1) 2 (n−1) ⎠
Ch3 a Ch3 aCh(n−1)
3

where Ch(0)
3 = 1.

An alternative definition for the Chrestenson functions yields the same complete
set, but in dissimilar order, as follows:
 $

Ch(p) (k, t) = exp j C(k, t) ,
p
n−1 (11.53)
C(k, t) = km tn−1−m ,
m=0

where km and tm are the p-ary expansions of k and t, respectively.


This definition gives the alternative transform matrix for p = 3, n = 2 as follows:
 
00 10 20 01 11 21 02 12 22
⎛ ⎞ ⎛⎜1 1 1 1 1 1 1 1 1 ⎞⎟
⎜⎜⎜00⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜01⎟⎟⎟ ⎜⎜⎜1 1 1 a a a a2 a2 a2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜02⎟⎟⎟ ⎜⎜⎜1 1 1 a2 a2 a2 a a a ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜10⎟⎟⎟ ⎜⎜⎜1 a a2 1 a a2 1 a a2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟ (11.54)
Ch3 = ⎜⎜11⎟⎟ ⎜⎜1 a a2 a a2 1 a2 1 a ⎟⎟⎟⎟ .
(2)
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜12⎟⎟⎟ ⎜⎜⎜1 a a2 a2 1 a a a2 1 ⎟⎟⎟⎟⎟
⎜⎜⎜⎜20⎟⎟⎟⎟ ⎜⎜⎜⎜1 a2 a 1 a2 a 1 a2 a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎝21⎟⎟⎠ ⎜⎜⎜1 a2 a a 1 a2 a2 a 1 ⎟⎟⎟⎟⎟
22 1 a2 a a2 a 1 a 1 a2 ⎠

Finally, we can also consider a subset of the Chrestenson functions for any
p, n, which constitute the generalization of the Rademacher functions, and from
which the complete set of orthogonal functions for the given p, n can be generated
by element-by-element multiplication. The generalized Rademacher functions are
defined as
 $

Rad(p) (k, t) = exp j C  (k, t) ,
p
n−1 (11.55)
C  (k, t) = km tm ,
m=0

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 359

where km here is a subset of k, whose decimal identification numbers are 0, 1 and all
higher values of k that are divisible by a power of p. The closed set of Chrestenson
functions for p = 3, n = 2 generated from the reduced set can be represented as
follows:
Ch(3) (0, t) = Rad(3) (0, t),
Ch(3) (1, t) = Rad(3) (1, t),
Ch(3) (2, t) = Rad(3) (1, t) ∗ Rad(3) (1, t),
Ch(3) (3, t) = Rad(3) (3, t),
Ch(3) (4, t) = Rad(3) (1, t) ∗ Rad(3) (3, t), (11.56)
Ch(3) (5, t) = Rad(3) (1, t) ∗ Rad(3) (1, t) ∗ Rad(3) (3, t),
Ch(3) (6, t) = Rad(3) (3, t) ∗ Rad(3) (3, t),
Ch(3) (7, t) = Rad(3) (1, t) ∗ Rad(3) (3, t) ∗ Rad(3) (3, t),
Ch(3) (8, t) = Rad(3) (1, t) ∗ Rad(3) (1, t) ∗ Rad(3) (3, t) ∗ Rad(3) (3, t).

11.3 Chrestenson Transform Algorithms


11.3.1 Chrestenson transform of order 3n
Now we will compute the complexity of the Chrestenson transform of order 3n .
First, we calculate the complexity of the C31 transform [see Eq. (11.50)]. Let
X T = (x0 + jy0 , x0 + jy0 , x0 + jy0 ) be a complex-valued
√ vector of length 3,
a = exp[ j(2π/3)] = cos(2π/3) + j sin(2π/3), and j = −1.
A 1D forward Chrestenson transform of order 3 can be performed as follows:
⎛ ⎞⎛ ⎞ ⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜z0 ⎟⎟ ⎜⎜i3 ⎟⎟ ⎜⎜z0 ⎟⎟ ⎜⎜v0 ⎟⎟
⎜ ⎟⎜ ⎟ ⎜ ⎟⎜ ⎟ ⎜ ⎟
C31 X = ⎜⎜⎜⎜1 a a2 ⎟⎟⎟⎟ ⎜⎜⎜⎜⎝z1 ⎟⎟⎟⎟⎠ = ⎜⎜⎜⎜⎝b3 ⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝z1 ⎟⎟⎟⎟⎠ = ⎜⎜⎜⎜⎝v1 ⎟⎟⎟⎟⎠ , (11.57)
⎝ ⎠
1 a2 a z2 b∗3 z2 v2

where
v0 = (x0 + x1 + x2 ) + j(y0 + y1 + y2 ),
2π 2π
v1 = x0 + (x1 + x2 ) cos − (y1 − y2 ) sin
 3 3 
2π 2π
+ j y0 + (x1 − x2 ) sin + (y1 + y2 ) cos ,
3 3 (11.58)
2π 2π
v2 = x0 + (x1 + x2 ) cos + (y1 − y2 ) sin
 3 3 
2π 2π
+ j y0 − (x1 − x2 ) sin + (y1 + y2 ) cos .
3 3

We can see that


C + (i3 X) = 4, C × (i3 X) = 0,
C + (b3 X) = 6, C × (b3 X) = 4, (11.59)
C + (i3 X) = 3, C × (i3 X) = 0.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


360 Chapter 11

Therefore, the complexity of the C31 transform is: C + (C31 ) = 13, C × (C31 ) = 4, where
C + and C × denote the number of real additions and multiplications, respectively.
Now, let Z T = (xi + jyi )i=0
N−1
be a complex-valued vector of length N = 3n (n >
1). We introduce the following notations: Pi denotes a (0, 1) column vector of
length N/3 whose only i’th element is equal to 1 (i = 0, 1, . . . , N/3 − 1) and
Z i = (x3i + jy3i , x3i+1 + jy3i+1 , x3i+2 + jy3i+2 ).
The 1D forward Chrestenson transform of order N can be performed as follows
[see Eq. (11.52)]:
⎛ n−1 ⎞
⎜⎜⎜ (C3 ⊗ i3 )Z ⎟⎟⎟
⎜⎜ ⎟⎟
C3n Z = ⎜⎜⎜⎜(C3n−1 ⊗ b3 )Z ⎟⎟⎟⎟ . (11.60)
⎜⎝ n−1 ⎟⎠
(C3 ⊗ b∗3 )Z

Using the above notations, we have


 
(C3n−1 ⊗ i3 )Z = (C3n−1 ⊗ i3 ) P0 ⊗ Z 0 + P1 ⊗ Z 1 + · · · + PN/3−1 ⊗ Z N/3−1
= C3n−1 P0 ⊗ i3 Z 0 + C3n−1 P1 ⊗ i3 Z 1 + · · · + C3n−1 PN/3−1 ⊗ i3 Z N/3−1
= C3n−1 P0 (z0 + z1 + z2 ) + C3n−1 P1 (z3 + z4 + z5 )
+ · · · + C3n−1 PN/3−1 (zN−3 + zN−2 + zN−1 )
⎛ ⎞
⎜⎜⎜z0 + z1 + z2 ⎟⎟⎟
⎜⎜⎜z + z + z ⎟⎟⎟
⎜ ⎟⎟⎟⎟ .
= C3n−1 ⎜⎜⎜⎜⎜
3 4 5
.. ⎟⎟⎟ (11.61)
⎜⎜⎜⎝ . ⎟⎟⎠
zN−3 + zN−2 + zN−1

Then, we can write


C + (C3n−1 ⊗ i3 ) = C + (C3n−1 ) + 4 · 3n−1 ,
(11.62)
C × (C3n−1 ⊗ i3 ) = C × (C3n−1 ).

Now, let us compute the complexity of the C3n−1 ⊗ b3 transform


 
(C3n−1 ⊗ b3 )Z = (C3n−1 ⊗ b3 ) P0 ⊗ Z 0 + P1 ⊗ Z 1 + · · · + PN/3−1 ⊗ Z N/3−1
= C3n−1 (P0 ⊗ b3 Z 0 + P1 ⊗ b3 Z 1 + · · · + PN/3−1 ⊗ bZ N/3−1 ). (11.63)

From Eq. (11.58), it follows that C + (b3 Z i ) = 6 and C × (b3 Z i ) = 4. Then we obtain

C + (C3n−1 ⊗ b3 ) = C + (C3n−1 ) + 6 · 3n−1 ,


(11.64)
C × (C3n−1 ⊗ b3 ) = C × (C3n−1 ) + 4 · 3n−1 .

Similarly, we obtain
C + (C3n−1 ⊗ b∗3 ) = C + (C3n−1 ) + 3 · 3n−1 ,
(11.65)
C × (C3n−1 ⊗ b∗3 ) = C × (C3n−1 ).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 361

Finally, the complexity of the C3n transform can be calculated as follows:

C + (C3n ) = 3 · C + (C3n−1 ) + 13 · 3n−1 ,


(11.66)
C × (C3n ) = 3 · C × (C3n−1 ) + 4 · 3n−1 ,

or
C + (C3n ) = 13 · 3n−1 n,
(11.67)
C × (C3n−1 ⊗ b∗3 ) = 4 · 3n−1 n, n ≥ 1.

For example, we have C + (C32 ) = 78, C × (C32 ) = 24, C + (C33 ) = 351, C × (C33 ) = 108.

11.3.2 Chrestenson transform of order 5n


Let us introduce the following notations:
 
2π 2π 2π √
i5 = (1, 1, 1, 1, 1), a = exp j = cos + j sin , j= −1,
5 5 5 (11.68)
a1 = (1, a, a2 , a3 , a4 ), a2 = (1, a2 , a4 , a, a3 ).

From the relations in Eq. (11.49), we obtain the Chrestenson transform matrix of
order 5:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1 1 ⎟⎟⎟ ⎜⎜i5 ⎟⎟
⎜⎜⎜1 a a2 a3 a4 ⎟⎟⎟ ⎜⎜⎜a ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ 1 ⎟⎟⎟
C51 = ⎜⎜⎜⎜1 a2 a4 a a3 ⎟⎟⎟⎟ = ⎜⎜⎜⎜⎜a2 ⎟⎟⎟⎟⎟ . (11.69)
⎜⎜⎜ ⎟ ⎜ ∗⎟
⎜⎜⎝1 a3 a a4 a2 ⎟⎟⎟⎟⎠ ⎜⎜⎝⎜a2 ⎟⎟⎠⎟
1 a4 a3 a2 a a∗1

A Chrestenson matrix of order 5n can be generated recursively as follows:


⎛ n−1 ⎞
⎜⎜⎜C5 C5n−1 C5n−1 C5n−1 C5n−1 ⎟⎟⎟
⎜⎜⎜ n−1 ⎟
⎜⎜⎜C5 aC5n−1 a2C5n−1 a3C5n−1 a4C5n−1 ⎟⎟⎟⎟
⎜ ⎟⎟
C5n = ⎜⎜⎜⎜C5n−1 a2C5n−1 a4C5n−1 aC5n−1 a3C5n−1 ⎟⎟⎟⎟ . (11.70)
⎜⎜⎜ n−1 ⎟⎟
⎜⎜⎜C5 a3C5n−1 aC5n−1 a4C5n−1 a2C5n−1 ⎟⎟⎟⎟
⎝ n−1 ⎠
C5 a4C5n−1 a3C5n−1 a2C5n−1 aC5n−1

Now we compute the complexity of this transform. First, we calculate the


complexity of the C51 transform [see Eq. (11.69)]. Let Z T = (z0 , z1 , . . . , z4 ) be a
complex-valued vector of length 5. The 1D forward Chrestenson transform of order
5 can be performed as follows:
⎛ ⎞⎛ ⎞ ⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1 1⎟⎟⎟ ⎜⎜z0 ⎟⎟ ⎜⎜i5 ⎟⎟ ⎜⎜z0 ⎟⎟ ⎜⎜v0 ⎟⎟
⎜⎜⎜1 a a2 a3 a4 ⎟⎟⎟ ⎜⎜⎜z ⎟⎟⎟ ⎜⎜⎜a ⎟⎟⎟ ⎜⎜⎜z ⎟⎟⎟ ⎜⎜⎜v ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ 1 ⎟⎟⎟ ⎜⎜⎜ 1 ⎟⎟⎟ ⎜⎜⎜ 1 ⎟⎟⎟ ⎜⎜⎜ 1 ⎟⎟⎟
C51 Z = ⎜⎜⎜⎜1 a2 a4 a a3 ⎟⎟⎟⎟ ⎜⎜⎜⎜⎜z2 ⎟⎟⎟⎟⎟ = ⎜⎜⎜⎜⎜a2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜z2 ⎟⎟⎟⎟⎟ = ⎜⎜⎜⎜⎜v2 ⎟⎟⎟⎟⎟ . (11.71)
⎜⎜⎜ ⎟ ⎜ ⎟ ⎜ ∗⎟ ⎜ ⎟ ⎜ ⎟
⎜⎜⎝1 a3 a a4 a2 ⎟⎟⎟⎟⎠ ⎜⎜⎝⎜z3 ⎟⎟⎠⎟ ⎜⎜⎝⎜a2 ⎟⎟⎠⎟ ⎜⎜⎝⎜z3 ⎟⎟⎠⎟ ⎜⎜⎝⎜v3 ⎟⎟⎠⎟
1 a4 a3 a2 a z4 a∗1 z4 v4

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


362 Chapter 11

Using the relations a3 = (a2 )∗ , a4 = a∗ , we obtain

v0 = x0 + (x1 + x4 ) + (x2 + x3 ) + j[y0 + (y1 + y4 ) + (y2 + y3 )],


2π 4π 2π
v1 = x0 + (x1 + x4 ) cos + (x2 + x3 ) cos − (y1 − y4 ) sin
5  5 5
4π 2π 4π
− (y2 − y3 ) sin + j y0 + (x1 − x4 ) sin + (x2 − x3 ) sin
5  5 5
2π 4π
+ (y1 + y4 ) cos + (y2 + y3 ) cos ,
5 5
4π 2π 4π
v2 = x0 + (x1 + x4 ) cos + (x2 + x3 ) cos − (y1 − y4 ) sin
5  5 5
2π 4π 2π
+ (y2 − y3 ) sin + j y0 + (x1 − x4 ) sin − (x2 − x3 ) sin
5  5 5
4π 2π
+ (y1 + y4 ) cos + (y2 + y3 ) cos , (11.72)
5 5
4π 2π 4π
v3 = x0 + (x1 + x4 ) cos + (x2 + x3 ) cos − (y1 − y4 ) sin
5  5 5
2π 4π 2π
+ (y2 − y3 ) sin + j y0 − (x1 − x4 ) sin − (x2 − x3 ) sin
5  5 5
4π 2π
+ (y1 + y4 ) cos + (y2 + y3 ) cos ,
5 5
2π 4π 2π
v4 = x0 + (x1 + x4 ) cos + (x2 + x3 ) cos + (y1 − y4 ) sin
5  5 5
4π 2π 4π
+ (y2 − y3 ) sin + j y0 − (x1 − x4 ) sin + (x2 − x3 ) sin
5  5 5
2π 4π
+ (y1 + y4 ) cos + (y2 + y3 ) cos .
5 5

Now we precompute the following quantities:

t1 = x1 + x4 , t2 = x2 + x3 , t3 = y1 + y4 , t4 = y2 + y3 ,
b1 = x1 − x4 , b2 = x2 − x3 , b3 = y1 − y4 , b4 = y2 − y3 ,
2π 4π 2π 4π
c1 = b1 sin , c2 = b2 sin , c3 = b3 sin , c4 = b4 sin ,
5 5 5 5
2π 4π 2π 4π
d1 = t1 cos , d2 = t2 cos , d3 = t3 cos , d4 = t4 cos ,
5 5 5 5 (11.73)
4π 2π 4π 2π
e1 = t1 cos , e2 = t2 cos , e3 = t3 cos , e4 = t4 cos ,
5 5 5 5
4π 2π 4π 2π
f1 = b1 sin , f2 = b2 sin , f3 = b3 sin , f4 = b4 sin ,
5 5 5 5
A1 = x0 + d1 + d2 , A2 = c3 + c4 , A3 = y0 + d3 + d4 , A4 = c1 + c2 ,
B1 = y0 + e1 + e2 , B2 = f3 − f4 , B3 = y0 + e3 + e4 , B4 = f1 − f2 .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 363

Then, Eq. (11.72) can be rewritten as follows:

v0 = x0 + t1 + t2 + j(y0 + t3 + t4 ),
v1 = A1 − A2 + j(A3 + A4 ),
v2 = B1 − B2 + j(B3 + B4 ), (11.74)
v3 = B1 + B2 + j(B3 − B4 ),
v4 = A1 + A2 + j(A3 − A4 ).

Subsequently, we can calculate the complexities of the following terms:

C + (i5 Z = v0 ) = 8, C × (i5 Z) = 0,
C + (a1 Z = v1 ) = 8, C × (a1 Z) = 8,
C + (a2 Z = v2 ) = 8, C × (a2 Z) = 8, (11.75)
C + (a∗2 Z = v3 ) = 2, C × (a∗2 Z) = 0,
C + (a∗1 Z = v4 ) = 2, C × (a∗1 Z) = 0.

Then, the complexity of the C51 transform is

C + (C51 ) = 28, C × (C51 ) = 16. (11.76)

Now, let Z T = (z0 , z1 , . . . , zN−1 ) be a complex-valued vector of length N =


5 (n > 1). We introduce the following notations: Pi is a (0, 1) column vector
n

of length N/5 whose only i’th i = 0, 1, . . . , N/5 − 1 element is equal to 1, and


(Z i )T = (z5i , z5i+1 , z5i+2 , z5i+3 , z5i+4 ).
The 1D forward Chrestenson transform of order N can be performed as follows
[see Eq. (11.70)]:
⎛ n−1 ⎞
⎜⎜⎜(C5 ⊗ i5 )Z ⎟⎟⎟
⎜⎜⎜(C n−1 ⊗ a )Z ⎟⎟⎟
⎜⎜⎜ 5 1 ⎟ ⎟⎟
C5n Z = ⎜⎜⎜⎜(C5n−1 ⊗ a2 )Z ⎟⎟⎟⎟ . (11.77)
⎜⎜⎜ n−1 ⎟
⎜⎝⎜(C5 ⊗ a∗2 )Z ⎟⎟⎟⎠⎟
(C5n−1 ⊗ a∗1 )Z

Using the above notations, we have


 
(C5n−1 ⊗ i5 )Z = (C5n−1 ⊗ i5 ) P0 ⊗ Z 0 + P1 ⊗ Z 1 + · · · + PN/5−1 ⊗ Z N/5−1
= C5n−1 P0 ⊗ i5 Z 0 + C5n−1 P1 ⊗ i5 Z 1 + · · · + C5n−1 PN/5−1 ⊗ i5 Z N/5−1
= C5n−1 P0 (z0 + · · · + z4 ) + C5n−1 P1 (z5 + · · · + z9 )
+ · · · + C5n−1 PN/5−1 (zN−5 + · · · + zN−1 )
⎛ ⎞
⎜⎜⎜z0 + z1 + · · · + z4 ⎟⎟⎟
⎜⎜⎜z + z + · · · + z ⎟⎟⎟
⎜5 ⎟⎟⎟⎟ .
= C5n−1 ⎜⎜⎜⎜⎜
6 9
.. ⎟⎟⎟ (11.78)
⎜⎜⎜ . ⎟⎟⎠

zN−5 + zN−4 + · · · + zN−1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


364 Chapter 11

Then, we can write

C + (C5n−1 ⊗ i5 ) = C + (C5n−1 ) + 8 · 5n−1 , C × (C5n−1 ⊗ i5 ) = C × (C5n−1 ). (11.79)

Now, compute the complexity of the (C5n−1 ⊗ a1 )Z transform,


 
(C5n−1 ⊗ a1 )Z = (C5n−1 ⊗ a1 ) P0 ⊗ Z 0 + P1 ⊗ Z 1 + · · · + PN/5−1 ⊗ Z N/5−1
 
= C5n−1 P0 ⊗ a1 Z 0 + P1 ⊗ a1 Z 1 + · · · + PN/5−1 ⊗ a1 Z N/5−1 (11.80)

Then, from Eq. (11.75), we obtain

C + (C5n−1 ⊗ a1 ) = C + (C5n−1 ) + 8 · 5n−1 ,


(11.81)
C × (C5n−1 ⊗ a1 ) = C × (C5n−1 ) + 8 · 5n−1 .

Similarly, we can verify

C + (C5n−1 ⊗ a2 ) = C + (C5n−1 ) + 8 · 5n−1 , C × (C5n−1 ⊗ a2 ) = C × (C5n−1 ) + 8 · 5n−1 ,


C + (C5n−1 ⊗ a∗2 ) = C + (C5n−1 ) + 2 · 5n−1 , C × (C5n−1 ⊗ a∗2 ) = C × (C5n−1 ), (11.82)
C + (C5n−1 ⊗ a∗1 ) = C + (C5n−1 ) + 2 · 5n−1 , × ∗
C (C5 ⊗ a1 ) = C (C5 ).
n−1 × n−1

Finally, the complexity of the C5n transform can be calculated as follows:

C + (C5n ) = 5 · C + (C5n−1 ) + 28 · 5n−1 , C × (C5n ) = 5 · C × (C5n−1 ) + 16 · 5n−1 ,


or (11.83)
+ ×
C (C5 ) = 28n · 5 , C (C5 ) = 16n · 5 , n = 1, 2, . . .
n n−1 n n−1

The numerical results of the complexities of the Chrestenson transforms are given
in Table 11.1.

Table 11.1 Results of the complexities of the Chrestenson transforms.

Size Addition Multiplication

C3n n 13 · 3n−1 n 4 · 3n−1 n

3 1 13 4
9 2 78 24
27 3 351 108
81 4 1404 432

C5n n 28 · 5n−1 n 16 · 5n−1 n

5 1 28 16
25 2 280 160
125 3 2 100 1200
625 4 14,000 8000

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 365

11.4 Fast Generalized Haar Transforms


The evolution of imaging and audio/video applications over the past two decades
has pushed data storage and transmission technologies beyond their previous
limits. One of the main and most important steps in data compression, as well
as in various pattern recognition and communication tasks, is the application
of discrete orthogonal (spectral) transforms to the input signals and images.
This step allows transforming the original signals into the much less redundant
spectral domain and performing the actual compression/recognition on spectral
coefficients rather than on the original signals.36–43 Developed in the 1960s and
1970s, fast trigonometric transforms such as the FFT and DCT (discrete cosine
transform) facilitated the use of such techniques for a variety of efficient data
representation problems. Particularly, the DCT-based algorithms have become the
industry standard (JPEG/MPEG) in digital image/video compression systems.41
Here, we consider the generalized Haar transform, develop the corresponding fast
algorithms, and evaluate its complexities.36,39,44,45

11.4.1 Generalized Haar functions


The generalized Haar functions for any p, n (p is a prime power) are defined as
follows:
0,0
H0,0 (k) = 1, 0 ≤ k < 1,
 
q,r ,√ -i−1 2π
Hi,t (k) = p exp j (t − 1)r ,
p
(11.84)
q + [(t − 1)/p] q + (t/p)
≤k< ,
pi−1 pi−1
q,r
Hi,t = 0, at all other points,

where j = −1, i = 1, 2, . . . , n, r = 1, 2, . . . , p − 1, q = 0, 1, 2, . . . , pi−1 − 1,
t = 1, 2, . . . , p.
For p = 2 from Eq. (11.84), we obtain the definition of classical Haar functions
(r = 1),

0
H0,0 (k) = 1, 0 ≤ k < 1,
q
√ i−1 2q 2q + 1
Hi,1 (k) = 2 , ≤k< ,
2i 2i
√ i−1 (11.85)
q 2q + 1 2q + 2
Hi,2 (k) = − 2 , ≤k< ,
2i 2i
q
Hi,t = 0, at all other points,

from which we generate a classical Haar transform matrix of order 2n (see previous
chapters in this book).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


366 Chapter 11

Example: Observe the generalized Haar transform matrix generation√ for p = 3


and n = 2; we use the following notations: a = exp[ j(2π/3)], s = 3:

1 1
01
Row 1: H11 (k) = 1, 0 ≤ k < , 02
Row 2: H11 (k) = 1, 0 ≤ k < ,
3 3
1 2 1 2 (11.86a)
01
H12 (k) = a, ≤k< , 02
H12 (k) = a2 , ≤k< ,
3 3 3 3
2 2
01
H13 (k) = a2 , ≤ k < 1; 02
H13 (k) = a, ≤ k < 1;
3 3
1 1 4
01
Row 3: H21 (k) = s, 0 ≤ k < , 11
Row 4: H21 (k) = s, ≤k< ,
9 3 9
1 2 4 5
01
H22 (k) = sa, ≤k< , 11
H22 (k) = sa, ≤k< , (11.86b)
9 9 9 9
2 5 2
01
H23 (k) = sa2 , ≤ k < 1; 11
H23 (k) = sa2 , ≤k< ;
9 9 3
2 7 1
21
Row 5: H21 (k) = s, ≤k< , 02
Row 6: H21 (k) = s, 0 ≤ k < ,
3 9 9
7 8 1 2
21
H22 (k) = sa, ≤k< , 02
H22 (k) = sa2 , ≤k< , (11.86c)
9 9 9 9
2 8 2 1
H23 (k) = sa ,
21
≤ k < 1; H23 (k) = sa,
02
≤k< ;
9 9 3
1 4 2 7
12
Row 7: H21 (k) = s, ≤k< , 22
Row 8: H21 (k) = s, ≤k< ,
3 9 3 9
4 5 7 8
12
H22 (k) = sa2 , ≤k< , 22
H22 (k) = sa2 , ≤k< , (11.86d)
9 9 9 9
5 2 8
12
H23 (k) = sa, ≤k< ; 22
H23 (k) = sa, ≤ k < 1.
9 3 9
Therefore, the complete orthogonal generalized Haar transform matrix for p = 3,
n = 2 has the following form:
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 a a a a2 a2 a2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 1 a2 a2 a2 a a a ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ s sa sa2 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜ ⎟
H9 = ⎜⎜⎜⎜⎜0 0 0 s sa sa2 0 0 0 ⎟⎟⎟⎟⎟ . (11.87)
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜0 0 0 0 0 0 s sa sa ⎟⎟⎟ 2
⎜⎜⎜ ⎟
⎜⎜⎜ s sa2 sa 0 0 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 s sa2 sa 0 0 0 ⎟⎟⎟⎟⎟
⎜⎝ ⎟⎠
0 0 0 0 0 s sa2 sa

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 367

The complete orthogonal generalized Haar transform matrix for p = 4, n = 2


has the following form:
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 ⎟⎟

⎜⎜⎜⎜1 1 1 1 j j j j −1 −1 −1 −1 −j −j −j − j ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 1 1 −1 −1 −1 −1 1 1 1 1 −1 −1 −1 −1 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 1 1 −j −j −j −j −1 −1 −1 −1 j j j j ⎟⎟⎟⎟
⎜⎜⎜2 2 j −2 −2 j ⎟
⎜⎜⎜ 0 0 0 0 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟

⎜⎜⎜0 0 0 0 2 2j −2 −2 j 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 0 0 0 0 0 2 2j −2 −2 j 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 −2 −2 j⎟⎟⎟⎟

= ⎜⎜⎜⎜⎜
0 0 0 0 0 0 0 0 2 2j
H16 ⎟ . (11.88)
⎜⎜⎜2 −2 2 −2 0 0 0 0 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟

⎜⎜⎜0 0 0 0
⎜⎜⎜ 2 −2 2 −2 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟

⎜⎜⎜0 0 0 0
⎜⎜⎜ 0 0 0 0 2 −2 2 −2 0 0 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜0 0 0 0 0 0 0 0 0 0 0 0 2 −2 2 −2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜2 −2 j −2 2 j 0 0 0 0 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟

⎜⎜⎜0 0 0 0
⎜⎜⎜ 2 −2 j −2 2j 0 0 0 0 0 0 0 0 ⎟⎟⎟⎟

⎜⎜⎜0 0 0 0
⎝ 0 0 0 0 2 −2 j −2 2j 0 0 0 0 ⎟⎟⎟⎟⎠
0 0 0 0 0 0 0 0 0 0 0 0 2 −2 j −2 2j

From the above-given generalized Haar transform matrices, we can see that the
Haar transform is globally sensitive for the first p of the pn row vectors, but locally
sensitive for all subsequent vectors.

11.4.2 2n -point Haar transform


Let us introduce the following notations: i2 = (1, 1), j2 = (1, −1). From the
relations in Eq. (11.85), we obtain

for n =
 1,   
1 1 i
H2 = = 2 ; (11.89)
1 −1 j2

for n = 2,
⎛ ⎞
⎜⎜⎜ 1 1 1 1⎟⎟
⎟ 
⎜⎜⎜ 1 
⎜⎜⎜ √ √1 −1 −1⎟⎟⎟⎟⎟ H2 ⊗ i2 (11.90)
H4 = ⎜⎜ ⎟ = √
⎜⎜⎜ 2 − 2 √0 √0⎟⎟⎟⎟⎟
;
2I2 ⊗ j2
⎝ ⎠
0 0 2 − 2
for n = 3,
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 1 1⎟⎟
⎟⎟⎟
⎜⎜⎜ 1 1
⎜⎜⎜⎜ √ √ √1 √1 −1 −1 −1 −1⎟⎟⎟⎟
⎜⎜⎜ 2 2 − 2 − 2 ⎟⎟
⎜⎜⎜ √ √ √ √ ⎟⎟⎟⎟  
⎜⎜⎜ 0 0 0 0 2 2 − 2 − 2 ⎟⎟⎟ H4 ⊗ i2 (11.91)
H8 = ⎜⎜ ⎟⎟⎟ = .
⎜⎜⎜ 2 −2 0 0 0 0 0 0⎟⎟ ⎟ 2I 4 ⊗ j 2
⎜⎜⎜ ⎟
⎜⎜⎜ 0 0 2 −2 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ 0 0 ⎟
⎜⎝ 0 0 2 −2 0 0⎟⎟⎟⎟⎠
0 0 0 0 0 0 2 −2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


368 Chapter 11

Continuing this process, we obtain recursive representations of Haar matrices of


any order 2n as
⎛ ⎞
⎜⎜ H2n−1 ⊗ i2 ⎟⎟⎟
H2n = ⎜⎜⎜⎝√ n−1 ⎟⎟ , H1 = 1, n = 1, 2, . . . . (11.92)
2 I n−1 ⊗ j ⎠
2 2

Now, we will compute the complexity of a Haar transform of order 2n .


Note that for n = 1, we have C + (H2 ) = 2, C × (H2 ) = 0. To calculate the complex-
ity of the H4 transform given above, let X T = (x0 , x1 , x2 , x3 ) be a real-valued vector
of length 4. The 1D forward Haar transform of order 4 can be performed as follows:
⎛ ⎞
⎜⎜⎜ 1 1 1 1⎟⎟ ⎛⎜ x0 ⎞⎟ ⎛⎜y0 ⎞⎟
⎜⎜⎜ 1 ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜ √ √1 −1 −1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y1 ⎟⎟⎟⎟⎟
H4 X = ⎜⎜ ⎟⎜ ⎟ = ⎜ ⎟,
⎜⎜⎜ 2 − 2 √0 √0⎟⎟⎟⎟⎟ ⎜⎜⎜⎝⎜ x2 ⎟⎟⎟⎠⎟ ⎜⎜⎜⎝⎜y2 ⎟⎟⎟⎠⎟
(11.93)
⎝ ⎠ x
0 0 2 − 2 3 y3

where
y0 = (x0 + x1 ) + (x2 + x3 ),
y1 = (x
√0 + x1 ) − (x2 + x3 ), (11.94)
y2 = √2(x0 − x1 ),
y3 = 2(x2 − x3 ).

Then, the complexity of H4 transform is C × (H2 ) = 6, C × (H4 ) = 2.


Now, let X T = (x0 , x1 , . . . , xN−1 ) be a real-valued vector of length N = 2n . We
introduce the following notations: Pi is a (0, 1) column vector of length N/2 whose
only i’th (i = 1, 2, . . . , N/2) element equals 1, and (X i )T = (x2i−2 , x2i−1 ). The 1D
forward Haar transform of order N can be performed as follows:
⎛ ⎞
⎜⎜⎜1 (HN/2 ⊗ i2 )X 2 ⎟⎟⎟
HN X = ⎜⎜⎝ ⎜ √  ⎟⎟ .
I2n−1 ⊗ j2 X ⎟⎠
n−1 (11.95)
2

Using the above-given notations, we have


 
(HN/2 ⊗ i2 )X = (HN/2 ⊗ i2 ) P1 ⊗ X 1 + P2 ⊗ X 2 + · · · + PN/2 ⊗ X N/2
= HN/2 P1 ⊗ i2 X 1 + HN/2 P2 ⊗ i2 X 2 + · · · + HN/2 PN/2 ⊗ i2 X N/2
= HN/2 P1 (x0 + x1 ) + HN/2 P2 (x2 + x3 ) + · · · + HN/2 PN/2 (xN−2 + xN−1 )
⎛ ⎞
⎜⎜⎜ x0 + x1 ⎟⎟⎟
⎜⎜⎜ x + x ⎟⎟⎟
⎜ 2 ⎟⎟
= HN/2 ⎜⎜⎜⎜⎜
3 ⎟
.. ⎟⎟⎟ . (11.96)
⎜⎜⎜ . ⎟⎟⎟
⎝ ⎠
xN−2 + xN−1
Then, we can write
N
C + (HN/2 ⊗ i2 ) = C + (HN/2 ) + ,
2 (11.97)
C × (HN/2 ⊗ i2 ) = C × (HN/2 ).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 369

√ n−1
Now we compute the complexity of the ( 2 IN/2 ⊗ j2 )X transform,
√ n−1 %√ n−1 & 
( 2 IN/2 ⊗ j2 )X = 2 IN/2 ⊗ j2 P1 ⊗ X 1 + P2 ⊗ X 2 + · · · + PN/2 ⊗ X N/2
√ n−1  
= 2 P1 (x0 − x1 ) + P2 (x2 − x3 ) + · · · + PN/2 (xN−2 − xN−1 )
⎛ ⎞
⎜⎜⎜ x0 − x1 ⎟⎟⎟
√ n−1 ⎜⎜⎜⎜ x2 − x3 ⎟⎟⎟⎟⎟

= 2 ⎜⎜⎜⎜ .. ⎟⎟⎟
⎟⎟⎟ (11.98)
⎜⎜⎜ . ⎟⎠

xN−2 − xN−1

from which we obtain


%√ n−1
& N
C+ IN/2 ⊗ j2 = ,
2
%√ n−1 2
& N (11.99)
×
C 2 IN/2 ⊗ j2 = .
2

Finally, the complexity of the HN transform can be calculated as follows:

C + (H2n ) = 2n+1 − 2,
(11.100)
C × (H2n ) = 2n − 2, n = 1, 2, 3, . . . .

11.4.3 3n -point generalized Haar transform


Let us introduce the following
√ notations: i3 = (1, 1, 1), b3 = (1, a, a2 ), where
a=√ exp[ j(2π/3)], j = −1. From the relations in Eq. (11.84), we obtain (below,
s = 3):

for n = 1,
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜ i3 ⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟
H3 = ⎜⎜1 a a2 ⎟⎟⎟⎟ = ⎜⎜⎜⎜⎝b3 ⎟⎟⎟⎟⎠ ;
(11.101)
⎝ ⎠
1 a2 a b∗3
for n = 2,
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 a a a a2 a2 a2 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1
⎜⎜⎜ a2 a2 a2 a a a ⎟⎟⎟⎟⎟
⎟ ⎛ ⎞
⎜⎜⎜ s sa sa2
⎜⎜ 0 0 0 0 0 0 ⎟⎟⎟⎟⎟ ⎜⎜⎜ H3 ⊗ i3 ⎟⎟⎟
⎟⎟ ⎜ ⎜ ⎟⎟ (11.102)
H9 = ⎜⎜⎜⎜⎜0 0 0 s sa sa2 0 0 0 ⎟⎟⎟⎟ = ⎜⎜⎜⎜ sI3 ⊗ b3 ⎟⎟⎟⎟ ;
⎜⎜⎜ ⎟⎟ ⎜⎝ ⎟⎠
⎜⎜⎜⎜0 0 0 0 0 0 s sa sa2 ⎟⎟⎟⎟ sI3 ⊗ b∗3
⎜⎜⎜ ⎟⎟⎟
2
⎜⎜⎜ s sa sa 0 0 0 0 0 0 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 s sa2 sa 0 0 0 ⎟⎟⎟⎟
⎝ ⎟⎠
0 0 0 0 0 s sa2 sa

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


370 Chapter 11

for n = 3,
⎛ ⎞
⎜⎜⎜ H9 ⊗ i3 ⎟⎟⎟
⎜ ⎟
H27 = ⎜⎜⎜⎜⎜ s2 I9 ⊗ b3 ⎟⎟⎟⎟⎟ . (11.103)
⎝ 2 ⎠
s I3 ⊗ b∗3
Continuing this process, we obtain a recursive representation of the generalized
Haar matrices of any order 3n as follows:
⎛ ⎞
⎜⎜⎜ H3n−1 ⊗ i3 ⎟⎟⎟
⎜⎜⎜ n−1 ⎟
H3n = ⎜⎜⎜ s I3n−1 ⊗ b3 ⎟⎟⎟⎟⎟ . (11.104)
⎝ n−1 ⎠
s I3n−1 ⊗ b∗3
Now we compute the complexity of the generalized Haar transform of order 3n .
First, we calculate the complexity of the H3 transform. Let Z T = (z0 , z1 , z2 ) = (x0 +
jy0 , x√
1 + jy1 , x2 + jy2 ) be a complex-valued vector of length 3, a = exp[ j(2π/3)],
j = −1. Because the generalized Haar transform matrix H3 is identical to the
Chrestenson matrix of order 3, the 1D forward generalized Haar transform of
order 3 can be performed in the manner that was shown in Section 11.3.1 [see
Eqs. (11.57) and (11.58)], and has the complexity
C + (i3 Z) = 4, C × (i3 Z) = 0,
C + (b3 Z) = 6, C × (b3 Z) = 4, (11.105)
C + (b∗3 Z) = 1, C × (b∗3 Z) = 0.
That is, C + (H3 ) = 11, C × (H3 ) = 4.
Now, let Z T = (z0 , z1 , . . . , zN−1 ) be a complex-valued vector of length N = 3n .
We introduce the following notations: Pi denotes a (0, 1) column vector of length
N/3 whose only i’th element is equal to 1 (i = 0, 1, . . . , N/3 − 1), and (Z i )T =
(z3i , z3i+1 , z3i+2 ). The 1D forward generalized Haar transform of order N can be
performed as follows:
⎛ ⎞
⎜⎜⎜ (H3n−1 ⊗ i3) Z ⎟⎟⎟
⎜⎜ n−1 ⎟⎟
H3n Z = ⎜⎜⎜⎜ s I3n−1 ⊗ b3  Z ⎟⎟⎟⎟ . (11.106)
⎜⎝ n−1 ⎟⎠
s I3n−1 ⊗ b∗3 Z

Using the above-given notations, we have


 
(H3n−1 ⊗ i3 )Z = (H3n−1 ⊗ i3 ) P0 ⊗ Z 0 + P1 ⊗ Z 1 + · · · + PN/3−1 ⊗ Z N/3−1
= H3n−1 P0 ⊗ i3 Z 0 + H3n−1 P1 ⊗ i3 Z 1 + · · · + H3n−1 PN/3−1 ⊗ i3 Z N/3−1
= H3n−1 P0 (z0 + z1 + z2 ) + H3n−1 P1 (z3 + z4 + z5 )
+ · · · + H3n−1 PN/3−1 (zN−3 + zN−2 + zN−1 )
⎛ ⎞
⎜⎜⎜z0 + z1 + z2 ⎟⎟⎟
⎜⎜⎜z + z + z ⎟⎟⎟
⎜3 ⎟⎟⎟⎟ .
= H3n−1 ⎜⎜⎜⎜⎜
4 5
.. ⎟⎟⎟ (11.107)
⎜⎜⎜ . ⎟

⎝ ⎠
zN−3 + zN−2 + zN−1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 371

Then, we can write

C + (H3n−1 ⊗ i3 ) = C + (H3n−1 ) + 4 · 3n−1 ,


(11.108)
C × (H3n−1 ⊗ i3 ) = C × (H3n−1 ).

Now, compute the complexity of the (sn−1 I3n−1 ⊗ b3 )Z transform:


 
(sn−1 I3n−1 ⊗ b3 )Z = sn−1 (I3n−1 ⊗ b3 ) P0 ⊗ Z 0 + P1 ⊗ Z 1 + · · · + PN/3−1 ⊗ Z N/3−1
= sn−1 (P0 ⊗ b3 Z 0 + P1 ⊗ b3 Z 1 + · · · + PN/3−1 ⊗ bZ N/3−1 )
⎛ ⎞
⎜⎜⎜z0 + az1 + a2 z2 ⎟⎟⎟
⎜⎜⎜⎜z3 + az4 + a2 z5 ⎟⎟⎟
⎟⎟⎟
= sn−1 ⎜⎜⎜⎜⎜ .. ⎟⎟⎟ . (11.109)
⎜⎜⎜ . ⎟⎟⎟
⎝ ⎠
zN−3 + azN−2 + a2 zN−1

We obtain a similar result for the (sn−1 I3n−1 ⊗ b∗3 )Z transform. Hence, using Eq.
(11.105), we obtain

C + (sn−1 I3n−1 ⊗ b3 ) = 7 · 3n−1 ,


C × (sn−1 I3n−1 ⊗ b3 ) = 5 · 3n−1 ;
(11.110)
C + (sn−1 I3n−1 ⊗ b∗3 ) = 3n−1 ,
C × (sn−1 I3n−1 ⊗ b∗3 ) = 3n−1 .

Finally, the complexity of the H3n transform can be calculated as follows:

C + (H3n ) = C + (H3n−1 ) + 12 · 3n−1 ,


(11.111)
C × (H3n ) = C × (H3n−1 ) + 6 · 3n−1 ;

or

C + (H3n ) = 6(3n − 3) + 11,


(11.112)
C × (H3n ) = 3(3n − 3) + 4.

11.4.4 4n -point generalized Haar transform


Let us introduce the following√ notations: i4 = (1, 1, 1, 1), a1 = (1, j, −1, − j),
a2 = (1, −1, 1, −1), where j = −1. From the relations in Eq. (11.84), we obtain
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜ i4 ⎟⎟⎟ ⎜⎜⎜ H4 ⊗ i4 ⎟⎟⎟
⎜⎜⎜⎜ ⎟ ⎜ ⎟⎟ ⎜⎜⎜⎜ ⎟⎟
1 j −1 − j⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜a1 ⎟⎟⎟⎟ 2I ⊗ a1 ⎟⎟⎟⎟
H4 = ⎜⎜⎜⎜⎜ ⎟ = ⎜ ⎟, H16 = ⎜⎜⎜⎜⎜ 4 ⎟. (11.113)
⎜⎜⎜1 −1 1 −1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜a2 ⎟⎟⎟⎟⎟ ⎜⎜⎜2I4 ⊗ a2 ⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ∗⎠ ⎝ ⎠
1 − j −1 j a1 2I4 ⊗ a∗1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


372 Chapter 11

Continuing this process, we obtain a recursive representation of the generalized


Haar matrices of any order 4n :
⎛ ⎞
⎜⎜⎜ H n−1 ⊗ i4 ⎟⎟⎟
⎜⎜⎜ n−1 4 ⎟
⎜2 I n−1 ⊗ a1 ⎟⎟⎟⎟⎟
H4n = ⎜⎜⎜⎜⎜ n−1 4 ⎟. (11.114)
⎜⎜⎜2 I4n−1 ⊗ a2 ⎟⎟⎟⎟⎟
⎝ n−1 ⎠
2 I4n−1 ⊗ a∗1

Now we will compute the complexity of a generalized Haar transform of


order 4n . First, we calculate the complexity of the H4 transform. Let Z T =
(z0 , z1 , z2 , z3 ) be a complex-valued vector of length 4. The 1D forward-generalized
Haar transform of order 4 can be performed as follows:
⎛ ⎞⎛ ⎞ ⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜z0 ⎟⎟⎟ ⎜⎜⎜i4 ⎟⎟⎟ ⎜⎜⎜z0 ⎟⎟⎟ ⎜⎜⎜v0 ⎟⎟⎟
⎜⎜⎜⎜ ⎟ ⎜ ⎟⎟ ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
1 j −1 − j⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜z1 ⎟⎟⎟⎟ ⎜⎜⎜⎜a1 ⎟⎟⎟⎟ ⎜⎜⎜⎜z1 ⎟⎟⎟⎟ ⎜⎜⎜⎜v1 ⎟⎟⎟⎟
H4 Z = ⎜⎜⎜⎜⎜ ⎟⎜ ⎟ = ⎜ ⎟⎜ ⎟ = ⎜ ⎟, (11.115)
⎜⎜⎜1 −1 1 −1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜z2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜a2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜z2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜v2 ⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠ ⎝ ∗⎠ ⎝ ⎠ ⎝ ⎠
1 − j −1 j z3 a1 z3 v3

where

v0 = (x0 + x2 ) + (x1 + x3 ) + j[(y0 + y2 ) + (y1 + y3 )],


v1 = (x0 − x2 ) − (y1 − y3 ) + j[(y0 − y2 ) + (x1 − x3 )],
(11.116)
v2 = (x0 + x2 ) − (x1 + x3 ) + j[(y0 + y2 ) − (y1 + y3 )],
v3 = (x0 − x2 ) + (y1 − y3 ) + j[(y0 − y2 ) − (x1 − x3 )].

Then, the complexity of the H4 transform is C + (H4 ) = 16, and no multiplications


are needed.
Now, let Z T = (z0 , z1 , . . . , zN−1 ) be a complex-valued vector of length N = 4n ,
n > 1. We introduce the following notations: Pi is a (0, 1) column vector of length
N/4 whose only i’th (i = 1, 2, . . . , N/4) element equals 1, and (Z i )T = (z4i−4 ,
z4i−3 , z4i−2 , z4i−1 ). The 1D forward generalized Haar transform of order N = 4n can
be performed as follows:
⎛ ⎞
⎜⎜⎜ (H4n−1 ⊗ i4) Z ⎟⎟⎟
⎜⎜⎜⎜ 2n−1 I n−1 ⊗ a1 Z ⎟⎟⎟
⎟⎟⎟
⎜⎜ 4
H4n Z = ⎜⎜⎜⎜ n−1  ⎟⎟⎟ .
⎟⎟⎟ (11.117)
⎜⎜⎜ 2 I4n−1 ⊗ a2 Z ⎟⎟⎟
⎜⎜⎝  ⎠
2n−1 I4n−1 ⊗ a∗1 Z

Using the above-given notations, we have


 
(H4n−1 ⊗ i4 )Z = (H4n−1 ⊗ i4 ) P1 ⊗ Z 1 + P2 ⊗ Z 2 + · · · + PN/4 ⊗ Z N/4
= H4n−1 P1 ⊗ i4 Z 1 + H4n−1 P2 ⊗ i4 Z 2 + · · · + H4n−1 PN/4 ⊗ i4 Z N/4

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 373

= H4n−1 P1 (z0 + z1 + z2 + z3 ) + H4n−1 P2 (z4 + · · · + z7 )


+ · · · + H4n−1 PN/4 (zN−4 + · · · + zN−1 )
⎛ ⎞
⎜⎜⎜z0 + z1 + z2 + z3 ⎟⎟⎟
⎜⎜⎜z + z + z + z ⎟⎟⎟
⎜⎜⎜ 4 5 6 7 ⎟⎟⎟
= H4n−1 ⎜⎜⎜ .. ⎟⎟⎟ . (11.118)
⎜⎜⎜⎝ . ⎟⎟⎟⎠
zN−4 + zN−3 + zN−2 + zN−1

Because

z4i−4 + · · · + z4i−1 = (x4i−4 + x4i−2 ) + (x4i−3 + x4i−1 )


+ j[(y4i−4 + y4i−2 ) + (y4i−3 + y4i−1 )] (11.119)

we can write

C + (H4n−1 ⊗ i4 ) = C + (H4n−1 ) + 6 · 4n−1 ,


(11.120)
C × (H4n−1 ⊗ i4 ) = C shift (H4n−1 ) = 0.

Now we compute the complexity of the (2n−1 I4n−1 ⊗ a1 )Z transform:


 
(2n−1 I4n−1 ⊗ a1 )Z = (2n−1 I4n−1 ⊗ a1 ) P1 ⊗ Z 1 + P2 ⊗ Z 2 + · · · + PN/4 ⊗ Z N/4
 
= 2n−1 P1 ⊗ a1 Z 1 + P2 ⊗ a1 Z 2 + · · · + PN/4 ⊗ a1 Z N/4 (11.121)

Because
⎛ ⎞ ⎛ ⎞
⎜⎜z4i−4 ⎟⎟ ⎜⎜ x4i−4 + jy4i−4 ⎟⎟
  ⎜⎜⎜⎜z4i−3 ⎟⎟⎟⎟   ⎜⎜⎜⎜ x4i−3 + jy4i−3 ⎟⎟⎟⎟

a1 Z i = 1 j −1 − j ⎜⎜⎜⎜ ⎟⎟ = 1 j −1 − j ⎜⎜⎜ ⎟
⎜⎜⎝z4i−2 ⎟⎟⎟⎟⎠ ⎜⎜⎜ x4i−2 +
⎝ jy4i−2 ⎟⎟⎟⎠⎟
z4i−1 x4i−1 + jy4i−1
= (x4i−4 − x4i−2 ) − (y4i−3 − y4i−1 )
+ j[(y4i−4 − y4i−2 ) + (x4i−3 − x4i−1 )], (11.122)

we obtain

C + (2n−1 I4n−1 ⊗ a1 ) = 6 · 4n−1 ,


(11.123)
C shift (2n−1 I4n−1 ⊗ a1 ) = 2 · 4n−1 .

Similarly, we find that


 
(2n−1 I4n−1 ⊗ a2 )Z = (2n−1 I4n−1 ⊗ a2 ) P1 ⊗ Z 1 + P2 ⊗ Z 2 + · · · + PN/4 ⊗ Z N/4
 
= 2n−1 P1 ⊗ a2 Z 1 + P2 ⊗ a2 Z 2 + · · · + PN/4 ⊗ a2 Z N/4 , (11.124)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


374 Chapter 11

and

a2 Z i = (x4i−4 + x4i−2 ) − (x4i−3 + x4i−1 )


+ j[(y4i−4 + y4i−2 ) − (y4i−3 + y4i−1 )]. (11.125)

Now, taking into account Eq. (11.119), we obtain

C + (2n−1 I4n−1 ⊗ a2 ) = 2 · 4n−1 ,


(11.126)
C shift (2n−1 I4n−1 ⊗ a2 ) = 2 · 4n−1 .

The (2n−1 I4n−1 ⊗ a∗1 )Z transform has the form


 
(2n−1 I4n−1 ⊗ a∗1 )Z = (2n−1 I4n−1 ⊗ a∗1 ) P1 ⊗ Z 1 + P2 ⊗ Z 2 + · · · + PN/4 ⊗ Z N/4
 
= 2n−1 P1 ⊗ a∗1 Z 1 + P2 ⊗ a∗1 Z 2 + · · · + PN/4 ⊗ a∗1 Z N/4 (11.127)

because

a∗1 Z i = (x4i−4 − x4i−2 ) + (y4i−3 − y4i−1 )


+ j[(y4i−4 − y4i−2 ) − (x4i−3 − x4i−1 )]. (11.128)

Now, taking into account Eq. (11.122), we obtain

C + (2n−1 I4n−1 ⊗ a∗1 ) = 2 · 4n−1 ,


(11.129)
C shift (2n−1 I4n−1 ⊗ a∗1 ) = 2 · 4n−1 .

Finally, the complexity of the H4n transform can be calculated as follows:

C + (H4 ) = 16, C shift (H4 ) = 0,


16(4n − 1) (11.130)
C + (H4n ) = , C shift (H4n ) = 6 · 4n−1 , n ≥ 2.
3

11.4.5 5n -point generalized Haar transform


Let us introduce the following notations: i5 = (1, 1, 1,√1, 1), a1 = (1, a, a2 , a3 , a4 ),
a2 = (1, a2 , a4 , a, a3 ), and a = exp( j 2π
5 ), where j = −1. From Eq. (11.84), we
obtain
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1 1 ⎟⎟⎟ ⎜⎜⎜ i5 ⎟⎟⎟ ⎜⎜⎜ H5 ⊗ i5 ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ ⎜⎜⎜ √ ⎟
⎜⎜⎜1 a a2 a3 a4 ⎟⎟⎟⎟ ⎜⎜⎜⎜a1 ⎟⎟⎟⎟ ⎜⎜⎜ 5I5 ⊗ a1 ⎟⎟⎟⎟⎟
⎟ ⎜ ⎟ ⎜⎜ √ ⎟⎟
H5 = ⎜⎜⎜⎜⎜1 a2 a4 a a3 ⎟⎟⎟⎟⎟ = ⎜⎜⎜⎜⎜a2 ⎟⎟⎟⎟⎟ , H25 = ⎜⎜⎜⎜⎜ 5I5 ⊗ a2 ⎟⎟⎟⎟⎟ . (11.131)
⎜⎜⎜⎜1 ⎟ ⎜ ⎟
a2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜a∗2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜ √ ⎟⎟
⎜⎜⎝ a3 a a4
⎠ ⎝ ∗⎠ ⎜⎜⎜ 5I5 ⊗ a∗2 ⎟⎟⎟⎟⎟
⎝ √ ⎠
1 a4 a3 a2 a a1 5I5 ⊗ a∗1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 375

Continuing this process, we obtain recursive representation of generalized Haar


matrices of any order 5n :
⎛ ⎞
⎜⎜⎜ H5n−1 ⊗ i5 ⎟⎟
⎜⎜⎜ √ n−1 ⎟⎟
⎜⎜⎜ 5 I5n−1 ⊗ a1 ⎟⎟⎟⎟⎟
⎜⎜⎜ √ n−1 ⎟⎟⎟
H5n = ⎜⎜⎜⎜ 5 I5n−1 ⊗ a2 ⎟⎟⎟⎟ , H1 = 1, n = 1, 2, . . . (11.132)
⎜⎜⎜ √ n−1 ⎟
⎜⎜⎜ 5 I5n−1 ⊗ a∗ ⎟⎟⎟⎟⎟
⎜⎜⎝ √ n−1 2⎟
⎟⎠
5 I5n−1 ⊗ a∗1

Now we will compute the complexity of the generalized Haar transform of order
5n . First, we calculate the complexity of the H5 transform. Let Z T = (z0 , z1 , . . . , z4 )
be a complex-valued vector of length 5; then,
⎛ ⎞
⎜⎜⎜1 1 1 1 1 ⎟⎟⎟ ⎛⎜⎜z0 ⎞⎟⎟ ⎛⎜⎜ i5 ⎞⎟⎟ ⎛⎜⎜z0 ⎞⎟⎟ ⎛⎜⎜v0 ⎞⎟⎟
⎜⎜⎜⎜ ⎟⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟
⎜⎜⎜1 a a2 a3 a4 ⎟⎟⎟⎟ ⎜⎜⎜⎜⎜z1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜a1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜z1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜v1 ⎟⎟⎟⎟⎟
⎜ ⎟⎟ ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
H5 Z = ⎜⎜⎜⎜1 a2 a4 a a3 ⎟⎟⎟⎟ ⎜⎜⎜⎜z2 ⎟⎟⎟⎟ = ⎜⎜⎜⎜a2 ⎟⎟⎟⎟ ⎜⎜⎜⎜z2 ⎟⎟⎟⎟ = ⎜⎜⎜⎜v2 ⎟⎟⎟⎟ , (11.133)
⎜⎜⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ ⎜⎜ ∗ ⎟⎟ ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟
⎜⎜⎜1 a3 a a4 a2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎝z3 ⎟⎟⎟⎠ ⎜⎜⎜⎝a2 ⎟⎟⎟⎠ ⎜⎜⎜⎝z3 ⎟⎟⎟⎠ ⎜⎜⎜⎝v3 ⎟⎟⎟⎠
⎝ ⎠ a∗1 z4
1 a4 a3 a2 a z4 v4

where

vr0 = x0 + (x1 + x4 ) + (x2 + x3 ), vi0 = y0 + (y1 + y4 ) + (y2 + y3 ),


2π π
vr1 = x0 + (x1 + x4 ) cos − (x2 + x3 ) cos
 5 5
2π π
− (y1 − y4 ) sin + (y2 − y3 ) sin ,
5 5
2π π
vi1 = y0 + (y1 + y4 ) cos − (y2 + y3 ) cos
 5 5
2π π
+ (x1 − x4 ) sin + (x2 − x3 ) sin ,
5 5
2π π
vr4 = x0 + (x1 + x4 ) cos − (x2 + x3 ) cos
 5 5
2π π
+ (y1 − y4 ) sin + (y2 − y3 ) sin ,
5 5
2π π
vi4 = y0 + (y1 + y4 ) cos − (y2 + y3 ) cos
 5 5
2π π
− (x1 − x4 ) sin + (x2 − x3 ) sin , (11.134)
5 5
π 2π
vr2 = x0 − (x1 + x4 ) cos + (x2 + x3 ) cos
 5 5
π 2π
− (y1 − y4 ) sin − (y2 − y3 ) sin ,
5 5

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


376 Chapter 11

π 2π
vi2 = y0 − (y1 + y4 ) cos + (y2 + y3 ) cos
 5 5
π 2π
+ (x1 − x4 ) sin − (x2 − x3 ) sin ,
5 5
π 2π
vr3 = x0 − (x1 + x4 ) cos + (x2 + x3 ) cos
 5 5
π 2π
+ (y1 − y4 ) sin − (y2 − y3 ) sin ,
5 5
π 2π
vi3 = y0 − (y1 + y4 ) cos + (y2 + y3 ) cos
 5 5
π 2π
− (x1 − x4 ) sin − (x2 − x3 ) sin .
5 5

Now we introduce the following notations:

X1 = x1 + x4 , X2 = x2 + x3 , X1 = x1 − x4 , X2 = x2 − x3 ,
Y1 = y1 + y4 , Y2 = y2 + y3 , Y1 = y1 − y4 , Y2 = y2 − y3 ;
2π π 2π π
C1 = X1 cos , C2 = X2 cos , C3 = Y1 cos , C4 = Y2 cos ,
5 5 5 5
π 2π π 2π
T 1 = X1 cos , T 2 = X2 cos , T 3 = Y1 cos , T 4 = Y2 cos ;
5 5 5 5
2π π 2π π
S 1 = X1 sin , S 2 = X2 sin , S 3 = Y1 sin , S 4 = Y2 sin ,
5 5 5 5
π 2π π 2π
R1 = Y1 sin , R2 = Y2 sin , R3 = X1 sin , R4 = X2 sin .
5 5 5 5
(11.135)

Using the above-given notations, Eq. (11.134) can be represented as

v0 = x0 + X1 + X2 + j(y0 + Y1 + Y2 ),
v1 = (x0 + C1 − C2 ) − (S 3 + S 4 ) + j[(y0 + C3 − C4 ) + (S 1 + S 2 )],
v4 = (x0 + C1 − C2 ) + (S 3 + S 4 ) + j[(y0 + C3 − C4 ) − (S 1 + S 2 )], (11.136)
v2 = (x0 − T 1 + T 2 ) − (R1 − R2 ) + j[(y0 − T 3 + T 4 ) + (R3 − R4 )],
v3 = (x0 − T 1 + T 2 ) + (R1 − R2 ) + j[(y0 − T 3 + T 4 ) − (R3 − R4 )].

Now, it is not difficult to find that C + (i5 ) = C + (a1 ) = C + (a2 ) = 8, C + (a∗1 ) =


C (a∗2 ) = 2, C × (i5 ) = C × (a∗1 ) = C × (a∗2 ) = 0, C × (a1 ) = C × (a2 ) = 8. Therefore, we
+

obtain C + (H5 ) = 28, C × (H5 ) = 16.


Now, let Z T = (z0 , z1 , . . . , zN−1 ) be a complex-valued vector of length N 5n (n >
1). We introduce the following notations: Pi is a (0, 1) column vector of length
N/5 whose only i’th element equals 1 (i = 1, 2, . . . , N/5), and (Z i )T = (z5i−5 , z5i−4 ,
z5i−3 , z5i−2 , z5i−1 ). The 1D forward generalized Haar transform of order N = 5n can

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 377

be performed as follows:
⎛ ⎞
⎜⎜⎜ (H5n−1 ⊗ i5 ) Z ⎟⎟⎟
⎜⎜⎜% & ⎟⎟⎟
⎜⎜⎜ √ n−1 ⎟⎟⎟
⎜⎜⎜⎜ 5 I5n−1 ⊗ a1 Z ⎟⎟⎟
⎟⎟⎟
⎜⎜⎜% & ⎟⎟⎟
⎜⎜⎜ √ n−1 ⎟⎟⎟
H5n Z = ⎜⎜⎜⎜⎜ 5 I5n−1 ⊗ a2 Z ⎟⎟⎟ . (11.137)
⎜⎜⎜% & ⎟⎟⎟
⎜⎜⎜⎜ √5n−1 I n−1 ⊗ a∗ Z ⎟⎟⎟
⎟⎟⎟
⎜⎜⎜ 5 2
⎟⎟⎟
⎜⎜⎜% & ⎟⎟⎟
⎜⎜⎝ √ n−1 ⎠
5 I5n−1 ⊗ a∗1 Z

Using the above-given notations, we have

 
(H5n−1 ⊗ i5 ) Z = (H5n−1 ⊗ i5 ) P1 ⊗ Z 1 + P2 ⊗ Z 2 + · · · + PN/5 ⊗ Z N/5
= HN/5 P1 (z0 + · · · + z4 ) + HN/5 P2 (z5 + · · · + z9 )
+ · · · + HN/5 PN/5 (zN−5 + · · · + zN−1 )
⎛ ⎞
⎜⎜⎜ z0 + z1 + · · · + z4 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ z5 + z6 + · · · + z9 ⎟⎟⎟⎟⎟
= HN/5 ⎜⎜⎜ .. ⎟⎟⎟⎟ . (11.138)
⎜⎜⎜ . ⎟⎟⎟
⎜⎝ ⎠
zN−5 + · · · + zN−1

Then, we can write

C + (H5n−1 ⊗ i5 ) = C + (H5n−1 ) + 8 · 5n−1 ,


(11.139)
C × (H5n−1 ⊗ i5 ) = C × (H5n−1 ).

√ n−1
Now we compute the complexity of the ( 5 I5n−1 ⊗ a1 )Z transform,

%√ n−1 & % √ n−1 & 


5 I5n−1 ⊗ a1 Z = 5 I5n−1 ⊗ a1 P1 ⊗ Z 1 + P2 ⊗ Z 2 + · · · + PN/5 ⊗ Z N/5
√ n−1  
= 5 P1 ⊗ a1 Z 1 + P2 ⊗ a1 Z 2 + · · · + PN/5 ⊗ a1 Z N/5 , (11.140)

from which we obtain


%√ n−1
&
C+ I5n−1 ⊗ a1 = 5n−1C + (a1 ) = 8 · 5n−1 ,
5
%√ n−1 & (11.141)
C × 5 I5n−1 ⊗ a1 = 5n−1 + 5n−1C × (a1 ) = 9 · 5n−1 .

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


378 Chapter 11

Table 11.2 Results of the complexities of the generalized Haar transforms.

Size N Addition Multiplication Shift


2n 2n+1 − 2 2n − 2 0

2 1 2 0 0
4 2 6 2 0
8 3 14 6 0
16 4 30 14 0

3n 6(3n − 3) + 11 3(3n − 3) + 4 0

3 1 11 4 0
9 2 47 22 0
27 3 155 76 0
81 4 479 238 0

4n 16(4n − 1)/3 6 · 4n−2

4 1 16 0 0
16 2 80 0 6
64 3 336 0 24
256 4 1360 0 96

5n 7(5n − 1) 6 · 5n − 14 0

5 1 28 16 0
25 2 168 136 0
125 3 968 736 0
625 4 4368 3736 0

Similarly, we find that


%√ n−1
&
C+ I5n−1 ⊗ a2 = 5n−1C + (a2 ) = 8 · 5n−1 ,
5
%√ n−1 &
C × 5 I5n−1 ⊗ a2 = 5n−1 + 5n−1C × (a1 ) = 9 · 5n−1 .
%√ n−1 &
C + 5 I5n−1 ⊗ a∗2 = 5n−1C + (a∗2 ) = 2 · 5n−1 ,
%√ n−1 & (11.142)
C × 5 I5n−1 ⊗ a∗2 = 5n−1 + 5n−1C × (a∗2 ) = 3 · 5n−1 .
%√ n−1 &
C + 5 I5n−1 ⊗ a∗1 = 5n−1C + (a∗1 ) = 2 · 5n−1 ,
%√ n−1 &
C × 5 I5n−1 ⊗ a∗2 = 5n−1 + 5n−1C × (a∗1 ) = 3 · 5n−1 .

Finally, the complexity of the H5n transform can be calculated as follows:

C + (H5n ) = 7(5n − 1),


(11.143)
C × (H5n ) = 6 · 5n − 14, n = 1, 2, . . . .

The numerical results of the complexities of the generalized Haar transforms are
given in Table 11.2.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 379

References
1. A. T. Butson, “Generalized Hadamard matrices,” Proc. Am. Math. Soc. 13,
894–898 (1962).
2. S. S. Agaian, Hadamard Matrices and their Applications, Lecture Notes in
Mathematics, 1168, Springer, New York (1985).
3. C. Mackenzie and J. Seberry, “Maximal q-ary codes and Plotkin’s bound,”
Ars Combin 26B, 37–50 (1988).
4. D. Jungnickel and H. Lenz, Design Theory, Cambridge University Press,
Cambridge, UK (1993).
5. S. S. Agaian, “Advances and problems of fast orthogonal transforms for
signal-images processing applications, Part 1,” in Ser. Pattern Recognition,
Classification, Forecasting Yearbook, The Russian Academy of Sciences,
146–215 Nauka, Moscow (1990).
6. G. Beauchamp, Walsh Functions and their Applications, Academic Press,
London (1980).
7. I. J. Good, “The interaction algorithm and practiced Fourier analysis,” J. R.
Stat. Soc. London B-20, 361–372 (1958).
8. A. M. Trachtman and B. A. Trachtman, Fundamentals of the Theory of
Discrete Signals on Finite Intervals, Sov. Radio, Moscow (1975) (in Russian).
9. P. J. Slichta, “Higher dimensional Hadamard matrices,” IEEE Trans. Inf.
Theory IT-25 (5), 566–572 (1979).
10. J. Hammer and J. Seberry, “Higher dimensional orthogonal designs and
applications,” IEEE Trans. Inf. Theory IT-27, 772–779 (1981).
11. S. S. Agaian and K. O. Egiazarian, “Generalized Hadamard matrices,” Math.
Prob. Comput. Sci. 12, 51–88 (1984) (in Russian), Yerevan.
12. W. Tadej and K. Zyczkowski, “A concise guide to complex Hadamard
matrices,” Open Syst. Inf. Dyn. 13, 133–177 (2006).
13. T. Butson, “Relations among generalized Hadamard matrices, relative
difference sets, and maximal length linear recurring sequences,” Can. J. Math.
15, 42–48 (1963).
14. R. J. Turyn, Complex Hadamard Matrices. Combinatorial Structures and their
Applications, Gordon and Breach, New York (1970) pp. 435–437.
15. C. H. Yang, “Maximal binary matrices and sum of two squares,” Math.
Comput 30 (133), 148–153 (1976).
16. C. Watari, “A generalization of Haar functions,” Tohoku Math. J. 8 (3),
286–290 (1956).
17. S. Agaian, J. Astola, and K. Egiazarian, Binary Polinomial Transforms and
Nonlinear Digital Filters, Marcel Dekker, New York (1995).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


380 Chapter 11

18. V. M. Sidel’nikov, “Generalized Hadamard matrices and their applications,”


Tr. Diskr. Mat. 3, 249–268 (2000) Fizmatlit, Moscow.
19. B. W. Brock, “Hermitian congruences and the existence and completion of
generalized Hadamard matrices,” J. Combin. Theory A 49, 233–261 (1988).
20. W. De Launey, “Generalised Hadamard matrices whose rows and columns
form a group,” in Combinatorial Mathematics X, L. R. A Casse, Ed., Lecture
Notes in Mathematics, Springer, Berlin (1983).
21. T. P. McDonough, V. C. Mavron, and C. A. Pallikaros, “Generalised
Hadamard matrices and translations,” J. Stat. Planning Inference 86, 527–533
(2000).
22. X. Jiang, M. H. Lee, R. P. Paudel, and T. C. Shin, “Codes from generalized
Hadamard matrices,” in Proc. of Int. Conf. on Systems and Networks
Communication, ICSNC’06, 67. IEEE Computer Society, Washington, DC
(2006).
23. K. J. Horadam, “An introduction to cocyclic generalized Hadamard matrices,”
Discrete Appl. Math. 102, 115–131 (2000).
24. S. A. Stepanov, “Nonlinear codes from modified Butson–Hadamard
matrices,” Discrete Math. Appl. 16 (5), 429–438 (2006).
25. J. H. Beder, “Conjectures about Hadamard matrices,” J. Stat. Plan. Inference
72, 7–14 (1998).
26. D. A. Drake, “Partial geometries and generalized Hadamard matrices,” Can.
J. Math. 31, 617–627 (1979).
27. J. L. Hayden, “Generalized Hadamard matrices,” Des. Codes Cryptog. 12,
69–73 (1997).
28. A. Pererra and K. J. Horadam, “Cocyclic generalized Hadamard matrices and
central relative difference sets,” Des. Codes Cryptog. 15, 187–200 (1998).
29. A. Winterhof, “On the non-existence of generalized Hadamard matrices,”
J. Stat. Plan. Inference 84, 337–342 (2000).
30. D. A. Drake, “Partial geometries and generalized Hadamard matrices,” Can.
J. Math. 31, 617–627 (1979).
31. P. Levy, “Sur une generalisation des fonctions orthogonales de M. Rade-
macher,” Commun. Math. Helv. 16, 146–152 (1944).
32. H. E. Chrestenson, “A class of generalized Walsh functions,” Pacific J. Math.
5, 17–31 (1955).
33. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal
Processing, Springer-Verlag, Berlin (1975).
34. M. G. Karpovsty, Finite Orthogonal Series in the Designs of Digital Devices,
John Wiley & Sons, Hoboken, NJ (1976).
35. S. L. Hurst, M. D. Miller, and J. C. Muzio, Spectral Techniques in Digital
Logic, Academic Press, New York (1985).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Extended Hadamard Matrices 381

36. K. R. Rao and R. K. Narasimham, “A family of discrete Haar transforms,”


Comput. Elec. Eng. 2, 367–368 (1975).
37. S. S. Agaian, K. O. Egiazarian, and N. A. Babaian, “A family of fast
orthogonal transforms reflecting psychophisical properties of vision,” Pattern
Recogn. Image Anal. 2 (1), 1–8 (1992).
38. J. Seberry and X. M. Zhang, “Some orthogonal designs and complex
Hadamard matrices by using two Hadamard matrices,” Austral. J. Combin.
Theory 4, 93–102 (1991).
39. S. Agaian and H. Bajadian, “Generalized orthogonal Haar systems: synthesis,
metric and computing properties,” in Proc. of Haar Memorial Conf., Collog.
Math. Soc. Janos Bolyai, 1 97–113 North-Holland, Amsterdam (1987).
40. N. Ahmed and K. R. Rao, Orthogonal Transforms for Digital Signal
Processing, Springer-Verlag, New York (1975).
41. Z. Haar, “Zur Theorie der Orthogonalen Funktionensysteme,” Math. Ann. 69,
331–371 (1914).
42. K. R. Rao and R. K. Narasimham, “A family of discrete Haar transforms,”
Comput. Elec. Eng. 2, 367–368 (1975).
43. J. Seberry and X. M. Zhang, “Some orthogonal designs and complex
Hadamard matrices by using two Hadamard matrices,” Austral. J. Combin.
Theory 4, 93–102 (1991).
44. Z. Mingyong, Z. Lui, and H. Hama, “A resolution-controllable harmonical
retrieval approach on the Chrestenson discrete space,” IEEE Trans. Signal
Process 42 (5), 1281–1284 (1994).
45. H. G. Sarukhanyan, “Fast generalized Haar transforms,” Math. Prob. Comput.
Sci. 31, 79–89 (2008) Yerevan, Armenia.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Chapter 12
Jacket Hadamard Matrices
In this chapter, we present a variation to the HT, which is called a centered-
weighted HT, such as the reverse jacket transform (RJT), complex RJT (CRJT),
extended CRJT (ECRJT) and extended CRJT over finite fields, and the generalized
RJT. Centered-weighted HTs have found several interesting applications in image
processing, communication sequencing, and cryptology that have been pointed
out.1–26 These transforms have a similar simplicity to that of the HT, but offer
a better quality of representation over the same region of the image.2 The reason
for developing this theory is motivated by the fact that (1) the human visual system
is most sensitive to the special (in general midspatial) fragments and (2) the same
part of data sequences or the middle range of frequency components are more
important.
First, we present the recursive generation of the real weighted HT matrices. Then
we introduce the methods of generating complex weighted Hadamard matrices.

12.1 Introduction to Jacket Matrices

Definition 12.1.1: The square matrix A = (ai, j ) of order n is called a jacket matrix
if its entries are nonzero and real, complex, or from a finite field, and satisfy

AB = BA = In , (12.1)

where In is the identity matrix of order n, B = n1 (a−1 T


i, j ) ; and T denotes the transpose
of the matrix. In other words, the inverse of a jacket matrix is determined by its
elementwise or blockwise inverse. The definition above may also be expressed
as
n 
−1 n, if j = t,
ai, j ai,t = j, t = 1, 2, . . . , n. (12.2)
0, if j  t,
i=1

12.1.1 Example of jacket matrices

   
1 1 1 1 1
(1) J2 = , J2−1 = , (12.3)
1 α 2 1 α

383

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


384 Chapter 12

where 1 + α = 0, α2 = 1, which means that α = −1, i.e., J2 is the Hadamard matrix


of order 2.
⎛ 1 1 ⎞⎟
 √  ⎜⎜⎜ √ ⎟⎟

1 ⎜⎜⎜⎜ a ac ⎟⎟⎟⎟⎟
a ac
(2) A= √ , A−1 = ⎜⎜⎜ ⎟; (12.4)
ac −c 2 ⎜⎜⎝ 1 1 ⎟⎟⎟
√ − ⎟⎠
ac c

hence, A is the jacket matrix for all nonzero a and c, and when a = c = 1, it is a
Hadamard matrix.
(3) In Ref. 9, the kernel jacket matrix of order 2 is defined as


a b
J2 = T , (12.5)
b −c

where a, b, c are all nonzero real numbers. Considering that J2 is orthogonal, we


should have
    2 
a b a bT a + b2 abT − bc
J2 J2T = 2I2 = T = T . (12.6)
b −c b −c b a − cb (bT )2 + c2

Therefore, we have bT = b, c = a; then, the orthogonal J2 can be rewritten as


 
a b
J2 = . (12.7)
b −a

According to Definition 12.1.1, the inverse of J2 should be rewritten as


⎛1 1 ⎞⎟
⎜⎜⎜ ⎟⎟
⎜⎜ b ⎟⎟⎟⎟ ,
J2−1 = ⎜⎜⎜⎜⎜ a ⎟ (12.8)
1 ⎟⎟⎟⎠
⎜⎝ 1

b a

where we must accept that a = b. Clearly, the result is a classical Hadamard matrix
of order 2,
 
a a
J2 = aH2 = . (12.9)
a −a

(4) Let α2 + α + 1 = 0, α3 = 1; then, we have


⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜1 1 1 ⎟⎟⎟
⎜⎜⎜ 2⎟ 1 ⎜⎜⎜⎜ ⎟
J3 = ⎜⎜1 α α ⎟⎟⎟⎟ , J3−1 = ⎜⎜⎜1 α α ⎟⎟⎟⎟ .
2
(12.10)
⎝ ⎠ 3⎝ 2⎠
1 α2 α 1 α α

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 385


(5) Let w be a third root of unity, i.e., w = exp( j 2π 3 ) = cos 3 + j sin 3 , j =
2π 2π
−1;
then, we have
⎛ ⎞
⎛ ⎞ ⎜⎜⎜1 1 1 ⎟⎟⎟
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜⎜ 1 1 ⎟⎟⎟⎟

⎜⎜⎜ 2⎟ 1 ⎜⎜ ⎟⎟
B3 = ⎜⎜1 w w ⎟⎟⎟⎟ , B3 = ⎜⎜⎜⎜1 w w2 ⎟⎟⎟⎟ .
−1
(12.11)
⎝ ⎠ 3 ⎜⎜⎜ ⎟⎟⎟
1 w2 w ⎜⎜⎝ 1 1 ⎟⎟⎠
1 2
w w

(6) Let w be a fourth root of unity, i.e., w = exp( j 2π


4 ) = cos 4 + j sin 4 = j;
2π 2π

then, we have a complex Hadamard matrix, which is a jacket matrix as well:


⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜1 1 1 1⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜1 j −1 − j⎟⎟⎟⎟⎟ 1 ⎜1 − j −1 j⎟⎟⎟⎟⎟
B4 = ( jnk )3n,k=0 = ⎜⎜⎜⎜⎜ ⎟, B−1 = ⎜⎜⎜⎜⎜ ⎟. (12.12)
⎜⎜⎜1 −1 1 −1⎟⎟⎟⎟⎟ 4
4 ⎜⎜⎜1 −1 1 −1⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
1 − j −1 j 1 j −1 − j

5 ) = cos 5 + j sin 5 ; then, we


(7) Let w be a fifth root of unity, i.e., w = exp( j 2π 2π 2π

have the Fourier matrix of order 5, which is a jacket matrix as well:


⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1 1 ⎟⎟⎟ ⎜⎜⎜1 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟
⎜⎜⎜1 w2 w3 w4 ⎟⎟⎟⎟ ⎜1 w4 w3 w2 w ⎟⎟⎟⎟
1 ⎜⎜⎜⎜⎜
w
⎜ ⎟ ⎟
B5 = (wnk )4n,k=0 = ⎜⎜⎜⎜⎜1 w2 w4 w w3 ⎟⎟⎟⎟⎟ , B−1 = ⎜⎜⎜1 w3 w w4 w2 ⎟⎟⎟⎟⎟ . (12.13)
⎜⎜⎜⎜1 ⎟ 5 ⎜⎜⎜ ⎟
5
⎜⎜⎝ w3 w w4 w2 ⎟⎟⎟⎟⎟ ⎜⎜⎜1 w2 w4 w w3 ⎟⎟⎟⎟⎟
⎠ ⎝ ⎠
1 w4 w3 w2 w 1 w w2 w3 w4

12.1.2 Properties of jacket matrices


Some preliminary properties of jacket matrices are given below.
(1) The matrix A = (ai, j )n−1
i, j=0 , over a field F, is a jacket matrix if and only if

n−1 n−1
a j,i ai,k
= = 0, for all j  k, j, k = 0, 1, . . . , n − 1. (12.14)
i=0
ai,k i=0
a j,i

A proof follows from the definition of jacket matrices.


(2) For any integer n, there exists a jacket matrix of order n.
Indeed, let A = (at, j )n−1
t, j=0 be a matrix with elements at, j = exp i n k j , where


i = −1. It is easy to show that
n−1 n−1  $
a j,t 2π
= exp i ( j − k)t = 0 for all j  k. (12.15)
t=0
ak,t t=0
n

Hence, A = (at, j )n−1


t, j=0 is the jacket matrix of order n.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


386 Chapter 12

(3) Let A = (at, j )n−1


t, j=o be a jacket matrix of order n. Then, it can be shown that
(a) If |at, j | = 1 for all t, j = 0, 1, . . . , n − 1, then A = (at, j )n−1t, j=o is a complex
Hadamard matrix.
(b) If at, j is real and a2t, j = 1 for all t, j = 0, 1, . . . , n − 1, then A = (at, j )n−1
t, j=o is a
Hadamard matrix.
(4) Let A = (at, j )n−1
t, j=o be a jacket matrix of order n. Then, it can be shown that
(a) AT , A−1 , AH are also jacket matrices {AH = [(1/a j,t )]T }.
(b) (det A)(det AH ) = nn .
(c) Proof: Because A is a jacket matrix, AAH = nI n . Thus, A−1 = (1/n)AH .
Hence, AH (AH )H = AH A = nI n . Thus, AH is a jacket matrix. Similarly, A−1
and AT are also jacket matrices. Item (b) follows from nn = det(AAH ) =
(det A), (det AH ).
(d) Let A = (at, j )n−1
t, j=o be a jacket matrix of order n and let it be that P and
Q both are diagonal or permutation matrices. Then, PAQ is also a jacket
matrix.
(5) The Kronecker product of two jacket matrices is also a jacket matrix.
Proof: Let it be that A and B are jacket matrices of order n and m, respectively.
Then, AAH = nI n and BBH = mI m . Hence,

(A ⊗ B)(A ⊗ B)H = (A ⊗ B)(AH ⊗ BH ) = AAH ⊗ BBH = mnImn . (12.16)

Thus, A ⊗ B is a jacket matrix of order mn.


(6) Let it be that A and B are jacket matrices of order n, and α is a nonzero number.
Then
 
A αB
(12.17)
A −αB

is also jacket matrix of order 2n. For example, let


⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 ⎟⎟⎟ ⎜⎜⎜1 1 1 ⎟⎟⎟
⎜⎜⎜ 2⎟ ⎜⎜⎜ ⎟
A = ⎜⎜1 w w ⎟⎟⎟⎟ , B = ⎜⎜1 w w ⎟⎟⎟⎟ ,
2
(12.18)
⎝ ⎠ ⎝ 2⎠
1 w2 w 1 w w

where w is a third root of unity. Then, the following matrix is the jacket matrix
dependent on a parameter α:
⎛ ⎞
⎜⎜⎜1 1 1 α α α ⎟⎟
⎜⎜⎜1 ⎟
⎜⎜⎜ w w2 α αw 2
αw ⎟⎟⎟⎟
⎜⎜1 2 ⎟ ⎟
w2 α αw αw ⎟⎟⎟⎟
J6 (α) = ⎜⎜⎜⎜⎜
w
⎟, (12.19a)
⎜⎜⎜1 1 1 −α −α −α ⎟⎟⎟⎟
⎜⎜⎜1 ⎟
⎜⎝ w w2 −α −αw −αw ⎟⎟⎟⎟
2
2⎠
1 w2 w −α −αw −αw

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 387

⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟

⎜⎜⎜⎜ 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1

w2 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1
⎜⎜⎜ w w2 w
⎜⎜⎜ 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
w ⎟⎟⎟⎟⎟
2
1 2
1 ⎜⎜⎜ w w w
1 ⎟⎟⎟ .
−1
J6 (α) = ⎜⎜⎜ 1 1 1 1 1 (12.19b)
6 ⎜⎜⎜ − − − ⎟⎟⎟⎟
⎜⎜⎜ α α α α α α ⎟⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ α αw2 α − α − αw2 − αw ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎝ 1 1 1 1 1 1 ⎟⎟⎟⎠
− − −
α α αw2 α α αw2
The jacket matrices and their inverse matrices of order 6 for various values of α
are given below (remember that w is a third root of unity):
(1) α = 2:
⎛ ⎞
⎜⎜⎜1 1 1 2 2 2 ⎟⎟

⎜⎜⎜⎜1 w w2 2 2w2 2w ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜1 w2 2 2w 2w2 ⎟⎟⎟⎟
J6 = ⎜⎜⎜⎜⎜
w
⎟, (12.20a)
⎜⎜⎜1 1 1 −2 −2 −2 ⎟⎟⎟⎟
⎜⎜⎜1 ⎟
⎜⎝ w w2 −2 −2w2 −2w ⎟⎟⎟⎟

1 w2 w −2 −2w −2w2
⎛ ⎞
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟

⎜⎜⎜ 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1

⎜⎜⎜ w w2
1
w2 ⎟⎟⎟⎟⎟
⎜⎜⎜ w
⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟
⎜⎜ 1 ⎟
w ⎟⎟⎟⎟⎟
1
1 ⎜⎜⎜⎜⎜ w 2 w w 2
J6−1 = ⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟ . (12.20b)
6 ⎜⎜⎜ − − − ⎟⎟⎟⎟
⎜⎜⎜ 2 2 2 2 2 2 ⎟⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − − 2 − ⎟⎟
⎜⎜⎜ 2 2
2w 2w 2 2w 2w ⎟⎟⎟⎟
⎜⎜⎜
⎜⎝ 1 1 1 1 1 1 ⎟⎟⎟
2
− − − 2⎠
2 2w 2w 2 2w 2w
(2) α = 1/2:
⎛ 1 1 1 ⎞⎟
⎜⎜⎜1 1 1 ⎟⎟ ⎛ ⎞
⎜⎜⎜ 2 2 2 ⎟⎟⎟⎟ ⎜⎜⎜1 1 1 1 1 1 ⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎟
⎜⎜⎜ 1 w 2
w ⎟⎟⎟ ⎜⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜1 w w2 ⎜⎜⎜1 ⎟⎟
2 ⎟⎟⎟⎟⎟
1
⎜⎜⎜ 2 2 ⎜⎜⎜ w w 2 w w ⎟⎟⎟⎟
2
⎜⎜ 2 ⎟
w ⎟⎟⎟ ⎜⎜⎜ 1 ⎟⎟⎟⎟
  ⎜⎜⎜⎜1 w2 w
1 w
⎟⎟   ⎜⎜ 1 1 1
⎟⎟
1 ⎜⎜ 2 ⎟⎟⎟⎟ , −1 1 1 ⎜⎜⎜⎜1 w2 1
w ⎟⎟⎟⎟ .
= ⎜⎜⎜⎜ 2 2 = ⎜⎜⎜ w w2 (12.21)
1 ⎟⎟⎟⎟ ⎟
J6 J6
2 ⎜⎜⎜ 1 1 2 6 ⎜⎜⎜2 2 2 −2 −2 −2 ⎟⎟⎟⎟
⎜⎜⎜⎜1 1 1 − − − ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜⎜⎜ 2 2 2 ⎟⎟⎟ ⎜⎜⎜ 2 ⎟⎟⎟
−2 − 2 − ⎟⎟⎟⎟⎟
⎟ 2 2 2
⎜⎜⎜ 1 w 2
w ⎟⎟⎟ ⎜⎜⎜2
⎜⎜⎜1 w w2 − − − ⎟⎟⎟⎟ ⎜⎜⎜ w2
⎜⎜⎝
w w w ⎟⎟⎟
⎜⎜⎜ 2 ⎟⎟⎟ 2 ⎟⎟
− 2⎠
2 2 2 2 2
⎜⎜⎜ 1 w w2 ⎟⎟⎟⎠ 2 −2 −
⎝1 w w −
2
− − w w 2 w w
2 2 2

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


388 Chapter 12

Consider the jacket matrix of order n of the following form:


 T
1 e
Jn = , (12.22)
e A

where e is the column vector of all 1 elements of length n − 1. The matrix A of


order n − 1 we call the core of the matrix Jn . Lee10 proves the following theorem:

Theorem 12.1.2.1: (Lee10 ) Let A1 , B1 , C1 , and D1 be the core of jacket matrices


A, B, C, and D of order n, respectively. Then, AC H = BDH if and only if
⎛ T ⎞
⎜⎜⎜1 e eT 1⎟⎟
⎜⎜⎜⎜e A1 B1 e ⎟⎟⎟⎟⎟
X2n = ⎜⎜⎜⎜e C −D −e ⎟⎟⎟⎟ (12.23)
⎜⎜⎝ 1 1 ⎟⎟⎠
1 e −e
T T
1

is a jacket matrix of order 2n, where e is the column vector of all 1 elements of
length n − 1 (remember that if A = (ai, j ), then AH = [(1/ai, j )]T ).
Proof: Because A is a jacket matrix of order n with a core A1 , we have
     
1 eT 1 eT 1 eT 1 eT
AA = A A =
H H
= = nIn . (12.24)
e A1 e A1H e A1H e A1

From Eq. (12.24), we obtain

eT + eT A1 = 0, (In−1 + A1 )e = 0,
(12.25)
eeT + A1 A1H = eeT + A1H A1 = nIn−1 .

Similarly, we find that

eT + eT B1 = 0, (In−1 + B1 )e = 0,
ee + B1 B1 = ee + B1H B1 = nIn−1 ,
T H T

eT + eT C1 = 0, (In−1 + C1 )e = 0,
(12.26)
eeT + C1C1H = eeT + C1H C1 = nIn−1 ,
eT + eT D1 = 0, (In−1 + D1 )e = 0,
eeT + D1 D1H = eeT + D1H D1 = nIn−1 .

Using Eqs. (12.25) and (12.26), we obtain


   
n 0 n 0
AC H = , BD H
= . (12.27)
0 eeT + A1C1H 0 eeT + B1 D1H

From Eq. (12.27), it follows that AC H = BDH if and only if A1C1H = B1 D1H . On the
other hand, from the fourth properties of jacket matrices given above, we obtain

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 389

C H C = nI n and DH D = nI n . Therefore, we have

AC H CAH = A(C H C)AH = n2 In ,


(12.28)
BDH DBH = B(DH D)BH = n2 In ,

from which it follows that AC H = BDH if and only if CAH = DBH . Finally, we
find that

AC H = BDH ⇔ A1C1H = B1 D1H


and CAH = DBH ⇔ XX H = 2nI2n .
 
1 1 1
Example: Let A = B = C = D = 1 w2 w2 , where w is the third root of unity.
1 w w
Then, by Theorem 12.1.2.1, we obtain the following jacket matrix of order 6:
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1⎟⎟

⎜⎜⎜⎜1 w w2 w w2 1⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜1 w2 w w2 w 1⎟⎟⎟⎟
J6 = ⎜⎜⎜⎜⎜ ⎟. (12.29)
⎜⎜⎜1 w w −w −w −1⎟⎟⎟⎟⎟
2 2
⎜⎜⎜1 w2 w −w2 −w −1⎟⎟⎟
⎜⎝ ⎟⎠
1 1 1 −1 −1 −1

12.2 Weighted Sylvester–Hadamard Matrices


In this section, we present weighted Sylvester–Hadamard matrices and their simple
decomposition, which then is used to develop the fast algorithm. The matrix
decomposition has the form of the matrix product of Hadamard matrices and
successively lower-order coefficient matrices.
The main property of weighted Sylvester–Hadamard matrices is that the inverse
matrices of their elements can be obtained very easily and have a special structure.
Using the orthogonality of Hadamard matrices, a generalized weighted Hadamard
matrix called a reverse jacket matrix with a reverse geometric structure was
constructed in Refs. 2 and 4–6
Note that the lowest order of weighted Sylvester–Hadamard matrix is 4, and the
matrix is defined as follows (see Ref. 7):
⎛ ⎞ ⎛ ⎞⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜1 1 1 1⎟ ⎜4 0 0 0⎟
⎜⎜⎜1 −2 2 −1⎟⎟⎟ 1 ⎜⎜⎜⎜⎜1 −1 1 −1⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜0 6 −2 0⎟⎟⎟⎟⎟
[S W]4 = ⎜⎜⎜⎜ ⎟⎟ = ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ .
⎜⎜⎝1 2 −2 −1⎟⎟⎟⎟⎠ 4 ⎜⎜⎜⎝⎜1 1 −1 −1⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝0 −2 6 0⎟⎟⎟⎟⎠
(12.30)
1 −1 −1 1 1 −1 −1 1 0 0 0 4

The inverse of Eq. (12.30) is


⎛ ⎞ ⎛ ⎞⎛ ⎞
⎜⎜⎜2 2 2 2⎟⎟⎟ ⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜4 0 0 0⎟⎟

1 ⎜⎜⎜2 −1 1 −2⎟⎟⎟⎟ 1 ⎜⎜⎜⎜1 −1 1 −1⎟⎟⎟⎟ ⎜⎜⎜⎜0 3 1 0⎟⎟⎟⎟
[S W]−1 = ⎜⎜⎜⎜ ⎟⎟⎟ = ⎜ ⎟⎜ ⎟.
⎟⎟⎠ 16 ⎜⎜⎜⎜⎝1 1 −1 −1⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝0 1 0⎟⎟⎟⎟⎠
(12.31)
4
8 ⎜⎜⎝ 2 1 −1 −2 3
2 −2 −2 2 1 −1 −1 1 0 0 0 4

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


390 Chapter 12

We can see that the matrix [S W]4 is derived by doubling elements of the
inner 2 × 2 submatrix of the Sylvester–Hadamard matrix. Such matrices are
called weighted Hadamard or centered matrices.4 As for the Sylvester–Hadamard
matrix, a recursive relation governs the generation of higher orders of weighted
Sylvester–Hadamard matrices and their inverses, i.e.,

[S W]2k = [S W]2k−1 ⊗ H2 , k = 3, 4, . . . ,
(12.32)
[S W]−1
2k
= [S W]−1
2k−1
⊗ H2 , k = 3, 4, . . . ,
 
where H2 = ++ +− .
The forward and inverse weighted Sylvester–Hadamard transform matrices are
given below (Fig. 12.1).
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 1 −1 1 −1 1 −1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 −2 −2 2 2 −1 −1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
1 −1 −2 2 2 −2 −1 1⎟⎟⎟⎟
[S W]8 = [S W]4 ⊗ H2 = ⎜⎜⎜⎜⎜ ⎟, (12.33a)
⎜⎜⎜1 1 2 2 −2 −2 −1 −1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 2 −2 −2 2 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜1 1 −1 −1 −1 −1 1 1⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠
1 −1 −1 1 −1 1 1 −1
1
[S W]−1
8 = [S W]−1 44 ⊗ H 2
16 ⎛ ⎞
⎜⎜⎜2 2 2 2 2 2 2 2⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜2 −2 2 −2 2 −2 2 −2⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜2 2 −1 −1 1 1 −2 −2⎟⎟⎟⎟⎟

1 ⎜⎜⎜⎜2 −2 −1 1 1 −1 −2 2⎟⎟⎟⎟⎟
= ⎜ ⎟. (12.33b)
16 ⎜⎜⎜⎜⎜2 2 1 1 −1 −1 −2 −2⎟⎟⎟⎟⎟
⎜⎜⎜⎜2 −2 1 −1 −1 1 −2 2⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜2 2 −2 −2 −2 −2 2 2⎟⎟⎟⎟⎟
⎜⎝ ⎟⎠
2 −2 −2 2 −2 2 2 −2

Let us introduce the weighted coefficient matrix as7

[RC]2n = H2n [S W]2n . (12.34)

The expression in Eq. (12.35) can be represented as

[RC]2n = (H2n−1 ⊗ H2 )([S W]2n−1 ⊗ H2 )


= (H2n−1 [S W]2n−1 ) ⊗ (H2 H2 )
= (H2n−1 [S W]2n−1 ) ⊗ (2I2 )
= 2[RC]2n−1 ⊗ I2 . (12.35)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 391

Figure 12.1 The first (a) four and (b) eight continuous weighted Sylvester–Hadamard
functions in the interval (0, 1).

Therefore, we have

[RC]2n−1 = 2[RC]2n−2 ⊗ I2 . (12.36)

Hence, continuation of this process is given by

[RC]2n = 2n−2 [RC]4 ⊗ I2n−2 . (12.37)

It can be shown that [RC]4 has the following form:


⎛ ⎞
⎜⎜⎜4 0 0 0⎟⎟⎟
⎜⎜⎜⎜ ⎟
0 6 −2 0⎟⎟⎟⎟⎟
[RC]4 = ⎜⎜⎜⎜⎜ ⎟. (12.38)
⎜⎜⎜0 −2 6 0⎟⎟⎟⎟⎟
⎝ ⎠
0 0 0 4

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


392 Chapter 12

Therefore, the weighted coefficient matrix [RC]2n can be presented as


⎛ ⎞
⎜⎜⎜4I2n−2 O2n−2 O2n−2 O2n−2 ⎟⎟⎟
⎜⎜⎜⎜ n−2 ⎟⎟
6I2n−2 −2I2n−2 O2n−2 ⎟⎟⎟⎟
[RC]2n = 2n−2 ⎜⎜⎜⎜⎜ 2
O
⎟, (12.39)
⎜⎜⎜ O2n−2 −2I2n−2 6I2n−2 O2n−2 ⎟⎟⎟⎟⎟
⎝ ⎠
O2n−2 O2n−2 O2n−2 4I2n−2

where Om is the zero matrix of order m. The 8 × 8 weighted coefficient matrix has
the following form:
⎛ ⎞
⎜⎜⎜4 0 0 0 0 0 0 0⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜0 4 0 0 0 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 6 0 −2 0 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟
0 0 6 0 −2 0 0⎟⎟⎟⎟⎟
[RC]8 = 2 · ⎜⎜⎜⎜⎜
0
⎟. (12.40)
⎜⎜⎜0 0 −2 0 6 0 0 0⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜0 0 0 −2 0 6 0 0⎟⎟⎟⎟⎟
⎜⎜⎜⎜0 0 0 0 0 0 4 0⎟⎟⎟⎟⎟

⎜⎜⎝ ⎠
0 0 0 0 0 0 0 4

Because [RC]4 is the symmetric matrix and has at most two nonzero elements
in each row and column, from Eq. (12.37) it follows that the same is true for
[RC]2n (n ≥ 2). Note that from Eq. (12.34) it follows that

1
[S W]2n = H2n [RC]2n . (12.41)
2n

Using Eq. (12.37), from Eq. (12.34), we can find that

[S W]2n = [S W]4 ⊗ H2n−2 ,


1 (12.42)
[S W]−1 −1
2n = n−2 [S W]4 ⊗ H2n−2 .
2

Note that the weighted Sylvester–Hadamard matrix [S W]2n is a symmetric matrix,


i.e., [S W]2n = [S W]T2n .

12.3 Parametric Reverse Jacket Matrices


References 2, 4, 6, and 8 introduced and investigated the reverse jacket matrices
depending only on one and three parameters. In this section, we introduce a
parametric reverse jacket matrix that is more general than the above-mentioned
matrices.
Definition 12.3.1: Let [RJ]n be a real parametric matrix of order n with elements
x1 , x2 , . . . , xr and their linear superposition, and let Hn be a Hadamard matrix of

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 393

order n. A matrix [RJ]n with the property

1 T
[RJ]n = H [RJ]n Hn (12.43)
n n
is called the parametric reverse jacket matrix. Furthermore, we will consider the
case when Hn is a Sylvester–Hadamard matrix of order n = 2k , i.e., Eq. (12.43)
takes the following form: [RJ]n = (1/n)Hn [RJ]n Hn .

Examples of parametric reverse jacket matrices of order 4 with one and two
parameters are given as follows:
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜b 1 1 b⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟
⎜1 −a a −1⎟⎟⎟⎟ ⎜1 −a a −1⎟⎟⎟⎟⎟
[RJ]4 (a) = ⎜⎜⎜⎜⎜ ⎟, [RJ]4 (a, b) = ⎜⎜⎜⎜⎜ ⎟,
⎜⎜⎜1 a −a −1⎟⎟⎟⎟⎟ ⎜⎜⎜1 a −a −1⎟⎟⎟⎟⎟
⎝ ⎠ ⎝ ⎠
1 −1 −1 1 b −1 −1 b
⎛ ⎞ ⎛ ⎞
⎜⎜⎜ ⎟⎟ ⎜⎜⎜ 1 1 ⎟⎟⎟
⎜⎜⎜1 1 1 1⎟⎟⎟⎟ ⎜⎜⎜ 1 1 ⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ b b ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ ⎜⎜⎜⎜ ⎟⎟
⎜ −
1 1
−1⎟⎟⎟⎟ ⎜⎜⎜1 − 1 1 −1 ⎟⎟⎟⎟⎟
−1 1 ⎜⎜⎜⎜1 a a ⎟⎟⎟⎟ , 1 ⎜⎜⎜ a a ⎟⎟⎟
[RJ]4 (a) = ⎜⎜⎜ [RJ]−1
44 (a, b) = ⎜ ⎟⎟⎟ .
4 ⎜⎜⎜ ⎟⎟⎟
⎟ 4 ⎜⎜⎜⎜⎜ ⎟⎟
− −1 ⎟⎟⎟⎟⎟
⎜⎜⎜1 1 1 1 1
⎜⎜⎜ − −1⎟⎟⎟⎟ ⎜⎜⎜1
⎜⎜⎜
⎜⎜⎜ a a ⎟⎟⎟ a a ⎟⎟⎟
⎟⎟⎠ ⎜⎜⎜
⎝ ⎝⎜ 1 −1 −1 1 ⎟⎟⎠⎟
1 −1 −1 1
b b
(12.44)

Hence, we can formulate the following theorem.

Theorem 12.3.1: The matrix [RJ]n is a parametric reverse jacket matrix if and
only if

[RJ]n Hn = Hn [RJ]n . (12.45)

Note that if [RJ]n is a reverse jacket matrix, then the matrix ([RJ]n )k (k is an integer)
is a reverse jacket matrix, too. Indeed, we have

([RJ]n )2 Hn = [RJ]n ([RJ]n Hn ) = [RJ]n (Hn [RJ]n )


= ([RJ]n Hn )[RJ]n = Hn ([RJ]n )2 . (12.46)

We can check that


 
a b
[RJ]2 (a, b) = (12.47)
b a − 2b

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


394 Chapter 12

is a parametric reverse jacket matrix of order 2 because we have


     
+ + a b a b + +
= . (12.48)
+ − b a − 2b b a − 2b + −

For a2 − 2ab − b2  0, we obtain


 
1 a − 2b −b
[RJ]−1
2 (a, b) = . (12.49)
a2 − 2ab − b2 −b a

Note that the matrix [RJ]2 (a, b) is a unique parametric reverse jacket matrix of
order 2.

12.3.1 Properties of parametric reverse jacket matrices

(1) The Kronecker product of two parametric reverse jacket matrices satisfies
Eq. (12.45). Indeed, let [RJ]n (x0 , . . . , xk−1 ), [RJ]m (y0 , . . . , yr−1 ) be parametric
reverse jacket matrices and Hn , Hm be Hadamard matrices of order n and m,
respectively; then, we have

([RJ]n ⊗ [RJ]m )Hmn = ([RJ]n ⊗ [RJ]m )(Hn ⊗ Hm )


= [RJ]n Hn ⊗ [RJ]m Hm
= Hn [RJ]n ⊗ Hm [RJ]m
= (Hn ⊗ Hm )([RJ]n ⊗ [RJ]m )
= Hmn ([RJ]n ⊗ [RJ]m ). (12.50)

(2) The Kronecker product of a parametric reverse jacket matrix on the reverse
jacket (nonparametric) matrix is the parametric reverse jacket matrix. Indeed,
let [RJ]n (x0 , . . . , xk−1 ), [RJ]m be parametric and nonparametric reverse jacket
matrices, and Hn , Hm be Hadamard matrices of order n and m, respectively;
then, we have

([RJ]n (x0 , . . . , xk−1 ) ⊗ [RJ]m )Hmn = ([RJ]n (x0 , . . . , xk−1 ) ⊗ [RJ]m )(Hn ⊗ Hm )
= ([RJ]n (x0 , . . . , xk−1 )Hn ) ⊗ ([RJ]m Hm )
= (Hn [RJ]n (x0 , . . . , xk−1 )) ⊗ (Hm [RJ]m )
= (Hn ⊗ Hm )([RJ]n (x0 , . . . , xk−1 ) ⊗ [RJ]m )
= Hmn ([RJ]n (x0 , . . . , xk−1 ) ⊗ [RJ]m ). (12.51)

Some examples of jacket matrices using the above-given properties are as


follows:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 395

   
a b + +
[RJ]2 (a, b) ⊗ [RJ]2 (1, 1) = ⊗
b a − 2b + −
⎛ ⎞
⎜⎜⎜a a b b ⎟⎟⎟
⎜⎜⎜ ⎟
⎜a −a b −b ⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟.
⎜⎜⎜b b a − 2b a − 2b ⎟⎟⎟⎟⎟
(12.52)
⎝ ⎠
b −b a − 2b −a + 2b
   
a b 1 2
[RJ]2 (a, b) ⊗ [RJ]2 (1, 2) = ⊗
b a − 2b 2 −3
⎛ ⎞
⎜⎜⎜ a 2a b 2b ⎟⎟⎟
⎜⎜⎜ ⎟
⎜2a −3a 2b −3b ⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟.
⎜⎜⎜ b 2b a − 2b 2a − 4b ⎟⎟⎟⎟⎟
(12.53)
⎝ ⎠
2b −3b 2a − 4b −3a + 6b
   
a b 2 1
[RJ]2 (a, b) ⊗ [RJ]2 (2, 1) = ⊗
b a − 2b 1 0
⎛ ⎞
⎜⎜⎜2a a 2b b ⎟⎟⎟
⎜⎜⎜ ⎟
⎜a 0 b 0 ⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟.
⎜⎜⎜2b b 2a − 4b a − 2b⎟⎟⎟⎟⎟
(12.54)
⎝ ⎠
b 0 a − 2b 0

Sylvester-like construction for parametric reverse jacket matrices is held, i.e., if


P2 = [RJ]2 (a, b) is a parametric reverse jacket matrix of order 2, then the matrix
 
P n−1 P2n−1
P2n = 2 (12.55)
P2n−1 −P2n−1

1 2 2 , n = 2, 3, . . ..
n
is the parametric reverse jacket matrix of order
Here we provide an example. Let P2 = 2 −3 be a reverse jacket matrix. Then,
the following matrix also is a reverse jacket matrix:
⎛ ⎞
⎜⎜⎜1 2 1 2⎟⎟⎟
  ⎜⎜⎜ ⎟
P2 P2 ⎜2 −3 2 −3⎟⎟⎟⎟⎟
P4 = = ⎜⎜⎜⎜ .
⎜⎜⎜1 2 −1 −2⎟⎟⎟⎟⎟
(12.56)
P2 −P2
⎝ ⎠
2 −3 −2 3

Note that the matrix in Eq. (12.56) satisfies the condition of Eq. (12.45) for a
Hadamard matrix of the following form:
⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟
  ⎜⎜⎜ ⎟
H2 H2 ⎜1 −1 1 −1⎟⎟⎟⎟⎟
H4 = = ⎜⎜⎜⎜ .
⎜⎜⎜1 1 −1 −1⎟⎟⎟⎟⎟
(12.57)
H2 −H2
⎝ ⎠
1 −1 −1 1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


396 Chapter 12

One can show that


     
H2 H2 R P2 P2 R P2 P2 R H2 H2 R
= , (12.58)
RH2 −RH2 R RP2 −RP2 R RP2 −RP2 R RH2 −RH2 R
0 1
where R = 1 0 . This equality means that the matrix

⎛ ⎞
⎜⎜⎜1 2 2 1⎟⎟⎟
  ⎜⎜⎜ ⎟
P2 P2 R ⎜2 −3 −3 2⎟⎟⎟⎟⎟
= ⎜⎜⎜⎜
⎜⎜⎜2 −3 3 −2⎟⎟⎟⎟⎟
(12.59)
RP2 −RP2 R
⎝ ⎠
1 2 −2 −1

is the jacket matrix according to the following Hadamard matrix:


⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟
 
⎜⎜⎜⎜1 −1 −1 1⎟⎟⎟⎟
H2 H2 R
= ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎜1 −1 1 −1⎟⎟⎟⎟⎟
(12.60)
RH2 −RH2 R
⎝ ⎠
1 1 −1 −1

The inverse matrix of Eq. (12.59) has the form


⎛ ⎞
⎜⎜⎜3 2 2 3⎟⎟⎟
⎜ ⎟
1 ⎜⎜⎜⎜2 −1 −1 3⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ . (12.61)
14 ⎜⎜⎜2 1 1 −2⎟⎟⎟⎟
⎝ ⎠
3 −2 −2 −3
 
By substituting the matrix [RJ]2 (a, b) = ab ba − 2b for P2 into Eq. (12.59), we
obtain the parametric reverse jacket matrix of order 4 depending on two parameters,
⎛ ⎞
⎜⎜⎜a b b a ⎟⎟⎟
⎜⎜⎜⎜b a − 2b a − 2b ⎟
b ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟ . (12.62)
⎜⎜⎜b a − 2b −a + 2b −b⎟⎟⎟⎟
⎜⎝ ⎠
a b −b −a

It is not difficult to check that if A, B, C are invertible matrices of order n, then


the matrix
⎛ ⎞
⎜⎜⎜A B B A⎟⎟⎟
⎜⎜⎜ ⎟
1 ⎜ B −C C −B⎟⎟⎟⎟
Q = ⎜⎜⎜⎜ ⎟⎟ (12.63)
2 ⎜⎜⎜ B C −C −B⎟⎟⎟⎟
⎝ ⎠
A −B −B A

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 397

is also an invertible matrix, and its inverse matrix is


⎛ −1 ⎞
⎜⎜⎜A B−1 B−1 A−1 ⎟⎟⎟
⎜⎜⎜ −1 ⎟
1 ⎜ B −C −1 C −1 −B−1 ⎟⎟⎟⎟
Q−1 = ⎜⎜⎜⎜ −1 ⎟⎟ . (12.64)
2 ⎜⎜⎜ B C −1 −C −1 −B−1 ⎟⎟⎟⎟
⎝ −1 ⎠
A −B−1 −B−1 A−1

Theorem 12.3.1.1: Let A, B, and C be parametric reverse jacket matrices of order


n. Then, the matrix Q from Eq. (12.63) is a parametric reverse jacket matrix of
order 4n.
Note that if A, B, and C are nonzero matrices of order 1, i.e., they are nonzero
numbers a, b, c, then the matrices in Eqs. (12.63) and (12.64) take the following
forms, respectively:
⎛1 1 1 1⎞
⎜⎜⎜ ⎟
⎜⎜⎜ a b b a ⎟⎟⎟⎟⎟
⎛ ⎞ ⎜⎜⎜ ⎟
⎜⎜⎜a b b a⎟⎟⎟ ⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟⎟

1 ⎜⎜b −c c −b⎟⎟⎟⎟ ⎜
1 ⎜⎜⎜ b c c b ⎟⎟⎟⎟ − −
Q1 (a, b, c) = ⎜⎜⎜⎜ ⎟⎟ , Q−1
1 (a, b, c) = ⎜ ⎟ . (12.65)
2 ⎜⎜⎝b c −c −b⎟⎟⎟⎠ 2 ⎜⎜⎜⎜⎜ 1 1 − 1 − 1 ⎟⎟⎟⎟⎟
a −b −b a ⎜⎜⎜ b c c b ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎝ 1 1 1 1 ⎟⎟⎟⎠
− −
a b b a
Note that the parametric reverse jacket matrix in Eq. (12.65) was introduced in
Ref. 6.
For a = c = 1, b = 2, we have
⎛ ⎞
⎜⎜⎜ 1 1 1 ⎟⎟⎟
⎛ ⎞ ⎜
⎜⎜⎜ 1 ⎟⎟⎟
⎜⎜⎜1 2 2 1⎟⎟⎟ ⎜⎜⎜ 1 2 2 ⎟⎟
⎜⎜⎜ −1 1 − ⎟⎟⎟⎟⎟ 1
1 ⎜⎜⎜⎜2 −1 1 −2⎟⎟⎟⎟ 1 ⎜⎜⎜ 2 2 ⎟⎟⎟⎟ . (12.66)
Q1 (1, 2, 1) = ⎜⎜⎜ ⎟⎟ , Q−1
1 (1, 2, 1) = ⎜
2 ⎜⎜⎝2 1 −1 −2⎟⎟⎟⎠ 2 ⎜⎜⎜⎜ 1 1 ⎟⎟
⎜⎜⎜ 1 −1 − ⎟⎟⎟⎟
1 −2 −2 1 ⎜⎜⎜ 2 2 ⎟⎟⎟
⎜⎝ 1 1 ⎟⎟
1 − − 1⎠
2 2
For a = 1, b = 2, c = 3, we have the following matrices:
⎛ ⎞
⎜⎜⎜ 1 1 1 1⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎛ ⎞ ⎜⎜⎜ 2 2
⎜⎜⎜1 2 2 1⎟⎟⎟ ⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟⎟

1 ⎜⎜2 −3 3 −2⎟⎟⎟⎟ 1 ⎜⎜⎜⎜ 2 − 3 3 − 2 ⎟⎟⎟⎟
Q1 (1, 2, 3) = ⎜⎜⎜⎜ ⎟⎟ , Q−1
1 (a, b, c) = ⎜ ⎟.
2 ⎜⎜⎝2 3 −3 −2⎟⎟⎟⎠ 2 ⎜⎜⎜⎜⎜ 1 1 − 1 − 1 ⎟⎟⎟⎟⎟
1 −2 −2 1 ⎜⎜⎜ 2 3 3 2 ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎝ 1 1 ⎟
1 − − 1⎠
2 2
(12.67)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


398 Chapter 12

Let
 
a b
A = [RJ]2 (a, b) = , B = [RJ]2 (c, d), C = [RJ]2 (c, d);
b a − 2b

then, from Eq. (12.63), we find the following reverse jacket matrix of order 8
depending on six parameters:
⎛        ⎞
⎜⎜⎜ a b c d c d a b ⎟⎟⎟
⎜⎜⎜ b a − 2b d c − 2d d c − 2d b a − 2b ⎟⎟⎟⎟
⎜⎜⎜        ⎟⎟⎟
⎜⎜⎜ c −e −f −c −d ⎟⎟⎟
⎜⎜⎜ d e f ⎟⎟
1 ⎜⎜⎜⎜ d c − 2d − f −e + 2 f f e − 2f −d −c + 2d ⎟⎟⎟⎟
[RJ]8 = ⎜⎜⎜        ⎟⎟⎟ . (12.68)
2 ⎜⎜⎜ c d e f −e −f −c −d ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ d c − 2d f e − 2f − f −e + 2 f −d −c + 2d ⎟⎟⎟⎟
⎜⎜⎜        ⎟⎟⎟
⎜⎜⎝ a b −c −d −c −d a b ⎟⎟⎟

b a − 2b −d −c + 2d −d −c + 2d b a − 2b

Using Theorem 12.3.1.1 and the parametric reverse jacket matrix Q1 (a, b, c)
from Eq. (12.43), we can construct a reverse jacket matrix of order 16 depending
on nine parameters. This matrix has the following form:
⎛ ⎞
⎜⎜⎜a b b a d e e d d e e d a b b a⎟⎟
⎜⎜⎜⎜b −c c −b e −f f −e e −f f −e b −c c −b⎟⎟⎟⎟⎟
⎜⎜⎜b −c −b −f −e −f −e −c −b⎟⎟⎟⎟⎟
⎜⎜⎜ c e f e f b c
⎜⎜⎜a
⎜⎜⎜ −b −b a d −e −e d d −e −e d a −b −b a⎟⎟⎟⎟⎟
⎜⎜⎜d e e d −g −h −h −g g h h g −d −e −e −d⎟⎟⎟⎟⎟
⎜⎜⎜ e −f −e −h −q −q −h −e −f e⎟⎟⎟⎟⎟
⎜⎜⎜ f q h h q f
⎜⎜⎜ e

f −f −e −h −q q h h q −q −h −e −f f e⎟⎟⎟⎟⎟
1 ⎜⎜⎜⎜d −e −e d −g h h −g g −h −h g −d e e −d⎟⎟⎟⎟
⎜⎜ ⎟ . (12.69)
2 ⎜⎜⎜⎜d e e d g h h g −g −h −h −g −d −e −e −d⎟⎟⎟⎟
⎜⎜⎜ e ⎟
⎜⎜⎜ −f f −e h −q q −h −h q −q h −e f −f e⎟⎟⎟⎟

⎜⎜⎜ e f −f −e h q −q −h −h −q q h −e −f f e⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜d −e −e d g −h −h g −g h h −g −d e e −d⎟⎟⎟⎟
⎜⎜⎜a ⎟
⎜⎜⎜ b b a −d −e −e −d −d −e −e −d a b b a⎟⎟⎟⎟

⎜⎜⎜b −c c −b −e f −f e −e f −f e b −c c −b⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎝b c −c −b −e −f f e −e −f f e b c −c −b⎟⎟⎟⎠
a −b −b a −d e e −d −d e e −d a −b −b a

Remark: (1) If a = b = c = d = e = f = 1, then the parametric reverse jacket


matrix from Eq. (12.68) is a Sylvester–Hadamard matrix of order 8, i.e.,
⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟⎟
⎜⎜⎜⎜+ − + − + − + −⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + − − + + − −⎟⎟⎟⎟⎟
⎜⎜⎜
+ − − + + − − +⎟⎟⎟⎟⎟
[RJ]8 (1, 1, . . . , 1) = ⎜⎜⎜⎜⎜ ⎟. (12.70)
⎜⎜⎜+ + + + − − − −⎟⎟⎟⎟⎟
⎜⎜⎜+ − + − − + − +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + − − − − + +⎟⎟⎟⎟⎟
⎝ ⎠
+ − − + − + + −

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 399

(2) If a = b = c = d = 1 and e = f = 2, then the parametric reverse jacket matrix


from Eq. (12.68) is the reverse jacket matrix of order 8 (see Ref. 7), i.e.,
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜⎜1 −1 1 −1 1 −1 1 −1⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 −2 −2 2 2 −1 −1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 −2 2 2 −2 −1 1⎟⎟⎟⎟⎟
[RJ]8 (1, 1, 1, 1, 2, 2) = ⎜⎜⎜⎜ ⎟⎟⎟ . (12.71)
⎜⎜⎜⎜1 1 2 2 −2 −2 −1 −1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 2 −2 −2 2 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜⎜1 1 −1 −1 −1 −1 1 1⎟⎟⎟⎟
⎜⎝ ⎟⎠
1 −1 −1 1 −1 1 1 −1

(3) If a = b = c = 1 and d = e = f = 2, then the parametric reverse jacket matrix


from Eq. (12.68) gives the following reverse jacket matrix of order 8:
⎛ ⎞
⎜⎜⎜1 1 1 2 1 2 1 1⎟⎟⎟
⎜⎜⎜⎜1 −1 2 −3 2 −3 1 −1⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 2 −2 −2 2 2 −1 −2⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜2 −3 −2 2 2 −2 −2 3⎟⎟⎟⎟⎟
[RJ]8 (1, 1, 1, 2, 2, 2) = ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎜1 2 2 2 −2 −2 −1 −2⎟⎟⎟⎟⎟
(12.72)
⎜⎜⎜⎜ ⎟⎟⎟⎟
⎜⎜⎜2 −3 2 −2 −2 2 −2 3⎟⎟⎟
⎜⎜⎜1 1 −1 −2 −1 −2 1 1⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠
1 −1 −2 3 −2 3 1 −1

12.4 Construction of Special-Type Parametric Reverse Jacket


Matrices
In this section, we consider a special type of parametric reverse jacket matrix. Some
elements of such matrices are fixed integers and others are parameters.
It is well known that a Walsh–Hadamard or Sylvester transform matrix of order
N = 2n can be presented as
* +N−1
HN = (−1)i, j , (12.73)
i, j=0

where i, j = in−1 jn−1 + in−2 jn−2 + · · · + i1 j1 + i0 j0 .


The parametric reverse jacket matrix of order N depending on one parameter
can be represented as8
* +N−1
[RJ]N (a) = (−1)i, j a(in−1 ⊕in−2 )( jn−1 ⊕ jn−2 ) , (12.74)
i, j=0

where ⊕ is the sign of modulo-2 addition.


Note that [RJ]N (1) is the WHT matrix. The parametric reverse jacket matrices
and their inverse matrices of order 4 and 8 corresponding to the weight a are,

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


400 Chapter 12

respectively,

⎛ ⎞
⎜⎜⎜ ⎟⎟⎟
⎛ ⎞ ⎜

⎜⎜⎜ 1 1 1 1 ⎟⎟⎟
⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜
1 1 ⎟⎟⎟
⎜⎜⎜1 −a a −1⎟⎟⎟ 1 ⎜⎜⎜ ⎜ 1 − −1 ⎟⎟⎟
[RJ]4 (a) = ⎜⎜ ⎟
−1
, [RJ]4 (a) = ⎜⎜⎜ a a ⎟⎟⎟ , (12.75a)
⎜⎜⎜1 a −a −1⎟⎟⎟ ⎟ 4 ⎜⎜⎜ ⎟⎟⎟
⎝ ⎠ ⎜⎜⎜1 1 1 ⎟⎟⎟
1 −1 −1 1 ⎜⎜⎜ − −1 ⎟⎟⎟
⎜⎝ a a ⎟⎟⎠
1 −1 −1 1
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 1 −1 1 −1 1 −1⎟⎟⎟⎟⎟
⎜⎜⎜1 1 −a −a a a −1 −1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 −a a a −a −1 1⎟⎟⎟⎟⎟
[RJ]8 (a) = ⎜⎜⎜⎜ ⎟⎟ ,
⎜⎜⎜1 1 a a −a −a −1 −1⎟⎟⎟⎟⎟
(12.75b)
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 a −a −a a −1 1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝1 1 −1 −1 −1 −1 1 1⎟⎟⎟⎟⎠
1 −1 −1 1 −1 1 1 −1
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 1 −1 1 −1 1 −1⎟⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ 1 1 − − −1 −1 ⎟⎟⎟
⎜⎜⎜ a a a a ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ 1 1 1 1 ⎟⎟⎟
⎜ 1 −1 − − −1 1 ⎟⎟⎟
−1 1 ⎜⎜⎜⎜ a a a a ⎟⎟⎟ .
[RJ]8 (a) = ⎜⎜⎜ ⎟⎟⎟ (12.75c)
8 ⎜⎜⎜ 1 1 1 1 ⎟

⎜⎜⎜⎜1 1 a a − a − a −1 −1⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 1 − 1 − 1 1 −1 1⎟⎟⎟⎟⎟
⎜⎜⎜ a a a a ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 −1 −1 −1 −1 1 1⎟⎟⎟⎟⎟
⎜⎜⎝ ⎟⎟⎠
1 −1 −1 1 −1 1 1 −1

Now, we introduce the following formulas:

[RJ]4 (a; i, j) = (−1)i, j ai1 ⊕i0 + j1 ⊕ j0 , i1 , i0 , j1 , j0 = 0, 1,


(12.76)
[RJ]8 (a; i, j) = (−1)i, j ai2 ⊕i1 + j2 ⊕ j1 , i2 , i1 , j2 , j1 = 0, 1.

One can check that the inverse matrices of Eq. (12.76) can be defined as

[RJ]4 (a; i, j) = (−1)i, j a−(i1 ⊕i0 + j1 ⊕ j0 ) , i1 , i0 , j1 , j0 = 0, 1,


(12.77)
[RJ]8 (a; i, j) = (−1)i, j a−(i2 ⊕i1 + j2 ⊕ j1 ) , i2 , i1 , j2 , j1 = 0, 1.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 401

It can be shown that the matrices generated by these formulas are also reverse
jacket matrices, examples of which are given as follows:
⎛ ⎞
⎜⎜⎜1 a a 1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜a −a2 a2 −a⎟⎟⎟⎟⎟
[RJ]4 (a) = ⎜⎜ ,
⎜⎜⎜a a2 −a2 −a⎟⎟⎟⎟⎟
⎝ ⎠
1 −a −a 1
⎛ 1 1 ⎞
⎜⎜⎜ 1 1⎟⎟⎟⎟
⎜⎜⎜ a a ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ (12.78a)
⎜⎜⎜ 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ − − ⎟
1 ⎜⎜⎜ a a2 a2 a ⎟⎟⎟⎟
[RJ]−1
4 (a) =
⎜⎜⎜ ⎟⎟ ,
4 ⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ − − ⎟
⎜⎜⎜ a a2 a2 a ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎝ 1 1 ⎟
1 − − 1⎠
a a
⎛ ⎞
⎜⎜⎜1 1 a a a a 1 1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜⎜1 −1 a −a a −a 1 −1⎟⎟⎟⎟
⎜⎜⎜a a −a2 −a2 a2 a2 −a −a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜a −a −a2 a2 a2 −a2 −a a⎟⎟⎟⎟⎟
[RJ]8 (a) = ⎜⎜⎜⎜ ⎟,
⎜⎜⎜a a a2 a2 −a2 −a2 −a −a⎟⎟⎟⎟⎟
(12.78b)
⎜⎜⎜ ⎟
⎜⎜⎜a −a a2 −a2 −a2 a2 −a a⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎝1 1 −a −a −a −a 1 1⎟⎟⎟⎟

1 −1 −a a −a a 1 −1
⎛ ⎞
⎜⎜⎜ ⎟
1 1⎟⎟⎟⎟⎟
1 1 1 1
⎜⎜⎜ 1 1
⎜⎜⎜ a a a a ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ 1 −1 1

1 1

1
−1 ⎟⎟⎟
⎜⎜⎜⎜ a a a a
1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ − 2 − 2 − − ⎟
⎜⎜⎜ a a a a a2 a2 a a ⎟⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜ − − 1 1 1 1
1 ⎜⎜⎜ a a a2 a2 a2 a2 a a ⎟⎟⎟⎟⎟⎜ − −
[RJ]−1
8 (a) = ⎜ ⎟. (12.78c)
8 ⎜⎜⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟⎟
⎜⎜⎜⎜ a a a2 a2 − a2 − a2 − a − a ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ 1 1 1 1 1 1 1 1 ⎟⎟⎟⎟
⎜⎜⎜ −
⎜⎜⎜ a a a2 − a2 − a2 a2 − a a ⎟⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟⎟⎟
⎜⎜⎜ 1 1 1 1 ⎟⎟⎟
⎜⎜⎜ 1 1 − − − − 1 1 ⎟⎟⎟
⎜⎜⎜ a a a a ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎝ 1 −1 − 1 1

1 1
1 −1⎠
a a a a
Substituting
√ into the matrices of Eqs. (12.75a)–(12.75c) and (12.78a)–(12.78c)
a = j = −1, we obtain the following complex reverse jacket matrices, which are

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


402 Chapter 12

also complex Hadamard matrices:


⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜1 1 1 1⎟⎟⎟
⎜⎜⎜1 − j j −1⎟⎟⎟⎟ 1 ⎜⎜⎜1 j − j −1⎟⎟⎟⎟
[RJ]4 ( j) = ⎜⎜⎜⎜ ⎟⎟⎟ , [RJ]−1 ( j) = ⎜⎜⎜⎜ ⎟⎟ , (12.79a)
⎜⎜⎝ 1 j − j −1 ⎟⎟⎠ 4
4 ⎜⎜⎝1 − j j −1⎟⎟⎟⎠
1 −1 −1 1 1 −1 −1 1
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜1 −1 1 −1 1 −1 1 −1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 1 − j − j j j −1 −1⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 − j j j − j −1 1⎟⎟⎟⎟⎟
[RJ]8 ( j) = ⎜⎜⎜ ⎟,
⎜⎜⎜1 1 j j − j − j −1 −1⎟⎟⎟⎟⎟
(12.79b)
⎜⎜⎜ ⎟
⎜⎜⎜1 −1 j − j − j j −1 1⎟⎟⎟⎟⎟
⎜⎜⎜1 1 −1 −1 −1 −1 1 1⎟⎟⎟
⎜⎝ ⎟⎠
1 −1 −1 1 −1 1 1 −1
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1⎟⎟⎟
⎜⎜⎜1 −1 1 −1 1 −1 1 −1⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜1 1 j j − j − j −1 −1⎟⎟⎟⎟⎟
⎜ ⎟
1 ⎜⎜⎜⎜1 −1 j − j − j j −1 1⎟⎟⎟⎟
[RJ]−1 ( j) = ⎜
⎜ ⎟⎟ , (12.79c)
8
8 ⎜⎜⎜⎜1 1 − j − j j j −1 −1⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜1 −1 − j j j − j −1 1⎟⎟⎟⎟⎟
⎜⎜⎜1 1 −1 −1 −1 −1 1 1⎟⎟⎟
⎝ ⎠
1 −1 −1 1 −1 1 1 −1
⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 j j 1⎟⎟⎟ ⎜⎜⎜ 1 − j − j 1⎟⎟⎟
⎜⎜⎜ j 1 −1 − j⎟⎟⎟ ⎜ ⎟
[RJ]4 ( j) = ⎜⎜⎜⎜ ⎟⎟⎟ , [RJ]−1 ( j) = 1 ⎜⎜⎜⎜⎜− j 1 −1 j⎟⎟⎟⎟⎟ , (12.79d)
⎜⎝⎜ j −1 1 − j⎟⎟⎠⎟ 4
4 ⎜⎜⎝⎜− j −1 1 j⎟⎟⎠⎟
1 −j −j 1 1 j j 1
⎛ ⎞
⎜⎜⎜1 1 j j j j 1 1⎟⎟⎟
⎜⎜⎜1 −1 j − j j − j 1 −1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ j j 1 1 −1 −1 − j − j⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜ j − j 1 −1 −1 1 − j j⎟⎟⎟⎟⎟
[RJ]8 ( j) = ⎜⎜ ,
⎜⎜⎜ j j −1 −1 1 1 − j − j⎟⎟⎟⎟⎟
(12.79e)
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜⎜ j − j −1 1 1 −1 − j j⎟⎟⎟⎟
⎜⎜⎜1 1 − j − j − j − j 1 1⎟⎟⎟
⎝ ⎠
1 −1 − j j − j j 1 −1
⎛ ⎞
⎜⎜⎜ 1 1 − j − j − j − j 1 1⎟⎟⎟
⎜⎜⎜ 1 −1 − j j − j j 1 −1⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜− j − j 1 1 −1 −1 j j⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
−1 1 ⎜⎜⎜− j j 1 −1 −1 1 j − j⎟⎟⎟⎟
[RJ]8 ( j) = ⎜⎜⎜ ⎟⎟ . (12.79f)
8 ⎜⎜⎜− j − j −1 −1 1 1 j j⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜⎜− j j −1 1 1 −1 j − j⎟⎟⎟⎟⎟
⎜⎜⎜ 1 1 j j j j 1 1⎟⎟⎟
⎝ ⎠
1 −1 j − j j − j 1 −1

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 403

We now introduce the notation [RJ]8 (a, b, c, d, e, f ) = (Ri, j )3i, j=0 , where Ri, j is the
parametric reverse jacket matrices of order 2 [see Eq. (12.68)], i.e.,

R0,0 = R0,3 = [RJ]2 (a, b), R0,1 = R0,2 = [RJ]2 (c, d),
R1,0 = −R1,3 = [RJ]2 (c, d), R1,1 = −R1,2 = −[RJ]2 (e, f ),
(12.80)
R2,0 = −R2,3 = [RJ]2 (c, d), R2,1 = −R2,2 = [RJ]2 (e, f ),
R3,0 = R3,3 = [RJ]2 (a, b), R3,1 = R3,2 = −[RJ]2 (c, d).

Let us consider the following matrices:


 3
(i1 ⊕i0 )( j1 ⊕ j0 )
R(1)
8 (a, b, . . . , e, f ; w) = w Ri, j ,
i, j=0
 3
(i1 ⊕i0 )+( j1 ⊕ j0 )
R(2)
8 (a, b, . . . , e, f ; w) = w Ri, j , (12.81)
i, j=0
 3
(i1 ⊕i0 ⊕ j1 ⊕ j0 )
R(3)
8 (a, b, . . . , e, f ; w) = w Ri, j .
i, j=0

The inverse matrices of Eq. (12.81) can be presented as


 −1  3
R(1)
8 (a, b, . . . , e, f ; w) = w−(i1 ⊕i0 )( j1 ⊕ j0 ) R−1
i, j i, j=0 ,
 (2) −1  3
R8 (a, b, . . . , e, f ; w) = w−(i1 ⊕i0 )−( j1 ⊕ j0 ) R−1i, j i, j=0 ,
(12.82)
 (3) −1  3
R8 (a, b, . . . , e, f ; w) = w−(i1 ⊕i0 ⊕ j1 ⊕ j0 ) R−1
i, j .
i, j=0

One can show that the matrices of Eq. (12.81) and their inverse matrices in
Eq. (12.82) are parametric reverse jacket matrices and have the following form,
respectively:
⎛ ⎞
⎜⎜⎜R0,0 R0,1 R0,1 R0,0 ⎟⎟⎟
⎜⎜⎜ ⎟
(1) ⎜⎜⎜R0,1 −wR1,1 wR1,1 −R0,1 ⎟⎟⎟⎟⎟
R8 = ⎜⎜⎜ ⎟, (12.83a)
⎜⎜⎜R0,1 wR1,1 −wR1,1 −R0,1 ⎟⎟⎟⎟⎟
⎝ ⎠
R0,0 −R0,1 −R0,1 R0,0
⎛ ⎞
⎜⎜⎜R−1 −1 −1 −1 ⎟ ⎟⎟
⎜⎜⎜ 0,0
R 0,1 R 0,1 R 0,0 ⎟ ⎟⎟
 (1) −1 ⎜⎜⎜⎜R−1 −1/w R−1 1/w R−1 −R−1 ⎟⎟⎟⎟
= ⎜⎜⎜⎜ 0,1 1,1 1,1 0,1 ⎟ ⎟,
⎜⎜⎜R−1 1/w R−1 −1/w R−1 −R−1 ⎟⎟⎟⎟⎟
R8 (12.83b)
⎜⎜⎜⎝ 0,1 1,1 1,1 0,1 ⎟ ⎟⎟
−1 ⎠
R−1
0,0 −R −1
0,1 −R −1
0,1 R 0,0
⎛ ⎞
⎜⎜⎜ R0,0 wR0,1 wR0,1 R0,0 ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜wR0,1 −w R1,1 w R1,1 −wR0,1 ⎟⎟⎟⎟⎟ ,
2 2
R(2) = ⎜⎜⎜wR ⎟ (12.83c)
⎜⎜⎝ 0,1 w R1,1 −w R1,1 −wR0,1 ⎟⎟⎟⎟⎠
8 2 2

R0,0 −wR0,1 −wR0,1 R0,0

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


404 Chapter 12

⎛ ⎞
⎜⎜⎜ −1 1 −1 1 −1 −1 ⎟

⎜⎜⎜ R0,0 R R R0,0 ⎟⎟⎟⎟
⎜⎜⎜ w 0,1 w 0,1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜ 1 −1 1 −1 1 −1 1 −1 ⎟⎟⎟⎟
 (2) −1 ⎜⎜ 0,1 ⎜ R − R R − R ⎟⎟
R8 = ⎜⎜⎜⎜⎜ w w2 1,1 w2 1,1 w 0,1 ⎟⎟⎟⎟ , (12.83d)

⎜⎜⎜ 1 −1 1 −1 1 −1 1 −1 ⎟⎟⎟⎟
⎜⎜⎜ R0,1 R − R − R ⎟⎟
⎜⎜⎜ w w2 1,1 w2 1,1 w 0,1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎝ −1 1 1 −1 −1 ⎟ ⎟⎠
R0,0 − R−1 − R R
w 0,1 w 0,1 0,0
⎛ ⎞
⎜⎜⎜ R0,0 wR0,1 wR0,1 R0,0 ⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜wR0,1 −R1,1 R1,1 −wR0,1 ⎟⎟⎟⎟
R(3) = ⎜⎜⎜wR ⎟, (12.83e)
8
⎜⎜⎝ 0,1 R1,1 −R1,1 −wR0,1 ⎟⎟⎟⎟⎟

R0,0 −wR0,1 −wR0,1 R0,0
⎛ ⎞
⎜⎜⎜ R−1 1 R−1 1 R−1 R−1 ⎟⎟
⎜⎜⎜ 0,0 w 0,1 w 0,1 0,0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜ 1 −1 1 −1 ⎟ ⎟⎟⎟
⎜⎜⎜ R0,1 −R−1 R −1
− R
 (3) −1 ⎜⎜ w 1,1 1,1
w 0,1 ⎟⎟⎟⎟
= ⎜⎜⎜⎜ ⎟⎟ .
1 −1 ⎟⎟⎟⎟
R8 (12.83f)
⎜⎜⎜ 1 −1 −1 −1
⎜⎜⎜ R0,1 R1,1 −R1,1 − R0,1 ⎟⎟⎟
⎜⎜⎜ w w ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟
⎜⎝ −1 1 −1 1 −1 ⎟⎟⎠
R0,0 − R0,1 − R0,1 R−1 0,0
w w

12.5 Fast Parametric Reverse Jacket Transform


As follows from Theorem 12.3.1.1, the general form of parametric reverse jacket
matrix has the following form [see also Eq. (12.63)]:
⎛ ⎞
⎜⎜⎜A B B A⎟⎟⎟
⎜⎜⎜ ⎟
1 ⎜ B −C C −B⎟⎟⎟⎟⎟
Q = ⎜⎜⎜⎜⎜ ⎟. (12.84)
2 ⎜⎜⎜ B C −C −B⎟⎟⎟⎟⎟
⎝ ⎠
A −B −B A

A, B, C are also parametric reverse jacket matrices of order n. Let X = (x0 , x1 ,


. . . , xN−1 )T be an input signal-vector column of length N = 4n. We split this vector
in four part as follows:

X = (X0 , X1 , X2 , X3 )T , (12.85)

where

X0T = (x0 , x1 , . . . , xn−1 ), X1T = (xn , xn+1 , . . . , x2n−1 ),


(12.86)
X2T = (x2n , x2n+1 , . . . , x3n−1 ), X3T = (x3n , x3n+1 , . . . , x4n−1 ).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 405

Figure 12.2 Flow graph of a Q transform [see Eq. (12.87)].

Now the parametric RJT can be presented as (the coefficient 1/2 is omitted)
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜A B B A⎟⎟⎟ ⎜⎜⎜X0 ⎟⎟⎟ ⎜⎜⎜A(X0 + X3 ) + B(X1 + X2 ) ⎟⎟⎟
⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟
⎜ B −C C −B⎟⎟⎟ ⎜⎜⎜X1 ⎟⎟⎟ ⎜⎜⎜ B(X0 − X3 ) − C(X1 − X2 )⎟⎟⎟⎟⎟
QX = ⎜⎜⎜⎜ = .
⎜⎜⎜ B C −C −B⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜X2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ B(X0 − X3 ) + C(X1 − X2 )⎟⎟⎟⎟⎟
(12.87)
⎝ ⎠⎝ ⎠ ⎝ ⎠
A −B −B A X3 A(X0 + X3 ) − B(X1 + X2 )

A flow graph of the Eq. (12.87) transform is given in Fig. 12.2.


It is not difficult to calculate that the number of required operations for the
Eq. (12.87) transform is given by
+
CQ (N) = 2N + C A+ (n) + 2C +B (n) + CC+ (n),
× (12.88)
CQ (N) = C A× (n) + 2C ×B (n) + CC× (n),

where C P+ (n) and C P× (n) denote the number of additions and multiplications of the
jacket transform P. Below, we present in detail the reverse jacket transforms for
some small orders.

12.5.1 Fast 4 × 4 parametric reverse jacket transform


12.5.1.1 One-parameter case
(1) Let X = (x0 , x1 , x2 , x3 ) be a column vector. Consider the parametric reverse
jacket matrix with one parameter given in Eq. (12.74). The forward 1D
parametric RJT depending on one parameter can be calculated as
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜1 1 1 1⎟⎟⎟ ⎜⎜⎜ x0 ⎟⎟⎟ ⎜⎜⎜(x0 + x3 ) + (x1 + x2 ) ⎟⎟⎟
⎜⎜⎜1 −a a −1⎟⎟⎟ ⎜⎜⎜ x ⎟⎟⎟ ⎜⎜⎜(x − x ) − a(x − x )⎟⎟⎟
[RJ]4 (a)X = ⎜⎜⎜⎜ ⎟⎟ ⎜⎜ 1 ⎟⎟ = ⎜⎜ 0 3 1 2 ⎟
⎟ . (12.89)
⎜⎜⎝1 a −a −1⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝ x2 ⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝(x0 − x3 ) + a(x1 − x2 )⎟⎟⎟⎟⎠
1 −1 −1 1 x3 (x0 + x3 ) − (x1 + x2 )

We see that the parametric RJT needs eight addition and one multiplication
operations. The higher-order parametric RJT matrix generated by the

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


406 Chapter 12

Figure 12.3 Flow graph of an 8-point parametric reverse jacket transform.

Kronecker product has the following form (N = 2n ):

[RJ]N (a) = [RJ]4 (a) ⊗ HN/4 . (12.90)

Taking into account Eq. (12.88), we find that

+ × N
C[RJ] (a) = N log2 N, C[RJ] (a) = . (12.91)
N N
4
From Eq. (12.75a), it follows that the inverse 1D parametric RJT has the same
complexity. Note that if a is a power of 2, then we have

+ shi f t N
C[RJ] (a) = N log2 N, C[RJ] (a) = , (12.92)
N N 4
where
 
+ +
H2 = , X T = (x0 , x1 , . . . , x7 ), Y T = (y0 , y1 , . . . , y7 ),
+ −
(12.93)
X0T = (x0 , x1 ), X1T = (x2 , x3 ), X2T = (x4 , x5 ), X3T = (x6 , x7 ),
Y0T = (y0 , y1 ), Y1T = (y2 , y3 ), Y2T = (y4 , y5 ), Y3T = (y6 , y7 ).

A flow graph of an 8-point [RJ]8 (a)X = ([RJ]4 (a) ⊗ H2 ) = Y transform is given


in Fig. 12.3.
(2) Consider the parametric RJT from Eq. (12.78a). Let X = (x0 , x1 , x2 , x3 ) be a
column vector. The forward 1D parametric RJT depending on one parameter
can be calculated as
⎛ ⎞⎛ ⎞
⎜⎜⎜1 a a 1⎟⎟ ⎜⎜ x0 ⎟⎟
⎟⎟ ⎜ ⎟
⎜⎜⎜
a −a2 a2 −a⎟⎟⎟⎟ ⎜⎜⎜⎜ x1 ⎟⎟⎟⎟
[RJ]4 (a)X = ⎜⎜⎜⎜⎜ ⎟⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎝a a −a −a⎟⎟⎟⎠ ⎜⎜⎝ x2 ⎟⎟⎠
2 2

1 −a −a 1 x3
⎛ ⎞
⎜⎜⎜(x0 + x3 ) + a(x1 + x2 ) ⎟⎟⎟
⎜⎜⎜ ⎟
a(x − x3 ) − a (x1 − x2 )⎟⎟⎟⎟
2
= ⎜⎜⎜⎜⎜ 0 ⎟⎟ . (12.94)
⎜⎜⎝a(x0 − x3 ) + a (x1 − x2 )⎟⎟⎟⎠
2

(x0 + x3 ) − a(x1 + x2 )

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 407

Figure 12.4 Flow graph of the 8-point parametric RJT in Eq. (12.94).

We see that this transform needs only eight addition and three multiplication
operations. The higher-order parametric RJT matrix generated by the
Kronecker product has the following form (N = 2n ):

[RJ]N (a) = [RJ]4 (a) ⊗ HN/4 . (12.95)

Taking into account Eq. (12.88), we find that

+ × 3N
C[RJ] (a) = N log2 N, C[RJ] (a) = . (12.96)
N N
4
The same complexity is required for the inverse parametric RJT from
Eq. (12.78a). Note that if a is the power of 2, then we have

+ shi f t 3N
C[RJ] (a) = N log2 N, C[RJ] (a) = . (12.97)
N N 4
A flow graph of an 8-point [RJ]8 (a)X = ([RJ]4 (a) ⊗ H2 )X = Y transform [see
Eqs. (12.93) and (12.94)] is given in Fig. 12.4.

12.5.1.2 Case of three parameters


Let X = (x0 , x1 , x2 , x3 ) be a column vector. Consider the parametric reverse jacket
matrix with three parameters given in Eq. (12.65). The forward 1D parametric RJT
depending on three parameters can be calculated as
⎛ ⎞⎛ ⎞
⎜⎜⎜a b b a⎟⎟⎟ ⎜⎜⎜ x0 ⎟⎟⎟
⎜⎜⎜b −c c −b⎟⎟⎟⎟ ⎜⎜⎜⎜ x1 ⎟⎟⎟⎟
Q1 (a, b, c)X = ⎜⎜⎜⎜ ⎟⎜ ⎟
⎜⎜⎝b c −c −b⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝ x2 ⎟⎟⎟⎟⎠
a −b −b a x3
⎛ ⎞
⎜⎜⎜a(x0 + x3 ) + b(x1 + x2 )⎟⎟⎟ ⎛⎜⎜y0 ⎞⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟
⎜b(x − x3 ) − c(x1 − x2 ) ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y1 ⎟⎟⎟⎟⎟
= ⎜⎜⎜⎜ 0 ⎟ = ⎜ ⎟. (12.98)
⎜⎜⎜⎝b(x0 − x3 ) + c(x1 − x2 ) ⎟⎟⎟⎟⎠ ⎜⎜⎜⎝y2 ⎟⎟⎟⎠
a(x0 + x3 ) − b(x1 + x2 ) y3

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


408 Chapter 12

Figure 12.5 Flow graph of the 4-point transform in Eq. (12.98).

Figure 12.6 Flow graph of the N-point transform in Eq. (12.99).

From Eq. (12.98), we can see that the forward 1D parametric RJT of order 4
requires eight addition and four multiplication operations. A flow graph of the
4-point transform in Eq. (12.98) is given in Fig. 12.5.
Note that if a, b, c are powers of 2, then the forward 1D parametric RJT of order 4
can be performed without multiplication operations. It requires only eight addition
and four shift operations.
A parametric reverse jacket matrix of higher order N = 2k (k > 2) can be
generated recursively as

[RJ]N (a, b, c) = [RJ]4 (a, b, c) ⊗ HN/4 . (12.99)

It can be shown that the complexity of a three-parameter RJT in Eq. (12.99) is


equal to

C N+ (a, b, c) = N log2 N, C N× (a, b, c) = N. (12.100)

A flow graph of an N-point transform in Eq. (12.99) is given in Fig. 12.6.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 409

12.5.2 Fast 8 × 8 parametric reverse jacket transform


In this section, we will consider a parametric reverse jacket matrix [RJ]8 (a, b, c,
d, e, f ) [see Eq. (12.68)] with a varying number of parameters.
12.5.2.1 Case of two parameters
Let a = b = c = d, e = f , and X and Y be input and output of vectors. From the
matrix in Eq. (12.68), we find that
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜a a a a a a a a⎟⎟ ⎜⎜ x0 ⎟⎟ ⎜⎜y0 ⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜a −a a −a a −a a −a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y1 ⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜a
⎜⎜⎜ a −e −e e e −a −a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y2 ⎟⎟⎟⎟⎟
⎟ ⎜ ⎟
⎜a −a −e −e −a a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x3 ⎟⎟⎟⎟ ⎜⎜⎜⎜y3 ⎟⎟⎟⎟
[RJ]8 (a, e)X = ⎜⎜⎜⎜⎜
e e
⎟⎟ ⎜⎜⎜ ⎟⎟⎟ = ⎜⎜⎜ ⎟⎟⎟ .
⎟ (12.101)
⎜⎜⎜a a e e −e −e −a −a⎟⎟ ⎜⎜ x4 ⎟⎟ ⎜⎜y4 ⎟⎟
⎜⎜⎜a ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜ −a e −e −e e −a a⎟⎟⎟⎟ ⎜⎜⎜⎜ x5 ⎟⎟⎟⎟ ⎜⎜⎜⎜y5 ⎟⎟⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜a a −a −a −a −a a a⎟⎟⎟⎟ ⎜⎜⎜⎜ x6 ⎟⎟⎟⎟ ⎜⎜⎜⎜y6 ⎟⎟⎟⎟
⎝ ⎠⎝ ⎠ ⎝ ⎠
a −a −a a −a a a −a x7 y7

Hence, the 8-point transform [RJ]8 (a, e)X = Y can be computed as

y0 = a[(x0 + x7 ) + (x1 + x6 )] + a[(x2 + x5 ) + (x3 + x4 )],


y6 = a[(x0 + x7 ) + (x1 + x6 )] − a[(x2 + x5 ) + (x3 + x4 )],
y1 = a[(x0 − x7 ) − (x1 − x6 )] + a[(x2 − x5 ) − (x3 − x4 )],
y7 = a[(x0 − x7 ) − (x1 − x6 )] − a[(x2 − x5 ) − (x3 − x4 )],
(12.102)
y2 = a[(x0 − x7 ) + (x1 − x6 )] − e[(x2 − x5 ) + (x3 − x4 )],
y4 = a[(x0 − x7 ) + (x1 − x6 )] + e[(x2 − x5 ) + (x3 − x4 )],
y3 = a[(x0 + x7 ) − (x1 + x6 )] − e[(x2 + x5 ) − (x3 + x4 )],
y5 = a[(x0 + x7 ) − (x1 + x6 )] + e[(x2 + x5 ) − (x3 + x4 )].

From Eq. (12.102), it follows that an 8-point parametric RJT with two parameters
needs 24 addition and eight multiplication operations. A flow graph of this
transform is given in Fig. 12.7.
12.5.2.2 Case of three parameters
(1) Let a = b, c = d, and e = f . From the matrix in Eq. (12.68), we find that
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜a a c c c c a a⎟⎟ ⎜⎜ x0 ⎟⎟ ⎜⎜y0 ⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜⎜a −a c −c c −c a −a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y1 ⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ c c −e −e e e −c −c⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟
−c −e −e −c c⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x3 ⎟⎟⎟⎟ ⎜⎜⎜⎜y3 ⎟⎟⎟⎟
[RJ]8 (a, c, e)X = ⎜⎜⎜⎜⎜
c e e
⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ = ⎜⎜⎜ ⎟⎟⎟ . (12.103)
⎜⎜⎜ c c e e −e −e −c −c⎟⎟ ⎜⎜ x4 ⎟⎟ ⎜⎜y4 ⎟⎟
⎜⎜⎜ c ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜ −c e −e −e e −c c⎟⎟⎟⎟ ⎜⎜⎜⎜ x5 ⎟⎟⎟⎟ ⎜⎜⎜⎜y5 ⎟⎟⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜a a −c −c −c −c a a⎟⎟⎟⎟ ⎜⎜⎜⎜ x6 ⎟⎟⎟⎟ ⎜⎜⎜⎜y6 ⎟⎟⎟⎟
⎝ ⎠⎝ ⎠ ⎝ ⎠
a −a −c c −c c a −a x7 y7

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


410 Chapter 12

Figure 12.7 Flow graph of the 8-point transform in Eq. (12.102).

From Eq. (12.103), we obtain


y0 = a[(x0 + x7 ) + (x1 + x6 )] + c[(x2 + x5 ) + (x3 + x4 )],
y6 = a[(x0 + x7 ) + (x1 + x6 )] − c[(x2 + x5 ) + (x3 + x4 )],
y1 = a[(x0 − x7 ) − (x1 − x6 )] + c[(x2 − x5 ) − (x3 − x4 )],
y7 = a[(x0 − x7 ) − (x1 − x6 )] − c[(x2 − x5 ) − (x3 − x4 )],
(12.104)
y2 = c[(x0 − x7 ) + (x1 − x6 )] − e[(x2 − x5 ) + (x3 − x4 )],
y4 = c[(x0 − x7 ) + (x1 − x6 )] + e[(x2 − x5 ) + (x3 − x4 )],
y3 = c[(x0 + x7 ) − (x1 + x6 )] − e[(x2 + x5 ) − (x3 + x4 )],
y5 = c[(x0 + x7 ) − (x1 + x6 )] + e[(x2 + x5 ) − (x3 + x4 )].

From Eq. (12.104), it follows that an 8-point parametric RJT with three parameters
needs 24 addition and eight multiplication operations. A flow graph of this
transform is given in Fig. 12.8.
(2) Let a = b = c = d. From the matrix in Eq. (12.68), we find (below r = e−2 f )
that
⎛a a a a a a a a⎞ ⎛ x ⎞ ⎛y ⎞
⎜⎜⎜ ⎟⎟ ⎜⎜ 0 ⎟⎟ ⎜⎜ 0 ⎟⎟
⎜⎜⎜⎜a −a a −a a −a a −a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y1 ⎟⎟⎟⎟⎟
⎜⎜⎜a a −e − f e f −a −a⎟⎟⎟ ⎜⎜⎜ x2 ⎟⎟⎟ ⎜⎜⎜y2 ⎟⎟⎟
⎜⎜⎜a −a − f −r f r −a a⎟⎟⎟ ⎜⎜⎜ x ⎟⎟⎟ ⎜⎜⎜y ⎟⎟⎟
[RJ]8 (a, e, f )X = ⎜⎜⎜⎜a a e f −e − f −a −a⎟⎟⎟⎟ ⎜⎜⎜⎜ x3 ⎟⎟⎟⎟ = ⎜⎜⎜⎜y3 ⎟⎟⎟⎟ , (12.105)
⎜⎜⎜ ⎟ ⎜ 4⎟ ⎜ 4⎟
⎜⎜⎜a −a f r − f −r −a a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x5 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y5 ⎟⎟⎟⎟⎟
⎜⎜⎜a a −a −a −a −a a a⎟⎟⎟ ⎜⎜⎜ x ⎟⎟⎟ ⎜⎜⎜y ⎟⎟⎟
⎝ ⎠ ⎝ 6⎠ ⎝ 6⎠
a −a −a a −a a a −a x7 y7

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 411

Figure 12.8 Flow graph of the 8-point transform in Eq. (12.103).

where
y0 = a[(x0 + x7 ) + (x1 + x6 )] + a[(x2 + x4 ) + (x3 + x5 )],
y1 = a[(x0 − x7 ) − (x1 − x6 )] + a[(x2 + x4 ) − (x3 + x5 )],
y2 = a[(x0 − x7 ) + (x1 − x6 )] − e(x2 − x4 ) − f (x3 − x5 ),
y3 = a[(x0 + x7 ) − (x1 + x6 )] − f (x2 − x4 ) − r(x3 − x5 ),
(12.106)
y4 = a[(x0 − x7 ) + (x1 − x6 )] + e(x2 − x4 ) + f (x3 − x5 ),
y5 = a[(x0 + x7 ) − (x1 + x6 )] + f (x2 − x4 ) + r(x3 − x5 ),
y6 = a[(x0 + x7 ) + (x1 + x6 )] − a[(x2 + x4 ) + (x3 + x5 )],
y7 = a[(x0 − x7 ) − (x1 − x6 )] − a[(x2 + x4 ) − (x3 + x5 )].

We see that the 8-point parametric RJT in Eq. (12.105) with three parameters
needs 24 addition and 10 multiplication operations. A flow graph of this transform
is given in Fig. 12.9.
12.5.2.3 Case of four parameters
(1) Let a = b, c = d. From the matrix in Eq. (12.68), we find that
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜a a c c c c a a⎟⎟⎟ ⎜⎜⎜ x0 ⎟⎟⎟ ⎜⎜⎜y0 ⎟⎟⎟
⎜⎜⎜⎜a −a c −c c −c a −a⎟⎟⎟⎟ ⎜⎜⎜⎜ x1 ⎟⎟⎟⎟ ⎜⎜⎜⎜y1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜ c c −e − f e f −c −c⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟
c −c − f −r f r −c c⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x3 ⎟⎟⎟⎟ ⎜⎜⎜⎜y3 ⎟⎟⎟⎟
[RJ]8 (a, c, e, f )X = ⎜⎜⎜⎜⎜ ⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ = ⎜⎜⎜ ⎟⎟⎟ , (12.107)
⎜⎜⎜ c c e f −e − f −c −c⎟⎟⎟ ⎜⎜⎜ x4 ⎟⎟⎟ ⎜⎜⎜y4 ⎟⎟⎟
⎜⎜⎜ c −c f r − f −r −c c⎟⎟⎟ ⎜⎜⎜ x5 ⎟⎟⎟ ⎜⎜⎜y5 ⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜a a −c −c −c −c a a⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x6 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y6 ⎟⎟⎟⎟⎟
⎝ ⎠⎝ ⎠ ⎝ ⎠
a −a −c c −c c a −a x7 y7

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


412 Chapter 12

Figure 12.9 Flow graph of the 8-point transform in Eq. (12.105).

where r = e − 2 f and

y0 = a[(x0 + x7 ) + (x1 + x6 )] + c[(x2 + x4 ) + (x3 + x5 )],


y1 = a[(x0 − x7 ) − (x1 − x6 )] + c[(x2 + x4 ) − (x3 + x5 )],
y2 = c[(x0 − x7 ) + (x1 − x6 )] − [e(x2 − x4 ) + f (x3 − x5 )],
y3 = c[(x0 + x7 ) − (x1 + x6 )] − [ f (x2 − x4 ) + r(x3 − x5 )],
(12.108)
y4 = c[(x0 − x7 ) + (x1 − x6 )] + [e(x2 − x4 ) + f (x3 − x5 )],
y5 = c[(x0 + x7 ) − (x1 + x6 )] + [ f (x2 − x4 ) + r(x3 − x5 )],
y6 = a[(x0 + x7 ) + (x1 + x6 )] − c[(x2 + x4 ) + (x3 + x5 )],
y7 = a[(x0 − x7 ) − (x1 − x6 )] − c[(x2 + x4 ) − c(x3 + x5 )].

From Eq. (12.108), it follows that an 8-point parametric RJT with four parameters
needs 24 addition and 10 multiplication operations. A flow graph of this transform
is given in Fig. 12.10.
(2) Let e = f and c = d. From the matrix in Eq. (12.68), we find
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜a b c c c c a b⎟⎟ ⎜⎜ x0 ⎟⎟ ⎜⎜y0 ⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜⎜b p c −c c −c b p⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y1 ⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ c c −e −e e e −c −c⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y2 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜ ⎟
c −c −e −e −c c⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x3 ⎟⎟⎟⎟ ⎜⎜⎜⎜y3 ⎟⎟⎟⎟
[RJ]8 (a, b, c, e)X = ⎜⎜⎜⎜⎜
e e
⎟⎟⎟ ⎜⎜⎜ ⎟⎟⎟ = ⎜⎜⎜ ⎟⎟⎟ , (12.109)
⎜⎜⎜ c c e e −e −e −c −c⎟⎟ ⎜⎜ x4 ⎟⎟ ⎜⎜y4 ⎟⎟
⎜⎜⎜ c −c e ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜ −e −e e −c c⎟⎟⎟⎟ ⎜⎜⎜⎜ x5 ⎟⎟⎟⎟ ⎜⎜⎜⎜y5 ⎟⎟⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜a b −c −c −c −c a b⎟⎟⎟⎟ ⎜⎜⎜⎜ x6 ⎟⎟⎟⎟ ⎜⎜⎜⎜y6 ⎟⎟⎟⎟
⎝ ⎠⎝ ⎠ ⎝ ⎠
b p −c c −c c b p x7 y7

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 413

Figure 12.10 Flow graph of the 8-point transform in Eq. (12.107).

where p = a − 2b and
y0 = a(x0 + x6 ) + b(x1 + x7 ) + c[(x2 + x4 ) + (x3 + x5 )],
y1 = b(x0 + x6 ) + p(x1 + x7 ) + c[(x2 + x4 ) − (x3 + x5 )],
y2 = c[(x0 − x6 ) + (x1 − x7 )] − e[(x2 − x4 ) + (x3 − x5 )],
y3 = c[(x0 − x6 ) − (x1 − x7 )] − e[(x2 − x4 ) − (x3 − x5 )],
(12.110)
y4 = c[(x0 − x6 ) + (x1 − x7 )] + e[(x2 − x4 ) + (x3 − x5 )],
y5 = c[(x0 − x6 ) − (x1 − x7 )] + e[(x2 − x4 ) − (x3 − x5 )],
y6 = a(x0 + x6 ) + b(x1 + x7 ) − c[(x2 + x4 ) + (x3 + x5 )],
y7 = b(x0 + x6 ) + p(x1 + x7 ) − c[(x2 + x4 ) − (x3 + x5 )].

From Eq. (12.110), it follows that an 8-point parametric RJT with four
parameters needs 24 addition and 10 multiplication operations. A flow graph of
this transform is given in Fig. 12.11.
12.5.2.4 Case of five parameters
(1) Let e = f . From the matrix in Eq. (12.68), we find that
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜a b c d c d a b⎟⎟ ⎜⎜ x0 ⎟⎟ ⎜⎜y0 ⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜b p d q d q b p⎟⎟⎟⎟ ⎜⎜⎜⎜ x1 ⎟⎟⎟⎟ ⎜⎜⎜⎜y1 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜ c
⎜⎜⎜ d −e −e e e −c −d⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y2 ⎟⎟⎟⎟⎟
⎜d −e −e −d −q⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x3 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y3 ⎟⎟⎟⎟⎟
[RJ]8 (a, b, c, d, e)X = ⎜⎜⎜⎜⎜
q e e
⎟⎜ ⎟ = ⎜ ⎟, (12.111)
⎜⎜⎜⎜ c d e e −e −e −c −d⎟⎟⎟⎟ ⎜⎜⎜⎜ x4 ⎟⎟⎟⎟ ⎜⎜⎜⎜y4 ⎟⎟⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜d q e −e −e e −d −q⎟⎟⎟⎟ ⎜⎜⎜⎜ x5 ⎟⎟⎟⎟ ⎜⎜⎜⎜y5 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎝a b −c −d −c −d a b⎟⎟⎠⎟⎟ ⎜⎜⎜⎜⎝ x6 ⎟⎟⎟⎟⎠ ⎜⎜⎜⎜⎝y6 ⎟⎟⎟⎟⎠
b p −d −q −d −q b p x7 y7

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


414 Chapter 12

Figure 12.11 Flow graph of the 8-point transform in Eq. (12.109).

where p = a − 2b, q = c − 2d, and

y0 = [a(x0 + x6 ) + b(x1 + x7 )] + [c(x2 + x4 ) + d(x3 + x5 )],


y1 = [b(x0 + x6 ) + p(x1 + x7 )] + [d(x2 + x4 ) + q(x3 + x5 )],
y2 = [c(x0 − x6 ) + d(x1 − x7 )] − [e(x2 − x4 ) + e(x3 − x5 )],
y3 = [d(x0 − x6 ) + q(x1 − x7 )] − [e(x2 − x4 ) − e(x3 − x5 )],
(12.112)
y4 = [c(x0 − x6 ) + d(x1 − x7 )] + [e(x2 − x4 ) + e(x3 − x5 )],
y5 = [d(x0 − x6 ) + q(x1 − x7 )] + [e(x2 − x4 ) − e(x3 − x5 )],
y6 = [a(x0 + x6 ) + b(x1 + x7 )] − [c(x2 + x4 ) + d(x3 + x5 )],
y7 = [b(x0 + x6 ) + p(x1 + x7 )] − [d(x2 + x4 ) + q(x3 + x5 )].

From Eq. (12.112), it follows that an 8-point parametric RJT with five
parameters needs 24 addition and 14 multiplication operations. A flow graph of
this transform is given in Fig. 12.12.
12.5.2.5 Case of six parameters
Let X = (x0 , x1 , . . . , x7 ) be a column vector. The forward 1D parametric reverse
jacket transforms depending on six parameters [see Eq. (12.68)] can be realized as

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 415

Figure 12.12 Flow graph of the 8-point transform in Eq. (12.111).

follows:
⎛ ⎞⎛ ⎞ ⎛ ⎞
⎜⎜⎜a b c d c d a b⎟⎟ ⎜⎜ x0 ⎟⎟ ⎜⎜y0 ⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜b p d q d q b p⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x1 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y1 ⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜ c
⎜⎜⎜ d −e −f e f −c −d⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x2 ⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜y2 ⎟⎟⎟⎟⎟
⎟ ⎜ ⎟
⎜d −f −r −d −q⎟⎟⎟⎟⎟ ⎜⎜⎜⎜⎜ x3 ⎟⎟⎟⎟ ⎜⎜⎜⎜y3 ⎟⎟⎟⎟
[RJ]8 (a, b, c, d, e, f )X = ⎜⎜⎜⎜⎜
q f r
⎟⎟ ⎜⎜⎜ ⎟⎟⎟ = ⎜⎜⎜ ⎟⎟⎟ , (12.113)

⎜⎜⎜⎜ c d e f −e −f −c −d⎟⎟ ⎜⎜ x4 ⎟⎟ ⎜⎜y4 ⎟⎟
⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎜d q f r −f −r −d −q⎟⎟⎟⎟ ⎜⎜⎜⎜ x5 ⎟⎟⎟⎟ ⎜⎜⎜⎜y5 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎜ ⎟ ⎜ ⎟
⎜⎜⎝a b −c −d −c −d a b⎟⎟⎟⎟ ⎜⎜⎜⎜ x6 ⎟⎟⎟⎟ ⎜⎜⎜⎜y6 ⎟⎟⎟⎟
⎠⎝ ⎠ ⎝ ⎠
b p −d −q −d −q b p x7 y7

where p = a − 2b, q = c − 2d, r = e − 2 f , and yi is defined as

y0 = [a(x0 + x6 ) + b(x1 + x7 )] + [c(x2 + x4 ) + d(x3 + x5 )],


y1 = [b(x0 + x6 ) + p(x1 + x7 )] + [d(x2 + x4 ) + q(x3 + x5 )],
y2 = [c(x0 − x6 ) + d(x1 − x7 )] − [e(x2 − x4 ) + f (x3 − x5 )],
y3 = [d(x0 − x6 ) + q(x1 − x7 )] − [ f (x2 − x4 ) + r(x3 − x5 )],
(12.114)
y4 = [c(x0 − x6 ) + d(x1 − x7 )] + [e(x2 − x4 ) + f (x3 − x5 )],
y5 = [d(x0 − x6 ) + q(x1 − x7 )] + [ f (x2 − x4 ) + r(x3 − x5 )],
y6 = [a(x0 + x6 ) + b(x1 + x7 )] − [c(x2 + x4 ) + d(x3 + x5 )],
y7 = [b(x0 + x6 ) + p(x1 + x7 )] − [d(x2 + x4 ) + q(x3 − x5 )].

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


416 Chapter 12

Figure 12.13 Flow graph of the 8-point transform in Eq. (12.113).

From Eq. (12.114), we can see that a forward 1D parametric RJT of order
8 requires 24 addition and 16 multiplication operations. A flow graph of this
transform is given in Fig. 12.13.

References
1. S. S. Agaian, K. O. Egiazarian, and N. A. Babaian, “A family of fast
orthogonal transforms reflecting psychophisical properties of vision,” Pattern
Recogn. Image Anal. 2 (1), 1–8 (1992).
2. M. Lee and D. Kim, Weighted Hadamard transformation for S/N ratio
enhancement in image transformation, in Proc. of IEEE Int. Symp. Circuits
and Syst. Proc., Vol. 1, May, Montreal, 65–68 (1984).
3. D. M. Khuntsariya, “The use of the weighted Walsh transform in problems of
effective image signal coding,” GPI Trans. Tbilisi. 10 (352), 59–62 (1989).
4. M.H. Lee, Ju.Y. Park, M.W. Kwon and S.R. Lee, The inverse jacket matrix
of weighted Hadamard transform for multidimensional signal processing, in
Proc. 7th IEEE Int. Symp. Personal, Indoor and Mobile Radio Communi-
cations, PIMRC’96, 15–18 Oct. pp. 482–486 (1996).
5. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall,
Englewood Cliffs, NJ (1993).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Jacket Hadamard Matrices 417

6. M. H. Lee and S. R. Lee, “On the reverse jacket matrix for weighted Hadamard
transform,” IEEE Trans. Circuits Syst. 45, 436–441 (1998).
7. M. H. Lee, “A new reverse jacket transform and its fast algorithm,” IEEE
Trans. Circuits Syst. 47 (1), 39–47 (2000).
8. M. Lee, B. Sundar Rajan, and J. Y. Park, “Q generalized reverse jacket
transform,” IEEE Trans. Circuits Syst.-II 48 (7), 684–690 (2001).
9. J. Hou, J. Liu and M.H. Lee, “Doubly stochastic processing on jacket
matrices,” in Proc. IEEE Region 10 Conference: TENCON, 21–24 Nov. 2004,
1, 681–684 (2004).
10. M.H. Lee, “Jacket matrix and its fast algorithms for cooperative wireless signal
processing,” Report, 92 (July 2008).
11. M. H. Lee, “The center weighted Hadamard transform,” IEEE Trans. Circuits
Syst. 36 (9), 1247–1249 (1989).
12. K. J. Horadam, “The jacket matrix construction,” in Hadamard Matrices and
their Applications, 85–91 Princeton University Press, London (2007) Chapter
4.5.1.
13. W. P. Ma and M. H. Lee, “Fast reverse jacket transform algorithms,” Electron.
Lett. 39 (18), 47–48 (2003).
14. M. H. Lee, “A new reverse jacket transform and its fast algorithm,” IEEE
Trans. Circuits Syst. II 47 (1), 39–47 (2000).
15. M. H. Lee, B. S. Rajan, and J. Y. Park, “A generalized reverse jacket
transform,” IEEE Trans. Circuits Syst. II 48 (7), 684–691 (2001).
16. G.L. Feng and M.H. Lee, “An explicit construction of co-cyclic Jacket matrices
with any size,” in Proc. of 5th Shanghai Conf. on Combinatorics, May 14–18,
Shanghai (2005).
17. R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge Univ.
Press, New York (1991).
18. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error Correcting Codes,
Elsevier, Amsterdam (1988).
19. E. Viscito and P. Allebach, “The analysis and design of multidimensional FIR
perfect reconstruction filter banks for arbitrary sampling lattices,” IEEE Trans.
Circuits Syst. 38, 29–41 (1991).
20. P. P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice-Hall,
Englewood Cliffs, NJ (1993).
21. S.R. Lee and M.H. Lee, “On the reverse jacket matrix for weighted Hadamard
transform,” Schriftenreihe des Fachbereichs Math., SM-DU-352, Duisburg
(1996).
22. M.H. Lee, “Fast complex reverse jacket transform,” in Proc. 22nd Symp. on
Information Theory and Its Applications: SITA99, Yuzawa, Niigata, Japan.
Nov. 30–Dec. 3 (1999).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


418 Chapter 12

23. M. H. Lee, B. S. Rajan, and J. Y. Park, “A generalized reverse jacket


transform,” IEEE Trans. Circuits Syst. II 48 (7), 684–690 (2001).
24. M.G. Parker and M.H. Lee, “Optimal bipolar sequences for the complex
reverse jacket transform,” in Proc. of Int. Symp. on Information Theory and
Applications, Honolulu, Hawaii, 1, 425–428 (2000).
25. C. P. Fan and J.-F. Yang, “Fast center weighted Hadamard transform
algorithms,” IEEE Trans. Circuits Syst. II 45 (3), 429–432 (1998).
26. M. H. Lee and M. Kaven, “Fast Hadamard transform based on a simple matrix
factorization,” IEEE Trans. Acoust. Speech Signal Process 34, 1666–1668
(1986).

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Chapter 13
Applications of Hadamard
Matrices in Communication
Systems
Modern communication systems and digital signal processing (signal modeling),1,2
image compression and image encoding,3 and digital signal processing systems4
are heavily reliant on statistical techniques to recover information in the presence
of noise and interference. One of the mathematical structures used to achieve
this goal is the Hadamard matrix.4–17 Historically, Plotkin18 first showed the
error-correcting capabilities of codes generated from Hadamard matrices. Later,
Bose and Shrikhande19 found the connection between Hadamard matrices and
symmetrical block code designs. In this chapter, we will discuss some of these
applications in error-control coding and in CDMAs.

13.1 Hadamard Matrices and Communication Systems


13.1.1 Hadamard matrices and error-correction codes
The storage and transmission of digital data lies at the heart of modern computers
and communications systems. When a message is transmitted, it has the potential
to become scrambled by noise. The goal of this section is to provide a brief
introduction to the basic definitions, goals, and constructions in coding theory. We
describe some of the classical algebraic constructions of error-correcting codes,
including the Hadamard codes. The Hadamard codes are relatively easy to decode;
they are the first large class of codes to correct more than a single error. A
Hadamard code was used in the Mariner and Voyager space probes to encode
information transmitted back to the Earth when the probes visited Mars and the
outer planets of the solar system from 1969 to 1976.20 Mariner 9 was a space shuttle
whose mission was to fly to Mars and transmit pictures back to Earth. Fig. 13.1 is
one of the pictures transmitted by 9. With Mariner 5, six-bit pixels were encoded
using 32-bit long Hadamard code that could correct up to seven errors.

13.1.2 Overview of Error-Correcting Codes


The basic communication scenario between a sender and receiver is that the sender
wants to send k-message symbols over a noisy channel by encoding the k-message

419

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


420 Chapter 13

Figure 13.1 Part of the Grand Canyon on Mars. This photograph was transmitted by
Mariner 9 (from Ref. 20).

symbols into n-symbols. The receiver obtains a received word consisting of


n-symbols and tries to decode and recover the original k-message symbols, even
if pieces are corrupted. The receiver wants to correct as many errors as possible.
In order for the receiver to recover (decode) the correct message, even after the
channel corrupts the transmitted k-message symbols, the sender, instead of sending
the k-bit message, encodes the message by adding several redundancy bits, and
instead, sends an n-bit encoding of it across the channel. The encoding is chosen
in such a way that a decoding algorithm exists to recover the message from a
“codeword” that has not been too badly corrupted by the channel.
Formally, a code C is specified by an injective map E : Σk → Σn that maps
k-symbol messages to the n-symbol codeword, where Σ is the underlying set of
symbols called the alphabet. For this example, we will only consider the binary
alphabet (i.e., Σ = {0, 1}). The map E is called the encoding. The image of E is
the set of codewords of the code C. Sometimes, we abuse notation and refer to the
set of codewords E : {0, 1}k → {0, 1}n as the code, where k is referred to as the
message length of the code C, while n is called the block length.
Error-correcting code is a “smart technique” of representing data so that one
can recover the original information, even if parts of it are corrupted. Ideally, we
would like a code that is capable of correcting all errors that are due to noise;
we do not want to waste time sending extraneous data. It is natural that the more
errors that a code needs to correct per message digit, the less efficient is the time
transmission. In addition, there are also probably more complicated encoding and
decoding schemes in such a message.
One of the key goals in coding theory is the design of optimal error-correcting
codes. In addition, we would also like to have easy encoding/decoding systems that
can be very easily implemented by hardware.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 421

Figure 13.2 A digital channel for error-control coding.

Figure 13.3 Shannon (Bell Laboratories) and Hamming (AT&T Bell Laboratories) (from
http://www-groups.dcs.st-and.ac.uk/history/Bioglndex.html).

The key components of a typical communication system and the relationships


between those components are depicted graphically in Fig. 13.2. This is outlined
as follows:
• The sender takes the k-message symbols, which uses E (encoder) to convert it to
n-symbols or codewords suitable for transmission. This is transmitted over the
channel (air waves, microwaves, radio waves, telephone lines, etc.).
• The receiver obtains and converts the message signal back into a form useful for
the receiver (decoded process).
• A message sink often tries to detect and correct problems (reception errors)
caused by noise.
The fundamental questions that communication systems theory investigates are
as follows:
• How much information passes through a channel?
• How can one detect and correct errors brought into the channel?
• How can one easily encode and decode systems?
• How can one achieve better reliability of the transmission?
Major input for the development of coding theory comes from Shannon and
Hamming (see Fig. 13.3). A mathematical theory of communication was developed

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


422 Chapter 13

in 1940 by C. Shannon of Bell Laboratories. In 1948, Shannon21 discovered that


it is possible to have the best of both worlds, i.e., good error correction and a fast
transmission rate. However, Shannon’s theorem does not tell us how to design such
codes. The first progress was made by Hamming (see Ref. 22).
In telecommunications, a redundancy check is to add extra data to a message
for the purposes of error detection. Next, we will examine how to construct an
error-correcting code.

Example 13.1.2.1: Let us assume that we want to send the message 1101. Suppose
that we receive 10101. Is there an error? If so, what is the correct bit pattern? To
answer these questions, we add a 0 or 1 to the end of this message so that the
resulting message has an even number of ls. Thus, we may encode 1101 as 11011.
If the original message were 1001, we would encode that as 10010, because the
original message already had an even number of ls. Now, consider receiving the
message 10101. Because the number of ls in the message is odd, we know that an
error has been made in transmission. However, we do not know how many errors
occurred in transmission or which digit or digits were affected. Thus, a parity check
scheme detects errors, but does not locate them for correction. The number of extra
symbols is called the redundancy of the code.
All error-detection codes (which include all error-detection-and-correction
codes) transmit more bits than were in the original data. We can imagine that as
the number of parity bits increases, it should be possible to correct more errors.
However, as more and more parity check bits are added, the required transmission
bandwidth increases as well. Because of the resultant increase in bandwidth, more
noise is introduced, and the chance of error increases. Therefore, the goal of the
error-detection-and-correction coding theory is to choose extra added data in such
a way that it corrects as many errors as possible, while keeping the communications
efficiency as high as possible.

Example 13.1.2.2: The parity check code can be used to design a code that can
correct an error of one bit. Let the input message symbol have 20 bits: (10010
01101 10110 01101).
Parity check error-detection algorithm:
Input: Suppose we have 20 bits and arrange them in a 4 × 5 array:

1 0 0 1 0
0 1 0 0 1
(13.1)
1 0 1 1 0
0 1 1 0 1

Step 1. Calculate the parity along the rows and columns and define the last
bit in the lower right by the parity of the column/row of parity bits:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 423

1 0 0 1 0 : 0
0 1 0 0 1 : 0
1 0 1 1 0 : 1
(13.2)
0 1 1 0 1 : 1
.. .. .. .. .. . ..
0 0 0 0 0 . 0

Step 2. This larger matrix is sent.


Step 3. Suppose that an error occurs at the third row, fourth column. Then
the fourth column and third row parity checks will fail. This locates
the error and allows us to correct it.
Note that this scheme can detect two errors, but cannot correct “2” errors.
A block code is a set of words that has a well-defined mathematical property or
structure, and where each word is a sequence of a fixed number of bits. The words
belonging to a block code are called codewords. Table 13.1 shows an example of a
simple block code with five-bit codewords where each codeword has odd (i.e., an
odd number of 1s) and even (i.e., an even number of 1s) parity block codes.
A codeword consists of information bits that carry information pairs, and parity
checks that carry no information in the sense of that carried by the information
bits, but ensure that the codeword has the correct structure required by the block
code. Blocks of information bits, referred to as information words, are encoded
into codewords by an encoder for the code. The encoder determines the parity bits
and appends them to the information word, so giving a codeword.
A code whose codewords have k information bits and r parity bits has n-bit
codewords, where n = k+r. Such a code is referred to as an (n, k) block code, where
n and k are, respectively, the block length and information length of the code. The
position of the parity bits within a codeword is quite arbitrary. Fig. 13.4 shows a

Table 13.1 Odd-parity and


even-parity block codes with
five-bit codewords.

(00001) (00000)
(00010) (00011)
(00100) (00101)
(00111) (00110)
(01000) (01001)
(01011) (01010)
(01101) (01100)
(01110) (01111)
(10000) (10001)
(10011) (10010)
(10101) (10100)
(10110) (10111)
(11001) (11000)
(11010) (11011)
(11100) (11101)
(11111) (11110)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


424 Chapter 13

Figure 13.4 An n-bit systematic codeword.

codeword whose parity bits are on the right-hand side of the information bits:
n = k + r bits
The rate R of a code is a useful measure of the redundancy within a block code
and is defined as the ratio of the number of information bits to the block length,
R = k/n. Informally, a rate is the amount of information (about the message)
contained in each bit of the codeword. We can see that the code rate is bounded
by 0 ≤ R ≤ 1.
For a fixed number of information bits, the code rate R tends to 0 as the number
of parity bits r increases. Take the case where the code rate R = 1 if n = k. This
means that no coding occurs because there are no parity bits. Low code rates reflect
high levels of redundancy. Several definitions are provided, as follows.
The Hamming distance d(v1 , v2 ) of two codewords v1 and v1 , having the same n
number of bits, is defined as the number of different positions of words v1 and v2 ,
or
d(v1 , v2 ) = v11 ⊕ v12 + v21 ⊕ v22 + · · · + vn1 ⊕ vn2 , (13.3)

where ⊕ is the sum of modulo-2.


The Hamming weight w(v) of the codeword v is the number of nonzero elements
in v. For example, for codewords

v = (01101101) and u = (10100010) (13.4)

the Hamming weights and distance are

w(v) = w(01101101) = 5, w(u) = w(10100010) = 3,


d(u, v) = d(01101101, 10100010) = 6. (13.5)

The minimum distance d(C) of a code C is the minimum number of all


Hamming distances between distinct codewords, i.e. d(C) = mini j d(vi , v j ). The
minimum distance is found by taking a pair of codewords, determining the distance
between them, and then repeating this for all pairs of different codewords. The
smallest value obtained is the minimum distance of the code. It easy to verify that
both of the codes given in Example 13.1.2.2 and in Table 13.1 have the minimum
distance 2.
In coding theory, codes whose encoding and decoding operations may be
expressed in terms of linear operations are called linear codes. A block code is
said to be a linear code if the sum of modulo-2 of any two codewords gives another
codeword of that code. Hence, if ci and c j are the codewords of a linear code, then
ck = ci ⊕ c j is also a codeword, where ⊕ is the sign of modulo-2 addition.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 425

The linear code C, with n code length, m information symbols, and minimum
distance d, is said to be an [n, m, d] linear code. We will refer to an any code C that
maps m message bits to n codewords with distance d as an (n, m, d) code. Hence, a
linear code of dimension m contains 2m codewords.
A linear code has the following properties:
• The all-zero word (0 0 0 · · · 0) is always a codeword.
• A linear code can be described by a set of linear equations, usually in the shape
of a single matrix, called the parity check matrix. That is, for any [n, k, d] linear
code C, there exists an (n − k) × n matrix P such that
c∈C ⇔ cPT = 0.
• For any given three codewords ci , c j , and ck such that ck = c1 ⊕ c j , the
distance between two codewords equals the weight of its sum codewords, i.e.,
d(ci , c j ) = w(ck ).
• The minimum distance of the code dmin = wmin , where wmin is the weight of any
nonzero codeword with the smallest weight.
The third property is of particular importance because it enables the minimum
distance to be found quite easily. For an arbitrary block code, the minimum distance
is found by considering the distance between all codewords. However, with a
linear code, we only need to evaluate the weight of every nonzero codeword. The
minimum distance of the code is then given by the smallest weight obtained. This
is much quicker than considering the distance between all codewords. Because an
[n, m, d] linear code encodes a message of length m as a codeword of length n, the
redundancy of a linear [n, m, d] code is n − m.

13.1.3 How to create a linear code


Let S be a set of vectors from a vector space, and let (S ) be the set of all linear
combinations of vectors from S . Then, for any subset S of a linear space, (S ) is a
linear space that consists of the following words: (1) the zero word, (2) all words
in S , and (3) all sums of two or more words in S .
Example 13.1.3.1: Let S = {v1 , v2 , v3 , v4 , v5 } = {01001, 11010, 11100, 00110,
10101}. Then, we obtain

(S ) = {v0 , v1 , v2 , v3 , v4 , v5 , v1 ⊕ v2 , v1 ⊕ v3 , v1 ⊕ v4 , v1 ⊕ v5 , v2 ⊕ v3 ,
v2 ⊕ v4 , v2 ⊕ v5 , v3 ⊕ v4 , v3 ⊕ v5 , v4 ⊕ v5 }, (13.6)
or
(S ) = {00000, 01001, 11010, 11100, 00110, 10101, 10011, 10101
01111, 11100, 00110, 11100, 01111, 11010, 01001, 10011}. (13.7)

The advantages of linear codes are as follows:


• Minimal distance d(C) is easy to compute if C is an [n, m, d] linear code.
• Linear codes provide an easy description of detectable and correctable errors,
i.e., to specify a linear [n, m] code, it is enough to list m codewords.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


426 Chapter 13

• There are simple encoding/decoding procedures for linear codes.


• Easy computation of error probabilities and other properties exists.
• Several families of linear codes are known.
• It is easy to choose one for an application.
Definition: In a k × n matrix, whose rows form a basis of a linear [n, k] code
(subspace), C is said to be the generator matrix of C.
Example 13.1.3.2: From the base (generator) 3 × 4 matrix
⎧ ⎫


⎪ 0 0 1 1⎪ ⎪

⎨ ⎬

⎪ 0 1 0 1 ⎪ , (13.8)
⎩1 0 0 1 ⎪
⎪ ⎪

we obtain the following code (codewords):


⎧ ⎫


⎪ 0 0 0 0⎪ ⎪



⎪ ⎪





0 0 1 1 ⎪




⎪ 0 1 0 1 ⎪



⎨ ⎪


⎪ 1 0 0 1 ⎪
⎪ . (13.9)


⎪ ⎪





0 1 1 0 ⎪




⎪ 1 0 1 0⎪ ⎪



⎩1 1 0 0 ⎪ ⎪

The 4 × 7 base matrix


⎧ ⎫



1 1 1 1 1 1 1⎪




⎨1 0 0 0 1 0 1⎪




⎪ 0⎪


(13.10)


⎩0
1 1 0 0 0 1 ⎪

1 1 0 0 0 1⎭

generates the following codewords:


⎧ ⎫



0 0 0 0 0 0 0⎪




⎪ 1⎪





1 1 1 1 1 1 ⎪




⎪ 1 0 0 0 1 0 1⎪





⎪ ⎪


⎪ 1 1 0 0 0 1 0⎪




⎪ ⎪


⎪ 0 1 1 0 0 0 1⎪




⎪ ⎪


⎪ 0 1 1 1 0 1 0⎪




⎪ ⎪


⎪ 0 0 1 1 1 0 1⎪




⎪ ⎪

⎨1 0 0 1 1 1 0⎪


⎪ .
1⎪
⎪ ⎪ (13.11)


⎪ 0 1 0 0 1 1 ⎪




⎪ ⎪

0⎪



1 1 1 0 1 0 ⎪




⎪ 1 0 1 0 0 1 1⎪




⎪ ⎪



⎪ 1 0 1 1 1 0 0⎪




⎪ ⎪


⎪ 0 0 0 1 0 1 1⎪




⎪ ⎪



⎪ 0 1 0 1 1 0 0⎪






⎪ 0⎪



⎪ 0 0
⎩1 1
1 0 1 1 ⎪

0 1 0 0 1⎭

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 427

Theorem 13.1.3.1: Let C be a binary (n, k, d) code. Then, k ≤ 2n–d+1 .


Proof: For a codeword c = (a1 , a2 , . . . , an ), define c = (ad , ad−1 , . . . , an ), that is,
cut out the first d −1 places from c. If c1  c2 are any two codewords from the (n, k,
d) code, then they differ in at least d places. Because c1 , c2 are arrived at from c1 , c2
by cutting d − 1 entries, c1 , c2 differ at least in one place, hence c1  c2 . Therefore,
the number k of codewords in C is at most the number of vectors c obtained in this
way. Because there are at most 2n−d+1 vectors c, we have k ≤ 2n–d+1 .
If the distance of a codeword is large, and if not too many codeword bits are
corrupted by the channel (more precisely, if not more than d/2 bits are flipped),
then we can uniquely decode the corrupted codeword by picking the codeword
with the smallest Hamming distance from it. Note that for this unique decoding
to work, it must be the case that there are no more than d/2 errors caused by the
channel.

13.1.4 Hadamard code


Hadamard code is one of the family [2n , 2n+1 , 2n–1 ] codes. Remember that in this
code there is a subset of binary sets described by the following parameters: n is a
length of code, d is a minimal distance between various codewords, and M is the
number of codewords (capacity of a code).
Theorem 13.1.4.1: (Peterson22 ) If there is an n × n Hadamard matrix, then a
binary code with 2n code vectors of length n and minimum distance n/2 exists.
The [n, 2n, n/2] Hadamard code construction algorithm:
Input: A normalized Hadamard matrix Hn of order n.
Step 1. Generate matrix C2n :
 
Hn
C2n = . (13.12)
−Hn

Step 2. Generate the set 2n vectors from vi and −vi , i = 1, 2, . . . , n, where


vi is a row of the matrix Hn .
Step 3. Generate codeword, replacing +1 with 0, and −1 with 1.
Output: Codewords, i.e., the set 2n of binary vectors of length n.
Example 13.1.4.1:8,16,4 Hadamard code. This code is obtained from the
Sylvester–Hadamard matrix of order 8:
⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟

⎜⎜⎜+ − + − + − + −⎟⎟⎟⎟⎟
⎜⎜⎜
⎜⎜⎜+ + − − + +
⎜⎜⎜ − −⎟⎟⎟⎟⎟  
⎜+ − − + + − − +⎟⎟⎟⎟⎟
H8 = ⎜⎜⎜⎜⎜
H
⎟⎟⎟ ⇒ 8 (13.13)
⎜⎜⎜+ + + + − − − −⎟⎟

−H8
⎜⎜⎜+ − + − − + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + − − − − + +⎟⎟⎟⎟
⎝ ⎠
+ − − + − + + −

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


428 Chapter 13

or, the codewords of H8c are


⎛ ⎞
⎜⎜⎜+ + + + + + + +⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ 0 + 0 + 0 + 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜+ + 0 0 + + 0 0 ⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜+ 0 0 + + 0 0 +⎟⎟⎟⎟
H8c = ⎜⎜⎜⎜ ⎟⎟ . (13.14)
⎜⎜⎜+ + + + 0 0 0 0 ⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ 0 + 0 0 + 0 +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+ + 0 0 0 0 + +⎟⎟⎟⎟
⎝ ⎠
+ 0 0 + 0 + + 0

We changed the encoding alphabet from {−1, 1} to {0, 1}. It is also possible to
change the encoding alphabet from {−1, 1} to {1, 0}.
The codewords of C16 are

1111 1111, 1010 1010,


1100 1100, 1001 1001,
1111 0000, 1010 0101,
1100 0011, 1001 0110
0000 0000, 0101 0101,
0011 0011, 0110 0110,
0000 1111, 0101 1010,
0011 1100, 0110 1001.

Properties: Let vi be the i’th row of the vector C2n . It is not difficult to show the
following:
• This code has 2n codewords of length n.
• d(vi , −vi ) = n and d(vi , v j ) = d(−vi , −v j ) = n/2 for i  j, i, j = 1, 2, . . . , n,
which means that the minimum distance between any distinct codewords is n/2.
Hence, the constructed code has minimal distance n/2 and the code corrects
n/4 − 1 errors in an n-bit encoded block, and also detects n/4 errors.
• The Hadamard codes are optimal for this Plotkin distance/bound (see more
detail in the next section).
• The Hadamard codes are self-dual.
• Let e be a vector of 1s and −1s of length n. If vector e differs from vi
(a) in at most n/4−1 positions, then it differs from v j in at least n/4+1 positions,
whenever i  j.
(b) in n/4 positions, then it differs from v j in at least n/4 positions.
• A generator matrix of the Hadamard code of length 2n has an (n + 1) × 2n
rectangular generator matrix with 0, 1 elements. A Hadamard code of length
16 based on a 5 × 24 generator matrix has the form

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 429

Figure 13.5 Matrix of the Hadamard code (32, 6, 16) for the NASA space probe Mariner
9 (from Ref. 20).
⎛ ⎞
⎜⎜⎜1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1⎟⎟

⎜⎜⎜1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0⎟⎟⎟⎟
⎜⎜ ⎟
G16 = ⎜⎜⎜⎜⎜1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0⎟⎟⎟⎟ .
⎟ (13.15)
⎜⎜⎜1 0⎟⎟⎟⎟⎠
⎜⎝ 1 0 0 1 1 0 0 1 1 0 0 1 1 0
1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0

Note that the corresponding code encodes blocks of length five to blocks of
length 16. In practice, columns 1, 9, 5, 3, and 2 of matrix G16 form a basis for the
code. Every codeword from C16 is representable as a unique linear combination
of basis vectors. The generator matrix of the (32, 6, 16) Hadamard code (based
on Hadamard matrix of order 16) is a 6 × 32 rectangular matrix. The technical
characteristics of this code are: (1) codewords are 32 bits long and there are 64
of them, (2) the minimum distance is 16, and (3) it can correct seven errors. This
code was used on Mariner’s space mission in 1969 (see Fig. 13.5).
The encoding algorithm is also simple:
Input: The received (0, 1) signal vector v of length n (n must be divided to 4,
i.e., n is the order of Hadamard matrix).
Step 1. Replace each 0 by +1 and each 1 by −1 of the received signal v.
Denote the resulting vector y.
Step 2. Compute the n-point FHT u = HyT of the vector y.
Step 3. Find the maximal by modulus coefficient of the vector u.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


430 Chapter 13

Step 4. Apply the decision rule as follows:


(a) If the maximum absolute value of the transform coefficient ui
is positive, then take the i’th codeword.
(b) If the transform coefficient ui is negative, then take the
(i + 2n )’th codeword.
Step 5. Change all −1s to zeroes and all 1s to 1.
Output: The codeword that was sent.
An example of an encoding algorithm:
Input: Assume that a = (00011010) is the received word.
Step 1. Generate the following vector v by changing all zeroes to −1s:

a = (00011010) → v = (−1, −1, −1, 1, 1, −1, 1, −1). (13.16)

Step 2. Calculate an eight-point HT s = H8 vT

s = H8 vT = (−2, 2, −2, 2, −2, −6, −2, 2). (13.17)

Step 3. Find the absolute value of the largest component, i.e., s6 = −6.
Step 4. Apply the decision rule: the 8 + 6 = 14 codeword was sent, i.e.,
(−1, 1, −1, 1, 1, −1, 1, −1).
Step 5. Change all −1s to zeroes.
Output: Codeword that has been sent: 01011010.

Remarks:
• It can be shown that the Hadamard code is the first-order Reed–Muller code
in the case of q = 2. These codes are some of the oldest error-correcting
codes. Reed–Muller codes were invented independently in 1954 by Muller and
Reed. Reed–Muller codes are relatively easy to decode, and first-order codes
are especially efficient. Reed–Muller codes are the first large class of codes to
correct more than a single error. From time to time, the Reed–Muller code is
used in magnetic data storage systems.
• The Sylvester–Hadamard matrix codes are all linear codes.
• It is possible to construct the normalized Hadamard matrices Hn -based codes by
replacing in Hn each of the elements +1 to 0, and −1 to 1 (denote it Qn ). For
instance,
• [n − 1, n, n/2] code An consisting of the rows of Qn with the first column (of
1s) deleted.
• [n−1, 2n, n/2−1] code Bn consisting of the rows of An and their complements.
• [n, 2n, n/2] code Cn consisting of the rows of Qn and their complements.
• In general, Hadamard codes are not necessarily linear codes. A Hadamard code
can be made linear by forming a code with the generator matrix (In , Hn ), where
Hn is a binary Hadamard matrix of order n.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 431

Figure 13.6 Graphical representation of the Hadamard code with generator matrix (I4 , H4 ).

13.1.5 Graphical representation of the (7, 3, 4) Hadamard code

Example:40 Fig. 13.6 shows a graphical representation of the (7, 3, 4) 2-Hadamard


code with generator matrix (I4 , H4 ), where H4 is a binary Hadamard matrix of order
4:
⎛ ⎞
⎜⎜⎜1 0 0 0 0 1 1⎟⎟⎟
⎜⎜⎜0 1 0 0 1 0 1⎟⎟⎟
⎜⎜⎜ ⎟⎟ .
⎜⎜⎝⎜0 0 1 0 1 1 0⎟⎟⎟⎠⎟
(13.18)
0 0 0 1 1 1 1

It can be verified that the minimum distance of this code is at least 3. In this
representation, the left nodes (right nodes) are called the “variable nodes” (“check
nodes”). Thus, the code is defined as the set of all binary settings on the variable
nodes such that for all check nodes, the sum of the settings of the adjacent variable
nodes is zero.
Indeed, the minimum distance is not one; otherwise, there is a variable node
that is not connected to any check node, which is a contradiction to the fact that the
degree of the variable nodes is larger than one. Suppose that the minimum distance
is two, and assume that the minimum weight word is (1, 1, 0, . . . , 0). Consider the
subgraph induced by the two first variable nodes. All check nodes in this graph
must have an even degree (or else they would not be satisfied). Moreover, there
are at least two check nodes in this graph of a degree greater than zero, since the
degrees of the variable nodes are supposed to be greater or equal to two. Then, the
graph formed by the two first variable nodes and these two check nodes is a cycle
of length four, contrary to the assumption.

13.1.6 Levenshtein constructions


The minimum distance between any pair of codewords in a code cannot be larger
than the average distance between all pairs of different codewords. Using this
observation, Plotkin found the upper bound for the minimum distance of a linear
code with respect to the Hamming distance. For 0 < d < n, let A(n, d) denote the
maximum possible number of codewords in a binary block code of length n and
minimum (Hamming) distance d. Note that if d is odd, then C is an (n, k, d) code if

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


432 Chapter 13

and only if the code C  obtained by adding a parity-check bit to each codeword in
C is an (n + 1, k, d + 1) code. Therefore, if d is even, then A(n, d) = A(n − 1, d − 1).
The challenge here is to understand the behavior of A(n, d) for the case when d is
even.18,23
In 1965, Plotkin18 gave a simple counting argument that leads to an upper bound
B(n, d) for A(n, d) when d < n/2. The following also holds:
• If d ≤ n < 2d, then A(n, d) ≤ B(n, d) = 2[d/(2d − n)].
• If n = 2d, then A(n, d) ≤ B(n, d) = 4d.
Levenshtein13 proved that if Hadamard’s conjecture is true, then Plotkin’s bound
is sharp. Let Qn be a binary matrix received from a normalized Hadamard matrix
of order n by replacement of +1 by 0 and −1 by 1. It is clear that the matrix Qn
allows design of the following Hadamard codes:
• (n − 1, n, n/2) code An consisting of rows of a matrix Qn without the first column
of Qn .
• (n − 1, 2n, n/2 − 1) code Bn consisting of codewords of a code An and their
complements.
• (n, 2n, n/2) code Cn consisting of rows of a matrix Qn and their complements.
In Ref. 13, it was proved that if there are the suitable Hadamard matrices, then
the Plotkin bounds have the following form:
• If d is an even number, then
 
d
M(n, d) = 2 , n < 2d,
2d − n (13.19)
M(2d, d) = 4d.

• If d is an odd number, then


 
d+1
M(n, d) = 2 , d ≤ n < 2d + 1,
2d + 1 − n (13.20)
M(2d + 1, d) = 4d + 4.

Now we shall transition to a method of construction of the maximal codes.


A square (0, 1) matrix of order m is called the correct matrix13 if the Hamming
distance between two distinct rows is equal to m/2. It can be shown that the correct
matrix of order m exists if and only if a Hadamard matrix of order m exists. We
will call k a correct number if a correct matrix of order 4k exists.
Let us introduce the following notations: Am is a correct matrix of order m,
the last column of which consists of zeros, A1m is a matrix received from Am after
removal of the last (zero) column, and A2m is a matrix received from Am after
removal of the two last columns and all rows, where in the penultimate column
there is a zero. The conditions and formulas of construction of the maximal codes
for the given n and d are displayed in Table 13.2.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 433

Table 13.2 Conditions and formulas of construction of the maximal codes for the given n
and d.
N d/(2d − n) k = [d/(2d − n)] Correct Code

Even Fraction k and k + 1 (a/2)A24k ◦ (b/2)A24(k+1)

Odd Fraction Even k/2 and k + 1 aA12k ◦ (b/2)A24(k+1)

Odd Fraction Odd k and (k + 1)/2 (a/2)A24k ◦ bA12(k+1)

Even Integer k (a/2)A24k


Odd Integer Even k/2 aA12k

In this table, ◦ means matrix connections, and a and b are defined by the
following:

ka + (k + 1)b = d
(13.21)
(2k − 1)a + (2k + 1)b = n.

Example 13.1.6.1: Construction of a maximal equidistant code with parameters


n = 13, d = 8, M = 4.

One can verify that d/(2d − n) is a fractional number, and that k = 2. These
parameters correspond to the second row of Table 13.2. There are also correct
matrices of order 2k and 4(k + 1), obtained from Hadamard matrices of order 4 and
12. Solving the above linear system, we find that a = 1, b = 2. Hence, the code
found can be represented as A14 ◦ A212 .
Consider the Hadamard matrix of order 4 with last column consisting of +1:
⎛ ⎞
⎜⎜⎜+ + + +⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜− + − +⎟⎟⎟⎟
H4 = ⎜⎜⎜⎜⎜ ⎟⎟ . (13.22)
⎜⎜⎜− − + +⎟⎟⎟⎟
⎜⎝ ⎟⎠
+ − − +

Hence, according to the definition, we find that


⎛ ⎞ ⎛ ⎞
⎜⎜⎜0 0 0 0⎟⎟⎟⎟ ⎜⎜⎜0 0 0⎟⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟ ⎜⎜⎜⎜ ⎟⎟
⎜⎜1 0 1 0⎟⎟⎟⎟ ⎜⎜1 0 0⎟⎟⎟⎟
A4 = ⎜⎜⎜⎜ ⎟⎟⎟ , A4 = ⎜⎜⎜⎜
1 ⎟⎟⎟ . (13.23)
⎜⎜⎜1
⎜⎜⎝ 1 0 0⎟⎟⎟⎟ ⎜⎜⎜1
⎜⎜⎝ 1 1⎟⎟⎟⎟
⎟⎠ ⎟⎠
0 1 1 0 0 1 1

For a construction A212 matrix, consider the Williamson–Hadamard matrix H12


+
and, corresponding to it, the matrix H12 of order 12:

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


434 Chapter 13

⎛ ⎞ ⎛ ⎞
⎜⎜⎜+ + + + − − + − − + − −⎟⎟
⎟ ⎜⎜⎜− − − − + + − + + − + +⎟⎟

⎜⎜⎜+
⎜⎜⎜ + + − + − − + − − + −⎟⎟⎟⎟⎟ ⎜⎜⎜−
⎜⎜⎜ − − + − + + − + + − +⎟⎟⎟⎟⎟
⎟ ⎟
⎜⎜⎜+
⎜⎜⎜ + + − − + − − + − − +⎟⎟⎟⎟ ⎜⎜⎜+
⎜⎜⎜ + + − − + − − + − − +⎟⎟⎟⎟
⎟ ⎟
⎜⎜⎜− + + + + + − + + + − −⎟⎟⎟⎟ ⎜⎜⎜+ − − − − − + − − − + +⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟⎟⎟
⎜⎜⎜+ − + + + + + − + − + −⎟⎟⎟⎟⎟ ⎜⎜⎜− + − − − − − + − + − +⎟⎟⎟
⎜⎜⎜+ + − + + + + + − − − +⎟⎟⎟⎟⎟ ⎜⎜⎜+ + − + + + + + − − − +⎟⎟⎟⎟⎟
H12 = ⎜⎜⎜⎜⎜ ⎟, +
H12 = ⎜⎜⎜⎜⎜ ⎟.
⎜⎜⎜⎜− + + + − − + + + − + +⎟⎟⎟⎟ ⎜⎜⎜⎜− + + + − − + + + − + +⎟⎟⎟⎟
⎟ ⎟
⎜⎜⎜+ − + − + − + + + + − +⎟⎟⎟⎟ ⎜⎜⎜+ − + − + − + + + + − +⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜+ + − − − + + + + + + −⎟⎟⎟⎟⎟ ⎜⎜⎜− − + + + − − − − − − +⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟ ⎜⎜⎜ ⎟
⎜⎜⎜− + + − + + + − − + + +⎟⎟⎟⎟ ⎜⎜⎜− + + − + + + − − + + +⎟⎟⎟⎟
⎟ ⎟
⎜⎜⎜+
⎜⎝ − + + − + − + − + + +⎟⎟⎟⎟ ⎜⎜⎜+
⎜⎝ − + + − + − + − + + +⎟⎟⎟⎟
⎠ ⎠
+ + − + + − − − + + + + + + − + + − − − + + + +
(13.24)
+
Hence, according to definition from H12 , we find that
⎛ ⎞
⎜⎜⎜1 1 1 1 0 0 1 0 0 1 0 0⎟⎟⎟
⎜⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 1 0 1 0 0 1 0 0 1 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 1 1 0 1 1 0 1 1 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 1 1 1 1 1 0 1 1 1 0 0⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜1 0 1 1 1 1 1 0 1 0 1 0⎟⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜0 0⎟⎟⎟⎟
= ⎜⎜⎜⎜⎜
0 1 0 0 0 0 0 1 1 1
A12 ⎟⎟ ,
⎜⎜⎜1 0 0 0 1 1 0 0 0 1 0 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 1 0 1 0 1 0 0 0 0 1 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜1 1 0 0 0 1 1 1 1 1 1 0⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
0⎟⎟⎟⎟
⎜⎜⎜1 (13.25)
0 0 1 0 0 0 1 1 0 0
⎜⎜⎜ ⎟⎟
⎜⎜⎜0
⎜⎝ 1 0 0 1 0 1 0 1 0 0 0⎟⎟⎟⎟⎟

0 0 1 0 0 1 1 1 0 0 0 0
⎛ ⎞
⎜⎜⎜1 1 1 0 1 0 0 1 0 0⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜0 0 0 1 1 0 1 1 0 1⎟⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜1 0 1 1 1 1 1 0 1 0⎟⎟⎟⎟
A212 = ⎜⎜⎜⎜ ⎟⎟ .
⎜⎜⎜0
⎜⎜⎜ 0 1 0 0 0 0 0 1 1⎟⎟⎟⎟⎟

⎜⎜⎜0
⎜⎝ 1 0 1 0 1 0 0 0 0⎟⎟⎟⎟⎟

1 1 0 0 0 1 1 1 1 1

Hence, the codewords of the maximal equidistant code A14 ◦A212 with the parameters
n = 13, d = 8, M = 4 are represented as
⎧ ⎫


⎪ 0 0 0 1 1 1 0 1 0 0 1 0 0⎪




⎪ ⎪


⎨1 0 1 0 0 0 1 1 0 1 1 0 1⎪


A14 ◦ A212 =⎪
⎪ ⎪ . (13.26)



⎪ 1 1 0 1 0 1 1 1 1 1 0 1 0⎪





⎩0 ⎪
1 1 0 0 1 0 0 0 0 0 1 1⎭

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 435

Figure 13.7 Block diagram of a typical multiple-access communication system.

13.1.7 Uniquely decodable base codes


Multiple-user communication systems were first studied by Shannon in 1961.21
Multiple input, multiple output (MIMO) systems provide a number of advan-
tages over single-antenna-to-single-antenna communication. The advantages of
multiple-user communication, exploiting the physical channel between many trans-
mitters and receivers, are currently receiving significant attention.24–27 Fig. 13.7
presents a block diagram of a typical multiple-access communication system
in which T statistically independent sources are attempting to transmit data to
T separate destinations over a common discrete memoryless channel. The T
messages emanating from the T sources are encoded independently according to
C1 , C2 , . . . , CT block codes of the same length N.10
The concept of “unique decodability” codes is that you have some input sym-
bols, and each input symbol is represented with one output symbol. Then, suppose
you receive a combined message. How can the original input be detected?
Let Ci , i = 1, 2, . . . , k be a set of (0, 1) vectors of length n. The set (C1 , C2 ,
. . . , Ck ) is called k-user code of length n. In Ref. 10, (C1 , C2 , . . . , Ck ) is called a
uniquely decodable code with k users, if for any vectors Ui , Vi ∈ Ci , i = 1, 2, . . . , k,
they satisfy the condition (Ci are called components of a code)
k k
Ui  Vi . (13.27)
i=1 i=1

Next, we consider a uniquely decodable base code, in which the individual


components contain only two codewords.10 Let (C1 , C2 , . . . , Ck ) be the uniquely
decodable base code, i.e., Ci = {Ui , Vi }. A vector di = Ui − Vi is called a difference
vector of the component Ci , and the matrix D = (d1 , d2 , . . . , dk )T is called a
difference matrixn of the code (C n 1
, C2 , . . . , Ck ).
Let Ui = {uij } j=1 , Vi = {vij } j=1 , i = 1, 2, . . . , k. For a given difference matrix D,
the codewords of components Ci can be defined as

0, if dij = 0 or dij = −1,
uij =
1, if dij = 1,
 (13.28)
j 0, if dij = 0 or dij = 1,
vi =
1, if dij = −1.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


436 Chapter 13

In Ref. 10, it is proved that (C1 , C2 , . . . , Ck ) is a uniquely decodable base code if


and only if for any (0, ±1) vector P, the condition PD  0 holds, and PD = 0 only
for P = 0. Hence, the problem of construction of a uniquely decodable base code
with k users of length n results in a problem in construction of a (0, ±1) matrix D
of dimension k × n, the rows of which are linearly independent in {0, +1, −1}.
Let us consider the method of construction of uniquely decodable base code.
The difference matrix of this code is defined by the following formula:
⎛ ⎞
⎜⎜⎜Dt−1 Dt−1 ⎟⎟⎟
⎜⎜⎜ ⎟⎟
Dt = ⎜⎜⎜Dt−1 −Dt−1 ⎟⎟⎟⎟ , (13.29)
⎜⎝ ⎟⎠
I2t−1 O2 t−1

where D0 = (1) and Om is zero matrix of order m. Note that the matrix Dt has
a dimension (t + 2)2t−1 × 2t , i.e., it is a difference matrix of a uniquely decodable
base code with (t + 2)2t−1 users of length 2t . Note that in the formula in Eq. (13.29),
instead of D0 , one can substitute a Hadamard matrix.
Now we shall consider a problem of decoding. Let (C1 , C2 , . . . , Ck ) be a uniquely
decodable base code of length n, and let Ci = {Ui , Vi }. Y = V1 +V2 +· · ·+Vk is called
a base vector of a code. Let Xi ∈ Ci be a message of the i’th user. Let us calculate
r = X1 + X2 + · · · + Xk . S = r − Y is called a syndrome of a code corresponding to
a vector r. Because
 k
d , if Xi = Ui ,
Xi − Vi = i S = qi di , (13.30)
0, if Xi = Vi ,
i=1

where

1, if Xi = Ui ,
qi = (13.31)
0, if Xi = Vi .

q = (q1 , q2 , . . . , qk ) is called a determining vector of a code. Thus, S = qD and


the decoding problem consists of defining the vector q. The following theorem
holds:
Theorem 13.1.7.1: 13 Let there be a uniquely decodable base code with k users
of length n and Williamson matrices of order m. Then, there is also a uniquely
decodable base code with 2mk users of length 2nk.
Let D1 be a difference matrix of a uniquely decodable base code with k users of
length n, and A, B, C, D be Williamson matrices of order m. We can check that

X ⊗ D1 + Y ⊗ D 1 (13.32)

is a difference matrix of a required code, where


   
1 A+B C+D 1 A−B C−D
X= , Y= . (13.33)
2 C + D −A − B 2 −C + D A − B

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Applications of Hadamard Matrices in Communication Systems 437

We now provide an example of a base code with 30 users of length 18. Let us
have a difference matrix with five users of length three:
⎛ ⎞
⎜⎜⎜+ + +⎟⎟⎟
⎜⎜⎜+ + −⎟⎟⎟
⎜⎜ ⎟⎟
D1 = ⎜⎜⎜⎜⎜+ − +⎟⎟⎟⎟⎟ . (13.34)
⎜⎜⎜+ 0 −⎟⎟⎟
⎜⎝ ⎟⎠
+ − 0
According to Eq. (13.28), we determine the components of this code to be

C1 = {(111), (000)}, C2 = {(110), (001)},


(13.35)
C3 = {(101), (010)}, C4 {(100), (001)} C5 = {(100), (010)}.

Now, let A and B = C = D be cyclic Williamson matrices of order 3 with first rows
(+ + +) and (+ + −), respectively. Using Theorem 13.1.7.1, we obtain the following
difference matrix:
⎛ ⎞
⎜⎜⎜ D1 D1 D1 D1 −D1 −D1 ⎟⎟⎟
⎜⎜⎜ D1 D1 D1 −D1 D1 −D1 ⎟⎟⎟
⎜⎜⎜ ⎟
⎜⎜⎜ D1 D1 D1 −D1 −D1 D1 ⎟⎟⎟⎟⎟
D2 = ⎜⎜⎜ ⎟
⎜⎜⎜ D1 −D1 −D1 −D1 D1 D1 ⎟⎟⎟⎟⎟
(13.36)
⎜⎜⎜−D ⎟⎟
⎝⎜ 1 D1 −D1 D1 −D1 D1 ⎟⎟⎠
−D1 −D1 D1 D1 D1 −D1

the components of which are (i = 1, 2, 3, 4, 5):

Ci1 = {(ui , ui , ui , ui , vi , vi ); (vi , vi , vi , vi , ui , ui )} ,


1
C5+i = {(ui , ui , ui , vi , ui , vi ); (vi , vi , vi , ui , vi , ui )} ,
1
C10+i = {(ui , ui , ui , vi , vi , ui ); (vi , vi , vi , ui , ui , vi )} ,
(13.37)
1
C15+i = {(ui , vi , vi , vi , ui , ui ); (vi , ui , ui , ui , vi , vi )} ,
1
C20+i = {(vi , ui , vi , ui , vi , ui ); (ui , vi , ui , vi , ui , vi )} ,
1
C25+i = {(vi , vi , ui , ui , ui , vi ); (ui , ui , vi , vi , vi , ui )} .

Denote Ci1 = {Ui ; Vi }, i = 1, 2, . . . , 30. The base vector of this code will be

Y = {10, 12, 12, 10, 12, 12, 10, 12, 12, 15, 12,
12, 15, 12, 12, 15, 12, 12}. (13.38)

Let the following vectors be sent: U i , V5+i , U10+i , U15+i , V20+i , U25+i , i = 1, 2, 3, 4, 5.
The total vector of this message will be

r = {20, 12, 12, 10, 12, 12, 20, 12, 12, 15,
12, 12, 15, 12, 12, 15, 12, 12}. (13.39)

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


438 Chapter 13

The syndrome S = r − Y of the vector r has the form

$$S = \{10, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0\}. \tag{13.40}$$

Define the vector q from the equation

$$S = q D_2. \tag{13.41}$$

Denote S = (S1, S2, . . . , S6) and q = (q1, q2, . . . , q6), where Si and qi are vectors of length three and five, respectively. Now, solving this system with respect to qi D1, we find that q1 D1 = (5, 0, 0), q2 D1 = (0, 0, 0), q3 D1 = (5, 0, 0), q4 D1 = (5, 0, 0), q5 D1 = (0, 0, 0), and q6 D1 = (5, 0, 0). Finally, we find the vector q = (111110000011111111110000011111). From Eq. (13.31), it follows that the following vectors were sent: Ui, V5+i, U10+i, U15+i, V20+i, U25+i, i = 1, 2, 3, 4, 5.
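This decoding chain (base vector, syndrome, determining vector) is easy to verify numerically. The sketch below is not from the book and uses illustrative names; it reproduces the five-user base code D1 of Eq. (13.34) and recovers q by brute force over all 2^5 binary candidates:

```python
# A minimal check of syndrome decoding for the base code D1 of Eq. (13.34).
# Not from the book; names are illustrative.
import itertools
import numpy as np

D1 = np.array([[1,  1,  1],
               [1,  1, -1],
               [1, -1,  1],
               [1,  0, -1],
               [1, -1,  0]])

# Components C_i = {U_i, V_i} from Eq. (13.35); note U_i - V_i is row i of D1.
U = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0], [1, 0, 0]])
V = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 1], [0, 1, 0]])

Y = V.sum(axis=0)                       # base vector of the code
q_true = (1, 0, 1, 1, 0)                # user i sends U_i if q_i = 1, else V_i
r = sum(U[i] if q_true[i] else V[i] for i in range(5))
S = r - Y                               # syndrome; Eq. (13.30) gives S = q D1

# Unique decodability: exactly one binary q reproduces the syndrome.
sols = [q for q in itertools.product((0, 1), repeat=5)
        if np.array_equal(np.array(q) @ D1, S)]
assert sols == [q_true]
print("recovered determining vector q =", sols[0])
```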

13.1.8 Shortened code construction and application to data coding and decoding
Let (c1, c2, . . . , ck) be a binary information word, let Pi = {pi,1, pi,2, . . . , pi,k}, i = 1, 2, . . . , n, be a set of binary vectors, and let the power (cardinality) of the set P be 2^k − 1. The binary words Pi are called code projectors.4,9 Note that the projectors are the columns of the generating matrix of a linear code.9,28 The codeword u = (u1, u2, . . . , un) corresponding to the information word (c1, c2, . . . , ck) is determined as

ui = c1 pi,1 ⊕ c2 pi,2 ⊕ · · · ⊕ ck pi,k , i = 1, 2, . . . , n, (13.42)

where ⊕ denotes modulo-2 summation.
The decoding process is as follows: the decoder receives the codeword u = (u1, u2, . . . , un) and processes it, writing the results into a table. If ui = 0, then the table entry corresponding to the address Pi is increased by 1; if ui = 1, then it is decreased by 1. After these operations, we obtain a vector V of length 2^k. Then, the HT Hk V is applied, and the maximal positive coefficient of the transform is determined. The address in the decoder table that corresponds to this coefficient is the decimal representation of the initial information word (c1, c2, . . . , ck).
As an example, let us consider the coder of repeated projectors for M = 21, k = 3 with the information word (0, 1, 1). As projectors, we take the binary representations of the decimal numbers {1, 2, . . . , 7}. Each projector is repeated three times. According to Eq. (13.42), we obtain (111111000000111111000), which is transmitted through the channel. The decoder forms the vector V = (0, −3, −3, +3, +3, −3, −3, +3). Then, we obtain H3 V = (−3, −3, −3, +21, −3, −3, −3, −3)^T. Since the maximum element equals +21 and has index 3, the decoder decides that the transmitted codeword was (011). As will be shown next, the constructed code of length 21 corrects five errors. Indeed, let five errors occur during transmission, and let the received codeword be (110111110000111110100); the error bits are written in boldface type. In this case, the decoder defines the vector


V = (0, −1, −3, −1, +3, −3, −1, +1). Then, computing the spectral vector H3 V =
(−5, +3, +3, +11, −5, −5, +3, −5), the decoder again resolves that the information
word (011) was transmitted.
Let r be the number of repetitions of each projector, and let k be the information word length. In Ref. 13, we proved that the above-constructed code corrects up to

$$t = 2^{k-2} r - 1 \tag{13.43}$$

errors.
Suppose that there are 2^k − 1 projectors, each repeated r − 1 times. Then, the length of the codewords will be M1 = (2^k − 1)(r − 1), and by Eq. (13.43), this code corrects t1 = 2^{k−2}(r − 1) − 1 errors. However, if each projector is repeated r times, the codeword length is M2 = (2^k − 1)r, and that code corrects t2 = 2^{k−2}r − 1 errors. It is necessary to build an optimal code with minimal length M (M1 < M < M2) that can correct t errors, t1 < t < t2.
Let d = 2^m (m < k) be the number of projectors whose repetition is reduced by one. In Ref. 9, it is shown that the resulting shortened projection code of length M2 = (2^k − 1)r − 2^m corrects t = 2^{k−2}r − 2^{m−2} − 1 errors. Note that m = [log2(2^{k−2}r − t − 1)] + 2.
Now we give an example. Let the information word (011) be transmitted, let the repetition be r = 3, and suppose it is necessary to correct four errors. From the previous formula, we obtain m = 2; hence, in d = 2^2 = 4 projectors, the repetitions must be reduced by one. As was shown above, if the repetition of the first 2^m projectors (those with the smallest values) is reduced by one, the resulting code is optimal. The coder forms the following shortened code of length 17: (11110000111111000). If no errors occur in the channel, the decoder receiving the codeword will determine the vector V = (0, −2, −2, 2, 2, −3, −3, 3). Furthermore, we obtain H3 V = (−3, −3, −3, 17, −1, −1, −1, −5)^T. We see that the maximal coefficient 17 correctly identifies the information word (011).
Now, suppose that four errors occur in the channel, as shown in bold in the received codeword (01111100111111100). In this case, the decoder will determine the vector V = (0, 0, −2, −2, 2, −3, −3, 1). Next, we obtain H3 V = (−7, 1, 5, 9, −1, −1, 3, −9)^T, which means that the maximal coefficient 9 still correctly identifies the transmitted information word. Thus, using a code of length 17, four errors can be corrected.
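Both worked examples above are easy to reproduce in code. The sketch below is not from the book (function names are illustrative); it implements the projector encoder of Eq. (13.42) and the Hadamard-transform decoder described in this section, for the full-repetition length-21 code and the shortened length-17 code:

```python
# Projector encoding [Eq. (13.42)] and Hadamard-transform decoding.
# Not from the book; names are illustrative.
import numpy as np

def hadamard(k):
    """Sylvester-Hadamard matrix of order 2**k."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

def encode(c, reps):
    """Repeat each projector p = 1..2^k-1; code bit = parity of AND(c, p)."""
    k = len(c)
    bits = []
    for p in range(1, 2 ** k):
        pb = [(p >> (k - 1 - j)) & 1 for j in range(k)]   # binary digits of p
        u = sum(ci * pi for ci, pi in zip(c, pb)) % 2     # Eq. (13.42)
        bits += [u] * reps[p - 1]
    return bits

def decode(bits, reps, k):
    """Vote into table V (+1 for bit 0, -1 for bit 1), then take argmax of H V."""
    V = np.zeros(2 ** k)
    pos = 0
    for p in range(1, 2 ** k):
        for _ in range(reps[p - 1]):
            V[p] += 1 if bits[pos] == 0 else -1
            pos += 1
    return int(np.argmax(hadamard(k) @ V))

c, k = (0, 1, 1), 3
full = encode(c, [3] * 7)                       # length 21, corrects 5 errors
short = encode(c, [2, 2, 2, 2, 3, 3, 3])        # length 17, corrects 4 errors
assert decode(full, [3] * 7, k) == 3            # index 3 = binary (011)
assert decode(short, [2, 2, 2, 2, 3, 3, 3], k) == 3
```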
Experiments were performed on grayscale and color images, the results of which are given in Table 13.3. For an eight-bit 256 × 256 image, a total of 255 projectors is required. The first 2^6 = 64 projectors were repeated two times each and the other ones three times; i.e., the codeword length is 701, and therefore the resulting code can correct all combinations of t = 2^6 · 3 − 2^4 − 1 = 175 errors.
Furthermore, t ≥ 175 errors were then introduced into the codeword using a pseudo-random number generator. The encoding results are given in Table 13.3, where "Err. num." stands for the number of errors, and a "+" in the "M. filter" column indicates that median filtering was performed after decoding. A similar trend is also observed for other types of signals.


Table 13.3 Encoding results.


No. Err. num. M. filter MSE PSNR

1 0–175 - 0 Infinity
2 200 - 0.00000 Infinity
3 210 - 0.00024 84.25
4 220 - 0.057 60.53
5 230 - 8.65 38.76
6 240 - 35.91 32.57
7 250 - 143.21 26.57
8 250 + 67.16 29.86
9 260 - 476.26 21.33
10 260 + 75.58 29.34
11 270 - 1186.22 17.39
12 270 + 104.48 27.94
13 280 - 2436.11 14.26
14 280 + 166.03 25.92

13.2 Space–Time Codes from Hadamard Matrices


13.2.1 The general wireless system model
Consider a mobile communication system where a base station is equipped with
n antennas and the mobile unit is equipped with m antennas. Data is encoded by
the channel encoder. Then, the encoded data are passed through a serial-to-parallel
converter and divided into n streams. Each stream of data is used as an input to a
pulse shaper. The output of each shaper is then modulated. At each time slot t, the
output of a modulator i is a signal ct,i transmitted using antenna i for i = 1, 2, . . . , n.
We assume that n signals are transmitted simultaneously, each from a different
transmitter antenna, and that all of these signals have the same transmission period
T. The signal at each receiving antenna is a noisy superposition of the n transmitted signals corrupted by Rayleigh or Rician fading.24–27,29–39 We also assume that the elements of the signal constellation are contracted by a factor of √Es, chosen so that the average energy of the constellation is 1. The signal r_{t,j} received by antenna j at time t is given by

$$r_{t,j} = \sum_{i=1}^{n} \alpha_{i,j} c_{t,i} + n_{t,j}, \tag{13.44}$$

where the noise nt, j at time t is modeled as independent samples of zero-mean


complex Gaussian random variables, with variance (1/2)N0 per dimension. The
coefficient αi, j is the path gain from transmitting antenna i to receiving antenna j.
αi, j is modeled as independent samples of zero-mean complex Gaussian random
variables with variance 0.5 per dimension. It is assumed that these path gains are
constant during a frame and vary from one frame to another (i.e., quasi-static flat
fading).


Figure 13.8 Two-branch transmit diversity scheme with two transmitting and one receiving antenna.

Assuming that perfect channel state information is available, the receiver computes the decision metric

$$\sum_{t=1}^{l} \sum_{j=1}^{m} \left| r_{t,j} - \sum_{i=1}^{n} \alpha_{i,j} c_{t,i} \right|^{2} \tag{13.45}$$

over all codewords c_{t,1}, c_{t,2}, . . . , c_{t,n}, t = 1, 2, . . . , l, and decides in favor of the codeword that minimizes the sum in Eq. (13.45).
After several mathematical manipulations, we see that obtaining the c_{t,i} that minimize the expression in Eq. (13.45) is equivalent to maximizing the following expression (x* denotes the conjugate of x):

$$\sum_{t=1}^{l} \sum_{j=1}^{m} \left[ \sum_{i=1}^{n} \left( r_{t,j}^{*}\, \alpha_{i,j} c_{t,i} + r_{t,j}\, \alpha_{i,j}^{*} c_{t,i}^{*} \right) - \sum_{i,k=1}^{n} \alpha_{i,j} \alpha_{k,j}^{*} c_{t,i} c_{t,k}^{*} \right]. \tag{13.46}$$

Note that the l × n matrix C = (c_{t,i}) is called the coding matrix. More complete information about wireless systems and space–time codes can be found in Refs. 39–53 and 57.
We examine two-branch transmit diversity schemes with two transmitting and
one receiving antenna in Fig. 13.8. This scheme may be defined by the following
three functions: (1) the encoding and transmission sequence of information
symbols at the transmitter, (2) the combining scheme at the receiver, and (3) the
decision rule for maximum-likelihood detection.
(1) The encoding and transmission sequence: At a given symbol period T , two
signals are simultaneously transmitted from the two antennas. The signal trans-
mitted from antenna 0 is denoted by x0 and from antenna 1 by x1 . During the
next symbol period, signal (−x1∗ ) is transmitted from antenna 0, and signal x0∗ is
transmitted from antenna 1. Note that the encoding is done in space and time


(space–time coding). The channel at time t may be modeled by a complex multiplicative distortion h0(t) for transmit antenna 0 and h1(t) for transmit antenna 1. Assuming that fading is constant across two consecutive symbols, we can write

$$\begin{aligned} h_0(t) &= h_0(t+T) = h_0 = \alpha_0 \exp(j\theta_0), \\ h_1(t) &= h_1(t+T) = h_1 = \alpha_1 \exp(j\theta_1), \end{aligned} \tag{13.47}$$

where T is the symbol duration.


The received signals can be represented as

$$\begin{aligned} r_0 &= r(t) = h_0 x_0 + h_1 x_1 + n_0, \\ r_1 &= r(t+T) = -h_0 x_1^{*} + h_1 x_0^{*} + n_1, \end{aligned} \tag{13.48}$$

where r0 and r1 are the received signals at times t and t + T, and n0 and n1 are complex random variables representing receiver noise and interference.
(2) The combining scheme: The combiner shown in Fig. 13.8 builds the following two combined signals that are sent to the maximum-likelihood detector:

$$\begin{aligned} \tilde{x}_0 &= h_0^{*} r_0 + h_1 r_1^{*}, \\ \tilde{x}_1 &= h_1^{*} r_0 - h_0 r_1^{*}. \end{aligned} \tag{13.49}$$

(3) The maximum-likelihood decision rule: The combined signals [Eq. (13.49)] are then sent to the maximum-likelihood detector, which, for each of the signals x̃0 and x̃1, uses the decision rule: choose xi if and only if

$$(\alpha_0^2 + \alpha_1^2 - 1)\,|x_i|^2 + d^2(\tilde{x}_0, x_i) \le (\alpha_0^2 + \alpha_1^2 - 1)\,|x_k|^2 + d^2(\tilde{x}_0, x_k) \tag{13.50}$$

or, for phase shift keying (PSK) signals (equal-energy constellations),

$$d^2(\tilde{x}_0, x_i) \le d^2(\tilde{x}_0, x_k). \tag{13.51}$$

The maximal-ratio combiner may then construct the signal x̃0 as shown in Fig. 13.8, so that the maximum-likelihood detector may produce a maximum-likelihood estimate of x0.
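A small simulation of this scheme (not from the book; the QPSK constellation, noise level, and random seed are illustrative assumptions) confirms that the combiner of Eq. (13.49) delivers each symbol with the diversity gain α0² + α1² = |h0|² + |h1|²:

```python
# Two-branch transmit diversity [Eqs. (13.47)-(13.49)] for one symbol pair.
# Not from the book; constellation, noise level, and seed are illustrative.
import numpy as np
rng = np.random.default_rng(0)

qpsk = np.exp(1j * np.pi * (2 * np.arange(4) + 1) / 4)  # unit-energy QPSK
x0, x1 = rng.choice(qpsk, 2)

# Quasi-static complex path gains [Eq. (13.47)] and receiver noise
h0, h1 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
n0, n1 = (rng.normal(size=2) + 1j * rng.normal(size=2)) * 0.05

# Received samples over two symbol periods [Eq. (13.48)]
r0 = h0 * x0 + h1 * x1 + n0
r1 = -h0 * np.conj(x1) + h1 * np.conj(x0) + n1

# Combiner [Eq. (13.49)]: each estimate carries the gain |h0|^2 + |h1|^2
x0_hat = np.conj(h0) * r0 + h1 * np.conj(r1)
x1_hat = np.conj(h1) * r0 - h0 * np.conj(r1)

# ML rule for equal-energy PSK [Eq. (13.51)]: nearest constellation point
gain = abs(h0) ** 2 + abs(h1) ** 2
detect = lambda z: qpsk[np.argmin(np.abs(qpsk - z / gain))]
assert detect(x0_hat) == x0 and detect(x1_hat) == x1
```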

13.2.2 Orthogonal array and linear processing design


A square parametric n × n matrix A(x1, x2, . . . , xk) is called an orthogonal array of order n and type (s1, s2, . . . , sk) (Refs. 5–8) if the following hold:

• The elements of the matrix A have the form xi or −xi for i = 1, 2, . . . , k.
• The number of elements xi and −xi in every row (and column) is si.
• $AA^T = A^T A = \sum_{i=1}^{k} s_i x_i^2 I_n$.


The matrices A2(a, b), A4(a, b, c, d), and A8(a, b, . . . , h) are called Yang, Williamson, and Plotkin arrays,14 respectively:

$$A_2(a,b) = \begin{pmatrix} a & b \\ -b & a \end{pmatrix}, \qquad A_4(a,b,c,d) = \begin{pmatrix} a & b & c & d \\ -b & a & -d & c \\ -c & d & a & -b \\ -d & -c & b & a \end{pmatrix},$$

$$A_8(a,b,\ldots,h) = \begin{pmatrix} a & b & c & d & e & f & g & h \\ -b & a & d & -c & f & -e & -h & g \\ -c & -d & a & b & g & h & -e & -f \\ -d & c & -b & a & h & -g & f & -e \\ -e & -f & -g & -h & a & b & c & d \\ -f & e & -h & g & -b & a & -d & c \\ -g & h & e & -f & -c & d & a & -b \\ -h & -g & f & e & -d & -c & b & a \end{pmatrix}. \tag{13.52}$$

In general, an orthogonal array A(x1, x2, x3, x4) of order n and type (k, k, k, k) is called a Baumert–Hall array of order n = 4k, and an array A(x1, x2, . . . , x8) of order n and type (k, k, k, k, k, k, k, k) is called a Plotkin array of order n = 8k.
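The defining property is easy to confirm numerically; the following sketch (not from the book; the random test values are illustrative) checks that the Williamson array A4(a, b, c, d) of Eq. (13.52) is an orthogonal array of type (1, 1, 1, 1):

```python
# Numerical check of the Williamson array's orthogonality [Eq. (13.52)].
# Not from the book; the random test values are illustrative.
import numpy as np

def A4(a, b, c, d):
    return np.array([[ a,  b,  c,  d],
                     [-b,  a, -d,  c],
                     [-c,  d,  a, -b],
                     [-d, -c,  b,  a]], dtype=float)

a, b, c, d = np.random.default_rng(1).normal(size=4)
A = A4(a, b, c, d)
target = (a * a + b * b + c * c + d * d) * np.eye(4)  # sum s_i x_i^2 I_n, s_i = 1
assert np.allclose(A @ A.T, target) and np.allclose(A.T @ A, target)
```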
There are two attractions in providing transmit diversity via ODs, as follows:
(1) There is no loss in bandwidth, in the sense that the orthogonal array provides
the maximum possible transmission rate at full diversity.
(2) There is a simple maximum-likelihood decoding algorithm, which uses only
linear combining at the receiver. The simplicity of the algorithm comes from
the orthogonality of the columns of the ODs.
These two properties are preserved even if we allow linear processing at the
transmitter. Hence, we relax the definition of ODs to allow linear processing at
the transmitter. That is, signals transmitted from different antennas will now be a
linear combination of constellation symbols.
A linear processing OD in the variables x1, x2, . . . , xk is an n × n matrix E (Ref. 15) whose elements are linear combinations of x1, x2, . . . , xk and which satisfies

$$E E^{T} = \operatorname{diag}\left( \sum_{i=1}^{k} s_i^1 x_i^2,\ \sum_{i=1}^{k} s_i^2 x_i^2,\ \ldots,\ \sum_{i=1}^{k} s_i^n x_i^2 \right), \tag{13.53}$$

where the s_i^j are positive integers.


Since the maximum-likelihood decoding algorithm relies only on the orthogonality of the columns of the design matrix, a linear processing OD can be generalized as follows. A generalized processing OD G with rate R = k/p is a matrix of dimension p × n with entries 0, ±x1, ±x2, . . . , ±xk (Ref. 7) satisfying

$$G^{T} G = \operatorname{diag}\left( \sum_{i=1}^{k} p_{1,i} x_i^2,\ \sum_{i=1}^{k} p_{2,i} x_i^2,\ \ldots,\ \sum_{i=1}^{k} p_{n,i} x_i^2 \right). \tag{13.54}$$


Let A(R, n) be the minimum number p such that there is a p × n generalized OD with rate at least R. The generalized OD achieving the value A(R, n) is called delay optimal. It is evident that after removing some columns, we can obtain delay-optimal designs with rate 1. For example, from Eq. (13.52), we find that

$$A_4^3(a,b,c,d) = \begin{pmatrix} a & b & c \\ -b & a & -d \\ -c & d & a \\ -d & -c & b \end{pmatrix}. \tag{13.55}$$

13.2.3 Design of space–time codes from the Hadamard matrix


Based on the multiplicative theorem5,8 in Ref. 42, it was proved that from a Hadamard matrix of order 4n one can construct a generalized linear processing real OD of size 4n × 2 with rate R = 1/2, depending on 2n variables. Below, we give an example.
Consider the Williamson–Hadamard matrix H12 and represent it as follows (see also Chapters 2 and 3):

$$H_{12} = Q_0 \otimes I_3 + Q_1 \otimes U + Q_1 \otimes U^2, \tag{13.56}$$

where

$$Q_0 = \begin{pmatrix} + & + & + & + \\ - & + & - & + \\ - & + & + & - \\ - & - & + & + \end{pmatrix}, \quad Q_1 = \begin{pmatrix} + & - & - & - \\ + & + & + & - \\ + & - & + & + \\ + & + & - & + \end{pmatrix}, \quad U = \begin{pmatrix} 0 & + & 0 \\ 0 & 0 & + \\ + & 0 & 0 \end{pmatrix}. \tag{13.57}$$

Represent H12 as H12 = (+ +) ⊗ A1 + (+ −) ⊗ A2, where

$$A_1 = \begin{pmatrix} + & + & 0 & - & 0 & - \\ 0 & 0 & + & 0 & + & 0 \\ 0 & 0 & 0 & + & 0 & + \\ - & + & + & 0 & + & 0 \\ 0 & - & + & + & 0 & - \\ + & 0 & 0 & 0 & + & 0 \\ 0 & + & 0 & 0 & 0 & + \\ + & 0 & - & + & + & 0 \\ 0 & - & 0 & - & + & + \\ + & 0 & + & 0 & 0 & 0 \\ 0 & + & 0 & + & 0 & 0 \\ + & 0 & + & 0 & - & + \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 0 & 0 & + & 0 & + & 0 \\ - & - & 0 & + & 0 & + \\ - & + & + & 0 & + & 0 \\ 0 & 0 & 0 & - & 0 & - \\ + & 0 & 0 & 0 & + & 0 \\ 0 & + & - & - & 0 & + \\ + & 0 & - & + & + & 0 \\ 0 & - & 0 & 0 & 0 & - \\ + & 0 & + & 0 & 0 & 0 \\ 0 & + & 0 & + & - & - \\ + & 0 & + & 0 & - & + \\ 0 & - & 0 & - & 0 & 0 \end{pmatrix}. \tag{13.58}$$

Let x = (x1, x2, x3, x4, x5, x6)^T be a column vector. We can check that G = (A1 x, A2 x) is a generalized linear processing real OD with rate 1/2 in six variables


(i.e., in a wireless communication system we have two transmitting antennas):

$$G = \begin{pmatrix} x_1+x_2-x_4-x_6 & x_3+x_5 \\ x_3+x_5 & -x_1-x_2+x_4+x_6 \\ x_4+x_6 & -x_1+x_2+x_3+x_5 \\ -x_1+x_2+x_3+x_5 & -x_4-x_6 \\ -x_2+x_3+x_4-x_6 & x_1+x_5 \\ x_1+x_5 & x_2-x_3-x_4+x_6 \\ x_2+x_6 & x_1-x_3+x_4+x_5 \\ x_1-x_3+x_4+x_5 & -x_2-x_6 \\ -x_2-x_4+x_5+x_6 & x_1+x_3 \\ x_1+x_3 & x_2+x_4-x_5-x_6 \\ x_2+x_4 & x_1+x_3-x_5+x_6 \\ x_1+x_3-x_5+x_6 & -x_2-x_4 \end{pmatrix}. \tag{13.59}$$
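The claimed structure can be verified directly. The sketch below (not from the book; the random test vector is illustrative) rebuilds A1 and A2 from Eq. (13.58), confirms that H12 = [A1 + A2 | A1 − A2] is a ±1 matrix, and checks the orthogonal-design property of G:

```python
# Checking Eq. (13.58) and the rate-1/2 design G of Eq. (13.59).
# Not from the book; the random test vector is illustrative.
import numpy as np

A1 = np.array([
    [ 1, 1, 0,-1, 0,-1], [ 0, 0, 1, 0, 1, 0], [ 0, 0, 0, 1, 0, 1],
    [-1, 1, 1, 0, 1, 0], [ 0,-1, 1, 1, 0,-1], [ 1, 0, 0, 0, 1, 0],
    [ 0, 1, 0, 0, 0, 1], [ 1, 0,-1, 1, 1, 0], [ 0,-1, 0,-1, 1, 1],
    [ 1, 0, 1, 0, 0, 0], [ 0, 1, 0, 1, 0, 0], [ 1, 0, 1, 0,-1, 1]])
A2 = np.array([
    [ 0, 0, 1, 0, 1, 0], [-1,-1, 0, 1, 0, 1], [-1, 1, 1, 0, 1, 0],
    [ 0, 0, 0,-1, 0,-1], [ 1, 0, 0, 0, 1, 0], [ 0, 1,-1,-1, 0, 1],
    [ 1, 0,-1, 1, 1, 0], [ 0,-1, 0, 0, 0,-1], [ 1, 0, 1, 0, 0, 0],
    [ 0, 1, 0, 1,-1,-1], [ 1, 0, 1, 0,-1, 1], [ 0,-1, 0,-1, 0, 0]])

# H12 = ((+ +) kron A1) + ((+ -) kron A2) must have +/-1 entries only.
assert np.all(np.abs(A1 + A2) == 1) and np.all(np.abs(A1 - A2) == 1)

# Orthogonal-design property: columns of G = (A1 x, A2 x) are orthogonal
# with equal squared norms 6 * sum(x_i^2), for every x.
assert np.allclose(A1.T @ A1, 6 * np.eye(6))
assert np.allclose(A2.T @ A2, 6 * np.eye(6))
assert np.allclose(A1.T @ A2, -(A2.T @ A1))        # cross term antisymmetric

x = np.random.default_rng(2).normal(size=6)
G = np.column_stack([A1 @ x, A2 @ x])              # 12 x 2, as in Eq. (13.59)
assert np.allclose(G.T @ G, 6 * np.sum(x ** 2) * np.eye(2))
```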

References
1. H. F. Harmuth, Transmission of Information by Orthogonal Functions,
Springer-Verlag, Berlin (1969).
2. H. F. Harmuth, Sequency Theory: Foundations and Applications, Academic
Press, New York (1977).
3. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Inc.,
Englewood Cliffs, NJ (1989).
4. R. K. Yargaladda and J. E. Hershey, Hadamard Matrix Analysis and Synthesis:
With Applications to Communications and Signal/Image Processing, Kluwer,
Dordrecht (1997).
5. S. S. Agaian, Hadamard Matrices and Their Applications, Lecture Notes in
Math., 1168, Springer-Verlag, Berlin (1985).
6. J. Seberry and M. Yamada, “Hadamard matrices, sequences and block
designs,” in Surveys in Contemporary Design Theory, Wiley-Interscience
Series in Discrete Mathematics, John Wiley & Sons, Hoboken, NJ (1992).
7. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, “Space–time codes from
orthogonal designs,” IEEE Trans. Inf. Theory 45 (5), 1456–1467 (1999).
8. H. Sarukhanyan, “Decomposition of the Hadamard matrices and fast
Hadamard transform,” in Computer Analysis of Images and Patterns, Lecture
Notes in Computer Sciences, 1298 575–581 Springer, Berlin (1997).
9. H. Sarukhanyan, A. Anoyan, K. Egiazarian, J. Astola and S. Agaian, Codes
generated from Hadamard matrices, in Proc. of Int. Workshop on Trends and
Recent Achievements in Information Technology, Cluj-Napoca, Romania, May
16–17, pp. 7–18 (2002).
10. Sh.-Ch. Chang and E. J. Weldon, “Coding for T-user multiple-access
channels,” IEEE Trans. Inf. Theory 25 (6), 684–691 (1979).


11. R. Steele, “Introduction to digital cellular radio,” in Mobile Radio Communi-


cations, R. Steele and L. Hanzo, Eds., second ed., IEEE, Piscataway, NJ
(1999).
12. A. W. Lam and S. Tantaratana, Theory and Applications of Spread-Spectrum
Systems, IEEE/EAB Self-Study Course, IEEE, Piscataway, NJ (1994).
13. V.I. Levenshtein, A new lower bound on aperiodic crosscorrelation of binary
codes, Proc. of 4th Int. Symp. Commun. Theory and Appl., ISCTA’1997,
Ambleside, U.K., 13–18 July 1997, 147–149 (1997).
14. J. Oppermann and B. C. Vucetic, “Complex spreading sequences with a wide
range of correlation properties,” IEEE Trans. Commun. 45, 365–375 (1997).
15. J. Seberry and R. Craigen, “Orthogonal designs,” in Handbook of Combi-
natorial Designs, C. J. Colbourn and J. Dinitz, Eds., 400–406 CRC Press,
Boca Raton (1996).
16. H. Evangelaras, Ch. Koukouvinos, and J. Seberry, “Applications of Hadamard
matrices,” J. Telecommun. Inf. Technol. 2, 3–10 (2003).
17. J. Carlson, Error-correcting codes: an introduction through problems, Nov. 19,
1999, http://www.math.utah.edu/hschool/carlson.
18. M. Plotkin, "Binary codes with specified minimum distance," Proc. Cybernet. 7, 60–67 (1963).
19. R. C. Bose and S. S. Shrikhande, "On the falsity of Euler's conjecture about the nonexistence of two orthogonal Latin squares of order 4t + 2," Proc. Natl. Acad. Sci. USA 45, 734–737 (1959).
20. Combinatorics in Space: The Mariner 9 Telemetry System, http://www.math.cudenver.edu/~wcherowi/courses/m6409/mariner9talk.pdf.
21. C.E. Shannon, Two-way communication channels, in Proc. of 4th Berkeley
Symp. Math. Statist. and Prob.1, 611–644 (1961).
22. W. W. Peterson and E. J. Weldon Jr., Error-Correcting Codes, second ed., MIT
Press, Cambridge, MA (1972).
23. J. H. van Lint, Introduction to Coding Theory, Springer-Verlag, Berlin (1982).
24. G. J. Foschini, “Layered space–time architecture for wireless communication
in a fading environment when using multi-element antennas,” Bell Labs Tech.
J. 1 (2), 41–59 (1996).
25. I. E. Telatar, “Capacity of multi-antenna Gaussian channels,” Eur. Trans.
Telecommun. 10 (6), 585–595 (1999).
26. D.W. Bliss, K.W. Forsythe, A.O. Hero and A.L. Swindlehurst, MIMO
environmental capacity sensitivity, in Conf. Rec. of 34th Asilomar Conf.
on Signals, Systems and Computers 1, Oct. 29–Nov. 1, Pacific Grove, CA,
764–768 (2000).


27. D. W. Bliss, K. W. Forsythe and A. F. Yegulalp, MIMO communication capacity using infinite dimension random matrix eigenvalue distributions, in Conf. Rec. 35th Asilomar Conf. on Signals, Systems and Computers, 2, Nov. 4–7, Pacific Grove, CA, 969–974 (2001).
28. F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes,
North-Holland, Amsterdam (1977).
29. N. Balaban and J. Salz, “Dual diversity combining and equalization in digital
cellular mobile radio,” IEEE Trans. Vehicle Technol. 40, 342–354 (1991).
30. G. J. Foschini Jr, “Layered space–time architecture for wireless
communication in a fading environment when using multi-element antennas,”
Bell Labs Tech. J. 1 (2), 41–59 (1996).
31. G. J. Foschini Jr. and M. J. Gans, “On limits of wireless communication
in a fading environment when using multiple antennas,” Wireless Personal
Commun. 6 (3), 311–335 (1998).
32. J.C. Guey, M.P. Fitz, M.R. Bell and W.-Y. Kuo, Signal design for transmitter
diversity wireless communication systems over Rayleigh fading channels, in
Proc. IEEE VTC’96, 136–140 (1996).
33. N. Seshadri and J. H. Winters, “Two signaling schemes for improving the
error performance of frequency-division-duplex (FDD) transmission systems
using transmitter antenna diversity,” Int. J. Wireless Inf. Networks 1 (1), 49–60
(1994).
34. V. Tarokh, N. Seshardi, and A. R. Calderbank, “Space–time codes for
high data rate wireless communication: Performance analysis and code
construction,” IEEE Trans. Inf. Theory 44 (2), 744–756 (1998).
35. V. Tarokh, A. Naguib, N. Seshardi, and A. R. Calderbank, “Space–time
codes for high data rate wireless communications: Performance criteria in
the presence of channel estimation errors, mobility and multiple paths,” IEEE
Trans. Commun. 47 (2), 199–207 (1999).
36. E. Telatar, Capacity of multi-antenna Gaussian channels, AT&T-Bell
Laboratories Internal Tech. Memo (Jun. 1995).
37. J. Winters, J. Salz, and R. D. Gitlin, "The impact of antenna diversity on the capacity of wireless communication systems," IEEE Trans. Commun. 42 (2/3/4), 1740–1751 (1994).
38. A. Wittneben, Base station modulation diversity for digital SIMULCAST, in
Proc. IEEE VTC, 505–511 (May 1993).
39. A. Wittneben, A new bandwidth efficient transmit antenna modulation
diversity scheme for linear digital modulation, in Proc. IEEE ICC, 1630–1634
(1993).
40. K. J. Horadam, Hadamard Matrices and Their Applications, Princeton
University Press, Princeton (2007).


41. M. Bossert, E. Gabidulin and P. Lusina, Space–time codes based on Hadamard


matrices, in Proc. of Int. Symp. on Information Theory 2000, Sorrento, Italy,
283 (2000).
42. H. Sarukhanyan, S. Agaian, K. Egiazarian and J. Astola, Space–time codes
from Hadamard matrices, URSI 26 Convention on Radio Science, Finnish
Wireless Communications Workshop, Oct. 23–24, Tampere, Finland (2001).



Chapter 14
Randomization of Discrete Orthogonal Transforms and Encryption
Yue Wu, Joseph P. Noonan, and Sos Agaian*

*Yue Wu (yue.wu@tufts.edu) and Joseph P. Noonan (jnoonan@ece.tufts.edu) are with the Dept. of Electrical and Computer Engineering, Tufts University, Medford, MA 02155 USA. Sos Agaian is with the Dept. of Electrical and Computer Engineering, University of Texas at San Antonio, San Antonio, TX 78249 USA.

Previous chapters have discussed many discrete orthogonal transforms (DOTs),


such as the discrete Fourier transform (DFT), discrete cosine transform (DCT),
and discrete Hadamard transform (DHT). These discrete orthogonal transforms
are well known for their successful applications in the areas of digital signal
processing1–19 and communications.20–29
As demand for electronic privacy and security increases, a DOT system that
resists attacks from possible intruders becomes more desirable. A randomized
discrete orthogonal transform (RDOT) provides one way to achieve this goal, and it
has already been used in secure communications and encryptions for various forms
of digital data, such as speeches,2,30 images,31–33 and videos.34,35
Early efforts on RDOTs have been made in different areas: Cuzick and Lai
introduced a sequence of phase constants in the conventional Fourier series;36
Ferreira studied a special class of permutation matrix that is commutable with the
DFT matrix;37 Dmitriyev and Chernov proposed a discrete M-transform that is
orthogonal and is based on m-sequence;38 Liu and Liu proposed a randomization
method on the discrete fractional Fourier transform (DFRFT) by taking random
powers for eigenvalues of the DFT matrix;39 Pei and Hsue improved on this39 by
constructing parameterized eigenvectors as well;40 and Zhou, Panetta, and Agaian
developed a parameterized DCT with controllable phase, magnitude, and DCT
matrix size.31
However, these efforts (1) focused on randomizing one specific DOT rather
than a general form of DOT; (2) proposed to obtain RDOT systems from scratch
instead of upgrading existent DOT systems to RDOT systems; (3) may have lacked a large set of parameters (for example, in Ref. 38, one parameter has to be a prime
number); (4) may not have generated the exact form of RDOT,39,40 leading to
inevitable approximation errors in implementation;41 and (5) contained minimal
mention of requirements for cryptanalysis.
This chapter proposes a randomization theorem of DOTs and thus a general
method of obtaining RDOTs from DOTs conforming to Eq. (14.1). It can be
demonstrated that building the proposed RDOT system is equivalent to improving
a DOT system by adding a pair of pre- and post-processes. The proposed
randomization method is very compact and is easily adopted by any existent
user-selected DOT system. Furthermore, the proposed RDOT matrix is of the
exact form, for it is directly obtained by a series of matrix multiplications
related to the parameter matrix and the original DOT matrix. Hence, it avoids
approximation errors commonly seen in those eigenvector-decomposition-related
RDOT methods,39,40 while keeping good features or optimizations, such as a fast
algorithm, already designed for existing DOTs. Any current DOT system can be
improved to a RDOT system and fulfill the needs of secure communication and
data encryption.
The remainder of this chapter is organized as follows:
• Section 14.1 reviews several well-known DOTs in the matrix form and briefly
discusses the model of secure communication.
• Section 14.2 proposes the new model of randomizing a general form of the
DOT, including theoretical foundations, qualified candidates of the parameter
matrix, properties and features of new transforms, and several examples of new
transforms.
• Section 14.3 discusses encryption applications for both 1D and 2D digital data;
the confusion and diffusion properties of an encryption system are also explored.

14.1 Preliminaries
This section will briefly discuss two topics: the matrix form of a DOT, and
cryptography backgrounds. The first step is to unify all DOTs in a general form
so that the DOT randomization theory presented in Section 14.2 can be derived
directly from this general form. The second step is to explain the concepts and terminology used in the model so that secure communication and encryption
applications based on the RDOT can be presented clearly in Section 14.3.

14.1.1 Matrix forms of DHT, DFT, DCT, and other DOTs


Transforms, especially discrete transforms, play a vital role in our digital world;
this chapter concentrates on discrete transforms with an orthogonal basis matrix,
which have a general form of

Forward Transform: y = xMn
(14.1)
Inverse Transform: x = y M̃n .


Without loss of generality, let the vector x be the time-domain signal of size 1 × n, the vector y be the corresponding transform-domain signal of size 1 × n, the matrix M be the forward transform matrix of size n × n, and the matrix M̃ be the inverse transform matrix of size n × n.
Equation (14.1) is called the general matrix form of a DOT. Matrix theory states that the transform pair in Eq. (14.1) is valid for any time-domain signal x if and only if the matrix product of the forward transform matrix M and the inverse transform matrix M̃ is the identity matrix In:

$$M_n \tilde{M}_n = I_n, \tag{14.2}$$

where In is the identity matrix of size n, as shown in Eq. (14.3):

$$I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}_{n \times n}. \tag{14.3}$$

In reality, many discrete transforms are of the above type and can be denoted
in the general form of Eq. (14.1). It is the transform matrix pair of M and M̃ that
makes a distinct DOT. For example, the DHT transform pair is of the form of
Eq. (14.1) directly, with its forward transform matrix H and its inverse transform
matrix H̃ defined in Eqs. (14.4) and (14.5), respectively, where ⊗ denotes the
Kronecker product and H T denotes the transpose of H [see equivalent definitions
in Eq. (1.1)]:

$$H_n = \begin{cases} H_1 = (1), & n = 1, \\ H_2 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, & n = 2, \\ H_2 \otimes H_{2^{k-1}}, & n = 2^k, \end{cases} \tag{14.4}$$

$$\tilde{H}_n = \frac{1}{n} H_n^{T}. \tag{14.5}$$

Similarly, the pair of size n × n DFT matrices can be defined as Eqs. (14.6) and
(14.7),

$$F_n = \frac{1}{\sqrt{n}} \begin{pmatrix} w_1^1 & w_1^2 & \cdots & w_1^n \\ w_2^1 & w_2^2 & \cdots & w_2^n \\ \vdots & \vdots & \ddots & \vdots \\ w_n^1 & w_n^2 & \cdots & w_n^n \end{pmatrix}, \tag{14.6}$$

$$\tilde{F}_n = \frac{1}{\sqrt{n}} \begin{pmatrix} \bar{w}_1^1 & \bar{w}_1^2 & \cdots & \bar{w}_1^n \\ \bar{w}_2^1 & \bar{w}_2^2 & \cdots & \bar{w}_2^n \\ \vdots & \vdots & \ddots & \vdots \\ \bar{w}_n^1 & \bar{w}_n^2 & \cdots & \bar{w}_n^n \end{pmatrix} = F_n^{*}, \tag{14.7}$$

where w_m^k is defined in Eq. (14.8), \bar{w}_m^k is the complex conjugate of w_m^k, and F_n^* is the complex conjugate of F_n. In Eq. (14.8), j denotes the standard imaginary unit with the property j² = −1:

$$w_m^k = \exp\left[ -\frac{j 2\pi}{n} (m-1)(k-1) \right]. \tag{14.8}$$

Similarly, the pair of size n × n DCT matrices can be defined as Eqs. (14.9) and (14.10), where c_m^k is defined in Eq. (14.11):

$$C_n = \begin{pmatrix} \sqrt{1/n}\, c_1^1 & \sqrt{2/n}\, c_1^2 & \cdots & \sqrt{2/n}\, c_1^n \\ \sqrt{1/n}\, c_2^1 & \sqrt{2/n}\, c_2^2 & \cdots & \sqrt{2/n}\, c_2^n \\ \vdots & \vdots & \ddots & \vdots \\ \sqrt{1/n}\, c_n^1 & \sqrt{2/n}\, c_n^2 & \cdots & \sqrt{2/n}\, c_n^n \end{pmatrix}, \tag{14.9}$$

$$\tilde{C}_n = C_n^{T}, \tag{14.10}$$

$$c_m^k = \cos\left[ \pi (2m-1)(k-1) / 2n \right]. \tag{14.11}$$

Besides the examples shown above, other DOTs can be written in the form of Eq. (14.1), for instance, the discrete Hartley transform1 and the discrete M-transform.38

14.1.2 Cryptography
The fundamental objective of cryptography is to enable two people, usually
referred to as Alice and Bob, to communicate over an insecure channel so that
an opponent, Oscar, cannot understand what is being said.42 The information that
Alice wants to send is usually called a plaintext, which can be numerical data, a text
message, or anything that can be represented by a digital bit stream. Alice encrypts
the plaintext, using a predetermined key K, and obtains an encrypted version of
the plaintext, which is called ciphertext. Then Alice sends this resulting ciphertext
over the insecure channel. Oscar (the eavesdropper), upon seeing the ciphertext,
cannot determine the contents of the plaintext. However, Bob (the genuine receiver)
knows the encryption key K and thus can decrypt the ciphertext and reconstruct the
plaintext sent by Alice. Figure 14.1 illustrates this general cryptography model.
In this model, it seems that only the ciphertext communicated over the insecure
channel is accessible by Oscar, making the above cryptosystem appear very secure.
In reality, however, Oscar should be considered a very powerful intruder who

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


Randomization of Discrete Orthogonal Transforms and Encryption 453

Figure 14.1 A general cryptography model.

knows everything in the above cryptosystem except the encryption key K. As a


result, the security of communication between Alice and Bob depends solely on
the encryption key and, consequently, the encryption key source.
Based on whether data loss occurs, methods of cryptography can be classified into two groups:43 lossless encryption algorithms and joint compression/encryption algorithms. More specifically, the lossless encryption algorithms can be further divided into43 affine transform algorithms,44 chaotic-system-based algorithms,45–47 and frequency-domain algorithms;30,31 the joint compression/encryption algorithms can be further divided into43 base switching algorithms, entropy coding algorithms,48 and SCAN-language-based algorithms.49–51
Although whether a cryptography system is secure is a very complex question,
this chapter focuses on three fundamental aspects: (1) the key space, namely,
how many keys Alice can choose; (2) the confusion property, which can generate
similar ciphertexts for distinct plaintexts by using different keys so that Oscar is
confused; and (3) the diffusion property, which requires that a minor change in a plaintext be dissipated throughout its ciphertext, such that the new ciphertext differs greatly from the previous one when the encryption key remains the same.
The first concern is simple, yet important, because if the key space is not large
enough, Oscar can try the keys one by one and thus recover the plaintext. Conventionally, this method is called a "brute-force attack."52 According to the computing capacity of the time, it is believed that a 256-bit key, i.e., a key space of 2^256, is safe. Many well-known commercial ciphers, encryption algorithms, and standards adopt this key length.
The second and third concern are proposed in a 1949 paper53 by Claude
Shannon. In this paper, the term “confusion” refers to making the relationship
between the key and the ciphertext very complex and involved.53 In other words,
it is desirable to ensure that ciphertexts generated by different keys have the same
statistics so that the statistics of a ciphertext give no information about the used
key.53 The term “diffusion” refers to the property that the redundancy in the
statistics of the plaintext is “dissipated” in the statistics of the ciphertext.53

14.2 Randomization of Discrete Orthogonal Transforms


The previous section showed that many DOTs are of the general form of Eq. (14.1).
This section concentrates on using Theorem 14.1 to obtain a class of new


transforms that randomize the original transform’s basis matrix by introducing two
new square matrices P and Q, such that the response y in the transform domain
will dramatically change, while keeping the new transform pair valid for any given
input signal x.

14.2.1 The theorem of randomization of discrete orthogonal transforms


Let M and M̃ be the forward and inverse DOT square matrices, respectively. Once
again, this M can be defined as Eq. (14.4), Eq. (14.6), Eq. (14.9), or another
qualified transform matrix, and M̃ is the corresponding inverse transform matrix.
Then M and M̃ together define a pair of DOTs as shown in Eq. (14.1). Thus,
there exists a family of randomized DOTs defined by L and L̃ as presented in
Theorem 14.1.

Theorem 14.1: [Randomization Theorem for DOTs (RTDOT)]. If Mn and M̃n


together define a valid pair of transforms, i.e.,

Forward Transform: y = xMn
Inverse Transform: x = y M̃n ,

and square parameter matrices Pn and Qn are such that Pn Qn = In , then Ln and
L̃n define a valid pair of transforms,
 
Forward Transform: y = x Ln, where Ln = Pn Mn Qn,
Inverse Transform: x = y L̃n, where L̃n = Pn M̃n Qn.

Proof: We want to show that for any signal vector x, the inverse transform (IT)
response z of x’s forward transform (FT) response y is equal to x. Consider the
following:

z = IT (y) = yL̃n = y(Pn M̃n Qn ) = FT (x) · (Pn M̃n Qn )


= (xLn ) · (Pn M̃n Qn ) = (xPn Mn Qn ) · (Pn M̃n Qn )
= x(Pn (Mn (Qn Pn ) M̃n )Qn ) = x(Pn (Mn M̃n )Qn )
= x(Pn Qn ) = xIn = x.

Therefore, as long as Pn Qn = In is satisfied, Ln and L̃n together define a pair of


DOTs conforming to the forms in Eq. (14.1). It is worth noting that Theorem 14.1
is the core of the chapter. In order to simplify the notation, Mn , M̃n , Ln , L̃n , Pn , Qn ,
In , etc., will be denoted without the subscript n, which is the matrix size.
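Theorem 14.1 can be exercised directly. The sketch below is not from the book; it pairs the DHT matrices of Eqs. (14.4) and (14.5) with a random invertible key matrix and verifies perfect reconstruction:

```python
# Direct numerical check of Theorem 14.1 with the DHT pair. Not from the
# book; the key matrix and test signal are illustrative.
import numpy as np
rng = np.random.default_rng(3)

n = 8
H = np.array([[1]])
while H.shape[0] < n:                  # Sylvester construction, Eq. (14.4)
    H = np.block([[H, H], [H, -H]])
H_inv = H.T / n                        # Eq. (14.5)

P = rng.normal(size=(n, n))            # invertible with probability 1
Q = np.linalg.inv(P)                   # Eq. (14.12): Q = P^{-1}

L = P @ H @ Q                          # randomized forward matrix
L_inv = P @ H_inv @ Q                  # randomized inverse matrix

x = rng.normal(size=n)                 # row-vector signal, as in Eq. (14.1)
assert np.allclose((x @ L) @ L_inv, x) # valid transform pair for any x
```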

14.2.2 Discussions on the square matrices P and Q


Theorem 14.1 implies that the relationship between the matrix P and Q in general
is Eq. (14.12), namely Q is the inverse matrix of P and vice versa.

Q = P−1 . (14.12)


Matrix theory states that as long as a matrix P is invertible, then its inverse
matrix exists. Therefore, infinitely many matrices can be used here for matrix
P. According to Eq. (14.12), Q is determined once P is determined. The
remainder of this section focuses only on the matrix P since Q can be determined
correspondingly.
One good candidate for the matrix P is the permutation matrix family P.
The permutation matrix is sparse and can be compactly denoted. Two types of
permutation matrices are introduced here: the unitary permutation matrix and the
generalized permutation matrix.
Definition 14.1: The unitary permutation matrix.54 A square matrix P is called a
unitary permutation matrix (denoted as P ∈ U), if in every column and every row
there is exactly one nonzero entry, whose value is 1.
Definition 14.2: The generalized permutation matrix.54 A square matrix P is
called a generalized permutation matrix (denoted as P ∈ G), if in every column
and every row there is exactly one nonzero entry.
If P is a unitary permutation matrix, i.e., P ∈ U, then the n × n matrix P can be denoted by a 1 × n vector.
Example 14.2.1: The row permutation sequence [3, 4, 2, 1] denotes Eq. (14.13):54

$$P = P_U = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}. \tag{14.13}$$

Meanwhile, Q is also a permutation matrix55 and is defined as

Q = P−1 = PT . (14.14)

Correspondingly, the new DOT matrix L = PMQ can be interpreted as a shuffled version of the original transform matrix M.
If P is a generalized permutation matrix, i.e., P ∈ G, then P and Q can be
denoted as Eqs. (14.15) and (14.16), respectively, where D is an invertible diagonal
matrix defined in Eq. (14.17) and d1 , d2 , . . . , dn are nonzero constants; D−1 is
defined in Eq. (14.18); PU is a unitary permutation matrix; and P can be denoted
by two 1 × n vectors:

$$P = D P_U, \tag{14.15}$$

$$Q = P_U^{T} D^{-1}, \tag{14.16}$$

$$D = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n \end{pmatrix}, \tag{14.17}$$

$$D^{-1} = \begin{pmatrix} d_1^{-1} & 0 & \cdots & 0 \\ 0 & d_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_n^{-1} \end{pmatrix}. \tag{14.18}$$

Example 14.2.2: The row permutation sequence [3, 4, 2, 1] and the weight sequence [w1, w2, w3, w4] define the generalized permutation matrix

$$P = P_G = \begin{pmatrix} 0 & 0 & 0 & w_4 \\ 0 & w_2 & 0 & 0 \\ w_1 & 0 & 0 & 0 \\ 0 & 0 & w_3 & 0 \end{pmatrix}. \tag{14.19}$$

As a result, the new DOT matrix L = PMQ can be interpreted as a weighted and
shuffled basis matrix of the original transform basis matrix M. It is worth noting
that the parameter matrix P can be any invertible matrix and that the permutation
matrix family is just one special case of the invertible matrix family.
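The two key-matrix families can be constructed in a few lines. The sketch below is not from the book; the weights are illustrative, and the row-to-column convention used is one common choice (conventions for writing permutation matrices vary, so the exact layout of Example 14.2.1 may differ):

```python
# Building unitary and generalized permutation key matrices. Not from the
# book; weights and the permutation convention are illustrative.
import numpy as np

def perm_matrix(seq):
    """Permutation matrix whose row i has its single 1 in column seq[i]."""
    n = len(seq)
    P = np.zeros((n, n))
    P[np.arange(n), np.array(seq) - 1] = 1    # seq is 1-based
    return P

PU = perm_matrix([3, 4, 2, 1])                # unitary permutation matrix
assert np.allclose(PU @ PU.T, np.eye(4))      # Eq. (14.14): Q = P^{-1} = P^T

D = np.diag([0.5, -2.0, 3.0, 1.25])           # invertible diagonal, Eq. (14.17)
PG = D @ PU                                   # generalized permutation, Eq. (14.15)
QG = PU.T @ np.linalg.inv(D)                  # Eq. (14.16)
assert np.allclose(PG @ QG, np.eye(4))        # P Q = I, as Theorem 14.1 requires
```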

14.2.3 Examples of randomized transform matrices


Based on the used matrix P, the new transform basis matrix L may have different
configurations. Take the 8 × 8 DHT matrix as an example. The original transform
basis matrix H8 is obtained via Eq. (14.4) as follows:
⎛ ⎞
⎜⎜⎜+1 +1 +1 +1 +1 +1 +1 +1⎟⎟⎟
⎜⎜⎜ ⎟⎟
⎜⎜⎜+1
⎜⎜⎜ −1 +1 −1 +1 −1 +1 −1⎟⎟⎟⎟⎟
⎜⎜⎜+1 ⎟⎟
⎜⎜⎜ +1 −1 −1 +1 +1 −1 −1⎟⎟⎟⎟
⎜⎜⎜+1 ⎟⎟
−1 −1 +1 +1 −1 −1 +1⎟⎟⎟⎟
H8 = ⎜⎜⎜⎜ ⎟⎟ . (14.20)
⎜⎜⎜+1
⎜⎜⎜ +1 +1 +1 −1 −1 −1 −1⎟⎟⎟⎟⎟
⎜⎜⎜+1 ⎟⎟
⎜⎜⎜ −1 +1 −1 −1 +1 −1 +1⎟⎟⎟⎟
⎜⎜⎜+1 ⎟⎟
⎜⎜⎝ +1 −1 −1 −1 −1 +1 +1⎟⎟⎟⎟
⎟⎠
+1 −1 −1 +1 −1 +1 +1 −1

In this example, four Ps, namely Pa, Pb, Pc, and Pd, are used, and the corresponding new transform matrices are La, Lb, Lc, and Ld, respectively. Figure 14.2 illustrates the Ps used and the resulting transform matrices Ls. For Pa = I8, La = I8 H8 I8^{-1}, and thus La = H8, as plot (a) shows; for Pb = PU, a unitary permutation matrix, Lb is a row-column-wise shuffled version of H8; for Pc = PG, a generalized permutation matrix, Lc is a row-column-wise shuffled version of H8 with additional weights; and for Pd = R8, an invertible square random matrix, Ld also becomes a random matrix. Obviously, the new transform basis matrices (f), (g), and (h) in Fig. 14.2 are very different from the original transform basis matrix (e).


Figure 14.2 Randomized transform matrices based on H8 .


Table 14.1 Transform matrix statistics. Mean and standard deviations were calculated from
100 experiments.
Ps \ Ms I F C H

I 0.0156 ± 0.1240 −0.0751 ± 1.8122 0.0019 ± 0.1250 0.0156 ± 1.0000


PU 0.0156 ± 0.1240 −0.0751 ± 1.8122 0.0019 ± 0.1250 0.0156 ± 1.0000
PG 0.0001 ± 0.0715 −0.0288 ± 1.8333 0.0063 ± 3.8480 −0.0986 ± 31.1434
S 0.0000 ± 0.4119 0.0045 ± 1.8172 0.0476 ± 5.6024 −0.0559 ± 47.1922
R 0.0000 ± 0.5778 0.0025 ± 1.8219 −0.0221 ± 5.7738 −0.2053 ± 43.4050

Therefore, the proposed randomization framework for discrete transforms is able to generate distinct new transforms based on existing transforms. Moreover, it is clear that the randomness of the new transform matrices follows the order (e) < (f) < (g) < (h). It is the matrix P that introduces the randomness into the original transform matrix.
In order to evaluate the randomness of the newly generated transform basis matrix, the following measurements are used:

$$\operatorname{mean}(M) = u_M = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} M_{i,j}, \tag{14.21}$$

$$\operatorname{std}(M) = \sigma_M = \sqrt{\frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} \left( M_{i,j} - u_M \right)^2}, \tag{14.22}$$

where Mi, j denotes the element at the intersection of the i’th row and j’th column
of the matrix M.
Table 14.1 illustrates the first two statistical moments of the newly generated transform matrix L under various pairs of P and M at size 64. Here, the random full-rank matrix R is uniformly distributed on [−1, 1], and the symmetric matrix S is generated via Eq. (14.23); the nonzero elements in PG are also uniformly distributed on [−1, 1]:

$$S = (R + R^T)/2. \tag{14.23}$$

The selected M matrices are F (the DFT matrix), C (the DCT matrix), and H (the DHT matrix). For real Ls, statistical tests are made on the elements directly; for complex Ls, tests are made on the phase elements. In general, a more uniformly distributed P matrix on [−1, 1] leads to a higher standard deviation in the L matrix.
It is worth noting that the first two statistical moments measure only population properties; thus, permutations within the matrix are not accounted for. However, the permutations within the matrix cause a significant difference in the resulting new transform.
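The measurements of Eqs. (14.21) and (14.22) are one-liners in practice. The sketch below (not from the book; the seed and key matrix are illustrative) compares the original DHT basis with a randomized one; the exact numbers vary with the key, but the pattern matches the H column of Table 14.1:

```python
# Randomness measures [Eqs. (14.21)-(14.22)] for H and L = R H R^{-1}.
# Not from the book; the seed and key matrix are illustrative.
import numpy as np
rng = np.random.default_rng(4)

n = 64
H = np.array([[1]])
while H.shape[0] < n:                      # DHT basis, Eq. (14.4)
    H = np.block([[H, H], [H, -H]])

R = rng.uniform(-1, 1, size=(n, n))        # full-rank random key matrix
L = R @ H @ np.linalg.inv(R)               # randomized basis, Theorem 14.1

for name, M in (("H", H), ("L", L)):
    # numpy's element-wise mean/std match Eqs. (14.21) and (14.22)
    print(f"{name}: mean = {M.mean():+.4f}, std = {M.std():.4f}")
```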


14.2.4 Transform properties and features


In general, the new transform basis matrix L based on the original transform matrix M has all of the following properties:
• Identity matrix: Ln L̃n = In.
• M matrix: L ≡ M if P and the matrix M are commutable, i.e., PM = MP.
• New basis matrix: L ≠ M if P and the matrix M are not commutable, i.e., PM ≠ MP.
• Unitary L matrix: if P and M are both unitary, then L = PMQ = PMP^{-1} is also unitary.
• Symmetric L matrix: if P is a unitary permutation matrix and the matrix M is symmetric, then L = PMQ = PMP^{-1} is also symmetric.
In addition, if the forward transform defined by matrix M is denoted as S (.) and its
inverse transform as S −1 (.), and correspondingly, if the forward transform defined
by matrix L is denoted as R(.) and its inverse transform as R−1 (.),

Original Forward Transform: S (x) = xM
(14.24)
Original Inverse Transform: S −1 (y) = y M̃,

New Forward Transform: R(x) = xL
(14.25)
New Inverse Transform: R−1 (y) = yL̃,

then the new transform system of R(.) and R−1 (.) can be directly realized by the
original transform system of S (.) and S −1 (.), as the following equation shows:

R(x) = xL = x(PMQ) = S (xP) · Q
(14.26)
R−1 (y) = yL̃ = y(P M̃Q) = S −1 (yP) · Q.

Equation (14.26) is very important because it demonstrates that the new transform
system is completely compatible with the original transform. Unlike other
randomization transforms,39,40 the new randomized transform does not require
any computations on eigenvector decompositions and thus does not create any
approximation-caused errors. Any existing transform system conforming to the
model can be used to obtain new transforms without any change.
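The compatibility expressed by Eq. (14.26) is easy to demonstrate: the sketch below (not from the book) uses numpy's unnormalized fft/ifft pair as the existing system S(.) and S^{-1}(.), so the fast algorithm is reused untouched; only the pre- and postmultiplications by P and Q are added:

```python
# Realizing Eq. (14.26) on top of an existing fast transform. Not from the
# book; any consistent forward/inverse pair can play the role of S.
import numpy as np
rng = np.random.default_rng(5)

n = 64
P = rng.normal(size=(n, n))                   # key matrix, invertible w.p. 1
Q = np.linalg.inv(P)

def R(x):                                     # new forward: S(xP) . Q
    return np.fft.fft(x @ P) @ Q

def R_inv(y):                                 # new inverse: S^{-1}(yP) . Q
    return np.fft.ifft(y @ P) @ Q

x = rng.normal(size=n)
assert np.allclose(R_inv(R(x)), x)            # perfect reconstruction
```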

14.2.5 Examples of randomized discrete orthogonal transforms


This section uses the DFT basis matrix F at a size of 64 as an example and compares its new randomized transforms with the conventional DFT, DRFT,39 and DFRFT.40 The following definitions are based on the P matrix used:
• PDFT: permuted discrete Fourier transform, where the new transform L = P_U F P_U^T.
• WPDFT: weighted permuted discrete Fourier transform, where the new transform L = P_G F P_G^{-1}.
• RMDFT: random-matrix-based discrete Fourier transform, where the new transform L = R F R^{-1}.

Downloaded From: http://ebooks.spiedigitallibrary.org/ on 01/23/2014 Terms of Use: http://spiedl.org/terms


460 Chapter 14

Table 14.2 Statistics of transformed signals.

Transform   Phase mean   Phase std.   Phase symmetry   Magnitude symmetry   Eigenvector approximation
DFT         0.0490       1.8703       −1.0000          1.0000               No
DRFT        0.2647       2.0581       0.9997           0.9999               Yes
DFRFT       0.2145       2.5824       0.9984           0.9974               Yes
PDFT        0.0490       1.9972       −0.0558          −0.0517              No
WPDFT       0.0490       1.9401       −0.0038          −0.0284              No
RMDFT       0.0379       1.7425       −0.0125          0.0963               No

The test discrete signal is a rectangular wave defined as

$$x[k] = \begin{cases} 1, & k \in [16, 48], \\ 0, & k \in [1, 15] \cup [49, 64]. \end{cases} \tag{14.27}$$

The resulting signals in the transform domains are shown in Fig. 14.3. It is easy to see that the signals under the proposed transforms tend to have more random-like patterns than those of the DFT, DRFT, and DFRFT in both phase and magnitude components.
More detailed statistics of each transformed signal are shown in Table 14.2. In
addition, compared to other methods, our randomization method is advantageous
because
• no eigenvector decomposition approximation is required;
• it is completely compatible with DFT at arbitrary size; and
• it can be directly implemented with fast Fourier transforms.
In Table 14.2, it is worth noting that a uniformly distributed random phase variable on [−π, π] has mean u = 0 and standard deviation σ = π/√3 ≈ 1.8138. In addition, the symmetry of the phase and magnitude components of each transformed signal was measured. This measure compares the left half and the right half of the transform-domain signal and is defined in Eq. (14.28), where y denotes the transformed signal, and corr denotes the correlation operation defined in Eq. (14.29), where E denotes the mathematical expectation:

$$\text{Sym} = \text{corr}\,[\,y(33 : -1 : 2),\ y(33 : 64)\,], \tag{14.28}$$

$$\text{corr}\,[A, B] = \frac{E[(A - u_A)(B - u_B)]}{\sigma_A \sigma_B}. \tag{14.29}$$

14.3 Encryption Applications


Given the transform matrix M, a random transform L can be generated easily
(depending on the parameter square matrix P) by using Theorem 14.1. From now
on, the parameter matrix P is called the key matrix, for different P matrices give a
different resulting transform matrix L, as Theorem 14.1 shows.


[Figure 14.3 plots the transform-domain responses of the test signal under the DFT, PDFT, DRFT, WPDFT, DFRFT, and RMDFT, with the magnitude of the RMDFT response shown separately.]

Figure 14.3 Randomized transform comparisons.

Such a randomized transform can be directly considered as an encryption


system, as Fig. 14.4 illustrates. Although the opponent Oscar knows the encryption
and decryption algorithms, i.e., Theorem 14.1, which is implemented as RDOT
(randomized DOT) and IRDOT (inverse random DOT) modules in Fig. 14.4, he
is not able to determine the exact DOT that Alice used because such a DOT is
random and only dependent on the key matrix P. Therefore, without knowing the
key matrix P, Oscar cannot perfectly restore the plaintext sent from Alice, whereas
Bob is always able to reconstruct the plaintext by using the exact key matrix P to
generate the paired inverse transform.
Figure 14.4 Secure communication model using Random DOT (RDOT).

It is worth noting that the same idea is applicable to any DOT that conforms to Eq. (14.1). Besides the DHT, DFT, and DCT discussed in Section 14.2, qualified DOTs also include the discrete Hartley transform,1 discrete sine transform,56 discrete M-transform,38 and the discrete fractional Fourier transform,40 among others.
Since the plaintext sent by Alice is unknown and could be a word, a message, an
image, or something else, it is important to discuss the encryption system according
to the plaintext types. Depending on the dimension of the digital data, the data can
be classified as


• 1D digital data, such as a text string or audio data,


• 2D digital data, such as a digital gray image, or
• high-dimensional (3D and 3D+) digital data, such as a digital video.
The remainder of this section discusses the digital data encryption scheme using
the randomized DOT for 1D, 2D, and 3D cases.

14.3.1 1D data encryption


Figure 14.4 gives us a general picture of the way in which the secure communica-
tion between Alice and Bob over an insecure channel is realized. Two core mod-
ules, i.e., RDOT and IRDOT in Fig. 14.4, can be realized by using Theorem 14.1.
Consider methods for improving the security of an existent communication
system using Theorem 14.1. Suppose in the beginning of their hypothetical
scenario, Alice and Bob communicate over a public channel, which means that
the RDOT and IRDOT modules in Fig. 14.4 degrade to a pair of DOTs and
IDOTs (inverse discrete orthogonal transforms). Then the relation in Eq. (14.26)
can be easily adapted and implemented so that the old and insecure communication
system becomes a new and secure communication system, depicted in Fig. 14.4.
Equation (14.26) shows that the effect of using a new transform matrix is
equivalent to first right multiplying the input 1D signal data with the key matrix
P and then right multiplying the resulting signal with P−1 . This fact provides the
convenience of improving an existent transform system to an encryption system
directly.
Figure 14.5 illustrates the relationship between the pair of DOTs and IDOTs and the pair of RDOTs and IRDOTs, where the symbols S(.) and S^{-1}(.) denote the original transform pair of DOTs and IDOTs. As a result, the "Encryption" module and the "Decryption" module become RDOT and IRDOT, respectively. The implementation of Theorem 14.1 on an existent qualified DOT system is the equivalent of adding a preprocess and a postprocess, as Fig. 14.5 shows.

Figure 14.5 The flowchart of the encryption system based on existent transforms.
Three good examples implementing the above encryption system are PDFT,
WPDFT and RMDFT, seen in Fig. 14.3. All three new transform systems use
different key matrices. Signals under these three transforms, i.e., y in Fig. 14.5 and
the ciphertext in Fig. 14.4, are all asymmetric and almost uniformly distributed over the transform domain. As a result, their transform-domain signal provides very little
information about the plaintext, the rectangular wave in the time domain defined
by Eq. (14.27).
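A toy run of this loop (not from the book; the DHT, key matrices, and seed are illustrative) shows Bob recovering the rectangular-wave plaintext of Eq. (14.27) while Oscar, decrypting with a wrong key matrix, does not:

```python
# Secure-communication loop of Fig. 14.4 on a 1D plaintext. Not from the
# book; key matrices, signal, and seed are illustrative.
import numpy as np
rng = np.random.default_rng(6)

n = 64
H = np.array([[1]])
while H.shape[0] < n:                          # DHT as the underlying DOT
    H = np.block([[H, H], [H, -H]])
H_inv = H.T / n

x = np.zeros(n)
x[15:48] = 1                                   # rectangular wave, Eq. (14.27)

P = rng.normal(size=(n, n))                    # Alice and Bob's shared key
cipher = (x @ P) @ H @ np.linalg.inv(P)        # RDOT:  y = x P M P^{-1}

bob = (cipher @ P) @ H_inv @ np.linalg.inv(P)  # IRDOT with the right key
P_bad = rng.normal(size=(n, n))                # Oscar's guessed key
oscar = (cipher @ P_bad) @ H_inv @ np.linalg.inv(P_bad)

print("Bob recovers plaintext:  ", np.allclose(bob, x))    # True
print("Oscar recovers plaintext:", np.allclose(oscar, x))  # False
```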

14.3.2 2D data encryption and beyond


Previous discussions focused on cases involving 1D data; however, the same
encryption concept can be extended naturally to 2D data. Such extension can be
thought of in two ways. The first approach is simply to use a matrix version of X
instead of a vector x in the 1D system developed in Theorem 14.1. In other words,
a 1D transform system can gain the ability to process 2D data (a matrix X) by
transforming row by row. The 1D transform system proposed in Section 14.3.1 can
satisfy this extension without any changes.
The second approach uses a conventional 2D transform, which transforms a matrix not only along its row vectors but also along its column vectors. Since calculating the 2D discrete transform is equivalent to computing the 1D discrete transform along each dimension of the input matrix, a 2D transform can be defined via its 1D transform by the relation in Eq. (14.30), where M is the 1D transform matrix and X is an n × n matrix in the time domain. As a result, the 2D DHT,16 2D DFT,57 and 2D DCT57 can be defined as Eqs. (14.31)–(14.33) via Eq. (14.4), Eq. (14.6), and Eq. (14.9), respectively:

$$S_{2D}(X) = M X M, \qquad S_{2D}^{-1}(Y) = M^{-1} Y M^{-1}, \tag{14.30}$$

$$S_{2D\text{-}DFT}(X) = F X F, \qquad S_{2D\text{-}DFT}^{-1}(Y) = F^{*} Y F^{*}, \tag{14.31}$$

$$S_{2D\text{-}DCT}(X) = C X C, \qquad S_{2D\text{-}DCT}^{-1}(Y) = C^{T} Y C^{T}, \tag{14.32}$$

$$S_{2D\text{-}DHT}(X) = H X H, \qquad S_{2D\text{-}DHT}^{-1}(Y) = H^{T} Y H^{T} / n^2. \tag{14.33}$$


It is easy to verify that the flowchart for improving an existent DOT system depicted in Fig. 14.5 automatically accommodates the 2D discrete transforms. After applying the forward 2D DOT, one obtains Eq. (14.34), and after applying the inverse 2D DOT, one obtains Eq. (14.35), which implies that the original signal in the time domain has been perfectly restored. Therefore, the encryption flowchart of Fig. 14.5 for 1D cases can be used directly for 2D cases, i.e., digital image encryption:

$$Y = S_{2D}(XP) \cdot P^{-1} = (M(XP)M) \cdot P^{-1}, \tag{14.34}$$

$$\begin{aligned} S_{2D}^{-1}(YP) \cdot P^{-1} &= (M^{-1}(YP)M^{-1}) \cdot P^{-1} = M^{-1}\big( (M(XP)M \cdot P^{-1}) P \big) M^{-1} \cdot P^{-1} \\ &= (M^{-1}M)(XP)\big(M (P^{-1}P) M^{-1}\big) \cdot P^{-1} = XPP^{-1} = X. \end{aligned} \tag{14.35}$$

One might ask whether the proposed encryption method for 1D cases can also be used for high-dimensional digital data. The answer is affirmative, because high-dimensional data can always be decomposed into a set of lower-dimensional data.
For example, a digital video clip is normally taken in the format of a sequence
of digital images. In other words, a digital video (3D data) can be considered as
a set of digital images (2D data). Therefore, from this point of view, the proposed
encryption method of Fig. 14.5 can be applied even to high-dimensional digital
data.

14.3.3 Examples of image encryption


Recall the three important properties of a good encryption algorithm mentioned in
Section 14.1.2:
(1) a sufficiently large key space, so that an exhaustive key search cannot be
completed in a reasonable time, for example, ten years at current computing
capacity;
(2) a sufficiently complex confusion property, which was identified by Shannon
and refers to making the relationship between the key and the ciphertext very
complex and involved;53 and
(3) an effective diffusion property, which was also identified by Shannon and
refers to the redundancy in the statistics of the plaintext being dissipated in
the statistics of the ciphertext.53

14.3.3.1 Key space analysis


Theoretically, the key space of the proposed encryption method depends entirely
on the key matrix P, and this dependence has a two-fold meaning. First, broadly,
the number of qualified key matrices P is the size of the key space. From this
point of view, there are infinitely (uncountably) many encryption keys. Second,
narrowly, the key space depends on the size and type of the key matrix P. Unless
the size and type of the key matrix P are specified, the key space is not
meaningful in practice.


For example, assume that the input digital data is an n × n gray image. If the key
matrix is restricted to the unitary permutation matrix set U (see Definition 14.1),
then the number of allowed keys is n!. For a 256 × 256 gray image (this size is
commonly used for key space analysis and is much smaller than standard digital
photos), the allowed key space is $256! \approx 2^{1684}$, i.e., about 1684 bits, which
is sufficiently large. It is worth noting that current encryption algorithms and
ciphers already consider a 256-bit key space ($2^{256}$ keys) large enough to resist
brute-force attacks. If P is restricted to the generalized permutation matrix set G
(see Definition 14.2), then the number of allowed keys is infinite because the
weights can take arbitrary nonzero real values.
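
As a quick check on the key-space figure above, one can compute $\log_2(n!)$ directly; this short snippet (ours) uses the log-gamma function to avoid forming 256! explicitly:

```python
import math

def keyspace_bits(n):
    """Bits of key space when the key is one of the n! permutation matrices."""
    return math.lgamma(n + 1) / math.log(2)  # log2(n!) = ln(n!) / ln 2

print(round(keyspace_bits(256)))  # 1684, matching 256! ~ 2^1684
```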
14.3.3.2 Confusion property
The confusion property is desirable because ciphertexts generated by different
keys then have the same statistics, in which case the statistics of a ciphertext give
no information about the key that was used.53 This section argues that even the
naïve image encryption system presented in Fig. 14.5 has promising confusion and
diffusion properties. The confusion property of the proposed system is difficult to
prove, so it is illustrated with various images instead. The diffusion property of the
proposed system can be quantified by the number-of-pixel change rate (NPCR), so
a proof is given directly.
Figures 14.6–14.8 depict the confusion property of the system from different
aspects by using various plaintext images and P matrices. It is worth noting that
the transforms these figures refer to are the 2D transforms defined in
Eqs. (14.31)–(14.33), and that the random transform is taken with respect to the
general form of Eq. (14.34).
In Fig. 14.6, the gray 256 × 256 “Lena” image is used as the plaintext image.
The ciphertext for each 2D transform according to the unitary permutation matrix
PU , the generalized permutation matrix PG , and the full-rank random matrix
R are shown in the odd rows. In the even rows, histograms are plotted below
their corresponding images. From a visual inspection and histogram analysis
perspective, it is clear that various key matrices P generate similar statistics.

Figure 14.6 Image encryption using the proposed random transform—Part I: influences of
different types of random matrix P. (See note for Fig. 14.7.)
In Fig. 14.7, three 256 × 256 gray images from the USC-SIPI image database
demonstrate the effect of using various plaintext images and different types of P
matrices. It is not difficult to see that the ciphertext images have coefficient
histograms similar to those in Fig. 14.6, although both the P matrices and the
plaintext images differ. This indicates that the statistics of ciphertext images
provide very limited information about the key, and thus the proposed system has
the confusion property.

Figure 14.7 Image encryption using the proposed random transform—Part II: using
different input images. Note: (1) The histograms of O are plotted using 256 bins uniformly
distributed on [0, 255], the range of a gray image. (2) The histograms of PU are plotted
using two bins, i.e., 0 and 1; the number of pixels in each bin is shown after taking the
base-10 logarithm. (3) The histograms of PG and R are plotted using 256 bins uniformly
distributed on [−1, 1]; the number of pixels in each bin is shown after taking the base-10
logarithm. (4) The histograms of the ciphertext images of the DCT and WHT are plotted
using 256 bins whose base-10 logarithmic lengths are uniformly distributed on
[− log m, log m], where m is the maximum absolute transform-domain coefficient. (5) The
histograms of the ciphertext images of the DFT are plotted with respect to magnitude and
phase, respectively. The magnitude histogram uses 256 bins whose base-10 logarithmic
lengths are uniformly distributed on [0, log m], where m is the maximum magnitude; the
phase histogram uses 256 bins uniformly distributed on [−π, π].
Figure 14.8 investigates the transform coefficient histogram of the ciphertext
image in the first row of Fig. 14.7. Note that the ciphertext images are sized at
256 × 256. Instead of looking at the coefficient histogram of the whole ciphertext
image, the image was divided into sixteen nonoverlapping 64 × 64 blocks, and the
coefficient histogram of each block was inspected. These block coefficient
histograms are shown in the second column of Fig. 14.8. The third and fourth
columns show the mean and standard
deviation, respectively, of these block coefficient histograms. The block
coefficient histograms show that different image blocks have histograms similar to
the overall histogram, which implies that the ciphertext images exhibit a degree of
self-similarity. In other words, an attacker would be confused because different
parts of the ciphertext image have more or less the same statistics.

Figure 14.8 Image encryption using the proposed random transform—Part III:
randomness of the transform coefficients.

14.3.3.3 Diffusion property

Conventionally, the diffusion property can be tested easily by the NPCR,46,58–60
which counts the number of pixels that change in the ciphertext when only one
pixel in the plaintext is changed. The higher the NPCR percentage, the better the
diffusion property.
Mathematically, the NPCR of an encrypted image can be defined as follows:

Definition 14.3:
$$\mathrm{NPCR} = \frac{\sum_{i,j} D_{i,j}}{T} \times 100\%, \qquad
D_{i,j} = \begin{cases} 1, & \text{if } C^{1}_{i,j} \neq C^{2}_{i,j}, \\ 0, & \text{if } C^{1}_{i,j} = C^{2}_{i,j}, \end{cases} \tag{14.36}$$

where $C^{1}$ and $C^{2}$ are the ciphertexts before and after changing one pixel in the
plaintext, respectively; $D$ has the same size as $C^{1}$; and $T$ denotes the total number
of pixels in $C^{1}$.
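
A direct implementation of Eq. (14.36) takes only a few lines; this sketch (ours) compares two ciphertext arrays elementwise:

```python
import numpy as np

def npcr(C1, C2):
    """Number-of-pixel change rate of Eq. (14.36), in percent.

    D[i, j] = 1 where the two ciphertexts differ, and T is the total pixel
    count, so the result is sum(D) / T * 100%. For real-valued transform
    coefficients one may prefer a small tolerance instead of exact inequality.
    """
    D = np.asarray(C1) != np.asarray(C2)
    return 100.0 * D.mean()
```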

The diffusion property is one characteristic of a 2D transform. For a 2D
transform system with a transform matrix M, if none of the elements in M is zero
(note that all DFT, DCT, and DHT matrices are of this type), then even a one-
element change in the input matrix X will lead to a completely different resulting
matrix Y, as Lemma 14.1 shows by setting A = B = M.


Lemma 14.1: Suppose A and B are both n × n transform matrices with nonzero
elements, i.e., $A_{i,j} \neq 0$ and $B_{i,j} \neq 0$ for every subscript pair $(i, j)$, and let the
corresponding 2D transform $S(\cdot)$ be defined by $S(x) = AxB$ [cf. Eq. (14.30) with
$A = B = M$]. Suppose x and y are two n × n matrices with
$$y_{i,j} = \begin{cases} z, & \text{if } i = r, \ j = c, \\ x_{i,j}, & \text{else,} \end{cases}$$
where r and c are constant integers between 1 and n, and z is a constant with
$z \neq x_{r,c}$. Then, for every subscript pair $(i, j)$, $[S(x)]_{i,j} \neq [S(y)]_{i,j}$.

Proof: For any subscript pair $(i, j)$,
$$\begin{aligned}
[S(x)]_{i,j} = [AxB]_{i,j} &= \sum_{k=1}^{n} [Ax]_{i,k} \times B_{k,j}
= \sum_{k=1}^{n} \left( \sum_{m=1}^{n} A_{i,m} \times x_{m,k} \right) \times B_{k,j} \\
&= A_{i,r} \times x_{r,c} \times B_{c,j} + \sum_{(m,k) \neq (r,c)} A_{i,m} \times x_{m,k} \times B_{k,j}.
\end{aligned} \tag{14.37}$$

Similarly,
$$\begin{aligned}
[S(y)]_{i,j} &= A_{i,r} \times y_{r,c} \times B_{c,j} + \sum_{(m,k) \neq (r,c)} A_{i,m} \times y_{m,k} \times B_{k,j} \\
&= A_{i,r} \times y_{r,c} \times B_{c,j} + \sum_{(m,k) \neq (r,c)} A_{i,m} \times x_{m,k} \times B_{k,j},
\end{aligned} \tag{14.38}$$
since x and y agree at every position except (r, c).

Then,

$$[S(x)]_{i,j} - [S(y)]_{i,j} = A_{i,r} \times (x_{r,c} - y_{r,c}) \times B_{c,j} \neq 0, \tag{14.39}$$
because $A_{i,r} \neq 0$, $B_{c,j} \neq 0$, and $x_{r,c} \neq y_{r,c} = z$.

Consequently, as long as the transform matrix M and $Z = PMP^{-1}$ have only
nonzero elements, the encrypted image produced by the proposed system has
100% NPCR, the theoretical maximum. This is stated in the following theorem:

Theorem 14.2: The NPCR of the proposed 2D encryption system described in
Fig. 14.5 is 100%, as long as none of the elements of $Z = PMP^{-1}$ and M is zero,
where P is an invertible matrix and M is the transform matrix defined under a 2D
transform system $S(\cdot)$, as shown in Eq. (14.34).

Proof: Suppose that x and X are two plaintexts with a one-pixel difference, as in
Lemma 14.1, and that y and Y are the corresponding ciphertexts obtained using the
proposed encryption system. The ciphertexts can be written as
$$y = S(xP)P^{-1} = MxPMP^{-1} = MxZ, \tag{14.40}$$

and correspondingly,

$$Y = S(XP)P^{-1} = MXZ. \tag{14.41}$$

If both M and Z contain only nonzero elements, then Lemma 14.1 can be applied
directly. Therefore, $y_{i,j} \neq Y_{i,j}$ for all $(i, j)$, and equivalently, $D_{i,j} = 1$ for all $(i, j)$.
As a result,
$$\mathrm{NPCR} = 100\%.$$
Remarks 14.1: The nonzero conditions imposed on M and Z are automatically
satisfied whenever M has no zero elements and P is a unitary permutation matrix
PU or a generalized permutation matrix PG.
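
Theorem 14.2 is straightforward to confirm empirically. The following sketch (ours, with arbitrary seeds and illustrative helper code) encrypts two plaintexts that differ in a single pixel and measures the resulting NPCR:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(7)             # arbitrary seed for the example
n = 8
M = hadamard(n).astype(float)              # WHT matrix: every entry is +-1, hence nonzero
P = np.zeros((n, n))
P[np.arange(n), rng.permutation(n)] = 1.0  # unitary permutation key P_U
Z = P @ M @ P.T                            # Z = P M P^{-1}; also has no zero entries

def encrypt(X):
    return M @ (X @ P) @ M @ P.T           # Eq. (14.34)

X1 = rng.integers(0, 256, (n, n)).astype(float)
X2 = X1.copy()
X2[3, 4] += 1.0                            # change a single pixel of the plaintext
Y1, Y2 = encrypt(X1), encrypt(X2)
print(100.0 * np.mean(Y1 != Y2))           # prints 100.0, as Theorem 14.2 predicts
```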

References
1. C.-Y. Hsu and J.-L. Wu, “Fast computation of discrete Hartley transform via
Walsh–Hadamard transform,” Electron. Lett. 23 (9), 466–468 (1987).
2. S. Sridharan, E. Dawson, and B. Goldburg, “Speech encryption in the
transform domain,” Electron. Lett. 26, 655–657 (1990).
3. A. H. Delaney and Y. Bresler, “A fast and accurate Fourier algorithm for
iterative parallel-beam tomography,” IEEE Trans. Image Process. 5, 740–753
(1996).
4. T. M. Foltz and B. M. Welsh, “Symmetric convolution of asymmetric
multidimensional sequences using discrete trigonometric transforms,” IEEE
Trans. Image Process. 8, 640–651 (1999).
5. I. Djurovic and V. V. Lukin, “Robust DFT with high breakdown point for
complex-valued impulse noise environment,” IEEE Signal Process. Lett. 13,
25–28 (2006).
6. G. A. Shah and T. S. Rathore, “A new fast Radix-2 decimation-in-frequency
algorithm for computing the discrete Hartley transform,” in Proc. First Int.
Conf. on Computational Intelligence, Communication Systems and Networks
(CICSYN ’09), 363–368 (2009).
7. J. Bruce, “Discrete Fourier transforms, linear filters, and spectrum weighting,”
IEEE Trans. Audio Electroacoust. 16, 495–499 (1968).
8. C. J. Zarowski, M. Yunik, and G. O. Martens, “DFT spectrum filtering,”
IEEE Trans. Acoust. Speech Signal Process. 36, 461–470 (1988).
9. R. Kresch and N. Merhav, “Fast DCT domain filtering using the DCT and the
DST,” IEEE Trans. Image Process. 8, 821–833 (1999).
10. V. F. Candela, A. Marquina, and S. Serna, “A local spectral inversion of a
linearized TV model for denoising and deblurring,” IEEE Trans. Image
Process. 12, 808–816 (2003).

11. A. Foi, V. Katkovnik, and K. Egiazarian, “Pointwise shape-adaptive DCT for
high-quality denoising and deblocking of grayscale and color images,” IEEE
Trans. Image Process. 16, 1395–1411 (2007).
12. N. Gupta, M. N. S. Swamy, and E. I. Plotkin, “Wavelet domain-based video
noise reduction using temporal discrete cosine transform and hierarchically
adapted thresholding,” IET Image Process. 1, 2–12 (2007).
13. Y. Huang, H. M. Dreizen, and N. P. Galatsanos, “Prioritized DCT for
compression and progressive transmission of images,” IEEE Trans. Image
Process. 1, 477–487 (1992).
14. R. Neelamani, R. de Queiroz, Z. Fan, S. Dash, and R. G. Baraniuk, “JPEG
compression history estimation for color images,” IEEE Trans. Image
Process. 15, 1365–1378 (2006).
15. K. Yamatani and N. Saito, “Improvement of DCT-based compression
algorithms using Poisson’s equation,” IEEE Trans. Image Process. 15,
3672–3689 (2006).
16. W. K. Pratt, J. Kane, and H. C. Andrews, “Hadamard transform image
coding,” Proc. IEEE 57, 58–68 (1969).
17. O. K. Ersoy and A. Nouira, “Image coding with the discrete cosine-III
transform,” IEEE J. Sel. Areas Commun. 10, 884–891 (1992).
18. D. Sun and W.-K. Cham, “Postprocessing of low bit-rate block DCT
coded images based on a fields of experts prior,” IEEE Trans. Image
Process. 16, 2743–2751 (2007).
19. T. Suzuki and M. Ikehara, “Integer DCT based on direct-lifting of DCT-IDCT
for lossless-to-lossy image coding,” IEEE Trans. Image Process. 19,
2958–2965 (2010).
20. S. Weinstein and P. Ebert, “Data transmission by frequency-division
multiplexing using the discrete Fourier transform,” IEEE Trans. Commun.
Technol. 19, 628–634 (1971).
21. G. Bertocci, B. Schoenherr, and D. Messerschmitt, “An approach to the
implementation of a discrete cosine transform,” IEEE Trans. Commun. 30,
635–641 (1982).
22. E. Feig, “Practical aspects of DFT-based frequency division multiplexing
for data transmission,” IEEE Trans. Commun. 38, 929–932 (1990).
23. S. Toledo, “On the communication complexity of the discrete Fourier
transform,” IEEE Signal Process. Lett. 3, 171–172 (1996).
24. S. Hara, A. Wannasarnmaytha, Y. Tsuchida, and N. Morinaga, “A novel
FSK demodulation method using short-time DFT analysis for LEO satellite
communication systems,” IEEE Trans. Veh. Technol. 46, 625–633 (1997).

25. H. Lim and E. E. Swartzlander, Jr., “Multidimensional systolic arrays for
the implementation of discrete Fourier transforms,” IEEE Trans. Signal
Process. 47, 1359–1370 (1999).
26. H. Bogucka, “Application of the joint discrete Hadamard-inverse Fourier
transform in a MC-CDMA wireless communication system-performance and
complexity studies,” IEEE Trans. Wireless Commun. 3, 2013–2018 (2004).
27. S.-M. Phoong, Y. Chang, and C.-Y. Chen, “DFT-modulated filterbank
transceivers for multipath fading channels,” IEEE Trans. Signal Process. 53,
182–192 (2005).
28. R. Tao, B. Deng, W.-Q. Zhang, and Y. Wang, “Sampling and sampling rate
conversion of band limited signals in the fractional Fourier transform domain,”
IEEE Trans. Signal Process. 56, 158–171 (2008).
29. A. N. Akansu and H. Agirman-Tosun, “Generalized discrete Fourier
transform with nonlinear phase,” IEEE Trans. Signal Process. 58, 4547–4556
(2010).
30. S. Sridharan, E. Dawson, and B. Goldburg, “Fast Fourier transform based
speech encryption system,” IEE Proc. I Commun. Speech Vis. 138, 215–223
(1991).
31. Y. Zhou, K. Panetta, and S. Agaian, “Image encryption using discrete
parametric cosine transform,” in Conf. Record of the Forty-Third Asilomar
Conf. on Signals, Systems and Computers, 395–399 (2009).
32. J. Lang, R. Tao, and Y. Wang, “Image encryption based on the multiple-
parameter discrete fractional Fourier transform and chaos function,” Opt.
Commun. 283, 2092–2096 (2010).
33. Z. Liu, L. Xu, T. Liu, H. Chen, P. Li, C. Lin, and S. Liu, “Color image
encryption by using Arnold transform and color-blend operation in discrete
cosine transform domains,” Opt. Commun. 284 (1), 123–128 (2010).
34. B. Bhargava, C. Shi, and S.-Y. Wang, “MPEG video encryption algorithms,”
Multimedia Tools Appl. 24, 57–79 (2004).
35. S. Yang and S. Sun, “A video encryption method based on chaotic maps in
DCT domain,” Prog. Nat. Sci. 18, 1299–1304 (2008).
36. J. Cuzick and T. L. Lai, “On random Fourier series,” Trans. Amer. Math. Soc.
261, 53–80 (1980).
37. P. J. S. G. Ferreira, “A group of permutations that commute with the discrete
Fourier transform,” IEEE Trans. Signal Process. 42, 444–445 (1994).
38. A. Dmitryev and V. Chernov, “Two-dimensional discrete orthogonal trans-
forms with the ‘noise-like’ basis functions,” in Proc. Int. Conf. GraphiCon
2000, 36–41 (2000).

39. Z. Liu and S. Liu, “Randomization of the Fourier transform,” Opt. Lett. 32,
478–480 (2007).
40. S.-C. Pei and W.-L. Hsue, “Random discrete fractional Fourier
transform,” IEEE Signal Process. Lett. 16, 1015–1018 (2009).
41. M. T. Hanna, N. P. A. Seif, and W. A. E. M. Ahmed, “Hermite–Gaussian-like
eigenvectors of the discrete Fourier transform matrix based on the singular-
value decomposition of its orthogonal projection matrices,” IEEE Trans.
Circuits Syst. I: Regul. Pap. 51, 2245–2254 (2004).
42. D. Stinson, Cryptography: Theory and Practice, CRC Press, Boca Raton
(2006).
43. M. Yang, N. Bourbakis, and S. Li, “Data-image-video encryption,”
IEEE Potentials 23, 28–34 (2004).
44. T. Chuang and J. Lin, “A new multiresolution approach to still image
encryption,” Pattern Recognition and Image Analysis 9, 431–436 (1999).
45. Y. Wu, J. P. Noonan, and S. Agaian, “Binary data encryption using the
Sudoku block,” in Proc. IEEE Int. Conf. on Systems, Man and Cybernetics
(SMC 2010) (2010).
46. G. Chen, Y. Mao, and C. K. Chui, “A symmetric image encryption scheme
based on 3D chaotic cat maps,” Chaos, Solitons & Fractals 21, 749–761
(2004).
47. L. Zhang, X. Liao, and X. Wang, “An image encryption approach based on
chaotic maps,” Chaos, Solitons & Fractals 24, 759–765 (2005).
48. X. Wu and P. W. Moo, “Joint image/video compression and encryption via
high-order conditional entropy coding of wavelet coefficients,” in Proc. IEEE
Int. Conf. on Multimedia Computing and Systems 2, 908–912 (1999).
49. S. S. Maniccam and N. G. Bourbakis, “Image and video encryption using
SCAN patterns,” Pattern Recognition 37, 725–737 (2004).
50. N. Bourbakis and A. Dollas, “SCAN-based compression-encryption-hiding
for video on demand,” IEEE Multimedia 10, 79–87 (2003).
51. S. Maniccam and N. Bourbakis, “Lossless image compression and encryption
using SCAN,” Pattern Recognition 34, 1229–1245 (2001).
52. R. Sutton, Secure Communications: Applications and Management, John
Wiley & Sons, New York (2002).
53. C. E. Shannon, “Communication theory of secrecy systems,” Bell Syst.
Tech. J. 28, 656–715 (1949).
54. L. Smith, Linear Algebra, 3rd ed., Springer-Verlag, New York (1998).
55. D. Bernstein, Matrix Mathematics: Theory, Facts, and Formulas with Applica-
tion to Linear Systems Theory, Princeton University Press, Princeton (2005).
56. A. Gupta and K. R. Rao, “An efficient FFT algorithm based on the discrete
sine transform,” IEEE Trans. Signal Process. 39, 486–490 (1991).

57. R. González and R. Woods, Digital Image Processing, Prentice Hall, New
York (2008).
58. C. K. Huang and H. H. Nien, “Multi chaotic systems based pixel shuffle for
image encryption,” Opt. Commun. 282, 2123–2127 (2009).
59. H. S. Kwok and W. K. S. Tang, “A fast image encryption system based on
chaotic maps with finite precision representation,” Chaos, Solitons & Fractals
32, 1518–1529 (2007).
60. Y. Mao, G. Chen, and S. Lian, “A novel fast image encryption scheme based
on 3D chaotic Baker maps,” Int. J. Bifurcation and Chaos 14 (10), 3613–3624
(2004).

Prof. Sos S. Agaian (Fellow SPIE, Fellow AAAS, Foreign
Member, National Academy of Sciences of the Republic of
Armenia) received the M.S. degree (summa cum laude) in
mathematics and mechanics from Yerevan State University,
Yerevan, Armenia, the Ph.D. degree in mathematics and physics
(Steklov Institute of Mathematics, Academy of Sciences of the
USSR) and the Doctor of Engineering Sciences degree from
the Academy of Sciences of the USSR, Moscow, Russia. He
is currently the Peter T. Flawn distinguished professor in the
College of Engineering at The University of Texas at San Antonio. He has
authored more than 450 scientific papers and 4 books, and holds 13 patents. He
is an associate editor of several journals. His current research interests include
signal/image processing and systems, information security, mobile and medical
imaging, and secure communication.

Prof. Hakob Sarukhanyan received the M.Sc. degree in
mathematics from Yerevan State University, Armenia, in 1973,
and the Ph.D. degree in technical sciences and the Doctor Sci.
degree in mathematical sciences from the National Academy of
Sciences of Armenia (NAS RA) in 1982 and 1999, respectively.
He was
Junior and Senior researcher at the Institute for Informatics and
Automation Problems of NAS RA from 1973 to 1993, where he
is currently a head of the Digital Signal and Image Processing
Laboratory. He was also a visiting professor at the Tampere
International Center of Signal Processing, Finland, from 1999 to 2007. His research
interests include signal/image processing, wireless communications, combinatorial
theory, spectral techniques, and object recognition. He has authored more than 90
scientific papers.

Prof. Karen Egiazarian received the M.Sc. degree in
mathematics from Yerevan State University, Armenia, in 1981,
the Ph.D. degree in physics and mathematics from Moscow
State University, Moscow, Russia, in 1986, and the D.Tech.
degree from Tampere University of Technology, Finland, in
1994. He has been Senior Researcher with the Department of
Digital Signal Processing, Institute of Information Problems and
Automation, National Academy of Sciences of Armenia. Since
1996, he has been an Assistant Professor with the DSP/TUT,
where he is currently a Professor, leading the Transforms and Spectral Methods
group. His research interests are in the areas of applied mathematics, signal/image
processing, and digital logic.

Prof. Jaakko Astola (Fellow SPIE, Fellow IEEE) received B.Sc.,
M.Sc., Licentiate, and Ph.D. degrees in mathematics (specializing
in error-correcting codes) from Turku University, Finland, in
1972, 1973, 1975, and 1978, respectively. Since 1993 he has
been Professor of Signal Processing and Director of Tampere
International Center for Signal Processing leading a group of
about 60 scientists. He was nominated Academy Professor by
the Academy of Finland (2001–2006). His research interests
include signal processing, coding theory, spectral techniques, and
statistics.
