
WILEY SERIES IN REMOTE SENSING

Jin Au Kong, Editor

Tsang, Kong, and Shin THEORY OF MICROWAVE REMOTE SENSING


Hord REMOTE SENSING: METHODS AND APPLICATIONS
Elachi INTRODUCTION TO THE PHYSICS AND TECHNIQUES OF
REMOTE SENSING
Szekielda SATELLITE MONITORING OF THE EARTH
Maffett TOPICS FOR A STATISTICAL DESCRIPTION OF
RADAR CROSS SECTION
Asrar THEORY AND APPLICATIONS OF OPTICAL REMOTE
SENSING
Curlander and McDonough SYNTHETIC APERTURE RADAR:
SYSTEMS AND SIGNAL PROCESSING

SYNTHETIC
APERTURE
RADAR
Systems and
Signal Processing
John C. Curlander
California Institute of Technology
Jet Propulsion Laboratory
Pasadena, California

Robert N. McDonough
Johns Hopkins University
Applied Physics Laboratory
Laurel, Maryland

A WILEY-INTERSCIENCE PUBLICATION

JOHN WILEY & SONS, INC.


New York Chichester Brisbane Toronto Singapore

A NOTE TO THE READER


This book has been electronically reproduced from
digital information stored at John Wiley & Sons, Inc.
We are pleased that the use of this new technology
will enable us to keep works of enduring scholarly
value in print as long as there is a reasonable demand
for them. The content of this book is identical to
previous printings.

To my father and mother for their enduring guidance and support (JCC)
This book is sold as is, without warranty of any kind, either
express or implied, respecting its contents, including
but not limited to implied warranties for the book's quality,
merchantability, or fitness for any particular
purpose. Neither the authors nor John Wiley & Sons, Inc., nor
its dealers or distributors, shall be liable to the purchaser or
any other person or entity with respect to any liability, loss,
or damage caused or alleged to be caused directly or indirectly
by this book.
In recognition of the importance of preserving what has been
written, it is a policy of John Wiley & Sons, Inc., to have books
of enduring value published in the United States printed on
acid-free paper, and we exert our best efforts to that end.
Copyright 1991 by John Wiley & Sons, Inc.
All rights reserved. Published simultaneously in Canada.
Reproduction or translation of any part of this work
beyond that permitted by Section 107 or 108 of the
1976 United States Copyright Act without the permission
of the copyright owner is unlawful. Requests for
permission or further information should be addressed to
the Permissions Department, John Wiley & Sons, Inc.

Library of Congress Cataloging in Publication Data:


Curlander, John C.
Synthetic aperture radar : systems and signal processing / John C.
Curlander, Robert N. McDonough.
p. cm. - (Wiley series in remote sensing)
"A Wiley-Interscience publication."
Includes index.
ISBN 0-471-85770-X
1. Synthetic aperture radar. 2. Signal processing. 3. Remote
sensing. I. McDonough, Robert N. II. Title. III. Series.
TK6592.S95C87 1991
621.36'78-dc20
90-29175
CIP
Printed in the United States of America
10 9 8 7 6


To my wife Natalia for her support during 5 years of intermittent planning,


study and writing (RNM)

CONTENTS

PREFACE  xiii

ACKNOWLEDGMENTS  xvii

CHAPTER 1  INTRODUCTION TO SAR  1

1.1  The Role of SAR in Remote Sensing  4
     1.1.1  Remote Sensing Across the EM Spectrum  7
     1.1.2  Remote Sensing Programs  9
1.2  Overview of SAR Theory  13
     1.2.1  Along-Track (Azimuth) Resolution  16
     1.2.2  Doppler Filtering  22
1.3  History of Synthetic Aperture Radar  26
     1.3.1  Early History  26
     1.3.2  Imaging Radars: From SLAR to SAR  28
     1.3.3  SAR Processor Evolution  31
     1.3.4  SAR Systems: Recent and Future  33
1.4  Applications of SAR Data  44
     1.4.1  Characteristics of SAR Data  45
     1.4.2  Surface Interaction of the Electromagnetic Wave  46
     1.4.3  Surface Scattering: Models and Applications  48
     1.4.4  Volume Scattering: Models and Applications  55
1.5  Summary  66
References and Further Reading  66

CHAPTER 2  THE RADAR EQUATION  71

2.1  Power Considerations in Radar  72
2.2  The Antenna Properties  75
     2.2.1  The Antenna Gain  80
     2.2.2  The Antenna Directional Pattern  84
2.3  The Target Cross Section  91
2.4  The Antenna Receiving Aperture  94
2.5  Thermal Noise  96
2.6  Source and Receiver Noise Description  99
     2.6.1  Source Noise  101
     2.6.2  Receiver Noise  108
     2.6.3  An Example  116
2.7  The Point Target Radar Equation  119
2.8  The Radar Equation for a Distributed Target  120
References  124

CHAPTER 3  THE MATCHED FILTER AND PULSE COMPRESSION  126

3.1  The Matched Filter  127
     3.1.1  Derivation of the Matched Filter  128
     3.1.2  Resolution Questions  131
3.2  Pulse Compression  135
     3.2.1  Linearity, Green's Function and Compression  135
     3.2.2  The Matched Filter and Pulse Compression  142
     3.2.3  Time Sidelobes and Filter Weighting  148
References  152

CHAPTER 4  IMAGING AND THE RECTANGULAR ALGORITHM  154

4.1  Introduction and Overview of the Imaging Algorithm  155
     4.1.1  Data Coordinates and the System Impulse Response  157
     4.1.2  Imaging Algorithm Overview  164
     4.1.3  Range Migration and Depth of Focus  171
     4.1.4  An Example  176
4.2  Compression Processing  182
     4.2.1  Range Compression Processing  182
     4.2.2  Time Domain Azimuth Processing  187
     4.2.3  Time Domain Range Migration Compensation  189
     4.2.4  Frequency Domain Azimuth Processing  196
References  208

CHAPTER 5  ANCILLARY PROCESSES IN IMAGE FORMATION  210

5.1  Digital Range Processing  210
5.2  Speckle and Multilook Processing  214
5.3  Clutterlock and Autofocus  221
     5.3.1  Clutterlock Procedures  223
     5.3.2  Autofocus  234
5.4  Resolution of the Azimuth Ambiguity  238
References  247

CHAPTER 6  SAR FLIGHT SYSTEM  249

6.1  System Overview  249
6.2  Radar Performance Measures  256
     6.2.1  Linear System Analysis  256
     6.2.2  Nonlinear System Analysis  261
6.3  The Radar Subsystem  263
     6.3.1  Timing and Control  263
     6.3.2  RF Electronics  264
     6.3.3  Antenna  273
     6.3.4  Digital Electronics and Data Routing  279
6.4  Platform and Data Downlink  283
     6.4.1  Channel Errors  283
     6.4.2  Downlink Data Rate Reduction Techniques  285
     6.4.3  Data Compression  288
     6.4.4  Block Floating Point Quantization  289
6.5  System Design Considerations  294
     6.5.1  Ambiguity Analysis  296
     6.5.2  PRF Selection  305
6.6  Summary  307
References  308

CHAPTER 7  RADIOMETRIC CALIBRATION OF SAR DATA  310

7.1  Definition of Terms  311
     7.1.1  General Terms  311
     7.1.2  Calibration Performance Parameters  312
     7.1.3  Parameter Characteristics  314
7.2  Calibration Error Sources  314
     7.2.1  Sensor Subsystem  315
     7.2.2  Platform and Downlink Subsystem  320
     7.2.3  Signal Processing Subsystem  320
7.3  Radiometric Error Model  322
7.4  The Radar Equation  326
7.5  Radiometric Calibration Techniques  327
     7.5.1  Internal Calibration  329
     7.5.2  External Calibration  337
     7.5.3  Polarimetric Radar Calibration  349
7.6  Radiometric Calibration Processing  353
     7.6.1  Calibration Processor  354
     7.6.2  Calibration Algorithm Design  358
7.7  Polarimetric Data Calibration  364
7.8  Summary  367
References  367

CHAPTER 8  GEOMETRIC CALIBRATION OF SAR DATA  370

8.1  Definition of Terms  371
8.2  Geometric Distortion  372
     8.2.1  Sensor Errors  372
     8.2.2  Target Location Errors  374
     8.2.3  Platform Ephemeris Errors  377
     8.2.4  Target Ranging Errors  379
8.3  Geometric Rectification  387
     8.3.1  Image Resampling  388
     8.3.2  Ground Plane, Deskewed Projection  390
     8.3.3  Geocoding to a Smooth Ellipsoid  393
     8.3.4  Geocoding to a Topographic Map  399
8.4  Image Registration  411
     8.4.1  Mosaicking  412
     8.4.2  Multisensor Registration  416
8.5  Summary  424
References  425

CHAPTER 9  THE SAR GROUND SYSTEM  427

9.1  Correlator Requirements Definition  428
     9.1.1  Doppler Parameter Analysis  430
     9.1.2  Azimuth Processing Bandwidth  434
     9.1.3  Range Reference Function  436
9.2  Correlator Algorithm Selection and Computational Analysis  437
     9.2.1  Spectral Analysis Algorithms  437
     9.2.2  Frequency Domain Fast Convolution  443
     9.2.3  Time Domain Convolution  444
     9.2.4  Comparison of the Azimuth Correlators  446
     9.2.5  Range Correlation  449
9.3  SAR Correlator Architectures  452
     9.3.1  Architecture Design Requirements  452
     9.3.2  Pipeline Arithmetic Processor  454
     9.3.3  Common Node Architecture  460
     9.3.4  Concurrent Processor Architecture  467
9.4  Post-Processor Systems  473
     9.4.1  Post-Processing Requirements  475
     9.4.2  Radiometric Correction  477
     9.4.3  Geometric Correction  479
     9.4.4  Post-Processor Architecture  486
9.5  Image Data Browse System  487
     9.5.1  Browse System Requirements  488
     9.5.2  Queueing Analysis of the Online Archive System  489
     9.5.3  Image Quality  490
     9.5.4  Compression Algorithm Complexity Analysis  492
References  499

CHAPTER 10  OTHER IMAGING ALGORITHMS  502

10.1  Deramp Compression Processing  504
10.2  Step Transform Processing  507
10.3  Polar Processing  519
     10.3.1  The Basic Idea of Polar Processing  520
     10.3.2  Polar Processing Details  524
     10.3.3  An Autofocus Procedure for Polar Processing  529
References  535

APPENDIX A  DIGITAL SIGNAL PROCESSING  536

A.1  Analog Linear System Theory  536
A.2  Sampling of Bandlimited Signals  541
A.3  Discrete Convolution  545
A.4  The Fast Fourier Transform Algorithm  554
A.5  Additional Topics Relating to the FFT  558
A.6  Interpolation of Data Samples  561
References  564

APPENDIX B  SATELLITE ORBITS AND COMPRESSION FILTER PARAMETERS  565

B.1  Parameters in Terms of Satellite Track and Target Position  566
B.2  Trajectory Parameters in Terms of Satellite Orbit  572
B.3  Compression Parameters in Terms of Satellite Attitude  580
B.4  Simplified Approximate Models for Azimuth Compression Parameters  588
References  591

APPENDIX C  THE ALASKA SAR FACILITY  592

C.1  ASF Operations  593
C.2  The Receiving Ground Station  596
C.3  The SAR Processor System  598
C.4  Archive and Operations System  603
C.5  The Geophysical Processor System  605
C.6  Summary  613
References  614

APPENDIX D  NONLINEAR DISTORTION ANALYSIS  615

References  618

BIBLIOGRAPHY  619
MATHEMATICAL SYMBOLS  622
LIST OF ACRONYMS  630
INDEX  634

PREFACE
The forty year history of synthetic aperture radar (SAR) has produced only a
single spaceborne orbiting satellite carrying a SAR sensor dedicated to remote
sensing applications. This system, the Seasat-A SAR, operated for a mere
100 days in the late 1970s. We learned from the data collected by Seasat, and
from the Shuttle Imaging Radar series and aircraft based SAR systems, that
this instrument is a valuable tool for measuring characteristics of the earth's
surface. As an active microwave sensor, the SAR is capable of continuously
monitoring geophysical parameters related to the structural and electrical
properties of the earth's surface (and its subsurface). Furthermore, through
signal processing, these observations can be made at an extremely high resolution
(on the order of meters), independent of the sensor altitude.
As a result of the success of these early systems, we are about to embark on
a new era in remote sensing using synthetic aperture radar. Recognition of its
potential benefits for global monitoring of the earth's resources has led the
European Space Agency, the National Space Development Agency of Japan,
and the Canadian Space Agency to join with the United States National
Aeronautics and Space Administration in deploying a series of SAR systems in
polar orbit during the 1990s. A primary mission goal of these remote sensing
SAR systems is to perform geophysical measurements of surface properties over
extended periods of time for input into global change models. To reach this
end, the SAR systems must be capable of reliably producing high quality image
data products, essentially free from image artifacts and accurately calibrated in
terms of the target's scattering characteristics.
In anticipation of these data sets, there is widespread interest among the
scientific community in the potential applications of SAR data. However,
interpretation of SAR data presents a unique challenge in that there can be
severe geometric and radiometric distortions in the data products, as well as
the presence of false targets (resulting from the radar pulse mode operation).
Although these effects can be minimized by proper design of the radar system
and use of calibration techniques to characterize the systematic error sources,
full utilization of SAR data mandates that the scientist be aware of the potential
for misinterpretation of the imagery. A full understanding of the characteristics
of the SAR imagery requires some knowledge of the sensor design, the mission
operations, and the ground signal processing.
In this text we specifically address these items, as applied to the design and
implementation of the spaceborne SAR system (with heavy emphasis on signal
processing techniques). The reader will find that the book has been written
from two points of view, reflecting each author's perspective on SAR systems
and signal processing. We believe that these two perspectives complement each
other and serve to present a complete picture of SAR from basic theory to the
practical aspects of system implementation and test. In preparing the manuscript,
there were three key areas that we wished to address.
First, we had in mind that, in an expanding field such as synthetic aperture
radar, new workers would need an introduction to the basics of the technology.
We have therefore included considerable material on general radar topics, as
well as material on the specific signal processing methods which lie at the heart
of the image formation algorithms. Second, engineers in disciplines closely allied
to SAR would benefit from a ready compilation of the engineering considerations
which differentiate a SAR system from a conventional radar system. Third, the
users of SAR images may wish to know in some detail the procedures by which
the images were produced, as an aid to understanding the product upon which
their analyses are based.
In seeking to serve this broad potential readership, we have written the book
at various levels of detail, and assuming various levels of prior background.
Chapter 1 is intended for all our readers. It provides an overview of the general
capabilities of SAR to contribute to remote sensing science, and a brief
explanation of the underlying principles by which SAR achieves its superior
spatial resolution. We include a survey of past SAR systems, and a description
of systems planned for the near future. The chapter concludes with a summary
of some important topics in modeling, by which the SAR image is related to
geophysical parameters of interest.
Chapter 2 is devoted to a careful derivation of the "radar equation", from
first principles which we hope will be shared by both engineers and remote
sensing scientists. This chapter is intended to serve those readers who may be
new arrivals to the topic of radar. The chapter culminates, in Section 2.8, with
various forms of the radar equation appropriate for remote sensing work.
Chapter 3 continues our discussion of basics, but more specifically those signal
processing techniques which underlie the treatment of radar signals in a digital
receiver. Section 3.2.2 in particular treats the matched filter from a point of
view which is appropriate to the discussion of SAR image formation.


Chapter 4 is the first material of the book devoted in detail specifically to
SAR systems. It addresses the central question in formation of a SAR image
from the raw radar signal data, that is, the "compression" of the point target
response, distributed in space and time by the radar system, back into a point
in the image. Section 4.1 gives an overview of the factors involved, and includes
an example, in Section 4.1.4, "stepping through" the formation of a SAR image
from raw signal to the level of a "raw" (uncalibrated) image. Section 4.2 describes
in detail the various algorithms which have been developed to carry out the
corresponding digital signal processing. Chapter 5 is a companion to Chapter 4,
and describes a number of ancillary algorithms which are necessary to implement
the main procedures described in Chapter 4. Chapter 10 discusses a number of
image formation algorithms which are alternative to those of Chapter 4 and
Chapter 5, but which have to date been less commonly used in the remote
sensing "community". They are, however, of considerable interest in that context,
and are much used in aircraft SAR systems.
Chapter 6 presents an end-to-end description of the part of a SAR system
which is related to the sensor and its data channels. The emphasis is on space
platforms. The various error sources, in terms of their characterization and
effect, are described for a general SAR system from the transmitted signal
formation through downlink of the received echo signal data to a ground station.
The point of view is that of the system designer, and in Section 6.5 some of the
important tradeoffs are described.
Chapters 7 and 8 together present in some detail the means by which a SAR
system and its images are calibrated. Chapter 7 is concerned with calibration
in the sense that the surface backscatter intensity in each system resolution cell
is correctly replicated in a single resolution cell of the image ("radiometric"
calibration). In Chapter 8, the companion question of "geometric" calibration
is treated. The techniques described aim at ensuring that a specific resolution
cell in the scene being imaged is correctly positioned relative to its surface
location. Section 8.3 treats techniques for assigning map coordinates to a SAR
image. This allows registration of images from multiple sensors, a topic which
is dealt with in Section 8.4.
Chapter 9 is a companion to Chapter 6, which deals primarily with "flight
hardware". In Chapter 9, the "ground hardware" is described, including a
characterization of the system considerations necessary for efficient realization
of the image formation and geometric and radiometric correction algorithms
discussed in previous chapters. Specific systems are described, along with the
various tradeoff considerations affecting their design. The subsystems described
range from those for initial processing of the raw radar signals, through those
for image archiving, cataloging, and distribution.
After the discussions of Chapter 10, on alternative image formation
algorithms, there follow four Appendixes. Appendix A is a basic introduction
to digital signal processing, with particular emphasis on the fast Fourier
transform algorithm. Appendix B is an introductory explanation of satellite
orbit mechanics, and culminates in Section B.4 with some simple parameter
models needed in image formation. Appendix C describes the NASA SAR data
reception, image formation, and image archive system newly implemented at
the University of Alaska in Fairbanks, Alaska. Finally, Appendix D summarizes
a technique for the characterization of nonlinear systems. Throughout the text,
equations of particular importance have been indicated by an asterisk.
We believe that this text provides a needed, missing element in the SAR
literature. Here we have detailed the techniques needed for design and
development of the SAR system with an emphasis on the signal processing.
This work is a blend of the fundamental theory underlying the SAR imaging
process and the practical system engineering required to produce quality images
from real SAR systems. It should serve as an aid for both the radar engineer
and the scientist. We have made special effort to annotate our concepts with
figures, plots, and images in an effort to make our ideas as accessible as possible.
It is our sincere belief that this work will serve to reduce the mystery surrounding
the generation of SAR images and open the door to a wider user community
to develop new, environmentally beneficial applications for the SAR data.

JOHN C. CURLANDER
ROBERT N. McDONOUGH

Pasadena, California
Laurel, Maryland
April 1991

ACKNOWLEDGMENTS

This work draws in large part from knowledge gained during participation in
the NASA Shuttle Imaging Radar series. For this reason we wish to give special
recognition to Dr. Charles Elachi, the principal investigator of these instruments,
for providing the opportunity to participate in both their development and
operation.
The text presents results from a number of scientists and engineers too
numerous to mention by name. However, we do wish to acknowledge
the valuable inputs received from colleagues at the California Institute of
Technology Jet Propulsion Laboratory, specifically A. Freeman, C. Y. Chang,
S. Madsen, R. Kwok, B. Holt, Y. Shen and P. Dubois. At The Johns Hopkins
University Applied Physics Laboratory, collaboration with B. E. Raff and
J. L. Kerr has stimulated much of this work. Among those who shared their
knowledge of SAR, special thanks go to E.-A. Berland of the Norwegian Defence
Research Establishment, B. Barber of the Royal Aircraft Establishment, and
W. Noack and H. Runge of the German Aerospace Research Establishment
(DLR). Additionally, without the technical support of K. Banwart, J. Elbaz,
and S. Salas this text could not have been compiled.
We both benefited from the intellectual atmosphere and the financial support
of our institutions. Special recognition should go to Dr. F. Li of the Jet
Propulsion Laboratory for his support to JCC during the preparation of this
manuscript. Additionally, we wish to thank Prof. O. Phillips for hosting RNM
as the J. H. Fitzgerald Dunning Professor in the Department of Earth and
Planetary Sciences at The Johns Hopkins University during 1986-87. The
financial support provided by the JHU Applied Physics Laboratory for that
position, and for a Stuart S. Janney Fellowship, aided greatly in this work.

SYNTHETIC APERTURE RADAR


Systems and Signal Processing

1
INTRODUCTION TO SAR

Nearly 40 years have passed since Wiley first observed that a side-looking radar
can improve its azimuth resolution by utilizing the Doppler spread of the echo
signal. This landmark observation signified the birth of a technology now
referred to as synthetic aperture radar (SAR). In the ensuing years, a flurry of
activity followed, leading toward steady advancement in performance of both
the sensor and the signal processor. Although much of the early work was
aimed toward military applications such as detection and tracking of moving
targets, the potential for utilizing this instrument as an imaging sensor for
scientific applications was widely recognized.
Prior to the development of the imaging radar, most high resolution sensors
were camera systems with detectors that were sensitive to either reflected solar
radiation or thermal radiation emitted from the earth's surface. The SAR
represented a fundamentally different technique for earth observation. Since a
radar is an active system that transmits a beam of electromagnetic (EM)
radiation in the microwave region of the EM spectrum, this instrument extends
our ability to observe properties about the earth's surface that previously were
not detectable. As an active system, the SAR provides its own illumination and
is not dependent on light from the sun, thus permitting continuous day/night
operation. Furthermore, neither clouds, fog, nor precipitation have a significant
effect on microwaves, thus permitting all-weather imaging. The net result is an
instrument that is capable of continuously observing dynamic phenomena such
as ocean currents, sea ice motion, or changing patterns of vegetation (Elachi
et al., 1982a).
Sensor systems operate by intercepting the earth radiation with an aperture
of some physical dimension. In traditional (non-SAR) systems, the angular


resolution is governed by the ratio of the wavelength of the EM radiation to
the aperture size. The image spatial resolution is the angular resolution times
the sensor distance from the earth's surface. Therefore, as the sensor altitude
increases, the spatial resolution of the image decreases unless the physical size
of the aperture is increased. At visible and near infrared wavelengths, a high
resolution image can be obtained even at spaceborne altitudes for modest
aperture sizes. However, for a microwave instrument where the wavelengths
are typically 100,000 times longer than light, high resolution imagery from a
reasonably sized antenna aperture is not possible. For example, consider an
instrument such as the Seasat SAR at 800 km altitude with a 10 m antenna
aperture (Fig. 1.1). At the radar wavelength of 24 cm, the real aperture resolution
is nearly 20 km. To achieve a 25 m resolution image similar to the Landsat
Thematic Mapper, an antenna over 8 km long would be required.
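As a sanity check on these numbers, the ratio rule above can be evaluated directly with the Seasat values quoted in the text (this short script is our illustration, not part of the original):

```python
# Real-aperture (non-SAR) resolution: angular resolution ~ wavelength / aperture,
# spatial resolution ~ angular resolution * range (here taken as the altitude).
wavelength = 0.24   # Seasat L-band radar wavelength, m (24 cm)
aperture = 10.0     # physical antenna length, m
altitude = 800e3    # sensor altitude, m

spatial_res = wavelength / aperture * altitude
print(f"real-aperture resolution: {spatial_res / 1e3:.1f} km")  # ~19.2 km, "nearly 20 km"

# Antenna length needed for a 25 m (Landsat Thematic Mapper class) image:
required_aperture = wavelength * altitude / 25.0
# ~7.7 km when the altitude is used as the range; at the longer slant range of
# the actual side-looking geometry this exceeds the 8 km quoted in the text.
print(f"antenna needed for 25 m resolution: {required_aperture / 1e3:.2f} km")
```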

To improve this resolution without increasing the physical antenna size,
synthetic aperture radar technology is employed. A synthetic aperture radar is
a coherent system in that it retains both phase and magnitude of the
backscattered echo signal. The high resolution is achieved by synthesizing in
the signal processor an extremely long antenna aperture. This is typically
performed digitally in a ground computer by compensating for the quadratic
phase characteristic associated with what is effectively near field imaging by the
long synthetic array. The net effect is that the SAR system is capable of achieving
a resolution independent of the sensor altitude. This characteristic makes the
SAR an extremely valuable instrument for space observation.
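The altitude independence can be seen from the standard first-order relations, which the book derives in Section 1.2.1 (the sketch below is ours, not the authors'): the synthetic aperture can be as long as the real antenna's azimuth footprint, which grows linearly with range, and the two effects cancel to leave an azimuth resolution of roughly half the real antenna length:

```python
# First-order SAR azimuth resolution sketch. The usable synthetic aperture is
# the real antenna's azimuth footprint, L_syn = wavelength * R / D, which grows
# with range R; the resulting azimuth resolution is ~ D / 2, independent of R.
def synthetic_aperture_length(wavelength, antenna_len, slant_range):
    """Azimuth footprint of the real antenna = maximum synthetic aperture, m."""
    return wavelength * slant_range / antenna_len

def azimuth_resolution(antenna_len):
    """Classic unweighted, single-look SAR azimuth resolution, m."""
    return antenna_len / 2.0

# Doubling the range doubles the synthetic aperture, leaving resolution unchanged.
for R in (400e3, 800e3, 1600e3):  # hypothetical slant ranges, m
    L = synthetic_aperture_length(0.24, 10.0, R)
    print(f"R = {R / 1e3:6.0f} km: synthetic aperture {L / 1e3:5.1f} km, "
          f"azimuth resolution {azimuth_resolution(10.0):.0f} m")
```

The D/2 figure is the ideal unweighted result; practical systems trade some of it away for sidelobe control and multilook speckle reduction, as later chapters discuss.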
In conjunction with implementation of an operational SAR system for global
monitoring of the earth's surface, there are a number of technical challenges.
Among these are two specific areas that are addressed in detail in this text:
(1) Design and development of a reliable high speed ground data system; and
(2) Techniques and technology for end-to-end system calibration.
Ground Data System


Any remote sensor designed for global coverage at high resolution inherently
generates a large volume of data. An additional factor for the SAR is that
to form an image from the downlinked signal data, literally hundreds of
mathematical operations must be performed on each data sample. Consider,
for example, a 15 s (100 km x 100 km) Seasat image frame consisting of several
hundred million data samples. To digitally process this data into imagery in
real-time requires a computer system capable of several billion floating point
operations per second. As a result, much of the early processing of the data
was performed optically using laser light sources, Fourier optics, and film. The
early digital correlators could process only a small portion of the acquired data.
Furthermore, they generally approximated the exact matched filter image
formation algorithms to accommodate the limited capabilities of the computer
hardware. The net result of the limitations in these signal processors was
generally an image product of degraded quality with a very limited dynamic
range that could not be reliably calibrated. The inconsistency and qualitative
nature of the optically processed imagery, in conjunction with the limited
performance and limited quantity of the digital products, served to constrain
progress in the scientific application of SAR data during its formative years.
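The quoted throughput follows from simple arithmetic. The text gives only rough magnitudes, so the sample count and operations-per-sample below are stand-ins for "several hundred million" and "hundreds", not measured figures:

```python
# Back-of-the-envelope processing load for one 15 s Seasat frame, using the
# rough magnitudes quoted in the text (assumed values for illustration).
samples = 300e6        # ~ several hundred million samples per 100 km x 100 km frame
ops_per_sample = 300   # ~ hundreds of floating point operations per sample
frame_time = 15.0      # s of acquisition time per frame (the real-time budget)

total_ops = samples * ops_per_sample
rate = total_ops / frame_time
print(f"total work: {total_ops:.0e} operations")
print(f"real-time rate: {rate / 1e9:.0f} billion operations per second")
```

With these stand-in values the budget comes to several billion operations per second, consistent with the text's claim and far beyond the general-purpose computers of the Seasat era.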
Geometric and Radiometric Calibration

Figure 1.1 Illustration of the Seasat-A SAR satellite. (The original drawing labels
the spacecraft's solar array, visible-infrared radiometer, SAR data link antenna,
multichannel microwave radiometer, and altimeter.)

The geometric calibration of an image refers to the accuracy with which an
image pixel can be registered to an earth-fixed grid; the radiometric calibration
refers to the accuracy with which an image pixel can be related to the target
scattering characteristics. Geometric distortion arising from variation in the
terrain elevation is especially severe for a side-looking, ranging instrument such
as a SAR. Precision correction requires either a second imaging channel (stereo
or interferometric imaging) or a topographic map. Radiometric distortion, which
arises primarily from system effects, requires precise measurements from


calibration devices to derive the processor correction factors. To achieve the
calibration accuracies required for most scientific analyses, a complex process
utilizing internal (built-in device) measurements and external (ground deployed
device) measurements is needed. As a result of the difficulty of operationally
implementing these calibration procedures, only in special cases have SAR
systems produced radiometrically and geometrically calibrated data products.
The implication of poorly calibrated data products on the scientific utilization
of the data is far reaching. Without calibrated data, quantitative analysis of the
SAR data cannot be performed, and therefore the full value of the data set is
not realized.
Over the past decade substantial progress has been made, both in digital
computing technology and in our understanding of the SAR signal processing
and system calibration algorithms. Perhaps just as challenging as the development
of the techniques underlying these algorithms is their operational
implementation in real systems. In this text, we begin from first principles,
deriving the radar equation and introducing the theory of coherent apertures.
We then bring these ideas forward into the signal processing algorithms required
for SAR image formation. This is followed by a discussion of the post-processing
algorithms necessary for radiometric and geometric correction of the final data
products. The various radar system error sources are addressed as well as the
processor architectures required to sustain the computing loads imposed by
these processing algorithms.

1.1 THE ROLE OF SAR IN REMOTE SENSING


[Figure 1.2 appears here in the original: plots of percent atmospheric
transmission versus wavelength, spanning the ultraviolet through the near,
mid, and far infrared into the microwave region, with the 90 GHz and
135 GHz atmospheric windows indicated.]


In the introduction we alluded to several of the features that make the SAR a
unique instrument in remote sensing: (1) Day/night and all-weather imaging;
(2) Geometric resolution independent of sensor altitude or wavelength; and
(3) Signal data characteristics unique to the microwave region of the EM
spectrum. An overview of the theory behind the synthetic aperture and pulse
compression techniques used to achieve high resolution is presented in the
following section. In this section, we principally address the unique properties
of the SAR data as they relate to other earth-observing sensors. As an active
sensor, the SAR is in a class of instruments which includes all radars (e.g.,
altimeters, scatterometers, lasers). These instruments, in contrast to passive
sensors (e.g., cameras, radiometers), transmit a signal and measure the reflected
wave. Active systems do not rely on external radiation sources such as solar
or nuclear radiation (e.g., Chernobyl). Thus the presence of the sun is not
relevant to the imaging process, although it may affect the target scattering
characteristics. Furthermore, the radar frequency can be selected such that its
absorption (attenuation) by atmospheric molecules (oxygen or water vapor) is
small. Figure 1.2 illustrates the absorption bands in terms of percent atmospheric
transmission versus frequency (wavelength). Note that in the 1-10 GHz
(3-30 cm) region the transmissivity approaches 100%. Thus, essentially

Figure 1.2 Percent transmission through the earth's atmosphere for the microwave
portion of the electromagnetic spectrum.

independent of the cloud cover or precipitation, a SAR operating in this


frequency range is always able to image the earth's surface.
As the radar frequency is increased within the microwave spectrum the
transmission attenuation increases. At 22 GHz there is a water vapor absor~tion
band that reduces transmission to about 85% (one-way) while near 60 GHz
the oxygen absorption band essentially prevents any signal from reaching the
surface. Around these absorption bands are several windows where high
frequency microwave imaging of the surface is possible. These windows are
especially useful for real aperture systems such as altimeters and microwave
radiom~ters .relying on a shorter wavelength (i.e., a narrower radiation beam)
to obtain high. reso.lution. Additionally, for an interferometric SAR system,
the topographic height mapping accuracy increases with antenna baseline
separa~ion, or_ eq ~ivalently with decreasing wavelength (Li and Goldstein, 1989).
For this apphca tton, the 35 GHz window is an especially a ttractive operating
frequency.
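The baseline/wavelength trade mentioned here can be made concrete with a small sketch. The relation used below, δh ≈ λR sin(θ) δφ/(4πB), is the standard flat-geometry rule of thumb for repeat-pass interferometry rather than a formula from the text, and every numerical value (range, look angle, baseline, wavelengths) is an assumed illustration:

```python
import math

def insar_height_sensitivity(wavelength, slant_range, look_angle_deg, baseline):
    """Topographic height change per radian of interferometric phase, using
    the common flat-geometry approximation dh/dphi ~ lambda*R*sin(theta)/(4*pi*B).
    Longer baselines or shorter wavelengths give finer height sensitivity."""
    theta = math.radians(look_angle_deg)
    return wavelength * slant_range * math.sin(theta) / (4.0 * math.pi * baseline)

# Hypothetical spaceborne geometry: 800 km slant range, 23 deg look, 100 m baseline
for lam_cm in (24.0, 5.6, 0.86):   # roughly L-band, C-band, and the 35 GHz window
    s = insar_height_sensitivity(lam_cm / 100.0, 800e3, 23.0, 100.0)
    print(f"lambda = {lam_cm:5.2f} cm -> {s:6.2f} m of height per radian of phase")
```

The 35 GHz case yields the smallest height change per unit phase, which is the sense in which a shorter wavelength improves mapping accuracy for a fixed baseline.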
The selection of the radar wavelength, however, is not simply governed by
resolution and atmospheric absorption properties. The interaction mechanism
between the transmitted electromagnetic (EM) wave and the surface is highly
wavelength dependent. The EM wave interacts with the surface by a variety of
mechanisms which are related to both the surface composition and its
structure. For the microwave region in which spaceborne SAR systems
operate (1-10 GHz), the characteristics of the scattered wave (power, phase,
polarization) depend predominantly on two factors: the electrical properties of
the surface (dielectric constant) and the surface roughness.
As an example, consider a barren (non-vegetated) target area where surface
scattering is the dominant wave interaction mechanism. For side-looking
geometries (i.e., with the radar beam pointed at an angle > 20° off nadir), if
the radar wavelength is long relative to the surface roughness then the surface
will appear smooth, resulting in very little backscattered energy. Conversely,
for radar wavelengths on the scale of the surface rms height, a significant fraction
of the incident power will be reflected back toward the radar system. This
scattering characteristic is illustrated as a function of wavelength in Fig. 1.3
(Ulaby et al., 1986). Note that the variation in backscatter as a function of rms
height and angle of incidence is highly dependent on the radar frequency or
wavelength. A similar wavelength dependence is also observed for the surface
dielectric constant. Generally, a fraction of the incident wave will penetrate
the surface and be attenuated by the subsurface media. This penetration
characteristic is primarily governed by the radar wavelength and the surface
dielectric properties. It is especially important in applications such as soil
moisture measurements and subsurface sounding, where proper selection of the
radar wavelength will determine its sensitivity to the surface dielectric properties.
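The smooth-versus-rough distinction above can be given a rough numerical form. The sketch below applies the Rayleigh smoothness criterion, a common rule of thumb (not stated in the text), with assumed wavelengths and incidence angle:

```python
import math

def appears_smooth(rms_height_m, wavelength_m, incidence_deg):
    """Rayleigh smoothness criterion (a standard rule of thumb): a surface acts
    'smooth' to the radar, giving little backscatter toward the sensor, when
    its rms height satisfies h < lambda / (8 * cos(eta)). A longer wavelength
    therefore sees a given surface as smoother."""
    eta = math.radians(incidence_deg)
    return rms_height_m < wavelength_m / (8.0 * math.cos(eta))

# A surface with 2.5 cm rms height at 30 deg incidence:
print(appears_smooth(0.025, 0.24, 30.0))   # L-band (24 cm): smooth -> True
print(appears_smooth(0.025, 0.03, 30.0))   # X-band (3 cm): rough -> False
```

The same surface flips from "smooth" to "rough" as the wavelength shrinks toward the rms height, which is the behavior plotted in Fig. 1.3.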
[Figure 1.3 plots: normalized backscatter coefficient (dB) versus angle of incidence (deg) in
three panels, (a) 1.1 GHz, (b) 4.25 GHz, and (c) 7.25 GHz, for surfaces of several rms heights
(approximately 1.1 to 4.1 cm) with soil moisture of 0.40 g/cm³ in the top 1 cm]

Figure 1.3 Normalized backscatter coefficient as a function of surface roughness for three radar
frequencies (Ulaby et al., 1986).

Thus the selection of radar wavelength is influenced by both atmospheric
effects and target scattering characteristics. In addition to the relationship
between radar wavelength and surface characteristics such as roughness and
dielectric constant, there are a number of other system parameters, such as the
imaging geometry and the wave polarization, that can be used to further
characterize the surface properties. These applications and the underlying
scattering mechanisms will be discussed in Section 1.4. There are also a number
of sensor design constraints that influence selection of the radar operating
frequency, which are detailed in Chapter 6.

1.1.1 Remote Sensing Across the EM Spectrum

Despite the unique capabilities of the SAR to measure properties of the surface,
its operating range is limited to a small portion of the electromagnetic spectrum.
Thus, a full characterization of the surface properties with a single instrument,
such as the SAR, is not possible. To get a complete description of the
surface chemical, thermal, electrical, and physical properties, observation by a
variety of sensors over a large portion of the electromagnetic spectrum is
required. Figure 1.4 illustrates the various regions of the electromagnetic
spectrum from the radio band (25 cm ≤ λ ≤ 1 km) to the ultraviolet band
(0.3 µm ≤ λ ≤ 0.4 µm).

Each region of the EM spectrum plays an important role in some aspect of
remote sensing. For characterizing the earth's surface properties, the most
useful bands, in addition to the microwave, are: (1) Infrared (3-30 µm);
and (2) Visible/near infrared (0.4-3 µm). At frequencies lower than 1 GHz,
ionospheric disturbances and ground interference dominate the received
signal characteristics, while in the millimeter and submillimeter region
(100 GHz-10 THz) a large number of molecular absorption bands provide
information about the atmospheric constituents, but little or no information
about surface properties. Sensors that perform measurements in the thermal
infrared region, such as the Heat Capacity Mapping Mission (HCMM)
radiometer (Kahle et al., 1981), as well as those in the visible/near infrared
region, such as SPOT and Landsat Thematic Mapper (TM) (Freden and Gordon,
1983), measure surface properties that are complementary to the microwave
measurements of the SAR. The thermal infrared (10-15 µm) band is sensitive
to emissions from the surface (and atmosphere) relating to the vibrational and
rotational molecular processes of the sensed object. Information on the surface
temperature and heat capacity of an object can be derived from these
measurements. In the visible and near infrared regions, vibrational and electronic
molecular processes are measured. This information can be interpreted in terms
of chemical composition, vegetation, and biological properties of the surface.

Within the microwave region (1-300 GHz) there are several windows in the
atmospheric absorption bands outside the nominal SAR frequency range of
1-10 GHz. Most active, real aperture radar systems, such as the scatterometer
and altimeter, operate in the 10-20 GHz region (Ulaby et al., 1982). These are

generally not imaging instruments; rather they collect time series data primarily
for oceanographic and meteorological applications. In the extremely high
frequency (EHF) range of the microwave spectrum (30-300 GHz) only the
atmospheric window regions of 35 GHz, 90 GHz, and 135 GHz are useful for
observation of surface properties. With few exceptions, only passive systems
such as microwave radiometers operate in these regions. These sensors measure
the surface brightness temperature (the intensity of the radiation emitted by
the object), which in conjunction with a surface radiation emission model can
be used to measure surface properties. A key application of EHF spaceborne
radiometry is for measuring ice extent in the polar regions as well as determining
ice type. Other applications include measuring land surface properties such as
snow cover and soil moisture. Historically, there has been very little utilization
of these data sets in conjunction with the SAR data since the resolution is
typically several orders of magnitude coarser than that of the synthetic aperture
radar. For every resolution cell in a radiometer image, the SAR may have
1000-10,000 cells. In spite of this large resolution disparity, spaceborne
radiometers can play an important role in the geophysical interpretation of
SAR data and are especially useful for absolute calibration of the SAR system.

[Figure 1.4 diagram: the electromagnetic spectrum from power and radio frequencies through
microwave, infrared, visible, ultraviolet, and X-ray, with frequency and wavelength scales,
the microwave radar bands (L, S, C, X, Ku, Ka), and representative applications such as
navigation, communications, radar, and radiometry]

Figure 1.4 Definition of various regions of the electromagnetic spectrum.

If a calibrated set of remote sensors were available to perform measurements
of the surface in each of the key EM bands, then an extended spectral
characterization of the surface properties could be developed. This data set
would provide a more detailed description of the surface than could be obtained
from any individual instrument. The practical difficulties in implementing a
comprehensive program for monitoring the earth's surface and atmosphere
have limited most scientific studies to at most a few instruments. This is a result
of a number of factors limiting the scope of such remote sensing programs,
including financial and political as well as technical constraints. Furthermore,
the technology for the SAR in terms of system calibration and signal processing
is just now maturing to the point where it can be considered for inclusion into
these synergistic multisensor data sets. However, the most important factor
leading toward initiation of a comprehensive multisensor remote sensing
program is the increased awareness about changes in the earth's environment
(depletion in the ozone layer, global warming, acid rain, deforestation, etc.).

In this era of global concern for our changing environment, there is a
recognized need for a remote sensing program that can quantitatively monitor
the phenomenological processes that govern these environmental changes. Even
with the recent advances in SAR technology, along with improvements in other
remote sensors spanning the electromagnetic spectrum, characterizing these
changes is an extremely difficult task. Remote sensing only allows us to
extract biophysical and geophysical information about the earth's surface. To
understand the underlying mechanisms controlling global change, an additional
step of parameterizing large-scale models in terms of these observables is
required.

Prior to developing global models of these processes (e.g., the hydrologic or
carbon cycles), an understanding of the capabilities of individual instruments
must be developed. For example, how sensitive is each instrument to a specific
geophysical parameter, such as the moisture content of the soil or the biomass
density of a forest canopy, and what environmental parameters are key variables
in performing reliable measurements? To this end, significant progress has been
made in the use of both airborne and spaceborne remote sensor data. Although
a detailed review of the progress in this area is beyond the scope of this text,
the current state of the art is well documented in the technical journals
(e.g., Journal of Geophysical Research, IEEE Transactions on Geoscience and
Remote Sensing) and in recent monographs (Colwell, 1983a,b; Elachi, 1987).
1.1.2 The Earth Observing System (EOS)

In an effort to further advance our understanding of these sensor systems, and
to integrate their measurements into a coordinated framework of simultaneous
observations of the atmosphere, oceans, and solid earth, an international
remote sensing program has recently been initiated (Butler et al., 1984).
The United States National Aeronautics and Space Administration (NASA),
in conjunction with the European Space Agency (ESA) and the National Space
Development Agency (NASDA) of Japan, has embarked on a far-reaching
program that goes beyond all previous studies (NASA, 1988). This program,
referred to as the Earth Observing System (EOS), will place in orbit a series
of remote sensing platforms carrying a wide variety of instruments spanning
the electromagnetic spectrum. An illustration of the first platform, planned to
be in operation by 1998, is shown in Fig. 1.5. The prime objective of this program
is to monitor global change, both human-induced effects and those resulting
from natural forces. The ultimate goal is to understand the mechanisms causing
these changes and to predict future trends.
The suite of EOS instruments will contain no less than three SAR systems
that span the frequency range from 1-10 GHz with multiple polarizations and
variable imaging geometries. Additionally, the platforms will carry a number
of other microwave sensors such as altimeters, scatterometers, and passive
radiometer systems. These instruments will be complemented with both high
and medium resolution imaging spectrometers and several thermal infrared
radiometers (GSFC, 1989). As currently envisioned, EOS will launch four
platform instrument packages, each carrying 10-20 instruments that have been
grouped to optimize the synergism resulting from simultaneous observations
(Table 1.1). Each platform is designed for a five year life cycle and will be
followed by two "identical" platforms for a total 15 year observation period.
Additionally, a free-flying SAR satellite with an instrument similar to the
SIR-C/X-SAR (Table 1.4) will be launched during this period by NASA. Special
emphasis is being placed on the signal processing and calibration elements of
the EOS ground data system to ensure that high precision, geodetically registered
data products are delivered to the user in a timely fashion.

Figure 1.5 The NASA EOS Platform A design and instrument layout.

TABLE 1.1 Selected Instruments from the Sensor Packages Planned for
each of the EOS Platforms

NASA EOS-A
Moderate Resolution Imaging Spectrometer - Nadir/-Tilt (MODIS-N/-T)
Lightning Imaging Sensor (LIS)
Advanced Spaceborne Thermal Emission and Reflection (ASTER)
Atmospheric Infrared Sounder/Advanced Microwave Sounding Units (AIRS/AMSU-A/-B)
High-Resolution Dynamics Limb Sounder (HIRDLS)
Stick Scatterometer (STIKSCAT)
Clouds and Earth Radiant Energy System (CERES)
Earth Observing Scanner Polarimeter (EOSP)
Multi-Angle Imaging Spectro-Radiometer (MISR)
High Resolution Imaging Spectrometer (HIRIS), 2nd platform only

NASA EOS-B
Stratospheric Wind Infrared Sounder (SWIRLS)
Microwave Limb Sounder (MLS)
X-Ray Imaging Experiment (XIE)
Tropospheric Emission Spectrometer (TES)
Stratospheric Aerosol and Gas Experiment III (SAGE III)
Altimeter (ALT)
Multi-Frequency Imaging Microwave Radiometer (MIMR)
Global Geopositioning Instrument (GGI)

ESA European Polar Orbiting Platform (EPOP)
Clouds and Earth Radiant Energy System (CERES)
Synthetic Aperture Radar - C-band (SAR-C)
Atmospheric Lidar (ATLID)
High Resolution Imaging Spectrometer (HRIS)
Advanced Medium Resolution Imaging Radiometer (AMRIS)
Search and Rescue (S&R)

NASDA Japanese Polar Orbiting Platform (JPOP)
Laser Atmospheric Wind Sounder (LAWS)
Synthetic Aperture Radar - L-Band (SAR-L)
Ocean Color and Temperature Scanner (OCTS)
Advanced Visible and Near Infrared Radiometer (AVNIR)
Advanced Microwave Sounding Radiometer (AMSR)

1.1.3 SAR Satellite Missions

[Table 1.2 here: parameters of the ALMAZ, ERS-1, J-ERS-1, and Radarsat free-flying SAR
satellite systems]

Prior to the full implementation of the EOS program by the year 2000, there
will be four free-flying satellites containing SAR systems as part of their
instrument package. The first system, launched in March 1991, is the Soviet
S-band (ALMAZ) system, followed by the European Space Agency C-band
(ERS-1) system to be launched in summer 1991. The Japanese L-band SAR
(J-ERS-1) will be launched in 1992 and the Canadian Radarsat, a C-band system
with electronic scanning capability, is planned for 1995. The parameters for
these sensors are given in Table 1.2. The data from three of these instruments
(excluding ALMAZ) will be received by a United States ground receiving station
in Alaska (as well as other facilities worldwide) and operationally calibrated
and processed to high level (geophysical) products. A description of the
design and operation of this station, the Alaska SAR Facility, is provided in
Appendix C.

Considering that to date the only spaceborne SAR systems for remote sensing
have been the NASA Seasat-A SAR and the Shuttle Imaging Radars (SIR-A,
SIR-B), for a total of less than four months of operation, these upcoming SAR
missions offer a significant opportunity to utilize SAR data for global science.
(We should also note that the recently deorbited USSR Cosmos 1870 SAR
(λ = 10 cm) was used primarily for remote sensing purposes and that the Soviets
have made this data available to the scientific community.) Given the recent
advances in processing and calibration technologies that will be applied to the
data products, these near future free-flying SAR systems should greatly advance
our understanding of the use of SAR data for modeling global processes.
Considering the volume of SAR data that is to be collected, it is reasonable to
assume that the number of scientists working with these data sets will increase
tenfold over the next decade.

To properly interpret and fully utilize the information contained in these
data sets, an understanding by the user community of the signal processing
procedures and the system error sources is crucial. For this reason, we first
provide a complete theoretical development of the SAR imaging process and
signal processing algorithms. This is followed by a description of the sensor
flight and ground data systems that emphasizes aspects of the sensor and
processor performance in terms of data product characteristics. Our goal is to
provide a useful guide, not only for the SAR system engineer but also for the
scientist using these data sets. We believe that an understanding of the techniques
underlying production of the SAR imagery will enhance the scientist's ability
to interpret the data products.


1.2 OVERVIEW OF SAR THEORY


In Fig. 1.6 we show a simplified geometry of a side-looking real-aperture radar
(SLAR). The radar is carried on a platform (aircraft or satellite) moving at
speed V_s in a straight line at constant altitude. We assume the radar beam is
directed perpendicular to the flight path of the vehicle and downwards at the
surface of a flat earth. The relative speed between platform and earth is V_st. For
this geometry, the pointing (look) angle γ, relative to the vertical, is the same
as the incidence angle, η, which is the angle between the radar beam and the
normal to the earth's surface at a particular point of interest. The radar transmits
pulses of EM energy. The return echoes are sampled for future time coherent
signal processing. We will first discuss the capability of the radar system to
resolve separate terrain elements on the earth's surface.

Figure 1.6 Simplified geometry of a side-looking real-aperture radar (SLAR).

In Fig. 1.7 the range extent W_g of the radar beam (i.e., the ground swath
width) is established by the antenna height W_a, which determines the vertical
beamwidth, θ_v = λ/W_a. If R_m is the (slant) range from radar to midswath, then

    W_g ≈ λR_m / (W_a cos η)    (1.2.1)
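Eqn. (1.2.1) is easy to exercise numerically. The Seasat-like parameter values below (wavelength, range, antenna height, incidence angle) are assumptions for illustration, not figures from the text:

```python
import math

# Ground swath width from Eqn. (1.2.1): W_g ~ lambda * R_m / (W_a * cos(eta)),
# with the vertical beamwidth theta_v = lambda / W_a.
lam = 0.235          # wavelength (m), assumed L-band
R_m = 850e3          # slant range to midswath (m), assumed
W_a = 2.16           # antenna height (m), assumed
eta = math.radians(23.0)

theta_v = lam / W_a                        # vertical beamwidth (rad)
W_g = lam * R_m / (W_a * math.cos(eta))    # ground swath width (m)
print(f"vertical beamwidth = {theta_v*1e3:.0f} mrad, ground swath = {W_g/1e3:.0f} km")
```

With these assumed values the swath comes out near 100 km, showing how a physically small antenna height produces a wide illuminated ground strip.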

Figure 1.7 Radar geometry illustrating the ground swath, W_g, and radar beamwidth, θ_v.

The resolution of the radar in (ground) range (Fig. 1.7) is defined as the minimum
range separation of two points that can be distinguished as separate by the
system. If the arrival time of the leading edge of the pulse echo from the more
distant point is later than the arrival time of the trailing edge of the echo from
the nearer point, each point can be distinguished in the time history of the radar
echo. If the time extent of the radar pulse is τ_p, the minimum separation of two
resolvable points is then

    ΔR_s = cτ_p/2    (1.2.2)

where ΔR_s is the resolution in slant range and c is the speed of light.

As we will discuss in Chapter 3, to obtain a reasonable resolution ΔR_s, the
required pulse duration τ_p would be too short to deliver adequate energy per
pulse to produce a sufficient echo signal to noise ratio (SNR) for reliable
detection. Therefore, a pulse compression technique is commonly employed to
achieve both high resolution (with a longer pulse) and a high SNR. With
appropriate processing of the received pulse (matched filtering), the range
resolution obtainable is

    ΔR_s = c/(2B_R)
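A quick numerical check of the matched-filter result; the bandwidth and incidence angle below are assumed, Seasat-like values, and the projection onto the ground through sin η is added here for illustration:

```python
import math

# Slant range resolution for a matched-filtered pulse of bandwidth B_R:
#   Delta_R_s = c / (2 * B_R)
# and its projection onto the ground at incidence angle eta:
#   Delta_R_g = Delta_R_s / sin(eta)
c = 3.0e8
B_R = 19e6                 # pulse bandwidth (Hz), assumed
eta = math.radians(23.0)   # incidence angle, assumed

dRs = c / (2.0 * B_R)
dRg = dRs / math.sin(eta)
print(f"slant range resolution {dRs:.1f} m, ground range resolution {dRg:.1f} m")
```

Doubling the bandwidth halves both numbers, which is the sense in which the resolution "can be made arbitrarily fine (within practical limits) by increasing the pulse bandwidth."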


where B_R is the frequency bandwidth of the transmitted pulse. This resolution
can be made arbitrarily fine (within practical limits) by increasing the pulse
bandwidth.

The radar system range resolution is therefore determined by the type of
pulse coding and the way in which the return from each pulse is processed. All
radar systems, conventional, SLAR, or SAR, resolve targets in the range
dimension in the same way. It is the resolution of targets in the dimension
parallel to the platform line of flight (i.e., the azimuth or along-track dimension)
that distinguishes a SAR from other radar systems. We now overview the
mechanisms used by the SAR to achieve fine azimuth resolution and defer until
Chapter 4 a detailed discussion of the techniques for range and azimuth
processing.
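The pulse compression idea can be sketched with a simulated chirp and matched filter; the sample rate, bandwidth, duration, and delay below are arbitrary assumed values:

```python
import numpy as np

# A minimal sketch of pulse compression: a long linear-FM (chirp) pulse of
# bandwidth B is matched-filtered, collapsing the full pulse duration to a
# peak of width ~1/B, so resolution follows c/(2*B_R), not c*tau_p/2.
fs = 100e6                       # sample rate (Hz), assumed
B, tau = 20e6, 10e-6             # chirp bandwidth (Hz) and duration (s), assumed
n = int(round(tau * fs))         # 1000 samples of transmitted pulse
t = np.arange(n) / fs
chirp = np.exp(1j * np.pi * (B / tau) * (t - tau / 2) ** 2)   # linear FM sweep

delay = 300                      # echo delayed by 300 samples, assumed
echo = np.concatenate([np.zeros(delay), chirp, np.zeros(200)])

out = np.abs(np.correlate(echo, chirp, mode="valid"))   # matched filter
peak = int(np.argmax(out))
mainlobe = int(np.sum(out > out[peak] / np.sqrt(2)))    # samples above -3 dB

print(peak)          # 300: compression preserves the echo delay
print(n, mainlobe)   # the 1000-sample pulse compresses to a few samples
```

The compressed peak sits at the true delay while its width shrinks from the 1000-sample pulse length to roughly fs/B samples, which is the matched-filter resolution gain described above.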

1.2.1 Along-Track (Azimuth) Resolution

As shown in Fig. 1.6, suppose that the radar antenna has a length L_a in the
dimension along the line of flight. Then the radar beam (i.e., the angular direction
in space to which the transmitted electromagnetic energy is confined and from
which the system can respond to received signals) has an angular spread in
that dimension of θ_H = λ/L_a, where λ is the wavelength of the transmitted energy.
Two targets on the ground separated by an amount δx in the azimuth direction
(Fig. 1.8), and at the same slant range R, can be resolved only if they are not
both in the radar beam at the same time. Thus we have

    δx = Rθ_H = Rλ/L_a    (1.2.3)

Figure 1.8 Illustration of real-aperture radar capability to resolve two targets separated in azimuth.

This quantity is the resolution limit of a conventional SLAR, in the azimuth
coordinate.

To improve the along-track resolution δx at some specified slant range R
and wavelength λ, it is necessary to increase the antenna length in the along-track
dimension. The mechanical problems involved in constructing an antenna with
a surface precision accurate to within a fraction of a wavelength, and the difficulty
in maintaining that level of precision in an operational environment, make it
quite difficult to attain values of L_a/λ greater than a few hundred. For a range
R = 50 km, such as might be useful for airport surveillance radars, a modest
value of L_a/λ = 100 results in a cross-beam resolution limit δx = 500 m, which
is sufficient. Similarly, a shipboard antenna with L_a = 1 m, operating at X-band
(λ = 3 cm) and a range of 10 km has a resolution δx = 300 m, again adequate
for the purpose of detection and avoidance. However, from a space platform,
with say R = 800 km, even a value of L_a/λ = 200 yields only a δx = 4 km,
which is unacceptable for use in scientific applications that typically require
high-resolution imagery. To attain a value of only δx = 1 km with R = 800 km
requires L_a/λ = 800, which is impractical. Even if this ratio, or something like
it, could be attained mechanically at L-band (λ = 20 cm), the corresponding
aperture length, L_a = 160 m, is problematic to deploy in space. The Seasat
antenna, with L_a/λ = 45, at altitude 800 km would attain a SLAR resolution
of only δx = 18 km.
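The worked cases above all follow directly from Eqn. (1.2.3). A small sketch reproducing them; the specific antenna lengths and wavelengths are chosen to match the quoted L_a/λ ratios, so they are assumptions:

```python
# Real-aperture (SLAR) azimuth resolution from Eqn. (1.2.3): dx = R * lambda / L_a.
def slar_resolution(R_m, wavelength_m, L_a_m):
    """Cross-beam (azimuth) resolution of a real-aperture radar at range R_m."""
    return R_m * wavelength_m / L_a_m

print(slar_resolution(50e3, 0.10, 10.0))    # airport surveillance, L_a/lambda = 100
print(slar_resolution(10e3, 0.03, 1.0))     # shipboard X-band, 1 m antenna
print(slar_resolution(800e3, 0.20, 40.0))   # from orbit, L_a/lambda = 200
print(slar_resolution(800e3, 0.235, 10.7))  # Seasat-like geometry (~18 km in the text)
```

The last case makes the point of the paragraph: a mechanically realistic spaceborne antenna gives kilometer-scale real-aperture resolution, far too coarse for imaging science.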

The key observation that ultimately led to SAR, and the vastly improved
along-track resolution that makes spaceborne imaging radars possible, dates
from about 1951, and is attributed originally to Wiley (1965). He observed that
two point targets, at slightly different angles with respect to the track of the
moving radar, have different speeds at any instant relative to the platform.
Therefore, the radar pulse when reflected from the two targets will have two
distinct Doppler frequency shifts.

For a point target at slant range R and along-track coordinate x relative to
the side-looking radar (Fig. 1.8), the Doppler shift relative to the transmitted
frequency is

    f_D = (2V_st/λ) sin θ    (1.2.4)

where V_st is the relative velocity, θ is the angle of the target off broadside,
and the factor of 2 results from the two-way travel inherent in an active system.
(In this section, we assume that V_st is just the platform speed V_s.) Therefore, if
the received signal at the instant shown in Fig. 1.8 is frequency analyzed, any
energy observed in the return at time corresponding to range R and at Doppler

frequency, say f_D1, will be associated with a target at coordinate

    x_1 = λR f_D1/(2V_st)

Similarly, energy at a different frequency f_D2 will be assigned to a corresponding
coordinate x_2. Thus, even though the targets are at the same range and in the
beam at the same time, they can be discriminated by analysis of the Doppler
frequency spectrum of the return signal, hence the early name given by Wiley
of "Doppler beam sharpening" for this technique.

The use of Doppler frequency effectively provides a second coordinate for
use in distinguishing targets. These two coordinates are the ground range R_g
and the along-track distance x relative to a point directly beneath the vehicle
(i.e., the nadir point) as shown in Fig. 1.9. The SAR system effects an invertible
transformation of coordinates from ground range and along-track position to
the observable coordinates, pulse delay t and Doppler shift f_D.

Figure 1.9 Illustration of ground range and along-track coordinates.

From Fig. 1.9 we can write

    R² = (x − V_st s)² + R_g² + H²    (1.2.5)

where s is the time along the flight path. The range rate is given by

    Ṙ = −V_st(x − V_st s)/R

The echo time delay t_0 = 2R(0)/c and Doppler shift f_D0 at s = 0 are related by

    f_D0 = −(2/λ)Ṙ(0) = 2V_st x/(λR(0))    (1.2.6)

Substituting Eqn. (1.2.5) into Eqn. (1.2.6), we get

    f_D0 = 2V_st x/(λ(x² + R_g² + H²)^(1/2))

which is the equation of a conic in the (R_g, x) plane. From Eqn. (1.2.6) and
Fig. 1.9 we can write

    x²[(2V_st/(λf_D0))² − 1] = R_g² + H²

resulting in a hyperbola as shown in Fig. 1.10.

Figure 1.10 Illustration of use of range delay and Doppler shift to locate the target.

The use of Doppler frequency in addition to pulse time delay thereby provides
target (terrain point) localization in two dimensions. That is (Fig. 1.10), a
specific delay t_0 = 2R(0)/c and Doppler shift f_D0 correspond to a specific circle,
Eqn. (1.2.5), and hyperbola, which intersect in only four points in the plane of
ground range R_g and along-track distance x. The left/right ambiguity is resolved by
our knowledge of the side of the platform from which the radar beam is directed,
while the branch of the hyperbola is indicated by the sign of the Doppler shift.

With the use of Doppler analysis of the radar returns, the resolution δx of
targets in the along-track coordinate is related to the resolution δf_D of
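The delay/Doppler-to-coordinate transformation can be sketched as a round trip: forward-model the observables for an assumed target and platform geometry, then invert them using Eqns. (1.2.5) and (1.2.6) at s = 0. All numerical values are illustrative assumptions:

```python
import math

c = 3.0e8

def locate(t0, f_D0, wavelength, V_st, H):
    """Recover (x, R_g) from echo delay t0 and Doppler shift f_D0, assuming the
    left/right ambiguity has been resolved by the known look direction."""
    R0 = c * t0 / 2.0                            # slant range from the delay
    x = wavelength * f_D0 * R0 / (2.0 * V_st)    # along-track, from Eqn. (1.2.6)
    R_g = math.sqrt(R0**2 - H**2 - x**2)         # ground range, from Eqn. (1.2.5)
    return x, R_g

# Forward model a target, then recover it from its observables
lam, V, H = 0.235, 7.5e3, 800e3                  # assumed platform values
x_true, Rg_true = 1500.0, 300e3                  # assumed target position
R0 = math.sqrt(x_true**2 + Rg_true**2 + H**2)
t0, fD0 = 2.0 * R0 / c, 2.0 * V * x_true / (lam * R0)
print(locate(t0, fD0, lam, V, H))   # recovers (1500.0, 300000.0) up to rounding
```

The inversion is exact here because the delay fixes the slant-range sphere and the Doppler fixes the hyperbola, exactly the two-coordinate localization described above.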


measurement of the Doppler frequency. The antenna beamwidth in the
horizontal dimension no longer enters directly as a limiting factor. From
Eqn. (1.2.4), the azimuth resolution is then

    δx = (λR/(2V_st)) δf_D    (1.2.7)

Furthermore, the measurement resolution in the frequency domain is nominally
the inverse of the time span S of the waveform being analyzed (i.e., δf_D = 1/S).
Since this time is potentially the time during which any particular target is in
view of the radar (i.e., the time during which a target remains in the beam), we
have from Fig. 1.8 that

    S = Rθ_H/V_st = Rλ/(L_a V_st)    (1.2.8)

which results in

    δx = (λR/(2V_st))(L_a V_st/(Rλ)) = L_a/2    *(1.2.9)

This counter-intuitive result, which states that improved resolution comes from
smaller antennas, was first proposed by Cutrona et al. (1961). This result actually
makes some assumptions that are not always valid, as we will discuss in
Section 1.2.2; however, the resolution of contemporary SARs does approach
this limit. Seasat, for example, had an antenna with an along-track dimension
L_a = 10.7 m, and attained a resolution δx = 6 m from an orbital altitude of
H = 800 km.

Although Eqn. (1.2.9) predicts that an arbitrarily fine resolution is attainable
by reducing the antenna azimuth dimension, at least one factor operates to put
a lower bound on resolution, even at this simple level of modeling. Since we
need to measure range as well as along-track position, the radar must be pulsed.
When a pulse is transmitted, the radar then goes into a listening mode to detect
the target echo. Suppose the span of the (slant) range to which targets are
confined (i.e., the slant range swath) is W_s (Fig. 1.7). We then require that the
time of reception of the earliest possible echo from any point in the swath due
to a particular pulse transmission be later than the time of reception of the last
possible echo from any other point due to transmission of the previous pulse.
Otherwise we will attribute the trailing portion of the previous pulse echo to
a nearby point illuminated by the current pulse. If the near and far edges of
the swath in slant range are R′ and R″, this requires that (Fig. 1.7)

    2R″/c < 2R′/c + T_p

where T_p = 1/f_p is the time separation between two pulse transmissions (i.e.,
the interpulse period) and f_p is the pulse repetition frequency (PRF). Thus the
swath width is bounded by

    W_s = R″ − R′ < cT_p/2 = c/(2f_p)    (1.2.10)

However, coupled to this requirement is measurement of the Doppler frequency
shift. The Doppler shift is the incremental change in phase difference between
the transmitted and received carrier waveform due to change in position of the
radar and target in consecutive pulses. To relate unambiguously the incremental
change in phase difference to a Doppler frequency, the frequency bandwidth
B_D of the Doppler signal must be less than the PRF, B_D < f_p (see Appendix A).
From Fig. 1.8, this implies

    B_D = f_D,high − f_D,low = (2V_st/λ)[sin(θ_H/2) − sin(−θ_H/2)] ≈ 2V_st θ_H/λ
        = 2V_st/L_a = V_st/δx < f_p    *(1.2.11)

Equation (1.2.11) states that the radar must transmit at least one pulse each time
the platform travels a distance equal to one half the antenna length.
Combining Eqn. (1.2.10) and Eqn. (1.2.11), we have

    V_st/δx < f_p < c/(2W_s)    *(1.2.12)

which requires that the swath width W_s decrease as the azimuth resolution is
increased (i.e., as δx is made smaller).

The inequalities in Eqn. (1.2.12) can be rearranged to illustrate the
relationship between swath width and resolution as follows

    W_s/δx < c/(2V_st)    (1.2.13)

For a satellite in earth orbit, the right side in Eqn. (1.2.13) is nearly constant,
on the order of 20,000. Using Eqn. (1.2.1) and Eqn. (1.2.9) with the nominal
relation (Fig. 1.7)

    W_s = W_g sin η

the inequality Eqn. (1.2.13) yields a requirement on the antenna area of

    A = W_a L_a > 4V_st λ R_m (tan η)/c    *(1.2.14)

which is the lower bound for realization of full resolution SAR.
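The constraints of Eqns. (1.2.10)-(1.2.14) can be evaluated for an assumed orbital geometry; all parameter values below are illustrative, loosely Seasat-like assumptions:

```python
import math

# The PRF must exceed V_st/dx (azimuth sampling, Eqn. (1.2.11)) while staying
# below c/(2*W_s) (range ambiguity, Eqn. (1.2.10)); Eqn. (1.2.14) turns this
# trade into a minimum antenna area.
c = 3.0e8
V_st, lam, R_m = 7.5e3, 0.235, 850e3    # platform speed, wavelength, midswath range
eta = math.radians(23.0)                # incidence angle, assumed
L_a = 10.7                              # along-track antenna length (m), assumed
W_s = 40e3                              # slant range swath (m), assumed

dx = L_a / 2.0                          # best azimuth resolution, Eqn. (1.2.9)
f_p_min = V_st / dx                     # Eqn. (1.2.11)
f_p_max = c / (2.0 * W_s)               # Eqn. (1.2.10)
A_min = 4.0 * V_st * lam * R_m * math.tan(eta) / c   # Eqn. (1.2.14)

print(f"{f_p_min:.0f} Hz < f_p < {f_p_max:.0f} Hz")
print(f"minimum antenna area = {A_min:.1f} m^2")
```

With these numbers a valid PRF window exists (roughly 1.4-3.75 kHz) and the minimum area is a few square meters, comfortably below the roughly 23 m² of a Seasat-sized antenna, which is why a design of that class is feasible.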


1.2.2 Doppler Filtering

The slant range between the radar, at along-track position x, and a point
target (Fig. 1.11) is

    R = [R_c² + (x − x_c)²]^(1/2)    (1.2.15)

where R_c is the range of closest approach, occurring at radar position x_c.
The phase difference between transmitted and received waveforms due to
two-way travel over the range R is

    φ = −4πR/λ

where the time derivative of φ is the Doppler frequency (in rad/s). Expanding
the relation in Eqn. (1.2.15) to second order around some radar position x_0 at
a slant range R_0, we have

    φ = −(4π/λ)[R_0 + (x_0 − x_c)(x − x_0)/R_0 + R_c²(x − x_0)²/(2R_0³)]    (1.2.16)

where we can approximate R_c and R_0 as equal for the narrow beam radars
used in most practical applications. For this case then

    f_D = φ̇/2π = −(2V_st/(λR_0))[(x_0 − x_c) + (x − x_0)]

If we define the value of x at which the Doppler frequency ceases to be effectively
constant as that x for which the quadratic term in Eqn. (1.2.16) contributes a
value of π/4 to φ at the edge of the aperture, then we can confine attention to
the received waveform collected over an "aperture" X, where

    X/2 = |x − x_0| < √(λR_0/8)

or

    X < √(λR_0/2)    (1.2.17)

The corresponding time interval (i.e., the integration time of the SAR) is limited
to

    S = X/V_st < √(λR_0/2)/V_st

With this limitation, the resolution from Eqn. (1.2.7) is

    δx = (λR_0/(2V_st))(1/S) = √(λR_0/2)    *(1.2.18)

In the literature, values ranging from √(λR_0/8) to √(λR_0) are given for
Eqn. (1.2.18), depending on the criterion assumed for the maximum allowable
quadratic phase error.
A SAR

proc~ssing system which a tta ins its al ong-track resolu tio n by si mple

~e~uency filter~ng

Figure 1.11
and time.

23

a slant ra nge R0 , we have

There is one restriction in the derivation leading to the azimuth resolution


expression of Eqn. ( l.2.9). lf a target is to be positioned along track (relative
to the platform) in accord with its observed Doppler frequency, it must produce
a constant Doppler frequency over the observation interval S. However, if this
interval is the entire time the target is within the radar footprint, as was assumed
for Eqn. ( 1.2.9), then the corresponding Doppler signal will have a frequency
which sweeps over the entire Doppler bandwidth as the vehicle passes by the
target. The actual analysis interval available using a frequency filtering technique
may be much less than S, since it is restricted to the time span over which any
particular point target has essentially a constant Doppler frequency. Put another
way, the Doppler waveform for any finite interval due to a point target will
not be that of a sinusoid. A Fourier analysis of such a waveform will always
result in frequency components at more than one frequency, so that the target
may be inferred to have a physical extent greater than f>x = (A.R/ 2 V.1)( I / S), the
resolution cell size. The target return will spread over multiple resolution cells
of the Fourier spectrum.
To investigate this point further, consider Fig. I.I I , which shows a point
target at some along-track position x 0 and slant range of closest approach R 0 .
With the radar at some arbitrary position x along track we have

OVERVIEW OF SAR THEORY

Geometry illustrating rada r target and the quadra tic relation between ra nge

of the Doppler waveform is called a n " unfocussed " SAR.


fr his pro~essor ts. unable to accommo~ate the va ria ble rate of change of phase
om a s1.ngle point target. If Seasat signals were processed in the unfoc ussed
mode, using the resolution expression in Eqn. ( 1.2. 18), the resulting resolution
wou ld .be f>x = 316 m, in contrast with a resolution of f>x = J8.6 km which
results if no unfocussed SAR processing is applied . The ultima te SAR resolution
of f>x = L./ 2 = 6 ~ which can be achieved for fully focussed processing, ta kes
account of the nonlinear phase behavio r. Thus a n unfocussed SAR is a dramatic

24

INTRODUCTION TO SAR
1.2

improvement over real-aperture radars, but still does no t provide sufficient


resolution for most scientific applications. Even a Seasat-type system designed
to use an X-band carrier frequency would have an unfocussed SAR resolution
of only 112 m.
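The three resolution regimes compared above (real aperture, unfocussed, fully focussed) follow directly from the formulas in this section. A short check with assumed Seasat-like numbers (λ = 0.235 m, R_0 = 850 km, L_a = 10.7 m; values assumed for illustration) reproduces them:

```python
from math import sqrt

# Resolution regimes for an assumed Seasat-like geometry:
# real aperture, unfocussed SAR (Eqn. 1.2.18), and the fully focussed limit.
lam = 0.235    # wavelength (m), assumed
R0 = 850e3     # slant range of closest approach (m), assumed
L_a = 10.7     # antenna length (m), assumed

dx_real      = lam * R0 / L_a       # beam-limited footprint, theta_H * R0
dx_unfocused = sqrt(lam * R0 / 2)   # quadratic-phase-limited aperture, Eqn. (1.2.18)
dx_focused   = L_a / 2              # full synthetic aperture, Eqn. (1.2.9)

print(f"real aperture : {dx_real/1e3:.1f} km")   # ~18.7 km
print(f"unfocussed SAR: {dx_unfocused:.0f} m")   # ~316 m
print(f"focussed SAR  : {dx_focused:.2f} m")     # 5.35 m
```

The ~316 m and ~18.6 km figures quoted in the text fall out directly; the small differences are rounding in the assumed parameters.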
In order to attain high resolution images, it is necessary to process the SAR
Doppler signals in some way that can account for the variation in Doppler
frequency of a target as it passes through the footprint. The result would be a
focussed SAR image that approaches the along-track resolution limit of L_a/2.

The processing required in a focussed SAR is suggested by Fig. 1.12. As the
radar footprint passes over the target, the phase change over the two-way path
from radar to target is

    Δφ = -(4π/λ)ΔR

where

    ΔR = [R_0^2 + (x - x_0)^2]^(1/2) - R_0

or

    ΔR ≈ (x - x_0)^2/(2R_0),    |x - x_0| ≪ R_0    (1.2.19)

and R_0 is the range at the point of closest approach (i.e., s = 0). Since x = V_st s,
Δφ is a quadratic function of the along-track time, s, and the change in Doppler
frequency is linear with time. For full resolution, we must use all the data
collected over the interval, X = θ_H R_0, for which the target is in the radar beam.
If this quadratic phase is compensated such that the returns from each pulse
due to the target at x_0 can be added coherently, targets at x ≠ x_0 will correspond
to improperly compensated returns so they will cancel. The processed returns
from the target at x_0 will then dominate returns from other targets at the same
range.

Figure 1.12 Slant plane geometry illustrating SAR focussing technique.

The range processing of any particular return, due to a target at x_0 for the
sensor at a position x, results in a point on the complex Doppler waveform

    f_D(x) = exp[-jφ(x)] = exp[-j4πR(x)/λ]
           ≈ exp{-j(4π/λ)[R_0 + (x - x_0)^2/(2R_0)]}    (1.2.20)

using Eqn. (1.2.19). This signal has the instantaneous frequency

    f_s(x) = (1/2π) dφ/dx = -2(x - x_0)/(λR_0)

and a spatial bandwidth B_s = 2X/(λR_0) corresponding to a Doppler bandwidth
B_D = 2V_st^2 S/(λR_0).

By processing f_D(x) we want to discern that the azimuth coordinate of
the point target in question is broadside of the radar platform (i.e., x = x_0). If
we knew x_0 in advance, we could introduce this compensation immediately.
However, lacking that knowledge, we must process with a variety of
compensations matched to trial values of x_0 = x' and pick the peak response
in order to measure x_0.

This is all to say only that the signal processing should correlate the Doppler
signal f_D(x) in Eqn. (1.2.20) with the known waveform

    g(x - x') = exp[-j2π(x - x')^2/(λR_0)],    |x - x'| < X/2

After some mathematics, we obtain a normalized correlator output

    h(x') = (1/X) ∫ f_D(x) g*(x - x') dx

whose magnitude is

    |h(x')| = |{sin[2π(x' - x_0)(X - |x' - x_0|)/(λR_0)]}/[2π(x' - x_0)X/(λR_0)]|,    |x' - x_0| < X

taking careful account of limits of integration and the sign of x'. If the time
bandwidth product of this signal, S·B_D = 2X^2/(λR_0), is sensibly large, say >10,
over regions where |h(x')| is not small we have

    |h(x')| = |sin[u(x' - x_0)]/[u(x' - x_0)]|,    u = 2πX/(λR_0)    (1.2.21)

This function peaks at x' = x_0, the target location, and has a width on the
order of

    δx = λR_0/(2X) = 1/B_s    (1.2.22)

This is an important result which we will expand upon in detail in Chapter 3.


Replica correlation of the quadratic phase waveform in Eqn. ( 1.2.20) with itself
results in a correlator output with a width which is independent of waveform
duration X , under reasonable assumptions. The same result can be generated
by matched filtering of the Doppler waveform and the two approaches can be
shown to be equivalent.
Such replica correlation, or matched filtering, is the heart of high resolution
SAR image formation algorithms. In the specific context at hand, from
Eqn. (1.2.22) with X = θ_H R_0 = λR_0/L_a, the correlator output is seen to resolve
targets to within

    δx = λR_0/(2X) = L_a/2    (1.2.23)

which is the result argued heuristically above, leading to Eqn. (1.2.9).
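As a concrete illustration of this replica correlation, the sketch below simulates the quadratic-phase Doppler waveform of Eqn. (1.2.20) for a single point target and correlates it with the reference function g. All numerical parameters (wavelength, range, antenna length, sample spacing, target position) are invented for the demonstration and do not describe any real system.

```python
import numpy as np

# Azimuth replica correlation (Eqns. 1.2.20-1.2.22) for one point target.
lam, R0, La = 0.03, 5.0e3, 2.0   # wavelength (m), closest range (m), antenna (m) - assumed
X = lam * R0 / La                # full synthetic aperture, X = theta_H * R0 = 75 m
dx = 0.25                        # along-track sample spacing (m), assumed
x = np.arange(-X, X, dx)         # sensor positions along track
x0 = 7.0                         # true target azimuth position (m), assumed

# Range-compressed Doppler waveform, Eqn. (1.2.20), constant phase term dropped
f = np.exp(-1j * 2 * np.pi * (x - x0) ** 2 / (lam * R0))
# Replica for a trial target at x' = 0, windowed to the aperture |x| < X/2
g = np.where(np.abs(x) < X / 2, np.exp(-1j * 2 * np.pi * x ** 2 / (lam * R0)), 0)

# Normalized correlation h(x'); np.correlate conjugates its second argument
h = np.abs(np.correlate(f, g, mode="same")) / np.count_nonzero(g)

x_peak = x[np.argmax(h)]
print(x_peak)                # peak lands at the target position, near x0 = 7 m
print(lam * R0 / (2 * X))    # predicted width, Eqn. (1.2.22): lambda*R0/(2X) = La/2
```

The peak of |h(x')| locates the target, and its main-lobe width matches the λR_0/(2X) = L_a/2 prediction, independent of the waveform duration.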


Many effects need to be discussed before a full picture of the various focussed
SAR processing procedures will be clear. The intent of this overview was to
introduce the SAR concept. From this basis, the reader can better appreciate
the historical developments in SAR sensor and processor technology, as well
as the various applications of SAR data which follow in the remainder of this
chapter.

1.3 HISTORY OF SYNTHETIC APERTURE RADAR

To gain a perspective on the progress that has been made in the evolution of
synthetic aperture radar systems, we present a brief history of SAR. To set the
stage for the discovery of SAR, we first address the early history of radar from
ground based detection systems to side-looking airborne mappers. We will then
trace key developments in the SAR sensor technology as well as the signal
processor by highlighting the technology milestones leading toward modern
radar systems.

1.3.1 Early History

Prior to the discovery of synthetic aperture radar in the early 1950s, radar had long
been recognized as a tool for detection and tracking of targets such as aircraft


and ships. In 1903, a mere 15 years following the studies by Hertz on


the generation, reception, and scattering of electromagnetic waves, Christian
Hulsmeyer of Germany demonstrated a ship collision avoidance radar which
he later patented (Hulsmeyer, 1904). In 1922, Marconi eloquently stated the
value of radar for detection and tracking of shi ps in his acceptance speech for
the IRE Medal of Honor. Most of the early US work in development of radar
detection systems was conducted at the Naval Research Laboratory (NRL ). In
1922, the first continuous wave radar system was demonstrated by A. H. Taylor
and later patented (Taylor et al., 1934 ). However, it was not until 1934 that
the first airborne pulsed radar system, operating at a carrier frequency of
60 MHz, was demonstrated by R. M. Page of NRL. In a parallel effort, radar
systems for tracking and detection of aircraft were developed both in Great
Britain and Germany during the early 1930s. By 1935, each of these countries
successfully demonstrated the capability to track aircraft targets using short-pulse ranging measurements. Sir Robert Watson-Watt (1957) is generally
credited with building the first operational radar system in 1937. This evolved
into the Chain Home network. These stations were used throughout World
War II to track aircraft across Western Europe.
Between the development of the first operational systems and the start of
World War II, radar technology became generally available such that all the
major warring powers had aircraft tracking capability. Additional enhancements
in component technology enabled increases in both the tracking range and the
radar frequency from the VHF band (30-300 MHz) to the UHF band
(300 MHz-3 GHz). In 1938, an anti-aircraft fire control radar with a range of
over 100 nautical miles operating at 200 MHz went into production (Brookner,
1985). Over 3000 units of this system (SCR-268) and its successors were built
during the early war years. They contributed significantly to the success of the
allied forces. In fact, an early-warning SCR system, installed in Honolulu,
detected the Japanese invasion in 1941, but by the time the radar echoes were
correctly interpreted it was too late to assemble a defense. During this period,
parallel radar development activities were ongoing in both the USSR and Japan.
However, very little information about that work is available.
Early in World War II, operational airborne radars were deployed by the
US, Germany, and Great Britain. The first systems, which operated at VHF
frequencies, were used for detection of other aircraft and ships with mixed
success. Following the war, improvements in these systems came rapidly, in
large part as a result of high frequency component technology development at
the Massachusetts Institute of Technology (MIT) Radiation Laboratory. Most
significant among those developments was a high frequency, high peak power
microwave transmitter. Another important development came in image display
systems. Most of the early radar displays presented the echoes on a long
persistence cathode ray tube (CRT) in a range-angle format (B-Scan) in which
the scan angle was presented relative to the aircraft flight direction. The
development of the plan-position indicator (PPI ) corrected for the angular
distortions in the display and later scan converters enabled the binary phosphor


displays to present a full gray scale. It was these among other early technology
developments that set the stage for the evolution of imaging radar.

1.3.2 Imaging Radars: From SLAR to SAR

In the early 1950s, engineers first recognized that, instead of rotating the antenna
to scan the target area, it could be fixed to the fuselage of the aircraft. This
allowed for much longer apertures and hence improved along-track resolution.
An additional improvement was the use of film to record the CRT display of
the pulse echoes. The early versions of these side-looking aperture radar (SLAR)
systems were primarily used for military reconnaissance purposes. They were
typically operated at relatively high frequencies compared to ground based
radar systems, to achieve good along-track resolution. Some systems (e.g., the
Westinghouse AN/APQ-97), which operated at frequencies as high as 35 GHz
with pulse durations a small fraction of a microsecond, were capable of producing
imagery at resolutions in the 10-20 m range. It was not until the mid 1960s
that the first high resolution SLAR images were declassified and made available
for scientific use. The value of SLAR images for scientific applications such as
geologic mapping, oceanography, and land use studies was recognized almost
immediately (MacDonald, 1969). Perhaps the most widespread interest in the
use of SLAR was generated by the mapping campaigns to Central America
(Viksne, 1969) and South America (van Roessel and de Godoy, 1974). Large
areas of these perpetually cloud-covered regions were mapped for the first time,
dramatically demonstrating the benefits of a high resolution radar imager.
It is generally agreed that the earliest statement describing the use of Doppler
frequency analysis as applied to a coherent moving radar was put forth by Carl
Wiley of Goodyear Aircraft Corp. in June 1951 (Wiley, 1985). Wiley noted that
the reflections from two fixed targets at an angular separation relative to the
velocity vector could be resolved by frequency analysis of the along-track
spectrum. This characteristic permitted the azimuth resolution of the return
echoes to be enhanced by separating the echoes into groups based on their
Doppler shift, as described in Section 1.2. In his patent application, Wiley (1965)
referred to his technique as Doppler beam sharpening rather than synthetic
aperture radar, as it is known today. His design, shown in Fig. 1.13a, is today
referred to as squint mode SAR.
Although the radar group at the Goodyear research facility in Litchfield,
Arizona, was primarily interested in high resolution radar as applied to missile
guidance systems, they pursued Wiley's beam sharpening concept and built the
first airborne SAR system, flown aboard a DC-3 in 1953. This system, which
operated at 930 MHz, used a Yagi antenna with a real aperture beamwidth of
10°. The coherent video was filtered to extract the desired portion of the
Doppler spectrum, weighting was applied to the baseband analog signal, and
it was summed in a storage tube to achieve a synthetic beamwidth of
approximately 1° (Fig. 1.13b).

Figure 1.13 (a) Radar configuration; and (b) Operational flow diagram, as proposed by Wiley in his patent application for the Doppler beam sharpening radar (Wiley, 1965).

An independent and nearly parallel development of synthetic aperture radar
was carried out by a group at the University of Illinois under the direction of
C. W. Sherwin (1962). This Illinois group, part of the Control Systems
Laboratory, was initially interested in developing techniques to detect moving
targets, based on their Doppler characteristics, using incoherent airborne SLAR
data. It was in 1952 that a member of the group, John Kovaly, recognized that


variation in terrain height produced distinctive peaks that migrated across the
azimuth frequency spectrum. He reported that these experimental observations
could provide the basis for a new type of radar with improved angular resolution.
It was also in 1952 that Sherwin first reported the concept of a fully focussed
array at each range bin by providing the proper phase corrections. Additionally,
he put forth the concept of motion compensation based on phase corrections
derived from platform accelerometer measurements, as applied to the received
signal before storage. These ideas eventually evolved into development of a
coherent X-band radar system. The first published article that included a
focussed strip image was in a 1953 University of Illinois report. This system
was designed to study sea surface characteristics as well as ship and submarine
wakes.
As a result of the accomplishments of the Illinois group, a much larger effort
was initiated. This study, coordinated by the University of Michigan, was termed
Project Wolverine. The study team, whose activities are summarized by Cutrona
(1961), was commissioned by the US Army to develop a high performance
combat surveillance radar. They developed a number of operational airborne
SAR systems that routinely began producing strip maps by 1958. It is this group
that is credited with developing the first operational motion compensation
system, using a Doppler navigator to measure long-term average drifts in
conjunction with a gyro to correct for short-term yawing of the aircraft. Perhaps
the most important development by Cutrona's group is the onboard optical
recorder and ground optical correlator for converting the coherent SAR video
signal into high resolution strip images.
In conjunction with the development of these early SAR systems, there were
a number of other activities which advanced the state of the art in component
technology. Recall that the key difference between the real aperture SLAR
system and the SAR (besides the signal processing required) is that SAR is a
coherent system. This requires both the magnitude and the phase of the echo
samples to be preserved, which implies that the system pulse-to-pulse phase
must be stable. The high power magnetron, which was such an important
development for the SLAR, could not be used directly in the SAR system since
the starting phase of each pulse was random. Instead, the early SAR systems
used a coho-stalo arrangement, where, for each magnetron pulse, the starting
phase of the pulse was measured. This phase was retained in a phase locked
intermediate frequency COHerent Oscillator (coho), referenced to the STAble
Local Oscillator (stalo), which was then used to demodulate the received echo.
The development of linear beam power amplifiers such as the klystron in
1939, followed shortly by the traveling wave tube (TWT), was a key advance
in SAR technology, since these devices provided both the high peak power and
phase stability required for SAR systems. The major advance in the TWT over
the klystron is the bandwidth. The klystron's bandwidth is limited to only a
few percent of the carrier frequency, while the TWT is capable of octave
bandwidths. Many of today's airborne SAR systems, and some spaceborne
systems requiring high peak power, still use TWT technology, although solid


state power amplifiers are now used in many applications because of their
increased reliability. Just as the solid state high power transistor technology
matured through the 70s and 80s, the technology of monolithic microwave
integrated circuit (MMIC) devices is moving toward the forefront in the 90s
and should become the standard in the next generation of spaceborne and
airborne SAR systems.

1.3.3 SAR Processor Evolution

Given the rapid early advancement in coherent radar sensor technology, in
most cases the limiting element in radar system performance was the signal
processor. In the early 1950s, with the advent of the first SAR systems, skeptics
observed that the SAR simply trades antenna fabrication problems for signal
processing problems. It was true that in this era, prior to digital computing,
focussing the synthetic array posed a severe technical challenge. The key
problems were: (1) How to store the information during the synthesis period;
and (2) How to apply the range dependent quadratic phase correction to obtain
a fully focussed synthetic array.
The early signal processors used an algorithm that is known today as
unfocussed SAR (Section 1.2.2). The processing was essentially an incoherent
sum of adjacent samples without phase compensation. One of the first processors,
using a re-entrant delay line, was developed and tested at the University of
Illinois in 1952. This system could integrate approximately 100 echoes before
the distortion of the range pulse (τ_p = 0.5 μs) became excessive. This delay line
effectively gave an improvement factor of 7 over the real aperture resolution.
The Illinois group also evaluated other storage media, such as a photographic
process using film for storage, in which direct integration of the film produced
the desired synthetic aperture image. A third device, the electronic storage tube
integrator, which was similar to Wiley's design, produced the best results among
the storage devices evaluated. Early in the development of the SAR signal
processor, because of the great difficulty in storing and reproducing analog
data, it was recognized that a quantized signal would be a better approach
(Blitzer, 1959). A key limitation in the analog storage devices was their relatively
small dynamic range and nonlinear transfer characteristic. However, development
of the required digital computing technology was at best a decade into
the future.
Recognizing the limitations of electronic processing, the Michigan group
embarked on a major effort to develop an optical recorder and correlator using
photographic film. With film as the storage medium all three dimensions (range,
azimuth, intensity) could be simultaneously recorded, thus providing a
permanent record of the video signal for later processing, allowing optimization
of the processing parameters by iterative correlations. Cutrona's group designed
the first processor capable of achieving fully focussed resolution by applying a
correction function that varied with range to compensate for the quadratic
phase term. In 1957, their laboratory breadboard was converted into an


operational unit and the first successful flight of an optical recorder was
conducted. The recording was performed on 35 mm film using CRTs modified
to generate the intensity modulated range trace. The system featured a Doppler
navigator for drift angle compensation to center the return on zero Doppler
and an optical recorder whose film advance rate was controlled by the estimated
ground speed.
The ground processing equipment was housed in a van for transportation
to the test sites. It contained both the optical correlator and the film processing
equipment, including a photo enlarger for analyzing strip imagery. This system
produced the first fully focussed SAR image in August 1957. The architecture
developed by the Michigan group became the standard for SAR correlators for
nearly two decades while the digital computing technology matured. A layout
of a modern optical correlator is shown in Fig. 1.14. Improvements in laser
light sources and Fourier optics enhanced the quality of the optically processed
image product. Hybrid architectures were also introduced (using acousto-optical
and charge coupled devices) to generate digital images from the optical signal,
but the use of film greatly constrained the performance of these systems. A
detailed description of optical processing theory and systems can be found in
Cutrona et al. (1960).
It was not until the late 1960s that the first fully digital SAR correlator was
developed. These ground based systems could not operate in real-time. Initially,
onboard optical recorders were used to collect the signal data from which a
small portion of the signal film was digitized and processed. These early digital
systems were limited in performance due to both the memory requirements and
the number of operations needed to perform fully focussed SAR processing.
Azimuth presummers were typically employed to reduce the data rate and
therefore the processing load on the correlator. The push for a real-time
onboard SAR correlator, particularly for military applications, led to the first
demonstration system in the early 1970s (Kirk, 1975). This system included a

Figure 1.14 Functional block diagram of an optical SAR correlator.

Figure 1.15 The onboard SAR processor built by MacDonald-Dettwiler and Assoc. for the CCRS airborne system (Bennett, 1980).

motion compensation computer to calculate the reference function corrections
needed to produce high quality imagery (especially in the spotlight mapping
mode). The first onboard digital SAR processor for non-military applications
is believed to be the MacDonald-Dettwiler and Associates (MDA) system built
for the Canadian Center for Remote Sensing (CCRS), which was installed in
1979. This system, shown in Fig. 1.15, is a one-look processor capable of real-time
processing of the presummed signal data (Bennett et al., 1980).

1.3.4 SAR Systems: Recent and Future

Just as in the early days of SAR, a majority of current work in high resolution
SAR systems is funded by the US Department of Defense (DoD), and therefore
information about these systems is not available for open publication. However,
there are a number of civilian SAR systems that were developed under the
sponsorship of NASA, beginning in the late 1960s and early 1970s. The first
system, a single polarization X-band SAR, built originally by the Environmental
Research Institute of Michigan (ERIM) for the DoD in 1964, was declassified
in the late 60s by reducing its range bandwidth to 30 MHz. This system, flown
on a C-46 aircraft, was upgraded by NASA in 1973 by adding a second frequency
at L-band and equipping the system with servoed dual-polarized antennas
(Rawson and Smith, 1974). The two receive chains (one per frequency) fed into
two 70 mm optical recorders which captured both the like- and cross-polarized
signals for each frequency. This ERIM SAR was used for a number of scientific
research applications, especially the imaging of arctic sea ice. The Jet Propulsion
Laboratory (JPL) also developed (under NASA sponsorship) an L-band SAR
system that evolved from some early rocket radar tests (see below). The JPL SAR
had been upgraded to a simultaneous quad-polarized (polarimetric) capability
in both L- and C-bands by the early 1980s. This system was used for a number
of scientific research applications, especially those relating to geologic mapping


and the study of geomorphic processes (Schaber et al., 1980). Although neither
of these original systems is in operation today, they have both been replaced
with modern systems of much higher performance. The parameters of these
current systems, along with those of the Canadian Centre for Remote Sensing
(CCRS) SAR, are given in Table 1.3.
Spaceborne SAR History

Considering that both ERIM and JPL conducted most of the early airborne
SAR studies for NASA, it was logical that NASA turned to these two
organizations to build the first (non-military) spaceborne SAR system. Contrary
to popular belief, the Seasat-A SAR was not the first operational spaceborne
system. In 1962, JPL conducted the first of four rocket experiments at the White
Sands, New Mexico, missile test range (Fig. 1.16). These rockets carried an
experimental L-band sounding radar that was being evaluated for the lunar
lander. At the conclusion of these experiments in 1966, this radar was transferred
to the NASA CV-990 aircraft and was eventually upgraded to the JPL airborne
SAR system. The sounder's cavity-backed dipole antenna was replaced with a
dual-polarized planar array and the original magnetron (built by Raytheon)
was upgraded to a TWT. This system, which was used for a number of
applications including the study of oceanic phenomena in the Gulf of California,
collected data that eventually led to the approval of the Seasat SAR. In the
period between the conclusion of the rocket experiments and the approval of
the Seasat mission in 1975, NASA initiated the Apollo Lunar Sounder
Experiment (ALSE). This experiment, conducted jointly by ERIM and JPL,
was flown aboard the Apollo 17 lunar orbiter in December, 1972. It consisted
of four major hardware subsystems (Porcello et al., 1974): (1) RF electronics
(CSAR); (2) HF antennas; (3) VHF antenna; and (4) Optical recorder (Fig. 1.17).
At the heart of the system is the coherent SAR (CSAR) transmitter/ receiver
subsystem which could operate at any of three radar frequencies (5, 15, and
150 MHz). The objectives of the experiment were threefold: to detect subsurface
geologic structures; to generate a continuous lunar profile; and to map the
lunar surface at radar wavelengths. The data was recorded on photographic
film using a 70 mm optical recorder. The two high frequency (HF) dipole
antennas were used for mapping the subsurface geologic features and the very
high frequency (VHF) Yagi antenna oriented 20° off local vertical was used
primarily for surface mapping and profiling (Fig. 1.18 ). The bulk of the signal
processing was carried out at ERIM using a modified version of their airborne
SAR coherent optical processor. Due to the large dynamic range of the data
(conservatively estimated at 45 dB), the image film was inadequate to observe
a number of subsurface features. At JPL, a small amount of the signal film was
scanned and processed digitally using a PDP-11 computer, while ERIM
constructed several holographic viewers to directly observe and manipulate the
image projection on a liquid crystal display.
The success of the lunar sounder experiment, coupled with the oceanographic
phenomena observed by the JPL L-band airborne SAR, led NASA in 1975 to

Figure 1.17 Optical recorder flown as part of the Apollo Lunar Sounder Experiment and later on SIR-A.

approve the inclusion of a SAR as part of the Seasat mission (Fig. 1.1). Despite
the 10 years of oceanographic observation with airborne SAR systems, the
proposed Seasat SAR created tremendous controversy within the scientific
community. The dissenting camp argued that the coherent integration time was
too long (~2.5 s), and would result in decorrelation of the signal due to
movement of the ocean surface. The issue was never resolved theoretically and
finally it was decided that the only possible means of resolution would be
actually to fly the SAR on Seasat. As it turned out, the Seasat SAR observed
a number of unique ocean features that significantly contributed to our
understanding of the global oceans (Fu and Holt, 1982). Although the system
(Table 1.2) was designed primarily to image the oceans with its steep 23°
incidence angle, Seasat data has found a wide variety of applications. The most
significant of these are in geology, polar ice, and land use mapping (Elachi et al.,
1982a). The success of Seasat, however, was limited in terms of the duration of
the data collection. A complete power failure just 100 days after its July 1978
launch, attributed to a short circuit in the slip rings that articulated the solar

Figure 1.18 The Apollo 17 Command Service Module (CSM) showing the Lunar Sounder configuration.

panels, resulted in a premature end to what promised to be a very important
mission. Nevertheless, in the more than a dozen years since Seasat, hundreds
of papers have been published using its data that have significantly contributed
to remote sensing science.
The early scientific results from Seasat quickly led to the approval by NASA
of the Shuttle Imaging Radar (SIR) series of flights (Elachi, 1982b; Elachi et al.,
1986). These systems, which used many of the Seasat designs, were also L-band,
HH, single channel SARs. The SIR-A was primarily for geologic and land
applications with a fixed look angle 45° off nadir, while SIR-B featured a
mechanically steerable antenna mount for a range of look angles from 15-60°.
The SIR-A system flew an optical recorder identical to the Apollo Lunar Sounder
Experiment recorder and all imagery was processed optically. The SIR-B was
a fully digital system with selectable quantization (3-6 bits per sample). This
design gave the investigator the option of a large dynamic range (6 bps) or a
wide swath (3 bps). The SIR-C instrument, currently under development for a
mid 1990s launch, is a quad-polarized L- and C-band SAR. It will be flown
with an X-band vertically polarized SAR developed jointly by Germany
and Italy. These systems will operate synchronously and are capable of
simultaneously recording nine polarizations (L- and C-bands HH, HV, VH,

TABLE 1.4  Key Parameters for the Shuttle Imaging Radar Missions

Mission                  SIR-A        SIR-B        SIR-C              X-SAR
Date                     1981         1984         1993, 1994         1993, 1994
Altitude (km)            259          225          215                215
Frequency Band (GHz)     L(1.28)      L(1.28)      L(1.28), C(5.3)    X(9.6)
Polarization             HH           HH           HH, HV, VH, VV     VV
Incidence Angle (deg)    50           15-60        15-60              15-60
Antenna Size (m x m)     9.4 x 2.2    10.7 x 2.2   12.1 x 2.8 (L)     12.1 x 0.4
                                                   12.1 x 0.8 (C)
Noise Equiv σ0 (dB)      -25          -35          -50 (L), -40 (C)   -26
Swath Width (km)         50           15-50        30-100             10-45
Az/Rng Resolution (m)    4.7/33       5.4/14.4     6.1/8.7            6.1/8.7

VV and X-band VV). The parameters of each of these systems are given in
Table 1.4.
Planetary Radars

For many years the surface of Venus remained hidden to planetary astronomers
due to the dense atmosphere surrounding the planet. In the late 1960s, the
NASA 64 m deep space tracking antenna, in conjunction with the 43 m Haystack
antenna in Massachusetts and 300 m Arecibo radar antenna in Puerto Rico,
produced the first detailed map of Venus using radar interferometry (Pettengill
et al., 1980). These images, along with the early scientific results from the 1967
Mariner 5 mission to Venus, led to the approval of the Pioneer mission (1978),
which carried a radar altimeter, and prompted the first design study in 1972
for a Venus Orbiting Imaging Radar (VOIR) system to generate a high resolution
map of the planet using SAR technology.
The VOIR went through many design phases before final approval by NASA.
These changes resulted from both a strong scientific contingent, which expressed
the need for high resolution maps to study the geologic history of the planet,
and the success of the Soviet Venera mapping missions, which demonstrated
the potential value of a high resolution planetary radar. In 1982, a modified
VOIR design was formally approved as the Venus Radar Mapper (VRM), after
which it was renamed Magellan (MGN). At first glance this system appears to
be a step backward in technology relative to the earlier Seasat and SIR systems,
but, considering the harsh space environment and the limited mass, power,
and downlink data rates, its performance is quite remarkable. The system
specifications in relation to the most recent Venera missions and the NASA
Pioneer radar altimeter are provided in Table 1.5. A number of novel concepts
were implemented in the Magellan system (Fig. 1.19a), such as burst-mode
imaging and block adaptive quantization (Johnson and Edgerton, 1985). The
primary radar mapping mission, 240 days in 1990-91, is designed to generate
a global map of Venus at approximately 150 m resolution. The signal processing
and image mosaicking are all performed digitally. One of the first Magellan

TABLE 1.5  Comparison of the Radar Missions to Venus

Mission                  Pioneer/USA    Venera 16/USSR        Magellan/USA
Launch Date              1978           1983                  1990
Frequency Band (GHz)     S(1.75)        S(3.75)               S(2.38)
Polarization             Linear         Linear                HH
Incidence Angle (deg)    0.5            7-17                  15-45
Antenna (m)              0.38 (dish)    6 x 1.4 (parabolic)   3.7 (dish)
Swath Width (km)         Variable       ~120                  20-25
Ra/Az Resolution (km)    23/70          1.0/1.0               0.12/0.12
Planet Coverage (%)      92             25                    95

images of Venus is shown in Fig. 1.19c. This image is overlaid on a Venera
image to illustrate the improvement in resolution in the MGN system. The
jagged edge results from the burst mode imaging process. An extended mission
of up to five years will be used to provide more detailed maps of the local
topography (using stereo and interferometric imaging), as well as information
on the planet's atmosphere.
A second planetary radar, currently under development by NASA / JPL for
a mid 1990s launch, is the Titan Radar Mapper. This instrument, which is part
of the Cassini mission to Saturn, is a multimode radar designed to measure the
surface characteristics of the moon Titan, which is covered by a dense optically
opaque atmosphere (Hunten et al., 1984). The system was designed for maximum
flexibility since there is a large uncertainty in the actual characteristics of the
surface as well as in the orbit determination. The Cassini orbit will actually
circle both Saturn and Titan.
The baseline radar instrument package consists of four modes as shown in
Table 1.6 (Elachi et al., 1991). The synthetic aperture radar (SAR) mode will
operate over limited periods (due to data rate constraints) at resolutions between
300 and 600 m. Additionally, three nadir pointing modes will be employed. The
radiometer mode (RAD), used to measure surface emissivity, employs a 12-bit
quantizer to achieve a wide dynamic range. The altimeter (ALT) mode will be
used for surface ranging measurements at a vertical resolution of 30 m. The
scatterometer (SCAT) mode is for surface backscatter measurements.
Radar data will be collected in 35 close fly-bys of Titan over the four year
nominal mission, mapping 30% of the moon's surface. Due to uncertainties in
the elevation of the surface and the orbit ephemeris, the instrument will be
operated in a burst mode without attempting to interleave the transmit and
receive pulses. The data will be recorded on an onboard digital recorder and
downlinked for digital processing and distribution to the science community.

Non-USA SAR Sys tems

In recent years, a number of civilian government agencies around the world


have embarked on both airborne and spaceborne SAR development programs.
The basic parameters of these sensors are provided in Table 1.7 for the

Figure 1.19 The Magellan system: (a) Spacecraft configuration; (b) End-to-end data path.


Figure 1.19 (continued) The Magellan system: (c) Magellan image (right) overlaid on lower resolution (left) Venera image.

TABLE 1.6  System Parameters for Four Operational Modes of Cassini Titan Radar Mapper

Mode                      SAR           RAD               ALT        SCAT
Frequency Band (GHz)      Ku(13.8)      Ku(13.8)          Ku(13.8)   Ku(13.8)
Polarization              Linear        Linear            Linear     Linear
Incidence Angle (deg)     20-40         0                 0          0
Az/Vert Resolution (m)    300-600 (A)   30000-60000 (A)   30 (V)     7500 (A)
Range Bandwidth (MHz)     0.42, 0.85    100               4.3        0.1
Dynamic Range (dB)        9             92                21         21



operational airborne systems and Table 1.2 for the near-future spaceborne
systems currently under development. Perhaps most notable is the number of
SAR systems that will be in operation in the 1990s. The strong commitment
by the European Space Agency (ESA), as well as the National Space
Development Agency of Japan (NASDA) and the Canadian Space Agency,
bodes well for advancement in the scientific use of SAR data. Furthermore, the
increasing cooperation between agencies, as evidenced by the American,
German, and Italian cooperation on the SIR-C/X-SAR instrument package,
the increasing availability of the Soviet SAR data, and the planned worldwide
participation in the Earth Observing System (EOS) program, should lead to
rapid advancements in both SAR sensor and processor technology.

1.4 APPLICATIONS OF SAR DATA

The application of SAR data to geophysical measurements in a number of
scientific disciplines is well documented (Elachi, 1988; Colwell, 1983b). A key
element in the design and implementation of any SAR system is a clear
understanding of the planned scientific utilization of the instrument and the
primary science parameters affecting the radar design. In this section we will
describe several key remote sensing applications of SAR imagery and their
dependence on the radar parameters.
The first stage in the design of any scientific instrument is to establish a
set of scientific goals that can then be translated into quantitative science
requirements and ultimately system specifications. A design flowchart illustrating
the flow of requirements from science objectives to experiment, sensor, and
platform design specifications is given in Fig. 1.20. A more realistic scenario for
an instrument such as SAR, where the technology is still evolving, is to initially

specify the system and then define the experiments that are feasible within its
performance constraints. The final design is the result of an iterative process
in which system trade-offs are made to optimize the performance for a specific
set of applications. A simple example of these trade-offs for a geologic mapping
application would be to consider wide swath as higher priority than system
dynamic range or radiometric calibration accuracy. Given that the system is
constrained by the downlink data rate, the quantization (bits per sample) could
be reduced to downlink more samples per interpulse period and thus obtain
the wider swath.

Figure 1.20 Mission design flowchart illustrating flow from science requirements to sensor and platform specifications.

In the above example, the system trade-offs were relatively simple and the
science impact, in terms of swath width or calibration accuracy, is generally
well understood. However, trade-offs among other parameters, such as the
integrated sidelobe ratio (ISLR) or the quadratic phase error, are not so easily
interpreted in terms of their impact in limiting science applications. Similarly,
geophysical measurements, such as ice type classification or soil moisture
content, are difficult to translate into system specifications. This section is
intended to present some key applications for SAR data in conjunction with a
brief discussion of the scattering mechanisms, as an aid for the engineer to gain
some insight into the dependency of various geophysical measurements on the
radar system design.

1.4.1 Characteristics of SAR Data

The design of a SAR for remote sensing begins with scientific goals which are
used to define a quantitative set of scientific requirements. Generally these
requirements can be divided into those affecting the radar subsystem, the
processor subsystem, or the platform and downlink subsystems (including
mission design). A list of the key parameters is given in Table 1.8.
To translate scientific requirements into system specifications, some
assumptions must be made about the target characteristics. This necessitates some
a priori understanding of the interaction between the transmitted wave and the
target. Some of the parameters that characterize the received signal depend
weakly on the target characteristics, such as:

Doppler centroid or azimuth spectral characteristics
slant range or round trip propagation time

Other parameters describe the received signal, such as:


amplitude (absolute value, statistics)
relative phase (cross-channel statistics)
polarization (orientation, ellipticity)
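The swath-versus-quantization trade-off discussed earlier in this section lends itself to a quick numeric sketch. Every number below (downlink rate, PRF, sampling rate, incidence angle) is an illustrative assumption, not a parameter of SIR-B or any other system; the point is only the linear trade between bits per sample and recoverable swath.

```python
import math

# Sketch of the downlink trade-off: with a fixed downlink rate, fewer bits
# per sample allow more range samples per interpulse period and hence a
# wider ground swath. All values are illustrative assumptions.
def max_swath_km(downlink_bps, prf_hz, fs_hz, bits_per_sample, incidence_deg):
    """Ground swath supportable by the downlink, for a toy SAR model."""
    samples_per_pulse = downlink_bps / (prf_hz * bits_per_sample)
    c = 3.0e8                                        # propagation speed (m/s)
    slant_window_m = samples_per_pulse * c / (2.0 * fs_hz)
    # Project the slant-range window onto the ground at the incidence angle.
    return slant_window_m / math.sin(math.radians(incidence_deg)) / 1000.0

wide = max_swath_km(30e6, 1600, 19e6, 3, 45)   # 3 bits per sample
deep = max_swath_km(30e6, 1600, 19e6, 6, 45)   # 6 bits per sample
print(round(wide / deep, 2))                   # 2.0: halving the bits doubles the swath
```

The ratio is exactly 2 because the supportable swath in this model is inversely proportional to the bits per sample, which is the trade the text describes.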


TABLE 1.8  List of Key SAR System Design and Performance Parameters

Sensor Parameters
    Radar Frequency or Wavelength
    Antenna Polarization - Ellipticity and Orientation
    Range Bandwidth
    Signal to Thermal Noise Ratio
    Dynamic Range
    Swath Width
    Look Angle

Image Parameters
    Range and Azimuth Resolution
    Peak and Integrated Sidelobe Ratios
    Effective Number of Looks (Speckle Noise)
    Image Presentation (Radiometric and Geometric Format)
    Calibration Accuracy (Radiometric, Geometric and Polarimetric)

Mission/Platform Parameters
    Altitude/Orbit Coefficients
    Flight Date/Time
    Data Link
    Platform Stability

These requirements are then translated into system specifications such as:

noise temperature or noise figure
antenna gain
amplitude/phase versus frequency/temperature performance
transmitter power

These specifications, which directly reflect the surface scattering mechanisms,
can be used to predict the response for each type of surface. It is the dependency
of these parameters on the surface characteristics that must be understood to
develop models for extraction of the geophysical information from the SAR data.

1.4.2 Surface Interaction of the Electromagnetic Wave

The characteristics of the reflected wave (amplitude, phase, polarization) are
primarily dependent on three surface parameters: (1) Dielectric constant
(permittivity); (2) Roughness (rms height); and (3) Local slope. To relate these
surface characteristics to the signal characteristics, some type of scattering model
is required. A detailed analysis of the various models is beyond the scope of
this text and can be found elsewhere (Ulaby et al., 1982, 1986). Instead, it is
our intention to provide an overview of the scattering mechanisms as a
foundation from which we can discuss various applications of the SAR data.
If we assume for simplicity that the wave is propagating in a homogeneous,
isotropic, non-magnetic medium, then from Maxwell's equations we can write
an expression for the complex electric field vector as

    E(z, t) = A exp[j(k'z - ωt + φ)]    (1.4.1)

where A is the amplitude vector. The angular frequency, ω, is given by

    ω = 2πf0 = 2πc/λ    (1.4.2)

where f0 is the carrier frequency and λ is the wavelength. The wave propagates
in some direction z, with k' related to the wavenumber k by

    k' = √εr k = 2π√εr / λ    (1.4.3)

Here εr = ε/ε0 is the permittivity of the medium relative to that of free space (ε0).
The relative permeability is assumed to be unity, which is a good assumption
at microwave frequencies.
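Eqns. (1.4.2) and (1.4.3) are easy to evaluate numerically. The fragment below uses an L-band carrier of 1.28 GHz (the frequency appearing in Table 1.4) and an illustrative εr as example inputs.

```python
import math

# Numeric companion to Eqns. (1.4.2) and (1.4.3): carrier frequency,
# free-space wavelength, and the wavenumber inside a dielectric medium.
c = 3.0e8                       # free-space propagation speed (m/s)
f0 = 1.28e9                     # L-band carrier frequency (Hz)

lam = c / f0                    # wavelength (m), from Eqn. (1.4.2)
k = 2.0 * math.pi / lam         # free-space wavenumber (rad/m)
eps_r = 2.5                     # relative permittivity (illustrative value)
k_prime = math.sqrt(eps_r) * k  # wavenumber in the medium, Eqn. (1.4.3)

print(round(lam * 100, 1))      # 23.4 (cm), the familiar L-band wavelength
```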
The polarization of the electric field refers to the direction of the amplitude
vector, A, at some instant in time. For a linearly polarized wave, the direction
of A is fixed (i.e., independent of time) relative to the propagation direction as
shown in Fig. 1.21a. For an elliptically polarized wave, the direction of A is a
function of time and effectively rotates about the axis of propagation. The easiest
way to conceptualize this is to consider the E field vector as consisting of signal
components oriented along the x axis and the y axis as shown in Fig. 1.21b.
Each component has the same frequency, but in general a different amplitude
and phase. The vector sum of these two field vectors

    E(z, t) = Ax exp[j(k'z - ωt + φ1)] + Ay exp[j(k'z - ωt + φ2)]    (1.4.4)

is an elliptically polarized wave. If |φ1 - φ2| = π/2 and Ax = Ay the wave is
said to be circularly polarized.
When an EM wave is emitted from a source, such as an antenna, the energy
is radiated over a range of angles. At any given time t0, the phase of E, that is,

    φ0 = k'z - ωt0    (1.4.5)

is constant over a surface. If this surface is a plane of constant amplitude, then
the wave is referred to as a uniform plane wave. In a vacuum, this plane
propagates at a phase velocity

    vp = 1/√(μ0 ε0) = c    (1.4.6)
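The circular-polarization condition stated for Eqn. (1.4.4) can be checked numerically: with |φ1 - φ2| = π/2 and Ax = Ay, the instantaneous (real) field vector traces a circle, so its magnitude is constant in time. A minimal sketch:

```python
import math

# Check the circular-polarization condition of Eqn. (1.4.4): with
# |phi1 - phi2| = pi/2 and Ax = Ay, the instantaneous field vector
# (Ex, Ey) traces a circle, i.e., its magnitude never changes.
Ax, Ay = 1.0, 1.0
phi1, phi2 = 0.0, math.pi / 2.0

def field(t, kz=0.0, omega=2.0 * math.pi):
    """Real parts of the two components of Eqn. (1.4.4) at position kz."""
    ex = Ax * math.cos(kz - omega * t + phi1)
    ey = Ay * math.cos(kz - omega * t + phi2)
    return ex, ey

mags = [math.hypot(*field(t / 100.0)) for t in range(100)]
print(round(max(mags) - min(mags), 12))   # 0.0: constant magnitude
```

With unequal amplitudes or a different phase offset the same loop traces an ellipse instead, matching the general elliptical case of Eqn. (1.4.4).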

where c is the speed of light and ε0, μ0 are the permittivity and permeability of
free space. Generally, the propagation speed through the atmosphere of an
EM wave in the 1-10 GHz range can be well approximated by c. At frequencies
above 10 GHz molecular absorption can significantly attenuate the signal, while
for frequencies below 1 GHz the ionosphere is dispersive, resulting in rotation
of the polarized wave, attenuation of the signal amplitude, and a reduced
propagation velocity. These effects are discussed in more detail in Chapter 7.

Figure 1.21 Electric field vector propagation: (a) Linearly polarized wave, and (b) Elliptically polarized wave (after Purcell, 1981).

The interaction of the radiated EM wave with the surface is represented
pictorially in Fig. 1.22. The interaction of the wave and the surface is generally
referred to as scattering and is classified into either surface scattering or volume
scattering. Surface scattering is defined as scattering from the interface between
two dissimilar media, such as the atmosphere and the earth's surface, while
volume scattering results from particles within a non-homogeneous medium.

Figure 1.22 Interaction of radiated electromagnetic wave with a rough surface.

1.4.3 Surface Scattering: Models and Applications

Given a homogeneous medium (i.e., no volume scattering), characterized by a
relative dielectric constant εr, if the surface roughness (rms height relative to a
smooth surface) is very small as compared to the radar wavelength, the scattering
mechanism is specular. In specular scattering the incident wave's reflection and
transmission through the surface are governed by Snell's law. Thus, given a
wave incident at an angle η, a portion of the energy will be reflected at an angle η
and a portion refracted at an angle η', where

    sin η = √εr sin η'    (1.4.7)

Subsurface Mapping

An example of this type of scattering was observed in the Libyan Desert region
of southwestern Egypt by the Shuttle Imaging Radar (SIR-A) instrument.
The climate in this region is hyperarid, resulting in a surface totally devoid
of vegetation. The subsurface composition is a homogeneous sand layer of
1-2 meters depth under which is a second layer of bedrock (Fig. 1.23). Scattering
from the subsurface bedrock layer results in the detection of image features
not evident in visible wavelength imagery. The dielectric constant of the sand
layer is estimated at εr = 2.5, while that of the bedrock layer is εr = 8 (Elachi
et al., 1984). Using Eqn. (1.4.7), the refraction angle is estimated to be η' ≈ 29°.

Figure 1.23 Scattering mechanism for imaging subsurface drainage channels in Libyan desert (sand layer, εr = 2.5, ~2 m deep, over bedrock, εr = 8.0, with stone drainage channels).

The resultant scattering from the subsurface produces a
relatively strong signal, providing a detailed map of ancient natural drainage
channels buried by thousands of centuries of shifting sand. This is illustrated
in Fig. 1.24 by the SIR-A image in comparison with a Landsat scene of the
same area. The Landsat visible wavelength detectors can only measure the

surface reflectance which is nearly featureless, while the SAR 's subsurface
imaging capability illustrates a detailed map of the bedrock layer. Such radar
sounding techniques are invaluable for scientists studying the geologic history
of the region, and may also prove useful for locating sources of water deep
below the surface.
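The ~29° refraction estimate for the Libyan Desert case can be reproduced directly from Eqn. (1.4.7). The 50° incidence angle used below is an assumption (it matches the SIR-A incidence angle listed in Table 1.4); the text quotes only the resulting refraction angle.

```python
import math

# Refraction into the sand layer via Snell's law, Eqn. (1.4.7):
# sin(eta) = sqrt(eps_r) * sin(eta'). The 50 deg incidence angle is an
# assumption (SIR-A, Table 1.4); the text states only the ~29 deg result.
eta = math.radians(50.0)   # incidence angle at the air/sand interface
eps_r = 2.5                # relative dielectric constant of the dry sand

eta_prime = math.asin(math.sin(eta) / math.sqrt(eps_r))
print(round(math.degrees(eta_prime)))   # 29
```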
Bragg Scattering

If we now extend our specular scattering model to slightly rough surfaces,
assuming a homogeneous medium (i.e., no volume scattering), with an rms
height variation less than λ/8, then we can describe the scattering using a Bragg
model (Barrick and Peake, 1968). Given the spatial spectrum of the surface (as
derived from the two-dimensional transform of the height profile), the Bragg
model states that the dominant backscattered energy will arise from the surface
spectral components that resonate with the incident wave. Thus, for surface
variation patterns at wavelengths given by

    Λ = nλ/(2 sin η),    n = 1, 2, 3, ...    (1.4.8)

a strong backscattered return will result. The dominant return will be for the
wavelength where n = 1. At steep incidence angles, the scattering is generally
a combination of Bragg and specular scattering. Even for a Bragg surface, the
return can be dominated by specular scattering, which is strongly dependent
on the distribution and extent of the local slope (Winebrenner and Hasselmann,
1988). A natural surface can be approximated by a series of small planar facets,
each tangential to the actual surface, upon which the small-scale roughness is
superimposed. The incident wave therefore has a scatter component that is due
to the local slope (i.e., from facet scattering), as well as a point scatterer
component dependent on the roughness (i.e., from Bragg or resonant reflection).
The resultant backscatter curve as a function of local incidence angle is a
combination of these two mechanisms as shown in Fig. 1.25.
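Eqn. (1.4.8) evaluates in a few lines. The radar wavelength and incidence angle below are illustrative (roughly L-band at a Seasat-like 23° incidence), not values taken from the text:

```python
import math

# First few Bragg-resonant surface wavelengths from Eqn. (1.4.8):
# Lambda = n * lambda / (2 sin eta). Inputs are illustrative: L-band
# wavelength at a Seasat-like incidence angle.
lam = 0.235                    # radar wavelength (m)
eta = math.radians(23.0)       # incidence angle

bragg = [n * lam / (2.0 * math.sin(eta)) for n in (1, 2, 3)]
print(round(bragg[0], 2))      # dominant (n = 1) resonance, ~0.3 m
```

At this geometry the dominant resonant surface wavelength is on the order of 30 cm, i.e., the short gravity waves the text identifies as the Bragg scatterers on the ocean.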

Figure 1.24 Images of desert region between Iraq and Saudi Arabia: (a) Landsat; (b) SIR-A. SIR-A detail of drainage channels is from subsurface penetration (Elachi et al., 1982b).

Oceanography. Bragg models are most frequently used for describing scattering
from the sea surface. Due to the large dielectric constant of water, the scattering
mechanism is exclusively surface scattering. The resonance phenomenon on
which the Bragg model is based is well suited to the periodic structure of the
ocean waves. Ocean waves are detectable as periodic bands on SAR imagery,
due to the spatial variation of the short waves within the longer waves, as well
as to the orbital motion of the long waves themselves. However, due to the rms
height variation limitation of the Bragg model (i.e., < J../ 8), only the small
capillary waves or short gravity waves exhibit Bragg resonance. The analysis
of SAR ocean wave imagery is typically performed in the spatial transform
domain where the Bragg resonance can be observed directly. Figure 1.26 shows
a set of ocean wave spectra for an area off the coast of Chile. The SIR-B
spectrum is shown after removal of the system transfer function and smoothing
(Monaldo, 1985). From the wave spectra, parameters such as the direction of

Figure 1.25 Backscatter curve for natural surfaces illustrating the two scattering mechanisms: Facet scattering for steep incidence angles; Bragg scattering for shallow incidence angles.

the waves (with a 180° ambiguity), the wavelength (or wave number), and the
wave height can be directly measured. The Bragg resonance is strongest for
waves traveling in the radar look direction. As the azimuth component of the
wave motion increases, the backscattered energy is attenuated and nonlinear
corrections need to be applied for an accurate estimate of the geophysical
parameters (Alpers et al., 1981).
Information derived from the directional wave energy spectra can be directly
used for updating and validating ocean wave forecast models. These models
are key elements in predicting global climatology. The measurement of ocean
characteristics is a primary objective of future space orbiting SAR systems such
as the E-ERS-1 and SIR-C, both of which have implemented special modes for
ocean wave imaging. The SIR-C system will feature an onboard processor
experiment, developed by the Johns Hopkins University Applied Physics
Laboratory, to directly generate ocean wave spectra for near real-time analysis
of the ocean wave properties (MacArthur and Oden, 1987). The E-ERS-1 system
also features a special wave mode of operation in which the SAR acquires only
small patches of data (5 km x 5 km) spaced at regular intervals (250 km)
across the oceans. These patches will be ground-processed to produce wave
spectra images for distribution to the science community (Cordey et al., 1988).

Geology. The Bragg model is commonly used to characterize scattering from
slightly rough terrain that is sparsely vegetated. This is best demonstrated by
the geologic application of SAR data to classification of rock type/age based
on the surface roughness. Two SIR-B images covering the southeastern region
of Hawaii, acquired at η = 28° and 48°, are shown in Fig. 1.27. The Kilauea


volcanic crater is clearly visible in the right center portion of the frame. Due
to acidification from volcanic fumes, there is very little vegetation in this region.
Two main types of lava flows are easily distinguished. The aa flows, which are
rough, appear brightest in the scene, while the smoother pahoehoe flows
comprise the darker regions. Additionally, as a result of the smoothing effect
from weathering, the change in radar brightness as a function of incidence angle
can be used to identify the relative age of the two lava types. This is especially
prominent in the Kau desert region where the contrast between the lava types
is more distinct at η = 48° than at η = 28°.

1.4.4 Volume Scattering: Models and Applications

Target areas that can be characterized as Bragg scatterers are essentially special
examples of the general scattering problem, which is significantly more complex.
Most natural surfaces are generally of an inhomogeneous composition, and at
some wavelengths or under some conditions they are penetrated by the EM
wave. Thus, scattering from natural terrain is generally a combination of surface
scattering and volume scattering. Volume scattering results from dielectric
discontinuities within the media. Assuming the spatial locations and orientations
of these discontinuities are random, the incident wave scattering will be
omnidirectional. Thus, the portion of the incident wave scattered back toward
the radar will depend on the relative dielectric constant between the two types
of media in the inhomogeneous layer, as well as on the geometric shape, density,
and orientation of the imbedded inhomogeneities. Volume scattering is modeled
using either EM wave theory or principles of radiative transfer (Fung,
1982). The wave approach uses Maxwell's equations, and some restrictive
approximations on the type of scattering, to derive an expression for the scattered
signal. The radiative transfer approach is based on the average power or intensity,
and generally ignores diffraction effects. A detailed treatment of the various
models and their applications is given in Ulaby et al. ( 1986).
A useful quantity for characterizing the scattering within a medium is the
penetration depth. Given a wave incident on a surface, the depth at which the
refracted portion of the wave is attenuated by 1/e of its value at the layer
boundary is given by (Ulaby, 1982)

    δp = λ√ε' / (2π ε'')    (1.4.9)

where the relative dielectric constant, a complex number εr = ε' + jε'', must
satisfy ε''/ε' < 0.1 for Eqn. (1.4.9) to be valid. In calculating the penetration
depth from Eqn. (1.4.9) the scattering within the medium is assumed to be
negligible.
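A small sketch of the penetration depth calculation, using the standard low-loss form δp = λ√ε′/(2πε″) consistent with Eqn. (1.4.9). The dielectric values are assumed, illustrative numbers (on the order of a very dry soil at L-band), not measured data.

```python
import math

# Penetration depth in the low-loss form of Eqn. (1.4.9):
# delta_p = lambda * sqrt(eps') / (2 * pi * eps''), valid for eps''/eps' < 0.1.
# The dielectric values used here are illustrative assumptions.
def penetration_depth(lam, eps_real, eps_imag):
    if eps_imag / eps_real >= 0.1:
        raise ValueError("Eqn. (1.4.9) assumes a low-loss medium (eps''/eps' < 0.1)")
    return lam * math.sqrt(eps_real) / (2.0 * math.pi * eps_imag)

lam = 0.23                                           # L-band wavelength (m)
print(round(penetration_depth(lam, 3.0, 0.05), 2))   # ~1.27 m for a very dry soil
```

Note how the depth scales linearly with wavelength, which is the quantitative basis for the canopy-penetration discussion that follows.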
Vegetation. In volume scattering, just as in surface scattering, the wavelength
plays an important role. Similar to the resonance phenomenon in Bragg
scattering, the size and distribution of the inhomogeneities, relative to the


wavelength, determine to a large part the fraction of backscattered energy. If
we consider, for example, a forest canopy, the size and distribution of the
scatterers can range from a small-scale very dense distribution, such as the
needles of a pine tree, to a sparse configuration of branches and trunks, as in
a deciduous stand during the winter season (Fig. 1.28). Additionally, the moisture
content of each of these component parts determines the fraction of the wave
energy that is scattered from the tree limb versus the energy attenuated within
the limb. Several models have been recently developed to describe the scattering
in vegetation canopies (Richards et al., 1987; Durden et al., 1989; Ulaby et al.,
1990).
Although all scattering models make some approximations about the EM
wave and surface interaction, certain relationships between the radar wavelength
and the surface properties (such as effective canopy density) can be predicted.
Generally, the lower the radar frequency (longer wavelength) the greater the
penetration of the canopy, as indicated by Eqn. (1.4.9). For short wavelengths,
dense canopies, and grazing incidence angles, the scattering is typically
dominated by surface scatter from the top of the canopy. For long wavelengths,
sparse canopies, and steep incidence angles the scattering may be predominantly
from the ground, again resulting in surface scattering characteristics. In between
these extremes, a combination of surface and volume scattering from the canopy

Figure 1.28 Model response of forest canopy to various wavelengths based on number and distribution of scatterers (Carver et al., 1987).

is observed. This type of scattering mechanism is demonstrated by the


NASA/JPL trifrequency radar data (see Table 1.3). The wavelength dependence
can be observed in Fig. 1.29 by noting the relative change in scene brightness
among the P-band (λ = 65 cm), L-band (λ = 23 cm), and C-band (λ = 5 cm)
images. The three scenes of a farm region near Thetford, England, were acquired
simultaneously (i.e., on a single pass) in July, 1989.
In addition to the frequency dependent variation in the backscatter coefficient
(σ0), other scattering characteristics of the forest canopy can be measured using
a coherent, multipolarization capability such as that of the JPL system. Its
capability to simultaneously image a target with both horizontally (H) and
vertically (V) oriented electric field vectors, and to record both the like- and
cross-polarized returns, allows synthesis of the target's polarization signature
(Zebker et al., 1986). A key parameter in a polarimetric SAR such as the JPL
system is the relative phase between the two like-polarized returns. For
single-bounce scattering, such as in the surface scattering from the canopy top
or the soil, there is zero relative phase shift between the HH and VV
like-polarized returns (i.e., the phase difference of the transmit plus receive
like-polarized channels is constant). If the dominant scattering mechanism is
two-bounce (e.g., ground to the trunk to radar), the relative phase shift is
constant, but 180° out of phase with the single-bounce returns. However, for
volume scattering within the canopy, the relative HH to VV phase shift is
random due to the multiple scattering of the EM wave. Given this characteristic
we can determine the scattering mechanism by analyzing the relative phase
term (van Zyl, 1989). As we vary the frequency, the scattering mechanism
changes, since the EM wave canopy penetration is frequency dependent. This
is illustrated in Fig. 1.30 for a scene near Freiburg, Germany. As might be
expected, the short wavelength C-band image is dominated by surface scattering
from the top of the canopy, resulting in a zero relative HH to VV phase shift,
which is classified as single-bounce scatter. At the longer L-band wavelength,
the volume scattering dominates, while the longest wavelength, at P-band,
penetrates the canopy, giving rise to significant two-bounce scatter.
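The phase-based decision rule described above can be sketched numerically. The function below is a simplified illustration of the idea behind van Zyl (1989), not the published classifier; the circular-variance and phase thresholds, as well as the sample phase distributions, are hypothetical.

```python
import numpy as np

def classify_scattering(phase_hh_vv, var_threshold=0.5, angle_threshold=np.pi / 2):
    """Classify the dominant mechanism from a set of HH-VV phase samples (radians).

    Illustrative thresholds only: near-zero mean phase suggests single-bounce,
    near-180 deg suggests double-bounce, and a nearly random phase suggests
    volume scattering within the canopy.
    """
    z = np.exp(1j * np.asarray(phase_hh_vv))
    mean_vector = z.mean()
    circular_variance = 1.0 - np.abs(mean_vector)  # ~0 if coherent, ~1 if random
    if circular_variance > var_threshold:
        return "volume"          # random HH-VV phase: multiple scattering
    mean_phase = np.angle(mean_vector)
    if np.abs(mean_phase) < angle_threshold:
        return "single-bounce"   # ~0 deg relative phase (surface scatter)
    return "double-bounce"       # ~180 deg relative phase (e.g., ground-trunk)

rng = np.random.default_rng(0)
print(classify_scattering(rng.normal(0.0, 0.2, 1000)))          # surface-like
print(classify_scattering(np.pi + rng.normal(0.0, 0.2, 1000)))  # ground-trunk-like
print(classify_scattering(rng.uniform(-np.pi, np.pi, 1000)))    # canopy volume
```

The circular variance is used rather than the ordinary variance because phase wraps at ±180°; a coherent return clustered about either 0° or 180° gives a long mean vector, while volume scattering scatters the samples around the unit circle.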
If we measure the change in the relative phase statistics across the three
frequencies, we can derive a very sensitive measure of the canopy density. Thus,
even in the absence of a comprehensive model to characterize the absolute
backscatter from the canopy, relative changes in certain canopy parameters can
be detected using a multichannel SAR system under the right conditions. If the
system is calibrated, a multifrequency, multipolarization SAR, such as SIR-C,
is capable of monitoring environmental changes, including deforestation. This
is becoming an especially important application for free-flying orbital SARs as
the effects of acid rain on our global ecosystem become more severe.


APPLICATIONS OF SAR DATA

Polar Ice
A second example of volume scattering is in the imaging of polar sea ice. Polar
ice has imbedded in it a mixture of salt, brine pockets, and air bubbles. It is a
characteristic inhomogeneous medium with a relatively low dielectric constant
of about ε_r = 3. (Some ice exhibits an ε_r lower than that of sand in the hyperarid
Saharan desert.) Depending on ice type, which is usually correlated with age,
the characteristic scattering can change dramatically. Generally, at steep
incidence angles surface scattering will dominate the return signal, while at more
shallow incidence angles volume scattering effects become prominent, depending
on the radar wavelength and the ice type. These scattering mechanisms are
illustrated in Fig. 1.31 (Carver et al., 1987). For open water the mechanism is
exclusively surface scattering with a large specular scatter component depending
on the surface roughness (i.e., on wind speed). The first year ice is also
predominantly surface scattering, due to the large ε_r resulting from the high
salinity content. However, the surface scatter is more diffuse as a result of the
ice ridges and rubble fields. The multiyear ice exhibits both volume and surface
scattering due to the low dielectric constant resulting from its characteristic low
salinity.

Figure 1.31 Scattering mechanisms for various ice types: multiyear, first year and open water
(courtesy of W. Weeks).
The relationship between penetration depth, δ_p, and the radar frequency for
each of these ice type (age) categories is shown in Fig. 1.32. As expected from
Eqn. (1.4.9), the penetration depth is inversely proportional to radar frequency.
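The inverse dependence on frequency can be sketched with the familiar low-loss approximation to the penetration depth, δ_p ≈ λ√ε′/(2πε″), valid when ε″/ε′ ≪ 1. The permittivity values below are illustrative, not measured sea-ice data.

```python
import math

def penetration_depth(wavelength_m, eps_real, eps_imag):
    """Low-loss penetration depth: delta_p = lambda * sqrt(eps') / (2*pi*eps'').

    Assumes eps''/eps' << 1; inputs are wavelength in meters and the real and
    imaginary parts of the relative dielectric constant.
    """
    return wavelength_m * math.sqrt(eps_real) / (2.0 * math.pi * eps_imag)

# Doubling the frequency (halving the wavelength) halves the penetration depth.
d_l = penetration_depth(0.23, 3.0, 0.1)   # L-band, eps = 3 - j0.1 (illustrative)
d_c = penetration_depth(0.056, 3.0, 0.1)  # C-band, same illustrative loss
print(round(d_l, 3), "m at L-band;", round(d_c, 3), "m at C-band")
```

Note that the same expression shows why a lower ε″ (low-salinity multiyear ice) deepens the penetration even when ε′ is also somewhat lower.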

Figure 1.32 Penetration depth in pure ice, first-year sea ice and multiyear sea ice (Ulaby et al., 1982).

However, despite the fact that the real dielectric component ε′ of the multiyear
ice is smaller than that of the first year ice, the imaginary component ε″ typically
offsets this factor, resulting in a deeper penetration depth for multiyear ice. The
value of ε″ is dependent on a number of factors, such as the ice density,
temperature, and salinity. Thus, depending on the environmental conditions
from the point of formation of the ice until it is observed, ε″ can assume a wide
range of values. Since ε″ decreases with decreasing temperature, the penetration
depth could vary widely with a diurnal cycle period during the summer
months.
Figure 1.33 illustrates dramatically the wavelength dependence of scattering
from multiyear ice. This P-, L- and C-band total power (i.e., all polarizations)
three-frequency image set was acquired by the NASA/JPL airborne SAR in
March, 1988, over the Beaufort Sea. In the C-band image, the bright regions
correspond to multiyear floes, the darker regions to first year floes. There is no
open water in this scene. As the wavelength is increased, the distinction between
the multiyear and first year ice diminishes and the ice ridges are predominantly
highlighted. This is a result of the increased penetration in the multiyear ice at
longer wavelengths attenuating the backscatter, coupled with a highlighting of
the ridges at longer wavelengths from surface scattering. (The ridge size
approximates the wavelength at P-band.) Scattering from sea ice is another
example where the statistics of the relative phase across polarizations can be
used to produce a more detailed description of the properties of the ice
(Nghiem et al., 1990).
The use of SAR for monitoring the characteristics of sea ice in the polar
region has widespread scientific and commercial application. The environmental
impact of CO₂-induced atmospheric warming could be most severe in the arctic
region, causing wider swings in the freezing and thawing mechanisms that
establish the ice extent, ice concentration, and the physical characteristics of
the ice formations. Changes in the extent of the polar ice cap are correlated
with climatology, since the growth of sea ice is a primary mechanism for removal
of CO₂ from the atmosphere. In addition to the scientific utilization of sea ice
imagery, there are a number of commercial applications of the SAR data.
Airborne SAR imagery has been used operationally for monitoring the
movement of sea ice in the polar region. Ice kinematic maps are useful for
fishing and shipping industries, which require knowledge of the relative
movement and position of the ice for navigation. Additionally, monitoring both
the size and velocity of the multiyear floes is important for the oil industry in
establishing the location of temporary drilling rigs. The demand for this type
of data is sufficiently large that currently a commercial organization, Intera, is
flying an airborne SAR in the arctic region to provide ice floe and ice extent
maps to a number of corporations and government agencies (Mercer, 1989).
Soil Moisture
Another key application of SAR data is in measuring the
moisture content of the soil. As might be expected, the volumetric content of
water in the soil is directly related to the dielectric constant, as shown in
Fig. 1.34a (Ulaby et al., 1982). As water is added to the soil, the real part of
the relative dielectric constant increases slowly as most of the water molecules
bind to the soil. However, as the fractional moisture content increases past
15%, the number of free water molecules increases rapidly, allowing molecular
alignment similar to that of free water. For saturated soil, ε′ approaches that
of liquid water (ε′ = 80 at λ < 50 cm) as shown in Fig. 1.34b. Similarly, the
imaginary part increases with increasing moisture fraction, although at a slower
rate. The net effect (at a given wavelength) is that the penetration depth decreases
as the soil moisture increases. The reduction in signal penetration with higher
free water content effectively increases the backscattered energy. Thus, the signal
received by the radar results predominantly from surface scattering of the
non-refracted portion of the incident wave. The backscatter coefficient is

dependent on both the surface roughness and the soil moisture content.
Measurements for smooth and rough surfaces at C-band are given in Fig. 1.35.
As an example of the radar sensitivity to soil moisture at L-band, a Seasat image
of an agricultural region in Iowa is shown in Fig. 1.36. The bright area is the
region where recent rainfall has increased the moisture content of the soil.

Figure 1.34 Dependence of complex dielectric constant on: (a) soil moisture at L-band; and
(b) radar wavelength (Ulaby et al., 1982).

An accurate estimate of the water content in the soil is a critical parameter
for modeling the global hydrologic cycle. These models in turn are used to

Figure 1.35 Dependence of backscatter coefficient on incidence angle and soil moisture (i.e.,
volumetric water content g/cm³) for C-band (λ = 7 cm), HH polarization at: (a) σ_rms = 1.1 cm; and
(b) σ_rms = 4.1 cm (Ulaby et al., 1986).
develop an understanding of the energy balance in the earth's climate system.


The impact of certain environmental changes, such as the greenhouse effect, on
both short-term and long-term changes in our weather patterns can be quantified
through modeling these effects. The measurements made by the SAR, in
conjunction with other instruments, will play a vital role in helping scientists
to determine the factors that influence global climatic conditions.

1.5  SUMMARY

In this chapter, we have introduced the synthetic aperture radar in terms of its
use as a remote sensing instrument. Emphasis was placed on the potential
application of the SAR data, in conjunction with other remote sensing systems,
to obtain measurements from different portions of the electromagnetic spectrum.
The synergism resulting from combining multisensor data sets from simultaneous
observations is key to our understanding of the earth's processes. The SAR
contributes uniquely to this global database in that it measures both the electrical
and structural characteristics of the surface. Furthermore, this instrument can
generate large-scale, high resolution maps of these surface characteristics
independent of cloud cover, sun angle, and sensor altitude.
In the 40 years since the discovery of synthetic aperture radar a number of
technical challenges, in both the sensor and processor subsystems, have been
met and overcome. It appears that we are now positioned to embark on a new
era in SAR remote sensing, with no fewer than six spaceborne systems planned
for the 1990s. This wealth of data, in conjunction with the diverse set of
applications, should attract a broad scientific community toward the geophysical
interpretation of SAR data products.
In recognition of this widespread interest among both novice and experienced
radar data analysts, this text is structured to provide an in-depth understanding
of both the characteristics of the data and the algorithms used to generate the
image products. We have put special emphasis on addressing the errors inherent
in real systems, and the techniques required to produce radiometrically and
geometrically calibrated images. It is our goal to describe in detail both the
techniques and technologies required for the design and implementation of the
SAR signal processor, since it is in this part of the data system that the calibrated,
registered, multifrequency, multipolarization image products are generated.
We recognize that it is only after these calibrated data products are presented
to the science community that the real work begins. Remote sensing provides
the key to our understanding of the impact of our lifestyle and the industrialized
society on the environment. We are just now beginning to recognize the extent
of the problem, and we expect that the synthetic aperture radar measurements,
in conjunction with other remote sensors, will be instrumental in monitoring
the effects on our changing environment.

REFERENCES AND FURTHER READING


Allen, C. T. and F. T. Ulaby (1984). "Modelling the Polarization Dependence of the
Attenuation in Vegetation Canopies," Proc. IGARSS'84 Symposium, Strasbourg,
France, pp. 119-124.
Alpers, W., D. B. Ross and C. L. Rufenach (1981). "On the Detectability of Ocean
Surface Waves by Real and Synthetic Aperture Imaging Radar," J. Geophys. Res.,
86, pp. 6481-6498.

Barrick, D. E. and W. H. Peake (1968). "A Review of Scattering from Surfaces with
Different Roughness Scales," Radio Science, 3, pp. 865-868.
Beal, R. C., P. S. DeLeonibus and I. Katz (1981). Spaceborne Synthetic Aperture Radar
for Oceanography, Johns Hopkins Univ. Press, Baltimore, MD.
Beal, R. J., J. L. MacArthur, F. M. Monaldo and S. F. Oden (1988). "Directional Ocean
Wave Spectra: Prospects for Acquiring a Global Data Base from SIR-C," IGARSS'88
Digest, Vol. I, Edinburgh, UK, pp. 149-150.
Bennett, J. R., P. Widmer and I. G. Cumming (1980). "A Real-Time Airborne SAR
Processor," Proc. ESA Working Group, Frascati, Italy, ESA SP-1031.
Blitzer, D. L. (1959). "Steering and Focusing Antenna Beams by the Use of Quantized
Phase Shifts," Control Syst. Lab., University of Illinois, Report No. I-84, Urbana, IL.
Blom, R. G., R. E. Crippen and C. Elachi (1984). "Detection of Subsurface Features in
Seasat Radar Images of Means Valley, Mojave Desert, California," Geology, 12,
pp. 346-349.

Brookner, E. (1985). Radar Technology, Chapter 1, Artech House, Dedham, MA.
Butler, D., et al. (1984). Earth Observing System, Science and Mission Requirements
Working Group Report, Vol. I, NASA TM 86129, Washington, DC.
Carsey, F. D. and H. J. Zwally (1980). Geophysics of Sea Ice, Remote Sensing as a
Research Tool (N. Untersteiner, ed.), Plenum Press, New York, pp. 1021-1098.
Carver, K., et al. (1987). "Earth Observing System Reports, Volume IIf: Synthetic
Aperture Radar, Instrument Panel Report," NASA, Washington, DC.
Chandrasekhar, S. (1960). Radiative Transfer, Dover Publications, New York.
Cordey, R., J. T. Macklin, J.-P. Guignard and E. Oriol-Pibernat (1989). "Theoretical
Studies for the ERS-1 Wave Mode," ESA Journal, 13, pp. 343-362.
Colwell, R. N., ed. (1983a). Manual of Remote Sensing, Volume I: Theory, Instruments
and Techniques (Simonett, D. S., ed.), American Society of Photogrammetry, Falls
Church, VA.
Colwell, R. N. (1983b). Manual of Remote Sensing, Volume II: Interpretation and
Applications (Estes, J. E., ed.), American Society of Photogrammetry, Falls Church, VA.
Cutrona, L. J., E. N. Leith, C. J. Palermo and L. J. Porcello (1960). "Optical Data
Processing and Filtering Systems," IRE Trans. Information Theory, IT-6, pp. 386-400.
Cutrona, L. J., W. E. Vivian, E. N. Leith and G. O. Hall (1961). "A High Resolution
Radar Combat Surveillance System," IRE Trans. Military Elec., MIL-5, pp. 127-131.
Durden, S. L., J. J. van Zyl and H. A. Zebker (1989). "Modeling and Observation of
the Radar Polarization Signature of Forested Areas," IEEE Trans. Geosci. and Remote
Sens., GE-27, pp. 290-301.
Elachi, C. (1982). "Radar Images of the Earth," Scientific American, 247, pp. 54-61.
Elachi, C. (1987). Introduction to the Physics and Techniques of Remote Sensing, Wiley,
New York.
Elachi, C. (1988). Spaceborne Radar Remote Sensing: Applications and Techniques,
IEEE Press, New York.
Elachi, C., T. Bicknell, R. L. Jordan and C. Wu (1982). "Spaceborne Synthetic-Aperture
Imaging Radars: Applications, Techniques and Technology," Proc. IEEE, 70,
pp. 1174-1209.
Elachi, C., J. B. Cimino and M. Settle (1986). "Overview of the Shuttle Imaging Radar-B
Preliminary Science Results," Science, 232, pp. 1511-1516.


Elachi, C., L. E. Roth and G. G. Schaber (1984). "Spaceborne Radar Subsurface Imaging
in Hyperarid Regions," IEEE Trans. Geosci. and Remote Sens., GE-22, pp. 383-388.
Elachi, C., E. Im, L. Roth and C. Werner (1991). "Cassini Titan Radar Mapper," Proc.
IEEE (in press).
Ford, J. P., J. B. Cimino, B. Holt and M. R. Ruzek (1986). "Shuttle Imaging Radar
Views the Earth from Challenger: The SIR-B Experiment," JPL Pub. 86-10, Jet
Propulsion Laboratory, Pasadena, CA.
Ford, J. P., R. G. Blom, M. L. Bryan, M. I. Daily, T. H. Dixon, C. Elachi and E. C. Xenos
(1980). "Seasat Views North America, the Caribbean and Western Europe with
Imaging Radar," JPL Pub. 80-67, Jet Propulsion Laboratory, Pasadena, CA.
Freden, S. C. and F. Gordon, Jr. (1983). Landsat Satellites, Manual of Remote Sensing
(Simonett, D. and F. Ulaby, eds.), Chapter 12, Vol. I, Am. Society of Photogrammetry,
Sheridan Press, Falls Church, VA.
Fu, L.-L. and B. Holt (1982). "Seasat Views Oceans and Sea Ice with Synthetic Aperture
Radar," JPL Pub. 81-120, Jet Propulsion Laboratory, Pasadena, CA.
Fung, A. K. (1982). "A Review of Volume Scattering Theories for Modeling Applications,"
Radio Science, 17, pp. 1007-1017.
Goddard Space Flight Center (1989). Earth Observing System, Reference Handbook,
NASA GSFC, Greenbelt, Maryland.
Holahan, J. (1963). "Synthetic Aperture Radar," Space/Aeronautics, 40, pp. 88-93.
Hulsmeyer, C. (1904). "Hertzian-Wave Projecting and Receiving Apparatus Adapted
to Indicate or Give Warning of the Presence of a Metallic Body such as a Ship or
a Train," British Patent 13,170.
Hunten, D. M., M. G. Tomasko, F. M. Flasar, R. E. Samuelson, D. Strobel and D. J.
Stevenson (1984). Titan, in Saturn (T. Gehrels and M. S. Mathews, eds.), University
of Arizona Press, Tucson, pp. 671-759.
Im, E., C. Werner and L. Roth (1989). "Titan Radar Mapper for the Cassini Mission,"
21st Lunar and Planetary Science Conf., Johnson Space Center, Houston, TX,
pp. 544-545.
Jensen, H., L. C. Graham, L. J. Porcello and E. N. Leith (1977). "Side-Looking Airborne
Radar," Scientific American, 237, pp. 84-95.
Johnson, W. T. K. and A. T. Edgerton (1985). "Venus Radar Mapper (VRM): Multimode
Radar System Design," SPIE, 589, pp. 158-164.
Jordan, R. (1980). "The Seasat-A Synthetic Aperture Radar System," IEEE J. of Oceanic
Eng., OE-5, pp. 154-164.
Kahle, A. and A. Goetz (1983). "Mineralogical Information from a New Airborne
Thermal Infrared Multispectral Scanner," Science, 222, pp. 24-27.
Kahle, A. B., J. P. Schieldge, M. J. Abrams, R. E. Alley and C. J. LeVine (1981).
"Geological Application of HCMM Data," JPL Pub. 81-55, Jet Propulsion
Laboratory, Pasadena, CA.
Kirk, J. C., Jr. (1975). "Digital Synthetic Aperture Radar Technology," IEEE
International Radar Conference Record, pp. 482-487.
Li, F. and R. Goldstein (1989). "Studies of Multi-baseline Spaceborne Interferometric
Synthetic Aperture Radars," IEEE Trans. Geosci. and Remote Sens., GE-28, pp. 88-97.
MacArthur, J. L. and S. F. Oden (1987). "Real-Time Global Ocean Wave Spectra from
SIR-C: System Design," IGARSS'87 Digest, Vol. II, Ann Arbor, MI, pp. 1105-1108.


MacDonald, H. C. (1969). "Geologic Evaluation of Radar Imagery from Darien Province,
Panama," Modern Geology, 1, pp. 1-63.
Mercer, J. B. (1989). "A New Airborne SAR for Ice Reconnaissance Operations," Proc.
IGARSS'89, Vancouver, BC, p. 2192.
Monaldo, F. M. (1985). "Measurements of Directional Wave Spectra by the Shuttle
Synthetic Aperture Radar," Johns Hopkins APL Tech. Digest, 6, pp. 354-360.
National Aeronautics and Space Administration Advisory Council (1988). Earth System
Science: A Program for Global Change, NASA, Washington, DC.
Nghiem, S. V., J. A. Kong and R. T. Shin (1990). "Study of Polarimetric Response of
Sea Ice with a Layered Random Medium Model," Proc. IGARSS'90, Washington,
DC, pp. 1875-1878.
Pettengill, G. H., D. B. Campbell and H. Masursky (1980). "The Surface of Venus,"
Scientific American, 243, pp. 54-65.
Porcello, L. J., R. L. Jordan, J. S. Zelenka, G. F. Adams, R. J. Phillips, W. E. Brown, Jr.,
S. W. Ward and P. L. Jackson (1974). "The Apollo Lunar Sounder Radar System,"
Proc. IEEE, 62, pp. 769-783.
Purcell, E. M. (1981). Electricity and Magnetism, Berkeley Physics Course, Vol. 2, 2nd Ed.,
McGraw-Hill, New York.
Rawson, R. and F. Smith (1974). "Four Channel Simultaneous X-L Band Imaging SAR
Radar," 9th Int. Symp. Remote Sensing of Environment, University of Michigan,
Ann Arbor, pp. 251-270.
Richards, J. A., G. Q. Sun and D. S. Simonett (1987). "L-Band Radar Backscatter
Modeling of Forest Stands," IEEE Trans. Geosci. and Remote Sens., GE-25,
pp. 487-498.
Schaber, G. G., C. Elachi and T. F. Farr (1980). "Remote Sensing of S. P. Mountain
and S. P. Lava Flow in North Central Arizona," Rem. Sens. Env., 9, pp. 149-170.
Sherwin, C. W., J. P. Ruina and R. D. Rawcliffe (1962). "Some Early Developments in
Synthetic Aperture Radar Systems," IRE Trans. on Military Elec., MIL-6, pp. 111-115.
Skolnik, M. I., ed. (1970). Radar Handbook, McGraw-Hill, New York.
Skolnik, M. I. (1980). Introduction to Radar Systems, 2nd Ed., McGraw-Hill, New York.
Taylor, A. H., L. C. Young and L. A. Hyland (1934). "System for Detecting Objects by
Radio," United States Patent 1,981,884.
Ulaby, F. T., R. K. Moore and A. K. Fung (1981). Microwave Remote Sensing, Active
and Passive, Volume I: Microwave Remote Sensing Fundamentals and Radiometry,
Addison-Wesley, Reading, MA.
Ulaby, F. T., R. K. Moore and A. K. Fung (1982). Microwave Remote Sensing, Active
and Passive, Volume II: Radar Remote Sensing and Surface Scattering and Emission
Theory, Addison-Wesley, Reading, MA.
Ulaby, F. T., R. K. Moore and A. K. Fung (1986). Microwave Remote Sensing, Active
and Passive, Volume III: From Theory to Applications, Artech House, Dedham, MA.
Ulaby, F. T. (1980). "Vegetation Clutter Model," IEEE Trans. Ant. Prop., AP-28,
pp. 538-545.
Ulaby, F. T., K. Sarabandi, K. McDonald, M. W. Whitt and M. C. Dobson (1990).
"Michigan Microwave Canopy Scattering Model (MIMICS)," Int. J. Remote Sensing,
11, pp. 1223-1253.


Valenzuela, G. R. (1978). "Theories for the Interaction of Electromagnetic and Oceanic
Waves: A Review," Boundary Layer Meteorology, 13, pp. 61-85.
van Roessel, J. W. and R. D. de Godoy (1974). "SLAR Mosaic for Project RADAM,"
Photogram. Eng., 40, pp. 583-595.
van Zyl, J. J. (1989). "Unsupervised Classification of Scattering Behavior Using Radar
Polarimetry Data," IEEE Trans. Geosci. and Remote Sens., GE-27, pp. 36-45.
Viksne, A., T. C. Liston and C. D. Sapp (1969). "SLR Reconnaissance of Panama,"
Geophysics, 34, pp. 54-64.
Watson-Watt, R. (1957). Three Steps to Victory, Odhams Press, London.
Wiley, C. A. (1965). "Pulsed Doppler Radar Methods and Apparatus," United States
Patent No. 3,196,436, Filed August 1954.
Wiley, C. A. (1985). "Synthetic Aperture Radars: A Paradigm for Technology
Evolution," IEEE Trans. Aerospace Elec. Sys., AES-21, pp. 440-443.
Winebrenner, D. P. and K. Hasselmann (1988). "Specular Point Scattering Contribution
to the Mean Synthetic Aperture Radar Image of the Ocean Surface," J. Geophys.
Res., 93(C), pp. 9281-9294.
Zebker, H. A., J. J. van Zyl and D. N. Held (1986). "Imaging Radar Polarimetry from
Wave Synthesis," J. Geophys. Res., 92, pp. 683-701.

2
THE RADAR EQUATION
.

In Section 1.2, we have given a heuristic discussion of the way in which a SAR
achieves higher resolution along track than does a real aperture radar (RAR).
In Section 1.4 we indicated many of the links between geophysical parameters
of interest for remote sensing and the corresponding radar signals. In the
remainder of the book, we want to make more precise these matters of SAR
operation and SAR image formation, and their effects on the ability to accurately
determine geophysical information from SAR images.
Since a SAR is a particular kind of RAR, one which maintains precise time
relationships between transmitter and receiver (a "coherent" RAR), with the
"SAR" qualities added in the signal processing, in order to understand SAR it
is necessary to have an understanding of RAR. In this chapter, we develop
carefully the basic mathematical model of a RAR system, the radar equation.
Radar technology, and in particular RAR, has been under continuous active
development for well over a half century. Skolnik (1985) gives an account of
the history of the early days of radar, while in Section 1.3 we have traced the
historical development of SAR. The state of the art as of about 1950 required
28 volumes to codify (Ridenour, the "Rad. Lab. Series"). Even to survey in
overview the main aspects of the technology requires a book, for example that
by Skolnik (1980) or by Barton (1988), while a more detailed review (Skolnik,
1970) runs to over 1500 pages. Therefore in our discussions of RAR we will
necessarily be selective in choosing topics. Within that framework, however, we
will relate the main ideas of RAR systems to basic physical concepts.
2.1  POWER CONSIDERATIONS IN RADAR

The traditional purpose of radar is to detect the presence of "hard" targets,
such as aircraft, and to localize to some extent their positions. The radar
transmitter (Fig. 2.1) generates a brief (microseconds) high power burst of radio
frequency electromagnetic energy. (The more powerful the better: a few
megawatts is not unusual for a ground based radar.) This is conveyed to an
antenna through appropriate microwave "plumbing". At the high frequencies
of radar (0.1-10 GHz, typically), an antenna structure of reasonable physical
size acts to confine the radiated energy to a narrow fan or cone in space, thereby
providing localization in one or two spatial dimensions, respectively.
Having launched the pulse, the transmitter turns off and the receiver "listens"
for any echos of the pulse returned from the sector of the sky into which the
pulse was launched. Any perceived echo has its time of reception noted, relative
to the time of transmission of the pulse. This time delay τ is interpreted in
terms of range to target, R = cτ/2, providing another spatial dimension for
localization. The power of the received echo relative to that of the transmitted
pulse scales in free space as 1/R⁴. Megawatts quickly turn into microwatts at
ranges of interest, requiring sensitive receiver circuits, so sensitive, in fact, that
the noise internally generated in the receiver must be reckoned with. The radar
equation expresses this conversion of transmitted power into received power,
in terms of the ratio of received power due to a target reflection to receiver
power due to noise, together with some system and target parameters.
The earliest radar receivers used a simple "A-scan" presentation, with the
receiver output power presented as a function of time (range) during the listening
time after transmission of one pulse and before transmission of the next
(Fig. 2.2). The "grass" along the baseline is due to random thermal noise, either
internal to the receiver or entering along with the signal from the environment,
while target echos show up as "bumps" above the grassy baseline. The radar

is roughly characterized by the range and character of a target which produces
an echo strong enough to show up above the noise.

Figure 2.1 Notional radar system.

Figure 2.2 The receiver signal to noise ratio SNR₀ = P_s/P_n.
The height of a target bump above the noise grass is measured by the ratio
of receiver output power P_s due to target echo to the average output power
P_n due to other causes (noise), the output signal to noise (power) ratio SNR₀.
Specifying the minimum SNR which is required for "reliable" detection is not
easy, and we shall simply assume that some minimum power ratio SNR₀ has
been determined as necessary for satisfactory performance.
With a required minimum value of SNR at the receiver output specified,
it remains to relate that number to the transmitter, target, and receiver
characteristics in order to assess the performance of the system in terms of what
target size can be discerned at what range. The conversion of transmitter power
into receiver power by the system and target characteristics is simple in concept.
However, the seeming simplicity covers a number of assumptions about the
way the system operates, as well as some factors wrapping very complicated
phenomena into a few numbers whose determination may not be at all simple.
We will try to be careful in pointing out these matters as they arise. Similar
discussions are given by, for example, Skolnik (1980), Barton (1988), Silver
(1949), and Colwell (1983).
The parameters to be discussed in detail in this chapter are those appearing
in the (point target) radar equation. The general form of that equation is
*(2.1.1)

Here SNR0 is the SNR which has been specified as required for reliable operation.
That is equated to the SNR provided by the system for a target at range R, on
the right. The equation may then be solved for any one of its parameters (often
range) in terms of the others to determine operational capability or a system
requirement.
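Solving the equation for one parameter in terms of the others can be sketched numerically. The sketch below assumes the point-target form assembled later in this section, SNR₀ = P_t G_t A_e σ / [(4π)² R⁴ kTBF]; all system numbers are illustrative, not taken from any actual radar.

```python
import math

K_BOLTZMANN = 1.38e-23  # Boltzmann's constant, J/K

def snr(p_t, g_t, a_e, sigma, r, t, b, f):
    """Point-target radar equation: SNR = Pt*Gt*Ae*sigma / ((4*pi)^2 * R^4 * kTBF)."""
    return p_t * g_t * a_e * sigma / ((4 * math.pi) ** 2 * r ** 4
                                      * K_BOLTZMANN * t * b * f)

def max_range(snr_0, p_t, g_t, a_e, sigma, t, b, f):
    """Invert the same equation for the range at which the required SNR is just met."""
    return (p_t * g_t * a_e * sigma
            / ((4 * math.pi) ** 2 * snr_0 * K_BOLTZMANN * t * b * f)) ** 0.25

# Illustrative system: 1 MW peak power, 40 dB gain, 1 m^2 aperture and target,
# T = 290 K, B = 1 MHz, noise factor F = 3, required SNR of 10.
r_max = max_range(10.0, 1e6, 1e4, 1.0, 1.0, 290.0, 1e6, 3.0)
print(round(r_max / 1e3, 1), "km")
```

The fourth-root dependence is the practical sting of the 1/R⁴ law: doubling the transmitter power extends the maximum range by only about 19 percent.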

Figure 2.3 Relation of radar equation parameters to system elements.

It is worth stepping briefly through Eqn. (2.1.1) before beginning
a detailed analysis of its factors (Fig. 2.3). The first parameter on the right of
the radar equation Eqn. (2.1.1) is P_t, the average transmitter power delivered
to the antenna during the time of a transmitted pulse. That power, if radiated
uniformly in all spatial directions, would result in a power density (or intensity)
P_t/4πR² (W/m²) flowing across a spherical surface at range R. The
dimensionless antenna gain G_t in the direction of the antenna beam adjusts
that value according to the concentrating properties of the directive antenna
structure, and takes account of the power loss in the antenna structure itself.
(The quantity P_t G_t is called the effective isotropic radiated power, EIRP.) The
result is an intensity P_t G_t/4πR² in W/m² at the "target", a point in the center
of the beam at range R.
A target in the beam at range R intercepts that flow of power in space and
scatters power back towards the antenna. The amount of that backscattered
power is characterized by an area ("cross section") σ imputed to the target.
The normalizing assumption is made that the target scatters omnidirectionally,
regardless of the actual state of affairs. The presumed intercepted power
P_t G_t σ/4πR² (W), available for omnidirectional scattering, therefore creates at
the antenna, at range R from the target, a power density P_t G_t σ/(4πR²)² in W/m².

The value of u is in fact specified such that this last expression yields the correct

at the antenna output (receiver input) terminals.


The receiver is broadly defined as all elements beyond the antenna terminals in the receiving system. The receiver input noise is characterized by a power per unit frequency bandwidth kT W/Hz, where k = 1.38 × 10⁻²³ J/K is Boltzmann's constant and T is a "temperature". The temperature T is a numerical value selected such that the product kT is the correct noise power spectral density for the case at hand, assuming the antenna and receiver impedances are matched. The receiver bandwidth B converts this into a noise power kTB (W). The receiver system is characterized by a noise factor F > 1 which expresses the extent to which internal receiver noise increases apparent receiver input noise, if we were to observe the receiver noise output and assume (incorrectly) that the receiver system itself were free of internal noise sources. The (power) SNR at the input to the noiseless receiver would then be

SNR_i = S / kTBF

where S is the signal power at the antenna terminals.

This notional noiseless receiver will be assumed here to have a gain relative to average power which is constant over the frequency band of the signal. This gain is applied uniformly to both signal input and noise input. Thus the SNR in the signal band is the same at the output as it is at the input. Therefore, the SNR_i can be equated to the SNR_0 which is required at the detection point to
yield the radar equation, Eqn. (2.1.1).
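The noise side of that equality is easy to evaluate numerically; a minimal sketch, with purely illustrative receiver values (not from the text):

```python
import math

k = 1.38e-23          # Boltzmann's constant, J/K

def snr_input(S, T, B, F):
    """SNR at the input of the notional noiseless receiver:
    signal power S divided by the effective noise power kTBF."""
    return S / (k * T * B * F)

# Hypothetical numbers: a 1 fW (-120 dBm) signal into a receiver with
# T = 290 K, B = 10 MHz, and a 3 dB noise figure (F ~ 2).
S = 1e-15
snr = snr_input(S, T=290.0, B=10e6, F=10 ** (3 / 10))
print(f"noise power kTBF = {k * 290.0 * 10e6 * 10 ** (3 / 10):.3e} W")
print(f"input SNR = {10 * math.log10(snr):.1f} dB")
```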
In Chapter 3, we will discuss an important modification of the radar equation, Eqn. (2.1.1), that resulting by use of a matched filter in the receiver. Such a filter makes use of the detailed structure of the signal input and the noise statistics to maximize the instantaneous signal to noise power ratio at a particular time of interest. The procedure generalizes the idea implicit in Eqn. (2.1.1), that the receiver has uniform response over a bandwidth B appropriate to that occupied by the signal. In the remainder of this chapter, however, we will work through the factors of the radar equation, Eqn. (2.1.1), in some detail, both as a tutorial mechanism for introducing necessary radar background, and in order to point out carefully the assumptions involved in their use. A more precise statement of Eqn. (2.1.1) results as Eqn. (2.7.1).

2.2 THE ANTENNA PROPERTIES

We need first to characterize the extent to which the antenna of the radar system concentrates the power delivered to it by the transmitter into a beam aimed in the target direction. That is expressed in the radar equation, Eqn. (2.1.1), by the antenna gain G_t.


During the time of a radar pulse, while the transmitter is on, suppose that the average power flowing into the antenna input port is P_t (Fig. 2.3). (This is often called the peak power of the radar, to distinguish it from the true average power, which takes into account the transmitter "off" time as well. The ratio of "on" time to total time is the duty cycle.) If all of this power were radiated into space by the antenna, and if the antenna radiated uniformly in all directions (isotropic radiation), then at range R from the antenna the power density (intensity) of the electromagnetic wave would be

I_0(R) = P_t / 4πR²   (2.2.1)

However, some power is lost by dissipation in the antenna itself. Also, by design, the antenna does not radiate isotropically. Rather, the intensity at some space point with polar coordinates (R, θ, φ) relative to the antenna is some value

I(R, θ, φ) = G_t(θ, φ) I_0(R)

where the gain function G_t(θ, φ) has values both greater and less than unity. Usually, the gain G_t(θ, φ) is maximum at θ = 0, φ = 0, the direction of the radar beam. The parameter G_t in the radar equation is the maximum (on-axis) value of G_t(θ, φ).
The gain function G_t(θ, φ) can be interpreted as the power P(θ, φ) per unit solid angle Ω radiated by the antenna in direction (θ, φ) (the radiation pattern (Ulaby et al., 1981, p. 97)), relative to the power per unit solid angle P_t/4π which would be radiated in that direction by a lossless isotropic antenna. This follows by relating intensity I, power P, and solid angle Ω through

I(R, θ, φ) dA = P(θ, φ) dΩ = P(θ, φ) dA/R²   (2.2.2)

so that

P(θ, φ)/(P_t/4π) = (4πR²/P_t) I(R, θ, φ) = I(R, θ, φ)/I_0(R) = G_t(θ, φ)   *(2.2.3)

Only some portion

P_rad = ρ_e P_t

of the power delivered to the antenna is radiated into space, where ρ_e < 1 is the antenna radiation efficiency. The correspondingly scaled function

D_t(θ, φ) = G_t(θ, φ)/ρ_e = 4πP(θ, φ)/P_rad   (2.2.4)

is the antenna directivity pattern. From the relation

∫_sphere D_t(θ, φ) dΩ = (4π/P_rad) ∫_sphere P(θ, φ) dΩ = 4π   (2.2.5)

it is clear that the directivity function trades power increase in one solid angle sector for decrease in another.

The particular form of the gain function G_t(θ, φ) depends on the spatial distribution of current imposed by the transmitter on the antenna structure. We will summarize the development of that relationship. However, the design of a physical antenna to accomplish the current distribution corresponding to a desired gain pattern is a separate problem with which we will not deal.

One of the most complete discussions is that of Silver (1949), who proceeds using the basic equations of electromagnetic theory to relate antenna currents and gain. The relations of main interest are summarized by Stutzman and Thiele (1981). Sherman (1970) has given a comprehensive summary of the engineering results.

The central result for calculation, developed from Maxwell's equations, is the Huygens diffraction integral. Let us assume that the antenna excitation is sinusoidal, with radian frequency ω, and that the wavelength λ = 2πc/ω of the electric field is much less than the physical extent of the antenna. For simplicity, and in accord with the usual practice in SAR, we will assume that the field impressed on the antenna by the transmitter is linearly polarized (i.e., the electric field vector has a constant direction). The field radiated by the antenna can then be expressed by a scalar diffraction integral.

We can write the one dimensional electric field vector as

E(R, t) = E(R, t)x̂

for example, where x̂ is the unit vector along the x coordinate in space and we assume linear polarization in that direction. Using phasor analysis, for the scalar coordinate of this field we write

E(R, t) = Re{√2 E(R) exp(jωt)}

where E(R) is the corresponding complex rms electric field phasor. This can be written (Silver, 1949, p. 170)

E(R) = (1/4π) ∫_{A_a} E(x', y') [exp(−jkr)/r] [(jk + 1/r) ẑ·r̂ + jk ẑ·ŝ] dx' dy'   (2.2.6)

where the geometric terms are defined in Fig. 2.4 and k = 2π/λ is the carrier wave number. The quantities ẑ, r̂ are unit vectors along the corresponding rays


in Fig. 2.4, and ŝ is the unit vector along the spatial gradient of the phase of the electric field E(x, y) induced by the antenna currents across the aperture. The direction of the electric field vector at position R is the same as its direction on the surface A_a, the "aperture", since we assume free space propagation.

Figure 2.4 Geometry for calculation of field due to aperture illumination.

This integral Eqn. (2.2.6) expresses the field at an arbitrary space point R in terms of its values over a planar surface A_a in the vicinity of the physical antenna. The purpose of the antenna is to force some specified field distribution E(x, y) to exist over this surface. With the usual linear phase variation of field across the aperture, ŝ is constant and indicates the direction of the antenna radiated power beam. If the field across the aperture further has constant phase, then ŝ = ẑ. The antenna design problem, which we will not discuss, is to determine from the desired spatial distribution E(R) of radiation what should be the aperture field distribution E(x, y), and then to determine what physical structure will produce that aperture distribution.

In working with the (scalar) diffraction integral Eqn. (2.2.6), it is convenient to make various levels of approximation, corresponding to increasing distance of the field point R of interest from the aperture surface. For the closest points (the near field region), no approximations are reasonable, and the integral is taken as it stands. (Silver (1949) remarks that, even in the case of no approximations, within a few wavelengths of the aperture the approximations leading to Eqn. (2.2.6) do not hold very well. The equation is not useful as it stands for quantitative work very near the antenna.)

Moving further than a few wavelengths from the aperture we enter the Fresnel region. Here it is assumed that r ≫ λ, so that 1/r ≪ k. Further, in the magnitude terms of the diffraction integral Eqn. (2.2.6) we assume that r ≈ R (Fig. 2.4). In the more critical phase terms, we use the expansion (keeping through second order terms in R'/R):

r² = |R − R'|² = R² + R'² − 2R R̂·R'
r ≈ R − R̂·R' + R'²/2R − (R̂·R')²/2R   (2.2.7)

yielding the diffraction integral Eqn. (2.2.6) in the approximate form

E(R) = (j/2λ)[exp(−jkR)/R] ∫_{A_a} E(x', y') exp[−jk(r − R)](cos θ + ẑ·ŝ) dx' dy'   (2.2.8)

in which Eqn. (2.2.7) is to be used to approximate the quantity r − R.

Finally, in the case of interest for us, the quadratic terms in R'/R are discarded in the expansion Eqn. (2.2.7). We then enter the Fraunhofer region of diffraction (the far field), for which case

E(R) = (j/2λR)(ẑ·ŝ + cos θ) exp(−jkR) ∫_{A_a} E(x', y') exp(jk R̂·R') dx' dy'   *(2.2.9)

where we further assume the usual case of constant aperture phase gradient, so that ŝ is constant. This expression Eqn. (2.2.9) shows that, at least for cos θ ≈ 1, in the far field the antenna radiation pattern in space is determined by the two-dimensional Fourier transform of the electric field distribution over the aperture.
As the usual criterion for discarding the quadratic terms in Eqn. (2.2.7) it is required that the phase error thereby incurred in the integrand of Eqn. (2.2.8) at the boundary of the aperture integration region be at most π/8 radian. That is to say that the far field region is defined by

(k/2R)[R'² − (R̂·R')²] ≤ (π/λR)(D/2)² < π/8

R > 2D²/λ   *(2.2.10)

where D is a measure of the linear extent of the aperture and we assume R̂·R' = R' sin θ, with sin θ ≪ 1.

Silver (1949, p. 196) has given a more detailed discussion of the origin of the limitation Eqn. (2.2.10). The Fraunhofer diffraction expression Eqn. (2.2.9) is the basis for antenna gain calculations in the far field.

One practical consequence of the restriction Eqn. (2.2.10) might be pointed out. For an antenna, say at C-band (5 GHz) and with linear extent 10 m, the far field begins at a range R = 3.3 km. Thus, operation is usually in the far field, and certainly so for a satellite platform. However, to verify that the construction meets the design goals, and to obtain a precise antenna pattern for use in image calibration, it is desirable to measure the pattern of an antenna after construction.

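The C-band example works out as follows; a minimal sketch of the Fraunhofer criterion:

```python
def far_field_distance(D, wavelength):
    """Fraunhofer (far field) criterion of Eqn. (2.2.10): R > 2*D^2/lambda."""
    return 2.0 * D ** 2 / wavelength

# The example from the text: C-band (5 GHz), 10 m aperture.
c = 3.0e8                      # speed of light, m/s
lam = c / 5.0e9                # wavelength = 0.06 m
R_ff = far_field_distance(10.0, lam)
print(f"far field begins at about {R_ff / 1e3:.1f} km")   # about 3.3 km
```

Note the quadratic growth with aperture size: doubling D quadruples the far-field distance, which is why large spaceborne antennas are measured by near-field extrapolation or after deployment.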

For precision of results, this should be carried out in an enclosed reflection-free test space. It is then often impractical to work in the far field for such measurements. Techniques to extrapolate near field pattern measurements reliably to the far field are therefore of importance. Fig. 2.5 compares near-field and far-field directivity patterns for an antenna with constant illumination E(x', y') in Eqn. (2.2.9). In the case of a large spaceborne antenna, antenna pattern measurements may need to be made after deployment, using earth calibration points (Chapter 7).

Figure 2.5 Near-field and far-field antenna patterns with uniform aperture illumination (from Skolnik, 1970). With permission of McGraw-Hill, Inc.

2.2.1 The Antenna Gain

The result Eqn. (2.2.9) for the far electric field of an antenna is related to time average power in W/m², the quantity we need in the radar equation, by the Poynting relation (Silver, 1949, p. 70)

I(R) = Re(E × H*)   (2.2.11)

This expresses the time averaged spatial density of power flow I (the intensity) in the electromagnetic field at a point R, in both magnitude and direction, in terms of the electric and magnetic field vector complex rms phasors. For the particular case Eqn. (2.2.9) of the diffraction far field this becomes (Silver, 1949, p. 177)

I(R) = √(ε₀/μ₀) |E(R)|² R̂

where E is the complex phasor scalar length of the vector E and we assume free space. (Note that I = |I| varies as 1/R² since, from Eqn. (2.2.9), |E| ∝ 1/R.) Using Eqn. (2.2.2), this Eqn. (2.2.11) yields the antenna power pattern as

P(θ, φ) = √(ε₀/μ₀) R² |E(R)|²   (2.2.12)

so that, from Eqn. (2.2.3), the gain function is

G_t(θ, φ) = (4πR²/P_t) √(ε₀/μ₀) |E(R)|²   (2.2.13)

By "the" gain of an antenna is meant the maximum of the direction dependent gain Eqn. (2.2.13), taken over all directions. If the antenna is to be useful in a radar system, this maximum must be sharply defined, and have a level considerably above the gain in other directions; the antenna must have a beam. This requires a phase gradient of the field which is constant across the aperture, as we assumed in Eqn. (2.2.9). Thus we will write

E(x', y') = |E(x', y')| exp[jk(α_x x' + α_y y')]

on the aperture, where α_x, α_y are the direction cosines of the aperture phase gradient unit vector ŝ relative to x̂, ŷ. For specified α_x, α_y, i.e., specified ŝ, provided ŝ ≈ ẑ, the maximum of the gain G_t(θ, φ) then occurs in the direction ŝ (Silver, 1949, p. 176). Henceforth we will consider only the usual case ŝ = ẑ, so that

E(x', y') = |E(x', y')|

Using the field expression Eqn. (2.2.9) with ŝ = ẑ, the gain function Eqn. (2.2.13) is found to be

G_t(θ, φ) = [π√(ε₀/μ₀)/λ²P_t] |1 + cos θ|² |∫_{A_a} E(x', y') exp[j(2π/λ) sin θ (x' cos φ + y' sin φ)] dx' dy'|²   (2.2.14)

This gain function is maximum on the antenna axis (θ = 0) (Silver, 1949, p. 177), with

G_t = max_{θ,φ} G_t(θ, φ) = G_t(0, φ) = [4π√(ε₀/μ₀)/λ²P_t] |∫_{A_a} E(x', y') dx' dy'|²   *(2.2.15)

This last quantity is the gain parameter G_t in the radar equation, Eqn. (2.1.1). Using the Poynting expression Eqn. (2.2.11), evaluated across the aperture, the total power P_rad radiated by the antenna in the case ŝ = ẑ can be written (Silver, 1949, p. 177):

P_rad = √(ε₀/μ₀) ∫_{A_a} |E(x', y')|² dx' dy'   (2.2.16)

Defining the radiation efficiency of the antenna as

ρ_e = P_rad/P_t   (2.2.17)

the gain Eqn. (2.2.15) can then be written

G_t = ρ_e D_t   (2.2.18)

where

D_t = (4π/λ²) |∫_{A_a} E(x', y') dx' dy'|² / ∫_{A_a} |E(x', y')|² dx' dy'   *(2.2.19)

(Steering this maximum in space is done by changing the gradient of the aperture phase distribution in the technology of phased array antennas, using α_x, α_y ≠ 0. The gain of the steered beam may be less than that in Eqn. (2.2.19).)

The quantity D_t in Eqn. (2.2.19) is the on-axis antenna gain if the antenna itself were lossless (ρ_e = 1), and is called the antenna directivity. It is the maximum of the directivity pattern Eqn. (2.2.4). It still includes the aperture illumination amplitude distribution |E(x, y)| as a function to be chosen. (In choosing ŝ = ẑ, the aperture phase has been set to zero.) The Schwartz inequality

|∫ fg* dx' dy'|² ≤ (∫ |f|² dx' dy')(∫ |g|² dx' dy')

for the case f = E(x, y) and g = 1, yields

|∫_{A_a} E(x', y') dx' dy'|² ≤ A_a ∫_{A_a} |E(x', y')|² dx' dy'

where A_a is the geometric area of the aperture. Then we have

D_t ≤ 4πA_a/λ²

so that, in particular, the maximum possible directivity D_0 satisfies

D_0 = max(D_t) = 4πA_a/λ²   *(2.2.20)

Substitution of the constant aperture amplitude distribution, |E(x, y)| = const, into Eqn. (2.2.19) verifies that the maximum directivity D_0 of Eqn. (2.2.20) is then actually attained. Any other amplitude function necessarily results in a directivity which is less than D_0. The amount of decrease is expressed by the "aperture efficiency"

ρ_a = D_t/D_0 = |∫_{A_a} E(x', y') dx' dy'|² / (A_a ∫_{A_a} |E(x', y')|² dx' dy')   (2.2.21)

Since ρ_a = 1 is in fact attainable, at least in principle, one might wonder why a lesser value would intentionally be sought. The answer has to do with the off-axis behavior of the function G_t(θ, φ), and the appearance of local maxima (sidelobes) whose values, although less than the global (on-axis) maximum, may still be large enough to be troublesome. The choice of amplitude distributions other than uniform, in order to control these sidelobes, is an area of independent study. Suffice it to say at this point that the use of a nonuniform distribution sacrifices on-axis gain and broadens the global maximum in angle, while decreasing the levels of the off-axis local maxima.

The directivity D_t compares the far-field on-axis power intensity to the total radiated power, while the power gain G_t, the quantity needed in the radar equation, compares far-field on-axis intensity to the antenna input power P_t. Combining Eqn. (2.2.18) with Eqn. (2.2.21) yields the power gain as

G_t = ρ_e ρ_a D_0   (2.2.22)

where the product

ρ = ρ_e ρ_a   (2.2.23)

is the antenna efficiency.


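As a numerical check on the aperture efficiency ρ_a of Eqn. (2.2.21), the sketch below evaluates its one-dimensional form, ρ_a = |∫F dx|² / (L ∫F² dx), by midpoint-rule integration for a uniform and a cosine-tapered illumination. The cosine taper should give 8/π² ≈ 0.81, the D/D₀ value listed for cos(πx/L) in Table 2.1.

```python
import math

def aperture_efficiency(amp, n=100_000):
    """Numerically evaluate rho_a = |int F dx|^2 / (L * int F^2 dx)
    for a one-dimensional aperture of unit length, x in [-1/2, 1/2],
    using midpoint-rule integration."""
    dx = 1.0 / n
    xs = [-0.5 + (i + 0.5) * dx for i in range(n)]
    num = sum(amp(x) for x in xs) * dx          # int F dx
    den = sum(amp(x) ** 2 for x in xs) * dx     # int F^2 dx
    return num ** 2 / den                       # L = 1

rho_uniform = aperture_efficiency(lambda x: 1.0)
rho_cosine = aperture_efficiency(lambda x: math.cos(math.pi * x))
print(f"uniform taper:  rho_a = {rho_uniform:.4f}")   # 1.0000 (Schwartz bound met)
print(f"cosine taper:   rho_a = {rho_cosine:.4f}")    # 0.8106 = 8/pi^2
```

The uniform case confirms that the Schwartz bound is attained only by constant amplitude; any taper trades directivity for sidelobe control.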
Since D_0 = 4πA_a/λ² is the maximum possible gain for a physical aperture of area A_a, the actual power gain G_t is often expressed as

G_t = 4πA_e/λ²   *(2.2.24)

where

A_e = ρ A_a   (2.2.25)

is the effective aperture for transmission. An effective directive area can then also be defined from Eqn. (2.2.24)

D_t = 4πA_d/λ²   (2.2.26)

or, correspondingly, a directive area

A_d = ρ_a A_a   (2.2.27)

which does not include power loss in the antenna structure itself, or in its feed lines.

The single parameter G_t of Eqn. (2.2.22), the antenna (power) gain, thus wraps into itself a good deal of complexity. It applies only on the beam axis in the far field, and takes account of ohmic losses in the antenna and any shading (use of nonuniform amplitude distribution) for sidelobe control. It is a parameter the antenna designer must supply, and allows the building of the radar equation to be carried one step beyond the transmitter, to write the intensity of power incident on the target (assumed to be on the antenna beam axis) as I(R) = P_t G_t / 4πR².

2.2.2 The Antenna Directional Pattern

Although it does not affect the form of the point target radar equation, an important property of an antenna, in addition to its gain, can be discussed here based on the material above. That is its directivity pattern D_t(θ, φ). This is conventionally normalized to unit gain on axis to yield the pattern

d_t(θ, φ) = D_t(θ, φ)/D_t = G_t(θ, φ)/G_t

where D_t and G_t are the on-axis (maximum) values of D_t(θ, φ) and G_t(θ, φ). From Eqn. (2.2.14) and Eqn. (2.2.15) the antenna pattern is

d_t(θ, φ) = (1/4)|1 + cos θ|² |∫_{A_a} F(x, y) exp[j(2π/λ) sin θ (x cos φ + y sin φ)] dx dy|² / |∫_{A_a} F(x, y) dx dy|²   (2.2.28)

Here F(x, y) = |E(x, y)| is the aperture amplitude distribution to be chosen by the designer. The integral factor usually reduces to small values while the off-axis angle θ is near enough beam center that cos θ ≈ 1. Hence the angular pattern d_t(θ, φ) can be calculated over the angular regions of interest by making that approximation.

The physical shape of the antenna aperture grossly determines the pattern function Eqn. (2.2.28). The aperture might be circular, producing a symmetric pencil beam in space. A circular aperture with uniform illumination (F = const) yields a beam as in Fig. 2.6, which shows the variation of d_t(θ, φ) with off-axis angle θ for arbitrary φ. Shading of the aperture function can be used to reduce the sidelobes of the pattern (secondary maxima) while widening the main beam somewhat and reducing the gain (aperture efficiency ρ_a < 1). In the case of SLAR and SAR, rectangular apertures are typical. This is because the antenna height controls the swath width and range ambiguity (Section 6.5.1), while antenna length controls along track resolution, azimuth ambiguities (Section 6.5.1), and PRF selection. Consideration of these various factors usually leads to selection of a rectangular aperture as the optimal trade-off. The length and height of a rectangular aperture must satisfy the ambiguity constraint Eqn. (1.2.14) in any event.

Considering a uniformly illuminated rectangular aperture of dimension L_a × W_a, we can verify the beamwidth formula assumed in Section 1.2.1: θ_H = λ/L_a for example. The directivity pattern function Eqn. (2.2.28) in this case becomes

d_t(θ, φ) = (1/A_a)² |∫_{−L_a/2}^{L_a/2} ∫_{−W_a/2}^{W_a/2} exp[j(2π/λ) sin θ (x' cos φ + y' sin φ)] dx' dy'|²   (2.2.29)

This pattern is roughly characterized by its principal cuts for φ = 0, π/2, which are identical except for scaling by L_a or W_a, respectively. Fig. 2.6 shows the generic result, with l being the length of the antenna in the direction of the cut in question. Shading of the aperture is used to reduce sidelobes, using for example Taylor weighting (discussed in Section 3.2.3), which changes the result to the curve shown in Fig. 2.7.


Figure 2.6 Directivity patterns of uniformly illuminated circular (solid) and square (dashed) apertures, plotted against u = πl sin θ/λ (l the side of the square; for the circle, u = πD sin θ/λ with D the diameter) (from Skolnik, 1970). With permission of McGraw-Hill, Inc.

Figure 2.7 Directivity pattern of uniformly illuminated rectangular aperture with Taylor weighting, 30 dB levels, n̄ = 5.

Along the main pattern cut φ = 0 the integration in Eqn. (2.2.29) can be carried out to yield

d_t(θ, φ) = [(sin u)/u]²   (2.2.30)

where

u = (πL_a/λ) sin θ

The pattern thereby has 3 dB width given, for small θ_H, by the two-sided beamwidth

θ_H ≈ 0.89 λ/L_a ≈ λ/L_a   *(2.2.31)

the rough result we used earlier in Chapter 1.

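The half-power point of the (sin u)/u pattern of Eqn. (2.2.30) has no closed form, but is easily found numerically; the sketch below bisects for it and recovers the beamwidth constant of Eqn. (2.2.31).

```python
import math

def pattern(u):
    """One-way power pattern [(sin u)/u]^2 of Eqn. (2.2.30)."""
    return 1.0 if u == 0.0 else (math.sin(u) / u) ** 2

# Find the u where the pattern falls to one half (-3 dB) by bisection;
# the pattern decreases monotonically from 1 over (0, pi/2).
lo, hi = 0.0, math.pi / 2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if pattern(mid) > 0.5:
        lo = mid
    else:
        hi = mid
u3 = 0.5 * (lo + hi)                 # about 1.392 rad

# u = (pi*La/lambda)*sin(theta); for small angles the two-sided width is
# 2*theta3 = (2*u3/pi) * lambda/La.
width_factor = 2.0 * u3 / math.pi
print(f"two-sided 3 dB beamwidth = {width_factor:.3f} * lambda/La")
```

The result, 0.886 λ/L_a, matches the 0.89 entry for uniform rectangular illumination in Table 2.1.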

It might be mentioned explicitly that the antenna response pattern Eqn.
(2.2.29), for example, is not the image pattern produced by a SAR in response
to an isolated point target. Although related, for example through Doppler
bandwidth, the antenna pattern is only one factor entering into the more
complicated SAR processor response function which will be presented in
Chapter 4.
It is usual to describe an antenna directivity function d_t(θ, φ) in terms of a
few scalar measures. The predominant ones of these are the directivity Eqn.
(2.2.19) and the 3 dB (two-sided one-way) beamwidth, as in Eqn. (2.2.31) for
the uniformly weighted linear array. In addition, the height of the highest
secondary lobe is also important, usually being the one next adjacent to the
main lobe. This is the peak sidelobe ratio (PSLR); it controls the extent to
which a point target outside the main beam, but in an unfortunate location at
the peak of a secondary lobe, will be sensed by the radar. Finally, when viewing
a distributed "target", such as in the case of SAR viewing earth terrain, the
sidelobe area beyond the first nulls of the pattern, in ratio to the total area of
the pattern, is important. This is the integrated sidelobe ratio (ISLR), and


TABLE 2.1 Pattern Parameters of Some Common Antenna Illuminations (θ_B — two-sided 3 dB beamwidth; PSLR — peak sidelobe ratio; D/D_0 — directivity relative to uniform illumination. Illumination of rectangular array assumed uniform in one dimension for calculation of D)

Illumination                (L/λ)θ_B   PSLR (dB)   D/D_0
Rectangular
  Uniform                     0.89        13        1.0
  1 − (2x/L)²                 1.15        21        0.83
  cos(πx/L)                   1.2         23        0.81
  cos²(πx/L)                  1.45        32        0.67
  Taylor, 25 dB, n̄ = 5        1.05        25        0.98
Circular
  Uniform                     1.02        18        1.0
  √(1 − r²)                   1.15        21        0.75
  1 − r²                      1.27        25        0.64

expresses the proportion of sensed radar power contributed by terrain outside the nominal field of view. Table 2.1 indicates some of these measures for a few representative aperture illuminations. (The Taylor illumination is discussed in Section 3.2.3.)

The different parameters in Table 2.1 affect SAR image response in different ways. Sidelobes in the azimuth dimension cause targets outside the nominal horizontal beamwidth to produce signal. Because of the resulting geometry, the corresponding Doppler frequency Eqn. (1.2.4) may lie outside the nominal bandwidth Eqn. (1.2.11). On the one hand, this would seem to have some potential advantage, because the resulting broadened Doppler band, if processed, would produce a finer azimuth resolution δx of Eqn. (1.2.22) than the nominal value L_a/2 of Eqn. (1.2.23). On the other hand, and more usually, this Doppler band broadening creates an ambiguity problem (Section 6.5.1).

If the radar pulse repetition frequency f_p is not high enough to properly sample the broadened Doppler band, as required by Eqn. (1.2.11), the higher frequencies will be "aliased" into a lower frequency region, as discussed in Section 6.5 and Appendix A, and diagrammed in Fig. 2.8. From Eqn. (1.2.4) and Eqn. (1.2.11), sampling at f_p corresponds to an unambiguous position span λRf_p/2V_s. A bright point target at azimuth position outside that band, say at

x_t > λRf_p/4V_s

for a side-looking system, will result in an apparent target at azimuth

x'_t = x_t − λRf_p/2V_s

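That folding of true azimuth position into the unambiguous span can be sketched directly. The geometry values below are hypothetical (chosen only to be representative of a spaceborne SAR), not taken from the text.

```python
def apparent_azimuth(x_t, lam, R, fp, Vs):
    """Fold a true azimuth position x_t into the unambiguous span of
    width lam*R*fp/(2*Vs), centered on zero, sampled at PRF fp."""
    span = lam * R * fp / (2.0 * Vs)     # unambiguous position span
    x = x_t
    # shift by whole spans until the result lies inside (-span/2, span/2]
    while x > span / 2.0:
        x -= span
    while x <= -span / 2.0:
        x += span
    return x

# Hypothetical L-band spaceborne geometry:
lam, R, fp, Vs = 0.24, 850e3, 1500.0, 7500.0
span = lam * R * fp / (2.0 * Vs)                 # 20.4 km
x_true = 12.0e3                                  # beyond span/2 = 10.2 km
print(apparent_azimuth(x_true, lam, R, fp, Vs))  # aliases to 12.0 - 20.4 = -8.4 km
```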
Figure 2.8 Ambiguous placement of target x_t at x'_t, due to ambiguity of Doppler frequency f_Dt at f'_Dt, caused by sampling at the PRF f_p < B. (The sketch also shows the azimuth beam footprint of width Rθ_H = Rλ/L_a.)

Sidelobes in the other (range) dimension contribute to another type of ambiguity, known in conventional radar as "second time around". In this case, the other side, Eqn. (1.2.10), of the ambiguity constraint Eqn. (1.2.12) is in question. The problem is diagrammed in Fig. 2.9. With the desire for a high value of f_p in order to sample adequately the Doppler spectrum, there may be more than one pulse "in flight" at any particular time. Although the slant range swath W_s defined by the main beam extent may be narrow enough to satisfy Eqn. (1.2.10), for large f_p the spacing of consecutive pulses may be so close that range sidelobe returns (from elements S_1, S_2 in Fig. 2.9, for example) are received concurrently with the mainlobe return (from element M). Antenna aperture weighting for azimuth sidelobe control is essential in alleviating the ambiguity effects indicated here and discussed in detail in Section 6.5.

The integrated sidelobe ratio has to do with suppression of contrast at edges between bright and dark parts of a scene. As in Fig. 2.10, energy entering from a bright distributed region through the sidelobes may artificially increase the apparent brightness of a darker region in the mainlobe of the beam. Suppression of a weak point target can also result from the same mechanism, through suppression of weak image points in the main beam by automatic gain control


acting on the signal produced by a strong extended target in a sidelobe. Such matters will be discussed in more detail in Chapter 6, dealing with the flight
SAR system design and the performance trade-offs involved.
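As a check on the uniform-illumination PSLR entry of Table 2.1, the first sidelobe of the [(sin u)/u]² pattern of Eqn. (2.2.30) can be located numerically; a minimal sketch:

```python
import math

def pattern_db(u):
    """Power pattern [(sin u)/u]^2 of the uniform linear aperture, in dB."""
    p = 1.0 if u == 0.0 else (math.sin(u) / u) ** 2
    return 10.0 * math.log10(p)

# Scan the first sidelobe region, between the first two nulls at u = pi and 2*pi.
step = math.pi / 100_000
peak = max(pattern_db(math.pi + i * step) for i in range(1, 100_000))
print(f"peak sidelobe ratio: {-peak:.2f} dB")   # about 13.26 dB
```

The 13.26 dB value rounds to the 13 dB listed in Table 2.1; the heavier tapers in the table buy their 21 to 32 dB sidelobe ratios with the beam broadening and directivity loss shown in the other two columns.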

2.3 THE TARGET CROSS SECTION

We now proceed to the next factor in the point target radar equation, Eqn. (2.1.1). This concerns the extent to which a target returns energy incident upon it back towards the radar.


Figure 2.9 Targets at S_1, S_2 in range sidelobes appear as "ghosts" in image.

If a target present at range R is in the center of the radar beam, and if it is small enough that the incident intensity I(R) is constant over the physical extent of the target, the scattering properties are summarized in a single parameter, the (radar scattering) cross section σ. This is defined in terms of the intensity actually received at the antenna due to scattering by a far distant target as (Fig. 2.3)

I_rec = σI(R)/4πR²   (2.3.1)

That is, σ is the target area we would infer, based on I_rec, by assuming area σ intercepted the transmitted beam in the far field, with the resulting incident power scattered isotropically. The value of σ depends on a multitude of parameters of the target. It need not have any direct relation to the actual frontal area presented by the target to the radar beam. The cross section of a target will be nearly zero if the target scatters little power back towards the antenna. This can occur because the target is small, or absorbing, or transparent, or scatters in some other direction, or possibly all of these. The cross section σ may be drastically larger than the target frontal area in the case that some electromagnetic resonance effect has been excited.

Only for the very simplest shapes (such as used in calibration measurements, Table 7.1) can the value of σ be calculated analytically, for example for a perfectly conducting sphere or a flat plate, and even in such cases σ depends markedly on wavelength. For shapes other than a sphere, σ depends strongly on the aspect angle of the target to the radar beam. In practice, one can only say that if a target at range R presents a cross section σ of some given value to the radar, then the radar system will detect it with some corresponding probability.

In remote sensing applications, the "targets" usually extend in physical size beyond what one would regard as a point, for example in observation of the earth surface. In such a case, each element dA of the extended target (terrain, sea surface, etc.) can be assigned a local value of σ. This inferred target area σ, relative to the geometrical area dA, is the specific backscatter coefficient at the particular point in question on the extended target

σ⁰ = σ/dA   (2.3.2)

Figure 2.10 Bright terrain seen by a range sidelobe masks dimmer targets in the main beam.


This quantity σ⁰ usually depends on wavelength and on the aspect from which the terrain element is viewed. Here we want to discuss that quantity. In Section 2.8 we will discuss a form of the radar equation which is often stated for distributed targets.

Let us begin by introducing the notion that the specific radar cross section σ⁰ for a terrain element is appropriately considered as a random variable (Ulaby et al., 1982, p. 476). Consider some nominal region of the earth surface which we want to image using a SAR. The smallest area dA of that surface with which we will be concerned is of the order of the resolution cell of the ultimate SAR image. Usually this will be large enough to encompass multiple physical scattering centers, each of size the order of the radar carrier wavelength, and each of which responds to the incident electric field vector from the radar transmitter. It is the superposition at the receiver of those elemental field phasor responses over the region dA which determines the voltage at the receiver input, and thereby the specific cross section σ⁰ of that element dA.

Except in idealized situations, there will be different configurations of elemental scatterers in each terrain element dA of a larger nominally homogeneous region. The value of σ⁰ taken over a collection of nominally similar terrain elements will therefore not be constant, but will rather appear to be multiple realizations of a random quantity. The implication of this is that it is usually unfruitful to attempt to define a single deterministic backscatter coefficient for each terrain element and to replicate the terrain map of σ⁰ in a SAR image. Even if a terrain element dA contained one, or at most a few, dominant point scattering centers, so that a single deterministic value σ⁰ might apply, aspect dependence may make the value σ⁰ change in an apparently random fashion over the course of a synthetic aperture.

The backscatter voltage responses due to different isolated terrain elements can thus in most cases reasonably be modeled as random variables. Since even two grossly similar terrain elements are usually physically different at the scale of the radar wavelength, the backscattered fields of two elements can further be taken as independent random variables. As a consequence of this independence, the receiver (ensemble) average power for a single pulse, viewing a larger terrain region, can be taken as the sum of the average powers which would have resulted if each terrain element in view of the radar were in isolation
(power superposition).
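The power-superposition argument lends itself to a quick numerical check. The following sketch (ours, not the book's; the scatterer count, unit amplitudes, and trial count are arbitrary illustrative choices) sums elemental phasors with independent random phases over many realizations of a resolution cell, confirming that the ensemble mean intensity equals the sum of the individual average powers even though any single realization fluctuates widely; this fluctuation is the speckle behavior taken up in Section 5.2.

```python
import cmath
import math
import random

random.seed(0)

def cell_voltage(n_scatterers: int, amp: float = 1.0) -> complex:
    """Coherent superposition of elemental responses with random phases."""
    return sum(amp * cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
               for _ in range(n_scatterers))

n, trials = 50, 20000
intensities = [abs(cell_voltage(n)) ** 2 for _ in range(trials)]
mean_i = sum(intensities) / trials

# Power superposition: the ensemble mean intensity equals n * amp**2,
# the sum of the isolated scatterer powers.
print(mean_i / n)  # close to 1.0

# A single realization (one image pixel) is roughly exponentially
# distributed about that mean, the origin of speckle.
below_mean = sum(1 for i in intensities if i < mean_i) / trials
print(below_mean)  # close to 1 - 1/e = 0.632
```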
Using the defining Eqn. (2.3.2), the backscattered power intensity Eqn. (2.3.1)
at the antenna for a single terrain element in direction (θ, φ) from the radar is

    dI_σ0 = [σ0(θ, φ)I(R, θ, φ)/4πR²] dA    (2.3.3)

Using power superposition, the ensemble average received intensity for a single
pulse, taken over the ensemble of possible interference patterns in each resolution
cell, is then

    dĪ_σ0 = [σ̄0(θ, φ)I(R, θ, φ)/4πR²] dA    (2.3.4)

where σ̄0(θ, φ) is the ensemble mean of σ0 in each particular cell dA.
Conventionally, this ensemble mean is given the symbol σ̄0:

    σ̄0(θ, φ) = E σ0(θ, φ)

It is just the map σ̄0(θ, φ) of this quantity, the backscatter coefficient, which is
sought as the (speckle) image of the terrain in view of the SAR.

Since σ0 is random, any particular realization of an image cell will not portray
its mean value σ̄0, but rather a value governed by the probability distribution
of σ0. The entire image will be made up of realizations of the random variable
σ0 appropriate to each resolution element. The sought image is the assemblage
of means σ̄0 of the random variables σ0 in each resolution cell. In Section 5.2
we will discuss procedures for estimation of this mean image, and for reduction
of the estimator variance (speckle noise).

If some region of the terrain in view is reasonably homogeneous over, say,
several hundred resolution elements (wheat fields, the sea surface, etc.), the
mean specific backscatter σ̄0 (the backscatter coefficient) may be assumed
constant over that segment of the image. The above expression Eqn. (2.3.4) for
average intensity can then be written

    dĪ_σ0 = σ̄0[I(R, θ, φ)/4πR²] dA    (2.3.5)

a form which we will expand upon in Section 2.8 in developing the "SAR radar
equation". For the present, we will return to the relation for point targets.

With the increasing availability of radars which respond to the vector
electromagnetic field (polarimetric radars), a more general form of the
backscatter coefficient has become important. Suppose that the electric field
launched by the antenna towards a scattering element (Fig. 2.4) is

    E_t(R, t) = [E_h^t(x, y)x̂ + E_v^t(x, y)ŷ] exp[j(ωt − kz)]

where x̂, ŷ are unit vectors in space. Then E_h^t and E_v^t are the horizontal and
vertical polarization components of the field. The polarization component
phasors are

    E_h^t = a_h exp(jφ_h),    E_v^t = a_v exp(jφ_v)

so that the component vector of the transmitted field is

    h_t = (a_h, a_v exp(−jδ))ᵀ

where δ = φ_h − φ_v.

Similarly, the scattered field at the receiver will have a plane wave
representation in terms of a polarization vector

    h_s = S h_t

in which

    S = | S_hh  S_hv |
        | S_vh  S_vv |

is the complex scattering matrix of the target. Its terms indicate the extent to
which the two orthogonal spatial components of the incident wave each scatter
into the two orthogonal components of the scattered wave.

Finally, if the polarization vector h_r characterizes the extent to which receiver
input voltage is induced by the two components h_s of the scattered wave at
the antenna, the receiver voltage phasor is

    V = h_rᵀ S h_t

where the superscript T indicates the transpose. The elements of S may be
measured using two transmitted waves with only the horizontal or vertical
direction excited, and two receivers sensing separately the horizontal and
vertical components. The determination of these coefficients from radar data
will be discussed in Chapter 7.
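As a numerical illustration of the receiver voltage phasor V = h_rᵀ S h_t, the short sketch below (ours, not the book's) evaluates the polarimetric channels for a hypothetical target with no cross-polarized response (S_hv = S_vh = 0); the unit transmit and receive vectors select the H and V components in turn.

```python
# V = h_r^T S h_t for a 2x2 complex scattering matrix S.

def receiver_voltage(h_r, S, h_t) -> complex:
    """Bilinear form h_r^T S h_t; h_r and h_t are complex 2-vectors."""
    s_ht = [S[0][0] * h_t[0] + S[0][1] * h_t[1],
            S[1][0] * h_t[0] + S[1][1] * h_t[1]]
    return h_r[0] * s_ht[0] + h_r[1] * s_ht[1]

# Hypothetical target with S_hh = S_vv = 1 and no cross-pol terms.
S = [[1 + 0j, 0j], [0j, 1 + 0j]]
print(receiver_voltage([1, 0], S, [1, 0]))  # HH channel: (1+0j)
print(receiver_voltage([0, 1], S, [1, 0]))  # VH channel: 0j
```

Transmitting an H-only and then a V-only wave while receiving both components, as the text describes, recovers the four elements of S one at a time.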
2.4  THE ANTENNA RECEIVING APERTURE

In the simple case of an isolated target in the far field, we have expressed the
intensity of backscattered power at the antenna as

    I_rec = P_t G′σ/(4πR²)²    (2.4.1)

a value which we will assume to be constant over the physical aperture of the
antenna. This intensity represents the scattered electromagnetic field incident
on the antenna structure. Some, all, or none of that field will actually be
effective in introducing signal power into the receiver circuits, which is
necessary in order to detect a target. Again, in building the radar equation,
a single parameter is introduced to cover a number of effects and assumptions,
namely, the antenna (receiving) aperture A_r.

The receiving aperture of an antenna at a particular frequency is an area
defined in terms of the intensity I_rec at the antenna structure and the power
P_r flowing towards the receiver, across the antenna/receiver interface.

The receiver input is taken at the same point in the circuitry as the antenna
output, which we will assume to be the connection between the antenna
structure and the feed line to the first stage of electronics.
The extent to which the power potentially available to be extracted from
the electromagnetic field at the antenna will actually appear in the receiver
depends on the relative impedance levels in the system. Some power potentially
available will be lost through reflection (re-radiation) of the incident field
away from the antenna. In addition, since the elements of any real antenna
will have some non-zero resistance, part of the power represented by antenna
currents induced by the incident intensity I_rec will be lost as heat in the antenna.
Both these effects are expressed through the antenna impedance.
The antenna impedance has two components: that due to resistance,
inductance, and capacitance in the structure itself, and a less obvious component,
the "radiation" impedance. This latter expresses the re-radiation of power
through the coupling between the impinging field and the currents induced in
the antenna conductors. Both these quantities can be calculated for simple
structures, or measured more or less precisely.
The power P_r flowing from the antenna port towards the receiver for a
particular incident intensity I_rec defines the antenna receiving aperture A_r by

    A_r = P_r/I_rec    (2.4.2)

The aperture A_r depends on receiver input impedance through P_r. For
maximum possible transfer of power potentially available from the field into
the receiver, the receiver input impedance must be the conjugate of the total
antenna impedance, the maximum power transfer theorem of AC circuit theory.
The corresponding maximum value of A_r is defined to be the effective
(receiving) aperture A_e. This value A_e depends on both the antenna radiation
efficiency ρ_e and the antenna aperture efficiency ρ_a defined in Eqns. (2.2.17)
and (2.2.21).
In the common case that the same antenna and microwave circuitry is used
for reception as for transmission, reciprocity applies (Silver, 1949, Ch. 2) such
that the effective aperture A_e, applicable to reception, is precisely the same
area as appears in the transmission power gain formula, Eqn. (2.2.24)

    G_t = 4πA_e/λ²    (2.4.3)

with

    A_e = ρ_a A_a    (2.4.4)

where A_a is the geometric area of the antenna. Correspondingly, the directional
aperture Eqn. (2.2.26) applies also for reception. In the same way, the gain
G′(θ, φ) is the power received per unit solid angle relative to that for an
isotropic antenna. The same remarks apply for the directivity function D′(θ, φ).
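Equations (2.4.3) and (2.4.4) combine directly into a gain estimate. In the sketch below (ours; the antenna dimensions, the aperture efficiency of 0.6, and the L-band wavelength are assumed illustrative values, not figures from the text):

```python
import math

def effective_aperture(geometric_area_m2: float, aperture_efficiency: float) -> float:
    """A_e = rho_a * A_a, Eqn. (2.4.4)."""
    return aperture_efficiency * geometric_area_m2

def peak_gain(a_e_m2: float, wavelength_m: float) -> float:
    """G = 4*pi*A_e / lambda**2, Eqn. (2.4.3); by reciprocity the same
    A_e applies to transmission and reception."""
    return 4.0 * math.pi * a_e_m2 / wavelength_m ** 2

A_a = 10.7 * 2.16                   # geometric area, m^2 (assumed dimensions)
A_e = effective_aperture(A_a, 0.6)  # aperture efficiency 0.6 (assumed)
G = peak_gain(A_e, 0.235)           # L-band wavelength in m (assumed)
print(f"A_e = {A_e:.1f} m^2, G = {10.0 * math.log10(G):.1f} dB")
```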

Figure 2.11  Circuit with resistor noise equivalent voltage source.

We take the inductor branch current i and capacitor branch voltage v. The
stored energy in the system is quadratic in the state variables

    E = (1/2)(Cv² + Li²)

Hence we have a diagonal quadratic energy functional, and equipartition applies.
Since equipartition holds, we know at once that the ensemble average noise
energy associated with the capacitor voltage is kT/2. On the other hand, we
can calculate the average noise energy as (Whalen, 1971, p. 47)

    E_C = (C/2)⟨v²⟩ = (C/2) ∫₀^∞ |H(f)|² N(f) df    (2.5.3)

Here ⟨v²⟩ is calculated as the integrated power spectral density of the random
variable v, N(f) is the sought (one-sided) power spectral density of the noise
voltage source e_n(t), and

    H(f) = (1 + jωRC + R/jωL)⁻¹

is the system transfer function from e_n to v. By choosing R, L, C appropriately,
we can make |H(f)|² arbitrarily narrow around the particular frequency f₀,
and write Eqn. (2.5.3) approximately as

    E_C = (C/2)N(f₀) ∫₀^∞ |H(f)|² df = N(f₀)/8R = kT/2    (2.5.4)

evaluating the integral using Gradshteyn and Ryzhik (1980, Section 3.112.3).
Letting the arbitrary frequency f₀ in Eqn. (2.5.4) be labeled as a general
frequency f yields

    N(f) = 4kTR    *(2.5.5)

which is Nyquist's theorem. By Eqn. (2.5.1), the noise is Gaussian, and by Eqn.
(2.5.5) it is white, with the indicated power spectral density.

A quantum mechanical refinement (van der Ziel, 1954, p. 301) of the statistical
mechanical argument results in a more precise form of the Nyquist theorem:

    N(f) = 4kTRp(f)    (2.5.6)

where

    p(f) = (hf/kT)/[exp(hf/kT) − 1]

is the Planck factor. Neglecting the Planck factor, which contributes a non-white
character to the noise, results in an error of less than 5% in noise power spectral
density so long as hf/kT < 0.1. At radar frequencies, say f < 35 GHz, this allows
the Planck factor to be neglected for T > 17 K. In some applications, for example
sky noise or very low noise receiver front ends, equivalent temperatures below
that limit may be in question, in which cases the more precise form Eqn. (2.5.6)
should be used.

Thus we have a basic result, supported independently by observations. The
thermal noise equivalent source voltage in a resistor of resistance R at
temperature T is a Gaussian random process with a constant power spectral
density (white noise) 4kTR. Further (van der Ziel, 1954, p. 17), the same result
holds for any passive system at uniform temperature, where the resistance is
the equivalent resistance "looking back into" the output terminals of the system.
If such a system is connected to an impedance matched load, the one-sided
spectral density of the power delivered to the load in W/Hz is just

    N_a(f) = 4kTR/4R = kT    *(2.5.7)

This is the "available power" spectral density of the noise source. If attention
is confined to a frequency band of width B, say by a lossless filter circuit, the
thermal noise power (W) delivered to the matched load is kTB. It is quantities
of this latter form which will appear in the final equation for SNR.
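The size of the Planck correction is easy to check numerically. The sketch below (ours, not the book's) evaluates the factor p(f) of Eqn. (2.5.6) and confirms the quoted bounds: a fraction of a percent at 35 GHz and 290 K, and just under 5% at the limit hf/kT = 0.1.

```python
import math

K_B = 1.380649e-23         # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34  # Planck constant, J s

def planck_factor(f_hz: float, t_kelvin: float) -> float:
    """p(f) = (hf/kT)/[exp(hf/kT) - 1], the correction to N(f) = 4kTR."""
    x = H_PLANCK * f_hz / (K_B * t_kelvin)
    return x / math.expm1(x)  # expm1 keeps accuracy for small x

# At 35 GHz and a room-temperature resistor the white-noise form is safe.
err_radar = 1.0 - planck_factor(35e9, 290.0)
print(err_radar)  # a fraction of a percent

# At the limit hf/kT = 0.1 the error is still below 5%.
err_limit = 1.0 - 0.1 / math.expm1(0.1)
print(err_limit)
```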

2.6  SOURCE AND RECEIVER NOISE DESCRIPTION

Theoretical calculation of the noise power at the output of a system in a


frequency band of interest is not often feasible. Direct measurement of noise
power is usually more practical. Such measurements can be made for the various
elements of a system and the results combined into equivalent parameters of
the total system under actual operating conditions. In our application, two
noise components must be considered, noise entering from the antenna port
into the input terminals of the receiver system and noise generated in the receiver
system itself. The parameters used to characterize these are respectively the

100

THE RADAR EQUATION

2.6

Figure 2.12

General system with source and load resistances.

"equivalent noise temperature" of the antenna in operation and the receiver


"noise figure". Ulaby et al. (1981, Ch. 4) set up the framework for dealing
with noise external to the radar, while a careful account of receiver noise
considerations has been given by Pettai ( 1984 ).
For detection of point targets, we are concerned only with the signal to noise
power ratio at the point in the system at which the data is digitized, or at which
detection decisions are made. Absolute levels of signal and noise separately
at that point are, however, crucial to the calibration questions treated in
Chapter 7. In this chapter, we consider only additive noise. The effects of speckle
noise will be discussed in Section 5.2, while saturation, quantization, and bit
error noise will be discussed in Chapter 6.
In Fig. 2.12 we show a simple system of source, amplifier, and load. Regardless
of load resistance R_L, assumed noiseless, we obtain the same SNR at the load,
since output signal and additive noise power will change in the same proportions
as R_L varies. Thus, we could even consider R_L to be infinite, so that the output
signal power P_s and noise power P_n are separately zero but with the same fixed
ratio as for any other value of R_L. Since load resistance has no effect as regards
SNR (assuming load self noise is negligible), when convenient in discussion it
is customary to assume that the load is in fact matched: R_L = R_out in Fig. 2.12.
(In the case of impedances, Z_L = Z*_out.)

At the input to the receiver of Fig. 2.12 we usually want a resistance match
R_in = R_s (more generally, an impedance match) for an operational reason. The
output signal power P_s will be due only to signal entering the receiver input,
while the output noise power P_n is due to both input noise and noise generated
internal to the receiver. The degree of impedance match at the input affects
the absolute level of signal power entering the receiver, and thereby affects the
ability of the signal to compete against internal receiver noise. To maximize
signal effectiveness, we normally want to use a receiver nearly matched to the
source. More precisely (Pettai, 1984, p. 149), the receiver input impedance should
be tuned for maximum output SNR, a condition usually close to that of an
input impedance match. In calculations, an impedance match is assumed and
any SNR difference due to tuning is accounted for by a "loss" factor adjoined
to the radar equation.
Let us now summarize and illustrate the most common techniques for
characterizing the noise with which signal must compete at the output of a
system. We consider first the source, then the receiver, and finally the
combination.
2.6.1  Source Noise

The point in the radar system which separates source from receiver is arbitrary.
As we have done earlier, we shall take the separating point as the signal port
of the antenna structure. This is the point at which received power P_r in
Eqn. (2.4.2) is taken in defining the antenna receiving aperture. All elements
prior to that point in the receiving chain contribute noise to be counted in
source noise power. Past that point noise is counted against the receiver.

In turn, source noise is separated broadly into two parts. The first is antenna
noise due to such local effects as thermal noise in the resistance of the antenna
current paths and thermal noise radiated by any radome structure. The second
is external noise, due to relatively distant noise sources (thermal or interfering).
In either case, it is conventional to describe a noise source formally by a
temperature, in analogy to the formula for available noise power spectral density
Eqn. (2.5.7) from a resistor at temperature T_s:

    N_s(f) = kT_s    (2.6.1)

Thus the external noise sources, as viewed from the antenna terminals, are
assigned a temperature T_ext, while the local sources have a temperature T_ant.
These temperatures may or may not relate to the physical temperature of any
actual object.

In considering the expression Eqn. (2.6.1), we can assume that different
physical sources produce independent random noise voltage waveforms. Hence
noise powers, and thereby noise temperatures, from separate sources simply
add numerically. The expression Eqn. (2.6.1) is frequency dependent, in general,
since the actual noise represented by the thermal noise formalism may not be
white, for example in the case of an interfering signal in view of the antenna,
or a radio star radiating at some specific frequency. In the case of a narrowband
noise, the temperature is implied to refer to the center of the band. More
generally, an equivalent constant temperature is used across the band of the
receiver such that kT_s B_n gives the correct total power, where B_n is a measure
of system bandwidth appropriate for noise calculations.
External Source Noise

Let us consider first the external source temperature, defined by N_ext = kT_ext,
where N_ext is whatever noise power would flow out of the antenna into a
matched receiver system which could not be accounted for by noise sources
local to the antenna structure. It will be helpful to develop some of the
conventions used to describe the situation. Ulaby et al. (1981, Ch. 4) present a
more complete summary.
Radiation reaching the earth from the sky is described in terms of Planck's
law. The motivation for this is that the frequency dependence of radiation
reaching the earth from the principal physical source, the sun, is thereby well
described at visible and infrared frequencies in terms of a single temperature
parameter.

Consider first a closed cavity whose walls are at constant physical temperature
T. The walls of the cavity are assumed to constitute a black body, an idealized
passive object which by definition absorbs and re-radiates all incident radiation.
It is a basic result of theoretical physics (Page, 1935, p. 547) that the radiation
inside the cavity is omnidirectional and homogeneous, with energy frequency
spectral density per unit volume at any point in J/m³ Hz (Planck's law)

    u = 8πh(f/c)³[exp(hf/kT) − 1]⁻¹    (2.6.2)

The apparent intensity spectral density per unit solid angle incident on any
point in the cavity in W/m² sr Hz is then

    B = uc/4π = (2hf³/c²)[exp(hf/kT) − 1]⁻¹    (2.6.3)

This is defined as the "brightness" (or radiance) (Ulaby et al., 1981, p. 192) of
the source, the cavity wall.

We ultimately want to calculate the power incident on an antenna directive
aperture A_d. To that end, we need the power per unit area impinging on the
antenna from various directions (Fig. 2.13). The noise power density available
to the antenna in W/Hz is then

    N_ext = ∫ B A_d dΩ

where B is the brightness perceived by the antenna. In radiometry (Slater, 1980,
p. 88; Nicodemus, 1967; Meyer-Arendt, 1968), the surface giving rise to the
radiation receives central attention, and is assigned a spectral radiance in
W/m² sr Hz

    L = J/(A_s cos θ)    (2.6.4)

where J is the spectral radiant intensity (W/sr Hz), the angular power spectral
density emitted by surface area A_s in direction θ.

An antenna of directive area A_d at range R subtends a solid angle A_d/R² as
seen by the radiating surface element A_s (Fig. 2.13). Thus the power N
impinging on the antenna surface in W/Hz is N = L(A_s cos θ)(A_d/R²). At the
antenna, the electromagnetic intensity in W/m² Hz impinging from the
direction of the elemental source A_s is N/A_d. The solid angle subtended by that
source, as viewed by the antenna, is A_s(cos θ)/R², so that the antenna perceives
an incident intensity in W/m² sr Hz

    B = L    (2.6.5)

This quantity is called the brightness of the source in remote sensing (Stewart,
1985). Although Eqn. (2.6.5) indicates that the intrinsic source property,
radiance, is numerically equal to the sensed quantity, brightness, more generally
the latter is defined to take into account the spectral characteristics of the
sensing instrument.

Figure 2.13  Brightness B in W/m² sr Hz at a collector due to surface of radiance L.

An emitting surface which is perfectly diffuse obeys Lambert's law: L = const,
independent of aspect angle θ. Equation (2.6.5) indicates that such a surface
element as perceived by an antenna corresponds to a perceived power density
which is independent of aspect. The source thus appears omnidirectional to the
viewer.

As noted in Section 2.5, at microwave frequencies and temperatures above a
few tens of Kelvin, the Planck factor Eqn. (2.5.6) evident in Eqn. (2.6.3) is
negligible. When neglected, the result is the Rayleigh-Jeans law:

    B = 2kTf²/c²    *(2.6.6)

It happens that the main contributor to radio noise, the sun, as perceived from
earth generally obeys the functional form of the Planck law, Eqn. (2.6.3),

provided a temperature T = 5900 K is used (Elachi, 1987, p. 47). At radio
frequencies, however, many electromagnetic effects intrude on the ideal form
Eqn. (2.6.6), and a general frequency dependent "temperature" T_s(f) must be
used to describe correctly the radiation spectrum of the sun. Below 30 GHz, in
fact, roughly (Hogg and Mumford, 1960)

    (2.6.7)

Consider now the situation of an antenna viewing a radiating black body
(Fig. 2.13). The source radiates with brightness B as in Eqn. (2.6.6). A linearly
polarized antenna receives half this power, so that for that case (Gagliardi,
1978, p. 99)

    B = kTf²/c²    (2.6.8)

The antenna directive aperture A_d(θ, φ) in general will not be omnidirectional.
The definition Eqn. (2.2.27) expresses the directional properties through the
antenna directivity:

    A_d(θ, φ) = (λ²/4π)D′(θ, φ)    (2.6.9)

The antenna available power, without considering antenna self-loss, is then

    N_ext = ∫ B(θ, φ)A_d(θ, φ) dΩ = kT_ext    (2.6.10)

using Eqn. (2.6.1). The region of integration is that portion of the antenna
pattern which views the black body. In the case of an antenna inside a cavity,
from Eqn. (2.2.5)

    ∫ B A_d(θ, φ) dΩ = (Bc²/4πf²) ∫_sphere D′(θ, φ) dΩ = Bc²/f²

since ∫_sphere D′(θ, φ) dΩ = 4π, and we simply recover T_ext = Bc²/kf² = T.

An extension of this formalism is used to describe radiation reaching the
earth preferentially from various directions in the sky, or indeed any radiation
reaching an antenna. The expression Eqn. (2.6.8) motivates defining a
directionally dependent incident power density (brightness) formally in terms of
a directionally dependent temperature as (Gagliardi, 1978, p. 99)

    B(θ, φ) = (kf²/c²)T_s(θ, φ)    (2.6.11)

Even though the Planck factor in Eqn. (2.6.3) may not be negligible in some
applications, Eqn. (2.6.11) as it stands defines T_s such that the correct value for
B results from its use.

Proceeding one final step, it is then useful to extend the black body
Eqn. (2.6.10) to the general case Eqn. (2.6.11), and to express the result in terms
of an available noise power spectral density into a matched load in the form
of Eqn. (2.5.7)

    N_ext(f) = kT_ext(f)    (2.6.12)

where T_ext is a (possibly frequency dependent) temperature so defined by the
actual noise density at the antenna terminals. Considering the directionally
dependent brightness Eqn. (2.6.11), the available power density Eqn. (2.6.10)
takes the form

    N_ext(f) = ∫ B(θ, φ)A_d(θ, φ) dΩ = (k/4π) ∫ D′(θ, φ)T_s(θ, φ) dΩ    (2.6.13)

where we use Eqn. (2.6.11) and the definition Eqn. (2.6.9). Comparing
Eqn. (2.6.13) with Eqn. (2.6.12) then yields

    T_ext = (1/4π) ∫ D′(θ, φ)T_s(θ, φ) dΩ    *(2.6.14)

as a (possibly frequency dependent) temperature parameter in terms of which
the available external noise power spectral density is expressed (Gagliardi, 1978,
p. 100).

If the directional temperature T_s(θ, φ) is nominally constant in direction at
value T_s over some sector Ω_0, a directivity weighted equivalent temperature can
be defined from Eqn. (2.6.14)

    T_ext = T_s D_r    (2.6.15)

where

    D_r = (1/4π) ∫_Ω0 D′(θ, φ) dΩ    (2.6.16)

is a receiving directivity taking into account the sidelobe structure of the antenna.
Since the antenna directivity is by definition normalized as in Eqn. (2.2.5), the
directivity D_r is always less than unity. In the case of a nominal point source,
such as the sun or a planet, the temperature function in Eqn. (2.6.11) is
nonzero over a very narrow sector; D_r of Eqn. (2.6.16) is then approximately
D′(θ_0, φ_0)Ω_0/4π, where Ω_0 is the small solid angle subtended by the source.
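The adequacy of the Rayleigh-Jeans form at radar frequencies can be verified directly. A sketch (ours; the frequency and temperature are illustrative choices) comparing Eqn. (2.6.3) with Eqn. (2.6.6):

```python
import math

K_B = 1.380649e-23         # J/K
H_PLANCK = 6.62607015e-34  # J s
C_LIGHT = 2.99792458e8     # m/s

def brightness_planck(f_hz: float, t_k: float) -> float:
    """Exact brightness of Eqn. (2.6.3), W/(m^2 sr Hz)."""
    return (2.0 * H_PLANCK * f_hz ** 3 / C_LIGHT ** 2) / math.expm1(
        H_PLANCK * f_hz / (K_B * t_k))

def brightness_rj(f_hz: float, t_k: float) -> float:
    """Rayleigh-Jeans approximation, Eqn. (2.6.6)."""
    return 2.0 * K_B * t_k * f_hz ** 2 / C_LIGHT ** 2

f, T = 1.275e9, 300.0  # L-band, earth-like temperature (illustrative)
ratio = brightness_rj(f, T) / brightness_planck(f, T)
print(ratio)  # within a small fraction of a percent of 1
```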
If the antenna is pointed to the night sky, and away from any point sources
of radio noise, T_ext is due mainly to galactic radiation. For clear sky, the value
of T_s in Eqn. (2.6.13) is quite low at radar frequencies, although significant at
radio frequencies. The nominal temperature variation (K) is (Skolnik, 1980,
p. 462; Gagliardi, 1978, p. 102)

    T_n = 10(10⁹/f)²    (2.6.17)

On the other hand, if the antenna were pointed at the sun, a very high value
T_s as in Eqn. (2.6.7) would be expected over the narrow sector of the sun's disk.
At typical radar frequencies, pointing away from the sun the main noise
contribution is from the sun's radiation scattered into the antenna by the earth's
atmosphere. Combined with galactic noise, the result is nearly constant at
T_n = 10 K over the radar band (Gagliardi, 1978, p. 103). In the case of a SAR,
with the antenna viewing the earth surface, the external noise temperature can
be calculated nominally using Eqn. (2.6.15) for a body at 300 K. The factor
Eqn. (2.6.16) results by integration of the beam pattern over the radar footprint.

Since the external noise in the environment of the antenna is directionally
dependent, as well as frequency dependent, even at a specified frequency, the
calculation of a single temperature T_ext for use in the radar equation involves the
antenna sidelobe structure, the pointing direction of the main beam, the type of
atmospheric layers in view of the antenna, and so on. Skolnik (1980, Ch. 12)
discusses many of the considerations involved. The user of the radar equation
sweeps all these considerations into a single parameter which will presumably
be supplied: the total source external equivalent noise temperature T_ext.
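The nominal models above are simple to evaluate. A sketch (ours; the L-band frequency and the receiving directivity value are assumed for illustration) of Eqns. (2.6.15) and (2.6.17):

```python
def galactic_sky_temp(f_hz: float) -> float:
    """Nominal clear-sky galactic temperature, Eqn. (2.6.17): 10*(1e9/f)**2 K."""
    return 10.0 * (1e9 / f_hz) ** 2

def external_temp(t_source_k: float, d_r: float) -> float:
    """Directivity-weighted equivalent temperature, Eqn. (2.6.15)."""
    return t_source_k * d_r

# Night sky at L-band: only a few kelvin.
t_sky = galactic_sky_temp(1.275e9)
print(t_sky)

# SAR main beam filling the footprint of a nominal 300 K earth, with an
# assumed receiving directivity D_r = 0.9 from Eqn. (2.6.16).
t_ext = external_temp(300.0, 0.9)
print(t_ext)  # 270.0
```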

Intrinsic Antenna Noise

The other component of source noise is thermal noise arising in the lossy portions
of the antenna structure. These effects are lumped together into an antenna
temperature T_ant, again defined such that kT_ant is the correct available noise
power density from the antenna, if the antenna source noise N_ext were not
present. We will suppose that these losses are expressed as a portion of the
available signal power reaching the antenna which is not available at the antenna
output terminals, that is, by the antenna radiation efficiency ρ_e.

In general, suppose that only some portion 1/L < 1 of the power available
from a source is available at the output of a system (Fig. 2.14): P_oa = P_ia/L. (In
the antenna case, L = 1/ρ_e.) This available power loss implies power absorption
in the system. (Available power relations become actual power relations in
operation if the system impedance is matched at input and output.) Such power
absorption implies in turn the presence of resistive elements, which generate
internal thermal noise which we want to characterize.

Figure 2.14  Available power and physical temperature of lossy system. P: signal; kT: noise.

Suppose that the circuit in question were connected to a source resistance at
its input, and that the combination of source resistance and circuit were at a
physical temperature T_phys. The available input noise power density from the
source resistor is then kT_phys by Eqn. (2.5.7), so that the available output noise
power density attributable to the input must be kT_phys/L. On the other hand, the
total available output noise power density from the source and circuit
combination at temperature T_phys must also be kT_phys, as for any system at
constant temperature. The difference between available output power density
and that attributable to the source is then just

    N_int = kT_phys(1 − 1/L)    (2.6.18)

This is necessarily attributable to the circuit itself. Referring this circuit-generated
output noise back to the circuit input, using the attenuation 1/L in reverse
direction, results in an equivalent input noise temperature component

    T_e = (L − 1)T_phys    *(2.6.19)

to be added to the actual source noise temperature to account for the resistive
noise in the system. For example, for a matched attenuator at T_phys = 290 K
which delivers 63% of its input power to its output port (L = 1.58, or a 2 dB
loss), the equivalent input noise temperature is T_e = 170 K, which must be added
to the source noise temperature.

In application to the antenna noise question, we usually want the source noise
temperature T_s referred to the antenna output (receiver input) terminals, just
beyond the loss element represented by the antenna efficiency parameter ρ_e. In
that case, the available noise power density, referred to the antenna output, is

    N_s = kT_s = k[T_ext/L + (1 − 1/L)T_phys]    (2.6.20)

where T_phys is the temperature of the antenna structure. This value should be
used in conjunction with the antenna power gain G_r in the radar equation,
because G_r includes the signal attenuation factor ρ_e due to antenna losses. On
the other hand, the value

    T′_s = T_ext + (L − 1)T_phys

appropriate to the input of the lossy system represented by ρ_e corresponds to
the antenna directivity D′. The radar equation could be written using either pair,
but the former is conventional.

In Section 2.6.3 we will work through an example further illustrating the use
of such expressions as we have been developing. First we will discuss the
characterization of noise in the receiver.
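The attenuator numbers above reproduce directly. A sketch (ours) of Eqns. (2.6.19) and (2.6.20); the 270 K external temperature is an assumed illustrative input:

```python
def loss_from_db(loss_db: float) -> float:
    """Convert a loss in dB to the available-power ratio L >= 1."""
    return 10.0 ** (loss_db / 10.0)

def equivalent_input_temp(loss: float, t_phys_k: float) -> float:
    """T_e = (L - 1) * T_phys, Eqn. (2.6.19)."""
    return (loss - 1.0) * t_phys_k

def source_temp_at_output(t_ext_k: float, loss: float, t_phys_k: float) -> float:
    """T_s = T_ext/L + (1 - 1/L) * T_phys, Eqn. (2.6.20)."""
    return t_ext_k / loss + (1.0 - 1.0 / loss) * t_phys_k

L = loss_from_db(2.0)                          # ~1.58, the 2 dB example
print(equivalent_input_temp(L, 290.0))         # ~170 K, as in the text
print(source_temp_at_output(270.0, L, 290.0))  # referred to the antenna output
```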

2.6.2  Receiver Noise

In discussing receiver noise, the actual impedance conditions make a difference
in SNR. Let us continue to model the receiver as a constant power gain G_op over
a band B_n. The receiver input signal and noise powers are P_s, P_n. Noise due to
internal receiver sources adds some amount N_int to output noise density with no
corresponding signal enhancement. The resulting output SNR

    SNR_o = G_op P_s/(G_op P_n + N_int B_n)    (2.6.21)

depends on the absolute level of receiver input noise power P_n. This in turn
depends on the input impedance conditions, which govern the extent to which
available source noise power is delivered to the circuit.

Because the available output power Eqn. (2.5.7) of a thermal noise source is
independent of source or load impedance, it is a great convenience to assume in
system noise calculations that all units have matched impedance sources and
loads. Were such to be the case, the actual powers would be identical to the
available powers. Such is not necessarily the case. However, we can assume load
matching, since output SNR is independent of load (with some exceptions
discussed by Pettai (1984)), signal and noise being treated the same by the load.
But source mismatch will require a factor in the radar equation to adjust the
results, calculated assuming source matching and available power, to the actual
case. For the moment, we assume impedance match between all system elements.

Any system which generates noise can be characterized by a "noise factor"
F, or a "noise figure" 10 log F. (The terminology is not consistently applied;
often "noise figure" is used for both.) The unwanted output might be due to
internal noise, thermal (white Gaussian) or otherwise. It might also be due to the
deterministic generation from the input of frequency components which later
interfere with signal (nonlinear effects present in mixers, for example), or loss of
signal power in converting from RF to IF. Some possibilities have been
summarized by Skolnik (1980, p. 347). Pettai (1984, Ch. 10) gives a more
complete discussion. Various different definitions of noise figure can be made
(Pettai, 1984, Ch. 9). We will discuss some of them in turn, indicating their use
in the radar equation.


Receiver noise can also be summarized in terms of an equivalent input noise
temperature for the system, together with the available power gain of the system.
This has already been done above in discussing the self noise Eqn. (2.6.18) for
an attenuator. We will consider that description for a receiver first, returning
later to the formalism of noise factor.
Available Power Gain

Recall that the signal power entering into the receiver is expressed in the radar
equation in terms of the receiving aperture A_e of Eqn. (2.4.4). This by definition
relates to the signal power which would flow into a matched receiver. If, in
considering noise power into the receiver, we also assume impedance matching,
we then have to do with available noise power quantities kT at the input, and
available power gain G_a to transfer them to the output, along with the signal.
We thereby arrive at the output SNR for matched conditions, which is the SNR
in operation, except for the internal noise effects indicated in Eqn. (2.6.21).
The available power gain Ga of a circuit is the ratio of power available at the
circuit output, which depends on both the circuit and the source, to power
available from a source connected at the input, which depends only on the source.
We take this quantity relative to some frequency of interest, with the ratioed
powers referred to unit bandwidth over a narrow (infinitesimal) band. Thereby
all gains, temperatures, and noise factors generally become functions of
frequency.
The available power from a circuit is the power that would be delivered to a
matched load. For example, in Fig. 2.12,

Pin = es^2 Rin/(Rs + Rin)^2

is the actual input power, while

Pa = es^2/(4Rs)

is the available input power, corresponding to Rin = Rs. From Fig. 2.12 then

Ga = Pao/Pa    (2.6.22)

with Pao the available output power density. Pettai (1984, Ch. 7) has discussed
this quantity carefully. It is independent of the actual load conditions at the
circuit output, but depends on the input impedance conditions. It is not the ratio
of output power to input power under operating conditions, unless the input and
output are matched, so that Rs = Rin and the circuit is loaded by RL = Rout.
It depends on source impedance, a fact which, as we shall see, feeds directly
into a property of "the" noise figure of a circuit.
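The distinction between actual and available input power is easy to check numerically. The Python sketch below is illustrative only (the source and resistor values are assumed, not taken from the text); it evaluates the power delivered to a load Rin by a Thevenin source (es, Rs) and confirms that it peaks at the available power es^2/(4Rs) when Rin = Rs.

```python
import numpy as np

def delivered_power(e_s, R_s, R_in):
    """Power actually delivered to a load R_in by a Thevenin source (e_s, R_s)."""
    i = e_s / (R_s + R_in)            # loop current
    return i**2 * R_in

def available_power(e_s, R_s):
    """Maximum deliverable power, obtained for the matched load R_in = R_s."""
    return e_s**2 / (4.0 * R_s)

e_s, R_s = 1.0, 50.0                  # assumed illustrative values
R_in = np.linspace(1.0, 500.0, 5000)  # sweep the load resistance
P = delivered_power(e_s, R_s, R_in)

P_a = available_power(e_s, R_s)
print(P_a)                            # 0.005 W
print(R_in[np.argmax(P)])             # close to R_s = 50 ohms
```

Sweeping Rin shows the delivered power never exceeds the available power, which is why available quantities give an upper bound that a mismatch factor must later correct.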
Receiver Noise Temperature

Using available power gain, the additional output noise contributed by a circuit
can be expressed in terms of an equivalent temperature. Suppose a source of
equivalent noise temperature Ts feeds a receiver. The consequent available input
noise power density is kTs. For the particular equivalent source resistance Rs in
question, suppose the circuit has available power gain Ga. The output available
noise power density due to source noise is then GakTs. The actual output
available noise power density will be found to be some larger number Noa. The
difference Nint is attributable to receiver internal noise, and can be used to
define an equivalent input receiver noise temperature Te, such that

Noa = GakTs + GakTe    (2.6.23)


Then

Te = Nint/(Gak)    (2.6.24)

where

Top = Ts + Te    (2.6.25)

is the "operating" noise temperature of the combined source and receiver. Since
the gain Ga is used to refer the receiver noise Nint to the input, the receiver
equivalent temperature Te and the operating noise temperature Top depend on
the impedance of the source feeding the circuit.
The equivalent noise temperatures Te1, Te2, ... of a cascade of elements combine
easily. Each unit of the cascade is specified by its available power gain Gai and
equivalent input noise temperature Tei, both specified for the impedance and
temperature conditions present in the cascade. Then for three elements, for
example, the total available excess output power is

Nint = Ga3Ga2Ga1kTe1 + Ga3Ga2kTe2 + Ga3kTe3

leading to

Te = Nint/(Ga1Ga2Ga3 k)
   = Te1 + Te2/Ga1 + Te3/(Ga1Ga2)    *(2.6.26)

The radar equation Eqn. (2.1.1) can now be written as

SNRo = Pa/(kTopBn)    (2.6.27)

where Pa is the received power as in Eqn. (2.4.5), that is, the signal power which
would flow from the antenna port under matched conditions (hence the available
input signal power).

Noise Bandwidth

In Eqn. (2.6.27) Bn is "the" bandwidth of the receiver, chosen wide enough to
pass all the signal, but no wider, in order to limit noise. Although receivers are
sensibly narrowband, so that thermal noise temperatures are approximately
constant, the passband is not strictly rectangular. The bandwidth Bn used in the
radar equation is an equivalent "noise bandwidth". This is defined such that
GakTsBn would give the right available noise power if we assumed white noise
from a source at constant temperature Ts to have passed through a circuit with
rectangular band of width Bn and amplitude A(f0) at band center.
If the actual circuit (receiver) had transfer function H(jω), and if Ts(f) were
the actual input noise temperature function, the actual output noise power would
be

N = k ∫ Ts(f)Ga(f) df    (2.6.28)

Assuming Ts to be constant over the band, and letting Ga(f0) be the midband
value, we obtain

Bn = [∫ Ga(f) df]/Ga(f0)    *(2.6.29)

as the receiver noise bandwidth. If the actual noise temperature Ts(f) is not
constant over the band, the expression Eqn. (2.6.28) must be used in calculations.
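Equation (2.6.29) can be evaluated numerically for any concrete filter shape. As an illustration (not an example from the text), the Python sketch below computes the noise bandwidth of a single-pole lowpass response with 3 dB frequency fc; the closed-form answer is Bn = (π/2)fc, about 57% wider than the 3 dB bandwidth.

```python
import numpy as np

fc = 1.0e6                                # assumed 3 dB frequency, 1 MHz
f = np.linspace(0.0, 500.0 * fc, 1_000_001)

# Available power gain vs frequency for a single-pole (RC) lowpass,
# normalized so the midband value (here at f = 0) is 1.
Ga = 1.0 / (1.0 + (f / fc) ** 2)

# Eqn. (2.6.29): integral of Ga(f) over frequency, divided by the midband
# value, evaluated here by the trapezoidal rule.
df = f[1] - f[0]
Bn = np.sum((Ga[:-1] + Ga[1:]) * 0.5) * df / Ga[0]

print(Bn / fc)     # about 1.569; tends to pi/2 as the upper limit grows
```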
Receiver Noise Factor

The operating noise temperature Eqn. (2.6.25) wraps together source and
receiver noise into one parameter. It is sometimes convenient to continue to
keep these separate. To that end, it is usual to define a noise factor F in
terms of which to characterize the intrinsic receiver noise. This is taken in
reference to the specific source impedance which will feed the circuit in operation,
but with the source assumed to be at a standard temperature T0 = 290 K. The
receiver itself is assumed to be at its physical operating temperature. Then the
("standard") noise factor F is the ratio of the total available output noise power
density Noa, with the input at temperature T0, to the output available power
density attributable to the input:

F = Noa/(GakT0)    (2.6.30)

Using Eqn. (2.6.23) to express the total receiver output available noise power
density in terms of the equivalent receiver temperature Te, we have

F = (Nint + GakT0)/GakT0
  = 1 + GakTe/GakT0 = 1 + Te/T0    (2.6.31)


which is to say

Te = (F - 1)T0    *(2.6.32)

Note that F, like Te, depends on the source impedance, through Ga, although
not on its temperature. (The dependence of F on source impedance is in fact
more profound than simply via Ga (Pettai, 1984, p. 149); the matter involves the
particular distribution of noise sources in the receiver.) Also note that noise
factor, like noise temperature, may depend on frequency, since we deal always
with power spectral densities.
It is interesting to note the noise factor of a lossy element at standard
temperature T0. From Eqn. (2.6.19) and Eqn. (2.6.31), this is

F = L    (2.6.33)

Note that Eqn. (2.6.33) is not correct unless the element is at standard
temperature.
In terms of receiver noise factor F, using Eqn. (2.6.25) and Eqn. (2.6.32), the
radar equation Eqn. (2.6.27) becomes

SNRo = Pa/(k[Ts + (F - 1)T0]Bn)    (2.6.34)

Only in the particular (and unusual) case Ts = T0 does this become the common
expression

SNRo = Pa/(kFT0Bn)    (2.6.35)

In order to rescue the functional form Eqn. (2.6.35), an "operating" noise factor
Fop can be defined. The operating noise factor of a combined system, including
source and receiver, is defined as the ratio of the actual available output
noise power density Noa to the available output noise power density if the receiver
had no internal noise sources:

Fop = Noa/(GakTs)    (2.6.36)

This parameter has the advantage that it takes into account the actual source
temperature Ts, so that the receiver output signal to noise ratio is simply

SNRo = GaPa/(FopGakTsBn)
     = Pa/(FopkTsBn) = SNRi/Fop    (2.6.37)

where SNRi is the output SNR in the case that the receiver, of bandwidth Bn,
had no internal noise sources. Since Noa > GakTs, we have from Eqn. (2.6.36)
that Fop > 1, so that always SNRo < SNRi from Eqn. (2.6.37); all else being
equal, SNR can only degrade in the presence of system noise. The amount of
degradation is governed by the ratio of output noise density Nint due to internal
sources to amplified input noise density GakTs. Since the former is nominally
the same in each of the various stages of a receiver, while the latter increases
from stage to stage, it is the noise figure of the earliest receiver stages which
mainly controls the output SNR, an observation we will make more precise below
as Eqn. (2.6.46).
The standard noise factor F defined in Eqn. (2.6.30) is the operating noise
factor Eqn. (2.6.36), but assuming that the source temperature Ts is the standard
temperature T0 = 290 K. Using Eqn. (2.6.24) in Eqn. (2.6.36), we have

Fop = 1 + Te/Ts    *(2.6.38)

Since from Eqn. (2.6.32) we have

Te = (F - 1)T0

there results the relation between the operating and standard noise factors

Fop = 1 + (F - 1)(T0/Ts)    *(2.6.39)

which for Ts = T0 again shows Fop = F.
Finally, one can define a "system noise factor" Fsys for use in the simple form
Eqn. (2.6.35), retaining T0 even when Ts ≠ T0. This noise factor is defined such
that

SNRo = Pa/(kFsysT0Bn)

that is

Fsys = (F - 1) + Ts/T0    *(2.6.40)
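The relations (2.6.25), (2.6.32), (2.6.38), and (2.6.40) are bookkeeping identities, and it is worth verifying that the three noise descriptions give the same radar-equation denominator. The Python sketch below, with arbitrary assumed values of Ts and the noise figure (not from the text), checks that Top = FopTs = FsysT0.

```python
T0 = 290.0                      # standard temperature, K
T_s = 150.0                     # assumed source temperature, K
F_dB = 5.0                      # assumed standard noise figure, dB

F = 10.0 ** (F_dB / 10.0)       # standard noise factor, linear
T_e = (F - 1.0) * T0            # Eqn. (2.6.32)
T_op = T_s + T_e                # Eqn. (2.6.25)
F_op = 1.0 + T_e / T_s          # Eqn. (2.6.38)
F_sys = (F - 1.0) + T_s / T0    # Eqn. (2.6.40)

# All three radar-equation denominators reduce to k*T_op*Bn:
print(T_op)
print(F_op * T_s)               # equals T_op
print(F_sys * T0)               # equals T_op
```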

Then the radar equation Eqn. (2.6.34) is just

SNRo = Pa/(kFsysT0Bn)    (2.6.41)

which is the form Eqn. (2.6.35), but not limited to Ts = T0. The system noise
factor, like the operating noise factor, accounts for both source noise and internal
receiver noise, and is simply expressed as

Fsys = Top/T0    *(2.6.42)

The two are related by

Fsys = FopTs/T0

using Eqn. (2.6.38).
The radar equation can be written using any of the various noise factors and
corresponding temperature according to preference. For example, using the
operating noise factor Fop the source noise temperature Ts appears explicitly, and
the radar equation has the form Eqn. (2.6.37):

SNRo = Pa/(kFopTsBn)    (2.6.43)

rather than Eqn. (2.6.41).

Cascaded System Elements

The available power gain Ga and the circuit (standard) noise factor F are defined
so as to combine easily for cascaded networks. Consider first the gains Ga1 and
Ga2 of two cascaded networks. Since available output power is independent of
load, the available output power density N1 (Fig. 2.15) is just

N1 = Ga1Na

where Na is the available power density of the source and Ga1 is the gain of
circuit 1 assuming the particular input impedance Rs. In turn, if Ga2 is the gain
of circuit 2 assuming Rout1 as source impedance,

N2 = Ga2N1 = Ga2Ga1Na    (2.6.44)

showing the cascade relation.
As to noise factor, suppose the two circuits have equivalent output noise
power densities due to internal sources of Nint1, Nint2 when fed by the impedances
Rs, Rout1, respectively. (Note that these powers do depend on source impedance,
since the flow of internal noise power depends on the character of the complete
driving circuit.) Then the combined output noise density is

Noa = Nint2 + Ga2Nint1 + Ga2Ga1kT0    (2.6.45)

were the source to be at physical temperature T0, as assumed in using the standard
noise factor F. Using the cascaded gain Ga2Ga1 with the definition Eqn. (2.6.30)
yields a cascade noise factor

F = (Nint2 + Ga2Nint1 + Ga2Ga1kT0)/Ga2Ga1kT0
  = (Nint1 + Ga1kT0)/Ga1kT0 + (Nint2 + Ga2kT0 - Ga2kT0)/Ga2Ga1kT0
  = F1 + (F2 - 1)/Ga1

The expression iterates, as for example

F = F1 + (F2 - 1)/Ga1 + (F3 - 1)/(Ga2Ga1)    *(2.6.46)

The cascade relation Eqn. (2.6.46) makes precise the decreasing importance of
internal noise in the later stages of the electronics chain. The same result follows
from Eqn. (2.6.45), by referring the total output noise to the input of the cascade:

F = Noa/(Ga2Ga1kT0) = 1 + (Te1 + Te2/Ga1)/T0
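The iterated cascade relation Eqn. (2.6.46) is simple to mechanize. The Python sketch below (the two-stage values are assumed for illustration, not taken from the text) shows why an early high-gain, low-noise stage dominates the cascade noise factor.

```python
def cascade_noise_factor(stages):
    """Iterated cascade relation, Eqn. (2.6.46):
    F = F1 + (F2 - 1)/Ga1 + (F3 - 1)/(Ga1*Ga2) + ...
    `stages` is a list of (available_gain, noise_factor) pairs, both linear,
    ordered from the input of the chain to the output."""
    F_total = 0.0
    running_gain = 1.0
    for gain, F in stages:
        F_total += (F - 1.0) / running_gain
        running_gain *= gain
    return F_total + 1.0

# Assumed two-stage example: a 20 dB, F = 2 preamplifier followed by a
# lossy stage with F = 10.
stages = [(100.0, 2.0), (0.5, 10.0)]
print(cascade_noise_factor(stages))   # about 2.09: the first stage dominates
```

Reversing the stage order in this example raises the cascade noise factor from about 2.09 to 12, the point made in the text about the earliest stages.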

All the relations above deal with available power gain Ga and available power.
The source resistance Rs enters through the dependence of Ga on Rs, and in other
ways. The resulting SNR, calculated using available power, is not the actual SNR
unless the receiver input impedance is in fact Rs. In practice, one strives to meet
that condition approximately, by use of an impedance matching transformer at
the receiver input, for example. However, some mismatch may exist, perhaps
introduced intentionally by noise tuning to increase output SNR. In that case,
the actual output SNR and the available power output SNR will differ by some
factor, which can be included in the radar equation as a mismatch factor Lm:

SNRo = Pa/(LmkFopTsBn)    (2.6.47)

Figure 2.15 Cascade of noisy amplifiers.

for example. In the case of tuning, the factor Lm may be less than unity. In that
case, SNR has been improved by deliberate mismatching. Skolnik ( 1980, p. 345)
mentions the effect, and Pettai ( 1984, p. 149) analyzes the matter.
From another point of view, the operating noise factor depends on source
impedance. If, by the noise factor F corresponding to the factor Fop in the radar
equation, we imply "the" noise factor of the receiver, we must have reference to
a specific source impedance. If that is the source impedance which matches the
receiver input impedance, then all is well if the source and receiver are matched
in operation. However, if mismatch is present at the input during operation, the
corresponding operating noise figure Eqn. (2.6.38) is not the correct number to
use in the radar equation. It must be modified by some factor Lm as in
Eqn. (2.6.47).
We turn now to a simplified example of application of the expressions
developed in this section for noise characterization.

2.6.3 An Example

In this section we want to give an example of noise calculations using the above
relations. The analysis will be simplified in comparison with an actual situation.
We will consider only the primary effects in operation; a thorough analysis is
complicated, specific to each situation, and beyond our aims.
Consider then the system schematized in Fig. 2.16. A down-looking antenna
views the earth, with the received signals passed through a waveguide connection
and isolator (to protect the receiver during pulse transmission) to the carrier
frequency (RF) amplifier. After amplification, the signal is shifted to another
frequency band by a mixer and local oscillator (LO), and then passed through
an intermediate frequency (IF) amplifier and filter chain to the output. In a radar
receiver, the IF amplifier output would be detected to determine its power as a
function of time for decision making, in a simple system, or perhaps digitized for
further processing. On the other hand, the amplified IF signal might be converted
to another carrier frequency for telemetry to a ground station.
[Figure 2.16 here: the receiver chain comprises the antenna and circulator at
physical temperature 180 K, the RF amplifier (Ga = 20 dB, F = 4 dB) at 250 K,
and the mixer (L = 5 dB, t = 1.5) and IF amplifier (Ga = 60 dB, F = 3 dB) at
400 K; the viewed earth is at 300 K with emissivity 0.9.]

The down-looking geometry is such that the antenna effectively sees only the
earth surface. The noise radiation impinging on the antenna is predominantly
thermal in origin, since terrestrial point sources are mostly highly directive and
directed away from the orbiting satellite. The thermal noise in turn is partly
reflected noise from the sun, and partly radiation from the relatively warm earth
(Elachi, 1987, p. 144). Scattering of incident solar radiation by the atmosphere
also occurs.
At radar frequencies, the primary effect is that of radiation from the earth, in
thermal equilibrium with its atmosphere. The earth surface can be taken as a
gray body at temperature Tg = 300 K, as a nominal average value. (A gray body
emits according to the Rayleigh-Jeans law Eqn. (2.6.6) of a black body, but with
power reduced by a factor e < 1, the emissivity.) The emissivity of the earth
depends on the character of the surface in view, and the geometry of the viewing
situation. The emissivity is expressed as (Elachi, 1987, p. 117)

e = 1 - p

where p is the reflectivity or reflectance or, in the case of the sun as the energy
source, the albedo. A nominal value p = 0.1 is reasonable as an order of
magnitude for the earth surface in the microwave region at 20° viewing angle
from vertical (incidence angle) (Elachi, 1987, p. 146).
The antenna structure itself has losses, expressed by the radiation efficiency ρe
of Eqn. (2.2.17). The signal loss resulting from this is already accounted for by
use of the power gain in the radar equation, rather than the directivity. The
implied noise increase is expressed by the loss Le = 1/ρe. For argument we take
ρe = 0.95. The antenna feed and extraneous losses might amount to 1 dB, and
we lump those losses with the antenna loss. These together comprise the source
noise temperature Ts.
The circulator we take to have a loss of 1 dB in the signal direction in its
operating position, with the transmitter feed connected. Along with the antenna,
we assume a circulator physical temperature of, say, Tphys = 180 K.
The RF amplifier, as fed by its actual source impedance, we take to have an
available power gain of 20 dB. The (standard) noise figure, with the same source
impedance, and measured with the amplifier at its operating temperature, we take
as 4 dB. The RF output undergoes a cable loss of 1 dB before reaching the mixer.
The mixer has a conversion loss (RF to IF) of 5 dB, and a noise temperature
ratio (Pettai, 1984, p. 101):

t = Tno/T0    (2.6.48)

where Tno is the output noise temperature under operating conditions, assuming
an input temperature T0. The local oscillator is followed by an extraneous loss
of 1 dB, and the IF amplifier, as shown in Fig. 2.16. The later components operate
at 400 K.

Figure 2.16 Notional receiver used in noise calculations.
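The mixer bookkeeping of Eqn. (2.6.48) can be checked directly. Using the example's values (t = 1.5, 5 dB conversion loss), the Python lines below refer the output noise temperature to the mixer input, anticipating the Te of roughly 1086 K used later in the example.

```python
T0 = 290.0                       # standard temperature, K
t = 1.5                          # mixer noise temperature ratio, Eqn. (2.6.48)
G = 10.0 ** (-5.0 / 10.0)        # 5 dB conversion loss as an available gain

T_no = t * T0                    # output noise temperature for input at T0: 435 K
T_e = T_no / G - T0              # equivalent input temperature of mixer self noise
print(round(T_e))                # 1086 K
```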
Let us first determine the source temperature Ts (Fig. 2.16). The detailed
situation is diagrammed in Fig. 2.17. The antenna and feeds are assumed to be
matched, so that the attenuation units shown have available power gains
Ga = 1/L.

Figure 2.17 "Front end" of Fig. 2.16. [Values shown: G1 = ρe = 0.95
(L1 = 1/ρe) and G2 = 0.79, both at TPHYS = 180 K; L12 = L1L2 = 1.33;
TANT = (L12 - 1)TPHYS = 59 K; TEXT = eT = 270 K;
Ts = G12(TEXT + TANT) = 248 K; circulator G = 0.79, F = 1.16, Te = 47 K.]

The generator temperature is just the temperature of the earth, modified by
the emissivity: Text = eT = 270 K. The available power gains G1 = ρe = 0.95,
G2 = -1 dB = 0.79 cascade as G12 = G1G2 = 0.75, corresponding to antenna
and feed loss L12 = 1/G12 = 1.33 (1.2 dB). From Eqn. (2.6.19), this corresponds
to an effective input temperature Tant = 59 K, considering the physical
temperature of 180 K.
The sum of Text and Tant, 329 K, is brought forward using G12 to yield a total
source temperature Ts = 248 K. This value is mainly driven by the high earth
temperature. Were the antenna to be situated on earth and looking at a cold sky
(Text = 50 K, say), the antenna losses (more precisely, the implied noise sources)
would be proportionally more significant than in the earth viewing case.
The receiver chain can be characterized in terms of noise figure or noise
temperature. For illustration, we will consider both procedures. Let us first seek
the receiver equivalent (input) noise temperature Te, leading to a total operating
noise temperature Top as in Eqn. (2.6.25). This would be appropriate if the radar
equation were written in the form Eqn. (2.6.27). The receiver chain is expanded
in Fig. 2.18.
The equivalent input noise temperature of each 1 dB loss follows from
Eqn. (2.6.19), taking the varying physical temperatures into account. The noise
factors then result from Eqn. (2.6.31). Those of the two amplifiers follow from
Eqn. (2.6.32). The noise temperature ratio t = 1.5 of the mixer yields an output
noise temperature Eqn. (2.6.48) of Tno = 435 K, were the input to have a
temperature of 290 K. Considering the -5 dB gain, this yields an equivalent input
noise temperature due to the mixer internal noise

Te = 435/0.316 - 290 = 1086 K

and a corresponding noise factor from Eqn. (2.6.31). (Mixer noise temperatures
being high, the ratio t is a numerically more convenient quantity.)

Figure 2.18 Parameters in receiver Fig. 2.16. G: Available power gain.
F: Noise factor. Te: Equivalent input temperature of self noise. [Stage values:
circulator G = 0.79, F = 1.16, Te = 47 K; RF amplifier G = 100, F = 2.51,
Te = 438 K; cable G = 0.79, F = 1.22, Te = 65 K; mixer G = 0.32, F = 4.74,
Te = 1086 K; loss G = 0.79, F = 1.36, Te = 104 K; IF amplifier G = 10^6,
F = 2.00, Te = 289 K.]

Cascading the various temperatures Te back to the source point, where Ts is
taken, using Eqn. (2.6.26) yields a receiver equivalent input value

Te = 47 + 438/0.79 + 65/79 + 1086/63 + 104/20 + 289/16
   = 47 + 552 + 1 + 17 + 5 + 18
   = 640 K

Then, from Eqn. (2.6.25), Top = 888 K would be used in Eqn. (2.6.27). The
cascade relation makes clear the deleterious effect of extraneous losses early in
the receiver chain, and the importance of an early low noise gain, especially
before the lossy mixer stage.
Alternatively, proceeding in terms of noise figures, the cascade relation
Eqn. (2.6.46) yields

F = 1.16 + 1.51/0.79 + 0.22/79 + 3.74/63 + 0.36/20 + 1.0/16
  = 3.21

This again corresponds to an overall Te = (F - 1)T0 = 640 K, from Eqn. (2.6.32),
as it must. The operating noise factor follows from Eqn. (2.6.38): Fop = 3.58,
and the system noise factor from Eqn. (2.6.40): Fsys = 3.06.
The various forms of the radar equation use the quantities

Eqn. (2.6.27): kTop = 1.23 x 10^-20 = -199.1 dB
Eqn. (2.6.34): k[Ts + (F - 1)T0] = -199.1 dB
Eqn. (2.6.37): kFopTs = -199.1 dB
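The example's cascade arithmetic can be reproduced mechanically from Eqn. (2.6.26) and the stage values of Fig. 2.18. The Python sketch below is only a check on the text's numbers; the small differences from the quoted 640 K and 888 K come from the text's term-by-term rounding.

```python
T_s = 248.0                        # source temperature from the front-end analysis

# (available gain, equivalent input temperature) per stage, from Fig. 2.18:
# circulator, RF amplifier, cable, mixer, 1 dB loss, IF amplifier.
stages = [(0.79, 47.0), (100.0, 438.0), (0.79, 65.0),
          (0.316, 1086.0), (0.79, 104.0), (1.0e6, 289.0)]

T_e = 0.0
running_gain = 1.0
for gain, T_stage in stages:       # cascade relation, Eqn. (2.6.26)
    T_e += T_stage / running_gain
    running_gain *= gain

T_op = T_s + T_e                   # Eqn. (2.6.25)
print(round(T_e))                  # 643 K; the text quotes 640 K
print(round(T_op))                 # 891 K; the text quotes 888 K
```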
2.7 THE POINT TARGET RADAR EQUATION

Finally we now have the "simple" radar equation, Eqns. (2.6.27), (2.6.34), and
(2.6.37), which of course is not simple at all, since its parameters embody a wealth
of complexity, and in any particular case are not easy either to calculate or to
design towards. We will henceforth refer to the form Eqn. (2.6.37):

SNRo = Pa/(kFopTsBn)
     = PtGσAe/[(4πR^2)^2 kFopTsBn]    *(2.7.1)

where Fop is the operating noise factor and Ts is the total source equivalent noise
temperature, including ohmic noise generated in the antenna, both at the radar
carrier frequency. (In case of any impedance mismatch, the loss factor Lm should
be included in the denominator.) If it has been decided what value of SNRo is
required for reliable detection with tolerable false alarm rate, or for adequate
performance more generally, this equation indicates the trade-offs among the
system parameters, the target characteristics (σ), and the maximum range.
Consideration of these trade-offs leads directly to the concept of the matched
filter, to be developed in Chapter 3.
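Equation (2.7.1) is straightforward to exercise for a trial design. In the Python sketch below every numerical value is an assumed illustration (none is taken from the text); the point is only the bookkeeping of the trade-offs, in dB.

```python
import math

k = 1.38e-23                     # Boltzmann's constant, J/K

def point_target_snr(Pt, G, wavelength, sigma, R, F_op, T_s, B_n):
    """Single-pulse point-target SNR, Eqn. (2.7.1), with Ae = G*lambda^2/(4*pi)."""
    Ae = G * wavelength**2 / (4.0 * math.pi)
    P_r = Pt * G * sigma * Ae / ((4.0 * math.pi * R**2) ** 2)
    return P_r / (k * F_op * T_s * B_n)

# Assumed illustrative values: 1 kW peak power, 35 dB gain, 0.24 m wavelength,
# sigma = 10 m^2, R = 800 km, F_op = 4, T_s = 300 K, B_n = 20 MHz.
snr = point_target_snr(1.0e3, 10**3.5, 0.24, 10.0, 8.0e5,
                       4.0, 300.0, 2.0e7)
print(10.0 * math.log10(snr))    # single-pulse SNR in dB (about -47 dB here)
```

A strongly negative single-pulse SNR like this is what motivates the pulse compression and coherent integration developed in the chapters that follow.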
The development of the radar equation above assumes only a single pulse is
available for processing. If only a single pulse is used to make the decision as to
presence or absence of a target at some range, a signal to noise ratio SNRo at
the detection point of the order of 15 dB is required for reliable operation.
Normally, however, more than one pulse is used in the measurement, with the
power from the multiple pulses averaged before a decision is taken. In that case,
the signal power might be assumed constant from one pulse to another, while
the noise power fluctuates randomly. Alternatively, the signal power itself due to
a possible target might be assumed to fluctuate in accord with some stated
statistical behavior. This latter situation is similar to that discussed above in
defining the specific backscatter coefficient for an extended scene, in which case
the objective is to estimate the mean of the backscattered power for each scene
element.
Calculation of the SNR needed for each single pulse in order that the average
of some number of pulses behave reasonably as a detection criterion for point
targets has been carried through in detail for cases of interest in practice. Various
statistical assumptions about the nature of the underlying target randomness are
analyzed. The single pulse SNR needed for detection in the multiple pulse case
is of course less than needed if only the single pulse itself is to be used. In rough
terms, the single pulse SNR required decreases by the square root of the number
of pulses whose power is averaged before a decision is taken. As a specific case,
for a simple hard target with 300 pulse powers averaged, a SNR of 0 dB yields
adequate performance, while 16 dB is required for a single pulse decision. The
subject is elegant and thoroughly analyzed (DiFranco and Rubin, 1968), but we
will not pursue it further.
2.8 THE RADAR EQUATION FOR A DISTRIBUTED TARGET

In remote sensing applications, in which the "target" is extended, as we discussed
in Section 2.3, it is appropriate to define the radar cross section per unit
geometrical area of the scene as a random variable, with a mean σ0 which in
general varies from one scene resolution element to another. The quantity of
interest in the radar system is then not the deterministic power of a single echo
pulse received in response to a target with some deterministic cross section σ,
but rather the (ensemble) average power for a single pulse with terrain in view
having average specific cross section σ0, which will generally depend on which
scene elements are in question.
In those terms, the radar equation of the previous section, Eqn. (2.7.1),
becomes

SNRo = [Pt/(kFopTsBn)] ∫ [G(θ,φ)Ae(θ,φ)/(4πR^2)^2] σ0 dA    *(2.8.1)

where the integration is over the terrain illuminated by the antenna beam and
sidelobes, and we take account that the effective receiver aperture depends on
the direction from which the received field impinges. Taking account that, for a
receiving antenna,

Ae(θ,φ) = G(θ,φ)λ^2/(4π)

this is the usual radar equation for a distributed target (Ulaby et al., 1982, p. 463).
This form Eqn. (2.8.1) of the radar equation, appropriate for average power
received from a distributed target, expresses the average power due to terrain
backscatter as it competes with average thermal noise. However, any particular
realization of a SAR image will use as data particular realizations of the (random)
received power for each pulse used in the processing, and each pulse will in turn
involve some particular realization of the random variable σ0 in each scene
element. The processed image will have intensity in each image element which
is some realization of a random process, whereas what we want in each image
element is the value of the mean backscatter σ0 for that element. The discrepancy
is speckle noise, and results in a mottled appearance of the SAR image of a
terrain which is nominally homogeneous.
The fact of speckle is inherent in the nature of the radar signal itself, whose
voltage is the result of random interference of the backscattered electric fields from
the multitudinous facets of a distributed scene, as discussed in Section 2.3. In
remote sensing applications, it is necessary to reduce the speckle noise in the
image, and this is done by averaging multiple realizations of the backscatter
coefficient from the same scene element. In Section 5.2 we will discuss a means
for doing that, and the resulting statistical improvement of the smoothed image.
The quantity SNRo in the form Eqn. (2.8.1) of the radar equation says nothing
directly about speckle noise, but affects the relative influence of speckle. Unlike
the case of detection of point targets, for detection of distributed targets one can
only seek to set a value of SNRo from the radar equation such that thermal noise
is not the dominant noise effect in the image. Further processing designed to


defeat speckle will then be relatively more effective in improving the image for
remote sensing use.
Since the distributed target radar equation serves the general purpose of
expressing the mean influence of receiver noise on the image, over some ensemble
of random images, it is useful to assume that the radar views a homogeneous
scene, in the sense that the mean backscatter coefficient σ0 is constant over the
scene, and the same for each position of the radar. The radar equation,
Eqn. (2.8.1), then appears as

SNRo = [Ptσ0/(kFopTsBn)] ∫ G(θ,φ)Ae(θ,φ)/(4πR^2)^2 dA    *(2.8.2)

with the integral taken over the footprint of the radar beam on the earth.
Equation (2.8.1), and its special case Eqn. (2.8.2), are exact, insofar as the
parameters can be precisely specified. It is informative to recast them in various
other forms, however. Although only approximate, these reveal the role of
various parameters more readily related to SAR systems and the resulting
images than the parameters of the exact equations. We will now develop two
of these alternative forms.
In normal SAR imaging situations, as in Fig. 1.6, we can approximate R as
constant and equal to the slant range at midswath. The cross beam extent of the
footprint is by definition the region of terrain over which the antenna gain is
appreciable. We might take the gain G(θ,φ) as approximately constant at the
midbeam value G, the parameter in the radar equation, over the 3 dB azimuth
beamwidth θH, and zero outside the beam. In the range dimension, the
appropriate limit for the footprint is related to the time extent of the radar
pulse. This is because the radar return voltage at any instant, in the case of a
distributed target, is comprised of contributions from a slant range span
ΔR = cτp/2, corresponding to the radar pulse time width, projected on the
horizontal using the incidence angle η. Then approximately:

∫ [G^2(θ,φ)/R^4] dA = G^2(ΔA)/R^4 = G^2(RθH)(cτp/2 sin η)/R^4

With this the radar equation, Eqn. (2.8.2), appears as

SNRo = Ptλ^2G^2σ0 θH cτp/[2(4π)^3 R^3 kFopTsBn sin η]    *(2.8.3)

This is a form which has been called the SLAR radar equation (Ulaby et al.,
1982, p. 572).
Equation (2.8.3) expresses the average SNR of a single radar pulse viewing a
terrain with homogeneous mean specific backscatter coefficient σ0. Another
recasting of Eqn. (2.8.1) is also of interest. Suppose the radar views a single terrain
element with isotropic mean backscatter σ0 and extent δx in azimuth and δR
in ground range, where these are the SAR resolutions. From Eqn. (2.8.1), the
single pulse SNR in this single cell case is

SNR1 = PtGAeσ0 δx δR/[(4πR^2)^2 kFopTsBn]    (2.8.4)

where we again use the antenna receiving gain.
The terrain element is effectively in view of the moving radar during
transmission of some number NA of pulses. The pulse return envelope is sampled
at a rate fs = BR to produce NR = τpBR samples per pulse, where BR is the
one-sided bandwidth of the radar pulse. (The numbers NR, NA are just the
dimensions of the two-dimensional compression filter used for SAR image
formation.) The totality of NI = NANR data samples are processed coherently
through the SAR image formation algorithm to produce a single image
resolution cell. The thermal noise samples can be taken independent from sample
to sample within each pulse (the noise bandwidth Bn ≈ BR), and from pulse to
pulse. As a result of coherent processing of NI input samples, the SNR of each
SAR processor output sample (image point) improves by NI. Thus the signal
(σ0) to (thermal) noise ratio at the output image resolution cell is

SNRo = NANR SNR1    (2.8.5)

It remains to express NR and NA in terms of factors in Eqn. (2.8.4). First,

NRPt = τpBRPt = PavTpBR    (2.8.6)

introducing the average power Pav over both the on time τp and off time
of the pulse. The terrain point in question is in view for a time S/V, using
Eqn. (1.2.6), from which NA = S/P follows, with P = VTp the along-track
distance the platform moves between pulses. Finally assuming the nominal
fully focussed resolution Eqn. (1.2.7), δx = La/2, there follows

NA = λR/(2δxVTp)    (2.8.7)

Using Eqns. (2.8.6) and (2.8.7) in Eqn. (2.8.5) there results finally (after
recalling fpTp = 1):

SNRo = PavG^2λ^3σ0 δR/[2(4π)^3 R^3 VkFopTs]    *(2.8.8)

This is the SAR radar equation in Cutrona (1970). It expresses the average signal
to thermal noise ratio of a SAR image point whose mean backscatter coefficient
is σ0. It is valuable as an indicator of the role of its various parameters. (Note
for example that the azimuth resolution δx does not appear.) However, it will be


appreciated from the use of simple nominal relations in its derivation that it
should not be used for numerical calibration work.
In Section 7.6 we will investigate more fully SNR and calibration
considerations in SAR images. We now pass on to development of the basis for
the SAR imaging algorithms.
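The chain from Eqn. (2.8.4) through Eqn. (2.8.8) can be tied together numerically. The Python sketch below uses assumed system values (illustrative only, none from the text) and checks that multiplying the single-pulse, single-cell SNR by the processing gain NANR reproduces the Cutrona form directly, with the pulse bandwidth cancelling as in the derivation.

```python
import math

k = 1.38e-23          # Boltzmann's constant, J/K

# Assumed system values (illustrative only):
Pt   = 1.0e3          # peak transmitted power, W
G    = 10**3.5        # antenna gain
lam  = 0.24           # wavelength, m
sig0 = 0.05           # mean backscatter coefficient
R    = 8.0e5          # slant range, m
V    = 7.5e3          # platform velocity, m/s
tau  = 3.0e-5         # pulse width tau_p, s
BR   = 2.0e7          # pulse bandwidth, Hz (noise bandwidth Bn ~ BR)
fp   = 1.5e3          # pulse repetition frequency, 1/Tp
dx   = 6.0            # azimuth resolution, m (= La/2)
dR   = 10.0           # ground range resolution, m
Fop, T_s = 4.0, 300.0

Ae = G * lam**2 / (4.0 * math.pi)

# Single-pulse, single-cell SNR, Eqn. (2.8.4):
snr1 = (Pt * G * Ae * sig0 * dx * dR
        / ((4.0 * math.pi * R**2) ** 2 * k * Fop * T_s * BR))

# Processing gain, Eqns. (2.8.5)-(2.8.7):
NR = tau * BR                          # samples per pulse
NA = lam * R / (2.0 * dx * V) * fp     # pulses in view, with S = lam*R/La
snr_image = NA * NR * snr1

# Direct evaluation of the Cutrona form, Eqn. (2.8.8), with Pav = Pt*tau*fp:
Pav = Pt * tau * fp
snr_direct = (Pav * G**2 * lam**3 * sig0 * dR
              / (2.0 * (4.0 * math.pi) ** 3 * R**3 * V * k * Fop * T_s))

print(snr_image, snr_direct)           # the two agree
```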

REFERENCES
Barton, D. K. (1988). Modern Radar System Analysis, Artech House, Norwood, MA.
Bohm, D. (1951). Quantum Theory, Prentice-Hall, Englewood Cliffs, NJ.
Colwell, R. N., ed. (1983). Manual of Remote Sensing, American Society of Photogrammetry,
Falls Church, Virginia.
Cutrona, L. J. (1970). "Synthetic Aperture Radar", Chapter 23 in Radar Handbook
(Skolnik, M. I., ed.), McGraw-Hill, New York.
DiFranco, J. V. and W. L. Rubin (1968). Radar Detection, Prentice-Hall, Englewood Cliffs,
NJ (reprinted by Artech House, Dedham, MA, 1980).
Elachi, C. (1987). Introduction to the Physics and Techniques of Remote Sensing, Wiley,
New York.
Gagliardi, R. (1978). Introduction to Communications Engineering, Wiley, New York.
Gradshteyn, I. S. and I. M. Ryzhik (1980). Table of Integrals, Series, and Products,
Academic Press, New York.
Hogg, D. C. and W. W. Mumford (1960). "The effective noise temperature of the sky,"
The Microwave Journal, 3(3), pp. 80-84.
Kennard, E. H. (1938). Kinetic Theory of Gases, McGraw-Hill, New York.
Lawson, J. L. and G. E. Uhlenbeck (eds.) (1950). Threshold Signals, McGraw-Hill, New
York.
Meyer-Arendt, J. R. (1968). "Radiometry and photometry: Units and conversion factors,"
Applied Optics, 7(10), pp. 2081-2084.
Nicodemus, F. E. (1967). Radiometry, Chapter 8 in Applied Optics and Optical
Engineering, Academic Press, New York.
Page, L. (1935). Introduction to Theoretical Physics, Van Nostrand, New York.
Pettai, R. (1984). Noise in Receiving Systems, Wiley, New York.
Ridenour, L. N., editor-in-chief, MIT Radiation Laboratory Series, McGraw-Hill, New
York, Vols. 1-28. Various titles and dates.
Sherman, J. W. III (1970). "Aperture-antenna analysis," Chapter 9 in Radar Handbook
(Skolnik, M. I., ed.), McGraw-Hill, New York, pp. 9.1-9.40.
Silver, S., ed. (1949). Microwave Antenna Theory and Design, McGraw-Hill, New York.
Skolnik, M. I., ed. (1970). Radar Handbook, McGraw-Hill, New York.
Skolnik, M. I. (1980). Introduction to Radar Systems, McGraw-Hill, New York.
Skolnik, M. I. (1985). "Fifty years of radar," Proc. IEEE, 73(2), pp. 182-197.
Slater, P. N. (1980). Remote Sensing: Optics and Optical Systems, Addison-Wesley,
Reading, MA.
Stewart, R. H. (1985). Methods of Satellite Oceanography, University of California Press,
Berkeley.
Stutzman, W. L. and G. A. Thiele (1981). Antenna Theory and Design, Wiley, New York.
Ulaby, F. T., R. K. Moore, and A. K. Fung (1981). Microwave Remote Sensing, Vol. 1,
Addison-Wesley, Reading, MA.
Ulaby, F. T., R. K. Moore, and A. K. Fung (1982). Microwave Remote Sensing, Vol. 2,
Addison-Wesley, Reading, MA.
van der Ziel, A. (1954). Noise, Prentice-Hall, New York.
Whalen, A. D. (1971). Detection of Signals in Noise, Academic Press, New York.

3
THE MATCHED FILTER AND PULSE COMPRESSION

In Chapter 2 the basic functional units of a radar system were discussed. The transformation of power fed to the antenna input by the transmitter into power at the receiver output due to scattering from a target was described. The competing influence of thermal noise was emphasized. The result of the development was the (point target) radar equation, Eqn. (2.7.1). Its specialization to a side-looking radar viewing a spatially extended terrain appears as Eqn. (2.8.1), an approximate form of which is the side-looking aperture radar (SLAR) equation, Eqn. (2.8.3). Finally, drawing upon some nominal relations for synthetic aperture radar systems from Chapter 1, the SAR radar equation, Eqn. (2.8.8), was developed.
In this chapter, we want to describe some developments in radar signal processing which helped overcome the limitations implied by the point target equation, Eqn. (2.7.1). The discussion will lay the basis for later description of SAR imaging algorithms. Ultimately, a clear understanding of the simple SAR relations of Section 1.2, underlying the SAR radar equation, Eqn. (2.8.8), will evolve.
The discussion begins with the development of the matched filter. (The terminology is not meant to imply any connection with the question of impedance matching discussed in Section 2.6.2.) The matched filter is important in its own right, but it is of considerable interest also in pointing the way towards the solution of a fundamental problem in early radar systems: the conflict between detectability and resolution.
After developing the matched filter, and examining its target resolution properties, we discuss the procedure of pulse compression from a point of view which generalizes to the algorithm of image formation from SAR signals. Subsequent chapters will add the details of realization of such processing.

3.1 THE MATCHED FILTER

The point target radar equation, Eqn. (2.7.1), indicates the main trade-offs available in a simple radar system. Early radars had ranges for targets of interest such as aircraft which were rather short for surveillance and warning purposes. Interest therefore centered on extending the range R for targets with specified values of cross-section σ, while realizing some specified adequate value of output signal to noise ratio SNR₀. An apparent barrier was the fact that all of the remaining parameters of the radar equation are limited by available hardware technology.
The transmitter power P_t, the average power while the radar pulse is turned on, is limited by the capability of RF power generation technology. Even if possible, its increase is costly, and involves scaling up components which are already large and costly. The antenna gain G has a theoretical maximum value (ρ = 1) related to the antenna physical area A by G = 4πA/λ², as in Eqn. (2.2.24), so that the antenna linear extent L_a relative to a wavelength is controlling. It is difficult to build antennas with ratios L_a/λ greater than a few hundred, and this practical limit was reached early on. The receiving aperture A_e is directly related to the gain by G = 4πA_e/λ², and is not an independent parameter. The source noise temperature T_s is largely imposed externally, while the receiver noise figure F_op depends on the technology of the time, and is reducible only to some limited extent. Finally, the receiver bandwidth must be wide enough to pass the transmitter pulse, so that roughly B_n ≈ 1/τ_p, where τ_p is the on time of the transmitter. This latter would appear to be limited by the required slant range resolution δR of the radar: τ_p ≤ 2δR/c. If a pulse length τ_p larger than this limiting value is used, two targets separated by δR in range will create overlapping returns in the receiver, which may not be distinguished as arising from two separate targets.
One further possibility remains. All of the development of Chapter 2 assumed that the receiver did nothing more sophisticated than amplify the input signal (and noise), while adding its own noise contribution. The receiver frequency response function was taken to be essentially constant over some band appropriate to the signal. The earliest aim of radar signal processing, as distinct from radar signal observation, was to determine how the receiver might be more effective than a simple amplifier. The fundamental advance which resulted was the technology of pulse compression. This is the exact one-dimensional analog of SAR image formation processing, and its development will lead directly to SAR algorithms. We begin with an earlier development which is related, that of the "matched filter".
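The bind just described can be put in rough numbers. With a simple receiver, the required slant range resolution caps the pulse length through τ_p ≤ 2δR/c, and the pulse length in turn caps the energy per pulse for a fixed peak power. A back-of-envelope sketch in Python (the resolution and power figures are arbitrary illustration values, not parameters from the text):

```python
# Resolution-versus-energy bind of a simple (uncompressed) pulse radar.
c = 3.0e8                  # speed of light, m/s

delta_R = 10.0             # desired slant range resolution, m (illustrative)
tau_p = 2 * delta_R / c    # longest usable pulse: tau_p <= 2*deltaR/c
# -> about 6.7e-8 s, i.e. roughly 67 ns

P_t = 1.0e3                # available peak transmitter power, W (illustrative)
E_t = P_t * tau_p          # energy per pulse, J
# -> about 6.7e-5 J; more energy needs either more peak power or a longer
#    pulse, and a longer pulse ruins resolution, unless the receiver does
#    more than amplify (pulse compression).
```

The numbers are arbitrary, but the scaling is the point: at fixed peak power, pulse energy and range resolution pull in opposite directions for a simple pulse.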

3.1.1 Derivation of the Matched Filter

In a classic study, North (1963) considered the following problem. Suppose the radar transmits a waveform s(t). This is intercepted by a target at some range R and scattered back to the receiver, where it arrives with time delay τ = 2R/c. Assume the idealized situation that only the pulse amplitude is changed in the process. The receiver input is thus

r(t) = as(t − τ) + n(t),

where n(t) is the waveform of the combined source noise and equivalent receiver noise referred to the receiver input. The noise n(t) is assumed to be white (i.e., to have a constant power spectral density N W/Hz, one-sided, over the receiver band).
We are at liberty to choose the receiver to be any linear, time invariant system we please, so that the receiver transfer function H(jω) is to be chosen. In order that we perceive the target to be present, and assign to it the correct range R, we want the power output of this receiver at time τ to be as high as possible a "bump" above the surrounding "grass", characterized by the average value of the noise power at the output (Fig. 2.2). We have no direct interest in the receiver power output at times other than the time the target return is received. The receiver itself contributes no noise, since the input noise n(t) includes equivalent receiver self noise.
The mathematical problem to be solved is thus to choose the transfer function H(jω) such that (where we allow complex time waveforms for generality and use ensemble expectation ℰ) the quantity

α = ℰ|g_s(τ) + g_n(τ)|²/ℰ|g_n(τ)|² = 1 + |g_s(τ)|²/ℰ|g_n(τ)|² = 1 + SNR₀     (3.1.1)

is maximum. Here g_s(t) and g_n(t) are the receiver outputs for signal and noise inputs respectively. We take g_s(t) to be deterministic, and use the fact that the noise n(t) has zero mean, so that the random variable g_n(t) also has zero mean.
The output of a linear stationary system with input f(t) and transfer function H(jω) is the convolution (Appendix A)

g(t) = ∫_{−∞}^{∞} h(t − t′)f(t′) dt′

where h(t) is the system unit impulse response, the inverse Fourier transform of the transfer function. Hence, with signal as(t − τ) as input, we have the output value

g_s(τ) = a ∫_{−∞}^{∞} h(τ − t′)s(t′ − τ) dt′     (3.1.2)

With the noise n(t) as input we have (Whalen, 1971, p. 47)

ℰ|g_n(τ)|² = (N/2) ∫_{−∞}^{∞} |H(jω)|² df = (N/2) ∫_{−∞}^{∞} |h(t)|² dt     (3.1.3)

where N/2 is the two-sided noise density and we have used the Parseval relation in the last step.
From Eqn. (3.1.1), as the quantity to be maximized we can take the output signal to noise ratio SNR₀. Using Eqn. (3.1.2) and Eqn. (3.1.3), with a change of variable of integration in the former, this is

SNR₀ = (2a²/N) |∫_{−∞}^{∞} h(t)s(−t) dt|² / ∫_{−∞}^{∞} |h(t)|² dt     (3.1.4)

The neatest procedure at this point uses the Schwartz inequality

|∫_{−∞}^{∞} f₁(t)f₂*(t) dt|² ≤ ∫_{−∞}^{∞} |f₁(t)|² dt ∫_{−∞}^{∞} |f₂(t)|² dt     (3.1.5)

in which equality holds if and only if f₁(t) = kf₂(t), with k an arbitrary constant. Using this in the numerator of Eqn. (3.1.4), with f₁ = h(t) and f₂ = s*(−t), we have for any choice of h(t) that

SNR₀ ≤ 2E/N     (3.1.6)

where E is the total energy of the received pulse as(t − τ). Since the choice h(t) = ks*(−t) attains this upper bound, that filter impulse response is the choice which maximizes the receiver output SNR. Since k is arbitrary, we can choose k = 1, and obtain just

h(t) = s*(−t)     *(3.1.7)

This is the "matched filter" (Whalen, 1971, p. 167). In the frequency domain, the result Eqn. (3.1.7) is

H(f) = ∫_{−∞}^{∞} s*(−t) exp(−j2πft) dt = ∫_{−∞}^{∞} s*(t) exp(j2πft) dt = S*(f) = A(f) exp[−jψ(f)]     (3.1.8)

where

S(f) = A(f) exp[jψ(f)]

is the spectrum of the transmitted signal s(t).
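The optimality expressed by Eqns. (3.1.4) through (3.1.7) is easy to check numerically: the correlator h = s* gives a larger ratio of peak output signal power to mean output noise power than other filters tried against it. A minimal discrete sketch in Python (the waveform, echo amplitude, and noise level are arbitrary illustration values):

```python
import numpy as np

n = 256
t = np.arange(n)

# Illustrative transmitted waveform: a complex linear FM burst.
s = np.exp(1j * 2 * np.pi * (0.05 * t + 0.0005 * t**2))

a, sigma = 0.1, 1.0   # echo amplitude and noise level (arbitrary choices)

def output_snr(h):
    # Peak signal power (cf. Eqn. (3.1.2)) over mean output noise power,
    # which by Eqn. (3.1.3) is proportional to the filter energy.
    signal_power = np.abs(a * np.sum(h * s)) ** 2
    noise_power = sigma**2 * np.sum(np.abs(h) ** 2)
    return signal_power / noise_power

snr_matched = output_snr(np.conj(s))                 # h = s*, Eqn. (3.1.7)
snr_boxcar = output_snr(np.ones(n, dtype=complex))   # a flat filter, for contrast

assert snr_matched > snr_boxcar   # the Schwartz bound, Eqn. (3.1.6), in action
```

Any other filter substituted for the boxcar gives an output SNR no larger than snr_matched, which here equals a² Σ|s|²/σ², the discrete analog of the energy bound up to normalization.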


The output SNR Eqn. (3.1.6) attained by the matched filter is the instantaneous value precisely at the delay time of the target echo. The filter is usually implemented in the intermediate frequency (IF) amplifier (Fig. 2.16), and its output therefore oscillates at the IF frequency. It is the average power of that output over a pulse that is the quantity which corresponds to the average received power in the radar equation. That average power corresponds to a SNR of E/N, half the SNR corresponding to the peak power of the sinusoidal matched filter output.
The energy of the input signal is just

E = P_s τ_p

where P_s is the average power over the signal duration τ_p. If the noise bandwidth of the receiver is B_n, then the attained average output SNR is

SNR₀ = E/N = P_s τ_p/N = (P_s/P_n)B_n τ_p

where P_n = NB_n is the average input noise power. Thus the matched filter achieves a SNR increase equal to the bandwidth time product of the transmitted pulse.
Assuming use of a matched filter in the receiver, the radar equation, Eqn. (2.7.1), becomes

SNR₀ = E/N = P_s τ_p/N = P_t τ_p G_t σ A_e/[(4πR²)² kF_op T_s]
     = E_t G_t σ A_e/[(4πR²)² kF_op T_s]     *(3.1.9)

where τ_p is the pulse length and E_t = P_t τ_p is the energy of the transmitted pulse.
For a simple transmitter pulse, say a rectangular envelope burst of RF carrier, the pulse duration and bandwidth relate as τ_p ≈ 1/B, with B some reasonable measure of bandwidth, say the noise bandwidth B_n. Then the matched filter radar equation, Eqn. (3.1.9), is just the simple radar equation Eqn. (2.7.1). The pulse bandwidth time product is unity. In the case of a simple transmitter pulse, the development of the matched filter formalism therefore added little of practical importance. However, the solution to the above optimization problem, the matched filter, provided a precise foundation upon which to base understanding of some ad hoc procedures. In the earlier form of the radar equation, Eqn. (2.7.1), the noise bandwidth B_n appears. It was clear that this bandwidth should somehow be optimized to improve SNR. Obviously, the receiver band should be adjusted so that in some sense "most of" the signal pulse is passed but the noise is blocked "as much as possible". This led to procedures for tailoring the receiver circuits to yield a transfer function matched in some sense to the transmitter pulse envelope shape. Even if the matched filter were not precisely realizable in practice, it provided a known upper bound to SNR, and an exact specification of the ideal filter against which to judge more convenient sub-optimal realizations.
For a simple RF burst, the matched filter improves performance less than 1 dB compared with simple filters (Skolnik, 1980, p. 374). In general, however, it is of prime importance that only pulse energy appears in the matched filter radar equation Eqn. (3.1.9), rather than power and bandwidth separately. So far as detection performance is concerned, the net result is an additional degree of freedom in system design. We will now develop the implications of this at more length.

3.1.2 Resolution Questions

In Section 3.1.1 the radar equation was derived assuming a matched filter receiver. In the case of a simple transmitter pulse, for which pulse duration τ_p and bandwidth B are related nominally by B = 1/τ_p, the result Eqn. (3.1.9) is essentially the same as the radar equation developed earlier, Eqn. (2.7.1). That is, the simple receiver with uniform response over its passband is nearly the matched filter for this case. On the other hand, the matched filter radar equation in general involves the energy of the transmitted pulse, and thereby its time duration for a fixed (and limited) available transmitter power, but nowhere does the pulse bandwidth appear. This is a significant difference, and the difference has to do with target resolution. Use of a matched filter opens the possibility to use a long high energy pulse for good SNR, but without sacrificing resolution. The resolution expression δR = cτ_p/2 is no longer in effect, as we shall now see.
To determine resolution, we need to find the extent to which a point target in space is "smeared out" by viewing it through the radar sensing system. With no signal processing, a point target produces a response at the receiver output which is essentially the time history of average power of the transmitted pulse, which has width τ_p. Thus, two point targets separated in slant range by less than δR = cτ_p/2 will produce receiver outputs which overlap in time. Such a response is impossible to distinguish from a return due to a single target of space extent wider than a point. It cannot be guaranteed that two targets closer together in range than cτ_p/2 will be distinguished as two targets. This is the resolution limit of the simple radar system.
On the other hand, suppose a matched filter processor is used. An isolated point target produces a response s(t − τ) at the filter input, where τ = 2R/c is the delay since transmission. The corresponding filter output is the convolution

g(t) = ∫_{−∞}^{∞} h(t − t′)s(t′ − τ) dt′ = ∫_{−∞}^{∞} s*(t′ − t)s(t′ − τ) dt′


Shifting origin to center the response at time τ, and making a change of variable, this is

g(t) = ∫_{−∞}^{∞} s*(t′ − t)s(t′) dt′     (3.1.10)

The squared magnitude of this is a special case of the "ambiguity function" corresponding to the transmitter waveform s(t), introduced by Woodward (1953) in an influential book. Using the Parseval relation, it can also be written as

g(t) = ∫_{−∞}^{∞} |S(f)|² exp(−j2πft) df     (3.1.11)

The time width of this function g(t), the matched filter output in response to a point target, controls the resolving capability of the system. That width depends on the details of the transmitted pulse. For example, if the pulse is a simple burst of RF, so that (using complex notation)

s(t) = a(t) exp(jω₀t);   a(t) = 1,   |t| ≤ τ_p/2

being careful with limits in the integral in Eqn. (3.1.10) we obtain

g(t) = (τ_p − |t|) exp(−jω₀t),   |t| ≤ τ_p     (3.1.12)

The power function |g(t)|² is a quadratic shape of width the order of τ_p. This yields a time resolution δt = τ_p, which is the same as obtained without the matched filter.
As a more interesting example, for any pulse with a constant (say unity) spectrum magnitude (and arbitrary phase) over some (one-sided) band |f − f_c| < B/2, the second form, Eqn. (3.1.11), gives

g(t) = B exp(−jω₀t)[sin(πBt)/πBt]     *(3.1.13)

which has a power function |g(t)|² of width δt ≈ 1/B at the 3 dB point. Thus, the time resolution in the matched filter output in this case has nothing to do with input pulse length τ_p, but only pulse bandwidth B. The width of the matched filter input τ_p would be the time resolution afforded without matched filter processing, while the time resolution with processing is 1/B. The ratio of these, the pulse "compression" ratio afforded by matched filter processing, is the bandwidth time product Bτ_p of the transmitted pulse.
The important point is that use of a matched filter, in addition to enhancing detectability by maximizing receiver output SNR, decouples pulse length from range resolution. Therefore, long pulses of tolerable average power can be used to obtain large energy E = P_s τ_p for satisfying the detectability requirements, while at the same time a wide bandwidth can be used to obtain good resolution.
The pulse most often used to do this job is the linear-FM, or "chirp", pulse

s(t) = cos[2π(f_c t + Kt²/2)],   |t| ≤ τ_p/2     (3.1.14)

with frequency (time derivative of phase) f = f_c + Kt which is a linear function of time over the pulse duration. To a quite close approximation in practical cases, the bandwidth of this is just Kτ_p, since the frequency "starts at" f_c − Kτ_p/2, sweeps through all intermediate frequencies, and ends at f_c + Kτ_p/2. Depending on the sign of K, we have an "up chirp" or a "down chirp". A variety of other waveforms may be used (Skolnik, 1980, p. 420); in remote sensing SAR, however, the linear FM is almost universal. The most important exception is the discrete version of the linear FM

s_i(t) = cos[2π(f_c + iΔf)t],   i = −(N − 1)/2, ..., (N − 1)/2,   Δf = B/N

The N pulses s_i(t) are transmitted sequentially in a burst to create a "step chirp", or "synthetic pulse" wave. Wehner (1987) discusses the technique in detail.
Because of its practical importance, the matched filter has been analyzed in extensive detail in the literature. The central quantity studied is the ambiguity function of various transmitter pulse waveforms, that being the time behavior of the average output power of the matched filter corresponding to the transmitted pulse in question. An excellent resource is the book by Cook and Bernfeld (1967), or that of Rihaczek (1969). Many other texts provide more or less detailed treatments of the subject.
In the brief discussion above, we assumed that the target return, the input to the matched filter, was simply a time delayed and attenuated version of the transmitted pulse. More generally, because of relative motion between radar and target, there will be Doppler frequency shift as well as time delay. For a narrowband transmitted pulse

s(t) = a(t) exp(jω₀t),   |f − f_c| < B/2

the received pulse is approximately

r(t) = a(t − τ) exp[j(ω₀ + ω_D)(t − τ)]

where the Doppler shift is ω_D = −4πṘ/λ, with Ṙ the target range rate. The matched filter for the transmitted pulse, which is what will have been designed into the receiver, has impulse response h(t) = s*(−t). The resulting matched filter output function, shifted for convenience of notation to have time origin at τ, is

f(t, f_D) = exp(jω₀t) ∫_{−∞}^{∞} a(t′)a*(t′ − t) exp(j2πf_D t′) dt′     (3.1.15)

By convention of definition, the corresponding ambiguity function is

χ(t, f_D) = |f(−t, f_D)|²     (3.1.16)

As an example, for the linear FM pulse Eqn. (3.1.14), a contour of the ambiguity function is shown in Fig. 3.1. For a target with no motion relative to the radar line of sight, f_D = 0 and the time width of the matched filter output power function is nominally 1/B, as developed above. For a target known to be at some particular range R = cτ/2, t = 0 and the Doppler shift due to target motion can be measured with resolution nominally 1/τ_p. The locus of the peak of the ridge of the function Eqn. (3.1.16) is f_D = Kt, so that a target which is in fact in motion with a consequent Doppler shift f_D will be assigned to a range different from its true range by an amount ΔR = (c/2)(f_D/K). This is the source of the adjective "ambiguity" in ambiguity function. A target which is at some particular range and moving may (and will, for this example of the linear FM) appear the same at the matched filter output as a target at some other range which is not moving. Another way to say this is that the linear FM wave has frequency and time "locked" together. A frequency shift Δf at the matched filter input causes a time shift Δt = Δf/K at the output.

Figure 3.1 Ambiguity function 3 dB contour for linear FM pulse.

Extensive discussion of ambiguity function analysis can be found in the references mentioned. Since the systems of interest to us in this book all use the linear FM pulse, we need not pursue the matter further in generality. We will later return to some specific results as needs arise.

3.2 PULSE COMPRESSION

As we discussed in Section 3.1.2, the matched filter output, Eqn. (3.1.13), realizes time compression of pulses of unit (or at least constant) spectrum magnitude in the ratio of the bandwidth time product Bτ_p. This is the case in particular for the common linear FM waveform. That the two objectives of SNR maximization and resolution improvement (by compression) are realized by the matched filter follows because, if the transmitted waveform s(t) has spectrum

S(f) = A(f) exp[jψ(f)]

then the matched filter Eqn. (3.1.8) is

H(f) = S*(f) = A(f) exp[−jψ(f)]     (3.2.1)

while on the other hand, as we will discuss below, the general pulse compression filter is

H(f) = 1/S(f) = [1/A(f)] exp[−jψ(f)],   A(f) ≠ 0     (3.2.2)

The two filters Eqn. (3.2.1) and Eqn. (3.2.2) are identical provided A(f) = 1 over the signal band, or at least A(f) = const.
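The equivalence of Eqns. (3.2.1) and (3.2.2) for a flat spectrum magnitude can be seen directly by constructing such a spectrum. A sketch in Python, with the pulse idealized in the frequency domain (the band and phase are arbitrary illustration values):

```python
import numpy as np

n = 512
f = np.fft.fftfreq(n)

# Idealized spectrum: A(f) = 1 over |f| < B/2 with a quadratic phase psi(f)
# (the frequency-domain picture of a linear FM pulse); zero outside the band.
B = 0.25
band = np.abs(f) < B / 2
psi = 2000.0 * f**2
S = np.where(band, np.exp(1j * psi), 0)

H_matched = np.conj(S)                  # Eqn. (3.2.1): A(f) exp(-j psi(f))
H_inverse = np.zeros(n, dtype=complex)
H_inverse[band] = 1.0 / S[band]         # Eqn. (3.2.2), where A(f) != 0

# With A(f) = 1 over the band the two filters coincide.
assert np.allclose(H_matched[band], H_inverse[band])

# Either filter removes the phase: the output spectrum is |S(f)|^2 = 1 in band,
# so the compressed pulse is a narrow sinc-like function peaked at t = 0.
g = np.fft.ifft(S * H_matched)
assert np.argmax(np.abs(g)) == 0
```

When A(f) is not constant the two filters differ, which is where the compression filter modifications discussed below come in.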
Looking ahead to application to imaging algorithms, it is desirable to consider
pulse compression processing in its own right, apart from considerations of
detection and matched filtering. We begin with a development which will
generalize to SAR image formation, and then develop some material of later
use having to do with the properties of the linear FM waveform, and with some
modifications of the compression filter to alleviate time sidelobes in its response.
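As a preview of those linear FM properties, the compression ratio Bτ_p of Section 3.1.2 can be exhibited by matched filtering a sampled baseband chirp. A sketch in Python (the sample rate, bandwidth, and pulse length are arbitrary illustration values):

```python
import numpy as np

fs = 1000.0               # sample rate, Hz
tau_p = 1.0               # pulse length, s
B = 100.0                 # swept bandwidth, Hz
K = B / tau_p             # chirp rate, Hz/s

t = np.arange(-tau_p / 2, tau_p / 2, 1 / fs)
s = np.exp(1j * np.pi * K * t**2)     # Eqn. (3.1.14) at complex baseband

# Matched filter output: the autocorrelation of s, cf. Eqn. (3.1.10).
g = np.correlate(s, s, mode="full") / fs
power = np.abs(g) ** 2

# 3 dB width of |g(t)|^2 is about 1/B, versus an input length of tau_p.
half = np.where(power >= power.max() / 2)[0]
width_3db = (half[-1] - half[0] + 1) / fs
compression_ratio = tau_p / width_3db    # on the order of B * tau_p = 100
```

Here width_3db comes out close to 1/B (about 9 ms for these values), so the ratio of input pulse length to output width is on the order of 100, the time-bandwidth product.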
3.2.1 Linearity, Green's Function and Compression

We now want to relate compression processing to a general procedure in linear system theory. This amounts to inverting the system impulse response, in the operator sense. In mathematical terms, we have to do with the Green's function of the dynamic system, and its operator inverse. It is an absolute requirement that the system we deal with be linear (but not necessarily time invariant in its properties). We will first discuss the linearity of the radar hardware and signal processing, and then describe the target features which enter linearly into the radar received signal.

The Radar as a Linear System

Radar systems are designed and operated to be linear in the various voltage waveforms, at least up to the output of the IF amplifier and filter stages. In the coherent radars of later interest to us, the (nonlinear) operation of average power formation at the IF output is replaced with the linear operation of "quadrature demodulation", also called "I, Q detection", or "complex basebanding". In this, the high frequency structure of the IF output is stripped away by shifting the signal to a frequency band centered on zero frequency, leaving the low frequency envelope waveform (Whalen, 1971, Chapter 3). As a result, all the operations in an imaging radar and its associated signal processing are designed to be strictly linear. The only exception is the final operation of forming the real image intensity from the signal processor output, the so-called "complex image".
In the radar range equation, Eqn. (2.7.1), the target cross-section σ appears. This is the area we impute to the target based on the power it scatters toward the receiver, under the assumption that the target is an isotropic scatterer (which it might or might not actually be). If multiple targets are in view, or if we view an extended region with multiple scattering elements, the receiver response will depend on the characteristics of all the targets. Since the electromagnetic field equations are linear in field strength, rather than in power, the cross-sections of the individual targets are not immediately appropriate for combining into a total response. In fact, as we discussed in Section 2.3, for extended targets with specific cross sections σ⁰(θ, φ), the superposition of mean elemental cross-sections by means of the expression Eqn. (2.3.4):

I_r = ∫_A [σ⁰(θ, φ)I(R, θ, φ)/4πR²] dA

is only approximately correct, and only when interpreted with care, as discussed by Ulaby et al. (1982, p. 508). In order to preserve and make use of linearity, it is therefore more appropriate to deal with receiver voltage, rather than power. To that end, we want to describe the target in terms of its effect on electric field, rather than on average power. The Fresnel reflection coefficient ζ is the appropriate quantity to introduce (Ulaby et al., 1981, p. 73).
Consider an extended target of area A, which we will take as planar, and normal to the radar beam center (Fig. 3.2). Let E_in(x, y) be the electric field phasor incident at some point on A, and let E_s(x, y) be the corresponding reflected field. The incident field is assumed linearly polarized. Then (Ulaby et al., 1981, Chapter 2) the reflected field is also polarized, in the same direction

Figure 3.2 A terrain element of area A illuminated by an incident field E_in. The scattered field E_s acts as secondary aperture illumination resulting in directivity D_A of the terrain element.

as the incident field, and has phasor

E_s(x, y) = ζ(x, y)E_in(x, y)     (3.2.3)

where ζ(x, y) is the (possibly complex) dimensionless Fresnel reflection coefficient of the surface element. It is determined by the local dielectric constant of the reflecting surface. (More generally, the phasor E_s will result by scattering, and will have components both parallel and perpendicular to the incident wave. Here we consider only the "like-polarized" reflection coefficient.)
Now suppose a receiving antenna views the surface from range R. The electric field E_rec(R) at the receiving antenna is given by the diffraction integral Eqn. (2.2.6)

E_rec(R) = (jk/2π) ∫_A E_s(x, y)[exp(−jkr)/r] dA     (3.2.4)

in which we have made the approximations (Fig. 2.4) r ≫ λ, with unit obliquity factor and normal viewing. Further, using Eqn. (2.2.13), the incident field phasor at the terrain is

E_in = (P_t Z₀G_t/4π)^{1/2} exp(−jkr)/r     (3.2.5)

where Z₀ = (μ₀/ε₀)^{1/2} is the impedance of free space and we have inserted the appropriate phase shift, and assumed the target region does not extend beyond nominal beam center.
Combining Eqns. (3.2.3), (3.2.4), and (3.2.5), there results

E_rec = (jk/2π)(P_t Z₀G_t/4π)^{1/2} ∫_A ζ(R′)[exp(−j2kr)/r²] dA′     *(3.2.6)

From Eqn. (2.3.3 ), the received intensity and the terrain backscatter coefficient
are related as

*(3.2.6)

assuming constant incident intensity IA, so that, comparing Eqn. (3.2.9), we


obtain

Recognizing (Fig. 2.4) that


L cro(R') dA' =DAL l((R')l2 dA'

r=IR-R'I
this is of the form of a convolution of the target (terrain) reflectivity coefficient .
((R') with the Green's function (impulse response)
h(RIR') = const exp[ -j2klR - R'IJ/IR - R'l 2
It is through Eqn. (3.2.6) that the radar observable Erec is linearly related to .
the terrain "complex image" elements ((R).
It is interesting to relate the (power) backscatter coefficient cr 0 of the surface
with the Fresnel (voltage) reflection coefficient(. Using the far field expression
Eqn. (2.2.9) with(}= 0 (Fig. 2.4), the received field phasor Eqn. (3.2.4) is

Erec = (j/ A.R) exp( -jkR)

(3.2.10)

where DA is the directivity of the illuminated terrain patch.


In this Eqn. (3.2.10) the terrain directivity DA involves the distribution ((R').
In the idealized case of constant reflectivity ((R') (specular reflection), from
Eqn. ( 2.2.19) we have DA = 4nA /A. 2, so that Eqn. ( 3.2.10) becomes

More generally, a complex random scattering coefficient ( can be defined in


analogy with Eqn. (3.2.3). From Eqn. (3.2.10), the mean backscatter coefficient
of a terrain patch is then

(3.2.11)

E.(x, y) dx dy

Since the statistics of the coefficient DA depend on detailed structure of the


scattering patch, the mean backscatter coefficient is taken simply as

Then

*(3.2.12)
IEr.cl2 = ( 1/ A.R)2IL E.(x, y) dx dyj2
= (1/4nR 2)DA

IE.(R')l 2 dA'

introducing from Eqn. ( 2.2.19) the directivity DA of the terrain region, considered.
as an aperture over which the field E. is maintained.
Further assuming a constant terrain illumination IE;11 I2, and recognizing that :
intensity I= IEl 2/Z 0 , Eqn. (3.2.8) yields the electromagnetic intensity at the)'
receiving antenna as
(3.2.9)
;~

where /~ = IE;n 12/ Z 0 is the intensity illuminating the terrain.

'.,

It is cr 0 (R) which is the terrain "image". Using the radar and processing system,
one attempts to reconstruct it as nearly as feasible. This is done by fin~.t forming
approximations to ((R), so-called complex images, and from them constructing
a statistical estimate of their mean square, that estimate being taken as an
estimate of the real image cr 0 (R). The associated calibration techniques are
discussed in Chapter 7.

Compression and the Inverse Green's Function

In the case of an extended target, then, our first objective is to produce the
complex reflectivity distribution ((R') of the target from the observed receiver
voltage phasor functions
Or(R) = aErec(R)


where a is a system constant which we will absorb into the Green's function
Eqn. (3.2.7). Combining Eqns. (3.2.6) and (3.2.7), we then can write the voltage
output phasor of the linear receiver as the convolution

v.(R)

h(RIR')((R') dA'

f_'XJ

f~

00

(3.2.13)

(3.2.14)

where R = cr:/2 is essentially the receiver voltage time variable, and the finite
length of the target, or the finite coverage of the radar beam, will limit the
interval of integration. We want to determine the complex image ((R') given
the signal v.(R) and the impulse response h(RIR'). Note that, if ( = o(R' - R0 )
represents a unit point target at range R 0 , where c5 is the Dirac delta function
(unit impulse), the receiver response is

v.(R)

=f

00

h- 1 (R 0 IR)v,(R)dR

h(RIR')((R') dR'

h(RIR')o(R' - R 0 ) dR' = h(RIR 0 )

Thus the impulse response h(RIR') can be calculated as the receiver output
should the reflectivity function be an ideal impulse, since the receiver system is:'
known.

Now suppose that in some way or another we have found a


1
h - ( R 0 IR) (the inverse Green's function) such that

141

yields

This convolution v.(R) is generally a two dimensional data set, with one
dimension of R being time during each radar pulse, and the other being the
position of the radar along its trajectory of motion, in the case of SAR. We
will eventually deal with the signal processing involved in inverting the relation
Eqn. (3.2.13), which is a Fredholm integral equation of the first kind.
As a prelude, consider the one dimensional case

v.(R) =

PULSE COMPRESSION

f~ooh- 1(RolR)(f~oo h(RIRoK(Ro) dRo) dR

= f

:00 ((Ro>(f:00 h- (RolR)h(RIRo) dR) dRo

= f :

00

(3.2.16)

((R 0)o(Ro - Ro) dRo =((Ro)

This is to say that the indicated operation on the received data v,(R) exactly
reconstructs the complex reflectivity distribution ((R) in view of the radar. The
processing by h - 1( R 0 IR) produces an image of the reflectivity distribution, and
the operations involved in its application constitute an imaging algorithm. The
processing amounts to correlating the received signal v,(R) with a function
h - 1( R0 IR) for various values corresponding to ranges R 0 = er: /2 where the
reflectivity function is to be determined.
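A small discrete analogue (my own sketch, not from the text) makes Eqns. (3.2.14)-(3.2.16) concrete: the Green's function becomes a matrix, the inverse Green's function its matrix inverse, and the imaging operation a matrix-vector product.

```python
import numpy as np

# Discrete analogue of Eqns. (3.2.14)-(3.2.16): h[i, j] ~ h(R_i | R'_j).
# The random complex system matrix is an assumption for illustration only.
rng = np.random.default_rng(1)
N = 64
h = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

zeta = np.zeros(N, complex)
zeta[20] = 1.0                      # a unit point target, zeta = delta
v_r = h @ zeta                      # Eqn. (3.2.14): the data from a point target
assert np.allclose(v_r, h[:, 20])   # ... is just the impulse response h(R | R_20)

h_inv = np.linalg.inv(h)            # satisfies h_inv @ h = I, cf. Eqn. (3.2.15)
zeta_hat = h_inv @ v_r              # linear processing, Eqn. (3.2.16)
print(np.allclose(zeta_hat, zeta))  # True
```

The same chain works for any invertible system matrix; the continuous development in the text replaces the matrix inverse by the inverse Green's function.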
Let us now consider how to determine the inverse Green's function h⁻¹(R₀|R) from the specified Green's function h(R|R₀). Consider first the case that we have available the radar system output time function v_r(R) over the infinite time span (−∞, ∞), an assumption which we will obviously need to modify later. Suppose also (the actual situation for the current case of one dimensional "range" processing) that the radar, in addition to being a linear system, is time stationary, i.e., h(R|R₀) = h(R − R₀). (Here we have used a common abuse of notation in designating the one-variable function h(R) with the same letter as the two-variable function h(R|R₀).) Then defining a corresponding h⁻¹(R₀|R) = h⁻¹(R₀ − R), the convolution integral Eqn. (3.2.15) which we want to solve becomes

    ∫_{−∞}^{∞} h⁻¹(R₀ − R) h(R − R₀′) dR = δ(R₀ − R₀′),    −∞ < R₀, R₀′ < ∞

or, with x = R − R₀′,

    ∫_{−∞}^{∞} h⁻¹(R₀ − R₀′ − x) h(x) dx = δ(R₀ − R₀′),    |R₀ − R₀′| < ∞        (3.2.17)

Applying the Fourier transformation to the convolution Eqn. (3.2.17) yields

    H⁻¹(f) H(f) = 1,    H⁻¹(f) = 1/H(f)        (3.2.18)

where we mean that H(f) and H⁻¹(f) are the Fourier transforms of h(R) and h⁻¹(R). The solution, Eqn. (3.2.18), is obvious in this simple case. The filter H⁻¹(f), which compresses the signal h(x) back to an impulse, simply undoes whatever the radar linear transfer function H(f) has done.

In the particular case that

    H(f) = exp[jψ(f)]

we have

    H⁻¹(f) = 1/exp[jψ(f)] = exp[−jψ(f)] = H*(f)

so that

    h⁻¹(R) = h*(−R)

and we recover the matched filter as the compression processor. (Recall that R = ct/2 relates range and receiver signal time.) In the general case that |H(f)| ≠ 1, the compression processor is not the matched filter; the filter amplitude 1/|H(f)| ≠ |H(f)|.

3.2.2  The Matched Filter and Pulse Compression

Cook and Bernfeld (1967, Chapter 3) have given a careful discussion relating the matched filter with compression processing. The developments there also make precise the relationship locking time with frequency for linear FM waveforms having large bandwidth time products, an important basic concept we have so far referred to only in passing. Since SAR processing mostly involves compression of linear FM waveforms, we will here summarize some points relating to the procedure. Much of the development involves an approximate way of calculating the spectrum of a time waveform.

The Principle of Stationary Phase

Consider a general waveform

    s(t) = a(t) exp[jφ(t)]

which is incidentally of the form of the complex envelope of a narrow band signal (Whalen, 1971, Chapter 3)

    v(t) = a(t) cos[ω_c t + φ(t)]

We want to find the spectrum S(f) of s(t):

    S(f) = ∫_{−∞}^{∞} a(t) exp{−j[2πft − φ(t)]} dt        (3.2.19)

The integration is in general not possible to carry out in closed form. However, the principle of stationary phase provides a useful approximation.

If we consider (say) the real part of the spectrum Eqn. (3.2.19), we have

    Re[S(f)] = ∫_{−∞}^{∞} a(t) cos[2πft − φ(t)] dt        (3.2.20)

There may exist time ranges of the interval of integration over which the angle 2πft − φ(t) changes rapidly with respect to the changes of the function a(t). Then the contribution to the integral value from regions of adjacent positive and negative loops of the cosine function will nearly cancel, with no net contribution to the value of the integral. Application of the principle of stationary phase amounts to taking note of that fact, and concentrating attention elsewhere, over intervals where the angle of the cosine function changes only slowly.

The location of such time ranges, with slowly varying angle 2πft − φ(t), depends on the particular value of f for which we are trying to calculate the number S(f), since f appears as a parameter in the angle. Ranges of time for which we do get net contribution to the integral are characterized by the fact that the integrand does not oscillate rapidly, which is to say that the phase angle 2πft − φ(t) is nearly constant. Thus we can confine attention to time ranges near the stationary points of the phase function, which are times t(f) for which

    d[2πft − φ(t)]/dt = 0,    i.e.,    2πf = dφ/dt        (3.2.21)

Since we are confining attention to times t near solutions t(f) of Eqn. (3.2.21), we can expand the integrand of the Fourier transform Eqn. (3.2.19) as a Taylor series around t(f). Keeping only the zeroth order term in a(t), and terms through the quadratic in the angle, noting that the first order term in the angle is zero by the definition Eqn. (3.2.21) of t(f), and for simplicity of notation assuming that only a single stationary point exists, we obtain (where we write t_f = t(f))

    S(f) = a(t_f) exp{j[−2πf·t_f + φ(t_f)]} ∫_{t_f−Δ}^{t_f+Δ} exp[jφ″(t_f)(t − t_f)²/2] dt        (3.2.22)

where 2Δ is the interval (in general a function of f) over which the quadratic approximation to the phase function in Eqn. (3.2.19) is reasonable.

Making a change of variable in the integral Eqn. (3.2.22) results in:

    S(f) = 2a(t_f)[2π/|φ″(t_f)|]^{1/2} exp[−j(2πf·t_f − φ(t_f))] ∫_0^{Δ√(|φ″(t_f)|/2π)} exp{j sgn[φ″(t_f)]πy²} dy        (3.2.23)

In the particular case that the upper limit of the integral can be extended with little error to infinity, the Fresnel integral that arises can be evaluated (Gradshteyn and Ryzhik, 1965, Section 3.691.1) to yield

    S(f) = [2π/|φ″(t_f)|]^{1/2} a(t_f) exp j{−2πf·t_f + φ(t_f) + sgn[φ″(t_f)]π/4}        *(3.2.24)

Spectrum of the Linear FM Pulse

The special case of the linear FM pulse should be noted explicitly. Thus we consider

    s(t) = exp[j2π(f_c t + Kt²/2)],    |t| < τ_p/2        (3.2.25)

for which φ″(t) = 2πK = const. The stationary phase relation Eqn. (3.2.21) yields

    t_f = (f − f_c)/K        (3.2.26)

That is to say, for any frequency f, only time portions of the signal located near the value Eqn. (3.2.26) contribute to the spectrum at the frequency in question. Frequency and time are approximately locked together in the linear FM waveform.

Since the phase of the signal Eqn. (3.2.25) is exactly quadratic in time, the expression Eqn. (3.2.22) is exact, with the range of integration changed to |t| < τ_p/2, the full pulse extent. The approximate expression Eqn. (3.2.23) is replaced by the exact expression

    S(f) = |K|^{−1/2} exp[−jπ(f − f_c)²/K] ∫_{√(Bτ_p)(−1−y sgn K)/2}^{√(Bτ_p)(1−y sgn K)/2} exp[j(sgn K)πy′²] dy′        (3.2.27)

using Eqn. (3.2.26) to calculate

    2πf·t_f − φ(t_f) = 2πf·t_f − 2π(f_c t_f + K t_f²/2) = π(f − f_c)²/K

In Eqn. (3.2.27), we define

    B = |K|τ_p,    f − f_c = yB/2        (3.2.28)

For adequately large bandwidth time product Bτ_p, the Fresnel integral in Eqn. (3.2.27) can be evaluated; and it is found that S(f) ≈ 0 for |f − f_c| > B/2, so that the quantity B, defined by Eqn. (3.2.28), is the signal bandwidth. Cook and Bernfeld (1967, p. 139) calculate that to be the case for Bτ_p > 100 (Fig. 3.3). In the band, for large Bτ_p the same expression as Eqn. (3.2.24) results:

    S(f) = |K|^{−1/2} exp[j(π/4) sgn(K)] exp[−jπ(f − f_c)²/K],    |f − f_c| < B/2        *(3.2.29)

Figure 3.3  Amplitude and phase spectra of linear FM signals with various bandwidth time products. Phase shown is residual after removal of nominal quadratic phase (from Cook and Bernfeld, 1967, and after Cook, 1960, Proc. IRE, 48, pp. 300-316. © IEEE).

The principle of stationary phase can also be applied to the inverse transform relation

    s(t) = ∫_{−∞}^{∞} A(f) exp j[ψ(f) + 2πft] df

obtaining

    s(t) = A(f_t)[2π/|ψ″(f_t)|]^{1/2} exp j{2πf_t·t + ψ(f_t) + sgn[ψ″(f_t)]π/4}        (3.2.30)

where the frequency f_t is defined for any specified t of interest by

    ψ′(f_t) = −2πt        (3.2.31)

For the large bandwidth time product quadratic phase function Eqn. (3.2.29), the expression Eqn. (3.2.30) reduces to the linear FM, Eqn. (3.2.25), while Eqn. (3.2.31) yields the locking relationship f_t = f_c + Kt.

The above relations are approximate. They will be more or less accurate depending on the specific nature of the signal s(t) in question, the more so the larger the bandwidth time product Bτ_p of the waveform. For a signal s(t) with both a smooth envelope a(t) and a smooth spectrum amplitude A(f), according to Cook and Bernfeld (1967, p. 49), a bandwidth time product nominally of 10 suffices to yield accurate ψ(f) and φ(t) using respectively the approximations Eqns. (3.2.24) and (3.2.30). If one of a(t) or A(f) is discontinuous, Bτ_p needs to be 20 or 30, while if both are discontinuous, Bτ_p needs to be 100. This latter case applies to the nominally time limited, band limited linear FM waveform.
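As a numerical illustration (my own sketch, not from the text), Eqn. (3.2.29) can be checked by comparing an FFT of a sampled baseband chirp against the stationary-phase prediction that |S(f)| ≈ |K|^{−1/2} inside the band and ≈ 0 outside. The parameters below (f_c = 0, K = 10⁴ Hz/s, τ_p = 0.1 s, so Bτ_p = 100) are assumptions chosen only for the demonstration.

```python
import numpy as np

# Stationary-phase check for a baseband linear FM pulse, Eqns. (3.2.25)-(3.2.29).
K = 1.0e4          # chirp rate, Hz per second (assumed)
tau_p = 0.1        # pulse length, s (assumed)
B = K * tau_p      # bandwidth, Eqn. (3.2.28); here B*tau_p = 100

fs = 16 * B        # oversample to approximate the continuous transform
t = np.arange(-tau_p / 2, tau_p / 2, 1 / fs)
s = np.exp(1j * np.pi * K * t ** 2)              # Eqn. (3.2.25) with fc = 0

nfft = 8 * len(t)                                # zero-pad for a fine frequency grid
f = np.fft.fftfreq(nfft, 1 / fs)
S = np.abs(np.fft.fft(s, nfft)) / fs             # dividing by fs ~ continuous-time FT

in_band = np.abs(f) < 0.4 * B                    # stay away from the band edges
out_band = np.abs(f) > 0.8 * B
print(S[in_band].mean() * np.sqrt(K))            # ~ 1.0, i.e. |S(f)| ~ |K|**-0.5
print(S[out_band].max() * np.sqrt(K))            # well below the in-band level
```

The residual in-band ripple is the Fresnel ripple visible in Fig. 3.3; it shrinks as Bτ_p grows.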

Compression Processing

Let us now consider the compression properties of the matched filter. Regardless of what waveform s(t) is transmitted, a matched filter will be used in a receiver intended for target detection. Its impulse response is h(t) = s*(−t) and its transfer function is H(f) = S*(f), where S(f) is the spectrum of s(t). For a target at range R from the radar, so that (ignoring scale factor) the received signal is the transmitted signal delayed by a time 2R/c, the receiver output will be

    G(f) = H(f)S(f) exp(−j2ωR/c)
         = S*(f)S(f) exp(−j2ωR/c)
         = A²(f) exp(−j2ωR/c)

If A(f) is rectangular, as is the case in particular for s(t) a linear FM with Bτ_p > 100, then G(f) has a rectangular amplitude spectrum with bandwidth B. The matched filter time output is then

    g(t) = (∫_{−f_c−B/2}^{−f_c+B/2} + ∫_{f_c−B/2}^{f_c+B/2}) exp(−j4πfR/c) exp(j2πft) df

         = 2B cos[2πf_c(t − 2R/c)] {sin[πB(t − 2R/c)]}/[πB(t − 2R/c)]        (3.2.32)

The envelope of this is a pulse of form (sin t)/t with time width nominally 1/B, centered at the time delay τ = 2R/c corresponding to the target range R. This is just the result Eqn. (3.1.13).

On the other hand, even if the input spectrum A(f) is not rectangular, the (sin t)/t form may still be a good approximation to the filter time output. Again according to Cook and Bernfeld (1967, p. 49), provided s(t) is a linear FM with Bτ_p > 20, and provided the proper matched filter is used for the S(f) that is the actual spectrum (not having unit amplitude A(f) if Bτ_p is considerably smaller than 100), the matched filter output envelope will have the (sin t)/t form. Thus, although in practice time bandwidth products much larger than 20 (or even 100) are used, even for products as small as 20 the resolution result δt = 1/B is valid, although not necessarily the linkage expression Eqn. (3.2.26). The implications of such results are important in considering SAR azimuth compression algorithms.

In contrast, for a transmitted pulse which is a simple burst of carrier: s(t) = exp(jω_c t), |t| < τ_p/2, the matched filter output for a target at range R will be just the expression Eqn. (3.1.12) delayed by 2R/c:

    g(t) = ∫ s*(x − t) s(x − 2R/c) dx

         = ∫_{2R/c−τ_p/2}^{t+τ_p/2} exp[−jω_c(x − t)] exp[jω_c(x − 2R/c)] dx

         = (τ_p − |t − 2R/c|) exp[jω_c(t − 2R/c)],    |t − 2R/c| ≤ τ_p        (3.2.33)

The time correlation form of the matched filter output expression:

    g(t) = ∫ s*(x − t) s(x − τ) dx        (3.2.34)

makes it easy to see pictorially why the resolution of the large time bandwidth product linear FM pulse is so much sharper than that of the simple RF burst. In Fig. 3.4, one can visualize the lower function of each pair:

    s*(x − t) = cos[2πf_c(x − t) + πK(x − t)²]

sliding along the upper function s(x − τ) as t varies. For each pair (t, τ), the product function in the integrand of Eqn. (3.2.34) contains sum and difference frequencies (with x as the "time" variable)

    Δf = K[(x − τ) − (x − t)] = K(t − τ)

The sum frequency term will integrate to zero, as will the difference frequency Δf = K(t − τ) unless t ≈ τ. The larger is K, the closer must t be to τ in order to obtain a non-zero value for the integral Eqn. (3.2.34).

Figure 3.4  Correlation of linear FM waveforms. Average product peaks near zero value of difference frequency Δf = K(t − τ).

3.2.3  Time Sidelobes and Filter Weighting

Let us consider again the resolution available from a transmitted pulse of spectrum

    S(f) = A(f) exp[jψ(f)]

over a band B. This is the Green's function of the radar system, except for a constant amplitude factor, since it is the system response to a point target. The inverse Green's function, a compression filter, is

    H(f) = 1/S(f),    S(f) ≠ 0        (3.2.35)

since then

    H(f)S(f) = 1

in the signal band. (Only in the case A(f) = 1, or at least A(f) = const, such as for example the linear FM with large bandwidth time product, are the matched filter S*(f) and the compression filter 1/S(f) the same.)

In the case of a transmitted signal s(t) of finite bandwidth, so that the qualification in Eqn. (3.2.35) has effect, the problem of finding the compression filter H(f) is "ill-posed" (Tikhonov and Arsenin, 1977), in the sense that the conditions of the problem do not lead us to a unique solution. (Since the signal s(t) has zero frequency content outside the band |f − f_c| < B/2, we can add any out of band components to H(f) and not change the filter output G(f) = H(f)S(f).) The problem is "regularized" (made to have a unique solution) by adding some extra conditions solely for that purpose. If we choose to add the condition that the filter H(f) have zero spectral amplitude outside the signal band (which corresponds to the "principal solution" of such problems (Bracewell and Roberts, 1954)), we obtain a compressed output as in Eqn. (3.2.32) above (for R = 0, say):

    g(t) = const·[sin(πBt)/πBt] cos 2πf_c t        (3.2.36)

We have in fact always done that without comment. Radar receivers are always so constructed.

For the linear FM waveform with high bandwidth time product, the matched filter Eqn. (3.2.35) is the appropriate compression processor if we use the principal solution. We thus reconstruct the complex reflectivity profile ζ(R) in view of the radar as in Eqn. (3.2.16) with the best resolution attainable by linear processing. (The adjoining of out of band components to the filter output is a nonlinear process, since zero filter input does not then correspond to zero output.) However, with that reconstruction of ζ(R) we have sidelobes to contend with, just as in the case of a finite antenna aperture (Section 2.2.2). The first sidelobes of g(t) of Eqn. (3.2.36) are only 13 dB lower than the main lobe. Thus, for example, a target 13 dB stronger than an adjacent target one resolution cell away will mask its weaker neighbor.

These time (or range) sidelobes in the ambiguity function Eqn. (3.2.36) must be dealt with to obtain a properly functioning system. Cook and Bernfeld (1967) discuss the problem in general in the context of signals with large Bτ_p products. Suppose we maintain the desirable constant power requirement that |s(t)| = a(t) = 1, and vary |S(f)| (analogous to antenna illumination) to attempt to improve the ambiguity diagram Eqn. (3.1.11). Assume we will always use a matched filter H(f) = S*(f) whatever S(f) may be (thereby deviating from the true compression filter H(f) = 1/S(f) over the band). Then some improvement is possible (Cook and Bernfeld, 1967, Chapter 3), but only at the expense of needing to generate rather inconvenient phase behaviors φ(t) for the transmitted signal s(t) = exp[jφ(t)].

The usual way to deal with undesirably high time sidelobe levels in the matched filter output is to unmatch the filter. There is thereby an inevitable reduction in output SNR, and a consequent decrease in detection performance which, although usually not severe, must be evaluated. Beyond that we deal with a trade-off between desirable improvement in sidelobe structure, and consequent undesirable, but usually tolerable, broadening of the mainlobe of the filter output function (degradation of resolution). Again, Cook and Bernfeld (1967, Chapter 7) have given a thorough discussion in the context of the radar matched filter receiver, although the general subject is discussed ubiquitously. Farnett et al. (1970) give a convenient summary, while Harris (1978) has given a particularly comprehensive discussion of the available alternatives in the case of time sampled data. Here we will follow only one line of thought, leading to some filters commonly used in SAR processing.
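The 1/B resolution and the 13 dB first sidelobe can be checked numerically. The sketch below is mine, with assumed parameters (K = 10⁴ Hz/s, τ_p = 0.1 s, so Bτ_p = 100); it matched-filters a baseband chirp and measures the mainlobe width and peak sidelobe of the compressed pulse.

```python
import numpy as np

# Matched-filter compression of a baseband linear FM pulse, cf. Eqn. (3.2.36).
K, tau_p = 1.0e4, 0.1               # assumed chirp rate and pulse length
B = K * tau_p                       # B*tau_p = 100
fs = 16 * B
t = np.arange(-tau_p / 2, tau_p / 2, 1 / fs)
s = np.exp(1j * np.pi * K * t ** 2)

g = np.correlate(s, s, mode="full")            # matched filter output g(t)
env = np.abs(g) / np.abs(g).max()              # normalized envelope
lag = (np.arange(len(g)) - (len(s) - 1)) / fs  # time axis of g

mainlobe = np.abs(lag) < 1.0 / B               # out to the first nulls
width_3dB = (env[mainlobe] > 1 / np.sqrt(2)).sum() / fs
sidelobe_dB = 20 * np.log10(env[~mainlobe].max())

print(width_3dB * B)    # ~0.9 (ideal sinc: 0.886/B at -3 dB)
print(sidelobe_dB)      # ~ -13 dB first sidelobe
```

The pulse of length τ_p = 0.1 s compresses to about 1/B = 1 ms, a compression ratio of Bτ_p = 100.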
Let us again assume that the transmitted pulse is the linear FM with constant envelope:

    s(t) = exp[j(2πf_c t + πKt²)]

in complex form (positive frequencies), and that the bandwidth time product is large, so that the spectrum has constant amplitude over the band B, say unity: S(f) = exp[jψ(f)]. The receiver filter is taken as

    H(f) = W(f) exp[−jψ(f)]

where W(f) is a real function to be found. Assume W(f) to be symmetric around the band center f_c.

We can then formulate the optimization problem of minimizing the mainlobe width of the filter output |g(t)|, for a specified maximum sidelobe level, where G(f) = H(f)S(f) = W(f). The answer is (Cook and Bernfeld, 1967, p. 178) the continuous form of the Dolph (1946) antenna current distribution function. Over the band, writing the frequency relative to band center, this is:

    W(f + f_c) = πA·I₁(z)/[Bz cosh(πA)]        (3.2.37)

where

    z = πA[1 − 4(f/B)²]^{1/2}

and I₁ is the modified Bessel function of first kind and order 1. The parameter A is set by the requested maximum sidelobe level a such that the maximum (voltage) sidelobe is a factor

    a = 1/cosh(πA)        (3.2.38)

below the mainlobe peak. (For example, if we demand that the largest sidelobe be 40 dB below the peak of the mainlobe, then a = 0.01 and A = 1.69.) In addition, at the band edges, f = ±B/2, W(f) has impulses of strength 1/[B cosh(πA)], which fact makes this weighting inconvenient to realize. It turns out that not only is the maximum sidelobe level no larger than the requested bound, but that all the sidelobes attain that bound; hence the distribution is also called the Dolph-Chebyshev weighting.

A flexible and convenient approximation to the Dolph weighting function Eqn. (3.2.37) is the Taylor weighting function (Cook and Bernfeld, 1967, p. 180; Taylor, 1955). Again relative to the center of the band this is:

    W(f + f_c) = 1 + 2 Σ_{m=1}^{n̄−1} F(m, A, n̄) cos(2πmf/B)        (3.2.39)

where the number of terms n̄ determines the goodness of the approximation. The numbers F are

    F(m, A, n̄) = 0.5(−1)^{m+1} [∏_{n=1}^{n̄−1} (1 − (m/σ)²/(A² + (n − 0.5)²))] / [∏_{n=1, n≠m}^{n̄−1} (1 − (m/n)²)]        (3.2.40)

where A is as in Eqn. (3.2.38) (determined from the requested sidelobe level) and

    σ = n̄[A² + (n̄ − 0.5)²]^{−1/2}        (3.2.41)

This latter happens also to be the factor by which the Taylor mainlobe is broadened beyond the Dolph mainlobe width. We want σ to be not too much larger than unity, so that to some extent n̄ (quality of approximation) and A (sidelobe level) are coupled. Reasonable nominal values are of the order of n̄ ≈ 3 for 25 dB sidelobes and n̄ ≈ 6 for 40 dB sidelobes.

The Taylor weighting function Eqn. (3.2.39) can be realized with reasonable convenience, either directly as a filter in the frequency domain, or in the time domain. Time domain realization makes use of the fact that

    F⁻¹{F(ω) cos(aω)} = [f(t + a) + f(t − a)]/2

so that the cosine terms in the Taylor filter Eqn. (3.2.39) can be realized by a linear combination of delayed and advanced (by integral multiples of 1/B) replicas of the filter input (so-called tapped delay line realization).

For typical sidelobe levels, the numbers F(m) of Eqn. (3.2.40) in the Taylor filter approximation to the Dolph filter become small rather rapidly as m increases towards n̄. For example, for n̄ = 6 and 40 dB sidelobes, the filter coefficients are: F(1,...,5) = 0.3891, −0.945 × 10⁻², 0.488 × 10⁻², −0.161 × 10⁻², 0.035 × 10⁻² (and incidentally σ = 1.043, so that the main lobe broadens only by 4.3%). This suggests dropping higher order terms in Eqn. (3.2.39), without changing n̄, which would involve recalculating the

coefficients. If this is done in the (6, −40 dB) case, for example, there results

    W(f) = 1 + 0.78 cos(2πf/B)

or, normalizing,

    W(f) = 0.56 + 0.44 cos(2πf/B)

This is very near the Hann weighting function

    W(f) = 0.5 + 0.5 cos(2πf/B)

or the Hamming function

    W(f) = 0.54 + 0.46 cos(2πf/B)

In practice, any of these cases may provide satisfactory sidelobe behavior with
negligible main lobe broadening beyond that of the full Taylor approximation.
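The Taylor design equations are easy to evaluate directly. The sketch below (my own, not from the text) computes A, σ, and the coefficients F(m, A, n̄) of Eqns. (3.2.38)-(3.2.41) for the (n̄ = 6, 40 dB) example, reproducing the values quoted above.

```python
import numpy as np

def taylor_coeffs(sll_dB, nbar):
    """Taylor weighting parameters for a requested sidelobe level (voltage ratio a)."""
    a = 10.0 ** (-sll_dB / 20.0)                        # Eqn. (3.2.38): a = 1/cosh(pi*A)
    A = np.arccosh(1.0 / a) / np.pi
    sigma = nbar / np.sqrt(A ** 2 + (nbar - 0.5) ** 2)  # Eqn. (3.2.41)
    F = []
    for m in range(1, nbar):                            # Eqn. (3.2.40)
        num = np.prod([1 - (m / sigma) ** 2 / (A ** 2 + (n - 0.5) ** 2)
                       for n in range(1, nbar)])
        den = np.prod([1 - (m / n) ** 2
                       for n in range(1, nbar) if n != m])
        F.append(0.5 * (-1) ** (m + 1) * num / den)
    return A, sigma, F

A, sigma, F = taylor_coeffs(40.0, 6)
print(round(A, 2), round(sigma, 3), round(F[0], 4))   # 1.69 1.043 0.3891
```

With these coefficients, W(f) of Eqn. (3.2.39) can be applied either as a frequency-domain weight or as the tapped delay line described above.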
At this point we have summarized a reasonable range of results from classical
one-dimensional (range) radar theory. In Chapter 5 we will generalize them to
the case of a two-dimensional imaging radar.

REFERENCES

Bracewell, R. N. and J. A. Roberts (1954). "Aerial smoothing in radio astronomy," Austral. J. Phys., 7, pp. 615-640.
Cook, C. E. and M. Bernfeld (1967). Radar Signals, Academic Press, New York.
Dolph, C. L. (1946). "A current distribution for broadside arrays which optimizes the relationship between beam width and side-lobe level," Proc. IRE, 34 (June), pp. 335-348.
Farnett, E. C., T. B. Howard and G. H. Stevens (1970). "Pulse-compression radar," Chapter 20 in Radar Handbook (Skolnik, M. I., ed.), McGraw-Hill, New York.
Gradshteyn, I. S. and I. M. Ryzhik (1965). Tables of Integrals, Series, and Products, Academic Press, New York.
Harris, F. J. (1978). "On the use of windows for harmonic analysis with the discrete Fourier transform," Proc. IEEE, 66(1), pp. 51-83.
North, D. O. (1963). "An analysis of the factors which determine signal/noise discrimination in pulsed-carrier systems," Proc. IEEE, 51(7), pp. 1016-1027 (Reprint of: RCA Technical Report PTR-6C, June 25, 1943).
Rihaczek, A. W. (1969). Principles of High Resolution Radar, McGraw-Hill, New York (Reprinted by Peninsula Publ., Los Altos, CA, 1985).
Skolnik, M. I. (1980). Introduction to Radar Systems, McGraw-Hill, New York.
Taylor, T. T. (1955). "Design of line-source antennas for narrow beamwidth and low side lobes," IRE Trans. Ant. and Prop., AP-3(1), pp. 16-28.
Tikhonov, A. N. and V. Y. Arsenin (1977). Solutions of Ill-Posed Problems, Wiley, New York.
Ulaby, F. T., R. K. Moore and A. K. Fung (1981). Microwave Remote Sensing, Vol. 1, Addison-Wesley, Reading, MA.
Ulaby, F. T., R. K. Moore and A. K. Fung (1982). Microwave Remote Sensing, Vol. 2, Addison-Wesley, Reading, MA.
Wehner, D. R. (1987). High Resolution Radar, Artech House, Norwood, MA.
Whalen, A. D. (1971). Detection of Signals in Noise, Academic Press, New York.
Woodward, P. M. (1953). Probability and Information Theory, with Applications to Radar, McGraw-Hill, New York.

4
IMAGING AND THE RECTANGULAR ALGORITHM

In Chapter 3 we discussed in detail the signal processing used with a real aperture radar in order to map out the complex reflectivity coefficient ζ(R) of the terrain in view of the radar as a function of range. In this chapter, we will extend that discussion to the two-dimensional pattern ζ(x, R) of the terrain scanned over by a SAR. We will discuss mainly the most common algorithm for realizing SAR processing in remote sensing from a space platform. That is the rectangular algorithm, in which the range (R) and Doppler (x) coordinates of the complex image ζ(x, R) correspond to the processor coordinate frame.

The rectangular algorithm is a two-dimensional correlation procedure, which operates on the received radar signals by correlating them with a computed replica of the signals which would result from observing a unit reflectivity point target in isolation. The two dimensions of the correlation processing are realized usually as two one-dimensional matched filter operations. The first operates on the single-pulse radar returns just as described in Chapter 3. The pulse-to-pulse phase history of the output of that operation is the phase of the Doppler shift imposed on the radar carrier by change of relative position between target and radar. The second matched filtering operation of the rectangular algorithm operates on that Doppler signal. In Chapter 3 we discussed the matched filter and pulse compression in the domain of continuous time waveforms. In Appendix A we summarize the signal processing algorithms needed in order to carry out the processing in the domain of time sampled waveforms. In this chapter and the next we will bring these together and describe the operations needed for image formation based on time sampled radar signals.

In this chapter, we will first discuss the imaging algorithm from the point of view of the Green's function introduced in Section 3.2.1. We then


introduce the rectangular (range Doppler) coordinate system, and describe the corresponding signals received by a SAR, assuming a "chirped" transmitter waveform. Range migration of the received signals over the many pulses needed to carry out SAR processing is described in detail. The difficulty of dealing with range migration has led to various ways in which the correlation operations of the rectangular algorithm have been realized, and we will distinguish among those algorithms from that point of view. In this chapter we will describe four of the methods which have been used. In Chapter 10 we will discuss one more, deramp processing, which has been used less commonly in remote sensing work, but which is nonetheless of importance.

The algorithms discussed in this chapter realize range migration correction by interpolation operations on a rectangular grid of data, in either the time or frequency domains. The frequency domain realizations have been developed mainly by the Jet Propulsion Laboratory of NASA and by MacDonald Dettwiler and Associates of Canada. A time domain SAR compression algorithm, which operates without using fast convolution in the azimuth coordinate, has been developed by the British RAE. In Chapter 10 we will discuss the polar processing algorithm, which has its heritage in the aircraft SAR systems which have been under steady development since the 1950s.

4.1  INTRODUCTION AND OVERVIEW OF THE IMAGING ALGORITHM

In Section 3.2.1 we introduced the complex radar reflectivity ζ(R) of a terrain region. This is the coefficient Eqn. (3.2.3) relating the complex phasor incident electric field at a terrain location R to the scattered electric field. It may depend on the polarization direction of the incident field, and may be different for different polarization components of the scattered field. It will usually depend on the angle of incidence of the incident field, and on the angle at which the scattered field is observed. We will not introduce a notation for these possibilities. Rather, by the coefficient ζ(R) we imply a specific choice of incident and scattered polarizations and angles.

Relying on linearity of the radar system, in Eqn. (3.2.13) we related the radar receiver voltage phasor v_s(R) to the terrain reflectivity through a Green's function h(R|R′):

    v_s(R) = ∫_{−∞}^{∞} h(R|R′) ζ(R′) dR′        *(4.1.1)

Here the two dimensions of R′ are the geographic coordinates of the terrain. The dimensions of R are time t = 2R/c within each pulse, and time of travel s of the radar platform along its motion path, or equivalently R and x = V_s·s, where V_s is platform speed.

The convolution Eqn. (4.1.1) has one more hidden assumption, namely, that the reflectivity coefficient ζ(R′) is independent of the radar position R, at least


over the (usually small) change of aspect angle during the time that any particular point is illuminated (the time extent S of the synthetic aperture), and constant in time. That may often be the case. Otherwise, the image to be derived from the data will be a weighted combination of the reflectivities ζ(R′) as observed from some range of positions R at varying times.

The two-dimensional inverse Green's function h⁻¹(R₀|R), corresponding to the Green's function (impulse response) h(R|R′), is defined by

    ∫_{−∞}^{∞} h⁻¹(R₀|R) h(R|R′) dR = δ(R′ − R₀)        (4.1.2)

where the multidimensional Dirac function is defined by

    ∫_{−∞}^{∞} f(R′) δ(R − R′) dR′ = f(R)

An image formation algorithm is then described by

    ζ(R₀) = ∫_{−∞}^{∞} h⁻¹(R₀|R) v_s(R) dR        *(4.1.3)

This last follows by substituting Eqn. (4.1.1) and using the definition Eqn. (4.1.2), as done in Eqn. (3.2.16) for the one-dimensional case. Here R represents any coordinates used to describe the data, v_s(R) is the complex received data phasor array, and ζ(R₀) is the complex image function at an arbitrary point R₀. In general, the reconstructed image ζ(R₀) will replicate the ground truth only insofar as the system resolution properties allow the Dirac function to be reconstructed from the impulse response, as discussed in Section 3.2.3. The usual (real) image is finally an estimate of the mean intensity of ζ(R₀), as in Eqn. (3.2.12).

From Eqn. (4.1.3), it is evident that the image formation process is one of correlation of the data v_s(R) with the inverse Green's function. It is further clear from Eqn. (4.1.2) that the inverse Green's function can be described operationally in terms of whatever correlation operations will compress the system unit impulse response h(R|R₀) into the image of an impulse. In developing an image formation algorithm, we therefore first need to determine what the system impulse response is, working from the known system properties. We then must specify the correlation operations necessary to convert the impulse response into an impulse. Applying exactly those correlation operations to the full data set v_s(R) then produces the complex image ζ(R₀).

4.1.1  Data Coordinates and the System Impulse Response

In order to describe the system impulse response h(R|R₀), we need to write down the data set v_s(R) resulting from an isolated point target. The scene reflectivity is therefore taken as

    ζ(R′) = δ(R′ − R₀)

From Eqn. (4.1.1), the result is

    v_s(R) = ∫_{−∞}^{∞} h(R|R′) δ(R′ − R₀) dR′ = h(R|R₀)        (4.1.4)

We want to express this in terms of an appropriate scalar coordinate system. We begin by specifying a coordinate set for the image. Suppose that the radar is carried on a vehicle in motion above the earth. The distance along the radar path we will denote as x. It is convenient to take the radar path as an arc at constant height above the surface of the nominally spherical earth. We assume henceforth that, whatever the actual path of the radar, the signals v_s(R) submitted for processing have been compensated to allow this assumption to be made. (The important subject of motion compensation will not be discussed here.) The location of a point on the earth (an image point) can then be described in terms of the platform position x_c along its path at which the point in question is in the center of the radar beam, and the corresponding slant range R_c (Fig. 4.1).

Figure 4.1  A terrain point is located by the radar position x_c when the point is in beam center, and the corresponding range R_c.
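A toy numerical version of this prescription (my own sketch; the separable quadratic-phase impulse response is an assumption for illustration only, not the actual SAR response with range migration) shows the data from a point target being compressed back to its location by two-dimensional correlation with the impulse response, as in Eqns. (4.1.2)-(4.1.4).

```python
import numpy as np

# 2-D matched correlation as a toy imaging algorithm, cf. Eqn. (4.1.3).
nx, nr = 64, 64
u = np.arange(-16, 16)
h1 = np.exp(1j * np.pi * 0.02 * u ** 2)          # assumed azimuth phase history
h2 = np.exp(1j * np.pi * 0.05 * u ** 2)          # assumed range chirp
h = np.outer(h1, h2)                             # separable impulse response h(R | R0)

zeta = np.zeros((nx, nr), complex)
zeta[20, 30] = 1.0                               # one point target at (x, R) = (20, 30)

# data: 2-D (circular) convolution of the reflectivity with h, via FFTs
v = np.fft.ifft2(np.fft.fft2(zeta) * np.fft.fft2(h, (nx, nr)))

# imaging: 2-D correlation with the impulse response (matched filtering)
img = np.fft.ifft2(np.fft.fft2(v) * np.conj(np.fft.fft2(h, (nx, nr))))
print(np.unravel_index(np.abs(img).argmax(), img.shape))   # (20, 30): target located
```

The energy of the point target, spread by h over many samples of the raw data, is gathered back into a single peak; this is exactly the compression structure that the rectangular algorithm realizes as two one-dimensional matched filters.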


4.1.1 Data Coordinates

As it moves along its path, the radar transmits narrowband pulses, typically
the linear FM signal. The multipulse real transmitted signal is then

    p(t) = Σ_n s(t − nT_p)                                         (4.1.5)

where T_p is the pulse repetition period and the sum includes all pulses for which
the target is in the radar beam. Note that we assume synchronization of the
detailed pulse waveform s(t) with the repetition period. That is, the radar is
time coherent. Since SAR is based on Doppler shift, it is essential that
pulse-to-pulse phase changes be recoverable from the radar signal, requiring
coherent operation.

At any arbitrary time t, the radar is at some slant range R(t) from the target
point with image coordinates (x_c, R_c) (Fig. 4.2). The real received signal v_r(t)
at that instant has the value which the transmitted signal had at some time τ
earlier, scaled by a factor a which is locally constant:

    v_r(t) = a p(t − τ)                                            (4.1.6)

The time τ is the time of propagation of the instantaneous pulse wavefront at
time t − τ out to the target, a distance R(t − τ), and back to the receiver at
time t, a distance R(t). Thus

    τ = [R(t − τ) + R(t)]/c ≈ [2R(t) − Ṙ(t)τ]/c

Thus

    τ ≈ 2R(t)/[c + Ṙ(t)] ≈ 2R(t)/c

since c is in all cases very much larger than Ṙ(t). From Eqn. (4.1.5), the received
pulse train Eqn. (4.1.6) is then

    v_r(t) = Σ_n a_n s[t − nT_p − 2R(t)/c]                         (4.1.7)

where a_n is an amplitude scale factor appropriate to pulse n. At most one of
the terms in the sum Eqn. (4.1.7) is nonzero for any particular time t.
The impulse response Eqn. (4.1.7) can be written formally as a function of
two variables, time t′ = t − nT_p within pulse number n, and the time nT_p of
transmission of that pulse. That is, the received data samples at s = nT_p are
a function of two variables

    v_r(s, t′) = v_r(t),    t = s + t′,    0 ≤ t′ < T_p
The operations required for image formation are those of correlation of the
radar data with the impulse response. We therefore have to do with a
two-dimensional correlator. However, the range R(t) varies over the time of
each pulse for which the point target is in view. The received pulses
s[t − nT_p − 2R(t)/c] are therefore distorted versions of the transmitted pulses
s(t − nT_p). The distortion can be different for each pulse of the received pulse
train, since the local functional form of the time varying range R(t) depends
on the differing geometry along the radar trajectory. Were it necessary to account
for these effects in processing, the two-dimensional correlation would not
decouple into a sequence of two independent one-dimensional procedures. It
is therefore important to examine the consequences of this pulse dependent
distortion. We follow the development of Barber (1985).

Segmentation of the Correlator

We want to determine the effect of variation of the range R(t) from sensor to
a target point during the time span τ_p of reception of a particular pulse. Let
time t_1 be nominally at the midpoint of a received pulse, and consider the
expansion

    R(t) ≈ R(t_1) + Ṙ(t_1)(t − t_1) + R̈(t_1)(t − t_1)²/2         (4.1.8)

The geometry is shown in Fig. 4.3.


Figure 4.2 The radar views a terrain point at (x_c, R_c) from positions (x, R).

Suppose we transmit the linear FM

    s(t) = exp[j2π(f_c t + Kt²/2)],    |t| ≤ τ_p/2                 (4.1.9)

where we consider only the positive frequency components of the real signal.
If we were to assume no distortion of the pulse waveform, except for scale factor,
this would be the single pulse impulse response. We would then compress the
received signal by correlating it with delayed versions of s*(t) (equivalent to
matched filtering with s*(−t)). The compressor output would be:

    g(t) = ∫ s*(t′ − t) v_r(t′) dt′                                (4.1.10)

where the input is actually one pulse of the (positive frequency part of the)
distorted return Eqn. (4.1.7).

Substituting Eqn. (4.1.7) and Eqn. (4.1.9) into the integral of Eqn. (4.1.10),
and being careful of limits, for t ≥ t_1 for example we obtain

    g(t) = exp(jω_c t) ∫_{−a}^{a} exp[−j4πR(t′)/λ]
           × exp{jπK[t − 2R(t′)/c][2t″ + t_1 − 2R(t′)/c]} dt″      (4.1.11)

where t′ = t″ + (t + t_1)/2 and a = (t_1 − t + τ_p)/2.

If R(t) were constant over the received pulse, then R(t) = R(t_1) = R_1 in
Eqn. (4.1.8), t_1 = 2R_1/c, and we would obtain Eqn. (4.1.11) (and its companion
for t ≤ t_1) as

    g(t) = τ_p exp(jω_c t) exp(−j4πR_1/λ){sin[u(1 − |t − 2R_1/c|/τ_p)]}/u
                                                                   (4.1.12)
where

    u = πKτ_p(t − 2R_1/c)

Provided the term

    |t − 2R_1/c|/τ_p ≪ 1                                          (4.1.13)

whenever |u| is not large, we will have approximately

    g(t) = τ_p exp(jω_c t) exp(−j4πR_1/λ)[sin(u)]/u               *(4.1.14)

The envelope of this has a 3 dB width centered at R_1 of

    δt ≈ 1/|K|τ_p = 1/B

corresponding to range resolution δR = c/2B.

In order that the approximation Eqn. (4.1.14) be adequate, we require that
Eqn. (4.1.13) hold say when

    |t − 2R_1/c| < 4/|K|τ_p

If we then require say

    Bτ_p = |K|τ_p² > 20                                            (4.1.15)

the relation Eqn. (4.1.13) will follow. Eqn. (4.1.15) indicates that a modest
bandwidth time product will suffice for the validity of the approximation
Eqn. (4.1.14). This is in correspondence with the discussion of Section 3.2.2, in
which it was noted that, for Bτ_p > 20, the matched filter output for the linear
FM would have the form Eqn. (4.1.14).

In the actual case that R(t) is not constant over the pulsewidth τ_p, the series
expansion Eqn. (4.1.8) can be used in the received pulse waveform Eqn. (4.1.7).
We assume the matched filter Eqn. (4.1.10) will still be used, as matched to the
transmitted pulse Eqn. (4.1.9). We want to determine the effect of the resulting
input distortion on the filter output Eqn. (4.1.11).

We could work directly with approximate evaluation of Eqn. (4.1.11).
However, it suffices to assume a large bandwidth time product for the transmitted
pulse. This allows us to relate frequency displacements to time shifts, as discussed
in Section 3.2.2.

Figure 4.3 Nominal geometry of changing range during a radar pulse length.


Specifically, the filter input signal Eqn. (4.1.7) with range
Eqn. (4.1.8) will have a phase variation

    φ = 2π{f_c[t − 2R(t)/c] + K[t − 2R(t)/c]²/2}                   (4.1.16)

corresponding to a frequency variation

    f = φ̇/2π = {f_c + K[t − 2R(t)/c]}{1 − 2Ṙ(t)/c}

The frequency function Eqn. (4.1.16) differs from the nominal variation

    f = f_c + K(t − 2R_1/c)

by an amount which depends slightly on time within the pulse, but which is
approximately

    Δf ≈ −2f_c Ṙ_1/c

Because of the assumed high bandwidth time product, this frequency shift
corresponds to a time shift Δf/K of the filter input, and correspondingly a
range shift at the filter output

    ΔR = (c/2)(Δf/K) = −f_c Ṙ_1/K

From Fig. 4.3, we have the nominal relation

    Ṙ_1 = −V_st sin θ

Using this nominal value (Fig. 4.3), the range shift becomes

    ΔR = (f_c/K) V_st sin θ                                        (4.1.17)

When the point target of Fig. 4.3 is viewed from the forward and rear edges of
the real radar beam, nominally at θ = ±θ_H/2, the range shift Eqn. (4.1.17) will
be opposite in direction. The difference represents a distortion, which should
be much less than the range resolution interval, which is

    δR = c/2B = c/2|K|τ_p

Comparing the difference in range shift to this latter, and recalling the nominal
beamwidth and azimuth resolution relations θ_H = λ/L_a and δx = L_a/2, yields
a criterion

    2|ΔR|_max/δR = 2V_st θ_H f_c τ_p/c
                 = 2V_st τ_p/L_a = V_st τ_p/δx ≪ 1                *(4.1.18)

Using the nominal V_st = 7 km/s of a space platform and a typical pulse length
τ_p = 30 μs, the ratio Eqn. (4.1.18) will be less than 0.1 provided

    δx > 10 V_st τ_p = 2 m

This is well satisfied for current space systems.

While frequency discrepancy between the returned pulse and the filter
waveform gives rise to a range shift of the image point, a mismatch in frequency
rate results in defocussing. From Eqn. (4.1.16), the frequency rate of the received
pulse differs from the frequency rate K of the filter waveform by an amount
which is approximately

    ΔK = (−2/c)(2KṘ_1 + f_c R̈_1)

From Fig. 4.3, we have the nominal relations

    Ṙ_1 = −V_st sin θ,    R_1 R̈_1 + Ṙ_1² = V_st²

Using these, the relative change in frequency rate is approximately

    ΔK/K = (2V_st/c)(2 sin θ − f_c V_st/KR_1)

The factor 2V_st/c is extremely small, while the other factor is not large. We
conclude that any defocussing due to distortion of the received pulse is negligible.

It might be remarked that we have only considered returned pulse distortion
effects related to the geometry of the situation. There may also be pulse distortion
due to the frequency dependent propagation speed (dispersion) of the earth's
ionosphere. Polarization change due to the earth's magnetic field may also be
noticeable. We will consider these effects in Ch. 7. Brookner (1977) has given
a summary of the effects, with useful charts of sample calculations. In a study
specifically concerning SAR, Quegan and Lamont (1986) indicate that the effect
on image focus can be severe for low frequency (L-band) and an aircraft system
operating at long range, but is less marked for a spaceborne system. The effects
lessen at higher frequencies.
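The criterion of Eqn. (4.1.18) is easily evaluated. The sketch below simply reproduces the text's nominal space platform numbers:

```python
# Numeric check of the pulse distortion criterion Eqn. (4.1.18): the
# range shift spread across the beam, relative to the range resolution
# cell, is V_st * tau_p / delta_x, which should be much less than 1.
# Values follow the text's nominal space platform example.

V_st = 7.0e3      # platform speed relative to target, m/s
tau_p = 30.0e-6   # pulse length, s

def ratio(delta_x):
    """Left side of Eqn. (4.1.18) for azimuth resolution delta_x (m)."""
    return V_st * tau_p / delta_x

# For the ratio to stay below 0.1 we need delta_x > 10 * V_st * tau_p:
min_dx = 10 * V_st * tau_p
assert abs(min_dx - 2.1) < 1e-9         # about 2 m, as in the text
assert ratio(7.0) < 0.1                 # e.g. a 7 m resolution system
```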


With the approximation of constant range from radar to target point over
the width of a transmitted pulse, the received signal Eqn. (4.1.7) from a point
target is

    v_r(t) = Σ_{n=−∞}^{∞} a_n s(t − nT_p − 2R_n/c)                 (4.1.19)

where R_n is the range to target during the time of reception of the nth pulse:
R_n = R(t_n), with t_n the center (say) of the time interval over which pulse n is
received.

We now segment the received signal Eqn. (4.1.19) (the voltage out of the
radar receiver), a function of a single scalar variable (time), into a two-dimensional
data set. This is convenient to do because the formalism of two-dimensional
Green's function analysis can then be segmented into two sequential
one-dimensional problems. We define specifically

    v_r(nT_p, t) = v_r(t),    nT_p ≤ t < (n + 1)T_p                (4.1.20)

That is, v_r(nT_p, t) is the received signal from the time of transmission of pulse
n until the time of transmission of pulse n + 1. (In fact more than one pulse
may be "in flight" from the radar to the target simultaneously, in which case
some integral number of pulse periods intervenes between transmission of pulse
n and the time origin of the corresponding received signal.) If we define a "slow"
time variable s as the time of flight of the vehicle along its track, in contrast
with the "fast" time variable t of the radar signal voltage, then v_r(nT_p, t) is a
function v_r(s, t) sampled in slow time s at the pulse repetition frequency. Using
the transformations R = c(t − nT_p)/2 and x = V_st s, we will also write the data
set as v_r(x, R) when convenient.

The (slow time sampled) two-dimensional Green's function of the system is
now seen to be that sketched in Fig. 4.4. This is an array of (fast) time delayed
versions of the transmitted pulse, with the delays τ_n = 2R_n/c depending on
target position (x_c, R_c) and radar position as determined by the geometry of
the problem. The Green's function is inherently sampled in slow time by the
pulsed radar, and will additionally often be sampled in fast time for digital
processing.

Figure 4.4 Two-dimensional Green's function of radar system sampled by PRF. Each pulse n
contributes a return v_r(s_n, t − τ_n) at delay τ_n along the fast time axis t = 2R/c.

4.1.2 Imaging Algorithm Overview

To design the imaging algorithm, we need to describe operationally how to
"compress" the system response to a point target, shown in Fig. 4.4, back into
a point. Any such procedure will approximately attain the result of Eqn. (4.1.2),
and will thereby constitute an operational description of the inverse Green's
function of the radar.

The point target response Fig. 4.4 is dispersed in fast time by the structure
of the transmitted pulse, and in slow time by the multiple (perhaps thousands
of) pulses which reach the target as the radar travels past it. Ideally, we would
like the compressed signal to be a point, as was the target. In practice, the finite
bandwidth of the transmitter and the finite time during which the target is in
view limit us to a compressed version of the target of nonzero width in the two
dimensions of the image. As discussed in connection with range processing in
Section 3.2.3, we then content ourselves with the principal solution of the
problem. Roughly speaking, the ideal point target (impulse function) has infinite
bandwidth in slow and fast time. The physical radar has finite bandwidth and
obliterates all but a finite band of target return frequencies. Then, by linear
processing of the radar observables, we can produce only a finite bandwidth
(smeared image) approximation to the observed point target.

In concept, the image formation procedure is straightforward. It is exactly
that operational procedure which compresses to a point the radar response to
a point target. Assuming a point target with coordinates (x_c, R_c) (Fig. 4.1), let
us now describe the procedure. We will take advantage of the possibility of
segmenting the two-dimensional correlation Eqn. (4.1.3) into a sequence of two
one-dimensional correlations.
Range Processing

The received signal v_r(nT_p, t) from each transmitted pulse s(t) is first passed
through the matched filter with impulse response s*(−t), or, equivalently,
correlated over time t′ with the replica s*(t′ − t). Dropping a scale factor τ_p,
the positive frequency portion of the result, for pulse n, is the filter or correlator
output Eqn. (4.1.14)

    g_n(t) = exp(jω_c t) exp(−j4πR_n/λ)[sin(u)]/u                 *(4.1.21)

where

    u = πKτ_p(t − 2R_n/c)

and we assume a large bandwidth time product Bτ_p. Here R_n is the range
from the radar at time of transmission of pulse n to the terrain point with
coordinates (x_c, R_c).

The carrier structure of the signal Eqn. (4.1.21) is stripped away by the linear
operation of complex demodulation, which amounts to a left shift by f_c in the
frequency domain, to obtain the complex low pass signal

    g̃_n(t) = exp(−j4πR_n/λ)[sin(u)]/u                             (4.1.22)

(Removal of the carrier structure of Eqn. (4.1.21) by the nonlinear operation
of average power computation would destroy the crucial phase term 4πR_n/λ,
the origin of the Doppler shift and the SAR effect.) Alternatively, the signal
v_r(nT_p, t) can first be basebanded, and the corresponding filter used to obtain
Eqn. (4.1.22) directly.

The time of occurrence of the maximum of |g̃_n(t)| is t_n = 2R_n/c. Reading off
the value of g̃_n(t) at that particular time yields the complex number

    g̃_n(t_n) = exp(−j4πR_n/λ)                                     (4.1.23)

This procedure is repeated for each pulse for which the target was effectively
in view of the radar. Collecting together all the values Eqn. (4.1.23), we can
consider them as samples at times s_n of a function of slow time s

    g(s|x_c, R_c) = exp[−j4πR(s)/λ]                               *(4.1.24)

where R_n = R(s_n).
The locus in the (x, R) plane of values of the function Eqn. (4.1.24) is a
one-dimensional path (Fig. 4.5). The radar returns, originally dispersed in two
dimensions, have now been compressed to a one-dimensional space. The
remaining task is to compress this path into a point at (x_c, R_c), the original
target location. The fact that range to target could be considered constant
during the time of one pulsewidth, as discussed in Section 4.1.1, has allowed
the general two-dimensional compression problem to be decoupled into a
sequence of two one-dimensional compression operations, one in fast time and
one in slow time. Since nominally slow time measures a coordinate (along-track
distance) orthogonal to fast time (range perpendicular to vehicle track), this
processing sequence is called the rectangular algorithm.

Figure 4.5 Locus of range compressed returns from point target in plane of slow and fast times
(s, t = 2R/c).
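The slow time signal of Eqn. (4.1.24) can be synthesized directly from the encounter geometry. The sketch below, with illustrative Seasat-like numbers (all assumed, not drawn from any specific mission), builds R(s) for a broadside encounter, forms the phase history exp[−j4πR(s)/λ], and checks that its instantaneous frequency is near zero at beam center and drifts at the Doppler rate −2V_st²/λR_c quoted later in Eqn. (4.1.35):

```python
import numpy as np

# Sketch of the azimuth Doppler signal of Eqn. (4.1.24): after range
# compression, each pulse contributes one complex sample exp(-j*4*pi*R_n/lam),
# so the collected samples trace the Doppler phase history of the target.
# Broadside geometry: R^2(s) = Rc^2 + V^2 (s - sc)^2. Parameters illustrative.

lam = 0.25          # wavelength, m (L-band-like)
V = 7.0e3           # platform speed, m/s
Rc = 850.0e3        # closest-approach slant range, m
PRF = 1600.0        # pulse repetition frequency, Hz
sc = 0.0            # beam-center slow time

s = np.arange(-0.5, 0.5, 1 / PRF)                 # slow time samples, s
R = np.sqrt(Rc**2 + V**2 * (s - sc)**2)           # range history R(s)
g = np.exp(-1j * 4 * np.pi * R / lam)             # Eqn. (4.1.24)

# Instantaneous Doppler frequency recovered from the sampled phase.
phase = np.unwrap(np.angle(g))
f_D = np.gradient(phase, s) / (2 * np.pi)

i0 = np.argmin(np.abs(s - sc))
assert abs(f_D[i0]) < 5.0                         # ~0 Hz at broadside

# Doppler rate matches f_R = -2 V^2 / (lam * Rc):
f_R_est = (f_D[-2] - f_D[1]) / (s[-2] - s[1])
f_R = -2 * V**2 / (lam * Rc)
assert abs(f_R_est - f_R) / abs(f_R) < 0.05
```

The linear drift of the recovered frequency is exactly the linear FM character exploited by the azimuth compression described next.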
Azimuth Processing

The signal g(s|x_c, R_c) of Eqn. (4.1.24), which we want to compress as the second
operation of the rectangular algorithm, is in fact the Doppler signal received
from the point target as the radar moves by. Hence this second compression
operation is the "Doppler" part of the range-Doppler processing algorithm.

The waveform in slow time s of the azimuth signal g(s|x_c, R_c) of Eqn. (4.1.24)
is not necessarily simple, since R(s) is a nonlinear function of slow time s, the
form of which depends on the target parameters (x_c, R_c). Thus, while the slow
time ("azimuth") compression operation will be a correlation, the correlator
waveform will depend in general on which point in the image we are computing.
That is, in full generality, to compress the point target function Eqn. (4.1.24)
we need to compute the correlation

    ĝ(s′_c|s_c, R_c) = ∫ h⁻¹(s′_c − s) g(s|s_c, R_c) ds            (4.1.25)

using a separate correlator function h⁻¹ for each point of the image. The "time
domain" image formation algorithm described by Barber (1985) implements
correlation in just this way. However, there is a considerable gain in
computational efficiency if the correlation can be implemented as a matched
filter (Chapter 9). Such implementation of azimuth compression as a matched
filter operation, as is commonly done, requires further investigation.

To that end, it is helpful to expand the range function R(s) as a Taylor series
around s_c = x_c/V_st, the slow time at which the center of the radar beam crosses
the target. (That time is unknown in value, and in fact is just the information
we want to derive by azimuth processing, that is, the location along-track of
the target.) We have

    R(s) ≈ R(s_c) + Ṙ(s_c)(s − s_c) + R̈(s_c)(s − s_c)²/2 + ⋯     (4.1.26)

In such an expansion, it is often possible to neglect terms of order higher than
the quadratic, although the possibility of realizing the correlation expression
Eqn. (4.1.25) by matched filtering does not depend on that assumption. Rather,
we need to determine that the retained coefficients in the expansion Eqn. (4.1.26)
are independent of s_c over the filter span S in slow time. (In Appendix B we
give a detailed discussion of the terms in the expansion Eqn. (4.1.26).)

We can identify the leading time derivatives in the expansion Eqn. (4.1.26)
in terms of the Doppler center frequency and Doppler rate of the slow time
signal Eqn. (4.1.24). The time rate of change of phase φ(s) in the complex
exponential is just the Doppler (radian) frequency, so that we have

    φ = −4πR(s)/λ
    f_D = φ̇/2π = −2Ṙ(s)/λ                                        (4.1.27)
    f_R = φ̈/2π = −2R̈(s)/λ

These yield the leading coefficients in the expansion Eqn. (4.1.26), evaluated
at s_c, as

    Ṙ(s_c) = −λf_Dc/2,    R̈(s_c) = −λf_R/2                       *(4.1.28)

Both of these are functions of s_c and R_c in general, since R(s) contains s_c, R_c
as parameters.

Assuming that a quadratic expansion Eqn. (4.1.26) suffices, which is often
the case, the Doppler signal Eqn. (4.1.24) becomes

    g(s|s_c, R_c) = exp(−j4πR_c/λ) exp{j2π[f_Dc(s − s_c) + f_R(s − s_c)²/2]},
                    |s − s_c| < S/2                                (4.1.29)

This is a linear FM wave with center frequency f_Dc and frequency rate f_R. As
we discuss in Appendix B, the FM parameters f_Dc and f_R can depend strongly
on R_c, but usually depend only weakly on s_c. The azimuth correlation operation
Eqn. (4.1.25) can then be realized approximately using a correlator function
(using the leading terms of the expansion Eqn. (4.1.26))

    h⁻¹(s) = exp[−j2π(f_Dc s + f_R s²/2)]                          (4.1.30)

in which f_Dc, f_R depend only on R_c. Thus the operation can be realized as a
fast convolution (matched filter) over slow time for each range R_c of the image.
As with linear FM range pulse compression, with a bandwidth time product
of 20 or more the correlation operation yields an output Eqn. (4.1.25) whose
modulus is a pulse

    |ĝ(s′_c|s_c, R_c)| = S|sin(u)/u|
    u = πf_R S(s′_c − s_c)                                        *(4.1.31)

The peak of this pulse occurs at s′_c = s_c, the target azimuth location.

Azimuth Resolution

The width of the pulse Eqn. (4.1.31) is nominally

    δs = 1/B_D

where

    B_D = |f_R|S                                                   (4.1.32)

is the Doppler bandwidth. The time S is that nominal time for which a point
target is effectively in view. It is the SAR "integration time", and is determined
by the antenna horizontal beamwidth. The target is therefore located in azimuth
with spatial resolution

    δx = V_st δs = V_st/B_D                                       *(4.1.33)

where V_st is the speed of the radar platform relative to the target point. For an
antenna of physical extent L_a along track, the nominal beamwidth is θ_H = λ/L_a,
so that any particular earth point at range R_c is illuminated for a nominal time

    S = R_c θ_H/V_st = λR_c/L_a V_st                              *(4.1.34)

For the geometry of Fig. 4.6, where the radar beam center has a squint angle
θ_q ahead of broadside, assuming R_c ≫ |x − x_c| we have

    R²(s) = R_c² + V_st²(s − s_c)² − 2R_c V_st(s − s_c) sin θ_q

for which approximately

    R(s) ≈ R_c + V_st²(s − s_c)²/2R_c − V_st(s − s_c) sin θ_q

so that

    f_Dc = −2Ṙ(s_c)/λ = (2V_st/λ) sin θ_q
    f_R = −2R̈(s_c)/λ = −2V_st²/λR_c                              *(4.1.35)
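The chain of relations Eqns. (4.1.32)-(4.1.35) can be verified numerically. In the sketch below the antenna length L_a = 10.7 m and the other values are merely illustrative Seasat-like assumptions; the point is that the computed resolution collapses to L_a/2, independent of range and velocity:

```python
# Numeric sketch of the azimuth FM parameters of Eqns. (4.1.32)-(4.1.35)
# for an illustrative broadside (squint angle 0) case. All values assumed.

lam = 0.25        # wavelength, m
V = 7.0e3         # platform speed relative to target, m/s
Rc = 850.0e3      # slant range, m
La = 10.7         # antenna length along track, m (illustrative)

S = lam * Rc / (La * V)          # illumination time, Eqn. (4.1.34)
f_Dc = 0.0                       # broadside: (2V/lam) sin(0)
f_R = -2 * V**2 / (lam * Rc)     # Doppler rate, Eqn. (4.1.35)
B_D = abs(f_R) * S               # Doppler bandwidth, Eqn. (4.1.32)
dx = V / B_D                     # azimuth resolution, Eqn. (4.1.33)

# The chain collapses to B_D = 2V/La and dx = La/2, independent of range.
assert abs(B_D - 2 * V / La) / (2 * V / La) < 1e-9
assert abs(dx - La / 2) < 1e-6
```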


Figure 4.6 Simplified encounter geometry for a radar with a beam center squinted at angle θ_q.

Equations (4.1.32), (4.1.34), and (4.1.35) yield a Doppler bandwidth

    B_D = |f_R|S = 2V_st/L_a                                      *(4.1.36)

and a system along-track resolution

    δx = V_st/B_D = L_a/2                                         *(4.1.37)

This is in accord with the earlier result Eqn. (1.2.9) obtained by incomplete
arguments.

The azimuth bandwidth time product should be large in order that the above
relations hold to an adequate approximation. This requires that

    B_D S = 2λR_c/L_a² > 20 (say)                                  (4.1.38)

using Eqn. (4.1.33) and Eqn. (4.1.35). The criterion Eqn. (4.1.38) is usually well
satisfied.

Unlike the range bandwidth, which is strictly limited to the bandwidth of
the transmitted pulse, the Doppler bandwidth is not closely limited to the
nominal value Eqn. (4.1.36). This is because the target is actually in view over
a wider angle span than the nominal 3 dB beamwidth θ_H, although with reduced
response due to the fall off of the antenna pattern outside the nominal beam.
On the one hand, in principle this makes possible finer resolution than the
value L_a/2 of Eqn. (4.1.37), since the correlator output Eqn. (4.1.25) has time
resolution 1/B_D in any event. (This assumes compensation for the antenna
pattern in the correlator, that is, the compressor operator must be used.) On
the other hand, to use the potential wider Doppler band requires sampling (at
the radar pulse repetition frequency) at a rate somewhat greater than the Doppler
band to be processed (Appendix A). Such an increase in PRF may result in
range ambiguities.

Correlator Structure

The correlation operation Eqn. (4.1.25) on the azimuth Doppler signal can
efficiently be implemented as a matched filter operation for each particular
value of R_c, provided the parameters f_Dc, f_R are sufficiently independent of s_c
over the span S to allow the use of fast convolution. In Appendix B these
parameters are discussed in detail, and expressions presented which allow
assessment of the situation in any particular case. In practice, considerations
of range migration, which we will elaborate on below, also enter into the question.

In Seasat-like cases, the approximations involved are usually justified, and
azimuth compression is usually implemented as the more efficient matched filter
operation, rather than by correlation. In either case, all the factors dealt with
in range compression must be considered, and in particular weighting of the
filter for sidelobe control is necessary.

The geometry of the encounter between radar and target is closely involved
in the azimuth correlator or matched filter structure through the expression for
slant range R(s) in terms of target position (x_c, R_c). The structure of the impulse
response function in the slow time domain, Eqn. (4.1.24), may or may not be
closely approximated as a linear FM in the Doppler domain. If it is not, then
terms in the expansion of R(s), Eqn. (4.1.26), of order higher than the quadratic
will need to be considered. It is also possible that the azimuth impulse response
depends significantly on the location x_c of the target, as well as on R_c. This
will be the case only for rather long slow time span S, or for high squint
geometries. If such is the case, use of a matched filter may not be possible for
azimuth compression, since the filter response function called for would then
change over the filter time span. At the least, some tracking of the azimuth
filter parameters must be implemented over such an image span (Section 9.3.2).
4.1.3 Range Migration and Depth of Focus

Two further considerations have impact on the way in which azimuth processing
is carried out: range migration and depth of focus. Range migration is an
inevitable consequence of SAR operation, but may or may not be so severe as
to require compensation, depending on system parameters. Azimuth resolution
in SAR depends closely on the bandwidth of the Doppler signal, as in Eqn.
(4.1.33). Since the phase of the Doppler signal Eqn. (4.1.24) is φ = −4πR(s)/λ,
if the Doppler signal is to have a nonzero bandwidth, the range to target must
change during the time of view S, and the compressed point target response
necessarily occurs at different ranges for different pulses (Fig. 4.5). This is the
phenomenon of range migration.

Range Migration Criterion

The azimuth signal needed for compression processing therefore must be
assembled from different range resolution cells, depending on pulse number.
The locus of these range cells in the data array is just the curve of R(s), or
approximately (using Eqn. (4.1.28))

    R(s) = R_c + Ṙ(s_c)(s − s_c) + R̈(s_c)(s − s_c)²/2
         = R_c − (λf_Dc/2)(s − s_c) − (λf_R/4)(s − s_c)²           (4.1.39)

The linear part of this is range walk and the quadratic part is range curvature.
The total change ΔR = R(s) − R_c is range migration, and might involve higher
order terms in the expansion Eqn. (4.1.26), but usually does not.

We can easily determine a rough criterion to indicate whether range migration
compensation is needed. Again consider the simple geometry of Fig. 4.6, with
a beam squint angle θ_q. Using Eqns. (4.1.35) and (4.1.39), for the maximum
value s − s_c = S/2 we have

    ΔR = (SV_st/2) sin θ_q + (SV_st)²/8R_c
    |ΔR| ≤ (SV_st/2)(|sin θ_q| + SV_st/4R_c)                       (4.1.40)

Using the nominal relations Eqn. (4.1.34) and Eqn. (4.1.37), we have the synthetic
aperture length as

    SV_st = λR_c/L_a = λR_c/2δx                                    (4.1.41)

In order that migration not require compensation, the maximum distance Eqn.
(4.1.40) should be less than (say) 1/4 of a range resolution cell δR. Thus we
have the criterion

    (λR_c/δx)(|sin θ_q| + λ/8δx) < δR                             *(4.1.42)

For an unsquinted beam (θ_q = 0), this criterion that no compensation be needed
becomes

    (δx/λ)² > R_c/8δR                                             *(4.1.43)

At nominal R_c = 800 km, δR = δx = 7 m, for example, an unsquinted L-band
system (λ = 25 cm) requires compensation, while an unsquinted X-band system
(λ = 3 cm) does not. On the other hand, with a squint θ_q = 1°, Eqn. (4.1.42)
indicates compensation is needed in the latter case also, due to the range walk
component of migration. In later sections we will describe how range migration
compensation is achieved in various imaging algorithms.

Depth of Focus Criterion

The second important quantity, depth of focus, relates to the fact that the
azimuth correlator parameters f_Dc, f_R in Eqn. (4.1.30) depend on range R_c. Use
of a somewhat incorrect value of f_Dc is not particularly serious, leading to some
loss of signal to noise ratio and increase in ambiguity level (Section 6.5.1), but
mismatch of the correlator value of f_R to that of the signal can cause unacceptable
loss of azimuth resolution (defocusing).

Using Eqn. (4.1.35), we can find the mismatch δf_R in azimuth chirp constant
f_R if the range R_c used in the correlator differs by δR_c from the range of the
target point:

    δf_R/f_R = −δR_c/R_c                                           (4.1.44)

This mismatch causes a phase drift between the correlator function Eqn. (4.1.30)
and the signal, just as we discussed in Section 1.2 relative to the unfocussed
SAR processor. At the Doppler band edges (s = ±S/2), for negligible mismatch
we require (somewhat arbitrarily) a phase error in Eqn. (4.1.30) due to mismatch
of f_R limited by

    2π|δf_R|(S/2)²/2 ≤ π/4                                         (4.1.45)

Using Eqn. (4.1.44), in terms of the azimuth bandwidth time product Eqn.
(4.1.38) this can be written

    |δR_c|/R_c ≤ 1/B_D S                                           (4.1.46)

Using Eqn. (4.1.38) we can also write Eqn. (4.1.45) as

    |δR_c| ≤ 2(δx)²/λ                                             *(4.1.47)

Thus with say δR = δx = 7 m, an L-band system (λ = 25 cm) must nominally
update f_R each δR_c/δR = 56 range resolution cells, while an X-band (λ = 3 cm)
system needs to update only once each 467 cells.
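Both criteria are simple enough to evaluate directly. The sketch below reproduces the text's numerical examples at R_c = 800 km with δR = δx = 7 m; the helper names are ours, introduced only for illustration:

```python
import math

# Numeric sketch of the range migration criterion Eqn. (4.1.42) and the
# depth of focus Eqn. (4.1.47), using the text's example numbers.

Rc = 800.0e3      # slant range, m
dR = 7.0          # range resolution, m
dx = 7.0          # azimuth resolution, m

def migration_ok(lam, squint_deg):
    """True if Eqn. (4.1.42) says no migration compensation is needed."""
    lhs = (lam * Rc / dx) * (abs(math.sin(math.radians(squint_deg)))
                             + lam / (8 * dx))
    return lhs < dR

assert not migration_ok(0.25, 0.0)   # unsquinted L-band: needs compensation
assert migration_ok(0.03, 0.0)       # unsquinted X-band: does not
assert not migration_ok(0.03, 1.0)   # 1 deg squint: X-band needs it too

def depth_of_focus(lam):
    """Allowed correlator range mismatch |dRc| of Eqn. (4.1.47), metres."""
    return 2 * dx**2 / lam

assert round(depth_of_focus(0.25) / dR) == 56    # L-band: update each 56 cells
assert round(depth_of_focus(0.03) / dR) == 467   # X-band: each 467 cells
```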
Cook and Bernfeld (1967, Chapter 11) have given a comprehensive analysis
of both deterministic and random errors, in the case of general waveforms to
be match filtered. For the linear FM waveform, precise results can be calculated
(Cook and Bernfeld, 1967, Chapter 6). These can be used as the basis for a
quantitative analysis of the effects of range migration and limited depth of focus.
In particular, defining the mismatch ratio

    γ = |δf_R/f_R|                                                 (4.1.48)

the criterion Eqn. (4.1.45) becomes γ < 1/B_D S. For values γ(B_D S) < 2, where
B_D S is the azimuth filter bandwidth time product, there is little loss of
resolution due to using a compression filter with chirp constant f′_R = f_R + δf_R
with a linear FM input with constant f_R (Fig. 4.7). With the Seasat
value B_D S = 3500, for example, this amounts to a proportional error
(δf_R)/f_R ≤ 0.6 × 10⁻³ (0.3 Hz/s at the nominal f_R = 500 Hz/s). From Eqn.
(4.1.44), for the nominal R_c = 850 km this corresponds to a mismatch δR_c =
(0.6 × 10⁻³)(850 km) = 510 m, so that over the swath in slant range of 35 km
about 70 different filters would be needed for no loss of resolution. The depth
of field of the processor is 510 m, using this criterion.
In addition to resolution changes, however, compression filter mismatch
disturbs the sidelobes of the matched filter output. For example, whereas the
filter output with matched f_R has sidelobes down 13 dB (Eqn. (4.1.22)), for even
the case of γB_D S = 2.5 (mismatch ratio γ = 0.7 × 10⁻³ with bandwidth time
product 3500), the first sidelobe is only 7.5 dB down from the peak (Fig. 4.8a).
Even mismatches only moderately larger, say γB_D S = 5 (δf_R = 0.7 Hz/s for
Seasat), cause serious disruption of the shape of the filter output (Fig. 4.8b).

On the other hand, for sidelobe control the matched filter will always be
used with some sort of weighting. The presence of this weighting considerably
ameliorates the degrading effects of chirp rate mismatch, since the influence of
errors at the ends of the matched filter is decreased by the weighting. For
example (Cook and Bernfeld, 1967, p. 158), with weighting designed to produce
sidelobes down 40 dB, a mismatch factor γ = 8/B_D S raises the first sidelobe
only by 4 dB. However, the mainlobe widens by an additional factor 2.3 beyond
that produced by the original weighting (Fig. 4.9). For γ = 4/B_D S, the widening
is by a factor 1.4. This value of γ for Seasat corresponds to a mismatch in f_R
of 0.6 Hz/s, or a depth of field in R_c of 1 km. The azimuth filter in that case
would need to be updated 35 times across the 35 km Seasat slant range swath
to stay within the limit.

The determining parameter in such matters, unity for π/4 phase error, is

    γB_D S = (2λR_c/L_a²)γ

using Eqn. (4.1.38) and Eqn. (4.1.48). Therefore short wavelength systems are
more resistant to filter mismatch than long wavelength systems; that is, they
have better depth of focus. Also, depth of focus degrades quickly as azimuth
resolution becomes finer.

Figure 4.7 Compressed pulse widening factor due to filter mismatch γ = |(δf_R)/f_R| for linear FM
in terms of bandwidth time product (from Cook and Bernfeld, 1967).

Figure 4.8 Distortion of matched filter output for linear FM pulse with filter mismatch
γ = |(δf_R)/f_R| in terms of bandwidth time product (from Cook and Bernfeld, 1967). Courtesy
J. Paolillo.

Figure 4.9 Effect of filter mismatch γ = |(δf_R)/f_R| on compressed pulsewidth for linear FM in
terms of bandwidth time product. Case of filter weight function 0.088 + 0.921 cos²[(π/B)(f − f_c)]
(from Cook and Bernfeld, 1967).

4.1.4 An Example

An example of the steps in the image formation procedure is of interest. In
Fig. 4.10 is shown a classic Seasat image of the NASA Goldstone antenna
complex in the Mojave Desert of California. The bright cross to the left of
center is the image of a large antenna dish, pointing towards the radar transmitter
on the satellite. The resulting very high radar reflectivity overloads the satellite
receiver, and results in the visibility of many sidelobes of the imaging algorithm
(the arms of the cross).

Fig. 4.11a shows the time waveform of the received signal for one radar pulse.
The time interval from about point 2700 to point 4200 is essentially the
transmitted linear FM pulse, as reflected from the antenna, and is of width τ_p.
The amplitude fully saturates the receiver. In Fig. 4.11b is shown the result of
matched filter range processing of the single pulse in Fig. 4.11a. The matched
filter output is of width δR, the slant range resolution.

Fig. 4.12 shows the amplitude of the complex number of each cell of the
data plane of slow and fast times. The single pulse waveform of Fig. 4.11b is


one horizontal cut through Fig. 4.12a. Since the very bright antenna dominates
the scene, its corresponding data are clearly visible in Fig. 4.12a. We are viewing
the system impulse response.

The curved trajectory in Fig. 4.12a is the locus of the pulse by pulse range
compressed peak responses, the nearly parabolic trajectory Eqn. (4.1.26). Range
migration correction is needed, as discussed in Section 4.1.3. The linear (walk)
component, the first term in Eqn. (4.1.40), is removed using a procedure discussed
in Section 4.2.3 below. The result is the locus of Fig. 4.12b, with only the
quadratic (curvature) migration component present.

Without range curvature, each vertical (constant range) cut through the
complex data field whose amplitude is shown in Fig. 4.12b would yield a complex
function of slow time, a linear FM Doppler signal. However, with curvature


INTRODUCTION AND OVERVIEW OF THE IMAGING ALGORITHM

Figure 4.11 Video offset signal and range compressed result for pulse viewing bright scattering
point of Fig. 4.10 (from McDonough et al., 1985).

Figure 4.12 Range migration of Goldstone antenna before (a) and after (b) nominal correction
(from McDonough et al., 1985).

Figure 4.13 Doppler spectra for three range bins through and adjacent to Goldstone antenna
showing the effect of range curvature on Goldstone data (from McDonough et al., 1985).


each cut passes through two arms of the parabolic locus (except for the single
cut at the apex). Each segment of the linear FM waveform traversed by a single
range cut has adequate bandwidth time product to lock together slow time and
Doppler frequency. Therefore, the two branches of the parabola cut at different
slow times map into different Doppler frequency regions. This is evident in the
Doppler amplitude spectra shown in Fig. 4.13.

The procedure of range (quadratic) curvature correction assembles the spectra
of Fig. 4.13 for the various range cuts into a single Doppler spectrum
corresponding to the range of the parabola apex. That spectrum is then processed
with the Doppler compression filter to yield a line of complex image in slow
time. Fig. 4.14a shows the result of separately compressing four subbands of
the available Doppler spectrum to obtain four statistically independent images
of the antenna point. Fig. 4.14b finally shows the result of adding the intensities
of the four images to obtain a single image line along slow time, at the range
of the antenna point ("multilook" processing). Fig. 4.14b is the constant range
cut through the antenna point in Fig. 4.10.
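Multilook processing as just described, compressing disjoint Doppler subbands separately and then adding the detected intensities, can be sketched numerically. White Gaussian noise stands in for real azimuth data here, purely to exercise the statistics; the four-fold intensity averaging should roughly halve the normalized standard deviation:

```python
import numpy as np

# Four-look processing sketch: split the Doppler spectrum of an azimuth
# line into four disjoint subbands, inverse transform each separately
# (a stand-in for subband compression), detect, and sum intensities.
rng = np.random.default_rng(0)
N = 4096
data = rng.standard_normal(N) + 1j * rng.standard_normal(N)
spec = np.fft.fft(data)

looks = []
for k in range(4):                               # four Doppler subbands
    sub = spec[k * N // 4:(k + 1) * N // 4]
    looks.append(np.abs(np.fft.ifft(sub))**2)    # single-look intensity

multilook = sum(looks)                           # add intensities, not amplitudes

# One-look intensity is exponentially distributed (std/mean = 1);
# averaging four independent looks reduces std/mean to about 1/2.
ratio_1 = looks[0].std() / looks[0].mean()
ratio_4 = multilook.std() / multilook.mean()
```

Because the subbands occupy disjoint frequency bins of white noise, the four looks are statistically independent, which is the mechanism behind the speckle reduction.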

Figure 4.14 (a) Four single-look images of Goldstone antenna in Fig. 4.10. (b) One four-look
image resulting from images of (a) (from McDonough et al., 1985).

Figure 4.15 Point reflectors on Goldstone (dry) Lake, showing attained resolution and sidelobe
structure (from McDonough et al., 1985).

In Fig. 4.10, the antenna point so dominates the scene that no other structure
appears in the image cut Fig. 4.14b. On the other hand, Fig. 4.15 shows a detail
of the image of small point reflectors near the large antenna (Fig. 7.16) on
the dry bed of Goldstone Lake (a smooth background which appears dark to
the radar). The sidelobe structure and mainlobe width of the radar and image
formation algorithm response to a point target are plotted as cuts through the
rightmost reflector point.

4.2 COMPRESSION PROCESSING

The unique aspect of SAR processing is the compression of the complex range
data Eqn. (4.1.24) in the slow (azimuth) time variables. In order to carry that
out, it is necessary that the results of range processing of perhaps thousands of
radar pulses be available. Since each radar pulse produces thousands of range
time samples, the memory requirements on the computer are considerable. In
addition, range data are naturally produced and ordered with range as the
minor index, and pulse number as the major index. For azimuth processing,
the reverse is needed. This leads to the necessity for some kind of "corner
turning" in order to access the data matrix by columns after having stored it
by rows. With the availability of increasingly large random access memory, or
with the construction of special purpose computing units, these difficulties have
tended to recede in importance. However, in the earlier development of SAR
processing algorithms for data from space platforms they were a considerable
hindrance to achieving high speed image formation.

In Chapter 9 considerable attention is given to the computing systems which
have been developed to carry out the SAR imaging process. Here we will be
concerned entirely with the signal processing algorithms which act on the data,
assuming it is available where and when needed. The variety of approaches
taken by various designers is a reflection of the difficulty of the problem. There
is no clear-cut "best" way to proceed, although lately the trade-offs among
various alternatives have become much clearer.

We begin the discussion with some details common to all processors which
use the rectangular algorithm. We then discuss an azimuth compression
algorithm which is in concept the most direct of the various algorithms in
current use, the time domain processor. This is followed by a detailed discussion
of azimuth compression algorithms which operate in the Doppler frequency
domain. The computational aspects of these algorithms are discussed in
Section 9.2.

4.2.1 Range Compression Processing

To be specific, we will assume henceforth that the transmitted pulse is the linear
FM that is commonly used in remote sensing SAR systems

s(t) = cos[2π(f_c t + Kt²/2)],  |t| ≤ τ_p/2    (4.2.1)

The received waveform at the radar in response to a unit point target with
coordinates (x_c, R_c) (Fig. 4.1) is the impulse response

h(x, R|x_c, R_c) = cos{2π[f_c(t − τ) + K(t − τ)²/2]}    (4.2.2)

The delay is τ = 2R(x)/c with

R(x) = R_c + Ṙ_c(s − s_c) + R̈_c(s − s_c)²/2 + ⋯    (4.2.3)

The slow time variable is s = x/V_a, where V_a is the speed of the platform along
its path.

The received signal Eqn. (4.2.2) is often converted to some different frequency
band (S-band for example), perhaps for transmission to ground, and further
converted after ground station reception to a relatively low frequency carrier
f_1 (the offset video frequency) (Fig. 4.16). The result is an offset video impulse
response function

h(s, t|x_c, R_c) = cos{2π[f_1 t − 2R(s)/λ + K(t − 2R(s)/c)²/2]},
    |t − 2R(s)/c| ≤ τ_p/2    (4.2.4)

Figure 4.16 (a) Conversion of RF signal to video offset signal. (b) Complex basebanding
(I, Q detection) of offset video signal.

The range of s values over which this is effectively nonzero depends on the
radar antenna beamwidth θ_H = λ/L_a, since that determines the length of slow
time S = θ_H R_c/V_a for which any particular terrain point is in view.

The received data array v_r(s, t) will be roughly rectangular, and will extend
in slant range R = ct/2 the full swath width W_s and in azimuth x = V_a s
some indefinite amount depending on the amount of data which must be
simultaneously accessed for image processing. The impulse response Eqn. (4.2.4)
will cover a region, as indicated in Fig. 4.17, which is of extent cτ_p/2 in slant
range R for every x. The extent of x over which the impulse response is nonzero
is not sharply defined, since the edges of the antenna beam are not sharp. The
midpoint of the region of the impulse response, shown in Fig. 4.17 as a solid
line, is the curve Eqn. (4.2.3), which is often well approximated as a parabola.

The real valued data v_r(s, t) is naturally sampled in slow time s at the radar
pulse repetition frequency. In fast time t the sampling is done after down
conversion to the offset video frequency at a rate a little above the Nyquist
rate (Appendix A). This is typically somewhat greater than 2B_R, where B_R is
the bandwidth of the radar pulse around the carrier (Fig. 4.16).

As an example of the size of this real data matrix, the Seasat offset video
frequency f_1 = 11.38 MHz required a sampling frequency somewhat greater
than 2B_R = 38 MHz, and 4f_1 = 45.53 MHz was used. The target point
illuminated may be located anywhere in the range swath. Therefore provision
must be made to store sampled values for each pulse over a time span nominally
equal to the slant range swath width W_s plus the pulse width τ_p. For Seasat,
this was (2/c)(37 km) + 33.8 µs = 280 µs; in fact, 300 µs was used, resulting in
(exactly) 13680 real data samples to be stored. In the along-track coordinate
x, the Seasat impulse response spans about 4000 pulses, while something like


8000 data pulses need to be considered simultaneously for efficient processing
(Section 9.2.4).
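The Seasat sizing figures above follow from simple arithmetic; a sketch, using the sampling rate, swath, and pulse width quoted in the text:

```python
# Seasat range-window sizing, using the values quoted in the text.
c = 3.0e8            # speed of light, m/s (nominal)
f_s = 45.53e6        # fast-time sampling frequency, Hz (4 * f_1)
W_s = 37e3           # slant range swath, m
tau_p = 33.8e-6      # transmitted pulse width, s

window = 2 * W_s / c + tau_p    # s: about 280 microseconds
samples = 300e-6 * f_s          # samples stored per pulse for a 300 us window
print(window * 1e6, samples)
```

The sample count comes out near 13660, close to the 13680 quoted in the text (the small difference corresponds to rounding of the sampling rate).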
The range dimension of the two-dimensional compression processing is
common to most SAR systems. The operations required are whatever it takes
to compress the function Eqn. (4.2.4) into a pulse at fast time t = 2R(x)/c. The
same processing is done for every pulse over the range of slow time needed to
form an image. The first operation is complex basebanding (coherent detection),
which is initiated by Fourier transformation of the received data v_r(s, t) with
respect to fast time t. We assume that the radar pulse has a properly large
bandwidth time product B_R τ_p = |K|τ_p² (say > 100) so that the point response
function Eqn. (4.2.4) has a rectangular amplitude spectrum. The complex
basebanding operation amounts to deleting the negative or positive frequency
portion of the spectrum and shifting the remaining half to center on zero
frequency. Figure 4.16b shows one case.
Which half of the real range signal spectrum is used depends on a detail of
conversion to the offset video frequency f_1. The procedure is to multiply the
signal Eqn. (4.2.2) by a local oscillator signal cos(2πf_L t), and reject by filtering
any frequency components of the result near the carrier f_c. Letting τ = 0 for
convenience, the result is

2{cos(2πf_L t) cos[2π(f_c t + Kt²/2)]}_filtered = cos{2π[(f_c − f_L)t + Kt²/2]}    (4.2.5)

Figure 4.16a shows the case f_L < f_c, so that f_c − f_L > 0. In that case, the positive
frequency components of the signal Eqn. (4.2.5) have phase

φ = 2π[(f_c − f_L)t + Kt²/2]

and frequency rate K the same as the transmitted waveform. The appropriate
operation is to shift that part of the spectrum left to center on zero.

The case f_L > f_c may also occur in the system hardware arrangement, in
which case the spectra in Fig. 4.16a cross over. It is then the negative frequency
components of Eqn. (4.2.5) which have frequency rate K (rather than −K).
The basebanding operation is then to shift the left half of the spectrum to the
right to center on zero. In either case of f_L > f_c or f_L < f_c, the resulting spectrum
Ṽ_r(s, f) of the basebanded data corresponds to a complex time function ṽ_r(s, t)
for each pulse time s = s_n, which from Eqn. (4.2.4) is

ṽ_r(s, t) = 0.5 exp[−j4πR(s)/λ] exp{jπK[t − 2R(s)/c]²},
    |t − 2R(s)/c| ≤ τ_p/2    (4.2.6)

Figure 4.17 Span in memory of responses to point targets at (x_c, R_c) beam center coordinates.

The spectrum of this, except for the constant, is the phase factor

exp[−j4πf R(s)/c]


corresponding to the time shift t = 2R(s)/c, times the spectrum of the complex
basebanded transmitted pulse Eqn. (4.2.1)

s̃(t) = 0.5 exp(jπKt²),  |t| ≤ τ_p/2    (4.2.7)

Since the pulse Eqn. (4.2.7) by assumption has a large bandwidth time product,
its bandwidth is just B_R = |K|τ_p, and its spectrum is (Eqn. (3.2.29))

S̃(f) = 0.5|K|^(−1/2) exp[j(π/4) sgn(K)] exp(−jπf²/K),  |f| < |K|τ_p/2 = B_R/2    (4.2.8)

The spectrum of the basebanded impulse response ṽ_r(s, t) of Eqn. (4.2.6) is then

Ṽ_r(s, f) = 0.5|K|^(−1/2) exp[j(π/4) sgn(K)] exp[−j4πR(s)/λ]
    × exp[−j4πf R(s)/c] exp(−jπf²/K),  |f| < B_R/2    (4.2.9)

Since the transmitted spectrum Eqn. (4.2.8) has constant amplitude, the
compression filter is just the matched filter

H(f) = 1/S̃(f) = 2|K|^(1/2) exp[−j(π/4) sgn(K)] exp(jπf²/K),  |f| < B_R/2    (4.2.10)

Applying this filter to the basebanded signal spectrum Eqn. (4.2.9) yields for
each radar pulse a filter output

G(s, f) = H(f)Ṽ_r(s, f) = exp[−j4πR(s)/λ] exp[−j4πf R(s)/c],  |f| < B_R/2    (4.2.11)

The corresponding time response is

g(s, t) = B_R exp[−j4πR(s)/λ] sinc{πB_R[t − 2R(s)/c]}    (4.2.12)

where B_R = |K|τ_p is the transmitted pulse bandwidth and sinc(u) = (sin u)/u.
This is of nominal time width δτ = 1/B_R, and is the result of range compression.
The collection of these over slow time s constitutes the data for azimuth
compression.

The range compression operations will usually be carried out digitally. We
describe the details in Section 5.1. The effect is that values of the range
compressed data Eqn. (4.2.12) are available only at fast times which are integer
multiples t_k = k/f_s of the sampling interval 1/f_s of the complex video data
(f_s > B_R). This time quantization step due to sampling is usually of the order
of the nominal width δτ = 1/B_R of the compressed response function Eqn.
(4.2.12). As a result, range interpolation of the range compressed data array is
usually required during range migration compensation in azimuth processing.
We will discuss the details below.

Alternatively, the basebanded data Eqn. (4.2.6) could be correlated in fast
time t with the basebanded transmitted pulse Eqn. (4.2.7) to compute

g(s, t) = ∫_{t−τ_p/2}^{t+τ_p/2} ṽ_r(s, t′) s̃*(t′ − t) dt′    (4.2.13)

Since the correlation operation Eqn. (4.2.13) is stationary in this case of range
processing (Appendix A), i.e., the integrand involves s̃(t′ − t), and not s̃(t′|t),
the matched filter realization Eqn. (4.2.11) is exact. There is no reason to carry
out range processing as a correlation, unless it is more efficient than the fast
convolution processing involved in matched filtering. That will only be the case
for a transmitted pulse which spans a small (less than say 64) number of time
samples, so that the time bandwidth product is less than 64. This is rarely the
case, although in at least one aircraft system (Bennett and Cumming, 1979)
range processing (as well as azimuth processing) was realized as a time domain
correlation (convolution).

It is worth recalling that the matched filter output Eqn. (4.2.12) is
approximately correct even for transmitted pulses with rather small bandwidth
time products, on the order of 20, provided the full signal and matched filter
bandwidth are used for whatever pulse is transmitted (Section 3.2.2).

It is in the stage of azimuth (slow time) processing that matters become more
complicated. This is because the azimuth impulse response function Eqn. (4.2.12)
depends on range R_c, through Eqn. (4.2.3). The compression filter is thereby
non-stationary, and processing in the frequency domain (fast convolution)
requires care. Second, the data to be compressed lie along the range migration
curve Eqn. (4.2.3). Both these effects were discussed in Section 4.1.3. We will
now discuss the ways in which they affect SAR azimuth compressor design.
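This range compression chain can be sketched numerically: a baseband linear FM pulse is compressed by frequency domain matched filtering, and the output mainlobe width comes out on the order of 1/B_R. The parameters are illustrative, and the matched filter is taken as the conjugate spectrum, which for the nominally flat chirp spectrum is equivalent up to a constant amplitude:

```python
import numpy as np

# Range compression of a baseband linear FM pulse by matched filtering
# in the frequency domain (illustrative parameters, not a real radar's).
K, tau_p, fs = 1e12, 10e-6, 20e6       # chirp rate (Hz/s), pulse width, sampling
N = 1024                               # FFT length (pulse zero-padded)
n = int(tau_p * fs)                    # samples in the pulse
t = (np.arange(n) - n / 2) / fs        # pulse time axis, centered
s = 0.5 * np.exp(1j * np.pi * K * t**2)     # baseband chirp

S = np.fft.fft(s, N)
H = np.conj(S)                         # matched filter: conjugate spectrum
g = np.fft.ifft(S * H)                 # compressed pulse, peak at index 0

B_R = abs(K) * tau_p                   # pulse bandwidth, here 10 MHz
peak = np.abs(g).max()
# 3 dB width of |g| should be on the order of 1/B_R = 0.1 microseconds:
width = np.sum(np.abs(g) > peak / np.sqrt(2)) / fs
```

Because the filtering is circular over the zero-padded FFT length, the peak appears at lag zero and the response far from the peak falls essentially to zero.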

4.2.2 Time Domain Azimuth Processing

The most straightforward way to deal with the problems of range migration
and point dependent impulse response in azimuth processing is that implemented
in the processor of the RAE of Great Britain (Barber, 1985). In this procedure,
azimuth correlation is carried out on basebanded range compressed data,
corresponding to Eqn. (4.2.12), using the correlation kernel Eqn. (4.1.30), taking
account that h⁻¹ depends weakly on s_c. Since only one image point is produced
for each correlation operation, the process is markedly slower than procedures
which use fast convolution in the slow time domain. On the other hand, no
approximations are necessary such as are required to use fast convolution in
azimuth time in the usual case of range migration and non-constant filter
parameters.


Specifically, in time domain azimuth correlation each point of the complex
image is calculated separately as

ζ(s_c, R_c) = ∫ h⁻¹(s|s_c, R_c) g(s|s_c, R_c) ds    (4.2.14)

where

h⁻¹(s|s_c, R_c) = exp{−j2π[f_DC(s − s_c) + f_R(s − s_c)²/2 + ḟ_R(s − s_c)³/6]}    (4.2.15)

with both f_DC and f_R depending (markedly) on R_c and (weakly) on s_c. (The
cubic term in the expansion Eqn. (4.1.26) is retained for better accuracy.) The
basebanded range compressed data g(s, t) corresponding to Eqn. (4.2.12) are
collected along the trajectory (Fig. 4.5) R = R(s|s_c, R_c) to form the integrand
data function in Eqn. (4.2.14)

g(s|s_c, R_c) = g[s, R(s|s_c, R_c)]    (4.2.16)

Since the data Eqn. (4.2.16) are available only at slow times s = s_n which are
pulse reception times, the correlation integral Eqn. (4.2.14) is realized as a sum

ζ(s_c, R_c) = Σ_n h⁻¹(s_n|s_c, R_c) g(s_n|s_c, R_c)    (4.2.17)

or with some more accurate numerical procedure. Regardless of s_n, s_c, R_c, the
values of h⁻¹(s_n|s_c, R_c) in Eqn. (4.2.17) can be calculated using Eqn. (4.2.15)
and refined orbit data, according to the equations of Appendix B. Analysis
indicates that for an L-band SAR, and especially operating at high latitudes,
the cubic term in Eqn. (4.2.15) is only marginally negligible. Although there is
no difficulty in doing a precise calculation of these values, with negligible error
the values of f_DC, f_R, ḟ_R can be calculated on a grid of some reasonable fineness
(e.g., 10 × 10 km), and polynomial interpolation used to the particular s_c, R_c
of interest.

Although values of h⁻¹(s_n|s_c, R_c) are available for any s_n, s_c, R_c, the same
is not true of the data function g(s_n|s_c, R_c), since for the specified s_c, R_c the
locus point R(s_n|s_c, R_c) will only by coincidence coincide with a range sampling
point. For each s_n, in general, range interpolation is needed to find the value
Eqn. (4.2.16). This is conveniently done by "zero padding" the Fourier
coefficients corresponding to the basebanded range compressed data spectrum
Eqn. (4.2.11) before taking the inverse transform to obtain time samples
corresponding to Eqn. (4.2.12). The procedure, discussed in Appendix A, yields
values Eqn. (4.2.16) on a finer grid (within memory constraints) than the


grid of the time sampled Eqn. (4.2.12) to allow the interpolated value at the
range nearest to R(s_n|s_c, R_c) to suffice.

The considerable additional calculations needed to produce a final SAR
image are relatively generic to all processors, and will be discussed separately
in later chapters. These involve such things as Doppler filtering to obtain data
needed for subaperture processing as a part of formation of multilook images
(Section 5.2), radiometric (Chapter 7) and geometric (Chapter 8) corrections,
resampling to a standard grid (Chapter 8), automatic determination of values
f_DC, f_R (clutterlock and autofocus) (Section 5.3), and so on. Some of these are
not needed by a time domain processor, such as resampling, and some (e.g.
autofocus) need not be used if accurate orbit information is available, but all
will be discussed to some extent in later sections.
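The correlation sum Eqn. (4.2.17) can be sketched for a single point target. For brevity the sketch omits range migration, so the data function is just the azimuth phase history, and drops the cubic term of Eqn. (4.2.15); all parameter values are arbitrary illustration values:

```python
import numpy as np

# Time domain azimuth compression (Eqn. (4.2.17)) for one point target,
# with range migration omitted so g(s_n) is just the azimuth chirp.
prf = 1000.0                         # pulse repetition frequency, Hz
f_DC, f_R = 50.0, -400.0             # Doppler centroid (Hz) and rate (Hz/s)
s_n = np.arange(-1000, 1000) / prf   # pulse times over a 2 s aperture
s_c = 0.0                            # target beam-center time

# Received azimuth phase history of the target (unit amplitude)
g = np.exp(2j * np.pi * (f_DC * (s_n - s_c) + f_R * (s_n - s_c)**2 / 2))

def image_point(sc):
    """Correlate with the reference h^{-1}(s_n | s_c) of Eqn. (4.2.15)
    (cubic term dropped) and sum over pulses, normalized by pulse count."""
    h_inv = np.exp(-2j * np.pi * (f_DC * (s_n - sc) + f_R * (s_n - sc)**2 / 2))
    return np.sum(h_inv * g) / len(s_n)

# The response peaks at the true target position s_c = 0.
vals = [abs(image_point(sc)) for sc in (-0.5, -0.1, 0.0, 0.1, 0.5)]
```

Each image point costs a full sum over the aperture, which is why the text calls this approach markedly slower than fast convolution.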
4.2.3 Time Domain Range Migration Compensation

The procedure of forming an image from SAR data encounters two basic
difficulties. The first is that the system impulse response h(x, R|x_c, R_c) depends
strongly on R_c. That is, the system responds differently to targets which are at
different ranges R_c from the radar at the center of the radar beam. The difference
is embodied in the functional form of the range compressed data impulse
response Eqn. (4.2.12) arising from the differing shape of the range to target
function R(s) for different target positions. In time domain azimuth processing,
as in Section 4.2.2, one acknowledges that fact, and uses for correlation whatever
impulse response function corresponds to the image point in question. On the
other hand, if one wishes to use a more efficient fast convolution azimuth
processing (Appendix A), then approximations connected with depth of focus
enter (Section 4.1.3).

The second fundamental difficulty encountered is the fact that range R(s) to
a point varies with position of the radar along its track. Therefore, the numbers
representing the system impulse response are found in data memory along a
curved locus R = R(s). A processing algorithm must access the data to be
compressed in azimuth along that trajectory, the shape of which depends on
the target range R_c. In time domain processing the access is done directly, and
is relatively slow. In algorithms using fast convolution, other procedures have
been developed for access in the frequency domain. In this section and the next
we will describe the two most common such procedures used with the
"rectangular range Doppler" algorithm. In Chapter 10 we will discuss a
procedure which has been developed for so-called polar processing.
The Data Array

Consider then the complex basebanded range compressed response Eqn. (4.2.12)
due to a unit point target at beam center coordinates x_c, R_c (Fig. 4.1)

g(s, R) = h(s, R|s_c, R_c) = exp[−j4πR(s)/λ] sinc{(2πB_R/c)[R − R(s)]}    (4.2.18)


To an approximation which is normally adequate,

R(s) = R_c − (λf_DC/2)(s − s_c) − (λf_R/4)(s − s_c)²    (4.2.19)

with f_DC, f_R being the Doppler chirp parameters for the scene in question,
depending markedly on R_c and weakly on s_c. (In Eqn. (4.2.18), we ignore the
antenna weighting pattern for simplicity of writing. It can easily be included in
the compression filter, but often is not, in order to provide sidelobe control.)

For a particular pulse number m, corresponding to azimuth time s_m, the
values of Eqn. (4.2.18) for various range bins R_n,

g_mn = g(s_m, R_n)    (4.2.20)

would be stored in memory locations corresponding to the nodes (m, n) of the
data matrix (Fig. 4.18). Prior to azimuth direction Fourier transforming, we
need to compute and collect together the numbers

g(s_m|s_c, R_c) = g[s_m, R(s_m|s_c, R_c)]

Once these values are found, azimuth compression proceeds by computing their
spectrum over some range of slow time s, multiplying by the corresponding
matched filter spectrum for the image range R_c in question, and inverse
transforming. This achieves the matched filter computation of the correlator
output Eqn. (4.2.14) for a full azimuth line of image.

The procedure described in this section has been used by Bennett et al. (1981),
Herland (1981, 1982), and McDonough et al. (1985). The interpolation
necessary to compute the numbers which would be present in the data matrix
along the trajectory R(s) (Fig. 4.18), given the numbers which are present at
the nodes of the matrix, is carried out mostly in the time domain, before azimuth
Fourier transformation. The remaining interpolation operations are carried out
in the Doppler frequency domain. In effect, the bulk of the range walk, the
linear component of R(s) in Eqn. (4.2.19), is removed before azimuth Fourier
transformation of the data, with the remaining small range walk, and the full
range curvature (the quadratic term of R(s)), removed in the frequency domain.
Skewing the Data Array

We begin by choosing a nominal value R̄_c of R_c, say the midswath value, and
a nominal s̄_c, say the midscene value. The corresponding Doppler center
frequency f̄_DC is assumed to be known, perhaps by a clutterlock procedure
(Chapter 5) used in conjunction with the simple model for f_DC as a function of
R_c developed in Appendix B. For the entire data field, at all ranges R, we now
remove an amount of range migration corresponding to a range independent
linear walk,

R(s) = R̄_c − (λf̄_DC/2)(s − s̄_c)    (4.2.21)

Figure 4.18 Sampling nodes in data memory. At pulse m, a target at range R(s_m) is sampled at
ranges R_n. R = R(s) is the range migration locus of a point target.

To do this, at azimuth time s, counted from time s = 0 taken at the beginning
of the scene for convenience in indexing, we want to compute data corresponding
to the range at the tail of each arrow in Fig. 4.19 and store it at the memory
node corresponding to the arrowhead. Thus, at time s we want to shift the
(unavailable) analog data left by an amount ΔR = −λf̄_DC s/2, and then sample
at the discrete range bins (the memory nodes). Note that this removes the full
amount of the linear range walk only for the range R̄_c, because we use the same
value f̄_DC for all ranges R_c.

Figure 4.19 Re-indexing of data matrix and interpolation in time domain migration compensation.

The unavailable data values at the tails of the arrows in Fig. 4.19 are computed
by interpolating the values available at the memory nodes. Since the range
compressed data are bandlimited, and adequately sampled by the range bin
spacings, the interpolation procedures of Appendix A apply. In particular, let
G_k be the N discrete Fourier coefficients (taken over t) of the range compressed
data g(s, t) corresponding to Eqn. (4.2.18), sampled at the N range bin values
R = R_n as in Eqn. (4.2.20). Then the Fourier coefficients of the function
g(s, R + ΔR) are just

G_k′ = G_k exp(j2πkΔR/Nδx_s)    (4.2.22)

where the slant range sampling interval is δx_s = c/2f_s. In particular, for

ΔR = nδx_s + αδx_s

for some α with 0 ≤ α ≤ 1 for convenience, we have

G_k′ = G_k exp(j2πkn/N) exp(j2πkα/N)

The second exponential factor corresponds to interpolation by an amount αδx_s,
and the first corresponds to left shift of that interpolated sequence by n samples.
The left shift is accomplished simply by storing the interpolated sequence
appropriately at the output of the interpolating filter. The interpolating filter
Fourier coefficients are

F_k(α) = exp(j2πkα/N)    (4.2.23)

Each row of the data matrix will in general be associated with a different value of α

α = ΔR/δx_s − integer(ΔR/δx_s)

where "integer" indicates the integer part of the number. To avoid the necessity
of computing the interpolating filter coefficients during data processing, α can
be quantized into some appropriate number P of levels (four or eight, typically),
and the corresponding sets of interpolator coefficients exp(j2πkα/N) precomputed
and stored. Which set to use for any given data row (radar pulse) is determined
by calculating the index p such that

p/P ≤ ΔR/δx_s − integer(ΔR/δx_s) < (p + 1)/P,  p = 0, 1, …, P − 1    (4.2.24)

where ΔR is the total shift Eqn. (4.2.22) carried out in correcting for the nominal
linear range migration.

Since the operations of range compression and interpolation are both linear
and stationary, they commute. We can therefore interpolate directly on the
complex basebanded range data Eqn. (4.2.6), and then apply the range
compression filter. Therefore we can precompute P sets of range compression
filter coefficients

H_k(p) = H_k F_k(p),  p = 0, …, P − 1

where H_k is the usual range compression filter and F_k(p) is the appropriate set
of interpolator coefficients Eqn. (4.2.23) calculated for α = p/P corresponding
to p as in Eqn. (4.2.24). After compression and interpolation, the shifting
operation by the appropriate integer number of range bins amounts simply to
re-indexing the output of the compression filter before storing in the data matrix.

After this compression and interpolation process, the data corresponding to
a point target at some beam center slant range R_c lie within 1/P of a complex
range bin of the locus given by

R(s) = R_c − (λ/2)(f_DC − f̄_DC)(s − s_c) − (λf_R/4)(s − s_c)² + λf̄_DC s_c/2    (4.2.25)

The last term of this represents a skewing of the final image, which can be
removed after azimuth compression. The remaining terms of R(s) − R_c represent
a residual range migration after the interpolation and re-indexing procedure.

Doppler Domain Interpolation

For Seasat-like systems, the Doppler center frequency f_DC varies by only a few
hundred Hertz over the range swath, while the azimuth extent of the point
target response is a few seconds at most. Even for the larger values of λ, say
at L-band, for which migration effects are more severe, the residual linear and
the quadratic terms together, Eqn. (4.2.25), amount to only a few tens of range
bins over the full point target response history. For Seasat, for example, from
Eqn. (4.1.34) the nominal integration time is S = 2.4 seconds. Using a nominal
value f_DC − f̄_DC = 100 Hz, the residual range walk in Eqn. (4.2.25) is 28 m, or
about 5 range bins, while with f_R = 500 Hz/s the curvature amounts to 7 range
bins. Thus, the bandwidth time product of the interpolated and shifted data in
each range bin is on the order of 1/12 the full azimuth product (3200 for Seasat),
or about 250 per bin, which is more than enough to lock together time and
Doppler frequency in each bin. (A basebanded waveform of length T and
two-sided band B, sampled at f_s = B, yields a number of samples N = BT, the
bandwidth time product.)
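The residual migration figures just quoted follow directly from Eqn. (4.2.25), evaluating the linear term over the full aperture S and the quadratic term at the aperture edge s − s_c = S/2. The wavelength used below is the nominal Seasat L-band value, an assumption not stated in this passage:

```python
# Residual range migration for Seasat-like values, per Eqn. (4.2.25).
lam = 0.235          # L-band wavelength, m (assumed Seasat value)
S = 2.4              # integration time, s
f_R = 500.0          # azimuth FM rate magnitude, Hz/s
df = 100.0           # residual Doppler centroid f_DC - f_DC_bar, Hz

walk = (lam / 2) * df * S              # residual linear walk, m (~28 m)
curve = (lam * f_R / 4) * (S / 2)**2   # curvature at the aperture edge, m
print(walk, curve)
```

The walk comes out at 28 m, matching the text; the curvature comes out near 42 m, and the ratio of the two (about 1.4) matches the quoted 5-bin versus 7-bin counts.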
With time and frequency locked together by

s − s_c = (f − f_DC)/f_R

where now f is Doppler frequency, corresponding to slow time s, after Fourier
transformation of the interpolated range compressed data g(s, R) along each
range bin to produce Doppler spectra G′(f, R), the residual migration correction
needed, Eqn. (4.2.25), can be written in the frequency domain as

δ′R = −(λ/2)(f_DC − f̄_DC)(f − f_DC)/f_R − (λ/4f_R)(f − f_DC)²    (4.2.26)

For each value of R_c for which an image line ζ(s, R_c) is to be constructed, we
need to assemble the proper Doppler spectrum for azimuth compression
processing from data G′(f, R) located at R_c + δ′R for each frequency f of the
discrete spectrum over the Doppler band. Although R_c will be an integral
number of range bins, generally δ′R will not, so that there will not usually be
a data node at (f, R_c + δ′R). Interpolation is then needed, to calculate
G′(f, R_c + δ′R) from adjacent values G′(f, nδx_s). Simple polynomial interpolation
using perhaps four adjacent values suffices. This finally corrects the last range
migration effect, and compression in azimuth follows using the appropriate
sidelobe weighted compression filter.

In the case of small range migration, such as for a Seasat-like system with
beam squint angle at most a fraction of a degree, it may not be necessary to
do any time domain adjustments. All the range migration can then be removed
in the Doppler frequency domain using Eqn. (4.2.26) (Bennett et al., 1980), taking
f_DC = f̄_DC = 0.
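The interpolation step just described, estimating G′(f, R_c + δ′R) from four adjacent range samples, can be sketched with cubic Lagrange interpolation (one choice of the "simple polynomial interpolation" mentioned above). The spectrum row below is synthetic, purely to exercise the interpolator:

```python
import numpy as np

# Doppler-domain range interpolation sketch: pull G'(f, R_c + d'R) from
# samples G'(f, n * dx) with 4-point (cubic Lagrange) interpolation.
dx = 6.0                                 # slant range bin spacing, m (illustrative)
n_rng = 64
R = np.arange(n_rng) * dx
G = np.cos(2 * np.pi * R / 200.0) + 0j   # stand-in smooth "spectrum" row

def interp4(G, R_want, dx):
    """Cubic Lagrange interpolation of G at range R_want (metres)."""
    x = R_want / dx
    i = min(max(int(np.floor(x)), 1), len(G) - 3)   # interior 4-point stencil
    val = 0j
    for j in range(i - 1, i + 3):
        w = 1.0
        for m in range(i - 1, i + 3):
            if m != j:
                w *= (x - m) / (j - m)   # Lagrange basis weight at x
        val += w * G[j]
    return val

# Interpolated value at a non-integer bin should track the underlying
# smooth function closely.
R_want = 10.4 * dx
approx = interp4(G, R_want, dx)
exact = np.cos(2 * np.pi * R_want / 200.0)
```

In a processor this interpolation is repeated per Doppler frequency, with δ′R given by Eqn. (4.2.26).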
Criterion for Success of the Interpolation

The procedure we have described here is simple and accurate, unless the linear
range walk is excessive. The potential difficulty in the case of large range walk
(which the technique of secondary range compression, described in Section 4.2.4,
is designed to circumvent) can be understood from Fig. 4.20. By removal of
the nominal linear range walk in the time domain, we are in effect carrying out
compression processing along the indicated diagonal line through memory. As
shown in the figure, targets with different values of R_c have their data lying
near the same diagonal. Since the azimuth chirp constant f_R depends on R_c,
along the line of analysis there occur linear FM functions in the Doppler domain
with different chirp constants. These will all be compressed by the same azimuth
compression filter, embodying some fixed value f_R. Any target for which the
filter constant f_R differs from the target constant more than allowed by the
Figure 4.20 Two point targets with extreme range migration may involve chirp constants which exceed the azimuth depth of focus. (The figure plots slow time s against range R; the nominal walk line through memory has slope ds/dR = -2/λf_Dc.)

depth of focus will be defocussed. Therefore, the length of azimuth time used in batch processing in the fast azimuth compression process must be short enough so that, for whatever nominal range walk is present, the span of values R_c is within the depth of focus of the processor. In extreme cases (for example, with squint angle more than a degree, especially at L-band and lower), this may force the azimuth FFT length to be shorter than would otherwise be desired.
Since the nominal range walk locus in Fig. 4.20 is given by Eqn. (4.2.21), where f_Dc is the selected (say midswath) value used in the compensation procedure, and s_c', R_c' are say the midscene values, the slope of the nominal walk line is

ds/dR = -2/λf_Dc

For an azimuth analysis time span Δs, the span of target values R_c included is then

ΔR_c = (λ|f_Dc|/2) Δs

From Appendix B, the model Eqn. (4.1.35) for f_R, that is

f_R = -2V²/λR_c

holds quite closely, with V taken as a velocity parameter which depends only
weakly on s and not on R. Therefore, the change in target f_R across the span ΔR_c is

Δf_R = -f_R ΔR_c/R_c

or

|Δf_R/f_R| = ΔR_c/R_c = (λ|f_Dc|/2R_c) Δs    (4.2.27)

If we require a mismatch ratio no larger than the ε of Eqn. (4.1.48), then the span of azimuth processing time Δs is limited by

Δs < 2εR_c/λ|f_Dc|    *(4.2.28)

The parameter ε depends on the system depth of focus, discussed in Section 4.1.3. There it was determined that

ε = 2/B_D S

was within good tolerance, where B_D S is the system azimuth bandwidth time product. Using Seasat values, say R_c = 850 km, λ = 0.235 m, and (marginally) with ε = 0.001, if we want to use say 8K azimuth points for efficient fast convolution, with a PRF of 1650 Hz we must have

|f_Dc| < 1500 Hz    (4.2.29)
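The arithmetic of this limit can be checked directly from Eqn. (4.2.28), using the Seasat values just quoted (a sketch; the azimuth span follows from the FFT length and the PRF):

```python
# Doppler centroid limit: Eqn. (4.2.28) gives ds < 2*eps*Rc/(lam*|fDc|);
# solving for |fDc| with ds fixed by the azimuth FFT length and the PRF.
eps = 0.001          # mismatch ratio (the marginal value from the text)
Rc = 850e3           # slant range, m
lam = 0.235          # wavelength, m (L-band)
prf = 1650.0         # pulse repetition frequency, Hz
n_fft = 8192         # "8K" azimuth points
ds = n_fft / prf     # azimuth analysis span, s (about 5 s)
fdc_max = 2 * eps * Rc / (lam * ds)
print(round(fdc_max))   # roughly 1500 Hz, as in Eqn. (4.2.29)
```

Halving the FFT length to 4K halves ds and therefore doubles the allowed |f_Dc|, which is the remark made in the text below.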

This is a somewhat small value, which might be exceeded if the satellite has a squint of more than a fraction of a degree. Decreasing the azimuth FFT size to 4K would double the limit Eqn. (4.2.29), however, which is reasonably within the typical operating range of a side-looking platform. For higher frequency systems (C or X band), the problem tends to disappear because λ decreases, allowing f_Dc to increase in inverse proportion for the same azimuth FFT length. Nonetheless, L-band systems with squint angle of more than a fraction of a degree can be difficult to deal with using the time domain migration compensation described here. The algorithm in the next section was designed to deal with that situation.

4.2.4 Frequency Domain Azimuth Processing

The advantage in processing speed which fast correlation, based in the frequency domain, has over time domain correlation is considerable. There is a strong motivation to use fast correlation whenever it is reasonably possible to do so. The phenomenon of range migration, however, considerably complicates the design of a processor using fast correlation. (The slow variations in azimuth compression parameters f_Dc, f_R with slow time s are a lesser inconvenience, compared with range migration.)

The earliest suggested processor for space-based SAR data of the family we will discuss in this section (Wu, 1976) was envisioned to operate entirely in the Doppler frequency domain for azimuth processing. Such a processor is able to deal with only small range migration effects, essentially only the quadratic curvature component. Beam squint angles larger than a small value lead to data sets which are difficult to process accurately. Accordingly, two subsequent refinements were made.

First, a processor was developed which carried out range migration correction partly in the slow time domain and partly in the Doppler frequency domain, as described in Section 4.2.3. Second, a refined algorithm operating entirely in the frequency domain was developed (Jin and Wu, 1984; Chang et al., 1992) which is free of approximations which would be unjustified even for data with rather severe amounts of range migration. In this section, we will describe the latter processor. It is a direct descendant of the earlier hybrid correlation algorithm of Wu (1976) and Wu et al. (1982b), but free of certain approximations used there which are not well satisfied in the case of data with large range walk.

Impulse Response for Range Compressed Data

To begin, consider again the system impulse response. A general transmitted waveform

s(t) = cos[2πf_c t + φ(t)],  |t| < τ_p/2

will result in a received response to a unit point target whose positive frequency portion is
s(t - 2R/c) = exp{j[2πf_c(t - 2R/c) + φ(t - 2R/c)]}

where

R = R(s) = R_c + Ṙ_c(s - s_c) + R̈_c(s - s_c)²/2 + ···
         ≈ R_c - (λ/2)[f_Dc(s - s_c) + f_R(s - s_c)²/2]    (4.2.30)

Complex basebanding of this yields

v_s(s, t) = exp[-j4πR(s)/λ] exp{jφ[t - 2R(s)/c]},  |t - 2R(s)/c| < τ_p/2    (4.2.31)


Range compression of the received data is easily carried out as the first operation of image formation. The result corresponds to an impulse response which is the range compressed version of Eqn. (4.2.31). Let S(ν) be the spectrum of the basebanded transmitted signal:

S(ν) = F{exp[jφ(t)]}

where we use ν for the frequency variable corresponding to range R or range time t, reserving f now for Doppler frequency. Range compression is then carried out by filtering the basebanded data using:

H_R(ν) = S*(ν),  |ν| < B_R/2
       = 0,      otherwise

The result corresponds to the range compressed spectrum

G(s, ν) = H_R(ν) F{v_s(s, t)} = exp[-j4πR(s)/λ] exp[-j4πνR(s)/c],  |ν| < B_R/2

so that

g(s, t) = B_R exp[-j4πR(s)/λ] sinc{πB_R[t - 2R(s)/c]}    (4.2.32)

Writing t = 2R/c, this is

g(s, R) = B_R exp[-j4πR(s)/λ] sinc{(2πB_R/c)[R - R(s)]}    (4.2.33)

The response function Eqn. (4.2.32) involves both s_c and R_c other than in the combinations s - s_c and R - R_c. That is to say, the linear radar system is nonstationary (Appendix A). However, taking note that s_c enters into the expression Eqn. (4.2.30) only in the form s - s_c, and in the weak dependence of f_Dc, f_R on s_c, the corresponding impulse response for range compressed data is well approximated, from Eqn. (4.2.33), as

h(s, R|R_c) = B_R exp[-j4πR_1(s)/λ] sinc{(2πB_R/c)[R - R_1(s)]}    *(4.2.34)

where we redefine the function h in so writing, and

R_1(s) = R_c - (λ/2)(f_Dc s + f_R s²/2)    (4.2.35)

This is the impulse response of a two-dimensional system which is approximately stationary in s, but nonstationary in R, both through the explicit appearance of R_c and through the strong dependence of f_Dc, f_R on R_c. We wish to determine its inverse, the corresponding image formation operator to be used on range compressed basebanded data.

Image Formation and Secondary Range Compression

Given the system response function h(s, R|R_c) of Eqn. (4.2.34), suppose that we want to produce a line of complex image ζ(s, R_c). Then

ζ(s, R_c) = F⁻¹{G(f, ν)/H(f, ν|R_c)}    (4.2.36)

where the inverse Fourier transform is two dimensional, G and H are the two dimensional Fourier transforms of g(s, R), the range compressed complex data, and h(s, R|R_c), and the quantity G/H is defined as zero for any frequencies for which H is zero. Writing the two dimensional inverse transform in Eqn. (4.2.36) as a sequence of one-dimensional transforms, we define H̃ = 1/H for H ≠ 0 and H̃ = 0 for H = 0. Then

ζ(s, R_c) = F_f⁻¹{g̃(f, R) * h̃(f, R|R_c)}    *(4.2.37)

where the convolution is in the variable R, g̃(f, R) is the Doppler spectrum of the range compressed data field taken for fixed R, and h̃(f, R|R_c) is the inverse transform of H̃ over ν. We now need the function h̃(f, R|R_c) in order to describe the imaging algorithm.

The Doppler spectrum of the system function Eqn. (4.2.34) is

H(f, R|R_c) = B_R ∫_{-∞}^{∞} G(s) exp[-j4π(R_c/λ - f_Dc s/2 - f_R s²/4)]
              × sinc[(2πB_R/c)(R - R_c + λf_Dc s/2 + λf_R s²/4)] exp(-j2πfs) ds    (4.2.38)

where we have explicitly inserted R_1(s) from Eqn. (4.2.35), and where we also include the two way antenna voltage pattern G(s) in azimuth. (This is the one-way power pattern G(θ, φ) evaluated at constant slant range and expressed as a function of azimuth time.) Since we include the pattern G(s), the limits can be left as infinite, although the antenna effectively imposes the limits (-S/2, S/2), where S is the integration time of the SAR. In evaluating this integral, a second order approximation based on the method of stationary phase, discussed in Section 4.2.2, leads to the result of Jin and Wu (1984).
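The essence of the stationary phase evaluation used below can be checked numerically: the spectrum of a high bandwidth time product linear FM exp[j2π(f_Dc s + f_R s²/2)] has quadratic phase -π(f - f_Dc)²/f_R (a sketch; the parameter values here are arbitrary, not from the text):

```python
import numpy as np

fDc, fR = 100.0, -500.0        # Doppler centroid (Hz) and rate (Hz/s); arbitrary
fs, S = 4096.0, 2.0            # sample rate (Hz) and integration time (s)
N = int(fs * S)
s = np.arange(N) / fs - S / 2  # slow time, origin at the array center
g = np.exp(2j * np.pi * (fDc * s + fR * s**2 / 2))   # azimuth chirp

# FFT with the time origin rotated to index 0, so no linear phase is added
G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g)))
f = np.fft.fftshift(np.fft.fftfreq(N, 1 / fs))

band = np.abs(f - fDc) < 0.3 * np.abs(fR) * S        # interior of the chirp band
phase = np.unwrap(np.angle(G[band]))
c = np.polyfit(f[band] - fDc, phase, 2)              # quadratic fit of the phase

print(np.allclose(c[0], -np.pi / fR, rtol=1e-2))     # curvature = -pi/fR
```

The fitted curvature matches the stationary phase prediction, with time and frequency locked by s = (f - f_Dc)/f_R as in Eqn. (4.2.39) below.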


The points of stationary phase in s depend on the frequency f as a parameter of the integrand, and are given for the integral Eqn. (4.2.38) by setting to zero the derivative of the phase function. The points s̄ of stationary phase are then given by

[∂φ(s)/∂s]|_{s=s̄} = 0

or

s̄ = (f - f_Dc)/f_R    (4.2.39)

which is just the locking relationship between time and frequency familiar for waveforms with high bandwidth time product.

In the integral Eqn. (4.2.38) we do not replace slow time s in the amplitude factors of the integrand by the stationary points Eqn. (4.2.39) everywhere, but rather only in the second order (s²) term of the sinc function. This is because we want to allow for a large range walk term f_Dc s in the locus R_1(s), and therefore make no approximation there. Specifically, the linear part of the range migration at the end of the integration time, λ|f_Dc|S/4, may be larger than the quadratic part, λ|f_R|S²/16. On the other hand, if the linear range walk is small, no harm is done by the approximation s = s̄ in the quadratic term of the sinc argument, because for small range walk the stationary phase approximation becomes increasingly accurate.

With these replacements, we obtain the spectrum Eqn. (4.2.38) as

H(f, R|R_c) = B_R G(s̄) exp(-j4πR_c/λ) ∫_{-∞}^{∞} g_1(s) g_2(s) exp(-j2πfs) ds    (4.2.40)

where

g_1(s) = exp[j2π(f_Dc s + f_R s²/2)]    (4.2.41)

g_2(s) = sinc[(2πB_R/c)(R - R_c + λf_Dc s/2 + λf_R s̄²/4)]    (4.2.42)

Therefore, we need to compute the convolution of two constituent spectra G_1(f) and G_2(f) (the spectrum of the product g_1 g_2).

For the first spectrum, we have at once from Eqn. (3.2.29) that

G_1(f) = exp[-jπ(f - f_Dc)²/f_R + j(π/4)sgn(f_R)] |f_R|^{-1/2}

since the waveform Eqn. (4.2.41) has high bandwidth time product. For the second spectrum, since we have the inverse transform relation

∫_{-a/2π}^{a/2π} (π/a) exp(j2πfs) df = sinc(as)

we have for G_2(f) a rectangular band with linear phase. The spectrum Eqn. (4.2.40) is then

H(f, R|R_c) = B_R G(s̄) exp(-j4πR_c/λ) ∫_{-∞}^{∞} G_1(f - f')G_2(f') df'
            = exp(-j4πR_c/λ) G[(f - f_Dc)/f_R] exp[-jπ(f - f_Dc)²/f_R] A[R - R_1(s̄)|R_c]    (4.2.43)

where

A(R|R_c) = ∫_{-B_R/c}^{B_R/c} exp{j[2πνR - (π/f_R)(λf_Dc/2)²ν²]} dν    *(4.2.44)

with s̄ = (f - f_Dc)/f_R. The result Eqn. (4.2.43) is the central result of Jin and Wu (1984), up to a constant multiplier (λf_Dc/2)(2/|f_R|)^{1/2}.

Jin and Wu (1984) present plots of their function |A(R|R_c)| for various values of the parameter α = (λf_Dc B_R/c)²/|f_R|, shown here as Fig. 4.21. The parameter α is the bandwidth (2R/c) "time" (x) product of the chirp transform evident in A(R|R_c) of Eqn. (4.2.44). Even for rather large (many kHz) values of f_Dc, α is small (say < 10), so that, for a side-looking SAR, A(R|R_c) never has the shape of a chirp in frequency. Rather, A(R|R_c) is of the shape of a typical low bandwidth time product spectrum.

Proceeding further towards the explicit form of Eqn. (4.2.37), from Eqn. (4.2.44), letting x = cν/2 it is recognized that

A(R|R_c) = (c/2) ∫_{-B_R/2}^{B_R/2} exp{j2π[(2R/c)x - (λf_Dc x/c)²/2f_R]} dx    (4.2.45)
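The behavior plotted in Fig. 4.21 can be reproduced approximately by direct numerical quadrature of Eqn. (4.2.44) (an illustrative sketch; the normalized variables, sample counts, and the assumption f_R < 0 are choices of this sketch, not of the text):

```python
import numpy as np

def A_mag(rho, alpha):
    # |A| from Eqn. (4.2.44) in normalized units: u = (c/B_R)*nu on [-1, 1],
    # rho = (B_R/c)*R, alpha = (lambda*f_Dc*B_R/c)**2 / |f_R|, with f_R < 0.
    u = np.linspace(-1.0, 1.0, 4001)
    ph = 2 * np.pi * rho * u + (np.pi / 4) * alpha * u**2
    return abs(np.mean(np.exp(1j * ph)) * 2.0)   # Riemann approximation

# alpha -> 0 recovers the sinc-shaped compressed response (peak value 2)...
print(np.isclose(A_mag(0.0, 0.0), 2.0))
# ...while growing alpha defocuses it (lower, wider mainlobe):
print(A_mag(0.0, 10.0) < A_mag(0.0, 2.0) < A_mag(0.0, 0.0))
```

For the small α of a side-looking SAR the quadratic term barely perturbs the sinc shape, which is the point made in the text.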


where we write

H(f, ν|R_c) = F_R{H(f, R|R_c)}

and f_R and f_Dc depend on R_c. Then from Eqn. (4.2.43) and Eqn. (4.2.45),

H(f, ν|R_c) = a G[(f - f_Dc)/f_R] exp[-jπ(f - f_Dc)²/f_R]
              × exp[-j2πνR_1(s̄)] exp[-j(π/f_R)(λf_Dc/2)²ν²],  |ν| < B_R/c    (4.2.46)

where a is a constant. Inverse transformation over ν of H̃ = 1/H leads to

h̃(f, R|R_c) = (1/a){1/G[(f - f_Dc)/f_R]} exp[jπ(f - f_Dc)²/f_R]
              × ∫_{-B_R/c}^{B_R/c} exp[j(π/f_R)(λf_Dc/2)²ν²] exp{j2πν[R + R̄(f)]} dν    (4.2.47)

where R̄(f) denotes R_1(s̄).

The integral expression in Eqn. (4.2.47) is just A*[-R - R̄(f)|R_c], up to a constant, as can be seen from the defining expression for A(R|R_c), Eqn. (4.2.44). It then follows from Eqn. (4.2.37) that (up to a constant)

ζ(s, R_c) = F_f⁻¹{exp[jπ(f - f_Dc)²/f_R] G⁻¹[(f - f_Dc)/f_R] [g̃(f, R) * A*(-R - R̄(f)|R_c)]}

where the convolution is over R. If we define

B(f, R|R_c) = ∫_{-∞}^{∞} g̃(f, R′) A*(R′ - R|R_c) dR′    *(4.2.48)

then finally

ζ(s, R_c) = F_f⁻¹{k⁻¹ exp[jπ(f - f_Dc)²/f_R] G⁻¹[(f - f_Dc)/f_R] B[f, R + R̄(f)|R_c]}    *(4.2.49)

Figure 4.21 Secondary range compression function for various values of α = (λf_Dc B_R/c)²/f_R (from Jin and Wu, 1984). © IEEE.

The imaging algorithm Eqn. (4.2.49) is the final result obtained by Jin and Wu (1984). The computation of the function B(f, R|R_c) from the range compressed spectra g̃(f, R) as in Eqn. (4.2.48) is referred to as "secondary range compression", or "azimuthal range compression". The collation of values B[f, R + R̄(f)|R_c] in Eqn. (4.2.49) is also referred to as "frequency domain range migration correction".

Correlation Algorithm Operations

The expression Eqn. (4.2.49) contains the operational prescription for forming the image. The raw radar data are first compressed in range in the usual way to obtain the field g(s, R). Fourier transformation in the slow time coordinate s for every range R, ignoring range migration, yields g̃(f, R). These data are then correlated over R for each fixed frequency f (and for each R_c, in general) with the function A*(R|R_c), to form the field B(f, R|R_c). Then, for every range R_c of interest in the image ζ(s, R_c), a spectrum B[f, R + R̄(f)|R_c] is assembled. That is, for each frequency f for some particular range R_c, we read out the number B[f, R + R̄(f)|R_c], where

R̄(f) = R_1[(f - f_Dc)/f_R]    (4.2.50)

The number B[f, R + R̄(f)|R_c] is multiplied by the Doppler filter

w(f) = k⁻¹ exp[jπ(f - f_Dc)²/f_R] G⁻¹[(f - f_Dc)/f_R]


to form a single point of the composite Doppler spectrum of ζ(s, R_c). Finally, inverse Fourier transformation yields all azimuth points ζ(s, R_c) of the range line R_c.
Since range compression processing will have been digital, the ranges for which image will be computed are the values at which compressed range function samples were produced (the range bins), the interval between samples being ΔR_s = c/2f_s, where f_s is the sampling rate of the range complex video signal. The spacing in the discrete version of the Doppler frequency variable f depends on the span in slow time s over which the azimuth FFT blocks are taken. Thus the field of values B(f, R|R_c) of Eqn. (4.2.48) is on a specified grid in the (f, R) plane. For any particular discrete value of f, and some specified discrete range R_c for which the line of image is being constructed, there will not in general be a discrete range value R̄(f) of Eqn. (4.2.50) available on the grid. Therefore interpolation is necessary between neighboring nodes of B(f, R|R_c) to find the needed value. Polynomial interpolation using a few points in range at the frequency of interest suffices.
As mentioned above, f_Dc and f_R depend weakly on s_c and strongly on R_c. The procedure of the last paragraph must then be carried out in range blocks of size small enough that these parameters are sensibly constant over the block. The variations with s_c are usually slow enough to allow use of FFT blocks in slow time of reasonable length (4K or 8K, typically). In range, the changes in f_Dc, f_R are more rapid, and typically these parameters are changed every few tens of range resolution intervals, depending on the processor depth of focus. The parameters are updated, perhaps in accordance with one of the models of Appendix B, as the image production moves across the range swath.
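The operational prescription of the last several paragraphs can be put in outline (a structural sketch in NumPy, not the authors' implementation: the single-block layout, the linear interpolator, scalar f_Dc, f_R per block, the omitted amplitude weighting, and the helper name jin_wu_block are all illustrative assumptions):

```python
import numpy as np

def jin_wu_block(g, prf, dr, fDc, fR, lam, c=3e8):
    """One range block of the frequency domain algorithm: azimuth FFT,
    secondary range compression, frequency domain range migration
    correction, azimuth matched filter, inverse FFT.
    g: range compressed data, shape (n_azimuth, n_range)."""
    na, nr = g.shape
    f = np.fft.fftfreq(na, 1 / prf)                  # Doppler frequencies
    G = np.fft.fft(g, axis=0)                        # g~(f, R)

    # Secondary range compression, cf. Eqn. (4.2.52), applied in the
    # range spectral variable nu (cycles per meter):
    nu = np.fft.fftfreq(nr, dr)
    src = np.exp(1j * (np.pi / fR) * (lam * fDc / 2) ** 2 * nu ** 2)
    B = np.fft.ifft(np.fft.fft(G, axis=1) * src, axis=1)

    # Range migration relative to Rc at the locked time s = (f - fDc)/fR,
    # cf. Eqn. (4.2.35):
    sbar = (f - fDc) / fR
    dR = -(lam / 2) * (fDc * sbar + fR * sbar ** 2 / 2)
    bins = np.arange(nr)
    Z = np.empty_like(B)
    for i in range(na):                              # interpolate in R per f
        q = bins + dR[i] / dr
        Z[i] = np.interp(q, bins, B[i].real) + 1j * np.interp(q, bins, B[i].imag)

    # Azimuth compression phase, cf. the filter w(f) of Eqn. (4.2.49):
    w = np.exp(1j * np.pi * (f - fDc) ** 2 / fR)
    return np.fft.ifft(Z * w[:, None], axis=0)       # image block zeta(s, R)

# Smoke test on arbitrary data: array shape is preserved
img = jin_wu_block(np.random.randn(64, 32) + 0j, 1650.0, 6.6, 400.0, -520.0, 0.235)
print(img.shape)   # (64, 32)
```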
Combined Primary and Secondary Range Compression

Jin and Wu (1984) indicate that the parameters in A(R|R_c) need not be updated at all across a reasonable swath width in range, so that only the parameter values in the phase of the Doppler filter w(f) are critical. For such cases, the secondary range compression operation Eqn. (4.2.48) can be combined with range compression, and therefore done with no additional computations needed beyond what is needed in any case for range compression. The operation Eqn. (4.2.48) of forming B(f, R|R_c) by correlation with the range compressed data can then be realized as

B(f, R) = ∫_{-∞}^{∞} F_s{g(s, R′)} A*(R′ - R|R_c) dR′
        = F_s{∫_{-∞}^{∞} g(s, R′) A*(R′ - R|R_c) dR′}
        ≈ F_s{F_ν⁻¹{G(s, ν) A*(-ν)}}    (4.2.51)

where G(s, ν) is the transform of the range compressed data g(s, R) and A*(-ν) is the transform of A*(R), the (say) midswath value of A*(R|R_c).

Thus the secondary compression filter, with transfer function

A*(-ν) = (c/2) exp[j(π/f_R)(λf_Dc/2)²ν²],  |ν| < B_R/c    (4.2.52)

using Eqn. (4.2.46), can simply be combined in a product with the primary range compression filter. The result is an adjusted range compression filter, relative to range time t = 2R/c, with transfer function

H(f) = exp(-jπf²/K_e),  |f| < B_R/2    (4.2.53)

where the effective chirp rate K_e is

1/K_e = 1/K - (λf_Dc/c)²/f_R    *(4.2.54)

for a transmitted pulse Eqn. (4.2.1).

In order that secondary and primary range compression can be combined as in Eqn. (4.2.54), it is necessary that the secondary compression filter, Eqn. (4.2.52), evaluated say at midswath, have a phase φ(ν) which is adequately matched to the data. The phase mismatch is due to drift Δf_R and Δf_Dc of the phase parameters from their midswath values:

Δφ(ν) = (∂φ/∂f_R) Δf_R + (∂φ/∂f_Dc) Δf_Dc    (4.2.55)

where the derivatives are evaluated for midswath f_R, f_Dc.

From Fig. 4.9, for an acceptable 10% broadening of the output of a filter matched to a linear FM with bandwidth time product B_R τ_p, it is required that the proportional drift in chirp constant K be bounded as in Eqn. (4.2.56). This takes account that the primary range filter with which the secondary filter will be combined will include weighting for sidelobe control.

The restriction Eqn. (4.2.56) can be written in terms of phase deviation Δφ(f) at band edge. Using the general linear FM phase function

φ(f) = -πf²/K

at f = B_R/2 we have

Δφ(B_R/2) = (πB_R τ_p/4)(ΔK/K)

since K = B_R/τ_p. The restriction Eqn. (4.2.56) then takes the form of Eqn. (4.2.57).


Taking account that "range" frequency ν and "time" frequency f are related by ν = 2f/c, the secondary filter Eqn. (4.2.52) has a phase function

φ(f) = (π/f_R)(λf_Dc/c)² f²

so that the drift Δφ(f) follows directly from Eqn. (4.2.55). Evaluating this at band edge, f = B_R/2, leads to the restriction Eqn. (4.2.57) in the form

*(4.2.58)

Equation (4.2.58) is essentially that set forth by Jin and Wu (1984). Wong and Cumming (1989) have made a similar calculation and present examples. Equation (4.2.58) is well satisfied across the entire range swath of a Seasat-like system with moderate (~5°-10°) squint.
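Combining the primary chirp exp(-jπf²/K) with the secondary compression phase gives the effective rate 1/K_e = 1/K - (λf_Dc/c)²/f_R of Eqn. (4.2.54); its typical size can be seen numerically (a sketch; the Seasat-like values below are representative, not from the text):

```python
# Effective range chirp rate from combining primary and secondary filters.
c = 3e8
K = 0.563e12        # range chirp rate, Hz/s (0.563 MHz/us)
lam = 0.235         # wavelength, m
fDc = 1500.0        # Doppler centroid, Hz (representative)
fR = -520.0         # azimuth FM rate, Hz/s (representative)
Ke = 1.0 / (1.0 / K - (lam * fDc / c) ** 2 / fR)
print(abs(Ke - K) / K)   # fractional change of order 1e-3 for these values
```

The correction is small but, applied over the full B_R τ_p product of the range pulse, is enough to matter at the phase level.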
The Hybrid Correlation Algorithm

In the case of small range walk, the secondary range compression process reduces to the hybrid correlation algorithm of Wu et al. (1982b). As Jin and Wu (1984) show by computations (Fig. 4.21), the function A(R|R_c) of Eqn. (4.2.44) has width the order of one range resolution interval, or about one range sampling interval, so long as

|f_Dc| < (c/λB_R)|f_R|^{1/2}    *(4.2.59)

a value for Seasat of about 1500 Hz. In that case, the correlation Eqn. (4.2.48) need not be carried out, B(f, R) being essentially the range compressed data Doppler spectrum itself. Then only the interpolation operation is needed in order to assemble the composite spectra from the azimuth transformed data g̃(f, R). For proper operation in the usual form (Wu et al., 1982b), the point target response h(s, R|s_c, R_c) should have a high bandwidth time product in each range bin, and not simply over the full SAR integration time. This will be the case for range walk small enough that the secondary compression procedure can be dispensed with. In an earlier version of the hybrid correlation algorithm (Wu, 1976), interpolation was not envisioned, and simple nearest neighbor values of the spectra in each range bin were used for the numbers B[f, R + R̄(f)]. This proved not to be entirely satisfactory in general.

Squint Mode Processing

The algorithm Eqn. (4.2.49) using azimuth range (secondary) compression, developed by Jin and Wu (1984), is robust against relatively large range walk effects encountered in side-looking SARs, in which the squint angle is consciously kept as small as practicable. However, for some purposes the radar beam of a SAR may be deliberately aimed at a large squint angle, perhaps tens of degrees off broadside. In such cases, even the algorithm Eqn. (4.2.49) begins to degrade in its ability to invert the system point response function. Accordingly, a modification was developed by Chang et al. (1992) which is tolerant of the large range walk encountered with a squint mode SAR.

The problem is that, with a squinted SAR, the secondary compression function parameter f_Dc changes appreciably with slow time s over the SAR integration time S. Since slow time s and Doppler frequency f are closely locked, the function A(R|R_c) used in the secondary compression operation Eqn. (4.2.49) needs to be updated as the Doppler spectrum g̃(f, R) is processed. The result is that the secondary compression function A*(R′ - R|R_c) in Eqn. (4.2.51) depends on s, and the operation cannot be combined with the range compression filter as in Eqn. (4.2.53), even if the variation with range would be tolerable.

The procedure is then to implement (primary) range compression and secondary range compression as independent operations. This is emphasized by Chang et al. (1992). The basebanded data v_s(s, R) are Fourier transformed in range to produce spectra V_s(s, ν) which are multiplied by the range compression filter transfer function,

H(ν) = exp[-jπ(cν/2)²/K]

to produce the range compressed data spectra,

G̃(s, ν) = H(ν)V_s(s, ν)

The azimuth spectrum

G̃(f, ν) = F_s{G̃(s, ν)}

is computed. The secondary range compression filter Eqn. (4.2.52) appropriate to the frequency in question is applied to the Doppler spectra G̃(f, ν) to produce the data field B̃(f, ν), the range spectra of the data B(f, R) of Eqn. (4.2.48):

B̃(f, ν) = A*(-ν|f)G̃(f, ν)    (4.2.60)

The inverse range transform then yields the field B(f, R):

B(f, R) = F_ν⁻¹{B̃(f, ν)}

Finally these data are used in the migration correction and azimuth compression procedure of Eqn. (4.2.49).

Chang et al. (1992) present simulations to show that this modified version of the algorithm Eqn. (4.2.49) is accurate in achieving compression for a


Seasat-like system at L-band (with 40° look angle) with a squint angle of 15°-20°, whereas the algorithm Eqn. (4.2.49) itself begins to degrade at a squint angle of about 5°. Calculations are presented to show that, at a smaller look angle (20°), the algorithm Eqn. (4.2.49) is adequate at squint up to about 10°, while the modified algorithm at squint 20° is successful at a full range transform span of 40 km, and by reduction of the range transform span to 10 km can operate at squint as high as 80°. At C-band, the algorithm Eqn. (4.2.49) itself performs adequately for squint of 40° with a 40 km range transform span and 35° look angle. Matters improve still further at smaller look angles and narrower range transform span.
The algorithm of Chang et al. (1992) is therefore adequate for a broad range of SAR systems. The only restriction is that, since the range curvature terms in Eqn. (4.2.38) are only approximated by using the method of stationary phase to arrive at the spectrum Eqn. (4.2.43), the processor degrades if range curvature is excessive. The situation worsens at lower frequency and higher altitude, since the range curvature ΔR of Section 4.1.3, measured in range resolution cells δx_r, grows in proportion to λ²R_c. Finer compressed azimuth resolution δx also rapidly degrades the situation.


Virtually all the SAR processors which have been constructed for earth
remote sensing use one version or another of the algorithms we have discussed
in this chapter so far. In Chapter 10 we will discuss a third way of dealing with
range migration. This is the "polar processing" algorithm, which has been used
mainly in aircraft systems, but is not limited to that platform. Before that
discussion, however, we will complete the description of the rectangular
algorithm with a discussion of the phenomenon of speckle noise in coherent
imaging systems, and a description of some algorithms designed for determining
the azimuth filter parameters (Doppler center frequency and Doppler rate) in
the rectangular algorithm, and for resolving an ambiguity in azimuth image
placement which can arise.

REFERENCES
Barber, B. C. ( 1985). "Theory of digital imaging from orbital synthetic-aperture radar,"
Inter. J. Remote Sensing, 6(7), pp. 1009-1057.
Bennett, J. R. and I. G. Cumming (1979). "Digital SAR image formation airborne and
satellite results," 13th Inter. Symp. Remote Sensing of the Environment, Ann Arbor,
Michigan, April 23-27.

Bennett, J. R., I. G. Cumming and R. A. Deane ( 1980). "The digital processing of Seasat
synthetic aperture radar data," Record, IEEE 1980 Inter. Radar Conf, April 28-30,
Washington, DC., pp. 168-175.


Bennett, J. R., I. G. Cumming, P.R. McConnell and L. Gutteridge (1981). "Features of


a generalized digital synthetic aperture radar processor," 15th Inter. Symp. on Remote
Sensing of the Environment, Ann Arbor, Michigan, May.

Brookner, E. ( 1977). "Pulse-distortion and Faraday-rotation ionospheric limitations,"


Chapter 14 in E. Brookner (ed.), Radar Technology, Artech House, Dedham, MA.
Chang, C. Y., M. Jin, and J.C. Curlander (1992). "Squint mode processing algorithms
and system design considerations for spaceborne synthetic aperture radar," IEEE
Trans. Geosci. and Remote Sensing (Submitted).
Cook, C. E. and M. Bernfeld ( 1967). Radar Signals, Academic Press, New York.
Herland, E. A. (1981). "Seasat SAR processing at the Norwegian Defence Research Establishment," Proc. of an EARSeL/ESA Symp., Voss, Norway, May 19-20, pp. 247-253.
Herland, E.-A. ( 1982). "Application of Satellite-Based Sidelooking Radar in Maritime
Surveillance," Report 82/ 1001, Norwegian Defence Research Establ., Kjeller, Norway,
September (AD A122628).
Jin, M. Y. and C. Wu (1984). "A SAR correlation algorithm which accommodates
large-range migration," IEEE Trans. Geosci. and Remote Sensing, GE-22(6),
pp. 592-597.
McDonough, R. N., B. E. Raff and J. L. Kerr ( 1985). "Image formation from space borne
synthetic aperture radar signals," Johns Hopkins APL Technical Digest, 6(4),
pp. 300-312.
Quegan, S. and J. Lamont (1986). "Ionospheric and tropospheric effects on synthetic
aperture radar performance," Inter. J. Remote Sensing, 7(4), pp. 525-539.
Wong, F. H. and I. G. Cumming (1989). "Error sensitivities of a secondary range
compression algorithm for processing squinted satellite SAR data," IGARSS '89,
Vancouver, BC, pp. 2584-2587.
Wu, C. (1976). "A digital system to produce imagery from SAR data," Paper 76-968,
AIAA Systems Design Driven by Sensors, Pasadena, California, October 18-20.
Wu, C., K. Y. Liu and M. Jin (1982b). "Modeling and a correlation algorithm for
spaceborne SAR signals," IEEE Trans. Aerospace and Electronic Sys., AES-18(5),
pp. 563-574.

5
ANCILLARY PROCESSES
IN IMAGE FORMATION

At the heart of any SAR imaging algorithm is the set of correlation operations
by which the point target response (distributed spatially due to the nonzero
pulsewidth and antenna beam width) is compressed to an approximate point.
One family of such procedures, the rectangular algorithm, has been described
in Chapter 4. Another, the polar processing algorithm, will be dealt with in
Chapter 10. In both cases, some operations in addition to correlation are usually
needed. In this chapter we describe five techniques, with particular reference to
the rectangular algorithm, although some of the discussion is more general.
First, we briefly note the precise arrangement of computations for digital
implementation of range compression using fast convolution. We then discuss
the phenomenon of speckle noise, and describe the use of multilook imaging
for its alleviation. We then give a detailed description of some methods by
which the Doppler center frequency f_Dc and azimuth frequency rate parameter
f_R, necessary for azimuth compression procedures, can be determined from the
radar data itself. Finally, we describe some ways of resolving the basic image
position ambiguity which arises in pulse radar, which time samples the Doppler
signal underlying SAR operation.
5.1 DIGITAL RANGE PROCESSING

With rare exceptions, all SAR processors carry out range compression of the
raw data for a large number of radar pulses before beginning azimuth
compression to compute a block of image. Even though some of the details
of range compression depend on the way that azimuth compression is to be

carried out, the main elements are sufficiently alike to make a separate description efficient. The methods of Appendix A are the basis of the processing described here. Section 9.2.5 considers the computational complexity of the procedures.
The continuous time real radar return signal for some particular pulse is of some bandwidth B_R centered on the carrier frequency f_c. By linear frequency shifting operations, this ultimately appears at the input of the A/D converter as a real signal corresponding to the point target response Eqn. (4.2.4) of bandwidth B_R centered on the offset video frequency f_1, with f_1 > B_R/2 necessary to minimize aliasing (Fig. 4.16a), but often f_1 ≈ B_R/2. (In the Seasat case, for example, the range pulsewidth τ_p = 33.8 μs and range chirp constant K = 0.563 MHz/μs resulted in a bandwidth B_R = 19.0 MHz, and the offset video frequency chosen was 11.38 MHz.) For proper digital processing, the continuous time signal v_s(t) is sampled at some rate greater than the Nyquist frequency, which is twice the frequency of the highest frequency component in the signal being sampled, f_1 + B_R/2 (Appendix A). With f_1 ≈ B_R/2, a usual and convenient choice is f_sr = 4f_1 (45.53 MHz for Seasat). This results in some implementational simplification, since then

exp[j2πf_1(k/f_sr)] = exp(jkπ/2) = {1, j, -1, -j}
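The simplification can be seen directly: with f_sr = 4f_1 the basebanding carrier samples cycle through {1, j, -1, -j}, so the complex mix needs only sign swaps and real/imaginary interchanges rather than multiplications (a quick check):

```python
import numpy as np

f1 = 11.38e6             # offset video frequency, Hz (Seasat value)
fsr = 4 * f1             # sampling rate chosen as four times f1
k = np.arange(8)
carrier = np.exp(2j * np.pi * f1 * k / fsr)   # = exp(j*k*pi/2)
print(np.allclose(carrier, [1, 1j, -1, -1j] * 2))
```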


The corresponding real "sampled signal" v_δ(t) as in Eqn. (A.2.4) has the spectrum in Fig. A.2, which is just the spectrum of the continuous time signal v_s(t) replicated with period f_sr.
After sampling, the range signal consists of some number of sample values taken at uniformly spaced times n/f_sr across the swath in slant range. The number of bits per sample is generally from two to eight, with five having been the choice for Seasat. The severe data rate considerations of wide swath SAR provide a motivation for using as few bits per sample as possible, however, and in fact one-bit SAR systems are a possibility under discussion (Barber, 1985b). For Seasat, the nominal 35 km swath resulted in range sampling over an interval of 300 μs for each pulse, which at f_sr = 45.53 MHz yielded P = 13680 samples spaced at range intervals (range bins) of

ΔR_s = c/2f_sr = 3.3 m

(Note that a range bin is not the same as a range resolution cell.)
The range samples are now filtered by the digital range compression filter. If the filter impulse response is h(t), this is sampled at the same rate f_sr as the range data. With a radar pulse length τ_p there are required Q = τ_p f_sr samples (1536 for Seasat). With a linear chirp, the effective transmitted pulse is

s(t) = cos[2π(f_1 t + Kt²/2)],  |t| < τ_p/2    (5.1.1)

212

ANCILLARY PROCESSES IN IMAGE FORMATION

5.1

relative to the offset video frequency / 1. The filter function is the matched filter:
h(t) = s(-t) = cos[2n(f1t - Kt 2 /2)],

(5.1.2)

An FFT size N which is the next power of two greater than or equal to P + Q − 1
is chosen (2^14 = 16384 for Seasat), and the data and filter sequences are filled
with zeros to that length (zero padding). Alternatively, a smaller value can be used
with the overlap-add or overlap-save procedures described in Appendix A.
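The sizing rule, and the equivalence of zero-padded circular convolution with linear convolution, can be checked with illustrative lengths (not the Seasat values):

```python
import numpy as np

rng = np.random.default_rng(0)
P, Q = 1000, 140                       # data and filter lengths (illustrative)
N = 1 << (P + Q - 2).bit_length()      # next power of two >= P + Q - 1
data = rng.standard_normal(P)
filt = rng.standard_normal(Q)

# Zero pad both sequences to length N and convolve via the FFT.
y_fft = np.fft.ifft(np.fft.fft(data, N) * np.fft.fft(filt, N)).real[:P + Q - 1]
y_lin = np.convolve(data, filt)        # direct linear convolution, length P+Q-1

assert np.allclose(y_fft, y_lin)       # circular = linear once N >= P + Q - 1
```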
In Fig. 5.1 are sketched the (periodic) time and frequency waveforms involved
in digital compression of the offset video range pulse Eqn. (5.1.1) using the filter
Eqn. (5.1.2). In every case, the region of computation is the first period of the
function in positive time or frequency, shown as solid lines in Fig. 5.1.


The range compression filter coefficients of Fig. 5.1b are computed as the
N-point FFT of the sequence computed from Eqn. (5.1.2):

h_n = h(n/f_s),          n = 0, Q/2 − 1
h_n = 0,                 n = Q/2, N − Q/2 − 1
h_n = h[(n − N)/f_s],    n = N − Q/2, N − 1    (5.1.3)

taking account that the sequence h_n is periodic with period N and that we want
always to enumerate sequences with positive indices. Since this sequence is real,
the even-odd separation procedure of Appendix A can be used conveniently.
Further, since we will carry out complex basebanding on the result, only the
coefficients H_k for k = 0, N/2 − 1 (Fig. 5.1b) need be computed (Fig. 4.16).
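The wrapped-index construction of Eqn. (5.1.3) can be sketched in normalized units (f_s = 1, with an illustrative chirp, not the Seasat filter): the N-point sequence h_n is just the impulse response placed circularly about n = 0, so it equals the centered, zero-padded response rotated by Q/2 samples.

```python
import numpy as np

fs = 1.0                      # normalized sampling rate
f1, Kc = 0.25, 0.4 / 128      # offset frequency f_s/4 and chirp rate (illustrative)
Q, N = 128, 2048              # filter length and FFT size, N >= P + Q - 1

def h(t):
    # Matched filter of Eqn. (5.1.2): time reversed transmitted chirp.
    return np.cos(2 * np.pi * (f1 * t - Kc * t**2 / 2))

# Piecewise sequence of Eqn. (5.1.3): positive times first, wrapped negative times last.
n = np.arange(N)
hn = np.zeros(N)
hn[: Q // 2] = h(n[: Q // 2] / fs)
hn[N - Q // 2 :] = h((n[N - Q // 2 :] - N) / fs)

# Equivalent view: the centered impulse response, zero padded, circularly rotated.
centered = np.zeros(N)
centered[:Q] = h((np.arange(Q) - Q // 2) / fs)
assert np.allclose(hn, np.roll(centered, -(Q // 2)))

Hk = np.fft.fft(hn)[: N // 2]   # only k = 0, ..., N/2 - 1 are kept (real input)
```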
By themselves the filter coefficients H_k of Eqn. (5.1.3) suffer from the
problem of range sidelobes, discussed in Section 3.2.3. Before using them they
must be modified by some appropriate weight sequence, such as a sequence
W(f) corresponding to the Taylor weighting (Farnett et al., 1970), defined over
the band |f − f_1| ≤ f_s/4. The weighted filter coefficients are correspondingly

H̃_k = H_k W(k f_s/N),    k = 0, N/2 − 1    (5.1.4)

Figure 5.1 Steps in range compression. Solid lines on frequency spectra are base
Fourier domain. Dashed lines are periodic repetitions of spectra of digital signals.

For the case f_1 = f_s/4, we have simply

(5.1.5)

As discussed in Section 4.2.3, the coefficients H̃_k of Eqn. (5.1.4) are conveniently
modified yet again to provide interpolation needed in some azimuth processing
procedures, or (Section 4.2.4) for use in some forms of frequency domain azimuth
compression.
The zero-padded range data samples f_n, n = 0, N − 1, (Fig. 5.1a) are FFTed
to produce coefficients F_k, k = 0, N/2 − 1, using a procedure appropriate to
real data, and taking account that we will complex baseband so that the
remaining coefficients are not needed. The range compressed data coefficients
at offset video (Fig. 5.1c) are then H̃_k F_k, k = 0, N/2 − 1, and these are complex
basebanded by computing (Fig. 5.1e)

G_k = H̃_{k+N/4} F_{k+N/4},    k = 0, N/4 − 1
G_k = H̃_{k−N/4} F_{k−N/4},    k = N/4, N/2 − 1    (5.1.6)
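The whole chain of Eqns. (5.1.2), (5.1.3), and (5.1.6) can be exercised on synthetic data (normalized units with f_s = 1 and f_1 = f_s/4; weighting omitted, and all lengths illustrative rather than Seasat's):

```python
import numpy as np

fs, f1 = 1.0, 0.25            # normalized sampling rate and offset video frequency
Q, P = 128, 1024              # pulse samples and echo samples (illustrative)
Kc = 0.4 / Q                  # chirp rate giving bandwidth ~0.4 (< fs/2)
N = 2048                      # FFT size, next power of two >= P + Q - 1
n0 = 300                      # sample delay of the point target

def chirp(t, sign):
    # Offset video chirp: sign=+1 gives s(t) of (5.1.1), sign=-1 gives h(t) of (5.1.2).
    return np.where(np.abs(t) < Q // 2,
                    np.cos(2 * np.pi * (f1 * t + sign * Kc * t**2 / 2)), 0.0)

# Received real echo: the transmitted pulse delayed to the target position.
n = np.arange(N)
f_data = np.zeros(N)
f_data[:P] = chirp(np.arange(P) - n0, +1.0)

# Filter coefficients per Eqn. (5.1.3): matched filter with wrapped negative times.
hn = chirp(np.where(n < N // 2, n, n - N).astype(float), -1.0)

F = np.fft.fft(f_data)
Y = np.fft.fft(hn) * F        # compressed data coefficients at offset video

# Complex basebanding per Eqn. (5.1.6): the band about f1 = fs/4 is moved to
# zero frequency on an N/2-point grid.
G = np.empty(N // 2, dtype=complex)
G[: N // 4] = Y[N // 4 : N // 2]
G[N // 4 :] = Y[: N // 4]

g = np.fft.ifft(G)            # complex compressed samples at rate fs/2
peak = int(np.argmax(np.abs(g)))
print(peak)                   # target appears near n0/2 in the subsampled output
```

The compressed peak lands near sample n0/2 because the complex output represents the swath with only N/2 numbers, exactly the point made in the text about the doubled complex range bin.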

Finally, computing the (N/2)-point complex inverse FFT of the sequence G_k
of Eqn. (5.1.6) yields the complex samples g_k, k = 0, N/2 − 1, (Fig. 5.1f) of the
subsampled basebanded complex compressed range samples corresponding to
Eqn. (4.2.11). It is the phase of those numbers which carries the Doppler
information needed for azimuth compression processing. Since now only N/2
numbers represent the full range swath, the range sampling interval in this
complex domain, the size of a complex range bin, is c/f_s (6.6 m for Seasat),
rather than the value c/2f_s of the real time samples of the range video function.

Alternatively, of course, the real offset video data samples f_n can be FFTed
to produce Fourier coefficients F_k, k = 0, N − 1. The coefficients F_k, k = 0,
N/2 − 1, are then rearranged as in Fig. 5.1c-f. The resulting complex basebanded
signal coefficients are filtered using coefficients H_k obtained by transforming
N/2 time samples, taken at intervals 1/f_s, of

h(t) = exp(−jπK t²),    (5.1.7)

rearranged similarly to Eqn. (5.1.3).

5.2 SPECKLE AND MULTILOOK PROCESSING

The resolution element of any SAR is large with respect to a wavelength of the
radar system. As a result, it is generally unfruitful to attempt to define a
deterministic backscatter coefficient for each terrain element to be imaged.
Rather, as discussed in Section 2.3, the sought image is the local mean of the
radar cross section per unit area of each patch of the terrain in view. This is
defined in terms of the random specific cross section

σ⁰(R) = σ(R)/dA    (5.2.1)

The random nature of σ⁰(R) is due to underlying variations on the order of a
wavelength in scale which can not be resolved by the SAR system.

As discussed in Section 3.2.1, the mean of the coefficient Eqn. (5.2.1), the
(real) image I(R), is related to the sample functions ζ(R) of the complex image by

I(R) = E[|ζ(R)|²]    (5.2.2)

where ζ(R) is the terrain reflectivity function defined in Eqn. (3.2.3). Its
approximation in any particular realization,

ζ̂(R) = ∫ h⁻¹(R|R′) v_s(R′) dR′    (5.2.3)

is the complex image derived from the radar voltage phasor signals v_s(R) by
processing with the inverse of the radar system function (Section 4.1).

Any particular realization ζ̂(R) of Eqn. (5.2.3) will yield an image |ζ̂(R)|²
which is different from the mean Eqn. (5.2.2). The difference is speckle noise.
In this section we want to investigate the statistics of the individual real images
|ζ̂(R)|². Also, we will discuss some ways to generate estimators of the desired
image Eqn. (5.2.2) from available samples ζ̂(R).

Image Statistics

Accordingly, we view the terrain reflectivity ζ(R) as a (complex) random variable,
whose real and imaginary parts have some probability distributions. The radar
data number v_s(x, R) is then a random variable also. Considering the very large
number of image cells in the radar field of view, we then invoke the central
limit theorem to assume that the probability densities of the real and imaginary
parts of v_s(x, R) are Gaussian. The number, Eqn. (5.2.3), the computed complex
image value, being a linear combination of Gaussian random variables, is also
a complex random variable with Gaussian real and imaginary parts. Its mean is

E[ζ̂(x, R)] = ∫∫∫∫ h⁻¹(x, R|x′, R′) E[ζ(x₀, R₀)] h(x′, R′|x₀, R₀) dx₀ dR₀ dx′ dR′    (5.2.4)

If we now assume that the expected value of the terrain reflectivity function ζ
is independent of aspect angle over the range of angles for which the terrain point
is in the radar beam, using Eqn. (4.1.2) the delta function is recovered in
Eqn. (5.2.4) to yield

E[ζ̂(x, R)] = E[ζ(x, R)]    (5.2.5)

Thus, the computed complex image function ζ̂(x, R) is a random variable whose
mean is the mean of the terrain reflectivity function.

We are mainly interested in the statistics of the random variable

Z(x, R) = |ζ̂(x, R)|

the magnitude of the computed complex image, whose mean square is "the
image". If we assume that the real and imaginary parts of the complex Gaussian
random variable ζ̂(x, R) are independent and zero mean (implying incidentally,
from Eqn. (5.2.5), that the complex terrain function ζ has zero mean) with equal
variances σ², then Z(x, R) has the Rayleigh density. This follows from the
computation (Whalen, 1971, Chapter 4):

p(Z, φ) = det[∂(a, b)/∂(Z, φ)] p(a, b)    (5.2.6)

where we write

ζ̂ = a + jb = Z cos(φ) + jZ sin(φ)

so that the Jacobian is |∂(a, b)/∂(Z, φ)| = Z. Since, by our assumptions,

p(a, b) = p(a) p(b) = (1/2πσ²) exp[−(a² + b²)/2σ²]    (5.2.7)

Eqn. (5.2.6) then yields

p(Z, φ) = (Z/2πσ²) exp(−Z²/2σ²)

and hence

p(Z) = ∫_0^{2π} p(Z, φ) dφ = (Z/σ²) exp(−Z²/2σ²)    (5.2.8)

the Rayleigh density.

The corresponding image intensity sample,

I(x, R) = Z² = |ζ̂(x, R)|²

from Eqn. (5.2.8) then has the exponential density:

p(I) = (dZ/dI) p(Z) = (1/2σ²) exp(−I/2σ²)    (5.2.9)

The mean and standard deviation of the intensity are then

I_0(x, R) = E(I) = 2σ²
σ_I(x, R) = I_0 = 2σ²

where σ² may depend on (x, R). From Eqn. (5.2.9), the exponential density of
the samples I(x, R) is equivalently:

p(I) = (1/I_0) exp(−I/I_0)    (5.2.10)

Multilook Images

Although there are many assumptions in the above derivation, analysis of typical
SAR images supports the final result that the image resolution cells have
intensities I which follow the exponential distribution:

Prob{I ≥ Ī} = ∫_Ī^∞ p(I) dI = exp(−Ī/I_0)    (5.2.11)

The image then has a randomly fluctuating intensity I(R) at each pixel, which
leads to the grainy appearance of speckle. For purposes of visual interpretation,
it is generally desirable to reduce those fluctuations, and to cluster the observed
intensities I(R) closer to the mean intensities I_0(R), since it is the mean intensities
which are usually the required image information. This is usually done by
computing some number of nominally independent images (looks) of the same
scene, and averaging them, pixel by pixel. Alternatively (Li et al., 1983), a single
high resolution image can be locally smoothed.

If we let I_L(R) be the average of L independent realizations (looks) I_i(R) of
the intensity I(R) for a pixel at R:

I_L = (1/L) Σ_{i=1}^{L} I_i    (5.2.12)

the mean is unchanged:

E(I_L) = I_0

while the variance is reduced by the factor L:

σ²_{I_L} = (1/L)² Σ_{i=1}^{L} σ²_{I_i} = σ²_I/L    (5.2.13)

(This reduction will be less if the look intensities are unequal or the looks
are not independent.) An image such as Eqn. (5.2.12) is called an L-look
image.
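The distributional claims above are easy to check numerically (a sketch; the complex Gaussian scene model and the sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 1.0                      # common std of the real and imaginary parts
npix, L = 100_000, 4             # pixels and number of looks (illustrative)

# L independent complex Gaussian looks per pixel, as in the text's model.
z = sigma * (rng.standard_normal((L, npix)) + 1j * rng.standard_normal((L, npix)))
I = np.abs(z) ** 2               # single look intensities: exponential density

I0 = 2 * sigma**2
assert abs(I.mean() / I0 - 1) < 0.02          # mean I_0 = 2 sigma^2
assert abs(I.std() / I0 - 1) < 0.02           # std equals the mean (SNR_1 = 1)

IL = I.mean(axis=0)              # L-look average, Eqn. (5.2.12)
assert abs(IL.mean() / I0 - 1) < 0.02                 # mean unchanged
assert abs(IL.mean() / IL.std() - np.sqrt(L)) < 0.1   # SNR improves to sqrt(L)
```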
In SAR, independent looks I_i(R) can be generated from data taken at different
aspect angles as the vehicle moves past the terrain (Fig. 5.2, drawn for the
common case of four looks). Thus the first look is generated from the forward
quarter of the antenna along-track beam, the next from the next quarter beam
back, and so on. Since signals from all parts of the beam reach the radar receiver
superimposed, however, such segregation of data can not be done in the time
or space domains. However, the high azimuth bandwidth time product of a
useful SAR locks together time and frequency, which allows the look data to
be sorted in the Doppler frequency domain. That is, data with high Doppler
frequency necessarily originated from terrain points in the forward edge of the
azimuth beamwidth, while a point in the rear quarter of the beam
produces a low Doppler frequency and appears in the lowest quarter of the
Doppler band.
To produce such independent looks in the Doppler domain, the Doppler
spectrum of the range compressed data at each range bin is analyzed, after
range migration correction. That is, the spectrum is analyzed just before the

Figure 5.2 Two subaperture looks at a target as the radar moves past.

Figure 5.3 Doppler spectrum and look filters. (Antenna pattern weighting not shown.)

azimuth compression filter is applied. The spectrum is then divided into (say)
four subbands by filters before compression, suitably tapered to avoid sidelobes
in azimuth time (Section 3.2.3), and overlapped to some extent to avoid loss
of too much signal energy, but not so much as to lose independence of the
looks (Fig. 5.3). Since the Doppler bandwidth B_D is essentially independent of
range R_c at beam center, the look filters can be taken with constant bandwidths
B_L (nominally B_D/L for L looks) and with center frequencies evenly spaced
across the band B_D. Since f_DC changes with range R_c, the look filter complex
of Fig. 5.3 slides in frequency as a unit as the range bin R_c in question changes.
Since the resolution in each look I_i(R) is inversely proportional to the
bandwidth B_L of Doppler data compressed in that look, processing only 1/L
of the full Doppler band B_D degrades the resolution in each look by a factor L as
compared to the resolution available if all data were compressed to form a
single image (single-look processing). Thus, for example, a single look Seasat
image uses the full Doppler band of 1300 Hz and attains a resolution ideally
δx = V_s/B_D = 6600/1300 = 5.1 m, while a four look image has resolution in
each look 4 × 5.1 = 20.4 m, with the resolution in the superposition of the four
looks being the same as each look separately. (The exact resolution attained
in a multilook image depends on the details of implementation of the look
filters, since the precise answer depends on the bandwidth taken for each look
filter.)
Multilook Processing

If the capability to produce single look images is desired in the processor, the
full Doppler data band B_D must be produced using an FFT of adequate length
in the azimuth time variable. Since the full synthetic aperture time S must be
used for the filter function, something markedly longer must be used for the
data block in order to achieve fast convolution efficiency (Appendix A). Then
there is no particular reason not to implement multilook filters by simply
combining the amplitude characteristic of Fig. 5.3 for each look with the single
look full band compression filter to produce the L multilook filters to apply to
the azimuth Doppler data. Since the compressed data in Doppler frequency
has only nominally 1/L the bandwidth of single-look data, a sampling rate 1/L
of that needed for single look images suffices. This rate reduction is easily brought
about by doing the inverse FFT of the compressed data with an (N/L)-point
IFFT, where the original single look spectrum was taken with an N-point
transform. If something other than L-look imagery, with L a power of 2, is
desired, some zero padding is useful to bring N/L to an integral power of 2.
With this procedure, slow time registration of the images of the individual looks
is automatic, since the compression filter for each look retains exactly the proper
phase function to place the image pixels at the proper azimuth positions.
Alternatively, some computational and memory savings can be realized if
there is no intention to produce single look images with the processor. In that
case, the largest set of Doppler frequency data ever needed at any one time is
that corresponding to the band of one of the multiple looks, of bandwidth
B_L = B_D/L for an L-look image. The memory savings in such a case are obvious.
The computational savings in a frequency domain processor follow because
doing L FFTs of length N/L requires computation of the order L(N/L) log(N/L),
which is less than that for one FFT of length N, which requires computation
of order N log(N). In time domain processing, the savings are in the ratio of
N² to L(N/L)², since both the data length and the compression filter length
decrease in the ratio N/L for each look computation. In either case of time or


frequency domain processing, with reduced data span, the look filtering should
be done in the time domain to avoid taking a full band FFT of the Doppler
data. A conventional FIR filter is applied to the PRF-sampled azimuth time
data in each slant range bin to produce the data for each look. Since the band
of each look is only 1/L the band of the Doppler data, decimation is used as
well as filtering to reduce the data rate to the minimum needed for the individual
look bands.
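A time domain look filter of this kind can be sketched as a complex bandpass FIR followed by decimation (an illustrative design; the tap count, look center frequency, and tolerances are assumptions, not any particular processor's values):

```python
import numpy as np

prf, L = 1300.0, 4                   # azimuth sampling rate (Hz) and number of looks
B_look = prf / L                     # nominal look bandwidth B_D / L
f_dci = 200.0                        # assumed center frequency of this look's band, Hz

# Complex bandpass FIR: windowed sinc lowpass of cutoff B_look/2, modulated to f_dci.
ntap = 129
m = np.arange(ntap) - ntap // 2
lp = (B_look / prf) * np.sinc(m * B_look / prf) * np.hamming(ntap)
bp = lp * np.exp(2j * np.pi * f_dci * m / prf)

def look(data):
    # Filter the PRF sampled azimuth line, then decimate by L.
    return np.convolve(data, bp, mode="same")[::L]

t = np.arange(4096) / prf
inband = look(np.exp(2j * np.pi * f_dci * t))              # tone at the look center
outband = look(np.exp(2j * np.pi * (f_dci + 400.0) * t))   # tone outside the band

gain_in = np.abs(inband[200:-200]).mean()
gain_out = np.abs(outband[200:-200]).mean()
assert gain_in > 0.9 and gain_out < 0.1
```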
If the segmentation procedure of the last paragraph is used, compensation
must be made according to which subband the image came from before
superposing them. The images for each look must be shifted along track
explicitly, if the same compression filter is used for each look. The necessary
correction can be done in the Doppler domain by adjusting the filtered output
after compression by the delay factor exp[−jπ f_DC (f_DCi − f_DC)/f_R] to account
for the different Doppler center frequencies f_DCi in each look. Alternatively,
these factors can simply be included in the look filter to result in a different
filter to be used for each look.
Thermal Noise Effects

The extent to which multilook processing is effective in reducing image noise
depends on the level of thermal noise in the system. Since the image is the mean
E(I) of the intensity at each resolution cell, in the absence of system noise effects
we can define the single-look image SNR as

SNR_1 = E(I)/σ_I = 1

since the mean I_0 of the exponential density distribution Eqn. (5.2.10) equals
its standard deviation. The SNR of an L-look image, assuming independent
looks, from Eqn. (5.2.13) is

SNR_L = I_0/(I_0/√L) = √L

It might be noted that a multilook image has intensity which is the sum of
common-mean exponentially distributed variables, and thereby has the gamma
(or χ²) density.

Radar system (including thermal) noise adds an independent Gaussian
component to the complex image pixels. The complex image is then

î = ζ̂ + n̂

where ζ̂ is a realization of ζ and n̂ is an independent complex Gaussian noise
output. The mean image is then

E[|î|²] = I_0 + P_n

so that system noise adds a bias to the desired image I_0. Since the quantity |î|²
also has the exponential density, its mean is also the image standard deviation,
so that the biased noisy single-look image still has unity SNR.

The system noise bias in the image estimate |î|² can be removed if an estimator
P̂_n of the noise power is available. That can be obtained from receiver output
voltage during a pre-imaging period with no input, or from a dark part of the
image with little terrain backscatter evident. The image is then computed as

Î = |î|² − P̂_n

This has mean

E(Î) = I_0

where we assume P̂_n to be an unbiased estimator of P_n. The variance of the
computed image is

Var(Î) = (I_0 + P_n)² + Var(P̂_n)

using the fact that |î|² is exponentially distributed, with variance equal the
square of its mean.

In the case that P̂_n = |n̂|², a single sample of system noise, Var(P̂_n) =
E(P̂_n)² = P_n², and

SNR_L = √L/[(1 + 1/SNR_1)² + (1/SNR_1)²]^{1/2}    (5.2.14)

where SNR_1 = I_0/P_n is the ratio of mean image output without system noise
to mean system noise power. This is the expression usually presented (Ulaby
et al., 1982, p. 492). Some practical difficulties of the procedure are discussed
in Section 7.6. From Eqn. (5.2.14) it is clear that the nominal √L SNR
improvement with multilook processing degrades to something less than √L
in the presence of finite SNR_1.
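The degradation is easy to evaluate numerically (the function below simply evaluates the SNR expression quoted above; the SNR_1 values are arbitrary):

```python
import math

def snr_L(L, snr1):
    # SNR of an L-look image with noise-power subtraction, per the quoted expression.
    return math.sqrt(L) / math.sqrt((1 + 1 / snr1) ** 2 + (1 / snr1) ** 2)

# With very large SNR_1 the nominal sqrt(L) improvement is recovered.
assert abs(snr_L(4, 1e9) - 2.0) < 1e-6

# At SNR_1 = 1 the same four looks deliver considerably less.
assert abs(snr_L(4, 1.0) - 2.0 / math.sqrt(5.0)) < 1e-12
```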

5.3 CLUTTERLOCK AND AUTOFOCUS

In SAR image formation, using a high resolution (focussed) system of the type
discussed in Chapter 4, the compression operation in azimuth (slow) time is
the crucial ingredient which makes the system function. The azimuth compression
filter is the filter appropriate to the range compressed point target response
Eqn. (4.1.24):

g(s|x_c, R_c) = exp[−j4πR(s)/λ]    (5.3.1)

The filter therefore involves the parameters of the range migration locus R(s),
the slant range to a point target as a function of slow time. The locus R(s) is
usefully expanded in a Taylor series about the slow time s_c at which the target
is in the center of the radar beam (Fig. 4.1). Although at least one processor
(Barber, 1985a) uses terms through the third order in slow time, it usually
suffices to retain only the second order term:
R(s) ≈ R_c − (λ/2)[f_DC(s − s_c) + (f_R/2)(s − s_c)²]    (5.3.2)

where the Doppler center frequency f_DC and azimuth chirp constant f_R are
defined as:

f_DC = −2Ṙ(s_c)/λ,    f_R = −2R̈(s_c)/λ    (5.3.3)
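For a straight line geometry these parameters can be evaluated numerically (a sketch with assumed Seasat-like numbers; the sign convention f_DC = −2Ṙ/λ is taken as an assumption here):

```python
import numpy as np

lam = 0.235                    # radar wavelength, m (L band, Seasat-like, assumed)
V = 7500.0                     # platform speed, m/s (assumed)
R0 = 850e3                     # closest approach range, m (assumed)

def R(s):
    # Straight line range history to a broadside point target, closest approach at s = 0.
    return np.sqrt(R0**2 + V**2 * s**2)

# Numerical first and second derivatives of R(s) at beam center s_c.
sc, ds = 0.0, 1e-3
Rdot = (R(sc + ds) - R(sc - ds)) / (2 * ds)
Rddot = (R(sc + ds) - 2 * R(sc) + R(sc - ds)) / ds**2

f_DC = -2 * Rdot / lam         # Doppler center frequency
f_R = -2 * Rddot / lam         # azimuth chirp constant (Doppler rate)

assert abs(f_DC) < 1e-3                           # broadside: zero Doppler centroid
assert abs(f_R - (-2 * V**2 / (lam * R0))) < 1.0  # analytic rate -2 V^2 / (lambda R0)
```

With these numbers f_R comes out near −563 Hz/s, a few-hundred-Hz/s rate typical of the spaceborne cases discussed in the text.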

In Appendix B we discuss determination of the parameters f_DC and f_R from
satellite orbit and attitude data. Such procedures are inherently quite accurate,
up to the level of accuracy of the attitude measurement instrumentation and
the accuracy of the satellite orbital parameters computed from tracking data.
It can be, however, that instrumentation difficulties limit the former, while the
time lag in smoothing and refining tracking data may make it inconvenient to
use the latter. For these reasons, most image formation processors include
procedures for automatic determination of the parameters f_DC and f_R to be used
for any particular scene, using only information derived from the radar data
to be processed. These procedures are called respectively clutterlock and
autofocus algorithms, and we will discuss some of them in this section.
A few remarks on terminology might be interesting. The term "focus" is of
course borrowed from optics, in analogy to the manipulation of light wavefront
curvature carried out by a lens. An autofocus procedure is thereby an algorithm
for automatic determination of the wavefront curvature constant f_R of the
azimuth filter. Clutterlock is borrowed from conventional aircraft pulse Doppler
radar (Mooney and Skillman, 1970). In the case of an aircraft radar at least
partially viewing terrain, targets of interest are obscured by the radar returns
from terrain reflectors at the same range, the so-called clutter on the radar
display. If the target of interest is moving with respect to the terrain, it will
have returns which appear at the transmitting aircraft with a different Doppler
frequency from that at which the clutter features appear, the latter frequency
being due solely to motion of the radar platform. There is thus the possibility
of carrying out Doppler filtering on the radar returns to block the band of the
clutter (terrain) returns, while passing any other Doppler frequencies (due to
targets moving with respect to the terrain). The extent to which a moving target
can thereby be distinguished from its stationary background is the subclutter
visibility capability of the radar. If this technique is to work, the Doppler clutter
rejection filter must always center more or less on the band of the terrain returns,
which changes as the motion of the platform aircraft changes. The filter rejection
band is locked to the clutter band by feedback circuits (or algorithms) called,
reasonably enough, clutterlock circuits. Hence an algorithm which automatically
determines the center frequency f_DC of the Doppler band of SAR azimuth time
returns is called a clutterlock algorithm.

5.3.1 Clutterlock Procedures

All SAR clutterlock algorithms for automatic determination of the center
frequency f_DC of the Doppler spectrum in one way or another use the fact that
the high azimuth bandwidth time product of a SAR locks Doppler frequency
to position along track. Thus, returns contributing to any particular Doppler
frequency originate from targets in a specific part of the radar beam. As a
consequence, the power of the Doppler spectrum around the Doppler center
frequency f_DC on average should follow the shape of the two-way azimuth power
pattern G²(s − s_c) of the antenna. (Here G(s) is the one-way power pattern
G(θ, φ) evaluated at constant ground range and expressed in terms of azimuth
time s = x/V_s.)
One clutterlock scheme (Berland, 1981; McDonough et al., 1985) therefore
determines f_DC by correlation of the average Doppler spectrum of the basebanded
data, before range compression, with the known azimuth power pattern of the
antenna. Another implementation (Bennett et al., 1980) uses the spectrum of
range compressed data before azimuth compression, and determines f_DC as the
frequency of the spectral peak. Other workers have used range and azimuth
compressed data. A single-look complex image has been used (Li et al., 1985),
with f_DC taken as the frequency balancing the spectral power. Multiple
single-look real images have also been used (Curlander et al., 1982), with the
looks taken equally above and below the Doppler centroid assumed in the
processing. The final centroid value is that for which the image energies balance.
A refinement (Jin and Chang, 1992) of the technique of Curlander et al.
(1982) determines the maximum likelihood estimate of f_DC given multiple real
images from different looks, in the case of a scene with constant backscatter
coefficient. An extension of this latter algorithm, to scenes with non-constant
backscatter coefficient, is described by Jin (1986, 1989). Another scheme
(Madsen, 1989) operating in the time domain has been implemented.
A further consideration arises because f_DC is a function of the range R_c at
beam center. Since it is essential that some spectrum averaging take place before
applying the procedures indicated above, some span of R_c values will contribute
to determination of f_DC. It is therefore necessary to introduce some model for
the variation of f_DC with R_c, which may simply be to assume that f_DC is constant
over the range of R_c used in its determination, or varies approximately linearly,
or obeys some more detailed model, such as one determined from the
considerations of Appendix B. The further assumption is made that f_DC is
constant with slow time over a sufficient span to allow the Doppler spectrum
to be developed (perhaps 5-10 km), an assumption which is satisfied to good
accuracy.


We will now indicate in some detail the specific choices which have been
made in developing these clutterlock algorithms. The precise arrangement of
procedures is not especially critical, since slight to moderate misplacement of
f_DC (< 0.05 B_D) only leads to some loss of SNR and some increase in ambiguity
levels (Li and Johnson, 1983). However, some of the procedures can lead to
noticeable SNR and ambiguity effects with certain scene characteristics, so that
the availability of a repertoire of procedures is useful.
Clutterlock by Doppler Spectrum Analysis

The early clutterlock algorithms operating in the Doppler frequency domain
use the general idea sketched in Fig. 5.4. In Fig. 5.4a, a scene reflectivity function
|ζ(x)|² at some fixed range is viewed through the two-way azimuth antenna
power pattern G²(x). The data taken locally near the time s_c = x_c/V_s map
closely one to one from space x into frequency of the Doppler spectrum g(f, R).
The "hot spot" shown to the left of beam center biases the spectrum power to
the left of the center frequency, an undesirable effect.
The frequency domain procedures rely on spectral averaging to defeat the
bias effect indicated in Fig. 5.4a. If power spectra |g(f, R)|² are taken along
multiple range lines and averaged, a strongly reflective region present in a
spectrum at some range may not be present at a different range, and will
therefore be suppressed in the average spectrum. However, for strong regions
of more than a few bins extent in range, the spectrum effect may be present in
too many components of the average to be adequately suppressed. (Sea coasts
are the classic example.) It is therefore desirable to include spectra in the average
which are well separated in range. But then (Fig. 5.4b) the Doppler center
frequency may be different for different spectra being averaged. A remedy is to
fit a model of the function f_DC(R) to the values f_DC(R_c) measured by local
averaging of spectra near a collection of well-separated ranges R_c.
In the scheme of Fig. 5.4, either range compressed basebanded data or
basebanded raw data can be used to develop the Doppler spectra. The latter
case may be preferable, in that bright point targets are dispersed in range, so
that the range migration effect is less likely to carry a target outside the regions
of ranges being averaged over subbands of the Doppler spectrum.
A question also arises as to whether to use range and azimuth compressed
(image) data, formed with some trial value of f_DC, or data (range compressed
or not) before azimuth compression. Both schemes have been used. The use of
image data circumvents the effect sketched in Fig. 5.5. The two point targets
shown are dispersed in azimuth (before azimuth compression). The Doppler
spectra for use in clutterlock are to be found over the span S′ of slow time.
One target, at s_c, has been fully scanned in that interval, and will be represented
equally at frequencies above and below f_DC. The other, at s_c′, appears in the
data, but biased towards higher frequencies. Using spectra computed from an
image would exclude the target at s_c′. The effect is mitigated also by range
averaging, except for regions, such as a coastline, which may extend over a
considerable range span.

Figure 5.4 Use of Doppler spectrum to estimate f_DC requires spectral smoothing with range
adjustment. (a) Bright point target induces bias in estimation of f_DC by peak location of Doppler
spectrum. (b) Drift of Doppler spectrum center as range moves across swath.
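The correlation idea can be sketched directly on a synthetic average spectrum (the Gaussian stand-in for G² and all numbers here are assumptions, not any mission's pattern):

```python
import numpy as np

rng = np.random.default_rng(3)
prf = 1300.0
f = np.arange(-prf / 2, prf / 2)           # 1 Hz Doppler grid across one PRF band

def g2(freq):
    # Stand-in for the two way azimuth power pattern G^2, in Doppler frequency.
    return np.exp(-((freq / 250.0) ** 2))

f_true = 150.0
# Averaged Doppler power spectrum: pattern centered on the true centroid, plus noise.
spectrum = g2(f - f_true) + 0.02 * rng.standard_normal(f.size)

# Correlate the spectrum with the pattern template over candidate centroids.
candidates = np.arange(-300.0, 301.0, 1.0)
score = [float(np.sum(spectrum * g2(f - fc))) for fc in candidates]
f_dc_hat = candidates[int(np.argmax(score))]

assert abs(f_dc_hat - f_true) <= 5.0
```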

. One version of this family of clutterlock algorithms (McDonough et al., 1985)


ts based on correlation of the average Doppler power spectrum of the raw
b~sebanded data at various ranges with the nominal antenna pattern G2 (s),
with subsequent least squares fitting of the obtained values of foe to the model
developed in Appendix B:
(5.3.4)

226

ANCILLARY PROCESSES IN IMAGE FORMATION

5.3

CLUTTERLOCK AND AUTOFOCUS

227

Algorithms Using Energy Balance

- - Sc

A
Figure 5.5 Two point targets with responses dispersed in Doppler by azimuth beamwidth. With
aperture analysis span S', the target at s: contributes only partially to any clutterlock procedure.

A different algorithm based on the same general idea has been used in the JPL
processor. Nominal parameters fDe and fR are first computed from the satellite
orbit and attitude data, such as may be available, in order to carry out image
formation for a small ( 1 km or so) span of slant range and a span of azimuth
time which is also small, but which has enough pulses to carry out an FFT of
some reasonable length (say 5 km or so). The piece of image is to be small
enough that variations in reflectivity with aspect angle will be small, and also
in order to determine the constants a and b. (Here H is the nominal altitude of the satellite.) The algorithm was implemented using data prior to range compression and without range migration compensation. At some tens of range values spaced uniformly across the swath, clusters of a few adjacent range bins were evaluated. Along each range of a cluster, an FFT was taken in the azimuth direction to create Doppler spectra at adjacent ranges (Fig. 5.4b). The squared amplitudes of these adjacent spectra were averaged at each frequency to yield a single power spectrum for each cluster.

Each averaged power spectrum was then correlated with the nominal antenna two-way power pattern G²(s) in the along track dimension, assuming along track time and Doppler frequency to be adequately locked together, to determine the Doppler frequency f_Dc(R_c) at beam center for that particular range (taken say at the center of the cluster). These values for the various clusters across the full range swath were then fit to the Doppler centroid model Eqn. (5.3.4) to determine the constants a and b. During SAR processing, the Doppler model Eqn. (5.3.4) was used with the values a and b found to determine a value f_Dc as needed for processing the image at any particular range R_c. The use of a large (full swath) data span and multiple stage smoothing alleviated such effects as indicated in Fig. 5.5.

Essentially this algorithm, taking a = 0 in the model Eqn. (5.3.4) (assuming negligible eccentricity of the satellite orbit), was used earlier by Herland (1981). The same algorithm has been used with range compressed data (Bennett et al., 1980).

A second class of algorithms operates on a piece of image small enough that f_Dc can be taken constant over the image. In one version (Curlander et al., 1982), four real images of a four-look processor are produced, but not added. The total energy E_i in each of the four images is found, simply by summing the pixel intensities over each image. Since each image is from a different quarter of the full azimuth Doppler spectrum, the image energies correspond to the Doppler spectrum powers in each quarter spectrum.

Were the trial f_Dc to have been correct, from symmetry of the antenna pattern and the locking of azimuth time to Doppler frequency, we would expect equal energies in the sum of the two lower frequency look energies and in the sum of the energies of the two upper frequency looks. In general this will not be the case, and some non-zero value will be found for the number

ΔE = (E1 + E2 − E3 − E4)/(E1 + E2 + E3 + E4)    (5.3.5)

The trial value of f_Dc is then incremented by some nominal amount, say 10 Hz, and the entire procedure repeated to obtain a new value ΔE. Some number (say 16) of such values are computed and plotted vs. f_Dc. The value of f_Dc for which a linear fit to the ΔE(f_Dc) values intersects ΔE = 0 is taken as the estimate f̂_Dc for the particular range of the image piece used in the computations. The entire procedure is then repeated for each 1 km or so span of slant range across the range swath, and a linear fit made to the resulting values f̂_Dc(R) to determine the final (assumed linear) relation of f_Dc to R_c.

Although somewhat computationally intensive, the procedure was reported to be accurate to within a few Hz over ocean regions, which are nearly homogeneous in scattering properties, and to within a few tens of Hz over urban regions. This accuracy estimate was based on the observed variation of the estimates across the swath about the deterministic model of f_Dc(R).
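The energy-balance search is easy to sketch numerically. The block below is a minimal illustration, not the operational processor: a Gaussian stands in for the two-way antenna pattern, the four look energies are band sums of the power spectrum, and the centroid is read off the zero crossing of a linear fit to the ΔE values, as in the procedure just described. All names and numbers are invented.

```python
import numpy as np

def look_energies(power, freqs, fdc_trial, bp):
    """Energies of four contiguous quarter-band looks centered on a trial centroid."""
    edges = fdc_trial + bp * np.array([-0.5, -0.25, 0.0, 0.25, 0.5])
    return [power[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(edges[:-1], edges[1:])]

def delta_e(power, freqs, fdc_trial, bp):
    e1, e2, e3, e4 = look_energies(power, freqs, fdc_trial, bp)
    return (e1 + e2 - e3 - e4) / (e1 + e2 + e3 + e4)   # Eqn. (5.3.5)

# Stand-in azimuth power spectrum: a Gaussian "two-way antenna pattern"
# centered on the (pretend unknown) true centroid of 1400 Hz.
freqs = np.linspace(0.0, 3000.0, 6001)
f_dc_true = 1400.0
bp = 1200.0                                  # processed Doppler bandwidth, Hz
W = np.exp(-0.5 * ((freqs - f_dc_true) / (bp / 2.35)) ** 2)

# Sweep trial centroids in 10 Hz steps and take the zero crossing of a
# linear fit to the resulting dE values.
trials = f_dc_true - 80.0 + 10.0 * np.arange(16)
des = [delta_e(W, freqs, t, bp) for t in trials]
slope, intercept = np.polyfit(trials, des, 1)
f_dc_est = -intercept / slope
```

By symmetry of the pattern, ΔE vanishes when the trial centroid is correct, so the zero crossing recovers the planted 1400 Hz regardless of the sign convention chosen for ΔE.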
In the algorithm of Li et al. (1985), rather than four subaperture images, a single full aperture complex image ζ̂(s, R) is produced, using a trial value f_Dc at each range, computed from nominal orbit parameters. Azimuth spectra Ẑ(f, R) are produced and averaged over a number of adjacent range bins spanning a small region (say 1 km) over which f_Dc(R) is nominally constant. Each average power spectrum |Ẑ(f, R_i)|² is then balanced to find the frequency above and below which half the power lies. That collection of estimates f̂_Dc(R_i) is then fitted to a linear model f_Dc(R) to determine the final values f̂_Dc(R).

"s,

228

ANCILLARY PROCESSES IN IMAGE FORMATION

5.3

Even though the use of azimuth compressed (image) data obviates the problem of Fig. 5.5, Li et al. (1985) note that some bias of f̂_Dc is present. It is attributed to variation of the true reflectivity ζ(x, R) of discrete targets with respect to aspect angle, so that they may appear more strongly in some parts of the Doppler spectrum than in others. The effect was not noted for homogeneous scenes.
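The Li et al. power-balance idea reduces to finding the frequency that halves the averaged spectrum. Here is a small sketch under simplifying assumptions (a Gaussian stands in for the antenna pattern, and the periodogram samples are exponentially distributed before averaging over range bins; all parameter values are invented):

```python
import numpy as np

def balance_frequency(power, freqs):
    """Frequency above and below which half of the spectral power lies."""
    c = np.cumsum(power)
    i = int(np.searchsorted(c, 0.5 * c[-1]))
    prev = c[i - 1] if i > 0 else 0.0
    frac = (0.5 * c[-1] - prev) / (c[i] - prev)   # sub-bin interpolation
    df = freqs[1] - freqs[0]
    return freqs[i] + (frac - 0.5) * df

rng = np.random.default_rng(0)
freqs = np.arange(0.0, 1700.0, 1.0)
f_dc_true = 900.0
W = np.exp(-0.5 * ((freqs - f_dc_true) / 250.0) ** 2)

# Periodogram values are exponentially distributed about the pattern W;
# averaging over 64 adjacent range bins tames the speckle, after which the
# power-balance frequency estimates the centroid.
spectra = rng.exponential(scale=W, size=(64, freqs.size))
f_dc_est = balance_frequency(spectra.mean(axis=0), freqs)
```

With a symmetric averaged spectrum the balance frequency sits at the centroid; the residual error here comes only from the finite number of averaged looks.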
Jin (1989) worked out the statistics of the quantity ΔE of Eqn. (5.3.5), assuming that the computed real images had elements which were exponentially distributed (Section 5.2), and independent from one resolution cell to another. He determined that, approximately, the mean of ΔE of Eqn. (5.3.5) was related to the deviation Δf_Dc of the value f'_Dc used in the computation of the images from the true value f_Dc by:

E(ΔE) ≈ α Δf_Dc    (5.3.6)

where

Δf_Dc = f_Dc − f'_Dc    (5.3.7)

and

α = 2[W(0) − W(B_p/2)] / ∫_{−B_p/2}^{B_p/2} W(f) df

Here W(f) is the two-way antenna power pattern expressed in the Doppler frequency domain. From Eqn. (5.3.6), an estimator of Δf_Dc is just

Δf̂_Dc = ΔE/α    (5.3.8)

where ΔE is the value at hand, so that, from Eqn. (5.3.7), f_Dc can be estimated as

f̂_Dc = f'_Dc + ΔE/α

The correction procedure is iterated using the value f̂_Dc as a new value f'_Dc.

Minimum Variance Unbiased Centroid Estimation

Jin and Chang (1992) and Jin (1989, Appendix B) have considered clutterlock for a homogeneous scene, that is, one for which the exponentially distributed intensities of the scene elements have constant mean, so that the backscatter coefficient σ0 is constant. For such a scene, the azimuth time variation of the range compressed data phasor is given by the convolution

ζ̃(s, R) = ∫ h(s − s' | R) ζ(s', R) ds'    (5.3.9)

where h is the azimuth impulse response function, embodying the two-way azimuth antenna voltage pattern (the one-way power pattern) and the Doppler phase shift. Then the azimuth spectrum is

g(f, R) = H(f | R) Z(f, R)

where

|H(f | R)|² = W(f − f_Dc)

because of the time and frequency locking effect of the high azimuth bandwidth time product. The function Z is the Doppler spectrum of the complex reflectivity ζ.

If the azimuth compression operation is carried out with a filter H⁻¹(f | R), then the computed complex image ζ̂(s, R) has spectrum

Ẑ(f, R) = H⁻¹(f | R) g(f, R)

with power

|Ẑ(f, R)|² = |H⁻¹(f | R)|² W(f − f_Dc) |Z(f, R)|²

where again W is the two way antenna power pattern in the Doppler domain. In this, the term |H⁻¹|² is known, and is unity if the compression filter is not weighted for sidelobe control. The term |Z|² is an exponentially distributed random variable, since the spectrum Z is a linear operation on the complex Gaussian process ζ(s, R).

Using the assumed constant mean of |ζ|² over the scene, Jin and Chang (1992) derive the minimum variance unbiased estimator Δf̂_Dc of the deviation Δf_Dc = f_Dc − f'_Dc. Here f'_Dc is the Doppler center frequency used in forming the image (and f_Dc is the true value), about which the antenna pattern W(f − f_Dc) is assumed to be symmetric. They find

Δf̂_Dc = ∫ w(f − f'_Dc) |Ẑ(f, R)|² df    *(5.3.10)

where

w(f) = (1/a) W'(f)/W²(f)    (5.3.11)

with

a = S_P σ̂0 ∫_{−B_p/2}^{B_p/2} [W'(f)/W(f)]² df

Here S_P is the azimuth time span corresponding to the processed bandwidth B_P, and the prime indicates the derivative with respect to frequency. Note that an estimate σ̂0 of the (assumed constant) mean image intensity is required. This can be computed from the image at hand.

Equation (5.3.11) is for the case of |H⁻¹| = 1. Otherwise, W should be replaced by |H⁻¹|²W. Also, the spectra |Ẑ(f, R)|² are computed as averages over some moderate number of range values. Finally, an estimate N̂ of the system noise power spectral density is added to W before the computation.

Clutterlock for a Quasi-Homogeneous Scene

Jin (1986, 1989) has developed a refinement of the clutterlock algorithm of Jin and Chang (1992), applicable in the (usual) case that the image is not homogeneous. The image is separated into some number K of subregions with equal mean intensity σ0 over each subregion. The subregions need not be comprised of contiguous image elements. They are determined from the intensities of a multilook image formed using a trial value of f_Dc, with each subregion comprised of those image pixels having the same intensity, to within the quantization selected. The procedure of Jin (1989) (Eqn. (5.3.8)), for example, could be applied separately to each image subregion, computing K values ΔE_k of Eqn. (5.3.5) and K values Δf̂_Dc^k of Eqn. (5.3.8). The resulting K values f̂_Dc^k would then be combined in inverse proportion to their variances to produce a final value f̂_Dc for the image as a whole (over the span of ranges used to compute ΔE).

In fact, Jin (1986, 1989) used this segmentation procedure with the optimal algorithm Eqn. (5.3.10), applied to each homogeneous subregion. However, because the image segmentation occurs in the spatial domain, rather than in the frequency domain, a re-writing of Eqn. (5.3.10) is needed. Were the procedure of Eqn. (5.3.10) to be used directly, the spectrum of the (generally) non-contiguous pixels of the various subimages would be problematic to compute. Assuming a symmetric antenna power pattern W(f), we have

W'(f) = −|W'(f)|,  f ≥ 0;    W'(f) = |W'(f)|,  f < 0

Then Eqn. (5.3.10) becomes

Δf̂_Dc = (1/a) [ ∫_{f<f'_Dc} (|W'|/W²)|Ẑ|² df − ∫_{f>f'_Dc} (|W'|/W²)|Ẑ|² df ]

The integrals in this last equation are just the spectral energies of images created from weighting of the portions of the Doppler spectrum below and above the trial centroid f'_Dc. The Doppler band can be further subdivided into multiple (e.g. four) "looks", with energies E'1, ..., E'4 computed from four weighted subapertures. The denominator term a is proportional to the total image energy E. Thus, to within a known scale factor,

Δf̂_Dc = (E'1 + E'2 − E'3 − E'4)/E    *(5.3.12)

where the energies E'_i refer to an image using the modified azimuth compression filter

H̃⁻¹(f, R) = [|W'(f)|^{1/2}/W(f)] H⁻¹(f, R)

Finally (Jin, 1986), the values Eqn. (5.3.12) for the various homogeneous subregions are combined in proportion to their inverse variances as

Δf̂_Dc = Σ_{k=1}^{K} w_k Δf̂_Dc^k    (5.3.13)

where the weights w_k are proportional to the inverse variances σ_k^{−2}, with (Jin and Chang, 1992)

σ_k^{−2} = S_P ∫_{B_p} [W'(f)/(W(f) + N_0)]² df

The above integration is over the image band, and N_0 is the system noise power spectral density of the image:

N_0 = ∫_{B_p} W(f) df / (B_P · SNR_0)
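The inverse-variance combination of Eqn. (5.3.13) is simple enough to show directly. The per-subregion deviation estimates and variances below are invented numbers, not values from the text:

```python
import numpy as np

# Invented per-subregion centroid-deviation estimates (Hz) and their
# estimator variances, as would come from Eqn. (5.3.12) applied to K = 4
# homogeneous subregions.
d_f = np.array([12.0, 18.0, 9.0, 15.0])
var = np.array([25.0, 100.0, 16.0, 49.0])

w = (1.0 / var) / np.sum(1.0 / var)        # weights proportional to inverse variance
d_f_combined = float(np.sum(w * d_f))      # Eqn. (5.3.13)
var_combined = 1.0 / np.sum(1.0 / var)     # variance of the combined estimate
```

The combined variance is always smaller than the best single subregion's, which is the point of the segmentation.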

Clutterlock by Time Domain Correlation

Madsen (1989) has developed a clutterlock algorithm with computational advantages over the frequency domain procedures we have been describing. The procedure rests on the fact (Whalen, 1971, p. 40) that the power spectral density of a stationary (generally complex) random process ζ(s), such as the Fresnel reflectivity of an azimuth line, is the Fourier transform of its correlation function:

S(f) = ∫_{−∞}^{∞} R(t) exp(−j2πft) dt

where

R(t) = E ζ(s + t) ζ*(s)    (5.3.14)

Since then also

R(t) = ∫_{−∞}^{∞} S(f) exp(j2πft) df    (5.3.15)

any shift in the power spectrum, say to S(f − f_Dc), is evidenced by a phase factor in the correlation function:

R(t) ⇒ R(t) exp(j2πf_Dc t)

This suggests that we can determine f_Dc by analysis of the phase of the slow time correlation function of a computed image line ζ̂(s, R), which can be estimated using Eqn. (5.3.14).

Suppose that the true scene is homogeneous, with independent intensities in each resolution element. Then the reflectivity ζ(s, R) has a power spectral density in the azimuth variable which is constant:

S_ζ(f) = I_0/B_0,  |f| ≤ B_0/2

where I_0 = E|ζ(s, R)|² is the scene intensity and B_0 is the azimuth bandwidth of the scene. The scene power spectrum enters into the azimuth data through the antenna azimuth two-way voltage pattern G(s), expressed in the frequency domain through the frequency time locking relation, and shifted to the true Doppler center frequency f_Dc. The range compressed data azimuth power spectral density is thereby

S(f) = (I_0/B_0) W(f − f_Dc)    (5.3.16)

The azimuth data Eqn. (5.3.16) are passed through a compression filter H(f − f'_Dc) with an amplitude spectrum H(f) shifted to some presumed Doppler center frequency f'_Dc. The line of image thereby produced has power spectral density (Whalen, 1971, p. 47):

S_ζ̂(f) = (I_0/B_0) |H(f − f'_Dc)|² W(f − f_Dc)

From Eqn. (5.3.15), the image azimuth correlation function is then

R_ζ̂(t) = (I_0/B_0) exp(j2πf_Dc t) ∫_{−∞}^{∞} |H(f − Δf)|² W(f) exp(j2πft) df    (5.3.17)

where

Δf = f'_Dc − f_Dc

For small Δf, for any specified t = t_0 the phase of the integral in Eqn. (5.3.17) will be proportional to Δf:

arg ∫_{−∞}^{∞} |H(f − Δf)|² W(f) exp(j2πf t_0) df ≈ 2π a_0 t_0 Δf    (5.3.18)

For the selected t_0, the value R_ζ̂(t_0) is estimated based on Eqn. (5.3.14). The angle of that complex number, say 2πf_0 t_0, from Eqn. (5.3.18) then yields the value of f_Dc in:

2πf_0 t_0 = 2πf_Dc t_0 + 2πa_0 t_0 Δf,
f_0 = f_Dc + a_0 (f'_Dc − f_Dc),

so that

f̂_Dc = (f_0 − a_0 f'_Dc)/(1 − a_0)    *(5.3.19)

The procedure is iterated. Madsen (1989) suggests that the first sample of the estimated autocorrelation be used, so that t_0 is the first available lag value. The coefficient a_0 in Eqn. (5.3.18) is derived under some reasonable assumptions by Madsen (1985). As Madsen (1989) suggests, its determination can be obviated by plotting a succession of values f_0, found with different f'_Dc, in order to determine the value f'_Dc for which f_0 = f'_Dc, implying from Eqn. (5.3.19) that f'_Dc = f_Dc.

The considerable computational efficiency of Madsen's method comes about partly because it is not necessary to compute any power spectra, but mainly because of the possibility of computing the estimate of R_ζ̂(t_0) using hard limited data. In particular, let x, y be any two real stationary Gaussian processes, and let x_s, y_s be their hard limited versions:

x_s(t) = 1,   x(t) ≥ 0
      = −1,  x(t) < 0

and similarly for y. The cross-correlation coefficient

ρ_xy(τ) = E x(t + τ) y(t) / [E x²(t) E y²(t)]^{1/2}

of the original two processes is related to the cross-correlation function of the hard limited versions by (Papoulis, 1965, p. 483):

ρ_xy(τ) = sin[(π/2) R_{x_s y_s}(τ)]    (5.3.20)

Since the correlation coefficient and correlation function of a complex process such as ζ̂(s, R) have the same phase angle, Madsen suggests applying his procedure to the estimated correlation coefficient ρ(τ) rather than to R_ζ̂(τ). Using Eqn. (5.3.20), ρ(τ) can be computed by determining the cross-correlation of hard limited data, which can be done essentially by tabulation of sign comparisons of x_s(s + τ) = Re ζ̂(s + τ) and y_s(s) = Im ζ̂(s).

Madsen (1985) finds that the variance of his estimator Eqn. (5.3.19) is proportional to scene contrast. In practice, Madsen (1989) reports accuracy of the same order as previous (frequency domain) methods, combined with significant computational savings.
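Madsen's estimator can be sketched end to end on synthetic data. The block below generates a bandlimited complex Gaussian azimuth line with a known centroid, reads the centroid off the phase of the first autocorrelation lag, and then repeats the estimate from hard-limited (sign-only) data via the arcsine law of Eqn. (5.3.20). The Gaussian spectrum and every parameter are invented, and this single pass ignores the small a_0 correction of Eqn. (5.3.19):

```python
import numpy as np

rng = np.random.default_rng(1)
fp = 1700.0                      # PRF, i.e. the azimuth sampling rate (Hz)
n = 1 << 14
freqs = np.fft.fftfreq(n, d=1.0 / fp)
f_dc_true = 310.0                # kept well inside (-fp/2, fp/2)

# Bandlimited complex Gaussian azimuth line whose power spectrum is a
# Gaussian hump (an invented antenna pattern) centered on the true centroid.
offset = np.mod(freqs - f_dc_true + fp / 2, fp) - fp / 2
W = np.exp(-0.5 * (offset / 200.0) ** 2)
noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = np.fft.ifft(np.sqrt(W) * noise)

# Full-precision version: the phase of the first-lag autocorrelation
# advances by 2*pi*f_Dc*t0 per lag t0 = 1/fp.
r1 = np.mean(z[1:] * np.conj(z[:-1]))
f_dc_est = np.angle(r1) * fp / (2.0 * np.pi)

# Sign-only version: recover the same phase from hard-limited data through
# the arcsine law, applied to the real/imaginary sign streams.
sx, sy = np.sign(z.real), np.sign(z.imag)
r_cc = np.mean(sx[1:] * sx[:-1] + sy[1:] * sy[:-1]) / 2.0  # -> Re of corr. coeff.
r_cs = np.mean(sy[1:] * sx[:-1] - sx[1:] * sy[:-1]) / 2.0  # -> Im of corr. coeff.
f_dc_sign = np.arctan2(np.sin(np.pi / 2 * r_cs),
                       np.sin(np.pi / 2 * r_cc)) * fp / (2.0 * np.pi)
```

The sign-only estimate needs only sign comparisons and two running counts, which is the source of the computational savings reported by Madsen.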

5.3.2 Autofocus

Most SAR image formation processors in current use carry out determination of the azimuth chirp constant f_R in the same way, using the subaperture correlation method (Bennett et al., 1981; Curlander et al., 1982; Wu et al., 1982; McDonough et al., 1985). The exceptions are those processors such as in (Barber, 1985a), which use direct computation of f_R from orbital data according to the expressions of Appendix B, and processors, such as in (Herland, 1981), which use the fact that the image contrast (for example, the ratio of standard deviation to mean of the image intensity) decreases if the speed parameter V in the model

f_R = −2V²/(λR_c)

is improperly chosen, thereby defocusing the image (Herland, 1980). Here we will concentrate attention on the subaperture method.

As with so much of SAR processing, the subaperture method depends on the locking relationship between azimuth time or position and Doppler frequency:

s − s_c = (f − f_Dc)/f_R    (5.3.21)

Suppose that two complete intensity images were produced for some modest sized patch of terrain, taken small enough in range extent that f_R could be considered constant, and large enough in azimuth extent to allow convenient FFT size. Each image is produced from a different part of the Doppler spectrum, as in multilook processing. Some nominal value f'_R is used in the processing. After formation of the two images, they will be registered in azimuth time by shifting one relative to the other by exactly the amount corresponding to Eqn. (5.3.21):

Δs = (f_Dc^1 − f_Dc^2)/f'_R    (5.3.22)

where f_Dc^1, f_Dc^2 are the centers of the subbands used in forming the images, and f'_R is the trial value used. If we were forming a single multilook image, the registered subimages would now be added. However, we now make the observation that, if the value f'_R used in processing is not the correct value f_R, the registration will be incorrect because the imposed azimuth shift Eqn. (5.3.22) will not accord with the actual relation in the image:

Δs = (f_Dc^1 − f_Dc^2)/f_R    (5.3.23)

Thus, the two images, which should be identical on the same time scale s, will in fact be displaced from one another in time, with the amount of the displacement being a measure of the mismatch in f_R between scene and processor.

In the processor of Curlander et al. (1982), the outer two looks of a four-look processor are used in this procedure. A nominal value of f'_R is chosen, two images I1(s, R) and I2(s, R) are produced, and the cross correlation function

p(γ, R) = Σ_s I1(s + γ, R) I2(s, R)

is estimated for each range R of the image. These correlations are averaged over range to obtain a single average cross correlation function. The location in time of the peak of that function is found, for example by reading off the peak of a local quadratic fit around the nominal peak. This gives the measure of slow time misregistration of the two images:

δs = (f_Dc^1 − f_Dc^2)(1/f'_R − 1/f_R)    (5.3.24)

This is taken as one point on a curve of δs vs. f'_R, and the entire process is cycled for new nominal values f'_R, displaced slightly (a few Hz/s) from one another. The correct value of f_R for the range used in the images is taken as the value at which a linear fit to the points on such a curve crosses the axis δs = 0, implying from Eqn. (5.3.24) that f'_R = f_R. The entire procedure is stepped along in range across the swath of the SAR. The procedure works best over land areas, where point-like targets exist which act to sharpen the cross-correlation peaks, with a reported accuracy of a few tenths of a Hz/s.
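The core measurement (misregister, cross-correlate, quadratic-fit the peak, correct f_R) can be sketched as follows. The scene, PRF, and FM rates are invented, and the two looks are emulated by fractionally shifting one copy of an intensity line by the residual δs of Eqn. (5.3.24) rather than by actual SAR look formation:

```python
import numpy as np

def subsample_peak(c):
    """Quadratic interpolation around the argmax of a correlation sequence."""
    k = int(np.argmax(c))
    y0, y1, y2 = c[k - 1], c[k], c[k + 1]
    return k + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

rng = np.random.default_rng(2)
n, prf = 2048, 1700.0                  # azimuth samples and sampling rate (Hz)
f_r_true, f_r_trial = -520.0, -505.0   # true and trial azimuth FM rates (Hz/s)
d_fdc = 850.0                          # look-center separation (Hz)

# Residual look misregistration of Eqn. (5.3.24), in seconds and in samples.
ds_true = d_fdc * (1.0 / f_r_trial - 1.0 / f_r_true)
shift = ds_true * prf

# A contrasty intensity line and a copy displaced by the residual shift
# (the fractional shift is applied in the frequency domain).
scene = rng.exponential(size=n) ** 2
f = np.fft.fftfreq(n)
look1 = scene
look2 = np.fft.ifft(np.fft.fft(scene) * np.exp(-2j * np.pi * f * shift)).real

# Cross-correlate, locate the peak to sub-sample precision, and convert the
# measured misregistration back into a corrected FM rate.
c = np.fft.ifft(np.fft.fft(look2) * np.conj(np.fft.fft(look1))).real
c = np.roll(c, n // 2)                 # put zero lag at the center
ds_meas = (subsample_peak(c) - n // 2) / prf
f_r_est = 1.0 / (1.0 / f_r_trial - ds_meas / d_fdc)
```

A spiky (high-contrast) scene gives a sharp correlation peak, which is why the text reports the method working best over land with point-like targets.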

In another version of the same idea (McDonough et al., 1985), the model equation of Appendix B is used:

f_R = −2V²/(λR_c)    (5.3.25)

where V is an equivalent speed, very nearly constant with both range and azimuth position over a typical scene. Some nominal value V' of V is chosen, perhaps from nominal orbit data, or simply the approximate value Eqn. (B.4.12):

V' = V_s [R_e/(R_e + H)]^{1/2}

where V_s, H are nominal satellite speed and altitude, and R_e is the nominal earth radius. Using the nominal value of V in the model Eqn. (5.3.25) for f_R, a moderate size piece of image is formed from each of at least two Doppler subbands. A value f'_R for each range in the image is computed from Eqn. (5.3.25), using the nominal V', and that value f'_R(R_c) used in the compression processing.

Suppose the two images are produced with Doppler bands having center frequencies which differ by some amount Δf_Dc. Then we will expect the pixels in each range line of the two images to differ in slow time location s by an amount, from Eqn. (5.3.22):

Δs' = Δf_Dc/f'_R

We will compensate each range line of one of the images by that amount, so as to register the two images in slow time. In reality, however, the pixels in the two images along any range line will differ in slow time location by an amount

Δs = Δf_Dc/f_R

where f_R is the true value for that range in the scene. After compensation, therefore, the pixels along any range line will still misregister by an amount as in Eqn. (5.3.24):

δs = Δs' − Δs = Δf_Dc (1/f'_R − 1/f_R)
   = Δf_Dc (λR_c/2)(1/V² − 1/V'²)    (5.3.26)

where V is the correct value for the velocity parameter in Eqn. (5.3.25) and V' is our nominal choice.

We now measure δs by cross correlating the two images. The process is termed subaperture correlation, because the two images arise from separate Doppler subbands which correspond to different parts (subapertures) of the full antenna beam due to the locking of time and frequency. Since δs depends on range R_c, we first compute the correlation function in slow time along each range bin:

p(γ, R_c) = Σ_s I1(s + γ, R_c) I2(s, R_c)    (5.3.27)

where I1 and I2 are the intensities of the pixels in the two images and the sum is over whatever portion of image slow time has been computed.

Since, in this version, we change the value f'_R over range as in Eqn. (5.3.25), there is a systematic azimuth displacement as a function of range. We need to compensate that dependence before averaging the correlation functions Eqn. (5.3.27) over range. This can be done by computing the average as

p̄(γ) = Σ_{R_c} p((R_c/R_0)γ, R_c)

where the sum is over whatever range bins are available in the image and R_0 is the smallest value of R_c used in the computations Eqn. (5.3.27). The value γ_p of γ for which p̄(γ) peaks is then the measure of δs in Eqn. (5.3.26) at the range R_0:

γ_p = Δf_Dc (λR_0/2)(1/V² − 1/V'²)    *(5.3.28)

which may be solved for the unknown value V.

In the particular case that the range interval used in the image formation is sufficiently small that f_R rather than V can be considered constant, the formulas reduce to the earlier case Eqn. (5.3.24):

p(γ) = Σ_{s,R_c} I1(s + γ, R_c) I2(s, R_c)

which peaks at a value

γ_p = Δf_Dc (1/f'_R − 1/f_R)    *(5.3.29)

This equation can be solved for Δf_R = f_R − f'_R and the resulting correction applied to the assumed value f'_R.

Variants of these procedures are obviously possible, involving a trade-off between precision of correction, which is enhanced for larger Δf_Dc (i.e., time separation), and SNR effects which arise from the fact that looks widely separated in frequency come from Doppler regions corresponding to the edges of the radar beam azimuth pattern. One might, for example, use more than two looks, say four, and carry out a least squares procedure to determine a smoothed value of Δf_R. The optimal approach depends on the radar system and the terrain reflectivity.
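Inverting Eqn. (5.3.28) for the speed parameter is one line of algebra; a numerical sketch with invented values:

```python
import math

# Invented numbers: trial velocity parameter, look separation, smallest
# slant range, wavelength, and a measured correlation-peak offset gamma_p.
v_trial = 7050.0       # m/s
d_fdc = 900.0          # Hz
r0 = 845e3             # m
lam = 0.235            # m (L-band)
gamma_p = -1.9e-3      # s

# Invert Eqn. (5.3.28): gamma_p = d_fdc * (lam*r0/2) * (1/V**2 - 1/V'**2).
inv_v2 = 1.0 / v_trial**2 + 2.0 * gamma_p / (d_fdc * lam * r0)
v_est = 1.0 / math.sqrt(inv_v2)

# Round trip: the recovered V reproduces the measured offset.
gamma_check = d_fdc * (lam * r0 / 2.0) * (1.0 / v_est**2 - 1.0 / v_trial**2)
```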

5.4 RESOLUTION OF THE AZIMUTH AMBIGUITY

In Appendix A we discuss the spectrum F_s(jω) resulting by discrete Fourier transformation of samples of a continuous time waveform. Specifically, the Fourier coefficients F_k, k = 0, ..., N − 1, obtained as the FFT of a set of samples f_n = f(n/f_s), n = 0, ..., N − 1, are the samples F_k = F_s(j2πk f_s/N) of the periodic function (Fig. A.5):

F_s(jω) = f_s Σ_{m=−∞}^{∞} F[j2π(f + m f_s)]    (5.4.1)

Here F(jω) is the spectrum of the function f(t) whose samples are the numbers f_n.

Figure 5.6 Doppler spectrum at f_Dc with aliases induced by sampling at PRF f_p.

In application to SAR azimuth processing, this means that the Doppler spectrum computed as the FFT of the range compressed and basebanded data for any image line is periodic. The period is the pulse repetition frequency, f_s = f_p, since the sampling is that due to the pulsed nature of the radar. This periodicity is of no concern in the various azimuth compression and filtering operations involved in making a full resolution image, since all calculations are done digitally and all azimuth filter spectra are also periodic. (A separate question concerns whether or not the antenna pattern G(s) is adequately limited to induce bandlimiting of the Doppler spectrum, so as to avoid aliasing by sampling at the rate f_p.) However, an ambiguity problem can arise in image registration. In this section we will describe the problem and two methods to resolve the ambiguity. These are discussed in Cumming et al. (1986) and in Chang and Curlander (1992). The method described by Cumming et al. (1986) was also suggested by Luscombe (1982) in a paper concerned with many aspects of clutterlock, autofocus, Doppler aliasing, and ambiguity resolution. It was implemented at JPL in 1983 for processing of SIR-B data.
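The periodicity of Eqn. (5.4.1) is easy to demonstrate: two tones exactly one PRF apart are sample-for-sample identical, so no analysis of the sampled data alone can separate them.

```python
import numpy as np

fp = 1700.0                     # PRF: the azimuth sampling rate (Hz)
n = 512
t = np.arange(n) / fp

# Two complex tones exactly one PRF apart give identical samples, so their
# DFTs (the computed Doppler spectra) are indistinguishable.
f_true = 2450.0                 # true Doppler frequency, above fp
f_alias = f_true - fp           # its baseband replication
x_true = np.exp(2j * np.pi * f_true * t)
x_alias = np.exp(2j * np.pi * f_alias * t)
same = np.allclose(np.fft.fft(x_true), np.fft.fft(x_alias))

# The FFT peak lands at the baseband frequency in [0, fp), whatever m was.
k = int(np.argmax(np.abs(np.fft.fft(x_true))))
f_peak = k * fp / n
```

Clutterlock algorithms therefore deliver the centroid only modulo f_p, which is exactly the ambiguity this section resolves.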

Range Subaperture Correlation Algorithm

In a sampled Doppler spectrum of the form of that in Fig. 5.6, only one replication is the right one, that centered at the Doppler center frequency f_Dc. The clutterlock algorithms described in Section 5.3 determine a value f'_Dc in the base region of the replicated spectra: 0 ≤ f'_Dc < f_s = f_p. There is one step in the processing chain which is sensitive to whether our estimate f'_Dc is the true value f_Dc or one of its replications, f'_Dc = f_Dc + m f_p, m ≠ 0. That is the range migration correction. The fact is used in the range subaperture correlation algorithm (Cumming et al., 1986) to determine the correct f_Dc.

Suppose that the true range walk Eqn. (4.1.39) for some particular image range R_c of interest is described by

R(s) ≈ R_c − (λ f_Dc/2)(s − s_c)    (5.4.2)

while we assume the same expression with a value f'_Dc for the Doppler center frequency, differing by some multiple of the sampling frequency: f'_Dc = f_Dc + m f_p. As a result, we assume the data for each image point to lie nominally along the dashed range walk line of Fig. 5.7, whereas they actually lie along the solid line. The same situation holds in both the slow time and Doppler frequency domains:

ΔR = R − R' = −(λ/2)(f_Dc − f'_Dc)(s − s_c) = (λ m f_p/2)(s − s_c)
   = (λ m f_p/2f_R)(f − f_Dc)    (5.4.3)

This is the range amount by which the azimuth time domain data locus would be offset in the time domain migration correction procedure of Section 4.2.3, or the first order range offset in assembling the Doppler frequency spectra in the process of secondary range compression, as indicated in Eqn. (4.2.49).

Figure 5.7 Range walk locus error resulting from use of ambiguous Doppler spectrum with m ≠ 0.

Now consider the procedure of registration of the multiple looks of a multilook image. Each frequency f in the subband of the first look, centered at f_Dc^1, will be associated with a frequency in the second subband:

f' = f + (f_Dc^2 − f_Dc^1)

where the look center frequencies may be ambiguous themselves, but differ by the same amount for any ambiguity number m. If we have used the true migration locus, then the ranges of the points corresponding to f and f' are the same, and the points of the two sublook images superpose, after azimuth registration Δs = (f_Dc^1 − f_Dc^2)/f_R. However, if we have used the wrong Doppler spectrum replication (m ≠ 0), the ranges corresponding to f and f' will differ by ΔR of Eqn. (5.4.3). The multilook spectra have been gathered from the wrong range bins. The result is a range misregistration of the images by the amount Eqn. (5.4.3).

However, just this misregistration can be sensed by subaperture correlation. Before the look images are added, the images are cross correlated in range, just as was done in azimuth subaperture correlation (Section 5.3.2) to measure f_R. For some moderate patch of image we compute

p(R) = Σ_n I1(R_n + R) I2(R_n)

averaging over azimuth to enhance stability. The correlation p(R) will tend to peak at the offset

ΔR = (λ m f_p/2f_R) Δf_Dc    *(5.4.4)

where Δf_Dc is the difference in assumed look center frequencies. The value of m may be calculated from Eqn. (5.4.4). This yields the true value

f_Dc = f'_Dc − m f_p

allowing the full image to be processed with the proper range migration correction.
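Estimating m then amounts to dividing the measured range misregistration by the per-cycle offset of Eqn. (5.4.4). A back-of-envelope sketch with invented C-band numbers (signs depend on conventions; the magnitudes are what matter here):

```python
# Invented C-band numbers: infer the ambiguity number m from a measured
# range misregistration between two looks, via Eqn. (5.4.4).
lam = 0.057        # m, wavelength
fp = 1650.0        # Hz, PRF
d_fdc = 825.0      # Hz, separation of the two look center frequencies
f_r = 480.0        # Hz/s, magnitude of the azimuth FM rate

dr_per_cycle = lam * fp * d_fdc / (2.0 * f_r)   # range offset per ambiguity cycle
dr_measured = -161.5                            # m, measured correlation-peak offset

m = round(dr_measured / dr_per_cycle)
f_dc_correction = -m * fp                       # f_Dc = f'_Dc - m*fp
```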

Criteria for Ambiguity Resolution

The need for such an ambiguity resolution procedure is greater at higher frequencies, being usually unnecessary at L-band. This is because the true value of f_Dc can be calculated, at least to within half a pulse sampling interval f_p, provided the antenna beam pointing direction is known to within nominally half a beamwidth. This follows because Doppler frequency is related to pointing angle θ off broadside by nominally (Eqn. (1.2.4))

f_Dc = (2V/λ) sin θ

so that the centroid error is nominally

δf_Dc = (2V/λ) δθ

where δθ is the error in measuring beam center pointing angle (squint). That is, we require

(2V/λ)|δθ| < f_p/2    (5.4.5)

Since azimuth resolution is nominally (Eqn. (4.1.37)):

δx = V/B_0    (5.4.6)

and nominally B_0 = f_p, the requirement Eqn. (5.4.5) becomes

|δθ| < λ/(4δx)    *(5.4.7)

With δx fixed, the requirements on measurement of pointing direction as a resolver of azimuth ambiguity become more severe with decreasing wavelength. With δx = 7 m, for example, from Eqn. (5.4.7) at L-band the requirement is |δθ| < 0.5°, which is reasonably obtainable with on-board instrumentation. At X-band, however, there is required |δθ| < 0.05°, which may be difficult to attain.

It is worth mentioning specifically that we are dealing with a true ambiguity due to sampling. The effects of aliasing are also caused by such ambiguities (Li and Johnson, 1983). A strong target in a sidelobe of the antenna beam may correspond to a true Doppler frequency well outside the nominal Doppler band, based on the mainlobe azimuth extent, and will thereby be aliased into the processed band and taken as an image point displaced in azimuth by Δx = V f_p/f_R from its actual position. This problem can be avoided by raising f_p (sampling faster in azimuth) or filtering (presumming (Brown et al., 1973)) the pulse train before processing to reduce the Doppler band.

A difficulty can arise in the range correlation method of resolving the Doppler ambiguity. From Eqn. (5.4.4), using nominal values f_p = B_0 and Δf_Dc = B_0, there results

ΔR = λ m B_0²/(2f_R)

Using Eqn. (5.4.6), and the usual model

f_R = −2V²/(λR_c)

this becomes

ΔR = (m R_c/4)(λ/δx)²    *(5.4.8)

At C-band, say λ = 5 cm, and at the altitude of the space shuttle, R_c = 250 km, with a common single-look azimuth resolution δx = 6 m, and range resolution δR = 7 m, this yields (for m = 1)

ΔR/δR = 0.6

Hence the misregistration is less than a resolution cell per ambiguity cycle, and may be difficult to sense. (The cross-correlation uses single-look images, which have a signal-to-speckle noise ratio of only 0 dB.) The situation worsens at X-band.
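The pointing-knowledge criterion Eqn. (5.4.7) is a one-liner to evaluate; the wavelengths below are nominal L-, C-, and X-band values, with the δx = 7 m of the example in the text:

```python
import math

def max_pointing_error_deg(lam, dx):
    """Pointing knowledge needed to pin the centroid within fp/2, Eqn. (5.4.7)."""
    return math.degrees(lam / (4.0 * dx))

dx = 7.0    # m, azimuth resolution, as in the example in the text
reqs = {band: max_pointing_error_deg(lam, dx)
        for band, lam in [("L", 0.235), ("C", 0.057), ("X", 0.031)]}
```

At L-band this evaluates to roughly half a degree, shrinking by nearly an order of magnitude at X-band, which is the trend the text describes.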
Ambiguity Resolution Using Multiple PRFs

Accordingly, another method of resolving the Doppler ambiguity has been proposed by Chang and Curlander (1992). It assumes the transmission of radar pulse trains with more than one pulse repetition frequency f_p. The procedure is reminiscent of the staggered PRF method (Mooney and Skillman, 1970) of resolving range ambiguity due to second time around (Section 1.2.1), but is implemented as brief (a second or less) bursts of pulses at each f_p in turn. The result is the availability of spectra of the Doppler signal in each range bin, sampled at a variety of PRFs f_p.

The idea of the method is clear from Fig. 5.8. The position of the "real" Doppler spectrum, centered on the true Doppler centroid f_Dc, is the same regardless of sampling rate f_p, while the "pretenders" change position as f_p changes. Clutterlock algorithms yield estimates f_Dc^i in the baseband region, 0 ≤ f_Dc^i < f_p^i. The object is to analyze the measured values f_Dc^i resulting from the various f_p^i, and infer from them the true value f_Dc.

Figure 5.8 True Doppler spectrum and differing ambiguities induced by two PRFs f_p^1, f_p^2.

We then have the following problem to solve. Given measured values

f_Dc^i = f_Dc − k_i f_p^i,  i = 1, ..., M    (5.4.9)

where k_i are unknown integers, find f_Dc. One obvious procedure is to compute the numbers f̂_Dc^i = f_Dc^i + n f_p^i for n = 1, 2, ..., until we observe f̂_Dc^1 = f̂_Dc^2 = ···,

from which we conclude f_Dc = f̂_Dc^i. Chang and Curlander (1992) set forth a more deductive solution, which has the possibility of extension to account for measurement errors.

Assume first that all frequency values are integers. Then by definition the expression Eqn. (5.4.9) is a congruence (Barnett, 1969, Chapter 6):

f_Dc ≡ f_Dc^i mod(f_p^i),  i = 1, ..., M    (5.4.10)

That is, the integer difference f_Dc − f_Dc^i is an integer multiple of the integer f_p^i. Given the numbers f_Dc^i and f_p^i, we want to solve the simultaneous set Eqn. (5.4.10) for the unknown f_Dc.

The existence of a solution to Eqn. (5.4.10) rests on the Chinese remainder theorem (Barnett, 1969, p. 115): If the members of each pair f_p^i, f_p^j, i ≠ j, have no common integer divisors, other than unity, then the simultaneous set Eqn. (5.4.10) always has solutions, and, further, those solutions are congruent modulo f^M = f_p^1 × ··· × f_p^M. That is, the solution f_Dc is determined to within the product f^M, and the ambiguity span of the Doppler centroid is expanded to that value.

The proof of the Chinese remainder theorem is by construction, and is given by Barnett (1969, p. 115). In the new baseband 0 ≤ f_Dc < f^M, the solution is

f_Dc = [Σ_{i=1}^{M} M_i n_i f_Dc^i] mod(f^M)    (5.4.11)

where

M_i = f^M/f_p^i

and the integers n_i are any solutions of

M_i n_i ≡ 1 mod(f_p^i)    (5.4.12)

Barnett (1969, p. 88) shows that Eqn. (5.4.12) has exactly one solution n_i, modulo f_p^i. That solution can be found by solving the Diophantine equation corresponding to the congruence Eqn. (5.4.12):

M_i n_i + k_i f_p^i = 1    (5.4.13)

where k_i is some integer. (A (linear) Diophantine equation is a linear equation with integer coefficients for which we seek an integer solution.)

Euclid's Algorithm

The solution of the equation Eqn. (5.4.13) can be constructed (Barnett, 1969, p. 51) by chaining backwards through Euclid's algorithm (Barnett, 1969, p. 47) for finding the greatest common divisor (gcd) of the integers M_i and f_p^i. This will be illustrated by an example.
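The construction of Eqns. (5.4.11)-(5.4.13) is only a few lines of code: extended Euclid supplies the n_i, and the weighted residues are summed modulo f^M. This is a generic Chinese-remainder sketch, not the Chang and Curlander implementation; it checks itself against the numbers of the worked example that follows.

```python
from math import gcd

def egcd(a, b):
    """Extended Euclid: returns (g, n, k) with a*n + b*k = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, n, k = egcd(b, a % b)
    return g, k, n - (a // b) * k

def resolve_centroid(residues, prfs):
    """Chinese-remainder reconstruction of f_Dc from per-PRF baseband values."""
    for i in range(len(prfs)):
        for j in range(i + 1, len(prfs)):
            assert gcd(prfs[i], prfs[j]) == 1, "PRFs must be pairwise coprime"
    fM = 1
    for p in prfs:
        fM *= p
    total = 0
    for r, p in zip(residues, prfs):
        Mi = fM // p
        _, ni, _ = egcd(Mi, p)       # Mi * ni = 1 (mod p), Eqn. (5.4.12)
        total += Mi * ni * r         # one term of Eqn. (5.4.11)
    return total % fM
```

With the example's PRFs of 1652 and 1745 and a true centroid of 5275 Hz, `resolve_centroid([319, 40], [1652, 1745])` returns 5275.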

Suppose we choose f_p^1 = 1652 = 2 x 2 x 7 x 59 and f_p^2 = 1745 = 5 x 349.
Having no common factor other than one, these fulfill the requirement of the
method. Suppose the true Doppler centroid is f_DC = 5275. We then measure
(assuming a perfect clutterlock algorithm):

    f_DC^1 = 5275 mod(1652) = 319,        f_DC^2 = 5275 mod(1745) = 40

The set Eqn. (5.4.13) is (M_1 = 1745, M_2 = 1652):

    1745 n_1 + 1652 k_1 = 1        (5.4.14)

    1652 n_2 + 1745 k_2 = 1        (5.4.15)

We are assured by hypothesis that the greatest common divisor of
(a, b) = (1745, 1652) is 1. Euclid's algorithm leads to that result as (the
remainders are labeled r_i):

    a = 1745 = 1652 x 1 + 93
    b = 1652 = 93 x 17 + 71
    r_1 = 93 = 71 x 1 + 22
    r_2 = 71 = 22 x 3 + 5
    r_3 = 22 = 5 x 4 + 2
    r_4 = 5 = 2 x 2 + 1
    r_5 = 2 = 1 x 2 + 0

which identifies 1 (the last nonzero remainder) as the gcd of (1745, 1652).
Now carry out the back solution of the Euclid array, eliminating each
remainder in turn:

    1 = 5 - 2 x 2
      = 5 - 2 x (22 - 5 x 4) = -2 x 22 + 9 x 5
      = -2 x 22 + 9 x (71 - 22 x 3) = 9 x 71 - 29 x 22
      = 9 x 71 - 29 x (93 - 71 x 1) = -29 x 93 + 38 x 71
      = -29 x 93 + 38 x (1652 - 93 x 17) = 38 x 1652 - 675 x 93
      = 38 x 1652 - 675 x (1745 - 1652 x 1)
      = -675 x 1745 + 713 x 1652

The solution of Eqn. (5.4.14) is now identified as

    n_1 = -675,        k_1 = 713

Similarly,

    n_2 = 713,        k_2 = -675

Then Eqn. (5.4.11) yields

    f_DC = [1745 x (-675) x 319 + 1652 x 713 x 40] mod(2882740) = 5275

Extensions of the Multiple PRF Method

Chang and Curlander (1992) develop a slight extension of this algorithm, aimed
at noninteger measured values f_DC^i. From Eqn. (5.4.10), it is clear that the
integer part of the f_DC^i arises from the integer part of f_DC, so that the integer
parts of the f_DC^i can be used in the above procedure, and their (common)
fractional part added to the resulting f_DC. To allow for estimation errors in the
f_DC^i, the measured f_DC^i are least squares fit to numbers a_i constrained to
have a common fractional part, and the integer parts of the resulting a_i used in
the algorithm. Their common fractional part is added to the solution f_DC found.

Also, the case is considered that the true value f_DC may change slightly
over the time interval from one PRF burst to another, so that, in place of
Eqn. (5.4.10), we have

    f_DC^i = f_DC,i mod(f_p^i),        i = 1, ..., M        (5.4.16)

An additional algorithm is presented by Chang and Curlander (1992) for
resolution of the ambiguity in this case also, provided a minimum of three
values of f_p are used. The values of f_p in this algorithm may contain common
factors.

In the algorithm relative to Eqn. (5.4.16), the true values f_DC,i can be assumed
to be random perturbations of a nominal value f_DC. Any two of the values
f_DC,i, say f_DC,1 and f_DC,2, are used as data in the maximum likelihood
estimator of f_DC and k_1, assuming k_2 = k_1 + i, with i some deterministic
value. Thus, with p_12(Δf_DC,1, Δf_DC,2) being Gaussian, say:

one can compute the maximum likelihood estimate of k_1. The result, rounded
to the nearest integer, is the estimate k̂_1 of Eqn. (5.4.17). Using Eqn. (5.4.17),
there follows the estimator

    f̂_DC = k̂_1 f_p^1 + f_DC^1

which still contains the integer i as a parameter. To obtain some smoothing
effect, two measurements are used to determine it. It can be assumed that f_DC,2
and f_DC,3 differ from f_DC by considerably less than a PRF, f_p^2 or f_p^3,
respectively. Therefore, the values

    δ_1 = |(f̂_DC - f̂_DC,2) mod(f_p^2)|
        = |[f̂_DC - (k_2 f_p^2 + f_DC^2)] mod(f_p^2)|
        = |(f̂_DC - f_DC^2) mod(f_p^2)|

and

    δ_2 = |(f̂_DC - f̂_DC,3) mod(f_p^3)|
        = |(f̂_DC - f_DC^3) mod(f_p^3)|

ideally vanish. Therefore, the value of i in Eqn. (5.4.17) is chosen so as to
simultaneously minimize δ_1 and δ_2 so far as possible. The method succeeds if
this can be done with the minimum δ_1 and δ_2 values considerably less than
the f_p values. The limitations of these algorithms are analyzed by Chang and
Curlander (1992) with respect to errors in the clutterlock outputs f_DC^i and the
differences Δf_DC,i due to antenna pointing drift between PRF bursts. Attention
is also given to choice of f_p^i for the best accuracy of the final estimate of f_DC
in the three-frequency algorithm.

We have now described the main class of algorithms used to produce
uncalibrated images from SAR radar signals. We turn next to a description of
the radar hardware system itself, in Chapter 6, and in Chapter 9 a discussion
of the processing architectures available to carry out the image computations.
These computations must also include realization of algorithms for radiometric
and geometric calibration, discussed in Chapter 7 and Chapter 8.
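As an illustration, the two-PRF Chinese-remainder construction of Eqns. (5.4.11)-(5.4.13) can be sketched in code. This is our own sketch, not part of the original development; the routine names and the recursive extended-Euclid formulation are choices made here, and the sketch is checked against the worked example (f_p^1 = 1652, f_p^2 = 1745, f_DC = 5275).

```python
# Illustrative sketch (not from the text) of the Chinese-remainder resolution
# of the Doppler centroid ambiguity, Eqns. (5.4.11)-(5.4.13).

def extended_euclid(a, b):
    """Return (g, x, y) such that a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_euclid(b, a % b)
    return g, y, x - (a // b) * y

def resolve_doppler(fdc_measured, prfs):
    """Recover the unambiguous centroid from per-PRF baseband measurements.

    fdc_measured[i] = f_DC mod prfs[i]; the PRFs must be pairwise coprime.
    """
    f_M = 1
    for fp in prfs:
        f_M *= fp                               # expanded ambiguity span f_M
    total = 0
    for fdc_i, fp in zip(fdc_measured, prfs):
        M_i = f_M // fp                         # M_i = f_M / f_p^i
        g, n_i, k_i = extended_euclid(M_i, fp)  # M_i n_i + f_p^i k_i = 1
        assert g == 1, "PRFs must be pairwise coprime"
        total += M_i * n_i * fdc_i              # summand of Eqn. (5.4.11)
    return total % f_M

fdc = resolve_doppler([5275 % 1652, 5275 % 1745], [1652, 1745])
print(fdc)  # 5275
```

Any solution n_i of the congruence serves equally well, since changing n_i by a multiple of f_p^i changes the sum by a multiple of f_M.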

REFERENCES

Barber, B. C. (1985a). "Theory of digital imaging from orbital synthetic-aperture
radar," Inter. J. Remote Sensing, 6(7), pp. 1009-1057.
Barber, B. C. (1985b). "Analysis of binary quantisation effects in the processing of
chirp and synthetic aperture radar signals," IMA Inter. Conf. on Math. in Signal
Proc., Univ. of Bath, September 17-19.
Barnett, I. A. (1969). Elements of Number Theory, Prindle, Weber, and Schmidt, Boston.
Bennett, J. R., I. G. Cumming and R. A. Deane (1980). "The digital processing of Seasat
synthetic aperture radar data," Record, IEEE 1980 Inter. Radar Conf., April 28-30,
Washington, DC, pp. 168-175.
Bennett, J. R., I. G. Cumming, P.R. McConnell and L. Gutteridge (1981). "Features of
a generalized digital synthetic aperture radar processor," 15th Inter. Symp. on Remote
Sensing of the Environment, Ann Arbor, Michigan, May.
Brown, W. M., G. G. Houser and R. E. Jenkins (1973). "Synthetic aperture processing
with limited storage and presumming," IEEE Trans. Aerospace and Electronic Sys.,
AES-9(2), pp. 166-175.
Chang, C. Y. and J. C. Curlander ( 1992 ). "Algorithms to resolve the Doppler centroid
estimation ambiguity for synthetic aperture radars," IEEE Trans. Geosci. Rem. Sens.
(submitted).
Cumming, I. G., P. F. Kavanagh and M. R. Ito ( 1986). "Resolving the Doppler ambiguity
for spaceborne synthetic aperture radar," Proc. IGARSS' 86 Symp., Zurich, September
8-11, pp. 1639-1643.
Curlander, J. C., C. Wu and A. Pang ( 1982). "Automated preprocessing of spaceborne
SAR data," IGARSS '82, Munich, pp. FA-1-3-1-6.
Herland, E.-A. (1980). "Some SAR-processing results using auto-focusing," Proc. 3rd
Seasat-SAR Workshop, Frascati, Italy, December 11-12, pp. 19-22.
Herland, E. A. (1981). "Seasat SAR processing at the Norwegian Defence Research
Establishment," Proc. of an EARSeL-ESA Symp., Voss, Norway, May 19-20,
pp. 247-253.
Jin, M. Y. (1986). "Optimal Doppler centroid estimation for SAR data from a
quasi-homogeneous source," IEEE Trans. Geosci. and Remote Sensing, GE-24(6),
pp. 1022-1027.
Jin, M. Y. (1989). "A Doppler centroid estimation algorithm for SAR systems optimized
for the quasi-homogeneous source," Publ. 89-9, Jet Propulsion Lab., Calif. Inst. Tech.,
Pasadena, Oct. 1.
Jin, M. Y. and C.-Y. Chang ( 1992). "Optimal Doppler centroid estimation for SAR echo
data from homogeneous source," IEEE Trans. Aerospace and Elec. Sys. (submitted).
Li, F. K. and W. T. K. Johnson (1983). "Ambiguities in spaceborne synthetic aperture
radar systems," IEEE Trans. Aerospace and Elec. Sys., AES-19(3), pp. 389-395.
Li, F.-K., C. Croft and D. N. Held (1983). "Comparison of several techniques to obtain
multiple-look SAR imagery," IEEE Trans. Geosci. and Remote Sensing, GE-21(3),
pp. 370-375.
Li, F.-K., D. N. Held, J. C. Curlander and C. Wu (1985). "Doppler parameter estimation
for spaceborne synthetic-aperture radars," IEEE Trans. Geosci. and Remote Sensing,
GE-23(1), pp. 47-55.


Luscombe, A. P. (1982). "Auxiliary data networks for satellite synthetic aperture radar,"
Marconi Review, 45(225), pp. 84-105.
Madsen, S. N. (1989). "Estimating the Doppler centroid of SAR data," IEEE Trans.
Aerospace and Elec. Sys., AES-25(2), pp. 134-140.
Madsen, S. N. (1985). "Speckle Theory," Electromagnetics Institute of Technical
University of Denmark, Report LD62, November.
McDonough, R. N., B. E. Raff and J. L. Kerr (1985). "Image formation from spaceborne
synthetic aperture radar signals," Johns Hopkins APL Technical Digest, 6(4),
pp. 300-312.
Mooney, D. H. and W. A. Skillman ( 1970). "Pulse-Doppler Radar," Chapter 19 in Radar
Handbook (Skolnik, M. I., ed.), McGraw-Hill, New York, pp. 19.1-19.29.
Papoulis, A. (1965). Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York.
Ulaby, F. T., R. K. Moore and A. K. Fung (1982). Microwave Remote Sensing, Vol. 2,
Addison-Wesley, Reading, MA.
Whalen, A. D. ( 1971 ). Detection of Signals in Noise, Academic Press, New York.
Wu, C., B. Barkan, W. J. Karplus and D. Caswell ( 1982). "Seasat synthetic-aperture
radar data reduction using parallel programmable array processors," IEEE Trans.
Geosci. and Remote Sensing, GE-20(3), pp. 352-358.

6
SAR FLIGHT SYSTEM

In this chapter, we begin our discussion of some of the practical considerations
associated with the design and implementation of a SAR system. The ability
of the ground signal processor to produce high quality, calibrated image products
depends on an accurate characterization of the subsystem elements preceding
it. Aspects of the flight instrument performance, such as its stability, linearity,
and frequency response, can contribute significantly to the overall system
performance. A poorly designed or malfunctioning sensor may result in
misinterpretation of the image data due to excessively high noise levels or image
artifacts (ambiguities). The introductory section presents an overview of the
end-to-end data system, describing the system operation and addressing the
various noise sources. This is followed by a discussion of the radar sensor
hardware and communications subsystems, identifying key assemblies and their
performance specifications, and techniques to measure their performance. The
chapter concludes with SAR system design methodology, including an analysis
of the trade-offs in radar parameter selection and ambiguity noise.

6.1 SYSTEM OVERVIEW

The SAR data system is comprised of three main subsystems (Fig. 6.1):
(1) SAR sensor; (2) Platform (spacecraft or aircraft) and data downlink;
and (3) Ground signal processor. The radar subsystem can be functionally
divided into four main assemblies: (1) Timing and control; (2) RF (analog)
electronics; (3) Digital electronics and data routing; and (4) Antenna. Each of
these assemblies can be further divided into subassemblies and components. For

example, the antenna typically has three major subassemblies: (1) Feed; (2)
Radiators; and (3) Support structure. The characteristics of each radar assembly
will be addressed in more detail in the following sections. The spacecraft (S/C)
bus generally provides the downlink processor (including the communications
antenna) and the onboard recorders for temporary storage of data.

Figure 6.1 Block diagram of the end-to-end SAR data system.

In discussing the SAR ground data system, we will refer to the various levels
of data products as defined by NASA (Butler, 1984). Their definitions, as adapted
specifically to SAR data products, are presented in Table 6.1. We will treat the
ground receiving station and Level 0 processing for removing the telemetry
artifacts as part of the data downlink. The ground data processing subsystem
consists of a Level 1A processor which produces the single-look, complex image
by performing two-dimensional matched filtering of the data, followed by a
Level 1B processor that performs radiometric and geometric corrections on the
Level 1A output image. This Level 1B processor may also perform low pass
filtering for speckle noise reduction and detection of the complex image for
video display. The final stage of the ground processing is the Level 2, 3 processor
that derives geophysical information (e.g., soil moisture, surface roughness) from
either a single image frame (Level 2) or multitemporal coregistered images
(Level 3).

Each element of the data system introduces noise of some type that corrupts
the signal, effectively degrading the overall system performance. Typically, this
performance is characterized by the system impulse response function. Additional
performance characteristics relating to the radiometric and geometric calibration
accuracy will be discussed in detail in Chapters 7 and 8. The key noise sources
degrading the synthetic aperture radar performance can be categorized as
follows:

1. Amplitude and phase errors across the system bandwidth which degrade
the range impulse response function;

2. Phase instability over time intervals varying from the relatively short round
trip propagation time to the longer synthetic aperture duration (or
coherent integration time) which primarily degrades the azimuth impulse
response function;

3. Thermal noise introduced by the analog electronics which degrades the
system dynamic range and the polarimetric performance (e.g., phase
estimation accuracy);

4. Distortion noise, introduced by quantization error, system nonlinearities
(e.g., saturation effects) and bit error noise (from the communications
subsystem) which degrades the impulse response function in both dimensions.
The degradations introduced by the first two items listed above are
generally considered as linear system errors, while the distortion noise is a
nonlinear error. To the degree that linear system errors can be characterized
as deterministic errors, they can be compensated in the signal processor by

TABLE 6.1 SAR Data Level Definitions

Level 0: Reconstructed digital video data.

Level 1A: Reversibly processed image data (one-look, complex) at full
resolution, time referenced, and annotated with ancillary information,
including radiometric and geometric calibration coefficients and
georeferencing parameters (i.e., platform ephemeris) computed and
appended but not applied.

Level 1B: Level 1A data that has been geometrically resampled and
radiometrically corrected to sensor units (i.e., radar backscatter cross
section). Standard SAR data product.

Level 2: Derived geophysical parameters (e.g., ocean wave height, soil moisture,
ice concentration) mapped on some uniform time/space grid with
processing parameters appended.

Level 3: Geophysical parameter data mapped on uniform space-time grid
scales, usually with some completeness and consistency properties
added (e.g., missing points interpolated, multiframe mosaics).

Level 4: Model output or results from analysis of lower-level data (i.e., data
not measured by the instruments but derived from instrument
measurements).

adjusting the matched filter reference function. The residual, random linear
error component degrades the impulse response function. The nonlinear noise
sources to some degree can be modeled as white additive noise. However,
frequency harmonics will arise from saturation effects that must be treated
separately. These issues will be discussed in more detail in Section 6.2. The
thermal noise will result in measurement errors on an individual sample basis,
but over a large number of samples the mean noise power can be accurately
estimated and subtracted from the average received power to derive an accurate
backscatter coefficient estimate.
Prior to addressing the system performance specifications for individual
assemblies in the radar subsystem, it is instructive to briefly review the radar
system operation, keeping in mind that there are a number of variations on
this operational scenario. A block diagram of the SAR sensor is shown in Fig. 6.2.
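As a minimal numerical sketch of the noise-subtraction idea just described (our own illustration, not from the text; the power levels and sample counts are arbitrary):

```python
# Illustrative sketch (not from the text): the thermal-noise power, estimated
# from many noise-only samples, is subtracted from the mean received power
# to recover the echo power. All values below are arbitrary.
import random

random.seed(0)
signal_power = 4.0    # assumed mean echo power (arbitrary units)
noise_power = 1.0     # assumed mean thermal-noise power

def samples(power, n):
    """Complex circular-Gaussian samples with the given mean power."""
    s = (power / 2) ** 0.5
    return [complex(random.gauss(0, s), random.gauss(0, s)) for _ in range(n)]

echo = [a + b for a, b in zip(samples(signal_power, 100000),
                              samples(noise_power, 100000))]
noise_only = samples(noise_power, 100000)  # e.g., receive-only calibration data

mean_rx = sum(abs(z) ** 2 for z in echo) / len(echo)
mean_noise = sum(abs(z) ** 2 for z in noise_only) / len(noise_only)
estimate = mean_rx - mean_noise            # estimate of the echo power alone
print(round(estimate, 1))                  # close to signal_power = 4.0
```

A single sample tells little because of the noise, but the sample mean of the noise power converges, so the subtraction recovers an accurate backscatter power estimate.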
Transmission

The coherent radar signal originates in the stable local oscillator (stalo). This
signal is gated into the exciter subsystem according to a predefined pulse duration
and pulse repetition frequency (PRF). The exciter modulates the pulse tone
with a frequency or phase code. This signal is then translated to the desired
carrier frequency by a series of mixers, amplifiers, and bandpass filters. At the
carrier frequency, the RF signal is input to a high power amplifier (HPA) which
is either a cascade of solid state amplifiers or a traveling wave tube (TWT)
device. This high power signal is then passed through a circulator switch to the

antenna subsystem.

Figure 6.2 Block diagram of SAR subsystem illustrating four assemblies and key subassemblies.

The antenna feed network consists of coaxial cable and/or
waveguide with power dividers. It divides the signal into a number of parallel
coherent paths (assuming a phased array antenna) which terminate with
radiating elements or slots. The feed network may also contain phase shifters

for electronic beam steering and transmitter/receiver (T/R) modules for
improving the SNR (i.e., an active array).
Reception

The return echoes are collected by the same antenna radiator and feed system
as was used for signal transmission. The exception is an active array where the
T/R module paths are not the same on receive as on transmit. A circulator
then switches the echo signal into the receiver where it is bandpass filtered and
input to a low noise amplifier (LNA). A variable gain amplifier (VGA) typically
follows the LNA to normalize the signal amplitude according to the target
backscatter. This signal is then frequency shifted to an intermediate frequency
(IF) for narrowband filtering to the pulse bandwidth, amplified, and again
shifted either to baseband or offset video by a series of filters and mixers. This
video signal is split by a power divider and digitized using dual analog-to-digital
convertors (ADCs) clocked to sample the in-phase (I) and quadrature (Q)
components of the baseband signal. Alternatively, a single ADC sampling at a
rate twice the system bandwidth can be used to digitize the offset video. The
ADC output is then time expansion buffered in a high speed random access
memory (RAM) to achieve a constant rate data stream. This data is then passed
to a formatter unit which appends the header (e.g., GMT clock, calibration
signals, engineering telemetry) and outputs the data to the platform bus.
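The single-ADC offset-video alternative can be sketched numerically. This is our own illustration, not a design from this chapter: the sample rate, the placement of the offset-video center at fs/4, and the two-tap lowpass are assumptions chosen so the arithmetic is exact.

```python
# Illustrative sketch (not from the text): one real ADC stream sampled at
# twice the bandwidth, with the offset-video center assumed at fs/4, is
# digitally converted to the same I/Q samples a dual-ADC scheme measures.
import math, cmath

fs = 4.0e7     # assumed sample rate: twice an assumed 20 MHz video bandwidth
f_if = fs / 4  # assumed offset-video center frequency
phi = 0.7      # example echo phase to be recovered

# Single real ADC stream: samples of an offset-video tone
x = [math.cos(2 * math.pi * f_if * n / fs + phi) for n in range(64)]

# Digital quadrature demodulation: mix to baseband, then lowpass.
mixed = [v * cmath.exp(-2j * math.pi * f_if * n / fs) for n, v in enumerate(x)]
# With f_if = fs/4 the unwanted image lands at fs/2, which a two-tap
# average nulls exactly.
iq = [(a + b) / 2 for a, b in zip(mixed, mixed[1:])]

# Every output sample equals 0.5*exp(j*phi): the baseband I/Q pair.
print(round(iq[0].real, 3), round(iq[0].imag, 3))
```

The design choice here (IF at a quarter of the sample rate) makes the mixing sequence cycle through 1, -j, -1, j, which is why simple digital down-conversion suffices.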
Downlink

The platform bus transfers the formatted video data to an onboard signal
processing system via the digital data routing electronics for recording,
processing, or transmission to a ground receiving station (Fig. 6.3). The ground
receiver and Level 0 processor demodulate the carrier signal, strip off the channel
coding (that is applied for bit error protection), and correct for telemetry artifacts
(e.g., data sequence, polarity). The platform bus may include high density digital
recorders (HDDRs) for temporary data storage, digital signal processors for
removal of the pulse code modulation (range compression), and/or synthetic
aperture formation (azimuth compression). Real-time onboard signal processors
are most commonly found in SAR systems designed for military applications.
The complexity of the onboard hardware could be significantly reduced by
downlinking the analog video SAR signal and digitizing the data at the receiving
station (e.g., Seasat). The disadvantage of this design is a degraded system
performance, specifically in terms of the radiometric calibration accuracy.
Processing

The recovered digitized SAR video data is either recorded on HDDRs by the
Level 0 processor (and the tapes shipped to the Level 1 processing facility), or
the data is retransmitted and recorded at the Level 1 facility for real-time or
off-line processing (Fig. 6.4). The first stage of the Level 1 processing performs
data synchronization and reformatting. Since the data is recorded in range line
order, the Level 1A signal processing in the range dimension typically precedes
the azimuth processing. Almost all correlator systems use two one-dimensional
reference functions as an approximation for the exact two-dimensional matched
filter. Since the Level 1A output is a single-look complex image, this processing
stage is essentially reversible. The processing operations in the Level 1B
processor include: (1) radiometric corrections to remove the cross-track signal
power modulation; (2) geometric resampling to a map grid; and (3) detection
and lowpass filtering for speckle noise reduction. In general these operations,
which produce the radiometrically and geometrically calibrated imagery for
extraction of surface information, are not reversible. The final processing stage,
the Level 2, 3 processor, generates the high level, non-image products. This
processor converts the calibrated imagery into geophysical data units that
represent some type of surface characteristic. Very few of the Level 2, 3 products
can be produced in a fully automated fashion, due to the complex scattering

Figure 6.3 Block diagram of the platform and data downlink subsystem illustrating major
subassemblies.

Figure 6.4 Block diagram of the ground data processing subsystem illustrating major processing
stages.

mechanisms that give rise to the target reflectivity. Since this processing requires
operator interaction, it is typically not considered as an element of the SAR
ground data system, even though it is an essential processing stage for extracting
information from the SAR data.
An example of an end-to-end SAR data system design and operation is
presented in Appendix C. This appendix describes the NASA Alaska SAR
Facility ground processing system. This system is designed to process data from
the ESA-ERS-1 SAR, the NASDA J-ERS-1 SAR and the Canadian Radarsat
systems. It includes all elements of the signal processing chain described above,
including a Level 2, 3 processor for multitemporal tracking of sea ice and
production of ice concentration maps and ocean wave spectra contour plots.

6.2

257

RADAR PERFORMANCE MEASURES

As previously discussed, the end-to-end system performance depends on an
accurate characterization of the flight hardware. System phase and amplitude
errors as well as distortion errors degrade the impulse response function.
Typically three measures are used to characterize this function:

1. Mainlobe broadening (K_ml);
2. Peak side-lobe ratio (PSLR);
3. Integrated side-lobe ratio (ISLR).

The mainlobe broadening is usually defined as the actual 3 dB width relative
to an ideal value estimated assuming no system errors. The sidelobe measures
are evaluated over a region that excludes the mainlobe response (e.g., the ideal
null-to-null width). The PSLR is the ratio of the largest sidelobe value (outside
a specified mainlobe region) to the mainlobe peak, while the ISLR is the
integrated sidelobe to mainlobe power ratio. Prior to considering the individual
subsystem elements, we first present techniques to analyze the linear and
nonlinear system distortions.

6.2.1 Linear System Analysis

The assumption of system linearity allows us to characterize each element of
the SAR system from the antenna to the signal processor in terms of its transfer
function. Unmodeled or random errors in any of these elements will degrade
the performance of the matched filtering process in the SAR correlator and
produce errors in the impulse response function (increased sidelobe levels and
mainlobe broadening). Certain types of errors tend to affect the system impulse
response mainly in the azimuth dimension while others are predominant in the
range dimension. Range distortion often results from errors in narrowband
filters, amplifiers, and other devices over the period of the pulse dispersion.
Azimuth distortion results from pulse-to-pulse errors such as timing jitter or
drift of the coherent local oscillator.

System Transfer Function Distortion Analysis

The radar system transfer function can be modeled as a distortionless filter
followed by a distortion filter as shown in Fig. 6.5. We will use the paired echo
technique (Klauder et al., 1960) to analyze the distortion filter with frequency
domain transfer admittance Y(ω) given by

    Y(ω) = A(ω) exp[jB(ω)]        (6.2.1)

where A(ω) is the amplitude transfer characteristic and B(ω) is the phase
characteristic. These terms can be expanded in a Fourier series as follows

    A(ω) = a_0 + Σ_{n=1}^∞ a_n cos(ncω)        (6.2.2a)

    B(ω) = ωb_0 + Σ_{n=1}^∞ b_n sin(ncω)        (6.2.2b)

where c is a complex constant dependent on the filter bandwidth.

Figure 6.5 Linear distortion model of radar system transfer function.

Each term in the summation of Eqn. (6.2.2) will give rise to a set of echoes
on either side of the desired impulse response. As an example, consider the case
where n = 1 (i.e., only one ripple component is present). If a signal s(t) is applied
to the filter given by Eqn. (6.2.1), the output r(t) is (Berkowitz, 1965)

    r(t) = a_0 J_0(b_1) s(t + b_0) + a_0 Σ_{m=1}^∞ J_m(b_1)
           x [(1 + m a_1/(a_0 b_1)) s(t + b_0 + mc)
              + (-1)^m (1 - m a_1/(a_0 b_1)) s(t + b_0 - mc)]        (6.2.3)

where J_i is the Bessel function of the first kind and ith order. The first term in
Eqn. (6.2.3) is the desired signal, weighted by the zero order Bessel function.
Each term of the summation consists of two echoes, advanced and delayed
replicas of s(t) weighted by the mth order Bessel function. The desired output
signal relative to the input signal is delayed by b_0 and the paired echoes are
displaced from the desired output by mc (Fig. 6.6).

Figure 6.6 (continued)

Note that the first phase distortion term b_1 actually gives rise to an infinite
series of paired echoes which are generally neglected beyond the first pair. The
peak sidelobe ratio (PSLR) from each amplitude ripple term is given by

    PSLR_a = 20 log(a_n/2a_0)        (6.2.4)
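The amplitude-ripple echo level behind Eqn. (6.2.4) can be checked numerically. The sketch below is our own illustration (the DFT length, ripple amplitude a_1 = 0.1, and displacement d = 5 samples are arbitrary): a cosine ripple across the passband splits an impulse into the main response of amplitude a_0 plus a pair of echoes of amplitude a_1/2.

```python
# Numerical check (illustrative values) that a cosine amplitude ripple
# a0 + a1*cos(.) produces paired echoes of amplitude a1/2 displaced by
# +/-d samples, giving the PSLR of Eqn. (6.2.4).
import cmath, math

N = 32
a0, a1, d = 1.0, 0.1, 5        # d plays the role of the displacement term c

s = [0.0] * N
s[N // 2] = 1.0                # input: a unit impulse

# Forward DFT, multiply by the rippled amplitude response, inverse DFT.
S = [sum(s[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]
Y = [S[k] * (a0 + a1 * math.cos(2 * math.pi * k * d / N)) for k in range(N)]
y = [sum(Y[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

mag = [abs(v) for v in y]      # main response plus the two paired echoes
pslr = 20 * math.log10(mag[N // 2 + d] / mag[N // 2])
print(round(pslr, 1))          # 20*log10(a1/(2*a0)) = -26.0 dB
```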

and similarly the peak sidelobe for each phase ripple term is given by

    PSLR_p = 20 log(b_n/2)        (6.2.5)

Generally, the system overall peak sidelobe performance is dominated by one
of the terms in either Eqn. (6.2.4) or Eqn. (6.2.5).

For small amplitude errors, these terms degrade the impulse response function
predominantly as a result of quadratic and higher order amplitude versus
frequency characteristics (i.e., rms error around a linear fit across the passband).
Similarly, only quadratic and higher order phase errors relative to the desired
phase versus frequency function will degrade the impulse response. System
timing errors (including sample jitter) can be treated as phase errors. As a
general rule of thumb, lower order terms (in the summation of Eqn. (6.2.2)) will
produce mainlobe broadening errors, whereas the higher order terms will affect
predominantly the sidelobes.

To assess the effects of random errors on the mainlobe width and ISLR, a
good approximation is that, for small errors, the ISLR is given by the variance
of the error about the best linear fit. Thus, for amplitude errors

    ISLR_a = 20 log σ_a        (6.2.6)

where σ_a is the rms amplitude error about a linear fit across the frequency band,
and for phase errors

    ISLR_p = 20 log σ_p        (6.2.7)

where σ_p is the rms phase error in radians about the desired phase function.
Similarly, the mainlobe broadening for amplitude and phase errors respectively
is given by

    K_ml,a = (1 - σ_a)^-2        (6.2.8)

    K_ml,p = (1 - σ_p)^-2        (6.2.9)

where K_ml is the broadening factor relative to the theoretical mainlobe width.

Each element in the radar subsystem will produce a phase and amplitude
error characteristic. To derive an overall performance specification for the radar,
it is typically assumed that each error source is an independent process,
characterized by some probability distribution function (PDF). The resultant
PDF of all error sources is assumed Gaussian, by the central limit theorem,
with mean and variance given by the sums of the mean and variance
contributions of the individual error processes. This formulation allows the
effective ISLR of the system to be calculated as the sum of the ISLR contributions
from each subassembly or component comprising the subassembly. A similar
formulation can be made for the mainlobe broadening. Thus the overall system
performance is given by

    ISLR = 10 log( Σ_i σ_i² )        (6.2.10)

    K_ml = Π_i K_ml,i        (6.2.11)

where σ_i is the standard deviation of the phase or amplitude error of the ith
subassembly and K_ml,i is the fractional broadening from the ith subassembly.

Measurement Techniques

A technique commonly used to measure the amplitude characteristic of the
system transfer function is to use as input a series of equal level tones spaced
across the frequency spectrum and measure the output signal using a power
meter. A least squares linear fit is applied to the data points from which an
rms error performance is derived. To measure the phase characteristic, a series
of pulses, each at a different frequency spaced across the system bandwidth, is
used as an input. The relative change in group delay of each pulse is measured
at the output using a network analyzer. This group time delay t_d(ω) and the
phase distortion B(ω) are related by

    t_d(ω) = -dB(ω)/dω        (6.2.12)

Numerical quadrature is used to derive the phase versus frequency data points.
A least square error quadratic fit is applied to these points and the rms phase
error is calculated from the residuals. For timing error measurements, a counter
can be used to measure the relative differences between the leading edges of a
series of timing pulses. The variance of these measurements determines the
timing jitter, which can then be converted into phase error by

    σ_p = 2π f σ_t        (6.2.13)

where σ_t is the rms timing jitter and f is the frequency of the measured signal.
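As an illustrative error-budget calculation (all subsystem numbers below are made up; the expressions are the PSLR and ISLR forms above and the timing-jitter conversion σ_p = 2π f σ_t):

```python
# Illustrative error-budget arithmetic (all values are made-up examples)
# for the impulse response measures discussed above.
import math

a0, a1 = 1.0, 0.02    # assumed passband amplitude ripple component
b1 = 0.03             # assumed phase ripple, radians

pslr_amp = 20 * math.log10(a1 / (2 * a0))   # amplitude-ripple PSLR, Eqn. (6.2.4)
pslr_phs = 20 * math.log10(b1 / 2)          # phase-ripple PSLR, Eqn. (6.2.5)

sigma_a = 0.015       # assumed rms amplitude error about the linear fit
sigma_p = 0.02        # assumed rms phase error, radians
islr_amp = 20 * math.log10(sigma_a)         # Eqn. (6.2.6)
islr_phs = 20 * math.log10(sigma_p)         # Eqn. (6.2.7)

sigma_t = 2.0e-12     # assumed 2 ps rms timing jitter
f = 1.275e9           # e.g., an L-band carrier frequency
sigma_p_jitter = 2 * math.pi * f * sigma_t  # jitter-to-phase conversion, radians

print(round(pslr_amp, 1), round(islr_amp, 1), round(sigma_p_jitter, 3))
# -40.0 dB amplitude-ripple PSLR, -36.5 dB amplitude ISLR, about 0.016 rad
```

Exercising the formulas this way makes the scales concrete: percent-level ripples already place sidelobes tens of dB below the mainlobe, while picosecond jitter at an L-band carrier contributes only hundredths of a radian of phase error.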

6.2.2 Nonlinear System Analysis

While most radar systems are designed such that their components operate in
the linear region over a wide range of inputs, the actual performance of the
radar can never be strictly categorized as linear. Given that the return echo
amplitude modulation is random, some fraction of the data (i.e., the tails of the
probability distribution) will always be in saturation (Fig. 6.7). If the percentage
of the data in saturation is small, the system is in a quasi-linear operation mode,
where the nonlinearities are characterized by the level of harmonic or
intermodulation distortion. This distortion characteristic is typically evaluated
by using a sinusoid as an input signal and measuring the spurious power in
the output signal spectrum. However, this technique only approximately
estimates the system nonlinearity since the signal distortion is generally
dependent on the frequency of the sinusoidal input. Radar system test procedures
therefore typically call for testing with a number of tones spaced at frequencies
across the system bandwidth. Additionally, two-tone tests are performed to
evaluate intermodulation distortion, where several pairs of tone inputs are used
to characterize the second order system nonlinear response characteristics. It
should be noted, however, that no finite set of tone inputs can fully characterize
the system nonlinearity. An alternative measurement technique using Gaussian
white noise as an input is described in Appendix D. This approach, which
is used routinely in physiological system analysis, provides a complete
characterization of the system nonlinearities. However, even though a
comprehensive set of tones and tone pairs spread across the system bandwidth
does not fully characterize the nonlinearity of the radar system (receive chain),
the Gaussian white noise technique is rarely used to characterize radar system
nonlinearities. It can be argued that even if such tests were performed, meaningful
interpretation of the results in terms of the output image quality is difficult at best.

Figure 6.7 System transfer function illustrating the effect of saturation on the echo signal where
s_I is the input signal and s_O is the output signal.

Radar system nonlinearities typically arise from saturation or limiting in
devices such as amplifiers and mixers. Additionally, crossover distortion can
arise due to the nonlinear characteristics of a device changing operating modes
(e.g., high power switch). Most receivers have several amplification stages as
the signal is mixed down to baseband (Fig. 6.2). Typically, at the intermediate
frequency (IF) stage, a variable gain amplifier (VGA) or a switched attenuator
is inserted to adjust the position of the receiver instantaneous dynamic range
to best accommodate the expected range of backscattered power. An important
consideration in the receiver design is to set the video amplifier saturation point
such that the front end (RF or IF) amplifiers can saturate without first saturating
the video amplifier (i.e. the amplifier that matches the analog output to the
ADC) over all possible VGA settings. Nonlinear effects resulting from saturation
in the early stages of the receiver could be masked by additive noise and therefore
be difficult to detect in the digitized video signal.

In addition to the harmonic distortion, the effect of system nonlinearities
also depends on the settling time or system memory. The settling time is generally
defined as the time required for the response to an input to return to zero once
the input is removed. The response to a signal at some time t_1 and the response
to an identical signal at t_2 is not the same if

    t_2 - t_1 < t_m        (6.2.14)

where t_m is the system memory or settling time. The system memory can be
measured using a two-pulse input, where the response to each pulse is measured
as a function of the time spacing between inputs. The minimum time interval
which results in identical responses to the two inputs is the settling time. This
parameter could also be measured directly from the autocorrelation function
of the system response to white noise. The nonlinear characteristics of the analog
to digital conversion process will be considered in more detail in the section
on ADCs.
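The single-tone and two-tone tests described above are easy to sketch numerically. In the following, a memoryless tanh saturation stands in for a receiver amplifier; the model and the tone frequencies are illustrative assumptions, not measured radar characteristics.

```python
import numpy as np

# Sketch of a two-tone intermodulation test on a memoryless saturating
# nonlinearity (tanh stands in for a receiver amplifier); the model and the
# tone frequencies are illustrative assumptions, not values from the text.
fs = 1.024e6                       # sample rate, Hz (chosen so tones fall on FFT bins)
n = 1024
t = np.arange(n) / fs
f1, f2 = 100e3, 110e3              # two in-band test tones
x = 0.8 * (np.sin(2*np.pi*f1*t) + np.sin(2*np.pi*f2*t))
y = np.tanh(x)                     # memoryless saturating transfer function

Y = np.abs(np.fft.rfft(y))**2      # output power spectrum
freqs = np.fft.rfftfreq(n, 1/fs)

def power_at(f):
    """Spectral power in the bin centered on frequency f."""
    return Y[np.argmin(np.abs(freqs - f))]

im3 = power_at(2*f2 - f1)          # third-order product at 120 kHz
tone = power_at(f1)
print(10*np.log10(im3 / tone))     # IM3 level relative to a tone, in dB
```

Because the nonlinearity here is memoryless, the spurious power appears only at the harmonic and intermodulation frequencies; a real receiver with memory would require the two-pulse or white-noise tests discussed above.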
6.3 THE RADAR SUBSYSTEM

This section will review the four assemblies of the radar subsystem in terms of
their performance characteristics and design trade-offs.
6.3.1 Timing and Control

The timing and control assembly consists of a free-running crystal oscillator


and the associated frequency multiplier and divider circuitry to generate the
signals required by the other subsystem assemblies. Additionally, a microprocessor
is typically included to generate the signal sequences required for the radar
operation. A stable local oscillator (stalo) with good short-term relative stability
is essential for the radar to perform in the SAR mode. Specifically, the transmitted
signal phase must be retained to coherently demodulate the received echo. The
stalo drift can be translated into azimuth phase error by

σ_p = 2π f_c τ σ_y(τ)          (6.3.1)

where τ is the round trip propagation time of a pulse, f_c is the carrier frequency,
and σ_y(τ) is the Allan variance of the crystal oscillator. The Allan variance,
which is typically provided by the manufacturer, is defined as the fractional
frequency drift (Δf/f) over some time interval of interest τ. The following
example illustrates the performance requirements for the stalo design.
Example 6.1 Typical performance for a crystal oscillator such as the Hewlett-Packard
HP10811 with an f_o = 10 MHz is σ_y(τ) = 1 × 10^-10 for τ on the
order of milliseconds. As an example, consider the E-ERS-1 system where
R = 850 km, τ = 5.7 ms and f_c = 5.3 GHz. Assuming the HP10811 oscillator
is used for the stalo, from Eqn. (6.3.1), the azimuth phase noise is

From Eqn. (6.2.7) and Eqn. (6.2.8), the azimuth impulse response function is
degraded by
ISLR_p ≈ -34.5 dB
K_mlp ≈ 1.0007

which are negligible errors.
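The numbers in Example 6.1 can be checked with a short computation. The relation used, σ_p = 2π f_c τ σ_y(τ), is the stalo phase-error form of Eqn. (6.3.1) as read from the surrounding text, so the algebraic form should be treated as an assumption:

```python
import math

# Numerical check of Example 6.1, assuming the stalo phase-error relation
# sigma_p = 2*pi*f_c*tau*sigma_y(tau) of Eqn. (6.3.1).
f_c = 5.3e9       # E-ERS-1 carrier frequency, Hz
tau = 5.7e-3      # round-trip propagation time, s
sigma_y = 1e-10   # HP10811 Allan variance over millisecond intervals

sigma_p = 2 * math.pi * f_c * tau * sigma_y   # radians
print(math.degrees(sigma_p))                  # about one degree of azimuth phase noise
```

A phase noise of roughly a degree rms is consistent with the negligible ISLR and mainlobe broadening quoted above.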


The long-term stability of the stalo (over the mission) is also an important
consideration in maintaining the carrier within its specified frequency band.
Additionally, since the stalo provides timing signals for the other assemblies,
long-term drift could cause some timing errors, although typically systems are
designed such that the effect of long-term drift produces negligible system
performance degradation.

6.3.2 RF Electronics

The RF electronics assembly can be divided into the following main subassemblies:
(1) Exciter; (2) Transmitter; and (3) Receiver. We will discuss the performance
and design trade-offs of each subassembly.

Exciter

The exciter subassembly generates a coded pulse waveform from the continuous
tone stalo output. As described in Chapter 3, coding of the transmitted pulse
provides a range resolution, δR, dependent only on the bandwidth of the pulse
code (i.e., δR = c/2B_R, where B_R is the pulse code bandwidth). Since δR is
independent of the pulse duration, the transmitter peak power requirements
can be reduced by extending the pulse duration without degrading either the
resolution or the SNR. This peak power reduction simplifies the transmitter
design, increasing both performance and reliability as well as reducing the risk
of breakdown or arcing in the high power cables.

Among the various pulse coding schemes, frequency coding and phase coding
are commonly used, with frequency coding by far the most popular. The
frequency codes can be categorized as linear or nonlinear FM. The linear FM
or chirp code is used in most radar systems, primarily due to its ease of
implementation and its insensitivity to Doppler shifts. Almost all currently
operational (non-military) SAR systems, as well as those planned for the 1990s
(with the exception of Magellan), use a linear FM chirp (Fig. 6.8a). Nonlinear
FM codes (e.g., Taylor weighted) are used primarily in military applications
where very low sidelobes are required (Fig. 6.8b). The nonlinear chirp permits
exact matched filtering (i.e., range compression) without the severe SNR loss
that would result from an equivalent processor weighting of a linear FM signal
(Butler, 1980).

Phase code modulation is used primarily in systems where the available
resources (i.e., power, mass) are limited or in situations where a relatively
inexpensive coding implementation is required. Most popular is the binary
phase code, where a 180° phase shift is switched into the circuit at periodic
intervals (Fig. 6.8c). The sequence of 0's (no shift) and 1's (180° shift), which
occur at uniform intervals of Δt (a chip), is chosen to achieve the best possible
sidelobe characteristics. For small pulse compression ratios (≤ 13 chips per
pulse), Barker codes are commonly used due to their optimal equal-level sidelobe
characteristics. However, since longer codes are required for most SAR systems
(e.g., Magellan has 60 chips per pulse), pseudorandom sequences such as the
maximal-length sequence are more common. A detailed treatment of these and
other phase coding techniques is given in Cook and Bernfeld (1967).

Dispersive Delay Lines. The most common implementation of the FM code
is a surface acoustic wave (SAW) dispersive delay line (DDL) of the configuration
shown in Fig. 6.9a. The SAW DDL typically consists of two complementary
transducers, each composed of a number of electrodes whose periodicity varies

Figure 6.8 Pulse coding schemes: (a) Linear FM code; (b) Nonlinear FM code; (c) Binary phase
code, where τ_p is the pulse duration, B_R is the pulse bandwidth, and Δt = 1/B_R is the chip
duration for the binary phase code.
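The equal-level sidelobe property claimed for Barker codes is easy to verify numerically. The length-13 code below is the standard published sequence (written out here as an illustration, not copied from the text):

```python
import numpy as np

# The aperiodic autocorrelation of the length-13 Barker code has a mainlobe
# of 13 and sidelobes of magnitude at most 1 (a 22.3 dB peak-to-sidelobe ratio).
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

acf = np.correlate(barker13, barker13, mode='full')   # aperiodic autocorrelation
mainlobe = acf[len(barker13) - 1]                     # zero-lag term
sidelobes = np.abs(np.delete(acf, len(barker13) - 1))

print(mainlobe, sidelobes.max())   # 13 and 1
```

The same check applied to a maximal-length sequence shows why the longer codes trade the strictly equal-level sidelobes of the Barker family for greater length.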


(having higher density at the higher frequencies). The position and length of
the electrodes set the phase and amplitude response. The DDL is essentially a
linear filter whose group delay varies over the system bandwidth. The delay
versus frequency characteristic can range from a linear, flat amplitude response
to a nonlinear weighted response for sidelobe suppression. Typical time
expansion factors are on the order of 1000 where, for example, a 30 ns input
is gated from the stalo to produce a 30 μs pulse. For large time bandwidth
products (TB > 1000), spurious internal reflections can degrade the phase and
amplitude performance characteristics. To reduce these effects an inclined
transducer geometry is used (Fig. 6.9b, c). Without special compensation, at
TB ≈ 1000 the peak sidelobes of the autocorrelation function are typically 30
to 35 dB down from the mainlobe (Phonon Corp., 1986).

The advantages in using a DDL for pulse code generation are that it is a
proven technology, the performance specifications in terms of TB and
pulse-to-pulse jitter meet most system specifications, and it is relatively lightweight.
Its key disadvantages are that it is inflexible (i.e., fixed code) and that it is lossy
(up to 60 dB at TB = 1000).
Figure 6.9 Surface acoustic wave dispersive delay line: (a) Double dispersion inline geometry;
(b) Double dispersion inclined geometry; (c) Close-up view of SIR-B inclined transducer geometry.

Digital Pulse Coding. Exciter technology is at a transition point, where most
existing exciters utilize the DDL technology while current and future system
designs use digital technology. There are several techniques for digitally
generating the pulse code waveform. The digital phase shifter method, used in
SIR-C, essentially switches an inline phase shifter through a piecewise linear
approximation of a quadratic function (for linear FM) over the pulse duration
(Fig. 6.10a). To achieve PSLR and ISLR performance in the -30 to -35 dB
range, the slope of the linear phase approximation must be updated at an
interval Δt that satisfies (Klein, 1987)

(6.3.2)

where B_R is the pulse (chirp) bandwidth and τ_p is the pulse duration. An
alternative approach is to prestore the codes in a fast random access memory
(RAM) and gate these signals through a digital to analog convertor (DAC)
and a bandpass filter (Fig. 6.10b).
The key advantage of a digital coding system is that it can achieve much
higher performance than analog DDL systems. The pulse-to-pulse jitter and
frequency versus time linearity can be controlled by adjusting the code
quantization and sampling parameters. Additionally, the flexibility of the digital
system provides implementation options for multiple bandwidths, nonlinear
chirps, or binary phase codes, all within a single exciter assembly. The technology
in high speed ( > 200 MHz) buffer memories and digital-to-analog convertors
is such that the prestored code technique is currently suitable only for airborne
systems. The digital phase shifter approach is preferable for spaceborne systems
at this time due to the lack of space qualified parts.
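The prestored-code idea can be sketched in a few lines: the samples of a baseband linear FM pulse are computed and quantized offline, as they might be loaded into the exciter RAM ahead of the DAC. The bandwidth, pulse duration, sample rate, and word size below are illustrative assumptions, not design values from the text.

```python
import numpy as np

# Hedged sketch: generate and quantize the samples of a baseband linear FM
# (chirp) pulse for a prestored-code exciter. All numbers are illustrative.
B = 19e6                 # chirp bandwidth, Hz
tau_p = 33.8e-6          # pulse duration, s
fs = 4 * B               # DAC sample rate (4x oversampled)
n_bits = 8               # RAM/DAC word size

t = np.arange(int(fs * tau_p)) / fs
k = B / tau_p                              # chirp rate, Hz/s
phase = np.pi * k * (t - tau_p / 2)**2     # quadratic phase of a linear FM pulse
samples = np.cos(phase)                    # real-valued baseband chirp

levels = 2**(n_bits - 1) - 1               # signed 8-bit range, +/-127
ram_table = np.round(samples * levels).astype(np.int16)
print(len(ram_table), ram_table.min(), ram_table.max())
```

Changing B, τ_p, or the quantization parameters regenerates the table, which is the flexibility argument made above for the digital approach.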


Exciter Performance. The key parameters in characterizing exciter performance
are pulse-to-pulse timing jitter and rms phase and amplitude errors. The phase
and amplitude errors can be characterized in terms of mainlobe and sidelobe
characteristics of the pulse autocorrelation function as discussed in Section 6.2.1.
Typical numbers are σ_p ≈ 3° rms and σ_a ≈ 1.0 dB. The amplitude distortion in
the DDL is not a factor since the signal is clipped in the transmitter (see next
section). The pulse-to-pulse jitter degrades the azimuth impulse response. The
jitter error is translated into azimuth phase error by

σ_p = 2π f σ_t          (6.3.3)

where σ_t is the standard deviation of the pulse-to-pulse jitter and f is the
operating frequency of the DDL. For a 10 MHz SAW DDL, a 0.5 ns jitter
produces σ_p ≈ 2°, resulting in an ISLR of -24 dB.

Figure 6.10 Digital pulse coding schemes: (a) Inline phase shifter; (b) Prestored code.

Transmitter

The transmitter subassembly consists of a series of mixers and bandpass filters
to convert the coded exciter pulse output to the carrier frequency. The low
power input signal is fed into a high gain amplifier (HGA) unit for generation
of the high power signal that is output to the antenna feed system. The HGA
components commonly used in the SAR are either solid state or tube amplifiers.
Generally the trade-off to be made is the high peak power and efficiency available
from a tube versus the reliability of a parallel solid state amplifier network.

Traveling Wave Tubes. The tube commonly used in most airborne and some
spaceborne systems is the traveling wave tube (TWT). The TWT consists of an
electron gun (heated cathode, control grid and anode), a delay line, and a
collector (Fig. 6.11a). The electron beam, formed by an electric field, passes

Figure 6.11 Traveling wave tube: (a) Circuit layout; (b) Gain characteristic of broadband TWT
amplifier.
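The DDL jitter figures quoted in the exciter performance discussion can be checked directly, assuming the jitter-to-phase relation σ_p = 2πfσ_t as read from the text:

```python
import math

# Check of the jitter-to-phase relation sigma_p = 2*pi*f*sigma_t for the
# 10 MHz SAW DDL numbers quoted in the text (0.5 ns of pulse-to-pulse jitter).
f = 10e6            # DDL operating frequency, Hz
sigma_t = 0.5e-9    # pulse-to-pulse timing jitter, s

sigma_p = 2 * math.pi * f * sigma_t      # phase error, radians
print(math.degrees(sigma_p))             # about 1.8 degrees, i.e. roughly 2
```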

through a delay line, where the beam energy is transferred to the delay line,
effectively amplifying the RF signal. The TWT is characterized by both high
gain and large (octave) bandwidths. For radar applications, the tubes are
typically operated in saturation (Fig. 6.11b) to maximize the available output
power and to ensure a stable power level despite variation in the input signal.
However, operation in this region makes the TWT a nonlinear device and
harmonics of the fundamental signal are generated that must be removed using
a bandpass filter. The efficiency of microwave TWTs (1-10 GHz) has improved
to 30-50% with advanced collector designs. Typical gains are 45 to 60 dB.

Solid State Amplifiers. Most lower frequency spaceborne SAR systems (i.e.,
L- and S-bands) employ solid state amplifier designs for improved reliability.
A parallel-cascaded design is used to achieve the required output power.
Consider the SIR-B amplifier network as an example (Fig. 6.12). The low power
signal is initially split into three parallel channels. Each channel is amplified
with a set of (Class A) predriver amplifiers operating in the linear region. These
are followed by isolation circulators and Class C driver amplifiers. This driver
signal is input to a power amplifier subassembly which consists of a series of
bipolar transistor stages to achieve the required gain. Combiners are then used
to reassemble this parallel network output into a single high power signal. This
SIR-B design using 50 W bipolar transistors produced a 1.5 kW output power
at about 12-15% efficiency (Huneycutt, 1985). Current technology using GaAs
FETs can achieve 20-25% efficiency at C- and L-bands and about half of that
at X-band.

Figure 6.12 SIR-B solid state high gain amplifier design.

Receiver

The receiver assembly is typically divided into a radio frequency (RF) stage,
an intermediate frequency (IF) stage, and a video frequency (VF) stage
(Fig. 6.2). The RF front end basically consists of a: (1) Limiter to prevent high
power signals (from the transmitted pulse or interfering radars) from damaging
the system; (2) Bandpass filter (which is wide compared to the pulse bandwidth)
to limit the spurious signal power; and (3) Low noise amplifier (LNA) whose
noise figure is a key factor in establishing the overall system signal to thermal
noise ratio. The noise figure is given by

F = 10 log(SNR_i / SNR_o)          (6.3.4)

where SNR_i and SNR_o are the signal to noise ratios at the input and output
of the amplifier respectively. This measure is a figure of merit for noise internally
generated in the amplifier (Section 2.6.2). A typical noise figure is 3-4 dB for
an L-band amplifier and about 1 to 1.5 dB higher at C-band.

The intermediate frequency stage typically consists of: (1) IF amplifier(s);
(2) Variable gain amplifier (VGA); and (3) Bandpass filter(s), slightly wider
than the pulse bandwidth, to limit the system noise. The VGA is used to set
the quiescent gain of the system for a given data acquisition sequence. However,
for some systems the instantaneous dynamic range of the signal is such that a
sensitivity time control (STC) or an automatic gain control (AGC) is required
to reduce the signal dynamic range. These techniques are discussed in a later
section.

The video frequency stage consists of: (1) Low pass filter; and (2) Video
amplifier to match the output of the receiver to the ADC input. A second VGA
may be included in this stage.

At each stage a number of directional couplers are inserted as test points
and a calibration signal is injected using a directional coupler, typically at the
front end following the circulator, or just preceding the IF amplifier.
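The noise-figure definition F = 10 log(SNR_i/SNR_o) is simple enough to illustrate with one computation; the SNR values below are illustrative assumptions chosen to land in the L-band range quoted above.

```python
import math

# Illustration of the noise-figure definition F = 10*log10(SNR_i / SNR_o):
# an amplifier that degrades a 20 dB input SNR to 16.5 dB has a 3.5 dB noise
# figure, within the 3-4 dB range quoted for L-band receivers.
snr_in_db, snr_out_db = 20.0, 16.5   # assumed input/output SNRs
snr_i = 10**(snr_in_db / 10)
snr_o = 10**(snr_out_db / 10)

F = 10 * math.log10(snr_i / snr_o)   # noise figure, dB
print(round(F, 6))
```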

Receiver Performance. Each element in the receiver subassembly is characterized


by its phase and amplitude errors across the passband using the techniques
described in Section 6.2.1. The nonlinear distortion can be characterized by
frequency analysis using a series of single-tone and two-tone test inputs.
Harmonic and intermodulation distortions arise when the input signal exceeds
the linear dynamic range of the individual components. Typically, the mixers
or the amplifiers limit the system dynamic range and contribute the bulk of the
spurious responses. However, with existing off-the-shelf components, a receiver
linear dynamic range (from noise floor to the 1 dB compression point) of
50 dB is achievable with an rms phase error of less than 5° and an rms amplitude
error of less than 0.5 dB. Typical amplitude and phase errors for receiver
components are given in Table 6.2.

TABLE 6.2. Typical Amplitude and Phase Errors for Receiver Components

Component              Peak-to-peak           Peak-to-peak
                       Amplitude Error (dB)   Phase Error (deg)
Attenuators            0                      0.5
Power dividers         0                      0.5
Circulators            0.1                    0.5
Directional couplers   0.1                    0.5
Mixers                 0.1                    0.5
Switches               0.1                    0.5
Limiters               0.1                    0.5
Amplifiers             0.2                    1.0
Filters                0.3                    2.0

Automatic Gain Control (AGC) and Sensitivity Time Control (STC). To


compensate for the large variation in the echo dynamic range, many radar
receivers incorporate automated systems to adjust the receiver gain. The purpose
of these devices is to effectively reduce the signal dynamic range as seen by the
IF and VF stages of the receiver and the ADC. The sensitivity time control
(STC) implements a time dependent variable gain to compensate for the
systematic amplitude modulation characteristic of each echo. Typically, the STC
is triggered by a control pulse whose timing is set by the predicted round trip
delay time. A common method for implementing the STC is to generate a
repetitive voltage ramp which is applied to the gain control inputs of a series
of cascaded variable gain amplifiers. The exact shape for the ramp ideally would
be

G_STC(τ) = (R³(τ) sin γ(τ) / G²(τ))^(1/2)          (6.3.5)

where G(τ) is the nominal vertical antenna pattern as a function of echo delay
time τ projected into the cross-track ground plane, γ(τ) is the look angle, and
R is the slant range. The STC function used in the Seasat receiver is shown in
Fig. 6.13.
An automatic gain control (AGC) is typically designed to compensate for
intrapulse variation in the return echo power, minimizing changes in the echo
dynamic range resulting from variation in target reflectivity. Essentially, these
devices employ a control loop with a detector (integrator) to estimate the
received power across a portion of the echo. The integrated power estimate is
fed back with a negative gain to the receiver VGA. The trade-off in AGC
performance is dependent on the time constant of the servo loop. It must be
short to minimize the feedback error, yet sufficiently long for the integrator to
make an accurate estimate of the echo power.
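An STC ramp of the kind described can be sketched as a voltage gain versus echo delay. The gain shape assumed below (range-cubed spreading over a toy two-way elevation pattern) and all numerical values are illustrative assumptions, not the Seasat design values.

```python
import numpy as np

# Hedged sketch of an STC gain ramp versus echo delay. The voltage-gain form
# and all numbers are illustrative assumptions.
c = 3e8
tau = np.linspace(5.60e-3, 5.75e-3, 64)          # echo delay across the record window, s
R = c * tau / 2                                  # slant range, m
gamma = np.deg2rad(np.linspace(19.0, 26.0, 64))  # look angle across the swath
theta0 = np.deg2rad(22.5)                        # elevation boresight (assumed)
G = np.sinc((gamma - theta0) / np.deg2rad(6.0))**2   # toy one-way power pattern

g_stc = np.sqrt(R**3 * np.sin(gamma)) / G        # assumed STC voltage-gain shape
g_stc = g_stc / g_stc.min()                      # normalize the ramp to unity minimum
print(g_stc[0], g_stc[-1])                       # gain rises toward both swath edges
```

In a hardware implementation this sampled ramp would drive the gain-control inputs of the cascaded VGAs, triggered by the predicted round-trip delay as described above.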

Figure 6.13 STC function used in the Seasat receiver.

The main shortcoming of these variable gain amplifiers is that they make
radiometric calibration of the data extremely difficult. Not only does the inverse
of this gain function need to be applied in the signal processor before matched
filtering, but any changes in the relative phase characteristic of
the receiver transfer function must also be compensated. Although ideally these
characteristics can be measured preflight as a function of temperature, generally
neither an AGC nor an STC can be reliably used when precise amplitude and
phase calibration is required.

6.3.3 Antenna

The SAR antenna assembly typically consists of a single high gain antenna, used
for both transmit and receive, comprising a feed system, structural elements
(including deployment mechanisms), and radiating elements. For the Magellan
system, the SAR antenna is also used for the communications downlink. The
basics of antenna design can be found in any of a number of books (e.g.,
Stutzman and Thiele, 1981). The key antenna parameters affecting the SAR
performance are the antenna gain (or directivity) and its beam pattern. The
antenna gain is directly proportional to its area. Assuming uniform illumination,
the gain is given by (see Section 2.2)

G = ρD = ρ(4πA/λ²)          (6.3.6)

where ρ = ρ_e ρ_a is the antenna efficiency, ρ_e is the radiation efficiency (loss), ρ_a
is the aperture efficiency, D is the directivity, λ is the wavelength, and A is the
aperture area. Typically, to achieve the required SNR for spaceborne systems,
aperture gains of 30 dB or more are required.
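As a quick check of Eqn. (6.3.6), a Seasat-class L-band planar aperture comfortably clears the 30 dB figure; the efficiency value here is an illustrative assumption.

```python
import math

# Check of the aperture-gain relation G = rho * 4*pi*A / lambda**2 for a
# Seasat-class L-band aperture; the efficiency value is an assumption.
wavelength = 0.235          # L-band wavelength, m
A = 10.7 * 2.16             # aperture area, m^2 (Seasat-class planar array)
rho = 0.6                   # assumed overall antenna efficiency

G = rho * 4 * math.pi * A / wavelength**2
print(10 * math.log10(G))   # comfortably above the 30 dB quoted for spaceborne SARs
```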
An additional minimum area constraint is imposed by the ambiguity
characteristics of the system. Material in Section 1.2 shows that, to prevent
overlapping echoes in range, the antenna width must satisfy

W_a ≥ 2λR f_p tan η / c          (6.3.7)

where η is the incidence angle, f_p is the pulse repetition frequency (PRF), and
c is the propagation speed of light in free space. Similarly, to prevent overlapping
azimuth Doppler spectra, the antenna length must satisfy

L_a ≥ 2V_st / f_p          (6.3.8)

where V_st is the relative sensor to target velocity.

From Eqn. (6.3.7) and Eqn. (6.3.8) the minimum antenna area required in
order to satisfy the ambiguity constraints is

A_min = W_a L_a = 4λR V_st tan η / c          (6.3.9)

The range and azimuth ambiguities will be considered in more detail in


Section 6.5.
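The minimum-area constraint of Eqn. (6.3.9) can be evaluated for representative L-band orbital parameters; the numbers below are illustrative assumptions, not a specific mission design.

```python
import math

# Evaluation of the minimum-area constraint A_min = 4*lambda*R*V_st*tan(eta)/c
# for representative (assumed) L-band orbital parameters.
wavelength = 0.235        # m
R = 850e3                 # slant range, m
V_st = 7.0e3              # sensor-target relative velocity, m/s
eta = math.radians(23.0)  # incidence angle
c = 3e8

A_min = 4 * wavelength * R * V_st * math.tan(eta) / c
print(A_min)   # a few square meters; real antennas are sized well above this
```

Practical apertures are considerably larger than this bound, since the gain requirement of Eqn. (6.3.6) and ambiguity margins both push the design upward.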
To achieve the required gain, most spaceborne systems use either a microstrip
phased array (e.g., Seasat, SIR-B) or a slotted waveguide (e.g., X-SAR) design
(Fig. 6.14). Typically, for very large apertures such as L-band spaceborne
antennas, a microstrip design is preferred, which is lightweight, relatively low
cost, and achieves good performance over limited bandwidths (Carver and
Mink, 1981 ). The slotted waveguide phased array design is used by X-SAR,
E-ERS-1, and Radarsat. Other antenna designs have been used (e.g., Magellan
uses a 2.3 m circular dish), but because of the large apertures required by
Eqn. (6.3.6) to Eqn. (6.3.9) a phased array is generally the most efficient and
cost effective design.
An advantage of a microstrip phased array antenna is that both the radiating
elements and the feed system can be etched into a printed circuit (PC) board
(Munson, 1974). Additionally, solid state components such as phase shifters
and amplifier modules can be added to provide electronic beam scanning and
an improved SNR. The Spaceborne Imaging Radar (SIR-C) antenna is such
an active phased array design (Huneycutt, 1989). The SIR-C antenna is a
dual-frequency quad-polarized assembly, transmitting both H and V linearly
polarized signals (on alternate half interpulse periods) and receiving both the
like- and cross-polarized returns. A low level RF coded pulse is input to the
corporate feed subassembly and divided in elevation, as shown in Fig. 6.15 for
the L-band. Note that Fig. 6.15 shows only half of two mirror image L-band
panels in the elevation aperture. The placement of the phase shifters relative to
the transmitter/receiver (T /R) modules effects a taper across the aperture. The
layout of the individual panels within the aperture is shown in Fig. 6.16. The
array is 12 m in length and 4.1 m in width, consisting of 18 C-band panels on
the bottom, a 9 x 2 array of L-band panels in the center, and a vertically
polarized slotted waveguide X-band antenna on top. The Shuttle attach edge
is at the bottom of the pictured array.
Each C-band panel has 18 rows of radiating elements and each L-band panel
has 9 rows. Each row is fed by a phase shifter and T /R module in the center
of 6 patches for L-band and 18 patches for C-band. The phase shifter is a 4 bit

Figure 6.14 Antenna designs: (a) Microstrip phased array L-band antenna used in SIR-B;
(b) Slotted waveguide X-band antenna used in X-SAR. (Courtesy of H. Ottl.)

Figure 6.15 SIR-C L-band antenna feed system (one-half of symmetrical design) illustrating the
incorporation of active elements to achieve amplitude taper.

PIN diode design. The C-band HPA is a 3-stage GaAs FET operating in Class
A for a 25 dB gain, while the L-band is a 3-stage silicon bipolar transistor
design operating in Class C for a 29 dB gain. The LNA designs are GaAs FET
and silicon bipolar for the C- and L-bands, respectively, each achieving a noise
figure of 1.5 dB. The ferrite circulator provides 20 dB of isolation at 0.5 dB
insertion loss.
The design of the SIR-C antenna is illustrative of the future of spaceborne
SAR technology. Although SIR-C uses discrete components for its T/R modules
and phase shifters, monolithic microwave integrated circuits (MMIC) are
approaching the point where they can now be considered viable for a spaceborne
SAR application. The advantage is that the electronics can be incorporated into
the printed circuit board with the microstrip radiator and the feed network,
providing a fully integrated system. The MMIC devices have been demonstrated
at frequencies from under 1 GHz to over 100 GHz. As the RF frequency of a
device is increased, generally both the output power and the efficiency drop.
Typical numbers for L- or C-band devices are 40 to 50% efficiency at 5-10 W
output power, dropping to 25% efficiency and 3-5 W output at X-band.
A key issue limiting wide application of this technology remains the manufacturing
yield and therefore the production costs.

Figure 6.16 Panel layout of SIR-C antenna.

Antenna Performance. The antenna gain (or efficiency) and the pattern shape

are certainly two key considerations in the antenna design; however, a number
of other specifications must be met for adequate performance. As previously
discussed for the receiver, phase and amplitude errors across the passband will
degrade the system impulse response function. In the antenna assembly we must

also consider the phase and amplitude errors as functions of the off-boresight
angle within the mainlobe of the antenna. It is not unusual for the antenna to
be a major source of phase and amplitude error, especially in the case of a
microstrip phased array antenna which inherently has relatively small bandwidth.

The polarization purity is also an important consideration in the antenna
design. This is especially true in a multipolarization radar such as the SIR-C,
where the relatively low power cross-polarized return is used to derive
information about the target. In this, as in any pulsed radar system, ambiguous
responses can arise, not only from the desired radiated pattern but also from
the spurious cross-polarized signals. This can be a significant error source if
the cross-polarized pattern is designed such that it has a null coinciding with
the peak gain of the like-polarized mainlobe as in Fig. 6.17. This pattern is
designed to minimize the function

I_r = ∫_0^θ_B G_xpol(θ) dθ / ∫_0^θ_B G_lpol(θ) dθ          (6.3.10)

where θ_B is the azimuth mainlobe beamwidth and G_xpol(θ) and G_lpol(θ) are the
azimuth cross-polarized and like-polarized gain patterns, respectively, as a
function of off-boresight angle θ. However, due to the finite sampling of the
azimuth spectrum, the signal components outside of the f_p/2 region fold over
into the main portion of the azimuth signal band.

A consideration often overlooked in the antenna specification is that the
cross-polarized pattern may have large sidelobes when its null is placed in the
mainlobe pattern. For a linearly polarized antenna the horizontal pattern
function to be minimized is

I_r = ∫_(θ_B ± f_p) G_xpol(θ) dθ / ∫_(θ_B) G_lpol(θ) dθ          (6.3.11)

where θ_B ± f_p refers to the range or azimuth angles that give rise to signal
components within the processing bandwidth (including ambiguous regions).
A typical performance requirement for the cross-polarization isolation as defined
in Eqn. (6.3.11) is -25 to -30 dB. SAR ambiguities will be further discussed in
Section 6.5.1.

Figure 6.17 Like- and cross-polarized patterns illustrating high cross-polarized sidelobes for a
mainlobe null design.

6.3.4 Digital Electronics and Data Routing

The digital data handling assembly (DDHA) consists of the analog-to-digital
convertor (ADC), random access memory (RAM) for data buffering, and data
routing/switching electronics to route data to either the signal processor or
to the high density digital recorders. The ADCs convert the analog video signal
into a binary data stream by sampling the voltage at fixed time intervals. The
ADC is a nonlinear device in which the output power versus input power is
ideally a stairstep function (Fig. 6.18).

Figure 6.18 Transfer function of an ideal ADC.

Assuming a Gaussian distributed input and a uniform quantizer (Max, 1967),
the theoretical signal to quantization noise ratio is

SQNR = 6n_b + 1.8 (dB)          (6.3.12)


where nb is the number of bits per sample. The actual SQNR is typically less
than that given in Eqn. (6.3.12) due to errors in the quantizer. The ADC errors
can generally be classified as either timing errors or quantization level errors.
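As a numerical check of the rule of thumb in Eqn. (6.3.12), a directly quantized full-scale sinewave (the classical case for the 6 dB-per-bit rule) lands close to 6n_b + 1.8 dB; the test-signal length and frequency are illustrative choices.

```python
import numpy as np

# Check of the rule of thumb SQNR = 6*n_b + 1.8 dB (Eqn. 6.3.12) against a
# directly quantized full-scale sinewave.
n_b = 8
levels = 2**(n_b - 1) - 1                        # signed amplitude range
x = np.sin(2 * np.pi * 0.01234567 * np.arange(100000))
q = np.round(x * levels) / levels                # uniform quantization

sqnr = 10 * np.log10(np.mean(x**2) / np.mean((x - q)**2))
print(sqnr, 6 * n_b + 1.8)                       # measured vs. rule of thumb
```

A real ADC measures below this ideal figure because of the timing and quantization-level errors discussed next.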
Errors classified as timing errors are sample clock jitter and sample bias, which
result in a relative phase error between the two ADCs in a quadrature sampling
design. Sample jitter gives rise to a phase error according to Eqn. (6.3.3), where
σ_t is now defined as the standard deviation of the sample jitter and f is the
sampling frequency. Sample bias errors are typically stable or slowly varying
and can be measured with calibration signals and corrected in the ground
processor. Quantization level errors result from DC bias (a shift in all
quantization levels) or errors in the relative spacing between levels (differential
nonlinearities). The DC bias is easily corrected in the signal processor by
estimating the mean of the digitized video signal. The ADC SQNR is reduced
by the ratio of the bias voltage to the full-scale voltage of the ADC. Similarly,
the differential nonlinearities can be estimated by the processor if a full scale
sinewave test signal is available. Comparing the ADC sinewave histogram to
the ideal probability distribution function (PDF), the differential nonlinearity
(Dq) in least significant bits is given by

Dq = (xi / X) / Pi − 1    (6.3.13)

where xi is the total number of counts in the ith bin, Pi is the expected fractional
number of counts in the ith bin for an ideal ADC, and X is the total number
of samples in the histogram.

THE RADAR SUBSYSTEM

The SNR given by Eqn. (6.3.12) describes the ADC performance given a full
scale deterministic input signal. Since the digitized SAR video is a random,
Gaussian distributed, zero mean signal, the SNR depends on the statistics of
the echo (Zeoli, 1976). The assumption of a Gaussian distribution is reasonable
considering that the typical antenna footprint is large and that the echo consists
of scattering from a diverse ground area. The noise energy is calculated for each
sample as the square of the difference between the input analog value and its
digital reconstructed value. This noise is commonly classified into saturation
noise and quantization noise components. The saturation noise is defined as
the noise arising from input analog signals that exceed the maximum or
minimum range of the analog-to-digital converter, while the quantization noise
is the error resulting from input signals within the ADC dynamic range. For a
Gaussian input signal these noises are given by

Ns = 2 ∫_{x(Lv+1)}^{∞} (x − vLv)^2 p(x) dx    (6.3.14)

Nq = Σ_{i=1}^{Lv} ∫_{xi}^{x(i+1)} (x − vi)^2 p(x) dx    (6.3.15)

where Ns and Nq are the saturation and quantization noise powers respectively,
p(x) is the Gaussian PDF for the input signal, xi and vi are the quantization
and digital reconstruction levels respectively, and

Lv = 2^nb    (6.3.16)

is the total number of digital reconstruction levels.

For a uniform quantizer (i.e., having equally spaced quantization levels xi across
the ADC dynamic range), we can plot the signal to distortion noise (saturation
plus quantization) as a function of the standard deviation of the input Gaussian
signal and the number of bits per sample, nb. These curves are plotted in
Fig. 6.19. Note that at low standard deviations (i.e., weak signals) the noise is
dominated by the quantization component, which appears as a log linear
function with a 6 dB improvement in the SDNR for each quantization bit

Figure 6.19 Distortion noise as a function of input power and number of bits per sample.

according to Eqn. (6.3.12). For input signals with large standard deviations
(high power) the saturation component dominates. Thus, independent of the
number of quantization bits, as the input signal power increases each curve
tends toward unity SDNR.
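The behavior of these curves can be checked directly. The sketch below is not from the book: it numerically evaluates the saturation and quantization noise integrals of Eqns. (6.3.14) and (6.3.15) for a uniform quantizer, assuming a full-scale range of ±1 and a zero-mean Gaussian input; all function names are our own.

```python
# Sketch (illustrative, not from the book): numerically evaluate the
# saturation and quantization noise, Eqns. (6.3.14) and (6.3.15), for a
# uniform nb-bit quantizer with an assumed full-scale range [-1, 1].
import math

def gaussian_pdf(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def distortion_noise(nb, sigma, full_scale=1.0, steps=4000):
    levels = 2 ** nb                      # Lv, Eqn. (6.3.16)
    q = 2 * full_scale / levels           # quantization step size
    def reconstruct(x):
        # clip to the ADC range, then round to the nearest bin centre
        x = max(-full_scale, min(full_scale - 1e-12, x))
        i = int((x + full_scale) / q)
        return -full_scale + (i + 0.5) * q
    # integrate (x - v(x))^2 p(x) over +/- 8 sigma by the midpoint rule
    lo, hi = -8 * sigma, 8 * sigma
    dx = (hi - lo) / steps
    nq = ns = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * dx
        err2 = (x - reconstruct(x)) ** 2 * gaussian_pdf(x, sigma) * dx
        if abs(x) > full_scale:
            ns += err2                    # saturation noise, Eqn. (6.3.14)
        else:
            nq += err2                    # quantization noise, Eqn. (6.3.15)
    return nq, ns

def sdnr_db(nb, sigma):
    nq, ns = distortion_noise(nb, sigma)
    return 10 * math.log10(sigma ** 2 / (nq + ns))

# weak signals are quantization limited; strong signals are saturation limited
for sigma in (0.01, 0.05, 0.2, 1.0):
    print(f"nb=8, sigma={sigma}: SDNR = {sdnr_db(8, sigma):.1f} dB")
```

Sweeping sigma reproduces the qualitative shape of the Fig. 6.19 curves: a quantization-limited rise at low input power and a saturation-limited rolloff at high input power.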
Therefore, in terms of the optimal gain setting for the video amplifier preceding
the ADC in the SAR receive chain, there is a unique value that produces the
maximum signal to distortion noise ratio, SDNR = S/(Nq + Ns) (Sharma,
1978). As the number of bits per sample increases for a given input signal power,
the gain setting (that gain maximizing the signal to distortion noise) should be
reduced to balance the saturation and distortion noise components. In setting
the gain in the receiver subassembly, it should be noted that, in any one imaging
period (e.g., a frame or a synthetic aperture), the standard deviation of the echo
may vary from a very low value (a low backscatter region) to a high value (a
bright backscatter region). Thus, the dynamic range of the echo over time
intervals on the order of the synthetic aperture time or longer may be much
greater than the instantaneous dynamic range of the return from targets within
a small time interval. For many types of natural targets, instantaneous dynamic
ranges of 25 dB within a short time interval are not uncommon. Adding to this
is the additional dynamic range required to accommodate the antenna pattern
modulation, the range attenuation, and the cross-track variation in the sample
cell size. The instantaneous dynamic range required in the ADC may be 40 dB
or more. Receiver techniques to reduce this dynamic range, such as the sensitivity
time control (STC) or the automatic gain control (AGC), have major drawbacks
in that these devices degrade the system radiometric calibration accuracy.
With the advent of high speed, wide dynamic range ADCs the need for either
an STC or an AGC to reduce the echo dynamic range is greatly diminished.
Table 6.3 lists some of the commonly available ADCs. Devices capable of
TABLE 6.3. List of Currently Available Analog to Digital Converters

Sampling Frequency (MHz): 10, 12, 20, 20, 30, 36, 50, 60, 100, 100, 250, 300, 525

Bits/sample: 8, 10, 10, 12, 12, 10, 8, 8

ADC Channels: 1, 2, 2, 1, 1, 2, 2, 1, 8, 4, 1, 1

Manufacturers: Analog Devices, TRW, Analog Devices, Sony/Tektronics, Analogic,
Nicolet, Sony/Tektronics, Analogic, Biomation, Hughes, Tektronics, Hughes

Source: Courtesy of S. W. McCandless, Jr., 1989.


100 Msamples/s at 8 bits/sample can be bought "off the shelf". For radar
systems with bandwidths over 50 MHz, in-phase and quadrature sampling can
be employed using two devices, each operating at half the Nyquist rate of 2BR.
In most radar systems, oversampling is applied to minimize the effects of
aliasing. For Seasat, the system range bandwidth is 19.0 MHz and the real
sampling frequency is 45.54 MHz, resulting in an oversampling factor

gor = fs / BR ≈ 1.2    (6.3.17)
where fs, the sampling frequency of the I, Q detected complex signal, is one
half the real sampling frequency. When calculating the effective distortion noise
for an ADC that uses oversampling, a reasonable approximation is that the
quantization noise will be reduced by the oversampling factor, while the
saturation noise is essentially unaffected. This noise reduction occurs during
the range matched filtering operation in the signal processor. An analogous
reduction in quantization noise occurs in the azimuth signal processor as a
result of the PRF to processing bandwidth (Bp) oversampling of the azimuth
spectrum. The assumption inherent in the above statement is that the quantization
is essentially white over the range and azimuth spectra of the echo data. This
has been demonstrated by simulation of the noise spectra (Li, et al., 1981).
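As a quick check of Eqn. (6.3.17), the Seasat numbers quoted above give an oversampling factor of about 1.2 and a corresponding quantization-noise reduction of under 1 dB. The arithmetic is sketched below (illustrative only; variable names are our own).

```python
# Sketch: Seasat range oversampling factor, Eqn. (6.3.17), using the
# values quoted in the text. fs is the complex (I, Q) sampling rate,
# i.e. one half the real sampling frequency.
import math

real_fs_mhz = 45.54
br_mhz = 19.0                  # range bandwidth
fs_mhz = real_fs_mhz / 2       # complex sampling frequency
g_or = fs_mhz / br_mhz         # range oversampling factor
print(f"g_or = {g_or:.2f}")

# quantization noise after range compression is reduced by roughly g_or,
# i.e. 10*log10(g_or) dB; saturation noise is essentially unaffected
print(f"quantization-noise reduction ~ {10 * math.log10(g_or):.2f} dB")
```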

6.4 PLATFORM AND DATA DOWNLINK

Most spaceborne SAR systems and a few airborne systems downlink the digitized
SAR echo data to ground receiving stations. The key downlink characteristics
that affect the SAR system performance are: ( 1) The noise introduced by bit
errors; and (2) the downlink data rate (which limits either radar swath width,
duty cycle, or dynamic range). These two factors are interdependent since
increasing the bandwidth of the downlink signal processor to increase the data
rate also increases the noise bandwidth and therefore the probability of a bit
error. A detailed treatment of the trade-offs in the design of communication
systems, link budgets, and error encoding schemes can be found in the literature
(Carlson, 1975). Here we will consider the SAR system design options, given a
downlink communications system with a known probability of bit error Pb (or
bit error rate) and bandwidth (or maximum data rate).
6.4.1 Channel Errors

Following quantization of the SAR video signal, the data stream is passed to
the platform data bus. There it is either captured on a high density recorder
for non-real-time transmission to the ground receiving station, or directly
downlinked via the communications subsystem signal processor. This signal
processor typically encodes the data with some error protection code (e.g.,

284

SAR FLIGHT SYSTEM

6.4

convolutional code) and modulates the downlink carrier signal with the resultant
coded data using quaternary phase shift keying (QPSK).
The error statistics of this system depend on the type of error protection
code used. Although randomly occurring bit errors are typically assumed for
the link, if a convolutional code of long constraint length is used, burst error
statistics can result (Deutsch and Miller, 1981). It should be noted that NASA
has adopted a convolutional code, constraint length 7, rate 1/2, as standard
for the Shuttle high rate data downlink. The NASA TDRSS downlink from
the Shuttle is relayed by White Sands Receiving Station to Goddard Space Flight
Center (GSFC) via a high rate Domsat link. The data transfer is actually through
a cascade of two links (TDRSS and Domsat), each using a different coding
scheme. The effects of the two links in tandem could cause severe burst errors.
Consider the situation shown in Fig. 6.20 for the NASA high rate shuttle data
transmission. The probability of bit error for the entire link is given by

Pb = (1 − Pb1)Pb2 + (1 − Pb2)Pb1 + Pb1 Pb2 = Pb1 + Pb2 − Pb1 Pb2    (6.4.1)

where Pb1 is the bit error probability for the shuttle to TDRS to White Sands
segment and Pb2 is the bit error probability for the White Sands to Domsat to
GSFC segment. The third term in Eqn. (6.4.1) represents the coupling between
the two links, which could produce burst errors with a longer expected burst
length than is characteristic of either link individually. However, if the
performance of each link is sufficiently high, the probability of occurrence of
the bursts is small and

Pb ≈ Pb1 + Pb2    (6.4.2)

An analysis of the Shuttle to TDRSS link indicates that the signal-to-noise ratio
is 6.5 dB resulting in Pb ≈ 10^−5 with an average burst length of 4-5 bits and
an expected period between bursts of 2 × 10^5 bits for the 1/2 rate, length 7 code.

To determine the effect of bit errors on the SAR performance, we assume
that the bit errors occur randomly in time. This allows us to apply Bernoulli's
theorem, where the probability of an m-bit error in an nb-bit code word is given
by

Pm = [nb! / (m!(nb − m)!)] Pb^m (1 − Pb)^(nb−m)    (6.4.3)

For a single m-bit error within any code word designated by vi, the resultant
code word vj contributes a noise term

Nb(i) = [1/(Lv − 1)] Σ_{j=1, j≠i}^{Lv} ∫_{xi}^{x(i+1)} (x − vj)^2 p(x) dx    (6.4.4)

where the signal between xi and x(i+1) is digitized to vi and Lv, given by Eqn.
(6.3.16), is the total number of possible output levels for v. Therefore the total
bit error noise is given by

Nb = Pm Σ_{i=1}^{Lv} Nb(i)    (6.4.5)

where Pm is given by Eqn. (6.4.3). An expression for Pm given multiple errors
can be found in Beckman (1967). The effect of bit errors on the signal to
distortion noise is shown in Fig. 6.21. For small values of Pb (i.e., random
errors), the effect is essentially to raise the quantization noise according to the
noise power given in Eqn. (6.4.5). The assumption in this analysis is that the
bit error noise power spectrum is flat across the system bandwidth (Li et al.,
1981). A final point is that, given the bit error noise is white, the noise power
Nb in Eqn. (6.4.5) is further reduced by the oversampling factor given in
Eqn. (6.3.17).

Figure 6.20 NASA space shuttle high rate communications downlink signal path.

6.4.2 Downlink Data Rate Reduction Techniques

A major constraint in the design of most current spaceborne SAR systems (e.g.,
E-ERS-1, J-ERS-1, Radarsat) is the available downlink data rate. For these
systems, the swath width is either data rate limited, or the system dynamic range
has been degraded by reducing the number of bits per sample. To illustrate the
downlink capacity required by a typical SAR system, we present the following
example.

Figure 6.21 Effect of random bit errors on signal to distortion noise ratio as a function of signal
power for 8 bit quantization.

Example 6.2 Consider a spaceborne SAR system with the following characteristics:
Quantization nb = 8 bits/sample;
Bandwidth BR = 20 MHz;
Antenna Length La = 12 m;
Swath Width Wg = 100 km;
Incidence Angle η = 45°.

The required minimum slant range swath width is approximately

Ws ≈ Wg sin η = 71 km

which corresponds to a data sampling window duration of

τw ≈ 2Ws/c = 471 μs

Assuming an oversampling factor gor = 1.2, the sampling frequency is fsr =
48 Msamples/s. The number of samples per range line is therefore

Nr = fsr τw = 22,600 samples

and the instantaneous data rate is

nb fsr = 384 Mbit/s

To determine the average data rate we need the PRF. The Doppler bandwidth
is given by

BD = 2Vst/La = 1250 Hz

where Vst ≈ 7.5 km/s is the spaceborne sensor to target velocity. Assuming the
same oversampling factor in azimuth,

fp = goa BD = 1500 Hz

Assuming the ADC output is buffered to achieve time expansion over the entire
inter-pulse period, the average (sustained) real-time downlink data rate is

Nr nb / Tp ≈ 271 Mbit/s

where Tp = 1/fp is the inter-pulse period.
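The chain of calculations in Example 6.2 can be scripted directly. The sketch below is illustrative (variable names are our own choosing) and reproduces the bookkeeping of the example from the quantities defined in the text.

```python
# Sketch of the data-rate bookkeeping in Example 6.2; treat this as an
# illustration of the calculation, not as mission design values.
import math

nb = 8                  # bits/sample
br = 20e6               # range bandwidth, Hz
la = 12.0               # antenna length, m
wg = 100e3              # ground swath, m
eta = math.radians(45)  # incidence angle
g_o = 1.2               # oversampling factor (range and azimuth)
vst = 7.5e3             # sensor-to-target velocity, m/s
c = 3e8

ws = wg * math.sin(eta)          # slant range swath, about 71 km
tau_w = 2 * ws / c               # sampling window, about 471 us
fsr = g_o * 2 * br               # real sampling frequency, 48 Msamples/s
nr = fsr * tau_w                 # samples per range line, about 22,600
bd = 2 * vst / la                # Doppler bandwidth, 1250 Hz
fp = g_o * bd                    # PRF, 1500 Hz
rate = nr * nb * fp              # sustained downlink rate, bits/s
print(f"sustained rate = {rate / 1e6:.1f} Mbit/s")
```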


From this example we can easily see that, to achieve the 8 bit quantization
necessary to preserve the echo dynamic range and the wide swath, we need an
extremely high data rate downlink. Typically a downlink data rate of this
magnitude cannot be achieved, since it would require a large downlink
transmitter and antenna subsystem that cannot be accommodated within the
platform resources, given the large mass and power requirements of the SAR.
The alternative is to reduce the system performance by modifying either the
system design or the data collection procedure. Among the available options are:
1. Increase the azimuth length (La) of the SAR antenna and reduce the PRF
and/or the azimuth oversampling factor (goa) at the cost of increased mass
and degraded azimuth resolution;
2. Reduce the system bandwidth (BR) and/or the range oversampling factor
(gor) at the cost of range resolution;
3. Reduce the swath width (Wg) or change the imaging geometry to a steeper
incidence angle (η) at the cost of ground coverage and increased geometric
distortion from foreshortening and layover effects (Chapter 8);


4. Reduce the quantization to fewer bits per sample (nb) at the cost of
increased distortion noise and therefore a degraded impulse response
function and radiometric calibration accuracy (Chapter 7).
Assuming the swath width is maintained, these data rate reduction options
essentially become a trade-off between degrading either: (1) Geometric (spatial)
resolution; or (2) Radiometric resolution (dynamic range).
If a tape recorder is available onboard for capture of the real-time output,
then the sensor duty cycle could also be factored into the required downlink
capacity. Furthermore, if an onboard processor were available to generate the
image data in real time, the resolution degradation could be performed by
multilook averaging, thus reducing the speckle noise in the process.
6.4.3

Data Compression

Spatial data compression has long been used as a technique for data volume
reduction. Generally, the assumption in most compression algorithms is that
some type of redundancy exists in the representation of the data (Jain, 1981).
Many data compression algorithms have been devised to reduce redundancy
based on the statistics of the data set. Compression algorithms are classified as
either lossy or lossless.
The lossy (or noisy) algorithms are designed to achieve a relatively large
compression factor with the loss of some information (i.e., added noise) in the
reconstructed data. Conversely, a lossless (or noiseless) algorithm is capable of
exactly reconstructing the original data set from the compressed data stream.
For an application such as reducing the downlink data rate, lossy algorithms
are rarely considered for scientific instruments. This is due to the inability to
predefine what an acceptable information loss would be, since the data is to
be used for a variety of research applications. Lossy algorithms will be considered
in more detail in the ground segment of the SAR data system (Chapter 9) for
the distribution of browse image products.
Lossless compression, on the other hand, has been routinely used to reduce
the downlink data rate for optical instruments (Rice, 1979). The redundancy
in the data set is typically characterized by its zero order entropy (Shannon, 1948)

H0 = −Σ_{i=1}^{Lv} Pi log2 Pi    (6.4.6)

where Lv is the number of quantization levels and Pi is the probability a sample
will assume the value vi. A basic assumption in Eqn. (6.4.6) is that of stationarity
for the data statistics. The entropy of a data set establishes the minimum number
of bits required to represent the information in each data sample. It is therefore
a useful measure to characterize the potential for lossless compression of the
SAR raw data downlink.
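Eqn. (6.4.6) can be estimated directly from a sample histogram. The sketch below is not from the book; it simply shows that uniformly distributed 8-bit data attains the maximum entropy of 8 bits/sample, the case in which no lossless compression is possible.

```python
# Sketch: zero-order entropy, Eqn. (6.4.6), estimated from a histogram
# of the sample values.
import math
from collections import Counter

def zero_order_entropy(samples):
    counts = Counter(samples)
    n = len(samples)
    # H0 = -sum(Pi * log2(Pi)) over the observed levels
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# uniform 8-bit data attains the maximum H0 = 8 bits/sample
uniform = list(range(256)) * 4
print(zero_order_entropy(uniform))
```

For 8-bit SAR data with H0 of 6-7 bits/sample, this bound caps the lossless compression factor at roughly 8/7 to 8/6, consistent with the factor of about 1.2 quoted below.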


An analysis of SAR raw data from the NASA DC-8 airborne system indicates
that H0 ≈ 6-7 bits/sample. Thus, assuming 8 bit quantization, the maximum
achievable compression factor is on the order of 1.2. An analysis of this sort
must take into account that the SAR data is stationary only over a small time
and space interval, and therefore the entropy of the data depends on the local
target characteristics. Furthermore, when characterizing the SAR data, care
must be taken to ensure that the radar system is not limiting the data dynamic
range prior to the ADC.
Assuming that a 20% savings could be achieved in the downlink data rate
without loss of information, data compression could provide a substantial
improvement in the radar system performance (wider swath, more bits per
sample, etc.). However, realistically there is no lossless data channel, since bit
errors from the transmission will always degrade the data. In fact, most lossless
compression algorithms result in the data being more susceptible to bit errors,
effectively increasing the BER for a given link performance. To offset this factor,
error protection codes must be applied to the data before transmission. Since
the overhead for error protection is typically 20 % or more, a real savings in
the downlink data rate is not achieved.
Several studies have been performed using lossy compression to reduce the
downlink data rate. They conclude that the vector quantization algorithm
exhibits good performance (Reed et al., 1988). Compression factors as high as
10: 1 have been claimed, but to date a full error analysis has not been performed
to quantitatively assess the actual impact on image quality.
6.4.4 Block Floating Point Quantization

A more useful technique to achieve a reduction in the downlink data rate is


block floating point quantization (BFPQ), also referred to as block adaptive
quantization (BAQ). The BFPQ algorithm is based on the fact that over a
small time interval (in both azimuth and range) the entropy of the data is lower
than is that of the data set as a whole. The block floating point quantizer is a
device that receives the output data stream from the ADC (Fig. 6.22) and codes
the uniformly quantized data samples into a more efficient representation of
the data, requiring only mb bits/sample (mb < nb). This technique cannot be
strictly considered as lossless compression, since certain portions of the data
set (e.g., land/water boundaries) may exhibit an entropy (or dynamic range)
larger than the number of bits (mb) used in the BFPQ representation, resulting
in an increased distortion noise.
The BFPQ technique is analogous to the AGC, in that the sampled radar
echo data are integrated (in power) over a period of time to determine a threshold
(or exponent) for that block of data. Given this threshold, the BFPQ codes
each data sample output from the ADC such that it represents only the variations
about the threshold value for that block of data. The dynamic range of the
data within the block essentially requires fewer bits per sample to achieve a
signal to distortion noise ratio comparable to the original uniformly quantized

input. Consider a data block of la samples in azimuth and lr samples in range.
A single threshold is derived for each data block and is downlinked with the
encoded data. The compression factor is therefore given by

C = (la lr nb) / (la lr mb + nt)    (6.4.7)

where nb, mb, and nt are the number of bits required to represent the original
data sample, the BFPQ data sample, and the threshold, respectively. The
instantaneous dynamic range of the BFPQ data is that of an mb bit uniform
quantizer. However, its adjustable dynamic range is that of the original nb bit
quantizer. Thus, the BFPQ will preserve the full information content of the
input data stream if the dynamic range of the original data within any la × lr
data block does not exceed the dynamic range of the mb bit quantizer.

The assumption in the BFPQ design is that, within a given block of data,
the signal intensity with high probability does not exceed some prescribed
dynamic range. Thus, selection of the block size is essential to proper
performance of the BFPQ. The factors to be considered in selection of the block
size are:

1. The block should contain a sufficient number of samples to establish
Gaussian statistics for the data set used in estimating each threshold. Due
to the speckle noise a minimum of 50 to 100 samples is required.
2. The block should be small in range relative to the variation in signal power
due to antenna pattern modulation and range attenuation. The design
should allow a maximum variation of only 1-2 dB from these effects.
3. The block should also be small in range relative to the number of samples
in the pulse; and small in azimuth relative to the synthetic aperture length.
Typically the data is approximately stationary over 1/4 to 1/2 of the pulse
and synthetic aperture lengths.

Figure 6.22 Functional block diagram of the block floating point quantizer (BFPQ): (a) SAR
data system with a BFPQ; (b) Design of the SIR-C BFPQ with nb = 8, mb = 4, nt = 5.

The BFPQ Algorithm

The BFPQ algorithm divides the digitized SAR video data into blocks, where
the sample variance within a block is small compared to the variance across
blocks within the data set. The variance of the samples within each block is
estimated to determine the optimum quantizer, which minimizes the distortion
error for that block. In effect, the BFPQ operates as a set of quantizers with
different gain settings. The problem of quantizing for minimum distortion given
a certain probability density p(x) was first addressed by Max (1960). He showed
that, given a Gaussian distributed input, using a minimum mean square error
criterion, a uniform quantizer is optimum. Assuming Gaussian statistics within
the data block, the BFPQ algorithm is as follows:

1. For each input data block the standard deviation σ is calculated. Typically
this is implemented by calculating the mean of the absolute value of each
sample and relating this to σ by

(1/N) Σ_{k=1}^{N} |xk| = [2/(√(2π) σ)] Σ_{i=1}^{Lv/2} (xi + 0.5) ∫_{xi}^{x(i+1)} exp(−x²/2σ²) dx    (6.4.8)

where Lv is the number of quantization levels and the xi are the normalized
quantizer transition points.

2. Each sample in the block is scaled by the estimated standard deviation for
that block and the result compared to the optimum quantization levels for
an mb bit quantizer with σ = 1.
3. The resulting mb bit word and the estimated threshold (which is an nt bit
quantized value of |x|) are downlinked.
4. The BFPQ decoder in the ground receiver determines the correct multiplier
(gain) from the quantized threshold and reconstructs a floating point
estimate of the original data sample.

Example 6.3 Consider the BFPQ design used in the Magellan spacecraft
mapping Venus (Kwok and Johnson, 1989). Due to the small mass and power
budgets available on a deep space probe, such as Magellan, the peak downlink
data rate is constrained to approximately 270 kbps. Additionally, the data link
is available only 50% of the time since the SAR and the communications system
share the high gain antenna. To achieve the prime mission objective of mapping
the entire planet within one year at 150 m resolution, some type of data
compression was required.

A BFPQ of (8,2) was adopted (i.e., nb = 8 bits, mb = 2 bits). The analog
video signal data is quantized to values between −128 and 128, while the block
size used for the estimate of each threshold is set at lr = 16 range samples and
la = 8 azimuth pulses. The system, shown in Fig. 6.23, is designed such that the
estimated threshold value is applied to a following data block. The standard
deviation is estimated by the absolute sum method given in Eqn. (6.4.8). The
input data is normalized by this value and quantized according to the uniform
quantizer levels given in Table 6.4.

Figure 6.23 BFPQ design used for the Magellan SAR with nb = 8, mb = 2, nt = 8 (Courtesy of
H. Nussbaum).

TABLE 6.4. Uniform 2 Bit Quantizer Transfer Function (Max, 1960)

Input Signal Level*        BFPQ Output
0.9816σ ≤ x                1 1
0 ≤ x < 0.9816σ            1 0
−0.9816σ < x < 0           0 1
x ≤ −0.9816σ               0 0

*The value 0.9816 is the optimum transition point for an ideal
uniform quantizer; the value of σ is estimated from Eqn. (6.4.8).

In the Magellan implementation, the transfer function for this normalization,
given by Eqn. (6.4.8), is precalculated and stored in a look-up table. Thus, the
8 bit input sample and the 8 bit threshold address a 2 bit output sample from
the look-up table according to Table 6.4. The ground reconstruction simply
inverts this process, and a gain function calculated from the threshold is used
to reconstruct the original data stream according to Table 6.5. The performance
curves for the Magellan design are shown in Fig. 6.24. Note that the (8,2)
BFPQ SNR distortion curve is essentially a set of 2 bit SNR curves spaced
across the dynamic range of the 8 bit curve.
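An (8,2) encode/decode cycle of this kind can be sketched in a few lines. The code below is illustrative only: it estimates the threshold from the block itself with a plain mean-absolute-value estimator, rather than applying it to the following block through the look-up-table form of Eqn. (6.4.8) as in the flight design; it uses the transition point of Table 6.4 and the reconstruction levels of Table 6.5.

```python
# Illustrative (8,2) BFPQ sketch, not the Magellan implementation.
import math

def block_sigma(block):
    # mean absolute value of a zero-mean Gaussian is sigma*sqrt(2/pi)
    mean_abs = sum(abs(x) for x in block) / len(block)
    return mean_abs / math.sqrt(2 / math.pi)

def bfpq_encode(block):
    sigma = block_sigma(block)
    t = 0.9816 * sigma                    # Table 6.4 transition point
    codes = []
    for x in block:
        if x >= t:
            codes.append((1, 1))
        elif x >= 0:
            codes.append((1, 0))
        elif x > -t:
            codes.append((0, 1))
        else:
            codes.append((0, 0))
    return codes, sigma                   # 2 bits/sample plus one threshold

def bfpq_decode(codes, sigma):
    # reconstruction levels of Table 6.5, scaled by the threshold
    level = {(1, 1): 1.72, (1, 0): 0.52, (0, 1): -0.52, (0, 0): -1.72}
    return [level[c] * sigma for c in codes]

block = [12, -5, 40, -33, 7, -1, 19, -26]      # toy 8-bit samples
codes, sigma = bfpq_encode(block)
print(bfpq_decode(codes, sigma))
```

The decoded samples keep the sign and coarse magnitude of the input while each sample travels as only 2 bits; the fine structure lost within a block is the BFPQ distortion noise discussed above.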
It is important to note that with the Magellan BFPQ we can never achieve
a better signal to distortion noise ratio (SDNR) than is given by the peak value
for the 2 bit quantizer. However, we can maintain that performance over a
wider range of input values (approaching that of the 8 bit quantizer) using the
BFPQ technique. The effect of the distortion noise incurred by using the 2 bit
quantization depends on the relative level of other noise sources in the system.
For most system designs, the SDNR should be large relative to the signal to
thermal noise ratio (SNR). This is based on the radiometric calibration
requirements (Chapter 7), which assume the thermal noise power is known and
can be subtracted from the total received power to derive the backscattered
energy. Since the distortion noise is nonlinear and cannot be subtracted, it must

TABLE 6.5. Look-up Table for the Two Bit Data Reconstruction

Decoder Input        Reconstructed Value*
1 1                  1.72σ
1 0                  0.52σ
0 1                  −0.52σ
0 0                  −1.72σ

*The values 0.4528, 1.5104 are optimum transition points for
an ideal uniform quantizer. Due to saturation effects the uniform
quantizer has an effective gain of 0.8825, resulting in the given
reconstruction levels.

be small relative to the thermal noise or very small (< −18 dB) relative to the
signal power for calibrated imagery.

Figure 6.24 Distortion noise as a function of input power for the Magellan BFPQ.

6.5 SYSTEM DESIGN CONSIDERATIONS

The design of the SAR system is generally dependent on the application for
which it is intended. Typically, specifications are provided to the design engineer
by the end data user, such as: (1) Resolution; (2) Incidence angle; (3) Swath
width; (4) Wavelength; (5) Polarization; (6) Calibration accuracy; (7) SNR,
and so on. Additional constraints are imposed by the available platform
resources and mission design (e.g., launch vehicle): (1) Payload mass, power,
and dimensions; (2) Platform altitude; (3) Ephemeris/attitude determination
accuracy; (4) Attitude control; (5) Downlink data rate, and so on. Given these
inputs the system specifications are determined: (1) System gains (losses); (2)
Rms amplitude error versus frequency; (3) Rms phase error versus frequency;
(4) Receiver noise figure; (5) System stability (gain/phase versus time/
temperature), and so on. The final design is the result of an iterative procedure,
balancing performance characteristics among subsystems to achieve the optimal
design. The following example is presented to illustrate these trade-offs.

Example 6.4 Assume that the measurable range of target backscatter coefficients
(i.e., the noise equivalent σ0) and the wavelength, λ, are specified by the scientist.
Furthermore, assume the mass and power budgets are constrained by the launch
vehicle such that:

1. Maximum antenna area (A), and therefore the antenna gain (G), are limited
by the mass;
2. Maximum radiated power (Pt) is limited by the available dc power and
system losses (Ls);
3. Minimum noise temperature (Tn) is determined by the earth temperature
(~300 K) and the receiver noise figure;
4. Slant range (R) is determined by the imaging geometry and the platform
altitude.

The designer has little flexibility to meet the SNR requirements given these
system constraints. Consider the single pulse radar equation for distributed
targets (Section 2.8)

(6.5.1)

where η is the incidence angle and Bn is the noise bandwidth. The system
parameters available for enhancing the SNR are:
1. Increase the pulse duration, τp, at the cost of increased average power
consumption;
2. Decrease the antenna length, La, while increasing the width, Wa, to keep the
area constant to maintain the constraint in Eqn. (6.3.9). This will reduce the
swath width and increase the average power consumption due to the higher
PRF required;
3. Reduce system losses by improving the antenna feed system (waveguide) or
by inserting T/R modules into the feed to improve the system gain; again
at the cost of increased power consumption.

Note that all of the options considered to improve the SNR require an increase
in the available power. If additional power is not available then the designer
must request a modification in the given requirements. Lowering the altitude
will produce a significant increase in SNR due to the R³ factor, but will decrease
the swath width. The 3 dB swath is approximately

Wg ≈ λR / (Wa cos η)    (6.5.2)

where Wa is the antenna width. The effect of reducing R on the swath could be
compensated by reducing Wa and increasing La to keep the antenna area and
swath constant. A small drop in SNR and a reduction in the azimuth resolution
Δx ≈ La/2 would result, but the net effect would be a significant increase in SNR.
The design procedure illustrated in the above example is intended to
demonstrate the interrelationship between user performance specifications, radar
system parameters, and platform resources. No single algorithm can be defined
that will optimize the design across the wide range of applications, since the
priority ordering of the system performance parameters depends on the data
utilization. At best, the final design will be a compromise between the available
resources and the user's needs.
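The swath/antenna trade discussed above can be made concrete with a short numeric sketch. The parameter values below are illustrative Seasat-like numbers assumed for this example (they are not taken from the text), and the 3 dB swath is taken as W_g ≈ λR/(W_a cos η):

```python
import math

# Illustrative Seasat-like numbers (assumed, not from the text).
wavelength = 0.235        # L-band wavelength (m)
slant_range = 850e3       # slant range R (m)
eta = math.radians(23.0)  # incidence angle

def beam_limited_swath(antenna_width):
    """3 dB ground swath, W_g ~ lambda * R / (W_a * cos(eta))."""
    return wavelength * slant_range / (antenna_width * math.cos(eta))

# Halving the antenna width doubles the elevation beamwidth and the swath,
# at the cost of antenna gain (and hence SNR).
for w_a in (2.16, 1.08):
    print(f"W_a = {w_a:4.2f} m -> swath = {beam_limited_swath(w_a) / 1e3:6.1f} km")
```

For these assumed values the full-width antenna gives a swath of roughly 100 km, which is why reshaping the aperture (shorter W_a, longer L_a) is the natural way to recover swath after an altitude reduction.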

6.5.1 Ambiguity Analysis

A key element in the radar system design is the antenna subsystem. As we have
discussed in Section 2.2.1, the antenna gain is proportional to its area.
Additionally, its dimensions in range (W_a) and azimuth (L_a) approximately
determine the 3 dB beamwidths (assuming no amplitude weighting) by

    θ_H = λ/L_a    Azimuth    (6.5.3a)

    θ_V = λ/W_a    Range      (6.5.3b)

These in turn affect the resolution in azimuth and the available swath width in
range. The shape of the antenna beam, specifically its sidelobe characteristics,
is also key to the performance of the radar system. The discussion in
Example 6.4 considers only the signal to thermal noise requirements of the
system. An additional noise factor, ambiguity noise, is also an important
consideration, especially for a spaceborne SAR. Equations (6.3.7) and (6.3.8)
presented rough guidelines for determining the antenna dimensions. These
bounds are based on the criteria that the 3 dB mainlobe of the antenna pattern
does not overlap in time for consecutive echoes, and that the azimuth 3 dB
Doppler spectrum is less than the PRF. Obviously, these constraints are very
approximate and may not meet the required signal to ambiguity noise ratios.

The azimuth ambiguities arise from finite sampling of the Doppler spectrum
at intervals of the PRF (Fig. 6.25). Since the spectrum repeats at PRF intervals,
the signal components outside this frequency interval fold back into the main
part of the spectrum. Similarly, in the range dimension (Fig. 6.26), echoes from
preceding and succeeding pulses can arrive back at the antenna simultaneously
with the desired return. For a given range and azimuth antenna pattern, the
PRF must be selected such that the total ambiguity noise contribution is very
small relative to the signal (i.e., -18 to -20 dB). Alternatively, given a PRF or
range of PRFs, the antenna dimensions and/or weighting (to lower the sidelobe
energy) must be such that the signal-to-ambiguity noise specification is met.

Figure 6.25   Illustration of SAR azimuth ambiguities for PRF = B_D. (The
sketch shows the azimuth spectral power repeating at intervals of ±f_p.)

The ambiguous signal power at some Doppler frequency f_0 and some time
delay τ_0 can be expressed as (Bayman and McInnes, 1975)

    S_a(f_0, τ_0) = Σ_{m,n = -∞; m,n ≠ 0}^{∞} G²(f_0 + mf_p, τ_0 + n/f_p) σ⁰(f_0 + mf_p, τ_0 + n/f_p)    (6.5.4)

where m and n are integers, G²(f, τ) is the two-way far-field antenna power
pattern, and σ⁰ is the radar reflectivity. The integrated ambiguity to signal ratio
is therefore given by

    ASR(τ) = [ Σ_{m,n = -∞; m,n ≠ 0}^{∞} ∫_{-B_a/2}^{B_a/2} G²(f + mf_p, τ + n/f_p) σ⁰(f + mf_p, τ + n/f_p) df ]
             ÷ [ ∫_{-B_a/2}^{B_a/2} G²(f, τ) σ⁰(f, τ) df ]    (6.5.5)

Figure 6.26   Illustration of SAR range ambiguities. (Echo energy versus time;
T_p = 1/f_p is the interpulse period. Returns from preceding and succeeding
pulses that fall within the data window constitute range ambiguity noise.)
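Under a uniform-reflectivity assumption, the azimuth part of this ratio can be evaluated numerically. A minimal sketch, assuming a uniform-aperture (sinc²) one-way azimuth pattern and illustrative Seasat-like values for the antenna length, speed, PRF, and processing bandwidth (none of these numbers come from the text):

```python
import numpy as np

# Illustrative Seasat-like values (assumed for this sketch, not from the text).
La = 10.7    # azimuth antenna length (m)
V = 7450.0   # platform speed (m/s)
fp = 1647.0  # PRF (Hz)
Bp = 1200.0  # azimuth processing bandwidth (Hz)

def G2(f):
    """Two-way azimuth power pattern vs. Doppler frequency for a uniform
    aperture: one-way power ~ sinc^2, two-way ~ sinc^4 (np.sinc includes pi)."""
    return np.sinc(La * f / (2.0 * V)) ** 4

f = np.linspace(-Bp / 2, Bp / 2, 4001)    # frequencies inside the processing band
signal = G2(f).sum()                      # desired-signal term (common df cancels)
ambig = sum(G2(f + m * fp).sum()          # spectral replicas folded into the band
            for m in range(-8, 9) if m != 0)
aasr_db = 10.0 * np.log10(ambig / signal)
print(f"AASR ~ {aasr_db:.1f} dB")
```

For these numbers the ratio lands near the -18 to -20 dB guideline quoted above; raising the PRF or narrowing the processing bandwidth pushes it lower.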


where B_a is the azimuth spectral bandwidth of the processor. Note that the
ASR is written as a function of τ, or equivalently the cross-track position in
the image. Since the system ambiguity specifications typically refer to the
integrated azimuth ambiguity and the peak range ambiguity (which depends
on cross-track position), the expression in Eqn. (6.5.5) is not very useful for
design engineers. It requires both the two-dimensional antenna pattern and the
target reflectivity to be formulated in terms of the Doppler frequency and time
delay. Additional relations are required to derive these quantities from the
measured data. Typically, antenna patterns are given as a function of
off-boresight angles and σ⁰ is given as a function of local incidence angle. For
design purposes it is more useful to rewrite Eqn. (6.5.5) separating the azimuth
and range ambiguity components. In the following two sections we will analyze
the effects of each type of ambiguity separately.

Azimuth Ambiguity

As previously described, azimuth ambiguities arise from finite sampling of the
azimuth frequency spectrum at the PRF. As in any pulsed radar, the SAR
Doppler spectrum is not strictly band limited (due to the sidelobes of the antenna
pattern), and the desired signal band is contaminated by ambiguous signals
from adjacent spectra. It is important to note that, due to the one-to-one
relationship between azimuth time and frequency (Section 3.2.2), the shape of
the azimuth spectrum is simply the two-way power pattern of the antenna in
azimuth convolved with the target reflectivity.

The ratio of the ambiguous signal to the desired signal, within the SAR
correlator azimuth processing bandwidth (B_p), is commonly referred to as the
azimuth ambiguity to signal ratio (AASR). The AASR can be estimated using
the following equation:

    AASR ≈ [ Σ_{m = -∞; m ≠ 0}^{∞} ∫_{-B_p/2}^{B_p/2} G²(f + mf_p) df ]
           ÷ [ ∫_{-B_p/2}^{B_p/2} G²(f) df ]    (6.5.6)

where we have assumed that the target reflectivity is uniform for each azimuth
pattern cut (including sidelobes) at each time interval dτ within the record
window. Additionally, we have assumed that the azimuth antenna pattern at
each elevation angle within the mainlobe is similar in shape and that the coupling
between range and azimuth ambiguities is negligible. These assumptions are
generally valid for most SAR systems. The AASR as given by Eqn. (6.5.6) is
typically specified to be on the order of -20 dB. However, even at this value
ambiguous signals can be observed in images that have very bright targets
adjacent to dark targets. As previously described, SAR imagery can have an
extremely wide dynamic range due to the correlation compression gain for point
targets. Thus, even with a 20 dB suppression of the ambiguous signals, a value
of AASR = 10 dB is not uncommon when there is a bright backscatter region
adjacent to a dark region. For example, an urban area located next to a calm
lake, or a bridge over a river, can produce very high AASRs. Some examples
of azimuth ambiguities are shown for Seasat and SIR-B images in Fig. 6.27 and
Fig. 6.28, respectively.

The location of the azimuth ambiguity in the image is displaced from the
true location of the target. The relative displacement in range and azimuth
respectively is given by (Li and Johnson, 1983)

    Δx_RA ≈ (mλf_p/f_R)(f_DC + mf_p/2)    (6.5.7)

    Δx_Az ≈ mf_p V_st/f_R    (6.5.8)

Figure 6.27   Seasat image of New Orleans, LA (Rev. 788) illustrating azimuth ambiguities.
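The displacement relations (6.5.7)-(6.5.8) are straightforward to evaluate. In the sketch below, the Seasat-like parameter values and the Doppler-rate expression f_R = 2V_st²/(λR) are assumptions made for illustration; for m = 1 the azimuth displacement comes out close to the 23 km the text quotes for Seasat, while the range displacement depends strongly on f_DC and the exact orbit values:

```python
import math

# Assumed Seasat-like values (illustrative, not from the text).
lam = 0.235    # wavelength (m)
fp = 1647.0    # PRF (Hz)
V_st = 7450.0  # relative sensor-to-target speed (m/s)
R = 850e3      # slant range (m)
f_R = 2 * V_st**2 / (lam * R)  # assumed Doppler-rate expression (~556 Hz/s)
f_DC = 0.0     # Doppler centroid at the true target location

m = 1          # ambiguity number
dx_az = m * fp * V_st / f_R                         # Eqn. (6.5.8)
dx_ra = (m * lam * fp / f_R) * (f_DC + m * fp / 2)  # Eqn. (6.5.7)
print(f"azimuth displacement ~ {dx_az / 1e3:.0f} km, range ~ {dx_ra:.0f} m")
```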

where V_st is the magnitude of the relative platform-to-target velocity, m is the
ambiguity number, and f_DC, f_R are the Doppler centroid frequency and Doppler
rate used in the processor azimuth reference function at the true target location.
Typical values for Seasat, assuming m = 1, are

    Δx_Az = 23 km
    Δx_RA = 0.2 km

Because the ambiguous targets are significantly displaced from their true
locations, the range migration correction applied to the signal data at the
ambiguous target location is offset from the true value, resulting in blurring in
the range dimension. This blurring or dispersion in units of focussed range
resolution cells can be approximated by (Li and Johnson, 1983)

    (6.5.9)

where R is the slant range and δx, δR are the focussed azimuth and slant range
resolutions, respectively. A relatively large value of N_ΔR is desirable since the
unwanted ambiguous targets will be dispersed in the image.

Azimuth Ambiguity Wavelength Dependence

There are two factors that cause the effect of azimuth ambiguities to be more
severe as the wavelength is reduced. The first is demonstrated by Eqn. (6.5.9),
in that the range dispersion is proportional to λ²; at shorter wavelengths the
ambiguous energy will be more focussed and the peak AASR increased. A
second factor is the effect of undetected platform pointing errors. The azimuth
bandwidth (for a given azimuth antenna dimension) varies inversely with the
radar frequency. Therefore, the Doppler shift as a function of pointing error
increases linearly with frequency. The standard deviation of the Doppler centroid
estimation error for a given pointing error can be written as

    σ_fDC = (2V_st/λ) cos(θ_s) σ_θs    (6.5.10)

where θ_s is the squint angle, σ_θs is the standard deviation of the squint angle
error, and V_st is the relative sensor-to-target speed. From Eqn. (6.5.6) a Doppler
centroid estimation error would result in the processing bandwidth (B_p) being
offset from the mainlobe of the azimuth spectra. Since the ambiguous signal
energy is higher at the edges of the mainlobe than it is in the center (see
Fig. 6.25), an increase in the AASR results.

Figure 6.28   SIR-B image of Montreal, Quebec (DT 37.2) illustrating azimuth ambiguities.

For cases where the squint angle determination uncertainty becomes so large
that

    σ_fDC > f_p/2    (6.5.11)

it is possible that the clutterlock algorithm will converge on an ambiguous
Doppler centroid (i.e., the estimated centroid from the clutterlock will be some
integer multiple of the PRF offset from the true centroid). Substituting
Eqn. (6.5.11) into Eqn. (6.5.10) and rearranging terms, we see that for squint
angle errors greater than

    σ_θs > λf_p/(4V_st cos θ_s)    (6.5.12)

the clutterlock routine will converge on an ambiguity. Since the Doppler
bandwidth B_D = 2V_st/L_a and the azimuth beamwidth θ_H = λ/L_a, for small squint

angles Eqn. (6.5.12) becomes

    σ_θs > λf_p/(4V_st)    (6.5.13)

Thus, assuming the PRF is approximately equal to the Doppler bandwidth,
the clutterlock algorithm converges on an ambiguous centroid if

    σ_θs > θ_H/2    (6.5.14)

which is one half the azimuth beamwidth.

Example 6.5   Consider a system such as the X-SAR, which will operate jointly
with SIR-C aboard the Shuttle (Table 1.4). The X-SAR azimuth antenna
dimension is L_a = 12 m and the radar wavelength is λ = 3 cm, resulting in an
azimuth beamwidth θ_H = 0.143°. Since the Shuttle has an estimated pointing
uncertainty of approximately 1.0° (3σ) in each axis, the X-SAR (3σ) Doppler
centroid estimation error will be on the order of 10 to 15 ambiguities. This
pointing error presents a difficult problem for the processor to resolve the true
Doppler. Two techniques for this PRF ambiguity resolution are currently being
considered (Section 5.4).

The first technique, range cross-correlation of looks, uses the fact that the
range migration correction, when derived from an ambiguous Doppler centroid,
will result in a target in one look being displaced relative to an adjacent look by

    ΔR ≈ mf_p λ Δs/2    (6.5.15)

where ΔR is the range displacement in meters and Δs is the time separation
between the centers of the two looks. Since Δs, λ, and f_p are known, m can be
determined by a range cross-correlation of the two single-look images. Note
that, in the absence of edges or point-like targets in the images, the correlation
peak-to-mean ratio is quite small due to the speckle noise in the single look
images.

A limiting factor in the performance of this ambiguity resolving technique
arises from the fact that ΔR is proportional to both λ and Δs, which are inversely
proportional to frequency. For X-SAR, with m = 10 we get ΔR ≈ 20 meters.
At a complex sampling frequency of f_s = 22.5 MHz this represents an offset of
approximately 3 pixels. Since these are single-look pixels, the speckle noise
makes it nearly impossible to exactly determine m.

An alternative approach, called the multi-PRF technique, is derived from
those used in MTI radars (see Section 5.4). It requires the SAR to cycle through
two or more PRFs, dwelling on each for several synthetic aperture periods. From
each data block using the same PRF, an ambiguous Doppler centroid is derived
using conventional clutterlock techniques. Using the Chinese remainder theorem,
the true centroid can be derived if the squint angle uncertainty and squint angle
drift rate are not too large. A detailed treatment of the multi-PRF technique
that is being employed on the SIR-C/X-SAR shuttle missions is presented in
(Chang and Curlander, 1992).

Range Ambiguity

Range ambiguities result from preceding and succeeding pulse echoes arriving
at the antenna simultaneously with the desired return. This type of noise is
typically not significant for airborne SAR data, since the spread of the echo is
very small relative to the interpulse period. As the altitude of the platform, and
therefore the slant range from sensor to target, increases, the beam limited swath
width increases according to Eqn. (6.5.2).

For spaceborne radars, where several interpulse periods (T_p = 1/f_p) elapse
between transmission and reception of a pulse, the range ambiguities can become
significant. The source of range-ambiguous returns is illustrated in Fig. 6.26.
For PRFs satisfying the relation

    T_p > 2λR tan(η)/(cW_a)    (6.5.16)

range ambiguities do not arise from the mainlobe of the adjacent pulses.
Typically this is considered an upper bound on the PRF. To derive the exact
value of the range ambiguity to signal ratio (RASR), consider that, at a given
time t_i within the data record window, ambiguous signals arrive from ranges of

    R_ij = ct_i/2 + jc/(2f_p),    j = ±1, ±2, ..., ±n_h    (6.5.17)

where j, the pulse number (j = 0 for the desired pulse), is positive for preceding
interfering pulses and negative for succeeding ones. The value j = n_h is the
number of pulses to the horizon. To determine the contribution from each
ambiguous pulse, the incidence angle and the backscatter coefficient must be
determined for each pulse (j) in each time interval (i) of the data record window.

Assuming a smooth spherical model for the earth, the incidence angle η_ij at
some point i within the data record window (corresponding to a range delay
t_i) and some pulse j is given by (Fig. 8.1)

    sin(η_ij) = (R_s/R_t) sin(γ_ij)    (6.5.18)

where the target distance is R_t = |R_t|, R_s = |R_s| is the sensor distance from the
earth's center, and γ_ij is the antenna boresight angle corresponding to η_ij. This
boresight angle can be written in terms of the slant range R_ij as follows

    cos(γ_ij) = (R_s² + R_ij² − R_t²)/(2R_s R_ij)    (6.5.19)

In this formulation we have ignored any refractive effects of the atmosphere.
Typically, this is a good approximation for earth imaging, except at grazing
angles (i.e., j approaching n_h). Additionally, when imaging through dense

atmospheres, such as on Venus, the refraction effects are significant and a
refraction model for the atmosphere is required (Kliore, 1981).

The integrated RASR is then determined by summing all signal components
within the data record window arising from preceding and succeeding pulse
echoes, and taking the ratio of this sum to the integrated signal return from
the desired pulse. The RASR is given by

    RASR = [ Σ_{i=1}^{N} S_ai ] ÷ [ Σ_{i=1}^{N} S_i ]    (6.5.20)

where S_ai and S_i are respectively the range-ambiguous and desired signal powers
(at the receiver output) in the ith time interval of the data recording window,
and N is the total number of time intervals. From the radar equation,
Eqn. (6.5.1), only the parameters that do not cancel in the ratio of Eqn. (6.5.20)
need be considered. Thus

    S_ai = Σ_{j = -n_h; j ≠ 0}^{n_h} σ⁰_ij G²_ij / (R³_ij sin(η_ij))    for j ≠ 0    (6.5.21)

    S_i = σ⁰_i0 G²_i0 / (R³_i0 sin(η_i0))    for j = 0    (6.5.22)

where σ⁰_ij is the normalized backscatter coefficient at a given η_ij and G_ij is the
cross-track antenna pattern at a given γ_ij. The exact dependence of σ⁰ on η is
a function of target type, radar parameters, and environmental conditions (see
Chapter 1). The antenna pattern dependence on γ_ij is a function of the
illumination taper across the array. For a uniformly illuminated aperture, the
far-field pattern is given by Eqn. (2.2.30)

    G_ij = sinc²[πW_a sin(φ_ij)/λ]    (6.5.23)

where the off-boresight elevation angle φ_ij is given by

    φ_ij = γ_ij − (γ_0 + r)    (6.5.24)

Here γ_ij is given by Eqn. (6.5.19), γ_0 is the antenna electrical boresight relative
to the platform z (nadir pointing) axis, and r is the roll angle.

The RASR is presented as an integrated value in Eqn. (6.5.20) over the
cross-track swath. However, the ambiguous signal energy is highly dependent
on the cross-track swath position. Typically, the range ambiguity specification
is given as the peak value across the swath, that is,

    RASR_peak = max_i {S_ai/S_i}    (6.5.25)

The system design is tailored by adjusting the data window position (DWP),
the PRF, and the antenna (amplitude and/or phase) taper to ensure that the
location of the ambiguous pulses and the nadir returns are outside the data
recording window. The SNR performance is shown in Figure 6.29 for a typical
set of SIR-B parameters.

Figure 6.29   Plot of SIR-B performance (in noise equivalent σ⁰) versus cross-track position for
γ = 55°. The system noise floor is dominated by range ambiguities; the sharp spike in the noise
floor is the nadir return. (The plot shows the thermal-noise-equivalent σ⁰, the surface backscatter
per Muhleman's law, the swath limits, and a minimum SNR of 5.4 dB, over ranges of roughly
320 to 350 km from the subsatellite point.)
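The per-pulse geometry chain (ambiguous slant range → boresight angle → incidence angle → elevation pattern → relative power) can be sketched directly. Everything numeric below is an assumed, illustrative configuration (spherical earth, uniform aperture, constant σ⁰, only |j| ≤ 2 ambiguities retained), not an actual system design:

```python
import math

# Assumed, illustrative spaceborne geometry (not actual system values).
c = 3.0e8
lam, fp, Wa = 0.235, 1647.0, 2.16  # wavelength (m), PRF (Hz), antenna width (m)
Re = 6371e3                        # earth radius (m), smooth spherical model
Rs = Re + 800e3                    # sensor distance from the earth's center (m)
gamma0 = math.radians(20.0)        # antenna boresight off nadir (roll r = 0)
R0 = 853e3                         # slant range of the desired sample (m)

def power(R):
    """Relative received power ~ G^2 / (R^3 sin eta) for slant range R."""
    cg = (Rs**2 + R**2 - Re**2) / (2.0 * Rs * R)    # law of cosines for gamma_ij
    if not -1.0 < cg < 1.0:
        return 0.0                                  # range does not intersect the earth
    g = math.acos(cg)                               # boresight angle gamma_ij
    e = math.asin(min(1.0, Rs / Re * math.sin(g)))  # incidence angle (law of sines)
    u = math.pi * Wa * math.sin(g - gamma0) / lam   # off-boresight angle phi_ij
    g2 = (math.sin(u) / u) ** 4 if u else 1.0       # two-way elevation pattern
    return g2 / (R**3 * math.sin(e))

desired = power(R0)
ambig = sum(power(R0 + j * c / (2.0 * fp)) for j in (-2, -1, 1, 2))
rasr_db = 10.0 * math.log10(ambig / desired)
print(f"RASR ~ {rasr_db:.1f} dB")
```

Note how the succeeding-pulse ranges (negative j) fall short of the nadir distance for this geometry and contribute nothing, while the preceding-pulse terms are suppressed mainly by the elevation-pattern sidelobes.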

6.5.2 PRF Selection

The set of values that the above listed radar parameters (PRF, DWP, etc.) can
assume is constrained by a number of other factors. This is especially true in
the case of the PRF. As we have shown in the preceding discussions on azimuth
and range ambiguities, the AASR and RASR are both highly dependent on the
selection of PRF. A low value of PRF increases the azimuth ambiguity level
due to increased aliasing of the azimuth spectra. On the other hand, a high
PRF value will reduce the interpulse period and result in overlap between the
received pulses in time. The PRF selection is further constrained for a SAR
system that has a single antenna for both transmit and receive. The transmit
event must be interspersed with the data reception for a spaceborne system
since, at any given time, there are a number of pulses in the air. Additionally,
the PRF must be selected such that the nadir return from succeeding pulses is
excluded from the data window. The transmit interference restriction on the
PRF can be written as follows

    Frac(2R_1 f_p/c)/f_p > τ_p + τ_RP    (6.5.26a)

    Frac(2R_N f_p/c)/f_p < 1/f_p − τ_RP    (6.5.26b)

and

    Int(2R_1 f_p/c) = Int(2R_N f_p/c)    (6.5.26c)

where R_1 is the slant range to the first data sample (i.e., j = 0, i = 1), R_N is the
slant range to the last (Nth) data sample in the recording window, τ_p is the
transmit pulse duration, and τ_RP is the receiver protect window extension about
τ_p. The functions Frac and Int extract the fractional and the integer portions
of their arguments, respectively. These relationships are illustrated in the timing
diagram, Fig. 6.30.

The nadir interference restriction on the PRF can be written as follows:

    2H/c + j/f_p > 2R_N/c,    j = 0, 1, 2, ..., n_h    (6.5.27a)

or

    2H/c + 2τ_p + j/f_p < 2R_1/c,    j = 0, 1, 2, ..., n_h    (6.5.27b)

where H ≈ R_s − R_t is the sensor altitude above the surface nadir point. We
have assumed in the above analysis that the duration of the nadir return is 2τ_p.
The actual nadir return duration will depend on the characteristics of the terrain.
For rough terrain the significant nadir return could be shorter or longer than
2τ_p. An example of the excluded zones defined by Eqn. (6.5.26) and Eqn. (6.5.27)
is given in Fig. 6.31.

Figure 6.31   Plot of PRF against γ for SIR-B illustrating excluded zones as a result of transmit
and nadir interference. (The plot spans PRFs of roughly 1000 to 2000 Hz and look angles of
about 15° to 37°, with shaded zones marking nadir returns and transmit events; SIR-A and
Seasat operating points are indicated.)

The set of acceptable PRFs, or range of PRF values, is therefore established
by the maximum acceptable range and azimuth ambiguity-to-signal ratios, as
well as the transmit and nadir interference. For a given sensor and mission
design, there may be no acceptable PRFs at some look angles that meet the
minimum requirements. The designer then has the option to relax the performance
specifications for these imaging geometries or exclude these modes from the
operations plan. In general, as the off-nadir angle is increased, the PRF
availability is reduced and the ambiguity requirements must be lowered to find
acceptable PRFs. However, the signal to thermal noise ratio at the higher look
angles is also reduced, so that the relative thermal noise to ambiguity noise
ratio remains relatively constant.
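The transmit and nadir exclusion tests translate directly into a PRF screening loop. The geometry and pulse timing below are assumed illustrative values, not actual SIR-B numbers:

```python
import math

# Assumed illustrative geometry and pulse timing (not actual SIR-B values).
c = 3.0e8
R1, RN = 850e3, 880e3         # slant range to first/last data sample (m)
H = 800e3                     # altitude above the nadir point (m)
tau_p, tau_rp = 33e-6, 5e-6   # pulse duration, receiver-protect extension (s)

def frac(x):
    return x - math.floor(x)

def transmit_ok(fp):
    """Echo window must not overlap a transmit event (transmit interference)."""
    lead = frac(2 * R1 * fp / c) / fp    # time from last transmit to first sample
    trail = frac(2 * RN * fp / c) / fp   # time from last transmit to last sample
    same_interval = int(2 * R1 * fp / c) == int(2 * RN * fp / c)
    return lead > tau_p + tau_rp and trail < 1 / fp - tau_rp and same_interval

def nadir_ok(fp):
    """Every nadir return (duration ~2*tau_p) must miss the data window."""
    for j in range(0, 20):               # pulses toward the horizon
        t_nadir = 2 * H / c + j / fp
        if not (t_nadir > 2 * RN / c or t_nadir + 2 * tau_p < 2 * R1 / c):
            return False
    return True

ok = [fp for fp in range(1100, 1900, 10) if transmit_ok(fp) and nadir_ok(fp)]
print(f"{len(ok)} candidate PRFs, e.g. {ok[:5]}")
```

In a full design tool the surviving PRFs would then be screened again against the AASR and RASR specifications at each look angle, exactly as the paragraph above describes.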

Figure 6.30   Timing diagram illustrating the constraints on PRF selection: (a) Transmit
interference; (b) Nadir interference.

6.6 SUMMARY

In this chapter we have presented an analysis of two major subsystems in the
end-to-end radar data system. The first part of the chapter described the radar


instrument and its major assemblies. This was followed by a discussion of the
spacecraft bus and data downlink subsystem.
The SAR sensor subsystem consists of four major assemblies: (1) Timing
and control; (2) RF electronics; (3) Digital electronics; and (4) Antenna. Their
performance can be analyzed in terms of a linear distortion model. Quantitative
relationships between the linear system errors and the resultant impulse response
function were given. Additionally, the non-linear performance characteristics of
the SAR were described in terms of the signal to distortion noise ratio.
The platform and data downlink subsystem is often a limiting factor in the
SAR performance, in that the available data rates, power, and mass may be
insufficient to accommodate the instrument. To reduce the data rate, the system
performance is often degraded. Alternatively, a data compression technique,
block floating point quantization, can be employed. This concept was described
in detail with an example of the Magellan SAR design.
The chapter concluded with a discussion of various aspects of the SAR system
design. A detailed treatment of ambiguities was presented with examples from
the Seasat and SIR-B systems. The limitations of nadir and transmit interference
were also presented as another factor in the PRF selection.
The intent of this chapter was to introduce the various error sources that
result from the sensor and data downlink. These errors to some degree can be
compensated in the signal processor by adjusting the matched filter reference
function. However, some component of the sensor and data link errors will be
passed through to the final image product. An understanding of the sources
and characteristics of these errors is essential for proper design of the ground
data system and interpretation of the SAR imagery.

REFERENCES
Bayman, R. W. and P. A. McInnes (1975). "Aperture Size and Ambiguity Constraints
for a Synthetic Aperture Radar," IEEE 1975 Inter. Radar Conf., pp. 499-504.
Beckman, P. ( 1967). Probability in Communication Engineering, Harcourt, Brace and
World, New York.
Berkowitz, R. S., et al. ( 1965). Modern Radar, Linear fm Pulse Compression, C. M. Cook,
Chapter 2, Part IV, Wiley, New York.
Butler, D. ( 1984) "Earth Observing System: Science and Mission Requirements Working
Group Report," Vol. I, NASA TM 86129.
Butler, M. ( 1980). Radar Applications of SAW Dispersive Filters, Proc. I EE, 127, Pt. F.
Carlson, A. B. ( 1975). Communication Systems: An Introduction to Communications
and Noise in Electrical Systems, McGraw-Hill Book Company, New York.
Carver, K. and J. W. Mink (1981). "Microstrip Antenna Technology," IEEE Trans. Ant.
and Prop., AP-29, pp. 2-24.

Chang, C. Y. and J.C. Curlander (1992). "Algorithms to Resolve the Doppler Centroid
Estimation Ambiguity for Spaceborne Synthetic Aperture Radars," IEEE Trans.
Geosci. Rem. Sens. (to be published).


Cook, C. E. and M. Bernfeld (1967). Radar Signals: An Introduction to Theory and


Application, Academic Press, New York.
Deutsch, L. and R. L. Miller ( 1981 ). "Burst Statistics of Viterbi Decoding," TDA Progress
Report 42-64, Jet Propulsion Laboratory, pp. 187-189.
Huneycutt, B. ( 1989). "Spaceborne Imaging Radar-C Instrument," IEEE Trans. Geo.
and Remote Sens., GE-27, pp. 164-169.
Huneycutt, B. L. (1985). "Shuttle Imaging Radar-B/C Instruments," 2nd Inter. Tech.
Symp. Opt. and Electr. Opt. Applied Sci. and Eng., Cannes, France.
Jain, A. (1981). "Image Data Compression: A Review," Proc. IEEE, 69, pp. 349-389.
Kwok, R. and W. T. K. Johnson (1989). "Block Adaptive Quantization of Magellan
SAR Data," IEEE Trans. Geo. and Remote Sens., GE-27, pp. 375-383.
Klauder, J. R., A. C. Price, S. Darlington and W. J. Albersheim ( 1960). "The Theory
and Design of Chirp Radars," Bell Syst. Tech. J., 39, pp. 745-808.
Klein, J. (1987). "Effects of Piecewise Linear Chirp Phase," JPL Internal Publication.
Kliore, A. ( 1981 ). "Radar Beam Refraction Model for Venus," JPL Internal Publication.
Li, F., D. Held, B. Huneycutt and H. Zebker (1981). "Simulation and Studies of
Spaceborne Synthetic Aperture Radar Image Quality with Reduced Bit Rate." 15th
Inter. Symp. on Remote Sensing of the Environment, Ann Arbor, Ml.
Li, F. and W. T. K. Johnson (1983). "Ambiguities in Spaceborne Synthetic Aperture
Radar Data," IEEE Trans. Aero and Elec. Syst. AES-19, pp. 389-397.
Max, J. (1960). "Quantizing for Minimum Distortion," IRE Trans. Info. Theory, IT-6,
pp. 7-12.
Munson, R. E. (1974). "Conformal Microstrip Antennas and Microstrip Phased Arrays,"
IEEE Trans. on Antennas and Prop., AP-22, pp. 74- 78.
Phonon Corp. (1986). "Special Report on Military SAW Applications: Interdigital
Dispersive Delay Lines," RF Design, June 1986.
Reed, C. J., D. V. Arnold, D. M. Chabrias, P. L. Jackson and R. W. Christianson (1988).
"Synthetic Aperture Radar Image Formation from Compressed Data Using a New
Computation Technique," IEEE AES Magazine, October, pp. 3-10.
Rice, R. F. (1979). "Some Practical Universal Noiseless Coding Techniques," JPL
Publication 79-22, Jet Propulsion Laboratory, Pasadena, CA.
Shannon, C. (1948). "A Mathematical Theory of Communication," Bell Syst. Tech. J.,
27, pp. 379-423, 623-656.
Sharma, D. K. ( 1978). "Design of Absolutely Optimal Quantizers for a Wide Class of
Distortion Measures," IEEE Trans. Comm., COM-20, pp. 225-230.
Stutzman, W. and G. Thiele (1981). Antenna Theory and Design, Wiley, New York.
Zeoli, G. W. (1976). "A Lower Bound on the Data Rate for Synthetic Aperture Radar,"
IEEE Trans. Info. Theory, IT-22, pp. 708-715.

7   RADIOMETRIC CALIBRATION OF SAR DATA

Historically, SAR image data has been used for a variety of applications (e.g.,
cartography, geologic structural mapping) for which qualitative analyses of the
image products were sufficient to extract the desired information. However, to
fully exploit the available information contained in the SAR data, quantitative
analysis of the target backscatter characteristics is required. In general, any
scientific application which involves a comparative study of radar reflectivities
requires some level of radiometric calibration. Typically, these comparisons are
performed spatially across an image frame or temporally from pass to pass in
multiple frames. However, comparisons may also be made across radar systems
(e.g., Seasat and SIR-B), or across frequencies or polarization channels with
the same system (e.g., L-HH and C-VV).
Ideally, all data products generated by the SAR correlator are absolutely
calibrated such that an image pixel intensity is directly expressed in terms of
the mean surface backscatter coefficient. This requires the signal processor to
adaptively compensate for all spatial and time dependent variations in the radar
system transfer characteristic. This procedure, referred to as radiometric
correction or compensation, establishes a common basis for all image pixels,
such that a given pixel intensity value represents a unique value of backscattered
signal power, independent of its location within the data set. For absolute
calibration, a constant scale factor is required that compensates for the overall
system gain (including the ground processor), in addition to an estimate of the
noise power to determine the relative contribution of the thermal noise in the
recorded signal. Absolute calibration is essential for comparison of multisensor
data as well as for validation of the measured backscattered signal characteristics
using scattering models.
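A minimal sketch of the absolute-calibration relation just described: a pixel's mean power is modeled as an overall system gain K times σ⁰ plus an additive thermal-noise power, so σ⁰ is recovered by subtracting the noise estimate and dividing by the gain. Both K and P_noise below are arbitrary illustrative values, not quantities from the text:

```python
import math

K = 2.5e6      # assumed end-to-end system gain (sensor + processor)
P_noise = 0.8  # assumed estimate of thermal-noise power in a pixel

def sigma0_db(pixel_power):
    """Invert pixel power -> backscatter coefficient, in dB."""
    s = (pixel_power - P_noise) / K   # remove noise bias, undo system gain
    return 10.0 * math.log10(s)

print(f"sigma0 = {sigma0_db(150.8):.1f} dB")
```

In practice K is itself spatially varying (antenna pattern, range spreading), which is exactly why the radiometric-correction step must be applied before this single scale factor is meaningful.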

In this chapter, we will introduce a set of definitions for the basic calibration
terms as well as image calibration performance parameters. From this basis,
we will discuss various system calibration procedures required to produce the
measurements needed for radiometric correction. We will describe the internal
(radar system) and external (ground) devices used to insert a known, deterministic
signal into the radar data stream for characterization of the system transfer
function. Finally, and perhaps most important, we describe the ground data
system procedures for measuring these calibration signals and correcting the
image data such that the output products are routinely calibrated.

7.1 DEFINITION OF TERMS

As in any scientific or engineering discipline, there is a set of common terminology
used in discussing the radar system performance (IEEE, 1977). However, in
describing the absolute radiometric accuracy and the relative precision of a
SAR system, a number of terms are used quite loosely. We therefore do not
represent these definitions as internationally accepted. However, they are
representative of the parameters commonly used to characterize the radiometric
fidelity of a SAR system.
Often, when specifying radiometric errors, they are not given in terms of the
end-to-end system performance, rather many calibration measures are referenced
to the sensor subsystem or individual assemblies (e.g., antenna) that comprise
the sensor. For example, a system engineer will typically characterize the
performance of the SAR antenna in terms of(IEEE, 1979): the cross-polarization
isolation; the amplitude and phase errors (as a function of both system
bandwidth and off-boresight angle); and the two dimensional pattern (including
sidelobe levels and antenna gain or directivity). Although these specifications
are necessary for the radar engineer to characterize the antenna's performance,
they are of little meaning to the end user, who is interested only in the system
performance so far as it affects the particular application.
Rather than describe the radiometric calibration in terms of sensor
uncertainties, we define the calibration parameters in terms of the end data
products. This is an important distinction, since many types of radar system
errors can be at least partially compensated in the signal processor. It is therefore
necessary to evaluate the performance of the radar system in terms of its
end-to-end characteristics, including all elements of the three major subsystems:
(1) SAR sensor; (2) Data downlink; and (3) Signal processor.
7.1.1 General Terms

Before proceeding with a definition of the performance parameters, we first
define what we mean by calibration. We define radiometric calibration as the
process of characterizing the performance of the end-to-end SAR system, in


terms of its ability to measure the amplitude and phase of the backscattered
signal. This calibration process generally consists of injecting a set of known
signals into the data stream at various points and measuring the system response,
either before or after passing through the signal processor. We distinguish
calibration from system test, in the sense that calibration is performed as part
of the normal system operation, while testing is only performed prior to or
following the normal operations.
The calibration process can be divided into two general categories:
(1) Internal calibration; and (2) External calibration. Internal calibration is the
process of characterizing the radar system performance using calibration signals
injected into the radar data stream by built-in devices (e.g., calibration tone,
chirp replica). External calibration is the process of characterizing the system
performance using calibration signals originating from, or scattered by, ground
targets. These ground targets can be either point targets with known radar
cross section (e.g., corner reflectors, transponders), or distributed targets with
known scattering characteristics (e.g., σ⁰).
The calibration process is distinguished from verification in that verification
is the intercomparison of measurements from two (or more) independent sensors
with similar characteristics. The consistency between independent sensors of
the measurements of the same target area under similar conditions can be used
to verify the calibration performance specifications of each instrument. Instrument
validation refers to the comparison of geophysical parameters, as derived from
some scattering model, to known geophysical parameter values (e.g., surface
roughness) as determined from ground truth measurements. The validation
process assumes that reliable models are available to derive the geophysical
parameters from the σ⁰ values. Otherwise, the instrument measurement errors
cannot be separated from the model uncertainty.
7.1.2 Calibration Performance Parameters

The performance of the radar system can be characterized in terms of a set of
calibration parameters. The system performance is typically divided into
absolute (accuracy) and relative (precision) terms. Absolute calibration requires
determination of the overall system gain, while relative calibration does not
require system gain since it involves the ratio of data values within a single
radar system. If relative comparisons are made across radars (or radar channels),
then the system gain does not cancel in the ratio and the absolute gain of each
channel is required.
Single Channel Parameters

Three calibration parameters are generally used to specify the performance of
a single channel (i.e., single frequency, single polarization) radar system. Absolute
calibration is the accuracy of the estimate of the normalized backscatter
coefficient from an image pixel or group of pixels as a result of system induced
errors. Relative calibration is generally categorized according to the time
separation between the pixel values to be compared. Typically, systems are
specified in terms of both their long-term and short-term performance. Long-term
relative calibration refers to the precision of the estimate of the backscatter
coefficient ratio between two image pixels (or groups of pixels), separated by
the time required to produce uncorrelated error sources in the dominant error
terms (e.g., thermal instabilities, attitude variation). Short-term relative calibration
is the uncertainty in the backscatter coefficient ratio between two pixels (or
groups of pixels) separated by a time interval that is short relative to the time
constant of the dominant error sources.
The distinction between short- and long-term relative calibration is somewhat
qualitative and is generally based on the science utilization of the data. In a
typical data analysis, a key parameter is the ratio of the mean power (within
an image frame) of two homogeneous target areas (e.g., for target classification).
Alternatively, an analysis may be from pass-to-pass over a common target area
for change detection. The fact that many error sources are negligible in a short
term comparison, such as within an image frame, establishes the need for an
independent performance specification. Relative errors, such as the variation
due to thermal effects and errors resulting from platform instability, are negligible
if the time separation between measurements is sufficiently short.
Given that the backscattered signal is a complex quantity, we must extend
the above definitions for the system radiometric calibration to include the
estimation accuracy of the target dependent phase. However, this phase term
is only meaningful for multi-channel SAR systems as discussed in the following
section.
Multiple Channel Parameters

For a multi-polarization SAR, both the relative amplitude and the relative phase
stability must be specified to determine the cross-channel calibration performance.
The polarization channel balance is the uncertainty in the estimate of backscatter
coefficient ratio between coincident pixels from two coherent data channels.
Similarly, the polarization phase calibration is the uncertainty in the estimate of
the relative phase between coincident pixels from two coherent data channels.
The phase uncertainty should include both the mean (rms) value and the
standard deviation about the mean, since the second order statistics of the phase
error can contribute significantly to uncertainty in the target polarization
signature (Freeman et al., 1988). These polarization parameters should be
specified for each radar channel combination.
For a multifrequency SAR both the relative and absolute cross-frequency
calibration must be specified for each cross-channel combination. The absolute
cross-frequency calibration is defined as the uncertainty (precision) in the estimate
of the backscatter coefficient ratio between two pixels (or image areas), either
simultaneous or time separated, from frequency diverse radar channels. The
relative cross-frequency calibration is the uncertainty in the estimate of the
cross-frequency ratio of relative backscatter coefficients between two image
pixels or homogeneous target areas. Phase calibration is not meaningful across
frequency channels, since the phase difference between backscatter measurements
at different frequencies is uncorrelated.
7.1.3 Parameter Characteristics

The calibration performance parameters defined in the previous section typically
refer only to systematic error sources. These parameters characterize performance
by excluding target dependent errors such as speckle noise and range and
azimuth ambiguities. Additionally, it is assumed that the power contributed by
the thermal noise is known and can be subtracted from the total received power
prior to the data analysis. Uncertainty in the noise power estimate is typically
not included in the error model.
Furthermore, the calibration parameters are random variables. Since generally
calibration accuracies are specified as a single number, it is inherently assumed
that the probability distribution function of each error term is Gaussian.
Typically, the specified numbers are one standard deviation errors. It is also
generally assumed that the error sources are uncorrelated, such that the various
contributors can be root sum squared (rss) to determine the overall system
performance.
An additional point that should be made is that the calibration errors are
in general a function of both along-track and cross-track position of the target.
In the cross-track dimension, for example, the slope of the antenna pattern
increases with the off-boresight angle. For a given error in the estimate of the
antenna electrical boresight, the relative calibration error will vary depending
on the position of the target within the elevation beam. In the along-track
dimension, orbit-dependent variations, for example, may affect the calibration
uncertainty due to thermal cycling of the instrument.
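The elevation-pattern effect described above can be sketched numerically. The Python fragment below assumes a sinc-shaped one-way elevation pattern (so sinc⁴ in power, two-way) with an illustrative 6 degree beamwidth and a 0.1 degree boresight knowledge error; the pattern model and all numbers are assumptions for illustration, not values from the text:

```python
import math

def sinc(x):
    # Unnormalized-argument sinc: sin(pi*x)/(pi*x).
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def pattern_gain_db(phi, beamwidth):
    # Assumed two-way elevation power pattern (sinc^4 of the normalized
    # off-boresight angle) -- a stand-in for the measured pattern.
    return 40.0 * math.log10(abs(sinc(phi / beamwidth)) + 1e-12)

def radiometric_error_db(phi, boresight_err, beamwidth):
    # Residual error left when the pattern correction is applied with a
    # mis-estimated electrical boresight.
    return pattern_gain_db(phi, beamwidth) - pattern_gain_db(phi - boresight_err, beamwidth)

beamwidth = math.radians(6.0)   # assumed elevation beamwidth
bs_err = math.radians(0.1)      # assumed boresight knowledge error
for phi_deg in (0.5, 1.5, 2.5):
    e = radiometric_error_db(math.radians(phi_deg), bs_err, beamwidth)
    print(f"{phi_deg:.1f} deg off boresight: residual error {e:+.2f} dB")
```

The residual grows toward the edge of the elevation beam because the pattern slope steepens away from boresight, which is the cross-track dependence described above.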
In summary, parameters defining both the absolute calibration accuracy and
the relative calibration precision should be defined to encompass the end-to-end
system. Each of these parameters is a random variable and its value should be
specified with a probability of occurrence. Additionally, the error source
characteristics may be functions of both along-track and cross-track target
position and therefore, in general, should be specified as functions of these two
variables, or at least bounded by the maximum error over some domain. This
set of calibration error sources typically excludes target dependent effects such
as speckle and ambiguity noise, since the relative contribution from these effects
is unique for a given target area. The calibration parameters for single channel
radar systems must be extended for multiple channel radars, since, in a multiple
channel system, additional cross-channel error sources exist.

7.2 CALIBRATION ERROR SOURCES

As was discussed briefly in the previous section, the radiometric calibration
accuracy of the SAR data is not simply dependent on the stability of the sensor
subsystem. The end-to-end system performance, involving the sensor as well as
the downlink and ground processor, must be considered. In this section, we
will review each element in the end-to-end system in terms of its characteristic
error sources and its effect on the overall system calibration.
The objective of the calibration process is to characterize the system with
sufficient accuracy that the properties of the imaged target area (as measured
through its electromagnetic interaction with the radiated signal) can be derived
from the image data values using some systematic analysis procedure. This data
analysis procedure, usually referred to as geophysical processing, interprets
image σ⁰ values in terms of some geophysical characteristic of the target (e.g.,
soil moisture, ocean wave height). The sensitivity of this analysis to errors in
σ⁰ determines the required calibration performance for that specific application.
In general, the greater the dimensionality of the data set (i.e. multiple incidence
angles, polarizations and frequencies), the more robust the analysis procedure
and the more demanding the system calibration performance.
The key elements to be considered in calibrating the SAR system are
illustrated in Fig. 7.1. The following subsections provide an overview of the
calibration error sources for each major subsystem element.
7.2.1 Sensor Subsystem

Included in our discussion of the sensor subsystem are the effects of the
atmospheric propagation errors, as well as those of the radar antenna and the
sensor electronics.
Atmospheric Propagation

The propagation of both the transmitted and reflected waves through the
atmosphere (in which we include the ionosphere) can result in significant
modification in the electromagnetic wave parameters. The key atmospheric
effects are: (1) Attenuation of the signal (amplitude scintillation); (2) Propagation
(group) delay; and (3) Rotation of the polarized wave (Faraday rotation). These
effects are typically localized in both time and space and are therefore extremely
difficult to calibrate operationally.
Amplitude scintillation does not occur naturally above 1 GHz, except along
a band of latitudes centered on the geomagnetic equator and within the polar
regions during periods of peak sunspot activity, which occur in 11 year cycles
(Aarons, 1982). The peak in 1990 nearly coincides with the launch of the ESA
ERS-1; however, the effects will be small for this system, which is a C-band SAR
(λ = 5.6 cm), since the perturbation strength is proportional to wavelength
squared. An analysis for the Seasat SAR (λ = 23.4 cm), which was launched
just prior to the peak sunspot activity in 1979, concluded that fully 15% of the
nighttime Seasat images would show significant degradation. However, an
evaluation of the processed image data does not support this analysis (Rino,
1984). At higher frequencies (above 10 GHz), attenuation from water vapor
absorption could also affect the SAR measurement accuracy (Chapter 1).


Group delay is also an ionospheric effect that is most severe for low frequency
(≤1 GHz), high altitude (>500 km), polar orbiting SARs. An uncompensated
group delay will degrade the SAR performance in two ways. First, the slant
range estimate will be offset according to the error in the propagation velocity
(Chapter 8). A second effect is pulse distortion, which results in spreading of
the pulse (i.e., the ionosphere behaves like a linear dispersive delay line
(Fig. 7.2)). An EM wave propagating through a median ionosphere typically
experiences a two-way group delay of 50 to 100 ns, increasing to as high as
500 ns during peak sunspot periods, with a nominal pulse dispersion of less
than 1% for Seasat-like parameters (Brookner, 1973).
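The slant range offset produced by an uncompensated group delay follows from halving the two-way delay; a minimal sketch using the delay values quoted above:

```python
# Slant range offset implied by an uncompensated two-way ionospheric
# group delay (delay values from the text; c is the free-space speed of light).
c = 2.99792458e8  # m/s

def range_offset_m(two_way_delay_s):
    # A two-way delay t maps to a one-way range error of c * t / 2.
    return c * two_way_delay_s / 2.0

for delay_ns in (50, 100, 500):
    print(f"{delay_ns:4d} ns two-way delay -> {range_offset_m(delay_ns * 1e-9):.1f} m range offset")
```

So the nominal 50 to 100 ns delays correspond to roughly 7.5 to 15 m of slant range offset, growing to about 75 m under peak sunspot conditions.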
Faraday rotation is the effect of the ionosphere on a linearly polarized wave,
producing a rotation in the wave orientation angle. The amount of rotation is
directly related to the ionospheric dispersion resulting from the earth's magnetic
field. It is inversely proportional to the radar carrier frequency squared. At
frequencies above 1 GHz the rotation is small under most ionospheric conditions
and can be neglected.
Figure 7.2 Ionospheric pulse dispersion for a short pulse with a Gaussian envelope. The
results are for grazing angles during severe ionospheric conditions for two-way propagation
of a 1 GHz wave. The pulse attenuation is given by Ψ. (Brookner, 1985.)

An example where the atmospheric effects are significant is the Magellan
SAR designed to map Venus (Chapter 1). The Venusian atmosphere is more
dense than that of the earth. The highly elliptical orbit of Magellan results in
both very shallow and very steep incidence angles over the orbital period. The
result is that the long propagation path through the dense atmosphere causes
significant attenuation and refraction of the EM wave, altering the incident
surface geometry and in some cases the orientation of the wave.
Antenna

The SAR antenna can be a major source of calibration error. There are several
factors that limit the antenna subsystem calibration. First, to achieve the required
SNR, a large antenna gain is required and therefore a large physical aperture
area. Spaceborne antenna systems are typically over 10 m in the azimuth
dimension. To maintain pattern coherence, the structure must be rigid such
that its rms distortion is less than λ/8. Considering the spaceborne environment,
both zero gravity unloading and the large variation in temperature will cause
distortion in the phased array. This distortion can result in gain reduction,
mainlobe broadening, and increased sidelobe levels.
A second key factor limiting the antenna calibration is that the characteristics
of the antenna cannot be easily measured using internal calibration devices. As
we will discuss in a later section, most internal calibration systems bypass the
antenna subsystem and inject known reference signals directly into the radar
receiver electronics. In general, the only method to calibrate the antenna in
flight is by use of external calibration targets. However, this approach limits
monitoring of the antenna performance to certain discrete places within the
orbit. Any intra-orbital variation in this subsystem performance cannot be well
characterized.
A final consideration for the antenna is specifically for the case of an active
array (Fig. 7.3). An active array has phase shifters and transmit/receive (T/R)
modules inserted in the feed system to improve the system SNR and provide
electronic beam steering. Typically, hundreds of active devices are used in such
a design. This presents a difficult problem in characterizing the performance of
each device, which may degrade or fail during the mission lifetime.
Antenna calibration implies precise characterization of the gain and phase
transfer characteristic across the system bandwidth as a function of off-boresight
angle. Additionally, the cross-polarization isolation is an important factor, not
only in the mainlobe of the antenna pattern but also in the sidelobe regions
that are aliased back into the mainlobe by the PRF sampling (Blanchard and
Lukert, 1985).

Sensor Electronics

The sensor electronics, which include both the RF and digital assemblies, are
typically well characterized by internal calibration devices. The system
performance, which is given in terms of the rms phase and amplitude errors
across the system bandwidth, can vary as a result of component aging or thermal
variation. The internal calibration loops employ either coded pulse replicas or
calibration tones to determine the system response function.

A second factor in characterizing the performance of the sensor electronics
is the system linearity. The dynamic range of the receiver electronics should
always exceed that of the ADC, and the video amplifier linear dynamic range
should always be designed such that it is the first to saturate at any gain setting.
Typically, a 35-40 dB instantaneous dynamic range is required for an acceptable
distortion noise level.
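As a rough sizing aid, the ideal-quantizer rule of thumb (SNR ≈ 6.02N + 1.76 dB for a full-scale sinusoid) can be inverted to find the number of ADC bits implied by a given instantaneous dynamic range. This is a sketch under that idealized assumption only; it ignores design margin and the statistics of real SAR signals:

```python
import math

def adc_bits_for_dynamic_range(dr_db):
    # Ideal-quantizer rule of thumb: SNR ~ 6.02*N + 1.76 dB for a
    # full-scale sinusoid; solve for the number of bits N and round up.
    return math.ceil((dr_db - 1.76) / 6.02)

for dr in (35.0, 40.0):
    print(f"{dr:.0f} dB instantaneous dynamic range -> at least {adc_bits_for_dynamic_range(dr)} bits")
```

By this idealized measure, the 35 to 40 dB requirement corresponds to roughly 6 to 7 ADC bits before any allowance for margin or signal statistics.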
7.2.2 Platform and Downlink Subsystem

A key element in determining the overall system calibration accuracy and the
image quality is the sensor platform. A stable platform with precise attitude
and orbit determination capability is a necessity for the generation of calibrated
data products. Uncertainty in the sensor position and velocity primarily affects
the geometric calibration, degrading the target location accuracy and the
geometric fidelity of the image. This will be discussed in more detail in Chapter 8.
The platform attitude variables, in conjunction with its ephemeris, are key
parameters for determination of the echo data Doppler parameters. Even with
parameter estimation routines, such as clutterlock and autofocus, the initial
predicts must be sufficiently accurate for the estimates to converge properly. It
should be noted that these Doppler parameter estimation techniques are target
dependent, thus the convergence accuracy, and therefore the system performance,
depend on the surface characteristics. It is preferable to have attitude sensors
capable of measuring the sensor attitude to within one tenth of a beamwidth
in azimuth and several hundredths of a beamwidth in elevation.
The platform control is an important factor determining the quality of the
SAR image products. A large attitude rate, if not tracked by the SAR azimuth
reference function, will degrade the image quality by reducing the SNR within
the processing bandwidth. For block processing in azimuth, the Doppler
centroid varies as a function of time over the synthetic aperture length, which
results in the processing bandwidth being properly centered at only one point
within the block. The calibration error bias can be corrected, if the attitude rate
is known, by adjusting the processor gain for each block according to the signal
loss.
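The pointing requirement can be connected to the Doppler centroid with a small-angle sketch: a yaw error Δψ shifts the centroid by roughly 2VΔψ/λ. The Seasat-like velocity, wavelength, and antenna length below are assumed for illustration only:

```python
# Doppler centroid offset produced by a small yaw (azimuth pointing)
# error: delta_fdc ~ 2 * V * delta_psi / wavelength for small angles.
# Seasat-like parameter values are assumed for illustration.
V = 7450.0          # platform velocity, m/s (assumed)
wavelength = 0.235  # L-band wavelength, m (assumed)
L_a = 10.7          # antenna length, m (assumed)
beamwidth = wavelength / L_a   # azimuth beamwidth, rad
doppler_bw = 2.0 * V / L_a     # Doppler bandwidth of the beam, Hz

for frac in (1.0, 0.1):        # yaw error as a fraction of beamwidth
    dpsi = frac * beamwidth
    dfdc = 2.0 * V * dpsi / wavelength
    print(f"yaw error = {frac:.1f} beamwidth -> centroid offset "
          f"{dfdc:.0f} Hz ({dfdc / doppler_bw:.0%} of the Doppler bandwidth)")
```

In this small-angle model a yaw error of one full beamwidth shifts the centroid by one full Doppler bandwidth, which is why attitude knowledge of a tenth of a beamwidth in azimuth is a reasonable target.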
Random errors caused by the data downlink have little effect on the
radiometric calibration for distributed targets. A severe bit error rate (i.e.,
>10⁻³) can degrade the impulse response function and therefore affect the
external calibration accuracy if point targets are used. If an entire echo line of
data were lost in the Level 0 (telemetry data) processing, the internal fidelity
of the data set would be degraded. The effect is most severe for multichannel
systems such as an interferometer or a polarimeter, where the loss of a line of
echo data in one channel will cause a relative channel-to-channel phase error.
7.2.3 Signal Processing Subsystem

The signal processing subsystem consists of three major elements: (1) SAR
correlator; (2) Post-processor; and (3) Geophysical processor.


SAR Correlator

The SAR correlator (Level 1A processor) forms the image products from the
digitized video signal data by convolving the raw data with a two-dimensional
matched filter reference function (Chapter 4). The reference function coefficients
are derived from the Doppler characteristics of the echo data. Typically, the
SAR correlator processing algorithm approximates the exact matched filter
function with two one-dimensional filters. Additionally, in the frequency domain
fast convolution algorithm, the Doppler parameters are assumed constant within
a processing block. For large squint angles and large attitude rates, these
approximations are inadequate, producing matched filtering errors. The result
is an increased azimuth ambiguity level, loss of SNR, degraded geometric
resolution, and geometric distortion (image skew). The accuracy of the matched
filtering is especially critical when external calibration targets are used to derive
the sensor induced errors, since the sensor and processor errors cannot be
separated to identify the error source. As we will discuss in more detail in
Section 7.6.1, a technique has recently been developed to minimize the effect of
matched filtering errors on calibration performance (Gray, 1990). However, as
described above, these errors will still affect the image quality (impulse response
function) characteristics.
Post-Processor

The post-processor performs geometric and radiometric corrections on the SAR
image data. A key element in this process is the estimation of the correction
coefficients. This requires an analysis of ancillary data sets such as: (1) Engineering
telemetry; (2) Sensor, platform, and processing parameters; and (3) External
calibration device measurements. These data, in conjunction with preflight test
data and calibration site imagery, are used to develop a time dependent model
for the radar system transfer characteristic. This model in turn provides estimates
of the sensor errors at any time during the mission, assuming that the sensor
instabilities (e.g., thermal drift) are deterministic and can be measured. The
accuracy of the model will depend on the performance of the internal calibration
devices, as well as on the frequency of the spatial (cross-track) and temporal
(along-track) sampling of the system transfer function using ground calibration
sites. The calibration plan must consider the effects of the space environment
as well as the telemetry bandwidth and the performance limitations of both the
internal and external calibration devices. Typically, the post-processor correction
errors are driven by the accuracy of the input data used to derive the correction
coefficients and not by the performance of the post-processor subsystem.
Geophysical Processor

The geophysical processor interprets the calibrated backscatter measurements
(e.g., σ⁰) in terms of the surface biogeochemical characteristics. Depending on
the specific parameter to be measured, this can be done by inversion of a
scattering model (e.g., Bragg model), or empirically by using the statistics of
the image (e.g., the ratio of the mean to the standard deviation). With either
approach, ground truth data is generally required to train and/or verify the

geophysical processing algorithm. The accuracy of the derived geophysical data
depends on the image data calibration and the adequacy of the scattering
model. A critical factor in developing a geophysical processing algorithm is
parametrization of the analysis such that key environmental factors can be
included (e.g., surface temperature, diurnal variation, wind speed, etc.). The
most successful algorithms to date are those that are relatively insensitive to
calibration errors (e.g., they utilize only the ratio of pixel values).

Summary

Calibration of the SAR end-to-end data system presents a formidable challenge
to both the radar and ground processor design engineers. The uncertainty in
the characterization of each element in the data system must be established,
and an overall error model developed to determine if the expected system
performance meets the specification. A key factor is the stability of the radar
sensor relative to the calibration measurement sampling interval. If the transfer
characteristic is not adequately sampled in either time or frequency, then the
accuracy of the correction coefficients will be degraded.

In the following sections, we discuss the internal and external calibration
measurement strategies by reviewing current system designs. Their performance
will be assessed in terms of a system error model. In the second portion of the
chapter we will discuss the ground calibration processor design in terms of the
image analysis and data correction algorithms required.

7.3 RADIOMETRIC ERROR MODEL

The process of radiometrically calibrating the SAR image data can be reduced
to estimation of the bias and scale factors that relate the backscatter coefficient
to the image data number (DN). Assuming the system is linear, we can write
the receiver output power as

    P_r = P_s + P_n                                   (7.3.1)

where P_r is the total received power, P_s is the signal power, and P_n is the
additive (thermal) noise power. Ignoring the effects of ambiguities, the signal
power is related to the mean radar cross section σ̄ by

    P_s = K'(R) σ̄                                    (7.3.2)

where K'(R) is a range dependent scale factor.

Recall from Section 2.3 that the radar cross section σ of a patch of terrain
is a random variable. The mean radar cross section σ̄ of a region is only defined
for an extended area of homogeneous statistical properties. Assuming the average
signal level for a homogeneous SAR image is independent of scene coherence
(Raney, 1980), a statistically uniform target region can be modeled as a discrete
set of scatterers with complex reflectivity, as in Eqn. (3.2.3), by

    ζ(x, y) = A(x, y) exp[jψ(x, y)]                   (7.3.3)

spaced at intervals equal to the unprocessed resolution cell size. The amplitude
A(x, y) is modeled as a Rayleigh distributed, stationary process, while the phase
ψ(x, y) is uniformly distributed and stationary. The expected radar cross section
is therefore

    σ̄ = E_{x,y}{|ζ(x, y)|²} Δx ΔR_g = σ⁰ Δx ΔR_g      (7.3.4)

where E_{x,y} is the expectation over x and y and Δx, ΔR_g are the azimuth and
ground range resolution cell sizes of the unprocessed raw video signal (the beam
footprint).

Substituting Eqn. (7.3.4) and Eqn. (7.3.2) into Eqn. (7.3.1), we can write the
mean received power for a homogeneous target as

    P̄_r = K'(R) σ⁰ Δx ΔR_g + P̄_n                      (7.3.5)

where P̄_n is the mean noise power over some block of data samples used in the
estimation of σ⁰.

If we ignore the effects of system quantization and saturation noise, the mean
received power for a homogeneous target is related to the digitized video signal
by

    P̄_r = Σ_{i,j} |n_d(i, j)|² / M² = n̄_d²

where n_d(i, j) is the complex data number of the (i, j) digitized sample and M²
is the number of samples averaged. From Eqn. (7.3.5) we can write

    σ⁰ = (n̄_d² − P̄_n) / K(R)                          (7.3.6)

where

    K(R) = K'(R) Δx ΔR_g                              (7.3.7)

Thus, if the scale factor K(R) and the mean noise power P̄_n can be estimated
over a small area (M × M samples) of the data set, then the mean backscatter
coefficient σ⁰ can be determined from Eqn. (7.3.6).

In general, P̄_n and K(R) will be both frequency and time dependent, given
the radar component aging, thermal stress, and platform motion. However, the
frequency dependence is significant only in terms of the processor matched
filter error characteristics. For a point target, these errors will be expressed in
terms of mainlobe broadening and increased sidelobe energy in the point target
response function. For a distributed target, the processor matched filtering
integrates the frequency response, thus the shape of this response is not
significant, since only the integrated power affects the radiometric calibration.
In general, the noise power and scale factor should be written as functions of
time, P̄_n(t) and K(R, t), and can only be considered constant over a small block
of data.
Since the calibration correction parameters vary with time, the estimates of
these parameters cannot be extrapolated over a large area. Additionally, there
is a large uncertainty in the σ⁰ estimate if M is only a few pixels. This is due
to the inherent speckle noise in the data resulting from a large number of
independent scatterers within a single resolution cell (Section 2.3, 5.2). Since
the intensity of a one-look pixel (M = 1) obeys the exponential probability
distribution function, Eqn. (5.2.9), this uncertainty is 3 dB. Stated differently,
there is about a 50% probability that the single-look pixel value lies outside
the σ⁰ ± 3 dB range. The estimate of the noise power also must be derived
from a large number of pixels to determine the statistical mean. On an individual
pixel basis, the actual noise power may deviate significantly from the mean
noise estimate.
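Both effects can be illustrated numerically. The Python sketch below draws one-look intensities from the exponential distribution of Eqn. (5.2.9) for a homogeneous scene, checks the roughly 50% chance that a single-look pixel falls outside ±3 dB of the mean, and then applies the estimator of Eqn. (7.3.6) to an M × M block; the values assumed for K(R), the noise power, and σ⁰ are arbitrary illustrative choices:

```python
import random

random.seed(7)

# Assumed (illustrative) system values -- not from the text.
K_R = 2.0e6          # scale factor K(R)
P_n = 0.5            # mean noise power
sigma0_true = 0.05   # true backscatter coefficient

# One-look intensity of a homogeneous scene is exponentially distributed.
mean_power = K_R * sigma0_true + P_n
N = 200_000
pixels = [random.expovariate(1.0 / mean_power) for _ in range(N)]

# Fraction of single-look pixels outside +/-3 dB (a factor of ~2) of the mean.
lo, hi = mean_power * 10 ** -0.3, mean_power * 10 ** 0.3
outside = sum(1 for p in pixels if p < lo or p > hi) / N
print(f"single-look pixels outside +/-3 dB of the mean: {outside:.1%}")

# Averaging an M x M block beats down the speckle before applying Eqn. (7.3.6).
M = 32
block = pixels[:M * M]
sigma0_hat = (sum(block) / len(block) - P_n) / K_R
print(f"sigma-0 estimate from {M}x{M} block: {sigma0_hat:.4f} (true {sigma0_true})")
```

The single-pixel fraction lands slightly above 50%, consistent with the statement above, while the block average recovers σ⁰ to within a few percent.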
The variation in noise power over time primarily results from variation in
the radar receiver chain component gains. This drift can usually be measured
from receive-only noise measurements, when the transmitter is placed in a
standby mode and only the thermal noise is recorded. The changes over time
in thermal noise power can be monitored using internal calibration signals that
measure the overall receiver gain characteristic.
A formulation for the range dependent scale factor K(R) in terms of
measurable quantities can be derived from the radar equation, as we will show
in the next section. It is dependent on radar system parameters such as the
antenna gain pattern, the transmit power, and the sensor-to-target slant range.
Errors in the estimates of these system parameters will degrade our estimate
of K(R) and therefore the radiometric calibration.
To evaluate the sensitivity of σ⁰ to errors in the estimates of K(R) and P̄_n,
we take the partial derivative of Eqn. (7.3.6) with respect to each parameter.
The uncertainty in the estimate of σ⁰ for a given error in K(R) is:

    s_σ = s_K (n̄_d² − P̄_n) / K²(R)                    (7.3.8)

while the σ⁰ error for a given error in P̄_n is:

    s_σ = s_P / K(R)                                  (7.3.9)

where s_σ, s_K, and s_P are the standard deviations of the estimates of σ⁰, K(R),
and P̄_n, respectively. We have assumed that the estimates of K(R) and P̄_n are
unbiased, such that

    E{K̂(R) − K(R)} = 0;    E{P̂_n − P̄_n} = 0

where E represents the expectation and K̂(R) and P̂_n are the estimated values.
Combining Eqn. (7.3.8) and Eqn. (7.3.9) and rearranging terms, the fractional
uncertainty in the estimate of σ⁰ from errors in the noise power and the correction
factor is given by

    (s_σ/σ⁰)² = (s_K/K(R))² + (s_P/(σ⁰ K(R)))²        (7.3.10)

where we have assumed the estimation errors are uncorrelated, Gaussian
distributed variables. Recall from Eqn. (2.7.1) that K(R) is the product of a
number of terms (transmit power, antenna pattern, etc.), such that

    K(R) = K₁ K₂ ··· K_N

If we assume that the distribution of the estimation errors for each term is
Gaussian and uncorrelated, and if we further assume that the variances are
small, then the coefficient of variation of the K(R) estimation error is given by
the sum of the coefficients of variation of the individual parameters (Kasischke
and Fowler, 1989)

    ε_K² = ε_K1² + ε_K2² + ··· + ε_KN²                (7.3.11)

where the coefficient of variation, ε_x = s_x/x̄, is the ratio of the standard deviation
to the sample mean for the random variable x. Combining Eqn. (7.3.10) and
Eqn. (7.3.11), the error model becomes

    ε_σ = [ε_K1² + ε_K2² + ··· + ε_KN² + (s_P/(K(R)σ⁰))²]^1/2    (7.3.12)

where ε_σ = s_σ/σ⁰. Using the relationship in Eqn. (7.3.6), we get a final
expression for our error model as

    ε_σ = [ε_K1² + ε_K2² + ··· + ε_KN² + (s_P/(n̄_d² − P̄_n))²]^1/2

Thus the coefficient of variation for σ⁰ is given by the root-sum-square of the
coefficients of variation of the individual terms in the radar equation plus a
scaled noise term.
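A numerical sketch of this root-sum-square error model, with assumed 1σ coefficients of variation for the individual K(R) factors and an assumed noise contribution (the budget numbers are illustrative, not from the text):

```python
import math

def sigma0_cov(term_covs, s_P, K_R, sigma0):
    # Eqn. (7.3.12): root-sum-square of the per-term coefficients of
    # variation of K(R), plus the scaled noise-power term.
    noise_term = s_P / (K_R * sigma0)
    return math.sqrt(sum(e * e for e in term_covs) + noise_term ** 2)

# Assumed 1-sigma fractional errors for the K(R) factors
# (transmit power, receiver gain, two-way antenna pattern, incidence angle):
term_covs = [0.05, 0.04, 0.10, 0.03]
eps = sigma0_cov(term_covs, s_P=2.0e3, K_R=2.0e6, sigma0=0.05)
print(f"sigma-0 coefficient of variation: {eps:.3f} "
      f"({10 * math.log10(1 + eps):.2f} dB, one-sided)")
```

Note how the largest single contributor (here the assumed 10% antenna-pattern term) dominates the root-sum-square, which is why antenna characterization usually drives the error budget.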

7.4 THE RADAR EQUATION

Given the radar equation for a distributed target as defined in Eqn. (2.8.2), we
can write the receiver output signal power as

    P_s = P_t G_r G²(φ) λ² (σ⁰ Δx ΔR_g) / ((4π)³ R⁴)    (7.4.1)

for a homogeneous scene, where we have assumed that the antenna is reciprocal
(i.e., G_t = G_r = G), P_t represents the radiated power, G_r is the overall receive
gain, and Δx ΔR_g is the ground area of each precompression resolution cell.
(The point target radar equation would use the term σ, the radar cross section
of the point target, in parentheses in Eqn. (7.4.1).)

From Eqn. (7.3.2), Eqn. (7.3.4) and Eqn. (7.4.1), the range dependent scale
factor K(R) is given by

    K(R) = P_t G_r G²(φ) λ² Δx ΔR_g / ((4π)³ R⁴)    (7.4.2)

The area of the resolution cell (precompression) is given by Δx ΔR_g, where

    Δx = λR/L_a    (7.4.3)
    ΔR_g = cτ_p/(2 sin η)    (7.4.4)

where in turn τ_p is the pulse duration, L_a is the antenna length, and η is the
incidence angle.

Inserting Eqn. (7.4.3) and Eqn. (7.4.4) into Eqn. (7.4.2) and rearranging terms,
we get

    K(R) = (c/(128π³)) P_t G_r G²(φ) λ³ τ_p / (L_a R³ sin η)    (7.4.5)

In evaluating Eqn. (7.4.5), certain terms are known to high precision and can
be ignored in an analysis of the system calibration accuracy. These include: (1)
Wavelength, λ; (2) Pulse duration, τ_p; (3) Antenna length, L_a; (4) Slant range,
R; and (5) The constant term, c/128π³. Therefore, we can rewrite Eqn. (7.4.5) as

    K(R) = K_s(R) P_t G_r G²(φ) / sin η    (7.4.6)

where K_s(R) is comprised of the "deterministic" terms in K(R). Thus the
calibration problem is reduced to estimation of the radiated power P_t; the
overall receive gain G_r; the elevation antenna pattern and boresight gain G(φ);


and the incidence angle η, which depends on the platform roll angle. Additionally,
the noise power term in Eqn. (7.3.5) must be estimated.
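The evaluation of K(R) can be illustrated with a short numerical sketch, assuming the form K(R) = (c/128π³)·P_t·G_r·G²(φ)·λ³·τ_p/(L_a·R³·sin η); the parameter values below are hypothetical, not figures from the text.

```python
import math

C = 3.0e8  # speed of light (m/s)

def k_range(pt, gr, g_phi, lam, tau_p, la, r, eta):
    """K(R) = (c / 128 pi^3) * Pt * Gr * G(phi)^2 * lam^3 * tau_p / (La * R^3 * sin(eta))."""
    return (C / (128.0 * math.pi ** 3)) * pt * gr * g_phi ** 2 * lam ** 3 * tau_p / (
        la * r ** 3 * math.sin(eta))

# Hypothetical L-band numbers: 1 kW radiated power, unity receiver gain,
# one-way antenna gain of 1000, 33 us pulse, 10.7 m antenna, 850 km slant
# range, 23 degree incidence angle.
k = k_range(1.0e3, 1.0, 1.0e3, 0.235, 33.0e-6, 10.7, 850.0e3, math.radians(23.0))
print(k)
```

The R³ dependence is the reason K must be evaluated as a function of range: doubling the slant range reduces the scale factor by a factor of eight.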
The calibration techniques to estimate these parameters are broken into
internal calibration and external calibration measures. The internal calibration
uses data from built-in calibration devices to measure primarily the transmitter
power output and the receiver gain. Typically these devices will only be used
to track the system drifts over time. External calibration techniques generally
use image data of calibration sites equipped with point targets of known
scattering properties, or images of distributed target sites with known σ⁰. These
data are used primarily for absolute gain and antenna pattern estimates. The
following section will describe each of these techniques in detail.

7.5 RADIOMETRIC CALIBRATION TECHNIQUES

The engineering procedure for characterizing the radiometric accuracy of a SAR


system begins with the specification and design of the instrument and carries
through to the premission testing and long-term operations (Freeman, 1990b).
Figure 7.4 illustrates the system calibration process. The initial activity during
the instrument design phase is to specify an internal calibration subsystem that
will measure critical instrument parameters during its operational lifetime
(e.g., receiver gain, transmitter output power, etc.). Following the instrument
fabrication, a set of preflight system tests are conducted.
The preflight tests are primarily to verify that the sensor performance meets
specifications. A secondary goal of the testing is to derive the functional
dependencies between the internal measurements and the system performance
parameters. For example, the antenna flatness may be sensitive to a temperature
gradient across the array, causing warping when the sun illuminates one end
of the antenna. It may be possible to characterize the effective change in the
antenna pattern (e.g., the peak gain and/or the electrical boresight) as a function
of the temperature profile from temperature sensors located on the array. If
this relationship is characterized preflight then it could be used as an indirect
measure of the antenna performance during the operational phase.
In conjunction with the internal calibration measurements performed routinely
throughout the operational phase, less frequent external calibration measurements
are also required. The external calibration sites provide known targets for
directly measuring the end-to-end system performance. Additionally, these sites
can be used to verify the estimated sensor performance by conducting multisensor
measurement campaigns with other calibrated sensors (e.g., scatterometers).
These campaigns are very important for detecting estimation biases or analysis
errors that cannot otherwise be identified. The ground calibration sites should
consist of a combination of point targets (e.g., corner reflectors, transponders)
and distributed targets of known homogeneous backscatter characteristics
(e.g., a rain forest).

Figure 7.4 The SAR system calibration process.

Perhaps the most difficult task in calibrating the SAR system is not in
collecting this set of calibration data, but rather in performing the calibration
analysis to derive the correction parameters. As shown in Figure 7.4, the final
stages in generating calibrated data products are: (1) Assembling the calibration
metadata and calibration site imagery into a database; (2) Performing analysis
of this data to derive the radiometric correction factors (i.e., the K(R) and P_n
terms as functions of time); and (3) Incorporating this information into the
operational processing data flow to routinely generate calibrated data products.
This section addresses specifically the sensor calibration measurements and the
ground calibration site design. The following sections will address the calibration
processor design and data analysis in some detail.

7.5.1 Internal Calibration

The internal calibration measurements are only useful in conjunction with the
preflight system test results that define the relationship between these built-in
device measurements and the key system performance parameters. This is
especially true for a spaceborne SAR such as the E-ERS-1 SAR or the SIR-C.
For systems such as these, extensive testing of the RF electronics, digital
electronics, and the antenna are made over temperature and, when possible, in
a vacuum environment. Key system parameters such as: transmitter output
power, transmitter and receiver losses, receiver gain, antenna gain and pattern,
RF /digital electronics linearity and dynamic range, and phase/amplitude versus
frequency stability are measured as functions of temperature at each (unique)
radar gain and PRF setting. Proper placement of internal calibration devices,
such as temperature, current, and power meters, will permit determination of
the system performance as a function of variation in these parameters.
Obviously, this technique assumes that the variation in system performance
can be modeled as a function of these observable parameters. Furthermore, we
assume that these calibration devices are themselves accurately calibrated and
stable over time. In addition to these built-in test meters, most radar systems
perform in-flight RF test measurements using calibration loops. To illustrate
the two fundamental approaches to the RF internal calibration design we
consider as examples the ESA E-ERS-1 SAR and the NASA/ JPL SIR-C designs.


E-ERS-1 Internal Calibration


The E-ERS-1 instrument is a C-band passive array design with a single


transmitter (TWT, SAW) assembly and a single receive unit with dual ADCs
operating 90° out of phase (i.e., I/Q mode). In addition to the built-in DC test
meters (temperature, power, voltage, etc.) to monitor system health, this system
features an RF calibration loop designed to make a direct measurement of the
system transfer characteristic (Attema, 1988). This is accomplished by routing
the transmitted signal through a test path, bypassing the antenna, and inserting
an attenuated replica of the transmitted signal into the front end of the receiver
chain. Figure 7.5 illustrates the internal calibration loops of the E-ERS-1 SAR
system.

Figure 7.5 Internal calibration loop design used by the ESA ERS-1 SAR. A similar design
is employed by the X-SAR shuttle radar (Attema, 1988).

The high power amplifier (HPA) output is coupled into a bypass circuit
that has two possible paths. The calibration loop signal (RF replica) passes
through the entire receiver chain, bypassing only the antenna, while the pulse
(IF) replica loop additionally bypasses the entire RF stage of the receiver and
inserts a signal into the front end of the receiver IF stage.
The details of the calibrator block in Fig. 7.5 are shown in Fig. 7.6. The
calibration loop is used only during the turn-on and turn-off phases of the data
collection operation. The high power amplifier (HPA) output is coupled
( - 58 dB) into the calibrator bypass circuit and demodulated to an intermediate
frequency (123 MHz). The signal is then filtered, attenuated, and shifted back
to its original RF center frequency where it is coupled ( -44.5 dB) into the
front end of the receive chain prior to the low noise amplifier (LNA). An HPA
power out measurement is performed using a power meter. This measurement
is then sent to the control processor for incorporation into the downlink data
stream.
The pulse replica loop is used primarily during the data acquisition phase
of the operations. This loop injects a replica of the transmitted pulse into the
data stream during the quiet periods between pulse transmission and echo
reception. A delay line is used to properly insert this echo into the data stream
without interfering with the received signal. A command from the control
processor is used to set the signal level to be compatible with the selected IF
amplifier gain in the receive chain. The pulse replica loop injects this attenuated
signal into the receiver following the LNA at an intermediate frequency to
minimize the front-end noise contamination. It is important to note that the
pulse replica loop cannot directly measure the system gain variation since the
primary source of gain drift is the front end LNA.
The E-ERS-1 internal calibration loops will be used as follows to correct for
system errors (Corr, 1984). The relative change in transmitter output power
times the receiver gain variation is measured by the calibration loop during the
turn-on/off sequences. The gain at any time during the data acquisition period
is then estimated assuming a linear variation over the period. This is a reasonable

assumption since the period between turn-on and turn-off is relatively short
(nominally < 5 minutes).

Figure 7.6 Detail design of the internal calibrator for E-ERS-1 (Attema, 1988).

The pulse replica loop is primarily used to obtain the
relative gain and phase characteristics (minus the LNA) across the system
bandwidth. This transfer function estimate is then used to determine the exact
range pulse code for use in the ground signal processor. If the pulse code (e.g.,
chirp) generator is not stable (e.g., phase drift), then a frequent update in the
range compression function may be required for formation of the synthetic
aperture.
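The linear drift assumption amounts to a simple interpolation between the turn-on and turn-off loop measurements. A minimal sketch, with hypothetical gain values (not E-ERS-1 data):

```python
def interpolate_gain(t, t_on, g_on_db, t_off, g_off_db):
    """Linearly interpolate the calibration-loop gain measurement (dB) at time t,
    assuming the drift between turn-on and turn-off is linear."""
    frac = (t - t_on) / (t_off - t_on)
    return g_on_db + frac * (g_off_db - g_on_db)

# Hypothetical data take: the loop measures 60.0 dB at turn-on and 59.6 dB at
# turn-off 300 s later; any echo time in between gets an interpolated correction.
print(f"{interpolate_gain(150.0, 0.0, 60.0, 300.0, 59.6):.2f} dB")
```

Interpolating in dB rather than in linear power is itself an approximation, but for drifts of a few tenths of a dB over a short data take the difference is negligible.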
SIR-C Internal Calibration

An alternative approach to internal system calibration is to use a single frequency


tone generator that is coherent with the stable local oscillator (stalo) controlling
the radar system. This design, shown in Fig. 7.7, is used by the NASA/JPL
Shuttle Imaging Radar series of instruments (Klein, 1990a). The calibrator
subsystem generates a stable low power tone that is used to monitor changes

in the receiver transfer characteristic.

Figure 7.7 Internal calibration loop design used by the NASA/JPL SIR-B and SIR-C
instruments.

Prior to the data acquisition during the
turn-on phase of operation, the calibrator generates a tone spanning the full
dynamic range of the receiver. This continuous tone signal is injected into the
receiver data stream via a directional coupler. It scans across the passband,
dwelling at each frequency position for a fixed number of pulses. Typical numbers
for SIR-C would be a scan over 11 frequency positions, dwelling at each position
for 64 pulses (≈0.05 s).
During the data acquisition phase, the tone is set in a fixed position in the
center of the system bandwidth at a power level more than 12 dB below the
expected signal power. The calibration tone (caltone) signal power is set at this
low level to ensure that it does not contribute significantly to receiver saturation.
Details of the SIR-C calibration subsystem are shown in Fig. 7.8.

Figure 7.8 Details of the SIR-C calibration design.

The caltone frequency is derived from the stalo frequency f_slo, the sampling
frequency f_s, and the PRF f_p. It is selected such that the calibration tone falls into a discrete
FFT bin during the signal processing. The calibration output power is controlled
by a thermal compensation circuit to maintain less than 0.1 dB variation over
a range of operating temperatures. A step attenuator is used to adjust the caltone
signal power such that it is always 12-18 dB below the echo signal power. The
resulting caltone will be phase locked with the radar from pulse to pulse. This
permits coherent integration of consecutive echoes to effectively increase the
caltone power relative to the echo power for a precise measurement of receiver
gain.
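The bin-centering constraint amounts to choosing the caltone frequency as an integer multiple of f_s/N for the processor's N-point range FFT. A short sketch; the sampling rate here is illustrative, not a SIR-C value:

```python
def caltone_frequency(fs, nfft, bin_index):
    """Choose f_cal = k * fs / N so the tone lands exactly in FFT bin k and its
    energy is not smeared across neighboring bins."""
    return bin_index * fs / nfft

fs = 45.0e6  # hypothetical complex sampling rate
print(caltone_frequency(fs, 1024, 512) / 1.0e6)  # tone placed at mid-band, in MHz
```

Any offset from an exact bin frequency would spread the tone energy across the spectrum (scalloping loss), degrading the gain estimate described next.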
The caltone is extracted from the data during signal processing by performing
an FFT on each echo line within a data block (e.g., 1024 samples by 512 lines).
Each transformed line is then summed coherently in the along-track direction.
For example, a 1024 sample range transform effects a 30 dB gain in the caltone
to receiver output power (P_s + P_n) ratio. This gain is achieved since the caltone
energy is confined to a single FFT bin, while the received signal energy is spread
across all 1024 bins. A phase coherent azimuth summation of 512 transformed
lines achieves an additional 27 dB gain in the caltone power level. However,
this is partially offset by the unfocussed SAR aperture gain which is approximately
15 dB (35 lines) for a nominal SIR-C mode. Thus a caltone to signal ratio of
30 dB can be achieved from processing a 1024 by 512 block of data, assuming
the initial caltone to signal data ratio is set at −12 dB. The resulting caltone
estimation error (<0.01 dB) is small relative to the expected caltone power
drift (≈0.1 dB). The results of a simulation using NASA/JPL DC-8 SAR data
(unfocussed aperture gain ≈10 dB) are shown in Fig. 7.9 (Kim, 1989).
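The detection-gain argument can be checked with a small simulation. This is an illustrative sketch, not the SIR-C processor: complex white noise stands in for the echoes, and the block size matches the 1024 by 512 example above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_range, n_lines = 1024, 512
k_cal = 512  # FFT bin holding the caltone

# Unit-power complex noise standing in for the echoes, plus a weak caltone
# that is phase locked from pulse to pulse (identical on every line).
echoes = (rng.standard_normal((n_lines, n_range)) +
          1j * rng.standard_normal((n_lines, n_range))) / np.sqrt(2.0)
tone_amp = 10 ** (-12 / 20.0)  # caltone set 12 dB below the echo power
t = np.arange(n_range)
data = echoes + tone_amp * np.exp(2j * np.pi * k_cal * t / n_range)

spectra = np.fft.fft(data, axis=1)  # range transform of each echo line
avg = spectra.mean(axis=0)          # coherent (phase-preserving) azimuth sum

# The incoherent echo energy averages down while the phase-locked tone does
# not, so the caltone bin now stands far above the noise floor.
floor = np.median(np.abs(avg) ** 2)
ratio_db = 10.0 * np.log10(np.abs(avg[k_cal]) ** 2 / floor)
print(f"caltone-to-floor ratio: {ratio_db:.1f} dB")
```

With these block sizes the measured ratio comes out in the mid-40 dB range, consistent with the −12 + 30 + 27 dB budget described in the text (the unfocussed aperture gain does not apply to pure noise).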
The caltone gain estimate is used to normalize the data samples acquired
during a time interval around the processed block of data. Typically the signal
processing generates an image frame from each 15 s block of data. Caltone
estimates from the beginning and end of the data block are routinely produced
to verify system stability over the 15 s period. The raw digitized video data are
then normalized according to the estimated mean caltone power level after their
conversion to a floating point representation and after subtraction of the caltone.
The caltone subtraction can be performed in either the time domain or the
frequency domain, given estimates of both the caltone gain and phase. If zero
padding of the data is required to achieve the "power of two" FFT block in
the range correlator, then the caltone energy will be dispersed according to the
fraction of zero samples. This greatly complicates the frequency domain
estimation and subtraction procedures. In this case, the caltone subtraction is
most efficient in the time domain. The caltone scan sequence during the turn-on
and turn-off phases of the data collection measures the gain and phase variation
across the system bandwidth. These measurements can be used to adjust the
range reference function for optimum matched filtering during the signal
processing.
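As a concrete sketch of the time-domain option, the following assumes the caltone gain and phase have already been estimated (e.g., from its FFT bin); the function and the reference level are hypothetical illustrations, not the operational SIR-C algorithm.

```python
import numpy as np

def remove_and_normalize(data, k_cal, cal_amp, cal_phase, ref_amp):
    """Subtract the estimated caltone in the time domain, then scale the samples
    so the caltone-derived gain is normalized to a reference level."""
    n = data.shape[-1]
    t = np.arange(n)
    tone = cal_amp * np.exp(1j * (2.0 * np.pi * k_cal * t / n + cal_phase))
    return (data - tone) * (ref_amp / cal_amp)

# Synthetic check: a pure caltone plus a constant "signal"; after removal and
# normalization only the scaled signal remains.
n = 1024
t = np.arange(n)
x = 0.25 * np.exp(2j * np.pi * 512 * t / n) + (1.0 + 0.0j)
y = remove_and_normalize(x, 512, 0.25, 0.0, 0.20)
print(np.allclose(y, 0.8))
```

Because the subtraction happens sample by sample, it is unaffected by any zero padding applied later in the range correlator, which is the advantage noted above.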
The caltone scheme described above has one distinct benefit in that it can
be used to measure the gain variation throughout the data take. However, its
shortcoming is that it does not measure transmitter output power. This can be

done using a power meter (Fig. 7.7). However, the precision of such a meter is
typically not adequate to meet the calibration accuracy requirements. Alternatively,
the transmitter performance can be characterized in terms of its output power
versus temperature characteristic. Generally, relative changes in the transmitter
output power over a short time period are highly correlated to its operating
temperature. In this scheme, the absolute measure of radiated power can only
be determined using external calibration devices such as a ground receiver.

Figure 7.9 Plot of a 1024 bin range transform of NASA DC-8 SAR data with a caltone
inserted at bin 512, shown for a single range line and for coherently averaged range lines.
Note the built-in radar caltone is set out of band at bin 975 (Kim, 1989).

Antenna Internal Calibration
The internal calibrators described above are useful devices for measuring relative
system drift over short periods of time (minutes to hours). These drifts arise
primarily from thermal effects. It is important to note, however, that neither of
the techniques described above measures the antenna gain variation, which can
be the predominant error source. This is especially true in a spaceborne system,
which undergoes zero gravity unloading effects and large variations in
temperature. Generally, changes in the antenna gain and its radiation pattern
can only be measured in-flight by external calibration techniques, which will
be discussed further in the next section. This is because the desired pattern is
the far field pattern, which requires a calibrator at a distance of 2L_a²/λ from
the antenna (≈4 km for the E-ERS-1 C-band). Theoretically, this far field
pattern can be synthesized from precise near field gain and phase measurements,
but practically the required precision cannot be achieved in an operational
spaceborne environment.
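As a quick check of the far-field requirement, a minimal sketch; the antenna length and frequency are E-ERS-1-like values used only for illustration:

```python
def far_field_distance(length_m, wavelength_m):
    """Far-field (Fraunhofer) criterion 2 * L^2 / lambda for an antenna of length L."""
    return 2.0 * length_m ** 2 / wavelength_m

# Roughly E-ERS-1: a 10 m antenna at C-band (5.3 GHz).
lam = 3.0e8 / 5.3e9
print(round(far_field_distance(10.0, lam)), "m")  # a few kilometres
```

A several-kilometre standoff is clearly impractical for routine in-orbit antenna measurements, which is why external calibration sites carry this burden.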
For an active array, such as the SIR-C antenna, the problem is further
complicated since there are several hundred transmitter and receiver modules
on the backplane of the antenna. Since each of these devices has its own gain
and transfer characteristic, system calibration is an especially difficult task.
External calibration devices will be used extensively to measure the overall
performance of this system. However, to monitor short term variations, an
internal calibration scheme has been devised (Klein, 1990a). A simplified
schematic of this system is shown in Figure 7.10. The antenna performance
verification loop, termed the radio frequency built-in test equipment (RF BITE),
consists of a second antenna feed (BITE feed) system. When an RF frequency
modulated pulse is sent to the antenna for transmission via the regular feed
system, this signal is coupled into the BITE feed system via a meandering
coupling line. The signal power at the antenna feedthrough points (from the
backplane to the radiating elements) is collected in the BITE feed system and
coupled into the receiver chain for digitization and incorporation into the
downlink telemetry. Additionally, the T / R module LNAs can be characterized
using the same BITE feed system. This is done by injecting a calibration tone
into the BITE feed and coupling this signal into the receiver chain at the
feedthrough points. The caltone signals are then collected by the regular antenna
feed system, digitized, and incorporated into the downlink telemetry. The system
is designed such that, during the turn-on phase (or by ground command), each
LNA and HPA can be turned on individually, by panel, or by leaf(three panels)
to measure the performance of the active elements during system operations.
The utilization of the RF BITE measurements for calibration, however,
requires that the relative phase and gain over temperature of each coupler be
known to an accuracy such that the antenna pattern can be synthesized.
Additionally, for the RF BITE to be sensitive to system errors, each coupler

Figure 7.10 Simplified schematic of SIR-C C-band antenna performance verification loop.
A similar loop is installed in the L-band antenna.

must be approximately in phase such that the signals add constructively. To
perform antenna calibration using this system, each coupling coefficient must
be measured for absolute gain estimation, or at least each coupler must be
confirmed to be stable over the operating temperature range for relative pattern
measurements. In essence, calibrating this RF BITE system may be more difficult
than calibrating the radar itself. However, the utility of the system in verifying
the functionality of each T/R module makes it a very useful device. The RF
BITE will be used operationally in SIR-C to detect anomalous performance in
individual T/R modules. Degraded or failed modules will be shut down. The
resultant effect on the antenna pattern will be determined using an antenna
simulator in the ground processing subsystem.
Summary

In the previous section, the key parameters affecting radar system calibration
were identified. These included the radiated power, the receiver gain, and the
antenna pattern, boresight gain, and angle. Generally, internal calibration loops
can be used to estimate relative changes in the transmitter power and receiver
gains as a result of temperature variation or component aging. Built-in test
meters can provide additional data on the sensor performance, but they are
subject to the same types of errors as the sensor itself. Measurement of the
antenna performance during in-flight operations is very difficult and is usually
not attempted. Instead, the antenna performance is characterized during
preflight testing and verified inflight with external calibration techniques.
Similarly, the absolute system calibration of the SAR cannot be determined
without knowledge of the antenna gain. Therefore, the emphasis on the internal
calibration device designs is primarily in performing a relative measure of the
system drift over time.

7.5.2 External Calibration

The use of ground targets with known scattering properties to derive the radar
system transfer function is referred to as external calibration. The advantage of
an external calibration procedure over internal calibration is that the end-to-end
system performance can be directly measured. Therefore, system parameters
which are difficult to measure, such as the antenna pattern, the boresight gain
and angle, and the signal propagation effects, can be characterized using external
calibration techniques. The shortcoming of this approach is that the calibration
sites are typically imaged infrequently. The result is an insufficient sampling of
the system transfer characteristic to measure either short term system instabilities
or platform motion effects. Operational calibration of any spaceborne SAR
system requires both external calibration to estimate the end-to-end system
performance (including the absolute gain) and internal calibration to monitor
the relative drift of the system between external calibration sites. The external
calibration techniques generally involve two types of target: (1) Point targets
or specular scatterers of known radar cross section (RCS); and (2) Distributed
targets of large homogeneous area with relatively stable, well characterized
scattering properties (e.g., σ⁰).
Point Target Calibration

Point targets are typically man made devices such as corner reflectors,
transponders, tone generators, and receivers. Each of these devices spans a
geometric area much less than a resolution cell, but exhibits a radar cross section
that is bright with respect to the total backscattered power from the surrounding
target area within the resolution cell.
To minimize calibration errors from the background area, the point target
RCS should be at least 20 dB larger than the total power scattered from the
SAR image resolution cell (i.e., σ⁰ δx δR_g). There are a number of effects other
than the background power to be considered when deploying calibration targets.
The pointing angle of the device relative to the radar must be precisely measured
(e.g., an uncertainty < 1.0°), since generally the radar cross section is highly
dependent on orientation. An additional consideration is the contribution from
multipath. This occurs when either the transmitted or reflected signal scatters
off the local terrain or nearby structures and is received by the SAR antenna
simultaneously with the calibration target return. A final point is that the device
RCS should be characterized by measuring its scattering properties in a
controlled environment (e.g., anechoic chamber) over a range of temperatures
and viewing angles. The concern is that, for a passive device such as a corner


reflector, the RCS is very sensitive to distortions in the plates forming the sides
of the reflector. Fabrication errors or warping from thermal cycling could cause
a significant change relative to the theoretical RCS of the device.

Passive Calibration Devices. The most frequently used devices for SAR
calibration are corner reflectors. By far the most popular reflector is the
triangular trihedral design (Fig. 7.11). The triangular trihedral radar cross
section is given by (Ruck et al., 1970)

    σ = 4πa⁴/(3λ²)    (7.5.1)

where a is the length of one side. This design is preferred since it is relatively
stable for large radar cross sections and exhibits a large 3 dB beamwidth (≈40°)
independent of wavelength and plate size.

Figure 7.11 Triangular trihedral corner reflector (a = 2 m) deployed by JPL at the
Goldstone, California, calibration site.

An example of the dependence of radar cross section and beamwidth on
pointing angle relative to the axis of symmetry is given in Fig. 7.12 (Robertson,
1947). This figure shows the response of a triangular trihedral (a = 0.6 m) to
a K-band radar (λ = 1.25 cm). The variation in RCS as a function of device
orientation is an important consideration if the device is to be deployed in a
permanent configuration and imaged as a target of opportunity during normal
operations.

Figure 7.12 Relative radar cross section patterns as a function of angle relative to the
axis of symmetry; θ is the vertical elevation angle, φ is the horizontal angle (Robertson,
1947).

This approach was used for several of the Seasat corner reflectors
which were imaged from both ascending and descending passes over the
calibration site. These devices were oriented with the axis of symmetry
perpendicular to the surface. For Seasat at a 20° look angle this resulted in
only a few dB of lost RCS, but eliminated the need to re-orient the devices for
each pass. A summary of the RCS and beamwidth parameters for various
reflector designs is given in Table 7.1.

The construction of the reflector must be to an error tolerance that is small
relative to the radar wavelength. Typical specifications for surface irregularity
are for an rms variation less than 0.1λ, resulting in a 0.1 dB RCS loss; the plate
curvature should be less than 0.2λ for a 0.1 dB RCS loss; and the orthogonality
requires plate alignment of better than 0.2° in each axis for a 0.1 dB loss.
Assuming another ≈0.2 dB uncertainty from pointing (orientation) of the device,
typical numbers for device accuracies are on the order of 0.5 dB. However,
additional calibration errors may result from uncertainty in estimating the
background backscatter or from multipath effects. For this reason it is desirable
to find a suitable location for deployment where these contributions are small
(i.e., < −20 dB) relative to the RCS of the corner reflector.
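The trihedral RCS and the 20 dB background guideline discussed earlier can be combined in a short sketch; the terrain σ⁰ and resolution-cell size below are hypothetical illustrations.

```python
import math

def trihedral_rcs(a, lam):
    """Peak RCS of a triangular trihedral, sigma = 4 * pi * a**4 / (3 * lam**2)."""
    return 4.0 * math.pi * a ** 4 / (3.0 * lam ** 2)

def margin_db(sigma, sigma0_db, cell_area_m2):
    """Margin (dB) of the reflector RCS over the clutter power sigma0 * cell area."""
    clutter = 10.0 ** (sigma0_db / 10.0) * cell_area_m2
    return 10.0 * math.log10(sigma / clutter)

# Hypothetical case: an a = 2 m reflector at L-band (lam = 0.235 m) over dark
# terrain (sigma0 = -18 dB) with a 25 m x 25 m resolution cell.
sigma = trihedral_rcs(2.0, 0.235)
print(f"{10.0 * math.log10(sigma):.1f} dBsm, margin {margin_db(sigma, -18.0, 625.0):.1f} dB")
```

With these numbers the margin just clears 20 dB; a brighter background or a coarser resolution cell would force a larger (and less stable) reflector or an active device.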



TABLE 7.1  Scattering Properties of Several Common Reflector Designs

Reflector               Maximum RCS        3 dB Beamwidth
Sphere                  πa²                360°
Square plate            4πa⁴/λ²            0.44λ/a
Luneberg lens           4π³a⁴/λ²           ~40°
Triangular trihedral    4πa⁴/3λ²           ~40°
Square trihedral        12πa⁴/λ²           ~40°

In situations where a dark background (low σ⁰) is not available, a larger
reflector may be used to increase the RCS. However, this typically leads to
increased construction and deployment errors. Other types of reflectors that
have a larger RCS for a given aperture may be used, such as a flat plate, if the
pointing is sufficiently precise that its narrow beamwidth is not a large error
factor (Table 7.1). The square trihedral offers an increased RCS at the same
beamwidth as the triangular trihedral, but is not as stable, especially in an
environment where wind stress may produce plate bending. An alternative
solution to passive reflectors is the use of active systems where the RCS is
controlled by the device gain.

Active Calibration Devices. This class of devices includes instruments such as
transponders, receivers, and tone generators. Each of these serves an important
function in calibration of the SAR system. The transponder is similar to the
reflector in that it relays the transmitted signal back to the radar. However, the
transponder has the benefit of increasing the signal strength by electronic
amplification. The ground receivers are essentially half a transponder with some
recording capability. They can be used to directly estimate the radiated power
and the antenna patterns, which are key parameters that cannot be measured
with internal calibration devices. The tone generators are used in pairs to
produce two continuous frequency tones offset by some fraction of the system
bandwidth at orthogonal polarizations. These devices are primarily used to
measure the cross-polarization isolation of the radar. A comprehensive ground
calibration site design typically would include all three device types.

TRANSPONDERS. A functional block diagram of a transponder is shown in
Fig. 7.13a (Brunfeldt and Ulaby, 1989). The peak radar cross section is given by

    σ_p = (λ²/4π) G_t G_r G_e                                        (7.5.2)

where G_t, G_r are the transmit and receive antenna gains and G_e is the net gain
of the transponder electronics. This design provides the flexibility to achieve
the desired RCS by selecting amplifiers with the required gain. The antenna
selection is driven primarily by cross-polarization isolation and beamwidth
requirements, with gain a secondary consideration. With a two-antenna design,
as pictured in Fig. 7.13b, the cross-coupling between antennas is an important
consideration, since this signal is amplified by the transponder gain. The required
cross-coupling performance (< -80 dB) is achieved by spatially separating the
antennas. Typically, standard gain horn or microstrip patch antennas are used.
However, if large cross-polarization isolation and low sidelobes are required, a
corrugated horn may be used.

COMPACT GROUND RECEIVERS. The functional design of a ground calibration
receiver is shown in Fig. 7.14. Basically, these systems consist of a receive
antenna, an envelope detector circuit that can lock onto the radar PRF, a
digitizer, and a storage device. This system may be integrated with a transponder
for a dual-function device. Such devices are currently being produced in small
numbers by the University of Stuttgart (Freeman et al., 1990d). Ground receivers
can be used to directly measure the azimuth antenna pattern and to indirectly
measure the elevation pattern by deploying a number of receivers cross-track.
If the relative boresights of the SAR and receiver antennas are co-aligned, then
the peak SAR radiated power can be determined from

    EIRP = (4πR)² P_r / (λ² G_r G_e)                                 (7.5.3)

where EIRP is the effective isotropic radiated power, R is the slant range, P_r
is the received power as measured from the digitized signal, and G_r, G_e are the
antenna and electronic gains of the receiver unit. The use of ground receivers
can be a highly accurate technique for measurement of the SAR antenna pattern,
since the forward radiated power is measured. This is a much stronger signal
than the reflected RCS or the background σ⁰. However, if the SAR antenna is
not reciprocal, then the receivers cannot determine the SAR receive antenna

pattern, since a ground receiver can only measure the overall SAR transmit
chain characteristic.

Figure 7.13 Active transponder design by Applied Microwave Corporation
(Brunfeldt, 1984): (a) functional block diagram; (b) two-antenna configuration.

Figure 7.14 Ground calibration receiver design by the Institute for Navigational
Studies (INS) at the University of Stuttgart, Germany (Freeman, 1990d).

CONTINUOUS WAVE TONE GENERATORS. Tone generators typically consist of a
linearly polarized antenna and a signal generator, as shown in Fig. 7.15. These
devices are used in pairs, with each unit transmitting one of two orthogonal
polarizations at a frequency offset from the other by some fraction of the system
bandwidth. The cross-polarization isolation of the SAR receive antenna can be
determined from the raw signal data by

    χ_xp = G_r(f_x) / G_r(f_l)                                       (7.5.4)

where G_r(f_l) and G_r(f_x) are the SAR receive antenna like- and cross-polarized
gains, respectively. These signals, offset in frequency by f_x - f_l, will be shifted
by the one-way Doppler associated with the relative sensor to target position
for that range line. The quantity in Eqn. (7.5.4) can be measured in the ground
processor from a Fourier transform of each range line. Across the SAR azimuth
aperture, the received tone generator signal can migrate through several bins
in the FFT due to the Doppler shift. Thus, if azimuth summation of adjacent
range lines is required to reduce the signal estimation error, care should be taken
that the tone falls within a discrete FFT bin for each range line used.

Figure 7.15 Block diagram of continuous wave tone generators to measure antenna
cross-polarization isolation.
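The transponder and receiver relations, taken in the conventional forms σ_p = (λ²/4π)G_t G_r G_e for Eqn. (7.5.2) and EIRP = (4πR)²P_r/(λ²G_r G_e) for Eqn. (7.5.3), can be exercised numerically. All gains, the range, and the wavelength below are hypothetical.

```python
import math

def transponder_rcs(g_t_db, g_r_db, g_e_db, wavelength):
    """Peak transponder RCS: sigma = (lambda^2 / 4 pi) * Gt * Gr * Ge."""
    g_linear = 10.0 ** ((g_t_db + g_r_db + g_e_db) / 10.0)
    return wavelength**2 / (4.0 * math.pi) * g_linear

def eirp_from_receiver(p_r, slant_range, g_r_db, g_e_db, wavelength):
    """Invert the one-way link budget to recover the SAR EIRP from the
    power measured by a ground receiver."""
    g_linear = 10.0 ** ((g_r_db + g_e_db) / 10.0)
    return p_r * (4.0 * math.pi * slant_range) ** 2 / (wavelength**2 * g_linear)

# Hypothetical C-band transponder: 15 dB horns and 60 dB electronic gain.
sigma = transponder_rcs(15.0, 15.0, 60.0, wavelength=0.057)
print(f"transponder RCS = {10.0 * math.log10(sigma):.1f} dBsm")
```

Selecting the electronic gain thus sets the RCS directly, which is the design flexibility noted in the text.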


Calibration Site Design

To perform the required measurements for SAR system calibration, groups of
devices are required (Dobson et al., 1986). Typically these devices are deployed
in along-track and cross-track geometric configurations to measure the geometric
calibration accuracy as well as radiometric calibration parameters. A site
originally used by NASA/JPL for the Seasat SAR and later upgraded for the
DC-8 airborne multipolarization, multifrequency SAR is shown in Fig. 7.16
(Freeman et al., 1990a). Most of the array consists of triangular trihedrals with
transponders, receivers, tone generators, and dihedrals inserted as shown. The
transponders and dihedrals were oriented to enable measurement of the SAR
cross-polarized transfer characteristic (Hirosawa and Matsuzaka, 1988). An


L-band image of the Goldstone site acquired by the DC-8 SAR is shown in
Fig. 7.17. Since each reflector has been surveyed to determine its true location,
this image can be used to assess the scale and skew errors (Chapter 8) as well
as the absolute location error of the DC-8 system.
The elevation antenna pattern is determined by fitting the RCS measurements
from each device with a least squares error polynomial. Across the mainlobe
return, a quadratic fit is sufficient to characterize most antennas (Fig. 7.18).
The uncertainty in each estimate is given by the device errors (fabrication,
deployment, etc.), the uncertainty in the background contribution (i.e., σ⁰δxδR),

Figure 7.16 Diagram of the NASA/JPL calibration site at Goldstone, California
(Freeman, 1990a). The array near Goldstone Lake includes dihedrals, 6 ft and
8 ft trihedrals, passive receivers, L+C PARCs, and L+C tone generators.

Figure 7.17 L-band total intensity image of Goldstone, California, acquired by
the NASA/JPL DC-8 SAR.


and the image measurement errors. Assuming these error sources are uncorrelated,
the pattern estimation error is given by

    s_p = (s²_CR + s²_BR + s²_M)^(1/2) / √M                          (7.5.5)

where M is the number of devices used in the pattern estimate and s_CR, s_BR,
and s_M are the standard deviations of the device RCS estimate, the background
σ⁰ estimate, and the image measurement error, respectively.

Figure 7.18 Cross-track (vertical) antenna pattern measurement using point
targets deployed cross track (reflector RCS measurements with tolerances fit
to the two-way pattern).

The image measurement error as well as the background error can be
significantly reduced by using a technique proposed by Gray et al. (1990). Their
approach is to integrate the return power over a local area surrounding the
reflector, rather than to attempt to estimate the peak return. The total power
in an equivalent adjacent area is also estimated, and the difference between
these two powers is that attributed to the RCS of the reflector. Thus, the only
error in the estimation procedure is the variation in background σ⁰ between
the area containing the device and the reference area. This variation can be
minimized by selecting the calibration site such that the reflector is placed in
a large homogeneous backscatter region. The remaining error contributor is
that of the device itself, which can be mediated by measuring the reflector (or
transponder) under controlled conditions, such as in an anechoic chamber or
on an antenna range.
The short-term stability of the radar system can also be assessed by placing
a second group of devices at some distance down-track from the main calibration
site. These two calibration sites should be sufficiently close that the errors
associated with the platform attitude variation (e.g., roll angle errors) and
thermal variation can be neglected. The short-term stability performance
(short-term relative calibration) is an important measure for many scientific
analyses.

Distributed Target Calibration

Distributed target calibration refers to external calibration using natural targets
of large areas with homogeneous backscattering properties. A fundamental
assumption is that the scattering properties of these areas are stable or that the
variation is well characterized. This permits the image characteristics associated
with the target scattering to be decoupled from the sensor performance.
One important benefit of using distributed calibration targets is that they
measure the radar performance at various operating points within the system
dynamic range. Recall that, for point target calibration, the device RCS must
be large relative to the surrounding σ⁰ to minimize the background estimation
error. Therefore these devices can only measure the system performance at the
high end of the linear dynamic range (Fig. 7.19). Distributed target calibration
sites exhibit a wide range of σ⁰ values that can be used to assess the system
performance at a number of points across its linear dynamic range.
A second important advantage of distributed calibration sites is that they
can be used as a direct measure of the cross-track variation in the received
signal power as reflected in the digitized raw video signal after range compression.
Referring to our formulation of the distributed target radar equation in

Figure 7.19 System gain characteristic illustrating the operating point for the
calibration devices (e.g., reflectors, transponders); point targets fall near the
saturation level, while distributed targets extend down toward the noise floor.


Eqn. (7.4.1), four parameters vary as functions of cross-track position within the
swath. They are:

(1) Slant range, R
(2) Ground range resolution, ΔR/sin η
(3) Elevation antenna pattern, G²(φ)
(4) Backscatter coefficient, σ⁰(η)
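The combined effect of these four factors can be sketched with a simplified model. The function below assumes a quadratic (two-way) elevation-pattern approximation and the usual R³ sin η distributed-target dependence; it is an illustration under those assumptions, not the book's Eqn. (7.4.1), and the geometry numbers are hypothetical.

```python
import math

def relative_power_db(r, look, inc, boresight, beamwidth, sigma0_db):
    """Relative distributed-target return in dB: two-way elevation pattern
    plus sigma0, less the R^3 and sin(incidence) spreading terms."""
    phi = look - boresight
    g2_db = -24.0 * (phi / beamwidth) ** 2    # quadratic two-way pattern approx.
    return (g2_db + sigma0_db
            - 30.0 * math.log10(r)
            - 10.0 * math.log10(math.sin(inc)))

# Near versus far swath, hypothetical spaceborne geometry (angles in radians).
near = relative_power_db(8.5e5, math.radians(19.0), math.radians(21.0),
                         math.radians(20.5), math.radians(6.0), -8.0)
far = relative_power_db(8.8e5, math.radians(22.0), math.radians(24.5),
                        math.radians(20.5), math.radians(6.0), -8.0)
print(f"near-to-far swath power difference = {near - far:.2f} dB")
```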

Both the look angle γ and the incidence angle η can be written in terms of the
slant range, the platform ephemeris, and the platform attitude as given in
Eqn. (8.2.4) and Eqn. (8.2.5). Typically, the most important platform parameter
for calibration is the roll angle estimation error, which causes the antenna
pattern to be offset relative to its expected cross-track location. A plot of the
Seasat antenna pattern correction factor (roll = 0) as a function of slant range
(or equivalently cross-track pixel number in a slant range image) is shown in
Fig. 7.20.
To extract the antenna pattern from the range compressed signal data, the
received signal power variation due to σ⁰, R, and sin η must first be estimated.
Typically the slant range, R, the range bandwidth, B_R, and the platform position,
R_s, are well known. Additionally, for each of the main calibration sites, the σ⁰

Figure 7.20 Cross-track antenna pattern correction (in dB, versus range pixel
number) as applied to the slant range image using Seasat parameters.


versus η dependence is known, leaving just the elevation antenna pattern and
the roll angle as the key parameters to be estimated. It should be noted that
the total received power consists of both the signal power and the noise power.
Thus the noise power must be subtracted prior to performing any corrections
on the cross-track signal power. If the noise power is subtracted after range
compression, then the compression gain must be taken into account as described
in Section 7.6. In some cases, where the SNR is low, the thermal noise can
dominate the signal return power, resulting in a large antenna pattern estimation
error unless the noise power is known to a very high precision.
To reduce the effects of thermal noise, a large number of range compressed
(or range and azimuth compressed) lines can be incoherently added in the
along-track direction. The number of lines integrated must be short relative to
the rate of change of the roll angle. This technique was used by Moore (1988)
to estimate the SIR-B antenna pattern over the Amazon rain forest.
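The noise-reduction effect of incoherent along-track averaging can be demonstrated with a toy example (a synthetic flat power profile plus Gaussian noise; nothing here is specific to SIR-B or the Amazon data):

```python
import random
import statistics

def average_profiles(profiles):
    """Incoherently average range profiles (power domain), sample by sample."""
    n = len(profiles)
    return [sum(p[i] for p in profiles) / n for i in range(len(profiles[0]))]

random.seed(0)
true_profile = [10.0] * 200
noisy = [[v + random.gauss(0.0, 2.0) for v in true_profile] for _ in range(100)]

print(f"single-look std = {statistics.pstdev(noisy[0]):.2f}")
print(f"100-look std    = {statistics.pstdev(average_profiles(noisy)):.2f}")
```

Averaging N independent lines reduces the noise standard deviation by roughly √N, which is why the estimate improves so long as the roll angle is stable over the averaging interval.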
A similar echo tracker approach was implemented operationally in the SIR-B
correlator to estimate the roll angle prior to the antenna pattern correction
stage (Fig. 7.21). For each standard image frame, consisting of ~25 K range
lines, 1 K range compressed lines spaced throughout each 5 K block were
incoherently averaged, smoothed using a low pass filter, and fit with a least
square error (LSE) quadratic polynomial. The error function was weighted
according to the estimated SNR of each data sample. The peak of the estimated
pattern was extracted and averaged with estimates from the other four (5 K
line) image blocks to provide a single roll angle estimate for the image. As
expected, this technique worked well for regions of relatively low relief. In high
relief areas the LSE fit residuals were used to reject the estimate and revert to
attitude sensor measurements. A roll angle echo tracker technique was needed for
SIR-B because of the large uncertainty in the shuttle attitude determination.
The estimated (3σ) attitude sensor error was on the order of 1.5° in each axis,
with drift rates as high as 0.03°/s (Johnson Space Center, 1988). Results using
this technique to measure the roll angle variation for SIR-B are shown in
Fig. 7.22 (Wall and Curlander, 1988).
The distributed target approach to antenna pattern and roll angle estimation
should not be considered as a replacement for the point target estimation
procedure. Rather, this technique should be treated as an approach (target of
opportunity) that can be used to fill gaps between the point target site estimates
for monitoring intra-orbital variation. Additionally, distributed targets can
measure performance over wide swath areas (e.g., the 100 km E-ERS-1 swath),
which is very costly using point target devices.
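The echo-tracker chain, incoherent averaging followed by a least-squares quadratic fit and peak extraction, can be sketched on synthetic data. This is a toy implementation under simplified assumptions; the operational SIR-B version also applied SNR weighting, low-pass smoothing, and fit-residual tests as described above.

```python
import math
import random

def estimate_pattern_peak(profiles):
    """Average range profiles, fit an LSE quadratic y = a*x^2 + b*x + c on a
    centred abscissa, and return the cross-track bin of the fitted peak."""
    n = len(profiles[0])
    mean = [sum(p[i] for p in profiles) / len(profiles) for i in range(n)]
    x0 = 0.5 * (n - 1)
    xs = [i - x0 for i in range(n)]          # centred for a stable fit
    s = [sum(x**k for x in xs) for k in range(5)]
    t = [sum((x**k) * y for x, y in zip(xs, mean)) for k in range(3)]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    # Normal equations of the quadratic fit, solved by Cramer's rule.
    m = [[s[4], s[3], s[2]], [s[3], s[2], s[1]], [s[2], s[1], s[0]]]
    d = det3(m)
    a = det3([[t[2], s[3], s[2]], [t[1], s[2], s[1]], [t[0], s[1], s[0]]]) / d
    b = det3([[s[4], t[2], s[2]], [s[3], t[1], s[1]], [s[2], t[0], s[0]]]) / d
    return x0 - b / (2.0 * a)                # peak of the fitted parabola

# Synthetic pattern peaked at bin 127.5 plus noise, 1000 range lines.
random.seed(1)
profiles = [[math.exp(-((i - 127.5) / 60.0) ** 2) + random.gauss(0.0, 0.05)
             for i in range(256)] for _ in range(1000)]
print(f"estimated peak bin = {estimate_pattern_peak(profiles):.2f}")
```

The offset of the recovered peak from its expected boresight position is what the tracker converts into a roll angle estimate.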
7.5.3  Polarimetric Radar Calibration

Calibration of a polarimetric SAR system that is capable of acquiring four
simultaneous channels, two like and two cross orthogonal polarizations,
requires several additional measurements (Freeman, 1990c). Assuming a linear,
horizontally and vertically polarized, system, the polarimetric SAR measures
the target scattering matrix

    S = [ S_HH   S_HV ]
        [ S_VH   S_VV ]

where each element S_pq is a complex number. The received signal (voltage) is
given by

    Z = R S T + N                                                    (7.5.6)

where R and T characterize the radar receive and transmit systems, respectively,
and N is the additive noise term. For an ideal system, T and R could be
characterized as identity matrices with some complex scale factor. Polarimetric
system errors can be modeled as channel imbalance and cross-talk terms
(Freeman et al., 1990a), i.e.,

    R = A_r e^(jψ_r) [ 1     δ_1 ]     T = A_t e^(jψ_t) [ 1     δ_3 ]     (7.5.7)
                     [ δ_2   f_1 ]                       [ δ_4   f_2 ]

Figure 7.21 Flowchart of the SIR-B echo tracker routine to estimate the platform
roll angle (raw data; range compression; averaging of 1 K lines per 5 K block;
weighted LSE fit of a quadratic polynomial, with reversion to attitude sensor
estimates if the fit is rejected; averaging of the five peak locations; roll angle
calculation).

Figure 7.22 Echo tracker roll angle estimate as a function of time for two SIR-B
data segments (DT 106.3 and DT 90.3). Each estimate results from the integration
of 1000 range lines.

Inserting Eqn. (7.5.7) into Eqn. (7.5.6) we get an absolute phase term ψ_r + ψ_t,
which is not significant since it only represents the relative position of the
dominant scatterer within the resolution cell. The gain term A_r A_t represents
the common gain across all channels and is equivalent to the overall gain factor
in Eqn. (7.3.1). This gain can be estimated from calibration site data as described
in the previous section. The cross-talk terms δ_1, δ_2, δ_3, and δ_4 represent
contamination resulting from the cross-polarized antenna pattern, as well as
poor isolation in the


transmitter switches and circulators. These terms can be directly measured using
polarization selective receivers and tone generators as described in the previous
section. The δ_1 and δ_2 terms are directly measurable from the raw signal data
by evaluating the ratio of like- and cross-polarized tone generator signals in
each H and V channel. Similarly, receivers with exceptionally good
cross-polarization isolation performance (> 40 dB), with antennas oriented for
like- and cross-polarized reception, can be used to estimate δ_3 and δ_4.
The channel imbalance terms f_1 and f_2 are generally complex numbers whose
amplitude and phase characteristics must be precisely known for many
polarimetric applications (Dubois et al., 1989). A reasonably good estimate of
the amplitude imbalance can be obtained from internal calibration procedures,
assuming the antenna H and V patterns are similar and the boresights are
aligned. However, the phase imbalance can only be estimated using external
targets, since the antenna contribution cannot be ignored. The relative gain and
phase of the channel imbalance terms f_1 and f_2 can also be estimated using
active devices such as transponders, where the scattering matrix of the target
can be controlled. It can be shown that three transponders with independent
scattering matrices, such as (Freeman et al., 1990a)

    S_1 = [ 1  0 ]     S_2 = [ 0  0 ]     S_3 = [ 0  0 ]
          [ 0  0 ]           [ 0  1 ]           [ 1  0 ]

can be used to solve for all six error terms.
An alternative approach, using known characteristics of a distributed target
scattering matrix in addition to passive corner reflectors, has been proposed by
van Zyl (1990) and Klein (1990b). Given a target dominated by single-bounce
surface scattering, the target imposes no cross-polarized term and the relative
HH to VV phase is constant. Thus, assuming reciprocity (i.e., δ_1 = δ_4,
δ_2 = δ_3, f_1 = f_2), these terms can be calibrated without the use of any point
target calibration devices. To determine the channel amplitude imbalance, a
corner reflector such as a triangular trihedral is required, whose scattering
matrix is given by

    S = A_tr [ 1  0 ]
             [ 0  1 ]

where we have ignored errors in the device construction and deployment and
A_tr = √σ, with σ the trihedral RCS given by Eqn. (7.5.1). The relative channel
phase imbalance can be estimated from a trihedral reflector or from a distributed
target, assuming that the dominant scattering mechanism is a single-bounce type
scatter.
A limitation in the technique as presented by both van Zyl and Klein (other
than the reciprocity assumption) is that the channel imbalance can only be
estimated in a local area around the reflector. If the target scattering could be
modeled such that the relative change in z_HH/z_VV were known as a function of
incidence angle across the swath, then the amplitude balance as a function of
cross-track position could be estimated using a distributed target technique.
The absolute value of z_HH/z_VV could then be determined using a single device
or group of devices in a local area. In the NASA/JPL SAR processor for the
DC-8 polarimetric system, the phase error between the H and V channels is
routinely estimated using a distributed target (such as the ocean) and software
has been distributed to the investigators to perform clutter calibration on their
images using the approach proposed by van Zyl. It also should be noted that
in the calibration of polarimetric data the cross-polarized terms z_HV, z_VH are
averaged (after phase compensation) to obtain a single value (see Section 7.7).
This approach is based on the fact that all natural targets are reciprocal, and
therefore the difference between the cross-polarized terms is due only to system
errors. A final point is that in all these techniques we have assumed the noise
power to be negligible. For distributed target calibration techniques to be valid,
the data should be averaged over a large number of independent samples to
reduce the effective noise power, keeping in mind that the parameters to be
estimated may be dependent on their spatial position, limiting the area over
which the estimate can be performed.
7.6  RADIOMETRIC CALIBRATION PROCESSING

In the SAR ground data system, the signal processing consists of a raw data
correlation (Level 1A processing) to form the SAR image, followed by a
post-processing stage (Level 1B processing) to perform the image radiometric
and geometric corrections. The geometric correction algorithms will be addressed
in Chapter 8. The remainder of this chapter will be used to describe the
radiometric calibration processing. The radiometric calibration processing
involves analysis of the internal and external calibration data, generation of the
calibration correction factors, and application of these corrections to the image
data. The calibration processing data flow is shown in Fig. 7.23. There are three
major ground data system elements. The calibration subsystem (CAL) is
typically an off-line workstation tasked to perform analysis of the internal and
external calibration data as well as the preflight test data. The catalog (CAT)
is the data base management system responsible for archiving the calibration
data, including preflight test data. The CAT is also responsible for reformatting
the engineering telemetry data into time series records for each internal
calibration device (e.g., P(t_i), i = 1, ..., N). These data are then accessed by the
CAL in conjunction with the calibration site imagery to derive the necessary
radiometric correction parameters for the SAR correlator (COR). The corrections
are precalculated and stored in the CAT for eventual access by the correlator
during the image processing operations. Typically, the correction factors are
also stored as time series (e.g., G(φ, t_i), i = 1, ..., M), where the sampling frequency
is dependent on the stability of the sensor and the calibration device used for
the measurement.
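Since the correction factors are stored as time series, applying them to an image acquired between calibration samples implies an interpolation step; a minimal sketch with hypothetical sampling times and gain values:

```python
from bisect import bisect_left

def interpolate_correction(times, values, t):
    """Linearly interpolate a stored correction-factor time series at time t,
    clamping outside the sampled interval."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    i = bisect_left(times, t)
    w = (t - times[i - 1]) / (times[i] - times[i - 1])
    return (1.0 - w) * values[i - 1] + w * values[i]

# Hypothetical receiver-gain series sampled every 100 s of data-take time.
times = [0.0, 100.0, 200.0, 300.0]
gains_db = [32.0, 32.4, 32.3, 31.9]
print(f"gain at t = 150 s: {interpolate_correction(times, gains_db, 150.0):.2f} dB")
```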


Figure 7.23 Data flow diagram showing the transfer of calibration data between
the correlator, the catalog, and the calibration processor. Inputs include
preflight test data, ground site data, engineering telemetry, and calibration
site raw data; the catalog maintains the metadata and short-term calibration
archives and passes the radiometric correction factors to the correlator.

7.6.1  Calibration Processor

The calibration processor supports the system calibration during three phases
of operation:
1. Preflight test data analysis;
2. Calibration processing (i.e., correction factor generation/application);
3. Verification processing and performance analysis.
Each of these phases is described in the following subsections.
Preflight Test Data Analysis

The preflight test data analysis is used to derive the relationship between the
internal calibration device measurements and the radar performance parameters.
For example, the transmitter power output may depend uniquely on its baseplate
temperature. Preflight testing can establish the functional relationship between
the transmitter output power and the baseplate temperature sensors to provide
a means of indirectly calibrating the transmitter drift during operations.
Additionally, the stability of the sensor, which is established in preflight tests,
is used to determine the required sampling of the internal calibration data and
the number of external calibration sites.
The preflight testing is especially important for the SAR antenna characterization,
since its performance cannot be directly measured using internal calibration


devices. For the SIR-C active phased array antenna, the thermal sensors on
the antenna backplane will be used to calibrate the T/R module output power
and gain drift over the mission. Additional parameters, such as the DC current
drawn by each panel, will be used to indicate if a T/R module or a phase shifter
is performing anomalously.
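The preflight functional relationship can be captured with a simple least-squares fit and then evaluated in flight from telemetry alone. The linear model and all numbers below are illustrative assumptions, not measured SIR-C data.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = m*x + c."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Hypothetical preflight table: transmitter power (dBW) vs baseplate temp (C).
temps = [10.0, 20.0, 30.0, 40.0, 50.0]
power_dbw = [30.2, 30.0, 29.8, 29.6, 29.4]
m, c = fit_line(temps, power_dbw)
# In flight, a telemetered baseplate temperature then yields a power estimate.
print(f"predicted transmitter power at 35 C = {m * 35.0 + c:.2f} dBW")
```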
Calibration Processing

The preflight test data analysis results are used to interpret the in-flight telemetry
in terms of the system performance. The key calibration parameters to be
estimated during the preprocessing are the radiated power, the antenna patterns,
the receiver gain, the noise power, and the roll angle.
Depending on the system stability, measurement of the amplitude and phase
drifts as functions of frequency across the system bandwidth may also be
required. Generally, the effects of quadratic and higher order phase and
amplitude errors on the radiometric calibration accuracy are neglected since
they do not affect the total power, but rather the shape of the impulse response
function (Chapter 6). If the area integration technique (Gray, 1990) is used to
estimate the device RCS, then matched filtering errors will not affect the
estimation accuracy of the calibration correction parameters. However, other
image quality characteristics, such as the geometric resolution and sidelobe
performance, will be degraded.
An overall calibration processing flowchart is shown in Fig. 7.24. This chart
is drawn assuming that the calibration corrections are incorporated into the
operational image processing chain. The functions attributed to the calibration
processor (CAL) are as follows:
1. Calibration site image analysis of single point targets to determine
mainlobe broadening (K_ml), sidelobe characteristics (ISLR, PSLR), and
absolute location accuracy;
2. Multiple point target analysis to determine geometric distortion (scale,
skew, orientation errors) and the elevation antenna pattern;
3. Raw data analysis of tone generator signals to determine cross-polarization
isolation of the receive antenna;
4. Engineering telemetry analysis to estimate drift in the system operating
point (i.e., change in receiver gain or transmitted power);
5. Generation of calibration correction factors, K(R, t;), including antenna
pattern and absolute calibration scale factor;
6. Distributed target calibration site analysis for antenna pattern estimation.
The correction factors are passed from the CAL to the SAR correlator (via the
CAT) for incorporation into the processing chain as shown in Fig. 7.24.
If the roll angle variation is slow relative to the azimuth coherent integration
time, then the radiometric correction factor can be directly applied to the
azimuth reference function, eliminating the need for an additional pass over the

Figure 7.24 Calibration processing flowchart illustrating the major software
modules.

data. An alternative approach would be to apply the corrections to the output
image, either prior to or following the multilook filtering. Note that if the
correction is applied to the data prior to noise subtraction, then the noise power,
which was initially constant across the swath, will vary as 1/K(R).
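The 1/K(R) caveat can be verified numerically; the correction factors and noise-floor value below are illustrative only:

```python
def sigma0_estimate(power, noise, k_r):
    """Correct order: subtract the additive noise floor, then normalize by K(R)."""
    return (power - noise) / k_r

def residual_noise(noise, k_r):
    """If K(R) is applied before subtraction, the image noise floor becomes
    noise/K(R), i.e. it varies across the swath."""
    return noise / k_r

noise = 4.0e-4
for k_r in (1.0e-2, 2.0e-2, 4.0e-2):
    print(f"K(R) = {k_r:.3g}: residual noise floor = {residual_noise(noise, k_r):.3g}")
```

The loop shows the once-constant noise floor taking a different value at each cross-track position, which is why the noise should be handled before the correction is applied.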
The SAR correlator (COR) is responsible for performing the following
calibration related functions:
1. Extract the calibration tone scan data (e.g., SIR-C) or the calibration loop
leakage chirp (e.g., E-ERS-1, X-SAR) during the turn-on and turn-off
sequences. Estimate system (except for the antenna) gain and phase versus
frequency profiles from this data;
2. Monitor the caltone (SIR-C) or the pulse replica loop (E-ERS-1, X-SAR)
during the data take to derive drifts in the system gain/phase characteristic;
3. Estimate receive-only noise power during turn-on and turn-off sequences;
derive noise power at any point in the data acquisition sequence using drift
measurements;
4. Perform echo-based attitude tracking using clutterlock and echo (roll)
trackers;
5. Apply cross track radiometric corrections to image data;
6. Perform raw data quality analysis (QA) functions such as evaluation of the
bit error rate (BER), histogram, and range spectra;
7. Incorporate all radar performance, calibration correction factors, and
quality assurance data into the image ancillary data records.
For polarimetric SAR data calibration, the above list of correlator functions
must be extended to include: (1) like-polarized return (i.e., z_HH, z_VV) phase
and amplitude balancing using distributed targets; (2) phase compensation and
averaging of cross-polarized terms (i.e., z_HV, z_VH); and (3) generation of the
normalized Stokes matrix (Dubois and Norikane, 1987). A detailed description
of the various software modules and data flow diagrams for the SIR-C calibration
processor is given by Curlander et al. (1990).
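Polarimetric extension (2), phase compensation and averaging of the cross-polarized terms, can be sketched per pixel as follows (toy complex values; the system phase offset is assumed to have been estimated already):

```python
import cmath

def average_cross_pol(z_hv, z_vh, phase_offset):
    """Remove the estimated HV/VH system phase offset from z_vh, then average
    the two cross-polarized terms into a single reciprocal value."""
    return 0.5 * (z_hv + z_vh * cmath.exp(-1j * phase_offset))

# Toy pixel: equal cross-pol returns apart from a known 0.4 rad system phase.
z_hv = 0.3 + 0.1j
z_vh = (0.3 + 0.1j) * cmath.exp(1j * 0.4)
print(average_cross_pol(z_hv, z_vh, 0.4))
```

After compensation the two channels agree, so the average simply suppresses the residual system-error difference between them, consistent with target reciprocity.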
An operations scenario for the calibration processing would be as follows.
The first step is to perform analysis of selected image and telemetry data over
the time interval for which the data is to be calibrated. The correction factors
are generated as a time sequence for each parameter and then stored in the
CAT database. The database generates a processing parameter file for each
image to be processed which includes the calibration correction parameters and
nominal system performance data, as well as the radar and mission parameters
for that time interval. In the COR, the calibration correction parameters are
applied to normalize the image data. Finally, the performance data is transferred
to the image ancillary data files and appended to the output data products.
Verification Processing and Performance Analysis

The absolute calibration accuracy and relative precision of the data products
can be verified by establishing ground verification sites either equipped with
point target devices, or covering homogeneous backscatter regions of known
σ0 (Louet, 1986). For the verification site imagery, the nominal calibration
corrections, as derived from the engineering telemetry and the calibration site
data, are applied to the image products. The backscatter estimate, as derived
from the image, is then compared to the point target RCS or the distributed
target σ0 to derive the calibration error. These parameters, which define the
calibration performance, are valid over a limited time interval that depends on
the system stability. They should be appended to the data products as an
ancillary file to aid the scientist in interpreting the data.
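As a rough sketch of this verification step, the calibration error can be expressed in decibels as the ratio of the image-derived backscatter estimate to the known site value; the function name and interface here are illustrative, not from the text:

```python
import math

def calibration_error_db(estimated_sigma0, reference_sigma0):
    """Calibration error (dB) between an image-derived backscatter
    estimate and the known sigma0 of a verification site."""
    return 10.0 * math.log10(estimated_sigma0 / reference_sigma0)

# An estimate twice the reference value corresponds to about a 3 dB error.
print(calibration_error_db(2.0, 1.0))
```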

7.6.2 Calibration Algorithm Design

In this section we address in more detail the problem of operationally producing
radiometrically calibrated SAR images. We first derive a form of the radar
equation applicable to the SAR image which includes processor gains. A basic
tenet that should be used in establishing a procedure for image calibration is
that all corrections be reversible (i.e., the original uncorrected image should be
recoverable). This inversion process may be necessary if the calibration
correction factors are updated at some time after the initial processing. A second
key requirement is that the algorithm be flexible such that the corrections can
be applied to either the detected or the complex SAR images. Additionally, the
procedure should allow for subtraction of the noise floor by the user but should
not operationally apply this correction to the data, since it will cause local
errors in the σ0 estimate and may result in negative power estimates.
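The reversibility tenet above can be sketched as a pair of operations that carry the applied correction factor along with the corrected data, so the uncorrected image is always recoverable; the structure below is a minimal illustration, not the book's implementation:

```python
def apply_correction(image, k):
    """Apply a (reversible) multiplicative correction 1/k to each pixel
    and record the factor so the operation can later be undone."""
    corrected = [p / k for p in image]
    return corrected, {"k": k}

def undo_correction(corrected, meta):
    """Invert apply_correction using the recorded factor."""
    return [p * meta["k"] for p in corrected]
```

With a power-of-two factor the round trip is exact; for general factors it is exact to floating-point precision.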
Radar Equation for Image Products

The radar equation for the received signal power from a distributed target of
uniform σ0 (Section 7.3) can be extended to the processed image. Recall that
the mean received power is given by Eqn. (7.3.5)

P_s = [P_t G_r G²(ψ) λ² / ((4π)³ R⁴)] σ0 A_x A_R   (7.6.1)

After the azimuth and range compression operations are applied to the digitized
video signal, the mean power in a homogeneous image is given by (Freeman
and Curlander, 1989)

P^I = [P_t G_r G²(ψ) λ² / ((4π)³ R⁴)] σ0 (δx δRg) L N_I² W_L + L N_I W_L P_n   (7.6.2)

where δx, δRg are the image azimuth and ground range resolution cell sizes,
N_I = L_r L_az is the number of samples integrated during the correlation processing,
and W_L = W_r W_az is the total loss in peak signal strength due to the range and
azimuth weighting functions (e.g., Hamming weighting). The parameters L_r,
L_az are the range and azimuth reference function lengths and W_r, W_az are the
range and azimuth reference function weighting loss factors, respectively. The
parameter L refers to the number of looks, or the number of resolution cells
incoherently added (assuming no normalization) to reduce the speckle noise.
The ratio of the two terms to the right of the equality in Eqn. (7.6.2) is equivalent
to the multipulse SNR equation in Eqn. (2.8.8). The second term in Eqn. (7.6.2)
is multiplied by N_I (rather than N_I²) since noise samples do not add coherently.
Conversely, the signal power, represented by the first term in Eqn. (7.6.2), can
be considered as a phase compensated coherent integration. The difference
between the behavior of the signal power and noise power terms can be explained
by noting that echo signals add coherently in voltage while noise terms are
mutually incoherent and can only be added in power. A non-coherent integration
(such as forming multiple looks) affects the signal and noise power terms
equivalently.


If we compare the radar equation before and after processing, from
Eqn. (7.6.1) and Eqn. (7.6.2) the ratio of the mean image signal power to the
mean raw video data signal power is

P_s^I / P_s = δx δR L N_I² W_L / (A_x A_R)   (7.6.3)

where A_R, δR are the precompression and image slant range resolutions, and
A_x, δx are the precompression and image azimuth resolutions, respectively.
Equation (7.6.3) is sometimes called the processing compression ratio.
The question now arises as to whether there is an improvement in the signal
to noise ratio (SNR) as a result of the signal processing. Again consider a
distributed homogeneous target. We wish to evaluate the expression

SNR^I / SNR = (P_s^I / P_n^I) / (P_s / P_n)   (7.6.4)

where the superscript I refers to image data. Substituting from Eqns. (7.6.3),
(7.6.2) and simplifying we get

SNR^I / SNR = δx δR N_I / (A_x A_R)   (7.6.5)

Recall that δx ≈ L_a/2 and δR = c/(2B_R), where L_a is the along-track antenna
length and B_R is the range bandwidth. Furthermore, N_I = L_r L_az, where L_r = τ_p f_s
and L_az = λ R f_p/(L_a V_st), and τ_p is the pulse duration, f_s is the complex sampling
frequency, and V_st is the sensor-to-target relative speed. Inserting these expressions
and A_x = λR/L_a, A_R = c τ_p/2 into Eqn. (7.6.5), we get

SNR^I / SNR = (f_s / B_R)(L_a f_p / (2 V_st))   (7.6.6)

Since the Doppler bandwidth is B_D = 2 V_st / L_a, then

SNR^I / SNR = O_or O_oa   (7.6.7)

where O_or, O_oa are the range and azimuth oversampling factors, respectively.
Thus, there is no increase in the image SNR for returns from a uniform
extended target as a result of the image formation, except by the product of the
two oversampling factors. These oversampling factors are the ratio of the PRF
to the azimuth Doppler bandwidth and the ratio of the complex sampling
frequency to the range pulse bandwidth. No further increase in the signal to
thermal noise ratio (SNR) (e.g., by using smaller processing bandwidths) is
possible. In practice, if ambiguity noise is considered, reducing the azimuth
processing bandwidth or swath width usually improves the overall SNR.
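A minimal sketch of Eqns. (7.6.6)-(7.6.7), evaluating the SNR gain of image formation as the product of the two oversampling factors; the numerical values are illustrative, roughly Seasat-like, and are not quoted from the text:

```python
def snr_gain(f_s, B_R, f_p, V_st, L_a):
    """Ratio SNR_image / SNR_raw for a uniform extended target."""
    O_or = f_s / B_R          # range oversampling: sampling freq / chirp bandwidth
    B_D = 2.0 * V_st / L_a    # azimuth Doppler bandwidth
    O_oa = f_p / B_D          # azimuth oversampling: PRF / Doppler bandwidth
    return O_or * O_oa

# Illustrative, roughly Seasat-like parameters (assumed, not from the text):
gain = snr_gain(f_s=22.76e6, B_R=19.0e6, f_p=1647.0, V_st=7.5e3, L_a=10.7)
print(f"SNR improvement factor: {gain:.2f}")  # slightly above unity, as expected
```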
It is important to note that, although target coherence over time was assumed
to obtain Eqn. (7.6.2), this assumption is not mandatory for the result to be
valid. Partial coherence is a common feature of many radar returns; imaging
of ocean waves is a well-studied example (Raney, 1980). Partial coherence of the
target does not alter the total signal power in the image, but simply degrades
the final image resolution.
Radiometric Correction Factors

The form of the correction factor to be used in compensating the range
dependence of the received signal power in the SAR image will depend on the
form of the applied azimuth reference function. From Eqn. (7.6.2) the mean
image power for a homogeneous target is

P^I = K'(R) σ0 (δx δRg)(L N_I² W_L) + L N_I W_L P_n   (7.6.8)

where from Eqn. (7.3.7) and Eqn. (7.4.2)

K'(R) = P_t G_r G²(ψ) λ² / ((4π)³ R⁴)   (7.6.9)

Assuming the mean received power is given by some mean image pixel value

P^I = (1/M²) Σ_{i,j} |n_p(i,j)|²   (7.6.10)

where | | indicates detection of the complex pixel, and the averaging is
performed over an M × M sample block of data, we can write

P^I = K(R) σ0 + P_n^I   (7.6.11)

where the image mean noise power is given by

P_n^I = L N_I W_L P_n   (7.6.12)

and the image correction factor from Eqn. (7.6.2) is

K(R) = P_t G_r G²(ψ) λ² L W_L (L_r L_az)² δx δRg / ((4π)³ R⁴)   (7.6.13)
Recall that the azimuth reference function size was assumed to be equal to the
number of pulses spanning the azimuth footprint, i.e.,

L_az = λ R f_p / (L_a V_st)   (7.6.14)

Substituting Eqn. (7.6.14) into Eqn. (7.6.13) we see that the range dependence
of K(R) is inversely proportional to R². It is also interesting to note from
inserting Eqn. (7.6.14) into Eqn. (7.6.12) that the image noise power actually
increases linearly with range.
Up to this point, we have assumed that no normalization is applied to the
reference function or the multilook filter to compensate for the number of
samples integrated. For example, if each term in the azimuth reference function
is normalized by the number of azimuth samples L_az, as is done in many SAR
processors, then the image correction factor K(R) is inversely proportional to
R⁴ and the noise power varies as 1/R. Only if an azimuth reference function
normalization of 1/√L_az is used will K(R) be inversely proportional to the
traditional R³ that appears in many forms of the radar equation. A 1/√L_az
normalization will also result in a constant noise power independent of range
position within the image. These relationships are summarized in Table 7.2.
Misunderstanding of the relationship between the image signal power and the
slant range/attenuation factor may explain the range dependent variation in
many SAR images found in the literature.
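The range dependences summarized in Table 7.2 can be checked numerically by modeling L_az ∝ R and scaling the coherent (signal) and incoherent (noise) terms by the reference-function normalization; the proportionality constants below are arbitrary:

```python
def image_powers(R, norm):
    """Relative image signal and noise power at range R for a given
    azimuth reference function normalization (Table 7.2 trends).
    Proportionality constants are arbitrary."""
    L_az = R  # reference length proportional to range, per Eqn. (7.6.14)
    w = {"none": 1.0, "1/L": 1.0 / L_az, "1/sqrtL": L_az ** -0.5}[norm]
    signal = (w * L_az) ** 2 / R ** 4  # coherent term; K'(R) ~ 1/R^4
    noise = (w ** 2) * L_az            # incoherent term
    return signal, noise
```

Doubling R with no normalization drops the signal by 1/R² while the noise grows as R; with 1/√L_az normalization the signal follows 1/R³ and the noise is constant, matching the table.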
Consider the Seasat correlator as an example. The number of pulses in the
azimuth footprint is given by Eqn. (7.6.14). Evaluating this equation using the
values: λ = 0.24 m, f_p = 1647 Hz, R = 850 km, L_a = 10.7 m, and V_st = 7.5 km/s,
we get L_az = 4187 pulses. For the frequency domain fast convolution processor,
only block sizes of powers of 2 can be used in the FFT. Thus, it is convenient
to use a reference size of 4096 and an azimuth block size of 8192, resulting in
4096 good image samples per block. The azimuth reference function coefficients
(i.e., f_DC, f_R) are adjusted as functions of R, but typically for Seasat the length
is fixed at 4096 to maintain an even power of 2. Thus, the azimuth resolution
cell size increases linearly with range such that there is a slight resolution
degradation (~4%) across the swath. In this case, the average signal level
varies as 1/R³, while the noise level is independent of range, resulting in an
SNR proportional to 1/R³.
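The Seasat numbers above can be reproduced directly from Eqn. (7.6.14); the power-of-2 rounding mirrors the fast-convolution block-size choice described in the text:

```python
import math

def pulses_in_footprint(wavelength, R, f_p, L_a, V_st):
    # Eqn. (7.6.14): L_az = lambda * R * f_p / (L_a * V_st)
    return wavelength * R * f_p / (L_a * V_st)

# Seasat values quoted in the text:
L_az = pulses_in_footprint(0.24, 850e3, 1647.0, 10.7, 7.5e3)
print(round(L_az))                       # 4187 pulses, as in the text

ref_size = 2 ** round(math.log2(L_az))   # nearest power of 2: 4096
block_size = 2 * ref_size                # 8192-point FFT blocks
good_samples = block_size - ref_size     # 4096 good output samples per block
```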
As a second example, consider the SIR-B correlator design implemented by
NASA/JPL to perform the operational SAR processing (Curlander, 1986). In
that design, the azimuth processing block size per look (for a four-look image)
was fixed at 2048 samples. To accommodate the varying footprint size over the
range of look angles (15° to 60°), the number of nonzero terms (i.e., L_az) in the

TABLE 7.2 Effect of Azimuth Reference Function Length L_az and Normalization on the
Expected Image Power

Normalization   Length          Signal Power   Noise Power   SNR
None            Variable, ∝R    ∝1/R²          ∝R            ∝1/R³
1/L_az          Variable, ∝R    ∝1/R⁴          ∝1/R          ∝1/R³
1/√L_az         Variable, ∝R    ∝1/R³          Constant      ∝1/R³
None            Fixed           ∝1/R³          Constant      ∝1/R³


processing block was varied to maintain a constant azimuth resolution. To
minimize ambiguities, the azimuth processing bandwidth B_p was set at 0.8 B_D.
We can write Eqn. (7.6.14) in terms of B_D as

L_az = λ R f_p B_D / (2 V_st²)   (7.6.15)

assuming the full aperture is processed. For SIR-B the processing bandwidth
was estimated using

B_p = (0.8) f_p ≈ (0.8) B_D   (7.6.16)

Substituting Eqn. (7.6.16) in place of B_D in Eqn. (7.6.15), we get the expression
used to determine the SIR-B correlator azimuth reference function length as

L_az = (0.8) λ R f_p² / (2 V_st²)   (7.6.17)

The SIR-B reference function was always normalized by the azimuth FFT block
size (i.e., 2048 samples) independent of L_az. Since this correction factor is
independent of range, it does not affect the range dependence of either the
expected signal power or the SNR. Hence for the SIR-B image products the
signal power varies as 1/R², while the noise varies as R, with an SNR proportional
to 1/R³.

Correlator Implementation

The radiometric calibration algorithm should produce image products that are
both relatively and absolutely calibrated. Simply stated, in a relatively calibrated
image each pixel value (i.e., data number or gray level) can be uniquely related
to some backscatter coefficient (within an error tolerance), independent of its
cross-track position or time of acquisition. In an absolutely calibrated image,
the coefficients specifying the relationship of each relatively calibrated data
number to a backscatter value (within an error tolerance) are given. For example,
assuming a linear relation, σ0 is given by

σ0 = K_G |n|² + K_B

where |n| is the detected pixel value and K_G, K_B are real constants.

Since, to maintain a constant azimuth resolution independent of range target
position, the azimuth reference function length should vary in proportion to
the change in range across the swath, a relative calibration factor of

K_a = 1/√L_az   (7.6.18)

is required to normalize the azimuth reference function. Similarly, a range
reference normalization factor of

K_r = 1/√L_r   (7.6.19)

should be applied. This yields an image with constant mean noise power equal
to the input noise level in the raw data. This is a useful representation since
W_az W_r can be determined directly from the ratios of the processed to
unprocessed mean receive-only noise power with and without weighting applied.

A second basic requirement is that all interpolations, such as the range cell
migration correction or the slant-to-ground range reprojection, preserve the
data statistics. The specific criteria for the interpolation coefficients such that
the data statistics are preserved are presented in Chapter 8. Assuming the
normalization factors in Eqn. (7.6.18) and Eqn. (7.6.19) are applied to the
reference functions, the radar equation as given by Eqn. (7.6.2) becomes

P^I = K'(R) σ0 (δx δRg) N_I W_L + W_L P_n   (7.6.20)

where we have assumed the multilooking process is normalized by the number
of samples integrated. Equation (7.6.20) is now identical to the raw data radar
equation (except for the resolution cell sizes) and the σ0 can be estimated using
Eqn. (7.3.6). Thus, if the expected noise power is first subtracted from each
image pixel intensity value and (in the resulting image) each range line is
weighted by the factor 1/K(R), the data number will be equivalent to σ0
(ignoring speckle and ambiguity noises).

In practice, very few processors perform noise subtraction since the estimated
mean noise power may deviate significantly from the actual noise on an
individual pixel basis. The problem is that negative powers can result. For a
complex pixel representation a large phase error can occur, since the phase of
the additive noise term is random. A more useful algorithm is to first apply the
K(R) correction to the received signal-plus-noise image. The resulting relationship
between the image data number and the σ0 value is

|n|² = K_G [σ0 + P_n^I / K(R)] + K_B   (7.6.21)

where we have assumed that a two parameter stretch, i.e., a gain K_G and a bias
K_B, are used to minimize the distortion noise associated with representing the
image within the dynamic range of the output medium.
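A sketch of the recommended ordering: apply the 1/K(R) correction per range line to the signal-plus-noise image, and leave noise subtraction as an optional user step, since it can produce negative powers. The array shapes and function names are assumptions of this sketch:

```python
import numpy as np

def apply_k_correction(intensity, K_of_R):
    """Apply 1/K(R) to a detected-intensity image.
    intensity: 2-D array (azimuth x range); K_of_R: per-range-column K(R)."""
    return intensity / K_of_R[np.newaxis, :]

def subtract_noise(corrected, noise_power, K_of_R):
    """Optional user-side noise subtraction after the K(R) correction.
    May yield negative values, which is why it is not applied operationally."""
    return corrected - noise_power / K_of_R[np.newaxis, :]
```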
To derive the image correction factor K(R), each of the parameters in Eqn.
(7.6.13) must be estimated. The terms λ, L, W_L, R, L_r, L_az, δx, δRg are all well
known or easily measured and contribute little to the overall calibration error.
Significant errors come only from uncertainty in the estimation of P_t, G_r, G²(ψ),
and P_n.

The thermal noise P_n can be estimated by averaging a block of samples from
the turn-on and turn-off receive-only noise segments in each data take.
Throughout the data take, the drift in receiver gain, G_r, can be estimated from


a caltone. Therefore, the thermal noise estimate at the center time of the image
frame, t_c, is given by

P_n(t_c) = G_CAL(t_c) P_n(t_0)   (7.6.22)

where G_CAL(t_c) is the ratio of the system gain at time t_c to the gain at the
turn-on time, t_0. This gain drift may also be characterized by other internal
calibration devices such as a leakage chirp or thermal sensors.
The radiated power P_t is most accurately measured using a set of ground
receivers. The variation in P_t over the time interval between ground receiver
measurements can be tracked using internal meters (power, temperature) or by
a leakage chirp. Similarly, the receiver gain can be directly measured by a
calibration tone or a leakage chirp. The antenna is typically measured preflight
to obtain a nominal pattern. Inflight variation from thermal stress or zero
gravity unloading is typically measured using external targets. Either a distributed
homogeneous target, or point targets (e.g., transponders or corner reflectors),
can be used to measure the two-way pattern from the SAR image. Alternatively,
the transmit pattern can be directly measured using ground receivers and, if
reciprocity can be assumed, the two-way pattern inferred from this measure.
The antenna boresight, or equivalently the pattern roll angle, can be refined by
analysis of the antenna pattern modulation in an uncorrected image by
estimating the location of the peak return power from a least square error fit
of the image data.
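The boresight refinement idea can be sketched as a least-square-error fit of a low-order polynomial to the mean range profile of an uncorrected image, taking the fitted peak as the boresight. The quadratic model and variable names here are illustrative assumptions, not the book's algorithm:

```python
import numpy as np

def estimate_boresight(image, look_angles):
    """Estimate the antenna boresight angle from the pattern modulation
    of an uncorrected image: fit a quadratic (in dB) to the mean range
    profile and return the angle of its peak."""
    profile_db = 10.0 * np.log10(image.mean(axis=0))   # mean power vs range, in dB
    c2, c1, c0 = np.polyfit(look_angles, profile_db, 2)  # least-squares fit
    return -c1 / (2.0 * c2)                              # vertex of the parabola
```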

7.7 POLARIMETRIC DATA CALIBRATION

The polarimetric data products are typically represented in a Stokes matrix
format. This is achieved by first performing a symmetrization of the scattering
matrix. The symmetrization procedure is as follows (Zebker et al., 1987). Given
four radiometrically corrected images (in a complex amplitude format) that
represent the two like-polarized target backscatter measurements (i.e., z_HH and
z_VV) and the two cross-polarized measurements (i.e., z_HV and z_VH), the
symmetrization procedure is to average the cross-polarized terms such that

ẑ_HV = (z_HV + z_VH)/2   (7.7.1)

on a pixel by pixel basis. The inherent assumption in this process is that for
all natural targets s_HV = s_VH. Therefore any differences between z_HV and z_VH
must arise from radar system errors.

In practice, prior to balancing the cross-polarized channels, the data must
be compensated for systematic phase errors that arise from path length
differences or electronic delays in one channel relative to another. Consider, for
example, a C-band (λ = 5.6 cm) quad-polarized SAR system with two receive
chains, one each for the H and V channels. If the electrical path length in one
receiver channel is 2.8 cm longer than the other, the two channels are 180° out
of phase. Thus the balancing operation in Eqn. (7.7.1) would effectively cancel
the cross-polarized return (in the absence of other system errors and noise),
resulting in a value of zero for ẑ_HV independent of the target scattering
characteristics. To compensate for this systematic phase offset, prior to balancing
a phase difference correction must be applied to the data. The mean phase
difference is given by

φ_x = (1/N) Σ_{i=1}^{N} arg[z_HV(i) z*_VH(i)]   (7.7.2)

where the summation is performed over some representative set of data samples
spanning the entire image frame. Since just one cross-polarized channel need
be corrected to compensate for this phase error, Eqn. (7.7.1) becomes

ẑ_HV = [z_HV exp(−jφ_x) + z_VH]/2   (7.7.3)

Phase calibration of the like-polarized terms requires an analysis similar to that
of Eqn. (7.7.2). A mean phase difference for the like-polarized channels is
calculated from

φ_l = (1/N) Σ_{i=1}^{N} arg[z_HH(i) z*_VV(i)]   (7.7.4)

This correction is then applied to all pixels in one of the like-polarized images, i.e.,

ẑ_HH = z_HH exp(−jφ_l)   (7.7.5)

A necessary condition for this procedure to be valid is that there be a zero
phase shift between s_HH and s_VV for all targets included in the summation of
Eqn. (7.7.4). However, only if the scatterer is single-bounce (e.g., Bragg
scattering) will the relative phase be zero (van Zyl, 1989). The phase correction
procedure thus requires identification of a single bounce target, such as ocean
or slightly rough terrain (rms height < λ/8) with a relatively high dielectric
constant (i.e., no volume scattering). An additional assumption in the procedure
outlined above is that the phase difference distribution is symmetric and
unimodal. For an asymmetric distribution, the mean values estimated in
Eqn. (7.7.2) and Eqn. (7.7.4) should be replaced by the median of the distribution.
If the probability distribution function is bimodal, a smaller block of samples
should be used for estimating the phase correction factor.

The like- and cross-polarized phase corrections in Eqn. (7.7.2) and Eqn. (7.7.4)
typically need not be estimated for every image. A single correction factor is
usually applied to a group of images over some time period dependent on the
instrument stability. If the radar is highly sensitive to slight thermal variations,
causing the electrical path length to vary in one receive chain relative to the

other, a unique correction factor may be required for each image frame.

Calibration of the like-polarized channel amplitude imbalance cannot be
performed using distributed targets since the ratio s_HH/s_VV is very target
dependent and cannot be predicted. Since the scattering matrix of a corner
reflector such as a triangular trihedral is well known (s_HH/s_VV = 1), an analysis
of the return from this target can be used to balance the like-polarized channel
amplitude in that local area. Amplitude imbalance can arise from H, V pattern
misalignment, which would require balancing to be performed at multiple points
across the swath. This can be accomplished using an array of reflectors deployed
across the ground track. Another, as yet untested, approach would be to perform
the absolute like-polarized channel balancing at a single point within the swath
(using a reflector), and then to use a distributed target such as the ocean to
perform a relative balance at all other points across the swath. This approach
requires that the target s_HH/s_VV not change as a function of cross-track position.
However, it does not require that the ratio be known. The requirement that

s_HH/s_VV = constant over range

is never valid for an airborne system, since the range of incidence angles is so
large. However, for a spaceborne polarimetric SAR, where the incidence angle
varies over the entire swath by only a few degrees, this relative balancing
technique may be feasible.
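The cross-polarized phase balancing and symmetrization of Eqns. (7.7.1)-(7.7.3) can be sketched as below; note that this sketch estimates the phase from the argument of the averaged cross product, a common variant of the per-sample average written in Eqn. (7.7.2):

```python
import numpy as np

def symmetrize_cross_pol(z_hv, z_vh):
    """Balance the systematic HV/VH phase offset, then average the
    cross-polarized channels (Eqns. (7.7.1)-(7.7.3))."""
    # Phase of the frame-averaged cross product (variant of Eqn. (7.7.2)):
    phi_x = np.angle(np.mean(z_hv * np.conj(z_vh)))
    # Correct one channel and average, per Eqn. (7.7.3):
    return 0.5 * (z_hv * np.exp(-1j * phi_x) + z_vh)
```

If the only difference between the channels is a constant phase offset, the symmetrized result reproduces the common underlying cross-polarized return.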
The final step in the polarimetric calibration is correction of the cross-polarized
leakage terms that typically result from poor isolation in the antenna
or transmitter switch, or from platform attitude variation. We believe this is
best implemented using the previously described clutter based technique
proposed by van Zyl (1990). These corrections can be applied as a post-processing
step (on the Stokes matrix) and are typically not operationally
applied in the SAR correlator.

Following the polarimetric calibration steps outlined above (except the
cross-polarized leakage term correction), the Stokes matrix products are formed.
This first requires generation of the six cross-products

J_HHHH = z_HH z*_HH   (7.7.6a)
J_VVVV = z_VV z*_VV   (7.7.6b)
J_HVHV = z_HV z*_HV   (7.7.6c)
J_HHHV = z_HH z*_HV   (7.7.6d)
J_HHVV = z_HH z*_VV   (7.7.6e)
J_HVVV = z_HV z*_VV   (7.7.6f)
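A sketch of forming the six cross-products of Eqn. (7.7.6); the convention of conjugating the second index pair is an assumption of this sketch:

```python
import numpy as np

def cross_products(z_hh, z_hv, z_vv):
    """The six cross-products J_pqrs = z_pq * conj(z_rs) used to form
    the Stokes matrix elements (after symmetrization, z_hv == z_vh)."""
    conj = np.conj
    return {
        "HHHH": z_hh * conj(z_hh),
        "VVVV": z_vv * conj(z_vv),
        "HVHV": z_hv * conj(z_hv),
        "HHHV": z_hh * conj(z_hv),
        "HHVV": z_hh * conj(z_vv),
        "HVVV": z_hv * conj(z_vv),
    }
```

The three diagonal products are real (intensities); the three off-diagonal products are complex, giving the ten real quantities behind the Stokes matrix elements.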

The multilooking can be performed directly on the cross-product terms by
adding adjacent pixels or by applying a complex two-dimensional filter to the
cross-product images. The SIR-C processor will apply a sinc²-type filter to the
cross-product data to effect an improved speckle reduction performance over
that of incoherent pixel addition (Chang and Curlander, 1990). The final
processing stage (which is optional) is the formation of the ten real Stokes
matrix elements and the efficient coding of these data by normalizing the Stokes
matrix elements (Dubois and Norikane, 1987). The shortcoming in producing
the Stokes matrix as a final output product is that the noise subtraction is a
relatively complex procedure, since each noise power array (i.e., P_nH/K_HH(R),
P_nV/K_VV(R), (P_nH + P_nV)/2K_HV(R)) must be manipulated similarly to the image
data processing used to form the Stokes matrix elements. This is a fairly involved
process for the scientist to perform. In practice, since the thermal SNR must
be large for polarimetric data analysis to be feasible, the noise power contribution
is often neglected.

7.8 SUMMARY

This chapter has addressed the issue of SAR radiometric calibration primarily
from the signal processing perspective. The basic terms were defined and an
end-to-end system view of the various error sources presented. Several internal
calibration schemes were described in detail to identify the system measures
that can and cannot be performed using built-in test equipment. We then
addressed the techniques and technology currently employed for external
calibration with ground sites. The relative merits of point target versus
distributed target calibration sites were discussed and several techniques using
clutter statistics for calibration were presented.

The second portion of the chapter concentrated on design of the ground
processor to utilize the acquired calibration data for operational correction of
the data products. We described a configuration using an off-line calibration
processor to analyze both the internal calibration device measurements and the
calibration site imagery. This system generates correction factors that are passed
to the correlator for application to the image data. We derived an appropriate
form of the radar equation that explicitly indicates the processor induced
gains/losses and discussed the effect of various processor implementations on
this equation. We concluded with a brief discussion of the calibration procedures
for a polarimetric SAR system.
REFERENCES

Aarons, J. (1982). "Global Morphology of Ionospheric Scintillations," Proc. IEEE, 70,
pp. 360-378.
Attema, E. (1988). "Engineering Calibration of the ERS-1 Active Microwave Instrument
in Orbit," Proc. IGARSS '88, Edinburgh, Scotland, pp. 859-862.
Blanchard, A. and D. Lukert (1985). "SAR Depolarization Ambiguity Effects," Proc.
IGARSS '85, Amherst, MA, pp. 478-483.
Brookner, E. (1973). "Ionospheric Dispersion of Electromagnetic Pulses," IEEE Trans.
Ant. Prop., AP-21, pp. 402-405.
Brookner, E. (1985). Pulse-Distortion and Faraday-Rotation Ionospheric Limitations,
Chapter 14, in Brookner, E. (ed.), Radar Technology, Artech House, Dedham, MA.
Brunfeldt, D. R. and F. T. Ulaby (1984). "Active Reflector for Radar Calibration," IEEE
Trans. Geosci. and Remote Sensing, GE-22, pp. 165-168.
Chang, C. Y. and J. C. Curlander (1990). "A New Approach for Operational Multilook
Processing of SAR Data," Proc. IGARSS '90, College Park, MD, pp. 1333-1337.
Corr, D. G. (1984). "AMI Calibration Study," Final Report Vol. 1, SAR Calibration,
ESA CR(P) 2009, Noordwijk, Netherlands.
Curlander, J. C. (1986). "Performance of the SIR-B Image Processing Subsystem," IEEE
Trans. Geosci. and Remote Sensing, GE-24, pp. 649-652.
Curlander, J. C., J. Shimada and L. Nguyen (1990). "Shuttle Imaging Radar-C,
Calibration Processor System Functional Design Document," JPL Pub. D-6953, Jet
Propulsion Laboratory, Pasadena, CA.
Dobson, M., F. Ulaby, D. Brunfeldt and D. Held (1986). "External Calibration of SIR-B
Imagery with Area Extended Point Targets," IEEE Trans. Geosci. and Remote Sensing,
GE-24, pp. 453-461.
Dubois, P., D. Evans, A. Freeman and J. van Zyl (1989). "Approach to Derivation of
SIR-C Science Requirements for Calibration," Proc. IGARSS '89, Vancouver, B.C.,
pp. 243-246.
Dubois, P. and L. Norikane (1987). "Data Volume Reduction for Imaging Radar
Polarimetry," Proc. IGARSS '87, Ann Arbor, MI, pp. 691-696.
Freeman, A., J. Curlander, P. Dubois and J. Klein (1988). "SIR-C Calibration Workshop
Report," JPL Pub. D-6165, Jet Propulsion Laboratory, Pasadena, CA.
Freeman, A. and J. Curlander (1989). "Radiometric Correction and Calibration of SAR
Images," Photogram. Eng. and Rem. Sens., 55, pp. 1295-1301.
Freeman, A., Y. Shen and C. Werner (1990a). "Polarimetric SAR Calibration Experiment
using Active Radar Calibrators," IEEE Trans. Geosci. and Remote Sensing, GE-28,
pp. 224-240.
Freeman, A. (1990b). "SIR-C Calibration: An Overview," JPL D-6997, Jet Propulsion
Laboratory, Pasadena, CA.
Freeman, A. (1990c). "Calibration and Image Quality Assessment of the NASA/JPL
Aircraft SAR During Spring 1988," JPL Technical Document D-7197, Jet Propulsion
Laboratory, Pasadena, CA.
Freeman, A. et al. (1990d). "Preliminary Results of the Multisensor, Multipolarization
SAR Calibration Experiments in Europe 1989," Proc. IGARSS '90, College Park, MD,
pp. 783-787.
Gray, A. L., P. W. Vachon, C. E. Livingstone and T. I. Lukowski (1990). "Synthetic
Aperture Radar Calibration using Reference Reflectors," IEEE Trans. Geosci. and
Remote Sensing, GE-28, pp. 374-383.
Hirosawa, H. and Y. Matsuzaka (1988). "Calibration of a Cross-Polarized SAR Image
Using a Dihedral Corner Reflector," IEEE Trans. Geosci. and Remote Sensing, GE-26,
pp. 697-700.
IEEE Standard Dictionary of Electrical and Electronic Terms (1977). ANSI/IEEE Std.
100-1977, 2nd Ed., Wiley, New York.
IEEE Standard Test Procedures for Antennas (1979). ANSI/IEEE Std. 149-1979, Wiley,
New York.
Johnson Space Center (1988). "Payload Accommodations Document," NSTS 07700,
Vol. 14, Rev. J, Houston, TX.
Kasischke, E. S. and G. W. Fowler (1989). "A Statistical Approach for Determining
Radiometric Precisions and Accuracies in the Calibration of Synthetic Aperture Radar
Imagery," IEEE Trans. Geosci. and Remote Sensing, GE-27, pp. 417-427.
Kim, Y. (1989). "Determination of the Amplitude and Frequency of Caltone for SIR-C,"
Internal Memorandum, Jet Propulsion Laboratory, Pasadena, CA.
Klein, J. (1990a). "SIR-C Engineering Calibration Plan," JPL D-6998, Jet Propulsion
Laboratory, Pasadena, CA.
Klein, J. (1990b). "Polarimetric SAR Calibration using Two Targets and Reciprocity,"
Proc. IGARSS '90, College Park, MD, pp. 1105-1108.
Louet, J. (1986). "The ESA Approach for ERS-1 Sensor Calibration and Performance
Verification," Proc. IGARSS '86, Zurich, pp. 167-174.
Moore, R. K. (1988). "Determination of the Vertical Pattern of the SIR-B Antenna,"
Inter. J. Remote Sensing, 9, pp. 839-847.
Raney, K. (1980). "SAR Response to Partially Coherent Phenomena," IEEE Trans. Ant.
Prop., AP-28, pp. 777-787.
Rino, C. L. and J. Owen (1984). "The Effects of Ionospheric Disturbances on
Satellite-borne Synthetic Aperture Radars," SRI International, Technical Report,
Contract DNA011-83-C0131, Menlo Park, CA.
Robertson, S. D. (1947). "Targets for Microwave Radar Navigation," Bell Syst. Tech. J.,
26, pp. 852-869.
Ruck, G. T., D. E. Barrick, W. D. Stuart and C. K. Krichbaum (1970). Radar Cross
Section Handbook, Vol. I, Plenum Press, New York.
van Zyl, J. J. (1989). "Unsupervised Classification of Scattering Behavior using Radar
Polarimetry Data," IEEE Trans. Geosci. and Remote Sensing, GE-27, pp. 36-45.
van Zyl, J. J. (1990). "Calibration of Polarimetric Radar Images Using Only Image
Parameters and Trihedral Corner Reflector Responses," IEEE Trans. Geosci. and
Remote Sensing, GE-28, pp. 337-348.
Wall, S. D. and J. C. Curlander (1988). "Radiometric Calibration Analysis of SIR-B
Imagery," Inter. J. Remote Sensing, 9, pp. 891-906.
Zebker, H., J. J. van Zyl and D. N. Held (1987). "Imaging Radar Polarimetry from
Wave Synthesis," J. Geophys. Res., 92, pp. 683-701.

8.1

371

calibration error sources, considering sensor, platform, and processor effects.


We then present algorithms for geometric correction, including geocoding either
to a reference ellipsoid (i.e., a datum), or to a high resolution digital elevation
map. The chapter concludes with a discussion of techniques for mosaicking
multiple geocoded frames with an application to multisensor image registration.

8
GEOMETRIC CALIBRATION.
OF SAR DATA

In Chapter 7 we discussed the procedures for relating the received signal data

to the target scattering characteristics. This radiometric calibration process


involves measuring the system transfer function and correcting the image
products such that they directly represent the target backscatter coefficient.
However, an accurate estimate of the target reflectivity requires precise
knowledge of the relative geometry between the sensor and target. To this point,
we have derived the radiometric corrections assuming a smooth target surface.
In fact, in areas where there is significant relief, the local incidence angle deviates
from that of a smooth geoid and therefore the radiometric correction factors
(antenna pattern, resolution cell size, etc.) should be adjusted for the terrain
height. Additionally, the internal geometric fidelity of the image (as it relates
to a true representation of the target area) degrades as a function of the deviation
in terrain height relative to the assumed geoid model.
The geometric calibration accuracy of a SAR image (or any image) can be v.
evaluated in terms of the absolute location and image orientation errors, as
well as the relative image scale and skew errors. Geometric calibration is the
process by which we determine each of these performance parameters for a
given data set, while geometric correction, or equivalently geometric rectification,
refers to the post-processing step where the image is resampled to some
new projection. The term geocoding usually refers to a special case of the
geometric correction procedure where the image is resampled to some spatial
representation with known geometric properties (e.g., a standard map projection
such as Universal Transverse Mercator, UTM).
In this chapter we will formally define the various parameters describing the
geometric calibration accuracy. This is followed by an analysis of the geometric
distortions inherent in the uncorrected image data.

8.1 DEFINITION OF TERMS
For many scientific applications (e.g., geologic mapping, land surveys) the
geometric fidelity of the data product is critically important. Geometric
distortion principally arises from platform ephemeris errors, error in the
estimate of the relative target height, and signal processing errors. We define
geometric calibration as the process of measuring the various error sources and
characterizing them in terms of the calibration accuracy parameters. The terms
geometric correction and geometric rectification will be used interchangeably to
describe the processing step where the image is resampled from its natural
(distorted) projection into a format better suited to scientific analysis. Geocoding
is the process of resampling the image data into a specific output image format,
namely a uniform earth-fixed grid, which typically is a standard map projection.
Mosaicking refers to the process of assembling, into a single frame, multiple (two or more)
independently processed (geocoded) image frames that are overlapping in their
coverage area.
The geometric calibration parameters can be divided into absolute error
terms, as referenced to some fixed coordinate system, and relative error terms,
which describe the distortion within an image frame. The absolute geometric
calibration of an image can be described by two parameters: location and
orientation. The absolute location error is the uncertainty in the estimate of any
image pixel relative to a given coordinate system (e.g. geodetic latitude and
longitude). The image orientation error is the angular uncertainty in the estimate
of a line in the image as compared to a line of reference, such as an axis of the
coordinate system (e.g., the angle between an image isorange line and the
equator).
The relative geometric calibration parameters describe the internal geometric
fidelity of the SAR image. The relative image calibration can be characterized
in terms of two parameters: scale and skew. The relative scale error is the
fractional error between a distance as represented in the image and the actual
geographic distance. This error term is typically specified in the range and
azimuth (or line and pixel) dimensions. The relative skew error is the error
between a given angle as represented in the image and the actual angle. For
example, two roads that intersect at a right angle may be represented in the
image at a crossing angle of 91°, which is a relative skew error of 1°.
For a multiple channel radar system, there is an additional parameter required
to describe the image-to-image misregistration. This relative misregistration error
is defined in the along-track and cross-track dimensions as the relative location

error (displacement) between two coincident pixels from image data acquired
by two separate radar channels.

The characterization of the image geometric calibration in terms of the above
listed parameters is not unique. The representation we present here is convenient,
since these parameters are directly measurable in the SAR image.

8.2 GEOMETRIC DISTORTION

Before describing the various techniques for geometric correction of the image
products, we first address the geometric distortions inherent in the uncorrected
image data and the source of these distortions. They can generally be categorized
as resulting from sensor instability, platform instability, signal propagation
effects, terrain height, and processor induced errors.
8.2.1 Sensor Errors

The sensor stability is a key factor controlling the internal geometric fidelity
of the data set. For example, the consistency of the interpulse or intersample
period is governed by the accuracy of the timing signals sent to the pulse
generator and the analog-to-digital converter (ADC). Variation in these timing
signals is dependent primarily on the stability of the local oscillator (stalo).
Typically, short term variation in the stalo frequency that produces sample-to-
sample variation (clock jitter) is negligible from an image geometric fidelity
standpoint. Perhaps more significant is the long-term drift of the stalo. For a
mapping mission, such as the Magellan Venus radar mapper, the stalo drift
must be measured over the course of the mission to determine the actual PRF,
since this establishes the along-track pixel spacing, that is

    δx_a = L V_sw / f_p    (8.2.1)

where L is the number of azimuth looks and f_p is the pulse repetition frequency
(PRF). The magnitude of the swath velocity V_sw is given by

    V_sw = |(R_t / R_s) V_s − V_t|    (8.2.2)

where R_s and R_t are the magnitudes of the sensor and target position vectors
and V_s and V_t are the sensor and target velocity vectors, respectively. A fractional
error in the stalo frequency translates into a similar fractional error in the PRF
and therefore the along-track pixel spacing, which results in an along-track
scale error.

A second sensor parameter that directly affects the geometric fidelity of the
data set is the electronic delay of the signal through the radar transmitter and
receiver. This electronic delay τ_e must be subtracted from the total (measured)
delay to derive the actual propagation time used in the slant range calculation,
that is,

    R = c(τ − τ_e)/2    (8.2.3)

Here τ is the total delay from the time a control signal is sent to the exciter for
pulse generation until the echo is digitized by the ADC. This delay is precisely
known since it is controlled by the radar timing unit which in turn is based on
the stalo frequency. Error in the estimate of the propagation time will result in
a slant range error which in turn will bias the incidence angle estimate. From
Fig. 8.1, we can write

    η = sin⁻¹[(R_s / R_t) sin γ]    (8.2.4)

where η is the incidence angle, γ is the look angle, R_s and R_t are the magnitudes
of the spacecraft and target position vectors relative to the center of the earth, and

    γ = cos⁻¹[(R² + R_s² − R_t²)/(2R R_s)]    (8.2.5)

where R is the sensor-to-target slant range. Therefore, an error in the estimate
of the slant range resulting from hardware electronic delay error as given by
Eqn. (8.2.3) will result in an incidence angle estimation error from Eqn. (8.2.4)
and Eqn. (8.2.5). This in turn will cause an across-track scale error in the SAR
image since the ground range pixel spacing is given by

    δx_gr = c/(2 f_s sin η)    (8.2.6)

where f_s is the complex sampling frequency. From Eqn. (8.2.6) we see that
errors in either η or f_s translate into cross-track scale errors, as will be shown
in the following section on target location errors.
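The scale sensitivity implied by Eqn. (8.2.6) is easily checked numerically. The sketch below evaluates the ground range pixel spacing and the fractional scale change produced by a small incidence angle error; the sampling frequency and incidence angle are illustrative, assumed values, not parameters from the text.

```python
import math

C = 2.998e8  # speed of light, m/s

def ground_range_spacing(fs_hz, eta_rad):
    """Ground range pixel spacing, Eqn. (8.2.6): c / (2 f_s sin(eta))."""
    return C / (2.0 * fs_hz * math.sin(eta_rad))

# Assumed (illustrative) parameters: 19 MHz complex sampling, 23 deg incidence
fs = 19.0e6
eta = math.radians(23.0)

dx = ground_range_spacing(fs, eta)
# Fractional scale change from a 0.1 deg incidence angle error
k_r = (math.sin(eta + math.radians(0.1)) / math.sin(eta) - 1.0) * 100.0
print(round(dx, 1), "m per pixel;", round(k_r, 2), "% scale change")
```

For these assumed values the spacing is roughly 20 m, and a tenth of a degree of incidence angle error already produces a scale change of a few tenths of a percent.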
A third type of error, which may be more accurately classified as a platform
error than as a sensor error, is drift in the spacecraft clock. Any offset between
the spacecraft clock and the clock used to derive the ephemeris file from the
spacecraft tracking data will result in target location errors. If the spacecraft
ephemeris is in an inertial coordinate system, then the planet rotation must be
derived from the time difference between the actual data acquisition and the
reference time for the inertial coordinate system. Drift in the spacecraft clock
will result in an error in the target longitude estimate according to

    Δy ≈ ω_e s_d R_e cos ζ

where ω_e is the earth rotational velocity, s_d is the clock drift, and ζ is the target
latitude. An along-track position error will also result from clock drift according
to s_d V_sw, where V_sw is the swath velocity.


Figure 8.1 Relationship between look angle, γ, and incidence angle, η, for a smooth spherical
geoid model. The spacecraft position is given by R_s = H + R_e, where R_e is the radius of the earth
at nadir and H is the S/C altitude relative to the nadir point.

8.2.2 Target Location Errors

The location of the (i, j) pixel in a given image frame can be derived from
knowledge of the sensor position and velocity (Curlander, 1982). More precisely,
the location of the antenna phase center in an earth referenced coordinate
system is required. The target location is determined by simultaneous solution
of three equations: (1) the range equation; (2) the Doppler equation; and (3) the
earth model equation.

The range equation is given by

    R = |R_s − R_t|    (8.2.7)

where R_s and R_t are the sensor and target position vectors, respectively. The
slant range R is given by Eqn. (8.2.3). For a given cross-track pixel number j
in the slant range image, the range to the jth pixel is

    R_j = (c/2)(τ − τ_e) + (c/2f_s)(j + ΔN)    (8.2.8)

where ΔN represents an initial offset in complex pixels (relative to the start of
the sampling window) in the processed data set. This offset, which is nominally
0, is required for pixel location in subswath processing applications, or for a
design where the processor steps into the data set an initial number of pixels
to compensate for the range walk migration.

The Doppler equation is given by

    f_DC = −(2/λR)(V_s − V_t)·(R_s − R_t)    (8.2.9)

where λ is the radar wavelength, f_DC is the Doppler centroid frequency, and V_s,
V_t are the sensor (antenna phase center) and target velocities, respectively. The
target velocity can be determined from the target position by

    V_t = ω_e × R_t    (8.2.10)

where ω_e is the earth's rotational velocity vector. The Doppler centroid in
Eqn. (8.2.9) is the value of f_DC used in the azimuth reference function to form the
given pixel.

An offset between the value of f_DC in the reference function and the true f_DC
causes the target to be displaced in azimuth according to

    Δx = (Δf_DC / f_R) V_sw    (8.2.11)

where Δf_DC is the difference between the true f_DC and the reference f_DC, f_R is the
Doppler rate used in the reference function, and V_sw is the magnitude of the
swath velocity. To compensate for this displacement, when performing the target
location, the identical f_DC used in the reference function to form the pixel should be
used in Eqn. (8.2.9). The exception to this rule is if an ambiguous f_DC is used
in the reference function, that is, if the true f_DC is offset from the reference f_DC by
more than f_p/2. In this case, the pixel shift will be according to the Doppler
offset between the reference f_DC and the Doppler centroid of the ambiguous
Doppler spectrum, resulting in a pixel location error of

    Δx = (m f_p / f_R) V_sw    (8.2.12)

where m is the number of PRFs the reference f_DC is offset from its true value
(i.e., the azimuth ambiguity number). Using Seasat as an example, with m = 1,
V_sw = 7.5 km/s, f_p = 1647 Hz, and f_R = 525 Hz/s, the azimuth target location
error associated with a processing Doppler centroid offset by one ambiguity is
approximately 23 km. Additionally, there is a small range offset which is given
by Eqn. (6.5.7). Nominally, for Seasat this is on the order of 200 m.
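The 23 km figure quoted above follows directly from Eqn. (8.2.12); a minimal sketch using the Seasat values given in the text:

```python
def ambiguity_azimuth_error(m, prf_hz, fr_hz_per_s, vsw_m_s):
    """Azimuth location error for an m-PRF Doppler centroid ambiguity,
    Eqn. (8.2.12): delta_x = m * f_p / f_R * V_sw."""
    return m * prf_hz / fr_hz_per_s * vsw_m_s

# Seasat example values from the text: m = 1, f_p = 1647 Hz,
# f_R = 525 Hz/s, V_sw = 7.5 km/s
dx = ambiguity_azimuth_error(m=1, prf_hz=1647.0, fr_hz_per_s=525.0,
                             vsw_m_s=7500.0)
print(round(dx / 1000.0, 1), "km")
```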
The third equation is the earth model equation. An oblate ellipsoid can be
used to model the earth's shape as follows

    (x_t² + y_t²)/(R_e + h)² + z_t²/R_p² = 1    (8.2.13)

where R_e is the radius of the earth at the equator, h is the local target elevation
relative to the assumed model, and R_p, the polar radius, is given by

    R_p = (1 − f)(R_e + h)    (8.2.14)

where f is the flattening factor. If a topographic map of the area imaged is used
to determine h, then the earth model parameters should match those used to
produce the map. Otherwise, a mean sea level model such as that given by
Wagner and Lerch (1977) can be used.
The target location as given by {x_t, y_t, z_t} is determined from the simultaneous
solution of Eqn. (8.2.7), Eqn. (8.2.9) and Eqn. (8.2.13) for the three unknown
target position parameters. This is illustrated pictorially in Fig. 8.2. This figure
shows the earth (geoid) surface intersected by a plane whose position is given
by the Doppler centroid equation. This intersection, a line of constant Doppler,
is then intersected by the slant range vector at a given point, the target location.
The left-right ambiguity is resolved by knowledge of the sensor pointing
direction.

The accuracy of this location procedure (assuming an ambiguous f_DC was not
used in the processing) depends on the accuracy of the sensor position and
velocity vectors, the measurement accuracy of the pulse delay time, and
knowledge of the target height relative to the assumed earth model. The location
does not require attitude sensor information. The cross-track target position is
established by the sampling window, independent of the antenna footprint
location (which does depend on the roll angle). Similarly, the azimuth squint
angle, or aspect angle resulting from yaw and pitch of the platform, is determined
by the Doppler centroid of the echo, which is estimated using a clutterlock
technique. Thus the SAR pixel location is inherently more accurate than that
of optical sensors, since the attitude sensor calibration accuracy does not
contribute to the image pixel location error. The following sections discuss the
relationship of platform ephemeris errors, ranging errors, and target elevation
errors to the image geometric calibration accuracy parameters.
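The simultaneous solution of the range, Doppler, and earth model equations lends itself to an iterative numerical solution. The sketch below is one possible Newton implementation, not the production algorithm of the text; the orbit, wavelength, and earth model values are illustrative assumptions, and the solution is verified against a synthetically generated target.

```python
import math

RE = 6378137.0          # assumed equatorial radius, m
F = 1.0 / 298.257       # assumed flattening factor
RP = (1.0 - F) * RE     # polar radius, Eqn. (8.2.14) with h = 0
WE = 7.2921159e-5       # earth rotation rate, rad/s
LAM = 0.235             # assumed L-band wavelength, m

def residuals(rt, rs, vs, rng, fdc):
    """Residuals of Eqns. (8.2.7), (8.2.9), (8.2.13) at trial target rt."""
    d = [rt[k] - rs[k] for k in range(3)]
    r = math.sqrt(sum(x * x for x in d))
    vt = (-WE * rt[1], WE * rt[0], 0.0)   # Vt = we x Rt, Eqn. (8.2.10)
    dv = [vs[k] - vt[k] for k in range(3)]
    f1 = r - rng                          # range equation
    f2 = (-2.0 / (LAM * r)
          * sum(dv[k] * (rs[k] - rt[k]) for k in range(3)) - fdc)  # Doppler
    f3 = (rt[0]**2 + rt[1]**2) / RE**2 + rt[2]**2 / RP**2 - 1.0    # earth model
    return [f1, f2, f3]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def locate_target(rs, vs, rng, fdc, guess, iters=15):
    """Newton iteration with a finite-difference Jacobian (1 m steps)."""
    x = list(guess)
    for _ in range(iters):
        f = residuals(x, rs, vs, rng, fdc)
        jac = [[0.0] * 3 for _ in range(3)]
        for j in range(3):
            xp = list(x)
            xp[j] += 1.0                  # 1 m perturbation per component
            fp = residuals(xp, rs, vs, rng, fdc)
            for i in range(3):
                jac[i][j] = fp[i] - f[i]
        den = det3(jac)
        step = []
        for j in range(3):                # Cramer's rule for J s = f
            mj = [[jac[i][k] if k != j else f[i] for k in range(3)]
                  for i in range(3)]
            step.append(det3(mj) / den)
        x = [x[k] - step[k] for k in range(3)]
    return x

# Synthetic geometry: sensor 800 km above the equator, side-looking north
rs = (RE + 800.0e3, 0.0, 0.0)
vs = (0.0, 7450.0, 0.0)
u, w = math.radians(3.0), math.radians(0.01)
truth = (RE * math.cos(u) * math.cos(w),
         RE * math.cos(u) * math.sin(w),
         RP * math.sin(u))                # exactly on the ellipsoid
rng = math.sqrt(sum((truth[k] - rs[k])**2 for k in range(3)))
vtru = (-WE * truth[1], WE * truth[0], 0.0)
fdc = -2.0 / (LAM * rng) * sum((vs[k] - vtru[k]) * (rs[k] - truth[k])
                               for k in range(3))

guess = (RE * math.cos(math.radians(2.8)), 0.0, RP * math.sin(math.radians(2.8)))
sol = locate_target(rs, vs, rng, fdc, guess)
err = math.sqrt(sum((sol[k] - truth[k])**2 for k in range(3)))
print("location error, m:", err)
```

The initial guess placed on the correct (north) side of the ground track resolves the left-right ambiguity, mirroring the pointing-direction argument in the text.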

Figure 8.2 Geocentric coordinate system illustrating a graphical solution for the pixel location
equations.

8.2.3 Platform Ephemeris Errors

The platform position and velocity errors can be broken into three components:
(1) along-track errors; (2) cross-track errors; and (3) radial errors. We will
examine the effects of each of these in terms of the azimuth and range target
positioning error.

Along-Track Position Error, ΔR_x. An along-track position error causes an
azimuth target location error according to

    Δx₁ ≈ (R_t / R_s) ΔR_x    (8.2.15)

where ΔR_x is the along-track sensor position error. The cross-track or range
location error from an error in ΔR_x is negligible.


Cross-Track Position Error, ΔR_y. A cross-track sensor position error
predominantly results in a target range location error of

    Δr₁ ≈ (sin γ / sin η) ΔR_y    (8.2.16)

where ΔR_y is the cross-track sensor position error. A small azimuthal target
displacement will result from a shift in the earth's rotational velocity at this
new cross-track target position according to Eqn. (8.2.11). However, the effect
is quite small and can be neglected for most applications.

Radial Position Error, ΔR_z. A sensor radial position error is essentially an error
in the estimate of the sensor altitude, H. From Eqn. (8.2.5) the change in look
angle for a given change in the sensor radial position is

    Δγ = cos⁻¹[(R² + R_s² − R_t²)/(2R_s R)] − cos⁻¹[(R² + (R_s + ΔR_z)² − R_t²)/(2(R_s + ΔR_z)R)]    (8.2.17)

which leads to a target range position error of approximately

    Δr₂ ≈ R Δγ / sin η    (8.2.18)

A radial sensor position error will also cause an azimuthal target location error
according to the resultant Doppler shift Δf_DC, which is given by

    Δf_DC = (2V_e / λ)(cos ζ_t sin a_i cos γ) Δγ    (8.2.19)

where V_e is the earth tangential speed at the equator, ζ_t is the geocentric latitude
of the target, a_i is the orbital inclination angle, and Δγ, the change in look angle,
is given by Eqn. (8.2.17). The resultant target azimuth location error is given
by Eqn. (8.2.11), which can be rewritten as

    Δx₂ ≈ Δf_DC λ R V_sw / (2V_st²)    (8.2.20)

where V_st is the magnitude of the relative sensor-to-target velocity.

Sensor Velocity Errors, ΔV_x, ΔV_y, ΔV_z. The along-track, cross-track, and radial
sensor velocity errors each produce an azimuth location error proportional to
the projection of that sensor velocity error component in the sensor-to-target
direction. This component of the velocity error is given by

    ΔV = ΔV_x sin θ_s + ΔV_y sin γ + ΔV_z cos γ    (8.2.21)

where θ_s is the squint angle of the sensor measured relative to broadside. From
Eqn. (8.2.20) with Δf_DC = 2ΔV/λ we get an azimuth location error of

    Δx₃ ≈ ΔV R V_sw / V_st²    (8.2.22)

The range location error from these sensor velocity error components is
negligible. However, an along-track velocity error does produce an azimuth
scale error in the image according to

    k_a ≈ (ΔV_x / V_st) × 100%    (8.2.23)

Perhaps a more severe effect resulting from a radial sensor position error
than the target location error is the image cross-track scale error. Consider
the look angle offset Δγ resulting from a radial position error ΔR_z. This
approximately translates into an equivalent incidence angle error (i.e., Δγ ≈ Δη).
Therefore, the ground range pixel spacing given by Eqn. (8.2.6), which is inversely
proportional to sin η, results in a range scale error of

    k_r = [sin(η + Δη)/sin η − 1] × 100%    (8.2.24)
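The radial-error propagation chain (look angle perturbation, Eqn. (8.2.17), then range error, Eqn. (8.2.18)) can be evaluated numerically. The sketch below uses assumed Seasat-like radii and slant range; these specific values are illustrative, not figures from the text.

```python
import math

def look_angle(r, rs, rt):
    """Look angle from the law of cosines, Eqn. (8.2.5)."""
    return math.acos((r * r + rs * rs - rt * rt) / (2.0 * r * rs))

# Assumed geometry: ~793 km orbit, slant range near a 20 deg look angle
rt = 6378.0e3          # target geocentric radius, m
rs = rt + 793.0e3      # sensor geocentric radius, m
r = 850.5e3            # slant range, m

gamma = look_angle(r, rs, rt)
eta = math.asin(rs / rt * math.sin(gamma))   # incidence angle, Eqn. (8.2.4)

drz = 10.0                                   # 10 m radial ephemeris error
dgamma = look_angle(r, rs, rt) - look_angle(r, rs + drz, rt)  # Eqn. (8.2.17)
dr2 = r * dgamma / math.sin(eta)             # range position error, Eqn. (8.2.18)
print(round(math.degrees(gamma), 2), "deg look;",
      round(math.degrees(eta), 2), "deg incidence;",
      round(dr2, 1), "m range error")
```

Note the amplification: for this geometry a 10 m radial orbit error maps into a ground range location error several times larger, which is why radial ephemeris accuracy dominates the cross-track error budget.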

8.2.4 Target Ranging Errors

The sensor-to-target slant range is determined by the signal propagation time
through the atmosphere as given in Eqn. (8.2.8). Slant range errors arise from
error in the estimation of the sensor electronic delay, τ_e, or uncertainty in setting
the data record window relative to the pulse initiation. The electronic delay
term represents the time elapsed from generation of the transmit pulse control
signal (i.e., the data record window timing reference) until the pulse radiates
from the antenna, plus the time for the received echo to travel from the antenna
through the receiver electronics to the ADC. The electronic delays, which are
typically on the order of microseconds, are generally characterized preflight
and monitored inflight to measure relative drift as a result of component aging
or temperature variation. Typically, this delay is measured using a leakage chirp
that flows directly from the transmit chain to the receive chain via a circulator
(see Fig. 6.2). The additional delay through the antenna feed system to the
radiating elements is usually estimated by analysis. For passive antenna
subsystems, such as Seasat or E-ERS-1, this technique is adequate. However,
an active system, such as the SIR-C antenna which has transmit/receive (T/R)
modules in the antenna feed assembly (see Fig. 6.15), requires a more complex
experimental setup with an external transmitter/receiver unit to measure delay
through this portion of the system.

A second key source of slant range estimation error can arise from
propagation timing errors. It was assumed in Eqn. (8.2.3) that the propagation


velocity of the electromagnetic wave was equal to the speed of light, c. In general
this is a good approximation; however, under certain ionospheric conditions a
significant increase in the signal propagation time relative to propagation time
in a vacuum can occur. This additional delay, τ_I, is given by

    τ_I = K_I R_I / f_c²    (8.2.26)

where R_I is the propagation path length through the ionosphere, f_c is the radar
carrier frequency, and K_I is a scale factor that depends on the ionospheric
electron density (N_TV). Figure 8.3 is a plot of ionospheric group delay versus
carrier frequency (Brookner, 1985). At an incidence angle η = 80°, for severe
ionospheric conditions the round trip delay is on the order of 1–2 µs at L-band.
Assuming a medium ionosphere, for the Seasat incidence angle and radar
frequency, the expected delay is on the order of 150 ns, which translates into a
22.5 m slant range error and a range target position error of nearly 65 meters.

The variation in ionospheric conditions from mild to severe is both temporal
and geographical. The electron density N_TV, which is the key ionospheric
parameter determining K_I, is typically several times greater at local noon than
at midnight; it also peaks near the equator and is minimum at the poles. An
additional factor affecting K_I is the solar activity. The density is highest
(large K_I) at the sun spot maximum which occurs every 11 years (e.g., 1990, 2001).

For a radar system such as an ocean altimeter, where the propagation delay
must be measured to a fraction of a nanosecond accuracy, a dual-frequency
radar is required to measure K_I (TOPEX, 1981). The relative shift in τ_I can be
used to solve for K_I by

    K_I = [τ_I(f₁) − τ_I(f₂)] f₁² f₂² / [R_I(f₂² − f₁²)]    (8.2.27)

where f₁ and f₂ are the two carrier frequencies. An alternative approach to
calibrate this delay is to access a database of ground measurements available
through the Environmental Science Services Administration (ESSA) in Boulder, CO.

A ranging error, resulting from an electronic delay measurement error or
from unmodeled variation in the propagation velocity of the EM wave, will
result in a cross-track target location error of

    Δr₃ = c Δτ / (2 sin η)    (8.2.28)

where Δτ is the slant range timing error (e.g., Δτ_e, τ_I). For example, a 10 ns
electronic delay measurement error results in a 1.5 m location error in the slant
range image, which translates into Δr₃ = 4.4 m in the ground range image for
Seasat (η = 20°). Similarly, an unmodeled propagation delay of τ_I = 50 ns
results in a 7.5 m slant range error and a ground range error Δr₃ = 22 m.

Figure 8.3 Plot of ionospheric group delay (two-way) versus radar carrier frequency for both
severe and medium ionosphere (Brookner, 1985).
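The two worked numbers above follow directly from Eqn. (8.2.28); a short numerical check (only the speed of light value is assumed):

```python
import math

C = 2.998e8  # speed of light, m/s

def ground_range_error(dtau_s, eta_rad):
    """Cross-track location error from a slant range timing error,
    Eqn. (8.2.28): c * dtau / (2 sin(eta))."""
    return C * dtau_s / (2.0 * math.sin(eta_rad))

eta = math.radians(20.0)  # Seasat incidence angle used in the text's example
print(round(ground_range_error(10e-9, eta), 1))  # 10 ns electronic delay error
print(round(ground_range_error(50e-9, eta), 1))  # 50 ns ionospheric delay
```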

Target Elevation Error. In the target location algorithm outlined in Section
8.2.2 an oblate ellipsoid was assumed for the earth model. To account for
variation in the target elevation about this ellipsoid, the ellipsoid radius can be
adjusted by the elevation, h, as in Eqn. (8.2.13) and Eqn. (8.2.14). The effect of
an error in estimating the target height can be stated in terms of the effective
slant range error. A slant range error of (Fig. 8.4)

    ΔR = Δh / cos η    (8.2.29)

will result from a height estimation error Δh, where η is the local incidence
angle. The target range location error is then given by

    Δr₄ = Δh / tan η    (8.2.30)

Assuming an incidence angle η = 20°, as in Seasat, Δh = 1 m results in a target
location error Δr₄ = 2.4 m.

Figure 8.4 Geometry illustrating effect of height estimation error Δh on target range location.
Foreshortening and Layover Effects

As we discussed in the previous section, an error in the target height relative
to the ellipsoid model results in a slant range estimation error, which in turn
produces a cross-track displacement of the target within the image frame. This
local geometric distortion is due to the fact that the SAR (actually any radar)
is a ranging device and therefore generates a cross-track reflectivity map based
on the sensor to target range distance. For a smooth surface, there is an inherently
nonlinear relationship (1/sin η) between the sensor-to-target range and the
cross-track target position in the image. This relationship, given by Eqn. (8.2.4),
Eqn. (8.2.5) and Eqn. (8.2.6), is illustrated in Fig. 8.5. Since for a side-looking
radar the angle of incidence η varies across the swath, the ground distance
represented by each sample is not uniform. The effect is that features in the
near range appear compressed with respect to the far range. Only for smooth
surfaces can the slant range spacing and the ground range spacing be related by
sin η.

Figure 8.5 Relationship between slant range and ground range image presentation for a side-
looking radar.

As the local terrain deviates from a smooth surface, additional geometric
distortion occurs in the SAR image relative to the actual ground dimension
(Lewis and MacDonald, 1970). This effect, illustrated in Fig. 8.6a, is termed
foreshortening when the slope of the local terrain, α, is less than the incidence
angle, η. Similarly, a layover condition exists for steep terrain where α ≥ η. For
ground areas sloped towards the radar (α⁺), the effective incidence angle
becomes smaller, thus increasing the cross-track pixel spacing. Ground areas
sloped away from the radar (α⁻) have effectively a larger local incidence angle,
thus decreasing the range pixel size.

In relatively high relief areas, as shown in Fig. 8.6b, a layover condition may
exist such that the top of a mountain is at a nearer slant range than the base.
In this case, the image of the mountain will be severely distorted, with the peak
appearing in the image at a nearer range position than the base (see Fig. 8.21).
Additionally, echo signals from multiple target locations will arrive at the SAR
receiving antenna simultaneously. Therefore the fraction of scattered power
arising from each target cannot be resolved. To properly correct this type of
geometric distortion requires some assumption about the scattering model.
Theoretically, if the backscatter coefficient as a function of the incident geometry
for a particular target area is known, the relative power contribution of a

Figure 8.6 Geometric distortions in SAR imagery: (a) Foreshortening; (b) Layover; (c) Shadow;
(d) A combination of imaging geometries illustrating secondary peak.

particular range bin from each iso-range target (in the layover region) can be
determined and assigned to the correct cross-track pixel in the resampled
(rectified) image. Practically, this would be an extremely difficult process, since
for each output pixel a search would be required over an area of digital elevation
data whose targets could produce the identical range and Doppler histories. Of
course, the available σ⁰ versus η model is only approximate, and therefore a
radiometrically calibrated image cannot be recovered and obviously the phase
information is lost.

An image distortion related to the layover effect is radar shadow. Shadowing
occurs when the local target slopes away from the radar at an angle whose
magnitude is greater than or equal to the incidence angle of the transmitted
wave (α⁻ ≥ η). When a shadow condition occurs, the shadow region does not
scatter any signal. In the rectified image, these areas are typically represented
at a signal level equal to the system thermal noise power. This will prevent a
negative power representation of shadow area in the noise subtracted imagery.

To perform scientific interpretation of data products with these types of
distortion, the scientist must relate the backscatter coefficient to the local incident
geometry of the EM wave. Therefore, as an ancillary data product, a local
incidence angle map (i.e., η_l(i, j)) should be provided with each terrain corrected
image. This map, in conjunction with the calibrated image, provides the
investigator with both the backscatter coefficient and the incidence angle
for each resolution cell. Given this ancillary data set, the user can directly
characterize the target reflectivity as a function of imaging geometry.
Additionally, the incidence angle map provides information on the location of
the radar layover and shadow regions, which is important since these data cannot
be calibrated in terms of σ⁰.
Although it is somewhat beyond the scope of this text to derive the full set
of geometric conditions which would result in radar layover and shadow, we
should point out that it is a rather complex process to search over regions of
the digital elevation map (DEM) to determine if a secondary peak is intersected
by the radar beam. Figure 8.6d again illustrates radar shadow and layover
regions. An incidence angle map should indicate that segment ab is a layover
region since the local slope is greater than the incidence angle. Segment bc is a
normally illuminated (foreshortened) region where the local η values are
provided. Segments cd and ef are shadow regions and should be indicated as
such. Even though the local slope α(i, j) is less than η(j), it is intersected by a
hidden ray, not the actual radar beam. Similarly, segments de and fg are
foreshortened regions where the local incidence angle

    η_l(i, j) = η(j) − α(i, j)    (8.2.31)

should be provided in the incidence angle map. A detailed treatment for
generation of the radar shadow and layover map is given by Kropatsch and
Strobl (1990). For a real-time geocoding system, if the radar parameters
(i.e., data record window position, look angle) and platform ephemeris are

known, the incidence angle map can be generated in advance of the processing,
using a DEM. The radar data is not required for this process.

Figure 8.7 Illustration of specular point migration effect in SAR imagery (Courtesy of M. Kobrick).

A final, perhaps more subtle, source of geometric distortion is specular point
migration. This occurs as shown in Fig. 8.7 for rounded hilltops, where the
predominant scatterer location is dependent on the incidence angle of the
transmitted wave. This effect can be important when registering two image
frames acquired at different incidence angles. For example, in stereo imaging,
where the relative target displacement from two images at different incidence
angles determines the target height, specular point migration can be a significant
error source.
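The per-facet classification just described (layover where the slope toward the radar meets or exceeds the incidence angle, shadow where the surface tilts away more steeply than the arriving ray, foreshortening otherwise, with the local incidence angle given by Eqn. (8.2.31)) can be sketched for a one-dimensional slope profile. This uses the simple local-threshold form only; the ray-traced "hidden ray" shadowing discussed in the text is more involved. The profile values and the 23° incidence angle are illustrative assumptions.

```python
def classify(slope_deg, eta_deg):
    """Classify a terrain facet by local slope alpha (positive toward the
    radar), using eta_local = eta - alpha from Eqn. (8.2.31)."""
    eta_l = eta_deg - slope_deg
    if slope_deg >= eta_deg:
        return "layover", eta_l
    if eta_l >= 90.0:           # surface tilted away past grazing
        return "shadow", eta_l
    return "foreshortened", eta_l

eta = 23.0                                   # assumed incidence angle, deg
profile = [0.0, 10.0, 30.0, -40.0, -80.0]    # illustrative local slopes, deg
labels = [classify(a, eta)[0] for a in profile]
print(labels)
```

Such a pass over a DEM-derived slope map is exactly what would populate the ancillary incidence angle map product, with layover and shadow pixels flagged as uncalibratable.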

8.3 GEOMETRIC RECTIFICATION

The geometric distortions described in the previous section can in part be
corrected by image resampling if information about the sensor position, imaging
geometry, and target elevation relative to the ellipsoid are available. Especially
severe distortions, such as layover and radar shadow, cannot be corrected to
produce a calibrated image. However, as we have just described, a separate
data product can be generated which identifies the pixels in either layover or
shadow regions. These areas do not contain calibrated target reflectivity data
and therefore should be excluded from quantitative data analyses.


In this section, we will present algorithms for performing the image geometric
rectification. Our algorithms are based on a model of the sensor imaging
mechanisms and do not require tiepointing to derive the correction factors.
Essentially, there are three main categories of geometric rectification algorithms:
(1) ground plane, deskewed projection; (2) geocoding to a smooth ellipsoid;
and (3) geocoding to a topographic map. Each of these algorithms uses the
pixel location technique previously described in Section 8.2. Therefore the
geometric calibration accuracy of the corrected data products is directly related
to the target location error.

8.3.1

Image Resampllng

Prior to a discussion of the geometric correction algorithms, it is appropriate


to outline some basic rules for resampling the SAR image data. In the strictest
sense, assuming it is required that the resampling operation not degrade the
quality of the SAR image (i.e., no information is lost), then the resampling
algorithm must conserve all statistics (i.e., the probability distribution function
and all moments) of the input image. We know, from the Shannon-Whittaker
sampling theorem (Appendix A) for a Nyquist sampled image, that an
interpolation kernel of the form sinc(x) can be used with no loss of information.
In practice, however, a truncated sine function must be used, resulting in image
artifacts (e.g., distortion of the image statistics). Thus, given that we cannot
preserve all the image information using a finite resampling filter, the question
remains as to what the best approach is for optimally conserving the input
characteristics in the resampled output image.
Since the complex signal data is of finite bandwidth, as determined by the
sensor and data processing parameters, the required sampling frequency is
definable and is typically met by most radar systems. Oversampling factors of
10% to 20% are typical to minimize the effects of aliasing from the tails of the
spectra. Additionally, assuming the filters used in the signal processing are also
Nyquist sampled, the complex image samples are uncorrelated. We can define,
in general, a complex interpolation operation of the form
V_o(i) = Σ_j c_j V_I(i + j)                                    (8.3.1)

where V_I, V_o are the complex input and output (amplitude) images respectively
and the c_j are complex resampling coefficients. It can be shown that the
interpolation of Eqn. (8.3.1) preserves the statistical distribution of the input
data, including all moments, if

Σ_j |c_j|^2 = 1                                                (8.3.2)

In other words, if the input complex image data samples are uncorrelated then
a unit energy interpolation filter preserves the image statistical information.
For correlated data samples, with an autocorrelation function given by ρ_V, the
filter requirement becomes (Quegan, 1989)

Σ_i |c_i|^2 + 2 Re Σ_i Σ_{j>i} c_i c_j* ρ_V(i − j) = 1         (8.3.3)

It should be noted that, although we have preserved the statistical distribution
and moments with the criteria of Eqn. (8.3.2) and Eqn. (8.3.3), the autocorrelation
function, and therefore the texture of the resampled output image, will be altered
(except in the special case of nearest neighbor resampling). Depending on the
application of the data, other criteria for determination of the filter coefficients
may be used which are a better match to the desired image characteristics
(e.g., the impulse response function and sidelobe levels). In any case, a data
analysis or interpretation scheme that utilizes textural information must account
for the effects of resampling.

It is not unusual for the image geometric rectification to be applied to a
detected (intensity) image product. The detection process, which involves
squaring the real and imaginary components, doubles the spectral bandwidth of
the original image and therefore requires twice the sampling frequency of the
input image (see Appendix A). If the sampling is not doubled (which is usually
the case) aliasing occurs (the severity of which depends on the scene content)
and the detected samples will be correlated.

In the case of resampling the intensity image, we are again interested in
preserving the output image statistical distribution and the moments relative
to the input image. Since, as was discussed in Section 5.2, the input intensity
image has an exponential rather than a Gaussian distribution (as in the real
and imaginary components of the complex image), the image statistical
distribution will not be preserved. Assuming the intensity image is oversampled,
such that the data are independent, the interpolated image can be described in
terms of gamma distributions (Madsen, 1986).

Given an interpolation filter of the form

I_o(i) = Σ_j d_j I_I(i + j)                                    (8.3.4)

where I_I, I_o are the input and output (intensity) images respectively and the
d_j are real interpolation coefficients, preservation of the image mean sets a
condition on the resampling coefficients of

Σ_j d_j = 1                                                    (8.3.5)

The preservation of the second moment and the variance requires (Quegan, 1989)

Σ_i Σ_j d_i d_j |ρ_I(j − i)|^2 = 1                             (8.3.6)

where ρ_I is the image autocorrelation function. Similar equations can be written
for preservation of the higher order moments (Madsen, 1986). Again, it should
be noted that additional criteria may be necessary to derive an interpolation
kernel that meets other image quality specifications. A final point is that the
interpolation should not be carried out in the detected amplitude image domain
(i.e., the square root of the intensity image). This is a fairly common error since
image data are typically represented as amplitude data when distributed to the
users. Images are represented in an amplitude format since this representation
has more contrast than the intensity image and is therefore easier to interpret
visually. However, resampling the amplitude image is a nonlinear process and
therefore the resulting output image cannot be quantitatively interpreted in
terms of σ0. This also holds true for the multilooking operation (which is
effectively a box filter). Multiple pixel averaging to reduce speckle noise must
be performed on the intensity image.

8.3.2 Ground Plane, Deskewed Projection

In this section, as well as in the following sections on geocoding, we assume
the input image consists of single-look complex values in the natural pixel
spacing of the radar system. This spacing is determined in range by the complex
sampling frequency f_s of the ADC and in azimuth by the radar pulse repetition
frequency f_p according to

Slant Range:   δx_sr = c/(2 f_s)                               (8.3.7)

Ground Range:  δx_gr = c/(2 f_s sin η(j))                      (8.3.8)

Azimuth:       δx_az = V_sw/f_p                                (8.3.9)

The parameter η(j) is the incidence angle at cross-track pixel number j. The
slant range to that pixel is given by Eqn. (8.2.8) and the magnitude of the swath
velocity V_sw is given by Eqn. (8.2.2).

The process to convert the input image to a ground plane deskewed projection
at uniform ground spacing is given by Curlander (1984). The output cross-track
and along-track pixel spacing arrays are first generated by

x_az(i) = i δx_az;   x'_az(i') = i' δx'_az;   i = 1, N_a;   i' = 1, N'_a     (8.3.10a)

x_gr(j) = j δx_gr;   x'_gr(j') = j' δx'_gr;   j = 1, N_r;   j' = 1, N'_r     (8.3.10b)

where x_az and x_gr are the azimuth and ground range input spacing arrays and
N_a, N_r are the input array sizes in azimuth and range, respectively. The primed
values are the output arrays. Typically the output spacing is chosen such that
δx'_az = δx'_gr, resulting in square pixels. The output spacing array thus serves
as a pointer to the input spacing array to generate the resampling coefficients.
These coefficients should be determined to preserve the image statistics according
to conditions outlined in the previous section. The real and imaginary parts are
resampled separately.
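The natural spacings of Eqns. (8.3.7)-(8.3.9) are direct to evaluate. In the small helper below, the parameter values are illustrative round numbers, not system specifications taken from the text:

```python
import math

C = 2.998e8  # wave propagation velocity, m/s

def natural_spacings(fs_hz, prf_hz, v_swath, incidence_deg):
    """Natural pixel spacings per Eqns. (8.3.7)-(8.3.9)."""
    dx_sr = C / (2.0 * fs_hz)                                # slant range
    dx_gr = dx_sr / math.sin(math.radians(incidence_deg))    # ground range
    dx_az = v_swath / prf_hz                                 # azimuth
    return dx_sr, dx_gr, dx_az

# Illustrative values: 19 MHz complex sampling, 1650 Hz PRF,
# 6.6 km/s swath velocity, 23 deg incidence angle
dx_sr, dx_gr, dx_az = natural_spacings(19.0e6, 1650.0, 6600.0, 23.0)
```

Note that the ground range spacing grows as the incidence angle decreases toward near range; this is the slant range nonlinearity that the rectification removes.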
In establishing the two one-dimensional resampling arrays in Eqn. (8.3.10),
we assumed that the azimuth and range input pixel spacings were independent.
While it is true that the range spacing is independent of azimuth, the azimuth
spacing does have some dependence on range position. This comes from the
target "'.elocity term in Eqn. (8.2.2) which can be approximated by
1__

J1i

( V.

COS OC;) COS ( 1

(8.3.11)

where oc; is the orbit inclination angle and ( 1 is the geocentric latitude. We can
evaluate the error resulting from the assumption that J1i is constant within an
image frame. For a 100 km swath, the worst case latitude error at the swath
edge is less than 0.5 and the associated scale error is less than 0.05%. Therefore,
across a 100 km swath image, the assumption that azimuth pixel spacing is
independent of range position results in a worst-case distortion of 50 m.
An additional consideration is that the uncorrected SAR image is naturally
skewed unless the data is frequency shifted to zero Doppler during the processing.
For spaceborne systems, either the earth rotation or an off broadside (squint)
imaging geometry will result in a Doppler shift in the echo data (Fig. 8.8a).
Assuming the processing is performed at the Doppler centroid, an image range
line is skewed relative to its orientation on the earth (Fig. 8.8b). Thus, the
output image must be deskewed according to its relative change in Doppler.
Using the near range pixel Doppler centroid as a reference (i.e., j = 1), this skew
is given by

(8.3.12)

where Δn_SK is in output azimuth pixels. For most systems this deskew can be
approximated as a linear function where

(8.3.13)

where k_SK is a skew constant approximated from Eqn. (8.3.12). The deskew
operation is not required if the azimuth reference function is centered about

zero Doppler and the data is shifted (by applying a phase ramp) prior to azimuth
compression (Fig. 8.8c). The zero Doppler approach is efficient for small Doppler
shifts, but can cause significant complexity in the azimuth correlator for large
squint angles.

If the platform squint (yaw, pitch) rate requires that the Doppler centroid
be updated along track, then each azimuth processing block must be deskewed
separately and, in general, resampled prior to merging the blocks into the final
image frame. In practice this azimuth resampling can be avoided by including
a phase shift in the azimuth reference function. If the Doppler shift is increasing
block-to-block (i.e., larger skew), then an additional overlap between processing
blocks is required to ensure that there are no gaps in the merged image following
deskew.

Figure 8.8 Illustration of image skew from earth rotation induced Doppler shift: (a) Plot of
iso-Doppler lines; (b) Image format when processed to Doppler centroid; (c) Image format when
processed to zero Doppler (Courtesy of K. Leung).

The residual angular skew in the rectified image as referenced to an orthogonal
coordinate system is a key measure of geometric fidelity. Typical numbers for
high precision image products are skew errors less than 0.1° and image
orientation errors relative to some reference line (e.g., true north) of 0.2°. Skew
errors are predominantly processor induced artifacts from errors in the Doppler
parameter estimation, while orientation errors arise from both skew errors and
ephemeris errors (primarily platform velocity).

8.3.3 Geocoding to a Smooth Ellipsoid

Geocoding is the process of resampling the image to an earth fixed grid such
as Universal Transverse Mercator (UTM) or Polar Stereographic (PS) map
projections (Graf, 1988). A key element for routine production of geocoded
products is the use of the radar data collection, processing, and platform
parameters to derive the resampling coefficients. The technique described here
is based on using a model of the SAR image geometric distortion rather than
operator intensive interactive routines such as tiepointing (Curlander et al.,
1987). The geocoding routine is based on the absolute pixel location algorithm
described in Section 8.2.2. Recall that this technique relies on the inherent
internal fidelity in the SAR echo data to determine precise sensor to target
range and antenna pointing (squint angle), without requiring specific information
about platform attitude or altitude above nadir. The geocoding procedure
generally consists of two steps: (1) Geometric rectification; and (2) Image
rotation.

Geometric Rectification to Map Grid. The initial step in the rectification
procedure is to generate a location map for each image pixel using the location
algorithm in Section 8.2.2. Here we assume a smooth geoid at some mean target
elevation for the entire image frame. Following generation of this location map,
the image pixels can be resampled into any desired cartographic projection by
mechanization of the equations appropriate for the desired earth grid. A good
reference for these map projections is published by the United States Geological
Survey (Snyder, 1983). The relationship between the complex image pixels in
the slant range-Doppler reference frame and the map projection can be
expressed in terms of coordinate transformations as follows (see Fig. 8.9)
(x, y) = T1(x', y')                                            (8.3.14a)

(x', y') = T2(l, p, β)                                         (8.3.14b)

where (x, y) is the coordinate frame defined by the original SAR image, (x', y')
is the coordinate frame of the rectified image, (l, p) is the coordinate frame
defined by the map grid, and β is the angle between grid north and y'
(Fig. 8.9). The coordinate system transformations are given by T1 for the rectified
to original image and by T2 for the geocoded to rectified image. A method for
calculating β is presented in the next subsection.
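In practice these transformations are evaluated exactly only at block corners, with interior pixel locations filled in by a bilinear fit, as described below. A minimal numerical sketch of that idea follows; the smooth distortion used here is hypothetical, standing in for the actual Section 8.2.2 location computation:

```python
import numpy as np

def exact_map(xp, yp):
    """Hypothetical smooth distortion standing in for the exact
    (computationally expensive) pixel-location calculation."""
    return (xp + 2e-4 * xp**2 + 0.05 * yp,
            yp + 1e-4 * xp * yp)

def fit_bilinear(corners, vals):
    """Solve v = c0 + c1 x' + c2 y' + c3 x'y' through four corner samples."""
    A = np.array([[1.0, xp, yp, xp * yp] for xp, yp in corners])
    return np.linalg.solve(A, np.asarray(vals, dtype=float))

def eval_bilinear(c, xp, yp):
    return c[0] + c[1] * xp + c[2] * yp + c[3] * xp * yp

# One 64 x 64 pixel block: exact locations computed at the corners only
corners = [(0.0, 0.0), (64.0, 0.0), (0.0, 64.0), (64.0, 64.0)]
xs, ys = zip(*[exact_map(xp, yp) for xp, yp in corners])
ax = fit_bilinear(corners, xs)   # plays the role of the {a_i}
by = fit_bilinear(corners, ys)   # plays the role of the {b_i}

# Interior pixel: bilinear estimate vs. the exact mapping at block center
x_b, y_b = eval_bilinear(ax, 32.0, 32.0), eval_bilinear(by, 32.0, 32.0)
x_e, y_e = exact_map(32.0, 32.0)
```

For this block size and distortion the interior location error stays well under a pixel, which is the sense in which the block subdivision trades a small geometric distortion for a large reduction in computation.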
The rectified image is in a grid defined by (x', y') where the abscissa (x') is
parallel to the cross-track direction and the ordinate (y') is parallel to the
spacecraft velocity vector at the frame center. A rectified image in the geocoded
format is generated by rotation of the rectified image into a grid defined by
(l, p). The above transformations supply the spatial mapping of the geodetic
locations into the slant range and azimuth pixel locations. Geometric
rectification without geocoding thus involves resampling of the input image
(x, y) into a coordinate system defined by the map grid (x', y'). Equation (8.3.14)
is written in terms of transformations on the output image, and so the first step
in the resampling procedure is to determine the fractional slant range and
azimuth pixel numbers in the original image that correspond to each output
grid element.

Figure 8.9 Relationship between the rectified and geocoded image coordinate frames.

An exact mapping on a pixel-by-pixel basis of the output grid to the input
image is a computationally expensive process. This procedure can be simplified
(at the cost of some geometric distortion) by subdivision of the output grid
into blocks. Only the corner locations of each block are calculated using the
previously described location procedure, and the relative locations within each
block are then obtained using bilinear interpolation, that is

x = a_0 + a_1 x' + a_2 y' + a_3 x'y'                           (8.3.15a)

y = b_0 + b_1 x' + b_2 y' + b_3 x'y'                           (8.3.15b)

where the coefficient set {a_i, b_i} of each block is derived from the corner
locations. The block size is selected according to the geometric error specification
for the output image.

The transformation in Eqn. (8.3.14a) requires resampling of the complex
image, which involves two-dimensional (2D) interpolation of each of the real
and imaginary components. To reduce the number of computations, these
equations can be rewritten such that each 2D resampling can be performed in
two one-dimensional (1D) passes. The decomposition of the 2D resampling
into two 1D resampling passes is performed as follows (Friedmann, 1981)

Pass 1:
x = e_0 + e_1 u + e_2 v + e_3 uv
y = v                                                          (8.3.16)

Pass 2:
u = x'
v = f_0 + f_1 x' + f_2 y' + f_3 x'y'                           (8.3.17)

where the coefficient set {e_i, f_i} is determined from the set {a_i, b_i} for that
block. The first pass represents a rectification in the along-track direction and
the second pass represents a rectification in the cross-track direction as shown in
Figure 8.10. An intermediate image is generated by Pass 1 in the (u, v) grid and
the two-pass rectified image is in the desired (x', y') grid.

Figure 8.10 Illustration of the two-pass resampling procedure for geometric rectification.

Geometric Rotation. The geometrically rectified image is in a grid defined by
(x', y'). To transform the image into a geocoded format, a rotation of the image


is required. For a map projection such as the Universal Transverse Mercator
(UTM), this rotation aligns the image pixels with grid north. This rotation
angle is determined by the inclination of the orbital plane, α_i, and the latitude
of the scene center. From spherical geometry, the rotation angle can be shown
to be approximately

β ≈ sin^{-1}(cos α_i / cos ζ)                                  (8.3.18)

where ζ is the geodetic latitude of the image center. The approximation in
Eqn. (8.3.18) is strictly valid only for nadir pointing instruments. A more accurate
approach to derive the rotation angle β is to use the location algorithm in
Section 8.2.2 to determine the geocentric location of two iso-range points in
the image, from which the rotation angle relative to grid north can be derived.
The mapping of the rectified image pixels into the geocoded map grid is given
by the standard coordinate system rotation

[x']   [ cos β    sin β][l]
[y'] = [−sin β    cos β][p]                                    (8.3.19)

where β is the image rotation angle. Again, 2D resampling of the complex image
to effect the rotation can be separated into two 1D resampling passes by
decomposing the rotation matrix into the following form

[x']   [  1         0   ][cos β   sin β][l]
[y'] = [−tan β    sec β][  0       1   ][p]                    (8.3.20)

The image resampling passes are therefore (Fig. 8.11)

Pass 1:
x' = q
y' = −q tan β + r sec β

Pass 2:
q = l cos β + p sin β
r = p

where Pass 1 represents an image shear along y' and Pass 2 is an image shear
along l. An intermediate image is generated in the grid (q, r) and the desired
geocoded image in (l, p).

Figure 8.11 Illustration of the two-pass resampling procedure for image rotation.

To minimize aliasing of the image data, oversampling must be performed
prior to image rotation (Petersen and Middleton, 1962). The amount of
oversampling required to avoid overlapping image spectra is given by

g_p = 1 + tan β                                                (8.3.21)

where g_p is the oversampling factor. This represents an additional resampling
pass over the image. The next section describes a technique for reducing the
geocoding process into three 1D resampling passes.

Geocoding: Rectification and Rotation. The two resampling passes to rectify
the image, and the two passes required to rotate the rectified image into a
geocoded format, can be combined into three 1D resampling passes. Pass 2 of
the rectification process and Pass 1 of the rotation process are combined into
the second pass of this three pass process. The total transformation is
determined by combining Eqn. (8.3.16) and Eqn. (8.3.19). The resultant three
transformations are given by

Pass 1:
x = e_0 + e_1 u + e_2 v + e_3 uv
y = v

Pass 2:
u = q
v = f_0 + f_1 q + f_2 y' + f_3 qy';   y' = −q tan β + r sec β

Pass 3:
q = l cos β + p sin β
r = p

where the coefficient set {e_i, f_i} is determined from the set {a_i, b_i}. The
images in the grids defined by (u, v) and (q, r) are intermediate images generated
during the three stage resampling. The oversampling of the image data is
incorporated into the first pass. The cross-track rectification and an image shear
are combined into the second pass. The third pass is a second image shear and
resampling that takes the (q, r) coordinate intermediate image into a geocoded
format.
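The shear factorization of Eqn. (8.3.20) and the oversampling factor of Eqn. (8.3.21) can be checked numerically; the rotation angle below is an arbitrary illustrative value:

```python
import numpy as np

beta = np.radians(21.9)   # illustrative rotation angle

# Eqn (8.3.19): standard rotation taking (l, p) into (x', y')
R = np.array([[np.cos(beta),  np.sin(beta)],
              [-np.sin(beta), np.cos(beta)]])

# Eqn (8.3.20): vertical-shear times horizontal-shear factorization
A = np.array([[1.0,           0.0],
              [-np.tan(beta), 1.0 / np.cos(beta)]])
B = np.array([[np.cos(beta), np.sin(beta)],
              [0.0,          1.0]])

M = A @ B                 # product of the two shear passes

# Eqn (8.3.21): azimuth oversampling needed before the rotation passes
g_p = 1.0 + np.tan(beta)
```

The product M reproduces the rotation matrix R, which is why the two 1D shear passes are an exact replacement for the single 2D rotation.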
Figure 8.12 illustrates the intermediate stages during generation of a geocoded
image using the above scheme. The along-track corrections are applied and the
image is oversampled in the first pass. In the second pass, the cross-track
corrections are applied and the image is sheared. A final shear and an
undersampling in azimuth then transform the image into the desired output
grid. An example of this algorithm as applied to Seasat data is given in
Fig. 8.13. This image is from an ascending pass (Revolution 545) of an area near
Yuma, Arizona (ζ ≈ 33°N). A small segment of the original 100 km image frame
was selected for processing. The unrectified image data (detected from the
complex format for illustration) is shown in Fig. 8.13a. This image is oriented
at an angle, β = 21.9°, relative to true north as determined from Eqn. (8.3.18)
for the Seasat inclination angle, α_i = 108°. Figures 8.13b, c, d show the outputs
of the three resampling passes. Note that the final image in UTM projection
aligns the agricultural field boundaries with the image line and pixel axes.
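Evaluating Eqn. (8.3.18) with these Seasat parameters reproduces the quoted rotation to within a few tenths of a degree (the arcsine is negative for a retrograde inclination; the 21.9° quoted for this frame is the magnitude):

```python
import math

def rotation_angle_deg(inclination_deg, latitude_deg):
    """Grid-north rotation angle of Eqn. (8.3.18)."""
    return math.degrees(math.asin(math.cos(math.radians(inclination_deg))
                                  / math.cos(math.radians(latitude_deg))))

beta = rotation_angle_deg(108.0, 33.0)   # Seasat inclination, Yuma-area latitude
# abs(beta) evaluates to roughly 21.6 deg, close to the 21.9 deg in the text
```

The small difference from 21.9° is consistent with Eqn. (8.3.18) being an approximation and with the exact scene-center latitude differing slightly from 33°.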
Figure 8.12 Illustration of the three-pass geocoding procedure combining rectification and
rotation.

Figure 8.13 Seasat image of Yuma, AZ (Rev. 545) showing intermediate geocoded products: (a)
Original image; (b) Pass 1 output is azimuth corrected and oversampled; (c) Pass 2 output is range
corrected and range skewed; and (d) Pass 3 output is azimuth undersampled and azimuth skewed.

The UTM projected image in Fig. 8.13 can be compared with a geocoded
image from a descending Seasat pass covering the same area (Rev. 681) as
shown in Fig. 8.14. The ease with which changes can be detected between the
various fields in the two images demonstrates the benefits of using a common
coordinate system for representing the data products. A second example given
in Fig. 8.15 compares a geocoded Seasat scene to a SIR-B scene again covering
the same ground area. These data sets, acquired six years apart, demonstrate
the utility of the geocoded format for monitoring changes in land use. However,
the most striking difference in the two images is the distortion in the mountainous
region. Seasat had an incidence angle, η = 23°, while this particular SIR-B
image was acquired at η = 44°. Since the geocoding was performed assuming
a smooth oblate ellipsoidal earth model, the foreshortening distortion (which
is more severe for Seasat) remains in the final image product. An extension of this
geocoding technique to account for variation in the local topography is described
in the following section.
8.3.4 Geocoding to a Topographic Map

As previously described, in addition to the slant range nonlinearity and azimuth
skew distortion, effects such as radar foreshortening, layover, and shadow can
arise from deviation of the target elevation relative to a smooth geoid
(see Section 8.2.4). To geometrically correct these distortions, an independent
source of information is required, either from a second imaging angle
(e.g., radar interferometry, radar stereo) or from surface topographic maps.

Figure 8.14 Multitemporal geocoded Seasat images near Yuma, AZ: (a) Rev. 545, ascending pass;
(b) Rev. 681, descending pass.

Figure 8.15 Multisensor geocoded images near Oxnard, CA: (a) Seasat image acquired at η = 23°
in 9/78; (b) SIR-B image acquired at η = 44° in 10/84.

More information on the topics of stereo and interferometric SAR techniques
can be found in the literature (Zebker and Goldstein, 1986; Leberl et al., 1986;
Ramapriyan et al., 1986). In this section, we will specifically describe a technique
to automatically derive these geometric corrections from information provided
by a digital elevation map (DEM).

One possible technique for rectification of terrain induced distortion using
a DEM was reported by Naraghi et al. (1983) and later by Domik et al. (1986).
This technique uses the DEM to generate a simulated radar image by
illuminating the map from the radar imaging geometry. The simulated radar
image is then registered to the actual radar image by using a fine grid of tiepoints.
The absolute locations of these tiepoints are then used to estimate the polynomial
coefficients of a warping function that spatially transforms the radar image
coordinates into the simulated image coordinates. Following this coregistration
process, the radar image is resampled into a rectified format using the known
distortions in the simulated image. The key shortcoming in this technique is
that both the acquisition of the tiepoints and the generation of the simulated
images are operator and computationally expensive processes. An additional
limitation of this procedure is that the accuracy of the rectified image is directly
a function of the density of matching tiepoints. Therefore, this procedure is
generally used on small subimage blocks where only a few tiepoints are required
for a good registration accuracy.

An alternative approach, to be described in this section, requires at most 2-3
tiepoints for a long image strip (up to 1000 km). This technique, which can be
applied to either complex or intensity image data, was first proposed by
Kwok et al. (1987). It is a direct extension of the technique for geocoding to a
smooth ellipsoid described in the previous section. It utilizes the characteristics
of the radar imaging geometry to model the terrain induced geometric distortion
and perform resampling based on the predicted correction factors. The few
tiepoints needed are used only to remove the residual translational and rotational
errors between the predicted geodetic location of an image pixel and its actual
location on a topographic map. Furthermore, there is no tiepointing required
if the platform ephemeris errors are small, as is expected for future SAR systems
using the Global Positioning System (GPS) satellite network for orbit tracking.

Calculation of Target Displacement Due to Terrain. The basic procedure for
geocoding to a terrain map is as described in the previous section for a smooth
ellipsoid. The output grid projection and sample spacing are selected for the
geocoded image product. For each element in the output grid, which is typically
in latitude and longitude versus line and sample number, an elevation is
determined relative to some reference geoid from the DEM. The elevation at
each output grid location is calculated by performing a two-dimensional
interpolation of the DEM. An alternative approach would be to resample the
DEM into the output projection (e.g., UTM) at the required spacing prior to
geocoding (i.e., create a DEM database for a given region). The elevation values
could then be directly read from the DEM file without resampling. If a real
time geocoding system is required, this approach greatly simplifies the design
(Chapter 9).

Given the target elevation values in the output grid, the next step is to
generate a latitude, longitude versus (i, j) pixel number map for the complex
slant range SAR image, using the location algorithm outlined in Section 8.2.2.
For a given element in the output grid (l_0, p_0), the fractional pixel location in
the original SAR image (l_0, p_0) is determined by a two-dimensional coordinate
transformation of the output image to the input grid, as described in the previous
section. This transformation provides the target location R_t(0) in the original
image assuming a smooth geoid as shown in Fig. 8.16. The pixel number (l_0, p_0)
uniquely identifies a time t(l_0) and a range R(p_0). This time is used to calculate
the spacecraft position R_s(l_0) from an orbit ephemeris file by polynomial
interpolation. The spacecraft ephemeris is nominally in a geocentric rectangular
coordinate system. For simplicity we assume the coordinate system is rotating
with the x axis at longitude zero (Greenwich meridian), the z axis at grid north,
and the y axis completing the right hand system.

Figure 8.16 Illustration of the geometry producing relief displacement of terrain features in radar
imagery.

The next step is to convert the geodetic latitude and longitude of the target
into this rectangular coordinate system. Given the reference ellipsoid in
Eqn. (8.2.13), the target position R_t(0) can be represented in terms of its
geographical coordinates by (Heiskanen and Moritz, 1967)

x_0 = q cos ζ cos λ                                            (8.3.22a)

y_0 = q cos ζ sin λ                                            (8.3.22b)

z_0 = (R_p^2/R_e^2) q sin ζ                                    (8.3.22c)

where

q = R_e^2 / (R_e^2 cos^2 ζ + R_p^2 sin^2 ζ)^{1/2}              (8.3.23)

and ζ, λ are the geodetic latitude and longitude of the target and R_e, R_p are
the equatorial and polar radii of the DEM reference ellipsoid. Similarly, the
geographic coordinates of a point at an elevation h above the ellipsoid
are given by

x_h = x_0 + h cos ζ cos λ                                      (8.3.24a)

y_h = y_0 + h cos ζ sin λ                                      (8.3.24b)

z_h = z_0 + h sin ζ                                            (8.3.24c)

From the spacecraft position vector R_s(l_0) and the target position vectors R_t(0),
R_t(h), the relative slant range vectors to each target position are given by

R(0) = R_s(l_0) − R_t(0)                                       (8.3.25a)

R(h) = R_s(l_0) − R_t(h)                                       (8.3.25b)

The target at height h relative to the geoid is displaced in range by

Δn_sr = 2 f_s [R(h) − R(0)]/c                                  (8.3.26)

where f_s is the complex sampling frequency, c is the wave propagation velocity,
Δn_sr is the target displacement in slant range pixels, and R(h), R(0) are the
vector magnitudes. Additionally, there is a small azimuth displacement that can
be determined by substituting Eqn. (8.3.25a) and Eqn. (8.3.25b) into the Doppler
equation for R_s − R_t, Eqn. (8.2.9). The relative Doppler shift

Δf_D = f_D(h) − f_D(0)                                         (8.3.27)

can be used to determine the azimuth displacement by

Δn_az = f_p Δf_D / f_R                                         (8.3.28)

where Δn_az is in samples and f_R is the Doppler rate at range R(h). The azimuth
and range pixel numbers of the displaced target, (l'_0, p'_0), are given by the
smooth geoid location (l_0, p_0) offset by Δn_az and Δn_sr respectively. The
radiometric value of the SAR image at (l'_0, p'_0) is determined by a
two-dimensional interpolation. This value is then inserted into the output grid
at location (l_0, p_0).

Geocoding Procedure. The operational procedure for geocoding to a topographic
map is essentially the same as the three step procedure outlined in the
previous section, with two exceptions: (1) A preprocessing step is required to
register the DEM to the SAR image; and (2) the cross-track correction
procedure (i.e., Pass 2, to be described under the next heading in this section)
is modified to account for the relief displacement of the target. An operational
flowchart of this geocoding procedure is presented in Fig. 8.17. The input
ancillary data consists of the spacecraft ephemeris, the radar parameters, and
the correlator processing parameters. The ephemeris update vector ΔR_s is
derived from the preprocessing.

Figure 8.17 Flowchart of the procedure for image geocoding with terrain correction.

Figure 8.18 Flowchart of the preprocessing step to register the SAR image to the DEM.

The preprocessing step to register the SAR image to the DEM can be
performed either by operator tiepointing, to determine the translational error
between the two data sets, or by an automated tiepointing scheme as outlined
in Fig. 8.18. The procedure for the automated tiepointing is as follows. The first
step is to select several small areas of the original image (e.g., 512 × 512 pixels).
The size of this area should be twice the maximum (3σ) location error. For
each area, a grid of pixel locations in geodetic coordinates is generated. From
the DEM, the elevation map for the same ground area is selected, rotated into
the SAR image along-track and cross-track coordinate system, and illuminated
from the SAR imaging geometry (Fig. 8.19). These simulated SAR images,
derived from the DEM, are cross-correlated with the actual image framelets to
determine the image to DEM offset. This offset is used to update the pixel
location map such that it is now registered to the DEM. If geocoding of an
along-track strip of images is to be performed, then the misregistration between
the DEM and the image can be used to update the S/C ephemeris (i.e., ΔR_s).
Only one update is required per several minutes of orbit (10-20 image frames),
depending on the orbital stability of the spacecraft and the precision required.
For low orbiting platforms with large accelerations due to drag, such as the
shuttle at 225 km, more frequent updates may be necessary.
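The target position and slant range displacement calculations of Eqns. (8.3.22)-(8.3.26) are straightforward to mechanize. The sketch below uses WGS-84-like radii and an invented side-looking imaging geometry purely for illustration, with Eqn. (8.3.23) written out as the standard prime vertical radius expression consistent with Eqn. (8.3.22c):

```python
import math

C = 2.998e8                       # wave propagation velocity, m/s
RE, RP = 6378137.0, 6356752.3     # equatorial / polar radii (WGS-84-like)

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def target_position(lat_deg, lon_deg, h):
    """Rectangular target position R_t(h), Eqns. (8.3.22)-(8.3.24)."""
    zeta, lam = math.radians(lat_deg), math.radians(lon_deg)
    q = RE**2 / math.sqrt(RE**2 * math.cos(zeta)**2 + RP**2 * math.sin(zeta)**2)
    x0 = q * math.cos(zeta) * math.cos(lam)
    y0 = q * math.cos(zeta) * math.sin(lam)
    z0 = (RP**2 / RE**2) * q * math.sin(zeta)
    return (x0 + h * math.cos(zeta) * math.cos(lam),
            y0 + h * math.cos(zeta) * math.sin(lam),
            z0 + h * math.sin(zeta))

def slant_range_shift_pixels(sc_pos, lat_deg, lon_deg, h, fs_hz):
    """Slant range pixel displacement of a target at height h, Eqn. (8.3.26)."""
    r0 = dist(sc_pos, target_position(lat_deg, lon_deg, 0.0))
    rh = dist(sc_pos, target_position(lat_deg, lon_deg, h))
    return 2.0 * fs_hz * (rh - r0) / C

# Invented geometry: platform ~790 km up, about one degree of longitude away
sc = target_position(34.0, -117.0, 790.0e3)
dn_sr = slant_range_shift_pixels(sc, 34.0, -118.0, 500.0, 19.0e6)
# dn_sr is negative: a raised target is closer to the radar,
# i.e., displaced toward near range (foreshortening)
```

The sign of the result is the relief displacement the terrain correction removes: elevated targets appear shifted toward the sensor in slant range.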
An example of a terrain corrected image is given in Fig. 8.20. This Seasat
image of Mount Shasta, California, shows a significant foreshortening distortion
due to the steep terrain relative to the Seasat incidence angle. The image in
Fig. 8.20a was geocoded to a smooth ellipsoid for comparison to the terrain
corrected image in Fig. 8.20c. Note that the geometry of the image has been
corrected to remove the side-looking distortion. The image pair in Fig. 8.21 is
from ascending and descending Seasat passes over the same ground area in the
San Gabriel Valley east of Los Angeles, California. Again the original frames
in Figs. 8.21a, b have been geocoded to a smooth ellipsoid for direct comparison
to each other and to the terrain corrected data in Figs. 8.21c, d. When attempting
to relate common points in the two terrain corrected images, recall that they
were imaged from opposite directions such that a shaded area in the descending
scene corresponds to a bright area in the ascending scene.
Radiometric Correction of Images Geocoded to a Topographic Map. Perhaps
the most predominant characteristic of the topographically corrected SAR
images in Figs. 8.20 and 8.21 is that the slopes toward the radar are
Figure 8.21 Comparison of ascending and descending Seasat images near Los Angeles, CA,
geocoded to a smooth ellipsoid, with the same images geocoded to a DEM: (a) Rev. 351, ascending,
smooth; (b) Rev. 660, descending, smooth; (c) Rev. 351, ascending, DEM; and (d) Rev. 660,
descending, DEM.
radiometrically saturated. This arises from the increase in the effective scattering
area of a resolution cell sloped away from the radar (i.e., α+ in Fig. 8.6a). The
ground range resolution is given by

δR_g = c / (2 B_R sin(η - α))        (8.3.29)

where B_R is the range bandwidth, η is the local incidence angle assuming a
smooth geoid, and α is the terrain slope. As α approaches η, the entire sloped
region becomes a single resolution cell and the scattered energy from the entire
slope is integrated, creating very bright pixels.
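As a numerical illustration of Eqn. (8.3.29), the sketch below uses Seasat-like values (a 19 MHz range bandwidth and a 23° incidence angle, chosen here only for illustration) to show how the ground range resolution degrades as the slope approaches the incidence angle:

```python
import math

def ground_range_resolution(B_R, eta_deg, alpha_deg, c=3.0e8):
    """delta_Rg = c / (2 * B_R * sin(eta - alpha)), Eqn. (8.3.29)."""
    return c / (2.0 * B_R * math.sin(math.radians(eta_deg - alpha_deg)))

B_R = 19e6          # Seasat-like range bandwidth, Hz (illustrative)
eta = 23.0          # local incidence angle on a smooth geoid, degrees
for alpha in (0.0, 10.0, 20.0, 22.0):
    dRg = ground_range_resolution(B_R, eta, alpha)
    print(f"slope {alpha:5.1f} deg -> ground range resolution {dRg:8.1f} m")
```

For flat terrain this gives roughly 20 m; at a 22° slope the cell has grown beyond 400 m, which is the integration effect that produces the saturated pixels described above.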
The topographic correction routine used to produce these images replicates
this saturated intensity value at a uniform ground pixel spacing over the distance
comprising the layover region. This results in the smearing effect seen in Figs.
8.20c and 8.21c, d and an increase in the total image power. A more correct
representation of the scattered power (from a unit ground area) would be to
normalize each pixel by the actual resolution cell area, which depends on the
local slope as derived from the DEM. Assuming no radiometric corrections
have previously been applied, the corrected image should first be multiplied by
a factor
g_1 = sin(η - α)        (8.3.30)

to account for the increased (decreased) cell area resulting from a positive
(negative) surface slope.
A second radiometric correction factor that is also incidence angle dependent
is the antenna pattern. Given the polar antenna gain function, G(φ), where φ
is the off-boresight angle relative to the look angle γ, we can project this pattern
onto the ellipsoid. From Eqn. (8.2.5)

(8.3.31)

where R_tb = |R_t(h)| is given by Eqn. (8.3.24) and R_t = |R(h)| is from Eqn.
(8.3.25b). The parameter R_s is the S/C altitude relative to the center of the
ellipsoid and γ is the actual look angle (i.e., antenna electrical boresight relative
to nadir including the platform roll angle). Thus for a given target at some
height, h, the parameters R_t and R_tb are determined from Eqn. (8.3.24) and
Eqn. (8.3.25b). The off-boresight angle in the polar pattern is then calculated
from Eqn. (8.3.31). From this pattern, a second radiometric correction factor
to be applied to the terrain geocoded image is determined

(8.3.32)

where we have assumed the antenna is reciprocal. A final correction factor for
the range attenuation is given by

(8.3.33)
Combining these three corrections and assuming they are applied to the complex
data, then

g_T(h, α) = √(g_1 g_2 g_3)        (8.3.34)

where g_1, g_2, and g_3 are given by Eqn. (8.3.30), Eqn. (8.3.32), and Eqn. (8.3.33),
respectively. Equation (8.3.34) is the relative radiometric correction required to
normalize the received amplitude signal from a target at elevation h on a slope
α relative to the ellipsoid. To date, no system has operationally applied both
the radiometric and geometric topographic corrections to SAR image products
as described in this section.
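A minimal sketch of applying the combined correction of Eqn. (8.3.34) to a complex sample follows, with g_1 computed from Eqn. (8.3.30). Since the bodies of Eqns. (8.3.32) and (8.3.33) define g_2 and g_3, those two factors appear here only as placeholder values; the numbers are hypothetical.

```python
import numpy as np

def slope_area_factor(eta_deg, alpha_deg):
    """g1 = sin(eta - alpha), Eqn. (8.3.30)."""
    return np.sin(np.radians(eta_deg - alpha_deg))

def total_correction(g1, g2, g3):
    """g_T = sqrt(g1 * g2 * g3), Eqn. (8.3.34), applied to complex data."""
    return np.sqrt(g1 * g2 * g3)

eta, alpha = 23.0, 10.0                # incidence angle and slope, degrees
g1 = slope_area_factor(eta, alpha)
g2, g3 = 0.9, 1.1                      # placeholders for Eqns. (8.3.32)/(8.3.33)
pixel = 3.0 + 4.0j                     # one complex image sample
corrected = pixel * total_correction(g1, g2, g3)
```

Because the correction is applied to the complex (amplitude) data, the pixel power is scaled by g_1 g_2 g_3, consistent with g_T being an amplitude factor.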

8.4 IMAGE REGISTRATION

A natural question following from our discussion on geocoding and terrain
correction of the SAR data regards the application of these data products. As
previously discussed, a radiometrically and geometrically terrain corrected
image, in conjunction with an incidence angle map, allows the scientist to
calculate relative values of σ⁰ as a function of incidence angle. In this way, the
relative scattering between two target types could be derived directly from the
geocoded data products. Furthermore, if information on the system radiated
power and/or the receiver gains were available, the absolute σ⁰ could also
be derived from the image data by proper scaling of the image intensity
(see Section 7.3). However, as we described in Chapter 1, the SAR data
interpretation is greatly enhanced when it is combined with other data sets
(i.e., correlative data). This is especially true for data acquired by remote sensors,
such as visible and infrared detectors, that measure the earth's radiation at
distinctly different parts of the electromagnetic spectrum (Elachi, 1987).
The factors that have slowed progress in interpretation of these multisensor
data sets are essentially twofold. First, and perhaps foremost, is that there is at
best a very limited database of synergistic SAR and optical (or infrared)
wavelength data. Secondly, the radiometric and geometric calibration procedures
as described in this chapter have only recently been well understood, and are
just now being mechanized into a set of algorithms that can be implemented on
an operational basis. The application of these techniques to future data sets,
such as those that will be acquired as part of the NASA Earth Observing System
(EOS) program, offers the potential of a wide range of applications. Specifically,
these products are key for developing an understanding of the earth's
environmental processes.
The integration of data from a multitude of sensors presents a number of
challenges in cross-sensor calibration and image registration. Perhaps the most
obvious problem arises from the fact that the SAR is a side-looking instrument
while most optical instruments (including those operating at near infrared and
infrared wavelengths) are nadir looking. To acquire synergistic data, the orbits
must be offset by the SAR cross-track swath distance. Alternatively, the swath
width of the two instruments must be sufficiently wide that they overlap. Neither
of these is a very practical solution, since the radar characterization of target
type may require a specific imaging geometry (e.g., oceanography requires a
steep incidence angle, polarimetric applications a shallow angle). Perhaps the
best solution is to set the platform in a drifting orbit to obtain global coverage
over a period of time and to systematically build a geodetic database that
incorporates the data from each sensor.
The tools we need to generate this multi-sensor, global database are: (1)
geocoding algorithms to map the data into an earth-fixed grid; (2) mosaicking
algorithms to assemble the image frames into a map base; and (3) data fusion
algorithms to precisely register the data from the various sensors to subpixel
accuracy. A key factor in developing such a database is to establish standards


for the geocoded data products to which all instrument processing systems
adhere. In this area, not only is there a lack of consistency among processing
centers handling data from different sensors, but there is often little agreement
across processors for the same sensor. In an effort to solve the problem, a
number of committees have been formed to provide recommendations for
standards in spaceborne data. One group, the Consultative Committee for Space
Data Systems (CCSDS), has dealt mainly with downlink data stream formats.
A second group, the Committee on Earth Observation Satellites (CEOS), has
addressed specifically both optical (Landsat) and SAR data products in terms
of image format and presentation. However, a community consensus has not
been reached on key items such as standards for the ellipsoid, the map projection,
the output image grid spacing, or image framing within the grid. These will be
important topics of discussion for the multi-national working groups being
formed under the EOS program.

8.4.1 Mosaicking

The generation of large scale maps using SAR imagery requires a capability to
assemble multiple image frames or strips into a common grid. These mosaics
could then be cut into standard quadrants and stored in the database according
to a grid structure. One possible convention for selecting these quadrants is the
US Geological Survey map quadrant system. For example, in this system, the
250,000:1 maps have a quadrant on the order of 100-150 km on a side.
Given that the image data from the various sensors has been geocoded to
a standard database, the generation of a large scale mosaic is a relatively simple
process (Kwok et al., 1990). It is analogous to assembling a jigsaw puzzle onto
a template, where in our case the template is a map grid. The analogy is poor
in the sense that the geocoded images do not fit nicely together. Rather, there
is generally an overlap or gap between adjacent frames, and therefore there
needs to be a convention on how to merge these data; specifically, which portion
of the data is to be discarded or how the gaps are to be filled when generating
the image mosaics.
In general, even if the systematic geometric distortions have been properly
corrected in generating the geocoded image products, there remains a random
residual error in registering each frame to the map base. It is therefore necessary
to cross-correlate adjacent image frames (assuming there is sufficient overlap
region) to determine this residual misregistration error. Typically the correlation
is performed over a number of small patches along the overlap region and the
average misregistration is used to correct the offset. The new image is then set
into the grid, replacing the existing image data in the overlap region.
To blend the seams, a feathering process is needed. This procedure consists
of deriving the mean of the image in a small area on either edge of the seam
from a data histogram. An averaging process is applied in this border region
by adjusting the mean using a linear ramp function. Obviously, if a larger
boundary region is selected then the seam transition will be smoother. However,
this empirical correction applied to the image data in the boundary region may
degrade the calibration. Given two adjacent images acquired at different
incidence angles, the data in the overlap region will have a different mean
intensity since the σ⁰ varies as a function of η. The feathering process to blend
the seams adjusts this mean, and therefore degrades the calibration accuracy,
to generate an aesthetically pleasing image product. In principle the effect of
the smoothing can be accounted for in the calibration scale factors; however,
practically it is relatively complex to keep track of these correction parameters.
Therefore, this process should only be performed when generating photoproducts
or video displays for visual interpretation.
An example of a three-frame mosaic using Seasat data covering an area of
geologic interest near Wind River, Wyoming, is shown in Fig. 8.22. The images
were first geocoded to a UTM projection at 12.5 m spacing using USGS 24,000:1
DEM data. The images were radiometrically corrected assuming a smooth
geoid. The individual frames were registered to each other using cross-correlation
and the seams smoothed by feathering the output. A second example of the
mosaicking process is a larger-scale Southern California mosaic as shown in
Fig. 8.23. This image, which is comprised of 33 Seasat geocoded frames, covers
approximately 240,000 km². It is particularly useful for studying the geologic
formations and fault lines in the region. Figure 8.24 is a 32-orbit mosaic of
Venus compiled from data acquired by the Magellan spacecraft. The image
dimension is approximately 500 km on a side. Each image strip comprising the
mosaic is 20 km wide and extends the entire vertical dimension of the image.

Figure 8.22 Mosaic of three Seasat image frames near Wind River, Wyoming, geocoded using a
USGS 24,000:1 DEM.

Figure 8.23 Mosaic of 33 Seasat image frames of Southern California region covering
approximately 240,000 km².

Figure 8.24 Multiorbit mosaic of the "crater farm" region of Venus, centered at 27°S, 339°E.
The largest of the craters shown is 50 km in diameter.
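The linear-ramp feathering described above can be sketched as a weighted blend across the overlap columns. This is a simplified stand-in (it blends pixel values directly rather than adjusting histogram-derived means), and the strip sizes are arbitrary:

```python
import numpy as np

def feather_seam(left, right, overlap):
    """Blend two image strips across `overlap` columns using a linear
    ramp weight: 0 (all left) at the seam start, 1 (all right) at its end."""
    h, wl = left.shape
    ramp = np.linspace(0.0, 1.0, overlap)
    blend = left[:, wl - overlap:] * (1.0 - ramp) + right[:, :overlap] * ramp
    return np.hstack([left[:, :wl - overlap], blend, right[:, overlap:]])

# Two strips with different mean intensities (e.g., different incidence angles).
a = np.full((4, 10), 10.0)
b = np.full((4, 10), 20.0)
out = feather_seam(a, b, overlap=4)
```

The output ramps monotonically from the left-strip mean to the right-strip mean across the seam. As noted above, since this adjustment alters calibrated intensities, it belongs only in photoproducts or visual displays, not in products used for quantitative σ⁰ estimation.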


8.4.2 Multisensor Registration

Given two data sets, such as Seasat SAR and Landsat Thematic Mapper, the
data from each sensor can be geocoded into a common projection (e.g., UTM)
and grid spacing. There remains, however, a residual misregistration between
the two scenes that must be corrected before the pixels can be said to be
coincident. Generally, this registration is a relatively simple process for similar
data sets (e.g., Landsat Band 3 and Band 4), but the SAR image brightness,
which depends on surface roughness and dielectric constant, may not correlate
with the optical image brightness, which depends on the reflectance (i.e., chemical
composition) of the surface.
A good example of this discrepancy is shown in Fig. 8.25. Figure 8.25a is a
geocoded Seasat image without terrain correction, while Fig. 8.25b is a Landsat
Band 4 image. Both cover approximately the same 75 x 75 km ground area
near Yuma, Arizona. In the upper region of the image pair there is essentially
a radiometric reversal in the relative brightness. In this area, the ground cover
is a bare, sandy soil, which to the SAR is a low backscatter target, while to the
Landsat Band 4 detectors this region appears very bright. Also notable is the
detailed terrain information in the Seasat image (lower right) as compared to
the Landsat data. A third distinct difference is the grainy appearance of the
SAR image resulting from the speckle noise. This image pair clearly demonstrates
that conventional cross-correlation techniques are not sufficient to register the
two images to subpixel accuracy.
A more rigorous approach to the image registration problem is to extract
some feature or set of features that is known to be invariant across the data
sets. The traditional approach to extracting this feature set is to manually select
a set of tiepoints that are common across the multisensor data set. These
common points are then used as input to a polynomial warping routine
to correct the misregistration (Siedman, 1977). We previously described in
Section 8.3.4 how the SAR image could be precisely registered to a DEM by
illuminating the map from the SAR imaging geometry. A similar procedure can
be applied to the Landsat data as shown in Fig. 8.26. In this case, the DEM
is illuminated from the same sun angle as the Landsat image to obtain the
correct shadowing effect. The Landsat image is then cross-correlated with the
illuminated DEM to determine the residual translational misregistration. If this
technique is used, then both the SAR and the Landsat images are registered to
a common map base (e.g., the USGS 24,000: 1 DEM), and therefore they are
also coregistered.
The technique described above works well in high relief areas where the
DEM data can provide a common reference. However, given a global data set,
most of the data is either relatively flat terrain (or ocean), or there are no
precision DEMs available. For these data, an alternative image registration
approach is required. A number of candidate image processing techniques for
both the feature extraction and matching can be found in the literature. Many
of these techniques, although originally intended for other applications, can be

Figure 8.25 Comparison of (a) a geocoded Seasat image without terrain correction and (b) a
Landsat Band 4 image of approximately the same 75 × 75 km ground area near Yuma, Arizona.

Figure 8.27 Flowchart of multisensor registration algorithm. Geocoded products pass through
patch pre-selection, then segmentation (edges, region boundaries, principal components), then
matching of sub-patches (chamfer matching, binary correlation, dynamic programming).

Figure 8.26 Comparison of Landsat image framelets with simulated imagery from DEM images:
(a) and (c) are simulated images; (b) and (d) are Landsat data.

used for multisensor registration. Fig. 8.27 shows a generalized flow for a
multisensor registration algorithm, where a number of techniques are made
available at each stage of the processing to accommodate the variety of sensor
and target types, as well as varying environmental conditions. A report
evaluating this approach to multisensor registration, including a number of
candidate algorithms, has been published by Kwok et al. (1989).
Consider, for example, the image pair presented in Fig. 8.25. A simple
cross-correlation would yield a very weak correlation peak (or peak-to-mean
ratio) in the region of the sand dunes, as a result of the dramatic radiometric
difference between the two images. A better approach would be to extract

features that are invariant across the two scenes. Three candidate techniques
are: (1) edge operators; (2) statistical analysis using the stationarity properties
of local regions; and (3) principal component analysis. In the remainder of this
section we will address the edge operators in some detail.
There is a large body of literature on the subject of edge detection. However,
in almost all cases only optical image data are considered. For the SAR imagery,
since it is corrupted by speckle noise, techniques based on the first and second
order directional derivatives (e.g., Sobel or Roberts operators) will perform
poorly. This is especially true in terms of localization of the edges, since these
operators produce large responses in the edge region. Similar performance
limitations are characteristic of statistical edge operators such as those proposed
by Frost et al. (1982) and Touzi et al. (1988). An alternative procedure, using
a two-dimensional smoothing operator such as a Marr-Hildreth operator
(Marr and Hildreth, 1980) or a Canny edge detector (Canny, 1983, 1986), exhibits
significantly improved localization and edge detection performance relative to
the derivative and statistical operators.
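A minimal, numpy-only sketch of a Marr-Hildreth style detector (Gaussian smoothing followed by zero crossings of the Laplacian, with weak crossings suppressed) is given below. The kernel size, padding, and threshold convention are implementation choices for this illustration, not prescriptions from the text:

```python
import numpy as np

def gaussian_kernel(sigma):
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def smooth(img, sigma):
    """Separable Gaussian smoothing with edge padding."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="valid"), 0, tmp)

def marr_hildreth_edges(img, sigma, thresh=0.1):
    """Edges as zero crossings of the Laplacian of the smoothed image;
    crossings weaker than `thresh` of the peak response are suppressed."""
    s = smooth(img, sigma)
    sp = np.pad(s, 1, mode="edge")
    lap = sp[:-2, 1:-1] + sp[2:, 1:-1] + sp[1:-1, :-2] + sp[1:-1, 2:] - 4.0 * s
    t = thresh * np.abs(lap).max()
    pos, neg = lap > t, lap < -t
    zc = np.zeros(img.shape, dtype=bool)
    zc[:-1, :] |= (pos[:-1, :] & neg[1:, :]) | (neg[:-1, :] & pos[1:, :])
    zc[:, :-1] |= (pos[:, :-1] & neg[:, 1:]) | (neg[:, :-1] & pos[:, 1:])
    return zc

# A vertical intensity step should yield a single, well-localized edge line.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
edges = marr_hildreth_edges(img, sigma=1.5)
```

Increasing sigma trades localization for suppression of speckle-induced spurious edges, which is exactly the filter-width adjustment discussed for the Seasat data in Fig. 8.29.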
An example of a Canny edge detector as applied to Seasat, Landsat TM,
and SPOT images is shown in Fig. 8.28. This region, the Altamaha River, GA,
shows a variety of target types (rivers, fields, roads, etc.). The Seasat image,
acquired in July, 1978, has a significantly greater number of detected edges,
primarily due to the statistical characteristics of the original image. The SPOT
(Band 3) and the Landsat (Band 4), both acquired in July, 1984, are markedly


similar, although there are textural differences in the images that give rise to
some dissimilar lines. Perhaps the key point demonstrated by this example is
that the matching routines must be adaptive, to optimize their performance for
a given set of data and imaging conditions. For example, the width of the Seasat
edge operator could be increased to reduce the number of spurious edges as
compared to the optical data. An example of the effects of varying the spatial
filter width is given in Fig. 8.29. In fact, the matching routine may require an
iterative procedure in which, for each pass, the filter parameters would be
adjusted until some cross-image similarity criterion is satisfied.
Figure 8.29 Effect of variation in spatial filter width parameter σ in Canny edge detector for SAR
image of Altamaha River, GA: (a) original image (512 × 512 pixels); (b) edge image with σ = 1
pixel; (c) edge image with σ = 2 pixels; (d) edge image with σ = 4 pixels.


Given that some invariant feature(s) have been extracted, a matching
procedure is required to derive the translational and rotational offsets between
these features, as shown in the flowchart of Fig. 8.27. Several possible procedures
are identified, such as: (1) binary cross-correlation (Davis and Kenue, 1978);
(2) distance transform or chamfer matching (Barrow et al., 1978); and (3)
dynamic programming using an autoregressive model (Maitre and Wu, 1989).
The key metric to be considered when selecting a matching procedure is the
robustness of the procedure, given that there are some dissimilar features across
the image set. Additionally, the matching routine should be capable of detecting
some amount of residual rotation between the patches. For example, binary
cross-correlation is relatively insensitive to the presence of extraneous edges in
the Seasat image of Fig. 8.28a when matching it to the optical images of
Fig. 8.28b and Fig. 8.28c. However, a small rotation between the two images
(i.e., <0.5°) will decorrelate the image pair. Thus, the matching procedure must
consist of a series of image rotations given a candidate set of angles about the
nominal (zero rotation) angle. However, binary cross-correlation can be made
more robust to small rotations by thickening the edge lines (Wong, 1978). For
example, a three pixel wide edge will generally tolerate a rotation of 1.5° to
2.0° without significant decorrelation.
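A sketch of binary cross-correlation of edge maps, including the edge-thickening (dilation) step suggested by Wong (1978), follows. The search range and the synthetic edge maps are hypothetical:

```python
import numpy as np

def thicken(edges, width=1):
    """Dilate a binary edge map by `width` pixels (4-neighbor growth),
    making the subsequent correlation tolerant of small rotations."""
    out = edges.copy()
    for _ in range(width):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]
        grown[:-1, :] |= out[1:, :]
        grown[:, 1:] |= out[:, :-1]
        grown[:, :-1] |= out[:, 1:]
        out = grown
    return out

def binary_correlation(a, b, max_shift=3):
    """Count coincident edge pixels over candidate (dr, dc) shifts of b;
    return the best shift and its score."""
    best, best_shift = -1, (0, 0)
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(b, dr, axis=0), dc, axis=1)
            score = int(np.count_nonzero(a & shifted))
            if score > best:
                best, best_shift = score, (dr, dc)
    return best_shift, best

# Hypothetical edge maps: b is a copy of a translated by (1, 2).
a = np.zeros((32, 32), dtype=bool)
a[8, 4:28] = True          # horizontal line
a[4:28, 20] = True         # vertical line
b = np.roll(np.roll(a, -1, axis=0), -2, axis=1)
shift, score = binary_correlation(a, b)
thick = thicken(a)         # wider lines tolerate roughly 1-2 px of mismatch
```

The peak of the coincidence count recovers the translation; residual rotation would be handled, as described above, by repeating the correlation over a candidate set of rotation angles.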
An alternative technique to binary cross-correlation is the distance transform,
in which the edge map is converted to a grey level image according to the
pixels' distance from an edge. This is illustrated in Fig. 8.30 for Seasat and
Landsat TM images of Wind River Basin, Wyoming. In comparing the binary
cross-correlation technique to the distance transform (chamfer) matching, the
general conclusion is that chamfer matching is less sensitive to rotational offsets
but more sensitive to the existence of extraneous edges.
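The distance transform can be sketched with the classical two-pass chamfer algorithm (3-4 weights); matching then scores a candidate alignment by the mean distance at the template's edge pixels, lower being better. The weights and scoring convention here are one common choice, not necessarily those used to produce Fig. 8.30:

```python
import numpy as np

def distance_transform(edges):
    """Two-pass chamfer distance transform: each pixel receives the
    approximate distance to the nearest edge pixel (3-4 weights)."""
    big = 10 ** 6
    d = np.where(edges, 0, big).astype(np.int64)
    h, w = d.shape
    for r in range(h):                 # forward pass: top-left to bottom-right
        for c in range(w):
            if r > 0:
                d[r, c] = min(d[r, c], d[r - 1, c] + 3)
                if c > 0:
                    d[r, c] = min(d[r, c], d[r - 1, c - 1] + 4)
                if c < w - 1:
                    d[r, c] = min(d[r, c], d[r - 1, c + 1] + 4)
            if c > 0:
                d[r, c] = min(d[r, c], d[r, c - 1] + 3)
    for r in range(h - 1, -1, -1):     # backward pass: bottom-right to top-left
        for c in range(w - 1, -1, -1):
            if r < h - 1:
                d[r, c] = min(d[r, c], d[r + 1, c] + 3)
                if c > 0:
                    d[r, c] = min(d[r, c], d[r + 1, c - 1] + 4)
                if c < w - 1:
                    d[r, c] = min(d[r, c], d[r + 1, c + 1] + 4)
            if c < w - 1:
                d[r, c] = min(d[r, c], d[r, c + 1] + 3)
    return d

def chamfer_score(dist, template_edges):
    """Mean chamfer distance at the template's edge pixels; lower is better."""
    return dist[template_edges].mean()

# Single edge pixel: distances grow by 3 per straight step, 4 per diagonal.
edges = np.zeros((11, 11), dtype=bool)
edges[5, 5] = True
d = distance_transform(edges)
s = chamfer_score(d, np.roll(edges, 1, axis=1))  # template offset by one column
```

Because the score varies smoothly with misalignment, small rotations degrade it gradually, which is why chamfer matching is less rotation-sensitive than binary correlation, while extraneous edges in the template inflate the mean distance.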
The dynamic programming technique is a relatively new approach that has
not yet been tested using SAR data. It essentially uses an autoregressive model
to register severely distorted images to a map base without any a priori
information about the distortion. The model is used to define the deformation
at the pixel scale. Dynamic programming then optimizes the search for the best
registration of an ordered sequence of primitives. These primitives could be
edges, or another type of cross-image invariant feature. There are a number of
unique features to this matching process that may well lead to its selection as
the optimal solution for many of the matching problems. Maitre and Wu (1989)
have demonstrated the approach using optical data with remarkably good
results.
In summary, multisensor registration is the final stage of the Level 1B
processing. It utilizes the output of the geometric correction/geocoding routines
described earlier in this chapter to perform registration of data sets from
distinctly different portions of the electromagnetic spectrum. This in turn leads
to a more detailed description of the surface features, which can then be used
to model the change processes or simply to survey the current land use. Perhaps
the most challenging aspect of this problem is the wide variety of data types
resulting from different sensors, target types, and environmental conditions. It
is clear that no fixed procedure or set of procedures will satisfy all the matching
requirements. This task is perhaps a good candidate for an artificial intelligence,
rule based approach for selecting the optimal algorithm and determining its
parameters. Furthermore, the computational load for some of the more
complex algorithms may mandate a distributed (massively parallel) processing
architecture or a neural network type implementation.
Independent of the final system design selected to perform the multisensor
registration task, the payoff in developing a capability to routinely generate
registered multilevel data sets will be far reaching. These products are crucial
for presenting the data in a format allowing derivation of the geophysical
parameter information, which in turn is used to drive large scale models of
the earth's global processes.

8.5 SUMMARY

This chapter completes our discussion of the SAR image calibration and
the correction algorithms. In Chapter 7 we presented the techniques for
characterizing the radar system transfer function using both internal and
external calibration devices. Throughout Chapter 7 we assumed a smooth geoid
in order to concentrate on the issues associated with radiometric calibration.
The problem of geometric distortion in the SAR imagery was presented in
Chapter 8. We initially reviewed the basic definitions of the geometric calibration
terms and introduced a set of parameters to characterize the image accuracy.
This discussion was followed by an error analysis to identify the key sources
of geometric distortion and target location error. However, the bulk of the
chapter was dedicated to the geometric correction algorithms.
We presented automated techniques to map the natural SAR correlator
output image (i.e., without resampling) into a rectified format (uniform pixel
spacings), either in the SAR azimuth/range grid or into a standard map grid.
Much of the discussion was centered around the techniques to perform the
image rotation and to compensate for the terrain effects. We presented a
three-pass resampling technique that requires only one-dimensional resampling
operations. We proposed a technique to correct for the terrain induced distortion
during the second pass. Specific equations were presented to calculate the pixel
displacement as well as the radiometric correction factors resulting from the
local relief.
The chapter concluded with a discussion of an application of geocoded
imagery to multiframe image mosaicking and multisensor image registration.
A number of examples were presented from Seasat SAR and Landsat TM data
sets to illustrate the pros and cons of the various algorithms. We compared the
performance of a number of edge detectors for matching, concluding principally
that much work remains to be done in the area of multisensor image registration.
A final point is that we are only now beginning to mechanize these radiometric
and geometric algorithms in terms of making them a part of the automated
processing operations. Assuming that calibrated products become the standard
in the near future, our next big challenge is to merge the SAR data with other
non-SAR imagery as a preprocessing stage for geophysical data analysis.
Considering the effect of scene content and environmental conditions on the
statistics of the image data, this may be an extremely complex task to fully
automate. In this area, perhaps the best approach is to use some rule based
expert system to evaluate the data characteristics and then select the optimum
technique for matching.
REFERENCES
Barrow, H. G., J. M. Tenenbaum, R. Bolles and H. C. Wolf (1978). "Parametric
Correspondence and Chamfer Matching: Two New Techniques for Image Matching,"
Proc. DARPA Image Understanding Workshop, pp. 659-663.
Brookner, E. (1985). "Pulse-Distortion and Faraday-Rotation Ionospheric Limitations,"
Chapter 14 in E. Brookner (ed.), Radar Technology, Artech House, Inc., Dedham,
MA, pp. 201-211.
Canny, J. F. (1983). "Finding Edges and Lines in Images," MIT Tech. Report AI-TR-720,
Artificial Intelligence Laboratory, Mass. Inst. Tech., Cambridge, MA.
Canny, J. F. (1986). "A Computational Approach to Edge Detection," IEEE Trans.
Pattern Anal. and Mach. Intell., PAMI-8, pp. 679-698.
Curlander, J. C. (1982). "Location of Spaceborne SAR Imagery," IEEE Trans. Geosci.
and Remote Sensing, GE-20, pp. 359-364.
Curlander, J. C. (1984). "Utilization of SAR Data for Mapping," IEEE Trans. Geosci.
and Remote Sensing, GE-22, pp. 106-112.
Curlander, J. C., R. Kwok and S. S. Pang (1987). "A Post-Processing System for
Automated Rectification and Registration of Spaceborne SAR Imagery," Int. J. Rem.
Sens., 8, pp. 621-638.
Davis, W. A. and S. K. Kenue (1978). "Automatic Selection of Control Points for the
Registration of Digital Images," Proc. 4th Inter. Joint Conf. on Pattern Recognition,
Kyoto, Japan, pp. 936-938.
Domik, G., F. Leberl and J. Cimino (1986). "Multiple Incidence Angle SIR-B Experiment
over Argentina: Generation of Secondary Image Products," IEEE Trans. Geosci. and
Remote Sensing, GE-24, pp. 492-498.
Elachi, C. (1987). Introduction to the Physics and Techniques of Remote Sensing, Wiley,
New York.
Friedman, D. E. (1981). "Operational Resampling of Corrected Images to a Geocoded
Format," 15th Inter. Symp. on Remote Sens. Envir., Ann Arbor, MI, p. 195 et seq.
Frost, V. S., K. S. Shanmugan and J. C. Holtzman (1982). "Edge Detector for Synthetic
Aperture Radar and Other Noisy Images," IGARSS '82 Digest, FA-2, pp. 4.1-4.9.
Graf, C. (1988). "Map Projections for SAR Geocoding," Tech. Report ERS-D-TN-22910,
DLR, Oberpfaffenhofen, Germany.
Heiskanen, W. A. and H. Moritz (1967). Physical Geodesy, W. H. Freeman, San Francisco,
CA.
Kropatsch, W. and D. Strobl (1990). "The Generation of SAR Layover and Shadow
Maps from Digital Elevation Models," IEEE Trans. Geosci. and Remote Sensing,
GE-28, pp. 98-107.


Kwok, R., J. Curlander and S. S. Pang (1987). "Rectification of Terrain Induced
Distortion in Radar Imagery," Photogram. Eng. and Rem. Sens., 53, pp. 507-513.
Kwok, R., E. Rignot, J. C. Curlander and S. Pang (1989). "Multisensor Image
Registration: A Progress Report," JPL Tech. Doc. D-6697, Jet Propulsion Laboratory,
Pasadena, CA.
Kwok, R., J. Curlander and S. S. Pang (1990). "An Automated System for Mosaicking
Spaceborne SAR Imagery," Inter. J. Remote Sensing, 11, pp. 507-513.
Leberl, F., G. Domik, J. Raggam, J. B. Cimino and M. Kobrick (1986). "Radar
Stereomapping Techniques and Application to SIR-B Images of Mt. Shasta," IEEE
Trans. Geosci. and Remote Sensing, GE-24, pp. 473-481.
Lewis, A. J. and H. C. MacDonald (1970). "Interpretive and Mosaicking Problems of
SLAR Imagery," Remote Sensing of the Environment, 1, pp. 231-237.
Madsen, S. (1986). "Speckle Theory, Modeling, Analysis and Applications Related to
Synthetic Aperture Radar Data," Ph.D. Thesis, Technical University of Denmark,
Lyngby.
Maitre, H. and Y. Wu (1989). "Dynamic Programming Algorithm for Elastic Registration
of Distorted Pictures Based on Auto-regressive Models," IEEE Trans. Acoust. Speech
Sig. Proc., ASSP-37, pp. 288-297.
Marr, D. and E. Hildreth (1980). "Theory of Edge Detection," Proc. R. Soc. Lond. B,
290, pp. 199-218.
Marr, D. (1982). Vision, W. H. Freeman, San Francisco, CA.
Naraghi, M., W. Stromberg and M. Daily (1983). "Geometric Rectification of Radar
Imagery Using Digital Terrain Models," Photogram. Eng. and Rem. Sens., 49,
pp. 195-199.
Petersen, D. P. and D. Middleton (1962). "Sampling and Reconstruction of Wavenumber
Limited Functions in N-Dimensional Euclidean Spaces," Inf. Control, 5, pp. 279-323.
Quegan, S. (1989). "Interpolation and Sampling in SAR Images," IGARSS '89 Symposium,
Vancouver, BC, Canada, pp. 612-616.
Ramapriyan, H. K., J. P. Strong, Y. Hung and C. W. Murray, Jr. (1986). "Automated
Matching of Pairs of SIR-B Images for Elevation Mapping," IEEE Trans. Geosci. and
Remote Sensing, GE-24, pp. 462-472.
Siedman, J. B. (1977). "VICAR Image Processing Systems, Guide to System Use," JPL
Technical Document 77-37, Jet Propulsion Laboratory, Pasadena, CA.
Snyder, J. P. (1983). "Map Projections used by the U.S. Geological Survey," U.S.
Geological Survey Bulletin 1532, Washington, DC.
TOPEX Science Working Group (1981). "Satellite Altimetric Measurements of the
Ocean," JPL Tech. Rep. 400-111, Jet Propulsion Laboratory, Pasadena, CA.
Touzi, R., A. Lopes and P. Bousquet (1988). "A Statistical and Geometrical Edge Detector
of SAR Images," IEEE Trans. Geosci. and Remote Sensing, GE-26, pp. 826-831.
Wagner, C. A. and F. J. Lerch (1977). "Improvement in the Geopotential Derived from
Satellite and Surface Data," J. Geophys. Res., 82, pp. 901-906.
Wong, Y. R. (1978). "Sequential Scene Matching Using Edge Features," IEEE Trans.
Aero. Elec. Syst., AES-14, pp. 128-140.
Zebker, H. and R. Goldstein (1986). "Topographic Mapping from Interferometric
Synthetic Aperture Radar Observations," J. Geophys. Res., 91, pp. 4993-4999.

9
THE SAR GROUND SYSTEM

In Chapter 6 we presented the end-to-end SAR data system as consisting of
three major subsystems (Fig. 6.1): (1) The radar sensor; (2) The communications
downlink; and (3) The ground processor. The first two subsystems were
addressed in detail in that chapter. Emphasis was placed on the overall system
impact of the sensor and link performance in terms of their effects on the data
quality. In this chapter we address the third major subsystem, the ground
processor. As in our treatment of the flight segment, we will emphasize the
spaceborne SAR application, assuming a single channel (one frequency, one
polarization) mode of operation. Our treatment of the ground processor will
again be from a systems perspective. That is, given the design and performance
characteristics of the sensor and data downlink, in conjunction with the image
quality, data throughput, and data product specifications, we analyze the various
design options for implementing each element of the ground data system.
Specifically, this chapter addresses the computational complexity of several
SAR correlation algorithms (such as time domain convolution, frequency
domain fast convolution, and spectral analysis) that are commonly used in the
Level 1A processor. The rationale for selection of a particular algorithm given
the system requirements will be discussed, as well as the architectural
considerations in the implementation. Additional design considerations will be
discussed for each correlator architecture, such as:

1. Process control;
2. Data management;
3. Flexibility, evolvability; and
4. Reliability, maintainability.

SAR Correlator Design

The design process for the SAR correlator generally consists of the following
steps:
1. Definition of the processor requirements;
2. Algorithm selection, and evaluation of the computational loading and output image quality;
3. Survey of candidate signal processor architectures and the available technology;
4. Evaluation of cost vs. performance, including both implementation costs (hardware, software) and sustaining costs (maintenance, upgrades); and
5. Architecture selection and detailed design.
We should note that the design process outlined above is not necessarily serial,
in that the selection of a particular algorithm or architecture in Steps 2 or 3
may not be feasible once the costs are evaluated. The performance requirements
may conflict with the available resources or technology, requiring some descope.
In Section 9.1, we address the requirements definition of Step 1 in detail;
Section 9.2 then addresses the algorithm selection and loading analysis of
Step 2; and in Section 9.3 we present various candidate architectures with their
performance versus cost trade-offs.
Following the SAR correlator discussion, the design options and practical
constraints for the Level 1B processor will be presented. This processor performs
the radiometric and geometric corrections required for production of calibrated
image products. Considerations relating to the throughput performance, storage
and access of ancillary data (e.g., digital terrain maps), and the data product
formats will be discussed. The chapter concludes with a section on browse data
generation and specifically on image data compression techniques. A complexity
analysis of several lossy spatial compression algorithms is presented, along with
a queueing analysis to determine the required compression ratio.

9.1 CORRELATOR REQUIREMENTS DEFINITION

Prior to evaluation of the candidate architectures and algorithms, the basic


processor system requirements must be established. These are derived from the
sensor and platform design and performance characteristics, as well as from the
user product requirements. The basic radar and platform parameters used in
the processor design are listed in Tables 9.1 and 9.2, respectively. Table 9.3 is a
list of the output specifications required for the correlator requirements analysis.
A number of detailed specifications have been excluded from these lists for
brevity.


TABLE 9.1 List of Radar Parameters Required for Correlator Design

Antenna dimensions (L_a, W_a)
Number of data samples per echo line (N_s)
Bits per sample (n_b), sampling frequency (f_s)
Radar frequency (f_c), polarization
Pulse: bandwidth (B_R), duration (τ_p), coding scheme
Look angle (γ)
Pulse repetition frequency (f_p)

TABLE 9.2 List of Platform Parameters Required for Correlator Design

Inclination angle (α_i)
Orbital altitude (H)
Position determination accuracy (σ_x, σ_y, σ_z)
Velocity determination accuracy (σ_ẋ, σ_ẏ, σ_ż)
Attitude determination accuracy (σ_r, σ_y, σ_p)
Attitude rate determination accuracy (σ_ṙ, σ_ẏ, σ_ṗ)
Bit error rate (P_B)

TABLE 9.3 List of Output Specifications Required for Correlator Design*

Throughput: peak and sustained rates
Data product types/formats
Image quality
    azimuth and range ambiguity to signal ratios (AASR, RASR)
    azimuth and range resolutions (δx, δR)
    integrated sidelobe ratio (ISLR)
    peak sidelobe ratio (PSLR)
Geometric fidelity
    location, orientation accuracy
    scale, skew error
Radiometric fidelity
    relative, absolute accuracy

*It is assumed that geometric and radiometric calibration are
performed in the post-processor following image correlation.
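The parameter lists of Tables 9.1 and 9.2 map naturally onto simple data records that the correlator design analyses below can draw from. The sketch is illustrative only: the field names, and the sample values used in the usage line, are our own shorthand, not values from the text.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RadarParams:
    """Radar parameters of Table 9.1 (field names are our own shorthand)."""
    antenna_length_m: float        # L_a
    antenna_width_m: float         # W_a
    samples_per_echo: int          # N_s
    bits_per_sample: int           # n_b
    sampling_freq_hz: float        # f_s
    radar_freq_hz: float           # f_c
    pulse_bandwidth_hz: float      # B_R
    pulse_duration_s: float        # tau_p
    look_angle_deg: float          # gamma
    prf_hz: float                  # f_p

@dataclass
class PlatformParams:
    """Platform parameters of Table 9.2; accuracies are (x, y, z) 1-sigma values."""
    inclination_deg: float
    altitude_m: float
    position_accuracy_m: Tuple[float, float, float]
    velocity_accuracy_mps: Tuple[float, float, float]
    attitude_accuracy_deg: Tuple[float, float, float]        # roll, yaw, pitch
    attitude_rate_accuracy_dps: Tuple[float, float, float]
    bit_error_rate: float
```

For example, `RadarParams(10.7, 2.16, 6840, 5, 22.76e6, 1.275e9, 19e6, 33.8e-6, 20.0, 1647.0)` captures an L-band strip-mapping configuration in one record that the sizing formulas of Sections 9.1.1-9.1.3 can consume.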


9.1.1 Doppler Parameter Analysis

The extreme bounds for the Doppler centroid, f_DC, and the Doppler rate, f_R, must
first be established. This includes the limiting values that each parameter can
assume, as well as the maximum rate of change in both the along- and cross-track
dimensions. The rate of change of the Doppler parameters in the along-track
direction becomes critically important in selection of the correlation algorithm
since, for the frequency domain fast convolution technique, there is an inherent
assumption that the Doppler parameters are constant over the synthetic aperture
period. These parameters can be expressed in terms of the relative sensor to
target position and velocity vectors as follows:

f_{DC} = \frac{2}{\lambda R}\, \mathbf{V}_{st} \cdot \mathbf{R}_{st}    (9.1.1)

where \mathbf{R}_{st} = \mathbf{R}_s - \mathbf{R}_t and \mathbf{V}_{st} = \mathbf{V}_s - \mathbf{V}_t are the relative sensor-to-target position
and velocity vectors, respectively, and R = |\mathbf{R}_{st}| is the slant range distance. An
approximate target position and velocity can be determined from the spacecraft
attitude and ephemeris data by

\mathbf{R}_t = \mathbf{R}_s + R\,\hat{\mathbf{b}}    (9.1.2)

\mathbf{V}_t = \boldsymbol{\omega}_e \times \mathbf{R}_t    (9.1.3)

where \boldsymbol{\omega}_e is the earth rotational velocity and \hat{\mathbf{b}} is the attitude-adjusted boresight
unit vector, given by an ordered rotation of the nominal zero attitude antenna
boresight according to the measured roll, yaw, and pitch angles in the platform
attitude determination record.

The Doppler rate is given by

f_R = -\frac{2}{\lambda R}\left( |\mathbf{V}_{st}|^2 + \mathbf{A}_{st} \cdot \mathbf{R}_{st} \right)    (9.1.4)

where \mathbf{A}_{st} = \mathbf{A}_s - \mathbf{A}_t is the relative acceleration of the platform. Typically, it is
assumed that \mathbf{A}_s is simply the acceleration due to gravity (although S/C drag
may also be considered), and

\mathbf{A}_t = \boldsymbol{\omega}_e \times (\boldsymbol{\omega}_e \times \mathbf{R}_t)    (9.1.5)

The second term in Eqn. (9.1.4) is a small contributor to the Doppler rate
(<10%) as compared to the first term.

Given the expressions in Eqn. (9.1.1) and Eqn. (9.1.4) for f_DC and f_R in terms
of the orbital parameters, the nominal Doppler parameter bounds and maximum
rates of change can be evaluated by simulating an orbit of the platform, assuming
some sinusoidal variation for the attitude parameters according to the platform
control specifications. The magnitude of the attitude variation is given by the
attitude control error, while the variation period is derived from the attitude
rate. This analysis should be performed for both the minimum and maximum
look angles and for the yaw and pitch, both in phase and 180° out of phase.
The output will provide the Doppler bounds f_DC^max and f_R^max in each of the
along-track and cross-track dimensions. An example of the resulting plots for
these parameters, using the SIR-C C-band characteristics, is given in Fig. 9.1.

[Figure 9.1 Plot of f_DC and f_R for the SIR-C C-band SAR at worst case attitude (yaw = 1.4°, pitch = −1.8°) as a function of slant range for two orbit inclinations (57°, 90°).]

The f_DC^max and f_R^min are used to determine the maximum range walk and
range curvature according to

N_{RW} = \frac{\lambda f_s f_{DC}^{max} B_p}{c\, f_R^{min}}    samples    (9.1.6)

N_{RC} = \frac{\lambda f_s B_p^2}{8 c\, f_R^{min}}    samples    (9.1.7)
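As a concrete illustration of Eqn. (9.1.1) and the two-term form of Eqn. (9.1.4), the Doppler centroid and rate can be evaluated directly from relative state vectors. This is a sketch only: the function name is our own, the earth-centered frame and SI units are assumptions, and the toy state vectors in the usage note are not mission values.

```python
import numpy as np

def doppler_params(R_s, V_s, A_s, R_t, V_t, A_t, lam):
    """Doppler centroid (Eqn. 9.1.1) and two-term Doppler rate (Eqn. 9.1.4).

    All arguments are 3-vectors in a common earth-centered frame (m, m/s,
    m/s^2); lam is the radar wavelength in meters.
    """
    R_st = R_s - R_t                 # relative position, R_s - R_t
    V_st = V_s - V_t                 # relative velocity, V_s - V_t
    A_st = A_s - A_t                 # relative acceleration, A_s - A_t
    R = np.linalg.norm(R_st)         # slant range |R_st|
    f_dc = (2.0 / (lam * R)) * np.dot(V_st, R_st)
    f_r = -(2.0 / (lam * R)) * (np.dot(V_st, V_st) + np.dot(A_st, R_st))
    return f_dc, f_r
```

For a zero-squint side-looking geometry (V_st perpendicular to R_st) the centroid vanishes, and the Doppler rate is negative, consistent with the quadratic phase history of a broadside target.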

where B_p is the processor azimuth spectral bandwidth, λ is the radar wavelength,
and f_s is the complex sampling frequency. These values, in turn, set the
requirement for the cross-track dimension of the range cell migration memory.
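The two migration sample counts are simple closed forms. The sketch below assumes the expressions N_RW = λ f_s f_DC^max B_p / (c f_R^min) and N_RC = λ f_s B_p² / (8 c f_R^min) for Eqn. (9.1.6) and Eqn. (9.1.7), with all inputs in SI units; the function names are our own.

```python
C_LIGHT = 2.998e8   # speed of light (m/s)

def range_walk_samples(lam, f_s, f_dc_max, B_p, f_r_min):
    # Eqn. (9.1.6): range walk across the processed aperture, in range samples
    return lam * f_s * abs(f_dc_max) * B_p / (C_LIGHT * abs(f_r_min))

def range_curvature_samples(lam, f_s, B_p, f_r_min):
    # Eqn. (9.1.7): range curvature, in range samples
    return lam * f_s * B_p ** 2 / (8.0 * C_LIGHT * abs(f_r_min))
```

With L-band numbers (λ = 0.235 m, f_s = 22.76 MHz), a kilohertz-scale Doppler centroid already produces a walk of tens of range samples, which is what sizes the cross-track dimension of the range cell migration memory.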
The value N_RW also establishes the requirement for secondary range
compression (Jin and Wu, 1984). This processing step (Section 4.2.4), which
is usually combined with range compression, compensates for the additional target
dispersion occurring at high squint angles. It results from errors in the
approximation of the two-dimensional reference function as two one-dimensional
functions. The criterion for application of the secondary range compression, as
given by Jin and Wu (1984), is that (Eqn. (4.2.59))

(9.1.8)

where TB, the time bandwidth product, is given by

TB = B_p \tau_c    (9.1.9)

and τ_c is the coherent integration time. Any imaging mode (i.e., combination
of look angle, latitude, and squint angle) that produces a Doppler centroid
resulting in a range walk that satisfies Eqn. (9.1.8) requires secondary range
compression to meet nominal performance specifications.
Doppler Drift Rates

The change in Doppler parameters as a function of both along- and cross-track
position establishes the need for reference function updates to meet the matched
filter accuracy requirements. The parameter typically specified for f_R is the
maximum quadratic (or higher order) phase error at the edge of the synthetic
aperture. For f_DC it is the fractional error between the true Doppler centroid
and the reference function centroid at the aperture edge. A typical number for
the allowable quadratic phase error resulting from f_R estimation error is π/4
(i.e., φ_q = 45°). Errors of this magnitude produce very little degradation in the
impulse response function (Fig. 9.2).

[Figure 9.2 Effect of quadratic phase error, φ_q, on the point target response function.]

Assuming that no weighting is applied to
the reference function, the effect of allowing a phase error of φ_q = 45° is to
broaden the mainlobe by approximately 2%, and increase the peak sidelobe
level about 2 dB relative to the mainlobe. Although this is a relatively modest
degradation, considering the other phase errors in the system (which combine
in root-sum-squared fashion with this error) and our ability to reduce the

sidelobes by amplitude weighting, a unity error criterion in most cases yields an
acceptable performance. The maximum time between reference function updates
resulting from f_R drift (i.e., \dot{f}_R^{max}) is given by

\tau_{ud} = \frac{1}{\dot{f}_R^{max} \tau_c^2}    (9.1.10)

where we have assumed a π/4 phase error and τ_c is the coherent integration
time. For the frequency domain fast convolution algorithm, the processed block
duration (from center to edge of aperture) is

\tau_b = \frac{N_{az} - L_{az}}{2 f_p}    (9.1.11)

where N_az is the FFT length and L_az is the azimuth reference function length.
The update requirement is therefore

\tau_b \le \tau_{ud}    (9.1.12)

since within a data block the fast convolution algorithm requires that the
Doppler parameters remain constant. If the requirement in Eqn. (9.1.12) is not
met, the data must be preprocessed to correct for the phase errors (motion
compensation) or an alternative algorithm (e.g., time domain convolution) could
be used.

A matched filtering error in the Doppler centroid f_DC results in lost signal
power and increased azimuth ambiguities. The maximum time between reference
function updates for a given f_DC drift (i.e., \dot{f}_{DC}^{max}) is given by

\tau_{ud} = \frac{0.1\, B_D}{\dot{f}_{DC}^{max}}    (9.1.13)

where we have assumed that the allowable centroid error is 10% of the Doppler
bandwidth B_D, which produces a relatively small degradation in the SNR and
AASR. Thus, a further requirement to use the fast convolution technique is

\tau_b \le \tau_{ud}    (9.1.14)

where τ_b is the block duration. The update time τ_ud could be increased by
performing motion compensation of the data before processing. The application
of this technique would require precise attitude rate information to perform
phase adjustment of each line.

In almost all cases, the cross-track update rates are driven by the Doppler
rate dependence on the slant range, as shown in Eqn. (9.1.4). Similar error
analyses can be applied to determine the maximum number of samples between
updates. Typical numbers are on the order of two to eight bins, depending on
the error specification.

9.1.2 Azimuth Processing Bandwidth

The fraction of the Doppler bandwidth (B_p) used in the processor is a design
parameter determined by the azimuth ambiguity to signal ratio (AASR)
specification, as defined in Chapter 6. A typical AASR specification is on the order
of −20 dB. Given the azimuth antenna pattern and PRF, the bandwidth B_p
can be determined, assuming a homogeneous target area, by

AASR = \frac{\sum_{m=-\infty,\, m \ne 0}^{\infty} \int_{-B_p/2}^{B_p/2} G^2(f + m f_p)\, df}{\int_{-B_p/2}^{B_p/2} G^2(f)\, df}    (9.1.15)

where G²(f) is the two-way azimuth antenna pattern. For example, consider
a spaceborne system with a uniformly illuminated azimuth aperture. Assuming
f_p = 1.1 B_D, from Eqn. (9.1.15) a value B_p = 0.75 B_D provides an AASR = −20 dB.

Azimuth Reference Function Length

The azimuth reference function length, L_az, is given by

L_{az} = \tau_c f_p    (9.1.16)

where L_az is in samples. This can be rewritten as

L_{az} = \frac{B_p f_p}{|f_R|}    (9.1.17)

Note that, since f_R is range dependent, the length of the azimuth reference must
be updated as a function of cross-track position to keep the azimuth resolution
constant.

Azimuth FFT Block Length

Assuming the azimuth reference function is updated along-track according to
some estimated f_DC, the overlap between raw data blocks must be adjusted
according to the azimuth shift of each block relative to the adjacent block. This
shift is given by

(9.1.18)

where N_az is the FFT size of the input data block and L_az is the azimuth
reference function length. One possible processor design is to increase the block
overlap (from L_az to L_az + N_s) to accommodate the maximum shift resulting
from Doppler drift. This, however, reduces the number of good output data
samples per block of data processed. For this design, an azimuth correlator
efficiency factor can be defined as

g_a = 1 - \frac{L_{az} + N_s}{N_{az}}    (9.1.19)

where N_s is the maximum block-to-block shift. The result in Eqn. (9.1.19)
provides the relationship between the azimuth correlator efficiency factor, g_a,
and the azimuth block length, N_az. A larger block size yields a more efficient
processor; however, N_az is limited by the Doppler parameter update criterion.
From Eqn. (9.1.12), Eqn. (9.1.14), and Eqn. (9.1.19), the block size is bounded by
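The update-time bookkeeping above reduces to a few one-liners. The sketch assumes the π/4-criterion form τ_ud = 1/(ḟ_R^max τ_c²) for Doppler-rate drift, τ_ud = 0.1 B_D/ḟ_DC^max for centroid drift (Eqn. (9.1.13)), and a center-to-edge block duration of (N_az − L_az)/(2 f_p); the function names are our own.

```python
def update_time_fr(fr_drift_max, tau_c):
    # max time between reference updates for Doppler-rate drift
    # (pi/4 quadratic phase error criterion)
    return 1.0 / (fr_drift_max * tau_c ** 2)

def update_time_fdc(B_D, fdc_drift_max):
    # max time between reference updates for centroid drift
    # (allowable centroid error = 10% of B_D), Eqn. (9.1.13)
    return 0.1 * B_D / fdc_drift_max

def block_duration(N_az, L_az, f_p):
    # processed block duration from center to edge of the aperture (s)
    return (N_az - L_az) / (2.0 * f_p)

def fast_convolution_ok(N_az, L_az, f_p, tau_ud):
    # fast convolution update requirement: block duration within update time
    return block_duration(N_az, L_az, f_p) <= tau_ud
```

A design pass would evaluate both update times over the orbit, take the smaller, and then check `fast_convolution_ok` for the candidate block size; if it fails, either motion compensation or a time domain correlator is indicated, as discussed above.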


For a multilook processor, where subaperture spectral division is used, N_az
would be the block size for each look.

The range cell migration memory is given by

M_{RCM} = N_{az} N_{RW}\, (8)    bytes    (9.1.21)

assuming a complex, floating point (8 byte) representation for each data sample.
For example, the Seasat azimuth reference function for a full resolution,
single-look image is L_az ≈ 4 K, resulting in a minimum block size of N_az = 8 K.
The largest f_DC produces a range walk on the order of 128 samples. From
Eqn. (9.1.21), the range cell migration memory is therefore M_RCM = 8.0 MB.
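The Seasat sizing example can be checked in a line or two. The sketch assumes the form M_RCM = N_az N_RW (8) bytes, consistent with the 8.0 MB figure quoted above; the function name is our own.

```python
def rcm_memory_bytes(N_az, N_rw, bytes_per_sample=8):
    # range cell migration memory: azimuth block length times cross-track
    # migration extent, complex floating point (8 bytes per sample)
    return N_az * N_rw * bytes_per_sample

# Seasat single-look example: N_az = 8 K, range walk ~ 128 samples
seasat_rcm = rcm_memory_bytes(8 * 1024, 128)
```

Here `seasat_rcm` comes out to 8 * 1024 * 1024 bytes, i.e., the 8.0 MB quoted in the text.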
9.1.3 Range Reference Function

The range FFT block size is determined by the number of samples in the echo
window and the reference function length. The range reference function length is

L_r = f_s \tau_p    (9.1.22)

where f_s is the complex sampling frequency and τ_p is the transmitted pulse
duration. The range FFT length, N_r, is usually chosen to be the smallest power
of 2 that satisfies

N_r \ge \frac{L_r}{1 - g_r}    (9.1.23)

where g_r is the range compression efficiency factor. Typically, g_r is selected to
be greater than 1/2, and usually it is limited by the corner turn memory size,
which is given by

M_{CT} = N_{az}(N_r - L_r)\, (8)    bytes    (9.1.24)

again assuming a complex floating point data representation. For example, if
the azimuth and range FFT sizes are set at 4 K complex samples each, and if
the range reference function length is 512 complex samples, the minimum corner
turn memory size is M_CT = 112 MB. The M_CT can be reduced by shortening
the range block length N_r. However, recall that each block must be overlapped
by L_r; thus M_CT reduction is achieved at the cost of processing efficiency.
Memory can also be reduced by packing the data into a (16I, 16Q) or (8I, 8Q)
format.

A final consideration in selecting the range FFT size is the cross-track
variation in the Doppler centroid, f_DC. Since the secondary range compression
filter function depends on f_DC, and this is assumed constant for each block, N_r
may be limited by the performance requirement for the secondary range
compression. This limitation is typically only important for wide azimuth
beamwidth or squint imaging mode radars (Chang et al., 1992).

9.2 CORRELATOR ALGORITHM SELECTION AND COMPUTATIONAL ANALYSIS

The selection of the appropriate SAR correlation algorithm for data processing
is dependent on the signal data characteristics, the system throughput
requirements, and the output image quality specifications. There is no simple
procedure for evaluating the trade-offs among these factors. Rather, a fairly
complex analysis is needed, requiring consideration of the design and
implementation constraints in conjunction with signal processor architectures
and the available technologies. A fundamental trade-off to be made is the relative
importance of system throughput versus image quality.

The key element in the processing chain is the azimuth processing stage,
which involves formation of the synthetic aperture to focus the azimuth return
into a high resolution image. In this section, we consider the trade-offs between
what we consider to be the two fundamental azimuth correlation techniques:
(1) spectral analysis (e.g., unfocussed SAR or SPECAN); and (2) matched
filtering (e.g., frequency domain or time domain convolution). We recognize
that there are a number of other possible techniques, such as the polar
processor with step transform (Chapter 10), the hybrid algorithm (Wu et al.,
1982), and the wave equation processor (Rocca et al., 1989). Generally, these
techniques are used for special situations (e.g., inverse SAR, large squint angles,
high phase precision) and will not be considered here.

The processor performance in terms of output image quality depends on the
characteristics of the echo data. A primary characteristic driving algorithm
selection is the time bandwidth product of the azimuth signal. This parameter,
which is the product of the processing bandwidth and the coherent integration
time, given by

TB = B_p \tau_c    (9.2.1)

is a good benchmark to determine if an approximation can be used for the
exact 2D matched filtering algorithm. Small TB products are generally
characteristic of high frequency (X-band or higher) spaceborne radars, or of
relatively low-flying platforms (e.g., airborne systems). Generally, for these
systems we can obtain good quality imagery with a simplified azimuth
correlation algorithm.

In the following two sections we address the trade-offs in performance versus
computational complexity for the spectral analysis algorithms and the matched
filtering algorithms.
9.2.1 Spectral Analysis Algorithms

We will consider two commonly used spectral analysis algorithms. These are
the unfocussed processor, which applies no phase compensation to correct for
the quadratic phase history of the target, and the deramp-FFT or SPECAN


algorithm in which a phase correction is applied to the signal prior to a forward


transform.

Unfocussed SAR Algorithm

To utilize a spectral analysis technique such as unfocussed SAR or SPECAN,
we must first consider the resolution requirements of the output image products.
For the unfocussed processor, the azimuth resolution is given by the along-track
integration time associated with, for example, a π/4 quadratic phase shift. It
can be shown using simple geometry (see Fig. 9.3) that

\phi_q = \frac{\pi}{2 \lambda R}\,(V_{st}\, \tau_{cu})^2    (9.2.2)

where φ_q is the relative change in quadratic phase between the center and the
edge of the aperture and τ_cu is the unfocussed aperture time. For φ_q = π/4, the
coherent integration time for unfocussed SAR processing is

\tau_{cu} = \frac{1}{V_{st}} \sqrt{\lambda R / 2}    (9.2.3)

Since the azimuth resolution is given by

\delta x = V_{st}\, \tau_{cu}    (9.2.4)

substituting Eqn. (9.2.3) we get

\delta x = \sqrt{\lambda R / 2}    (9.2.5)

Thus, for a spaceborne system such as Seasat, where λ = 0.235 m and R = 850
km, δx ≈ 316 m, which is too coarse for most science applications. However,
in the case of an airborne X-band system such as the Canadian STAR-1, where
λ = 3.2 cm and R ≈ 10 km, an unfocussed azimuth resolution of δx ≈ 13 m is
achievable with φ_q = π/4. This is acceptable for many applications.

[Figure 9.3 Sensor to target imaging geometry for SAR. The unfocussed aperture for φ_q = π/4 (i.e., δR = λ/16) is given by Eqn. (9.2.3).]

The unfocussed SAR processor was used by many of the early SAR systems.
This processor does not compensate for the along-track phase shift resulting
from the change in sensor-to-target range. In its most rudimentary form this
processor consists of summing adjacent pixels over the unfocussed aperture
length, where τ_cu is given by Eqn. (9.2.3). However, in general, this will not produce
good quality imagery, since the inherent assumption is that the beam is steered
to zero Doppler. For squint angles producing a significant Doppler shift (e.g.,
f_DC > 0.25 B_D), the azimuth ambiguities will be severe. Additionally, there is
uncompensated range walk, which will cause the targets to be dispersed in the
range dimension. Thus, a more practical algorithm requires a preprocessing
step where the data is multiplied with a factor W_n = A_n \exp\{j 2\pi f_{DC} n / f_p\} that
shifts it to zero Doppler and also weights the terms in the summation to reduce
the sidelobes. The data block should also be skewed by the range walk,
Eqn. (9.1.6), prior to summing to minimize the range dispersion.

The computational complexity of the unfocussed SAR algorithm in terms of
floating point operations (FLOP) per input data sample (assuming complex
data) can be evaluated as follows:

1. Azimuth reference function multiply (to shift to zero Doppler and weight
the sidelobes) requires one complex multiply per input sample.

2. Summation of the elements in the data block requires one complex add per
input sample.

Thus the aggregate computational complexity for the unfocussed SAR processor
is

C_U = 8 FLOP/complex input sample

where we have assumed six operations (four multiplies, two adds) per complex
multiply and two operations per complex add. Also, we have ignored the
computations required for the reference function generation, which are negligible
assuming the Doppler centroid is slowly varying relative to the image frame size.
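The unfocussed resolution limit is easy to tabulate. The sketch assumes the form δx = √(λR/2), which reproduces the 316 m Seasat and ≈13 m STAR-1 figures quoted above; the function names are our own.

```python
import math

def unfocussed_resolution(lam, R):
    # unfocussed azimuth resolution: delta_x = sqrt(lam * R / 2)
    return math.sqrt(lam * R / 2.0)

def unfocussed_aperture_time(lam, R, v_st):
    # coherent integration time for a pi/4 quadratic phase error
    return math.sqrt(lam * R / 2.0) / v_st
```

`unfocussed_resolution(0.235, 850e3)` gives ≈316 m (spaceborne L-band), while `unfocussed_resolution(0.032, 10e3)` gives ≈13 m (airborne X-band), matching the two cases discussed above.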


The Spectral Analysis Algorithm

The SPECtral ANalysis (SPECAN) algorithm corrects for quadratic phase


variation across the processing bandwidth and separates targets based on their
differential Doppler shift. This technique is an improvement over both the
unfocussed SAR and Doppler beam sharpening algorithms in that it achieves
significantly higher resolution. However, it is limited in that it cannot
accommodate the variable cross-track range curvature correction. The flowchart
for this algorithm, which is described in detail in Chapter 10, is shown in
Fig. 9.4a. Basically, it performs a skew (or an interpolation) of the data in the
range dimension to compensate for the range walk, applies a linear FM (deramp)
correction to the data block, and then uses a forward FFT to spectrally separate
the targets. The deramp function is centered about the mean centroid for the
data block with a slope determined by the Doppler rate. The reference function
is updated as a function of cross-track position to track the Doppler parameter
variation. The output image must be resampled from its natural range-Doppler
grid (fanshape) format to an orthogonal grid.
The output image following the forward FFT stage does not provide valid
data for all targets within the block. Targets at the edge of the output block
generally are degraded in both resolution and azimuth ambiguity to signal ratio
(AASR). To achieve a uniform data quality, some data is discarded and the
FFT blocks are overlapped at the cost of processing efficiency. The fraction of
data to be discarded becomes severe as the required resolution approaches the
full aperture resolution. To improve the efficiency, the FFT (i.e., the processing
block) can be shortened at the cost of azimuth resolution (assuming the deramp
function is applied to all data within the processing block).
For radar systems with a small TB product, where the range curvature is
essentially negligible, the SPECAN algorithm presents a computationally
efficient method for azimuth correlation. To assess the computational complexity
of the SPECAN algorithm (Fig. 9.4a), we divide the azimuth correlation into
processing steps and evaluate the number of computations per input sample as
follows
1. Azimuth N_az-point reference function generation

   4/n_u real multiplies
   1/n_u real adds
   2/n_u cosine operations

where n_u is the update interval in range samples times the update interval in
azimuth blocks.

[Figure 9.4 Functional flow of the azimuth correlation algorithms: (a) the SPECAN algorithm; (b) the frequency domain convolution algorithm; (c) the time domain convolution algorithm.]


2. Reference function multiply

   one complex multiply

3. Forward N_az-point complex FFT

   (1/2) log2 N_az complex multiplies
   log2 N_az complex adds

4. Fanshape resampling, two four-point complex interpolations

   16 real multiplies
   12 real adds

Summing the total number of operations in Steps 1-4 above, the aggregate
computational complexity in floating point operations (FLOP) for azimuth
correlation with the SPECAN algorithm per sample input to the azimuth
correlator is

C_{SA} = \frac{7}{n_u} + 5 \log_2 N_{az} + 34    (FLOP/sample)    (9.2.6)

where N_az = τ_cs f_p is the azimuth block size and τ_cs is the coherent integration
time.

For multiple block processing, typically the blocks will be overlapped, with
the samples from the edges of the block discarded. The fractional block-to-block
overlap is ΔN/N_az, where ΔN is the number of samples in the overlap region.
Then the multiblock azimuth correlator computational complexity is

C'_{SA} = \frac{C_{SA}}{1 - \Delta N / N_{az}}    (9.2.6a)

A rule of thumb for determining whether the SPECAN algorithm can be
effectively used is that the range curvature must be less than 1 pixel (Sack et al.,
1985). From Eqn. (9.1.7), setting N_RC = 1 we get

|f_R|\, \tau_{cs}^2 = \frac{8c}{\lambda f_s}    (9.2.7)

Rewriting Eqn. (9.2.7) in terms of the time bandwidth product, we get

TB_{max} = \frac{8c}{\lambda f_s}    (9.2.8)

where B_p = τ_cs |f_R|. Thus Eqn. (9.2.8) gives the maximum TB, and therefore the
maximum block size that can be used in the SPECAN algorithm, assuming the
range curvature cannot exceed one range bin. For Seasat, where f_s = 22.76
Msamples/s and λ = 0.235 m, the maximum TB is 449. The resulting coherent
integration time is on the order of τ_cs = 0.95 s, which is equivalent to an azimuth
resolution at a range R = 850 km of δx ≈ 14 m, as compared to 19.5 km for
the real aperture resolution, 316 m for the unfocussed SAR processor, and about
6 m for the fully focussed synthetic aperture. For a system such as the ESA
ERS-1, where λ = 5.6 cm and f_s = 19 Msamples/s, the maximum TB = 2256,
which results in a maximum τ_cs = 1.0 s, which is greater than the nominal full
aperture observation time.

9.2.2 Frequency Domain Fast Convolution

Given the requirement for a high precision azimuth correlator that can produce
imagery at an azimuth resolution near the fully focussed aperture ideal
performance, spectral analysis algorithms are inadequate. The frequency domain
convolution (FDC) algorithm, which consists of two one-dimensional matched
filters (as described in detail in Chapter 4), provides a close approximation to
the exact two-dimensional matched filter. This algorithm can be used for most
spaceborne systems operating in the nominal strip imaging mode, assuming
secondary range compression (SRC) is employed. For large squint angles, an
additional processing stage may be required (Chang et al., 1992). The
modification entails performing the azimuth transform prior to application
of the SRC.

The computational complexity of the FDC azimuth correlator given in
Fig. 9.4b can be assessed as follows. Assuming N_az input samples constitute the
azimuth processing block, the number of computations per data sample input
to the azimuth correlator (for processing a single block of data) can be broken
down as follows:

1. N_az-point complex forward FFT

   (1/2) log2 N_az complex multiplies
   log2 N_az complex adds

2. Range migration correction, four-point complex interpolation

   8 real multiplies
   6 real adds

3. Azimuth L_az-point reference function generation (time domain) and N_az-point transform

   4 L_az/(N_az n_u) real and (1/(2n_u)) log2 N_az complex multiplies
   L_az/(N_az n_u) real and (1/n_u) log2 N_az complex adds
   2 L_az/(N_az n_u) cosine operations

where n_u is the cross-track update interval (in samples) times the along-track
update interval (in blocks).

4. Reference function multiply

   1 complex multiply

5. N_az-point inverse FFT

   (1/2) log2 N_az complex multiplies
   log2 N_az complex adds

Summing the total number of operations in Steps 1-5 above, the aggregate
computational complexity required for azimuth correlation with the FDC
algorithm per input sample is

C_{FDC} = \left(10 + \frac{5}{n_u}\right) \log_2 N_{az} + \frac{7 L_{az}}{N_{az} n_u} + 20    (FLOP/sample)    (9.2.9)

where L_az is the azimuth reference function length in complex samples, given by

(9.2.10)

for full aperture processing. In Eqn. (9.2.9) we have not taken into account the
efficiency factor of the azimuth correlator as given by Eqn. (9.1.19). Assuming
that the raw data set to be processed is divided into azimuth blocks, Eqn. (9.2.9)
gives the number of computations per input sample to process a single block.
The efficiency factor determines the overlap between blocks, or equivalently the
number of input samples that must be processed twice. Thus, for multiblock
processing, the computational rate is given by

C'_{FDC} = \frac{C_{FDC}}{g_a}    (9.2.11)

Thus, for example, if the reference function length plus the block skew is 40%
of the block size, then g_a = 0.6 and 1.7 times as many computations per input
pixel are required for multiblock processing than for processing a single block.
We have also assumed that the squint angle is relatively small, such that the
standard frequency domain convolution algorithm can be used. For larger
squint angles, the algorithm must be modified to perform the forward azimuth
FFT prior to the secondary range compression, thus requiring an additional

Time Domain Convolution

The time domain convolution (TDC) algorithm provides the most precise
method of azimuth correlation; however, it is also the most computationally
intensive. The TDC algorithm is capable of characterizing each sample in the
echo data set by its exact Doppler parameters, and therefore theoretically the
azimuth reference function contains no approximations as to the processing
block size. In a time domain processor, each reference function can be exactly
tailored to its position within the data set (Lewis et al., 1984). Thus, the
algorithm can produce an exact matched filter for a given set of radar
characteristics (ignoring random system errors). The computational complexity
of the TDC azimuth correlator, shown in Fig. 9.4c, can be assessed in terms of
the number of operations per data sample input to the azimuth correlator as
follows:

1. Azimuth L_az-point reference function generation

   4 L_az/(N_az n_u) real multiplies
   L_az/(N_az n_u) real adds
   2 L_az/(N_az n_u) cosine operations

where n_u is the update interval in range samples.

2. Range migration correction, four-point complex interpolation

   8 real multiplies
   6 real adds

3. Time domain L_az-point complex convolution

   L_az complex multiplies
   L_az − 1 complex adds
two corner turns for the da!a and an additional complex multiply per sample.
9.2.3

Time Domain Convolution

The most precise approach for SAR correlation is the matched filter tim~ domain
convolution (TDC) algorithm. Conceptually it is the simplest algorithm for

where we have assumed the reference function is not updated as a function of


along track position within a data block, Naz~
Summing the total number of operations in Steps 1-3 above, the aggregate
computational complexity for azimuth correlation using the TDC algorithm
per azimuth correlator input sample is
(FLOP/ sample)

(9.2.12)

where L. 2 is given by Eqn. (9.2.10).


The time domain convolution algorithm is typically used only for very
short apertures or in high precision processing applications where small volumes
of data are being processed (e.g., as in a verification processor to produce the
optimum quality image product).
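The SPECAN block-size limit and the resolution numbers quoted above can be checked numerically. The sketch below is not from the book; the resolution relation δx ≈ λR/(2 V τ_s) is the standard full-aperture formula and is used here as an assumption.

```python
# A quick numerical check (not from the book's software) of Eqn. (9.2.8),
# TB = 8c/(f_s * lambda), and the ~14 m azimuth resolution quoted for Seasat.
C = 3.0e8  # speed of light, m/s

def specan_max_tb(f_s, wavelength):
    """Maximum SPECAN time-bandwidth product before range curvature
    exceeds one range bin."""
    return 8.0 * C / (f_s * wavelength)

tb_seasat = specan_max_tb(22.76e6, 0.235)  # Seasat -> ~449
tb_ers1 = specan_max_tb(19.0e6, 0.056)     # ERS-1 -> ~2256

# Azimuth resolution for a coherent integration time tau_s = 0.95 s,
# using the standard relation delta_x ~ lambda * R / (2 * V * tau_s).
delta_x = 0.235 * 850e3 / (2 * 7500 * 0.95)  # -> ~14 m
```

Both TB values reproduce the text's figures of 449 (Seasat) and 2256 (ERS-1).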


Figure 9.5 Plot of computational rate of azimuth correlators as function of reference function length (L = 2^n); the time domain convolution and spectral analysis (deramp FFT) curves are shown, with the ERS-1 and Seasat operating points marked.

9.2.4 Comparison of the Azimuth Correlators

Comparing the complexity of the various algorithms requires some assumptions about the implementation, since the algorithm design affects the computational complexity. It is not possible to make a direct comparison, since the various algorithms can have widely different performance characteristics in terms of their image quality (i.e., resolution, sidelobes, ambiguities). Thus, the number of computations is a necessary but not a sufficient criterion for algorithm selection. It is but one of many factors considered in the processor design.

A plot of the four algorithms discussed in this section, in terms of the number of floating point operations per input data sample as a function of data block size, is presented in Fig. 9.5. For the SPECAN algorithm, we have assumed the block size N_az is one-quarter the full aperture reference function length L_az with g_b = 0.8, while for the FDC we assume N_az = 2 L_az (g_a = 0.5). For both the SPECAN and the FDC algorithms, we assume the reference is updated every four samples cross-track and every block along-track, so that n_u = 4. For the TDC, the reference is updated every four samples cross-track and every 1024 samples along-track, so that N_az n_u = 4096.

The number of computations per input sample for the unfocussed SAR is constant, independent of N_az, while for the SPECAN algorithm the computational rate increases as 6.25 log2(N_az). The FDC has a steeper slope at 22 log2(N_az), while the time domain algorithm increases linearly as 8 L_az and becomes an extremely computationally intensive process, even with short reference functions. To illustrate the type of computational capability required for real-time azimuth correlation, we present the following example.

Example 9.1 For Seasat SAR, the digitized raw video data has the following characteristics:

N_r = 6840 complex samples/range echo line
L_r = τ_p f_s = 760 complex samples/range reference function
T_p = 1/f_p = 607 μs

We have converted the Seasat real sampling frequency to complex samples by dividing by 2. After range compression the range line length is

N_r' = N_r − L_r = 6080 complex samples    (9.2.13)

The azimuth correlator therefore must process N_r' range compressed complex samples in T_p seconds. Assuming we require full azimuth resolution and B_p = B_D,

L_az = λ R f_p/(L_a V_st)    (9.2.14)

Inserting the following Seasat parameters into Eqn. (9.2.14):

f_p = 1646.75 Hz
λ = 0.235 m
R = 850 km
L_a = 10.7 m
V_st = 7500 m/s

we get

L_az = 4099 pulses

Rounding down to the nearest power of 2 (and thereby improving the AASR), we select

L_az = 4096 pulses
N_az = 2 L_az = 8192 pulses

Assuming the Doppler parameters are updated every four bins cross-track (i.e., n_u = 4), and are not updated along-track within a 100 km frame (i.e., N_s = 0), the processor efficiency from Eqn. (9.1.19) is g_a = 0.5. Since we are performing multiblock processing, the computational complexity from Eqn. (9.2.9) and Eqn. (9.2.11) is

C'_FDC = C_FDC/g_a ≈ 328 FLOP/input sample

The computational rate is given by

R'_FDC = C'_FDC N_r'/T_p    (9.2.15)
R'_FDC = 3.28 × 10^9 FLOPS

where R'_FDC is in floating point operations per second (FLOPS). In other words, real-time full aperture azimuth compression of the Seasat SAR data using the frequency domain fast convolution algorithm requires an azimuth correlator capable of executing nearly 3.3 GFLOPS!

For comparative analysis of the other two azimuth correlators, we present the following example.

Example 9.2 Again consider the Seasat SAR. Using the sensor parameters given in Example 9.1, we will evaluate the relative complexity of the SPECAN and time domain convolution (TDC) algorithms. From Eqn. (9.2.8) the maximum block size for the SPECAN algorithm is given by

N_az = f_p sqrt(8c/(f_s λ |f_R|))
N_az = 1538 pulses

From Example 9.1, the full aperture reference function L_az = 4099 samples. For quarter aperture, four-look processing, L_az = 1025, which is less than the maximum block size constraint. Since the block must be a power of 2 less than N_az, we select

N_az = 1024 pulses

Assuming n_u = 4, from Eqn. (9.2.6) we get

C_SA ≈ 36 + 5 log2 N_az

or

C_SA ≈ 86 FLOP/input sample

To meet the AASR requirement, we will set g_b = 0.8, therefore

C'_SA = C_SA/g_b = 108 FLOP/input sample

From Eqn. (9.2.15), the corresponding computational rate is about 30% of the FDC computational rate.

For the TDC algorithm, we will again assume that the reference function is updated once every 4 samples cross-track (i.e., n_u = 4) and once every N_az = 8192 pulses along-track. The computational complexity from Eqn. (9.2.12) is

C_TDC = 15 + 8 L_az ≈ 32,800 FLOP/input sample

where we have assumed L_az = 4096 pulses. The computational rate is therefore

R_TDC = N_r' C_TDC/T_p ≈ 328 GFLOPS

which is 100 times as many operations as the FDC and over 300 times the computational rate of the SPECAN algorithm.

In summary, the SPECAN algorithm requires the fewest computations of the three azimuth correlators (excluding the unfocussed SAR) and can provide reasonable image quality for small time bandwidth product (TB) data sets such as the ESA ERS-1. To achieve the full azimuth resolution for larger TB data sets, either the time domain or the frequency domain convolution algorithms can be used. The time domain convolution is inherently more precise, but at an extremely large computational cost for spaceborne systems, since its computational complexity increases linearly with the number of pulses in the synthetic aperture. The frequency domain convolution provides a good compromise between throughput and image quality in that, for most systems, the image degradation is very small relative to TDC, but the computational requirements are on the order of the SPECAN algorithm.
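The arithmetic of Examples 9.1 and 9.2 can be reproduced in a few lines. The sketch below uses the book's own per-sample FLOP figures (86, 328, and 15 + 8·L_az); the power-of-two rounding step is the one described in Example 9.1.

```python
import math

# Sketch reproducing the arithmetic of Examples 9.1 and 9.2 (Seasat).
# Symbols follow the text; the FLOP counts are the book's quoted figures.
wavelength, R, f_p = 0.235, 850e3, 1646.75   # m, m, Hz
L_a, V_st = 10.7, 7500.0                     # antenna length (m), velocity (m/s)
N_rc = 6080                                  # range-compressed samples/line (N_r')
T_p = 1.0 / f_p                              # interpulse period, ~607 us

L_az = wavelength * R * f_p / (L_a * V_st)   # Eqn. (9.2.14) -> ~4099 pulses
L_fft = 2 ** int(math.log2(L_az))            # round down to power of 2 -> 4096
N_az = 2 * L_fft                             # FDC block size -> 8192

C_specan = (36 + 5 * math.log2(1024)) / 0.8  # Eqn. (9.2.6)/g_b -> ~108 FLOP/sample
C_fdc = 328.0                                # Example 9.1, multiblock FDC
C_tdc = 15 + 8 * L_fft                       # Eqn. (9.2.12) -> ~32,800 FLOP/sample

rate = lambda c: c * N_rc / T_p              # real-time rate in FLOPS
# rate(C_specan) ~ 1.1e9, rate(C_fdc) ~ 3.3e9, rate(C_tdc) ~ 3.3e11
```

The rate ratios confirm the text's summary: TDC needs roughly 100 times the FDC rate and over 300 times the SPECAN rate.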

9.2.5 Range Correlation

For the cross-track or range dimension processing we will only consider the
frequency domain fast convolution algorithm. Similar to the azimuth correlation,


the range correlation consists of a forward transform, a complex reference function multiply, and an inverse transform. Since the range reference function changes very slowly as a function of f_DC, the overhead from reference function generation is negligible. Thus the computations per input data sample can be broken down as follows:

1. Forward transform of N_R' points, requiring

   (1/2) log2 N_R' complex multiplies
   log2 N_R' complex adds

2. Reference function multiply, requiring

   1 complex multiply

3. Inverse N_R'-point transform, requiring

   (1/2) log2 N_R' complex multiplies
   log2 N_R' complex adds

The computational complexity for frequency domain fast convolution range compression per input pixel is therefore

C_RDC = (6 + 10 log2 N_R')/g_r    (9.2.16)

where g_r is the efficiency factor for multiblock range correlation.

To calculate the efficiency factor in the range correlator, the number of processing blocks must first be estimated. Assume N_r complex samples per input range line, L_r complex samples per reference function, and a processing block size of N_R' complex samples. The number of good points from each processed block is N_R' − L_r + 1. Therefore, the number of processing blocks required is (N_r − L_r + 1)/(N_R' − L_r + 1). Since we cannot process a fraction of a block, we must round up to the nearest integer, thus

N_c = Int[(N_r − L_r)/(N_R' − L_r + 1)] + 1    (9.2.17a)

where Int represents the integer operation. The range efficiency factor is given by

g_r = N_r/(N_c N_R')    (9.2.17b)

In the above analysis we have assumed that the residual block fraction at the end of the range line is processed as a full block. If this fractional block is discarded, then N_c is reduced by one. Alternatively, the fractional data block can be processed with a reduced size N_R' and the range efficiency factor calculated as a weighted average of each g_r, dependent on the block size.

Example 9.3 Again, consider the Seasat data set where

N_r = 6840 complex samples
L_r = 760 complex samples
f_p = 1646.75 Hz

Assuming we have a block size of N_R' = 2048 samples,

N_c = Int(4.7) + 1 = 5

and

g_r = 6840/(5 × 2048) = 0.67

Therefore

C_RDC = 173 FLOP/input sample

For real-time processing the range correlator must operate at

R_RDC = N_r C_RDC f_p    (9.2.18)
R_RDC = 1.95 GFLOPS

The computational rate can be reduced by increasing the processor block size. If a processing block of N_R' = 8192 were selected, then N_c = 1 and g_r = 0.83. The computational complexity becomes

C_RDC = 163 FLOP/sample

with a real-time rate from Eqn. (9.2.18) of

R_RDC = 1.83 GFLOPS

which is about a 6% improvement over the rate required for the smaller block.

Since the computational load on the processing system for range correlation is dependent on the processor block size, unless there is a large change in Doppler across a range line, requiring an update in the reference function secondary range compression term, the range correlator should always be designed to process the largest possible block.
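Example 9.3 above can be reproduced directly from Eqns. (9.2.16)-(9.2.18); the helper below is a sketch, not the book's software.

```python
import math

# Sketch of Example 9.3: range-correlator block count, efficiency factor, and
# real-time rate, Eqns. (9.2.16)-(9.2.18), for the Seasat parameters.
N_r, L_r, f_p = 6840, 760, 1646.75

def range_fdc(N_blk):
    """Blocks, efficiency, and real-time rate for one processing block size."""
    N_c = (N_r - L_r) // (N_blk - L_r + 1) + 1     # Eqn. (9.2.17a)
    g_r = N_r / (N_c * N_blk)                      # Eqn. (9.2.17b)
    C = (6 + 10 * math.log2(N_blk)) / g_r          # Eqn. (9.2.16), FLOP/sample
    return N_c, g_r, N_r * C * f_p                 # Eqn. (9.2.18), rate in FLOPS

small = range_fdc(2048)   # -> (5, ~0.67, ~1.95e9)
large = range_fdc(8192)   # -> (1, ~0.83, ~1.84e9)
```

The larger block trades a longer FFT for a much better efficiency factor, which is why the net rate drops.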

9.3 SAR CORRELATOR ARCHITECTURES


Considering the large number of computations required in SAR processing, the selection of the correlator architecture requires careful analysis to ensure that the system throughput requirements are met. For example, we could take a straightforward approach and buy as many CRAY X-MP/4 computers as needed to do the job. Using the LINPACK benchmarks for a standard FORTRAN implementation, a single-processor X-MP/4 system performs 69 MFLOPS (Dongarra, 1988). Assuming that a network of CRAYs can operate at 100% efficiency, a real-time Seasat azimuth correlator using the FDC algorithm requires 48 CRAY X-MP/4 processors. If we used the TDC algorithm, we would need over 5300 CRAYs. Obviously, some optimization in the architecture, going beyond a network of general purpose computers, is required.
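The FDC processor count works out as follows; the 100% parallel efficiency is, as noted, an optimistic assumption.

```python
import math

# Check of the processor-count estimate: a ~3.28 GFLOPS real-time FDC azimuth
# correlator built from 69 MFLOPS processors, assuming 100% parallel efficiency.
r_fdc = 3.28e9      # required rate, FLOPS (Example 9.1)
per_cpu = 69e6      # single-processor CRAY X-MP/4, LINPACK FORTRAN figure

n_cpus = math.ceil(r_fdc / per_cpu)   # -> 48
```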


9.3.1 Architecture Design Requirements

The design process to determine the system architecture must consider more
than just the basic computational rate of a machine (Hwang, 1987). Initially,
a trade-off study should be performed to prioritize the relative importance of
the system throughput versus flexibility. In other words, the more specialized
we can make the processor to generate a single type of output with a similar
set of processing parameters (i.e., block size, FFT length, range migration, etc.),
the better we can tailor the architecture to achieve extremely high throughput.
A second, equally important, consideration is the radiometric accuracy
requirement. If high precision radiometric calibration is not required, we can
for example consider fixed point arithmetic for the mathematical operations,
or truncate the range correlator output prior to corner turn. If however a high
precision output is required, a full floating point (or a block floating point)
representation is needed, increasing the complexity of the correlator hardware.
A third key design parameter is the resolution requirement. The resolution
specification on the output image product not only impacts the number of
computations per input data sample, as discussed in the previous section, but
is also a key driver determining the required processor memory capacity.
To optimize the implementation of the azimuth correlator, an important
parameter to consider is the fraction of computations that are FFT operations.
This is shown in Figure 9.6 for the SPECAN and FDC algorithms. (The
unfocussed SAR and the time domain convolution do not require FFTs.) For
the frequency domain convolution, assuming the reference function length is
1-8 K samples, the fraction of FFT computations is over 80% of the total
computations. For the SPECAN algorithm this fraction is over 50%. Therefore,
the optimal architecture for implementation of these algorithms requires a highly

Figure 9.6 Plot of fraction of total computations in FFT as function of azimuth reference function length (L_az = 2^n).

efficient technique for performing FFTs. This will be addressed in detail in this
section for each of the architecture designs.
We will categorize the various SAR correlator architectures into what we
consider to be the three fundamental designs: (1) Pipeline; (2) Common Node;
and ( 3) Concurrent Processor. There are a number of possible variations or
combinations of these basic designs and we will address some of them with
examples of real systems. For each architecture, the key design parameters
to be considered are: (1) Peak I/O data rates; (2) Memory capacities;
(3) Computational requirements per processor; (4) Reliability/redundancy of
the design; (5) Maintainability/evolvability of the design; and (6) Complexity of
the control system. These design parameters should be evaluated in conjunction
with the current technology to factor into the trade-off analysis the relative cost
of the hardware. For example, a memory requirement of 32 Mbytes is not
especially stringent with current technology, considering that 4 Mbit chips are
currently available. A typical cost per byte of RAM is on the order of 1/20 of
a cent. Thus, a 32 Mbyte memory might cost $16 K. Conversely, if the architecture
requires an I/O bandwidth of 100 MB/s, that forces a departure from standard

454

THE SAR GROUND SYSTEM

data bus architectures (such as the VME bus), or even the newer fiber optic ring networks (FDDI), to, say, an (as yet) unproven HSC star architecture, which could be quite costly.

Perhaps the most important consideration that is overlooked by many system designers is that the hardware technology evolves faster than the software. Typically, new hardware (such as the high speed data bus architectures) will operate in only a very limited environment. Using such equipment in a custom designed SAR correlator could require a significant amount of software to be developed at the microcode level. The software drivers necessary to communicate with peripheral devices are a chronic problem for system engineers attempting to incorporate the latest state-of-the-art technology into their system. It is usually advisable when building an operational system to use equipment one version removed from the most recent release. The system should be designed such that technology upgrades can be incorporated within the basic structure, requiring a minimum amount of redesign.

9.3.2 Pipeline Arithmetic Processor

The optimal system architecture for achieving extremely high throughput SAR correlation is the pipeline machine. A generalized functional block diagram of a pipeline processor is presented in Fig. 9.7. The data is input to the processor from some type of storage device (e.g., a high density tape drive or the SAR sensor ADC). Each processor element (or functional unit) performs some type of operation on the data array {x_1, x_2, ..., x_n} to generate a new array {A_1(x_1, ..., x_n), ..., A_m(x_1, ..., x_n)}, where each operation A_i may be performed on any or all of the input data samples. The pipeline consists of a series of these functional units, connected by a data bus. The movement of data and the arithmetic operations are controlled by a digital clock whose cycle time is compatible with the hardware elements comprising each unit. The pipeline is terminated by a second storage device whose I/O data rate requirements may be either higher or lower than those of the input device, depending on the functional operations applied to the data.

We can apply this generalized description of the pipeline processor to the SAR correlator as shown in Fig. 9.8. In this simplified diagram of a pipeline SAR processor, we first divide the processing task into modules that relate directly to the major stages of the SAR processing: (1) Range correlation; (2) Corner turn; (3) Azimuth correlation; (4) Geometric rectification; and (5) Multilook filtering. Each of these modules may be further broken into functional units. For example, the range correlator consists of a forward FFT unit followed

Figure 9.7 Functional block diagram of pipeline processor: a storage device feeds a series of functional units (A, B, ..., N), terminated by a second storage device.

Figure 9.8 Simplified block diagram of a pipeline SAR processor.

by a complex multiply unit followed by an inverse FFT unit. This architecture is optimal from a system throughput perspective since there is a dedicated hardware element for each stage of the processing. The aggregate computational rate of the system is the sum of the computational rates of each functional unit comprising the system, since when the pipeline is full all units are operating simultaneously.

Advanced Digital SAR Processor

A good example of an operational pipeline SAR processor is the Advanced Digital SAR Processor (ADSP) built by the Jet Propulsion Laboratory for NASA. This system, shown in some detail in Fig. 9.9, is a straight pipeline architecture consisting of custom designed digital units using commercial (off-the-shelf) integrated circuit (IC) chips. This system is capable of operating with a continuous input data stream at Seasat real-time data rates and generating a four-look detected output image that is written to a high density digital recorder. The system, completed in 1986, features a computational rate measured at over 6 GFLOPS when the pipeline is full. It consists of 73 boards (22 unique board designs) comprising two racks, as shown in Fig. 9.10.

The functional block diagram in Fig. 9.9 is detailed to illustrate the level of programmability required to provide the necessary flexibility such that the ADSP can be used by both the Spaceborne Imaging Radar (SIR) program and the Magellan (Venus radar mapper) program. The data flows through the main pipeline as indicated by the vertical lines. Horizontal lines entering functional units illustrate key control parameters to be passed to that unit. Some parameters, such as the FFT length or the weighting functions, are updated only once per processing run, while others, such as the interpolation coefficients or the azimuth reference function coefficients, must be dynamically updated during the processing run. Some of the key system characteristics are: (1) Range and azimuth FFT sizes are variable and can be set up to a maximum of 8 K complex (or 16 K real) samples; (2) Range cell migration memory can accommodate up to 1024 bins of range walk; (3) Azimuth correlator performs either the SPECAN algorithm or the frequency domain convolution (FDC) algorithm; and (4) Programmability of individual units permits flexibility in selection and update of processing parameters.

Some of the limitations of the pipeline architecture can also be seen in the ADSP. For example, the autofocus and clutterlock modules must operate in a feedback mode, performing the analysis on one block of data and applying the result to a following data block. In general, this type of feedback results in error which will degrade the image quality. However, due to the slowly varying nature of the Doppler parameters along-track (excluding the airborne SAR case), this feedback error can be partially compensated by using a Kalman filter (e.g., an α-β tracker) to extrapolate the Doppler parameter estimates to the next block. A second limiting factor in many real-time systems is the precision of the computations. For example, in the ADSP the azimuth reference function is generated in the frequency domain. However, for low TB data (i.e., < 100), a

Figure 9.9 The Advanced Digital SAR Processor (ADSP) functional block diagram showing control parameters to each module. (Courtesy of T. Bicknell.)

Figure 9.10 The ADSP system shown with a Thorn-EMI high density recorder.

linear FM frequency domain representation does not replicate the Fourier transform of the time domain chirp. Other performance compromises, such as the number of bits used in the computations and flexibility in updating parameters (e.g., antenna patterns), are characteristic of most pipeline systems where precision and flexibility are traded for speed.

As a follow-on to the ADSP, the JPL group has built a second pipeline processor which is installed at the University of Alaska in support of processing the J-ERS-1, E-ERS-1, and Radarsat data to be received at the Fairbanks ground station. As might be expected, this system, completed in 1990, is more compact, using less than half the number of ICs at less than 1/3 the power consumption. This saving derives primarily from the utilization of low power CMOS technology and the larger capacity (1 Mbit) memory chips. The Alaska SAR Facility is described in detail in Appendix C.
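The α-β tracker mentioned for the ADSP's clutterlock/autofocus feedback can be sketched as follows. The gains, the update interval, and the constant-drift centroid model are illustrative assumptions only, not the ADSP's actual implementation.

```python
# Sketch of block-to-block Doppler feedback: an alpha-beta tracker that
# smooths noisy centroid estimates and extrapolates to the next data block.
# Gains and the constant-drift model are illustrative assumptions only.
def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """Return the one-block-ahead centroid prediction after each measurement."""
    x, v = measurements[0], 0.0        # state: centroid estimate and drift rate
    predictions = []
    for z in measurements:
        x_pred = x + v * dt            # extrapolate the state to this block
        r = z - x_pred                 # innovation (measurement residual)
        x = x_pred + alpha * r         # corrected centroid
        v = v + (beta / dt) * r        # corrected drift rate
        predictions.append(x + v * dt) # value applied to the next block
    return predictions

# Centroid drifting 2 Hz per block: the prediction locks on after a few blocks.
preds = alpha_beta_track([100.0, 102.0, 104.0, 106.0, 108.0])
```

Because the predicted value is applied to the *next* block, a steady drift is tracked with a small, shrinking lag, which is the partial compensation described in the text.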

Flexible Pipeline Architecture

A variation on the straight pipeline architecture (e.g., ADSP) is a flexible pipeline which permits dynamic reconfiguration of the interconnections between functional units. Recall that the ADSP has 73 digital boards, but only 22 unique designs. This derives from the fact that the forward and inverse FFT board sets (8 boards/set) are identical, and there are four such board sets (two for the azimuth correlator and two for the range correlator). Similarly, the memory boards used in the corner turn and multilook memories (14 total) are designed identically. This introduces the possibility of sharing these boards among the various modules at the cost of throughput.

Consider the architecture of Fig. 9.11, where the range and azimuth correlators share the same modules. Instead of a continuous data transfer, as in the straight pipeline operation, the data is input to the bent pipe correlator in bursts. Each burst is one processing block (N_az × N_r samples) of data. In the first pass of the data through the system, the complex interpolator module is bypassed, and the range reference function is read into the reference function multiplier unit. The range compressed output is stored in RAM until range processing of the data block is complete. The matrix transpose of this data block is then fed back into the correlation module, which is reset for azimuth compression. The complex interpolator can perform range migration correction and slant-to-ground range correction in the same step, or alternatively it can output the slant range imagery. The azimuth compressed output is again stored in RAM until the block processing is complete. The feedback loop is then switched, transferring the processed data block to the multilook module, while the next block of data is input to the correlation module for range compression.

The correlator design described above is just one example of how a flexible pipeline design could be used for SAR processing. In general, this approach is less expensive in terms of the number of digital boards required to implement the correlator. However, it does require a more complex control system to switch the data paths, and it is significantly slower than the straight pipeline architecture. The Alaska SAR Facility correlator was originally planned to be a bent pipe design. However, a trade-off study of cost versus performance indicated that the straight pipeline was the optimal approach.

Figure 9.11 Functional block diagram of SAR correlator with bent pipeline architecture (shared correlation module with FFT, reference function multiply, interpolation, and inverse FFT; multi-stage corner-turn memory; and a multilook detection/filter module).
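The two-pass bent-pipe flow (range compress each line by fast convolution, corner turn, then feed the block back through the same module for azimuth compression) can be sketched with NumPy FFTs. The tiny block size and the placeholder reference functions are illustrative assumptions; a real correlator would use conjugate chirp spectra and retain only the N' − L + 1 good points per block, as in Section 9.2.5.

```python
import numpy as np

# Sketch of the bent-pipe two-pass flow: range compression by frequency
# domain fast convolution, a corner turn (matrix transpose), then a second
# pass through the same module for azimuth compression.
def fast_convolve(block, ref):
    """Circular fast convolution of each row with a reference function."""
    n = block.shape[1]
    return np.fft.ifft(np.fft.fft(block, axis=1) * np.fft.fft(ref, n), axis=1)

rng = np.random.default_rng(0)
raw = rng.standard_normal((8, 16)) + 1j * rng.standard_normal((8, 16))
range_ref = np.array([1.0, -0.5, 0.25])       # placeholder range reference
azimuth_ref = np.array([0.5, 0.5, 0.5, 0.5])  # placeholder azimuth reference

pass1 = fast_convolve(raw, range_ref)        # pass 1: range compression
corner = pass1.T.copy()                      # corner turn between passes
pass2 = fast_convolve(corner, azimuth_ref)   # pass 2: azimuth compression
```

The same `fast_convolve` routine serves both passes, which is exactly the board-sharing economy the bent-pipe design exploits.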

Reliability and Control

A major drawback to the pipeline processor design is reliability. This type of


system generally does not exhibit graceful degradation (i.e., a single failure could
cause the entire system to shut down until the problem is diagnosed and a
repair implemented). For this reason, to minimize the downtime, a set of high
level diagnostic tools is generally required for rapid troubleshooting, and a full
set of spare boards needs to be maintained for replacing the failed element.
When a failure occurs, the diagnostic system must pinpoint the bad board. This
board is replaced and repaired offline to maximize the system availability.
Alternatively, a more sophisticated system would have on-line spares and
possibly an automated diagnostic capability to switch in spare components in
case of a detected failure.
The computational loading on any individual processing element is
ameliorated in a pipeline system by adding additional elements at stages in the
processing chain where a throughput bottleneck occurs. This permits each stage
to operate synchronously under control of a single clock. The data throughput
is controlled by the cycle time of the clock whose design is based on the user
throughput requirements. A high speed clock (e.g., the 20 MHz or 50 ns clock
used in the ADSP) can greatly complicate the control due to the short interval
available for coefficient updates. In fact, in most systems it is the complexity of
the control system that is the key factor limiting the throughput of the pipeline.
9.3.3 Common Node Architecture

A more traditional architecture, generally used for implementing a non-real-time SAR signal processor, is the common node architecture. A functional block diagram of this architecture is illustrated in Fig. 9.12. Essentially, in this architecture all data transfers pass through a common node or data bus to which are attached storage devices, computational elements, and a control system. Input data transfer can be via the host (control) computer or via direct memory access (DMA) ports located on the computational elements (CEs). These DMA ports permit data transfer directly from an external device into the CE memory without passing through the host CPU memory. The common node architecture in its simplest form would be an array processor, or a digital signal processor (DSP) board, interfaced to a host computer via an external bus (Davis et al., 1981). A more advanced configuration (such as the IBM common signal processor) might consist of multiple custom FFT units or arithmetic processors operating in parallel, connected by a high speed switch to route data between units when a process is complete.

The prime advantage of a common node architecture over a pipeline configuration is its flexibility to adapt to the specific processing requirements of a particular data set. These systems are predominantly software based with the bulk of the software residing in the host CPU. For example, algorithm modifications to reconfigure the system to process a new mission data set are relatively easy, since a high level operating system is available to program the

Figure 9.12 Functional block diagram of common node processor architecture: high density storage device(s), host/control CPU, arithmetic processor unit(s), and FFT unit(s) attached to a data transfer node/bus.

interface controller boards. As the computational demand or the required


throughput increases, additional computational units can be added to the system
without a major reconfiguration. A further benefit of this architecture is that
the system can be redundantly configured such that failures result in graceful
degradation in the performance of the system.
The key disadvantage of this architecture is the data transfer node. Given
that the system is configured with a single high speed switch or computer bus,
this node does represent a single-point failure in the system. Additionally, for
extremely high throughput SAR correlator applications, the data rate through
this node can become the limiting factor in system performance. To illustrate
this point we present the following example.
Example 9.4 Consider the common node architecture of Fig. 9.13a. To achieve
real-time throughput, concurrent azimuth and range correlation must be
performed in separate computational units. For Seasat, the real-time input data
rate is
r1 = 2 Nr fp = 2 x 6840 x 1646.75 = 22.53 MB/s

assuming the 5 bit real data stream is converted to a complex 8 bit I, 8 bit Q
representation in the input interface. This data is transferred via the node to

the range correlator module. Assuming both the reference function multiply
and forward and inverse transforms are performed within the module, the output
data rate is

r2 = 8 fp (Nr - Lr) = 80.1 MB/s    (9.3.1)

where we have allocated 8 bytes for the complex floating point representation.
We have similar transfer rates into and out of the corner turn memory and the
azimuth correlator before output to the HDDR. The aggregate data rate through
the node is therefore

rT = r1 + 3 r2 = 263 MB/s    (9.3.2)

which approaches current technology for I/O transfer switches.
A technique to reduce the data rate is to code the data prior to transfer
across the node and decode it before the next processing stage. A convenient
representation is (9,9,6), where 9 bits are allocated to each of the real and
imaginary components of the mantissa and 6 bits to a common exponent. This
type of representation (which is used in the ADSP) adds only a relatively small
distortion noise, but it does put an additional burden on each signal processing
module to pack and unpack the data.

Example 9.5 Assuming a (9,9,6) complex data representation is used, the
real-time Seasat I/O rate through a single node can be determined as follows

r1 = 22.53 MB/s

r2 = 3 fp (Nr - Lr) = 30.0 MB/s

therefore

rT = r1 + 3 r2 = 112.5 MB/s

which approaches the capability of state-of-the-art technology using the HSC
network architecture.
If we now consider the configuration shown in Fig. 9.13b, where all the FFTs
are performed in one module and the complex arithmetic (i.e., interpolations,
multiplies) is performed in a separate module, four additional data transfers
relative to Eqn. (9.3.2) are required for the azimuth and range correlation.
Therefore, even with the (9,9,6) data representation, the aggregate data rate for
real-time Seasat processing is

rT = r1 + 7 r2 = 232.5 MB/s

THE SAR GROUND SYSTEM

Figure 9.13a Common node SAR processor architecture with computational units grouped according to processing sequence: HDDR, input/output interface, node switch, range correlator (FFT, reference multiply, FFT-1), corner turn (RAM), and azimuth correlator (FFT, reference multiply, interpolation, FFT-1).

Figure 9.13b Common node architecture with computational units grouped according to type: HDDR, input/output interface, node switch, FFT units 1-3, corner turn memories (CTM) 1-3, and arithmetic processors (AP) 1-3.
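The rate bookkeeping in Examples 9.4 and 9.5 can be checked numerically. The sketch below (Python) reproduces the aggregate node rates for both data representations; the range reference length Lr is our assumption, inferred from the 33.4 µs Seasat pulse and 22.77 MHz complex sampling rate rather than quoted in the example.

```python
# Seasat node-rate check for Examples 9.4 and 9.5 (values from the text;
# Lr is an assumed value inferred from the pulse length and sampling rate).
fp = 1646.75               # PRF, Hz
Nr = 6840                  # complex samples per range line
Lr = 22.77e6 * 33.4e-6     # range reference length, ~760 complex samples

r1 = 2 * Nr * fp                   # input: 8 bit I, 8 bit Q -> ~22.53 MB/s
r2_float = 8 * fp * (Nr - Lr)      # 8 byte complex float lines -> ~80.1 MB/s
r2_996 = 3 * fp * (Nr - Lr)        # (9,9,6) coded, 3 bytes/sample -> ~30.0 MB/s

rate_fig13a = r1 + 3 * r2_float    # grouped by processing sequence -> ~263 MB/s
rate_996 = r1 + 3 * r2_996         # same topology with coded data -> ~112.5 MB/s
rate_fig13b = r1 + 7 * r2_996      # grouped by unit type -> ~232.5 MB/s
```

The four-transfer penalty of the type-grouped configuration (Fig. 9.13b) is visible directly in the 3 r2 versus 7 r2 terms.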

British Aerospace of Australia SAR Processor Design

An example of the common node architecture is the E-ERS-1 correlator being
developed by British Aerospace of Australia. That design, shown in Fig. 9.14,
utilizes an APTEC IOC-24 as the data transfer node with a MicroVAX II as
the host CPU and custom processor elements (PEs) to perform the bulk of the
numerical computations.


For this system a 10:1 slowdown is planned from the real-time E-ERS-1
data rate. This translates into an input data rate r1 = 2 MB/s assuming the
data is unpacked into byte format. The data for each transfer is buffered in the
IOC-24; thus the aggregate data rate must include both inputs and outputs to
the IOC. Since the corner turn is performed in the IOC local memory, one
input/output transfer pair is eliminated. The aggregate data rate through the
IOC is therefore given by

rT = 2 r1 + 4 r2    (9.3.3)

Since the correlator operates at a fractional real-time throughput rate, qt = 1/10,
Eqn. (9.3.1) becomes

r2 = 4 qt fp (Nr - Lr)    (9.3.4)

assuming all arithmetic uses a 16 bit complex integer format. Inserting the
following values

qt = 1/10 real-time rate
fp = 1680 Hz
Tp = 37.1 µs
BR = 13.5 MHz
fs = 15.0 Msamples/s
Nr = 6428 complex samples
Lr = fs Tp = 557 complex samples

into Eqn. (9.3.4), we get r2 = 3.94 MB/s, while from Eqn. (9.3.3) rT = 19.76 MB/s,
which provides an 18% margin for the peak I/O below the 24 MB/s maximum
capacity of the IOC-24.

Figure 9.14 Common node architecture implemented by British Aerospace of Australia for E-ERS-1 processing (Fenson, 1987): MicroVAX II host (high level control), APTEC IOC-24 with format buffer, and custom PEs (200 MFLOPS installed for the range process, 300 MFLOPS installed for the azimuth process).
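These budget numbers are easy to verify. In the small Python check below, the transfer accounting rT = 2 r1 + 4 r2 (one buffered input/output pair at the raw rate, two pairs at the compressed-line rate, with the corner turn pair absorbed in IOC memory) is our reading of the description above, not a formula quoted by the designers.

```python
# APTEC IOC-24 rate budget for the BAe E-ERS-1 design at 1/10 real time.
# Transfer accounting (our reading of the text): 2*r1 + 4*r2 through the node.
qt, fp = 1.0 / 10.0, 1680.0
Nr, Lr = 6428, 557                  # complex samples per line / reference length
r1 = 2.0e6                          # unpacked byte-format input, bytes/s
r2 = 4 * qt * fp * (Nr - Lr)        # 16 bit complex integer, 4 bytes/sample
r_total = 2 * r1 + 4 * r2           # aggregate rate through the IOC
margin = 1.0 - r_total / 24.0e6     # against the 24 MB/s IOC-24 ceiling
```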
Each processor element contains a RISC controller, local memory, and
arithmetic processors capable of 50 MFLOPS. To evaluate the number of PEs
required for E-ERS-1 azimuth correlation, from Eqn. (9.2.10) Laz = 1034
complex samples. Using Laz = 1024, N̂az = 2 Laz, ga = 0.5, and nL = 4, from
Eqn. (9.2.11) we get CFDC ≈ 267 FLOP per complex input sample. From
Eqn. (9.2.15), the real-time processing rate is RFDC ≈ 2.63 GFLOPS. At one-tenth
real-time (i.e., qt = 0.1), the azimuth correlator must perform a minimum of
263 MFLOPS, which requires 6 PE boards. Assuming the range compression
is executed using the FDC algorithm with an 8 K FFT, the range correlator
must perform 187 MFLOPS at one-tenth real-time, requiring four additional
PE boards.
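The board counts follow directly from the per-sample costs. A quick Python check (treating the quoted 267 FLOP/sample azimuth cost and 187 MFLOPS range requirement as given):

```python
import math

# PE sizing for the BAe E-ERS-1 correlator: each PE board delivers 50 MFLOPS.
fp, Nr_hat = 1680.0, 5871            # PRF and range-compressed line length
c_az = 267.0                         # FLOP per complex input sample (azimuth)
r_az_realtime = fp * Nr_hat * c_az   # ~2.63 GFLOPS at full real-time rate
r_az = 0.1 * r_az_realtime           # one-tenth real time -> ~263 MFLOPS
pe_azimuth = math.ceil(r_az / 50e6)  # -> 6 boards
pe_range = math.ceil(187e6 / 50e6)   # -> 4 boards
```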
For processing at a rate one tenth real-time, the common node architecture
fits well within the current computing technology. Commercially available array


processors such as the STAR VP-3 (rated at 150 MFLOPS) or the Sky
Computer (SKYBOLT) board level processor (rated at 80 MFLOPS) could
also be used to meet the one-tenth real-time performance requirement. For a
board level array processor system, the data transfer node becomes the internal
bus of the host CPU. At the one-tenth real-time rate it is feasible for a single
host computer, augmented by 6-7 SKYBOLT processors, to achieve the
450 MFLOPS necessary for simultaneous range and azimuth compression. There
is currently no single processor CPU that can meet this goal; however, there
have recently been a number of multiprocessor systems introduced (e.g., CRAY
Y-MP/4, ALLIANT 2800) that are capable of over 500 MFLOPS. However,
an architecture based around a supercomputer may not be the most cost effective
solution, since it does not readily provide for future expansion.
A reasonable compromise to the expense of the supercomputer would be
to use a superminicomputer class host with an attached processor to perform
the bulk of the computations. The SIR-C processing system employs such a
design. This architecture, shown in Fig. 9.15, utilizes an Alliant FX-8
superminicomputer with four high speed ports (HSPs) for data I/O (Test
et al., 1987). Each HSP is rated at 30 MB/s. The disk array, manufactured by
Maximum Strategy, performs an eight-way hardware stripe over a maximum
of 32 disks, providing a storage capacity of 64 GB, at an I/O data rate of 10-12
MB/s. This system is sufficiently flexible to process the large variety of SIR-C
data collection modes since it is fully programmable. Additionally, all elements
are redundant (including the disk array with a parity disk), providing for graceful
degradation with hardware failures. The SIR-C processor achieves 300 MFLOPS
across the two array processors and an additional 96 MFLOPS peak performance
with the eight Alliant processor boards. The limiting factor is the I/O to the
disk array since in this design the range and azimuth correlations are performed
within the array processor (32 MB memory) and the corner turn is performed


in the host CPU (192 MB memory). Each array processor operates on a different
data block, performing identical computations.
It should be noted that similar common node architectures are being utilized
by: NASDA for J-ERS-1; CCRS for the E-ERS-1 and RADARSAT; and the
DLR for the E-ERS-1 and X-SAR data processing. The popularity of this
architecture is traceable to its price/performance ratio, as well as the fact that
off the shelf computer hardware is adequate to meet most throughput requirements.
An advanced version of this architecture is being sponsored by several agencies
within the US Department of Defense. They plan to develop a general purpose
signal processor similar to that shown in Fig. 9.12. The primary objective of
this development program, which involves a number of major defense contractors
such as Hughes, TRW, IBM, and AT&T, is to develop a system incorporating
a high speed switch and a set of VHSIC processing modules that meet interface
standards in accordance with military specifications (e.g., ADA). The switch
(or data node) is more like an intelligent controller, recognizing when a process
(e.g., an FFT) is complete and routing the results to another processing module
for the next stage of processing. This system is a general purpose signal processor
specified to perform in the 2-3 GFLOPS range in an extremely compact
configuration (e.g., < 1 m³).
9.3.4 Concurrent Processor Architecture

A third class of system architectures for the SAR signal processor is the
concurrent or parallel system. Here we are referring primarily to loosely coupled
multiprocessor systems with distributed local memory (Fig. 9.16), such as the
Massively Parallel Processor (MPP) developed by Goodyear or the Hypercube
developed by the California Institute of Technology (Caltech). This is in contrast
to a multi-processor system, such as the Alliant FX/2800 or the CRAY Y-MP/4,
where the processors are tightly coupled with a shared memory system. In this
section, we will discuss examples of both SIMD (single instruction multiple
data) machines (e.g., the MPP or the Connection Machine) and MIMD
(multiple instruction multiple data) machines, such as the Caltech Hypercube
and the EMMA multiprocessor built by ELSAG in Italy.

Figure 9.15 Common node architecture implemented by JPL for SIR-C processing: Alliant FX-8 (eight CEs, 192 MB RAM, cache, IP) with I/O channels to two STAR VP-3 array processors, a disk array, and a VME bus.

Figure 9.16 Functional block diagram of a concurrent (massively parallel) processor: (a) Two-dimensional topology; (b) Three-dimensional topology.
The most obvious advantage in a concurrent processor system is that the
aggregate computational power is essentially unlimited. If higher performance
is required, additional microprocessors can be added to increase the size of the
array to meet the throughput requirements of any processing task. Additional
benefits, such as reliability (through redundancy) and evolvability, directly
follow from this architecture. The main drawbacks to this type of system
are twofold: (1) The I/O data rate is typically a limitation, since each processing
element (PE) cannot be directly accessed; and (2) the operating systems for
such machines are not sufficiently mature to permit the software to be easily
optimized across all the PEs. It should be noted that great strides are being
made in both of these areas, but, until these limitations are significantly reduced,
the practical utility of loosely coupled multi-processor architectures is of a
somewhat narrow scope.
SIMD Processor Arrays

Single instruction multiple data (SIMD) systems are parallel processors which
operate synchronously under the same control unit. Physically the processor
elements (PEs) can be connected in any communication topology. For example,
the MPP is a two-dimensional (planar) array where each PE can transfer data
only to its four nearest neighbors. Conversely, the Connection Machine is an
n-cube topology where any PE can be connected to n other PEs according to
some predefined configuration that may be optimal for a given application
(Hillis, 1985).
The SAR correlation algorithm has been implemented on the MPP by a
group at the GSFC (Ramapriyan et al., 1984). A functional block diagram of
this system is shown in Fig. 9.17. The array unit (ARU) consists of a 128 x 128
(i.e., 16,384) array of PEs, each with its own 1024 bit local memory. The cycle
time is 100 ns (i.e., 10 MHz clock); however, each PE can perform only bit
serial arithmetic. The result is that this system is highly efficient for fixed point
operations, but its performance is dramatically reduced for floating point
operations. For example, the MPP is measured to perform 1.86 GOPS for
8 bit integer multiply operations, but only 39.2 x 10^6 complex multiplies per
second (Schaefer, 1985).
Data input (output) occurs through 128 bit wide ports at the 10 MHz clock
rate with 1 bit flowing to (from) each PE in the first (last) column of the array.
The array is controlled by an array control unit (ACU) which is microprocessor
based. The data management and application software are housed in the
program data management unit (PDMU), which is a VAX 11/780 minicomputer.
The PDMU executes the programs that are developed in the host computer.
In the current configuration, both the host and the PDMU functions are handled
by the VAX system. Due to the limited size of the staging memory (SM) of
16 MB, the SAR data is processed in blocks.

Figure 9.17 Concurrent processor SIMD architecture used by Goodyear in the Massively Parallel Processor (MPP): staging memory, 128 bit input interface, array unit (ARU), and 128 bit output interface.
Assuming there are no data I/O transfer bottlenecks, and that the corner
turn could be managed such that this operation is overlapped with the actual
computations, the MPP has the processing power to achieve approximately
1/20 real-time Seasat processing with floating-point computations. If the
radiometric accuracy of the output were not a prime consideration, 8 bit
arithmetic could be employed to achieve a rate about one half real-time. The
algorithm implemented by Ramapriyan et al. (1984) used 16 bit arithmetic. It
was possible to perform sixteen 4 K complex FFTs simultaneously by partitioning
the array into sixteen 32 x 32 subarrays. The actual PE control software to perform
the radix-2 butterfly across 1024 1 bit processors is beyond the scope of this
text, but suffice it to say that the overhead from this type of programming
complexity has severely limited the use of SIMD architectures for operational
SAR correlation.
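The subarray decomposition itself is easy to picture. The NumPy sketch below is a serial stand-in that shows only the data partitioning, not the bit-serial butterfly control; the figure of 4 samples per PE (4096 samples per 1024-PE subarray) is our inference from the sizes quoted above.

```python
import numpy as np

# Sixteen simultaneous 4 K complex FFTs, one per 32 x 32 subarray of the
# 128 x 128 PE array (each 1024-PE subarray holding 4096 complex samples).
rng = np.random.default_rng(0)
batch = rng.standard_normal((16, 4096)) + 1j * rng.standard_normal((16, 4096))
spectra = np.fft.fft(batch, axis=1)   # batched: one 4096 point FFT per subarray
```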
MIMD Processor Arrays

The multiple instruction multiple data processors can be categorized into either
shared memory tightly coupled machines (e.g., Cray Y-MP/4), where a single
bus is shared by both the processors and the memory, or distributed memory

470

THE SAR GROUND SYSTEM

multicomputer systems, where each processor node has local memory and is
interconnected by some topology (e.g., ring, hypercube, etc.). In this section we
will address only the latter type of MIMD architecture. A number of MIMD
topologies have been created for specific processing applications, such as the
BBN butterfly switch (BBN Labs, 1986), where the arithmetic processors are
arranged to access other processors' local memories to efficiently execute the
FFT operation (among other signal processing tasks). As previously discussed,
both the communication efficiency and the program complexity are major
concerns in utilizing this type of architecture for SAR processing.

The Italian Space Agency EMMA-2 E-ERS-1 Processor

A good example of a MIMD based architecture for SAR correlation is the
EMMA-2 processor, developed by ELSAG (Appiani et al., 1985). This system
was developed originally for real-time pattern recognition tasks and is currently
used by the US Postal Service for automated letter sorting. The EMMA system
hardware architecture is shown in Fig. 9.18. In Fig. 9.18a, the hierarchical
organization is illustrated. The host computer controls a region or network of
regions, each of which in turn controls a network of families. Each region bus
is interconnected by a high speed interface (interregion connection unit, IRCU)
for transfer of data between regions. The detailed organization of each region is
shown in Fig. 9.18b. Connected to the region bus are microprocessors (PI),
each of which has its own family bus to which the processing nodes (PNs) are
connected, as well as a high capacity system memory (HCSM) board that is
shared by the PNs within that family. Each PN board consists of three
microprocessors, each of which has its own co-processor, 32 KB of EPROM,
and 32 KB of RAM. A near-term upgrade of the local memory to 128 KB is
planned. The maximum configuration of the EMMA-2 is:

8 regions/system
32 families/region
128 PN boards/family

The current system design uses 16 bit microprocessors (the iAPX286 chip),
with a 32 bit bus architecture for future microprocessor upgrade.

Figure 9.18 Concurrent processor MIMD architecture used by ELSAG, Genoa, Italy: (a) Hierarchical organization; (b) Region bus organization. (Appiani et al., 1985.)

The EMMA-2 architecture has been selected by the Italian Space Agency
(ASI) for E-ERS-1 fast delivery processing (Selvaggi, 198J). The requirement
is to produce three 100 km image frames per 100 minute orbit, which translates
into a throughput of about 1/120 of real-time. To achieve this throughput rate,
the SPECAN algorithm was selected with quarter aperture, multilook processing.
As previously discussed (Section 9.2.1), the maximum coherency time for spectral
analysis processing of the E-ERS-1 C-band data is on the order of 0.7 s such
that full aperture resolution can be achieved with negligible degradation.
Furthermore, the EMMA-2 implementation requires only quarter aperture

coherency, which for E-ERS-1 is only 0.16 s. From Eqn. (9.2.6) and Eqn. (9.2.6a)

CSA = 45 + 6.25 log2(N̂az)

where we have set the cross-track reference function update interval at n = 4
bins and used gb = Bp/BD = 0.8. For a four-look image, the resulting
azimuth block size is

Naz = tcs fp / L = 269 pulses

where L = 4 is the number of looks. Selecting the FFT length N̂az to be 256,
we get CSA = 95 FLOP/input sample. The computational rate to perform
azimuth compression in 1/120 real-time (i.e., qt = 1/120) is

RSA = qt fp CSA N̂r = 7.8 MFLOPS

assuming fp = 1680 Hz and N̂r = 5871 complex samples per echo line after range
compression.
To evaluate the computational rate for range compression, we will assume
the fast convolution (FDC) algorithm is used. The EMMA system architecture
of each processing node constrains the maximum FFT to N̂ = 2048 complex
points. Inserting the E-ERS-1 parameter values of Nr = 6428
and Lr = 557 complex samples into Eqn. (9.2.17), we get gr = 0.78 and Ne = 4.
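Putting the azimuth (SPECAN) and range (FDC) per-sample costs together gives the overall machine requirement. A Python check, using the cost expressions of Eqns. (9.2.6) and (9.2.16) as applied in this section, with gr = 0.78 taken as given:

```python
import math

# EMMA-2 computational budget for 1/120 real-time E-ERS-1 processing.
qt, fp = 1.0 / 120.0, 1680.0
Nr, Nr_hat = 6428, 5871                    # raw / range-compressed line lengths
c_sa = 45 + 6.25 * math.log2(256)          # SPECAN cost, 256 point FFT -> 95 FLOP
r_sa = qt * fp * c_sa * Nr_hat             # azimuth rate: ~7.8 MFLOPS
c_fdc = (6 + 10 * math.log2(2048)) / 0.78  # FDC cost with gr = 0.78 -> ~148 FLOP
r_fdc = qt * fp * Nr * c_fdc               # range rate: ~13.3 MFLOPS
total = r_sa + r_fdc                       # ~21.1 MFLOPS minimum
```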
From Eqn. (9.2.16)

CFDC = (6 + 10 log2 N̂) / gr = 148 FLOP/sample

The computational rate required for range compression at 1/120 real time is

RFDC = qt fp Nr CFDC = 13.3 MFLOPS

Thus the EMMA-2 system must be capable of a minimum of 21.1 MFLOPS
to meet the requirement of 1/120 real-time rate.
The architecture selected for the E-ERS-1 processing task is a single region
with three families, as shown in Fig. 9.18b. A pipeline processing algorithm is
employed, with each family performing a different task, but each PN within a
family performing an identical task on different portions of the data set. The
first family is connected to an HDDR via a special peripheral interface card
(SPIC) for raw data input. This family performs data unpacking, synchronization,
and processing parameter evaluation. The second family performs range
compression and azimuth spectral analysis, while the third family performs
azimuth compression, resampling, and multilook overlay operations. The data
is processed by blocks, with each block consisting of 2 x 10^6 complex samples
(using 256 range lines). The data is output from family 3 to the host computer
disk via the region bus. Each PN has been benchmarked at 180 ms per 1024
point complex FFT. This translates into a computational rate of 0.28 MFLOPS
per PN. Therefore, about 75 PNs are needed to perform the E-ERS-1 processing
task, assuming 100% efficiency. To account for performance loss due to the
system inefficiencies, as well as the compute requirements of the input processor
and the multilook processor, about 100 PNs will be required for the E-ERS-1
task.

Concurrent Processor Reliability and Maintainability

The concurrent architectures discussed in this section offer one significant
advantage over other system architectures in that they are highly redundant.
A failure in any single processing element results in only minor degradation
in the overall processing capability of the system. This assumes, of course, that
the application software is written to be reconfigurable to operate over a reduced
number of processors should a failure occur. Additionally, diagnostics must be
available to pinpoint the failed element for replacement. A second key benefit
is in the spares cost. A pipeline architecture such as the ADSP requires over
20 unique boards, while a concurrent SAR processor will typically have only
3-4 unique board designs.
It would seem reasonable therefore that this type of parallel architecture will
be the architecture of choice for future high speed signal processing applications.
Most likely, a combination of loosely coupled processors and tightly coupled
processors will evolve as a compromise design. The primitive operating systems
for the massively parallel machines currently place too great a burden on the
programmer, in terms of writing parallel code and partitioning tasks among
the processing elements. Thus the combination of a tightly coupled system, such
as in a mini-supercomputer, augmented by a massively parallel processor to
perform the computationally intensive tasks, could potentially provide a simple
design interface and high performance in the same system.

9.4 POST-PROCESSOR SYSTEMS

Thus far in this chapter we have presented various aspects of the SAR correlator
architecture and design. The emphasis throughout the discussion was on the
need to produce image products at high data rates. The question that naturally
arises is what to do with the correlator output. In other words, there must be
a back-end data analysis and distribution system to handle the high output
data rate. In Fig. 9.19 we illustrate one possible approach to the design of the
back end system. Following the correlator are two major processing elements:
the post-processor and the geophysical processor. The post-processor performs
Level 1B processing, which encompasses the radiometric and geometric correction
of the output imagery, as well as multilook averaging and the generation of
browse products. The geophysical processor (Level 2, 3) mosaics multiple SAR
image frames, formats them into map quadrants, performs SAR image registration
with other sensor or geographical data sets, and derives some geophysical
characteristic(s) from this product (e.g., wave height, soil moisture, surface
roughness). These geophysical measurements are then input to large scale models
for estimation of global processes such as ocean circulation or hydrological
cycles (Level 4 products). In this section, we will address specifically the

474

THE SAR GROUND SYSTEM

SAR

RAW DATA
(LEVEL 0)

CCHBATI;A

._____

9.4

~ ~ST-

MN3E.OATA

a"MAPS

GE<J'HVSCAL ....____
~
PHYSICAL_

(LEVEL 1A) _ _ ___.... (LEVEL 18) ....__ _ _ _ PARAMETERS


(LEVEL 2, 3)

L.EVa
181

LEVa
1A

LEVa

RADIOMETRIC

182

OOARECTDI

OTM;ENG.
TELEMETRY

LEVa
2,3

....------, LEVa
183

.----.LEVa

LEVa
MULTI-

184

SIN!OR , _ _..

RJSICJll

GEOPH'Y'SICAL
PRlC6SSNG

lAroE
SCH.E
MOOB.S

9.4.1

POST-PROCESSOR SYSTEMS

475

Post-Processing Requirements

The post-processor design depends on the data rate and data volume output
from the SAR correlator, the variety and accuracy requirements for the various
product types, and, perhaps most importantly, the precision required in the
computations. In our analysis, we will assume the SAR correlator produces
only single-look, complex, full resolution image data without any geometric
resampling or radiometric corrections applied. All multilook filtering and
detection operations are performed in the post-processor. In this formulation,
we move all output product options to the post processor, resulting in a
correlator output that is of a single type, thus simplifying the archive. The
correlator processing is also reversible, allowing us to recover (most of) the raw
data by applying the inverse of the compression filters. This would permit an
archive of only the single-look image data without retention of the Level 0 raw
data. (To be fully reversible we must retain all partially filtered data, i.e., the
reference function length in each dimension, and perform full floating point
computations throughout the correlation.) The volume of data to be archived,
the location of the archive relative to the SAR correlator, and the quality of
the original raw data set (which indicates the amount of reprocessing required)
are key factors in determining at what level of data product the permanent
archive is to be maintained .

--+
Data Volume and Throughput

CORREi.ATM:
~TASE1'S

----

b
Figure 9.19 Functional block diagram of SAR ground data system: (a) Top level organization;
(b) Details of post-processing subsystem.

For spaceborne SAR systems, such as SIR-C or E-ERS-1, the acquired data
volume is almost always constrained by the downlink data rate, roL The range
line length in complex samples (ignoring the overhead from the ancillary data
headers) is given by
(9.4.1)

architectural trade-offs in the post-processing system. The details of the postprocessing algorithms were presented in Chapters 7 and 8.
In many SAR processing systems, the radiometric and geometric correction
procedures are not functionally separate from the SAR correlation process-_ In
fact, most of these operations can be incorporated into the SAR correl~tton.
processing chain without additional passes over the data set. The functional
breakdown between correlation processing and post-processing assumed here
is just one possible design and is not necessarily optimal for the computational
performance aspects of the system. However, it does provide for maximum
flexibility in terms of the variety of output product types that can be produced.
A SAR processing system dedicated to a single application or user grou~ may
combine a number of these processing steps with the range and azimuth
compression, since the variety of products is not required. Some of these
trade-offs were previously discussed in Chapters 7 and 8.

where nb is the quantization. In Eqn. (9.4.1) we have assumed that the onboard
digital system time-expansion buffers the downlink data across the entire
interpulse period. After range compression, the number of good samples per
range echo line is given b~
(9.4.2)

where rP is the pulse duration and f. is the complex sampling frequency.


Assuming each data acquisition period (datatake) is long relative to the
azimuth reference function length, i.e.,

476

THE SAR GROUND SYSTEM

9.4

where 'Jd1 is the datatake duration, then we can write the correlator instantaneous
output data rate as

r00

nuNJP (bytes/s)

(9.4.3)

where nu is the number of bytes per pixel (e.g., for a 64 bit complex representation
nu= 8). Substituting Eqn. (9.4.1) and Eqn. (9.4.2) into Eqn. (9.4.3), we get
(9.4.4)
where qd is the instrument duty cycle (i.e., the fraction of total time that the
SAR is operating). For a real-time processing system Eqn. (9.4.4) specifies the
input data rates that the post-processor must be capable of processing.

Example 9.6 Consider the following Seasat parameter set


qd = 50% duty cycle
τp = 33.4 µs
nu = 8 bytes/pixel
nb = 5 bits/sample
fp = 1646.75 Hz
fs = 22.77 Msamples/s
rDL = 112.7 Mbps

(Note that rDL for Seasat, which had an analog downlink, represents the output
data rate from the ground digital units.) From Eqn. (9.4.4), the corresponding
correlator output data rate is

roo = 40.1 MB/s

which is over six times the correlator input data rate.
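Chaining Eqns. (9.4.1)-(9.4.4) reproduces this number directly (Python; the variable names are ours):

```python
# Correlator output data rate for Seasat (Example 9.6).
r_dl = 112.7e6        # downlink rate, bits/s
nb = 5                # quantization, bits/sample
fp = 1646.75          # PRF, Hz
fs = 22.77e6          # complex sampling frequency, samples/s
tau_p = 33.4e-6       # pulse duration, s
qd, nu = 0.5, 8       # duty cycle and bytes per output pixel

Nr = r_dl / (2 * nb * fp)      # complex samples per range line, Eqn. (9.4.1)
Nr_good = Nr - fs * tau_p      # good samples after range compression, Eqn. (9.4.2)
r_oo = qd * nu * Nr_good * fp  # average correlator output rate, Eqn. (9.4.4)
```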


The net increase in data relative to the correlator input stems from the
dynamic range as a result of processor compression gain. A data rate reduction
between the correlator and the post-processor can be achieved by coding the
8 byte complex floating point representation. It has been shown that for most
applications a code using 1 byte for each of the real and imaginary mantissa,
and 1 byte for a common exponent, adds negligible additional distortion noise
(van Zyl, 1990).
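The pack/unpack step of such a code is simple to prototype. The toy coder below (Python) is an 8 bit mantissa, shared-exponent variant of the same idea, not van Zyl's exact format; it shows the coding and its bounded relative error.

```python
import math

def encode_shared_exp(z, mant_bits=8):
    # Pack one complex float as (I mantissa, Q mantissa, shared exponent).
    # The text's (9,9,6) code is the same idea with 9 bit mantissas.
    m = max(abs(z.real), abs(z.imag))
    e = math.frexp(m)[1] if m > 0 else 0        # shared exponent
    scale = 2.0 ** (mant_bits - 1 - e)
    lim = 2 ** (mant_bits - 1) - 1
    i = max(-lim, min(lim, round(z.real * scale)))
    q = max(-lim, min(lim, round(z.imag * scale)))
    return i, q, e

def decode_shared_exp(i, q, e, mant_bits=8):
    scale = 2.0 ** (e - (mant_bits - 1))
    return complex(i * scale, q * scale)
```

The relative quantization error is bounded by roughly one part in 2^(mant_bits - 1) of the larger component, which is the "relatively small distortion noise" the text refers to.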
Assuming some fraction of the correlator output is used exclusively for
analysis as single-look, complex products, the post-processor input data rate
can be written as
(9.4.5)
To determine the computational rate required by the post-processor system,
we must identify specific radiometric and geometric correction operations that


are to be applied to each input data sample. Since these correction algorithms
depend on system characteristics, such as the sensor stability over time, the
platform ephemeris and attitude accuracy, and the frequency and type of internal
calibration measurements, the number of operations could range from only a
few to several hundred per pixel, depending on the system stability. For this
reason, we emphasize the methodology for scoping the size of the post-processor,
followed by specific examples for a quantitative evaluation of the computational
rate.
9.4.2 Radiometric Correction

The radiometric calibration process consists of evaluation of both the internal
and external calibration data and generation of the calibration correction factors.
These factors are then used to correct the image data, thus establishing a
common basis for relating the pixel data number representation to the target
backscatter coefficient. In general, we can define a radiometric calibration and
image correction procedure as consisting of the following steps:

1. Internal calibration data evaluation;
2. External calibration data evaluation;
3. Generation of calibration correction factors;
4. Radiometric correction of image data.

The internal calibration data includes: (1) engineering telemetry used to assess
system gain/phase errors or drift in the operating point of the system; (2)
receive-only noise power; and (3) calibration loop data such as injected
calibration tones or leakage pulses (e.g., chirps). The external calibration data
consists of images of point target calibration devices or distributed homogeneous
target sites.
For this analysis we assume that the calibration data evaluation in Steps 1
and 2 is performed offline by a dedicated calibration analysis workstation. This
is a reasonable assumption since a significant portion of the analysis may involve
operator interaction to select targets and interpret the telemetry data. Additionally,
much of the analysis is performed only occasionally since the time constants
for variation are large relative to the sampling period and the point target sites
are typically observed infrequently.
In its simplest form, the radiometric correction factor is a scalar array that
varies as a function of cross-track image pixel number. This correction factor
is dependent on

1. Two-way elevation antenna pattern;
2. Slant range;
3. Resolution cell size;
4. System gains/losses.

If we assume that the system is stable over some time period after which a new
correction factor must be derived, the correction as applied to the amplitude
data is

    K_r(I) = [ sin η(I) (R(1) + cI/(2 f_s))^3 / ( G^2(I) G_STC(I) ) ]^(1/2)    (9.4.6)

where R(1) is the slant range to the first image pixel, η(I) is the incidence angle
at cross-track image pixel I, G(I) is the antenna pattern projected into the
image plane, and G_STC(I) is the sensitivity time control gain as a function of
time (sampling interval).
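In code, the correction of Eqn. (9.4.6) amounts to filling a per-pixel array. A minimal sketch follows, in which `pattern`, `stc`, and `inc_angle` are hypothetical callables standing in for the system model (the speed-of-light constant and the unity default for the STC gain are assumptions of this sketch, not values from the text):

```python
import math

def radiometric_correction(n_pixels, R1, fs, inc_angle, pattern, stc=None):
    """Amplitude correction factor K_r(I) per Eqn. (9.4.6).

    inc_angle(I) -> incidence angle at pixel I (rad)
    pattern(I)   -> antenna gain projected into the image plane
    stc(I)       -> sensitivity time control gain (unity if None)
    """
    c = 3.0e8  # speed of light, m/s
    K = []
    for I in range(n_pixels):
        R = R1 + c * I / (2.0 * fs)            # slant range at pixel I
        G = pattern(I)
        g_stc = 1.0 if stc is None else stc(I)
        K.append(math.sqrt(math.sin(inc_angle(I)) * R**3 / (G**2 * g_stc)))
    return K
```

Because the slant range grows across the swath, the correction factor increases monotonically for a flat antenna pattern.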
Typically, it is reasonable to assume that K_r(I) is independent of (slow) time
over scales of 10-15 s (which constitute an image frame), with the exception
of the roll angle rate. Changes in the roll angle will cause the antenna pattern
modulation and the incidence angle to change relative to the sampling window by

    Δn_r = 2 ω_r R f_s tan η / c    samples/s

where ω_r is the roll rate. Assuming a maximum shift Δn_r,max is acceptable
before a K_r update, the update interval is

    Δt_u = Δn_r,max c / (2 ω_r R f_s tan η)    (9.4.7)

Consider as an example the Shuttle Imaging Radar for η = 45°, f_s =
22.5 MHz, ω_r = 0.033°/s, and R = 300 km. Assuming we update at Δn_r,max = 1
pixel, the update interval is Δt_u = 0.025 s. For f_p = 1400 Hz, we must update
every 35 range lines, which is about 600 updates for each 15 s data set. Rather
than generate a new correction factor for each update, the K_r(I) array can
be extended such that it is larger than the actual swath width. The updates
are then accomplished by simply shifting the array without any additional
computations. The assumption here is that the antenna pattern as projected
into the image plane does not change significantly over the range of roll angles
within a 15 s frame.
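The drift and update-interval relations can be evaluated directly. A sketch with the Shuttle Imaging Radar values quoted in the example (the roll rate is assumed to be in degrees per second; the result agrees with the quoted 0.025 s only to order of magnitude, since the exact value depends on rounding of the assumed geometry):

```python
import math

# Roll-rate-driven K_r update interval, Eqn. (9.4.7).
c = 3.0e8                          # speed of light, m/s
eta = math.radians(45.0)           # incidence angle
fs = 22.5e6                        # range sampling rate, Hz
roll_rate = math.radians(0.033)    # roll rate, rad/s (assumed deg/s input)
R = 300.0e3                        # slant range, m
fp = 1400.0                        # pulse repetition frequency, Hz

# Sample drift of the antenna pattern across the sampling window:
drift = 2.0 * roll_rate * R * fs * math.tan(eta) / c   # samples/s

dn_max = 1.0                       # allowed shift before a K_r update, pixels
dt_u = dn_max / drift              # update interval, s (Eqn. 9.4.7)
lines_per_update = dt_u * fp       # range lines between updates
```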
Given that the above assumptions are valid, only a single K_r(I) is required
for an image frame and the computational rate for generation of the correction
factor is negligible. Since, in this case, the radiometric correction is just a scalar
multiply applied to each complex pixel, the computational complexity is C_Rc =
2 FLOP/input pixel, and therefore the computational rate is

    R_Rc = r_PI C_Rc

where r_PI is the post-processor input rate in complex samples. For Seasat
real-time processing R_Rc ≈ 10 MFLOPS for the radiometric correction.


Assuming the calibration data evaluation is performed off-line prior to the


correlation processing, the correction factor K_r(I) could be precalculated and
used to scale the azimuth reference function, thus eliminating the additional
computations required in the last stage. This approach requires that the correction
array update interval, Δt_u, be greater than the synthetic aperture period. A final
point is that, in general, the calibration correction is a two-dimensional complex
filter function. The radiometric correction stage can be used as a second filtering
pass over the data to correct for mismatch in the azimuth and range reference
functions due to Doppler parameter estimation errors or phase and amplitude
errors across the system bandwidth. The nature of these correction filters will
depend on the characteristics of the image point target response function; if
the data is dispersed along the range and azimuth dimensions, two one-dimensional
filters may be adequate. However, if the data is skewed, either a
resampling step or a two-dimensional filter would be required. An additional
post-processor filtering stage could add an additional 50-100 FLOP per sample
depending on the filter size. If the system errors are deterministic, the correction
filters could be incorporated into the range and azimuth compression reference
functions, thus eliminating the need for the correction filter.

9.4.3 Geometric Correction

Inherent in the SAR data is geometric distortion caused by the side looking
geometry, surface terrain, system sampling errors, and platform velocity
variation. Assuming the location of any pixel can be determined relative to a
fixed earth grid (e.g., UTM, Polar Stereographic), the images can be geometrically
rectified by performing a two-dimensional resampling (Siedman, 1977). The
pixel locations can be derived by tiepointing (either operator assisted or
automated), or predicted using a model for the sensor imaging geometry and
the target elevation. The latter approach requires precise knowledge of platform
(actually antenna phase center) position and velocity during the imaging period.
It should be noted that the geometric fidelity of the resampled image product
is not dependent on knowledge of the platform attitude. If the range and Doppler
information inherent in the echo data is used in the target location, as described
in Chapter 8, then the value of f_DC reflects the antenna yaw and pitch angles,
and the range gate is independent of roll angle. Therefore, the only significant
error contributors in the target location procedure are the satellite orbit
determination uncertainty and the target elevation relative to the reference geoid.
It has been shown that the aforementioned tiepointing procedure can be
used to geometrically rectify a SAR image using a polynomial warping algorithm
(Naraghi et al., 1983). However, this approach is ineffective for images with
significant relief due to the local distortion caused by foreshortening and layover
effects. A more precise technique, proposed by Kwok et al. (1987), uses only a
few point targets of known position (latitude, longitude, elevation) to refine the
accuracy of the ephemeris using the SAR range and Doppler equations. It
requires a minimum of two targets distributed in range to provide incidence

angle diversity and two targets in azimuth to determine the along-track scale
errors. This approach is described in detail in Chapter 8. The tiepoint selection
and image registration are performed offline in the calibration analysis workstation,
and therefore do not contribute to the post-processor computational rate
requirement.
Geometric Correction Procedure

For a spaceborne platform with a relatively small amount of drag, the position
errors (Δx, Δy, Δz) derived from a single site are highly correlated over a small
arc. Additionally, since the position and velocity errors are also highly correlated
with each other, the corrected platform ephemeris can be repropagated, thus
allowing all image data for that arc to be geometrically calibrated. The geometric
correction procedure is as follows:
1. Point target analysis;
2. Orbit refinement and repropagation;
3. Generate location vs. pixel number grid;
4. Register image with digital terrain map (repeat 2 and 3);
5. Resample image to uniform grid.

Steps 1-4 are typically implemented offline in a calibration analysis
workstation. Determination of the point target locations involves some operator
interaction, therefore these operations are adjunct to the high speed processing
chain. To register the image with a digital elevation map (DEM), a small area
(e.g., 512 x 512 points) of the DEM is projected into the SAR geometry (e.g.,
rotated to the SAR ground track and illuminated according to some backscatter
model) and cross-correlated with the SAR image. This registration step is used
to derive the residual target location error after all systematic corrections are
made. Steps 2 and 3 are repeated following the image to map registration
process. Steps 2, 3, and 4 generally require no operator interaction. The
resampling process in Step 5 above is typically designed to produce one of three
geometrically corrected products (Schreier et al., 1988):
1. Ground plane projected to smooth geoid in an azimuth/range grid;
2. Geocoded to geoid model in an earth fixed grid;
3. Geocoded to terrain elevation map in an earth fixed grid.
We will analyze the computational complexity of each geometric resampling
procedure in the following subsections.
Ground Plane Projection. In order to resample the complex output image of

the SAR correlator to a ground projection, with uniform pixel spacing in both

POST-PROCESSOR SYSTEMS

481

azimuth and range directions, we first generate a grid of location versus pixel
number as discussed in the previous section. The resampling process is as follows:

1. Generate a resampling index in the azimuth direction using 4-point
   interpolation, requiring
      4 real multiplies
      3 real adds
2. Perform azimuth interpolation using N_I points (e.g., sinc or cubic spline
   interpolator), requiring
      N_I real multiplies per I and Q
      (N_I - 1) real adds per I and Q
3. Repeat Steps 1 and 2 for the range dimension.

The aggregate number of floating point operations per complex input pixel is

    C_GC1 = 2(2N_I + 2)(g_oa + g_oa g_or)    FLOP/complex input pixel    (9.4.8)

where g_or and g_oa are the oversampling factors in range and azimuth, respectively.

Example 9.7 Assume for the single-look Seasat image, where δx ≈ 6 m, that
a uniform output spacing of δx_az,out = 3.125 m is selected for the azimuth dimension
and δx_gr,out = 12.5 m for the ground range dimension. The input slant range
spacing is

    δx_sr = c/(2 f_s) = 6.58 m

resulting in an average ground range spacing of

    δx_gr = c/(2 f_s sin η) = 19.2 m

where a mean incidence angle across the swath of η = 23° is assumed. The
range oversampling factor is therefore

    g_or = δx_gr / δx_gr,out = 1.54    (9.4.9)

The input azimuth spacing is

    δx_az = V_sw / f_p    (9.4.10)

where V_sw is the swath velocity, from Eqn. (8.2.2). Inserting this value into
Eqn. (9.4.10) we get δx_az ≈ 4.07 m and

    g_oa = δx_az / δx_az,out = 1.30    (9.4.11)

Using Eqn. (9.4.8), assuming a four-point interpolator (i.e., N_I = 4), we get

    C_GC1 = 66 FLOP/input sample

From Example 9.6 the Seasat real-time correlator output data rate is 5
Msamples/s, which would require a post-processor computational rate of 330
MFLOPS for real-time geometric correction.
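The Example 9.7 arithmetic can be checked numerically. A sketch using the spacings quoted above and the per-pass cost 2(2N_I + 2) FLOP implied by the worked numbers:

```python
# Reproduce Example 9.7 for the Seasat single-look image.
dx_gr, dx_gr_out = 19.2, 12.5    # ground-range spacing: input/output (m)
dx_az, dx_az_out = 4.07, 3.125   # azimuth spacing: input/output (m)
N_I = 4                          # interpolator length (4-point)

g_or = dx_gr / dx_gr_out         # range oversampling factor, Eqn. (9.4.9)
g_oa = dx_az / dx_az_out         # azimuth oversampling factor, Eqn. (9.4.11)

# Two resampling passes (azimuth, then range) over complex I/Q data:
C_gc1 = 2 * (2 * N_I + 2) * (g_oa + g_oa * g_or)   # FLOP per input pixel

r_pi = 5.0e6                     # correlator output rate, samples/s (Ex. 9.6)
R_gc1 = r_pi * C_gc1             # required post-processor rate, FLOPS
```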
If multilook detection is performed prior to geometric correction, the azimuth
input pixel spacing is reduced by the number of looks. However, if the output
spacing requirement is not reduced, the number of computations remains the
same. Any resampling operation performed after detection should use intensity
data to preserve the first and second order statistics (Quegan, 1989).

Geocoding to a Smooth Geoid. To geocode the correlator output into a
standard map projection we perform a three-pass resampling process, as
described in Chapter 8 (Friedman, 1981). Pass 1 is azimuth geometric correction
and oversampling. Pass 2 is range geometric correction and skew. Pass 3 is
azimuth undersampling and a second skew to effect the desired image rotation.
The procedure is shown pictorially in Fig. 8.12. The azimuth oversampling is
to prevent aliasing from the rotation. The oversampling factor is given by

    g_oa = 1 / cos β    (9.4.12)

where β is the rotation angle. Since a third pass must also be added to the
number of computations, the aggregate number of floating point operations
per input sample for geocoding to a smooth geoid is given by

    C_GC2 = 2(2N_I + 2) g_oa (1 + g_or + g_ua g_or)    (9.4.13)

where g_ua, the azimuth undersampling factor, is given by

    g_ua = δx_az / (g_oa δx_p)

and the range and azimuth oversampling factors are given by Eqn. (9.4.9) and
Eqn. (9.4.11), with δx_gr,out, δx_az,out replaced by the output grid line and pixel spacing
δx_l, δx_p respectively.

Example 9.8 Assume that the post-processor input is Seasat single-look,
complex imagery rotated 45° relative to grid north. The output image is to be
geocoded to a uniform 4 meter spacing, i.e.

    δx_l = δx_p = 4 m

The azimuth and range sampling factors are given by

    g_oa = 1.41
    g_or = 4.8
    g_ua = 0.72

From Eqn. (9.4.13) the computational complexity is

    C_GC2 = 261 FLOP

per complex input pixel. Assuming an input data rate of 5 Msamples/s, the
computational requirement for the geocoding from Eqn. (9.4.5) is

    R_GC2 = r_PI C_GC2 = 1.3 GFLOPS

This extremely high computational rate results from the requirement for a
single-look complex output oversampled to a 4 m uniform spacing. A more
realistic post-processing scenario is presented in the following example.
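A sketch of the Example 9.8 numbers, using Eqn. (9.4.12) for the azimuth oversampling and the quoted range and undersampling factors:

```python
import math

# Example 9.8: three-pass geocoding of single-look complex Seasat imagery
# rotated 45 degrees to grid north, 4 m output spacing.
beta = math.radians(45.0)
g_oa = 1.0 / math.cos(beta)          # azimuth oversampling, Eqn. (9.4.12)
g_or, g_ua = 4.8, 0.72               # factors quoted in the example
N_I = 4                              # 4-point interpolator

# Eqn. (9.4.13): oversample, range-resample, then undersample passes
C_gc2 = 2 * (2 * N_I + 2) * g_oa * (1 + g_or + g_ua * g_or)

r_pi = 5.0e6                         # input rate, complex samples/s
R_gc2 = r_pi * C_gc2                 # required rate, FLOPS (~1.3 GFLOPS)
```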
Example 9.9 Assume we have a one tenth real-time Seasat processor, such
that the SAR correlator output data rate is

    r_CO = 0.5 Msamples/s

and that only 50% of the correlator output is to be geocoded. The post-processor
input data rate is reduced to

    r_PI = 0.25 Msamples/s

If the data is first L-look averaged, such that δx_az = L V_sw / f_p, requiring
4 FLOP per sample, the data rate is reduced to r_PI^L = r_PI / L. Assuming L = 4,
with an output pixel spacing of δx_l = δx_p = 12.5 m, we get the following
oversampling factors

    g_oa = 1.30
    g_or = 1.54
    g_ua = 1.0

where we have assumed β = 30°. The computational rate per detected L-look
input sample is given by

    C_GC2^L = 2(2N_I + 2) g_oa (1 + g_or + g_ua g_or)    (9.4.14)

where the superscript L refers to the look averaging. From Eqn. (9.4.14) for
N_I = 4, C_GC2^L = 106 FLOP/sample. For r_PI = 0.25 Msamples/s and L = 4

    R_GC2^L = 4.3 MFLOPS

which can be handled by most scientific workstations augmented with an array
processor or a floating point accelerator.

Geocoding to Digital Elevation Map. To geocode the image to a high resolution
digital elevation map (DEM), the post-processor must calculate the foreshortening
correction for each output pixel given the target elevation at that point. The
resampling algorithm is similar to the three-pass geocoding process described
for the smooth ellipsoid, except the second resampling pass for range correction
requires an additional stage to perform the foreshortening correction. Assuming
a uniform square pixel output grid (e.g., a Universal Transverse Mercator
projection with δx_l = δx_p = 12.5 m) and an input DEM in some arbitrary
projection and spacing (δx_m), the additional computational steps for geocoding
with terrain correction are as follows:

1. Convert the DEM from a geodetic to a geocentric system (Heiskanen and
   Moritz, 1967): 11 FLOP per DEM sample, assuming the radius of curvature
   varies slowly relative to the target elevation (which is true for DEMs greater
   than 250,000:1 scale);
2. Resample the DEM to the required image output grid: 2N_I + 2 FLOP in
   each of the line and sample directions per output sample;
3. Determine the foreshortened target displacement: approximately 25
   operations per output sample to determine both azimuth and range
   components, Eqn. (8.3.22) to Eqn. (8.3.28).

The computational complexity for the DEM resampling operations of Steps 1
and 2 is

    C_DEM = 11 + (2N_I + 2)(g_om + g_om^2)    (9.4.15)

where g_om is the map oversampling factor. We have assumed in Eqn. (9.4.15) that
no rotation of the map is required and that the input and output DEM pixel
spacing is the same in both the line and pixel dimensions (e.g., northing and
easting for a UTM projection).

The computational complexity for the image resampling is given by
Eqn. (9.4.13), with the additional calculations required in Step 3 to determine
the foreshortening displacement. Thus the computational complexity for the
geocoding with terrain correction is

    C_GC3 = C_GC2^L + 25 g_oa g_or    (9.4.16)

The following example illustrates the number of computations required.

Example 9.10 Given a DEM with sample spacing δx_m = 25 m and an output
image grid of δx_l = δx_p = 12.5 m, the oversampling factor is g_om = 2. From
Eqn. (9.4.15)

    C_DEM = 71 FLOP/input sample

For a 100 x 100 km map, there are N_DEM = 16 Msamples per frame. Assuming
one tenth real time throughput (i.e., Δt = 150 s), the computational rate is

    R_DEM = C_DEM N_DEM / Δt = 7.6 MFLOPS

Assuming the oversampling factors of Example 9.9, the image resampling
complexity can be estimated from Eqn. (9.4.16) as

    C_GC3 = 153 FLOP/complex input pixel

If an L-look detection operation is performed prior to geocoding, assuming a
one tenth real-time rate, four looks, and one half of the data geocoded (i.e.,
r_PI = 0.25 Msamples/s, L = 4), the rate is

    R_GC3^L = 5.8 MFLOPS

The aggregate computational rate is therefore

    R_T = R_GC3^L + R_DEM = 13.4 MFLOPS

which could be handled with a minicomputer augmented by an array processor.


If the DEM is already in the desired output grid format, the computational
requirements to perform the map resampling can be eliminated, further reducing

the required post-processor system computational rate.
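The Example 9.10 budget can be reproduced as follows (the 5.8 MFLOPS image-resampling rate is taken directly from the text rather than rederived):

```python
# Example 9.10: DEM resampling plus terrain-corrected geocoding rates.
N_I = 4                                  # interpolator length
g_om = 25.0 / 12.5                       # map oversampling factor
C_dem = 11 + (2 * N_I + 2) * (g_om + g_om ** 2)   # Eqn. (9.4.15), FLOP

N_dem = 16.0e6                           # DEM samples per 100 x 100 km frame
dt = 150.0                               # one-tenth real-time frame period, s
R_dem = C_dem * N_dem / dt               # DEM resampling rate, FLOPS

R_gc3 = 5.8e6                            # image resampling rate quoted above
R_total = R_gc3 + R_dem                  # aggregate rate, ~13.4 MFLOPS
```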


9.4.4 Post-Processor Architecture

For a single-look real-time input data stream to the post-processor, the


radiometric correction requirement is estimated to be 10 MFLOPS (assuming
no terrain correction), while the geometric correction varies from 300 MFLOPS
for image rectification to approximately 2 GFLOPS for geocoding to a DEM
(assuming 1-look, 4 m square, complex pixels). The radiometric correction
factors must also be updated on a pixel by pixel basis for the terrain corrected
geocoded product. Consider the correction factor given by Eqn. (9.4.6) for a
smooth geoid. The incidence angle term 11(1) must be updated based on the
local slope as derived from the elevation data. Additionally, the antenna pattern
correction must be derived from the actual off-boresight angle of the target.
From Eqn. (8.3.30) to Eqn. (8.3.34), this requires an additional 18 FLOP per
output pixel, which translates into a real-time computational rate of ~180
MFLOPS, assuming an oversampling factor g_oa g_or = 2.
Therefore, the requirement for real-time geocoding and radiometric correction
of one-look Seasat data to a high precision terrain map is on the order of
2.2 GFLOPS. For a four-look image, geocoding 50% of the data at real time
rates, the computational rate is reduced to ~160 MFLOPS. Essentially, all the
data processing can be structured as concurrent vector operations, which can
be implemented efficiently on a supercomputer such as the CRAY X-MP/4,
which is specified at peak performance of 276 MFLOPS, or on a mini-super
such as an Alliant FX-800 (360 MFLOPS). For example, a real-time Seasat
SAR system operating on four-look products requires ~200 MFLOPS which
could be handled with a single (4 processor) CRAY X-MP/4 or an Alliant
FX-800.
In addition, a high speed online archive for storage of the DEM data is
needed. A system such as an optical disk jukebox is sufficient if the required
data can be downloaded to the host memory prior to processing the SAR image
data. The current jukebox systems can support 100 disks at 2 GB/disk, which
would hold about 2,000 100 x 100 km DEM frames (i.e., 2 x 10^13 m^2),
assuming 25 m spacing and 2 bytes each for the x, y, and z coordinates (or
latitude, longitude, and elevation).

Figure 9.20 Example hardware architecture for real-time post-processor subsystem using only commercial hardware.

The major limitation in an optical disk DEM
archive is the I/O data transfer rate. Typically the sustained transfer rate is less
than 500 KB/s, which translates into a minimum of 200 s to download an
image. For real-time processing a network of these devices would be required
to achieve the required data rates (~6 MB/s).

In Fig. 9.20, one possible architecture for a real-time post-processor system
is shown. Assuming an input data rate of 40 MB/s (i.e., 5 M complex samples
per second), the data is first frame synchronized to identify the start of a range
line and the sample boundaries. This custom interface board can also be used
to demultiplex the data across several input channels to reduce the input data
rate to a value compatible with each post-processor unit. Since the input data
must be blocked into image frames for geocoding, the CPU memory must have
sufficient capacity to stage the input processing block, the DEM, and workspace
for the intermediate products. This is on the order of 400 MB for a 100 km
single-look complex Seasat image frame. To reduce the required memory, a
processing block smaller than the image frame can be used at the cost of a
significant increase in the complexity of the data handling software and large
I/O rates between the CPU and peripheral storage.
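The archive sizing above is a straightforward byte count. A sketch using the disk and rate figures quoted in the text:

```python
# Byte-count sizing of the optical-disk DEM archive described above.
bytes_per_sample = 6                        # 2 bytes each for x, y, z
samples_per_frame = (100_000 // 25) ** 2    # 100 x 100 km frame at 25 m
frame_bytes = bytes_per_sample * samples_per_frame    # bytes per DEM frame

jukebox_bytes = 100 * 2e9                   # 100 disks at 2 GB each
frames_held = jukebox_bytes / frame_bytes   # about 2,000 frames

transfer_rate = 500e3                       # sustained I/O rate, bytes/s
download_time = frame_bytes / transfer_rate # roughly the 200 s quoted
```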
9.5 IMAGE DATA BROWSE SYSTEM

The high data rate output from the post-processor is not easily accessed by the
scientific community for visual interpretation of the imagery. The scenes are
often in a complex format and are too large for video display (i.e., 8 K x 8 K
pixels). Additionally, for real-time SAR correlation, the output data rates are
too high for electronic distribution across wide area communication networks
to scientists who may be located at a site remote from the SAR signal processing
facility. To provide the users rapid access to the most current data base, browse
image products are often generated and stored online. The scientists can then

log-on to the browse image data base management system and select imagery
for transfer to their home institutions across more conventional communication
channels.
An analogy to this data access scenario is the card catalog systems used in
a library (many of which are now electronic data bases). A user can search the
card catalog by title or author, if the specific book is known, to determine the
book location and status (e.g., on loan). Alternatively, if only the subject area
of interest is known, the subject catalog can be used to access all books related
to a specific topic within the library system. Contained in the catalog is a
synopsis or an abstract summarizing the book content, as well as detailed
information on its location. Similarly, an image browse system provides the
user with a low resolution summary of the image information contents. It could
be accessed by image file number or by site name if the user knows of a specific
scene. Also, as in the library catalog, if the user knows only of a location (i.e.,
latitude, longitude, area) a search can be made across all the image data products
in some specified region acquired during the time period of interest. The image
catalog contains information as to the processing status and the types of products
available.
The key science requirements in a browse data generation and distribution
system are twofold: good reconstructed image quality (at the user site); and a
short transfer delay time. The specifications controlling the browse system
performance are the channel capacity and the computational capacity of both
the transmitting and receiving computer systems. Generally, to achieve the
required access times for interactive browsing for some given link capacity,
spatial compression of the data products is required. The image compression
algorithm should be designed to minimize the number of computations needed
for image reconstruction since this capability must be replicated at each user
site. Additionally, the algorithm should be optimized for the unique characteristics
of the SAR image data, namely:

1. Large dynamic range (>60 dB) as a result of compression gain;
2. Speckle (multiplicative) noise, which increases the data entropy; and
3. Nonstationary statistics due to the varying target scattering characteristics.
Thus, the SAR data characteristics place some unique constraints on selection
of the data compression algorithm.
9.5.1 Browse System Requirements

Following are the system requirements necessary for the design of a browse
data processing and distribution system:
Image Quality Specifications
    reconstructed image resolution, Δx x ΔR_g (m)
    signal to compression noise ratio, SCNR (dB)
    reconstructed image size, N_l x N_p (pixels)

Data Access
    channel capacity, r_c (bps)
    channel characteristics (BER, SNR)
    peak image transfer rate, λ (images/hour)
    maximum access delay, T (seconds)

Given these inputs, we can then perform the analysis necessary to derive the
required compression ratio, d. Typically, the required compression is larger
than can be achieved by any lossless compression algorithm and, depending
on the required minimum signal to compression noise ratio, only a few lossy
compression algorithms are suited for SAR data compression (Chang et al.,
1988a).
9.5.2 Queueing Analysis of the Online Archive System

To determine the required compression ratio we must establish the system access
load. We assume a Poisson distributed access pattern where each access consists
of a single image file transfer. For this analysis we will further assume that a
single serial port is shared by all users. We do this without loss of generality
since the extension to multiple image transfers and multiple communication
channels can be made simply by redefining the image size and the channel
capacity. The browse system will therefore be modeled as an M/D/1 queueing
system, where M represents a Poisson distribution, D is a deterministic time
required to encode and transmit the image file, and 1 indicates a single system
for processing and distribution.
It can be shown that for this system the mean response time, T, approaches
(Kleinrock, 1975)

    T = W + T_e + T_t + T_d    (9.5.1)

where W is the waiting time to access the system and T_e, T_t, and T_d are the
encoding, transfer, and decoding times, respectively. The wait time is given by

    W = λ(T_e + T_t)^2 / [2(1 - λ(T_e + T_t))]    (9.5.2)

where λ is the mean number of images transferred per second. The transfer time
is given by

    T_t = n_b N_l N_p / (d r_c)    (9.5.3)

where n_b is the number of bits per image pixel, N_l and N_p are the line and pixel
dimensions of the image file, d is the compression ratio, and r_c is the channel
capacity in bits per second. Furthermore, we can write

    T_e = C_e N_l N_p / R_e    (9.5.4)

and

    T_d = C_d N_l N_p / R_d    (9.5.5)

where C_e and C_d are the numbers of computations per pixel required to encode
and decode the image and R_e and R_d are the computational rates (in FLOPS)
of the encoding and decoding processors, respectively.
We can now insert Eqn. (9.5.2)-Eqn. (9.5.5) into Eqn. (9.5.1) and write an
expression for the compression ratio, d. However, this relatively complex
algebraic equation is not very useful since, in most cases, the compression
algorithm encoding and decoding computational complexity factors (i.e., Ce,
Cd) depend on the compression ratio. Instead we will illustrate the use of these
equations with an example.
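Equations (9.5.1)-(9.5.3) can be combined into a small response-time model. A sketch of the M/D/1 calculation (returning None when the offered load saturates the single server, where Eqn. (9.5.2) no longer applies):

```python
def browse_response_time(lam, n_b, N_l, N_p, d, r_c, T_e=0.0, T_d=0.0):
    """Mean M/D/1 response time per Eqns. (9.5.1)-(9.5.3).

    lam : mean request rate, images/s
    T_e, T_d : encoding and decoding times, s
    Returns None if the queue grows without bound.
    """
    T_t = n_b * N_l * N_p / (d * r_c)      # transfer time, Eqn. (9.5.3)
    service = T_e + T_t                    # deterministic service time
    if lam * service >= 1.0:
        return None                        # offered load >= 1: saturation
    W = lam * service**2 / (2.0 * (1.0 - lam * service))   # Eqn. (9.5.2)
    return W + T_e + T_t + T_d             # Eqn. (9.5.1)
```

With the Example 9.11 parameters below (1 K x 1 K byte pixels, d = 15, a 9.6 Kbps line, 20 requests per hour), the model gives a response time of roughly 70 s, consistent with Fig. 9.21a.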

ec

i='
GI

j::

GI

Ill

c
0

Q.
Ill

GI

a:

Example 9.11 Consider a browse system designed such that the images are
compressed upon receipt from the post-processor and stored in a compressed
format, so that the encoding time is T_e = 0. Furthermore, assume that the
decoding procedure is such that the receiving system can decode the data faster
than the channel can transmit,

allowing the decoding process to be fully overlapped with the image data transfer
(i.e., T_d = 0). Equation (9.5.1) becomes

    T = W + T_t    (9.5.6)

Inserting Eqn. (9.5.2) and Eqn. (9.5.3) into Eqn. (9.5.6) we can plot the total
access time (including the queue) as a function of access frequency, λ, and the
compression ratio, d, given the image size (N_l, N_p, n_b) and the link capacity
(r_c). If we assume a 1 K x 1 K pixel image is required for the user display, the
browse system must first reduce the original full resolution input image frame,
either by segmenting or averaging the original image. We will assume a byte
representation for each pixel and that the communication link is a 9.6 Kbps
line. No channel coding is included. The results shown in Fig. 9.21a indicate
that a compression ratio of 15-20 provides data access in less than 2 minutes
for 20 access requests per hour. If a 1 minute encoding time is required
(Fig. 9.21b) following the request receipt (i.e., T_e = 60 s), then the queue begins
to grow large as the request frequency approaches λ = 20 images/h. For this
case a reasonable solution would be to add a second 9.6 Kbps line.

9.5.3 Image Quality

Given that a compression ratio of 15-20 is adequate to service 25 requests for
1 MB images per hour per 9.6 Kbps data link, the problem remains to determine

Figure 9.21 Response time of browse (M/D/1) system as a function of access frequency λ and compression ratio d for (a) encoding time T_e = 0; (b) T_e = 1 min. (Courtesy of C. Y. Chang.)
the compression algorithm that can achieve the desired compression ratio, given
some image quality criterion.
For this measure, the traditional parameter used is a signal to compression
noise ratio

    SCNR = 10 log [ Σ_{i,j} n_p(i,j)^2 / Σ_{i,j} (n_p(i,j) - n_p'(i,j))^2 ]    (9.5.7)

where n_p(i,j) is the pixel value in the original image and n_p'(i,j) is the reconstructed
pixel value following transmission and decompression of the data. To achieve
a visually good quality image, the compression noise should be of the same
magnitude or less than the other noise sources in the data. For the SAR, system
noises such as thermal, bit error, quantization, and saturation are typically on
the order of 10-12 dB below the signal level, while the target dependent noises,
such as range and azimuth ambiguities, are nominally 15-18 dB down. The
exception is speckle noise. For a four-look image the signal to speckle noise
ratio is only 3 dB (Section 5.2). If 8 x 8 averaging is performed on this data
the speckle noise is then about 12 dB below the signal level and becomes
comparable to other noise factors.
The SCNR required for browse applications will therefore depend on the
processing applied to the image data before compression. If a low SCNR is
acceptable, as in the case of high speckle noise (one-look images), a large
compression ratio can be achieved, and thus we effectively trade distortion noise
for a higher resolution at a given link capacity. If we assume the browse image
size is that of a typical video display (i.e., 1 K x 1 K pixels), and that to achieve
this reduction we 8 x 8 average the four-look data, a SCNR ~ 15 dB is required
for good quality reconstructed images.
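Equation (9.5.7) is a per-image energy ratio. A minimal sketch in pure Python over flattened pixel lists (it assumes at least one pixel differs between the two images, so the noise sum is nonzero):

```python
import math

def scnr_db(original, reconstructed):
    """Signal to compression noise ratio, Eqn. (9.5.7), in dB.

    Both arguments are flat sequences of pixel values of equal length;
    at least one pixel must differ so the noise sum is nonzero.
    """
    signal = sum(x * x for x in original)
    noise = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    return 10.0 * math.log10(signal / noise)
```

A uniform reconstruction error of about 18% of the signal amplitude corresponds to the SCNR ≈ 15 dB quality threshold discussed above.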
An additional consideration is the spectral distribution of this noise power.
In the above comparison with the various system and target noise sources, we
assumed that the compression noise is essentially white across the spatial
spectrum of the image. In fact, many compression algorithms add a high
frequency noise characteristic, resulting from block encoding of the input data.
There are various techniques to distribute this noise more evenly across the
spectral bandwidth, although they typically result in an increased overall
compression noise (Ramamurthi and Gersho, 1986).

9.5.4 Compression Algorithm Complexity Analysis

Data compression algorithms can be broadly classified as either lossy or lossless


(noiseless). Because of the speckle noise and nonstationary statistics characteristic
of SAR data, lossless techniques are relatively ineffective, yielding at most a
compression factor of 1.2 to 1.4 (Chapter 6). Since we require much higher
compression ratios, and can tolerate some degradation in the image data for


the browse application, only lossy techniques will be considered in detail. The
lossy algorithms can be grouped as follows:
1. Predictive Coding
2. Transform Coding
3. Vector Quantization
4. Ad Hoc Techniques (e.g., fractal geometry)

We will discuss each briefly as it applies to the SAR image browse application.
Predictive Coding

This category typically offers a very simple coding/decoding procedure. However, these algorithms cannot achieve rates below 1 bit/pixel (i.e., a compression factor of 8). The image quality at 1 bit per pixel is generally not adequate for most science browse applications. A good example of this algorithm is Linear Three Point Predictive (LTPP) Coding (Habibi, 1971). The algorithm uses an autoregressive model to linearly predict the value of a pixel based on three neighboring values. The prediction error is then quantized and sent through the channel. The prediction coefficients must be updated if the statistics of the image change. Since this is generally true for SAR data, we assume the correlation matrix for each block is calculated before encoding. This requires 7 FLOP for each pixel. The encoding and decoding operations each require an additional 5 FLOP. Thus, for the LTPP, C_e = 12 FLOP and C_d = 5 FLOP per input pixel. The LTPP offers a simple implementation for compression factors of d ~ 4; however, it is limited in flexibility. For most SAR applications other techniques have better performance characteristics.
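As an illustrative sketch of this prediction loop (the neighbor set, predictor coefficients, quantizer step, and speckle-like test image below are our own assumptions, not the design of Habibi, 1971):

```python
import numpy as np

def ltpp_encode_decode(img, coeffs=(0.95, 0.95, -0.90), step=8.0):
    """Three-point linear predictive coding sketch.

    Each pixel is predicted from its west, north, and northwest
    neighbors (using already-decoded values, so encoder and decoder
    stay in lockstep); the prediction error is quantized, and only
    the quantized error need be transmitted.
    """
    a, b, c = coeffs
    recon = np.zeros(img.shape)    # decoder's reconstruction
    errors = np.zeros(img.shape)   # quantized errors sent over the channel
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            w = recon[i, j - 1] if j > 0 else 0.0
            n = recon[i - 1, j] if i > 0 else 0.0
            nw = recon[i - 1, j - 1] if i > 0 and j > 0 else 0.0
            pred = a * w + b * n + c * nw
            e = img[i, j] - pred
            errors[i, j] = step * np.round(e / step)   # scalar quantizer
            recon[i, j] = pred + errors[i, j]
    return errors, recon

rng = np.random.default_rng(0)
img = rng.gamma(4.0, 25.0, size=(32, 32))   # speckle-like stand-in image
errors, recon = ltpp_encode_decode(img)
print(f"peak reconstruction error: {np.max(np.abs(img - recon)):.2f}")
```

Because the quantizer sits inside the prediction loop, the reconstruction error is bounded by half the quantizer step at every pixel; coarsening the step trades distortion for a lower entropy of the transmitted errors.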
Transform Coding

Transform coding maps data from the spatial image domain to a representation
that is more efficient for encoding the image information. The most frequently
utilized transforms are the cosine and the Hadamard. The Hadamard transform
offers a lower computational complexity than the cosine transform at a reduced
performance. However, transform coding almost always yields better performance
than predictive coding at the same compression ratios, and it offers more
flexibility in that any compression ratio can be specified if the resultant image
distortion is acceptable. The major disadvantage is the computational complexity,
since both the encoding and decoding procedures require a large number of
two dimensional transforms.
A comparative analysis of the compression algorithms listed at the beginning
of this section has recently been performed for SAR (Chang et al., 1988b). That
report concludes that an adaptive discrete cosine transform (ADCT) procedure
is the optimum approach for coding SAR image data in that it produces the
best SCNR for a given compression ratio. Essentially, the steps of the adaptive

transform coding algorithm are as follows (Chen et al., 1977):


1. Partition image into blocks (e.g., 32 x 32 pixels);
2. Transform each block with a 2D energy packing transform (e.g., Fourier, cosine, Hadamard);
3. Classify each block (e.g., four classes) based on its activity (e.g., variance, or mean to standard deviation ratio);
4. Generate a bit allocation map for each class to efficiently code the transform coefficients (i.e., more bits to higher activity coefficients);
5. Normalize, quantize, and code the transform coefficients based on the class and bit map for that block.
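A minimal sketch of these five steps (the block size, class thresholds, and the per-class quantizer steps standing in for a full bit-allocation map are our own illustrative assumptions, not the parameters of Chen et al., 1977):

```python
import numpy as np

def dct_matrix(S):
    """Orthonormal DCT-II matrix of size S x S."""
    k = np.arange(S)[:, None]
    n = np.arange(S)[None, :]
    C = np.sqrt(2.0 / S) * np.cos(np.pi * (2 * n + 1) * k / (2 * S))
    C[0, :] = np.sqrt(1.0 / S)
    return C

def adct_encode_decode(img, S=8):
    """Adaptive DCT coding sketch: partition, transform, classify by
    activity, quantize by class, and reconstruct."""
    C = dct_matrix(S)
    rows, cols = img.shape
    blocks = [img[i:i + S, j:j + S]
              for i in range(0, rows, S) for j in range(0, cols, S)]
    coeffs = [C @ b @ C.T for b in blocks]              # Step 2: 2D DCT
    activity = np.array([b.var() for b in blocks])      # Step 3: classify by variance
    classes = np.digitize(activity, np.quantile(activity, [0.25, 0.5, 0.75]))
    steps = {0: 32.0, 1: 16.0, 2: 8.0, 3: 4.0}          # Steps 4-5: finer quantization
    out = np.zeros(img.shape)                           # for higher-activity classes
    for idx, (cf, cl) in enumerate(zip(coeffs, classes)):
        q = steps[cl] * np.round(cf / steps[cl])
        bi, bj = divmod(idx, cols // S)
        out[bi * S:(bi + 1) * S, bj * S:(bj + 1) * S] = C.T @ q @ C
    return out, classes

rng = np.random.default_rng(1)
img = rng.gamma(4.0, 25.0, size=(64, 64))               # speckle-like stand-in image
out, classes = adct_encode_decode(img)
scnr = 10 * np.log10(np.mean(img ** 2) / np.mean((img - out) ** 2))
print(f"SCNR = {scnr:.1f} dB")
```

In a real coder the quantized coefficients would be entropy coded per the bit allocation map; here the class-dependent step size simply stands in for that allocation.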
The coded transform coefficients, the bit allocation map, and the class map
must be transmitted via the channel for use in the image reconstruction process.
Following the data transmission, the inverse processes of renormalization and
quantization are table lookup procedures, while the 2D inverse transform is
computationally intensive.
The computational complexity for block sizes 16 x 16 pixels or larger is
essentially driven by the cost of performing the transforms. For an ADCT
procedure, the encoding complexity per input pixel, assuming a square transform block of dimension S, is (Lee, 1984)

    C_e^ADCT = 2(2 log₂ S − 1 + 1/S) + N_1N_p/(2S⁴)    (FLOP/pixel)    (9.5.8)

where the first term on the right is for the transform (Step 2) and the second term is the sorting operation (Step 3). For example, a 1 K x 1 K browse image, coded using block size S = 16, requires C_e^ADCT ≈ 22 FLOP per input pixel for encoding and 14 FLOP/pixel for decoding, which does not require sorting. For a 128 pixel block, the encoding and decoding complexity each increase to 26 FLOP/input pixel (i.e., the sorting is negligible). An alternative transform algorithm, the Hadamard transform, is sometimes preferred, in which integer arithmetic is employed since it requires only addition operations. The performance of the Hadamard transform exhibits a slightly degraded SCNR relative to the cosine transform.
Some results from coding Seasat browse images (after 8 x 8 averaging of
the four-look image) are shown in Fig. 9.22 and Fig. 9.23. For this data we
have used a 16 pixel block size with four activity classes. Note that the image
becomes blocky at the higher compression ratios, even though the SCNR
remains above 15 dB. For the Detroit scene, the statistics vary widely from the urban region to the lake, thus skewing the classes and degrading the ADCT performance.

Figure 9.22 Adaptive discrete cosine transform (ADCT) compression of Seasat image of Detroit, Michigan. (a) Original image; (b) Compression ratio, d = 10, SCNR = 18.4 dB; (c) d = 30, SCNR = 16.0 dB; (d) d = 50, SCNR = 15.1 dB.

For a browse application, where the user typically has little processing capability at the home institution (or on a ship, or in the field), the transform coding generally exceeds the maximum decoding complexity requirement. However, for point-to-point data transfer where high speed processors can be installed at either end, the ADCT is the optimum solution for compression of SAR image data.
Vector Quantization

The vector quantization (VQ) algorithm offers a compromise between the simplicity of the LTPP and the performance of the ADCT. This procedure provides a reasonably good reconstructed image quality at high compression ratios. Furthermore, the decoding procedure is reduced to a table lookup, requiring essentially no mathematical operations by the user. The disadvantage of the VQ algorithm is that the encoding complexity can be high for large codebooks and the edge effects can be severe if the image exhibits a wide


dynamic range. Even with these drawbacks, the VQ algorithm appears to be the best choice for the SAR browse application.

Essentially, vector quantization is a generalization of scalar quantization. The steps of the procedure are as follows (Linde et al., 1980):

1. Divide image into blocks (vectors);
2. Generate codebook by training with a subset of the source data;
3. Compare each image vector with the codebook to determine the most similar codeword;
4. Transmit the index of the selected codeword for each vector and the image codebook.

The performance of this algorithm is dependent on how well the subset of the source data used to train the codebook (Step 2) represents the entire source data set. If the statistics vary at different portions of the image, such as in the Detroit scene of Fig. 9.22, and if the codebook does not contain vectors from the bright city areas, for example, these areas will be highly distorted in the reconstructed image. Assuming we select 2^m codewords as the codebook size, the maximum compression ratio is

    d_max = n_bS²/(m + 2^m n_bS⁴/N_p)    (9.5.9)

Figure 9.23 ADCT compression of (8 x 8) averaged Seasat image of Kennewick, Washington: (a) Original; (b) d = 10, SCNR = 15.6 dB; (c) d = 30, SCNR = 12.9 dB; (d) d = 50, SCNR = 11.8 dB.

Figure 9.24 Vector quantization (VQ) compression of (8 x 8) averaged Seasat images: (a) Original, Kennewick, Washington; (b) d = 14.8, SCNR = 14.3 dB; (c) Original, Detroit, Michigan; (d) d = 14.8, SCNR = 16.2 dB.
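The four steps above can be sketched numerically as follows (the test image, block size S = 4, codebook size 2^m with m = 4, training fraction q = 0.25, and M = 4 iterations are our own illustrative choices; the training loop is a generalized Lloyd iteration in the spirit of Linde et al., 1980):

```python
import numpy as np

def train_codebook(vectors, m, iters, rng):
    """Step 2: LBG-style training -- 2**m codewords, `iters` Lloyd
    iterations (the M of Eqn. (9.5.10))."""
    book = vectors[rng.choice(len(vectors), 2 ** m, replace=False)].astype(float)
    for _ in range(iters):
        nearest = ((vectors[:, None, :] - book[None, :, :]) ** 2).sum(2).argmin(1)
        for kk in range(len(book)):        # move each codeword to its cell centroid
            cell = vectors[nearest == kk]
            if len(cell):
                book[kk] = cell.mean(0)
    return book

def vq_encode(vectors, book):
    """Step 3: full-search nearest-codeword index for each image vector."""
    return ((vectors[:, None, :] - book[None, :, :]) ** 2).sum(2).argmin(1)

rng = np.random.default_rng(2)
img = rng.gamma(4.0, 25.0, size=(64, 64))          # stand-in browse image
S, m, nb = 4, 4, 8                                 # block size, codebook bits, bits/pixel
vecs = img.reshape(64 // S, S, 64 // S, S).transpose(0, 2, 1, 3).reshape(-1, S * S)
train = vecs[rng.choice(len(vecs), len(vecs) // 4, replace=False)]   # q = 0.25
book = train_codebook(train, m, iters=4, rng=rng)
idx = vq_encode(vecs, book)                        # Step 4: transmit idx plus book
d = nb * S ** 2 / (m + 2 ** m * nb * S ** 4 / img.size)
print(f"maximum compression ratio about {d:.1f}:1, codebook overhead included")
```

The per-vector cost of the full search grows as 2^m, which is the motivation for the tree-structured codebooks discussed in the text.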

where the vector block size is S x S, n_b is the number of bits per pixel, and N_p is the number of pixels in the image. The second denominator term is the overhead associated with transmitting the codebook. As an example, consider a 1 K x 1 K pixel browse image with S = 4, m = 8, n_b = 8. The compression ratio is 15:1. The codebook therefore represents approximately a 6% overhead.

The number of computations for the encoding procedure in Step 3, using a fully searched codebook, is

    C_e^VQ = (Mq + 1)2^m    (9.5.10)

where q is the fraction of the original image used in training the codebook and M is the number of iterations required to train the codebook. For example, assume m = 8, q = 0.25, and M = 4; the computational complexity is C_e^VQ = 512 FLOP/input pixel. The predominant operations are adds and compares, with very few multiplies required. As a result of this large number of computations for the VQ encoding process, a number of more efficient coding schemes, such as multi-level (tree) codebooks, have been developed (Chang, 1985). The performance of the VQ algorithm is illustrated in Fig. 9.24 and Fig. 9.25. An 8 bit, 2 level codebook was used in the compression routine to generate these images.
Ad Hoc Techniques
There are a number of other compression routines that do not fall into these
basic categories. Several of these have been evaluated for the SAR application.
Among those evaluated are (Chang et al., 1988b)

Fractal Geometry
Micro Adaptive Picture Sequencing (MAPS)
Block Adaptive Truncation (BAT)
However, none of these algorithms could improve on the performance to complexity ratio of the VQ. Either the computational burden was too high (e.g., fractals), or the performance was poor (e.g., MAPS is very blocky in low activity areas), or they could not achieve sufficiently high compression ratio (e.g., BAT is always less than d = 8). A more detailed consideration of these algorithms is presented in a review paper by Jain (1981).

Figure 9.25 VQ compression of averaged Seasat images: (a) Original, Los Angeles, California; (b) d = 15.1, SCNR = 11.7 dB; (c) Original, Beaufort Sea; (d) d = 15.1, SCNR = 17.2 dB.

REFERENCES

Appiani, E., G. Barbagelata, F. Cavagnaro, B. Conterno and R. Manara (1985). "EMMA-2, An Industry-Developed Hierarchical Multiprocessor for Very High Performance Signal Processing Applications," First Inter. Conf. on Supercomputing, St. Petersburg, Florida.
BBN Labs (1986). "Butterfly Parallel Processor Overview, Version 1."
Chang, C. Y., R. Kwok and J. Curlander (1988a). "Spatial Compression of Seasat SAR Imagery," IEEE Trans. Geosci. and Remote Sensing, GE-26, pp. 763-765.
Chang, C. Y., R. Kwok and J. C. Curlander (1988b). "Data Compression of Synthetic Aperture Radar Data," Jet Propulsion Laboratory, Technical Document, D-5210, Pasadena, CA.
Chang, C. Y., M. Jin and J. C. Curlander (1992). "Squint Mode Processing Algorithms and System Design Considerations for Spaceborne Synthetic Aperture Radar," IEEE Trans. Geosci. and Remote Sensing, in press.
Chang, P. C., J. May and R. M. Gray (1985). "Hierarchical Vector Quantizers with Table-lookup Encoders," Proc. IEEE Inter. Conf. Comm., 3, pp. 1453-1455.
Chen, W. H. and H. Smith (1977). "Adaptive Coding of Monochrome and Color Images," IEEE Trans. Comm., COM-25, pp. 1285-1292.
Davis, D. N. and G. J. Princz (1981). "The CCRS SAR Processing System," 7th Canadian Symp. on Remote Sensing, Winnipeg, Manitoba, pp. 520-526.
Dongarra, J. J. (1988). "Performance of Various Computers Using Standard Linear Equation Software in a Fortran Environment," Argonne National Laboratory Technical Memorandum, No. 23.
Fenson, D. (1987). "British Aerospace of Australia, ERS-1 Data Acquisition Facility," Technical Document.
Friedman, D. E. (1981). "Operational Resampling for Correcting Images to a Geocoded Format," 15th Inter. Symp. on Remote Sens. of Envir., Ann Arbor, MI, p. 195.
Habibi, A. (1971). "Comparison of the nth-order DPCM Encoder with Linear Transformations and Block Quantization Techniques," IEEE Trans. Comm. Tech., COM-19, pp. 948-956.
Heiskanen, W. A. and H. Moritz (1967). Physical Geodesy, W. H. Freeman, San Francisco, CA, pp. 181-183.
Hillis, W. D. (1985). The Connection Machine, MIT Press, Cambridge, MA.
Hwang, K. (1987). "Advanced Parallel Processing with Supercomputer Architectures," Proc. IEEE, 75, pp. 1348-1379.
Jain, A. K. (1981). "Image Data Compression: A Review," Proc. IEEE, 69, pp. 349-387.
Jin, M. and C. Wu (1984). "A SAR Correlation Algorithm which Accommodates Large Range Migration," IEEE Trans. Geosci. and Remote Sensing, GE-22, No. 6.
Kleinrock, L. (1975). Queueing Systems, Vol. 1: Theory, Wiley, New York.
Kwok, R., J. C. Curlander and S. S. Pang (1987). "Rectification of Terrain Induced Distortions in Radar Imagery," Photogram. Eng. and Rem. Sens., 53, pp. 507-513.
Lee, B. G. (1984). "A New Algorithm to Compute the Discrete Cosine Transform," IEEE Trans. Acoust. Speech Sig. Proc., ASSP-32, pp. 1243-1245.
Lewis, D. J., B. C. Barber and D. G. Corr (1984). "The Time Domain Experimental SAR Processing Facility at the Royal Aircraft Establishment Farnborough," Satellite Remote Sensing, Remote Sensing Society, Reading, England, pp. 289-299.
Linde, Y., A. Buzo and R. M. Gray (1980). "An Algorithm for Vector Quantizer Design," IEEE Trans. Comm., COM-28, pp. 84-95.
Naraghi, M., W. Stromberg and M. Daily (1983). "Geometric Rectification of Radar Imagery using Digital Elevation Models," Photogram. Eng., 49, pp. 195-199.
Quegan, S. (1989). "Interpolation and Sampling in SAR Images," IGARSS '89 Symposium, Vancouver, BC, Canada.
Ramamurthi, B. and A. Gersho (1986). "Nonlinear Space-Variant Postprocessing of Block Coded Images," IEEE Trans. Acoust. Speech Sig. Proc., ASSP-34, pp. 1258-1268.
Ramapriyan, H. K., J. P. Strong and S. W. McCandless, Jr. (1986). "Development of Synthetic Aperture Radar Signal Processing Algorithms on the Massively Parallel Processor," NASA Symposium on Remote Sensing Retrieval Techniques, Williamsburg, VA, December 1986.
Rocca, F., C. Cafforio and C. Prati (1989). "Synthetic Aperture Radar: A New Application for Wave Equation Techniques," Geophysical Prospecting, 37, pp. 809-830.
Sack, M., M. R. Ito and I. G. Cumming (1985). "Application of Efficient Linear FM Matched Filtering Algorithms to Synthetic Aperture Radar Processing," Proc. IEE, 132, pp. 45-57.
Schaefer, D. H. (1985). "MPP Pyramid Computer," Proc. IEEE Syst. Man. Cyber. Conf., Tucson, AZ.
Schreier, G., D. Kossman and D. Roth (1988). "Design Aspects of a System for Geocoding Satellite SAR Images," ISPRS, Kyoto, Comm. I, 1988.
Selvaggi, F. (1987). "SAR Processing on EMMA-2 Architecture," RIENA Space Meeting Proceedings, Rome, Italy.
Siedman, J. B. (1977). "VICAR Image Processing System Guide to System Use," Jet Propulsion Laboratory Publication 77-37, Pasadena, CA.
Test, J., M. Myszewski and R. C. Swift (1987). "The Alliant FX Series: Automatic Parallelism in a Multi-processor Mini-supercomputer," in Multiprocessors and Array Processors, Simulation Councils, San Diego, CA, pp. 35-44.
van Zyl, J. (1990). "Data Volume Reduction for Single-Look Polarimetric Imaging Radar Data," submitted to IEEE Trans. Geosci. and Remote Sensing.
Wolf, M. L., D. J. Lewis and D. G. Corr (1985). "Synthetic Aperture Radar Processing on a Cray-1 Supercomputer," Telematics and Informatics, 2, pp. 321-330.
Wu, C., K. Y. Liu and M. Jin (1982). "Modeling and a Correlation Algorithm for Spaceborne SAR Signals," IEEE Trans. Aero. Elec. Syst., AES-18, pp. 563-575.

10

OTHER IMAGING ALGORITHMS

In earlier chapters, we have discussed mainly those SAR imaging algorithms which have been developed for high resolution remote sensing applications. The emphasis has been on spaceborne systems. In the case of such a system, the effects of range migration and limited processor depth of focus are immediately evident (Section 4.1.3). This is even more the case at the relatively low frequency (L-band) of the earliest earth orbiting SAR, Seasat. The remote sensing application set the direction towards strip mapping (side looking) sensor deployment, and towards terrain imaging algorithms operating in that mode. In Chapter 4 and Chapter 5 we described the developments leading to appropriate processors in such applications, building on such work as that of Wu (1976).

At the same time, other classes of processors were being developed. One approach treats the impulse response function of the system directly as a two dimensional Green's function to be inverted. The complex basebanded radar signals, before range compression, correspond to the response function Eqn. (4.2.31):

    v̄(s, t) = exp[−j4πR(s)/λ] exp{jφ[t − 2R(s)/c]}    (10.0.1)

where φ(t) is the phase of the transmitted pulse:

    s(t) = cos[2πf_c t + φ(t)]

and R(s) is the range migration locus Eqn. (4.2.30).

Within the limitations imposed by depth of focus, the function Eqn. (10.0.1) corresponds to a stationary system function

    (10.0.2)

where (R_c is the slant range at beam center) and φ₁(R) = φ(2R/c). With the definition Eqn. (10.0.2), the response function Eqn. (10.0.1) is just

An imaging algorithm is then

    (10.0.3)

where P_v̄ is the two dimensional spectrum of the basebanded data v̄(s, t) before range compression. The algorithm Eqn. (10.0.3) was developed in particular by Vant and Haslam (1980, 1990).
Another class of processing algorithms different from rectangular range-Doppler processing has grown up, based on alternate schemes for attaining
range resolution in pulse compression radar. These are based on the "deramp"
processing scheme for range compression (Section 10.1 ). The idea is to do
whatever is necessary to salvage the process of simple frequency filtering on
the Doppler spectrum of the azimuth signal, while at the same time making
use of the full target spectrum thereby attaining improved resolution (focussed
processing). Such algorithms have been mainly developed for use in airborne
systems, but are not restricted to such systems. They are, however, particularly
well adapted to systems which are squinted away from side-looking so as to
deliberately aim (say) forward at some limited region of interest, as for example
in a spotlight mode SAR. Such systems are in contrast to the Seasat-like
deployments we have been mainly considering so far, in which the objective is
to map the terrain below the vehicle more or less uniformly, with squint only
a nuisance to be compensated in the processing.
In the case of the large bandwidth time product of the azimuth Doppler
signal imposed by the usual geometries, high resolution azimuth processing can
be done using the techniques of matched filter processing. From the point of view of the Green's function h(x, R|x', R') and its inversion (Section 3.2.1), the return signal v_r(x, R) of the radar, in response to a distributed target with complex reflectivity ζ(x', R'), is

    v_r(x, R) = ∫∫ h(x, R|x', R') ζ(x', R') dx' dR'

This integral equation has solution

    ζ(x, R) = ∫∫ h⁻¹(x, R|x', R') v_r(x', R') dx' dR'    (10.0.4)

where h⁻¹(x, R|x', R') is the inverse Green's function. In the case of the along track variable x, the kernel h(x, R|x', R') is approximately a linear FM, and the inversion kernel h⁻¹(x, R|x', R') is therefore another linear FM, the azimuth compression filter. Convolution is necessary to apply the inverse kernel to the data, as in Eqn. (10.0.4). Range migration enters as a complicating factor.

The algorithms we will describe in this chapter take an alternative point of view. The received radar data v_r(x, R) are pre-processed into signals v̄_r(x, R) such that, in the corresponding superposition equation:

    v̄_r(x, R) = ∫∫ h̄(x, R|x', R') ζ(x', R') dx' dR'

the kernel h̄(x, R|x', R') is of a very simple form, and in fact is just that kernel which is inverted by Fourier transformation. Thereby the image function ζ(x, R) results from Fourier transformation of the data function v̄_r(x, R). Application of compression filters and inverse Fourier transformation as needed in the rectangular algorithm do not occur. The focussed image results by a single two dimensional Fourier transform operation. The cost is (perhaps considerable) data preprocessing to form the signals v̄_r from the radar data v_r.

The algorithms of the class to be discussed go by various names in their variants, such as deramp FFT processing (sometimes called stretch processing), step transform processing, SPECAN processing, and polar processing. Ausherman et al. (1984) have given an overview of the class. All of these algorithms have links to the methods of tomographic imaging, which Munson et al. (1983) and Fitch (1988) discuss. We begin with a discussion of deramp processing, which is the direct predecessor of the step transform method of SAR imaging.

10.1 DERAMP COMPRESSION PROCESSING

Let us consider again range compression processing of a linear FM with high bandwidth time product:

    s(t) = cos 2π(f_c t + Kt²/2),    |t| < τ_p/2    (10.1.1)

If this is scattered back by a unit point target at range R₀ = ct₀/2, the received signal will be

    v_r(t) = cos 2π[f_c(t − t₀) + K(t − t₀)²/2],    |t − t₀| < τ_p/2    (10.1.2)

This has a frequency f = f_c + K(t − t₀) which depends on time, so that full resolution processing is not possible by simple frequency filtering.

Figure 10.1 Chirp generation and corresponding deramp range compression scheme.

In deramp compression, the received signal corresponding to Eqn. (10.1.2) is converted to a constant frequency signal with frequency linearly related to t₀, the quantity to be determined, by the system of Fig. 10.1. In the case t_r = 0, for example, we have

    v̄_r(t) = [s(t)v_r(t)]_diff. freq. = cos 2π(Kt₀t + f_c t₀ − Kt₀²/2)    (10.1.3)

which is a constant frequency sinusoid whose frequency Kt₀ encodes the range delay t₀. Working in terms of positive frequency components only, for convenience, the computation of Eqn. (10.1.3) can be written

    v̄_r(t) = s(t)v_r*(t) = exp[j2π(f_c t₀ − Kt₀²/2)] exp(j2πKt₀t)    (10.1.4)

The waveform Eqn. (10.1.3) to be Fourier analyzed is nonzero only over the time span for which the factors Eqn. (10.1.1) and Eqn. (10.1.2) overlap. If that overlap could be arranged to be the full pulsewidth τ_p, or nearly so, the frequency Kt₀ of the signal Eqn. (10.1.3) would be recovered with a resolution 1/τ_p, so that the resolution in t₀ approaches 1/|K|τ_p = 1/B, which is the full resolution afforded by the pulse compression waveform Eqn. (10.1.1). This can be done by delaying the pulse Eqn. (10.1.1) by some reference time t_r, say the midswath time:

    s*(t − t_r) = exp(−j2πf_c t) exp[−jπK(t − t_r)²],    |t − t_r| ≤ Δt/2    (10.1.5)
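As a numerical aside, the deramp scheme sketched in Eqns. (10.1.1) through (10.1.5) can be exercised end to end in a few lines; the chirp rate, pulse length, sampling rate, and delays below are arbitrary illustrative values, not parameters of any real system:

```python
import numpy as np

# A point-target return at delay t0 is multiplied by a conjugate chirp
# reference delayed by tr; the product is a constant-frequency tone at
# K(tr - t0), so a single FFT locates the target in range.
K = 1.0e12                      # chirp rate (Hz/s)
tau_p = 10e-6                   # pulse length
fs = 40e6                       # complex sampling rate
t0, tr = 2.0e-6, 0.0            # target delay, reference delay
N = 400
t = (np.arange(N) - N // 2) / fs

v_r = np.exp(1j * np.pi * K * (t - t0) ** 2) * (np.abs(t - t0) < tau_p / 2)
ref = np.exp(-1j * np.pi * K * (t - tr) ** 2)
tone = ref * v_r                # constant-frequency sinusoid, cf. Eqn. (10.1.6)

spectrum = np.abs(np.fft.fft(tone))
f_peak = np.fft.fftfreq(N, 1 / fs)[spectrum.argmax()]
print(f"FFT peak at {f_peak/1e6:.2f} MHz; K(tr - t0) = {K*(tr - t0)/1e6:.2f} MHz")
```

The FFT peak falls at K(t_r − t₀), here −2 MHz, so the bin index is a direct measure of target delay; the tone exists only over the overlap of pulse and reference, which is what motivates the reference-timing discussion in the text.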

(Fig. 10.2). The reference pulse Eqn. (10.1.5) is generated such that its length Δt is the timewidth of the slant range swath over which returns are expected. The result of the reference mixing operation then is a preprocessed signal at baseband:

    v̄(t) = s*(t − t_r)s(t − t₀) = exp[jπK(t₀² − t_r²)] exp[j2πK(t_r − t₀)t],    |t − t₀| ≤ τ_p/2    (10.1.6)

Figure 10.2 Frequency against time in deramp range compression. (a) Transmitted; (b) Reference; (c) Received.

This function v̄(t) is a constant frequency sinusoid, available over the full transmitted pulse duration τ_p, whose frequency K(t_r − t₀) is a direct measure of the target range parameter t₀. The precision to which that frequency can be measured is |K|δt₀ = 1/τ_p, so that target range resolution is

    δR₀ = cδt₀/2 = c/2B_R

where B_R = |K|τ_p is the bandwidth of the transmitted pulse. Thus the resolution of full bandwidth pulse compression processing is realized.

All of the operations involved in carrying out the deramp procedure are linear, since the reference function Eqn. (10.1.5) is independent of target position in the swath. Therefore a complex reflectivity distribution ζ(R) across the swath is reproduced by the system of Fig. 10.1, with the squared magnitude of each complex Fourier coefficient at the output of the FFT processor used for filtering the deramped received signal being the real reflectivity |ζ(R)|² at the corresponding range. A radar system with this type of range processing has been called a stretch radar (Hovanessian, 1980, p. 114).

A practical difficulty arises in deramp processing. Normally the swath width Δt is considerably larger than the pulse length τ_p (Fig. 10.2). Since we need to allow for a target return at any position in the swath, the processor FFT must have time length Δt, even though any particular frequency bin is occupied by signal for at most a much shorter time τ_p. By lengthening the processor time to Δt we have degraded the signal to noise ratio of the system. Further, there are generally present signal frequencies in v̄(t) ranging from K(t_r − t_near) to K(t_r − t_far), where t_near and t_far correspond to the two extremes of the range swath. Thus the deramped signal v̄(t) has a bandwidth |K|Δt, whereas v_r(t), the radar return itself, has only the band |K|τ_p. Thus the sampling rate of the deramped signal must be artificially high. The system is simplest to arrange in the case that Δt and τ_p are roughly equal. This means that either the swath must be narrow, less than a pulse width, or that subswaths must be processed with multiple reference functions used to dechirp each subswath signal separately, perhaps using the step transform procedure discussed in Section 10.2.

The potential application of deramp processing to SAR azimuth compression is clear. The algorithm has recently been called the SPECAN (SPECtral ANalysis) algorithm in that context (Sack et al., 1985). A number of difficulties arise, however, which can make the procedure somewhat involved for high resolution image formation. In addition to the problems mentioned above in regard to range processing, which are also present in the application to azimuth processing, the phenomenon of range migration can make it necessary to assemble together from various range bins the data to be applied to the azimuth FFT processor. Finally, since the azimuth chirp constant f_R depends on slant range across the swath, the relation between FFT bin number and image point azimuth position changes with range, a circumstance which requires interpolation operations to construct a uniformly sampled image. The situation is discussed by Sack et al. (1985), and in detail by Wu and Vant (1984). Both Sack et al. (1985) and Wu and Vant (1984) give a detailed analysis of the step transform, an important modification to which we now pass.

10.2 STEP TRANSFORM PROCESSING

The basic idea of deramp processing can be realized in a version known as the step transform. The method as applied to range compression is discussed by Perry and Kaiser (1973) and by Martinson (1975). Perry and Martinson (1977) also mention the technique in the context of along-track SAR processing. An analysis of the along-track application is given by Sack et al. (1985), and by Wu and Vant (1984). Wu and Vant (1985) analyze the modifications that need to be made in the case of a highly squinted (spotlight) SAR, in which case the along-track Doppler signal is not necessarily well approximated as a linear FM.

With simple deramp range processing (Section 10.1), difficulties arise if the range swath timewidth Δt over which a return signal can occur is noticeably longer than the width τ_p of a transmitted pulse. Even in the case of a swath only the width of the transmitted pulse, in deramp processing the deramped signal v̄(t) of Eqn. (10.1.6) will not capture the return signal over the majority

of its width unless the target is near the center of the swath (Fig. 10.2). This suggests separating the full swath of interest into a number of subswaths, each of width considerably less than that of a transmitted pulse, with each subswath provided with its own local reference signal (Fig. 10.3). Thereby essentially all of the signal span of any return can be captured, with different time segments of the full pulse appearing in different subswaths. Full resolution processing then requires simultaneous processing of the signals from multiple subswaths. The step transform is the two-stage procedure which implements the scheme.

Figure 10.3 Multiple deramp references used in step transform processing.
Coarse Range Coefficients

Consider then a single subswath, the nth, centered on a reference time t_n, with a target at range time t₀ (Fig. 10.4). The deramped signal for that subswath, similar to Eqn. (10.1.6), is

    v_n(t) = exp(jφ) exp[j2πK(t_n − t₀)t],    |t − t_n| ≤ Δt/2    (10.2.1)

Figure 10.4 Frequency plot for step transform pulse compression.

where

    φ = πK(t₀² − t_n²)

Carrying out Fourier transformation of this over the interval Δt centered on t_n determines the frequency K(t_n − t₀) to a resolution δf = 1/Δt (assuming that the interval in question is not at the end of the target pulse), and thereby determines the range R₀ of the target to a resolution δR = c/2|K|Δt, coarser by the ratio τ_p/Δt than the full resolution capability c/2|K|τ_p of the system. This so-called coarse range processing yields the same range information in adjacent subswaths, since any particular target appears in multiple subswaths, although at different frequencies separated by the frequency step KΔt corresponding to the time shift Δt of the reference linear FM signals.

It is the further processing of the redundant coarse resolution information about each target in adjacent subswaths (subapertures) which leads to the final full resolution range information. This redundant information resides in different frequency intervals in adjacent subapertures, and it is the phase changes from one subaperture to another which lead to full resolution range measurement. It is therefore of interest to examine the Fourier analysis of a target return in each particular subaperture centered on time t_n. Proceeding in the language of continuous time and frequency variables, we determine the Fourier transform over the aperture t_n − Δt/2 ≤ t ≤ t_n + Δt/2 of the deramped signal Eqn. (10.2.1), taken with origin at the start of the aperture:

    V_n(f) = exp(jφ) ∫₀^Δt exp[j2πK(t_n − t₀)(t_n − Δt/2 + t)] exp(−j2πft) dt
           = Δt exp[jπKt₀(t₀ + Δt)] exp[jπKt_n(t_n − Δt)] exp(−j2πKt_nt₀) exp(jπΔtu) sinc(πΔtu)    (10.2.2)

where

    u = K(t_n − t₀) − f

In each subaperture, a target at some t₀ appears essentially in one frequency bin, that at f = K(t_n − t₀). The first exponential factor in the corresponding coefficient Eqn. (10.2.2) is a constant, independent of n, and the second can be compensated by multiplying by its conjugate, since all of its terms are

known. The compensated value of V_n(f) in that bin is just

    V_n'(f) = (const) exp(−j2πKt_nt₀)    (10.2.3)

(In practice, the FFT is used so that V_n(f) is sampled at a spacing 1/Δt in f.) This is a sinusoid in the time variable t_n with frequency Kt₀ (Fig. 10.5). Discrete Fourier analysis of the compensated coarse resolution frequency coefficients Eqn. (10.2.3) over t_n, during the target span τ_p (Fig. 10.5), then yields a spectrum which is ideally an impulse at frequency Kt₀. The impulse can be located to a resolution of nearly 1/τ_p in |K|t₀, or a resolution 1/|K|τ_p in t₀, the full available resolution of the pulse compression system.
Oversampled Coarse Range Analysis

The appearance of the sinc function in the expression Eqn. (10.2.2) for V_n(f), rather than a rectangle function in frequency of width 1/Δt, introduces the possibility of aliasing (Appendix A). Ideally, only targets with t₀ such that

    |t̄₀ − t₀| < 1/2|K|Δt

would contribute to the response at any particular frequency f = K(t_n − t̄₀). The bandwidth of the signal Eqn. (10.2.3) would be 1/Δt, the span of Kt₀ values contributing. However, due to its sidelobes, the sinc function spreads more widely than 1/Δt. Therefore targets further from t̄₀ than 1/2|K|Δt will contribute to the subaperture response at any particular frequency, say targets out to a distance such that

    |t̄₀ − t₀| < p/2|K|Δt,    p > 1    (10.2.4)

so that the band to be analyzed over the subaperture time Δt is p/Δt. This requires a sample spacing in t_n of Δt/p, rather than Δt, in order to avoid aliasing. The oversampling factor p typically used is on the order of 2 or 3, unless the coarse resolution filter has very well controlled sidelobes. That is, two or three times as many subapertures are generated than are sketched in Fig. 10.3.

Digital Coarse Range Analysis

Sack et al. (1985) describe the digital algorithm of step transform range compression in detail. For a target at t₀ = mδt (Fig. 10.6a), the basebanded radar return is multiplied by the appropriate deramping chirp, in terms of the local index l on subaperture n,

    exp{−jπK[(l − L/2)δt]²},    l = 0, ..., L − 1    (10.2.5)

I II I

- - --------o

t=t'+t 0 -.M.

r--

to
= 111&

.:1t

=L8t

t K.:1t
_j_

_L

1/.:1t

Subaperture n
(b)

~
i=O

~N8t

...__ _ _ _ _ ___.!---Aperture A

2
~.._~__.o~---'-----40~~~'--~~o~~~-'-~~-

t n- 1

t0

tn+1

3
tA

tn

~M=L8t--1
Figure 10.5 Coarse range bins in deramp range compression.

Figure 10.6 Time sampling in step transform. (a) Single subaperture; (b) Oversampling of
subapertures for fine resolution transform (case p = 3).

This produces the sampled deramped signal in the aperture at t_n = nδt:

v_n(l|n) = exp{jπK(δt)²(n − m)[(n − m − L) + 2l]}   (10.2.6)

Taking the FFT of the sequence Eqn. (10.2.6) over the aperture time variable l yields the discrete coefficients corresponding to the spectrum V_n(f):

V_n(k|n) = Σ_{l=0}^{L−1} v_n(l|n) exp(−j2πkl/L)
= exp[jπK(δt)²(n − m)(n − m − L)] exp[ju(L − 1)] sin(Lu)/sin(u),   k = 0, …, L − 1

where

u = π[K(δt)²(n − m) − k/L]   (10.2.7)

Digital Fine Range Analysis

The coefficients Eqn. (10.2.7) provide a complete coarse range analysis in every subaperture n, with resolution 1/|K|Δt. The various subaperture coefficients Eqn. (10.2.7) are processed together, with respect to the time index n, to obtain the final fine resolution analysis of the range returns. We select sequential subapertures indexed by i and centered at uniformly spaced times t_0 = i(Nδt) (Fig. 10.6b). Allowing for oversampling to avoid ghost images in the final output, we have

δt_0 = Nδt = Δt/β = Lδt/β,   N = L/β   (10.2.8)

If a particular target resides in frequency bin number k of aperture i, it will reside with the same amplitude in bin k + LKN(δt)² of aperture i + 1, since the aperture time separation Nδt corresponds to a frequency increase NKδt (Fig. 10.5), and the subaperture bin size is 1/Δt = 1/Lδt. We must therefore arrange matters such that this bin increment is an integer:

p = LKN(δt)² = K(Lδt)²/β = K(Δt)²/β = integer   (10.2.9)

using Eqn. (10.2.8). In the expression Eqn. (10.2.7) for the coefficients V_n(k|n), we have thereby arranged for u to be held constant as n changes by iN from one subaperture to another. The remaining variation of V_n(k|n) with n resides in the phase angle,

v = πK(δt)²(n − m)(n − m − L)
= πK(δt)²[n(n − L) + m(m + L) − 2mn]   (10.2.10)

The variation with n represented by 2mn is what will "do the job" in determining the target index m when we transform over n. The factor n(n − L) is an unwanted variation with n, and must be compensated.

The range analysis proceeds as indicated in Fig. 10.6b. Selecting a subaperture number, say A, A = 0, 1, …, centered at t_A = (n_0 + AN)δt, all targets with |t_0 − t_A| < τ_p/2 will appear in that subaperture. The available coarse FFT coefficients Eqn. (10.2.7) relative to those targets lie in I/2 apertures around aperture A, where I = βτ_p/Δt can be arranged to be an integer. The phase of the coefficients Eqn. (10.2.7) is compensated to obtain coefficients for analysis:

V′_n(k|n) = V_n(k|n) exp[−jπK(δt)²n(n − L)]   (10.2.11)

These coefficients are found by tracing through the matrix of values V_n(k|n) as in Fig. 10.7, remembering that the coefficients V_n(k|n) are periodic in k.

Figure 10.7 Subaperture selection for fine resolution analysis in step transform (case β = 2).

The fine range analysis is then obtained by taking the I-point FFT of the coefficients Eqn. (10.2.11). The resulting coefficients g(r|A) are such that

(taking n_0 = 0):

|g(r|A)| = |sin(Lv)/sin(v)| |sin(Iw)/sin(w)|   (10.2.12)

where

v = πK(δt)²(NA − m)
w = π[KN(δt)²m + r/I],   r = 0, …, I − 1   (10.2.13)

Since we have approximately

I = 1/|K|N(δt)²   (10.2.14)

then

Iw = π[K(δt)²mNI + r] ≈ π[r + m sgn(K)]   (10.2.15)

so that the resolution in target position m, as measured by r, is δm = 1, that is, one time sample δt = 1/|K|τ_p, the full resolution of the pulse compression system.

Figure 10.8 Frequency bins and response functions in step transform. (a) Coarse and fine frequency bins. (b) Response functions sin(Lv)/sin(v), sin(Iw)/sin(w).

Elimination of Fine Range Ambiguities

The index A in v of Eqn. (10.2.13) numbers the coarse frequency resolution cells of size 1/Δt within which the index m provides fine resolution (Fig. 10.8). Since |g(r|A)| of Eqn. (10.2.12) is periodic in r with period I, if |g| peaks at r = r̄, we know only that the target is at m with

m mod I = −r̄ sgn(K)   (10.2.16)

The ambiguity is resolved, however, since each fine resolution analysis corresponds to a separate aperture of length NA. However, each target will appear in multiple subapertures in general, which leads to ghosts, replications in ambiguous fine bins, attenuated only somewhat by the coarse bin factor sin(Lv)/sin(v) of Eqn. (10.2.12). This sidelobe leakage of the coarse analysis (Fig. 10.8) is alleviated as the aperture oversampling factor β increases.

To see this, consider that each fine resolution cell covers a frequency interval 1/τ_p, while each coarse cell covers an interval 1/Δt. Therefore there are needed

I′ = (1/Δt)/(1/τ_p) = τ_p/Δt = (B/|K|)/Lδt = 1/|K|L(δt)² = NI/L = I/β   (10.2.17)

fine resolution cells to cover a coarse cell, where we used Eqn. (10.2.14) and the definition β = L/N. But the fine analysis computes I fine bin coefficients, a proportion β more than needed. Therefore, only the first 1/β proportion of the fine resolution coefficients need be reported out of each fine resolution FFT, the next set of fine resolution bins being picked up as the first 1/β of the coefficients of the fine FFT for the next value of A.

This data selection process is what defeats sidelobe leakage in the coarse FFT. From Fig. 10.9, it is clear that any target which is in a particular coarse bin will have a frequency over aperture number i which is in the first 1/β part of the band of the fine resolution FFT. However, if the response in the coarse bin in question is in fact a sidelobe leakage from the next adjacent coarse bin (or if the mainlobe has been so broadened by weighting for sidelobe control that the adjacent bin comes in through a skirt of the mainlobe), it will have a frequency over i in the range below the lowest 1/β part of the fine band, and so forth. Thus, for say β = 3, we will not see sidelobe leakage until the response is leaking in from three coarse bins away from the one under analysis. That one we will see, because it is aliased into the lowest region of the ith aperture spectrum due to the periodicity of the discrete fine resolution spectrum. One then chooses β large enough that the aliased response is from a sidelobe far enough out on the coarse resolution response that it is of negligible amplitude.
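The bin-selection rule above can be sketched in a few lines: each fine FFT over aperture A yields I coefficients, but only the first I/β are reported, with the next block supplied by the FFT for A + 1. The grid sizes are illustrative assumptions, not values from the text.

```python
# Sketch of the fine-bin selection rule: successive coarse analyses step
# by I/beta fine bins, and only the first 1/beta proportion of each fine
# FFT is reported, so the selections tile the fine-bin axis exactly once.
I, beta = 12, 3             # illustrative fine-FFT length and oversampling factor
keep = I // beta            # fine bins reported per coarse analysis

reported = []
for A in range(4):          # four successive coarse analyses
    # global fine bin covered by coefficient r of analysis A
    fine_bins = [A * keep + r for r in range(I)]
    reported.extend(fine_bins[:keep])   # first 1/beta proportion only

# the selections abut with no gaps or duplicates
assert reported == list(range(4 * keep))
```

Keeping only the leading 1/β of each analysis is exactly the mechanism that pushes adjacent-bin sidelobe leakage out of the reported band.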

Figure 10.9 Coefficient selection in fine range analysis of step transform compression (case β = 3).

Azimuth Compression

When the step transform algorithm is used for azimuth SAR processing, it is the linear FM of the sampled Doppler signal which is analyzed. The procedure is just that which has been detailed above, with the usual complications of range migration and change of Doppler frequency rate f_R with range to be considered. Sack et al. (1985) have given a good discussion.

So far as change of f_R with range is concerned, the step transform procedure is simply adjusted every so often across the range swath as f_R is updated. Since the input and output sampling rates δt of the algorithm are independent of f_R, no interpolation is needed on the output to produce a uniform image grid. The only complication internal to the algorithm is the requirement Eqn. (10.2.9) that the coarse range bin stepout p used from one subaperture to another be an integer. This is most conveniently done by adjusting N inversely to the change in f_R, so that the overlap ratio β depends on f_R. Since the percent change in f_R over the swath is normally small in space based systems, no great change results in the system operation on that account. However, whatever N is used must also perforce be an integer, and changes in N involving a fractional part of an integer cannot be accommodated. Sack et al. (1985) suggest then using some number, say J, of reference ramps, with time origins spaced at multiples j of δs/J:

s_n(j) = (n_0 + j/J)δs

If integer spacings (in δs) of N_0, corresponding to s_n = n_0δs, (n_0 + N_0)δs, (n_0 + 2N_0)δs, …, are to be changed to noninteger spacings N_0 + j/J, for some j = 0, …, J − 1, corresponding to s_n = n_0δs, (n_0 + N_0 + j/J)δs, (n_0 + 2N_0 + 2j/J)δs, …, we can then select in turn reference ramps with starting indexes n_0, n_0 + j/J, n_0 + 2j/J, …, and in general n_0 + (ij) mod J, for use in successive subaperture analyses, with subsequent shifts by integer multiples of N_0 as before. The result is that the coarse resolution bin stepout from one subaperture to the next is exactly an integer, as required.

There is a further consequence of such use of noninteger starting indexes n_0, however. In forming the sampled deramped signal the center of the aperture now may not be an integer number of sample steps: s_n = (n + a)δs, a > 0. Generally the deramped signal is

v_n(l|n) = exp{jπK(δs)²[(n − m)(n − m − L) − a(a + L) + 2(n − m + a)l]}

The term 2al in the exponent is accounted for from one aperture to the next by the nature of the noninteger stepout in n. However, the factor a(a + L) represents a term depending on n, since a is in general different for every n, and must be compensated in forming the sequences V′_n(k|n) of Eqn. (10.2.11) from the coarse coefficients V_n(k|n), just as was the previous term n(n − L).

Range migration is handled much as in the algorithms of Section 4.2.3. That is, the appropriate data is gathered together along the curved migration path in range/azimuth memory before Fourier transformation in the Doppler domain, but after range compression. If we assume the nominal linear range walk is removed in the time domain as described in Section 4.2.3, then we deal essentially only with range curvature. Each coarse resolution FFT in the Doppler domain operates over some limited span S′ of azimuth time. If S′ is adequately small, the residual migration (mainly quadratic) over the aperture S′ will be less than one range bin, or half a range bin, or whatever is desired, depending on the precision of processing needed. As Sack et al. (1985) note, this establishes an upper bound to the coarse aperture time S′. Since we have as always the nominal migration locus (after walk removal):

R(s) = R_0 + V_st²s²/2R_0   (10.2.18)

where s is slow (azimuth) time, the worst case situation, at the end of the full synthetic aperture, yields a range migration due to the curvature over the subaperture, and a limit (for quarter-cell accuracy):

ΔR = (V_st²/2R_0)[(S/2)² − (S/2 − S′)²]
= V_st²(S − S′)S′/2R_0 < δR/4   (10.2.19)

Provided 2R_0δR/V_st²S² > 1, this is always the case. Otherwise, for correction to within a quarter range resolution cell the subaperture time is bounded as

S′(S − S′) < R_0δR/2V_st²

With apertures S′ chosen in each range bin, the appropriate coarse resolution bins are then patched together to form the input to the fine resolution FFT.

Another constraint is imposed by the necessity for range migration correction when using the step transform for slow time processing. Each fine resolution FFT relates to a number of targets separated in slow time by the sampling interval δs, which in this case is the radar pulse repetition interval. In frequency, these targets fill the band of width 1/S′ corresponding to the resolution of the coarse resolution FFT. Thus each coarse resolution frequency coefficient relates to a number of targets, which are separated in frequency by up to 1/S′ Hz, corresponding to a maximum separation in slow time of 1/|f_R|S′. (Since the analysis band |f_R|S of the coarse resolution FFT is just 1/δs, this can also be written as S(δs)/S′.)

The data for each fine resolution FFT is gathered together by selecting a single coarse resolution coefficient from each coarse resolution time interval and applying the appropriate range curvature correction (again assuming the linear range walk has been previously compensated). Therefore, each of the targets contributing to a particular coarse bin must have the same curvature correction applied, again to within a range bin, or some appropriate fraction (say a quarter) thereof. Now consider (Fig. 10.10) two targets, in the same coarse Doppler bin, separated by the maximum amount Δf_DC = 1/S′. In slow time this corresponds to

Δs = 1/|f_R|S′ = S/B_D S′ = S(δs)/S′

The two targets are not in general at the same R_c. The largest discrepancy in range curvature correction required by any segment of length S′ common to the two targets occurs at the positions shown. From Eqn. (10.2.18), the difference in range curvature corrections required for those two targets, assuming f_R to be the same for both, is

(V_st²/2R_c){(S/2)² − [S/2 − S(δs)/S′]²}
= (V_st²/2R_c)[S²(δs)/S′](1 − δs/S′) ≤ δR/4

for the quarter cell criterion, requiring a bound

S′ ≥ 2V_st²S²(δs)/R_cδR   (10.2.20)

in which the term δs/S′ may be dropped.
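The quarter-cell subaperture bound of Eqn. (10.2.19) can be checked numerically. The platform numbers below (speed, range, aperture time, resolution cell) are illustrative assumptions, not values from the text.

```python
# Numeric sketch of the quarter-cell subaperture bound of Eqn. (10.2.19):
# curvature migration over a subaperture S' ending the full aperture is
# dR = Vst^2 (S - S') S' / (2 R0), required to stay below dr/4.
Vst = 7500.0      # sensor-target relative speed, m/s (illustrative)
R0 = 850e3        # closest-approach range, m (illustrative)
S = 2.5           # full synthetic aperture time, s (illustrative)
dr = 7.5          # slant range resolution cell, m (illustrative)

def curvature_migration(S_prime):
    return Vst**2 * (S - S_prime) * S_prime / (2 * R0)

if 2 * R0 * dr / (Vst**2 * S**2) > 1:
    print("quarter-cell criterion met for any subaperture length")
else:
    # largest S' (taking S' << S) satisfying Vst^2 * S * S' / (2 R0) < dr/4
    S_max = R0 * dr / (2 * Vst**2 * S)
    assert curvature_migration(S_max) < dr / 4
    print(f"subaperture must satisfy S' < {S_max * 1e3:.1f} ms")
```

For these sample numbers the "always satisfied" condition fails, and the subaperture is limited to a few tens of milliseconds.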


Figure 10.10 Range migration considerations in step transform azimuth processing.

In considering the computational requirements of the step transform SAR algorithm, Sack et al. (1985) conclude that the FFT operations needed are somewhat less computationally demanding than in the case of the rectangular algorithm. However, it is not clear what the overall operation time of such an algorithm might be in the case of a satellite platform. The FFT operations typically constitute only on the order of half the calculations, and the FFT is a very efficient operation in comparison with interpolation or range migration correction done in other ways. As a result, techniques such as this, based on spectral analysis, are being used by ERS-1 for high-speed production of image products.

Yet another SAR image formation algorithm has a long history, especially in the aircraft "community". This is the polar processing algorithm, which can also be linked to the general idea of deramp processing, and to which we now turn.

10.3 POLAR PROCESSING

An algorithm of considerable practical importance has been developed over the


past decade for the digital processing of SAR data, primarily for use in aircraft
platforms, but not in general restricted to that case. This has come to be known
as the polar processing algorithm. The algorithm is rooted in the general class
of deramp algorithms discussed in Section 10.1. As in all deramp processing,


the mechanism of constant frequency filtering is salvaged by converting the


linear FM signal due to a target return into a constant frequency signal, whose
frequency encodes spatial position, either in range or along track, or both in
the case of a two dimensional "frequency", a wave number vector. In contrast
with deramp processing, while the step transform algorithm of Section 10.2
retains the range migration effects until compression processing is under way,
in polar processing the range walk effect is removed during preprocessing of
the radar returns. This simplifies the actual compression part of the algorithm.
Looked at from the point of view of inversion of the SAR system Green's
function (Section 4.1.1), in polar processing the data are formatted in such a
way that the Green's function of the resulting reformatted data is very simple,
just that which is inverted by a single (two-dimensional) Fourier transformation.
This is in complete analogy to the implementation of the range compression
matched filter as a simple Fourier transform operation in the deramp algorithm
for range processing, once the data have been formatted properly by the range
deramp operation.
As originally phrased (Walker, 1980), the algorithm was intended for the
imaging of a rotating object (a planet, for example) by a fixed radar, in
the procedure which has now come to be known as inverse SAR (ISAR)
(Wehner, 1987, Chapter 7). Although we follow the development presented by
Walker (1980), we shall use the language appropriate to a moving platform
and a stationary target. In either case, the use of polar processing is particularly
appropriate in the situation that a relatively localized region at some distance
from the platform is to be imaged. Whether or not the radar is side-looking is
of no great concern - the algorithm is useful for the case of large squint, as in
the spotlight mode of SAR operation (Brookner, 1977). Spotlight processing
has also been specifically related to tomographic imaging, a point of view which
also relates to polar processing (Munson et al., 1983).
10.3.1 The Basic Idea of Polar Processing

Let us consider first the situation of a unit point target located at a fixed vector position R_t, possibly in space (Fig. 10.11). The origin of coordinates is some arbitrary point in the general vicinity of the region to be imaged. A radar moves in space along some path described by a vector R_an(t), which is assumed to be known at every instant. The radar transmits pulses s(t), assumed to be linear FM with frequency rate K:

s(t)exp(j2πf_c t) = exp[j2π(f_c t + Kt²/2)],   |t| < τ_p/2   (10.3.1)

For the moment we ignore the change in range to target from time of pulse transmission until reception. With the origin of time taken at the instant of transmission of a pulse, the received waveform is then

v_n(t) = s(t − t_n) exp[j2πf_c(t − t_n)]   (10.3.2)

where t_n = 2R_n/c, and R_n is the (approximately constant) range to target during pulse period n.

Figure 10.11 Geometry of radar encounter with target in polar formulation.

Deramping the Received Data

For each pulse, the received signal Eqn. (10.3.2) is deramped, just as in the first step of deramp range compression (Section 10.1), using the waveform Eqn. (10.3.1), delayed and conjugated:

d(t) = s*(t − t_an) exp[−j2πf_c(t − t_an)]
= exp{−j2π[f_c(t − t_an) + K(t − t_an)²/2]}   (10.3.3)

where t_an = 2R_an/c is the known range time from radar to coordinate origin during pulse period n. The result is a video signal

g(t) = d(t)v_n(t) = exp{−j2π[(f_c + Kt)(t_n − t_an) − K(t_n² − t_an²)/2]}
= exp{−j2π[(f_c + K(t − t_an))(t_n − t_an) − K(t_n − t_an)²/2]}   (10.3.4)

The real signal represented by Eqn. (10.3.4) is then complex basebanded ("I, Q detected") to obtain the complex signal Eqn. (10.3.4) itself. Letting ΔR be the scene extent, and noting that t − t_an and t_n − t_an are of order 2ΔR/c, the second term in the exponent on the right can be dropped if ΔR ≪ cf_c/3|K|, which is normally the case.
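The deramped-video identity in Eqn. (10.3.4) can be verified numerically by multiplying a sampled return by the conjugated, delayed reference and comparing against the closed-form phase. The carrier, chirp rate, and delays below are illustrative assumptions, not values from the text.

```python
import cmath

# Numeric check of Eqn. (10.3.4): d(t) * v_n(t) has phase
# -2*pi*[(fc + K*(t - tan))*(tn - tan) - K*(tn - tan)^2 / 2] exactly.
fc, K = 1.275e9, 5.0e11          # carrier (Hz) and chirp rate (Hz/s); illustrative
tn, tan_ = 6.0e-4, 5.98e-4       # target and reference delays, s; illustrative

def s(t):                        # baseband chirp, as in Eqn. (10.3.1)
    return cmath.exp(1j * cmath.pi * K * t**2)

for t in (5.99e-4, 6.0e-4, 6.01e-4):
    vn = s(t - tn) * cmath.exp(2j * cmath.pi * fc * (t - tn))
    d = s(t - tan_).conjugate() * cmath.exp(-2j * cmath.pi * fc * (t - tan_))
    closed = cmath.exp(-2j * cmath.pi * ((fc + K * (t - tan_)) * (tn - tan_)
                                         - K * (tn - tan_)**2 / 2))
    assert abs(d * vn - closed) < 1e-6
```

The agreement is exact up to floating-point rounding, since the identity involves no approximation; the approximation enters only when the quadratic term is subsequently dropped.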

From Fig. 10.11, the range from radar to target during pulse n is

R_n = [(R_t − R_an)·(R_t − R_an)]^{1/2}
= R_an[1 − 2R_t·R_an/R_an² + (R_t/R_an)²]^{1/2}   (10.3.5)

or, to first order in the small (by assumption) quantity R_t/R_an,

R_n ≈ R_an − R̂_an·R_t   (10.3.6)

where R̂_an is the unit vector R_an/R_an. Then the deramped video Eqn. (10.3.4) is

g(t) = exp{j(4π/c)[f_c + K(t − 2R_an/c)]R̂_an·R_t}   (10.3.7)

We now define a data index vector r_n (three-dimensional, in general). The complex video signal g(t) of Eqn. (10.3.7) is sampled at rate f_s over the span of each pulse n, and the resulting numbers g(k/f_s) are indexed by

r_n(k) = (2/c)[f_c + K(t_k − 2R_an/c)]R̂_an   (10.3.8)

where t_k = k/f_s. By comparison of Eqn. (10.3.7) with Eqn. (10.3.8), the data array is seen to be just

g(t_k) = exp(j2π r_n·R_t)   (10.3.9)

The pulse and time indexes (n, k) are joined in the vector r_n, which from Eqn. (10.3.9) appears as a vector wave number.
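The first-order range approximation of Eqn. (10.3.6), on which the simple wave-number form of Eqn. (10.3.9) rests, is easy to check for a concrete geometry. The radar position and target offset below are illustrative assumptions.

```python
import math

# Quick check of Eqn. (10.3.6): Rn ~ Ran - Ran_hat . Rt, with error of
# order |Rt|^2 / Ran. Two-dimensional geometry; illustrative values.
Ran_vec = (40e3, 30e3)                 # radar position, m (|Ran| = 50 km)
Rt = (60.0, -45.0)                     # target offset from scene origin, m

Ran = math.hypot(*Ran_vec)
Ran_hat = (Ran_vec[0] / Ran, Ran_vec[1] / Ran)

exact = math.hypot(Ran_vec[0] - Rt[0], Ran_vec[1] - Rt[1])
approx = Ran - (Ran_hat[0] * Rt[0] + Ran_hat[1] * Rt[1])

# error should be bounded by |Rt|^2 / Ran, about 0.1 m here
assert abs(exact - approx) < (Rt[0]**2 + Rt[1]**2) / Ran
```

For a 50 km standoff and a target 75 m from the scene origin, the approximation is good to a few centimeters, far below a typical range cell.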

Figure 10.12 Region of data index in polar processing.

Imaging and Target Resolution

The name polar processing for this process derives from the fact that, as indicated in Eqn. (10.3.8), data for a particular pulse are indexed, in t, along the radial vector R_an of a spherical coordinate system (Fig. 10.12). For a two dimensional image (the usual case), this becomes a polar coordinate system. The dimensions of the data region Fig. 10.12 in the space of the wave number r_n of Eqn. (10.3.8) depend on the pulsewidth in time, along the direction of the radar position vector R_an at each pulse. In the cross-track dimension, normal to R_an, the data span is a circular segment of nominal extent 2(Δθ)/λ, where Δθ is the span of aspect angles traversed by the radar while viewing the target. For a radar far removed from the target region, since the origin is local to the target, the span of angle Δθ is nearly the span of aspects through which the radar views the target. In turn, the span Δθ leads to an approximate linear span of data of extent 2(Δθ)R_an/λ, where R_an is the nominal range to target.

As always, invoking linearity of the process, for a distributed region with complex reflectivity ζ(R_t) to be imaged, the signal field corresponding to Eqn. (10.3.9) is

G(r_n) = ∫ ζ(R_t) exp(j2π r_n·R_t) dR_t   (10.3.10)

This is the two dimensional inverse Fourier transform of the sought image ζ(R_t). The inverse Green's function (compression) operation is then simply carried out by Fourier transformation:

ζ(R_t) = ∫ G(r_n) exp(−j2π r_n·R_t) dr_n   (10.3.11)

where dr_n is an area element. For a two-dimensional region to be imaged, Eqn. (10.3.10) and Eqn. (10.3.11) become a two-dimensional Fourier transform pair. It is worth noting explicitly that, if the image region represented by R_t is in fact three-dimensional, the vehicle path R_an, represented by the two-dimensional vector r_n through Eqn. (10.3.8), can only replicate a two-dimensional representation of the region.

The expression Eqn. (10.3.11) at once yields the nominal resolution of the polar processor. In range, the extent of data in that wavenumber dimension (Fig. 10.12) yields the range resolution

δR = c/2B_R

which is just that of the transmitted pulse. In "azimuth" (across-track), the wave number extent in Fig. 10.12 indicates a transform resolution, using Eqn. (10.3.11), which is

δx = λ/2Δθ

This is the standard result (Wehner, 1987, p. 206).

Thus, in polar processing of signals resulting from two dimensional terrain, both range compression and azimuth processing are realized by the single two dimensional Fourier transform Eqn. (10.3.11), in contrast to the rectangular algorithm, Section 4.1.2, in which both forward and inverse two-dimensional transforms are required, with application of filter functions between. Range migration effects are avoided in polar processing as well. However, in practice a considerable amount of data interpolation is required in order to arrange for convenient digital application of the operation Eqn. (10.3.11). This is because data samples are available at the nodes of the polar grid Eqn. (10.3.8), whereas they are needed at the nodes of a rectangular grid for use of the two-dimensional FFT (Fig. 10.13).

Figure 10.13 Index grid (○) for polar format data with required index grid (×) for FFT processing.
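The polar-to-rectangular regridding just described can be sketched with a bilinear interpolator: data known on a radius-by-angle grid are resampled at the rectangular wavenumber nodes the 2-D FFT requires. The grid spans and the smooth stand-in data field below are illustrative assumptions, not the text's parameters.

```python
import math

# Sketch of polar-to-rectangular regridding: polar-format samples on a
# (radius x angle) grid are bilinearly interpolated at rectangular (x, y).
dr, dth = 0.01, math.radians(0.0635)          # illustrative grid spacings
radii = [1.0 + dr * i for i in range(64)]
angles = [math.radians(-2.0) + dth * j for j in range(64)]

def polar_value(r, th):
    # smooth stand-in for a deramped data sample at polar node (r, th)
    return math.cos(3.0 * r * math.cos(th))

def resample(x, y):
    """Bilinearly interpolate the polar-format data at rectangular (x, y)."""
    r, th = math.hypot(x, y), math.atan2(y, x)
    if not (radii[0] <= r <= radii[-1] and angles[0] <= th <= angles[-1]):
        return 0.0                             # outside the annular data region
    i = min(int((r - radii[0]) / dr), len(radii) - 2)
    j = min(int((th - angles[0]) / dth), len(angles) - 2)
    fr = (r - radii[i]) / dr
    ft = (th - angles[j]) / dth
    g = lambda a, b: polar_value(radii[a], angles[b])
    return ((1 - fr) * (1 - ft) * g(i, j) + fr * (1 - ft) * g(i + 1, j)
            + (1 - fr) * ft * g(i, j + 1) + fr * ft * g(i + 1, j + 1))

# at an interior rectangular node the interpolant tracks the field closely
assert abs(resample(1.3, 0.0) - polar_value(1.3, 0.0)) < 1e-3
```

In practice higher-order kernels are used to control interpolation sidelobes, but the indexing pattern, locate the enclosing polar cell and weight its four corners, is the same.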

10.3.2 Polar Processing Details

In this section, we discuss an alternate deramp procedure, and examine an approximation made above.

Frequency Domain Deramping

In some situations the deramp processing described in Section 10.1 may indeed be feasible, if implemented at some video offset frequency. This is especially the case if the region of targets to be imaged is small enough that the entire region is covered simultaneously by a single pulse, so that a single reference ramp of reasonable width can be used to deramp the return from any target point (Fig. 10.2).

More generally, the equivalent of the deramp operation Eqn. (10.3.4) can be realized in the Fourier transform domain. To that end, the data Eqn. (10.3.2) across the full target region are first downconverted to some convenient video offset frequency f_I, then time sampled and Fourier transformed. For a transmitted pulse Eqn. (10.3.1), and again assuming the range of the radar from the target can be considered constant during the time of pulse period n, the downconverted point target response is

v_n(t) = s(t − t_0) exp[j2πf_c(t − t_0)] exp[−j2π(f_c − f_I)t]   (10.3.12)

so that the transformed response is

V_n(f) = S(f − f_I) exp[−j2π(f − f_I)t_0] exp(−j2πf_c t_0)   (10.3.13)

where S(f) is the baseband spectrum of the transmitted pulse. After complex basebanding, we have available frequency samples of

V_n(f) = a_n S(f) exp(−j2πf t_n)   (10.3.14)

where

a_n = exp(−j2πf_c t_n)   (10.3.15)

is constant, depending however on pulse number.

A result equivalent to that of the time domain deramp operation Eqn. (10.3.4) can now be realized by frequency domain operations on the spectrum Eqn. (10.3.14). The procedure consists in adding values 2πf t_an to the phase of the spectrum Eqn. (10.3.14), where t_an = 2R_an/c, and dividing out the known spectrum S(f). This results in deramped data

G(f) = a_n exp[−j2πf(t_n − t_an)]   (10.3.16)

Using again the approximation Eqn. (10.3.6), this is

G(f) = a_n exp[j(4πf/c)R̂_an·R_t]   (10.3.17)
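The frequency-domain deramp of Eqns. (10.3.14) to (10.3.16) amounts to a phase addition and a spectral division, and can be demonstrated directly on synthetic samples. The chirp rate, carrier, delays, and frequency grid below are illustrative assumptions, not values from the text.

```python
import cmath, math

# Sketch of the frequency-domain deramp: add phase 2*pi*f*tan to each
# spectral sample of Eqn. (10.3.14) and divide out S(f), leaving only
# the residual delay tn - tan as in Eqn. (10.3.16).
K = 5e11                       # chirp rate, Hz/s (illustrative)
tn, tan_ = 6.00e-4, 5.98e-4    # target and reference delays, s (illustrative)
freqs = [f * 1e5 for f in range(-100, 101)]   # baseband frequency samples, Hz

def S(f):
    # large time-bandwidth chirp spectrum, unit magnitude (cf. Eqn. (10.3.28))
    return cmath.exp(-1j * math.pi * f**2 / K)

an = cmath.exp(-2j * math.pi * 1.275e9 * tn)   # Eqn. (10.3.15); fc illustrative
V = {f: an * S(f) * cmath.exp(-2j * math.pi * f * tn) for f in freqs}

G = {f: V[f] * cmath.exp(2j * math.pi * f * tan_) / S(f) for f in freqs}
for f in freqs:
    expected = an * cmath.exp(-2j * math.pi * f * (tn - tan_))
    assert abs(G[f] - expected) < 1e-9
```

Because |S(f)| = 1 for this chirp model, the division is numerically benign; for shaped spectra the division is normally regularized or replaced by a matched (conjugate) multiply.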

For a distributed target ζ(R_t), Eqn. (10.3.17) becomes

G(f) = a_n ∫ ζ(R_t) exp[j(4πf/c)R̂_an·R_t] dR_t   (10.3.18)

Defining the storage index by

r_n = (2f/c)R̂_an   (10.3.19)

the corresponding data field in wave number space is

G(r_n) = a_n ∫ ζ(R_t) exp(j2π r_n·R_t) dR_t   (10.3.20)

which are essentially the same numbers as in Eqn. (10.3.10). Fourier transformation of the data Eqn. (10.3.20) yields the complex function a_n ζ(R_t), however. Since the amplitude of a_n is unity, as in Eqn. (10.3.15), the factor disappears when the image |ζ|² is taken.

Intra-Pulse Range Variation

As we have discussed in Section 4.1.1, the approximation that the target range is a constant value R_n during the time of pulse period n is reasonable in the case of a side looking radar, and in the case of the longer wavelengths, in particular at L-band. However, for a higher frequency system, X-band, say, and particularly for a system with considerable forward squint, this may not be the case. We need to examine the approximation again, and to describe a means for compensation of the resulting effects if it is not well satisfied.

As in Section 4.1.1, suppose that v_r(t) is the signal received at time t in response to a unit point target. This is the value of the signal transmitted at some time t′ which is earlier than t by the two-way transit time T. The time T in general depends on t, or equivalently on t′: T = T(t′). Let R(t) be the slant range of the radar from the target at time t. Then the total travel time T(t′) must satisfy

cT(t′) = R[t′ + T(t′)] + R(t′)   (10.3.21)

Since the transit time T is not constant, the received signal v_r(t) is not simply a time shifted version of the transmitted signal s(t), but rather has a different waveform, which furthermore depends on the specific form of the function R(t). Just as was the case with matched filter range compression, this can introduce a difficulty. The matter involves further consideration of the form of the delay function T(t′) and the way in which it affects the received signal waveform v_r(t). We consider only signals in the baseband.

To that end, let us consider the spectrum of the received signal:

V_r(f) = F{s(t′)} = ∫_{−τ_p/2}^{τ_p/2} s(t′) exp{−j2πf[t′ + T(t′)]} dt′   (10.3.22)

We can use an expansion of T(t′) around the time of launch of the midpoint (say) of the pulse, taken as t′ = 0. Using Eqn. (10.3.21), and defining R_t and R_r as the ranges to target at the times of transmission and reception of the midpoint of the pulse, i.e., R_t = R(0) and R_r = R[T(0)], we obtain from Eqn. (10.3.21)

T(0) = T_0 = (R_t + R_r)/c
Ṫ(0) = Ṫ_0 = (Ṙ_t + Ṙ_r)/(c − Ṙ_r)
T̈(0) = T̈_0 = {R̈_t + [(c + Ṙ_t)/(c − Ṙ_r)]²R̈_r}/(c − Ṙ_r)   (10.3.23)

Using these, the factor in the phase in the exponent of the integrand of the received spectrum Eqn. (10.3.22) becomes

t′ + T(t′) ≈ T_0 + [1 + Ṫ_0]t′ + T̈_0 t′²/2   (10.3.24)

In the.course of forming the data Eqn. (10.3.20) to be used in image formation,


we adjust .the phase of V,(f) of Eqn. ( 10.3.22) by adding the quantity 2nftan
wher~ tan ts some nominal range time to the reference point for the pulse in
quest10n, say tan= 2Ran(O)/c, the range at time of transmission of the midpoint
of the pulse. We then divide out the transmitted pulse spectrum S(f), with the
result (dropping the term t( t') 1 in the scale factor):
G(f) = V,(f) exp(j2nfta0 )/S(f)
=

{f

2
Tp/
-Tp/2

s(t') exp{ -j2nf[t'

+ T(t')] + j2nft

80

dt'}/S(f) (10.3.25)

Using the approximation Eqn. ( 10.3.6), as well as

we have
(10.3.26)

528

OTHER IMAGING ALGORITHMS

10.3

Then, Eqn. (10.3.25) becomes


G(f) = {exp(j2nr. R,)/S(f)}
'tp/2

s(t')exp{ -j2nf[(1

+ i 0 )t' + i'0 t' 2 /2J} dt'

(10.3.27)

POLAR PROCESSING

529

The matter reduces to determining the "spectrum" (actually a time waveform


in this case) of a linear FM signal with extent BR and small chirp constant 2t0 / K.
Such spectra have been computed by Jin and Wu (1984) in the applicable case
(Fig. 4.21 ). The result is a well-localized pulse with reasonable sidelobes so long as

-.12

to < K/2Bi_ = K/(Ktp) 2 = 1/(2BRtp)

where the storage index r. is as in Eqn. (10.3.19).


The response G(f) of Eqn. ( 10.3.27) is compressed by Fourier transformation.
The discrepancy between the integral term in Eqn. ( 10.3.27) and S(f) represents
a mismatch between the compression filter exp( -j2nr. R1) and the signal
Eqn. (10.3.27). Analysis of the resulting distortion in general is possible, but it
suffices for our purpose to consider that the transmitted pulse is a linear FM.
Further, we will drop the quadratic term in the exponent of the integral in
Eqn. ( 10.3.27). Consideration of some typical values indicates the feasibility of
this latter approximation. However, in the case of a long X-band pulse, the
linear term is significant. In the case of Seasat, however, with!= 1 GHz roughly
and a 33 s pulse, the linear term is only 0.16 rad, and can itself be neglected,
but, as Barber (1985) has mentioned, only marginally. With an X-band system
and 1 squint, using the 33 s pulse, the linear term at the edge of the integral
amounts to a phase of 1.7 rad, and further analysis and care are warranted.
In any event, we need only consider the linear phase distortion term in
Eqn. ( 10.3.27). For the special case of a linear FM transmitted pulse of large
bandwidth time product, the analysis can easily be carried further. Since the
matter has to do essentially with range compression, we can consider the
treatment of a single pulse. For that case, still remaining in the baseband,
(10.3.28)

s(t) = exp(jnKt 2 ),
The spectrum Eqn. (3.2.29) is
S(f) = exp( - jnf 2 / K)

to within a constant. From Eqn. ( 10.3.27) we have (dropping the quadratic term):
G(f) = exp(j2nr5 R,)S[(1

+ i 0 )f]/S(f)

= exp(j2nr. R,) exp( -j2nio.f 2 / K)

(10.3.29)

where we use the fact that t 0 is small to make the approximation


(1

+ t 0 ) 2 = 1 + 2t0

The signal represented by the spectrum Eqn. ( 10.3.29) will be processed by


simple Fourier transformation. We want the result to be a pulse in time of the
nominal width 1/BR, where BR is the pulse bandwidth IKltP.

Since approximately

t_0 = 2Ṙ/c

this corresponds to a limit, Eqn. (10.3.30), which is likely to be well satisfied.
In the case of more complex waveform coding, the criteria may be more
stringent. In that case, one might make an estimate of a nominal t_0 for a scene,
or some part thereof, and compensate that value by Doppler shifting the
spectrum S(f) before using it in Eqn. (10.3.25) to generate G(f) from V_r(f).
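The stationary-phase relation between the linear FM pulse Eqn. (10.3.28) and its quadratic-phase spectrum can be checked numerically. The following is a minimal Python sketch with an illustrative chirp rate and duration (not Seasat values); it verifies that, for a large bandwidth-time product, the transform of exp(jπKt²) carries the phase -πf²/K up to a nearly constant residual over the central part of the band.

```python
import numpy as np

# Illustrative (non-Seasat) parameters: chirp rate K and duration t_p chosen
# for a large bandwidth-time product K * t_p**2 = 512.
K, t_p = 512.0, 1.0            # Hz/s and s; bandwidth B = K * t_p = 512 Hz
fs, N = 4096.0, 4096           # sample rate and FFT size
t = (np.arange(N) - N // 2) / fs   # spans exactly the pulse duration t_p
s = np.exp(1j * np.pi * K * t**2)  # Eqn. (10.3.28)

# Continuous-time spectrum estimate: S(f) ~= dt * sum s(t_n) exp(-j 2 pi f t_n)
f = np.fft.fftfreq(N, d=1.0 / fs)
S = (1.0 / fs) * np.exp(-2j * np.pi * f * t[0]) * np.fft.fft(s)

# Stationary phase predicts S(f) ~ exp(-j pi f^2 / K) to within a constant;
# the residual phase should be nearly flat over the central band.
band = np.abs(f) < 0.2 * K * t_p
resid = np.angle(S[band] * np.exp(1j * np.pi * f[band]**2 / K))
print(resid.max() - resid.min())   # small spread, well under 0.2 rad
```

The Fresnel ripples near the band edges shrink as the bandwidth-time product grows, which is why the large-product assumption is needed for the approximation in the text.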

10.3.3 An Autofocus Procedure for Polar Processing

The deramp operation described in Section 10.3.2, in which a linear phase term
exp(j2πf t_an), with phase proportional to the range R_an = c t_an/2 of the radar
from a reference point, is subtracted from the data spectrum, obviates the
problem of range migration correction in forming the image in polar processing.
In addition, it yields a kernel (Green's function) in Eqn. (10.3.20) which is
trivially inverted (by Fourier transformation). However, in order to carry out
the procedure, it is necessary to know the range R_an of the radar antenna from
the reference point (the origin in Fig. 10.11), and to know that pulse by pulse.
Any errors in this range, due perhaps to errors in the navigation system of an
aircraft, or tracking errors in the case of a satellite platform, will introduce
phase errors into the deramped data corresponding to Eqn. (10.3.20), and
degrade the image. Following the work of Walker (1980), who describes the
effects of such errors on the image, and drawing upon the subaperture correlation
procedure for autofocus described in Section 5.3.2 in reference to the rectangular
algorithm, it is possible to suggest a scheme for the automatic compensation
of some of the motion compensation errors which can arise in polar processing.

Let us first make a specialization of the procedures described in
Section 10.3.2 to the (usual) case of a planar image, so that R_t (Fig. 10.11) is a
two-dimensional vector. This allows use of a two-dimensional data field. Let
us also assume that in the deramp operation leading to Eqn. (10.3.16) a
(measured or presumed) vector R'_an is used, which might not be the same as

530    OTHER IMAGING ALGORITHMS · 10.3 POLAR PROCESSING    531

the true radar position vector R_an at pulse n. As before, the single pulse data
transform Eqn. (10.3.14) is essentially

G(f) = S(f) exp(-j4πf R_n/c)    (10.3.31)

where R_n is the slant range from radar to target, and where we have neglected
any of the Doppler effects discussed in Section 10.3.2. The deramp operation is
carried out with a phase factor exp(j4πf R'_n/c) to yield data Eqn. (10.3.16) to
be Fourier transformed as

G(f) = exp[-j4πf(R_n - R'_n)/c]    (10.3.32)
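The effect of a deramp range error can be illustrated numerically. In the minimal Python sketch below (all parameter values illustrative, and S(f) taken as unity for clarity), data of the form Eqn. (10.3.31) are deramped with a deliberately wrong range; the residual linear phase of Eqn. (10.3.32) shifts the compressed peak by a predictable number of range bins.

```python
import numpy as np

c = 3.0e8
N, df = 256, 1.0e6                 # frequency samples and spacing (illustrative)
f = np.arange(N) * df
R_true = 10000.0                   # true slant range R_n (m)
R_meas = R_true - 29.296875        # measured range R'_n; offset chosen to land on a bin

# Eqn. (10.3.31) with S(f) = 1, then the deramp factor built from the measured
# range, giving the residual phase of Eqn. (10.3.32)
G = np.exp(-4j * np.pi * f * R_true / c) * np.exp(4j * np.pi * f * R_meas / c)

profile = np.abs(np.fft.ifft(G))   # compression by Fourier transformation
bin_expected = round(2 * (R_true - R_meas) / c * N * df)
print(np.argmax(profile), bin_expected)   # both 50: peak shifted by the range error
```

The peak displacement is just the residual delay 2(R_n - R'_n)/c expressed in time bins, which is the image degradation mechanism the autofocus procedure must undo.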


Assume that in fact

R'_an = R_an + ε(t)    (10.3.33)

where ε(t) is some error. Then (Fig. 10.14)

R_n = |R_an - R_t|    (10.3.34)

where we assume both R_t and ε are small with respect to R'_an, and keep only
first order terms. Then, again to first order,

R_n - R'_an ≈ -R̂'_an·(R_t + ε)    (10.3.35)

so that, rather than Eqn. (10.3.17), we have

G(f) = exp[j4πf R̂'_an·(R_t + ε)/c]    (10.3.36)

Figure 10.14  True (R_an) and measured (R'_an) antenna positions, with planar target (R_t) region.
We now take explicit account of the assumption that R_t lies in a plane, say the
(x, y) plane, and write

R̂'_an = k̂ cos(α) + R̂'_ap sin(α)    (10.3.37)

where R̂'_ap is the unit vector in the (x, y) plane (Fig. 10.14):

R̂'_ap = i cos(ψ'_an) + j sin(ψ'_an)    (10.3.38)

For simplicity, we will henceforth drop the subscript n, as well as the prime, so
that all quantities subscripted "a", referring to the antenna position, are
considered as measured values. Taking account that R̂_ap = r̂_p = r_p/r_p, we
can write Eqn. (10.3.36) as

G(f) = exp[j2π(r_p·R_t + 2f R̂_a·ε/c)]    (10.3.39)

where we define the two-dimensional data field index vector

r_p = (2f/c) sin(α) R̂_ap    (10.3.40)

With ε = 0, two-dimensional transformation of G(f) of Eqn. (10.3.39) over the
two-dimensional data array indexed by r_p ideally yields the two-dimensional
response δ(R - R_t).
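This last statement can be checked numerically. The Python sketch below (grid sizes and target position illustrative) builds the pure linear-phase data of Eqn. (10.3.39) with ε = 0 on a rectangular stand-in for the resampled data grid; its two-dimensional transform is a point response at the target position.

```python
import numpy as np

M, N = 64, 64                  # rectangular stand-in for the resampled data grid
x0, y0 = 17, 42                # target position in transform bins (illustrative)
m = np.arange(M)[:, None]
n = np.arange(N)[None, :]

# Eqn. (10.3.39) with eps = 0: a pure two-dimensional linear phase in the index
G = np.exp(2j * np.pi * (m * x0 / M + n * y0 / N))

img = np.abs(np.fft.fft2(G)) / (M * N)
peak = np.unravel_index(np.argmax(img), img.shape)
print(peak, img[peak])         # unit-height point response at (x0, y0)
```

Any error ε adds phase terms on top of this linear phase; the linear parts shift the peak and the quadratic parts defocus it, which is exactly the structure exploited in the remainder of this section.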
The first step towards an autofocus procedure, aimed at compensating ε(t),
comes about by expressing the position error vector ε(t), which varies pulse to
pulse, as a function ε(x, y) of index r_p in the data array, where

r_p = i x_p + j y_p    (10.3.41)

This can be done if we assume the antenna position azimuth angle ψ_a(t)
(Fig. 10.14) to be a monotonic function over the full synthetic aperture
observation time. From Fig. 10.14 we have

y/x = tan[ψ_a(t)]    (10.3.42)

From this we can determine t for any point (x, y) in the data array, possibly
by table look-up if ψ_a(t) is a complicated function. More usually, ψ_a(t) will
admit a low order rational fraction approximation. For example, if the path of
motion of the radar vehicle is nominally a straight line, so that

x_a(t) = x_0 + ẋ_0 t
y_a(t) = y_0 + ẏ_0 t    (10.3.43)

then

y/x = tan[ψ_a(t)] = (y_0 + ẏ_0 t)/(x_0 + ẋ_0 t)    (10.3.44)

For a specified point in the data array, Eqn. (10.3.44) can be solved for t as an
explicit function of y/x. The same is the case if x_a(t), y_a(t) are quadratic, or
can at least be so approximated over the span of time of interest in the process.
In any case, from the measured radar position R'_a relative to the reference point,
which corresponds to a particular data index r_p, values of t(x, y) can be found
for any index in the data array.

Let us now consider the distortion term in the data G(f) of Eqn. (10.3.39).
We first consider an expansion of ε(t) to some adequate order about some
nominal time t_0, taken as t = 0:

ε(t) = ε_0 + ε̇_0 t + ε̈_0 t²/2    (10.3.45)

where for illustration we truncate at the second order. The distortion term in
Eqn. (10.3.39) is then

(4πf/c) R̂_a·ε = (4πf/c){k̂ cos(α) + [sin(α)/r_p] r_p}·ε
             = 2π r_p·ε + (4πf/c) cos(α) k̂·ε
             = 2π[x_p ε_x0 + y_p ε_y0 + g(x_p, y_p)]    (10.3.46)

where we use Eqn. (10.3.37) and Eqn. (10.3.41), and note from Eqn. (10.3.40) that

R̂_ap = r̂_p = r_p/r_p,    r_p = (2f/c) sin(α)

and define

g(x_p, y_p) = x_p(ε̇_x0 t + ε̈_x0 t²/2) + y_p(ε̇_y0 t + ε̈_y0 t²/2)
            + [r_p/tan(α)](ε_z0 + ε̇_z0 t + ε̈_z0 t²/2)    (10.3.47)

In Eqn. (10.3.47) we recall that t(x_p, y_p) is a known function, and note that
g(x_p, y_p) depends only linearly on all the parameters of the motion error as
written in Eqn. (10.3.45).

Now consider some point (x_k, y_k) in the data array. Expanding g(x_p, y_p) of
Eqn. (10.3.47) about that point, we have

g(x_p, y_p) = g(x_k, y_k) + x(∂g/∂x_k) + y(∂g/∂y_k) + (higher order terms)    (10.3.48)

where (x, y) are the deviations of (x_p, y_p) from (x_k, y_k). If we now make an
image using the data over the subregion (subaperture) locally around (x_k, y_k),
a target at (x_i, y_i) will appear in the image displaced by the phase error terms
in Eqn. (10.3.36) which are linear in the data coordinates. Those are the linear
terms of Eqn. (10.3.46). The target point will therefore appear with image
coordinates

x_ik = x_i + ε_x0 + ∂g/∂x_k
y_ik = y_i + ε_y0 + ∂g/∂y_k    (10.3.49)

with the quadratic distortion terms contributing defocus.

If we repeat this procedure for other points in the data array, (x_m, y_m),
(x_n, y_n), ..., we can compute the cross-correlation functions of the subaperture
images two by two. Since the target displacements are different in the different
subaperture images, as in Eqn. (10.3.49), the correlation functions will peak
away from zero. Specifically, for example, if we correlate images k and m, the
peak of that cross-correlation function will occur at

δx_km = ∂g/∂x_k - ∂g/∂x_m
δy_km = ∂g/∂y_k - ∂g/∂y_m    (10.3.50)

The left sides of these two equations can be measured. The right sides are linear
in the error parameters ε_x0, ε̇_x0, ε̇_y0, .... Therefore, by computing at least as
many correlation pairs as there are parameters in the model Eqn. (10.3.45) to
be determined, we can solve for the error parameters, perhaps using a least
squares procedure if the set of equations Eqn. (10.3.50) is overdetermined.
With specific values of the coefficients in Eqn. (10.3.45) in hand, the function
g(x_p, y_p) of Eqn. (10.3.47) is fully determined, again using the function t(x_p, y_p),
perhaps by a look-up procedure on tan[ψ_a(t)]. The data function Eqn. (10.3.36)

can then be compensated by multiplying the data array entries point by point
by the compensator

f(x_p, y_p) = exp[-j2π g(x_p, y_p)]
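The solve for the motion-error parameters from the measured correlation-peak offsets of Eqn. (10.3.50) is an ordinary linear least squares problem. The Python sketch below uses a random, purely hypothetical design matrix standing in for the actual coefficient differences ∂g/∂x_k - ∂g/∂x_m, only to show the overdetermined solve step itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: each row holds the coefficients relating one measured
# subaperture correlation-peak offset (Eqn. (10.3.50)) to the motion-error
# parameters of the expansion Eqn. (10.3.45).
n_meas, n_par = 12, 6            # overdetermined: more correlation pairs than parameters
A = rng.standard_normal((n_meas, n_par))
p_true = rng.standard_normal(n_par)                   # "true" motion-error parameters
d = A @ p_true + 1e-6 * rng.standard_normal(n_meas)   # measured offsets, tiny noise

p_est, *_ = np.linalg.lstsq(A, d, rcond=None)
print(np.max(np.abs(p_est - p_true)))    # small: parameters recovered
```

With the parameters in hand, g(x_p, y_p) of Eqn. (10.3.47) is evaluated over the grid and the compensator above is applied pointwise.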

The result is to replace the data Eqn. (10.3.39) by

G(f) = exp[j2π r_p·(R_t + ε_p0)]    (10.3.51)

where

ε_p0 = i ε_x0 + j ε_y0    (10.3.52)

The image is thereby fully corrected for the distortions due to motion
compensation errors, except that it appears with a constant position shift
Eqn. (10.3.52) in accord with the constant vehicle position offset.

In carrying out the procedure indicated, values for the coefficients in the
derivatives ∂g/∂x_k, ∂g/∂y_k are needed in Eqn. (10.3.50). These involve the
derivatives dt/dx_p, dt/dy_p, which may be known explicitly if the motion of the
vehicle was taken as a simple approximation such as in Eqn. (10.3.43). Otherwise,
we can write

d(y_p/x_p)/dx_p = d{tan[ψ_a(t)]}/dx_p = {d tan[ψ_a(t)]/dt}(dt/dx_p)    (10.3.53)

In this the left side is a simple function of the point (x_p, y_p), which can be taken
specifically at any of the subaperture center points (x_k, y_k) of interest. The first
term on the right can be found, numerically if necessary, from the vehicle
trajectory in the vicinity of the subaperture points. The needed term dt/dx_k is
then calculated at the point (x_k, y_k). A similar procedure yields the derivative
dt/dy_k.

Polar processing in general, even without autofocus considerations, involves
a considerable amount of data interpolation. The ultimate image formation,
the Fourier transform of the data field G(x_p, y_p), will be done digitally using
the FFT. It is therefore necessary that data values be available referred to a
rectangular grid on the data array. In contrast, the uniformly spaced sampling
done by the video digitization for each pulse produces values uniformly indexed
along rays in the data array, with the rays not in general parallel, but rather
in the polar format. Without careful consideration of the computational
operations, it is difficult to make a clear statement about the relative desirability
of polar processing and the rectangular algorithm. The trade-offs involved will
also depend on the radar system deployment, specifically the slant range, squint
angle, and swath size. This algorithm is commonly used in airborne SAR systems
for target detection.

REFERENCES

Ausherman, D. A., A. Kozma, J. L. Walker, H. M. Jones and E. C. Poggio (1984).
"Developments in radar imaging," IEEE Trans. Aero. and Elec. Sys., AES-20(4),
pp. 363-400.
Barber, B. C. (1985). "Theory of digital imaging from orbital synthetic-aperture radar,"
Int. J. Rem. Sens., 6(7), pp. 1009-1057.
Brookner, E., ed. (1977). "Synthetic aperture radar spotlight mapper," Chapter 18 in:
Radar Technology, Artech House, Dedham, MA.

Fitch, J. P. (1988). Synthetic Aperture Radar, Springer-Verlag, New York.
Hovanessian, S. A. (1980). Introduction to Synthetic Array and Imaging Radars,
Artech House, Dedham, MA.
Jin, M. Y. and C. Wu (1984). "A SAR correlation algorithm which accommodates
large-range migration," IEEE Trans. Geosci. and Remote Sensing, GE-22(6),
pp. 592-597.
Martinson, L. (1975). "A programmable digital processor for airborne radar," IEEE
Inter. Radar Conf., April, pp. 186-191.
Munson, D. C. Jr., J. D. O'Brien and W. K. Jenkins (1983). "A tomographic formulation
of spotlight-mode synthetic aperture radar," Proc. IEEE, 71(8), pp. 917-925.
Perry, R. P. and H. W. Kaiser (1973). "Digital step transform approach to airborne
radar processing," Record, NAECON '73, May, pp. 280-287.
Perry, R. P. and L. W. Martinson (1977). "Radar matched filtering," Chapter 11 in:
Radar Technology (Brookner, E., ed.), Artech House, Dedham, MA.
Sack, M., M. R. Ito and I. G. Cumming (1985). "Application of efficient linear FM
matched filtering algorithms to synthetic aperture radar processing," IEE Proc.,
132(Part F, No. 1), pp. 45-57.
Vant, M. R. and G. E. Haslam (1980). "A Theory of 'Squinted' Synthetic-Aperture
Radar," Report No. 1339, Communications Research Centre, Ottawa, November.
Vant, M. R. and G. E. Haslam (1990). "Comment on 'A new look at nonseparable
synthetic aperture radar processing'," IEEE Trans. Aero. and Elec. Sys., AES-26(1),
pp. 195-197.
Walker, J. L. (1980). "Range-Doppler imaging of rotating objects," IEEE Trans. Aero.
and Elec. Sys., AES-16(1), pp. 23-52.
Wehner, D. R. (1987). High Resolution Radar, Artech House, Norwood, MA.
Wu, C. (1976). "A digital system to produce imagery from SAR data," Paper 76-968,
AIAA Systems Design Driven by Sensors, Pasadena, California, October 18-20.
Wu, K. H. and M. R. Vant (1984). "Coherent Sub-Aperture Processing Techniques for
Synthetic Aperture Radar," Report No. 1368, Communications Research Centre,
Ottawa, January.
Wu, K. H. and M. R. Vant (1985). "Extensions to the step transform SAR processing
technique," IEEE Trans. Aero. and Elec. Sys., AES-21(3), pp. 338-344.

APPENDIX A

DIGITAL SIGNAL PROCESSING

In this Appendix, we will describe the digital signal processing algorithms
required to realize the main operations needed in SAR image formation. We
will have to do with bandlimited signals, such as the radar IF signal and the
SAR Doppler signal. We will also include a discussion of analog filter
calculations, and an analysis of the process of time sampling a bandlimited
signal to produce the numbers for input to digital processing algorithms.

A.1 ANALOG LINEAR SYSTEM THEORY

The first part of the definition of a linear system is that the system output in
response to the sum of two inputs is the sum of the outputs in response to the
two inputs taken separately. Symbolically, if the system operation is expressed
as O( ), and if we choose to think of the inputs and outputs as time functions
f(t) and g(t) respectively, then a system is linear (almost) if and only if

O[f_1(t) + f_2(t)] = O[f_1(t)] + O[f_2(t)] = g_1(t) + g_2(t)    (A.1.1)

for any inputs f_1, f_2 from the class of functions for which the system output is
defined. (The system output must be well defined, in the sense that an input
f(t) uniquely determines the output g(t), although the reverse need not be true.)
It follows from Eqn. (A.1.1) that, for integer m, n, we have:

O[f] = O[n(f/n)] = nO[f/n]
O[f/n] = (1/n)O[f]

so that

mO[f/n] = O[(m/n)f] = (m/n)O[f]

That is, such a system is homogeneous over the rational numbers. Although
this is good enough for practical purposes, strictly speaking one must specify
homogeneity as a separate property in the definition of a linear system. That
is, in addition to Eqn. (A.1.1) we require independently that

O[af_1(t)] = aO[f_1(t)] = ag_1(t)    (A.1.2)

for arbitrary scalar a. (Note that the output of a linear system must be identically
zero if the input is zero.)
For linearity to hold we do not require that

O[f(t - t_1)] = g(t - t_1)    (A.1.3)

If this latter property is true, that is, if a time shift of the input causes only a
corresponding time shift of the output, the system is in addition called stationary.
Stationarity, although convenient, is not a fundamental property on a par with
linearity. Without stationarity, the world of signal processing can proceed
relatively unimpeded, but without linearity considerable complications ensue.

Impulse Response and Convolution

A linear system, whether stationary or not, is completely specified by its unit
impulse response h(t|t'), which is the response g(t) of the system as a function
of time t to an input δ(t - t') which is a unit impulse (Dirac) function occurring
at time t'. In another terminology, h(t|t') is the Green's function of the system.
If we consider an arbitrary continuous input function f(t), the defining property
of the impulse function is:

f(t) = ∫_{-∞}^{∞} f(t') δ(t - t') dt'    (A.1.4)

The linearity properties Eqn. (A.1.1) and Eqn. (A.1.2), and the definition of the
impulse response h(t|t'), then yield at once

g(t) = O[f(t)] = O[∫_{-∞}^{∞} f(t') δ(t - t') dt']
     = ∫_{-∞}^{∞} f(t') O[δ(t - t')] dt' = ∫_{-∞}^{∞} f(t') h(t|t') dt'    (A.1.5)

This is the convolution integral, expressing the output g(t) of a linear system
in terms of the input f(t) and the impulse response h(t|t').
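A discrete analogue of Eqns. (A.1.4)-(A.1.5) can be shown in a few lines of Python: probing a (here stationary) linear system with a unit impulse yields its impulse response, and the response to any input is then the convolution of that input with the measured response. The particular filter coefficients below are arbitrary.

```python
import numpy as np

def system(x):
    # A black-box stationary linear system (an arbitrary small FIR filter).
    return np.convolve(x, np.array([0.5, 0.3, -0.2]))

# Impulse response: the output for a unit impulse (discrete delta) input.
h = system(np.array([1.0]))
print(h)

# For any input, the output equals the convolution with h, cf. Eqn. (A.1.5).
x = np.array([1.0, -2.0, 0.5, 3.0])
assert np.allclose(system(x), np.convolve(x, h))
```

The same probe-and-convolve logic holds for a nonstationary linear system, except that one impulse response h(t|t') must be measured for each input time t'.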


In general, the function h(t|t') may have a different waveform as a function
of t for each value of the parameter t'. In the opposite case, the system is
stationary, and

h(t|t') = O[δ(t - t')] = O[δ(t)]|_{t→t-t'} = h(t - t'|0) = h(t)|_{t→t-t'}    (A.1.6)

(In the last step of Eqn. (A.1.6) we have used a common abuse of notation in
designating both a function of two variables t, t' and a function of one variable
t by the same letter h.) For a stationary system, the convolution integral
Eqn. (A.1.5) then becomes

g(t) = ∫_{-∞}^{∞} h(t - t') f(t') dt'    (A.1.7)

which has the graphical interpretation shown in Fig. A.1. By a change of variable,
Eqn. (A.1.7) becomes also

g(t) = ∫_{-∞}^{∞} h(t') f(t - t') dt'    (A.1.8)

Figure A.1  Convolution of f(t) with h(t) to form g(t).

In the particular case of a causal, or "physically realizable", system, no output
can occur before some input has happened, so that necessarily h(t|t') = 0, t < t',
or h(t) = 0, t < 0. The case shown in Fig. A.1 corresponds to a physically
realizable system. Unless true real-time operation is desired for a system, physical
realizability is unnecessary in this sense. The system in effect can look ahead if
we introduce some time delay in the processing and store the input data for
that length of time.

System Transfer Function

In any system analysis problem, matters proceed more simply if the waveforms
of interest, such as f(t) and g(t) in Eqn. (A.1.8), are expressed in terms of the
eigenfunctions of the operators of the system in question. In the case of a linear
stationary system, for an input

f(t) = exp(st)

where s is an arbitrary complex number, from Eqn. (A.1.8) we have the output
(assuming convergence of the integral):

∫_{-∞}^{∞} h(t') exp[s(t - t')] dt' = exp(st) H(s)    (A.1.9)

where we define the system "transfer function"

H(s) = ∫_{-∞}^{∞} h(t') exp(-st') dt'    (A.1.10)

From Eqn. (A.1.9), for any s the function exp(st) is an eigenfunction of the
system. The corresponding eigenvalue is H(s) of Eqn. (A.1.10). We then hope
to be able to find coefficients F(s) such that an arbitrary input function can be
expressed in terms of the set of eigenfunctions exp(st) by the linear combination

f(t) = ∫ F(s) exp(st) ds    (A.1.11)

If this can be done, then the corresponding output function has the expression

g(t) = O[f(t)] = O[∫ F(s) exp(st) ds] = ∫ F(s) O[exp(st)] ds
     = ∫ F(s) H(s) exp(st) ds = ∫ G(s) exp(st) ds    (A.1.12)

where we define

G(s) = H(s) F(s)    (A.1.13)

as the coefficient in the representation of the output g(t) on the eigenfunction
basis set in terms of the input coefficients F(s) and the transfer function H(s).
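The eigenfunction property Eqn. (A.1.9) has a direct discrete counterpart, sketched below in Python with an arbitrary impulse response and frequency: away from the filter edges, a complex exponential passes through a stationary system multiplied only by the transfer function evaluated at its frequency.

```python
import numpy as np

h = np.array([0.4, 0.25, -0.1, 0.05])   # impulse response of a stationary system
w = 0.7                                 # angular frequency (rad/sample), illustrative
n = np.arange(200)
x = np.exp(1j * w * n)                  # discrete analogue of f(t) = exp(st), s = j*omega

y = np.convolve(x, h)                   # system output
# Transfer function at frequency w, the discrete form of Eqn. (A.1.10)
H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))

# Where the filter fully overlaps the input, the output is exactly H times the
# input: exp(j w n) is an eigenfunction with eigenvalue H.
k = np.arange(len(h) - 1, len(x))
print(np.max(np.abs(y[k] - H * x[k])))  # ~ machine precision
```

The edge samples differ only because the finite record truncates the convolution sum; for the doubly infinite exponential of the text the identity is exact everywhere.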


Thus convolution in the time domain, Eqn. (A.1.8), has been transformed into
multiplication in the eigenfunction ("frequency") domain, Eqn. (A.1.13).

To determine appropriate restrictions on the set of functions f(t) which have
representations Eqn. (A.1.11), and to determine which sets of exponentials are
required, is the subject of Fourier series or transform analysis, or one or two
sided Laplace transform analysis. For functions such that

∫_{-∞}^{∞} |f(t)| dt < ∞    (A.1.14)

which suffices for our needs, the set exp(jωt), |ω| ≤ ∞, is appropriate, and leads
to the Fourier transform theory. Then we have the Fourier transform pair:

f(t) = ℱ^{-1}[F(jω)] = ∫_{-∞}^{∞} F(jω) exp(jωt) dω/2π    (A.1.15)

where the expansion coefficients are

F(jω) = ℱ[f(t)] = ∫_{-∞}^{∞} f(t) exp(-jωt) dt    (A.1.16)

In the case of a nonstationary linear system, no theory comparable in simplicity
and generality to Laplace/Fourier theory has been developed. Fortunately, in
virtually all of our computations in SAR imaging, the operations needed are
stationary over adequate time spans to make computation using Fourier
techniques feasible. A notable exception is the case of Section 4.2.2, in which
azimuth compression processing is implemented without the approximation of
quasistationarity, using direct convolution in the slow time domain.

The companion formalism to the Fourier transform theory is that of Fourier
series, which is appropriate in the case that the functions f(t) of interest do
not satisfy the requirement Eqn. (A.1.14), but rather are periodic, with period say
T, while being absolutely integrable over any period. Then the countable
sequence of complex exponentials exp(jn2πt/T) is the appropriate set of
eigenfunctions upon which to expand the waveforms. We have then

f(t) = Σ_{n=-∞}^{∞} f_n exp(jn2πt/T)    (A.1.17)

with

f_n = (1/T) ∫_{-T/2}^{T/2} f(t) exp(-jn2πt/T) dt    (A.1.18)

The system input/output computation follows as in Eqn. (A.1.12), leading to

g_n = H(jn2π/T) f_n    (A.1.19)

where H(s) is the system function Eqn. (A.1.10).

The computations involved in finding the transforms F(s), Eqn. (A.1.16), or
inverting the output transforms G(s) to find the waveforms g(t), are generally
as tedious as those involved in carrying out the convolution integral calculations.
In textbook problems, tables of transforms and simple techniques for their use
save the day, but in practice only numerical computation works, not least
because we never can specify the functional form f(t) of the waveforms passing
through the systems of interest. It is therefore essential to relate the computations
in the continuous time world to computations in discrete time, the subject of
the next sections.

A.2 SAMPLING OF BANDLIMITED SIGNALS

Signal Representation

As we showed in Eqn. (A.1.9), the eigenfunctions of linear stationary systems
are the exponentials exp(st). These are the natural modes of such systems, and
any such system used as a signal generator can produce only such modes, in
linear combinations. The only such functions that sustain their behavior
indefinitely, at finite level, are the sinusoids. Communications equipment
developed along compatible directions, with information being expressed as
modulations on a single frequency carrier wave.

Thus attention is centered on the frequency band of a particular waveform,
with communications waveforms being of a relatively narrow band centered at
some carrier frequency ω_0 (bandpass waveforms). In order that different
communication channels not interfere, it was desirable that functions carrying
the information through the channel not intrude upon the bands of their
neighbors, so that designs were aimed at producing waveforms f(t) such that
their transform representations F(jω) had the bandlimited property:

|F(jω)| = 0,    |f - f_0| > B/2

where f = ω/2π and f_0 = ω_0/2π, with a companion relation

|F(jω)| = 0,    |f + f_0| > B/2

if f(t) is real, since then F(jω) = F*(-jω), from Eqn. (A.1.16).

Representation of Bandlimited Signals

Let us now consider the particular case of a (possibly complex) bandlimited
"low pass" waveform f(t), one such that (Fig. A.2):

|F(jω)| = 0,    |f| > B/2    (A.2.1)


It was known very early that discrete samples taken at any rate f_s > B sufficed
to represent exactly such a bandlimited waveform. The minimum sampling rate
is called the Nyquist frequency. Shannon (1949) gave a proof of this fact, and the
result is sometimes called the Shannon sampling theorem. The theorem is stated
as follows.

Let a signal f(t) be integrable (Eqn. (A.1.14)) and bandlimited as in
Eqn. (A.2.1), and let f_s > B be the sampling frequency. Then exactly (there is
no interpolation error):

f(t) = Σ_{n=-∞}^{∞} f(n/f_s) sinc[πf_s(t - n/f_s)]    (A.2.2)

where

sinc(u) = [sin(u)]/u    (A.2.3)

Eqn. (A.2.2) expresses the bandlimited function f(t) at any arbitrary t exactly
in terms of the infinite sequence of its samples f_n = f(n/f_s) at discrete times
t_n = n/f_s. Equation (A.2.2) is also Whittaker's formula for interpolation of
bandlimited functions.

Since the representation Eqn. (A.2.2) is exact for a bandlimited function, and
since the function on the right is analytic, we can conclude that no bandlimited
function (not identically zero) can vanish except at isolated points. Thus, a
strictly bandlimited function which vanishes everywhere outside some interval
|t| ≤ T/2 is an impossibility. Nonetheless, a bandlimited function may well
vanish at a countably infinite set of isolated points, as witness Whittaker's
function Eqn. (A.2.2) with all f(n/f_s) set to zero except for |n| ≤ N_0. Further,
since the interpolating functions Eqn. (A.2.3) have a bound which decays
monotonically with increasing u, beyond some distance away from the first and
last nonzero samples the bandlimited function so constructed becomes and
remains small, although not strictly zero.

Spectrum of a Sampled Signal

Whittaker's interpolation formula Eqn. (A.2.2) is valid without error only for
functions which are strictly bandlimited, Eqn. (A.2.1), and only provided the
sampling frequency f_s > B. A different view of the same result gives insight
into the errors which are introduced if either of these requirements is violated.

Suppose then that f(t) is any function, bandlimited or not, having a Fourier
transform F(jω). Let f_s be some arbitrary sampling frequency, and let
f_n = f(n/f_s) be the time samples of f(t). Let these be encoded into a
mathematical construct called (historically) the "sampled signal":

f_δ(t) = Σ_{n=-∞}^{∞} f_n δ(t - n/f_s)    (A.2.4)

where δ(t) is the Dirac delta function. This sampled signal has a Fourier
transform

F_δ(jω) = Σ_{n=-∞}^{∞} f_n exp(-jnω/f_s) = F(z)|_{z=exp(jω/f_s)}    (A.2.5)

which is a periodic function of frequency f with period f_s. The last notation
on the right in Eqn. (A.2.5) introduces incidentally the Z-transform of the
sequence of numbers f_n:

F(z) = Σ_{n=-∞}^{∞} f_n z^{-n}    (A.2.6)

In Eqn. (A.2.5) we recognize a Fourier series, Eqn. (A.1.17), with series
coefficients f_n. Therefore, from Eqn. (A.1.18), we must have

f_n = (1/f_s) ∫_{-f_s/2}^{f_s/2} F_δ(jω) exp(jnω/f_s) df = ∮_{|z|=1} F(z) z^{n-1} dz/2πj    (A.2.7)

displaying incidentally the inversion formula for the Z-transform. On the other
hand, from the inversion formula Eqn. (A.1.15) for the Fourier transform, we
must have

f_n = f(n/f_s) = ∫_{-∞}^{∞} F(jω) exp(jωn/f_s) df
    = Σ_{k=-∞}^{∞} ∫_{-f_s/2+kf_s}^{f_s/2+kf_s} F(jω) exp(jωn/f_s) df
    = ∫_{-f_s/2}^{f_s/2} Σ_{k=-∞}^{∞} F[j(ω + kω_s)] exp[j(ω + kω_s)n/f_s] df
    = ∫_{-f_s/2}^{f_s/2} Σ_{k=-∞}^{∞} F[j(ω + kω_s)] exp(jωn/f_s) df    (A.2.8)

By the uniqueness property of Fourier coefficients, comparison of the two forms
Eqn. (A.2.7) and Eqn. (A.2.8) for f_n yields the result that

F_δ(jω) = f_s Σ_{k=-∞}^{∞} F[j(ω + kω_s)]    (A.2.9)

That is, F_δ(jω) is a superposition of shifted replicas of the transform F(jω) of
the analog signal f(t), as indicated in Fig. A.2.
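Both faces of the sampling result can be exercised numerically. The Python sketch below (illustrative rates and pulse parameters) reconstructs a bandlimited pulse at an off-grid instant with a truncated form of Whittaker's formula Eqn. (A.2.2), noting that np.sinc(x) = sin(πx)/(πx), and then shows the replica overlap of Eqn. (A.2.9) in its crudest form: an undersampled 7 Hz cosine produces exactly the samples of a 1 Hz cosine.

```python
import numpy as np

fs = 8.0                               # sampling rate (Hz), illustrative
tn = np.arange(1024) / fs              # sample times over [0, 128) s

# A bandlimited pulse: sinc of bandwidth B < fs, centred in the window
B, tc = 3.0, 64.0
samples = np.sinc(B * (tn - tc))

def whittaker(t):
    # Truncated Eqn. (A.2.2); np.sinc(x) = sin(pi x)/(pi x)
    return np.sum(samples * np.sinc(fs * (t - tn)))

t_off = 64.37                          # an off-grid instant near the window centre
err = abs(whittaker(t_off) - np.sinc(B * (t_off - tc)))
print(err)                             # small; only truncation of the infinite sum

# Aliasing: sampled at 8 Hz, a 7 Hz cosine is indistinguishable from a 1 Hz cosine
assert np.allclose(np.cos(2 * np.pi * 7.0 * tn), np.cos(2 * np.pi * 1.0 * tn))
```

At the sample instants themselves the interpolation kernel reduces to a Kronecker delta, so the formula returns the stored samples exactly; between samples the only error here comes from truncating the infinite sum of Eqn. (A.2.2) to a finite record.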


Reconstruction of a Sampled Signal and Aliasing

In case f(t) happens to be strictly bandlimited, Eqn. (A.2.1), and provided
that f_s > B, the terms in the sum Eqn. (A.2.9) have no overlapping portions
(Fig. A.2). In principle, then, as indicated in Fig. A.2, passing f_δ(t) through
an ideal low pass filter, having transfer function

H(jω) = 1/f_s,    |f| ≤ f_s/2
      = 0,    elsewhere    (A.2.10)

(the reconstruction filter of Fig. A.2), yields an output which is just the k = 0
term in Eqn. (A.2.9):

G(jω) = H(jω) F_δ(jω) = F(jω)

We conclude again that the bandlimited signal f(t) is exactly recoverable by
suitably processing its discrete samples f_n, coded into F_δ(jω).
The impulse response of the required low pass filter Eqn. (A.2.10) is

h(t) = ℱ^{-1}[H(jω)] = sinc(πf_s t)

By linearity, the response of the filter Eqn. (A.2.10) to the input f_δ(t) of
Eqn. (A.2.4) is

g(t) = Σ_{n=-∞}^{∞} f_n h(t - n/f_s) = Σ_{n=-∞}^{∞} f_n sinc[πf_s(t - n/f_s)] = f(t)

which is again Whittaker's formula Eqn. (A.2.2).

Figure A.2  Bandlimited spectrum (solid line); replications due to sampling (broken lines), without
aliasing; and reconstruction filter.

On the other hand, if the original signal f(t) is bandlimited, but we sample
too slowly (f_s ≤ B), or if the signal is not bandlimited, then higher order replicas
of F(jω) in the sum Eqn. (A.2.9) shift partially into the baseband |f| ≤ f_s/2 of
F_δ(jω) (Fig. A.3), and will be passed by the low pass interpolation filter
Eqn. (A.2.10). This is the notorious "aliasing" effect, so called because energy
content of f(t) which is actually at frequencies above f_s/2 is mistaken by us
as belonging to frequencies below f_s/2. The corresponding energy is traveling
under an assumed name. This problem is impossible to recover from once
induced, so that it is essential to include an appropriate bandlimiting filter in
the processing just ahead of any sampling operation.

Figure A.3  Spectrum of bandlimited signal and replications due to sampling. Aliasing present
due to f_s < B.

The representation Eqn. (A.2.9) of F_δ(jω) in terms of shifted replicas of F(jω)
is the starting point for analyses which seek to estimate or bound the errors
we make in interpolating using Whittaker's formula, Eqn. (A.2.2), in the case
that the samples f(n/f_s) are not properly obtained (either because we are
sampling a function which is not in fact bandlimited, or because we are sampling
too slowly). Papoulis (1966) has developed a number of such results. Usually
the SAR data system will have been designed such that aliasing is not a problem
in the radar receiver data. However, the azimuth Doppler signal is bandlimited
only by the falloff of the azimuth antenna pattern, and some analysis of aliasing
is needed before setting the Doppler sampling rate (the PRF). This is considered
in terms of azimuth ambiguity analysis in Section 6.5.1.

Now that we have in hand the fundamental result that the transform F_δ(jω),
Eqn. (A.2.9), of the sampled signal f_δ(t), Eqn. (A.2.4), and the transform F(jω)
of the analog signal f(t) are in fact identical on the baseband |f| ≤ f_s/2, provided
bandlimiting and adequate sampling rate hold, we can develop some useful
relations allowing computation of system output functions in terms of the
discrete samples of the input.

A.3 DISCRETE CONVOLUTION

The main operations involved in SAR image formation are filtering of signals
by linear stationary systems. If we regard an input signal as a function f(t)


of continuous time, and if the filter is a stationary system with impulse response
h(t), then the output is exactly the ("linear") convolution, Eqn. (A.1.7):
g(t) =

(A.3.1)

:<X> h(t - t')i(t')dt'

If both i(t) and h(t) are time sampled at a rate J., then certainly the integral

Eqn. ( A.3.1) has an approximation


<X>

gk = g(k/J.) ~

h[(k- m)/J.Ji(m/J.)(1/J.)

(A.3.2)

m=-oo

which is the discrete convolution of the two sample sequences h(n/J.), i(n/ J.),
scaled by 1I!..
Discrete Convolution of Band/Im/led Signals

The key fact is that, if both i(t) and h(t) are bandlimited, such that
F(jw) = H(jw) = 0, Iii> B/2, and if the sampling is at or above the Nyquist
rate, J. > B, then Eqn. (A.3.2) is exact. Since the output g(t) of a linear analog
system with bandlimited input is necessarily also bandlimited, then its exact
samples g0 , computed by Eqn. (A.3.2),-suffice to reconstruct the complete output
g(t) exactly at all times, in principle by using Whittaker's interpolation formula,
Eqn. (A.2.2). Hence, in the case of a bandlimited input, the exact analog output
g(t) can be computed in terms of time samples of the input. (If the system
function H(jw) is not bandlimited, the samples h0 of its impulse response are
not used in Eqn. (A.3.2). Rather, it is necessary to replace the system by
its bandlimited companion.)
That the discrete 'convolution Eqn. (A.3.2) is exact for bandlimited inputs
follows at once from the fact that, in the band Iii ~ B/2, from Eqn. (A.2.5)
and Eqn. (A.2.9) we have
<X>

F(jw) = (l/J.)F!(jw) = (l/J.)

inz-n,

Iii < B/2

(A.3.3)

n= -co

where z = exp(jw/f_s), and similarly for H(jw). Then in the same band we have

    f_s G(jw) = Σ_{k=-∞}^{∞} g_k z^{-k} = f_s H(jw) F(jw)
              = (1/f_s) Σ_{m,n=-∞}^{∞} i_m h_n z^{-n-m}
              = (1/f_s) Σ_{k,m=-∞}^{∞} h[(k - m)/f_s] i(m/f_s) z^{-k}       (A.3.4)

Equating coefficients of powers of z in Eqn. (A.3.4) yields the discrete
convolution Eqn. (A.3.2).

Discrete Fourier Transform

As we pointed out in Section A.1, the calculation of a continuous time
convolution Eqn. (A.3.1) is only in principle facilitated by introduction of the
apparatus of Fourier transforms. In contrast, the calculation of the discrete
convolution Eqn. (A.3.2) is greatly facilitated in practice by the introduction of
a transform theory appropriate to that computation.
Towards that end, we note that we have already displayed one form of such
a transform theory, namely, the formalism having to do with the sampled signal
f*(t) defined from the time samples f_n of a continuous time signal f(t).
Specifically, given any sequence of numbers f_n, we defined the Z-transform of
the sequence, Eqn. (A.2.5). The inversion formula Eqn. (A.2.7) expresses the
countable infinity of values f_n in terms of a continuum of frequencies on a finite
interval |f| < f_s/2. Since on that interval (assuming no aliasing) by Eqn. (A.2.9)
we have

    G*(jw) = f_s G(jw) = f_s H(jw) F(jw) = H*(jw) F*(jw) / f_s

we can carry out the calculation Eqn. (A.3.2) in principle by multiplying
transforms and applying the inversion formula Eqn. (A.2.7). Nonetheless, we
still have to do here with integrals over frequency functions F*(jw) for which
we do not know the analytic forms, in general.
In proceeding to something more practical, suppose that the samples f_n are
zero everywhere except over a finite interval, say 0 ≤ n ≤ N - 1. Then we might
expect to obtain a representation of that finite number of values in terms of
something less than a continuum of values of the function F*(jw), and perhaps
even in terms of a finite number of its values, specifically N of them. In fact, it
is easy to verify that the pair of relations:

    f_n = (1/N) Σ_{k=0}^{N-1} F_k exp(j2πkn/N),      0 ≤ n ≤ N - 1
                                                                            (A.3.5)
    F_k = Σ_{n=0}^{N-1} f_n exp(-j2πkn/N),           0 ≤ k ≤ N - 1

is an identity in the two sets of numbers {f_n}, {F_k}, which are in general
complex. This pair is the discrete Fourier transform (DFT) and its inverse. In
case the numbers f_n, 0 ≤ n < N, are the only nonzero samples of a properly


sampled bandlimited function, then the numbers F_k, 0 ≤ k < N, have the
interpretation (Eqn. (A.2.5) and Eqn. (A.2.9)):

    F_k = F(z)|_{z = exp(j2πk/N)} = F*(j2πk f_s/N) = f_s F(j2πk f_s/N)      (A.3.6)

They are therefore uniformly spaced samples of the spectrum F(jw) on the band
|f| < f_s/2.
We now have in hand a representation Eqn. (A.3.5) of the N samples f_n in
terms of a finite sum of sampled exponentials, with coefficients F_k we can
calculate from the given time samples f_n. The representation correctly reproduces
the original samples f_n only over the span 0 ≤ n ≤ N - 1, however, since we
began by assuming f_n vanished outside that interval, while clearly the DFT
expression Eqn. (A.3.5) for f_n does not. In fact, both the DFT expressions
Eqn. (A.3.5) are periodic, in the variables n and k, respectively, with period N.
The restriction of the calculation Eqn. (A.3.5) to the base period [0, N - 1] is
essential if the DFT expressions are to relate to samples of a bandlimited
function with only N nonzero samples.
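The DFT pair Eqn. (A.3.5) can be evaluated directly as a matrix product and checked against a library FFT. A minimal numpy sketch (the sequence values are arbitrary; conventions as in the text, with the kernel exp(-j2πkn/N) on the forward transform and the factor 1/N on the inverse):

```python
import numpy as np

# Direct evaluation of the DFT pair Eqn. (A.3.5).
rng = np.random.default_rng(0)
N = 16
f = rng.standard_normal(N) + 1j*rng.standard_normal(N)

n = np.arange(N)
W = np.exp(-2j*np.pi*np.outer(n, n)/N)   # W[k, n] = exp(-j2πkn/N)

F = W @ f                  # F_k = Σ_n f_n exp(-j2πkn/N)
f_back = (W.conj() @ F)/N  # f_n = (1/N) Σ_k F_k exp(+j2πkn/N)

print(np.allclose(F, np.fft.fft(f)), np.allclose(f_back, f))
```

The direct matrix product costs order N^2 operations; the point of Section A.4 is that the same numbers can be had in order N log N.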


Discrete Fourier Transform and Discrete Convolution

Let us now investigate what Oppenheim and Schafer (1975) have called the
"periodic convolution" of two sequences f_n, h_n which vanish except on the base
interval [0, N - 1]. The periodic convolution can be calculated using discrete
Fourier transforms Eqn. (A.3.5), and with care can be made identical to the
linear convolution Eqn. (A.3.2). The periodic convolution is defined as the
convolution of sequences f̃_n, h̃_n which are the results of extending f_n, h_n from
the base span periodically, with period N:

    g̃_m = Σ_{n=0}^{N-1} h̃_{m-n} f̃_n                                       (A.3.7)

The resulting sequence g̃_m is also a periodic sequence with period N.
From the DFT expressions Eqn. (A.3.5), the periodic sequence g̃_m has a
representation in terms of the (periodic) coefficients

    G_k = Σ_{m=0}^{N-1} g̃_m W^{-km}
        = (1/N^2) Σ_{m,n,k',k''=0}^{N-1} H_{k'} F_{k''} W^{-km+(m-n)k'+nk''}
        = (1/N) Σ_{m,k'=0}^{N-1} H_{k'} F_{k'} W^{m(k'-k)} = H_k F_k        (A.3.8)

where we have defined W = exp(j2π/N). Thus we have a transform analysis of
the periodic convolution operation:

    g̃_n = D^{-1}[D(f_n) D(h_n)]                                            (A.3.9)

where D and D^{-1} indicate the operations Eqn. (A.3.5) of taking the discrete
Fourier transform and its inverse. The sequence g̃_n results as the indicated DFT
operations on the sequences f_n, h_n, which vanish outside [0, N - 1], but which
are identical to f̃_n, h̃_n on that interval. The sequence g̃_n so computed, the
periodic convolution of the sequences f̃_n, h̃_n, is the "circular" convolution of the
sequences f_n, h_n, rather than the (linear) convolution Eqn. (A.3.2) of the
sequences f_n, h_n.
The situation is made clear by the diagrams of Fig. A.4. This shows the two
periodic sequences f̃_n, h̃_n whose convolution g̃_n is obtained by inverting the
coefficient sequence F_k H_k. The heavy outlining indicates the base periods of f̃_n
and h̃_n, which are the only spans over which f_n and h_n have nonzero values.
Any value of g̃_n on the base interval [0, N - 1] involves both values of h̃_n
arising from the base interval, and values arising from a neighboring period of
h̃_n, whereas the values of g_n on the base interval involve only values of h_n on
the base interval. Therefore g_n and g̃_n will be different numbers, even on the
base interval. Further, the linear convolution values g_n are potentially nonzero
for 0 ≤ n ≤ 2(N - 1). Since the values g̃_n simply repeat beyond n = N - 1, the
values of g_n for N ≤ n ≤ 2(N - 1) are not available from the circular convolution.

[Figure A.4  Time waveforms in the case of linear convolution realized by
circular convolution using the DFT. Case of inadequate sample span.]
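The distinction shows up immediately in a small numpy experiment (sequence values arbitrary): multiplying N-point DFTs and inverting gives the circular convolution, which wraps the tail of the linear convolution back onto its first values; only when the transform length reaches L + M - 1 do the two agree.

```python
import numpy as np

f = np.array([1., 2., 3., 4.])           # length L = 4
h = np.array([1., 1., 1.])               # length M = 3

lin = np.convolve(f, h)                  # linear convolution, length L+M-1 = 6

# Circular convolution with N = 4: the two tail values of `lin` wrap around
# and are added onto the first two values.
circ4 = np.fft.ifft(np.fft.fft(f, 4)*np.fft.fft(h, 4)).real

# Zero-padded to N = 6 = L+M-1: identical to the linear convolution.
circ6 = np.fft.ifft(np.fft.fft(f, 6)*np.fft.fft(h, 6)).real

print(lin)     # values 1, 3, 6, 9, 7, 4
print(circ4)   # values 8, 7, 6, 9  (first two corrupted by wrap-around)
print(circ6)   # matches lin
```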
Linear Convolution Realized as Circular Convolution

To avoid these effects of circular convolution, we must arrange matters so that
we obtain the situation of Fig. A.5. There the base period has been extended
sufficiently such that, for computation of the last value of g̃_n, the troublesome
replication of h̃_n in a neighboring period interval has not yet moved in to take
part in the computation, while the first replication of f̃_n after the base period
has not yet begun. If f_n = 0 outside the span 0 ≤ n ≤ L - 1, and if h_n = 0
outside the span 0 ≤ n ≤ M - 1, then the linear convolution sequence g_n in
Eqn. (A.3.2) has nonzero values at most over 0 ≤ n ≤ L + M - 2. We must
have N ≥ L + M - 1 in order that the last value g_n of interest, at n = L + M - 2,
fall within the base period rather than being folded back onto the early values
of g̃_n. This also assures that the first replication of h̃_n outside the base period
has just not begun to overlap the values of f̃_n in the base period.
Thus, for g̃_n and g_n to be identical over the base period 0 ≤ n ≤ N - 1, it is
both necessary and sufficient that N be chosen for the computation such that
N ≥ L + M - 1. Thereby all possible nonzero values of g_n occur on the base
span, and these are identical to the values g̃_n. The DFT procedure Eqn. (A.3.9)
computes the linear convolution Eqn. (A.3.2).

[Figure A.5  Linear convolution realized by circular convolution. Sample span
extended to avoid circulatory replications.]

Another way of saying this is that, if we have sequences f_n, h_n of lengths L
and M, respectively, and if we attempt to compute N values g̃_n, 0 ≤ n ≤ N - 1,
of their linear convolution by the discrete Fourier transform procedure using
some value for N, the first L + M - N - 1 values g̃_0, ..., g̃_{L+M-N-2} will be
incorrect. If we have sequences f_n, h_n of length N, and we use a transform size
of length N, in fact only the single value g̃_{N-1} will be computed correctly.

Filtering a Data Stream by Transform Operations

If we are carrying out batch processing of a certain number L of input samples
f_n through a filter with impulse response sequence h_n of length M, we can use
the discrete Fourier transform procedure Eqn. (A.3.5) and Eqn. (A.3.8) with a
value N larger than L + M - 2 to compute the convolution Eqn. (A.3.2).
However, it may be that this value of N is inconveniently large, or we may
have to do with an input sequence f_n that is on-going in time indefinitely.
Special arrangements then need to be made to carry out the computation
successfully.
As in Fig. A.6, suppose that we segment the input data flow f_n into sections
of length L, the ith subsequence being defined by

    f_n^(i) = f_{iL+n},      0 ≤ n ≤ L - 1                                  (A.3.10)

[Figure A.6  "Overlap-add" method of filtering a long data stream using an
N-point transform.]


Then certainly

    f_n = Σ_i f_{n-iL}^(i)

and because of the linearity of the convolution operation Eqn. (A.3.2):

    g_n = Σ_i g_{n-iL}^(i)                                                  (A.3.11)

where g_n^(i) is the output of the system in response to the ith input segment of
Eqn. (A.3.10). If the impulse response sequence h_n is of length M, we need only
choose some convenient value N ≥ L + M - 1 to carry out the component
convolutions in Eqn. (A.3.11) correctly. The results are simply added. As shown
in Fig. A.6, the result of convolution of the input sequence f_n^(i) with the impulse
response sequence h_n will generally spread over more than one data interval.
Then part of the output from one ith component convolution Eqn. (A.3.11)
must be added to the outputs of other segments with which it overlaps. This
procedure is called the overlap-add method of filtering an ongoing data stream.
Alternatively (Fig. A.7), the input stream can be segmented into blocks of
length L = N, the contemplated transform size. If the impulse response sequence
h_n is of length M, then we know from above that the first L + M - 1 - N = M - 1
points of the calculated convolution are incorrect, and only the remaining
N - M + 1 points represent progress in computing the output data stream g_n.
The procedure is then to shift the input segment M - 1 points farther back on
the input stream than would otherwise be necessary, with the result that the N
input points to each convolution calculation are made up of the last M - 1
points of the previous section, saved and reused, and the first N - M + 1 points
of the data stream f_n which have not yet been used. The good N - M + 1
output points from each calculation are pieced together appropriately to form
the output stream g_n. This procedure is called the overlap-save method, since
the input segments must be overlapped, necessitating saving some of the input
points from one computation to another.

[Figure A.7  "Overlap-save" method of filtering a long data stream using an
N-point transform.]

It might be noted that, so far as computation time is concerned, we lose by
using the discrete Fourier transform procedures as we have described them to
compute a convolution sum. For suppose we want to convolve an L-point
sequence with an M-point sequence. The output sequence is of length M + L - 1.
Each of the middle L - M + 1 points requires M multiplies and M - 1 adds
for its calculation, while the M - 1 points on each end of the output sequence
require (M - 1)^2 operations, a total operation count of L(2M - 1) - M - 1,
of order 2ML. If we use the discrete Fourier transform technique, we need to
calculate two N-point transforms, one requiring L multiplies and L - 1 adds
for each of the N values F_k and the other requiring M multiplies and M - 1
adds for each of the N values H_k. We then have N multiplies to calculate the
G_k sequence, and finally an N-point inverse transform, with N multiplies and
N - 1 adds for each of the M + L - 1 output sequence values, all adding up to
(M + L - 1)(4N - 1) + N operations, of order 4N(M + L). Since ML < 2N(M + L)
always holds (we must have N ≥ L + M - 1), direct convolution requires fewer
computations.
The indications in fact go just the other way as a result of the fast Fourier
transform algorithm, which is a dramatically more rapid way of calculating the
discrete Fourier transform than is evident from the form Eqn. (A.3.5). It is that
algorithm which has made possible much of what is now called signal processing,
and to which we turn attention in the next section.
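Both block-filtering schemes can be sketched in a few lines of numpy (function names, block lengths, and test signals are illustrative, not from the text); each is checked against a direct linear convolution of the whole stream.

```python
import numpy as np

def overlap_add(f, h, L):
    """Filter stream f with FIR h, taking L new input samples per block."""
    M = len(h)
    N = L + M - 1                       # transform size N >= L + M - 1
    H = np.fft.fft(h, N)
    g = np.zeros(len(f) + M - 1)
    for i in range(0, len(f), L):
        seg = f[i:i+L]
        gi = np.fft.ifft(np.fft.fft(seg, N) * H).real
        g[i:i+N] += gi[:min(N, len(g) - i)]   # add the overlapping tails
    return g

def overlap_save(f, h, N):
    """Filter stream f with FIR h using N-point transforms; keep the good
    N-M+1 points of each circular convolution, discard the first M-1."""
    M = len(h)
    step = N - M + 1
    H = np.fft.fft(h, N)
    # prepend M-1 saved zeros; pad the end so the last block is full
    fpad = np.concatenate([np.zeros(M-1), f, np.zeros(step)])
    out = []
    for i in range(0, len(f), step):
        block = fpad[i:i+N]                 # last M-1 of previous + new points
        gi = np.fft.ifft(np.fft.fft(block, N) * H).real
        out.append(gi[M-1:])                # first M-1 points are incorrect
    return np.concatenate(out)[:len(f) + M - 1]

f = np.random.default_rng(1).standard_normal(73)
h = np.array([0.5, 1.0, 0.25, -0.125])
ref = np.convolve(f, h)
print(np.allclose(overlap_add(f, h, L=16), ref),
      np.allclose(overlap_save(f, h, N=32), ref))
```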

A.4  THE FAST FOURIER TRANSFORM ALGORITHM

One of the most common operations in signal processing is Fourier
transformation. As applied to a finite number of numerical data, the appropriate
Fourier transform pair is the discrete Fourier transform (DFT), defined in
Eqn. (A.3.5):

    f_n = (1/N) Σ_{k=0}^{N-1} F_k exp(j2πkn/N),      n = 0, ..., N - 1      (A.4.1)

    F_k = Σ_{n=0}^{N-1} f_n exp(-j2πkn/N),           k = 0, ..., N - 1      (A.4.2)

Since we have to do here with finite sums, convergence questions do not arise.
The pair is valid as an identity for complex data numbers f_n as well as for real
data.
Two distinct uses are made of the DFT. The first is spectral analysis, in
which we seek the Fourier spectrum Eqn. (A.1.16) of a time waveform which
is bandlimited to |f| ≤ B/2. As in Eqn. (A.2.9), the signal spectrum F(jw) on
the band exactly equals the scaled spectrum F*(jw)/f_s of the sampled signal
f*(t) of Eqn. (A.2.4), constructed from the samples f_n taken at a rate f_s > B.
Since in practice only some finite number N of the samples f_n are nonzero, the
spectrum function F*(jw) has sampled values as in Eqn. (A.3.6):

    F*(j2πk f_s/N) = F_k,        k = -(N/2 - 1), ..., N/2

Thus the DFT Eqn. (A.4.2) computes samples of the Fourier spectrum F(jw).
The second main use of the DFT is as an aid in computing the convolution
of two signals, that is, the output of a linear stationary system in response to
some input. As we saw in Section A.3, in terms of operation counts the DFT
is an inefficient way to carry out convolution processing, and indeed the
procedure was not much used until the mid 1960s. At that time, however, an
algorithm was presented (Cooley and Tukey, 1965) which exploited a
rearrangement of the computations of the DFT to reduce the operation count.
This algorithm, which came to be known as the "fast" Fourier transform (FFT),
gave birth to the discipline of signal processing as it is practiced today. (The
advances in digital computer hardware which were taking place at the same
time played an essential role as well.)
Suppose we want to compute the convolution Eqn. (A.3.2) of two N-point
sequences f_n, h_n. By direct convolution, to compute the ~2N output points g_n
requires ~2N^2 operations, while with the DFT we need ~6N^2 operations.
With the FFT, however, as we will see below, only ~2N log_2(N) operations
are needed, for the case of N a power of 2. For the modest value N = 1024,
say, direct convolution then requires about 2 x 10^6 operations, the DFT needs
6 x 10^6, and the FFT, only 5 x 10^4. This difference in behavior between N^2
and N log(N) easily tips the computational balance in favor of the DFT, as
realized by the FFT. This procedure of carrying out a convolution computation
using the FFT is called "fast convolution", and is universally used, except for
special cases such as that in which N is quite small, say N < 64.
The key observation in development of the FFT is that the periodicity, in
both variables k and n, of the complex exponential factors in the DFT of
Eqn. (A.4.1) and Eqn. (A.4.2), with period N, can be exploited in the
computation. Two ways of doing this can be constructed, which are in the
technical sense duals of one another. The first, leading to "decimation in
frequency" FFT algorithms, separately computes various subsequences of the
output sequence F_k. The second, leading to "decimation in time" algorithms,
separates the input sequence f_n into subsequences, and computes separately
on each.

Decimation in Frequency

Let us consider first the decimation in frequency scheme, and suppose for
simplicity that N is an even number. Decimation in frequency algorithms evolve
by first noting from Eqn. (A.4.2) that

    F_k = Σ_{n=0}^{N-1} f_n exp(-j2πkn/N)
        = Σ_{n=0}^{N/2-1} f_n exp(-j2πkn/N) + Σ_{n=N/2}^{N-1} f_n exp(-j2πkn/N)
        = Σ_{n=0}^{N/2-1} [f_n + exp(-jπk) f_{n+N/2}] exp(-j2πkn/N)         (A.4.3)

Then the output sequence F_k can be partitioned as

    F_{2m} = Σ_{n=0}^{N/2-1} (f_n + f_{n+N/2}) exp[-j2πmn/(N/2)]            (A.4.4)

    F_{2m+1} = Σ_{n=0}^{N/2-1} (f_n - f_{n+N/2}) exp(-j2πn/N) exp[-j2πmn/(N/2)]
                                                                            (A.4.5)

Thus the even numbered F_k, Eqn. (A.4.4), are calculated as the N/2 point
transform of the sequence f_n + f_{n+N/2}, while the odd numbered F_k,
Eqn. (A.4.5), are the N/2 point transform of the sequence
(f_n - f_{n+N/2}) exp(-j2πn/N), where the exponential multipliers are the
so-called "twiddle factors". The two N/2 point transforms are further subdivided
into N/4 point transforms, and so forth, until we deal ultimately with two-point
transforms.
For each n, the operations in forming the sequences to be transformed as in
Eqn. (A.4.4) and Eqn. (A.4.5):

    g_n = f_n + f_{n+N/2}                                                   (A.4.6)

    g'_n = (f_n - f_{n+N/2}) exp(-j2πn/N)                                   (A.4.7)

are just those of taking a two-point transform (the case N = 2 of Eqn. (A.4.2)),
followed by adjustment of the second output coefficient by a twiddle factor.
Thus, for say N = 8, we first do four 2-point transforms using (f_0, f_4), ..., (f_3, f_7).
The four output coefficients g_n of Eqn. (A.4.6) are the inputs to a four-point
transform, and the four twiddled outputs g'_n of Eqn. (A.4.7) are the inputs to
another four-point transform. In carrying out the first of these, we first do two
two-point transforms using (g_0, g_2) and (g_1, g_3) and twiddle the second output
coefficients of each. This yields numbers (h_0, h_1), (h'_0, h'_1), each pair of which
is the input to a final two-point transform. Collecting all these together, we have
done four (N/2) (complex) two-point transforms at each of three (log_2 N)
computation stages, with twiddle factors applied at each stage.
Looked at another way, in Eqn. (A.4.4) and Eqn. (A.4.5) we have N/2
two-point transforms, with the resulting coefficients adjusted by twiddle factors
(one of each pair of which is unity), and finally two (N/2)-point transforms on
the resulting adjusted coefficients.
If then N is a power of 2, N = 2^m say, we need m stages of decimation to
get down to the final two-point transforms. Each two-point transform (a
"butterfly") requires two (complex) additions, and since there are altogether
m(N/2) two-point transforms, we require mN complex additions. Applying the
twiddle factors uses N/2 complex multiplications for each stage except the last,
for a total of N(m - 1)/2 multiplications involving twiddle factors. The grand
total of complex operations needed for the N-point transform is thus:

    mN + (m - 1)N/2 = (3/2) N (log_2 N - 1/3)

Various reorderings of the computation can reduce the operation count even
below this, but the lion's share of the improvement is indicated by the transition
from N^2 behavior to N log_2(N).
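The recursion of Eqns. (A.4.4)-(A.4.7) translates almost line for line into code. The following is a minimal recursive radix-2 decimation-in-frequency sketch (Python/numpy; a textbook implementation would be iterative and in-place, as discussed below): the even-indexed outputs are the N/2-point transform of f_n + f_{n+N/2}, and the odd-indexed outputs are the N/2-point transform of the twiddled differences.

```python
import numpy as np

def fft_dif(f):
    """Radix-2 decimation-in-frequency FFT; len(f) must be a power of 2."""
    N = len(f)
    if N == 1:
        return f.astype(complex)
    half = N // 2
    a = f[:half] + f[half:]                                       # Eqn. (A.4.6)
    b = (f[:half] - f[half:]) * np.exp(-2j*np.pi*np.arange(half)/N)  # (A.4.7)
    F = np.empty(N, dtype=complex)
    F[0::2] = fft_dif(a)                  # even-numbered F_k, Eqn. (A.4.4)
    F[1::2] = fft_dif(b)                  # odd-numbered F_k,  Eqn. (A.4.5)
    return F

x = np.random.default_rng(2).standard_normal(64)
print(np.allclose(fft_dif(x), np.fft.fft(x)))   # True
```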
Decimation in Time

With decimation in time procedures, we separate the f_n sequence into a sequence
with even index, n = 2m, and a sequence with odd index, n = 2m + 1, for
m = 0, ..., N/2 - 1. Then

    F_k = Σ_{m=0}^{N/2-1} f_{2m} exp[-j2πkm/(N/2)]
          + exp(-j2πk/N) Σ_{m=0}^{N/2-1} f_{2m+1} exp[-j2πkm/(N/2)]         (A.4.8)

Thus, the coefficients in the N-point DFT appear as sums of coefficients in two
DFTs, each of length N/2, with adjustment of the second set by twiddle factors
exp(-j2πk/N). Assuming that N/2 is even, each of these constituent (N/2)-point
DFTs can be calculated by further subdividing the sequences f_{2m} and f_{2m+1}
and carrying out a total of four transforms, each of length N/4. The process
cycles until we finally deal with constituent transforms of length just 2 points.
The total operation count in carrying out the original N-point transform with
this procedure turns out to be precisely the same as for the decimation in
frequency procedure, and in fact the graphs of the computations are duals of
one another.
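The dual recursion of Eqn. (A.4.8) can be sketched the same way (again a recursive illustration rather than a production in-place algorithm): the N-point DFT is assembled from the N/2-point DFTs of the even- and odd-indexed samples, the latter adjusted by the twiddle factors exp(-j2πk/N).

```python
import numpy as np

def fft_dit(f):
    """Radix-2 decimation-in-time FFT; len(f) must be a power of 2."""
    N = len(f)
    if N == 1:
        return f.astype(complex)
    E = fft_dit(f[0::2])                  # DFT of the even-indexed samples
    O = fft_dit(f[1::2])                  # DFT of the odd-indexed samples
    tw = np.exp(-2j*np.pi*np.arange(N//2)/N) * O   # twiddled, Eqn. (A.4.8)
    # E and O have period N/2, and the twiddle changes sign at k + N/2:
    return np.concatenate([E + tw, E - tw])

rng = np.random.default_rng(3)
x = rng.standard_normal(32) + 1j*rng.standard_normal(32)
print(np.allclose(fft_dit(x), np.fft.fft(x)))
```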
Coefficient Ordering

The detailed techniques of realizing an FFT algorithm (note we do not say at
this point "the" FFT algorithm - there are many variants (Elliott and Rao,
1982, Chapter 4)) become closely involved with the type of hardware with which
one wishes to deal. There is an interplay between computation ordering and
necessary access to memory which provides a considerable number of
possibilities. We might mention here only the choices between in-place and
not-in-place algorithms, and between natural ordering of inputs and outputs
and bit reversed ordering.
For either of the choices of time decimation, Eqn. (A.4.8), or frequency
decimation, Eqn. (A.4.4) and Eqn. (A.4.5), there exist algorithm versions which
require two storage arrays, each of length N (complex), with input data at
each stage of the computation taken from one array and output data loaded
into the other. There also exist versions which require only one array ("in-place"
computation), with the numbers in the input array for each stage being replaced
by output from that same stage. One pays a price for the storage saving in the
latter case, however, in that either the input data or the output data will appear
in sequential memory in bit reversed order. For example, for N = 8, locations
(binary) 000, 001, 010, ..., 110, 111 will contain data numbers with indexes 000,
100, 010, ..., 011, 111. Whether time or frequency decimation is used, if both
input and output are to be in normal order, two storage arrays must be provided.
On the other hand, for in-place computation, with either time or frequency
decimation one has the choice of either the input or output, but not both, being
in normal order, with the other being bit reversed.
If we are interested in fast convolution, we can always use in-place
computation (a single storage array), and normally ordered input to, and output
from, the convolution. We simply use normally ordered input f_n, and say an
in-place decimation in time algorithm. The output coefficients F_k then appear
in bit reversed order, but if we have also arranged that the filter coefficients H_k
are in bit reversed order, we simply multiply the two arrays in the ordering in
which they appear to determine the array G_k in bit reversed order. We then
use an in-place algorithm which expects its input coefficients to be in bit reversed
order, which produces the final system output g_n in normal order. The penalty
is that we may need to invoke different codes for the direct transform of f_n and
the inverse transform of G_k.
The inverse transform operation Eqn. (A.4.1), forming the sequence f_n from
the sequence F_k, is done using the same code as the direct transformation
Eqn. (A.4.2), with minor changes of index and scale, since the computations
differ only in the sign of the exponential and in the scale factor 1/N.
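The bit-reversed ordering referred to above is easy to generate explicitly; a small sketch (function name illustrative) reproduces the N = 8 example in the text, where memory location n holds the data number whose index is n with its log_2(N) address bits reversed.

```python
def bit_reverse_indices(N):
    """Index permutation for bit-reversed ordering; N must be a power of 2."""
    bits = N.bit_length() - 1           # log2(N) address bits
    # reverse the binary representation of each index
    return [int(format(n, f'0{bits}b')[::-1], 2) for n in range(N)]

# locations 000,001,010,...,111 hold data indexes 000,100,010,...,111
print(bit_reverse_indices(8))   # [0, 4, 2, 6, 1, 5, 3, 7]
```

Note that the permutation is its own inverse, which is why a single pass of swaps suffices to reorder an array in place.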

Since the FFT computation is the heart of fast convolution SAR image
formation algorithms, as well as of so much of signal processing, it is worth
giving some additional discussion of the choices that can be made, as well as
some indication of the more recent developments in the subject of fast
convolution in general.

A.5  ADDITIONAL TOPICS RELATING TO THE FFT

The Radix of the Transform

In taking FFTs, a number of points N = 2^m is commonly used, in the way
we have described briefly in Section A.4. This realization of the calculation
Eqn. (A.4.1) using successive segmentation by factors of 2 is called a "radix 2"
algorithm. The original presentation of the FFT (Cooley and Tukey, 1965)
made no such assumption, and an N-point FFT algorithm can be derived for
any arbitrary factoring of any number N. The size of the basic transform unit
coded is called the radix of the algorithm. Thus, if an N = 2^m point transform
is coded as a cascade of m stages of two-point transforms, we have a radix 2
FFT, while if we code m/2 stages of four-point transforms (requiring m to be
even), we have a radix 4 algorithm. If we code m/2, or (m - 1)/2 for odd m,
stages of four-point transforms, followed or preceded by one two-point transform
for odd m, we have a simple mixed radix transform, in this case a transform
of "radix 4 + 2".
The various factorings of N lead to transforms with different operation
counts, although all with the basic N log(N) behavior, and hence somewhat
different running speeds. As Gentleman and Sande (1966) early pointed out,
radix 4 or radix 4 + 2 is nearly a factor of 2 faster than radix 2, for moderate
size transforms of the order N = 1024. Bergland (1968a) pointed out that radix
8 + 4 + 2 is even faster. Since the rate of increase in program complexity grows
as the radix goes up, while the rate of increase in operation speed slows, radix 8
is the highest generally used. For special values of N, use of radixes such as 3
and 5 may lead to faster transforms than simply extending data sets of the
desired length N by zeros to reach a power of two.

Arrangements for Real Data

We often have to do with data which are real numbers, for example, the time
samples of the real offset video signal resulting from each pulse of a SAR system.
There are two standard ways of computing an FFT for a real data sequence.
In the first (Brigham, 1974, Chapter 10), the N-point transform sequence Eqn.
(A.4.2) of N real numbers f_n is computed using a complex FFT routine with
N/2 input points y_n = f_{2n} + j f_{2n+1}, n = 0, ..., N/2 - 1. The transform
coefficients are:

    F_k = G_k + H_k exp(-j2πk/N),        k = 0, ..., N/2
                                                                            (A.5.1)
    F_k = F*_{N-k},                      k = N/2 + 1, ..., N - 1

where

    G_k = (Y_k + Y*_{N/2-k})/2
    H_k = (Y_k - Y*_{N/2-k})/2j,         k = 0, ..., N/4

    G_k = G*_{N/2-k}
    H_k = H*_{N/2-k},                    k = N/4 + 1, ..., N/2

with the sequence Y_k being the (N/2)-point complex transform of the numbers
y_n. Similarly, an N-point inverse transform Eqn. (A.4.1) leading to a real
sequence is computed using an (N/2)-point complex transform as:

    f_{2n} = Re(y_n)
                                                                            (A.5.2)
    f_{2n+1} = Im(y_n),                  n = 0, ..., N/2 - 1

where the sequence y_n is the (N/2)-point inverse FFT of the sequence

    Y_k = G_k + j H_k,                   k = 0, ..., N/2 - 1

with

    G_k = (F_k + F*_{N/2-k})/2
    H_k = exp(j2πk/N)(F_k - F*_{N/2-k})/2,    k = 0, ..., N/4

    G_k = G*_{N/2-k}
    H_k = H*_{N/2-k},                    k = N/4 + 1, ..., N/2

In the second way of dealing with real data (Bergland, 1968b), a complex FFT
algorithm is pruned to remove all redundancy in computing a value and its
complex conjugate, and to eliminate all computation of values known to be zero.
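The first real-data arrangement can be checked numerically. The sketch below (numpy; function name and signal illustrative) packs f_{2n} + j f_{2n+1} into an N/2-point complex FFT, unscrambles G_k and H_k from the conjugate symmetry of Y_k, and applies Eqn. (A.5.1), comparing against a direct N-point FFT of the real data.

```python
import numpy as np

def real_fft_packed(f):
    """N-point FFT of real f via one (N/2)-point complex FFT; N even."""
    N = len(f)
    y = f[0::2] + 1j*f[1::2]                # y_n = f_2n + j f_2n+1
    Y = np.fft.fft(y)                       # (N/2)-point complex FFT
    k = np.arange(N//2)
    Yr = np.conj(np.roll(Y[::-1], 1))       # Y*_{N/2-k}, indices mod N/2
    G = (Y + Yr)/2                          # transform of the even samples
    H = (Y - Yr)/2j                         # transform of the odd samples
    Fk = G + H*np.exp(-2j*np.pi*k/N)        # Eqn. (A.5.1), k = 0..N/2-1
    FN2 = (G[0] - H[0]).real                # F_{N/2} (twiddle = -1 there)
    upper = np.conj(Fk[1:][::-1])           # F_k = F*_{N-k} on the upper half
    return np.concatenate([Fk, [FN2], upper])

f = np.random.default_rng(5).standard_normal(16)
print(np.allclose(real_fft_packed(f), np.fft.fft(f)))
```

The saving is roughly a factor of two in both time and storage, which is why this packing is routine for real offset-video samples.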
Vectorization of the Transform

Quite significant improvements in FFT computation times can result by
paralleling aspects of the computation, using vector machines such as the
Cyber-205 or CRAY. Pease (1968) and Temperton (1979), for example, have
considered the advantages to be gained by arranging specific FFT computations
to accord better with the architectures of specific machines than do the standard
algorithms we have been discussing. One potential advantage which might be
exploited in the SAR computation in particular would be carrying out the range
compression of multiple radar pulses simultaneously on a vector machine, with
the innermost loop of the computation being indexed by pulse number.


Prime Factor and Number Theoretic Transforms

In recent years, two main developments in numerical transform theory have
evolved. On the one hand, these center on more efficient computation of the
traditional (exponential based) DFT, Eqn. (A.4.1) and Eqn. (A.4.2). On the
other hand, development of a new class of transforms suitable for realization
of fast convolution has been carried forward, the so-called number theoretic
transforms. We will mention only some of the main developments here.
In the conventional (Fourier) FFT, a main development has been the
Winograd FFT, or the Winograd Fourier transform algorithm (WFTA).
Silverman (1977) gives a tutorial account of the theory, while Zohar (1979)
gives a discussion oriented towards realization in a computer code. For an
N-point transform to be realizable by WFTA, N must be decomposable as a
product of any number of mutually prime factors n_i selected from among
(2, 3, 4, 5, 7, 8, 9, 16). The largest possible value of N realizable by WFTA is
thus N = 5040, but as Agarwal and Burrus (1974) discuss, a discrete convolution
problem with larger N can be converted to a two-dimensional problem with
smaller WFTA sizes.
In WFTA itself, the N-point transform is realized as a specially arranged
nest of n_i-point transforms, each of which is further realizable with special
efficiency. The full algorithm requires the same or somewhat more adds than
the conventional FFT, but considerably fewer multiplies. For example, for a
complex transform with N = 1008, WFTA requires 34668 real adds and 3548
real multiplies, while a radix 8 + 4 + 2 FFT for N = 1024 requires 21793 real
adds and 10244 multiplies. If, on a particular machine, multiplies are noticeably
slower to perform than adds, the use of WFTA may have considerable
advantage.
Following the idea of Winograd to decompose N into a product of mutually
prime factors n_i, the "prime factor" FFT was developed. In this procedure, the
one dimensional N-point transform is converted into a K-dimensional transform,
with the transform in each dimension involving a number of points equal to
the corresponding factor of N. Thus, for N = 5040 = 5 x 7 x 9 x 16, for
example, the transform is realized as a 4-dimensional transform with 5, 7, 9,
and 16 points respectively in each dimension. Each constituent one dimensional
transform is realized by some appropriately fast method, often the WFTA.
Burrus and Eschenbacher (1981), Chu and Burrus (1982), and Johnson and
Burrus (1982) have discussed various aspects of such prime factor algorithms.
At the cost of considerable program complexity, substantial speedup can be
realized. For example, Chu and Burrus (1982) report operation of a 280-point
prime factor algorithm at a rate 27 times faster than a radix 2 FFT run on a
comparable machine.
Finally, it is useful to recall that we are often interested in the F_k only as
a means to realize a fast convolution. If fast convolution is the primary aim,
transforms other than the FFT may be appropriate and advantageous. Rader
(1972) has discussed a transform in which 2 plays the role which exp(-j2π/N)
plays in the Fourier transform, Eqn. (A.4.1). For an input sequence which is
integer (fixed point) data, and of length N which is a prime number, a transform
and its inverse can be formulated which maps sequence convolution into
transform multiplication. The transform, when realized on a binary machine,
requires no multiplies, N(N - 1) adds, and (N - 1)^2 circular register shifts for
an N-point input sequence. The algorithm is particularly suitable for fixed point
realization of convolutions of relatively short sequences. If, in addition, in this
transform the prime N is of the form N = 2^m + 1, in which case N is a Fermat
number, the Fermat transform is obtained, which requires only (m + 1)N adds
and no multiplies.
The entire field of fast algorithms for signal processing applications is
discussed in depth and generality in the useful texts by Elliott and Rao (1982)
and by Blahut (1985). The advantages to be gained from use of algorithms
other than traditional FFT procedures of radix 4 + 2 in SAR calculations are
relatively unexplored at this time. We therefore will end our discussion of the
matter here, having pointed out perhaps some possibilities.
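A miniature number theoretic transform in the spirit of the Fermat transforms mentioned above can be sketched in a few lines (all parameters here are illustrative, not from the text): arithmetic is carried out modulo the Fermat prime q = 2^8 + 1 = 257, in which 2 has multiplicative order 16, so a 16-point transform built on powers of 2 maps cyclic convolution into pointwise multiplication. On a binary machine the powers of 2 would be register shifts, not multiplies; the plain Python below uses modular exponentiation only for clarity.

```python
q, N, a = 257, 16, 2     # Fermat prime 2^8 + 1; 2 has order 16 mod 257

def fnt(x, r):
    """16-point number theoretic transform with root r, arithmetic mod q."""
    return [sum(v*pow(r, n*k, q) for n, v in enumerate(x)) % q
            for k in range(N)]

def ifnt(X):
    invN = pow(N, q - 2, q)             # N^{-1} mod q (Fermat's little theorem)
    y = fnt(X, pow(a, q - 2, q))        # transform with the inverse root 2^{-1}
    return [(invN*v) % q for v in y]

x = [1, 2, 3, 4] + [0]*12
h = [5, 6, 7] + [0]*13
y = ifnt([(X*H) % q for X, H in zip(fnt(x, a), fnt(h, a))])

# Direct cyclic convolution mod q (here also the linear convolution, since
# the supports are short enough not to wrap and all values stay below q)
ref = [sum(x[m]*h[(n - m) % N] for m in range(N)) % q for n in range(N)]
print(y == ref)
```

As the comment notes, results are only meaningful when the true convolution values stay below the modulus q; that constraint is what limits such transforms to fixed-point data of controlled magnitude.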

A.6

INTERPOLATION OF DATA SAMPLES

Let us finally consider interpolation of a bandlimited low pass signal f(t), with
spectrum F(jω) which vanishes for |f| > B/2. As always, we assume the signal
to have been sampled at an adequate rate f_s > B to produce a sequence
f_n = f(n/f_s). Suppose further that all but a finite number N of the samples are
zero. Then we know from Eqn. (A.2.2) that we have exactly

    f(t) = Σ_{n=0}^{N−1} f_n sinc[πf_s(t − n/f_s)]    (A.6.1)

valid for all t, which provides error free interpolation (or extrapolation) of the
given N samples. Beyond this there is only the question of implementation to
consider.
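A brute-force rendering of Eqn. (A.6.1) is immediate, assuming the convention sinc(x) = sin(x)/x; the function names below are ours:

```python
import math

# Direct evaluation of the interpolation sum, Eqn. (A.6.1):
#   f(t) = sum_n f_n * sinc[pi*fs*(t - n/fs)],  sinc(x) = sin(x)/x.
# A brute-force sketch: fine for a handful of samples, O(N) per output point.

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

def whittaker(samples, fs, t):
    return sum(fn * sinc(math.pi * fs * (t - n / fs))
               for n, fn in enumerate(samples))

fs = 8.0
samples = [math.cos(2 * math.pi * 1.0 * n / fs) for n in range(64)]
# At a sample instant every term but one vanishes, returning the sample:
print(abs(whittaker(samples, fs, 3 / fs) - samples[3]) < 1e-12)   # True
```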
We at once remark, however, that Whittaker's formula, Eqn. (A.2.2), is often
not the most reasonable way to carry out such an interpolation computationally.
It is usually more efficient to introduce a time shift in the transform domain,
since we now have at hand an efficient way to compute transform coefficients
(the FFT).
In general, if f(t) is a function with Fourier transform F(jω), the transform
of the delayed version g(t) = f(t − T) is

    G(jω) = exp(−jωT)F(jω)    (A.6.2)

Therefore, if f(t) is bandlimited to the band |f| ≤ B/2, so also will be g(t), and
samples of g(t) at the same rate as those of f(t) (f_s > B) will suffice for
reconstruction. From Eqn. (A.2.9) and Eqn. (A.6.2), the transforms of the
corresponding sampled signals are related by

    G†(jω) = exp(−jωT)F†(jω)    (A.6.3)

since F(jω) and F†(jω) are identical on the band |f| < B/2. Then, from
Eqn. (A.3.6) and Eqn. (A.6.3),

    G_k = F_k exp(−j2πkf_s T/N),    k = 0, N − 1    (A.6.4)

The sought samples g_n = g(n/f_s), n = [0, N − 1], are then just the inverse FFT
of the numbers Eqn. (A.6.4).
As a special case, suppose that we want to interpolate to the midpoints of
the original sampling intervals. Then we have T = 1/2f_s, and Eqn. (A.6.4)
becomes

    G_k = F_k exp(−jπk/N),    k = 0, N − 1

so that, from Eqn. (A.4.1),

    g_n = (1/N) Σ_{k=0}^{N−1} F_k exp[jπk(2n − 1)/N],    n = 0, N − 1

If we define a sequence F′_k by

    F′_k = F_k,    k = 0, N − 1
    F′_k = 0,      k = N, 2N − 1

so that F′_k is the zero-padded version of F_k, then the inverse FFT of the sequence
F′_k is

    g′_n = (1/2N) Σ_{k=0}^{N−1} F_k exp(jπkn/N),    n = 0, 2N − 1

which is to say that g_n = 2g′_{2n−1}, n = 0, N − 1, so that the g′_n sequence is the
sought interpolation of the original function f(t).
This procedure obviously generalizes to the case T = 1/mf_s, in which case
we compute the N-point FFT of the sequence f_n, extend the resulting sequence
F_k by appending zeros to fill out the N-point F_k sequence to a length mN,
and compute the inverse FFT over mN points. The result is a sequence of mN
points g′_n such that

    g_n = f(n/f_s − 1/mf_s) = mg′_{mn−1},    n = 0, N − 1

In fact, if we want a delay T = p/mf_s, p = 0, 1, ..., m − 1, the same reasoning
leads at once to

    g_n = f(n/f_s − p/mf_s) = mg′_{mn−p},    n = 0, N − 1

so that we obtain the full set of interpolation points on the grid of fineness
1/mf_s with one operation.
A companion operation to interpolation is loosely called decimation. Whereas
interpolation, in the last version discussed above, increases the sampling rate
of a bandlimited signal by a factor m above that minimum f_s > B which is
necessary to assure the absence of aliasing, decimation decreases the rate f_s by
a factor m in the case that in fact f_s > mB, so that the original function is
oversampled by a factor at least m. The obvious answer is the correct one: since
the given sequence f_n is oversampled by a factor m, simply discard all but every
mth sample.
The operations of interpolation and decimation, which in themselves change
the sampling rate by integer ratios, can be combined so as to increase or decrease
the sampling rate by any desired rational ratio a = p/q, so long as in the
decimation part of the combination enough samples are always retained that
the restriction f_s > B is not violated. For example, if we wish to increase the
sampling rate from f_s to (5/3)f_s, we FFT the original sequence of N samples
f_n by an N-point transform, pad the F_k sequence to a length 5N by appending
zeros, do an inverse FFT of length 5N, and throw away all but every third
sample in the result, after multiplying by 5 to adjust for the new scale factor
in the inverse FFT.
In many applications, one wants to carry out this process of sample rate
adjustment on an ongoing data stream, rather than on a batch of N points.
Crochiere and Rabiner (1981) considered the matter. The procedures, which
operate entirely in the time domain, are based on the observation that, if we
insert p zeros between every sample f_n of the original data sequence, we obtain
a sequence g_n, any pN samples of which have FFT coefficients given by

    G_k = Σ_{i=0}^{pN−1} g_i exp(−j2πki/pN) = Σ_{n=0}^{N−1} f_n exp(−j2πkn/N),    k = 0, pN − 1

making the change of variable n = i/p, so that the G_k sequence is just the F_k
sequence, but considered over p of its base periods of length N. If then the g_n
sequence is low pass filtered in the time domain by a digital filter to remove
spectral components from k = N through k = pN − 1, the result has a spectrum
which is exactly the F_k sequence padded out by zeros to a length pN. The result
therefore is the sequence with the values f(n/f_s) interpolated on the mesh with
fineness 1/pf_s. Discarding all but every qth sample in the result therefore
accomplishes sample rate increase by the rational factor p/q.
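The zero-padding procedure can be sketched directly in plain Python, with direct O(N²) transforms standing in for the FFT; the helper names are ours, not the book's. One practical point the sketch makes explicit: for a real input sequence the zeros must be inserted between the positive- and negative-frequency halves of the coefficient array, so that the padded spectrum remains conjugate symmetric.

```python
import cmath

# Zero-padding interpolation by an integer factor m, as described above:
# N-point DFT, pad the spectrum to length m*N, inverse transform, scale
# by m.  Direct O(N^2) transforms stand in for the FFT for clarity.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def interpolate(x, m):
    # Insert the pad between the positive- and negative-frequency halves
    # (assumes len(x) even and no energy in the Nyquist bin).
    N = len(x)
    X = dft(x)
    h = N // 2
    Xp = X[:h] + [0.0] * (m - 1) * N + X[h:]
    return [m * v.real for v in idft(Xp)]

# A tone sampled well above its Nyquist rate:
N, fs = 16, 16.0
x = [cmath.cos(2 * cmath.pi * 2.0 * n / fs).real for n in range(N)]
y = interpolate(x, 4)   # 64 samples on a 4x finer grid
# y[4*n] reproduces x[n]; intermediate points interpolate the cosine.
```

Decimation by q is then literally `y[::q]`, giving the rational rate change m/q discussed above.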


REFERENCES

Agarwal, R. C. and C. S. Burrus (1974). "Fast one-dimensional digital convolution by
multidimensional techniques," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-22(1),
pp. 1-10.
Bergland, G. D. (1968a). "A fast Fourier transform algorithm using base 8 iterations,"
Math. Computation, 22, pp. 275-279.
Bergland, G. D. (1968b). "A fast Fourier transform algorithm for real-valued series,"
Comm. ACM, 11(10), pp. 703-710.
Blahut, R. E. (1985). Fast Algorithms for Digital Signal Processing, Addison-Wesley,
Reading, MA.
Brigham, E. O. (1974). The Fast Fourier Transform, Prentice-Hall, Englewood Cliffs, NJ.
Burrus, C. S. and P. W. Eschenbacher (1981). "An in-place, in-order prime factor FFT
algorithm," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-29(4), pp. 806-817.
Chu, S. and C. S. Burrus (1982). "A prime factor FFT algorithm using distributed
arithmetic," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-30(2), pp. 217-227.
Cooley, J. W. and J. W. Tukey (1965). "An algorithm for the machine calculation of
complex Fourier series," Math. Computation, 19(90), pp. 297-301.
Crochiere, R. E. and L. R. Rabiner (1981). "Interpolation and decimation of digital
signals - A tutorial review," Proc. IEEE, 69(3), pp. 300-331.
Elliott, D. F. and K. R. Rao (1982). Fast Transforms: Algorithms, Analyses, Applications,
Academic Press, New York.
Gentleman, W. M. and G. Sande (1966). "Fast Fourier transforms - for fun and profit,"
AFIPS Fall Joint Computer Conf., San Francisco, November 1966, Spartan Books,
Washington, DC, pp. 563-578.
Johnson, H. W. and C. S. Burrus (1982). "The design of optimal DFT algorithms using
dynamic programming," Proc. IEEE Inter. Conf. Acoust., Speech, and Sig. Proc., Paris
(May), pp. 20-23.
Oppenheim, A. V. and R. W. Schafer (1975). Digital Signal Processing, Prentice-Hall,
Englewood Cliffs, NJ.
Papoulis, A. (1966). "Error analysis in sampling theory," Proc. IEEE, 54(7), pp. 947-955.
Pease, M. C. (1968). "An adaptation of the fast Fourier transform for parallel processing,"
J. ACM, 15, pp. 252-264.
Rader, C. M. (1972). "Discrete convolutions via Mersenne transforms," IEEE Trans.
Computers, C-21, pp. 1269-1273.
Shannon, C. E. (1949). "Communication in the presence of noise," Proc. IRE, 37(1),
pp. 10-21.
Silverman, H. F. (1977). "An introduction to programming the Winograd Fourier
transform algorithm (WFTA)," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-25(2),
pp. 152-165.
Temperton, C. (1979). "Fast Fourier transforms and Poisson solvers on CRAY-1," pp.
361-379 in: Super-Computers, Vol. 2, Infotech International.
Zohar, S. (1979). "A prescription of Winograd's discrete Fourier transform algorithm,"
IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-27(4), pp. 409-421.

APPENDIX B

SATELLITE ORBITS AND
COMPRESSION FILTER
PARAMETERS

The essence of SAR, and the root of its dramatic resolution properties, lies in
the possibility of carrying out compression processing of the Doppler shifted
carrier signal in the azimuth coordinate as the vehicle flies by the target
(Section 1.2.2). For an isolated point target, the waveform of that Doppler
signal, which is the slow (azimuth) time variation of the phase of the output
of the range compression filter, is (Eqn. (4.1.24)):

    g(s) = exp[−j4πR(s)/λ]    (B.0.1)

where R(s) is range from radar to target and s is slow time. This signal is
intrinsically sampled by the pulsed nature of the radar, with a sampling frequency
f_p which is just the radar PRF. The waveform of this point target response is
needed in order to construct the compression filter. The detailed nature of the
range function R(s) is therefore crucial to the construction of a SAR processor,
and it is to that behavior that this Appendix is devoted.
The range function R(s) for practical geometries is a complicated expression
involving many parameters of the relative motion of satellite and target, the
latter being carried along by the rotating earth, and possibly having some
motion of its own relative to the earth surface. However, because of the limited
beamwidth of the radar antenna pattern, a specific point target creates radar
signal only during a limited span of slow time. That time span, the integration
time of the SAR, is usually small enough that a Taylor series expansion of R(s)
about a nominal center time, say s_c, can be terminated after the first few terms
to yield an adequately accurate approximation to the full function R(s).
Typically, only terms through the quadratic in slow time need to be retained,

in which case the azimuth function Eqn. (B.0.1) is a linear FM signal in the
Doppler frequency domain. Therefore, in this Appendix we will seek expressions
for R(s) and its first few derivatives evaluated at the time s_c at which the target
in question is in the center of the radar beam. Those lead to the parameters
needed in the azimuth compression filter of a SAR processor.
We will derive three different forms of expression for these derivatives of
R(s) evaluated at beam center, arranged to accord with three different situations
in which one wants to calculate them. First, we will need versions of the
parameter computations which can use the accurate values of satellite position
and velocity obtained by observing the trajectory of the vehicle. Second, for
prediction of the azimuth filter parameters during system design, as well as for
their computation in the case that the satellite orbit and orientation are known
rather precisely, it is useful to have accurate formulas in terms of these quantities.
Third, we need analytical models upon which to base the data fitting procedures
involved in clutterlock and autofocus (Chapter 5).

B.1  PARAMETERS IN TERMS OF SATELLITE TRACK AND
TARGET POSITION

As shown in Fig. B.1, consider a coordinate system with origin at the center
of the earth. Let the satellite position as a function of slow time s be the vector
R_s(s), and let an isolated point target be at position R_t(s). The range vector is

    R(s) = R_s(s) − R_t(s)    (B.1.1)

so that we are interested in the function

    R(s) = |R_s(s) − R_t(s)|    (B.1.2)

the scalar slant range. It is convenient to expand R(s) as a Taylor series about
some time s_c, which will be the time of passage of the nominal beam center
across the target, but which we can take as arbitrary for the time being. Then
we can write

    R(s) = R_c + Ṙ_c(s − s_c) + R̈_c(s − s_c)²/2 + ...    (B.1.3)

and seek the various derivatives of R(s) evaluated at the special time of beam
center on the target, rather than seeking the analytical form of R(s) directly.

Figure B.1  Satellite and target positions in inertial system fixed at earth center.

Slant Range Derivatives Given Platform Trajectory

From Eqn. (B.1.2), suppressing henceforth the explicit appearance of the slow
time variable s:

    R² = (R_s − R_t)·(R_s − R_t)    (B.1.4)

Differentiating both sides of Eqn. (B.1.4), we have

    RṘ = (R_s − R_t)·(V_s − V_t)    (B.1.5)

For generality, suppose that the target moves with respect to the surface of the
rotating earth. Let r be the target position, with coordinates taken relative to
a set of axes fixed on the rotating earth's surface. That is,

    r = x_t i_t + y_t j_t + z_t k_t

where x_t, y_t, z_t are measured by an observer on the earth relative to coordinate
axes i_t, j_t, k_t, which move with the rotating earth. Let R_t0 be the origin of that
system. Then:

    R_t = R_t0 + r    (B.1.6)

    V_t = Ṙ_t = Ṙ_t0 + ṙ    (B.1.7)

Assuming the earth to have rotational symmetry about its axis,

    Ṙ_t0 = ω_e × R_t0    (B.1.8)

where ω_e is the (constant) earth's angular velocity. Further (e.g. Hay, 1953, p. 80),

    ṙ = ω_e × r + v_t    (B.1.9)

where by v_t we mean the target velocity as seen by an observer fixed on the
earth's surface:

    v_t = ẋ_t i_t + ẏ_t j_t + ż_t k_t    (B.1.10)

From Eqn. (B.1.7), with Eqns. (B.1.8), (B.1.9), and (B.1.10) taken into account,
we obtain

    V_t = ω_e × R_t0 + ω_e × r + v_t = ω_e × R_t + v_t    (B.1.11)

using also Eqn. (B.1.6). Then

    (B.1.12)

and

    (B.1.13)

Using Eqn. (B.1.12) and Eqn. (B.1.13) in Eqn. (B.1.5) yields

    RṘ = V_s·(R_s − R_t) + ω_e·(R_s × R_t) − (R_s − R_t)·v_t    *(B.1.14)

This expresses the first derivative Ṙ of slant range in terms of the target position
and velocity on the earth and the satellite position and velocity in its orbit.
Differentiating Eqn. (B.1.14), we have

    RR̈ + Ṙ² = A_s·(R_s − R_t) + V_s·(V_s − V_t)
        + ω_e·(R_s × V_t + V_s × R_t) − (V_s − V_t)·v_t
        − (R_s − R_t)·(a_t + ω_e × v_t)    (B.1.15)

in which a_t is the acceleration of the target relative to the earth's surface.
Using Eqn. (B.1.11) and standard identities, Eqn. (B.1.15) becomes

    RR̈ + Ṙ² = A_s·(R_s − R_t) + |V_s|² + 2ω_e·(V_s × R_t)
        + (ω_e × R_t)·(ω_e × R_s) − 2V_s·v_t + 2ω_e·(R_s × v_t)
        + |v_t|² − (R_s − R_t)·a_t    *(B.1.16)

Considering the Doppler signal Eqn. (B.0.1), with phase

    φ = −4πR(s)/λ

we have the general Doppler frequency expressions

    f_D(s) = φ̇/2π = −2Ṙ(s)/λ

    ḟ_D(s) = −2R̈(s)/λ

In particular,

    f_DC = −2Ṙ_c/λ    (B.1.17)

    f_R = −2R̈_c/λ    (B.1.18)

corresponding to the range function expansion in Eqn. (B.1.3), are the Doppler
center frequency and Doppler rate for use in the SAR processor.
The expressions Eqn. (B.1.14), Eqn. (B.1.16) indicate explicitly how target
motion relative to the earth surface affects the parameters f_DC and f_R through
changes in Ṙ(s) and R̈(s). The expressions form the basis for assessment of
defocusing caused by uncompensated target motion, in conjunction with depth
of focus considerations (Section 4.1.3). Henceforth, however, we will assume the
target is a terrain point fixed on the earth surface, and take

    a_t = v_t = 0

Satellite Acceleration Given Earth Potential Function

To use the expressions Eqn. (B.1.14) and Eqn. (B.1.16) to obtain the Doppler
parameters f_DC and f_R, we need to know the motion of the satellite. The tracking
system and the orbit smoothing processor will normally provide the satellite
position and velocity R_s, V_s as functions of slow time s (vehicle flight time),
although some interpolation may be needed between the times at which these
quantities are made available. However, the higher derivatives of R_s must
usually be calculated if they are needed.
If we assume a uniform spherical earth, the calculation is easy, since then
the force field in which the satellite finds itself is a central field (neglecting the
influences of the sun, moon, etc.). Then (whatever the form of the orbit) Newton's
law yields the equation of motion:

    A_s = −(μ/R_s³)R_s    (B.1.19)

where R_s = |R_s| and μ = 3.986 × 10¹⁴ m³/s² is the product of the gravitational
constant and the mass of the earth. Differentiating Eqn. (B.1.19) yields

    Ȧ_s = −(μ/R_s³)(V_s − 3Ṙ_s R_s/R_s)

and so forth for the higher derivatives, where

    Ṙ_s = V_s·R_s/R_s

In the case of a noncentral force field, due to a nonspherical and/or nonuniform
earth, it is convenient to introduce the gravitational potential function U(p)
(Haymes, 1971, p. 42). This is a scalar function of position p such that the force
per unit mass on the satellite is

    F(p) = −[∇U(p)]|_{p = p(s)}

Then Newton's law is

    A_s = −∇U[R_s(s)]    (B.1.20)

It is usual and convenient to express the function U(p) as a series in powers
of 1/p with coefficients that are indirectly measured by inferring them from the
observed orbits of satellites. For a uniform earth with rotational symmetry,
the potential at the satellite location R_s, correct to the indicated order, is
(Haymes, 1971, p. 45):

    U = (μ/R_s)[1 + (B₂/R_s²)(−0.5 + 1.5 sin²(ζ_s))]    (B.1.21)

where ζ_s is the latitude of the satellite on a sphere with center at the earth
center. The coefficient is B₂ = −4.405 × 10¹⁰ m² (El'yasberg, 1967, p. 199).
With the form of the potential Eqn. (B.1.21), and taking a coordinate system
(Fig. B.1) such that

    sin(ζ_s) = z/R_s

we have Eqn. (B.1.20) as

    A_s = −(μ/R_s³){[1 + (1.5B₂/R_s²)(1 − 5 sin²(ζ_s))]R_s + (2B₂z/R_s²)k}    (B.1.22)

This shows both the nonuniformity and noncentrality of the force field, through
the terms with ζ_s and k, respectively. Higher order terms in the potential function
Eqn. (B.1.21) are available, and could be used for more accurate calculation of
A_s, or for calculation of Ȧ_s in a third order expansion of the slant range function
R(s).
The expansion Eqn. (B.1.22), when used in Eqn. (B.1.16), together with
Eqn. (B.1.14), yields the parameters f_DC and f_R. For example, if we assume a
central force field, B₂ = 0, and a target fixed on the earth, we obtain

    f_DC = (−2/λR)[V_s·(R_s − R_t) + ω_e·(R_s × R_t)]    *(B.1.23)

    f_R = (−2/λR)[−(μ/R_s³)R_s·(R_s − R_t) + V_s·V_s − Ṙ²
        + (ω_e × R_t)·(ω_e × R_s) + 2ω_e·(V_s × R_t)]    *(B.1.24)

All quantities here are evaluated at the time s_c of passage of the terrain point
of interest through the radar beam center.
The expression Eqn. (B.1.23) exhibits terms due to satellite motion (perceived
due to squint and orbit eccentricity) and earth rotation. Both of these are
generally significant. In expression (B.1.24), however, for rough calculations it
may be adequate to use the approximation

    f_R = −2V_st²/λR    (B.1.25)

where V_st is the speed of the satellite relative to the target point. Thus,

    V_st² = |V_s − ω_e × R_t|²    *(B.1.26)

The expression Eqn. (B.1.25) differs from Eqn. (B.1.24) only in the small
centripetal acceleration and Ṙ² terms, and in the small term (ω_e × R_t)·(ω_e × R_s).
More accurately, Eqn. (B.1.24) defines a speed parameter V for use in
Eqn. (B.1.25) in place of V_st. The matter is discussed in more detail in Section B.4.
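A numerical sketch of these expressions follows; the geometry (a satellite directly over an equatorial target, with Seasat-like wavelength and speed) is invented for illustration, and the function names are ours. It evaluates the Doppler centroid from the first bracketed expression above and the Doppler rate from the relative-speed approximation.

```python
import math

# Beam-center Doppler parameters from the vector expressions above:
# f_DC from Eqn. (B.1.23), f_R from the approximation (B.1.25)-(B.1.26).
# Illustrative geometry only; all vectors in an earth-centered frame.

WE = 7.292115e-5    # earth rotation rate, rad/s
LAM = 0.235         # wavelength, m (Seasat-like L-band)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def doppler_params(Rs, Vs, Rt):
    we = (0.0, 0.0, WE)
    d = tuple(s - t for s, t in zip(Rs, Rt))
    R = math.sqrt(dot(d, d))
    # f_DC = (-2/lambda R)[Vs.(Rs - Rt) + we.(Rs x Rt)]
    fdc = (-2.0 / (LAM * R)) * (dot(Vs, d) + dot(we, cross(Rs, Rt)))
    # f_R ~ -2|Vs - we x Rt|^2 / (lambda R)
    vrel = tuple(v - w for v, w in zip(Vs, cross(we, Rt)))
    fr = -2.0 * dot(vrel, vrel) / (LAM * R)
    return fdc, fr

# Target directly below the satellite: zero squint, hence f_DC = 0, and a
# Doppler rate of a few hundred Hz/s for this geometry.
fdc, fr = doppler_params((7.1e6, 0, 0), (0, 7.5e3, 0), (6.378e6, 0, 0))
print(fdc, fr)
```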
Coordinate System

In order to carry out numerical calculations, it is necessary to introduce some


coordinate system in which to express the various vector quantities in expressions
such as Eqn. (B.1.23) and Eqn. (B.1.24). The usual system is the equatorial
(inertial) coordinate system (Haymes, 1971, p. 1). This is a right-handed


rectangular system with the z axis the axis of rotation of the earth, positive
towards the north pole (Fig. B.1 ). The positive x-axis points in a fixed direction
in inertial space, the direction of the vernal equinox, also called the first point
in Aries, and denoted symbolically as Y, the sign of the ram. The earth rotates
on its axis in this fixed coordinate system. The origin of the system can be
regarded as fixed at the center of the earth, so that the system itself moves with
fixed orientation around the sun as the earth travels in its orbit. Since throughout
we neglect the influence of the sun on the satellite, that this coordinate system
moves around the sun is of no concern.
The vernal equinox in this context is a specific direction, rather than a time.
The xy-plane of the equatorial coordinate system of Fig. B.1 is the plane of the
earth's equator. The z-axis, the axis of rotation of the earth, is tilted at nominally
23.5° with respect to the plane of the earth's orbit around the sun. As a result,
for nominally six months of each year the sun lies below the xy-plane, while
for the other six months it is above. At one precise instant each year, to an
observer at the earth's center the center of the sun would appear to pass through
the xy-plane headed into the positive-z hemisphere. That instant, nominally
some time on March 21, is the time of the vernal equinox, and the direction
from earth center to sun center at that instant is called the vernal equinox.
A slight complication arises in use of the vernal equinox as a coordinate
direction. Because of a variety of perturbing effects, the earth's axis of rotation
moves with respect to the plane of the earth's orbit. There is a mean circular
movement (precession) around the cone with central half angle 23.5°, with a
period of 25800 years, and a small wobbling (nutation) about that mean, with
a period of 18.6 years. As a result, the direction of the vernal equinox moves,
and it is necessary to specify a date to which the equatorial coordinate system
in question relates. Until 1984, that was taken as the beginning of 1950, precisely
defined. Since 1984, the year 2000 is the convention. The vernal equinox actually
occurred at the first point (horn) of Aries (the ram) about 2000 years ago, so
that the current equatorial system has an x-axis rotated about (2000/25800) ×
360° ≈ 28° away from that star.
Before considering detailed formulas for target position R, in Eqn. (B.1.14)
and Eqn. (B.1.15), we will describe determination of the satellite vectors from
orbital elements.

B.2  TRAJECTORY PARAMETERS IN TERMS OF SATELLITE ORBIT

A satellite which finds itself in a central force field, Eqn. (B.1.19), will move in
a planar orbit which is one of the conic sections (Haymes, 1971, p. 41). If the
satellite is to be of use for remote sensing, it must be in an orbit which is
nominally elliptical. The elliptical orbit is further often arranged to be a near
circle. Were the earth to be a uniform sphere, a satellite would move in a strict
elliptical orbit, with the center of the earth at one of the foci of the ellipse.
(Note that thereby the origin of the equatorial coordinate system is at a focus
of the orbit ellipse, rather than at its center.) Such an orbit can be described by its
"orbital elements", these being constants of the ellipse and of its orientation
relative to the equatorial coordinate system (Fig. B.2). They are (Haymes, 1971,
p. 498):

a, the semi-major axis of the ellipse;

e, the eccentricity of the ellipse;

α_i, the inclination of the ellipse (the angle between the plane of the ellipse
and the xy-coordinate plane);

Ω, the longitude of the ascending node (the azimuthal angle of the point at
which the orbit cuts the xy-plane in passage of the satellite from the lower
hemisphere to the upper, that point being the "ascending node");

ω, the argument of perigee (the angle ("argument") along the orbit plane,
taken positive in the direction of satellite travel, from ascending node to
the point of closest approach of the satellite to the earth center
("perigee"), that point being on the major axis of the ellipse);

P, the sidereal period (the time required for one transit of the ellipse by the
satellite) - this is not an independent parameter of the orbit;

T, the time of perigee passage (the absolute time at which the satellite passed
through the point of perigee during the orbit in question).

Figure B.2  Definition of elements of satellite orbit.


Nominal Satellite Orbit for Given Orbital Elements

It is useful to have available the equations relating these orbital elements to
the quantities of interest in computing the filter parameters, namely R_s and V_s.
The orbital elements are constants of integration, or transformations thereof,
which arise in integrating the equation of motion of a satellite in a central force
field, Eqn. (B.1.19). We will follow Haymes (1971). El'yasberg (1967, esp.
Chapters 4, 5) gives a more detailed treatment.
Again using the inertial system centered on the (assumed) uniform spherical
earth (Fig. B.1), the equation of motion of the satellite is

    A_s = −(μ/R_s³)R_s    (B.2.1)

where μ = 3.986 × 10¹⁴ m³/s². From Eqn. (B.2.1),
    R_s × A_s = 0 = d(R_s × V_s)/dt

so that

    R_s × V_s = const    (B.2.2)

Equation (B.2.2), the first integral of Eqn. (B.2.1), is the "areal integral", and
indicates that a vector normal to the plane of R_s(s) and V_s(s) is a constant in
time. Hence R_s, V_s evolve in a plane, and the orbit is planar.
Since the orbit is planar, we can confine attention to that plane and introduce
the polar coordinates shown in Fig. B.3. Then
    R_s = R_s u_r    (B.2.3)

    V_s = Ṙ_s u_r + R_s u̇_r = Ṙ_s u_r + R_s α̇ u_t    (B.2.4)

    A_s = (R̈_s − R_s α̇²)u_r + (2Ṙ_s α̇ + R_s α̈)u_t = −(μ/R_s²)u_r    (B.2.5)

using Eqn. (B.2.1) and the definition of α indicated in Fig. B.3.

Figure B.3  Orbit plane for central force field.

Substituting Eqn. (B.2.3) and Eqn. (B.2.4) into Eqn. (B.2.2) yields the areal
integral as

    R_s²α̇ = κ    (B.2.6)

so defining κ. This is Kepler's second law, that the motion in the orbit plane
is such that the vector R_s sweeps out area at a constant rate (hence the term
"areal" integral).
Equating coefficients of u_r in Eqn. (B.2.5), we have

    R̈_s − R_s α̇² = −μ/R_s²    (B.2.7)

To find R_s as a function of α, which will turn out to be the equation of an
ellipse, we use Eqn. (B.2.6) to eliminate time s from Eqn. (B.2.7). Introducing
the transformation R_s = 1/u and using Eqn. (B.2.6) and Eqn. (B.2.7), there results

    −κ²u²(d²u/dα²) − κ²u³ = −μu²    (B.2.8)

Thus

    d²u/dα² + u = μ/κ²

with solution

    u = A cos(α − ω) + μ/κ²

where A and ω are constants of integration.
Transforming from u back to R_s = 1/u, and defining

    e = Aκ²/μ    (B.2.9)

yields

    R_s = (e/A)[1 + e cos(α − ω)]⁻¹    (B.2.10)

Therefore R_s has minimum and maximum values (the values at perigee and
apogee):

    (R_s)min = e/A(1 + e)
    (R_s)max = e/A(1 − e)    (B.2.11)

Then defining

    (R_s)min + (R_s)max = 2a = 2e/A(1 − e²)    (B.2.12)

there results

    R_s = a(1 − e²)/[1 + e cos(α − ω)]    *(B.2.13)

Equation (B.2.13) is the equation of an ellipse, provided that e < 1, with
semi-major axis a. (This result is Kepler's first law.) The values Eqn. (B.2.11)
become

    (R_s)min = a(1 − e)
    (R_s)max = a(1 + e)

The ellipse Eqn. (B.2.13) in the orbit plane is described by three of the orbital
elements - the semi-major axis a, the eccentricity e, and the phase angle ω
(the argument of perigee). Two other elements, the longitude Ω of the ascending
node and the inclination α_i of the orbit, relate the orbital plane to the equatorial
coordinate system. The sidereal period P and the time T of passage of the
satellite through perigee serve to locate the satellite in its orbit for any given
time s.
It is convenient to introduce a number of angles with regard to the movement
of a satellite around its orbit (Fig. B.3). The angle

    f = α − ω

which appears in Eqn. (B.2.13) is the "true anomaly". The angle E (Fig. B.3)
is the "eccentric anomaly". This is the central angle, measured from perigee, of
the point on the circumscribed circle where a line through the satellite parallel
to the minor axis intersects the circle. Finally, the "mean anomaly" is defined as

    M = (2π/P)(s − T)

This is the angle of the satellite, measured from perigee, if the motion were
circular with period P, the sidereal period.

Satellite Coordinates for Nominal Orbit

If we want to calculate the position R_s of the satellite at some specified time s,
expressed through the mean anomaly, we need to work our way from the mean
anomaly M, back through the eccentric anomaly E, to the true anomaly f.
Together with R_s, which follows immediately from f using Eqn. (B.2.13), this
locates the satellite.
First, it is straightforward to show (Haymes, 1971, p. 33) that

    P = 2πa^{3/2}/μ^{1/2}

which is Kepler's third law, so that P is not an independent element of the
orbit. From the time s, and known time T, this yields the mean anomaly M.
A moderate amount of geometric consideration of Fig. B.3 (El'yasberg, 1967,
p. 60) leads to Kepler's equation:

    E − e sin(E) = M

This must be solved numerically for E given M. (Since, in cases of interest to
us, e is very small, Newton's method converges in a few steps.) More geometry
leads to

    tan(f/2) = [(1 + e)/(1 − e)]^{1/2} tan(E/2)

from which follow f and then R_s.
Simple transformations now yield the inertial coordinates of the satellite and
of its velocity, i.e., the vectors R_s and V_s. From Fig. B.2, rotating the geocentric
inertial system through the angle Ω corresponds to the transformation

    i′ =  cos(Ω)i + sin(Ω)j
    j′ = −sin(Ω)i + cos(Ω)j
    k′ = k

Another rotation through the inclination angle α_i yields

    i″ = i′
    j″ =  cos(α_i)j′ + sin(α_i)k′
    k″ = −sin(α_i)j′ + cos(α_i)k′

Finally (Fig. B.3),

    u_r =  cos(α)i″ + sin(α)j″
    u_t = −sin(α)i″ + cos(α)j″
    u_p = k″
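The anomaly chain M → E → f → R_s just described can be sketched as follows; the orbit values are Seasat-like and the helper names are ours, not the book's.

```python
import math

# Mean anomaly M -> eccentric anomaly E (Kepler's equation, Newton's
# method) -> true anomaly f -> orbit radius Rs via Eqn. (B.2.13).

MU = 3.986e14    # m^3/s^2

def kepler_E(M, e, tol=1e-12):
    E = M    # a good starting guess when e is small
    while True:
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

def radius_at(M, a, e):
    E = kepler_E(M, e)
    # Half-angle form of tan(f/2) = sqrt((1+e)/(1-e)) tan(E/2):
    f = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                         math.sqrt(1 - e) * math.cos(E / 2))
    return a * (1 - e * e) / (1 + e * math.cos(f))   # Eqn. (B.2.13)

a, e = 7161.39e3, 1.86e-3
print(radius_at(0.0, a, e))       # perigee: a*(1 - e)
print(radius_at(math.pi, a, e))   # apogee:  a*(1 + e)
```

For the near-circular orbits of interest here, Newton's iteration typically converges in two or three steps, as the text remarks.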

Cascading these yields

    u_r = i[cos(α) cos(Ω) − sin(α) cos(α_i) sin(Ω)]
        + j[cos(α) sin(Ω) + sin(α) cos(α_i) cos(Ω)]
        + k[sin(α) sin(α_i)]

    u_t = i[−sin(α) cos(Ω) − cos(α) cos(α_i) sin(Ω)]
        + j[−sin(α) sin(Ω) + cos(α) cos(α_i) cos(Ω)]
        + k[cos(α) sin(α_i)]

    u_p = i[sin(α_i) sin(Ω)] − j[sin(α_i) cos(Ω)] + k cos(α_i)    (B.2.14)

Since R_s = R_s u_r, Eqn. (B.2.14) yields

    x_s = R_s[cos(Ω) cos(α) − sin(Ω) cos(α_i) sin(α)]

    y_s = R_s[sin(Ω) cos(α) + cos(Ω) cos(α_i) sin(α)]

    z_s = R_s sin(α_i) sin(α)    (B.2.15)

Let us now consider the satellite velocity Eqn. (B.2.4). From Eqn. (B.2.13)
we have

    Ṙ_s = [R_s²e sin(α − ω)/a(1 − e²)]α̇    *(B.2.16)

From Eqn. (B.2.6) we have

    α̇ = κ/R_s²    *(B.2.17)

while from Eqn. (B.2.11) and Eqn. (B.2.12) we have

    (R_s)min + (R_s)max = 2a = 2e/A(1 − e²)

Then Eqn. (B.2.9) yields

    κ² = μe/A = μa(1 − e²)    (B.2.18)

Using Eqn. (B.2.13), Eqn. (B.2.17), and Eqn. (B.2.18) in Eqn. (B.2.16), we have
Eqn. (B.2.4) as

    V_s = [μ/a(1 − e²)]^{1/2}{e sin(f)u_r + [1 + e cos(f)]u_t}    *(B.2.18a)

where we introduce the radial and tangential speeds. Writing the vectors u_r, u_t
in terms of the inertial system, using Eqn. (B.2.14), then yields the components
of V_s in the equatorial system.
Higher derivatives of R_s could be found in the same general fashion, but we
will refrain from doing that, except to note that

    R̈_s = (μ/R_s²)e cos(α − ω)

    α̈ = −(2μ/R_s³)e sin(α − ω)    *(B.2.19)

Perturbations of the Nominal Orbit

All of the above has assumed a central force field, Eqn. (B.2.1), leading to the
orbit being strictly an ellipse in a plane. However, since the earth is an oblate
spheroid, bulging at the equator, with somewhat of a pear shape (larger below
the equator), and with even higher order nonsphericities, the force field acting
on the satellite is not central. The result is that the satellite orbit is not a simple
ellipse in an invariant plane. The analytical treatment of the changes in the
elliptical orbit due to noncentrality of the force field is straightforward, but of
some complexity. A detailed treatment is given by El'yasberg (1967, Chapter
13), taking into account the first term of the earth's potential function,
Eqn. (B.1.21), past the simple inverse square force behavior (that involving the
coefficient B₂).
The perturbing effects of higher order terms in the earth's potential function
are most conveniently expressed as perturbations of the nominal elliptical orbit.
To first order, two of the orbital elements increase or decrease monotonically
with time ("secular perturbations"), the argument of perigee ω and the ascending
node Ω. Three of the orbital elements remain constant, again to first order: the
length a of the semi-major axis, the eccentricity e, and the inclination α_i. Over
the course of one revolution of the satellite in its orbit, the accumulated changes
in the perturbed elements are (El'yasberg, 1967, p. 212):

    δω = (πε/μp²)[5 cos²(α_i) − 1]

    δΩ = −(2πε/μp²) cos(α_i)    (B.2.20)

where

    p = a(1 − e²)

and ε is a constant of the gravitational field:

    ε = −1.5μB₂ = 2.634 × 10²⁵ m⁵/s²    (B.2.21)

SATELLITE ORBITS AND COMPRESSION FILTER PARAMETERS

These perturbations Eqn. (B.2.20) may also be expressed as average rates of change over a period of the orbit:

aver(ω̇) = δω/P
aver(Ω̇) = δΩ/P                                                        (B.2.22)

where the sidereal period can be expressed as

P = 2π a^{3/2}/ε^{1/2}                                                (B.2.23)

As an example, for Seasat with a = 7161.39 km, e = 1.86 × 10⁻³, and α_i = 108.02°, from Eqn. (B.2.23) we have a period P = 100.5 min. Then Eqn. (B.2.20), Eqn. (B.2.21), and Eqn. (B.2.22) yield

δω = −2.11 × 10⁻³ rad          aver(ω̇) = −1.73 deg/day
δΩ = 2.50 × 10⁻³ rad           aver(Ω̇) = 2.05 deg/day

In addition to the secular changes in Ω (precession of the orbit) and in ω, there are small (second order) periodic variations in a, e, and α_i, with period long with respect to the time of a single orbit, and first order periodic variations in the period P. The latter are considered in detail by El'yasberg (1967, Chapter 13).

The equations we have just developed yield the satellite position and velocity vectors in terms of the classical parameters of the orbit. It yet remains to consider the influence of the target position and velocity on the slant range function R(s), in order to determine the parameters f_DC and f_R of the compression filter. We will do that in the next section.
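The secular rates quoted above are easy to reproduce. The sketch below evaluates Eqns. (B.2.20)-(B.2.23) for the Seasat elements; it additionally divides the perturbation terms by the earth's gravitational constant μ ≈ 3.986 × 10¹⁴ m³/s² to make them dimensionless (an assumption on our part, since not every intermediate symbol of the derivation is reproduced here).

```python
import math

# Seasat orbital elements (from the example above)
a = 7161.39e3                     # semi-major axis, m
e = 1.86e-3                       # eccentricity
alpha_i = math.radians(108.02)    # inclination

eps = 2.634e25                    # -1.5*B2, m^5/s^2 (Eqn. B.2.21)
mu = 3.986005e14                  # GM of earth, m^3/s^2 (assumed normalizing constant)

p = a * (1 - e**2)                # semi-latus rectum

# Per-revolution secular changes (Eqn. B.2.20, normalized by mu)
k = math.pi * eps / (mu * p**2)
d_omega = k * (5 * math.cos(alpha_i)**2 - 1)   # argument of perigee, rad/rev
d_Omega = -2 * k * math.cos(alpha_i)           # ascending node, rad/rev

# Sidereal period (Eqn. B.2.23) and average rates (Eqn. B.2.22)
P = 2 * math.pi * math.sqrt(a**3 / mu)         # seconds
omega_rate = math.degrees(d_omega / P) * 86400  # deg/day
Omega_rate = math.degrees(d_Omega / P) * 86400  # deg/day

print(f"d_omega = {d_omega:.3e} rad/rev, aver = {omega_rate:.2f} deg/day")
print(f"d_Omega = {d_Omega:.3e} rad/rev, aver = {Omega_rate:.2f} deg/day")
```

The printed values reproduce the book's −2.11 × 10⁻³ rad and 2.50 × 10⁻³ rad per revolution, and the daily rates of −1.73 and 2.05 deg/day.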

B.3 COMPRESSION PARAMETERS IN TERMS OF SATELLITE ATTITUDE

In Section B.2 we expressed the satellite inertial position R_s and velocity V_s in terms of the orbital element parameters of the satellite motion. By introducing some additional parameters to describe the orientation of the line from satellite to target with respect to the satellite position, it is possible to complete the description of the compression filter parameters needed for any particular point on the surface of the earth. In doing this we follow the approach of Raney (1986, 1987), but in the generality of Barber (1985).

Slant Range Rate In Terms of Beam Angles

In Fig. B.1, we show an equatorial coordinate system with the satellite at an instantaneous position R_s in the plane of its orbit. The orbit inclination angle is α_i, and the satellite at the instant considered has climbed an angle α in its orbit and has local heading angle ν east of north, latitude ζ_s, and longitude χ_s. Figure B.4 shows the local situation around the satellite. The target point R_t is on the earth surface, so that the plane shown, through R_t and normal to the satellite position vector R_s, cuts R_s somewhat below the surface. The angle ψ measured in that plane is taken relative to the plane determined by R_s and V_s. The local heading ν is the angle from the meridional plane to the latter. The angles γ and θ are defined as shown in Fig. B.4. The satellite motion is given by Eqn. (B.2.3), Eqn. (B.2.4), and Eqn. (B.2.5).

As shown in Fig. B.4, the slant range vector is

R = R_s − R_t                                                         (B.3.1)

We will use this to eliminate R_t from the formulations of Section B.1 in favor of R, which latter can be expressed in terms of the pointing parameters R, γ, and ψ.

Figure B.4 Detail from Fig. B.1 (the point P is below the earth surface).

We have

R² = R·R                                                              (B.3.2)

Since we continue to assume that the target point is fixed to the rotating earth, R_t is the local radius at the target point, and is constant. On the other hand,

Ṙ_t = ω_e × R_t                                                       (B.3.3)

where

ω_e = ω_e u_z                                                         (B.3.4)

is the (constant) earth angular velocity, directed along the polar axis. Then, differentiating Eqn. (B.3.2), and using Eqn. (B.3.1) and Eqn. (B.3.3), we obtain

RṘ = V_s·R − ω_e·(R_s × R)                                            (B.3.5)

From Fig. B.4, we have

R = R cos(γ)u_r + R sin(γ)[−cos(ψ)u_l + sin(ψ)u_p]

Using this with R_s and V_s from Eqn. (B.2.3) and Eqn. (B.2.4), and with u_r, u_l, and u_p expressed in terms of their rectangular components through Eqn. (B.2.14), Eqn. (B.3.5) becomes

Ṙ = Ṙ_s cos(γ) − R_s ω_s sin(γ) cos(ψ)
    + ω_e R_s sin(γ)[sin(ψ) cos(α) sin(α_i) + cos(ψ) cos(α_i)]         (B.3.6)

where we write

ω_s = α̇

for the instantaneous satellite angular velocity.

Equation (B.3.6) yields the range rate Ṙ, and thus, from f_D = −2Ṙ(s)/λ, the instantaneous Doppler frequency f_D, for arbitrary pointing angles γ, ψ from satellite to target point. If we have particular reference to the compression filter parameter f_DC, however, we are interested in pointing along the center of the radar beam. In that case, for an exactly side-looking radar, we have ψ = π/2 (looking to the right of track), or ψ = −π/2 (looking left). In operation, however, slight yaw and pitch of the satellite about its nominal forward path lead to an angle ψ which differs from π/2 by a small amount, typically less than a degree, which is, however, of considerable significance. It is therefore convenient to introduce the squint angle θ_q of beam center, defined in every case by

θ_q = π/2 − ψ                                                         (B.3.7)

(With this definition, for a right looking radar with forward squint of, say, 1°, θ_q = 1°, while for a left looking radar squinted 1° forward we have θ_q = 179°.) From Fig. B.4,

R sin(γ) = R_t sin(θ)
R cos(γ) = R_s − R_t cos(θ)                                           (B.3.8)

From the upper spherical triangle (1) in Fig. B.1:

sin(ν) cos(ζ_s) = cos(α_i)
0 = cos(α) sin(ζ_s) − sin(α) cos(ζ_s) cos(ν)                          (B.3.9)

while for the lower (2):

sin(ζ_s) = sin(α) sin(α_i)                                            (B.3.10)

Using Eqn. (B.3.10) in Eqn. (B.3.9) yields:

cos(ν) cos(ζ_s) = cos(α) sin(α_i)                                     (B.3.11)

which also holds for sin(α) = 0, as is seen directly from Fig. B.1.

Using Eqn. (B.3.7), Eqn. (B.3.8), Eqn. (B.3.10), and Eqn. (B.3.11) in Eqn. (B.3.6), we obtain:

RṘ = Ṙ_s[R_s − R_t cos(θ)]
     − ω_s R_s R_t sin(θ)[sin(θ_q) − (ω_e/ω_s) cos(ζ_s) cos(θ_q − ν)]   (B.3.12)

This form of Eqn. (B.3.6) is a result of Barber (1985), taking into account that we have defined ν with the opposite sign.

For the special case of a circular orbit, so that Ṙ_s = 0, Eqn. (B.3.6) becomes

Ṙ = −R_s ω_s sin(γ) cos(ψ){1 − (ω_e/ω_s)[tan(ψ) cos(α) sin(α_i) + cos(α_i)]}   (B.3.13)

This is a result given by Raney (1986).
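For a circular orbit, Eqn. (B.3.6) and Eqn. (B.3.13) are algebraically equivalent, and a short numerical check confirms this. The sketch below is illustrative only; the Seasat-like parameter values and the function names are our own assumptions, not taken from the text.

```python
import math

def rdot_general(Rs_dot, Rs, w_s, w_e, gamma, psi, alpha, alpha_i):
    """Slant range rate, Eqn. (B.3.6)."""
    return (Rs_dot * math.cos(gamma)
            - Rs * w_s * math.sin(gamma) * math.cos(psi)
            + w_e * Rs * math.sin(gamma)
              * (math.sin(psi) * math.cos(alpha) * math.sin(alpha_i)
                 + math.cos(psi) * math.cos(alpha_i)))

def rdot_circular(Rs, w_s, w_e, gamma, psi, alpha, alpha_i):
    """Slant range rate for a circular orbit, Eqn. (B.3.13) (Raney, 1986)."""
    return (-Rs * w_s * math.sin(gamma) * math.cos(psi)
            * (1 - (w_e / w_s) * (math.tan(psi) * math.cos(alpha) * math.sin(alpha_i)
                                  + math.cos(alpha_i))))

# Illustrative circular-orbit parameters (assumed)
Rs, w_s, w_e = 7163.09e3, 1.0414e-3, 7.29211e-5
gamma = math.radians(20.0)
psi = math.radians(89.0)          # right-looking, 1 deg forward squint
alpha, alpha_i = math.radians(-114.40), math.radians(108.02)

r1 = rdot_general(0.0, Rs, w_s, w_e, gamma, psi, alpha, alpha_i)
r2 = rdot_circular(Rs, w_s, w_e, gamma, psi, alpha, alpha_i)
print(r1, r2)   # the two forms agree
```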

Slant Range Acceleration In Terms of Beam Angles

Let us now investigate the other main Doppler parameter, f_R. Differentiating Eqn. (B.3.5) to introduce R̈, we have

RR̈ + Ṙ² = A_s·R + V_s·Ṙ − ω_e·(V_s × R + R_s × Ṙ)                      (B.3.14)

where A_s = V̇_s is the satellite acceleration and where, from Eqn. (B.3.1) and Eqn. (B.3.3),

Ṙ = V_s − ω_e × R_t = V_s − ω_e × (R_s − R)                            (B.3.15)

Substituting Eqn. (B.3.15) into Eqn. (B.3.14), and using simple identities, we obtain

RR̈ + Ṙ² = A_s·R + V_s·V_s − 2ω_e·[(R_s − R) × V_s]
           + (ω_e × R_s)·[ω_e × (R_s − R)]                             (B.3.16)

Substituting for R and R_t and its derivatives in terms of u_r, u_l, and u_p from Eqn. (B.2.3), Eqn. (B.2.4), Eqn. (B.2.5), and Eqn. (B.3.1), and substituting u_r, u_l, and u_p in terms of their rectangular components from Eqn. (B.2.14), in order to carry out the operations with ω_e, and simplifying, there results:

RR̈ + Ṙ² = R̈_s R cos(γ) + Ṙ_s²
           − (R_s ω̇_s + 2Ṙ_s ω_s) R sin(γ) cos(ψ)
           + 2ω_e Ṙ_s R sin(γ)[sin(ψ) cos(α) sin(α_i) + cos(ψ) cos(α_i)]
           + R_s ω_s²[R_s − R cos(γ)]
           − 2R_s ω_s ω_e{[R_s − R cos(γ)] cos(α_i)
                          + R sin(γ) sin(ψ) sin(α) sin(α_i)}
           + R_s ω_e²{[1 − sin²(α) sin²(α_i)][R_s − R cos(γ)]
                      − R sin(γ) sin(α) sin(α_i)[cos(ψ) cos(α) sin(α_i)
                                                 − sin(ψ) cos(α_i)]}    (B.3.17)

With this, f_R follows from Eqn. (B.1.18). It is worth noting explicitly in this that R and γ are not independent, either one determining the other through (Fig. B.4):

R_t² = R_s² + R² − 2R_s R cos(γ)

Figure B.5 Geometry of local satellite path and beam center.

In the particular case of a circular orbit, so that Ṙ_s, R̈_s, and ω̇_s all vanish in Eqn. (B.3.17), we obtain the case considered specifically by Raney (1987). If we further drop the terms of second order in the small quantity ω_e/ω_s, that is, the terms involving ω_e² and Ṙ², Eqn. (B.3.17) becomes

RR̈ = R_s ω_s²[R_s − R cos(γ)]
     − 2R_s ω_s ω_e{[R_s − R cos(γ)] cos(α_i) + R sin(γ) sin(ψ) sin(α) sin(α_i)}   (B.3.18)

Following Raney (1987), we can note from Fig. B.4 that

R_s − R cos(γ) = R_t cos(θ)                                            (B.3.19)

and

R sin(γ) = R_t sin(θ)                                                  (B.3.20)

Then Eqn. (B.3.18) becomes

RR̈ = V_s V_a{1 − 2(ω_e/ω_s)[cos(α_i) + tan(θ) sin(ψ) sin(α) sin(α_i)]}  (B.3.21)

where the speed of the spacecraft is

V_s = R_s ω_s

and the speed V_a of the point P (Fig. B.4) below the spacecraft nadir point on the earth is

V_a = R_t ω_s

Returning to the general expression Eqn. (B.3.17), we can use the expression Eqn. (B.3.12) for RṘ, together with various geometric relations in Fig. B.1 and Fig. B.5, to write, after some labor,

RR̈ = (R̈_s − 2Ṙ_s²/R_s)[R_s − R_t cos(θ)] + Ṙ_s² − Ṙ²
     + 2Ṙ_s RṘ/R_s − R_s ω̇_s R_t sin(θ) sin(θ_q)
     + R_s R_t ω_s² cos(θ)
     − 2R_s R_t ω_s ω_e[sin(ν) cos(ζ_t) cos(χ_t − χ_s)
                        + sin(ζ_s) cos(ν) cos(ζ_t) sin(χ_t − χ_s)]
     + R_s R_t ω_e² cos(ζ_s) cos(ζ_t) cos(χ_t − χ_s)                    (B.3.22)

This corresponds to another expression obtained by Barber (1985).
Examples

As a numerical example, let us consider a Seasat orbit (Barber, 1985) with a = 7161.39 km, e = 0.00186, α_i = 108.02°, Ω = 89.37°, and ω = 148.16°. Suppose we are interested in affairs near a subsatellite point with latitude ζ_c = 60°S, under the descending half of the orbit. If, for simplicity, we assume here (spherical earth) that ζ_c = ζ_s, then ζ_s = −60°. Then Eqn. (B.3.10) yields α = −65.60° or −114.40°. We choose α = −114.40°, corresponding to the descending part of the orbit. (When ascending, |α| ≤ π/2, while π/2 ≤ |α| ≤ π when descending.)

Figure B.6 (a) Doppler center frequency for Seasat example. (b) Azimuth chirp constant for Seasat example. Both are plotted versus the orbit angle α from 0° to 360°.

From Eqn. (B.2.13), R_s = 7163.09 km; from Eqn. (B.2.17) and Eqn. (B.2.18), ω_s = α̇ = 1.04 × 10⁻³ rad/s; from Eqn. (B.2.16), Ṙ_s = 13.76 m/s; from Eqn. (B.2.19), R̈_s = −1.9 × 10⁻³ m/s² and ω̇_s = α̈ = −4 × 10⁻⁹ rad/s². Supposing we are interested in targets at range R = 850 km, and assuming the nominal unsquinted value ψ = π/2, we need only a value for R_t to complete the calculation of Eqn. (B.3.6) and Eqn. (B.3.17). This can be taken as the earth radius at the subsatellite point:

R_t = (x_t² + y_t² + z_t²)^{1/2}                                       (B.3.23)

where, for an oblate spheroidal earth,

(x_t² + y_t²)/R_e² + z_t²/R_p² = 1                                     (B.3.24)

with R_e = 6378.16 km, R_p = 6356.78 km being the equatorial and polar radii of the ellipsoidal earth. Also, the coordinates of the subsatellite point at latitude ζ_c satisfy

tan(ζ_c) = (R_e/R_p)² z_t/(x_t² + y_t²)^{1/2}                          (B.3.25)

Using Eqn. (B.3.24) and Eqn. (B.3.25) in Eqn. (B.3.23) yields, for ζ_c = −60°, R_t = 6362.10 km. From Fig. B.4,

R_t² = R_s² + R² − 2R_s R cos(γ)

from which γ = 18.38°. From Eqn. (B.3.6) and Eqn. (B.3.17), Ṙ = −51.64 m/s and R̈ = 60.96 m/s² (using ω_e = 7.29211 × 10⁻⁵ rad/s), so that, using λ = 0.23522 m, from Eqn. (B.1.17) and Eqn. (B.1.18) we finally obtain f_DC = 439.1 Hz and f_R = −518.35 Hz/s.

As another example, we can use Eqn. (B.3.6) and Eqn. (B.3.17) to compute f_DC and f_R over the full Seasat orbit of the previous example. The needed quantities R_s, Ṙ_s, R̈_s, ω_s, and ω̇_s follow from Eqns. (B.2.13), (B.2.16), (B.2.19), (B.2.17), and (B.2.18), assuming a uniform spherical earth. We take for example R = 850 km, γ = 20°, θ_q = 0°. Fig. B.6 shows the result.
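The worked example can be reproduced in a few lines. The sketch below computes R_t from the standard geodetic-latitude form of the ellipsoid equations (our reading of Eqn. (B.3.25), stated as an assumption), then γ from the law of cosines, and finally Ṙ and f_DC from Eqn. (B.3.6) with ψ = π/2. Small differences from the printed values reflect rounding of the intermediate quantities.

```python
import math

Re, Rp = 6378.16e3, 6356.78e3          # equatorial, polar radii (Eqn. B.3.24)
zeta_c = math.radians(-60.0)           # subsatellite latitude (taken as geodetic)

# Earth radius at the subsatellite point, Eqns. (B.3.23)-(B.3.25)
e2 = 1 - (Rp / Re) ** 2
N = Re / math.sqrt(1 - e2 * math.sin(zeta_c) ** 2)
x = N * math.cos(zeta_c)               # sqrt(x_t^2 + y_t^2)
z = N * (1 - e2) * math.sin(zeta_c)
Rt = math.hypot(x, z)                  # about 6362.1 km

# Look angle gamma from the law of cosines (Fig. B.4)
Rs, R = 7163.09e3, 850.0e3
gamma = math.acos((Rs**2 + R**2 - Rt**2) / (2 * Rs * R))   # about 18.38 deg

# Slant range rate from Eqn. (B.3.6); psi = pi/2 removes the cos(psi) term
Rs_dot, w_e = 13.76, 7.29211e-5
alpha, alpha_i = math.radians(-114.40), math.radians(108.02)
rdot = (Rs_dot * math.cos(gamma)
        + w_e * Rs * math.sin(gamma) * math.cos(alpha) * math.sin(alpha_i))

lam = 0.23522
f_DC = -2 * rdot / lam                 # about 439 Hz
print(f"Rt = {Rt/1e3:.2f} km, gamma = {math.degrees(gamma):.2f} deg")
print(f"rdot = {rdot:.2f} m/s, f_DC = {f_DC:.1f} Hz")
```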

B.4 SIMPLIFIED APPROXIMATE MODELS FOR AZIMUTH COMPRESSION PARAMETERS

For certain purposes, expressions for f_DC and f_R having less accuracy than those developed in Section B.1, Section B.2, and Section B.3 suffice. Indeed, for clutterlock and autofocus procedures (Section 5.3), such simplified models are necessary. In this section we will examine some appropriate approximations.

Variation of Doppler Centroid with Range

Let us first consider the behavior of f_DC and f_R as functions solely of slant range R from radar to point target. From Eqn. (B.3.12), we have

RṘ = Ṙ_s[R_s − R_t cos(θ)]
     − R_s R_t ω_s[sin(θ_q) − (ω_e/ω_s) cos(ζ_s) cos(θ_q − ν)] sin(θ)   (B.4.1)

In this, as a point target moves across the swath, only R and θ change (Fig. B.4). Recognizing that θ is a small angle, we have

cos(θ) ≈ 1 − θ²/2
sin(θ) ≈ θ ≈ [2(1 − cos θ)]^{1/2} = {[R² − (R_s − R_t)²]/(R_s R_t)}^{1/2}   (B.4.2)

Using these in Eqn. (B.4.1) yields

RṘ = Ṙ_s H − ω_s[R_s R_t(R² − H²)]^{1/2}[sin(θ_q) − (ω_e/ω_s) cos(ζ_s) cos(θ_q − ν)]   (B.4.3)

where we recognize the satellite altitude

H = R_s − R_t

Thus, from Eqn. (B.4.3), we have the approximate model for variation of f_DC across the range swath:

f_DC = [C₁ + C₂(R² − H²)^{1/2}]/R                                      (B.4.4)

where the constants are

C₁ = −2Ṙ_s H/λ
C₂ = [2ω_s(R_s R_t)^{1/2}/λ][sin(θ_q) − (ω_e/ω_s) cos(ζ_s) cos(θ_q − ν)]   (B.4.5)

For a circular orbit, Ṙ_s vanishes, and only the second term of the model Eqn. (B.4.4) is present. For an unsquinted beam, with θ_q = 0, only the earth rotation term remains in Eqn. (B.4.5). However, even with the nearly circular orbit of Seasat (eccentricity e = 0.00186), the satellite can have a dive angle sufficient for the first term of the model Eqn. (B.4.4) to make a noticeable contribution to f_DC. Similarly, a squint of only a small amount can cause the satellite motion term to dominate in Eqn. (B.4.5).
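The two-constant model of Eqn. (B.4.4) and Eqn. (B.4.5) is trivial to evaluate across a swath. The sketch below uses Seasat-like numbers; the squint, latitude, and heading values are assumed for illustration, and the model is checked for internal consistency against the range-rate expression of Eqn. (B.4.3).

```python
import math

# Representative Seasat-like geometry (assumed for illustration)
Rs, Rt = 7163.09e3, 6362.1e3
H = Rs - Rt                          # satellite altitude
w_s, w_e = 1.0414e-3, 7.29211e-5
lam = 0.23522
Rs_dot = 13.76                       # dive-rate term; nonzero for e = 0.00186
theta_q = math.radians(0.5)          # small squint (assumed)
zeta_s = math.radians(-60.0)         # assumed latitude
nu = math.radians(-141.8)            # assumed heading

S = math.sin(theta_q) - (w_e / w_s) * math.cos(zeta_s) * math.cos(theta_q - nu)

# Constants of the model, Eqn. (B.4.5)
C1 = -2 * Rs_dot * H / lam
C2 = 2 * w_s * math.sqrt(Rs * Rt) / lam * S

def f_DC_model(R):
    """Doppler centroid versus slant range, Eqn. (B.4.4)."""
    return (C1 + C2 * math.sqrt(R**2 - H**2)) / R

def f_DC_exact(R):
    """Directly from R*Rdot of Eqn. (B.4.3), with f_DC = -2*Rdot/lambda."""
    RRdot = Rs_dot * H - w_s * math.sqrt(Rs * Rt * (R**2 - H**2)) * S
    return -2 * RRdot / (lam * R)

for R in (820e3, 850e3, 880e3):      # near, mid, and far range
    print(R / 1e3, f_DC_model(R))
```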

Variation of Doppler Rate with Range

In seeking a similar model for variation of f_R across the range swath, from Eqn. (B.3.17) we could express RR̈ in terms of R, sin(γ), cos(γ), and coefficients which do not vary with R along a fixed pointing direction for a frozen satellite, that is, across the swath. The coefficients in such an expression are rather complicated, however, and it is preferable to examine the terms in RR̈ for order of magnitude directly before attempting to approximate the significant ones.

As Barber (1985) has indicated, the terms involving parameters of noncircularity of the orbit, that is, Ṙ_s, R̈_s, and ω̇_s, are negligible in determining f_R for an orbit of the normal small eccentricity used for remote sensing of the earth surface. From Eqn. (B.3.22), for a circular orbit we have:

RR̈ = −Ṙ² + R_s R_t ω_s²{cos(θ)
      − 2(ω_e/ω_s)[cos(θ) sin(ν) cos(ζ_t) + sin(θ) cos(θ_q) sin(ζ_t)]
      + (ω_e/ω_s)²[cos²(ζ_t) cos(θ)
      − sin(θ) sin(ζ_t) cos(ζ_t) sin(θ_q − ν)]}                         (B.4.6)

In this, we can approximate cos θ as unity.

From Eqn. (B.4.3), for a circular orbit, approximately

|Ṙ|²/(ω_s² R_s R_t) = (1 − H²/R²)[sin(θ_q) − (ω_e/ω_s) cos(ζ_s) cos(θ_q − ν)]²

For small squint angle θ_q, the earth rotation term may dominate this, in which case

|Ṙ|²/(ω_s² R_s R_t) ≈ (1 − H²/R²)(ω_e/ω_s)² cos²(ζ_s) cos²(θ_q − ν)

If squint dominates, on the other hand, we have

|Ṙ|²/(ω_s² R_s R_t) ≈ (1 − H²/R²) sin²(θ_q) ≈ sin²(γ) sin²(θ_q)         (B.4.7)

from Fig. B.4. For a nominal γ = 45°, say, and squint θ_q < 8°, Eqn. (B.4.7) amounts to at most only 0.01. In both cases, the corresponding term of Eqn. (B.4.6) can be dropped.

For a nominally side-looking system, we can then take Eqn. (B.4.6) as

RR̈ = V²                                                                (B.4.8)

where the effective velocity V is defined by

V² = R_s R_t ω_s²{1 − 2(ω_e/ω_s)[cos(α_i) + sin(θ) cos(θ_q) sin(ζ_s)]
     + (ω_e/ω_s)²[cos²(ζ_s) − sin(θ) sin(ζ_s) cos(ζ_s) sin(θ_q − ν)]}   (B.4.9)

using Eqn. (B.3.9). With the approximation that sin θ is small, the parameter V² is nearly independent of range, leading to the model

f_R = −2V²/(λR)                                                         (B.4.10)

as R varies across the swath. This is the model used in autofocus procedures.

Dropping the earth rotation terms in Eqn. (B.4.9), there results

V² = R_s R_t ω_s²                                                       (B.4.11)

where V_s = R_s ω_s is the satellite speed. Introducing the spacecraft altitude H, and recognizing that R_t = R₀, the nominal earth radius, Eqn. (B.4.11) results in the simple approximation

V = V_s/(1 + H/R₀)^{1/2}                                                (B.4.12)

This expression for V is not accurate enough for other than rough computation, however.
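The effective-velocity model reproduces the Doppler rate of the earlier worked example quite closely. The sketch below evaluates Eqn. (B.4.9) and Eqn. (B.4.10) with numbers taken from that example; the target heading ν ≈ −141.8° is an assumed value consistent with the descending-pass geometry, and μ = GM is an assumed constant used to form the circular-orbit rate.

```python
import math

Rs, Rt, R = 7163.09e3, 6362.1e3, 850.0e3
lam = 0.23522
mu = 3.986005e14                       # GM of earth (assumed constant)
w_s = math.sqrt(mu / Rs**3)            # circular-orbit rate, ~1.04e-3 rad/s
w_e = 7.29211e-5

theta = math.asin(R * math.sin(math.radians(18.38)) / Rt)   # from Eqn. (B.3.20)
theta_q = 0.0                          # unsquinted
zeta_s = math.radians(-60.0)
alpha_i = math.radians(108.02)
nu = math.radians(-141.8)              # assumed heading (descending pass)

# Effective velocity squared, Eqn. (B.4.9)
k = w_e / w_s
V2 = Rs * Rt * w_s**2 * (
    1
    - 2 * k * (math.cos(alpha_i)
               + math.sin(theta) * math.cos(theta_q) * math.sin(zeta_s))
    + k**2 * (math.cos(zeta_s)**2
              - math.sin(theta) * math.sin(zeta_s) * math.cos(zeta_s)
                * math.sin(theta_q - nu)))

f_R = -2 * V2 / (lam * R)              # autofocus model, Eqn. (B.4.10)

# Rough value from Eqn. (B.4.12)
Vs, H = Rs * w_s, Rs - Rt
V_rough = Vs / math.sqrt(1 + H / Rt)

print(f"f_R = {f_R:.1f} Hz/s")         # close to the -518.35 Hz/s of the example
print(f"V = {math.sqrt(V2):.0f} m/s, rough V = {V_rough:.0f} m/s")
```

The rough value of Eqn. (B.4.12) differs from the full Eqn. (B.4.9) result by a few percent, which is why the text warns that it suffices only for rough computation.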

REFERENCES

Barber, B. C. (1985). "Theory of digital imaging from orbital synthetic-aperture radar," Int. J. Rem. Sens., 6(7), pp. 1009-1057.

El'yasberg, P. E. (1967). Introduction to the Theory of Flight of Artificial Earth Satellites, Israel Program for Scientific Translations, Jerusalem. (Available from US Department of Commerce, Ref. NASA TT F-391, TT 67-51399.)

Hay, G. E. (1953). Vector and Tensor Analysis, Dover, New York.

Haymes, R. C. (1971). Introduction to Space Science, Wiley, New York.

Raney, R. K. (1986). "Doppler properties of radars in circular orbits," Int. J. Rem. Sens., 7(9), pp. 1153-1162.

Raney, R. K. (1987). "A comment on Doppler FM rate," Int. J. Rem. Sens., 8(7), pp. 1091-1092.

APPENDIX C

THE ALASKA SAR FACILITY
The NASA Office of Space Science and Applications (OSSA) is sponsoring the development and operation of an integrated SAR ground data system capable of receiving, processing, and distributing data from a series of non-NASA sensors. We present an overview of this system, the Alaska SAR Facility (ASF), as a design example of a recent implementation to illustrate the data flow from reception to the end user. The primary application of the data received at the ASF is polar oceans research, including the study of sea ice, open oceans, and glaciology. Additionally, a number of studies will be conducted in the areas of hydrology, geology, and forest ecosystems (Carsey and Weeks, 1989). This system, which is installed at the University of Alaska in Fairbanks, will receive SAR data from three satellites: (1) the European Space Agency (ESA) sponsored European Remote Sensing satellite (E-ERS-1); (2) the National Space Development Agency of Japan (NASDA) sponsored Earth Resources Satellite (J-ERS-1); and (3) the Canadian Space Agency sponsored Radarsat system. The first data from ERS-1 is expected in the summer of 1991, with the J-ERS-1 data to follow in 1992 and that from Radarsat in 1995. The key parameters for these satellites are given in Table 1.2.

The unique feature of the ASF is that this facility is the first fully integrated ground data system, in the sense that it operationally processes the received signal data from the modulated RF downlink to a variety of high level data products, including area maps of certain geophysical quantities (ocean wave spectra, sea ice type, and ice kinematics). The facility consists of four major elements: (1) the Receiving Ground Station (RGS); (2) the SAR Processor System (SPS); (3) the Archive and Operations System (AOS); and (4) the Geophysical Processor System (GPS). An overview diagram of the facility is shown in Fig. C.1.

Figure C.1 Major subsystems in the Alaska SAR Facility (courtesy of C. Miller).

The ASF has a direct link with the science community in that, through the AOS, there is a capability to interactively browse the image data catalog and to directly view the geophysical products. Additionally, a mission planning subsystem has been incorporated into the AOS for users to submit data acquisition requests and for resolution of conflicting requests. This subsystem will coordinate the ASF investigator requests and forward a consolidated input to the appropriate central planning site for the satellite (e.g., for ERS-1 the mission management control center is in Darmstadt, Germany).

C.1 ASF OPERATIONS

Before describing the ASF subsystems in detail, it is instructive to preview the operations sequence. A data acquisition is initiated electronically by logging into the AOS mission planning subsystem (access is limited to approved investigators). The investigator requests are periodically analyzed by the ASF project scientist for prioritization and conflict resolution. The resulting acquisition plan is then forwarded to the main control center and incorporated into the overall data acquisition plan, as compiled from all the ground receiving facilities and science investigation teams. The ASF data takes are scheduled and forwarded to the station. These data collection periods are entered into the RGS tracking computer along with the most recent satellite orbital elements.

During a data acquisition pass the receiving antenna tracks the satellite to receive the (real-time) SAR downlink. The acquired data is demodulated and routed (at 105 Mbps for ERS-1) to high density digital recorders (HDDRs) for temporary staging until the high precision (restituted) ephemeris is available for processing. After reception of the restituted ephemeris (24-48 hours after data acquisition), the recorded data is transferred from the HDDR to the SPS via an interface that is customized for each of the three satellites due to the differing formats. This interface performs data synchronization into range line records, decoding or descrambling the data if necessary, and unpacking it into an integer (two bytes per sample) format for processing. The SPS then processes this reformatted raw digitized video data into image products. The system is designed to generate two types of output imagery simultaneously. The high resolution (one-look complex or four-look detected) image data is routed to the HDDRs. The second product, an 8 x 8 averaged low resolution image, is passed directly to a post-processor over the ASF local area network (LAN). This low resolution imagery is formatted to the CEOS standard (1989) and is then transferred to the AOS for online storage in an optical disk jukebox. All ancillary data for each image frame (which covers approximately 100 km x 100 km) are stored in the AOS Database Management System (DBMS) for access by the science team and the GPS.

The data products (including both high and low resolution images, as well as the geophysical data) are ordered electronically via the AOS DBMS using the NASA Space Physics Analysis Network (SPAN). These products are copied onto 9 track, 1/2 inch computer compatible tape (CCT) or 5 1/4 inch write-once read-many digital optical disk (WORM DOD) and shipped to the user. A special processing request is required for generation of the one-look complex images or for geocoded image products (Chapter 8). The projected turnaround for the standard ASF output product is 2-4 days. For the special products, an additional delay of 3-4 days is to be expected.

The geophysical processor has two modes of operation. In the automated mode, the system daily initiates its own query of the AOS database and selects a set of low resolution images for processing, based on preset criteria such as image location. After processing, it automatically returns geophysical products to the AOS for archive and electronic transfer to users. In the manual mode, the GPS responds to requests from either remote users (via the AOS interface) or from local users (via the workstation console). The system is designed such that the automated processing is interrupted by a user request, which carries a higher priority. Following servicing of the user request, the system then returns to its automated processing routine. The GPS is designed to handle a daily minimum of 10 pairs of ice motion vector maps, 20 ice classification images, and 20 image framelettes (512 x 512 pixels) which are processed into ocean wave spectra plots. The algorithms for the GPS will be described in Section C.5.
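The two-mode GPS scheduling policy just described, automated background processing that yields to higher-priority user requests, can be sketched with a simple priority queue. Everything below (class and job names) is illustrative only, not the actual ASF software.

```python
import heapq

# Priority 0 = manual user request (higher), 1 = automated background job
class GPSScheduler:
    def __init__(self):
        self._queue = []
        self._count = 0            # tie-breaker preserves FIFO order within a priority

    def submit(self, job, manual=False):
        priority = 0 if manual else 1
        heapq.heappush(self._queue, (priority, self._count, job))
        self._count += 1

    def run_next(self):
        if not self._queue:
            return None
        _, _, job = heapq.heappop(self._queue)
        return job

sched = GPSScheduler()
sched.submit("auto: ice motion pair 1")
sched.submit("auto: ice classification 7")
sched.submit("user: wave spectra framelette", manual=True)   # jumps the queue

order = [sched.run_next() for _ in range(3)]
print(order)   # user request is serviced first, then automated jobs resume
```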

Figure C.2 Station masks for ERS-1 at 5° elevation: Kiruna, Sweden (top); Gatineau, Quebec (right); and Fairbanks, Alaska (bottom).

The Alaska SAR Facility operates in conjunction with several other primary ground receiving stations and a number of portable stations deployed around the world. Since ERS-1 does not have a high rate data recorder on board, the SAR acquisition area is limited to the station reception range (mask). Figure C.2 illustrates the station masks for the three primary Northern Hemisphere stations, located at Kiruna, Sweden; Gatineau, Quebec; and Fairbanks, Alaska.

All elements of the ASF were built by JPL, with the exception of the RGS, which was built by the Scientific Atlanta Corporation under contract to JPL. The facility is operated by University of Alaska students and staff, with sustaining engineering support from JPL. In the following sections we will describe each major element of the ASF in more detail. A summary of the ASF data products produced from the ERS-1 SAR data is given in Table C.1.


TABLE C.1  ASF Data Products for the ERS-1 SAR Data

Level*  Product             Description                             Source  Volume/Day
        Raw Video Signal    Complex samples,                        RGS     < 40 minutes at 105 Mbps
                            5 bits I, 5 bits Q
1A      Basic Image         One-look complex, 16 I, 16 Q            SPS     2 frames (30 km x 50 km)
                            bits/sample, resolution < 10 m
1B      Bulk Image          Four-look detected, 8 bits/pixel,       SPS     30-50 frames (100 km x 100 km)
                            resolution ~ 25 m
1B      Browse Image        256-look (8 x 8 avg), 8 bits/pixel,     SPS     30-50 frames (100 km x 100 km)
                            resolution < 200 m
1B      Geocoded Image      From either Bulk or Browse Image,       AOS     20 browse frames, 1 bulk frame
                            8 bits/pixel, UTM or PS projection              (100 km x 100 km)
        Ice Type Maps       3-4 classes, 4 bits/pixel; online:      GPS     20 frames (100 km x 100 km)
                            5 km and 100 m grid products
        Ocean Wave Spectra  Online: contour plots; offline:         GPS     20 framelettes (512 x 512 pixels)
                            spectra, resolution ~ 25 m
        Ice Motion Maps     Ice velocity vectors; online: 5 km      GPS     10 pairs (100 km x 100 km)
                            and 100 km grid products                        (1000 km x 1000 km)

*Definition of data product levels can be found in Table 6.1. UTM, Universal Transverse Mercator; PS, Polar Stereographic.

C.2 THE RECEIVING GROUND STATION

The Receiving Ground Station (RGS) is designed to collect data from each of the three SAR satellites. However, the initial implementation of the RGS will only support ERS-1 data reception. This system will acquire and track the satellite using both the ephemeris predicts (as provided by the mission control center) and an S-band satellite beacon which is in continuous operation. The RGS can track the satellite down to a 5° elevation angle from the horizon, including zenith passes. The SAR data is downlinked from the satellite on an X-band, 8.14 GHz, carrier signal. This QPSK modulated signal is demodulated in the RGS receiver, and the resulting complex 105 Mbps, 5 bit/sample data stream is routed to the HDDRs via a Signal and Data Routing Assembly (SARA). The RGS was designed, built, and installed by Scientific Atlanta Corp. under contract to JPL.

The antenna layout is shown in Fig. C.3. The parabolic dish is 10 m in diameter and supports both X-band and S-band feeds. The pedestal is controlled by servo devices to perform tracking at a slew rate of up to 15°/s in a thermal environment that ranges from −65 to +90°F, with wind gusts to 40 mph. The system is completely automated to acquire and track using program commands from a Hewlett-Packard HP-320 computer. Test and installation were completed in June 1989. The RGS antenna is located on the roof of the Elvey Building at the University of Alaska (Fig. C.4). This system is also capable of receiving Landsat and SPOT downlink data streams.

Figure C.3 Diagram of the ASF RGS 10 m parabolic dish (10 m reflector with frequency selective surface (dichroic) subreflector, X-band feed, and elevation-over-azimuth pedestal).
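The downlink numbers above fix the raw data volume: at 105 Mbps, with 5-bit I and 5-bit Q complex samples, each second of reception yields about 13 MB. The sketch below works this out for an assumed 12-minute station pass (the pass length is an assumption; it depends on the orbit geometry and the 5° mask).

```python
bit_rate = 105e6          # bps, ERS-1 X-band downlink
bits_per_sample = 5 + 5   # 5-bit I plus 5-bit Q per complex sample

sample_rate = bit_rate / bits_per_sample       # complex samples per second
bytes_per_sec = bit_rate / 8

pass_seconds = 12 * 60                         # assumed pass duration
pass_gbytes = bytes_per_sec * pass_seconds / 1e9
print(f"{sample_rate/1e6:.1f} Msamples/s, {pass_gbytes:.2f} GB per 12-min pass")
```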

Figure C.4 Dish installed on top of the eight-floor Elvey Building at the University of Alaska in Fairbanks.

C.3 THE SAR PROCESSOR SYSTEM


The SAR Processor System (SPS) consists of a custom built hardware correlator,
a post-processor, a control computer, several high density tape drives, and a
laser film recorder. The processing is initiated by the AOS (based on scientist
request) by specifying the high-density tape identification number and the data
acquisition time (in GMT). Tapes are mounted by the SPS operator for both
input and output. The HDDRs are Ampex DCRSi transverse scan cassette
drives controlled via an RS-232 interface. Each high density cassette tape has
a 48 GB storage capacity. Control of the SPS, shown in Fig. C.5, is based
around a Masscomp MC5600 computer workstation, augmented by a board
level array processor.
The standard operational data processing scenario is as follows. The downlink
data headers are stripped from the signal data by the SPS input interface and

transferred to the Masscomp for analysis and processing parameter generation. The processing is performed in two passes over the data. During the first pass, autofocus and clutterlock measurements are performed to derive estimates of the Doppler parameters. The tape is then rewound to retransfer the data for image correlation. Given this processing scenario, the effective throughput of the SPS is that about 5 hours are required to process 52 minutes of data, although the image formation process (pass 2) actually operates at only a factor of three slowdown relative to the real-time acquisition rate. We will describe each of these processing stages in more detail.
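The throughput figures quoted above imply an overall slowdown of nearly a factor of six relative to real time, even though image formation itself runs at only a factor of three. A few lines of arithmetic make this concrete (the split between pass 2 and the remainder is our inference from the quoted numbers).

```python
data_minutes = 52            # acquired signal data per processing run
total_hours = 5              # quoted SPS time to process it

overall_slowdown = total_hours * 60 / data_minutes
print(f"overall slowdown ~ {overall_slowdown:.1f}x real time")

# Pass 2 (image formation) runs at a 3x slowdown; the remainder is pass 1
# (Doppler parameter estimation) plus tape rewind/retransfer overhead.
pass2_hours = data_minutes * 3 / 60
pass1_and_overhead = total_hours - pass2_hours
print(f"pass 2: {pass2_hours:.1f} h, pass 1 + overhead: {pass1_and_overhead:.1f} h")
```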
Preprocessing

The preprocessing data analysis is performed at three points (beginning, center,


and end) within a 100 km image frame, or at approximately 50 km intervals
for strip mode processing. The main function of the preprocessing is Doppler
centroid and Doppler rate parameter estimation. At each preprocessing location
a four-look correlation is performed on 204S range lines; the four single-look
images are sent to the Masscomp for clutterlock and autofocus analysis (see
Chapter 5). In addition to the azimuth spectral analysis (for foe) and the look
correlation (for fR), a correlation between looks in the range direction is
performed to check for PRF ambiguity. The cross-track misregistration for a
single ambiguity is estimated to be 2.7 pixels.
The preprocessing stage also performs data quality assurance (QA) and
generates calibration correction factors. The following QA operations are
performed at each preprocessing location: (a) Bit error rate (BER) measurement
from the pseudo noise (PN) code at the start of each range line; (b) SNR
estimate from the range spectra; and (c) Raw data histogram to verify the
I, Q quantization balance in both amplitude and phase. The calibration analysis
includes: (a) Setting the processor gain, (i.e., FFT scaling for the fixed point
arithmetic modules); (b) Estimation of the range transfer function from the
chirp replicas in the header (see Chapter 7); and (c) Determination of the
absolute image location. The cross-track radiometric corrections are calculated
off-line and stored in the Masscomp database. '(hese corrections are applied
following range compression by multiplying the complex data with a real
weighting function.
Based on the preprocessing data analysis, all reference functions and
resampling coefficients are precalculated and stored in the Masscomp CPU
memory. The correlator registers are memory mapped into the control computer
main memory, thus permitting the correlator to behave as a slave on the
computer bus. The processing is essentially static as all control parameters are
downloaded prior to a processing run.
Correlation

The image correlator is a custom designed system comprising a single rack (35
boards) of digital hardware. The system is a second generation design based
on the Advanced Digital SAR Processor (ADSP) built by NASA/JPL for
real-time Seasat and Magellan SAR processing. The ADSP described in
Chapter 9 is a multimode, block floating point, pipeline processor with built-in
hardware modules for autofocus and clutterlock. It has two azimuth correlation
modules, permitting either the SPECAN or the frequency domain (fast)
convolution algorithm to be used for the azimuth compression operation. In
comparison with the ADSP, the Alaska SAR Processor (ASP) is a simplified
design in that all computations are fixed point (16 bit I, 16 bit Q); the
preprocessing operations are performed off-line; and the system can only perform
fast convolution azimuth compression. Table C.2 compares the architecture of
the ADSP and the ASP correlators. The ASP uses less than half the number
of integrated circuit chips (ICs) and only 40% of the ADSP power consumption.
The ASP is pictured in Fig. C.6; in the left-hand rack are the digital boards;
the right-hand rack contains the Ampex recorders and the data routing assembly
(SARA).

The SPS functional modules and pipelined data flow are shown in Fig. C.7
(Carande and Charney, 1988). Any of these modules can be bypassed for test
or to accommodate special modes (e.g., onboard range compression). Both the
range processor and the azimuth processor can accommodate up to 8K point
complex FFTs. To minimize the effect of round-off errors from the fixed point
FFT arithmetic, range processing is performed by dividing the input range line
into overlapping 2K sample segments. Each segment is processed separately
and the data reassembled, resulting in an 8K compressed range line. The corner
turn memory is divided into three pages. One page accepts the range processor
output in the range line direction, while the other two pages output data to
the azimuth processing module in the along-track direction. Each page is
8 Msamples (16 I, 16 Q) resulting in a total memory size of 96 MB.
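The overlapping-segment scheme is, in effect, an overlap-save fast convolution: each segment re-uses the last samples of its predecessor, the circularly wrapped outputs are discarded, and the valid pieces are concatenated. A floating point Python sketch of the idea (names and defaults are illustrative; the ASP itself is fixed point hardware):

```python
import numpy as np

def blocked_range_compress(line, ref, block=2048):
    """Range compression by fast convolution in overlapping segments
    (overlap-save). `ref` is the matched-filter reference, shorter than
    `block`; returns the first len(line) samples of the linear
    convolution of `line` with `ref`.
    """
    m = len(ref)
    step = block - (m - 1)               # new output samples per segment
    ref_fft = np.fft.fft(ref, block)
    # A prefix of m-1 zeros aligns the first valid output with index 0.
    padded = np.concatenate([np.zeros(m - 1, complex), np.asarray(line)])
    out = []
    for start in range(0, len(line), step):
        seg = padded[start:start + block]
        if len(seg) < block:             # zero-pad the final segment
            seg = np.concatenate([seg, np.zeros(block - len(seg), complex)])
        conv = np.fft.ifft(np.fft.fft(seg) * ref_fft)
        out.append(conv[m - 1:])         # drop circularly wrapped samples
    return np.concatenate(out)[:len(line)]
```

Processing in short segments keeps each fixed point FFT small, which is the round-off motivation cited above.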
The azimuth processor can perform either one-look or four-look (quarter
aperture) processing. The range migration correction module can accommodate
up to 128 samples of range walk; it also performs slant to ground range
correction. The azimuth correlation is performed at zero Doppler to eliminate
TABLE C.2  Comparison of the ADSP and the ASP Hardware Configurations

                            Advanced Digital SAR     Alaska SAR Processor
                            Processor (ADSP)         (ASP)
Total Number of Boards      76                       35
Number of Unique Boards     27                       18
Number of Racks             2                        1
Number of ICs               27,500                   13,000
Power Consumption           12.5 kW                  5 kW
Clock Rate                  20 MHz                   10 MHz
Computation Rate            6.5 GFLOPS               3.3 GOPS
Application                 Magellan, Seasat         E-ERS-1, J-ERS-1, Radarsat

Source: Courtesy of T. Bicknell.

THE ALASKA SAR FACILITY

Figure C.7 Functional block diagram of SPS correlator modules showing data flow:
input interface, range processor, range radiometric correction, corner turn memory,
azimuth processor (with resampling to ground pixels), multi-look memory, and output
interface, with clutterlock and autofocus feedback through the control processor
(Masscomp).

the need for a post-processing deskew operation. The reference function is
updated every 8 bins cross-track for four-look imagery and every bin for the
one-look products. There is no along-track update capability. The detection
and multilook operations are performed in the last two modules. The multilook
memory is also three pages, performing a final corner turn to output the data
in a range-line format. The output data is sent both to a HDDR for recording
and to an averaging module (8 pixels x 8 pixels) to produce a low resolution
image for on-line archive and image display. All ancillary data is transferred
to the post-processor along with the low resolution imagery for preparation of
the CEOS standard format data files. This data is then transferred to the dual
ported disks for data staging, prior to file transfer to the AOS.
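The 8 x 8 averaging that produces the low resolution browse product is plain block averaging; a minimal sketch (function name assumed, trailing partial blocks simply discarded):

```python
import numpy as np

def browse_image(img, factor=8):
    """Reduce a detected image by block averaging (here 8 x 8 pixels),
    in the manner of the ASP averaging module that makes the low
    resolution browse product. Rows/columns that do not fill a complete
    block are discarded.
    """
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```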

Figure C.6 The custom SAR correlator (left rack), the Ampex DCRSi recorders and the data
routing assembly (Courtesy of T. Bicknell).

C.4 ARCHIVE AND OPERATIONS SYSTEM

The Archive and Operations System (AOS) consists of a Mission Planning
Subsystem and an Archive and Catalog Subsystem (Fig. C.8). The AOS serves
as the primary user interface. It receives requests for data products and transfers
information about data product availability to the users. This system also
performs mission planning operations for the ASF science team and forwards
data acquisition requests to the satellite operations and control center. The
archive consists of on-line data (i.e., low resolution images and geophysical
processor products) and off-line data (i.e., high resolution images and raw data).
Additionally, this system performs a limited amount of image processing. On
request, it geocodes both low and high resolution images (to a smooth geoid),
and it is planned to spatially compress low resolution images for on-line
electronic browse using a tree-searched vector quantization algorithm (see
Chapter 9).

Figure C.8 Functional block diagram of the AOS, showing the Mission Planning
Subsystem and the Archive and Catalog Subsystem.

C.5 THE GEOPHYSICAL PROCESSOR SYSTEM
The Geophysical Processor System (GPS) derives information about the surface
characteristics from the SPS image products. This system has three primary
functions: (a) Multitemporal ice motion tracking; (b) Ice type classification;
and (c) Wave spectra analysis (Fig. C.9). The system is designed for fully
automated operations, performing quality assurance checks to ensure product
Figure C.9 Functional block diagram of the GPS: Level 1C geocoded images and a
spatial database of ancillary data (buoy, temperature, and wind data, ice edge map)
feed the ice motion tracking, ice type classification, and wave spectra analysis modules,
which generate the ice motion, ice type, and wave parameters products for the
image/product database.

The AOS hardware architecture is centered around a Digital Equipment


Corporation VAX 8530 with 64 MB of main memory and a high speed VAX
BI bus. In addition to its magnetic disk storage (5 GB), it has two 5-1/4 inch
WORM DOD writer systems and a Perceptics Lasersystem 12 inch platter
optical disk jukebox capable of storing 100 GB online. The VAX image
processing capability is augmented by an FPS 5205 array processor and a DEC
floating point accelerator. The mission planning is performed on a VAX Station
2000 graphics workstation.
The software utilizes a commercial database management system (DBMS)
developed by INGRES Corp. This relational DBMS (Version 6.3) allows search
of the data catalog using a variety of parameters (GMT, location, site name,
etc.). The mission planning subsystem uses an orbit propagator to predict the
satellite location for data acquisition planning. The GPS products are available
on-line in a graphical format using the GKS graphics standard. The AOS also
supports automated interactive queries from the GPS, transferring geocoded
images to this system and receiving the high level GPS products.


consistency. A high level interface with the AOS performs database queries,
electronic transfer of image data to the GPS, and transfer of geophysical products
to the AOS. The software is designed to be modular, to allow flexibility in its
architecture for post-launch optimization of the system performance. The GPS
requires input data that has been previously geocoded to either a Polar
Stereographic (PS) or Universal Transverse Mercator (UTM) projection.
Additionally, the GPS requires that the SPS perform radiometric corrections
to remove the cross-track power variation (see Chapter 7), although it does


have limited capability to remove residual calibration errors (e.g., cross-track
intensity ramps). The software is implemented on a Sun 4/260 scientific
workstation augmented by a Sky Warrior array processor. The GPS also uses
an INGRES DBMS to archive its data files. We will briefly discuss each of the
GPS algorithms and its output data products in the following subsections.

Ice Kinematics

The ice motion tracker performs matching of common ice floes in SAR image
pairs separated by time scales of days to weeks. A diagram of the motion
algorithm is shown in Fig. C.10. The candidate image pairs are selected from
a listing of all recently acquired image data received daily from the AOS
database. The location of each newly acquired image is input to a motion
predictor algorithm that uses National Weather Service (NWS) wind and
drifting buoy data to select the most probable archive images for matching
(Colony and Thorndike, 1984).

Figure C.10 Ice motion and ice classification algorithms (Kwok et al., 1990): geocoded
images selected from the SAR data catalog are preprocessed (speckle filtering,
segmentation), then matched by hierarchical area correlation (pack ice) or by feature
matching with a consistency check (marginal ice zone), with feedback to the motion
predictor; classification assigns four ice types.

Figure C.11 Ice motion pair from Beaufort Sea acquired by Seasat illustrating the rotation and
deformation over a three day period: (a) Rev. 1438 acquired October 5, 1978; (b) Rev. 1481 acquired
October 8, 1978; (c) Edge maps; and (d) Motion vectors.

Figure C.12 Ice motion output products on 5 km grid from Beaufort Sea acquired by Seasat: (a) Rev.
1409 acquired October 3, 1978; (b) Rev. 1452 acquired October 6, 1978; (c) Translational vector grid;
and (d) Rotational grid.

The selected image pair is first evaluated using a coarse feature extraction
technique to generally determine the area of common floes. The image pair is
then categorized as either pack ice or marginal zone ice. Since the pack ice
undergoes little rotation within the time scales considered in this system, this
category of ice imagery can be processed using a straightforward hierarchical
area correlation technique. Conversely, ice in the marginal zone can move
several dozen kilometers per day and undergoes both deformation and rotation
(Fig. C.11). The matching procedure for ice in the margin is based on a feature
extraction procedure that is invariant to rotation and insensitive to deformation
(Kwok et al., 1990). This procedure is used to derive a sparse field of rotational
and translational vectors that are in turn used to initialize an area correlation
routine which produces a regular gridded output product (Fig. C.12). These ice
motion products (100 km x 100 km) are averaged and overlaid on land
boundary maps to derive a regional (Arctic) time series product.
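The innermost step of an area correlation matcher is a normalized cross-correlation search. The sketch below shows only that step (the operational tracker is hierarchical, coarse-to-fine, and, in the marginal zone, seeded by rotation-invariant feature matches; the function name is invented here):

```python
import numpy as np

def match_offset(chip, search_win):
    """Locate `chip` inside the larger `search_win` by normalized
    cross-correlation -- the core step of an area-correlation ice
    motion matcher. Returns the (row, col) offset of the best match.
    """
    ch, cw = chip.shape
    ref = chip - chip.mean()
    best, best_rc = -np.inf, (0, 0)
    for r in range(search_win.shape[0] - ch + 1):
        for c in range(search_win.shape[1] - cw + 1):
            win = search_win[r:r + ch, c:c + cw]
            w = win - win.mean()
            denom = np.sqrt((ref * ref).sum() * (w * w).sum())
            score = (ref * w).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```

The displacement of the matched chip between the two images, scaled by pixel spacing and time separation, gives one translational motion vector of the gridded product.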
Ice Classification

The ice classification routines identify the various types of ice based on the
measured radiometric brightness of the SAR image pixels. The algorithm requires
information on surface temperature derived either from NWS data or by passive
radiometry (e.g., AVHRR). The temperature information is used to select a
look-up table (LUT) for the classification. The image is first segmented into
three or four classes using a clustering algorithm. These classes are then related
to ice types using a maximum likelihood classifier given the target scattering
information in the LUT. The scattering characteristics of the various ice types
are based on ground scatterometer measurements (e.g., Onstott et al., 1979).
The major ice types, categorized by age, are: (a) Multi-year ice; (b) First-year
ice; (c) New ice; and (d) Open water. Currently, it is expected that this procedure
can produce reliable (95% correct) results only during the winter season,

Figure C.13 Comparison of classified JPL aircraft SAR (C-band, VV) images (white:
multi-year; grey: first-year rough; dark grey: first-year smooth; black: new ice) with
those acquired simultaneously by the KRMS 33.6 GHz passive radiometer (Holt et al.,
1990a).

Figure C.14 Functional block diagram of the wave spectra analysis algorithm: subscene
Fourier analysis, spectral smoothing, peak detection, and contour plot generation,
yielding a product of wavenumbers, wave directions, and a plot.

October to May. The large difference in dielectric constant between sea ice
covered with dry snow and sea ice covered with snow containing free water
molecules and melt ponds makes classification during the summer (June to
September) season significantly more complex and not sufficiently reliable for
an operational system. An illustration of the performance of the classification
algorithm is shown by the comparison of simulated E-ERS-1 data (from the
NASA/JPL DC-8 aircraft) with the NORDA KRMS 33.6 GHz passive
radiometer system (Eppler et al., 1986) in Fig. C.13 (Holt et al., 1990a).

Figure C.15 Seasat SAR image of Chukchi Sea (10/9/78) showing four 512 x 512 pixel
framelettes and their spectral contour plots: (A) Open sea; (B) Frazil ice; (C) Pancake
ice; and (D) Open sea (Holt et al., 1990b).
Ocean Wave Spectra

The GPS ocean wave spectra routine is designed to extract wave parameters
from the SAR image data (Holt et al., 1990b). Its input is a full resolution
(four-look) image that is output from the SPS on HDDT. These data are read
into the AOS where they are subdivided into 512 x 512 pixel blocks for
processing by the GPS. The functional block diagram of the wave product
generation algorithm is given in Fig. C.14. The processing consists of a two-
dimensional transform of each 512 x 512 block of data, followed by a Gaussian
smoothing filter to reduce the noise. The width of this filter is a parameter that
can be adjusted by the user. A peak finding routine is then used to locate the
significant wave peaks in the smoothed spectra. These peaks are defined as
local maxima, when compared to the image mean, which are separated by some
minimum distance from other maxima. From these peak locations, the wave
number is given by the radial distance of the peak from the origin; the wave
direction is given by its polar angle relative to the image axis. No corrections
are applied to compensate for the SAR system impulse response function, or
for nonlinear moving surface modulations. An example of the image spectra
and resultant contour plots is given in Fig. C.15. The contours will be available
to users as an online graphic display; however, the smoothed spectra will only
be distributed on hard copy digital media.
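The transform, smoothing, and peak-to-wavenumber steps can be sketched in a few lines of Python. This is a hypothetical helper, not the operational code: it assumes a square block, handles only the dominant peak, and uses `sigma` (in spectral bins) for the user-adjustable smoothing width:

```python
import numpy as np

def wave_spectrum_peak(block, pixel_spacing, sigma=1.5):
    """Sketch of the wave-product chain: 2-D FFT of an image block,
    Gaussian smoothing of the power spectrum, and conversion of the
    dominant peak to wavenumber (rad/m) and direction (degrees,
    relative to the image axis)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(block - block.mean()))) ** 2
    # Separable Gaussian smoothing of the power spectrum.
    n = int(4 * sigma) | 1
    ax = np.arange(n) - n // 2
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    g /= g.sum()
    sm = np.apply_along_axis(lambda row: np.convolve(row, g, 'same'), 1, spec)
    sm = np.apply_along_axis(lambda col: np.convolve(col, g, 'same'), 0, sm)
    # Dominant peak, excluding the DC bin at the center.
    c = np.array(sm.shape) // 2
    sm[c[0], c[1]] = 0.0
    pr, pc = np.unravel_index(np.argmax(sm), sm.shape)
    dy, dx = pr - c[0], pc - c[1]
    dk = 2 * np.pi / (block.shape[0] * pixel_spacing)  # bin spacing, rad/m
    wavenumber = np.hypot(dx, dy) * dk
    direction = np.degrees(np.arctan2(dy, dx))
    return wavenumber, direction
```

Note that the spectrum of a real image is symmetric, so each wave system produces two mirror peaks; the 180 degree direction ambiguity is inherent to this product.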

C.6 SUMMARY

The Alaska SAR Facility is the first fully integrated SAR ground data system
in that it routinely acquires and processes SAR data from Level 0 to Level 3.
Additionally, this facility performs mission planning and has a limited capacity
for electronic distribution of data, permitting rapid data access by the science
team. We present this system as an example of the type of end-to-end design
required for the "EOS era" ground data and information system. This system
has a throughput capacity of close to a terabit of SAR data per (24 hour) day.
This computational capacity is balanced with automated systems for archiving
and cataloging these image products, as well as a system to derive information
from the images and to produce a large number of reduced volume (non-image)
high level data products for direct analysis by the science team.
We consider the ASF as a pathfinder system in that it addresses a number
of the technical challenges facing the EOS ground data system. NASA plans
to use this facility for testing advanced concepts in mission planning, data
integration, electronic browse, and high rate data distribution. We fully expect
that the ASF will contribute significantly not only to our understanding of
polar oceanography, but also to our ability to develop and operate large,
integrated ground data systems.

REFERENCES
Carsey, F. and W. Weeks, eds. (1989). "Science Plan for the Alaska SAR Facility," JPL
Publ. 89-14, Jet Propulsion Laboratory, Pasadena, CA.
Carande, R. E. and B. Charney (1988). "The Alaska SAR Processor," Proc. IGARSS
'88, Edinburgh, Scotland, ESA SP-284, pp. 695-698.
CEOS (1988). "Committee on Earth Observations Satellites: SAR Data Product Format
Standard," Rev. 2, ESA ESRIN, Frascati, Italy.
Colony, R. and A. S. Thorndike (1984). "An Estimate of the Mean Field of Arctic Sea
Ice Motion," J. Geophys. R., 89, pp. 10623-10629.
Eppler, D. T., L. D. Farmer and A. W. Lohanick (1986). "Classification of Sea Ice Types
with Single-Band (33.6 GHz) Airborne Passive Microwave Imagery," J. Geophys. R.,
91, pp. 10661-10695.
Holt, B., R. Kwok and E. Rignot (1990a). "Status of the Ice Classification Algorithm
in the Alaska SAR Facility Geophysical Processor System," Proc. IGARSS '90,
Washington, DC, pp. 2221-2224.
Holt, B., R. Kwok and J. Shimada (1990b). "Ocean Wave Products from the Alaska
SAR Facility Geophysical Processor System," Proc. IGARSS '90, Washington, DC,
pp. 1469-1473.
Kwok, R., J. C. Curlander, R. McConnell and S. S. Pang (1990). "An Ice-Motion
Tracking System for the Alaska SAR Facility," IEEE J. Oceanic Eng., 15, pp. 44-54.
Onstott, R. G., R. K. Moore and W. F. Weeks (1979). "Surface-Based Scatterometer
Measurements of Sea Ice," IEEE Trans. Geosci. Elec., GE-17, pp. 78-85.

APPENDIX D

NONLINEAR DISTORTION
ANALYSIS

For a linear, time invariant system, where the principle of superposition applies,
a stimulus such as a step or a series of sinusoids is a suitable input for a complete
characterization of the system. Assuming causality, the resulting impulse
response can be used to predict the output for any input from the standard
convolution integral:
r(t) = \int_0^\infty h(t') s(t - t') \, dt'                    (D.1)

where s(t) is the input signal, r(t) is the output, and h(t) is the impulse response
function (Appendix A). In a nonlinear system, however, the output function is
not a simple convolution using the input. A sinusoidal stimulus can produce
an output not only at the input (fundamental) frequency, but at all higher
harmonics of this frequency. Since the relative contribution of these harmonics
to the system response is dependent on the stimulus amplitude and frequency
characteristic, there can be no single transfer function that will predict the
response to a general input. Instead, a separate characterizing function would
be required for the response of the system to each amplitude and frequency of
input.
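The harmonic generation described above is easy to demonstrate numerically: drive a memoryless nonlinearity with a unit sinusoid and inspect the spectrum of the output. The function and the cubic soft limiter below are illustrative assumptions, not taken from the text:

```python
import numpy as np

def harmonic_amplitudes(nonlinearity, freq, fs=1024, n=1024):
    """Drive a memoryless nonlinearity with a unit-amplitude sinusoid
    and return the output amplitude at the fundamental and the second
    and third harmonics, illustrating why no single transfer function
    can characterize a nonlinear system.
    """
    t = np.arange(n) / fs
    r = nonlinearity(np.sin(2 * np.pi * freq * t))
    spec = np.abs(np.fft.rfft(r)) / n          # bin magnitude = amplitude/2
    bins = [int(round(k * freq * n / fs)) for k in (1, 2, 3)]
    return [spec[b] for b in bins]

# An odd (cubic) compression produces a third harmonic but no second:
# sin^3(x) = (3 sin x - sin 3x)/4, so x - 0.3 x^3 driven by sin gives
# 0.775 sin(wt) + 0.075 sin(3wt).
fund, second, third = harmonic_amplitudes(lambda x: x - 0.3 * x**3, 8.0)
```

An even nonlinearity (e.g., squaring) would instead put its distortion power into the second harmonic and a DC term, so the harmonic pattern itself depends on the stimulus amplitude and the shape of the nonlinearity.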
To circumvent this problem, the traditional approach in characterizing a
system is to assume linearity and to use a small amplitude test input to produce
a transfer function dependent only on the fundamental component of the
frequency response. This approach has been used extensively in the characterization
of nonlinear physiological systems (Rodieck, 1965; Tate, 1971; Toyoda, 1974).

In some respects, a linear characterization of a nonlinear system contradicts


the original intent in measuring the system performance. For this reason,
Marmarelis (1978) proposed the use of white noise analysis to characterize the
nonlinear behavior of systems.
The basic concept behind use of a Gaussian distributed white noise input is
that the system response can be evaluated across all possible inputs, spanning
both the system bandwidth and dynamic range. A nonlinear system response
can be represented as a power series with functionals (i.e., functions of functions)
as terms (Volterra, 1959). For any system with input s(t), the output r(t) can
be described by a Volterra functional series as
r(t) = \sum_{n=0}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} k_n(t'_1, \ldots, t'_n)\, s(t - t'_1) \cdots s(t - t'_n)\, dt'_1 \cdots dt'_n        (D.2)

where k_n(t_1, \ldots, t_n) are the Volterra kernels. The Volterra series description is
very powerful conceptually, but practically it is rarely used since no simple
method exists for calculating the kernels of the system. This problem can be
solved by using a functional series originally proposed by Wiener (1958) that
simplifies evaluation of the system kernels by making the terms of the Volterra
series orthogonal for a specific stimulus. Since, to exhaustively test a nonlinear
system, the stimulus must cover all possible amplitudes and frequencies over
which the system operates, Wiener chose a Gaussian white noise (GWN) input
from which to construct a hierarchy of orthogonal functionals. Setting the zero
order Wiener functional to a constant value h_0, i.e.,

G_0[h_0; s(t)] = h_0        (D.3)

the first functional is then given by

G_1[h_1; s(t)] = \int_0^\infty h_1(t')\, s(t - t')\, dt' + k_1        (D.4)

where h_1(t') is the first order Wiener kernel and k_1 is the constant term necessary
to make the first order functional orthogonal to h_0. To determine k_1, we solve
the equation

\mathscr{E}\{ G_0[h_0; s(t)]\, G_1[h_1; s(t)] \} = 0        (D.5)

where \mathscr{E}\{\,\cdot\,\} is the expected value. For an s(t) of zero mean, k_1 = 0 and the first
order functional becomes

G_1[h_1; s(t)] = \int_0^\infty h_1(t')\, s(t - t')\, dt'        (D.6)

which is simply the convolution integral. The first order functional, in fact, can
be used as a measure of the linearity of a system since the difference between
the measured nonlinear response and the predicted linear response is that system
component which cannot be characterized as linear.

For nonlinear systems, higher order functionals are required to describe the
system behavior. To construct the higher functionals, a procedure similar to
the Gram-Schmidt orthogonalization technique is used. The resulting second
and third order functionals are given by

G_2[h_2; s(t)] = \int_0^\infty \int_0^\infty h_2(t'_1, t'_2)\, s(t - t'_1)\, s(t - t'_2)\, dt'_1\, dt'_2 - P \int_0^\infty h_2(t'_1, t'_1)\, dt'_1        (D.7)

G_3[h_3; s(t)] = \int_0^\infty \int_0^\infty \int_0^\infty h_3(t'_1, t'_2, t'_3)\, s(t - t'_1)\, s(t - t'_2)\, s(t - t'_3)\, dt'_1\, dt'_2\, dt'_3
    - 3P \int_0^\infty \int_0^\infty h_3(t'_1, t'_2, t'_2)\, s(t - t'_1)\, dt'_1\, dt'_2        (D.8)

where P is the power spectral density of the white noise. The power level P of
the white noise input is assumed constant for all frequencies over which the
system operates and is equivalent to the Fourier transform of the autocorrelation
function of n(t). In a form similar to the Volterra series in Eqn. (D.2), the
response of a nonlinear system can be written in terms of Wiener functionals as

r(t) = \sum_{m=0}^{\infty} G_m[h_m(t'_1, \ldots, t'_m); s(t), t' < t]        (D.9)

where the functionals are now orthogonal and satisfy the equation

\mathscr{E}\{ G_i[h_i; s(t)]\, G_j[h_j; s(t)] \} = 0        (D.10)

for all i \neq j.

Because of this relationship between the functionals, the kernels can be easily
calculated as follows (Lee and Schetzen, 1965):

h_0 = \mathscr{E}\{ r(t) \}        (D.11)

h_1(t') = (1/P)\, \mathscr{E}\{ [r(t) - h_0]\, s(t - t') \}        (D.12)

h_2(t'_1, t'_2) = (1/2P^2)\, \mathscr{E}\{ [r(t) - G_1(t) - h_0]\, s(t - t'_1)\, s(t - t'_2) \}        (D.13)

h_n(t'_1, \ldots, t'_n) = (1/n!\,P^n)\, \mathscr{E}\Big\{ \Big[ r(t) - \sum_{k=0}^{n-1} G_k(t) \Big] s(t - t'_1) \cdots s(t - t'_n) \Big\}        (D.14)
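Eqns. (D.11) and (D.12) translate directly into time-average estimators when the GWN input and the response are sampled. A discrete-time sketch (assuming unit-rate sampling, so that P equals the input variance; names are illustrative):

```python
import numpy as np

def wiener_kernels_01(s, r, P, max_lag):
    """Estimate the zero and first order Wiener kernels from a sampled
    GWN input `s` and measured response `r` by the Lee-Schetzen
    cross-correlation method (Eqns. D.11 and D.12)."""
    h0 = r.mean()                                   # Eqn. (D.11)
    rc = r - h0
    n = len(s)
    h1 = np.array([                                 # Eqn. (D.12)
        (rc[lag:] * s[:n - lag]).mean() / P
        for lag in range(max_lag)
    ])
    return h0, h1
```

For a purely linear system h_1 recovers the impulse response; for a nonlinear system it recovers the best first order (Wiener) approximation, and the residual r(t) - G_0 - G_1 feeds the higher order kernel estimates of Eqns. (D.13) and (D.14).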


Therefore, using Eqn. (D.9) the output r(t) of any nonlinear system can be
exactly characterized for any input signal s(t). The number of terms required
in the summation depends on the degree of nonlinearity of the system. Typically
three to four terms are sufficient.

REFERENCES
Lee, Y. W. and M. Schetzen (1965). "Measurement of the Wiener Kernels of a Nonlinear
System by Cross-correlation," Int. J. Control, 2, pp. 237-254.
Marmarelis, P. Z. and V. Z. Marmarelis (1978). Analysis of Physiological Systems: The
White Noise Approach, Plenum Press, New York.
Rodieck, R. W. (1965). "Quantitative Analysis of Cat Retinal Ganglion Cell Response
to Visual Stimuli," Vision Res., 5, pp. 583-601.
Tate, C. and M. M. Woolfson (1971). "On Modeling Neural Networks in the Retina,"
Vision Res., 11, pp. 167-633.
Toyoda, J. (1974). "Frequency Characteristics of Retinal Neurons in the Carp," J. Gen.
Physiol., 63, pp. 214-234.
Volterra, V. (1959). Theory of Functionals and of Integral and Integro-Differential
Equations, Dover Publications, New York.
Wiener, N. (1958). Nonlinear Problems in Random Theory, Wiley, New York.

BIBLIOGRAPHY

Among some materials which are of general interest and help in the study of
SAR are the following.
Books

Apel, J. R. Principles of Ocean Physics, Academic Press, New York, 1987. Chapter 8 is
devoted to scattering of electromagnetic waves from the sea surface.
Barton, D. K. Modern Radar System Analysis, Artech House, Norwood, MA, 1988.
Comprehensive coverage of topics in conventional radar systems.
Elachi, C. Introduction to the Physics and Techniques of Remote Sensing, Wiley, New
York, 1987. A tutorial overview of the field.
Elachi, C. Spaceborne Radar Remote Sensing: Applications and Techniques, IEEE Press,
New York, 1988. Technical description of principles and instruments for radar imaging,
altimetry, and scatterometry.
Fitch, J. P. Synthetic Aperture Radar, Springer-Verlag, New York, 1988. A concise
well-illustrated survey.
Harger, R. 0. Synthetic Aperture Radar Systems, Theory and Design, Academic Press,
New York, 1970. A detailed treatment from the point of view of communication theory
and signal processing.
Hovanessian, S. A. Introduction to Synthetic Array and Imaging Radars, Artech House,


Dedham, MA, 1980. A concise presentation of the theory.
Kovaly, J. J. Synthetic Aperture Radar, Artech House, Dedham, MA, 1976. An annotated
collection of reprints of key journal articles 1961-1975.
Levanon, N. Radar Principles, Wiley, New York, 1988. Concise presentation of topics
in conventional radar.

619


Skolnik, M. I. Introduction to Radar Systems, 2nd Ed., McGraw-Hill, New York, 1980.
Broad range of topics in radar, with extensive pointers to the literature.
Ulaby, F. T., R. K. Moore and A. K. Fung, Microwave Remote Sensing: Active and Passive.
Vol. 1, Microwave Remote Sensing Fundamentals and Radiometry, Addison-Wesley,
Reading, MA, 1981. Vol. 2, Radar Remote Sensing and Surface Scattering and Emission
Theory, Addison-Wesley, Reading, MA, 1982. Vol. 3, From Theory to Applications,
Artech House. Encyclopedic. Remote sensing physics, with a long chapter on SAR
(Vol. 2, Ch. 9).
Wehner, D.R. High Resolution Radar, Artech House, Norwood, MA, 1987. Techniques
for wideband radar systems, including synthetic aperture. Special attention to inverse
synthetic aperture (ISAR) systems.
Survey Articles
Ausherman, D. A., A. Kozma, J. L. Walker, H. M. Jones and E. C. Poggio, "Developments
in radar imaging," Trans. IEEE Aero. and Elec. Sys., AES-20(4), 1984, pp. 363-400.
Barber, B. C. "Theory of digital imaging from orbital synthetic-aperture radar," Int. J.
Rem. Sens., 6(7), 1985, pp. 1009-1057.
Elachi, C., T. Bicknell, R. L. Jordan and C. Wu, "Spaceborne synthetic-aperture
imaging radars: Applications, techniques, and technology," Proc. IEEE, 70( 10), 1982,
pp. 1174-1209.
Moore, R. K. "Radar fundamentals and scatterometers," Chapter 9 in Manual of Remote
Sensing, 2nd Ed., Vol. I Theory, Instruments, and Techniques (Colwell, R. N.,
D. S. Simonett and F. T. Ulaby, eds.), American Society of Photogrammetry, Falls
Church, VA, 1983.
Moore, R. K. "Imaging radar systems," Chapter 10 in Manual of Remote Sensing, 2nd
Ed., Vol. I Theory, Instruments, and Techniques (Colwell, R. N., D. S. Simonett and
F. T. Ulaby, eds.), American Society of Photogrammetry, Falls Church, VA, 1983.
Tomiyasu, K. "Tutorial review of synthetic-aperture radar (SAR) with applications to
imaging of the ocean surface," Proc. IEEE, 66(5), 1978, pp. 563-583.
Reports
Cimino, J. B., B. Holt and A. H. Richardson, "The Shuttle Imaging Radar B (SIR-B)
Experiment Report," Publ. 88-2, Jet Propulsion Lab., Pasadena, March 15, 1988.
Ford, J. P., R. G. Blom, M. L. Bryan, M. I. Daily, T. H. Dixon, C. Elachi and E. C. Xenos,
"Seasat Views North America, the Caribbean, and Western Europe with Imaging
Radar," Publ. 80-67, Jet Propulsion Lab., Pasadena, November 1, 1980.
Ford, J. P., J. B. Cimino, B. Holt and M. R. Ruzek, "Shuttle Imaging Radar Views the
Earth From Challenger: The SIR-B Experiment," Publ. 86-10, Jet Propulsion Lab.,
Pasadena, March 15, 1986.
Fu, L.-L. and B. Holt, "Seasat Views Oceans and Sea Ice With Synthetic-Aperture
Radar," Publ. 81-120, Jet Propulsion Lab., Pasadena, February 15, 1982.
Kasischke, E. S., G. A. Meadows and P. L. Jackson, "The Use of Synthetic Aperture
Radar Imagery to Detect Hazards to Navigation," Environmental Research Institute
of Michigan, Ann Arbor, 1984.
Pravdo, S. H., B. Huneycutt, B. M. Holt and D. N. Held, "Seasat Synthetic-Aperture
Radar Data User's Manual," Publ. 82-90, Jet Propulsion Lab., Pasadena, March 1,
1983.


"Spaceborne Imaging Radar Symposium, January 17-20, 1983," Puhl. 83-11, Jet
Propulsion Lab., Pasadena, July 1, 1983.
"The Second Spaceborne Imaging Radar Symposium, April 28~30, 1986," Puhl. 86-26,
Jet Propulsion Lab., Pasadena, December l, 1986.
"Shuttle Imaging Radar-C Science Plan," Pub!. 85-29, Jet Propulsion Lab., Pasadena,
September 1, 1986.

MATHEMATICAL SYMBOLS

c
d
d( (), "')
D

Do


D'
D'
D'(θ, φ)

D,,
e
E

a
a;
A
A( ro)
Ad

A.
A,
A.
A.
b
b;
B

B(ro)
B2 .
B(f,R)
B(f, v)


Semimajor axis of elliptical orbit


Fourier coefficients of amplitude spectrum A(ro)
Antenna geometric aperture
Amplitude spectrum
Antenna directive aperture= p.A
Antenna effective aperture= pA = p.Ad
Antenna receiving aperture = A. for matched antenna
Area of radiant surface
Platform acceleration
Attitude adjusted boresight unit vector
Fourier coefficients of phase spectrum B( ro)
Frequency bandwidth
Source brightness, W /sr Hz m 2
Phase spectrum
Constant in earth gravitational potential = -4.405 × 10^10 m^2
Doppler spectrum of secondary range compressed data
Two-dimensional spectrum of secondary range compressed
data
Doppler bandwidth
Processed Doppler bandwidth
Receiver noise bandwidth
Range bandwidth
Spatial bandwidth = B0 / V.,
Speed of light= 2.998 x 10 8 m/s

FL
Fop
Fsys

g(f, R)
g(s, R)
g(s, t)

goa
gor

gua
g(J


Number of computations per input sample


Data compression factor
Normalized antenna directivity pattern = D(θ, φ)/D
Antenna directivity on axis common to transmit and receive
Diameter of circular antenna
Transmitting directivity for uniform illumination = 4πA/λ²
Antenna receiving directivity (on axis)
Antenna transmitting directivity (on axis)
Antenna transmitting directivity pattern = G'(θ, φ)/ρₑ
Differential nonlinearity measure for A/D converter
Eccentricity of orbit
Eccentric anomaly of satellite orbit position
Electric field
System energy
Mathematical expectation operator
Earth flattening factor, 1/ f = 298.257
Frequency
True anomaly of satellite orbit position
Channel imbalance terms in quad polarized SAR system
Sampled signal
Doppler signal
Carrier frequency
Doppler frequency shift
Doppler frequency at beam center
Nominal value of beam center Doppler frequency
Local oscillator frequency
Mid-band frequency
Pulse repetition frequency = 1/T_p
Doppler frequency rate
Sampling frequency for complex range signal
Sampling frequency for real range signal = 2f_s
Frequency of stationary phase
Offset video frequency
Noise factor
Noise figure = 10 log F
Noise factor of attenuator
Operating noise factor
System noise factor
Doppler spectrum of range compressed complex data g(s, R)
Range compressed complex data with R = ct/2
Range compressed complex data
Azimuth oversampling factor
Range oversampling factor
Azimuth undersampling factor
Oversampling factor required for image rotation


Ga
Ge
G1po1
Gop

G,
G'
GSTc
GI
Gxpo1

G(f, v)
G(s, v)
G(θ, φ)
h
h(x)

h(s, Rls., R 0 )
H

Ho
i
I

J;
k
k'

ka

k,
K

K(R)


Antenna power gain on axis common to transmit and receive


Available power gain
Power gain of transponder electronics
Like-polarized on-axis antenna power gain
Power gain in operation
Receiver power gain
Antenna on-axis power gain for receiving
Sensitivity time control gain function
Antenna on-axis power gain for transmitting
Cross-polarized on-axis antenna power gain
Two-dimensional spectrum of range compressed data g(s, R)
Range spectrum of complex range compressed data g(s, R)
Antenna one-way power gain pattern = 4πP(θ, φ)/P_t
Planck's constant = 6.62 × 10⁻³⁴ J s
Target elevation above terrain model
Correlator output
SAR system Green's function
Altitude
Magnetic field
Zero order entropy
Complex image
Electromagnetic intensity (W/m²)
Image intensity = |ζ̂|²
Mean of image intensity = ⟨|ζ̂|²⟩
Electromagnetic intensity for isotropic radiator
Spectral radiant intensity, W/Hz sr
Bessel function of first kind and order i
Boltzmann's constant = 1.38 × 10⁻²³ J/K
Free space wavenumber = 2π/λ
Wavenumber in matter = k√ε_r
Azimuth fractional scale error
Range fractional scale error
Frequency rate of linear FM waveform
Calibration factor between received signal power and
backscatter coefficient
Calibration factor between image signal power and backscatter
coefficient
Effective chirp rate in secondary range compression
Mainlobe broadening factor
Attenuation loss
Number of looks
Spectral radiance, W/m² Hz sr
Antenna length
Azimuth reference function length in samples
Loss due to antenna efficiency = 1/ρₑ

k.J

nb

nB
nd
nh

nP
n1

N
Na
Naz

NA
Nb
Ne
Next

N;a
N;n1

Ni
Noa
Nq

N,
N.
Ns
p

P( fJ, </J)
Pav

Pb
P;a

pn
p~

Poa

P,
p~
Prad

P.

P!
pt
qd
qt
r


Mismatch loss
Range reference function length in complex samples
Radar system electronics losses
Number of quantization levels
Number of bits in block floating point quantized sample
Mean anomaly of satellite orbital position
Number of quantizer bits per sample
Number of bytes per sample
Complex raw data number
Number of slant range pulses to horizon
Complex pixel data number
Number of bits in block floating point quantizer threshold
One-sided noise power spectral density, W/Hz
Available noise power spectral density, W/Hz
Azimuth processing block length in samples
Samples per azimuth aperture
Bit error noise
Number of range processing blocks
Available external noise power spectral density
One-sided available input noise power spectral density
Effective output available noise power density of self noise
Data samples in two-dimensional correlator pattern
One-sided available output noise power spectral density
Quantization noise
Complex samples per range pulse; Range processing block length
Source noise power spectral density
Saturation noise
Sidereal period of orbit
Antenna radiation pattern, W/sr
Average power over TP
Data link bit error probability
Input available power (W)
Receiver noise power
Image noise power at a resolution cell
Output available power (W)
Receiver total power = P_s + P_n
Image power at a resolution cell
Antenna radiated power
Receiver signal power
Image signal power at a resolution cell
Power delivered to antenna structure
Instrument duty cycle fraction
Fractional real time rate
Platform roll angle
Platform roll angle rate


roL
ri
rt
R

R
gt

R.
Re
Rg

Rm
Ro
RP
R.
R.
Rt
Rt
R1(h)

s
Sa
Sa;

s.
sd

Si
SK

s
s

ff

so
Su

SNRi
SNRO
tr

T
!Y

1d1
7;,
T..nt

r.


Downlink data rate


Instantaneous data rate
Data transfer rate
One of the polar coordinates (R, θ, φ)
Slant range = |R_s - R_t|
Slant range unit vector
Quad polarized radar receive system matrix
Slant range at beam center
Equatorial radius of oblate earth model = 6378.137 km
Ground range
Midswath slant range
Slant range of closest approach
Polar radius of oblate earth model = R_e(1 - f)
Length of satellite position vector from earth center
Satellite position vector from earth center {x_s, y_s, z_s}
Length of target position vector from earth center
Target position vector from earth center {x_t, y_t, z_t}
Target geocentric position vector {x_h, y_h, z_h} at height h
relative to reference ellipsoid
Azimuth time
Standard deviation
Ambiguous signal power at receiver output
Ambiguous receiver output signal power in ith record
window interval
Azimuth time of target at beam center
Spacecraft clock drift relative to ground standard
Desired pulse echo receiver output power in ith record
window interval
Standard deviation of parameter K
Slow time point of stationary phase
Unit vector along antenna phase gradient
Azimuth integration time of focussed processor
Target complex scattering matrix
Receiver total output signal power
Azimuth integration time of unfocussed processor
Signal to noise ratio at receiver input
Signal to noise ratio at receiver output
Time of stationary phase
Temperature
Time of perigee passage of satellite
Quad polarized radar transmit system characterization
Datatake duration
Standard temperature = 290 K
Effective input noise temperature of antenna losses
Equivalent input noise temperature = N_int/kG_a


T.xt
~
Tn
1;,p

i;,
'I;,hys

T.

T,,(O, </J)
v
Vi
vr(s,t)
t)r(s,t)

v.
v.

V.(s, v)

v.
V.1
V.w

Vt

w..

WA

W..z
~

w.

Wa
Wg

x
xc Xi

Xo
Xi

x
Y1

z,

Zo
IX

<Xj


Noise temperature of external sources


Temperature of ground
Noise temperature
Operating noise temperature = T_a + T_e
Pulse repetition period
Physical temperature of system
Source noise temperature
Sky noise temperature distribution
Fourier transform variable corresponding to slant range
Analog to digital convertor digital reconstruction levels
Real receiver voltage waveform
Complex basebanded receiver voltage waveform
Speed parameter in azimuth chirp constant model
Earth equatorial speed = ω_e R_e
Range spectrum of complex basebanded data
Platform speed
Platform velocity
Relative speed between platform and target
Swath speed
Target velocity
Azimuthal two-way antenna power pattern in Doppler domain
Antenna height
Peak signal loss factor due to azimuth filter weighting
Azimuth matched filter reference function weight
Ground range swath width
Range matched filter reference function weight
Peak signal loss factor due to range filter weighting
Slant range swath width
Azimuth spatial coordinate
System state variable
Azimuth coordinate at beam center
Analog to digital convertor quantization levels
Azimuth coordinate at closest approach
Target coordinate (earth centered)
Azimuth spatial aperture
Target coordinate (earth centered)
Unit vector along antenna axis
Target coordinate (earth centered)
Sample value of magnitude of complex image = |ζ̂|
Received signal matrix for quad polarized radar
Characteristic impedance of free space = √(μ₀/ε₀)
Direction cosine
Terrain slope
Orbit inclination angle
Image orientation angle relative to map grid north reference
frame


<5( t)
<5p
'51, '52, '53, c54
<5fo
<5R
<5Rg
c5x
c5xaz
<5xgr
<5x1
<5xm
c5xP
<5x.
c5(}
An.,

AR
ARg
AR.
Ax
AXaz
Ax,8
8

80
8'
8"
8,

8x
8y
8z

'
'
''t

(c

11
(}

Chirp rate mismatch factor = |δK/K|


Look angle of radar beam above nadir
Dirac impulse function
Skin depth of electromagnetic penetration
Cross-talk terms for quad-polarized SAR system
Doppler resolution
Slant range resolution after pulse compression
SAR ground range resolution after pulse compression
SAR azimuth resolution
Azimuth pixel spacing
Ground range pixel spacing
Pixel spacing in line direction
Digital elevation map input pixel spacing
Pixel spacing in sample direction
Slant range pixel spacing
Angular resolution
Target displacement in slant range pixels from true
location
Focusing phase term: R = R_0 + ΔR
Ground range extent of uncompressed radar pulse
Slant range extent of uncompressed radar pulse = cτ_p/2
Azimuth extent of beam footprint
Along-track displacement of azimuth ambiguity relative to
true target location
Cross-track displacement of azimuth ambiguity relative to
true target location
Coefficient of variation
Emissivity of surface
Permittivity of matter
Permittivity of free space = (1/36π) × 10⁻⁹ F/m
Real part of relative dielectric constant
Imaginary part of relative dielectric constant
Relative dielectric constant = ε/ε₀
Along track sensor position error
Cross track sensor position error
Radial sensor position error
Complex voltage backscatter coefficient (Fresnel coefficient)
Geodetic latitude
Estimate of Fresnel coefficient ζ (complex image)
Geocentric latitude
Satellite latitude (spherical coordinate)
Target latitude (spherical coordinate)
Incidence angle between radar beam and surface normal
Antenna horizontal pattern angle
One of the polar coordinates (R, θ, φ)


Ou

o.

Ov
l
A

o
p
Pa
Pe
(]

ii
<Jo

ao
r
't'c
't'e
t1
to

rP
't'RP
't'w

</>

</>q

x
Xe

x.
Xi

"'

w
We


Horizontal beamwidth = λ/L_a
Radar beam squint angle relative to broadside
Vertical beamwidth = λ/W_a
Carrier wavelength
Bragg wavelength
Gravitational constant times earth mass = 3.986 × 10¹⁴ m³/s²
Permeability of free space = 4π × 10⁻⁷ H/m
Antenna efficiency = ρₐρₑ
Reflectivity
Aperture efficiency of antenna
Radiation efficiency of antenna
Target cross section (m²)
Mean cross section of extended homogeneous region
Specific backscatter coefficient = σ/dA
Mean specific backscatter coefficient = ⟨σ⁰⟩
Range delay time
Azimuth integration time
Pulse delay in radar electronics relative to vacuum
Pulse delay through ionosphere relative to vacuum
Integration time interval
Radar pulse length in time
Receiver protect window extension about transmitted pulse
Range sampling time window
Antenna elevation pattern angle
One of the polar coordinates (R, θ, φ)
Phase angle
Quadratic phase error at aperture edge
Geodetic longitude
Geocentric longitude
Satellite longitude (spherical coordinate)
Target longitude (spherical coordinate)
Angle of Fresnel coefficient ζ
Spectrum phase function
Argument of perigee of orbit
Radian frequency = 2πf
Earth angular velocity, ω_e = 7.29212 × 10⁻⁵ rad/s
Longitude of ascending node of orbit
Solid angle, sr

LIST OF ACRONYMS

1D  One-Dimensional
2D  Two-Dimensional
A/C  Aircraft
AASR  Azimuth Ambiguity to Signal Ratio
ADC  Analog to Digital Convertor
ADCT  Adaptive Discrete Cosine Transform
ADSP  Advanced Digital SAR Processor
AGC  Automatic Gain Control
AOS  Archive and Operations System
APL  Applied Physics Laboratory
ASF  Alaska SAR Facility
ASI  Italian Space Agency
BAQ  Block Adaptive Quantizer
BER  Bit Error Rate
BFPQ  Block Floating Point Quantization
BITE  Built-In Test Equipment
bps  Bits per Second
Bps  Bytes per Second
CAL  Calibration Subsystem
CAT  Catalog and Database Subsystem
CCRS  Canadian Centre for Remote Sensing
CCSDS  Consultative Committee on Space Data Systems
CCT  Computer Compatible Tape
CE  Computational Element
CEOS  Committee on Earth Observations Satellites
CMOS  Complementary Metal Oxide Semiconductor
COR  SAR Correlator Subsystem
CPU  Central Processing Unit
CRT  Cathode Ray Tube
CW  Continuous Wave
DAC  Digital to Analog Convertor
DC  Direct Current
DDL  Dispersive Delay Line
DEC  Digital Equipment Corporation
DEM  Digital Elevation Map
DFT  Discrete Fourier Transform
DLR  Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (German Aerospace Research Establishment)
DMA  Direct Memory Access
DN  Data Number
DOD  Digital Optical Disk
DSP  Digital Signal Processor
DWP  Data Window Position
E-ERS-1  European Remote Sensor
ECL  Emitter Coupled Logic
EHF  Extremely High Frequency
EIRP  Effective Isotropic Radiated Power
EM  Electromagnetic
EOS  Earth Observing System
ERIM  Environmental Research Institute of Michigan
ERS  European Remote Sensor
ESA  European Space Agency
FDC  Frequency Domain Convolution
FDDI  Fiber Distributed Data Interface
FET  Field Effect Transistor
FFT  Fast Fourier Transform
FLOP  Floating Point Operations
FLOPS  Floating Point Operations per Second
GaAs  Gallium Arsenide
GPS  Geophysical Processor System; Global Positioning System
GSFC  Goddard Space Flight Center
HDDR  High Density Digital Recorder
HDDT  High Density Digital Tape
HF  High Frequency
HP  Hewlett-Packard
HPA  High Power Amplifier
HSC  High Speed Channel
I, Q  In-Phase, Quadrature
I/O  Input/Output
IBM  International Business Machines
IEEE  Institute of Electrical and Electronics Engineers
IF  Intermediate Frequency
IPP  Inter-Pulse Period
ISLR  Integrated Sidelobe Ratio
J-ERS-1  Japanese Earth Resources Satellite
JHU  Johns Hopkins University
JPL  Jet Propulsion Laboratory
JSC  Johnson Space Center
LNA  Low Noise Amplifier
LTPP  Linear Three Point Predictor
MDA  MacDonald-Dettwiler and Associates
MGN  Magellan
MIMD  Multiple Instruction Multiple Data
MIT  Massachusetts Institute of Technology
MMIC  Microwave Monolithic Integrated Circuit
MTI  Moving Target Indicator
NASA  National Aeronautics and Space Administration
NASDA  National Space Development Agency of Japan
pdf  Probability Distribution Function
PE  Processor Element
PIN  P-region Intrinsic-region N-region
PN  Pseudonoise
PPI  Plan Position Indicator
PRF  Pulse Repetition Frequency
PS  Polar Stereographic
PSLR  Peak SideLobe Ratio
QA  Quality Assurance
QPSK  Quaternary Phase Shift Keying
RADAR  RAdio Detection And Ranging
RAE  Royal Aircraft Establishment
RAM  Random Access Memory
RAR  Real Aperture Radar
RASR  Range Ambiguity to Signal Ratio
RCS  Radar Cross Section
RF  Radio Frequency
RGS  Receiving Ground Station
rms  Root Mean Square
S/C  Spacecraft
SAR  Synthetic Aperture Radar
SAW  Surface Acoustic Wave
SCNR  Signal to Compression Noise Ratio
SCR  Signal Corps Radar
SDNR  Signal to Distortion Noise Ratio
SIMD  Single Instruction Multiple Data
SIR  Shuttle Imaging Radar, Spaceborne Imaging Radar
SLAR  Side-Looking Aperture Radar
SNR  Signal to Noise Ratio
SPAN  Space Physics Analysis Network
SPECAN  Spectral Analysis
SPOT  Système Probatoire d'Observation de la Terre
SPS  SAR Processor System
SQNR  Signal to Quantization Noise Ratio
STALO  Stable Local Oscillator
STC  Sensitivity Time Control
T/R  Transmit/Receive
TB  Time Bandwidth Product
TDC  Time Domain Convolution
TDRSS  Tracking and Data Relay Satellite System
TM  Thematic Mapper
TWT  Traveling Wave Tube
UHF  Ultra High Frequency
USGS  United States Geological Survey
UTM  Universal Transverse Mercator
VF  Video Frequency
VGA  Variable Gain Amplifier
VHF  Very High Frequency
VHSIC  Very High Speed Integrated Circuit
VLSIC  Very Large Scale Integrated Circuit
VQ  Vector Quantizer
VSWR  Voltage Standing Wave Ratio
WORM  Write Once Read Many

INDEX

A-scan, 72-73
Aarons, J., 315
Absolute location of target:
algorithm, 374-376, 600
error sources, 345, 377-382
Acceleration of satellite, 569-571
Adaptive discrete cosine transform (ADCT),
493-496
Advanced Digital SAR Processor (ADSP),
456-458, 600, 601
Agarwal, R. C., 560
Alaska SAR Facility (ASF), 13, 592-614
Archive and Operations System (AOS),
603-605
Geophysical Processor System (GPS),
605-613
Mission Planning Subsystem (MPS), 603
Receiving Ground Station (RGS), 596-597
SAR Processor System (SPS), 458, 598-603
station reception mask, 595
Aliasing:
of Doppler spectra, 238, 241
in image resampling, 389, 396, 482
of sampled signal, 211, 544-545
in step transform, 510-511
Allan variance, 263-264
Alliant computer, 466, 486, 487
ALMAZ, 12-13
Alpers, W., 52

Ambiguity, 88-91, 296-305


azimuth, 20, 88-89, 224, 298-303
effect on processing bandwidth
selection, 434-435
effect on target location, 375-376
wavelength dependence, 240-241,
301-303
function (of transmitted pulse waveform),
132-134, 149
linear FM pulse, 134
minimum antenna area constraint, 21, 274
range, 20, 89-91, 171, 303-305
resolution of ambiguity cycle, 240-246
using multiple PRFs, 242-246, 302-303
using subaperture correlation, 238-240,
302, 600
Ampex DCRSi, 598
Analog-to-digital converter (ADC),
279-283, 291-294
Anechoic chamber, 337, 346
Angle:
aspect, 156, 217
dive, 389
grazing, 303-304
squint, 207, 208, 503, 526, 571, 583, 590
Anomaly:
eccentric, 576, 577
mean, 576, 577
true, 576, 577


Antenna:
active array, 276-277, 318-319, 335-336, 357
aperture efficiency, 83, 273
aperture field distribution, 78-79, 85, 94-96
ASF ground station, 597, 598
beamwidth, 87-88
cross-polarized pattern, 278, 351
cross-talk, 351
current distribution function, 150
directional temperature, 105
directivity pattern, 77, 83-91, 104, 273, 335
effective aperture, 95
effective isotropic radiated power (EIRP),
74, 341
Fraunhofer region, 79
Fresnel region, 78
gain function, 76, 80-84, 127, 223, 273
Goldstone ground station, 176, 178, 179,
180, 181
microstrip phased array, 274-275
minimum ambiguity area, 21, 274
noise, 106-108
polarization purity, 278
power pattern, 81
quad-polarized design, 274-278
radiation efficiency, 74, 82-84, 106, 273
reciprocity, 95, 341, 352, 364
sidelobes, 86-88
slotted waveguide array, 275
two-way power pattern, 228
uniformly illuminated aperture, 85-86, 88
Yagi, 34
AOS (Archive and Operations System), 592,
603-605
Apogee, 573, 576
Apollo Lunar Sounder Experiment, 34-38
command module diagram, 38
optical recorder, 37
Appiani, E. G., 470
Applied Physics Laboratory, see Johns
Hopkins University Applied Physics
Laboratory
Aptec, 464
Archive and Catalog Subsystem, 603
Archive and Operations System (AOS), 592,
603-605
Argument of perigee, 573, 576
Ascending node, 576
Ascending node of satellite orbit, 576
ASF, see Alaska SAR Facility
Aspect angle, 156, 217
Atmospheric:
absorption spectrum, 5, 48
amplitude scintillation, 315


Faraday rotation, 163, 315, 317
oxygen absorption band, 5
propagation (group) delay, 315, 317,
379-381
pulse dispersion, 317
refraction, 303-304
water vapor absorption band, 5, 315
Attema, E., 329, 330, 331
Aushermann, D. A., 504
Autocorrelation function of image, 232-233
Autofocus, 222, 234-237, 320
in ASF processor, 600
by image contrast, 234
in polar processor, 529-534
by subaperture image correlation, 235-237
Automatic gain control, 271-273
AVHRR, 611
Azimuth:
ambiguities, 20, 88-89, 298-303
bandwidth time product, 170, 223
compression filter parameters, 588-591
bandwidth, 434-435
length (in samples), 435
filter spectra, 238
frequency domain processing, 196-208
resolution, 169-171, 524
signal processing overview, 167-169
skew, 190-193, 390-392, 603
spectral analysis, 100
time domain processing, 187-196
Azimuthal range compression, 203
Backscatter coefficient:
mean, 93, 122, 139, 214, 322
specific, 91-92, 136
Bandlimiting, 238, 546
filter, 545
low pass signal interpolation, 561
Bandpass waveform, 541
Bandwidth of azimuth processor, 434-435
Bandwidth time product:
azimuth, 169, 174, 206, 223, 503
of matched filter, 130, 132
range, 135, 146, 150, 161, 185, 200, 528
Barber, B. C., 159, 161, 187, 211, 222, 234,
528, 580, 583, 586, 590
Barker codes, 265
Barnett, I. A., 243
Barrick, E. E., 51
Barrow, H. G., 422
Barton, D. K., 71, 73
Basebanding:
complex, 136, 166, 183, 185
of pulse compressed data, 213, 224

Bayman, R. W., 297


BBN Labs, 470
Beckman, P., 285
Bennett, J. R., 33, 187, 190, 194, 223, 226, 234
Bergland, G. D., 559
Berkowitz, R. S., 257
Bernoulli's theorem, 285
Bessel function, 150, 257
Binary phase codes, 265
Bistatic radar, 96
Bit error rate (BER), 286, 600. See also Noise,
bit error
Black body radiation, 102, 104-105
Blahut, R. E., 561
Blanchard, A., 318
Blitzer, D. L., 31
Block adaptive quantization (BAQ), 289-293
Block floating point quantization (BFPQ),
289-294
Bohm, D., 97
Boltzmann's constant, 75, 97
Boresight unit vector, 430
Bracewell, R. N., 149
Bragg scattering, 51-52, 321, 365
Brigham, E. O., 558
Brightness temperature, 102-104
British Aerospace of Australia, 464
British Royal Aircraft Establishment (RAE),
155
Brookner, E., 27, 163, 317, 380
Brown, W. M., 241
Browse (of SAR imagery):
requirements, 488-489
system design, 487-489
Brunfeldt, D. R., 341, 342
B-scan, 27
Burrus, C. S., 560
Butler, D., 10, 251
Butler, M., 265
Butterfly operation in FFT, 556
Calibration:
geometric, see Geometric calibration
polarimetric, see Polarimetric radar
calibration
radiometric, see Radiometric calibration
Calibration processor, 353-357
data flow, 354
processing flowchart, 354-355
Canadian Center for Remote Sensing
(CCRS), 33, 35
Canadian Space Agency, 12, 44, 592
Canny, J. F., 419, 420, 421
Canny edge detector, 419-421


Carande, R. E., 601
Carlson, A. B., 283
Carsey, F., 592
Carver, K., 59, 274
Cassini Titan Radar Mapper (CTRM), 40, 42
Causal system, 538, 615
CCRS (Canadian Center for Remote
Sensing), 33-35
Central limit theorem, 215, 260
Centre National d'Etudes Spatiales (CNES),
40
CEOS standard, 603
Chain Home network, 27
Chang, C. Y., 197, 238, 242, 243, 245, 367,
436, 443, 493
Chang, P. C., 499
Chen, W. H., 494
Chinese Academy of Sciences, 40
Chinese remainder theorem, 243, 302
Chirp:
effective rate, 205
pulse, 133
replica (for calibration), 329-331, 364, 600
step, 133
Chu, S., 560
Circular convolution, 549-553
Circulator, ferrite, 276
Clustering algorithm, 611
Clutterlock, 190, 222-234, 301, 320
in ASF processor, 600
using Doppler spectral analysis, 224-226
using energy balance, 227-228
using minimum variance estimator,
228-231
using time domain correlation, 231-234
Clutter rejection filter, 222
Coarse aperture time, 517
Coefficient ordering, 557
Coefficient of variation, 325
Coherent detection, 185
Colony, R., 607
Colwell, R. N., 9, 44, 73
Committee on Earth Observation Satellites
(CEOS), 603
Common node processor architecture,
460-467
common signal processor, 460, 467
functional block diagram, 461
I/O transfer rates, 461-463
Compression processing, 146-148, 183-209
algorithm overview, 165-169
azimuth matched filtering algorithms,
187-208
computational analysis, 443-445

frequency domain, 196-208
time domain, 187-196
filter mismatch, 163, 174
filter parameters, 430-436, 588-591
range matched filtering algorithm,
182-187
computational analysis, 449-452
digital formulation, 210-214
filter coefficients, 213-214, 436
spectral analysis algorithms, 440-443,
503-534
Compression ratio of the processor, 359
Concurrent processor architecture, 467-473
EMMA multiprocessor, 470-472
MIMD processor arrays, 469-473
SIMD processor arrays, 468-469
Convolution:
analog, 537-538
bandlimited signals, 546-547
circular, 549-553
discrete, 545-553
fast, 601
linear, 545-553
overlap add method, 551, 552
overlap save method, 552
Convolutional code, 284
Cook, C. E., 133, 145, 146, 147, 149, 150, 151,
173, 174, 175, 265
Cooley, J. W., 554, 558
Coordinate system:
of satellite orbit, 571-572
of signal data, 157-159
Cordey, R. A., 52
Corner reflectors, 338-340
beamwidths, 339-340
device errors, 339
radar cross section, 339-340, 352
Corner turn:
algorithm, 182
memory size, 436
Corr, D. G., 330
Correlator design, 428-473
computational analysis, 437-452
hardware architecture, 452-473
post-processor, 473-487
requirements definition, 428-436
Cray computer, 452, 466, 486
Crochiere, R. E., 563
Cross-correlation:
for autofocus, 235
coefficient, 232-233
hard limited function, 234
for multisensor image registration, 416,
418, 422


Cumming, I. G., 238
Curlander, J. C., 223, 227, 243, 357, 361,
374, 390
Cutrona, L. J., 20, 30, 32, 123
Data compression:
adaptive discrete cosine transform,
493-496
block quantization techniques, 289-294
of downlink data, 288-289
efficient coding, 289-294, 462
lossless, 288-289
lossy, 288-289, 493-499
vector quantization, 495-499
Data level definitions, 251-252, 254-255, 353
Data synchronization, 594
Davis, D. N., 460
Davis, W. A., 422
Decimation:
in frequency, 555
of sampled signals, 563
in time, 555, 556
Defocussing (Doppler rate mismatch), 163,
174
Depth of focus, 173-176, 195
Deramp processing:
computational analysis, 440-443
polar processing, 519-534
range processing, 503-507
step transform, 508-518
Deutsch, L., 284
DFT (discrete Fourier transform), 238, 549,
550
Dielectric constant, 48, 55, 60, 62, 63, 137, 612
Diffraction integral, 77-79
Digital electronics, 279-283
Diophantine equation, 243
Dirac function, 140, 156, 543
Directivity:
of antenna, 77, 83-91, 104, 273, 335
of terrain, 139
Discrete convolution, 546
Discrete Fourier transform (DFT), 238, 549,
550
Distortion noise:
crossover, 262
harmonic, 261-262, 270-271
intermodulation, 261-262, 271
Dive angle, 589
Dobson, M., 344
Dolph, C. L., 150
Dolph antenna current distribution function,
150
Dolph-Chebyshev weighting, 151
Domik, G., 402

Dongarra, J. J., 452


Doppler:
bandwidth, 21
of processor, 434-435
beam sharpening, 18-21, 28
center frequency, 168, 569
clutter rejection filter, 222
navigator, 30, 32
parameter bounds, 430-434
parameter update rates, 433-434
rate, 168, 569, 590-591
mismatch, 163-164, 173-175
sampling rate, 545
spectrum, 223, 238
Downlink subsystem, 283-288
bit error noise, 285-286, 357
channel errors, 283-285
error protection codes, 283-284
Dubois, P., 352, 357, 367
Durden, S. L., 56
Earth Observing System (EOS), 9-11, 411
Earth Resources Satellite (J-ERS-1), 592
Earth rotational velocity, 375, 568, 582
nutation, 572
precession, 572
Earth's angular velocity, 568, 582
Eccentric anomaly, 576, 577
Eccentricity (of satellite orbit), 226, 573, 576
Echo tracker, 349-351, 357
E-ERS-1, 274, 329, 331, 375, 467, 470, 471,
592, 593, 594, 595, 596
Effective chirp rate, 205
Effective isotropic radiated power (EIRP), 74
EHF band, 8
Eigenfunction, 539, 540, 541
Eigenvalue, 539
Elachi, C., 1, 9, 38, 44, 50, 104, 117, 411
Electromagnetic spectrum:
absorption bands, 5
definitions, 8
remote sensing applications, 7-10
Electromagnetic waves:
phase velocity, 47
polarization, 47, 93-94
Electromagnetic wave scattering:
Bragg, 51-52, 321, 365
facet, 51
polarimetric matrix, 94
radiative transfer, 55
specular, 49
Stokes matrix, 350-353, 364, 367
surface, 6, 48-55, 352
volume, 55-65
Elliott, D. K., 557

639

ELSAG, 468
El'yasberg, P. E., 570, 574, 577, 579, 580
Emissivity, 117
EMMA multiprocessor, 470-472
computational analysis, 471-472
functional block diagram, 471
Entropy, 288-289
Environmental Research Institute of
Michigan (ERIM), 33-35
Environmental Science Services
Administration (ESSA), 381
EOS (Earth Observing System), 9-11, 613
Ephemeris (restituted), 594
Eppler, D. T., 613
Equation of motion, 570
Equatorial coordinate system, 572
Equipartition principle, 97-98
ERIM (Environmental Research Institute
of Michigan), 33-35
ESA (European Space Agency), 10-13, 44,
592
Euclid's algorithm, 243-245
European Remote Sensor (E-ERS-1), 274,
329, 331, 375, 467, 470, 471, 592
European Space Agency (ESA), 10-13, 44,
592
Exciters:
analog (SAW) designs, 265-266
autocorrelation function, 267, 268
digital designs, 267-268
pulse codes, 265
pulse jitter effects, 268
SAW geometries, 266
Exponential probability distribution
function, 215, 216, 228
External calibration, 337-349
distributed targets, 347-349
ground sites, 327, 344-346, 357
point targets, 337-343
Fairbanks, Alaska, 592, 595
Farnett, E. C., 150, 213
Farr, T., 54
Fast convolution, see Frequency domain
convolution
Feature extraction, 610
Fenson, D., 464
Fermat transform, 561
Filtering, 551-553
Filter weighting functions, 148
Fitch, J. P., 504
Foreshortening distortion, 382-384, 399, 479,
484
Fourier:
pair, 540

series, 540
spectrum, 554
Fourier transform:
algorithms:
bit-reversed ordering, 557
in-place, 557
not-in-place, 557
butterfly operation, 556
coefficient ordering, 557
computational analysis, 555
discrete, 547-549
fast, 553, 554-558
pair, 540
radix formulation, 558
three-dimensional inverse, 523
twiddle factors, 555, 556
zero padding, 188, 212
Freden, S. C., 7
Fredholm integral equation, 140
Freeman, A., 327, 341, 343, 344, 349, 351, 358
Frequency domain (fast) convolution, 169,
187
ADSP design, 456-457
azimuth algorithm, 196-208
azimuth computational complexity,
443-444, 446-448, 452-453
azimuth processing block size, 435-436
range algorithm, 182-187
range computational complexity, 449-452
range processing block size, 436
Frequency shift, 162
Fresnel, see also Reflectivity of target (scene)
integral, 145
reflection coefficient, 136-139, 231
region of antenna, 78
Friedman, D. E., 395, 482
Frost, V. S., 419
Fu, L.-L., 37
Functionals, 616
Fundamental component of frequency
response, 615
Fung, A. K., 55
Gagliardi, R., 104, 105, 106
Gamma probability distribution function,
220
Gatineau, Quebec, 595
Gaussian probability distribution function,
97, 215
Gaussian smoothing filter, 613
Gentleman, W. M., 558
Geocoding:
computational analysis, 482-486
definition, 371


image rotation, 395-397, 482


smooth geoid projection, 393-399
topographic map projection, 399-410
Geologic applications of SAR, lava rock
classification, 53-55
Geometric calibration:
definition of terms, 371
error sources, 372-387
incidence angle map, 386
Geometric distortion, 372-387
clock drift, 373
clock jitter, 372
electronic delay, 372, 373, 379
foreshortening, 382-384, 399, 406
image skew, 391-393
ionospheric group delay, 379-381
layover, 382-386
shadow, 385-386
slant-to-ground range correction, 480-482
specular point migration, 387
Geometry of satellite SAR, 374, 377, 567
Geophysical processor, 321-322, 605-613
German Aerospace Research Establishment
(DLR),40
Global Positioning System, 402
Goddard Space Flight Center (GSFC), 10
Goldstein, R., 5
Goldstone antenna, 176, 178, 179, 180, 181
Goodyear Corp., 28
Gordon, F., Jr., 7
Graf, C., 393
Gram-Schmidt orthogonalization, 617
Gravitational constant, 570
Gravitational potential function, 570
Gray, A. L., 321, 346. 355
Grazing angles, 303-304
Green's function, 135, 138, 148, 155, 164,
502, 503, 529
inverse, 139-142, 156, 504, 523, 537
GSFC (Goddard Space Flight Center), 10
Habbi, A., 493
Hadamard transform, 493
Hamming weighting function, 152, 358
Hann weighting function, 152
Hardware architecture, 452-473
common node, 460-467
concurrent, 467-473
design requirements, 452-454
flexible pipeline, 458-459
pipeline, 454-460
post-processor, 486-487
Harris, F. J., 150
Hay, G. E., 568
Haymes, R. C., 570, 571, 572, 573, 574, 577

Heat Capacity Mapping Mission (HCMM), 7


Heiskanen, W. A., 484
Herland, E.-A., 190, 223, 226, 234
Hewlett-Packard, 264
HF band, 8, 34
High density digital recorder (HDDR), 594
Hillis, W. D., 468
Hirosawa, H., 344
Hogg, D. C., 104
Holt, B., 610, 613
Homogeneity, 537
Homogeneous scene, 228, 537
Hovanessian, S. A., 506
Hulsmeyer, C., 27
Huneycutt, B. L., 270, 274
Hunten, D. M., 40
Huygen's diffraction integral, 77-79
Hwang, K., 452
Hybrid algorithm, 206, 437
Ice kinematics, 592, 607-611
IEEE, 311
Image registration:
for mosaicking, 412-415
for multisensor data analysis, 416-424
using chamfer matching, 419, 422, 606
using digital elevation maps, 406-408,
418
using distance transform, 422-423
using edge operators, 419-422
using principal component analysis, 419
Impedance mismatch (SNR effect), 115-116
Impulse response function:
analog formulation, 537-539
digital formulation, 552
sidelobes, 148-153
weighting functions, 151-153
Inclination, 573, 576
In-phase quadrature detection, 136, 183
Institute of Electrical and Electronic
Engineers (IEEE), 311
Institute for Navigational Studies, University
of Stuttgart, 341, 343
Integrated side-lobe ratio (ISLR):
of antennas, 87-88
definition, 256
of impulse response function, 260-261
Integration time in azimuth processor, 169
Intera SAR system, 40, 62
Interference (communication channels), 541
Interferometry, 5, 399, 402
Internal calibration, 329-337
antenna, 335-336
built-in test equipment (BITE), 327,
335-336


calibration tones, 322-324, 356, 364


E-ERS-1 design, 329-331
post-processor design, 477-479
pulse replica loops, 318, 329-331, 356, 364
SIR-C design, 331-336
Interpolation, 191-196, 204, 561-563. See also
Sampling
Intra-pulse range variation, 159-163
Inverse SAR, 520
Ionospheric effects:
attenuation, 163, 317
Faraday rotation, 163, 315, 317
group delay, 315, 317, 379-381
pulse dispersion, 317
Isotropic scatterer, 136
Italian Space Agency (ASI), 470
Jain, A. K., 288, 499
J-ERS-1, 12, 13, 592
Jet Propulsion Laboratory (JPL), 33-35,
60-61, 155, 227, 238
Jin, M. Y., 197, 199, 201, 203, 206, 223, 228,
230, 432, 529
Johns Hopkins University Applied Physics
Laboratory, 52
Johnson, H. W., 560
Johnson, W. T. K., 39
Kahle, A., 7
Kalman filter, 456
Kasischke, E. S., 325
Kennard, E. H., 96
Kepler's first law, 576
Kepler's second law, 574
Kepler's third law, 577
Kim, Y., 333, 334
Kirk, J. C., Jr., 32
Klauder, J. R., 257
Klein, J., 331, 335, 352
Kleinrock, L., 489
Kliore, A., 304
Klystron, 30
Kovaly, J. J., 29
KRMS passive radiometer, 612, 613
Kropatsch, W., 383
Kwok, R., 292, 402, 412, 418, 479, 600, 610
Lambert's law, 103
Lawson, J. L., 96
Layover, 382-386, 479

Leberl, F., 402


Lee, B. G., 494
Lee, K. W., 617
Levels of data products, 251-252, 254-255,
427


Lewis, A., 382
Lewis, D. J., 445
Li, F. K., 5, 217, 223, 224, 227, 228, 241, 283,
285, 299, 301
Like-polarized reflection coefficient, 137
Linde, Y., 496
Linear convolution, 545-553
Linear FM waveform, 133, 134, 144-146, 168,
173, 504
Linear range migration, 172, 190, 193, 194,
431-432
Linear span of data in polar coordinates, 522
Linear systems, 536-541
amplitude error effects, 259-260
convolution, 537-538
distortion analysis, 257-261
nonstationary, 540
phase error effects, 259-260
radar characterization, 136-139
stationary, 128-129, 141, 541
transfer function, 539-540
Location of target:
algorithm, 374-376, 600
error sources, 345, 377-382
Louet, J., 357
Low pass filter, 544, 545
interpolation, 561-563
Low pass waveform, 541
Luscombe, A. P., 238
MacArthur, J. L., 52
MacDonald, H. C., 28
MacDonald-Dettwiler and Associates
(MDA), 33, 155
Madsen, S. N., 223, 231, 233, 234, 389, 390
Magellan (MGN) radar, 39-42, 265, 273,
292-294, 317, 415, 456
Magnetron, 30
Mainlobe broadening, 256
Maitre, H., 422
Map projections:
datum, 371
Polar Stereographic (PS), 393, 479
Universal Transverse Mercator (UTM),
370, 393, 479
Marginal zone ice, 610
Marmarelis, P. Z., 616
Marr, D., 419
Martinson, L., 507
Mass of the earth, 570
Massachusetts Institute of Technology
(MIT) Radiation Laboratory, 27
Masscomp computer, 598, 600, 603
Massively Parallel Processor (MPP),
468-469. See also Concurrent processor


functional block diagram, 469


Matched filter, 127-135, 147, 168, 174, 190,
212
derivation, 128-134
with Doppler frequency shift, 133-134
signal-to-noise ratio, 130
Max, J., 279, 291, 293
Maximum likelihood classifier, 611
Maximum power transfer theorem, 95
McDonough, R. N., 190, 223, 225, 234
MDA (MacDonald-Dettwiler and
Associates), 33
Mean anomaly, 576, 577
Melt ponds, 612
Mercer, J. B., 62
Meyer-Arendt, J. R., 103
Minimum variance unbiased estimator, 229
Mismatch ratio of compression filter,
173-175, 196
Mission design flowchart, 44
MMIC (monolithic microwave integrated
circuit), 31
Monaldo, F. M., 51, 53
Monolithic microwave integrated circuit
(MMIC), 31
Mooney, D. H., 222, 242
Moore, R. K., 349
Mosaicking, 412-415
definition, 371
feathering the seams, 412
Motion compensation processing, 529-534
Moving target detection, 222
MTI radars, 302
Multilook processing, 216-221
by image filtering, 220, 367
look filtering, 220, 367
noise reduction, 217
for PRF ambiguity resolution, 238-240
by spectral division, 219-220
thermal noise effects, 220-221
Multipath, 337
Munson, D. C., Jr., 504
Munson, R. E., 274

Noise:
ambiguity, 296-305
antenna, 101, 106-108
bandwidth, 110-111
bit error, 283-286, 357
distortion, 251, 281, 293-294
crossover, 262
harmonic, 261-262, 270-271
intermodulation, 261-262, 271
equivalent noise temperature, 100, 107,
110, 118-119
external, 101-106
factor, 75, 108, 111-114
figure, 100, 108, 271
galactic, 105-106
operating noise factor, 113-114, 119-120
operating noise temperature, 110, 118-119
power spectral density, 75, 129, 230
quantization, 279-283
radio, 106
receiver, 108-119
saturation, 280-281
source, 101-108
spatial image compression, 489, 492
speckle, 93, 121, 214-217, 314, 324
system noise factor, 113-114
temperature ratio, 117
thermal, 96-99, 220-221, 251, 359
Nadir interference, 306-307
Naraghi, M., 402, 479
NASA (National Aeronautics and Space
Administration), 10-12, 44, 60-61, 411,
592
NASDA (National Space Development
Agency of Japan), 10-13, 44, 592
National Weather Service, 607
Newton's law, 570
Nghiem, S. V., 62
Nicodemus, F. E., 103
Nominal satellite orbit, 574
Nominal target migration locus, 517
Nonlinear system analysis, 615-618
Nonstationary linear system, 198, 540
NORDA KRMS passive radiometer, 612,
613
North, D. O., 128
Number theoretic transforms, 560
Numerical transform theory, 560
Nutation of earth rotation, 572
Nyquist:
frequency, 542
rate, 184, 283, 388, 546
theorem, 99

Ocean waves:
Bragg resonance, 51
capillary waves, 51
spectra,52-53,592,613
Office of Space Science and Applications
(OSSA), 592
Offset video frequency, 183, 211
of Seasat, 184
One-bit SAR, 211
Onstott, R. G., 611
Oppenheim, A. V., 548
Optical correlator, 30, 31, 32


Orbital elements, 572-580. See also Orbit of
satellite
Orbit of satellite, 572-580
acceleration, 569-571
angular velocity, 582
apogee, 573, 576
coordinate system, 577-579
eccentricity, 226, 571, 573, 576
elements, 573
inclination angle, 573, 576, 581
perigee, 573, 576
perturbations, 579-580
plane, 574-575
precession, 580
radial and tangential speeds, 579
time of perigee passage, 573, 576
track and target position, 566-573
trajectory parameters, 572
true anomaly, 573, 576, 577
Overlap-add method (of filtering), 551-552
Overlap-save method (of filtering), 552-553
Oversampling factor (in step transform), 511
Pack ice, 610
Page, L., 102
Page, R. M., 27
Paired echo technique, 257-260
Papoulis, A., 234, 545
Parseval's relation, 129, 132
Passive radiometer:
AVHRR, 611
KRMS, 612, 613
Peak side-lobe ratio (PSLR):
of antenna pattern, 87-88
of impulse response function, 256, 259-260
Pease, M. C., 559
Penetration depth, 55, 59-60
Perigee, 576
Periodic convolution, 548
Permittivity, 46-47
Perry, R. P., 507
Peterson, D. P., 396
Pettai, R., 100, 108, 109, 112, 115, 117
Pettengill, G. H., 39
Phase history of target, 23-25. See also
Range migration
corner turn memory, 436
migration locus, 517
quadratic phase error, 432
range cell migration memory, 432, 436
range curvature maximum bound, 431-432
range walk maximum bound, 431-432
Phonon Corp., 267
Physiological systems, 615
PIN diode, 276


Pipeline processor, 454-460
Advanced Digital SAR Processor (ADSP),
456-458, 600, 601
control, 460
functional block diagram, 455
reliability, 460
Planar orbit, 574
Planck factor, 99, 101-103, 105
Plan-position indicator (PPI), 27
Platform effects:
attitude errors, 320, 349-351, 430-431
ephemeris errors, 377-379, 480
Point target response, 165
Poisson distribution, 489
Polarimetric radar calibration, 349-353,
364-367
channel imbalance, 351, 357, 366
clutter calibration, 352-353
cross-polarization leakage, 366
cross-products of scattering matrix, 366
efficient coding of scattering matrix,
366-367
phase errors, 352, 357, 365
symmetrization of scattering matrix,
364-366
Polarimetric SAR, 57, 93
Polarization signature, 57
Polar processing, 519-534
azimuth resolution, 524
geometry formulation, 521
interpolation, 525
intrapulse range variation, 526-529
Polar Stereographic (PS), see Map
projections
Porcello, L. J., 34
Post-processor, 473-487
functional block diagram, 474
geometric correction, 479-487
image browse, 487-499
I/O transfer rates, 476-477, 486-487
radiometric correction, 477-479
requirements, 475-477
Potential function of the earth, 579
Power density, 74, 76
Poynting relation, 80, 82
PPI (plan position indicator), 27
Precession:
of earth rotation, 572
of satellite orbit, 580
Predictive coding, linear three point, 493
Preprocessing, see Autofocus; Clutterlock
Presumming, 241
Prime factor transforms, 560
Principal solution, 149
Project Wolverine, 30


Pseudo noise code, 600


Pulse:
compression, 135-152
distortion effects, 163, 315-317, 379-381
repetition frequency, 305-307
waveforms, 133
Quadratic phase error, 173-176, 432-433
Quantization, see Sampling
Quasistationarity approximation, 540
Quegan, S., 163, 389, 390, 482
Queueing analysis:
for image browse, 489-491
M/D/1 system, 489, 491
response times, 489, 491
Radar:
cross section, 74, 91-94, 136, 214, 322
of calibration targets, 337-341
equation:
of a distributed target, 120-124, 324,
326, 347
of an image pixel, 123, 358
of a point target, 73-75, 119-120
of a single pulse, 122-123
system:
antenna, 273-279
block diagram, 253
digital electronics, 279-283
operations, 252-256
RF electronics, 264-273
timing and control, 263-264
Radarsat, 12, 592
Rader, C. M., 560
Radiance, 102-104
Radiometric calibration:
definitions, 311-313
error model, 322-325
error sources, 314-322
image correction factor, 360-364, 477-478
noise subtraction, 363-364
post-processor design analysis, 477-479
using a topographic map, 409-410
Radix of FFT, 558
Ramamurthi, B., 492
Ramapriyan, H. K., 402, 468, 469
Raney, R. K., 322, 360, 580, 583, 585
Range:
acceleration, 584-586
ambiguities, 20, 89-91, 171, 303-308
frequency spectrum, 212
resolution:
in deramp-FFT processing, 506, 524
in matched filtering processing, 15, 162
in uncoded pulse, 15

sensor-to-target, 159-160
variation (intra-pulse), 159-163
Range migration, 171-172, 193, 197, 504
correction, 181, 187, 189, 217
curvature, 172, 190
interpolation, 188
maximum bounds, 431-432
memory, 432, 469
nominal migration locus, 517
phase history, 23-25
Seasat example, 178
walk, 172, 190, 193, 239, 431-432
Range signal processing:
analog formulation, 182-187
compression filter parameters, 213-214
computational complexity, 449-452
digital formulation, 210-214
efficiency factor, 450
overview, 165-167
processing blocks, 450
Rawson, R., 33
Rayleigh-Jeans law, 103
Rayleigh probability distribution function,
216, 323
Real aperture radar, see Side-looking real
aperture radar (SLAR)
Receivers:
for ground calibration, 341-342
in SAR sensor, 271-273
Receiving Ground Station (RGS), 592,
596-597
Rectangular algorithm, 155-208. See also
Compression processing
Reed, C. J., 289
Reference functions (matched filter):
azimuth, 588-591
bandwidth, 434-435
length (in samples), 435
normalization factors, 361-364
range, 213-214
Reference mixing operation, 506
Reflectivity of target (scene), 136-139, 141,
149, 155, 214, 215, 224, 228, 231, 237, 506
Remote sensing programs, 7-13
Resolution:
azimuth:
in matched filtering processing, 26,
169-171
in polar processing, 524
in real aperture radar, 16
in spectral analysis processing, 23, 439
range:
in deramp-FFT processing, 506, 524
in matched filtering processing, 15, 162
in uncoded pulse, 15


RGS (Receiving Ground Station), 592,
596-597
Rice, R. F., 288
Richards, J. A., 56
Ridenour, L. N., 71
Rihaczek, A. W., 133
Rino, C. L., 315
Robertson, S. D., 338, 339
Rocca, F., 437
Rodieck, R. W., 615
Royal Aircraft Establishment (RAE), 155
Ruck, G. T., 338
Sack, M. M., 442, 507, 511, 516, 517, 518
Sampling:
aliasing, 211, 238, 241, 283, 389, 396, 482,
510, 544, 545
of bandlimited signals, 211, 541-545
Block Floating Point Quantization
(BFPQ), 289-294, 452
image resampling, 388-390, 416, 440, 479
Nyquist rate, 184, 283, 388, 546
one-bit SAR, 211
quadrature, 283
real (offset video), 211
sampled signal, 211, 542
of the target phase history, 191-196
Scattering, see Electromagnetic wave
scattering
Scattering matrix:
cross-products, 366
definition, 94, 350
efficient coding, 366-367
symmetrization, 364-366
Schaefer, D. H., 468
Schreier, G., 480
Schwartz inequality, 82, 129
Scientific Atlanta Corporation, 595, 596
Sea ice applications of SAR, 57-62
ice classification, 611-613
ice kinematics, 592, 607-611
wave spectra, 613
Seasat, 344, 348, 528
Doppler characteristics, 586-588
offset video frequency, 184
radar parameters, 12
satellite diagram, 2
Secondary range compression, 194, 199-207,
239, 432, 436, 443
Second time around effect, 20, 242
Secular perturbations, in orbital elements,
579
Selvaggi, F., 470
Semi-major axis (of satellite orbit), 573
Sensitivity time control (STC), 271-273


Settling time, 263
Shadow (radar), 385-386
Shannon, C. E., 288, 542
Shannon-Whittaker sampling theorem, 388,
541, 542, 544, 546, 561
Sharma, D. K., 282
Sherman, J. W., 77
Sherwin, C. W., 29, 30
Shuttle Imaging Radar (SIR), 38-39
SIR-A, 39, 49-50
SIR-B, 39, 274, 361-362
SIR-C, 11, 39, 274-277, 302-303, 331-336,
456
Sidelobe, 148-153
definition, 174
leakage in step transform, 514-515
weighting functions, 150-153
Side-looking real aperture radar (SLAR),
15-21, 28, 71
ambiguity constraints, 20-21, 88-91
azimuth (along-track) resolution, 16-17
Doppler bandwidth, 21
Doppler shift, 17
geometry, 14-15
radar equation, 73-75, 122
range (cross-track) resolution, 15
swath width, 14
Sidereal period, 573, 576
Siedman. J. B. 416. 479
Signal and Data Routing Assembly
(SARA). 596
Signal-to-noise ratio (SNR):
of distributed target in image data, 123
of distributed target in raw data, 121-123
of matched filter, 130
of point target in raw data, 73-75
of receiver output, 110-113
Silver, S., 73, 77, 78, 80, 81, 82, 95
Silverman, H. F., 560
Skewing of the image, in azimuth, 190-193
Skolnik, M. I., 71, 73, 80, 86, 106, 108, 115,
131, 133
Sky Computer, 607
Slant range, see Range
SLAR, see Side-looking real aperture radar
Slater, P. N., 103
Snyder, J. P., 393
Sobel operators, 419
Space Physics Analysis Network (SPAN), 594
Spatial bandwidth (azimuth), 25-26
SPECAN, 440-443, 452, 453, 456, 470, 504,
507, 601
Specific radar cross section, 136, 214
Speckle noise, 93, 121, 214-217, 314, 324
Spectral analysis algorithms, 437-443
deramp FFT, 503


fanshape resampling, 440
polar processing, 519-534
SPECAN, 440-443, 452, 453, 456, 470, 504,
507, 601
unfocussed SAR, 23, 438-440
Specular point migration, 387
Specular reflection, 139
Spherical coordinate system, 522
Spotlight SAR, 507
SPS (SAR Processor System), 592
Squint:
angle, 207, 208, 503, 526, 571, 583, 590
mode processing, 206-208, 519-534
Stable local oscillator, 263-264
Allan variance, 263
drift, 263
effects on image fidelity, 372
Staggered PRF (for ambiguity resolution),
242
Stationary:
Gaussian processes, 233
phase, 142-144, 199, 200, 208
random process, 231
linear system, 136, 146, 187, 288, 323, 503,
537, 546
Station reception mask (ASF), 595
Step chirp, 133
Step transform, 504, 507-519
autofocus, 529-534
azimuth processing, 516-518
coarse range analysis, 508-512
fine range ambiguities, 514-515
fine range analysis, 512-514
Stereo radar, 399, 402
Stewart, R. H., 103
Stokes matrix, 94, 350-353, 367
Stretch processing, 504
Stutzman, W. L., 77, 273
Subaperture correlation, 237, 238, 240,
529
Subsurface mapping, 49-50
Sun Computer, 607
Surface Acoustic Wave (SAW) device,
265-267
Swath width, 14, 21, 295-296
Tapped delay line, 151
Target cross section, see Radar cross section
Tate, C., 615
Taylor, A. H., 27, 151, 152
Taylor, T. T., 151, 152
Taylor series, 143, 168, 565, 566
Taylor weighting function, 85, 87, 151, 152,
213
Technical University of Denmark, 40
Temperton, C., 559

Test, J., 466
Three-dimensional inverse Fourier
transform, 523
Tikhonov, A. N., 149
Time bandwidth product, see Bandwidth
time product
Time domain:
convolution, 537-538
filter weighting function, 151
Time domain processor:
computational complexity, 444-445, 446,
448-449
image formation algorithm, 167, 187-196
Time of echo propagation, 158-159
Time of perigee passage, 573
Titan Radar Mapper (CTRM), 40, 42
Tomographic analysis, 540
Tomographic imaging, 504
Tone generators, 342-343
TOPEX, 381
Touzi, R. A., 419
Toyoda, J., 615
Transform:
Fermat, 561
Fourier, see Fourier transform
Hadamard, 493
Laplace, 540
number theoretic, 560
prime factor, 560
z, 543, 547
Transmit interference, 305-307
Transmitter:
solid state, 270
traveling wave tube, 269-270
Transponders, 341
Transverse scan cassette drives, 598
Traveling Wave Tube (TWT), 30
True anomaly of satellite orbit, 576, 577
Twiddle factors in FFT, 555, 556
UHF band, 8, 27
Ulaby, F. T., 6, 47, 55, 56, 62, 63, 64, 65, 76,
92, 100, 101, 102, 121, 122
Unfocussed SAR, 23-24, 31, 438-440
Uniform quantizer, 291
Uniform spherical earth, 570
United States Geological Survey (USGS),
393, 412, 414, 415, 416
Universal Transverse Mercator (UTM), see
Map projections
University of Alaska, 592
Van der Ziel, A., 96, 97, 99
Van Roessel, J. W., 28
Vant, M. R., 503


Van Zyl, J. J., 57, 352, 365, 366, 476


Variable gain amplifier, 271
Vectorization of the transform, 559-560
Vector quantization, 289, 495-499
Vegetation applications of SAR:
forest canopy, 56-57
soil moisture, 62-65
Velocity:
angular of earth, 56, 582
angular of satellite, 582
radial and tangential of satellite, 579
Venera, 39-40
Venus radar mapper, see Magellan
Vernal equinox, 572
VHF band, 8, 27, 34
Video offset frequency:
in deramp processing, 524
sampling, 211
of Seasat, 184
Viksne, A., 28
Volterra, V., 616
Volterra kernels, 616
Wagner, C. A., 376
Walker, J. L., 520, 529
Wall, S. D., 349
Watson-Watt, R., 27
Wave equation algorithm, 437
Wavenumber, 47, 613

dimension resolution, 524
space, 526
vector, 520, 522
Wehner, D. R., 133, 520, 524
Whalen, A. D., 97, 129, 136, 215, 231, 233
Whittaker's interpolation formula, 388, 541,
542, 544, 545, 546, 561
Wiener, N., 616
Wiley, C. A., 1, 17, 18, 28, 29
Winebrenner, D. P., 51
Winograd Fourier Transform Analysis
(WFTA), 560
Wong, F. H., 206
Wong, Y. R., 422
Woodward, P. M., 132
Wu, C., 197, 206, 234, 274-275, 302-303,
437, 502
Wu, K. H., 507
X-SAR, 11, 39, 274-275, 302-303
z-transform, 543, 547
Zebker, H. A., 57, 364, 402
Zeoli, G. W., 280
Zero padding of FFT, 188, 212
Zohar, S., 560

Synthetic Aperture Radar: Systems and Signal Processing
John C. Curlander · Robert N. McDonough
Wiley Series in Remote Sensing / Wiley-Interscience
ISBN 0-471-85770-X