SYNTHETIC APERTURE RADAR
Systems and Signal Processing
John C. Curlander
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, California
Robert N. McDonough
Johns Hopkins University
Applied Physics Laboratory
Laurel, Maryland
A WILEY-INTERSCIENCE PUBLICATION
CONTENTS

PREFACE xiii
ACKNOWLEDGMENTS xvii
CHAPTER 1
INTRODUCTION TO SAR 1
1.1 The Role of SAR in Remote Sensing 4
1.1.1 Remote Sensing Across the EM Spectrum 7
1.1.2 Remote Sensing Programs 9
1.2 Overview of SAR Theory 13
1.2.1 Along-Track (Azimuth) Resolution 16
1.2.2 Doppler Filtering 22
1.3 History of Synthetic Aperture Radar 26
1.3.1 Early History 26
1.3.2 Imaging Radars: From SLAR to SAR 28
1.3.3 SAR Processor Evolution 31
1.3.4 SAR Systems: Recent and Future 33
1.4 Applications of SAR Data 44
1.4.1 Characteristics of SAR Data 45
1.4.2 Surface Interaction of the Electromagnetic Wave 46
1.4.3 Surface Scattering: Models and Applications 48
1.4.4 Volume Scattering: Models and Applications 55
1.5 Summary 66
References and Further Reading 66
CHAPTER 2
THE RADAR EQUATION 71
2.1 Power Considerations in Radar 72
2.2 The Antenna Properties 75
2.2.1 The Antenna Gain 80
2.2.2 The Antenna Directional Pattern 84
2.3 The Target Cross Section 91
2.4 The Antenna Receiving Aperture 94
2.5 Thermal Noise 96
2.6 Source and Receiver Noise Description 99
2.6.1 Source Noise 101
2.6.2 Receiver Noise 108
2.6.3 An Example 116
2.7 The Point Target Radar Equation 119
2.8 The Radar Equation for a Distributed Target 120
References 124

CHAPTER 3
THE MATCHED FILTER AND PULSE COMPRESSION 126
3.1 The Matched Filter 127
3.1.1 Derivation of the Matched Filter 128
3.1.2 Resolution Questions 131
3.2 Pulse Compression 135
3.2.1 Linearity, Green's Function and Compression 135
3.2.2 The Matched Filter and Pulse Compression 142
3.2.3 Time Sidelobes and Filter Weighting 148
References 152

CHAPTER 4
IMAGING AND THE RECTANGULAR ALGORITHM 154
4.1 Introduction and Overview of the Imaging Algorithm 155
4.1.1 Data Coordinates and the System Impulse Response 157
4.1.2 Imaging Algorithm Overview 164
4.1.3 Range Migration and Depth of Focus 171
4.1.4 An Example 176
4.2 Compression Processing 182
4.2.1 Range Compression Processing 182
4.2.2 Time Domain Azimuth Processing 187
4.2.3 Time Domain Range Migration Compensation 189
4.2.4 Frequency Domain Azimuth Processing 196
References 208

CHAPTER 5
ANCILLARY PROCESSES IN IMAGE FORMATION 210
5.1 Digital Range Processing 210
5.2 Speckle and Multilook Processing 214
5.3 Clutterlock and Autofocus 221
5.3.1 Clutterlock Procedures 223
5.3.2 Autofocus 234
5.4 Resolution of the Azimuth Ambiguity 238
References 247

CHAPTER 6
SAR FLIGHT SYSTEM 249
6.1 System Overview 249
6.2 Radar Performance Measures 256
6.2.1 Linear System Analysis 256
6.2.2 Nonlinear System Analysis 261
6.3 The Radar Subsystem 263
6.3.1 Timing and Control 263
6.3.2 RF Electronics 264
6.3.3 Antenna 273
6.3.4 Digital Electronics and Data Routing 279
6.4 Platform and Data Downlink 283
6.4.1 Channel Errors 283
6.4.2 Downlink Data Rate Reduction Techniques 285
6.4.3 Data Compression 288
6.4.4 Block Floating Point Quantization 289
6.5 System Design Considerations 294
6.5.1 Ambiguity Analysis 296
6.5.2 PRF Selection 305
6.6 Summary 307
References 308

CHAPTER 7
RADIOMETRIC CALIBRATION OF SAR DATA 310
7.1 Definition of Terms 311
7.1.1 General Terms 311
7.1.2 Calibration Performance Parameters 312
7.1.3 Parameter Characteristics 314
7.2 Calibration Error Sources 314
7.2.1 Sensor Subsystem 315
7.2.2 Platform and Downlink Subsystem 320
7.2.3 Signal Processing Subsystem 320
APPENDIX C
THE ALASKA SAR FACILITY 592
C.1 ASF Operations 593
C.2 The Receiving Ground Station 596
C.3 The SAR Processor System 598
C.4 Archive and Operations System 603
C.5 The Geophysical Processor System 605
C.6 Summary 613
References 614

APPENDIX D
NONLINEAR DISTORTION ANALYSIS 615
References 618

BIBLIOGRAPHY 619
MATHEMATICAL SYMBOLS 622
LIST OF ACRONYMS 630
INDEX 634

PREFACE

The forty year history of synthetic aperture radar (SAR) has produced only a
single spaceborne orbiting satellite carrying a SAR sensor dedicated to remote
sensing applications. This system, the Seasat-A SAR, operated for a mere
100 days in the late 1970s. We learned from the data collected by Seasat, and
from the Shuttle Imaging Radar series and aircraft based SAR systems, that
this instrument is a valuable tool for measuring characteristics of the earth's
surface. As an active microwave sensor, the SAR is capable of continuously
monitoring geophysical parameters related to the structural and electrical
properties of the earth's surface (and its subsurface). Furthermore, through
signal processing, these observations can be made at an extremely high resolution
(on the order of meters), independent of the sensor altitude.
As a result of the success of these early systems, we are about to embark on
a new era in remote sensing using synthetic aperture radar. Recognition of its
potential benefits for global monitoring of the earth's resources has led the
European Space Agency, the National Space Development Agency of Japan,
and the Canadian Space Agency to join with the United States National
Aeronautics and Space Administration in deploying a series of SAR systems in
polar orbit during the 1990s. A primary mission goal of these remote sensing
SAR systems is to perform geophysical measurements of surface properties over
extended periods of time for input into global change models. To reach this
end, the SAR systems must be capable of reliably producing high quality image
data products, essentially free from image artifacts and accurately calibrated in
terms of the target's scattering characteristics.
In anticipation of these data sets, there is widespread interest among the
scientific community in the potential applications of SAR data. However,
interpretation of SAR data presents a unique challenge in that there can be
severe geometric and radiometric distortions in the data products, as well as
the presence of false targets (resulting from the radar pulse mode operation).
Although these effects can be minimized by proper design of the radar system
and use of calibration techniques to characterize the systematic error sources,
full utilization of SAR data mandates that the scientist be aware of the potential
for misinterpretation of the imagery. A full understanding of the characteristics
of the SAR imagery requires some knowledge of the sensor design, the mission
operations, and the ground signal processing.

In this text we specifically address these items, as applied to the design and
implementation of the spaceborne SAR system (with heavy emphasis on signal
processing techniques). The reader will find that the book has been written
from two points of view, reflecting each author's perspective on SAR systems
and signal processing. We believe that these two perspectives complement each
other and serve to present a complete picture of SAR from basic theory to the
practical aspects of system implementation and test. In preparing the manuscript,
there were three key areas that we wished to address.

First, we had in mind that, in an expanding field such as synthetic aperture
radar, new workers would need an introduction to the basics of the technology.
We have therefore included considerable material on general radar topics, as
well as material on the specific signal processing methods which lie at the heart
of the image formation algorithms. Second, engineers in disciplines closely allied
to SAR would benefit from a ready compilation of the engineering considerations
which differentiate a SAR system from a conventional radar system. Third, the
users of SAR images may wish to know in some detail the procedures by which
the images were produced, as an aid to understanding the product upon which
their analyses are based.

In seeking to serve this broad potential readership, we have written the book
at various levels of detail, and assuming various levels of prior background.
Chapter 1 is intended for all our readers. It provides an overview of the general
capabilities of SAR to contribute to remote sensing science, and a brief
explanation of the underlying principles by which SAR achieves its superior
spatial resolution. We include a survey of past SAR systems, and a description
of systems planned for the near future. The chapter concludes with a summary
of some important topics in modeling, by which the SAR image is related to
geophysical parameters of interest.

Chapter 2 is devoted to a careful derivation of the "radar equation", from
first principles which we hope will be shared by both engineers and remote
sensing scientists. This chapter is intended to serve those readers who may be
new arrivals to the topic of radar. The chapter culminates, in Section 2.8, with
various forms of the radar equation appropriate for remote sensing work.

Chapter 3 continues our discussion of basics, but more specifically those signal
processing techniques which underlie the treatment of radar signals in a digital
receiver. Section 3.2.2 in particular treats the matched filter from a point of
view which is appropriate to the discussion of SAR image formation.

Chapter 4 is the first material of the book devoted in detail specifically to
SAR systems. It addresses the central question in formation of a SAR image
from the raw radar signal data, that is, the "compression" of the point target
response, distributed in space and time by the radar system, back into a point
in the image. Section 4.1 gives an overview of the factors involved, and includes
an example, in Section 4.1.4, "stepping through" the formation of a SAR image
from raw signal to the level of a "raw" (uncalibrated) image. Section 4.2 describes
in detail the various algorithms which have been developed to carry out the
corresponding digital signal processing. Chapter 5 is a companion to Chapter 4,
and describes a number of ancillary algorithms which are necessary to implement
the main procedures described in Chapter 4. Chapter 10 discusses a number of
image formation algorithms which are alternative to those of Chapter 4 and
Chapter 5, but which have to date been less commonly used in the remote
sensing "community". They are, however, of considerable interest in that context,
and are much used in aircraft SAR systems.

Chapter 6 presents an end-to-end description of the part of a SAR system
which is related to the sensor and its data channels. The emphasis is on space
platforms. The various error sources, in terms of their characterization and
effect, are described for a general SAR system from the transmitted signal
formation through downlink of the received echo signal data to a ground station.
The point of view is that of the system designer, and in Section 6.5 some of the
important tradeoffs are described.

Chapters 7 and 8 together present in some detail the means by which a SAR
system and its images are calibrated. Chapter 7 is concerned with calibration
in the sense that the surface backscatter intensity in each system resolution cell
is correctly replicated in a single resolution cell of the image ("radiometric"
calibration). In Chapter 8, the companion question of "geometric" calibration
is treated. The techniques described aim at ensuring that a specific resolution
cell in the scene being imaged is correctly positioned relative to its surface
location. Section 8.3 treats techniques for assigning map coordinates to a SAR
image. This allows registration of images from multiple sensors, a topic which
is dealt with in Section 8.4.

Chapter 9 is a companion to Chapter 6, which deals primarily with "flight
hardware". In Chapter 9, the "ground hardware" is described, including a
characterization of the system considerations necessary for efficient realization
of the image formation and geometric and radiometric correction algorithms
discussed in previous chapters. Specific systems are described, along with the
various tradeoff considerations affecting their design. The subsystems described
range from those for initial processing of the raw radar signals, through those
for image archiving, cataloging, and distribution.

After the discussions of Chapter 10, on alternative image formation
algorithms, there follow four Appendixes. Appendix A is a basic introduction
to digital signal processing, with particular emphasis on the fast Fourier
transform algorithm. Appendix B is an introductory explanation of satellite
orbit mechanics, and culminates in Section B.4 with some simple parameter
models needed in image formation. Appendix C describes the NASA SAR data
reception, image formation, and image archive system newly implemented at
the University of Alaska in Fairbanks, Alaska. Finally, Appendix D summarizes
a technique for the characterization of nonlinear systems. Throughout the text,
equations of particular importance have been indicated by an asterisk.
We believe that this text provides a needed, missing element in the SAR
literature. Here we have detailed the techniques needed for design and
development of the SAR system with an emphasis on the signal processing.
This work is a blend of the fundamental theory underlying the SAR imaging
process and the practical system engineering required to produce quality images
from real SAR systems. It should serve as an aid for both the radar engineer
and the scientist. We have made special effort to annotate our concepts with
figures, plots, and images in an effort to make our ideas as accessible as possible.
It is our sincere belief that this work will serve to reduce the mystery surrounding
the generation of SAR images and open the door to a wider user community
to develop new, environmentally beneficial applications for the SAR data.

JOHN C. CURLANDER
ROBERT N. McDONOUGH

Pasadena, California
Laurel, Maryland
April 1991

ACKNOWLEDGMENTS

This work draws in large part from knowledge gained during participation in
the NASA Shuttle Imaging Radar series. For this reason we wish to give special
recognition to Dr. Charles Elachi, the principal investigator of these instruments,
for providing the opportunity to participate in both their development and
operation.
The text presents results from a number of scientists and engineers too
numerous to mention by name. However, we do wish to acknowledge
the valuable inputs received from colleagues at the California Institute of
Technology Jet Propulsion Laboratory, specifically A. Freeman, C. Y. Chang,
S. Madsen, R. Kwok, B. Holt, Y. Shen and P. Dubois. At The Johns Hopkins
University Applied Physics Laboratory, collaboration with B. E. Raff and
J. L. Kerr has stimulated much of this work. Among those who shared their
knowledge of SAR, special thanks go to E.-A. Berland of the Norwegian Defence
Research Establishment, B. Barber of the Royal Aircraft Establishment, and
W. Noack and H. Runge of the German Aerospace Research Establishment
(DLR). Additionally, without the technical support of K. Banwart, J. Elbaz,
and S. Salas this text could not have been compiled.
We both benefited from the intellectual atmosphere and the financial support
of our institutions. Special recognition should go to Dr. F. Li of the Jet
Propulsion Laboratory for his support to JCC during the preparation of this
manuscript. Additionally, we wish to thank Prof. O. Phillips for hosting RNM
as the J. H. Fitzgerald Dunning Professor in the Department of Earth and
Planetary Sciences at The Johns Hopkins University during 1986-87. The
financial support provided by the JHU Applied Physics Laboratory for that
position, and for a Stuart S. Janney Fellowship, aided greatly in this work.
SYNTHETIC APERTURE RADAR
Systems and Signal Processing
1
INTRODUCTION TO SAR
Nearly 40 years have passed since Wiley first observed that a side-looking radar
can improve its azimuth resolution by utilizing the Doppler spread of the echo
signal. This landmark observation signified the birth of a technology now
referred to as synthetic aperture radar (SAR). In the ensuing years, a flurry of
activity followed, leading toward steady advancement in performance of both
the sensor and the signal processor. Although much of the early work was
aimed toward military applications such as detection and tracking of moving
targets, the potential for utilizing this instrument as an imaging sensor for
scientific applications was widely recognized.
Prior to the development of the imaging radar, most high resolution sensors
were camera systems with detectors that were sensitive to either reflected solar
radiation or thermal radiation emitted from the earth's surface. The SAR
represented a fundamentally different technique for earth observation. Since a
radar is an active system that transmits a beam of electromagnetic (EM)
radiation in the microwave region of the EM spectrum, this instrument extends
our ability to observe properties about the earth's surface that previously were
not detectable. As an active system, the SAR provides its own illumination and
is not dependent on light from the sun, thus permitting continuous day/night
operation. Furthermore, neither clouds, fog, nor precipitation have a significant
effect on microwaves, thus permitting all-weather imaging. The net result is an
instrument that is capable of continuously observing dynamic phenomena such
as ocean currents, sea ice motion, or changing patterns of vegetation (Elachi
et al., 1982a).
Sensor systems operate by intercepting the earth radiation with an aperture
of some physical dimension. In traditional (non-SAR) systems, the angular [...]

Figure 1.1 Illustration of the Seasat-A SAR satellite. [Instrument labels:
multichannel microwave radiometer, visible-infrared radiometer, SAR data
link antenna, altimeter.]

[...] refers to the accuracy with which an image pixel can be related to the target
scattering characteristics. Geometric distortion arising from variation in the
terrain elevation is especially severe for a side-looking, ranging instrument such
as a SAR. Precision correction requires either a second imaging channel (stereo
or interferometric imaging) or a topographic map. Radiometric distortion, which
arises primarily from system effects, requires precise measurements from
calibration devices to derive the processor correction factors. To achieve the
calibration accuracies required for most scientific analyses, a complex process
utilizing internal (built-in device) measurements and external (ground deployed
device) measurements is needed. As a result of the difficulty of operationally
implementing these calibration procedures, only in special cases have SAR
systems produced radiometrically and geometrically calibrated data products.
The implication of poorly calibrated data products on the scientific utilization
of the data is far reaching. Without calibrated data, quantitative analysis of the
SAR data cannot be performed, and therefore the full value of the data set is
not realized.

Over the past decade substantial progress has been made, both in digital
computing technology and in our understanding of the SAR signal processing
and system calibration algorithms. Perhaps just as challenging as the development
of the techniques underlying these algorithms is their operational
implementation in real systems. In this text, we begin from first principles,
deriving the radar equation and introducing the theory of coherent apertures.
We then bring these ideas forward into the signal processing algorithms
required [...] algorithms necessary for radiometric and geometric correction of
the final data products. The various radar system error sources are addressed
as well as the processor architectures required to sustain the computing loads
imposed by these processing algorithms.

1.1 THE ROLE OF SAR IN REMOTE SENSING

In the introduction we alluded to several of the features that make the SAR a
unique instrument in remote sensing: (1) Day/night and all-weather imaging;
(2) Geometric resolution independent of sensor altitude or wavelength; and
(3) Signal data characteristics unique to the microwave region of the EM
spectrum. An overview of the theory behind the synthetic aperture and pulse
compression techniques used to achieve high resolution is presented in the
following section. In this section, we principally address the unique properties
of the SAR data as they relate to other earth-observing sensors. As an active
sensor, the SAR is in a class of instruments which includes all radars (e.g.,
altimeters, scatterometers, lasers). These instruments, in contrast to passive
sensors (e.g., cameras, radiometers), transmit a signal and measure the reflected
wave. Active systems do not rely on external radiation sources such as solar
or nuclear radiation (e.g., Chernobyl). Thus the presence of the sun is not
relevant to the imaging process, although it may affect the target scattering
characteristics. Furthermore, the radar frequency can be selected such that its
absorption (attenuation) by atmospheric molecules (oxygen or water vapor) is
small. Figure 1.2 illustrates the absorption bands in terms of percent atmospheric
transmission versus frequency (wavelength). Note that in the 1-10 GHz
(3-30 cm) region the transmissivity approaches 100%. Thus, essentially
independent of the cloud cover or precipitation, a SAR operating in this
frequency range is always able to image the earth's surface.

Figure 1.2 Percent transmission through the earth's atmosphere for the microwave portion of
the electromagnetic spectrum. [Two panels plot percent transmission versus wavelength, in µm
from the ultraviolet through the far infrared and in cm from the far infrared through the
microwave region, with windows near 35 GHz and 90 GHz marked.]

As the radar frequency is increased within the microwave spectrum the
transmission attenuation increases. At 22 GHz there is a water vapor absorption
band that reduces transmission to about 85% (one-way), while near 60 GHz
the oxygen absorption band essentially prevents any signal from reaching the
surface. Around these absorption bands are several windows where high
frequency microwave imaging of the surface is possible. These windows are
especially useful for real aperture systems such as altimeters and microwave
radiometers relying on a shorter wavelength (i.e., a narrower radiation beam)
to obtain high resolution. Additionally, for an interferometric SAR system,
the topographic height mapping accuracy increases with antenna baseline
separation, or equivalently with decreasing wavelength (Li and Goldstein, 1989).
For this application, the 35 GHz window is an especially attractive operating
frequency.

The selection of the radar wavelength, however, is not simply governed by
resolution and atmospheric absorption properties. The interaction mechanism
between the transmitted electromagnetic (EM) wave and the surface is highly
wavelength dependent. The EM wave interacts with the surface by a variety of
mechanisms which are related to both the surface composition and its
structure. For the microwave region in which spaceborne SAR systems
operate (1-10 GHz), the characteristics of the scattered wave (power, phase,
polarization) depend predominantly on two factors: the electrical properties of
the surface (dielectric constant) and the surface roughness.

As an example, consider a barren (non-vegetated) target area where surface
scattering is the dominant wave interaction mechanism. For side-looking
geometries (i.e., with the radar beam pointed at an angle > 20° off nadir), if
the radar wavelength is long relative to the surface roughness then the surface
will appear smooth, resulting in very little backscattered energy. Conversely,
for radar wavelengths on the scale of the surface rms height, a significant fraction
of the incident power will be reflected back toward the radar system. This
scattering characteristic is illustrated as a function of wavelength in Fig. 1.3
(Ulaby et al., 1986). Note that the variation in backscatter as a function of rms
height and angle of incidence is highly dependent on the radar frequency or
wavelength. A similar wavelength dependence is also observed for the surface
dielectric constant. Generally, a fraction of the incident wave will penetrate
the surface and be attenuated by the subsurface media. This penetration
characteristic is primarily governed by the radar wavelength and the surface
dielectric properties. It is especially important in applications such as soil
moisture measurements and subsurface sounding, where proper selection of the
radar wavelength will determine its sensitivity to the surface dielectric properties.

Figure 1.3 Normalized backscatter coefficient as a function of surface roughness for three radar
frequencies (Ulaby et al., 1986). [Three panels plot the scattering coefficient against angle of
incidence (deg) at 1.1, 4.25, and 7.25 GHz, for several rms surface heights (cm) and soil
moistures (g cm⁻³ in the top 1 cm).]

Thus the selection of radar wavelength is influenced by both atmospheric
effects and target scattering characteristics. In addition to the relationship
between radar wavelength and surface characteristics such as roughness and
dielectric constant, there are a number of other system parameters, such as the
imaging geometry and the wave polarization, that can be used to further
characterize the surface properties. These applications and the underlying
scattering mechanisms will be discussed in Section 1.4. There are also a number
of sensor design constraints that influence selection of the radar operating
frequency which are detailed in Chapter 6.

1.1.1 Remote Sensing Across the EM Spectrum

Despite the unique capabilities of the SAR to measure properties of the surface,
its operating range is limited to a small portion of the electromagnetic spectrum.
Thus, a full characterization of the surface properties with a single instrument,
such as the SAR, is not possible. To get a complete description of the
surface chemical, thermal, electrical, and physical properties, observation by a
variety of sensors over a large portion of the electromagnetic spectrum is
required. Figure 1.4 illustrates the various regions of the electromagnetic
spectrum from the radio band (25 cm ≤ λ ≤ 1 km) to the ultraviolet band
(0.3 µm ≤ λ ≤ 0.4 µm).

Each region of the EM spectrum plays an important role in some aspect of
remote sensing. For characterizing the earth's surface properties, the most
useful bands, in addition to the microwave, are: (1) Infrared (3-30 µm);
and (2) Visible/near infrared (0.4-3 µm). At frequencies lower than 1 GHz,
ionospheric disturbances and ground interference dominate the received
signal characteristics, while in the millimeter and submillimeter region
(100 GHz-10 THz) a large number of molecular absorption bands provide
information about the atmospheric constituents, but little or no information
about surface properties. Sensors that perform measurements in the thermal
infrared region such as the Heat Capacity Mapping Mission (HCMM)
radiometer (Kahle et al., 1981), as well as those in the visible/near infrared
region such as SPOT and Landsat Thematic Mapper (TM) (Freden and Gordon,
1983), measure surface properties that are complementary to the microwave
measurements of the SAR. The thermal infrared (10-15 µm) band is sensitive
to emissions from the surface (and atmosphere) relating to the vibrational and
rotational molecular processes of the sensed object. Information on the surface
temperature and heat capacity of an object can be derived from these
measurements. In the visible and near infrared regions, vibrational and electronic
molecular processes are measured. This information can be interpreted in terms
of chemical composition, vegetation, and biological properties of the surface.

Within the microwave region (1-300 GHz) there are several windows in the
atmospheric absorption bands outside the nominal SAR frequency range of
1-10 GHz. Most active, real aperture radar systems, such as the scatterometer
and altimeter, operate in the 10-20 GHz region (Ulaby et al., 1982). These are
Figure 1.4 [Caption lost; the figure lays out the regions of the electromagnetic
spectrum from the power and radio bands through the infrared and ultraviolet,
with frequency marked in Hz.]

[...] radiometers can play an important role in the geophysical interpretation of
SAR data and are especially useful for absolute calibration of the SAR system.
[...] provide a useful guide, not only for the SAR system engineer but also for the
scientist using these data sets. We believe that an understanding of the techniques
underlying production of the SAR imagery will enhance the scientist's ability
to interpret the data products.

1.2 OVERVIEW OF SAR THEORY
Figure 1.6 Simplified geometry of a side-looking real-aperture radar (SLAR).
[Beam label: radiated pulses.]

directed perpendicular to the flight path of the vehicle and downwards at the
surface of a flat earth. The relative speed between platform and earth is V_st.
For this geometry, the pointing (look) angle γ, relative to the vertical, is the
same as the incidence angle, η, which is the angle between the radar beam and
the normal to the earth's surface at a particular point of interest. The radar
transmits pulses of EM energy. The return echoes are sampled for future time
coherent signal processing. We will first discuss the capability of the radar
system to resolve separate terrain elements on the earth's surface.

In Fig. 1.7 the range extent W_g of the radar beam (i.e., the ground swath
width) is established by the antenna height W_a, which determines the vertical
beamwidth, θ_v = λ/W_a. If R_m is the (slant) range from radar to midswath,
then

    W_g ≈ λR_m / (W_a cos η)    (1.2.1)

Figure 1.7 Radar geometry illustrating the ground swath, W_g, and radar beamwidth, θ_v.

The resolution of the radar in (ground) range (Fig. 1.7) is defined as the minimum
range separation of two points that can be distinguished as separate by the
system. If the arrival time of the leading edge of the pulse echo from the more
distant point is later than the arrival time of the trailing edge of the echo from
the nearer point, each point can be distinguished in the time history of the radar
echo. If the time extent of the radar pulse is τ_p, the minimum separation of two
resolvable points is then

    ΔR_s = cτ_p / 2    (1.2.2)

where ΔR_s is the resolution in slant range and c is the speed of light.

As we will discuss in Chapter 3, to obtain a reasonable resolution ΔR_s, the
required pulse duration τ_p would be too short to deliver adequate energy per
pulse to produce a sufficient echo signal to noise ratio (SNR) for reliable
detection. Therefore, a pulse compression technique is commonly employed to
achieve both high resolution (with a longer pulse) and a high SNR. With
appropriate processing of the received pulse (matched filtering), the range
resolution obtainable is

    ΔR_s = c / (2B_R)    (1.2.3)

where B_R is the frequency bandwidth of the transmitted pulse. This resolution
can be made arbitrarily fine (within practical limits) by increasing the pulse
bandwidth.
The radar system range resolution is therefore determined by the type of pulse coding and the way in which the return from each pulse is processed. All radar systems, conventional, SLAR, or SAR, resolve targets in the range dimension in the same way. It is the resolution of targets in the dimension parallel to the platform line of flight (i.e., the azimuth or along-track dimension) that distinguishes a SAR from other radar systems. We now overview the mechanisms used by the SAR to achieve fine azimuth resolution and defer until Chapter 4 a detailed discussion of the techniques for range and azimuth processing.
    x_1 = λR f_D1 / (2V_st)

Similarly, energy at a different frequency f_D2 will be assigned to a corresponding coordinate x_2. Thus, even though the targets are at the same range and in the beam at the same time, they can be discriminated by analysis of the Doppler frequency spectrum of the return signal, hence the early name given by Wiley of "Doppler beam sharpening" for this technique.

The use of Doppler frequency effectively provides a second coordinate for use in distinguishing targets. These two coordinates are the ground range R_g and the along-track distance x relative to a point directly beneath the vehicle (i.e., the nadir point) as shown in Fig. 1.9. The SAR system effects an invertible transformation of coordinates from ground range and along-track position to the observable coordinates, pulse delay t and Doppler shift f_D.

From Fig. 1.9 we can write

    R^2 = (x − V_st s)^2 + R_g^2 + H^2

where s is the time along the flight path. The range rate is given by

    Ṙ = −V_st (x − V_st s) / R

The echo time delay t = 2R(0)/c and Doppler shift f_D0 at s = 0 are related by

    R(0) = (x^2 + R_g^2 + H^2)^(1/2) = ct_0 / 2    (1.2.5)

    f_D0 = −(2/λ) Ṙ(0) = 2V_st x / (λ R(0))    (1.2.6)

Substituting Eqn. (1.2.5) into Eqn. (1.2.6), we get

    f_D0 = 2V_st x / [λ (x^2 + R_g^2 + H^2)^(1/2)]

which is the equation of a conic in the (R_g, x) plane. From Eqn. (1.2.6) and Fig. 1.9 we can write

    [(2V_st / (λ f_D0))^2 − 1] x^2 = R_g^2 + H^2

resulting in a hyperbola as shown in Fig. 1.10.

The use of Doppler frequency in addition to pulse time delay thereby provides target (terrain point) localization in two dimensions. That is (Fig. 1.10), a specific delay t_0 = 2R(0)/c and Doppler shift f_D0 correspond to a specific circle, Eqn. (1.2.5), and hyperbola, which intersect in only four points in the plane of ground range R_g and along-track distance x. The left/right ambiguity is resolved by our knowledge of the side of the platform from which the radar beam is directed, while the branch of the hyperbola is indicated by the sign of the Doppler shift.

Figure 1.9 Illustration of ground range and along-track coordinates.
Figure 1.10 Illustration of use of range delay and Doppler shift to locate the target.
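The invertibility of this (t, f_D) to (x, R_g) mapping can be sketched numerically. The toy example below forward-simulates a point target's observables from Eqns. (1.2.5) and (1.2.6) and then inverts them; all parameter values (L-band wavelength, platform speed and altitude, target position) are illustrative assumptions, not values from the text.

```python
import math

# Recover along-track position x and ground range Rg from the radar
# observables: echo delay t0 and Doppler shift fD0 (Eqns. 1.2.5-1.2.6).
C = 3.0e8      # speed of light, m/s
LAM = 0.235    # wavelength, m (L-band; illustrative)
V_ST = 7500.0  # platform-earth relative speed, m/s (illustrative)
H = 800e3      # platform altitude, m (illustrative)

def locate(t0, fd0):
    r0 = C * t0 / 2.0                    # slant range at s = 0, from Eqn. (1.2.5)
    x = LAM * fd0 * r0 / (2.0 * V_ST)    # along-track offset, from Eqn. (1.2.6)
    rg = math.sqrt(r0**2 - x**2 - H**2)  # ground range, from R(0)^2 = x^2 + Rg^2 + H^2
    return x, rg

# Forward-simulate a target's observables, then invert them.
x_true, rg_true = 4000.0, 250e3
r0 = math.sqrt(x_true**2 + rg_true**2 + H**2)
t0 = 2.0 * r0 / C                        # Eqn. (1.2.5)
fd0 = 2.0 * V_ST * x_true / (LAM * r0)   # Eqn. (1.2.6)
x_hat, rg_hat = locate(t0, fd0)
print(x_hat, rg_hat)  # recovers (4000.0, 250000.0) up to rounding
```

The left/right ambiguity discussed above shows up here as the sign of fd0: a target at −x_true would produce the opposite Doppler sign.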
With the use of Doppler analysis of the radar returns, the resolution δx of targets in the along-track coordinate is related to the resolution δf_D of measurement of the Doppler frequency. The antenna beamwidth in the horizontal dimension no longer enters directly as a limiting factor. From Eqn. (1.2.4), the azimuth resolution is then

    δx = (λR / 2V_st)(1/S)    (1.2.8)

which results in

    δx = (λR / 2V_st)(L_a V_st / Rλ) = L_a / 2    *(1.2.9)

This counter-intuitive result, which states that improved resolution comes from smaller antennas, was first proposed by Cutrona et al. (1961). This result actually makes some assumptions that are not always valid, as we will discuss in Section 1.2.2; however, the resolution of contemporary SARs does approach this limit. Seasat, for example, had an antenna with an along-track dimension L_a = 10.7 m, and attained a resolution δx = 6 m from an orbital altitude of H = 800 km.

Although Eqn. (1.2.9) predicts that an arbitrarily fine resolution is attainable by reducing the antenna azimuth dimension, at least one factor operates to put a lower bound on resolution, even at this simple level of modeling. Since we need to measure range as well as along-track position, the radar must be pulsed. When a pulse is transmitted, the radar then goes into a listening mode to detect the target echo. Suppose the span of the (slant) range to which targets are confined (i.e., the slant range swath) is W_s (Fig. 1.7). We then require that the time of reception of the earliest possible echo from any point in the swath due to a particular pulse transmission be later than the time of reception of the last possible echo from any other point due to transmission of the previous pulse. Otherwise we will attribute the trailing portion of the previous pulse echo to a nearby point illuminated by the current pulse. If the near and far edges of the swath in slant range are R' and R", this requires that (Fig. 1.7)

    2R"/c < 2R'/c + T_p

where T_p = 1/f_p is the time separation between two pulse transmissions (i.e., the interpulse period) and f_p is the pulse repetition frequency (PRF). Thus the swath width is bounded by

    W_s = R" − R' < c / (2f_p)    (1.2.10)

The Doppler bandwidth of the return, which must not exceed the PRF if it is to be sampled unambiguously, is

    B_D = f_D,high − f_D,low = (2V_st/λ)[sin(θ_H/2) − sin(−θ_H/2)] ≈ 2V_st θ_H / λ = 2V_st / L_a = V_st / δx < f_p    *(1.2.11)

Equation (1.2.11) states that the radar must transmit at least one pulse each time the platform travels a distance equal to one half the antenna length. Combining Eqn. (1.2.10) and Eqn. (1.2.11), we have

    W_s < c / (2f_p) < c δx / (2V_st)    *(1.2.12)

which requires that the swath width W_s decrease as the azimuth resolution is increased (i.e., as δx is made smaller).

The inequalities in Eqn. (1.2.12) can be rearranged to illustrate the relationship between swath width and resolution as follows

    W_s / δx < c / (2V_st)    (1.2.13)

For a satellite in earth orbit, the right side in Eqn. (1.2.13) is nearly constant, on the order of 20,000. Using Eqn. (1.2.1) and Eqn. (1.2.9) with the nominal relation (Fig. 1.7)

    W_s = W_g sin η

the inequality Eqn. (1.2.13) yields a requirement on the antenna area of

    A_a = W_a L_a > 4V_st λ R_m (tan η) / c    *(1.2.14)

which is the lower bound for realization of full resolution SAR.
1.2.2 Doppler Filtering

There is one restriction in the derivation leading to the azimuth resolution expression of Eqn. (1.2.9). If a target is to be positioned along track (relative to the platform) in accord with its observed Doppler frequency, it must produce a constant Doppler frequency over the observation interval S. However, if this interval is the entire time the target is within the radar footprint, as was assumed for Eqn. (1.2.9), then the corresponding Doppler signal will have a frequency which sweeps over the entire Doppler bandwidth as the vehicle passes by the target. The actual analysis interval available using a frequency filtering technique may be much less than S, since it is restricted to the time span over which any particular point target has essentially a constant Doppler frequency. Put another way, the Doppler waveform for any finite interval due to a point target will not be that of a sinusoid. A Fourier analysis of such a waveform will always result in frequency components at more than one frequency, so that the target may be inferred to have a physical extent greater than δx = (λR/2V_st)(1/S), the resolution cell size. The target return will spread over multiple resolution cells of the Fourier spectrum.

To investigate this point further, consider Fig. 1.11, which shows a point target at some along-track position x_0 and slant range of closest approach R_0. With the radar at some arbitrary position x along track we have

    R = [R_0^2 + (x − x_0)^2]^(1/2)    (1.2.15)

The phase difference between transmitted and received waveforms due to two-way travel over the range R is

    φ = −4πR / λ

where the time derivative of φ is the Doppler frequency (in rad/s). Expanding the relation in Eqn. (1.2.15) to second order around some radar position x_c at a slant range R_c, we have

    φ = (−4π/λ)[R_c + (x_c − x_0)(x − x_c)/R_c + R_0^2 (x − x_c)^2 / (2R_c^3)]    (1.2.16)

where we can approximate R_c and R_0 as equal for the narrow beam radars used in most practical applications. For this case then

    f_D = (1/2π) dφ/ds = (−2V_st / (λR_0))[(x_c − x_0) + (x − x_c)]

If we define the value of x at which the Doppler frequency ceases to be effectively constant as that x for which the quadratic term in Eqn. (1.2.16) contributes a value of π/4 to φ at the edge of the aperture, then we can confine attention to the received waveform collected over an "aperture" X, where

    X/2 = |x − x_c| < (λR_0/8)^(1/2)

or

    X < (λR_0/2)^(1/2)    (1.2.17)

The corresponding time interval (i.e., the integration time of the SAR) is limited to

    S = X / V_st < (λR_0/2)^(1/2) / V_st

With this limitation, the resolution from Eqn. (1.2.7) is

    δx = (λR_0/2)^(1/2)    *(1.2.18)

    ΔR ≈ (x − x_0)^2 / (2R_0),   |x − x_0| ≪ R_0    (1.2.19)

where ΔR is the change in slant range and R_0 is the range at the point of closest approach (i.e., s = 0). Since x = V_st s, Δφ is a quadratic function of the along-track time, s, and the change in Doppler frequency is linear with time. For full resolution, we must use all the data collected over the interval, X = θ_H R_0, for which the target is in the radar beam. If this quadratic phase is compensated such that the returns from each pulse due to the target at x_0 can be added coherently, targets at x ≠ x_0 will correspond to improperly compensated returns so they will cancel. The processed returns from the target at x_0 will then dominate returns from other targets at the same range.

Figure 1.12 Slant plane geometry illustrating SAR focussing technique.

However, lacking that knowledge we must process with a variety of compensations matched to trial values of x_0 = x' and pick the peak response in order to measure x_0. This is all to say only that the signal processing should correlate the Doppler signal f_D(x) in Eqn. (1.2.20) with the known waveform over

    |x − x'| < X/2

After some mathematics, we obtain a normalized correlator output

    h(x') = (1/X) ∫ f_D(x) g*(x − x') dx

whose magnitude is

    |h(x')| = |{sin[2π(x' − x_0)(X − |x' − x_0|)/(λR_0)]} / [2π(x' − x_0)X/(λR_0)]|,   |x' − x_0| < X

taking careful account of limits of integration and the sign of x'. If the time bandwidth product of this signal,
is sensibly large, say > 10, over regions where |h(x')| is not small we have

    |h(x')| = |sin[u(x' − x_0)] / [u(x' − x_0)]|,   u = 2πX / (λR_0)    (1.2.21)

This function peaks at x' = x_0, the target location, and has a width on the order of

    δx = λR_0 / (2X) = 1/B    *(1.2.22)

This is an important result which we will expand upon in detail in Chapter 3. Replica correlation of the quadratic phase waveform in Eqn. (1.2.20) with itself results in a correlator output with a width which is independent of waveform duration X, under reasonable assumptions. The same result can be generated by matched filtering of the Doppler waveform and the two approaches can be shown to be equivalent.

Such replica correlation, or matched filtering, is the heart of high resolution SAR image formation algorithms. In the specific context at hand, from Eqn. (1.2.22) the correlator output is seen to resolve targets to within

    δx = L_a / 2    *(1.2.23)

which is the result argued heuristically above, leading to Eqn. (1.2.9).

Many effects need to be discussed before a full picture of the various focussed SAR processing procedures will be clear. The intent of this overview was to introduce the SAR concept. From this basis, the reader can better appreciate the historical developments in SAR sensor and processor technology, as well as the various applications of SAR data which follow in the remainder of this chapter.

1.3 HISTORY OF SYNTHETIC APERTURE RADAR

To gain a perspective on the progress that has been made in the evolution of synthetic aperture radar systems, we present a brief history of SAR. To set the stage for the discovery of SAR, we first address the early history of radar from ground based detection systems to side-looking airborne mappers. We will then trace key developments in the SAR sensor technology as well as the signal processor by highlighting the technology milestones leading toward modern radar systems.

1.3.1 Early History

Prior to discovery of synthetic aperture radar in the early 1950s, radar had long been recognized as a tool for detection and tracking of targets such as aircraft and ships. In 1903, a mere 15 years following the studies by Hertz on the generation, reception, and scattering of electromagnetic waves, Christian Hulsmeyer of Germany demonstrated a ship collision avoidance radar which he later patented (Hulsmeyer, 1904). In 1922, Marconi eloquently stated the value of radar for detection and tracking of ships in his acceptance speech for the IRE Medal of Honor. Most of the early US work in development of radar detection systems was conducted at the Naval Research Laboratory (NRL). In 1922, the first continuous wave radar system was demonstrated by A. H. Taylor and later patented (Taylor et al., 1934). However, it was not until 1934 that the first airborne pulsed radar system, operating at a carrier frequency of 60 MHz, was demonstrated by R. M. Page of NRL. In a parallel effort, radar systems for tracking and detection of aircraft were developed both in Great Britain and Germany during the early 1930s. By 1935, each of these countries successfully demonstrated the capability to track aircraft targets using short-pulse ranging measurements. Sir Robert Watson-Watt (1957) is generally credited with building the first operational radar system in 1937. This evolved into the Chain Home network. These stations were used throughout World War II to track aircraft across Western Europe.

Between the development of the first operational systems and the start of World War II, radar technology became generally available such that all the major warring powers had aircraft tracking capability. Additional enhancements in component technology enabled increases in both the tracking range and the radar frequency from the VHF band (30-300 MHz) to the UHF band (300 MHz-3 GHz). In 1938, an anti-aircraft fire control radar with a range of over 100 nautical miles operating at 200 MHz went into production (Brookner, 1985). Over 3000 units of this system (SCR-268) and its successors were built during the early war years. They contributed significantly to the success of the allied forces. In fact, an early-warning SCR system, installed in Honolulu, detected the Japanese invasion in 1941, but by the time the radar echoes were correctly interpreted it was too late to assemble a defense. During this period, parallel radar development activities were ongoing in both the USSR and Japan. However, very little information about that work is available.

Early in World War II, operational airborne radars were deployed by the US, Germany, and Great Britain. The first systems, which operated at VHF frequencies, were used for detection of other aircraft and ships with mixed success. Following the war, improvements in these systems came rapidly, in large part as a result of high frequency component technology development at the Massachusetts Institute of Technology (MIT) Radiation Laboratory. Most significant among those developments was a high frequency, high peak power microwave transmitter. Another important development came in image display systems. Most of the early radar displays presented the echoes on a long persistence cathode ray tube (CRT) in a range-angle format (B-Scan) in which the scan angle was presented relative to the aircraft flight direction. The development of the plan-position indicator (PPI) corrected for the angular distortions in the display and later scan converters enabled the binary phosphor displays to present a full gray scale. It was these among other early technology developments that set the stage for the evolution of imaging radar.
variation in terrain height produced distinctive peaks that migrated across the azimuth frequency spectrum. He reported that these experimental observations could provide the basis for a new type of radar with improved angular resolution. It was also in 1952 that Sherwin first reported the concept of a fully focussed array at each range bin by providing the proper phase corrections. Additionally, he put forth the concept of motion compensation based on phase corrections derived from platform accelerometer measurements, as applied to the received signal before storage. These ideas eventually evolved into development of a coherent X-band radar system. The first published article that included a focussed strip image was in a 1953 University of Illinois report. This system was designed to study sea surface characteristics as well as ship and submarine wakes.

As a result of the accomplishments of the Illinois group, a much larger effort was initiated. This study, coordinated by the University of Michigan, was termed Project Wolverine. The study team, whose activities are summarized by Cutrona (1961), was commissioned by the US Army to develop a high performance combat surveillance radar. They developed a number of operational airborne SAR systems that routinely began producing strip maps by 1958. It is this group that is credited with developing the first operational motion compensation system, using a Doppler navigator to measure long-term average drifts in conjunction with a gyro to correct for short-term yawing of the aircraft. Perhaps the most important development by Cutrona's group is the onboard optical recorder and ground optical correlator for converting the coherent SAR video signal into high resolution strip images.

In conjunction with the development of these early SAR systems, there were a number of other activities which advanced the state of the art in component technology. Recall that the key difference between the real aperture SLAR system and the SAR (besides the signal processing required) is that SAR is a coherent system. This requires both the magnitude and the phase of the echo samples to be preserved, which implies that the system pulse-to-pulse phase must be stable. The high power magnetron, which was such an important development for the SLAR, could not be used directly in the SAR system since the starting phase of each pulse was random. Instead, the early SAR systems used a coho-stalo arrangement, where, for each magnetron pulse, the starting phase of the pulse was measured. This phase was retained in a phase locked intermediate frequency COHerent Oscillator (coho), referenced to the STAble Local Oscillator (stalo), which was then used to demodulate the received echo.

The development of linear beam power amplifiers such as the klystron in 1939, followed shortly by the traveling wave tube (TWT), was a key advance in SAR technology, since these devices provided both the high peak power and phase stability required for SAR systems. The major advance in the TWT over the klystron is the bandwidth. The klystron's bandwidth is limited to only a few percent of the carrier frequency, while the TWT is capable of octave bandwidths. Many of today's airborne SAR systems, and some spaceborne systems requiring high peak power, still use TWT technology, although solid state power amplifiers are now used in many applications because of their increased reliability. Just as the solid state high power transistor technology matured through the 70s and 80s, the technology of monolithic microwave integrated circuit (MMIC) devices is moving toward the forefront in the 90s and should become the standard in the next generation of spaceborne and airborne SAR systems.

1.3.3 SAR Processor Evolution

Given the rapid early advancement in coherent radar sensor technology, in most cases the limiting element in radar system performance was the signal processor. In the early 1950s, with the advent of the first SAR systems, skeptics observed that the SAR simply trades antenna fabrication problems for signal processing problems. It was true that in this era, prior to digital computing, focussing the synthetic array posed a severe technical challenge. The key problems were: (1) How to store the information during the synthesis period; and (2) How to apply the range dependent quadratic phase correction to obtain a fully focussed synthetic array.

The early signal processors used an algorithm that is known today as unfocussed SAR (Section 1.2.2). The processing was essentially an incoherent sum of adjacent samples without phase compensation. One of the first processors, using a re-entrant delay line, was developed and tested at the University of Illinois in 1952. This system could integrate approximately 100 echoes before the distortion of the range pulse (τ_p = 0.5 µs) became excessive. This delay line effectively gave an improvement factor of 7 over the real aperture resolution. The Illinois group also evaluated other storage media, such as a photographic process using film for storage, in which direct integration of the film produced the desired synthetic aperture image. A third device, the electronic storage tube integrator, which was similar to Wiley's design, produced the best results among the storage devices evaluated. Early in the development of the SAR signal processor, because of the great difficulty in storing and reproducing analog data, it was recognized that a quantized signal would be a better approach (Blitzer, 1959). A key limitation in the analog storage devices was their relatively small dynamic range and nonlinear transfer characteristic. However, development of the required digital computing technology was at best a decade into the future.

Recognizing the limitations of electronic processing, the Michigan group embarked on a major effort to develop an optical recorder and correlator using photographic film. With film as the storage medium all three dimensions (range, azimuth, intensity) could be simultaneously recorded, thus providing a permanent record of the video signal for later processing, allowing optimization of the processing parameters by iterative correlations. Cutrona's group designed the first processor capable of achieving fully focussed resolution by applying a correction function that varied with range to compensate for the quadratic phase term. This breadboard processor was subsequently converted into an
operational unit and the first successful flight of an optical recorder was conducted. The recording was performed on 35 mm film using CRTs modified to generate the intensity modulated range trace. The system featured a Doppler navigator for drift angle compensation to center the return on zero Doppler and an optical recorder whose film advance rate was controlled by the estimated ground speed.

The ground processing equipment was housed in a van for transportation to the test sites. It contained both the optical correlator and the film processing equipment, including a photo enlarger for analyzing strip imagery. This system produced the first fully focussed SAR image in August 1957. The architecture developed by the Michigan group became the standard for SAR correlators for nearly two decades while the digital computing technology matured. A layout

and the study of geomorphic processes (Schaber et al., 1980). Although neither of these original systems is in operation today, they have both been replaced with modern systems of much higher performance. The parameters of these current systems, along with those of the Canadian Centre for Remote Sensing (CCRS) SAR, are given in Table 1.3.
Sands, New Mexico, missile test range (Fig. 1.16). These rockets carried an experimental L-band sounding radar that was being evaluated for the lunar lander. At the conclusion of these experiments in 1966, this radar was transferred to the NASA CV-990 aircraft and was eventually upgraded to the JPL airborne SAR system. The sounder's cavity-backed dipole antenna was replaced with a dual-polarized planar array and the original magnetron (built by Raytheon) was upgraded to a TWT. This system, which was used for a number of applications including the study of oceanic phenomena in the Gulf of California, collected data that eventually led to the approval of the Seasat SAR. In the period between the conclusion of the rocket experiments and the approval of the Seasat mission in 1975, NASA initiated the Apollo Lunar Sounder Experiment (ALSE). This experiment, conducted jointly by ERIM and JPL, was flown aboard the Apollo 17 lunar orbiter in December, 1972. It consisted of four major hardware subsystems (Porcello et al., 1974): (1) RF Electronics (CSAR); (2) HF antennas; (3) VHF antenna; and (4) Optical recorder (Fig. 1.17).

At the heart of the system is the coherent SAR (CSAR) transmitter/receiver subsystem which could operate at any of three radar frequencies (5, 15, and 150 MHz). The objectives of the experiment were threefold: to detect subsurface geologic structures; to generate a continuous lunar profile; and to map the lunar surface at radar wavelengths. The data was recorded on photographic film using a 70 mm optical recorder. The two high frequency (HF) dipole antennas were used for mapping the subsurface geologic features and the very high frequency (VHF) Yagi antenna oriented 20° off local vertical was used primarily for surface mapping and profiling (Fig. 1.18). The bulk of the signal processing was carried out at ERIM using a modified version of their airborne SAR coherent optical processor. Due to the large dynamic range of the data (conservatively estimated at 45 dB), the image film was inadequate to observe a number of subsurface features. At JPL, a small amount of the signal film was scanned and processed digitally using a PDP-11 computer, while ERIM constructed several holographic viewers to directly observe and manipulate the image projection on a liquid crystal display.

The success of the lunar sounder experiment, coupled with the oceanographic phenomena observed by the JPL L-band airborne SAR, led NASA in 1975 to
Figure 1.17 Optical recorder flown as part of the Apollo Lunar Sounder Experiment and later on SIR-A.
approve the inclusion of a SAR as part of the Seasat mission (Fig. 1.1). Despite the 10 years of oceanographic observation with airborne SAR systems, the proposed Seasat SAR created tremendous controversy within the scientific community. The dissenting camp argued that the coherent integration time was too long (~2.5 s), and would result in decorrelation of the signal due to movement of the ocean surface. The issue was never resolved theoretically and finally it was decided that the only possible means of resolution would be actually to fly the SAR on Seasat. As it turned out, the Seasat SAR observed a number of unique ocean features that significantly contributed to our understanding of the global oceans (Fu and Holt, 1982). Although the system (Table 1.2) was designed primarily to image the oceans with its steep 23° incidence angle, Seasat data has found a wide variety of applications. The most significant of these are in geology, polar ice, and land use mapping (Elachi et al., 1982a). The success of Seasat, however, was limited in terms of the duration of the data collection. A complete power failure just 100 days after its July 1978 launch, attributed to a short circuit in the slip rings that articulated the solar
TABLE 1.4 Key Parameters for the Shuttle Imaging Radar Missions

Mission                   SIR-A       SIR-B        SIR-C             X-SAR
Date                      1981        1984         1993, 1994        1993, 1994
Altitude (km)             259         225          215               215
Frequency Band (GHz)      L(1.28)     L(1.28)      L(1.28), C(5.3)   X(9.6)
Polarization              HH          HH           HH, HV, VH, VV    VV
Incidence Angle           50°         15-60°       15-60°            15-60°
Antenna Size (m x m)      9.4 x 2.2   10.7 x 2.2   12.1 x 2.8(L)     12.1 x 0.4
                                                   12.1 x 0.8(C)
Noise Equiv σ0 (dB)       -25         -35          -50(L), -40(C)    -26
Swath Width (km)          50          15-50        30-100            10-45
Az/Rng Resolution (m)     4.7/33      5.4/14.4     6.1/8.7           6.1/8.7
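The azimuth resolutions quoted in Table 1.4 for the L-band systems sit essentially at the single-look δx = L_a/2 limit of Eqn. (1.2.9), which is easy to verify:

```python
# Check the La/2 azimuth-resolution limit of Eqn. (1.2.9) against Table 1.4.
# Values: (L-band antenna length La in m, quoted azimuth resolution in m).
missions = {
    "SIR-A": (9.4, 4.7),
    "SIR-B": (10.7, 5.4),
    "SIR-C": (12.1, 6.1),
}

for name, (la, quoted) in missions.items():
    print(f"{name}: La/2 = {la / 2.0:.2f} m, quoted {quoted} m")
# SIR-A: La/2 = 4.70 m, quoted 4.7 m
# SIR-B: La/2 = 5.35 m, quoted 5.4 m
# SIR-C: La/2 = 6.05 m, quoted 6.1 m
```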
"'
.!!
.~
~
0.
Cl)
"'"
-0
Lil"
;,
....
c
,g
~
z
~
c
u"
vi'
UJ
z
u
Figure 1.19 (continued) The Magellan system: (c) Magellan image (right) overlaid on lower
...:;
E
resolution (left) Venera image. c
Ci
'-
0
TABLE 1.6 System Parameters for Four Operational Modes of Casslnl Titan Radar ..
Cll
.!!
~
'i?!
Mapper QI "
·2
>
Mode
Frequency Band (GHz)
SAR
Ku(l 3.8)
RAD
Ku( 13.8)
ALT
Ku(\3 .8)
SCAT
Ku( 13.8)
Linear
..
E
ca
ca
a.
::::>
Cij c
.~ · ~
8
Linear Linear
Polarization Linear 0
]
u '-
Cl)
20- 40 0 0 " 0
Incidence Angle (deg) 30(V) 7500(A) .... >.
30000- 60000(A)
Az/ Yert Resolution (m) 300-600(A)
4.3 0.1 ci' .,
E
100 ::::> -0
Range Bandwidth (MHz) 0.42, 0.85 21 .... ~
21
Dynamic Range (dB) 9 92 * <
4~
44 INTRODUCTION TO SAR
1.4 APPLICATIONS OF SAR DATA 45
operational airborne systems and Table 1.2 for the near-future spaceborne systems currently under development. Perhaps most notable is the number of SAR systems that will be in operation in the 1990s. The strong commitment by the European Space Agency (ESA), as well as the National Space Development Agency of Japan (NASDA) and the Canadian Space Agency, bodes well for advancement in the scientific use of SAR data. Furthermore, the increasing cooperation between agencies, as evidenced by the American, German, and Italian cooperation on the SIR-C/X-SAR instrument package, the increasing availability of the Soviet SAR data, and the planned worldwide participation in the Earth Observing System (EOS) program, should lead to rapid advancements in both SAR sensor and processor technology.

1.4 APPLICATIONS OF SAR DATA

The application of SAR data to geophysical measurements in a number of scientific disciplines is well documented (Elachi, 1988; Colwell, 1983b). A key element in the design and implementation of any SAR system is a clear understanding of the planned scientific utilization of the instrument and the primary science parameters affecting the radar design. In this section we will describe several key remote sensing applications of SAR imagery and their dependence on the radar parameters.

The first stage in the design of any scientific instrument is to establish a set of scientific goals that can then be translated into quantitative science requirements and ultimately system specifications. A design flowchart illustrating the flow of requirements from science objectives to experiment, sensor, and platform design specifications is given in Fig. 1.20. A more realistic scenario for an instrument such as SAR, where the technology is still evolving, is to initially specify the system and then define the experiments that are feasible within its performance constraints. The final design is the result of an iterative process in which system trade-offs are made to optimize the performance for a specific set of applications. A simple example of these trade-offs for a geologic mapping application would be to consider wide swath as higher priority than system dynamic range or radiometric calibration accuracy. Given that the system is constrained by the downlink data rate, the quantization (bits per sample) could be reduced to downlink more samples per interpulse period and thus obtain the wider swath.

Figure 1.20 Mission design flowchart illustrating flow from science requirements to sensor and platform specifications.

In the above example, the system trade-offs were relatively simple and the science impact, in terms of swath width or calibration accuracy, is generally well understood. However, trade-offs among other parameters, such as the integrated sidelobe ratio (ISLR) or the quadratic phase error, are not so easily interpreted in terms of their impact in limiting science applications. Similarly, geophysical measurements, such as ice type classification or soil moisture content, are difficult to translate into system specifications. This section is intended to present some key applications for SAR data in conjunction with a brief discussion of the scattering mechanisms, as an aid for the engineer to gain some insight into the dependency of various geophysical measurements on the radar system design.

1.4.1 Characteristics of SAR Data

The design of a SAR for remote sensing begins with scientific goals which are used to define a quantitative set of scientific requirements. Generally these requirements can be divided into those affecting the radar subsystem, the processor subsystem, or the platform and downlink subsystems (including mission design). A list of the key parameters is given in Table 1.8.

To translate scientific requirements into system specifications, some assumptions must be made about the target characteristics. This necessitates some a priori understanding of the interaction between the transmitted wave and the target. Some of the parameters that characterize the received signal depend weakly on the target characteristics, such as:

• amplitude (absolute value, statistics)
• relative phase (cross-channel statistics)
• polarization (orientation, ellipticity)
TABLE 1.8 List of Key SAR System Design and Performance Parameters

Sensor Parameters
• Radar Frequency or Wavelength
• Antenna Polarization (Ellipticity and Orientation)
• Range Bandwidth
• Signal to Thermal Noise Ratio
• Dynamic Range
• Swath Width
• Look Angle

Image Parameters
• Range and Azimuth Resolution
• Peak and Integrated Sidelobe Ratios
• Effective Number of Looks (Speckle Noise)
• Image Presentation (Radiometric and Geometric)
• Calibration Accuracy (Radiometric, Geometric and Polarimetric)

Mission/Platform Parameters
• Altitude/Orbit Coefficients
• Flight Date/Time
• Data Link
• Platform Stability

These requirements are then translated into system specifications such as:

• noise temperature or noise figure
• antenna gain
• amplitude/phase versus frequency/temperature performance
• transmitter power

These specifications, which directly reflect the surface scattering mechanisms, can be used to predict the response for a given type of surface. It is the dependency of these parameters on the surface characteristics that must be understood to develop models for extraction of the geophysical information from the SAR data.

1.4.2 Surface Interaction of the Electromagnetic Wave

The characteristics of the reflected wave (amplitude, phase, polarization) are primarily dependent on three surface parameters: (1) dielectric constant (permittivity); (2) roughness (rms height); and (3) local slope. To relate these surface characteristics to the signal characteristics, some type of scattering model is required. A detailed analysis of the various models is beyond the scope of this text and can be found elsewhere (Ulaby et al., 1982, 1986). Instead, it is our intention to provide an overview of the scattering mechanisms as a foundation from which we can discuss various applications of the SAR data.

If we assume for simplicity that the wave is propagating in a homogeneous, isotropic, non-magnetic medium, then from Maxwell's equations we can write an expression for the complex electric field vector as

    E(z, t) = A exp[j(k'z - ωt + φ)]    (1.4.1)

where A is the amplitude vector. The angular frequency, ω, is given by

    ω = 2πf_0 = 2πc/λ    (1.4.2)

where f_0 is the carrier frequency and λ is the wavelength. The wave propagates in some direction z, with k' related to the wavenumber k by

    k' = √ε_r k = 2π√ε_r / λ    (1.4.3)

Here ε_r = ε/ε_0 is the permittivity of the medium relative to that of free space (ε_0). The relative permeability μ_r is assumed to be unity, which is a good assumption at microwave frequencies.

The polarization of the electric field refers to the direction of the amplitude vector, A, at some instant in time. For a linearly polarized wave, the direction of A is fixed (i.e., independent of time) relative to the propagation direction as shown in Fig. 1.21a. For an elliptically polarized wave, the direction of A is a function of time and effectively rotates about the axis of propagation. The easiest way to conceptualize this is to consider the E field vector as consisting of signal components oriented along the x axis and the y axis as shown in Fig. 1.21b. Each component has the same frequency, but in general a different amplitude and phase. The vector sum of these two field vectors

    E(z, t) = A_x exp[j(k'z - ωt + φ_1)] + A_y exp[j(k'z - ωt + φ_2)]    (1.4.4)

is an elliptically polarized wave. If |φ_1 - φ_2| = π/2 and A_x = A_y the wave is said to be circularly polarized.

When an EM wave is emitted from a source, such as an antenna, the energy is radiated over a range of angles. At any given time t_0, the phase of E, that is,

    φ_0 = k'z - ωt_0 + φ    (1.4.5)

is constant over a surface. If this surface is a plane of constant amplitude, then the wave is referred to as a uniform plane wave. In a vacuum, this plane propagates at a phase velocity

    c = 1/√(μ_0 ε_0)    (1.4.6)
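The circular polarization condition stated below Eqn. (1.4.4) can be checked numerically. A minimal sketch in plain Python, taking the real part of each field component at a fixed point z = 0: for A_x = A_y and a 90° phase difference, the magnitude of the total field vector is constant in time while its direction rotates.

```python
import math

def field_components(t, a_x, a_y, phi1, phi2, omega=2.0 * math.pi):
    # Real parts of the two orthogonal components of Eqn. (1.4.4) at z = 0
    e_x = a_x * math.cos(-omega * t + phi1)
    e_y = a_y * math.cos(-omega * t + phi2)
    return e_x, e_y

# Equal amplitudes and a 90 degree phase difference: circular polarization
mags = []
for i in range(200):
    e_x, e_y = field_components(i / 200.0, 1.0, 1.0, 0.0, math.pi / 2.0)
    mags.append(math.hypot(e_x, e_y))

# Constant magnitude: the tip of the field vector traces a circle
assert max(mags) - min(mags) < 1e-9
```

With unequal amplitudes or a different phase offset the same loop traces an ellipse, i.e., the general elliptically polarized case of Eqn. (1.4.4).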
Figure 1.21 Electric field vector propagation: (a) Linearly polarized wave, and (b) Elliptically polarized wave (after Purcell, 1981).

where c is the speed of light and ε_0, μ_0 are the permittivity and permeability of free space. Generally, the propagation speed through the atmosphere of an EM wave in the 1-10 GHz range can be well approximated by c. At frequencies above 10 GHz molecular absorption can significantly attenuate the signal, while for frequencies below 1 GHz the ionosphere is dispersive, resulting in rotation of the polarized wave, attenuation of the signal amplitude, and a reduced propagation velocity. These effects are discussed in more detail in Chapter 7.

The interaction of the radiated EM wave with the surface is represented pictorially in Fig. 1.22. The interaction of the wave and the surface is generally referred to as scattering and is classified into either surface scattering or volume scattering. Surface scattering is defined as scattering from the interface between two dissimilar media, such as the atmosphere and the earth's surface, while volume scattering results from particles within a non-homogeneous medium.

Figure 1.22 Interaction of radiated electromagnetic wave (incident uniform plane wave: lines of constant phase and amplitude) with a rough surface of dielectric constant ε_r.

1.4.3 Surface Scattering: Models and Applications

Given a homogeneous medium (i.e., no volume scattering), characterized by a relative dielectric constant ε_r, if the surface roughness (rms height relative to a smooth surface) is very small as compared to the radar wavelength, the scattering mechanism is specular. In specular scattering the incident wave's reflection and transmission through the surface are governed by Snell's law. Thus, given a wave incident at an angle η, a portion of the energy will be reflected at an angle η and a portion refracted at an angle η', where

    sin η = √ε_r sin η'    (1.4.7)

Subsurface Mapping

An example of this type of scattering was observed in the Libyan Desert region of southwestern Egypt by the Shuttle Imaging Radar (SIR-A) instrument. The climate in this region is hyperarid, resulting in a surface totally devoid of vegetation. The subsurface composition is a homogeneous sand layer of 1-2 meters depth under which is a second layer of bedrock (Fig. 1.23).

Figure 1.23 Scattering mechanism for imaging subsurface drainage channels in Libyan desert (sand layer, ε_r = 2.5, about 2 m deep, over bedrock, ε_r = 8.0, with stone drainage channels).

Scattering
from the subsurface bedrock layer produces a number of image features not evident in the Landsat data. The relative dielectric constant of the sand layer is estimated at ε_r = 2.5, while the underlying bedrock has a higher dielectric constant, ε_r = 8 (Elachi et al., 1984). Using Eqn. (1.4.7) for η = 50°, the refraction angle is estimated to be η' ≈ 29°. The resultant scattering from the subsurface bedrock layer produces a relatively strong signal, providing a detailed map of ancient natural drainage channels buried by thousands of centuries of shifting sand. This is illustrated in Fig. 1.24 by the SIR-A image in comparison with a Landsat scene of the same area. The Landsat visible wavelength detectors can only measure the surface reflectance, which is nearly featureless, while the SAR's subsurface imaging capability illustrates a detailed map of the bedrock layer. Such radar sounding techniques are invaluable for scientists studying the geologic history of the region, and may also prove useful for locating sources of water deep below the surface.

Figure 1.24 Images of desert region between Iraq and Saudi Arabia: (a) Landsat; (b) SIR-A. SIR-A detail of drainage channels is from subsurface penetration (Elachi et al., 1982b).

Bragg Scattering

If we now extend our specular scattering model to slightly rough surfaces, assuming a homogeneous medium (i.e., no volume scattering), with an rms height variation less than λ/8, then we can describe the scattering using a Bragg model (Barrick and Peake, 1968). Given the spatial spectrum of the surface (as derived from the two-dimensional transform of the height profile), the Bragg model states that the dominant backscattered energy will arise from the surface spectral components that resonate with the incident wave. Thus, for surface variation patterns at wavelengths given by

    λ_s = nλ / (2 sin η),  n = 1, 2, ...    (1.4.8)

a strong backscattered return will result. The dominant return will be for the wavelength where n = 1. At steep incidence angles, the scattering is generally a combination of Bragg and specular scattering. Even for a Bragg surface, the return can be dominated by specular scattering, which is strongly dependent on the distribution and extent of the local slope (Winebrenner and Hasselmann, 1988). A natural surface can be approximated by a series of small planar facets, each tangential to the actual surface, upon which the small-scale roughness is superimposed. The incident wave therefore has a scatter component that is due to the local slope (i.e., from facet scattering), as well as a point scatterer component dependent on the roughness (i.e., from Bragg or resonant reflection). The resultant backscatter curve as a function of local incidence angle is a combination of these two mechanisms as shown in Fig. 1.25.

Figure 1.25 Backscatter curve for natural surfaces illustrating the two scattering mechanisms: Facet scattering for steep incidence angles; Bragg scattering for shallow incidence angles.

Oceanography. Bragg models are most frequently used for describing scattering from the sea surface. Due to the large dielectric constant of water, the scattering mechanism is exclusively surface scattering. The resonance phenomenon on which the Bragg model is based is well suited to the periodic structure of the ocean waves. Ocean waves are detectable as periodic bands on SAR imagery, due to the spatial variation of the short waves within the longer waves, as well as to the orbital motion of the long waves themselves. However, due to the rms height variation limitation of the Bragg model (i.e., < λ/8), only the small capillary waves or short gravity waves exhibit Bragg resonance. The analysis of SAR ocean wave imagery is typically performed in the spatial transform domain where the Bragg resonance can be observed directly. Figure 1.26 shows a set of ocean wave spectra for an area off the coast of Chile. The SIR-B spectrum is shown after removal of the system transfer function and smoothing (Monaldo, 1985). From the wave spectra, parameters such as the direction of
the waves (with a 180° ambiguity), the wavelength (or wave number), and the
wave height can be directly measured. The Bragg resonance is strongest for
waves traveling in the radar look direction. As the azimuth component of the
wave motion increases, the backscattered energy is attenuated and nonlinear
corrections need to be applied for an accurate estimate of the geophysical
parameters (Alpers et al., 1981 ).
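The Bragg resonance discussed above can be sketched numerically. The first-order form Λ = nλ/(2 sin η), with η the incidence angle, is assumed here; the dominant n = 1 term gives the resonant surface wavelength for a given radar:

```python
import math

def bragg_surface_wavelength_m(radar_wavelength_m, incidence_deg, n=1):
    # Assumed first-order Bragg condition: surface spectral components of
    # wavelength n * lambda / (2 * sin(eta)) resonate with the incident wave.
    return n * radar_wavelength_m / (2.0 * math.sin(math.radians(incidence_deg)))

# L-band (23 cm) at a 30 degree incidence angle
print(round(bragg_surface_wavelength_m(0.23, 30.0), 2))  # 0.23
```

The resonant surface wavelength is on the order of the radar wavelength itself, which is why only the short capillary and gravity waves, not the long swell directly, satisfy the resonance.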
Information derived from the directional wave energy spectra can be directly
used for updating and validating ocean wave forecast models. These models
are key elements in predicting global climatology. The measurement of ocean
characteristics is a primary objective of future space orbiting SAR systems such
as the E-ERS-1 and SIR-C, both of which have implemented special modes for
ocean wave imaging. The SIR-C system will feature an onboard processor
experiment, developed by the Johns Hopkins University Applied Physics
Laboratory, to directly generate ocean wave spectra for near real-time analysis
of the ocean wave properties (MacArthur and Oden, 1987). The E-ERS-1 system
also features a special wave mode of operation in which the SAR acquires only
small patches of data (5 km x 5 km) spaced at regular intervals (250 km)
across the oceans. These patches will be ground-processed to produce wave
spectra images for distribution to the science community (Cordey et al., 1988).
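The sampling strategy just described implies a very small along-track coverage fraction; a one-line estimate using the patch and spacing values from the text:

```python
patch_km = 5.0      # along-track extent of each imaged wave-mode patch
spacing_km = 250.0  # interval between successive patches

coverage = patch_km / spacing_km
print(f"{coverage:.1%}")  # 2.0% of the along-track distance is sampled
```

This sparse sampling is what makes the wave mode compatible with the limited downlink and ground-processing capacity, while still providing regular global statistics of the wave field.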
volcanic crater is clearly visible in the right center portion of the frame. Due
to acidification from volcanic fumes, there is very little vegetation in this region.
Two main types of lava flows are easily distinguished. The aa flows, which are rough, appear brightest in the scene, while the smoother pahoehoe flows comprise the darker regions. Additionally, as a result of the smoothing effect from weathering, the change in radar brightness as a function of incidence angle can be used to identify the relative age of the two lava types. This is especially prominent in the Kau desert region where the contrast between the lava types is more distinct at η = 48° than at η = 28°.
The penetration depth, δ_p, is given approximately by

    δ_p = λ √ε' / (2π ε'')    (1.4.9)

where the relative dielectric constant, a complex number ε_r = ε' + jε'', must satisfy ε''/ε' < 0.1 for Eqn. (1.4.9) to be valid. In calculating the penetration depth from Eqn. (1.4.9) the scattering within the medium is assumed to be negligible.
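The two relations used in this section can be exercised together on the desert example: Eqn. (1.4.7) gives the refraction angle into the sand, and Eqn. (1.4.9) the depth to which the wave penetrates. The loss value ε'' = 0.05 below is a hypothetical placeholder (the text quotes only ε' = 2.5 for the sand), chosen so the validity condition ε''/ε' < 0.1 holds:

```python
import math

def refraction_angle_deg(eta_deg, eps_r):
    # Snell's law at the air/medium interface, Eqn. (1.4.7):
    # sin(eta) = sqrt(eps_r) * sin(eta')
    return math.degrees(math.asin(math.sin(math.radians(eta_deg)) / math.sqrt(eps_r)))

def penetration_depth_m(wavelength_m, eps_real, eps_imag):
    # Eqn. (1.4.9), valid only for low-loss media (eps''/eps' < 0.1),
    # with scattering inside the medium neglected.
    assert eps_imag / eps_real < 0.1, "low-loss approximation violated"
    return wavelength_m * math.sqrt(eps_real) / (2.0 * math.pi * eps_imag)

# Hyperarid sand layer: eps' = 2.5 (from the text), eps'' = 0.05 (hypothetical)
print(round(refraction_angle_deg(50.0, 2.5)))          # 29 degrees, as in the text
print(round(penetration_depth_m(0.23, 2.5, 0.05), 2))  # 1.16 m at L-band (23 cm)
```

With this illustrative loss value, the L-band penetration depth comes out on the order of a meter, consistent with imaging through the 1-2 m sand layer in the SIR-A example.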
wavelength, determine to a large part the fraction of backscattered energy. If we consider, for example, a forest canopy, the size and distribution of the scatterers can range from a small-scale very dense distribution, such as the needles of a pine tree, to a sparse configuration of branches and trunks, as in a deciduous stand during the winter season (Fig. 1.28). Additionally, the moisture content of each of these component parts determines the fraction of the wave energy that is scattered from the tree limb versus the energy attenuated within the limb. Several models have been recently developed to describe the scattering in vegetation canopies (Richards et al., 1987; Durden et al., 1989; Ulaby et al., 1990).

Although all scattering models make some approximations about the EM wave and surface interaction, certain relationships between the radar wavelength and the surface properties (such as effective canopy density) can be predicted. Generally, the lower the radar frequency (longer wavelength) the greater the penetration of the canopy, as indicated by Eqn. (1.4.9). For short wavelengths, dense canopies, and grazing incidence angles, the scattering is typically dominated by surface scatter from the top of the canopy. For long wavelengths, sparse canopies, and steep incidence angles the scattering may be predominantly from the ground, again resulting in surface scattering characteristics. In between these extremes, a combination of surface and volume scattering from the canopy is observed.

This type of scattering mechanism is demonstrated by the NASA/JPL trifrequency radar data (see Table 1.3). The wavelength dependence can be observed in Fig. 1.29 by noting the relative change in scene brightness among the P-band (λ = 65 cm), L-band (λ = 23 cm), and C-band (λ = 5 cm) images. The three scenes of a farm region near Thetford, England, were acquired simultaneously (i.e., on a single pass) in July, 1989.

In addition to the frequency dependent variation in the backscatter coefficient (σ⁰), other scattering characteristics of the forest canopy can be measured using a coherent, multipolarization capability such as that of the JPL system. Its capability to simultaneously image a target with both horizontally (H) and vertically (V) oriented electric field vectors, and to record both the like- and cross-polarized returns, allows synthesis of the target's polarization signature (Zebker et al., 1986). A key parameter in a polarimetric SAR such as the JPL system is the relative phase between the two like-polarized returns. For single-bounce scattering, such as in the surface scattering from the canopy top or the soil, there is zero relative phase shift between the HH and VV like-polarized returns (i.e., the phase difference of the transmit plus receive like-polarized channels is constant). If the dominant scattering mechanism is two-bounce (e.g., ground to the trunk to radar), the relative phase shift is constant, but 180° out of phase with the single-bounce returns. However, for volume scattering within the canopy, the relative HH to VV phase shift is random due to the multiple scattering of the EM wave. Given this characteristic we can determine the scattering mechanism by analyzing the relative phase term (van Zyl, 1989). As we vary the frequency, the scattering mechanism changes, since the EM wave canopy penetration is frequency dependent. This is illustrated in Fig. 1.30 for a scene near Freiburg, Germany. As might be expected, the short wavelength C-band image is dominated by surface scattering from the top of the canopy, resulting in a zero relative HH to VV phase shift, which is classified as single-bounce scatter. At the longer L-band wavelength, the volume scattering dominates, while the longest wavelength, at P-band, penetrates the canopy, giving rise to significant two-bounce scatter.

If we measure the change in the relative phase statistics across the three frequencies, we can derive a very sensitive measure of the canopy density. Thus, even in the absence of a comprehensive model to characterize the absolute backscatter from the canopy, relative changes in certain canopy parameters can be detected using a multichannel SAR system under the right conditions. If the system is calibrated, a multifrequency, multipolarization SAR, such as SIR-C, is capable of monitoring environmental changes, including deforestation. This is becoming an especially important application for free-flying orbital SARs as the effects of acid rain on our global ecosystem become more severe.
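The phase-based discrimination described above can be sketched as a toy classifier. The circular-statistics approach and the thresholds below are illustrative choices for this sketch, not the published algorithm of van Zyl (1989):

```python
import cmath
import math
import random

def classify_by_hh_vv_phase(phase_diffs_rad):
    # Mean resultant vector of the HH-VV phase differences: its angle
    # estimates the mean phase, its length the phase coherence (0..1).
    resultant = sum(cmath.exp(1j * p) for p in phase_diffs_rad) / len(phase_diffs_rad)
    coherence = abs(resultant)
    mean_phase = cmath.phase(resultant)
    if coherence < 0.5:                 # random phase -> volume scattering
        return "volume"
    if abs(mean_phase) < math.pi / 2:   # near 0 degrees -> single bounce
        return "single-bounce"
    return "double-bounce"              # near 180 degrees -> two bounces

rng = random.Random(0)
single = [rng.gauss(0.0, 0.2) for _ in range(500)]            # canopy top / soil
double = [math.pi + rng.gauss(0.0, 0.2) for _ in range(500)]  # ground-trunk
volume = [rng.uniform(-math.pi, math.pi) for _ in range(500)] # inside canopy

print(classify_by_hh_vv_phase(single))  # single-bounce
print(classify_by_hh_vv_phase(double))  # double-bounce
print(classify_by_hh_vv_phase(volume))  # volume
```

The key point the sketch captures is that the mean phase separates single- from double-bounce returns, while the spread (coherence) separates both from volume scattering.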
Figure 1.28 Model response of forest canopy to various wavelengths based on number and distribution of scatterers, with scatterer sizes ranging from about 1 mm (leaves, twigs) through 1-10 cm (branches) to 1 m (trunks) (Carver et al., 1987).

Figure 1.31 Scattering mechanisms for various ice types: multiyear, first year and open water (courtesy of W. Weeks).

Polar Ice

A second example of volume scattering is in the imaging of polar sea ice. Polar ice has imbedded in it a mixture of salt, brine pockets, and air bubbles. It is a characteristic inhomogeneous medium with a relatively low dielectric constant
of about ε_r = 3. (Some ice exhibits an ε_r lower than that of sand in the hyperarid Saharan desert.) Depending on ice type, which is usually correlated with age, the characteristic scattering can change dramatically. Generally, at steep incidence angles surface scattering will dominate the return signal, while at more shallow incidence angles volume scattering effects become prominent, depending on the radar wavelength and the ice type. These scattering mechanisms are illustrated in Fig. 1.31 (Carver et al., 1987). For open water the mechanism is exclusively surface scattering with a large specular scatter component depending on the surface roughness (i.e., on wind speed). The first year ice is also predominantly surface scattering, due to the large ε_r resulting from the high salinity content. However, the surface scatter is more diffuse as a result of the ice ridges and rubble fields. The multiyear ice exhibits both volume and surface scattering due to the low dielectric constant resulting from its characteristic low salinity.
The relationship between penetration depth, δ_p, and the radar frequency for each of these ice type (age) categories is shown in Fig. 1.32. As expected from Eqn. (1.4.9), the penetration depth is inversely proportional to radar frequency.
Figure 1.32 Penetration depth (m) versus frequency (GHz) in pure ice, first-year sea ice and multiyear sea ice, at T = -10°C (Ulaby et al., 1982).
However, despite the fact that the real dielectric component ε' of the multiyear ice is smaller than that of the first year, the imaginary component ε'' typically offsets this factor, resulting in a deeper penetration depth for multiyear ice. The value of ε'' is dependent on a number of factors, such as the ice density, temperature, and salinity. Thus, depending on the environmental conditions from the point of formation of the ice until it is observed, ε'' can assume a wide range of values. Since ε'' decreases with decreasing temperature, the penetration depth could vary widely with a diurnal cycle period during the summer months.
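The inverse frequency dependence shown in Fig. 1.32 follows directly from Eqn. (1.4.9) once λ = c/f is substituted. A quick sketch, with the dielectric values held fixed across frequency for illustration (ε' = 3 as quoted for sea ice; the loss ε'' = 0.05 is a hypothetical placeholder):

```python
import math

C = 3.0e8  # speed of light, m/s

def penetration_depth_m(freq_hz, eps_real, eps_imag):
    # Eqn. (1.4.9) with lambda = c / f:
    # delta_p = c * sqrt(eps') / (2 * pi * f * eps'')
    wavelength = C / freq_hz
    return wavelength * math.sqrt(eps_real) / (2.0 * math.pi * eps_imag)

# Doubling the frequency halves the penetration depth (fixed eps', eps'')
d1 = penetration_depth_m(1.25e9, 3.0, 0.05)  # L-band
d2 = penetration_depth_m(2.5e9, 3.0, 0.05)
assert abs(d1 / d2 - 2.0) < 1e-9
```

In practice ε' and especially ε'' also vary with frequency, temperature, and salinity, which is why the measured curves in Fig. 1.32 are not exact 1/f lines.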
Figure 1.33 illustrates dramatically the wavelength dependence of scattering from multiyear ice. This P-, L- and C-band total power (i.e., all polarizations) three-frequency image set was acquired by the NASA/JPL airborne SAR in
Figure 1.35 Dependence of backscatter coefficient (dB) on incidence angle and soil moisture (i.e., volumetric water content, g/cm³) for C-band (λ = 7 cm), HH polarization at: (a) σ_rms = 1.1 cm; and (b) σ_rms = 4.1 cm (Ulaby et al., 1986).
Elachi, C., L. E. Roth and G. G. Schaber (1984). "Spaceborne Radar Subsurface Imaging in Hyperarid Regions," IEEE Trans. Geosci. and Remote Sens., GE-22, pp. 383-388.

Elachi, C., E. Im, L. Roth and C. Werner (1991). "Cassini Titan Radar Mapper," Proc. IEEE (in press).

Ford, J. P., J. B. Cimino, B. Holt and M. R. Ruzek (1986). "Shuttle Imaging Radar Views the Earth from Challenger: The SIR-B Experiment," JPL Pub. 86-10, Jet Propulsion Laboratory, Pasadena, CA.

Ford, J. P., R. G. Blom, M. L. Bryan, M. I. Daily, T. H. Dixon, C. Elachi and E. C. Xenos (1980). "Seasat Views North America, the Caribbean and Western Europe with Imaging Radar," JPL Pub. 80-67, Jet Propulsion Laboratory, Pasadena, CA.

Freden, S. C. and F. Gordon, Jr. (1983). Landsat Satellites, Manual of Remote Sensing (Simonett, D. and F. Ulaby, eds.), Chapter 12, Vol. I, Am. Society of Photogrammetry, Sheridan Press, Falls Church, VA.

Fu, L.-L. and B. Holt (1982). "Seasat Views Oceans and Sea Ice with Synthetic Aperture Radar," JPL Pub. 81-120, Jet Propulsion Laboratory, Pasadena, CA.

Fung, A. K. (1982). "A Review of Volume Scattering Theories for Modeling Applications," Radio Science, 17, pp. 1007-1017.

Goddard Space Flight Center (1989). Earth Observing System, Reference Handbook, NASA GSFC, Greenbelt, Maryland.

Holahan, J. (1963). "Synthetic Aperture Radar," Space/Aeronautics, 40, pp. 88-93.

Hulsmeyer, C. (1904). "Hertzian-Wave Projecting and Receiving Apparatus Adapted to Indicate or Give Warning of the Presence of a Metallic Body such as a Ship or a Train," British Patent 13,170.

Hunten, D. M., M. G. Tomasko, F. M. Flasar, R. E. Samuelson, D. Strobel and D. J. Stevenson (1984). Titan, in Saturn (T. Gehrels and M. S. Mathews, eds.), University of Arizona Press, Tucson, pp. 671-759.

Im, E., C. Werner and L. Roth (1989). "Titan Radar Mapper for the Cassini Mission," 21st Lunar and Planetary Science Conf., Johnson Space Center, Houston, TX, pp. 544-454.

Jensen, H., L. C. Graham, L. J. Porcello and E. N. Leith (1977). "Side-Looking Airborne Radar," Scientific American, 237, pp. 84-95.

Johnson, W. T. K. and A. T. Edgerton (1985). "Venus Radar Mapper (VRM): Multimode Radar System Design," SPIE, 589, pp. 158-164.

Jordan, R. (1980). "The Seasat-A Synthetic Aperture Radar System," IEEE J. of Oceanic Eng., OE-5, pp. 154-164.

Kahle, A. and A. Goetz (1983). "Mineralogical Information from a New Airborne Thermal Infrared Multispectral Scanner," Science, 222, pp. 24-27.

Kahle, A. B., J. P. Schieldge, M. J. Abrams, R. E. Alley and C. J. LeVine (1981). "Geological Application of HCMM Data," JPL Pub. 81-55, Jet Propulsion Laboratory, Pasadena, CA.

Kirk, J. C., Jr. (1975). "Digital Synthetic Aperture Radar Technology," IEEE International Radar Conference Record, pp. 482-487.

Li, F. and R. Goldstein (1989). "Studies of Multi-baseline Spaceborne Interferometric Synthetic Aperture Radars," IEEE Trans. Geosci. and Remote Sens., GE-28, pp. 88-97.

MacArthur, J. L. and S. F. Oden (1987). "Real-Time Global Ocean Wave Spectra from SIR-C: System Design," IGARSS'87 Digest, Vol. II, Ann Arbor, MI, pp. 1105-1108.

MacDonald, H. C. (1969). "Geologic Evaluation of Radar Imagery from Darien Province, Panama," Modern Geology, 1, pp. 1-63.

Mercer, J. B. (1989). "A New Airborne SAR for Ice Reconnaissance Operations," Proc. IGARSS'89, Vancouver, BC, p. 2192.

Monaldo, F. M. (1985). "Measurements of Directional Wave Spectra by the Shuttle Synthetic Aperture Radar," Johns Hopkins APL Tech. Digest, 6, pp. 354-360.

National Aeronautics and Space Administration Advisory Council (1988). Earth System Science: A Program for Global Change, NASA, Washington, DC.

Nghiem, S. V., J. A. Kong and R. T. Shin (1990). "Study of Polarimetric Response of Sea Ice with a Layered Random Medium Model," Proc. IGARSS'90, Washington, DC, pp. 1875-1878.

Pettengill, G. H., D. B. Campbell and H. Masursky (1980). "The Surface of Venus," Scientific American, 243, pp. 54-65.

Porcello, L. J., R. L. Jordan, J. S. Zelenka, G. F. Adams, R. J. Phillips, W. E. Brown, Jr., S. W. Ward and P. L. Jackson (1974). "The Apollo Lunar Sounder Radar System," Proc. IEEE, 62, pp. 769-783.

Purcell, E. M. (1981). Electricity and Magnetism, Berkeley Physics Course, Vol. 2, 2nd Ed., McGraw-Hill, New York.

Rawson, R. and F. Smith (1974). "Four Channel Simultaneous X-L Band Imaging SAR Radar," 9th Inter. Symp. Remote Sensing of Environment, University of Michigan, Ann Arbor, pp. 251-270.

Richards, J. A., G. Q. Sun and D. S. Simonett (1987). "L-Band Radar Backscatter Modeling of Forest Stands," IEEE Trans. Geosci. and Remote Sens., GE-25, pp. 487-498.

Schaber, G. G., C. Elachi and T. F. Farr (1980). "Remote Sensing of S. P. Mountain and S. P. Lava Flow in North Central Arizona," Rem. Sens. Env., 9, pp. 149-170.

Sherwin, C. W., J. P. Ruina and R. D. Rawcliffe (1962). "Some Early Developments in Synthetic Aperture Radar Systems," IRE Trans. on Military Elec., MIL-6, pp. 111-115.

Skolnik, M. I., ed. (1970). Radar Handbook, McGraw-Hill, New York.

Skolnik, M. I. (1980). Introduction to Radar Systems, 2nd Ed., McGraw-Hill, New York.

Taylor, A. H., L. C. Young and L. A. Hyland (1934). "System for Detecting Objects by Radio," United States Patent 1,981,884.

Ulaby, F. T. (1980). "Vegetation Clutter Model," IEEE Trans. Ant. Prop., AP-28, pp. 538-545.

Ulaby, F. T., R. K. Moore and A. K. Fung (1981). Microwave Remote Sensing, Active and Passive, Volume I: Microwave Remote Sensing Fundamentals and Radiometry, Addison-Wesley, Reading, MA.

Ulaby, F. T., R. K. Moore and A. K. Fung (1982). Microwave Remote Sensing, Active and Passive, Volume II: Radar Remote Sensing and Surface Scattering and Emission Theory, Addison-Wesley, Reading, MA.

Ulaby, F. T., R. K. Moore and A. K. Fung (1986). Microwave Remote Sensing, Active and Passive, Volume III: From Theory to Applications, Artech House, Dedham, MA.

Ulaby, F. T., K. Sarabandi, K. McDonald, M. W. Whitt and M. C. Dobson (1990). "Michigan Microwave Canopy Scattering Model (MIMICS)," Int. J. Remote Sensing, 11, pp. 1223-1253.

van Roessel, J. W. and R. D. de Godoy (1974). "SLAR Mosaic for Project RADAM," Photogram. Eng., 40, pp. 583-595.

van Zyl, J. J. (1989). "Unsupervised Classification of Scattering Behavior Using Radar Polarimetry Data," IEEE Trans. Geosci. and Remote Sens., GE-27, pp. 36-45.

Viksne, A., T. C. Liston and C. D. Sapp (1969). "SLR Reconnaissance of Panama," Geophysics, 34, pp. 54-64.

Watson-Watt, R. (1957). Three Steps to Victory, Odhams Press, London.

Wiley, C. A. (1965). "Pulsed Doppler Radar Methods and Apparatus," United States Patent No. 3,196,436, Filed August 1954.

Wiley, C. A. (1985). "Synthetic Aperture Radars - A Paradigm for Technology Evolution," IEEE Trans. Aerospace Elec. Sys., AES-21, pp. 440-443.

Winebrenner, D. P. and K. Hasselmann (1988). "Specular Point Scattering Contribution to the Mean Synthetic Aperture Radar Image of the Ocean Surface," J. Geophys. Res., 93(C), pp. 9281-9294.

Zebker, H. A., J. J. van Zyl and D. N. Held (1986). "Imaging Radar Polarimetry from Wave Synthesis," J. Geophys. Res., 92, pp. 683-701.

2
THE RADAR EQUATION
In Section 1.2, we have given a heuristic discussion of the way in which a SAR achieves higher resolution along track than does a real aperture radar (RAR). In Section 1.4 we indicated many of the links between geophysical parameters of interest for remote sensing and the corresponding radar signals. In the remainder of the book, we want to make more precise these matters of SAR operation and SAR image formation, and their effects on the ability to accurately determine geophysical information from SAR images.

Since a SAR is a particular kind of RAR, one which maintains precise time relationships between transmitter and receiver (a "coherent" RAR), with the "SAR" qualities added in the signal processing, in order to understand SAR it is necessary to have an understanding of RAR. In this chapter, we develop carefully the basic mathematical model of a RAR system, the radar equation.

Radar technology, and in particular RAR, has been under continuous active development for well over a half century. Skolnik (1985) gives an account of the history of the early days of radar, while in Section 1.3 we have traced the historical development of SAR. The state of the art as of about 1950 required 28 volumes to codify (Ridenour, the "Rad. Lab. Series"). Even to survey in overview the main aspects of the technology requires a book, for example that by Skolnik (1980) or by Barton (1988), while a more detailed review (Skolnik, 1970) runs to over 1500 pages. Therefore in our discussions of RAR we will necessarily be selective in choosing topics. Within that framework, however, we will relate the main ideas of RAR systems to basic physical concepts.
2.1 POWER CONSIDERATIONS IN RADAR
The traditional purpose of radar is to detect the presence of "hard" targets, such as aircraft, and to localize to some extent their positions. The radar transmitter (Fig. 2.1) generates a brief (microseconds) high power burst of radio frequency electromagnetic energy. (The more powerful the better; a few megawatts is not unusual for a ground based radar.) This is conveyed to an antenna through appropriate microwave "plumbing". At the high frequencies of radar (0.1-10 GHz, typically), an antenna structure of reasonable physical size acts to confine the radiated energy to a narrow fan or cone in space, thereby providing localization in one or two spatial dimensions, respectively.
Having launched the pulse, the transmitter turns off and the receiver "listens" for any echos of the pulse returned from the sector of the sky into which the pulse was launched. Any perceived echo has its time of reception noted, relative to the time of transmission of the pulse. This time delay τ is interpreted in terms of range to target, R = cτ/2, providing another spatial dimension for localization. The power of the received echo relative to that of the transmitted pulse scales in free space as 1/R⁴. Megawatts quickly turn into microwatts at ranges of interest, requiring sensitive receiver circuits, so sensitive, in fact, that the noise internally generated in the receiver must be reckoned with. The radar equation expresses this conversion of transmitted power into received power, in terms of the ratio of received power due to a target reflection to receiver power due to noise, together with some system and target parameters.

Figure 2.1 Notional radar system.

The earliest radar receivers used a simple "A-scan" presentation, with the receiver output power presented as a function of time (range) during the listening time after transmission of one pulse and before transmission of the next (Fig. 2.2). The "grass" along the baseline is due to random thermal noise, either internal to the receiver or entering along with the signal from the environment, while target echos show up as "bumps" above the grassy baseline. The radar is roughly characterized by the range and character of a target which produces an echo strong enough to show up above the noise.

Figure 2.2 The receiver signal to noise ratio SNR_0 = P_s/P_n.

The height of a target bump above the noise grass is measured by the ratio of receiver output power P_s due to target echo to the average output power P_n due to other causes (noise), the output signal to noise (power) ratio SNR_0. Specifying the minimum SNR which is required for "reliable" detection is not easy, and we shall simply assume that some minimum power ratio SNR_0 has been determined as necessary for satisfactory performance.

With a required minimum value of SNR at the receiver output specified, it remains to relate that number to the transmitter, target, and receiver characteristics in order to assess the performance of the system in terms of what target size can be discerned at what range. The conversion of transmitter power into receiver power by the system and target characteristics is simple in concept. However, the seeming simplicity covers a number of assumptions about the way the system operates, as well as some factors wrapping very complicated phenomena into a few numbers whose determination may not be at all simple. We will try to be careful in pointing out these matters as they arise. Similar discussions are given by, for example, Skolnik (1980), Barton (1988), Silver (1949), and Colwell (1983).

The parameters to be discussed in detail in this chapter are those appearing in the (point target) radar equation. The general form of that equation is

    SNR_0 = P_t G_t σ A_e / [(4πR²)² kTBF]    (2.1.1)

Here SNR_0 is the SNR which has been specified as required for reliable operation. That is equated to the SNR provided by the system for a target at range R, on the right. The equation may then be solved for any one of its parameters (often range) in terms of the others to determine operational capability or a system requirement.
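As a numerical illustration (a sketch only; the parameter values below are hypothetical, not taken from the text), Eqn. (2.1.1) can be evaluated directly for a candidate system:

```python
import math

def point_target_snr(P_t, G_t, sigma, A_e, R, T, B, F):
    """Point target radar equation, Eqn. (2.1.1):
    SNR0 = P_t * G_t * sigma * A_e / ((4*pi*R**2)**2 * k*T*B*F)."""
    k = 1.380649e-23  # Boltzmann's constant, J/K
    P_s = P_t * G_t * sigma * A_e / (4 * math.pi * R**2) ** 2  # received signal power, W
    P_n = k * T * B * F                                        # noise power, W
    return P_s / P_n

# Hypothetical ground-radar numbers, chosen only for illustration:
# 1 MW peak power, 40 dB gain, 1 m^2 target, 1 m^2 effective aperture,
# 50 km range, 290 K, 1 MHz bandwidth, noise factor 2.
snr = point_target_snr(P_t=1e6, G_t=1e4, sigma=1.0, A_e=1.0,
                       R=50e3, T=290.0, B=1e6, F=2.0)
print(f"SNR0 = {10 * math.log10(snr):.1f} dB")
```

Solving the same relation for R instead of SNR_0 gives the maximum detection range for a specified minimum SNR_0, the usage described in the text.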
Figure 2.3 Relation of radar equation parameters to system elements.

... answer for the scattered power density at the antenna. This backscattered power is intercepted by the antenna, of surface area A_e, to provide signal power

    P_s = I_rec A_e = P_t G_t σ A_e / (4πR²)²

at the antenna output (receiver input) terminals.

The receiver is broadly defined as all elements beyond the antenna terminals in the receiving system. The receiver input noise is characterized by a power per unit frequency bandwidth kT W/Hz, where k = 1.38 × 10⁻²³ J/K is Boltzmann's constant and T is a "temperature". The temperature T is a numerical value selected such that the product kT is the correct noise power spectral density for the case at hand, assuming the antenna and receiver impedances are matched. The receiver bandwidth B converts this into a noise power kTB (W). The receiver system is characterized by a noise factor F > 1 which expresses the extent to which internal receiver noise increases apparent receiver input noise, if we were to observe the receiver noise output and assume (incorrectly) that the receiver system itself were free of internal noise sources. The (power) SNR at the input to the noiseless receiver would then be

    SNR = P_s / (kTBF) = P_t G_t σ A_e / [(4πR²)² kTBF]

2.2 THE ANTENNA PROPERTIES

During the time of a radar pulse, while the transmitter is on, suppose that the average power flowing into the antenna input port is P_t (Fig. 2.3). (This is often called the peak power of the radar, to distinguish it from the true average power, which takes into account the transmitter "off" time as well. The ratio of "on" time to total time is the duty cycle.) If all of this power were radiated into space by the antenna, and if the antenna radiated uniformly in all directions (isotropic radiation), then at range R from the antenna the power density (intensity) of the electromagnetic wave would be

    I_0(R) = P_t / (4πR²)    (2.2.1)

However, some power is lost by dissipation in the antenna itself. Also, by design, the antenna does not radiate isotropically. Rather, the intensity at some space point with polar coordinates (R, θ, φ) relative to the antenna is some value

    I(R, θ, φ) = Gᵗ(θ, φ) I_0(R)

where the gain function Gᵗ(θ, φ) has values both greater and less than unity. Usually, the gain Gᵗ(θ, φ) is maximum at θ = 0, φ = 0, the direction of the radar beam. The parameter G_t in the radar equation is the maximum (on-axis) value of Gᵗ(θ, φ).

The gain function Gᵗ(θ, φ) can be interpreted as the power P(θ, φ) per unit solid angle Ω radiated by the antenna in direction (θ, φ) (the radiation pattern (Ulaby et al., 1981, p. 97)), relative to the power per unit solid angle P_t/4π which would be radiated in that direction by a lossless isotropic antenna. This follows by relating intensity I, power P, and solid angle Ω through

    I(R, θ, φ) dA = P(θ, φ) dΩ = P(θ, φ) dA/R²    (2.2.2)

so that

    P(θ, φ)/(P_t/4π) = (4πR²/P_t) I(R, θ, φ) = I(R, θ, φ)/I_0(R) = Gᵗ(θ, φ)    (2.2.3)

Only some portion

    P_rad = ρ_e P_t

of the power delivered to the antenna is radiated into space, where ρ_e < 1 is the antenna radiation efficiency. The correspondingly scaled function

    Dᵗ(θ, φ) = Gᵗ(θ, φ)/ρ_e = 4πP(θ, φ)/P_rad    (2.2.4)

is the antenna directivity pattern. From the relation

    ∫_sphere Dᵗ(θ, φ) dΩ = (4π/P_rad) ∫_sphere P(θ, φ) dΩ = 4π    (2.2.5)

it is clear that the directivity function trades power increase in one solid angle sector for decrease in another.

The particular form of the gain function Gᵗ(θ, φ) depends on the spatial distribution of current imposed by the transmitter on the antenna structure. We will summarize the development of that relationship. However, the design of a physical antenna to accomplish the current distribution corresponding to a desired gain pattern is a separate problem with which we will not deal. One of the most complete discussions is that of Silver (1949), who proceeds using the basic equations of electromagnetic theory to relate antenna currents and gain. The relations of main interest are summarized by Stutzman and Thiele (1981). Sherman (1970) has given a comprehensive summary of the engineering results.

The central result for calculation, developed from Maxwell's equations, is the Huygens diffraction integral. Let us assume that the antenna excitation is sinusoidal, with radian frequency ω, and that the wavelength λ = 2πc/ω of the electric field is much less than the physical extent of the antenna. For simplicity, and in accord with the usual practice in SAR, we will assume that the field impressed on the antenna by the transmitter is linearly polarized (i.e., the electric field vector has a constant direction). The field radiated by the antenna can then be expressed by a scalar diffraction integral.

We can write the one dimensional electric field vector as

    E(R, t) = E(R, t) x̂

for example, where x̂ is the unit vector along the x coordinate in space and we assume linear polarization in that direction. Using phasor analysis, for the scalar coordinate of this field we write

    E(R, t) = Re{√2 E(R) exp(jωt)}

where E(R) is the corresponding complex rms electric field phasor. This can be written (Silver, 1949, p. 170)

    E(R) = (1/4π) ∫_{A_a} E(x', y')[exp(−jkr)/r] × [(jk + 1/r) ẑ·r̂ + jk ẑ·ŝ] dx' dy'    (2.2.6)

where the geometric terms are defined in Fig. 2.4 and k = 2π/λ is the carrier wave number. The quantities ẑ, r̂ are unit vectors along the corresponding rays.
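As an aside (not part of the text's development), the normalization Eqn. (2.2.5) can be checked numerically for any assumed pattern. The pattern below, Dᵗ(θ) = 4 cos θ on the upper hemisphere and zero below, is a hypothetical example whose sphere integral is exactly 4π:

```python
import math

def sphere_integral(D, n=2000):
    """Integrate an azimuth-independent pattern D(theta) over the sphere,
    using dOmega = sin(theta) dtheta dphi and a midpoint rule in theta."""
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dtheta
        total += D(theta) * 2 * math.pi * math.sin(theta) * dtheta
    return total

# Illustrative directivity: all power confined to the upper hemisphere.
D = lambda th: 4 * math.cos(th) if th < math.pi / 2 else 0.0
print(sphere_integral(D) / (4 * math.pi))  # -> approximately 1.0
```

Any candidate directivity pattern must pass this check; a pattern integrating to less than 4π would violate conservation of the radiated power P_rad.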
In the far field this becomes

    E(R) = (j/2λ)[exp(−jkR)/R] ∫_{A_a} E(x', y') exp[−jk(r − R)](cos θ + ẑ·ŝ) dx' dy'    (2.2.8)

where E is the complex phasor scalar length of the vector E and we assume free space. (Note that I = |I| varies as 1/R² since, from Eqn. (2.2.9), |E| ∝ 1/R.)

For precision of results, antenna pattern measurement should be carried out in an enclosed reflection-free test space. It is then often impractical to work in the far field for such measurements. Techniques to extrapolate near field pattern measurements reliably to the far field are therefore of importance. Fig. 2.5 compares near-field and far-field directivity patterns for an antenna with constant illumination E(x', y') in Eqn. (2.2.9). In the case of a large spaceborne antenna, antenna pattern measurements may need to be made after deployment, using earth calibration points (Chapter 7).

A linear phase variation is commonly assumed for the excitation,

    E(x', y') = |E(x', y')| exp[jk(α_x x' + α_y y')]    (2.2.13)

on the aperture, where α_x, α_y are the direction cosines of the aperture phase gradient unit vector ŝ. For specified α_x, α_y, i.e., specified ŝ, provided ŝ ≈ ẑ, the maximum of the gain Gᵗ(θ, φ) then occurs in direction ŝ (Silver, 1949, p. 176). Henceforth we will consider only the usual case ŝ = ẑ, so that

    E(x', y') = |E(x', y')|

2.2.1 The Antenna Gain

The result Eqn. (2.2.9) for the far electric field of an antenna is related to time average power in W/m², the quantity we need in the radar equation, by the Poynting relation, Eqn. (2.2.11) (Silver, 1949, p. 70). Using Eqn. (2.2.2), this Eqn. (2.2.11) yields the antenna power pattern as Eqn. (2.2.12) (Silver, 1949, p. 177). This gain function is maximum on the antenna axis (θ = 0) (Silver, 1949, p. 177); the on-axis value is the gain parameter G_t in the radar equation, Eqn. (2.1.1).

Using the Poynting expression Eqn. (2.2.11), evaluated across the aperture, the total power P_rad radiated by the antenna in the case ŝ = ẑ can be written as Eqn. (2.2.17) (Silver, 1949, p. 177), and the gain Eqn. (2.2.15) can then be written as Eqn. (2.2.18). The Schwarz inequality, applied for the case f = E(x, y) and g = 1, yields, in particular, the maximum possible directivity D_0:

    D_0 = max(Dᵗ) ≤ 4πA_a/λ²    (2.2.20)

with equality for uniform illumination |E(x, y)| = const. The associated aperture efficiency is

    ρ_a = |∫_{A_a} E(x, y) dx dy|² / [A_a ∫_{A_a} |E(x, y)|² dx dy] ≤ 1    (2.2.21)

For an antenna of area A_a, the actual power gain G_t is then often expressed as

    G_t = ρ_e ρ_a (4πA_a/λ²)

where D_t and G_t are the on-axis (maximum) values of Dᵗ(θ, φ) and Gᵗ(θ, φ).

Since ρ_a = 1 is in fact attainable, at least in principle, one might wonder why a lesser value would intentionally be sought. The answer has to do with the off-axis behavior of the function Gᵗ(θ, φ), and the appearance of local maxima (sidelobes) whose values, although less than the global (on-axis) maximum, may still be large enough to be troublesome. The choice of amplitude distributions other than uniform, in order to control these sidelobes, is an important area of antenna design.

From Eqn. (2.2.14) and Eqn. (2.2.15) the antenna pattern follows; it is conventionally normalized to unit gain on axis to yield the pattern dᵗ(θ, φ). This pattern is roughly characterized by its principal cuts for φ = 0, π/2, which are identical except for scaling by L_a or W_a, respectively. Fig. 2.6 shows the generic result, with l being the length of the antenna in the direction of the cut in question. Shading of the aperture is used to reduce sidelobes, using for example Taylor weighting (discussed in Section 3.2.3), which changes the result to the curve shown in Fig. 2.7.
Along the main pattern cut φ = 0 the integration in Eqn. (2.2.29) can be carried out to yield

    dᵗ(θ, φ) = [(sin u)/u]²    (2.2.30)

where

    u = (πl/λ) sin θ

Figure 2.6 Directivity patterns of uniformly illuminated circular (solid) and square (dashed) apertures (from Skolnik, 1970). With permission of McGraw-Hill, Inc.

Figure 2.7 Directivity pattern of uniformly illuminated rectangular aperture with Taylor weighting, 30 dB levels, n̄ = 5.

The pattern thereby has 3 dB width given by the points u ≈ ±1.39. Then, for small θ_H, the two-sided 3 dB beamwidth is

    θ_H ≈ λ/l    (2.2.31)

the rough result we used earlier in Chapter 1.

It might be mentioned explicitly that the antenna response pattern Eqn. (2.2.29), for example, is not the image pattern produced by a SAR in response to an isolated point target. Although related, for example through Doppler bandwidth, the antenna pattern is only one factor entering into the more complicated SAR processor response function which will be presented in Chapter 4.

It is usual to describe an antenna directivity function dᵗ(θ, φ) in terms of a few scalar measures. The predominant ones of these are the directivity Eqn. (2.2.19) and the 3 dB (two-sided one-way) beamwidth, as in Eqn. (2.2.31) for the uniformly weighted linear array. In addition, the height of the highest secondary lobe is also important, usually being the one next adjacent to the main lobe. This is the peak sidelobe ratio (PSLR); it controls the extent to which a point target outside the main beam, but in an unfortunate location at the peak of a secondary lobe, will be sensed by the radar. Finally, when viewing a distributed "target", such as in the case of SAR viewing earth terrain, the sidelobe area beyond the first nulls of the pattern, in ratio to the total area of the pattern, is important. This is the integrated sidelobe ratio (ISLR).
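The scalar measures just described can be recovered numerically from the uniform-aperture pattern Eqn. (2.2.30) by a brute-force search over u (an illustration, not from the text):

```python
import math

def d(u):
    """Uniform-aperture one-way power pattern (sin u / u)^2, Eqn. (2.2.30)."""
    return 1.0 if u == 0 else (math.sin(u) / u) ** 2

us = [i * 1e-4 for i in range(1, 200000)]         # u from ~0 to 20 rad
u3 = next(u for u in us if d(u) <= 0.5)           # one-sided 3 dB point
first_null = math.pi                               # first zero of sin(u)/u
pslr = max(d(u) for u in us if u > first_null)    # peak of first sidelobe

print(f"3 dB point: u = {u3:.3f} (theory ~1.392)")
print(f"PSLR = {10 * math.log10(pslr):.1f} dB")   # ~ -13.3 dB
```

With sin θ ≈ θ, the two-sided width 2u3 = (πl/λ)θ_H reproduces θ_H ≈ 0.89 λ/l, consistent with the rough λ/l of Eqn. (2.2.31); the −13.3 dB sidelobe level is what aperture shading such as Taylor weighting is designed to reduce.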
The trade involved in aperture shading is illustrated by the following values for a circular aperture:

    Illumination      3 dB beamwidth (×λ/D)   Peak sidelobe (dB)   Aperture efficiency ρ_a
    Uniform           1.02                    18                   1.00
    (1 − r²)^(1/2)    1.15                    21                   0.75
    1 − r²            1.27                    25                   0.64

Figure 2.9 Targets at S₁, S₂ in range sidelobes appear as "ghosts" in image.

Figure 2.10 Bright terrain seen by a range sidelobe masks dimmer targets in the main beam.

2.3 THE TARGET CROSS SECTION

We now proceed to the next factor in the point target radar equation, Eqn. (2.1.1). This concerns the extent to which a target returns energy incident upon it. That is, σ is the target area we would infer, based on I_rec, by assuming that an area σ intercepted the transmitted beam in the far field, with the resulting incident power scattered isotropically. The value of σ depends on a multitude of parameters of the target. It need not have any direct relation to the actual frontal area presented by the target to the radar beam. The cross section of a target will be nearly zero if the target scatters little power back towards the antenna. This can occur because the target is small, or absorbing, or transparent, or scatters in some other direction, or possibly all of these. The cross section σ may be drastically larger than the target frontal area in the case that some electromagnetic resonance effect has been excited.

Only for the very simplest shapes (such as used in calibration measurements, Table 7.1) can the value of σ be calculated analytically, for example for a perfectly conducting sphere or a flat plate, and even in such cases σ depends markedly on wavelength. For shapes other than a sphere, σ depends strongly on the aspect angle of the target to the radar beam. In practice, one can only say that if a target at range R presents a cross section σ of some given value to the radar, then the radar system will detect it with some corresponding probability.

In remote sensing applications, the "targets" usually extend in physical size beyond what one would regard as a point, for example in observation of the earth surface. In such a case, each element dA of the extended target (terrain, sea surface, etc.) can be assigned a local value of σ. This inferred target area σ, relative to the geometrical area dA, is the specific backscatter coefficient at the particular point in question on the extended target:

    σ⁰ = σ/dA    (2.3.2)
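For the simplest calibration shapes mentioned above, standard high-frequency (optical region) approximations exist: σ = πa² for a large conducting sphere of radius a, and σ = 4πA²/λ² for a flat plate of area A at normal incidence. A quick sketch (wavelength value illustrative) shows how different σ can be from frontal area:

```python
import math

def sphere_rcs(a):
    """Optical-region RCS of a perfectly conducting sphere of radius a (m),
    valid when the circumference is large compared with the wavelength."""
    return math.pi * a ** 2

def plate_rcs(A, lam):
    """Normal-incidence RCS of a flat conducting plate of area A (m^2)."""
    return 4 * math.pi * A ** 2 / lam ** 2

lam = 0.235  # an L-band wavelength in meters, chosen for illustration
print(f"1 m diameter sphere: sigma = {sphere_rcs(0.5):.2f} m^2")
print(f"1 m^2 flat plate:    sigma = {plate_rcs(1.0, lam):.1f} m^2")
```

The plate, with 1 m² of frontal area, returns a cross section of over 200 m² at this wavelength, while tilting it away from normal incidence would reduce σ by orders of magnitude; this is the aspect and wavelength sensitivity the text describes.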
This quantity σ⁰ usually depends on wavelength and on the aspect from which the terrain element is viewed. Here we want to discuss that quantity. In Section 2.8 we will discuss a form of the radar equation which is often stated for distributed targets.

Let us begin by introducing the notion that the specific radar cross section σ₀ for a terrain element is appropriately considered as a random variable (Ulaby et al., 1982, p. 476). Consider some nominal region of the earth surface. The ensemble mean of σ₀ in each particular cell dA is conventionally given the symbol σ⁰:

    σ⁰(θ, φ) = ℰ{σ₀(θ, φ)}

It is just the map σ⁰(θ, φ) of this quantity, the backscatter coefficient, which is ultimately of interest in a calibrated SAR image.
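The random-variable view of σ₀ can be sketched numerically. A common model (an assumption here, not established until the speckle discussion of Section 5.2) is that the measured power from a cell is exponentially distributed about the ensemble mean σ⁰; averaging independent looks then estimates σ⁰:

```python
import random
import statistics

# Hypothetical backscatter coefficient (ensemble mean) for a terrain cell.
mean_sigma0 = 0.2

# Draw many independent single-look measurements of sigma_0, modeled as
# exponentially distributed with mean sigma^0 (fully developed speckle model).
random.seed(1)
samples = [random.expovariate(1.0 / mean_sigma0) for _ in range(100_000)]

# The sample mean estimates the ensemble mean sigma^0.
print(round(statistics.mean(samples), 3))
```

A single draw, by contrast, has a standard deviation equal to its mean, which is why single-look SAR imagery of homogeneous terrain looks grainy and why multilook averaging is used.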
Similarly, the scattered field at the receiver will have a plane wave representation in terms of a polarization vector h_s, which for transmitted polarization h_t is

    h_s = S h_t

in which the 2 × 2 matrix

    S = [ S_hh  S_hv ; S_vh  S_vv ]

is the complex scattering matrix of the target. Its terms indicate the extent to which the two orthogonal spatial components of the incident wave each scatter into the two orthogonal components of the scattered wave.

Finally, if the polarization vector h_r characterizes the extent to which receiver input voltage is induced by the two components h_s of the scattered wave at the antenna, the receiver voltage phasor is

    V = h_rᵀ h_s = h_rᵀ S h_t

where the superscript T indicates the transpose. The elements of S may be measured using two transmitted waves with only the horizontal or vertical direction excited, and two receivers sensing separately the horizontal and vertical components. The determination of these coefficients from radar data will be discussed in Chapter 7.

2.4 THE ANTENNA RECEIVING APERTURE

In the simple case of an isolated target in the far field, we have expressed the intensity of backscattered power at the antenna as

    I_rec = P_t G_t σ / (4πR²)²    (2.4.1)

a value which we will assume to be constant over the physical aperture of the antenna. This intensity represents the scattered electromagnetic field incident on the antenna structure. Some, all, or none of that field will actually be effective in introducing signal power into the receiver circuits, which is necessary in order to detect a target. Again, in building the radar equation, a single parameter is introduced to cover a number of effects and assumptions, namely, the antenna (receiving) aperture A_r.

The receiving aperture of an antenna at a particular frequency is an area defined in terms of the intensity I_rec at the antenna structure and the power P_r flowing towards the receiver, across the antenna/receiver interface. The receiver input is taken at the same point in the circuitry as the antenna output, which we will assume to be the connection between the antenna structure and the feed line to the first stage of electronics.

The extent to which the power potentially available to be extracted from the electromagnetic field at the antenna will actually appear in the receiver depends on the relative impedance levels in the system. Some power potentially available will be lost through reflection (re-radiation) of the incident field away from the antenna. In addition, since the elements of any real antenna will have some non-zero resistance, part of the power represented by antenna currents induced by the incident intensity I_rec will be lost as heat in the antenna. Both these effects are expressed through the antenna impedance.

The antenna impedance has two components: that due to resistance, inductance, and capacitance in the structure itself, and a less obvious component, the "radiation" impedance. This latter expresses the re-radiation of power through the coupling between the impinging field and the currents induced in the antenna conductors. Both these quantities can be calculated for simple structures, or measured more or less precisely.

The power P_r flowing from the antenna port towards the receiver for a particular incident intensity I_rec defines the antenna receiving aperture A_r by

    A_r = P_r / I_rec    (2.4.2)

The aperture A_r depends on receiver input impedance through P_r. For maximum possible transfer of power potentially available from the field into the receiver, the receiver input impedance must be the conjugate of the total antenna impedance, the maximum power transfer theorem of AC circuit theory. The corresponding maximum value of A_r is defined to be the effective (receiving) aperture A_e. This value A_e depends on both the antenna radiation efficiency ρ_e and the antenna aperture efficiency ρ_a, defined in Eqns. (2.2.17) and (2.2.21).

In the common case that the same antenna and microwave circuitry is used for reception as for transmission, reciprocity applies (Silver, 1949, Ch. 2) such that the effective aperture A_e, applicable to reception, is precisely the same area as appears in the transmission power gain formula, Eqn. (2.2.24):

    G_t = 4πA_e/λ²    (2.4.3)

with

    A_e = ρ_e ρ_a A_a    (2.4.4)

where A_a is the geometric area of the antenna. Correspondingly, the directional aperture Eqn. (2.2.26) applies also for reception. In the same way, the gain Gʳ(θ, φ) is the power received per unit solid angle relative to that for an isotropic antenna. The same remarks apply for the directivity function Dʳ(θ, φ).
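Eqns. (2.4.3) and (2.4.4) are easily exercised numerically. The antenna dimensions and efficiencies below are hypothetical, loosely patterned on an L-band spaceborne SAR, and are not values from the text:

```python
import math

# Assumed antenna parameters (illustrative only).
L_a, W_a = 10.7, 2.16        # antenna length and width, m
rho_e, rho_a = 0.8, 0.7      # radiation and aperture efficiencies
lam = 0.235                  # wavelength, m

A_a = L_a * W_a                       # geometric area, m^2
A_e = rho_e * rho_a * A_a             # effective aperture, Eqn. (2.4.4)
G_t = 4 * math.pi * A_e / lam ** 2    # power gain, Eqn. (2.4.3)

print(f"A_e = {A_e:.1f} m^2")
print(f"G_t = {10 * math.log10(G_t):.1f} dB")
```

Note that by reciprocity the single number A_e serves in both roles in the radar equation: as the transmit gain through Eqn. (2.4.3) and as the receiving collector area multiplying I_rec.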
Figure 2.11 Circuit with resistor noise equivalent voltage source.

As state variables we take the inductor branch current i and capacitor branch voltage v. The stored energy in the system is quadratic in the state variables:

    E = (1/2)(Cv² + Li²)

Hence we have a diagonal quadratic energy functional, and equipartition applies. Since equipartition holds, we know at once that the ensemble average noise energy associated with the capacitor voltage is kT/2. On the other hand, we can calculate the average noise energy as (Whalen, 1971, p. 47)

    E_C = (C/2)⟨v²⟩ = (C/2) ∫₀^∞ |H(f)|² N(f) df    (2.5.3)

Here ⟨v²⟩ is calculated as the integrated power spectral density of the random variable v, N(f) is the sought (one-sided) power spectral density of the noise voltage source e_n(t), and

    H(f) = (1 + jωRC + R/jωL)⁻¹

is the system transfer function from e_n to v. By choosing R, L, C appropriately, we can make |H(f)|² arbitrarily narrow around the particular frequency f₀, and write Eqn. (2.5.3) approximately as

    E_C = (C/2) ∫ |H|² N df ≈ (C/2) N(f₀) ∫₀^∞ |H(f)|² df = N(f₀)/8R = kT/2    (2.5.4)

evaluating the integral using Gradshteyn and Ryzhik (1980, Section 3.112.3). Letting the arbitrary frequency f₀ in Eqn. (2.5.4) be labeled as a general frequency f yields

    N(f) = 4kTR    (2.5.5)

which is Nyquist's theorem. By Eqn. (2.5.1), the noise is Gaussian, and by Eqn. (2.5.5) it is white, with the indicated power spectral density.

A quantum mechanical refinement (van der Ziel, 1954, p. 301) of the statistical mechanical argument results in a more precise form of the Nyquist theorem:

    N(f) = 4kTRp(f)

where

    p(f) = (hf/kT)/[exp(hf/kT) − 1]    (2.5.6)

is the Planck factor. Neglecting the Planck factor, which contributes a non-white character to the noise, results in an error of less than 5% in noise power spectral density so long as hf/kT < 0.1. At radar frequencies, say f < 35 GHz, this allows the Planck factor to be neglected for T > 17 K. In some applications, for example sky noise or very low noise receiver front ends, equivalent temperatures below that limit may be in question, in which cases the more precise form Eqn. (2.5.6) should be used.

Thus we have a basic result, supported independently by observations. The thermal noise equivalent source voltage in a resistor of resistance R at temperature T is a Gaussian random process with a constant power spectral density (white noise) 4kTR. Further (van der Ziel, 1954, p. 17), the same result holds for any passive system at uniform temperature, where the resistance is the equivalent resistance "looking back into" the output terminals of the system. If such a system is connected to an impedance matched load, the one-sided spectral density of the power delivered to the load in W/Hz is just

    N_a(f) = 4kTR/4R = kT    (2.5.7)

This is the "available power" spectral density of the noise source. If attention is confined to a frequency band of width B, say by a lossless filter circuit, the thermal noise power (W) delivered to the matched load is kTB. It is quantities of this latter form which will appear in the final equation for SNR.
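Two of the numbers quoted in this section can be checked directly; the temperature and bandwidth chosen below are illustrative:

```python
import math

k = 1.380649e-23    # Boltzmann's constant, J/K
h = 6.62607015e-34  # Planck's constant, J s

# Available thermal noise power kTB, per Eqn. (2.5.7), for T = 290 K, B = 1 MHz.
T, B = 290.0, 1e6
P_n = k * T * B
print(f"kTB = {10 * math.log10(P_n / 1e-3):.1f} dBm")

# Planck factor p(f), Eqn. (2.5.6), at f = 35 GHz and T = 17 K: here
# hf/kT is about 0.1 and the neglected correction is about 5 percent,
# as the text states.
f = 35e9
x = h * f / (k * 17.0)
p = x / (math.exp(x) - 1)
print(f"hf/kT = {x:.3f}, p(f) = {p:.3f}")
```

The first figure, roughly −114 dBm in a 1 MHz band at room temperature, is the noise floor against which the received signal power of the radar equation must compete.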
system. We consider first the source, then the receiver, and finally the combination.

2.6.1 Source Noise

Consider an antenna enclosed within a cavity whose walls are held at temperature T. The walls of the cavity are assumed to constitute a black body, an idealized passive object which by definition absorbs and re-radiates all incident radiation. It is a basic result of theoretical physics (Page, 1935, p. 547) that the radiation inside the cavity is omnidirectional and homogeneous, with energy frequency spectral density per unit volume at any point in J/m³ Hz (Planck's law)

    u = 8πh(f/c)³[exp(hf/kT) − 1]⁻¹    (2.6.2)

The apparent intensity spectral density per unit solid angle incident on any point in the cavity in W/m² sr Hz is then

    B = uc/4π = (2hf³/c²)[exp(hf/kT) − 1]⁻¹    (2.6.3)

This is defined as the "brightness" (or radiance) (Ulaby et al., 1981, p. 192) of the source, the cavity wall.

We ultimately want to calculate the power incident on an antenna directive aperture A_d. To that end, we need the power per unit area impinging on the antenna from various directions (Fig. 2.13). The noise power density available is expressed through Eqn. (2.6.1), where B is the brightness perceived by the antenna. In radiometry (Slater, 1980, p. 88; Nicodemus, 1967; Meyer-Arendt, 1968), the surface giving rise to the radiation receives central attention, and is assigned a spectral radiance in W/m² sr Hz

    L = J/(A_s cos θ)    (2.6.4)

where J is the spectral radiant intensity (W/sr Hz), the angular power spectral density emitted by surface area A_s in direction θ.

Figure 2.13 Brightness B in W/m² sr Hz at a collector due to surface of radiance L.

An antenna of directive area A_d at range R subtends a solid angle A_d/R² as seen by the radiating surface element A_s (Fig. 2.13). Thus the power impinging on the antenna surface in W/Hz is

    N = L A_s cos θ (A_d/R²)

At the antenna, the electromagnetic intensity in W/m² Hz impinging from the direction of the elemental source A_s is N/A_d. The solid angle subtended by that source, as viewed by the antenna, is A_s(cos θ)/R², so that the antenna perceives an incident intensity in W/m² sr Hz

    B = (N/A_d)/[A_s(cos θ)/R²] = L    (2.6.5)

This quantity is called the brightness of the source in remote sensing (Stewart, 1985). Although Eqn. (2.6.5) indicates that the intrinsic source property, radiance, is numerically equal to the sensed quantity, brightness, more generally the latter is defined to take into account the spectral characteristics of the sensing instrument.

An emitting surface which is perfectly diffuse obeys Lambert's law: L = const, independent of aspect angle θ. Equation (2.6.5) indicates that such a surface element as perceived by an antenna corresponds to a perceived power density which is independent of aspect. The source thus appears omnidirectional to the viewer.

As noted in Section 2.5, at microwave frequencies and temperatures above a few tens of Kelvin, the Planck factor Eqn. (2.5.6) evident in Eqn. (2.6.3) is negligible. When neglected, the result is the Rayleigh-Jeans law:

    B = 2kTf²/c²    (2.6.6)

It happens that the main contributor to radio noise, the sun, as perceived from earth generally obeys the functional form of the Planck law, Eqn. (2.6.3),
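The quality of the Rayleigh-Jeans approximation Eqn. (2.6.6) to the Planck form Eqn. (2.6.3) at radar frequencies can be seen directly (frequency and temperature values illustrative):

```python
import math

k = 1.380649e-23    # Boltzmann's constant, J/K
h = 6.62607015e-34  # Planck's constant, J s
c = 2.99792458e8    # speed of light, m/s

def planck_brightness(f, T):
    """Black body brightness, Eqn. (2.6.3), in W/m^2 sr Hz."""
    return (2 * h * f**3 / c**2) / (math.exp(h * f / (k * T)) - 1)

def rayleigh_jeans(f, T):
    """Rayleigh-Jeans approximation, Eqn. (2.6.6)."""
    return 2 * k * T * f**2 / c**2

f, T = 10e9, 300.0  # X-band frequency, terrestrial temperature
ratio = planck_brightness(f, T) / rayleigh_jeans(f, T)
print(f"Planck/Rayleigh-Jeans at 10 GHz, 300 K: {ratio:.4f}")
```

At 10 GHz and 300 K the two forms agree to a fraction of a percent, which is why microwave radiometry can speak directly of "brightness temperature"; the approximation only degrades at low temperatures or high frequencies, the Planck-factor regime noted in Section 2.5.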
104 THE RADAR EQUATION 2.6 SOURCE AND RECEIVER NOISE DESCRIPTION 105
provided a temperature T = 5900 K is used (Elachi, 1987, p. 47). At radio Even though the Planck factor in Eqn. (2.6.3) may not be negligible in some
frequencies, however, many electromagnetic effects intrude on the ideal form applications, Eqn. (2.6.11) as it stands defines T.i such that the correct value for
Eqn. (2.6.6), and a general frequency dependent "temperature" T.i(f) must be B results from its use.
used to describe correctly the radiation spectrum of the sun. Below 30 GHz, in Proceeding one final step, it is then useful to extend the black body
fact, roughly (Hogg and Mumford, 1960)

(2.6.7)

Consider now the situation of an antenna viewing a radiating black body (Fig. 2.13). The source radiates with brightness B as in Eqn. (2.6.6). A linearly polarized antenna receives half this power, so that for that case (Gagliardi, 1978, p. 99)

B = kTf^2/c^2 (2.6.8)

The receiving aperture in any direction is expressed in terms of the antenna directivity:

A(θ, φ) = (c^2/4πf^2) D'(θ, φ) (2.6.9)

The antenna available power, without considering antenna self-loss, is then

N = (Bc^2/4πf^2) ∫ D'(θ, φ) dΩ = kT_a (2.6.10)

using Eqn. (2.6.1). The region of integration is that portion of the antenna pattern which views the black body. In the case of an antenna inside a cavity, from Eqn. (2.2.5)

∫_sphere D'(θ, φ) dΩ = 4π

and we simply recover T_a = Bc^2/kf^2 = T.

An extension of this formalism is used to describe radiation reaching the earth preferentially from various directions in the sky, or indeed any radiation reaching an antenna. The expression Eqn. (2.6.8) motivates defining a directionally dependent incident power density (brightness) formally in terms of a directionally dependent temperature as (Gagliardi, 1978, p. 99)

B(θ, φ) = (kf^2/c^2) T_a(θ, φ) (2.6.11)

The idea is then to extend Eqn. (2.6.10) to the general case Eqn. (2.6.11), and to express the result in terms of an available noise power spectral density into a matched load in the form of Eqn. (2.5.7)

N_ext = kT_ext (2.6.12)

where T_ext is a (possibly frequency dependent) temperature so defined by the actual noise density at the antenna terminals. Considering the directionally dependent brightness Eqn. (2.6.11), the available power density Eqn. (2.6.10) takes the form

N_ext = (k/4π) ∫ D'(θ, φ) T_a(θ, φ) dΩ (2.6.13)

where we use Eqn. (2.6.11) and the definition Eqn. (2.6.9). Comparing Eqn. (2.6.13) with Eqn. (2.6.12) then yields

T_ext = (1/4π) ∫ D'(θ, φ) T_a(θ, φ) dΩ (2.6.14)

as a (possibly frequency dependent) temperature parameter in terms of which the available external noise power spectral density is expressed (Gagliardi, 1978, p. 100).

If the directional temperature T_a(θ, φ) is nominally constant in direction at value T_a over some sector Ω_0, a directivity weighted equivalent temperature can be defined from Eqn. (2.6.14)

T_ext = T_a D_r (2.6.15)

where

D_r = (1/4π) ∫_Ω0 D'(θ, φ) dΩ (2.6.16)

is a receiving directivity taking into account the sidelobe structure of the antenna. Since the antenna directivity is by definition normalized as in Eqn. (2.2.5), the directivity D_r is always less than unity. In the case of a nominal point source, such as the sun or a planet, the temperature function in Eqn. (2.6.11) is
106 THE RADAR EQUATION 2.6 SOURCE AND RECEIVER NOISE DESCRIPTION 107
nonzero over a very narrow sector; D_r of Eqn. (2.6.16) is then approximately D'(θ_0, φ_0)Ω_0/4π, where Ω_0 is the small solid angle subtended by the source.

If the antenna is pointed to the night sky, and away from any point sources of radio noise, T_ext is due mainly to galactic radiation. For clear sky, the value of T_a in Eqn. (2.6.13) is quite low at radar frequencies, although significant at radio frequencies. The nominal temperature variation (K) is (Skolnik, 1980, p. 462; Gagliardi, 1978, p. 102)

(2.6.17)

On the other hand, if the antenna were pointed at the sun, a very high value T_a as in Eqn. (2.6.7) would be expected over the narrow sector of the sun's disk. At typical radar frequencies, pointing away from the sun, the main noise contribution is from the sun's radiation scattered into the antenna by the earth's atmosphere. Combined with galactic noise, the result is nearly constant at T_n = 10 K over the radar band (Gagliardi, 1978, p. 103). In the case of a SAR, with the antenna viewing the earth surface, the external noise temperature can be calculated nominally using Eqn. (2.6.15) for a body at 300 K. The factor Eqn. (2.6.16) results by integration of the beam pattern over the radar footprint.

Since the external noise in the environment of the antenna is directionally dependent, as well as frequency dependent, even at a specified frequency, the calculation of a single temperature T_ext for use in the radar equation involves the antenna sidelobe structure, the pointing direction of the main beam, the type of atmospheric layers in view of the antenna, and so on. Skolnik (1980, Ch. 12) discusses many of the considerations involved. The user of the radar equation sweeps all these considerations into a single parameter which will presumably be supplied: the total source external equivalent noise temperature T_ext.

Figure 2.14 Available power and physical temperature of lossy system. P: signal; kT_phys: noise.

Consider next a lossy circuit with loss L (Fig. 2.14), at physical temperature T_phys. The available input noise power density from the source resistor is then kT_phys, by Eqn. (2.5.7), so that the available output noise power density attributable to the input must be kT_phys/L. On the other hand, the total available output noise power density from the source and circuit combination at temperature T_phys must also be kT_phys, as for any system at constant temperature. The difference between available output power density and that attributable to the source is then just

N_int = kT_phys(1 - 1/L) (2.6.18)

This is necessarily attributable to the circuit itself. Referring this circuit-generated output noise back to the circuit input, using the attenuation 1/L in reverse direction, results in an equivalent input noise temperature component

T_e = (L - 1)T_phys (2.6.19)

On the other hand, the value

T'_s = T_ext + (L - 1)T_phys

appropriate to the input of the lossy system represented by ρ_e, corresponds to the antenna directivity D'. The radar equation could be written using either pair, but the former is conventional.

In Section 2.6.3 we will work through an example further illustrating the use of such expressions as we have been developing. First we will discuss the characterization of noise in the receiver.

2.6.2 Receiver Noise

In discussing receiver noise, the actual impedance conditions make a difference in SNR. Let us continue to model the receiver as a constant power gain G_0 over a band B_0. The receiver input signal and noise powers are P_s, P_0. Noise due to internal receiver sources adds some amount N_int to the output noise density, with no corresponding signal enhancement. The resulting output SNR

SNR_o = G_0 P_s/(G_0 P_0 + N_int B_0) (2.6.21)

depends on the absolute level of receiver input noise power P_0. This in turn depends on the input impedance conditions, which govern the extent to which available source noise power is delivered to the circuit.

Because the available output power Eqn. (2.5.7) of a thermal noise source is independent of source or load impedance, it is a great convenience to assume in system noise calculations that all units have matched impedance sources and loads. Were such the case, the actual powers would be identical to the available powers. Such is not necessarily the case. However, we can assume load matching, since output SNR is independent of load (with some exceptions discussed by Pettai (1984)), signal and noise being treated the same by the load. But source mismatch will require a factor in the radar equation to adjust the results, calculated assuming source matching and available power, to the actual case. For the moment, we assume impedance match between all system elements.

Any system which generates noise can be characterized by a "noise factor" F, or a "noise figure" 10 log F. (The terminology is not consistently applied; often noise figure is used for both.) The unwanted output might be due to internal noise, thermal (white Gaussian) or otherwise. It might also be due to the deterministic generation from the input of frequency components which later interfere with the signal (nonlinear effects present in mixers, for example), or loss of signal power in converting from RF to IF. Some possibilities have been summarized by Skolnik (1980, p. 347). Pettai (1984, Ch. 10) gives a more complete discussion. Various different definitions of noise figure can be made (Pettai, 1984, Ch. 9). We will discuss some of them in turn, indicating their use in the radar equation.

Receiver noise can also be summarized in terms of an equivalent input noise temperature for the system, together with the available power gain of the system. This has already been done above in discussing the self noise Eqn. (2.6.18) of an attenuator. We will consider that description for a receiver first, returning later to the formalism of noise factor.

Available Power Gain

Recall that the signal power entering the receiver is expressed in the radar equation in terms of the receiving aperture A_e of Eqn. (2.4.4). This by definition relates to the signal power which would flow into a matched receiver. If, in considering noise power into the receiver, we also assume impedance matching, we then have to do with available noise power quantities kT at the input, and available power gain G_a to transfer them to the output, along with the signal. We thereby arrive at the output SNR for matched conditions, which is the SNR in operation, except for the internal noise effects indicated in Eqn. (2.6.21).

The available power gain G_a of a circuit is the ratio of power available at the circuit output, which depends on both the circuit and the source, to power available from a source connected at the input, which depends only on the source. We take this quantity relative to some frequency of interest, with the ratioed powers referred to unit bandwidth over a narrow (infinitesimal) band. Thereby all gains, temperatures, and noise factors generally become functions of frequency.

The available power from a circuit is the power that would be delivered to a matched load. For example, in Fig. 2.12

P_in = e_s^2 R_in/(R_s + R_in)^2

is the actual input power, while

P_i = e_s^2/4R_s

is the available input power, corresponding to R_in = R_s. From Fig. 2.12 then

(2.6.22)

Pettai (1984, Ch. 7) has discussed this quantity carefully. It is independent of the actual load conditions at the circuit output, but depends on the input impedance conditions. It is not the ratio of output power to input power under operating conditions, unless the input and output are matched, so that R_s = R_in and the circuit is loaded by R_L = R_out. It depends on source impedance, a fact which, as we shall see, feeds directly into a property of "the" noise figure of a circuit.

Receiver Noise Temperature

Using available power gain, the additional output noise contributed by a circuit can be expressed in terms of an equivalent temperature. Suppose a source of
equivalent noise temperature T_s feeds a receiver. The consequent available input noise power density is kT_s. For the particular equivalent source resistance R_s in question, suppose the circuit has available power gain G_a. The output available noise power density due to source noise is then G_a kT_s. The actual output available noise power density will be found to be some larger number N_oa. The difference N_int is attributable to receiver internal noise, and can be used to define an equivalent input receiver noise temperature T_e, such that

N_int = G_a kT_e (2.6.23)

Then

N_oa = G_a k(T_s + T_e) = G_a kT_op (2.6.24)

where

T_op = T_s + T_e (2.6.25)

is the "operating" noise temperature of the combined source and receiver. Since the gain G_a is used to refer the receiver noise N_int to the input, the receiver equivalent temperature T_e and the operating noise temperature T_op depend on the impedance of the source feeding the circuit.

The equivalent noise temperatures T_e1, T_e2, ... of a cascade of elements combine easily. Each unit of the cascade is specified by its available power gain G_ai and equivalent input noise temperature T_ei, both specified for the impedance and temperature conditions present in the cascade. Then for three elements, for example, the total available excess output power is

N_int = G_a3 G_a2 G_a1 kT_e1 + G_a3 G_a2 kT_e2 + G_a3 kT_e3

leading to

T_e = N_int/(G_a1 G_a2 G_a3 k)
    = T_e1 + T_e2/G_a1 + T_e3/(G_a1 G_a2) *(2.6.26)

The radar equation Eqn. (2.1.1) can now be written as

SNR_o = P_s/(kT_op B_n) (2.6.27)

where P_s is the received power as in Eqn. (2.4.5), that is, the signal power which would flow from the antenna port under matched conditions (hence the available input signal power).

Noise Bandwidth

In Eqn. (2.6.27) B_n is "the" bandwidth of the receiver, chosen wide enough to pass all the signal, but no wider, in order to limit noise. Although receivers are sensibly narrowband, so that thermal noise temperatures are approximately constant, the passband is not strictly rectangular. The bandwidth B_n used in the radar equation is an equivalent "noise bandwidth". This is defined such that G_a kT_s B_n would give the right available noise power if we assumed white noise from a source at constant temperature T_s to have passed through a circuit with rectangular band of width B_n and gain G_a(f_0) at band center.

If the actual circuit (receiver) has transfer function H(jω), with corresponding available gain G_a(f), and if T_s(f) were the actual input noise temperature function, the actual output noise power would be

N_oa = k ∫ G_a(f) T_s(f) df (2.6.28)

Assuming T_s to be constant over the band, and letting G_a(f_0) be the midband value, we obtain

B_n = (1/G_a(f_0)) ∫ G_a(f) df *(2.6.29)

as the receiver noise bandwidth. If the actual noise temperature T_s(f) is not constant over the band, the expression Eqn. (2.6.28) must be used in calculations.

Receiver Noise Factor

The operating noise temperature Eqn. (2.6.25) wraps together source and receiver noise into one parameter. It is sometimes convenient to keep these separate. To that end, it is usual to define a noise factor F in terms of which to characterize the intrinsic receiver noise. This is taken in reference to the specific source impedance which will feed the circuit in operation, but with the source assumed to be at a standard temperature T_0 = 290 K. The receiver itself is assumed to be at its physical operating temperature. Then the ("standard") noise factor F is the ratio of the total available output noise power density N_oa, with the input at temperature T_0, to the output available power density attributable to the input:

F = N_oa/(G_a kT_0) (2.6.30)

Using Eqn. (2.6.23) to express the total receiver output available noise power density in terms of the equivalent receiver temperature T_e, we have

F = (N_int + G_a kT_0)/(G_a kT_0)
  = 1 + G_a kT_e/(G_a kT_0) = 1 + T_e/T_0 (2.6.31)

so that also

T_e = (F - 1)T_0 (2.6.32)
Note that F, like T_e, depends on the source impedance, through G_a, although not on its temperature. (The dependence of F on source impedance is in fact more profound than simply via G_a (Pettai, 1984, p. 149); the matter involves the particular distribution of noise sources in the receiver.) Also note that noise factor, like noise temperature, may depend on frequency, since we deal always with power spectral densities.

It is interesting to note the noise factor of a lossy element at standard temperature T_0. From Eqn. (2.6.19) and Eqn. (2.6.31), this is

F = 1 + (L - 1)T_0/T_0 = L (2.6.33)

Note that Eqn. (2.6.33) is not correct unless the element is at standard temperature.

In terms of receiver noise factor F, using Eqn. (2.6.25) and Eqn. (2.6.32) the radar equation Eqn. (2.6.27) becomes

SNR_o = P_s/{k[T_s + (F - 1)T_0]B_n} (2.6.34)

Only in the particular (and unusual) case T_s = T_0 does this become the common expression

SNR_o = P_s/(FkT_0 B_n) (2.6.35)

In order to rescue the functional form Eqn. (2.6.35), an "operating" noise factor F_op can be defined. The operating noise factor of a combined system, including the source and the receiver, is defined as the ratio of the actual available output noise power density N_oa to the available output noise power density if the receiver had no internal noise sources:

F_op = N_oa/(G_a kT_s) (2.6.36)

This parameter has the advantage that it takes into account the actual source temperature T_s, so that the receiver output signal to noise ratio is simply

SNR_o = G_a P_s/(F_op G_a kT_s B_n)
      = P_s/(F_op kT_s B_n) = SNR_i/F_op (2.6.37)

where SNR_i is the output SNR in the case that the receiver, of bandwidth B_n, had no internal noise sources. Since F_op exceeds unity, always SNR_o < SNR_i from Eqn. (2.6.37); all else being equal, SNR can only degrade in the presence of system noise. The amount of degradation is governed by the ratio of output noise density N_int due to internal sources to amplified input noise density G_a kT_s. Since the former is nominally the same in each of the various stages of a receiver, while the latter increases from stage to stage, it is the noise figure of the earliest receiver stages which mainly controls the output SNR, an observation we will make more precise below as Eqn. (2.6.46).

The standard noise factor F defined in Eqn. (2.6.30) is the operating noise factor Eqn. (2.6.36), but assuming that the source temperature T_s is the standard temperature T_0 = 290 K. Using Eqn. (2.6.24) in Eqn. (2.6.36), we have

F_op = (T_s + T_e)/T_s = 1 + T_e/T_s *(2.6.38)

Since from Eqn. (2.6.32) we have

T_e = (F - 1)T_0

there results the relation between the operating and standard noise factors

F_op = 1 + (F - 1)(T_0/T_s) *(2.6.39)

which for T_s = T_0 again shows F_op = F.

Finally, one can define a "system noise factor" for use in the simple form Eqn. (2.6.35), retaining T_0 even when T_s ≠ T_0. This noise factor is defined such that

F_sys T_0 = T_s + (F - 1)T_0

that is,

F_sys = (F - 1) + T_s/T_0 *(2.6.40)

Then the radar equation Eqn. (2.6.34) is just

SNR_o = P_s/(F_sys kT_0 B_n) (2.6.41)

which is the form Eqn. (2.6.35), but not limited to T_s = T_0. The system noise factor, like the operating noise factor, accounts for both source noise and internal
receiver noise, and is simply expressed as

F_sys = T_op/T_0

The two are related by

F_sys = F_op(T_s/T_0) *(2.6.42)

using Eqn. (2.6.38).

The radar equation can be written using any of the various noise factors and the corresponding temperature, according to preference. For example, using the operating noise factor F_op the source noise temperature T_s appears explicitly, and the radar equation has the form Eqn. (2.6.37):

SNR_o = P_s/(F_op kT_s B_n) (2.6.43)

rather than Eqn. (2.6.41).

Figure 2.15 Cascade of noisy amplifiers.

Consider now the cascade of two noisy amplifiers of Fig. 2.15, with available power gains G_a1, G_a2 and available output noise power densities due to internal sources of N_int1, N_int2 when fed by the impedances R_s, R_out1, respectively. (Note that these powers do depend on source impedance, since the flow of internal noise power depends on the character of the complete driving circuit.) Then the combined output noise density is

N_oa = N_int2 + G_a2(N_int1 + G_a1 kT_0) (2.6.45)

were the source to be at physical temperature T_0, as assumed in using the standard noise factor F. Using the cascaded gain G_a2 G_a1 with the definition Eqn. (2.6.30) yields a cascade noise factor

F = (N_int2 + G_a2 N_int1 + G_a2 G_a1 kT_0)/(G_a2 G_a1 kT_0)
  = (N_int1 + G_a1 kT_0)/(G_a1 kT_0) + (N_int2 + G_a2 kT_0 - G_a2 kT_0)/(G_a2 G_a1 kT_0)
  = F_1 + (F_2 - 1)/G_a1 *(2.6.46)

In operation with a mismatched source, the noise factor must be modified by a mismatch factor L_m, as in

(2.6.47)

for example. In the case of tuning, the factor L_m may be less than unity. In that case, SNR has been improved by deliberate mismatching. Skolnik (1980, p. 345) mentions the effect, and Pettai (1984, p. 149) analyzes the matter.

From another point of view, the operating noise factor depends on source impedance. If, by the noise factor F corresponding to the factor F_op in the radar
equation, we imply "the" noise factor of the receiver, we must have reference to a specific source impedance. If that is the source impedance which matches the receiver input impedance, then all is well if the source and receiver are matched in operation. However, if mismatch is present at the input during operation, the corresponding operating noise figure Eqn. (2.6.38) is not the correct number to use in the radar equation. It must be modified by some factor L_m as in Eqn. (2.6.47).

We turn now to a simplified example of application of the expressions developed in this section for noise characterization.

2.6.3 An Example

In this section we want to give an example of noise calculations using the above relations. The analysis will be simplified in comparison with an actual situation. We will consider only the primary effects in operation; a thorough analysis is complicated, specific to each situation, and beyond our aims.

Consider then the system schematized in Fig. 2.16. A down-looking antenna views the earth, with the received signals passed through a waveguide connection and isolator (to protect the receiver during pulse transmission) to the carrier frequency (RF) amplifier. After amplification, the signal is shifted to another frequency band by a mixer and local oscillator (LO), and then passed through an intermediate frequency (IF) amplifier and filter chain to the output. In a radar receiver, the IF amplifier output would be detected to determine its power as a function of time for decision making, in a simple system, or perhaps digitized for further processing. On the other hand, the amplified IF signal might be converted to another carrier frequency for telemetry to a ground station.

Figure 2.16 Notional receiver used in noise calculations. (Earth at 300 K, e = 0.9; circulator at T_phys = 180 K; RF amplifier G_a = 20 dB, F = 4 dB, at 250 K; mixer L = 5 dB, t = 1.5; IF amplifier G_a = 60 dB, F = 3 dB, at 400 K.)

The down-looking geometry is such that the antenna effectively sees only the earth surface. The noise radiation impinging on the antenna is predominantly thermal in origin, since terrestrial point sources are mostly highly directive and pointed away from the orbiting satellite. The thermal noise in turn is partly reflected noise from the sun, and partly radiation from the relatively warm earth (Elachi, 1987, p. 144). Scattering of incident solar radiation by the atmosphere also occurs. At radar frequencies, the primary effect is that of radiation from the earth, in thermal equilibrium with its atmosphere. The earth surface can be taken as a grey body at temperature T_g = 300 K, as a nominal average value. (A grey body emits according to the Rayleigh-Jeans law Eqn. (2.6.6) of a black body, but with power reduced by a factor e < 1, the emissivity.) The emissivity of the earth depends on the character of the surface in view, and the geometry of the viewing situation. The emissivity is expressed as (Elachi, 1987, p. 117)

e = 1 - ρ

where ρ is the reflectivity or reflectance or, in the case of the sun as the energy source, the albedo. A nominal value ρ = 0.1 is reasonable as an order of magnitude for the earth surface in the microwave region at 20° viewing angle from vertical (incidence angle) (Elachi, 1987, p. 146).

The antenna structure itself has losses expressed by the radiation efficiency ρ_e of Eqn. (2.2.17). The signal loss resulting from this is already accounted for by use of the power gain in the radar equation, rather than the directivity. The implied noise increase is expressed by the loss L_e = 1/ρ_e. For argument we take ρ_e = 0.95. The antenna feed and extraneous losses might amount to 1 dB, and we lump those losses with the antenna loss. These together comprise the source noise temperature T_s.

The circulator we take to have a loss of 1 dB in the signal direction in its operating position, with the transmitter feed connected. Along with the antenna, we assume a circulator physical temperature of, say, T_phys = 180 K.

The RF amplifier, as fed by its actual source impedance, we take to have an available power gain of 20 dB. The (standard) noise figure, with the same source impedance, and measured with the amplifier at its operating temperature, we take as 4 dB. The RF output undergoes a cable loss of 1 dB before reaching the mixer. The mixer has a conversion loss (RF to IF) of 5 dB, and a noise temperature ratio (Pettai, 1984, p. 101)

t = T_so/T_0 (2.6.48)

where T_so is the output noise temperature under operating conditions, assuming an input temperature T_0. The local oscillator is followed by an extraneous loss of 1 dB, and the IF amplifier, as shown in Fig. 2.16. The later components operate at 400 K.

Let us first determine the source temperature T_s (Fig. 2.16). The detailed situation is diagrammed in Fig. 2.17. The antenna and feeds are assumed to be
Figure 2.17 "Front end" of Fig. 2.16. (G_1 = ρ_e = 0.95; L_12 = L_1 L_2 = 1.33; T_ANT = (L_12 - 1)T_PHYS = 59 K.)

Figure 2.18 Parameters in receiver Fig. 2.16. G: available power gain; F: noise factor; T_e: equivalent input temperature of self noise. (Physical temperatures 180 K, 250 K, 400 K.)

matched, so that the attenuation units shown have available power gains G_a = 1/L.

The generator temperature is just the temperature of the earth, modified by the emissivity: T_ext = eT_g = 270 K. The available power gains G_1 = ρ_e = 0.95, G_2 = -1 dB = 0.79 cascade as G_12 = G_1 G_2 = 0.75, corresponding to antenna and feed loss L_12 = 1/G_12 = 1.33 (1.2 dB). From Eqn. (2.6.19), this corresponds to an effective input temperature T_ant = 59 K, considering the physical temperature 180 K.

The sum of T_ext and T_ant, 329 K, is brought forward using G_12 to yield a total source temperature T_s = 248 K. This value is mainly driven by the high earth temperature. Were the antenna to be situated on earth and looking at a cold sky (T_ext = 50 K, say), the antenna losses (more precisely, the implied noise sources) would be proportionally more significant than in the earth viewing case.

The receiver chain can be characterized in terms of noise figure or noise temperature. For illustration, we will consider both procedures. Let us first seek the receiver equivalent (input) noise temperature T_e, leading to a total operating noise temperature T_op as in Eqn. (2.6.25). This would be appropriate if the radar equation were written in the form Eqn. (2.6.27). The receiver chain is expanded in Fig. 2.18.

The equivalent input noise temperature of each 1 dB loss follows from Eqn. (2.6.19), taking the varying physical temperatures into account. The noise factors then result from Eqn. (2.6.31). Those of the two amplifiers follow from Eqn. (2.6.32). The noise temperature ratio t = 1.5 of the mixer yields an output noise temperature Eqn. (2.6.48) of T_so = 435 K, were the input to have a temperature of 290 K. Considering the -5 dB gain, this yields an equivalent input temperature due to the mixer internal noise

T_e = (T_so - G_a T_0)/G_a = (435 - 0.316 x 290)/0.316 = 1086 K

Cascading the various temperatures T_e back to the source point, where T_s is taken, using Eqn. (2.6.26) yields a receiver equivalent input value

T_e = 47 + 438/0.79 + 65/79 + 1086/63 + 104/20 + 289/16
    = 47 + 552 + 1 + 17 + 5 + 18 = 640 K

Then, from Eqn. (2.6.25), T_op = 888 K would be used in Eqn. (2.6.27). The cascade relation makes clear the deleterious effect of extraneous losses early in the receiver chain, and the importance of an early low noise gain, especially before the lossy mixer stage.

Alternatively, proceeding in terms of noise figures, the cascade relation Eqn. (2.6.46) yields

F = 1.16 + 1.51/0.79 + 0.22/79 + 3.74/63 + 0.36/20 + 1.0/16 = 3.21

This again corresponds to an overall T_e = (F - 1)T_0 = 640 K, from Eqn. (2.6.32), as it must. The operating noise factor follows from Eqn. (2.6.38): F_op = 3.58, and the system noise factor from Eqn. (2.6.40): F_sys = 3.06.

The various forms of the radar equation use the quantities

Eqn. (2.6.27): kT_op = 1.23 x 10^-20 = -199.1 dB
Eqn. (2.6.34): k[T_s + (F - 1)T_0] = -199.1 dB
Eqn. (2.6.37): kF_op T_s = -199.1 dB
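Since the chain is fully specified, the bookkeeping of this example can be checked mechanically. The following sketch (Python; stage values read off Fig. 2.16/2.18 as given above, Boltzmann constant from standard tables) cascades the stage temperatures by Eqn. (2.6.26) and reproduces the quoted results:

```python
import math

T0 = 290.0            # standard temperature, K
k = 1.380658e-23      # Boltzmann constant, J/K
db = lambda x: 10.0 ** (x / 10.0)   # dB -> power ratio

# Source: earth grey body (e = 0.9, 300 K) seen through the antenna loss
# (rho_e = 0.95) and a 1 dB feed loss, both at T_phys = 180 K.
T_ext = 0.9 * 300.0
G12 = 0.95 / db(1)                  # antenna and feed gains, cascaded
T_ant = (1.0 / G12 - 1.0) * 180.0   # Eqn. (2.6.19)
T_s = (T_ext + T_ant) * G12         # total source temperature

# Receiver chain of Fig. 2.18: (available gain, equivalent input temperature).
stages = [
    (1 / db(1), (db(1) - 1) * 180.0),        # circulator: 1 dB loss at 180 K
    (db(20), (db(4) - 1) * T0),              # RF amplifier: F = 4 dB
    (1 / db(1), (db(1) - 1) * 250.0),        # cable: 1 dB loss at 250 K
    (db(-5), T0 * (1.5 - db(-5)) / db(-5)),  # mixer: t = 1.5, Eqn. (2.6.48)
    (1 / db(1), (db(1) - 1) * 400.0),        # extraneous loss: 1 dB at 400 K
    (db(60), (db(3) - 1) * T0),              # IF amplifier: F = 3 dB
]

# Eqn. (2.6.26): T_e = T_e1 + T_e2/G_a1 + T_e3/(G_a1*G_a2) + ...
T_e, G = 0.0, 1.0
for Ga, Te in stages:
    T_e += Te / G
    G *= Ga

T_op = T_s + T_e              # Eqn. (2.6.25)
F = 1.0 + T_e / T0            # Eqn. (2.6.31)
F_op = 1.0 + T_e / T_s        # Eqn. (2.6.38)
F_sys = (F - 1.0) + T_s / T0  # Eqn. (2.6.40)
print(round(T_s), round(T_e), round(T_op), round(10 * math.log10(k * T_op), 1))
# -> 248 640 888 -199.1
```

Rearranging the stage list confirms the remark above: moving the RF amplifier's gain ahead of the lossy stages, or deferring it, changes T_e dramatically even though the individual stage temperatures do not change.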
design towards.

2.7 THE POINT TARGET RADAR EQUATION

We will henceforth refer to the form Eqn. (2.6.37)

SNR_0 = P_r/(kF_op T_s B_n)
      = P_t G σ A_e/[(4πR^2)^2 kF_op T_s B_n] *(2.7.1)

where F_op is the operating noise factor and T_s is the total source equivalent noise temperature, including ohmic noise generated in the antenna, both at the radar carrier frequency. (In case of any impedance mismatch, the loss factor L_m should be included in the denominator.) If it has been decided what value of SNR_0 is required for reliable detection with a tolerable false alarm rate, or for adequate performance more generally, this equation indicates the trade-offs among the system parameters, the target characteristics (σ), and the maximum range. Consideration of these trade-offs leads directly to the concept of the matched filter, to be developed in Chapter 3.

The development of the radar equation above assumes only a single pulse is available for processing. If only a single pulse is used to make the decision as to presence or absence of a target at some range, a signal to noise ratio SNR_0 at the detection point of the order of 15 dB is required for reliable operation. Normally, however, measurement using more than one pulse is used, with the power from the multiple pulses averaged before a decision is taken. In that case, the signal power might be assumed constant from one pulse to another, while the noise power fluctuates randomly. Alternatively, the signal power itself due to a possible target might be assumed to fluctuate in accord with some stated statistical behavior. This latter situation is similar to that discussed above in defining the specific backscatter coefficient for an extended scene, in which case the objective is to estimate the mean of the backscattered power for each scene element.

Calculation of the SNR needed for each single pulse, in order that the average of some number of pulses behave reasonably as a detection criterion for point targets, has been carried through in detail for cases of interest in practice. Various statistical assumptions about the nature of the underlying target randomness are analyzed. The single pulse SNR needed for detection in the multiple pulse case is of course less than needed if only the single pulse itself is to be used. In rough terms, the single pulse SNR required decreases by the square root of the number of pulses whose power is averaged before a decision is taken. As a specific case, for a simple hard target with 300 pulse powers averaged, an SNR of 0 dB yields adequate performance, while 16 dB is required for a single pulse decision. The subject is elegant and thoroughly analyzed (DiFranco and Rubin, 1968), but we will not pursue it further.

2.8 THE RADAR EQUATION FOR A DISTRIBUTED TARGET

In remote sensing applications, in which the "target" is extended, as we discussed in Section 2.3 it is appropriate to define the radar cross section per unit geometrical area of the scene as a random variable, with a mean σ^0 which in general varies from one scene resolution element to another. The quantity of interest in the radar system is then not the deterministic power of a single echo pulse received in response to a target with some deterministic cross section σ, but rather the (ensemble) average power for a single pulse with terrain in view having average specific cross section σ^0, which will generally depend on which scene elements are in question.

In those terms, the radar equation of the previous section, Eqn. (2.7.1), becomes

SNR_0 = [P_t/((4π)^2 kF_op T_s B_n)] ∫ G(θ, φ) A_e(θ, φ) σ^0 R^-4 dA *(2.8.1)

where the integration is over the terrain illuminated by the antenna beam and sidelobes, and we take account that the effective receiver aperture depends on the direction from which the received field impinges. Taking account that, for a receiving antenna,

A_e(θ, φ) = (λ^2/4π) G(θ, φ)

this is the usual radar equation for a distributed target (Ulaby et al., 1982, p. 463).

This form Eqn. (2.8.1) of the radar equation, appropriate for average power received from a distributed target, expresses the average power due to terrain backscatter as it competes with average thermal noise. However, any particular realization of a SAR image will use as data particular realizations of the (random) received power for each pulse used in the processing, and each pulse will in turn involve some particular realization of the random variable σ^0 in each scene element. The processed image will have intensity in each image element which is some realization of a random process, whereas what we want in each image element is the value of the mean backscatter σ^0 for that element. The discrepancy is speckle noise, and results in a mottled appearance of the SAR image of a rough terrain which is nominally homogeneous.

The fact of speckle is inherent in the nature of the radar signal itself, whose voltage is the result of random interference of the backscattered electric field from the multitudinous facets of a distributed scene, as discussed in Section 2.3. In remote sensing applications, it is necessary to reduce the speckle noise in the image, and this is done by averaging multiple realizations of the backscatter coefficient from the same scene element. In Section 5.2 we will discuss a means for doing that, and the resulting statistical improvement of the smoothed image.

The quantity SNR_0 in the form Eqn. (2.8.1) of the radar equation says nothing directly about speckle noise, but affects the relative influence of speckle. Unlike the case of detection of point targets, for detection of distributed targets one can only seek to set a value of SNR_0 from the radar equation such that thermal noise is not the dominant noise effect in the image. Further processing designed to
defeat speckle will then be relatively more effective in improving the image for remote sensing use.

Since the distributed target radar equation serves the general purpose of expressing the mean influence of receiver noise on the image, over some ensemble of random images, it is useful to assume that the radar views a homogeneous scene, in the sense that the mean backscatter coefficient σ^0 is constant over the scene, and the same for each position of the radar. The radar equation, Eqn. (2.8.1), then appears as

SNR_0 = [P_t λ^2 σ^0/((4π)^3 kF_op T_s B_n)] ∫ G^2(θ, φ) R^-4 dA *(2.8.2)

with the integral taken over the footprint of the radar beam on the earth.

Equation (2.8.1), and its special case Eqn. (2.8.2), are exact, insofar as the parameters can be precisely specified. It is informative to recast them in various other forms, however. Although only approximate, these reveal the role of various parameters more readily related to SAR systems and the resulting images than the parameters of the exact equations. We will now develop two of these alternative forms.

In normal SAR imaging situations, as in Fig. 1.6, we can approximate R as constant and equal to the slant range at midswath. The cross beam extent of the footprint is by definition the region of terrain over which the antenna gain is appreciable. We might take the gain G(θ, φ) as approximately constant at the midbeam value G, the parameter in the radar equation, over the 3 dB azimuth beamwidth θ_H, and zero outside the beam. In the range dimension, the appropriate limit for the footprint is related to the time extent of the radar pulse. This is because the radar return voltage at any instant, in the case of a distributed target, is comprised of contributions from a slant range span of ΔR = cτ_p/2, corresponding to the radar pulse time width, projected on the horizontal using the incidence angle η. Then approximately:

A ≈ (Rθ_H)[cτ_p/(2 sin η)]

With this the radar equation, Eqn. (2.8.2), appears as

SNR_0 = P_t G^2 λ^2 σ^0 (Rθ_H)[cτ_p/(2 sin η)]/[(4π)^3 R^4 kF_op T_s B_n] *(2.8.3)

This is a form which has been called the SLAR radar equation (Ulaby et al., 1982, p. 572).

Equation (2.8.3) expresses the average SNR of a single radar pulse viewing a terrain with homogeneous mean specific backscatter coefficient σ^0. Another form considers a single resolution element with isotropic mean backscatter σ^0 and extent δx in azimuth and δR_g in ground range, where these are the SAR resolutions. From Eqn. (2.8.1), the single pulse SNR in this single cell case is

SNR_1 = P_t G^2 λ^2 σ^0 δx δR_g/[(4π)^3 R^4 kF_op T_s B_n] (2.8.4)

where we again use the antenna receiving gain.

The terrain element is effectively in view of the moving radar during transmission of some number N_A of pulses. The pulse return envelope is sampled at a rate f_s = B_R to produce N_R = τ_p B_R samples per pulse, where B_R is the one-sided bandwidth of the radar pulse. (The numbers N_R, N_A are just the dimensions of the two-dimensional compression filter used for SAR image formation.) The totality of N_I = N_A N_R data samples are processed coherently through the SAR image formation algorithm to produce a single image resolution cell. The thermal noise samples can be taken independent from sample to sample within each pulse (the noise bandwidth B_n ≈ B_R), and from pulse to pulse. As a result of coherent processing of N_I input samples, the SNR of each SAR processor output sample (image point) improves by N_I. Thus the signal (σ^0) to (thermal) noise ratio at the output image resolution cell is

SNR = N_I SNR_1 = N_A N_R SNR_1 (2.8.5)

It remains to express N_R and N_A in terms of factors in Eqn. (2.8.4). First,

N_R = τ_p B_R = P_av B_R/(P_t f_p) (2.8.6)

introducing the average power P_av over both the on time τ_p and off time T_p - τ_p of the pulse. The terrain point in question is in view for a time Rθ_H/V, so that

N_A = f_p Rθ_H/V (2.8.7)

Using Eqns. (2.8.6) and (2.8.7) in Eqn. (2.8.5) there results finally (after recalling f_p T_p = 1):

SNR = P_av G^2 λ^2 σ^0 θ_H δx δR_g/[(4π)^3 R^3 V kF_op T_s] *(2.8.8)

This is the SAR radar equation in Cutrona (1970). It expresses the average signal to thermal noise ratio of a SAR image point whose mean backscatter coefficient is σ^0. It is valuable as an indicator of the role of its various parameters. (Note
recasting of Eqn. ( 2.8.1) is also of interest. Suppose the radar views a single terrain for example that the azimuth resolution ox does not appear.) However, it will be
appreciated from the use of simple nominal relations in its derivation that it should not be used for numerical calibration work.

In Section 7.6 we will investigate more fully SNR and calibration considerations in SAR images. We now pass on to development of the basis for the SAR imaging algorithms.

REFERENCES

Barton, D. K. (1988). Modern Radar System Analysis, Artech House, Norwood, MA.
Bohm, D. (1951). Quantum Theory, Prentice-Hall, Englewood Cliffs, NJ.
Colwell, R. N., ed. (1983). Manual of Remote Sensing, American Society of Photogrammetry, Falls Church, Virginia.
Cutrona, L. J. (1970). "Synthetic Aperture Radar," Chapter 23 in Radar Handbook (Skolnik, M. I., ed.), McGraw-Hill, New York.
Di Franco, J. V. and W. L. Rubin (1968). Radar Detection, Prentice-Hall, Englewood Cliffs, NJ. (Reprinted by Artech House, Dedham, MA, 1980.)
Elachi, C. (1987). Introduction to the Physics and Techniques of Remote Sensing, Wiley, New York.
Gagliardi, R. (1978). Introduction to Communications Engineering, Wiley, New York.
Gradshteyn, I. S. and I. M. Ryzhik (1980). Table of Integrals, Series, and Products, Academic Press, New York.
Hogg, D. C. and W. W. Mumford (1960). "The effective noise temperature of the sky," The Microwave Journal, 3(3), pp. 80-84.
Kennard, E. H. (1938). Kinetic Theory of Gases, McGraw-Hill, New York.
Lawson, J. L. and G. E. Uhlenbeck, eds. (1950). Threshold Signals, McGraw-Hill, New York.
Meyer-Arendt, J. R. (1968). "Radiometry and photometry: Units and conversion factors," Applied Optics, 7(10), pp. 2081-2084.
Nicodemus, F. E. (1967). "Radiometry," Chapter 8 in Applied Optics and Optical Engineering, Academic Press, New York.
Page, L. (1935). Introduction to Theoretical Physics, Van Nostrand, New York.
Pettai, R. (1984). Noise in Receiving Systems, Wiley, New York.
Ridenour, L. N., editor-in-chief, MIT Radiation Laboratory Series, McGraw-Hill, New York, Vols. 1-28. Various titles and dates.
Sherman, J. W. III (1970). "Aperture-antenna analysis," Chapter 9 in Radar Handbook (Skolnik, M. I., ed.), McGraw-Hill, New York, pp. 9.1-9.40.
Silver, S., ed. (1949). Microwave Antenna Theory and Design, McGraw-Hill, New York.
Skolnik, M. I., ed. (1970). Radar Handbook, McGraw-Hill, New York.
Skolnik, M. I. (1980). Introduction to Radar Systems, McGraw-Hill, New York.
Skolnik, M. I. (1985). "Fifty years of radar," Proc. IEEE, 73(2), pp. 182-197.
Slater, P. N. (1980). Remote Sensing - Optics and Optical Systems, Addison-Wesley, Reading, MA.
Stewart, R. H. (1985). Methods of Satellite Oceanography, University of California Press, Berkeley.
Stutzman, W. L. and G. A. Thiele (1981). Antenna Theory and Design, Wiley, New York.
Ulaby, F. T., R. K. Moore, and A. K. Fung (1981). Microwave Remote Sensing, Vol. 1, Addison-Wesley, Reading, MA.
Ulaby, F. T., R. K. Moore, and A. K. Fung (1982). Microwave Remote Sensing, Vol. 2, Addison-Wesley, Reading, MA.
van der Ziel, A. (1954). Noise, Prentice-Hall, New York.
Whalen, A. D. (1971). Detection of Signals in Noise, Academic Press, New York.
CHAPTER 3

THE MATCHED FILTER AND PULSE COMPRESSION

In Chapter 2 the basic functional units of a radar system were discussed. The transformation of power fed to the antenna input by the transmitter into power at the receiver output due to scattering from a target was described. The competing influence of thermal noise was emphasized. The result of the development was the (point target) radar equation, Eqn. (2.7.1). Its specialization to a side-looking radar viewing a spatially extended terrain appears as Eqn. (2.8.1), an approximate form of which is the side-looking aperture radar (SLAR) equation, Eqn. (2.8.3). Finally, drawing upon some nominal relations for synthetic aperture radar systems from Chapter 1, the SAR radar equation, Eqn. (2.8.8), was developed.

In this chapter, we want to describe some developments in radar signal processing which helped overcome the limitations implied by the point target equation, Eqn. (2.7.1). The discussion will lay the basis for later description of SAR imaging algorithms. Ultimately, a clear understanding of the simple SAR relations of Section 1.2, underlying the SAR radar equation, Eqn. (2.8.8), will evolve.

The discussion begins with the development of the matched filter. (The terminology is not meant to imply any connection with the question of impedance matching discussed in Section 2.6.2.) The matched filter is important in its own right, but it is of considerable interest also in pointing the way towards the solution of a fundamental problem in early radar systems: the conflict between detectability and resolution.

After developing the matched filter, and examining its target resolution properties, we discuss the procedure of pulse compression from a point of view apart from detection and matched filtering.

3.1 THE MATCHED FILTER

The point target radar equation, Eqn. (2.7.1), indicates the main trade-offs available in a simple radar system. Early radars had ranges for targets of interest, such as aircraft, which were rather short for surveillance and warning purposes. Interest therefore centered on extending the range R for targets with specified values of cross-section σ, while realizing some specified adequate value of output signal to noise ratio SNR_0. An apparent barrier was the fact that all of the remaining parameters of the radar equation are limited by available hardware technology.

The transmitter power P_t, the average power while the radar pulse is turned on, is limited by the capability of RF power generation technology. Even if possible, its increase is costly, and involves scaling up components which are already large and costly. The antenna gain G has a theoretical maximum value (ρ = 1) related to the antenna physical area A by G = 4πA/λ², as in Eqn. (2.2.24), so that the antenna linear extent L_a relative to a wavelength is controlling. It is difficult to build antennas with ratios L_a/λ greater than a few hundred, and this practical limit was reached early on. The receiving aperture A_e is directly related to the gain by G = 4πA_e/λ², and is not an independent parameter. The source noise temperature T_s is largely imposed externally, while the receiver noise figure F_op depends on the technology of the time, and is reducible only to some limited extent. Finally, the receiver bandwidth must be wide enough to pass the transmitter pulse, so that roughly B_n ≈ 1/τ_p, where τ_p is the on time of the transmitter. This latter would appear to be limited by the required slant range resolution δR of the radar: τ_p ≤ 2δR/c. If a pulse length τ_p larger than this limiting value is used, two targets separated by δR in range will create overlapping returns in the receiver, which may not be distinguished as arising from two separate targets.

One further possibility remains. All of the development of Chapter 2 assumed that the receiver did nothing more sophisticated than amplify the input signal (and noise), while adding its own noise contribution. The receiver frequency response function was taken to be essentially constant over some band appropriate to the signal. The earliest aim of radar signal processing, as distinct from radar signal observation, was to determine how the receiver might be more effective than a simple amplifier. The fundamental advance which resulted was the technology of pulse compression. This is the exact one-dimensional analog of SAR image formation processing, and its development will lead directly to SAR algorithms. We begin with an earlier development which is related, that of the "matched filter".
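The pulse-length ceiling τ_p ≤ 2δR/c quoted above is easy to evaluate. A minimal sketch, with an assumed 15 m slant range resolution requirement (the speed of light is taken as the round value 3 × 10⁸ m/s; these numbers are illustrative, not from the text):

```python
C = 3.0e8  # speed of light (m/s), nominal round value

def max_pulse_length(delta_r_slant: float) -> float:
    """Longest simple (uncoded) pulse consistent with a slant range
    resolution delta_r_slant, per the limit tau_p <= 2*delta_R/c."""
    return 2.0 * delta_r_slant / C

delta_R = 15.0                        # required slant range resolution (m), assumed
tau_max = max_pulse_length(delta_R)   # 1e-7 s, i.e. a 100 ns pulse
B_min = 1.0 / tau_max                 # matching receiver bandwidth, about 10 MHz
```

For a peak-power-limited transmitter, such a 100 ns pulse carries very little energy; the matched filter developed next is what removes this resolution-versus-energy deadlock.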
3.1.1 Derivation of the Matched Filter

In a classic study, North (1963) considered the following problem. Suppose the radar transmits a waveform s(t). This is intercepted by a target at some range R and scattered back to the receiver, where it arrives with time delay τ = 2R/c. Assume the idealized situation that only the pulse amplitude is changed in the process. The receiver input is thus

    r(t) = a s(t − τ) + n(t)

where n(t) is the waveform of the combined source noise and equivalent receiver noise referred to the receiver input. The noise n(t) is assumed to be white (i.e., to have a constant power spectral density N W/Hz, one-sided, over the receiver band).

We are at liberty to choose the receiver to be any linear, time invariant system we please, so that the receiver transfer function H(jω) is to be chosen. In order that we perceive the target to be present, and assign to it the correct range R, we want the power output of this receiver at time τ to be as high as possible a "bump" above the surrounding "grass", characterized by the average value of the noise power at the output (Fig. 2.2). We have no direct interest in the receiver power output at times other than the time the target return is received. The receiver itself contributes no noise, since the input noise n(t) includes equivalent receiver self noise.

The mathematical problem to be solved is thus to choose the transfer function H(jω) such that (where we allow complex time waveforms for generality and use the ensemble expectation ℰ) the quantity

    α = ℰ|g_s(τ) + g_n(τ)|² / ℰ|g_n(τ)|² = 1 + |g_s(τ)|²/ℰ|g_n(τ)|² = 1 + SNR_0    (3.1.1)

is maximum. Here g_s(t) and g_n(t) are the receiver outputs for signal and noise inputs respectively. We take g_s(t) to be deterministic, and use the fact that the noise n(t) has zero mean, so that the random variable g_n(t) also has zero mean.

The output of a linear stationary system with input f(t) and transfer function H(jω) is the convolution (Appendix A)

    g(t) = ∫ h(t − t′) f(t′) dt′

where h(t) is the system unit impulse response, the inverse Fourier transform of the transfer function. Hence, with signal a s(t − τ) as input, we have the output value

    g_s(t) = a ∫ h(t − t′) s(t′ − τ) dt′    (3.1.2)

With the noise n(t) as input we have (Whalen, 1971, p. 47)

    ℰ|g_n(t)|² = (N/2) ∫ |H(f)|² df = (N/2) ∫ |h(t)|² dt    (3.1.3)

where N/2 is the two-sided noise density and we have used the Parseval relation in the last step.

From Eqn. (3.1.1), as the quantity to be maximized we can take the output signal to noise ratio SNR_0. Using Eqn. (3.1.2) and Eqn. (3.1.3), with a change of variable of integration in the former, this is

    SNR_0 = (2a²/N) |∫ h(t) s(−t) dt|² / ∫ |h(t)|² dt    (3.1.4)

The neatest procedure at this point uses the Schwartz inequality

    |∫ f_1(t) f_2*(t) dt|² ≤ ∫ |f_1(t)|² dt ∫ |f_2(t)|² dt    (3.1.5)

in which equality holds if and only if f_1(t) = k f_2(t), with k an arbitrary constant. Using this in the numerator of Eqn. (3.1.4), with f_1 = h(t) and f_2 = s*(−t), we have for any choice of h(t) that

    SNR_0 ≤ 2E/N    (3.1.6)

where E is the total energy of the received pulse a s(t − τ). Since the choice h(t) = k s*(−t) attains this upper bound, that filter impulse response is the choice which maximizes the receiver output SNR. Since k is arbitrary, we can choose k = 1, and obtain just

    h(t) = s*(−t)    (3.1.7)

In the frequency domain, the result Eqn. (3.1.7) is

    H(f) = ∫ s*(−t) exp(−j2πft) dt = ∫ s*(t) exp(j2πft) dt = S*(f)    (3.1.8)
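The optimality result h(t) = s*(−t) can be exercised numerically in discrete time: correlating a noise-free echo against the conjugated, time-reversed replica produces a peak exactly at the true delay, of amplitude aE. The chirp-like test phase, the amplitude a, and the delay below are arbitrary choices, not values from the text.

```python
import cmath

def matched_filter_output(r, s):
    """Discrete analog of Eqn. (3.1.2) with h[n] = conj(s[-n]):
    g[m] = sum_n r[n + m] * conj(s[n]), a correlation against the replica."""
    n_r, n_s = len(r), len(s)
    return [sum(r[n + m] * s[n].conjugate() for n in range(n_s))
            for m in range(n_r - n_s + 1)]

# Unit-amplitude test pulse with a chirp-like phase (arbitrary choice).
s = [cmath.exp(1j * 0.05 * n * n) for n in range(64)]
E = sum(abs(x) ** 2 for x in s)  # pulse energy; 64 samples of unit magnitude

a, delay = 0.5, 100  # echo amplitude and sample delay, assumed
r = [0j] * delay + [a * x for x in s] + [0j] * 50  # noise-free received signal

g = matched_filter_output(r, s)
peak_lag = max(range(len(g)), key=lambda m: abs(g[m]))
# The peak sits at the true delay, with |g| = a * E there.
```

Adding white noise to r would leave the peak location intact on average; the point of Eqn. (3.1.6) is that no other linear filter can raise the peak-to-noise ratio above 2E/N.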
Here S(f) = A(f) exp[jψ(f)] is the spectrum of the transmitted signal s(t).

The output SNR Eqn. (3.1.6) attained by the matched filter is the instantaneous value precisely at the delay time τ of the target echo. The filter is usually implemented in the intermediate frequency (IF) amplifier (Fig. 2.16), and its output therefore oscillates at the IF frequency. It is the average power of that output over a pulse that is the quantity which corresponds to the average received power in the radar equation. That average power corresponds to a SNR of E/N, half the SNR corresponding to the peak power of the sinusoidal matched filter output.

The energy of the input signal is just

    E = P_s τ_p

where P_s is the average power over the signal duration τ_p. If the noise bandwidth of the receiver is B_n, then the attained average output SNR is

    SNR_0 = E/N = P_s τ_p/N = (P_s/P_n) B_n τ_p

where P_n = N B_n is the average input noise power. Thus the matched filter achieves a SNR increase equal to the bandwidth time product of the transmitted pulse.

Assuming use of a matched filter in the receiver, the radar equation, Eqn. (2.7.1), becomes

    SNR_0 = E/N = P_s τ_p/N = P_t τ_p G_t σ A_e/[(4πR²)² k F_op T_s]
          = E_t G_t σ A_e/[(4πR²)² k F_op T_s]    *(3.1.9)

where τ_p is the pulse length and E_t = P_t τ_p is the energy of the transmitted pulse. For a simple transmitter pulse, say a rectangular envelope burst of RF carrier, the pulse duration and bandwidth relate as τ_p ≈ 1/B, with B some reasonable measure of bandwidth, say the noise bandwidth B_n. Then the matched filter radar equation, Eqn. (3.1.9), is just the simple radar equation Eqn. (2.7.1). The pulse bandwidth time product is unity. In the case of a simple transmitter pulse, the development of the matched filter formalism therefore added little of practical importance. However, the solution to the above optimization problem, the matched filter, provided a precise foundation upon which to base understanding of some ad hoc procedures. In the earlier form of the radar equation, Eqn. (2.7.1), the noise bandwidth B_n appears. It was clear that this bandwidth should somehow be optimized to improve SNR. Obviously, the receiver band should be adjusted so that in some sense "most of" the signal pulse is passed but the noise is blocked "as much as possible". This led to procedures for tailoring the receiver circuits to yield a transfer function matched in some sense to the transmitter pulse envelope shape. Even if the matched filter were not precisely realizable in practice, it provided a known upper bound to SNR, and an exact specification of the ideal filter against which to judge more convenient sub-optimal realizations.

For a simple RF burst, the matched filter improves performance less than 1 dB compared with simple filters (Skolnik, 1980, p. 374). In general, however, it is of prime importance that only pulse energy appears in the matched filter radar equation Eqn. (3.1.9), rather than power and bandwidth separately. So far as detection performance is concerned, the net result is an additional degree of freedom in system design. We will now develop the implications of this at more length.

3.1.2 Resolution Questions

In Section 3.1.1 the radar equation was derived assuming a matched filter receiver. In the case of a simple transmitter pulse, for which pulse duration τ_p and bandwidth B are related nominally by B = 1/τ_p, the result Eqn. (3.1.9) is essentially the same as the radar equation developed earlier, Eqn. (2.7.1). That is, the simple receiver with uniform response over its passband is nearly the matched filter for this case. On the other hand, the matched filter radar equation in general involves the energy of the transmitted pulse, and thereby its time duration for a fixed (and limited) available transmitter power, but nowhere does the pulse bandwidth appear. This is a significant difference, and the difference has to do with target resolution. Use of a matched filter opens the possibility to use a long high energy pulse for good SNR, but without sacrificing resolution. The resolution expression δR = cτ_p/2 is no longer in effect, as we shall now see.

To determine resolution, we need to find the extent to which a point target in space is "smeared out" by viewing it through the radar sensing system. With no signal processing, a point target produces a response at the receiver output which is essentially the time history of average power of the transmitted pulse, which has width τ_p. Thus, two point targets separated in slant range by less than ΔR = cτ_p/2 will produce receiver outputs which overlap in time. Such a response is impossible to distinguish from a return due to a single target of space extent wider than a point. It cannot be guaranteed that two targets closer together in range than cτ_p/2 will be distinguished as two targets. This is the resolution limit of the simple radar system.

On the other hand, suppose a matched filter processor is used. An isolated point target produces a response s(t − τ) at the filter input, where τ = 2R/c is the delay since transmission. The corresponding filter output is the convolution

    ∫ h(t − t′) s(t′ − τ) dt′ = ∫ s*(t′ − t) s(t′ − τ) dt′
Shifting origin to center the response at time τ, and making a change of variable, this is

    g(t) = ∫ s*(t′ − t) s(t′) dt′    (3.1.10)

Equivalently, in terms of the spectrum S(f) of the transmitted pulse,

    g(t) = ∫ |S(f)|² exp(−j2πft) df    (3.1.11)

The time width of this function g(t), the matched filter output in response to a point target, controls the resolving capability of the system. That width depends on the details of the transmitted pulse. For example, if the pulse is a simple burst of RF, so that (using complex notation)

    s(t) = a(t) exp(jω_0 t),    a(t) = 1, 0 ≤ t ≤ τ_p,

being careful with limits in the integral in Eqn. (3.1.10) we obtain

    g(t) = (τ_p − |t|) exp(jω_0 t),    |t| ≤ τ_p    (3.1.12)

The power function |g(t)|² is a quadratic shape of width of the order of τ_p. This yields a time resolution δt = τ_p, which is the same as obtained without the matched filter.

As a more interesting example, for any pulse with a constant (say unity) spectrum magnitude (and arbitrary phase) over some (one-sided) band |f − f_c| < B/2, the second form, Eqn. (3.1.11), gives

    g(t) = B [sin(πBt)/(πBt)] exp(−j2πf_c t)    (3.1.13)

which has a power function |g(t)|² of width δt ≈ 1/B at the 3 dB points. Thus, the time resolution in the matched filter output in this case has nothing to do with input pulse length τ_p, but only pulse bandwidth B. The width of the matched filter input τ_p would be the time resolution afforded without matched filter processing, while the time resolution with processing is 1/B. The ratio of these, the pulse "compression" ratio afforded by matched filter processing, is the bandwidth time product Bτ_p of the transmitted pulse.

The important point is that use of a matched filter, in addition to enhancing detectability by maximizing receiver output SNR, decouples pulse length from range resolution. Therefore, long pulses of tolerable average power can be used to obtain large energy E = P_s τ_p for satisfying the detectability requirements, while at the same time a wide bandwidth can be used to obtain good resolution.

The pulse most often used to do this job is the linear-FM, or "chirp", pulse

    s(t) = cos[2π(f_c t + Kt²/2)]    (3.1.14)

with frequency (the time derivative of phase) f = f_c + Kt, a linear function of time over the pulse. Other waveforms may be used (Skolnik, 1980, p. 420); in remote sensing SAR, however, the linear FM is almost universal. The most important exception is the discrete version of the linear FM

    s_i(t) = cos[2π(f_c + iΔf)t],    i = −(N − 1)/2, ..., N/2,    Δf = B/N

The N pulses s_i(t) are transmitted sequentially in a burst to create a "step chirp", or "synthetic pulse", wave. Wehner (1987) discusses the technique in detail.

Because of its practical importance, the matched filter has been analyzed in extensive detail in the literature. The central quantity studied is the ambiguity function of various transmitter pulse waveforms, that being the time behavior of the average output power of the matched filter corresponding to the transmitted pulse in question. An excellent resource is the book by Cook and Bernfeld (1967), or that of Rihaczek (1969). Many other texts provide more or less detailed treatments of the subject.

In the brief discussion above, we assumed that the target return, the input to the matched filter, was simply a time delayed and attenuated version of the transmitted pulse. More generally, because of relative motion between radar and target, there will be a Doppler frequency shift as well as a time delay. For a narrowband transmitted pulse

    s(t) = a(t) exp(jω_0 t)

the received pulse is approximately

    r(t) = a(t − τ) exp[j(ω_0 + ω_D)(t − τ)]

where the Doppler shift is ω_D = −4πṘ/λ, with Ṙ the target range rate. The matched filter for the transmitted pulse, which is what will have been designed into the receiver, has impulse response h(t) = s*(−t). The resulting matched filter output function, shifted for convenience of notation to have time origin at τ, is

    f(t, f_D) = exp(jω_0 t) ∫ a(t′) a*(t′ − t) exp(j2πf_D t′) dt′    (3.1.15)
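Two headline properties of the linear FM pulse can be checked with a short sampled-chirp experiment: the matched filter output compresses to a main lobe roughly 1/B wide (ratio Bτ_p), and a Doppler offset f_D in the echo, per Eqn. (3.1.15), slides the compressed peak by f_D/K samples. All numbers below (chirp rate, lengths, Doppler offset) are arbitrary test values, not parameters from the text.

```python
import cmath

N, K = 200, 0.002  # pulse length (samples) and chirp rate (cycles/sample^2), assumed
B = K * N          # swept bandwidth: 0.4 cycles/sample, so B * tau_p = 80

s = [cmath.exp(1j * cmath.pi * K * n * n) for n in range(N)]  # complex linear FM pulse

def xcorr(a, b, m):
    """Correlation of a against the replica b at lag m: the discrete
    matched filter output of Eqn. (3.1.10)."""
    lo, hi = max(0, m), min(len(a), len(b) + m)
    return sum(a[n] * b[n - m].conjugate() for n in range(lo, hi))

# 1) Compression: half-power width of the point-target response.
peak = abs(xcorr(s, s, 0)) ** 2
width = sum(1 for m in range(-10, 11) if abs(xcorr(s, s, m)) ** 2 >= 0.5 * peak)
compression_ratio = N / width  # roughly the time-bandwidth product B * tau_p

# 2) Range-Doppler coupling: a Doppler offset displaces the peak by f_D / K.
f_D = 0.01  # Doppler shift (cycles/sample), assumed
r = [x * cmath.exp(2j * cmath.pi * f_D * n) for n, x in enumerate(s)]
peak_lag = max(range(-20, 21), key=lambda m: abs(xcorr(r, s, m)))
# peak_lag comes out at -f_D/K = -5: the moving target is assigned a shifted range.
```

The measured main lobe (3 samples here) brackets the nominal 1/B = 2.5 samples, so the 200-sample pulse is compressed by nearly the full time-bandwidth product of 80.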
By convention of definition, the corresponding ambiguity function is

    α(t, f_D) = |f(−t, f_D)|²    (3.1.16)

As an example, for the linear FM pulse Eqn. (3.1.14), a contour of the ambiguity function is shown in Fig. 3.1. For a target with no motion relative to the radar line of sight, f_D = 0 and the time width of the matched filter output power function is nominally 1/B, as developed above. For a target known to be at some particular range R = cτ/2, t = 0 and the Doppler shift due to target motion can be measured with resolution nominally 1/τ_p. The locus of the peak of the ridge of the function Eqn. (3.1.16) is f_D = Kt, so that a target which is in fact in motion with a consequent Doppler shift f_D will be assigned to a range different from its true range by an amount ΔR = (c/2)(f_D/K). This is the source of the adjective "ambiguity" in ambiguity function. A target which is at some particular range and moving may (and will, for this example of the linear FM) appear the same at the matched filter output as a target at some other range which is not moving. Another way to say this is that the linear FM wave has frequency and time "locked" together. A frequency shift Δf at the matched filter input causes a time shift Δt = Δf/K at the output.

Figure 3.1 Ambiguity function 3 dB contour for the linear FM pulse, a ridge along the line f_D = Kt.

Extensive discussion of ambiguity function analysis can be found in the references mentioned. Since the systems of interest to us in this book all use the linear FM pulse, we need not pursue the matter further in generality. We will later return to some specific results as needs arise.

3.2 PULSE COMPRESSION

As we discussed in Section 3.1.2, the matched filter output, Eqn. (3.1.13), realizes time compression of pulses of unit (or at least constant) spectrum magnitude in the ratio of the bandwidth time product Bτ_p. This is the case in particular for the common linear FM waveform. That the two objectives of SNR maximization and resolution improvement (by compression) are realized by the matched filter follows because, if the transmitted waveform s(t) has spectrum

    S(f) = A(f) exp[jψ(f)]

then the matched filter Eqn. (3.1.8) is

    H(jω) = S*(f) = A(f) exp[−jψ(f)]    (3.2.1)

while on the other hand, as we will discuss below, the general pulse compression filter is

    H(jω) = 1/S(f) = [1/A(f)] exp[−jψ(f)],    A(f) ≠ 0    (3.2.2)

The two filters Eqn. (3.2.1) and Eqn. (3.2.2) are identical provided A(f) = 1 over the signal band, or at least A(f) = const.

Looking ahead to application to imaging algorithms, it is desirable to consider pulse compression processing in its own right, apart from considerations of detection and matched filtering. We begin with a development which will generalize to SAR image formation, and then develop some material of later use having to do with the properties of the linear FM waveform, and with some modifications of the compression filter to alleviate time sidelobes in its response.

3.2.1 Linearity, Green's Function and Compression

We now want to relate compression processing to a general procedure in linear system theory. This amounts to inverting the system impulse response, in the operator sense. In mathematical terms, we have to do with the Green's function of the dynamic system, and its operator inverse. It is an absolute requirement
that the system we deal with be linear (but not necessarily time invariant in its properties). We will first discuss the linearity of the radar hardware and signal processing, and then describe the target features which enter linearly into the radar received signal.

The Radar as a Linear System

Radar systems are designed and operated to be linear in the various voltage waveforms, at least up to the output of the IF amplifier and filter stages. In the coherent radars of later interest to us, the (nonlinear) operation of average power formation at the IF output is replaced with the linear operation of "quadrature demodulation", also called "I, Q detection", or "complex demodulation". As a result, all the operations in an imaging radar and its associated signal processing are designed to be strictly linear. The only exception is the final operation of forming the real image intensity from the signal processor output, the so-called "complex image".

In the radar range equation, Eqn. (2.7.1), the target cross-section σ appears. This is the area we impute to the target based on the power it scatters toward the receiver, under the assumption that the target is an isotropic scatterer (which it might or might not actually be). If multiple targets are in view, or if we view an extended region with multiple scattering elements, the receiver response will depend on the characteristics of all the targets. Since the electromagnetic field equations are linear in field strength, rather than in power, the cross-sections of the individual targets are not immediately appropriate for combining into a total response. In fact, as we discussed in Section 2.3, for extended targets with specific cross sections σ⁰(θ, φ), the superposition of mean elemental cross-sections by means of the expression Eqn. (2.3.4):

    I_rec = ∫ [σ⁰(θ, φ) I(R, θ, φ)/4πR²] dA

is only approximately correct, and only when interpreted with care, as discussed by Ulaby et al. (1982, p. 508). In order to preserve and make use of linearity, it is therefore more appropriate to deal with receiver voltage, rather than power. To that end, we want to describe the target in terms of its effect on electric field, rather than on average power. The Fresnel reflection coefficient ζ is the appropriate quantity to introduce (Ulaby et al., 1981, p. 73).

Consider an extended target of area A, which we will take as planar, and normal to the radar beam center (Fig. 3.2). Let E_in(x, y) be the electric field phasor incident at some point on A, and let E_s(x, y) be the corresponding reflected field. The incident field is assumed linearly polarized. Then (Ulaby et al., 1981, Chapter 2) the reflected field is also polarized, in the same direction as the incident field, and has phasor

    E_s(x, y) = ζ(x, y) E_in(x, y)    (3.2.3)

where ζ(x, y) is the (possibly complex) dimensionless Fresnel reflection coefficient of the surface element. It is determined by the local dielectric constant of the reflecting surface. (More generally, the phasor E_s will result by scattering, and will have components both parallel and perpendicular to the incident wave. Here we consider only the "like-polarized" reflection coefficient.)

Now suppose a receiving antenna views the surface from range R. The electric field E_rec(R) at the receiving antenna is given by the diffraction integral Eqn. (2.2.6)

    E_rec(R) = (jk/2π) ∫_A E_s(x, y) [exp(−jkr)/r] dA    (3.2.4)

in which we have made the approximations (Fig. 2.4) r ≫ λ, ẑ·r̂ = 1, ŝ = r̂. Further, using Eqn. (2.2.13), the incident field phasor at the terrain is

    (3.2.5)

Figure 3.2 A terrain element of area A illuminated by an incident field E_in. The scattered field E_s acts as secondary aperture illumination resulting in directivity D_A of the terrain element.
where Z_0 = √(μ_0/ε_0) is the impedance of free space and we have inserted the appropriate phase shift, and assumed the target region does not extend beyond nominal beam center.

Combining Eqns. (3.2.3), (3.2.4), and (3.2.5), there results

    (3.2.6)

This is of the form of a convolution of the target (terrain) reflectivity coefficient ζ(R′) with the Green's function (impulse response)

    h(R|R′) = const × exp[−j2k|R − R′|]/|R − R′|²    (3.2.7)

It is through Eqn. (3.2.6) that the radar observable E_rec is linearly related to the terrain "complex image" elements ζ(R).

It is interesting to relate the (power) backscatter coefficient σ⁰ of the surface with the Fresnel (voltage) reflection coefficient ζ. Using the far field expression Eqn. (2.2.9) with θ = 0 (Fig. 2.4), the received field phasor Eqn. (3.2.4) gives

    |E_rec|² = (1/λR)² |∫_A E_s(x, y) dx dy|²

From Eqn. (2.3.3), the received intensity and the terrain backscatter coefficient are related as

    (3.2.10)

where I_in = |E_in|²/Z_0 is the intensity illuminating the terrain and D_A is the directivity of the illuminated terrain patch.

In this Eqn. (3.2.10) the terrain directivity D_A involves the distribution ζ(R′). In the idealized case of constant reflectivity ζ(R′) (specular reflection), from Eqn. (2.2.19) we have D_A = 4πA/λ², so that Eqn. (3.2.10) becomes

    (3.2.11)

More generally, a complex random scattering coefficient ζ can be defined in analogy with Eqn. (3.2.3). From Eqn. (3.2.10), the mean backscatter coefficient of a terrain patch is then

    *(3.2.12)

It is σ⁰(R) which is the terrain "image". Using the radar and processing system, we observe the receiver output voltage

    v_r(R) = a E_rec(R)
140 THE MATCHED FILTER AND PULSE COMPRESSION
3.2 PULSE COMPRESSION 141
where a is a system constant which we will absorb into the Green's function Eqn. (3.2.7). Combining Eqns. (3.2.6) and (3.2.7), we then can write the voltage output phasor of the linear receiver as the convolution

    v_r(R) = ∫_{−∞}^{∞} h(R|R') ζ(R') dR'    (3.2.14)

where R = cτ/2 is essentially the receiver voltage time variable, and the finite length of the target, or the finite coverage of the radar beam, will limit the interval of integration. We want to determine the complex image ζ(R') given the signal v_r(R) and the impulse response h(R|R'). Note that, if ζ = δ(R' − R_0) represents a unit point target at range R_0, where δ is the Dirac delta function (unit impulse), the receiver response is

    v_r(R) = ∫_{−∞}^{∞} h(R|R') δ(R' − R_0) dR' = h(R|R_0)

Thus the impulse response h(R|R') can be calculated as the receiver output should the reflectivity function be an ideal impulse, since the receiver system is known.

Now suppose that in some way or another we have found a function h^{−1}(R_0|R) (the inverse Green's function) such that

    ∫_{−∞}^{∞} h^{−1}(R_0|R) h(R|R'_0) dR = δ(R_0 − R'_0),    −∞ < R_0, R'_0 < ∞    (3.2.15)

Linear processing of the received signal v_r(R) of Eqn. (3.2.14) with this operator yields

    ∫_{−∞}^{∞} h^{−1}(R_0|R) v_r(R) dR = ζ(R_0)    (3.2.16)

This is to say that the indicated operation on the received data v_r(R) exactly reconstructs the complex reflectivity distribution ζ(R) in view of the radar. The processing by h^{−1}(R_0|R) produces an image of the reflectivity distribution, and the operations involved in its application constitute an imaging algorithm. The processing amounts to correlating the received signal v_r(R) with a function h^{−1}(R_0|R) for various values corresponding to ranges R_0 = cτ/2 where the reflectivity function is to be determined.

Let us now consider how to determine the inverse Green's function h^{−1}(R_0|R) from the specified Green's function h(R|R_0). Consider first the case that we have available the radar system output time function v_r(R) over the infinite time span (−∞, ∞), an assumption which we will obviously need to modify later. Suppose also (the actual situation for the current case of one dimensional "range" processing) that the radar, in addition to being a linear system, is time stationary, i.e., h(R|R_0) = h(R − R_0). (Here we have used a common abuse of notation in designating the one-variable function h(R) with the same letter as the two-variable function h(R|R_0).) Then defining a corresponding h^{−1}(R_0|R) = h^{−1}(R_0 − R), the convolution integral Eqn. (3.2.15) which we want to solve becomes

    ∫_{−∞}^{∞} h^{−1}(R_0 − R) h(R − R'_0) dR = ∫_{−∞}^{∞} h^{−1}(R_0 − R'_0 − x) h(x) dx = δ(R_0 − R'_0),    −∞ < R_0, R'_0 < ∞    (3.2.17)
Applying the Fourier transformation to the convolution Eqn. (3.2.17) yields

    H^{−1}(f) H(f) = 1,    H^{−1}(f) = 1/H(f)    (3.2.18)

The solution, Eqn. (3.2.18), is obvious in this simple case. The filter H^{−1}(f), which compresses the signal h(x) back to an impulse, simply undoes whatever the radar linear transfer function H(f) has done.

In the particular case that

    H(f) = exp[jψ(f)]

we have

    H^{−1}(f) = 1/exp[jψ(f)] = exp[−jψ(f)]

so that

    H^{−1}(f) = H*(f)

and we recover the matched filter as the compression processor. (Recall that R = ct/2 relates range and receiver signal time.) In the general case that |H(f)| ≠ 1, the compression processor is not the matched filter; the filter amplitude 1/|H(f)| ≠ |H(f)|.

3.2.2 The Matched Filter and Pulse Compression

Cook and Bernfeld (1967, Chapter 3) have given a careful discussion relating the matched filter with compression processing. The developments there also make precise the relationship locking time with frequency for linear FM waveforms having large bandwidth time products, an important basic concept we have so far referred to only in passing. Since SAR processing mostly involves compression of linear FM waveforms, we will here summarize some points relating to the procedure. Much of the development involves an approximate way of calculating the spectrum of a time waveform.

The Principle of Stationary Phase

Consider a general waveform

    s(t) = a(t) exp[jφ(t)]

corresponding to the real bandpass waveform v(t) = a(t) cos[ω_c t + φ(t)]. We want to find the spectrum S(f) of s(t):

    S(f) = ∫_{−∞}^{∞} a(t) exp{j[φ(t) − 2πft]} dt    (3.2.19)

The integration is in general not possible to carry out in closed form. However, the principle of stationary phase provides a useful approximation.

If we consider (say) the real part of the spectrum Eqn. (3.2.19), we have

    Re[S(f)] = ∫_{−∞}^{∞} a(t) cos[2πft − φ(t)] dt    (3.2.20)

There may exist time ranges of the interval of integration over which the angle 2πft − φ(t) changes rapidly with respect to the changes of the function a(t). Then the contribution to the integral value from regions of adjacent positive and negative loops of the cosine function will nearly cancel, with no net contribution to the value of the integral. Application of the principle of stationary phase amounts to taking note of that fact, and concentrating attention elsewhere, over intervals where the angle of the cosine function changes only slowly.

The location of such time ranges, with slowly varying angle 2πft − φ(t), depends on the particular value of f for which we are trying to calculate the number S(f), since f appears as a parameter in the angle. Ranges of time for which we do get net contribution to the integral are characterized by the fact that the integrand does not oscillate rapidly, which is to say that the phase angle 2πft − φ(t) is nearly constant. Thus we can confine attention to time ranges near the stationary points of the phase function, which are times t(f) for which

    d[2πft − φ(t)]/dt = 0,    2πf = dφ/dt    (3.2.21)

Since we are confining attention to times t near solutions t(f) of Eqn. (3.2.21), we can expand the integrand of the Fourier transform Eqn. (3.2.19) as a Taylor series around t(f). Keeping only the zeroth order term in a(t), and terms through the quadratic in the angle, noting that the first order term in the angle is zero by the definition Eqn. (3.2.21) of t(f), and for simplicity of notation assuming that only a single stationary point exists, we obtain (where we write t_f = t(f))

    S(f) = a(t_f) exp{j[−2πf t_f + φ(t_f)]} ∫_{t_f−Δ}^{t_f+Δ} exp{j φ''(t_f)(t − t_f)^2/2} dt    (3.2.22)

where 2Δ is the interval (in general a function of f) over which the quadratic approximation to the phase function in Eqn. (3.2.19) is reasonable.
Making a change of variable in the integral Eqn. (3.2.22) results in:

    S(f) = 2 a(t_f) [2π/|φ''(t_f)|]^{1/2} exp{j[φ(t_f) − 2πf t_f]} ∫_0^{Δ√(|φ''(t_f)|/2π)} exp{j sgn[φ''(t_f)] π y^2} dy    (3.2.23)

In the particular case that the upper limit of the integral can be extended with little error to infinity, the Fresnel integral that arises can be evaluated (Gradshteyn and Ryzhik, 1965, Section 3.691.1) to yield

    S(f) = a(t_f) [2π/|φ''(t_f)|]^{1/2} exp{j[φ(t_f) − 2πf t_f + sgn(φ''(t_f)) π/4]}    (3.2.24)

Consider now the linear FM pulse

    s(t) = exp[j2π(f_c t + K t^2/2)],    |t| ≤ τ_p/2    (3.2.25)

for which φ''(t) = 2πK = const. The stationary phase relation Eqn. (3.2.21) yields

    t_f = (f − f_c)/K    (3.2.26)

That is to say, for any frequency f, only time portions of the signal located near the value Eqn. (3.2.26) contribute to the spectrum at the frequency in question. Frequency and time are approximately locked together in the linear FM waveform.

Since the phase of the signal Eqn. (3.2.25) is exactly quadratic in time, the expression Eqn. (3.2.22) is exact, with the range of integration changed to |t| < τ_p/2, the full pulse extent. The approximate expression Eqn. (3.2.23) is replaced by the exact expression

    S(f) = |K|^{−1/2} exp[−jπ(f − f_c)^2/K] ∫_{√(Bτ_p)(−1 − y sgn K)/2}^{√(Bτ_p)(1 − y sgn K)/2} exp[j (sgn K) π y'^2] dy'    (3.2.27)

using Eqn. (3.2.26) to calculate

    2πf t_f − φ(t_f) = 2πf t_f − 2π(f_c t_f + K t_f^2/2) = π(f − f_c)^2/K

In Eqn. (3.2.27), we define

    B = |K| τ_p,    f − f_c = yB/2    (3.2.28)

For adequately large bandwidth time product Bτ_p, the Fresnel integral in Eqn. (3.2.27) can be evaluated; and it is found that S(f) ≈ 0 for |f − f_c| > B/2, so that the quantity B, defined by Eqn. (3.2.28), is the signal bandwidth. Cook and Bernfeld (1967, p. 139) calculate that to be the case for Bτ_p > 100 (Fig. 3.3).

Figure 3.3 Amplitude and phase spectra of linear FM signals with various bandwidth time products. Phase shown is residual after removal of nominal quadratic phase (from Cook and Bernfeld, 1967, and after Cook, 1960, Proc. IRE, 48, pp. 300-316. © IEEE).

In the band, for large Bτ_p the same expression as Eqn. (3.2.24) results:

    S(f) = |K|^{−1/2} exp[j(π/4) sgn(K)] exp[−jπ(f − f_c)^2/K],    |f − f_c| < B/2    (3.2.29)
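The stationary phase result Eqn. (3.2.29) is easy to verify numerically: the spectrum magnitude of a sampled linear FM pulse is close to |K|^{-1/2} inside the band |f − f_c| < B/2 and falls away rapidly outside it. A minimal check with a baseband chirp (f_c = 0; all parameter values below are illustrative):

```python
import numpy as np

fs = 4096.0          # sample rate (Hz), well above the chirp bandwidth
tau_p = 1.0          # pulse length (s)
K = 400.0            # FM rate (Hz/s); B = |K|*tau_p = 400 Hz, B*tau_p = 400
t = np.arange(-tau_p / 2, tau_p / 2, 1 / fs)
s = np.exp(1j * np.pi * K * t**2)        # baseband linear FM (f_c = 0)

N = 1 << 16
S = np.fft.fftshift(np.fft.fft(s, N)) / fs       # approximate continuous FT
f = np.fft.fftshift(np.fft.fftfreq(N, 1 / fs))

B = abs(K) * tau_p
in_band = np.abs(f) < 0.4 * B            # stay away from the band edges
out_band = np.abs(f) > 0.75 * B

pred = abs(K) ** -0.5                    # |S(f)| predicted by Eqn. (3.2.29)
ratio_in = np.abs(S[in_band]).mean() / pred
ratio_out = np.abs(S[out_band]).max() / pred
print(ratio_in)    # close to 1 (small Fresnel ripple remains)
print(ratio_out)   # small: the spectrum has collapsed outside the band
```

Here Bτ_p = 400, comfortably in the regime where Eqn. (3.2.29) applies; for small Bτ_p the in-band ripple visible in Fig. 3.3 becomes pronounced.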
The principle of stationary phase can also be applied to the inverse transform relation

    s(t) = ∫_{−∞}^{∞} A(f) exp j[ψ(f) + 2πft] df

obtaining

    s(t) = A(f_t) [2π/|ψ''(f_t)|]^{1/2} exp j{2πf_t t + ψ(f_t) + sgn[ψ''(f_t)] π/4}    (3.2.30)

where the frequency f_t is defined for any specified t of interest by

    ψ'(f_t) = −2πt    (3.2.31)

For the large bandwidth time product quadratic phase function Eqn. (3.2.29), the expression Eqn. (3.2.30) reduces to the linear FM, Eqn. (3.2.25), while Eqn. (3.2.31) yields the locking relationship f_t = f_c + Kt.

The above relations are approximate. They will be more or less accurate depending on the specific nature of the signal s(t) in question, the more so the larger the bandwidth time product Bτ_p of the waveform. For a signal s(t) with both a smooth envelope a(t) and a smooth spectrum amplitude A(f), according to Cook and Bernfeld (1967, p. 49), a bandwidth time product nominally of 10 suffices to yield accurate ψ(f) and φ(t) using respectively the approximations Eqns. (3.2.24) and (3.2.30). If one of a(t) or A(f) is discontinuous, Bτ_p needs to be 20 or 30, while if both are discontinuous, Bτ_p needs to be 100. This latter case applies to the nominally time limited, band limited linear FM waveform.

Compression Processing

For the flat band limited spectrum of Eqn. (3.2.29), the matched filter time output for a target at range R is then

    g(t) = (∫_{−f_c−B/2}^{−f_c+B/2} + ∫_{f_c−B/2}^{f_c+B/2}) exp(−j4πfR/c) exp(j2πft) df
         = 2B cos[2πf_c(t − 2R/c)] {sin[πB(t − 2R/c)]}/[πB(t − 2R/c)]    (3.2.32)

The envelope of this is a pulse of form (sin t)/t with time width nominally 1/B, centered at the time delay τ = 2R/c corresponding to the target range R. This is just the result Eqn. (3.1.13).

On the other hand, even if the input spectrum A(f) is not rectangular, the (sin t)/t form may still be a good approximation to the filter time output. Again according to Cook and Bernfeld (1967, p. 49), provided s(t) is a linear FM with Bτ_p > 20, and provided the proper matched filter is used for the S(f) that is the actual spectrum (not having unit amplitude A(f) if Bτ_p is considerably smaller than 100), the matched filter output envelope will have the (sin t)/t form. Thus, although in practice time bandwidth products much larger than 20 (or even 100) are used, even for products as small as 20 the resolution result δt = 1/B is valid, although not necessarily the linkage expression Eqn. (3.2.26). The implications of such results are important in considering SAR azimuth compression algorithms.

In contrast, for a transmitted pulse which is a simple burst of carrier, s(t) = exp(jω_c t), |t| < τ_p/2, the matched filter output for a target at range R will be just the expression Eqn. (3.1.12) delayed by 2R/c; for t ≤ 2R/c, say,

    g(t) = ∫_{2R/c−τ_p/2}^{t+τ_p/2} s*(x − t) s(x − 2R/c) dx
         = ∫_{2R/c−τ_p/2}^{t+τ_p/2} exp[−jω_c(x − t)] exp[jω_c(x − 2R/c)] dx
         = (τ_p − |t − 2R/c|) exp[jω_c(t − 2R/c)],    |t − 2R/c| ≤ τ_p    (3.2.33)
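The compression to width ~1/B at delay 2R/c described by Eqn. (3.2.32) can be checked by correlating a sampled chirp against its delayed echo; the radar parameters below are illustrative only (Bτ_p = 200).

```python
import numpy as np

c = 3e8
fs = 200e6                     # complex sample rate (Hz)
tau_p = 10e-6                  # pulse length (s)
B = 20e6                       # chirp bandwidth (Hz); K = B/tau_p
K = B / tau_p
R = 1500.0                     # target range (m) -> delay 2R/c = 10 us

t = np.arange(0, tau_p, 1 / fs)
s = np.exp(1j * np.pi * K * (t - tau_p / 2) ** 2)   # baseband linear FM

delay = 2 * R / c
rx = np.zeros(4096, complex)
i0 = int(round(delay * fs))
rx[i0:i0 + len(s)] = s                              # echo delayed by 2R/c

# Matched filtering: correlate the received record with the pulse replica
g = np.correlate(rx, s, mode='full')                # conjugates 2nd argument
t_axis = (np.arange(len(g)) - (len(s) - 1)) / fs    # lag time of each sample

peak = np.argmax(np.abs(g))
print(t_axis[peak] * 1e6)        # ~10 us: the peak sits at delay 2R/c

# Half-power width of the compressed pulse against the nominal 1/B
env = np.abs(g) / np.abs(g[peak])
above = np.where(env > 1 / np.sqrt(2))[0]
width = (above[-1] - above[0]) / fs
print(width * B)                 # ~0.9 (the sinc -3 dB width is 0.886/B)
```

The measured half-power width is ~0.886/B, the −3 dB width of the (sin t)/t envelope, even though the pulse itself is 200 times longer than 1/B.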
For a pair of linear FM pulses, the corresponding correlation, Eqn. (3.2.34), may be visualized as the replica s*(x − t) sliding along the delayed signal s(x − τ) as t varies (Fig. 3.4). For each pair (t, τ), the product function in the integrand of Eqn. (3.2.34) contains sum and difference frequencies (with x as the "time" variable). The sum frequency term will integrate to zero, as will the difference frequency term, unless the difference frequency Δf = K(t − τ) is near zero, that is, unless t ≈ τ. The larger is K, the closer must t be to τ in order to obtain a non-zero value for the integral Eqn. (3.2.34).

Figure 3.4 Correlation of linear FM waveforms. Average product peaks near zero value of difference frequency Δf = K(t − τ).

3.2.3 Time Sidelobes and Filter Weighting

Let us consider again the resolution available from a transmitted pulse of spectrum

    S(f) = A(f) exp[jψ(f)]

over a band B. This is the Green's function of the radar system, except for a constant amplitude factor, since it is the system response to a point target. The inverse Green's function, a compression filter, is

    H(f) = 1/S(f),    S(f) ≠ 0    (3.2.35)

since then H(f)S(f) = 1 in the signal band. (Only in the case A(f) = 1, or at least A(f) = const, such as for example the linear FM with large bandwidth time product, are the matched filter S*(f) and the compression filter 1/S(f) the same.)

In the case of a transmitted signal s(t) of finite bandwidth, so that the qualification in Eqn. (3.2.35) has effect, the problem of finding the compression filter H(f) is "ill-posed" (Tikhonov and Arsenin, 1977), in the sense that the conditions of the problem do not lead us to a unique solution. (Since the signal s(t) has zero frequency content outside the band |f − f_c| < B/2, we can add any out of band components to H(f) and not change the filter output G(f) = H(f)S(f).) The problem is "regularized" (made to have a unique solution) by adding some extra conditions solely for that purpose. If we choose to add the condition that the filter H(f) have zero spectral amplitude outside the signal band (which corresponds to the "principal solution" of such problems (Bracewell and Roberts, 1954)), we obtain a compressed output as in Eqn. (3.2.32) above (for R = 0, say):

    g(t) = 2B cos(2πf_c t) {sin(πBt)}/(πBt)    (3.2.36)

We have in fact always done that without comment. Radar receivers are always so constructed.

For the linear FM waveform with high bandwidth time product, the matched filter Eqn. (3.2.35) is the appropriate compression processor if we use the principal solution. We thus reconstruct the complex reflectivity profile ζ(R) in view of the radar as in Eqn. (3.2.16) with the best resolution attainable by linear processing. (The adjoining of out of band components to the filter output is a nonlinear process, since zero filter input does not then correspond to zero output.) However, with that reconstruction of ζ(R) we have sidelobes to contend with, just as in the case of a finite antenna aperture (Section 2.2.2). The first sidelobes of g(t) of Eqn. (3.2.36) are only 13 dB lower than the main lobe. Thus, for example, a target 13 dB stronger than an adjacent target one resolution cell away will mask its weaker neighbor.

These time (or range) sidelobes in the ambiguity function Eqn. (3.2.36) must be dealt with to obtain a properly functioning system. Cook and Bernfeld (1967) discuss the problem in general in the context of signals with large Bτ_p products. Suppose we maintain the desirable constant power requirement that |s(t)| = a(t) = 1, and vary |S(f)| (analogous to antenna illumination) to attempt to improve the ambiguity diagram Eqn. (3.1.11). Assume we will always use a matched filter H(f) = S*(f) whatever S(f) may be (thereby deviating from the true compression filter H(f) = 1/S(f) over the band). Then some improvement is possible (Cook and Bernfeld, 1967, Chapter 3), but only at the expense of needing to generate rather inconvenient phase behaviors φ(t) for the transmitted signal s(t) = exp[jφ(t)].
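The 13 dB figure is the first time sidelobe of the (sin t)/t envelope of Eqn. (3.2.36), easy to confirm numerically (np.sinc(x) evaluates sin(πx)/(πx); the grid here is arbitrary):

```python
import numpy as np

# Envelope of the principal-solution output for a flat band of width B:
# g(t) = sin(pi*B*t)/(pi*B*t). Measure its first time sidelobe.
B = 1.0
t = np.linspace(-5 / B, 5 / B, 100001)
env = np.abs(np.sinc(B * t))

# The first sidelobe lies between the first two nulls, t in (1/B, 2/B)
mask = (t > 1 / B) & (t < 2 / B)
sl = 20 * np.log10(env[mask].max())
print(sl)   # about -13.3 dB below the mainlobe peak
```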
The usual way to deal with undesirably high time sidelobe levels in the matched filter output is to unmatch the filter. There is thereby an inevitable reduction in output SNR, and a consequent decrease in detection performance which, although usually not severe, must be evaluated. Beyond that we deal with a trade-off between desirable improvement in sidelobe structure, and consequent undesirable, but usually tolerable, broadening of the mainlobe of the filter output function (degradation of resolution). Again, Cook and Bernfeld (1967, Chapter 7) have given a thorough discussion in the context of the radar matched filter receiver, although the general subject is discussed ubiquitously. Farnett et al. (1970) give a convenient summary, while Harris (1978) has given a particularly comprehensive discussion of the available alternatives in the case of time sampled data. Here we will follow only one line of thought, leading to some filters commonly used in SAR processing.

Let us again assume that the transmitted pulse is the linear FM with constant envelope, Eqn. (3.2.25), and take as the (now mismatched) compression filter

    H(f) = W(f) exp[−jψ(f)]

where W(f) is a real function to be found. Assume W(f) to be symmetric around the band center f_c. We can then formulate the optimization problem of minimizing the mainlobe width of the filter output |g(t)|, for a specified maximum sidelobe level, where G(f) = H(f)S(f) = W(f). The answer is (Cook and Bernfeld, 1967, p. 178) the continuous form of the Dolph (1946) antenna current distribution function. Over the band, writing the frequency relative to band center, this is:

    W(f + f_c) = πA I_1(z)/(Bz),    z = πA[1 − (2f/B)^2]^{1/2}    (3.2.37)

where I_1 is the modified Bessel function of first kind and order 1. The parameter A is set by the requested maximum sidelobe level a such that the maximum (voltage) sidelobe is a factor

    a = 1/cosh(πA)    (3.2.38)

below the mainlobe peak. (For example, if we demand that the largest sidelobe be 40 dB below the peak of the mainlobe, then a = 0.01 and A = 1.69.) In addition, at the band edges, f = ±B/2, W(f) has impulses of strength 1/[B cosh(πA)], which fact makes this weighting inconvenient to realize. It turns out that not only is the maximum sidelobe level no larger than the requested bound, but that all the sidelobes attain that bound; hence the distribution is called also the Dolph-Chebyshev weighting.

A flexible and convenient approximation to the Dolph weighting function Eqn. (3.2.37) is the Taylor weighting function (Cook and Bernfeld, 1967, p. 180; Taylor, 1955). Again relative to the center of the band this is:

    W(f + f_c) = 1 + 2 Σ_{m=1}^{n−1} F(m, A, n) cos(2πmf/B)    (3.2.39)

where the number of terms n determines the goodness of the approximation. The numbers F are

    F(m, A, n) = [(−1)^{m+1}/2] Π_{i=1}^{n−1} [1 − m^2/(σ^2(A^2 + (i − 0.5)^2))] / Π_{i=1, i≠m}^{n−1} [1 − m^2/i^2]    (3.2.40)

where A is as in Eqn. (3.2.38) (determined from the requested sidelobe level) and

    σ = n[A^2 + (n − 0.5)^2]^{−1/2}    (3.2.41)

This latter happens also to be the factor by which the Taylor mainlobe is broadened beyond the Dolph mainlobe width. We want σ to be not too much larger than unity, so that to some extent n (quality of approximation) and A (sidelobe level) are coupled. Reasonable nominal values are of the order of n ≈ 3 for 25 dB sidelobes and n ≈ 6 for 40 dB sidelobes.

The Taylor weighting function Eqn. (3.2.40) can be realized with reasonable convenience, either directly as a filter in the frequency domain, or in the time domain. Time domain realization makes use of the fact that

    X(f) cos(2πmf/B)  <->  [x(t + m/B) + x(t − m/B)]/2

so that the cosine terms in the Taylor filter Eqn. (3.2.39) can be realized by a linear combination of delayed and advanced (by integral multiples of 1/B) replicas of the filter input (so-called tapped delay line realization).

For typical sidelobe levels, the numbers F(m) of Eqn. (3.2.40) in the Taylor filter approximation to the Dolph filter become small rather rapidly as m increases towards n. For example, for n = 6 and 40 dB sidelobes, the filter coefficients are: F(1, ..., 5) = 0.3891, −0.945 x 10^-2, 0.488 x 10^-2, −0.161 x 10^-2, 0.035 x 10^-2 (and incidentally σ = 1.043, so that the mainlobe broadens only by 4.3%).
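Eqns. (3.2.38)-(3.2.41) are simple to evaluate; the sketch below (the function name is ours) reproduces the quoted (n = 6, 40 dB) coefficients from the standard Taylor line-source formulas.

```python
import numpy as np

def taylor_coeffs(sidelobe_db, nbar):
    """F(m, A, n) of Eqn. (3.2.40) and the broadening factor sigma
    of Eqn. (3.2.41) for a requested sidelobe level in dB."""
    ratio = 10 ** (sidelobe_db / 20)           # voltage sidelobe ratio 1/a
    A = np.arccosh(ratio) / np.pi              # from Eqn. (3.2.38)
    sigma2 = nbar**2 / (A**2 + (nbar - 0.5)**2)
    F = []
    for m in range(1, nbar):
        num = 1.0
        for i in range(1, nbar):
            num *= 1 - m**2 / (sigma2 * (A**2 + (i - 0.5)**2))
        den = 1.0
        for i in range(1, nbar):
            if i != m:
                den *= 1 - m**2 / i**2
        F.append((-1) ** (m + 1) / 2 * num / den)
    return np.array(F), np.sqrt(sigma2)

F, sigma = taylor_coeffs(40.0, 6)
print(np.round(F, 5))   # [ 0.3891 -0.00946  0.00488 -0.00161  0.00035]
print(round(sigma, 3))  # 1.043 -> mainlobe broadened by 4.3%
```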
This suggests dropping higher order terms in Eqn. (3.2.39), without changing n, which would involve recalculating the coefficients. If this is done in the (6, −40 dB) case, for example, there results

    W(f) = 1 + 0.78 cos(2πf/B)

or, normalizing,

    W(f) = 0.56 + 0.44 cos(2πf/B)

This is very near the Hann weighting function

    W(f) = 0.5 + 0.5 cos(2πf/B)

or the Hamming function

    W(f) = 0.54 + 0.46 cos(2πf/B)
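A quick numerical comparison of these two-term weightings, applied across the band and transformed to the time domain, shows the sidelobe improvement over the unweighted (rectangular) case; the grid sizes below are arbitrary.

```python
import numpy as np

def peak_sidelobe_db(a0, a1, n=512, pad=1 << 16):
    """Peak time sidelobe when the band |f - fc| < B/2 is weighted by
    W(f) = a0 + a1*cos(2*pi*f/B), evaluated via a zero-padded FFT."""
    f = (np.arange(n) - n / 2 + 0.5) / n       # f/B over (-1/2, 1/2)
    W = a0 + a1 * np.cos(2 * np.pi * f)
    g = np.abs(np.fft.fft(W, pad))
    g /= g.max()                               # mainlobe peak at index 0
    i = 1
    while g[i] <= g[i - 1]:                    # walk out to the first null
        i += 1
    return 20 * np.log10(g[i:pad // 2].max())  # largest remaining sidelobe

r_rect = peak_sidelobe_db(1.0, 0.0)
r_hann = peak_sidelobe_db(0.5, 0.5)
r_hamm = peak_sidelobe_db(0.54, 0.46)
print(r_rect)   # ~ -13 dB, unweighted
print(r_hann)   # ~ -31 dB, Hann
print(r_hamm)   # ~ -43 dB, Hamming
```

The Hamming coefficients buy roughly 30 dB of sidelobe suppression over the rectangular band, at the cost of the mainlobe broadening discussed above.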
REFERENCES

Bracewell, R. N. and J. A. Roberts (1954). "Aerial smoothing in radio astronomy," Austral. J. Phys., 7, pp. 615-640.
Cook, C. E. and M. Bernfeld (1967). Radar Signals, Academic Press, New York.
Dolph, C. L. (1946). "A current distribution for broadside arrays which optimizes the relationship between beam width and side-lobe level," Proc. IRE, 34 (June), pp. 335-348.
Farnett, E. C., T. B. Howard and G. H. Stevens (1970). "Pulse-compression radar," Chapter 20 in Radar Handbook (Skolnik, M. I., ed.), McGraw-Hill, New York.
Gradshteyn, I. S. and I. M. Ryzhik (1965). Tables of Integrals, Series, and Products, Academic Press, New York.
Harris, F. J. (1978). "On the use of windows for harmonic analysis with the discrete Fourier transform," Proc. IEEE, 66(1), pp. 51-83.
North, D. O. (1963). "An analysis of the factors which determine signal/noise discrimination in pulsed-carrier systems," Proc. IEEE, 51(7), pp. 1016-1027 (reprint of RCA Technical Report PTR-6C, June 25, 1943).
Rihaczek, A. W. (1969). Principles of High Resolution Radar, McGraw-Hill, New York (reprinted by Peninsula Publ., Los Altos, CA, 1985).
Skolnik, M. I. (1980). Introduction to Radar Systems, McGraw-Hill, New York.
Taylor, T. T. (1955). "Design of line-source antennas for narrow beamwidth and low side lobes," IRE Trans. Ant. and Prop., AP-3(1), pp. 16-28.
Tikhonov, A. N. and V. Y. Arsenin (1977). Solutions of Ill-Posed Problems, Wiley, New York.
Ulaby, F. T., R. K. Moore and A. K. Fung (1981). Microwave Remote Sensing, Vol. 1, Addison-Wesley, Reading, MA.
Ulaby, F. T., R. K. Moore and A. K. Fung (1982). Microwave Remote Sensing, Vol. 2, Addison-Wesley, Reading, MA.
Wehner, D. R. (1987). High Resolution Radar, Artech House, Norwood, MA.
Whalen, A. D. (1971). Detection of Signals in Noise, Academic Press, New York.
Woodward, P. M. (1953). Probability and Information Theory, with Applications to Radar, McGraw-Hill, New York.
CHAPTER 4

IMAGING AND THE RECTANGULAR ALGORITHM

4.1 INTRODUCTION AND OVERVIEW OF THE IMAGING ALGORITHM

... introduce the rectangular (range Doppler) coordinate system, and describe the corresponding signals received by a SAR, assuming a "chirped" transmitter waveform. Range migration of the received signals over the many pulses needed to carry out SAR processing is described in detail. The difficulty of dealing with range migration has led to various ways in which the correlation operations of the rectangular algorithm have been realized, and we will distinguish among those algorithms from that point of view. In this chapter we will describe four of the methods which have been used. In Chapter 10 we will discuss one more, deramp processing, which has been used less commonly in remote sensing work, but which is nonetheless of importance.

The algorithms discussed in this chapter realize range migration correction by interpolation operations on a rectangular grid of data, in either the time or frequency domains. The frequency domain realizations have been developed mainly by the Jet Propulsion Laboratory of NASA and by MacDonald, Dettwiler and Associates of Canada. A time domain SAR compression algorithm, which operates without using fast convolution in the azimuth coordinate, has been developed by the British RAE. In Chapter 10 we will discuss the polar processing algorithm, which has its heritage in the aircraft SAR systems which have been under steady development since the 1950s.
over the (usually small) change of aspect angle during the time that any particular point is illuminated (the time extent S of the synthetic aperture), and constant in time. That may often be the case. Otherwise, the image to be derived from the data will be a weighted combination of the reflectivities ζ(R') as observed from some range of positions R at varying times.

The two-dimensional inverse Green's function h^{−1}(R_0|R), corresponding to the Green's function (impulse response) h(R|R'), is defined by Eqn. (4.1.2). From Eqn. (4.1.3), it is evident that the image formation process is one of correlation of the data v_r(R) with the inverse Green's function. It is further clear from Eqn. (4.1.2) that the inverse Green's function can be described operationally in terms of whatever correlation operations will compress the system unit impulse response h(R|R_0) into the image of an impulse. In developing an image formation algorithm, we therefore first need to determine what the system impulse response is, working from the known system properties. We then must specify the correlation operations necessary to convert the impulse response into an impulse. Applying exactly those correlation operations to the full data set v_r(R) then produces the complex image ζ(R_0).

4.1.1 Data Coordinates and the System Impulse Response

In order to describe the system impulse response h(R|R_0), we need to write down the data set v_r(R) resulting from an isolated point target. The scene reflectivity is therefore taken as

    ζ(R') = δ(R' − R_0)

as in Eqn. (3.2.12).

Figure 4.1 A terrain point is located by the radar position x_c when the point is in beam center, and the corresponding range R_c.
Data Coordinates

As it moves along its path, the radar transmits narrowband pulses, typically the linear FM signal. The multipulse real transmitted signal is then

    p(t) = Σ_n s(t − nT_p)    (4.1.5)

where T_p is the pulse repetition period and the sum includes all pulses for which the target is in the radar beam. Note that we assume synchronization of the detailed pulse waveform s(t) with the repetition period. That is, the radar is time coherent. Since SAR is based on Doppler shift, it is essential that pulse-to-pulse phase changes be recoverable from the radar signal, requiring coherent operation.

At any arbitrary time t, the radar is at some slant range R(t) from the target point with image coordinates (x_c, R_c) (Fig. 4.2). The real received signal v_r(t) at that instant has the value which the transmitted signal had at some time τ earlier, scaled by a factor which is locally constant. The time τ is the time of propagation of the instantaneous pulse wavefront at time t − τ out to the target, a distance R(t − τ), and back to the receiver at time t, a distance R(t). Thus

    τ = [R(t − τ) + R(t)]/c ≈ [2R(t) − Ṙ(t)τ]/c

so that

    τ = 2R(t)/[c + Ṙ(t)] ≈ 2R(t)/c

since c is in all cases very much larger than Ṙ(t). From Eqn. (4.1.5), the received pulse train Eqn. (4.1.6) is then

    v_r(t) = Σ_n a_n s[t − nT_p − 2R(t)/c]    (4.1.7)

where a_n is an amplitude scale factor appropriate to pulse n. At most one of the terms in the sum Eqn. (4.1.7) is nonzero for any particular time t.

Figure 4.2 The radar views a terrain point at (x_c, R_c) from positions (x, R).

The impulse response Eqn. (4.1.7) can be written formally as a function of two variables, time t' = t − nT_p within pulse number n, and the time nT_p of transmission of that pulse. That is, the received data samples at s = nT_p are a function of two variables.

The operations required for image formation are those of correlation of the radar data with the impulse response. We therefore have to do with a two dimensional correlator. However, the range R(t) varies over the time of each pulse for which the point target is in view. The received pulses s[t − nT_p − 2R(t)/c] are therefore distorted versions of the transmitted pulses s(t − nT_p). The distortion can be different for each pulse of the received pulse train, since the local functional form of the time varying range R(t) depends on the differing geometry along the radar trajectory. Were it necessary to account for these effects in processing, the two dimensional correlation would not decouple into a sequence of two independent one dimensional procedures. It is therefore important to examine the consequences of this pulse dependent distortion. We follow the development of Barber (1985), taking the transmitted pulse to be the linear FM

    s(t) = exp[j2π(f_c t + Kt^2/2)]    (4.1.9)
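The quality of the approximation τ ≈ 2R(t)/c can be checked by solving the implicit round-trip relation directly; the geometry values below are Seasat-like but are assumed here only for illustration.

```python
# Solve tau = [R(t - tau) + R(t)]/c by fixed point iteration and compare
# with the usual approximation tau ~ 2R(t)/c.
c = 3e8
V = 7500.0            # platform speed (m/s), assumed
Rc = 850e3            # range at beam center (m), assumed

def R(t):             # hyperbolic range history, beam center at t = 0
    return (Rc**2 + (V * t)**2) ** 0.5

t = 0.3               # some instant while the target is in view (s)
tau = 2 * R(t) / c    # initial guess
for _ in range(5):    # contraction is enormous; a few passes suffice
    tau = (R(t - tau) + R(t)) / c

approx = 2 * R(t) / c
print(abs(tau - approx))   # sub-nanosecond: the Rdot*tau/c correction
```

The discrepancy is of order Ṙτ/c, a fraction of a nanosecond here, which justifies dropping it in Eqn. (4.1.7).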
The matched filter output for a single pulse is

    g(t) = ∫_{−∞}^{∞} s*(t' − t) v_r(t') dt'    (4.1.10)

where the input is actually one pulse of the (positive frequency part of the) distorted return Eqn. (4.1.7). Substituting Eqn. (4.1.7) and Eqn. (4.1.9) into the integral of Eqn. (4.1.10), and being careful of limits, for t ≤ t_i for example we obtain the matched filter output explicitly. Provided that

    |t − 2R_i/c|/τ_p ≪ 1    (4.1.13)

the output takes the compressed form Eqn. (4.1.14), and if the bandwidth time product satisfies the condition Eqn. (4.1.15), the relation Eqn. (4.1.13) will follow. Eqn. (4.1.15) indicates that a modest bandwidth time product will suffice for the validity of the approximation Eqn. (4.1.14). This is in correspondence with the discussion of Section 3.2.2, in which it was noted that, for Bτ_p > 20, the matched filter output for the linear FM would have the form Eqn. (4.1.14).

Specifically, the filter input signal Eqn. (4.1.7) with range Eqn. (4.1.8) will have a phase variation whose instantaneous frequency, Eqn. (4.1.16), differs from the nominal variation

    f = f_0 + K(t − 2R_i/c)

by an amount which depends slightly on time within the pulse; the result is a range shift of the compressed pulse, Eqn. (4.1.17). When the point target of Fig. 4.3 is viewed from the forward and rear edges of the real radar beam, nominally at θ = ±θ_H/2, the range shift Eqn. (4.1.17) will be opposite in direction. The difference represents a distortion, which should be much less than the range resolution interval, which is

    δR = c/2B = c/(2|K|τ_p)

Comparing the difference in range shift to this latter, and recalling the nominal beamwidth and azimuth resolution relations θ_H = λ/L_a and δx = L_a/2, yields a criterion

    2|ΔR|_max/δR = 2V_st θ_H f_c τ_p/c = 2V_st τ_p/L_a = V_st τ_p/δx ≪ 1    (4.1.18)

This is well satisfied for current space systems.

While frequency discrepancy between the returned pulse and the filter waveform gives rise to a range shift of the image point, a mismatch in frequency rate results in defocussing. From Eqn. (4.1.16), the frequency rate of the received pulse differs from the frequency rate K of the filter waveform by an amount which is approximately the product of the factor 2V_st/c and a second factor, involving R_i R̈_i + Ṙ_i^2 = V_st^2, which is not large. The factor 2V_st/c is extremely small, and we conclude that any defocussing due to distortion of the received pulse is negligible.

It might be remarked that we have only considered returned pulse distortion effects related to the geometry of the situation. There may also be pulse distortion due to the frequency dependent propagation speed (dispersion) of the earth's ionosphere. Polarization change due to the earth's magnetic field may also be noticeable. We will consider these effects in Ch. 7. Brookner (1977) has given a summary of the effects, with useful charts of sample calculations. In a study specifically concerning SAR, Quegan and Lamont (1986) indicate that the effect on image focus can be severe for low frequency (L-band) and an aircraft system operating at long range, but is less marked for a spaceborne system. The effects lessen at higher frequencies.
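Putting representative spaceborne numbers into the criterion Eqn. (4.1.18) shows how comfortably it is met; the values below are Seasat-like and assumed here for illustration only.

```python
# Distortion criterion Eqn. (4.1.18): 2*V_st*tau_p/L_a << 1.
V_st = 7500.0      # platform speed (m/s), assumed
tau_p = 33.8e-6    # transmitted pulse length (s), assumed
L_a = 10.7         # antenna length (m), assumed

criterion = 2 * V_st * tau_p / L_a
print(criterion)   # ~0.05, well below 1: pulse distortion is negligible
```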
With the approximation of constant range from radar to target point over the width of a transmitted pulse, the received signal Eqn. (4.1.7) from a point target is

    v_r(t) = Σ_{n=−∞}^{∞} a_n s(t − nT_p − 2R_n/c)    (4.1.19)

where R_n is the range to target during the time of reception of the nth pulse: R_n = R(t_n), with t_n the center (say) of the time interval over which pulse n is received.

We now segment the received signal Eqn. (4.1.19) (the voltage out of the radar receiver) of a single scalar variable (time) into a two-dimensional data set. This is convenient to do because the formalism of two dimensional Green's function analysis can be segmented now into two sequential one-dimensional problems. We define specifically

    v_r(nT_p, t) = v_r(t),    nT_p ≤ t < (n + 1)T_p    (4.1.20)

That is, v_r(nT_p, t) is the received signal from the time of transmission of pulse n until the time of transmission of pulse n + 1. (In fact more than one pulse may be "in flight" from the radar to the target simultaneously, in which case some integral number of pulse periods intervenes between transmission of pulse n and the time origin of the corresponding received signal.) If we define a "slow" time variable s as the time of flight of the vehicle along its track, in contrast with the "fast" time variable t of the radar signal voltage, then v_r(nT_p, t) is a function v_r(s, t) sampled in slow time s at the pulse repetition frequency. Using the transformations R = c(t − nT_p)/2 and x = V_st s, we will also write the data set as v_r(x, R) when convenient.

The (slow time sampled) two-dimensional Green's function of the system is now seen to be that sketched in Fig. 4.4. This is an array of (fast) time delayed versions of the transmitted pulse, with the delays τ_n = 2R_n/c depending on target position (x_c, R_c) and radar position as determined by the geometry of the problem. The Green's function is inherently sampled in slow time by the pulsed radar, and will additionally often be sampled in fast time for digital processing.

4.1.2 Imaging Algorithm Overview

To design the imaging algorithm, we need to describe operationally how to "compress" the system response to a point target, shown in Fig. 4.4, back into a point.

Figure 4.4 Two-dimensional Green's function of radar system sampled by PRF. (The sketch shows the pulse replicas v_r(s_n, t − τ_n) at slow times s_{n−1}, s_n, s_{n+1} along the delay locus τ_n = 2R_n/c, over the synthetic aperture extent S.)

The point target response Fig. 4.4 is dispersed in fast time by the structure of the transmitted pulse, and in slow time by the multiple (perhaps thousands of) pulses which reach the target as the radar travels past it. Ideally, we would like the compressed signal to be a point, as was the target. In practice, the finite bandwidth of the transmitter and the finite time during which the target is in view limit us to a compressed version of the target of nonzero width in the two dimensions of the image. As discussed in connection with range processing in Section 3.2.3, we then content ourselves with the principal solution of the problem. Roughly speaking, the ideal point target (impulse function) has infinite bandwidth in slow and fast time. The physical radar has finite bandwidth and obliterates all but a finite band of target return frequencies. Then, by linear processing of the radar observables, we can produce only a finite bandwidth (smeared image) approximation to the observed point target.

In concept, the image formation procedure is straightforward. It is exactly that operational procedure which compresses to a point the radar response to a point target. Assuming a point target with coordinates (x_c, R_c) (Fig. 4.1), let us now describe the procedure. We will take advantage of the possibility of segmenting the two-dimensional correlation Eqn. (4.1.3) into a sequence of two one-dimensional correlations.
To design the imaging algorithm, we need to describe operationally how to
"compress" the system response to a point target, shown in Fig. 4.4, back into Range Processing
a point. Any such procedure will approximately attain the result of Eqn. (4.1.2 ), The received signal v,(nTP, t) from each transmitted pulse s(t) is first passed
and will thereby constitute an operational description of the inverse Green's through the matched filter with impulse response s*( -t), or, equivalently,
function of the radar. correlated over time t' with the replica s*(t' - t). Dropping a scale factor i-P,
the positive frequency portion of the result, for pulse n, is the filter or correlator output Eqn. (4.1.14)

g_n(t) = exp[j2πf_c(t − 2R_n/c)][sin(u)]/u,   u = πB_R(t − 2R_n/c)    (4.1.21)

Here R_n is the range from the radar at time of transmission of pulse n to the terrain point with coordinates (x_c, R_c).

The carrier structure of the signal Eqn. (4.1.21) is stripped away by the linear operation of complex demodulation, which amounts to a left shift by f_c in the frequency domain, to obtain the complex low pass signal

b_n(t) = exp(−j4πR_n/λ)[sin(u)]/u    (4.1.22)

(Removal of the carrier structure of Eqn. (4.1.21) by the nonlinear operation of average power computation would destroy the crucial phase term 4πR_n/λ, the origin of the Doppler shift and the SAR effect.) Alternatively, the signal v_r(nT_p, t) can first be basebanded, and the corresponding filter used to obtain Eqn. (4.1.22) directly.

The time of occurrence of the maximum of |b_n(t)| is t_n = 2R_n/c. Reading off the value of b_n(t) at that particular time yields the complex number

b_n = exp(−j4πR_n/λ)    (4.1.23)

This procedure is repeated for each pulse for which the target was effectively in view of the radar. Collecting together all the values Eqn. (4.1.23), we can consider them as samples at times s_n of a function of slow time s

g(s|x_c, R_c) = exp[−j4πR(s)/λ]    (4.1.24)

where R_n = R(s_n).

The locus in the (x, R) plane of values of the function Eqn. (4.1.24) is a one-dimensional path (Fig. 4.5). The radar returns, originally dispersed in two dimensions, have now been compressed to a one-dimensional space. The remaining task is to compress this path into a point at (x_c, R_c), the original target location. The fact that range to target could be considered constant during the time of one pulsewidth, as discussed in Section 4.1.1, has allowed the general two-dimensional compression problem to be decoupled into a sequence of two one-dimensional compression operations, one in fast time and one in slow time. Since nominally slow time measures a coordinate (along-track distance) orthogonal to fast time (range perpendicular to vehicle track), this processing sequence is called the rectangular algorithm.

Figure 4.5 Locus of range compressed returns from point target in plane of slow and fast times (s, t).

Azimuth Processing

The signal g(s|x_c, R_c) of Eqn. (4.1.24), which we want to compress as the second operation of the rectangular algorithm, is in fact the Doppler signal received from the point target as the radar moves by. Hence this second compression operation is the "Doppler" part of the range-Doppler processing algorithm.

The waveform in slow time s of the azimuth signal g(s|x_c, R_c) of Eqn. (4.1.24) is not necessarily simple, since R(s) is a nonlinear function of slow time s, the form of which depends on the target parameters (x_c, R_c). Thus, while the slow time ("azimuth") compression operation will be a correlation, the correlator waveform will depend in general on which point in the image we are computing. That is, in full generality, to compress the point target function Eqn. (4.1.24) we need to compute the correlation

î(s′|x_c, R_c) = ∫_S g(s|x_c, R_c) h⁻¹(s − s′|x_c, R_c) ds    (4.1.25)

using a separate correlator function h⁻¹ for each point of the image. The "time domain" image formation algorithm described by Barber (1985) implements correlation in just this way. However, there is a considerable gain in computational efficiency if the correlation can be implemented as a matched
filter (Chapter 9). Such implementation of azimuth compression as a matched filter operation, as is commonly done, requires further investigation.

To that end, it is helpful to expand the range function R(s) as a Taylor series around s_c = x_c/V_st, the slow time at which the center of the radar beam crosses the target. (That time is unknown in value, and in fact is just the information we want to derive by azimuth processing, that is, the location along-track of the target.) We have

R(s) = R_c + Ṙ_c(s − s_c) + R̈_c(s − s_c)²/2 + ···    (4.1.26)

In such an expansion, it is often possible to neglect terms of order higher than the quadratic, although the possibility of realizing the correlation expression Eqn. (4.1.25) by matched filtering does not depend on that assumption. Rather, we need to determine that the retained coefficients in the expansion Eqn. (4.1.26) are independent of s_c over the filter span S in slow time. (In Appendix B we give a detailed discussion of the terms in the expansion Eqn. (4.1.26).)

We can identify the leading time derivatives in the expansion Eqn. (4.1.26) in terms of the Doppler center frequency and Doppler rate of the slow time signal Eqn. (4.1.24). The time rate of change of phase φ(s) in the complex exponential is just Doppler (radian) frequency, so that we have

φ̇/2π = f_D = −2Ṙ(s)/λ
φ̈/2π = f_R = −2R̈(s)/λ    (4.1.27)

These yield the leading coefficients in Eqn. (4.1.26) as

Ṙ_c = −(λ/2)f_DC,   R̈_c = −(λ/2)f_R    (4.1.28)

Both of these are functions of s_c and R_c in general, since R(s) contains s_c, R_c as parameters.

Assuming that a quadratic expansion Eqn. (4.1.26) suffices, which is often the case, the Doppler signal Eqn. (4.1.24) becomes

g(s|s_c, R_c) = exp(−j4πR_c/λ) exp{j2π[f_DC(s − s_c) + f_R(s − s_c)²/2]},
|s − s_c| < S/2    (4.1.29)

This is a linear FM wave with center frequency f_DC and frequency rate f_R. As we discuss in Appendix B, the FM parameters f_DC and f_R can depend strongly on R_c, but usually depend only weakly on s_c. The azimuth correlation operation Eqn. (4.1.25) can then be realized approximately using a correlator function (using the leading terms of the expansion Eqn. (4.1.26))

h⁻¹(s|R_c) = exp{j2π(f_DC s + f_R s²/2)},   |s| < S/2    (4.1.30)

in which f_DC, f_R depend only on R_c. Thus the operation can be realized as a fast convolution (matched filter) over slow time for each range R_c of the image. As with linear FM range pulse compression, with a bandwidth time product of 20 or more the correlation operation yields an output Eqn. (4.1.25) whose modulus is a pulse

|î(s′|s_c, R_c)| = S|sin(u)/u|,   u = πf_R S(s′ − s_c)    (4.1.31)

The peak of this pulse occurs at s′ = s_c, the target azimuth location.

Azimuth Resolution

The width of the pulse Eqn. (4.1.31) is nominally

δs = 1/B_D    (4.1.32)

where B_D = |f_R|S is the Doppler bandwidth. The time S is that nominal time for which a point target is effectively in view. It is the SAR "integration time", and is determined by the antenna horizontal beamwidth. The target is therefore located in azimuth with spatial resolution

δx = V_st δs = V_st/B_D    (4.1.33)

where V_st is the speed of the radar platform relative to the target point. For an antenna of physical extent L_a along track, the nominal beamwidth is θ_H = λ/L_a, so that any particular earth point at range R_c is illuminated for a nominal time

S = θ_H R_c/V_st = λR_c/L_a V_st    (4.1.34)

For the geometry of Fig. 4.6, where the radar beam center has a squint angle θ_s ahead of broadside, assuming R_c ≫ |x − x_c| we have

R²(s) = R_c² + V_st²(s − s_c)² − 2R_c V_st(s − s_c) sin θ_s
R(s) ≅ R_c + V_st²(s − s_c)²/2R_c − V_st(s − s_c) sin θ_s

so that

f_DC = −2Ṙ(s_c)/λ = (2V_st/λ) sin θ_s
f_R = −2R̈(s_c)/λ = −2V_st²/λR_c    (4.1.35)
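A minimal numerical sketch of the azimuth compression of Eqns. (4.1.29)-(4.1.31), under the assumption that the quadratic (linear FM) model holds; all parameter values here are illustrative, not those of any particular system:

```python
import numpy as np

# A point target's slow-time signal is a linear FM with Doppler centroid f_DC
# and rate f_R (Eqn. (4.1.29)); correlating with the conjugate reference
# compresses it to a narrow pulse peaked at the target slow time s_c.
prf = 1000.0                 # slow-time sample rate (PRF), Hz (illustrative)
S = 1.0                      # integration time, s (illustrative)
f_DC = 100.0                 # Doppler centroid, Hz (illustrative)
f_R = -400.0                 # Doppler rate, Hz/s (illustrative)
s = np.arange(-S / 2, S / 2, 1.0 / prf)   # slow time about s_c = 0

# Doppler signal for a target at s_c = 0 (constant phase factor dropped)
g = np.exp(1j * 2 * np.pi * (f_DC * s + f_R * s**2 / 2))

# Matched filtering realized as circular correlation via the FFT
out = np.fft.ifft(np.fft.fft(g) * np.conj(np.fft.fft(g)))
peak = int(np.argmax(np.abs(out)))

print(peak)   # 0: the compressed response peaks at zero lag, i.e. at s' = s_c
```

The compressed pulse width is of order 1/B_D with B_D = |f_R|S, which is the content of Eqn. (4.1.32).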
Figure 4.6 Simplified encounter geometry for a radar with a beam center squinted at angle θ_s.

Equations (4.1.32), (4.1.34), and (4.1.35) yield a Doppler bandwidth

B_D = |f_R|S = 2V_st/L_a    (4.1.36)

and a system along-track resolution

δx = V_st/B_D = L_a/2    (4.1.37)

This is in accord with the earlier result Eqn. (1.2.9) obtained by incomplete arguments.

The azimuth bandwidth time product should be large in order that the above relations hold to an adequate approximation. This requires that

B_D S = 2λR_c/L_a² ≫ 1    (4.1.38)

using Eqn. (4.1.33) and Eqn. (4.1.35). The criterion Eqn. (4.1.38) is usually well satisfied.

Unlike the range bandwidth, which is strictly limited to the bandwidth of the transmitted pulse, the Doppler bandwidth is not closely limited to the nominal value Eqn. (4.1.36). This is because the target is actually in view over a wider angle span than the nominal 3 dB beamwidth θ_H, although with reduced response due to the fall off of the antenna pattern outside the nominal beam. On the one hand, in principle this makes possible finer resolution than the value L_a/2 of Eqn. (4.1.37), since the correlator output Eqn. (4.1.25) has time resolution 1/B_D in any event. (This assumes compensation for the antenna pattern in the correlator, that is, the compressor operator must be used.) On the other hand, to use the potentially wider Doppler band requires sampling (at the radar pulse repetition frequency) at a rate somewhat greater than the Doppler band to be processed (Appendix A). Such an increase in PRF may result in range ambiguities.

Correlator Structure

The correlation operation Eqn. (4.1.25) on the azimuth Doppler signal can efficiently be implemented as a matched filter operation for each particular value of R_c, provided the parameters f_DC, f_R are sufficiently independent of s_c over the span S to allow the use of fast convolution. In Appendix B these parameters are discussed in detail, and expressions presented which allow assessment of the situation in any particular case. In practice, considerations of range migration, which we will elaborate on below, also enter into the question. In Seasat-like cases, the approximations involved are usually justified, and azimuth compression is usually implemented as the more efficient matched filter operation, rather than by correlation. In either case, all the factors dealt with in range compression must be considered, and in particular weighting of the filter for sidelobe control is necessary.

The geometry of the encounter between radar and target is closely involved in the azimuth correlator or matched filter structure through the expression for slant range R(s) in terms of target position (x_c, R_c). The structure of the impulse response function in the slow time domain, Eqn. (4.1.24), may or may not be closely approximated as a linear FM in the Doppler domain. If it is not, then terms in the expansion of R(s), Eqn. (4.1.26), of order higher than the quadratic will need to be considered. It is also possible that the azimuth impulse response depends significantly on the location x_c of the target, as well as on R_c. This will be the case only for rather long slow time span S, or for high squint geometries. If such is the case, use of a matched filter may not be possible for azimuth compression, since the filter response function called for would then change over the filter time span. At the least, some tracking of the azimuth filter parameters must be implemented over such an image span (Section 9.3.2).

4.1.3 Range Migration and Depth of Focus

Two further considerations have impact on the way in which azimuth processing is carried out: range migration and depth of focus. Range migration is an inevitable consequence of SAR operation, but may or may not be so severe as to require compensation, depending on system parameters. Azimuth resolution in SAR depends closely on the bandwidth of the Doppler signal, as in Eqn. (4.1.33). Since the phase of the Doppler signal Eqn. (4.1.24) is φ = −4πR(s)/λ, if the Doppler signal is to have a nonzero bandwidth, the range to target must change during the time of view S, and the compressed point target response
necessarily occurs at different ranges for different pulses (Fig. 4.5). This is the phenomenon of range migration. From the expansion Eqn. (4.1.26),

ΔR(s) = R(s) − R_c = Ṙ_c(s − s_c) + R̈_c(s − s_c)²/2    (4.1.39)

The linear part of this is range walk and the quadratic part is range curvature. The total change ΔR = R(s) − R_c is range migration, and might involve higher order terms in the expansion Eqn. (4.1.26), but usually does not.

We can easily determine a rough criterion to indicate whether range migration compensation is needed. Again consider the simple geometry of Fig. 4.6, with a beam squint angle θ_s. Using Eqns. (4.1.35) and (4.1.39), for the maximum values s − s_c = ±S/2 we have

ΔR = ±(SV_st/2) sin θ_s + (SV_st)²/8R_c

|ΔR| ≤ (SV_st/2)(|sin θ_s| + SV_st/4R_c)    (4.1.40)

Using the nominal relations Eqn. (4.1.34) and Eqn. (4.1.37), we have the synthetic aperture length as

SV_st = λR_c/L_a = λR_c/2δx    (4.1.41)

In order that migration not require compensation, the maximum distance Eqn. (4.1.40) should be less than (say) 1/4 of a range resolution cell δR. Thus we have the criterion

(λR_c/δx)(|sin θ_s| + λ/8δx) < δR    (4.1.42)

For an unsquinted beam (θ_s = 0), this criterion that no compensation be needed becomes

(δx/λ)² > R_c/8δR    (4.1.43)

At nominal R_c = 800 km, δR = δx = 7 m, for example, an unsquinted L-band system (λ = 25 cm) requires compensation, while an unsquinted X-band system (λ = 3 cm) does not. On the other hand, with a squint θ_s = 1°, Eqn. (4.1.42) indicates compensation is needed in the latter case also, due to the range walk component of migration. In later sections we will describe how range migration compensation is achieved in various imaging algorithms.

A compression filter matched to the Doppler rate f_R at range R_c is mismatched at range R_c + δR_c by

δf_R/f_R = −δR_c/R_c    (4.1.44)

since, by Eqn. (4.1.35), f_R = −2V_st²/λR_c. This mismatch causes a phase drift between the correlator function Eqn. (4.1.30) and the signal, just as we discussed in Section 1.2 relative to the unfocussed SAR processor. At the Doppler band edges (s = ±S/2), for negligible mismatch we require (somewhat arbitrarily) a phase error in Eqn. (4.1.30) due to mismatch of f_R limited by

2π|δf_R|(S/2)²/2 ≤ π/4    (4.1.45)

Using Eqn. (4.1.44), in terms of the azimuth bandwidth time product Eqn. (4.1.38) this can be written

|δR_c|/R_c < 1/B_D S    (4.1.46)

Using Eqn. (4.1.38) we can also write Eqn. (4.1.45) as

|δR_c| < L_a²/2λ = 2(δx)²/λ    (4.1.47)

Thus with say δR = δx = 7 m, an L-band system (λ = 25 cm) must nominally update f_R each δR_c/δR = 56 range resolution cells, while an X-band (λ = 3 cm) system needs to update only once each 467 cells.

Cook and Bernfeld (1967, Chapter 11) have given a comprehensive analysis of both deterministic and random errors, in the case of general waveforms to be match filtered. For the linear FM waveform, precise results can be calculated (Cook and Bernfeld, 1967, Chapter 6). These can be used as the basis for a quantitative analysis of the effects of range migration and limited depth of focus. In particular, defining the mismatch ratio

y = |δf_R/f_R|    (4.1.48)
the criterion Eqn. (4.1.45) becomes y < 1/B_D S. For values y(B_D S) < 2, where B_D S is the azimuth filter bandwidth time product, there is little loss of resolution due to using a compression filter with chirp constant f_R′ = f_R + δf_R with a linear FM input with constant f_R (Fig. 4.7). With the Seasat value B_D S = 3500, for example, this amounts to a proportional error (δf_R)/f_R ≈ 0.6 × 10⁻³ (0.3 Hz/s at the nominal f_R = 500 Hz/s). From Eqn. (4.1.44), for the nominal R_c = 850 km this corresponds to a mismatch δR_c = (0.6 × 10⁻³)(850 km) = 510 m, so that over the swath in slant range of 35 km about 70 different filters would be needed for no loss of resolution. The depth of field of the processor is 510 m, using this criterion.

In addition to resolution changes, however, compression filter mismatch disturbs the sidelobes of the matched filter output. For example, whereas the filter output with matched f_R has sidelobes down 13 dB (Eqn. (4.1.22)), for even the case of yB_D S = 2.5 (mismatch ratio y = 0.7 × 10⁻³ with bandwidth time product 3500), the first sidelobe is only 7.5 dB down from the peak (Fig. 4.8a). Even mismatches only moderately larger, say yB_D S = 5 (δf_R = 0.7 Hz/s for Seasat), cause serious disruption of the shape of the filter output (Fig. 4.8b).

On the other hand, for sidelobe control the matched filter will always be used with some sort of weighting. The presence of this weighting considerably ameliorates the degrading effects of chirp rate mismatch, since the influence of errors at the ends of the matched filter is decreased by the weighting. For example (Cook and Bernfeld, 1967, p. 158), with weighting designed to produce sidelobes down 40 dB, a mismatch factor y = 8/B_D S raises the first sidelobe only by 4 dB. However, the mainlobe widens by an additional factor 2.3 beyond that produced by the original weighting (Fig. 4.9). For y = 4/B_D S, the widening is by a factor 1.4. This value of y for Seasat corresponds to a mismatch in f_R of 0.6 Hz/s, or a depth of field in R_c of 1 km. The azimuth filter in that case would need to be updated 35 times across the 35 km Seasat slant range swath to stay within the limit.

Figure 4.7 Compressed pulse widening factor due to filter mismatch y = |(δf_R)/f_R| for linear FM in terms of bandwidth time product (from Cook and Bernfeld, 1967).

The determining parameter in such matters, unity for π/4 phase error, is

yB_D S = (|δR_c|/R_c)(2λR_c/L_a²) = λ|δR_c|/2(δx)²

using Eqn. (4.1.38) and Eqn. (4.1.48). Therefore short wavelength systems are more resistant to filter mismatch than long wavelength systems, that is they
Figure 4.9 Effect of filter mismatch y = |(δf_R)/f_R| on compressed pulsewidth for linear FM in terms of bandwidth time product. Case of filter weight function 0.088 + 0.912 cos²[(π/B)(f − f_c)] (from Cook and Bernfeld, 1967).

have better depth of focus. Also, depth of focus degrades quickly as azimuth resolution becomes finer.
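The two bookkeeping checks of this section, the range migration criterion of Eqns. (4.1.42)-(4.1.43) and the depth-of-focus update interval of Eqn. (4.1.47), can be sketched numerically with the example values quoted in the text (R_c = 800 km, δR = δx = 7 m):

```python
import math

def migration_ok(lam, R_c, dx, dR, squint_deg=0.0):
    """Range migration criterion Eqn. (4.1.42): True if no compensation needed."""
    s = math.sin(math.radians(squint_deg))
    return (lam * R_c / dx) * (abs(s) + lam / (8 * dx)) < dR

def focus_update_cells(lam, dx, dR):
    """Depth of focus Eqn. (4.1.47): f_R update interval in range cells."""
    return 2 * dx**2 / lam / dR

R_c, dx, dR = 800e3, 7.0, 7.0
print(migration_ok(0.25, R_c, dx, dR))                  # L-band: False, needs compensation
print(migration_ok(0.03, R_c, dx, dR))                  # X-band: True, no compensation
print(migration_ok(0.03, R_c, dx, dR, squint_deg=1.0))  # X-band, 1 deg squint: False
print(round(focus_update_cells(0.25, dx, dR)))          # 56 cells at L-band
print(round(focus_update_cells(0.03, dx, dR)))          # 467 cells at X-band
```

The outputs reproduce the section's conclusions: the L-band system requires migration compensation while the unsquinted X-band system does not, a 1° squint forces compensation even at X-band, and the Doppler rate must be updated far more often at the longer wavelength.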
4.1.4 An Example
An example of the steps in the image formation procedure is of interest. In
Fig. 4.10 is shown a classic Seasat image of the NASA Goldstone antenna
complex in the Mojave Desert of California. The bright cross to the left of
center is the image of a large antenna dish, pointing towards the radar transmitter
on the satellite. The resulting very high radar reflectivity overloads the satellite
receiver, and results in the visibility of many sidelobes of the imaging algorithm
(the arms of the cross).
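The visible sidelobe arms can be checked against the nominal first sidelobe of the sin(u)/u compression output (Eqn. (4.1.31)), which sits about 13 dB below the peak; a quick numerical sketch:

```python
import numpy as np

# First sidelobe of |sin(u)/u| relative to the u = 0 peak (which equals 1).
u = np.linspace(0.01, 10, 100000)
resp = np.abs(np.sin(u) / u)

# The first sidelobe lies between the first two zeros, at u = pi and u = 2*pi.
mask = (u > np.pi) & (u < 2 * np.pi)
first_sidelobe_db = 20 * np.log10(resp[mask].max())

print(round(first_sidelobe_db, 1))   # -13.3: the familiar -13 dB sinc sidelobe
```

With a target strong enough to saturate the receiver, even sidelobes this far down remain visible in the image, which is why the cross arms appear.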
Fig. 4.11a shows the time waveform of the received signal for one radar pulse. The time interval from about point 2700 to point 4200 is essentially the transmitted linear FM pulse, as reflected from the antenna, and is of width τ_p. The amplitude fully saturates the receiver. In Fig. 4.11b is shown the result of matched filter range processing of the single pulse in Fig. 4.11a. The matched filter output is of width δR, the slant range resolution.

Fig. 4.12 shows the amplitude of the complex number of each cell of the data plane of slow and fast times. The single pulse waveform of Fig. 4.11b is
one horizontal cut through Fig. 4.12a. Since the very bright antenna dominates the scene, its corresponding data are clearly visible in Fig. 4.12a. We are viewing the system impulse response.

The curved trajectory in Fig. 4.12a is the locus of the pulse by pulse range compressed peak responses, the nearly parabolic trajectory Eqn. (4.1.26). Range migration correction is needed, as discussed in Section 4.1.3. The linear (walk) component, the first term in Eqn. (4.1.40), is removed using a procedure discussed in Section 4.2.3 below. The result is the locus of Fig. 4.12b, with only the quadratic (curvature) migration component present.

Figure 4.11 Video offset signal and range compressed result for pulse viewing bright scattering point of Fig. 4.10 (from McDonough et al., 1985).

Without range curvature, each vertical (constant range) cut through the complex data field whose amplitude is shown in Fig. 4.12b would yield a complex function of slow time, a linear FM Doppler signal. However, with curvature each cut passes through two arms of the parabolic locus (except for the single cut at the apex). Each segment of the linear FM waveform traversed by a single range cut has adequate bandwidth time product to lock together slow time and Doppler frequency. Therefore, the two branches of the parabola cut at different slow times map into different Doppler frequency regions. This is evident in the Doppler amplitude spectra shown in Fig. 4.13.

The procedure of range (quadratic) curvature correction assembles the spectra of Fig. 4.13 for the various range cuts into a single Doppler spectrum corresponding to the range of the parabola apex. That spectrum is then processed with the Doppler compression filter to yield a line of complex image in slow time. Fig. 4.14a shows the result of separately compressing four subbands of the available Doppler spectrum to obtain four statistically independent images of the antenna point. Fig. 4.14b finally shows the result of adding the intensities of the four images to obtain a single image line along slow time, at the range of the antenna point ("multilook" processing). Fig. 4.14b is the constant range cut through the antenna point in Fig. 4.10.

Figure 4.14 (a) Four single-look images of Goldstone antenna in Fig. 4.10. (b) One four-look image resulting from images of (a) (from McDonough et al., 1985).

Figure 4.15 Point reflectors on Goldstone (dry) Lake, showing attained resolution and sidelobe structure (from McDonough et al., 1985).
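The four-look procedure just described can be sketched as follows. The subband compression here is a placeholder inverse transform (a real processor would apply the azimuth matched filter of Section 4.1.2 after curvature assembly), so the numbers are purely illustrative:

```python
import numpy as np

def multilook(range_line_spectrum, n_looks=4):
    """Split a Doppler spectrum into subbands, form one look per subband,
    and sum the look intensities (squared moduli)."""
    N = len(range_line_spectrum)
    sub = N // n_looks
    intensity = np.zeros(sub)
    for k in range(n_looks):
        band = range_line_spectrum[k * sub:(k + 1) * sub]   # one Doppler subband
        look = np.fft.ifft(band)            # single-look complex image segment
        intensity += np.abs(look) ** 2      # incoherent sum over looks
    return intensity

rng = np.random.default_rng(0)
spectrum = np.fft.fft(rng.standard_normal(512) + 1j * rng.standard_normal(512))
image_line = multilook(spectrum, n_looks=4)

print(image_line.shape)   # (128,): one four-look intensity line in slow time
```

Because the looks come from disjoint Doppler subbands they are statistically independent, and summing their intensities reduces speckle at the cost of azimuth resolution.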
In Fig. 4.10, the antenna point so dominates the scene that no other structure appears in the image cut Fig. 4.14b. On the other hand, Fig. 4.15 shows a detail of the image of small point reflectors near the large antenna (Fig. 7.16) on the dry bed of Goldstone Lake (a smooth background which appears dark to the radar). The sidelobe structure and mainlobe width of the radar and image formation algorithm response to a point target are plotted as cuts through the rightmost reflector point.

4.2 COMPRESSION PROCESSING

The unique aspect of SAR processing is the compression of the complex range data Eqn. (4.1.24) in the slow (azimuth) time variable s. In order to carry that out, it is necessary that the results of range processing of perhaps thousands of radar pulses be available. Since each radar pulse produces thousands of range time samples, the memory requirements on the computer are considerable. In addition, range data are naturally produced and ordered with range as the minor index, and pulse number as the major index. For azimuth processing, the reverse is needed. This leads to the necessity for some kind of "corner turning" in order to access the data matrix by columns after having stored it by rows. With the availability of increasingly large random access memory, or with the construction of special purpose computing units, these difficulties have tended to recede in importance. However, in the earlier development of SAR processing algorithms for data from space platforms they were a considerable hindrance to achieving high speed image formation.

In Chapter 9 considerable attention is given to the computing systems which have been developed to carry out the SAR imaging process. Here we will be concerned entirely with the signal processing algorithms which act on the data, assuming it is available where and when needed. The variety of approaches taken by various designers is a reflection of the difficulty of the problem. There is no clear-cut "best" way to proceed, although lately the trade-offs among various alternatives have become much clearer.

We begin the discussion with some details common to all processors which use the rectangular algorithm. We then discuss an azimuth compression algorithm which is in concept the most direct of the various algorithms in current use, the time domain processor. This is followed by a detailed discussion of azimuth compression algorithms which operate in the Doppler frequency domain. The computational aspects of these algorithms are discussed in Section 9.2.

The received waveform at the radar in response to a unit point target with coordinates (x_c, R_c) (Fig. 4.1) is the impulse response:

h(x, R|x_c, R_c) = cos{2π[f_c(t − τ) + K(t − τ)²/2]}    (4.2.2)

The delay is τ = 2R(x)/c with

R(x) = R_c + Ṙ_c(s − s_c) + R̈_c(s − s_c)²/2 + ···    (4.2.3)

The slow time variable is s = x/V_st, where V_st is the speed of the platform along its path.

The received signal Eqn. (4.2.2) is often converted to some different frequency band (S-band for example), perhaps for transmission to ground, and further converted after ground station reception to a relatively low frequency carrier f_1 (the offset video frequency) (Fig. 4.16). The result is an offset video impulse response function

h(s, t|x_c, R_c) = cos{2π[f_1 t − 2R(s)/λ + K(t − 2R(s)/c)²/2]},
|t − 2R(s)/c| ≤ τ_p/2    (4.2.4)
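The storage arithmetic and the "corner turning" described above can be sketched as follows; the Seasat swath and pulse-width numbers are those quoted in this section, while the array sizes are scaled down so the sketch runs in modest memory:

```python
import numpy as np

# Fast-time record length per pulse: slant range swath plus pulse width.
c = 3e8                          # speed of light, m/s
swath = 37e3                     # Seasat slant range swath, m
tau_p = 33.8e-6                  # Seasat pulse width, s
window = 2 * swath / c + tau_p
print(f"record length = {window * 1e6:.0f} us")   # ~280 us, as in the text

# "Corner turning": data arrive with pulse number as major index and range
# bin as minor index; azimuth processing needs each range bin's slow-time
# history contiguous, i.e. the transpose. Sizes here are illustrative only.
n_pulses, n_range_bins = 1024, 2048
data = np.zeros((n_pulses, n_range_bins), dtype=np.complex64)  # stored by rows

turned = np.ascontiguousarray(data.T)   # range bin major, pulse number minor
print(turned.shape)                      # (2048, 1024)
```

Making the transpose contiguous in memory is the whole point of the operation: each azimuth filter then reads one range bin's slow-time history as a single contiguous vector.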
The range of s values over which this is effectively nonzero depends on the radar antenna beamwidth θ_H = λ/L_a, since that determines the length of slow time S = θ_H R_c/V_st for which any particular terrain point is in view.

The received data array v_r(s, t) will be roughly rectangular, and will extend in slant range R = ct/2 the full swath width W_s, and in azimuth x = V_st s some indefinite amount depending on the amount of data which must be simultaneously accessed for image processing. The impulse response Eqn. (4.2.4) will cover a region, as indicated in Fig. 4.17, which is of extent cτ_p/2 in slant range R for every x. The extent of x over which the impulse response is nonzero is not sharply defined, since the edges of the antenna beam are not sharp. The midpoint of the region of the impulse response, shown in Fig. 4.17 as a solid line, is the curve Eqn. (4.2.3), which is often well approximated as a parabola.

The real valued data v_r(s, t) is naturally sampled in slow time s at the radar pulse repetition frequency. In fast time t the sampling is done after down conversion to the offset video frequency at a rate a little above the Nyquist rate (Appendix A). This is typically somewhat greater than 2B_R, where B_R is the bandwidth of the radar pulse around the carrier (Fig. 4.16).

As an example of the size of this real data matrix, the Seasat offset video frequency f_1 = 11.38 MHz required a sampling frequency somewhat greater than 2B_R = 38 MHz, and 4f_1 = 45.53 MHz was used. The target point illuminated may be located anywhere in the range swath. Therefore provision must be made to store sampled values for each pulse over a time span nominally equal to the slant range swath width W_s plus the pulse width τ_p. For Seasat, this was (2/c)(37 km) + 33.8 µs = 280 µs; in fact, 300 µs was used, resulting in (exactly) 13680 real data samples to be stored. In the along-track coordinate x, the Seasat impulse response spans about 4000 pulses, while something like 8000 data pulses need to be considered simultaneously for efficient processing (Section 9.2.4).

The range dimension of the two-dimensional compression processing is common to most SAR systems. The operations required are whatever it takes to compress the function Eqn. (4.2.4) into a pulse at fast time t = 2R(x)/c. The same processing is done for every pulse over the range of slow time needed to form an image. The first operation is complex basebanding (coherent detection), which is initiated by Fourier transformation of the received data v_r(s, t) with respect to fast time t. We assume that the radar pulse has a properly large bandwidth time product B_R τ_p = |K|τ_p² (say > 100) so that the point response function Eqn. (4.2.4) has a rectangular amplitude spectrum. The complex basebanding operation amounts to deleting the negative or positive frequency portion of the spectrum and shifting the remaining half to center on zero frequency. Figure 4.16b shows one case.

Which half of the real range signal spectrum is used depends on a detail of conversion to the offset video frequency f_1. The procedure is to multiply the signal Eqn. (4.2.2) by a local oscillator signal cos(2πf_L t), and reject by filtering any frequency components of the result near the carrier f_c. Letting τ = 0 for convenience, the result is

2{cos(2πf_L t) cos[2π(f_c t + Kt²/2)]}_filtered = cos{2π[(f_c − f_L)t + Kt²/2]}    (4.2.5)

Figure 4.16a shows the case f_L < f_c, so that f_c − f_L > 0. In that case, the positive frequency components of the signal Eqn. (4.2.5) have phase 2π[(f_c − f_L)t + Kt²/2]. Restoring the delay τ = 2R(s)/c and shifting left by f_1 = f_c − f_L (complex basebanding) gives the basebanded impulse response

v_r(s, t) = 0.5 exp[−j4πR(s)/λ] exp{jπK[t − 2R(s)/c]²},   |t − 2R(s)/c| ≤ τ_p/2    (4.2.6)

Figure 4.17 Span in memory of responses to point targets at (x_c, R_c) beam center coordinates.

The spectrum of this, except for the constant, is the phase factor

exp[−j4πfR(s)/c]
corresponding to the time shift t = 2R(s)/c, times the spectrum of the complex basebanded transmitted pulse Eqn. (4.2.1),

s̃(t) = (0.5) exp(jπKt²),  |t| < τp/2  (4.2.7)

Since the pulse Eqn. (4.2.7) by assumption has a large bandwidth time product, its bandwidth is just BR = |K|τp, and its spectrum is (Eqn. (3.2.29))

S̃(f) = (0.5)|K|^(−1/2) exp[j(π/4) sgn(K)] exp(−jπf²/K),  |f| < |K|τp/2 = BR/2  (4.2.8)

The spectrum of the basebanded impulse response vr(s, t) of Eqn. (4.2.6) is then

Ṽr(s, f) = 0.5|K|^(−1/2) exp[j(π/4) sgn(K)] exp[−j4πR(s)/λ]
    × exp[−j4πf R(s)/c] exp(−jπf²/K),  |f| < BR/2  (4.2.9)

Since the transmitted spectrum Eqn. (4.2.8) has constant amplitude, the compression filter is just the matched filter

H(f) = 1/S̃(f) = 2|K|^(1/2) exp[−j(π/4) sgn(K)] exp(jπf²/K),  |f| < BR/2  (4.2.10)

Applying this filter to the basebanded signal spectrum Eqn. (4.2.9) yields for each radar pulse a filter output

G(s, f) = H(f)Ṽr(s, f) = exp[−j4πR(s)/λ] exp[−j4πf R(s)/c],  |f| < BR/2  (4.2.11)

The corresponding time response is

g(s, t) = BR exp[−j4πR(s)/λ] sinc{πBR[t − 2R(s)/c]}  (4.2.12)

… usually required during range migration compensation in azimuth processing. We will discuss the details below.

Alternatively, the basebanded data Eqn. (4.2.6) could be correlated in fast time t with the basebanded transmitted pulse Eqn. (4.2.7) to compute

g(s, t) = ∫ from t−τp/2 to t+τp/2 of vr(s, t′) s̃*(t′ − t) dt′  (4.2.13)

Since the correlation operation Eqn. (4.2.13) is stationary in this case of range processing (Appendix A), i.e., the integrand involves s̃(t′ − t), and not s̃(t′, t), the matched filter realization Eqn. (4.2.11) is exact. There is no reason to carry out range processing as a correlation, unless it is more efficient than the fast convolution processing involved in matched filtering. That will only be the case for a transmitted pulse which spans a small (less than say 64) number of time samples, so that the time bandwidth product is less than 64. This is rarely the case, although in at least one aircraft system (Bennett and Cumming, 1979) range processing (as well as azimuth processing) was realized as a time domain correlation (convolution). It is worth recalling that the matched filter output Eqn. (4.2.12) is approximately correct even for transmitted pulses with rather small bandwidth time products, on the order of 20, provided the full signal and matched filter bandwidth are used for whatever pulse is transmitted (Section 3.2.2).

It is in the stage of azimuth (slow time) processing that matters become more complicated. This is because the azimuth impulse response function Eqn. (4.2.12) depends on range R0, through Eqn. (4.2.3). The compression filter is thereby non-stationary, and processing in the frequency domain (fast convolution) requires care. Second, the data to be compressed lie along the range migration curve Eqn. (4.2.3). Both these effects were discussed in Section 4.1.3. We will now discuss the ways in which they affect SAR azimuth compressor design.
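The matched filter range compression just described, Eqns. (4.2.7)-(4.2.12), can be sketched numerically. This is a minimal illustration, not a production processor: the sample rate, chirp rate, and pulse length below are assumed values, and conj(S) is used for the filter, which over the band equals 1/S̃ up to a constant because the chirp spectrum has nearly constant amplitude, as noted above.

```python
import numpy as np

# Sketch of range compression by fast convolution: an LFM pulse exp(j*pi*K*t^2)
# is compressed to a sinc-like pulse whose 3 dB width is about 1/BR.
fs = 40e6                      # complex sampling rate, Hz (assumed)
K = 0.5e12                     # chirp rate K, Hz/s (assumed)
tau_p = 30e-6                  # pulse length, s (assumed)
BR = K * tau_p                 # bandwidth BR = |K|*tau_p = 15 MHz

n = int(fs * tau_p)
t = (np.arange(n) - n // 2) / fs
pulse = np.exp(1j * np.pi * K * t**2)

N = 4096
S = np.fft.fft(pulse, N)
g = np.fft.ifft(S * np.conj(S))          # matched-filter (fast convolution) output

# 3 dB width of the compressed pulse, in seconds; width*BR is near unity,
# versus the BR*tau_p = 450 time-bandwidth product of the uncompressed pulse.
peak = np.max(np.abs(g))
width = np.sum(np.abs(g) > peak / np.sqrt(2)) / fs
print(width * BR)              # near 1
```

The factor-of-hundreds compression ratio here is just the time-bandwidth product BRτp, which is why pulse compression is worthwhile.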
The range migration locus of a point target at beam center coordinates (sc, Rc) is

R(s) = Rc − (λ/2)[fDC(s − sc) + fR(s − sc)²/2]  (4.2.19)

with fDC, fR being the Doppler chirp parameters for the scene in question, depending markedly on Rc and weakly on sc. (In Eqn. (4.2.18), we ignore the antenna weighting pattern for simplicity of writing. It can easily be included in the compression filter, but often is not, in order to provide sidelobe control.)

Figure 4.18 Sampling nodes in data memory. At pulse m, a target at range R(sm) is sampled at ranges Rn. R = R(s) is the range migration locus of a point target.

For a particular pulse number m, corresponding to azimuth time sm, the values of Eqn. (4.2.18) are required for various range bins Rn (Fig. 4.18). Once these values are found, azimuth compression proceeds by computing their spectrum over some range of slow time s, multiplying by the corresponding matched filter spectrum for the image range Rc in question, and inverse transforming. This achieves the matched filter computation of the correlator
output Eqn. (4.2.14) for a full azimuth line of image.

The procedure described in this section has been used by Bennett et al. (1981), Herland (1981, 1982), and McDonough et al. (1985). The interpolation necessary to compute the numbers which would be present in the data matrix along the trajectory R(s) (Fig. 4.18), given the numbers which are present at the nodes of the matrix, is carried out mostly in the time domain, before azimuth Fourier transformation. The remaining interpolation operations are carried out in the Doppler frequency domain. In effect, the bulk of the range walk, the linear component of R(s) in Eqn. (4.2.19), is removed before azimuth Fourier transformation of the data, with the remaining small range walk, and the full range curvature (the quadratic term of R(s)), removed in the frequency domain.

Skewing the Data Array
We begin by choosing a nominal value R′c of Rc, say the midswath value, and a nominal s′c, say the midscene value. The corresponding Doppler center frequency f′DC is assumed to be known, perhaps by a clutterlock procedure (Chapter 5) used in conjunction with the simple model for fDC as a function of Rc developed in Appendix B. For the entire data field, at all ranges R, we now remove an amount of range migration corresponding to a range independent linear walk,

ΔR(s) = −λf′DC s/2  (4.2.21)

To do this, at azimuth time s, counted from time s = 0 taken at the beginning of the scene for convenience in indexing, we want to compute data corresponding to the range at the tail of each arrow in Fig. 4.19 and store it at the memory node corresponding to the arrowhead. Thus, at time s we want to shift the (unavailable) analog data left by an amount ΔR = −λf′DC s/2, and then sample at the discrete range bins (the memory nodes). Note that this removes the full amount of the linear range walk only for the range R′c, because we use the same value f′DC for all ranges Rc.

The unavailable data values at the tails of the arrows in Fig. 4.19 are computed by interpolating the values available at the memory nodes. Since the range compressed data are bandlimited, and adequately sampled by the range bin spacings, the interpolation procedures of Appendix A apply. In particular, let Gk be the N discrete Fourier coefficients (taken over t) of the range compressed data g(s, t) corresponding to Eqn. (4.2.18), sampled at the N range bin values R = R0 as in Eqn. (4.2.20). Then the Fourier coefficients of the function g(s, R + ΔR) are just

G′k = Gk exp(j2πkΔR/Nδxs)  (4.2.22)

where the slant range sampling interval is δxs = c/2fs. In particular, for
ΔR = (n + α)δxs

for some α with 0 ≤ α ≤ 1 for convenience, we have

G′k = Gk exp(j2πkn/N) exp(j2πkα/N)

The second exponential factor corresponds to interpolation by an amount αδxs, and the first corresponds to left shift of that interpolated sequence by n samples. The left shift is accomplished simply by storing the interpolated sequence appropriately at the output of the interpolating filter. The interpolating filter Fourier coefficients are

Fk(α) = exp(j2πkα/N)  (4.2.23)

Each row of the data matrix will in general be associated with a different value of α, with

n = integer(ΔR/δxs)

where "integer" indicates the integer part of the number. To avoid the necessity of computing the interpolating filter coefficients during data processing, α can be quantized into some appropriate number P of levels (four or eight, typically), and the corresponding sets of interpolator coefficients exp(j2πkα/N) precomputed and stored. Which set to use for any given data row (radar pulse) is determined by calculating the index p such that

p/P ≤ α < (p + 1)/P,  p = 0, …, P − 1  (4.2.24)

Since the operations of range compression and interpolation are both linear, they can be combined into a single filter with coefficients H′k Fk(p), where H′k is the usual range compression filter and Fk(p) is the appropriate set of interpolator coefficients Eqn. (4.2.23) calculated for the quantized value of α corresponding to p as in Eqn. (4.2.24). After compression and interpolation, the shifting operation by the appropriate integer number of range bins amounts simply to re-indexing the output of the compression filter before storing in the data matrix.

After this compression and interpolation process, the data corresponding to a point target at some beam center slant range Rc lie within 1/P of a complex range bin of the locus R(s) − ΔR(s), where ΔR is the total shift Eqn. (4.2.22) carried out in correcting for the nominal linear range migration. This can be written

R(s) = Rc − (λ/2)(fDC − f′DC)(s − sc) − (λfR/4)(s − sc)² + λf′DC sc/2  (4.2.25)

The last term of this represents a skewing of the final image, which can be removed after azimuth compression. The remaining terms of R(s) − Rc represent a residual range migration after the interpolation and re-indexing procedure.

Figure 4.19 Re-indexing of data matrix and interpolation in time domain migration compensation.

Doppler Domain Interpolation
For Seasat-like systems, the Doppler center frequency fDC varies by only a few hundred Hertz over the range swath, while the azimuth extent of the point target response is a few seconds at most. Even for the larger values of λ, say at L-band, for which migration effects are more severe, the residual linear and the quadratic terms together, Eqn. (4.2.25), amount to only a few tens of range bins over the full point target response history. For Seasat, for example, from Eqn. (4.1.34) the nominal integration time is S = 2.4 seconds. Using a nominal value fDC − f′DC = 100 Hz, the residual range walk in Eqn. (4.2.25) is 28 m, or about 5 range bins, while with fR = 500 Hz/s the curvature amounts to 7 range bins. Thus, the bandwidth time product of the interpolated and shifted data in
each range bin is on the order of 1/12 the full azimuth product (3200 for Seasat), or about 250 per bin, which is more than enough to lock together time and Doppler frequency in each bin. (A basebanded waveform of length T and two-sided band B, sampled at fs = B, yields a number of samples N = BT, the bandwidth time product.)

With time and frequency locked together, Fourier transformation over slow time in each range bin produces Doppler spectra G′(f, R), and the residual migration correction needed, Eqn. (4.2.25), can be written in the frequency domain as Eqn. (4.2.26).

For each value of Rc for which an image line ζ(s, Rc) is to be constructed, we need to assemble the proper Doppler spectrum for azimuth compression processing from data G′(f, R) located at Rc + δ′R for each frequency f of the discrete spectrum over the Doppler band. Although Rc will be an integral number of range bins, generally δ′R will not, so that there will not usually be a data node at (f, Rc + δ′R). Interpolation is then needed, to calculate G′(f, Rc + δ′R) from adjacent values G′(f, nδxs). Simple polynomial interpolation using perhaps four adjacent values suffices. This finally corrects the last range migration effect, and compression in azimuth follows using the appropriate sidelobe weighted compression filter.

In the case of small range migration, such as for a Seasat-like system with beam squint angle θs at most a fraction of a degree, it may not be necessary to do any time domain adjustments. All the range migration can then be removed in the Doppler frequency domain using Eqn. (4.2.26) (Bennett et al., 1980), taking fDC = f′DC = 0.

Criterion for Success of the Interpolation
The procedure we have described here is simple and accurate, unless the linear range walk is excessive. The potential difficulty in the case of large range walk (which the technique of secondary range compression is designed to circumvent, as described in Section 4.2.4) can be understood from Fig. 4.20. By removal of the nominal linear range walk in the time domain, we are in effect carrying out compression processing along the indicated diagonal line through memory. As shown in the figure, targets with different values of Rc have their data lying near the same diagonal. Since the azimuth chirp constant fR depends on Rc, along the line of analysis there occur linear FM functions in the Doppler domain with different chirp constants. These will all be compressed by the same azimuth compression filter, embodying some fixed value fR. Any target for which the filter constant fR differs from the target constant more than allowed by the depth of focus will be defocussed. Therefore, the length of azimuth time used in batch processing in the fast azimuth compression process must be short enough so that for whatever nominal range walk is present, the span of values Rc is within the depth of focus of the processor. In extreme cases (for example, with squint angle more than a degree, especially at L-band and lower), this may force the azimuth FFT length to be shorter than would otherwise be desired.

Figure 4.20 Two point targets with extreme range migration may involve chirp constants which exceed the azimuth depth of focus. (The analysis line through memory has slope −2/λf′DC.)

Since the nominal range walk locus in Fig. 4.20 is given by Eqn. (4.2.21), where f′DC is the selected (say midswath) value used in the compensation procedure, and s′c, R′c are say the midscene values, the slope of the nominal walk line is

ds/dR = −2/λf′DC

For an azimuth analysis time span Δs, the span of target values Rc included is then

ΔRc = λ|f′DC| Δs/2

From Appendix B, the model Eqn. (4.1.35) for fR, that is
fR = −2V²/λRc  (4.1.35)

holds quite closely, with V taken as a velocity parameter which depends only weakly on s and not on R. Therefore, the change in target fR across the span ΔRc is

ΔfR = (2V²/λRc²) ΔRc

or

ΔfR/fR = −ΔRc/Rc  (4.2.27)

If we require a mismatch ratio Eqn. (4.1.48)

|ΔfR/fR| ≤ ε

then the span of azimuth processing time Δs is limited by

Δs < 2εRc/λ|f′DC|  *(4.2.28)

The parameter ε depends on the system depth of focus, discussed in Section 4.1.3. There it was determined that

ε = 2/B0S

was within good tolerance, where B0S is the system azimuth bandwidth time product. Using Seasat values, say Rc = 850 km, λ = 0.235, and (marginally) with ε = 0.001, if we want to use say 8K azimuth points for efficient fast convolution, with a PRF of 1650 Hz we must have

|fDC| < 1500 Hz  (4.2.29)

This is a somewhat small value, which might be exceeded if the satellite has a squint of more than a fraction of a degree. Decreasing the azimuth FFT size to 4K would double the limit Eqn. (4.2.29), however, which is reasonably within the typical operating range of a side-looking platform. For higher frequency systems (C or X band), the problem tends to disappear because λ decreases, allowing fDC to increase in inverse proportion for the same azimuth FFT length. Nonetheless, L-band systems with squint angle θs of more than a fraction of a degree can be difficult to deal with using the time domain migration compensation described here. The algorithm in the next section was designed to deal with that situation.

The advantage in processing speed which fast correlation, based in the frequency domain, has over time domain correlation is considerable. There is a strong motivation to use fast correlation whenever it is reasonably possible to do so. The phenomenon of range migration, however, considerably complicates the design of a processor using fast correlation. (The slow variations in azimuth compression parameters fDC, fR with slow time s are a lesser inconvenience, compared with range migration.)

The earliest suggested processor for space-based SAR data of the family we will discuss in this section (Wu, 1976) was envisioned to operate entirely in the Doppler frequency domain for azimuth processing. Such a processor is able to deal with only small range migration effects, essentially only the quadratic curvature component. Beam squint angles larger than a small value lead to data sets which are difficult to process accurately. Accordingly, two subsequent refinements were made. Firstly, the processor was developed which carried out range migration correction partly in the slow time domain and partly in the Doppler frequency domain, as described in Section 4.2.3. Secondly, a refined algorithm operating entirely in the frequency domain was developed (Jin and Wu, 1984; Chang et al., 1992) which is free of approximations which would be unjustified, even for data with rather severe amounts of range migration. In this section, we will describe the latter processor. It is a direct descendent of the earlier hybrid correlation algorithm of Wu (1976) and Wu et al. (1982b), but free of certain approximations used there which are not well satisfied in the case of data with large range walk.

Impulse Response for Range Compressed Data
To begin, consider again the system impulse response. A general transmitted waveform

s(t) = cos[2πfct + φ(t)]

will result in a received response to a unit point target whose positive frequency portion is

s̃(t − 2R/c) = exp{j[2πfc(t − 2R/c) + φ(t − 2R/c)]}

where

R = R(s) = Rc + Ṙc(s − sc) + R̈c(s − sc)²/2 + ···
  ≈ Rc − (λ/2)[fDC(s − sc) + fR(s − sc)²/2]  (4.2.30)

The basebanded point target response is then

vr(s, t) = exp[−j4πR(s)/λ] exp{jφ[t − 2R(s)/c]},  |t − 2R(s)/c| < τp/2  (4.2.31)
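The azimuth-time limit Eqn. (4.2.28) can be checked with the Seasat numbers quoted with Eqn. (4.2.29). This is pure arithmetic, illustrating why the 8K FFT case is described as marginal; the only assumption is the reading of those quoted parameter values.

```python
# Numeric check of Delta_s < 2*eps*Rc/(lambda*|fDC|), Eqn. (4.2.28), with
# Rc = 850 km, lambda = 0.235 m, eps = 0.001, PRF = 1650 Hz, |fDC| = 1500 Hz.
eps, Rc, lam = 0.001, 850e3, 0.235
fdc_limit = 1500.0                          # Hz, the limit of Eqn. (4.2.29)

ds_max = 2 * eps * Rc / (lam * fdc_limit)   # allowed azimuth span, s
ds_8k = 8192 / 1650.0                       # span of an 8K FFT at PRF 1650 Hz, s

print(round(ds_max, 2), round(ds_8k, 2))    # 4.82 4.96
```

The 8K transform span just exceeds the allowed span, consistent with the text's "(marginally)"; halving the FFT length to 4K doubles the allowable |fDC|.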
Range compression of the received data is easily carried out as the first operation of image formation. The result corresponds to an impulse response which is the range compressed version of Eqn. (4.2.31). Let S̃(ν) be the spectrum of the basebanded transmitted signal:

S̃(ν) = F{exp[jφ(t)]},  |ν| < BR/2
     = 0,  otherwise

The result corresponds to the range compressed spectrum

G(s, ν) = HR(ν)F{vr(s, t)} = exp[−j4πR(s)/λ] exp[−j4πν R(s)/c],  |ν| < BR/2

so that

g(s, t) = BR exp[−j4πR(s)/λ] sinc{πBR[t − 2R(s)/c]}

Writing t = 2R/c, this is

g(s, R) = BR exp[−j4πR(s)/λ] sinc{(2πBR/c)[R − R(s)]}  (4.2.32)

The response function Eqn. (4.2.32) involves both s0 and R0 other than in the combinations s − s0 and R − R0. That is to say, the linear radar system is nonstationary (Appendix A). However, the corresponding impulse response is well approximated as stationary in s, because s0 enters into the expression Eqn. (4.2.30) only in the form s − s0, and in the weak dependence of fDC, fR on s0.

From Eqn. (4.2.32), we can then write the impulse response for range compressed data as

h(s, R|R0) = BR exp[−j4πR1(s)/λ] sinc{(2πBR/c)[R − R1(s)]}  *(4.2.34)

where

R1(s) = R0 − (λ/2)(fDC s + fR s²/2)  (4.2.35)

and the function h is that on the right of Eqn. (4.2.33). This is the impulse response of a two-dimensional system which is approximately stationary in s, but nonstationary in R, both through the explicit appearance of R0 and through the strong dependence of fDC, fR on R0. We wish to determine its inverse, the corresponding image formation operator to be used on range compressed basebanded data,

ζ(s, R0) = F⁻¹{G/H}  (4.2.36)

where the inverse Fourier transform is two dimensional, G and H are the two dimensional Fourier transforms of g(s, R), the range compressed complex data, and h(s, R|R0), and the quantity G/H is defined as zero for any frequencies for which H is zero. Writing the two dimensional inverse transform in Eqn. (4.2.36) as a sequence of one-dimensional transforms, we have … where H̄ = 1/H for H ≠ 0 and H̄ = 0 for H = 0. Then … where the convolution is in the variable R and ĝ(f, R) is the Doppler spectrum of the range compressed data field taken for fixed R. We now need the function h̃(f, R|R0) in order to describe the imaging algorithm.

The Doppler spectrum of the system function Eqn. (4.2.34) is

h̃(f, R|R0) = BR ∫ from −∞ to ∞ of G(s) exp[−j4πR1(s)/λ]
    × sinc[(2πBR/c)(R − R0 + λfDC s/2 + λfR s²/4)] exp(−j2πfs) ds  (4.2.38)

where we have explicitly inserted R1(s) from Eqn. (4.2.35), and where we also include the two way antenna voltage pattern G(s) in azimuth. (This is the one-way power pattern G(θ, φ) evaluated at constant slant range and expressed as a function of azimuth time.) Since we include the pattern G(s), the limits can be left as infinite, although the antenna effectively imposes the limits (−S/2, S/2), where S is the integration time of the SAR. In evaluating this integral, a second order approximation based on the method of stationary phase, discussed in Section 4.2.2, leads to the result of Jin and Wu (1984).
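The azimuth phase history behind this stationary phase evaluation can be illustrated directly from Eqn. (4.2.30): the phase −4πR(s)/λ of the range compressed data is a linear FM in slow time, with instantaneous Doppler frequency fDC + fR(s − sc). The parameter values in this sketch are assumed, not taken from the text.

```python
import numpy as np

# With R(s) ~ Rc - (lam/2)[fDC*(s-sc) + fR*(s-sc)^2/2], the phase -4*pi*R(s)/lam
# has slope 2*pi*[fDC + fR*(s-sc)]: a linear FM Doppler history.
lam = 0.235                      # wavelength, m (L-band-like, assumed)
Rc, sc = 850e3, 0.0              # beam-center range (m) and time (s), assumed
fdc, fr = 300.0, -520.0          # Doppler centroid (Hz) and rate (Hz/s), assumed
prf = 1650.0
s = np.arange(-2048, 2048) / prf

R = Rc - (lam / 2) * (fdc * (s - sc) + fr * (s - sc) ** 2 / 2)
phase = -4 * np.pi * R / lam

# Instantaneous frequency = phase slope / (2*pi), evaluated at sample midpoints
inst_f = np.diff(phase) / (2 * np.pi * np.diff(s))
pred = fdc + fr * ((s[1:] + s[:-1]) / 2 - sc)

print(np.max(np.abs(inst_f - pred)))   # small: the Doppler history is linear FM
```

This one-to-one map between slow time and Doppler frequency is the "locking" relationship invoked in the stationary phase evaluation below.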
The points of stationary phase in s depend on the frequency f as a parameter of the integrand, and are given for the integral Eqn. (4.2.38) by setting to zero the derivative of the phase function. The points ŝ of stationary phase are then given by

[∂φ(s)/∂s]|s=ŝ = 0

or

ŝ = (f − fDC)/fR  (4.2.39)

which is just the locking relationship between time and frequency familiar for waveforms with high bandwidth time product.

In the integral Eqn. (4.2.38) we do not replace slow time s in the amplitude factors of the integrand by the stationary points Eqn. (4.2.39) everywhere, but rather only in the second order (s²) term of the sinc function. This is because we want to allow for a large range walk term fDC s in the locus R1(s), and therefore make no approximation there. Specifically, the linear part of the range migration at the end of the integration time, λ|fDC|S/4, may be larger than the quadratic part, λ|fR|S²/16. On the other hand, if the linear range walk is small, no harm is done by the approximation of s = ŝ in the quadratic term of the sinc argument, because for small range walk the stationary phase approximation becomes increasingly accurate.

With these replacements, we obtain the spectrum Eqn. (4.2.38) as follows. For the second spectrum, since we have the inverse transform relation

(1/a) ∫ from −a/2 to a/2 of exp(j2πfs) df = sinc(πas)

we have … The spectrum Eqn. (4.2.40) is then

= exp[−j4πRc/λ + j(π/4) sgn(fR)] |fR|^(−1/2)
    × G[(f − fDC)/fR] exp[−jπ(f − fDC)²/fR] A(R − R1(ŝ)|Rc)  (4.2.43)

where

A(R|Rc) = ∫ from −BR/2 to BR/2 of exp{j2π[(2R/c)x − (λfDC x/c)²/2fR]} dx  *(4.2.44)

and

…  (4.2.45)

since the waveform Eqn. (4.2.41) has high bandwidth time product.
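The behavior of the function A(R|Rc) of Eqn. (4.2.44) can be examined numerically; later in the section, Jin and Wu (1984) are quoted as finding its width to be on the order of one range resolution interval c/(2BR) for modest fDC. The parameter values here are Seasat-like but assumed, and the integral is evaluated by a simple Riemann sum.

```python
import numpy as np

# Evaluate A(R|Rc) of Eqn. (4.2.44) on a grid of range offsets R and measure
# its 3 dB width relative to the range resolution interval c/(2*BR).
c = 3e8
BR = 19e6                                  # range bandwidth, Hz (Seasat-like)
lam, fdc, fr = 0.235, 1000.0, 500.0        # wavelength, Doppler centroid/rate (assumed)

x = np.linspace(-BR / 2, BR / 2, 801)      # integration variable of (4.2.44)
R = np.linspace(-30.0, 30.0, 1601)         # range offset, m
dx = x[1] - x[0]

phase = 2 * np.pi * (2 * np.outer(R, x) / c - (lam * fdc * x / c) ** 2 / (2 * fr))
A = np.exp(1j * phase).sum(axis=1) * dx    # Riemann sum over x

res_cell = c / (2 * BR)                    # range resolution interval, ~7.9 m
width = np.sum(np.abs(A) > np.abs(A).max() / np.sqrt(2)) * (R[1] - R[0])
print(width / res_cell)                    # about one resolution cell
```

With these parameters the quadratic (secondary compression) term is tiny and A is essentially a sinc of one-cell width; increasing fDC broadens it, which is the onset of the secondary range compression problem.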
where we write

…  *(4.2.48)

then finally

… × ∫ from −BR/c to BR/c of exp[j(π/fR)(λfDC/2)²ν²] exp{j2πν[R + R(f)]} dν  (4.2.47)

The number B[f, R + R(f)|R0] is multiplied by …
to form a single point of the composite Doppler spectrum of ζ(s, Rc). Finally, inverse Fourier transformation yields all azimuth points ζ(s, Rc) of the range line Rc.

Since range compression processing will have been digital, the ranges for which image will be computed are the values at which compressed range function samples were produced (the range bins), the interval between samples being Δxs = c/fs, where fs is the sampling rate of the range complex video signal. The spacing in the discrete version of the Doppler frequency variable f depends on the span in slow time s over which the azimuth FFT blocks are taken. Thus the field of values B(f, R|Rc) of Eqn. (4.2.48) is on a specified grid in the (f, R) plane. For any particular discrete value of f, and some specified discrete range Rc for which the line of image is being constructed, there will not in general be a discrete range value R(f) of Eqn. (4.2.50) available on the grid. Therefore interpolation is necessary between neighboring nodes of B(f, R|Rc) to find the needed value. Polynomial interpolation using a few points in range at the frequency of interest suffices.

As mentioned above, fDC and fR depend weakly on sc and strongly on Rc. The procedure of the last paragraph must then be carried out in range blocks of size small enough that these parameters are sensibly constant over the block. The variations with sc are usually slow enough to allow use of FFT blocks in slow time of reasonable length (4K or 8K, typically). In range, the changes in fDC, fR are more rapid, and typically these parameters are changed every few tens of range resolution intervals, depending on the processor depth of focus. The parameters are updated, perhaps in accordance with one of the models of Appendix B, as the image production moves across the range swath.

Combined Primary and Secondary Range Compression
Jin and Wu (1984) indicate that the parameters in A(R|Rc) need not be updated at all across a reasonable swath width in range, so that only the parameter values in the phase of the Doppler filter w(f) are critical. For such cases, the secondary range compression operation Eqn. (4.2.48) can be combined with range compression, and therefore done with no additional computations needed beyond what is needed in any case for range compression. The operation Eqn. (4.2.48) of forming B(f, R|Rc) by correlation with the range compressed data can then be realized as

B(f, R) = ∫ from −∞ to ∞ of Fs{g(s, R′)} A*(R′ − R|Rc) dR′
        = Fs{ ∫ from −∞ to ∞ of g(s, R′) A*(R′ − R|Rc) dR′ }  (4.2.51)

where Fs{·} denotes Fourier transformation over slow time s. Thus the secondary compression filter, with transfer function

A*(−ν) = (c/2) exp[j(π/fR)(λfDC/2)²ν²],  |ν| < BR/c  (4.2.52)

using Eqn. (4.2.46), can simply be combined in a product with the primary range compression filter. The result is an adjusted range compression filter, relative to range time t = 2R/c, with transfer function

H(f) = exp(−jπf²/Ke)  (4.2.53)

where the effective chirp rate Ke is

…  *(4.2.54)

for a transmitted pulse Eqn. (4.2.1).

In order that secondary and primary range compression can be combined as in Eqn. (4.2.54), it is necessary that the secondary compression filter, Eqn. (4.2.52), evaluated say at midswath, have a phase φ(ν) which is adequately matched to the data. The phase mismatch is due to drift ΔfR and ΔfDC of the phase parameters from their midswath values:

…  (4.2.55)

where the derivatives are evaluated for midswath fR, fDC.

From Fig. 4.9, for an acceptable 10% broadening of the output of a filter matched to a linear FM with bandwidth time product BRτp, it is required that the proportional drift in chirp constant K be such that

…  (4.2.56)

This takes account that the primary range filter with which the secondary filter will be combined will include weighting for sidelobe control.

The restriction Eqn. (4.2.56) can be written in terms of phase deviation Δφ(f) at band edge. Using the general linear FM phase function, … at f = BR/2 we have …
Taking account that "range" frequency ν and "time" frequency f are related by ν = 2f/c, the secondary filter Eqn. (4.2.52) has a phase function

…

so that

…

Evaluating this at band edge, f = BR/2, leads to the restriction Eqn. (4.2.57) as

…  *(4.2.58)

Equation (4.2.58) is essentially that set forth by Jin and Wu (1984). Wong and Cumming (1989) have made a similar calculation and present examples. Equation (4.2.58) is well satisfied across the entire range swath of a Seasat-like system with moderate (~5-10°) squint.

The Hybrid Correlation Algorithm
In the case of small range walk, the secondary range compression process reduces to the hybrid correlation algorithm of Wu et al. (1982b). As Jin and Wu (1984) show by computations (Fig. 4.21), the function A(R|Rc) of Eqn. (4.2.44) has width the order of one range resolution interval, or about one range sampling interval, so long as

…  *(4.2.59)

a value for Seasat of about 1500 Hz. In that case, …

… effects encountered in side-looking SARs, in which the squint angle is consciously kept as small as practicable. However, for some purposes the radar beam of a SAR may be deliberately aimed at a large squint angle, perhaps tens of degrees off broadside. In such cases, even the algorithm Eqn. (4.2.49) begins to degrade in its ability to invert the system point response function. Accordingly, a modification was developed by Chang et al. (1992) which is tolerant of the large range walk encountered with a squint mode SAR.

The problem is that, with a squinted SAR, the secondary compression function parameter fDC changes appreciably with slow time s over the SAR integration time S. Since slow time s and Doppler frequency f are closely locked, the function A(R|Rc) used in the secondary compression operation Eqn. (4.2.49) needs to be updated as the Doppler spectrum ĝ(f, R) is processed. The result is that the secondary compression function A*(R′ − R|Rc) in Eqn. (4.2.51) depends on s, and the operation cannot be combined with the range compression filter as in Eqn. (4.2.53), even if the variation with range would be tolerable.

The procedure is then to implement (primary) range compression and secondary range compression as independent operations. This is emphasized by Chang et al. (1992). The basebanded data vr(s, R) are Fourier transformed in range to produce spectra Ṽr(s, ν) which are multiplied by the range compression filter transfer function,

H(ν) = exp[−jπ(cν/2)²/K]

to produce the range compressed data spectra,

Ĝ(s, ν) = H(ν)Ṽr(s, ν)

The azimuth spectrum …
… Seasat-like system at L-band (with 40° look angle) with a squint angle of 15-20°, whereas the algorithm Eqn. (4.2.49) itself begins to degrade at a squint angle of about 5°. Calculations are presented to show that, at a smaller look angle (20°), the algorithm Eqn. (4.2.49) is adequate at squint up to about 10°, while the modified algorithm at squint 20° is successful at a full range transform span of 40 km, and by reduction of the range transform span to 10 km can operate at squint as high as 80°. At C-band, the algorithm Eqn. (4.2.49) itself performs adequately for squint of 40° with a 40 km range transform span and 35° look angle. Matters improve still further at smaller look angles and narrower range transform span.

The algorithm of Chang et al. (1992) is therefore adequate for a broad range of SAR systems. The only restriction is that, since the range curvature terms in Eqn. (4.2.38) are only approximated by using the method of stationary phase to arrive at the spectrum Eqn. (4.2.43), the processor degrades if range curvature is excessive. The situation worsens at lower frequency and higher altitude, since the range curvature ΔR, measured in range resolution cells δxs, from Section 4.1.3 is

…

Finer compressed azimuth resolution δx also rapidly degrades the situation.

Virtually all the SAR processors which have been constructed for earth remote sensing use one version or another of the algorithms we have discussed in this chapter so far. In Chapter 10 we will discuss a third way of dealing with range migration. This is the "polar processing" algorithm, which has been used mainly in aircraft systems, but is not limited to that platform. Before that discussion, however, we will complete the description of the rectangular algorithm with a discussion of the phenomenon of speckle noise in coherent imaging systems, and a description of some algorithms designed for determining the azimuth filter parameters (Doppler center frequency and Doppler rate) in the rectangular algorithm, and for resolving an ambiguity in azimuth image placement which can arise.

REFERENCES

Barber, B. C. (1985). "Theory of digital imaging from orbital synthetic-aperture radar," Inter. J. Remote Sensing, 6(7), pp. 1009-1057.

Bennett, J. R. and I. G. Cumming (1979). "Digital SAR image formation airborne and satellite results," 13th Inter. Symp. Remote Sensing of the Environment, Ann Arbor, Michigan, April 23-27.

Bennett, J. R., I. G. Cumming and R. A. Deane (1980). "The digital processing of Seasat synthetic aperture radar data," Record, IEEE 1980 Inter. Radar Conf., April 28-30, Washington, DC, pp. 168-175.

Bennett, J. R., I. G. Cumming, P. R. McConnell and L. Gutteridge (1981). "Features of a generalized digital synthetic aperture radar processor," 15th Inter. Symp. on Remote Sensing of the Environment, Ann Arbor, Michigan, May.

Brookner, E. (1977). "Pulse-distortion and Faraday-rotation ionospheric limitations," Chapter 14 in E. Brookner (ed.), Radar Technology, Artech House, Dedham, MA.

Chang, C. Y., M. Jin, and J. C. Curlander (1992). "Squint mode processing algorithms and system design considerations for spaceborne synthetic aperture radar," IEEE Trans. Geosci. and Remote Sensing (Submitted).

Cook, C. E. and M. Bernfeld (1967). Radar Signals, Academic Press, New York.

Herland, E. A. (1981). "Seasat SAR processing at the Norwegian Defence Research Establishment," Proc. of an EARSeL/ESA Symp., Voss, Norway, May 19-20, pp. 247-253.

Herland, E.-A. (1982). "Application of Satellite-Based Sidelooking Radar in Maritime Surveillance," Report 82/1001, Norwegian Defence Research Establ., Kjeller, Norway, September (AD A122628).

Jin, M. Y. and C. Wu (1984). "A SAR correlation algorithm which accommodates large-range migration," IEEE Trans. Geosci. and Remote Sensing, GE-22(6), pp. 592-597.

McDonough, R. N., B. E. Raff and J. L. Kerr (1985). "Image formation from spaceborne synthetic aperture radar signals," Johns Hopkins APL Technical Digest, 6(4), pp. 300-312.

Quegan, S. and J. Lamont (1986). "Ionospheric and tropospheric effects on synthetic aperture radar performance," Inter. J. Remote Sensing, 7(4), pp. 525-539.

Wong, F. H. and I. G. Cumming (1989). "Error sensitivities of a secondary range compression algorithm for processing squinted satellite SAR data," IGARSS '89, Vancouver, BC, pp. 2584-2587.

Wu, C. (1976). "A digital system to produce imagery from SAR data," Paper 76-968, AIAA Systems Design Driven by Sensors, Pasadena, California, October 18-20.

Wu, C., K. Y. Liu and M. Jin (1982b). "Modeling and a correlation algorithm for spaceborne SAR signals," IEEE Trans. Aerospace and Electronic Sys., AES-18(5), pp. 563-574.
CHAPTER 5

ANCILLARY PROCESSES IN IMAGE FORMATION

radar data itself. Finally, we describe some ways of resolving the basic image position ambiguity which arises in pulse radar, which time samples the Doppler signal underlying SAR operation.

5.1 DIGITAL RANGE PROCESSING

With rare exceptions, all SAR processors carry out range compression of the raw data for a large number of radar pulses before beginning azimuth compression to compute a block of image. Even though some of the details of range compression depend on the way that azimuth compression is to be carried out, the main elements are sufficiently alike to make a separate description efficient. The methods of Appendix A are the basis of the processing described here. Section 9.2.5 considers the computational complexity of the procedures.

The continuous time real radar return signal for some particular pulse is of some bandwidth B_R centered on the carrier frequency f_c. By linear frequency shifting operations, this ultimately appears at the input of the A/D converter as a real signal corresponding to the point target response Eqn. (4.2.4), of bandwidth B_R centered on the offset video frequency f_1, with f_1 > B_R/2 necessary to minimize aliasing (Fig. 4.16a), but often f_1 ≈ B_R/2. (In the Seasat case, for example, the range pulsewidth τ_p = 33.8 µs and range chirp constant K = 0.563 MHz/µs resulted in a bandwidth B_R = 19.0 MHz, and the offset video frequency chosen was 11.38 MHz.) For proper digital processing, the continuous time signal v_r(t) is sampled at some rate greater than the Nyquist frequency, which is twice the frequency of the highest frequency component in the signal being sampled, f_1 + B_R/2 (Appendix A). With f_1 ≈ B_R/2, a usual and convenient choice is f_s = 4f_1 (45.53 MHz for Seasat). This results in some implementational simplification, since then f_1 lies exactly at f_s/4. The slant range spacing of the real samples, the size of a range bin, is

δR = c/2f_s = 3.3 m

(Note that a range bin is not the same as a range resolution cell.)

The range samples are now filtered by the digital range compression filter. If the filter impulse response is h(t), this is sampled at the same rate f_s as the range data. With a radar pulse length τ_p there are required Q = τ_p f_s samples (1536 for Seasat). With a linear chirp, the effective transmitted pulse is

s(t) = cos[2π(f_1 t + Kt²/2)]    (5.1.1)
relative to the offset video frequency f_1. The filter function is the matched filter:

h(t) = s(−t) = cos[2π(f_1 t − Kt²/2)]    (5.1.2)

An FFT size N which is the next power of two greater than or equal to P + Q − 1 is chosen (2^14 = 16384 for Seasat), and the data and filter sequences are filled with zeros to that length (zero padding). Alternatively, a smaller value can be used with the overlap-add or overlap-save procedures described in Appendix A.

In Fig. 5.1 are sketched the (periodic) time and frequency waveforms involved in digital compression of the offset video range pulse Eqn. (5.1.1) using the filter Eqn. (5.1.2). In every case, the region of computation is the first period of the function in positive time or frequency, shown as solid lines in Fig. 5.1.

Figure 5.1 Steps in range compression. Solid lines on frequency spectra are the base Fourier domain; dashed lines are periodic repetitions of spectra of digital signals.

The range compression filter coefficients (Fig. 5.1b) are computed as the N-point FFT of the sequence computed from Eqn. (5.1.2):

h_n = h(n/f_s),          n = 0, Q/2 − 1
h_n = 0,                 n = Q/2, N − Q/2 − 1    (5.1.3)
h_n = h[(n − N)/f_s],    n = N − Q/2, N − 1

taking account that the sequence h_n is periodic with period N and that we want always to enumerate sequences with positive indices. Since this sequence is real, the even-odd separation procedure of Appendix A can be used conveniently. Further, since we will carry out complex basebanding on the result, only the coefficients H_k for k = 0, N/2 − 1 (Fig. 5.1b) need be computed (Fig. 4.16).

By themselves the filter coefficients H_k of Eqn. (5.1.3) suffer from the problem of range sidelobes, discussed in Section 3.2.3. Before using them they must be modified by some appropriate weight sequence, such as the sequence corresponding to the Taylor weighting (Farnett et al., 1970):

W(f) = 1 + 2 Σ_m F_m cos[4πm(f − f_1)/f_s],    |f − f_1| ≤ f_s/4    (5.1.4)

where the F_m are the Taylor coefficients. In terms of the FFT frequency index, with f_1 = f_s/4, the weight samples are

W_(k±N/4) = 1 + 2 Σ_m F_m cos(4πmk/N)    (5.1.5)

and the weighted filter coefficients are the products W_k H_k. The spectrum of the compressed data, shifted to complex baseband, is then

G_k = H_(k+N/4) F_(k+N/4),    k = 0, N/4 − 1
G_k = H_(k−N/4) F_(k−N/4),    k = N/4, N/2 − 1    (5.1.6)

where F_k, k = 0, N − 1, are the Fourier coefficients of the data samples, and the H_k are understood to carry the weighting.
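The frequency domain matched filtering just described can be sketched in a few lines of code. The following simplified illustration works at complex baseband and omits the offset video and weighting details of Eqns. (5.1.3)-(5.1.6); the sampling rate, FFT size, and echo delay are arbitrary choices, while the pulse length and chirp rate are the Seasat values quoted above.

```python
import numpy as np

# Complex-baseband sketch of FFT matched-filter range compression.
# Pulse length and chirp rate are the Seasat values from the text;
# the sampling rate, FFT size, and echo delay are arbitrary choices.
fs = 19e6            # sampling rate, taken equal to the chirp bandwidth
tau = 33.8e-6        # range pulse length
K = 0.563e12         # chirp rate in Hz/s (0.563 MHz/us)

t = np.arange(int(tau * fs)) / fs
pulse = np.exp(1j * np.pi * K * (t - tau / 2) ** 2)   # linear FM pulse

N = 4096             # FFT size covering the receive window
echo = np.zeros(N, dtype=complex)
delay = 1000         # echo position in samples
echo[delay:delay + pulse.size] = pulse

# Matched filter: multiply by the conjugate pulse spectrum, invert.
H = np.conj(np.fft.fft(pulse, N))
compressed = np.fft.ifft(np.fft.fft(echo) * H)

peak = int(np.argmax(np.abs(compressed)))
print(peak)          # the compressed pulse peaks at the echo delay
```

The compressed main lobe is roughly one sample wide at this sampling rate, i.e. about 1/B_R in time, which is the pulse compression result of Chapter 3.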
Finally, computing the (N/2)-point complex inverse FFT of the sequence G_k of Eqn. (5.1.6) yields the complex samples g_k, k = 0, N/2 − 1 (Fig. 5.1f), of the subsampled basebanded complex compressed range samples corresponding to Eqn. (4.2.11). It is the phase of those numbers which carries the Doppler information needed for azimuth compression processing. Since now only N/2 numbers represent the full range swath, the range sampling interval in this complex domain, the size of a complex range bin, is c/f_s (6.6 m for Seasat), rather than the value c/2f_s of the real time samples of the range video function.

Alternatively, of course, the real offset video data samples f_n can be FFTed to produce Fourier coefficients F_k, k = 0, N − 1. The coefficients F_k, k = 0, N/2 − 1, are then rearranged as in Fig. 5.1c-f. The resulting complex basebanded signal coefficients are filtered using coefficients H_k obtained by transforming N/2 time samples, taken at intervals 1/f_s, of

h(t) = exp(−jπKt²)    (5.1.7)

rearranged similarly to Eqn. (5.1.3).

5.2 SPECKLE AND MULTILOOK PROCESSING

The resolution element of any SAR is large with respect to a wavelength of the radar system. As a result, it is generally unfruitful to attempt to define a deterministic backscatter coefficient for each terrain element to be imaged. Rather, as discussed in Section 2.3, the sought image is the local mean of the radar cross section per unit area of each patch of the terrain in view. This is defined in terms of the random specific cross section

σ⁰(R) = σ(R)/dA    (5.2.1)

The random nature of σ⁰(R) is due to underlying variations on the order of a wavelength in scale which can not be resolved by the SAR system.

As discussed in Section 3.2.1, the mean of the coefficient Eqn. (5.2.1), the (real) image I(R), is related to the sample functions ζ(R) of the complex image by

I₀(R) = E[|ζ(R)|²]    (5.2.2)

where ζ(R) is the terrain reflectivity function defined in Eqn. (3.2.3). Its approximation in any particular realization,

ζ̂(R) = ∫ h⁻¹(R|R') v̂_r(R') dR'    (5.2.3)

is the complex image derived from the radar voltage phasor signals v_r(R) by processing with the inverse of the radar system function (Section 4.1).

Any particular realization ζ̂(R) of Eqn. (5.2.3) will yield an image |ζ̂(R)|² which is different from the mean Eqn. (5.2.2). The difference is speckle noise. In this section we want to investigate the statistics of the individual real images |ζ̂(R)|². Also, we will discuss some ways to generate estimators of the desired image Eqn. (5.2.2) from available samples ζ̂(R).

Image Statistics

Accordingly, we view the terrain reflectivity ζ(R) as a (complex) random variable, whose real and imaginary parts have some probability distributions. The radar data number v̂_r(x, R) is then a random variable also. Considering the very large number of image cells in the radar field of view, we then invoke the central limit theorem to assume that the probability densities of the real and imaginary parts of v̂_r(x, R) are Gaussian. The number Eqn. (5.2.3), the computed complex image value, being a linear combination of Gaussian random variables, is also a complex random variable with Gaussian real and imaginary parts. Its mean is

E[ζ̂(x, R)] = ∫∫∫∫ h⁻¹(x, R|x', R') E[ζ(x₀, R₀|x', R')] h(x', R'|x₀, R₀) dx₀ dR₀ dx' dR'    (5.2.4)

If we now assume that the expected value of the terrain reflectivity function ζ is independent of aspect angle over the range of angles for which the terrain point is in the radar beam, using Eqn. (4.1.2) the delta function is recovered in Eqn. (5.2.4) to yield

E[ζ̂(x, R)] = E[ζ(x, R)]    (5.2.5)

Thus, the computed complex image function ζ̂(x, R) is a random variable whose mean is the mean of the terrain reflectivity function.

We are mainly interested in the statistics of the random variable Z(x, R) = |ζ̂(x, R)|, the magnitude of the computed complex image, whose mean square is "the image". If we assume that the real and imaginary parts of the complex Gaussian random variable ζ̂(x, R) are independent and zero mean (implying incidentally, from Eqn. (5.2.5), that the complex terrain function ζ has zero mean) with equal variances σ², then Z(x, R) has the Rayleigh density. This follows from the computation (Whalen, 1971, Chapter 4):

p(Z, φ) = det[∂(a, b)/∂(Z, φ)] p(a, b)    (5.2.6)
where we write

ζ̂ = a + jb = Z cos(φ) + jZ sin(φ)

so that the Jacobian is |∂(a, b)/∂(Z, φ)| = Z. Since, by our assumptions,

p(a, b) = p(a)p(b) = (1/2πσ²) exp[−(a² + b²)/2σ²]    (5.2.7)

Eqn. (5.2.6) then yields

p(Z, φ) = (Z/2πσ²) exp(−Z²/2σ²)

p(Z) = ∫₀^{2π} p(Z, φ) dφ = (Z/σ²) exp(−Z²/2σ²)    (5.2.8)

In terms of the intensity I = Z², the density is

p(I) = (dZ/dI) p(Z) = (1/2σ²) exp(−I/2σ²)    (5.2.9)

The mean and standard deviation of the intensity are then

I₀(x, R) = E(I) = 2σ²,    σ_I(x, R) = I₀ = 2σ²

where σ² may depend on (x, R). From Eqn. (5.2.9), the exponential density of the samples I(x, R) is equivalently:

p(I) = (1/I₀) exp(−I/I₀)    (5.2.10)

Multilook Images

Although there are many assumptions in the above derivation, analysis of typical SAR images supports the final result that the image resolution cells have intensities I which follow the exponential distribution. The image then has a randomly fluctuating intensity I(R) at each pixel, which leads to the grainy appearance of speckle. For purposes of visual interpretation, it is generally desirable to reduce those fluctuations, and to cluster the observed intensities I(R) closer to the mean intensities I₀(R), since it is the mean intensities which are usually the required image information. This is usually done by computing some number of nominally independent images (looks) of the same scene and averaging them, pixel by pixel. Alternatively (Li et al., 1983), a single high resolution image can be locally smoothed.

If we let I_L(R) be the average of L independent realizations (looks) I_i(R) of the intensity I(R) for a pixel at R:

I_L(R) = (1/L) Σ_{i=1}^{L} I_i(R)    (5.2.12)

the mean is unchanged, while the variance is reduced by the factor L:

σ²_{I_L} = (1/L²) Σ_{i=1}^{L} σ²_{I_i} = σ²_I/L

(This reduction will be less if the look intensities are unequal or the looks are not independent.) An image such as Eqn. (5.2.12) is called an L-look image.

In SAR, independent looks I_i(R) can be generated from data taken at different aspect angles as the vehicle moves past the terrain (Fig. 5.2, drawn for the common case of four looks). Thus the first look is generated from the forward quarter of the antenna along-track beam, the next from the next quarter beam back, and so on. Since signals from all parts of the beam reach the radar receiver superimposed, however, such segregation of data can not be done in the time or space domains. However, the high azimuth bandwidth time product of a useful SAR locks together time and frequency, which allows the look data to be sorted in the Doppler frequency domain. That is, data with high Doppler frequency necessarily originated from terrain points in the forward edge of the azimuth beamwidth, while the same point in the rear quarter of the beam produces a low Doppler frequency and appears in the lowest quarter of the Doppler band.

Figure 5.2 Two subaperture looks at a target as the radar moves past.

Figure 5.3 Doppler spectrum and look filters. (Antenna pattern weighting not shown.)
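The single-look statistics above, and the 1/L variance reduction of the L-look average of Eqn. (5.2.12), are easy to check numerically. A small Monte Carlo sketch (the unit variance and the sample counts are arbitrary choices):

```python
import numpy as np

# Monte Carlo check of speckle statistics: for complex Gaussian image
# values the intensity I = |z|^2 is exponential, so its standard
# deviation equals its mean (unity SNR); averaging L independent looks
# reduces the normalized standard deviation by sqrt(L).
rng = np.random.default_rng(0)
sigma, n, L = 1.0, 200_000, 4

def looks(count):
    """Intensities of `count` independent complex Gaussian images."""
    return np.abs(rng.normal(0, sigma, (count, n)) +
                  1j * rng.normal(0, sigma, (count, n))) ** 2

I1 = looks(1)[0]                 # single-look intensities
print(I1.mean())                 # ~ 2 sigma^2
print(I1.std() / I1.mean())      # ~ 1: unity single-look SNR

IL = looks(L).mean(axis=0)       # L-look average, Eqn. (5.2.12)
print(IL.std() / IL.mean())      # ~ 1/sqrt(L) = 0.5
```

The normalized standard deviation of the four-look image is close to one half, as the text's variance argument predicts.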
azimuth compression filter is applied. The spectrum is then divided into (say) four subbands by filters before compression, suitably tapered to avoid sidelobes in azimuth time (Section 3.2.3), and overlapped to some extent to avoid loss of too much signal energy, but not so much as to lose independence of the looks (Fig. 5.3). Since the Doppler bandwidth B_D is essentially independent of range R_c at beam center, the look filters can be taken with constant bandwidths B_p (nominally B_D/L for L looks) and with center frequencies evenly spaced across the band B_D. Since f_Dc changes with range R_c, the look filter complex of Fig. 5.3 slides in frequency as a unit as the range bin R_c in question changes.

Since the resolution in each look I_i(R) is inversely proportional to the bandwidth B_p of the Doppler data compressed in that look, processing only 1/L of the full Doppler band B_D degrades the resolution in each look by the factor L as compared to the resolution available if all data were compressed to form a single image (single-look processing). Thus, for example, a single look Seasat image uses the full Doppler band of 1300 Hz and attains a resolution ideally of δx = V_s/B_D = 6600/1300 = 5.1 m, while a four look image has resolution in each look 4 × 5.1 = 20.4 m, with the resolution in the superposition of the four looks being the same as in each look separately. (The exact resolution attained in a multilook image depends on the details of implementation of the look filters, since the precise answer depends on the bandwidth taken for each look filter.)

Multilook Processing

If the capability to produce single look images is desired in the processor, the full Doppler data band B_D must be produced using an FFT of adequate length in the azimuth time variable. Since the full synthetic aperture time S must be used for the filter function, something markedly longer must be used for the data block in order to achieve fast convolution efficiency (Appendix A). Then there is no particular reason not to implement multilook filters by simply combining the amplitude characteristic of Fig. 5.3 for each look with the single look full band compression filter to produce the L multilook filters to apply to the azimuth Doppler data. Since the compressed data in Doppler frequency has only nominally 1/L the bandwidth of single-look data, a sampling rate 1/L of that needed for single look images suffices. This rate reduction is easily brought about by doing the inverse FFT of the compressed data with an (N/L)-point IFFT, where the original single look spectrum was taken with an N-point transform. If something other than L-look imagery, with L a power of 2, is desired, some zero padding is useful to bring N/L to an integral power of 2. With this procedure, slow time registration of the images of the individual looks is automatic, since the compression filter for each look retains exactly the proper phase function to place the image pixels at the proper azimuth positions.

Alternatively, some computational and memory savings can be realized if there is no intention to produce single look images with the processor. In that case, the largest set of Doppler frequency data ever needed at any one time is that corresponding to the band of one of the multiple looks, of bandwidth B_p = B_D/L for an L-look image. The memory savings in such a case are obvious. The computational savings in a frequency domain processor follow because doing L FFTs of length N/L requires computation of the order L(N/L) log(N/L), which is less than that for one FFT of length N, which requires computation of order N log(N). In time domain processing, the savings are in the ratio of N² to L(N/L)², since both the data length and the compression filter length decrease in the ratio N/L for each look computation. In either case of time or
frequency domain processing, with reduced data span, the look filtering should be done in the time domain to avoid taking a full band FFT of the Doppler data. A conventional FIR filter is applied to the PRF-sampled azimuth time data in each slant range bin to produce the data for each look. Since the band of each look is only 1/L the band of the Doppler data, decimation is used as well as filtering to reduce the data rate to the minimum needed for the individual look bands.

If the segmentation procedure of the last paragraph is used, compensation must be made according to which subband the image came from before superposing them. The images for each look must be shifted along track explicitly, if the same compression filter is used for each look. The necessary correction can be done in the Doppler domain by adjusting the filtered output after compression by the delay factor exp[−jπf_Dc(f_Dci − f_Dc)/f_R] to account for the different Doppler center frequencies f_Dci in each look. Alternatively, these factors can simply be included in the look filter to result in a different filter to be used for each look.

The computed single-look image is also affected by system noise. The complex image value is

î = ζ̂ + n̂

where ζ̂ is a realization of ζ and n̂ is an independent complex Gaussian noise output. The mean image is then

E[|î|²] = I₀ + P_n

so that system noise adds a bias to the desired image I₀. Since the quantity |î|² also has the exponential density, its mean is also its standard deviation, so that the biased noisy single-look image still has unity SNR.

The system noise bias in the image estimate |î|² can be removed if an estimator P̂_n of the noise power is available. That can be obtained from receiver output voltage during a pre-imaging period with no input, or from a dark part of the image with little terrain backscatter evident. The image is then computed as

Î = |î|² − P̂_n

This has mean

E(Î) = I₀ + P_n − P_n = I₀

where we assume P̂_n to be an unbiased estimator of P_n. The variance of the computed image is then the variance of |î|², which is (I₀ + P_n)², increased by the variance of the estimator P̂_n.

5.3 CLUTTERLOCK AND AUTOFOCUS

In SAR image formation, using a high resolution (focussed) system of the type discussed in Chapter 4, the compression operation in azimuth (slow) time is the crucial ingredient which makes the system function. The azimuth compression filter is the filter appropriate to the range compressed point target response of Eqn. (4.1.24).
The filter therefore involves the parameters of the range migration locus R(s), the slant range to a point target as a function of slow time. The locus R(s) is usefully expanded in a Taylor series about the slow time s_c at which the target is in the center of the radar beam (Fig. 4.1). Although at least one processor (Barber, 1985a) uses terms through the third order in slow time, it usually suffices to retain only the second order term:

R(s) = R_c + Ṙ_c(s − s_c) + R̈_c(s − s_c)²/2    (5.3.2)

where the Doppler center frequency f_Dc and azimuth chirp constant f_R are defined as:

f_Dc = −2Ṙ_c/λ,    f_R = −2R̈_c/λ    (5.3.3)

In Appendix B we discuss determination of the parameters f_Dc and f_R from satellite orbit and attitude data. Such procedures are inherently quite accurate, up to the level of accuracy of the attitude measurement instrumentation and the accuracy of the satellite orbital parameters computed from tracking data. It can be, however, that instrumentation difficulties limit the former, while the time lag in smoothing and refining tracking data may make it inconvenient to use the latter. For these reasons, most image formation processors include procedures for automatic determination of the parameters f_Dc and f_R to be used for any particular scene, using only information derived from the radar data to be processed. These procedures are called respectively clutterlock and autofocus algorithms, and we will discuss some of them in this section.

A few remarks on terminology might be interesting. The term "focus" is of course borrowed from optics, in analogy to the manipulation of light wavefront curvature carried out by a lens. An autofocus procedure is thereby an algorithm for automatic determination of the wavefront curvature constant f_R of the azimuth filter. Clutterlock is borrowed from conventional aircraft pulse Doppler radar (Mooney and Skillman, 1970). In the case of an aircraft radar at least partially viewing terrain, targets of interest are obscured by the radar returns from terrain reflectors at the same range, the so-called clutter on the radar display. If the target of interest is moving with respect to the terrain, it will have returns which appear at the transmitting aircraft with a different Doppler frequency from that at which the clutter features appear, the latter frequency being due solely to motion of the radar platform. There is thus the possibility of carrying out Doppler filtering on the radar returns to block the band of the clutter (terrain) returns, while passing any other Doppler frequencies (due to targets moving with respect to the terrain). The extent to which a moving target can thereby be distinguished from its stationary background is the subclutter visibility capability of the radar. If this technique is to work, the Doppler clutter rejection filter must always center more or less on the band of the terrain returns, which changes as the motion of the platform aircraft changes. The filter rejection band is locked to the clutter band by feedback circuits (or algorithms) called, reasonably enough, clutterlock circuits. Hence an algorithm which automatically determines the center frequency f_Dc of the Doppler band of SAR azimuth time returns is called a clutterlock algorithm.

5.3.1 Clutterlock Procedures

All SAR clutterlock algorithms for automatic determination of the center frequency f_Dc of the Doppler spectrum in one way or another use the fact that the high azimuth bandwidth time product of a SAR locks Doppler frequency to position along track. Thus, returns contributing to any particular Doppler frequency originate from targets in a specific part of the radar beam. As a consequence, the power of the Doppler spectrum around the Doppler center frequency f_Dc on average should follow the shape of the two-way azimuth power pattern G²(s − s_c) of the antenna. (Here G(s) is the one-way power pattern G(θ, φ) evaluated at constant ground range and expressed in terms of azimuth time s = x/V_s.)

One clutterlock scheme (Berland, 1981; McDonough et al., 1985) therefore determines f_Dc by correlation of the average Doppler spectrum of the basebanded data, before range compression, with the known azimuth power pattern of the antenna. Another implementation (Bennett et al., 1980) uses the spectrum of range compressed data before azimuth compression, and determines f_Dc as the frequency of the spectral peak. Other workers have used range and azimuth compressed data. A single-look complex image has been used (Li et al., 1985), with f_Dc taken as the frequency balancing the spectral power. Multiple single-look real images have also been used (Curlander et al., 1982), with the looks taken equally above and below the Doppler centroid assumed in the processing. The final centroid value is that for which the image energies balance.

A refinement (Jin and Chang, 1992) of the technique of Curlander et al. (1982) determines the maximum likelihood estimate of f_Dc given multiple real images from different looks, in the case of a scene with constant backscatter coefficient. An extension of this latter algorithm, to scenes with non-constant backscatter coefficient, is described by Jin (1986, 1989). Another scheme (Madsen, 1989) operating in the time domain has been implemented.

A further consideration arises because f_Dc is a function of the range R_c at beam center. Since it is essential that some spectrum averaging take place before applying the procedures indicated above, some span of R_c values will contribute to the determination of f_Dc. It is therefore necessary to introduce some model for the variation of f_Dc with R_c, which may simply be to assume that f_Dc is constant over the range of R_c used in its determination, or varies approximately linearly, or obeys some more detailed model, such as one determined from the considerations of Appendix B. The further assumption is made that f_Dc is constant with slow time over a sufficient span to allow the Doppler spectrum to be developed (perhaps 5-10 km), an assumption which is satisfied to good accuracy.
We will now indicate in some detail the specific choices which have been made in developing these clutterlock algorithms. The precise arrangement of procedures is not especially critical, since slight to moderate misplacement of f_Dc (< 0.05 B_D) only leads to some loss of SNR and some increase in ambiguity levels (Li and Johnson, 1983). However, some of the procedures can lead to noticeable SNR and ambiguity effects with certain scene characteristics, so that the availability of a repertoire of procedures is useful.

in order to determine the constants a and b. (Here H is the nominal altitude of the satellite.) The algorithm was implemented using data prior to range compression and without range migration compensation. At some tens of range values spaced uniformly across the swath, clusters of a few adjacent range bins were evaluated. Along each range of a cluster, an FFT was taken in the azimuth direction to create Doppler spectra at adjacent ranges (Fig. 5.4b). The squared amplitudes of these adjacent spectra were averaged at each frequency to yield a single power spectrum for each cluster.

Each averaged power spectrum was then correlated with the nominal antenna two-way power pattern G²(s) in the along track dimension, assuming along track time and Doppler frequency to be adequately locked together, to determine the Doppler frequency f̂_Dc(R_c) at beam center for that particular range (taken say at the center of the cluster). These values for the various clusters across the full range swath were then fit to the Doppler centroid model Eqn. (5.3.4) to determine the constants a and b. During SAR processing, the Doppler model Eqn. (5.3.4) was used with the values a and b found to determine a value f_Dc as needed for processing the image at any particular range R_c. The use of a large (full swath) data span and multiple stage smoothing alleviated such effects.

Essentially this algorithm, taking a = 0 in the model Eqn. (5.3.4) (assuming negligible eccentricity of the satellite orbit), was used earlier by Berland (1981). The same algorithm has been used with range compressed data (Bennett et al., 1980).

In the subaperture scheme of Curlander et al. (1982), described above, a normalized difference ΔE between the energies of real images formed from looks above and below a trial centroid value of f_Dc is computed (Eqn. (5.3.5)). The trial value of f_Dc is then incremented by some nominal amount, say 10 Hz, and the entire procedure repeated to obtain a new value ΔE. Some number (say 16) of such values are computed and plotted vs. f_Dc. The value of f_Dc for which a linear fit to the ΔE(f_Dc) values intersects ΔE = 0 is taken as the estimate f̂_Dc for the particular range of the image piece used in the computations. The entire procedure is then repeated for each 1 km or so span of slant range across the range swath, and a linear fit made to the resulting values f̂_Dc(R) to determine the final (assumed linear) relation of f_Dc to R_c.

Although somewhat computationally intensive, the procedure was reported to be accurate to within a few Hz over ocean regions, which are nearly homogeneous in scattering properties, and to within a few tens of Hz over urban regions. This accuracy estimate was based on the observed variation of the estimates across the swath about the deterministic model of f_Dc(R).

In the algorithm of Li et al. (1985), rather than four subaperture images, a single full aperture complex image ζ̂(s, R) is produced at each range, using a trial value of f_Dc computed from nominal orbit parameters. Azimuth spectra ζ̂(f, R) are produced, as indicated in Fig. 5.5, and averaged over a number of adjacent range bins spanning a small region (say 1 km) over which f_Dc(R) is nominally constant. Each average power spectrum |ζ̂(f, R_i)|² is then balanced to find the frequency above and below which half the power lies. That collection of estimates f̂_Dc(R_i) is then fitted to a linear model f_Dc(R) to determine the final values f̂_Dc(R).
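The power balancing step of Li et al. (1985) amounts to locating the median frequency of the averaged Doppler power spectrum. A noiseless sketch of that step follows; the Gaussian-shaped spectrum, PRF, and centroid value are illustrative stand-ins, not values from the text.

```python
import numpy as np

# Power-balance centroid sketch, after the scheme of Li et al. (1985):
# find the frequency above and below which half the spectral power lies.
# The Gaussian "spectrum" below stands in for a real averaged |zeta(f)|^2.
prf = 1600.0                        # assumed PRF, Hz
f = np.linspace(-prf / 2, prf / 2, 2048)
f_dc_true = 123.0                   # simulated Doppler centroid
power = np.exp(-((f - f_dc_true) / 300.0) ** 2)

cum = np.cumsum(power)              # cumulative power across the band
f_dc_est = f[np.searchsorted(cum, cum[-1] / 2)]
print(f_dc_est)                     # within a frequency bin or so of 123 Hz
```

With real data the spectrum would first be averaged over adjacent range bins, and the resulting estimates f̂_Dc(R_i) fitted to a linear model in range, as described above.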
Even though the use of azimuth compressed (image) data obviates the problem of Fig. 5.5, Li et al. (1985) note that some bias of f_Dc is present. It is attributed to variation of the true reflectivity ζ(x, R) of discrete targets with respect to aspect angle, so that they may appear more strongly in some parts of the Doppler spectrum than in others. The effect was not noted for homogeneous scenes.

Jin (1989) worked out the statistics of the quantity ΔE of Eqn. (5.3.5), assuming that the computed real images had elements which were exponentially distributed (Section 5.2), and independent from one resolution cell to another. He determined that, approximately, the mean of ΔE of Eqn. (5.3.5) was related to the deviation Δf_Dc of the value f_Dc^g used in the computation of the images from the true value f_Dc by:

E(ΔE) = C₁ Δf_Dc    (5.3.6)

where

Δf_Dc = f_Dc − f_Dc^g    (5.3.7)

and

C₁ = 2[W(0) − W(B_p/2)] / ∫_{−B_p/2}^{B_p/2} W(f) df

where W is the two-way antenna power pattern expressed in the Doppler frequency domain. From Eqn. (5.3.6), an estimator of Δf_Dc is just Δf̂_Dc = ΔE/C₁, where ΔE is the value at hand, so that, from Eqn. (5.3.7), f_Dc can be estimated as f̂_Dc = f_Dc^g + ΔE/C₁. The correction procedure is iterated using the value f̂_Dc as a new value f_Dc^g.

Minimum Variance Unbiased Centroid Estimation

Jin and Chang (1992) and Jin (1989, Appendix B) have considered clutterlock for a homogeneous scene, that is, one for which the exponentially distributed intensities of the scene elements have constant mean, so that the backscatter coefficient σ⁰ is constant. For such a scene, the azimuth time variation of the range compressed data phasor is given by the convolution

v̂(s, R) = ∫ h(s − s'|R) ζ(s', R) ds'    (5.3.9)

where h is the azimuth impulse response function, embodying the two-way azimuth antenna voltage pattern (the one-way power pattern) and the Doppler phase shift. Then the azimuth spectrum is

v̂(f, R) = H(f|R) Z(f, R)

where

|H(f|R)|² = W(f − f_Dc)

because of the time and frequency locking effect of the high azimuth bandwidth time product. The function Z is the Doppler spectrum of the complex reflectivity ζ.

If the azimuth compression operation is carried out with a filter H⁻¹(f|R), then the computed complex image ζ̂(s, R) has spectrum

ζ̂(f, R) = H⁻¹(f|R) v̂(f, R)

so that

|ζ̂(f, R)|² = |H⁻¹(f|R)|² W(f − f_Dc) |Z(f, R)|²

where again W is the two-way antenna power pattern in the Doppler domain. In this, the term |H⁻¹|² is known, and is unity if the compression filter is not weighted for sidelobe control. The term |Z|² is an exponentially distributed random variable, since the spectrum Z is a linear operation on the complex Gaussian process ζ(s, R).

Using the assumed constant mean of |ζ|² over the scene, Jin and Chang (1992) derive the minimum variance unbiased estimator Δf̂_Dc of the deviation Δf_Dc = f_Dc − f_Dc^g. Here f_Dc^g is the Doppler center frequency used in forming the image, and f_Dc is the true value, about which the antenna pattern W(f − f_Dc) is assumed to be symmetric. They find the estimator of Eqn. (5.3.10).
230 ANCILLARY PROCESSES IN IMAGE FORMATION 5.3 CLUTTERLOCK AND AUTOFOCUS 231
where The integrals in this last equation are just the spectral energies of images
created from weighting of the portions of the Doppler spectrum below and
w(f) = (1/a)W'(f)/W 2(f) (5.3.11)
above the trial centroid fge· The Doppler band can be further subdivided into
with multiple (e.g. four) "looks", with energies E'1 , •• .,E~ computed from four
B,/2
weighted subapertures. The denominator term,
a= SP
f -B,/2
[W'(f)/W(f)] 2 df
The slow time correlation function of the azimuth data is

R(t) = E[ζ(s + t)ζ*(s)]    (5.3.14)

Since the power spectrum and the correlation function are a Fourier transform pair, any shift in the power spectrum, say to S(f − f_DC), is evidenced by a phase factor in the correlation function:

R(t) → R(t) exp(j2πf_DC t)

This suggests that we can determine f_DC by analysis of the phase of the slow time correlation function of a computed image line ζ(s, R), which can be estimated using Eqn. (5.3.14).

Suppose that the true scene is homogeneous, with independent intensities in each resolution element. Then the reflectivity ζ(s, R) has a power spectral density in the azimuth variable which is constant, where I_0 = E|ζ(s, R)|² is the scene intensity and B_D is the azimuth bandwidth of the scene. The scene power spectrum enters into the azimuth data through the antenna azimuth two-way voltage pattern G(s), expressed in the frequency domain through the frequency time locking relation, and shifted to the true Doppler center frequency f_DC. The range compressed data azimuth power spectral density thereby follows as Eqn. (5.3.16). The azimuth data of Eqn. (5.3.16) are passed through a compression filter H(f − f'_DC), with an amplitude spectrum H(f) shifted to some presumed Doppler center frequency f'_DC. The line of image thereby produced has a correlation function

R_i(t) = (I_0/B_D) exp(j2πf_DC t) ∫_{−∞}^{∞} |H(f − Δf)|² W(f) exp(j2πft) df    (5.3.17)

where Δf = f'_DC − f_DC. For small Δf, for any specified t = t_0, the amplitude and phase of the integral in Eqn. (5.3.17) will be proportional to Δf, as expressed in Eqn. (5.3.18). For the selected t_0, the value R_i(t_0) is estimated based on Eqn. (5.3.14). The angle of that complex number, say 2πf̂_0 t_0, from Eqn. (5.3.18) then yields the value of f_DC:

2πf̂_0 t_0 = 2πf_DC t_0 + 2πa_0 t_0 Δf
f̂_0 = f_DC + a_0(f'_DC − f_DC)
f̂_DC = (f̂_0 − a_0 f'_DC)/(1 − a_0)    (5.3.19)

The procedure is iterated. Madsen (1989) suggests that the first sample of the estimated autocorrelation be used, so that t_0 is the first available lag value. The coefficient a_0 in Eqn. (5.3.18) is derived under some reasonable assumptions by Madsen (1985). As Madsen (1989) suggests, its determination can be obviated by plotting a succession of values f̂_0, found with different f'_DC, in order to determine the value for which f̂_0 = f'_DC, implying from Eqn. (5.3.19) that f̂_DC = f'_DC.

The considerable computational efficiency of Madsen's method comes about partly because it is not necessary to compute any power spectra, but mainly because of the possibility of computing the estimate of R_i(t_0) using hard limited data. In particular, let x, y be any two real stationary Gaussian processes, and let x_s, y_s be their hard limited versions:

x_s(t) = 1,  x(t) ≥ 0
       = −1, x(t) < 0
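The phase-of-correlation idea can be sketched in a few lines. The simulated scene, the PRF, and the function name below are illustrative assumptions; the estimate is ambiguous by integer multiples of the PRF, and the hard-limiting refinement is omitted:

```python
import numpy as np

def centroid_from_lag1(z, prf):
    """Doppler centroid (modulo the PRF) from the phase of the slow-time
    autocorrelation at the first available lag, as in Eqn. (5.3.19) with
    t0 = 1/prf and the small correction a0 neglected."""
    r1 = np.sum(z[1:] * np.conj(z[:-1]))     # estimate of R_i(t0)
    return np.angle(r1) * prf / (2 * np.pi)

# Simulated homogeneous scene: band-limited complex Gaussian clutter
# shifted to a known Doppler centroid (all values illustrative).
rng = np.random.default_rng(0)
prf, f_dc, n = 1700.0, 230.0, 200_000
base = rng.standard_normal(n) + 1j * rng.standard_normal(n)
base = np.convolve(base, np.ones(8) / 8, mode="same")     # low-pass -> correlated
z = base * np.exp(2j * np.pi * f_dc * np.arange(n) / prf)
```

No power spectra are computed; a single pass over the data accumulates the lag-one correlation, which is the source of the method's efficiency.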
and similarly for y. The cross-correlation coefficient ρ_xy(τ) of the original two processes is related to the cross-correlation function of the hard limited versions by (Papoulis, 1965, p. 483):

ρ_xy(τ) = sin[(π/2) R_{x_s y_s}(τ)]    (5.3.20)

Since the correlation coefficient and correlation function of a complex process such as ζ(s, R) have the same phase angle, Madsen suggests applying his procedure to the estimated correlation coefficient ρ_i(τ) rather than to R_i(τ). Using Eqn. (5.3.20), ρ_i(τ) can be computed by determining the cross-correlation of hard limited data, which can be done essentially by tabulation of sign comparisons of x(s + τ) = Re ζ(s + τ) and y(s) = Im ζ(s).

Madsen (1985) finds that the variance of his estimator Eqn. (5.3.19) is proportional to scene contrast. In practice, Madsen (1989) reports accuracy of the same order as previous (frequency domain) methods, combined with significant computational savings.

5.3.2 Autofocus

Most SAR image formation processors in current use carry out determination of the azimuth chirp constant f_R in the same way, using the subaperture correlation method (Bennett et al., 1981; Curlander et al., 1982; Wu et al., 1982; McDonough et al., 1985). The exceptions are those processors, such as in Barber (1985a), which use direct computation of f_R from orbital data according to the expressions of Appendix B, and processors, such as in Herland (1981), which use the fact that the image contrast decreases if the speed parameter V in the model f_R = −2V²/(λR) is improperly chosen, thereby defocusing the image (Herland, 1980). Here we will concentrate attention on the subaperture method.

As with so much of SAR processing, the subaperture method depends on the locking relationship between azimuth time or position and Doppler frequency:

f = f_DC + f_R(s − s_DC)    (5.3.21)

Suppose that two complete intensity images were produced for some modest sized patch of terrain, taken small enough in range extent that f_R could be considered constant, and large enough in azimuth extent to allow a convenient FFT size. Each image is produced from a different part of the Doppler spectrum, as in multilook processing. Some nominal value f'_R is used in the processing. After formation of the two images, they will be registered in azimuth time by shifting one relative to the other by exactly the amount corresponding to Eqn. (5.3.21):

Δs' = (f_DC¹ − f_DC²)/f'_R    (5.3.22)

where f_DC¹, f_DC² are the centers of the subbands used in forming the images, and f'_R is the trial value used. If we were forming a single multilook image, the registered subimages would now be added. However, we now make the observation that, if the value f'_R used in processing is not the correct value f_R, the registration will be incorrect, because the imposed azimuth shift Eqn. (5.3.22) will not accord with the actual relation in the image:

Δs = (f_DC¹ − f_DC²)/f_R    (5.3.23)

Thus, the two images, which should be identical on the same time scale s, will in fact be displaced from one another in time, with the amount of the displacement being a measure of the mismatch in f_R between scene and processor.

In the processor of Curlander et al. (1982), the outer two looks of a four-look processor are used in this procedure. A nominal value of f'_R is chosen, two images I_1(s, R) and I_2(s, R) are produced, and their cross correlation function is estimated for each range R of the image. These correlations are averaged over range to obtain a single average cross correlation function. The location in time of the peak of that function is found, for example by reading off the peak of a local quadratic fit around the nominal peak. This gives the measure of slow time misregistration of the two images:

δs = (f_DC¹ − f_DC²)(1/f_R − 1/f'_R)    (5.3.24)

This is taken as one point on a curve of δs vs. f'_R, and the entire process is cycled for new nominal values f'_R, displaced slightly (a few Hz/s) from one another. The correct value of f_R for the range used in the images is taken as the value at which a linear fit to the points on such a curve crosses the axis δs = 0, implying from Eqn. (5.3.24) that f'_R = f_R. The entire procedure is stepped along in range across the swath of the SAR. The procedure works best over land areas, where point-like targets exist which act to sharpen the cross-correlation peaks, with a reported accuracy of a few tenths of a Hz/s.
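The local quadratic fit around the nominal peak is the standard three-point parabolic interpolation; a generic sketch (not the cited processors' actual code) is:

```python
import numpy as np

def quadratic_peak(c):
    """Refine the peak location of a sampled correlation c to sub-sample
    precision by fitting a parabola through the peak and its two neighbors."""
    k = int(np.argmax(c))
    y0, y1, y2 = c[k - 1], c[k], c[k + 1]
    # vertex of the parabola through (-1, y0), (0, y1), (1, y2)
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return k + delta

# Samples of an exact parabola peaking at lag 10.3 are recovered exactly.
x = np.arange(20, dtype=float)
c = 5.0 - (x - 10.3) ** 2
```

Because the top of a broad correlation peak is locally quadratic, this refinement is what allows the misregistration δs to be read to a small fraction of a slow time sample.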
In another version of the same idea (McDonough et al., 1985), the model equation of Appendix B is used:

f_R = −2V²/(λR)    (5.3.25)

where V is an equivalent speed, very nearly constant with both range and azimuth position over a typical scene. Some nominal value of V is chosen, perhaps from nominal orbit data, or simply the approximate value of Eqn. (B.4.12):

V = V_s [R_e/(R_e + H)]^{1/2}

where V_s, H are nominal satellite speed and altitude, and R_e is the nominal earth radius. Using the nominal value of V in the model Eqn. (5.3.25) for f_R, a moderate size piece of image is formed from each of at least two Doppler subbands. A value f̂_R for each range in the image is computed from Eqn. (5.3.25), using the nominal V, and that value f̂_R(R_c) is used in the compression processing.

Suppose the two images are produced with Doppler bands having center frequencies which differ by some amount Δf_DC. Then we will expect the pixels in each range line of the two images to differ in slow time location s by an amount, from Eqn. (5.3.22),

Δs' = Δf_DC/f̂_R

We will compensate each range line of one of the images by that amount, so as to register the two images in slow time. In reality, however, the pixels in the two images along any range line will differ in slow time location by the amount Δs = Δf_DC/f_R, where f_R is the true value for that range in the scene. After compensation, therefore, the pixels along any range line will still misregister by an amount as in Eqn. (5.3.24). The cross correlation of the two images is then computed for each range bin:

C(γ; R_c) = Σ_s I_1(s, R_c) I_2(s + γ, R_c)    (5.3.27)

where I_1 and I_2 are the intensities of the pixels in the two images and the sum is over whatever portion of image slow time has been computed.

Since, in this version, we change the value f_R over range as in Eqn. (5.3.25), there is a systematic azimuth displacement as a function of range. We need to compensate that dependence before averaging the correlation functions Eqn. (5.3.27) over range. This can be done by computing the average with the lag of each range bin's correlation referred to the smallest range bin, where the sum is over whatever range bins are available in the image and R_0 is the smallest value of R_c used in the computations of Eqn. (5.3.27). The value γ_p of γ at which the averaged correlation peaks is then the measure of δs, Eqn. (5.3.28), which may be solved for the unknown value V.

In the particular case that the range interval used in the image formation is sufficiently small that f_R, rather than V, can be considered constant, the formulas reduce to the earlier case of Eqn. (5.3.24), with the peak location given by Eqn. (5.3.29).
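The zero-crossing logic of the preceding pages can be sketched directly from Eqn. (5.3.24); the chirp rate and subband separation below are illustrative values, and the misregistration is generated from the model rather than measured from images:

```python
import numpy as np

f_R_true, d_fdc = 2000.0, 500.0   # Hz/s and Hz, illustrative values

def misregistration(f_R_trial):
    # Eqn. (5.3.24): residual shift after registering with the trial rate.
    return d_fdc * (1.0 / f_R_true - 1.0 / f_R_trial)

# Cycle the processor over trial rates displaced a few Hz/s from one
# another, then take the root of a linear fit through the (trial, shift)
# points: the curve crosses delta-s = 0 at the true rate.
trials = f_R_true + np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
shifts = misregistration(trials)
slope, intercept = np.polyfit(trials, shifts, 1)
f_R_est = -intercept / slope
```

The residual curve is only locally linear in f'_R, so in practice the trial values are kept within a small neighborhood of the nominal rate, as the text describes.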
R(s) ≈ R_c − (λf_DC/2)(s − s_c)    (5.4.2)

Figure 5.7 Range walk locus error resulting from use of ambiguous Doppler spectrum with m ≠ 1.
5.4 RESOLUTION OF THE AZIMUTH AMBIGUITY
at f_b, will be associated with a frequency in the second subband. At C-band, say λ = 5 cm, and at the altitude of the space shuttle, R_c = 250 km, with a common single-look azimuth resolution δx = 6 m and range resolution δR = 7 m, this yields (for m = 1)

ΔR/δR = 0.6

Hence the misregistration is less than a resolution cell per ambiguity cycle, and may be difficult to sense. (The cross-correlation uses single-look images, which have a signal-to-speckle noise ratio of only 0 dB.) The situation worsens at X-band.

Ambiguity Resolution Using Multiple PRFs

Accordingly, another method of resolving the Doppler ambiguity has been proposed by Chang and Curlander (1992). It assumes the transmission of radar pulse trains with more than one pulse repetition frequency f_p. The procedure is reminiscent of the staggered PRF method (Mooney and Skillman, 1970) of resolving range ambiguity due to second time around (Section 1.2.1), but is implemented as brief (a second or less) bursts of pulses at each f_p in turn. The result is the availability of spectra of the Doppler signal in each range bin, sampled at a variety of PRFs f_p.

The idea of the method is clear from Fig. 5.8. The position of the "real" Doppler spectrum, centered on the true Doppler centroid f_DC, is the same regardless of sampling rate f_p, while the "pretenders" change position as f_p changes. Clutterlock algorithms yield estimates f'_DC in the baseband region 0 ≤ f'_DC < f_p. The object is to analyze the measured values f'_DC resulting from the various f_p, and infer from them the true value f_DC.

Figure 5.8 True Doppler spectrum and differing ambiguities induced by two PRFs f_p^1, f_p^2.

We then have the following problem to solve. Given measured values

f_DC^i = f_DC − k_i f_p^i,  i = 1, ..., M    (5.4.9)

where the k_i are unknown integers, find f_DC. One obvious procedure is to compute the numbers f_0^i = f_DC^i + n f_p^i for n = ±1, ±2, ..., until we observe f_0^1 = f_0^2 = ⋯, so that we require the error among the candidate values to vanish, from which we conclude f_DC = f_0. Chang and Curlander (1992) set forth a more deductive solution, which has the possibility of extension to account for measurement errors.

Assume first that all frequency values are integers. Then by definition the expression Eqn. (5.4.9) is a congruence (Barnett, 1969, Chapter 6):

f_DC ≡ f_DC^i mod(f_p^i),  i = 1, ..., M    (5.4.10)

That is, the integer difference f_DC − f_DC^i is an integer multiple of the integer f_p^i. Given the numbers f_DC^i and f_p^i, we want to solve the simultaneous set Eqn. (5.4.10) for the unknown f_DC.

The existence of a solution to Eqn. (5.4.10) rests on the Chinese remainder theorem (Barnett, 1969, p. 115): if the members of each pair f_p^i, f_p^j, i ≠ j, have no common integer divisors other than unity, then the simultaneous set Eqn. (5.4.10) always has solutions, and, further, those solutions are congruent modulo f_M = f_p^1 × ⋯ × f_p^M. That is, the solution f_DC is determined to within the product f_M, and the ambiguity span of the Doppler centroid is expanded to that value.

The proof of the Chinese remainder theorem is by construction, and is given by Barnett (1969, p. 115). In the new baseband 0 ≤ f_DC < f_M, the solution is

f_DC = [ Σ_{i=1}^{M} M_i n_i f_DC^i ] mod(f_M)    (5.4.11)

where M_i = f_M/f_p^i and the integers n_i are any solutions of

M_i n_i ≡ 1 mod(f_p^i)    (5.4.12)

Barnett (1969, p. 88) shows that Eqn. (5.4.12) has exactly one solution n_i, modulo f_p^i. That solution can be found by solving the Diophantine equation corresponding to the congruence Eqn. (5.4.12):

M_i n_i + f_p^i k_i = 1    (5.4.13)

Euclid's Algorithm

The solution of the equation Eqn. (5.4.13) can be constructed (Barnett, 1969, p. 51) by chaining backwards through Euclid's algorithm (Barnett, 1969, p. 47) for finding the greatest common divisor (gcd) of the integers M_i and f_p^i. This will be illustrated by an example.
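The "obvious procedure" can be written down directly; the PRFs and baseband measurements below anticipate the worked example that follows, and the function name is illustrative:

```python
def resolve_by_search(residues, prfs, f_max):
    """Return the smallest f >= 0 with f % prfs[i] == residues[i] for all i,
    stepping candidates f = residues[0] + n * prfs[0] (brute force)."""
    f = residues[0]
    while f < f_max:
        if all(f % p == r for p, r in zip(prfs, residues)):
            return f
        f += prfs[0]
    raise ValueError("no consistent centroid below f_max")

# With the two PRFs and measured residues of the example below, the search
# must stop at the first candidate consistent with both congruences.
f_dc = resolve_by_search([319, 40], [1652, 1745], 1652 * 1745)
```

The search is simple but gives no insight into uniqueness; the congruence formulation that follows shows the answer is unique within the expanded baseband f_M.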
Suppose we choose f_p^1 = 1652 = 2 × 2 × 7 × 59 and f_p^2 = 1745 = 5 × 349. Having no common factor other than one, these fulfill the requirement of the method. Suppose the true Doppler centroid is f_DC = 5275. We then measure (assuming a perfect clutterlock algorithm):

f_DC^1 = 5275 mod(1652) = 319
f_DC^2 = 5275 mod(1745) = 40

The set Eqn. (5.4.13) is (M_1 = 1745, M_2 = 1652):

1745 n_1 + 1652 k_1 = 1    (5.4.14)
1652 n_2 + 1745 k_2 = 1    (5.4.15)

We are assured by hypothesis that the greatest common divisor of (a, b) = (1745, 1652) is 1. Euclid's algorithm leads to that result as (remainders are labeled r_i):

a = 1745 = 1652 × 1 + 93
b = 1652 = 93 × 17 + 71
r_1 = 93 = 71 × 1 + 22
r_2 = 71 = 22 × 3 + 5
r_3 = 22 = 5 × 4 + 2
r_4 = 5 = 2 × 2 + 1
r_5 = 2 = 1 × 2 + 0

which identifies 1 (the last nonzero remainder) as the gcd of (1745, 1652). Now carry out the back solution of the Euclid array:

1 = 5 − 2 × 2
  = 5 − 2 × (22 − 5 × 4) = −2 × 22 + 9 × 5
  = −2 × 22 + 9 × (71 − 22 × 3) = 9 × 71 − 29 × 22
  = 9 × 71 − 29 × (93 − 71 × 1) = −29 × 93 + 38 × 71
  = −29 × 93 + 38 × (1652 − 93 × 17) = 38 × 1652 − 675 × 93
  = 38 × 1652 − 675 × (1745 − 1652 × 1)
  = −675 × 1745 + 713 × 1652

The solution of Eqn. (5.4.14) is now identified as

n_1 = −675,  k_1 = 713

Similarly,

n_2 = 713,  k_2 = −675

Then Eqn. (5.4.11) yields

f_DC = [1745 × (−675) × 319 + 1652 × 713 × 40] mod(2882740) = 5275

Extensions of the Multiple PRF Method

Chang and Curlander (1992) develop a slight extension of this algorithm, aimed at noninteger measured values f_DC^i. From Eqn. (5.4.10), it is clear that the integer part of the f_DC^i arises from the integer part of f_DC, so that the integer parts of the f_DC^i can be used in the above procedure, and their (common) fractional part added to the resulting f_DC. To allow for estimation errors in the f_DC^i, the measured f_DC^i are least squares fit to numbers a_i constrained to have a common fractional part, and the integer parts of the resulting a_i are used in the algorithm. Their common fractional part is added to the solution f_DC found.

Also, the case is considered that the true value f_DC may change slightly over the time interval from one PRF burst to another, so that, in place of Eqn. (5.4.10), we have

f'_DC^i ≡ f_DC^i mod(f_p^i),  i = 1, ..., M    (5.4.16)

where f_DC^i is now the true centroid during burst i. An additional algorithm is presented by Chang and Curlander (1992) for resolution of the ambiguity in this case also, provided a minimum of three values of f_p are used. The values of f_p in this algorithm may contain common factors.

In the algorithm relative to Eqn. (5.4.16), the true values f_DC^i can be assumed to be random perturbations of a nominal value f_DC. Any two of the values f_DC^i, say f_DC^1 and f_DC^2, are used as data in the maximum likelihood estimator of f_DC and k_1, assuming k_2 = k_1 + i, with i some deterministic value, and with p_12(Δf_DC^1, Δf_DC^2) taken as Gaussian.
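The entire construction, Euclid's algorithm, the back solution of the Diophantine equation, and the reassembly of Eqn. (5.4.11), can be sketched compactly; the test values reproduce the example above, and the function names are illustrative:

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b),
    chaining the back substitution through the recursion."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def crt_centroid(residues, prfs):
    """Solve f = residues[i] mod prfs[i] via Eqn. (5.4.11); the PRFs must be
    pairwise coprime, and f is returned in the expanded baseband [0, f_M)."""
    f_M = 1
    for p in prfs:
        f_M *= p
    total = 0
    for r, p in zip(residues, prfs):
        M_i = f_M // p
        g, n_i, _ = egcd(M_i, p)   # M_i * n_i = 1 (mod p), Eqn. (5.4.12)
        assert g == 1              # coprimality requirement of the theorem
        total += M_i * n_i * r
    return total % f_M

# Bezout pair for Eqn. (5.4.14): 1745*(-675) + 1652*713 = 1
g, n1, k1 = egcd(1745, 1652)
```

Running `crt_centroid([319, 40], [1652, 1745])` reassembles the expanded-baseband centroid exactly as in the hand computation.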
REFERENCES
Luscombe, A. P. (1982). "Auxiliary data networks for satellite synthetic aperture radar," Marconi Review, 45(225), pp. 84-105.
Madsen, S. N. (1989). "Estimating the Doppler centroid of SAR data," IEEE Trans. Aerospace and Elec. Sys., AES-25(2), pp. 134-140.
Madsen, S. N. (1985). "Speckle Theory," Electromagnetics Institute, Technical University of Denmark, Report LD62, November.
McDonough, R. N., B. E. Raff and J. L. Kerr (1985). "Image formation from spaceborne synthetic aperture radar signals," Johns Hopkins APL Technical Digest, 6(4), pp. 300-312.
Mooney, D. H. and W. A. Skillman (1970). "Pulse-Doppler Radar," Chapter 19 in Radar Handbook (Skolnik, M. I., ed.), McGraw-Hill, New York, pp. 19.1-19.29.
Papoulis, A. (1965). Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York.
Ulaby, F. T., R. K. Moore and A. K. Fung (1982). Microwave Remote Sensing, Vol. 2, Addison-Wesley, Reading, MA.
Whalen, A. D. (1971). Detection of Signals in Noise, Academic Press, New York.
Wu, C., B. Barkan, W. J. Karplus and D. Caswell (1982). "Seasat synthetic-aperture radar data reduction using parallel programmable array processors," IEEE Trans. Geosci. and Remote Sensing, GE-20(3), pp. 352-358.

CHAPTER 6

SAR FLIGHT SYSTEM

6.1 SYSTEM OVERVIEW

The SAR data system consists of three main subsystems (Fig. 6.1): (1) SAR sensor; (2) Platform (spacecraft or aircraft) and data downlink; and (3) Ground signal processor. The radar subsystem can be functionally divided into four main assemblies: (1) Timing and control; (2) RF (analog) electronics; (3) Digital electronics and data routing; and (4) Antenna. Each of these assemblies can be further divided into subassemblies and components. For
example, the antenna typically has three major subassemblies: (1) Feed; (2) Radiators; and (3) Support structure. The characteristics of each radar assembly will be addressed in more detail in the following sections. The spacecraft (S/C) bus generally provides the downlink processor (including the communications antenna) and the onboard recorders for temporary storage of data.

In discussing the SAR ground data system, we will refer to the various levels of data products as defined by NASA (Butler, 1984). Their definitions, as adapted specifically to SAR data products, are presented in Table 6.1. We will treat the ground receiving station and Level 0 processing for removing the telemetry artifacts as part of the data downlink. The ground data processing subsystem consists of a Level 1A processor, which produces the single-look, complex image by performing two-dimensional matched filtering of the data, followed by a Level 1B processor that performs radiometric and geometric corrections on the Level 1A output image. This Level 1B processor may also perform low pass filtering for speckle noise reduction and detection of the complex image for video display. The final stage of the ground processing is the Level 2, 3 processor that derives geophysical information (e.g., soil moisture, surface roughness) from either a single image frame (Level 2) or multitemporal coregistered images (Level 3).

Each element of the data system introduces noise of some type that corrupts the signal, effectively degrading the overall system performance. Typically, this performance is characterized by the system impulse response function. Additional performance characteristics relating to the radiometric and geometric calibration accuracy will be discussed in detail in Chapters 7 and 8. The key noise sources degrading the synthetic aperture radar performance can be categorized as follows:

1. Amplitude and phase errors across the system bandwidth, which degrade the range impulse response function;
2. Phase instability over time intervals varying from the relatively short round trip propagation time to the longer synthetic aperture duration (or coherent integration time), which primarily degrades the azimuth impulse response function;
3. Thermal noise introduced by the analog electronics, which degrades the system dynamic range and the polarimetric performance (e.g., phase estimation accuracy);
4. Distortion noise, introduced by quantization error, system nonlinearities (e.g., saturation effects) and bit error noise (from the communications subsystem), which degrades the impulse response function in both dimensions.

The degradations introduced by the first two items listed above are generally considered as linear system errors, while the distortion noise is a nonlinear error. To the degree that linear system errors can be characterized as deterministic errors, they can be compensated in the signal processor by
adjusting the matched filter reference function. The residual, random linear error component degrades the impulse response function. The nonlinear noise sources to some degree can be modeled as white additive noise. However, frequency harmonics will arise from saturation effects that must be treated separately. These issues will be discussed in more detail in Section 6.2. The thermal noise will result in measurement errors on an individual sample basis, but over a large number of samples the mean noise power can be accurately estimated and subtracted from the average received power to derive an accurate backscatter coefficient estimate.

TABLE 6.1 SAR Data Level Definitions

Level 0: Reconstructed digital video data.
Level 1A: Reversibly processed image data (one-look, complex) at full resolution, time referenced, and annotated with ancillary information, including radiometric and geometric calibration coefficients and georeferencing parameters (i.e., platform ephemeris) computed and appended but not applied.
Level 1B: Level 1A data that has been geometrically resampled and radiometrically corrected to sensor units (i.e., radar backscatter cross section). Standard SAR data product.
Level 2: Derived geophysical parameters (e.g., ocean wave height, soil moisture, ice concentration) mapped on some uniform time/space grid with processing parameters appended.
Level 3: Geophysical parameter data mapped on uniform space-time grid scales, usually with some completeness and consistency properties added (e.g., missing points interpolated, multiframe mosaics).
Level 4: Model output or results from analysis of lower-level data (i.e., data not measured by the instruments but derived from instrument measurements).

Prior to addressing the system performance specifications for individual assemblies in the radar subsystem, it is instructive to briefly review the radar system operation, keeping in mind that there are a number of variations on this operational scenario. A block diagram of the SAR sensor is shown in Fig. 6.2.

Figure 6.2 Block diagram of SAR subsystem illustrating four assemblies and key subassemblies.

Transmission

The coherent radar signal originates in the stable local oscillator (stalo). This signal is gated into the exciter subsystem according to a predefined pulse duration and pulse repetition frequency (PRF). The exciter modulates the pulse tone with a frequency or phase code. This signal is then translated to the desired carrier frequency by a series of mixers, amplifiers, and bandpass filters. At the carrier frequency, the RF signal is input to a high power amplifier (HPA), which is either a cascade of solid state amplifiers or a traveling wave tube (TWT) device. This high power signal is then passed through a circulator switch to the antenna subsystem. The antenna feed network consists of coaxial cable and/or waveguide with power dividers. It divides the signal into a number of parallel coherent paths (assuming a phased array antenna) which terminate with radiating elements or slots. The feed network may also contain phase shifters
for electronic beam steering and transmitter/receiver (T/R) modules for improving the SNR (i.e., an active array).

Reception

The return echoes are collected by the same antenna radiator and feed system as was used for signal transmission. The exception is an active array, where the T/R module paths are not the same on receive as on transmit. A circulator then switches the echo signal into the receiver, where it is bandpass filtered and input to a low noise amplifier (LNA). A variable gain amplifier (VGA) typically follows the LNA to normalize the signal amplitude according to the target backscatter. This signal is then frequency shifted to an intermediate frequency (IF) for narrowband filtering to the pulse bandwidth, amplified, and again shifted either to baseband or offset video by a series of filters and mixers. This video signal is split by a power divider and digitized using dual analog-to-digital convertors (ADCs) clocked to sample the in-phase (I) and quadrature (Q) components of the baseband signal. Alternatively, a single ADC sampling at a rate twice the system bandwidth can be used to digitize the offset video. The ADC output is then time expansion buffered in a high speed random access memory (RAM) to achieve a constant rate data stream. This data is then passed to a formatter unit, which appends the header (e.g., GMT clock, calibration signals, engineering telemetry) and outputs the data to the platform bus.

Downlink

The platform bus transfers the formatted video data to an onboard signal processing system via the digital data routing electronics for recording, processing, or transmission to a ground receiving station (Fig. 6.3). The ground receiver and Level 0 processor demodulate the carrier signal, strip off the channel coding (that is applied for bit error protection), and correct for telemetry artifacts (e.g., data sequence, polarity). The platform bus may include high density digital recorders (HDDRs) for temporary data storage, digital signal processors for removal of the pulse code modulation (range compression), and/or synthetic aperture formation (azimuth compression). Real-time onboard signal processors are most commonly found in SAR systems designed for military applications.

The complexity of the onboard hardware could be significantly reduced by downlinking the analog video SAR signal and digitizing the data at the receiving station (e.g., Seasat). The disadvantage of this design is a degraded system performance, specifically in terms of the radiometric calibration accuracy.

Figure 6.3 Block diagram of the platform and data downlink subsystem illustrating major subassemblies.

Figure 6.4 Block diagram of the ground data processing subsystem illustrating major processing stages.

Processing

The recovered digitized SAR video data is either recorded on HDDRs by the Level 0 processor (and the tapes shipped to the Level 1 processing facility), or the data is retransmitted and recorded at the Level 1 facility for real-time or off-line processing (Fig. 6.4). The first stage of the Level 1 processing performs data synchronization and reformatting. Since the data is recorded in range line order, the Level 1A signal processing in the range dimension typically precedes the azimuth processing. Almost all correlator systems use two one-dimensional reference functions as an approximation for the exact two-dimensional matched filter. Since the Level 1A output is a single-look complex image, this processing stage is essentially reversible. The processing operations in the Level 1B processor include: (1) radiometric corrections to remove the cross-track signal power modulation; (2) geometric resampling to a map grid; and (3) detection and lowpass filtering for speckle noise reduction. In general these operations, which produce the radiometrically and geometrically calibrated imagery for extraction of surface information, are not reversible. The final processing stage, the Level 2, 3 processor, generates the high level, non-image products. This processor converts the calibrated imagery into geophysical data units that represent some type of surface characteristic. Very few of the Level 2, 3 products can be produced in a fully automated fashion, due to the complex scattering
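The separable approximation, two one-dimensional correlations in place of the exact two-dimensional matched filter, can be illustrated on a synthetic point target; the chirp references, array sizes, and target location below are illustrative, not any flight system's parameters:

```python
import numpy as np

def chirp(n, cycles):
    """Illustrative linear FM reference of length n."""
    t = np.arange(n) - n / 2
    return np.exp(1j * np.pi * cycles * t**2 / (n / 2) ** 2)

na, nr, da, dr = 64, 128, 20, 37
az_ref, r_ref = chirp(na, 16.0), chirp(nr, 32.0)

# Raw data for a single point target: range reference delayed by dr samples,
# modulated in azimuth by the azimuth reference delayed by da lines.
data = np.outer(np.roll(az_ref, da), np.roll(r_ref, dr))

# Two one-dimensional correlations (circular, via the FFT): range
# compression first, then azimuth compression.
rc = np.fft.ifft(np.fft.fft(data, axis=1) * np.conj(np.fft.fft(r_ref))[None, :], axis=1)
img = np.fft.ifft(np.fft.fft(rc, axis=0) * np.conj(np.fft.fft(az_ref))[:, None], axis=0)

peak = np.unravel_index(np.argmax(np.abs(img)), img.shape)   # -> (da, dr)
```

Because the point target response here is an exact outer product of the two references, the separable correlator focuses it perfectly; real data only approximately factor this way, which is the source of the approximation error mentioned in the text.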
· mechanisms that give rise to the target reflectivity. Since this processing requires Azimuth distortion results from pulse-to-pulse errors such as timing jitter or
operator interaction, it is typically not considered as an element of the SAR drift of the coherent local oscillator.
ground data system, even though it is an essential processing stage for extracting
information from the SAR data. System Transfer Function Distortion Analysis
An example of an end-to-end SAR data system design and operation is The radar system transfer function can be modeled as a distortionless filter
presented in Appendix C. This appendix describes the NASA Alaska SAR followed by a distortion filter as shown in Fig. 6.5. We will use the paired echo
Facility ground processing system. This system is designed to process data from the ESA ERS-1 SAR, the NASDA J-ERS-1 SAR, and the Canadian Radarsat systems. It includes all elements of the signal processing chain described above, including a Level 2, 3 processor for multitemporal tracking of sea ice and production of ice concentration maps and ocean wave spectra contour plots.

6.2 RADAR PERFORMANCE MEASURES

As previously discussed, the end-to-end system performance depends on every element in the signal path; random errors as well as distortion errors degrade the impulse response function. Typically three measures are used to characterize this function:

1. Mainlobe broadening (Kml);
2. Peak sidelobe ratio (PSLR);
3. Integrated sidelobe ratio (ISLR).

The mainlobe broadening is usually defined as the actual 3 dB width relative to an ideal value estimated assuming no system errors. The sidelobe measures are evaluated over a region that excludes the mainlobe response (e.g., the ideal null-to-null width). The PSLR is the ratio of the largest sidelobe value (outside a specified mainlobe region) to the mainlobe peak, while the ISLR is the integrated sidelobe to mainlobe power ratio. Prior to considering the individual subsystem elements, we first present techniques to analyze the linear and nonlinear system distortions.

6.2.1 Linear System Analysis

The assumption of system linearity allows us to characterize each element of the SAR system, from the antenna to the signal processor, in terms of its transfer function. Unmodeled or random errors in any of these elements will degrade the performance of the matched filtering process in the SAR correlator and produce errors in the impulse response function (increased sidelobe levels and mainlobe broadening). Certain types of errors tend to affect the system impulse response mainly in the azimuth dimension, while others are predominant in the range dimension. Range distortion often results from errors in narrowband filters, amplifiers, and other devices over the period of the pulse dispersion. Such linear distortions can be analyzed with the paired echo technique (Klauder et al., 1960), which models the distortion filter by a frequency domain transfer admittance Y(ω) given by

Y(ω) = A(ω) exp[jB(ω)]    (6.2.1)

where A(ω) is the amplitude transfer characteristic and B(ω) is the phase characteristic (Fig. 6.5).

Figure 6.5 Linear distortion model of radar system transfer function.

These terms can be expanded in a Fourier series as follows

A(ω) = a0 + Σ (n=1 to ∞) an cos(ncω)    (6.2.2a)

B(ω) = ωb0 + Σ (n=1 to ∞) bn sin(ncω)    (6.2.2b)

where c is a constant dependent on the filter bandwidth.

Each term in the summation of Eqn. (6.2.2) will give rise to a set of echoes on either side of the desired impulse response. As an example, consider the case where n = 1 (i.e., only one ripple component is present). If a signal s(t) is applied to the filter given by Eqn. (6.2.1), the output r(t) is (Berkowitz, 1965)

r(t) = a0 J0(b1) s(t + b0) + Σ (m=1 to ∞) Jm(b1) [(1 + m a1/(a0 b1)) s(t + b0 + mc) + (−1)^m (1 − m a1/(a0 b1)) s(t + b0 − mc)]    (6.2.3)

where Ji is the Bessel function of the first kind and ith order. The first term in Eqn. (6.2.3) is the desired signal, weighted by the zero order Bessel function. Each term of the summation consists of two echoes, advanced and delayed replicas of s(t), weighted by the mth order Bessel function. The desired output signal relative to the input signal is delayed by b0, and the paired echoes are displaced from the desired output by mc (Fig. 6.6). Note that the first phase distortion term b1 actually gives rise to an infinite series of paired echoes, which are generally neglected beyond the first pair.
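The paired echo expansion of Eqn. (6.2.3) is easy to verify numerically. The sketch below applies a single-ripple distortion filter to a smooth test pulse; the ripple values (a0 = 1, a1 = 0.05, b0 = 0, b1 = 0.1 rad) and the ripple period are illustrative choices, not values from the text.

```python
import numpy as np

# Numerical check of the paired echo expansion, Eqn. (6.2.3).
# Single ripple term (n = 1) with illustrative values; b0 = 0 for clarity.
N = 4096
a0, a1, b1, c = 1.0, 0.05, 0.1, 64.0   # ripple parameters (assumed)

t = np.arange(N) - N // 2              # delay axis, samples
w = 2 * np.pi * np.fft.fftfreq(N)      # digital frequency, rad/sample

# Distortion filter Y(w) = A(w) exp[jB(w)] with one cosine/sine ripple
Y = (a0 + a1 * np.cos(c * w)) * np.exp(1j * b1 * np.sin(c * w))

s = np.exp(-(t / 8.0) ** 2)            # smooth test pulse centered at t = 0
r = np.fft.fftshift(np.fft.ifft(np.fft.fft(np.fft.ifftshift(s)) * Y)).real

main = abs(r[N // 2])                  # desired response, ~a0*J0(b1)
echoes = sorted([abs(r[N // 2 + int(c)]), abs(r[N // 2 - int(c)])])
# First paired echoes sit at t = +-c, with relative amplitudes close to
# (a1/2a0 + b1/2) and |a1/2a0 - b1/2|, as the m = 1 term predicts.
print(main, echoes[1] / main, echoes[0] / main)
```

The larger first echo is roughly a1/(2a0) + b1/2 = 0.075 of the mainlobe, consistent with the paired echo amplitudes used for the PSLR expressions that follow.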
The peak sidelobe ratio (PSLR) from each amplitude ripple term is given by

PSLRa = 20 log(an/2a0)    (6.2.4)

and similarly the peak sidelobe for each phase ripple term is given by

PSLRp = 20 log(bn/2)    (6.2.5)

Generally, the system overall peak sidelobe performance is dominated by one of the terms in either Eqn. (6.2.4) or Eqn. (6.2.5). For small amplitude errors, these terms degrade the impulse response function predominantly as a result of quadratic and higher order amplitude versus frequency characteristics (i.e., rms error around a linear fit across the passband). Similarly, only quadratic and higher order phase errors relative to the desired phase versus frequency function will degrade the impulse response. System timing errors (including sample jitter) can be treated as phase errors. As a general rule of thumb, lower order terms (in the summation of Eqn. (6.2.2)) will produce mainlobe broadening errors, whereas the higher order terms will affect predominantly the sidelobes.

To assess the effects of random errors on the mainlobe width and ISLR, a good approximation is that, for small errors, the ISLR is given by the variance of the error about the best linear fit. Thus, for amplitude errors

ISLRa = 20 log σa    (6.2.6)

where σa is the rms amplitude error about a linear fit across the frequency band, and for phase errors

ISLRp = 20 log σp    (6.2.7)

where σp is the rms phase error in radians about the desired phase function. Similarly, the mainlobe broadening for amplitude and phase errors respectively is given by

Kml,a = (1 − σa²)⁻²    (6.2.8)

Kml,p = (1 − σp²)⁻²    (6.2.9)

where Kml is the broadening factor relative to the theoretical mainlobe width.

Each element in the radar subsystem will produce a phase and amplitude error characteristic. To derive an overall performance specification for the radar, it is typically assumed that each error source is an independent process, characterized by some probability distribution function (PDF). The resultant PDF of all error sources is assumed Gaussian, by the central limit theorem, with mean and variance given by the sums of the mean and variance contributions of the individual error processes. This formulation allows the effective ISLR of the system to be calculated as the sum of the ISLR contributions from each subassembly or component comprising the subassembly. A similar formulation can be made for the mainlobe broadening. Thus the overall system performance is given by

ISLR = 10 log(Σi σi²)    (6.2.10)

Kml = Πi Kml,i    (6.2.11)

where σi is the standard deviation of the phase or amplitude error of the ith subassembly and Kml,i is the fractional broadening from the ith subassembly.

Measurement Techniques
A technique commonly used to measure the amplitude characteristic of the system transfer function is to use as input a series of equal level tones spaced across the frequency spectrum and measure the output signal using a power meter. A least squares linear fit is applied to the data points, from which an rms error performance is derived. To measure the phase characteristic, a series of pulses, each at a different frequency spaced across the system bandwidth, is used as an input. The relative change in group delay of each pulse is measured at the output using a network analyzer. This group time delay td(ω) and the phase distortion B(ω) are related by

td(ω) = −dB(ω)/dω    (6.2.12)

Numerical quadrature is used to derive the phase versus frequency data points. A least square error quadratic fit is applied to these points and the rms phase error is calculated from the residuals. For timing error measurements, a counter can be used to measure the relative differences between the leading edges of a series of timing pulses. The variance of these measurements determines the timing jitter, which can then be converted into phase error by

σp = 2πf σt    (6.2.13)

where σt is the rms timing jitter and f is the frequency of the measured signal.

6.2.2 Nonlinear System Analysis

While most radar systems are designed such that their components operate in the linear region over a wide range of inputs, the actual performance of the radar can never be strictly categorized as linear. Given that the return echo amplitude modulation is random, some fraction of the data (i.e., the tails of the probability distribution) will always be in saturation (Fig. 6.7). If the percentage of the data in saturation is small, the system is in a quasi-linear operation mode, where the nonlinearities are characterized by the level of harmonic or intermodulation distortion.
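The error-budget roll-up described above can be sketched numerically. The subassembly rms phase errors below are illustrative values, and the combination rules follow the root-sum-square reading of Eqns. (6.2.7) and (6.2.9) through (6.2.11).

```python
import math

# Error-budget sketch: combine independent subassembly phase errors into a
# system ISLR and mainlobe broadening. The rms values are illustrative.
sigmas = {                      # rms phase error about the desired phase, rad
    "exciter":  math.radians(3.0),
    "receiver": math.radians(5.0),
    "antenna":  math.radians(2.0),
}

var_total = sum(s ** 2 for s in sigmas.values())   # independent sources add

# Per Eqn. (6.2.7), each subassembly ISLR is the error variance; summing the
# variances gives the system ISLR in power terms.
islr_db = 10 * math.log10(var_total)

# Mainlobe broadening on the combined variance, Eqn. (6.2.9) form
kml = (1 - var_total) ** -2

print(f"ISLR = {islr_db:.1f} dB, mainlobe broadening = {kml:.4f}")
```

With these assumed numbers the budget is sidelobe-limited (about -19 dB ISLR) while the broadening stays near 2 percent, illustrating why the sidelobe measures usually drive the subassembly error allocations.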
Figure 6.7 System transfer function illustrating the effect of saturation on the echo signal, where si is the input signal and so is the output signal.

This distortion characteristic is typically evaluated by using a sinusoid as an input signal and measuring the spurious power in the output signal spectrum. However, this technique only approximately estimates the system nonlinearity, since the signal distortion is generally dependent on the frequency of the sinusoidal input. Radar system test procedures therefore typically call for testing with a number of tones spaced at frequencies across the system bandwidth. Additionally, two-tone tests are performed to evaluate intermodulation distortion, where several pairs of tone inputs are used to characterize the second order system nonlinear response characteristics. It should be noted, however, that no finite set of tone inputs can fully characterize the system nonlinearity. An alternative measurement technique using Gaussian white noise as an input is described in Appendix D. This approach, which is used routinely in physiological system analysis, provides a complete characterization of the system nonlinearities. However, even though a comprehensive set of tones and tone pairs spread across the system bandwidth does not fully characterize the nonlinearity of the radar system (receive chain), the Gaussian white noise technique is rarely used to characterize radar system nonlinearities. It can be argued that even if such tests were performed, meaningful interpretation of the results in terms of the output image quality is difficult at best.

Radar system nonlinearities typically arise from saturation or limiting in devices such as amplifiers and mixers. Additionally, crossover distortion may arise due to the nonlinear characteristics of a device changing operating modes (e.g., a high power switch). Most receivers have several amplification stages before the signal is mixed down to baseband (Fig. 6.2). Typically, at the intermediate frequency (IF) stage, a variable gain amplifier (VGA) or a switched attenuator is inserted to adjust the position of the receiver instantaneous dynamic range to best accommodate the expected range of backscattered power. An important consideration in the receiver design is to set the video amplifier saturation point such that the front end (RF or IF) amplifiers can saturate without first saturating the video amplifier (i.e., the amplifier that matches the analog output to the ADC) over all possible VGA settings. Nonlinear effects resulting from saturation in the early stages of the receiver could be masked by additive noise and therefore be difficult to detect in the digitized video signal.

In addition to the harmonic distortion, the effect of system nonlinearities also depends on the settling time or system memory. The settling time is generally defined as the time required for the response to an input to return to zero once the input is removed. The response to a signal at some time t1 and the response to an identical signal at t2 is not the same if

t2 − t1 < tm    (6.2.14)

where tm is the system memory or settling time. The system memory can be measured using a two-pulse input, where the response to each pulse is measured as a function of the time spacing between inputs. The minimum time interval which results in identical responses to the two inputs is the settling time. This parameter could also be measured directly from the autocorrelation function of the system response to white noise. The nonlinear characteristics of the analog to digital conversion process will be considered in more detail in the section on ADCs.

6.3 THE RADAR SUBSYSTEM

This section will review the four assemblies of the radar subsystem in terms of their performance characteristics and design trade-offs.

6.3.1 Timing and Control

The timing and control assembly consists of a free-running crystal oscillator and the associated frequency multiplier and divider circuitry to generate the signals required by the other subsystem assemblies. Additionally, a microprocessor is typically included to generate the signal sequences required for the radar operation. A stable local oscillator (stalo) with good short-term relative stability is essential for the radar to perform in the SAR mode. Specifically, the transmitted signal phase must be retained to coherently demodulate the received echo. The stalo drift can be translated into azimuth phase error by

σp = 2π fc τ σy(τ)    (6.3.1)

where τ is the round trip propagation time of a pulse, fc is the carrier frequency, and σy(τ) is the Allan variance of the crystal oscillator. The Allan variance, which is typically provided by the manufacturer, is defined as the fractional frequency drift (Δf/f) over some time interval of interest τ.
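Equations (6.3.1), (6.2.7), and (6.2.9) chain directly into a stalo phase-noise budget. The sketch below uses illustrative C-band values (a typical crystal-oscillator stability of 1e-10 and an 850 km class orbit geometry); it is a numeric check, not a design result.

```python
import math

# Stalo phase-noise sketch: Allan deviation -> azimuth phase error ->
# ISLR and mainlobe broadening. All input values are illustrative.
fc = 5.3e9          # carrier frequency, Hz (C-band)
tau = 5.7e-3        # round-trip propagation time, s (~850 km slant range)
sigma_y = 1e-10     # oscillator fractional frequency stability over tau

sigma_p = 2 * math.pi * fc * tau * sigma_y      # Eqn. (6.3.1), radians
islr_db = 20 * math.log10(sigma_p)              # Eqn. (6.2.7)
kml = (1 - sigma_p ** 2) ** -2                  # Eqn. (6.2.9)

print(f"sigma_p = {math.degrees(sigma_p):.2f} deg, "
      f"ISLR = {islr_db:.1f} dB, broadening = {kml:.4f}")
```

The result, a phase error of roughly a degree with an ISLR near -34 dB and negligible broadening, matches the conclusion of the worked stalo example in the text.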
The following example illustrates the performance requirements for the stalo design.

Example 6.1 Typical performance for a crystal oscillator such as the Hewlett-Packard HP10811 with an f0 = 10 MHz is σy(τ) = 1 × 10⁻¹⁰ for τ on the order of milliseconds. As an example, consider the E-ERS-1 system, where R = 850 km, τ = 5.7 ms and fc = 5.3 GHz. Assuming the HP10811 oscillator is used for the stalo, from Eqn. (6.3.1), the azimuth phase noise is σp ≈ 0.019 rad. From Eqn. (6.2.7) and Eqn. (6.2.9), the azimuth impulse response function is degraded by

ISLRp ≈ −34.5 dB

Kml,p = 1.0007

which are negligible errors.

The long-term stability of the stalo (over the mission) is also an important consideration in maintaining the carrier within its specified frequency band. Additionally, since the stalo provides timing signals for the other assemblies, long-term drift could cause some timing errors, although typically systems are designed such that the effect of long-term drift produces negligible system performance degradation.

6.3.2 RF Electronics

The RF electronics assembly can be divided into the following main subassemblies: (1) Exciter; (2) Transmitter; and (3) Receiver. We will discuss the performance and design trade-offs of each subassembly.

Exciter
The exciter subassembly generates a coded pulse waveform from the continuous tone stalo output. As described in Chapter 3, coding of the transmitted pulse provides a range resolution, δR, dependent only on the bandwidth of the pulse code (i.e., δR = c/2BR, where BR is the pulse code bandwidth). Since δR is independent of the pulse duration, the transmitter peak power requirements can be reduced by extending the pulse duration without degrading either the resolution or the SNR. This peak power reduction simplifies the transmitter design, increasing both performance and reliability as well as reducing the risk of breakdown or arcing in the high power cables.

Among the various pulse coding schemes, frequency coding and phase coding are commonly used, with frequency coding by far the most popular. The frequency codes can be categorized as linear or nonlinear FM. The linear FM or chirp code is used in most radar systems, primarily due to its ease of implementation and its insensitivity to Doppler shifts. Almost all currently operational (non-military) SAR systems, as well as those planned for the 1990s (with the exception of Magellan), use a linear FM chirp (Fig. 6.8a). Nonlinear FM codes (e.g., Taylor weighted) are used primarily in military applications where very low sidelobes are required (Fig. 6.8b). The nonlinear chirp permits exact matched filtering (i.e., range compression) without the severe SNR loss that would result from an equivalent processor weighting of a linear FM signal (Butler, 1980).

Phase code modulation is used primarily in systems where the available resources (i.e., power, mass) are limited or in situations where a relatively inexpensive coding implementation is required. Most popular is the binary phase code, where a 180° phase shift is switched into the circuit at periodic intervals (Fig. 6.8c). The sequence of 0's (no shift) and 1's (180° shift), which occur at uniform intervals of Δt (a chip), is chosen to achieve the best possible sidelobe characteristics. For small pulse compression ratios (≤ 13 chips per pulse), Barker codes are commonly used due to their optimal equal-level sidelobe characteristics. However, since longer codes are required for most SAR systems (e.g., Magellan has 60 chips per pulse), pseudorandom sequences such as the maximal-length sequence are more common. A detailed treatment of these and other phase coding techniques is given in Cook and Bernfeld (1967).

Figure 6.8 Pulse coding schemes: (a) Linear FM code; (b) Nonlinear FM code; (c) Binary phase code, where τp is the pulse duration, BR is the pulse bandwidth, and Δt = 1/BR is the chip duration for the binary phase code.

Dispersive Delay Lines. The most common implementation of the FM code is a surface acoustic wave (SAW) dispersive delay line (DDL) of the configuration shown in Fig. 6.9a. The SAW DDL typically consists of two complementary transducers, each composed of a number of electrodes whose periodicity varies (having higher density at the higher frequencies).
The position and length of the electrodes set the phase and amplitude response. The DDL is essentially a linear filter whose group delay varies over the system bandwidth. The delay versus frequency characteristic can range from a linear, flat amplitude response to a nonlinear weighted response for sidelobe suppression. Typical time expansion factors are on the order of 1000 where, for example, a 30 ns input is gated from the stalo to produce a 30 µs pulse. For large time bandwidth products (TB > 1000), spurious internal reflections can degrade the phase and amplitude performance characteristics. To reduce these effects an inclined transducer geometry is used (Fig. 6.9b, c). Without special compensation, at a TB ≈ 1000 the peak sidelobes of the autocorrelation function are typically 30 to 35 dB down from the mainlobe (Phonon Corp., 1986).

The advantages in using a DDL for pulse code generation are that it is a proven technology, the performance specifications in terms of TB and pulse-to-pulse jitter meet most system specifications, and it is relatively lightweight. Its key disadvantages are that it is inflexible (i.e., fixed code) and that it is lossy (up to 60 dB at TB = 1000).

The DDL phase and amplitude errors can be characterized in terms of the mainlobe and sidelobe characteristics of the pulse autocorrelation function, as discussed in Section 6.2.1. Typical numbers are σp ≈ 3° rms and σa ≈ 1.0 dB. The amplitude distortion in the DDL is not a factor since the signal is clipped in the transmitter (see next section). The pulse-to-pulse jitter degrades the azimuth impulse response. The jitter error is translated into azimuth phase error by

σp = 2πf σt    (6.3.3)

where σt is the standard deviation of the pulse-to-pulse jitter and f is the operating frequency of the DDL. For a 10 MHz SAW DDL, a 0.5 ns jitter produces σp ≈ 2°, resulting in an ISLR of −24 dB.

Transmitter
The transmitter subassembly consists of a series of mixers and bandpass filters to convert the coded exciter pulse output to the carrier frequency. The low power input signal is fed into a high gain amplifier (HGA) unit for generation of the high power signal that is output to the antenna feed system. The HGA components commonly used in SAR systems are either solid state or tube amplifiers. Generally the trade-off to be made is the high peak power and efficiency available from a tube versus the reliability of a parallel solid state amplifier network.

Traveling Wave Tubes. The tube commonly used in most airborne and some spaceborne systems is the traveling wave tube (TWT). The TWT consists of an electron gun (heated cathode, control grid, and anode), a delay line, and a collector (Fig. 6.11a). The electron beam, formed by an electric field, passes through a delay line, where the beam energy is transferred to the delay line, effectively amplifying the RF signal.

Figure 6.11 Traveling wave tube: (a) Circuit layout; (b) Gain characteristic of broadband TWT amplifier.
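The expansion/compression role the DDL plays can be sketched in software with a linear FM pulse and its matched filter. The pulse duration and bandwidth below are illustrative (TB = 300); no SAW-device physics is modeled.

```python
import numpy as np

# Pulse expansion/compression sketch: a chirp of duration tau_p and
# bandwidth B is matched-filtered back to a width of roughly 1/B.
fs = 100e6                    # simulation sampling rate, Hz
tau_p, B = 30e-6, 10e6        # pulse duration, s; bandwidth, Hz (assumed)
t = np.arange(0, tau_p, 1 / fs)
k = B / tau_p                 # chirp rate, Hz/s
chirp = np.exp(1j * np.pi * k * (t - tau_p / 2) ** 2)

# Matched filter = correlation with the conjugated, time-reversed replica
out = np.abs(np.convolve(chirp, np.conj(chirp[::-1])))

peak = out.max()
width = np.count_nonzero(out > peak / np.sqrt(2)) / fs  # 3 dB width, s
print(f"TB = {B * tau_p:.0f}, 3 dB width = {width * 1e9:.0f} ns, "
      f"1/B = {1e9 / B:.0f} ns")
```

The 30 µs pulse compresses to a mainlobe of roughly 1/B (about 90 ns here), which is the mechanism that lets the exciter trade pulse duration for peak power without losing range resolution.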
The TWT is characterized by both high gain and large (octave) bandwidths. For radar applications, the tubes are typically operated in saturation (Fig. 6.11b) to maximize the available output power and to ensure a stable power level despite variation in the input signal. However, operation in this region makes the TWT a nonlinear device, and harmonics of the fundamental signal are generated that must be removed using a bandpass filter. The efficiency of microwave TWTs (1-10 GHz) has improved to 30-50% with advanced collector designs. Typical gains are 45 to 60 dB.

Solid State Amplifiers. Most lower frequency spaceborne SAR systems (i.e., L- and S-band) employ solid state amplifier designs for improved reliability. A parallel-cascaded design is used to achieve the required output power. Consider the SIR-B amplifier network as an example (Fig. 6.12). The low power signal is initially split into three parallel channels. Each channel is amplified with a set of (Class A) predriver amplifiers operating in the linear region. These are followed by isolation circulators and Class C driver amplifiers. This driver signal is input to a power amplifier subassembly, which consists of a series of bipolar transistor stages to achieve the required gain. Combiners are then used to reassemble this parallel network output into a single high power signal. This SIR-B design using 50 W bipolar transistors produced a 1.5 kW output power at about 12-15% efficiency (Huneycutt, 1985). Current technology using GaAs FETs can achieve 20-25% efficiency at C- and L-bands and about half of that at X-band.

Figure 6.12 SIR-B solid state high gain amplifier design.

Receiver
The receiver assembly is typically divided into a radio frequency (RF) stage, an intermediate frequency (IF) stage, and a video frequency (VF) stage (Fig. 6.2). The RF front end basically consists of: (1) a limiter to prevent high power signals (from the transmitted pulse or interfering radars) from damaging the system; (2) a bandpass filter (which is wide compared to the pulse bandwidth) to limit the spurious signal power; and (3) a low noise amplifier (LNA) whose noise figure is a key factor in establishing the overall system signal to thermal noise ratio. The noise figure is given by

F = 10 log(SNRi/SNRo)    (6.3.4)

where SNRi and SNRo are the signal to noise ratios at the input and output of the amplifier respectively. This measure is a figure of merit for noise internally generated in the amplifier (Section 2.6.2). A typical noise figure is 3-4 dB for an L-band amplifier and about 1 to 1.5 dB higher at C-band.

The intermediate frequency stage typically consists of: (1) IF amplifier(s); (2) a variable gain amplifier (VGA); and (3) bandpass filter(s), slightly wider than the pulse bandwidth, to limit the system noise. The VGA is used to set the quiescent gain of the system for a given data acquisition sequence. However, for some systems the instantaneous dynamic range of the signal is such that a sensitivity time control (STC) or an automatic gain control (AGC) is required to reduce the signal dynamic range. These techniques are discussed in a later section.

The video frequency stage consists of: (1) a low pass filter; and (2) a video amplifier to match the output of the receiver to the ADC input. A second VGA may be included in this stage.

At each stage a number of directional couplers are inserted as test points, and a calibration signal is injected using a directional coupler, typically at the front end following the circulator, or just preceding the IF amplifier.

Receiver Performance. Each element in the receiver subassembly is characterized by its phase and amplitude errors across the passband using the techniques described in Section 6.2.1. The nonlinear distortion can be characterized by frequency analysis using a series of single-tone and two-tone test inputs. Harmonic and intermodulation distortions arise when the input signal exceeds the linear dynamic range of the individual components. Typically, the mixers or the amplifiers limit the system dynamic range and contribute the bulk of the spurious responses. However, with existing off-the-shelf components, a receiver linear dynamic range (from noise floor to the 1 dB compression point) of 50 dB is achievable, with an rms phase error of less than 5° and an rms amplitude error of less than 0.5 dB. Typical amplitude and phase errors for receiver components are given in Table 6.2.
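Eqn. (6.3.4) defines the noise figure of a single amplifier. For the full RF/IF chain, the standard Friis cascade formula (a textbook result not derived in this text) shows why the LNA dominates the system noise figure. The stage values below are assumed, illustrative numbers roughly in line with the 3-4 dB L-band LNA figure quoted above.

```python
import math

# Cascaded receiver noise figure via the Friis formula:
# F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2), all in linear units.
def db_to_lin(db: float) -> float:
    return 10 ** (db / 10)

stages = [            # (noise figure dB, gain dB), assumed values
    (3.5, 25.0),      # low noise amplifier
    (8.0, -6.0),      # mixer (conversion loss)
    (6.0, 30.0),      # IF amplifier
]

f = [db_to_lin(nf) for nf, _ in stages]
g = [db_to_lin(gn) for _, gn in stages]

f_total = f[0] + (f[1] - 1) / g[0] + (f[2] - 1) / (g[0] * g[1])
nf_db = 10 * math.log10(f_total)
print(f"cascade noise figure = {nf_db:.2f} dB")
```

With 25 dB of LNA gain ahead of the lossy mixer, the cascade figure stays within about 0.1 dB of the LNA alone, which is why the LNA noise figure is singled out as the key receiver parameter.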
TABLE 6.2. Typical Amplitude and Phase Errors for Receiver Components

Component               Peak-to-peak Amplitude Error (dB)   Peak-to-peak Phase Error (deg)
Attenuators             0.5                                 1.2
Power dividers          0.5                                 0.5
Circulators             0.1                                 0.5
Directional couplers    0.1                                 0.5
Mixers                  0.1                                 0.5

The sensitivity time control (STC) varies the receiver gain across the echo delay window to compensate for the range and antenna pattern variation of the received power, according to

GSTC(τ) = [R³(τ) sin γ(τ) / G²(τ)]^1/2    (6.3.5)

where G(τ) is the nominal vertical antenna pattern as a function of echo delay time τ projected into the cross-track ground plane, γ(τ) is the look angle, and R is the slant range. The STC function used in the Seasat receiver is shown in Fig. 6.13.

An automatic gain control (AGC) is typically designed to compensate for intrapulse variation in the return echo power, minimizing changes in the echo dynamic range resulting from variation in target reflectivity. Essentially, these devices employ a control loop with a detector (integrator) to estimate the received power across a portion of the echo. The integrated power estimate is fed back with a negative gain to the receiver VGA. The trade-off in AGC performance is dependent on the time constant of the servo loop. It must be short to minimize the feedback error, yet sufficiently long for the integrator to make an accurate estimate of the echo power.

6.3.3 Antenna

The basics of antenna design can be found in any of a number of books (e.g., Stutzman and Thiele, 1981). The key antenna parameters affecting the SAR performance are the antenna gain (or directivity) and its beam pattern. The antenna gain is directly proportional to its area. Assuming uniform illumination, the gain is given by (see Section 2.2)

G = ρD = ρ(4πA/λ²)    (6.3.6)

where ρ = ρe ρa is the antenna efficiency, ρe is the radiation efficiency (loss), ρa is the aperture efficiency, D is the directivity, λ is the wavelength, and A is the aperture area. Typically, to achieve the required SNR for spaceborne systems, aperture gains of 30 dB or more are required.

An additional minimum area constraint is imposed by the ambiguity characteristics of the system. Material in Section 1.2 shows that, to prevent overlapping echoes in range, the antenna width must satisfy

W ≥ 2λR fp tan η / c    (6.3.7)

where η is the incidence angle, fp is the pulse repetition frequency (PRF), and c is the propagation speed of light in free space. Similarly, to prevent overlapping azimuth Doppler spectra, the antenna length must satisfy

L ≥ 2vst / fp    (6.3.8)

where vst is the along-track velocity of the platform.
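A quick antenna sizing sketch follows, using the uniform-illumination gain of Eqn. (6.3.6) with Seasat-like illustrative numbers; the azimuth ambiguity constraint is taken in the commonly quoted form L >= 2v/fp, an assumption made here for Eqn. (6.3.8).

```python
import math

# Antenna sizing sketch, Eqn. (6.3.6), with Seasat-like illustrative values.
lam = 0.235            # wavelength, m (L-band, ~1.275 GHz)
A = 10.74 * 2.16       # aperture area, m^2 (10.74 m x 2.16 m panel)
rho = 0.6              # assumed overall antenna efficiency

G = rho * 4 * math.pi * A / lam ** 2
g_db = 10 * math.log10(G)
print(f"gain = {g_db:.1f} dB")          # comfortably above the 30 dB target

# Azimuth ambiguity constraint, assumed form L >= 2*v/fp
v, fp = 7450.0, 1647.0                  # platform speed, m/s; PRF, Hz
print(f"minimum antenna length = {2 * v / fp:.2f} m")
```

With these numbers the aperture delivers about 35 dB of gain, and the azimuth constraint demands roughly a 9 m antenna length, consistent with the 10.74 m panel assumed.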
Figure 6.15 SIR-C L-band antenna feed system (one-half of symmetrical design) illustrating the incorporation of active elements to achieve amplitude taper.

PIN diode design. The C-band HPA is a 3-stage GaAs FET amplifier operating in Class A for a 25 dB gain, while the L-band HPA is a 3-stage silicon bipolar transistor design operating in Class C for a 29 dB gain. The LNA designs are GaAs FET and silicon bipolar for the C- and L-bands, respectively, each achieving a noise figure of 1.5 dB. The ferrite circulator provides 20 dB of isolation at 0.5 dB insertion loss.

The design of the SIR-C antenna is illustrative of the future of spaceborne SAR technology. Although SIR-C uses discrete components for its T/R modules and phase shifters, monolithic microwave integrated circuits (MMIC) are approaching the point where they can now be considered viable for a spaceborne SAR application. The advantage is that the electronics can be incorporated into the printed circuit board with the microstrip radiator and the feed network, providing a fully integrated system. MMIC devices have been demonstrated at frequencies from under 1 GHz to over 100 GHz. As the RF frequency of the device is increased, generally both the output power and the efficiency drop. Typical numbers for L- or C-band devices are 40 to 50% efficiency at 5-10 W output power, dropping to 25% efficiency and 3-5 W output at X-band. A key issue limiting wide application of this technology remains the manufacturing yield and therefore the production costs.

Figure 6.16 Panel layout of SIR-C antenna.

Antenna Performance. The antenna gain (or efficiency) and the pattern shape are certainly two key considerations in the antenna design; however, a number of other specifications must be met for adequate performance. As previously discussed for the receiver, phase and amplitude errors across the passband will degrade the system impulse response function. In the antenna assembly we must
where θe,±ifp refers to the range or azimuth angles that give rise to signal components within the processing bandwidth (including the ambiguous regions). A typical performance requirement for the cross-polarization isolation as defined in Eqn. (6.3.11) is −25 to −30 dB. SAR ambiguities will be further discussed in Section 6.5.1.
where nb is the number of bits per sample. The actual SQNR is typically less wher~ N. and Nq ~re the saturation and quantization noise powers respectively,
than that given in Eqn. (6.3.12) due to errors in the quantizer. The ADC errors X;
p(x) i~ ~he Gaussian PDF for the input signal, and v; are the quantization
can generally be classified as either timing errors or quantization level errors. and digital reconstruction levels respectively, and
Errors classified as timing errors are sample clock jitter and sample bias, which
result in a relative phase error between the two ADCs in a quadrature sampling ( 6.3.16)
design. Sample jitter gives rise to a phase error according to Eqn. ( 6.3.3 ), where
u, is now defined as the standard deviation of the sample jitter and f is the is the total number of digital reconstruction levels as shown below
sampling frequency. Sample bias errors are- typically stable or slowly varying '
and can be measured with calibration signals and corrected in the ground XLv+ I
processor. Quantization level errors result from DC bias (a shift in all
quantization levels) or errors in the relative spacing between levels (differential
I > S;
Dq = L(X;/X)- 1 ( 6.3.13)
; P;
where x; is the total number of counts in the ith bin, P; is the expected fractional
number of counts in the ith bin for an ideal ADC, and x is the total number
40
of samples in the histogram~
The SNR given by Eqn. (6.3.12) describes the ADC performance given a full
scale deterministic input signal. Since the digitized SAR video is a random,
Gaussian distributed, zero mean signal, the SNR depends on the statistics of
the echo (Zeoli, 1976). The assumption of a Gaussian distribution is reasonable ,,
m _30
considering that the typical antenna footprint is large and that the echo consists
of scattering from a diverse ground area. The noise energy is calculated for each =
a:
z
sample as the square of the difference between the input analog value and its 0 20
digital reconstructed value. This noise is commonly classified into saturation
noise and quantization noise components. The saturation noise is defined as
"'
the noise arising from input analog signals that exceed the maximum or
minimum range of the analog-to-digital converter, while the quantization noise 10
is the error resulting from input signals within the ADG dynamic range. For a
Gaussian input signal these noises are given by
(6.3.14) 0 10 20 30 40
according to Eqn. (6.3.12). For input signals with large standard deviations
(high power) the saturation component dominates. Thus, independent of the
number of quantization bits, as the input signal power increases each curve
tends toward unity SDNR.

Therefore, in terms of the optimal gain setting for the video amplifier preceding
the ADC in the SAR receive chain, there is a unique value that produces the
maximum signal to distortion noise ratio, SDNR = S/(N_q + N_s) (Sharma,
1978). As the number of bits per sample increases for a given input signal power,
the gain setting (that gain maximizing the signal to distortion noise) should be
reduced to balance the saturation and distortion noise components. In setting
the gain in the receiver subassembly, it should be noted that, in any one imaging
period (e.g., a frame or a synthetic aperture), the standard deviation of the echo
may vary from a very low value (a low backscatter region) to a high value (a
bright backscatter region). Thus, the dynamic range of the echo over time
intervals on the order of the synthetic aperture time or longer may be much
greater than the instantaneous dynamic range of the return from targets within
a small time interval. For many types of natural targets, instantaneous dynamic
ranges of 25 dB within a short time interval are not uncommon. Adding to this
is the additional dynamic range required to accommodate the antenna pattern
modulation, the range attenuation, and the cross-track variation in the sample
cell size. The instantaneous dynamic range required in the ADC may be 40 dB
or more. Receiver techniques to reduce this dynamic range, such as the sensitivity
time control (STC) or the automatic gain control (AGC), have major drawbacks
in that these devices degrade the system radiometric calibration accuracy.
With the advent of high speed, wide dynamic range ADCs the need for either
an STC or an AGC to reduce the echo dynamic range is greatly diminished.
Table 6.3 lists some of the commonly available ADCs. Devices capable of
100 Msamples/s at 8 bits/sample can be bought "off the shelf". For radar
systems with bandwidths over 50 MHz, in-phase and quadrature sampling can
be employed using two devices, each operating at half the Nyquist rate of 2B_R.

In most radar systems, oversampling is applied to minimize the effects of
aliasing. For Seasat, the system range bandwidth is 19.0 MHz and the real
sampling frequency is 45.54 MHz, resulting in an oversampling factor

g_or = f_s / B_R ≈ 1.2    (6.3.17)

where f_s, the sampling frequency of the I, Q detected complex signal, is one
half the real sampling frequency. When calculating the effective distortion noise
for an ADC that uses oversampling, a reasonable approximation is that the
quantization noise will be reduced by the oversampling factor, while the
saturation noise is essentially unaffected. This noise reduction occurs during
the range matched filtering operation in the signal processor. An analogous
reduction in quantization noise occurs in the azimuth signal processor as a
result of the PRF to processing bandwidth (B_p) oversampling of the azimuth
spectrum. The assumption inherent in this statement is that the quantization
noise is essentially white over the range and azimuth spectra of the echo data.
This has been demonstrated by simulation of the noise spectra (Li et al., 1981).

TABLE 6.3. List of Currently Available Analog-to-Digital Converters

Sampling Frequency (MHz)   Bits/sample   Channels   Manufacturer
10                         12            1          Analog Devices
20                         8             1          TRW
20                         10            1          Analog Devices
30                         10            2          Sony/Tektronix
36                         12            2          Analogic
50                         12            1          Nicolet
60                         10            1          Sony/Tektronix
100                        8             2          Analogic
100                        8             2          Biomation
250                        6             1          Hughes
300                        8             1          Tektronix
525                        4             1          Hughes

Source: Courtesy of S. W. McCandless, Jr., 1989.

6.4 PLATFORM AND DATA DOWNLINK

Most spaceborne SAR systems and a few airborne systems downlink the digitized
SAR echo data to ground receiving stations. The key downlink characteristics
that affect the SAR system performance are: (1) the noise introduced by bit
errors; and (2) the downlink data rate (which limits either radar swath width,
duty cycle, or dynamic range). These two factors are interdependent since
increasing the bandwidth of the downlink signal processor to increase the data
rate also increases the noise bandwidth and therefore the probability of a bit
error. A detailed treatment of the trade-offs in the design of communication
systems, link budgets, and error encoding schemes can be found in the literature
(Carlson, 1975). Here we will consider the SAR system design options, given a
downlink communications system with a known probability of bit error P_b (or
bit error rate) and bandwidth (or maximum data rate).

6.4.1 Channel Errors

Following quantization of the SAR video signal, the data stream is passed to
the platform data bus. There it is either captured on a high density recorder
for non-real-time transmission to the ground receiving station, or directly
downlinked via the communications subsystem signal processor. This signal
processor typically encodes the data with some error protection code (e.g.,
convolutional code) and modulates the downlink carrier signal with the resultant
coded data using quaternary phase shift keying (QPSK).

The error statistics of this system depend on the type of error protection
code used. Although randomly occurring bit errors are typically assumed for
the link, if a convolutional code of long constraint length is used, burst error
statistics can result (Deutsch and Miller, 1981). It should be noted that NASA
has adopted a convolutional code, constraint length 7, rate 1/2, as standard
for the Shuttle high rate data downlink. The NASA TDRSS downlink from
the Shuttle is relayed by White Sands Receiving Station to Goddard Space Flight
Center (GSFC) via a high rate Domsat link. The data transfer is actually through
a cascade of two links (TDRSS and Domsat), each using a different coding
scheme. The effects of the two links in tandem could cause severe burst errors.
Consider the situation shown in Fig. 6.20 for the NASA high rate shuttle data
transmission. The probability of bit error for the entire link is given by

P_b = P_b1 + P_b2 − 2 P_b1 P_b2    (6.4.1)

where P_b1 is the bit error probability for the Shuttle to TDRS to White Sands
segment and P_b2 is the bit error probability for the White Sands to Domsat to
GSFC segment. The third term in Eqn. (6.4.1) represents the coupling between
the two links, which could produce burst errors with a longer expected burst
length than is characteristic of either link individually. However, if the
performance of each link is sufficiently high, the probability of occurrence of
the bursts is small and

P_b ≈ P_b1 + P_b2    (6.4.2)

Figure 6.20 [Block diagram of the Shuttle high rate data link; only the labels
SHUTTLE and DOMSAT are legible in the scan.]

An analysis of the Shuttle to TDRSS link indicates that the signal-to-noise ratio
is 6.5 dB, resulting in P_b ≈ 10^-5 with an average burst length of 4-5 bits and
an expected period between bursts of 2 × 10^5 bits for the 1/2 rate, length 7 code.

To determine the effect of bit errors on the SAR performance, we assume
that the bit errors occur randomly in time. This allows us to apply Bernoulli's
theorem, where the probability of an m-bit error in an n_b-bit code word is given
by

P_m = (n_b choose m) P_b^m (1 − P_b)^(n_b − m)    (6.4.3)

For a single m-bit error within any code word designated by v_i, the resultant
code word v_j (j ≠ i) contributes a noise term

∫ from x_i to x_(i+1) of (x − v_j)^2 p(x) dx    (6.4.4)

where the signal between x_i and x_(i+1) is digitized to v_i and L_v given by Eqn.
(6.3.16) is the total number of possible output levels for v. Therefore the total
bit error noise is given by

N_b = Σ over i, Σ over j ≠ i of P_m ∫ from x_i to x_(i+1) (x − v_j)^2 p(x) dx    (6.4.5)

where P_m is given by Eqn. (6.4.3). An expression for P_m given multiple errors
can be found in Beckman (1967). The effect of bit errors on the signal to
distortion noise is shown in Fig. 6.21. For small values of P_b (i.e., random
errors), the effect is essentially to raise the quantization noise according to the
noise power given in Eqn. (6.4.5). The assumption in this analysis is that the
bit error noise power spectrum is flat across the system bandwidth (Li et al.,
1981). A final point is that, given the bit error noise is white, the noise power
N_b in Eqn. (6.4.5) is further reduced by the oversampling factor given in
Eqn. (6.3.17).
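Eqns. (6.4.2) and (6.4.3) are straightforward to evaluate. The short sketch below tabulates Bernoulli word-error probabilities; the per-segment bit error rates are assumed values for illustration, not quoted link budgets:

```python
from math import comb

def p_m_bit_errors(m, n_b, p_b):
    """Bernoulli's theorem, Eqn. (6.4.3): probability of exactly m bit
    errors in an n_b-bit code word with independent bit error rate p_b."""
    return comb(n_b, m) * p_b ** m * (1.0 - p_b) ** (n_b - m)

p_b1, p_b2 = 1e-5, 1e-5                  # per-segment BERs (assumed)
p_b = p_b1 + p_b2 - 2 * p_b1 * p_b2      # Eqn. (6.4.1); ~ p_b1 + p_b2 here

probs = [p_m_bit_errors(m, 8, p_b) for m in range(9)]
p_word_error = 1.0 - probs[0]            # probability a word has any error
```

For small P_b the single-bit term dominates, which is why Eqn. (6.4.5) is usually evaluated for m = 1.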
Figure 6.21 Effect of random bit errors on signal to distortion noise ratio as a function of signal
power for 8 bit quantization. (The curves in the scan are labeled by bit error rate, down to 0.1;
the axes are SDNR in dB versus standard deviation in dB.)

Example 6.2 Consider a spaceborne SAR system with the following characteristics:

Quantization n_b = 8 bits/sample;
Bandwidth B_R = 20 MHz;
Antenna Length L_a = 12 m;
Swath Width W_g = 100 km;
Incidence Angle η = 45°.

The required minimum slant range swath width is approximately

W_s ≈ W_g sin η = 71 km

which corresponds to a data sampling window duration of

τ_w ≈ 2 W_s / c = 471 µs

The minimum PRF must equal or exceed the Doppler bandwidth,

f_p ≥ B_D = 2 V_st / L_a = 1250 Hz

where V_st ≈ 7.5 km/s is the spaceborne sensor to target velocity. Assuming the
same oversampling factor in azimuth,

f_p ≈ g_oa B_D ≈ 1500 Hz

Assuming the ADC output is buffered to achieve time expansion over the entire
inter-pulse period, the average (sustained) real-time downlink data rate is

D = 2 n_b f_s τ_w / T_p ≈ 270 Mbits/s

where T_p = 1/f_p is the inter-pulse period and the factor of 2 accounts for the
I and Q channels.

From this example we can easily see that, to achieve the 8 bit quantization
necessary to preserve the echo dynamic range and the wide swath, we need an
extremely high data rate downlink. Typically a downlink data rate of this
magnitude cannot be achieved, since it would require a large downlink
transmitter and antenna subsystem that cannot be accommodated within the
platform resources, given the large mass and power requirements of the SAR.
The alternative is to reduce the system performance by modifying either the
system design or the data collection procedure. Among the available options are:

1. Increase the azimuth length (L_a) of the SAR antenna and reduce the PRF
and/or the azimuth oversampling factor (g_oa) at the cost of increased mass
and degraded azimuth resolution;
2. Reduce the system bandwidth (B_R) and/or the range oversampling factor
(g_or) at the cost of range resolution;
3. Reduce the swath width (W_g) or change the imaging geometry to a steeper
incidence angle (η) at the cost of ground coverage and increased geometric
distortion from foreshortening and layover effects (Chapter 8);
4. Reduce the quantization to fewer bits per sample (n_b) at the cost of
increased distortion noise and therefore a degraded impulse response
function and radiometric calibration accuracy (Chapter 7).

Assuming the swath width is maintained, these data rate reduction options
essentially become a trade-off between degrading either: (1) geometric (spatial)
resolution; or (2) radiometric resolution (dynamic range).

If a tape recorder is available onboard for capture of the real-time output,
then the sensor duty cycle could also be factored into the required downlink
capacity. Furthermore, if an onboard processor were available to generate the
image data in real time, the resolution degradation could be performed by
multilook averaging, thus reducing the speckle noise in the process.

6.4.3 Data Compression

Spatial data compression has long been used as a technique for data volume
reduction. Generally, the assumption in most compression algorithms is that
some type of redundancy exists in the representation of the data (Jain, 1981).
Many data compression algorithms have been devised to reduce redundancy
based on the statistics of the data set. Compression algorithms are classified as
either lossy or lossless.

The lossy (or noisy) algorithms are designed to achieve a relatively large
compression factor with the loss of some information (i.e., added noise) in the
reconstructed data. Conversely, a lossless (or noiseless) algorithm is capable of
exactly reconstructing the original data set from the compressed data stream.
For an application such as reducing the downlink data rate, lossy algorithms
are rarely considered for scientific instruments. This is due to the inability to
predefine what an acceptable information loss would be, since the data is to
be used for a variety of research applications. Lossy algorithms will be considered
in more detail in the ground segment of the SAR data system (Chapter 9) for
the distribution of browse image products.

Lossless compression, on the other hand, has been routinely used to reduce
the downlink data rate for optical instruments (Rice, 1979). The redundancy
in the data set is typically characterized by its zero order entropy (Shannon, 1948)

H_0 = − Σ (i = 1 to L_v) P_i log2 P_i    (6.4.6)

where L_v is the number of quantization levels and P_i is the probability a sample
will assume the value v_i. A basic assumption in Eqn. (6.4.6) is that of stationarity
for the data statistics. The entropy of a data set establishes the minimum number
of bits required to represent the information in each data sample. It is therefore
a useful measure to characterize the potential for lossless compression of the
SAR raw data downlink.

An analysis of SAR raw data from the NASA DC-8 airborne system indicates
that H_0 ≈ 6-7 bits/sample. Thus, assuming 8 bit quantization, the maximum
achievable compression factor is on the order of 1.2. An analysis of this sort
must take into account that the SAR data is stationary only over a small time
and space interval, and therefore the entropy of the data depends on the local
target characteristics. Furthermore, when characterizing the SAR data, care
must be taken to ensure that the radar system is not limiting the data dynamic
range prior to the ADC.

Assuming that a 20% savings could be achieved in the downlink data rate
without loss of information, data compression could provide a substantial
improvement in the radar system performance (wider swath, more bits per
sample, etc.). However, realistically there is no lossless data channel, since bit
errors from the transmission will always degrade the data. In fact, most lossless
compression algorithms result in the data being more susceptible to bit errors,
effectively increasing the BER for a given link performance. To offset this factor,
error protection codes must be applied to the data before transmission. Since
the overhead for error protection is typically 20% or more, a real savings in
the downlink data rate is not achieved.

Several studies have been performed using lossy compression to reduce the
downlink data rate. They conclude that the vector quantization algorithm
exhibits good performance (Reed et al., 1988). Compression factors as high as
10:1 have been claimed, but to date a full error analysis has not been performed
to quantitatively assess the actual impact on image quality.

6.4.4 Block Floating Point Quantization

A more useful technique to achieve a reduction in the downlink data rate is
block floating point quantization (BFPQ), also referred to as block adaptive
quantization (BAQ). The BFPQ algorithm is based on the fact that over a
small time interval (in both azimuth and range) the entropy of the data is lower
than is that of the data set as a whole. The block floating point quantizer is a
device that receives the output data stream from the ADC (Fig. 6.22) and codes
the uniformly quantized data samples into a more efficient representation of
the data, requiring only m_b bits/sample (m_b < n_b). This technique cannot be
strictly considered as lossless compression, since certain portions of the data
set (e.g., land/water boundaries) may exhibit an entropy (or dynamic range)
larger than the number of bits (m_b) used in the BFPQ representation, resulting
in an increased distortion noise.

The BFPQ technique is analogous to the AGC, in that the sampled radar
echo data are integrated (in power) over a period of time to determine a threshold
(or exponent) for that block of data. Given this threshold, the BFPQ codes
each data sample output from the ADC such that it represents only the variations
about the threshold value for that block of data. The dynamic range of the
data within the block essentially requires fewer bits per sample to achieve a
signal to distortion noise ratio comparable to the original uniformly quantized
input. Consider a data block of l_a samples in azimuth and l_r samples in range.
A single threshold is derived for each data block and is downlinked with the
encoded data. The compression factor is therefore given by

C = n_b l_a l_r / (m_b l_a l_r + n_t)    (6.4.7)

where n_b, m_b, and n_t are the number of bits required to represent the original
data sample, the BFPQ data sample, and the threshold, respectively. The
instantaneous dynamic range of the BFPQ data is that of an m_b bit uniform
quantizer. However, its adjustable dynamic range is that of the original n_b bit
quantizer. Thus, the BFPQ will preserve the full information content of the
input data stream if the dynamic range of the original data within any l_a × l_r
data block does not exceed the dynamic range of the m_b bit quantizer.

The assumption in the BFPQ design is that, within a given block of data,
the signal intensity with high probability does not exceed some prescribed
dynamic range. Thus, selection of the block size is essential to proper
performance of the BFPQ. The factors to be considered in selection of the block
size are:

1. The block should contain a sufficient number of samples to establish
Gaussian statistics for the data set used in estimating each threshold. Due
to the speckle noise a minimum of 50 to 100 samples is required.
2. The block should be small in range relative to the variation in signal power
due to antenna pattern modulation and range attenuation. The design
should allow a maximum variation of only 1-2 dB from these effects.
3. The block should also be small in range relative to the number of samples
in the pulse; and small in azimuth relative to the synthetic aperture length.
Typically the data is approximately stationary over 1/4 to 1/2 of the pulse
and synthetic aperture lengths.

Figure 6.22 Functional block diagram of the block floating point quantizer (BFPQ): (a) SAR
data system with a BFPQ; (b) design of the SIR-C BFPQ with n_b = 8, m_b = 4, n_t = 5.

The BFPQ operates as follows:

1. For each input data block the standard deviation σ is calculated. Typically
this is implemented by calculating the mean of the absolute value of each
sample and relating this to σ by

x̄ = Σ (i = 1 to L_v) (|x_i| + 0.5) (1/(√(2π) σ)) ∫ from x_i to x_(i+1) exp(−x²/2σ²) dx    (6.4.8)

where L_v is the number of quantization levels and the x_i are the normalized
quantizer transition points.
2. Each sample in the block is scaled by the estimated standard deviation for
that block and the result compared to the optimum quantization levels for
an m_b bit quantizer with σ = 1.
3. The resulting m_b bit word and the estimated threshold (which is an n_t bit
quantized value of |x|) are downlinked.
4. The BFPQ decoder in the ground receiver determines the correct multiplier
(gain) from the quantized threshold and reconstructs a floating point
estimate of the original data sample.

TABLE 6.4. Uniform 2 Bit Quantizer Transfer Function (Max, 1960)

Input Signal Level*        BFPQ Output
0.9816σ ≤ x                1 1
0 ≤ x < 0.9816σ            1 0
−0.9816σ < x < 0           0 0
x ≤ −0.9816σ               0 1

*The value 0.9816 is the optimum transition point for an ideal
uniform quantizer; the value of σ is estimated from Eqn. (6.4.8).
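Steps 1-4 can be combined into a compact sketch of an (n_b, 2) BFPQ encode/decode path. The transition point 0.9816σ is from Table 6.4 and the magnitudes 0.52σ and 1.72σ are the Table 6.5 reconstruction levels; the two negative codes are ordered to be consistent with the Table 6.4 input ranges (sign bit, then magnitude bit). The σ estimate uses the Gaussian limit of Eqn. (6.4.8), E|x| = σ√(2/π); function names and the block shape are illustrative:

```python
import math
import random

TRANSITION = 0.9816                      # Table 6.4 transition point (units of sigma)
RECON = {(1, 1): 1.72, (1, 0): 0.52,     # reconstruction levels (units of sigma),
         (0, 0): -0.52, (0, 1): -1.72}   # ordered to match Table 6.4 input ranges

def bfpq_encode(block):
    """(n_b, 2) BFPQ: estimate sigma from the block mean absolute value
    (for Gaussian data, E|x| = sigma*sqrt(2/pi)), then code each sample
    as (sign bit, magnitude bit) per Table 6.4."""
    sigma = (sum(abs(x) for x in block) / len(block)) * math.sqrt(math.pi / 2.0)
    codes = [(1 if x >= 0.0 else 0, 1 if abs(x) >= TRANSITION * sigma else 0)
             for x in block]
    return sigma, codes   # a real system downlinks an n_t-bit quantized sigma

def bfpq_decode(sigma, codes):
    """Step 4: reconstruct floating point estimates from the 2-bit codes."""
    return [RECON[c] * sigma for c in codes]

random.seed(3)
block = [random.gauss(0.0, 20.0) for _ in range(16 * 8)]  # l_r = 16, l_a = 8
sigma, codes = bfpq_encode(block)
recon = bfpq_decode(sigma, codes)
```

The roundtrip preserves the block's scale while spending only 2 bits per sample plus one threshold per block.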
Example 6.3 Consider the BFPQ design used in the Magellan spacecraft
mapping Venus (Kwok and Johnson, 1989). Due to the small mass and power
budgets available on a deep space probe, such as Magellan, the peak downlink
data rate is constrained to approximately 270 kbps. Additionally, the data link
is available only 50% of the time since the SAR and the communications system
share the high gain antenna. To achieve the prime mission objective of mapping
the entire planet within one year at 150 m resolution, some type of data
compression was required.

A BFPQ of (8,2) was adopted (i.e., n_b = 8 bits, m_b = 2 bits). The analog
video signal data is quantized to values between −128 and 128, while the block
size used for the estimate of each threshold is set at l_r = 16 range samples and
l_a = 8 azimuth pulses. The system, shown in Fig. 6.23, is designed such that the
estimated threshold value is applied to a following data block. The standard
deviation is estimated by the absolute sum method given in Eqn. (6.4.8). The
input data is normalized by this value and quantized according to the uniform
quantizer levels given in Table 6.4.

In the Magellan implementation, the transfer function for this normalization,
given by Eqn. (6.4.8), is precalculated and stored in a look-up table. Thus, the
8 bit input sample and the 8 bit threshold address a 2 bit output sample from
the look-up table according to Table 6.4. The ground reconstruction simply
inverts this process, and a gain function calculated from the threshold is used
to reconstruct the original data stream according to Table 6.5. The performance
curves for the Magellan design are shown in Fig. 6.24. Note that the (8,2)
BFPQ SNR distortion curve is essentially a set of 2 bit SNR curves spaced
across the dynamic range of the 8 bit curve.

Figure 6.23 BFPQ design used for the Magellan SAR with n_b = 8, m_b = 2, n_t = 8 (Courtesy of
H. Nussbaum).

TABLE 6.5. Look-up Table for the Two Bit Data Reconstruction

Decoder Input      Reconstructed Value*
1 1                1.72σ
1 0                0.52σ
0 0                −0.52σ
0 1                −1.72σ

*The values 0.4528, 1.5104 are optimum reconstruction levels for
an ideal uniform quantizer. Due to saturation effects the uniform
quantizer has an effective gain of 0.8825, resulting in the given
reconstruction levels.

It is important to note that with the Magellan BFPQ we can never achieve
a better signal to distortion noise ratio (SDNR) than is given by the peak value
for the 2 bit quantizer. However, we can maintain that performance over a
wider range of input values (approaching that of the 8 bit quantizer) using the
BFPQ technique. The effect of the distortion noise incurred by using the 2 bit
quantization depends on the relative level of other noise sources in the system.
For most system designs, the SDNR should be large relative to the signal to
thermal noise ratio (SNR). This is based on the radiometric calibration
requirements (Chapter 7), which assume the thermal noise power is known and
can be subtracted from the total received power to derive the backscattered
energy. Since the distortion noise is nonlinear and cannot be subtracted, it must
be small relative to the thermal noise or very small (< −18 dB) relative to the
signal power for calibrated imagery.

Figure 6.24 Distortion noise as a function of input power for the Magellan BFPQ. (The two
curves in the scan are labeled 8-bit and (8,2) BFPQ; the axes are SDNR in dB versus standard
deviation in dB.)

6.5 SYSTEM DESIGN CONSIDERATIONS

The design of the SAR system is generally dependent on the application for
which it is intended. Typically, specifications are provided to the design engineer
by the end data user, such as: (1) resolution; (2) incidence angle; (3) swath
width; (4) wavelength; (5) polarization; (6) calibration accuracy; (7) SNR,
and so on. Additional constraints are imposed by the available platform
resources and mission design (e.g., launch vehicle): (1) payload mass, power,
and dimensions; (2) platform altitude; (3) ephemeris/attitude determination
accuracy; (4) attitude control; (5) downlink data rate, and so on. Given these
inputs the system specifications are determined: (1) system gains (losses); (2)
rms amplitude error versus frequency; (3) rms phase error versus frequency;
(4) receiver noise figure; (5) system stability (gain/phase versus time/
temperature), and so on. The final design is the result of an iterative procedure,
balancing performance characteristics among subsystems to achieve the optimal
design. The following example is presented to illustrate these trade-offs.

Example 6.4 Assume that the measurable range of target backscatter coefficients
(i.e., the noise equivalent σ⁰) and the wavelength, λ, are specified by the scientist.
Furthermore, assume the mass and power budgets are constrained by the launch
vehicle such that:

1. Maximum antenna area (A), and therefore the antenna gain (G), are limited
by the mass;
2. Maximum radiated power (P_t) is limited by the available dc power and
system losses (L_s);
3. Minimum noise temperature (T_n) is determined by the earth temperature
(≈ 300 K) and the receiver noise figure;
4. Slant range (R) is determined by the imaging geometry and the platform
altitude.

The designer has little flexibility to meet the SNR requirements given these
system constraints. Consider the single pulse radar equation for distributed
targets (Section 2.8)

(6.5.1)

where η is the incidence angle and B_n is the noise bandwidth. The system
parameters available for enhancing the SNR are:

1. Increase the pulse duration, τ_p, at the cost of increased average power
consumption;
2. Decrease the antenna length, L_a, while increasing the width, W_a, to keep the
area constant to maintain the constraint in Eqn. (6.3.9). This will reduce the
swath width and increase the average power consumption due to the higher
PRF required;
3. Reduce system losses by improving the antenna feed system (waveguide) or
by inserting T/R modules into the feed to improve the system gain; again
at the cost of increased power consumption.

Note that all of the options considered to improve the SNR require an increase
in the available power. If additional power is not available then the designer
must request a modification in the given requirements. Lowering the altitude
will produce a significant increase in SNR due to the R³ factor, but will decrease
the swath width. The 3 dB swath is approximately

(6.5.2)
where W_a is the antenna width. The effect of reducing R on the swath could be
compensated by reducing W_a and increasing L_a to keep the antenna area and
swath constant. A small drop in SNR and a reduction in the azimuth resolution
Δx ≈ L_a/2 would result, but the net effect would be a significant increase in SNR.

These in turn affect the resolution in azimuth and the available swath width in
range. The shape of the antenna beam, specifically its sidelobe characteristics,
is also key to the performance of the radar system. The discussion in
Example 6.4 considers only the signal to thermal noise requirements of the
system. An additional noise factor, ambiguity noise, is also an important
consideration, especially for a spaceborne SAR. Equations (6.3.7) and (6.3.8)
presented rough guidelines for determining the antenna dimensions. These
bounds are based on the criteria that the 3 dB mainlobe of the antenna pattern
does not overlap in time for consecutive echoes, and that the azimuth 3 dB
Doppler spectrum is less than the PRF. Obviously, these constraints are very
approximate and may not meet the required signal to ambiguity noise ratios.

The azimuth ambiguities arise from finite sampling of the Doppler spectrum
at intervals of the PRF (Fig. 6.25). Since the spectrum repeats at PRF intervals,
the signal components outside this frequency interval fold back into the main
part of the spectrum. Similarly, in the range dimension (Fig. 6.26), echoes from
preceding and succeeding pulses can arrive back at the antenna simultaneously
with the desired return. For a given range and azimuth antenna pattern, the
PRF must be selected such that the total ambiguity noise contribution is very
small relative to the signal (i.e., −18 to −20 dB). Alternatively, given a PRF or
range of PRFs, the antenna dimensions and/or weighting (to lower the sidelobe
energy) must be such that the signal-to-ambiguity noise specification is met.
The total ambiguity to signal ratio can be written

ASR(τ) = [ Σ over m,n (not both zero) of ∫ from −B_p/2 to B_p/2 G²(f + m f_p, τ + n/f_p) σ⁰(f + m f_p, τ + n/f_p) df ]
         / [ ∫ from −B_p/2 to B_p/2 G²(f, τ) σ⁰(f, τ) df ]    (6.5.5)

Figure 6.26 Illustration of SAR range ambiguities. (The scan retains only the figure labels:
spectral power, echo energy, range ambiguity noise, ambiguous region, and T_p = 1/f_p, the
interpulse period.)
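The azimuth-only specialization of Eqn. (6.5.5) (uniform reflectivity, n = 0 terms, anticipating Eqn. (6.5.6)) can be evaluated for a model pattern. The sketch below assumes a uniformly illuminated aperture, so the one-way power pattern is sinc²(f/B_D) in the Doppler variable; the numerical PRF, bandwidth, and grid values are assumptions for illustration, not a design case from the text:

```python
import math

def sinc(u):
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

def aasr_db(f_p, b_p, b_d, m_max=8, n_grid=2001):
    """Azimuth ambiguity-to-signal ratio: aliased two-way pattern energy
    over mainlobe energy inside the processing band [-b_p/2, b_p/2].

    G(f) = sinc^2(f / b_d) models a uniformly illuminated aperture,
    where b_d = 2 V_st / L_a sets the Doppler bandwidth scale."""
    df = b_p / (n_grid - 1)
    fs = [-b_p / 2.0 + i * df for i in range(n_grid)]
    g2 = lambda f: sinc(f / b_d) ** 4          # two-way power pattern
    num = sum(g2(f + m * f_p)
              for f in fs
              for m in range(-m_max, m_max + 1) if m != 0) * df
    den = sum(g2(f) for f in fs) * df
    return 10.0 * math.log10(num / den)

# Raising the PRF relative to the Doppler bandwidth suppresses the aliases:
low = aasr_db(f_p=1200.0, b_p=1000.0, b_d=1200.0)
high = aasr_db(f_p=1800.0, b_p=1000.0, b_d=1200.0)
```

This reproduces the design behavior described in the text: for a fixed pattern, only a sufficiently high PRF (or pattern weighting) drives the ratio toward the −18 to −20 dB specification.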
where B_p is the azimuth spectral bandwidth of the processor. Note that the
ASR is written as a function of τ, or equivalently the cross-track position in
the image. Since the system ambiguity specifications typically refer to the
integrated azimuth ambiguity and the peak range ambiguity (which depends
on cross-track position), the expression in Eqn. (6.5.5) is not very useful for
design engineers. It requires both the two dimensional antenna pattern and the
target reflectivity to be formulated in terms of the Doppler frequency and time
delay. Additional relations are required to derive these quantities from the
measured data. Typically, antenna patterns are given as a function of off-
boresight angles and σ⁰ is given as a function of local incidence angle. For
design purposes it is more useful to rewrite Eqn. (6.5.5) separating the azimuth
and range ambiguity components. In the following two sections we will analyze
the effects of each type of ambiguity separately.

Azimuth Ambiguity

As previously described, azimuth ambiguities arise from finite sampling of the
azimuth frequency spectrum at the PRF. As in any pulsed radar, the SAR
Doppler spectrum is not strictly band limited (due to the sidelobes of the antenna
pattern), and the desired signal band is contaminated by ambiguous signals
from adjacent spectra. It is important to note that, due to the one-to-one
relationship between azimuth time and frequency (Section 3.2.2), the shape of
the azimuth spectrum is simply the two-way power pattern of the antenna in
azimuth convolved with the target reflectivity.

The ratio of the ambiguous signal to the desired signal, within the SAR
correlator azimuth processing bandwidth (B_p), is commonly referred to as the
azimuth ambiguity to signal ratio (AASR). The AASR can be estimated using
the following equation:

AASR ≈ [ Σ over m ≠ 0 of ∫ from −B_p/2 to B_p/2 G²(f + m f_p) df ] / [ ∫ from −B_p/2 to B_p/2 G²(f) df ]    (6.5.6)

where we have assumed that the target reflectivity is uniform for each azimuth
pattern cut (including sidelobes) at each time interval dτ within the record
window. Additionally, we have assumed that the azimuth antenna pattern at
each elevation angle within the mainlobe is similar in shape and that the coupling
between range and azimuth ambiguities is negligible. These assumptions are
generally valid for most SAR systems. The AASR as given by Eqn. (6.5.6) is
typically specified to be on the order of −20 dB. However, even at this value
ambiguous signals can be observed in images that have very bright targets
adjacent to dark targets. As previously described, SAR imagery can have an
extremely wide dynamic range due to the correlation compression gain for point
targets. Thus, even with a 20 dB suppression of the ambiguous signals, an
effective AASR of 10 dB is not uncommon when there is a bright backscatter
region adjacent to a dark region. For example, an urban area located next to
a calm lake, or a bridge over a river, can produce very high AASRs. Some
examples of azimuth ambiguities are shown for Seasat and SIR-B images in
Fig. 6.27 and Fig. 6.28, respectively.

The location of the azimuth ambiguity in the image is displaced from the
true location of the target. The relative displacement in range and azimuth
respectively is given by (Li and Johnson, 1983)

Δx_RA ≈ (m λ f_p / f_R)(f_DC + m f_p / 2)    (6.5.7)

Δx_Az ≈ m f_p V_st / f_R    (6.5.8)

Figure 6.27 Seasat image of New Orleans, LA (Rev. 788) illustrating azimuth ambiguities.
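Eqns. (6.5.7) and (6.5.8) are simple to evaluate. The sketch below uses representative Seasat-like parameter values (assumed here, not quoted from this section) to check the azimuth displacement against the figure given later in the text:

```python
def azimuth_ambiguity_displacement(m, f_p, v_st, f_r):
    """Eqn. (6.5.8): azimuth displacement of the m-th ambiguity (meters)."""
    return m * f_p * v_st / f_r

def range_ambiguity_displacement(m, wavelength, f_p, f_r, f_dc):
    """Eqn. (6.5.7): range displacement of the m-th ambiguity (meters)."""
    return (m * wavelength * f_p / f_r) * (f_dc + m * f_p / 2.0)

# Representative Seasat-like values (assumed for illustration):
f_p = 1647.0        # PRF, Hz
v_st = 7.45e3       # sensor-to-target speed, m/s
f_r = 520.0         # Doppler rate, Hz/s

dx_az = azimuth_ambiguity_displacement(1, f_p, v_st, f_r)  # on the order of 23 km
dx_ra = range_ambiguity_displacement(1, 0.235, f_p, f_r, 0.0)
```

The azimuth displacement scales linearly with the ambiguity number m, which is why higher-order ambiguities land progressively farther from the true target.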
where V_st is the magnitude of the relative platform-to-target velocity, m is the
ambiguity number, and f_DC, f_R are the Doppler centroid frequency and Doppler
rate used in the processor azimuth reference function at the true target location.
Typical values for Seasat, assuming m = 1, are

Δx_Az = 23 km
Δx_RA = 0.2 km

Because the ambiguous targets are significantly displaced from their true
locations, the range migration correction applied to the signal data at the
ambiguous target location is offset from the true value, resulting in blurring of
the ambiguous targets over a number of resolution cells

(6.5.9)

where R is the slant range and δx, δR are the focussed azimuth and slant range
resolutions, respectively. A relatively large value of N_DR is desirable, since the
unwanted ambiguous targets will be dispersed in the image.

The standard deviation of the Doppler centroid estimate resulting from an
uncertainty in the squint angle is

σ_fDC = (2 V_st cos θ_s / λ) σ_θs    (6.5.10)

where θ_s is the squint angle, σ_θs is the standard deviation of the squint angle
error, and V_st is the relative sensor-to-target speed. From Eqn. (6.5.6), a Doppler
centroid estimation error would result in the processing bandwidth (B_p) being
offset from the mainlobe of the azimuth spectra. Since the ambiguous signal
energy is higher at the edges of the mainlobe than it is in the center (see
Fig. 6.25), an increase in the AASR results.

For cases where the squint angle determination uncertainty becomes so large
that

| f̂_DC − f_DC | > f_p / 2    (6.5.11)

it is possible that the clutterlock algorithm will converge on an ambiguous
Doppler centroid (i.e., the estimated centroid from the clutterlock will be some
integer multiple of the PRF offset from the true centroid). Substituting
Eqn. (6.5.11) into Eqn. (6.5.10) and rearranging terms, we see that for squint
angle errors greater than

σ_θs > λ f_p / (4 V_st cos θ_s)    (6.5.12)

the clutterlock routine will converge on an ambiguity. Since the Doppler
bandwidth B_D = 2 V_st / L_a and the azimuth beamwidth θ_H = λ / L_a, for small squint
angles Eqn. (6.5.12) becomes

σ_θs > θ_H f_p / (2 B_D)    (6.5.13)

Thus, assuming the PRF is approximately equal to the Doppler bandwidth,
the clutterlock algorithm converges on an ambiguous centroid if

σ_θs > θ_H / 2    (6.5.14)

which is one half the azimuth beamwidth.

Figure 6.28 SIR-B image of Montreal, Quebec (DT 37.2) illustrating azimuth ambiguities.

Example 6.5 Consider a system such as the X-SAR, which will operate jointly
with SIR-C aboard the Shuttle (Table 1.4). The X-SAR azimuth antenna
dimension is L_a = 12 m and the radar wavelength is λ = 3 cm, resulting in an
azimuth beamwidth θ_H = 0.143°. Since the Shuttle has an estimated pointing
uncertainty of approximately 1.0° (3σ) in each axis, the X-SAR (3σ) Doppler
centroid estimation error will be on the order of 10 to 15 ambiguities. This
pointing error presents a difficult problem for the processor to resolve the true
Doppler. Two techniques for this PRF ambiguity resolution are currently being
considered (Section 5.4).

The first technique, range cross-correlation of looks, uses the fact that the
range migration correction, when derived from an ambiguous Doppler centroid,
will result in a target in one look being displaced relative to an adjacent look by

ΔR = m λ f_p Δs / 2    (6.5.15)

where ΔR is the range displacement in meters and Δs is the time separation
between the centers of the two looks. Since Δs, λ, and f_p are known, m can be
determined by a range cross-correlation of the two single-look images. Note
that, in the absence of edges or point-like targets in the images, the correlation
peak-to-mean ratio is quite small due to the speckle noise in the single-look
images.

A limiting factor in the performance of this ambiguity resolving technique
arises from the fact that ΔR is proportional to both λ and Δs, which are inversely
proportional to frequency. For X-SAR, with m = 10 we get ΔR ≈ 20 meters.
At a complex sampling frequency of f_s = 22.5 MHz this represents an offset of
approximately 3 pixels. Since these are single-look pixels, the speckle noise
makes it nearly impossible to exactly determine m.

An alternative approach, called the multi-PRF technique, is derived from
those used in MTI radars (see Section 5.4). It requires the SAR to cycle through
two or more PRFs, dwelling on each for several synthetic aperture periods. From
each data block using the same PRF, an ambiguous Doppler centroid is derived
using conventional clutterlock techniques. Using the Chinese remainder theorem,
the true centroid can be derived if the squint angle uncertainty and squint angle
drift rate are not too large. A detailed treatment of the multi-PRF technique
that is being employed on the SIR-C/X-SAR shuttle missions is presented in
Chang and Curlander (1992).

Range Ambiguity

Range ambiguities result from preceding and succeeding pulse echoes arriving
at the antenna simultaneously with the desired return. This type of noise is
typically not significant for airborne SAR data, since the spread of the echo is
very small relative to the interpulse period. As the altitude of the platform, and
therefore the slant range from sensor to target, increases, the beam limited swath
width increases according to Eqn. (6.5.2). For spaceborne radars, where several
interpulse periods (T_p = 1/f_p) elapse between transmission and reception of a
pulse, the range ambiguities can become significant. The source of range-ambiguous
returns is illustrated in Fig. 6.26. For PRFs satisfying the relation

T_p > 2 λ R tan η / (c W_a)    (6.5.16)

range ambiguities do not arise from the mainlobe of the adjacent pulses.
Typically this is considered an upper bound on the PRF. To derive the exact
value of the range ambiguity to signal ratio (RASR), consider that, at a given
time t_i within the data record window, ambiguous signals arrive from ranges of

R_ij = R_i + j c / (2 f_p),    j = ±1, ±2, ..., ±n_h    (6.5.17)

where j, the pulse number (j = 0 for the desired pulse), is positive for preceding
interfering pulses and negative for succeeding ones. The value j = n_h is the
number of pulses to the horizon. To determine the contribution from each
ambiguous pulse, the incidence angle and the backscatter coefficient must be
determined for each pulse (j) in each time interval (i) of the data record window.
Assuming a smooth spherical model for the earth, the incidence angle η_ij at
some point i within the data record window (corresponding to a range delay
t_i) and some pulse j is given by (Fig. 8.1)

η_ij = sin⁻¹[(R_s / R_t) sin γ_ij]    (6.5.18)

The target distance is R_t = |R_t|, R_s = |R_s| is the sensor distance from the
earth's center, and γ_ij is the antenna boresight angle corresponding to η_ij. This
boresight angle can be written in terms of the slant range R_ij as follows

γ_ij = cos⁻¹[(R_s² + R_ij² − R_t²) / (2 R_s R_ij)]    (6.5.19)

In this formulation we have ignored any refractive effects of the atmosphere.
Typically, this is a good approximation for earth imaging, except at grazing
angles (i.e., j approaching n_h). Additionally, when imaging through dense
atmospheres, such as on Venus, the refraction effects are significant.

The range ambiguity to signal ratio is then

RASR = ( Σ_{i=1}^{N} S_ai ) / ( Σ_{i=1}^{N} S_i )    (6.5.20)

where S_ai and S_i are respectively the range ambiguous and desired signal powers
(at the receiver output) in the ith time interval of the data recording window,
and N is the total number of time intervals. From the radar equation,
Eqn. (6.5.1), only the parameters that do not cancel in the ratio of Eqn. (6.5.20)
need be retained.
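The spherical-earth geometry of Eqns. (6.5.17)-(6.5.19) is easy to exercise numerically. The fragment below is a sketch only; the orbit radius, target radius, slant range, and PRF are assumed round numbers rather than any mission's actual values.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def ambiguous_ranges(r_i, prf, j_values):
    """Slant ranges of the j-th preceding (j > 0) and succeeding (j < 0)
    pulse echoes, R_ij = R_i + j*c/(2*f_p), cf. Eqn. (6.5.17)."""
    return [r_i + j * C / (2.0 * prf) for j in j_values]

def boresight_angle(r_s, r_t, r_slant):
    """Antenna boresight (look) angle gamma for a given slant range, from the
    law of cosines in the sensor / target / earth-center triangle,
    cf. Eqn. (6.5.19)."""
    return math.acos((r_s**2 + r_slant**2 - r_t**2) / (2.0 * r_s * r_slant))

def incidence_angle(r_s, r_t, gamma):
    """Incidence angle eta from the law of sines, cf. Eqn. (6.5.18)."""
    return math.asin((r_s / r_t) * math.sin(gamma))

# Assumed geometry: 800 km altitude, 950 km desired slant range, 1647 Hz PRF.
r_t = 6_371e3
r_s = r_t + 800e3
for j, r in zip((-1, 0, 1), ambiguous_ranges(950e3, 1647.0, (-1, 0, 1))):
    g = boresight_angle(r_s, r_t, r)
    e = incidence_angle(r_s, r_t, g)
    print(j, round(math.degrees(g), 2), round(math.degrees(e), 2))
```

Each interfering pulse arrives from a different look angle, so its contribution is weighted by a different part of the elevation antenna pattern and a different backscatter coefficient, which is exactly what the RASR sum accumulates.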
(6.5.26c)

where R_1 is the slant range to the first data sample (i.e., j = 0, i = 1), R_N is the
slant range to the last (Nth) data sample in the recording window, τ_p is the
transmit pulse duration, and τ_RP is the receiver protect window extension about
τ_p. The functions Frac and Int extract the fractional and the integer portions
of their arguments, respectively. These relationships are illustrated in the timing
diagram, Fig. 6.30.

The nadir interference restriction on the PRF can be written as follows:

2H/c + j/f_p > 2R_N/c,    j = 0, ±1, ±2, ..., ±n_h    (6.5.27a)

2H/c + 2τ_p + j/f_p < 2R_1/c,    j = 0, ±1, ±2, ..., ±n_h    (6.5.27b)

where H ≈ R_s − R_t is the sensor altitude above the surface nadir point. We
have assumed in the above analysis that the duration of the nadir return is 2τ_p.
The actual nadir return duration will depend on the characteristics of the terrain.
For rough terrain the significant nadir return could be shorter or longer than
2τ_p. An example of the excluded zones defined by Eqn. (6.5.26) and Eqn. (6.5.27)
is given in Fig. 6.31.

Figure 6.31 Plot of PRF against γ for SIR-B illustrating excluded zones as a result of transmit
and nadir interference. (The plot spans PRFs from 1000 to 2000 Hz, with the Seasat and SIR-B
operating points marked.)
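The nadir-interference restriction of Eqn. (6.5.27) translates directly into a PRF screening routine. The sketch below checks only the nadir conditions (the transmit-interference conditions of Eqn. (6.5.26) would be screened in the same way), and the altitude, window, and pulse-length numbers are assumed for illustration.

```python
C = 299_792_458.0  # speed of light, m/s

def nadir_clear(prf, h, r1, rn, tau_p, n_h):
    """Return True if, for every pulse index j, the nadir return (assumed
    duration 2*tau_p, as in the text) falls entirely outside the data
    window [2*R1/c, 2*RN/c], cf. Eqns. (6.5.27a) and (6.5.27b)."""
    for j in range(-n_h, n_h + 1):
        t_nadir = 2.0 * h / C + j / prf                       # leading edge
        ends_before = t_nadir + 2.0 * tau_p < 2.0 * r1 / C    # Eqn. (6.5.27b)
        starts_after = t_nadir > 2.0 * rn / C                 # Eqn. (6.5.27a)
        if not (ends_before or starts_after):
            return False
    return True

# Assumed design numbers: 800 km altitude, 950-990 km record window, 33 us pulse.
h, r1, rn, tau_p = 800e3, 950e3, 990e3, 33e-6
acceptable = [f for f in range(1200, 2000, 25)
              if nadir_clear(float(f), h, r1, rn, tau_p, 8)]
print(acceptable[0], acceptable[-1], len(acceptable))
```

Sweeping the look angle (which moves R_1 and R_N) and intersecting the result with the ambiguity limits reproduces the kind of excluded-zone diagram shown in Fig. 6.31.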
The set of acceptable PRFs, or range of PRF values, is therefore established
by the maximum acceptable range and azimuth ambiguity-to-signal ratios, as
well as the transmit and nadir interference. For a given sensor and mission
design, there may be no acceptable PRFs at some look angles that meet the
minimum requirements. The designer then has the option to relax the performance
specifications for these imaging geometries or exclude these modes from the
operations plan. In general, as the off-nadir angle is increased, the PRF
availability is reduced and the ambiguity requirements must be lowered to find
acceptable PRFs. However, the signal to thermal noise ratio at the higher look
angles is also reduced, so that the relative thermal noise to ambiguity noise
ratio remains relatively constant.
Figure 6.30 Timing diagram illustrating the constraints on PRF selections: (a) Transmit
interference; (b) Nadir interference.

6.6 SUMMARY

In this chapter we have presented an analysis of two major subsystems in the
end-to-end radar data system. The first part of the chapter described the radar
instrument and its major assemblies. This was followed by a discussion of the
spacecraft bus and data downlink subsystem.

The SAR sensor subsystem consists of four major assemblies: (1) Timing
and control; (2) RF electronics; (3) Digital electronics; and (4) Antenna. Their
performance can be analyzed in terms of a linear distortion model. Quantitative
relationships between the linear system errors and the resultant impulse response
function were given. Additionally, the non-linear performance characteristics of
the SAR were described in terms of the signal to distortion noise ratio.

The platform and data downlink subsystem is often a limiting factor in the
SAR performance, in that the available data rates, power, and mass may be
insufficient to accommodate the instrument. To reduce the data rate, the system
performance is often degraded. Alternatively, a data compression technique,
block floating point quantization, can be employed. This concept was described
in detail with an example of the Magellan SAR design.

The chapter concluded with a discussion of various aspects of the SAR system
design. A detailed treatment of ambiguities was presented with examples from
the Seasat and SIR-B systems. The limitations of nadir and transmit interference
were also presented as another factor in the PRF selection.

The intent of this chapter was to introduce the various error sources that
result from the sensor and data downlink. These errors to some degree can be
compensated in the signal processor by adjusting the matched filter reference
function. However, some component of the sensor and data link errors will be
passed through to the final image product. An understanding of the sources
and characteristics of these errors is essential for proper design of the ground
data system and interpretation of the SAR imagery.

REFERENCES

Bayman, R. W. and P. A. McInnes (1975). "Aperture Size and Ambiguity Constraints
for a Synthetic Aperture Radar," IEEE 1975 Inter. Radar Conf., pp. 499-504.

Beckman, P. (1967). Probability in Communication Engineering, Harcourt, Brace and
World, New York.

Berkowitz, R. S., et al. (1965). Modern Radar, Linear fm Pulse Compression, C. M. Cook,
Chapter 2, Part IV, Wiley, New York.

Butler, D. (1984). "Earth Observing System: Science and Mission Requirements Working
Group Report," Vol. I, NASA TM 86129.

Butler, M. (1980). "Radar Applications of SAW Dispersive Filters," Proc. IEE, 127, Pt. F.

Carlson, A. B. (1975). Communication Systems: An Introduction to Communications
and Noise in Electrical Systems, McGraw-Hill, New York.

Carver, K. and J. W. Mink (1981). "Microstrip Antenna Technology," IEEE Trans. Ant.
and Prop., AP-29, pp. 2-24.

Chang, C. Y. and J. C. Curlander (1992). "Algorithms to Resolve the Doppler Centroid
Estimation Ambiguity for Spaceborne Synthetic Aperture Radars," IEEE Trans.
Geosci. Rem. Sens. (to be published).

Cook, C. E. and M. Bernfeld (1967). Radar Signals: An Introduction to Theory and
Application, Academic Press, New York.

Deutsch, L. and R. L. Miller (1981). "Burst Statistics of Viterbi Decoding," TDA Progress
Report 42-64, Jet Propulsion Laboratory, pp. 187-189.

Huneycutt, B. (1989). "Spaceborne Imaging Radar-C Instrument," IEEE Trans. Geo.
and Remote Sens., GE-27, pp. 164-169.

Huneycutt, B. L. (1985). "Shuttle Imaging Radar-B/C Instruments," 2nd Inter. Tech.
Symp. Opt. and Electr. Opt. Applied Sci. and Eng., Cannes, France.

Jain, A. (1981). "Image Data Compression: A Review," Proc. IEEE, 69, pp. 349-389.

Kwok, R. and W. T. K. Johnson (1989). "Block Adaptive Quantization of Magellan
SAR Data," IEEE Trans. Geo. and Remote Sens., GE-27, pp. 375-383.

Klauder, J. R., A. C. Price, S. Darlington and W. J. Albersheim (1960). "The Theory
and Design of Chirp Radars," Bell Syst. Tech. J., 39, pp. 745-808.

Klein, J. (1987). "Effects of Piecewise Linear Chirp Phase," JPL Internal Publication.

Kliore, A. (1981). "Radar Beam Refraction Model for Venus," JPL Internal Publication.

Li, F., D. Held, B. Huneycutt and H. Zebker (1981). "Simulation and Studies of
Spaceborne Synthetic Aperture Radar Image Quality with Reduced Bit Rate," 15th
Inter. Symp. on Remote Sensing of the Environment, Ann Arbor, MI.

Li, F. and W. T. K. Johnson (1983). "Ambiguities in Spaceborne Synthetic Aperture
Radar Data," IEEE Trans. Aero. and Elec. Syst., AES-19, pp. 389-397.

Max, J. (1960). "Quantizing for Minimum Distortion," IRE Trans. Info. Theory, IT-6,
pp. 7-12.

Munson, R. E. (1974). "Conformal Microstrip Antennas and Microstrip Phased Arrays,"
IEEE Trans. on Antennas and Prop., AP-22, pp. 74-78.

Phonon Corp. (1986). "Special Report on Military SAW Applications: Interdigital
Dispersive Delay Lines," RF Design, June 1986.

Reed, C. J., D. V. Arnold, D. M. Chabrias, P. L. Jackson and R. W. Christianson (1988).
"Synthetic Aperture Radar Image Formation from Compressed Data Using a New
Computation Technique," IEEE AES Magazine, October, pp. 3-10.

Rice, R. F. (1979). "Some Practical Universal Noiseless Coding Techniques," JPL
Publication 79-22, Jet Propulsion Laboratory, Pasadena, CA.

Shannon, C. (1948). "A Mathematical Theory of Communication," Bell Syst. Tech. J.,
27, pp. 379-423, 623-656.

Sharma, D. K. (1978). "Design of Absolutely Optimal Quantizers for a Wide Class of
Distortion Measures," IEEE Trans. Comm., COM-20, pp. 225-230.

Stutzman, W. and G. Thiele (1981). Antenna Theory and Design, Wiley, New York.

Zeoli, G. W. (1976). "A Lower Bound on the Data Rate for Synthetic Aperture Radar,"
IEEE Trans. Info. Theory, IT-22, pp. 708-715.
CHAPTER 7
RADIOMETRIC CALIBRATION OF SAR DATA

In this chapter, we will introduce a set of definitions for the basic calibration
terms as well as image calibration performance parameters. From this basis,

7.1 DEFINITION OF TERMS
terms of its ability to measure the amplitude and phase of the backscattered
signal. This calibration process generally consists of injecting a set of known
signals into the data stream at various points and measuring the system response,
either before or after passing through the signal processor. We distinguish
calibration from system test, in the sense that calibration is performed as part
of the normal system operation, while testing is only performed prior to or
following the normal operations.

The calibration process can be divided into two general categories:
(1) Internal calibration; and (2) External calibration. Internal calibration is the
process of characterizing the radar system performance using calibration signals
injected into the radar data stream by built-in devices (e.g., calibration tone,
chirp replica). External calibration is the process of characterizing the system
performance using calibration signals originating from, or scattered by, ground
targets. These ground targets can be either point targets with known radar
cross section (e.g., corner reflectors, transponders), or distributed targets with
known scattering characteristics (e.g., σ⁰).

The calibration process is distinguished from verification in that verification
is the intercomparison of measurements from two (or more) independent sensors
with similar characteristics. The consistency between independent sensors of
the measurements of the same target area under similar conditions can be used
to verify the calibration performance specifications of each instrument. Instrument
validation refers to the comparison of geophysical parameters, as derived from
some scattering model, to known geophysical parameter values (e.g., surface
roughness) as determined from ground truth measurements. The validation
process assumes that reliable models are available to derive the geophysical
parameters from the σ⁰ values. Otherwise, the instrument measurement errors
cannot be separated from the model uncertainty.

7.1.2 Calibration Performance Parameters

The performance of the radar system can be characterized in terms of a set of
calibration parameters. The system performance is typically divided into
absolute (accuracy) and relative (precision) terms. Absolute calibration requires
determination of the overall system gain, while relative calibration does not
require system gain since it involves the ratio of data values within a single
radar system. If relative comparisons are made across radars (or radar channels),
then the system gain does not cancel in the ratio and the absolute gain of each
channel is required.

Single Channel Parameters

Three calibration parameters are generally used to specify the performance of
a single channel (i.e., single frequency, single polarization) radar system. Absolute
calibration is the accuracy of the estimate of the normalized backscatter
coefficient from an image pixel or group of pixels as a result of system induced
errors. Relative calibration is generally categorized according to the time
separation between the pixel values to be compared. Typically, systems are
specified in terms of both their long-term and short-term performance. Long-term
relative calibration refers to the precision of the estimate of the backscatter
coefficient ratio between two image pixels (or groups of pixels), separated by
the time required to produce uncorrelated error sources in the dominant error
terms (e.g., thermal instabilities, attitude variation). Short-term relative calibration
is the uncertainty in the backscatter coefficient ratio between two pixels (or
groups of pixels) separated by a time interval that is short relative to the time
constant of the dominant error sources.

The distinction between short- and long-term relative calibration is somewhat
qualitative and is generally based on the science utilization of the data. In a
typical data analysis, a key parameter is the ratio of the mean power (within
an image frame) of two homogeneous target areas (e.g., for target classification).
Alternatively, an analysis may be from pass-to-pass over a common target area
for change detection. The fact that many error sources are negligible in a short
term comparison, such as within an image frame, establishes the need for an
independent performance specification. Relative errors, such as the variation
due to thermal effects and errors resulting from platform instability, are negligible
if the time separation between measurements is sufficiently short.

Given that the backscattered signal is a complex quantity, we must extend
the above definitions for the system radiometric calibration to include the
estimation accuracy of the target dependent phase. However, this phase term
is only meaningful for multi-channel SAR systems as discussed in the following
section.

Multiple Channel Parameters

For a multi-polarization SAR, both the relative amplitude and the relative phase
stability must be specified to determine the cross-channel calibration performance.
The polarization channel balance is the uncertainty in the estimate of backscatter
coefficient ratio between coincident pixels from two coherent data channels.
Similarly, the polarization phase calibration is the uncertainty in the estimate of
the relative phase between coincident pixels from two coherent data channels.
The phase uncertainty should include both the mean (rms) value and the
standard deviation about the mean, since the second order statistics of the phase
error can contribute significantly to uncertainty in the target polarization
signature (Freeman et al., 1988). These polarization parameters should be
specified for each radar channel combination.

For a multifrequency SAR both the relative and absolute cross-frequency
calibration must be specified for each cross-channel combination. The absolute
cross-frequency calibration is defined as the uncertainty (precision) in the estimate
of the backscatter coefficient ratio between two pixels (or image areas), either
simultaneous or time separated, from frequency diverse radar channels. The
relative cross-frequency calibration is the uncertainty in the estimate of the
cross-frequency ratio of relative backscatter coefficients between two image
pixels or homogeneous target areas. Phase calibration is not meaningful across
frequency channels, since the phase difference between backscatter measurements
at different frequencies is uncorrelated.

7.1.3 Parameter Characteristics

The calibration performance parameters defined in the previous section typically
refer only to systematic error sources. These parameters characterize performance
by excluding target dependent errors such as speckle noise and range and
azimuth ambiguities. Additionally, it is assumed that the power contributed by
the thermal noise is known and can be subtracted from the total received power
prior to the data analysis. Uncertainty in the noise power estimate is typically
not included in the error model.

Furthermore, the calibration parameters are random variables. Since generally
calibration accuracies are specified as a single number, it is inherently assumed
that the probability distribution function of each error term is Gaussian.
Typically, the specified numbers are one standard deviation errors. It is also
generally assumed that the error sources are uncorrelated, such that the various
contributors can be root sum squared (rss) to determine the overall system
performance.

An additional point that should be made is that the calibration errors are
in general a function of both along-track and cross-track position of the target.
In the cross-track dimension, for example, the slope of the antenna pattern
increases with the off-boresight angle. For a given error in the estimate of the
antenna electrical boresight, the relative calibration error will vary depending
on the position of the target within the elevation beam. In the along-track
dimension, orbit-dependent variations, for example, may affect the calibration
uncertainty due to thermal cycling of the instrument.

In summary, parameters defining both the absolute calibration accuracy and
the relative calibration precision should be defined to encompass the end-to-end
system. Each of these parameters is a random variable and its value should be
specified with a probability of occurrence. Additionally, the error source
characteristics may be functions of both along-track and cross-track target
position and therefore, in general, should be specified as functions of these two
variables, or at least bounded by the maximum error over some domain. This
set of calibration error sources typically excludes target dependent effects such
as speckle and ambiguity noise, since the relative contribution from these effects
is unique for a given target area. The calibration parameters for single channel
radar systems must be extended for multiple channel radars, since, in a multiple
channel system, additional cross-channel error sources exist.

7.2 CALIBRATION ERROR SOURCES

As was discussed briefly in the previous section, the radiometric calibration
accuracy of the SAR data is not simply dependent on the stability of the sensor
subsystem. The end-to-end system performance, involving the sensor as well as
the downlink and ground processor, must be considered. In this section, we
will review each element in the end-to-end system in terms of its characteristic
error sources and its effect on the overall system calibration.

The objective of the calibration process is to characterize the system with
sufficient accuracy that the properties of the imaged target area (as measured
through its electromagnetic interaction with the radiated signal) can be derived
from the image data values using some systematic analysis procedure. This data
analysis procedure, usually referred to as geophysical processing, interprets
image σ⁰ values in terms of some geophysical characteristic of the target (e.g.,
soil moisture, ocean wave height). The sensitivity of this analysis to errors in
σ⁰ determines the required calibration performance for that specific application.
In general, the greater the dimensionality of the data set (i.e., multiple incidence
angles, polarizations, and frequencies), the more robust the analysis procedure
and the more demanding the system calibration performance.

The key elements to be considered in calibrating the SAR system are
illustrated in Fig. 7.1. The following subsections provide an overview of the
calibration error sources for each major subsystem element.

7.2.1 Sensor Subsystem

Included in our discussion of the sensor subsystem are the effects of the
atmospheric propagation errors, as well as those of the radar antenna and the
sensor electronics.

Atmospheric Propagation

The propagation of both the transmitted and reflected waves through the
atmosphere (in which we include the ionosphere) can result in significant
modification in the electromagnetic wave parameters. The key atmospheric
effects are: (1) Attenuation of the signal (amplitude scintillation); (2) Propagation
(group) delay; and (3) Rotation of the polarized wave (Faraday rotation). These
effects are typically localized in both time and space and are therefore extremely
difficult to calibrate operationally.

Amplitude scintillation does not occur naturally above 1 GHz, except along
a band of latitudes centered on the geomagnetic equator and within the polar
regions during periods of peak sunspot activity, which occur in 11-year cycles
(Aarons, 1982). The peak in 1990 nearly coincides with the launch of the ESA
ERS-1; however, the effects will be small for this system, which is a C-band SAR
(λ = 5.6 cm), since the perturbation strength is proportional to wavelength
squared. An analysis for the Seasat SAR (λ = 23.4 cm), which was launched
just prior to the peak sunspot activity in 1979, concluded that fully 15% of the
nighttime Seasat images would show significant degradation. However, an
evaluation of the processed image data does not support this analysis (Rino,
1984). At higher frequencies (above 10 GHz), attenuation from water vapor
absorption could also affect the SAR measurement accuracy (Chapter 1).
Group delay is also an ionospheric effect that is most severe for low frequency
(≤ 1 GHz), high altitude (> 500 km), polar orbiting SARs. An uncompensated
group delay will degrade the SAR performance in two ways. First, the slant
range estimate will be offset according to the error in the propagation velocity
(Chapter 8). A second effect is pulse distortion, which results in spreading of
the pulse (i.e., the ionosphere behaves like a linear dispersive delay line
(Fig. 7.2)). An EM wave propagating through the ionospheric medium typically
experiences a two-way group delay of 50 to 100 ns, increasing to as high as
500 ns during peak sunspot periods, with a nominal pulse dispersion of less
than 1% for Seasat-like parameters (Brookner, 1973).

Faraday rotation is the effect of the ionosphere on a linearly polarized wave,
producing a rotation in the wave orientation angle. The amount of rotation is
directly related to the ionospheric dispersion resulting from the earth's magnetic
field. It is inversely proportional to the radar carrier frequency squared. At
frequencies above 1 GHz the rotation is small under most ionospheric conditions
and can be neglected.

Figure 7.2 Ionospheric pulse dispersion for a short pulse with a Gaussian envelope. The results are
for grazing angles during severe ionospheric conditions for two-way propagation of a 1 GHz wave.
The pulse attenuation is given by α_p. (Brookner, 1985.)

An example where the atmospheric effects are significant is the Magellan
SAR designed to map Venus (Chapter 1). The Venusian atmosphere is more
dense than that of the earth. The highly elliptical orbit of Magellan results in
both very shallow and very steep incidence angles over the orbital period. The
result is that the long propagation path through the dense atmosphere causes
significant attenuation and refraction of the EM wave, altering the incident
surface geometry and in some cases the orientation of the wave.
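The inverse-frequency-squared dependence of Faraday rotation noted above can be sketched numerically. The L-band reference rotation used below is an assumed illustrative figure, not a measured ionospheric value.

```python
def faraday_scale(rotation_ref_deg, f_ref_hz, f_hz):
    """Scale a reference Faraday rotation angle to another carrier frequency
    using the 1/f^2 dependence described in the text."""
    return rotation_ref_deg * (f_ref_hz / f_hz) ** 2

# Assumed 25 deg of rotation at L-band (1.275 GHz); scale to C- and X-band:
for f_hz in (1.275e9, 5.3e9, 9.6e9):
    print(round(faraday_scale(25.0, 1.275e9, f_hz), 2))
```

At C-band and above the scaled rotation falls to roughly a degree or less, consistent with the statement that the effect can usually be neglected well above 1 GHz.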
Antenna

The SAR antenna can be a major source of calibration error. There are several
factors that limit the antenna subsystem calibration. First, to achieve the required
SNR, a large antenna gain is required and therefore a large physical aperture
area. Spaceborne antenna systems are typically over 10 m in the azimuth
dimension. To maintain pattern coherence, the structure must be rigid such
that its rms distortion is less than λ/8. Considering the spaceborne environment,
both zero gravity unloading and the large variation in temperature will cause
distortion in the phased array. This distortion can result in gain reduction,
mainlobe broadening, and increased sidelobe levels.

A second key factor limiting the antenna calibration is that the characteristics
of the antenna cannot be easily measured using internal calibration devices. As
we will discuss in a later section, most internal calibration systems bypass the
antenna subsystem and inject known reference signals directly into the radar
receiver electronics. In general, the only method to calibrate the antenna in
flight is by use of external calibration targets. However, this approach limits
monitoring of the antenna performance to certain discrete places within the
orbit. Any intra-orbital variation in this subsystem performance cannot be well
characterized.

A final consideration for the antenna is specifically for the case of an active
array (Fig. 7.3). An active array has phase shifters and transmit/receive (T/R)
modules inserted in the feed system to improve the system SNR and provide
electronic beam steering. Typically, hundreds of active devices are used in such
a design. This presents a difficult problem in characterizing the performance of
each device, which may degrade or fail during the mission lifetime.

Antenna calibration implies precise characterization of the gain and phase
transfer characteristic across the system bandwidth as a function of off-boresight
angle. Additionally, the cross-polarization isolation is an important factor, not
only in the mainlobe of the antenna pattern but also in the sidelobe regions
that are aliased back into the mainlobe by the PRF sampling (Blanchard and
Lukert, 1985).
Sensor Electronics
The sensor electronics, which include both the RF and digital assemblies, are
typically well characterized by internal calibration devices. The system
performance, which is given in terms of the rms phase and amplitude errors
across the system bandwidth, can vary as a result of component aging or thermal
variation. The internal calibration loops employ either coded pulse replicas or
calibration tones to determine the system response function.
RADIOMETRIC CALIBRATION OF SAR DATA · 7.2 CALIBRATION ERROR SOURCES

A second factor in characterizing the performance of the sensor electronics
is the system linearity. The dynamic range of the receiver electronics should
always exceed that of the ADC, and the video amplifier linear dynamic range
should always be designed such that it is the first to saturate at any gain setting.
Typically, a 35-40 dB instantaneous dynamic range is required for an acceptable
distortion noise level.

7.2.2 Platform and Downlink Subsystem

A key element in determining the overall system calibration accuracy and the
image quality is the sensor platform. A stable platform with precise attitude
and orbit determination capability is a necessity for the generation of calibrated
data products. Uncertainty in the sensor position and velocity primarily affects
the geometric calibration, degrading the target location accuracy and the
geometric fidelity of the image. This will be discussed in more detail in Chapter 8.

The platform attitude variables, in conjunction with its ephemeris, are key
parameters for determination of the echo data Doppler parameters. Even with
parameter estimation routines, such as clutterlock and autofocus, the initial
predicts must be sufficiently accurate for the estimates to converge properly. It
should be noted that these Doppler parameter estimation techniques are target
dependent, thus the convergence accuracy, and therefore the system performance,
depend on the surface characteristics. It is preferable to have attitude sensors
capable of measuring the sensor attitude to within one tenth of a beamwidth
in azimuth and several hundredths of a beamwidth in elevation.

The platform control is an important factor determining the quality of the
SAR image products. A large attitude rate, if not tracked by the SAR azimuth
reference function, will degrade the image quality by reducing the SNR within
the processing bandwidth. For block processing in azimuth, the Doppler
centroid varies as a function of time over the synthetic aperture length, which
results in the processing bandwidth being properly centered at only one point
within the block. The calibration error bias can be corrected, if the attitude rate
is known, by adjusting the processor gain for each block according to the signal
loss.

Random errors caused by the data downlink have little effect on the
radiometric calibration for distributed targets. A severe bit error rate (i.e.,
> 10⁻³) can degrade the impulse response function and therefore affect the
external calibration accuracy if point targets are used. If an entire echo line of
data were lost in the Level 0 (telemetry data) processing, the internal fidelity
of the data set would be degraded. The effect is most severe for multichannel
systems such as an interferometer or a polarimeter, where the loss of a line of
echo data in one channel will cause a relative channel-to-channel phase error.

7.2.3 Signal Processing Subsystem

The signal processing subsystem consists of three major elements: (1) SAR
correlator; (2) Post-processor; and (3) Geophysical processor.

SAR Correlator

The SAR correlator (Level 1A processor) forms the image products from the
digitized video signal data by convolving the raw data with a two-dimensional
matched filter reference function (Chapter 4). The reference function coefficients
are derived from the Doppler characteristics of the echo data. Typically, the
SAR correlator processing algorithm approximates the exact matched filter
function with two one-dimensional filters. Additionally, in the frequency domain
fast convolution algorithm, the Doppler parameters are assumed constant within
a processing block. For large squint angles and large attitude rates, these
approximations are inadequate, producing matched filtering errors. The result
is an increased azimuth ambiguity level, loss of SNR, degraded geometric
resolution, and geometric distortion (image skew). The accuracy of the matched
filtering is especially critical when external calibration targets are used to derive
the sensor induced errors, since the sensor and processor errors cannot be
separated to identify the error source. As we will discuss in more detail in
Section 7.6.1, a technique has recently been developed to minimize the effect of
matched filtering errors on calibration performance (Gray, 1990). However, as
described above, these errors will still affect the image quality (impulse response
function) characteristics.

Post-Processor

The post-processor performs geometric and radiometric corrections on the SAR
image data. A key element in this process is the estimation of the correction
coefficients. This requires an analysis of ancillary data sets such as: (1) Engineering
telemetry; (2) Sensor, platform, and processing parameters; and (3) External
calibration device measurements. These data, in conjunction with preflight test
data and calibration site imagery, are used to develop a time dependent model
for the radar system transfer characteristic. This model in turn provides estimates
of the sensor errors at any time during the mission, assuming that the sensor
instabilities (e.g., thermal drift) are deterministic and can be measured. The
accuracy of the model will depend on the performance of the internal calibration
devices, as well as on the frequency of the spatial (cross-track) and temporal
(along-track) sampling of the system transfer function using ground calibration
sites. The calibration plan must consider the effects of the space environment
as well as the telemetry bandwidth and the performance limitations of both the
internal and external calibration devices. Typically, the post-processor correction
errors are driven by the accuracy of the input data used to derive the correction
coefficients and not by the performance of the post-processor subsystem.

Geophysical Processor

The geophysical processor interprets the calibrated backscatter measurements
(e.g., σ⁰) in terms of the surface biogeochemical characteristics. Depending on
the specific parameter to be measured, this can be done by inversion of a
scattering model (e.g., Bragg model), or empirically by using the statistics of
the image (e.g., the ratio of the mean to the standard deviation). With either
approach, ground truth data is generally required to train and/or verify the
geophysical processing algorithm. The accuracy of the derived geophysical data
depends on the image data calibration and the adequacy of the scattering
model. A critical factor in developing a geophysical processing algorithm is
parametrization of the analysis such that key environmental factors can be
included (e.g., surface temperature, diurnal variation, wind speed, etc.). The
most successful algorithms to date are those that are relatively insensitive to
calibration errors (e.g., they utilize only the ratio of pixel values).

Summary

Calibration of the SAR end-to-end data system presents a formidable challenge
to both the radar and ground processor design engineers. The uncertainty in
the characterization of each element in the data system must be established,
and an overall error model developed to determine if the expected system
performance meets the specification. A key factor is the stability of the radar
sensor relative to the calibration measurement sampling interval. If the transfer
characteristic is not adequately sampled in either time or frequency then the
accuracy of the correction coefficients will be degraded.

In the following sections, we discuss the internal and external calibration
measurement strategies by reviewing current system designs. Their performance
will be assessed in terms of a system error model. In the second portion of the
chapter we will discuss the ground calibration processor design in terms of the
image analysis and data correction algorithms required.

7.3 RADIOMETRIC ERROR MODEL

The process of radiometrically calibrating the SAR image data can be reduced
to estimation of the bias and scale factors that relate the backscatter coefficient
to the image data number (DN). Assuming the system is linear, we can write
the receiver output power as

P̄_r = P̄_s + P̄_n    (7.3.1)

where P̄_r is the total received power, P̄_s is the signal power, and P̄_n is the additive
(thermal) noise power. Ignoring the effects of ambiguities, the signal power is
related to the mean radar cross section σ̄ by

P̄_s = K'(R) σ̄    (7.3.2)

where K'(R) is a range dependent scale factor.

Recall from Section 2.3 that the radar cross section σ of a patch of terrain
is a random variable. The mean radar cross section σ̄ of a region is only defined
for an extended area of homogeneous statistical properties. Assuming the average
signal level for a homogeneous SAR image is independent of scene coherence
(Raney, 1980), a statistically uniform target region can be modeled as a discrete
set of scatterers with complex reflectivity, as in Eqn. (3.2.3), by

ζ(x, y) = A(x, y) exp[jψ(x, y)]    (7.3.3)

spaced at intervals equal to the unprocessed resolution cell size. The amplitude
A(x, y) is modeled as a Rayleigh distributed, stationary process, while the phase
ψ(x, y) is uniformly distributed and stationary. The expected radar cross section
is therefore

σ̄ = E_{x,y}{A²(x, y)} A_x A_Rg = σ⁰ A_x A_Rg    (7.3.4)

where E_{x,y} is the expectation over x and y and A_x, A_Rg are the azimuth and
ground range resolution cell sizes of the unprocessed raw video signal (the beam
footprint).

Substituting Eqn. (7.3.4) and Eqn. (7.3.2) into Eqn. (7.3.1), we can write the
mean received power for a homogeneous target as

P̄_r = K(R) σ⁰ + P̄_n    (7.3.5)

where P̄_n is the mean noise power over some block of data samples used in the
estimation of σ⁰.

If we ignore the effects of system quantization and saturation noise, the mean
received power for a homogeneous target is related to the digitized video signal
by

P̄_r = Σ_{i,j=1}^{M} |n_d(i, j)|² / M² = n̄_d²

where n_d(i, j) is the complex data number of the (i, j) digitized sample and M² is
the number of samples averaged. From Eqn. (7.3.5) we can write

σ⁰ = (n̄_d² − P̄_n) / K(R)    (7.3.6)

where

K(R) = K'(R) A_x A_Rg    (7.3.7)

Thus, if the scale factor K(R) and the mean noise power P̄_n can be estimated
over a small area (M × M samples) of the data set, then the mean backscatter
coefficient σ⁰ can be determined from Eqn. (7.3.6).

In general, P̄_n and K(R) will be both frequency and time dependent given
the radar component aging, thermal stress, and platform motion. However, the
frequency dependence is significant only in terms of the processor matched
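As a numerical illustration of the block estimator in Eqn. (7.3.6) (a sketch only, not part of any operational processor; the block size and the values of K(R), P̄_n, and σ⁰ below are invented for the example):

```python
import numpy as np

def estimate_sigma0(n_d, P_n, K_R):
    """Block estimator of Eqn. (7.3.6): sigma0 = (mean |n_d|^2 - P_n) / K(R).

    n_d : (M, M) array of complex data numbers for one block
    P_n : estimated mean noise power for the block
    K_R : range dependent scale factor K(R) for the block
    """
    P_r = np.mean(np.abs(n_d) ** 2)  # block-average received power, n_d-bar^2
    return (P_r - P_n) / K_R

# Synthesize a 64 x 64 block whose mean power follows Eqn. (7.3.5),
# P_r = K(R)*sigma0 + P_n, using circular Gaussian samples (speckle model).
rng = np.random.default_rng(0)
M, sigma0_true, K_R, P_n = 64, 0.05, 200.0, 1.0
power = K_R * sigma0_true + P_n
n_d = np.sqrt(power / 2) * (rng.standard_normal((M, M))
                            + 1j * rng.standard_normal((M, M)))
sigma0_hat = estimate_sigma0(n_d, P_n, K_R)
print(sigma0_hat)  # close to 0.05 for a 64 x 64 block
```

With M² = 4096 samples the speckle-induced spread of the estimate is roughly 1/√(M²) of the received power, which is why the text warns against estimating σ⁰ from only a few pixels.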
filter error characteristics. For a point target, these errors will be expressed in
terms of mainlobe broadening and increased sidelobe energy in the point target
response function. For a distributed target, the processor matched filtering
integrates the frequency response, thus the shape of this response is not
significant, since only the integrated power affects the radiometric calibration.
In general, the noise power and scale factor should be written as functions of
time, P̄_n(t) and K(R, t), and can only be considered constant over a small block
of data.

Since the calibration correction parameters vary with time, the estimates of
these parameters cannot be extrapolated over a large area. Additionally, there
is a large uncertainty in the σ⁰ estimate if M is only a few pixels. This is due
to the inherent speckle noise in the data resulting from a large number of
independent scatterers within a single resolution cell (Sections 2.3, 5.2). Since
the intensity of a one-look pixel (M = 1) obeys the exponential probability
distribution function, Eqn. (5.2.9), this uncertainty is ±3 dB. Stated differently,
there is about a 50% probability that the single-look pixel value lies outside
the σ⁰ ± 3 dB range. The estimate of the noise power also must be derived
from a large number of pixels to determine the statistical mean. On an individual
pixel basis, the actual noise power may deviate significantly from the mean
noise estimate.

The variation in noise power over time primarily results from variation in
the radar receiver chain component gains. This drift can usually be measured
from receive-only noise measurements, when the transmitter is placed in a
standby mode and only the thermal noise is recorded. The changes over time
in thermal noise power can be monitored using internal calibration signals that
measure the overall receiver gain characteristic.

A formulation for the range dependent scale factor K(R) in terms of
measurable quantities can be derived from the radar equation, as we will show
in the next section. It is dependent on radar system parameters such as the
antenna gain pattern, the transmit power, and the sensor-to-target slant range.
Errors in the estimates of these system parameters will degrade our estimate
of K(R) and therefore the radiometric calibration.

To evaluate the sensitivity of σ⁰ to errors in the estimates of K(R) and P̄_n,
we take the partial derivative of Eqn. (7.3.6) with respect to each parameter.
The uncertainty in the estimate of σ⁰ for a given error in K(R) is:

s_σ⁰ = σ⁰ s_K / K(R)    (7.3.8)

while the σ⁰ error for a given error in P̄_n is:

s_σ⁰ = s_P / K(R)    (7.3.9)

where s_σ⁰, s_K, and s_P are the standard deviations of the estimates of σ⁰, K(R),
and P̄_n, respectively. We have assumed that the estimates of K(R) and P̄_n are
unbiased such that

E{K̂(R) − K(R)} = 0;   E{P̂_n − P̄_n} = 0

where E represents the expectation and K̂(R) and P̂_n are the estimated values.
Combining Eqn. (7.3.8) and Eqn. (7.3.9) and rearranging terms, the fractional
uncertainty in the estimate of σ⁰ from errors in the noise power and the correction
factor is given by

(s_σ⁰ / σ⁰)² = (s_K / K(R))² + (s_P / (σ⁰ K(R)))²    (7.3.10)

where we have assumed the estimation errors are uncorrelated, Gaussian
distributed variables. Recall from Eqn. (2.7.1) that K(R) is the product of a
number of terms (transmit power, antenna pattern, etc.), such that

K(R) = K₁ K₂ ··· K_n

If we assume that the distribution of the estimation errors for each term is
Gaussian and uncorrelated, and if we further assume that the variances are
small, then the coefficient of variation of the K(R) estimation error is given by
the sum of the coefficients of variation of the individual parameters (Kasischke
and Fowler, 1989)

ε_K² = ε_{K₁}² + ε_{K₂}² + ··· + ε_{K_n}²    (7.3.11)

where the coefficient of variation, ε_x = s_x / x̄, is the ratio of the standard deviation
to the sample mean for the random variable x. Combining Eqn. (7.3.10) and
Eqn. (7.3.11) the error model becomes

ε_σ⁰ = [ε_{K₁}² + ε_{K₂}² + ··· + ε_{K_n}² + (s_P / (K(R) σ⁰))²]^{1/2}    (7.3.12)

where ε_σ⁰ = s_σ⁰ / σ⁰. Using the relationship in Eqn. (7.3.6), we get a final expression
for our error model as

ε_σ⁰ = [ε_{K₁}² + ε_{K₂}² + ··· + ε_{K_n}² + (ε_P P̄_n / (n̄_d² − P̄_n))²]^{1/2}

Thus the coefficient of variation for σ⁰ is given by the root-sum-square of the
coefficients of variation of the individual terms in the radar equation plus a
scaled noise term.
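The root-sum-square error model of Eqn. (7.3.12) is straightforward to evaluate numerically. The sketch below uses an invented error budget, and the small-error dB-to-fraction conversion is one common convention, not taken from the text:

```python
import math

def sigma0_cov(eps_K, eps_P, P_r, P_n):
    """Coefficient of variation of the sigma0 estimate, Eqn. (7.3.12).

    eps_K : coefficients of variation of the individual K(R) factors (Eqn. 7.3.11)
    eps_P : coefficient of variation of the noise power estimate
    P_r   : mean received power (n_d-bar^2), so P_r - P_n = K(R)*sigma0
    P_n   : mean noise power
    """
    noise_term = eps_P * P_n / (P_r - P_n)  # scaled noise term
    return math.sqrt(sum(e * e for e in eps_K) + noise_term ** 2)

# Illustrative budget: 0.2, 0.3, and 0.1 dB uncertainties on three K(R)
# factors, a 10% noise power error, and a 6 dB signal-to-noise ratio.
frac = lambda err_db: 10.0 ** (err_db / 10.0) - 1.0  # small dB error -> fraction
eps = sigma0_cov([frac(0.2), frac(0.3), frac(0.1)], eps_P=0.10, P_r=5.0, P_n=1.0)
print(round(eps, 3))  # overall fractional sigma0 uncertainty
```

Note how the noise term grows as the received power approaches the noise floor: for low-backscatter scenes the noise power estimate can dominate the error budget.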
7.4 THE RADAR EQUATION

Given the radar equation for a distributed target as defined in Eqn. (2.8.2), we
can write the receiver output signal power as

P̄_s = P_t G_r G²(φ) λ² (σ⁰ A_x A_Rg) / [(4π)³ R⁴]    (7.4.1)

for a homogeneous scene, where we have assumed that the antenna is reciprocal
(i.e., G_t = G_r = G), P_t represents the radiated power, G_r is the overall receive
gain, and A_x A_Rg is the ground area of each precompression resolution cell.
(The point target radar equation would use the term σ, the radar cross section
of the point target, in parentheses in Eqn. (7.4.1).)

From Eqn. (7.3.2), Eqn. (7.3.4) and Eqn. (7.4.1), the range dependent scale
factor K(R) is given by

K(R) = P_t G_r G²(φ) λ² A_x A_Rg / [(4π)³ R⁴]

The parameters that must be estimated thus include the transmit power, the
receiver gain, the antenna pattern, the slant range, and the incidence angle η,
which depends on the platform roll angle. Additionally, the noise power term in
Eqn. (7.3.5) must be estimated.

The calibration techniques to estimate these parameters are broken into
internal calibration and external calibration measures. The internal calibration
uses data from built-in calibration devices to measure primarily the transmitter
power output and the receiver gain. Typically these devices will only be used
to track the system drifts over time. External calibration techniques generally
use image data of calibration sites equipped with point targets of known
scattering properties, or images of distributed target sites with known σ⁰. These
data are used primarily for absolute gain and antenna pattern estimates. The
following section will describe each of these techniques in detail.

7.5 RADIOMETRIC CALIBRATION TECHNIQUES

Perhaps the most difficult task in calibrating the SAR system is not in
collecting this set of calibration data, but rather in performing the calibration
analysis to derive the correction parameters. As shown in Figure 7.4, the final
stages in generating calibrated data products are: (1) Assembling the calibration
metadata and calibration site imagery into a database; (2) Performing analysis
of this data to derive the radiometric correction factors (i.e., the K(R) and P̄_n
terms as functions of time); and (3) Incorporating this information into the
operational processing data flow to routinely generate calibrated data products.
This section addresses specifically the sensor calibration measurements and the
ground calibration site design. The following sections will address the calibration
processor design and data analysis in some detail.

Figure 7.4 The final stages in generating calibrated data products.

The internal calibration measurements are only useful in conjunction with the
preflight system test results that define the relationship between these built-in
device measurements and the key system performance parameters. This is
especially true for a spaceborne SAR such as the E-ERS-1 SAR or the SIR-C.
For systems such as these, extensive testing of the RF electronics, digital
electronics, and the antenna are made over temperature and, when possible, in
a vacuum environment. Key system parameters such as: transmitter output
power, transmitter and receiver losses, receiver gain, antenna gain and pattern,
RF/digital electronics linearity and dynamic range, and phase/amplitude versus
frequency stability are measured as functions of temperature at each (unique)
radar gain and PRF setting. Proper placement of internal calibration devices,
such as temperature, current, and power meters, will permit determination of
the system performance as a function of variation in these parameters.
Obviously, this technique assumes that the variation in system performance
can be modeled as a function of these observable parameters. Furthermore, we
assume that these calibration devices are themselves accurately calibrated and
stable over time. In addition to these built-in test meters, most radar systems
perform in-flight RF test measurements using calibration loops. To illustrate
the two fundamental approaches to the RF internal calibration design we
consider as examples the ESA E-ERS-1 SAR and the NASA/JPL SIR-C designs.
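Collecting the terms of the radar equation, K(R) can be evaluated directly from the radar parameters. The sketch below uses invented, roughly spaceborne-L-band numbers purely to show the dependence, in particular the R⁻⁴ falloff:

```python
import math

def scale_factor(P_t, G_r, G_phi, lam, A_x, A_Rg, R):
    """Range dependent scale factor implied by Eqns. (7.3.2), (7.3.4), and
    (7.4.1): K(R) = P_t G_r G(phi)^2 lambda^2 A_x A_Rg / ((4 pi)^3 R^4)."""
    return (P_t * G_r * G_phi ** 2 * lam ** 2 * A_x * A_Rg
            / ((4.0 * math.pi) ** 3 * R ** 4))

# Invented example values: 1 kW peak power, 70 dB overall receive gain,
# 35 dB one-way antenna gain, lambda = 0.235 m, a 10 km x 25 m
# precompression cell, and an 850 km slant range.
K = scale_factor(P_t=1e3, G_r=1e7, G_phi=10 ** 3.5, lam=0.235,
                 A_x=10e3, A_Rg=25.0, R=850e3)
print(K)
# Doubling the slant range reduces K(R) by a factor of 2^4 = 16:
K2 = scale_factor(P_t=1e3, G_r=1e7, G_phi=10 ** 3.5, lam=0.235,
                  A_x=10e3, A_Rg=25.0, R=1700e3)
print(round(K / K2, 6))  # 16.0
```

The strong range dependence is why K(R) must be tracked across the swath (cross-track) as well as along the orbit.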
Figure 7.5 Internal calibration loop design used by ESA ERS-1 SAR. A similar design is employed
by the X-SAR shuttle radar (Attema, 1988).

system. The high power amplifier (HPA) output is coupled into a bypass circuit
that has two possible paths. The calibration loop signal (RF replica) passes
through the entire receiver chain, bypassing only the antenna, while the pulse
(IF) replica loop additionally bypasses the entire RF stage of the receiver and
inserts a signal into the front end of the receiver IF stage.

The details of the calibrator block in Fig. 7.5 are shown in Fig. 7.6. The
calibration loop is used only during the turn-on and turn-off phases of the data
collection operation. The high power amplifier (HPA) output is coupled
(−58 dB) into the calibrator bypass circuit and demodulated to an intermediate
frequency (123 MHz). The signal is then filtered, attenuated, and shifted back
to its original RF center frequency where it is coupled (−44.5 dB) into the
front end of the receive chain prior to the low noise amplifier (LNA). An HPA
power out measurement is performed using a power meter. This measurement
is then sent to the control processor for incorporation into the downlink data
stream.

Figure 7.6 Detail design of the internal calibrator for E-ERS-1 (Attema, 1988).

The pulse replica loop is used primarily during the data acquisition phase
of the operations. This loop injects a replica of the transmitted pulse into the
data stream during the quiet periods between pulse transmission and echo
reception. A delay line is used to properly insert this echo into the data stream
without interfering with the received signal. A command from the control
processor is used to set the signal level to be compatible with the selected IF
amplifier gain in the receive chain. The pulse replica loop injects this attenuated
signal into the receiver following the LNA at an intermediate frequency to
minimize the front-end noise contamination. It is important to note that the
pulse replica loop cannot directly measure the system gain variation since the
primary source of gain drift is the front end LNA.

The E-ERS-1 internal calibration loops will be used as follows to correct for
system errors (Corr, 1984). The relative change in transmitter output power
times the receiver gain variation is measured by the calibration loop during the
turn-on/off sequences. The gain at any time during the data acquisition period
is then estimated assuming a linear variation over the period. This is a reasonable
assumption since the period between turn-on and turn-off is relatively short
(nominally < 5 minutes). The pulse replica loop is primarily used to obtain the
relative gain and phase characteristics (minus the LNA) across the system
bandwidth. This transfer function estimate is then used to determine the exact
range pulse code for use in the ground signal processor. If the pulse code (e.g.,
chirp) generator is not stable (e.g., phase drift), then a frequent update in the
range compression function may be required for formation of the synthetic
aperture.

SIR-C Internal Calibration

An alternative approach to internal system calibration is to use a single frequency
tone generator that is coherent with the stable local oscillator (stalo) controlling
the radar system. This design, shown in Fig. 7.7, is used by the NASA/JPL
Shuttle Imaging Radar series of instruments (Klein, 1990a). The calibrator
subsystem generates a stable low power tone that is used to monitor changes
in the receiver transfer characteristic. Prior to the data acquisition during the
turn-on phase of operation, the calibrator generates a tone spanning the full
dynamic range of the receiver. This continuous tone signal is injected into the
receiver data stream via a directional coupler. It scans across the passband,
dwelling at each frequency position for a fixed number of pulses. Typical numbers
for SIR-C would be a scan over 11 frequency positions, dwelling at each position
for 64 pulses (≈0.05 s).

Figure 7.7 Internal calibration loop design used by the NASA/JPL SIR-B and SIR-C instruments.

During the data acquisition phase, the tone is set in a fixed position in the
center of the system bandwidth at a power level more than 12 dB below the
expected signal power. The calibration tone (caltone) signal power is set at this
low level to ensure that it does not contribute significantly to receiver saturation.
Details of the SIR-C calibration subsystem are shown in Fig. 7.8. The caltone
frequency is derived from the stalo frequency f_slo, the sampling frequency f_s,
and the PRF f_p. It is selected such that the calibration tone falls into a discrete
FFT bin during the signal processing. The calibration output power is controlled
by a thermal compensation circuit to maintain less than 0.1 dB variation over
a range of operating temperatures. A step attenuator is used to adjust the caltone
signal power such that it is always 12-18 dB below the echo signal power. The
resulting caltone will be phase locked with the radar from pulse to pulse. This
permits coherent integration of consecutive echoes to effectively increase the
caltone power relative to the echo power for a precise measurement of receiver
gain.

Figure 7.8 Details of the SIR-C calibration design.

The caltone is extracted from the data during signal processing by performing
an FFT on each echo line within a data block (e.g., 1024 samples by 512 lines).
Each transformed line is then summed coherently in the along-track direction.
For example, a 1024 sample range transform effects a 30 dB gain in the caltone
to receiver output power (P̄_s + P̄_n) ratio. This gain is achieved since the caltone
energy is confined to a single FFT bin, while the received signal energy is spread
across all 1024 bins. A phase coherent azimuth summation of 512 transformed
lines achieves an additional 27 dB gain in the caltone power level. However,
this is partially offset by the unfocussed SAR aperture gain which is approximately
15 dB (35 lines) for a nominal SIR-C mode. Thus a caltone to signal ratio of
30 dB can be achieved from processing a 1024 by 512 block of data, assuming
the initial caltone to signal data ratio is set at −12 dB. The resulting caltone
estimation error (<0.01 dB) is small relative to the expected caltone power
drift (≈0.1 dB). The results of a simulation using NASA/JPL DC-8 SAR data
(unfocussed aperture gain ≈10 dB) are shown in Fig. 7.9 (Kim, 1989).

The caltone gain estimate is used to normalize the data samples acquired
during a time interval around the processed block of data. Typically the signal
processing generates an image frame from each 15 s block of data. Caltone
estimates from the beginning and end of the data block are routinely produced
to verify system stability over the 15 s period. The raw digitized video data are
then normalized according to the estimated mean caltone power level after their
conversion to a floating point representation and after subtraction of the caltone.
The caltone subtraction can be performed in either the time domain or the
frequency domain, given estimates of both the caltone gain and phase. If zero
padding of the data is required to achieve the "power of two" FFT block in
the range correlator, then the caltone energy will be dispersed according to the
fraction of zero samples. This greatly complicates the frequency domain
estimation and subtraction procedures. In this case, the caltone subtraction is
most efficient in the time domain. The caltone scan sequence during the turn-on
and turn-off phases of the data collection measures the gain and phase variation
across the system bandwidth. These measurements can be used to adjust the
range reference function for optimum matched filtering during the signal
processing.

The caltone scheme described above has one distinct benefit in that it can
be used to measure the gain variation throughout the data take. However, its
shortcoming is that it does not measure transmitter output power. This can be
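The FFT bin arithmetic behind the caltone extraction is easy to verify numerically. The following sketch uses synthetic data; the bin index, amplitudes, and random seed are arbitrary. It injects a tone at −12 dB relative to unit-power, noise-like "echo" samples and shows the roughly 30 dB caltone-to-signal gain from a 1024 point range transform:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024                  # range samples per echo line
k_cal = 200               # caltone falls in a discrete FFT bin (arbitrary choice)
n = np.arange(N)

# Caltone at -12 dB relative to the unit-power, noise-like "echo" samples.
caltone = 0.25 * np.exp(2j * np.pi * k_cal * n / N)               # power 0.0625
echo = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

line = np.fft.fft(caltone + echo)
cal_power = np.abs(line[k_cal]) ** 2                    # tone collects in one bin
sig_power = np.mean(np.abs(np.delete(line, k_cal)) ** 2)  # echo spreads over N bins
ratio_db = 10 * np.log10(cal_power / sig_power)
print(ratio_db)  # near -12 dB + 10*log10(1024) = 18 dB
```

A further coherent sum over 512 transformed lines would add the azimuth gain quoted in the text, less the unfocussed aperture gain.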
RADIOMETRIC CALIBRATION OF SAR DATA
7.5 RADIOMETRIC CALIBRATION TECHNIQUES 335
334
Spectru m of I Line temperature. In this scheme, the absolute measure of radiated power can only
l00 be determined using external calibration devices such as a ground receiver.
reflector, t he RCS is very sensitive to disto rtions in the plates forming the sides
40 cb. -40° - <I>. oo
of the reflector. Fabr ication errors or warping from thermal cycling could cause I/ r"\. ,,
a significant change relative to the theoretical RCS of the device. '
20
I/
/
' [\
,f '1
Passive Calibration Devices. The most frequently used devices for SAR calibration are corner reflectors. By far the most popular reflector is the triangular trihedral design (Fig. 7.11). The triangular trihedral radar cross section is given by (Ruck et al., 1970)

    σ = 4πa⁴/(3λ²)    (7.5.1)

where a is the length of one side. This design is preferred since it is relatively stable for large radar cross sections and exhibits a large 3 dB beamwidth (~40°) independent of wavelength and plate size.

An example of the dependence of radar cross section and beamwidth on pointing angle relative to the axis of symmetry is given in Fig. 7.12 (Robertson, 1947). This figure shows the response of a triangular trihedral (a = 0.6 m) to a K-band radar (λ = 1.25 cm). The variation in RCS as a function of device orientation is an important consideration if the device is to be deployed in a permanent configuration and imaged as a target of opportunity during normal operations. This approach was used for several of the Seasat corner reflectors, which were imaged from both ascending and descending passes over the calibration site. These devices were oriented with the axis of symmetry perpendicular to the surface. For Seasat at a 20° look angle this resulted in only a few dB of lost RCS, but eliminated the need to re-orient the devices for each pass. A summary of the RCS and beamwidth parameters for various reflector designs is given in Table 7.1.

The construction of the reflector must be to an error tolerance that is small relative to the radar wavelength. Typical specifications for surface irregularity are for an rms variation less than 0.1λ, resulting in a 0.1 dB RCS loss; the plate curvature should be less than 0.2λ for a 0.1 dB RCS loss; and the orthogonality requires plate alignment of better than 0.2° in each axis for a 0.1 dB loss. Assuming another ~0.2 dB uncertainty from pointing (orientation) of the device, typical numbers for device accuracies are on the order of 0.5 dB. However, additional calibration errors may result from uncertainty in estimating the background backscatter or from multipath effects. For this reason it is desirable to find a suitable location for deployment where these contributions are small (i.e., < -20 dB) relative to the RCS of the corner reflector.

Figure 7.11 Triangular trihedral corner reflector (a = 2 m) deployed by JPL at the Goldstone, California, calibration site.

Figure 7.12 Relative radar cross section patterns as a function of angle relative to the axis of symmetry; θ is the vertical elevation angle, φ is the horizontal angle (Robertson, 1947).
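Equation (7.5.1) is easy to check numerically. The sketch below (Python; the L-band wavelength of 0.235 m and the helper names are assumptions for illustration) evaluates the formula for an a = 2 m reflector like the one in Fig. 7.11:

```python
import math

def trihedral_rcs(a_m: float, wavelength_m: float) -> float:
    """Peak RCS of a triangular trihedral, sigma = 4*pi*a^4/(3*lambda^2),
    per Eqn. (7.5.1) (Ruck et al., 1970); a is the length of one side."""
    return 4.0 * math.pi * a_m**4 / (3.0 * wavelength_m**2)

def to_dbsm(sigma_m2: float) -> float:
    """Convert an RCS in square meters to dB relative to 1 m^2."""
    return 10.0 * math.log10(sigma_m2)

# Example: a = 2 m reflector at an assumed L-band wavelength of 0.235 m.
sigma = trihedral_rcs(2.0, 0.235)
print(f"sigma = {sigma:.0f} m^2 = {to_dbsm(sigma):.1f} dBsm")
```

The strong a⁴/λ² dependence is why even a modest trihedral can dominate the background σ⁰ of a homogeneous site.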
340 RADIOMETRIC CALIBRATION OF SAR DATA 7.5 RADIOMETRIC CALIBRATION TECHNIQUES 341
TABLE 7.1 Scattering Properties of Several Common Reflector Designs

    Reflector               Maximum RCS       3 dB Beamwidth
    Sphere                  πa²               360°
    Luneberg lens           4π³a⁴/λ²          ~40°
    Triangular trihedral    4πa⁴/3λ²          ~40°
    Square trihedral        12πa⁴/λ²          ~40°

Active Calibration Devices. This class of devices includes instruments such as transponders, receivers, and tone generators. Each of these serves an important function in calibration of the SAR system. The transponder is similar to the reflector in that it relays the transmitted signal back to the radar. However, the transponder has the benefit of increasing the signal strength by electronic amplification. The ground receivers are essentially half a transponder with some recording capability. They can be used to directly estimate the radiated power and the antenna patterns, which are key parameters that cannot be measured with internal calibration devices. The tone generators are used in pairs to produce two continuous frequency tones offset by some fraction of the system bandwidth at orthogonal polarizations. These devices are primarily used to measure the cross-polarization isolation of the radar. A comprehensive ground calibration site design typically would include all three device types.

TRANSPONDERS. The effective radar cross section of a transponder is

    σ = (λ²/4π) G_t G_r G_e    (7.5.2)

where G_t, G_r are the transmit and receive antenna gains and G_e is the net gain of the transponder electronics. This design provides the flexibility to achieve the desired RCS by selecting amplifiers with the required gain. The antenna selection is driven primarily by cross-polarization isolation and beamwidth requirements, with gain a secondary consideration. With a two-antenna design, as pictured in Fig. 7.13b, the cross-coupling between antennas is an important consideration, since this signal is amplified by the transponder gain. The required cross-coupling performance (< -80 dB) is achieved by spatially separating the antennas. Typically, standard gain horn or microstrip patch antennas are used. However, if large cross-polarization isolation and low sidelobes are required a corrugated horn may be used.

GROUND RECEIVERS. The power radiated by the SAR is related to the power measured by a ground receiver by

    EIRP = (4πR)² P_r/(λ² G_r G_e)    (7.5.3)

where EIRP is the effective isotropic radiated power, R is the slant range, P_r is the received power as measured from the digitized signal, and G_r, G_e are the antenna and electronic gains of the receiver unit. The use of ground receivers can be a highly accurate technique for measurement of the SAR antenna pattern, since the forward radiated power is measured. This is a much stronger signal than the reflected RCS or the background σ⁰. However, if the SAR antenna is not reciprocal, then the receivers cannot determine the SAR receive antenna
pattern, since a ground receiver can only measure the overall SAR transmit chain characteristic.

Figure 7.13 Active transponder design by Applied Microwave Corporation (Brunfeldt, 1984).

Figure 7.14 Ground calibration receiver design by the Institute for Navigational Studies (INS) at the University of Stuttgart, Germany (Freeman, 1990d).

CONTINUOUS WAVE TONE GENERATORS. Tone generators typically consist of a linearly polarized antenna and a signal generator, as shown in Fig. 7.15. These devices are used in pairs, with each unit transmitting one of two orthogonal polarizations at a frequency offset from the other by some fraction of the system bandwidth. The cross-polarization isolation of the SAR receive antenna can be determined from the raw signal data by

    I_xp = G_r(f_x)/G_r(f_l)    (7.5.4)

where G_r(f_l) and G_r(f_x) are the SAR receive antenna like- and cross-polarized gains, respectively. These signals, offset in frequency by f_x - f_l, will be shifted by the one-way Doppler associated with the relative sensor to target position for that range line. The quantity in Eqn. (7.5.4) can be measured in the ground processor from a Fourier transform of each range line. Across the SAR azimuth aperture, the received tone generator signal can migrate through several bins in the FFT due to the Doppler shift. Thus, if azimuth summation of adjacent range lines is required to reduce the signal estimation error, care should be taken that the tone falls within a discrete FFT bin for each range line used.

Figure 7.15 Block diagram of continuous wave tone generators to measure antenna cross-polarization isolation.
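As a sketch of the Eqn. (7.5.4) measurement, the tone powers can be read directly from the Fourier transform of a single range line. In the code below (Python/NumPy) the sampling rate, tone frequencies, and amplitudes are illustrative assumptions, chosen so the tones fall on discrete FFT bins as the text recommends:

```python
import numpy as np

def tone_power(line: np.ndarray, f_tone: float, fs: float) -> float:
    """Power in the FFT bin nearest f_tone for one range line sampled at fs."""
    spec = np.fft.rfft(line)
    freqs = np.fft.rfftfreq(line.size, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_tone)))
    return float(np.abs(spec[k]) ** 2)

# Synthetic range line: like-polarized tone at f_l and a 20 dB weaker
# cross-polarized tone at f_x, both placed on FFT bin centers.
fs, n = 20e6, 4096
df = fs / n
f_l, f_x = 410 * df, 512 * df
t = np.arange(n) / fs
line = 1.0 * np.cos(2 * np.pi * f_l * t) + 0.1 * np.cos(2 * np.pi * f_x * t)

# Eqn. (7.5.4): cross- to like-polarized gain ratio of the receive chain
i_xp = tone_power(line, f_x, fs) / tone_power(line, f_l, fs)
print(f"I_xp = {10 * np.log10(i_xp):.1f} dB")  # -20.0 dB for the 0.1 amplitude
```

In practice the Doppler shift moves the tones across bins from line to line, which is why the text cautions against blind azimuth summation.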
Calibration Site Design

To perform the required measurements for SAR system calibration, groups of devices are required (Dobson et al., 1986). Typically these devices are deployed in along-track and cross-track geometric configurations to measure the geometric calibration accuracy as well as the radiometric calibration parameters. A site originally used by NASA/JPL for the Seasat SAR and later upgraded for the DC-8 airborne multipolarization, multifrequency SAR is shown in Fig. 7.16 (Freeman et al., 1990a). Most of the array consists of triangular trihedrals, with transponders, receivers, tone generators, and dihedrals inserted as shown. The transponders and dihedrals were oriented to enable measurement of the SAR cross-polarized transfer characteristic (Hirosawa and Matsuzaka, 1988). An L-band image of the Goldstone site acquired by the DC-8 SAR is shown in Fig. 7.17. Since each reflector has been surveyed to determine its true location, this image can be used to assess the scale and skew errors (Chapter 8) as well as the absolute location error of the DC-8 system.

Figure 7.16 Diagram of the NASA/JPL calibration site at Goldstone, California (Freeman, 1990a).

Figure 7.17 L-band total intensity image of Goldstone, California, acquired by the NASA/JPL DC-8 SAR.

The elevation antenna pattern is determined by fitting the RCS measurements from each device with a least squares error polynomial. Across the mainlobe return, a quadratic fit is sufficient to characterize most antennas (Fig. 7.18). The uncertainty in each estimate is given by the device errors (fabrication, deployment, etc.), the uncertainty in the background contribution (i.e., σ⁰_B δx δR),
and the image measurement errors. Assuming these error sources are uncorrelated, the pattern estimation error is given by

    s_p = (s²_CR + s²_BR + s²_M)^(1/2) / √M    (7.5.5)

where M is the number of devices used in the pattern estimate and s_CR, s_BR, and s_M are the standard deviations of the device RCS estimate, the background σ⁰ estimate, and the image measurement error, respectively.

The image measurement error as well as the background error can be significantly reduced by using a technique proposed by Gray et al. (1990). Their approach is to integrate the return power over a local area surrounding the reflector, rather than to attempt to estimate the peak return. The total power in an equivalent adjacent area is also estimated, and the difference between these two powers is that attributed to the RCS of the reflector. Thus, the only error in the estimation procedure is the variation in background σ⁰ between the area containing the device and the reference area. This variation can be minimized by selecting the calibration site such that the reflector is placed in a large homogeneous backscatter region. The remaining error contributor is that of the device itself, which can be mediated by measuring the reflector (or transponder) under controlled conditions such as in an anechoic chamber, or on an antenna range.

The short term stability of the radar system can also be assessed by placing a second group of devices at some distance down-track from the main calibration site. These two calibration sites should be sufficiently close that the errors associated with the platform attitude variation (e.g., roll angle errors) and thermal variation can be neglected. The short-term stability performance (short-term relative calibration) is an important measure for many scientific analyses.

Figure 7.18 Cross-track (vertical) antenna pattern measurement using point targets deployed cross track.

Distributed Target Calibration

Distributed target calibration refers to external calibration using natural targets of large areas with homogeneous backscattering properties. A fundamental assumption is that the scattering properties of these areas are stable or that the variation is well characterized. This permits the image characteristics associated with the target scattering to be decoupled from the sensor performance.

One important benefit of using distributed calibration targets is that they measure the radar performance at various operating points within the system dynamic range. Recall that, for point target calibration, the device RCS must be large relative to the surrounding σ⁰ to minimize the background estimation error. Therefore these devices can only measure the system performance at the high end of the linear dynamic range (Fig. 7.19). Distributed target calibration sites exhibit a wide range of σ⁰ values that can be used to assess the system performance at a number of points across its linear dynamic range.

Figure 7.19 System gain characteristic illustrating the operating point for the calibration devices (e.g., reflectors, transponders).

A second important advantage of distributed calibration sites is that they can be used as a direct measure of the cross-track variation in the received signal power as reflected in the digitized raw video signal after range compression. Referring to our formulation of the distributed target radar equation in
Eqn. (7.4.1), four parameters vary as functions of cross-track position within the swath. They are:

(1) Slant range, R
(2) Ground range resolution, ΔR/sin η
(3) Elevation antenna pattern, G²(φ)
(4) Backscatter coefficient, σ⁰(η)

Both the look angle γ and the incidence angle η can be written in terms of the slant range, the platform ephemeris, and the platform attitude as given in Eqn. (8.2.4) and Eqn. (8.2.5). Typically, the most important platform parameter for calibration is the roll angle estimation error, which causes the antenna pattern to be offset relative to its expected cross-track location. A plot of the Seasat antenna pattern correction factor (roll = 0°) as a function of slant range (or equivalently cross-track pixel number in a slant range image) is shown in Fig. 7.20.

To extract the antenna pattern from the range compressed signal data, the received signal power variation due to σ⁰, R, and sin η must first be estimated. Typically the slant range, R, the range bandwidth, B_R, and the platform position, R_s, are well known. Additionally, for each of the main calibration sites, the σ⁰ versus η dependence is known, leaving just the elevation antenna pattern and the roll angle as the key parameters to be estimated. It should be noted that the total received power consists of both the signal power and the noise power. Thus the noise power must be subtracted prior to performing any corrections on the cross-track signal power. If the noise power is subtracted after range compression, then the compression gain must be taken into account as described in Section 7.6. In some cases, where the SNR is low, the thermal noise can dominate the signal return power, resulting in a large antenna pattern estimation error unless the noise power is known to a very high precision.

To reduce the effects of thermal noise, a large number of range compressed (or range and azimuth compressed) lines can be incoherently added in the along-track direction. The number of lines integrated must be short relative to the rate of change of the roll angle. This technique was used by Moore (1988) to estimate the SIR-B antenna pattern over the Amazon rain forest.

A similar echo tracker approach was implemented operationally in the SIR-B correlator to estimate the roll angle prior to the antenna pattern correction stage (Fig. 7.21). For each standard image frame, consisting of ~25 K range lines, 1 K range compressed lines spaced throughout each 5 K block were incoherently averaged, smoothed using a low pass filter, and fit with a least square error (LSE) quadratic polynomial. The error function was weighted according to the estimated SNR of each data sample. The peak of the estimated pattern was extracted and averaged with estimates from the other four (5 K line) image blocks to provide a single roll angle estimate for the image. As expected, this technique worked well for regions of relatively low relief. In high relief areas the LSE fit residuals were used to reject the estimate and revert to attitude sensor measurements. A roll angle echo tracker technique was needed for SIR-B because of the large uncertainty in the shuttle attitude determination. The estimated (3σ) attitude sensor error was on the order of 1.5° in each axis with drift rates as high as 0.03°/s (Johnson Space Center, 1988). Results using this technique to measure the roll angle variation for SIR-B are shown in Fig. 7.22 (Wall and Curlander, 1988).
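A minimal sketch of the echo-tracker idea follows (Python/NumPy). The array shapes, smoothing window, and synthetic pattern are illustrative assumptions, and the SNR weighting of the fit used by the SIR-B correlator is omitted:

```python
import numpy as np

def roll_offset(power_lines: np.ndarray, angles: np.ndarray,
                expected_boresight: float) -> float:
    """Incoherently average range-compressed lines, smooth, fit an LSE
    quadratic across the mainlobe, and report the shift of the fitted
    peak from the expected boresight as the roll-angle estimate."""
    mean_profile = power_lines.mean(axis=0)            # incoherent average
    win = 31                                           # low-pass smoothing
    smooth = np.convolve(mean_profile, np.ones(win) / win, mode="same")
    mask = smooth > 0.5 * smooth.max()                 # mainlobe region only
    c2, c1, _ = np.polyfit(angles[mask], smooth[mask], deg=2)
    peak = -c1 / (2.0 * c2)                            # vertex of the parabola
    return peak - expected_boresight

# Synthetic check: Gaussian-like elevation pattern offset by 0.5 deg,
# with exponentially distributed (speckle-like) per-line power.
rng = np.random.default_rng(0)
angles = np.linspace(-5.0, 5.0, 512)
pattern = np.exp(-((angles - 0.5) / 2.0) ** 2)
lines = pattern * rng.exponential(1.0, size=(1000, angles.size))
print(f"roll offset estimate: {roll_offset(lines, angles, 0.0):.2f} deg")
```

Restricting the quadratic fit to the mainlobe matters: over the full profile the sidelobe and edge samples bias the fitted vertex, which is the same reason the text limits the quadratic characterization to the mainlobe return.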
The distributed target approach to antenna pattern and roll angle estimation should not be considered as a replacement for the point target estimation procedure. Rather, this technique should be treated as an approach (target of opportunity) that can be used to fill gaps between the point target site estimates for monitoring intra-orbital variation. Additionally, distributed targets can measure performance over wide swath areas (e.g., the 100 km E-ERS-1 swath), which would be very costly using point target devices.
Figure 7.22 Echo tracker roll angle estimate as a function of time for two SIR-B data segments. Each estimate results from the integration of 1000 range lines.
Figure 7.21 Flowchart of the SIR-B echo tracker routine to estimate the platform roll angle.

The measured scattering matrix of the target is

    S = ( S_HH  S_HV )
        ( S_VH  S_VV )

where each element S_ij is a complex number. The received signal (voltage) is given by

    z = R S T + N    (7.5.6)

where R and T characterize the radar receive and transmit systems respectively and N is the additive noise term. For an ideal system, T and R could be characterized as identity matrices with some complex scale factor. Polarimetric system errors can be modeled as channel imbalance and cross-talk terms (Freeman et al., 1990a), i.e.,

    R = A_r e^(jψ_r) ( 1    δ₁ )        T = A_t e^(jψ_t) ( 1    δ₃ )    (7.5.7)
                     ( δ₂   f₁ )                         ( δ₄   f₂ )

Inserting Eqn. (7.5.7) into Eqn. (7.5.6), we get an absolute phase term ψ_t + ψ_r, which is not significant since it only represents the relative position of the dominant scatterer within the resolution cell. The gain term A_r A_t represents the common gain across all channels and is equivalent to √P̄ in Eqn. (7.3.1). This gain can be estimated from calibration site data as described in the previous section. The cross-talk terms δ₁, δ₂, δ₃, and δ₄ represent contamination resulting from the cross-polarized antenna pattern, as well as poor isolation in the
transmitter switches and circulators. These terms can be directly measured using polarization selective receivers and tone generators as described in the previous section. The δ₁ and δ₂ terms are directly measurable from the raw signal data by evaluating the ratio of like- and cross-polarized tone generator signals in each H and V channel. Similarly, receivers with exceptionally good cross-polarization isolation performance (> 40 dB), with antennas oriented for like- and cross-polarized reception, can be used to estimate δ₃ and δ₄.

The channel imbalance terms f₁ and f₂ are generally complex numbers whose amplitude and phase characteristics must be precisely known for many polarimetric applications (Dubois et al., 1989). A reasonably good estimate of the amplitude imbalance can be obtained from internal calibration procedures, assuming the antenna H and V patterns are similar and the boresights are aligned. However, the phase imbalance can only be estimated using external targets, since the antenna contribution cannot be ignored. The relative gain and phase of the channel imbalance terms f₁ and f₂ can also be estimated using active devices such as transponders, where the scattering matrix of the target can be controlled. It can be shown that three transponders with independent scattering matrices (Freeman et al., 1990a) can be used to solve for all six error terms.

An alternative approach, using known characteristics of a distributed target scattering matrix in addition to passive corner reflectors, has been proposed by van Zyl (1990) and Klein (1990b). Given a target dominated by single-bounce surface scattering, the target imposes no cross-polarized term and the relative HH to VV phase is constant. Thus, assuming reciprocity (i.e., δ₁ = δ₄, δ₂ = δ₃, f₁ = f₂), these terms can be calibrated without the use of any point target calibration devices. To determine the channel amplitude imbalance, a corner reflector such as a triangular trihedral is required, whose scattering matrix is given by

    S = A_tr ( 1  0 )
             ( 0  1 )

where we have ignored errors in the device construction and deployment and A_tr = √σ_tr is given by Eqn. (7.5.1). The relative channel phase imbalance can be estimated from a trihedral reflector or from a distributed target, assuming that the dominant scattering mechanism is a single bounce type scatter.

A limitation in the technique as presented by both van Zyl and Klein (other than the reciprocity assumption) is that the channel imbalance can only be estimated in a local area around the reflector. If the target scattering could be modeled such that the relative change in z_HH/z_VV were known as a function of incidence angle across the swath, then the amplitude balance as a function of cross track position could be estimated using a distributed target technique. The absolute value of z_HH/z_VV could then be determined using a single device or group of devices in a local area. In the NASA/JPL SAR processor for the DC-8 polarimetric system, the phase error between the H and V channels is routinely estimated using a distributed target (such as the ocean), and software has been distributed to the investigators to perform clutter calibration on their images using the approach proposed by van Zyl. It also should be noted that in the calibration of polarimetric data the cross-polarized terms z_HV, z_VH are averaged (after phase compensation) to obtain a single value (see Section 7.7). This approach is based on the fact that all natural targets are reciprocal, and therefore the difference between the cross-polarized terms is due only to system errors. A final point is that in all these techniques we have assumed the noise power to be negligible. For distributed target calibration techniques to be valid, the data should be averaged over a large number of independent samples to reduce the effective noise power, keeping in mind that the parameters to be estimated may be dependent on their spatial position, limiting the area over which the estimate can be performed.

7.6 RADIOMETRIC CALIBRATION PROCESSING

In the SAR ground data system, the signal processing consists of a raw data correlation (Level 1A processing) to form the SAR image, followed by a post-processing stage (Level 1B processing) to perform the image radiometric and geometric corrections. The geometric correction algorithms will be addressed in Chapter 8. The remainder of this chapter will be used to describe the radiometric calibration processing. The radiometric calibration processing involves analysis of the internal and external calibration data, generation of the calibration correction factors, and application of these corrections to the image data. The calibration processing data flow is shown in Fig. 7.23. There are three major ground data system elements. The calibration subsystem (CAL) is typically an off-line workstation tasked to perform analysis of the internal and external calibration data as well as the preflight test data. The catalog (CAT) is the data base management system responsible for archiving the calibration data, including preflight test data. The CAT is also responsible for reformatting the engineering telemetry data into time series records for each internal calibration device (e.g., P_t(t_i), i = 1, N). These data are then accessed by the CAL in conjunction with the calibration site imagery to derive the necessary radiometric correction parameters for the SAR correlator (COR). The corrections are precalculated and stored in the CAT for eventual access by the correlator during the image processing operations. Typically, the correction factors are also stored as time series (e.g., G(φ, t_i), i = 1, M), where the sampling frequency is dependent on the stability of the sensor and the calibration device used for the measurement.
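Returning to the polarimetric discussion above, the reciprocity-based combination of the cross-polarized channels (estimate the mean HV-VH phase difference from a distributed target, compensate it, then average) might be sketched as follows; the synthetic data and the 60° channel phase are illustrative assumptions:

```python
import numpy as np

def symmetrize_crosspol(z_hv: np.ndarray, z_vh: np.ndarray) -> np.ndarray:
    """Average the cross-polarized channels after removing their mean phase
    difference; for reciprocal natural targets the two channels should then
    differ only through system errors and noise."""
    phi = np.angle(np.mean(z_hv * np.conj(z_vh)))    # mean HV-VH phase
    return 0.5 * (z_hv + z_vh * np.exp(1j * phi))    # compensate, then average

# Synthetic check: identical target response in both channels, with the
# VH channel rotated by a systematic 60 deg channel phase error.
rng = np.random.default_rng(1)
s = rng.normal(size=1000) + 1j * rng.normal(size=1000)
z_hv = s
z_vh = s * np.exp(-1j * np.deg2rad(60.0))
z_x = symmetrize_crosspol(z_hv, z_vh)
print("max residual:", float(np.max(np.abs(z_x - s))))  # ~ 0
```

Averaging over many samples is what makes the phase estimate robust; over a small or heterogeneous area the estimate degrades, consistent with the caveats noted in the text.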
Figure 7.23 Calibration processing data flow (inputs include preflight test data, engineering telemetry, ground site data, and calibration site raw data).

The calibration activity consists of three phases:

1. Preflight test data analysis;
2. Calibration processing (i.e., correction factor generation/application);
3. Verification processing and performance analysis.

Each of these phases is described in the following subsections.

Preflight Test Data Analysis

The preflight test data analysis is used to derive the relationship between the internal calibration device measurements and the radar performance parameters. For example, the transmitter power output may depend uniquely on its baseplate temperature. Preflight testing can establish the functional relationship between the transmitter output power and the baseplate temperature sensors to provide a means of indirectly calibrating the transmitter drift during operations. Additionally, the stability of the sensor, which is established in preflight tests, is used to determine the required sampling of the internal calibration data and the number of external calibration sites.

The preflight testing is especially important for the SAR antenna characterization, since its performance cannot be directly measured using internal calibration devices. For the SIR-C active phased array antenna, the thermal sensors on the antenna backplane will be used to calibrate the T/R module output power and gain drift over the mission. Additional parameters, such as the DC current drawn by each panel, will be used to indicate if a T/R module or a phase shifter is performing anomalously.

Calibration Processing

The calibration processing includes:

1. Calibration site image analysis of single point targets to determine mainlobe broadening (K_ml), sidelobe characteristics (ISLR, PSLR), and absolute location accuracy;
2. Multiple point target analysis to determine geometric distortion (scale, skew, orientation errors) and the elevation antenna pattern;
3. Raw data analysis of tone generator signals to determine cross-polarization isolation of the receive antenna;
4. Engineering telemetry analysis to estimate drift in the system operating point (i.e., change in receiver gain or transmitted power);
5. Generation of calibration correction factors, K(R, t_i), including antenna pattern and absolute calibration scale factor;
6. Distributed target calibration site analysis for antenna pattern estimation.

The correction factors are passed from the CAL to the SAR correlator (via the CAT) for incorporation into the processing chain as shown in Fig. 7.24.

If the roll angle variation is slow relative to the azimuth coherent integration time, then the radiometric correction factor can be directly applied to the azimuth reference function, eliminating the need for an additional pass over the data.
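As a sketch of how a preflight relationship like the one described above might be used, a low-order polynomial fitted to preflight measurements can convert an in-flight baseplate temperature reading into a transmitter power estimate. The temperature-power pairs below are illustrative assumptions, not measured values:

```python
import numpy as np

# Preflight test data: transmitter output power vs. baseplate temperature
# (illustrative values).
temp_c = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
p_t_dbw = np.array([30.2, 30.0, 29.7, 29.3, 28.8])

# Establish the functional relationship with a least-squares quadratic fit.
coeffs = np.polyfit(temp_c, p_t_dbw, deg=2)

def transmit_power_dbw(baseplate_temp_c: float) -> float:
    """Indirect in-flight estimate of P_t from a temperature telemetry point."""
    return float(np.polyval(coeffs, baseplate_temp_c))

print(f"P_t at 35 C ~ {transmit_power_dbw(35.0):.2f} dBW")  # ~ 29.51 dBW
```

In an operational system the same lookup would be driven by the telemetry time series archived in the CAT, so the transmitter drift can be tracked without a dedicated measurement.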
7.6.2 Calibration Algorithm Design

In this section we address in more detail the problem of operationally producing radiometrically calibrated SAR images. We first derive a form of the radar equation applicable to the SAR image which includes the processor gains. A basic tenet that should be used in establishing a procedure for image calibration is that all corrections be reversible (i.e., the original uncorrected image should be recoverable). This inversion process may be necessary if the calibration correction factors are updated at some time after the initial processing. A second key requirement is that the algorithm be flexible such that the corrections can be applied to either the detected or the complex SAR images. Additionally, the procedure should allow for subtraction of the noise floor by the user, but should not operationally apply this correction to the data, since it will cause local errors in the σ⁰ estimate and may result in negative power estimates.

If we compare the radar equation before and after processing, from Eqn. (7.6.1) and Eqn. (7.6.2) the ratio of the mean image signal power to the mean raw video data signal power is

    P̄_I/P̄_s = δx δR L N_f W_L / (Δx ΔR_s)    (7.6.3)

where ΔR_s, δR are the precompression and image slant range resolutions, and Δx, δx are the precompression and image azimuth resolutions, respectively. Equation (7.6.3) is sometimes called the processing compression ratio.

The question now arises as to whether there is an improvement in the signal to noise ratio (SNR) as a result of the signal processing. Again consider a distributed homogeneous target. We wish to evaluate the expression
    K₁(R) = P_t G_r G²(φ) λ² L W_L L_s L_az δx δR / ((4π)³ R⁴)    (7.6.13)

Recall that the azimuth reference function size was assumed to be equal to the number of pulses spanning the azimuth footprint, i.e.,

    L_az = f_p λR/(L v)    (7.6.14)

TABLE 7.2 Effect of Azimuth Reference Function Length L_az and Normalization on the Expected Image Power

    Normalization    Length         Signal Power    Noise Power    SNR
    None             Variable, ∝R   ∝1/R²           ∝R             ∝1/R³
    1/L_az           Variable, ∝R   ∝1/R⁴           ∝1/R           ∝1/R³
    1/√L_az          Variable, ∝R   ∝1/R³           Constant       ∝1/R³
    None             Fixed          ∝1/R³           Constant       ∝1/R³
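The Table 7.2 scalings can be checked numerically. The sketch below uses simple assumptions of my own (per-pulse signal power ∝ 1/R⁴, unit per-pulse noise power, L_az ∝ R, coherent gain of L_az² on signal power and L_az on noise power):

```python
import numpy as np

# Two slant ranges with ratio 2, and an azimuth reference length
# proportional to range (as for a fixed azimuth resolution).
R = np.array([600e3, 1200e3])
L_az = 1024 * R / R[0]

sig = (L_az * (1.0 / R**2)) ** 2        # coherent sum of L_az amplitudes
noise = L_az * 1.0                      # incoherent sum of L_az noise powers

ratios = {}
for name, norm in [("none", np.ones(2)),
                   ("1/L_az", 1.0 / L_az),
                   ("1/sqrt(L_az)", 1.0 / np.sqrt(L_az))]:
    s, n = sig * norm**2, noise * norm**2   # amplitude norm -> power norm^2
    ratios[name] = (s[1] / s[0], n[1] / n[0], (s[1] / n[1]) / (s[0] / n[0]))
    print(name, ratios[name])
```

For a range ratio of 2, the printed power ratios reproduce the table: signal 1/R², 1/R⁴, 1/R³; noise R, 1/R, constant; and the SNR scales as 1/R³ in every case, since normalization cannot change the SNR.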
processing block was varied to maintain a constant azimuth resolution. To minimize ambiguities, the azimuth processing bandwidth B_P was set at 0.8 B₀. We can write Eqn. (7.6.14) in terms of B₀ as

    L_az = f_p λR B₀/(2v²)    (7.6.15)

assuming the full aperture is processed. For SIR-B the processing bandwidth was estimated using

    B_P = (0.8) f_p ≈ (0.8) B₀    (7.6.16)

Substituting Eqn. (7.6.16) in place of B₀ in Eqn. (7.6.15), we get the expression used to determine the SIR-B correlator azimuth reference function length as

    L_az = f_p λR B_P/(1.6 v²)    (7.6.17)

The SIR-B reference function was always normalized by the azimuth FFT block size (i.e., 2048 samples) independent of L_az. Since this correction factor is independent of range, it does not affect the range dependence of either the expected signal power or the SNR. Hence for the SIR-B image products the signal power varies as 1/R² while the noise varies as R, with an SNR proportional to 1/R³.

Correlator Implementation

The radiometric calibration algorithm should produce image products that are both relatively and absolutely calibrated. Simply stated, in a relatively calibrated image each pixel value (i.e., data number or gray level) can be uniquely related to some backscatter coefficient (within an error tolerance), independent of its cross-track position or time of acquisition. In an absolutely calibrated image, the coefficients specifying the relationship of each relatively calibrated data number to a backscatter value (within an error tolerance) are given. For example, assuming a linear relation, σ⁰ is given by

    σ⁰ = K₀ I_n + K_c

where I_n is the detected pixel value and K₀, K_c are real constants.

Since, to maintain a constant azimuth resolution independent of range target position, the azimuth reference function length should vary in proportion to the change in range across the swath, a relative calibration factor of

    K_az = 1/√(W_az L_az)    (7.6.18)

is required to normalize the azimuth reference function. Similarly, a range reference normalization factor of

    K_r = 1/√(W_r L_r)    (7.6.19)

should be applied. This yields an image with constant mean noise power equal to the input noise level in the raw data. This is a useful representation since W_az, W_r can be determined directly from the ratios of the processed to unprocessed mean receive-only noise power with and without weighting applied.

A second basic requirement is that all interpolations, such as the range cell migration correction or the slant-to-ground range reprojection, preserve the data statistics. The specific criteria for the interpolation coefficients such that the data statistics are preserved are presented in Chapter 8. Assuming the normalization factors in Eqn. (7.6.18) and Eqn. (7.6.19) are applied to the reference functions, the radar equation as given by Eqn. (7.6.2) becomes

    (7.6.20)

where we have assumed the multilooking process is normalized by the number of samples integrated. Equation (7.6.20) is now identical to the raw data radar equation (except for the resolution cell sizes), and the σ⁰ can be estimated using Eqn. (7.3.6). Thus, if the expected noise power is first subtracted from each image pixel intensity value and (in the resulting image) each range line is weighted by the factor 1/K(R), the data number will be equivalent to σ⁰ (ignoring speckle and ambiguity noises).

In practice, very few processors perform noise subtraction since the estimated mean noise power may deviate significantly from the actual noise on an individual pixel basis. The problem is that negative powers can result. For a complex pixel representation a large phase error can occur, since the phase of the additive noise term is random. A more useful algorithm is to first apply the K(R) correction to the received signal-plus-noise image. The resulting relationship between the image data number and the σ⁰ value is

    (7.6.21)

where we have assumed that a two parameter stretch, i.e., a gain K₀ and a bias K_c, is used to minimize the distortion noise associated with representing the image within the dynamic range of the output medium.

To derive the image correction factor K₁(R), each of the parameters in Eqn. (7.6.13) must be estimated. The terms λ, L, W_L, R, L_s, L_az, δx, δR are all well known or easily measured and contribute little to the overall calibration error. Significant errors come only from uncertainty in the estimation of P_t, G_r, G²(φ), and P₀.

The thermal noise P₀ can be estimated by averaging a block of samples from the turn-on and turn-off receive-only noise segments in each data take. Throughout the data take, the drift in receiver gain, G_r, can be estimated from
a caltone. Therefore, the thermal noise estimate at the center time of the image frame, t_c, is given by

    (7.6.22)

where G_CAL(t_c) is the ratio of the system gain at time t_c to the gain at the turn-on time, t_0. This gain drift may also be characterized by other internal calibration devices such as a leakage chirp or thermal sensors.

The radiated power P_t is most accurately measured using a set of ground receivers. The variation in P_t over the time interval between ground receiver measurements can be tracked using internal meters (power, temperature) or by a leakage chirp. Similarly, the receiver gain can be directly measured by a calibration tone or a leakage chirp. The antenna is typically measured preflight to obtain a nominal pattern. Inflight variation from thermal stress or zero gravity unloading is typically measured using external targets. Either a distributed homogeneous target or point targets (e.g., transponders or corner reflectors) can be used to measure the two-way pattern from the SAR image. Alternatively, the transmit pattern can be directly measured using ground receivers and, if reciprocity can be assumed, the two-way pattern inferred from this measure. The antenna boresight, or equivalently the pattern roll angle, can be refined by analysis of the antenna pattern modulation in an uncorrected image by estimating the location of the peak return power from a least square error fit of the image data.

7.7 POLARIMETRIC DATA CALIBRATION

The polarimetric data products are typically represented in a Stokes matrix format. This is achieved by first performing a symmetrization of the scattering matrix. The symmetrization procedure is as follows (Zebker et al., 1987). Given four radiometrically corrected images (in a complex amplitude format) that represent the two like-polarized target backscatter measurements (i.e., z_HH and z_VV) and the two cross-polarized measurements (i.e., z_HV and z_VH), the symmetrization procedure is to average the cross-polarized terms such that

    z_HV = (z_HV + z_VH)/2    (7.7.1)

on a pixel by pixel basis. The inherent assumption in this process is that for all natural targets s_HV = s_VH. Therefore any differences between z_HV and z_VH must arise from radar system errors.

In practice, prior to balancing the cross-polarized channels, the data must be compensated for systematic phase errors that arise from path length differences or electronic delays in one channel relative to another. Consider, for example, a C-band (λ = 5.6 cm) quad-polarized SAR system with two receive chains, one each for the H and V channels. If the electrical path length in one receiver channel is 2.8 cm longer than the other, the two channels are 180° out of phase. Thus the balancing operation in Eqn. (7.7.1) would effectively cancel the cross-polarized return (in the absence of other system errors and noise), resulting in a value of zero for z_HV independent of the target scattering characteristics. To compensate for this systematic phase offset, a phase difference correction must be applied to the data prior to balancing. The mean phase difference is given by

    φ_x = (1/N) Σ_{i=1}^{N} arg(z_HV,i z*_VH,i)    (7.7.2)

where the summation is performed over some representative set of N data samples spanning the entire image frame. Since just one cross-polarized channel need be corrected to compensate for this phase error, Eqn. (7.7.1) becomes

    z_HV = [z_HV exp(−jφ_x) + z_VH]/2    (7.7.3)

Phase calibration of the like-polarized terms requires an analysis similar to that of Eqn. (7.7.2). A mean phase difference for the like-polarized channels is calculated from

    φ_l = (1/N) Σ_{i=1}^{N} arg(z_HH,i z*_VV,i)    (7.7.4)

This correction is then applied to all pixels in one of the like-polarized images, i.e.,

    (7.7.5)

A necessary condition for this procedure to be valid is that there be a zero phase shift between s_HH and s_VV for all targets included in the summation of Eqn. (7.7.4). However, only if the scatterer is single-bounce (e.g., Bragg scattering) will the relative phase be zero (van Zyl, 1989). The phase correction procedure thus requires identification of a single-bounce target, such as ocean or slightly rough terrain (rms height < λ/8) with a relatively high dielectric constant (i.e., no volume scattering). An additional assumption in the procedure outlined above is that the phase difference distribution is symmetric and unimodal. For an asymmetric distribution, the mean values estimated in Eqn. (7.7.2) and Eqn. (7.7.4) should be replaced by the median of the distribution. If the probability distribution function is bimodal, a smaller block of samples should be used for estimating the phase correction factor.

The like- and cross-polarized phase corrections in Eqn. (7.7.2) and Eqn. (7.7.4) typically need not be estimated for every image. A single correction factor is usually applied to a group of images over some time period dependent on the instrument stability. If the radar is highly sensitive to slight thermal variations, causing the electrical path length to vary in one receive chain relative to the other, a unique correction factor may be required for each image frame.
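The symmetrization and phase-balancing steps of Eqn. (7.7.1) through Eqn. (7.7.3) are compact enough to sketch directly. The simulation below reproduces the 180° path-length example from the text; note that taking the argument of the averaged cross product, rather than averaging the per-pixel arguments as written in Eqn. (7.7.2), is an implementation choice made here to sidestep phase wrap-around, and the noise level is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# C-band example from the text: a 2.8 cm path-length mismatch at a
# 5.6 cm wavelength puts the two receive chains 180 degrees out of phase.
lam, dl = 5.6e-2, 2.8e-2
phi_true = 2.0 * np.pi * dl / lam            # equals pi here

# Simulated reciprocal cross-polarized images (s_HV = s_VH), with the
# systematic channel phase and a small additive noise term (assumed level).
n = 4096
z_hv = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2.0)
noise = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
z_vh = z_hv * np.exp(1j * phi_true) + noise

# Eqn. (7.7.2): mean cross-channel phase difference. The argument of the
# averaged cross product is used here (a wrap-safe implementation choice).
phi_x = np.angle(np.mean(z_hv * np.conj(z_vh)))

# Eqn. (7.7.3): correct one channel by the estimated phase, then average.
z_hv_sym = (z_hv * np.exp(-1j * phi_x) + z_vh) / 2.0

# Without the phase correction, Eqn. (7.7.1) alone nearly cancels the
# cross-polarized return, as the text warns.
z_hv_naive = (z_hv + z_vh) / 2.0
```

With the correction applied, the symmetrized channel retains essentially the full cross-polarized power, while the naive average collapses toward the noise floor.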
Calibration of the like-polarized channel amplitude imbalance cannot be performed using distributed targets, since the ratio s_HH/s_VV is very target dependent and cannot be predicted. Since the scattering matrix of a corner reflector such as a triangular trihedral is well known (s_HH/s_VV = 1), an analysis of the return from this target can be used to balance the like-polarized channel amplitude in that local area. Amplitude imbalance can arise from H,V pattern misalignment, which would require balancing to be performed at multiple points across the swath. This can be accomplished using an array of reflectors deployed across the ground track. Another, as yet untested, approach would be to perform the absolute like-polarized channel balancing at a single point within the swath (using a reflector), and then to use a distributed target such as the ocean to perform a relative balance at all other points across the swath. This approach requires that the target s_HH/s_VV not change as a function of cross-track position. However, it does not require that the ratio be known. The requirement that s_HH/s_VV be constant over range is never valid for an airborne system, since the range of incidence angles is so large. However, for a spaceborne polarimetric SAR, where η varies over the entire swath by only a few degrees, this relative balancing technique may be feasible.

The final step in the polarimetric calibration is correction of the cross-polarized leakage terms that typically result from poor isolation in the antenna or transmitter switch, or from platform attitude variation. We believe this is best implemented using the previously described clutter-based technique proposed by van Zyl (1990). These corrections can be applied as a post-processing step (on the Stokes matrix) and are typically not operationally applied in the SAR correlator.

Following the polarimetric calibration steps outlined above (except the cross-polarized leakage term correction), the Stokes matrix products are formed. This first requires generation of the six cross-products

    I_HHHV = z_HH z*_HV    (7.7.6a)
    I_HHHH = z_HH z*_HH    (7.7.6b)

… cross-product data to effect an improved speckle reduction performance over that of incoherent pixel addition (Chang and Curlander, 1990). The final processing stage (which is optional) is the formation of the ten real Stokes matrix elements and the efficient coding of these data by normalizing the Stokes matrix elements (Dubois and Norikane, 1987). The shortcoming in producing the Stokes matrix as a final output product is that the noise subtraction becomes a relatively complex procedure, since each noise power array (i.e., P_nH/K_HH(R), P_nV/K_VV(R), (P_nH + P_nV)/2K_HV(R)) must be manipulated similarly to the image data processing used to form the Stokes matrix elements. This is a fairly involved process for the scientist to perform. In practice, since the thermal SNR must be large for polarimetric data analysis to be feasible, the noise power contribution is often neglected.

7.8 SUMMARY

This chapter has addressed SAR radiometric calibration primarily from the signal processing perspective. The basic terms were defined and an end-to-end system view of the various error sources presented. Several internal calibration schemes were described in detail to identify the system measures that can and cannot be performed using built-in test equipment. We then addressed the techniques and technology currently employed for external calibration with ground sites. The relative merits of point target versus distributed target calibration sites were discussed, and several techniques using clutter statistics for calibration were presented.

The second portion of the chapter concentrated on design of the ground processor to utilize the acquired calibration data for operational correction of the data products. We described a configuration using an off-line calibration processor to analyze both the internal calibration device measurements and the calibration site imagery. This system generates correction factors that are passed to the correlator for application to the image data. We derived a form of the radar equation that explicitly indicates the processor-induced gains and losses and discussed the effect of various processor implementations on this equation. We concluded with a brief discussion of the calibration procedures for a polarimetric SAR system.
… error (displacement) between two coincident pixels from image data acquired by two separate radar channels.

The characterization of the image geometric calibration in terms of the above listed parameters is not unique. The representation we present here is convenient, since these parameters are directly measurable in the SAR image.

8.2 GEOMETRIC DISTORTION

Before describing the various techniques for geometric correction of the image products, we first address the geometric distortions inherent in the uncorrected image data and the sources of these distortions. They can generally be categorized as resulting from sensor instability, platform instability, signal propagation effects, terrain height, and processor induced errors.

8.2.1 Sensor Errors

The sensor stability is a key factor controlling the internal geometric fidelity of the data set. For example, the consistency of the interpulse or intersample period is governed by the accuracy of the timing signals sent to the pulse generator and the analog to digital convertor (ADC). Variation in these timing signals depends primarily on the stability of the local oscillator (stalo). Typically, short term variation in the stalo frequency that produces sample-to-sample variation (clock jitter) is negligible from an image geometric fidelity standpoint. Perhaps more significant is the long-term drift of the stalo. For a mapping mission, such as the Magellan Venus radar mapper, the stalo drift must be measured over the course of the mission to determine the actual PRF, since this establishes the along-track pixel spacing, that is

    δx_az = L V_sw / f_p    (8.2.1)

where L is the number of azimuth looks and f_p is the pulse repetition frequency (PRF). The magnitude of the swath velocity V_sw is given by

    (8.2.2)

where R_s and R_t are the magnitudes of the sensor and target position vectors and V_s and V_t are the sensor and target velocity vectors, respectively. A fractional error in the stalo frequency translates into a similar fractional error in the PRF and therefore in the along-track pixel spacing, which results in an along-track scale error.

A second sensor parameter that directly affects the geometric fidelity of the data set is the electronic delay of the signal through the radar transmitter and receiver. This electronic delay τ_e must be subtracted from the total (measured) delay to derive the actual propagation time used in the slant range calculation, that is,

    R = c(τ − τ_e)/2    (8.2.3)

Here τ is the total delay from the time a control signal is sent to the exciter for pulse generation until the echo is digitized by the ADC. This delay is precisely known, since it is controlled by the radar timing unit, which in turn is based on the stalo frequency. Error in the estimate of the propagation time will result in a slant range error, which in turn will bias the incidence angle estimate. From Fig. 8.1, we can write

    η = sin⁻¹[(R_s/R_t) sin γ]    (8.2.4)

where η is the incidence angle, γ is the look angle, R_s and R_t are the magnitudes of the spacecraft and target position vectors relative to the center of the earth, and

    γ = cos⁻¹[(R² + R_s² − R_t²)/(2RR_s)]    (8.2.5)

where R is the sensor-to-target slant range. Therefore, an error in the estimate of the slant range resulting from a hardware electronic delay error, as given by Eqn. (8.2.3), will result in an incidence angle estimation error from Eqn. (8.2.4) and Eqn. (8.2.5). This in turn will cause an across-track scale error in the SAR image, since the ground range pixel spacing is given by

    δx_gr = c/(2 f_s sin η)    (8.2.6)

where f_s is the complex sampling frequency. From Eqn. (8.2.6) we see that errors in either γ or f_s translate into cross-track scale errors, as will be shown in the following section on target location errors.

A third type of error, which may be more accurately classified as a platform error than as a sensor error, is drift in the spacecraft clock. Any offset between the spacecraft clock and the clock used to derive the ephemeris file from the spacecraft tracking data will result in target location errors. If the spacecraft ephemeris is in an inertial coordinate system, then the planet rotation must be derived from the time difference between the actual data acquisition and the reference time for the inertial coordinate system. Drift in the spacecraft clock will result in an error in the target longitude estimate, expressed in terms of ω_e, the earth rotational velocity, s_d, the clock drift, and ζ, the target latitude. An along-track position error will also result from clock drift, of magnitude s_d V_sw, where V_sw is the swath velocity.
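The delay-to-range bookkeeping of Eqn. (8.2.3) and the pixel-spacing relation of Eqn. (8.2.6) can be sketched as one-liners. The sampling rate and incidence angle used below are assumed, illustrative values rather than parameters of any particular system.

```python
import math

C = 299_792_458.0                     # speed of light, m/s

def slant_range(tau, tau_e):
    """Eqn. (8.2.3): slant range from the total measured delay tau after
    subtracting the electronic delay tau_e."""
    return C * (tau - tau_e) / 2.0

def ground_pixel_spacing(f_s, eta):
    """Eqn. (8.2.6): ground range pixel spacing for complex sampling
    frequency f_s (Hz) and incidence angle eta (radians)."""
    return C / (2.0 * f_s * math.sin(eta))

# Round trip to an 850 km target with a 2 microsecond electronic delay:
tau = 2.0 * 850.0e3 / C + 2.0e-6
r = slant_range(tau, 2.0e-6)          # recovers 850 km

# Assumed 19 MHz complex sampling at 23 deg incidence: roughly 20 m pixels.
dx = ground_pixel_spacing(19.0e6, math.radians(23.0))
```

A bias in tau_e propagates directly into r and, through the incidence angle, into a cross-track scale error, which is exactly the chain of effects described above.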
Figure 8.1 Relationship between look angle, γ, and incidence angle, η, for a smooth spherical geoid model. The spacecraft position is given by R_s = H + R_en, where R_en is the radius of the earth at nadir and H is the S/C altitude relative to the nadir point.

8.2.2 Target Location Errors

The location of the (i, j) pixel in a given image frame can be derived from knowledge of the sensor position and velocity (Curlander, 1982). More precisely, the location of the antenna phase center in an earth referenced coordinate system is required. The target location is determined by simultaneous solution of three equations: (1) the range equation; (2) the Doppler equation; and (3) the earth model equation.

The range equation is given by

    |R_s − R_t| = R    (8.2.7)

where R_s and R_t are the sensor and target position vectors, respectively. The slant range R is given by Eqn. (8.2.3). For a given cross-track pixel number j in the slant range image, the range to the jth pixel is

    R_j = (c/2)(τ − τ_e) + (c/2f_s)(j + ΔN)    (8.2.8)

where ΔN represents an initial offset in complex pixels (relative to the start of the sampling window) in the processed data set. This offset, which is nominally 0, is required for pixel location in subswath processing applications, or for a design where the processor steps into the data set an initial number of pixels to compensate for the range walk migration.

The Doppler equation is given by

    (8.2.9)

where λ is the radar wavelength, f_DC is the Doppler centroid frequency, and V_s, V_t are the sensor (antenna phase center) and target velocities, respectively. The target velocity can be determined from the target position by

    V_t = ω_e × R_t    (8.2.10)

where ω_e is the earth's rotational velocity vector. The Doppler centroid in Eqn. (8.2.9) is the value of f_DC used in the azimuth reference function to form the given pixel.

An offset between the value of f_DC in the reference function and the true f_DC causes the target to be displaced in azimuth according to

    Δx ≈ (Δf_DC / f_R) V_sw    (8.2.11)

where Δf_DC is the difference between the true f_DC and the reference f_DC, f_R is the Doppler rate used in the reference function, and V_sw is the magnitude of the swath velocity. To compensate for this displacement when performing the target location, the identical f_DC used in the reference function to form the pixel should be used in Eqn. (8.2.9). The exception to this rule is if an ambiguous f_DC is used in the reference function, that is, if the true f_DC is offset from the reference f_DC by more than ±f_p/2. In this case, the pixel shift will be according to the Doppler offset between the reference f_DC and the Doppler centroid of the ambiguous Doppler spectrum, resulting in a pixel location error of

    Δx ≈ (m f_p / f_R) V_sw    (8.2.12)

where m is the number of PRFs by which the reference f_DC is offset from its true value (i.e., the azimuth ambiguity number).
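The azimuth shift caused by processing with an offset or ambiguous Doppler centroid can be evaluated numerically. The explicit form used in the sketch below is an assumption consistent with Eqn. (8.2.11) and with the Seasat numbers quoted in the text.

```python
def azimuth_shift(d_fdc, f_r, v_sw):
    """Azimuth displacement for a Doppler centroid error d_fdc (Hz),
    azimuth reference Doppler rate f_r (Hz/s), and swath velocity v_sw
    (m/s). Assumes the form dx = (d_fdc / f_r) * v_sw for Eqn. (8.2.11);
    for an ambiguous centroid, d_fdc = m * f_p per Eqn. (8.2.12)."""
    return d_fdc / f_r * v_sw

# One full PRF of centroid error (m = 1) for Seasat-class numbers:
# f_p = 1647 Hz, f_R = 525 Hz/s, V_sw = 7.5 km/s.
dx = azimuth_shift(1647.0, 525.0, 7500.0)    # roughly 23.5 km
```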
Using Seasat as an example, with m = 1, V_sw = 7.5 km/s, f_p = 1647 Hz, and f_R = 525 Hz/s, the azimuth target location error associated with a processing Doppler centroid offset by one ambiguity is approximately 23 km. Additionally, there is a small range offset, which is given by Eqn. (6.5.7). Nominally, for Seasat this is on the order of 200 m.

The third equation is the earth model equation. An oblate ellipsoid can be used to model the earth's shape as follows:

    (x_t² + y_t²)/(R_e + h)² + z_t²/R_p² = 1    (8.2.13)

where R_e is the radius of the earth at the equator, h is the local target elevation relative to the assumed model, and R_p, the polar radius, is given by

    (8.2.14)

where f is the flattening factor. If a topographic map of the area imaged is used to determine h, then the earth model parameters should match those used to produce the map. Otherwise, a mean sea level model such as that given by Wagner and Lerch (1977) can be used.

The target location as given by {x_t, y_t, z_t} is determined from the simultaneous solution of Eqn. (8.2.7), Eqn. (8.2.9), and Eqn. (8.2.13) for the three unknown target position parameters. This is illustrated pictorially in Fig. 8.2. This figure shows the earth (geoid) surface intersected by a plane whose position is given by the Doppler centroid equation. This intersection, a line of constant Doppler, is then intersected by the slant range vector at a given point, the target location. The left-right ambiguity is resolved by knowledge of the sensor pointing direction.

The accuracy of this location procedure (assuming an ambiguous f_DC was not used in the processing) depends on the accuracy of the sensor position and velocity vectors, the measurement accuracy of the pulse delay time, and knowledge of the target height relative to the assumed earth model. The location does not require attitude sensor information. The cross-track target position is established by the sampling window, independent of the antenna footprint location (which does depend on the roll angle). Similarly, the azimuth squint angle, or aspect angle resulting from yaw and pitch of the platform, is determined by the Doppler centroid of the echo, which is estimated using a clutterlock technique. Thus the SAR pixel location is inherently more accurate than that of optical sensors, since the attitude sensor calibration accuracy does not contribute to the image pixel location error. The following sections discuss the relationship of platform ephemeris errors, ranging errors, and target elevation errors to the image geometric calibration accuracy parameters.

Figure 8.2 Geocentric coordinate system illustrating a graphical solution for the pixel location equations.

8.2.3 Platform Ephemeris Errors

The platform position and velocity errors can be broken into three components: (1) along-track errors; (2) cross-track errors; and (3) radial errors. We will examine the effects of each of these in terms of the azimuth and range target positioning error.

Along-Track Position Error, ΔR_x. An along-track position error causes an azimuth target location error according to

    (8.2.15)

where ΔR_x is the along-track sensor position error. The cross-track or range location error from an error in ΔR_x is negligible.
Cross-Track Position Error, ΔR_y. A cross-track sensor position error predominantly results in a target range location error of

    (8.2.16)

where ΔR_y is the cross-track sensor position error. A small azimuthal target displacement will result from a shift in the earth's rotational velocity at this new cross-track target position according to Eqn. (8.2.11). However, the effect is quite small and can be neglected for most applications.

Radial Position Error, ΔR_z. A sensor radial position error is essentially an error in the estimate of the sensor altitude, H. From Eqn. (8.2.5) the change in look angle for a given change in the sensor radial position is

    Δγ = cos⁻¹[(R² + R_s² − R_t²)/(2R_s R)] − cos⁻¹[(R² + (R_s + ΔR_z)² − R_t²)/(2(R_s + ΔR_z)R)]    (8.2.17)

which leads to a target range position error of approximately

    Δr₂ ≈ R Δγ / sin η    (8.2.18)

A radial sensor position error will also cause an azimuthal target location error according to the resultant Doppler shift Δf_DC, which is given by

    Δf_DC = (2V_e/λ)(cos ζ_t sin α_i cos γ) Δγ    (8.2.19)

where V_e is the earth tangential speed at the equator, ζ_t is the geocentric latitude of the target, α_i is the orbital inclination angle, and Δγ, the change in look angle, is given by Eqn. (8.2.17). The resultant target azimuth location error is given by Eqn. (8.2.11), which can be rewritten as

    Δx₂ ≈ Δf_DC λ R V_sw / (2 V_st²)    (8.2.20)

where V_st is the magnitude of the relative sensor-to-target velocity.

Perhaps a more severe effect resulting from a radial sensor position error than the target location error is the image cross-track scale error. Consider the look angle offset Δγ resulting from a radial position error ΔR_z. This approximately translates into an equivalent incidence angle error (i.e., Δγ ≈ Δη). Therefore, the ground range pixel spacing given by Eqn. (8.2.6), which is inversely proportional to sin η, results in a range scale error of

    k_r = [sin(η + Δη)/sin η − 1] × 100%    (8.2.21)

Sensor Velocity Errors, ΔV_x, ΔV_y, ΔV_z. The along-track, cross-track, and radial sensor velocity errors each produce an azimuth location error proportional to the projection of that sensor velocity error component in the sensor-to-target direction. This component of the velocity error is given by

    ΔV = ΔV_x sin θ_s + ΔV_y sin γ + ΔV_z cos γ    (8.2.22)

where θ_s is the squint angle of the sensor measured relative to broadside. From Eqn. (8.2.20) with

    (8.2.23)

we get an azimuth location error of

    (8.2.24)

The range location error from these sensor velocity error components is negligible. However, an along-track velocity error does produce an azimuth scale error in the image according to

    (8.2.25)

8.2.4 Target Ranging Errors

The sensor-to-target slant range is determined by the signal propagation time through the atmosphere, as given in Eqn. (8.2.8). Slant range errors arise from error in the estimation of the sensor electronic delay, τ_e, or uncertainty in setting the data record window relative to the pulse initiation. The electronic delay term represents the time elapsed from generation of the transmit pulse control signal (i.e., the data record window timing reference) until the pulse radiates from the antenna, plus the time for the received echo to travel from the antenna through the receiver electronics to the ADC. The electronic delays, which are typically on the order of microseconds, are generally characterized preflight and monitored inflight to measure relative drift as a result of component aging or temperature variation. Typically, this delay is measured using a leakage chirp that flows directly from the transmit chain to the receive chain via a circulator (see Fig. 6.2). The additional delay through the antenna feed system to the radiating elements is usually estimated by analysis. For passive antenna subsystems, such as Seasat or E-ERS-1, this technique is adequate. However, an active system, such as the SIR-C antenna, which has transmit/receive (T/R) modules in the antenna feed assembly (see Fig. 6.15), requires a more complex experimental setup with an external transmitter/receiver unit to measure delay through this portion of the system.

A second key source of slant range estimation error can arise from propagation timing errors. It was assumed in Eqn. (8.2.3) that the propagation
velocity of the electromagnetic wave was equal to the speed of light, c. In general this is a good approximation; however, under certain ionospheric conditions a significant increase in the signal propagation time relative to the propagation time in a vacuum can occur. This additional delay, τ_I, is given by

    (8.2.26)

where R_I is the propagation path length through the ionosphere, f_c is the radar carrier frequency, and K_I is a scale factor that depends on the ionospheric electron density (N_TV). Figure 8.3 is a plot of ionospheric group delay versus carrier frequency (Brookner, 1985). At a grazing angle η = 80°, for severe ionospheric conditions the round trip delay is on the order of 1-2 µs at L-band. Assuming a medium ionosphere, for the Seasat incidence angle and radar frequency, the expected delay is on the order of 150 ns, which translates into a 22.5 m slant range error and a range target position error of nearly 65 meters. The variation in ionospheric conditions from mild to severe is both temporal and geographical. The electron density, N_TV, which is the key ionospheric parameter determining K_I, is typically several times greater at local noon than at midnight; it also peaks near the equator and is minimum at the poles. An additional factor affecting K_I is the solar activity. The density is highest (large K_I) at the sun spot maximum, which occurs every 11 years (e.g., 1990, 2001).

For a radar system such as an ocean altimeter, where the propagation delay must be measured to a fraction of a nanosecond accuracy, a dual-frequency radar is required to measure K_I (TOPEX, 1981). The relative shift in τ_I can be used to solve for K_I by

    K_I = [τ_I(f_1) − τ_I(f_2)] f_1² f_2² / [R_I (f_2² − f_1²)]    (8.2.27)

Figure 8.3 Plot of ionospheric group delay (two-way) versus radar carrier frequency for both severe and medium ionosphere (Brookner, 1985).

Target Elevation Error. In the target location algorithm outlined in Section 8.2.2, an oblate ellipsoid was assumed for the earth model. To account for variation in the target elevation about this ellipsoid, the ellipsoid radius can be adjusted by the elevation, h, as in Eqn. (8.2.13) and Eqn. (8.2.14). The effect of an error in estimating the target height can be stated in terms of the effective slant range error. A slant range error of (Fig. 8.4)

    ΔR = Δh / cos η    (8.2.29)
will result from a height estimation error Δh, where η is the local incidence angle. The target range location error is then given by

    Δr₄ = Δh / tan η    (8.2.30)

Assuming an incidence angle η = 23°, as in Seasat, Δh = 1 m results in a target location error Δr₄ ≈ 2.4 m.

Figure 8.4 Geometry illustrating the effect of height estimation error Δh on target range location.

Figure 8.6 Geometric distortions in SAR imagery: (a) Foreshortening; (b) Layover; (c) Shadow; (d) A combination of imaging geometries illustrating secondary peak.
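The two height-error relations, Eqn. (8.2.29) and Eqn. (8.2.30), are worth keeping as small utility functions; the check below reproduces the roughly 2.4 m of ground range displacement per meter of height error at a Seasat-like 23° incidence angle.

```python
import math

def slant_range_error(dh, eta):
    """Eqn. (8.2.29): slant range error from a height error dh (m) at
    local incidence angle eta (radians)."""
    return dh / math.cos(eta)

def range_location_error(dh, eta):
    """Eqn. (8.2.30): target range location error from a height error dh."""
    return dh / math.tan(eta)

# Steep incidence amplifies the effect: at 23 deg, 1 m of height error
# displaces the target by roughly 2.4 m in ground range.
err = range_location_error(1.0, math.radians(23.0))
```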
386 GEOMETRIC CALIBRATION OF SAR DATA 8.3 GEOMETRIC RECTIFICATION 387
particular range bin from each iso-range target (in the layover region) can be SENSOR
In this section, we will present algorithms for performing the image geometric In other words, if the input complex image data samples are uncorrelated then
rectification. Our algorithms are based on a model of the sensor imaging a unit energy interpolation filter preserves the image statistical inform;tion.
mechanisms and do not require tiepointing to derive the correction factors. For correl.ated data samples, with an autocorrelation function given by Pv• the
Essentially, there are three main categories of geometric rectification algorithms: filter requirement becomes (Quegan, 1989) ·
(1) Ground plane, deskewed projection; (2) Geocoding to a smooth ellipsoid;
and ( 3) Geocoding to a topographic map. Each of these algorithms use the
pixel location technique previously described in Section 8.2. Therefore the
r1c;1 2 +2Rer L C;cfpv(i-j)=l (8.3.3)
i i j > i
geometric calibration accuracy of the corrected data products is directly related
to the target location error. It should be noted that, although we have preserved the statistical distribution
and moments with the criteria ofEqn. ( 8.3.2) and Eqn. ( 8.3.3 ), the autocorrelation
function, and therefore the texture of the resampled output image, will be altered
8.3.1 Image Resampllng (ex~pt ~n the special case of nearest neighbor resampling). Depending on the
Prior to a discussion of the geometric correction algorithms, it is appropriate to outline some basic rules for resampling the SAR image data. In the strictest sense, assuming it is required that the resampling operation not degrade the quality of the SAR image (i.e., no information is lost), the resampling algorithm must conserve all statistics (i.e., the probability distribution function and all moments) of the input image. We know from the Shannon-Whittaker sampling theorem (Appendix A) that, for a Nyquist sampled image, an interpolation kernel of the form sinc(x) can be used with no loss of information. In practice, however, a truncated sinc function must be used, resulting in image artifacts (e.g., distortion of the image statistics). Thus, given that we cannot preserve all the image information using a finite resampling filter, the question remains as to what the best approach is for optimally conserving the input characteristics in the resampled output image.
Since the complex signal data is of finite bandwidth, as determined by the sensor and data processing parameters, the required sampling frequency is definable and is typically met by most radar systems. Oversampling factors of 10% to 20% are typical to minimize the effects of aliasing from the tails of the spectra. Additionally, assuming the filters used in the signal processing are also Nyquist sampled, the complex image samples are uncorrelated. We can define, in general, a complex interpolation operation of the form

    V_O(x') = Σ_j c_j V_I(x_j)    (8.3.1)

where V_I, V_O are the complex input and output (amplitude) images respectively and the c_j are complex resampling coefficients. It can be shown that the interpolation of Eqn. (8.3.1) preserves the statistical distribution of the input data, including all moments, if

    Σ_j |c_j|² = 1    (8.3.2)

Depending on the application of the data, other criteria for determination of the filter coefficients may be used which are a better match to the desired image characteristics (e.g., the impulse response function and sidelobe levels). In any case, a data analysis or interpretation scheme that utilizes textural information must account for the effects of resampling.
It is not unusual for the image geometric rectification to be applied to a detected (intensity) image product. The detection process, which involves squaring the real and imaginary values, doubles the spectral bandwidth of the original image and therefore requires twice the sampling frequency of the input image (see Appendix A). If the sampling is not doubled (which is usually the case), aliasing occurs (the severity of which depends on the scene content) and the detected samples will be correlated.
In the case of resampling the intensity image, we are again interested in preserving the output image statistical distribution and the moments relative to the input image. Since, as was discussed in Section 5.2, the input intensity image has an exponential rather than a Gaussian distribution (as in the real and imaginary components of the complex image), the image statistical distribution will not be preserved. Assuming the intensity image is oversampled, such that the data are independent, the interpolated image can be described in terms of gamma distributions (Madsen, 1986). Given an interpolation filter of the form

    I_O(x') = Σ_j d_j I_I(x_j)

where I_I, I_O are the input and output (intensity) images respectively and the d_j are real interpolation coefficients, preservation of the image mean sets a condition on the resampling coefficients of

    Σ_j d_j = 1    (8.3.5)
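As a numerical check (not from the text), the condition of Eqn. (8.3.2) can be exercised directly: interpolating uncorrelated complex Gaussian samples with a truncated sinc kernel that has been normalized so that Σ|c_j|² = 1 leaves the variance, and hence the Gaussian distribution, unchanged. The kernel length and fractional shift below are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncorrelated complex Gaussian samples with unit variance, the model for a
# Nyquist sampled complex SAR image.
n = 200_000
v_in = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Truncated sinc kernel for a quarter-pixel shift, normalized so that
# sum |c_j|^2 = 1 (the condition of Eqn. (8.3.2)).
taps = np.sinc(np.arange(-4, 5) - 0.25)
c = taps / np.sqrt(np.sum(np.abs(taps) ** 2))

v_out = np.convolve(v_in, c, mode="valid")

# Variance (and, for Gaussian data, the distribution) is preserved.
print(round(np.var(v_in).item(), 2), round(np.var(v_out).item(), 2))   # → 1.0 1.0
```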
390 GEOMETRIC CALIBRATION OF SAR DATA 8.3 GEOMETRIC RECTIFICATION 391
The preservation of the second moment and the variance requires (Quegan, 1989)

    Σ_i Σ_j d_i d_j ρ_I(j − i) = 1    (8.3.6)

where ρ_I is the image autocorrelation function. Similar equations can be written for preservation of the higher order moments (Madsen, 1986). Again, it should be noted that additional criteria may be necessary to derive an interpolation kernel that meets other image quality specifications. A final point is that the interpolation should not be carried out in the detected amplitude image domain (i.e., the square root of the intensity image). This is a fairly common error, since image data are typically represented as amplitude data when distributed to the users. Images are represented in an amplitude format since this representation has more contrast than the intensity image and is therefore easier to interpret visually. However, resampling the amplitude image is a nonlinear process and will distort the image statistics.
The parameter η(j) is the incidence angle at cross-track pixel number j. The slant range to that pixel is given by Eqn. (8.2.8) and the magnitude of the swath velocity V_sw is given by Eqn. (8.2.2).
The process to convert the input image to a ground plane deskewed projection at uniform ground spacing is given by Curlander (1984). The output cross-track and along-track pixel spacing arrays are first generated by

    x_az(i) = i δx_az;    x'_az(i') = i' δx'_az    i = 1, N_a;  i' = 1, N'_a    (8.3.10a)
    x_gr(j) = j δx_gr;    x'_gr(j') = j' δx'_gr    j = 1, N_r;  j' = 1, N'_r    (8.3.10b)

where x_az and x_gr are the azimuth and ground range input spacing arrays and N_a, N_r are the input array sizes in azimuth and range, respectively. The primed values are the output arrays. Typically the output spacing is chosen such that δx'_az = δx'_gr, resulting in square pixels. The output spacing array thus serves as a pointer to the input spacing array to generate the resampling coefficients. These coefficients should be determined to preserve the image statistics according to the conditions outlined in the previous section. The real and imaginary parts are resampled separately.
In establishing the two one-dimensional resampling arrays in Eqn. (8.3.10), we assumed that the azimuth and range input pixel spacings were independent. While it is true that the range spacing is independent of azimuth, the azimuth spacing does have some dependence on range position. This comes from the target velocity term in Eqn. (8.2.2), which can be approximated by

    (8.3.12)

where Δn_SK is in output azimuth pixels. For most systems this deskew can be approximated as a linear function

    (8.3.13)

where k_SK is a skew constant approximated from Eqn. (8.3.12). The deskew operation is not required if the azimuth reference function is centered about zero Doppler.
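The pointer role of the output spacing array in Eqn. (8.3.10) can be sketched as follows; the input spacing values here are hypothetical numbers chosen only for illustration of a slant-to-ground projection whose ground spacing grows with range.

```python
import numpy as np

# Hypothetical ground-range positions of N_r input pixels: the ground spacing
# of a slant range image grows with range, so the input array is nonuniform.
n_r = 1000
dx_in = np.linspace(18.0, 25.0, n_r)      # metres per input pixel (assumed)
x_gr = np.cumsum(dx_in)                   # input spacing array x_gr(j)

# Uniform output grid with square 20 m pixels, as in Eqn. (8.3.10).
dx_out = 20.0
x_out = dx_out * np.arange(1, int(x_gr[-1] // dx_out) + 1)

# The output spacing array serves as a pointer into the input spacing array:
# for each output position, the fractional input pixel number at which the
# resampling kernel should be evaluated.
frac_pixel = np.interp(x_out, x_gr, np.arange(n_r, dtype=float))

print(frac_pixel[0] < frac_pixel[-1])   # → True
```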
[Figure: imaging geometry showing the earth rotation direction, iso-Doppler contours, the zero Doppler line, and the Doppler centroid relative to the along-track direction.]

…parameter estimation, while orientation errors arise from both skew errors and ephemeris errors (primarily platform velocity).

8.3.3 Geocoding to a Smooth Ellipsoid

Geocoding is the process of resampling the image to an earth fixed grid such as the Universal Transverse Mercator (UTM) or Polar Stereographic (PS) map projections (Graf, 1988). A key element for routine production of geocoded products is the use of the radar data collection, processing, and platform parameters to derive the resampling coefficients. The technique described here is based on using a model of the SAR image geometric distortion, rather than operator intensive interactive routines such as tiepointing (Curlander et al., 1987). The geocoding routine is based on the absolute pixel location algorithm described in Section 8.2.2. Recall that this technique relies on the inherent internal fidelity of the SAR echo data to determine the precise sensor to target range and antenna pointing (squint angle), without requiring specific information about platform attitude or altitude above nadir. The geocoding procedure generally consists of two steps: (1) geometric rectification; and (2) image rotation.
…locations into the slant range and azimuth pixel locations. Geometric rectification without geocoding thus involves resampling of the input image (x, y) into a coordinate system defined by the map grid (x', y'). Equation (8.3.14) is written in terms of transformations on the output image, and so the first step in the resampling procedure is to determine the fractional slant range and azimuth pixel numbers in the original image that correspond to each output grid element.
An exact mapping on a pixel-by-pixel basis of the output grid to the input image is a computationally expensive process. This procedure can be simplified (at the cost of some geometric distortion) by subdivision of the output grid into blocks. Only the corner locations of each block are calculated using the previously described location procedure, and the relative locations within each block are then obtained using bilinear interpolation, that is

    l = a_0 + a_1 x' + a_2 y' + a_3 x'y'    (8.3.15a)
    p = b_0 + b_1 x' + b_2 y' + b_3 x'y'    (8.3.15b)

where the coefficient set {a_i, b_i} of each block is derived from the corner locations. The block size is selected according to the geometric error specification for the output image.
The transformation in Eqn. (8.3.14a) requires resampling of the complex image, which involves two-dimensional (2D) interpolation of each of the real and imaginary components. To reduce the number of computations, these equations can be rewritten such that each 2D resampling can be performed in two one-dimensional (1D) passes. The decomposition of the 2D resampling into two 1D resampling passes is performed as follows (Friedmann, 1981)

    Pass 1:  l = e_0 + e_1 u + e_2 v + e_3 uv;    y = v    (8.3.16)
    Pass 2:  u = x';    p = f_0 + f_1 x' + f_2 y' + f_3 x'y'    (8.3.17)

where the coefficient set {e_i, f_i} is determined from the set {a_i, b_i} for that block. The first pass represents a rectification in the along-track direction and the second pass represents a rectification in the cross-track direction, as shown in Figure 8.10. An intermediate image is generated by Pass 1 in the (u, v) grid, and the two-pass rectified image is in the desired (x', y') grid.

Figure 8.9 Relationship between the rectified and geocoded image coordinate frames. [(l, p): geocoded image coordinate frame; (x', y'): rectified image coordinate frame; p axis toward grid north.]

Figure 8.10 Illustration of the two-pass resampling procedure for geometric rectification.

Geometric Rotation. The geometrically rectified image is in a grid defined by (x', y'). To transform the image into a geocoded format, a rotation of the image into alignment with grid north is required.
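The block-corner bilinear interpolation of Eqn. (8.3.15) can be sketched numerically; the bilinear form and the corner line numbers below are illustrative assumptions, with only the four block corners located exactly and every interior point interpolated.

```python
import numpy as np

def bilinear_coeffs(corners, vals):
    """Solve v = a0 + a1*x' + a2*y' + a3*x'*y' through the four block corners
    (the form assumed here for Eqn. (8.3.15))."""
    A = np.array([[1.0, x, y, x * y] for x, y in corners])
    return np.linalg.solve(A, np.asarray(vals, dtype=float))

# Hypothetical 256 x 256 output block: corner (x', y') positions and the input
# image line numbers l found at those corners by the exact location algorithm.
corners = [(0, 0), (256, 0), (0, 256), (256, 256)]
l_corners = [100.0, 110.2, 355.7, 366.4]
a = bilinear_coeffs(corners, l_corners)

def l_of(x, y):
    # Fractional input line number for an interior output grid point.
    return a[0] + a[1] * x + a[2] * y + a[3] * x * y

print(round(l_of(256, 256), 1))   # → 366.4
```

The surface reproduces the corner values exactly, so the geometric error inside the block is bounded by the block size, which is why the text selects the block size from the output error specification.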
The rotation angle relative to grid north can be approximated as

    β ≈ sin⁻¹(−cos α_i / cos ζ)    (8.3.18)

where α_i is the orbit inclination angle and ζ is the geodetic latitude of the image center. The approximation in Eqn. (8.3.18) is strictly valid only for nadir pointing instruments. A more accurate approach to derive the rotation angle β is to use the location algorithm in Section 8.2.2 to determine the geocentric location of two iso-range points in the image, from which the rotation angle relative to grid north can be derived. The mapping of the rectified image pixels into the geocoded map grid is given by the standard coordinate system rotation

    [x']   [ cos β   sin β ] [l]
    [y'] = [−sin β   cos β ] [p]    (8.3.19)

where β is the image rotation angle. Again, the 2D resampling of the complex image to effect the rotation can be separated into two 1D resampling passes by decomposing the rotation matrix into the following form

    [x']   [    1       0   ] [ cos β   sin β ] [l]
    [y'] = [ −tan β   sec β ] [   0       1   ] [p]    (8.3.20)

The image resampling passes are therefore as illustrated in Fig. 8.11, where g_p is the oversampling factor; this represents an additional resampling pass over the image. The next section describes a technique for reducing the geocoding process to three 1D resampling passes.

Figure 8.11 Illustration of the two-pass resampling procedure for image rotation (a vertical shear followed by a horizontal shear).

Geocoding: Rectification and Rotation. The two resampling passes to rectify the image, and the two passes required to rotate the rectified image into a geocoded format, can be combined into three 1D resampling passes. Pass 2 of the rectification process and Pass 1 of the rotation process are combined into the second pass of this three pass process. The total transformation is determined by combining Eqn. (8.3.16) and Eqn. (8.3.19). The along-track rectification and the oversampling are combined into the first pass. The cross-track rectification and an image shear are combined into the second pass. The third pass is a second image shear and resampling that takes the (q, r) coordinate intermediate image into a geocoded format.
Figure 8.12 illustrates the intermediate stages during generation of a geocoded image using the above scheme. The along-track corrections are applied and the image is oversampled in the first pass. In the second pass, the cross-track corrections are applied and the image is sheared. A final shear and an undersampling in azimuth then transform the image into the desired output grid. An example of this algorithm as applied to Seasat data is given in Fig. 8.13. This image is from an ascending pass (Revolution 545) of an area near Yuma, Arizona (ζ ≈ 33°N). A small segment of the original 100 km image frame was selected for processing. The unrectified image data (detected from the complex format for illustration) is shown in Fig. 8.13a. This image is oriented at an angle β = 21.9° relative to true north, as determined from Eqn. (8.3.18) for the Seasat inclination angle α_i = 108°. Figures 8.13b, c, d show the outputs of the three resampling passes. Note that the final image in UTM projection aligns the agricultural field boundaries with the image line and pixel axes.
The UTM projected image in Fig. 8.13 can be compared with a geocoded image from a descending Seasat pass covering the same area (Rev. 681), as shown in Fig. 8.14. The ease with which changes can be detected between the various fields in the two images demonstrates the benefits of using a common coordinate system for representing the data products. A second example, given in Fig. 8.15, compares a geocoded Seasat scene to a SIR-B scene again covering the same ground area. These data sets, acquired six years apart, demonstrate the utility of the geocoded format for monitoring changes in land use. However, the most striking difference between the two images is the distortion in the mountainous region. Seasat had an incidence angle η = 23°, while this particular SIR-B image was acquired at η = 44°. Since the geocoding was performed assuming a smooth oblate ellipsoidal earth model, the foreshortening distortion (which is more severe for Seasat) remains in the final image product. An extension of this geocoding technique to account for variation in the local topography is described in the following section.

Figure 8.12 Illustration of the three-pass geocoding procedure combining rectification and rotation.

Figure 8.13 Seasat image of Yuma, AZ (Rev. 545) showing intermediate geocoded products: (a) original image; (b) Pass 1 output, azimuth corrected and oversampled; (c) Pass 2 output, range corrected and range skewed; and (d) Pass 3 output, azimuth undersampled and azimuth skewed.

8.3.4 Geocoding to a Topographic Map

As previously described, in addition to the slant range nonlinearity and azimuth skew distortion, effects such as radar foreshortening, layover, and shadow can arise from deviation of the target elevation relative to a smooth geoid (see Section 8.2.4). To geometrically correct these distortions, an independent source of information is required, either from a second imaging angle (e.g., radar interferometry, radar stereo) or from surface topographic maps.
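The factorization of the image rotation into two one-dimensional shear passes can be verified numerically; this is a standalone check, with the Yuma rotation angle used as the test value.

```python
import numpy as np

beta = np.deg2rad(21.9)   # rotation angle of the Seasat Yuma example

rotation = np.array([[ np.cos(beta), np.sin(beta)],
                     [-np.sin(beta), np.cos(beta)]])

# Factors used for the two 1D passes: one matrix changes only the first
# coordinate (a horizontal shear/scale), the other only the second
# (a vertical shear/scale), so each can be applied line by line.
pass_b = np.array([[np.cos(beta), np.sin(beta)],
                   [0.0,          1.0         ]])
pass_a = np.array([[1.0,           0.0               ],
                   [-np.tan(beta), 1.0 / np.cos(beta)]])

# The product of the two shear factors reproduces the rotation matrix.
print(np.allclose(pass_a @ pass_b, rotation))   # → True
```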
Figure 8.14 Multitemporal geocoded Seasat images near Yuma, AZ: (a) Rev. 545, ascending pass; (b) Rev. 681, descending pass.
Figure 8.15 Multisensor geocoded images near Oxnard, CA: (a) Seasat image acquired at η = 23° in 9/78; (b) SIR-B image acquired at η = 44° in 10/84.
More information on the topics of stereo and interferometric SAR techniques can be found in the literature (Zebker and Goldstein, 1986; Leberl et al., 1986; Ramapriyan et al., 1986). In this section, we will specifically describe a technique to automatically derive these geometric corrections from information provided by a digital elevation map (DEM).
One possible technique for rectification of terrain induced distortion using a DEM was reported by Naraghi et al. (1983) and later by Domik et al. (1986). This technique uses the DEM to generate a simulated radar image by illuminating the map from the radar imaging geometry. The simulated radar image is then registered to the actual radar image using a fine grid of tiepoints. The absolute locations of these tiepoints are then used to estimate the polynomial coefficients of a warping function that spatially transforms the radar image coordinates into the simulated image coordinates. Following this coregistration process, the radar image is resampled into a rectified format using the known distortions in the simulated image. The key shortcoming of this technique is that both the acquisition of the tiepoints and the generation of the simulated images are operator and computationally expensive processes. An additional limitation of this procedure is that the accuracy of the rectified image is directly a function of the density of matching tiepoints. Therefore, this procedure is generally used on small subimage blocks where only a few tiepoints are required for a good registration accuracy.
An alternative approach, to be described in this section, requires at most 2-3 tiepoints for a long image strip (up to 1000 km). This technique, which can be applied to either complex or intensity image data, was first proposed by Kwok et al. (1987). It is a direct extension of the technique for geocoding to a smooth ellipsoid described in the previous section. It utilizes the characteristics of the radar imaging geometry to model the terrain induced geometric distortion and performs resampling based on the predicted correction factors. The few tiepoints needed are used only to remove the residual translational and rotational errors between the predicted geodetic location of an image pixel and its actual location on a topographic map. Furthermore, no tiepointing is required if the platform ephemeris errors are small, as is expected for future SAR systems using the Global Positioning System (GPS) satellite network for orbit tracking. If a real-time geocoding system is required, this approach greatly simplifies the design (Chapter 9).
Given the target elevation values in the output grid, the next step is to generate a latitude, longitude versus (i, j) pixel number map for the complex slant range SAR image, using the location algorithm outlined in Section 8.2.2. For a given element in the output grid, the fractional pixel location (l_0, p_0) in the original SAR image is determined by a two-dimensional coordinate transformation of the output image to the input grid, as described in the previous section. This transformation provides the target location R_t(0) in the original image, assuming a smooth geoid, as shown in Fig. 8.16. The pixel number (l_0, p_0) uniquely identifies a time t(l_0) and a range R(p_0). This time is used to calculate the spacecraft position R_s(l_0) from an orbit ephemeris file by polynomial interpolation. The spacecraft ephemeris is nominally in a geocentric rectangular coordinate system. For simplicity we assume the coordinate system is rotating, with the x axis at longitude zero (Greenwich meridian), the z axis at grid north, and the y axis completing the right hand system.
The next step is to convert the geodetic latitude and longitude of the target into this rectangular coordinate system. Given the reference ellipsoid in Eqn. (8.2.13), the target position R_t(0) can be represented in terms of its geographical coordinates, where ζ, λ are the geodetic latitude and longitude of the target and R_e, R_p are the equatorial and polar radii of the DEM reference ellipsoid. Similarly, the geographic coordinates of a point at an elevation h above the ellipsoid are given by

    x_h = x_0 + h cos ζ cos λ    (8.3.24a)
    y_h = y_0 + h cos ζ sin λ    (8.3.24b)
    z_h = z_0 + h sin ζ    (8.3.24c)

From the spacecraft position vector R_s(l_0) and the target position vectors R_t(0), R_t(h), the relative slant range vectors to each target position are given by

    R(0) = R_s(l_0) − R_t(0)    (8.3.25a)
    R(h) = R_s(l_0) − R_t(h)    (8.3.25b)

The relief displacement in range pixels is then

    Δp_0 = p_0(h) − p_0(0)    (8.3.27)

Geocoding Procedure. The operational procedure for geocoding to a topographic map is essentially the same as the three step procedure outlined in the previous section, with two exceptions: (1) a preprocessing step is required to register the DEM to the SAR image; and (2) the cross-track correction procedure (i.e., Pass 2, to be described under the next heading in this section) is modified to account for the relief displacement of the target. An operational flowchart of this geocoding procedure is presented in Fig. 8.17. The input ancillary data consists of the spacecraft ephemeris, the radar parameters, and the correlator processing parameters. The ephemeris update vector ΔR_s is derived from the preprocessing.
The preprocessing step to register the SAR image to the DEM can be performed either by operator tiepointing, to determine the translational error between the two data sets, or by an automated tiepointing scheme as outlined in Fig. 8.18. The procedure for the automated tiepointing is as follows. The first step is to select several small areas of the original image (e.g., 512 × 512 pixels). The size of this area should be twice the maximum (3σ) location error.

Figure 8.17 Flowchart of the procedure for image geocoding with terrain correction.
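A sketch of the geodetic-to-rectangular conversion used here (the elevation terms follow Eqn. (8.3.24)); the WGS 84 radii and the test point are illustrative assumptions, and the on-ellipsoid point uses the standard geodetic form, which places the elevation offset along the geodetic normal.

```python
import math

# Equatorial and polar radii; WGS 84 values are used for illustration
# (the DEM reference ellipsoid may differ).
R_E, R_P = 6378137.0, 6356752.3

def target_position(lat_deg, lon_deg, h):
    """Rectangular (earth fixed) coordinates of a target at geodetic latitude
    zeta, longitude lam, elevation h: the point on the ellipsoid plus h along
    the geodetic normal (cos zeta cos lam, cos zeta sin lam, sin zeta)."""
    zeta, lam = math.radians(lat_deg), math.radians(lon_deg)
    e2 = 1.0 - (R_P / R_E) ** 2              # first eccentricity squared
    N = R_E / math.sqrt(1.0 - e2 * math.sin(zeta) ** 2)
    x0 = N * math.cos(zeta) * math.cos(lam)  # point on the ellipsoid
    y0 = N * math.cos(zeta) * math.sin(lam)
    z0 = N * (1.0 - e2) * math.sin(zeta)
    return (x0 + h * math.cos(zeta) * math.cos(lam),
            y0 + h * math.cos(zeta) * math.sin(lam),
            z0 + h * math.sin(zeta))

x, y, z = target_position(33.0, -114.5, 250.0)        # hypothetical point near Yuma
print(round(math.sqrt(x * x + y * y + z * z) / 1e3))  # geocentric radius, km
```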
Figure 8.18 Flowchart of the preprocessing step to register the SAR image to the DEM. [Ancillary data → calculate geodetic coordinates versus SAR pixel number; select DEM chip, rotate and illuminate from the SAR geometry.]
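One way to sketch the automated tiepointing step of Fig. 8.18 (estimating the translational offset between a SAR chip and a reference chip, e.g. one simulated from the DEM) is FFT-based cross-correlation; the chip contents below are synthetic, and operational systems would correlate several chips, average the offsets, and refine the peak to sub-pixel precision.

```python
import numpy as np

def offset_by_correlation(chip, ref):
    """Estimate the integer (row, col) misregistration between two image chips
    from the peak of their FFT-based circular cross-correlation."""
    a = chip - chip.mean()
    b = ref - ref.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Convert wrapped peak indices into signed offsets.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, xcorr.shape))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, -5), axis=(0, 1))   # known shift of the chip
print(offset_by_correlation(shifted, ref))     # → (3, -5)
```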
Figure 8.21 Comparison of ascending and descending Seasat images near Los Angeles, CA, geocoded to a smooth ellipsoid, with the same images geocoded to a DEM: (a) Rev. 351, ascending, smooth; (b) Rev. 660, descending, smooth; (c) Rev. 351, ascending, DEM; and (d) Rev. 660, descending, DEM.

…radiometrically saturated. This arises from the increase in the effective scattering area of a resolution cell sloped away from the radar (i.e., α+ in Fig. 8.6a). The ground range resolution is given by…
…8.20c and 8.21c, d, and an increase in the total image power. A more correct representation of the scattered power (from a unit ground area) would be to normalize each pixel by the actual resolution cell area, which depends on the local slope as derived from the DEM. Assuming no radiometric corrections have previously been applied, the corrected image should first be multiplied by a factor

    g_1 = sin(η − α)    (8.3.30)

to account for the increased (decreased) cell area resulting from a positive (negative) surface slope.
A second radiometric correction factor, which is also incidence angle dependent, is the antenna pattern. Given the polar antenna gain function G(φ), where φ is the off-boresight angle relative to the look angle γ, we can project this pattern onto the ellipsoid. From Eqn. (8.2.5)

    φ = cos⁻¹[(R_s² + R_h² − R_th²) / (2 R_s R_h)] − γ    (8.3.31)

where R_th = |R_t(h)| is given by Eqn. (8.3.24) and R_h = |R(h)| is from Eqn. (8.3.25b). The parameter R_s is the S/C altitude relative to the center of the ellipsoid and γ is the actual look angle (i.e., the antenna electrical boresight relative to nadir, including the platform roll angle). Thus, for a given target at some height h, the parameters R_h and R_th are determined from Eqn. (8.3.24) and Eqn. (8.3.25b). The off-boresight angle in the polar pattern is then calculated from Eqn. (8.3.31). From this pattern, a second radiometric correction factor to be applied to the terrain geocoded image is determined

    g_2 = G²(φ)    (8.3.32)

where we have assumed the antenna is reciprocal. A final correction factor for the range attenuation is given by

    (8.3.33)

Combining these three corrections, and assuming they are applied to the complex data, then

    g_T(h, α) = √(g_1 g_2 g_3)    (8.3.34)

where g_1, g_2, and g_3 are given by Eqn. (8.3.30), Eqn. (8.3.32), and Eqn. (8.3.33), respectively. Equation (8.3.34) is the relative radiometric correction required to normalize the received amplitude signal from a target at elevation h on a slope α relative to the ellipsoid. To date, no system has operationally applied both the radiometric and geometric topographic corrections to SAR image products as described in this section.

8.4 IMAGE REGISTRATION

A natural question following from our discussion of geocoding and terrain correction of the SAR data regards the application of these data products. As previously discussed, a radiometrically and geometrically terrain corrected image, in conjunction with an incidence angle map, allows the scientist to calculate relative values of σ⁰ as a function of incidence angle. In this way, the relative scattering between two target types can be derived directly from the geocoded data products. Furthermore, if information on the system radiated power and the receiver gains were available, the absolute σ⁰ could also be derived from the image data by proper scaling of the image intensity (see Section 7.3). However, as we described in Chapter 1, SAR data interpretation is greatly enhanced when it is combined with other data sets (i.e., correlative data). This is especially true for data acquired by remote sensors, such as visible and infrared detectors, that measure the earth radiation at distinctly different parts of the electromagnetic spectrum (Elachi, 1987).
The factors that have slowed progress in interpretation of these multisensor data sets are essentially twofold. First, and perhaps foremost, there is at best a very limited database of synergistic SAR and optical (or infrared) wavelength data. Secondly, the radiometric and geometric calibration procedures described in this chapter have only recently been well understood, and are just now being mechanized into sets of algorithms that can be implemented on an operational basis. The application of these techniques to future data sets, such as those that will be acquired as part of the NASA Earth Observing System (EOS) program, offers the potential for a wide range of applications. Specifically, these products are key for developing an understanding of the earth's environmental processes.
The integration of data from a multitude of sensors presents a number of challenges in cross-sensor calibration and image registration. Perhaps the most obvious problem arises from the fact that the SAR is a side-looking instrument, while most optical instruments (including those operating at near infrared and infrared wavelengths) are nadir looking. To acquire synergistic data, the orbits must be offset by the SAR cross-track swath distance. Alternatively, the swath widths of the two instruments must be sufficiently wide that they overlap. Neither of these is a very practical solution, since the radar characterization of target type may require a specific imaging geometry (e.g., oceanography requires a steep incidence angle, polarimetric applications a shallow angle). Perhaps the best solution is to place the platform in a drifting orbit to obtain global coverage over a period of time and to systematically build a geodetic database that incorporates the data from each sensor.
The tools we need to generate this multi-sensor, global database are: (1) geocoding algorithms to map the data into an earth fixed grid; (2) mosaicking algorithms to assemble the image frames into a map base; and (3) data fusion algorithms to precisely register the data from the various sensors to subpixel accuracy. A key factor in developing such a database is to establish standards
for the geocoded data products to which all instrument processing systems adhere. In this area, not only is there a lack of consistency among processing centers handling data from different sensors, but there is often little agreement across processors for the same sensor. In an effort to solve the problem, a number of committees have been formed to provide recommendations for standards in spaceborne data. One group, the Consultative Committee on Space Data Systems (CCSDS), has dealt mainly with downlink data stream formats. A second group, the Committee on Earth Observations Satellites (CEOS), has specifically addressed both optical (Landsat) and SAR data products in terms of image format and presentation. However, a community consensus has not been reached on key items such as standards for the ellipsoid, the map projection, the output image grid spacing, or image framing within the grid. These will be important topics of discussion for the multi-national working groups being formed under the EOS program.

8.4.1 Mosaicking

The generation of large scale maps using SAR imagery requires a capability to assemble multiple image frames or strips into a common grid. These mosaics could then be cut into standard quadrants and stored in the database according to a grid structure. One possible convention for selecting these quadrants is the US Geological Survey map quadrant system. For example, in this system, the 250,000:1 maps have a quadrant on the order of 100-150 km on a side.
Given that the image data from the various sensors have been geocoded to a standard database, the generation of a large scale mosaic is a relatively simple process (Kwok et al., 1990). It is analogous to assembling a jigsaw puzzle onto a template, where in our case the template is a map grid. The analogy is poor in the sense that the geocoded images do not fit nicely together. Rather, there is generally an overlap or gap between adjacent frames, and therefore there needs to be a convention on how to merge these data; specifically, which portion of the data is to be discarded, or how the gaps are to be filled, when generating the image mosaics.
In general, even if the systematic geometric distortions have been properly corrected in generating the geocoded image products, there remains a random residual error in registering each frame to the map base. It is therefore necessary to cross-correlate adjacent image frames (assuming there is sufficient overlap region) to determine this residual misregistration error. Typically the correlation is performed over a number of small patches along the overlap region and the average misregistration is used to correct the offset. The new image is then set into the grid, replacing the existing image data in the overlap region.
To blend the seams, a feathering process is needed. This procedure consists of deriving the mean of the image in a small area on either edge of the seam from a data histogram. An averaging process is applied in this border region by adjusting the mean using a linear ramp function. Obviously, if a larger boundary region is selected then the seam transition will be smoother. However, this empirical correction applied to the image data in the boundary region may degrade the calibration. Given two adjacent images acquired at different incidence angles, the data in the overlap region will have a different mean intensity, since σ⁰ varies as a function of η. The feathering process to blend the seams adjusts this mean, and therefore degrades the calibration accuracy, to generate an aesthetically pleasing image product. In principle the effect of the smoothing can be accounted for in the calibration scale factors; however, in practice it is relatively complex to keep track of these correction parameters. Therefore, this process should only be performed when generating photoproducts or video displays for visual interpretation.

Figure 8.22 Mosaic of three Seasat image frames near Wind River, Wyoming, geocoded using a USGS 24,000 to 1 DEM.

An example of a three-frame mosaic using Seasat data covering an area of geologic interest near Wind River, Wyoming, is shown in Fig. 8.22. The images were first geocoded to a UTM projection at 12.5 m spacing using USGS 24,000 to 1 DEM data. The images were radiometrically corrected assuming a smooth geoid. The individual frames were registered to each other using cross-correlation
and the seams smoothed by feathering the output. A second example of the mosaicking process is a larger-scale Southern California mosaic, shown in Fig. 8.23. This image, which is comprised of 33 Seasat geocoded frames, covers approximately 240,000 km². It is particularly useful for studying the geologic formations and fault lines in the region. Figure 8.24 is a 32-orbit mosaic of Venus compiled from data acquired by the Magellan spacecraft. The image dimension is approximately 500 km on a side. Each image strip comprising the mosaic is 20 km wide and extends the entire vertical dimension of the image.

Figure 8.23 Mosaic of 33 Seasat image frames of the Southern California region covering approximately 240,000 km².

Figure 8.24 Multiorbit mosaic of the "crater farm" region of Venus, centered at 27°S, 339°E. The largest of the craters shown is 50 km in diameter.
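The linear-ramp feathering described earlier in this subsection can be sketched as follows; this is a minimal illustration with arbitrary strip sizes, and it omits the histogram-based estimation of the local means near the seam.

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Blend two side-by-side strips with a linear ramp across the overlap
    region, the feathering step used for seam smoothing."""
    ramp = np.linspace(1.0, 0.0, overlap)                    # weight for 'left'
    blend = left[:, -overlap:] * ramp + right[:, :overlap] * (1.0 - ramp)
    return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])

# Two constant strips with different mean intensities and a 16-pixel overlap;
# the seam transition ramps smoothly from one mean to the other.
a = np.full((4, 40), 1.0)
b = np.full((4, 40), 3.0)
mosaic = feather_blend(a, b, overlap=16)
print(mosaic.shape)   # → (4, 64)
```

Note how the blend changes the pixel values in the overlap region, which is exactly the calibration degradation the text warns about.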
used for multisensor registration. Fig. 8.27 shows a generalized flow for a
multisensor registration algorithm, where a number of techniques are made
available at each stage of the processing to accommodate the variety of sensor
and target types, as well as varying environmental conditions. A report
evaluating this approach to multisensor registration, including a number of
candidate algorithms, has been published by Kwok et al. (1989).

Figure 8.27 Generalized multisensor registration flow: geocoded products are passed through patch pre-selection, segmentation (edges, region boundaries, principal components), and matching of sub-patches (chamfer matching, binary correlation, dynamic programming).

Consider, for example, the image pair presented in Fig. 8.25. A simple
cross-correlation would yield a very weak correlation peak (or peak-to-mean
ratio) in the region of the sand dunes, as a result of the dramatic radiometric
difference between the two images. A better approach would be to extract
features that are invariant across the two scenes. Three candidate techniques
are: (1) edge operators; (2) statistical analysis using the stationarity properties
of local regions; and (3) principal component analysis. In the remainder of this
section we will address the edge operators in some detail.

There is a large body of literature on the subject of edge detection. However,
in almost all cases only optical image data are considered. For SAR imagery,
since it is corrupted by speckle noise, techniques based on the first and second
order directional derivatives (e.g., Sobel or Roberts operators) will perform
poorly. This is especially true in terms of localization of the edges, since these
operators produce large responses in the edge region. Similar performance
limitations are characteristic of statistical edge operators such as those proposed
by Frost et al. (1982) and Touzi et al. (1988). An alternative procedure, using
a two-dimensional smoothing operator such as a Marr-Hildreth operator
(Marr et al., 1980) or a Canny edge detector (Canny, 1983, 1986), exhibits
significantly improved localization and edge detection performance relative to
the derivative and statistical operators.

Figure 8.26 Comparison of Landsat image framelettes with simulated imagery from DEM images: (a) and (c) are simulated images; (b) and (d) are Landsat data.

An example of a Canny edge detector as applied to Seasat, Landsat TM,
and SPOT images is shown in Fig. 8.28. This region, the Altamaha River, GA,
shows a variety of target types (rivers, fields, roads, etc.). The Seasat image,
acquired in July 1978, has a significantly greater number of detected edges,
primarily due to the statistical characteristics of the original image. The SPOT
(Band 3) and the Landsat (Band 4), both acquired in July 1984, are markedly
similar, although there are textural differences in the images that give rise to
some dissimilar lines. Perhaps the key point demonstrated by this example is
that the matching routines must be adaptive, to optimize their performance for
a given set of data and imaging conditions. For example, the width of the Seasat
edge operator could be increased to reduce the number of spurious edges as
compared to the optical data. An example of the effects of varying the spatial
filter width is given in Fig. 8.29. In fact, the matching routine may require an
iterative procedure in which, for each pass, the filter parameters would be
adjusted until some cross-image similarity criterion is satisfied.
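The effect of the spatial filter width on the number of detected edges can be illustrated with a simplified stand-in for the smoothing stage of a Canny detector (this is not the full Canny algorithm — there is no non-maximum suppression or hysteresis — and all names are illustrative):

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Sampled 1-D Gaussian, normalized to unit sum."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smoothed_edge_strength(img, sigma):
    """Separable Gaussian smoothing followed by gradient magnitude.
    Larger sigma suppresses speckle-like noise and therefore yields
    fewer spurious above-threshold edge responses."""
    k = gaussian_kernel1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    sm = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)
    gy, gx = np.gradient(sm)
    return np.hypot(gx, gy)
```

On a noisy step image, counting responses above a fixed threshold shows the behavior described in the text: as the filter width grows, the number of detected edge pixels drops.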
Figure 8.29 Effect of variation in the spatial filter width parameter σ in the Canny edge detector for a SAR image of the Altamaha River, GA: (a) original image (512 × 512 pixels); (b) edge image with σ = 1 pixel; (c) edge image with σ = 2 pixels; (d) edge image with σ = 4 pixels.
the Seasat image of Fig. 8.28a when matching it to the optical images of
Fig. 8.28b and Fig. 8.28c. However, a small rotation between the two images
(i.e., <0.5°) will decorrelate the image pair. Thus, the matching procedure must
consist of a series of image rotations given a candidate set of angles about the
nominal (zero rotation) angle. However, binary cross-correlation can be made
more robust to small rotations by thickening the edge lines (Wong, 1978). For
example, a three pixel wide edge will generally tolerate a rotation of 1.5° to
2.0° without significant decorrelation.
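The edge-thickening idea can be sketched as follows; here a one-pixel shift stands in for a small rotation, and the dilation radius plays the role of the edge width (implementation details are ours, not from the text):

```python
import numpy as np

def thicken(edges, r=1):
    """Dilate a binary edge map by r pixels (square structuring element),
    implemented as a maximum over shifted copies (with wraparound)."""
    out = edges.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.maximum(out, np.roll(np.roll(edges, dy, 0), dx, 1))
    return out

def binary_correlation(a, b):
    """Normalized overlap of two binary edge maps."""
    return (a * b).sum() / np.sqrt(a.sum() * b.sum())
```

A thin diagonal line decorrelates completely under a one-pixel misalignment, while the thickened version retains substantial overlap — the same effect that lets a three-pixel-wide edge tolerate a 1.5° to 2.0° rotation.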
An alternative technique to binary cross-correlation is the distance transform,
in which the edge map is converted to a grey level image according to the
pixels' distance from an edge. This is illustrated in Fig. 8.30 for Seasat and
Landsat TM images of Wind River Basin, Wyoming. In comparing the binary
cross-correlation technique to the distance transform (chamfer) matching, the
general conclusion is that chamfer matching is less sensitive to rotational offsets
but more sensitive to the existence of extraneous edges.
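The grey-level conversion behind chamfer matching can be sketched with the classical two-pass (3,4) chamfer approximation to the distance transform (the implementation is ours, not the book's):

```python
import numpy as np

def chamfer_distance(edges):
    """Two-pass (3,4)-chamfer approximation of the distance from every
    pixel to the nearest edge pixel, in (approximate) pixel units."""
    big = 10 ** 6
    d = np.where(edges > 0, 0, big).astype(float)
    h, w = d.shape
    for i in range(h):                       # forward pass
        for j in range(w):
            if i > 0:
                d[i, j] = min(d[i, j], d[i - 1, j] + 3)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i - 1, j - 1] + 4)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i - 1, j + 1] + 4)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j - 1] + 3)
    for i in range(h - 1, -1, -1):           # backward pass
        for j in range(w - 1, -1, -1):
            if i < h - 1:
                d[i, j] = min(d[i, j], d[i + 1, j] + 3)
                if j < w - 1:
                    d[i, j] = min(d[i, j], d[i + 1, j + 1] + 4)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i + 1, j - 1] + 4)
            if j < w - 1:
                d[i, j] = min(d[i, j], d[i, j + 1] + 3)
    return d / 3.0

def chamfer_score(dist_map, edges):
    """Mean distance of one image's edge pixels to the other's edges;
    lower means a better match."""
    return dist_map[edges > 0].mean()
```

Because the score degrades gradually with displacement rather than collapsing to zero overlap, chamfer matching tolerates rotational offsets better than binary correlation, at the price of being pulled around by extraneous edges.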
The dynamic programming technique is a relatively new approach that has
not yet been tested using SAR data. It essentially uses an autoregressive model
to register severely distorted images to a map base without any a priori
information about the distortion. The model is used to define the deformation
at the pixel scale. Dynamic programming then optimizes the search for the best
registration of an ordered sequence of primitives. These primitives could be
edges, or another type of cross-image invariant feature. There are a number of
unique features to this matching process that may well lead to its selection as
the optimal solution for many of the matching problems. Maitre and Wu (1989)
have demonstrated the approach using optical data with remarkably good
results.
In summary, multisensor registration is the final stage of the Level 1B
processing. It utilizes the output of the geometric correction/ geocoding routines
described earlier in this chapter to perform registration of data sets from
distinctly different portions of the electromagnetic spectrum. This in turn leads
to a more detailed description of the surface features, which can then be used
to model the change processes or simply to survey the current land use. Perhaps
the most challenging aspect of this problem is the wide variety of data types
resulting from different sensors, target types, and environmental conditions. It
is clear that no fixed procedure or set of procedures will satisfy all the matching
requirements. This task is perhaps a good candidate for an artificial intelligence,
rule based approach for selecting the optimal algorithm and determining its
parameters. Furthermore, the computational load for some of the more
complex algorithms may mandate a distributed (massively parallel) processing
architecture or a neural network type implementation.

Independent of the final system design selected to perform the multisensor
registration task, the payoff in developing a capability to routinely generate
registered multilevel data sets will be far reaching. These products are crucial
for presenting the data in a format allowing derivation of the geophysical
parameter information, which in turn is used to drive large scale models of
the earth's global processes.

8.5 SUMMARY

This chapter completes our discussion of the SAR image calibration and
the correction algorithms. In Chapter 7 we presented the techniques for
characterizing the radar system transfer function using both internal and
external calibration devices. Throughout Chapter 7 we assumed a smooth geoid
in order to concentrate on the issues associated with radiometric calibration.
The problem of geometric distortion in the SAR imagery was presented in
Chapter 8. We initially reviewed the basic definitions of the geometric calibration
terms and introduced a set of parameters to characterize the image accuracy.
This discussion was followed by an error analysis to identify the key sources
of geometric distortion and target location error. However, the bulk of the
chapter was dedicated to the geometric correction algorithms.

We presented automated techniques to map the natural SAR correlator
output image (i.e., without resampling) into a rectified format (uniform pixel
spacings), either in the SAR azimuth/range grid or into a standard map grid.
Much of the discussion was centered around the techniques to perform the
image rotation and to compensate for the terrain effects. We presented a
three-pass resampling technique that requires only one-dimensional resampling
operations. We proposed a technique to correct for the terrain induced distortion
during the second pass. Specific equations were presented to calculate the pixel
displacement as well as the radiometric correction factors resulting from the
local relief.

The chapter concluded with a discussion of an application of geocoded
imagery to multiframe image mosaicking and multisensor image registration.
A number of examples were presented from Seasat SAR and Landsat TM data
sets to illustrate the pros and cons of the various algorithms. We compared the
performance of a number of edge detectors for matching, concluding principally
that much work remains to be done in the area of multisensor image registration.
A final point is that we are only now beginning to mechanize these radiometric
and geometric algorithms in terms of making them a part of the automated
processing operations. Assuming that calibrated products become the standard
in the near future, our next big challenge is to merge the SAR data with other
non-SAR imagery as a preprocessing stage for geophysical data analysis.
Considering the effect of scene content and environmental conditions on the
statistics of the image data, this may be an extremely complex task to fully
automate. In this area, perhaps the best approach is to use some rule based
expert system to evaluate the data characteristics and then select the optimum
technique for matching.

REFERENCES

Barrow, H. G., J. M. Tenenbaum, R. Bolles and H. C. Wolf (1978). "Parametric Correspondence and Chamfer Matching: Two New Techniques for Image Matching," Proc. DARPA Image Understanding Workshop, pp. 659-663.

Brookner, E. (1985). "Pulse-Distortion and Faraday-Rotation Ionospheric Limitations," Chapter 14 in E. Brookner (ed.), Radar Technology, Artech House, Inc., Dedham, MA, pp. 201-211.

Canny, J. F. (1983). "Finding Edges and Lines in Images," MIT Tech. Report AI-TR-720, Artificial Intelligence Laboratory, Mass. Inst. Tech., Cambridge, MA.

Canny, J. F. (1986). "A Computational Approach to Edge Detection," IEEE Trans. Pattern Anal. and Mach. Intell., PAMI-8, pp. 679-698.

Curlander, J. C. (1982). "Location of Spaceborne SAR Imagery," IEEE Trans. Geosci. and Remote Sensing, GE-20, pp. 359-364.

Curlander, J. C. (1984). "Utilization of SAR Data for Mapping," IEEE Trans. Geosci. and Remote Sensing, GE-22, pp. 106-112.

Curlander, J. C., R. Kwok and S. S. Pang (1987). "A Post-Processing System for Automated Rectification and Registration of Spaceborne SAR Imagery," Int. J. Rem. Sens., 8, pp. 621-638.

Davis, W. A. and S. K. Kenue (1978). "Automatic Selection of Control Points for the Registration of Digital Images," Proc. 4th Inter. Joint Conf. on Pattern Recognition, Kyoto, Japan, pp. 936-938.

Domik, G., F. Leberl and J. Cimino (1986). "Multiple Incidence Angle SIR-B Experiment over Argentina: Generation of Secondary Image Products," IEEE Trans. Geosci. and Remote Sensing, GE-24, pp. 492-498.

Elachi, C. (1987). Introduction to the Physics of Remote Sensing, Wiley, New York.

Friedman, D. E. (1981). "Operational Resampling of Corrected Images to a Geocoded Format," 15th Inter. Symp. on Remote Sens. Envir., Ann Arbor, MI, p. 195 et seq.

Frost, V. S., K. S. Shanmugan and J. C. Holtzman (1982). "Edge Detector for Synthetic Aperture Radar and Other Noisy Images," IGARSS '82 Digest, FA-2, pp. 4.1-4.9.

Graf, C. (1988). "Map Projections for SAR Geocoding," Tech. Report ERS-D-TN-22910, DLR, Oberpfaffenhofen, Germany.

Heiskanen, W. A. and H. Moritz (1967). Physical Geodesy, W. H. Freeman, San Francisco, CA.

Kropatsch, W. and D. Strobl (1990). "The Generation of SAR Layover and Shadow Maps from Digital Elevation Models," IEEE Trans. Geosci. and Remote Sensing, GE-28, pp. 98-107.
We should note that the design process outlined above is not necessarily serial,
in that the selection of a particular algorithm or architecture in Steps 2 or 3
may not be feasible once the costs are evaluated. The performance requirements
may conflict with the available resources or technology, requiring some descope.
In Section 9.1, we address the requirements definition of Step 1 in detail;
Section 9.2 then addresses the algorithm selection and loading analysis of
Step 2; and in Section 9.3 we present various candidate architectures with their
performance versus cost trade-offs.

Following the SAR correlator discussion, the design options and practical
constraints for the Level 1B processor will be presented. This processor performs
the radiometric and geometric corrections required for production of calibrated
image products. Considerations relating to the throughput performance, storage
and access of ancillary data (e.g., digital terrain maps), and the data product
formats will be discussed. The chapter concludes with a section on browse data
generation and specifically on image data compression techniques. A complexity
analysis of several lossy spatial compression algorithms is presented, along with
a queueing analysis to determine the required compression ratio.

9.1 CORRELATOR REQUIREMENTS DEFINITION

Prior to evaluation of the candidate architectures and algorithms, the basic
processor system requirements must be established. These are derived from the
sensor and platform design and performance characteristics, as well as from the
user product requirements. The basic radar and platform parameters used in
the processor design are listed in Tables 9.1 and 9.2, respectively. Table 9.3 is a
list of the output specifications required for the correlator requirements analysis.
A number of detailed specifications have been excluded from these lists for
brevity.

TABLE 9.2 List of Platform Parameters Required for Correlator Design

Inclination angle (α_i)
Orbital altitude (H)
Position determination accuracy (σ_x, σ_y, σ_z)
Velocity determination accuracy (σ_vx, σ_vy, σ_vz)
Attitude determination accuracy (σ_r, σ_y, σ_p)
Attitude rate determination accuracy (σ_r', σ_y', σ_p')
Bit error rate (P_B)

TABLE 9.3 List of Output Specifications Required for Correlator Design*

Throughput: peak and sustained rates
Data product types/formats
Image quality
  azimuth and range ambiguity to signal ratios (AASR, RASR)
  azimuth and range resolutions (δx, δR)
  integrated sidelobe ratio (ISLR)
  peak sidelobe ratio (PSLR)
Geometric fidelity
  location, orientation accuracy
  scale, skew error
Radiometric fidelity
  relative, absolute accuracy

*It is assumed that geometric and radiometric calibration are performed in the post-processor following image correlation.
9.1.1 Doppler Parameter Analysis

The extreme bounds for the Doppler centroid, f_DC, and the Doppler rate, f_R, must
first be established. This includes the limiting values that each parameter can
assume, as well as the maximum rate of change in both the along- and cross-track
dimensions. The rate of change of the Doppler parameters in the along-track
direction becomes critically important in selection of the correlation algorithm
since, for the frequency domain fast convolution technique, there is an inherent
assumption that the Doppler parameters are constant over the synthetic aperture
period. These parameters can be expressed in terms of the relative sensor to
target position and velocity vectors as follows:

    f_DC = (2 / (λR)) V_st · R_st    (9.1.1)

    f_R = (2 / (λR)) (V_st · V_st + R_st · A_st)    (9.1.4)

The second term in Eqn. (9.1.4) is a small contributor to the Doppler rate
(<10%) as compared to the first term.

Given the expressions in Eqn. (9.1.1) and Eqn. (9.1.4) for f_DC and f_R in terms
of the orbital parameters, the nominal Doppler parameter bounds and maximum
rates of change can be evaluated by simulating an orbit of the platform, assuming
some sinusoidal variation for the attitude parameters according to the platform
control specifications. The magnitude of the attitude variation is given by the
attitude control error, while the variation period is derived from the attitude
rate. This analysis should be performed for both the minimum and maximum
look angles and for the yaw and pitch, both in phase and 180° out of phase.
The output will provide the Doppler bounds f_DC^max and f_R^max in each of
the along-track and cross-track dimensions. An example of the resulting plots
for these parameters, using the SIR-C C-band characteristics, is given in
Fig. 9.1. The f_DC^max and f_R^min are used to determine the maximum range

Figure 9.1 Plot of f_DC and f_R for the SIR-C C-band SAR at worst case attitude (yaw = 1.4°, pitch = −1.8°) as a function of slant range for two orbit inclinations (57°, 90°).
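The vector expressions for the Doppler parameters translate directly to code. A sketch using the forms of Eqns. (9.1.1) and (9.1.4); the state vectors below are purely illustrative, not from any real orbit:

```python
import numpy as np

def doppler_params(R_st, V_st, A_st, wavelength):
    """Doppler centroid and Doppler rate from the relative
    sensor-to-target position (R_st), velocity (V_st), and
    acceleration (A_st) vectors, per Eqns. (9.1.1) and (9.1.4)."""
    R = np.linalg.norm(R_st)
    f_dc = 2.0 / (wavelength * R) * np.dot(V_st, R_st)    # Eqn. (9.1.1)
    f_r = 2.0 / (wavelength * R) * (np.dot(V_st, V_st)
                                    + np.dot(R_st, A_st))  # Eqn. (9.1.4)
    return f_dc, f_r
```

For a broadside geometry (V_st perpendicular to R_st) the centroid is zero, and the R_st · A_st term is the small (<10%) second contribution noted in the text.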
(9.1.8)

where TB, the time bandwidth product, is given by

    TB = B_p τ_c    (9.1.9)

and τ_c is the coherent integration time. Any imaging mode (i.e., combination
of look angle, latitude, and squint angle) that produces a Doppler centroid
resulting in a range walk that satisfies Eqn. (9.1.8) requires secondary range
compression to meet nominal performance specifications.

Doppler Drift Rates

The change in Doppler parameters as a function of both along- and cross-track
position establishes the need for reference function updates to meet the matched
filter accuracy requirements. The parameter typically specified for f_R is the
maximum quadratic (or higher order) phase error at the edge of the synthetic
aperture. For f_DC it is the fractional error between the true Doppler centroid
and the reference function centroid at the aperture edge. A typical number for
the allowable quadratic phase error resulting from f_R estimation error is π/4
(i.e., φ_q = 45°). Errors of this magnitude produce very little degradation in the
impulse response function (Fig. 9.2). Assuming that no weighting is applied to
the reference function, the effect of allowing a phase error of φ_q = 45° is to
broaden the mainlobe by approximately 2%, and increase the peak sidelobe
level about 2 dB relative to the mainlobe. Although this is a relatively modest
degradation, considering the other phase errors in the system (which combine
in root-sum-squared fashion with this error) and our ability to reduce the
sidelobes by amplitude weighting, a unity error criterion in most cases yields an
acceptable performance.

Figure 9.2 Effect of quadratic phase error, φ_q, on the point target response function.

The maximum time between reference function updates resulting from f_R
drift (i.e., f'_R^max) is given by

    τ_u,fR = 1 / (f'_R^max τ_c²)    (9.1.10)

where we have assumed a π/4 phase error and τ_c is the coherent integration
time. For the frequency domain fast convolution algorithm, the processed block
duration (from center to edge of aperture) is

    τ_b = N_az / (2 f_p)    (9.1.11)

where N_az is the FFT length and L_az is the azimuth reference function length
(τ_c = L_az / f_p). The update requirement is therefore

    τ_b ≤ τ_u,fR, that is, N_az ≤ 2 f_p³ / (f'_R^max L_az²)    (9.1.12)

since within a data block the fast convolution algorithm requires that the
Doppler parameters remain constant. If the requirement in Eqn. (9.1.12) is not
met, the data must be preprocessed to correct for the phase errors (motion
compensation) or an alternative algorithm (e.g., time domain convolution) could
be used.

A matched filtering error in the Doppler centroid f_DC results in lost signal
power and increased azimuth ambiguities. The maximum time between reference
function updates for a given f_DC drift (i.e., f'_DC^max) is given by

    τ_u,fDC = 0.1 B_D / f'_DC^max    (9.1.13)

where we have assumed that the allowable centroid error is 10% of the Doppler
bandwidth B_D, which produces a relatively small degradation in the SNR and
AASR. Thus, a further requirement to use the fast convolution technique is

    τ_b ≤ τ_u,fDC    (9.1.14)

The azimuth ambiguity to signal ratio is

    AASR = Σ_{m≠0} ∫_{−B_p/2}^{B_p/2} G²(f + m f_p) df  /  ∫_{−B_p/2}^{B_p/2} G²(f) df    (9.1.15)

where G²(f) is the two-way azimuth antenna pattern. For example, consider
a spaceborne system with a uniformly illuminated azimuth aperture. Assuming
f_p = 1.1 B_D, from Eqn. (9.1.15) a value B_p = 0.75 B_D provides an AASR of
approximately −20 dB.

Azimuth Reference Function Length

The azimuth reference function length, L_az, is given by

    L_az = τ_c f_p    (9.1.16)

where L_az is in samples. This can be rewritten as

    L_az = f_p B_p / |f_R|

Note that, since f_R is range dependent, the length of the azimuth reference must
be updated as a function of cross-track position to keep the azimuth resolution
constant.
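The AASR ratio of Eqn. (9.1.15) is easy to evaluate numerically. A sketch assuming a uniformly illuminated aperture with a two-way pattern G²(f) = sinc⁴(f/B_D); the pattern model and the normalization of B_D relative to the sinc null are assumptions of this sketch, so the exact decibel value should not be read as the book's:

```python
import numpy as np

def aasr_db(f_p, b_p, b_d=1.0, n_amb=20, n_pts=4001):
    """Evaluate Eqn. (9.1.15): ratio of ambiguous energy folded in at
    multiples of the PRF f_p to the signal energy within the processing
    bandwidth b_p, for an assumed pattern G^2(f) = sinc^4(f / b_d)."""
    f = np.linspace(-b_p / 2.0, b_p / 2.0, n_pts)
    df = f[1] - f[0]
    def g2(x):
        return np.sinc(x / b_d) ** 4
    num = sum(g2(f + m * f_p).sum() * df
              for m in range(-n_amb, n_amb + 1) if m != 0)
    den = g2(f).sum() * df
    return 10.0 * np.log10(num / den)
```

With f_p = 1.1 B_D and B_p = 0.75 B_D this model gives an AASR in the low −20 dB range, consistent in magnitude with the example in the text.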
processor; however, N_az is limited by the Doppler parameter update criterion.
From Eqn. (9.1.12), Eqn. (9.1.14), and Eqn. (9.1.19), the block size is bounded by
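The update criteria translate into a simple block-size check. A sketch assuming the π/4 quadratic phase and 10% centroid error budgets discussed in the text; the equation forms, function names, and numbers are ours:

```python
def tau_update_fr(fdot_r_max, tau_c):
    """Max update interval for Doppler-rate drift, Eqn. (9.1.10),
    with a pi/4 quadratic phase error budget."""
    return 1.0 / (fdot_r_max * tau_c ** 2)

def tau_update_fdc(fdot_dc_max, b_d):
    """Max update interval for centroid drift, Eqn. (9.1.13), allowing
    a centroid error of 10% of the Doppler bandwidth b_d."""
    return 0.1 * b_d / fdot_dc_max

def max_block_size(f_p, tau_u):
    """Azimuth FFT block size implied by a block duration N_az/(2 f_p)
    that must not exceed the update interval tau_u."""
    return int(2 * f_p * tau_u)
```

The tighter of the two intervals governs: the block must be short enough that both the Doppler rate and the Doppler centroid remain effectively constant across it.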
For a multilook processor, where subaperture spectral division is used, N_az
would be the block size for each look.

The range cell migration memory is given by

    M_RCM = N_az N_RW (8) bytes    (9.1.21)

where N_RW is the maximum range walk in samples, assuming a complex,
floating point (8 byte) representation for each data sample. For example, the
Seasat azimuth reference function for a full resolution, single-look image is
L_az ≈ 4 K, resulting in a minimum block size of N_az = 8 K. The largest f_DC
produces a range walk on the order of 128 samples. From Eqn. (9.1.21), the
range cell migration memory is therefore M_RCM = 8.0 MB.

9.1.3 Range Reference Function

The range FFT block size is determined by the number of samples in the echo
window and the reference function length. The range reference function length is

    L_r = f_s t_p    (9.1.22)

where f_s is the complex sampling frequency and t_p is the transmitted pulse
duration. The range FFT length, N_r, is usually chosen to be the smallest power
of 2 that satisfies

    N_r ≥ L_r / (1 − g_r)    (9.1.23)

where g_r is the range compression efficiency factor. Typically, g_r is selected to
be greater than 1/2, and usually it is limited by the corner turn memory size,
which is given by

    M_CT = N_az (N_r − L_r) (8) bytes    (9.1.24)

again assuming a complex floating point data representation. For example, if
the azimuth and range FFT sizes are set at 4 K complex samples each, and if
the range reference function length is 512 complex samples, the minimum corner
turn memory size is M_CT = 112 MB. The M_CT can be reduced by shortening
the range block length N_r. However, recall that each block must be overlapped
by L_r, thus M_CT reduction is achieved at the cost of processing efficiency.
Memory can also be reduced by packing the data into a (16I, 16Q) or (8I, 8Q)
format.

A final consideration in selecting the range FFT size is the cross-track
variation in the Doppler centroid, f_DC. Since the secondary range compression
filter function depends on f_DC, and this is assumed constant for each block, N_r
may be limited by the performance requirement for the secondary range
compression. This limitation is typically only important for wide azimuth
beamwidth or squint imaging mode radars (Chang et al., 1992).

9.2 CORRELATOR ALGORITHM SELECTION AND COMPUTATIONAL ANALYSIS

The selection of the appropriate SAR correlation algorithm for data processing
is dependent on the signal data characteristics, the system throughput
requirements, and the output image quality specifications. There is no simple
procedure for evaluating the trade-offs among these factors. Rather, a fairly
complex analysis is needed, requiring consideration of the design and
implementation constraints in conjunction with signal processor architectures
and the available technologies. A fundamental trade-off to be made is the relative
importance of system throughput versus image quality.

The key element in the processing chain is the azimuth processing stage,
which involves formation of the synthetic aperture to focus the azimuth return
into a high resolution image. In this section, we consider the trade-offs between
what we consider to be the two fundamental azimuth correlation techniques:
(1) spectral analysis (e.g., unfocussed SAR or SPECAN); and (2) matched
filtering (e.g., frequency domain or time domain convolution). We recognize
that there are a number of other possible techniques, such as the polar
processor with step transform (Chapter 10), the hybrid algorithm (Wu et al.,
1982), and the wave equation processor (Rocca et al., 1989). Generally, these
techniques are used for special situations (e.g., inverse SAR, large squint angles,
high phase precision) and will not be considered here.

The processor performance in terms of output image quality depends on the
characteristics of the echo data. A primary characteristic driving algorithm
selection is the time bandwidth product of the azimuth signal. This parameter,
which is the product of the processing bandwidth and the coherent integration
time, given by

    TB = B_p τ_c    (9.2.1)

is a good benchmark to determine if an approximation can be used for the
exact 2D matched filtering algorithm. Small TB products are generally
characteristic of high frequency (X-band or higher) spaceborne radars, or of
relatively low-flying platforms (e.g., airborne systems). Generally, for these
systems we can obtain good quality imagery with a simplified azimuth
correlation algorithm.

In the following two sections we address the trade-offs in performance versus
computational complexity for the spectral analysis algorithms and the matched
filtering algorithms.
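The sizing rules of Eqns. (9.1.23) and (9.1.24), together with the range cell migration memory example, can be checked with a few helper functions (names are ours):

```python
def range_fft_length(l_r, g_r):
    """Smallest power of 2 satisfying Eqn. (9.1.23): N_r >= L_r / (1 - g_r)."""
    n = 1
    while n < l_r / (1.0 - g_r):
        n *= 2
    return n

def corner_turn_mb(n_az, n_r, l_r, bytes_per_sample=8):
    """Corner turn memory, Eqn. (9.1.24), in megabytes."""
    return n_az * (n_r - l_r) * bytes_per_sample / 2.0 ** 20

def rcm_memory_mb(n_az, n_walk, bytes_per_sample=8):
    """Range cell migration memory for a maximum range walk of
    n_walk samples, in megabytes."""
    return n_az * n_walk * bytes_per_sample / 2.0 ** 20
```

The Seasat numbers in the text are reproduced: `corner_turn_mb(4096, 4096, 512)` gives 112 MB, and `rcm_memory_mb(8192, 128)` gives 8.0 MB.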
algorithm in which a phase correction is applied to the signal prior to a forward
transform.

Unfocussed SAR Algorithm

To utilize a spectral analysis technique such as unfocussed SAR or SPECAN,
we must first consider the resolution requirements of the output image products.
For the unfocussed processor, the azimuth resolution is given by the along-track
integration time associated with, for example, a π/4 quadratic phase shift. It
can be shown using simple geometry (see Fig. 9.3) that

    φ_q = (π / (2λR)) (V_st τ_cu)²    (9.2.2)

where φ_q is the relative change in quadratic phase between the center and the
edge of the aperture and τ_cu is the unfocussed aperture time. For φ_q = π/4, the
coherent integration time for unfocussed SAR processing is

    τ_cu = (1 / V_st) (λR / 2)^(1/2)    (9.2.3)

Since the azimuth resolution is given by

    δx = V_st τ_cu    (9.2.4)

substituting Eqn. (9.2.3) we get

    δx = (λR / 2)^(1/2)    (9.2.5)

Thus, for a spaceborne system such as Seasat, where λ = 0.235 m and R = 850
km, δx ≈ 316 m, which is too coarse for most science applications. However,
in the case of an airborne X-band system such as the Canadian STAR-1, where
λ = 3.2 cm and R ≈ 10 km, an unfocussed azimuth resolution of δx ≈ 13 m is
achievable with φ_q = π/4. This is acceptable for many applications.

Figure 9.3 Unfocussed aperture geometry, showing the sensor path and the unfocussed aperture length.

The unfocussed SAR processor was used by many of the early SAR systems.
This processor does not compensate for the along-track phase shift resulting
from the change in sensor-to-target range. In its most rudimentary form this
processor consists of summing adjacent pixels over the unfocussed aperture
length, where τ_cu is given by Eqn. (9.2.3). However, in general, this will not
produce good quality imagery, since the inherent assumption is that the beam
is steered to zero Doppler. For squint angles producing a significant Doppler
shift (e.g., f_DC > 0.25 B_D), the azimuth ambiguities will be severe. Additionally
there is uncompensated range walk which will cause the targets to be dispersed
in the range dimension. Thus, a more practical algorithm requires a preprocessing
step where the data is multiplied with a factor W_n = A_n exp{j2π f_DC n / f_p} that
shifts it to zero Doppler and also weights the terms in the summation to reduce
the sidelobes. The data block should also be skewed by the range walk,
Eqn. (9.1.6), prior to summing to minimize the range dispersion. The aggregate
computational complexity for the unfocussed SAR processor is essentially that
of the preprocessing multiply and the summation, plus the computations required
for the reference function generation, which are negligible assuming the Doppler
centroid is slowly varying relative to the image frame size.
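The unfocussed resolution limit of Eqn. (9.2.5) reproduces both numerical examples above:

```python
import math

def unfocussed_resolution(wavelength, slant_range):
    """Azimuth resolution of the unfocussed processor, Eqn. (9.2.5):
    delta_x = sqrt(lambda * R / 2), for a pi/4 quadratic phase budget."""
    return math.sqrt(wavelength * slant_range / 2.0)

print(unfocussed_resolution(0.235, 850e3))  # Seasat: ~316 m
print(unfocussed_resolution(0.032, 10e3))   # STAR-1: ~12.6 m
```

The square-root dependence on λR explains why the technique is adequate for short-wavelength, low-altitude airborne systems but far too coarse for a spaceborne L-band system like Seasat.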
The Spectral Analysis Algorithm

The SPECtral ANalysis (SPECAN) algorithm corrects for quadratic phase
variation across the processing bandwidth and separates targets based on their
differential Doppler shift. This technique is an improvement over both the
unfocussed SAR and Doppler beam sharpening algorithms in that it achieves
significantly higher resolution. However, it is limited in that it cannot
accommodate the variable cross-track range curvature correction. The flowchart
for this algorithm, which is described in detail in Chapter 10, is shown in
Fig. 9.4a. Basically, it performs a skew (or an interpolation) of the data in the
range dimension to compensate for the range walk, applies a linear FM (deramp)
correction to the data block, and then uses a forward FFT to spectrally separate
the targets. The deramp function is centered about the mean centroid for the
data block with a slope determined by the Doppler rate. The reference function
is updated as a function of cross-track position to track the Doppler parameter
variation. The output image must be resampled from its natural range-Doppler
grid (fanshape) format to an orthogonal grid.

The output image following the forward FFT stage does not provide valid
data for all targets within the block. Targets at the edge of the output block
generally are degraded in both resolution and azimuth ambiguity to signal ratio
(AASR). To achieve a uniform data quality, some data is discarded and the
FFT blocks are overlapped at the cost of processing efficiency. The fraction of
data to be discarded becomes severe as the required resolution approaches the
full aperture resolution. To improve the efficiency, the FFT (i.e., the processing
block) can be shortened at the cost of azimuth resolution (assuming the deramp
function is applied to all data within the processing block).

For radar systems with a small TB product, where the range curvature is
essentially negligible, the SPECAN algorithm presents a computationally
efficient method for azimuth correlation. To assess the computational complexity
of the SPECAN algorithm (Fig. 9.4a), we divide the azimuth correlation into
processing steps and evaluate the number of computations per input sample as
follows:

1. Azimuth N_az-point reference function generation
   4/n_u real multiplies
   1/n_u real adds
   2/n_u cosine operations

where n_u is the update interval in range samples times the update interval in
azimuth blocks.

Figure 9.4 Functional flow of (a) the SPECAN algorithm and (b) the frequency domain fast convolution algorithm.
2. Reference function multiply

   1 complex multiply

3. Forward Naz-point complex FFT

   (1/2) log2 Naz complex multiplies
   log2 Naz complex adds

4. Fanshape resampling, two four-point complex interpolations

   16 real multiplies
   12 real adds

Summing the total number of operations in Steps 1-4 above, the aggregate computational complexity in floating point operations (FLOP) for azimuth correlation with the SPECAN algorithm per sample input to the azimuth correlator is

   CSA = 7/nu + 5 log2 Naz + 34   (FLOP/sample)   (9.2.6)

where Naz = τcs fp is the azimuth block size and τcs is the coherent integration time.

For multiple block processing, typically the blocks will be overlapped, with the samples from the edges of the block discarded. The fractional block-to-block overlap is ΔN/Naz, where ΔN is the number of samples in the overlap region, and the multiblock azimuth correlator computational complexity is increased by the factor 1/(1 − ΔN/Naz). The maximum block size is limited by the requirement that the range curvature not exceed one range bin, which bounds the time bandwidth product

   TB = τcs BP ≤ 8c/(λ fs)   (9.2.8)

where BP = τcs |fR|. Thus Eqn. (9.2.8) gives the maximum TB, and therefore the maximum block size that can be used in the SPECAN algorithm, assuming the range curvature cannot exceed one range bin. For Seasat, where fs = 22.76 Msamples/s and λ = 0.235 m, the maximum TB is 449. The resulting coherent integration time is on the order of τcs = 0.95 s, which is equivalent to an azimuth resolution at a range R = 850 km of δx ≈ 14 m, as compared to 19.5 km for the real aperture resolution, 316 m for the unfocussed SAR processor, and about 6 m for the fully focussed synthetic aperture. For a system such as the ESA ERS-1, where λ = 5.6 cm and fs = 19 Msamples/s, the maximum TB = 2256, which results in a maximum τcs = 1.0 s, greater than the nominal full aperture observation time.

9.2.2 Frequency Domain Fast Convolution

Given the requirement for a high precision azimuth correlator that can produce imagery at an azimuth resolution near the fully focussed aperture ideal performance, spectral analysis algorithms are inadequate. The frequency domain convolution (FDC) algorithm, which consists of two one-dimensional matched filters (as described in detail in Chapter 4), provides a close approximation to the exact two-dimensional matched filter. This algorithm can be used for most spaceborne systems operating in the nominal strip imaging mode, assuming secondary range compression (SRC) is employed. For large squint angles (i.e., θs > 10°), an additional processing stage may be required (Chang et al., 1992). The modification entails performing the azimuth transform prior to application of the SRC.

The computational complexity of the FDC azimuth correlator given in Fig. 9.4b can be assessed as follows. Assuming Naz input samples constitute the azimuth processing block, the number of computations per data sample input to the azimuth correlator (for processing a single block of data) can be broken down as follows:

1. Naz-point complex forward FFT

   (1/2) log2 Naz complex multiplies
   log2 Naz complex adds
where nu is the cross-track update interval (in samples) times the along-track update interval (in blocks).

4. Reference function multiply

   1 complex multiply

5. Naz-point inverse FFT

   (1/2) log2 Naz complex multiplies
   log2 Naz complex adds

Summing the total number of operations in Steps 1-5 above, the aggregate computational complexity required for azimuth correlation with the FDC algorithm per input sample is given by Eqn. (9.2.9), where Laz is the azimuth reference function length in complex samples, given by Eqn. (9.2.10) for full aperture processing.

In Eqn. (9.2.9) we have not taken into account the efficiency factor of the azimuth correlator as given by Eqn. (9.1.19). Assuming that the raw data set to be processed is divided into azimuth blocks, Eqn. (9.2.9) gives the number of computations per input sample to process a single block. The efficiency factor determines the overlap between blocks, or equivalently the number of input samples that must be processed twice. Thus, for multiblock processing, the computational rate is given by

   C'FDC = CFDC/ga   (9.2.11)

Thus, for example, if the reference function length plus the block skew is 40% of the block size, then ga = 0.6 and 1.7 times as many computations per input pixel are required for multiblock processing as for processing a single block. We have also assumed that the squint angle is relatively small, such that the standard frequency domain convolution algorithm can be used. For larger squint angles, the algorithm must be modified to perform the forward azimuth FFT prior to the secondary range compression, thus requiring an additional two corner turns for the data and an additional complex multiply per sample.

9.2.3 Time Domain Convolution

The most precise approach for SAR correlation is the matched filter time domain convolution (TDC) algorithm. Conceptually it is the simplest algorithm for azimuth correlation; however, it is also the most computationally intensive. The TDC algorithm is capable of characterizing each sample in the echo data set by its exact Doppler parameters, and therefore theoretically the azimuth reference function contains no approximations as to the processing block size. In a time domain processor, each reference function can be exactly tailored to its position within the data set (Lewis et al., 1984). Thus, the algorithm can produce an exact matched filter for a given set of radar characteristics (ignoring random system errors). The computational complexity of the TDC azimuth correlator, shown in Fig. 9.4c, can be assessed in terms of the number of operations per data sample input to the azimuth correlator as follows:

1. Azimuth Laz-point reference function generation

   4 Laz/(Naz nu) real multiplies
   Laz/(Naz nu) real adds
   2 Laz/(Naz nu) cosine operations

where nu is the update interval in range samples.

2. Range migration correction, four-point complex interpolation

   8 real multiplies
   6 real adds

3. Time domain Laz-point complex convolution

   Laz complex multiplies
   Laz − 1 complex adds

where we have assumed the reference function is not updated as a function of along-track position within a data block, Naz.

Summing the total number of operations in Steps 1-3 above, the aggregate computational complexity for azimuth correlation using the TDC algorithm per azimuth correlator input sample is

   CTDC = 7 Laz/(Naz nu) + 8 Laz + 12   (FLOP/sample)   (9.2.12)

where Laz is given by Eqn. (9.2.10).

The time domain convolution algorithm is typically used only for very short apertures or in high precision processing applications where small volumes of data are being processed (e.g., as in a verification processor to produce the optimum quality image product).
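To make the operation counts above concrete, the following sketch evaluates the per-sample complexity of the SPECAN and TDC azimuth correlators, using the FLOP weights implied by Eqn. (9.2.6): 6 FLOP per complex multiply, 2 per complex add, and 1 per real multiply, real add, or cosine. The parameter values (Naz = 1024, nu = 4, Laz = 4099) are the Seasat figures used later in this section; treat the TDC total as illustrative of the step counts rather than a definitive restatement of the text's equations.

```python
import math

def c_specan(nu, n_az):
    # Eqn (9.2.6): reference generation (7/nu) + FFT (5 log2 Naz)
    # + reference multiply and fanshape resampling (34)
    return 7 / nu + 5 * math.log2(n_az) + 34

def c_tdc(l_az, n_az, nu):
    ref_gen = 7 * l_az / (n_az * nu)   # step 1: reference function generation
    interp = 8 + 6                     # step 2: four-point complex interpolation
    conv = 6 * l_az + 2 * (l_az - 1)   # step 3: Laz-point complex convolution
    return ref_gen + interp + conv

# Seasat: Naz = 1024 pulses, nu = 4, full aperture Laz = 4099 samples
print(round(c_specan(4, 1024)))    # 86 FLOP/sample
print(round(c_tdc(4099, 1024, 4)))
```

The ratio between the two totals (roughly 380:1 for these parameters) illustrates why the TDC is reserved for short apertures and verification processing.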
The time domain convolution is an extremely computationally intensive process, even with short reference functions. To illustrate the type of computational capability required for real-time azimuth correlation, we present the following example.

Example 9.1 For Seasat SAR, the digitized raw video data has the following characteristics:

   Nr = 6840 complex samples/range echo line
   Lr = τp fs = 760 complex samples/range reference function
   Tp = 1/fp = 607 µs

We have converted the Seasat real sampling frequency to complex samples by dividing by 2. After range compression the range line length is

   N'r = Nr − Lr = 6080 complex samples   (9.2.13)

[Figure 9.5 Comparison of the computational complexity of the time domain convolution and spectral analysis (deramp FFT) algorithms, with the ERS-1 and Seasat operating points indicated.]

The maximum SPECAN azimuth block size for Seasat follows from the coherent integration time:

   Naz = fp τcs = fp (TB/|fR|)^(1/2)

which gives Naz = 1538 pulses. From Example 9.1, the full aperture reference function is Laz = 4099 samples. For quarter aperture, four-look processing, Laz = 1025, which is less than the maximum block size constraint. Since the block must be a power of 2 less than Naz, we select

   Naz = 1024 pulses

Assuming nu = 4, from Eqn. (9.2.6) we get

   CSA ≈ 36 + 5 log2 Naz

The processor efficiency from Eqn. (9.1.19) is ga = 0.5; since we are performing multiblock processing, the computational complexity from Eqn. (9.2.9) and Eqn. (9.2.11) is approximately 86 FLOP/input sample.

The spectral analysis (deramp FFT) approach thus provides reasonable image quality for small time bandwidth product (TB) data sets such as the ESA ERS-1. To achieve the full azimuth resolution for larger TB data sets, either the time domain or the frequency domain convolution algorithms can be used. The time domain convolution is inherently more precise, but at an extremely large computational cost for spaceborne systems, since its computational complexity increases linearly with the number of pulses in the synthetic aperture. The frequency domain convolution provides a good compromise between throughput and image quality in that, for most systems, the image degradation is very small relative to TDC, while the computational requirements are on the order of the SPECAN algorithm.

9.2.5 Range Correlation

For the cross-track or range dimension processing we will only consider the frequency domain fast convolution algorithm. Similar to the azimuth correlation,
the range correlation consists of a forward transform, a complex reference function multiply, and an inverse transform. Since the range reference function changes very slowly as a function of fDC, the overhead from reference function generation is negligible. Thus the computations per input data sample can be broken down as follows:

1. Forward transform of N'R points, requiring

   (1/2) log2 N'R complex multiplies
   log2 N'R complex adds

2. Reference function multiply, requiring

   1 complex multiply

3. Inverse N'R-point transform, requiring

   (1/2) log2 N'R complex multiplies
   log2 N'R complex adds

The computational complexity for frequency domain fast convolution range compression per input pixel is therefore

   CrFDC = (6 + 10 log2 N'R)/gr   (9.2.16)

where gr is the efficiency factor for multiblock range correlation.

To calculate the efficiency factor in the range correlator, the number of processing blocks must first be estimated. Assume Nr complex samples per input range line, Lr complex samples per reference function, and a processing block size of N'R complex samples. The number of good points from each processed block is N'R − Lr + 1. Therefore, the number of processing blocks required is (Nr − Lr + 1)/(N'R − Lr + 1). Since we cannot process a fraction of a block, we must round up to the nearest integer, thus

   NB = Int[(Nr − Lr + 1)/(N'R − Lr + 1)] + 1   (9.2.17a)

where Int represents the integer operation. The range efficiency factor is given by

   gr = Nr/(NB N'R)   (9.2.17b)

In the above analysis we have assumed that the residual block fraction at the end of the range line is processed as a full block. If this fractional block is discarded, then NB is reduced by one. Alternatively, the fractional data block can be processed with a reduced size N'R and the range efficiency factor calculated as a weighted average of each gr, dependent on the block size.

Example 9.3 Again, consider the Seasat data set where

   Nr = 6840 complex samples
   Lr = 760 complex samples
   fp = 1646.75 Hz

Assuming we have a block size of N'R = 2048 samples,

   NB = Int(4.7) + 1 = 5

and

   gr = 6840/(5 × 2048) = 0.67

Therefore

   CrFDC = 173 FLOP/input sample

For real-time processing the range correlator must operate at

   RrFDC = Nr CrFDC fp   (9.2.18)
   RrFDC = 1.95 GFLOPS

The computational rate can be reduced by increasing the processor block size. If a processing block of N'R = 8192 were selected, then NB = 1 and gr = 0.83. The computational complexity becomes CrFDC ≈ 163 FLOP/input sample, and

   RrFDC = 1.83 GFLOPS

which is about a 6% improvement in the rate required relative to the smaller block. Since the computational load on the processing system for range correlation is dependent on the processor block size, the largest available block size should generally be selected, unless there is a large change in Doppler across a range line requiring an update in the reference function secondary range compression.
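The arithmetic of Example 9.3 can be reproduced with a short script; the expressions follow Eqns. (9.2.16)-(9.2.18) exactly as given above, and the small differences from the rounded values in the text come from carrying full precision.

```python
import math

Nr, Lr, fp = 6840, 760, 1646.75   # Seasat parameters from Example 9.3

def range_correlator(NR):
    # Eqn (9.2.17a): number of processing blocks (Int = truncation)
    NB = int((Nr - Lr + 1) / (NR - Lr + 1)) + 1
    gr = Nr / (NB * NR)                  # Eqn (9.2.17b): efficiency factor
    C = (6 + 10 * math.log2(NR)) / gr    # Eqn (9.2.16): FLOP/input sample
    R = Nr * C * fp                      # Eqn (9.2.18): real-time FLOP rate
    return NB, gr, C, R

NB, gr, C, R = range_correlator(2048)
print(NB, round(gr, 2), round(C), round(R / 1e9, 2))   # 5 0.67 174 1.96
```

Calling `range_correlator(8192)` gives NB = 1, gr ≈ 0.83, and about 1.83 GFLOPS, reproducing the larger-block case.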
9.3 SAR CORRELATOR ARCHITECTURES

Considering the large number of computations required in SAR processing, the selection of the correlator architecture requires careful analysis to ensure that the system throughput requirements are met. For example, we could take a straightforward approach and buy as many CRAY X-MP/4 computers as needed to do the job. Using the LINPACK benchmarks for a standard FORTRAN implementation, a single-processor X-MP/4 system performs 69 MFLOPS (Dongarra, 1988). Assuming that a network of CRAYs can operate at 100% efficiency, a real-time Seasat azimuth correlator using the FDC algorithm requires 48 CRAY X-MP/4 processors. If we used the TDC algorithm, we would need over 5300 CRAYs. Obviously, some optimization in the architecture, going beyond a network of general purpose computers, is required.

9.3.1 Architecture Design Requirements

The design process to determine the system architecture must consider more than just the basic computational rate of a machine (Hwang, 1987). Initially, a trade-off study should be performed to prioritize the relative importance of the system throughput versus flexibility. In other words, the more specialized we can make the processor to generate a single type of output with a similar set of processing parameters (i.e., block size, FFT length, range migration, etc.), the better we can tailor the architecture to achieve extremely high throughput. A second, equally important, consideration is the radiometric accuracy requirement. If high precision radiometric calibration is not required, we can for example consider fixed point arithmetic for the mathematical operations, or truncate the range correlator output prior to corner turn. If however a high precision output is required, a full floating point (or a block floating point) representation is needed, increasing the complexity of the correlator hardware. A third key design parameter is the resolution requirement. The resolution specification on the output image product not only impacts the number of computations per input data sample, as discussed in the previous section, but is also a key driver determining the required processor memory capacity.

To optimize the implementation of the azimuth correlator, an important parameter to consider is the fraction of computations that are FFT operations. This is shown in Figure 9.6 for the SPECAN and FDC algorithms. (The unfocussed SAR and the time domain convolution do not require FFTs.) For the frequency domain convolution, assuming the reference function length is 1-8 K samples, the fraction of FFT computations is over 80% of the total computations. For the SPECAN algorithm this fraction is over 50%. Therefore, the optimal architecture for implementation of these algorithms requires a highly efficient technique for performing FFTs. This will be addressed in detail in this section for each of the architecture designs.

[Figure 9.6 Plot of fraction of total computations in FFT as a function of azimuth reference function length (Laz = 2^n).]

We will categorize the various SAR correlator architectures into what we consider to be the three fundamental designs: (1) Pipeline; (2) Common Node; and (3) Concurrent Processor. There are a number of possible variations or combinations of these basic designs and we will address some of them with examples of real systems. For each architecture, the key design parameters to be considered are: (1) Peak I/O data rates; (2) Memory capacities; (3) Computational requirements per processor; (4) Reliability/redundancy of the design; (5) Maintainability/evolvability of the design; and (6) Complexity of the control system. These design parameters should be evaluated in conjunction with the current technology to factor into the trade-off analysis the relative cost of the hardware. For example, a memory requirement of 32 Mbytes is not especially stringent with current technology, considering that 4 Mbit chips are currently available. A typical cost per byte of RAM is on the order of 1/20 of a cent. Thus, a 32 Mbyte memory might cost $16 K. Conversely, if the architecture requires an I/O bandwidth of 100 MB/s, that forces a departure from standard data bus architectures (such as the VME bus), or even the newer fiber optic ring networks (FDDI), to say an (as yet) unproven HSC star architecture, which could be quite costly.
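The sizing arithmetic in the CRAY X-MP/4 example at the start of this section reduces to dividing a required rate by the per-machine benchmark. In the sketch below, the 69 MFLOPS LINPACK figure is from the text and the 1.95 GFLOPS requirement is the real-time Seasat range correlation rate from Example 9.3; the 3.3 GFLOPS azimuth figure is an assumption chosen only to reproduce the quoted 48-processor count, since the text does not state that rate explicitly.

```python
import math

CRAY_XMP4_FLOPS = 69e6   # single-processor X-MP/4, LINPACK (Dongarra, 1988)

def crays_needed(required_flops):
    # Idealized sizing: assumes the network of CRAYs runs at 100% efficiency
    return math.ceil(required_flops / CRAY_XMP4_FLOPS)

print(crays_needed(1.95e9))   # real-time Seasat range correlation alone
print(crays_needed(3.3e9))    # assumed real-time FDC azimuth requirement
```

Even the range correlator alone would need 29 such machines, which makes the case for specialized architectures plain.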
Perhaps the most important consideration that is overlooked by many system designers is that the hardware technology evolves faster than the software. Typically, new hardware (such as the high speed data bus architectures) will operate in only a very limited environment. Using such equipment in a custom designed SAR correlator could require a significant amount of software to be developed at the microcode level. The software drivers necessary to communicate with peripheral devices are a chronic problem for system engineers attempting to incorporate the latest state-of-the-art technology into their system. It is usually advisable when building an operational system to use equipment one version removed from the most recent release. The system should be designed such that technology upgrades can be incorporated within the basic structure, requiring a minimum amount of redesign.
9.3.2 Pipeline Arithmetic Processor

The optimal system architecture for achieving extremely high throughput SAR correlation is the pipeline machine. A generalized functional block diagram of a pipeline processor is presented in Fig. 9.7. The data is input to the processor from some type of storage device (e.g., a high density tape drive or the SAR sensor ADC). Each processor element (or functional unit) performs some type of operation on the data array {x1, x2, ..., xn} to generate a new array {A1(x1, ..., xn), ..., Am(x1, ..., xn)}, where each operation Ai may be performed on any or all of the input data samples. The pipeline consists of a series of these functional units, connected by a data bus. The movement of data and the arithmetic operations are controlled by a digital clock whose cycle time is compatible with the hardware elements comprising each unit. The pipeline is terminated by a second storage device whose I/O data rate requirements may be either higher or lower than the input device, depending on the functional operations applied to the data.

[Figure 9.7 Generalized functional block diagram of a pipeline processor.]

We can apply this generalized description of the pipeline processor to the SAR correlator as shown in Fig. 9.8. In this simplified diagram of a pipeline SAR processor, we first divide the processing task into modules that relate directly to the major stages of the SAR processing: (1) Range correlation; (2) Corner turn; (3) Azimuth correlation; (4) Geometric rectification; and (5) Multilook filtering. Each of these modules may be further broken into functional units. For example, the range correlator consists of a forward FFT unit followed
by a complex multiply unit followed by an inverse FFT unit. This architecture is optimal from a system throughput perspective since there is a dedicated hardware element for each stage of the processing. The aggregate computational rate of the system is the sum of the computational rates of each functional unit comprising the system, since when the pipeline is full all units are operating simultaneously.

Advanced Digital SAR Processor

A good example of an operational pipeline SAR processor is the Advanced Digital SAR Processor (ADSP) built by the Jet Propulsion Laboratory for NASA. This system, shown in some detail in Fig. 9.9, is a straight pipeline architecture consisting of custom designed digital units using commercial (off the shelf) integrated circuit (IC) chips. The system is capable of operating with a continuous input data stream at Seasat real-time data rates and generating a four-look detected output image that is written to a high density digital recorder. The system, completed in 1986, features a computational rate measured at over 6 GFLOPS when the pipeline is full. It consists of 73 boards (22 unique board designs) comprising two racks as shown in Fig. 9.10.

[Figure 9.9 Functional block diagram of the ADSP.]

The functional block diagram in Fig. 9.9 is detailed to illustrate the level of programmability required to provide the necessary flexibility such that the ADSP can be used by both the Spaceborne Imaging Radar (SIR) program and the Magellan (Venus radar mapper) program. The data flows through the main pipeline as indicated by the vertical lines. Horizontal lines entering functional units illustrate key control parameters to be passed to that unit. Some parameters, such as the FFT length or the weighting functions, are updated only once per processing run, while others, such as the interpolation coefficients or the azimuth reference function coefficients, must be dynamically updated during the processing run. Some of the key system characteristics are: (1) Range and azimuth FFT sizes are variable and can be set up to a maximum of 8 K complex (or 16 K real) samples; (2) Range cell migration memory can accommodate up to 1024 bins of range walk; (3) Azimuth correlator performs either the SPECAN algorithm or the frequency domain convolution (FDC) algorithm; and (4) Programmability of individual units permits flexibility in selection and update of processing parameters.

Some of the limitations of the pipeline architecture can also be seen in the ADSP. For example, the autofocus and clutterlock modules must operate in a feedback mode, performing the analysis on one block of data and applying the result to a following data block. In general, this type of feedback results in error which will degrade the image quality. However, due to the slowly varying nature of the Doppler parameters along-track (excluding the airborne SAR case), this error is generally small.
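The pipeline concept described in this section can be mimicked in software with generator stages: each "functional unit" consumes the previous stage's output line by line, so all stages conceptually operate at once when the pipeline is full. This is only a toy sketch; the stage names follow the module breakdown in the text, and the numeric work inside each stage is placeholder arithmetic, not actual SAR processing.

```python
# Toy pipeline: each stage is a generator, chained like hardware units on a bus.

def source(lines):
    for line in lines:          # stand-in for the input storage device
        yield line

def range_correlate(stream):
    for line in stream:         # stand-in for FFT + reference multiply + IFFT
        yield [2 * x for x in line]

def azimuth_correlate(stream):
    for line in stream:         # stand-in for the azimuth matched filter
        yield [x + 1 for x in line]

def multilook(stream):
    for line in stream:         # stand-in for detection and look summation
        yield sum(line)

raw = [[1, 2], [3, 4]]
pipeline = multilook(azimuth_correlate(range_correlate(source(raw))))
print(list(pipeline))  # [8, 16]
```

Because generators are lazy, each input line flows through every stage before the next line is drawn in, which is the software analogue of the clocked dataflow described above.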
The FFT board sets (8 boards/set) are identical, and there are four such board sets (two for the azimuth correlator and two for the range correlator). Similarly, the memory boards used in the corner turn and multilook memories (14 total) are designed identically. This introduces the possibility of sharing these boards among the various modules at the cost of throughput.

[Figure 9.10 The ADSP system shown with a Thorn-EMI high density recorder.]

Consider the architecture of Fig. 9.11, where the range and azimuth correlators share the same modules. Instead of a continuous data transfer, as in the straight pipeline operation, the data is input to the bent pipe correlator in bursts. Each burst is one processing block (Naz × Nr samples) of data. In the first pass of the data through the system, the complex interpolator module is bypassed, and the range reference function is read into the reference function multiplier unit. The range compressed output is stored in RAM until range processing of the data block is complete. The matrix transpose of this data block is then fed back into the correlation module, which is reset for azimuth compression. The complex interpolator can perform range migration correction and slant-to-ground range correction in the same step, or alternatively it can output the slant range imagery. The azimuth compressed output is again stored in RAM until the block processing is complete. The feedback loop is then switched, transferring the processed data block to the multilook module, while the next block of data is input to the correlation module for range compression.

[Figure 9.11 Functional block diagram of the bent pipe correlator, with shared correlation, corner-turn, and multilook modules.]

The correlator design described above is just one example of how a flexible pipeline design could be used for SAR processing. In general, this approach is less expensive in terms of the number of digital boards required to implement the correlator. However, it does require a more complex control system to switch the data paths, and it is significantly slower than the straight pipeline architecture. The Alaska SAR Facility correlator was originally planned to be a bent pipe design. However, a trade-off study of cost versus performance indicated that the straight pipeline was the optimal approach.

One performance compromise is that the linear FM frequency domain representation does not replicate the Fourier transform of the time domain chirp. Other performance compromises, such as the number of bits used in the computations and flexibility in updating parameters (e.g., antenna patterns), are characteristic of most pipeline systems where precision and flexibility are traded for speed.

As a follow-on to the ADSP, the JPL group has built a second pipeline processor which is installed at the University of Alaska in support of processing the J-ERS-1, E-ERS-1, and Radarsat data to be received at the Fairbanks ground station. As might be expected, this system, completed in 1990, is more compact, using less than half the number of ICs at less than 1/3 the power consumption. This saving derives primarily from the utilization of low power CMOS technology and the larger capacity (1 Mbit) memory chips. The Alaska SAR Facility is described in detail in Appendix C.

Dynamically updated coefficients (such as those used in the ADSP) can greatly complicate the control due to the short interval available for coefficient updates. In fact, in most systems it is the complexity of the control system that is the key factor limiting the throughput of the pipeline.

[Figure 9.12 Functional block diagram of common node processor architecture.]
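The bent pipe data flow just described (one correlation module used twice, with a corner turn in between) can be sketched in a few lines. This is a hedged illustration, not the Fig. 9.11 hardware: the block size and the reference functions are arbitrary stand-ins, and the range migration interpolation step is omitted.

```python
import numpy as np

def correlate_lines(block, ref):
    # One pass through the shared correlation module:
    # forward FFT, reference multiply, inverse FFT along each row.
    n = block.shape[1]
    return np.fft.ifft(np.fft.fft(block, axis=1) * np.fft.fft(ref, n), axis=1)

block = np.arange(16, dtype=complex).reshape(4, 4)   # one Naz x Nr burst
range_ref = np.array([1.0, -1.0])                    # stand-in range reference
azimuth_ref = np.array([0.5, 0.5])                   # stand-in azimuth reference

range_done = correlate_lines(block, range_ref)       # pass 1: range compression
image = correlate_lines(range_done.T, azimuth_ref)   # corner turn + pass 2: azimuth
print(image.shape)
```

The transpose between the two calls is the corner turn; in the hardware it is a RAM readout in the orthogonal direction rather than an explicit copy.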
9.3.3 Common Node Architecture
A more traditional architecture, generally used for implementing a non-real-time SAR signal processor, is the common node architecture. A functional block diagram of this architecture is illustrated in Fig. 9.12. Essentially, in this architecture all data transfers pass through a common node or data bus to which are attached storage devices, computational elements, and a control system. Input data transfer can be via the host (control) computer or via direct memory access (DMA) ports located on the computational elements (CEs). These DMA ports permit data transfer directly from an external device into the CE memory without passing through the host CPU memory. The common node architecture in its simplest form would be an array processor, or a digital signal processor (DSP) board, interfaced to a host computer via an external bus (Davis et al., 1981). A more advanced configuration (such as the IBM common signal processor) might consist of multiple custom FFT units or arithmetic processors operating in parallel, connected by a high speed switch to route data between units when a process is complete.

The prime advantage of a common node architecture over a pipeline configuration is its flexibility to adapt to the specific processing requirements of a particular data set. These systems are predominantly software based with the bulk of the software residing in the host CPU. For example, algorithm modifications to reconfigure the system to process a new mission data set are relatively easy, since a high level operating system is available to program the interface controller boards. As the computational demand or the required throughput increases, additional computational units can be added to the system without a major reconfiguration. A further benefit of this architecture is that the system can be redundantly configured such that failures result in graceful degradation in the performance of the system.

The key disadvantage of this architecture is the data transfer node. Given that the system is configured with a single high speed switch or computer bus, this node does represent a single-point failure in the system. Additionally, for extremely high throughput SAR correlator applications, the data rate through this node can become the limiting factor in system performance. To illustrate this point we present the following example.

Example 9.4 Consider the common node architecture of Fig. 9.13a. To achieve real-time throughput, concurrent azimuth and range correlation must be performed in separate computational units. For Seasat, the real-time input data rate is

   r1 = 2Nr × fp = 2 × 6840 × 1646.75 = 22.53 MB/s

assuming the 5 bit real data stream is converted to a complex 8 bit I, 8 bit Q representation in the input interface. This data is transferred via the node to
the range correlator module. Assuming both the reference function multiply and forward and inverse transforms are performed within the module, the output data rate is r2 = 8fp(Nr − Lr) ≈ 80 MB/s, where we have allocated 8 bytes for the complex floating point representation. We have similar transfer rates into and out of the corner turn memory and the azimuth correlator before output to the HDDR. The aggregate data rate through the node is therefore

   rT = r1 + 3r2 = 263 MB/s

[Figure 9.13a Common node SAR processor architecture with computational units grouped according to processing sequence.]

To reduce this aggregate rate, each module can pack the data before transferring it across the node and decode it before the next processing stage. A convenient representation is (9,9,6), where 9 bits are allocated to each of the real and imaginary components of the mantissa and 6 bits to a common exponent. This type of representation (which is used in the ADSP) adds only a relatively small distortion noise, but it does put an additional burden on each signal processing module to pack and unpack the data.

Example 9.5 Assuming a (9,9,6) complex data representation is used, the real-time Seasat I/O rate through a single node can be determined as follows:

   r1 = 22.53 MB/s
   r2 = 3fp(Nr − Lr) = 30.0 MB/s

therefore

   rT = r1 + 3r2 = 112.5 MB/s   (9.3.2)

which approaches the capability of state-of-the-art technology using the HSC network architecture.

If we now consider the configuration shown in Fig. 9.13b, where all the FFTs are performed in one module and the complex arithmetic (i.e., interpolations, multiplies) is performed in a separate module, four additional data transfers relative to Eqn. (9.3.2) are required for the azimuth and range correlation. Therefore, even with the (9,9,6) data representation, the aggregate data rate for real-time Seasat processing is

   rT = r1 + 7r2 = 232.5 MB/s

[Figure 9.13b Common node architecture with computational units grouped according to type.]
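Examples 9.4 and 9.5 reduce to a few lines of arithmetic; the script below reproduces the quoted node rates (to within the rounding used in the text, which carries r2 as exactly 30.0 MB/s).

```python
Nr, Lr, fp = 6840, 760, 1646.75   # Seasat parameters from Example 9.1

r1 = 2 * Nr * fp / 1e6            # input: 8-bit I, 8-bit Q -> 2 bytes/sample
r2_float = 8 * fp * (Nr - Lr) / 1e6   # 8-byte complex floating point samples
r2_996 = 3 * fp * (Nr - Lr) / 1e6     # (9,9,6) packing -> 3 bytes/sample

rT_float = r1 + 3 * r2_float      # Fig. 9.13a grouping, floating point
rT_996 = r1 + 3 * r2_996          # Fig. 9.13a grouping, (9,9,6) packed
rT_bytype = r1 + 7 * r2_996       # Fig. 9.13b grouping, (9,9,6) packed

print(round(r1, 2), round(rT_float), round(rT_996, 1), round(rT_bytype, 1))
# 22.53 263 112.6 232.8
```

The factor-of-two-plus reduction from 263 MB/s to about 113 MB/s is what makes the (9,9,6) packing worth its pack/unpack overhead.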
464 THE SAR GROUND SYSTEM 9.3 SAR CORRELATOR ARCHITECTURES 465
British Aerospace of Australia SAR Processor Design
An example of the common node architecture is the E-ERS-1 correlator being
developed by British Aerospace of Australia. That design, shown in Fig. 9.14,
utilizes an APTEC IOC-24 as the data transfer node with a MicroVAX II as
the host CPU and custom processor elements (PEs) to perform the bulk of the
numerical computations.
For this system a 10:1 slowdown is planned from the real-time E-ERS-1
data rate. This translates into an input data rate r_1 = 2 MB/s assuming the
data is unpacked into byte format. The data for each transfer is buffered in the
IOC-24; thus the aggregate data rate must include both inputs and outputs to
the IOC. Since the corner turn is performed in the IOC local memory, one
input/output transfer pair is eliminated. The aggregate data rate through the
IOC is therefore given by

(9.3.3)

Since the correlator operates at a fractional real-time throughput rate, q_t = 1/10,
Eqn. (9.3.1) becomes

(9.3.4)

assuming all arithmetic uses a 16 bit complex integer format. Inserting the
following values
SIMD Processor Arrays
Single instruction multiple data (SIMD) systems are parallel processors which
operate synchronously under the same control unit. Physically the processor
elements (PEs) can be connected in any communication topology. For example,
the MPP is a two-dimensional (planar) array where each PE can transfer data
only to its four nearest neighbors. Conversely, the Connection Machine is an
n-cube topology where any PE can be connected to n other PEs according to
some predefined configuration that may be optimal for a given application
(Hillis, 1985).
The SAR correlation algorithm has been implemented on the MPP by a
group at the GSFC (Ramapriyan et al., 1984). A functional block diagram of
this system is shown in Fig. 9.17. The array unit (ARU) consists of a 128 x 128
(i.e., 16,384) array of PEs, each with its own 1024 bit local memory. The cycle
time is 100 ns (i.e., 10 MHz clock); however, each PE can perform only bit
serial arithmetic. The result is that this system is highly efficient for fixed point
operations, but its performance is dramatically reduced for floating point
operations. For example, the MPP is measured to perform 1.86 GOPS for
8 bit integer multiply operations, but only 39.2 x 10^6 complex multiplies per
second (Schaefer, 1985).
Data input (output) occurs through 128 bit wide ports at the 10 MHz clock
rate with 1 bit flowing to (from) each PE in the first (last) column of the array.
The array is controlled by an array control unit (ACU), which is microprocessor
based. The data management and application software are housed in the
program data management unit (PDMU), which is a VAX 11/780 minicomputer.
The PDMU executes the programs that are developed in the host computer.
In the current configuration, both the host and the PDMU functions are handled
by the VAX system. Due to the limited size of the staging memory (SM) of
16 MB, the SAR data is processed in blocks.
Assuming there are no data I/O transfer bottlenecks, and that the corner
turn could be managed such that this operation is overlapped with the actual
computations, the MPP has the processing power to achieve approximately
1/20 real-time Seasat processing with floating-point computations. If the
radiometric accuracy of the output were not a prime consideration, 8 bit
arithmetic could be employed to achieve a rate about one half real-time. The
algorithm implemented by Ramapriyan et al. (1984) used 16 bit arithmetic. It
was possible to perform 16 4K complex FFTs simultaneously by partitioning
the array into 16 32 x 32 subarrays. The actual PE control software to perform
the radix-2 butterfly across 1024 1 bit processors is beyond the scope of this
text, but suffice it to say that the overhead from this type of programming
complexity has severely limited the use of SIMD architectures for operational
SAR correlation.

MIMD Processor Arrays
The multiple instruction multiple data (MIMD) processors can be categorized
into either shared memory, tightly coupled machines (e.g., Cray Y-MP/4), where
a single bus is shared by both the processors and the memory, or distributed
memory multicomputer systems, where each processor node has local memory
and is interconnected by some topology (e.g., ring, hypercube, etc.). In this
section we will address only the latter type of MIMD architecture. A number
of MIMD topologies have been created for specific processing applications,
such as the BBN butterfly switch (BBN Labs, 1986), where the arithmetic
processors are arranged to access other processors' local memories to efficiently
execute the FFT operation (among other signal processing tasks). As previously
discussed, both the communication efficiency and the program complexity are
major concerns in utilizing this type of architecture for SAR processing.
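The subarray partitioning used on the MPP, 16 independent 4K-point FFTs
executed in lockstep, can be illustrated conceptually with a batched transform,
where a single vectorized "instruction" is applied across all partitions at once.
This is a NumPy analogy, not MPP code:

```python
import numpy as np

# Conceptual sketch of SIMD-style batching: 16 independent 4K-point complex
# FFTs executed by one vectorized call over a stacked data set, analogous to
# partitioning the 128 x 128 PE array into 16 32 x 32 subarrays.
rng = np.random.default_rng(0)
batch = rng.standard_normal((16, 4096)) + 1j * rng.standard_normal((16, 4096))

# One call transforms all 16 lines "simultaneously".
spectra = np.fft.fft(batch, axis=1)

# Equivalent to processing the 16 partitions one at a time.
reference = np.array([np.fft.fft(line) for line in batch])
print(np.allclose(spectra, reference))  # -> True
```

On a true SIMD machine the per-PE control overhead mentioned above (bit-serial
radix-2 butterflies) dominates; the batching idea itself is the same.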
Figure 9.19 Functional block diagram of SAR ground data system: (a) Top level organization;
(b) Details of post-processing subsystem.

architectural trade-offs in the post-processing system. The details of the
post-processing algorithms were presented in Chapters 7 and 8.
In many SAR processing systems, the radiometric and geometric correction
procedures are not functionally separate from the SAR correlation process. In
fact, most of these operations can be incorporated into the SAR correlation
processing chain without additional passes over the data set. The functional
breakdown between correlation processing and post-processing assumed here
is just one possible design and is not necessarily optimal for the computational
performance aspects of the system. However, it does provide for maximum
flexibility in terms of the variety of output product types that can be produced.
A SAR processing system dedicated to a single application or user group may
combine a number of these processing steps with the range and azimuth
compression, since the variety of products is not required. Some of these
trade-offs were previously discussed in Chapters 7 and 8.

9.4.1 Post-Processing Requirements
The post-processor design depends on the data rate and data volume output
from the SAR correlator, the variety and accuracy requirements for the various
product types, and, perhaps most importantly, the precision required in the
computations. In our analysis, we will assume the SAR correlator produces
only single-look, complex, full resolution image data without any geometric
resampling or radiometric corrections applied. All multilook filtering and
detection operations are performed in the post-processor. In this formulation,
we move all output product options to the post-processor, resulting in a
correlator output that is of a single type, thus simplifying the archive. The
correlator processing is also reversible, allowing us to recover (most of) the raw
data by applying the inverse of the compression filters. This would permit an
archive of only the single-look image data without retention of the Level 0 raw
data. (To be fully reversible we must retain all partially filtered data, i.e., the
reference function length in each dimension, and perform full floating point
computations throughout the correlation.) The volume of data to be archived,
the location of the archive relative to the SAR correlator, and the quality of
the original raw data set (which indicates the amount of reprocessing required)
are key factors in determining at what level of data product the permanent
archive is to be maintained.

Data Volume and Throughput
For spaceborne SAR systems, such as SIR-C or E-ERS-1, the acquired data
volume is almost always constrained by the downlink data rate, r_DL. The range
line length in complex samples (ignoring the overhead from the ancillary data
headers) is given by

N_r = r_DL/(2 n_b f_p)    (9.4.1)

where n_b is the quantization. In Eqn. (9.4.1) we have assumed that the onboard
digital system time-expansion buffers the downlink data across the entire
interpulse period. After range compression, the number of good samples per
range echo line is given by

N_g = N_r - τ_p f_s    (9.4.2)

where τ_p is the pulse duration and f_s is the complex sampling frequency.
Assuming each data acquisition period (datatake) is long relative to the
azimuth reference function length, i.e.,
476 THE SAR GROUND SYSTEM
9.4 POST-PROCESSOR SYSTEMS 477
where T_dt is the datatake duration, then we can write the correlator instantaneous
output data rate as

r_oo = n_u N_g f_p  (bytes/s)    (9.4.3)

where n_u is the number of bytes per pixel (e.g., for a 64 bit complex representation
n_u = 8). Substituting Eqn. (9.4.1) and Eqn. (9.4.2) into Eqn. (9.4.3), we get

r_oo = q_d n_u [r_DL/(2 n_b) - τ_p f_s f_p]    (9.4.4)

where q_d is the instrument duty cycle (i.e., the fraction of total time that the
SAR is operating). For a real-time processing system Eqn. (9.4.4) specifies the
input data rates that the post-processor must be capable of processing.

Example 9.6 Consider the following Seasat parameter set

q_d = 50% duty cycle         τ_p = 33.4 µs
n_u = 8 bytes/pixel          n_b = 5 bits/sample
f_p = 1646.75 Hz             f_s = 22.77 Msamples/s
r_DL = 112.7 Mbps

(Note that r_DL for Seasat, which had an analog downlink, represents the output
data rate from the ground digital units.) From Eqn. (9.4.4), the corresponding
correlator output data rate is

r_oo = 40.1 MB/s

which is over six times the correlator input data rate.
The net increase in data relative to the correlator input stems from the
dynamic range as a result of processor compression gain. A data rate reduction
between the correlator and the post-processor can be achieved by coding the
8 byte complex floating point representation. It has been shown that for most
applications a code using 1 byte for each of the real and imaginary mantissa,
and 1 byte for a common exponent, adds negligible additional distortion noise
(van Zyl, 1990).
Assuming some fraction of the correlator output is used exclusively for
analysis as single-look, complex products, the post-processor input data rate
can be written as

(9.4.5)

The post-processor size also depends on the number of operations that are to
be applied to each input data sample. Since these correction algorithms
depend on system characteristics, such as the sensor stability over time, the
platform ephemeris and attitude accuracy, and the frequency and type of internal
calibration measurements, the number of operations could range from only a
few to several hundred per pixel, depending on the system stability. For this
reason, we emphasize the methodology for scoping the size of the post-processor,
followed by specific examples for a quantitative evaluation of the computational
rate.

9.4.2 Radiometric Correction
The radiometric calibration process consists of evaluation of both the internal
and external calibration data and generation of the calibration correction factors.
These factors are then used to correct the image data, thus establishing a
common basis for relating the pixel data number representation to the target
backscatter coefficient. In general, we can define a radiometric calibration and
image correction procedure as consisting of the following steps:

1. Internal calibration data evaluation;
2. External calibration data evaluation;
3. Generation of calibration correction factors;
4. Radiometric correction of image data.

The internal calibration data includes: (1) engineering telemetry used to assess
system gain/phase errors or drift in the operating point of the system; (2)
receive-only noise power; and (3) calibration loop data such as injected
calibration tones or leakage pulses (e.g., chirps). The external calibration data
consists of images of point target calibration devices or distributed homogeneous
target sites.
For this analysis we assume that the calibration data evaluation in Steps 1
and 2 is performed offline by a dedicated calibration analysis workstation. This
is a reasonable assumption since a significant portion of the analysis may involve
operator interaction to select targets and interpret the telemetry data. Additionally,
much of the analysis is performed only occasionally since the time constants
for variation are large relative to the sampling period and the point target sites
are typically observed infrequently.
In its simplest form, the radiometric correction factor is a scalar array that
varies as a function of cross-track image pixel number. This correction factor
is dependent on the incidence angle, the slant range, the antenna pattern
projected into the image plane, and the sensitivity time control gain.
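Example 9.6 can be verified numerically from Eqn. (9.4.1), Eqn. (9.4.2), and
Eqn. (9.4.4); a minimal sketch with our own variable names:

```python
# Sketch: Seasat correlator output data rate, per Eqn. (9.4.4).
q_d  = 0.5          # instrument duty cycle
n_u  = 8            # bytes per output pixel (64-bit complex)
n_b  = 5            # bits per raw sample
f_p  = 1646.75      # PRF, Hz
f_s  = 22.77e6      # complex sampling frequency, samples/s
tau_p = 33.4e-6     # pulse duration, s
r_dl = 112.7e6      # downlink data rate, bits/s

n_r = r_dl / (2 * n_b * f_p)   # complex samples per range line, Eqn. (9.4.1)
n_g = n_r - tau_p * f_s        # good samples after range compression, Eqn. (9.4.2)
r_oo = q_d * n_u * n_g * f_p   # duty-cycle-averaged output rate, bytes/s
print(round(r_oo / 1e6, 1))    # -> 40.1 (MB/s)
```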
If we assume that the system is stable over some time period after which a new
correction factor must be derived, the correction as applied to the amplitude
data is

K_r(I) = {sin η(I) [R(1) + cI/(2 f_s)]^3}^(1/2) / [G^2(I) G_STC(I)]    (9.4.6)

where R(1) is the slant range to the first image pixel, η(I) is the incidence angle
at cross-track image pixel I, G(I) is the antenna pattern projected into the
image plane, and G_STC(I) is the sensitivity time control gain as a function of
time (sampling interval).
Typically, it is reasonable to assume that K_r(I) is independent of (slow) time
over scales of 10-15 s (which constitute an image frame), with the exception
of the roll angle rate. Changes in the roll angle will cause the antenna pattern
modulation and the incidence angle to change relative to the sampling window by

Δn_r = 2 ṙ R f_s tan η / c  samples/s

where ṙ is the roll angle rate. Assuming a maximum shift Δn_max is acceptable
before a K_r update, the update interval is

Δt_u = Δn_max c / (2 ṙ R f_s tan η)    (9.4.7)

Consider as an example the Shuttle Imaging Radar for η = 45°, f_s =
22.5 MHz, ṙ = 0.033°/s, and R = 300 km. Assuming we update at Δn_max = 1
pixel, the update interval is Δt_u = 0.025 s. For f_p = 1400 Hz, we must update
every 35 range lines, which is about 600 updates for each 15 s data set. Rather
than generate a new correction factor for each update, the K_r(I) array can
be extended such that it is larger than the actual swath width. The updates
are then accomplished by simply shifting the array without any additional
computations. The assumption here is that the antenna pattern as projected
into the image plane does not change significantly over the range of roll angles
within a 15 s frame.
Given that the above assumptions are valid, only a single K_r(I) is required
for an image frame and the computational rate for generation of the correction
factor is negligible. Since, in this case, the radiometric correction is just a scalar
multiply applied to each complex pixel, the computational complexity is C_RC =
2 FLOP/input pixel, and therefore the computational rate is

R_RC = r_pi C_RC

where r_pi is the post-processor input rate in complex samples. For Seasat
real-time processing R_RC ≈ 10 MFLOPS for the radiometric correction.
Assuming the calibration data evaluation is performed off-line prior to the
correlation processing, the correction factor K_r(I) could be precalculated and
used to scale the azimuth reference function, thus eliminating the additional
computations required in the last stage. This approach requires that the correction
array update interval, Δt_u, be greater than the synthetic aperture period. A final
point is that, in general, the calibration correction is a two-dimensional complex
filter function. The radiometric correction stage can be used as a second filtering
pass over the data to correct for mismatch in the azimuth and range reference
functions due to Doppler parameter estimation errors or phase and amplitude
errors across the system bandwidth. The nature of these correction filters will
depend on the characteristics of the image point target response function; if
the data is dispersed along the range and azimuth dimensions, two one-
dimensional filters may be adequate. However, if the data is skewed, either a
resampling step or a two-dimensional filter would be required. An additional
post-processor filtering stage could add an additional 50-100 FLOP per sample
depending on the filter size. If the system errors are deterministic, the correction
filters could be incorporated into the range and azimuth compression reference
functions, thus eliminating the need for the correction filter.

9.4.3 Geometric Correction
Inherent in the SAR data is geometric distortion caused by the side looking
geometry, surface terrain, system sampling errors, and platform velocity
variation. Assuming the location of any pixel can be determined relative to a
fixed earth grid (e.g., UTM, Polar Stereographic), the images can be geometrically
rectified by performing a two-dimensional resampling (Siedman, 1977). The
pixel locations can be derived by tiepointing (either operator assisted or
automated), or predicted using a model for the sensor imaging geometry and
the target elevation. The latter approach requires precise knowledge of platform
(actually antenna phase center) position and velocity during the imaging period.
It should be noted that the geometric fidelity of the resampled image product
is not dependent on knowledge of the platform attitude. If the range and Doppler
information inherent in the echo data is used in the target location, as described
in Chapter 8, then the value of f_DC reflects the antenna yaw and pitch angles,
and the range gate is independent of roll angle. Therefore, the only significant
error contributors in the target location procedure are the satellite orbit
determination uncertainty and the target elevation relative to the reference geoid.
It has been shown that the aforementioned tiepointing procedure can be
used to geometrically rectify a SAR image using a polynomial warping algorithm
(Naraghi et al., 1983). However, this approach is ineffective for images with
significant relief due to the local distortion caused by foreshortening and layover
effects. A more precise technique, proposed by Kwok et al. (1987), uses only a
few point targets of known position (latitude, longitude, elevation) to refine the
accuracy of the ephemeris using the SAR range and Doppler equations.
It requires a minimum of two targets distributed in range to provide incidence
angle diversity and two targets in azimuth to determine the along-track scale
errors. This approach is described in detail in Chapter 8. The tiepoint selection
and image registration are performed offline in the calibration analysis
workstation, and therefore do not contribute to the post-processor computational
rate requirement.

Geometric Correction Procedure
For a spaceborne platform with a relatively small amount of drag, the position
errors (Δx, Δy, Δz) derived from a single site are highly correlated over a small
arc. Additionally, since the position and velocity errors are also highly correlated
with each other, the corrected platform ephemeris can be repropagated, thus
allowing all image data for that arc to be geometrically calibrated. The geometric
correction procedure is as follows:

1. Point target analysis;
2. Orbit refinement and repropagation;
3. Generate location vs. pixel number grid;
4. Register image with digital terrain map (repeat 2 and 3);
5. Resample image to uniform grid.

Steps 1-4 are typically implemented offline in a calibration analysis
workstation. Determination of the point target locations involves some operator
interaction, therefore these operations are adjunct to the high speed processing
chain. To register the image with a digital elevation map (DEM), a small area
(e.g., 512 x 512 points) of the DEM is projected into the SAR geometry (e.g.,
rotated to the SAR ground track and illuminated according to some backscatter
model) and cross-correlated with the SAR image. This registration step is used
to derive the residual target location error after all systematic corrections are
made. Steps 2 and 3 are repeated following the image to map registration
process. Steps 2, 3, and 4 generally require no operator interaction. The
resampling process in Step 5 above is typically designed to produce one of three
geometrically corrected products (Schreier et al., 1988):

1. Ground plane projected to smooth geoid in an azimuth/range grid;
2. Geocoded to geoid model in an earth fixed grid;
3. Geocoded to terrain elevation map in an earth fixed grid.

We will analyze the computational complexity of each geometric resampling
procedure in the following subsections.

To resample the image to a uniform grid in the
azimuth and range directions, we first generate a grid of location versus pixel
number as discussed in the previous section. The resampling process is as follows:

1. Generate a resampling index in azimuth direction using 4-point interpolation,
requiring
4 real multiplies
3 real adds
2. Perform azimuth interpolation using N_t points (e.g., sinc or cubic spline
interpolator), requiring
N_t real multiplies per I and Q
(N_t - 1) real adds per I and Q
3. Repeat Steps 1 and 2 for the range dimension.

The aggregate number of floating point operations per complex input pixel is

FLOP/complex input pixel    (9.4.9)

where g_or, g_oa are the oversampling factors in range and azimuth respectively.

Example 9.7 Assume for the single-look Seasat image, where δx ≈ 6 m, that
a uniform output spacing of δx_az' = 3.125 m is selected for the azimuth dimension
and δx_gr' = 12.5 m for the ground range dimension. The input slant range
spacing is

δx_s = c/(2 f_s) = 6.58 m

resulting in an average ground range spacing of

δx_gr = c/(2 f_s sin η) = 19.2 m

where a mean incidence angle across the swath of η = 23° is assumed. The
range oversampling factor is therefore

g_or = δx_gr/δx_gr' = 1.54

Inserting this value into Eqn. (9.4.10) we get δx_az ≈ 4.07 m and the
corresponding azimuth oversampling factor g_oa = δx_az/δx_az' = 1.30.

If multilook detection is performed prior to geometric correction, the azimuth
input pixel spacing is reduced by the number of looks. However, if the output
spacing requirement is not reduced, the number of computations remains the
same. Any resampling operation performed after detection should use intensity
data to preserve the first and second order statistics (Quegan, 1989).

Geocoding to a Smooth Geoid. To geocode the correlator output into a
standard map projection we perform a three-pass resampling process, where

g_oa = 1/cos ρ    (9.4.12)

where ρ is the rotation angle. Since a third pass must also be added to the
number of computations, the aggregate number of floating point operations
per input sample for geocoding to a smooth geoid is given by

(9.4.13)

where g_ua, the azimuth undersampling factor, is given by

Example 9.8 Assume that the post-processor input is Seasat single-look,
complex imagery rotated 45° relative to grid north. The output image is to be
geocoded to a uniform 4 meter spacing, i.e.,

(9.4.11)

From Eqn. (9.4.13) the computational complexity is

C_GC = 261 FLOP

per complex input pixel. Assuming an input data rate of 5 Msamples/s, the
computational requirement for the geocoding from Eqn. (9.4.5) is

R_GC = r_pi C_GC = 1.3 GFLOPS

Example 9.9 Assume we have a one tenth real-time Seasat processor, such
that the SAR correlator output data rate is

r_co = 0.5 Msamples/s

and that only 50% of the correlator output is to be geocoded. The post-processor
input data rate is reduced to r_pi = 0.25 Msamples/s. If the data is first L-look
averaged, such that δx_az' = L V_sw/f_p, requiring 4 FLOP per sample, the data
rate is reduced to r_pi^L = r_pi/L. Assuming L = 4, with an output pixel spacing
of δx_l = δx_p = 12.5 m, we get the following oversampling factors

g_oa = 1.30
g_or = 1.54
g_ua = 1.0

where we have assumed ρ = 30°. The computational rate per detected L-look
input sample is given by

(9.4.14)

where the superscript L refers to the look averaging. From Eqn. (9.4.14) for
N_t = 4, C_GC = 106 FLOP/sample. For r_pi = 0.25 Msamples/s and L = 4

R_GC^L = 4.3 MFLOPS

which can be handled by most scientific workstations augmented with an array
processor or a floating point accelerator.

Geocoding with Terrain Correction. The following example illustrates the
number of computations required.

1. Convert the DEM from a geodetic to a geocentric system (Heiskanen and
Moritz, 1967):
11 FLOP per DEM sample, assuming the radius of curvature varies slowly
relative to the target elevation (which is true for DEMs greater than
250,000:1 scale);
2. Resample the DEM to the required image output grid:
2N_t + 2 FLOP in each of the line and sample directions per output sample;
3. Determine the foreshortened target displacement:
approximately 25 operations per output sample to determine both azimuth
and range components, Eqn. (8.3.22) to Eqn. (8.3.28).

The computational complexity for the DEM resampling operations of Steps 1
and 2 is

(9.4.15)

where g_om is the map oversampling factor. We have assumed in Eqn. (9.4.15)
that no rotation of the map is required and that the input and output DEM pixel
spacing is the same in both the line and pixel dimensions (e.g., northing and
easting for a UTM projection).

= 7.6 MFLOPS

The computational complexity for the image resampling is given by
Eqn. (9.4.13), with the additional calculations required in Step 3 to determine
the foreshortening displacement. Thus the computational complexity for the
geocoding with terrain correction is

(9.4.16)

Assuming the oversampling factors of Example 9.9, the image resampling
complexity can be estimated from Eqn. (9.4.16) as

C_GC = 153 FLOP/complex input pixel

If an L-look detection operation is performed prior to geocoding,
486 THE SAR GROUND SYSTEM
9.5 IMAGE DATA BROWSE SYSTEM 487
Assuming a one tenth real-time rate, four looks, and one half of the data
geocoded (i.e., r_pi = 0.25 Msamples/s, L = 4), the computational rate is

R_GC^L = 5.8 MFLOPS
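The separable resampling procedure described above (index generation, an
azimuth interpolation pass, then the same in range) can be sketched with a
4-point cubic interpolator. This is an illustrative implementation with our own
helper names, substituting a uniform index grid for the location-grid lookup:

```python
import numpy as np

def cubic_interp(line, x):
    """4-point cubic (Catmull-Rom) interpolation of `line` at fractional
    positions `x`; cost is ~N_t multiplies per output sample (N_t = 4)."""
    x = np.clip(x, 1.0, len(line) - 2.0001)   # keep a full 4-sample neighborhood
    i = np.floor(x).astype(int)
    f = x - i
    w0 = 0.5 * (-f**3 + 2*f**2 - f)
    w1 = 0.5 * (3*f**3 - 5*f**2 + 2)
    w2 = 0.5 * (-3*f**3 + 4*f**2 + f)
    w3 = 0.5 * (f**3 - f**2)
    return w0*line[i-1] + w1*line[i] + w2*line[i+1] + w3*line[i+2]

def resample_2d(img, g_oa, g_or):
    """Separable two-pass resampling: azimuth (axis 0), then range (axis 1)."""
    n_az, n_rg = img.shape
    # Step 1: resampling indices (a uniform grid here; in practice these are
    # interpolated from the location-versus-pixel-number grid).
    x_az = np.arange(int(n_az * g_oa)) / g_oa
    x_rg = np.arange(int(n_rg * g_or)) / g_or
    # Step 2: azimuth interpolation of each range column (I and Q together).
    tmp = np.stack([cubic_interp(img[:, j], x_az) for j in range(n_rg)], axis=1)
    # Step 3: repeat for the range dimension.
    return np.stack([cubic_interp(tmp[i, :], x_rg)
                     for i in range(tmp.shape[0])], axis=0)

# Toy complex "image", linear in both dimensions.
img = np.arange(64.0)[:, None] + 1j * np.arange(64.0)[None, :]
out = resample_2d(img, g_oa=1.30, g_or=1.54)
```

As noted in the text, applying such interpolation after detection should be done
on intensity data rather than amplitude to preserve the image statistics.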
log-on to the browse image data base management system and select imagery
for transfer to their home institutions across more conventional communication
channels.
An analogy to this data access scenario is the card catalog systems used in
a library (many of which are now electronic data bases). A user can search the
card catalog by title or author, if the specific book is known, to determine the
book location and status (e.g., on loan). Alternatively, if only the subject area
of interest is known, the subject catalog can be used to access all books related
to a specific topic within the library system. Contained in the catalog is a
synopsis or an abstract summarizing the book content, as well as detailed
information on its location. Similarly, an image browse system provides the
user with a low resolution summary of the image information contents. It could
be accessed by image file number or by site name if the user knows of a specific
scene. Also, as in the library catalog, if the user knows only of a location (i.e.,
latitude, longitude, area) a search can be made across all the image data products
in some specified region acquired during the time period of interest. The image
catalog contains information as to the processing status and the types of products
available.
The key science requirements in a browse data generation and distribution
system are twofold: good reconstructed image quality (at the user site); and a
short transfer delay time. The specifications controlling the browse system
performance are the channel capacity and the computational capacity of both
the transmitting and receiving computer systems. Generally, to achieve the
required access times for interactive browsing for some given link capacity,
spatial compression of the data products is required. The image compression
algorithm should be designed to minimize the number of computations needed
for image reconstruction since this capability must be replicated at each user
site. Additionally, the algorithm should be optimized for the unique characteristics
of the SAR image data, namely:

1. Large dynamic range (>60 dB) as a result of compression gain;
2. Speckle (multiplicative) noise, which increases the data entropy; and
3. Nonstationary statistics due to the varying target scattering characteristics.

Thus, the SAR data characteristics place some unique constraints on selection
of the data compression algorithm.

9.5.1 Browse System Requirements
Following are the system requirements necessary for the design of a browse
data processing and distribution system:

Image Quality Specifications
• reconstructed image resolution, δx x δR_g (m)
• signal to compression noise ratio, SCNR (dB)
• reconstructed image size, N_l x N_p (pixels)

Data Access
• channel capacity, r_c (bps)
• channel characteristics (BER, SNR)
• peak image transfer rate, λ (images/hour)
• maximum access delay, T (seconds)

Given these inputs, we can then perform the analysis necessary to derive the
required compression ratio, d. Typically, the required compression is larger
than can be achieved by any lossless compression algorithm and, depending
on the required minimum signal to compression noise ratio, only a few lossy
compression algorithms are suited for SAR data compression (Chang et al.,
1988a).

9.5.2 Queueing Analysis of the Online Archive System
To determine the required compression ratio we must establish the system access
load. We assume a Poisson distributed access pattern where each access consists
of a single image file transfer. For this analysis we will further assume that a
single serial port is shared by all users. We do this without loss of generality
since the extension to multiple image transfers and multiple communication
channels can be made simply by redefining the image size and the channel
capacity. The browse system will therefore be modeled as an M/D/1 queueing
system, where M represents a Poisson distribution, D is a deterministic time
required to encode and transmit the image file, and 1 indicates a single system
for processing and distribution.
It can be shown that for this system the mean response time, T, approaches
(Kleinrock, 1975)

T = W + T_e + T_t + T_d    (9.5.1)

where W is the waiting time to access the system and T_e, T_t, and T_d are the
encoding, transfer, and decoding times, respectively. The wait time is given by

W = λ(T_e + T_t)^2 / [2(1 - λ(T_e + T_t))]    (9.5.2)

where λ is the mean number of images transferred per second. The transfer time
is given by

T_t = n_b N_l N_p / (d r_c)    (9.5.3)

where n_b is the number of bits per image pixel, N_l and N_p are the line and pixel
dimensions of the image file, d is the compression ratio, and r_c is the channel
capacity in bits per second. Furthermore, we can write

T_e = C_e N_l N_p / R_e    (9.5.4)

and

T_d = C_d N_l N_p / R_d    (9.5.5)
where C_e and C_d are the numbers of computations per pixel required to encode
and decode the image and R_e and R_d are the computational rates (in FLOPS)
of the encoding and decoding processors, respectively.
We can now insert Eqn. (9.5.2)-Eqn. (9.5.5) into Eqn. (9.5.1) and write an
expression for the compression ratio, d. However, this relatively complex
algebraic equation is not very useful since, in most cases, the compression
algorithm encoding and decoding computational complexity factors (i.e., C_e,
C_d) depend on the compression ratio. Instead we will illustrate the use of these
equations with an example.

Example 9.11 Consider a browse system designed such that the images are
compressed upon receipt from the post-processor and stored in a compressed
format, so that the encoding time is T_e = 0. Furthermore, assume that the
decoding procedure is such that the receiving system can decode the data faster
than the channel can transmit, allowing the decoding process to be fully
overlapped with the image data transfer (i.e., T_d = 0). Equation (9.5.1) becomes

T = W + T_t    (9.5.6)

Inserting Eqn. (9.5.2) and Eqn. (9.5.3) into Eqn. (9.5.6) we can plot the total
access time (including the queue) as a function of access frequency, λ, and the
compression ratio, d, given the image size (N_l, N_p, n_b) and the link capacity
(r_c). If we assume a 1K x 1K pixel image is required for the user display, the
browse system must first reduce the original full resolution input image frame,
either by segmenting or averaging the original image. We will assume a byte
representation for each pixel and that the communication link is a 9.6 Kbps
line. No channel coding is included. The results shown in Fig. 9.21a indicate
that a compression ratio of 15-20 provides data access in less than 2 minutes
for 20 access requests per hour. If a 1 minute encoding time is required
(Fig. 9.21b) following the request receipt (i.e., T_e = 60 s), then the queue begins
to grow large as the request frequency approaches λ = 20 images/h. For this
case a reasonable solution would be to add a second 9.6 Kbps line.

Figure 9.21 Response time of browse (M/D/1) system as a function of access frequency λ and
compression ratio d for (a) Encoding time T_e = 0; (b) T_e = 1 min. (Courtesy of C. Y. Chang.)
492 THE SAR GROUND SYSTEM 9.5 IMAGE DATA BROWSE SYSTEM 493
the compression algorithm that can achieve the desired compression ratio, given the browse application, only lossy techniques will be considered in detail. The
some image quality criterion. lossy algorithms can be grouped as follows:
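Example 9.11 can be reproduced with a short M/D/1 calculation (Kleinrock, 1975). Since Eqn. (9.5.2)-(9.5.3) are not repeated here, the transfer time Tt = Nl NP nb/(d r0) and the standard M/D/1 mean wait are our assumptions, chosen to be consistent with the example's numbers.

```python
# Sketch of Example 9.11: total access time T = W + Tt for an M/D/1 queue,
# with transfer (service) time Tt = Nl*NP*nb/(d*r0). The forms of Eqn.
# (9.5.2)-(9.5.3) are assumptions consistent with the quoted results.

def access_time_s(lam_per_hr, d, n_l=1024, n_p=1024, n_b=8, r0_bps=9600.0):
    t_t = n_l * n_p * n_b / (d * r0_bps)    # deterministic transfer time, s
    lam = lam_per_hr / 3600.0               # request rate, 1/s
    rho = lam * t_t                         # utilization; must be < 1
    if rho >= 1.0:
        return float("inf")
    w = lam * t_t**2 / (2.0 * (1.0 - rho))  # M/D/1 mean wait in queue
    return w + t_t

# Fig. 9.21a operating point: d = 15, 20 requests/hour.
print(f"T = {access_time_s(20, 15):.1f} s")
```

With d = 15 the total access time comes out near 72 s, under the 2 minute figure cited for Fig. 9.21a; lowering d or raising λ drives the utilization toward 1 and the queue term W grows without bound, the behavior seen in Fig. 9.21b.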
For this measure, the traditional parameter used is a signal to compression noise ratio

SCNR = 10 log[Σij n²p,ij / Σij (np,ij − n′p,ij)²]    (9.5.7)

where np,ij is the pixel value in the original image and n′p,ij is the reconstructed pixel value following transmission and decompression of the data. To achieve a visually good quality image, the compression noise should be of the same magnitude or less than the other noise sources in the data. For the SAR, system noises such as thermal, bit error, quantization, and saturation are typically on the order of 10-12 dB below the signal level, while the target dependent noises, such as range and azimuth ambiguities, are nominally 15-18 dB down. The exception is speckle noise. For a four-look image the signal to speckle noise ratio is only 3 dB (Section 5.2). If 8 × 8 averaging is performed on this data the speckle noise is then about 12 dB below the signal level and becomes comparable to other noise factors.

The SCNR required for browse applications will therefore depend on the processing applied to the image data before compression. If a low SCNR is acceptable, as in the case of high speckle noise (one-look images), a large compression ratio can be achieved, and thus we effectively trade distortion noise for a higher resolution at a given link capacity. If we assume the browse image size is that of a typical video display (i.e., 1 K × 1 K pixels), and that to achieve this reduction we 8 × 8 average the four-look data, a SCNR ≥ 15 dB is required for good quality reconstructed images.

An additional consideration is the spectral distribution of this noise power. In the above comparison with the various system and target noise sources, we assumed that the compression noise is essentially white across the spatial spectrum of the image. In fact, many compression algorithms add a high frequency noise characteristic, resulting from block encoding of the input data. There are various techniques to distribute this noise more evenly across the spectral bandwidth, although they typically result in an increased overall compression noise (Ramamurthi and Gersho, 1986).

9.5.4 Compression Algorithm Complexity Analysis

Data compression algorithms can be broadly classified as either lossy or lossless (noiseless). Because of the speckle noise and nonstationary statistics characteristic of SAR data, lossless techniques are relatively ineffective, yielding at most a compression factor of 1.2 to 1.4 (Chapter 6). Since we require much higher compression ratios, and can tolerate some degradation in the image data for the browse application, only lossy techniques will be considered in detail. The lossy algorithms can be grouped as follows:

1. Predictive Coding
2. Transform Coding
3. Vector Quantization
4. Ad Hoc Techniques (e.g., fractal geometry)

We will discuss each briefly as it applies to the SAR image browse application.

Predictive Coding
This category typically offers a very simple coding/decoding procedure. However, these algorithms cannot achieve compression ratios below 1 bit/pixel (i.e., a compression factor of 8). The image quality at 1 bit per pixel is generally not adequate for most science browse applications. A good example of this algorithm is Linear Three Point Predictive (LTPP) Coding (Habibi, 1971). The algorithm uses an autoregressive model to linearly predict the value of a pixel based on three neighboring values. The prediction error is then quantized and sent through the channel. The prediction coefficients must be updated if the statistics of the image change. Since this is generally true for SAR data, we assume the correlation matrix for each block is calculated before encoding. This requires 7 FLOP for each pixel. The encoding and decoding operations each require an additional 5 FLOP. Thus, for the LTPP, Ce = 12 FLOP and Cd = 5 FLOP per input pixel. The LTPP offers a simple implementation for compression factors of d ≤ 4, however it is limited in flexibility. For most SAR applications other techniques have better performance characteristics.

Transform Coding
Transform coding maps data from the spatial image domain to a representation that is more efficient for encoding the image information. The most frequently utilized transforms are the cosine and the Hadamard. The Hadamard transform offers a lower computational complexity than the cosine transform at a reduced performance. However, transform coding almost always yields better performance than predictive coding at the same compression ratios, and it offers more flexibility in that any compression ratio can be specified if the resultant image distortion is acceptable. The major disadvantage is the computational complexity, since both the encoding and decoding procedures require a large number of two dimensional transforms.

A comparative analysis of the compression algorithms listed at the beginning of this section has recently been performed for SAR (Chang et al., 1988b). That report concludes that an adaptive discrete cosine transform (ADCT) procedure is the optimum approach for coding SAR image data in that it produces the best SCNR for a given compression ratio. Essentially, the steps of the adaptive
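A minimal, non-adaptive transform coding sketch conveys the idea behind such procedures: a blockwise 2D DCT, retention of the largest coefficients in each block, reconstruction, and the SCNR of Eqn. (9.5.7) as the quality measure. The block size, the number of retained coefficients, and the synthetic test scene below are arbitrary illustrative choices, not the ADCT parameters of Chang et al. (1988b).

```python
# Illustration of the transform-coding principle: blockwise 2-D DCT,
# keep the largest coefficients per block, reconstruct, score with the
# SCNR of Eqn. (9.5.7). Non-adaptive; parameters are arbitrary.
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis, rows indexed by frequency
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0] /= np.sqrt(2)
    return c * np.sqrt(2.0 / n)

def compress_blocks(img, s=8, keep=8):
    d = dct_matrix(s)
    out = np.empty_like(img, dtype=float)
    for i in range(0, img.shape[0], s):
        for j in range(0, img.shape[1], s):
            coef = d @ img[i:i+s, j:j+s] @ d.T
            thresh = np.sort(np.abs(coef), axis=None)[-keep]
            coef[np.abs(coef) < thresh] = 0.0   # zero all but ~`keep` largest
            out[i:i+s, j:j+s] = d.T @ coef @ d
    return out

def scnr_db(orig, recon):                       # Eqn. (9.5.7)
    return 10 * np.log10(np.sum(orig**2) / np.sum((orig - recon)**2))

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = 100 + 20 * np.sin(x / 6.0) + 5 * rng.standard_normal((64, 64))
rec = compress_blocks(img.astype(float))
print(f"SCNR = {scnr_db(img, rec):.1f} dB")
```

Raising `keep` raises the SCNR at the cost of compression; the adaptive procedure instead allocates bits per block according to an activity classification, which is what degrades gracefully on scenes with widely varying statistics such as the Detroit image.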
The coded transform coefficients, the bit allocation map, and the class map must be transmitted via the channel for use in the image reconstruction process. Following the data transmission, the inverse processes of renormalization and quantization are table lookup procedures, while the 2D inverse transform is computationally intensive.

The computational complexity for block sizes 16 × 16 pixels or larger is essentially driven by the cost of performing the transforms. For an ADCT procedure, the encoding complexity per input pixel, assuming a square transform block of dimension S, is (Lee, 1984)

where the first term on the right is for the transform (Step 2) and the second term is the sorting operation (Step 3). For example, a 1 K × 1 K browse image, coded using block size S = 16, requires Ce^ADCT ≈ 22 FLOP per input pixel for encoding and 14 FLOP/pixel for decoding, which does not require sorting. For a 128 pixel block, the encoding and decoding complexity each increase to 26 FLOP/input pixel (i.e., the sorting is negligible). An alternative transform algorithm, the Hadamard transform, is sometimes preferred, in which integer arithmetic is employed since it requires only addition operations. The performance of the Hadamard transform exhibits a slightly degraded SCNR relative to the cosine transform.

Some results from coding Seasat browse images (after 8 × 8 averaging of the four-look image) are shown in Fig. 9.22 and Fig. 9.23. For this data we have used a 16 pixel block size with four activity classes. Note that the image becomes blocky at the higher compression ratios, even though the SCNR remains above 15 dB. For the Detroit scene, the statistics vary widely from the urban region to the lake, thus skewing the classes and degrading the ADCT performance.

Figure 9.22 Adaptive discrete cosine transform (ADCT) compression of Seasat image of Detroit, Michigan. (a) Original image; (b) Compression ratio, d = 10, SCNR = 18.4 dB; (c) d = 30, SCNR = 16.0 dB; (d) d = 50, SCNR = 15.1 dB.

For a browse application, where the user typically has little processing capability at the home institution (or on a ship, or in the field), the transform coding generally exceeds the maximum decoding complexity requirement. However, for point-to-point data transfer where high speed processors can be installed at either end, the ADCT is the optimum solution for compression of SAR image data.

Vector Quantization
The vector quantization (VQ) algorithm offers a compromise between the simplicity of the LTPP and the performance of the ADCT. This procedure provides a reasonably good reconstructed image quality at high compression ratios. Furthermore, the decoding procedure is reduced to a table lookup, requiring essentially no mathematical operations by the user. The disadvantage of the VQ algorithm is that the encoding complexity can be high for large codebooks and the edge effects can be severe if the image exhibits a wide
4. Transmit the index of the selected codeword for each vector and the image codebook.

The performance of this algorithm is dependent on how well the subset of the source data used to train the codebook (Step 2) represents the entire source data set. If the statistics vary at different portions of the image, such as in the Detroit scene of Fig. 9.22, and if the codebook does not contain vectors from the bright city areas, for example, these areas will be highly distorted in the reconstructed image. Assuming we select 2^m codewords as the codebook size, the maximum compression ratio is

d = nb S² / [m + 2^m S⁴ nb / (Nl NP)]    (9.5.9)

where the vector block size is S × S and nb is the number of bits per pixel. The second denominator term is the overhead associated with transmitting the codebook. As an example, consider a 1 K × 1 K pixel browse image with S = 4, m = 8, nb = 8. The compression ratio is 15:1. The codebook therefore represents approximately a 6% overhead.

The number of computations for the encoding procedure in Step 3, using a fully searched codebook, is

Ce^VQ = (Mq + 1)2^m    (9.5.10)

where q is the fraction of the original image used in training the codebook and M is the number of iterations required to train the codebook. For example, assume m = 8, q = 0.25, and M = 4; the computational complexity is Ce^VQ = 512 FLOP/input pixel. The predominant operations are adds and compares, with very few multiplies required. As a result of this large number of computations for the VQ encoding process a number of more efficient coding schemes, such as multi-level (tree) codebooks, have been developed (Chang, 1985). The performance of the VQ algorithm is illustrated in Fig. 9.24 and Fig. 9.25. An 8 bit, 2 level codebook was used in the compression routine to generate these images.

Ad Hoc Techniques
There are a number of other compression routines that do not fall into these basic categories. Several of these have been evaluated for the SAR application. Among those evaluated are (Chang et al., 1988)

• Fractal Geometry
• Micro Adaptive Picture Sequencing (MAPS)
• Block Adaptive Truncation (BAT)
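The VQ encode/decode asymmetry discussed above can be sketched as follows: a small codebook is trained on a fraction q of the image vectors with a plain LBG/k-means style loop (cf. Linde et al., 1980), and encoding is a full search for the nearest codeword. The parameters here are deliberately tiny illustrations (m = 4, S = 2), not the m = 8, S = 4 case of the text.

```python
# Minimal full-search vector quantization sketch: LBG-style codebook
# training on a fraction q of the vectors, then nearest-codeword encoding.
# Parameters are illustrative, much smaller than those in the text.
import numpy as np

def train_codebook(vectors, m_bits, iters):
    rng = np.random.default_rng(1)
    book = vectors[rng.choice(len(vectors), 2**m_bits, replace=False)].copy()
    for _ in range(iters):
        idx = np.argmin(((vectors[:, None, :] - book[None]) ** 2).sum(-1), axis=1)
        for k in range(len(book)):          # move each codeword to its cell centroid
            if np.any(idx == k):
                book[k] = vectors[idx == k].mean(axis=0)
    return book

def encode(vectors, book):
    return np.argmin(((vectors[:, None, :] - book[None]) ** 2).sum(-1), axis=1)

rng = np.random.default_rng(0)
img = rng.normal(100, 10, (32, 32))
s = 2                                       # S x S vector block size
vecs = img.reshape(16, s, 16, s).transpose(0, 2, 1, 3).reshape(-1, s * s)
book = train_codebook(vecs[: len(vecs) // 4], m_bits=4, iters=4)  # q = 0.25, M = 4
codes = encode(vecs, book)
# Compression ratio ignoring codebook overhead: nb*S^2/m, cf. Eqn. (9.5.9)
print("d =", 8 * s * s / 4, " codes:", codes.shape)
```

Decoding is simply `book[codes]`, the table lookup that makes VQ attractive for browse users with little processing capability; all of the cost sits in the encoder's full search.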
Figure 9.25 VQ compression of averaged Seasat images: (a) Original, Los Angeles, California; (b) d = 15.1, SCNR = 11.7 dB; (c) Original, Beaufort Sea; (d) d = 15.1, SCNR = 17.2 dB.

REFERENCES

Appiani, E., G. Barbagelata, F. Cavagnaro, B. Conterno and R. Manara (1985). "EMMA-2 An Industry-Developed Hierarchical Multiprocessor for Very High Performance Signal Processing Applications," First Inter. Conf. on Supercomputing, St. Petersburg, Florida.
BBN Labs (1986). "Butterfly Parallel Processor Overview, Version 1."
Chang, C. Y., R. Kwok and J. C. Curlander (1988a). "Spatial Compression of Seasat SAR Imagery," IEEE Trans. Geosci. and Remote Sensing, GE-26, pp. 763-765.
Chang, C. Y., R. Kwok and J. C. Curlander (1988b). "Data Compression of Synthetic Aperture Radar Data," Jet Propulsion Laboratory, Technical Document, D-5210, Pasadena, CA.
Chang, C. Y., M. Jin and J. C. Curlander (1992). "Squint Mode Processing Algorithms and System Design Considerations for Spaceborne Synthetic Aperture Radar," IEEE Trans. Geosci. and Remote Sensing, in press.
Chang, P. C., J. May and R. M. Gray (1985). "Hierarchical Vector Quantizers with Table-lookup Encoders," Proc. IEEE Inter. Conf. Comm., 3, pp. 1453-1455.
Chen, W. H., and H. Smith (1977). "Adaptive Coding of Monochrome and Color Images," IEEE Trans. Comm., COM-25, pp. 1285-1292.
Davis, D. N. and G. J. Princz (1981). "The CCRS SAR Processing System," 7th Canadian Symp. on Remote Sensing, Winnipeg, Manitoba, pp. 520-526.
Dongarra, J. J. (1988). "Performance on Various Computers Using Standard Linear Equation Software in a Fortran Environment," Argonne National Laboratory Technical Memorandum, No. 23.
Fenson, D. (1987). "British Aerospace of Australia, ERS-1 Data Acquisition Facility," Technical Document.
Friedman, D. E. (1981). "Operational Resampling for Correcting Images to a Geocoded Format," 15th Inter. Symp. on Remote Sens. of Envir., Ann Arbor, MI, p. 195.
Habibi, A. (1971). "Comparison of the nth-order DPCM Encoder with Linear Transformations and Block Quantization Techniques," IEEE Trans. Comm. Tech., COM-19, pp. 948-956.
Heiskanen, W. A. and H. Moritz (1967). Physical Geodesy, W. H. Freeman, San Francisco, CA, pp. 181-183.
Hillis, W. D. (1985). The Connection Machine, MIT Press, Cambridge, MA.
Hwang, K. (1987). "Advanced Parallel Processing with Supercomputer Architectures," Proc. IEEE, 75, pp. 1348-1379.
Jain, A. K. (1981). "Image Data Compression: A Review," Proc. IEEE, 69, pp. 349-387.
Jin, M. and C. Wu (1984). "A SAR Correlation Algorithm which Accommodates Large Range Migration," IEEE Trans. Geosci. and Remote Sensing, GE-22, No. 6.
Kleinrock, L. (1975). Queueing Systems, Vol. 1: Theory, Wiley, New York.
Kwok, R., J. C. Curlander and S. S. Pang (1987). "Rectification of Terrain Induced Distortions in Radar Imagery," Photogram. Eng. and Rem. Sens., 53, pp. 507-513.
Lee, B. G. (1984). "A New Algorithm to Compute the Discrete Cosine Transform," IEEE Trans. Acoust. Speech Sig. Proc., ASSP-32, pp. 1243-1245.
Lewis, D. J., B. C. Barber and D. G. Corr (1984). "The Time Domain Experimental SAR Processing Facility at the Royal Aircraft Establishment Farnborough," Satellite Remote Sensing, Remote Sensing Society, Reading, England, pp. 289-299.
Linde, Y., A. Buzo and R. M. Gray (1980). "An Algorithm for Vector Quantizer Design," IEEE Trans. Comm., COM-28, pp. 84-95.
Naraghi, M., W. Stromberg and M. Daily (1983). "Geometric Rectification of Radar Imagery using Digital Elevation Models," Photogram. Eng., 49, pp. 195-199.
Quegan, S. (1989). "Interpolation and Sampling in SAR Images," IGARSS '89 Symposium, Vancouver, BC, Canada.
Ramamurthi, B. and A. Gersho (1986). "Nonlinear Space-Variant Postprocessing of Block Coded Images," IEEE Trans. Acoust. Speech Sig. Proc., ASSP-34, pp. 1258-1268.
Ramapriyan, H. K., J. P. Strong and S. W. McCandless, Jr. (1986). "Development of Synthetic Aperture Radar Signal Processing Algorithms on the Massively Parallel Processor," NASA Symposium on Remote Sensing Retrieval Techniques, Williamsburg, VA, December 1986.
Rocca, F., C. Cafforio and C. Prati (1989). "Synthetic Aperture Radar: A New Application for Wave Equation Techniques," Geophysical Prospecting, 37, pp. 809-830.
Sack, M., M. R. Ito and I. G. Cumming (1985). "Application of Efficient Linear FM Matched Filtering Algorithms to Synthetic Aperture Radar Processing," Proc. IEE, 132, pp. 45-57.
Schaefer, D. H. (1985). "MPP Pyramid Computer," Proc. IEEE Syst. Man. Cyber. Conf., Tucson, AZ.
Schreier, G., D. Kossman and D. Roth (1988). "Design Aspects of a System for Geocoding Satellite SAR Images," ISPRS, Kyoto, Comm. I, 1988.
Selvaggi, F. (1987). "SAR Processing on EMMA-2 Architecture," RIENA Space Meeting Proceedings, Rome, Italy.
Siedman, J. B. (1977). "VICAR Image Processing System Guide to System Use," Jet Propulsion Laboratory Publication 77-37, Pasadena, CA.
Test, J., M. Myszewski and R. C. Swift (1987). "The Alliant FX Series: Automatic Parallelism in a Multi-processor Mini-supercomputer," in Multiprocessors and Array Processors, Simulation Councils, San Diego, CA, pp. 35-44.
van Zyl, J. (1990). "Data Volume Reduction for Single-Look Polarimetric Imaging Radar Data," submitted to IEEE Trans. Geosci. and Remote Sensing.
Wolf, M. L., D. J. Lewis and D. G. Corr (1985). "Synthetic Aperture Radar Processing on a Cray-1 Supercomputer," Telematics and Informatics, 2, pp. 321-330.
Wu, C., K. Y. Liu and M. Jin (1982). "Modeling and a Correlation Algorithm for Spaceborne SAR Signals," IEEE Trans. Aero. Elec. Syst., AES-18, pp. 563-575.
10
OTHER IMAGING ALGORITHMS

In earlier chapters, we have discussed mainly those SAR imaging algorithms which have been developed for high resolution remote sensing applications. The emphasis has been on spaceborne systems. In the case of such a system, the effects of range migration and limited processor depth of focus are immediately evident (Section 4.1.3). This is even more the case at the relatively low frequency (L-band) of the earliest earth orbiting SAR, Seasat. The remote sensing application set the direction towards strip mapping (side looking) sensor deployment, and towards terrain imaging algorithms operating in that mode. In Chapter 4 and Chapter 5 we described the developments leading to appropriate processors in such applications, building on such work as that of Wu (1976).

At the same time, other classes of processors were being developed. One approach treats the impulse response function of the system directly as a two dimensional Green's function to be inverted. The complex basebanded radar signals, before range compression, correspond to the response function Eqn. (4.2.31):

Ω(s, t) = exp[−j4πR(s)/λ] exp{jφ[t − 2R(s)/c]}    (10.0.1)

where φ(t) is the phase of the transmitted pulse:

s(t) = cos[2πfc t + φ(t)]

and R(s) is the range migration locus Eqn. (4.2.30). Within the limitations imposed by depth of focus, the function Eqn. (10.0.1) corresponds to a stationary system function

(10.0.2)

and φ1(R) = φ(2R/c). With the definition Eqn. (10.0.2), the response function Eqn. (10.0.1) is just

(10.0.3)

where P is the two dimensional spectrum of the basebanded data Ω(s, t) before range compression. The algorithm Eqn. (10.0.3) was developed in particular by Vant and Haslam (1980, 1990).

Another class of processing algorithms different from rectangular range-Doppler processing has grown up, based on alternate schemes for attaining range resolution in pulse compression radar. These are based on the "deramp" processing scheme for range compression (Section 10.1). The idea is to do whatever is necessary to salvage the process of simple frequency filtering on the Doppler spectrum of the azimuth signal, while at the same time making use of the full target spectrum, thereby attaining improved resolution (focussed processing). Such algorithms have been mainly developed for use in airborne systems, but are not restricted to such systems. They are, however, particularly well adapted to systems which are squinted away from side-looking so as to deliberately aim (say) forward at some limited region of interest, as for example in a spotlight mode SAR. Such systems are in contrast to the Seasat-like deployments we have been mainly considering so far, in which the objective is to map the terrain below the vehicle more or less uniformly, with squint only a nuisance to be compensated in the processing.

In the case of the large bandwidth time product of the azimuth Doppler signal imposed by the usual geometries, high resolution azimuth processing can be done using the techniques of matched filter processing. From the point of view of the Green's function h(x, R|x′, R′) and its inversion (Section 3.2.1), the return signal vr(x, R) of the radar, in response to a distributed target with complex reflectivity ζ(x′, R′), is

vr(x, R) = ∫ h(x, R|x′, R′)ζ(x′, R′) dx′ dR′
The image is then recovered by applying the inverse kernel:

ζ(x, R) = ∫ h⁻¹(x, R|x′, R′)vr(x′, R′) dx′ dR′    (10.0.4)

where h⁻¹(x, R|x′, R′) is the inverse Green's function. In the case of the along track variable x, the kernel h(x, R|x′, R′) is approximately a linear FM, and the inversion kernel h⁻¹(x, R|x′, R′) is therefore another linear FM, the azimuth compression filter. Convolution is necessary to apply the inverse kernel to the data, as in Eqn. (10.0.4). Range migration enters as a complicating factor.

The algorithms we will describe in this chapter take an alternative point of view. The received radar data vr(x, R) are pre-processed into signals ṽr(x, R) such that, in the corresponding superposition equation:

ṽr(x, R) = ∫ h̃(x, R|x′, R′)ζ(x′, R′) dx′ dR′

the kernel h̃(x, R|x′, R′) is of a very simple form, and in fact is just that kernel which is inverted by Fourier transformation. Thereby the image function ζ(x, R) results from Fourier transformation of the data function ṽr(x, R). Application of compression filters and inverse Fourier transformation as needed in the rectangular algorithm do not occur. The focussed image results by a single two dimensional Fourier transform operation. The cost is (perhaps considerable) data preprocessing to form the signals ṽr from the radar data vr.

The algorithms of the class to be discussed go by various names in their variants, such as deramp FFT processing (sometimes called stretch processing), step transform processing, SPECAN processing, and polar processing. Ausherman et al. (1984) have given an overview of the class. All of these algorithms have links to the methods of tomographic imaging, which Munson et al. (1983) and Fitch (1988) discuss. We begin with a discussion of deramp processing, which is the direct predecessor of the step transform method of SAR imaging.

10.1 DERAMP COMPRESSION PROCESSING

The received signal from a point target at range delay t0 is the chirp

vr(t) = cos 2π[fc(t − t0) + K(t − t0)²/2],  |t − t0| < τp/2    (10.1.2)

This has a frequency f = fc + K(t − t0) which depends on time, so that full resolution processing is not possible by simple frequency filtering.

Figure 10.1 Chirp generation and corresponding deramp range compression scheme.

In deramp compression, the received signal corresponding to Eqn. (10.1.2) is converted to a constant frequency signal with frequency linearly related to t0, the quantity to be determined, by the system of Fig. 10.1. In the case tr = 0, for example, we have

vd(t) = [s(t)vr(t)]diff. freq. = cos 2π(Kt0 t + fc t0 − Kt0²/2)    (10.1.3)

which is a constant frequency sinusoid whose frequency Kt0 encodes the range delay t0. Working in terms of positive frequency components only, for convenience, the computation of Eqn. (10.1.3) can be written

vd(t) = s*(t − tr)vr(t)    (10.1.4)

with the reference signal

s*(t − tr) = exp(−j2πfc t) exp[−jπK(t − tr)²],  |t − tr| ≤ Δt/2    (10.1.5)
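The deramp principle of Eqn. (10.1.2)-(10.1.5) can be sketched numerically: after multiplication by the conjugate reference chirp, a return delayed by t0 becomes a constant-frequency tone at Kt0, which an FFT then locates. All parameter values below are illustrative assumptions, and the signals are complex baseband (the carrier terms of the text are omitted).

```python
# Numerical sketch of deramp range compression: deramped chirp -> tone at K*t0.
import numpy as np

K = 1.0e12            # chirp rate K, Hz/s
tau_p = 20e-6         # pulse length tau_p, s
t0 = 5e-6             # target range delay t0, s (reference delay t_r = 0)
fs = 50e6             # sampling rate, comfortably above the chirp bandwidth
t = np.arange(0, t0 + tau_p, 1.0 / fs)

# Received chirp (nonzero only over the pulse) and deramping reference;
# the sign convention is chosen so the deramped tone lands at +K*t0.
v_r = np.where((t >= t0) & (t < t0 + tau_p),
               np.exp(-1j * np.pi * K * (t - t0) ** 2), 0)
ref = np.exp(1j * np.pi * K * t ** 2)
v_d = ref * v_r                      # deramped signal: tone at f = K*t0

spec = np.abs(np.fft.fft(v_d))
freqs = np.fft.fftfreq(len(t), 1.0 / fs)
f_peak = freqs[np.argmax(spec)]
print(f"tone at {f_peak / 1e6:.2f} MHz; K*t0 = {K * t0 / 1e6:.2f} MHz")
```

The deramped tone occupies only the pulse duration within the longer analysis window, which is exactly the signal-to-noise and sampling-rate penalty discussed next.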
The reference pulse Eqn. (10.1.5) (Fig. 10.2) is generated such that its length Δt is the timewidth of the slant range swath over which returns are expected. The result of the reference mixing operation then is a preprocessed signal at baseband:

vd(t) = s*(t − tr)s(t − t0)
      = exp[jπK(t0² − tr²)] exp[j2πK(tr − t0)t],  |t − t0| ≤ τp/2    (10.1.6)

This function vd(t) is a constant frequency sinusoid, available over the full transmitted pulse duration τp, whose frequency K(tr − t0) is a direct measure of the target range parameter t0. The precision to which that frequency can be measured is |K|δt0 = 1/τp, so that target range resolution is

δR = cδt0/2 = c/2|K|τp = c/2BR

where BR = |K|τp is the bandwidth of the transmitted pulse. Thus the resolution of full bandwidth pulse compression processing is realized.

All of the operations involved in carrying out the deramp procedure are linear, since the reference function Eqn. (10.1.5) is independent of target position in the swath. Therefore a complex reflectivity distribution ζ(R) across the swath is reproduced by the system of Fig. 10.1, with the squared magnitude of each complex Fourier coefficient at the output of the FFT processor used for filtering the deramped received signal being the real reflectivity |ζ(R)|² at the corresponding range. A radar system with this type of range processing has been called a stretch radar (Hovanessian, 1980, p. 114).

A practical difficulty arises in deramp processing. Normally the swath width Δt is considerably larger than the pulse length τp (Fig. 10.2). Since we need to allow for a target return at any position in the swath, the processor FFT must have time length Δt, even though any particular frequency bin is occupied by signal for at most a much shorter time τp. By lengthening the processor time to Δt we have degraded the signal to noise ratio of the system. Further, there are generally present signal frequencies in vd(t) ranging from K(tr − tnear) to K(tr − tfar), where tnear and tfar correspond to the two extremes of the range swath. Thus the deramped signal vd(t) has a bandwidth |K|Δt, whereas vr(t), the radar return itself, has only the band |K|τp. Thus the sampling rate of the deramped signal must be artificially high. The system is simplest to arrange in the case that Δt and τp are roughly equal. This means that either the swath must be narrow, less than a pulse width, or that subswaths must be processed with multiple reference functions used to dechirp each subswath signal separately, perhaps using the step transform procedure discussed in Section 10.2.

Figure 10.2 Frequency against time in deramp range compression. (a) Transmitted; (b) Reference; (c) Received.

The potential application of deramp processing to SAR azimuth compression is clear. The algorithm has recently been called the SPECAN (SPECtral ANalysis) algorithm in that context (Sack et al., 1985). A number of difficulties arise, however, which can make the procedure somewhat involved for high resolution image formation. In addition to the problems mentioned above in regard to range processing, which are also present in the application to azimuth processing, the phenomenon of range migration can make it necessary to assemble together from various range bins the data to be applied to the azimuth FFT processor. Finally, since the azimuth chirp constant fR depends on slant range across the swath, the relation between FFT bin number and image point azimuth position changes with range, a circumstance which requires interpolation operations to construct a uniformly sampled image. The situation is discussed by Sack et al. (1985), and in detail by Wu and Vant (1984). Both Sack et al. (1985) and Wu and Vant (1984) give a detailed analysis of the step transform, an important modification to which we now pass.

10.2 STEP TRANSFORM PROCESSING

The basic idea of deramp processing can be realized in a version known as the step transform. The method as applied to range compression is discussed by Perry and Kaiser (1973) and by Martinson (1975). Perry and Martinson (1977) also mention the technique in the context of along-track SAR processing. An analysis of the along-track application is given by Sack et al. (1985), and by Wu and Vant (1984). Wu and Vant (1985) analyze the modifications that need to be made in the case of a highly squinted (spotlight) SAR, in which case the along-track Doppler signal is not necessarily well approximated as a linear FM.

With simple deramp range processing (Section 10.1), difficulties arise if the range swath timewidth Δt over which a return signal can occur is noticeably longer than the width τp of a transmitted pulse. Even in the case of a swath only the width of the transmitted pulse, in deramp processing the deramped signal vd(t) of Eqn. (10.1.6) will not capture the return signal over the majority
of its width unless the target is near the center of the swath (Fig. 10.2). This suggests separating the full swath of interest into a number of subswaths, each of width considerably less than that of a transmitted pulse, with each subswath provided with its own local reference signal (Fig. 10.3). Thereby essentially all of the signal span of any return can be captured, with different time segments of the full pulse appearing in different subswaths. Full resolution processing then requires simultaneous processing of the signals from multiple subswaths. The step transform is the two-stage procedure which implements the scheme.

Coarse Range Coefficients
Consider then a simple subswath, the nth, centered on a reference time tn, with a target at range time t0 (Fig. 10.4). The deramped signal for that subswath, similar to Eqn. (10.1.6), is

vn(t) = exp(jφ) exp[j2πK(tn − t0)t],  |t − tn| ≤ Δt/2    (10.2.1)

where

φ = πK(t0² − tn²)

Figure 10.4 Frequency plot for step transform pulse compression.

Carrying out Fourier transformation of this over the interval Δt centered on tn determines the frequency K(tn − t0) to a resolution δf = 1/Δt (assuming that the interval in question is not at the end of the target pulse), and thereby determines the range R0 of the target to a resolution δR = c/2|K|Δt, coarser by the ratio Δt/τp « 1 than the full resolution capability c/2|K|τp of the system. This so-called coarse range processing yields the same range information in adjacent subswaths, since any particular target appears in multiple subswaths, although at different frequencies separated by the frequency step KΔt corresponding to the time shift Δt of the reference linear FM signals.

It is the further processing of the redundant coarse resolution information about each target in adjacent subswaths (subapertures) which leads to the final full resolution range information. This redundant information resides in different frequency intervals in adjacent subapertures, and it is the phase changes from one subaperture to another which lead to full resolution range measurement. It is therefore of interest to examine the Fourier analysis of a target return in each particular subaperture centered on time tn. Proceeding in the language of continuous time and frequency variables, we determine the Fourier transform over the aperture tn − Δt/2 ≤ t ≤ tn + Δt/2 of the deramped signal Eqn. (10.2.1), taken with origin at the start of the aperture:

Vn(f) = exp(jφ) ∫₀^Δt exp[j2πK(tn − t0)(tn − Δt/2 + t)] exp(−j2πft) dt
      = Δt exp[jπKt0(t0 + Δt)] exp[jπKtn(tn − Δt)]
        × exp(−j2πKtn t0) exp(jπΔtu) sinc(πΔtu)    (10.2.2)

where

u = K(tn − t0) − f

In each subaperture, a target at some t0 appears essentially in one frequency bin, that at f = K(tn − t0). The first exponential factor in the corresponding coefficient Eqn. (10.2.2) is a constant, independent of n, and the second can be compensated by multiplying by its conjugate, since all of its terms are
510 OTHER IMAGING ALGORITHMS
10.2 STEP TRANSFORM PROCESSING 511
known. The compensated value of Vn(f) in that bin is just targets out to a distance such that
(In practice, the FFT is used so that Vn(f) is sampled at a spacing 1I At in so that the band to be analyzed over the subaperture time tis p/At. This requires
f) This is a sinusoid in the time variable tn with frequency Kt 0 (Fig. 10.5). a sample spacing in tn of At/ p, rather than At, in order to avoid aliasing.
Discrete Fourier analysis of the compensated coarse resolution frequency The oversampling factor P typically used is on the order of 2 or 3, unless the
coefficients Eqn. (10.2.3) over tn, during the target span Tp (Fig. 10.5), then coarse resolution filter has very well controlled sidelobes. That is, two or three
yields a spectrum which is ideally an impulse at frequency Kt 0 • The impulse times as many subapertures are generated than are sketched in Fig. 10.3.
can be located to a resolution of nearly 1/-rp in 1Klt0 , or a resolution
1/IKl-rP in t 0 , the full available resolution of the pulse compression system. Digital Coarse Range Analysis
Sack et al. (1985) describe the digital algorithm of step transform range
compression in detail. For a target at t 0 = m& (Fig. 10.6a), the basebanded
Oversampled Coarse Range Analysis radar return, when multiplied by the appropriate deramping chirp in terms of
The appearance of the sine function in the expression Eqn. (10.2.2) for Vn(f) the local index l on subaperture n, is
rather than a rectangle function in frequency of width 1 /At introduces the
possibility of aliasing (Appendix A). Ideally, only targets with t 0 such that exp{ -jnK[(l - L/2)<5t]2}, l = 0, L- 1 ( 10.2.5)
~.._~__.o~---'-----40~~~'--~~o~~~-'-~~- 3
t n- 1 t0 tn+1 tA tn
~M=L8t--1
Figure 10.6 Time sampling in step transform. (a) Single subaperture; (b) Oversampling of
Figure 10.5 Coarse range bins in deramp range compression. subapertures for fine resolution transform (case p = 3).
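The coarse deramp mechanism just described — multiply the return chirp by a delayed conjugate reference and Fourier transform, so that a target at delay t_0 appears as a tone at K(t_n − t_0), resolved to 1/Δt — can be sketched numerically. All parameters below are illustrative assumptions, not values from the text.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text)
K = 1.0e12        # chirp rate K, Hz/s
fs = 20.0e6       # sample rate, Hz
dt_sub = 10.0e-6  # analysis interval (subaperture length), s
t_n = 30.0e-6     # reference-chirp delay (subaperture center), s
t_0 = 26.0e-6     # target delay, s

# Time samples over the interval centered on t_n
L = int(dt_sub * fs)
t = t_n + (np.arange(L) - L // 2) / fs

# Deramp: target chirp times the conjugate reference chirp centered at t_n;
# the product is a tone at frequency K*(t_n - t_0)
v = np.exp(1j * np.pi * K * (t - t_0) ** 2) * np.exp(-1j * np.pi * K * (t - t_n) ** 2)

# Coarse range analysis: the FFT resolves the tone to df = 1/dt_sub
V = np.fft.fft(v)
freqs = np.fft.fftfreq(L, d=1.0 / fs)
f_est = freqs[np.argmax(np.abs(V))]
print(f_est)  # near K*(t_n - t_0) = 4 MHz
```

The frequency step KΔt between adjacent subapertures and the coarse range resolution c/2|K|Δt both follow directly from this picture.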
Multiplication by the reference Eqn. (10.2.5) produces the sampled deramped signal in the subaperture centered at t_n = nδt:

    v(l|n) = exp{jπK(δt)²(n − m)[(n − m − L) + 2l]}    (10.2.6)

Taking the FFT of the sequence Eqn. (10.2.6) over the aperture time variable l yields the discrete coefficients corresponding to the spectrum V_n(f):

    V(k|n) = Σ_{l=0}^{L−1} v(l|n) exp(−j2πkl/L)
           = exp[jπK(δt)²(n − m)(n − m − L)] exp[ju(L − 1)] × sin(Lu)/sin(u),  k = 0, …, L − 1

    u = π[K(δt)²(n − m) − k/L]    (10.2.7)

Digital Fine Range Analysis
The coefficients Eqn. (10.2.7) provide a complete coarse range analysis in every subaperture n, with resolution 1/|K|Δt. The various subaperture coefficients Eqn. (10.2.7) are processed together, with respect to the time index n, to obtain the final fine resolution analysis of the range returns. We select sequential subapertures indexed by i and centered at uniformly spaced times i(Nδt) (Fig. 10.6b). Allowing for oversampling to avoid ghost images in the final output, we have the stepout condition Eqn. (10.2.8). Using Eqn. (10.2.8), in the expression Eqn. (10.2.7) for the coefficients V(k|n), we have thereby arranged for u to be held constant as n changes by iN from one subaperture to another. The remaining variation of V(k|n) with n resides in the phase angle,

    v = πK(δt)²(n − m)(n − m − L)
      = πK(δt)²[n(n − L) + m(m + L) − 2mn]    (10.2.10)

The variation with n represented by 2mn is what will "do the job" in determining the target index m when we transform over n. The factor n(n − L) is an unwanted variation with n, and must be compensated.

The range analysis proceeds as indicated in Fig. 10.6b. Selecting a subaperture number, say A, A = 0, 1, …, centered at t_A = (n_0 + AN)δt, all targets with |t_0 − t_A| < τ_p/2 will appear in that subaperture. The available coarse FFT coefficients Eqn. (10.2.7) relative to those targets lie in ±I/2 apertures around aperture A, where I = pτ_p/Δt can be arranged to be an integer. The phase of the coefficients Eqn. (10.2.7) is compensated to obtain coefficients for analysis:

    V′(k|n) = V(k|n) exp[−jπK(δt)²n(n − L)]    (10.2.11)

These coefficients are found by tracing through the matrix of values V(k|n) as in Fig. 10.7, remembering that the coefficients V(k|n) are periodic in k. The fine range analysis is then obtained by taking the I-point FFT of the coefficients Eqn. (10.2.11). The resulting coefficients g(r|A), Eqn. (10.2.12), peak at a fine range index determined by the target position.

Figure 10.7 Subaperture selection for fine resolution analysis in step transform (case p = 2).
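The fine-resolution stage operates on one compensated coefficient per subaperture — a sinusoid in t_n of frequency K t_0, Eqn. (10.2.3) — and a DFT over the subaperture index localizes t_0 to the full resolution 1/|K|τ_p. A minimal sketch (parameters are assumptions; t_0 is kept inside the unambiguous span of one coarse bin):

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text)
K = 1.0e12          # chirp rate, Hz/s
tau_p = 50.0e-6     # full pulse length, s
dt_sub = 2.0e-6     # subaperture spacing Delta-t, s
t_0 = 0.17e-6       # target delay; small enough to avoid the coarse-bin ambiguity

# Compensated coarse coefficients, Eqn (10.2.3): one sample per subaperture,
# a sinusoid in t_n with frequency K*t_0, over the target span tau_p
t_n = np.arange(int(tau_p / dt_sub)) * dt_sub
Vc = np.exp(-1j * 2.0 * np.pi * K * t_0 * t_n)

# Fine range analysis: zero-padded DFT over the subaperture index
Nfft = 4096
spec = np.abs(np.fft.fft(Vc, Nfft))
f_grid = np.fft.fftfreq(Nfft, d=dt_sub)
t0_est = -f_grid[np.argmax(spec)] / K   # the sinusoid sits at f = -K*t_0

print(t0_est)  # recovers t_0 to roughly 1/(|K|*tau_p)
```

The two-stage structure (short FFTs per subaperture, then one FFT across subapertures) is what replaces a single long matched filter in the step transform.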
The peak location of g(r|A) identifies the target index (taking n_0 = 0).

Elimination of Fine Range Ambiguities
The index A in v of Eqn. (10.2.13) numbers the coarse frequency resolution cells of size 1/Δt within which the index m provides fine resolution (Fig. 10.8). Since |g(r|A)| of Eqn. (10.2.12) is periodic in r with period I, if |g| peaks at r = r̂, we know only that the target is at m to within that ambiguity.

Figure 10.8 Frequency bins and response functions in step transforms. (a) Coarse and fine frequency bins. (b) Response functions sin(Lv)/sin(v), sin(Iw)/sin(w).
Figure 10.9 Coefficient selection in fine range analysis of step transform compression (case p = 3).

The term 2al in the exponent is accounted for from one aperture to the next by the nature of the noninteger stepout in n. However, the factor a(a + L) represents a term depending on n, since a is in general different for every n, and must be compensated in forming the sequences V′(k|n) of Eqn. (10.2.11) from the coarse coefficients V(k|n), just as was the previous term n(n − L).
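The ambiguity can be seen directly: the fine transform samples the compensated coefficients only once per subaperture, so delays separated by the unambiguous span 1/(KΔt) yield identical fine spectra, and only the coarse bin distinguishes them. A toy illustration with assumed numbers:

```python
import numpy as np

# Illustrative parameters (assumptions)
K = 1.0e12          # chirp rate, Hz/s
dt_sub = 2.0e-6     # subaperture spacing, s
t_n = np.arange(25) * dt_sub

t0_a = 0.10e-6                    # two targets exactly one ambiguity apart
t0_b = t0_a + 1.0 / (K * dt_sub)

# Compensated coefficients for each target (cf. Eqn 10.2.3)
Va = np.exp(-1j * 2.0 * np.pi * K * t0_a * t_n)
Vb = np.exp(-1j * 2.0 * np.pi * K * t0_b * t_n)

# Identical fine spectra: the fine analysis alone cannot separate the two
ambiguous = np.allclose(np.abs(np.fft.fft(Va, 1024)), np.abs(np.fft.fft(Vb, 1024)))

# ...but their coarse frequencies K*t0 fall in different bins of width 1/dt_sub
bin_a = int(np.floor(K * t0_a * dt_sub))
bin_b = int(np.floor(K * t0_b * dt_sub))
print(ambiguous, bin_a, bin_b)
```

Resolving the fine index against the coarse bin index is exactly the bookkeeping sketched in Figs. 10.8 and 10.9.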
Azimuth Compression
When the step transform algorithm is used for azimuth SAR processing, it is the linear FM of the sampled Doppler signal which is analyzed. The procedure is just that which has been detailed above, with the usual complications of range migration and change of Doppler frequency rate f_R with range to be considered. Sack et al. (1985) have given a good discussion.

So far as change of f_R with range is concerned, the step transform procedure is simply adjusted every so often across the range swath as f_R is updated. Since the input and output sampling rates δt of the algorithm are independent of f_R, no interpolation is needed on the output to produce a uniform image grid. The only complication internal to the algorithm is the requirement Eqn. (10.2.9) that the coarse range bin stepout p used from one subaperture to another be an integer. This is most conveniently done by adjusting N inversely to the change in f_R, so that the overlap ratio β depends on f_R. Since the percent change in f_R over the swath is normally small in space based systems, no great change results in the system operation on that account. However, whatever N is used must also perforce be an integer, and changes in N involving a fractional part of an integer cannot be accommodated. Sack et al. (1985) suggest then using some number, say J, of reference ramps, with time origins spaced at multiples j of δs/J:

    s_n(j) = (n + j/J)δs

If integer spacings (in δs) of N_0, corresponding to s_n = n_0δs, (n_0 + N_0)δs, (n_0 + 2N_0)δs, …, are to be changed to noninteger spacings N_0 + j/J, the appropriate reference ramp is selected for each subaperture.

Range migration is handled much as in the algorithms of Section 4.2.3. That is, the appropriate data is gathered together along the curved migration path in range/azimuth memory before Fourier transformation in the Doppler domain, but after range compression. If we assume the nominal linear range walk is removed in the time domain as described in Section 4.2.3, then we deal essentially only with range curvature. Each coarse resolution FFT in the Doppler domain operates over some limited span S′ of azimuth time. If S′ is adequately small, the residual migration (mainly quadratic) over the aperture S′ will be less than one range bin, or half a range bin, or whatever is desired, depending on the precision of processing needed. As Sack et al. (1985) note, this establishes an upper bound to the coarse aperture time S′. Since we have as always the nominal migration locus, Eqn. (10.2.18), after walk removal, where s is slow (azimuth) time, the worst case situation, at the end of the full synthetic aperture, yields a range migration due to the curvature over the subaperture, and a limit (for quarter-cell accuracy):

    ΔR = (V_st²/2R_0)[(S/2)² − (S/2 − S′)²]
       = V_st²(S − S′)S′/2R_0 < δR/4    (10.2.19)

Provided 2R_0δR/S²V_st² > 1, this is always the case. Otherwise, for correction to within a quarter range resolution cell the subaperture time is bounded as

    S′(S − S′) < R_0δR/2V_st²

Figure 10.10 Range migration considerations in step transform azimuth processing.

With apertures S′ chosen in each range bin, the appropriate coarse resolution bins are then patched together to form the input to the fine resolution FFT. Another constraint is imposed by the necessity for range migration correction when using the step transform for slow time processing. Each fine resolution FFT relates to a number of targets separated in slow time by the sampling interval δs, which in this case is the radar pulse repetition interval. In frequency, these targets fill the band of width 1/S′ corresponding to the resolution of the coarse resolution FFT. Thus each coarse resolution frequency coefficient relates to a number of targets, which are separated in frequency by up to 1/S′ Hz, corresponding to a maximum separation in slow time of 1/(|f_R|S′). (Since the analysis band |f_R|S of the coarse resolution FFT is just 1/δs, this can also be written as S(δs)/S′.)

The data for each fine resolution FFT is gathered together by selecting a single coarse resolution coefficient from each coarse resolution time interval and applying the appropriate range curvature correction (again assuming the linear range walk has been previously compensated). Therefore, each of the targets contributing to a particular coarse bin must have the same curvature correction to be applied, again to within a range bin, or some appropriate fraction (say a quarter) thereof. Now consider (Fig. 10.10) two targets, in the same coarse Doppler bin, separated by the maximum amount Δf_Dc = 1/S′. In slow time this corresponds to

    Δs = 1/(|f_R|S′) = S/(B_D S′) = S(δs)/S′

The two targets are not in general at the same R_c. The largest discrepancy in range curvature correction required by any segment of length S′ common to the two targets occurs at the positions shown. From Eqn. (10.2.18), the difference in range curvature corrections required for those two targets, assuming f_R to be the same for both, is

    (V_st²/2R_c){(S/2)² − [S/2 − S(δs)/S′]²}
       = (V_st²/2R_c)[S²(δs)/S′](1 − δs/S′) ≤ δR/4

for the quarter cell criterion, requiring a bound

    S′ ≥ (2V_st²S²δs/R_cδR)(1 − δs/S′)    (10.2.20)

in which the term δs/S′ may be dropped.

In considering the computational requirements of the step transform SAR algorithm, Sack et al. (1985) conclude that the FFT operations needed are somewhat less computationally demanding than in the case of the rectangular algorithm. However, it is not clear what the overall operation time of such an algorithm might be in the case of a satellite platform. The FFT operations typically constitute only on the order of half the calculations, and the FFT is a very efficient operation in comparison with interpolation or range migration correction in other ways. As a result, techniques such as this, based on spectral analysis, are being used by ERS-1 for high-speed production of image products.

Yet another SAR image formation algorithm has a long history, especially in the aircraft "community". This is the polar processing algorithm, which can also be linked to the general idea of deramp processing, and to which we now turn.

10.3 POLAR PROCESSING

An algorithm of considerable practical importance has been developed over the past decade for the digital processing of SAR data, primarily for use in aircraft platforms, but not in general restricted to that case. This has come to be known as the polar processing algorithm. The algorithm is rooted in the general class of deramp algorithms discussed in Section 10.1. As in all deramp processing, the mechanism of constant frequency filtering is salvaged by converting the linear FM signal due to a target return into a constant frequency signal, whose frequency encodes spatial position, either in range or along track, or both in the case of a two dimensional "frequency", a wave number vector. In contrast with the step transform algorithm of Section 10.2, which retains the range migration effects until compression processing is under way, in polar processing the range walk effect is removed during preprocessing of the radar returns. This simplifies the actual compression part of the algorithm. Looked at from the point of view of inversion of the SAR system Green's function (Section 4.1.1), in polar processing the data are formatted in such a way that the Green's function of the resulting reformatted data is very simple, just that which is inverted by a single (two-dimensional) Fourier transformation. This is in complete analogy to the implementation of the range compression matched filter as a simple Fourier transform operation in the deramp algorithm for range processing, once the data have been formatted properly by the range deramp operation.
As originally phrased (Walker, 1980), the algorithm was intended for the imaging of a rotating object (a planet, for example) by a fixed radar, in the procedure which has now come to be known as inverse SAR (ISAR) (Wehner, 1987, Chapter 7). Although we follow the development presented by Walker (1980), we shall use the language appropriate to a moving platform and a stationary target. In either case, the use of polar processing is particularly appropriate in the situation that a relatively localized region at some distance from the platform is to be imaged. Whether or not the radar is side-looking is of no great concern - the algorithm is useful for the case of large squint, as in the spotlight mode of SAR operation (Brookner, 1977). Spotlight processing has also been specifically related to tomographic imaging, a point of view which also relates to polar processing (Munson et al., 1983).

10.3.1 The Basic Idea of Polar Processing

Let us consider first the situation of a unit point target located at a fixed vector position R_t, possibly in space (Fig. 10.11). The origin of coordinates is some arbitrary point in the general vicinity of the region to be imaged. A radar moves in space along some path described by a vector R_an(t), which is assumed to be known at every instant. The radar transmits pulses s(t), assumed to be linear FM with frequency rate K:

    s(t) exp(j2πf_c t) = exp[j2π(f_c t + Kt²/2)],  |t| < τ_p/2    (10.3.1)

For the moment we ignore the change in range to target from time of pulse transmission until reception. With the origin of time taken at the instant of transmission of a pulse, the received waveform is then

    v_n(t) = s(t − t_n) exp[j2πf_c(t − t_n)]    (10.3.2)

where t_n = 2R_n/c, and R_n is the (approximately constant) range to target during pulse period n.

Figure 10.11 Geometry of radar encounter with target in polar formulation.

Deramping the Received Data
For each pulse, the received signal Eqn. (10.3.2) is deramped, just as in the first step of deramp range compression (Section 10.1), using the waveform Eqn. (10.3.1), delayed and conjugated:

    d(t) = s*(t − t_an) exp[−j2πf_c(t − t_an)]
         = exp{−j2π[f_c(t − t_an) + K(t − t_an)²/2]}    (10.3.3)

where t_an = 2R_an/c is the known range time from radar to coordinate origin during pulse period n. The result is a video signal

    g(t) = d(t)v_n(t) = exp{−j2π[(f_c + Kt)(t_n − t_an) − K(t_n² − t_an²)/2]}
         = exp{−j2π[(f_c + K(t − t_an))(t_n − t_an) − K(t_n − t_an)²/2]}    (10.3.4)

The real signal represented by Eqn. (10.3.4) is then complex basebanded ("I, Q detected") to obtain the complex signal Eqn. (10.3.4) itself. Letting ΔR be the scene extent, and noting that t − t_an and t_n − t_an are of order 2ΔR/c, the second term in the exponent on the right can be dropped if ΔR « cf_c/3|K|, which is normally the case.
From Fig. 10.11, the range from radar to target during pulse n is approximately

    R_n = R_an − R̂_an · R_t    (10.3.6)

where R̂_an is the unit vector R_an/R_an. Then the deramped video Eqn. (10.3.4) takes the form Eqn. (10.3.7), with time samples given by Eqn. (10.3.8), where t_k = k/f_s. By comparison of Eqn. (10.3.7) with Eqn. (10.3.8), the data array is seen to be just Eqn. (10.3.9). The pulse and time indexes (n, k) are joined in the vector r_s, which from Eqn. (10.3.9) appears as a vector wave number.

Figure 10.12 Region of data index in polar processing.

10.3.2 Polar Processing Details

In this section, we discuss an alternate deramp procedure, and examine an approximation made above.

Frequency Domain Deramping
In some situations the deramp processing described in Section 10.1 may indeed be feasible, if implemented at some video offset frequency. This is especially the case if the region of targets to be imaged is small enough that the entire region is covered simultaneously by a single pulse, so that a single reference ramp of reasonable width can be used to deramp the return from any target point (Fig. 10.2).

More generally, the equivalent of the deramp operation Eqn. (10.3.4) can be realized in the Fourier transform domain. To that end, the data Eqn. (10.3.2) across the full target region are first downconverted to some convenient video offset frequency f_1, then time sampled and Fourier transformed. For a transmitted pulse Eqn. (10.3.1), and again assuming the range of the radar from the target can be considered constant during the time of pulse period n, the received signal is

    v_n(t) = s(t − t_0) exp[j2πf_c(t − t_0)] exp[−j2π(f_c − f_1)t]    (10.3.12)

so that the transformed response is

    V_n(f) = S(f − f_1) exp[−j2π(f − f_1)t_0] exp(−j2πf_c t_0)    (10.3.13)

where S(f) is the baseband spectrum of the transmitted pulse. After complex basebanding, we have available the frequency samples Eqn. (10.3.14), where the factor Eqn. (10.3.15) is constant, depending however on pulse number. A result equivalent to that of the time domain deramp operation Eqn. (10.3.4) can now be realized by frequency domain operations on the spectrum Eqn. (10.3.14).
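This frequency-domain deramp can be sketched end to end for a single baseband pulse: transform the return, add the phase 2πf t_an, divide out the known chirp spectrum, and inverse transform; the result compresses at the relative delay t_n − t_an. Everything below (rates, delays, the small-magnitude guard) is an illustrative assumption:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the text)
K = 1.0e12        # chirp rate, Hz/s
tau_p = 20.0e-6   # pulse length, s
fs = 40.0e6       # sample rate, Hz
t_an = 10.0e-6    # reference delay (radar to coordinate origin), s
t_n = 10.8e-6     # target delay, s

N = 2048
t = np.arange(N) / fs

def chirp(delay):
    """Baseband linear FM pulse of length tau_p starting at `delay`."""
    tt = t - delay - tau_p / 2.0
    return np.where(np.abs(tt) < tau_p / 2.0, np.exp(1j * np.pi * K * tt ** 2), 0.0)

s = chirp(0.0)      # transmitted pulse (known)
v = chirp(t_n)      # received return from a unit target at delay t_n

S = np.fft.fft(s)
V = np.fft.fft(v)
f = np.fft.fftfreq(N, d=1.0 / fs)

# Deramp in the frequency domain: add 2*pi*f*t_an to the phase and divide
# out S(f); a magnitude guard avoids dividing by near-zero out-of-band values
G = V * np.exp(1j * 2.0 * np.pi * f * t_an) / np.where(np.abs(S) > 1e-3, S, 1.0)
g = np.fft.ifft(G)

delay_est = t[np.argmax(np.abs(g))]
print(delay_est)  # near t_n - t_an = 0.8 microseconds
```

The compressed output depends only on the relative delay t_n − t_an, which is what removes the range walk to the reference point pulse by pulse.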
The procedure consists in adding values 2πft_an to the phase of the spectrum Eqn. (10.3.14), where t_an = 2R_an/c, and dividing out the known spectrum S(f). This results in deramped data

    G(f) = a_n exp[−j2πf(t_n − t_an)]    (10.3.16)

Using again the approximation Eqn. (10.3.6), this is

    G(f) = a_n exp[j(4πf/c)R̂_an · R_t]    (10.3.17)

For a distributed target C(R_t), Eqn. (10.3.17) becomes the superposition Eqn. (10.3.18). Defining the storage index by

    r_s = (2f/c)R̂_an    (10.3.19)

the corresponding data field in wave number space is Eqn. (10.3.20), which are essentially the same numbers as in Eqn. (10.3.10). Fourier transformation of the data Eqn. (10.3.20) yields the complex function a_nC(R_t), however. Since the amplitude of a_n is unity, as in Eqn. (10.3.15), the factor disappears when the image |·|² is taken.

Intra-Pulse Range Variation
As we have discussed in Section 4.1.1, the approximation that the target range is a constant value R_n during the time of pulse period n is reasonable in the case of a side looking radar, and in the case of the longer wavelengths, in particular at L-band. However, for a higher frequency system, X-band, say, and particularly for a system with considerable forward squint, this may not be the case. We need to examine the approximation again, and to describe a means for compensation of the resulting effects if it is not well satisfied.

As in Section 4.1.1, suppose that v_r(t) is the signal received at time t in response to a unit point target. This is the value of the signal transmitted at some time t′ which is earlier than t by the two-way transit time T. The time T in general depends on t, or equivalently on t′: T = T(t′). Let R(t) be the slant range of the radar from the target at time t. Then the total travel time T(t′) must satisfy

    cT(t′) = R[t′ + T(t′)] + R(t′)    (10.3.21)

Since the transit time T is not constant, the received signal v_r(t) is not simply a time shifted version of the transmitted signal s(t), but rather has a different waveform, which furthermore depends on the specific form of the function R(t). Just as was the case with matched filter range compression, this can introduce a difficulty. The matter involves further consideration of the form of the delay function T(t′) and the way in which it affects the received signal waveform v_r(t). We consider only signals in the baseband.

To that end, let us consider the spectrum of the received signal:

    V_r(f) = ∫_{−τ_p/2}^{τ_p/2} s(t′) exp{−j2πf[t′ + T(t′)]} dt′    (10.3.22)

We can use an expansion of T(t′) around the time of launch of the midpoint (say) of the pulse, taken as t′ = 0. Using Eqn. (10.3.21), and defining R_t and R_r as the ranges to target at the times of transmission and reception of the midpoint of the pulse, i.e., R_t = R(0) and R_r = R[T(0)], we obtain from Eqn. (10.3.21)

    T(0) = T_0 = (R_t + R_r)/c
    Ṫ(0) = Ṫ_0 = (Ṙ_t + Ṙ_r)/(c − Ṙ_r)
    T̈(0) = T̈_0 = {R̈_t + [(c + Ṙ_t)/(c − Ṙ_r)]²R̈_r}/(c − Ṙ_r)    (10.3.23)

Using these, the factor in the phase in the exponent of the integrand of the received spectrum Eqn. (10.3.22) becomes

    t′ + T(t′) = T_0 + [1 + Ṫ_0]t′ + T̈_0 t′²/2    (10.3.24)

In the course of forming the data Eqn. (10.3.20) to be used in image formation, we adjust the phase of V_r(f) of Eqn. (10.3.22) by adding the quantity 2πft_an, where t_an is some nominal range time to the reference point for the pulse in question, say t_an = 2R_an(0)/c, the range at time of transmission of the midpoint of the pulse. We then divide out the transmitted pulse spectrum S(f), with the result (dropping the term Ṫ(t′) « 1 in the scale factor):

    G(f) = V_r(f) exp(j2πft_an)/S(f)
         = {∫_{−τ_p/2}^{τ_p/2} s(t′) exp{−j2πf[t′ + T(t′)] + j2πft_an} dt′}/S(f)    (10.3.25)

Using the approximation Eqn. (10.3.6), as well as the expansion Eqn. (10.3.24), we have Eqn. (10.3.26).
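The sizes involved in this intra-pulse effect are easy to gauge. The sketch below evaluates Ṫ_0 from Eqn. (10.3.23) for an assumed range rate, the worst-case extra linear phase 2πf Ṫ_0 t′ at the band and pulse edges, and compares Ṫ_0 against the linear-FM tolerance 1/(2B_Rτ_p) quoted in the sequel. All numbers are assumptions, not values from the text:

```python
import math

# Illustrative numbers (assumptions, not from the text)
c = 3.0e8          # speed of light, m/s
Rdot = 150.0       # range rate at the pulse midpoint, m/s (squinted geometry)
tau_p = 33.0e-6    # pulse length, s
B_R = 19.0e6       # pulse bandwidth |K|*tau_p, Hz

# First derivative of the delay, Eqn (10.3.23), with equal transmit/receive
# range rates: T_dot0 = (Rdot_t + Rdot_r)/(c - Rdot_r)
T_dot0 = 2.0 * Rdot / (c - Rdot)

# Worst-case extra linear phase 2*pi*f*T_dot0*t' at the band edge f = B_R/2
# and pulse edge t' = tau_p/2
phase_edge = 2.0 * math.pi * (B_R / 2.0) * T_dot0 * (tau_p / 2.0)

# Tolerance on T_dot0 for a well-localized compressed pulse (linear FM case)
limit = 1.0 / (2.0 * B_R * tau_p)
print(phase_edge, T_dot0 < limit)
```

For modest range rates the mismatch phase stays far below a radian, consistent with the text's conclusion that the effect matters mainly for long X-band pulses at appreciable squint.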
Then, Eqn. (10.3.25) becomes

    G(f) = {exp(j2πr_s · R_t)/S(f)}
           × ∫_{−τ_p/2}^{τ_p/2} s(t′) exp{−j2πf[(1 + Ṫ_0)t′ + T̈_0 t′²/2]} dt′    (10.3.27)

where the storage index r_s is as in Eqn. (10.3.19).

The response G(f) of Eqn. (10.3.27) is compressed by Fourier transformation. The discrepancy between the integral term in Eqn. (10.3.27) and S(f) represents a mismatch between the compression filter exp(−j2πr_s · R_t) and the signal Eqn. (10.3.27). Analysis of the resulting distortion in general is possible, but it suffices for our purpose to consider that the transmitted pulse is a linear FM. Further, we will drop the quadratic term in the exponent of the integral in Eqn. (10.3.27). Consideration of some typical values indicates the feasibility of this latter approximation. However, in the case of a long X-band pulse, the linear term is significant. In the case of Seasat, with f = 1 GHz roughly and a 33 µs pulse, the linear term is only 0.16 rad, and can itself be neglected, but, as Barber (1985) has mentioned, only marginally. With an X-band system and 1° squint, using the 33 µs pulse, the linear term at the edge of the integral amounts to a phase of 1.7 rad, and further analysis and care are warranted.

In any event, we need only consider the linear phase distortion term in Eqn. (10.3.27). For the special case of a linear FM transmitted pulse of large bandwidth time product, the analysis can easily be carried further. Since the matter has to do essentially with range compression, we can consider the treatment of a single pulse. For that case, still remaining in the baseband,

    s(t) = exp(jπKt²)    (10.3.28)

The spectrum, from Eqn. (3.2.29), is

    S(f) = exp(−jπf²/K)

to within a constant. From Eqn. (10.3.27) we have (dropping the quadratic term):

    G(f) = exp(j2πr_s · R_t)S[(1 + Ṫ_0)f]/S(f)
         = exp(j2πr_s · R_t) exp(−j2πṪ_0 f²/K)    (10.3.29)

where we use the fact that Ṫ_0 is small to make the approximation

    (1 + Ṫ_0)² = 1 + 2Ṫ_0

The signal represented by the spectrum Eqn. (10.3.29) will be processed by simple Fourier transformation. We want the result to be a pulse in time of the nominal width 1/B_R, where B_R is the pulse bandwidth |K|τ_p.

The matter reduces to determining the "spectrum" (actually a time waveform in this case) of a linear FM signal with extent B_R and small chirp constant 2Ṫ_0/K. Such spectra have been computed by Jin and Wu (1984) in the applicable case (Fig. 4.21). The result is a well-localized pulse with reasonable sidelobes so long as

    Ṫ_0 < K/2B_R² = 1/(2Kτ_p²) = 1/(2B_Rτ_p)

Since approximately

    Ṫ_0 = 2Ṙ/c

this corresponds to a limit, Eqn. (10.3.30), which is likely to be well satisfied. In the case of more complex waveform coding, the criteria may be more stringent. In that case, one might make an estimate of a nominal Ṫ_0 for a scene, or some part thereof, and compensate that value by Doppler shifting the spectrum S(f) before using it in Eqn. (10.3.25) to generate G(f) from V_r(f).

10.3.3 An Autofocus Procedure for Polar Processing

The deramp operation described in Section 10.3.2, in which a linear phase term exp(j2πft_an), with phase proportional to the range R_an = ct_an/2 of the radar from a reference point, is applied to the data spectrum, obviates the problem of range migration correction in forming the image in polar processing. In addition, it yields a kernel (Green's function) in Eqn. (10.3.20) which is trivially inverted (by Fourier transformation). However, in order to carry out the procedure, it is necessary to know the range R_an of the radar antenna from the reference point (the origin in Fig. 10.11), and to know it pulse by pulse. Any errors in this range, due perhaps to errors in the navigation system of an aircraft, or tracking errors in the case of a satellite platform, will introduce phase errors into the deramped data corresponding to Eqn. (10.3.20), and degrade the image. Following the work of Walker (1980), who describes the effects of such errors on the image, and drawing upon the subaperture correlation procedure for autofocus described in Section 5.3.2 in reference to the rectangular algorithm, it is possible to suggest a scheme for the automatic compensation of some of these motion compensation errors which can arise in polar processing.

Let us first make a specialization of the procedures described in Section 10.3.2 to the (usual) case of a planar image, so that R_t (Fig. 10.11) is a two-dimensional vector. This allows use of a two-dimensional data field. Let us also assume that in the deramp operation leading to Eqn. (10.3.16) a (measured or presumed) vector R′_an is used, which might not be the same as
the true radar position vector R_an at pulse n. As before, the single pulse data transform Eqn. (10.3.14) is essentially unchanged, where we have neglected any of the Doppler effects discussed in Section 10.3.2. The deramp operation is carried out with a phase factor exp(j4πfR′_an/c) to yield data Eqn. (10.3.16) to be Fourier transformed, as in Eqns. (10.3.33) and (10.3.34), where we assume both R_t and the position error s are small with respect to R′_an, and keep only first order terms. Then, again to first order, we obtain Eqn. (10.3.35), where R̂′_ap is the unit vector in the (x, y) plane (Fig. 10.14), as expressed in Eqns. (10.3.40) and (10.3.41).
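The measurement that drives this autofocus scheme is the one described in Section 5.3.2: the relative displacement between two subaperture images of the same scene is read off the peak of their cross-correlation. A self-contained sketch with a fabricated scene and shift:

```python
import numpy as np

# Fabricated scene and displacement (assumptions, for illustration only)
rng = np.random.default_rng(0)
scene = rng.standard_normal((64, 64))        # common reflectivity pattern
shift = (3, -2)                              # relative displacement (rows, cols)
img_k = scene
img_m = np.roll(scene, shift, axis=(0, 1))   # image m displaced w.r.t. image k

# FFT-based circular cross-correlation of the two images
F_k = np.fft.fft2(img_k)
F_m = np.fft.fft2(img_m)
xcorr = np.fft.ifft2(np.conj(F_k) * F_m)
peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)

# Fold the peak index into signed offsets; this is the measured displacement
est = tuple(int((p + 32) % 64 - 32) for p in peak)
print(est)  # the cross-correlation peak sits at the relative shift
```

Measured offsets of this kind, taken over several subaperture pairs, supply the left sides of the linear equations solved below for the motion error parameters.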
Determination of the time t corresponding to each data point can be done if we assume the antenna position azimuth angle ψ_a(t) (Fig. 10.14) to be a monotonic function over the full synthetic aperture observation time. From Fig. 10.14 we have

    y/x = tan[ψ_a(t)]    (10.3.42)

From this we can determine t for any point (x, y) in the data array, possibly by table look up if ψ_a(t) is a complicated function. More usually, ψ_a(t) will admit a low order rational fraction approximation. For example, if the path of motion of the radar vehicle is nominally a straight line, so that

    x_a(t) = x_0 + ẋ_0t
    y_a(t) = y_0 + ẏ_0t    (10.3.43)

then

    y/x = tan[ψ_a(t)] = (y_0 + ẏ_0t)/(x_0 + ẋ_0t)    (10.3.44)

For a specified point in the data array, Eqn. (10.3.44) can be solved for t as an explicit function of y/x. The same is the case if x_a(t), y_a(t) are quadratic, or can at least be so approximated over the span of time of interest in the process. In any case, from the measured radar position R′_a relative to the reference point, which corresponds to a particular data index r_p, values of t(x, y) can be found for any index in the data array.

Let us now consider the distortion term in the data G(f) of Eqn. (10.3.39). We first consider an expansion of the error s(t) to some adequate order about some nominal time t_0, taken as t = 0, as in Eqn. (10.3.45), where for illustration we truncate at the second order. The distortion term in Eqn. (10.3.39) is then

    (4πf/c)R̂_a · s = (4πf/c){k̂ cos(φ_a) + [sin(φ_a)/r_p]r_p} · s
                    = 2πr_p · s + (4πf/c)k̂ · s cos(φ_a)
                    = 2π[x_pε_x0 + y_pε_y0 + g(x_p, y_p)]    (10.3.46)

where we use Eqn. (10.3.37) and Eqn. (10.3.41), and note from Eqn. (10.3.40) that

    R̂_ap = r̂_p = r_p/r_p
    r_p = (2f/c) sin(φ_a)

and define

    g(x_p, y_p) = x_p(ε̇_x0t + ε̈_x0t²/2) + y_p(ε̇_y0t + ε̈_y0t²/2)
                  + [r_p/tan(φ_a)](ε_z0 + ε̇_z0t + ε̈_z0t²/2)    (10.3.47)

In Eqn. (10.3.47) we recall that t(x_p, y_p) is a known function, and note that g(x_p, y_p) depends only linearly on all the parameters of the motion error as written in Eqn. (10.3.45).

Now consider some point (x_k, y_k) in the data array. Expanding g(x_p, y_p) of Eqn. (10.3.47) at that point, we obtain Eqn. (10.3.48), where (x, y) are the deviations of (x_p, y_p) from (x_k, y_k). If we now make an image using the data over the subregion (subaperture) locally around (x_k, y_k), a target at (x_i, y_i) will appear in the image displaced by the phase error terms in Eqn. (10.3.36) which are linear in the data coordinates. Those are the linear terms of Eqn. (10.3.46). The target point will therefore appear with image coordinates

    x_ik = x_i + ε_x0 + ∂g/∂x_k
    y_ik = y_i + ε_y0 + ∂g/∂y_k    (10.3.49)

with the quadratic distortion terms contributing defocus.

If we repeat this procedure for other points in the data array, (x_m, y_m), (x_n, y_n), …, we can compute the cross-correlation functions of the subaperture images two by two. Since the target displacements are different in the different subaperture images, as in Eqn. (10.3.49), the correlation functions will peak away from zero. Specifically, for example, if we correlate images k and m, the peak of that cross-correlation function will occur at

    δx_km = ∂g/∂x_k − ∂g/∂x_m
    δy_km = ∂g/∂y_k − ∂g/∂y_m    (10.3.50)

The left sides of these two equations can be measured. The right sides are linear in the error parameters ε_x0, ε̇_x0, ε̇_y0, …. Therefore, by computing at least as many correlation pairs as there are parameters in the model Eqn. (10.3.45) to be determined, we can solve for the error parameters, perhaps using a least squares procedure if the set of equations Eqn. (10.3.50) is over determined. With specific values of the coefficients in Eqn. (10.3.45) in hand, the function g(x_p, y_p) of Eqn. (10.3.47) is fully determined, again using the function t(x_p, y_p), perhaps by a look-up procedure on tan[ψ_a(t)]. The data function Eqn. (10.3.36)
can then be compensated by multiplying the data array entries point by point by the compensator

    f(x_p, y_p) = exp[−j2πg(x_p, y_p)]

The result is to replace the data Eqn. (10.3.39) by Eqn. (10.3.51), where the remaining factor is defined in Eqn. (10.3.52). The image is thereby fully corrected for the distortions due to motion compensation errors, except that it appears with a constant position shift Eqn. (10.3.51) in accord with the constant vehicle position offset.

In carrying out the procedure indicated, values for the coefficients in the derivatives ∂g/∂x_k, ∂g/∂y_k are needed in Eqn. (10.3.50). These involve the derivatives dt/dx_p, dt/dy_p, which may be known explicitly if the motion of the vehicle was taken as a simple approximation such as in Eqn. (10.3.43). Otherwise, we can write

    d(y_p/x_p)/dx_p = d{tan[ψ_a(t)]}/dx_p
                    = {d tan[ψ_a(t)]/dt}(dt/dx_p)    (10.3.53)

In this the left side is a simple function of the point (x_p, y_p), which can be taken specifically at any of the subaperture center points (x_k, y_k) of interest. The first term on the right can be found, numerically if necessary, from the vehicle trajectory in the vicinity of the subaperture points. The needed term dt/dx_p is then calculated at the point (x_k, y_k). A similar procedure yields the derivative dt/dy_k.

Polar processing in general, even without autofocus considerations, involves a considerable amount of data interpolation. The ultimate image formation, the Fourier transform of the data field G(x_p, y_p), will be done digitally using the FFT. It is therefore necessary that data values be available referred to a

REFERENCES

Ausherman, D. A., A. Kozma, J. L. Walker, H. M. Jones and E. C. Poggio (1984). "Developments in radar imaging," IEEE Trans. Aero. and Elec. Sys., AES-20(4), pp. 363-400.
Barber, B. C. (1985). "Theory of digital imaging from orbital synthetic-aperture radar," Int. J. Rem. Sens., 6(7), pp. 1009-1057.
Brookner, E., ed. (1977). "Synthetic aperture radar spotlight mapper," Chapter 18 in: Radar Technology, Artech House, Dedham, MA.
Fitch, J. P. (1988). Synthetic Aperture Radar, Springer-Verlag, New York.
Hovanessian, S. A. (1980). Introduction to Synthetic Array and Imaging Radars, Artech House, Dedham, MA.
Jin, M. Y. and C. Wu (1984). "A SAR correlation algorithm which accommodates large-range migration," IEEE Trans. Geosci. and Remote Sensing, GE-22(6), pp. 592-597.
Martinson, L. (1975). "A programmable digital processor for airborne radar," IEEE Inter. Radar Conf., April, pp. 186-191.
Munson, D. C. Jr., J. D. O'Brien and W. K. Jenkins (1983). "A tomographic formulation of spotlight-mode synthetic aperture radar," Proc. IEEE, 71(8), pp. 917-925.
Perry, R. P. and H. W. Kaiser (1973). "Digital step transform approach to airborne radar processing," Record, NAECON '73, May, pp. 280-287.
Perry, R. P. and L. W. Martinson (1977). "Radar matched filtering," Chapter 11 in: Radar Technology (Brookner, E., ed.), Artech House, Dedham, MA.
Sack, M., M. R. Ito and I. G. Cumming (1985). "Application of efficient linear FM matched filtering algorithms to synthetic aperture radar processing," IEE Proc., 132(Part F, No. 1), pp. 45-57.
Vant, M. R. and G. E. Haslam (1980). "A Theory of 'Squinted' Synthetic-Aperture Radar," Report No. 1339, Communications Research Centre, Ottawa, November.
Vant, M. R. and G. E. Haslam (1990). "Comment on 'A new look at nonseparable synthetic aperture radar processing'," IEEE Trans. Aero. and Elec. Sys., AES-26(1), pp. 195-197.
Walker, J. L. (1980). "Range-Doppler imaging of rotating objects," IEEE Trans. Aero. and Elec. Sys., AES-16(1), pp. 23-52.
Wehner, D. R. (1987). High Resolution Radar, Artech House, Norwood, MA.
Wu, C. (1976). "A digital system to produce imagery from SAR data," Paper 76-968,
rectangular grid on the data array. In contrast, the uniformly s~aced s~mpling
A/AA Systems Design Driven by Sensors, Pasadena, California, October 18-20.
done by the video digitization for each pulse produces values umformly mdexed
Wu, K. H. and M. R. Vant (1984). "Coherent Sub-Aperture Processing Techniques for
along rays in the data array, with the rays not in general.parallel, but r~ther
Synthetic Aperture Radar," Report No. 1368, Communications Research Centre,
in the polar format. Without careful consideration of the ~ompu.tatt~~al Ottawa, January.
operations; it is difficult io make a clear state~ent about the relat1v~ des1rab1h~y Wu, K. H. and M. R. Vant (1985). "Extensions to the step transform SAR processing
of polar processing and the rectangular algonthm. The trade-offs mvolved ~ill technique," IEEE Trans. Aero. and Elec. Sys., AES-21(3), pp. 338-344.
also depend on the radar system deployment, specifical~y t~e slant range, squmt
angle, and swath size. This algorithm is commonly used m auborne SAR systems
for target detection.
APPENDIX A

DIGITAL SIGNAL PROCESSING

In this Appendix, we will describe the digital signal processing algorithms required to realize the main operations needed in SAR image formation. We will have to do with bandlimited signals, such as the radar IF signal and the SAR Doppler signal. We will also include a discussion of analog filter calculations, and an analysis of the process of time sampling a bandlimited signal to produce the numbers for input to digital processing algorithms.

A.1 ANALOG LINEAR SYSTEM THEORY

The first part of the definition of a linear system is that the system output in response to the sum of two inputs is the sum of the outputs in response to the two inputs taken separately. Symbolically, if the system operation is expressed as O(·), and if we choose to think of the inputs and outputs as time functions f(t) and g(t) respectively, then a system is linear (almost) if and only if

O[f_1(t) + f_2(t)] = O[f_1(t)] + O[f_2(t)] = g_1(t) + g_2(t)    (A.1.1)

for any inputs f_1, f_2 from the class of functions for which the system output is defined. (The system output must be well defined, in the sense that an input f(t) uniquely determines the output g(t), although the reverse need not be true.) It follows from Eqn. (A.1.1) that, for integer m, n, we have:

O[(m/n)f(t)] = (m/n)O[f(t)]

so that

O[af(t)] = aO[f(t)]    (A.1.2)

for arbitrary scalar a. (Note that the output of a linear system must be identically zero if the input is zero.)

For linearity to hold we do not require that

O[f(t − t')] = g(t − t')

If this latter property is true, that is, if a time shift of the input causes only a corresponding time shift of the output, the system is in addition called stationary. Stationarity, although convenient, is not a fundamental property on a par with linearity. Without stationarity, the world of signal processing can proceed relatively unimpeded, but without linearity considerable complications ensue.

Impulse Response and Convolution

A linear system, whether stationary or not, is completely specified by its unit impulse response h(t|t'), which is the response g(t) of the system as a function of time t to an input δ(t − t') which is a unit impulse (Dirac) function occurring at time t'. In another terminology, h(t|t') is the Green's function of the system. If we consider an arbitrary continuous input function f(t), the defining property of the impulse function is:

f(t) = ∫_{−∞}^{∞} f(t') δ(t − t') dt'    (A.1.4)

The linearity properties Eqn. (A.1.1) and Eqn. (A.1.2), and the definition of the impulse response h(t|t'), then yield at once

g(t) = O[f(t)] = O[∫_{−∞}^{∞} f(t') δ(t − t') dt']
     = ∫_{−∞}^{∞} f(t') O[δ(t − t')] dt' = ∫_{−∞}^{∞} f(t') h(t|t') dt'    (A.1.5)
For a stationary system the impulse response depends only on the elapsed time, h(t|t') = h(t − t'), and Eqn. (A.1.5) becomes the convolution

g(t) = ∫_{−∞}^{∞} h(t − t') f(t') dt'    (A.1.7)

which has the graphical interpretation shown in Fig. A.1. By a change of variable, Eqn. (A.1.7) becomes also

g(t) = ∫_{−∞}^{∞} h(t') f(t − t') dt'    (A.1.8)

Figure A.1 Graphical interpretation of the convolution Eqn. (A.1.7).

If we take as input the function f(t) = exp(st), where s is an arbitrary complex number, from Eqn. (A.1.8) we have the output (assuming convergence of the integral):

g(t) = ∫_{−∞}^{∞} h(t') exp[s(t − t')] dt' = exp(st) H(s)    (A.1.9)

where we define the system "transfer function"

H(s) = ∫_{−∞}^{∞} h(t') exp(−st') dt'    (A.1.10)

From Eqn. (A.1.9), for any s the function exp(st) is an eigenfunction of the system. The corresponding eigenvalue is H(s) of Eqn. (A.1.10). We then hope to be able to find coefficients F(s) such that an arbitrary input function can be expressed in terms of the set of eigenfunctions exp(st) by the linear combination

f(t) = ∫ F(s) exp(st) ds

If this can be done, then the corresponding output function has the expression

g(t) = O[f(t)] = O[∫ F(s) exp(st) ds] = ∫ F(s) O[exp(st)] ds
     = ∫ F(s) H(s) exp(st) ds = ∫ G(s) exp(st) ds    (A.1.12)

where we define

G(s) = F(s) H(s)
f(t) = ℱ^{−1}[F(jω)] = ∫_{−∞}^{∞} F(jω) exp(jωt) dω/2π    (A.1.15)

f(t) = Σ_{n=−∞}^{∞} f_n exp(jn2πt/T)    (A.1.17)

A.2 SAMPLING OF BANDLIMITED SIGNALS

The spectrum also satisfies F(jω) = F*(−jω) if f(t) is real, from Eqn. (A.1.16). In terms of the sinc function

sinc(u) = [sin(u)]/u    (A.2.3)

Eqn. (A.2.2) expresses the bandlimited function f(t) at any arbitrary t exactly in terms of the infinite sequence of its samples f_n = f(n/f_s) at discrete times t_n = n/f_s. Equation (A.2.2) is also Whittaker's formula for interpolation of the samples. We now inquire into the errors which are introduced if either of these requirements is violated.

Suppose then that f(t) is any function, bandlimited or not, having a Fourier transform F(jω). Let f_s be some arbitrary sampling frequency, and let f_n = f(n/f_s) be the time samples of f(t). Let these be encoded into a mathematical construct called (historically) the "sampled signal":

f_!(t) = Σ_{n=−∞}^{∞} f_n δ(t − n/f_s)    (A.2.4)

Taking the Fourier transform of Eqn. (A.2.4) term by term yields

F_!(jω) = Σ_{n=−∞}^{∞} f_n exp(−jnω/f_s)    (A.2.5)

In Eqn. (A.2.5) we recognize a Fourier series, Eqn. (A.1.17), with series coefficients f_n. Therefore, from Eqn. (A.1.18), we must have

f_n = (1/f_s) ∫_{−f_s/2}^{f_s/2} F_!(j2πf) exp(j2πnf/f_s) df = ∮_{|z|=1} F(z) z^{n−1} dz/2πj

By the uniqueness property of Fourier coefficients, comparison of the two forms Eqn. (A.2.7) and Eqn. (A.2.8) for f_n yields the result that

F_!(jω) = f_s Σ_{k=−∞}^{∞} F[j(ω + kω_s)]    (A.2.9)

That is, F_!(jω) is a superposition of shifted replicas of the transform F(jω) of the analog signal f(t), as indicated in Fig. A.2.
Figure A.2 Bandlimited spectrum (solid line); replications due to sampling (broken lines), without aliasing; and reconstruction filter.

Figure A.3 Spectrum of bandlimited signal and replications due to sampling. Aliasing present due to f_s < B.
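Per Eqn. (A.2.9), sampling superposes replicas of the spectrum shifted by multiples of f_s, so a tone above f_s/2 reappears at an alias frequency inside the base band, as Fig. A.3 suggests. A minimal numerical sketch (numpy assumed; the sampling rate and tone frequency are illustrative choices, not values from the text):

```python
import numpy as np

fs = 100.0           # sampling rate in Hz (illustrative)
f_true = 70.0        # tone above fs/2 = 50 Hz, so it must alias
n = np.arange(256)
x = np.cos(2 * np.pi * f_true * n / fs)

# locate the spectral peak of the sampled tone in [0, fs/2]
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
f_peak = np.fft.rfftfreq(len(x), d=1 / fs)[np.argmax(spectrum)]

# Eqn. (A.2.9) places replicas at f_true + k*fs; the one falling in the
# base band is at |f_true - fs| = 30 Hz
print(f_peak)
```

The peak appears near 30 Hz rather than 70 Hz, which is exactly the aliasing pictured in Fig. A.3.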
where z = exp(jω/f_s), and similarly for H(jω). Then in the same band we have

f_s G(jω) = Σ_{k=−∞}^{∞} g_k z^{−k} = f_s H(jω) F(jω)
          = (1/f_s) Σ_{m,n=−∞}^{∞} f_m h_n z^{−n−m}
          = (1/f_s) Σ_{k,m=−∞}^{∞} h[(k − m)/f_s] f(m/f_s) z^{−k}    (A.3.4)

The pair

f_n = (1/N) Σ_{k=0}^{N−1} F_k exp(j2πkn/N),    0 ≤ n ≤ N − 1

F_k = Σ_{n=0}^{N−1} f_n exp(−j2πkn/N),    0 ≤ k ≤ N − 1    (A.3.5)

is an identity in the two sets of numbers {f_n}, {F_k}, which are in general complex. This pair is the discrete Fourier transform (DFT) and its inverse. In case the numbers f_n, 0 ≤ n < N, are the only nonzero samples of a properly sampled bandlimited function, then the numbers F_k, 0 ≤ k < N, have the interpretation of samples of the spectrum (Eqn. (A.2.5) and Eqn. (A.2.9)).
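The DFT pair Eqn. (A.3.5) can be verified by evaluating the sums directly. A small sketch, numpy assumed; the O(N²) direct evaluation is for checking only, not for production use:

```python
import numpy as np

def dft(f):
    # direct evaluation of F_k = sum_n f_n exp(-j 2 pi k n / N), Eqn. (A.3.5)
    N = len(f)
    n = np.arange(N)
    return np.array([np.sum(f * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

def idft(F):
    # inverse: f_n = (1/N) sum_k F_k exp(+j 2 pi k n / N)
    N = len(F)
    k = np.arange(N)
    return np.array([np.sum(F * np.exp(2j * np.pi * k * n / N)) for n in range(N)]) / N

rng = np.random.default_rng(0)
f = rng.standard_normal(16) + 1j * rng.standard_normal(16)
F = dft(f)
f_back = idft(F)
print(np.max(np.abs(f - f_back)))  # roundoff level: the pair is an identity
```

The identity holds for arbitrary complex data, as the text states; no bandlimitedness assumption enters.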
The sequences computed by Eqn. (A.3.5) are periodic, in the variables n and k, respectively, with period N, so the restriction of the calculation Eqn. (A.3.5) to the base period [0, N − 1] loses nothing. If, however, circular convolution over the base period is used to compute the linear convolution of two sequences each occupying a full period of length N, in fact only the single value g_{N−1} will be computed correctly. As in Fig. A.5, the sample span must be extended sufficiently such that, for computation of the last value of g_n, the troublesome replication of h_n in a neighboring period interval has not yet moved in to take part in the computation, while the first replication of f_n after the base period has not yet begun. If f_n = 0 outside the span 0 ≤ n ≤ L − 1, and if h_n = 0 outside the span 0 ≤ n ≤ M − 1, then the linear convolution sequence g_n in Eqn. (A.3.2) has nonzero values at most over 0 ≤ n ≤ L + M − 2. We must have N ≥ L + M − 1 in order that the last value g_n of interest not be replaced by a replication of g_0. This also assures that the first replication of h_n outside the base period has just not begun to overlap the values of f_n in the base period.

Figure A.5 Linear convolution realized by circular convolution. Sample span extended to avoid circulatory replications.

Filtering a Data Stream by Transform Operations

If we are carrying out batch processing of a certain number L of input samples f_n through a filter with impulse response sequence h_n of length M, we can use the discrete Fourier transform procedure Eqn. (A.3.5) and Eqn. (A.3.8) with a value N larger than L + M − 2 to compute the convolution Eqn. (A.3.2). However, it may be that this value of N is inconveniently large, or we may have to do with an input sequence f_n that is on-going in time indefinitely. Special arrangements then need to be made to carry out the computation successfully.

As in Fig. A.6, suppose that we segment the input data flow f_n into sections of length L, each ith subsequence being defined by

f_n^i = f_{iL+n},    0 ≤ n ≤ L − 1    (A.3.10)

Then f_n = Σ_i f_n^i, and because of the linearity of the convolution operation Eqn. (A.3.2):

g_n = Σ_i g_n^i    (A.3.11)

where g_n^i is the output of the system in response to the ith input segment of Eqn. (A.3.10). If the impulse response sequence h_n is of length M, we need only choose some convenient value N ≥ L + M − 1 to carry out the component convolutions in Eqn. (A.3.11) correctly. The results are simply added. As shown in Fig. A.6, the result of convolution of the input sequence f_n^i with the impulse response sequence h_n will generally spread over more than one data interval. Then part of the output from one ith component convolution Eqn. (A.3.11) must be added to the outputs of other segments with which it overlaps. This procedure is called the overlap-add method of filtering an ongoing data stream.
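The overlap-add procedure just described can be sketched as follows (numpy assumed; the segment length, filter length, and power-of-two transform size are illustrative choices). Each L-point segment is convolved by an N ≥ L + M − 1 point transform, and the overlapping tails are added per Eqn. (A.3.11):

```python
import numpy as np

def overlap_add(f, h, L):
    # segment f into L-point sections (Eqn. (A.3.10)), convolve each with h
    # via an N >= L + M - 1 point FFT, and add overlapping tails (Eqn. (A.3.11))
    M = len(h)
    N = 1 << (L + M - 2).bit_length()       # convenient power of two >= L+M-1
    H = np.fft.fft(h, N)
    out = np.zeros(len(f) + M - 1, dtype=complex)
    for i in range(0, len(f), L):
        seg = f[i:i + L]
        g_i = np.fft.ifft(np.fft.fft(seg, N) * H)[:len(seg) + M - 1]
        out[i:i + len(seg) + M - 1] += g_i  # overlapping parts are added
    return out

rng = np.random.default_rng(1)
f = rng.standard_normal(1000)
h = rng.standard_normal(32)
g = overlap_add(f, h, L=128)
print(np.max(np.abs(g - np.convolve(f, h))))  # agrees with direct convolution
```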
Alternatively (Fig. A.7), the input stream can be segmented into blocks of length L = N, the contemplated transform size. If the impulse response sequence h_n is of length M, then we know from above that the first L + M − 1 − N = M − 1 points of the calculated convolution are incorrect, and only the remaining N − M + 1 points represent progress in computing the output data stream g_n. The procedure is then to shift the input segment M − 1 points farther back on the input stream than would otherwise be necessary, with the result that the N input points to each convolution calculation are made up of the last M − 1 points of the previous section, saved and reused, and the first N − M + 1 points of the data stream f_n which have not yet been used. The good N − M + 1 output points from each calculation are pieced together appropriately to form the output stream g_n. This procedure is called the overlap-save method, since the input segments must be overlapped, necessitating saving some of the input points from one computation to another.

Figure A.7 "Overlap-save" method of filtering a long data stream using N-point transform.
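A sketch of the overlap-save method (numpy assumed; the block size N and filter length are illustrative). Each N-point section reuses the last M − 1 input points of its predecessor, and the first M − 1 circular-convolution outputs are discarded:

```python
import numpy as np

def overlap_save(f, h, N):
    # N-point sections overlapped by M-1 points; the first M-1 outputs of each
    # circular convolution are wrong and are discarded
    M = len(h)
    step = N - M + 1                          # good points per section
    H = np.fft.fft(h, N)
    # prepend M-1 zeros so the first section has a (zero) history to reuse
    fpad = np.concatenate([np.zeros(M - 1), f])
    out = []
    for start in range(0, len(f), step):
        seg = fpad[start:start + N]
        if len(seg) < N:
            seg = np.concatenate([seg, np.zeros(N - len(seg))])
        y = np.fft.ifft(np.fft.fft(seg) * H)
        out.append(y[M - 1:])                 # keep only the N-M+1 good points
    return np.concatenate(out)[:len(f)]

rng = np.random.default_rng(2)
f = rng.standard_normal(500)
h = rng.standard_normal(33)
g = overlap_save(f, h, N=128)
print(np.max(np.abs(g - np.convolve(f, h)[:len(f)])))  # matches direct result
```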
It might be noted that, so far as computation time is concerned, we lose by using the discrete Fourier transform procedures as we have described them to compute a convolution sum. For suppose we want to convolve an L-point sequence with an M-point sequence. The output sequence is of length M + L − 1. Each of the middle L − M + 1 points requires M multiplies and M − 1 adds for its calculation, while the M − 1 points on each end of the output sequence require (M − 1)² operations, a total operation count of L(2M − 1) − M + 1, of order 2ML. If we use the discrete Fourier transform technique, we need to calculate two N-point transforms, one requiring L multiplies and L − 1 adds for each of the N values F_k, and the other requiring M multiplies and M − 1 adds for each of the N values H_k. We then have N multiplies to calculate the G_k sequence, and finally an N-point inverse transform, with N multiplies and N − 1 adds for each of the M + L − 1 output sequence values, all adding up to (M + L − 1)(4N − 1) + N operations, of order 4N(M + L). If ML < 2N(M + L), which is always the case since we must have N ≥ L + M − 1, direct convolution requires fewer computations.

The indications in fact go just the other way as a result of the fast Fourier transform algorithm, which is a dramatically more rapid way of calculating the discrete Fourier transform than is evident from the form Eqn. (A.3.5). It is that algorithm which has made possible much of what is now called signal processing, and to which we turn attention in the next section.
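The direct-convolution count above can be confirmed by brute-force bookkeeping; a small sketch in plain Python (the sequence lengths are illustrative):

```python
def direct_conv_ops(L, M):
    # count multiplies and adds in direct linear convolution of an
    # L-point sequence with an M-point sequence (L >= M)
    mults = adds = 0
    for n in range(L + M - 1):
        # number of product terms contributing to output point g_n
        terms = min(M - 1, n) - max(0, n - L + 1) + 1
        mults += terms
        adds += terms - 1
    return mults + adds

# the tally works out to L(2M - 1) - M + 1, of order 2ML
L, M = 1024, 64
print(direct_conv_ops(L, M), L * (2 * M - 1) - M + 1)
```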
A.4 THE FAST FOURIER TRANSFORM ALGORITHM

One of the most common operations in signal processing is Fourier transformation. As applied to a finite number of numerical data, the appropriate Fourier transform pair is the discrete Fourier transform (DFT), defined in Eqn. (A.3.5):

f_n = (1/N) Σ_{k=0}^{N−1} F_k exp(j2πkn/N),    n = 0, …, N − 1    (A.4.1)

F_k = Σ_{n=0}^{N−1} f_n exp(−j2πkn/N),    k = 0, …, N − 1    (A.4.2)

Since we have to do here with finite sums, convergence questions do not arise. The pair is valid as an identity for complex data numbers f_n as well as for real data.

Two distinct uses are made of the DFT. The first is spectral analysis, in which we seek the Fourier spectrum Eqn. (A.1.16) of a time waveform which is bandlimited to |f| ≤ B/2. As in Eqn. (A.2.9), the signal spectrum F(jω) on the band exactly equals the scaled spectrum F_!(jω)/f_s of the sampled signal f_!(t) of Eqn. (A.2.4), constructed from the samples f_n taken at a rate f_s > B. Since in practice only some finite number N of the samples f_n are nonzero, the spectrum function F_!(jω) has sampled values as in Eqn. (A.3.6):

F_!(j2πkf_s/N) = F_k,    k = −(N/2 − 1), …, N/2

Thus the DFT Eqn. (A.4.2) computes samples of the Fourier spectrum F(jω).

The second main use of the DFT is as an aid in computing the convolution of two signals, that is, the output of a linear stationary system in response to some input. As we saw in Section A.3, in terms of operation counts the DFT is an inefficient way to carry out convolution processing, and indeed the procedure was not much used until the mid 1960s. At that time, however, an algorithm was presented (Cooley and Tukey, 1965) which exploited a rearrangement of the computations of the DFT to reduce the operation count. This algorithm, which came to be known as the "fast" Fourier transform (FFT), gave birth to the discipline of signal processing as it is practiced today. (The advances in digital computer hardware which were taking place at the same time played an essential role as well.)

Suppose we want to compute the convolution Eqn. (A.3.2) of two N-point sequences f_n, h_n. By direct convolution, to compute the ~2N output points g_n requires ~2N² operations, while with the DFT we need ~6N² operations. With the FFT, however, as we will see below, only ~2N log_2(N) operations are needed, for the case of N a power of 2. For the modest value N = 1024, say, direct convolution then requires about 2 × 10⁶ operations, the DFT needs 6 × 10⁶, and the FFT only 5 × 10⁴. This difference in behavior between N² and N log(N) easily tips the computational balance in favor of the DFT, as realized by the FFT. This procedure of carrying out a convolution computation using the FFT is called "fast convolution", and is universally used, except for special cases such as that in which N is quite small, say N < 64.

The key observation in development of the FFT is that the periodicity, in both variables k and n, of the complex exponential factors in the DFT of Eqn. (A.4.1) and Eqn. (A.4.2), with period N, can be exploited in the computation. Two ways of doing this can be constructed, which are in the technical sense duals of one another. The first, leading to "decimation in frequency" FFT algorithms, separately computes various subsequences of the output sequence F_k. The second, leading to "decimation in time" algorithms, separates the input sequence f_n into subsequences, and computes separately on each.

Decimation In Frequency

Let us consider first the decimation in frequency scheme, and suppose for simplicity that N is an even number. Decimation in frequency algorithms evolve by first noting from Eqn. (A.4.2) that

F_k = Σ_{n=0}^{N/2−1} f_n exp(−j2πkn/N) + Σ_{n=N/2}^{N−1} f_n exp(−j2πkn/N)
    = Σ_{n=0}^{N/2−1} [f_n + exp(−jπk) f_{n+N/2}] exp(−j2πkn/N)    (A.4.3)

Then the output sequence F_k can be partitioned as

F_{2m} = Σ_{n=0}^{N/2−1} (f_n + f_{n+N/2}) exp[−j2πmn/(N/2)]    (A.4.4)

F_{2m+1} = Σ_{n=0}^{N/2−1} (f_n − f_{n+N/2}) exp(−j2πn/N) exp[−j2πmn/(N/2)]    (A.4.5)

Thus the even numbered F_k, Eqn. (A.4.4), are calculated as the N/2 point transform of the sequence f_n + f_{n+N/2}, while the odd numbered F_k, Eqn. (A.4.5), are the N/2 point transform of the sequence (f_n − f_{n+N/2}) exp(−j2πn/N), where the exponential multipliers are the so-called "twiddle factors". The two N/2 point transforms are further subdivided into N/4 point transforms, and so forth, until we deal ultimately with two-point transforms.

For each n, the operations in forming the sequences to be transformed as in Eqn. (A.4.4) and Eqn. (A.4.5):

g_n = f_n + f_{n+N/2}    (A.4.6)

g_n' = (f_n − f_{n+N/2}) exp(−j2πn/N)    (A.4.7)
are just those of taking a two-point transform (the case N = 2 of Eqn. (A.4.2)), followed by adjustment of the second output coefficient by a twiddle factor. Thus, for say N = 8, we first do four 2-point transforms using (f_0, f_4), …, (f_3, f_7). The four output coefficients g_n of Eqn. (A.4.6) are the inputs to a four-point transform, and the four twiddled outputs g_n' of Eqn. (A.4.7) are the inputs to another four-point transform. In carrying out the first of these, we first do two two-point transforms using (g_0, g_2) and (g_1, g_3) and twiddle the second output coefficients of each. This yields numbers (h_0, h_1), (h_0', h_1'), each pair of which is the input to a final two-point transform. Collecting all these together, we have done four (N/2) (complex) two-point transforms at each of three (log_2 N) computation stages, with twiddle factors applied at each stage.

Looked at another way, in Eqn. (A.4.4) and Eqn. (A.4.5) we have N/2 two-point transforms, with the resulting coefficients adjusted by twiddle factors (one of each pair of which is unity), and finally two (N/2)-point transforms on the resulting adjusted coefficients.

If then N is a power of 2, N = 2^m say, we need m stages of decimation to get down to the final two-point transforms. Each two-point transform (a "butterfly") requires two (complex) additions, and since there are altogether m(N/2) two-point transforms, we require mN complex additions. Applying the twiddle factors uses N/2 complex multiplications for each stage except the last, for a total of N(m − 1)/2 multiplications involving twiddle factors. The grand total of complex operations needed for the N-point transform is thus:

mN + (m − 1)N/2 = (3/2)N(log_2 N − 1/3)

Various reorderings of the computation can reduce the operation count even below this, but the lion's share of the improvement is indicated by the transition from N² behavior to N log_2(N).

Decimation In Time

With decimation in time procedures, we separate the f_n sequence into a sequence with even index, n = 2m, and a sequence with odd index, n = 2m + 1, for m = 0, …, N/2 − 1. Then

F_k = Σ_{m=0}^{N/2−1} f_{2m} exp[−j2πkm/(N/2)]
      + exp(−j2πk/N) Σ_{m=0}^{N/2−1} f_{2m+1} exp[−j2πkm/(N/2)]    (A.4.8)

Thus, the coefficients in the N-point DFT appear as sums of coefficients in two DFTs, each of length N/2, with adjustment of the second set by twiddle factors exp(−j2πk/N). Assuming that N/2 is even, each of these constituent (N/2)-point DFTs can be calculated by further subdividing the sequences f_{2m} and f_{2m+1}, and carrying out a total of four transforms, each of length N/4. The process cycles until we finally deal with constituent transforms of length just 2 points. The total operation count in carrying out the original N-point transform with this procedure turns out to be precisely the same as for the decimation in frequency procedure, and in fact the graphs of the computations are duals of one another.

Coefficient Ordering

The detailed techniques of realizing an FFT algorithm (note we do not say at this point "the" FFT algorithm; there are many variants (Elliott and Rao, 1982, Chapter 4)) become closely involved with the type of hardware with which one wishes to deal. There is an interplay between computation ordering and necessary access to memory which provides a considerable number of possibilities. We might mention here only the choices between in-place and not-in-place algorithms, and between natural ordering of inputs and outputs and bit reversed ordering.

For either of the choices of time decimation, Eqn. (A.4.8), or frequency decimation, Eqn. (A.4.4) and Eqn. (A.4.5), there exist algorithm versions which require two storage arrays, each of length N (complex), with input data at each stage of the computation taken from one array and output data loaded into the other. There also exist versions which require only one array ("in-place" computation), with the numbers in the input array for each stage being replaced by output from that same stage. One pays a price for the storage saving in the latter case, however, in that either the input data or the output data will appear in sequential memory in bit reversed order. For example, for N = 8, locations (binary) 000, 001, 010, …, 110, 111 will contain data numbers with indexes 000, 100, 010, …, 011, 111. Whether time or frequency decimation is used, if both input and output are to be in normal order, two storage arrays must be provided. On the other hand, for in-place computation, with either time or frequency decimation one has the choice of either the input or output, but not both, being in normal order, with the other being bit reversed.

If we are interested in fast convolution, we can always use in-place computation (a single storage array), and normally ordered input to, and output from, the convolution. We simply use normally ordered input f_n, and say an in-place decimation in time algorithm. The output coefficients F_k then appear in bit reversed order, but if we have also arranged that the filter coefficients H_k are in bit reversed order, we simply multiply the two arrays in the ordering in which they appear to determine the array G_k in bit reversed order. We then use an in-place algorithm which expects its input coefficients to be in bit reversed order, which produces the final system output g_n in normal order. The penalty is that we may need to invoke different codes for the direct transform of f_n and the inverse transform of G_k.

The inverse transform operation Eqn. (A.4.1), forming the sequence f_n from the sequence F_k, is done using the same code as the direct transformation Eqn. (A.4.2), with minor changes of index and scale, since the computations differ only in the sign of the exponential and in the scale factor 1/N.
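Both the decimation-in-frequency recursion of Eqn. (A.4.4) and Eqn. (A.4.5) and the bit reversed ordering discussed under Coefficient Ordering can be sketched briefly (numpy assumed; the recursive form is written for clarity and does not attempt the in-place organization of a production routine):

```python
import numpy as np

def dif_fft(f):
    # recursive decimation-in-frequency FFT for N a power of 2,
    # following Eqn. (A.4.3)-(A.4.7)
    f = np.asarray(f, dtype=complex)
    N = len(f)
    if N == 1:
        return f
    half = N // 2
    a = f[:half] + f[half:]                              # Eqn. (A.4.6)
    twiddle = np.exp(-2j * np.pi * np.arange(half) / N)  # twiddle factors
    b = (f[:half] - f[half:]) * twiddle                  # Eqn. (A.4.7)
    F = np.empty(N, dtype=complex)
    F[0::2] = dif_fft(a)    # even-numbered coefficients, Eqn. (A.4.4)
    F[1::2] = dif_fft(b)    # odd-numbered coefficients, Eqn. (A.4.5)
    return F

def bit_reverse_permutation(N):
    # ordering produced by an in-place radix-2 FFT: location i holds the
    # coefficient whose index is i with its log2(N) bits reversed
    bits = N.bit_length() - 1
    return [int(format(i, '0{}b'.format(bits))[::-1], 2) for i in range(N)]

rng = np.random.default_rng(3)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
print(np.max(np.abs(dif_fft(x) - np.fft.fft(x))))  # agreement to roundoff
print(bit_reverse_permutation(8))  # [0, 4, 2, 6, 1, 5, 3, 7], the N = 8 example
```

The N = 8 permutation reproduces the text's statement that locations 000 through 111 hold data numbers with indexes 000, 100, 010, 110, 001, 101, 011, 111.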
Since the FFT computation is the heart of fast convolution SAR image formation algorithms, as well as of so much of signal processing, it is worth giving some additional discussion of the choices that can be made, as well as some indication of the more recent developments in the subject of fast convolution in general.

A.5 ADDITIONAL TOPICS RELATING TO THE FFT

The Radix of the Transform

In taking FFTs, a number of points N = 2^m is commonly used, in the way we have described briefly in Section A.4. This realization of the calculation Eqn. (A.4.1) using successive segmentation by factors of 2 is called a "radix 2" algorithm. The original presentation of the FFT (Cooley and Tukey, 1965) made no such assumption, and an N-point FFT algorithm can be derived for any arbitrary factoring of any number N. The size of the basic transform unit coded is called the radix of the algorithm. Thus, if an N = 2^m point transform is coded as a cascade of m stages of two-point transforms, we have a radix 2 FFT, while if we code m/2 stages of four-point transforms (requiring m to be even), we have a radix 4 algorithm. If we code m/2, or (m − 1)/2 for odd m, stages of four-point transforms, followed or preceded by one two-point transform for odd m, we have a simple mixed radix transform, in this case a transform of "radix 4 + 2".

The various factorings of N lead to transforms with different operation counts, although all with the basic N log(N) behavior, and hence somewhat different running speeds. As Gentleman and Sande (1966) early pointed out, radix 4 or radix 4 + 2 is nearly a factor of 2 faster than radix 2, for moderate size transforms of the order N = 1024. Bergland (1968a) pointed out that radix 8 + 4 + 2 is even faster. Since the rate of increase in program complexity grows as the radix goes up, while the rate of increase in operation speed slows, radix 8 is the highest generally used. For special values of N, use of radixes such as 3 and 5 may lead to faster transforms than simply extending data sets of the desired length N by zeros to reach a power of two.

Arrangements for Real Data

We often have to do with data which are real numbers, for example, the time samples of the real offset video signal resulting from each pulse of a SAR system. There are two standard ways of computing an FFT for a real data sequence. In the first (Brigham, 1974, Chapter 10), the N-point transform sequence Eqn. (A.4.2) of N real numbers f_n is computed using a complex FFT routine with N/2 input points y_n = f_{2n} + j f_{2n+1}, n = 0, …, N/2 − 1. The transform coefficients are:

F_k = G_k + H_k exp(−j2πk/N),    k = 0, …, N/2
F_k = F*_{N−k},    k = N/2 + 1, …, N − 1    (A.5.1)

where

G_k = (Y_k + Y*_{N/2−k})/2
H_k = (Y_k − Y*_{N/2−k})/2j,    k = 0, …, N/4
G_k = G*_{N/2−k}
H_k = H*_{N/2−k},    k = N/4 + 1, …, N/2

with the sequence Y_k being the (N/2)-point complex transform of the numbers y_n. Similarly, an N-point inverse transform Eqn. (A.4.1) leading to a real sequence is computed using an (N/2)-point complex transform as:

f_{2n} = Re(y_n)
f_{2n+1} = Im(y_n),    n = 0, …, N/2 − 1    (A.5.2)

where the sequence y_n is the (N/2)-point inverse FFT of a sequence Y_k, k = 0, …, N/2 − 1, formed with

G_k = (F_k + F*_{N/2−k})/2
H_k = exp(−j2πk/N)(F_k − F*_{N/2−k})/2,    k = 0, …, N/4
G_k = G*_{N/2−k}
H_k = H*_{N/2−k},    k = N/4 + 1, …, N/2

In the second way of dealing with real data (Bergland, 1968b), a complex FFT algorithm is pruned to remove all redundancy in computing a value and its complex conjugate, and to eliminate all computation of values known to be zero.

Vectorization of the Transform

Quite significant improvements in FFT computation times can result by paralleling aspects of the computation, using vector machines such as the Cyber-205 or CRAY. Pease (1968) and Temperton (1979), for example, have considered the advantages to be gained by arranging specific FFT computations to accord better with the architectures of specific machines than do the standard algorithms we have been discussing. One potential advantage which might be exploited in the SAR computation in particular would be carrying out the range compression of multiple radar pulses simultaneously on a vector machine, with the innermost loop of the computation being indexed by pulse number.
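The forward packing of Eqn. (A.5.1) can be sketched as follows (numpy assumed; the index N/2 − k is taken modulo N/2, using the periodicity of the half-length transform):

```python
import numpy as np

def real_fft_via_half(f):
    # pack N real points into N/2 complex points y_n = f_{2n} + j f_{2n+1},
    # take one N/2-point FFT, and unscramble per Eqn. (A.5.1)
    N = len(f)
    y = f[0::2] + 1j * f[1::2]
    Y = np.fft.fft(y)
    k = np.arange(N // 2 + 1)
    Yk = Y[k % (N // 2)]                  # Y_k is periodic with period N/2
    Yc = np.conj(Y[(-k) % (N // 2)])      # Y*_{N/2 - k}
    G = (Yk + Yc) / 2
    H = (Yk - Yc) / 2j
    return G + H * np.exp(-2j * np.pi * k / N)   # F_k for k = 0, ..., N/2

rng = np.random.default_rng(4)
f = rng.standard_normal(32)
F_half = real_fft_via_half(f)
print(np.max(np.abs(F_half - np.fft.fft(f)[:17])))  # roundoff-level agreement
```

The remaining coefficients follow from the conjugate symmetry F_k = F*_{N−k} of a real-data transform, so only k = 0, …, N/2 need be computed.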
Prime Factor and Number Theoretic Transforms

In recent years, two main developments in numerical transform theory have evolved. On the one hand, these center on more efficient computation of the traditional (exponential based) DFT, Eqn. (A.4.1) and Eqn. (A.4.2). On the other hand, development of a new class of transforms suitable for realization of fast convolution has been carried forward, the so-called number theoretic transforms. We will mention only some of the main developments here.

In the conventional (Fourier) FFT, a main development has been the Winograd FFT, or Winograd Fourier transform algorithm (WFTA). Silverman (1977) gives a tutorial account of the theory, while Zohar (1979) gives a discussion oriented towards realization in a computer code. For an N-point transform to be realizable by WFTA, N must be decomposable as a product of any number of mutually prime factors n_i selected from among (2, 3, 4, 5, 7, 8, 9, 16). The largest possible value of N realizable by WFTA is thus N = 5040, but as Agarwal and Burrus (1974) discuss, a discrete convolution problem with larger N can be converted to a two-dimensional problem with smaller WFTA sizes.

In WFTA itself, the N-point transform is realized as a specially arranged nest of n_i-point transforms, each of which is further realizable with special efficiency. The full algorithm requires the same or somewhat more adds than the conventional FFT, but considerably fewer multiplies. For example, for a complex transform with N = 1008, WFTA requires 34668 real adds and 3548 real multiplies, while a radix 8 + 4 + 2 FFT for N = 1024 requires 21793 real adds and 10244 multiplies. If, on a particular machine, multiplies are noticeably slower to perform than adds, the use of WFTA may have considerable advantage.

Following the idea of Winograd to decompose N into a product of mutually prime factors n_i, the "prime factor" FFT was developed. In this procedure, the one dimensional N-point transform is converted into a K-dimensional transform, with the transform in each dimension involving a number of points equal to the corresponding factor of N. Thus, for N = 5040 = 5 × 7 × 9 × 16 for example, the transform is realized as a 4-dimensional transform with 5, 7, 9, and 16 points respectively in each dimension. Each constituent one dimensional transform is realized by some appropriately fast method, often the WFTA. Burrus and Eschenbacher (1981), Chu and Burrus (1982), and Johnson and Burrus (1982) have discussed various aspects of such prime factor algorithms.

Among the number theoretic transforms, for suitable prime N a transform and its inverse can be formulated which map sequence convolution into transform multiplication. The transform, when realized on a binary machine, requires no multiplies, N(N − 1) adds, and (N − 1)² circular register shifts for an N-point input sequence. The algorithm is particularly suitable for fixed point realization of convolutions of relatively short sequences. If, in addition, in this transform the prime N is of the form N = 2^m + 1, in which case N is a Fermat number, the Fermat transform is obtained, which requires only (m + 1)N adds and no multiplies.

The entire field of fast algorithms for signal processing applications is discussed in depth and generality in the useful texts by Elliott and Rao (1982) and by Blahut (1985). The advantages to be gained from use of algorithms other than traditional FFT procedures of radix 4 + 2 in SAR calculations are relatively unexplored at this time. We therefore will end our discussion of the matter here, having pointed out perhaps some possibilities.

A.6 INTERPOLATION OF DATA SAMPLES

Let us finally consider interpolation of a bandlimited low pass signal f(t), with spectrum F(jω) which vanishes for |f| > B/2. As always, we assume the signal to have been sampled at an adequate rate f_s > B to produce a sequence f_n = f(n/f_s). Suppose further that all but a finite number N of the samples are zero. Then we know from Eqn. (A.2.2) that we have exactly

f(t) = Σ_{n=0}^{N−1} f_n sinc[πf_s(t − n/f_s)]    (A.6.1)

valid for all t, which provides error free interpolation (or extrapolation) of the given N samples. Beyond this there is only the question of implementation to consider.

We at once remark, however, that Whittaker's formula, Eqn. (A.2.2), is often not the most reasonable way to carry out such an interpolation computationally. It is usually more efficient to introduce a time shift in the transform domain,
At the cost of considerable program complexity, substantial speedup can be since we now have at hand an efficient way to compute transform coefficients
(the FFT).
realized. For example, Chu and Burrus (1982) report operation of a 280-point
prime factor algorithm at a rate 27 times faster than a radix 2 FFT run on a In general, if l(t) is a function withFourier transform F(jw), the transform
of the delayed version g(t) = l(t - T) is
comparable machine.
Finally, it is useful to recall that we are often interested in the F~ only_ as
a means to realize a fast convolution. If fast convolution is the primary aim, G(jw) =exp( -jwT)F(jw) (A.6.2)
transforms other than the FFT may be appropriate and advantageous. Rader
( 1972) has discussed a transform in which 2 plays the role which exp( - j2'!" IN_) Therefore, if l(t) is bandlimited to the band Ill ~ B/2, so also will be g(t), and
plays in the Fourier transform, Eqn. (A.4.1). For an input sequence which 1s samples of g(t) at the same rate as those of l(t) (f. > B) will suffice for
integer (fixed point) data, and of length N which is a prime number, a transform reconstruction. From Eqn. (A.2.9) and Eqn. (A.6.2), the transforms of the
corresponding sampled signals are related by

    G_s(jω) = exp(-jωT)F_s(jω)    (A.6.3)

since F(jω) and F_s(jω) are identical on the band |f| < B/2. Then, from Eqn. (A.3.6) and Eqn. (A.6.3),

    G_k = F_k exp(-j2πkf_sT/N),  k = 0, N - 1    (A.6.4)

The sought samples g_n = g(n/f_s), n = [0, N - 1], are then just the inverse FFT of the numbers Eqn. (A.6.4).

As a special case, suppose that we want to interpolate to the midpoints of the original sampling intervals. Then we have T = 1/2f_s, and Eqn. (A.6.4) becomes

    G_k = F_k exp(-jπk/N),  k = 0, N - 1

so that, from Eqn. (A.4.1),

    g_n = (1/N) Σ_{k=0}^{N-1} F_k exp[jπk(2n - 1)/N],  n = 0, N - 1

If we define a sequence F'_k by

    F'_k = F_k,  k = 0, N - 1
    F'_k = 0,   k = N, 2N - 1

so that F'_k is the zero-padded version of F_k, then the inverse FFT of the sequence F'_k is

    g'_n = (1/2N) Σ_{k=0}^{N-1} F_k exp(jπkn/N),  n = 0, 2N - 1

which is to say that g_n = 2g'_{2n-1}, n = 0, N - 1, so that the g'_n sequence is the sought interpolation of the original function f(t).

This procedure obviously generalizes to the case T = 1/mf_s, in which case we compute the N-point FFT of the sequence f_n, extend the resulting sequence F_k by appending zeros to fill out the N-point F_k sequence to a length mN, and compute the inverse FFT over mN points. The result is a sequence of mN points g'_n such that

    g_n = f(n/f_s - 1/mf_s) = mg'_{mn-1},  n = 0, N - 1

In fact, if we want a delay T = p/mf_s, p = 0, 1, ..., m - 1, the same reasoning leads at once to

    g_n = f(n/f_s - p/mf_s) = mg'_{mn-p},  n = 0, N - 1

so that we obtain the full set of interpolation points on the grid of fineness 1/mf_s with one operation.

A companion operation to interpolation is loosely called decimation. Whereas interpolation, in the last version discussed above, increases the sampling rate of a bandlimited signal by a factor m above that minimum f_s > B which is necessary to assure the absence of aliasing, decimation decreases the rate f_s by a factor m in the case that in fact f_s > mB, so that the original function is oversampled by a factor at least m. The obvious answer is the correct one: Since the given sequence f_n is oversampled by a factor m, simply discard all but every mth sample.

The operations of interpolation and decimation, which in themselves change the sampling rate by integer ratios, can be combined so as to increase or decrease the sampling rate by any desired rational ratio a = p/q, so long as in the decimation part of the combination enough samples are always retained that the restriction f_s > B is not violated. For example, if we wish to increase the sampling rate from f_s to (5/3)f_s, we FFT the original sequence of N samples f_n by an N-point transform, pad the F_k sequence to a length 5N by appending zeros, do an inverse FFT of length 5N, and throw away all but every third sample in the result, after multiplying by 5 to adjust for the new scale factor in the inverse FFT.

In many applications, one wants to carry out this process of sample rate adjustment on an ongoing data stream, rather than on a batch of N points. Crochiere and Rabiner (1981) considered the matter. The procedures, which operate entirely in the time domain, are based on the observation that, if we insert p - 1 zeros between successive samples f_n of the original data sequence, we obtain a sequence g_i of length pN whose FFT coefficients are given by

    G_k = Σ_{i=0}^{pN-1} g_i exp(-j2πki/pN) = Σ_{n=0}^{N-1} f_n exp(-j2πkn/N),  k = 0, pN - 1

making the change of variable n = i/p, so that the G_k sequence is just the F_k sequence, but considered over p of its base periods of length N. If then the g_i sequence is low pass filtered in the time domain by a digital filter to remove spectral components from k = N through k = pN - 1, the result has a spectrum which is exactly the F_k sequence padded out by zeros to a length pN. The result therefore is the sequence with the values f(n/f_s) interpolated on the mesh with fineness 1/pf_s. Discarding all but every qth sample in the result therefore accomplishes sample rate change by the rational factor p/q.
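The batch resampling recipe just described can be exercised directly. The sketch below (plain Python, standard library only; the function names and the test signal are ours, chosen for illustration) changes the rate of a periodic band-limited sequence from f_s to (5/3)f_s exactly as in the text: N-point DFT, zero-pad the coefficient sequence to length 5N, inverse DFT with a gain of 5, and retention of every third sample. A complex test signal whose spectrum occupies only low-order bins is used, so that appending the zeros at the end of the F_k sequence is legitimate; for a real-valued sequence the padding would instead have to straddle the conjugate-symmetric upper half of the spectrum.

```python
import cmath

def dft(x):
    # Direct O(N^2) DFT in the convention of Eqn. (A.4.1):
    # F_k = sum_n x_n exp(-j 2 pi k n / N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(F):
    # Inverse DFT with the customary 1/N scale factor
    N = len(F)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def resample_p_over_q(x, p, q):
    # Rate change by the rational factor p/q: DFT, append zeros to length pN,
    # inverse DFT with gain p, then keep every qth sample.
    N = len(x)
    F = dft(x)
    F_padded = F + [0.0] * ((p - 1) * N)
    y = [p * v for v in idft(F_padded)]
    return y[::q]

# Periodic band-limited complex test signal: only bins 3 and 5 are occupied.
N = 12
x = [cmath.exp(2j * cmath.pi * 3 * n / N)
     + 0.5 * cmath.exp(2j * cmath.pi * 5 * n / N) for n in range(N)]
z = resample_p_over_q(x, 5, 3)

# The output should be the same two-tone signal sampled at (5/3) the old rate,
# i.e. 20 samples per period instead of 12.
expected = [cmath.exp(2j * cmath.pi * 3 * i / 20)
            + 0.5 * cmath.exp(2j * cmath.pi * 5 * i / 20) for i in range(20)]
assert max(abs(a - b) for a, b in zip(z, expected)) < 1e-9
```

In production code the direct DFTs above would of course be replaced by FFTs; the bookkeeping of the padding and the gain factor p is unchanged.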
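The number theoretic transforms noted earlier in this appendix are easy to demonstrate in software. The example below (Python; the modulus, transform length, and data are our choices for illustration) performs an exact cyclic convolution of integer sequences with a transform taken modulo the Fermat prime 257 = 2^8 + 1. The element 2 has multiplicative order 16 modulo 257, so it serves as the 16-point "root of unity"; in a fixed point hardware realization each multiplication by a power of 2 is a binary circular shift, written here as modular pow() calls for clarity. The result is exact (no roundoff) as long as the true convolution values are smaller than the modulus.

```python
P = 257     # the Fermat prime 2^8 + 1
ROOT = 2    # 2 has multiplicative order 16 modulo 257
N = 16      # transform length

def ntt(a):
    # Forward transform: powers of ROOT play the role of exp(-j 2 pi k n / N)
    return [sum(a[n] * pow(ROOT, k * n, P) for n in range(N)) % P
            for k in range(N)]

def intt(A):
    inv_root = pow(ROOT, P - 2, P)   # ROOT^{-1} mod P, by Fermat's little theorem
    inv_N = pow(N, P - 2, P)         # N^{-1} mod P
    return [(inv_N * sum(A[k] * pow(inv_root, k * n, P) for k in range(N))) % P
            for n in range(N)]

def cyclic_convolve(x, h):
    # The convolution theorem holds for this transform just as for the DFT
    X, H = ntt(x), ntt(h)
    return intt([(a * b) % P for a, b in zip(X, H)])

x = [1, 2, 3, 4] + [0] * 12
h = [5, 6, 7] + [0] * 13
y = cyclic_convolve(x, h)

# Direct check; all true values are below 257, so the residues are exact.
direct = [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]
assert y == direct
```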
REFERENCES

Agarwal, R. C. and C. S. Burrus (1974). "Fast one-dimensional digital convolution by multidimensional techniques," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-22(1), pp. 1-10.

Bergland, G. D. (1968a). "A fast Fourier transform algorithm using base 8 iterations," Math. Computation, 22, pp. 275-279.

Bergland, G. D. (1968b). "A fast Fourier transform algorithm for real-valued series," Comm. ACM, 11(10), pp. 703-710.

Blahut, R. E. (1985). Fast Algorithms for Digital Signal Processing, Addison-Wesley, Reading, MA.

Brigham, E. O. (1974). The Fast Fourier Transform, Prentice-Hall, Englewood Cliffs, NJ.

Burrus, C. S. and P. W. Eschenbacher (1981). "An in-place, in-order prime factor FFT algorithm," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-29(4), pp. 806-817.

Chu, S. and C. S. Burrus (1982). "A prime factor FFT algorithm using distributed arithmetic," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-30(2), pp. 217-227.

Cooley, J. W. and J. W. Tukey (1965). "An algorithm for the machine calculation of complex Fourier series," Math. Computation, 19(90), pp. 297-301.

Crochiere, R. E. and L. R. Rabiner (1981). "Interpolation and decimation of digital signals - A tutorial review," Proc. IEEE, 69(3), pp. 300-331.

Elliott, D. F. and K. R. Rao (1982). Fast Transforms: Algorithms, Analyses, Applications, Academic Press, New York.

Gentleman, W. M. and G. Sande (1966). "Fast Fourier transforms - for fun and profit," AFIPS Fall Joint Computer Conf., San Francisco, November 1966, Spartan Books, Washington, DC, pp. 563-578.

Johnson, H. W. and C. S. Burrus (1982). "The design of optimal DFT algorithms using dynamic programming," Proc. IEEE Inter. Conf. Acoust., Speech, and Sig. Proc., Paris (May), pp. 20-23.

Oppenheim, A. V. and R. W. Schafer (1975). Digital Signal Processing, Prentice-Hall, Englewood Cliffs, NJ.

Papoulis, A. (1966). "Error analysis in sampling theory," Proc. IEEE, 54(7), pp. 947-955.

Pease, M. C. (1968). "An adaptation of the fast Fourier transform for parallel processing," J. ACM, 15, pp. 252-264.

Rader, C. M. (1972). "Discrete convolutions via Mersenne transforms," IEEE Trans. Computers, C-21, pp. 1269-1273.

Shannon, C. E. (1949). "Communication in the presence of noise," Proc. IRE, 37(1), pp. 10-21.

Silverman, H. F. (1977). "An introduction to programming the Winograd Fourier transform algorithm (WFTA)," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-25(2), pp. 152-165.

Temperton, C. (1979). "Fast Fourier transforms and Poisson solvers on CRAY-1," pp. 361-379 in: Super-Computers, Vol. 2, Infotech International.

Zohar, S. (1979). "A prescription of Winograd's discrete Fourier transform algorithm," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-27(4), pp. 409-421.

APPENDIX B

SATELLITE ORBITS AND COMPRESSION FILTER PARAMETERS

The essence of SAR, and the root of its dramatic resolution properties, lies in the possibility of carrying out compression processing of the Doppler shifted carrier signal in the azimuth coordinate as the vehicle flies by the target (Section 1.2.2). For an isolated point target, the waveform of that Doppler signal, which is the slow (azimuth) time variation of the phase of the output of the range compression filter, is (Eqn. (4.1.24)):

    g(s) = exp[-j4πR(s)/λ]    (B.0.1)

where R(s) is range from radar to target and s is slow time. This signal is intrinsically sampled by the pulsed nature of the radar, with a sampling frequency f_p which is just the radar PRF. The waveform of this point target response is needed in order to construct the compression filter. The detailed nature of the range function R(s) is therefore crucial to the construction of a SAR processor, and it is that behavior to which this Appendix is devoted.

The range function R(s) for practical geometries is a complicated expression involving many parameters of the relative motion of satellite and target, the latter being carried along by the rotating earth, and possibly having some motion of its own relative to the earth surface. However, because of the limited beamwidth of the radar antenna pattern, a specific point target creates radar signal only during a limited span of slow time. That time span, the integration time of the SAR, is usually small enough that a Taylor series expansion of R(s) about a nominal center time, say s_c, can be terminated after the first few terms to yield an adequately accurate approximation to the full function R(s). Typically, only terms through the quadratic in slow time need to be retained,
in which case the azimuth function Eqn. (B.0.1) is a linear FM signal in the Doppler frequency domain. Therefore, in this Appendix we will seek expressions for R(s) and its first few derivatives evaluated at the time s_c at which the target in question is in the center of the radar beam. Those lead to the parameters needed in the azimuth compression filter of a SAR processor.

We will derive three different forms of expression for these derivatives of R(s) evaluated at beam center, arranged to accord with three different situations in which one wants to calculate them. First, we will need versions of the parameter computations which can use the accurate values of satellite position and velocity obtained by observing the trajectory of the vehicle. Second, for prediction of the azimuth filter parameters during system design, as well as for their computation in the case that the satellite orbit and orientation are known rather precisely, it is useful to have accurate formulas in terms of these quantities. Third, we need analytical models upon which to base the data fitting procedures involved in clutterlock and autofocus (Chapter 5).

B.1 PARAMETERS IN TERMS OF SATELLITE TRACK AND TARGET POSITION

As shown in Fig. B.1, consider a coordinate system with origin at the center of the earth. Let the satellite position as a function of slow time s be the vector R_s(s), and let an isolated point target be at position R_t(s). The range vector is

    R(s) = R_s(s) - R_t(s)    (B.1.1)

with

    R(s) = |R_s(s) - R_t(s)|    (B.1.2)

the scalar slant range. It is convenient to expand R(s) as a Taylor series about some time s_c, which will be the time of passage of the nominal beam center across the target, but which we can take as arbitrary for the time being. Then we can write

    R(s) = R(s_c) + Ṙ(s_c)(s - s_c) + (1/2)R̈(s_c)(s - s_c)² + ...    (B.1.3)

and seek the various derivatives of R(s) evaluated at the special time of beam center on the target, rather than seeking the analytical form of R(s) directly.

Figure B.1 Satellite and target positions in inertial system fixed at earth center.

Slant Range Derivatives Given Platform Trajectory

From Eqn. (B.1.2), suppressing henceforth the explicit appearance of the slow time variable s:

    R² = (R_s - R_t)·(R_s - R_t)    (B.1.4)

Differentiating both sides of Eqn. (B.1.4), we have

    RṘ = (V_s - V_t)·(R_s - R_t)    (B.1.5)

For generality, suppose that the target moves with respect to the surface of the rotating earth. Let r be the target position, with coordinates taken relative to a set of axes fixed on the rotating earth's surface. That is,

    R_t = R_t0 + r    (B.1.6)

where R_t0 locates the origin of the earth-fixed axes, so that

    V_t = Ṙ_t0 + ṙ    (B.1.7)

Assuming the earth to have rotational symmetry about its axis,

    Ṙ_t0 = ω_e × R_t0    (B.1.8)

where ω_e is the (constant) earth's angular velocity. Further (e.g. Hay, 1953, p. 80),

    ṙ = ω_e × r + v_t    (B.1.9)

where by v_t we mean the target velocity as seen by an observer fixed on the earth's surface:

    v_t = (dr/ds)_e    (B.1.10)

From Eqn. (B.1.7), with Eqns. (B.1.8), (B.1.9), and (B.1.10) taken into account, we obtain

    V_t = ω_e × R_t0 + ω_e × r + v_t
        = ω_e × R_t + v_t    (B.1.11)

Using Eqn. (B.1.11) in Eqn. (B.1.5), together with standard vector identities, yields

    RṘ = V_s·(R_s - R_t) + ω_e·(R_s × R_t) - (R_s - R_t)·v_t    *(B.1.14)

This expresses the first derivative Ṙ of slant range in terms of the target position and velocity on the earth and the satellite position and velocity in its orbit.

Differentiating Eqn. (B.1.14), we have

    Ṙ² + RR̈ = A_s·(R_s - R_t) + V_s·(V_s - V_t)
        + ω_e·(R_s × V_t + V_s × R_t) - (V_s - V_t)·v_t
        - (R_s - R_t)·(a_t + ω_e × v_t)    (B.1.15)

where a_t = (dv_t/ds)_e is the acceleration of the target relative to the earth's surface. Using Eqn. (B.1.11) and standard identities, Eqn. (B.1.15) becomes

    RR̈ = A_s·(R_s - R_t) + |V_s|² - Ṙ² + 2ω_e·(V_s × R_t)
        + (ω_e × R_t)·(ω_e × R_s) - 2V_s·v_t + 2ω_e·(R_s × v_t)
        + |v_t|² - (R_s - R_t)·a_t    *(B.1.16)

Considering the Doppler signal Eqn. (B.0.1), with phase

    φ = -4πR(s)/λ

we have the general Doppler frequency expressions

    f_D(s) = φ̇/2π = -2Ṙ(s)/λ
    ḟ_D(s) = -2R̈(s)/λ

In particular,

    f_DC = -2Ṙ(s_c)/λ
    f_R = -2R̈(s_c)/λ

corresponding to the range function expansion in Eqn. (B.1.3), are the Doppler center frequency and Doppler rate for use in the SAR processor.

The expressions Eqn. (B.1.14), Eqn. (B.1.16) indicate explicitly how target motion relative to the earth surface affects the parameters f_DC and f_R through changes in Ṙ(s) and R̈(s). The expressions form the basis for assessment of defocusing caused by uncompensated target motion, in conjunction with depth of focus considerations (Section 4.1.3). Henceforth, however, we will assume the target is a terrain point fixed on the earth surface, and take

    a_t = v_t = 0

Satellite Acceleration Given Earth Potential Function

To use the expressions Eqn. (B.1.14) and Eqn. (B.1.16) to obtain the Doppler parameters f_DC and f_R, we need to know the motion of the satellite. The tracking system and the orbit smoothing processor will normally provide the satellite position and velocity R_s, V_s as functions of slow time s (vehicle flight time), although some interpolation may be needed between the times at which these
quantities are made available. However, the higher derivatives of R_s must usually be calculated if they are needed.

If we assume a uniform spherical earth, the calculation is easy, since then the force field in which the satellite finds itself is a central field (neglecting the influences of the sun, moon, etc.). Then (whatever the form of the orbit) Newton's law yields the equation of motion:

    A_s = -(μ/R_s³)R_s    (B.1.19)

where R_s = |R_s| and μ = 3.986 × 10^14 m³/s² is the product of the gravitational constant and the mass of the earth. Differentiating Eqn. (B.1.19) yields

    Ȧ_s = -(μ/R_s³)V_s + (3μṘ_s/R_s⁴)R_s

and so forth for the higher derivatives, where

    Ṙ_s = V_s·R_s/R_s

In the case of a noncentral force field, due to a nonspherical and/or nonuniform earth, it is convenient to introduce the gravitational potential function U(p) (Haymes, 1971, p. 42). This is a scalar function of position p such that the force per unit mass on the satellite is

    F(p) = -[∇U(p)]|_{p = p(s)}

Then Newton's law is

    A_s = -[∇U(p)]|_{p = R_s}    (B.1.20)

It is usual and convenient to express the function U(p) as a series in powers of 1/p with coefficients that are indirectly measured by inferring them from the observed orbits of satellites. For a uniform earth with rotational symmetry, the potential at the satellite location R_s, correct to the indicated order, is (Haymes, 1971, p. 45):

    U = (μ/R_s)[1 + (B₂/R_s²)(-0.5 + 1.5 sin²ζ_s)]    (B.1.21)

where ζ_s is the latitude of the satellite on a sphere with center at the earth center. The coefficient is B₂ = -4.405 × 10^10 m² (El'yasberg, 1967, p. 199). With the form of the potential Eqn. (B.1.21), and taking a coordinate system (Fig. B.1) such that

    sin(ζ_s) = z_s/R_s

we have Eqn. (B.1.20) as

    A_s = -(μ/R_s³){[1 + (1.5B₂/R_s²)(1 - 5 sin²ζ_s)]R_s + (2B₂z_s/R_s²)k}    (B.1.22)

This shows both the nonuniformity and noncentrality of the force field, through the terms with ζ_s and k, respectively. Higher order terms in the potential function Eqn. (B.1.21) are available, and could be used for more accurate calculation of A_s, or for calculation of Ȧ_s in a third order expansion of the slant range function R(s).

The expansion Eqn. (B.1.22), when used in Eqn. (B.1.16), together with Eqn. (B.1.14), yields the parameters f_DC and f_R. For example, if we assume a central force field, B₂ = 0, and a target fixed on the earth, we obtain

    f_DC = (-2/λR)[V_s·(R_s - R_t) + ω_e·(R_s × R_t)]    *(B.1.23)

    f_R = (-2/λR)[(-μ/R_s³)R_s·(R_s - R_t) + V_s·V_s - Ṙ²
        + (ω_e × R_t)·(ω_e × R_s) + 2ω_e·(V_s × R_t)]    *(B.1.24)

All quantities here are evaluated at the time s_c of passage of the terrain point of interest through the radar beam center.

The expression Eqn. (B.1.23) exhibits terms due to satellite motion (perceived due to squint and orbit eccentricity) and earth rotation. Both of these are generally significant. In expression (B.1.24), however, for rough calculations it may be adequate to use the approximation

    f_R = -2V_st²/λR    (B.1.25)

where V_st is the speed of the satellite relative to the target point. Thus,

    V_st² = |V_s - ω_e × R_t|²    *(B.1.26)

The expression Eqn. (B.1.25) differs from Eqn. (B.1.24) only in the small centripetal acceleration (μ) and Ṙ² terms, and in the small term (ω_e × R_t)·(ω_e × R_s). More accurately, Eqn. (B.1.24) defines a speed parameter V for use in Eqn. (B.1.25) in place of V_st. The matter is discussed in more detail in Section B.4.

Coordinate System

In order to carry out numerical calculations, it is necessary to introduce some coordinate system in which to express the various vector quantities in expressions such as Eqn. (B.1.23) and Eqn. (B.1.24). The usual system is the equatorial (inertial) coordinate system (Haymes, 1971, p. 1). This is a right-handed
rectangular system with the z axis the axis of rotation of the earth, positive towards the north pole (Fig. B.1). The positive x-axis points in a fixed direction in inertial space, the direction of the vernal equinox, also called the first point in Aries, and denoted symbolically as ♈, the sign of the ram. The earth rotates on its axis in this fixed coordinate system. The origin of the system can be regarded as fixed at the center of the earth, so that the system itself moves with fixed orientation around the sun as the earth travels in its orbit. Since throughout we neglect the influence of the sun on the satellite, that this coordinate system moves around the sun is of no concern.

The vernal equinox in this context is a specific direction, rather than a time. The xy-plane of the equatorial coordinate system of Fig. B.1 is the plane of the earth's equator. The z-axis, the axis of rotation of the earth, is tilted at nominally 23.5° with respect to the plane of the earth's orbit around the sun. As a result, for nominally six months of each year the sun lies below the xy-plane, while for the other six months it is above. At one precise instant each year, to an observer at the earth's center the center of the sun would appear to pass through the xy-plane headed into the positive-z hemisphere. That instant, nominally some time on March 21, is the time of the vernal equinox, and the direction from earth center to sun center at that instant is called the vernal equinox.

A slight complication arises in use of the vernal equinox as a coordinate direction. Because of a variety of perturbing effects, the earth's axis of rotation moves with respect to the plane of the earth's orbit. There is a mean circular movement (precession) around the cone with central half angle 23.5°, with a period of 25800 years, and a small wobbling (nutation) about that mean, with a period of 18.6 years. As a result, the direction of the vernal equinox moves, and it is necessary to specify a date to which the equatorial coordinate system in question relates. Until 1984, that was taken as the beginning of 1950, precisely defined. Since 1984, the year 2000 is the convention. The vernal equinox actually occurred at the first point (horn) of Aries (the ram) about 2000 years ago, so that the current equatorial system has an x-axis rotated about (2000/25800) × 360° ≈ 28° away from that star.

Before considering detailed formulas for target position R_t in Eqn. (B.1.14) and Eqn. (B.1.15), we will describe determination of the satellite vectors from orbital elements.

B.2 TRAJECTORY PARAMETERS IN TERMS OF SATELLITE ORBIT

A satellite which finds itself in a central force field, Eqn. (B.1.19), will move in a planar orbit which is one of the conic sections (Haymes, 1971, p. 41). If the satellite is to be of use for remote sensing, it must be in an orbit which is nominally elliptical. The elliptical orbit is further often arranged to be a near circle. Were the earth to be a uniform sphere, a satellite would move in a strict elliptical orbit, with the center of the earth at one of the foci of the ellipse. (Note that thereby the origin of the equatorial coordinate system is at a focus of the orbit ellipse, rather than at its center.) Such an orbit can be described by its "orbital elements", these being constants of the ellipse and of its orientation relative to the equatorial coordinate system (Fig. B.2). They are (Haymes, 1971, p. 498):

a, the semi-major axis of the ellipse;

e, the eccentricity of the ellipse;

α_i, the inclination of the ellipse (the angle between the plane of the ellipse and the xy-coordinate plane);

Ω, the longitude of the ascending node (the azimuthal angle of the point at which the orbit cuts the xy-plane in passage of the satellite from the lower hemisphere to the upper, that point being the "ascending node");

ω, the argument of perigee (the angle ("argument") along the orbit plane, taken positive in the direction of satellite travel, from ascending node to the point of closest approach of the satellite to the earth center ("perigee"), that point being on the major axis of the ellipse);

P, the sidereal period (the time required for one transit of the ellipse by the satellite); this is not an independent parameter of the orbit;

T, the time of perigee passage (the absolute time at which the satellite passed through the point of perigee during the orbit in question).

Figure B.2 Definition of elements of satellite orbit.
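Before developing the orbital-element formulas, the beam-center expressions of Section B.1 can be verified numerically. The sketch below (Python, standard library only; the orbit, target location, and wavelength are assumed values chosen purely for illustration) evaluates f_DC and f_R from Eqn. (B.1.23) and Eqn. (B.1.24) for a circular orbit in a central force field and a terrain point fixed on the rotating earth, then checks them against finite differences of the slant range R(s) itself.

```python
import math

mu = 3.986e14        # GM of the earth, m^3/s^2
we_z = 7.2921e-5     # earth rotation rate about the z axis, rad/s
lam = 0.24           # radar wavelength, m (assumed value)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

a_orb, inc = 7.078e6, 0.7          # circular orbit radius (m) and inclination (rad)
n0 = math.sqrt(mu / a_orb**3)      # orbital angular rate, rad/s

def sat(s):    # satellite position on the inclined circular orbit
    th = n0 * s
    return (a_orb*math.cos(th), a_orb*math.sin(th)*math.cos(inc),
            a_orb*math.sin(th)*math.sin(inc))

Re, lat_t, lon0 = 6.378e6, 0.5, 0.1
def tgt(s):    # terrain point carried along by the rotating earth
    ph = we_z * s + lon0
    return (Re*math.cos(ph)*math.cos(lat_t), Re*math.sin(ph)*math.cos(lat_t),
            Re*math.sin(lat_t))

def rng(s):
    d = [p - q for p, q in zip(sat(s), tgt(s))]
    return math.sqrt(dot(d, d))

# Beam-center quantities, taken at s = 0
Rs, Rt = sat(0.0), tgt(0.0)
Vs = (0.0, n0*a_orb*math.cos(inc), n0*a_orb*math.sin(inc))   # d(sat)/ds at s = 0
As = tuple(-mu * c / a_orb**3 for c in Rs)                   # central-field acceleration
we = (0.0, 0.0, we_z)
d = tuple(p - q for p, q in zip(Rs, Rt))
R = math.sqrt(dot(d, d))

Rdot = (dot(Vs, d) + dot(we, cross(Rs, Rt))) / R             # from (B.1.14), v_t = 0
fdc = -2.0 * Rdot / lam                                      # (B.1.23)
fr = (-2.0 / (lam * R)) * (dot(As, d) + dot(Vs, Vs) - Rdot**2
      + dot(cross(we, Rt), cross(we, Rs))
      + 2.0 * dot(we, cross(Vs, Rt)))                        # (B.1.24)

# Independent check by finite differences of the slant range
h = 0.1
rd_num = (rng(h) - rng(-h)) / (2.0 * h)
rdd_num = (rng(h) - 2.0*rng(0.0) + rng(-h)) / h**2
assert abs(fdc - (-2.0 * rd_num / lam)) < 0.1
assert abs(fr - (-2.0 * rdd_num / lam)) < 0.01
```

The agreement confirms that, for a fixed terrain point, the Doppler centroid and rate follow entirely from the satellite state vectors and the earth rotation vector.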
Nominal Satellite Orbit for Given Orbital Elements

It is useful to have available the equations relating these orbital elements to the quantities of interest in computing the filter parameters, namely R_s and V_s. The orbital elements are constants of integration, or transformations thereof, which arise in integrating the equation of motion of a satellite in a central force field, Eqn. (B.1.19). We will follow Haymes (1971). El'yasberg (1967, esp. Chapters 4, 5) gives a more detailed treatment.

Again using the inertial system centered on the (assumed) uniform spherical earth (Fig. B.1), the equation of motion of the satellite is

    A_s = -(μ/R_s³)R_s    (B.2.1)

Taking the cross product with R_s,

    R_s × A_s = 0 = d(R_s × V_s)/dt

so that

    R_s × V_s = const    (B.2.2)

Equation (B.2.2), the first integral of Eqn. (B.2.1), is the "areal integral", and indicates that a vector normal to the plane of R_s(s) and V_s(s) is a constant in time. Hence R_s, V_s evolve in a plane, and the orbit is planar.

Since the orbit is planar, we can confine attention to that plane and introduce the polar coordinates shown in Fig. B.3. Then

    R_s = R_s u_r    (B.2.3)

    V_s = Ṙ_s u_r + R_s u̇_r = Ṙ_s u_r + R_s α̇ u_t    (B.2.4)

    A_s = (R̈_s - R_s α̇²)u_r + (2Ṙ_s α̇ + R_s α̈)u_t = -(μ/R_s²)u_r    (B.2.5)

using Eqn. (B.2.1) and the definition of α indicated in Fig. B.3.

Figure B.3 Orbit plane for central force field.

Substituting Eqn. (B.2.3) and Eqn. (B.2.4) into Eqn. (B.2.2) yields the areal integral as

    R_s² α̇ = κ    (B.2.6)

so defining κ. This is Kepler's second law, that the motion in the orbit plane is such that the vector R_s sweeps out area at a constant rate (hence the term "areal" integral).

Equating coefficients of u_r in Eqn. (B.2.5), we have

    R̈_s - R_s α̇² = -μ/R_s²    (B.2.7)

To find R_s as a function of α, which will turn out to be the equation of an ellipse, we use Eqn. (B.2.6) to eliminate time s from Eqn. (B.2.7). Introducing the transformation R_s = 1/u and using Eqn. (B.2.6) and Eqn. (B.2.7), there results

    -κ²u²(d²u/dα²) = κ²u³ - μu²

with solution

    u = A cos(α - ω) + μ/κ²    (B.2.8)

where A and ω are constants of integration.

Transforming from u back to R_s = 1/u, and defining

    e = Aκ²/μ    (B.2.9)

yields

    R_s = (e/A)[1 + e cos(α - ω)]⁻¹    (B.2.10)

Therefore R_s has minimum and maximum values (the values at perigee and apogee):

    (R_s)min = e/[A(1 + e)]
    (R_s)max = e/[A(1 - e)]    (B.2.11)

Then defining

    a = e/[A(1 - e²)]    (B.2.12)

there results

    R_s = a(1 - e²)/[1 + e cos(α - ω)]    *(B.2.13)

Equation (B.2.13) is the equation of an ellipse, provided that e < 1, with semi-major axis a. (This result is Kepler's first law.) The values Eqn. (B.2.11) become

    (R_s)min = a(1 - e)
    (R_s)max = a(1 + e)

The ellipse Eqn. (B.2.13) in the orbit plane is described by three of the orbital elements: the semi-major axis a, the eccentricity e, and the phase angle ω (the argument of perigee). Two other elements, the longitude Ω of the ascending node and the inclination α_i of the orbit, relate the orbital plane to the equatorial coordinate system. The sidereal period P and the time T of passage of the satellite through perigee serve to locate the satellite in its orbit for any given time s.

It is convenient to introduce a number of angles with regard to the movement of a satellite around its orbit (Fig. B.3). The angle

    f = α - ω

which appears in Eqn. (B.2.13) is the "true anomaly". The angle E (Fig. B.3) is the "eccentric anomaly". This is the central angle, measured from perigee, of the point on the circumscribed circle where a line through the satellite parallel to the minor axis intersects the circle. Finally, the "mean anomaly" is defined as

    M = (2π/P)(s - T)

This is the angle of the satellite, measured from perigee, if the motion were circular with period P, the sidereal period.

Satellite Coordinates for Nominal Orbit

If we want to calculate the position R_s of the satellite at some specified time s, expressed through the mean anomaly, we need to work our way from the mean anomaly M, back through the eccentric anomaly E, to the true anomaly f. Together with R_s, which follows immediately from f using Eqn. (B.2.13), this locates the satellite.

First, it is straightforward to show (Haymes, 1971, p. 33) that

    P = 2πa^{3/2}/μ^{1/2}

which is Kepler's third law, so that P is not an independent element of the orbit. From the time s, and known time T, this yields the mean anomaly M. A moderate amount of geometric consideration of Fig. B.3 (El'yasberg, 1967, p. 60) leads to Kepler's equation:

    E - e sin(E) = M

This must be solved numerically for E given M. (Since, in cases of interest to us, e is very small, Newton's method converges in a few steps.) More geometry leads to

    tan(f/2) = [(1 + e)/(1 - e)]^{1/2} tan(E/2)

from which follow f and then R_s.

Simple transformations now yield the inertial coordinates of the satellite and of its velocity, i.e., the vectors R_s and V_s. From Fig. B.2, rotating the geocentric inertial system through the angle Ω corresponds to the transformation

    [i']   [ cos Ω   sin Ω   0][i]
    [j'] = [-sin Ω   cos Ω   0][j]
    [k']   [  0        0     1][k]

A further rotation through the inclination α_i gives

    [i'']   [1      0          0     ][i']
    [j''] = [0    cos α_i    sin α_i ][j']
    [k'']   [0   -sin α_i    cos α_i ][k']

Finally (Fig. B.3),

    [u_r]   [ cos α   sin α   0][i'']
    [u_t] = [-sin α   cos α   0][j'']
    [u_p]   [  0        0     1][k'']
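The chain just described, from orbital elements through the mean and eccentric anomalies to the inertial coordinates, can be sketched in a few lines (Python; the element values below are assumed, chosen only for illustration).

```python
import math

mu = 3.986e14   # GM of the earth, m^3/s^2

def kepler_E(M, e, tol=1e-12):
    # Solve Kepler's equation E - e sin(E) = M by Newton's method;
    # for the small eccentricities of interest it converges in a few steps.
    E = M
    for _ in range(50):
        dE = (E - e*math.sin(E) - M) / (1.0 - e*math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def position(a, e, inc, Omega, omega, T, s):
    P = 2.0 * math.pi * a**1.5 / math.sqrt(mu)   # Kepler's third law
    M = (2.0 * math.pi / P) * (s - T)            # mean anomaly
    E = kepler_E(M, e)                           # eccentric anomaly
    f = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                         math.sqrt(1.0 - e) * math.cos(E / 2.0))  # true anomaly
    Rs = a * (1.0 - e*e) / (1.0 + e*math.cos(f))  # (B.2.13)
    al = f + omega                                # alpha, measured from the node
    # Components of Rs*u_r in the equatorial system
    x = Rs * (math.cos(Omega)*math.cos(al) - math.sin(Omega)*math.cos(inc)*math.sin(al))
    y = Rs * (math.sin(Omega)*math.cos(al) + math.cos(Omega)*math.cos(inc)*math.sin(al))
    z = Rs * math.sin(inc) * math.sin(al)
    return (x, y, z)

# A near-circular retrograde orbit (assumed values): a = 7078 km, e = 0.001
pos = position(7.078e6, 1.0e-3, math.radians(98.2), 0.3, 1.0, 0.0, 600.0)
r = math.sqrt(sum(c*c for c in pos))
# The radius must lie between perigee a(1 - e) and apogee a(1 + e)
assert 7.078e6 * (1.0 - 1.0e-3) - 1.0 <= r <= 7.078e6 * (1.0 + 1.0e-3) + 1.0
```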
578 SATELLITE ORBITS AND COMPRESSION FILTER PARAMETERS
B.2 TRAJECTORY PARAMETERS IN TERMS OF SATELLITE ORBIT 579
Cascading these yields
where we introduce the radial and tangential speeds. Writing the vectors u,, u
1
u, = i[cos(a) cos(!l) - sin( a) cos(a 1) sin(!l)] in terms of the inertial system, using Eqn. (B.2.14 ), then yields the components
of v. in the equatorial system.
+ j[cos(a) sin(!l) + sin( a) cos(a1) cos(!l)] Higher derivatives of R. could be found in the same general fashion, but we
will refrain from doing that, except to note that
+ k[sin(a) sin(a1)]
u1 = i[ -sin( a) cos(!l) - cos( a) cos(a1) sin(!l)]
ft..= (µ/R;)e cos(a - w)
+ j[ -sin( a) sin(!l) +cos( a) cos(a1) cos(!l)]
Ci.= (-2µ/R;)esin(a-w) *(B.2.19)
+ k [cos(°') sin( a;)]
uP = i[sin(a 1) sin(!l)] - j[sin(a1) cos(!l)] + k cos( a;) (B.2.14)
Perturbations of the Nominal Orbit
Since All of the above has assumed a central force field, Eqn. (B.2.1), leading to the
orbit being strictly an ellipse in a plane. However, since the earth is an oblate
spheroid, bulging at the equator, with somewhat of a pear shape (larger below
Eqn. (B.2.14) yields the equator), and with even higher order nonsphericities, the force field acting
on the satellite is not central. The result is that the satellite orbit is not a simple
x. = R.[cos(!l) cos( a) - sin(!l) cos( a;) sin( ix)] ellipse in an invariant plane. The analytical treatment of the changes in the
Ys = R. [sin(!l) cos( a)+ cos(!l) cos(a1) sin( a)] elliptical orbit due to noncentrality of the force field is straightforward, but of
some complexity. A detailed treatment is given by El'yasberg (1967, Chapter
z. = R. sin( a 1) sin(°') (B.2.15)
13), taking into account the first term of the earth's potential function,
Eqn. (B.1.21 ), past the simple inverse square force behavior (that involving the
Let us now consider the satellite velocity Eqn. (B.2.4). From Eqn. (B.2.13) coefficient B2 ).
we have The perturbing effects of higher order terms in the earth's potential function
R. = [R;e sin(a - w)/a(l - e 2 )]& *(B.2.16) are most conveniently expressed as perturbations of the nominal elliptical orbit.
To first order, two of the orbital elements increase or decrease monotonically
From Eqn. (B.2.6) we have with time ("secular perturbations"), the argument of perigee wand the ascending
node n. Three of the orbital elements remain constant, again to first order: the
i:X = 1<./R; *(B.2.17) length a of the semi-major axis, the eccentricity e, and the inclination a1• Over
the course of one revolution of the satellite in its orbit, the accumulated changes
in the perturbed elements are (El'yasberg, 1967, p. 212):
while from Eqn. (B.2.11) and Eqn. (B.2.12) we have
(R) 2
s max =2a=2e/A(l-e )
. +(R)
s mm <>w = (ne/p 2 µ)[5 cos 2 (a;) - 1]
Using Eqn. (B.2.13), Eqn. (B.2.17), and Eqn. (B.2.18) in Eqn. (B.2.16), we have p = a(1 - e 2 ) (B.2.21)
Eqn. (B.2.4) as
and e is a constant of the gravitational field:
v. = [µ/a( 1 - e 2 )] 1 12 { e sin(f)u, + [ 1 + e cos(f)Ju1}
*(B.2.18a)
e = -1.5µB 2 = 2.634 x 10 25 m 5 /s 2
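The position construction Eqn. (B.2.15) is easy to sanity-check numerically. A minimal sketch (the sample angles are illustrative, not taken from the text) that also verifies the components preserve the geocentric distance R_s:

```python
import math

# Equatorial-frame position of the satellite, Eqn. (B.2.15).
# Rs: geocentric distance; Omega: ascending node; alpha: in-plane orbit angle;
# alpha_i: orbit inclination. All angles in radians.
def equatorial_position(Rs, Omega, alpha, alpha_i):
    xs = Rs * (math.cos(Omega) * math.cos(alpha)
               - math.sin(Omega) * math.cos(alpha_i) * math.sin(alpha))
    ys = Rs * (math.sin(Omega) * math.cos(alpha)
               + math.cos(Omega) * math.cos(alpha_i) * math.sin(alpha))
    zs = Rs * math.sin(alpha_i) * math.sin(alpha)
    return xs, ys, zs
```

Since u_r is a unit vector, x_s² + y_s² + z_s² must equal R_s² for any angles; that identity is a useful check against sign errors when coding the rotation.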
These perturbations Eqn. (B.2.20) may also be expressed as average rates of change over a period of the orbit:

aver(ω̇) = δω/P
aver(Ω̇) = δΩ/P   (B.2.22)

where the sidereal period can be expressed as

P = 2π a^(3/2)/µ^(1/2)   (B.2.23)

As an example, for Seasat with a = 7161.39 km, e = 1.86 × 10⁻³, α_i = 108.02°, from Eqn. (B.2.23) we have a period P = 100.5 min. Then Eqn. (B.2.20), Eqn. (B.2.21), and Eqn. (B.2.22) yield the corresponding secular rates.

B.3 COMPRESSION PARAMETERS IN TERMS OF SATELLITE ATTITUDE

Slant Range Rate in Terms of Beam Angles

In Fig. B.1, we show an equatorial coordinate system with the satellite at an instantaneous position R_s in the plane of its orbit. The orbit inclination angle is α_i, and the satellite at the instant considered has climbed an angle α in its orbit and has local heading angle ν east of north, latitude ζ_s, and longitude χ_s. Figure B.4 shows the local situation around the satellite. The target point R_t is on the earth surface, so that the plane shown, through R_t and normal to the satellite position vector R_s, cuts R_s somewhat below the surface. The angle ψ measured in that plane is taken relative to the plane determined by R_s and v_s. The local heading ν is the angle from the meridional plane to the latter. The angles γ and θ are defined as shown in Fig. B.4. The satellite motion is given by Eqn. (B.2.3), Eqn. (B.2.4), and Eqn. (B.2.5).

As shown in Fig. B.4, the slant range vector is

R = R_s − R_t   (B.3.1)

The target point moves with the rotating earth, with angular velocity

ω_e = ω_e k   (B.3.3)

where ω_e is the (constant) earth angular velocity. Then, differentiating Eqn. (B.3.2), and using Eqn. (B.3.1) and Eqn. (B.3.3), we obtain

RṘ = v_s · R − ω_e · (R_s × R)   (B.3.5)

From Fig. B.4, we have

R = R cos(γ) u_r + R sin(γ)[−cos(ψ) u_t + sin(ψ) u_p]

Using this with R_s and v_s from Eqn. (B.2.3) and Eqn. (B.2.4), Eqn. (B.3.5) becomes

Ṙ = Ṙ_s cos(γ) − R_s ω_s sin(γ) cos(ψ) + ω_e R_s sin(γ) k·[sin(ψ) u_t + cos(ψ) u_p]
  = Ṙ_s cos(γ) − R_s ω_s sin(γ) cos(ψ) + ω_e R_s sin(γ)[sin(ψ) cos(α) sin(α_i) + cos(ψ) cos(α_i)]   *(B.3.6)

where we write

ω_s = dα/dt

for the instantaneous satellite angular velocity and use Eqn. (B.2.14).

Equation (B.3.6) yields the range rate Ṙ, and thus, from f_D = −2Ṙ(s)/λ, the instantaneous Doppler frequency f_D, for arbitrary pointing angles γ, ψ from satellite to target point. If we have particular reference to the compression filter parameter f_DC, however, we are interested in pointing along the center of the radar beam. In that case, for an exactly side-looking radar, we have ψ = π/2 (looking to the right of track), or ψ = −π/2 (looking left). In operation, however, slight yaw and pitch of the satellite about its nominal forward path lead to an angle ψ which differs from ±π/2 by something typically less than a degree. It is convenient to measure pointing by the angle δ = π/2 − ψ. (With this definition, for a right looking radar with forward squint of say 1°, δ = 1°, while for a left looking radar squinted 1° forward we have δ = 179°.)

From Fig. B.4,

R cos(γ) = R_s − R_t cos(θ)   (B.3.8)

From the upper spherical triangle (1) in Fig. B.1:

sin(ν) cos(ζ_s) = cos(α_i)
0 = cos(α) sin(ζ_s) − sin(α) cos(ζ_s) cos(ν)   (B.3.9)

while for the lower (2):

sin(ζ_s) = sin(α) sin(α_i)   (B.3.10)

Using Eqn. (B.3.10) in Eqn. (B.3.9) yields

cos(ν) cos(ζ_s) = cos(α) sin(α_i)   (B.3.11)

which also holds for sin(α) = 0, as is seen directly from Fig. B.1. Using Eqn. (B.3.7), Eqn. (B.3.8), and Eqn. (B.3.10) in Eqn. (B.3.6), we obtain

ṘR = Ṙ_s[R_s − R_t cos(θ)] − ω_s R_s R_t sin(θ)[sin(δ) − (ω_e/ω_s) cos(ζ_s) cos(δ − ν)]   (B.3.12)

This form of Eqn. (B.3.6) is a result of Barber (1985), taking into account that we have defined ν with the opposite sign.

For the special case of a circular orbit, so that Ṙ_s = 0, Eqn. (B.3.6) becomes

Ṙ = −R_s ω_s sin(γ) cos(ψ){1 − (ω_e/ω_s)[tan(ψ) cos(α) sin(α_i) + cos(α_i)]}   (B.3.13)

This is a result given by Raney (1986).
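As a check on the Seasat numbers, the period Eqn. (B.2.23) and the per-revolution secular changes Eqn. (B.2.20) can be evaluated directly. A sketch, where the value of µ is the standard earth gravitational constant (an assumption, since its value is given earlier in the appendix):

```python
import math

# Seasat orbital elements from the example in the text.
mu  = 3.986005e14          # earth gravitational constant, m^3/s^2 (assumed value)
eps = 2.634e25             # gravitational-field constant from the text, m^5/s^2
a, e, alpha_i = 7161.39e3, 1.86e-3, math.radians(108.02)

P = 2 * math.pi * a**1.5 / math.sqrt(mu)          # sidereal period, Eqn. (B.2.23)
p = a * (1 - e**2)                                # semi-latus rectum, Eqn. (B.2.21)
dOmega = -(2 * math.pi * eps / (p**2 * mu)) * math.cos(alpha_i)          # per rev
domega = (math.pi * eps / (p**2 * mu)) * (5 * math.cos(alpha_i)**2 - 1)  # per rev

# P/60 comes out near 100.5 min, as quoted; for this retrograde orbit
# (alpha_i > 90 deg) the node precesses in the positive sense, dOmega > 0.
```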
Slant Range Acceleration in Terms of Beam Angles

Let us now investigate the other main Doppler parameter, f_R. Differentiating Eqn. (B.3.5) to introduce R̈, we have

RR̈ + Ṙ² = a_s · R + v_s · Ṙ − ω_e · [v_s × R + R_s × Ṙ]   (B.3.14)

Substituting Eqn. (B.3.15) into Eqn. (B.3.14), and using simple identities, we obtain an expression for RR̈. Substituting for R and R_s and its derivatives in terms of u_r, u_t, and u_p from Eqn. (B.2.3), Eqn. (B.2.4), Eqn. (B.2.5), and Eqn. (B.3.1), and substituting u_r, u_t, and u_p in terms of their rectangular components from Eqn. (B.2.14), in order to carry out the operations with ω_e = ω_e k, we obtain (writing explicitly only the terms that survive for a circular orbit; the remaining terms involve Ṙ_s, R̈_s, and ω̇_s)

RR̈ = −Ṙ² + ⋯
    + R_s ω_s²[R_s − R cos(γ)]
    − 2R_s ω_s ω_e{[R_s − R cos(γ)] cos(α_i) + R sin(γ) sin(ψ) sin(α) sin(α_i)}
    + R_s ω_e²{[1 − sin²(α) sin²(α_i)][R_s − R cos(γ)]
               − R sin(γ) sin(α) sin(α_i)[cos(ψ) sin(α_i) cos(α) − sin(ψ) cos(α_i)]}   *(B.3.17)

With this, f_R follows from Eqn. (B.1.18). It is worth noting explicitly in this that R, γ are not independent, either one determining the other through (Fig. B.4):

R_t² = R_s² + R² − 2R_s R cos(γ)

In the particular case of a circular orbit, so that Ṙ_s, R̈_s, and ω̇_s all vanish in Eqn. (B.3.17), we obtain the case considered specifically by Raney (1987). If we further drop the terms of second order in the small quantity ω_e/ω_s, that is, the terms involving ω_e² and Ṙ², Eqn. (B.3.17) becomes

RR̈ = R_s ω_s²[R_s − R cos(γ)] − 2R_s ω_e ω_s{[R_s − R cos(γ)] cos(α_i) + R sin(γ) sin(ψ) sin(α) sin(α_i)}   (B.3.18)

Following Raney (1987), we can note from Fig. B.4 that

R_s − R cos(γ) = R_t cos(θ)   (B.3.19)
and the speed V_g of the point P (Fig. B.4) below the spacecraft nadir point on the earth is …

Returning to the general expression Eqn. (B.3.17), we can use the expression Eqn. (B.3.12) for Ṙ, together with various geometric relations in Fig. B.1 and Fig. B.5, to write, after some labor, the result Eqn. (B.3.22); this corresponds to another expression obtained by Barber (1985).

Examples

As a numerical example, let us consider a Seasat orbit (Barber, 1985) with a = 7161.39 km, e = 0.00186, α_i = 108.02°, Ω = 89.37°, and ω = 148.16°. Suppose we are interested in affairs near a subsatellite point with latitude ζ_c = 60° S, under the descending half of the orbit. If, for simplicity, we assume here (spherical earth) that ζ_s = ζ_c, then ζ_s = −60°. Then Eqn. (B.3.10) yields α = −65.60° or −114.40°. We choose α = −114.40°, corresponding to the descending part of the orbit. (When descending, π/2 ≤ |α| ≤ π, while |α| ≤ π/2 when ascending.)

[Figure: (a) Doppler center frequency and (b) azimuth chirp constant for the Seasat example, versus orbit angle α (deg).]
For certain purposes, expressions for f_DC and f_R having less accuracy than those developed in Section B.1, Section B.2, and Section B.3 suffice. Indeed, for clutterlock and autofocus procedures (Section 5.3), such simplified models are necessary. In this section we will examine some appropriate approximations.

For a circular orbit, Ṙ_s vanishes, and only the second term of the model Eqn. (B.4.4) is present. For an unsquinted beam, with δ = 0, only the earth rotation term remains in Eqn. (B.4.5). However, even with the nearly circular orbit of Seasat (eccentricity e = 0.00186) the satellite can have a dive angle sufficient for the first term of the model Eqn. (B.4.4) to make a noticeable contribution to f_DC. Similarly, a squint of only a small amount can cause the satellite motion term to dominate in Eqn. (B.4.5).

Variation of Doppler Rate with Range

In seeking a similar model for variation of f_R across the range swath, from Eqn. (B.3.17) we could express R̈ in terms of R, sin(γ), cos(γ), and coefficients which do not vary with R along a fixed pointing direction for a frozen satellite, that is, across the swath. The coefficients in such an expression are rather complicated, however, and it is preferable to examine the terms in R̈ for order of magnitude directly before attempting to approximate the significant ones.

As Barber (1985) has indicated, the terms involving parameters of noncircularity of the orbit, that is, Ṙ_s, R̈_s, and ω̇_s, are negligible in determining f_R, for an orbit of the normal small eccentricity used for remote sensing of the earth surface. From Eqn. (B.3.22), for a circular orbit we have:

RR̈ = −Ṙ² + R_s R_t ω_s²{cos(θ) − 2(ω_e/ω_s)[cos(θ) sin(ν) cos(ζ_s) + sin(θ) cos(δ) sin(ζ_s)]
     + (ω_e/ω_s)²[cos²(ζ_s) cos(θ) − sin(θ) sin(ζ_s) cos(ζ_s) sin(δ − ν)]}   (B.4.6)

In this, we can approximate cos(θ) as unity. From Eqn. (B.4.3), for a circular orbit, approximately

RR̈ = V²   (B.4.8)

where the effective velocity V is defined by

V² = R_s R_t ω_s²{1 − 2(ω_e/ω_s)[cos(α_i) + sin(θ) cos(δ) sin(ζ_s)]
     + (ω_e/ω_s)²[cos²(ζ_s) − sin(θ) sin(ζ_s) cos(ζ_s) sin(δ − ν)]}   (B.4.9)

using Eqn. (B.3.9). With the approximation that sin(θ) is small, the parameter V² is nearly independent of range, leading to the model

f_R = −2V²/(λR)   *(B.4.10)

as R varies across the swath. This is the model used in autofocus procedures. Dropping the earth rotation terms in Eqn. (B.4.9), there results

V² = (R_t/R_s) V_s²   (B.4.11)

where V_s is the satellite speed. Introducing the spacecraft altitude H, and recognizing that R_t = R_e, the nominal earth radius, Eqn. (B.4.11) results in the simple approximation

V = V_s/(1 + H/R_e)^(1/2)   (B.4.12)

This expression for V is not accurate enough for other than rough computation, however.
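The rough model Eqns. (B.4.10) through (B.4.12) can be exercised numerically. In the sketch below the earth radius, wavelength, and slant range are assumed values, not fixed by this passage; the resulting Doppler rate comes out at a few hundred Hz/s, the right order for the Seasat example:

```python
import math

Re, lam = 6378.14e3, 0.235      # earth radius (m) and wavelength (m), assumed
a = 7161.39e3                    # Seasat semi-major axis, m
P = 6031.0                       # sidereal period from Eqn. (B.2.23), s
Vs = 2 * math.pi * a / P         # satellite speed for a circular orbit, m/s
H = a - Re                       # spacecraft altitude, m
V = Vs / math.sqrt(1 + H / Re)   # effective velocity, Eqn. (B.4.12)
R = 850e3                        # assumed slant range, m
f_R = -2 * V**2 / (lam * R)      # Doppler rate model, Eqn. (B.4.10), Hz/s
```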
C.1 ASF OPERATIONS
[Figure: ASF functional block diagram, showing the antenna system, RF system, SAR processor system, image analysis workstation, and mission planning system serving the users.]

The ASF standard data products are summarized below.*

Level 1B  Geocoded Image: browse image, 8 bits/pixel, UTM or PS projection; resolution < 200 m. From either Bulk or AOS. 20 browse frames, 1 bulk frame (100 km × 100 km).
Level 2   Ice Type Maps: 3-4 classes, 4 bits/pixel; online: 5 km and 100 m grid products. From GPS. 20 frames (100 km × 100 km).
Level 2   Ocean Wave Spectra: online: contour plots; offline: spectra (512 × 512 pixels); resolution ~25 m. From GPS. 20 framelettes.
Level 3   Ice Motion Maps: ice velocity vectors; online: 5 km and 100 km grid products. From GPS. 10 pairs (100 km × 100 km); regional product (1000 km × 1000 km).

*Definition of data product levels can be found in Table 6.1. UTM, Universal Transverse Mercator; PS, Polar Stereographic.

[Figure: the 10 meter reflector on a Model 3316 elevation over azimuth pedestal, with counterweight arm and optional automatic tilt mechanism.]
Figure C.4 Dish installed on top of the eight-floor Elvey Building at the University of Alaska in Fairbanks.
C.3 THE SAR PROCESSOR SYSTEM

The SAR Processor System (SPS) consists of a custom built hardware correlator, a post-processor, a control computer, several high density tape drives, and a laser film recorder. The processing is initiated by the AOS (based on scientist request) by specifying the high-density tape identification number and the data acquisition time (in GMT). Tapes are mounted by the SPS operator for both input and output. The HDDRs are Ampex DCRSi transverse scan cassette drives controlled via an RS-232 interface. Each high density cassette tape has a 48 GB storage capacity. Control of the SPS, shown in Fig. C.5, is based around a Masscomp MC5600 computer workstation, augmented by a board level array processor.

The standard operational data processing scenario is as follows. The downlink data headers are stripped from the signal data by the SPS input interface and
transferred to the Masscomp for analysis and processing parameter generation. The processing is performed in two passes over the data. During the first pass, autofocus and clutterlock measurements are performed to derive estimates of the Doppler parameters. The tape is then rewound to retransfer the data for image correlation. Given this processing scenario, the effective throughput of the SPS is that about 5 hours are required to process 52 minutes of data, although the image formation process (pass 2) actually operates at only a factor of three slowdown relative to the real-time acquisition rate. We will describe each of these processing stages in more detail.

Preprocessing

The preprocessing data analysis is performed at three points (beginning, center, and end) within a 100 km image frame, or at approximately 50 km intervals for strip mode processing. The main function of the preprocessing is Doppler centroid and Doppler rate parameter estimation. At each preprocessing location a four-look correlation is performed on 2048 range lines; the four single-look images are sent to the Masscomp for clutterlock and autofocus analysis (see Chapter 5). In addition to the azimuth spectral analysis (for f_DC) and the look correlation (for f_R), a correlation between looks in the range direction is performed to check for PRF ambiguity. The cross-track misregistration for a single ambiguity is estimated to be 2.7 pixels.

The preprocessing stage also performs data quality assurance (QA) and generates calibration correction factors. The following QA operations are performed at each preprocessing location: (a) Bit error rate (BER) measurement from the pseudo noise (PN) code at the start of each range line; (b) SNR estimate from the range spectra; and (c) Raw data histogram to verify the I, Q quantization balance in both amplitude and phase. The calibration analysis includes: (a) Setting the processor gain (i.e., FFT scaling for the fixed point arithmetic modules); (b) Estimation of the range transfer function from the chirp replicas in the header (see Chapter 7); and (c) Determination of the absolute image location. The cross-track radiometric corrections are calculated off-line and stored in the Masscomp database. These corrections are applied following range compression by multiplying the complex data with a real weighting function.

Based on the preprocessing data analysis, all reference functions and resampling coefficients are precalculated and stored in the Masscomp CPU memory. The correlator registers are memory mapped into the control computer main memory, thus permitting the correlator to behave as a slave on the computer bus. The processing is essentially static as all control parameters are downloaded prior to a processing run.

Correlation

The image correlator is a custom designed system comprising a single rack (35 boards) of digital hardware. The system is a second generation design based on the Advanced Digital SAR Processor (ADSP) built by NASA/JPL for real-time Seasat and Magellan SAR processing. The ADSP, described in Chapter 9, is a multimode, block floating point, pipeline processor with built-in hardware modules for autofocus and clutterlock. It has two azimuth correlation modules, permitting either the SPECAN or the frequency domain (fast) convolution algorithm to be used for the azimuth compression operation. In comparison with the ADSP, the Alaska SAR Processor (ASP) is a simplified design in that all computations are fixed point (16 bit I, 16 bit Q); the preprocessing operations are performed off-line; and the system can only perform fast convolution azimuth compression. Table C.2 compares the architecture of the ADSP and the ASP correlators. The ASP uses less than half the number of integrated circuit chips (ICs) and only 40% of the ADSP power consumption. The ASP is pictured in Fig. C.6; in the left-hand rack are the digital boards; the right-hand rack contains the Ampex recorders and the data routing assembly (SARA).

TABLE C.2 Comparison of the ADSP and the ASP Hardware Configurations

                           Advanced Digital SAR    Alaska SAR Processor
                           Processor (ADSP)        (ASP)
Total number of boards     76                      35
Number of unique boards    27                      18
Number of racks            2                       1
Number of ICs              27,500                  13,000
Power consumption          12.5 kW                 5 kW
Clock rate                 20 MHz                  10 MHz
Computation rate           6.5 GFLOPS              3.3 GOPS
Application                Magellan, Seasat        E-ERS-1, J-ERS-1, Radarsat

Source: Courtesy of T. Bicknell.

The SPS functional modules and pipelined data flow are shown in Fig. C.7 (Carande and Charney, 1988). Any of these modules can be bypassed for test or to accommodate special modes (e.g., onboard range compression). Both the range processor and the azimuth processor can accommodate up to 8K point complex FFTs. To minimize the effect of round-off errors from the fixed point FFT arithmetic, range processing is performed by dividing the input range line into overlapping 2K sample segments. Each segment is processed separately and the data reassembled, resulting in an 8K compressed range line. The corner turn memory is divided into three pages. One page accepts the range processor output in range line direction, while the other two pages output data to the azimuth processing module in the along-track direction. Each page is 8 Msamples (16I, 16Q) resulting in a total memory size of 96 MB.

The azimuth processor can perform either one-look or four-look (quarter aperture) processing. The range migration correction module can accommodate up to 128 samples of range walk; it also performs slant to ground range correction. The azimuth correlation is performed at zero Doppler to eliminate
Figure C.7 Functional block diagram of SPS correlator modules showing data flow.
in a range-line format. The output data is sent both to a HDDR for recording and to an averaging module (8 pixels × 8 pixels) to produce a low resolution image for on-line archive and image display. All ancillary data is transferred to the post-processor along with the low resolution imagery for preparation of the CEOS standard format data files. This data is then transferred to the dual ported disks for data staging, prior to file transfer to the AOS.
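The overlapping-segment range compression described above can be sketched with plain Python; this toy version substitutes direct convolution for the ASP's fixed-point FFT hardware, and uses small illustrative sizes rather than the 2K segment / 8K line values:

```python
# Overlap-save segmentation: each segment overlaps the previous one by
# len(h) - 1 samples, and only the fully-overlapped ("valid") outputs of
# each segment are kept and reassembled.
def convolve(x, h):
    # direct linear convolution, length len(x) + len(h) - 1
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def segmented_convolve(x, h, seg_len):
    m = len(h) - 1
    x_padded = [0.0] * m + list(x)        # prime the first segment with zeros
    out = []
    start = 0
    while start + m < len(x_padded):
        seg = x_padded[start:start + seg_len]
        full = convolve(seg, h)
        out.extend(full[m:len(seg)])      # discard the m wrap-in samples
        start += seg_len - m              # slide by the hop size
    return out[:len(x)]
```

The point of the overlap is that each segment's first len(h) − 1 outputs depend on samples outside the segment and must be discarded; the hop size seg_len − (len(h) − 1) makes the kept regions tile the full line exactly.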
C.4 ARCHIVE AND OPERATIONS SYSTEM

… satellite location for data acquisition planning. The GPS products are available on-line in a graphical format using the GKS graphics standard. The AOS also supports automated interactive queries from the GPS, transferring geocoded images to this system and receiving the high level GPS products.

C.5 THE GEOPHYSICAL PROCESSOR SYSTEM

The Geophysical Processor System (GPS) derives information about the surface characteristics from the SPS image products. This system has three primary functions: (a) Multitemporal ice motion tracking; (b) Ice type classification; and (c) Wave spectra analysis (Fig. C.9). The system is designed for fully automated operations, performing quality assurance checks to ensure product
[Figure C.9: Functional overview of the GPS. Geocoded images feed ice motion tracking (ice motion product) and wave spectra analysis (wave parameters product), supported by ancillary data (buoy, temperature, wind, ice edge map) and the image/product databases.]
consistency. A high level interface with the AOS performs database queries, electronic transfer of image data to the GPS, and transfer of geophysical products to the AOS. The software is designed to be modular, to allow flexibility in its architecture for post-launch optimization of the system performance. The GPS requires input data that has been previously geocoded to either a Polar Stereographic (PS) or Universal Transverse Mercator (UTM) projection. Additionally, the GPS requires that the SPS perform radiometric corrections to remove the cross-track power variation (see Chapter 7), although it does have limited capability to remove residual calibration errors (e.g., cross-track intensity ramps). The software is implemented on a Sun 4/260 scientific workstation augmented by a Sky Warrior array processor. The GPS also uses an INGRES DBMS to archive its data files. We will briefly discuss each of the GPS algorithms and its output data products in the following subsections.

Ice Kinematics

The ice motion tracker performs matching of common ice floes in SAR image pairs separated by time scales of days to weeks. A diagram of the motion algorithm is shown in Fig. C.10. The candidate image pairs are selected from a listing of all recently acquired image data received daily from the AOS database. The location of each newly acquired image is input to a motion predictor algorithm that uses National Weather Service (NWS) wind and drifting buoy data to select the most probable archive images for matching (Colony and Thorndike, 1984).

Figure C.10 Ice motion and ice classification algorithms (Kwok et al., 1990).

Figure C.11 Ice motion pair from Beaufort Sea acquired by Seasat illustrating the rotation and deformation over a three day period: (a) Rev. 1438 acquired October 5, 1978; (b) Rev. 1481 acquired October 8, 1978; (c) Edge maps; and (d) Motion vectors.
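The matching at the heart of the tracker can be illustrated with a toy normalized cross-correlation search over integer shifts (the operational tracker is hierarchical and, for the marginal zone, feature-based; the array sizes here are arbitrary):

```python
# Toy area-correlation step: find the integer (dy, dx) shift that maximizes
# normalized cross-correlation between a patch of image 1 and a search
# window in image 2. Images are lists of rows of pixel values.
def ncc(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_shift(img1, img2, patch_origin, patch_size, search):
    r0, c0 = patch_origin
    patch = [img1[r][c] for r in range(r0, r0 + patch_size)
                        for c in range(c0, c0 + patch_size)]
    best, best_score = (0, 0), -2.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = [img2[r + dy][c + dx] for r in range(r0, r0 + patch_size)
                                         for c in range(c0, c0 + patch_size)]
            s = ncc(patch, cand)
            if s > best_score:
                best_score, best = s, (dy, dx)
    return best
```

The sparse vectors found this way are what initialize the denser, gridded correlation product described below.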
Figure C.12 Ice motion output products on 5 km grid from Beaufort Sea acquired by Seasat: (a) Rev. 1409 acquired October 3, 1978; (b) Rev. 1452 acquired October 6, 1978; (c) Translational vector grid; and (d) Rotational grid.
The selected image pair is first evaluated using a coarse feature extraction technique to generally determine the area of common floes. The image pair is then categorized as either pack ice or marginal zone ice. Since the pack ice undergoes little rotation within the time scales considered in this system, this category of ice imagery can be processed using a straightforward hierarchical area correlation technique. Conversely, ice in the marginal zone can move several dozen kilometers per day and undergoes both deformation and rotation (Fig. C.11). The matching procedure for ice in the margin is based on a feature extraction procedure that is invariant to rotation and insensitive to deformation (Kwok et al., 1990). This procedure is used to derive a sparse field of rotational and translational vectors that are in turn used to initialize an area correlation routine which produces a regular gridded output product (Fig. C.12). These ice motion products (100 km × 100 km) are averaged and overlaid on land boundary maps to derive a regional (Arctic) time series product.

Ice Classification

The ice classification routines identify the various types of ice based on the measured radiometric brightness of the SAR image pixels. The algorithm requires information on surface temperature derived either from NWS data or by passive radiometry (e.g., AVHRR). The temperature information is used to select a look-up table (LUT) for the classification. The image is first segmented into three or four classes using a clustering algorithm. These classes are then related to ice types using a maximum likelihood classifier given the target scattering information in the LUT. The scattering characteristics of the various ice types are based on ground scatterometer measurements (e.g., Onstott et al., 1979). The major ice types, categorized by age, are: (a) Multi-year ice; (b) First-year ice; (c) New ice; and (d) Open water. Currently, it is expected that this procedure can produce reliable (95% correct) results only during the winter season, October to May. The large difference in dielectric constant between sea ice covered with dry snow and sea ice covered with snow containing free water molecules and melt ponds makes classification during the summer (June to September) season significantly more complex and not sufficiently reliable for an operational system. An illustration of the performance of the classification algorithm is shown by the comparison of simulated E-ERS-1 data (from the NASA/JPL DC-8 aircraft) with the NORDA KRMS 33.6 GHz passive radiometer system (Eppler et al., 1986) in Fig. C.13 (Holt et al., 1990a).

[Figure C.13: JPL aircraft SAR C-band VV subscene compared with KRMS passive microwave imagery.]

Ocean Wave Spectra

The GPS ocean wave spectra routine is designed to extract wave parameters from the SAR image data (Holt et al., 1990b). Its input is a full resolution (four-look) image that is output from the SPS on HDDT. These data are read into the AOS where they are subdivided into 512 × 512 pixel blocks for processing by the GPS. The functional block diagram of the wave product generation algorithm is given in Fig. C.14. The processing consists of a two dimensional transform of each 512 × 512 block of data, followed by a Gaussian smoothing filter to reduce the noise. The width of this filter is a parameter that can be adjusted by the user. A peak finding routine is then used to locate the significant wave peaks in the smoothed spectra. These peaks are defined as local maxima, when compared to the image mean, which are separated by some minimum distance from other maxima. From these peak locations, the wave number is given by the radial distance of the peak from the origin; the wave direction is given by its polar angle relative to the image axis. No corrections are applied to compensate for the SAR system impulse response function, or for nonlinear moving surface modulations. An example of the image spectra and resultant contour plots is given in Fig. C.15. The contours will be available to users as an online graphic display; however, the smoothed spectra will only be distributed on hard copy digital media.

[Figure C.14: Wave product generation flow: subscene, Fourier analysis, smoothed spectra, peak detection, contour plot generation.]
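The peak search just described (local maxima above the image mean, kept only if separated by a minimum distance, strongest first) can be sketched as follows; the 4-neighbour maximum test and the array sizes are illustrative choices, not the operational 512 × 512 algorithm:

```python
# Find wave peaks in an already-smoothed 2-D spectrum, given as a list of rows.
def find_wave_peaks(spec, min_sep):
    rows, cols = len(spec), len(spec[0])
    mean = sum(map(sum, spec)) / (rows * cols)
    # candidate peaks: local maxima (4-neighbour test) above the image mean
    cands = [(spec[r][c], r, c)
             for r in range(1, rows - 1) for c in range(1, cols - 1)
             if spec[r][c] > mean
             and spec[r][c] > spec[r - 1][c] and spec[r][c] > spec[r + 1][c]
             and spec[r][c] > spec[r][c - 1] and spec[r][c] > spec[r][c + 1]]
    cands.sort(reverse=True)                  # strongest peaks first
    peaks = []
    for v, r, c in cands:
        # keep only peaks at least min_sep away from every accepted peak
        if all((r - pr) ** 2 + (c - pc) ** 2 >= min_sep ** 2 for pr, pc in peaks):
            peaks.append((r, c))
    return peaks
```

From each accepted peak (r, c), the wave number would follow from its radial distance to the spectrum origin and the wave direction from its polar angle, as described above.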
C.6 SUMMARY
The Alaska SAR Facility is the first fully integrated SAR ground data system in that it routinely acquires and processes SAR data from Level 0 to Level 3. Additionally, this facility performs mission planning and has a limited capacity for electronic distribution of data, permitting rapid data access by the science team. We present this system as an example of the type of end-to-end design required for the "EOS era" ground data and information system. This system has a throughput capacity of close to a terabit of SAR data per (24 hour) day. This computational capacity is balanced with automated systems for archiving and cataloging these image products, as well as a system to derive information from the images and to produce a large number of reduced volume (non-image) high level data products for direct analysis by the science team.

We consider the ASF as a pathfinder system in that it addresses a number of the technical challenges facing the EOS ground data system. NASA plans to use this facility for testing advanced concepts in mission planning, data integration, electronic browse, and high rate data distribution. We fully expect that the ASF will contribute significantly not only to our understanding of polar oceanography, but also to our ability to develop and operate large, integrated ground data systems.

Figure C.15 Seasat SAR image of Chukchi Sea (10/9/78) showing four 512 × 512 pixel framelettes and their spectral contour plots: (A) Open sea; (B) Frazil ice; (C) Pancake ice; and (D) Open sea (Holt et al., 1990b).
REFERENCES

Carande, R. E. and B. Charney (1988). "The Alaska SAR Processor," Proc. IGARSS '88, Edinburgh, Scotland, ESA SP-284, pp. 695-698.
Carsey, F. and W. Weeks, eds (1989). "Science Plan for the Alaska SAR Facility," JPL Pub. 89-14, Jet Propulsion Laboratory, Pasadena, CA.
CEOS (1988). "Committee on Earth Observations Satellites: SAR Data Product Format Standard," Rev. 2, ESA ESRIN, Frascati, Italy.
Colony, R. and A. S. Thorndike (1984). "An Estimate of the Mean Field of Arctic Sea Ice Motion," J. Geophys. Res., 89, pp. 10623-10629.
Eppler, D. T., L. D. Farmer and A. W. Lohanick (1986). "Classification of Sea Ice Types with Single-Band (33.6 GHz) Airborne Passive Microwave Imagery," J. Geophys. Res., 91, pp. 10661-10695.
Holt, B., R. Kwok and E. Rignot (1990a). "Status of the Ice Classification Algorithm in the Alaska SAR Facility Geophysical Processor System," Proc. IGARSS '90, Washington, DC, pp. 2221-2224.
Holt, B., R. Kwok and J. Shimada (1990b). "Ocean Wave Products from the Alaska SAR Facility Geophysical Processor System," Proc. IGARSS '90, Washington, DC, pp. 1469-1473.
Kwok, R., J. C. Curlander, R. McConnell and S. S. Pang (1990). "An Ice-Motion Tracking System for the Alaska SAR Facility," IEEE J. Oceanic Eng., 15, pp. 44-54.
Onstott, R. G., R. K. Moore and W. F. Weeks (1979). "Surface-Band Scatterometer Measurements of Sea Ice," IEEE Trans. Geosci. Elec., GE-17, pp. 78-85.

APPENDIX D

NONLINEAR DISTORTION ANALYSIS
For a linear, time-invariant system, where the principle of superposition applies,
a stimulus such as a step or a series of sinusoids is a suitable input for a complete
characterization of the system. Assuming causality, the resulting impulse
response can be used to predict the output for any input from the standard
convolution integral:

    r(t) = \int_0^{\infty} h(t') \, s(t - t') \, dt'                        (D.1)

where s(t) is the input signal, r(t) is the output, and h(t) is the impulse response
function (Appendix A). In a nonlinear system, however, the output function is
not a simple convolution using the input. A sinusoidal stimulus can produce
an output not only at the input (fundamental) frequency, but at all higher
harmonics of this frequency. Since the relative contribution of these harmonics
to the system response depends on the stimulus amplitude and frequency
characteristic, there can be no single transfer function that will predict the
response to a general input. Instead, a separate characterizing function would
be required for the response of the system to each amplitude and frequency of
input.
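This amplitude dependence is easy to demonstrate numerically. In the sketch below (the cubic saturation and all parameter values are invented for illustration; they do not come from the text), a sinusoid is passed through a memoryless nonlinearity and the DFT shows the third harmonic growing with input amplitude:

```python
import numpy as np

def nonlinear_system(s):
    # Hypothetical memoryless nonlinearity: soft (cubic) saturation.
    return s - 0.3 * s**3

def harmonic_amplitudes(amplitude, f0=5, fs=1000, n=1000):
    # Drive the system with an integer number of sinusoid periods and
    # read the DFT amplitudes at the fundamental and the third harmonic.
    t = np.arange(n) / fs
    r = nonlinear_system(amplitude * np.sin(2 * np.pi * f0 * t))
    spectrum = np.abs(np.fft.rfft(r)) / (n / 2)   # bin spacing is fs/n = 1 Hz
    return spectrum[f0], spectrum[3 * f0]

fund_small, third_small = harmonic_amplitudes(0.1)
fund_large, third_large = harmonic_amplitudes(1.0)

# For the small input the third harmonic is negligible; for the large input it
# is nearly 10% of the fundamental, so no single (amplitude-independent)
# transfer function predicts both responses.
print(third_small / fund_small, third_large / fund_large)
```

Since s - 0.3 s^3 maps A sin(wt) to (A - 0.225 A^3) sin(wt) + 0.075 A^3 sin(3wt), the printed ratios are about 7.5e-4 and 9.7e-2, consistent with the identity sin^3(wt) = (3 sin(wt) - sin(3wt))/4.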
To circumvent this problem, the traditional approach in characterizing a
system is to assume linearity and to use a small amplitude test input to produce
a transfer function dependent only on the fundamental component of the
frequency response. This approach has been used extensively in the characterization
of nonlinear physiological systems (Rodieck, 1965; Tate, 1971; Toyoda, 1974).
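In discrete time, this procedure amounts to measuring the impulse response with a low-amplitude impulse and then predicting outputs with a convolution sum. The toy system below (a leaky integrator followed by a cubic saturation, invented for illustration) shows the resulting linear model holding for small inputs and failing for large ones:

```python
import numpy as np

def system(s, a=0.8):
    # Hypothetical test system: linear first-order dynamics followed by a
    # static cubic nonlinearity (a simple Wiener-type cascade).
    out = np.empty(len(s))
    acc = 0.0
    for i, x in enumerate(s):
        acc = a * acc + x            # linear stage with memory
        out[i] = acc - 0.2 * acc**3  # memoryless nonlinearity
    return out

# Small-amplitude impulse -> approximate impulse response of the linear part.
eps = 1e-3
impulse = np.zeros(64)
impulse[0] = eps
h = system(impulse) / eps

def relative_error(amplitude):
    # Compare the true response with the linear (convolution) prediction.
    s = amplitude * np.random.default_rng(0).standard_normal(512)
    r_true = system(s)
    r_linear = np.convolve(s, h)[:len(s)]
    return np.linalg.norm(r_true - r_linear) / np.linalg.norm(r_true)

print(relative_error(0.01))  # small input: linear model is accurate
print(relative_error(1.0))   # large input: the cubic term dominates the error
```

The small-signal model is exact only in the limit of vanishing amplitude; at unit amplitude the unmodeled cubic term is comparable to the linear response itself.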
In some respects, a linear characterization of a nonlinear system contradicts
the original intent in measuring the system performance. For this reason,
Marmarelis (1978) proposed the use of white noise analysis to characterize the
nonlinear behavior of systems.

The basic concept behind use of a Gaussian distributed white noise input is
that the system response can be evaluated across all possible inputs, spanning
both the system bandwidth and dynamic range. A nonlinear system response
can be represented as a power series with functionals (i.e., functions of functions)
as terms (Volterra, 1959). For any system with input s(t), the output r(t) can
be described by a Volterra functional series as

    r(t) = \sum_{n=0}^{\infty} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} k_n(t'_1, \ldots, t'_n) \, s(t - t'_1) \cdots s(t - t'_n) \, dt'_1 \cdots dt'_n        (D.2)

where k_n(t'_1, \ldots, t'_n) are the Volterra kernels. The Volterra series description is
very powerful conceptually, but practically it is rarely used since no simple
method exists for calculating the kernels of the system. This problem can be
solved by using a functional series originally proposed by Wiener (1958) that
simplifies evaluation of the system kernels by making the terms of the Volterra
series orthogonal for a specific stimulus. Since, to exhaustively test a nonlinear
system, the stimulus must cover all possible amplitudes and frequencies over
which the system operates, Wiener chose a Gaussian white noise (GWN) input
from which to construct a hierarchy of orthogonal functionals. Setting the zero
order Wiener functional to a constant value h_0, i.e.,

    G_0[h_0; s(t)] = h_0        (D.3)

the first functional is then given by

    G_1[h_1; s(t)] = \int_0^{\infty} h_1(t') \, s(t - t') \, dt' + k_1        (D.4)

where h_1(t') is the first order Wiener kernel and k_1 is the constant term necessary
to make the first order functional orthogonal to h_0. To determine k_1, we solve
the equation

    E\{ G_0[h_0; s(t)] \, G_1[h_1; s(t)] \} = 0        (D.5)

Since the GWN input has zero mean, this requires k_1 = 0, so that

    G_1[h_1; s(t)] = \int_0^{\infty} h_1(t') \, s(t - t') \, dt'        (D.6)

which is simply the convolution integral. The first order functional, in fact, can
be used as a measure of the linearity of a system, since the difference between
the measured nonlinear response and the predicted linear response is that system
component which cannot be characterized as linear.

For nonlinear systems, higher order functionals are required to describe the
system behavior. To construct the higher functionals, a procedure similar to
the Gram-Schmidt orthogonalization technique is used. The resulting second
and third order functionals are given by

    G_2[h_2; s(t)] = \int_0^{\infty} \int_0^{\infty} h_2(t'_1, t'_2) \, s(t - t'_1) s(t - t'_2) \, dt'_1 \, dt'_2 - P \int_0^{\infty} h_2(t', t') \, dt'        (D.7)

    G_3[h_3; s(t)] = \int_0^{\infty} \int_0^{\infty} \int_0^{\infty} h_3(t'_1, t'_2, t'_3) \, s(t - t'_1) s(t - t'_2) s(t - t'_3) \, dt'_1 \, dt'_2 \, dt'_3
                     - 3P \int_0^{\infty} \int_0^{\infty} h_3(t'_1, t'_2, t'_2) \, s(t - t'_1) \, dt'_1 \, dt'_2        (D.8)

where P is the power spectral density of the white noise. The power level P of
the white noise input is assumed constant for all frequencies over which the
system operates and is equivalent to the Fourier transform of the autocorrelation
function of the noise input n(t). In a form similar to the Volterra series in
Eqn. (D.2), the response of a nonlinear system can be written in terms of Wiener
functionals as

    r(t) = \sum_{n=0}^{\infty} G_n[h_n; s(t)]        (D.9)

where, for the GWN stimulus, the functionals are mutually orthogonal:

    E\{ G_i[h_i; s(t)] \, G_j[h_j; s(t)] \} = 0        (D.10)

for all i \neq j.

Because of this relationship between the functionals, the kernels can be easily
calculated as follows (Lee and Schetzen, 1965):

    h_n(t'_1, \ldots, t'_n) = \frac{1}{n! P^n} E\left\{ \left[ r(t) - \sum_{k=0}^{n-1} G_k(t) \right] s(t - t'_1) \cdots s(t - t'_n) \right\}        (D.14)
Therefore, using Eqn. (D.9), the output r(t) of any nonlinear system can be
exactly characterized for any input signal s(t). The number of terms required
in the summation depends on the degree of nonlinearity of the system. Typically
three to four terms are sufficient.
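The first order case of Eqn. (D.14) is straightforward to apply numerically: with h_0 equal to the mean response, h_1(tau) = (1/P) E{[r(t) - h_0] s(t - tau)}. The sketch below (the test system and its kernel are invented for illustration) recovers the linear kernel of a discrete-time system from its GWN response; the even-order (squarer) term averages out of the first order cross-correlation:

```python
import numpy as np

g = np.array([1.0, 0.5, 0.25, 0.125])   # linear kernel of the test system

def system(s):
    # Hypothetical nonlinear test system: FIR filter followed by a squarer.
    x = np.convolve(s, g)[:len(s)]
    return x + 0.3 * x**2

rng = np.random.default_rng(1)
n = 200_000
s = rng.standard_normal(n)   # GWN stimulus; per-sample power P = var(s) ~ 1
r = system(s)

# Eqn. (D.14) with n = 1: cross-correlate the residual response with the
# delayed stimulus and normalize by the noise power.
P = s.var()
h0 = r.mean()                # zero order kernel: the mean response
h1 = np.array([np.mean((r[k:] - h0) * s[:n - k]) for k in range(len(g))]) / P

print(h1)   # close to g: the squarer does not leak into the first order kernel
```

The estimate converges at the Monte Carlo rate, so the record length (here 200,000 samples) sets the kernel accuracy; the squarer's only trace at this order is through h_0.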
REFERENCES

BIBLIOGRAPHY
Skolnik, M. I. Introduction to Radar Systems, 2nd Ed., McGraw-Hill, New York, 1980.
Broad range of topics in radar, with extensive pointers to the literature.
Ulaby, F. T., R. K. Moore and A. K. Fung, Microwave Remote Sensing: Active and Passive.
Vol. 1, Microwave Remote Sensing Fundamentals and Radiometry, Addison-Wesley,
Reading, MA, 1981. Vol. 2, Radar Remote Sensing and Surface Scattering and Emission
Theory, Addison-Wesley, Reading, MA, 1982. Vol. 3, From Theory to Applications,
Artech House. Encyclopedic. Remote sensing physics, with a long chapter on SAR
(Vol. 2, Ch. 9).
"Spaceborne Imaging Radar Symposium, January 17-20, 1983," Publ. 83-11, Jet
Propulsion Lab., Pasadena, July 1, 1983.
"The Second Spaceborne Imaging Radar Symposium, April 28-30, 1986," Publ. 86-26,
Jet Propulsion Lab., Pasadena, December 1, 1986.
"Shuttle Imaging Radar-C Science Plan," Publ. 85-29, Jet Propulsion Lab., Pasadena,
September 1, 1986.
Wehner, D.R. High Resolution Radar, Artech House, Norwood, MA, 1987. Techniques
for wideband radar systems, including synthetic aperture. Special attention to inverse
synthetic aperture (ISAR) systems.
Survey Articles
Ausherman, D. A., A. Kozma, J. L. Walker, H. M. Jones and E. C. Poggio, "Developments
in radar imaging," Trans. IEEE Aero. and Elec. Sys., AES-20(4), 1984, pp. 363-400.
Barber, B. C. "Theory of digital imaging from orbital synthetic-aperture radar," Int. J.
Rem. Sens., 6(7), 1985, pp. 1009-1057.
Elachi, C., T. Bicknell, R. L. Jordan and C. Wu, "Spaceborne synthetic-aperture
imaging radars: Applications, techniques, and technology," Proc. IEEE, 70( 10), 1982,
pp. 1174-1209.
Moore, R. K. "Radar fundamentals and scatterometers," Chapter 9 in Manual of Remote
Sensing, 2nd Ed., Vol. I Theory, Instruments, and Techniques (Colwell, R. N.,
D. S. Simonett and F. T. Ulaby, eds.), American Society of Photogrammetry, Falls
Church, VA, 1983.
Moore, R. K. "Imaging radar systems," Chapter 10 in Manual of Remote Sensing, 2nd
Ed., Vol. I Theory, Instruments, and Techniques (Colwell, R. N., D. S. Simonett and
F. T. Ulaby, eds.), American Society of Photogrammetry, Falls Church, VA, 1983.
Tomiyasu, K. "Tutorial review of synthetic-aperture radar (SAR) with applications to
imaging of the ocean surface," Proc. IEEE, 66(5), 1978, pp. 563-583.
Reports
Cimino, J. B., B. Holt and A. H. Richardson, "The Shuttle Imaging Radar B (SIR-B)
Experiment Report," Publ. 88-2, Jet Propulsion Lab., Pasadena, March 15, 1988.
Ford, J. P., R. G. Blom, M. L. Bryan, M. I. Daily, T. H. Dixon, C. Elachi and E. C. Xenos,
"Seasat Views North America, the Caribbean, and Western Europe with Imaging
Radar," Publ. 80-67, Jet Propulsion Lab., Pasadena, November 1, 1980.
Ford, J. P., J. B. Cimino, B. Holt and M. R. Ruzek, "Shuttle Imaging Radar Views the
Earth From Challenger: The SIR-B Experiment," Publ. 86-10, Jet Propulsion Lab.,
Pasadena, March 15, 1986.
Fu, L.-L. and B. Holt, "Seasat Views Oceans and Sea Ice With Synthetic-Aperture
Radar," Publ. 81-120, Jet Propulsion Lab., Pasadena, February 15, 1982.
Kasischke, E. S., G. A. Meadows and P. L. Jackson, "The Use of Synthetic Aperture
Radar Imagery to Detect Hazards to Navigation," Environmental Research Institute
of Michigan, Ann Arbor, 1984.
Pravdo, S. H., B. Huneycutt, B. M. Holt and D. N. Held, "Seasat Synthetic-Aperture
Radar Data User's Manual," Publ. 82-90, Jet Propulsion Lab., Pasadena, March 1,
1983.
MATHEMATICAL SYMBOLS

Estimate of Fresnel coefficient (complex image)
Geocentric latitude
Satellite latitude (spherical coordinate)
Target latitude (spherical coordinate)
Incidence angle between radar beam and surface normal
θ  Antenna horizontal pattern angle
One of the polar coordinates (R, θ, φ)

LIST OF ACRONYMS

INDEX
Antenna: Faraday rotation, 163, 315. 317 Bayman, R. W.. 297 Carande, R. E., 601
active array. 276-277. 318-319. 335-336. 357 oxygen absorption band, 5 BBN Labs. 470 Carlson, A. B.. 283
aperture efficiency. 83, 273 propagation (group) delay. 315. 317. Beckman, P .• 285 Carsey, F .. 592
aperture field distribution. 78-79. 85. 94-96 379-381 Bennett. J. R.. 33. 187, 190, 194. 223. 226. 234 Carver,K.,59,274 ·
ASF ground station. 597. 598 pulse dispersion, 317 Bergland. G. D .. 559 Cassini Titan Radar Mapper (CTRM), 40, 42
beamwidth. 87-88 refraction. 303-304 Berkowitz. R. S.. 257 Causal system, 538, 615
cross-polarized pattern. 278. 351 water vapor absorption band. 5. 315 Bernoulli's theorem, 285 CCRS (Canadian Center for Remote
cross-talk, 351 Attema, E .• 329. 330. 331 Bessel function, 150. 257 Sensing), 33-35
current distribution function. 150 Aushermann, D. A. 504 Binary phase codes, 265 Central limit theorem, 215, 260
directional temperature. 105 Autocorrelation function of image, 232-233 Bistatic radar, 96 Centre National d'Etudes Spatiales (CNES).
directivity pattern. 77. 83-91, 104. 273. 335 Autofocus,222,234-237,320 Bit error rate (BER). 286, 600. See also Noise. 40
effective aperture. 95 in ASF processor. 600 bit error CEOS standard. 603
effective isotropic radiated power (EIRP), by image contrast, 234 Black body radiation, 102. 104-105 Chain Home network, 27
in polar processor. 529-534 Blahut R. E.. 561 Chang, C. Y., 197, 238, 242, 243, 245, 367,
74,341
by subaperture image correlation, 235-237 Blanchard. A. 318 436,443,493
Fraunhofer region, 79
Automatic gain control, 271-273 Blitzer. D. L.. 31 Chang, P. C., 499
Fresnel region, 78
AVHRR,611 Block adaptive quantization (BAQ), 289-293 Chen, W. H .• 494
gain function, 76. 80-84. 127. 223, 273
Azimuth: Block floating point quantization (BFPQ), Chinese Academy of Sciences, 40
Goldstone ground station. 176, 178. 179,
ambiguities, 20. 88-89. 298-303 289-294 Chinese remainder theorem, 243, 302
180. 181
bandwidth time product, 170. 223 Bohm,D.. 97 Chirp:
microstrip phased array, 274-275
compression filter parameters. 588-591 Boltzmann's constant, 75. 97 effective rate, 205
minimum ambiguity area. 21, 274
bandwidth, 434-435 Boresight unit vector, 430 pulse, 133
noise, 106-108
length (in samples), 435 Bracewell, R. N .. 149 replica (for calibration). 329-331, 364, 600
polarization purity, 278
filter spectra. 238 Bragg, scattering, 51-52, 321. 365 step, 133
power pattern, 81
frequency domain processing. 196-208 Brigham. E. 0 .. 558 Chu, S., 560
quad-polarized design. 274-278
resolution, 169-171, 524 Brightness temperature, 102-104 Circular convolution, 549-553
radiation efficiency. 74. 82-84, 106, 273
signal processing overview. 167-169 British Aerospace of Australia, 464 Circulator, ferrite, 276
reciprocity. 95, 341, 352. 364
skew. 190-193,390-392,603 British Royal Aircraft Establishment (RAE), Clustering algorithm, 611
sidelobes, 86-88
spectral analysis. 100 155 Clutterlock, 190, 222-234, 301, 320
slotted waveguide array. 275
time domain processing, 187-196 Brookner, E., 27, 163, 317, 380 in ASF processor, 600
two-way power pattern. 228
Azimuthal range compression. 203 Brown. W. M .. 241 using Doppler spectral analysis, 224-226
uniformly illuminated aperture. 85-86. 88
Browse (of SAR imagery): using energy balance, 227-228
Yagi. 34
Backscatter coefficient: requirements. 488-489 using minimum variance estimator,
AOS (Archive and Operations System). 592.
mean,93. 122, 139.214,322 •system design. 487-489 228-231
603-605 using time domain correlation, 231-234
specific.91-92. 136 Brunfeldt, D.R., 341, 342
Apogee. 573, 576 Clutter rejection filter, 222
Bandlimiting. 238. 546 B-scan. 27
Apollo Lunar Sounder Experiment. 34-38
filter, 545 Burrus, C. S., 560 Coarse aperture time, 517
command module diagram, 38
low pass signal interpolation. 561 Butler. D., 10, 251 Coefficient ordering, 557
optical recorder, 37
Bandpass waveform. 541 Butler. M .. 265 Coefficient of variation, 325
Appiani. E.G., 470
Applied Physics Laboratory. see Johns Bandwidth of azimuth processor, 434-435 Butterfly operation in FFT, 556 Coherent detection. 185
Hopkins University Applied Physics Bandwidth time product: Colony, R., 607
Laboratory azimuth. 169. 174.206.223.503 Calibration: Colwell, R. N., 9, 44, 73
Aptec,464 of matched filter. 130. 132 geometric, see Geometric calibration Committee on Earth Observation Satellites
Archive and Catalog Subsystem, 603 range, 135. 146. 150, 161, 185.200.528 polarimetric, see Polarimetric radar (CEOS),603
Archive and Operations System (AOS), 592. Barber, B. C.. 159. 161. 187. 211, 222. 234. calibration · Common node processor architecture,
603-605 528,580,583,586.590 radiometric, see Radiometric calibration 460-467
Argument of perigee. 573, 576 Barker codes, 265 Calibration processor, 353-357 common signal processor, 460, 467
Ascending node. 576 Barnett. I. A. 243 data flow, 354 functional block diagram, 461
Ascending node of satellite orbit, 576 Barrick, E. E.• 51 processing flowchart, 354-355 1/0 transfer rates, 461-463
Barrow, H. G .• 422 Canadian Center for Remote Sensing Compression processing. 146-148, 183-209
ASF, see Alaska SAR Facility
Barton, D. K.. 71. 73 (CCRS). 33, 35 algorithm overview, 165-169
Aspect angle. 156, 217
Basebanding: Canadian Space Agency, 12, 44, 592 azimuth matched filtering algorithms,
Atmospheric:
complex, 136, 166. 183. 185 Canny, J. F .. 419, 420, 421 187-208
absorption spectrum. 5. 48 computational analysis, 443-445
of pulse compressed data. 213, 224 Canny edge detector, 419-421
amplitude scintillation. 315
Compression processing (Continued) Cumming, I. G., 238 ELSAG,468
Dongarra, J. J., 452
frequency domain. 196-208 Curlander, J.C., 223, 227, 243, 357, 361, El'yasberg, P. E., 570, 574, 577, 579, 580
Doppler:
time domain, 187-196 374,390 Emissivity, 117
bandwidth, 21
filter mismatch, 163, 174 Cutrona, L. J~ 20, 30, 32, 123 EMMA multiprocessor, 470-472
of processor, 434-435
filter parameters, 430-436, 588-591
beam sharpening, 18-21, 28 computational analysis, 471-472
range matched filtering algorithm, Data compression: functional block diagram, 471
center frequency, 168, 569
182-187 adaptive discrete cosine transform, clutter rejection filter, 222 Entropy, 288-289
computational analysis, 449-452 493-496 navigator, 30, 32 Environmental Research Institute of
digital formulation, 210-214 block quantization techniques, 289-294 parameter bounds, 430-434 Michigan (ERIM), 33-35
filter coefficients, 213-214, 436 of downlink data, 288-289 parameter update rates, 433-434 Environmental Science Services
spectral analysis algorithms, 440-443, efficient coding, 289-294, 462 rate, 168, 569, 590-591 Administration (ESSA), 381
503-534 lossless, 288-289 mismatch, 163-164, 173-175 EOS (Earth Observing System), 9-11. 613
Compression ratio of the processor, 359 lossy,288-289,493-499 sampling rate, 545 Ephemeris (restituted). 594
Concurrent processor architecture, 467-473 vector quantization, 495-499 spectrum, 223, 238 Eppler, D. T., 613
EMMA multiprocessor, 470-472 Data level definitions, 251-252, 254-255, 353 Downlink subsystem, 283-288 Equation of motion, 570
MIMD processor arrays, 469-473 Data synchronization, 594 bit error noise, 285-286. 357 Equatorial coordinate system, 572
SIMD processor arrays, 468-469 Davis, D. N., 460 channel errors, 283-285 Equipartition principle, 97-98
Convolution: Davis, W. A. 422 error protection codes, 283-284 ERIM (Environmental Research Institute
analog, 537-538 Decimation: Dubois, P., 352, 357, 367 of Michigan), 33-35
bandlimited signals, 546-547 in frequency, 555 Durden, S. L., 56 ESA (European Space Agency), 10-13, 44,
circular, 549-553 of sampled signals, 563 592
discrete, 545-553 in time, 555, 556 Earth Observing System (EOS). 9-11. 411 Euclid's algorithm, 243-245
fast, 601 Defocussing (Doppler rate mismatch), 163, Earth Resources Satellite (J-ERS-1). 592 European Remote Sensor (E-ERS-1),.274,
linear, 545-553 174 Earth rotational velocity. 375. 568. 582 329,331,375,467,470,471,592
overlap add method, 551, 552 Depth offocus, 173-176, 195 nutation. 572 European Space Agency (ESA), 10-13, 44,
overlap save method, 552 Deramp processing: precession. 572 592
Convolutional code, 284 computational analysis, 440-443 Earth's angular velocity. 568. 582 Exciters:
Cook, C. E., 133, 145, 146, 147, 149, 150, 151, polar processing, 519-534 Eccentric anomaly. 576. 577 analog (SAW) designs, 265-266
173, 174, 175, 265 range processing, 503-507 Eccentricity (of satellite orbit). 226. 573. 576 autocorrelation function, 267, 268
Cooley, J. W., 554, 558 step transform, 508-518 Echo tracker. 349-351. 357 digital designs, 267-268
Coordinate system: Deutsch, L., 284 E-ERS-1. 274. 329. 331. 375. 467. 470. 471. pulse codes, 265
of satellite orbit, 571-572 DFf (discrete Fourier transform). 238, 549. 592.593.594.595.596 pulse jitter effects, 268
of signal data, 157-159 550 Effective chirp rate. 205 SAW geometries, 266 .
Cordey, R. A, 52 Dielectric constant, 48, 55, 60, 62, 63, 137, 612 Effective isotropic radiated power (EIRP). 74 Exponential probability distribution
Corner reflectors, 338-340 Diffraction integral, 77-79 EHFband. 8 function,215,216,228
beamwidths,339-340 Digital electronics, 279-283 Eigenfunction. 539. 540. 541 External calibration, 337-349
device errors, 339 Diophantine equation, 243 Eigenvalue. 539 distributed targets, 347-349
radar cross section, 339-340, 352 Dirac function, 140, 156, 543 Elachi. C.. I. 9. 38. 44. 50. 104. 117. 411 ground sites, 327, 344-346, 357
Comer turn: Directivity: Electromagnetic spectrum: point targets, 337-343
algorithm, 182 of antenna, 77, 83-91, 104, 273, 335 absorption bands, 5
memory size, 436 of terrain, 139 definitions, 8 Fairbanks, Alaska, 592, 595
Corr, D. G., 330 Discrete convolution, 546 remote sensing applications, 7-10 Farnett, E. C., 150, 213
Correlator design, 428-473 Discrete Fourier transform (DFf), 238, 549, Electromagnetic waves: Farr, T .• 54
computational analysis, 437-452 550 phase velocity, 47 Fast convolution, see Frequency domain
hardware architecture, 452-473 Distortion noise: polarization, 47, 93-94 convolution
post-processor, 473-487 crossover, 262 Electromagnetic wave scattering: Feature extraction, 610
requirements definition, 428-436 harmonic, 261-262, 270-271 Bragg,51-52,321,365 Fenson, D .• 464
Cray computer, 452, 466, 486 intermodulation, 261-262, 271 facet, 51 Fermat transform, 561
Crochiere, R. E., 563 Dive angle, 589 polarimetric matrix, 94 Filtering, 551-553
Cross-correlation: Dobson, M., 344 radiative transfer, 55 Filter weighting functions, 148
for autofocus, 235 Dolph, C. L., 150 specular, 49 Fitch, J. P., 504
coefficient, 232-233 Dolph antenna current distribution function, Stokes matrix, 350-353, 364, 367 Foreshortening distortion, 382-384, 399, 479.
hard limited function, 234 150 surface,6,48-55,352 484
for multisensor image registration, 416, Dolph-Chebyshev weighting, 151 volume, 55-65 Fourier:
418,422 Domik, G., 402 Elliott, D. K. 557 pair, 540
Fourier (Continued) image rotation, 395-397, 482
Heat Capacity Mapping Mission (HCMM), 7 calibration tones, 322-324, 356, 364
series, 540 smooth geoid projection, 393-399 Heiskanen, W. A., 484 E-ERS-1design,329-331
spectrum, 554 topographic map projection, 399-410 Herland, E.-A.. 190, 223, 226, 234 post-processor design, 477-479
Fourier transform: Geologic applications of SAR, lava rock Hewlett-Packard, 264 pulse replica loops. 318, 329-331, 356, 364
algorithms: classification, 53-55 HF band, 8, 34 SIR-C design. 331-336
bit-reversed ordering, 557 Geometric calibration: High density digital recorder (HDDR), 594 Interpolation, 191-196, 204, 561-563. See also
in-place, 557 definition of terms, 371 Hillis, W. D .• 468 Sampling
not-in-place, 557 error sources, 372-387 Hirosawa, H .• 344 Intra-pulse range variation, 159-163
butterfly operation, 556 incidence angle map, 386 Hogg, D. C., 104 Inverse SAR, 520
coefficient ordering, 557 Geometric distortion, 372-387 Holt, 8., 610, 613 Ionospheric effects:
computational analysis, 555 clock drift, 373 Homogeneity, 537 attenuation. 163, 317
discrete, 547-549 clock jitter, 372 Homogeneous scene. 228, 537 Faraday rotation, 163, 315, 317
fast, 553, 554-558 electronic delay, 372, 373, 379 Hovanessian, S. A., 506 group delay. 315, 317, 379-381
pair, 540 foreshortening, 382-384, 399, 406 Hulsmeyer, C .• 27 pulse dispersion, 317
radix formulation, 558 image skew, 391-393 Huneycutt, 8. L., 270. 274 Isotropic scatterer, 136
three-dimensional inverse, 523 ionospheric group delay, 379-381 Hunten, D. M .• 40 Italian Space Agency (ASI). 470
twiddle factors , 555, 556 layover, 382-386 Huygen's diffraction integral. 77-79
zero padding, 188, 212 shadow, 385-386 Hwang, K., 452 Jain. A. K., 288, 499
Freden, S. C., 7 slant-to-ground range correction, 480-482 Hybrid algorithm, 206, 437 J-ERS-1, 12, 13, 592
Fredholm integral equation, 140 specular point migration, 387
Jet Propulsion Laboratory (JPL), 33-35,
Freeman,A.,327,341,343,344,349,351,358 Geometry of satellite SAR, 374, 377, 567
Ice kinematics, 592, 607-611 60-61, 155.227,238
Frequency domain (fast) convolution, 169, Geophysical processor, 321-322, 605-613 IEEE, 3ll Jin, M. Y.• 197. 199, 201, 203, 206, 223, 228,
187 German Aerospace Research Establishment
Image registration: 230.432.529
ADSP design, 456-457 (DLR),40
for mosaicking, 412-415 Johns Hopkins University Applied Physics
azimuth algorithm, 196-208 Global Positioning System, 402
for multisensor data analysis, 416-424 Laboratory. 52
azimuth computational complexity, Goddard Space Flight Center (GSFC), 10
using chamfer matching. 419, 422. 606 Johnson, H. W., 560
443-444,446-448,452-453 Goldstein, R., 5 '
using digital elevation maps. 406-408, Johnson. W. T. K., 39
azimuth processing block size, 435-436 Goldstone antenna, 176, 178, 179, 180, 181 418
range algorithm, 182-187 Goodyear Corp., 28
using distance transform, 422-423 Kahle, A., 7
range computational complexity, 449-452 Gordon, F., Jr., 7
using edge operators, 419-422 Kalman filter, 456
range processing block size, 436 Graf, C., 393
using principal component analysis, 419 Kasischke, E. S., 325
Frequency shift, 162 Gram-Schmidt orthogonalization, 617
Impedence mismatch (SNR effect), 115-116 Kennard. E. H .• 96
Fresnel, see also Reflectivity of target (scene) Gravitational constant, 570
Impulse response function: Kepler's first law, 576
integral, 145 Gravitational potential function, 570
analog formulation, 537-539 Kepler's second law. 574
reflection coefficient, 136-139, 231 Gray, A. L., 321, 346. 355
digital formulation. 552 Kepler's third law, 577
region of antenna, 78 Grazing angles, 303-304
sidelobes. 148-153 Kim, Y., 333, 334
Friedman, D. E., 395, 482 Green's function, 135, 138, 148, 155, 164,
weighting functions, 151-153 Kirk, J. C., Jr., 32
Frost, V. S., 419 502,503,529 Inclination, 573, 576 Klauder. J. R., 257
Fu, L.-L., 37 inverse, 139-142, 156,504,523,537 In-phase quadrature detection. 136, 183 Klein, J., 331. 335, 352
Functionals, 616 GSFC (Goddard Space Flight Center), 10 Institute of Electrical and Electronic Kleinrock, L.. 489
Fundamental component of frequency Engineers (IEEE), 3ll Kliore, A., 304
response, 615 Habbi, A.. 493 Institute for Navigational Studies. University Klystron, 30
Fung, A. K., 55 Hadamard transform, 493 of Stuttgart, 341. 343 Kovaly, J. J., 29
Hamming weighting function, 152, 358 Integrated side-lobe ratio (ISLR): KRMS passive radiometer, 612, 613
Gagliardi, R., 104, 105, 106 Hann weighting function, 152 of antennas. 87-88 Kropatsch. W., 383
Gamma probability distribution function, Hardware architecture, 452-473 definition, 256 Kwok, R.. 292, 402, 412, 418, 479, 600, 610
220 common node, 460-467 of impulse response function, 260-261
Gatineau, Quebec, 595 concurrent, 467-473 Integration time in azimuth processor, 169 Lambert's law, 103
Gaussian probability distribution function, design requirements, 452-454 Intera SAR system, 40, 62 Lawson, J. L.. %
97,215 flexible pipeline. 458-459 Interference (communication channels). 541 Layove~382-386,479
Gaussian smoothing filter, 613 pipeline. 454-460 Interferometry. 5, 399, 402 Leberl, F., 402
Gentleman, W. M., 558 post-processor. 486-487 Internal calibration. 329-337 Lee, 8. G., 494
Geocoding: Harris, F. J., 150 antenna, 335-336 Lee, K. W .• 617
computational analysis, 482-486 Hay, G. E., 568 built-in test equipment (BITE), 327, Levels of data products, 251-252. 254-255,
definition, 371 Haymes, R. C., 570, 571. 572, 573. 574, 577 335-336 427
Lewis, A, 382 functional block diagram, 469
Noise: Orbital elements, 572-580. See also Orbit of
Lewis, D. J., 445 Matched filter, 127-135, 147, 168, 174, 190,
ambiguity, 2%-305 satellite
Li, F. K., 5, 217, 223, 224, 227, 228, 241, 283, 212
antenna, 101, 106-108 Orbit of satellite. 572-580
285, 299, 301 derivation, 128-134
bandwidth, 110-111 acceleration, 569-571
Like-polarized reflection coefficient, 137 with Doppler frequency shift, 133-134
bit error, 283-286, 357 angular velocity, 582
Linde, Y., 4% signal-to-noise ratio, 130
distortion, 251, 281, 293-294 apogee. 573, 576
Linear convolution, 545-553 Max, J., 279, 291, 293
crossover, 262 coordinate system, 577-579
Linear FM waveform, 133, 134, 144-146, 168, Maximum likelihood classifier, 611
harmonic, 261-262, 270-271 eccentricity, 226. 571, 573, 576
173, 504 Maximum power transfer theorem, 95 intermodulation, 261-262, 271 elements, 573
Linear range migration, 172, 190, 193, 194, McDonough, R. N., 190, 223, 225, 234 equivalent noise temperature, 100, 107, · inclination angle, 573, 576, 581
431-432 MDA (MacDonald-Dettwiler and 110, 118-119 perigee, 573, 576
Linear span of data in polar coordinates, 522 Associates), 33 external, 101-106 perturbations. 579-580
Linear systems, 536-541 Mean anomaly, 576, 577 factor, 75, 108, 111-114 plane, 574-575
amplitude error effects, 259-260 Melt ponds, 612 figure, 100, I 08, 271 precession, 580
convolution, 537-538 Mercer, J. B., 62 galactic, 105-106 radial and tangential speeds, 579
distortion analysis, 257-261 Meyer-Arendt, J. R., 103 operating noise factor, ll3-ll4. 119-120 time of perigee passage, 573, 576
nonstationary, 540 Minimum variance unbiased estimator, 229 operating noise temperature, 110, ll8-ll9 track and target position, 566-573
phase error effects, 259-260 Mismatch ratio of compression filter, power spectral density, 75. 129. 230 trajectory parameters, 572
radar characterization, 136-139 173-175, 196 quantization. 279-283 true anomaly, 573, 576, 577
stationary, 128-129, 141, 541 Mission desi~n flowchart, 44 radio, 106 Overlap-add method (of filtering), 551-552
transfer function, 539-540 MMIC (monolithic microwave integrated receiver. 108-119 Overlap-save method (of filtering), 552-553
Location of target: circuit), 31 saturation, 280-281 Oversampling factor (in step transform), 511
algorithm, 374-376, 600 Monaldo, F. M., 51, 53 source. 101-108
error sources, 345, 377-382 Monolithic microwave integrated circuit spatial image compression, 489, 492 Pack ice, 610
Louet, J., 357 (MMIC),31 speckle,93, 121,214-217,314,324 Page. L., 102
Low pass filter, 544, 545 Mooney, D. H., 222, 242 system noise factor, 113-114 Page, R. M., 27
interpoiation, 561-563 Moore, R. K., 349 temperature ratio, 117 Paired echo technique, 257-260
Low pass waveform, 541 Mosaicking, 412-415 thermal,96-99,220-221,251.359 Papoulis, A, 234, 545
Luscombe, A P., 238 definition, 371 Nominal satellite orbit, 574 Parseval's relation, 129, 132
feathering the seams, 412 Nominal target migration locus, 517 Passive radiometer:
MacArthur, J. L., 52 Motion compensation processing, 529-534 Nonlinear system analysis. 615-618 AVHRR.611
MacDonald, H. C., 28 Moving target detection, 222 Nonstationary linear system, 198, 540 KRMS, 612, 613
MacDonald-Dettwiler and Associates MTI radars, 302 NORDA KRMS passive radiometer, 612. Peak side-lobe ratio (PSLR)
(MDA), 33, 155 Multilook processing, 216-221 613 of antenna pattern, 87-88
Madsen, S. N., 223, 231, 233, 234, 389, 390 by image filtering, 220, 367 North, D. 0 .. 128 of impulse response function, 256, 259-260
Magellan (MGN) radar, 39-42, 265, 273, look filtering, 220, 367 Number theoretic transforms. 560 Pease, M. C., 559
292-294,317,415,456 noise reduction, 217 Numerical transform theory, 560 Penetration depth, 55. 59-60
Magnetron, 30 for PRF ambiguity resolution, 238-240 Nutation of earth rotation, 572 Perigee, 576
Mainlobe broadening, 256 by spectral division, 219-220 Nyquist: Periodic convolution, 548
Maitre, H., 422 thermal noise effects, 220-221 frequency, 542 Permittivity, 46-47
Map projections: Multipath, 337 rate, 184, 283, 388, 546 Perry. R. P .. 507
datum, 371 Munson, D. C., Jr., 504 theorem, 99 Peterson. D. P .. 396
Polar Stereographic (PS), 393, 479 Munson, R. E., 274
Pettai, R., 100, 108, 109. 112. 115. 117
Universal Transverse Mercator (UTM), Ocean waves: Pettengill. G. H .. 39
370,393,479 Nadir interference, 306-307 Bragg resonance, 51 Phase history of target. 23-25. See also
Marginal zone ice, 610 Naraghi, M., 402, 479 • capillary waves, 51 Range Migration
Marmarelis, P. Z., 616 NASA (National Aeronautics and Space spectra,52-53,592,613 corner tum memory, 436
Marr, D.,419 Administration), 10-12, 44, 6o-61, 411, Office of Space Science and Applications migration locus. 517
Martinson, L., 507 592 (OSSA), 592 'quadratic phase error, 432
Mass of the earth, 570 NASDA (National Space Development Offset video frequency. 183, 211 range cell migration memory, 432, 436
Massachusetts Institute of Technology Agency of Japan), 10-13, 44, 592 ofSeasat, 184 range curvature maximum bound, 431-432
(MIT) Radiation Laboratory, 27 National Weather Service, 607 One-bit SAR 211 range walk maximum bound. 431-432
Masscomp computer, 598, 600, 603 Newton's law, 570 Onstott, R. G., 611 Phonon Corp .• 267
Massively Parallel Processor (MPP), Nghiem, S. V., 62 Oppenheim, A V.• 548 Physiological Systems. 615
468-469. See also Concurrent processor Nicodemus, F. E., 103 Optical correlator, 30, 31, 32 PIN diode. 276
Pipeline processor, 454-460
  Advanced Digital SAR Processor (ADSP), 456-458, 600, 601
  control, 460
  functional block diagram, 455
  reliability, 460
Planar orbit, 574
Planck factor, 99, 101-103, 105
Plan-position indicator (PPI), 27
Platform effects:
  attitude errors, 320, 349-351, 430-431
  ephemeris errors, 377-379, 480
Point target response, 165
Poisson distribution, 489
Polarimetric radar calibration, 349-353, 364-367
  channel imbalance, 351, 357, 366
  clutter calibration, 352-353
  cross-polarization leakage, 366
  cross-products of scattering matrix, 366
  efficient coding of scattering matrix, 366-367
  phase errors, 352, 357, 365
  symmetrization of scattering matrix, 364-366
Polarimetric SAR, 57, 93
Polarization signature, 57
Polar processing, 519-534
  azimuth resolution, 524
  geometry formulation, 521
  interpolation, 525
  intrapulse range variation, 526-529
Polar Stereographic (PS), see Map projections
Porcello, L. J., 34
Post-processor, 473-487
  functional block diagram, 474
  geometric correction, 479-487
  image browse, 487-499
  I/O transfer rates, 476-477, 486-487
  radiometric correction, 477-479
  requirements, 475-477
Potential function of the earth, 579
Power density, 74, 76
Poynting relation, 80, 82
PPI (plan position indicator), 27
Precession:
  of earth rotation, 572
  of satellite orbit, 580
Predictive coding, linear three point, 493
Preprocessing, see Autofocus; Clutterlock
Presumming, 241
Prime factor transforms, 560
Principal solution, 149
Project Wolverine, 30
Pseudo noise code, 600
Pulse:
  compression, 135-152
  distortion effects, 163, 315-317, 379-381
  repetition frequency, 305-307
  waveforms, 133
Quadratic phase error, 173-176, 432-433
Quantization, see Sampling
Quasistationarity approximation, 540
Quegan, S., 163, 389, 390, 482
Queueing analysis:
  for image browse, 489-491
  M/D/1 system, 489, 491
  response times, 489, 491
Radar:
  cross section, 74, 91-94, 136, 214, 322
    of calibration targets, 337-341
  equation:
    of a distributed target, 120-124, 324, 326, 347
    of an image pixel, 123, 358
    of a point target, 73-75, 119-120
    of a single pulse, 122-123
  system:
    antenna, 273-279
    block diagram, 253
    digital electronics, 279-283
    operations, 252-256
    RF electronics, 264-273
    timing and control, 263-264
Radarsat, 12, 592
Rader, C. M., 560
Radiance, 102-104
Radiometric calibration:
  definitions, 311-313
  error model, 322-325
  error sources, 314-322
  image correction factor, 360-364, 477-478
  noise subtraction, 363-364
  post-processor design analysis, 477-479
  using a topographic map, 409-410
Radix of FFT, 558
Ramamurthi, B., 492
Ramapriyan, H. K., 402, 468, 469
Raney, R. K., 322, 360, 580, 583, 585
Range:
  acceleration, 584-586
  ambiguities, 20, 89-91, 171, 303-308
  frequency spectrum, 212
  resolution:
    in deramp-FFT processing, 506, 524
    in matched filtering processing, 15, 162
    in uncoded pulse, 15
  sensor-to-target, 159-160
  variation (intra-pulse), 159-163
Range migration, 171-172, 193, 197, 504
  correction, 181, 187, 189, 217
  curvature, 172, 190
  interpolation, 188
  maximum bounds, 431-432
  memory, 432, 469
  nominal migration locus, 517
  phase history, 23-25
  Seasat example, 178
  walk, 172, 190, 193, 239, 431-432
Range signal processing:
  analog formulation, 182-187
  compression filter parameters, 213-214
  computational complexity, 449-452
  digital formulation, 210-214
  efficiency factor, 450
  overview, 165-167
  processing blocks, 450
Rawson, R., 33
Rayleigh-Jeans law, 103
Rayleigh probability distribution function, 216, 323
Real aperture radar, see Side-looking real aperture radar (SLAR)
Receivers:
  for ground calibration, 341-342
  in SAR sensor, 271-273
Receiving Ground Station (RGS), 592, 596-597
Rectangular algorithm, 155-208. See also Compression processing
Reed, C. J., 289
Reference functions (matched filter):
  azimuth, 588-591
  bandwidth, 434-435
  length (in samples), 435
  normalization factors, 361-364
  range, 213-214
Reference mixing operation, 506
Reflectivity of target (scene), 136-139, 141, 149, 155, 214, 215, 224, 228, 231, 237, 506
Remote sensing programs, 7-13
Resolution:
  azimuth:
    in matched filtering processing, 26, 169-171
    in polar processing, 524
    in real aperture radar, 16
    in spectral analysis processing, 23, 439
  range:
    in deramp-FFT processing, 506, 524
    in matched filtering processing, 15, 162
    in uncoded pulse, 15
RGS (Receiving Ground Station), 592, 596-597
Rice, R. F., 288
Richards, J. A., 56
Ridenour, L. N., 71
Rihaczek, A. W., 133
Rino, C. L., 315
Robertson, S. D., 338, 339
Rocca, F., 437
Rodieck, R. W., 615
Royal Aircraft Establishment (RAE), 155
Ruck, G. T., 338
Sack, M. M., 442, 507, 511, 516, 517, 518
Sampling:
  aliasing, 211, 238, 241, 283, 389, 396, 482, 510, 544, 545
  of bandlimited signals, 211, 541-545
  Block Floating Point Quantization (BFPQ), 289-294, 452
  image resampling, 388-390, 416, 440, 479
  Nyquist rate, 184, 283, 388, 546
  one-bit SAR, 211
  quadrature, 283
  real (offset video), 211
  sampled signal, 211, 542
  of the target phase history, 191-196
Scattering, see Electromagnetic wave scattering
Scattering matrix:
  cross-products, 366
  definition, 94, 350
  efficient coding, 366-367
  symmetrization, 364-366
Schaefer, D. H., 468
Schreier, G., 480
Schwartz inequality, 82, 129
Scientific Atlanta Corporation, 595, 596
Sea ice applications of SAR, 57-62
  ice classification, 611-613
  ice kinematics, 592, 607-611
  wave spectra, 613
Seasat, 344, 348, 528
  Doppler characteristics, 586-588
  offset video frequency, 184
  radar parameters, 12
  satellite diagram, 2
Secondary range compression, 194, 199-207, 239, 432, 436, 443
Second time around effect, 20, 242
Secular perturbations, in orbital elements, 579
Selvaggi, F., 470
Semi-major axis (of satellite orbit), 573
Sensitivity time control (STC), 271-273
Settling time, 263
Shadow (radar), 385-386
Shannon, C. E., 288, 542
Shannon-Whittaker sampling theorem, 388, 541, 542, 544, 546, 561
Sharma, D. K., 282
Sherman, J. W., 77
Sherwin, C. W., 29, 30
Shuttle Imaging Radar (SIR), 38-39
  SIR-A, 39, 49-50
  SIR-B, 39, 274, 361-362
  SIR-C, 11, 39, 274-277, 302-303, 331-336, 456
Sidelobe, 148-153
  definition, 174
  leakage in step transform, 514-515
  weighting functions, 150-153
Side-looking real aperture radar (SLAR), 15-21, 28, 71
  ambiguity constraints, 20-21, 88-91
  azimuth (along-track) resolution, 16-17
  Doppler bandwidth, 21
  Doppler shift, 17
  geometry, 14-15
  radar equation, 73-75, 122
  range (cross-track) resolution, 15
  swath width, 14
Sidereal period, 573, 576
Siedman, J. B., 416, 479
Signal and Data Routing Assembly (SARA), 596
Signal-to-noise ratio (SNR):
  of distributed target in image data, 123
  of distributed target in raw data, 121-123
  of matched filter, 130
  of point target in raw data, 73-75
  of receiver output, 110-113
Silver, S., 73, 77, 78, 80, 81, 82, 95
Silverman, H. F., 560
Skewing of the image, in azimuth, 190-193
Skolnik, M. I., 71, 73, 80, 86, 106, 108, 115, 131, 133
Sky Computer, 607
Slant range, see Range
SLAR, see Side-looking real aperture radar
Slater, P. N., 103
Snyder, J. P., 393
Sobel operators, 419
Space Physics Analysis Network (SPAN), 594
Spatial bandwidth (azimuth), 25-26
SPECAN, 440-443, 452, 453, 456, 470, 504, 507, 601
Specific radar cross section, 136, 214
Speckle noise, 93, 121, 214-217, 314, 324
Spectral analysis algorithms, 437-443
  deramp FFT, 503
  fanshape resampling, 440
  polar processing, 519-534
  SPECAN, 440-443, 452, 453, 456, 470, 504, 507, 601
  unfocussed SAR, 23, 438-440
Specular point migration, 387
Specular reflection, 139
Spherical coordinate system, 522
Spotlight SAR, 507
SPS (SAR Processor System), 592
Squint:
  angle, 207, 208, 503, 526, 571, 583, 590
  mode processing, 206-208, 519-534
Stable local oscillator, 263-264
  Allan variance, 263
  drift, 263
  effects on image fidelity, 372
Staggered PRF (for ambiguity resolution), 242
Stationary:
  Gaussian processes, 233
  phase, 142-144, 199, 200, 208
  random process, 231
Stationary linear system, 136, 146, 187, 288, 323, 503, 537, 546
Station reception mask (ASF), 595
Step chirp, 133
Step transform, 504, 507-519
  autofocus, 529-534
  azimuth processing, 516-518
  coarse range analysis, 508-512
  fine range ambiguities, 514-515
  fine range analysis, 512-514
Stereo radar, 399, 402
Stewart, R. H., 103
Stokes matrix, 94, 350-353, 367
Stretch processing, 504
Stutzman, W. L., 77, 273
Subaperture correlation, 237, 238, 240, 529
Subsurface mapping, 49-50
Sun Computer, 607
Surface Acoustic Wave (SAW) device, 265-267
Swath width, 14, 21, 295-296
Tapped delay line, 151
Target cross section, see Radar cross section
Tate, C., 615
Taylor, A. H., 27, 151, 152
Taylor, T. T., 151, 152
Taylor series, 143, 168, 565, 566
Taylor weighting function, 85, 87, 151, 152, 213
Technical University of Denmark, 40
Temperton, C., 559
Test, J., 466
Three-dimensional inverse Fourier transform, 523
Tikhonov, A. N., 149
Time bandwidth product, see Bandwidth time product
Time domain:
  convolution, 537-538
  filter weighting function, 151
Time domain processor:
  computational complexity, 444-445, 446, 448-449
  image formation algorithm, 167, 187-196
Time of echo propagation, 158-159
Time of perigee passage, 573
Titan Radar Mapper (CTRM), 40, 42
Tomographic analysis, 540
Tomographic imaging, 504
Tone generators, 342-343
TOPEX, 381
Touzi, R. A., 419
Toyoda, J., 615
Transform:
  Fermat, 561
  Fourier, see Fourier transform
  Hadamard, 493
  Laplace, 540
  number theoretic, 560
  prime factor, 560
  z, 543, 547
Transmit interference, 305-307
Transmitter:
  solid state, 270
  traveling wave tube, 269-270
Transponders, 341
Transverse scan cassette drives, 598
Traveling Wave Tube (TWT), 30
True anomaly of satellite orbit, 576, 577
Twiddle factors in FFT, 555, 556
UHF band, 8, 27
Ulaby, F. T., 6, 47, 55, 56, 62, 63, 64, 65, 76, 92, 100, 101, 102, 121, 122
Unfocussed SAR, 23-24, 31, 438-440
Uniform quantizer, 291
Uniform spherical earth, 570
United States Geological Survey (USGS), 393, 412, 414, 415, 416
Universal Transverse Mercator (UTM), see Map projections
University of Alaska, 592
Van der Ziel, A., 96, 97, 99
Van Roessel, J. W., 28
Vant, M. R., 503
Van Zyl, J. J., 57, 352, 365, 366, 476
Variable gain amplifier, 271
Vectorization of the transform, 559-560
Vector quantization, 289, 495-499
Vegetation applications of SAR:
  forest canopy, 56-57
  soil moisture, 62-65
Velocity:
  angular, of earth, 56, 582
  angular, of satellite, 582
  radial and tangential, of satellite, 579
Venera, 39-40
Venus radar mapper, see Magellan
Vernal equinox, 572
VHF band, 8, 27, 34
Video offset frequency:
  in deramp processing, 524
  sampling, 211
  of Seasat, 184
Viksne, A., 28
Volterra, V., 616
Volterra kernels, 616
Wagner, C. A., 376
Walker, J. L., 520, 529
Wall, S. D., 349
Watson-Watt, R., 27
Wave equation algorithm, 437
Wavenumber, 47, 613
  dimension resolution, 524
  space, 526
  vector, 520, 522
Wehner, D. R., 133, 520, 524
Whalen, A. D., 97, 129, 136, 215, 231, 233
Whittaker's interpolation formula, 388, 541, 542, 544, 545, 546, 561
Wiener, N., 616
Wiley, C. A., 1, 17, 18, 28, 29
Winebrenner, D. P., 51
Winograd Fourier Transform Algorithm (WFTA), 560
Wong, F. H., 206
Wong, Y. R., 422
Woodward, P. M., 132
Wu, C., 197, 206, 234, 274-275, 302-303, 437, 502
Wu, K. H., 507
X-SAR, 11, 39, 274-275, 302-303
z-transform, 543, 547
Zebker, H. A., 57, 364, 402
Zeoli, G. W., 280
Zero padding of FFT, 188, 212
Zohar, S., 560
[Cover and spine]
Synthetic Aperture Radar: Systems and Signal Processing
John C. Curlander and Robert N. McDonough
Wiley-Interscience, Wiley Series in Remote Sensing
ISBN 0-471-85770-X