SYNTHETIC
APERTURE
RADAR
Systems and
Signal Processing
John C. Curlander
California Institute of Technology
Jet Propulsion Laboratory
Pasadena, California
Robert N. McDonough
Johns Hopkins University
Applied Physics Laboratory
Laurel, Maryland
A WILEY-INTERSCIENCE PUBLICATION
To my father and mother for their enduring guidance and support (JCC)
This book is sold as is, without warranty of any kind, either
express or implied, respecting its contents, including
but not limited to implied warranties for the book's quality,
merchantability, or fitness for any particular
purpose. Neither the authors nor John Wiley & Sons, Inc., nor
its dealers or distributors, shall be liable to the purchaser or
any other person or entity with respect to any liability, loss,
or damage caused or alleged to be caused directly or indirectly
by this book.
In recognition of the importance of preserving what has been
written, it is a policy of John Wiley & Sons, Inc., to have books
of enduring value published in the United States printed on
acid-free paper, and we exert our best efforts to that end.
Copyright 1991 by John Wiley & Sons, Inc.
All rights reserved. Published simultaneously in Canada.
Reproduction or translation of any part of this work
beyond that permitted by Section 107 or 108 of the
1976 United States Copyright Act without the permission
of the copyright owner is unlawful. Requests for
permission or further information should be addressed to
the Permissions Department, John Wiley & Sons, Inc.
CONTENTS

PREFACE
ACKNOWLEDGMENTS

CHAPTER 1 INTRODUCTION TO SAR
1.5 Summary
References and Further Reading

CHAPTER 2

CHAPTER 3
3.2 Pulse Compression
3.2.1 Linearity, Green's Function and Compression
3.2.2 The Matched Filter and Pulse Compression
3.2.3 Time Sidelobes and Filter Weighting
References

CHAPTER 4

CHAPTER 5

CHAPTER 6
6.1 System Overview

CHAPTER 7
7.1 Definition of Terms
7.1.1 General Terms
7.1.2 Calibration Performance Parameters
7.1.3 Parameter Characteristics
7.2 Calibration Error Sources
7.2.1 Sensor Subsystem
7.2.2 Platform and Downlink Subsystem
7.2.3 Signal Processing Subsystem
Summary
References

CHAPTER 8
8.1 Definition of Terms
8.2 Geometric Distortion
8.2.1 Sensor Errors
8.2.2 Target Location Errors
8.2.3 Platform Ephemeris Errors
8.2.4 Target Ranging Errors
8.3 Geometric Rectification
8.3.1 Image Resampling
8.3.2 Ground Plane, Deskewed Projection
8.3.3 Geocoding to a Smooth Ellipsoid
8.3.4 Geocoding to a Topographic Map
8.4 Image Registration
8.4.1 Mosaicking
8.4.2 Multisensor Registration
8.5 Summary
References

CHAPTER 9
References

CHAPTER 10
10.3 Polar Processing
10.3.1 The Basic Idea of Polar Processing
10.3.2 Polar Processing Details
10.3.3 An Autofocus Procedure for Polar Processing
References

APPENDIX A
References

APPENDIX B
References

APPENDIX C
Summary
References

APPENDIX D

BIBLIOGRAPHY
MATHEMATICAL SYMBOLS
LIST OF ACRONYMS
INDEX

PREFACE
The forty year history of synthetic aperture radar (SAR) has produced only a
single spaceborne orbiting satellite carrying a SAR sensor dedicated to remote
sensing applications. This system, the Seasat-A SAR, operated for a mere
100 days in the late 1970s. We learned from the data collected by Seasat, and
from the Shuttle Imaging Radar series and aircraft based SAR systems, that
this instrument is a valuable tool for measuring characteristics of the earth's
surface. As an active microwave sensor, the SAR is capable of continuously
monitoring geophysical parameters related to the structural and electrical
properties of the earth's surface (and its subsurface). Furthermore, through
signal processing, these observations can be made at an extremely high resolution
(on the order of meters), independent of the sensor altitude.
As a result of the success of these early systems, we are about to embark on
a new era in remote sensing using synthetic aperture radar. Recognition of its
potential benefits for global monitoring of the earth's resources has led the
European Space Agency, the National Space Development Agency of Japan,
and the Canadian Space Agency to join with the United States National
Aeronautics and Space Administration in deploying a series of SAR systems in
polar orbit during the 1990s. A primary mission goal of these remote sensing
SAR systems is to perform geophysical measurements of surface properties over
extended periods of time for input into global change models. To reach this
end, the SAR systems must be capable of reliably producing high quality image
data products, essentially free from image artifacts and accurately calibrated in
terms of the target's scattering characteristics.
In anticipation of these data sets, there is widespread interest among the
scientific community in the potential applications of SAR data. However,
models needed in image formation. Appendix C describes the NASA SAR data
reception, image formation, and image archive system newly implemented at
the University of Alaska in Fairbanks, Alaska. Finally, Appendix D summarizes
a technique for the characterization of nonlinear systems. Throughout the text,
equations of particular importance have been indicated by an asterisk.
We believe that this text provides a needed, missing element in the SAR
literature. Here we have detailed the techniques needed for design and
development of the SAR system with an emphasis on the signal processing.
This work is a blend of the fundamental theory underlying the SAR imaging
process and the practical system engineering required to produce quality images
from real SAR systems. It should serve as an aid for both the radar engineer
and the scientist. We have made a special effort to annotate our concepts with
figures, plots, and images in an effort to make our ideas as accessible as possible.
It is our sincere belief that this work will serve to reduce the mystery surrounding
the generation of SAR images and open the door to a wider user community
to develop new, environmentally beneficial applications for the SAR data.
JOHN C. CURLANDER
ROBERT N. MCDONOUGH

Pasadena, California
Laurel, Maryland
April 1991

ACKNOWLEDGMENTS
This work draws in large part from knowledge gained during participation in
the NASA Shuttle Imaging Radar series. For this reason we wish to give special
recognition to Dr. Charles Elachi, the principal investigator of these instruments,
for providing the opportunity to participate in both their development and
operation.
The text presents results from a number of scientists and engineers too
numerous to mention by name. However, we do wish to acknowledge
the valuable inputs received from colleagues at the California Institute of
Technology Jet Propulsion Laboratory, specifically A. Freeman, C. Y. Chang,
S. Madsen, R. Kwok, B. Holt, Y. Shen and P. Dubois. At The Johns Hopkins
University Applied Physics Laboratory, collaboration with B. E. Raff and
J. L. Kerr has stimulated much of this work. Among those who shared their
knowledge of SAR, special thanks go to E.-A. Berland of the Norwegian Defence
Research Establishment, B. Barber of the Royal Aircraft Establishment, and
W. Noack and H. Runge of the German Aerospace Research Establishment
(DLR). Additionally, without the technical support of K. Banwart, J. Elbaz,
and S. Salas this text could not have been compiled.
We both benefited from the intellectual atmosphere and the financial support
of our institutions. Special recognition should go to Dr. F. Li of the Jet
Propulsion Laboratory for his support to JCC during the preparation of this
manuscript. Additionally, we wish to thank Prof. O. Phillips for hosting RNM
as the J. H. Fitzgerald Dunning Professor in the Department of Earth and
Planetary Sciences at The Johns Hopkins University during 1986-87. The
financial support provided by the JHU Applied Physics Laboratory for that
position, and for a Stuart S. Janney Fellowship, aided greatly in this work.
1
INTRODUCTION TO SAR
Nearly 40 years have passed since Wiley first observed that a side-looking radar
can improve its azimuth resolution by utilizing the Doppler spread of the echo
signal. This landmark observation signified the birth of a technology now
referred to as synthetic aperture radar (SAR). In the ensuing years, a flurry of
activity followed, leading toward steady advancement in performance of both
the sensor and the signal processor. Although much of the early work was
aimed toward military applications such as detection and tracking of moving
targets, the potential for utilizing this instrument as an imaging sensor for
scientific applications was widely recognized.
Prior to the development of the imaging radar, most high resolution sensors
were camera systems with detectors that were sensitive to either reflected solar
radiation or thermal radiation emitted from the earth's surface. The SAR
represented a fundamentally different technique for earth observation. Since a
radar is an active system that transmits a beam of electromagnetic (EM)
radiation in the microwave region of the EM spectrum, this instrument extends
our ability to observe properties about the earth's surface that previously were
not detectable. As an active system, the SAR provides its own illumination and
is not dependent on light from the sun, thus permitting continuous day / night
operation. Furthermore, neither clouds, fog, nor precipitation have a significant
effect on microwaves, thus permitting all-weather imaging. The net result is an
instrument that is capable of continuously observing dynamic phenomena such
as ocean currents, sea ice motion, or changing patterns of vegetation (Elachi
et al., 1982a).
Sensor systems operate by intercepting the earth radiation with an aperture
of some physical dimension. In traditional (non-SAR) systems, the angular
Any remote sensor designed for global coverage at high resolution inherently
generates a large volume of data. An additional factor for the SAR is that
to form an image from the downlinked signal data, literally hundreds of
mathematical operations must be performed on each data sample. Consider,
for example, a 15 s (100 km x 100 km) Seasat image frame consisting of several
hundred million data samples. To digitally process this data into imagery in
real-time requires a computer system capable of several billion floating point
operations per second. As a result, much of the early processing of the data
was performed optically using laser light sources, Fourier optics, and film. The
early digital correlators could process only a small portion of the acquired data.
Furthermore, they generally approximated the exact matched filter image
formation algorithms to accommodate the limited capabilities of the computer
hardware. The net result of the limitations in these signal processors was
generally an image product of degraded quality with a very limited dynamic
range that could not be reliably calibrated. The inconsistency and qualitative
nature of the optically processed imagery, in conjunction with the limited
performance and limited quantity of the digital products, served to constrain
progress in the scientific application of SAR data during its formative years.
Figure 1.1 Satellite sensor configuration, showing the SAR antenna, SAR data link antenna, visible-infrared radiometer, multichannel microwave radiometer, altimeter, and solar array.
1.1
In the introduction we alluded to several of the features that make the SAR a
unique instrument in remote sensing: (1) Day / night and all-weather imaging;
(2) Geometric resolution independent of sensor altitude or wavelength; and
(3) Signal data characteristics unique to the microwave region of the EM
spectrum. An overview of the theory behind the synthetic aperture and pulse
compression techniques used to achieve high resolution is presented in the
following section. In this section, we principally address the unique properties
of the SAR data as they relate to other earth-observing sensors. As an active
sensor, the SAR is in a class of instruments which includes all radars (e.g.,
altimeters, scatterometers, lasers). These instruments, in contrast to passive
sensors (e.g., cameras, radiometers), transmit a signal and measure the reflected
wave. Active systems do not rely on external radiation sources such as solar
or nuclear radiation (e.g., Chernobyl). Thus the presence of the sun is not
relevant to the imaging process, although it may affect the target scattering
characteristics. Furthermore, the radar frequency can be selected such that its
absorption (attenuation) by atmospheric molecules (oxygen or water vapor) is
small. Figure 1.2 illustrates the absorption bands in terms of percent atmospheric
transmission versus frequency (wavelength). Note that in the 1-10 GHz
(3-30 cm) region the transmissivity approaches 100%. Thus, essentially
Figure 1.2 Percent transmission through the earth's atmosphere for the microwave portion of the electromagnetic spectrum.
between the transmitted electromagnetic (EM) wave and the surface is highly
wavelength dependent. The EM wave interacts with the surface by a variety of
mechanisms which are related to both the surface composition and its
structure. For the microwave region in which spaceborne SAR systems
operate (1-10 GHz), the characteristics of the scattered wave (power, phase,
polarization) depend predominantly on two factors: the electrical properties of
the surface (dielectric constant) and the surface roughness.
As an example, consider a barren (non-vegetated) target area where surface
scattering is the dominant wave interaction mechanism. For side-looking
geometries (i.e., with the radar beam pointed at an angle > 20° off nadir), if
the radar wavelength is long relative to the surface roughness then the surface
will appear smooth, resulting in very little backscattered energy. Conversely,
for radar wavelengths on the scale of the surface rms height, a significant fraction
of the incident power will be reflected back toward the radar system. This
scattering characteristic is illustrated as a function of wavelength in Fig. 1.3
(Ulaby et al., 1986). Note that the variation in backscatter as a function of rms
height and angle of incidence is highly dependent on the radar frequency or
wavelength. A similar wavelength dependence is also observed for the surface
dielectric constant. Generally, a fraction of the incident wave will penetrate
the surface and be attenuated by the subsurface media. This penetration
characteristic is primarily governed by the radar wavelength and the surface
dielectric properties. It is especially important in applications such as soil
moisture measurements and subsurface sounding, where proper selection of the
radar wavelength will determine its sensitivity to the surface dielectric properties.
Figure 1.3 Normalized backscatter coefficient as a function of surface roughness for three radar frequencies, 1.1, 4.25, and 7.25 GHz (Ulaby et al., 1986).
1.1.1
Despite the unique capabilities of the SAR to measure properties of the surface,
its operating range is limited to a small portion of the electromagnetic spectrum.
Thus, a full characterization of the surface properties with a single instrument,
such as the SAR, is not possible. To get a complete description of the
surface chemical, thermal, electrical, and physical properties, observation by a
variety of sensors over a large portion of the electromagnetic spectrum is
required. Figure 1.4 illustrates the various regions of the electromagnetic
spectrum from the radio band (25 cm ≤ λ ≤ 1 km) to the ultraviolet band
(0.3 µm ≤ λ ≤ 0.4 µm).
Each region of the EM spectrum plays an important role in some aspect of
remote sensing. For characterizing the earth's surface properties, the most
useful bands, in addition to the microwave, are: (1) Infrared (3-30 µm);
and (2) Visible/near infrared (0.4-3 µm). At frequencies lower than 1 GHz,
ionospheric disturbances and ground interference dominate the received
signal characteristics, while in the millimeter and submillimeter region
(100 GHz-10 THz) a large number of molecular absorption bands provide
information about the atmospheric constituents, but little or no information
about surface properties. Sensors that perform measurements in the thermal
infrared region such as the Heat Capacity Mapping Mission (HCMM)
radiometer (Kahle et al., 1981), as well as those in the visible/near infrared
region such as SPOT and Landsat Thematic Mapper (TM) (Freden and Gordon,
1983), measure surface properties that are complementary to the microwave
measurements of the SAR. The thermal infrared (10-15 µm) band is sensitive
to emissions from the surface (and atmosphere) relating to the vibrational and
rotational molecular processes of the sensed object. Information on the surface
temperature and heat capacity of an object can be derived from these
measurements. In the visible and near infrared regions, vibrational and electronic
molecular processes are measured. This information can be interpreted in terms
of chemical composition, vegetation, and biological properties of the surface.
Within the microwave region (1-300 GHz) there are several windows in the
atmospheric absorption bands outside the nominal SAR frequency range of
1-10 GHz. Most active, real aperture radar systems, such as the scatterometer
and altimeter, operate in the 10-20 GHz region (Ulaby et al., 1982). These are
1.1
INTRODUCTION TO SAR
~-----+------+-----+-----~ (Hz)
1010
MICROWAVE
VISIBLE
'
106
1010
I
~
'
1012
'st ..
f (HZ)
W
--- ---- - - -- - - -- -ITT
NAVIGATION
LONG DIST. COMMUN IC~ M O~ COMM. PT-TO-PT COMM.
RADAR
15235
_ 1
39
~ _!...
RADIOMETER
39
1 55
s_
_......;;
L;....__ _ _
62
._
_ x_
1 39
Figure 1.4
3.6 46 56
~ <.?
Ku
X-RAY
I'
1.1.2
I
I
10~//
INFARED ULTRAVIOLET
RADIO
POWER
y _::!!._
Ka
generally not imaging instruments; rather they collect time series data primarily
for oceanographic and meteorological applications. In the extremely high
frequency (EHF) range of the microwave spectrum (30-300 GHz) only the
atmospheric window regions of 35 GHz, 90 GHz, and 135 GHz are useful for
observation of surface properties. With few exceptions, only passive systems
such as microwave radiometers operate in these regions. These sensors measure
the surface brightness temperature (the intensity of the radiation emitted by
the object), which in conjunction with a surface radiation emission model can
be used to measure surface properties. A key application of EHF spaceborne
radiometry is for measuring ice extent in the polar regions as well as determining
ice type. Other applications include measuring land surface properties such as
snow cover and soil moisture. Historically, there has been very little utilization
of these data sets in conjunction with the SAR data since the resolution is
typically several orders of magnitude coarser than that of the synthetic aperture
radar. For every resolution cell in a radiometer image, the SAR may have
1000-10,000 cells. In spite of this large resolution disparity, spaceborne
platform instrument packages, each carrying 10-20 instruments that have been
grouped to optimize the synergism resulting from simultaneous observations
(Table 1.1). Each platform is designed for a five year life cycle and will be
followed by two "identical" platforms for a total 15 year observation period.
Additionally, a free-flying SAR satellite with an instrument similar to the
SIR-C/X-SAR (Table 1.4) will be launched during this period by NASA. Special
emphasis is being placed on the signal processing and calibration elements of
the EOS ground data system to ensure that high precision, geodetically registered
data products are delivered to the user in a timely fashion.
TABLE 1.1 Selected Instruments from the Sensor Packages Planned for
each of the EOS Platforms

NASA EOS-A
Moderate Resolution Imaging Spectrometer - Nadir/-Tilt (MODIS-N/-T)
Lightning Imaging Sensor (LIS)
Advanced Spaceborne Thermal Emission and Reflection (ASTER)
Atmospheric Infrared Sounder/Advanced Microwave Sounding Units (AIRS/AMSU-A/-B)
High-Resolution Dynamics Limb Sounder (HIRDLS)
Stick Scatterometer (STIKSCAT)
Clouds and Earth Radiant Energy System (CERES)
Earth Observing Scanner Polarimeter (EOSP)
Multi-Angle Imaging Spectro-Radiometer (MISR)
High Resolution Imaging Spectrometer (HIRIS), 2nd platform only

NASA EOS-B
Stratospheric Wind Infrared Sounder (SWIRLS)
Microwave Limb Sounder (MLS)
X-Ray Imaging Experiment (XIE)
Tropospheric Emission Spectrometer (TES)
Stratospheric Aerosol and Gas Experiment III (SAGE III)
Altimeter (ALT)
Multi-Frequency Imaging Microwave Radiometer (MIMR)
Global Geopositioning Instrument (GGI)

ESA European Polar Orbiting Platform (EPOP)
Clouds and Earth Radiant Energy System (CERES)
Synthetic Aperture Radar - C-band (SAR-C)
Atmospheric Lidar (ATLID)
High Resolution Imaging Spectrometer (HRIS)
Advanced Medium Resolution Imaging Radiometer (AMRIS)
Search and Rescue (S&R)

NASDA Japanese Polar Orbiting Platform (JPOP)
Laser Atmospheric Wind Sounder (LAWS)
Synthetic Aperture Radar - L-Band (SAR-L)
Ocean Color and Temperature Scanner (OCTS)
Advanced Visible and Near Infrared Radiometer (AVNIR)
Advanced Microwave Sounding Radiometer (AMSR)
Prior to the full implementation of the EOS program by the year 2000, there
will be four free-flying satellites containing SAR systems as part of their
instrument package. The first system, launched in March 1991, is the Soviet
S-band (ALMAZ) system, followed by the European Space Agency C-band
(ERS-1) system to be launched in summer 1991. The Japanese L-band SAR
(J-ERS-1) will be launched in 1992 and the Canadian Radarsat, a C-band system
with electronic scanning capability, is planned for 1995. The parameters for
these sensors are given in Table 1.2. The data from three of these instruments
(excluding ALMAZ) will be received by a United States ground receiving station
in Alaska (as well as other facilities worldwide) and operationally calibrated
and processed to high level (geophysical) products. A description of the
design and operation of this station, the Alaska SAR Facility, is provided in
Appendix C.
Considering that to date the only spaceborne SAR systems for remote sensing
have been the NASA Seasat-A SAR and the Shuttle Imaging Radars (SIR-A,
SIR-B), for a total of less than four months of operation, these upcoming SAR
missions offer a significant opportunity to utilize SAR data for global science.
(We should also note that the recently deorbited USSR Cosmos 1870 SAR
(λ = 10 cm) was used primarily for remote sensing purposes and that the Soviets
have made this data available to the scientific community.) Given the recent
advances in processing and calibration technologies that will be applied to the
data products, these near future free-flying SAR systems should greatly advance
our understanding of the use of SAR data for modeling global processes.
Considering the volume of SAR data that is to be collected, it is reasonable to
assume that the number of scientists working with these data sets will increase
tenfold over the next decade.
To properly interpret and fully utilize the information contained in these
data sets, an understanding by the user community of the signal processing
procedures and the system error sources is crucial. For this reason, we first
provide a complete theoretical development of the SAR imaging process and
signa l processing algorithms. This is followed by a description of the sensor
flight and ground data systems that emphasizes aspects of the sensor and
processor performance in terms of data product characteristics. Our goal is to
provide a useful guide, not only for the SAR system engineer but also for the
scientist using these data sets. We believe that an understanding of the techniques
underlying production of the SAR imagery will enhance the scientist's ability
to interpret the data products.
1.2
Figure 1.6 The SAR imaging geometry, showing the antenna, the platform trajectory, and the radiated pulses.
Figure 1.7 The slant range and ground range swath geometry.
W_g ≈ λR_m/(W_a cos η)    (1.2.1)
The resolution of the radar in (ground) range (Fig. 1.7) is defined as the minimum
range separation of two points that can be distinguished as separate by the
system. If the arrival time of the leading edge of the pulse echo from the more
distant point is later than the arrival time of the trailing edge of the echo from
the nearer point, each point can be distinguished in the time history of the radar
echo. If the time extent of the radar pulse is τ_p, the minimum separation of two
resolvable points is then

ΔR = cτ_p/2    (1.2.2)

where ΔR is the resolution in slant range and c is the speed of light.
As we will discuss in Chapter 3, to obtain a reasonable resolution ΔR, the
required pulse duration τ_p would be too short to deliver adequate energy per
pulse to produce a sufficient echo signal to noise ratio (SNR) for reliable
detection. Therefore, a pulse compression technique is commonly employed to
achieve both high resolution (with a longer pulse) and a high SNR. With
appropriate processing of the received pulse (matched filtering), the range
resolution obtainable is

ΔR = c/(2B_R)

where B_R is the bandwidth of the transmitted pulse.
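To make Eqn. (1.2.2) and the benefit of pulse compression concrete, here is a minimal numeric sketch. The pulse duration and chirp bandwidth below are assumed, Seasat-like values chosen for illustration, not figures taken from the text.

```python
# Slant-range resolution of an uncompressed pulse, Eqn. (1.2.2): dR = c*tau_p/2,
# versus the matched-filter (pulse compression) result dR = c/(2*B_R).
# tau_p and B_R below are assumed illustrative values.
c = 3.0e8            # speed of light, m/s

tau_p = 33.8e-6      # uncompressed pulse duration, s (assumed)
dR_uncompressed = c * tau_p / 2.0
print(f"uncompressed slant-range resolution: {dR_uncompressed/1e3:.2f} km")

B_R = 19.0e6         # chirp bandwidth after compression, Hz (assumed)
dR_compressed = c / (2.0 * B_R)
print(f"compressed slant-range resolution: {dR_compressed:.1f} m")
```

The kilometer-scale resolution of the raw pulse collapses to meters after compression, which is why the long pulse can carry adequate energy without sacrificing resolution.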
1.2.1
As shown in Fig. 1.6, suppose that the radar antenna has a length L_a in the
dimension along the line of flight. Then the radar beam (i.e., the angular direction
in space to which the transmitted electromagnetic energy is confined and from
which the system can respond to received signals) has an angular spread in
that dimension of θ_H = λ/L_a, where λ is the wavelength of the transmitted energy.
Two targets on the ground separated by an amount δx in the azimuth direction
(Fig. 1.8), and at the same slant range R, can be resolved only if they are not
both in the radar beam at the same time. Thus we have

δx ≥ Rθ_H = λR/L_a    (1.2.3)
The key observation that ultimately led to SAR, and the vastly improved
along-track resolution that makes spaceborne imaging radars possible, dates
from about 1951, and is attributed originally to Wiley (1965). He observed that
two point targets, at slightly different angles with respect to the track of the
moving radar, have different speeds at any instant relative to the platform.
Therefore, the radar pulse when reflected from the two targets will have two
distinct Doppler frequency shifts.
For a point target at slant range R and along-track coordinate x relative to
the side-looking radar (Fig. 1.8), the Doppler shift relative to the transmitted
frequency is

f_D = (2V_rel/λ) sin θ    (1.2.4)

where V_rel is the relative velocity, θ is the angle of the target off broadside,
and the factor of 2 results from the two-way travel inherent in an active system.
(In this section, we assume that V_rel is just the platform speed V.) Therefore, if
the received signal at the instant shown in Fig. 1.8 is frequency analyzed, any
energy observed in the return at time corresponding to range R and at Doppler
frequency f_D can be attributed to a target at along-track position

x = λRf_D/(2V_rel)

The slant range from the radar to the target satisfies

R^2 = (x - V_rel s)^2 + R_g^2 + H^2

where s is the time along the flight path. The range rate is given by

Ṙ = -V_rel(x - V_rel s)/R

so that the Doppler frequency at closest approach is

f_D = -2Ṙ(0)/λ = 2V_rel x/(λR(0))    (1.2.6)

which is the equation of a conic in the (R_g, x) plane. From Eqn. (1.2.6) and
Fig. 1.9 we can write

x = λf_D R(0)/(2V_rel)
Figure 1.10 Illustration of use of range delay and Doppler shift to locate the target.
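The locating principle of Fig. 1.10 can be sketched numerically: the echo delay fixes the slant range R, and the measured Doppler shift then fixes the along-track coordinate through x = λRf_D/(2V_rel), the small-angle inversion of the Doppler relation of Eqn. (1.2.4). The wavelength, platform speed, and measurement values below are illustrative assumptions, not figures from the text.

```python
# Locate a target from its range delay and Doppler shift (Fig. 1.10).
# All numeric parameters are assumed, illustrative values.
wavelength = 0.235      # L-band radar wavelength, m (assumed)
V_rel = 7.5e3           # platform speed, m/s (assumed)

def along_track_position(R, f_D):
    """Along-track coordinate implied by slant range R (m) and Doppler
    shift f_D (Hz), via x = lambda * R * f_D / (2 * V_rel)."""
    return wavelength * R * f_D / (2.0 * V_rel)

R = 850e3               # slant range fixed by the measured echo delay, m
f_D = 1.5e3             # measured Doppler shift, Hz
x = along_track_position(R, f_D)
print(f"target along-track offset: {x/1e3:.2f} km")
```

A zero Doppler shift places the target exactly broadside (x = 0), which is the geometric content of the range/Doppler intersection in the figure.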
δx = (λR/(2V_rel)) δf_D    (1.2.7)

δx = (λR/(2V_rel))(L_a V_rel/(Rλ)) = L_a/2    *(1.2.9)
This counter-intuitive result, which states that improved resolution comes from
smaller antennas, was first proposed by Cutrona et al. (1961). This result actually
makes some assumptions that are not always valid, as we will discuss in
Section 1.2.2; however, the resolution of contemporary SARs does approach
this limit. Seasat, for example, had an antenna with an along-track dimension
L_a = 10.7 m, and attained a resolution δx = 6 m from an orbital altitude of
H = 800 km.
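The limiting result δx = L_a/2 of Eqn. (1.2.9) is easy to check against the Seasat figures quoted above; a minimal sketch:

```python
# Azimuth resolution limit of a fully focused SAR, Eqn. (1.2.9): dx = L_a / 2.
L_a = 10.7            # Seasat along-track antenna dimension, m (from the text)
dx_limit = L_a / 2.0
print(f"theoretical azimuth resolution limit: {dx_limit:.2f} m")

# Seasat's attained resolution of about 6 m (from the text) approaches,
# but does not quite reach, this limit.
dx_attained = 6.0
assert dx_attained > dx_limit
```

Note that neither the range R nor the wavelength λ appears in the limit, which is the sense in which SAR resolution is independent of sensor altitude.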
Although Eqn. (1.2.9) predicts that an arbitrarily fine resolution is attainable
by reducing the antenna azimuth dimension, at least one factor operates to put
a lower bound on resolution, even at this simple level of modeling. Since we
need to measure range as well as along-track position, the radar must be pulsed.
When a pulse is transmitted, the radar then goes into a listening mode to detect
the target echo. Suppose the span of the (slant) range to which targets are
confined (i.e., the slant range swath) is W_s (Fig. 1.7). We then require that the
time of reception of the earliest possible echo from any point in the swath due
to a particular pulse transmission be later than the time of reception of the last
possible echo from any other point due to transmission of the previous pulse.
Otherwise we will attribute the trailing portion of the previous pulse echo to
a nearby point illuminated by the current pulse. If the near and far edges of
the swath in slant range are R' and R'', this requires that (Fig. 1.7)
1/f_p ≥ 2(R'' - R')/c = 2W_s/c,  or  f_p ≤ c/(2W_s)    (1.2.10)

where f_p is the pulse repetition frequency (PRF). Unambiguous sampling of
the along-track Doppler history further requires

f_p ≥ 2V_rel/L_a    *(1.2.11)
Equation ( 1.2.11) states that the radar must transmit at least one pulse each time
the platform travels a distance equal to one half the antenna length.
Combining Eqn. (1.2.10) and Eqn. (1.2.11), we have

2V_rel/L_a ≤ f_p ≤ c/(2W_s)    *(1.2.12)
which requires that the swath width W_s decrease as the azimuth resolution is
increased (i.e., as δx is made smaller).
The inequalities in Eqn. (1.2.12) can be rearranged to illustrate the
relationship between swath width and resolution as follows

W_s/δx ≤ c/(2V_rel)    (1.2.13)
For a satellite in earth orbit, the right side in Eqn. (1.2.13) is nearly constant,
on the order of 20,000. Using Eqn. (1.2.1) and Eqn. (1.2.9) with the nominal
relation (Fig. 1.7)

W_s = W_g sin η

the inequality Eqn. (1.2.13) yields a requirement on the antenna area of

L_a W_a ≥ 4V_rel λ R_m tan η / c    *(1.2.14)
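The PRF window of Eqns. (1.2.10)-(1.2.12) and the swath-to-resolution bound of Eqn. (1.2.13) can be checked with a short sketch. The swath value is an assumed, Seasat-like number for illustration; only the antenna length comes from the text.

```python
# PRF window and swath/resolution tradeoff, Eqns. (1.2.10)-(1.2.13).
c = 3.0e8          # speed of light, m/s
V_rel = 7.5e3      # platform speed, m/s (assumed, typical low earth orbit)
L_a = 10.7         # along-track antenna length, m (Seasat, from the text)
W_s = 40e3         # slant-range swath, m (assumed)

f_p_min = 2.0 * V_rel / L_a      # Eqn. (1.2.11): one pulse per L_a/2 of travel
f_p_max = c / (2.0 * W_s)        # Eqn. (1.2.10): successive echoes must not overlap
print(f"PRF window: {f_p_min:.0f} Hz to {f_p_max:.0f} Hz")
assert f_p_min < f_p_max         # Eqn. (1.2.12) is satisfiable for this design

# Eqn. (1.2.13): the swath-to-resolution ratio is bounded by c/(2*V_rel),
# which is the "order of 20,000" constant quoted in the text.
dx = L_a / 2.0
ratio_bound = c / (2.0 * V_rel)
print(f"W_s/dx = {W_s/dx:.0f}, bound = {ratio_bound:.0f}")
assert W_s / dx <= ratio_bound
```

Shrinking L_a to sharpen δx raises the minimum PRF, which in turn forces the swath to shrink: the tradeoff the text describes.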
1.2.2 Doppler Filtering

The range between the radar at along-track position x and a target at x_0 is (Fig. 1.11)

R = [R_0^2 + (x - x_0)^2]^(1/2)    (1.2.15)

The received echo carries a phase φ = -4πR/λ, where the time derivative of φ is
the Doppler frequency (in rad/s). Expanding the relation in Eqn (1.2.15) to
second order around the radar position x_0 of closest approach,

φ ≈ -(4π/λ)[R_0 + (x - x_0)^2/(2R_0)]    (1.2.16)

where we can approximate R and R_0 as equal for the narrow beam radars
used in most practical applications. For this case then, with x = V_st s,

f_D = (1/2π) dφ/ds = -(2V_st/(λR_0))(x - x_0)

If we define the value of x at which the Doppler frequency ceases to be effectively
constant as that x for which the quadratic term in Eqn (1.2.16) contributes a
value of π/4 to φ at the edge of the aperture, then we can confine attention to
the received waveform collected over an "aperture" X, where

X/2 = |x - x_0| < (λR_0/8)^(1/2)

or

X = (λR_0/2)^(1/2)    (1.2.17)

The corresponding time interval (i.e., the integration time of the SAR) is limited
to

S = X/V_st = (λR_0/2)^(1/2)/V_st

With this limitation, the resolution from Eqn (1.2.7) is

δx = λR_0/(2X) = (λR_0/2)^(1/2)    (1.2.18)

This is the resolution of an unfocused SAR, a processing system which attains
its along-track resolution by simple frequency filtering.

Figure 1.11 Geometry illustrating radar target and the quadratic relation between range and time.
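Eqns (1.2.17) and (1.2.18) are easily checked with numbers. The values of λ, R_0, and V_st below are assumed, Seasat-like figures, used only for illustration:

```python
import math

lam = 0.235    # wavelength (m), assumed L-band
R0 = 850e3     # range at closest approach (m), assumed
V_st = 7.5e3   # platform velocity (m/s), assumed

X = math.sqrt(lam * R0 / 2.0)    # Eqn (1.2.17): maximum unfocused aperture
S = X / V_st                     # corresponding integration time
dx = lam * R0 / (2.0 * X)        # Eqn (1.2.18): unfocused resolution

print(f"aperture X = {X:.0f} m, integration time S = {S*1e3:.0f} ms, "
      f"resolution dx = {dx:.0f} m")
```

Note that dx comes out equal to X: at the π/4 phase limit the unfocused resolution is (λR_0/2)^(1/2), a few hundred meters for spaceborne geometry, which is what motivates focused processing.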
The phase of the return changes with sensor position through the range increment

ΔR = [R_0^2 + (x - x_0)^2]^(1/2) - R_0

or

ΔR ≈ (x - x_0)^2/(2R_0),   |x - x_0| ≪ R_0    (1.2.19)

where R_0 is the range at the point of closest approach (i.e., s = 0), and the
corresponding phase change is Δφ = -4πΔR/λ. Since x = V_st s, Δφ is a quadratic
function of the along-track time, s, and the change in Doppler frequency is
linear with time:

f_D(x) = (1/2π) dΔφ/dx = -2(x - x_0)/(λR_0)

(in cycles per meter along track). For full resolution, we must use all the data
collected over the interval, X = θ_H R_0, for which the target is in the radar beam.

The range processing of any particular return, due to a target at x_0 for the
sensor at a position x, results in a point on the complex Doppler waveform

f(x) = exp[-j4πR(x)/λ] ≈ exp{-j(4π/λ)[R_0 + (x - x_0)^2/(2R_0)]}    (1.2.20)

If this quadratic phase is compensated such that the returns from each pulse
due to the target at x_0 can be added coherently, targets at x ≠ x_0 will correspond
to improperly compensated returns so they will cancel. The processed returns
from the target at x_0 will then dominate returns from other targets at the same
range. The compensation amounts to correlating f(x) with a quadratic-phase
reference function g(x) over the aperture,

h(x′) = ∫ f(x) g*(x - x′) dx,   |x - x′| < X/2

whose magnitude is

|h(x′)| = |sin[2π(x′ - x_0)(X - |x′ - x_0|)/(λR_0)] / [2π(x′ - x_0)X/(λR_0)]|,   |x′ - x_0| < X

taking careful account of limits of integration and the sign of x′.

Figure 1.12

If the time-bandwidth product of this signal, B_D S, is sensibly large, say >10,
over regions where |h(x′)| is not small we have

|h(x′)| = |sin[u(x′ - x_0)]/[u(x′ - x_0)]|,   u = 2πX/(λR_0)    (1.2.21)

This function peaks at x′ = x_0, the target location, and has a width on the
order of

δx = λR_0/(2X) = V_st/B_D    (1.2.22)

where B_D = 2V_st^2 S/(λR_0) is the Doppler bandwidth spanned during the
integration time S = X/V_st. With the full aperture X = θ_H R_0 = λR_0/L, this
becomes

δx = L/2    (1.2.23)

consistent with Eqn (1.2.9).
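The compressed response in Eqn (1.2.21) and the width in Eqns (1.2.22)-(1.2.23) can be demonstrated with a small simulation: correlate the quadratic-phase history of Eqn (1.2.20) with its replica over the full aperture and measure the distance from the peak to the first null, which should be λR_0/(2X) = L/2. All parameter values are assumed, Seasat-like figures:

```python
import numpy as np

lam, R0, L = 0.235, 850e3, 10.7   # assumed wavelength (m), range (m), antenna (m)
X = lam * R0 / L                  # full aperture X = theta_H * R0 = lam*R0/L

N = 1 << 17                       # sample well above the chirp bandwidth
x = np.linspace(-X/2, X/2, N, endpoint=False)
step = x[1] - x[0]

f = np.exp(-1j * 2.0*np.pi * x**2 / (lam * R0))   # phase history, target at x0 = 0
# circular correlation with the matched replica via FFT
# (wrap-around effects are negligible near the peak)
h = np.abs(np.fft.fftshift(np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(f)))))

peak = int(np.argmax(h))          # lag zero, i.e. x' = x0
null = peak + 1
while h[null] > h[null + 1]:      # walk down the mainlobe to the first null
    null += 1
measured = (null - peak) * step

dx_theory = lam * R0 / (2.0 * X)  # Eqn (1.2.22); equals L/2 by Eqn (1.2.23)
print(f"theory: dx = {dx_theory:.2f} m (L/2 = {L/2:.2f} m), "
      f"measured null offset = {measured:.2f} m")
```

For a sinc-like response the peak-to-first-null spacing equals the resolution, so the measured offset should land close to L/2 = 5.35 m here.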
1.3
To gain a perspective on the progress that has been made in the evolution of
synthetic aperture radar systems, we present a brief history of SAR. To set the
stage for the discovery of SAR, we first address the early history of radar from
ground based detection systems to side-looking airborne mappers. We will then
trace key developments in the SAR sensor technology as well as the signal
processor by highlighting the technology milestones leading toward modern
radar systems.
1.3.1 Early History
Prior to discovery of synthetic aperture radar in the early 1950s, radar had long
been recognized as a tool for detection and tracking of targets such as aircraft
displays to present a full gray scale. It was these, among other early technology
developments, that set the stage for the evolution of imaging radar.
1.3.2
In the early 1950s, engineers first recognized that, instead of rotating the antenna
to scan the target area, it could be fixed to the fuselage of the aircraft. This
allowed for much longer apertures and hence improved along-track resolution.
An additional improvement was the use of film to record the CRT display of
the pulse echoes. The early versions of these side-looking aperture radar (SLAR)
systems were primarily used for military reconnaissance purposes. They were
typically operated at relatively high frequencies compared to ground based
radar systems, to achieve good along-track resolution. Some systems (e.g., the
Westinghouse AN/APQ-97), that operated at frequencies as high as 35 GHz
with pulse durations a small fraction of a microsecond, were capable of producing
imagery at resolutions in the 10-20 m range. It was not until the mid 1960s
that the first high resolution SLAR images were declassified and made available
for scientific use. The value of SLAR images for scientific applications such as
geologic mapping, oceanography, and land use studies was recognized almost
immediately (MacDonald, 1969). Perhaps the most widespread interest in the
use of SLAR was generated by the mapping campaigns to Central America
(Viksne, 1969) and South America (van Roessel and de Godoy, 1974). Large
areas of these perpetually cloud-covered regions were mapped for the first time,
dramatically demonstrating the benefits of a high resolution radar imager.
It is generally agreed that the earliest statement describing the use of Doppler
frequency analysis as applied to a coherent moving radar was put forth by Carl
Wiley of Goodyear Aircraft Corp. in June 1951 (Wiley, 1985). Wiley noted that
the reflections from two fixed targets at an angular separation relative to the
velocity vector could be resolved by frequency analysis of the along-track
spectrum. This characteristic permitted the azimuth resolution of the return
echoes to be enhanced by separating the echoes into groups based on their
Doppler shift, as described in Section 1.2. In his patent application, Wiley (1965)
referred to his technique as Doppler beam sharpening rather than synthetic
aperture radar, as it is known today. His design, shown in Fig. 1.13a, is today
referred to as squint mode SAR.
Although the radar group at the Goodyear research facility in Litchfield,
Arizona, was primarily interested in high resolution radar as applied to missile
guidance systems, they pursued Wiley's beam sharpening concept and built the
first airborne SAR system, flown aboard a DC-3 in 1953. This system, which
operated at 930 MHz, used a Yagi antenna with a real aperture beamwidth of
10°. The coherent video was filtered to extract the desired portion of the
Doppler spectrum, weighting was applied to the baseband analog signal, and
it was summed in a storage tube to achieve a synthetic beamwidth of
approximately 1° (Fig. 1.13b).
Figure 1.13 (a) Radar configuration; and (b) Operational flow diagram, as proposed by Wiley
in his patent application for the Doppler beam sharpening radar (Wiley, 1965).
was carried out by a group at the University of Illinois under the direction of
C. W. Sherwin, who observed that variation in terrain height produced distinctive
peaks that migrated across the azimuth frequency spectrum. He reported that
these experimental observations could provide the basis for a new type of radar
with improved angular resolution.
It was also in 1952 that Sherwin first reported the concept of a fully focussed
array at each range bin by providing the proper phase corrections. Additionally,
he put forth the concept of motion compensation based on phase corrections
derived from platform accelerometer measurements, as applied to the received
signal before storage. These ideas eventually evolved into development of a
coherent X-band radar system. The first published article that included a
focussed strip image was in a 1953 University of Illinois report. This system
was designed to study sea surface characteristics as well as ship and submarine
wakes.
As a result of the accomplishments of the Illinois group, a much larger effort
was initiated. This study, coordinated by the University of Michigan, was termed
Project Wolverine. The study team, whose activities are summarized by Cutrona
(1961), was commissioned by the US Army to develop a high performance
combat surveillance radar. They developed a number of operational airborne
SAR systems that routinely began producing strip maps by 1958. It is this group
that is credited with developing the first operational motion compensation
system, using a Doppler navigator to measure long-term average drifts in
conjunction with a gyro to correct for short-term yawing of the aircraft. Perhaps
the most important development by Cutrona's group is the onboard optical
recorder and ground optical correlator for converting the coherent SAR video
signal into high resolution strip images.
In conjunction with the development of these early SAR systems, there were
a number of other activities which advanced the state of the art in component
technology. Recall that the key difference between the real aperture SLAR
system and the SAR (besides the signal processing required) is that SAR is a
coherent system. This requires both the magnitude and the phase of the echo
samples to be preserved, which implies that the system pulse-to-pulse phase
must be stable. The high power magnetron, which was such an important
development for the SLAR, could not be used directly in the SAR system since
the starting phase of each pulse was random. Instead, the early SAR systems
used a coho-stalo arrangement, where, for each magnetron pulse, the starting
phase of the pulse was measured. This phase was retained in a phase locked
intermediate frequency COHerent Oscillator (coho), referenced to the STAble
Local Oscillator (stalo), which was then used to demodulate the received echo.
The development of linear beam power amplifiers such as the klystron in
1939, followed shortly by the traveling wave tube (TWT), was a key advance
in SAR technology, since these devices provided both the high peak power and
phase stability required for SAR systems. The major advance in the TWT over
the klystron is the bandwidth. The klystron's bandwidth is limited to only a
few percent of the carrier frequency, while the TWT is capable of octave
bandwidths. Many of today's airborne SAR systems, and some spaceborne
systems requiring high peak power, still use TWT technology, although solid
state power amplifiers are now used in many applications because of their
increased reliability. Just as the solid state high power transistor technology
matured through the 70s and 80s, the technology of monolithic microwave
integrated circuit (MMIC) devices is moving toward the forefront in the 90s
and should become the standard in the next generation of spaceborne and
airborne SAR systems.
1.3.3
operational unit and the first successful flight of an optical recorder was
conducted. The recording was performed on 35 mm film using CRTs modified
to generate the intensity modulated range trace. The system featured a Doppler
navigator for drift angle compensation to center the return on zero Doppler
and an optical recorder whose film advance rate was controlled by the estimated
ground speed.
The ground processing equipment was housed in a van for transportation
to the test sites. It contained both the optical correlator and the film processing
equipment, including a photo enlarger for analyzing strip imagery. This system
produced the first fully focussed SAR image in August 1957. The architecture
developed by the Michigan group became the standard for SAR correlators for
nearly two decades while the digital computing technology matured. A layout
of a modern optical correlator is shown in Fig. 1.14. Improvements in laser
light sources and Fourier optics enhanced the quality of the optically processed
image product. Hybrid architectures were also introduced (using acousto-optical
and charge coupled devices) to generate digital images from the optical signal,
but the use of film greatly constrained the performance of these systems. A
detailed description of optical processing theory and systems can be found in
Cutrona et al. (1960).
It was not until the late 1960s that the first fully digital SAR correlator was
developed. These ground based systems could not operate in real-time. Initially,
onboard optical recorders were used to collect the signal data from which a
small portion of the signal film was digitized and processed. These early digital
systems were limited in performance due to both the memory requirements and
the number of operations needed to perform fully focussed SAR processing.
Azimuth presummers were typically employed to reduce the data rate and
therefore the processing load on the correlator. The push for a real-time
onboard SAR correlator, particularly for military applications, led to the first
demonstration system in the early 1970s (Kirk, 1975). This system included a
Figure 1.14 Layout of a modern optical correlator (light source, collimating lens, acousto-optical
device, matched filtering lenses, mask, focusing lens, and CCD output array).
Figure 1.15 The onboard SAR processor built by MacDonald-Dettwiler and Assoc. for the CCRS
airborne system (Bennett, 1980).
1.3.4
Just as in the early days of SAR, a majority of current work in high resolution
SAR systems is funded by the US Department of Defense (DoD), and therefore
information about these systems is not available for open publication. However,
there are a number of civilian SAR systems that were developed under the
sponsorship of NASA, beginning in the late 1960s and early 1970s. The first
system, a single polarization X-band SAR, built originally by the Environmental
Research Institute of Michigan (ERIM) for the DoD in 1964, was declassified
in the late 60s by reducing its range bandwidth to 30 MHz. This system, flown
on a C-46 aircraft, was upgraded by NASA in 1973 by adding a second frequency
at L-band and equipping the system with servoed dual-polarized antennas
(Rawson and Smith, 1974). The two receive chains (one per frequency) fed into
two 70 mm optical recorders which captured both the like- and cross-polarized
signals for each frequency. This ERIM SAR was used for a number of scientific
research applications, especially the imaging of arctic sea ice. The Jet Propulsion
Laboratory (JPL) also developed (under NASA sponsorship) an L-band SAR
system that evolved from some early rocket radar tests (see below). The JPL SAR
had been upgraded to a simultaneous quad-polarized (polarimetric) capability
in both L- and C-bands by the early 1980s. This system was used for a number
of scientific research applications, especially those relating to geologic mapping
and the study of geomorphic processes (Schaber et al., 1980). Although neither
of these original systems is in operation today, they have both been replaced
with modern systems of much higher performance. The parameters of these
current systems, along with those of the Canadian Centre for Remote Sensing
(CCRS) SAR, are given in Table 1.3.
Spaceborne SAR History
Considering that both ERIM and JPL conducted most of the early airborne
SAR studies for NASA, it was logical that NASA turned to these two
organizations to build the first (non-military) spaceborne SAR system. Contrary
to popular belief, the Seasat-A SAR was not the first operational spaceborne
system. In 1962, JPL conducted the first of four rocket experiments at the White
Sands, New Mexico, missile test range (Fig. 1.16). These rockets carried an
experimental L-band sounding radar that was being evaluated for the lunar
lander. At the conclusion of these experiments in 1966, this radar was transferred
to the NASA CV-990 aircraft and was eventually upgraded to the JPL airborne
SAR system. The sounder's cavity-backed dipole antenna was replaced with a
dual-polarized planar array and the original magnetron (built by Raytheon)
was upgraded to a TWT. This system, which was used for a number of
applications including the study of oceanic phenomena in the Gulf of California,
collected data that eventually led to the approval of the Seasat SAR. In the
period between the conclusion of the rocket experiments and the approval of
the Seasat mission in 1975, NASA initiated the Apollo Lunar Sounder
Experiment (ALSE). This experiment, conducted jointly by ERIM and JPL,
was flown aboard the Apollo 17 lunar orbiter in December, 1972. It consisted
of four major hardware subsystems (Porcello et al., 1974): (1) RF electronics
(CSAR); (2) HF antennas; (3) VHF antenna; and (4) Optical recorder (Fig. 1.17).
At the heart of the system is the coherent SAR (CSAR) transmitter/ receiver
subsystem which could operate at any of three radar frequencies (5, 15, and
150 MHz). The objectives of the experiment were threefold: to detect subsurface
geologic structures; to generate a continuous lunar profile; and to map the
lunar surface at radar wavelengths. The data was recorded on photographic
film using a 70 mm optical recorder. The two high frequency (HF) dipole
antennas were used for mapping the subsurface geologic features and the very
high frequency (VHF) Yagi antenna oriented 20° off local vertical was used
primarily for surface mapping and profiling (Fig. 1.18 ). The bulk of the signal
processing was carried out at ERIM using a modified version of their airborne
SAR coherent optical processor. Due to the large dynamic range of the data
(conservatively estimated at 45 dB), the image film was inadequate to observe
a number of subsurface features. At JPL, a small amount of the signal film was
scanned and processed digitally using a PDP-11 computer, while ERIM
constructed several holographic viewers to directly observe and manipulate the
image projection on a liquid crystal display.
The success of the lunar sounder experiment, coupled with the oceanographic
phenomena observed by the JPL L-band airborne SAR, led NASA in 1975 to
Figure 1.17 Optical recorder flown as part of the Apollo Lunar Sounder Experiment and later
on SIR-A.
approve the inclusion of a SAR as part of the Seasat mission (Fig. 1.1). Despite
the 10 years of oceanographic observation with airborne SAR systems, the
proposed Seasat SAR created tremendous controversy within the scientific
community. The dissenting camp argued that the coherent integration time was
too long (~2.5 s), and would result in decorrelation of the signal due to
movement of the ocean surface. The issue was never resolved theoretically and
finally it was decided that the only possible means of resolution would be
actually to fly the SAR on Seasat. As it turned out, the Seasat SAR observed
a number of unique ocean features that significantly contributed to our
understanding of the global oceans (Fu and Holt, 1982). Although the system
(Table 1.2) was designed primarily to image the oceans with its steep 23°
incidence angle, Seasat data has found a wide variety of applications. The most
significant of these are in geology, polar ice, and land use mapping (Elachi et al.,
1982a). The success of Seasat, however, was limited in terms of the duration of
the data collection. A complete power failure just 100 days after its July 1978
launch, attributed to a short circuit in the slip rings that articulated the solar
Figure 1.18 SIM configuration, showing the location of the optical recorder.

TABLE 1.4 Shuttle Imaging Radar (SIR) System Parameters

Parameter             | SIR-A     | SIR-B      | SIR-C                          | X-SAR
Date                  | 1981      | 1984       | 1993, 1994                     | 1993, 1994
Altitude (km)         | 259       | 225        | 215                            | 215
Frequency Band (GHz)  | L(1.28)   | L(1.28)    | L(1.28), C(5.3)                | X(9.6)
Polarization          | HH        | HH         | HH, HV, VH, VV                 | VV
Incidence Angle (deg) | 50        | 15-60      | 15-60                          | 15-60
Antenna Size (m x m)  | 9.4 x 2.2 | 10.7 x 2.2 | 12.1 x 2.8 (L), 12.1 x 0.8 (C) | 12.1 x 0.4
Noise Equiv σ0 (dB)   | -25       | -35        | -50 (L), -40 (C)               | -26
Swath Width (km)      | 50        | 15-50      | 30-100                         | 10-45
Az/Rng Resolution (m) | 4.7/33    | 5.4/14.4   | 6.1/8.7                        | 6.1/8.7
For many years the surface of Venus remained hidden to planetary astronomers
due to the dense atmosphere surrounding the planet. In the late 1960s, the
NASA 64 m deep space tracking antenna, in conjunction with the 43 m Haystack
antenna in Massachusetts and 300 m Arecibo radar antenna in Puerto Rico,
produced the first detailed map of Venus using radar interferometry (Pettengill
et al., 1980). These images, along with the early scientific results from the 1967
Mariner 5 mission to Venus, led to the approval of the Pioneer mission (1978),
which carried a radar altimeter, and prompted the first design study in 1972
for a Venus Orbiting Imaging Radar (VOIR) system to generate a high resolution
map of the planet using SAR technology.
The VOIR went through many design phases before final approval by NASA.
These changes resulted from both a strong scientific contingent, which expressed
the need for high resolution maps to study the geologic history of the planet,
and the success of the Soviet Venera mapping missions, which demonstrated
the potential value of a high resolution planetary radar. In 1982, a modified
VOIR design was formally approved as the Venus Radar Mapper (VRM), after
which it was renamed Magellan (MGN). At first glance this system appears to
be a step backward in technology relative to the earlier Seasat and SIR systems,
but, considering the harsh space environment and the limited mass, power,
and downlink data rates, its performance is quite remarkable. The system
specifications in relation to the most recent Venera missions and the NASA
Pioneer radar altimeter are provided in Table 1.5. A number of novel concepts
were implemented in the Magellan system (Fig. 1.19a), such as burst-mode
imaging and block adaptive quantization (Johnson and Edgerton, 1985). The
primary radar mapping mission, 240 days in 1990-91, is designed to generate
a global map of Venus at approximately 150 m resolution. The signal processing
and image mosaicking are all performed digitally. One of the first Magellan
TABLE 1.5 Venus Radar Mapper Parameters

Parameter             | Pioneer/USA | Venera 15-16/USSR | Magellan/USA
Launch Date           | 1978        | -                 | 1990
Frequency Band (GHz)  | S(1.75)     | -                 | S(2.38)
Polarization          | Linear      | -                 | HH
Incidence Angle (deg) | 0.5         | -                 | 15-45
Antenna (m)           | 0.38 (dish) | -                 | 3.7 (dish)
Swath Width (km)      | Variable    | -                 | 20-25
Ra/Az Resolution (km) | 23/70       | 1.0/1.0           | 0.12/0.12
Planet Coverage (%)   | 92          | 25                | 95
Figure 1.19 The Magellan system: (a) Spacecraft configuration; (b) End-to-end data path.
TABLE 1.6 Mapper Modes

Mode                   | SAR         | RAD             | ALT      | SCAT
Frequency Band (GHz)   | Ku(13.8)    | Ku(13.8)        | Ku(13.8) | Ku(13.8)
Polarization           | Linear      | Linear          | Linear   | Linear
Incidence Angle (deg)  | 20-40       | 0               | 0        | 0
Az/Vert Resolution (m) | 300-600 (A) | 30000-60000 (A) | 30 (V)   | 7500 (A)
Range Bandwidth (MHz)  | 0.42, 0.85  | 100             | 4.3      | 0.1
Dynamic Range (dB)     | 9           | 92              | 21       | 21
operational airborne systems and Table 1.2 for the near-future spaceborne
systems currently under development. Perhaps most notable is the number of
SAR systems that will be in operation in the 1990s. The strong commitment
by the European Space Agency (ESA), as well as the National Space
Development Agency of Japan (NASDA) and the Canadian Space Agency,
bodes well for advancement in the scientific use of SAR data. Furthermore, the
increasing cooperation between agencies, as evidenced by the American,
German, and Italian cooperation on the SIR-C/X-SAR instrument package,
the increasing availability of the Soviet SAR data, and the planned worldwide
participation in the Earth Observing System (EOS) program, should lead to
rapid advancements in both SAR sensor and processor technology.
1.4
In the above example, the system trade-offs were relatively simple and the
science impact, in terms of swath width or calibration accuracy, is generally
well understood. However, trade-offs among other parameters, such as the
integrated sidelobe ratio (ISLR) or the quadratic phase error, are not so easily
interpreted in terms of their impact in limiting science applications. Similarly,
geophysical measurements, such as ice type classification or soil moisture
content, are difficult to translate into system specifications. This section is
intended to present some key applications for SAR data in conjunction with a
brief discussion of the scattering mechanisms, as an aid for the engineer to gain
some insight into the dependency of various geophysical measurements on the
radar system design.
1.4.1
The design of a SAR for remote sensing begins with scientific goals which are
used to define a quantitative set of scientific requirements. Generally these
requirements can be divided into those affecting the radar subsystem, the
processor subsystem, or the platform and downlink subsystems (including
mission design). A list of the key parameters is given in Table 1.8.

To translate scientific requirements into system specifications, some assumptions
must be made about the target characteristics. This necessitates some
a priori understanding of the interaction between the transmitted wave and the
target. Some of the parameters that characterize the received signal depend
weakly on the target characteristics, such as:

Doppler centroid or azimuth spectral characteristics
slant range or round trip propagation time
Figure 1.20 Mission design flowchart illustrating flow from science requirements to sensor and
platform specifications.
specify the system and then define the experiments that are feasible within its
performance constraints. The final design is the result of an iterative process
in which system trade-offs are made to optimize the performance for a specific
set of applications. A simple example of these trade-offs for a geologic mapping
application would be to consider wide swath as higher priority than system
dynamic range or radiometric calibration accuracy. Given that the system is
constrained by the downlink data rate, the quantization (bits per sample) could
be reduced to downlink more samples per interpulse period and thus obtain
the wider swath.
TABLE 1.8 Key System Parameters

Sensor Parameters
Radar Frequency or Wavelength
Antenna Polarization (Ellipticity and Orientation)
Range Bandwidth
Signal to Thermal Noise Ratio
Dynamic Range
Swath Width
Look Angle
this text and can be found elsewhere (Ulaby et al., 1982, 1986). Instead, it is
our intention to provide an overview of the scattering mechanisms as a
foundation from which we can discuss various applications of the SAR data.

If we assume for simplicity that the wave is propagating in a homogeneous,
isotropic, non-magnetic medium, then from Maxwell's equations we can write
an expression for the complex electric field vector as

E(z, t) = A exp[j(k′z - ωt + φ)]    (1.4.1)
Image Parameters
Range and Azimuth Resolution
Peak and Integrated Sidelobe Ratios
Effective Number of Looks (Speckle Noise)
Image Presentation (Radiometric and Geometric Format)
Calibration Accuracy (Radiometric, Geometric and Polarimetric)
These requirements are then translated into system specifications such as:

noise temperature or noise figure
antenna gain
amplitude/phase versus frequency/temperature performance
transmitter power

These specifications can be used to predict the response for a particular type
of surface, which directly reflects the surface scattering mechanisms. It is the
dependency of these parameters on the surface characteristics that must be
understood to develop models for extraction of the geophysical information
from the SAR data.
1.4.2
ω = 2πf_0 = 2πc/λ    (1.4.2)

where f_0 is the carrier frequency and λ is the wavelength. The wave propagates
in some direction z, with k′ related to the wavenumber k = 2π/λ by

k′ = k√ε_r    (1.4.3)

Here ε_r = ε/ε_0 is the permittivity of the medium relative to that of free space (ε_0).
The relative permeability, μ_r, is assumed to be unity, which is a good assumption
at microwave frequencies.
The polarization of the electric field refers to the direction of the amplitude
vector, A, at some instant in time. For a linearly polarized wave, the direction
of A is fixed (i.e., independent of time) relative to the propagation direction as
shown in Fig. 1.21a. For an elliptically polarized wave, the direction of A is a
function of time and effectively rotates about the axis of propagation. The easiest
way to conceptualize this is to consider the E field vector as consisting of signal
components oriented along the x axis and the y axis as shown in Fig. 1.21b.
Each component has the same frequency, but in general a different amplitude
and phase. The vector sum of these two field vectors is

E(z, t) = A_x exp[j(k′z - ωt + φ_1)] x̂ + A_y exp[j(k′z - ωt + φ_2)] ŷ    (1.4.4)
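The behavior described by Eqn (1.4.4) is easy to visualize numerically: the tip of the real field vector, traced over one cycle, stays on a line when the two components are in phase and on a circle when equal amplitudes are 90° out of phase. The helper `trace` below is an illustrative construction, not from the text:

```python
import numpy as np

def trace(Ax, Ay, phi1, phi2, n=360):
    """Tip of the real E-field vector over one cycle at z = 0, per Eqn (1.4.4)."""
    wt = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    Ex = Ax * np.cos(phi1 - wt)   # Re{A_x exp[j(phi_1 - wt)]}
    Ey = Ay * np.cos(phi2 - wt)
    return Ex, Ey

# equal amplitudes, in phase: linear polarization (direction of A fixed at 45 deg)
Ex, Ey = trace(1.0, 1.0, 0.0, 0.0)
print("linear  :", np.allclose(Ex, Ey))

# equal amplitudes, 90 deg phase difference: circular polarization
Ex, Ey = trace(1.0, 1.0, 0.0, np.pi/2)
print("circular:", np.allclose(Ex**2 + Ey**2, 1.0))
```

Unequal amplitudes or other phase offsets produce the general elliptical case.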
The wave propagates with velocity

v = c/√ε_r,   c = 1/(ε_0 μ_0)^(1/2)    (1.4.5)

where c is the speed of light and ε_0, μ_0 are the permittivity and permeability of
free space. Generally, the propagation speed through the atmosphere of an
EM wave in the 1-10 GHz range can be well approximated by c. At frequencies
above 10 GHz molecular absorption can significantly attenuate the signal, while
for frequencies below 1 GHz the ionosphere is dispersive, resulting in rotation
of the polarized wave, attenuation of the signal amplitude, and a reduced
propagation velocity. These effects are discussed in more detail in Chapter 7.

Figure 1.21 Electric field vector propagation: (a) Linearly polarized wave, and (b) Elliptically
polarized wave (after Purcell, 1981).

The interaction of the radiated EM wave with the surface is represented
pictorially in Fig. 1.22. The interaction of the wave and the surface is generally
referred to as scattering and is classified into either surface scattering or volume
scattering. Surface scattering is defined as scattering from the interface between
two dissimilar media, such as the atmosphere and the earth's surface, while
volume scattering results from particles within a non-homogeneous medium.

Figure 1.22 Interaction of the radiated EM wave with a rough surface of dielectric constant ε_r.

When the height variation of the surface (i.e., its deviation from a perfectly
smooth surface) is very small as compared to the radar wavelength, the scattering
mechanism is specular. In specular scattering the incident wave's reflection and
transmission through the surface are governed by Snell's law. Thus, given a
wave incident at an angle η, a portion of the energy will be reflected at an angle η
and a portion refracted at an angle η′, where

sin η′ = sin η / √ε_r    (1.4.7)

Subsurface Mapping

An example of this type of scattering was observed in the Libyan Desert region
of southwestern Egypt by the Shuttle Imaging Radar (SIR-A) instrument.
The climate in this region is hyperarid, resulting in a surface totally devoid
of vegetation. The subsurface composition is a homogeneous sand layer of
1-2 meters depth under which is a second layer of bedrock (Fig. 1.23).

Figure 1.23 Scattering geometry in the hyperarid desert: a sand layer (ε_r = 2.5, roughly 2 m
thick) overlying bedrock (ε_r = 8.0) containing buried channels.
~i~~,;hd:a,
~:~~;g~ht~~~~t.,ePd~~to~~b~e:~) l:;r~::~~~:~~:~v~:~;'~~~r~c~~~~t~~;h~!
the san ayer ts es tma
500
"r -
dielectric constant, e,
or '1
'
The resultant scattering from the subsurface produces a relatively strong signal, providing a detailed map of ancient natural drainage channels buried by thousands of centuries of shifting sand. This is illustrated in Fig. 1.24 by the SIR-A image in comparison with a Landsat scene of the same area. The Landsat visible wavelength detectors can only measure the
surface reflectance which is nearly featureless, while the SAR 's subsurface
imaging capability illustrates a detailed map of the bedrock layer. Such radar
sounding techniques are invaluable for scientists studying the geologic history
of the region, and may also prove useful for locating sources of water deep
below the surface.
Bragg Scattering. When the surface height variation exhibits a periodic structure, such that the component of the surface wavelength Λ along the radar look direction satisfies

Λ sin η = nλ/2,   n = 1, 2, 3, ...    (1.4.8)

where λ is the radar wavelength and η is the incidence angle,
a strong backscattered return will result. The dominant return will be for the wavelength where n = 1. At steep incidence angles, the scattering is generally a combination of Bragg and specular scattering. Even for a Bragg surface, the
return can be dominated by specular scattering, which is strongly dependent
on the distribution and extent of the local slope (Winebrenner and Hasselmann,
1988). A natural surface can be approximated by a series of small planar facets,
each tangential to the actual surface, upon which the small-scale roughness is
superimposed. The incident wave therefore has a scatter component that is due
to the local slope (i.e., from facet scattering), as well as a point scatterer
component dependent on the roughness (i.e., from Bragg or resonant reflection).
The resultant backscatter curve as a function of local incidence angle is a
combination of these two mechanisms as shown in Fig. 1.25.
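The Bragg condition of Eqn. (1.4.8) is easy to evaluate numerically. The following sketch (our illustration; the function name and the L-band example numbers are assumed, not from the text) returns the resonant surface wavelength for a given radar wavelength and incidence angle.

```python
import math

def bragg_surface_wavelength(radar_wavelength_m, incidence_deg, n=1):
    """Surface wavelength (m) that resonates with the radar, from the
    Bragg condition Lambda = n * lambda / (2 sin(eta)), n = 1, 2, 3, ...
    The n = 1 term dominates the backscattered return."""
    eta = math.radians(incidence_deg)
    return n * radar_wavelength_m / (2.0 * math.sin(eta))

# L-band (lambda = 0.235 m) at 23 deg incidence: the resonant surface
# wavelength is roughly 30 cm, i.e. short gravity waves on the ocean.
lam = bragg_surface_wavelength(0.235, 23.0)
```

Note that as the incidence angle steepens (η → 0), the resonant surface wavelength grows without bound, which is one way to see why specular rather than Bragg scattering dominates near vertical incidence.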
Figure 1.24 Images of desert region between Iraq and Saudi Arabia (scale: 20 km): (a) Landsat; (b) SIR-A. The SIR-A detail of drainage channels is from subsurface penetration.
Oceanography. Bragg models are most frequently used for describing scattering
from the sea surface. Due to the large dielectric constant of water, the scattering
mechanism is exclusively surface scattering. The resonance phenomenon on
which the Bragg model is based is well suited to the periodic structure of the
ocean waves. Ocean waves are detectable as periodic bands on SAR imagery,
due to the spatial variation of the short waves within the longer waves, as well
as to the orbital motion of the long waves themselves. However, due to the rms height variation limitation of the Bragg model (i.e., < λ/8), only the small capillary waves or short gravity waves exhibit Bragg resonance. The analysis
of SAR ocean wave imagery is typically performed in the spatial transform
domain where the Bragg resonance can be observed directly. Figure 1.26 shows
a set of ocean wave spectra for an area off the coast of Chile. The SIR-B
spectrum is shown after removal of the system transfer function and smoothing
(Monaldo, 1985). From the wave spectra, parameters such as the direction of
Figure 1.25 Backscatter curve for natural surfaces illustrating the two scattering mechanisms: facet scattering for steep incidence angles; Bragg scattering for shallow incidence angles.
the waves (with a 180° ambiguity), the wavelength (or wave number), and the
wave height can be directly measured. The Bragg resonance is strongest for
waves traveling in the radar look direction. As the azimuth component of the
wave motion increases, the backscattered energy is attenuated and nonlinear
corrections need to be applied for an accurate estimate of the geophysical
parameters (Alpers et al., 1981 ).
Information derived from the directional wave energy spectra can be directly
used for updating and validating ocean wave forecast models. These models
are key elements in predicting global climatology. The measurement of ocean
characteristics is a primary objective of future space orbiting SAR systems such
as the E-ERS-1 and SIR-C, both of which have implemented special modes for
ocean wave imaging. The SIR-C system will feature an onboard processor
experiment, developed by the Johns Hopkins University Applied Physics
Laboratory, to directly generate ocean wave spectra for near real-time analysis
of the ocean wave properties (MacArthur and Oden, 1987). The E-ERS-1 system
also features a special wave mode of operation in which the SAR acquires only
small patches of data (5 km × 5 km) spaced at regular intervals (250 km)
across the oceans. These patches will be ground-processed to produce wave
spectra images for distribution to the science community (Cordey et al., 1988).
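The spatial-transform analysis described above can be sketched in a few lines: the squared magnitude of a 2-D FFT of the image intensity gives a wave-number spectrum in which a wave system appears as a symmetric pair of peaks (the 180° ambiguity). This is our simplified illustration on a synthetic scene; a real processor would also remove the system transfer function and smooth the spectrum, as noted in the text.

```python
import numpy as np

def wave_spectrum(image):
    """Directional wave-number spectrum estimate: squared magnitude of
    the 2-D spatial FFT of a SAR intensity image (mean removed)."""
    img = image - image.mean()
    return np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

# Synthetic 128 x 128 scene containing one long-wave component with
# 8 cycles across the scene; its energy appears as a symmetric pair
# of peaks about the spectrum origin.
x = np.arange(128)
scene = 1.0 + 0.5 * np.sin(2 * np.pi * 8 * x / 128)
spec = wave_spectrum(np.tile(scene, (128, 1)))
```

From such a spectrum the wave direction (modulo 180°) and wave number can be read directly from the peak locations, as the text describes for the SIR-B Chile data.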
volcanic crater is clearly visible in the right center portion of the frame. Due
to acidification from volcanic fumes, there is very little vegetation in this region.
Two main types of lava flows are easily distinguished. The aa flows, which are
rough, appear brightest in the scene, while the smoother pahoehoe flows
comprise the darker regions. Additionally, as a result of the smoothing effect
from weathering, the change in radar brightness as a function of incidence angle
can be used to identify the relative age of the two lava types. This is especially
prominent in the Kau desert region where the contrast between the lava types
is more distinct at η = 48° than at η = 28°.
1.4.4 Volume Scattering
Target areas that can be characterized as Bragg scatterers are essentially special
examples of the general scattering problem, which is significantly more complex.
Most natural surfaces are generally of an inhomogeneous composition, and at
some wavelengths or under some conditions they are penetrated by the EM
wave. Thus, scattering from natural terrain is generally a combination of surface
scattering and volume scattering. Volume scattering results from dielectric
discontinuities within the media. Assuming the spatial locations and orientations
of these discontinuities are random, the incident wave scattering will be
omnidirectional. Thus, the portion of the incident wave scattered back toward
the radar will depend on the relative dielectric constant between the two types
of media in the inhomogeneous layer, as well as on the geometric shape, density,
and orientation of the imbedded inhomogeneities. Volume scattering is modeled
using either EM wave theory or principles of radiative transfer (Fung,
1982). The wave approach uses Maxwell's equations, and some restrictive
approximations on the type of scattering, to derive an expression for the scattered
signal. The radiative transfer approach is based on the average power or intensity,
and generally ignores diffraction effects. A detailed treatment of the various
models and their applications is given in Ulaby et al. ( 1986).
A useful quantity for characterizing the scattering within a medium is the
penetration depth. Given a wave incident on a surface, the depth at which the refracted portion of the wave is attenuated to 1/e of its value at the layer boundary is given by (Ulaby, 1982)

δp = λ √ε′ / (2π ε″)    (1.4.9)

where the relative dielectric constant, a complex number εr = ε′ + jε″, must satisfy ε″/ε′ < 0.1 for Eqn. (1.4.9) to be valid. In calculating the penetration depth from Eqn. (1.4.9) the scattering within the medium is assumed to be negligible.
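Eqn. (1.4.9) is straightforward to evaluate. The sketch below (ours; the dielectric values in the example are assumed for illustration, not taken from the text) also enforces the low-loss validity condition.

```python
import math

def penetration_depth(wavelength_m, eps_real, eps_imag):
    """Penetration depth from Eqn. (1.4.9):
    delta_p = lambda * sqrt(eps') / (2 * pi * eps''),
    valid only for low-loss media (eps''/eps' < 0.1), with scattering
    inside the medium assumed negligible."""
    if eps_imag / eps_real >= 0.1:
        raise ValueError("Eqn. (1.4.9) requires eps''/eps' < 0.1")
    return wavelength_m * math.sqrt(eps_real) / (2.0 * math.pi * eps_imag)

# Illustrative low-loss case: eps_r = 3.0 + j0.01 at L-band
# (lambda = 0.235 m); the depth comes out at several meters.
dp = penetration_depth(0.235, 3.0, 0.01)
```

Since λ appears linearly in the numerator, halving the wavelength (doubling the frequency) halves the penetration depth, which is the inverse-frequency behavior seen in Fig. 1.32.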
Vegetation. In volume scattering, just as in surface scattering, the wavelength
Figure 1.28 Size distribution of canopy scatterers (leaves, twigs, branches, trunks), spanning roughly 1 mm to 1 m, compared with the radar wavelength.
Polar Ice
A second example of volume scattering is in the imaging of polar sea ice. Polar
ice has imbedded in it a mixture of salt, brine pockets, and air bubbles. It is a
characteristic inhomogeneous medium with a relatively low dielectric constant
of about εr = 3. (Some ice exhibits an εr lower than that of sand in the hyperarid Saharan desert.)

Figure 1.31 Scattering mechanisms for various ice types: multiyear (low-salinity, low-loss; surface and volume scattering), first year (high-salinity, high-loss; surface scattering), and open water (surface scattering) (courtesy of W. Weeks).

Depending on ice type, which is usually correlated with age,
the characteristic scattering can change dramatically. Generally, at steep incidence angles surface scattering will dominate the return signal, while at more shallow incidence angles volume scattering effects become prominent, depending on the radar wavelength and the ice type. These scattering mechanisms are illustrated in Fig. 1.31 (Carver et al., 1987). For open water the mechanism is exclusively surface scattering with a large specular scatter component depending on the surface roughness (i.e., on wind speed). The first year ice is also predominantly surface scattering, due to the large εr resulting from the high salinity content. However, the surface scatter is more diffuse as a result of the ice ridges and rubble fields. The multiyear ice exhibits both volume and surface scattering due to the low dielectric constant resulting from its characteristic low salinity.
The relationship between penetration depth, δp, and the radar frequency for each of these ice type (age) categories is shown in Fig. 1.32. As expected from Eqn. (1.4.9), the penetration depth is inversely proportional to radar frequency.
Figure 1.32 Penetration depth versus frequency (GHz) in pure ice, first-year sea ice, and multiyear sea ice at T = −10°C (Ulaby et al., 1982).
However, despite the fact that the real dielectric component ε′ of the multiyear ice is smaller than that of the first year, the imaginary component ε″ typically offsets this factor, resulting in a deeper penetration depth for multiyear ice. The value of ε″ is dependent on a number of factors, such as the ice density, temperature, and salinity. Thus, depending on the environmental conditions from the point of formation of the ice until it is observed, ε″ can assume a wide range of values. Since ε″ decreases with decreasing temperature, the penetration depth could vary widely with a diurnal cycle period during the summer months.
Figure 1.33 illustrates dramatically the wavelength dependence of scattering
from multiyear ice. This P-, L- and C-band total power (i.e., all polarizations) three-frequency image set was acquired by the NASA/JPL airborne SAR in
March, 1988, over the Beaufort Sea. In the C-band image, the bright regions
correspond to multiyear floes, the darker regions to first year floes. There is no
open water in this scene. As the wavelength is increased, the distinction between
the multiyear and first year ice diminishes and the ice ridges are predominantly
highlighted. This is a result of the increased penetration in the multiyear ice at
longer wavelengths attenuating the backscatter, coupled with a highlighting of
the ridges at longer wavelengths from surface scattering. (The ridge size
approximates the wavelength at P-band.) Scattering from sea ice is another
example where the statistics of the relative phase across polarizations can be
used to produce a more detailed description of the properties of the ice
(Nghiem et al., 1990).
The use of SAR for monitoring the characteristics of sea ice in the polar
region has widespread scientific and commercial application. The environmental
impact of CO2-induced atmospheric warming could be most severe in the arctic
region, causing wider swings in the freezing and thawing mechanisms that
establish the ice extent, ice concentration, and the physical characteristics of
the ice formations. Changes in the extent of the polar ice cap are correlated
with climatology, since the growth of sea ice is a primary mechanism for removal
of CO2 from the atmosphere. In addition to the scientific utilization of sea ice
imagery, there are a number of commercial applications of the SAR data.
Airborne SAR imagery has been used operationally for monitoring the
movement of sea ice in the polar region. Ice kinematic maps are useful for
fishing and shipping industries, which require knowledge of the relative
movement and position of the ice for navigation. Additionally, monitoring both
the size and velocity of the multiyear floes is important for the oil industry in
establishing the location of temporary drilling rigs. The demand for this type
of data is sufficiently large that currently a commercial organization, Intera, is
flying an airborne SAR in the arctic region to provide ice floe and ice extent
maps to a number of corporations and government agencies (Mercer, 1989).
Soil Moisture. Another key application of SAR data is in measuring the moisture content of the soil. As might be expected, the volumetric content of water in the soil is directly related to the dielectric constant, as shown in Fig. 1.34a (Ulaby et al., 1982). As water is added to the soil, the real part of the relative dielectric constant increases slowly as most of the water molecules bind to the soil. However, as the fractional moisture content increases past 15%, the number of free water molecules increases rapidly, allowing molecular alignment similar to that of free water. For saturated soil, ε′ approaches that of liquid water (ε′ = 80 at λ < 50 cm) as shown in Fig. 1.34b. Similarly, the imaginary part increases with increasing moisture fraction, although at a slower
rate. The net effect (at a given wavelength) is that the penetration depth decreases
as the soil moisture increases. The reduction in signal penetration with higher
free water content effectively increases the backscattered energy. Thus, the signal
received by the radar results predominantly from surface scattering of the
non-refracted portion of the incident wave. The backscatter coefficient is therefore dependent on both the surface roughness and the soil moisture content. Examples for smooth and rough surfaces at C-band are given in Fig. 1.35. As an example of the radar sensitivity to soil moisture at L-band, a Seasat image of an agricultural region in Iowa is shown in Fig. 1.36. The bright area is the region where recent rainfall has increased the moisture content of the soil.

Figure 1.34 (a) Relative dielectric constant of soil at L-band versus volumetric moisture content; (b) relative dielectric constant of liquid water versus wavelength (cm).

An accurate estimate of the water content in the soil is a critical parameter for modeling the global hydrologic cycle. These models in turn are used to
Figure 1.35 Dependence of backscatter coefficient on incidence angle and soil moisture (i.e., volumetric water content, g/cm³, with curves spanning mv = 0.05 to 0.38 g/cm³) for C-band (λ = 7 cm), HH polarization at: (a) σrms = 1.1 cm; and (b) σrms = 4.1 cm (Ulaby et al., 1986).
Figure 1.36 Seasat L-band image of an agricultural region in Iowa; the bright area corresponds to increased soil moisture from recent rainfall.
1.5 SUMMARY
In this chapter, we have introduced the synthetic aperture radar in terms of its
use as a remote sensing instrument. Emphasis was placed on the potential
application of the SAR data, in conjunction with other remote sensing systems,
to obtain measurements from different portions of the electromagnetic spectrum.
The synergism resulting from combining multisensor data sets from simultaneous
observations is key to our understanding of the earth's processes. The SAR
contributes uniquely to this global database in that it measures both the electrical
and structural characteristics of the surface. Furthermore, this instrument can
generate large-scale, high resolution maps of these surface characteristics
independent of cloud cover, sun angle, and sensor altitude.
In the 40 years since the discovery of synthetic aperture radar a number of
technical challenges, in both the sensor and processor subsystems, have been
met and overcome. It appears that we are now positioned to embark on a new
era in SAR remote sensing, with no fewer than six spaceborne systems planned
for the 1990s. This wealth of data, in conjunction with the diverse set of
applications, should attract a broad scientific community toward the geophysical
interpretation of SAR data products.
In recognition of this widespread interest among both novice and experienced
radar data analysts, this text is structured to provide an in-depth understanding
of both the characteristics of the data and the algorithms used to generate the
image products. We have put special emphasis on addressing the errors inherent
in real systems, and the techniques required to produce radiometrically and
geometrically calibrated images. It is our goal to describe in detail both the
techniques and technologies required for the design and implementation of the
SAR signal processor, since it is in this part of the data system that the calibrated,
registered, multifrequency, multipolarization image products are generated.
We recognize that it is only after these calibrated data products are presented
to the science community that the real work begins. Remote sensing provides
the key to our understanding of the impact of our lifestyle and the industrialized
society on the environment. We are just now beginning to recognize the extent
of the problem, and we expect that the synthetic aperture radar measurements,
in conjunction with other remote sensors, will be instrumental in monitoring
the effects on our changing environment.
REFERENCES AND FURTHER READING
Blom, R. G., R. E. Crippen and C. Elachi (1984). "Detection of Subsurface Features in Seasat Radar Images of Means Valley, Mojave Desert, California," Geology, 12, pp. 346-349.
Colwell, R. N. (1983b). Manual of Remote Sensing, Volume II: Interpretation and Applications (Estes, J. E., ed.), American Society of Photogrammetry, Falls Church, VA.
Cutrona, L. J., E. N. Leith, C. J. Palermo and L. J. Porcello (1960). "Optical Data Processing and Filtering Systems," IRE Trans. Information Theory, IT-6, pp. 386-400.
Elachi, C. (1987). Introduction to the Physics and Techniques of Remote Sensing, Wiley, New York.
Elachi, C. (1988). Spaceborne Radar Remote Sensing: Applications and Techniques, IEEE Press, New York.
Elachi, C., T. Bicknell, R. L. Jordan and C. Wu (1982). "Spaceborne Synthetic-Aperture Imaging Radars: Applications, Techniques and Technology," Proc. IEEE, 70, pp. 1174-1209.
Elachi, C., J. B. Cimino and M. Settle (1986). "Overview of the Shuttle Imaging Radar-B Preliminary Science Results," Science, 232, pp. 1511-1516.
Elachi, C., L. E. Roth and G. G. Schaber (1984). "Spaceborne Radar Subsurface Imaging in Hyperarid Regions," IEEE Trans. Geosci. and Remote Sens., GE-22, pp. 383-388.
Elachi, C., E. Im, L. Roth and C. Werner (1991). "Cassini Titan Radar Mapper," Proc.
IEEE (in press).
Ford, J. P., J. B. Cimino, B. Holt and M. R. Ruzek ( 1986). "Shuttle Imaging Radar
Views the Earth from Challenger: The SIR-B Experiment," JPL Pub. 86-10, Jet
Propulsion Laboratory, Pasadena, CA.
Ford, J. P., R. G. Blom, M. L. Bryan, M. I. Daily, T. H. Dixon, C. Elachi and E. C. Xenos (1980). "Seasat Views North America, the Caribbean and Western Europe with Imaging Radar," JPL Pub. 80-67, Jet Propulsion Laboratory, Pasadena, CA.
Freden, S. C. and F. Gordon, Jr. ( 1983). Landsat Satellites, Manual of Remote Sensing
(Simonett, D . and F. Ulaby, eds.), Chapter 12, Vol. I, Am. Society of Photogrammetry,
Sheridan Press, Falls Church, VA.
Fu, L.-L. and B. Holt (1982). "Seasat Views Oceans and Sea Ice with Synthetic Aperture Radar," JPL Pub. 81-120, Jet Propulsion Laboratory, Pasadena, CA.
Fung, A. K. ( 1982). " A Review of Volume Scattering Theories for Modeling Applications,"
Radio Science, 17, pp. 1007- 1017.
Goddard Space Flight Center ( 1989). Earth Observing System, Reference Handbook,
NASA GSFC, Greenbelt, Maryland.
Holahan, J. ( 1963). " Synthetic Aperture Radar," Space/ Aeronautics, 40, pp. 88-93.
Hulsmeyer, C. (1904). "Hertzian-Wave Projecting and Receiving Apparatus Adapted to Indicate or Give Warning of the Presence of a Metallic Body such as a Ship or a Train," British Patent 13,170.
Hunten, D. M., M. G. Tomasko, F. M. Flasar, R. E. Samuelson, D. Strobel and D. J. Stevenson (1984). Titan, in Saturn (T. Gehrels and M. S. Mathews, eds.), University of Arizona Press, Tucson, pp. 671-759.
Im, E., C. Werner and L. Roth (1989). "Titan Radar Mapper for the Cassini Mission," 21st Lunar and Planetary Science Conf., Johnson Space Center, Houston, TX, pp. 544-545.
Jensen, H., L. C. Graham, L. J. Porcello and E. N. Leith (1977). "Side-Looking Airborne Radar," Scientific American, 237, pp. 84-95.
Johnson, W. T. K. and A. T. Edgerton (1985). "Venus Radar Mapper (VRM): Multimode Radar System Design," SPIE, 589, pp. 158-164.
Jordan, R. (1980). "The Seasat-A Synthetic Aperture Radar System," IEEE J. of Oceanic Eng., OE-5, pp. 154-164.
Kahle, A. and A. Goetz (1983). "Mineralogical Information from a New Airborne Thermal Infrared Multispectral Scanner," Science, 222, pp. 24-27.
Kahle, A. B., J. P. Schieldge, M. J. Abrams, R. E. Alley and C. J. LeVine (1981). "Geological Application of HCMM Data," JPL Pub. 81-55, Jet Propulsion Laboratory, Pasadena, CA.
Kirk, J. C., Jr. (1975). "Digital Synthetic Aperture Radar Technology," IEEE International Radar Conference Record, pp. 482-487.
Li, F. and R. Goldstein (1989). "Studies of Multi-baseline Spaceborne Interferometric Synthetic Aperture Radars," IEEE Trans. Geosci. and Remote Sens., GE-28, pp. 88-97.
MacArthur, J. L. and S. F. Oden (1987). "Real-Time Global Ocean Wave Spectra from SIR-C: System Design," IGARSS '87 Digest, Vol. II, Ann Arbor, MI, pp. 1105-1108.
MacDonald, H. C. (1969). "Geologic Evaluation of Radar Imagery from Darien Province, Panama," Modern Geology, 1, pp. 1-63.
Mercer, J. B. (1989). "A New Airborne SAR for Ice Reconnaissance Operations," Proc. IGARSS '89, Vancouver, BC, p. 2192.
Monaldo, F. M. (1985). "Measurements of Directional Wave Spectra by the Shuttle Synthetic Aperture Radar," Johns Hopkins APL Tech. Digest, 6, pp. 354-360.
National Aeronautics and Space Administration Advisory Council (1988). Earth System Science: A Program for Global Change, NASA, Washington, DC.
Nghiem, S. V., J. A. Kong and R. T. Shin (1990). "Study of Polarimetric Response of Sea Ice with a Layered Random Medium Model," Proc. IGARSS '90, Washington, DC, pp. 1875-1878.
Pettengill, G. H., D. B. Campbell and H. Masursky (1980). "The Surface of Venus," Scientific American, 243, pp. 54-65.
Porcello, L. J., R. L. Jordan, J. S. Zelenka, G. F. Adams, R. J. Phillips, W. E. Brown, Jr., S. W. Ward and P. L. Jackson (1974). "The Apollo Lunar Sounder Radar System," Proc. IEEE, 62, pp. 769-783.
Purcell, E. M. (1981). Electricity and Magnetism, Berkeley Physics Course, Vol. 2, 2nd Ed., McGraw-Hill, New York.
Rawson, R. and F. Smith (1974). "Four Channel Simultaneous X-L Band Imaging SAR Polarimetry Radar."
Viksne, A., T. C. Liston and C. D. Sapp (1969). "SLR Reconnaissance of Panama," Geophysics, 34, pp. 54-64.
Watson-Watt, R. (1957). Three Steps to Victory, Odhams Press, London.
Wiley, C. A. (1965). "Pulsed Doppler Radar Methods and Apparatus," United States Patent 3,196,436.

CHAPTER 2
THE RADAR EQUATION
In Section 1.2, we have given a heuristic discussion of the way in which a SAR
achieves higher resolution along track than does a real a perture radar (RAR).
In Section l .4 we indicated many of the links between geophysical parameters
of interest for remote sensing and the corresponding radar signals. In the
remainder of the book, we want to make more precise these matters of SAR
operation and SAR image formation, and their effects on the ability to accurately
determine geophysical information from SAR images.
Since a SAR is a particular kind of RAR, one which maintains precise time
relationships between transmitter and receiver (a " coherent" RAR), with the
"SAR" qualities added in the signal processing, in order to understand SAR it
is necessary to have an understanding of RAR. In this chapter, we develop
carefully the basic mathematical model of a RAR system, the radar equation.
Radar technology, and in particular RAR, has been under continuous active
development for well over a half century. Skolnik ( 1985) gives an account of
the history of the early days of radar, while in Section l .3 we have traced the
historical development of SAR. The state of the art as of about 1950 required 28 volumes to codify (Ridenour, the "Rad. Lab. Series"). Even to survey in overview the main aspects of the technology requires a book, for example that by Skolnik (1980) or by Barton (1988), while a more detailed review (Skolnik, 1970) runs to over 1500 pages. Therefore in our discussions of RAR we will necessarily be selective in choosing topics. Within that framework, however, we will relate the main ideas of RAR systems to basic physical concepts.
2.1
The traditional purpose of radar is to detect the presence of "hard" targets, such as aircraft, and to localize to some extent their positions. The radar transmitter (Fig. 2.1) generates a brief (microseconds) high power burst of radio frequency electromagnetic energy. (The more powerful the better: a few megawatts is not unusual for a ground based radar.) This is conveyed to an antenna through appropriate microwave "plumbing". At the high frequencies of radar (0.1-10 GHz, typically), an antenna structure of reasonable physical size acts to confine the radiated energy to a narrow fan or cone in space, thereby providing localization in one or two spatial dimensions, respectively.

Having launched the pulse, the transmitter turns off and the receiver "listens" for any echos of the pulse returned from the sector of the sky into which the pulse was launched. Any perceived echo has its time of reception noted, relative to the time of transmission of the pulse. This time delay τ is interpreted in terms of range to target, R = cτ/2, providing another spatial dimension for localization. The power of the received echo relative to that of the transmitted pulse scales in free space as 1/R⁴. Megawatts quickly turn into microwatts at ranges of interest, requiring sensitive receiver circuits, so sensitive, in fact, that the noise internally generated in the receiver must be reckoned with. The radar equation expresses this conversion of transmitted power into received power, in terms of the ratio of received power due to a target reflection to received power due to noise, together with some system and target parameters.

The earliest radar receivers used a simple "A-scan" presentation, with the receiver output power presented as a function of time (range) during the listening time after transmission of one pulse and before transmission of the next (Fig. 2.2). The "grass" along the baseline is due to random thermal noise, either internal to the receiver or entering along with the signal from the environment, while target echos show up as "bumps" above the grassy baseline. The radar
Figure 2.1 Basic pulsed radar: transmitter and receiver sharing an antenna, with a target at range R.

Figure 2.2 A-scan display: receiver output power versus time (range), with noise "grass" along the baseline and target echos as "bumps".

equation in its simplest form equates a required detection SNR to the SNR delivered by the system for a target at range R.
Here SNR0 is the SNR which has been specified as required for reliable operation.
That is equated to the SNR provided by the system for a target at range R, on
the right. The equation may then be solved for any one of its parameters (often
range) in terms of the others to determine operational capability or a system
requirement.
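As an illustration of solving the radar equation for one of its parameters, the sketch below uses the standard point-target form SNR = PtG²λ²σ/((4π)³R⁴kTFBL). This is a generic textbook form, not necessarily the exact statement used in this chapter (which appears later as Eqn. (2.7.1)), and all numbers in the example are assumed for illustration.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def snr(pt, gain, wavelength, sigma, rng, bandwidth,
        noise_temp=290.0, noise_figure=1.0, losses=1.0):
    """Point-target radar equation:
    SNR = Pt G^2 lambda^2 sigma / ((4 pi)^3 R^4 k T F B L)."""
    num = pt * gain**2 * wavelength**2 * sigma
    den = ((4 * math.pi)**3 * rng**4 * K_BOLTZMANN * noise_temp
           * noise_figure * bandwidth * losses)
    return num / den

def max_range(snr0, pt, gain, wavelength, sigma, bandwidth, **kw):
    """Solve the same equation for range, given a required SNR0,
    using the 1/R^4 dependence of the received power."""
    snr_at_1m = snr(pt, gain, wavelength, sigma, 1.0, bandwidth, **kw)
    return (snr_at_1m / snr0) ** 0.25

# A 1 MW, 35 dB gain radar at lambda = 10 cm against a 1 m^2 target,
# 1 MHz bandwidth, required SNR0 = 10 (illustrative numbers only).
r = max_range(snr0=10.0, pt=1e6, gain=10**3.5, wavelength=0.1,
              sigma=1.0, bandwidth=1e6)
```

The same function can of course be inverted for any other parameter (transmit power, antenna gain, cross section) once the rest are fixed, which is exactly the design use described above.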
Figure 2.3 Radar system model: the transmitted power illuminates a target of cross section σ at range R; the received intensity I(R) enters an amplifier of bandwidth B and noise figure F, whose output is presented to the decision point.

The value of σ is in fact specified such that this last expression yields the correct answer for the received power.
This notional noiseless receiver will be assumed here to have a gain relative
to average power which is constant over the frequency band of the signal. This
gain is applied uniformly to both signal input and noise input. Thus the SNR
in the signal band is the same at the output as it is at the input. Therefore, the
SNR; can be equated to the SNR0 which is required at the detection point to
yield the radar equation, Eqn. (2.1.1).
In Chapter 3, we will discuss an important modification of the radar equation, Eqn. (2.1.1), that resulting from use of a matched filter in the receiver. Such a filter makes use of the detailed structure of the signal input and the noise statistics to maximize the instantaneous signal to noise power ratio at a particular time of interest. The procedure generalizes the idea implicit in Eqn. (2.1.1), that the receiver has uniform response over a bandwidth B appropriate to that occupied by the signal. In the remainder of this chapter, however, we will work through the factors of the radar equation, Eqn. (2.1.1), in some detail, both as a tutorial mechanism for introducing necessary radar background, and in order to point out carefully the assumptions involved in their use. A more precise statement of Eqn. (2.1.1) results as Eqn. (2.7.1).
2.2
We need first to characterize the extent to which the antenna of the radar system concentrates the power delivered to it by the transmitter into a beam aimed in the target direction. That is expressed in the radar equation, Eqn. (2.1.1), by the antenna gain Gt.
During the time of a radar pulse, while the transmitter is on, suppose that the average power flowing into the antenna input port is Pt (Fig. 2.3). (This is often called the peak power of the radar, to distinguish it from the true average power, which takes into account the transmitter "off" time as well. The ratio of "on" time to total time is the duty cycle.) If all of this power were radiated into space by the antenna, and if the antenna radiated uniformly in all directions (isotropic radiation), then at range R from the antenna the power density (intensity) of the electromagnetic wave would be

I₀(R) = Pt / (4πR²)    (2.2.1)

However, some power is lost by dissipation in the antenna itself. Also, by design, the antenna does not radiate isotropically. Rather, the intensity at some space point with polar coordinates (R, θ, φ) relative to the antenna is some value

I(R, θ, φ) = Gt(θ, φ) I₀(R)

where the gain function Gt(θ, φ) has values both greater and less than unity. Usually, the gain Gt(θ, φ) is maximum at θ = 0, φ = 0, the direction of the radar beam. The parameter Gt in the radar equation is the maximum (on-axis) value of Gt(θ, φ).
The gain function Gt(θ, φ) can be interpreted as the power P(θ, φ) per unit solid angle Ω radiated by the antenna in direction (θ, φ) (the radiation pattern (Ulaby et al., 1981, p. 97)), relative to the power per unit solid angle Pt/4π which would be radiated in that direction by a lossless isotropic antenna. This follows by relating intensity I, power P, and solid angle Ω through

I(R, θ, φ) dA = P(θ, φ) dΩ = P(θ, φ) dA/R²    (2.2.2)

so that

Gt(θ, φ) = P(θ, φ)/(Pt/4π) = (4πR²/Pt) I(R, θ, φ)    (2.2.3)

Only a fraction Prad = ρe Pt of the power delivered to the antenna is radiated into space, where ρe < 1 is the antenna radiation efficiency. The correspondingly scaled function (the directivity)

D(θ, φ) = (4π/Prad) P(θ, φ) = Gt(θ, φ)/ρe    (2.2.4)

satisfies

∫_sphere D(θ, φ) dΩ = 4π    (2.2.5)

From Eqn. (2.2.5) it is clear that the directivity function trades power increase in one solid angle sector for decrease in another.
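The normalization of Eqn. (2.2.5) can be checked numerically for any candidate directivity pattern. In the sketch below (ours; the short-dipole-like pattern D = 1.5 sin²θ, with θ measured from the pattern's axis of symmetry, is just an example) the integral over the sphere comes out to 4π even though the pattern is directive.

```python
import math

def directivity_integral(d_func, n_theta=400, n_phi=400):
    """Midpoint-rule integration of a directivity function D(theta, phi)
    over the full sphere; by Eqn. (2.2.5) the result should be 4*pi."""
    total = 0.0
    dt = math.pi / n_theta
    dp = 2.0 * math.pi / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            # dOmega = sin(theta) dtheta dphi
            total += d_func(theta, phi) * math.sin(theta) * dt * dp
    return total

# Short-dipole-like pattern: peak directivity 1.5, yet the sphere
# integral is still 4*pi, power gained broadside is lost along the axis.
val = directivity_integral(lambda t, p: 1.5 * math.sin(t) ** 2)
```

The lossless isotropic case D = 1 integrates to the same 4π, which is the sense in which any directive pattern only redistributes, and never creates, radiated power.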
The particular form of the gain function G_t(θ, φ) depends on the spatial
distribution of current imposed by the transmitter on the antenna structure.
We will summarize the development of that relationship. However, the design
of a physical antenna to accomplish the current distribution corresponding to
a desired gain pattern is a separate problem with which we will not deal.
One of the most complete discussions is that of Silver (1949), who proceeds
using the basic equations of electromagnetic theory to relate antenna currents
and gain. The relations of main interest are summarized by Stutzman and Thiele
(1981). Sherman (1970) has given a comprehensive summary of the engineering
results.
The central result for calculation, developed from Maxwell's equations, is
the Huygens diffraction integral. Let us assume that the antenna excitation is
sinusoidal, with radian frequency ω, and that the wavelength λ = 2πc/ω of the
electric field is much less than the physical extent of the antenna. For simplicity,
and in accord with the usual practice in SAR, we will assume that the field
impressed on the antenna by the transmitter is linearly polarized (i.e., the electric
field vector has a constant direction). The field radiated by the antenna can
then be expressed by a scalar diffraction integral.

We can write the one dimensional electric field vector as

E(R, t) = E(R, t) x̂

for example, where x̂ is the unit vector along the x coordinate in space and we
assume linear polarization in that direction. Using phasor analysis, for the scalar
coordinate of this field we write

E(R, t) = Re{√2 E(R) exp(jωt)}

where E(R) is the corresponding complex rms electric field phasor. This can
be written (Silver, 1949, p. 170)
E(R) = (1/4π) ∫_{A_a} E(x', y') (e^{−jkr}/r) × [(jk + 1/r) cos θ + jk (ŝ·ẑ)] dx' dy'    (2.2.6)
where the geometric terms are defined in Fig. 2.4 and k = 2π/λ is the carrier
wave number. The quantities ẑ, r̂ are unit vectors along the corresponding rays
in Fig. 2.4, and ŝ is the unit vector along the spatial gradient of the phase of
the electric field E(x, y) induced by the antenna currents across the aperture.
The direction of the electric field vector at position R is the same as its direction
on the surface A, the "aperture", since we assume free space propagation.

This integral Eqn. (2.2.6) expresses the field at an arbitrary space point R
in terms of its values over a planar surface A in the vicinity of the physical
antenna. The purpose of the antenna is to force some specified field distribution
E(x, y) to exist over this surface. With the usual linear phase variation of field
across the aperture, ŝ is constant and indicates the direction of the antenna
radiated power beam. If the field across the aperture further has constant phase,
then ŝ = ẑ. The antenna design problem, which we will not discuss, is to
determine from the desired spatial distribution E(R) of radiation what should
be the aperture field distribution E(x, y), and then to determine what physical
structure will produce that aperture distribution.

In working with the (scalar) diffraction integral Eqn. (2.2.6), it is convenient
to make various levels of approximation, corresponding to increasing distance
of the field point R of interest from the aperture surface. For the closest points
(the near field region), no approximations are reasonable, and the integral is
taken as it stands. (Silver (1949) remarks that, even in the case of no
approximations, within a few wavelengths of the aperture the approximations
leading to Eqn. (2.2.6) do not hold very well. The equation is not useful as it
stands for quantitative work very near the antenna.)

Moving further than a few wavelengths from the aperture, we enter the
Fresnel region. Here it is assumed that r ≫ λ, so that 1/r ≪ k. Further, in the
magnitude terms of the diffraction integral Eqn. (2.2.6) we assume that r ≈ R
(Fig. 2.4). In the more critical phase terms, we use the expansion (keeping
through second order terms in R'/R):

r² = |R − R'|² = R² + R'² − 2R·R'
r ≈ R − R̂·R' + [R'² − (R̂·R')²]/2R    (2.2.7)

With these approximations, Eqn. (2.2.6) becomes the Fresnel diffraction integral

E(R) = (j/2λ)(e^{−jkR}/R) ∫_{A_a} E(x', y') exp[−jk(r − R)](cos θ + ẑ·ŝ) dx' dy'    (2.2.8)

in which Eqn. (2.2.7) is to be used to approximate the quantity r − R.

Finally, in the case of interest for us, the quadratic terms in R'/R are discarded
in the expansion Eqn. (2.2.7). We then enter the Fraunhofer region of diffraction
(the far field), for which case

E(R) = (j/2λ)(e^{−jkR}/R)(cos θ + ẑ·ŝ) ∫_{A_a} E(x', y') exp[jk sin θ (x' cos φ + y' sin φ)] dx' dy'    *(2.2.9)

where we further assume the usual case of constant aperture phase gradient,
so that ŝ is constant. This expression Eqn. (2.2.9) shows that, at least for
cos θ ≈ 1, in the far field the antenna radiation pattern in space is determined
by the two-dimensional Fourier transform of the electric field distribution over
the aperture.

As the usual criterion for discarding the quadratic terms in Eqn. (2.2.7), it is
required that the phase error thereby incurred in the integrand of Eqn. (2.2.8)
at the boundary of the aperture integration region be at most π/8 radian.
That is to say, the far field region is defined by

(k/2R)[R'² − (R̂·R')²]_max ≤ π/8    *(2.2.10)
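For a maximum transverse aperture extent of D/2 from the center, the criterion Eqn. (2.2.10) rearranges to the familiar far-field distance R ≥ 2D²/λ. A minimal numeric sketch, using illustrative L-band values (the wavelength and antenna dimension are assumptions, not from the text):

```python
import math

# Far-field distance implied by the π/8 phase-error criterion of Eqn. (2.2.10).
# With maximum transverse extent D/2 from the aperture center,
# (k/2R)(D/2)² ≤ π/8 rearranges to R ≥ 2D²/λ.  Illustrative values only.
wavelength = 0.235        # λ in m (L-band, assumed)
D = 10.7                  # largest aperture dimension in m (assumed)

k = 2.0 * math.pi / wavelength
R_far = 2.0 * D ** 2 / wavelength             # ≈ 974 m
phase_err = (k / (2.0 * R_far)) * (D / 2.0) ** 2
print(round(R_far, 1), phase_err / math.pi)   # phase error is exactly π/8 at R_far
```

For spaceborne SAR geometries (R of hundreds of kilometers) the target is thus always comfortably in the far field of the physical antenna.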
The result Eqn. (2.2.9) for the far electric field of an antenna is related to time
average power in W/m², the quantity we need in the radar equation, by the
Poynting relation (Silver, 1949, p. 70)

I(R, θ, φ) = √(ε₀/μ₀) |E(R)|²    (2.2.11)

where E is the complex phasor scalar length of the vector E and we assume
free space. (Note that I = |I| varies as 1/R² since, from Eqn. (2.2.9), |E| ∝ 1/R.)
Using Eqn. (2.2.2), this Eqn. (2.2.11) yields the antenna power pattern as

G_t(θ, φ) = (4πR²/P_t) √(ε₀/μ₀) |E(R)|²    (2.2.12)

Figure 2.5 Near-field and far-field antenna patterns with uniform aperture illumination (from
Skolnik, 1970). With permission of McGraw-Hill, Inc.

2.2.1

Assume the linear phase distribution

E(x', y') = |E(x', y')| exp[−jk(α_x x' + α_y y')]    (2.2.13)

on the aperture, where α_x, α_y are the direction cosines of the aperture phase
gradient unit vector ŝ relative to x̂, ŷ. For specified α_x, α_y, i.e., specified ŝ,
provided ŝ ≈ ẑ, the maximum of the gain G_t(θ, φ) then occurs in the direction ŝ
(Silver, 1949, p. 176). Henceforth we will consider only the usual case ŝ = ẑ,
so that, substituting Eqn. (2.2.9) into Eqn. (2.2.12),

G_t(θ, φ) = (4π/λ²)[(1 + cos θ)/2]² (√(ε₀/μ₀)/P_t) |∫_{A_a} E(x', y') exp[jk sin θ (x' cos φ + y' sin φ)] dx' dy'|²    (2.2.14)
This gain function is maximum on the antenna axis (θ = 0) (Silver, 1949, p. 177),
with:

G_t = max_{θ,φ} G_t(θ, φ) = G_t(0, φ) = (4π/λ²)(√(ε₀/μ₀)/P_t) |∫_{A_a} E(x', y') dx' dy'|²    *(2.2.15)

This last quantity is the gain parameter G_t in the radar equation, Eqn. (2.1.1).
Using the Poynting expression Eqn. (2.2.11), evaluated across the aperture,
the total power P_rad radiated by the antenna in the case ŝ = ẑ can be written
(Silver, 1949, p. 177):

P_rad = √(ε₀/μ₀) ∫_{A_a} |E(x', y')|² dx' dy'    (2.2.16)

Since P_rad = ρ_e P_t, combining Eqns. (2.2.15) and (2.2.16) yields

G_t = ρ_e D_t    (2.2.17)

where

D_t = (4π/λ²) |∫_{A_a} E(x', y') dx' dy'|² / ∫_{A_a} |E(x', y')|² dx' dy'    *(2.2.19)

(Steering this maximum in space is done by changing the gradient of the aperture
phase distribution in the technology of phased array antennas, using α_x, α_y ≠ 0.
The gain of the steered beam may be less than that in Eqn. (2.2.19).)

The quantity D_t in Eqn. (2.2.19) is the on-axis antenna gain if the antenna
itself were lossless (ρ_e = 1), and is called the antenna directivity. It is the
maximum of the directivity pattern Eqn. (2.2.4). It still includes the aperture
illumination amplitude distribution |E(x, y)| as a function to be chosen.
(In choosing ŝ = ẑ, the aperture phase has been set to zero.) The Schwarz
inequality

|∫_{A_a} E(x', y') dx' dy'|² ≤ A_a ∫_{A_a} |E(x', y')|² dx' dy'    (2.2.20)

shows that D_t is greatest for the uniform amplitude distribution |E(x, y)| = constant,
in which case

D_t = D₀ = 4πA_a/λ²    (2.2.21)
Defining the aperture (illumination) efficiency ρ_a = D_t/D₀ ≤ 1, the antenna
gain Eqn. (2.2.17) becomes

G_t = ρ_e ρ_a (4πA_a/λ²) = ρ (4πA_a/λ²)    (2.2.22)

where

ρ = ρ_e ρ_a    (2.2.23)

This can also be written

G_t = (4π/λ²) A_e    (2.2.24)
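As a numeric illustration of the gain formula G_t = ρ_e ρ_a (4πA_a/λ²) of Eqn. (2.2.22), the sketch below uses assumed L-band values; the wavelength, aperture dimensions, and efficiencies are illustrative, not taken from the text:

```python
import math

# Gain from Eqn. (2.2.22), G_t = ρ_e ρ_a (4π A_a / λ²).  All numbers are
# illustrative assumptions (roughly an L-band SAR antenna), not from the text.
wavelength = 0.235           # m (assumed)
A_a = 10.7 * 2.16            # physical aperture area, m² (assumed dimensions)
rho_e = 0.9                  # radiation efficiency (assumed)
rho_a = 0.83                 # aperture efficiency, e.g. a 1 − (2x/L)² weighting

G_t = rho_e * rho_a * 4.0 * math.pi * A_a / wavelength ** 2
print(round(10.0 * math.log10(G_t), 1), "dB")   # → 35.9 dB
```

Note how both efficiencies enter multiplicatively: shading the aperture for sidelobe control (ρ_a < 1) costs on-axis gain just as ohmic loss (ρ_e < 1) does.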
where

A_e = ρ A_a    (2.2.25)

is the effective aperture for transmission. An effective directive area can then
also be defined from Eqn. (2.2.24)

D_t = (4π/λ²) A_d    (2.2.26)

or, correspondingly, a directive area

A_d = ρ_a A_a    (2.2.27)

which does not include power loss in the antenna structure itself, or in its feed
lines.
The single parameter G_t of Eqn. (2.2.22), the antenna (power) gain, thus
wraps into itself a good deal of complexity. It applies only on the beam axis
in the far field, and takes account of ohmic losses in the antenna and any
shading (use of nonuniform amplitude distribution) for sidelobe control. It is
a parameter the antenna designer must supply, and allows the building of the
radar equation to be carried one step beyond the transmitter, to write the
intensity of power incident on the target (assumed to be on the antenna beam
axis) as

I(R) = P_t G_t/4πR²

Although it does not affect the form of the point target radar equation, an
important property of an antenna, in addition to its gain, can be discussed here
based on the material above. That is its directivity pattern, the normalized
version of G_t(θ, φ) in terms of the aperture illumination function F(x, y):

d_t(θ, φ) = |∫_{A_a} F(x, y) exp[j(2π/λ) sin θ (x cos φ + y sin φ)] dx dy|² / |∫_{A_a} F(x, y) dx dy|²    (2.2.28)

2.2.2
For the uniform rectangular aperture of dimensions L_a × W_a, Eqn. (2.2.28)
becomes

d_t(θ, φ) = (1/A_a)² |∫_{−L_a/2}^{L_a/2} ∫_{−W_a/2}^{W_a/2} exp[j(2π/λ) sin θ (x cos φ + y sin φ)] dx dy|²    (2.2.29)

This pattern is roughly characterized by its principal cuts for φ = 0, π/2, which
are identical except for scaling by L_a or W_a, respectively. Fig. 2.6 shows the
generic result, with l being the length of the antenna in the direction of the cut
in question. Shading of the aperture is used to reduce sidelobes, using for
example Taylor weighting (discussed in Section 3.2.3), which changes the result
to the curve shown in Fig. 2.7.
For a principal cut of the uniformly illuminated rectangular aperture, the
pattern is

d_t = [sin(u)/u]²    (2.2.30)

where

u = (πl/λ) sin θ    *(2.2.31)

Figure 2.6 Directivity patterns of uniformly illuminated circular (solid) and square (dashed)
apertures (from Skolnik, 1970). With permission of McGraw-Hill, Inc.

Figure 2.7 Directivity pattern of uniformly illuminated rectangular aperture with Taylor
weighting, 30 dB levels, n̄ = 5.
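The first sidelobe of the [sin(u)/u]² pattern can be located numerically, and it reproduces the 13 dB peak sidelobe ratio quoted for uniform illumination:

```python
import numpy as np

# Principal-cut pattern of the uniformly illuminated rectangular aperture,
# Eqns. (2.2.30)-(2.2.31): d_t = (sin u / u)² with u = (π l / λ) sin θ.
# The first sidelobe lies between the nulls at u = π and u = 2π, at about
# −13.3 dB below the mainlobe peak.
u = np.linspace(1e-9, 3.0 * np.pi, 300_001)
d = (np.sin(u) / u) ** 2

mask = (u > np.pi) & (u < 2.0 * np.pi)          # first sidelobe region
first_sidelobe_db = 10.0 * np.log10(d[mask].max())
print(round(first_sidelobe_db, 2))              # → -13.26
```

Replacing the uniform weighting with a tapered one (Taylor, cosine, etc.) pushes this sidelobe down at the cost of a broader mainlobe and reduced directivity, which is exactly the trade tabulated below.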
Illumination                (L/λ)θ_B    PSLR (dB)    D/D₀

Rectangular aperture
  Uniform                     0.89         13         1.0
  1 − (2x/L)²                 1.15         21         0.83
  cos(πx/L)                   1.2          23         0.81
  cos²(πx/L)                  1.45         32         0.67
  Taylor, 25 dB, n̄ = 5        1.05         25         0.98

Circular aperture
  Uniform                     1.02         18         1.0
  √(1 − r²)                   1.15         21         0.75
  1 − r²                      1.27         25         0.64

A target at azimuth position

x_t > λRf_p/4V_st

for a side-looking system, will result in an apparent target at azimuth

x'_t = x_t − λRf_p/2V_st

Figure 2.8 (azimuth ambiguity geometry; the half-power azimuth beam footprint is R θ_H = Rλ/L_a)
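The beam-broadening factors in the table above translate directly into half-power beamwidths, θ_B = c_w λ/L, where c_w is the (L/λ)θ_B entry. A short sketch, with the wavelength and antenna length as illustrative assumptions:

```python
# Half-power beamwidth from the beam-broadening factors in the table:
# θ_B = c_w λ / L, where c_w is the (L/λ)θ_B entry for the chosen weighting.
# Wavelength and antenna length are illustrative assumptions, not from the text.
wavelength = 0.235     # m (assumed L-band)
L = 10.7               # antenna length in the cut direction, m (assumed)

for name, c_w in [("uniform", 0.89), ("Taylor 25 dB, n-bar = 5", 1.05)]:
    theta_B = c_w * wavelength / L                  # radians
    print(name, round(1000.0 * theta_B, 2), "mrad")
```

The Taylor weighting buys its 25 dB sidelobe level with roughly an 18% wider mainlobe than the uniform case.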
2.3

We now proceed to the next factor in the point target radar equation, Eqn.
(2.1.1). This concerns the extent to which a target returns energy incident upon
it back towards the radar.

Figure 2.9 (imaging geometry, with ground swath width W_g and antenna length L)

The scattering behavior of a target is wrapped into a single parameter, its
radar cross section σ, defined so that the backscattered intensity received at
the radar antenna is

I_rec = σ I(R)/4πR²    (2.3.1)
That is, σ is the target area we would infer, based on I_rec, by assuming that an
area σ intercepted the transmitted beam in the far field, with the resulting incident
power scattered isotropically. The value of σ depends on a multitude of
parameters of the target. It need not have any direct relation to the actual
frontal area presented by the target to the radar beam. The cross section of a
target will be nearly zero if the target scatters little power back towards the
antenna. This can occur because the target is small, or absorbing, or transparent,
or scatters in some other direction, or possibly all of these. The cross section
σ may be drastically larger than the target frontal area in the case that some
electromagnetic resonance effect has been excited.
Only for the very simplest shapes (such as used in calibration measurements,
Table 7.1) can the value of σ be calculated analytically, for example for a
perfectly conducting sphere or a flat plate, and even in such cases σ depends
markedly on wavelength. For shapes other than a sphere, σ depends strongly
on the aspect angle of the target to the radar beam. In practice, one can only
say that if a target at range R presents a cross section σ of some given value
to the radar, then the radar system will detect it with some corresponding
probability.
In remote sensing applications, the "targets" usually extend in physical size
beyond what one would regard as a point, for example in observation of the
earth surface. In such a case, each element dA of the extended target (terrain,
sea surface, etc.) can be assigned a local value of σ. This inferred target area σ,
relative to the geometrical area dA, is the specific backscatter coefficient at the
particular point in question on the extended target:

σ⁰ = σ/dA    (2.3.2)

Figure 2.10 Bright terrain seen by a range sidelobe masks dimmer targets in the main beam.
This quantity σ⁰ usually depends on wavelength and on the aspect from which
the terrain element is viewed. Here we want to discuss that quantity. In Section
2.8 we will discuss a form of the radar equation which is often stated for
distributed targets.
Let us begin by introducing the notion that the specific radar cross section
σ⁰ for a terrain element is appropriately considered as a random variable (Ulaby
et al., 1982, p. 476). Consider some nominal region of the earth surface which
we want to image using a SAR. The smallest area dA of that surface with which
we will be concerned is of the order of the resolution cell of the ultimate SAR
image. Usually this will be large enough to encompass multiple physical
scattering centers, each of size the order of the radar carrier wavelength, and
each of which responds to the incident electric field vector from the radar
transmitter. It is the superposition at the receiver of those elemental field phasor
responses over the region dA which determines the voltage at the receiver input,
and thereby the specific cross section σ⁰ of that element dA.
Except in idealized situations, there will be different configurations of
elemental scatterers in each terrain element dA of a larger nominally
homogeneous region. The value of σ⁰ taken over a collection of nominally
similar terrain elements will therefore not be constant, but will rather appear
to be multiple realizations of a random quantity. The implication of this is that
it is usually unfruitful to attempt to define a single deterministic backscatter
coefficient for each terrain element and to replicate the terrain map of σ⁰ in a
SAR image. Even if a terrain element dA contained one, or at most a few,
dominant point scattering centers, so that a single deterministic value σ⁰ might
apply, aspect dependence may make the value σ⁰ change in an apparently
random fashion over the course of a synthetic aperture.
The backscatter voltage responses due to different isolated terrain elements
can thus in most cases reasonably be modeled as random variables. Since even
two grossly similar terrain elements are usually physically different at the
scale of the radar wavelength, the backscattered fields of two elements can
further be taken as independent random variables. As a consequence of this
independence, the receiver (ensemble) average power for a single pulse, viewing
a larger terrain region, can be taken as the sum of the average powers which
would have resulted if each terrain element in view of the radar were in isolation
(power superposition).
Using the defining Eqn. (2.3.2), the backscattered power intensity Eqn. (2.3.1)
at the antenna for a single terrain element in direction (θ, φ) from the radar is

dI_rec = [σ⁰(θ, φ) I(R, θ, φ)/4πR²] dA

Since the value of σ⁰ in each particular cell dA is a random quantity,
conventionally its ensemble mean is given the symbol σ̄⁰:

σ̄⁰(θ, φ) = ⟨σ⁰(θ, φ)⟩    (2.3.3)

Using power superposition, the ensemble average received intensity for a single
pulse, taken over the ensemble of possible interference patterns in each resolution
cell, is then

⟨dI_rec⟩ = [σ̄⁰(θ, φ) I(R, θ, φ)/4πR²] dA    (2.3.4)

so that, integrating over the illuminated area A,

⟨I_rec⟩ = ∫_A [σ̄⁰(θ, φ) I(R, θ, φ)/4πR²] dA    (2.3.5)

a form which we will expand upon in Section 2.8 in developing the "SAR radar
equation". For the present, we will return to the relation for point targets.

With the increasing availability of radars which respond to the vector
electromagnetic field (polarimetric radars), a more general form of the
backscatter coefficient has become important. Suppose that the electric field
launched by the antenna towards a scattering element (Fig. 2.4) is:

E_t(R, t) = [E_h(x, y) x̂ + E_v(x, y) ŷ] exp[j(ωt − kz)]

where x̂, ŷ are unit vectors in space. Then E_h and E_v are the horizontal and
vertical polarization components of the field. The polarization component
phasors are collected into a transmit polarization vector h_t = (a_h, a_v).
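The power superposition argument above can be illustrated with a small Monte Carlo sketch: fields of independent scatterers with random phases add as voltages, yet the ensemble-average received power equals the sum of the individual powers. The scatterer amplitudes below are arbitrary illustrative values:

```python
import numpy as np

# Monte Carlo illustration of power superposition for independent scatterers.
# Each trial draws independent uniform phases; the fields add coherently,
# but the ensemble-average power is the incoherent sum of element powers.
rng = np.random.default_rng(0)
amps = np.array([1.0, 0.5, 2.0, 0.8])       # elemental field amplitudes (arbitrary)
trials = 200_000

phases = rng.uniform(0.0, 2.0 * np.pi, size=(trials, amps.size))
total_field = (amps * np.exp(1j * phases)).sum(axis=1)
mean_power = float(np.mean(np.abs(total_field) ** 2))

print(round(mean_power, 2), float((amps ** 2).sum()))   # both ≈ 5.89
```

Any single realization, by contrast, can lie far above or below the mean, which is the speckle phenomenon behind the random-variable view of σ⁰.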
Similarly, the scattered field at the receiver will have a plane wave
representation in terms of a polarization vector h_s, related to the transmitted
polarization vector by

h_s = S h_t

in which

S = [S_hh  S_hv; S_vh  S_vv]

is the complex scattering matrix of the target. Its terms indicate the extent to
which the two orthogonal spatial components of the incident wave each scatter
into the two orthogonal components of the scattered wave.
Finally, if the polarization vector h_r characterizes the extent to which receiver
input voltage is induced by the two components h_s of the scattered wave at
the antenna, the receiver voltage phasor is

v ∝ h_rᵀ S h_t

2.4

In the simple case of an isolated target in the far field, we have expressed the
intensity of backscattered power at the antenna as

I_rec = P_t G_t σ/(4πR²)²    (2.4.1)
a value which we will assume to be constant over the physical aperture of the
antenna. This intensity represents the scattered electromagnetic field incident
on the antenna structure. Some, all, or none of that field will actually be
effective in introducing signal power into the receiver circuits, which is
necessary in order to detect a target. Again, in building the radar equation,
a single parameter is introduced to cover a number of effects and assumptions,
namely, the antenna (receiving) aperture A_r.
The receiving aperture of an antenna at a particular frequency is an area
defined in terms of the intensity I_rec at the antenna structure and the power
P_r flowing towards the receiver, across the antenna/receiver interface.
The receiver input is taken at the same point in the circuitry as the antenna
output, which we will assume to be the connection between the antenna
structure and the feed line to the first stage of electronics.
The extent to which the power potentially available to be extracted from
the electromagnetic field at the antenna will actually appear in the receiver
depends on the relative impedance levels in the system. Some power potentially
available will be lost through reflection (re-radiation) of the incident field
away from the antenna. In addition, since the elements of any real antenna
will have some non-zero resistance, part of the power represented by antenna
currents induced by the incident intensity /rec will be lost as heat in the antenna.
Both these effects are expressed through the antenna impedance.
The antenna impedance has two components; that due to resistance,
inductance, and capacitance in the structure itself, and a less obvious component,
the "radiation" impedance. This latter expresses the re-radiation of power
through the coupling between the impinging field and the currents induced in
the antenna conductors. Both these quantities can be calculated for simple
structures, or measured more or less precisely.
The power P_r flowing from the antenna port towards the receiver for a
particular incident intensity I_rec defines the antenna receiving aperture A_r by

A_r = P_r/I_rec    (2.4.2)

For a matched, lossless system, reciprocity gives the maximum receiving
aperture as the effective aperture of the antenna used for transmission,

A_r = A_e    (2.4.3)

with

A_e = λ²G_r/4π = ρ A_a    (2.4.4)
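A quick numeric check of the reciprocity relation A_e = λ²G_r/4π behind Eqn. (2.4.4); the gain value below is an illustrative assumption:

```python
import math

# Reciprocity between receive power gain and effective receiving aperture,
# A_e = λ² G_r / 4π.  The gain is an assumed illustrative value (≈ 36 dB),
# not a number from the text.
wavelength = 0.235            # m (assumed)
G_r = 3930.0                  # receive power gain (assumed)

A_e = wavelength ** 2 * G_r / (4.0 * math.pi)
print(round(A_e, 2), "m^2")   # → 17.27 m^2
```

The effective aperture comes out somewhat smaller than a plausible physical aperture of the same antenna, reflecting the efficiency factor ρ in Eqn. (2.4.4).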
Figure 2.11 Circuit with resistor noise equivalent voltage source.

As state variables we take the inductor branch current i and capacitor branch
voltage v. The stored energy in the system is quadratic in the state variables:

E = (1/2)(Cv² + Li²)

The mean square capacitor voltage due to the noise source is

⟨v²⟩ = ∫₀^∞ |H(f)|² N(f) df    (2.5.3)

where the transfer function from the source voltage to the capacitor voltage is

H(f) = (1 + jωRC + R/jωL)⁻¹

Since |H(f)|² is sharply peaked near the resonant frequency f₀ of the circuit,
the spectral density N(f) can be taken outside the integral at its value there.
Equipartition of energy then requires the average capacitor energy to be kT/2:

E_C = (C/2) ∫₀^∞ |H(f)|² N(f) df = (C/2) N(f₀) ∫₀^∞ |H(f)|² df = kT/2    (2.5.4)

evaluating the integral using Gradshteyn and Ryzhik (1980, Section 3.112.3).
Letting the arbitrary frequency f₀ in Eqn. (2.5.4) be labeled as a general frequency
f yields

N(f) = 4kTR    *(2.5.5)

which is Nyquist's theorem. By Eqn. (2.5.1), the noise is Gaussian, and by Eqn.
(2.5.5) it is white, with the indicated power spectral density.

A quantum mechanical refinement (van der Ziel, 1954, p. 301) of the statistical
mechanical argument results in a more precise form of the Nyquist theorem:

N(f) = 4kTR p(f)    (2.5.6)

where

p(f) = (hf/kT)/[exp(hf/kT) − 1]

is the Planck factor. Neglecting the Planck factor, which contributes a non-white
character to the noise, results in an error of less than 5% in noise power spectral
density so long as hf/kT < 0.1. At radar frequencies, say f < 35 GHz, this allows
the Planck factor to be neglected for T > 17 K. In some applications, for example
sky noise or very low noise receiver front ends, equivalent temperatures below
that limit may be in question, in which cases the more precise form Eqn. (2.5.6)
should be used.

Thus we have a basic result, supported independently by observations. The
thermal noise equivalent source voltage in a resistor of resistance R at
temperature T is a Gaussian random process with a constant power spectral
density (white noise) 4kTR. Further (van der Ziel, 1954, p. 17), the same result
holds for any passive system at uniform temperature, where the resistance is
the equivalent resistance "looking back into" the output terminals of the system.
If such a system is connected to an impedance matched load, the one-sided
spectral density of the power delivered to the load in W/Hz is just

N_a(f) = 4kTR/4R = kT    *(2.5.7)

This is the "available power" spectral density of the noise source. If attention
is confined to a frequency band of width B, say by a lossless filter circuit, the
thermal noise power (W) delivered to the matched load is kTB. It is quantities
of this latter form which will appear in the final equation for SNR.
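A short numeric sketch of the kTB noise power that enters the SNR, together with a check of the hf/kT < 0.1 criterion for neglecting the Planck factor; the bandwidth and carrier frequency are illustrative assumptions:

```python
import math

# Thermal noise power kTB into a matched load (Eqn. (2.5.7) integrated over a
# band B), plus a check of the hf/kT < 0.1 Planck-factor criterion.
# Bandwidth and carrier frequency are illustrative values, not from the text.
k_B = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34     # Planck constant, J·s
T = 290.0              # K
B = 20e6               # Hz (assumed receiver bandwidth)
f = 1.275e9            # Hz (assumed L-band carrier)

P_noise = k_B * T * B                                       # watts
print(round(10.0 * math.log10(P_noise / 1e-3), 1), "dBm")   # → -101.0 dBm
print(h * f / (k_B * T))   # ≈ 2e-4, far below 0.1: Planck factor negligible
```

At microwave frequencies and ordinary temperatures the Planck correction is thus entirely negligible, as the text's 35 GHz / 17 K bound indicates.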
Figure 2.12
system. We consider first the source, then the receiver, and finally the
combination.

2.6.1 Source Noise
The point in the radar system which separates source from receiver is arbitrary.
As we have done earlier, we shall take the separating point as the signal port
of the antenna structure. This is the point at which received power P_r in
Eqn. (2.4.2) is taken in defining the antenna receiving aperture. All elements
prior to that point in the receiving chain contribute noise to be counted in
source noise power. Past that point noise is counted against the receiver.
In turn, source noise is separated broadly into two parts. The first is antenna
noise due to such local effects as thermal noise in the resistance of the antenna
current paths and thermal noise radiated by any radome structure. The second
is external noise, due to relatively distant noise sources (thermal or interfering).
In either case, it is conventional to describe a noise source formally by a
temperature, in analogy to the formula for available noise power spectral density
Eqn. (2.5.7) from a resistor at temperature T:

T_s = T_ext + T_ant    (2.6.1)

Thus the external noise sources, as viewed from the antenna terminals, are
assigned a temperature T_ext, while the local sources have a temperature T_ant.
These temperatures may or may not relate to the physical temperature of any
actual object.
In considering the expression Eqn. (2.6.1 ), we can assume that different
physical sources produce independent random noise voltage waveforms. Hence
noise powers, and thereby noise temperatures, from separate sources simply
add numerically. The expression Eqn. (2.6.1) is frequency dependent, in general,
since the actual noise represented by the thermal noise formalism may not be
white, for example in the case of an interfering signal in view of the antenna,
or a radio star radiating at some specific frequency. In the case of a narrowband
noise, the temperature is implied to refer to the center of the band. More
generally, an equivalent constant temperature is used across the band of the
receiver such that kT,.B 0 gives the correct total power, where B0 is a measure
of system bandwidth appropriate for noise calculations.
External Source Noise
Let us consider first the external source temperature, defined by N_ext = kT_ext,
where N_ext is whatever noise power density would flow out of the antenna into a
matched receiver system which could not be accounted for by noise sources
local to the antenna structure. It will be helpful to develop some of the
conventions used to describe the situation. Ulaby et al. (1981, Ch. 4) present a
more complete summary.
Radiation reaching the earth from the sky is described in terms of Planck's
law. The motivation for this is that the frequency dependence of radiation
reaching the earth from the principal physical source, the sun, is thereby well
described at visible and infrared frequencies in terms of a single temperature
parameter.
Consider first a closed cavity whose walls are at constant physical temperature
T. The walls of the cavity are assumed to constitute a black body, an idealized
passive object which by definition absorbs and re-radiates all incident radiation.
It is a basic result of theoretical physics (Page, 1935, p. 547) that the radiation
inside the cavity is omnidirectional and homogeneous, with energy frequency
spectral density per unit volume at any point in J/m³ Hz (Planck's law)

u = 8πh(f/c)³ [exp(hf/kT) − 1]⁻¹    (2.6.2)

The apparent intensity spectral density per unit solid angle incident on any
point in the cavity in W/m² sr Hz is then

B = uc/4π = (2hf³/c²) [exp(hf/kT) − 1]⁻¹    (2.6.3)

This is defined as the "brightness" (or radiance) (Ulaby et al., 1981, p. 192) of
the source, the cavity wall.

We ultimately want to calculate the power incident on an antenna directive
aperture A_d. To that end, we need the power per unit area impinging on the
antenna from various directions (Fig. 2.13). In radiometry (Slater, 1980,
p. 88; Nicodemus, 1967; Meyer-Arendt, 1968), the surface giving rise to the
radiation receives central attention, and is assigned a spectral radiance in
W/m² sr Hz

L = J/A_s cos θ    (2.6.4)

where J is the spectral radiant intensity (W/sr Hz), the angular power spectral
density emitted by surface area A_s in direction θ.

An antenna of directive area A_d at range R subtends a solid angle A_d/R² as
seen by the radiating surface element A_s (Fig. 2.13). Thus the noise power
density impinging on the antenna surface in W/Hz from the element of solid
angle dΩ about a given direction is

B A_d dΩ    *(2.6.6)

where B is the brightness perceived by the antenna.

Figure 2.13

It happens that the main contributor to radio noise, the sun, as perceived from
earth generally obeys the functional form of the Planck law, Eqn. (2.6.3),
so that at radar frequencies, where hf/kT ≪ 1, the brightness Eqn. (2.6.3)
reduces to the Rayleigh–Jeans form, for the single polarization component to
which the antenna responds,

B = kTf²/c² = kT/λ²    (2.6.8)
The noise power density available from the antenna, without considering
antenna self-loss, is then the impinging brightness Eqn. (2.6.6) integrated over
the antenna pattern:

N_ext(f) = ∫_Ω₀ B(θ, φ) A_d(θ, φ) dΩ    (2.6.9)

The region of integration Ω₀ is that portion of the antenna pattern which views
the source. For a black body at constant temperature T, using the brightness
Eqn. (2.6.8) and the directive area A_d(θ, φ) = (λ²/4π) D'(θ, φ), the
corresponding temperature, using Eqn. (2.6.1), is

T_ext = (T/4π) ∫_Ω₀ D'(θ, φ) dΩ    (2.6.10)

It is then useful to admit a directionally dependent brightness temperature
T_n(θ, φ), writing

B(θ, φ) = kT_n(θ, φ) f²/c²    (2.6.11)

Even though the Planck factor in Eqn. (2.6.3) may not be negligible in some
applications, Eqn. (2.6.11) as it stands defines T_n such that the correct value for
B results from its use.

Proceeding one final step, it is then useful to extend the black body
Eqn. (2.6.10) to the general case Eqn. (2.6.11), and to express the result in terms
of an available noise power spectral density into a matched load in the form
of Eqn. (2.5.7)

N_ext(f) = kT_ext    (2.6.12)

where T_ext is a (possibly frequency dependent) temperature so defined by the
actual noise density at the antenna terminals. Considering the directionally
dependent brightness Eqn. (2.6.11), the available power density Eqn. (2.6.9)
takes the form

N_ext(f) = (k/4π) ∫_Ω₀ T_n(θ, φ) D'(θ, φ) dΩ    (2.6.13)

where we use Eqn. (2.6.11) and the definition Eqn. (2.6.9). Comparing
Eqn. (2.6.13) with Eqn. (2.6.12) then yields

T_ext = (1/4π) ∫_Ω₀ T_n(θ, φ) D'(θ, φ) dΩ    *(2.6.14)

If the brightness temperature is a constant T_n over the region Ω₀ viewed, this
reduces to

T_ext = T_n D_s    (2.6.15)

where

D_s = (1/4π) ∫_Ω₀ D'(θ, φ) dΩ    (2.6.16)

is a receiving directivity taking into account the sidelobe structure of the antenna.
Since the antenna directivity is by definition normalized as in Eqn. (2.2.5),
∫_sphere D'(θ, φ) dΩ = 4π, the directivity D_s is always less than unity, reaching
unity for an antenna enclosed in a cavity. In the case of a nominal point source,
such as the sun or a planet, the temperature function in Eqn. (2.6.11) is
concentrated in a small solid angle ΔΩ about some direction (θ₀, φ₀), and
Eqn. (2.6.14) gives approximately

T_ext ≈ (1/4π) T_n(θ₀, φ₀) D'(θ₀, φ₀) ΔΩ    (2.6.17)
On the other hand, if the antenna were pointed at the sun, a very high value
T_s as in Eqn. (2.6.7) would be expected over the narrow sector of the sun's disk.
At typical radar frequencies, pointing away from the sun, the main noise
contribution is from the sun's radiation scattered into the antenna by the earth's
atmosphere. Combined with galactic noise, the result is nearly constant at
T_n = 10 K over the radar band (Gagliardi, 1978, p. 103). In the case of a SAR,
with the antenna viewing the earth surface, the external noise temperature can
be calculated nominally using Eqn. (2.6.15) for a body at 300 K. The factor
Eqn. (2.6.16) results by integration of the beam pattern over the radar footprint.
Since the external noise in the environment of the antenna is directionally
dependent, as well as frequency dependent, even at a specified frequency, the
calculation of a single temperature T_ext for use in the radar equation involves the
antenna sidelobe structure, the pointing direction of the main beam, the type of
atmospheric layers in view of the antenna, and so on. Skolnik (1980, Ch. 12)
discusses many of the considerations involved. The user of the radar equation
sweeps all these considerations into a single parameter which will presumably
be supplied: the total source external equivalent noise temperature T_ext.
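A minimal sketch of the SAR case just described, using Eqn. (2.6.15) with the earth footprint at T_n ≈ 300 K; the receiving directivity D_s (the beam-pattern integral of Eqn. (2.6.16) over the footprint) is an assumed illustrative value:

```python
# Nominal external noise temperature for a SAR viewing the earth, via
# Eqn. (2.6.15): T_ext = T_n D_s.  T_n = 300 K is the nominal earth
# temperature from the text; D_s is an assumed illustrative value for the
# fraction of the pattern (main beam plus near sidelobes) viewing the surface.
T_n = 300.0      # K, earth brightness temperature
D_s = 0.85       # assumed beam-pattern integral over the footprint

T_ext = T_n * D_s
print(T_ext, "K")    # → 255.0 K
```

The remaining fraction of the pattern views the cold sky, whose ~10 K contribution is small by comparison.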
The other component of source noise is thermal noise arising in the lossy portions
of the antenna structure. These effects are lumped together into an antenna
temperature T_ant, again defined such that kT_ant is the correct available noise
power density from the antenna, if the antenna source noise N_ext were not
present. We will suppose that these losses are expressed as a portion of the
available signal power reaching the antenna which is not available at the antenna
output terminals, that is, by the antenna radiation efficiency ρ_e.

In general, suppose that only some portion 1/L < 1 of the power available
from a source is available at the output of a system (Fig. 2.14): P_oa = P_ia/L. (In
the antenna case, L = 1/ρ_e.) This available power loss implies power absorption
in the system. (Available power relations become actual power relations in
operation if the system impedance is matched at input and output.) Such power
absorption implies in turn the presence of resistive elements, which generate
internal thermal noise which we want to characterize.

Suppose that the circuit in question were connected to a source resistance at
its input, and that the combination of source resistance and circuit were at a
common physical temperature T_phys. The available input noise power density
from the source resistor is then kT_phys by Eqn. (2.5.7), so that the available
output noise power density attributable to the input must be kT_phys/L. On the
other hand, the total available output noise power density from the source and
circuit combination at temperature T_phys must also be kT_phys, as for any
system at constant temperature. The difference between available output power
density and that attributable to the source is then just

N_int = kT_phys(1 − 1/L)    (2.6.18)

Figure 2.14 Available power and physical temperature of lossy system. P: signal; kT_phys: noise.

Referred to the system input, this internal noise corresponds to an equivalent
temperature

T_e = (L − 1)T_phys    (2.6.19)
to be added to the actual source noise temperature to account for the resistive
noise in the system. For example, for a matched attenuator at T_phys = 290 K
which delivers 63% of its input power to its output port (L = 1.58, or a 2 dB
loss), the equivalent input noise temperature is T_e = 170 K, which must be added
to the source noise temperature.
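The 2 dB attenuator example can be reproduced directly from T_e = (L − 1)T_phys:

```python
import math

# Equivalent input noise temperature of a matched lossy two-port,
# T_e = (L − 1) T_phys, reproducing the 2 dB attenuator example in the text.
loss_dB = 2.0
T_phys = 290.0                       # K

L = 10.0 ** (loss_dB / 10.0)         # ≈ 1.585, i.e. ≈ 63% power transmitted
T_e = (L - 1.0) * T_phys
print(round(T_e))                    # → 170 K, as quoted in the text
```

Note how quickly loss ahead of the first amplifier inflates the system temperature: even 1 dB of feed-line loss at 290 K adds about 75 K.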
In application to the antenna noise question, we usually want the source noise
temperature T_s referred to the antenna output (receiver input) terminals, just
beyond the loss element represented by the antenna efficiency parameter ρ_e. In
that case, the available noise power density, referred to the antenna output, is

N_s = kT_s = k[T_ext/L + (1 − 1/L)T_phys]    (2.6.20)

where T_phys is the temperature of the antenna structure. This value should be
used in conjunction with the antenna power gain G_r in the radar equation,
because G_r includes the signal attenuation factor ρ_e due to antenna losses. On
the other hand, referred to the antenna input terminals, the equivalent source
temperature is

T_ext + (L − 1)T_phys

to be used with the corresponding lossless antenna parameters.
2.6.2 Receiver Noise
depends on the absolute level of receiver input noise power P₀. This in turn
depends on the input impedance conditions, which govern the extent to which
available source noise power is delivered to the circuit.
Because the available output power Eqn. (2.5.7) of a thermal noise source is
independent of source or load impedance, it is a great convenience to assume in
system noise calculations that all units have matched impedance sources and
loads. Were such to be the case, the actual powers would be identical to the
available powers. Such is not necessarily the case. However, we can assume load
matching, since output SNR is independent of load (with some exceptions
discussed by Pettai ( 1984) ), signal and noise being treated the same by the load.
But source mismatch will require a factor in the radar equation to adjust the
results, calculated assuming source matching and available power, to the actual
case. For the moment, we assume impedance match between all system elements.
Any system which generates noise can be characterized by a "noise factor"
F, or a "noise figure" 10 log F. (The terminology is not consistently applied;
often "noise figure" is used for both.) The unwanted output might be due to
internal noise, thermal (white Gaussian) or otherwise. It might also be due to
the deterministic generation from the input of frequency components which later
interfere with the signal (nonlinear effects present in mixers, for example), or loss
of signal power in converting from RF to IF. Some possibilities have been
summarized by Skolnik (1980, p. 347). Pettai (1984, Ch. 10) gives a more
complete discussion. Various different definitions of noise figure can be made
(Pettai, 1984, Ch. 9). We will discuss some of them in turn, indicating their use
in the radar equation.
Recall that the signal power entering the receiver is expressed in the radar
equation in terms of the receiving aperture Ae of Eqn. (2.4.4). This by definition
relates to the signal power which would flow into a matched receiver. If, in
considering noise power into the receiver, we also assume impedance matching,
we then have to do with available noise power quantities kT at the input, and
available power gain Ga to transfer them to the output, along with the signal.
We thereby arrive at the output SNR for matched conditions, which is the SNR
in operation, except for the internal noise effects indicated in Eqn. (2.6.21).
The available power gain Ga of a circuit is the ratio of power available at the
circuit output, which depends on both the circuit and the source, to power
available from a source connected at the input, which depends only on the source.
We take this quantity relative to some frequency of interest, with the ratioed
powers referred to unit bandwidth over a narrow (infinitesimal) band. Thereby
all gains, temperatures, and noise factors generally become functions of
frequency.
The available power from a circuit is the power that would be delivered to a
matched load. For example, in Fig. 2.12

Pia = es²/4Rs

is the available input power, corresponding to Rin = Rs. From Fig. 2.12 then

Ga = Poa/Pia    (2.6.22)
Pettai (1984, Ch. 7) has discussed this quantity carefully. It is independent of the
actual load conditions at the circuit output, but depends on the input impedance
conditions. It is not the ratio of output power to input power under operating
conditions, unless the input and output are matched, so that Rs = Rin and the
circuit is loaded by RL = Rout. It depends on source impedance, a fact which, as
we shall see, feeds directly into a property of "the" noise figure of a circuit.
Receiver Noise Temperature
Using available power gain, the additional output noise contributed by a circuit
can be expressed in terms of an equivalent temperature. Suppose a source of
temperature Ts feeds the circuit, whose internal sources add an available noise
power density Nint at the output. The total available output noise power density
is then

Noa = GakTs + Nint    (2.6.23)

Letting

Te = Nint/(Gak)    (2.6.24)

be the equivalent (input) noise temperature of the circuit, and assuming Te to be
constant over the band, we can write

Noa = GakTop

where

Top = Ts + Te    (2.6.25)

is the "operating" noise temperature of the combined source and receiver. Since
the gain Ga is used to refer the receiver noise Nint to the input, the receiver
equivalent temperature Te and the operating noise temperature Top depend on
the impedance of the source feeding the circuit.
The equivalent noise temperatures Te1, Te2, ... of a cascade of elements combine
easily. Each unit of the cascade is specified by its available power gain Gai and
equivalent input noise temperature Tei, both specified for the impedance and
temperature conditions present in the cascade. Then for three elements, for
example, the total available excess output power is

Nint = k(Ga1Ga2Ga3Te1 + Ga2Ga3Te2 + Ga3Te3)

leading to

Te = Nint/(Ga1Ga2Ga3k) = Te1 + Te2/Ga1 + Te3/(Ga1Ga2)    *(2.6.26)
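The cascade rule Eqn. (2.6.26) extends to any number of stages. A minimal sketch, with hypothetical gain/temperature pairs chosen only for illustration:

```python
def cascade_noise_temperature(stages):
    """Total equivalent input noise temperature of a cascade, Eqn. (2.6.26):
    Te = Te1 + Te2/Ga1 + Te3/(Ga1*Ga2) + ...
    `stages` is a list of (available_gain, equivalent_temperature) pairs,
    ordered from the source toward the output."""
    total, gain_product = 0.0, 1.0
    for g, te in stages:
        total += te / gain_product   # refer each stage's noise to the input
        gain_product *= g
    return total

# Hypothetical chain: low-noise amplifier first, lossy stages after it.
print(round(cascade_noise_temperature([(100.0, 50.0), (0.5, 300.0), (10.0, 600.0)])))  # 65
```

Note how the 600 K third stage contributes only 12 K once referred through the first stage's gain of 100; this is the numerical content of the remark below about the earliest stages dominating.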
The operating noise temperature Eqn. (2.6.25) wraps together source and
receiver noise into one parameter. It is sometimes convenient to continue to
keep these as separate. To that end, it is usual to define a noise factor F in
terms of which to characterize the intrinsic receiver noise. This is taken in
reference to the specific source impedance which will feed the circuit in operation,
but with the source assumed to be at a standard temperature T0 = 290 K. The
receiver itself is assumed to be at its physical operating temperature. Then the
("standard") noise factor F is the ratio of the total available output noise power
density Noa with the input at temperature T0, to the output available power
density attributable to the input:

F = Noa/(GakT0)    (2.6.30)

Using Eqn. (2.6.23) to express the total receiver output available noise power
density in terms of the equivalent receiver temperature Te, we have

F = (Nint + GakT0)/(GakT0)
  = 1 + GakTe/(GakT0) = 1 + Te/T0    (2.6.31)
which is to say

Te = (F − 1)T0    *(2.6.32)
Note that F, like Te, depends on the source impedance, through Ga, although
not on its temperature. (The dependence of F on source impedance is in fact
more profound than simply via Ga (Pettai, 1984, p. 149); the matter involves the
particular distribution of noise sources in the receiver.) Also note that noise
factor, like noise temperature, may depend on frequency since we deal always
with power spectral densities.
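The conversions of Eqn. (2.6.31) and Eqn. (2.6.32) between noise factor and noise temperature are one-liners; a sketch (the 3 dB case below is hypothetical, chosen for illustration):

```python
T0 = 290.0  # standard reference temperature, K

def noise_factor_to_temperature(f: float) -> float:
    """Te = (F - 1) * T0, Eqn. (2.6.32)."""
    return (f - 1.0) * T0

def temperature_to_noise_factor(te: float) -> float:
    """F = 1 + Te/T0, Eqn. (2.6.31)."""
    return 1.0 + te / T0

# A 3 dB noise figure corresponds to F close to 2, hence Te close to 290 K:
f = 10.0 ** (3.0 / 10.0)                      # about 1.995
print(round(noise_factor_to_temperature(f)))  # 289
```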
It is interesting to note the noise factor of a lossy element at standard
temperature T0. From Eqn. (2.6.19) and Eqn. (2.6.31), this is

F = L    (2.6.33)

Note that Eqn. (2.6.33) is not correct unless the element is at standard
temperature.
In terms of receiver noise factor F, using Eqn. (2.6.25) and Eqn. (2.6.32), the
radar equation Eqn. (2.6.27) becomes

SNR0 = PtGσAe/[(4πR²)²k(Ts + (F − 1)T0)Bn]    (2.6.34)

In the particular case Ts = T0 this reduces to the common functional form

SNR0 = PtGσAe/[(4πR²)²kT0FBn]    (2.6.35)

In order to rescue the functional form Eqn. (2.6.35), an "operating" noise factor
Fop can be defined. The operating noise factor of a combined system, including
source and receiver, is defined as the ratio of the actual available output noise
power density Noa to the available output noise power density if the receiver
had no internal noise sources:

Fop = Noa/(GakTs)    (2.6.36)

This parameter has the advantage that it takes into account the actual source
temperature Ts, so that the receiver output signal to noise ratio is simply

SNR0 = SNRi/Fop    (2.6.37)

where SNRi is the output SNR in the case that the receiver, of bandwidth Bn,
had no internal noise sources. Since Fop ≥ 1, always SNR0 ≤ SNRi from
Eqn. (2.6.37); all else being equal, SNR can only degrade in the presence of
system noise. The amount of degradation is governed by the ratio of output
noise density Nint due to internal sources to amplified input noise density GakTs.
Since the former is nominally the same in each of the various stages of a receiver,
while the latter increases from stage to stage, it is the noise figure of the earliest
receiver stages which mainly controls the output SNR, an observation we will
make more precise below as Eqn. (2.6.46).

The standard noise factor F defined in Eqn. (2.6.30) is the operating noise
factor Eqn. (2.6.36), but assuming that the source temperature Ts is the standard
temperature T0 = 290 K. Using Eqn. (2.6.24) in Eqn. (2.6.36), we have

Fop = 1 + Te/Ts    *(2.6.38)

Only in the particular (and unusual) case Ts = T0 does this reduce to the
standard noise factor F itself. Substituting

Te = (F − 1)T0

in Eqn. (2.6.38), there results the relation between the operating and standard
noise factors

Fop = 1 + (F − 1)(T0/Ts)    *(2.6.39)

A system noise factor can also be defined,

Fsys = (F − 1) + Ts/T0    *(2.6.40)

which is the form Eqn. (2.6.35), but not limited to Ts = T0. The system noise
factor, like the operating noise factor, accounts for both source noise and internal
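The relations Eqn. (2.6.39) and Eqn. (2.6.40) can be sketched as follows; the cold-source values are hypothetical, chosen to show how a standard figure understates the degradation when Ts < T0:

```python
T0 = 290.0  # standard reference temperature, K

def operating_noise_factor(f: float, t_source: float) -> float:
    """Fop = 1 + (F - 1) * (T0 / Ts), Eqn. (2.6.39)."""
    return 1.0 + (f - 1.0) * (T0 / t_source)

def system_noise_factor(f: float, t_source: float) -> float:
    """Fsys = (F - 1) + Ts / T0, Eqn. (2.6.40)."""
    return (f - 1.0) + t_source / T0

# A receiver with standard noise factor F = 2 (3 dB noise figure), fed by a
# cold 100 K source, degrades SNR by Fop = 3.9, not by 2:
print(operating_noise_factor(2.0, 100.0))
```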
The available power gain Ga and the circuit (standard) noise factor F are defined
so as to combine easily for cascaded networks. Consider first the gains Ga1 and
Ga2 of two cascaded networks. Since available output power is independent of
load, the available output power density N1 (Fig. 2.15) is just that of the first
network acting alone, so that the combined available power gain is Ga = Ga1Ga2.
Suppose the two networks have output noise power densities due to internal
sources of Nint1, Nint2 when fed by the impedances Rs, Rout1, respectively.
(Note that these powers do depend on source impedance, since the flow of
internal noise power depends on the character of the complete driving circuit.)
Then the combined output noise density is

Noa = Nint2 + Ga2Nint1 + Ga2Ga1kT0    (2.6.43)

and the combined (standard) noise factor is

F = (Nint2 + Ga2Nint1 + Ga2Ga1kT0)/(Ga2Ga1kT0)
  = F1 + (F2 − 1)/Ga1    *(2.6.46)

The cascade relation Eqn. (2.6.46) makes precise the decreasing importance of
internal noise in the later stages of the electronics chain. The same result follows
from Eqn. (2.6.45), by referring the total output noise to the input of the cascade.
All the relations above deal with available power gain Ga and available power.
The source resistance Rs enters through the dependence of Ga on Rs, and in other
ways. The resulting SNR, calculated using available power, is not the actual SNR
unless the receiver input impedance is in fact Rs. In practice, one strives to meet
that condition approximately, by use of an impedance matching transformer at
the receiver input, for example. However, some mismatch may exist, perhaps
introduced intentionally by noise tuning to increase output SNR. In that case,
the actual output SNR and the available power output SNR will differ by some
factor, which can be included in the radar equation as a mismatch factor Lm:
(2.6.47)
Figure 2.15
for example. In the case of tuning, the factor Lm may be less than unity. In that
case, SNR has been improved by deliberate mismatching. Skolnik (1980, p. 345)
mentions the effect, and Pettai (1984, p. 149) analyzes the matter.
From another point of view, the operating noise factor depends on source
impedance. If, by the noise factor F corresponding to the factor Fop in the radar
equation, we imply "the" noise factor of the receiver, we must have reference to
a specific source impedance. If that is the source impedance which matches the
receiver input impedance, then all is well if the source and receiver are matched
in operation. However, if mismatch is present at the input during operation, the
corresponding operating noise figure Eqn. (2.6.38) is not the correct number to
use in the radar equation. It must be modified by some factor Lm as in
Eqn. (2.6.47).
We turn now to a simplified example of application of the expressions
developed in this section for noise characterization.
2.6.3
An Example
In this section we want to give an example of noise calculations using the above
relations. The analysis will be simplified in comparison with an actual situation.
We will consider only the primary effects in operation; a thorough analysis is
complicated, specific to each situation, and beyond our aims.
Consider then the system schematized in Fig. 2.16. A down-looking antenna
views the earth, with the received signals passed through a waveguide connection
and isolator (to protect the receiver during pulse transmission) to the carrier
frequency (RF) amplifier. After amplification, the signal is shifted to another
frequency band by a mixer and local oscillator (LO), and then passed through
an intermediate frequency (IF) amplifier and filter chain to the output. In a radar
receiver, the IF amplifier output would be detected to determine its power as a
function of time for decision making, in a simple system, or perhaps digitized for
further processing. On the other hand, the amplified IF signal might be converted
to another carrier frequency for telemetry to a ground station.
The down-looking geometry is such that the antenna effectively sees only the
Figure 2.16 [Example receiver: antenna and circulator at Tphys = 180 K; RF amplifier Ga = 20 dB, F = 4 dB, at 250 K; mixer L = 5 dB, t = 1.5; 1 dB loss and IF amplifier Ga = 60 dB, F = 3 dB, at 400 K]
where ρ is the reflectivity or reflectance or, in the case of the sun as the energy
source, the albedo. A nominal value ρ = 0.1 is reasonable as an order of
magnitude for the earth surface in the microwave region at 20° viewing angle
from vertical (incidence angle) (Elachi, 1987, p. 146).

The antenna structure itself has losses expressed by the radiation efficiency ρe
of Eqn. (2.2.17). The signal loss resulting from this is already accounted for by
use of the power gain in the radar equation, rather than the directivity. The
implied noise increase is expressed by the loss Le = 1/ρe. For argument we take
ρe = 0.95. The antenna feed and extraneous losses might amount to 1 dB, and
we lump those losses with the antenna loss. These together comprise the source
noise temperature Ts.

The circulator we take to have a loss 1 dB in the signal direction in its
operating position, with the transmitter feed connected. Along with the antenna,
we assume a circulator physical temperature of say Tphys = 180 K.
The RF amplifier, as fed by its actual source impedance, we take to have an
available power gain 20 dB. The (standard) noise figure, with the same source
impedance, and measured with the amplifier at its operating temperature, we take
as 4 dB. The RF output undergoes cable loss of 1 dB before reaching the mixer.
The mixer has a conversion loss (RF to IF) of 5 dB, and a noise temperature
ratio (Pettai, 1984, p. 101):

t = To/T0    (2.6.48)
Figure 2.16 [Down-looking radar receiver; earth scene at 300 K, emissivity ε = 0.9]
where To is the output noise temperature under operating conditions, assuming
an input temperature T0. The local oscillator is followed by an extraneous loss
of 1 dB, and the IF amplifier, as shown in Fig. 2.16. The later components operate
at 400 K.
Let us first determine the source temperature Ts (Fig. 2.16). The detailed
situation is diagrammed in Fig. 2.17. The antenna and feeds are assumed to be
Figure 2.17 [Source temperature model: Text = εT = 270 K; antenna loss L1 = 1/ρe (G1 = 0.95) and 1 dB feed loss, both at Tphys = 180 K; L12 = L1L2 = 1.33, Tant = (L12 − 1)Tphys = 59 K; resulting Ts = 248 K]
matched, so that the attenuation units shown have available power gains
Ga = 1/L.
The generator temperature is just the temperature of the earth, modified by
the emissivity: Text = εT = 270 K. The available power gains G1 = ρe = 0.95,
G2 = −1 dB = 0.79 cascade as G12 = G1G2 = 0.75, corresponding to antenna
and feed loss L12 = 1/G12 = 1.33 (1.2 dB). From Eqn. (2.6.19), this corresponds
to an effective input temperature Tant = 59 K, considering the physical
temperature 180 K.

The sum of Text and Tant, 329 K, is brought forward using G12 to yield a total
source temperature Ts = 248 K. This value is mainly driven by the high earth
temperature. Were the antenna to be situated on earth and looking at a cold sky
(Text = 50 K, say), the antenna losses (more precisely, the implied noise sources)
would be proportionally more significant than in the earth viewing case.
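The source-temperature arithmetic above can be reproduced in a few lines (Python; all values are from the text, and the printed results match the book's rounded figures):

```python
# Source temperature bookkeeping of Fig. 2.17.
t_earth, emissivity = 300.0, 0.9
t_ext = emissivity * t_earth               # 270 K seen by the antenna

g1 = 0.95                                  # antenna radiation efficiency
g2 = 10.0 ** (-1.0 / 10.0)                 # 1 dB feed loss as a gain (about 0.79)
g12 = g1 * g2                              # about 0.75
t_phys = 180.0
t_ant = (1.0 / g12 - 1.0) * t_phys         # loss noise, Eqn. (2.6.19)

t_source = (t_ext + t_ant) * g12           # referred to the antenna output
print(round(t_ant), round(t_source))       # 59 248
```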
The receiver chain can be characterized in terms of noise figure or noise
temperature. For illustration, we will consider both procedures. Let us first seek
the receiver equivalent (input) noise temperature Te, leading to a total operating
noise temperature Top as in Eqn. (2.6.25). This would be appropriate if the radar
equation were written in the form Eqn. (2.6.27). The receiver chain is expanded
in Fig. 2.18.

The equivalent input noise temperature of each 1 dB loss follows from
Eqn. (2.6.19), taking the varying physical temperatures into account. The noise
factors then result from Eqn. (2.6.31). Those of the two amplifiers follow from
Eqn. (2.6.32). The noise temperature ratio t = 1.5 of the mixer yields an output
noise temperature Eqn. (2.6.48) of To = 435 K, were the input to have a
temperature 290 K. Considering the −5 dB gain, this yields an equivalent input
noise due to the mixer internal noise
Te = 435/0.316 − 290 = 1086 K
and a corresponding noise factor from Eqn. (2.6.31). (Mixer noise temperatures
being high, the ratio t is a numerically more convenient quantity.)
Stage:   circulator  RF amp  cable  mixer  loss   IF amp
G:       0.79        100     0.79   0.32   0.79   10^6
F:       1.16        2.51    1.22   4.74   1.36   2.00
Te (K):  47          438     65     1086   104    289

Figure 2.18 Parameters in receiver Fig. 2.16. G: Available power gain. F: Noise factor. Te:
Equivalent input temperature of self noise.
Cascading the various temperatures Te back to the source point, where Ts is
taken, using Eqn. (2.6.26) yields a receiver equivalent input value

Te = 47 + 552 + 1 + 17 + 5 + 18 = 640 K
Then, from Eqn. (2.6.25), Top = 248 + 640 = 888 K would be used in Eqn. (2.6.27). The
cascade relation makes clear the deleterious effect of extraneous losses early in
the receiver chain, and the importance of an early low noise gain, especially
before the lossy mixer stage.
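The full cascade of Fig. 2.18 can likewise be checked numerically with Eqn. (2.6.26); the stage gains and equivalent temperatures below are those given in the text:

```python
# Stages of Fig. 2.18 as (available gain, equivalent input temperature Te in K).
# Gains in dB: circulator -1, RF amp +20, cable -1, mixer -5, loss -1, IF amp +60.
db = lambda x: 10.0 ** (x / 10.0)
stages = [
    (db(-1), 47.0),     # circulator loss at 180 K
    (db(20), 438.0),    # RF amplifier, F = 4 dB
    (db(-1), 65.0),     # cable loss at 250 K
    (db(-5), 1086.0),   # mixer, t = 1.5
    (db(-1), 104.0),    # extraneous loss at 400 K
    (db(60), 289.0),    # IF amplifier, F = 3 dB
]

te, g = 0.0, 1.0
for gain, t in stages:
    te += t / g          # refer each stage's noise to the cascade input
    g *= gain

t_source = 248.0
print(round(te), round(t_source + te))  # 640 888
```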
Alternatively, proceeding in terms of noise figures, the cascade relation
Eqn. (2.6.46) yields

F = 1.16 + (2.51 − 1)/0.79 + (1.22 − 1)/79 + (4.74 − 1)/63 + (1.36 − 1)/20 + (2.00 − 1)/16
  = 3.21

in agreement with F = 1 + Te/T0 = 1 + 640/290 = 3.21.
Finally we now have the "simple" radar equation, Eqns. (2.6.27), (2.6.34), and
(2.6.37), which of course is not simple at all, since its parameters embody a wealth
of complexity, and in any particular case are not easy either to calculate or to
measure:

SNR0 = PtGσAe/[(4πR²)²kFopTsBn]    *(2.7.1)
where Fop is the operating noise factor and Ts is the total source equivalent noise
temperature, including ohmic noise generated in the antenna, both at the radar
carrier frequency. (In case of any impedance mismatch, the loss factor Lm should
be included in the denominator.) If it has been decided what value of SNR0 is
required for reliable detection with tolerable false alarm rate, or for adequate
performance more generally, this equation indicates the trade-offs among the
system parameters, the target characteristics (σ), and the maximum range.
Consideration of these trade-offs leads directly to the concept of the matched
filter, to be developed in Chapter 3.
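The trade-offs in Eqn. (2.7.1) can be explored numerically; a minimal sketch, in which all parameter values are hypothetical and chosen only to exercise the formula:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def snr_point_target(p_t, g, sigma, a_e, r, f_op, t_s, b_n):
    """Output SNR of Eqn. (2.7.1):
    SNR0 = Pt * G * sigma * Ae / ((4*pi*R^2)^2 * k * Fop * Ts * Bn)."""
    return (p_t * g * sigma * a_e) / ((4.0 * math.pi * r ** 2) ** 2
                                      * k * f_op * t_s * b_n)

# Hypothetical values: 1 kW pulse, 35 dB gain, 1 m^2 target, 1 m^2 aperture,
# 10 km range, Fop = 3.2, Ts = 250 K, 20 MHz bandwidth.
snr = snr_point_target(p_t=1e3, g=10.0 ** 3.5, sigma=1.0, a_e=1.0,
                       r=10e3, f_op=3.2, t_s=250.0, b_n=20e6)
print(round(10.0 * math.log10(snr), 1), "dB")
```

Doubling the range R costs a factor of 16 in SNR, which is the barrier discussed at the opening of Chapter 3.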
The development of the radar equation above assumes only a single pulse is
available for processing. If only a single pulse is used to make the decision as to
presence or absence of a target at some range, a signal to noise ratio SNR0 at
the detection point of the order of 15 dB is required for reliable operation.
Normally, however, measurement using more than one pulse is used, with the
power from the multiple pulses averaged before a decision is taken. In that case,
the signal power might be assumed constant from one pulse to another, while
the noise power fluctuates randomly. Alternatively, the signal power itself due to
a possible target might be assumed to fluctuate in accord with some stated
statistical behavior. This latter situation is similar to that discussed above in
defining the specific backscatter coefficient for an extended scene, in which case
the objective is to estimate the mean of the backscattered power for each scene
element.
Calculation of the SNR needed for each single pulse in order that the average
of some number of pulses behave reasonably as a detection criterion for point
targets has been carried through in detail for cases of interest in practice. Various
statistical assumptions about the nature of the underlying target randomness are
analyzed. The single pulse SNR needed for detection in the multiple pulse case
is of course less than needed if only the single pulse itself is to be used. In rough
terms, the single pulse SNR required decreases by the square root of the number
of pulses whose power is averaged before a decision is taken. As a specific case,
for a simple hard target with 300 pulse powers averaged, a SNR of 0 dB yields
adequate performance, while 16 dB is required for a single pulse decision. The
subject is elegant and thoroughly analyzed (DiFranco and Rubin, 1968), but we
will not pursue it further.
2.8
geometrical area of the scene as a random variable, with a mean σ0 which in
general varies from one scene resolution element to another. The quantity of
interest in the radar system is then not the deterministic power of a single echo
pulse received in response to a target with some deterministic cross section σ,
but rather the (ensemble) average power for a single pulse with terrain in view
having average specific cross section σ0, which will generally depend on which
scene elements are in question.
In those terms, the radar equation of the previous section, Eqn. (2.7.1 ),
becomes
*(2.8.1)
where the integration is over the terrain illuminated by the antenna beam and
sidelobes, and we take account that the effective receiver aperture depends on
the direction from which the received field impinges. Taking account that, for a
receiving antenna,

Ae(θ, φ) = G(θ, φ)λ²/4π

this is the usual radar equation for a distributed target (Ulaby et al., 1982, p. 463).
This form Eqn. (2.8.1) of the radar equation, appropriate for average power
received from a distributed target, expresses the average power due to terrain
backscatter as it competes with average thermal noise. However, any particular
realization of a SAR image will use as data particular realizations of the (random)
received power for each pulse used in the processing, and each pulse will in turn
involve some particular realization of the random variable σ0 in each scene
element. The processed image will have intensity in each image element which
is some realization of a random process, whereas what we want in each image
element is the value of the mean backscatter σ0 for that element. The discrepancy
is speckle noise, and results in a mottled appearance of the SAR image of a
terrain which is nominally homogeneous.
The fact of speckle is inherent in the nature of the radar signal itself, whose
voltage is the result of random interference of the backscattered electric field from
the multitudinous facets of a distributed scene, as discussed in Section 2.3. In
remote sensing applications, it is necessary to reduce the speckle noise in the
image, and this is done by averaging multiple realizations of the backscatter
coefficient from the same scene element. In Section 5.2 we will discuss a means
for doing that, and the resulting statistical improvement of the smoothed image.
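The effect of look averaging on speckle can be illustrated with a small simulation. Under the usual assumption that single-look intensity over a homogeneous scene is exponentially distributed about its mean σ0, averaging N independent looks reduces the relative spread by a factor of the square root of N; the scene mean, look count, and pixel count below are hypothetical:

```python
import random

random.seed(1)

# Single-look intensity: exponential, so its standard deviation equals its mean.
sigma0, n_looks, n_pixels = 1.0, 16, 20000

def look_average(n):
    """Average of n independent single-look intensities for one pixel."""
    return sum(random.expovariate(1.0 / sigma0) for _ in range(n)) / n

pixels = [look_average(n_looks) for _ in range(n_pixels)]
mean = sum(pixels) / n_pixels
var = sum((p - mean) ** 2 for p in pixels) / n_pixels
print(round(mean, 2), round(var ** 0.5, 2))  # mean near 1.0, spread near 0.25
```

With 16 looks the pixel-to-pixel spread falls from 100% of the mean to about 25%, at the cost of spatial resolution; Section 5.2 makes this trade precise.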
The quantity SNR0 in the form Eqn. (2.8.1) of the radar equation says nothing
directly about speckle noise, but affects the relative influence of speckle. Unlike
the case of detection of point targets, for detection of distributed targets one can
only seek to set a value of SNR0 from the radar equation such that thermal noise
is not the dominant noise effect in the image. Further processing designed to
defeat speckle will then be relatively more effective in improving the image for
remote sensing use.
Since the distributed target radar equation serves the general purpose of
expressing the mean influence of receiver noise on the image, over some ensemble
of random images, it is useful to assume that the radar views a homogeneous
scene, in the sense that the mean backscatter coefficient σ0 is constant over the
scene, and the same for each position of the radar. The radar equation,
Eqn. (2.8.1), then appears as
*(2.8.2)
with the integral taken over the footprint of the radar beam on the earth.
Equation (2.8.1), and its special case Eqn. (2.8.2), are exact, insofar as the
parameters can be precisely specified. It is informative to recast them in various
other forms, however. Although only approximate, these reveal the role of
various parameters more readily related to SAR systems and the resulting
images than the parameters of the exact equations. We will now develop two
of these alternative forms.
In normal SAR imaging situations, as in Fig. 1.6, we can approximate R as
constant and equal to the slant range at midswath. The cross beam extent of the
footprint is by definition the region of terrain over which the antenna gain is
appreciable. We might take the gain G(θ, φ) as approximately constant at the
midbeam value G, the parameter in the radar equation, over the 3 dB azimuth
beamwidth θH, and zero outside the beam. In the range dimension, the
appropriate limit for the footprint is related to the time extent of the radar
pulse. This is because the radar return voltage at any instant, in the case of a
distributed target, is comprised of contributions from a slant range span
ΔR = cτp/2, corresponding to the radar pulse time width, projected on the
horizontal using the incidence angle η. Then approximately:
*(2.8.3)

This is a form which has been called the SLAR radar equation (Ulaby et al.,
1982, p. 572). Equation (2.8.3) expresses the average SNR of a single radar pulse
viewing a distributed scene.

The SAR form follows by introducing the average power Pav over both the
on time τp and off time Tp − τp of the pulse. The terrain point in question is in
view for a time given by Eqn. (1.2.6), from which NA = S/P follows. Finally,
assuming the nominal fully focussed resolution Eqn. (1.2.7), δx = La/2, there
follows

(2.8.7)

Using Eqns. (2.8.6) and (2.8.7) in Eqn. (2.8.5) there results finally (after
recalling fpTp = 1):

*(2.8.8)

This is the SAR radar equation in Cutrona (1970). It expresses the average signal
to thermal noise ratio of a SAR image point whose mean backscatter coefficient
is σ0. It is valuable as an indicator of the role of its various parameters. (Note
for example that the azimuth resolution δx does not appear.) However, it will be
appreciated from the use of simple nominal relations in its derivation that it
should not be used for numerical calibration work.
In Section 7.6 we will investigate more fully SNR and calibration considerations in SAR images. We now pass on to development of the basis for the SAR
imaging algorithms.
REFERENCES
Barton, D. K. (1988). Modern Radar System Analysis, Artech House, Norwood, MA.
Bohm, D. (1951). Quantum Theory, Prentice-Hall, Englewood Cliffs, NJ.
Colwell, R. N., ed. (1983). Manual of Remote Sensing, American Society of Photogrammetry,
Falls Church, Virginia.
Cutrona, L. J. (1970). "Synthetic Aperture Radar", Chapter 23 in Radar Handbook
(Skolnik, M. I., ed.), McGraw-Hill, New York.
DiFranco, J. V. and W. L. Rubin (1968). Radar Detection, Prentice-Hall, Englewood Cliffs,
NJ. (Reprinted by Artech House, Dedham, MA, 1980.)
Elachi, C. (1987). Introduction to the Physics and Techniques of Remote Sensing, Wiley,
New York.
Gagliardi, R. (1978). Introduction to Communications Engineering, Wiley, New York.
Gradshteyn, I. S. and I. M. Ryzhik (1980). Table of Integrals, Series, and Products,
Academic Press, New York.
Hogg, D. C. and W. W. Mumford (1960). "The effective noise temperature of the sky,"
The Microwave Journal, 3(3), pp. 80-84.
Kennard, E. H. (1938). Kinetic Theory of Gases, McGraw-Hill, New York.
Lawson, J. L. and G. E. Uhlenbeck (eds.) (1950). Threshold Signals, McGraw-Hill, New
York.
Meyer-Arendt, J. R. (1968). "Radiometry and photometry: Units and conversion factors,"
Applied Optics, 7(10), pp. 2081-2084.
Nicodemus, F. E. (1967). Radiometry. Chapter 8 in Applied Optics and Optical
Engineering, Academic Press, New York.
Page, L. (1935). Introduction to Theoretical Physics, Van Nostrand, New York.
Pettai, R. (1984). Noise in Receiving Systems, Wiley, New York.
Ridenour, L. N., editor-in-chief, MIT Radiation Laboratory Series, McGraw-Hill, New
York, Vols. 1-28. Various titles and dates.
Sherman, J. W. III (1970). "Aperture-antenna analysis," Chapter 9 in Radar Handbook
(Skolnik, M. I., ed.), McGraw-Hill, New York, pp. 9.1-9.40.
Silver, S., ed. (1949). Microwave Antenna Theory and Design, McGraw-Hill, New York.
Skolnik, M. I., ed. (1970). Radar Handbook, McGraw-Hill, New York.
Skolnik, M. I. (1980). Introduction to Radar Systems, McGraw-Hill, New York.
Skolnik, M. I. (1985). "Fifty years of radar," Proc. IEEE, 73(2), pp. 182-197.
Slater, P. N. (1980). Remote Sensing-Optics and Optical Systems, Addison-Wesley,
Reading, MA.
Stutzman, W. L. and G. A. Thiele (1981). Antenna Theory and Design, Wiley, New York.
Ulaby, F. T., R. K. Moore, and A. K. Fung (1981). Microwave Remote Sensing, Vol. 1,
Addison-Wesley, Reading, MA.
Ulaby, F. T., R. K. Moore, and A. K. Fung (1982). Microwave Remote Sensing, Vol. 2,
Addison-Wesley, Reading, MA.
van der Ziel, A. (1954). Noise, Prentice-Hall, New York.
Whalen, A. D. (1971). Detection of Signals in Noise, Academic Press, New York.
3.1
In Chapter 2 the basic functional units of a radar system were discussed. The
transformation of power fed to the antenna input by the transmitter into power
at the receiver output due to scattering from a target was described. The
competing influence of thermal noise was emphasized. The result of the
development was the (point target) radar equation, Eqn. (2.7.1). Its specialization
to a side-looking radar viewing a spatially extended terrain appears as
Eqn. (2.8.1), an approximate form of which is the side-looking aperture radar
(SLAR) equation, Eqn. (2.8.3). Finally, drawing upon some nominal relations
for synthetic aperture radar systems from Chapter 1, the SAR radar equation,
Eqn. (2.8.8), was developed.
In this chapter, we want to describe some developments in radar signal
processing which helped overcome the limitations implied by the point target
equation, Eqn. (2.7.1). The discussion will lay the basis for later description of
SAR imaging algorithms. Ultimately, a clear understanding of the simple SAR
relations of Section 1.2, underlying the SAR radar equation, Eqn. (2.8.8), will
evolve.
The discussion begins with the development of the matched filter. (The
terminology is not meant to imply any connection with the question of
impedance matching discussed in Section 2.6.2.) The matched filter is important
in its own right, but it is of considerable interest also in pointing the way towards
the solution of a fundamental problem in early radar systems: the conflict
between detectability and resolution.
After developing the matched filter, and examining its target resolution
properties, we discuss the procedure of pulse compression from a point of view
The point target radar equation, Eqn. (2.7.1), indicates the main trade-offs
available in a simple radar system. Early radars had ranges for targets of interest
such as aircraft which were rather short for surveillance and warning purposes.
Interest therefore centered on extending the range R for targets with specified
values of cross-section σ, while realizing some specified adequate value of output
signal to noise ratio SNR0. An apparent barrier was the fact that all of the
remaining parameters of the radar equation are limited by available hardware
technology.

The transmitter power Pt, the average power while the radar pulse is turned
on, is limited by the capability of RF power generation technology. Even if
possible, its increase is costly, and involves scaling up components which are
already large and costly. The antenna gain G has a theoretical maximum value
(ρ = 1) related to the antenna physical area A by G = 4πA/λ², as in Eqn.
(2.2.24), so that the antenna linear extent La relative to a wavelength is
controlling. It is difficult to build antennas with ratios La/λ greater than a few
hundred, and this practical limit was reached early on. The receiving aperture Ae is
directly related to the gain by G = 4πAe/λ², and is not an independent parameter.
The source noise temperature Ts is largely imposed externally, while the receiver
noise figure Fop depends on the technology of the time, and is reducible only
to some limited extent. Finally, the receiver bandwidth must be wide enough
to pass the transmitter pulse, so that roughly Bn ≈ 1/τp, where τp is the on
time of the transmitter. This latter would appear to be limited by the required
slant range resolution δR of the radar: τp ≤ 2δR/c. If a pulse length τp larger
than this limiting value is used, two targets separated by δR in range will create
overlapping returns in the receiver, which may not be distinguished as arising
from two separate targets.
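The bandwidth/resolution constraint above can be sketched directly; the 15 m resolution value is hypothetical, chosen only for illustration:

```python
c = 3.0e8  # speed of light, m/s

def max_pulse_length(delta_r):
    """Longest uncompressed pulse that still resolves two targets
    separated by delta_r in slant range: tau <= 2 * delta_r / c."""
    return 2.0 * delta_r / c

def required_bandwidth(tau):
    """Rough receiver bandwidth needed to pass the pulse: Bn ~ 1/tau."""
    return 1.0 / tau

tau = max_pulse_length(15.0)         # 15 m slant range resolution
print(tau, required_bandwidth(tau))  # 0.1 microsecond and 10 MHz
```

The short pulse that resolution demands carries little energy, which is exactly the conflict that pulse compression, developed below, resolves.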
One further possibility remains. All of the development of Chapter 2 assumed
that the receiver did nothing more sophisticated than amplify the input signal
(and noise), while adding its own noise contribution. The receiver frequency
response function was taken to be essentially constant over some band
appropriate to the signal. The earliest aim of radar signal processing, as distinct
from radar signal observation, was to determine how the receiver might be
more effective than a simple amplifier. The fundamental advance which resulted
was the technology of pulse compression. This is the exact one-dimensional
analog of SAR image formation processing, and its development will lead directly
to SAR algorithms. We begin with an earlier development which is related, that
of the "matched filter".
3.1.1
In a classic study, North (1963) considered the following problem. Suppose the radar transmits a waveform s(t). This is intercepted by a target at some range R and scattered back to the receiver, where it arrives with time delay τ = 2R/c. Assume the idealized situation that only the pulse amplitude is changed in the process. The receiver input is thus

r(t) = a s(t − τ) + n(t),    (3.1.1)

where n(t) is the waveform of the combined source noise and equivalent receiver noise referred to the receiver input. The noise n(t) is assumed to be white (i.e., to have a constant power spectral density N W/Hz, one-sided, over the receiver band).

We are at liberty to choose the receiver to be any linear, time invariant system we please, so that the receiver transfer function H(jω) is to be chosen. In order that we perceive the target to be present, and assign to it the correct range R, we want the power output of this receiver at time τ to be as high as possible a "bump" above the surrounding "grass", characterized by the average value of the noise power at the output (Fig. 2.2). We have no direct interest in the receiver power output at times other than the time the target return is received. The receiver itself contributes no noise, since the input noise n(t) includes equivalent receiver self noise.

The mathematical problem to be solved is thus to choose the transfer function H(jω) such that (where we allow complex time waveforms for generality and use ensemble expectation E) the quantity

SNR₀ = |g_s(τ)|² / E|g_n(τ)|²

is maximum. Here g_s(t) and g_n(t) are the receiver outputs for signal and noise inputs respectively. We take g_s(t) to be deterministic, and use the fact that the noise n(t) has zero mean, so that the random variable g_n(t) also has zero mean.

The output of a linear stationary system with input f(t) and transfer function H(jω) is the convolution (Appendix A)

g(t) = ∫_{−∞}^{∞} h(t − t′) f(t′) dt′,    (3.1.2)

where h(t) is the system unit impulse response, the inverse Fourier transform of the transfer function. Hence, with signal a s(t − τ) as input, we have the output value

g_s(τ) = a ∫_{−∞}^{∞} h(τ − t′) s(t′ − τ) dt′,

while the output noise power is

E|g_n(t)|² = (N/2) ∫_{−∞}^{∞} |H(jω)|² df = (N/2) ∫_{−∞}^{∞} |h(t)|² dt,    (3.1.3)

where N/2 is the two-sided noise density and we have used the Parseval relation in the last step.

From Eqn. (3.1.1), as the quantity to be maximized we take the output signal to noise ratio SNR₀. Using Eqn. (3.1.2) and Eqn. (3.1.3), with a change of variable of integration in the former, this is

SNR₀ = (2a²/N) |∫_{−∞}^{∞} h(t) s(−t) dt|² / ∫_{−∞}^{∞} |h(t)|² dt.    (3.1.4)

By the Schwarz inequality, the numerator integral is bounded as

|∫_{−∞}^{∞} h(t) s(−t) dt|² ≤ ∫_{−∞}^{∞} |h(t)|² dt ∫_{−∞}^{∞} |s(t)|² dt,    (3.1.5)

so that

SNR₀ ≤ (2a²/N) ∫_{−∞}^{∞} |s(t)|² dt = 2E/N,    (3.1.6)

where E is the total energy of the received pulse a s(t − τ). Since the choice h(t) = k s*(−t) attains this upper bound, that filter impulse response is the choice which maximizes the receiver output SNR. Since k is arbitrary, we can choose k = 1, and obtain just

h(t) = s*(−t).    *(3.1.7)

Equivalently, in the frequency domain,

H(f) = ∫_{−∞}^{∞} s*(t) exp(j2πft) dt = S*(f),    *(3.1.8)

the complex conjugate of the signal spectrum. This is the "matched filter".
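The maximizing choice h(t) = s*(−t) is easy to exercise numerically. In the sketch below, the waveform, delay, noise level, and all other parameters are illustrative assumptions, not values from the text: an attenuated echo of a short waveform is buried in white noise, and correlating the record with s, which is convolution with s*(−t), produces a peak at the true delay.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay = 1000, 300                      # illustrative record length and echo delay
t = np.arange(64) - 32
s = np.exp(1j * np.pi * 0.01 * t**2)      # any waveform works; here a short chirp
r = np.zeros(n, complex)
r[delay:delay + 64] = 0.5 * s             # attenuated echo a*s(t - tau), a = 0.5
r += 0.4 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # white noise
# Matched filtering: convolving with h(t) = s*(-t) is correlating with s
g = np.correlate(r, s, mode="valid")      # g[k] = sum_j r[k+j] * conj(s[j])
print(int(np.argmax(np.abs(g))))          # peak falls at the true delay
```

Any pulse shape gives the same behavior; the matched filter maximizes output SNR regardless of the waveform transmitted.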
The attained maximum output SNR, Eqn. (3.1.6), can be written in terms of the pulse energy

E = P_s τ_p,

where P_s is the average power over the signal duration τ_p. If the noise bandwidth of the receiver is B_n, then the attained average output SNR is

SNR₀ = (B_n τ_p)(P_s / P_n),

where P_n = N B_n is the average input noise power. Thus the matched filter achieves an SNR increase equal to the bandwidth time product of the transmitted pulse. Assuming use of a matched filter in the receiver, the radar equation, Eqn. (2.7.1), becomes the matched filter radar equation (3.1.9), in which the energy E_t = P_t τ_p of the transmitted pulse appears in place of the product of transmitter power and noise bandwidth; here τ_p is the pulse length.

For a simple transmitter pulse, say a rectangular envelope burst of RF carrier, the pulse duration and bandwidth relate as τ_p ≈ 1/B, with B some reasonable measure of bandwidth, say the noise bandwidth B_n. Then the matched filter radar equation, Eqn. (3.1.9), is just the simple radar equation, Eqn. (2.7.1): the pulse bandwidth time product is unity. In the case of a simple transmitter pulse, the development of the matched filter formalism therefore added little of practical importance. However, the solution to the above optimization problem, the matched filter, provided a precise foundation upon which to base understanding of some ad hoc procedures. In the earlier form of the radar equation, Eqn. (2.7.1), the noise bandwidth B_n appears. It was clear that this bandwidth should somehow be optimized to improve SNR. Obviously, the receiver band should be adjusted so that in some sense "most of" the signal pulse is passed but the noise is blocked "as much as possible". This led to procedures for tailoring the receiver passband.

3.1.2 Resolution Questions
In Section 3.1.1 the radar equation was derived assuming a matched filter receiver. In the case of a simple transmitter pulse, for which pulse duration τ_p and bandwidth B are related nominally by B = 1/τ_p, the result Eqn. (3.1.9) is essentially the same as the radar equation developed earlier, Eqn. (2.7.1). That is, the simple receiver with uniform response over its passband is nearly the matched filter for this case. On the other hand, the matched filter radar equation in general involves the energy of the transmitted pulse, and thereby its time duration for a fixed (and limited) available transmitter power, but nowhere does the pulse bandwidth appear. This is a significant difference, and the difference has to do with target resolution. Use of a matched filter opens the possibility to use a long high energy pulse for good SNR, but without sacrificing resolution. The resolution expression δR = cτ_p/2 is no longer in effect, as we shall now see.

To determine resolution, we need to find the extent to which a point target in space is "smeared out" by viewing it through the radar sensing system. With no signal processing, a point target produces a response at the receiver output which is essentially the time history of average power of the transmitted pulse, which has width τ_p. Thus, two point targets separated in slant range by less than δR = cτ_p/2 will produce receiver outputs which overlap in time. Such a response is impossible to distinguish from a return due to a single target of space extent wider than a point. It cannot be guaranteed that two targets closer together in range than cτ_p/2 will be distinguished as two targets. This is the resolution limit of the simple radar system.
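As a numeric illustration of the limit δR = cτ_p/2 (the numbers below are assumed for illustration, not taken from the text): a simple pulse achieving 25 m slant range resolution can be no longer than about 0.17 µs, while the bandwidth route to the same resolution, δR = c/2B, requires only about 6 MHz at any pulse length.

```python
c = 2.998e8               # speed of light, m/s
dR = 25.0                 # desired slant range resolution, m (illustrative)
tau_max = 2 * dR / c      # longest simple pulse still giving dR
B_needed = c / (2 * dR)   # bandwidth giving dR with pulse compression
print(tau_max, B_needed)
```

The point of the sections that follow is precisely that the second route, a long pulse of bandwidth B_needed, keeps the energy of a long pulse while achieving the resolution of a short one.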
On the other hand, suppose a matched filter processor is used. An isolated point target produces a response s(t − τ) at the filter input, where τ = 2R/c is the delay since transmission. The corresponding filter output is the convolution

g(t) = ∫_{−∞}^{∞} s*(t′ − t) s(t′ − τ) dt′.    (3.1.10)

Shifting origin to center the response at time τ, and making a change of variable, this is

g(t) = ∫_{−∞}^{∞} s*(t′ − t) s(t′) dt′.    (3.1.11)

The time width of this function g(t), the matched filter output in response to a point target, controls the resolving capability of the system. That width depends on the details of the transmitted pulse. For example, if the pulse is a simple burst of RF, so that (using complex notation) s(t) = a(t) exp(jω₀t) with a(t) = 1, |t| ≤ τ_p/2, the output is the triangle pulse

g(t) = (τ_p − |t|) exp(jω₀t),    |t| ≤ τ_p,    (3.1.12)

of time width comparable to the pulse length itself. If instead the pulse spectrum has essentially constant (unit) magnitude over a band of width B about the carrier f_c, the output is of the form

g(t) = 2B cos(2πf_c t) sin(πBt)/(πBt),    (3.1.13)

which has a power function |g(t)|² of width δt ≈ 1/B at the 3 dB point. Thus, the time resolution in the matched filter output in this case has nothing to do with input pulse length τ_p, but only pulse bandwidth B. The width of the matched filter input τ_p would be the time resolution afforded without matched filter processing, while the time resolution with processing is 1/B. The ratio of these, the pulse "compression" ratio afforded by matched filter processing, is the bandwidth time product Bτ_p of the transmitted pulse.

The important point is that use of a matched filter, in addition to enhancing detectability by maximizing receiver output SNR, decouples pulse length from range resolution. Therefore, long pulses of tolerable average power can be used to obtain large energy E = P_s τ_p for satisfying the detectability requirements, while at the same time a wide bandwidth can be used to obtain good resolution.
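The compression ratio Bτ_p can be demonstrated directly. The sketch below uses illustrative, assumed parameters: a complex linear FM (chirp) pulse of length τ_p = 1 s and bandwidth B = 100 Hz is passed through its matched filter, and the 3 dB width of the output power function is measured.

```python
import numpy as np

fs, B, tau = 1000.0, 100.0, 1.0                 # illustrative parameters, B*tau = 100
t = np.arange(-tau/2, tau/2, 1/fs)
K = B / tau                                     # chirp rate, Hz/s
s = np.exp(1j * np.pi * K * t**2)               # complex linear FM, unit amplitude
g = np.correlate(s, s, mode="full")             # matched filter output (autocorrelation)
p = np.abs(g)**2
width = np.count_nonzero(p >= p.max()/2) / fs   # 3 dB width of |g|^2, in seconds
print(tau / width)                              # ratio of input to output width
```

The printed ratio is on the order of Bτ_p = 100: the output width is set by 1/B, not by the 1 s input pulse length.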
One way to obtain a pulse that is simultaneously long and wideband is a sequence of N contiguous bursts stepped in frequency,

s_i(t) = cos[2π(f_c + iΔf)t],    Δf = B/N.

The pulse most often used to do this job, however, is the linear-FM, or "chirp", pulse

s(t) = cos[2π(f_c t + Kt²/2)],    |t| ≤ τ_p/2.    (3.1.14)

For a target in motion along the radar line of sight, the received pulse is Doppler shifted,

s_r(t) = cos[(ω_d + ω_c)(t − τ) + πK(t − τ)²],

where the Doppler shift is ω_d = −4πṘ/λ, with Ṙ the target range rate and ω_c = 2πf_c the carrier frequency. The matched filter for the transmitted pulse, which is what will have been designed into the receiver, has impulse response h(t) = s*(−t). The resulting matched filter output function, shifted for convenience of notation to have time origin at τ, is

f(t, f₀) = exp(jω_d t) ∫_{−∞}^{∞} s*(t′ − t) s(t′) exp(j2πf₀ t′) dt′,    (3.1.15)

where f₀ = ω_d/2π.
The squared magnitude of this output defines the ambiguity function of the waveform,

χ(t, f₀) = |f(−t, f₀)|².    (3.1.16)

As an example, for the linear FM pulse Eqn. (3.1.14), a contour of the ambiguity function is shown in Fig. 3.1. For a target with no motion relative to the radar line of sight, f₀ = 0 and the time width of the matched filter output power function is nominally 1/B, as developed above. For a target known to be at some particular range R = cτ/2, t = 0 and the Doppler shift due to target motion can be measured with resolution nominally 1/τ_p. The locus of the peak of the ridge of the function Eqn. (3.1.16) is f₀ = Kt, so that a target which is in fact in motion with a consequent Doppler shift f₀ will be assigned to a range different from its true range by an amount ΔR = (c/2)(f₀/K). This is the source of the adjective "ambiguity" in ambiguity function. A target which is at some particular range and moving may (and will, for this example of the linear FM) appear the same at the matched filter output as a target at some other range which is not moving. Another way to say this is that the linear FM wave has frequency and time "locked" together. A frequency shift Δf at the matched filter input causes a time shift Δt = Δf/K at the output.

Figure 3.1 Contour of the ambiguity function of the linear FM pulse; the ridge of the function lies along the locus f₀ = Kt.

Extensive discussion of ambiguity function analysis can be found in the references mentioned. Since the systems of interest to us in this book all use the linear FM pulse, we need not pursue the matter further in generality. We will later return to some specific results as needs arise.

3.2 PULSE COMPRESSION

As we discussed in Section 3.1.2, the matched filter output, Eqn. (3.1.13), realizes time compression of pulses of unit (or at least constant) spectrum magnitude in the ratio of the bandwidth time product Bτ_p. This is the case in particular for the common linear FM waveform. That the two objectives of SNR maximization and resolution improvement (by compression) are realized by the matched filter follows because, if the transmitted waveform s(t) has spectrum

S(f) = A(f) exp[jψ(f)],

the matched filter is

H(f) = S*(f) = A(f) exp[−jψ(f)],    (3.2.1)

while on the other hand, as we will discuss below, the general pulse compression filter is

H(f) = [1/A(f)] exp[−jψ(f)],    A(f) ≠ 0.    (3.2.2)

The two filters Eqn. (3.2.1) and Eqn. (3.2.2) are identical provided A(f) = 1 over the signal band, or at least A(f) = const.

Looking ahead to application to imaging algorithms, it is desirable to consider pulse compression processing in its own right, apart from considerations of detection and matched filtering. We begin with a development which will generalize to SAR image formation, and then develop some material of later use having to do with the properties of the linear FM waveform, and with some modifications of the compression filter to alleviate time sidelobes in its response.
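The range displacement ΔR = (c/2)(f₀/K) produced by an uncompensated Doppler shift, noted in Section 3.1.2, is easily evaluated. The chirp parameters below are illustrative, Seasat-like assumptions, not values given in the text.

```python
c = 2.998e8                # speed of light, m/s
B, tau = 19.0e6, 33.8e-6   # chirp bandwidth (Hz) and length (s), illustrative
K = B / tau                # chirp rate, Hz/s
f0 = 1000.0                # assumed target Doppler shift, Hz
dt = f0 / K                # time shift of the matched filter output
dR = (c / 2) * dt          # apparent range displacement, m
print(dR)
```

For these numbers a 1 kHz Doppler shift moves the target by only a fraction of a meter; the displacement grows as the chirp rate K decreases.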
3.2.1

Basic to all that follows is the requirement that the system we deal with be linear (but not necessarily time invariant in its properties). We will first discuss the linearity of the radar hardware and signal processing, and then describe the target features which enter linearly into the radar received signal.
Radar systems are designed and operated to be linear in the various voltage
waveforms, at least up to the output of the IF amplifier and filter stages. In the
coherent radars of later interest to us, the (nonlinear) operation of average
power formation at the IF output is replaced with the linear operation of
"quadrature demodulation", also called "I, Q detection", or "complex
basebanding". In this, the high frequency structure of the IF output is stripped
away by shifting the signal to a frequency band centered on zero frequency,
leaving the low frequency envelope waveform (Whalen, 1971, Chapter 3). As a
result, all the operations in an imaging radar and its associated signal processing
are designed to be strictly linear. The only exception is the final operation of
forming the real image intensity from the signal processor output, the so-called
"complex image".
In the radar range equation, Eqn. (2.7.1), the target cross-section σ appears. This is the area we impute to the target based on the power it scatters toward the receiver, under the assumption that the target is an isotropic scatterer (which it might or might not actually be). If multiple targets are in view, or if we view an extended region with multiple scattering elements, the receiver response will depend on the characteristics of all the targets. Since the electromagnetic field equations are linear in field strength, rather than in power, the cross-sections of the individual targets are not immediately appropriate for combining into a total response. In fact, as we discussed in Section 2.3, for extended targets with specific cross sections σ⁰(θ, φ), the superposition of mean elemental cross-sections by means of the expression Eqn. (2.3.4),

σ = ∫_A σ⁰(θ, φ) dA,

is only approximately correct, and only when interpreted with care, as discussed by Ulaby et al. (1982, p. 508). In order to preserve and make use of linearity, it is therefore more appropriate to deal with receiver voltage, rather than power. To that end, we want to describe the target in terms of its effect on electric field, rather than on average power. The Fresnel reflection coefficient ζ is the appropriate quantity to introduce (Ulaby et al., 1981, p. 73).
Consider an extended target of area A, which we will take as planar, and normal to the radar beam center (Fig. 3.2). Let E_in(x, y) be the electric field phasor incident at some point on A, and let E_s(x, y) be the corresponding reflected field. The incident field is assumed linearly polarized. Then (Ulaby et al., 1981, Chapter 2) the reflected field is also polarized, in the same direction, and is given in terms of the Fresnel reflection coefficient by

E_s(x, y) = ζ(x, y) E_in(x, y).    (3.2.3)

Figure 3.2 A terrain element of area A illuminated by an incident field E_in, giving rise to a scattered field E_s.

The received field phasor at the radar is the superposition of the fields scattered from the elements of A,

E_rec = const ∫_A E_s(x, y) [exp(−jkr)/r] dA,    (3.2.4)

where r is the distance from the terrain element to the radar. Further, using Eqn. (2.2.13), the incident field phasor at the terrain is given by Eqn. (3.2.5).
Combining Eqns. (3.2.3) through (3.2.5), the received field phasor takes the form

E_rec(R) = const ∫ ζ(R′) {exp[−j2k|R − R′|] / |R − R′|²} dA′.    *(3.2.6)

With r = |R − R′|, this is of the form of a convolution of the target (terrain) reflectivity coefficient ζ(R′) with the Green's function (impulse response)

h(R|R′) = const exp[−j2k|R − R′|] / |R − R′|².    (3.2.7)

It is through Eqn. (3.2.6) that the radar observable E_rec is linearly related to the terrain "complex image" elements ζ(R).

It is interesting to relate the (power) backscatter coefficient σ⁰ of the surface with the Fresnel (voltage) reflection coefficient ζ. Using the far field expression Eqn. (2.2.9) with θ = 0 (Fig. 2.4), the received field phasor Eqn. (3.2.4) is

E_rec = (1/λR) ∫_A E_s(x, y) dx dy.    (3.2.10)

Then

|E_rec|² = (1/λR)² |∫_A E_s(x, y) dx dy|² = (1/4πR²) D_A ∫_A |E_s(R′)|² dA′,    *(3.2.12)

introducing from Eqn. (2.2.19) the directivity D_A of the terrain region, considered as an aperture over which the field E_s is maintained. Further assuming a constant terrain illumination |E_in|², and recognizing that intensity I = |E|²/Z₀, Eqn. (3.2.8) yields the electromagnetic intensity at the receiving antenna as Eqn. (3.2.9). From Eqn. (2.3.3), the received intensity and the terrain backscatter coefficient σ⁰ are then related, linking σ⁰ to the mean square of ζ.

It is σ⁰(R) which is the terrain "image". Using the radar and processing system, one attempts to reconstruct it as nearly as feasible. This is done by first forming approximations to ζ(R), so-called complex images, and from them constructing a statistical estimate of their mean square, that estimate being taken as an estimate of the real image σ⁰(R). The associated calibration techniques are discussed in Chapter 7.

In the case of an extended target, then, our first objective is to produce the complex reflectivity distribution ζ(R′) of the target from the observed receiver voltage phasor function

v_s(R) = a E_rec(R),
where a is a system constant which we will absorb into the Green's function Eqn. (3.2.7). Combining Eqns. (3.2.6) and (3.2.7), we then can write the voltage output phasor of the linear receiver as the convolution

v_s(R) = ∫∫ h(R|R′) ζ(R′) dA′,    (3.2.13)
where R = cτ/2 is essentially the receiver voltage time variable, and the finite length of the target, or the finite coverage of the radar beam, will limit the interval of integration. In the one dimensional case this is

v_s(R) = ∫_{−∞}^{∞} h(R|R′) ζ(R′) dR′.    (3.2.14)

We want to determine the complex image ζ(R′) given the signal v_s(R) and the impulse response h(R|R′). Note that, if ζ = δ(R′ − R₀) represents a unit point target at range R₀, where δ is the Dirac delta function (unit impulse), the receiver response is v_s(R) = h(R|R₀). Thus the impulse response h(R|R′) can be calculated as the receiver output should the reflectivity function be an ideal impulse, since the receiver system is known.

The convolution v_s(R) is generally a two dimensional data set, with one dimension of R being time during each radar pulse, and the other being the position of the radar along its trajectory of motion, in the case of SAR. We will eventually deal with the signal processing involved in inverting the relation Eqn. (3.2.13), which is a Fredholm integral equation of the first kind. As a prelude, we seek an inverse function h⁻¹(R₀|R) satisfying

∫_{−∞}^{∞} h⁻¹(R₀|R) h(R|R′) dR = δ(R₀ − R′).    (3.2.15)

Linear processing of the received signal v_s(R) of Eqn. (3.2.14) with this operator yields

∫_{−∞}^{∞} h⁻¹(R₀|R) v_s(R) dR = ζ(R₀).    (3.2.16)

This is to say that the indicated operation on the received data v_s(R) exactly reconstructs the complex reflectivity distribution ζ(R) in view of the radar. The processing by h⁻¹(R₀|R) produces an image of the reflectivity distribution, and the operations involved in its application constitute an imaging algorithm. The processing amounts to correlating the received signal v_s(R) with a function h⁻¹(R₀|R) for various values corresponding to ranges R₀ = cτ/2 where the reflectivity function is to be determined.
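A one dimensional sketch of such an imaging operation follows; the chirp system response and all parameters are illustrative assumptions. Two point reflectors separated by far less than the pulse length are recovered by applying the in-band inverse 1/H(f), with zero response out of band, to the received signal.

```python
import numpy as np

fs, B, tau, n = 2000.0, 100.0, 0.5, 4096       # illustrative parameters
t = np.arange(-tau/2, tau/2, 1/fs)             # 1000-sample pulse
h = np.exp(1j * np.pi * (B/tau) * t**2)        # assumed system impulse response: a chirp
zeta = np.zeros(n, complex)
zeta[900], zeta[960] = 1.0, 0.8                # two reflectors, 60 samples apart
f = np.fft.fftfreq(n, 1/fs)
H = np.fft.fft(h, n)
V = np.fft.fft(zeta) * H                       # received signal v = h * zeta (circular model)
band = np.abs(f) <= B/2
Hinv = np.zeros(n, complex)
Hinv[band] = 1.0 / H[band]                     # inverse filter 1/H in band, zero outside
zhat = np.fft.ifft(V * Hinv) * n / band.sum()  # reconstructed (band limited) reflectivity
i = int(np.argmax(np.abs(zhat)))
print(i, abs(zhat[900]), abs(zhat[960]))       # reflectors recovered at their positions
```

The two reflectors, 60 samples apart against a 1000 sample pulse, emerge separated with nearly their original amplitudes; without the inverse filtering their chirp responses overlap completely.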
Let us now consider how to determine the inverse Green's function h⁻¹(R₀|R) from the specified Green's function h(R|R₀). Consider first the case that we have available the radar system output time function v_s(R) over the infinite time span (−∞, ∞), an assumption which we will obviously need to modify later. Suppose also (the actual situation for the current case of one dimensional "range" processing) that the radar, in addition to being a linear system, is time stationary, i.e., h(R|R₀) = h(R − R₀). (Here we have used a common abuse of notation in designating the one-variable function h(R) with the same letter as the two-variable function h(R|R₀).) Then defining a corresponding h⁻¹(R₀|R) = h⁻¹(R₀ − R), the convolution integral Eqn. (3.2.15) which we want to solve becomes

∫_{−∞}^{∞} h⁻¹(R₀ − R) h(R − R₀′) dR = δ(R₀ − R₀′),

or, with the change of variable x = R − R₀′,

∫_{−∞}^{∞} h⁻¹(R₀ − R₀′ − x) h(x) dx = δ(R₀ − R₀′).    (3.2.17)
Taking Fourier transforms of Eqn. (3.2.17) gives H⁻¹(f)H(f) = 1, so that

H⁻¹(f) = 1/H(f),    (3.2.18)

where we mean that h⁻¹(R) is the inverse Fourier transform of H⁻¹(f). The solution, Eqn. (3.2.18), is obvious in this simple case. The filter H⁻¹(f), which compresses the signal h(x) back to an impulse, simply undoes whatever the radar linear transfer function H(f) has done.

In the particular case that

H(f) = exp[jψ(f)],

we have

H⁻¹(f) = 1/exp[jψ(f)] = exp[−jψ(f)] = H*(f),

so that h⁻¹(R) = h*(−R), and we recover the matched filter as the compression processor. (Recall that R = ct/2 relates range and receiver signal time.) In the general case that |H(f)| ≠ 1, the compression processor is not the matched filter; the filter amplitude 1/|H(f)| ≠ |H(f)|.
3.2.2 The Principle of Stationary Phase

Cook and Bernfeld (1967, Chapter 3) have given a careful discussion relating the matched filter with compression processing. The developments there also make precise the relationship locking time with frequency for linear FM waveforms having large bandwidth time products, an important basic concept we have so far referred to only in passing. Since SAR processing mostly involves compression of linear FM waveforms, we will here summarize some points relating to the procedure. Much of the development involves an approximate way of calculating the spectrum of a time waveform.
Consider computing the spectrum

S(f) = ∫_{−∞}^{∞} a(t) exp[jφ(t)] exp(−j2πft) dt,    (3.2.19)

where a(t) exp[jφ(t)] is of the form of the complex envelope of a narrow band signal v(t) = a(t) cos[ω_c t + φ(t)] (Whalen, 1971, Chapter 3). The integration is in general not possible to carry out in closed form. However, the principle of stationary phase provides a useful approximation. If we consider (say) the real part of the spectrum Eqn. (3.2.19), we have

Re[S(f)] = ∫_{−∞}^{∞} a(t) cos[2πft − φ(t)] dt.    (3.2.20)

There may exist time ranges of the interval of integration over which the angle 2πft − φ(t) changes rapidly with respect to the changes of the function a(t). Then the contribution to the integral value from regions of adjacent positive and negative loops of the cosine function will nearly cancel, with no net contribution to the value of the integral. Application of the principle of stationary phase amounts to taking note of that fact, and concentrating attention elsewhere, over intervals where the angle of the cosine function changes only slowly.
The location of such time ranges, with slowly varying angle 2πft − φ(t), depends on the particular value of f for which we are trying to calculate the number S(f), since f appears as a parameter in the angle. Ranges of time for which we do get net contribution to the integral are characterized by the fact that the integrand does not oscillate rapidly, which is to say that the phase angle 2πft − φ(t) is nearly constant. Thus we can confine attention to time ranges near the stationary points of the phase function, which are times t(f) for which

d[2πft − φ(t)]/dt = 0,    that is,    2πf = dφ/dt.    (3.2.21)

Since we are confining attention to times t near solutions t(f) of Eqn. (3.2.21), we can expand the integrand of the Fourier transform Eqn. (3.2.19) as a Taylor series around t(f). Keeping only the zeroth order term in a(t), and terms through the quadratic in the angle, noting that the first order term in the angle is zero by the definition Eqn. (3.2.21) of t(f), and for simplicity of notation assuming that only a single stationary point exists, we obtain (where we write t_f = t(f))
S(f) ≈ a(t_f) exp{j[φ(t_f) − 2πf t_f]} ∫_{t_f−Δ}^{t_f+Δ} exp[jφ̈(t_f)(t − t_f)²/2] dt,    (3.2.22)

where 2Δ is the interval (in general a function of f) over which the quadratic approximation to the phase function in Eqn. (3.2.19) is reasonable.
With the change of variable y = (t − t_f)[|φ̈(t_f)|/2π]^{1/2}, Eqn. (3.2.22) becomes

S(f) ≈ 2a(t_f)[2π/|φ̈(t_f)|]^{1/2} exp{j[φ(t_f) − 2πft_f]} ∫₀^{Y} exp{j sgn[φ̈(t_f)]πy²} dy,    (3.2.23)

with Y = Δ[|φ̈(t_f)|/2π]^{1/2}. In the particular case that the upper limit of the integral can be extended with little error to infinity, the Fresnel integral that arises can be evaluated (Gradshteyn and Ryzhik, 1965, Section 3.691.1) to yield

S(f) = [2π/|φ̈(t_f)|]^{1/2} a(t_f) exp{j[φ(t_f) − 2πft_f + sgn(φ̈(t_f))π/4]}.    *(3.2.24)
The special case of the linear FM pulse should be noted explicitly. Thus we consider

s(t) = exp[j2π(f_c t + Kt²/2)],    |t| ≤ τ_p/2,    (3.2.25)

for which φ̈(t) = 2πK = const. The stationary phase relation Eqn. (3.2.21) yields

t_f = (f − f_c)/K.    (3.2.26)

That is to say, for any frequency f, only time portions of the signal located near the value Eqn. (3.2.26) contribute to the spectrum at the frequency in question. Frequency and time are approximately locked together in the linear FM waveform.

Since the phase of the signal Eqn. (3.2.25) is exactly quadratic in time, the expression Eqn. (3.2.22) is exact, with the range of integration changed to |t| < τ_p/2, the full pulse extent. The approximate expression Eqn. (3.2.23) is replaced by an exact expression involving the Fresnel integral

∫ exp[j(sgn K)πy²] dy,    (3.2.27)

taken over finite limits set by the pulse edges, where

B = |K|τ_p,    f − f_c = yB/2.    (3.2.28)

For adequately large bandwidth time product Bτ_p, the Fresnel integral in Eqn. (3.2.27) can be evaluated, and it is found that S(f) ≈ 0 for |f − f_c| > B/2, so that the quantity B, defined by Eqn. (3.2.28), is the signal bandwidth. Cook and Bernfeld (1967, p. 139) calculate that to be the case for Bτ_p > 100 (Fig. 3.3). In the band, for large Bτ_p the same expression as Eqn. (3.2.24) results:

S(f) = |K|^{−1/2} exp[j(π/4) sgn(K)] exp[−jπ(f − f_c)²/K],    |f − f_c| < B/2.    *(3.2.29)

Figure 3.3 Amplitude and phase spectra of linear FM signals with various bandwidth time products. Phase shown is residual after removal of nominal quadratic phase (from Cook and Bernfeld, 1967, and after Cook, 1960, Proc. IRE, 48, pp. 300–316. © IEEE).
The principle of stationary phase can also be applied to the inverse transform relation

s(t) = ∫_{−∞}^{∞} A(f) exp j[ψ(f) + 2πft] df,

obtaining

s(t) ≈ A(f_t)[2π/|ψ̈(f_t)|]^{1/2} exp j{2πf_t t + ψ(f_t) + sgn[ψ̈(f_t)]π/4},    (3.2.30)

where the stationary frequency f_t = f(t) satisfies

2πt + dψ/df = 0.    (3.2.31)

For the large bandwidth time product quadratic phase function Eqn. (3.2.29), the expression Eqn. (3.2.30) reduces to the linear FM, Eqn. (3.2.25), while Eqn. (3.2.31) yields the locking relationship f_t = f_c + Kt.
The above relations are approximate. They will be more or less accurate depending on the specific nature of the signal s(t) in question, the more so the larger the bandwidth time product Bτ_p of the waveform. For a signal s(t) with both a smooth envelope a(t) and a smooth spectrum amplitude A(f), according to Cook and Bernfeld (1967, p. 49), a bandwidth time product nominally of 10 suffices to yield accurate ψ(f) and φ(t) using respectively the approximations Eqns. (3.2.24) and (3.2.30). If one of a(t) or A(f) is discontinuous, Bτ_p needs to be 20 or 30, while if both are discontinuous, Bτ_p needs to be 100. This latter case applies to the nominally time limited, band limited linear FM waveform.
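The stationary phase spectrum of the linear FM pulse, Eqn. (3.2.29), can be checked numerically. With the illustrative, assumed parameters below (Bτ_p = 200), the FFT of a sampled chirp has magnitude close to the predicted constant |K|^{−1/2} inside the band, and little energy outside it.

```python
import numpy as np

fs, tau = 4000.0, 1.0                     # illustrative sample rate and pulse length
K = 200.0                                 # chirp rate, Hz/s, so B = K*tau = 200 Hz
t = np.arange(-tau/2, tau/2, 1/fs)
s = np.exp(1j * np.pi * K * t**2)         # baseband linear FM (f_c = 0)
n = 1 << 15
S = np.fft.fftshift(np.fft.fft(s, n)) / fs    # approximates the continuous spectrum
f = np.fft.fftshift(np.fft.fftfreq(n, 1/fs))
B = K * tau
inband = np.abs(f) < 0.4 * B              # stay away from the Fresnel band edges
out = np.abs(f) > 0.75 * B
print(np.abs(S[inband]).mean() * np.sqrt(K))  # ~1 inside the band, per Eqn. (3.2.29)
print(np.abs(S[out]).mean() * np.sqrt(K))     # small outside the band
```

Close to the band edges the Fresnel ripples of Fig. 3.3 appear; in the interior the magnitude settles onto the stationary phase prediction.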
3.2.3 Compression Processing

Let us now consider the compression properties of the matched filter. Regardless of what waveform s(t) is transmitted, a matched filter will be used in a receiver intended for target detection. Its impulse response is h(t) = s*(−t) and its transfer function is H(f) = S*(f), where S(f) is the spectrum of s(t). For a target at range R from the radar, so that (ignoring scale factor) the received signal is the transmitted signal delayed by a time 2R/c, the receiver output will be

g(t) = ∫_{−∞}^{∞} |S(f)|² exp[j2πf(t − 2R/c)] df.    (3.2.32)

For the linear FM pulse with large bandwidth time product, |S(f)| is essentially constant over the two bands of width B centered at ±f_c, so that

g(t) = (∫_{−f_c−B/2}^{−f_c+B/2} + ∫_{f_c−B/2}^{f_c+B/2}) exp[j2πf(t − 2R/c)] df = 2B cos[2πf_c(t − 2R/c)] sin[πB(t − 2R/c)]/[πB(t − 2R/c)].

The envelope of this is a pulse of form (sin t)/t with time width nominally 1/B, centered at the time delay τ = 2R/c corresponding to the target range R. This is just the result Eqn. (3.1.13).

On the other hand, even if the input spectrum A(f) is not rectangular, the (sin t)/t form may still be a good approximation to the filter time output. Again according to Cook and Bernfeld (1967, p. 49), provided s(t) is a linear FM with Bτ_p > 20, and provided the proper matched filter is used for the S(f) that is the actual spectrum (not having unit amplitude A(f) if Bτ_p is considerably smaller than 100), the matched filter output envelope will have the (sin t)/t form. Thus, although in practice time bandwidth products much larger than 20 (or even 100) are used, even for products as small as 20 the resolution result δt = 1/B is valid, although not necessarily the linkage expression Eqn. (3.2.26). The implications of such results are important in considering SAR azimuth compression algorithms.

In contrast, for a transmitted pulse which is a simple burst of carrier, s(t) = exp(jω_c t), |t| < τ_p/2, the matched filter output for a target at range R will be just the expression Eqn. (3.1.12) delayed by 2R/c:

g(t) = (τ_p − |t − 2R/c|) exp[jω_c(t − 2R/c)],    |t − 2R/c| ≤ τ_p.    (3.2.33)

The time correlation form of the matched filter output expression,

g(t) = ∫_{−∞}^{∞} s*(x − t) s(x − τ) dx,    (3.2.34)

makes it easy to see pictorially why the resolution of the large time bandwidth product linear FM pulse is so much sharper than that of the simple RF burst. In Fig. 3.4, one can visualize the lower function of each pair,

s*(x − t) = cos[2πf_c(x − t) + πK(x − t)²],

sliding along the upper function s(x − τ) as t varies. For each pair (t, τ), the product function in the integrand of Eqn. (3.2.34) contains sum and difference frequencies (with x as the "time" variable), the difference frequency being

Δf = K[(x − τ) − (x − t)] = K(t − τ).

The sum frequency term will integrate to zero, as will the difference frequency term unless t ≈ τ.

Figure 3.4 Correlation of linear FM waveforms. Average product peaks near zero value of the difference frequency Δf = K(t − τ).

For a general transmitted waveform, the true pulse compression filter is

H(f) = 1/S(f),    S(f) ≠ 0,    (3.2.35)

since then H(f)S(f) = 1 in the signal band. (Only in the case A(f) = 1, or at least A(f) = const, such as for example the linear FM with large bandwidth time product, are the matched filter S*(f) and the compression filter 1/S(f) the same.)

In the case of a transmitted signal s(t) of finite bandwidth, so that the qualification in Eqn. (3.2.35) has effect, the problem of finding the compression filter H(f) is "ill-posed" (Tikhonov and Arsenin, 1977), in the sense that the conditions of the problem do not lead us to a unique solution. (Since the signal s(t) has zero frequency content outside the band |f − f_c| < B/2, we can add any out of band components to H(f) and not change the filter output G(f) = H(f)S(f).) The problem is "regularized" (made to have a unique solution) by adding some extra conditions solely for that purpose. If we choose to add the condition that the filter H(f) have zero spectral amplitude outside the signal band (which corresponds to the "principal solution" of such problems (Bracewell and Roberts, 1954)), we obtain a compressed output as in Eqn. (3.2.32) above (for R = 0, say):

g(t) = B sin(πBt)/(πBt).    (3.2.36)
We have in fact always done that without comment. Radar receivers are always
so constructed.
For the linear FM waveform with high bandwidth time product, the matched
filter Eqn. (3.2.35) is the appropriate compression processor if we use the
principal solution. We thus reconstruct the complex reflectivity profile ((R) in
view of the radar as in Eqn. (3.2.16) with the best resolution attainable by linear
processing. (The adjoining of out of band components to the filter output is a
nonlinear process, since zero filter input does not then correspond to zero
output.) However, with that reconstruction of ((R) we have sidelobes to contend
with, just as in the case of a finite antenna aperture (Section 2.2.2). The first
sidelobes of g(t) of Eqn. (3.2.36) are only 13 dB lower than the main lobe. Thus,
for example, a target 13 dB stronger than an adjacent target one resolution cell
away will mask its weaker neighbor.
These time (or range) sidelobes in the ambiguity function Eqn. (3.2.36) must be dealt with to obtain a properly functioning system. Cook and Bernfeld (1967) discuss the problem in general in the context of signals with large Bτ_p products. Suppose we maintain the desirable constant power requirement that |s(t)| = a(t) = 1, and vary |S(f)| (analogous to antenna illumination) to attempt to improve the ambiguity diagram Eqn. (3.1.11). Assume we will always use a matched filter H(f) = S*(f) whatever S(f) may be (thereby deviating from the true compression filter H(f) = 1/S(f) over the band). Then some improvement is possible (Cook and Bernfeld, 1967, Chapter 3), but only at the expense of needing to generate rather inconvenient phase behaviors φ(t) for the transmitted signal s(t) = exp[jφ(t)].
The usual way to deal with undesirably high time sidelobe levels in the
matched filter output is to unmatch the filter. There is thereby an inevitable
reduction in output SNR, and a consequent decrease in detection performance
which, although usually not severe, must be evaluated. Beyond that we deal
with a trade-off between desirable improvement in sidelobe structure, and
consequent undesirable, but usually tolerable, broadening of the mainlobe of
the filter output function (degradation of resolution). Again, Cook and Bernfeld
( 1967, Chapter 7) have given a thorough discussion in the context of the radar
matched filter receiver, although the general subject is discussed ubiquitously.
Farnett et al. (1970) give a convenient summary, while Harris (1978) has given
a particularly comprehensive discussion of the available alternatives in the case
of time sampled data. Here we will follow only one line of thought, leading to
some filters commonly used in SAR processing.
Let us again assume that the transmitted pulse is the linear FM with constant
envelope, in complex form (positive frequencies), and that the bandwidth time
product is large, so that the spectrum has constant amplitude over the band B,
say unity: S(f) = exp[jψ(f)]. The receiver filter is taken as

H(f) = W(f) exp[−jψ(f)]

where W(f) is a real function to be found. Assume W(f) to be symmetric around
the band center f₀.

We can then formulate the optimization problem of minimizing the mainlobe
width of the filter output |g(t)|, for a specified maximum sidelobe level, where
G(f) = H(f)S(f) = W(f). The answer is (Cook and Bernfeld, 1967, p. 178) the
continuous form of the Dolph (1946) antenna current distribution function.
Over the band, this is:

W(f + f₀) = πA I₁(z)/[Bz cosh(πA)],    z = πA[1 − 4(f/B)²]^(1/2)    (3.2.37)

where I₁ is the modified Bessel function of first kind and order 1. The parameter A is
set by the requested maximum sidelobe level a such that the maximum (voltage)
sidelobe is a factor

a = 1/cosh(πA)    (3.2.38)

below the mainlobe peak. (For example, if we demand that the largest sidelobe
be 40 dB below the peak of the mainlobe, then a = 0.01 and A = 1.69.)

In addition, at the band edges f = ±B/2, W(f) has impulses of strength
1/[B cosh(πA)], which fact makes this weighting inconvenient to realize. It turns
out that not only is the maximum sidelobe level no larger than the requested
bound, but that all the sidelobes attain that bound; hence the distribution is
also called the Dolph-Chebyshev weighting.

A flexible and convenient approximation to the Dolph weighting function
Eqn. (3.2.37) is the Taylor weighting function (Cook and Bernfeld, 1967, p. 180;
Taylor, 1955). Again relative to the center of the band this is:

W(f + f₀) = 1 + 2 Σ_{m=1}^{n̄−1} F(m, A, n̄) cos(2πmf/B)    (3.2.39)

where

F(m, A, n̄) = 0.5(−1)^(m+1) Π_{n=1}^{n̄−1} [1 − (m/σ)²/(A² + (n − 0.5)²)] / Π_{n=1, n≠m}^{n̄−1} [1 − (m/n)²]    (3.2.40)

σ = n̄[A² + (n̄ − 0.5)²]^(−1/2)    (3.2.41)

This latter happens also to be the factor by which the Taylor mainlobe is
broadened beyond the Dolph mainlobe width. We want σ to be not too much
larger than unity, so that to some extent n̄ (quality of approximation) and A
(sidelobe level) are coupled. Reasonable nominal values are of the order of
n̄ ≈ 3 for 25 dB sidelobes and n̄ ≈ 6 for 40 dB sidelobes.

The Taylor weighting function Eqn. (3.2.39) can be realized with reasonable
convenience, either directly as a filter in the frequency domain, or in the time
domain. Time domain realization makes use of the fact that

F⁻¹{F(ω) cos(aω)} = [f(t + a) + f(t − a)]/2

so that the cosine terms in the Taylor filter Eqn. (3.2.39) can be realized by a
linear combination of delayed and advanced (by integral multiples of 1/B)
replicas of the filter input (so-called tapped delay line realization).
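The tapped delay line equivalence can be checked numerically. In the discrete setting, multiplying a DFT by cos(2πmk/N) is identical to averaging replicas of the sequence delayed and advanced by m samples (circular shifts stand in for the continuous shifts ±a). A minimal sketch in Python/NumPy; the signal is arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 256, 3                      # sequence length, tap offset in samples
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Frequency domain: weight the spectrum by cos(2*pi*m*k/N)
k = np.arange(N)
g_freq = np.fft.ifft(np.fft.fft(f) * np.cos(2 * np.pi * m * k / N))

# Time domain: average of delayed and advanced (circularly shifted) replicas
g_time = 0.5 * (np.roll(f, m) + np.roll(f, -m))

assert np.allclose(g_freq, g_time)
```

A full Taylor filter is then a weighted sum of such tap pairs (with weights F(m)) plus the undelayed input, which is exactly the tapped delay line realization described above.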
For typical sidelobe levels, the numbers F(m) of Eqn. (3.2.40) in the Taylor
filter approximation to the Dolph filter become small rather rapidly as
m increases towards n̄. For example, for n̄ = 6 and 40 dB sidelobes, the
filter coefficients are: F(1,…,5) = 0.3891, −0.945 × 10⁻², 0.488 × 10⁻²,
−0.161 × 10⁻², 0.035 × 10⁻² (and incidentally σ = 1.043, so that the main
lobe broadens only by 4.3%). This suggests dropping higher order terms in
Eqn. (3.2.39), without changing n̄, which would involve recalculating the
coefficients. If this is done in the (6, −40 dB) case, for example, there results
a slightly modified set of coefficients.
In practice, any of these cases may provide satisfactory sidelobe behavior with
negligible main lobe broadening beyond that of the full Taylor approximation.
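The coefficient values quoted above can be reproduced directly from Eqns. (3.2.38)-(3.2.41). A sketch in Python for the (n̄ = 6, 40 dB) case; the function and variable names are ours:

```python
import numpy as np

def taylor_coeffs(nbar, sidelobe_db):
    """Taylor weighting coefficients F(m), m = 1..nbar-1, per Eqns. (3.2.38)-(3.2.41)."""
    a = 10.0 ** (-sidelobe_db / 20.0)            # voltage sidelobe ratio
    A = np.arccosh(1.0 / a) / np.pi              # a = 1/cosh(pi*A), Eqn. (3.2.38)
    sigma2 = nbar**2 / (A**2 + (nbar - 0.5)**2)  # sigma squared, Eqn. (3.2.41)
    F = []
    for m in range(1, nbar):
        num = np.prod([1 - m**2 / (sigma2 * (A**2 + (n - 0.5)**2))
                       for n in range(1, nbar)])
        den = np.prod([1 - (m / n)**2 for n in range(1, nbar) if n != m])
        F.append(0.5 * (-1)**(m + 1) * num / den)  # Eqn. (3.2.40)
    return A, np.sqrt(sigma2), np.array(F)

A, sigma, F = taylor_coeffs(6, 40.0)
print(A, sigma, F[0], F[1])   # approximately 1.69, 1.043, 0.3891, -0.00945
```

The printed values match the text: A = 1.69, σ = 1.043 (4.3% broadening), and the leading coefficients 0.3891 and −0.945 × 10⁻².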
At this point we have summarized a reasonable range of results from classical
one-dimensional (range) radar theory. In Chapter 5 we will generalize them to
the case of a two-dimensional imaging radar.
REFERENCES
Bracewell, R. N. and J. A. Roberts (1954). "Aerial smoothing in radio astronomy,"
Austral. J. Phys., 7, pp. 615-640.
Cook, C. E. and M. Bernfeld (1967). Radar Signals, Academic Press, New York.
Dolph, C. L. (1946). "A current distribution for broadside arrays which optimizes
the relationship between beam width and side-lobe level," Proc. IRE, 34(June),
pp. 335-348.
Farnett, E. C., T. B. Howard and G. H. Stevens (1970). "Pulse-compression Radar,"
Chapter 20 in Radar Handbook (Skolnik, M. I., ed.), McGraw-Hill, New York.
Gradshteyn, I. S. and I. M. Ryzhik (1965). Tables of Integrals, Series, and Products,
Academic Press, New York.
Harris, F. J. (1978). "On the use of windows for harmonic analysis with the discrete
Fourier transform," Proc. IEEE, 66(1), pp. 51-83.
North, D. O. (1963). "An analysis of the factors which determine signal/noise
discrimination in pulsed-carrier systems," Proc. IEEE, 51(7), pp. 1016-1027 (Reprint
of: RCA Technical Report PTR-6C, June 25, 1943).
Rihaczek, A. W. (1969). Principles of High Resolution Radar, McGraw-Hill, New York
(Reprinted by Peninsula Publ., Los Altos, CA, 1985).
Skolnik, M. I. (1980). Introduction to Radar Systems, McGraw-Hill, New York.
Taylor, T. T. (1955). "Design of line-source antennas for narrow beamwidth and low
side lobes," IRE Trans. Ant. and Prop., AP-3(1), pp. 16-28.
4
IMAGING AND THE
RECTANGULAR
ALGORITHM
introduce the rectangular (range Doppler) coordinate system, and describe the
corresponding signals received by a SAR, assuming a "chirped" transmitter
waveform. Range migration of the received signals over the many pulses needed
to carry out SAR processing is described in detail. The difficulty of dealing with
range migration has led to various ways in which the correlation operations of
the rectangular algorithm have been realized, and we will distinguish among
those algorithms from that point of view. In this chapter we will describe four
of the methods which have been used. In Chapter 10 we will discuss one more,
deramp processing, which has been used less commonly in remote sensing work,
but which is nonetheless of importance.
The algorithms discussed in this chapter realize range migration correction
by interpolation operations on a rectangular grid of data, in either the time or
frequency domains. The frequency domain realizations have been developed
mainly by the Jet Propulsion Laboratory of NASA and by MacDonald Dettwiler
and Associates of Canada. A time domain SAR compression algorithm, which
operates without using fast convolution in the azimuth coordinate, has been
developed by the British RAE. In Chapter 10 we will
discuss the polar processing algorithm, which has its heritage in the aircraft
SAR systems which have been under steady development since the 1950s.
4.1
Here the two dimensions of R′ are the geographic coordinates of the terrain.
The dimensions of R are time t = 2R/c within each pulse, and the time of travel s
of the radar platform along its motion path, or equivalently R and x = V_s·s,
where V_s is platform speed.
The convolution Eqn. (4.1.1) has one more hidden assumption, namely, that
the reflectivity coefficient ζ(R′) is independent of the radar position R, at least
over the (usually small) change of aspect angle during the time that any particular
point is illuminated (the time extent S of the synthetic aperture), and constant
in time. That may often be the case. Otherwise, the image to be derived from
the data will be a weighted combination of the reflectivities ζ(R′) as observed
from some range of positions R at varying times.
The two-dimensional inverse Green's function h⁻¹(R₀|R), corresponding to
the Green's function (impulse response) h(R|R′), is defined by

∫∫_{−∞}^{∞} h⁻¹(R₀|R) h(R|R′) dR = δ(R₀ − R′)    (4.1.2)

The complex image is then formed from the data v(R) by the correlation

ζ(R₀) = ∫∫_{−∞}^{∞} h⁻¹(R₀|R) v(R) dR    (4.1.3)

while for a point target, ζ(R′) = δ(R′ − R₀), Eqn. (4.1.1) gives the data

v(R) = ∫∫_{−∞}^{∞} h(R|R′) δ(R′ − R₀) dR′ = h(R|R₀)    (4.1.4)
This last follows by substituting Eqn. (4.1.1) and using the definition Eqn.
(4.1.2), as done in Eqn. (3.2.16) for the one-dimensional case. Here R represents
any coordinates used to describe the data, v(R) is the complex received data
phasor array, and ζ(R₀) is the complex image function at an arbitrary point
R₀. In general, the reconstructed image ζ(R₀) will replicate the ground truth
only insofar as the system resolution properties allow the Dirac function to be
reconstructed from the impulse response, as discussed in Section 3.2.3.
The usual (real) image is finally an estimate of the mean intensity of ζ(R₀)
as in Eqn. (3.2.12).
From Eqn. (4.1.3), it is evident that the image formation process is one of
correlation of the data v(R) with the inverse Green's function. It is further
clear from Eqn. (4.1.2) that the inverse Green's function can be described
operationally in terms of whatever correlation operations will compress the
system unit impulse response h(R|R₀) into the image of an impulse. In
developing an image formation algorithm, we therefore first need to determine
what the system impulse response is, working from the known system properties.
We then must specify the correlation operations necessary to convert the impulse
response into an impulse. Applying exactly those correlation operations to the
full data set v(R) then produces the complex image ζ(R₀).
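The prescription of the last paragraph can be sketched numerically: build an impulse response for a point target, form data by superposition as in Eqn. (4.1.1), and correlate the data with the conjugated impulse response. The following Python toy uses an arbitrary chirp-like two-dimensional patch as a stand-in for h(R|R₀); it is an illustration of the correlation idea, not the SAR geometry itself:

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy 2-D impulse response: a small chirp-like patch (stand-in for h(R|R0))
u = np.linspace(-1, 1, 33)
h = np.exp(1j * np.pi * 8 * (u[:, None] ** 2 + u[None, :] ** 2))

# Data from two point targets (Eqn. (4.1.1) with zeta a pair of impulses)
scene = np.zeros((128, 128))
scene[40, 50] = 1.0
scene[80, 90] = 0.6
data = fftconvolve(scene, h, mode="same")

# Image formation: correlate the data with the conjugate impulse response
image = np.abs(fftconvolve(data, np.conj(h[::-1, ::-1]), mode="same"))

peak = np.unravel_index(np.argmax(image), image.shape)
print(peak)   # brightest compressed response at the stronger target, (40, 50)
```

The correlation compresses each dispersed target response back to a bright point at its true location, with the weaker target appearing at reduced amplitude.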
Figure 4.1 A terrain point is located by the radar position x_c when the point is in beam center,
and the corresponding range R_c.
4.1.1 Data Coordinates

As it moves along its path, the radar transmits narrowband pulses, typically
the linear FM signal. The multipulse real transmitted signal is then

p(t) = Σ_n s(t − nT_p)    (4.1.5)

where T_p is the pulse repetition period and the sum includes all pulses for which
the target is in the radar beam. Note that we assume synchronization of the
detailed pulse waveform s(t) with the repetition period. That is, the radar is
time coherent. Since SAR is based on Doppler shift, it is essential that
pulse-to-pulse phase changes be recoverable from the radar signal, requiring
coherent operation.

At any arbitrary time t, the radar is at some slant range R(t) from the target
point with image coordinates (x_c, R_c) (Fig. 4.2). The real received signal v_r(t)
at that instant has the value which the transmitted signal had at some time τ
earlier, scaled by a factor a which is locally constant:

v_r(t) = a p(t − τ)    (4.1.6)

Thus

τ = [R(t − τ) + R(t)]/c ≈ [2R(t) − Ṙ(t)τ]/c

τ = 2R(t)/[c + Ṙ(t)] ≈ 2R(t)/c

so that the received pulse train is

v_r(t) = Σ_n a_n s[t − nT_p − 2R(t)/c]    (4.1.7)

(Total time may be decomposed as t = s + t′, the sum of the slow time s of
platform travel and the fast time t′ within a pulse.)
The operations required for image formation are those of correlation of the
radar data with the impulse response. We therefore have to do with a two
dimensional correlator. However, the range R(t) varies over the time of
each pulse for which the point target is in view. The received pulses
s[t − nT_p − 2R(t)/c] are therefore distorted versions of the transmitted pulses
s(t − nT_p). The distortion can be different for each pulse of the received pulse
train, since the local functional form of the time varying range R(t) depends
on the differing geometry along the radar trajectory. Were it necessary to account
for these effects in processing, the two dimensional correlation would not
decouple into a sequence of two independent one dimensional procedures. It
is therefore important to examine the consequences of this pulse dependent
distortion. We follow the development of Barber (1985).

We want to determine the effect of variation of the range R(t) from sensor to
a target point during the time span τ_p of reception of a particular pulse. Let
time t₁ be nominally at the midpoint of a received pulse, and consider the
expansion

R(t) ≈ R(t₁) + Ṙ(t₁)(t − t₁) + R̈(t₁)(t − t₁)²/2    (4.1.8)
Figure 4.2 The radar views a terrain point at (x_c, R_c) from positions (x, R).

The transmitted pulse is the linear FM,

s(t) = exp[j2π(f_c t + Kt²/2)],    |t| ≤ τ_p/2    (4.1.9)
where we consider only the positive frequency components of the real signal.
If we were to assume no distortion of the pulse waveform, except for scale factor,
this would be the single pulse impulse response. We would then compress the
received signal by correlating it with delayed versions of s*(t) (equivalent to
matched filtering with s*(−t)). The compressor output would be

g(t) = ∫_{−∞}^{∞} v_r(t′) s*(t′ − t) dt′    (4.1.10)

where the input is actually one pulse of the (positive frequency part of the)
distorted return Eqn. (4.1.7).

Substituting Eqn. (4.1.7) and Eqn. (4.1.9) into the integral of Eqn. (4.1.10),
and being careful of limits, for t ≈ t₁ for example we obtain

g(t) = exp(jω_c t) ∫ a exp[−j4πR(t′)/λ] ⋯ dt′    (4.1.11)

from which, through the intermediate steps Eqns. (4.1.12) and (4.1.13), there
follows the compressed pulse

g(t) = exp(jω_c t) exp[−j4πR(t₁)/λ][sin(u)]/u    *(4.1.14)

Eqn. (4.1.15) indicates that a modest bandwidth time product will suffice for
the validity of the approximation Eqn. (4.1.14). This is in correspondence with
the discussion of Section 3.2.2, in which it was noted that, for Bτ_p > 20, the
matched filter output for the linear FM would have the form Eqn. (4.1.14).

In the actual case that R(t) is not constant over the pulsewidth τ_p, the series
expansion Eqn. (4.1.8) can be used in the received pulse waveform Eqn. (4.1.7).
We assume the matched filter Eqn. (4.1.10) will still be used as matched to the
transmitted pulse Eqn. (4.1.9). We want to determine the effect of the resulting
input distortion on the filter output Eqn. (4.1.11).

We could work directly with approximate evaluation of Eqn. (4.1.11).
However, it suffices to assume a large bandwidth time product for the transmitted
pulse. This allows us to relate frequency displacements to time shifts, as discussed
in Section 3.2.2. Specifically, the filter input signal Eqn. (4.1.7) with range
Eqn. (4.1.8) will have a phase variation

φ = 2π{f_c[t − 2R(t)/c] + K[t − 2R(t)/c]²/2}

and hence instantaneous frequency

f = (1/2π) dφ/dt = {f_c + K[t − 2R(t)/c]}[1 − 2Ṙ(t)/c]    (4.1.16)
The frequency function Eqn. (4.1.16) differs from the nominal variation

f = f_c + K(t − 2R₁/c),    R₁ = R(t₁)

by an amount which depends slightly on time within the pulse, but which is
approximately

Δf ≈ −(2Ṙ/c)f_c

Because of the assumed high bandwidth time product, this frequency shift
corresponds to a time shift Δf/K of the filter input, and correspondingly a
range shift at the filter output

ΔR = (c/2)Δf/K = −f_c Ṙ/K

For the geometry of Fig. 4.3, with Ṙ = −V_st sin θ and RR̈ + Ṙ² = V_st²,
this becomes

ΔR = f_c V_st sin θ/K    (4.1.17)

When the point target of Fig. 4.3 is viewed from the forward and rear edges of
the real radar beam, nominally at θ = ±θ_H/2, the range shift Eqn. (4.1.17) will
be opposite in direction. The difference represents a distortion, which should
be much less than the range resolution interval, which is

δR = c/2B = c/2|K|τ_p    *(4.1.18)
The factor 2V/c is extremely small, while the other factor is not large. We
conclude that any defocussing due to distortion of the received pulse is negligible.
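The negligibility claim is easy to check with representative numbers. All parameters below are ours, Seasat-like, and purely illustrative; the range shift follows from mapping the Doppler offset 2Ṙf_c/c through the chirp rate K, as above:

```python
c = 3.0e8          # m/s
fc = 1.275e9       # Hz, L-band carrier (illustrative)
B = 19.0e6         # Hz, chirp bandwidth (illustrative)
tau_p = 33.8e-6    # s, pulse length (illustrative)
K = B / tau_p      # Hz/s, chirp rate
V = 7500.0         # m/s, platform speed relative to target
theta_h = 0.022    # rad, real-aperture horizontal beamwidth (illustrative)

Rdot = V * (theta_h / 2)      # range rate at the beam edge, ~ V*sin(theta)
dR_shift = fc * Rdot / K      # range shift of the compressed pulse
dR_res = c / (2 * B)          # range resolution, Eqn. (4.1.18)

# Fore/aft difference is twice the one-sided shift; compare with resolution
print(2 * dR_shift, dR_res)
```

With these numbers the fore/aft shift difference is a few tenths of a meter against a range resolution near 8 m, confirming that the geometric pulse distortion is negligible.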
It might be remarked that we have only considered returned pulse distortion
effects related to the geometry of the situation. There may also be pulse distortion
due to the frequency dependent propagation speed (dispersion) of the earth's
ionosphere. Polarization change due to the earth's magnetic field may also be
noticeable. We will consider these effects in Chapter 7. Brookner (1977) has given
a summary of the effects, with useful charts of sample calculations. In a study
specifically concerning SAR, Quegan and Lamont (1986) indicate that the effect
on image focus can be severe for low frequency (L-band) and an aircraft system
operating at long range, but is less marked for a spaceborne system. The effects
lessen at higher frequencies.
With the approximation of constant range from radar to target point over
the width of a transmitted pulse, the received signal Eqn. (4.1.7) from a point
target is

v_r(t) = Σ_{n=−∞}^{∞} a_n s(t − nT_p − 2R_n/c)    (4.1.19)

where R_n is the range to target during the time of reception of the nth pulse:
R_n = R(t_n), with t_n the center (say) of the time interval over which pulse n is
received.

We now segment the received signal Eqn. (4.1.19) (the voltage out of the
radar receiver) of a single scalar variable (time) into a two-dimensional data
set. This is convenient to do because the formalism of two dimensional Green's
function analysis can be segmented now into two sequential one-dimensional
problems. We define specifically

v_r(nT_p, t) = v_r(t),    nT_p ≤ t < (n + 1)T_p    (4.1.20)

Figure 4.4 The two-dimensional Green's function: replicas of the transmitted pulse, delayed in
fast time by τ_n = 2R_n/c, one per transmitted pulse.
That is, v_r(nT_p, t) is the received signal from the time of transmission of pulse
n until the time of transmission of pulse n + 1. (In fact more than one pulse
may be "in flight" from the radar to the target simultaneously, in which case
some integral number of pulse periods intervenes between transmission of pulse
n and the time origin of the corresponding received signal.) If we define a "slow"
time variable s as the time of flight of the vehicle along its track, in contrast
with the "fast" time variable t of the radar signal voltage, then v_r(nT_p, t) is a
function v_r(s, t) sampled in slow time s at the pulse repetition frequency. Using
the transformations R = c(t − nT_p)/2 and x = V_s·s, we will also write the data
set as v_r(x, R) when convenient.
The (slow time sampled) two-dimensional Green's function of the system is
now seen to be that sketched in Fig. 4.4. This is an array of (fast) time delayed
versions of the transmitted pulse, with the delays τ_n = 2R_n/c depending on
target position (x_c, R_c) and radar position as determined by the geometry of
the problem. The Green's function is inherently sampled in slow time by the
pulsed radar, and will additionally often be sampled in fast time for digital
processing.
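Computationally, the segmentation Eqn. (4.1.20) is nothing more than reshaping the sampled receiver voltage into a (slow time, fast time) array, one row per pulse repetition interval. A minimal sketch with made-up numbers:

```python
import numpy as np

fs = 1.0e6              # fast-time sample rate (illustrative)
Tp = 1.0e-3             # pulse repetition period (illustrative)
Np = 8                  # number of pulses recorded
M = int(fs * Tp)        # fast-time samples per repetition interval

t = np.arange(Np * M) / fs            # the single scalar time variable
v = np.cos(2 * np.pi * 5e3 * t)       # stand-in receiver voltage v_r(t)

# v_r(n*Tp, t'): row n holds the interval n*Tp <= t < (n+1)*Tp
v2d = v.reshape(Np, M)

assert np.allclose(v2d[3], v[3 * M:4 * M])
```

Row index n is the sampled slow time and column index is fast time, exactly the v_r(s, t) arrangement used in the rest of the chapter.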
The point target response Fig. 4.4 is dispersed in fast time by the structure
of the transmitted pulse, and in slow time by the multiple (perhaps thousands
of) pulses which reach the target as the radar travels past it. Ideally, we would
like the compressed signal to be a point, as was the target. In practice, the finite
bandwidth of the transmitter and the finite time during which the target is in
view limit us to a compressed version of the target of nonzero width in the two
dimensions of the image. As discussed in connection with range processing in
Section 3.2.3, we then content ourselves with the principal solution of the
problem. Roughly speaking, the ideal point target (impulse function) has infinite
bandwidth in slow and fast time. The physical radar has finite bandwidth and
obliterates all but a finite band of target return frequencies. Then, by linear
processing of the radar observables, we can produce only a finite bandwidth
(smeared image) approximation to the observed point target.

In concept, the image formation procedure is straightforward. It is exactly
that operational procedure which compresses to a point the radar response to
a point target. Assuming a point target with coordinates (x_c, R_c) (Fig. 4.1), let
us now describe the procedure. We will take advantage of the possibility of
segmenting the two-dimensional correlation Eqn. (4.1.3) into a sequence of two
one-dimensional correlations.
Range Processing

The received signal v_r(nT_p, t) from each transmitted pulse s(t) is first passed
through the matched filter with impulse response s*(−t), or, equivalently,
correlated over time t′ with the replica s*(t′ − t). Dropping a scale factor τ_p,
the positive frequency portion of the result, for pulse n, is the filter or correlator
output Eqn. (4.1.14)

g_n(t) = exp(jω_c t) exp(−j4πR_n/λ)[sin(u)]/u    *(4.1.21)

where

u = πKτ_p(t − nT_p − 2R_n/c)

Here R_n is the range from the radar at time of transmission of pulse n to the
terrain point with coordinates (x_c, R_c).
The carrier structure of the signal Eqn. (4.1.21) is stripped away by the linear
operation of complex demodulation, which amounts to a left shift by f_c in the
frequency domain, to obtain the complex low pass signal

b_n(t) = exp(−j4πR_n/λ)[sin(u)]/u    (4.1.22)
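The sin(u)/u shape of the range compressed pulse, with its roughly 13 dB first sidelobe (Section 3.2), can be verified by simulating the matched filter for a large bandwidth time product. The parameters below are illustrative (Bτ_p = 100):

```python
import numpy as np

B, tau = 20.0e6, 5.0e-6              # bandwidth and pulse length (illustrative)
K = B / tau                          # chirp rate
fs = 10 * B                          # oversampled fast-time grid
t = np.arange(-tau / 2, tau / 2, 1 / fs)
s = np.exp(1j * np.pi * K * t**2)    # baseband linear FM (Eqn. (4.1.9) less carrier)

g = np.abs(np.correlate(s, s, mode="full"))    # matched filter output |g(t)|
g /= g.max()

# Locate the first sidelobe: first rising point past the mainlobe, then the
# maximum over one sidelobe width (fs/B samples). Expect roughly -13 dB.
center = g.argmax()
first_null = center + int(np.argmax(np.diff(g[center:]) > 0))
sidelobe_db = 20 * np.log10(g[first_null:first_null + int(fs / B)].max())
print(round(sidelobe_db, 1))
```

The measured first sidelobe lands near −13 dB, and the mainlobe width is of order 1/B in delay, as the principal-solution discussion of Chapter 3 predicts.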
Figure 4.5 Locus of range compressed returns from point target in plane of slow and fast times
(s, t = 2R/c).
The signal g(s|x_c, R_c) of Eqn. (4.1.24), which we want to compress as the second
operation of the rectangular algorithm, is in fact the Doppler signal received
from the point target as the radar moves by. Hence this second compression
operation is the "Doppler" part of the range-Doppler processing algorithm.

The waveform in slow time s of the azimuth signal g(s|x_c, R_c) of Eqn. (4.1.24)
is not necessarily simple, since R(s) is a nonlinear function of slow time s, the
form of which depends on the target parameters (x_c, R_c). Thus, while the slow
time ("azimuth") compression operation will be a correlation, the correlator
waveform will depend in general on which point in the image we are computing.
That is, in full generality, to compress the point target function Eqn. (4.1.24)
we need to compute the correlation

(4.1.25)

using a separate correlator function h⁻¹ for each point of the image. The "time
domain" image formation algorithm described by Barber (1985) implements
correlation in just this way. However, there is a considerable gain in
computational efficiency if the correlation can be implemented as a matched
filter, with a correlator waveform in which f_Dc and f_R depend only on R_c.
Thus the operation can be realized as a fast convolution (matched filter) over
slow time for each range R_c of the image.
As with linear FM range pulse compression, with a bandwidth time product
of 20 or more the correlation operation yields an output Eqn. (4.1.25) whose
modulus is a compressed pulse

|ζ(s₀|s_c, R_c)| = |sin(u′)/u′|,    u′ = πB_D(s₀ − s_c)    *(4.1.31)

centered at s₀ = s_c. The correlator function rests on an expansion of the slow
time range history about the beam center crossing time s_c:

R(s) = R(s_c) + Ṙ(s_c)(s − s_c) + R̈(s_c)(s − s_c)²/2 + ⋯    (4.1.26)

In such an expansion, it is often possible to neglect terms of order higher than
the quadratic, although the possibility of realizing the correlation expression
Eqn. (4.1.25) by matched filtering does not depend on that assumption. Rather,
we need to determine that the retained coefficients in the expansion Eqn. (4.1.26)
are independent of s_c over the filter span S in slow time. (In Appendix B we
give a detailed discussion of the terms in the expansion Eqn. (4.1.26).)

We can identify the leading time derivatives in the expansion Eqn. (4.1.26)
in terms of the Doppler center frequency and Doppler rate of the slow time
signal Eqn. (4.1.24). The time rate of change of the phase

φ(s) = −4πR(s)/λ    (4.1.27)

in the complex exponential is just the Doppler (radian) frequency, so that we have

φ̇/2π = f_D = −2Ṙ(s)/λ    *(4.1.28)

and correspondingly the Doppler rate f_R = ḟ_D = −2R̈(s)/λ. Both of these are
functions of s_c and R_c in general, since R(s) contains s_c, R_c as parameters.

Assuming that a quadratic expansion Eqn. (4.1.26) suffices, which is often
the case, the Doppler signal Eqn. (4.1.24) becomes

g(s|s_c, R_c) = exp(−j4πR_c/λ) exp{j2π[f_Dc(s − s_c) + f_R(s − s_c)²/2]},    |s − s_c| < S/2    (4.1.29)

This is a linear FM wave with center frequency f_Dc and frequency rate f_R. As
we discuss in Appendix B, the FM parameters f_Dc and f_R can depend strongly
on R_c, but usually depend only weakly on s_c. The azimuth correlation operation
Eqn. (4.1.25) can then be realized approximately using a correlator function
(using the leading terms of the expansion Eqn. (4.1.26))

h⁻¹(s) = exp{−j2π[f_Dc s + f_R s²/2]}    (4.1.30)

Azimuth Resolution

The width of the pulse Eqn. (4.1.31) is nominally

δs = 1/B_D    (4.1.32)

where B_D is the Doppler bandwidth. The time S is that nominal time for which
a point target is effectively in view. It is the SAR "integration time", and is
determined by the antenna horizontal beamwidth. The target is therefore located
in azimuth with spatial resolution

δx = V_st δs = V_st/B_D    *(4.1.33)

where V_st is the speed of the radar platform relative to the target point. For an
antenna of physical extent L_a along track, the nominal beamwidth is θ_H = λ/L_a,
so that any particular earth point at range R_c is illuminated for a nominal time

S = λR_c/(L_a V_st)    *(4.1.34)

For the geometry of Fig. 4.6, where the radar beam center has a squint angle θ_s,
we have

R(s) ≈ R_c − V_st sin θ_s (s − s_c) + (V_st² cos²θ_s/2R_c)(s − s_c)²

so that, from Eqn. (4.1.28) with Ṙ(s_c) = −λf_Dc/2,

f_Dc = 2V_st sin θ_s/λ,    f_R = −2R̈(s_c)/λ = −2V_st² cos²θ_s/λR_c    *(4.1.35)
value L_a/2 of Eqn. (4.1.37), since the correlator output Eqn. (4.1.25) has time
resolution 1/B_D in any event. (This assumes compensation for the antenna
pattern in the correlator, that is, the compressor operator must be used.) On
the other hand, to use the potentially wider Doppler band requires sampling (at
the radar pulse repetition frequency) at a rate somewhat greater than the Doppler
band to be processed (Appendix A). Such an increase in PRF may result in
range ambiguities.
Correlator Structure

Figure 4.6 Simplified encounter geometry for a radar with a beam center squinted at angle θ_s.
The correlation operation Eqn. (4.1.25) on the azimuth Doppler signal can
efficiently be implemented as a matched filter operation for each particular
value of R_c, provided the parameters f_Dc, f_R are sufficiently independent of s_c
over the span S to allow the use of fast convolution. In Appendix B these
parameters are discussed in detail, and expressions presented which allow
assessment of the situation in any particular case. In practice, considerations
of range migration, which we will elaborate on below, also enter into the question.
In Seasat-like cases, the approximations involved are usually justified, and
azimuth compression is usually implemented as the more efficient matched filter
operation, rather than by correlation. In either case, all the factors dealt with
in range compression must be considered, and in particular weighting of the
filter for sidelobe control is necessary.

The geometry of the encounter between radar and target is closely involved
in the azimuth correlator or matched filter structure through the expression for
slant range R(s) in terms of target position (x_c, R_c). The structure of the impulse
response function in the slow time domain, Eqn. (4.1.24), may or may not be
closely approximated as a linear FM in the Doppler domain. If it is not, then
terms in the expansion of R(s), Eqn. (4.1.26), of order higher than the quadratic
will need to be considered. It is also possible that the azimuth impulse response
depends significantly on the location x_c of the target, as well as on R_c. This
will be the case only for rather long slow time span S, or for high squint
geometries. If such is the case, use of a matched filter may not be possible for
azimuth compression, since the filter response function called for would then
change over the filter time span. At the least, some tracking of the azimuth
filter parameters must be implemented over such an image span (Section 9.3.2).
4.1.3 Range Migration and Depth of Focus
Two further considerations have impact on the way in which azimuth processing
is carried out: range migration and depth of focus. Range migration is an
inevitable consequence of SAR operation, but may or may not be so severe as
to require compensation, depending on system parameters. Azimuth resolution
in SAR depends closely on the bandwidth of the Doppler signal, as in Eqn.
(4.1.33). Since the phase of the Doppler signal Eqn. (4.1.24) is φ = −4πR(s)/λ,
if the Doppler signal is to have a nonzero bandwidth, the range to target must
change during the time of view S, and the compressed point target response
necessarily occurs at different ranges for different pulses (Fig. 4.5). This is the
phenomenon of range migration.
The second important quantity, depth of focus, relates to the fact that the
azimuth correlator parameters f_Dc, f_R in Eqn. (4.1.30) depend on range R_c. Use
of a somewhat incorrect value of f_Dc is not particularly serious, leading to some
loss of signal to noise ratio and increase in ambiguity level (Section 6.5.1), but
mismatch of the correlator value of f_R to that of the signal can cause unacceptable
loss of azimuth resolution (defocusing).

Consider first range migration. From the expansion Eqn. (4.1.26), the slow
time range history is

R(s) − R_c = Ṙ(s_c)(s − s_c) + R̈(s_c)(s − s_c)²/2    (4.1.39)

The linear part of this is range walk and the quadratic part is range curvature.
The total change ΔR = R(s) − R_c is range migration, and might involve higher
order terms in the expansion Eqn. (4.1.26), but usually does not.

We can easily determine a rough criterion to indicate whether range migration
compensation is needed. Again consider the simple geometry of Fig. 4.6, with
a beam squint angle θ_s. Using Eqns. (4.1.35) and (4.1.39), for the maximum
values s − s_c = ±S/2 we have

ΔR_max = V_st |sin θ_s| S/2 + (V_st² cos²θ_s/2R_c)(S/2)²    (4.1.40)

Using the nominal relations Eqn. (4.1.34) and Eqn. (4.1.37), we have the synthetic
aperture length as

L_s = V_st S = λR_c/2δx    (4.1.41)

In order that migration not require compensation, the maximum distance Eqn.
(4.1.40) should be less than (say) 1/4 of a range resolution cell δR. Thus we
have the criterion

(λR_c/δx)(|sin θ_s| + λ/8δx) < δR    *(4.1.42)-(4.1.43)

Turning to depth of focus, using Eqn. (4.1.35) we can find the mismatch in
azimuth chirp constant f_R if the range R_c used in the correlator differs by δR_c
from the range of the target point:

|δf_R/f_R| = δR_c/R_c    (4.1.44)

This mismatch causes a phase drift between the correlator function Eqn. (4.1.30)
and the signal, just as we discussed in Section 1.2 relative to the unfocussed
SAR processor. At the Doppler band edges (s − s_c = ±S/2), for negligible
mismatch we require (somewhat arbitrarily) a phase error in Eqn. (4.1.30) due
to mismatch of f_R limited by

π|δf_R|(S/2)² ≤ π/4    (4.1.45)

Using Eqn. (4.1.44), in terms of the azimuth bandwidth time product Eqn.
(4.1.38) and defining the mismatch ratio y = |δf_R/f_R|,
the criterion Eqn. (4.1.45) becomes

y < 1/B_D S    (4.1.46)

For values y(B_D S) < 2, where B_D S is the azimuth filter bandwidth time
product, there is little loss of resolution due to using a compression filter with
chirp constant f_R′ = f_R + δf_R with a linear FM input with constant f_R (Fig. 4.7).
With the Seasat value B_D S = 3500, for example, this amounts to a proportional
error (δf_R)/f_R ≈ 0.6 × 10⁻³ (0.3 Hz/s at the nominal f_R = 500 Hz/s). From Eqn.
(4.1.44), for the nominal R_c = 850 km this corresponds to a mismatch δR_c =
(0.6 × 10⁻³)(850 km) = 510 m, so that over the swath in slant range of 35 km
about 70 different filters would be needed for no loss of resolution. The depth
of field of the processor is 510 m, using this criterion.
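The Seasat depth-of-field numbers above follow from the criterion in a few lines. The parameter values are the nominal ones just quoted; the book's 510 m figure reflects rounding y up to 0.6 × 10⁻³, so the exact computation comes out slightly smaller:

```python
R_c = 850.0e3        # m, nominal slant range
BDS = 3500.0         # azimuth bandwidth time product
swath = 35.0e3       # m, slant range swath

y = 2.0 / BDS                    # tolerable chirp-rate mismatch ratio, y*B_D*S < 2
depth = y * R_c                  # depth of field, via Eqn. (4.1.44)
n_filters = swath / depth        # filter updates needed across the swath

print(round(depth), round(n_filters))   # about 486 m and 72 (text: ~510 m, ~70)
```

Either way, the conclusion stands: on the order of seventy azimuth filter updates are needed across the Seasat slant range swath to hold the π/4 phase-error criterion.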
In addition to resolution changes, however, compression filter mismatch
disturbs the sidelobes of the matched filter output. For example, whereas the
filter output with matched f_R has sidelobes down 13 dB (Eqn. (4.1.22)), for even
the case of yB_D S = 2.5 (mismatch ratio y = 0.7 × 10⁻³ with bandwidth time
product 3500), the first sidelobe is only 7.5 dB down from the peak (Fig. 4.8a).
Even mismatches only moderately larger, say yB_D S = 5 (δf_R = 0.7 Hz/s for
Seasat), cause serious disruption of the shape of the filter output (Fig. 4.8b).
On the other hand, for sidelobe control the matched filter will always be
used with some sort of weighting. The presence of this weighting considerably
ameliorates the degrading effects of chirp rate mismatch, since the influence of
Figure 4.8 Distortion of matched filter output for linear FM pulse with filter mismatch
y = |(δf_R)/f_R| in terms of bandwidth time product (from Cook and Bernfeld, 1967). Courtesy
J. Paolillo.
errors at the ends of the matched filter is decreased by the weighting. For
example (Cook and Bernfeld, 1967, p. 158), with weighting designed to produce
sidelobes down 40 dB, a mismatch factor y = 8/B_D S raises the first sidelobe
only by 4 dB. However, the mainlobe widens by an additional factor 2.3 beyond
that produced by the original weighting (Fig. 4.9). For y = 4/B_D S, the widening
is by a factor 1.4. This value of y for Seasat corresponds to a mismatch in f_R
of 0.6 Hz/s, or a depth of field in R_c of 1 km. The azimuth filter in that case
would need to be updated 35 times across the 35 km Seasat slant range swath
to stay within the limit.
Figure 4.7 Compressed pulse widening factor due to filter mismatch y = |(δf_R)/f_R| in
terms of bandwidth time product (from Cook and Bernfeld, 1967).

The determining parameter in such matters, unity for π/4 phase error, is yB_D S
using Eqn. (4.1.38) and Eqn. (4.1.48). Therefore short wavelength systems are
more resistant to filter mismatch than long wavelength systems, that is they
176
have better depth of focus. Also, depth of focus degrades quickly as azimuth
resolution becomes finer.
4.1.4 An Example
one horizontal cut through Fig. 4.12a. Since the very bright antenna dominates
the scene, its corresponding data are clearly visible in Fig. 4.12a. We are viewing
the system impulse response.
The curved trajectory in Fig. 4.12a is the locus of the pulse-by-pulse range
compressed peak responses, the nearly parabolic trajectory Eqn. (4.1.26). Range
migration correction is needed, as discussed in Section 4.1.3. The linear (walk)
component, the first term in Eqn. (4.1.40), is removed using a procedure discussed
in Section 4.2.3 below. The result is the locus of Fig. 4.12b, with only the
quadratic (curvature) migration component present.
Without range curvature, each vertical (constant range) cut through the
complex data field whose amplitude is shown in Fig. 4.12b would yield a complex
function of slow time, a linear FM Doppler signal. However, with curvature
Figure 4.11 Video offset signal and range compressed result for pulse viewing bright scattering point of Fig. 4.10 (from McDonough et al., 1985).
Figure 4.12 Range migration of Goldstone antenna before (a) and after (b) nominal correction (from McDonough et al., 1985).
each cut passes through two arms of the parabolic locus (except for the single
cut at the apex). Each segment of the linear FM waveform traversed by a single
range cut has adequate bandwidth-time product to lock together slow time and
Doppler frequency. Therefore, the two branches of the parabola cut at different
slow times map into different Doppler frequency regions. This is evident in the
Doppler amplitude spectra shown in Fig. 4.13.
The procedure of range (quadratic) curvature correction assembles the spectra
of Fig. 4.13 for the various range cuts into a single Doppler spectrum
corresponding to the range of the parabola apex. That spectrum is then processed
with the Doppler compression filter to yield a line of complex image in slow
time. Fig. 4.14a shows the result of separately compressing four subbands of
the available Doppler spectrum to obtain four statistically independent images
of the antenna point. Fig. 4.14b finally shows the result of adding the intensities
of the four images to obtain a single image line along slow time, at the range
of the antenna point ("multilook" processing). Fig. 4.14b is the constant range
cut through the antenna point in Fig. 4.10.
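The four-look procedure just described can be sketched numerically. The PRF, integration time, and Doppler rate below are illustrative values, not those of the Goldstone scene, and the matched reference is simply the conjugate spectrum of the point target's own Doppler history.

```python
import numpy as np

prf, S, fr = 1600.0, 2.0, -500.0           # illustrative PRF (Hz), time (s), Doppler rate (Hz/s)
s = np.arange(-S / 2, S / 2, 1.0 / prf)
data = np.exp(1j * np.pi * fr * s**2)      # azimuth linear FM history of one point target

spec = np.fft.fftshift(np.fft.fft(data))
comp = spec * np.conj(spec)                # azimuth compression (matched filter) in Doppler domain

image = np.zeros(len(comp))
for idx in np.array_split(np.arange(len(comp)), 4):   # four disjoint Doppler subbands
    sub = np.zeros_like(comp)
    sub[idx] = comp[idx]                   # one "look": compress one subband only
    look = np.fft.ifft(np.fft.ifftshift(sub))
    image += np.abs(look) ** 2             # sum the four look intensities ("multilook")
```

Each look has a broader mainlobe than the full-band result, but the intensity sum still peaks at the target's slow time, illustrating the trade of resolution for statistically independent looks.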
Figure 4.14 (a) Four single-look images of Goldstone antenna in Fig. 4.10. (b) One four-look image resulting from images of (a) (from McDonough et al., 1985).
Figure 4.15 Point reflectors on Goldstone (dry) Lake, showing attained resolution and sidelobe structure (from McDonough et al., 1985).
In Fig. 4.10, the antenna point so dominates the scene that no other structure
appears in the image cut Fig. 4.14b. On the other hand, Fig. 4.15 shows a detail
of the image of small point reflectors near the large antenna (Fig. 7.16) on
the dry bed of Goldstone Lake (a smooth background which appears dark to
the radar). The sidelobe structure and mainlobe width of the radar and image
formation algorithm response to a point target are plotted as cuts through the
rightmost reflector point.
4.2 COMPRESSION PROCESSING

The received waveform at the radar in response to a unit point target with
coordinates (xc, Rc) (Fig. 4.1) is the impulse response

h(x, R | xc, Rc) = cos{2π[fc(t − τ) + K(t − τ)²/2]},  τ = 2R(s)/c   (4.2.2)
The unique aspect of SAR processing is the compression of the complex range
data Eqn. (4.1.24) in the slow (azimuth) time variables. In order to carry that
out, it is necessary that the results of range processing of perhaps thousands of
radar pulses be available. Since each radar pulse produces thousands of range
time samples, the memory requirements on the computer are considerable. In
addition, range data are naturally produced and ordered with range as the
minor index, and pulse number as the major index. For azimuth processing,
the reverse is needed. This leads to the necessity for some kind of "corner
turning" in order to access the data matrix by columns after having stored it
by rows. With the availability of increasingly large random access memory, or
with the construction of special purpose computing units, these difficulties have
tended to recede in importance. However, in the earlier development of SAR
processing algorithms for data from space platforms they were a considerable
hindrance to achieving high speed image formation.
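As a toy illustration (not from the text), the corner turn amounts to a matrix transpose between the pulse-major order in which the data arrive and the range-major order that azimuth processing needs:

```python
import numpy as np

# Raw SAR data arrive pulse by pulse: each row holds one pulse's range samples
# (range is the minor index, pulse number the major index).
n_pulses, n_range = 6, 4
raw = np.arange(n_pulses * n_range).reshape(n_pulses, n_range)  # [pulse, range]

# "Corner turning" is a (large, possibly out-of-core) transpose, after which
# each row is the slow-time history of one range bin, ready for azimuth work.
turned = raw.T.copy()                                           # [range, pulse]

# The range line of pulse 2 becomes column 2 of the turned array:
assert np.array_equal(raw[2, :], turned[:, 2])
```

For spaceborne data sizes the transpose is done blockwise through disk or large RAM, but the logical operation is exactly this.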
In Chapter 9 considerable attention is given to the computing systems which
have been developed to carry out the SAR imaging process. Here we will be
concerned entirely with the signal processing algorithms which act on the data,
assuming it is available where and when needed. The variety of approaches
taken by various designers is a reflection of the difficulty of the problem. There
is no clear-cut "best" way to proceed, although lately the trade-offs among
various alternatives have become much clearer.
We begin the discussion with some details common to all processors which
use the rectangular algorithm. We then discuss an azimuth compression
algorithm which is in concept the most direct of the various algorithms in
current use, the time domain processor. This is followed by a detailed discussion
of azimuth compression algorithms which operate in the Doppler frequency
domain. The computational aspects of these algorithms are discussed in
Section 9.2.
4.2.1
The slow time variable is s = x/Va, where Va is the speed of the platform along
its path.
The received signal Eqn. (4.2.2) is often converted to some different frequency
band (S-band, for example), perhaps for transmission to ground, and further
converted after ground station reception to a relatively low frequency carrier
f₁ (the offset video frequency) (Fig. 4.16). The result is an offset video impulse
response function

h(s, t | xc, Rc) = cos{2π[f₁t − 2R(s)/λ + K(t − 2R(s)/c)²/2]}   (4.2.4)
To be specific, we will assume henceforth that the transmitted pulse is the linear
FM that is commonly used in remote sensing SAR systems,

s(t) = cos[2π(fct + Kt²/2)]   (4.2.1)
Figure 4.16 (a) Conversion of RF signal to video offset signal. (b) Complex basebanding (I, Q detection) of offset video signal.
The range of s values over which this is effectively nonzero depends on the
radar antenna beamwidth θa = λ/La, since that determines the length of slow
time S = θaRc/Va for which any particular terrain point is in view.
The received data array vr(s, t) will be roughly rectangular, and will extend
in slant range R = ct/2 the full swath width Ws, and in azimuth x = Vas
some indefinite amount depending on the amount of data which must be
simultaneously accessed for image processing. The impulse response Eqn. (4.2.4)
will cover a region, as indicated in Fig. 4.17, which is of extent cτp/2 in slant
range R for every x. The extent of x over which the impulse response is nonzero
is not sharply defined, since the edges of the antenna beam are not sharp. The
midpoint of the region of the impulse response, shown in Fig. 4.17 as a solid
line, is the curve Eqn. (4.2.3), which is often well approximated as a parabola.
The real valued data vr(s, t) is naturally sampled in slow time s at the radar
pulse repetition frequency. In fast time t the sampling is done after down
conversion to the offset video frequency at a rate a little above the Nyquist
rate (Appendix A). This is typically somewhat greater than 2BR, where BR is
the bandwidth of the radar pulse around the carrier (Fig. 4.16).
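The offset-video-to-baseband chain of Fig. 4.16b can be sketched as follows. The sample rate and offset video frequency are the text's Seasat numbers, but the test tone and the one-sided-spectrum (analytic signal) method of I, Q detection are illustrative assumptions.

```python
import numpy as np

fs, f1 = 45.53e6, 11.38e6                  # Seasat-like sample rate and offset video carrier
n = 4096
t = np.arange(n) / fs
x = np.cos(2 * np.pi * (f1 + 2.0e6) * t)   # hypothetical test tone 2 MHz above the carrier

X = np.fft.fft(x)
f = np.fft.fftfreq(n, 1 / fs)
X[f < 0] = 0                               # keep the positive-frequency half only...
v = 2 * np.fft.ifft(X) * np.exp(-2j * np.pi * f1 * t)   # ...and mix down to complex baseband

# The basebanded (I, Q) signal should now carry the tone at +2 MHz:
peak = np.abs(f[np.argmax(np.abs(np.fft.fft(v)))])
```

In hardware the same result is obtained with quadrature mixers and low-pass filters; the spectral method above is just the cleanest way to show the operation.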
As an example of the size of this real data matrix, the Seasat offset video
frequency f₁ = 11.38 MHz required a sampling frequency somewhat greater
than 2BR = 38 MHz, and 4f₁ = 45.53 MHz was used. The target point
illuminated may be located anywhere in the range swath. Therefore provision
must be made to store sampled values for each pulse over a time span nominally
equal to the slant range swath width Ws plus the pulse width τp. For Seasat,
this was (2/c)(37 km) + 33.8 µs = 280 µs; in fact, 300 µs was used, resulting in
(exactly) 13680 real data samples to be stored. In the along-track coordinate
x, the Seasat impulse response spans about 4000 pulses, while something like
The positive frequency components of the converted signal have phase

φ = 2π[(fc − fL)t + Kt²/2]
and frequency rate K the same as the transmitted waveform. The appropriate
operation is to shift that part of the spectrum left to center on zero.
The case fL > fc may also occur in the system hardware arrangement, in
which case the spectra in Fig. 4.16a cross over. It is then the negative frequency
components of Eqn. (4.2.5) which have frequency rate K (rather than −K).
The basebanding operation is then to shift the left half of the spectrum to the
right to center on zero. In either case of fL > fc or fL < fc, the resulting spectrum
V̄r(s, f) of the basebanded data corresponds to a complex time function v̄r(s, t)
for each pulse time s, which from Eqn. (4.2.4) is
v̄r(s, t) = 0.5 exp[−j4πR(s)/λ] exp{jπK[t − 2R(s)/c]²},  |t − 2R(s)/c| ≤ τp/2

Figure 4.17 Span in memory of responses to point targets at (xc, Rc) beam center coordinates.
The spectrum of this, except for the constant, is the phase factor

exp[−j4πfR(s)/c]   (4.2.6)
corresponding to the time shift t = 2R(s)/c, times the spectrum of the complex
basebanded transmitted pulse Eqn. (4.2.1),

s̄(t) = 0.5 exp(jπKt²)   (4.2.7)
Since the pulse Eqn. (4.2.7) by assumption has a large bandwidth-time product,
its bandwidth is just BR = |K|τp, and its spectrum is (Eqn. (3.2.29))

S̄(f) ∝ exp(−jπf²/K),  |f| < BR/2   (4.2.8)

The basebanded signal spectrum is then the product of this with the phase
factor Eqn. (4.2.6),

V̄r(s, f) = S̄(f) exp[−j4πR(s)/λ] exp[−j4πfR(s)/c],  |f| < BR/2   (4.2.9)

Since the transmitted spectrum Eqn. (4.2.8) has constant amplitude, the
compression filter is just the matched filter

H(f) ∝ exp(jπf²/K),  |f| < BR/2   (4.2.10)

Applying this filter to the basebanded signal spectrum Eqn. (4.2.9) yields for
each radar pulse a filter output

G(s, f) = H(f)V̄r(s, f) = exp[−j4πR(s)/λ] exp[−j4πfR(s)/c],  |f| < BR/2   (4.2.11)

Range processing could instead be carried out directly as a correlation in the
time domain,

g(s, t) = ∫ from t−τp/2 to t+τp/2 of v̄r(s, t′) s̄*(t′ − t) dt′   (4.2.13)
Since the correlation operation Eqn. (4.2.13) is stationary in this case of range
processing (Appendix A), i.e., the integrand involves s̄(t′ − t), and not s̄(t′ | t),
the matched filter realization Eqn. (4.2.11) is exact. There is no reason to carry
out range processing as a correlation, unless it is more efficient than the fast
convolution processing involved in matched filtering. That will only be the case
for a transmitted pulse which spans a small (less than say 64) number of time
samples, so that the time bandwidth product is less than 64. This is rarely the
case, although in at least one aircraft system (Bennett and Cumming, 1979)
range processing (as well as azimuth processing) was realized as a time domain
correlation (convolution).
It is worth recalling that the matched filter output Eqn. (4.2.12) is
approximately correct even for transmitted pulses with rather small bandwidth
time products, on the order of 20, provided the full signal and matched filter
bandwidth are used for whatever pulse is transmitted (Section 3.2.2).
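A sketch of fast-convolution range compression follows. The chirp rate is an assumed Seasat-like value (BR/τp ≈ 19 MHz / 33.8 µs); the delay is a hypothetical test value, not from the text.

```python
import numpy as np

fs, tp, K = 45.53e6, 33.8e-6, 0.562e12     # sample rate, pulse width, chirp rate (assumed)
n = 4096
t = np.arange(n) / fs
delay = 1000 / fs                          # echo delayed by 1000 samples (t = 2R/c)

def chirp(tt):
    """Complex basebanded linear FM pulse, nonzero on 0 <= tt < tp."""
    on = (tt >= 0) & (tt < tp)
    return np.where(on, np.exp(1j * np.pi * K * (tt - tp / 2) ** 2), 0)

echo = chirp(t - delay)
ref = chirp(t)
# Fast convolution: multiply spectra by the conjugate reference, inverse transform.
g = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(ref)))
```

The compressed output |g| is a narrow sinc-like pulse at the echo's fast-time delay, the discrete counterpart of Eqn. (4.2.12).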
It is in the stage of azimuth (slow time) processing that matters become more
complicated. This is because the azimuth impulse response function Eqn. (4.2.12)
depends on range R₀, through Eqn. (4.2.3). The compression filter is thereby
non-stationary, and processing in the frequency domain (fast convolution)
requires care. Second, the data to be compressed lie along the range migration
curve Eqn. (4.2.3). Both these effects were discussed in Section 4.1.3. We will
now discuss the ways in which they affect SAR azimuth compressor design.
g(s, t) = BR exp[−j4πR(s)/λ] sinc{πBR[t − 2R(s)/c]}   *(4.2.12)

where BR = |K|τp is the transmitted pulse bandwidth and sinc(u) = (sin u)/u.
This is of nominal time width δt = 1/BR, and is the result of range compression.
The collection of these over slow time s constitutes the data for azimuth
compression.
The range compression operations will usually be carried out digitally. We
describe the details in Section 5.1. The effect is that values of the range
compressed data Eqn. (4.2.12) are available only at fast times which are integer
multiples tk = k/fs of the sampling interval 1/fs of the complex video data
(fs > BR). This time quantization step due to sampling is usually of the order
of the nominal width δt = 1/BR of the compressed response function Eqn.
(4.2.12). As a result, range interpolation of the range compressed data array is
4.2.2
The most straightforward way to deal with the problems of range migration
and point dependent impulse response in azimuth processing is that implemented
in the processor of the RAE of Great Britain (Barber, 1985). In this procedure,
azimuth correlation is carried out on basebanded range compressed data,
corresponding to Eqn. (4.2.12), using the correlation kernel Eqn. (4.1.30), taking
account that h⁻¹ depends weakly on s₀. Since only one image point is produced
for each correlation operation, the process is markedly slower than procedures
which use fast convolution in the slow time domain. On the other hand, no
approximations are necessary such as are required to use fast convolution in
azimuth time in the usual case of range migration and non-constant filter
parameters.
h⁻¹(s | sc, Rc) = exp{−j2π[fDC(s − sc) + fR(s − sc)²/2 + ḟR(s − sc)³/6]}   (4.2.15)

with both fDC and fR depending (markedly) on Rc and (weakly) on sc. (The
cubic term in the expansion Eqn. (4.1.26) is retained for better accuracy.) The
basebanded range compressed data g(s, t) corresponding to Eqn. (4.2.12) are
collected along the trajectory (Fig. 4.5)

g(sn | sc, Rc) = g[sn, 2R(sn | sc, Rc)/c]   (4.2.16)

and correlated with the kernel Eqn. (4.2.15),

f(sc, Rc) = Σn h⁻¹(sn | sc, Rc) g(sn | sc, Rc)   (4.2.17)
or with some more accurate numerical procedure. Regardless of sn, sc, Rc, the
values of h⁻¹(sn | sc, Rc) in Eqn. (4.2.17) can be calculated using Eqn. (4.2.15)
and refined orbit data, according to the equations of Appendix B. Analysis
indicates that for an L-band SAR, and especially operating at high latitudes,
the cubic term in Eqn. (4.2.15) is only marginally negligible. Although there is
no difficulty in doing a precise calculation of these values, with negligible error
the values of fDC, fR, ḟR can be calculated on a grid of some reasonable fineness
(e.g., 10 × 10 km), and polynomial interpolation used to the particular sc, Rc
of interest.
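The correlation of Eqns. (4.2.15) and (4.2.17) can be sketched for a single image point. The Doppler centroid and rate below are assumed illustrative values, and the gathering of data along the migration trajectory is omitted: only the kernel correlation itself is shown.

```python
import numpy as np

prf, S = 1600.0, 1.0                        # assumed PRF (Hz) and integration time (s)
fdc, fr = 120.0, -520.0                     # assumed Doppler centroid (Hz) and rate (Hz/s)
s = np.arange(-S / 2, S / 2, 1.0 / prf)     # slow times sn - sc

# Data for a unit point target at (sc, Rc): its Doppler phase history
g = np.exp(2j * np.pi * (fdc * s + 0.5 * fr * s**2))

h_inv = np.conj(g)                          # correlation kernel, Eqn. (4.2.15)
f_image = np.sum(h_inv * g)                 # image value at (sc, Rc), Eqn. (4.2.17)

# A target whose history is offset in slow time correlates to nearly nothing:
g_off = np.exp(2j * np.pi * (fdc * s + 0.5 * fr * (s - 0.05) ** 2))
f_off = np.sum(h_inv * g_off)
```

The matched point gains coherently (|f_image| equals the number of pulses), while the mismatched one does not, which is exactly why one such correlation is needed per image point.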
Although values of h⁻¹(sn | sc, Rc) are available for any sn, sc, Rc, the same
is not true of the data function g(sn | sc, Rc), since for the specified sc, Rc the
locus point R(sn | sc, Rc) will only by coincidence coincide with a range sampling
point. For each sn, in general, range interpolation is needed to find the value
Eqn. (4.2.16). This is conveniently done by "zero padding" the Fourier
coefficients corresponding to the basebanded range compressed data spectrum
Eqn. (4.2.11) before taking the inverse transform to obtain time samples
corresponding to Eqn. (4.2.12). The procedure, discussed in Appendix A, yields
values Eqn. (4.2.16) on a finer grid (within memory constraints) than the
grid of the time sampled Eqn. (4.2.12), to allow the interpolated value at the
range nearest to R(sn | sc, Rc) to suffice.
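The zero-padding interpolation can be sketched directly: insert zeros between the positive and negative halves of the spectrum, inverse transform, and the original samples reappear on a grid P times finer.

```python
import numpy as np

def zero_pad_interp(x, P):
    """Interpolate a complex basebanded signal by factor P via spectral zero padding."""
    n = len(x)
    X = np.fft.fft(x)
    Xp = np.zeros(n * P, dtype=complex)
    Xp[: n // 2] = X[: n // 2]             # keep the positive-frequency coefficients
    Xp[-(n // 2):] = X[-(n // 2):]         # and the negative-frequency coefficients
    return P * np.fft.ifft(Xp)             # scale so the original samples are preserved

x = np.exp(2j * np.pi * 3 * np.arange(32) / 32)   # a band-limited test signal
y = zero_pad_interp(x, 4)
assert np.allclose(y[::4], x)                     # original samples recovered exactly
```

The intermediate points of y are band-limited interpolates, so the sample nearest the locus range R(sn | sc, Rc) can be used directly.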
The considerable additional calculations needed to produce a final SAR
image are relatively generic to all processors, and will be discussed separately
in later chapters. These involve such things as Doppler filtering to obtain data
needed for subaperture processing as a part of formation of multilook images
(Section 5.2), radiometric (Chapter 7) and geometric (Chapter 8) corrections,
resampling to a standard grid (Chapter 8), automatic determination of values
fDC, fR (clutterlock and autofocus) (Section 5.3), and so on. Some of these are
not needed by a time domain processor, such as resampling, and some (e.g.
autofocus) need not be used if accurate orbit information is available, but all
will be discussed to some extent in later sections.
4.2.3
The procedure of forming an image from SAR data encounters two basic
difficulties. The first is that the system impulse response h(x, R | xc, Rc) depends
strongly on Rc. That is, the system responds differently to targets which are at
different ranges Rc from the radar at the center of the radar beam. The difference
is embodied in the functional form of the range compressed data impulse
response Eqn. (4.2.12), arising from the differing shape of the range to target
function R(s) for different target positions. In time domain azimuth processing,
as in Section 4.2.2, one acknowledges that fact, and uses for correlation whatever
impulse response function corresponds to the image point in question. On the
other hand, if one wishes to use a more efficient fast convolution azimuth
processing (Appendix A), then approximations connected with depth of focus
enter (Section 4.1.3).
The second fundamental difficulty encountered is the fact that range R(s) to
a point varies with position of the radar along its track. Therefore, the numbers
representing the system impulse response are found in data memory along a
curved locus R = R(s). A processing algorithm must access the data to be
compressed in azimuth along that trajectory, the shape of which depends on
the target range Rc. In time domain processing the access is done directly, and
is relatively slow. In algorithms using fast convolution, other procedures have
been developed for access in the frequency domain. In this section and the next
we will describe the two most common such procedures used with the
"rectangular range Doppler" algorithm. In Chapter 10 we will discuss a
procedure which has been developed for so-called polar processing.
The Data Array
Consider then the complex basebanded range compressed response Eqn. (4.2.12)
due to a unit point target at beam center coordinates xc, Rc (Fig. 4.1)

g(s, R) = h(s, R | sc, Rc) = exp[−j4πR(s)/λ] sinc{(2πBR/c)[R − R(s)]}   (4.2.18)
R(s) = Rc − (λ/2)[fDC(s − sc) + fR(s − sc)²/2]   (4.2.19)

with fDC, fR being the Doppler chirp parameters for the scene in question,
depending markedly on Rc and weakly on sc. (In Eqn. (4.2.18), we ignore the
antenna weighting pattern for simplicity of writing. It can easily be included in
the compression filter, but often is not, in order to provide sidelobe control.)
For a particular pulse number m, corresponding to azimuth time sm, the
values of Eqn. (4.2.18) for various range bins Rn,
Once these values are found, azimuth compression proceeds by computing their
spectrum over some range of slow time s, multiplying by the corresponding
matched filter spectrum for the image range Rc in question, and inverse
transforming. This achieves the matched filter computation of the correlator
output Eqn. (4.2.14) for a full azimuth line of image.
The procedure described in this section has been used by Bennett et al. (1981),
Herland (1981, 1982), and McDonough et al. (1985). The interpolation
necessary to compute the numbers which would be present in the data matrix
along the trajectory R(s) (Fig. 4.18), given the numbers which are present at
the nodes of the matrix, is carried out mostly in the time domain, before azimuth
Fourier transformation. The remaining interpolation operations are carried out
in the Doppler frequency domain. In effect, the bulk of the range walk, the
linear component of R(s) in Eqn. (4.2.19), is removed before azimuth Fourier
transformation of the data, with the remaining small range walk, and the full
range curvature (the quadratic term of R(s)), removed in the frequency domain.
Skewing the Data Array
We begin by choosing a nominal value R′c of Rc, say the midswath value, and
a nominal s′c, say the midscene value. The corresponding Doppler center
frequency f′DC is assumed to be known, perhaps by a clutterlock procedure
(Chapter 5) used in conjunction with the simple model for fDC as a function of
Rc developed in Appendix B. For the entire data field, at all ranges R, we now
remove an amount of range migration corresponding to a range independent
linear walk,

ΔR(s) = −(λ/2)f′DC(s − s′c)   (4.2.21)

Figure 4.18 Data samples and the point target locus R = R(s) through the data matrix.

The shift is carried out on each range line in the frequency domain by a linear
phase ramp,

G′k = Gk exp(j2πkΔR/Nδxs)   (4.2.22)

where the slant range sampling interval is δxs = c/2fs. In particular, for
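The phase-ramp shift of Eqn. (4.2.22) can be sketched as follows; the range bin spacing is an assumed value, and an integer shift is used so the result can be checked exactly, though the same ramp performs fractional (sub-bin) shifts.

```python
import numpy as np

def range_shift(row, dr, dx):
    """Shift one range line by dr meters via a linear phase ramp on its spectrum,
    as in Eqn. (4.2.22); dx is the slant range sample spacing c/(2 fs)."""
    n = len(row)
    k = np.fft.fftfreq(n) * n                 # signed FFT bin indices
    G = np.fft.fft(row)
    return np.fft.ifft(G * np.exp(2j * np.pi * k * dr / (n * dx)))

dx = 3.3                                      # assumed meters per range bin
row = np.zeros(64, dtype=complex)
row[20] = 1.0                                 # a point response in bin 20
shifted = range_shift(row, 5 * dx, dx)        # remove 5 bins of range walk
```

After the shift the point response sits in bin 15; for non-integer dr the same code produces the band-limited interpolated shift that the P precomputed coefficient sets approximate.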
and stored. Which set to use for any given data row (radar pulse) is determined
by calculating the index p, p = 0, 1, ..., P − 1, such that the fractional shift is
nearest p/P (Eqn. (4.2.24)).
Since the operations of range compression and interpolation are both linear
and stationary, they commute. We can therefore interpolate directly on the
complex basebanded range data Eqn. (4.2.6), and then apply the range
compression filter. Therefore we can precompute P sets of range compression
filter coefficients

H′k(p) = Hk Fk(p),  p = 0, ..., P − 1

where Hk is the usual range compression filter and Fk(p) is the appropriate set
of interpolator coefficients Eqn. (4.2.23), calculated for the value of α
corresponding to p as in Eqn. (4.2.24). After compression and interpolation, the shifting
operation by the appropriate integer number of range bins amounts simply to
re-indexing the output of the compression filter before storing in the data matrix.
After this compression and interpolation process, the data corresponding to
a point target at some beam center slant range Rc lie within 1/P of a complex
range bin of the locus R = R(s) − ΔR(s), where ΔR is the total shift Eqn. (4.2.22)
carried out in correcting for the nominal linear range migration. This can be
written as Eqn. (4.2.25), with final term λf′DCsc/2.

Figure 4.19
The last term of this represents a skewing of the final image, which can be
removed after azimuth compression. The remaining terms of R(s) - Re represent
a residual range migration after the interpolation and re-indexing procedure.
Doppler Domain Interpolation
Fk(α) = exp(j2πkα/N)   (4.2.23)

Each row of the data matrix will in general be associated with a different value
of α,

α = ΔR/δxs − integer(ΔR/δxs)

where "integer" indicates the integer part of the number. To avoid the necessity
of computing the interpolating filter coefficients during data processing, α can
be quantized into some appropriate number P of levels (four or eight, typically),
and the corresponding sets of interpolator coefficients exp(j2πkα/N) precomputed
and stored.
For Seasat-like systems, the Doppler center frequency fDC varies by only a few
hundred Hertz over the range swath, while the azimuth extent of the point
target response is a few seconds at most. Even for the larger values of λ, say
at L-band, for which migration effects are more severe, the residual linear and
the quadratic terms together, Eqn. (4.2.25), amount to only a few tens of range
bins over the full point target response history. For Seasat, for example, from
Eqn. (4.1.34) the nominal integration time is S = 2.4 seconds. Using a nominal
value fDC − f′DC = 100 Hz, the residual range walk in Eqn. (4.2.25) is 28 m, or
about 5 range bins, while with fR = 500 Hz/s the curvature amounts to 7 range
bins. Thus, the bandwidth-time product of the interpolated and shifted data in
each range bin is on the order of 1/12 the full azimuth product (3200 for Seasat),
or about 250 per bin, which is more than enough to lock together time and
Doppler frequency in each bin. (A basebanded waveform of length T and
two-sided band B, sampled at fs = B, yields a number of samples N = BT, the
bandwidth-time product.)
With time and frequency locked together by

s − sc = (f − fDC)/fR   (4.2.26)
For each value of Rc for which an image line (s, Rc) is to be constructed, we
need to assemble the proper Doppler spectrum for azimuth compression
processing from data G′(f, R) located at Rc + δ′R for each frequency f of the
discrete spectrum over the Doppler band. Although Rc will be an integral
number of range bins, generally δ′R will not, so that there will not usually be
a data node at (f, Rc + δ′R). Interpolation is then needed, to calculate
G′(f, Rc + δ′R) from adjacent values G′(f, nδxs). Simple polynomial interpolation
using perhaps four adjacent values suffices. This finally corrects the last range
migration effect, and compression in azimuth follows using the appropriate
sidelobe weighted compression filter.
In the case of small range migration, such as for a Seasat-like system with
beam squint angle at most a fraction of a degree, it may not be necessary to
do any time domain adjustments. All the range migration can then be removed
in the Doppler frequency domain using Eqn. (4.2.26) (Bennett et al., 1980), taking
fDC = f′DC = 0.
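The Doppler-domain curvature correction can be sketched on synthetic data. The wavelength, Doppler rate, PRF, and bin size below are assumed illustrative values, and nearest-neighbor shifting stands in for the short polynomial interpolation the text describes.

```python
import numpy as np

lam, fr, prf = 0.235, -500.0, 1000.0       # assumed wavelength (m), Doppler rate, PRF
n_az, n_rg, dx = 256, 64, 6.6              # grid sizes; dx = assumed meters per range bin
f = np.fft.fftfreq(n_az) * prf             # Doppler frequency of each azimuth bin

# In the Doppler domain a target's energy sits dr(f) bins from its apex range,
# following the residual quadratic migration under the time-frequency locking:
dr = -lam * f**2 / (4 * fr) / dx

# Synthetic data: a target at range bin 30, smeared along the curvature locus
G = np.zeros((n_az, n_rg), dtype=complex)
for i in range(n_az):
    G[i, int(np.round(30 + dr[i]))] = 1.0

# Correction: shift each Doppler row back by dr (nearest neighbor here)
Gc = np.zeros_like(G)
for i in range(n_az):
    Gc[i] = np.roll(G[i], -int(np.round(dr[i])))
```

After the per-frequency shifts, all the target energy is collated at the apex range bin, ready for azimuth compression.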
Criterion for Success of the Interpolation
The procedure we have described here is simple and accurate, unless the linear
range walk is excessive. The potential difficulty in the case of large range walk
(which the technique of secondary range compression is designed to circumvent,
as described in Section 4.2.4) can be understood from Fig. 4.20. By removal of
the nominal linear range walk in the time domain, we are in effect carrying out
compression processing along the indicated diagonal line through memory. As
shown in the figure, targets with different values of Rc have their data lying
near the same diagonal. Since the azimuth chirp constant fR depends on Rc,
along the line of analysis there occur linear FM functions in the Doppler domain
with different chirp constants. These will all be compressed by the same azimuth
compression filter, embodying some fixed value fR. Any target for which the
filter constant fR differs from the target constant more than allowed by the
Figure 4.20 Two point targets with extreme range migration may involve chirp constants which
exceed the azimuth depth of focus.
depth of focus will be defocussed. Therefore, the length of azimuth time used
in batch processing in the fast azimuth compression process must be short
enough so that for whatever nominal range walk is present, the span of values
Re is within the depth of focus of the processor. In extreme cases (for example,
with squint angle more than a degree, especially at L-band and lower), this
may force the azimuth FFT length to be shorter than would otherwise be desired.
Since the nominal range walk locus in Fig. 4.20 is given by Eqn. (4.2.21),
where f′DC is the selected (say midswath) value used in the compensation
procedure, and s′c, R′c are say the midscene values, the slope of the nominal
walk line is

ds/dR = −2/λf′DC

For an azimuth analysis time span Δs, the span of target values Rc included is then
holds quite closely, with V taken as a velocity parameter which depends only
weakly on s and not on R. Therefore, the change in target fR across the span
ΔRc follows as Eqn. (4.2.27).
If we require a mismatch ratio Eqn. (4.1.48) within tolerance, the parameter ε
depends on the system depth of focus, discussed in Section 4.1.3. There it was
determined that

γ = 2/B₀S

was within good tolerance, where B₀S is the system azimuth bandwidth-time
product. Using Seasat values, say Rc = 850 km, λ = 0.235, and (marginally)
with ε = 0.001, if we want to use say 8K azimuth points for efficient fast
convolution, with a PRF of 1650 Hz we must have
(4.2.29)
4.2.4
The advantage in processing speed which fast correlation, based in the frequency
domain, has over time domain correlation is considerable. There is a strong
incentive, therefore, to use fast convolution. A transmitted pulse will result in
a received response to a unit point target whose positive frequency portion is

s(t − 2R/c) = exp{j[2πfc(t − 2R/c) + φ(t − 2R/c)]}   (4.2.30)

where φ(t) is the phase modulation of the transmitted pulse.
Range compression of the received data is easily carried out as the first operation
of image formation. The result corresponds to an impulse response which is
the range compressed version of Eqn. (4.2.31). Let S̄(ν) be the spectrum of the
basebanded transmitted signal:

S̄(ν) = F{exp[jφ(t)]} within the pulse bandwidth, and S̄(ν) = 0 otherwise.
and the function h is that on the right of Eqn. (4.2.33). This is the impulse
response of a two-dimensional system which is approximately stationary in s,
but nonstationary in R, both through the explicit appearance of R₀ and through
the strong dependence of fDC, fR on R₀. We wish to determine its inverse, the
corresponding image formation operator to be used on range compressed
basebanded data.
Image Formation and Secondary Range Compression
The inverse system is specified by the two-dimensional transform relation

f̂(s, R) = F⁻¹{G/H}   (4.2.36)

where the inverse Fourier transform is two dimensional, G and H are the two
dimensional Fourier transforms of g(s, R), the range compressed complex data,
and of h(s, R | R₀), and the quantity G/H is defined as zero for any frequencies for
which H is zero. Writing the two dimensional inverse transform in Eqn. (4.2.36)
as a sequence of one-dimensional transforms, we have
*(4.2.37)

so that

g(s, t) = BR exp[−j4πR(s)/λ] sinc{πBR[t − 2R(s)/c]}   (4.2.32)
The response function Eqn. (4.2.32) involves both s₀ and R₀ other than in the
combinations s − s₀ and R − R₀. That is to say, the linear radar system is
nonstationary (Appendix A). However, the corresponding impulse response is
well approximated as
where the convolution is in the variable R, and ĝ(f, R) is the Doppler spectrum
of the range compressed data field taken for fixed R. We now need the function
ĥ(f, R | R₀) in order to describe the imaging algorithm.
(4.2.33)

where we redefine the function h in so writing. In this, we take note that s₀
enters into the expression Eqn. (4.2.30) only in the form s − s₀, and in the weak
dependence of fDC, fR on s₀.
From Eqn. (4.2.32), we can then write the impulse response for range
compressed data as

h(s, R | R₀) = BR exp[−j4πR₁(s)/λ] sinc{(2πBR/c)[R − R₁(s)]}   *(4.2.34)

where R₁(s) is the migration locus referred to R₀ (Eqn. (4.2.35)). The Doppler
spectrum of the system function Eqn. (4.2.34) is

ĥ(f, R | R₀) = BR ∫ from −∞ to ∞ of G(s) exp[−j4πR₁(s)/λ] sinc{(2πBR/c)[R − R₁(s)]} exp(−j2πfs) ds   (4.2.38)

where we have explicitly inserted R₁(s) from Eqn. (4.2.35), and where we also
include the two way antenna voltage pattern G(s) in azimuth. (This is the
one-way power pattern G(θ, φ) evaluated at constant slant range and expressed
as a function of azimuth time.) Since we include the pattern G(s), the limits can
be left as infinite, although the antenna effectively imposes the limits (−S/2, S/2),
where S is the integration time of the SAR. In evaluating this integral, a second
order approximation based on the method of stationary phase, discussed in
Section 4.2.2, leads to the result of Jin and Wu (1984).
For the second spectrum, since we have the inverse transform relation

(π/a) ∫ from −a/2π to a/2π of exp(j2πfs) df = sinc(as)

we have
The points of stationary phase are

s̄ = (f − fDC)/fR   (4.2.39)

which is just the locking relationship between time and frequency familiar for
waveforms with high bandwidth-time product.
In the integral Eqn. (4.2.38) we do not replace slow time s in the amplitude
factors of the integrand by the stationary points Eqn. (4.2.39) everywhere, but
rather only in the second order (s²) term of the sinc function. This is because
we want to allow for a large range walk term fDCs in the locus R₁(s), and
therefore make no approximation there. Specifically, the linear part of the range
migration at the end of the integration time, λ|fDC|S/4, may be larger than the
quadratic part, λ|fR|S²/16. On the other hand, if the linear range walk is small,
no harm is done by the approximation of s = s̄ in the quadratic term of the
sinc argument, because for small range walk the stationary phase approximation
becomes increasingly accurate.
With these replacements, we obtain the spectrum Eqn. (4.2.38) as

ĥ(f, R | Rc) = BR G(s̄) exp(−j4πRc/λ) ∫ from −∞ to ∞ of G₁(f − f′) G₂(f′) df′   (4.2.40)

where

g₁(s) = exp[j2π(fDCs + fRs²/2)]   (4.2.41)

and g₂(s) is the remaining (sinc) factor of the integrand (Eqn. (4.2.42)).
Therefore, we need to compute the convolution of two constituent spectra G1 (f)
and G2 (f) (the spectrum of the product g 1 g 2 ).
For the first spectrum, we have at once from Eqn. (3.2.29) the result Eqn. (4.2.43),
with s̄ = (f − fDC)/fR. Eqn. (4.2.43) is the central result of Jin and
Wu (1984), up to a constant multiplier (λfDC/2)(2/|fR|)^(1/2).
Jin and Wu ( 1984) present plots of their function IA(RIRe)I for various values
of the parameter ex = (A.foe BR/ c )2 /I fR I, shown here as Fig. 4.21. The parameter
ex is the bandwidth (2R/c) "time" (x) product of the chirp transform evident
in A(RIRe) of Eqn. (4.2.44). Even for rather large (many kHz) values of foe ex
is small (say < 10), so that, for a side-looking SAR, A(RIRe) never has the
shape of a chirp in frequency. Rather, A(RIRe) is of the shape of a typical low
bandwidth time product spectrum.
Proceeding further towards the explicit form of Eqn. ( 4.2.37), from
Eqn. (4.2.44), letting x = cv/2 it is recognized that
B.lc
A(RIRe) = (c/2)
since the waveform Eqn. (4.2.41) has high bandwidth time product.
and
+ fRs 2 /2)]
-B.12
00
g 1 (s) = exp[j2n(f0 es
+ j(n/4)sgn(fR)JlfRl- 1' 2
-B.fc exp{j[2nvR
- (re/ fR)(A.foe/2) 2 v2 ]} dv
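The locking relationship Eqn. (4.2.39) is easy to verify numerically: the spectrum of the azimuth chirp g1(s) of Eqn. (4.2.41) occupies just the band that the map f = f_Dc + f_R s predicts. A minimal NumPy sketch; the parameter values below are illustrative, not taken from the text:

```python
import numpy as np

# Illustrative azimuth parameters (assumed values, not from the text)
f_dc = 200.0     # Doppler centroid, Hz
f_r = -400.0     # azimuth frequency rate, Hz/s
S = 2.0          # integration time, s
prf = 1600.0     # slow-time sampling rate, Hz

n = int(S * prf)
s = (np.arange(n) - n // 2) / prf                            # slow time, |s| < S/2
g1 = np.exp(1j * 2 * np.pi * (f_dc * s + 0.5 * f_r * s**2))  # Eqn. (4.2.41)

spec = np.fft.fftshift(np.fft.fft(g1))
f = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / prf))

# The locking relation s* = (f - f_dc)/f_r maps |s| < S/2 onto the band
# |f - f_dc| < |f_r| S / 2; for high time-bandwidth product nearly all
# spectral energy should fall inside that band.
band = np.abs(f - f_dc) <= 1.05 * abs(f_r) * S / 2
frac = np.sum(np.abs(spec[band])**2) / np.sum(np.abs(spec)**2)
print(frac)
```

With the time-bandwidth product |f_R|S² = 1600 here, the in-band fraction is close to unity; it shrinks as the product shrinks, which is also the regime where the stationary phase approximation itself weakens.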
where we write w(f) for the Doppler compression filter, whose phase conjugates the azimuth phase of Eqn. (4.2.43). Then from Eqn. (4.2.43) and Eqn. (4.2.46), the image line at range R_0 is formed as

ζ(s, R_0) = ℱ^{−1}{w(f) B[f, R + R(f)|R_0]}    *(4.2.49)

Figure 4.21 Secondary range compression function for various values of α = (λ f_Dc B_R/c)²/|f_R| (from Jin and Wu, 1984). © IEEE.

The imaging algorithm Eqn. (4.2.49) is the final result obtained by Jin and Wu (1984). The computation of the function B(f, R|R_0) from the range compressed spectra g(f, R) as in Eqn. (4.2.48) is referred to as "secondary range compression", or "azimuthal range compression". The collation of values B[f, R + R(f)|R_0] in Eqn. (4.2.49) is also referred to as "frequency domain range migration correction". Here f_R and f_Dc depend on R_0.

The expression Eqn. (4.2.49) contains the operational prescription for forming the image. The raw radar data are first compressed in range in the usual way to obtain the field g(s, R). Fourier transformation in the slow time coordinate s for every range R, ignoring range migration, yields g(f, R). These data are then correlated over R for each fixed frequency f (and for each R_0, in general) with the function A*(R|Rc), to form the field B(f, R|Rc). Then, for every range R_0 of interest in the image ζ(s, R_0), a spectrum B[f, R + R(f)|R_0] is assembled. That is, for each frequency f for some particular range R_0, we read out the number B[f, R + R(f)|R_0], where R(f) is the frequency domain migration locus of Eqn. (4.2.50); each such number is multiplied by w(f)
to form a single point of the composite Doppler spectrum of ζ(s, Rc). Finally, inverse Fourier transformation yields all azimuth points ζ(s, Rc) of the range line Rc.

Since range compression processing will have been digital, the ranges for which image will be computed are the values at which compressed range function samples were produced (the range bins), the interval between samples being ΔR = c/(2f_s), where f_s is the sampling rate of the range complex video signal. The spacing in the discrete version of the Doppler frequency variable f depends on the span in slow time s over which the azimuth FFT blocks are taken. Thus the field of values B(f, R|Rc) of Eqn. (4.2.48) is on a specified grid in the (f, R) plane. For any particular discrete value of f, and some specified discrete range Rc for which the line of image is being constructed, there will not in general be a discrete range value R(f) of Eqn. (4.2.50) available on the grid. Therefore interpolation is necessary between neighboring nodes of B(f, R|Rc) to find the needed value. Polynomial interpolation using a few points in range at the frequency of interest suffices.
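The grid interpolation step can be sketched as follows. The field values, bin spacing, and migration locus below are synthetic stand-ins (assumptions for illustration only), with a cubic fit through the four range bins nearest each needed point:

```python
import numpy as np

# Synthetic stand-in for B(f, R) on its (f, R) grid (values are illustrative).
nf, nr = 64, 128
dr = 7.9                                   # range bin spacing, m (assumed)
f = np.linspace(-650.0, 650.0, nf)         # Doppler bins, Hz
R0 = 850e3                                 # image range line, m (assumed)
Rgrid = R0 + dr * (np.arange(nr) - nr // 2)

def field(fq, R):
    # Smooth synthetic "B(f, R)" so the interpolation error can be checked.
    return np.exp(-((R - R0) / (40 * dr))**2) * np.cos(fq / 300.0)

B = field(f[:, None], Rgrid[None, :])

# Hypothetical curved migration locus R(f), standing in for Eqn. (4.2.50).
Rf = R0 + 2.5 * dr * ((f - 200.0) / 650.0)**2

def interp_range(row, Rq, k=4):
    # Cubic polynomial through the k range bins nearest Rq, evaluated at Rq.
    i = np.searchsorted(Rgrid, Rq)
    lo = int(np.clip(i - k // 2, 0, nr - k))
    c = np.polyfit(Rgrid[lo:lo + k] - Rq, row[lo:lo + k], k - 1)
    return c[-1]                           # polynomial value at offset 0

assembled = np.array([interp_range(B[i], Rf[i]) for i in range(nf)])
err = np.max(np.abs(assembled - field(f, Rf)))
print(err)
```

For data this smooth the cubic fit is accurate to a tiny fraction of the local variation; a real processor repeats this per frequency bin and per output range line.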
As mentioned above, f_Dc and f_R depend weakly on sc and strongly on Rc. The procedure of the last paragraph must then be carried out in range blocks of size small enough that these parameters are sensibly constant over the block. The variations with sc are usually slow enough to allow use of FFT blocks in slow time of reasonable length (4K or 8K, typically). In range, the changes in f_Dc and f_R are more rapid, and typically these parameters are changed every few tens of range resolution intervals, depending on the processor depth of focus. The parameters are updated, perhaps in accordance with one of the models of Appendix B, as the image production moves across the range swath.
Combined Primary and Secondary Range Compression
Jin and Wu (1984) indicate that the parameters in A(R|Rc) need not be updated at all across a reasonable swath width in range, so that only the parameter values in the phase of the Doppler filter w(f) are critical. For such cases, the secondary range compression operation Eqn. (4.2.48) can be combined with range compression, and therefore done with no additional computations needed beyond what is needed in any case for range compression. The operation Eqn. (4.2.48) of forming B(f, R|Rc) by correlation with the range compressed data can then be realized as
B(f, R) = ∫_{−∞}^{∞} ℱ{g(s, R′)} A*(R′ − R|Rc) dR′

       = ℱ{∫_{−∞}^{∞} g(s, R′) A*(R′ − R|Rc) dR′}

       = ℱ{ℱ_v^{−1}{G(s, v) Ã*(−v)}}    (4.2.51)

where ℱ denotes the slow time (azimuth) transform, G(s, v) is the range transform of the range compressed data g(s, R), and Ã*(v) is the transform of A*(R), the (say) midswath value of A*(R|Rc). The secondary compression filter

Ã*(−v),    |v| < B_R/c    (4.2.52)

using Eqn. (4.2.46), can simply be combined in a product with the primary range compression filter. The result is an adjusted range compression filter, relative to range time t = 2R/c, with transfer function

H(f) = exp(−jπf²/K_e)    (4.2.53)

with K_e an adjusted chirp constant. Since K = B_R/τ_p, the restriction Eqn. (4.2.56), evaluated at the band edge f = B_R/2, then takes the form of Eqn. (4.2.57).
Taking account that "range" frequency v and "time" frequency f are related by v = 2f/c, the secondary filter Eqn. (4.2.52) has a phase function which leads to the validity condition Eqn. (4.2.58). Equation (4.2.58) is essentially that set forth by Jin and Wu (1984). Wong and Cumming (1989) have made a similar calculation and present examples. Equation (4.2.58) is well satisfied across the entire range swath of a Seasat-like system with moderate (~5-10°) squint.
The Hybrid Correlation Algorithm
In the case of small range walk, the secondary range compression process reduces to the hybrid correlation algorithm of Wu et al. (1982b). As Jin and Wu (1984) show by computations (Fig. 4.21), the function A(R|Rc) of Eqn. (4.2.44) has width the order of one range resolution interval, or about one range sampling interval, so long as f_Dc remains below the bound of Eqn. (4.2.59), a value for Seasat of about 1500 Hz. In that case, B(f, R) of Eqn. (4.2.48) is essentially the range compressed data Doppler spectrum itself. Then only the interpolation operation is needed in order to assemble the composite spectra from the azimuth transformed data g(f, R). For proper operation in the usual form (Wu et al., 1982b), the point target response h(s, R|sc, Rc) should have a high bandwidth time product in each range bin, and not simply over the full SAR integration time. This will be the case for range walk small enough that the secondary compression procedure can be dispensed with. In an earlier version of the hybrid correlation algorithm (Wu, 1976), interpolation was not envisioned, and simple nearest neighbor values of the spectra in each range bin were used for the numbers B[f, R + R(f)]. This proved not to be entirely satisfactory in general.

Squint Mode Processing

For large squint, the operations can be rearranged to act on the range transform of the data. With Ṽ(s, v) the range transform of the raw data and H(v) the range compression filter, range compression is carried out in the transform domain:

G(s, v) = H(v) Ṽ(s, v)

The azimuth transform then yields G̃(f, v), to which the secondary compression filter is applied:

B̃(f, v) = Ã*(−v) G̃(f, v)    (4.2.60)

The inverse range transform then yields the field B(f, R):

B(f, R) = ℱ_v^{−1}{B̃(f, v)}

Finally these data are used in the migration correction and azimuth compression procedure of Eqn. (4.2.49).
Chang et al. (1992) present simulations to show that this modified version of the algorithm Eqn. (4.2.49) is accurate in achieving compression for a Seasat-like system at L-band (with 40° look angle) with a squint angle of 15-20°, whereas the algorithm Eqn. (4.2.49) itself begins to degrade at a squint angle of about 5°. Calculations are presented to show that, at a smaller look angle (20°), the algorithm Eqn. (4.2.49) is adequate at squint up to about 10°, while the modified algorithm at squint 20° is successful at a full range transform span of 40 km, and by reduction of the range transform span to 10 km can operate at squint as high as 80°. At C-band, the algorithm Eqn. (4.2.49) itself performs adequately for squint of 40° with a 40 km range transform span and 35° look angle. Matters improve still further at smaller look angles and narrower range transform span.
The algorithm of Chang et al. (1992) is therefore adequate for a broad range of SAR systems. The only restriction is that, since the range curvature terms in Eqn. (4.2.38) are only approximated by using the method of stationary phase to arrive at the spectrum Eqn. (4.2.43), the processor degrades if range curvature is excessive. The situation worsens at lower frequency and higher altitude, since the range curvature ΔR, measured in range resolution cells δx_s as in Section 4.1.3, then increases.
REFERENCES
Barber, B. C. ( 1985). "Theory of digital imaging from orbital synthetic-aperture radar,"
Inter. J. Remote Sensing, 6(7), pp. 1009-1057.
Bennett, J. R. and I. G. Cumming (1979). "Digital SAR image formation: airborne and satellite results," 13th Inter. Symp. Remote Sensing of the Environment, Ann Arbor, Michigan, April 23-27.
Bennett, J. R., I. G. Cumming and R. A. Deane (1980). "The digital processing of Seasat synthetic aperture radar data," Record, IEEE 1980 Inter. Radar Conf., April 28-30, Washington, DC, pp. 168-175.
CHAPTER 5

ANCILLARY PROCESSES IN IMAGE FORMATION
At the heart of any SAR imaging algorithm is the set of correlation operations
by which the point target response (distributed spatially due to the nonzero
pulsewidth and antenna beam width) is compressed to an approximate point.
One family of such procedures, the rectangular algorithm, has been described
in Chapter 4. Another, the polar processing algorithm, will be dealt with in
Chapter 10. In both cases, some operations in addition to correlation are usually
needed. In this chapter we describe five techniques, with particular reference to
the rectangular algorithm, although some of the discussion is more general.
First, we briefly note the precise arrangement of computations for digital
implementation of range compression using fast convolution. We then discuss
the phenomenon of speckle noise, and describe the use of multilook imaging
for its alleviation. We then give a detailed description of some methods by
which the Doppler center frequency f_Dc and azimuth frequency rate parameter f_R, necessary for azimuth compression procedures, can be determined from the
radar data itself. Finally, we describe some ways of resolving the basic image
position ambiguity which arises in pulse radar, which time samples the Doppler
signal underlying SAR operation.
5.1
With rare exceptions, all SAR processors carry out range compression of the
raw data for a large number of radar pulses before beginning azimuth
compression to compute a block of image. Even though some of the details
of range compression depend on the way that azimuth compression is to be
carried out, the main elements are sufficiently alike to make a separate
description efficient. The methods of Appendix A are the basis of the processing
described here. Section 9.2.5 considers the computational complexity of the
procedures.
The continuous time real radar return signal for some particular pulse is of some bandwidth B_R centered on the carrier frequency f_c. By linear frequency shifting operations, this ultimately appears at the input of the A/D converter as a real signal corresponding to the point target response Eqn. (4.2.4) of bandwidth B_R centered on the offset video frequency f_1, with f_1 > B_R/2 necessary to minimize aliasing (Fig. 4.16a), but often f_1 ≈ B_R/2. (In the Seasat case, for example, the range pulsewidth τ_p = 33.8 μs and range chirp constant K = 0.563 MHz/μs resulted in a bandwidth B_R = 19.0 MHz, and the offset video frequency chosen was 11.38 MHz.) For proper digital processing, the continuous time signal v_r(t) is sampled at some rate greater than the Nyquist frequency, which is twice the frequency of the highest frequency component in the signal being sampled, f_1 + B_R/2 (Appendix A). With f_1 ≈ B_R/2, a usual and convenient choice is f_sr = 4f_1 (45.53 MHz for Seasat). This results in some implementational simplification.
(Note that a range bin is not the same as a range resolution cell.)
The range samples are now filtered by the digital range compression filter. If the filter impulse response is h(t), this is sampled at the same rate f_sr as the range data. With a radar pulse length τ_p there are required Q = τ_p f_sr samples (1536 for Seasat). With a linear chirp, the effective transmitted pulse is

s(t) = cos[2π(f_1 t + K t²/2)]    (5.1.1)
relative to the offset video frequency f_1. The filter function is the matched filter:

h(t) = s(−t) = cos[2π(f_1 t − K t²/2)]    (5.1.2)
An FFT size N which is the next power of two greater than or equal to P + Q − 1 is chosen (2^14 = 16384 for Seasat), and the data and filter sequences are filled with zeros to that length (zero padding). Alternatively, a smaller value can be used with the overlap-add or overlap-save procedures described in Appendix A.
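The fast convolution arrangement can be sketched end to end. The pulse parameters below are scaled-down stand-ins for the Seasat numbers (assumptions, chosen so the example runs quickly):

```python
import numpy as np

fs = 48e6        # range sampling rate, Hz (assumed)
tau = 8e-6       # pulse length, s (shorter than Seasat's 33.8 us)
K = 2.0e12       # chirp rate, Hz/s -> bandwidth K*tau = 16 MHz
f1 = 12e6        # offset video frequency, Hz

Q = int(tau * fs)                                   # filter samples
t = np.arange(Q) / fs
s = np.cos(2 * np.pi * (f1 * t + 0.5 * K * t**2))   # pulse, as in Eqn. (5.1.1)
h = s[::-1]                                          # matched filter h(t) = s(-t)

P = 4096                                             # data record length
d = 1000                                             # echo delay, samples
v = np.zeros(P)
v[d:d + Q] = s                                       # one point-target echo

N = 1 << (P + Q - 1).bit_length()                    # next power of two >= P + Q - 1
y = np.fft.irfft(np.fft.rfft(v, N) * np.fft.rfft(h, N), N)  # fast convolution

peak = int(np.argmax(np.abs(y)))
print(peak, d + Q - 1)   # compressed peak at the filter group delay d + Q - 1
```

In practice the filter spectrum would also carry the sidelobe weighting discussed below before the product is taken.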
In Fig. 5.1 are sketched the (periodic) time and frequency waveforms involved
in digital compression of the offset video range pulse Eqn. (5.1.1) using the filter
Eqn. (5.1.2). In every case, the region of computation is the first period of the
function in positive time or frequency, shown as solid lines in Fig. 5.1.
The range compression filter coefficients Fig. 5.1b are computed as the N-point FFT of the sequence computed from Eqn. (5.1.2):

h_n = h(n/f_sr),          n = 0, ..., Q/2 − 1
h_n = 0,                  n = Q/2, ..., N − Q/2 − 1
h_n = h[(n − N)/f_sr],    n = N − Q/2, ..., N − 1    (5.1.3)
taking account that the sequence h_n is periodic with period N and that we want always to enumerate sequences with positive indices. Since this sequence is real, the even-odd separation procedure of Appendix A can be used conveniently. Further, since we will carry out complex basebanding on the result, only the coefficients H_k for k = 0, ..., N/2 − 1 (Fig. 5.1b) need be computed (Fig. 4.16).
By themselves the filter coefficients Hk of Eqn. (5.1.3) suffer from the
problem of range sidelobes, discussed in Section 3.2.3. Before using them they
must be modified by some appropriate weight sequence, such as the sequence
corresponding to the Taylor weighting (Farnett et al., 1970):
F(f),    |f − f_1| ≤ f_sr/4    (5.1.4)

the weight being defined across the signal band centered on f_1. The weighted filter coefficients are correspondingly

G_k = H_k F_k,                  k = 0, ..., N/4 − 1
G_k = H_{k−N/4} F_{k−N/4},      k = N/4, ..., N/2 − 1    (5.1.6)

Figure 5.1 Steps in range compression. Solid lines on frequency spectra are base Fourier domain. Dashed lines are periodic repetitions of spectra of digital signals.
In terms of the complex basebanded data, the compression filter is equivalently the baseband chirp

h(t) = exp(−jπKt²)    (5.1.7)

5.2

The resolution element of any SAR is large with respect to a wavelength of the radar system. As a result, it is generally unfruitful to attempt to define a deterministic backscatter coefficient for each terrain element to be imaged. Rather, as discussed in Section 2.3, the sought image is the local mean of the radar cross section per unit area of each patch of the terrain in view. This is defined in terms of the random specific cross section

σ0(R) = σ(R)/dA    (5.2.1)

and the sought image is the mean intensity

I_0(R) = ℰ|ζ(R)|²    (5.2.2)

where ζ(R) is the terrain reflectivity function defined in Eqn. (3.2.3). Its approximation in any particular realization,

ζ̂(R) = ∫_{−∞}^{∞} h^{−1}(R|R′) v_r(R′) dR′    (5.2.3)

is the complex image derived from the radar voltage phasor signals v_r(R) by processing with the inverse of the radar system function (Section 4.1).

Any particular realization ζ̂(R) of Eqn. (5.2.3) will yield an image |ζ̂(R)|² which is different from the mean Eqn. (5.2.2). The difference is speckle noise. In this section we want to investigate the statistics of the individual real images |ζ̂(R)|². Also, we will discuss some ways to generate estimators of the desired image Eqn. (5.2.2) from available samples ζ̂(R).

Image Statistics

The mean of the computed image involves a fourfold integration over (x_0, R_0, x′, R′) of the system response and the terrain reflectivity correlation (Eqn. (5.2.4)). If we now assume that the expected value of the terrain reflectivity function ζ is independent of aspect angle over the range of angles for which the terrain point is in the radar beam, using Eqn. (4.1.2) the delta function is recovered in Eqn. (5.2.4) to yield

ℰ[ζ̂(x, R)] = ℰ[ζ(x, R)]    (5.2.5)

Thus, the computed complex image function ζ̂(x, R) is a random variable whose mean is the mean of the terrain reflectivity function.

We are mainly interested in the statistics of the random variable

Z(x, R) = |ζ̂(x, R)|

the magnitude of the computed complex image, whose mean square is "the image". If we assume that the real and imaginary parts of the complex Gaussian random variable ζ̂(x, R) are independent and zero mean (implying incidentally, from Eqn. (5.2.5), that the complex terrain function ζ has zero mean) with equal variances σ², then Z(x, R) has the Rayleigh density. This follows from the computation (Whalen, 1971, Chapter 4):

p(Z, φ) = det[∂(a, b)/∂(Z, φ)] p(a, b)    (5.2.6)
where we write ζ̂ = a + jb, with

p(a, b) = (2πσ²)^{−1} exp[−(a² + b²)/2σ²]    (5.2.7)

Therefore Z has the Rayleigh density

p(Z) = (2Z/I_0) exp(−Z²/I_0)    (5.2.8)

The image intensity is

I(x, R) = Z² = |ζ̂(x, R)|²    (5.2.9)

with mean and standard deviation

σ_I(x, R) = I_0 = 2σ²

where σ² may depend on (x, R). From Eqn. (5.2.9), the exponential density of the samples I(x, R) is equivalently:

p(I) = (1/I_0) exp(−I/I_0)    *(5.2.10)

Multilook Images

Although there are many assumptions in the above derivation, analysis of typical SAR images supports the final result that the image resolution cells have intensities I which follow the exponential distribution:

Prob{I ≥ t} = ∫_t^∞ p(I) dI = exp(−t/I_0)    (5.2.11)

The image then has a randomly fluctuating intensity I(R) at each pixel, which leads to the grainy appearance of speckle. For purposes of visual interpretation, it is generally desirable to reduce those fluctuations, and to cluster the observed intensities I(R) closer to the mean intensities I_0(R), since it is the mean intensities which are usually the required image information. This is usually done by computing some number of nominally independent images (looks) of the same scene, and averaging them, pixel by pixel. Alternatively (Li et al., 1983), a single high resolution image can be locally smoothed.

If we let I_L(R) be the average of L independent realizations (looks) I_i(R) of the intensity I(R) for a pixel at R:

I_L = (1/L) Σ_{i=1}^{L} I_i    (5.2.12)

and hence

σ²_{I_L} = (1/L)² Σ_{i=1}^{L} σ²_I = σ²_I/L

(This reduction will be less if the look intensities are unequal or the looks are not independent.) An image such as Eqn. (5.2.12) is called an L-look image.
In SAR, independent looks I_i(R) can be generated from data taken at different aspect angles as the vehicle moves past the terrain (Fig. 5.2, drawn for the common case of four looks).
common case of four looks). Thus the first look is generated from the forward
quarter of the antenna along-track beam, the next from the next quarter beam
back, and so on. Since signals from all parts of the beam reach the radar receiver
superimposed, however, such segregation of data can not be done in the time
or space domains. However, the high azimuth bandwidth time product of a
useful SAR locks together time and frequency, which allows the look data to
be sorted in the Doppler frequency domain. That is, data with high Doppler
frequency necessarily originated from terrain points in the forward edge of the
azimuth beamwidth, while the same point in the rear quarter of the beam
produces a low Doppler frequency and appears in the lowest quarter of the
Doppler band.
To produce such independent looks in the Doppler domain, the Doppler spectrum of the range compressed data at each range bin is analyzed, after range migration correction. That is, the spectrum is analyzed just before the azimuth compression filter is applied. The spectrum is then divided into (say) four subbands by filters before compression, suitably tapered to avoid sidelobes in azimuth time (Section 3.2.3), and overlapped to some extent to avoid loss of too much signal energy, but not so much as to lose independence of the looks (Fig. 5.3). Since the Doppler bandwidth B_D is essentially independent of range Rc at beam center, the look filters can be taken with constant bandwidths B_L (nominally B_D/L for L looks) and with center frequencies evenly spaced across the band B_D. Since f_Dc changes with range Rc, the look filter complex of Fig. 5.3 slides in frequency as a unit as the range bin Rc in question changes.

Figure 5.2 Generation of four looks from successive quarters of the azimuth beam.

Since the resolution in each look I_i(R) is inversely proportional to the bandwidth B_L of Doppler data compressed in that look, processing only 1/L of the full Doppler band B_D degrades the resolution in each look by the factor L as compared to the resolution available if all data were compressed to form a single image (single-look processing). Thus, for example, a single look Seasat image uses the full Doppler band of 1300 Hz and attains a resolution ideally δx = V_st/B_D = 6600/1300 = 5.1 m, while a four look image has resolution in each look 4 × 5.1 = 20.4 m, with the resolution in the superposition of the four looks being the same as each look separately. (The exact resolution attained in a multilook image depends on the details of implementation of the look filters, since the precise answer depends on the bandwidth taken for each look filter.)
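The Doppler-domain sorting can be sketched directly. The flat, non-overlapping look filters and the white stand-in data below are simplifying assumptions (a real processor tapers and overlaps the subbands as described above):

```python
import numpy as np

rng = np.random.default_rng(1)

N, L = 1024, 4
x = rng.normal(size=N) + 1j * rng.normal(size=N)   # stand-in azimuth data, one range bin
X = np.fft.fft(x)                                  # full Doppler spectrum

looks = []
for i in range(L):
    sub = X[i * (N // L):(i + 1) * (N // L)]       # one look's quarter of the band
    looks.append(np.abs(np.fft.ifft(sub))**2)      # detected look, (N/L)-point IFFT
looks = np.array(looks)                            # shape (L, N//L), reduced-rate looks

multilook = looks.mean(axis=0)                     # L-look average
c = np.corrcoef(looks)                             # look-to-look intensity correlation
print(np.abs(c - np.eye(L)).max())                 # small: disjoint bands, independent looks
```

The (N/L)-point inverse transforms realize the rate reduction described in the text; the small off-diagonal correlations illustrate why disjoint subbands give nominally independent looks.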
Multilook Processing
If the capability to produce single look images is desired in the processor, the full Doppler data band B_D must be produced using an FFT of adequate length in the azimuth time variable. Since the full synthetic aperture time S must be used for the filter function, something markedly longer must be used for the data block in order to achieve fast convolution efficiency (Appendix A).

Figure 5.3 Doppler spectrum and look filters. (Antenna pattern weighting not shown.)

Then
there is no particular reason not to implement multilook filters by simply
combining the amplitude characteristic of Fig. 5.3 for each look with the single
look full band compression filter to produce the L multilook filters to apply to
the azimuth Doppler data. Since the compressed data in Doppler frequency
has only nominally 1/ L the bandwidth of single-look data, a sampling rate 1/ L
that needed for single look images suffices. This rate reduction is easily brought
about by doing the inverse FFT of the compressed data with an (N / L)-point
IFFT, where the original single look spectrum was taken with an N-point
transform. If something other than L-look imagery, with L a power of 2, is
desired, some zero padding is useful to bring N / L to an integral power of 2.
With this procedure, slow time registration of the images of the individual looks
is automatic, since the compression filter for each look retains exactly the proper
phase function to place the image pixels at the proper azimuth positions.
Alternatively, some computational and memory savings can be realized if
there is no intention to produce single look images with the processor. In that
case, the largest set of Doppler frequency data ever needed at any one time is
that corresponding to the band of one of the multiple looks, of bandwidth B_L = B_D/L for an L-look image. The memory savings in such a case are obvious. The computational savings in a frequency domain processor follow because doing FFTs of length N/L requires computation of the order L(N/L) log(N/L), which is less than that for one FFT of length N, which requires computation of order N log(N). In time domain processing, the savings are in the ratio of N² to L(N/L)², since both the data length and the compression filter length decrease from N to N/L for each look computation. In either case of time or
frequency domain processing, with reduced data span, the look filtering should
be done in the time domain to avoid taking a full band FFT of the Doppler
data. A conventional FIR filter is applied to the PRF-sampled azimuth time
data in each slant range bin to produce the data for each look. Since the band
of each look is only 1/ L the band of the Doppler data, decimation !s ~s~d as
well as filtering to reduce the data rate to the minimum needed for the mdlVldual
look bands.
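A sketch of this time-domain alternative: a complex bandpass FIR (a windowed-sinc prototype shifted to a hypothetical look center frequency; all design values are assumptions) is applied at the PRF and the output decimated by L:

```python
import numpy as np

rng = np.random.default_rng(2)

prf, L = 1600.0, 4
bl = prf / L                         # look bandwidth B_L = B_D/L (here B_D = prf)
fc = -bl * 1.5                       # hypothetical center frequency of the first look, Hz

ntap = 65
n = np.arange(ntap) - ntap // 2
lp = (bl / prf) * np.sinc(bl * n / prf) * np.hamming(ntap)   # low-pass prototype
h = lp * np.exp(1j * 2 * np.pi * fc * n / prf)               # shift to the look band

x = rng.normal(size=4096) + 1j * rng.normal(size=4096)       # stand-in azimuth data
y = np.convolve(x, h, mode='same')[::L]                      # look filter, then decimate

print(len(y), abs(lp.sum()))   # reduced-rate look data; prototype DC gain near one
```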
If the segmentation procedure of the last paragraph is used, compensation
must be made according to which subband the image came from before
superposing them. The images for each look must be shifted along track
explicitly, if the same compression filter is used for each look. The necessary correction can be done in the Doppler domain by adjusting the filtered output after compression by a delay factor exp[−jπ f_Dc,i (f_Dc,i − f_Dc)/f_R] to account for the different Doppler center frequencies f_Dc,i in each look. Alternatively, these factors can simply be included in the look filter to result in a different filter to be used for each look.
The single-look image "signal-to-noise ratio" is

SNR = ℰ(I)/σ_I = 1

since the mean I_0 of the exponential density distribution Eqn. (5.2.10) equals its standard deviation. The SNR of an L-look image, assuming independent looks, from Eqn. (5.2.12) is

SNR_L = I_0/(I_0/√L) = √L

It might be noted that a multilook image has intensity which is the sum of common-mean exponentially distributed variables, and thereby has the gamma (or χ²) density.

Thermal Noise Effects

Radar system (including thermal) noise adds an independent Gaussian component to the complex image pixels. The complex image is then

î = ζ̂ + n

where ζ̂ is a realization of ζ and n is an independent complex Gaussian noise output. The mean image is then

ℰ|î|² = I_0 + P_n

so that system noise adds a bias to the desired image I_0. Since the quantity |î|² also has the exponential density, its mean is also the image standard deviation, so that the biased noisy single-look image still has unity SNR.

The system noise bias in the image estimate |î|² can be removed if an estimator P̂_n of the noise power is available. That can be obtained from receiver output voltage during a pre-imaging period with no input, or from a dark part of the image with little terrain backscatter evident. The image is then computed as the multilook average with the bias estimate subtracted, using the fact that |î|² is exponentially distributed, with variance equal the square of its mean. In the case that P̂_n = |n|², a single sample of system noise, Var(P̂_n) = ℰ(P̂_n)² = P_n², and

SNR_L = √L/(1 + 1/SNR_1)    *(5.2.13)

where SNR_1 = I_0/P_n is the ratio of mean image output without system noise to mean system noise power. This is the expression usually presented (Ulaby et al., 1982, p. 492). Some practical difficulties of the procedure are discussed in Section 7.6. From Eqn. (5.2.13) it is clear that the nominal √L SNR improvement with multilook processing degrades to something less than √L in the presence of finite SNR_1.
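The bias and its removal can be checked by simulation; the signal and noise powers and sample counts below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

I0, Pn, L, npix = 4.0, 1.0, 4, 100_000   # assumed signal and noise powers

def cgauss(power, shape):
    a = np.sqrt(power / 2)
    return rng.normal(0, a, shape) + 1j * rng.normal(0, a, shape)

# Noisy complex image pixels i = zeta + n, detected and averaged over L looks
looks = np.abs(cgauss(I0, (L, npix)) + cgauss(Pn, (L, npix)))**2
IL = looks.mean(axis=0)                          # biased: mean is I0 + Pn

Pn_hat = (np.abs(cgauss(Pn, 10_000))**2).mean()  # noise power from a no-signal interval
image = IL - Pn_hat                              # bias-removed image

print(IL.mean(), image.mean())   # near I0 + Pn and near I0, respectively
```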
5.3

In SAR image formation, using a high resolution (focussed) system of the type discussed in Chapter 4, the compression operation in azimuth (slow) time is the crucial ingredient which makes the system function. The azimuth compression filter is the filter appropriate to the range compressed point target response Eqn. (4.1.24), written out as Eqn. (5.3.1).
The filter therefore involves the parameters of the range migration locus R(s), the slant range to a point target as a function of slow time. The locus R(s) is usefully expanded in a Taylor series about the slow time sc at which the target is in the center of the radar beam (Fig. 4.1). Although at least one processor (Barber, 1985a) uses terms through the third order in slow time, it usually suffices to retain only the second order term:

R(s) ≈ Rc − (λ/2)[f_Dc(s − sc) + f_R(s − sc)²/2]    (5.3.2)

where the Doppler center frequency f_Dc and azimuth chirp constant f_R are defined as:

f_Dc = −(2/λ) dR/ds|_{s=sc},    f_R = −(2/λ) d²R/ds²|_{s=sc}    (5.3.3)
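Eqn. (5.3.3) can be exercised on a simple rectilinear locus; the platform speed, range, and beam-center offset below are hypothetical values, not from the text:

```python
import numpy as np

lam = 0.235        # wavelength, m (L-band, as for Seasat)
V = 7000.0         # effective platform speed, m/s (assumed)
R0 = 850e3         # range at closest approach, m (assumed)
sc = 0.05          # beam-center time after closest approach, s (small squint)

def R(s):
    return np.hypot(R0, V * s)        # straight-line range migration locus

ds = 0.1                              # step for numerical derivatives, s
Rdot = (R(sc + ds) - R(sc - ds)) / (2 * ds)
Rddot = (R(sc + ds) - 2 * R(sc) + R(sc - ds)) / ds**2

f_dc = -2 * Rdot / lam                # Doppler center frequency, Eqn. (5.3.3)
f_r = -2 * Rddot / lam                # azimuth chirp constant
print(f_dc, f_r)
```

For these numbers f_Dc is a few tens of Hz (small squint) and f_R about −490 Hz/s; a broadside beam (sc = 0) drives f_Dc to zero while leaving f_R ≈ −2V²/(λR0).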
5.3.1 Clutterlock Procedures
We will now indicate in some detail the specific choices which have been made in developing these clutterlock algorithms. The precise arrangement of procedures is not especially critical, since slight to moderate misplacement of f_Dc (< 0.05 B_D) only leads to some loss of SNR and some increase in ambiguity levels (Li and Johnson, 1983). However, some of the procedures can lead to noticeable SNR and ambiguity effects with certain scene characteristics, so that the availability of a repertoire of procedures is useful.
Clutterlock by Doppler Spectrum Analysis
Figure 5.4 Use of Doppler spectrum to estimate f_Dc requires spectral smoothing with range adjustment. (a) Bright point target induces bias in estimation of f_Dc by peak location of Doppler spectrum. (b) Drift of Doppler spectrum center as range moves across swath.
Figure 5.5 Two point targets with responses dispersed in Doppler by azimuth beamwidth. With aperture analysis span S′, the target at s′_c contributes only partially to any clutterlock procedure.
A different algorithm based on the same general idea has been used in the JPL processor. Nominal parameters f_Dc and f_R are first computed from the satellite orbit and attitude data, such as may be available, in order to carry out image formation for a small (1 km or so) span of slant range and a span of azimuth time which is also small, but which has enough pulses to carry out an FFT of some reasonable length (say 5 km or so). The piece of image is to be small enough that variations in reflectivity with aspect angle will be small, and also small enough that f_Dc can be taken constant over the image.
In one version (Curlander et al., 1982), four real images of a four-look processor are produced, but not added. The total energy E_i in each of the four images is found, simply by summing the pixel intensities over each image. Since each image is from a different quarter of the full azimuth Doppler spectrum, the image energies correspond to the Doppler spectrum powers in each quarter spectrum.

Were the trial f_Dc to have been correct, from symmetry of the antenna pattern and the locking of azimuth time to Doppler frequency, we would expect equal energies in the sum of the two lower frequency look energies and in the sum of the energies of the two upper frequency looks. In general this will not be the case, and some non-zero value will be found for the number

ΔE = (E1 + E2) − (E3 + E4)    (5.3.5)
The trial value of f_Dc is then incremented by some nominal amount, say 10 Hz, and the entire procedure repeated to obtain a new value ΔE. Some number (say 16) of such values are computed and plotted vs. f_Dc. The value of f_Dc for which a linear fit to the ΔE(f_Dc) values intersects ΔE = 0 is taken as the estimate f̂_Dc for the particular range of the image piece used in the computations. The entire procedure is then repeated for each 1 km or so span of slant range across the range swath, and a linear fit made to the resulting values f̂_Dc(R) to determine the final (assumed linear) relation of f_Dc to Rc.
Although somewhat computationally intensive, the procedure was reported
to be accurate to within a few Hz over ocean regions, which are nearly
homogeneous in scattering properties, and to within a few tens of Hz over
urban regions. This accuracy estimate was based on the observed variation of
the estimates across the swath about the deterministic model of f_Dc(R).
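The energy-balance search can be simulated. The Gaussian antenna pattern, the per-bin speckle model, and the averaging over 50 range lines below are assumptions standing in for real data:

```python
import numpy as np

rng = np.random.default_rng(4)

prf, f_true = 1600.0, 230.0                 # assumed PRF and true centroid, Hz
f = np.arange(-prf / 2, prf / 2, 1.0)       # Doppler bins
W = np.exp(-((f - f_true) / 400.0)**2)      # stand-in two-way pattern in Doppler
P = (W * rng.exponential(1.0, (50, f.size))).mean(axis=0)  # speckled, range-averaged

def dE(ft):
    lower = (f >= ft - prf / 4) & (f < ft)  # two lower-frequency looks
    upper = (f >= ft) & (f < ft + prf / 4)  # two upper-frequency looks
    return P[lower].sum() - P[upper].sum()  # the balance of Eqn. (5.3.5)

trials = f_true - 80.0 + 10.0 * np.arange(16)   # 16 trial centroids, 10 Hz apart
vals = np.array([dE(ft) for ft in trials])
a, b = np.polyfit(trials, vals, 1)
f_est = -b / a                                  # zero crossing of the linear fit
print(f_est)
```

The fitted line crosses zero close to the true centroid; a strong discrete target skews the balance, which is the bias mechanism noted in the text.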
In the algorithm of Li et al. (1985), rather than four subaperture images, a single full aperture complex image ζ̂(s, R) is produced, using a trial value f_Dc at each range, computed from nominal orbit parameters. Azimuth spectra ζ̃(f, R) are produced and averaged over a number of adjacent range bins spanning a small region (say 1 km) over which f_Dc(R) is nominally constant. Each average power spectrum |ζ̃(f, R_i)|² is then balanced to find the frequency above and below which half the power lies. That collection of estimates f̂_Dc(R_i) is then fitted to a linear model f_Dc(R) to determine the final values f̂_Dc(R).
Even though the use of azimuth compressed (image) data obviates the problem of Fig. 5.5, Li et al. (1985) note that some bias of f_Dc is present. It is attributed to variation of the true reflectivity ζ(x, R) of discrete targets with respect to aspect angle, so that they may appear more strongly in some parts of the Doppler spectrum than in others. The effect was not noted for homogeneous scenes.
Jin (1989) worked out the statistics of the quantity AE of Eqn. (5.3.5),
assuming that the computed real images had elements which were exponentially
distributed (Section 5.2), and independent from one resolution cell to another.
He determined that, approximately, the mean of ΔE of Eqn. (5.3.5) was related to the deviation Δf_Dc of the value f'_Dc used in the computation of the images from the true value f_Dc by

E[ΔE] = α Δf_Dc    (5.3.6)

where

Δf_Dc = f_Dc − f'_Dc    (5.3.7)

and

α = 2[W(0) − W(B_p/2)] / ∫_{−B_p/2}^{B_p/2} W(f) df    (5.3.8)

where W is the two way antenna power pattern in the Doppler domain. Given a computed value ΔE, it then follows from Eqn. (5.3.6) that f_Dc can be estimated as

f̂_Dc = f'_Dc + ΔE/α    (5.3.9)

The correction procedure is iterated using the value f̂_Dc as a new value f'_Dc.

Minimum Variance Unbiased Centroid Estimation

Jin and Chang (1992) and Jin (1989, Appendix B) have considered clutterlock for a homogeneous scene, that is, one for which the exponentially distributed intensities of the scene elements have constant mean, so that the backscatter coefficient σ^0 is constant. For such a scene, the azimuth time variation of the range compressed data has a Doppler spectrum weighted by the antenna pattern, because of the time and frequency locking effect of the high azimuth bandwidth time product. The function Z is the Doppler spectrum of the complex reflectivity ζ. If the azimuth compression operation is carried out with a filter H^{-1}(f|R), then the computed complex image ζ̂(s, R) has spectrum

Ẑ(f, R) = H^{-1}(f|R) g(f, R)

with power

|Ẑ(f, R)|² = |H^{-1}(f|R)|² W(f − f_Dc) |Z(f, R)|²

where again W is the two way antenna power pattern in the Doppler domain. In this, the term |H^{-1}|² is known, and is unity if the compression filter is not weighted for sidelobe control. The term |Z|² is an exponentially distributed random variable, since the spectrum Z is a linear operation on the complex Gaussian process ζ(s, R).

Using the assumed constant mean of |ζ|² over the scene, Jin and Chang (1992) derive the minimum variance unbiased estimator Δf̂_Dc of the deviation Δf_Dc = f_Dc − f'_Dc. Here f'_Dc is the Doppler center frequency used in forming the image (and f_Dc is the true value), about which the antenna pattern W(f − f_Dc) is assumed to be symmetric. They find
Δf̂_Dc = ∫_{−B_p/2}^{B_p/2} w(f) |Ẑ(f + f'_Dc, R)|² df    *(5.3.10)

where

w(f) = (1/α) W'(f)/W²(f)    (5.3.11)

with

α = B_p ∫_{−B_p/2}^{B_p/2} [W'(f)/W(f)]² df

The integrals in this last equation are just the spectral energies of images created from weighting of the portions of the Doppler spectrum below and above the trial centroid f'_Dc. The Doppler band can be further subdivided into multiple (e.g., four) "looks", with energies E'_1, …, E'_4 computed from four weighted subapertures. The denominator term is proportional to the total image energy E. Thus, to within a known scale factor,

Δf̂_Dc = (E'_1 + E'_2 − E'_3 − E'_4)/E    *(5.3.12)

where the energies E'_i refer to an image using a modified azimuth compression filter H̃^{-1}(f, R) which incorporates the weighting |W'(f)|^{1/2}/W(f).

Finally (Jin, 1986), the values Eqn. (5.3.12) for the various homogeneous subregions are combined in proportion to their inverse variances as

Δf̂_Dc = Σ_{k=1}^{K} w_k Δf̂_Dc^k    (5.3.13)

with weights determined by

σ_k^{-2} = B_p ∫_{B_p} [W'(f)/(W(f) + N_0)]² df

The above integration is over the image band, and N_0 is the system noise power spectral density of the image:

N_0 = ∫_{B_p} W(f) df / (B_p SNR_0)
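The look-energy form of Eqn. (5.3.12) can be illustrated with a short sketch. This is not the minimum variance weighting of Jin and Chang (the sub-bands here are unweighted, and all names are illustrative); it shows only how a centroid error appears as a normalized imbalance between the energies of looks split about the trial centroid.

```python
import numpy as np

def centroid_from_look_energies(data, prf, f_trial, n_looks=4):
    """Energy-balance statistic: split the azimuth spectrum into
    n_looks sub-bands about the trial centroid and form
    (E1 + E2 - E3 - E4) / E, which is proportional to the centroid
    error for a pattern symmetric about the true centroid.

    data : complex azimuth samples for one range bin."""
    n = data.size
    spec = np.abs(np.fft.fft(data)) ** 2
    f = np.arange(n) * prf / n
    # Frequency offset from the trial centroid, wrapped to [-prf/2, prf/2).
    off = (f - f_trial + prf / 2) % prf - prf / 2
    edges = np.linspace(-prf / 2, prf / 2, n_looks + 1)
    e = np.array([spec[(off >= lo) & (off < hi)].sum()
                  for lo, hi in zip(edges[:-1], edges[1:])])
    return (e[:n_looks // 2].sum() - e[n_looks // 2:].sum()) / spec.sum()
```

For a pattern symmetric about the true centroid the statistic vanishes when the trial value is correct, and its sign indicates the direction of the error, so it can drive the same iteration as Eqn. (5.3.9).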
Suppose that the true scene is homogeneous, with independent intensities in each resolution element. Then the reflectivity ζ(s, R) has a power spectral density in the azimuth variable which is constant. The slow time correlation function of an azimuth line is

R(t) = E[ζ(s + t)ζ*(s)]    (5.3.14)

with power spectral density

S(f) = ∫_{−∞}^{∞} R(t) exp(−j2πft) dt

and inversely

R(t) = ∫_{−∞}^{∞} S(f) exp(j2πft) df    (5.3.15)

Any shift in the power spectrum, say to S(f − f_Dc), is evidenced by a phase factor in the correlation function:

R(t) ⇒ R(t) exp(j2πf_Dc t)

This suggests that we can determine f_Dc by analysis of the phase of the slow time correlation function of a computed image line ζ̂(s, R), which can be estimated using Eqn. (5.3.14).

The azimuth data Eqn. (5.3.16) are passed through a compression filter H(f − f'_Dc) with an amplitude spectrum H(f) shifted to some presumed Doppler center frequency f'_Dc. From Eqn. (5.3.15), the image azimuth correlation function is then

R_I(t) = ∫_{−∞}^{∞} |H(f − Δf)|² W(f) exp(j2πft) df    (5.3.17)

where Δf = f_Dc − f'_Dc. For small Δf, for any specified t = t_0 the phase of the integral in Eqn. (5.3.17) will be proportional to Δf:

φ(t_0) ≈ 2πa_0 t_0 Δf    (5.3.18)

For the selected t_0, the value R_I(t_0) is estimated based on Eqn. (5.3.14). The angle of that complex number, say φ̂(t_0), converted to a frequency f_0 = f'_Dc + φ̂(t_0)/(2πt_0), then satisfies approximately

f_0 = f'_Dc + a_0(f_Dc − f'_Dc)    *(5.3.19)

The procedure is iterated. Madsen (1989) suggests that the first sample of the estimated autocorrelation be used, so that t_0 is the first available lag value. The coefficient a_0 in Eqn. (5.3.18) is derived under some reasonable assumptions by Madsen (1985). As Madsen (1989) suggests, its determination can be obviated by plotting a succession of values f_0 found with different f'_Dc, in order to determine the value for which f_0 = f'_Dc, implying from Eqn. (5.3.19) that f'_Dc = f_Dc.

The considerable computational efficiency of Madsen's method comes about partly because it is not necessary to compute any power spectra, but mainly because of the possibility of computing the estimate of R_I(t_0) using hard limited data. In particular, let x, y be any two real stationary Gaussian processes, and let x_s, y_s be their hard limited versions:

x_s(t) = 1,    x(t) ≥ 0
      = −1,   x(t) < 0    (5.3.20)
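Madsen's estimator reduces to a few lines of code. The sketch below (illustrative names; a simplified treatment of the hard limiting) forms the first-lag sample autocorrelation of an azimuth line and reads the centroid, modulo the PRF, from its phase.

```python
import numpy as np

def madsen_centroid(z, prf, lag=1):
    """Doppler centroid from the phase of the azimuth autocorrelation
    at the first lag: a spectrum shift f_Dc contributes the phase
    2*pi*f_Dc*lag/prf to R(lag). Ambiguous modulo the PRF."""
    r = np.sum(z[lag:] * np.conj(z[:-lag]))     # sample autocorrelation
    return prf * np.angle(r) / (2 * np.pi * lag)

def madsen_centroid_hardlimited(z, prf, lag=1):
    """The same estimator computed from the signs of I and Q only
    (one-bit data). The arcsine relation between the sign correlation
    and the true correlation distorts the phase somewhat, but the
    estimate remains usable, and the computation needs no multiplies
    in a hardware realization."""
    zs = np.sign(z.real) + 1j * np.sign(z.imag)
    r = np.sum(zs[lag:] * np.conj(zs[:-lag]))
    return prf * np.angle(r) / (2 * np.pi * lag)
```
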
5.3.2 Autofocus

Most SAR image formation processors in current use carry out determination of the azimuth chirp constant f_R in the same way, using the subaperture correlation method (Bennett et al., 1981; Curlander et al., 1982; Wu et al., 1982; McDonough et al., 1985). The exceptions are those processors such as in (Barber, 1985a), which use direct computation of f_R from orbital data according to the expressions of Appendix B, and processors, such as in (Herland, 1981), which use the fact that the image contrast is maximized when the correct f_R is used in compression.
Suppose that two complete intensity images were produced for some modest sized patch of terrain, taken small enough in range extent that f_R could be considered constant, and large enough in azimuth extent to allow convenient FFT size. Each image is produced from a different part of the Doppler spectrum, as in multilook processing. Some nominal value f'_R is used in the processing. After formation of the two images, they will be registered in azimuth time by shifting one relative to the other by exactly the amount corresponding to Eqn. (5.3.21):

Δs' = (f_Dc^1 − f_Dc^2)/f'_R    (5.3.22)

where f_Dc^1, f_Dc^2 are the centers of the subbands used in forming the images, and f'_R is the trial value used. If we were forming a single multilook image, the registered subimages would now be added. However, we now make the observation that, if the value f'_R used in processing is not the correct value f_R, the registration will be incorrect because the imposed azimuth shift Eqn. (5.3.22) will not accord with the actual relation in the image:

Δs = (f_Dc^1 − f_Dc^2)/f_R    (5.3.23)

Thus, the two images, which should be identical on the same time scale s, will in fact be displaced from one another in time, with the amount of the displacement being a measure of the mismatch in f_R between scene and processor.

In the processor of Curlander et al. (1982), the outer two looks of a four-look processor are used in this procedure. A nominal value of f'_R is chosen, two images I_1(s, R) and I_2(s, R) are produced, and the cross correlation function is estimated for each range R of the image. These correlations are averaged over range to obtain a single average cross correlation function. The location in time of the peak of that function is found, for example by reading off the peak of a local quadratic fit around the nominal peak. This gives the measure of slow time misregistration of the two images:

δs = Δs' − Δs = (f_Dc^1 − f_Dc^2)(1/f'_R − 1/f_R)    (5.3.24)

This is taken as one point on a curve of δs vs. f'_R, and the entire process is cycled for new nominal values f'_R, displaced slightly (a few Hz/s) from one another. The correct value of f_R for the range used in the images is taken as the value at which a linear fit to the points on such a curve crosses the axis δs = 0, implying from Eqn. (5.3.24) that f'_R = f_R. The entire procedure is stepped along in range across the swath of the SAR. The procedure works best over land areas, where point-like targets exist which act to sharpen the cross correlation peaks, with a reported accuracy of a few tenths of a Hz/s.
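The range-averaged cross correlation and local quadratic peak fit can be sketched as follows (an illustrative implementation, assuming circular correlation is acceptable for the patch size):

```python
import numpy as np

def look_misregistration(look1, look2):
    """Signed azimuth misregistration (in samples) between two look
    intensity images: cross correlate each range line via FFT, average
    over range, then refine the peak with a three-point quadratic fit."""
    n_az = look1.shape[1]
    acc = np.zeros(n_az)
    for r in range(look1.shape[0]):
        a = look1[r] - look1[r].mean()      # remove DC so the peak is
        b = look2[r] - look2[r].mean()      # set by scene texture
        acc += np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))
    k = int(np.argmax(acc))
    # Quadratic interpolation through the peak and its two neighbors.
    y0, y1, y2 = acc[(k - 1) % n_az], acc[k], acc[(k + 1) % n_az]
    denom = y0 - 2.0 * y1 + y2
    frac = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom
    shift = k + frac
    return shift - n_az if shift > n_az / 2 else shift
```

The measured shift for a succession of trial chirp rates gives the points of the misregistration curve whose zero crossing identifies the correct f_R.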
In another version of the same idea (McDonough et al., 1985), the model equation of Appendix B is used:

f_R = −2V²/(λR_c)    (5.3.25)

where V is an equivalent speed, very nearly constant with both range and azimuth position over a typical scene. Some nominal value V' of V is chosen, perhaps from nominal orbit data, or simply the approximate value of Eqn. (B.4.12), where V_s, H are nominal satellite speed and altitude, and R_e is the nominal earth radius. Using the nominal value V' in the model Eqn. (5.3.25) for f_R, a moderate size piece of image is formed from each of at least two Doppler subbands. A value f'_R for each range in the image is computed from Eqn. (5.3.25), using the nominal V', and that value f'_R(R_c) used in the compression processing.

Suppose the two images are produced with Doppler bands having center frequencies which differ by some amount Δf_Dc. Then we will expect the pixels in each range line of the two images to differ in slow time location s by an amount, from Eqn. (5.3.22):

Δs' = Δf_Dc/f'_R

We will compensate each range line of one of the images by that amount, so as to register the two images in slow time. In reality, however, the pixels in the two images along any range line will differ in slow time location by an amount Δs = Δf_Dc/f_R, where f_R is the true value for that range in the scene. After compensation, therefore, the pixels along any range line will still misregister by an amount as in Eqn. (5.3.24):

δs = Δs' − Δs = Δf_Dc(1/f'_R − 1/f_R)
   = Δf_Dc(λR_c/2)(1/V² − 1/V'²)    (5.3.26)

where V is the correct value for the velocity parameter in Eqn. (5.3.25) and V' is our nominal choice.

We now measure δs by cross correlating the two images. The process is termed subaperture correlation, because the two images arise from separate Doppler subbands which correspond to different parts (subapertures) of the full antenna beam, due to the locking of time and frequency. Since δs depends on range R_c, we first compute the correlation function in slow time along each range bin:

p(γ, R_c) = Σ_s I_1(s + γ, R_c) I_2(s, R_c)    (5.3.27)

where I_1 and I_2 are the intensities of the pixels in the two images and the sum is over whatever portion of image slow time has been computed.

Since, in this version, we change the value f_R over range as in Eqn. (5.3.25), there is a systematic azimuth displacement as a function of range. We need to compensate that dependence before averaging the correlation functions Eqn. (5.3.27) over range. This can be done by computing the average p̄(γ) with the lag of each p(γ, R_c) rescaled in proportion to R_c/R_0, where the sum is over whatever range bins are available in the image and R_0 is the smallest value of R_c used in the computations Eqn. (5.3.27). The value γ_p of γ for which p̄(γ) peaks is then the measure of δs in Eqn. (5.3.26):

γ_p = Δf_Dc(λR_0/2)(1/V² − 1/V'²)    *(5.3.28)

which may be solved for the unknown value V.

In the particular case that the range interval used in the image formation is sufficiently small that f_R rather than V can be considered constant, the formulas reduce to the earlier case Eqn. (5.3.24).
A sampled signal has a periodic spectrum: if the samples are taken at rate f_s, then

F_s(jω) = f_s Σ_{m=−∞}^{∞} F[j2π(f + mf_s)]    (5.4.1)

Here F(jω) is the spectrum of the function f(t) whose samples are the numbers f_i.

Figure 5.6 Doppler spectrum of the sampled azimuth data, with replicas centered at f_Dc − f_p, f_Dc, and f_Dc + f_p.
In application to SAR azimuth processing, this means that the Doppler spectrum computed as the FFT of the range compressed and basebanded data for any image line is periodic. The period is the pulse repetition frequency, f_s = f_p, since the sampling is that due to the pulsed nature of the radar. This periodicity is of no concern in the various azimuth compression and filtering operations involved in making a full resolution image, since all calculations are done digitally and all azimuth filter spectra are also periodic. (A separate question concerns whether or not the antenna pattern G(s) is adequately limited to induce bandlimiting of the Doppler spectrum, so as to avoid aliasing by sampling at the rate f_p.) However, an ambiguity problem can arise in image registration. In this section we will describe the problem and two methods to resolve the ambiguity. These are discussed in Cumming et al. (1986) and in Chang and Curlander (1992). The method described by Cumming et al. (1986) was also suggested by Luscombe (1982) in a paper concerned with many aspects of clutterlock, autofocus, Doppler aliasing, and ambiguity resolution. It was implemented at JPL in 1983 for processing of SIR-B data.
In a sampled Doppler spectrum of the form of that in Fig. 5.6, only one replication is the right one, that centered at the Doppler center frequency f_Dc. The clutterlock algorithms described in Section 5.3 determine a value f'_Dc in the base region of the replicated spectra: 0 ≤ f'_Dc < f_s = f_p. There is one step in the processing chain which is sensitive to whether our estimate f'_Dc is the true value f_Dc or one of its replications, f'_Dc = f_Dc + mf_s, m ≠ 0. That is the range migration correction. The fact is used in the range subaperture correlation algorithm (Cumming et al., 1986) to determine the correct f_Dc.

Suppose that the true range walk Eqn. (4.1.39) for some particular image range R_c of interest is described by

R(s) ≈ R_c − (λf_Dc/2)(s − s_c)    (5.4.2)

while we assume the same expression with a value f'_Dc for the Doppler center frequency, differing by some multiple of the sampling frequency: f'_Dc = f_Dc + mf_p. As a result, we assume the data for each image point to lie nominally along the dashed range walk line of Fig. 5.7, whereas they actually lie along the solid line. The same situation holds in both the slow time and Doppler frequency domains:

ΔR = R − R' = −(λ/2)(f_Dc − f'_Dc)(s − s_c) = −(λmf_p/2)(s − s_c)
   = −(λmf_p/2f_R)(f − f_Dc)    (5.4.3)

This is the range amount by which the azimuth time domain data locus would be offset in the time domain migration correction procedure of Section 4.2.3, or the first order range offset in assembling the Doppler frequency spectra in the process of secondary range compression, as indicated in Eqn. (4.2.49).

Figure 5.7 Range walk locus error resulting from use of ambiguous Doppler spectrum with m ≠ 0.
Now consider the procedure of registration of the multiple looks of a multilook image. Each frequency f in the subband of each look maps, through Eqn. (5.4.3), to a range offset, so that looks formed from different subbands will be misregistered in range if m ≠ 0. The two looks may be cross correlated in range,

p(R) = Σ_n I_1(R_n + R) I_2(R_n)

averaging over azimuth to enhance stability. The correlation p(R) will tend to peak at the offset

ΔR_p = −(λmf_p/2f_R) Δf_Dc    *(5.4.4)

where Δf_Dc is the difference in assumed look center frequencies. The value of m may be calculated from Eqn. (5.4.4). This yields the true value

f_Dc = f'_Dc − mf_p

allowing the full image to be processed with the proper range migration correction.
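Given a measured range misregistration between looks, recovery of the ambiguity integer m is a one-line computation. The sketch below assumes the magnitude form of Eqn. (5.4.4), ΔR = (λ m f_p / 2f_R) Δf_Dc, so signs and conventions should be checked against the actual geometry before use; all names are illustrative.

```python
def ambiguity_number(delta_r, wavelength, f_p, f_r, delta_f_looks):
    """PRF ambiguity integer m from the measured range offset delta_r
    (meters) between looks whose assumed center frequencies differ by
    delta_f_looks (Hz); f_r is the azimuth FM rate magnitude (Hz/s)."""
    m_float = 2.0 * abs(f_r) * delta_r / (wavelength * f_p * delta_f_looks)
    return round(m_float)
```
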
The need for such an ambiguity resolution procedure is greater at higher frequencies, being usually unnecessary at L-band. This is because the true value of f_Dc can be calculated, at least to within half a pulse sampling interval f_p, provided the antenna beam pointing direction is known to within nominally half a beamwidth. This follows because Doppler frequency is related to pointing angle θ off broadside by nominally (Eqn. (1.2.4)) f_Dc = (2V/λ) sin θ, so that an error δθ in measuring beam center pointing angle (squint) produces a Doppler error of nominally (2V/λ)δθ. That is, we require

(2V/λ)|δθ| < f_p/2    (5.4.5)

Since azimuth resolution is nominally (Eqn. (4.1.37))

δx ≈ V/B_D    (5.4.6)

and nominally B_D = f_p, the look misregistration of Eqn. (5.4.4) becomes

ΔR = (mR_c/4)(λ/δx)²    *(5.4.8)
At C-band, say λ = 5 cm, and at the altitude of the space shuttle, R_c = 250 km, with a common single-look azimuth resolution δx = 6 m, and range resolution δR = 7 m, this yields (for m = 1)

ΔR/δR = 0.6

Hence the misregistration is less than a resolution cell per ambiguity cycle, and may be difficult to sense. (The cross correlation uses single-look images, which have a signal-to-speckle noise ratio of only 0 dB.) The situation worsens at X-band.
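The quoted C-band figure follows directly from Eqn. (5.4.8):

```python
# Check of the C-band example: delta_R = (m * R_c / 4) * (wavelength / dx)**2
wavelength = 0.05      # C-band wavelength, m
r_c = 250e3            # slant range at shuttle altitude, m
dx = 6.0               # single-look azimuth resolution, m
dr_res = 7.0           # range resolution, m
m = 1
delta_r = (m * r_c / 4.0) * (wavelength / dx) ** 2   # about 4.3 m
ratio = delta_r / dr_res                             # about 0.62
```

which reproduces the stated ΔR/δR ≈ 0.6, i.e., less than one range resolution cell per ambiguity cycle.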
Ambiguity Resolution Using Multiple PRFs
from which f_Dc may be concluded. Chang and Curlander (1992) set forth a
more deductive solution, which has the possibility of extension to account for
measurement errors.
Assume first that all frequency values are integers. Then by definition the expression Eqn. (5.4.9) is a congruence (Barnett, 1969, Chapter 6):

f_Dc ≡ f_Dc^i mod(f_p^i),    i = 1, …, M    (5.4.10)

That is, the integer difference f_Dc − f_Dc^i is an integer multiple of the integer f_p^i. Given the numbers f_Dc^i and f_p^i, we want to solve the simultaneous set Eqn. (5.4.10) for the unknown f_Dc.

The existence of a solution to Eqn. (5.4.10) rests on the Chinese remainder theorem (Barnett, 1969, p. 115): If the members of each pair f_p^i, f_p^j, i ≠ j, have no common integer divisors, other than unity, then the simultaneous set Eqn. (5.4.10) always has solutions, and, further, those solutions are congruent modulo f_M = f_p^1 × ⋯ × f_p^M. That is, the solution f_Dc is determined to within the product f_M, and the ambiguity span of the Doppler centroid is expanded to that value.
The proof of the Chinese remainder theorem is by construction, and is given by Barnett (1969, p. 115). In the new baseband 0 ≤ f_Dc < f_M, the solution is

f_Dc = [Σ_{i=1}^{M} M_i n_i f_Dc^i] mod(f_M)    (5.4.11)

where

M_i = f_M / f_p^i    (5.4.12)

and the integers n_i satisfy

M_i n_i ≡ 1 mod(f_p^i)    (5.4.13)

Figure 5.8 True Doppler spectrum and differing ambiguities induced by two PRFs f_p^1, f_p^2.

The solution of the equation Eqn. (5.4.13) can be constructed (Barnett, 1969, p. 51) by chaining backwards through Euclid's algorithm (Barnett, 1969, p. 47) for finding the greatest common divisor (gcd) of the integers M_i and f_p^i. This will be illustrated by an example.
As a numerical example, take f_p^1 = 1745, f_p^2 = 1652, and a true centroid of 5275, so that the measured residues are

f_Dc^1 = 5275 mod(1745) = 40
f_Dc^2 = 5275 mod(1652) = 319

The set Eqn. (5.4.13) is (M_1 = 1652, M_2 = 1745):

1652 n_1 + 1745 k_1 = 1    (5.4.14)
1745 n_2 + 1652 k_2 = 1    (5.4.15)

Euclid's algorithm applied to (1745, 1652) gives

a = 1745 = 1652 x 1 + 93
b = 1652 = 93 x 17 + 71
r_1 = 93 = 71 x 1 + 22
r_2 = 71 = 22 x 3 + 5
r_3 = 22 = 5 x 4 + 2
r_4 = 5 = 2 x 2 + 1
r_5 = 2 = 1 x 2 + 0

which identifies 1 (the last nonzero remainder) as the gcd of (1745, 1652). Now carry out the back solution of the Euclid array:

1 = 5 - 2 x 2
  = 5 - 2 x (22 - 5 x 4) = -2 x 22 + 9 x 5
  = -2 x 22 + 9 x (71 - 22 x 3) = 9 x 71 - 29 x 22
  = 9 x 71 - 29 x (93 - 71 x 1) = -29 x 93 + 38 x 71
  = -29 x 93 + 38 x (1652 - 93 x 17) = 38 x 1652 - 675 x 93
  = 38 x 1652 - 675 x (1745 - 1652 x 1)
  = -675 x 1745 + 713 x 1652

so that n_1 = 713 and n_2 = -675. Then Eqn. (5.4.11) yields

f_Dc = [1652 x 713 x 40 + 1745 x (-675) x 319] mod(1745 x 1652) = 5275

recovering the true value.

Chang and Curlander (1992) develop a slight extension of this algorithm, aimed at noninteger measured values f_Dc^i. From Eqn. (5.4.10), it is clear that the integer part of the f_Dc^i arises from the integer part of f_Dc, so that the integer parts of the f_Dc^i can be used in the above procedure, and their (common) fractional part added to the resulting f_Dc. To allow for estimation errors in the f_Dc^i, the measured f_Dc^i are least squares fit to numbers a_i constrained to have common fractional part, and the integer parts of the resulting a_i used in the algorithm. Their common fractional part is added to the solution f_Dc found. Also, the case is considered that the true value f_Dc may change slightly over the time interval from one PRF burst to another, so that, in place of Eqn. (5.4.10), we have

f_Dc^(i) ≡ f_Dc^i mod(f_p^i),    i = 1, …, M    (5.4.16)

where f_Dc^(i) is the true centroid during the ith burst.
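The remainder-theorem construction, including the extended Euclid back solution, can be written compactly; the sketch below (illustrative names) reproduces the numerical example, where residues 40 and 319 for the coprime PRFs 1745 and 1652 recover the centroid 5275.

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def resolve_centroid(residues, prfs):
    """Chinese remainder reconstruction of the Doppler centroid from
    its residues modulo each of the pairwise coprime integer PRFs."""
    f_m = 1
    for p in prfs:
        f_m *= p
    total = 0
    for r, p in zip(residues, prfs):
        m_i = f_m // p
        _, n_i, _ = ext_gcd(m_i, p)     # m_i * n_i = 1 (mod p)
        total += m_i * n_i * r
    return total % f_m
```
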
Any two of the values f_Dc^i, say f_Dc^1 and f_Dc^2, are used as data in the maximum likelihood estimator of f_Dc and k_1, assuming k_2 = k_1 + i, with i some deterministic value. With p_12(Δf_Dc^1, Δf_Dc^2) taken to be Gaussian, the resulting estimator still contains the integer i as a parameter. To obtain some smoothing effect, two measurements are used to determine it. It can be assumed that f_Dc^1, f_Dc^2 differ from f_Dc by considerably less than a PRF, f_p^1 or f_p^2, respectively.

REFERENCES
Luscombe, A. P. (1982). "Auxiliary data networks for satellite synthetic aperture radar," Marconi Review, 45(225), pp. 84-105.
Madsen, S. N. (1989). "Estimating the Doppler centroid of SAR data," IEEE Trans. Aerospace and Elec. Sys., AES-25(2), pp. 134-140.
Madsen, S. N. (1985). "Speckle Theory," Electromagnetics Institute, Technical University of Denmark, Report LD62, November.
McDonough, R. N., B. E. Raff and J. L. Kerr (1985). "Image formation from spaceborne synthetic aperture radar signals," Johns Hopkins APL Technical Digest, 6(4), pp. 300-312.
Mooney, D. H. and W. A. Skillman (1970). "Pulse-Doppler Radar," Chapter 19 in Radar Handbook (Skolnik, M. I., ed.), McGraw-Hill, New York, pp. 19.1-19.29.
Papoulis, A. (1965). Probability, Random Variables, and Stochastic Processes, McGraw-Hill, New York.
Ulaby, F. T., R. K. Moore and A. K. Fung (1982). Microwave Remote Sensing, Vol. 2, Addison-Wesley, Reading, MA.
Whalen, A. D. (1971). Detection of Signals in Noise, Academic Press, New York.
Wu, C., B. Barkan, W. J. Karplus and D. Caswell (1982). "Seasat synthetic-aperture radar data reduction using parallel programmable array processors," IEEE Trans. Geosci. and Remote Sensing, GE-20(3), pp. 352-358.
CHAPTER 6
SAR FLIGHT SYSTEM
6.1
SYSTEM OVERVIEW
The SAR data system comprises three main subsystems (Fig. 6.1): (1) SAR sensor; (2) Platform (spacecraft or aircraft) and data downlink; and (3) Ground data processing.
6.1
ci
...I
>
w
u
..>i'
~
0
"O
"O
"'El=
.:a
'iii
Q::;
i.::
:i
e..:
...I
"O
"'
"'
i:o::
>
w
-;-
...I
e"'
.J:J
"'
.t:l
;::J
..."'0
ct
~i
'<;'
:.: a:
~ 51
-' UI
;;:u
.5
"O
"O
"'
~
zw
00
cg:
D. II)
... -u
"" z
I- 0
- a:
CJ I-
...I
"O "'
i:o:: "O
<"O
r.ll
;;i
cS
...I
UI
"" ...I
w
w
oil
-"' -
-u
Ow
CJ oz
_,o
""
z rr:
t>
1. Amplitude and phase errors across the system bandwidth which degrade
e ~
';"';; 8"'
0
"' ...
"' "'
UI
251
example, the antenna typically has three major subassemblies: (1) Feed; (2)
Radiators; and ( 3) Support structure. The characteristics of each radar assembly
will be addressed in more detail in the following sections. The spacecraft (S/C)
bus generally provides the downlink processor (including the communications
antenna) and the onboard recorders for temporary storage of data.
In discussing the SAR ground data system, we will refer to the various levels
of data products as defined by NASA (Butler, 1984 ). Their definitions, as adapted
specifically to SAR data products, are presented in Table 6.1. We will treat the
ground receiving station and Level 0 processing for removing the telemetry
artifacts as part of the data downlink. The ground data processing subsystem
consists of a Level lA processor which produces the single-look, complex image
by performing two-dimensional matched filtering of the data, followed by a
Level lB processor that performs radiometric and geometric corrections on the
Level lA output image. This Level lB processor may also perform low pass
filtering for speckle noise reduction and detection of the complex image for
video display. The final stage of the ground processing is the Level 2, 3 processor
that derives geophysical information (e.g., soil moisture, surface roughness) from
either a single image frame (Level 2) or multitemporal coregistered images
(Level 3).
Each element of the data system introduces noise of some type that corrupts
the signal, effectively degrading the overall system performance. Typically, this
performance is characterized by the system impulse response function. Additional
performance characteristics relating to the radiometric and geometric calibration
accuracy will be discussed in detail in Chapters 7 and 8. The key noise sources
degrading the synthetic aperture radar performance can be categorized as
follows:
<'>
SYSTEM OVERVIEW
CJ
rr:
z!z
-o
!u
"""o
z
""
"'
. <:>
QI
:I "O
.'fl <IS
IL
250
TABLE 6.1 Level Definitions

Level 0
Level 1A
Level 1B
Level 2
Level 3
Level 4
adjusting the matched filter reference function. The residual, random linear error component degrades the impulse response function. The nonlinear noise sources to some degree can be modeled as white additive noise. However, frequency harmonics will arise from saturation effects that must be treated separately. These issues will be discussed in more detail in Section 6.2. The thermal noise will result in measurement errors on an individual sample basis, but over a large number of samples the mean noise power can be accurately estimated and subtracted from the average received power to derive an accurate backscatter coefficient estimate.
Prior to addressing the system performance specifications for individual
assemblies in the radar subsystem, it is instructive to briefly review the radar
system operation, keeping in mind that there are a number of variations on
this operational scenario. A block diagram of the SAR sensor is shown in Fig. 6.2.
Transmission
The coherent radar signal originates in the stable local oscillator (stalo). This
signal is gated into the exciter subsystem according to a predefined pulse duration
and pulse repetition frequency (PRF). The exciter modulates the pulse tone
with a frequency or phase code. This signal is then translated to the desired
carrier frequency by a series of mixers, amplifiers, and bandpass filters. At the
carrier frequency, the RF signal is input to a high power amplifier (HPA) which
is either a cascade of solid state amplifiers or a traveling wave tube (TWT)
device. This high power signal is then passed through a circulator switch to the antenna subsystem. The antenna feed network consists of coaxial cable and/or waveguide with power dividers. It divides the signal into a number of parallel coherent paths (assuming a phased array antenna) which terminate with radiating elements or slots. The feed network may also contain phase shifters for electronic steering of the beam.

Figure 6.2 Block diagram of SAR subsystem illustrating four assemblies and key subassemblies.

Reception
The return echoes are collected by the same antenna radiator and feed system as was used for signal transmission. The exception is an active array where the T/R module paths are not the same on receive as on transmit. A circulator then switches the echo signal into the receiver where it is bandpass filtered and input to a low noise amplifier (LNA). A variable gain amplifier (VGA) typically follows the LNA to normalize the signal amplitude according to the target backscatter. This signal is then frequency shifted to an intermediate frequency (IF) for narrowband filtering to the pulse bandwidth, amplified, and again shifted either to baseband or offset video by a series of filters and mixers. This video signal is split by a power divider and digitized using dual analog-to-digital converters (ADCs) clocked to sample the in-phase (I) and quadrature (Q) components of the baseband signal. Alternatively, a single ADC sampling at a rate twice the system bandwidth can be used to digitize the offset video. The ADC output is then time expansion buffered in a high speed random access memory (RAM) to achieve a constant rate data stream. This data is then passed to a formatter unit which appends the header (e.g., GMT clock, calibration signals, engineering telemetry) and outputs the data to the platform bus.
Downlink

The platform bus transfers the formatted video data to an onboard signal processing system via the digital data routing electronics for recording, processing, or transmission to a ground receiving station (Fig. 6.3). The ground receiver and Level 0 processor demodulate the carrier signal, strip off the channel coding (that is applied for bit error protection), and correct for telemetry artifacts (e.g., data sequence, polarity). The platform bus may include high density digital recorders (HDDRs) for temporary data storage, digital signal processors for removal of the pulse code modulation (range compression), and/or synthetic aperture correlators for onboard image formation.
The recovered digitized SAR video data is either recorded on HDDRs by the Level 0 processor (and the tapes shipped to the Level 1 processing facility), or the data is retransmitted and recorded at the Level 1 facility for real-time or off-line processing (Fig. 6.4). The first stage of the Level 1 processing performs data synchronization and reformatting. Since the data is recorded in range line order, the Level 1A signal processing in the range dimension typically precedes the azimuth processing. Almost all correlator systems use two one-dimensional reference functions as an approximation for the exact two-dimensional matched filter. Since the Level 1A output is a single-look complex image, this processing stage is essentially reversible. The processing operations in the Level 1B processor include: (1) radiometric corrections to remove the cross-track signal power modulation; (2) geometric resampling to a map grid; and (3) detection and lowpass filtering for speckle noise reduction. In general these operations, which produce the radiometrically and geometrically calibrated imagery for extraction of surface information, are not reversible.
which produce the radiometrically and geometrically calibrated imagery for
extraction of surface information, are not reversible. The final processing stage,
the Level 2, 3 processor, generates the high level, non-image products. This
processor converts the calibrated imagery into geophysical data units that
represent some type of surface characteristic. Very few of the Level 2, 3 products
can be produced in a fully automated fashion, due to the complex scattering
CORRELATIVE
DATA
HIGH DENSITY
DIGITAL
RECORDERS
FROM
LEVEL 0
PROCESSOR
FROM SAR
SENSOR
SUBSYSTEM
255
DATA
SYNC AND
FORMATTING
SAR
CORRELATOR
(LEVEL 1A)
SAR
GEOPHYSICAL
PROCESSOR(S)
(LEVEL2.3)
SAR POST
PROCESSOR
(LEVEL 18)
TO USERS
AND LARGE
SCALE
MODELS
ON-BOARD
SAR
CORRELATOR
TO USERS
..-----.~:"'
DOWNLINK
PROCESSOR
~
DOWNLINK
ANTENNA
GROUND
LEVELO
PROCESSOR
PROCESSING
SUBSYSTEM
RECEIVING
STATION
Figure 6.3 Block diagram of the platform and data downlink subsystem illustrating major
subassemblies.
....___.,.. 0-0
HDDR
ARCHIVE
Figure 6.4 Block diagram of the ground data processing subsystem illustrating major processing
stages.
256
mechanisms that give rise to the target reflectivity. Since this processing requires
operator interaction, it is typically not considered as an element of the SAR
ground data system, even though it is an essential processing stage for extracting
information from the SAR data.
An example of an end-to-end SAR data system design and operation is
presented in Appendix C. This appendix describes the NASA Alaska SAR
Facility ground processing system. This system is designed to process data from
the ESA-ERS-1 SAR, the NASDA J-ERS-1 SAR and the Canadian Radarsat
systems. It includes all elements of the signal processing chain described above,
including a Level 2, 3 processor for multitemporal tracking of sea ice and
production of ice concentration maps and ocean wave spectra contour plots.
6.2

Following the model of Fig. 6.5, the distortion introduced by the system can be represented by a filter

H(ω) = A(ω) exp(jB(ω))    (6.2.1)

where A(ω) is the amplitude transfer characteristic and B(ω) is the phase characteristic. These terms can be expanded in a Fourier series as follows

A(ω) = a_0 + Σ_{n=1}^{∞} a_n cos(ncω)    (6.2.2a)

B(ω) = b_0 ω + Σ_{n=1}^{∞} b_n sin(ncω)    (6.2.2b)

For a single phase ripple term b_1 sin(cω), the output corresponding to an input signal s(t) is

s_o(t) = J_0(b_1) s(t + b_0) + Σ_{m=1}^{∞} J_m(b_1)[s(t + b_0 + mc) + (−1)^m s(t + b_0 − mc)]    (6.2.3)
where J_i is the Bessel function of the first kind and ith order. The first term in Eqn. (6.2.3) is the desired signal, weighted by the zero order Bessel function. Each term of the summation consists of two echoes, advanced and delayed replicas of s(t) weighted by the mth order Bessel function. The desired output signal relative to the input signal is delayed by b_0 and the paired echoes are displaced from the desired output by mc (Fig. 6.6). Note that the first phase distortion term b_1 actually gives rise to an infinite series of paired echoes which are generally neglected beyond the first pair.

Figure 6.5 Distortion model: an input signal S_i(ω) passed through a distortionless filter H(ω) in cascade with a distortion filter A(ω) exp(jB(ω)) to produce the output signal S_o(ω).

Figure 6.6 (continued)

The peak sidelobe ratio (PSLR) from each amplitude ripple term is given by

PSLR_a = 20 log(a_n/2a_0)    (6.2.4)
and similarly the peak sidelobe for each phase ripple term is given by

PSLR_p = 20 log(b_1 / 2)    (6.2.5)

The integrated sidelobe ratio (ISLR) contributed by random amplitude errors is

ISLR_a = 20 log(σ_a / a_0)    (6.2.6)

where σ_a is the rms amplitude error about a linear fit across the frequency band,
and for phase errors

ISLR_p = 20 log σ_p    (6.2.7)

where σ_p is the rms phase error in radians about the desired phase function.
Similarly, the mainlobe broadening for amplitude and phase errors respectively
is given by

K_ml,a = (1 − σ_a²)^(−2)    (6.2.8)

K_ml,p = (1 − σ_p²)^(−2)    (6.2.9)

where K_ml is the broadening factor relative to the theoretical mainlobe width.

Each element in the radar subsystem will produce a phase and amplitude
error characteristic. To derive an overall performance specification for the radar,
it is typically assumed that each error source is an independent process,
characterized by some probability distribution function (PDF). The resultant
PDF of all error sources is assumed Gaussian, by the central limit theorem,
with mean and variance given by the sums of the mean and variance
contributions of the individual error processes. This formulation allows the
effective ISLR of the system to be calculated as the sum of the ISLR contributions
from each subassembly or component comprising the subassembly. A similar
formulation can be made for the mainlobe broadening. Thus the overall system
performance is given by

ISLR = 10 log( Σ_i σ_i² )    (6.2.10)

K_ml = Π_i K_ml,i    (6.2.11)

where σ_i is the standard deviation of the phase or amplitude error of the ith
subassembly and K_ml,i is the fractional broadening from the ith subassembly.

Measurement Techniques

In practice the group delay across the passband,

t_d(ω) = −dB(ω)/dω    (6.2.12)

is the measured quantity. Numerical quadrature is used to derive the phase versus frequency data points.
A least square error quadratic fit is applied to these points and the rms phase
error is calculated from the residuals. For timing error measurements, a counter
can be used to measure the relative differences between the leading edges of a
series of timing pulses. The variance of these measurements determines the
timing jitter, which can then be converted into phase error by

σ_p = 2π f σ_t    (6.2.13)

where σ_t is the rms timing jitter and f is the frequency of the measured signal.
6.2.2
While most radar systems are designed such that their components operate in
the linear region over a wide range of inputs, the actual performance of the
radar can never be strictly categorized as linear. Given that the return echo
amplitude modulation is random, some fraction of the data (i.e., the tails of the
probability distribution) will always be in saturation (Fig. 6.7). If the percentage
of the data in saturation is small, the system is in a quasi-linear operation mode,
where the nonlinearities are characterized by the level of harmonic or intermodulation distortion in the output.
Figure 6.7 System transfer function illustrating the effect of saturation on the echo signal where
s1 is the input signal and s0 is the output signal.
where t_m is the system memory or settling time. The system memory can be
measured using a two-pulse input, where the response to each pulse is measured
as a function of the time spacing between inputs. The minimum time interval
which results in identical responses to the two inputs is the settling time. This
parameter could also be measured directly from the autocorrelation function
of the system response to white noise. The nonlinear characteristics of the analog
to digital conversion process will be considered in more detail in the section
on ADCs.
6.3
This section will review the four assemblies of the radar subsystem in terms of
their performance characteristics and design trade-offs.
6.3.1

The frequency stability of the stable local oscillator (stalo) introduces a phase error into the azimuth signal of

σ_p = 2π f_c t σ_A(t)    (6.3.1)

where t is the round trip propagation time of a pulse, f_c is the carrier frequency
and σ_A²(t) is the Allan variance of the crystal oscillator. The Allan variance
characterizes the fractional frequency stability of the oscillator over a given averaging interval.
From Eqn. (6.2.7) and Eqn. (6.2.9), the azimuth impulse response function is
degraded by

ISLR_p = −34.5 dB

K_ml,p = 1.0007
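These two numbers can be cross-checked against Eqns. (6.2.7) and (6.2.9); the quoted ISLR of −34.5 dB corresponds to roughly 1.1° rms phase error:

```python
import math

# rms phase error (radians) implied by the quoted ISLR of -34.5 dB, per Eqn. (6.2.7)
sigma_p = 10 ** (-34.5 / 20)          # ~0.0188 rad, about 1.1 degrees rms

islr_p = 20 * math.log10(sigma_p)     # recovers -34.5 dB
k_ml_p = (1 - sigma_p**2) ** -2       # mainlobe broadening, Eqn. (6.2.9)

print(round(islr_p, 1), round(k_ml_p, 4))   # -34.5 1.0007
```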
6.3.2

RF Electronics

The RF electronics assembly can be divided into the following main subassemblies:
(1) Exciter; (2) Transmitter; and (3) Receiver. We will discuss the performance
and design trade-offs of each subassembly.

Exciter

The exciter subassembly generates a coded pulse waveform from the continuous
tone stalo output. As described in Chapter 3, coding of the transmitted pulse
provides a range resolution, δR, dependent only on the bandwidth of the pulse
code (i.e., δR = c/2B_R, where B_R is the pulse code bandwidth). Since δR is
independent of the pulse duration, the transmitter peak power requirements
can be reduced by extending the pulse duration without degrading either the
resolution or the SNR. This peak power reduction simplifies the transmitter
design, increasing both performance and reliability as well as reducing the risk
of breakdown or arcing in the high power cables.

Among the various pulse coding schemes, frequency coding and phase coding
are commonly used, with frequency coding by far the most popular. The
frequency codes can be categorized as linear or nonlinear FM. The linear FM
or chirp code is used in most radar systems, primarily due to its ease of
implementation and its insensitivity to Doppler shifts. Almost all currently
operational (non-military) SAR systems, as well as those planned for the 1990s
(with the exception of Magellan), use a linear FM chirp (Fig. 6.8a). Nonlinear
FM codes (e.g., Taylor weighted) are used primarily in military applications
where very low sidelobes are required (Fig. 6.8b). The nonlinear chirp permits
exact matched filtering (i.e., range compression) without the severe SNR loss
that would result from an equivalent processor weighting of a linear FM signal
(Butler, 1980).

Phase code modulation is used primarily in systems where the available
resources (i.e., power, mass) are limited or in situations where a relatively
inexpensive coding implementation is required. Most popular is the binary
phase code, where a 180° phase shift is switched into the circuit at periodic
intervals (Fig. 6.8c). The sequence of 0's (no shift) and 1's (180° shift), which
occur at uniform intervals of Δt (a chip), is chosen to achieve the best possible
sidelobe characteristics. For small pulse compression ratios (≤ 13 chips per
pulse), Barker codes are commonly used due to their optimal equal-level sidelobe
characteristics. However, since longer codes are required for most SAR systems
(e.g., Magellan has 60 chips per pulse), pseudorandom sequences such as the
maximal-length sequence are more common. A detailed treatment of these and
other phase coding techniques is given in Cook and Bernfeld (1967).

Figure 6.8 Pulse coding schemes: (a) Linear FM code; (b) Nonlinear FM code; (c) Binary phase
code, where T_p is the pulse duration, B_R is the pulse bandwidth, and Δt = 1/B_R is the chip
duration for the binary phase code.

Dispersive Delay Lines. The most common implementation of the FM code
is a surface acoustic wave (SAW) dispersive delay line (DDL) of the configuration
shown in Fig. 6.9a. The SAW DDL typically consists of two complementary
transducers, each composed of a number of electrodes whose periodicity varies
(having higher density at the higher frequencies). The position and length of
the electrodes set the phase and amplitude response. The DDL is essentially a
linear filter whose group delay varies over the system bandwidth. The delay
versus frequency characteristic can range from a linear, flat amplitude response
to a nonlinear weighted response for sidelobe suppression. Typical time
expansion factors are on the order of 1000 where, for example, a 30 ns input
is gated from the stalo to produce a 30 μs pulse. For large time bandwidth
products (TB > 1000), spurious internal reflections can degrade the phase and
amplitude performance characteristics. To reduce these effects an inclined
transducer geometry is used (Fig. 6.9b, c). Without special compensation, at
TB ≈ 1000 the peak sidelobes of the autocorrelation function are typically 30
to 35 dB down from the mainlobe (Phonon Corp., 1986).
The advantages in using a DDL for pulse code generation are that it is a
proven technology, the performance specifications in terms of TB and pulse-to-pulse
jitter meet most system specifications, and it is relatively lightweight.
Its key disadvantages are that it is inflexible (i.e., fixed code) and that it is lossy
(up to 60 dB at TB = 1000).
Digital Pulse Coding. Exciter technology is at a transition point, where most
existing exciters utilize the DDL technology while current and future system
designs use digital technology. There are several techniques for digitally
generating the pulse code waveform. The digital phase shifter method, used in
SIR-C, essentially switches an inline phase shifter through a piecewise linear
approximation of a quadratic function (for linear FM) over the pulse duration
(Fig. 6.10a). To achieve PSLR and ISLR performance in the −30 to −35 dB
range, the slope of the linear phase approximation must be updated at an
interval Δt that satisfies (Klein, 1987)
( 6.3.2)
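The reason an update-interval requirement arises can be illustrated numerically: approximating the quadratic phase of a linear FM pulse by straight-line segments of length Δt leaves a residual phase error that grows as Δt². The chirp rate and intervals below are illustrative only, not SIR-C values:

```python
import math

K = 1.0e12       # chirp rate (Hz/s), hypothetical
tau = 10e-6      # pulse duration (s), hypothetical
dt = 0.1e-6      # phase-slope update interval (s), hypothetical

def phase(t):
    """Quadratic phase of a linear FM pulse (radians)."""
    return math.pi * K * t * t

# Worst-case deviation between the quadratic phase and its
# piecewise-linear (chord) approximation over each update interval.
max_err = 0.0
t = 0.0
while t + dt <= tau:
    mid = t + dt / 2
    chord = (phase(t) + phase(t + dt)) / 2   # linear approximation at midpoint
    max_err = max(max_err, abs(chord - phase(mid)))
    t += dt

print(max_err)   # for a quadratic this equals pi*K*dt**2/4
```

Halving Δt quarters the residual phase error, which is what drives the update rate needed to hold sidelobes in the −30 to −35 dB range.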
Figure 6.9 Surface acoustic wave dispersive delay line: (a) Double dispersion inline geometry;
(b) Double dispersion inclined geometry; (c) Close-up view of SIR-B inclined transducer geometry.
The key performance parameters of the DDL are pulse-to-pulse timing jitter
and rms phase and amplitude errors. The phase and amplitude errors can be
characterized in terms of mainlobe and sidelobe characteristics of the pulse
autocorrelation function as discussed in Section 6.2. Typical numbers are
σ_p ≈ 3° rms and σ_a ≈ 1.0 dB. The amplitude distortion in the DDL is not a
factor since the signal is clipped in the transmitter (see next section). The
pulse-to-pulse jitter degrades the azimuth impulse response. The jitter error is
translated into azimuth phase error by

σ_p = 2π f σ_t    (6.3.3)

where σ_t is the standard deviation of the pulse-to-pulse jitter and f is the
operating frequency of the DDL. For a 10 MHz SAW DDL, a 0.5 ns jitter
produces σ_p = 2°, resulting in an ISLR of −24 dB.

Figure 6.10 Digital pulse coding schemes: (a) In-phase shifter; (b) Prestored code.

Transmitter

Figure 6.11 Traveling wave tube: (a) Circuit layout; (b) Gain characteristic of a broadband TWT
amplifier, showing the linear dynamic and saturation regions, the saturated output power P_sat,
and the tangential sensitivity S_min.

Traveling Wave Tubes. The tube commonly used in most airborne and some
spaceborne systems is the traveling wave tube (TWT). The TWT consists of an
electron gun (heated cathode, control grid and anode), a delay line, and a
collector (Fig. 6.11a). The electron beam, formed by an electric field, passes
through a delay line, where the beam energy is transferred to the delay line,
effectively amplifying the RF signal. The TWT is characterized by both high
gain and large (octave) bandwidths. For radar applications, the tubes are
typically operated in saturation (Fig. 6.11b) to maximize the available output
power and to ensure a stable power level despite variation in the input signal.
However, operation in this region makes the TWT a nonlinear device and
harmonics of the fundamental signal are generated that must be removed using
a bandpass filter. The efficiency of microwave TWTs (1-10 GHz) has improved
to 30-50% with advanced collector designs. Typical gains are 45 to 60 dB.

Solid State Amplifiers. Most lower frequency spaceborne SAR systems (i.e.,
L- and S-bands) employ solid state amplifier designs for improved reliability.
A parallel-cascaded design is used to achieve the required output power.
Consider the SIR-B amplifier network as an example (Fig. 6.12). The low power
signal is initially split into three parallel channels. Each channel is amplified
with a set of (Class A) predriver amplifiers operating in the linear region. These
are followed by isolation circulators and Class C driver amplifiers. This driver
signal is input to a power amplifier subassembly which consists of a series of
bipolar transistor stages to achieve the required gain. Combiners are then used
to reassemble this parallel network output into a single high power signal. This
SIR-B design using 50 W bipolar transistors produced a 1.5 kW output power
at about 12-15% efficiency (Huneycutt, 1985). Current technology using GaAs
FETs can achieve 20-25% efficiency at C- and L-bands and about half of that
at X-band.

Figure 6.12 SIR-B solid state amplifier network, illustrating the power divider from the chirp
generator, the Class A predriver and Class C driver stages (54 dB gain), the parallel power
amplifier subassemblies (10.5 dB gain), and the combined output port to the antenna.

Receiver

The receiver assembly is typically divided into a radio frequency (RF) stage,
an intermediate frequency (IF) stage, and a video frequency (VF) stage
(Fig. 6.2). The RF front end basically consists of a: (1) Limiter to prevent high
power signals (from the transmitted pulse or interfering radars) from damaging
the system; (2) Bandpass filter (which is wide compared to the pulse bandwidth)
to limit the spurious signal power; and (3) Low noise amplifier (LNA) whose
noise figure is a key factor in establishing the overall system signal to thermal
noise ratio. The noise figure is given by

F = 10 log(SNR_i / SNR_o)    (6.3.4)

where SNR_i and SNR_o are the signal to noise ratios at the input and output
of the amplifier respectively. This measure is a figure of merit for noise internally
generated in the amplifier (Section 2.6.2). A typical noise figure is 3-4 dB for
an L-band amplifier and about 1 to 1.5 dB higher at C-band.

The intermediate frequency stage typically consists of: (1) IF amplifier(s);
(2) Variable gain amplifier (VGA); and (3) Bandpass filter(s), slightly wider
than the pulse bandwidth, to limit the system noise. The VGA is used to set
the quiescent gain of the system for a given data acquisition sequence. However,
for some systems the instantaneous dynamic range of the signal is such that a
sensitivity time control (STC) or an automatic gain control (AGC) is required
to reduce the signal dynamic range. These techniques are discussed in a later
section.

The video frequency stage consists of: (1) Low pass filter; and (2) Video
amplifier to match the output of the receiver to the ADC input. A second VGA
may be included in this stage.

At each stage a number of directional couplers are inserted as test points
and a calibration signal is injected using a directional coupler, typically at the
front end following the circulator, or just preceding the IF amplifier.
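The dominance of the LNA noise figure over later stages can be sketched with the standard cascade (Friis) noise figure formula, which the text does not state explicitly; the stage values below are hypothetical:

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10)

def cascade_nf_db(stages):
    """Friis cascade noise figure.

    stages: list of (noise_figure_dB, gain_dB) tuples, first stage first.
    F_total = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ...
    """
    f_total = 0.0
    gain_product = 1.0
    for i, (nf_db, g_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        if i == 0:
            f_total = f
        else:
            f_total += (f - 1.0) / gain_product
        gain_product *= db_to_lin(g_db)
    return 10 * math.log10(f_total)

# Hypothetical LNA (3 dB NF, 30 dB gain) followed by a noisy IF stage (10 dB NF, 40 dB gain)
print(round(cascade_nf_db([(3.0, 30.0), (10.0, 40.0)]), 2))   # 3.02
```

With 30 dB of front-end gain, even a 10 dB second-stage noise figure adds only a few hundredths of a dB, which is why the LNA noise figure sets the system thermal noise floor.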
TABLE 6.2. Typical peak-to-peak component errors.

Component               Peak-to-peak Amplitude    Peak-to-peak Phase
                        Errors (dB)               Errors (deg)
Attenuators             0                         0.5
Power dividers          0                         0.5
Circulators             0.1                       0.5
Directional couplers    0.1                       0.5
Mixers                  0.1                       0.5
Switches                0.1                       0.5
Limiters                0.1                       0.5
Amplifiers              0.2                       1.0
Filters                 0.3                       2.0
The sensitivity time control (STC) varies the receiver gain with echo delay time to compensate for the predictable cross-track variation in the received echo power, with a gain function of the form

G_STC(τ) = [R³(τ) sin γ(τ) / G²(τ)]^(1/2)    (6.3.5)
where G(τ) is the nominal vertical antenna pattern as a function of echo delay
time τ projected into the cross-track ground plane, γ(τ) is the look angle, and
R is the slant range. The STC function used in the Seasat receiver is shown in
Fig. 6.13.

An automatic gain control (AGC) is typically designed to compensate for
intrapulse variation in the return echo power, minimizing changes in the echo
dynamic range resulting from variation in target reflectivity. Essentially, these
devices employ a control loop with a detector (integrator) to estimate the
received power across a portion of the echo. The integrated power estimate is
fed back with a negative gain to the receiver VGA. The trade-off in AGC
performance is dependent on the time constant of the servo loop. It must be
short to minimize the feedback error, yet sufficiently long for the integrator to
make an accurate estimate of the echo power.
Figure 6.13 Seasat receiver sensitivity time control (STC) response, shown relative to the PRF
event and the receiver protect window.
The main shortcoming of these variable gain amplifiers is that they make
radiometric calibration of the data extremely difficult. Not only must the inverse
of this gain function be applied in the signal processor before matched
filtering, but any changes in the relative phase characteristic of
the receiver transfer function must also be compensated. Although ideally these
characteristics can be measured preflight as a function of temperature, generally
neither an AGC nor an STC can be reliably used when precise amplitude and
phase calibration is required.
6.3.3
Antenna
The SAR antenna assembly typically consists of a single high gain antenna used
for both transmit and receive consisting of a feed system, structural elements
(including deployment mechanisms), and radiating elements. For the Magellan
system, the SAR antenna is also used for the communications downlink. The
basics of antenna design can be found in any of a number of books (e.g.,
Stutzman and Thiele, 1981). The key antenna parameters affecting the SAR
performance are the antenna gain (or directivity) and its beam pattern. The
antenna gain is directly proportional to its area. Assuming uniform illumination,
the gain is given by (see Section 2.2)
G = ρD = ρ(4πA/λ²)    (6.3.6)

where ρ is the aperture efficiency, D is the directivity, A is the antenna area, and λ is the wavelength.
To prevent overlapping range echoes from successive pulses, the antenna width W must satisfy

W ≥ 2 λ R f_p tan η / c    (6.3.7)

where η is the incidence angle, f_p is the pulse repetition frequency (PRF), and
c is the propagation speed of light in free space. Similarly, to prevent overlapping
azimuth Doppler spectra, the antenna length L must satisfy

L ≥ 2 V_st / f_p    (6.3.8)

where V_st is the relative sensor to target velocity.

From Eqn. (6.3.7) and Eqn. (6.3.8) the minimum antenna area A = WL required in
order to satisfy the ambiguity constraints is

A_min = 4 V_st λ R tan η / c    (6.3.9)
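As a numerical sketch of Eqn. (6.3.9), with illustrative (roughly Seasat-like) orbit and radar parameters, not values taken from the text:

```python
import math

c = 3.0e8            # speed of light (m/s)
v_st = 7.5e3         # sensor-to-target velocity (m/s), illustrative
wavelength = 0.235   # L-band wavelength (m), illustrative
slant_range = 850e3  # slant range (m), illustrative
eta = math.radians(23.0)   # incidence angle, illustrative

# Minimum antenna area satisfying both ambiguity constraints, Eqn. (6.3.9)
a_min = 4 * v_st * wavelength * slant_range * math.tan(eta) / c
print(round(a_min, 1))   # 8.5 (m^2)
```

The result, under 10 m², is well below the roughly 23 m² of the actual Seasat antenna; practical designs carry considerable margin over the ambiguity-limited minimum.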
Figure 6.14 Antenna designs: (a) Microstrip phased array L-band antenna used in SIR-B;
(b) Slotted waveguide X-band antenna used in X-SAR. (Courtesy of H. Ottl.)
Figure 6.15 SIR-C L-band antenna feed system (one-half of symmetrical design) illustrating the
incorporation of active elements to achieve amplitude taper.
PIN diode design. The C-band HPA is a 3-stage GaAs FET operating in Class
A for a 25 dB gain, while the L-band is a 3-stage silicon bipolar transistor
design operating in Class C for a 29 dB gain. The LNA designs are GaAs FET
and silicon bipolar for the C- and L-bands, respectively, each achieving a noise
figure of 1.5 dB. The ferrite circulator provides 20 dB of isolation at 0.5 dB
insertion loss.
The design of the SIR-C antenna is illustrative of the future of spaceborne
SAR technology. Although SIR-C uses discrete components for its T/R modules
and phase shifters, monolithic microwave integrated circuits (MMIC) are
approaching the point where they can now be considered viable for a spaceborne
SAR application. The advantage is that the electronics can be incorporated into
the printed circuit board with the microstrip radiator and the feed network,
providing a fully integrated system. MMIC devices have been demonstrated
at frequencies from under 1 GHz to over 100 GHz. As the RF frequency of a
device is increased, generally both the output power and the efficiency drop.
Typical numbers for L- or C-band devices are 40 to 50% efficiency at 5-10 W
output power, dropping to 25% efficiency and 3-5 W output at X-band.
A key issue limiting wide application of this technology remains the manufacturing
yield and therefore the production costs.
Antenna Performance. The antenna gain (or efficiency) and the pattern shape
are certainly two key considerations in the antenna design; however, a number
of other specifications must be met for adequate performance. As previously
discussed for the receiver, phase and amplitude errors across the passband will
degrade the system impulse response function. In the antenna assembly we must
also consider the phase and amplitude errors as functions of the off-boresight
angle within the mainlobe of the antenna. It is not unusual for the antenna to
be a major source of phase and amplitude error, especially in the case of a
microstrip phased array antenna, which inherently has a relatively small bandwidth.

The polarization purity is also an important consideration in the antenna
design. This is especially true in a multipolarization radar such as the SIR-C,
where the relatively low power cross-polarized return is used to derive
information about the target. In this, as in any pulsed radar system, ambiguous
responses can arise, not only from the desired radiated pattern but also from
the spurious cross-polarized signals. This can be a significant error source if
the cross-polarized pattern is designed such that it has a null coinciding with
the peak gain of the like-polarized mainlobe as in Fig. 6.17. This pattern is
designed to minimize the function

I = ∫ G_xpol(θ) G_lpol(θ) dθ, integrated over the mainlobe beamwidth θ_B    (6.3.10)

where θ_B is the azimuth mainlobe beamwidth and G_xpol(θ) and G_lpol(θ) are the
azimuth cross-polarized and like-polarized gain patterns, respectively, as a
function of off-boresight angle θ. However, due to the finite sampling of the
azimuth spectrum, the signal components outside of the f_p/2 region fold over
into the main portion of the azimuth signal band.

A consideration often overlooked in the antenna specification is that the
cross-polarized pattern may have large sidelobes when its null is placed in the
mainlobe pattern. For a linearly polarized antenna the horizontal pattern
function to be minimized is

I_r = ∫ G_xpol(θ) G_lpol(θ) dθ, integrated over the angles θ(±f_p)    (6.3.11)

where θ(±f_p) refers to the range or azimuth angles that give rise to signal
components within the processing bandwidth (including ambiguous regions).
A typical performance requirement for the cross-polarization isolation as defined
in Eqn. (6.3.11) is −25 to −30 dB. SAR ambiguities will be further discussed in
Section 6.5.1.

Figure 6.17 Like- and cross-polarized patterns illustrating high cross-polarized sidelobes for a
mainlobe null design.

6.3.4

Figure 6.18 Transfer function of an ideal ADC.

For an ideal analog to digital converter (ADC) with a full scale deterministic input signal, the signal to quantization noise ratio is

SQNR = 6 n_b + 1.8 (dB)    (6.3.12)
where nb is the number of bits per sample. The actual SQNR is typically less
than that given in Eqn. (6.3.12) due to errors in the quantizer. The ADC errors
can generally be classified as either timing errors or quantization level errors.
Errors classified as timing errors are sample clock jitter and sample bias, which
result in a relative phase error between the two ADCs in a quadrature sampling
design. Sample jitter gives rise to a phase error according to Eqn. (6.3.3), where
σ_t is now defined as the standard deviation of the sample jitter and f is the
sampling frequency. Sample bias errors are typically stable or slowly varying
and can be measured with calibration signals and corrected in the ground
processor. Quantization level errors result from DC bias (a shift in all
quantization levels) or errors in the relative spacing between levels (differential
nonlinearities). The DC bias is easily corrected in the signal processor by
estimating the mean of the digitized video signal. The ADC SQNR is reduced
by the ratio of the bias voltage to the full-scale voltage of the ADC. Similarly,
the differential nonlinearities can be estimated by the processor if a full scale
sinewave test signal is available. Comparing the ADC sinewave histogram to
the ideal probability distribution function (PDF), the differential nonlinearity
(Dq) in least significant bits is given by
D_q = Σ_i | x_i/(x P_i) − 1 |    (6.3.13)

where x_i is the total number of counts in the ith bin, P_i is the expected fractional
number of counts in the ith bin for an ideal ADC, and x is the total number
of samples in the histogram.

The SNR given by Eqn. (6.3.12) describes the ADC performance given a full
scale deterministic input signal. Since the digitized SAR video is a random,
Gaussian distributed, zero mean signal, the SNR depends on the statistics of
the echo (Zeoli, 1976). The assumption of a Gaussian distribution is reasonable
considering that the typical antenna footprint is large and that the echo consists
of scattering from a diverse ground area. The noise energy is calculated for each
sample as the square of the difference between the input analog value and its
digital reconstructed value. This noise is commonly classified into saturation
noise and quantization noise components. The saturation noise is defined as
the noise arising from input analog signals that exceed the maximum or
minimum range of the analog-to-digital converter, while the quantization noise
is the error resulting from input signals within the ADC dynamic range. For a
Gaussian input signal these noises are given by

N_s = 2 ∫ from x_{L_v+1} to ∞ of (x − v_{L_v})² p(x) dx    (6.3.14)

N_q = Σ_{i=1}^{L_v} ∫ from x_i to x_{i+1} of (x − v_i)² p(x) dx    (6.3.15)

where N_s and N_q are the saturation and quantization noise powers respectively,
p(x) is the Gaussian PDF for the input signal, x_i and v_i are the quantization
and digital reconstruction levels respectively, and

L_v = 2^{n_b}    (6.3.16)

is the total number of digital reconstruction levels.

For a uniform quantizer (i.e., having equally spaced quantization levels x_i across
the ADC dynamic range), we can plot the signal to distortion noise (saturation
plus quantization) as a function of the standard deviation of the input Gaussian
signal and the number of bits per sample, n_b. These curves are plotted in
Fig. 6.19. Note that at low standard deviations (i.e., weak signals) the noise is
dominated by the quantization component, which appears as a log linear
function with a 6 dB improvement in the SDNR for each quantization bit,
according to Eqn. (6.3.12). For input signals with large standard deviations
(high power) the saturation component dominates. Thus, independent of the
number of quantization bits, as the input signal power increases each curve
tends toward unity SDNR.

Figure 6.19 Distortion noise as a function of input power and number of bits per sample.
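The qualitative behavior of the curves in Fig. 6.19 can be reproduced with a short simulation; this sketch assumes a mid-rise uniform quantizer with clipping at full scale, which is one reasonable model of the ADC described above:

```python
import numpy as np

def sdnr_db(sigma, n_bits, full_scale=1.0, n=200_000, seed=0):
    """Signal to distortion (saturation + quantization) noise ratio in dB
    for a zero-mean Gaussian input passed through a uniform n_bits quantizer."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, n)
    levels = 2 ** n_bits
    step = 2 * full_scale / levels
    # mid-rise uniform quantizer with clipping (saturation) at full scale
    q = np.clip(np.floor(x / step) * step + step / 2,
                -full_scale + step / 2, full_scale - step / 2)
    noise = np.mean((x - q) ** 2)
    return 10 * np.log10(np.mean(x ** 2) / noise)

# Weak signal: quantization dominated, each extra bit buys roughly 6 dB
print(sdnr_db(0.05, 4), sdnr_db(0.05, 5))
# Strong signal: saturation dominated, extra bits barely help
print(sdnr_db(2.0, 4), sdnr_db(2.0, 5))
```

In the weak-signal regime the extra bit yields close to the theoretical 6 dB, while for the heavily saturated input the two curves nearly coincide, matching the convergence of the curves at high input power.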
Therefore, in terms of the optimal gain setting for the video amplifier preceding
the ADC in the SAR receive chain, there is a unique value that produces the
maximum signal to distortion noise ratio, SDNR = S/(N_q + N_s) (Sharma,
1978). As the number of bits per sample increases for a given input signal power,
the gain setting (that gain maximizing the signal to distortion noise) should be
reduced to balance the saturation and quantization noise components. In setting
the gain in the receiver subassembly, it should be noted that, in any one imaging
period (e.g., a frame or a synthetic aperture), the standard deviation of the echo
may vary from a very low value (a low backscatter region) to a high value (a
bright backscatter region). Thus, the dynamic range of the echo over time
intervals on the order of the synthetic aperture time or longer may be much
greater than the instantaneous dynamic range of the return from targets within
a small time interval. For many types of natural targets, instantaneous dynamic
ranges of 25 dB within a short time interval are not uncommon. Adding to this
is the additional dynamic range required to accommodate the antenna pattern
modulation, the range attenuation, and the cross-track variation in the sample
cell size. The instantaneous dynamic range required in the ADC may be 40 dB
or more. Receiver techniques to reduce this dynamic range, such as the sensitivity
time control (STC) or the automatic gain control (AGC), have major drawbacks
in that these devices degrade the system radiometric calibration accuracy.
With the advent of high speed, wide dynamic range ADCs the need for either
an STC or an AGC to reduce the echo dynamic range is greatly diminished.
Table 6.3 lists some of the commonly available ADCs. Devices capable of
TABLE 6.3. Commercially available ADCs.

Sampling frequency (MHz): 10, 12, 20, 20, 30, 36, 50, 60, 100, 100, 250, 300, 525
Bits/sample: 8, 10, 10, 12, 12, 10, 8, 8
Channels: 1, 2, 2, 1, 1, 2, 2, 1, 8, 4, 1, 1
Manufacturers: Analog Devices, TRW, Analog Devices, Sony/Tektronix, Analogic,
Nicolet, Sony/Tektronix, Analogic, Biomation, Hughes, Tektronix, Hughes
100 Msamples/s at 8 bits/sample can be bought "off the shelf". For radar
systems with bandwidths over 50 MHz, in-phase and quadrature sampling can
be employed using two devices, each operating at half the Nyquist rate of 2BR.
In most radar systems, oversampling is applied to minimize the effects of
aliasing. For Seasat, the system range bandwidth is 19.0 MHz and the real
sampling frequency is 45.54 MHz, resulting in an oversampling factor

f_s / B_R ≈ 1.2    (6.3.17)
where fs, the sampling frequency of the I, Q detected complex signal, is one
half the real sampling frequency. When calculating the effective distortion noise
for an ADC that uses oversampling, a reasonable approximation is that the
quantization noise will be reduced by the oversampling factor, while the
saturation noise is essentially unaffected. This noise reduction occurs during
the range matched filtering operation in the signal processor. An analogous
reduction in quantization noise occurs in the azimuth signal processor as a
result of the PRF to processing bandwidth (Bp) oversampling of the azimuth
spectrum. The assumption inherent in the above statement is that the quantization
is essentially white over the range and azimuth spectra of the echo data. This
has been demonstrated by simulation of the noise spectra (Li et al., 1981).
6.4
Most spaceborne SAR systems and a few airborne systems downlink the digitized
SAR echo data to ground receiving stations. The key downlink characteristics
that affect the SAR system performance are: ( 1) The noise introduced by bit
errors; and (2) the downlink data rate (which limits either radar swath width,
duty cycle, or dynamic range). These two factors are interdependent since
increasing the bandwidth of the downlink signal processor to increase the data
rate also increases the noise bandwidth and therefore the probability of a bit
error. A detailed treatment of the trade-offs in the design of communication
systems, link budgets, and error encoding schemes can be found in the literature
(Carlson, 1975). Here we will consider the SAR system design options, given a
downlink communications system with a known probability of bit error Pb (or
bit error rate) and bandwidth (or maximum data rate).
6.4.1
Channel Errors
Following quantization of the SAR video signal, the data stream is passed to
the platform data bus. There it is either captured on a high density recorder
for non-real-time transmission to the ground receiving station, or directly
downlinked via the communications subsystem signal processor. This signal
processor typically encodes the data with some error protection code (e.g.,
284
6.4
convolutional code) and modulates the downlink carrier signal with the resultant
coded data using quaternary phase shift keying (QPSK).
The error statistics of this system depend on the type of error protection
code used. Although randomly occurring bit errors are typically assumed for
the link, if a convolutional code of long constraint length is used, burst error
statistics can result (Deutsch and Miller, 1981). It should be noted that NASA
has adopted a convolutional code, constraint length 7, rate 1/2, as standard
for the Shuttle high rate data downlink. The NASA TDRSS downlink from
the Shuttle is relayed by White Sands Receiving Station to Goddard Space Flight
Center (GSFC) via a high rate Domsat link. The data transfer is actually through
a cascade of two links (TDRSS and Domsat), each using a different coding
scheme. The effects of the two links in tandem could cause severe burst errors.
Consider the situation shown in Fig. 6.20 for the NASA high rate shuttle data
transmission. The probability of bit error for the entire link is given by
P_b = (1 − P_b1)P_b2 + (1 − P_b2)P_b1 + P_b1 P_b2    (6.4.1)

where P_b1 is the bit error probability for the shuttle to TDRS to White Sands
segment and P_b2 is the bit error probability for the White Sands to Domsat to
GSFC segment. The third term in Eqn. (6.4.1) represents the coupling between
the two links, which could produce burst errors with a longer expected burst
length than is characteristic of either link individually. However, if the
performance of each link is sufficiently high, the probability of occurrence of
the bursts is small and

P_b ≈ P_b1 + P_b2    (6.4.2)

An analysis of the Shuttle to TDRSS link indicates that the signal-to-noise ratio
is 6.5 dB, resulting in P_b ≈ 10⁻⁵ with an average burst length of 4-5 bits and
an expected period between bursts of 2 × 10⁵ bits for the 1/2 rate, length 7 code.

To determine the effect of bit errors on the SAR performance, we assume
that the bit errors occur randomly in time. This allows us to apply Bernoulli's
theorem, where the probability of an m-bit error in an n_b-bit code word is given
by

P(m) = C(n_b, m) P_b^m (1 − P_b)^(n_b − m)    (6.4.3)

For a single m-bit error within any code word designated by v_i, the resultant
code word v_j contributes a noise term

N_e = Σ_{i=1}^{L_v} Σ_{j≠i} ∫ from x_i to x_{i+1} of (x − v_j)² p(x) dx    (6.4.4)
~DCM SAT
SPACE
SHUTILE
6.4.2
GSFC
Figure 6.20 N ASA space shullle high rate communications downlink signal path.
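The arithmetic of Eqns. (6.4.1)-(6.4.3) can be sketched as follows; the equal split Pb1 = Pb2 = 10⁻⁵ is an assumption for illustration, echoing the Shuttle link figure quoted above.

```python
from math import comb

def cascade_bit_error(pb1, pb2):
    # Eqn. (6.4.1): error on exactly one link, plus the coupling term.
    return (1 - pb1) * pb2 + (1 - pb2) * pb1 + pb1 * pb2

def m_bit_error_prob(m, nb, pb):
    # Eqn. (6.4.3): Bernoulli probability of exactly m errors in nb bits.
    return comb(nb, m) * pb**m * (1 - pb)**(nb - m)

pb1 = pb2 = 1e-5
exact = cascade_bit_error(pb1, pb2)
approx = pb1 + pb2                    # Eqn. (6.4.2)
print(exact, approx)
```

For well-behaved links the coupling term Pb1·Pb2 is negligible, which is why the additive approximation of Eqn. (6.4.2) is used in practice.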
A major constraint in the design of most current spaceborne SAR systems (e.g.,
E-ERS-1, J-ERS-1, Radarsat) is the available downlink data rate. For these
systems, the swath width is either data rate limited, or the system dynamic range
has been degraded by reducing the number of bits per sample. To illustrate the
downlink capacity required by a typical SAR system, we present the following
example.
Figure 6.21 Effect of random bit errors on signal to distortion noise ratio, as a function of signal
power (standard deviation in dB), for 8 bit quantization.

To determine the average data rate we need the PRF. The Doppler bandwidth
is given by

BD = 2Vst/La

where Vst ≈ 7.5 km/s is the spaceborne sensor to target velocity. Assuming the
same oversampling factor in azimuth gives the PRF. Assuming the ADC output
is buffered to achieve time expansion over the entire inter-pulse period, the
average (sustained) real-time downlink data rate is the product of the bits per
sample, the sampling rate, the echo window duration, and the PRF.
Example 6.2 Consider a spaceborne SAR system with the following characteristics:

Quantization nb = 8 bits/sample;
Bandwidth BR = 20 MHz;
Antenna Length La = 12 m;
Swath Width Wg = 100 km;
Incidence Angle η = 45°.

The required minimum slant range swath width is approximately

Ws ≈ Wg sin η = 71 km

corresponding to an echo window duration of 2Ws/c = 471 μs.
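The numbers in Example 6.2 can be reproduced with a short script; the 20% oversampling factor is an assumption introduced here for illustration, since the example does not state one.

```python
import math

c = 3.0e8                    # speed of light, m/s
Wg = 100e3                   # ground swath width, m
eta = math.radians(45.0)     # incidence angle
La = 12.0                    # antenna length, m
Vst = 7.5e3                  # sensor-to-target velocity, m/s
alpha = 1.2                  # oversampling factor (assumed)

Ws = Wg * math.sin(eta)      # slant range swath, ~71 km
Twin = 2 * Ws / c            # echo window duration, ~471 us
BD = 2 * Vst / La            # Doppler bandwidth, Hz
fp = alpha * BD              # PRF with the assumed oversampling
print(round(Ws / 1e3), round(Twin * 1e6), BD, round(fp))   # 71 471 1250.0 1500
```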
4. Reduce the quantization to fewer bits per sample (nb) at the cost of
increased distortion noise and therefore a degraded impulse response
function and radiometric calibration accuracy (Chapter 7).
Assuming the swath width is maintained, these data rate reduction options
essentially become a trade-off between degrading either: (1) Geometric (spatial)
resolution; or (2) Radiometric resolution (dynamic range).
If a tape recorder is available onboard for capture of the real-time output,
then the sensor duty cycle could also be factored into the required downlink
capacity. Furthermore, if an onboard processor were available to generate the
image data in real time, the resolution degradation could be performed by
multilook averaging, thus reducing the speckle noise in the process.
6.4.3
Data Compression
Spatial data compression has long been used as a technique for data volume
reduction. Generally, the assumption in most compression algorithms is that
some type of redundancy exists in the representation of the data (Jain, 1981).
Many data compression algorithms have been devised to reduce redundancy
based on the statistics of the data set. Compression algorithms are classified as
either lossy or lossless.
The lossy (or noisy) algorithms are designed to achieve a relatively large
compression factor with the loss of some information (i.e., added noise) in the
reconstructed data. Conversely, a lossless (or noiseless) algorithm is capable of
exactly reconstructing the original data set from the compressed data stream.
For an application such as reducing the downlink data rate, lossy algorithms
are rarely considered for scientific instruments. This is due to the inability to
predefine what an acceptable information loss would be, since the data is to
be used for a variety of research applications. Lossy algorithms will be considered
in more detail in the ground segment of the SAR data system (Chapter 9) for
the distribution of browse image products.
Lossless compression, on the other hand, has been routinely used to reduce
the downlink data rate for optical instruments (Rice, 1979). The redundancy
in the data set is typically characterized by its zero order entropy (Shannon, 1948)

H0 = − Σ_{i=1}^{Lv} Pi log2 Pi    (6.4.6)

where Pi is the probability of the ith quantization level.
An analysis of SAR raw data from the NASA DC-8 airborne system indicates
that H0 ≈ 6-7 bits/sample. Thus, assuming 8 bit quantization, the maximum
achievable compression factor is on the order of 1.2. An analysis of this sort
must take into account that the SAR data is stationary only over a small time
and space interval, and therefore the entropy of the data depends on the local
target characteristics. Furthermore, when characterizing the SAR data, care
must be taken to ensure that the radar system is not limiting the data dynamic
range prior to the ADC.
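A zero order entropy estimate of the kind quoted for the DC-8 data can be sketched as follows; the test streams are synthetic and purely illustrative.

```python
import math
from collections import Counter

def zero_order_entropy(samples):
    # H0 = -sum(P_i * log2(P_i)) over the sample histogram, Eqn. (6.4.6).
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

uniform = list(range(256)) * 4          # flat 8-bit histogram
peaked = [0] * 992 + list(range(32))    # low-entropy stream
h_flat, h_peak = zero_order_entropy(uniform), zero_order_entropy(peaked)
print(h_flat, 8.0 / h_flat, h_peak)     # flat stream: 8 bits/sample, no gain
```

The achievable lossless compression factor is nb/H0, which is why a 6-7 bit/sample entropy caps the gain near 1.2 for 8 bit data.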
Assuming that a 20% savings could be achieved in the downlink data rate
without loss of information, data compression could provide a substantial
improvement in the radar system performance (wider swath, more bits per
sample, etc.). However, realistically there is no lossless data channel, since bit
errors from the transmission will always degrade the data. In fact, most lossless
compression algorithms result in the data being more susceptible to bit errors,
effectively increasing the BER for a given link performance. To offset this factor,
error protection codes must be applied to the data before transmission. Since
the overhead for error protection is typically 20% or more, a real savings in
the downlink data rate is not achieved.
Several studies have been performed using lossy compression to reduce the
downlink data rate. They conclude that the vector quantization algorithm
exhibits good performance (Reed et al., 1988). Compression factors as high as
10: 1 have been claimed, but to date a full error analysis has not been performed
to quantitatively assess the actual impact on image quality.
6.4.4 Block Floating Point Quantization
where nb, mb, and nt are the number of bits required to represent the original
data sample, the BFPQ data sample, and the threshold, respectively. The
instantaneous dynamic range of the BFPQ data is that of an mb bit uniform
quantizer. However, its adjustable dynamic range is that of the original nb bit
quantizer. Thus, the BFPQ will preserve the full information content of the
input data stream if the dynamic range of the original data within any lr × la
data block does not exceed the dynamic range of the mb bit quantizer.
The assumption in the BFPQ design is that, within a given block of data,
the signal intensity with high probability does not exceed some prescribed
dynamic range. Thus, selection of the block size is essential to proper
performance of the BFPQ. The factors to be considered in selection of the block
size are:
1. The block must be large enough to ensure Gaussian statistics for the data
set used in estimating each threshold. Due to the speckle noise a minimum
of 50 to 100 samples is required.
2. The block should be small in range relative to the variation in signal power
due to antenna pattern modulation and range attenuation. The design
should allow a maximum variation of only 1-2 dB from these effects.
3. The block should also be small in range relative to the number of samples
in the pulse; and small in azimuth relative to the synthetic aperture length.
Typically the data is approximately stationary over 1/4 to 1/2 of the pulse
and synthetic aperture lengths.
Figure 6.22 Functional block diagram of the block floating point quantizer (BFPQ): (a) SAR
data system with a BFPQ; (b) Design of the SIR-C BFPQ with nb = 8, mb = 4, nt = 5.
The BFPQ algorithm divides the digitized SAR video data into blocks, where
the sample variance within a block is small compared to the variance across
blocks within the data set. The variance of the samples within each block is
estimated to determine the optimum quantizer, which minimizes the distortion
error for that block. In effect, the BFPQ operates as a set of quantizers with
different gain settings. The problem of quantizing for minimum distortion given
a certain probability density p( x) was first addressed by Max ( 1960 ). He showed
that, given a Gaussian distributed input, using a minimum mean square error
criterion, a uniform quantizer is optimum. Assuming Gaussian statistics within
the data block, the BFPQ algorithm is as follows:
1. For each input data block the standard deviation σ is calculated. Typically
this is implemented by calculating the mean of the absolute value of each
sample and relating this to σ by
|x|‾ = Σ_{i=1}^{Lv} (|xi| + 0.5) (√(2π) σ)⁻¹ ∫_{xi}^{xi+1} exp(−x²/2σ²) dx    (6.4.8)

where Lv is the number of quantization levels and the xi are the normalized
quantizer transition points.
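In the Gaussian limit the relation behind Eqn. (6.4.8) collapses to E|x| = σ√(2/π), so σ can be estimated as √(π/2) times the mean absolute value; a minimal sketch:

```python
import math
import random

def sigma_from_mean_abs(samples):
    # For zero-mean Gaussian data, E|x| = sigma * sqrt(2/pi).
    mean_abs = sum(abs(x) for x in samples) / len(samples)
    return math.sqrt(math.pi / 2.0) * mean_abs

random.seed(0)
data = [random.gauss(0.0, 2.0) for _ in range(100_000)]
print(sigma_from_mean_abs(data))   # close to the true sigma of 2.0
```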
Figure 6.23 BFPQ design used for the Magellan SAR with nb = 8, mb = 2, nt = 8 (Courtesy of
H. Nussbaum). The 8 bit ADC output addresses a look-up table; the threshold, determined in
the previous burst from the mean absolute value, selects the sign and magnitude bits.
2. Each sample in the block is scaled by the estimated standard deviation for
that block and the result compared to the optimum quantization levels for
an mb bit quantizer with σ = 1.
3. The resulting mb bit word and the estimated threshold (which is an nt bit
quantized value of |x|) are downlinked.
4. The BFPQ decoder in the ground receiver determines the correct multiplier
(gain) from the quantized threshold and reconstructs a floating point
estimate of the original data sample.
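Steps 1-4 can be sketched as follows; the transition point (0.9816σ) and reconstruction levels (±0.52σ, ±1.72σ) are the Magellan values of Tables 6.4 and 6.5, and the sign/magnitude bit ordering is an assumption for illustration.

```python
import math

def bfpq_encode(block):
    # Steps 1-2: estimate sigma from the mean absolute value, then compare
    # each sample against the normalized 2 bit transition point.
    sigma = math.sqrt(math.pi / 2.0) * sum(abs(x) for x in block) / len(block)
    codes = [(1 if x >= 0 else 0, 1 if abs(x) >= 0.9816 * sigma else 0)
             for x in block]
    return sigma, codes

def bfpq_decode(sigma, codes):
    # Step 4: rebuild floating point estimates from the downlinked threshold.
    levels = {1: 1.72, 0: 0.52}
    return [(1 if s else -1) * levels[m] * sigma for s, m in codes]

block = [0.1, -2.0, 3.0, -0.2]
sigma, codes = bfpq_encode(block)
recon = bfpq_decode(sigma, codes)
print(codes, recon)
```

In an actual implementation the encode step is a table look-up addressed by the sample and the threshold, as in Figs. 6.22 and 6.23, rather than an arithmetic comparison per sample.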
Example 6.3 Consider the BFPQ design used in the Magellan spacecraft
mapping Venus (Kwok and Johnson, 1989). Due to the small mass and power
budgets available on a deep space probe, such as Magellan, the peak downlink
data rate is constrained to approximately 270 kbps. Additionally, the data link
is available only 50 % of the time since the SAR and the communications system
share the high gain antenna. To achieve the prime mission objective of mapping
the entire planet within one year at 150 m resolution, some type of data
compression was required.
A BFPQ of (8,2) was adopted (i.e., nb = 8 bits, mb = 2 bits). The analog
video signal data is quantized to values between −128 and 128, while the block
size used for the estimate of each threshold is set at lr = 16 range samples and
la = 8 azimuth pulses. The system, shown in Fig. 6.23, is designed such that the
estimated threshold value is applied to a following data block. The standard
deviation is estimated by the absolute sum method given in Eqn. (6.4.8). The
input data is normalized by this value and quantized according to the uniform
quantizer levels given in Table 6.4.
Table 6.4 Magellan BFPQ quantizer characteristic

Input Sample              BFPQ Output
0.9816σ ≤ x               1 1
0 ≤ x < 0.9816σ           1 0
−0.9816σ < x < 0          0 1
x ≤ −0.9816σ              0 0

The look-up table given by Eqn. (6.4.8) is precalculated and stored. Thus, the
8 bit input sample and the 8 bit threshold address a 2 bit output sample from
the look-up table according to Table 6.4. The ground reconstruction simply
inverts this process, and a gain function calculated from the threshold is used
to reconstruct the original data stream according to Table 6.5. The performance
curves for the Magellan design are shown in Fig. 6.24. Note that the (8,2)
BFPQ SNR distortion curve is essentially a set of 2 bit SNR curves spaced
across the dynamic range of the 8 bit curve.

Table 6.5 Magellan BFPQ decoder characteristic

Decoder Input             Reconstructed Value
1 1                       1.72σ
1 0                       0.52σ
0 1                       −0.52σ
0 0                       −1.72σ

It is important to note that with the Magellan BFPQ we can never achieve
a better signal to distortion noise ratio (SDNR) than is given by the peak value
for the 2 bit quantizer. However, we can maintain that performance over a
wider range of input values (approaching that of the 8 bit quantizer) using the
BFPQ technique. The effect of the distortion noise incurred by using the 2 bit
quantization depends on the relative level of other noise sources in the system.
For most system designs, the distortion noise should be small relative to the
thermal noise. This is based on the radiometric calibration requirements
(Chapter 7), which assume the thermal noise power is known and can be
subtracted from the total received power to derive the backscattered energy.
Since the distortion noise is nonlinear and cannot be subtracted, it must be
small relative to the thermal noise or very small (< −18 dB) relative to the
signal power for calibrated imagery.

Figure 6.24 Distortion noise as a function of input power (standard deviation in dB) for the
Magellan BFPQ, showing the 8 bit quantizer curve and the (8,2) BFPQ curve.

6.5

The design of the SAR system is generally dependent on the application for
which it is intended. Typically, specifications are provided to the design engineer
by the end data user, such as: (1) Resolution; (2) Incidence angle; (3) Swath
width; (4) Wavelength; (5) Polarization; (6) Calibration accuracy; (7) SNR,
and so on. Additional constraints are imposed by the available platform
resources and mission design (e.g., launch vehicle): (1) Payload mass, power,
and dimensions; (2) Platform altitude; (3) Ephemeris/attitude determination
accuracy; (4) Attitude control; (5) Downlink data rate, and so on. Given these
inputs the system specifications are determined: (1) System gains (losses); (2)
Rms amplitude error versus frequency; (3) Rms phase error versus frequency;
(4) Receiver noise figure; (5) System stability (gain/phase versus time/
temperature), and so on. The final design is the result of an iterative procedure,
balancing performance characteristics among subsystems to achieve the optimal
design. The following example is presented to illustrate these trade-offs.

Example 6.4 Assume that the measurable range of target backscatter coefficients
(i.e., the noise equivalent σ⁰) and the wavelength, λ, are specified by the scientist.
Furthermore, assume the mass and power budgets are constrained by the launch
vehicle such that:

1. Maximum antenna area (A), and therefore the antenna gain (G), are limited
by the mass;
2. Maximum radiated power (Pt) is limited by the available dc power and
system losses (Ls);
3. Minimum noise temperature (Tn) is determined by the earth temperature
(≈ 300 K) and the receiver noise figure;
4. Slant range (R) is determined by the imaging geometry and the platform
altitude.

The designer has little flexibility to meet the SNR requirements given these
system constraints. Consider the single pulse radar equation for distributed
targets (Section 2.8)

SNR = Pt G² λ³ σ⁰ c τp / (2(4π)³ R³ La k Tn Bn Ls sin η)    (6.5.1)
where η is the incidence angle and Bn is the noise bandwidth. The system
parameters available for enhancing the SNR are:
1. Increase the pulse duration, τp, at the cost of increased average power
consumption;
2. Decrease the antenna length, La, while increasing the width, Wa, to keep the
area constant to maintain the constraint in Eqn. (6.3.9). This will reduce the
swath width and increase the average power consumption due to the higher
PRF required;
3. Reduce system losses by improving the antenna feed system (waveguide) or
by inserting T/R modules into the feed to improve the system gain; again
at the cost of increased power consumption.
Note that all of the options considered to improve the SNR require an increase
in the available power. If additional power is not available then the designer
must request a modification in the given requirements. Lowering the altitude
will produce a significant increase in SNR due to the R³ factor, but will decrease
the swath width. The 3 dB swath is approximately

Wg ≈ λR / (Wa cos η)    (6.5.2)

where Wa is the antenna width. The effect of reducing R on the swath could be
compensated by reducing Wa and increasing La to keep the antenna area and
swath constant. A small drop in SNR and a coarsening of the azimuth resolution
Δx ≈ La/2 would result, but the net effect would be a significant increase in SNR.
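The R³ and swath trades can be put in numbers; the parameter values (λ = 0.24 m, R = 850 km, Wa = 2 m, η = 23°, roughly Seasat-like) are assumptions for illustration, and the swath expression is the beam-limited form of Eqn. (6.5.2).

```python
import math

def snr_gain_db(r_old, r_new):
    # Distributed-target SNR scales as 1/R^3 (Eqn. 6.5.1).
    return 30.0 * math.log10(r_old / r_new)

def swath(lam, R, Wa, eta):
    # Beam-limited ground swath, lambda*R/(Wa*cos(eta)) (Eqn. 6.5.2).
    return lam * R / (Wa * math.cos(eta))

g = snr_gain_db(850e3, 425e3)                      # halving R buys ~9 dB
s1 = swath(0.24, 850e3, 2.0, math.radians(23.0))
s2 = swath(0.24, 425e3, 1.0, math.radians(23.0))   # halved Wa restores swath
print(round(g, 1), round(s1 / 1e3, 1), round(s2 / 1e3, 1))
```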
The design procedure illustrated in the above example is intended to
demonstrate the interrelationship between user performance specifications, radar
system parameters, and platform resources. No single algorithm can be defined
that will optimize the design across the wide range of applications, since the
priority ordering of the system performance parameters depends on the data
utilization. At best, the final design will be a compromise between the available
resources and the user's needs.
6.5.1 Ambiguity Analysis

A key element in the radar system design is the antenna subsystem. As we have
discussed in Section 2.2.1, the antenna gain is proportional to its area.
Additionally, its dimensions in range (Wa) and azimuth (La) approximately
determine the 3 dB beam width (assuming no amplitude weighting) by

θH = λ/La    Azimuth    (6.5.3a)
θV = λ/Wa    Range    (6.5.3b)

These in turn affect the resolution in azimuth and the available swath width in
range. The shape of the antenna beam, specifically its sidelobe characteristics,
is also key to the performance of the radar system. The discussion in
Example 6.4 considers only the signal to thermal noise requirements of the
system. An additional noise factor, ambiguity noise, is also an important
consideration, especially for a spaceborne SAR. Equations (6.3.7) and (6.3.8)
presented rough guidelines for determining the antenna dimensions. These
bounds are based on the criteria that the 3 dB mainlobe of the antenna pattern
does not overlap in time for consecutive echoes, and that the azimuth 3 dB
Doppler spectrum is less than the PRF. Obviously, these constraints are very
approximate and may not meet the required signal to ambiguity noise ratios.

The azimuth ambiguities arise from finite sampling of the Doppler spectrum
at intervals of the PRF (Fig. 6.25). Since the spectrum repeats at PRF intervals,
the signal components outside this frequency interval fold back into the main
part of the spectrum. Similarly, in the range dimension (Fig. 6.26), echoes from
preceding and succeeding pulses can arrive back at the antenna simultaneously
with the desired return. For a given range and azimuth antenna pattern, the
PRF must be selected such that the total ambiguity noise contribution is very
small relative to the signal (i.e., −18 to −20 dB). Alternatively, given a PRF or
range of PRFs, the antenna dimensions and/or weighting (to lower the sidelobe
energy) must be such that the signal-to-ambiguity noise specification is met.

Figure 6.25 Azimuth spectral power versus Doppler frequency, illustrating the repetition of the
spectrum at intervals of the PRF fp.

The ambiguous signal power at some Doppler frequency f0 and some time
delay τ0 can be expressed as (Bayman and McInnes, 1975)

Sa(f0, τ0) = Σ_{m,n=−∞; (m,n)≠(0,0)}^{∞} G²(f0 + m fp, τ0 + n/fp) σ⁰(f0 + m fp, τ0 + n/fp)    (6.5.4)
where m and n are integers, G²(f, τ) is the two-way far field antenna power
pattern, and σ⁰ is the radar reflectivity. The integrated ambiguity to signal ratio
is therefore given by

ASR(τ) = [ Σ_{m,n=−∞; (m,n)≠(0,0)}^{∞} ∫_{−Bp/2}^{Bp/2} G²(f + m fp, τ + n/fp) σ⁰(f + m fp, τ + n/fp) df ] /
         [ ∫_{−Bp/2}^{Bp/2} G²(f, τ) σ⁰(f, τ) df ]    (6.5.5)
Figure 6.26 Range ambiguity noise: echo energy versus time, showing the interpulse period
Tp = 1/fp and the ambiguous regions contributed by preceding and succeeding pulses.
where Bp is the azimuth spectral bandwidth of the processor. Note that the
ASR is written as a function of τ, or equivalently the cross-track position in
the image. Since the system ambiguity specifications typically refer to the
integrated azimuth ambiguity and the peak range ambiguity (which depends
on cross-track position), the expression in Eqn. (6.5.5) is not very useful for
design engineers. It requires both the two dimensional antenna pattern and the
target reflectivity to be formulated in terms of the Doppler frequency and time
delay. Additional relations are required to derive these quantities from the
measured data. Typically, antenna patterns are given as a function of
off-boresight angles and σ⁰ is given as a function of local incidence angle. For
design purposes it is more useful to rewrite Eqn. (6.5.5) separating the azimuth
and range ambiguity components. In the following two sections we will analyze
the effects of each type of ambiguity separately.
Azimuth Ambiguity

AASR ≈ [ Σ_{m=−∞; m≠0}^{∞} ∫_{−Bp/2}^{Bp/2} G²(f + m fp) df ] / [ ∫_{−Bp/2}^{Bp/2} G²(f) df ]    (6.5.6)
where we have assumed that the target reflectivity is uniform for each azimuth
pattern cut (including sidelobes) at each time interval dr within the record
window. Additionally, we have assumed that the azi muth antenna pattern at
each elevation angle within the mainlobe is similar in shape and that the coupling
between range and azimuth ambiguities is negligible. These assumptions are
generally valid for most SAR systems. The AASR as given by Eqn. (6.5.6) is
typically specified to be on the order of - 20 dB. However, even at this value
ambiguous signals can be observed in images that have very bright targets
adjacent to dark targets. As previously described, SAR imagery can have an
extremely wide dynamic range due to the correlation compression gain for point
targets. Thus, even with a 20 dB suppression of the ambiguous signals, a value
Figure 6.27 Seasat image of New Orleans, LA (Rev. 788) illustrating azimuth ambiguities.
There are two factors that cause the effect of azimuth ambiguities to be more
severe as the wavelength is reduced. The first is demonstrated by Eqn. (6.5.9),
in that the range dispersion is proportional to λ²; at shorter wavelengths the
ambiguous energy will be more focussed and the peak AASR increased. A
second factor is the effect of undetected platform pointing errors. The azimuth
beamwidth (for a given azimuth antenna dimension) varies inversely with the
radar frequency. Therefore, the Doppler shift as a function of pointing error
increases linearly with frequency. The standard deviation of the Doppler centroid
estimation error for a given pointing error can be written as
σfd = (2 Vst cos θs / λ) σθs    (6.5.10)

where θs is the squint angle, σθs is the standard deviation of the squint angle
error, and Vst is the relative sensor-to-target speed. From Eqn. (6.5.6) a Doppler
centroid estimation error would result in the processing bandwidth (Bp) being
offset from the mainlobe of the azimuth spectra. Since the ambiguous signal
energy is higher at the edges of the mainlobe than it is in the center (see
Fig. 6.25), an increase in the AASR results.
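The effect of PRF on the azimuth ambiguity level can be evaluated numerically from Eqn. (6.5.6); the uniform-aperture two-way pattern sinc⁴(La f/2V) and the parameter values here are assumptions for illustration.

```python
import math

def sinc4(x):
    # Two-way azimuth power pattern of a uniformly illuminated aperture.
    if x == 0.0:
        return 1.0
    s = math.sin(math.pi * x) / (math.pi * x)
    return s ** 4

def aasr(fp, Bp, V=7.5e3, La=10.0, m_max=5, n=2000):
    # Eqn. (6.5.6): PRF-displaced copies of the pattern folded into the
    # processing band Bp, over the unaliased energy (midpoint rule; the
    # frequency step cancels in the ratio).
    df = Bp / n
    f = [-Bp / 2 + (i + 0.5) * df for i in range(n)]
    g2 = lambda x: sinc4(La * x / (2 * V))
    sig = sum(g2(fi) for fi in f)
    amb = sum(g2(fi + m * fp) for fi in f
              for m in range(-m_max, m_max + 1) if m != 0)
    return amb / sig

bd = 2 * 7.5e3 / 10.0          # Doppler bandwidth, 1500 Hz
a1, a2 = aasr(1.1 * bd, bd), aasr(1.6 * bd, bd)
print(10 * math.log10(a1), 10 * math.log10(a2))   # AASR falls as PRF rises
```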
For cases where the squint angle determination uncertainty becomes so large
that

σfd > fp/2    (6.5.11)

the Doppler centroid estimate is ambiguous by an integer multiple m of the
PRF. The ambiguous targets are displaced from their true positions by
approximately ΔxAz = 23 km in azimuth and ΔxRA = 0.2 km in range.
Because the ambiguous targets are significantly displaced from their true
locations, the range migration correction applied to the signal data at the
ambiguous target location is offset from the true value, resulting in blurring in
the image.
One technique for resolving this ambiguity exploits the differential range
displacement between two azimuth looks,

ΔR = m λ fp Δs / 2    (6.5.15)

where ΔR is the range displacement in meters and Δs is the time separation
between the centers of the two looks. Since Δs, λ, and fp are known, m can be
determined by a range cross-correlation of the two single-look images. Note
that, in the absence of edges or point-like targets in the images, the correlation
peak-to-mean ratio is quite small due to the speckle noise in the single look
images.

A limiting factor in the performance of this ambiguity resolving technique
arises from the fact that ΔR is proportional to both λ and Δs, which are inversely
proportional to frequency. For X-SAR, with m = 10 we get ΔR ≈ 20 meters.
At a complex sampling frequency of fs = 22.5 MHz this represents an offset of
approximately 3 pixels. Since these are single-look pixels, the speckle noise
makes it nearly impossible to exactly determine m.

Range Ambiguity

Range ambiguities result from preceding and succeeding pulse echoes arriving
at the antenna simultaneously with the desired return. This type of noise is
typically not significant for airborne SAR data, since the spread of the echo is
very small relative to the interpulse period. As the altitude of the platform, and
therefore the slant range from sensor to target, increases, the beam limited swath
width increases according to Eqn. (6.5.2).

For spaceborne radars, where several interpulse periods (Tp = 1/fp) elapse
between transmission and reception of a pulse, the range ambiguities can become
significant. The source of range-ambiguous returns is illustrated in Fig. 6.26.
For PRFs satisfying the relation

Tp > 2λR tan η / (c Wa)

range ambiguities do not arise from the mainlobe of the adjacent pulses.
Typically this is considered an upper bound on the PRF. To derive the exact
value of the range ambiguity to signal ratio (RASR), consider that, at a given
time ti within the data record window, ambiguous signals arrive from ranges of

Rij = (c/2)(ti + j/fp),    j = ±1, ±2, ..., ±nh    (6.5.16)
where j, the pulse number (j = 0 for the desired pulse), is positive for preceding
interfering pulses and negative for succeeding ones. The value j = nh is the
number of pulses to the horizon. To determine the contribution from each
ambiguous pulse, the incidence angle and the backscatter coefficient must be
determined for each pulse (j) in each time interval ( i) of the data record window.
Assuming a smooth spherical model for the earth, the incidence angle ηij at
some point i within the data record window (corresponding to a range delay
ti) and some pulse j is given by (Fig. 8.1)

sin ηij = (Rs/Rt) sin γij    (6.5.18)

The target distance is Rt = |Rt|, Rs = |Rs| is the sensor distance from the
earth's center, and γij is the antenna boresight angle corresponding to ηij. This
boresight angle can be written in terms of the slant range Rij as follows

γij = cos⁻¹[(Rs² + Rij² − Rt²) / (2 Rs Rij)]    (6.5.19)
In this formulation we have ignored any refractive effects of the atmosphere.
Typically, this is a good approximation for earth imaging, except at grazing
angles (i.e., j approaching nh). Additionally, when imaging through dense
RASR = Σ_{i=1}^{N} Sai / Σ_{i=1}^{N} Si    (6.5.20)

where Sai and Si are respectively the range ambiguous and desired signal powers
(at the receiver output) in the ith time interval of the data recording window,
and N is the total number of time intervals. From the radar equation,
Eqn. (6.5.1), only the parameters that do not cancel in the ratio of Eqn. (6.5.20)
need be considered. Thus

Sai = Σ_{j=−nh; j≠0}^{nh} σ⁰ij G²ij / (Rij³ sin ηij)    for j ≠ 0    (6.5.21)

Si = σ⁰i0 G²i0 / (Ri0³ sin ηi0)    for j = 0    (6.5.22)

where σ⁰ij is the normalized backscatter coefficient at a given ηij and Gij is the
cross-track antenna pattern at a given γij. The exact dependence of σ⁰ on η is
a function of target type, radar parameters, and environmental conditions (see
Chapter 1). The antenna pattern dependence on γij is a function of the
illumination taper across the array. For a uniformly illuminated aperture, the
far-field pattern is given by Eqn. (2.2.30)

Gij = sinc²[π Wa sin(φij)/λ]    (6.5.23)

where the off-boresight angle is

φij = γij − γ    (6.5.24)

and γ is the antenna boresight angle.

Figure 6.29 Plot of SIR-B performance (in noise equivalent σ⁰) versus cross-track position for
γ = 55°. The system noise floor is dominated by range ambiguities; the sharp spike in the noise
floor is the nadir return. Surface backscatter follows Muhleman's law; the thermal noise equivalent
σ⁰ and the swath limits are indicated.
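Eqns. (6.5.18)-(6.5.24) can be combined into a single-sample RASR sketch; the orbit geometry (Rs = 7128 km, Rt = 6378 km), the antenna values, and the uniform σ⁰ are assumptions for illustration.

```python
import math

C = 3.0e8   # speed of light, m/s

def rasr_sample(t0, fp, gamma_b, Rs=7128e3, Rt=6378e3, Wa=2.0, lam=0.24,
                j_max=3):
    # Single-sample range ambiguity-to-signal ratio in the spirit of
    # Eqns. (6.5.18)-(6.5.24): uniform sigma0, sinc^2 elevation pattern.
    def term(j):
        R = (C / 2.0) * (t0 + j / fp)     # ambiguous slant range, Eqn. (6.5.16)
        carg = (Rs**2 + R**2 - Rt**2) / (2.0 * Rs * R)
        if abs(carg) > 1.0:
            return 0.0                    # pulse does not intersect the earth
        gamma = math.acos(carg)           # boresight angle, Eqn. (6.5.19)
        sarg = (Rs / Rt) * math.sin(gamma)
        if sarg >= 1.0:
            return 0.0                    # beyond the horizon
        eta = math.asin(sarg)             # incidence angle, Eqn. (6.5.18)
        phi = gamma - gamma_b             # off-boresight angle, Eqn. (6.5.24)
        x = Wa * math.sin(phi) / lam
        g = 1.0 if x == 0.0 else (math.sin(math.pi * x) / (math.pi * x)) ** 2
        return g ** 2 / (R ** 3 * math.sin(eta))   # Eqns. (6.5.21)-(6.5.22)
    ambiguous = sum(term(j) for j in range(-j_max, j_max + 1) if j != 0)
    return ambiguous / term(0)

t0 = 2 * 805e3 / C             # echo delay of the desired (boresight) range
r = rasr_sample(t0, 1500.0, math.radians(20.0))
print(10 * math.log10(r))      # well below 0 dB for this geometry
```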
In practice, the PRF and data window position are chosen such that the
location of the ambiguous pulses and the nadir returns are outside the data
recording window. The SNR performance is shown in Figure 6.29 for a typical
set of SIR-B parameters.
6.5.2 PRF Selection
The set of values that the above listed radar parameters (PRF, DWP, etc.) can
assume is constrained by a number of other factors. This is especially true in
the case of the PRF. As we have shown in the preceding discussions on azimuth
and range ambiguities, the AASR and RASR are both highly dependent on the
selection of PRF. A low value of PRF increases the azimuth ambiguity level
due to increased aliasing of the azimuth spectra. On the other hand, a high
PRF value will reduce the interpulse period and result in overlap between the
received pulses in time. The PRF selection is further constrained for a SAR
system that has a single antenna for both transmit and receive. The transmit
event must be interspersed with the data reception for a spaceborne system
since, at any given time, there are a number of pulses in the air. Additionally,
the PRF must be selected such that the nadir return from succeeding pulses is
excluded from the data window. The transmit interference restriction on the
PRF can be written as follows
Frac(2R1 fp/c)/fp > τp + τRP    (6.5.26a)

Frac(2RN fp/c)/fp < 1/fp − τRP    (6.5.26b)

and

Int(2R1 fp/c) = Int(2RN fp/c)    (6.5.26c)

where R1 is the slant range to the first data sample (i.e., j = 0, i = 1), RN is the
slant range to the last (Nth) data sample in the recording window, τp is the
transmit pulse duration, and τRP is the receiver protect window extension about
the transmit event. The functions Frac and Int extract the fractional and the integer portions
of their arguments, respectively. These relationships are illustrated in the timing
diagram, Fig. 6.30.

The nadir interference restriction on the PRF can be written as follows:

2H/c + j/fp > 2RN/c,    j = 0, 1, 2, ..., nh    (6.5.27a)

or

2H/c + 2τp + j/fp < 2R1/c,    j = 0, 1, 2, ..., nh    (6.5.27b)

where H ≈ Rs − Rt is the sensor altitude above the surface nadir point. We
have assumed in the above analysis that the duration of the nadir return is 2τp.
The actual nadir return duration will depend on the characteristics of the terrain.
For rough terrain the significant nadir return could be shorter or longer than
2τp. An example of the excluded zones defined by Eqn. (6.5.26) and Eqn. (6.5.27)
is given in Fig. 6.31.
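The constraints of Eqns. (6.5.26) and (6.5.27) amount to a feasibility test on each candidate PRF; the geometry below (R1 = 850 km, RN = 870 km, H = 800 km) is an assumption for illustration.

```python
import math

C = 3.0e8   # speed of light, m/s

def prf_feasible(fp, R1, RN, H, tau_p, tau_rp, nh=10):
    x1, xN = 2 * R1 * fp / C, 2 * RN * fp / C
    # Transmit interference, Eqn. (6.5.26): the whole echo window must fall
    # inside one interpulse period, clear of the pulse and protect windows.
    if math.floor(x1) != math.floor(xN):
        return False
    if (x1 % 1.0) / fp <= tau_p + tau_rp:
        return False
    if (xN % 1.0) / fp >= 1.0 / fp - tau_rp:
        return False
    # Nadir interference, Eqn. (6.5.27): each nadir return (duration ~2*tau_p)
    # must lie entirely outside the data window [2R1/c, 2RN/c].
    for j in range(nh + 1):
        start = 2 * H / C + j / fp
        if not (start > 2 * RN / C or start + 2 * tau_p < 2 * R1 / C):
            return False
    return True

ok = prf_feasible(1500.0, 850e3, 870e3, 800e3, 30e-6, 10e-6)
print(ok)   # True
```

Sweeping such a test over a PRF range produces exactly the excluded zones plotted in Fig. 6.31.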
Figure 6.30 Timing diagram illustrating the constraints on PRF selection: (a) Transmit
interference; (b) Nadir interference.

Figure 6.31 Plot of PRF against γ for SIR-B illustrating excluded zones as a result of transmit
and nadir interference (PRF axis 1000-1700 Hz; SIR-A, SIR-B, and Seasat operating points
indicated).

6.6 SUMMARY
This chapter presented an overview of the SAR
instrument and its major assemblies. This was followed by a discussion of the
spacecraft bus and data downlink subsystem.
The SAR sensor subsystem consists of four major assemblies: (1) Timing
and control; (2) RF electronics; (3) Digital electronics; and (4) Antenna. Their
performance can be analyzed in terms of a linear distortion model. Quantitative
relationships between the linear system errors and the resultant impulse response
function were given. Additionally, the non-linear performance characteristics of
the SAR were described in terms of the signal to distortion noise ratio.
The platform and data downlink subsystem is often a limiting factor in the
SAR performance, in that the available data rates, power, and mass may be
insufficient to accommodate the instrument. To reduce the data rate, the system
performance is often degraded. Alternatively, a data compression technique,
block floating point quantization, can be employed. This concept was described
in detail with an example of the Magellan SAR design.
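The block floating point quantization mentioned above can be sketched as follows. This is a toy coder with an arbitrary block size and an unencoded shared scale; it illustrates the principle (one exponent per block, short mantissas per sample), not the actual Magellan design.

```python
def bfpq_encode(block, n_bits=4):
    """Toy block floating point quantizer: one shared scale per block plus
    an n_bits signed-integer mantissa per sample (illustrative only)."""
    peak = max(abs(x) for x in block) or 1.0
    q_max = 2 ** (n_bits - 1) - 1
    # Shared scale chosen so the peak sample just fits the mantissa range.
    scale = peak / q_max
    mantissas = [max(-q_max - 1, min(q_max, round(x / scale))) for x in block]
    return scale, mantissas

def bfpq_decode(scale, mantissas):
    """Reconstruct samples from the shared scale and the mantissas."""
    return [m * scale for m in mantissas]
```

The compression comes from transmitting the scale once per block while each sample costs only `n_bits`; the quantization error per sample is bounded by half the shared scale.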
The chapter concluded with a discussion of various aspects of the SAR system
design. A detailed treatment of ambiguities was presented with examples from
the Seasat and SIR-B systems. The limitations of nadir and transmit interference
were also presented as another factor in the PRF selection.
The intent of this chapter was to introduce the various error sources that
result from the sensor and data downlink. These errors to some degree can be
compensated in the signal processor by adjusting the matched filter reference
function. However, some component of the sensor and data link errors will be
passed through to the final image product. An understanding of the sources
and characteristics of these errors is essential for proper design of the ground
data system and interpretation of the SAR imagery.
7
RADIOMETRIC CALIBRATION OF SAR DATA
Historically, SAR image data has been used for a variety of applications (e.g.,
cartography, geologic structural mapping) for which qualitative analyses of the
image products were sufficient to extract the desired information. However, to
fully exploit the available information contained in the SAR data, quantitative
analysis of the target backscatter characteristics is required. In general, any
scientific application which involves a comparative study of radar reflectivities
requires some level of radiometric calibration. Typically, these comparisons are
performed spatially across an image frame or temporally from pass to pass in
multiple frames. However, comparisons may also be made across radar systems
(e.g., Seasat and SIR-B), or across frequencies or polarization channels with
the same system (e.g., L-HH and C-VV).
Ideally, all data products generated by the SAR correlator are absolutely
calibrated such that an image pixel intensity is directly expressed in terms of
the mean surface backscatter coefficient. This requires the signal processor to
adaptively compensate for all spatial and time dependent variations in the radar
system transfer characteristic. This procedure, referred to as radiometric
correction or compensation, establishes a common basis for all image pixels,
such that a given pixel intensity value represents a unique value of backscattered
signal power, independent of its location within the data set. For absolute
calibration, a constant scale factor is required that compensates for the overall
system gain (including the ground processor), in addition to an estimate of the
noise power to determine the relative contribution of the thermal noise in the
recorded signal. Absolute calibration is essential for comparison of multisensor
data as well as for validation of the measured backscattered signal characteristics
using scattering models.
In this chapter, we will introduce a set of definitions for the basic calibration
terms as well as image calibration performance parameters. From this basis,
we will discuss various system calibration procedures required to produce the
measurements needed for radiometric correction. We will describe the internal
(radar system) and external (ground) devices used to insert a known, deterministic
signal into the radar data stream for characterization of the system transfer
function. Finally, and perhaps most important, we describe the ground data
system procedures for measuring these calibration signals and correcting the
image data such that the output products are routinely calibrated.
7.1
DEFINITION OF TERMS
General Terms
terms of its ability to measure the amplitude and phase of the backscattered
signal. This calibration process generally consists of injecting a set of known
signals into the data stream at various points and measuring the system response,
either before or after passing through the signal processor. We distinguish
calibration from system test, in the sense that calibration is performed as part
of the normal system operation, while testing is only performed prior to or
following the normal operations.
The calibration process can be divided into two general categories:
(1) Internal calibration; and (2) External calibration. Internal calibration is the
process of characterizing the radar system performance using calibration signals
injected into the radar data stream by built-in devices (e.g., calibration tone,
chirp replica). External calibration is the process of characterizing the system
performance using calibration signals originating from, or scattered by, ground
targets. These ground targets can be either point targets with known radar
cross section (e.g., corner reflectors, transponders), or distributed targets with
known scattering characteristics (e.g., σ⁰).
The calibration process is distinguished from verification in that verification
is the intercomparison of measurements from two (or more) independent sensors
with similar characteristics. The consistency between independent sensors of
the measurements of the same target area under similar conditions can be used
to verify the calibration performance specifications of each instrument. Instrument
validation refers to the comparison of geophysical parameters, as derived from
some scattering model, to known geophysical parameter values (e.g., surface
roughness) as determined from ground truth measurements. The validation
process assumes that reliable models are available to derive the geophysical
parameters from the σ⁰ values. Otherwise, the instrument measurement errors
cannot be separated from the model uncertainty.
7.1.2
For a multi-polarization SAR, both the relative amplitude and the relative phase
stability must be specified to determine the cross-channel calibration performance.
The polarization channel balance is the uncertainty in the estimate of backscatter
coefficient ratio between coincident pixels from two coherent data channels.
Similarly, the polarization phase calibration is the uncertainty in the estimate of
the relative phase between coincident pixels from two coherent data channels.
The phase uncertainty should include both the mean (rms) value and the
standard deviation about the mean, since the second order statistics of the phase
error can contribute significantly to uncertainty in the target polarization
signature (Freeman et al., 1988). These polarization parameters should be
specified for each radar channel combination.
For a multifrequency SAR both the relative and absolute cross-frequency
calibration must be specified for each cross-channel combination. The absolute
cross-frequency calibration is defined as the uncertainty (precision) in the estimate
of the backscatter coefficient ratio between two pixels (or image areas), either
simultaneous or time separated, from frequency diverse radar channels. The
relative cross-frequency calibration is the uncertainty in the estimate of the
cross-frequency ratio of relative backscatter coefficients between two image
pixels or homogeneous target areas. Phase calibration is not meaningful across
Parameter Characteristics
7.2
Sensor Subsystem
Included in our discussion of the sensor subsystem are the effects of the
atmospheric propagation errors, as well as those of the radar antenna and the
sensor electronics.
Atmospheric Propagation
The propagation of both the transmitted and reflected waves through the
atmosphere (in which we include the ionosphere) can result in significant
modification in the electromagnetic wave parameters. The key atmospheric
effects are: ( 1) Attenuation of the signal (amplitude scintillation); (2) Propagation
(group) delay; and (3) Rotation of the polarized wave (Faraday rotation). These
effects are typically localized in both time and space and are therefore extremely
difficult to calibrate operationally.
Amplitude scintillation does not occur naturally above 1 GHz, except along
a band of latitudes centered on the geomagnetic equator and within the polar
regions during periods of peak sunspot activity, which occur in 11 year cycles
(Aarons, 1982). The peak in 1990 nearly coincides with the launch of the ESA
ERS-1; however, the effects will be small for this system, which is a C-band SAR
(λ = 5.6 cm), since the perturbation strength is proportional to wavelength
squared. An analysis for the Seasat SAR (λ = 23.4 cm), which was launched
just prior to the peak sunspot activity in 1979, concluded that fully 15 % of the
nighttime Seasat images would show significant degradation. However, an
evaluation of the processed image data does not support this analysis (Rino,
1984). At higher frequencies (above 10 GHz), attenuation from water vapor
absorption could also affect the SAR measurement accuracy (Chapter 1).
Group delay is also an ionospheric effect that is most severe for low frequency
(≤ 1 GHz), high altitude (> 500 km), polar orbiting SARs. An uncompensated
group delay will degrade the SAR performance in two ways. First, the slant
range estimate will be offset according to error in the propagation velocity
(Chapter 8). A second effect is pulse distortion, which results in spreading of
the pulse (i.e., the ionosphere behaves like a linear dispersive delay line
(Fig. 7.2)). An EM wave propagating through a median ionosphere typically
experiences a two-way group delay of 50 to 100 ns, increasing to as high as
500 ns during peak sunspot periods, with a nominal pulse dispersion of less
than 1 % for Seasat-like parameters (Brookner, 1973).
Faraday rotation is the effect of the ionosphere on a linearly polarized wave,
producing a rotation in the wave orientation angle. The amount of rotation is
directly related to the ionospheric dispersion resulting from the earth's magnetic
field. It is inversely proportional to the radar carrier frequency squared. At
frequencies above 1 GHz the rotation is small under most ionospheric conditions
and can be neglected.
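Since the rotation scales as the inverse square of the carrier frequency, a measurement (or model value) at one frequency can be scaled to another. The reference value below is illustrative, not a value from the text.

```python
def faraday_rotation(f_hz, omega_ref_deg, f_ref_hz):
    """Scale a reference Faraday rotation angle (degrees) at f_ref_hz to
    another carrier frequency, using the 1/f^2 dependence noted above."""
    return omega_ref_deg * (f_ref_hz / f_hz) ** 2
```

For example, a rotation of several tens of degrees at L-band shrinks by a factor of about 17 at C-band (5.3 GHz vs. 1.275 GHz), which is why the effect can usually be neglected above 1 GHz except at the lowest SAR frequencies.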
An example where the atmospheric effects are significant is the Magellan
SAR designed to map Venus (Chapter 1). The Venusian atmosphere is more
Figure 7.2 Ionospheric pulse dispersion for a short pulse with a Gaussian envelope. The results are for grazing angles during severe ionospheric conditions for two-way propagation of a 1 GHz wave. The pulse attenuation is given by Ψ. (Brookner, 1985.)
dense than that of the earth. The highly elliptical orbit of Magellan results in
both very shallow and very steep incidence angles over the orbital period. The
result is that the long propagation path through the dense atmosphere causes
significant attenuation and refraction of the EM wave, altering the incident
surface geometry and in some cases the orientation of the wave.
Antenna
The SAR antenna can be a major source of calibration error. There are several
factors that limit the antenna subsystem calibration. First, to achieve the required
SNR, a large antenna gain is required and therefore a large physical aperture
area. Spaceborne antenna systems are typically over 10 m in the azimuth
dimension. To maintain pattern coherence, the structure must be rigid such
that its rms distortion is less than λ/8. Considering the spaceborne environment,
both zero gravity unloading and the large variation in temperature will cause
distortion in the phased array. This distortion can result in gain reduction,
mainlobe broadening, and increased sidelobe levels.
A second key factor limiting the antenna calibration is that the characteristics
of the antenna cannot be easily measured using internal calibration devices. As
we will discuss in a later section, most internal calibration systems bypass the
antenna subsystem and inject known reference signals directly into the radar
receiver electronics. In general, the only method to calibrate the antenna in
flight is by use of external calibration targets. However, this approach limits
monitoring of the antenna performance to certain discrete places within the
orbit. Any intra-orbital variation in this subsystem performance cannot be well
characterized.
A final consideration for the antenna is specifically for the case of an active
array (Fig. 7.3). An active array has phase shifters and transmit/receive (T/R)
modules inserted in the feed system to improve the system SNR and provide
electronic beam steering. Typically, hundreds of active devices are used in such
a design. This presents a difficult problem in characterizing the performance of
each device, which may degrade or fail during the mission lifetime.
Antenna calibration implies precise characterization of the gain and phase
transfer characteristic across the system bandwidth as a function of off-boresight
angle. Additionally, the cross-polarization isolation is an important factor, not
only in the mainlobe of the antenna pattern but also in the sidelobe regions
that are aliased back into the mainlobe by the PRF sampling (Blanchard and
Lukert, 1985).
Sensor Electronics
The sensor electronics, which include both the RF and digital assemblies, are
typically well characterized by internal calibration devices. The system
performance, which is given in terms of the rms phase and amplitude errors
across the system bandwidth, can vary as a result of component aging or thermal
variation. The internal calibration loops employ either coded pulse replicas or
calibration tones to determine the system response function.
A key element in determining the overall system calibration accuracy and the
image quality is the sensor platform. A stable platform with precise attitude
and orbit determination capability is a necessity for the generation of calibrated
data products. Uncertainty in the sensor position and velocity primarily affects
the geometric calibration, degrading the target location accuracy and the
geometric fidelity of the image. This will be discussed in more detail in Chapter 8.
The platform attitude variables, in conjunction with its ephemeris, are key
parameters for determination of the echo data Doppler parameters. Even with
parameter estimation routines, such as clutterlock and autofocus, the initial
predicts must be sufficiently accurate for the estimates to converge properly. It
should be noted that these Doppler parameter estimation techniques are target
dependent, thus the convergence accuracy, and therefore the system performance,
depend on the surface characteristics. It is preferable to have attitude sensors
capable of measuring the sensor attitude to within one tenth of a beamwidth
in azimuth and several hundredths of a beamwidth in elevation.
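The attitude requirement can be connected to the Doppler centroid: a small along-track pointing error Δθ shifts the centroid by roughly (2V/λ)Δθ. The sketch below uses this standard approximation with illustrative Seasat-like numbers (not values from the text); measuring attitude to a tenth of the λ/L_a azimuth beamwidth holds the shift to about a tenth of the azimuth Doppler bandwidth.

```python
def centroid_shift(v, wavelength, dtheta):
    """Approximate Doppler centroid shift (Hz) from a small along-track
    pointing error dtheta (rad): df ~ (2V / lambda) * dtheta."""
    return 2.0 * v / wavelength * dtheta

# Illustrative, Seasat-like values (assumptions, not from the text):
v, lam, La = 7500.0, 0.235, 10.6        # velocity (m/s), wavelength (m), antenna length (m)
dtheta = 0.1 * lam / La                 # one tenth of the ~lambda/La azimuth beamwidth
shift = centroid_shift(v, lam, dtheta)  # resulting centroid shift, Hz
doppler_bw = 2.0 * v / La               # ~ azimuth Doppler bandwidth, Hz
```

Algebraically the ratio shift/doppler_bw is exactly the fraction of a beamwidth to which the attitude is known, independent of the radar parameters.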
The platform control is an important factor determining the quality of the
SAR image products. A large attitude rate, if not tracked by the SAR azimuth
reference function, will degrade the image quality by reducing the SNR within
the processing bandwidth. For block processing in azimuth, the Doppler
centroid varies as a function of time over the synthetic aperture length, which
results in the processing bandwidth being properly centered at only one point
within the block. The calibration error bias can be corrected, if the attitude rate
is known, by adjusting the processor gain for each block according to the signal
loss.
Random errors caused by the data downlink have little effect on the
radiometric calibration for distributed targets. A severe bit error rate (i.e.,
> 10⁻³) can degrade the impulse response function and therefore affect the
external calibration accuracy if point targets are used. If an entire echo line of
data were lost in the Level 0 (telemetry data) processing, the internal fidelity
of the data set would be degraded. The effect is most severe for multichannel
systems such as an interferometer or a polarimeter, where the loss of a line of
echo data in one channel will cause a relative channel-to-channel phase error.
7.2.3
SAR Correlator
The SAR correlator (Level lA processor) forms the image products from the
digitized video signal data by convolving the raw data with a two-dimensional
matched filter reference function (Chapter 4). The reference function coefficients
are derived from the Doppler characteristics of the echo data. Typically, the
SAR correlator processing algorithm approximates the exact matched filter
function with two one-dimensional filters. Additionally, in the frequency domain
fast convolution algorithm, the Doppler parameters are assumed constant within
a processing block. For large squint angles and large attitude rates, these
approximations are inadequate, producing matched filtering errors. The result
is an increased azimuth ambiguity level, loss of SNR, degraded geometric
resolution, and geometric distortion (image skew). The accuracy of the matched
filtering is especially critical when external calibration targets are used to derive
the sensor induced errors, since the sensor and processor errors cannot be
separated to identify the error source. As we will discuss in more detail in
Section 7.6.1, a technique has recently been developed to minimize the effect of
matched filtering errors on calibration performance (Gray, 1990). However, as
described above, these errors will still affect the image quality (impulse response
function) characteristics.
Post-Processor
7.3
The process of radiometrically calibrating the SAR image data can be reduced
to estimation of the bias and scale factors that relate the backscatter coefficient
to the image data number (DN). Assuming the system is linear, we can write
the receiver output power as

P_r = P_s + P_n    (7.3.1)

where P_r is the total received power, P_s is the signal power, and P_n is the
additive (thermal) noise power. Ignoring the effects of ambiguities, the signal
power is related to the mean radar cross section σ̄ by

P_s = K'(R)σ̄    (7.3.2)

The surface reflectivity is modeled (7.3.3) as a collection of scatterers spaced at
intervals equal to the unprocessed resolution cell size. The amplitude A(x, y)
is modeled as a Rayleigh distributed, stationary process, while the phase
ψ(x, y) is uniformly distributed and stationary. The expected radar cross section
is therefore

σ̄ = E_{x,y}{σ(x, y)} = σ⁰ Δx ΔR_g    (7.3.4)

where E_{x,y} is the expectation over x and y and Δx, ΔR_g are the azimuth and
ground range resolution cell sizes of the unprocessed raw video signal (the beam
footprint).

Substituting Eqn. (7.3.4) and Eqn. (7.3.2) into Eqn. (7.3.1), we can write the
mean received power for a homogeneous target as

P̄_r = K(R)σ⁰ + P̄_n    (7.3.5)

where P̄_n is the mean noise power over some block of data samples used in the
estimation of σ⁰.

If we ignore the effects of system quantization and saturation noise, the mean
received power for a homogeneous target is related to the digitized video signal
by

P̄_r = (1/M²) Σ_{i,j} |n_{d,ij}|²

where n_{d,ij} is the complex data number of the (i, j) digitized sample and M²
is the number of samples averaged. From Eqn. (7.3.5) we can write

σ⁰ = (P̄_r − P̄_n)/K(R)    (7.3.6)

where

K(R) = K'(R) Δx ΔR_g    (7.3.7)

Thus, if the scale factor K(R) and the mean noise power P̄_n can be estimated
over a small area (M × M samples) of the data set, then the mean backscatter
coefficient σ⁰ can be determined from Eqn. (7.3.6).
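The estimator of Eqn. (7.3.6) can be sketched directly. The function below is a minimal illustration, assuming the block of complex data numbers, the scale factor K(R), and the mean noise power are already in hand.

```python
def estimate_sigma0(block, K_R, P_n):
    """Estimate the mean backscatter coefficient over a block of complex
    data numbers via Eqn. (7.3.6): sigma0 = (mean power - P_n) / K(R)."""
    samples = [z for row in block for z in row]
    # Mean received power: sum of |n_d|^2 over the M x M block (Eqn. 7.3.5 ff).
    P_r = sum(abs(z) ** 2 for z in samples) / len(samples)
    return (P_r - P_n) / K_R
```

In practice the block must be large enough that the speckle-induced variance of the mean power estimate is acceptable, as discussed below.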
In general, Pn and K(R) will be both frequency and time dependent given
the radar component aging, thermal stress, and platform motion. However, the
frequency dependence is significant only in terms of the processor matched
filter error characteristics. For a point target, these errors will be expressed in
terms of mainlobe broadening and increased sidelobe energy in the point target
response function. For a distributed target, the processor matched filtering
integrates the frequency response, thus the shape of this response is not
significant, since only the integrated power affects the radiometric calibration.
In general, the noise power and scale factor should be written as functions of
time P_n(t) and K(R, t), and can only be considered constant over a small block
of data.
Since the calibration correction parameters vary with time, the estimates of
these parameters cannot be extrapolated over a large area. Additionally, there
is a large uncertainty in the a 0 estimate if M is only a few pixels. This is due
to the inherent speckle noise in the data resulting from a large number of
independent scatterers within a single resolution cell (Section 2.3, 5.2). Since
the intensity of a one-look pixel (M = 1) obeys the exponential probability
distribution function Eqn. ( 5.2.9 ), this uncertainty is 3 dB. Stated differently,
there is about a 50% probability that the single-look pixel value lies outside
the a 0 3 dB range. The estimate of the noise power also must be derived
from a large number of pixels to determine the statistical mean. On an individual
pixel basis, the actual noise power may deviate significantly from the mean
noise estimate.
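The "about 50%" figure follows directly from the exponential intensity distribution and can be checked numerically (taking ±3 dB as the power ratio 10^0.3 ≈ 2):

```python
import math

def prob_outside_db(delta_db):
    """Probability that a one-look (exponentially distributed) intensity
    falls outside +/- delta_db of its mean; independent of the mean."""
    r = 10.0 ** (delta_db / 10.0)   # linear power ratio for delta_db
    # P(I < mu/r) + P(I > r*mu) for I ~ Exp(mean mu).
    return (1.0 - math.exp(-1.0 / r)) + math.exp(-r)
```

Evaluating at 3 dB gives roughly 0.53, i.e., slightly more than half of single-look pixels lie outside the ±3 dB band around the true mean, which is why many samples must be averaged.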
The variation in noise power over time primarily results from variation in
the radar receiver chain component gains. This drift can usually be measured
from receive-only noise measurements, when the transmitter is placed in a
standby mode and only the thermal noise is recorded. The changes over time
in thermal noise power can be monitored using internal calibration signals that
measure the overall receiver gain characteristic.
A formulation for the range dependent scale factor K(R) in terms of
measurable quantities can be derived from the radar equation, as we will show
in the next section. It is dependent on radar system parameters such as the
antenna gain pattern, the transmit power, and the sensor-to-target slant range.
Errors in the estimates of these system parameters will degrade our estimate
of K(R) and therefore the radiometric calibration.
To evaluate the sensitivity of σ⁰ to errors in the estimates of K(R) and P̄_n,
we take the partial derivative of Eqn. (7.3.6) with respect to each parameter.
The uncertainty in the estimate of σ⁰ for a given error in K(R) is

s_{σ,K} = −(σ⁰/K(R)) s_K    (7.3.8)

and for a given error in P̄_n is

s_{σ,P} = −s_P/K(R)    (7.3.9)

where s_σ, s_K, and s_P are the standard deviations of the estimates of σ⁰, K(R),
and P̄_n, respectively. We have assumed that the estimates of K(R) and P̄_n are
unbiased, i.e., E{K̂(R) − K(R)} = 0 and E{P̂_n − P̄_n} = 0, where E represents
the expectation and K̂(R) and P̂_n are the estimated values.

Combining Eqn. (7.3.8) and Eqn. (7.3.9) and rearranging terms, the fractional
uncertainty in the estimate of σ⁰ from errors in the noise power and the correction
factor is given by

(s_σ/σ⁰)² = (s_K/K(R))² + (s_P/(σ⁰K(R)))²    (7.3.10)

If we assume that the distribution of the estimation errors for each term is
Gaussian and uncorrelated, and if we further assume that the variances are
small, then the coefficient of variation of the K(R) estimation error is given by
the sum of the coefficients of variation of the individual parameters (Kasischke
and Fowler, 1989)

ε_K² = ε_{K1}² + ε_{K2}² + ⋯ + ε_{Kn}²    (7.3.11)

where the coefficient of variation, ε_x = s_x/x̄, is the ratio of the standard deviation
to the sample mean for the random variable x. Combining Eqn. (7.3.10) and
Eqn. (7.3.11) the error model becomes

ε_σ = [ε_{K1}² + ε_{K2}² + ⋯ + ε_{Kn}² + (s_P/(K(R)σ⁰))²]^(1/2)    (7.3.12)

where ε_σ = s_σ/σ⁰. Using the relationship in Eqn. (7.3.6), we get a final expression
for our error model as
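The root-sum-square error budget of Eqn. (7.3.12) can be sketched numerically; the list of K(R) error terms and the noise-power term are the caller's assumptions.

```python
import math

def sigma0_cv(eps_K_terms, s_P, K_R, sigma0):
    """Coefficient of variation of the sigma0 estimate per Eqn. (7.3.12):
    root-sum-square of the individual K(R) coefficients of variation plus
    the noise-power term s_P / (K(R) * sigma0)."""
    noise_term = s_P / (K_R * sigma0)
    return math.sqrt(sum(e * e for e in eps_K_terms) + noise_term ** 2)
```

For example, independent 30% and 40% contributors to the K(R) error combine to a 50% coefficient of variation when the noise term is negligible.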
7.4
Given the radar equation for a distributed target as defined in Eqn. (2.8.2), we
can write the receiver output signal power as

P_s = [P_t G_R G² λ² / ((4π)³ R⁴)] (σ⁰ Δx ΔR_g)    (7.4.1)

for a homogeneous scene, where we have assumed that the antenna is reciprocal
(i.e., G_t = G_r = G), P_t represents the radiated power, G_R is the overall receiver
gain, and Δx ΔR_g is the ground area of each precompression resolution cell.
(The point target radar equation would use the term σ, the radar cross section
of the point target, in parentheses in Eqn. (7.4.1).)
From Eqn. (7.3.2), Eqn. (7.3.4) and Eqn. (7.4.1), the range dependent scale
factor K(R) is given by

K(R) = P_t G_R G² λ² Δx ΔR_g / ((4π)³ R⁴)    (7.4.2)

with

Δx = λR/L_a    (7.4.3)

ΔR_g = cτ_p/(2 sin η)    (7.4.4)

where in turn τ_p is the pulse duration, L_a is the antenna length, and η is the
incidence angle.

Inserting Eqn. (7.4.3) and Eqn. (7.4.4) into Eqn. (7.4.2) and rearranging terms,
we get

K(R) = P_t G_R G² λ³ c τ_p / (128 π³ L_a R³ sin η)    (7.4.5)
In evaluating Eqn. (7.4.5), certain terms are known to high precision and can
be ignored in an analysis of the system calibration accuracy. These include: (1)
Wavelength, λ; (2) Pulse duration, τ_p; (3) Antenna length, L_a; (4) Slant range,
R; and (5) The constant term, c/128π³. Therefore, we can rewrite Eqn. (7.4.5) as

K(R) ∝ P_t G_R G² / sin η    (7.4.6)

The terms remaining to be estimated are then the transmitted power P_t, the
receiver and antenna gains, and the incidence angle η, which depends on the
platform roll angle. Additionally, the noise power term in Eqn. (7.3.5) must be
estimated.
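The scale factor of Eqn. (7.4.5) is straightforward to evaluate once the radar parameters are known; the sketch below is a direct transcription (the parameter values a caller supplies are, of course, mission specific).

```python
import math

C = 2.997925e8  # speed of light, m/s

def scale_factor(P_t, G_R, G, lam, tau_p, L_a, R, eta_rad):
    """Range-dependent scale factor K(R) of Eqn. (7.4.5), assuming a
    reciprocal antenna (G_t = G_r = G) and overall receiver gain G_R."""
    num = P_t * G_R * G ** 2 * lam ** 3 * C * tau_p
    den = 128.0 * math.pi ** 3 * L_a * R ** 3 * math.sin(eta_rad)
    return num / den
```

Note the R³ dependence: doubling the slant range reduces K(R) by a factor of eight, which is why the range-dependent correction matters across a wide swath.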
The calibration techniques to estimate these parameters are broken into
internal calibration and external calibration measures. The internal calibration
uses data from built-in calibration devices to measure primarily the transmitter
power output and the receiver gain. Typically these devices will only be used
to track the system drifts over time. External calibration techniques generally
use image data of calibration sites equipped with point targets of known
scattering properties, or images of distributed target sites with known σ⁰. These
data are used primarily for absolute gain and antenna pattern estimates. The
following section will describe each of these techniques in detail.
7.5
Perhaps the most difficult task in calibrating the SAR system is not in
collecting this set of calibration data, but rather in performing the calibration
analysis to derive the correction parameters. As shown in Figure 7.4, the final
stages in generating calibrated data products are: ( 1) Assembling the calibration
metadata and calibration site imagery into a database; (2) Performing analysis
of this data to derive the radiometric correction factors (i.e., the K(R) and P0
terms as functions of time); and (3) Incorporating this information into the
operational processing data flow to routinely generate calibrated data products.
This section addresses specifically the sensor calibration measurements and the
ground calibration site design. The following sections will address the calibration
processor design and data analysis in some detail.
7.5.1
Internal Calibration
The internal calibration measurements are only useful in conjunction with the
preflight system test results that define the relationship between these built-in
device measurements and the key system performance parameters. This is
especially true for a spaceborne SAR such as the E-ERS-1 SAR or the SIR-C.
For systems such as these, extensive testing of the RF electronics, digital
electronics, and the antenna are made over temperature and, when possible, in
a vacuum environment. Key system parameters such as: transmitter output
power, transmitter and receiver losses, receiver gain, antenna gain and pattern,
RF /digital electronics linearity and dynamic range, and phase/amplitude versus
frequency stability are measured as functions of temperature at each (unique)
radar gain and PRF setting. Proper placement of internal calibration devices,
such as temperature, current, and power meters, will permit determination of
the system performance as a function of variation in these parameters.
Obviously, this technique assumes that the variation in system performance
can be modeled as a function of these observable parameters. Furthermore, we
assume that these calibration devices are themselves accurately calibrated and
stable over time. In addition to these built-in test meters, most radar systems
perform in-flight RF test measurements using calibration loops. To illustrate
the two fundamental approaches to the RF internal calibration design we
consider as examples the ESA E-ERS-1 SAR and the NASA/JPL SIR-C designs.
Figure 7.5 Internal calibration loop design used by the ESA ERS-1 SAR. A similar design is employed by the X-SAR shuttle radar (Attema, 1988).
system. The high power amplifier (HPA) output is coupled into a bypass circuit
that has two possible paths. The calibration loop signal (RF replica) passes
through the entire receiver chain, bypassing only the antenna, while the pulse
(IF) replica loop additionally bypasses the entire RF stage of the receiver and
inserts a signal into the front end of the receiver IF stage.
The details of the calibrator block in Fig. 7.5 are shown in Fig. 7.6. The
calibration loop is used only during the turn-on and turn-off phases of the data
collection operation. The high power amplifier (HPA) output is coupled
( - 58 dB) into the calibrator bypass circuit and demodulated to an intermediate
frequency (123 MHz). The signal is then filtered, attenuated, and shifted back
to its original RF center frequency where it is coupled ( -44.5 dB) into the
front end of the receive chain prior to the low noise amplifier (LNA). An HPA
power out measurement is performed using a power meter. This measurement
is then sent to the control processor for incorporation into the downlink data
stream.
The pulse replica loop is used primarily during the data acquisition phase
of the operations. This loop injects a replica of the transmitted pulse into the
data stream during the quiet periods between pulse transmission and echo
reception. A delay line is used to properly insert this echo into the data stream
without interfering with the received signal. A command from the control
processor is used to set the signal level to be compatible with the selected IF
amplifier gain in the receive chain. The pulse replica loop injects this attenuated
signal into the receiver following the LNA at an intermediate frequency to ,
minimize the front-end noise contamination. It is impor\ant to note that tiu;
pulse replica loop cannot directly measure the system gain variation since the.
primary source of gain drift is the front end LNA.
,
The E-ERS-1 internal calibration loops will be used as follows to correct for
system errors (Corr, 1984). The relative change in transmitter output power
times the receiver gain variation is measured by the calibration loop during the
turn-on/off sequences. The gain at any time during the data acquisition period
is then estimated assuming a linear variation over the period. This is a reasonable
43dB
POWER
METERAGC
FROM
CONTROL
PROCESSOR
TO CONTROL
PROCESSOR
123.2MHz
x
X -
11 dB
j_
44.5 dB
TO
FROM
RECEIVER ------:;......;:~--------SAR ANTENNA
Figure 7.6
assumption since the period between turn-on and turn-off is relatively short
(nominally < 5 minutes). The pulse replica loop is primarily used to obtain the
relative gain and phase characteristics (minus the LNA) across the system
bandwidth. This transfer function estimate is then used to determine the exact
range pulse code for use in the ground signal processor. If the pulse code (e.g.,
chirp) generator is not stable (e.g., phase drift), then a frequent update in the
range compression function may be required for formation of the synthetic
aperture.
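The linear drift correction described above can be sketched numerically. This is an illustration only; the function name and gain values below are assumptions, not taken from the E-ERS-1 design.

```python
# Sketch of the linear gain-drift correction: the calibration loop measures
# the transmitter-power-times-receiver-gain product at turn-on and turn-off,
# and the gain at any intermediate time is interpolated linearly.

def interpolate_gain_db(g_on_db, g_off_db, t_on, t_off, t):
    """Estimate system gain (dB) at time t by linear interpolation
    between the turn-on and turn-off calibration loop measurements."""
    frac = (t - t_on) / (t_off - t_on)
    return g_on_db + frac * (g_off_db - g_on_db)

# Example: a hypothetical 0.4 dB drift over a 300 s (5 minute) data take.
g_mid = interpolate_gain_db(62.0, 62.4, 0.0, 300.0, 150.0)
```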
SIR-C Internal Calibration
(Figure 7.7 block diagram: a stable local oscillator (f_slo) and a stable calibrator
(f_cal) sit between the RF electronics and the digital electronics; forward and
reflected power monitors are attached to the transmitter and exciter.)
Figure 7.7 Internal calibration loop design used by the NASA/JPL SIR-B and SIR-C instruments.
in the receiver transfer characteristic. Prior to the data acquisition during the
turn-on phase of operation, the calibrator generates a tone spanning the full
dynamic range of the receiver. This continuous tone signal is injected into the
receiver data stream via a directional coupler. It scans across the passband,
dwelling at each frequency position for a fixed number of pulses. Typical numbers
for SIR-C would be a scan over 11 frequency positions, dwelling at each position
for 64 pulses (~0.05 s).
During the data acquisition phase, the tone is set in a fixed position in the
center of the system bandwidth at a power level more than 12 dB below the
expected signal power. The calibration tone (cal tone) signal power is set at this
low level to ensure that it does not contribute significantly to receiver saturation.
Details of the SIR-C calibration subsystem are shown in Fig. 7.8 (caltone
synthesizer, power leveler, regulated d.c. power, step attenuator, temperature
control circuit, and signal injection coupler). The caltone frequency is derived
from the stalo frequency f_slo, the sampling frequency f_s, and the PRF f_p.
It is selected such that the calibration tone falls into a discrete
FFT bin during the signal processing. The calibration output power is controlled
by a thermal compensation circuit to maintain less than 0.1 dB variation over
a range of operating temperatures. A step attenuator is used to adjust the caltone
signal power such that it is always 12-18 dB below the echo signal power. The
resulting caltone will be phase locked with the radar from pulse to pulse. This
permits coherent integration of consecutive echoes to effectively increase the
caltone power relative to the echo power for a precise measurement of receiver
gain.
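The bin-centering constraint described above can be sketched as follows. The sampling frequency and target offset used here are hypothetical, not SIR-C values.

```python
# Illustrative sketch: the caltone frequency must be an integer multiple of
# the FFT bin width (f_s / N) so that the tone falls in a discrete bin.

def caltone_bin_frequency(f_target, f_s, n_fft):
    """Snap a desired caltone offset frequency to the nearest FFT bin center
    for a length-n_fft transform at sampling frequency f_s."""
    bin_width = f_s / n_fft
    k = round(f_target / bin_width)     # nearest integer bin index
    return k, k * bin_width

# Example: 45 MHz sampling, 1024-point range FFT, tone near band center.
k, f_cal = caltone_bin_frequency(22.5e6, 45.0e6, 1024)
```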
The caltone is extracted from the data during signal processing by performing
an FFT on each echo line within a data block (e.g., 1024 samples by 512 lines).
Each transformed line is then summed coherently in the along-track direction.
For example, a 1024 sample range transform effects a 30 dB gain in the caltone
to receiver output power (P_s + P_n) ratio. This gain is achieved since the caltone
energy is confined to a single FFT bin, while the received signal energy is spread
across all 1024 bins. A phase coherent azimuth summation of 512 transformed
lines achieves an additional 27 dB gain in the caltone power level. However,
this is partially offset by the unfocussed SAR aperture gain, which is approximately
15 dB (35 lines) for a nominal SIR-C mode. Thus a caltone to signal ratio of
30 dB can be achieved from processing a 1024 by 512 block of data, assuming
the initial caltone to signal data ratio is set at -12 dB. The resulting caltone
estimation error (<0.01 dB) is small relative to the expected caltone power
drift (~0.1 dB). The results of a simulation using NASA/JPL DC-8 SAR data
(unfocussed aperture gain ~10 dB) are shown in Fig. 7.9 (Kim, 1989).
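The gain budget quoted above can be verified with a short calculation; a coherent sum over N bins or lines raises a single-bin tone by 10 log10(N) dB relative to noise-like energy spread across the bins.

```python
import math

# Back-of-envelope check of the caltone processing-gain budget in the text.

def coherent_gain_db(n):
    return 10.0 * math.log10(n)

range_gain = coherent_gain_db(1024)   # ~30 dB from the 1024-point range FFT
azimuth_gain = coherent_gain_db(512)  # ~27 dB from the 512-line azimuth sum
unfocussed_offset = 15.0              # dB, nominal SIR-C aperture gain (from text)

# Starting from a -12 dB caltone-to-signal ratio:
final_ratio = -12.0 + range_gain + azimuth_gain - unfocussed_offset  # ~30 dB
```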
The caltone gain estimate is used to normalize the data samples acquired
during a time interval around the processed block of data. Typically the signal
processing generates an image frame from each 15 s block of data. Caltone
estimates from the beginning and end of the data block are routinely produced
to verify system stability over the 15 s period. The raw digitized video data are
then normalized according to the estimated mean caltone power level after their
conversion to a floating point representation and after subtraction of the caltone.
The caltone subtraction can be performed in either the time domain or the
frequency domain, given estimates of both the caltone gain and phase. If zero
padding of the data is required to achieve the "power of two" FFT block in
the range correlator, then the caltone energy will be dispersed according to the
fraction of zero samples. This greatly complicates the frequency domain
estimation and subtraction procedures. In this case, the caltone subtraction is
most efficient in the time domain. The caltone scan sequence during the turn-on
and turn-off phases of the data collection measures the gain and phase variation
across the system bandwidth. These measurements can be used to adjust the
range reference function for optimum matched filtering during the signal
processing.
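The time-domain subtraction option described above can be sketched as follows, assuming the caltone amplitude and phase have already been estimated. This is an illustration, not the NASA/JPL processor implementation.

```python
import cmath

# Minimal sketch of time-domain caltone removal from complex video samples.

def subtract_caltone(samples, amp, phase, f_cal, f_s):
    """Subtract a complex tone of known amplitude and phase from the samples."""
    out = []
    for n, z in enumerate(samples):
        tone = amp * cmath.exp(1j * (2 * cmath.pi * f_cal * n / f_s + phase))
        out.append(z - tone)
    return out

# A pure caltone with correctly estimated parameters cancels to near zero.
tone_only = [0.5 * cmath.exp(1j * (2 * cmath.pi * 0.25 * n + 0.3)) for n in range(8)]
residual = subtract_caltone(tone_only, 0.5, 0.3, 0.25, 1.0)
```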
The caltone scheme described above has one distinct benefit in that it can
be used to measure the gain variation throughout the data take. However, its
shortcoming is that it does not measure transmitter output power. This can be
done using a power meter (Fig. 7.7). However, the precision of such a meter is
typically not adequate to meet the calibration accuracy requirements. Alternatively,
the transmitter performance can be characterized in terms of its output power
versus temperature characteristic. Generally, relative changes in the transmitter
output power over a short time period are highly correlated to its operating
temperature. In this scheme, the absolute measure of radiated power can only
be determined using external calibration devices such as a ground receiver.

Figure 7.9 Plot of 1024 bin range transform of NASA DC-8 SAR data with a caltone inserted
at bin 512. Note the built-in radar caltone is set out of band at bin 975 (Kim, 1989).
The internal calibrators described above are useful devices for measuring relative
system drift over short periods of time (minutes to hours). These drifts arise
primarily from thermal effects. It is important to note, however, that neither of
the techniques described above measures the antenna gain variation, which can
be the predominant error source. This is especially true in a spaceborne system,
which undergoes zero gravity unloading effects and large variations in
temperature. Generally, changes in the antenna gain and its radiation pattern
can only be measured in-flight by external calibration techniques, which will
be discussed further in the next section. This is because the desired pattern is
the far field pattern, which requires a calibrator at a distance of 2L_a²/λ from
the antenna (~4 km for the E-ERS-1 C-band antenna). Theoretically, this far field
pattern can be synthesized from precise near field gain and phase measurements,
but practically the required precision cannot be achieved in an operational
spaceborne environment.
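The far field distance quoted above can be checked numerically. The antenna length and wavelength below are assumed nominal E-ERS-1 C-band values, used here only for illustration.

```python
# Far-field (2 L^2 / lambda) distance check for the figure quoted in the text.

def far_field_distance(antenna_length_m, wavelength_m):
    """Conventional far-field boundary for an aperture of the given length."""
    return 2.0 * antenna_length_m**2 / wavelength_m

# Assumed values: 10 m antenna, 5.7 cm wavelength (C-band).
r_ff = far_field_distance(10.0, 0.057)   # ~3.5 km, on the order of 4 km
```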
For an active array, such as the SIR-C antenna, the problem is further
complicated since there are several hundred transmitter and receiver modules
on the backplane of the antenna. Since each of these devices has its own gain
and transfer characteristic, system calibration is an especially difficult task.
External calibration devices will be used extensively to measure the overall
performance of this system. However, to monitor short term variations, an
internal calibration scheme has been devised (Klein, 1990a). A simplified
schematic of this system is shown in Figure 7.10. The antenna performance
verification loop, termed the radio frequency built-in test equipment (RF BITE),
consists of a second antenna feed (BITE feed) system. When an RF frequency
modulated pulse is sent to the antenna for transmission via the regular feed
system, this signal is coupled into the BITE feed system via a meandering
coupling line. The signal power at the antenna feedthrough points (from the
backplane to the radiating elements) is collected in the BITE feed system and
coupled into the receiver chain for digitization and incorporation into the
downlink telemetry. Additionally, the T / R module LNAs can be characterized
using the same BITE feed system. This is done by injecting a calibration tone
into the BITE feed and coupling this signal into the receiver chain at the
feedthrough points. The caltone signals are then collected by the regular antenna
feed system, digitized, and incorporated into the downlink telemetry. The system
is designed such that, during the turn-on phase (or by ground command), each
LNA and HPA can be turned on individually, by panel, or by leaf (three panels)
to measure the performance of the active elements during system operations.
The utilization of the RF BITE measurements for calibration, however,
requires that the relative phase and gain over temperature of each coupler be
known to an accuracy such that the antenna pattern can be synthesized.
Additionally, for the RF BITE to be sensitive to system errors, each coupler
Figure 7.10 Simplified schematic of the SIR-C C-band antenna performance verification loop. A
similar loop is installed in the L-band antenna.
In the previous section, the key parameters affecting radar system calibration
were identified. These included the radiated power, the receiver gain, and the
antenna pattern, boresight gain, and angle. Generally, internal calibration loops
can be used to estimate relative changes in the transmitter power and receiver
gains as a result of temperature variation or component aging. Built-in test
meters can provide additional data on the sensor performance, but they are
subject to the same types of errors as the sensor itself. Measurement of the
antenna performance during in-flight operations is very difficult and is usually
not attempted. Instead, the antenna performance is characterized during
7.5.2 External Calibration
The use of ground targets with known scattering properties to derive the radar
system transfer function is referred to as external calibration. The advantage of
an external calibration procedure over internal calibration is that the end-to-end
system performance can be directly measured. Therefore, system parameters
which are difficult to measure, such as the antenna pattern, the boresight gain
and angle, and the signal propagation effects, can be characterized using external
calibration techniques. The shortcoming of this approach is that the calibration
sites are typically imaged infrequently. The result is an insufficient sampling of
the system transfer characteristic to measure either short term system instabilities
or platform motion effects. Operational calibration of any spaceborne SAR
system requires both external calibration to estimate the end-to-end system
performance (including the absolute gain) and internal calibration to monitor
the relative drift of the system between external calibration sites. The external
calibration techniques generally involve two types of target: (1) Point targets
or specular scatterers of known radar cross section (RCS); and (2) Distributed
targets of large homogeneous area with relatively stable, well characterized
scattering properties (e.g., a 0 ).
Point Target Calibration
Point targets are typically man made devices such as corner reflectors,
transponders, tone generators, and receivers. Each of these devices spans a
geometric area much less than a resolution cell, but exhibits a radar cross section
that is bright with respect to the total backscattered power from the surrounding
target area within the resolution cell.
To minimize calibration errors from the background area, the point target
RCS should be at least 20 dB larger than the total power scattered from the
SAR image resolution cell (i.e., σ⁰δxδR). There are a number of effects other
than the background power to be considered when deploying calibration targets.
The pointing angle of the device relative to the radar must be precisely measured
(e.g., an uncertainty < 1.0°), since generally the radar cross section is highly
dependent on orientation. An additional consideration is the contribution from
multipath. This occurs when either the transmitted or reflected signal scatters
off the local terrain or nearby structures and is received by the SAR antenna
simultaneously with the calibration target return. A final point is that the device
RCS should be characterized by measuring its scattering properties in a
controlled environment (e.g., anechoic chamber) over a range of temperatures
and viewing angles. The concern is that, for a passive device such as a corner
reflector, the RCS is very sensitive to distortions in the plates forming the sides
of the reflector. Fabrication errors or warping from thermal cycling could cause
a significant change relative to the theoretical RCS of the device.
Passive Calibration Devices. The most frequently used devices for SAR
calibration are corner reflectors. By far the most popular reflector is the
triangular trihedral design (Fig. 7.11). The triangular trihedral radar cross
section is given by (Ruck et al., 1970)

    σ = 4πa⁴/(3λ²)                                              (7.5.1)

where a is the length of one side. This design is preferred since it is relatively
stable for large radar cross sections and exhibits a large 3 dB beamwidth (~40°)
independent of wavelength and plate size.

Figure 7.11 Triangular trihedral corner reflector.

Figure 7.12 Relative radar cross section patterns as a function of angle relative to the axis of
symmetry; θ is the vertical elevation angle, φ is the horizontal angle (Robertson, 1947).

An example of the dependence of radar cross section and beamwidth on
pointing angle relative to the axis of symmetry is given in Fig. 7.12 (Robertson,
1947). This figure shows the response of a triangular trihedral (a = 0.6 m) to
a K-band radar (λ = 1.25 cm). The variation in RCS as a function of device
orientation is an important consideration if the device is to be deployed in a
permanent configuration and imaged as a target of opportunity during normal
operations. This approach was used for several of the Seasat corner reflectors
which were imaged from both ascending and descending passes over the
calibration site. These devices were oriented with the axis of symmetry
perpendicular to the surface. For Seasat at a 20° look angle this resulted in
only a few dB of lost RCS, but eliminated the need to re-orient the devices for
each pass. A summary of the RCS and beamwidth parameters for various
reflector designs is given in Table 7.1.
The construction of the reflector must be to an error tolerance that is small
relative to the radar wavelength. Typical specifications for surface irregularity
are for an rms variation less than 0.1λ, resulting in a 0.1 dB RCS loss; the plate
curvature should be less than 0.2λ for a 0.1 dB RCS loss; and the orthogonality
requires plate alignment of better than 0.2° in each axis for a 0.1 dB loss.
Assuming another ~0.2 dB uncertainty from pointing (orientation) of the device,
typical numbers for device accuracies are on the order of 0.5 dB. However,
additional calibration errors may result from uncertainty in estimating the
background backscatter or from multipath effects. For this reason it is desirable
to find a suitable location for deployment where these contributions are small
(i.e., < - 20 dB) relative to the RCS of the corner reflector.
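Eqn. (7.5.1) and the 20 dB point-target-to-clutter rule stated earlier can be combined in a short sizing check. The reflector size, wavelength, resolution cell, and background σ⁰ below are hypothetical values chosen for illustration.

```python
import math

# Sketch: size a triangular trihedral so its RCS clears the clutter in a
# resolution cell by the required margin.

def trihedral_rcs(a, wavelength):
    """Peak RCS (m^2) of a triangular trihedral with side length a, Eqn. (7.5.1)."""
    return 4.0 * math.pi * a**4 / (3.0 * wavelength**2)

def clutter_margin_db(rcs, sigma0, dx, dr):
    """Margin (dB) of the reflector RCS over the resolution cell clutter power."""
    return 10.0 * math.log10(rcs / (sigma0 * dx * dr))

# A 2.4 m trihedral at L-band (23.5 cm), 10 m x 10 m cell, sigma0 = -13 dB.
sigma = trihedral_rcs(2.4, 0.235)
margin = clutter_margin_db(sigma, 0.05, 10.0, 10.0)   # comfortably above 20 dB
```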
TABLE 7.1 RCS and Beamwidth Parameters for Various Reflector Designs

Reflector              Maximum RCS      3 dB Beamwidth
Sphere                 πa²              2π
Square plate           4πa⁴/λ²          0.44λ/a
Luneberg lens          4π³a⁴/λ²         ~40°
Triangular trihedral   4πa⁴/3λ²         ~40°
Square trihedral       12πa⁴/λ²         ~40°
and the antenna patterns, which are key parameters that cannot be measured
with internal calibration devices. The tone generators are used in pairs to
produce two continuous frequency tones offset by some fraction of the system
bandwidth at orthogonal polarizations. These devices are primarily used to
measure the cross-polarization isolation of the radar. A comprehensive ground
calibration site design typically would include all three device types.
TRANSPONDERS. A functional block diagram of a transponder is shown in
Fig. 7.13a (Brunfeldt and Ulaby, 1989). The peak radar cross section is given by

    σ_t = (λ²/4π) G_t G_r G_e                                   (7.5.2)

where G_t, G_r are the transmit and receive antenna gains and G_e is the net gain
of the transponder electronics. This design provides the flexibility to achieve
the desired RCS by selecting amplifiers with the required gain. The antenna
selection is driven primarily by cross-polarization isolation and beamwidth
requirements, with gain a secondary consideration. With a two-antenna design,
as pictured in Fig. 7.13b, the cross-coupling between antennas is an important
consideration, since this signal is amplified by the transponder gain. The required
cross-coupling performance (< -80 dB) is achieved by spatially separating the
antennas. Typically, standard gain horn or microstrip patch antennas are used.
However, if large cross-polarization isolation and low sidelobes are required, a
corrugated horn may be used.
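A numerical sketch of Eqn. (7.5.2) follows; the antenna and electronics gains are hypothetical, chosen only to show how the desired RCS is reached by selecting the amplifier gain.

```python
import math

# Sketch of Eqn. (7.5.2): transponder RCS from antenna and electronics gains.

def transponder_rcs(g_t_db, g_r_db, g_e_db, wavelength):
    """Peak RCS (m^2) of an active transponder, Eqn. (7.5.2)."""
    g_total = 10.0 ** ((g_t_db + g_r_db + g_e_db) / 10.0)
    return wavelength**2 * g_total / (4.0 * math.pi)

# Two assumed 15 dB horns and 60 dB of electronics gain at C-band (5.7 cm):
sigma_t = transponder_rcs(15.0, 15.0, 60.0, 0.057)   # a few times 1e5 m^2
```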
COMPACT GROUND RECEIVERS. The functional design of a ground calibration
receiver is shown in Fig. 7.14. Basically, these systems consist of a receive
antenna, an envelope detector circuit that can lock onto the radar PRF, a
digitizer, and a storage device. This system may be integrated with a transponder
for a dual-function device. Such devices are currently being produced in small
numbers by the University of Stuttgart (Freeman et al., 1990d). Ground receivers
can be used to directly measure the azimuth antenna pattern and to indirectly
measure the elevation pattern by deploying a number of receivers cross-track.
If the relative boresights of the SAR and receiver antennas are co-aligned, then
the peak SAR radiated power can be determined from

    EIRP = P_r (4πR)² / (G_r G_e λ²)                            (7.5.3)
where EIRP is the effective isotropic radiated power, R is the slant range, P_r
is the received power as measured from the digitized signal, and G_r, G_e are the
antenna and electronic gains of the receiver unit. The use of ground receivers
can be a highly accurate technique for measurement of the SAR antenna pattern,
since the forward radiated power is measured. This is a much stronger signal
than the reflected RCS or the background σ⁰. However, if the SAR antenna is
not reciprocal, then the receivers cannot determine the SAR receive antenna
pattern, since a ground receiver can only measure the overall SAR transmit
chain characteristic.

Figure 7.13 Active transponder design by Applied Microwave Corporation (Brunfeldt, 1984).

Figure 7.14 Ground calibration receiver design by the Institute for Navigational Studies (INS)
at the University of Stuttgart, Germany (Freeman, 1990d).

TONE GENERATORS. Tone generators typically consist of a linearly polarized
antenna and a signal generator, as shown in Fig. 7.15. These devices are used
in pairs, with each unit transmitting one of two orthogonal polarizations at a
frequency offset from the other by some fraction of the system bandwidth. The
cross-polarization isolation of the SAR receive antenna can be determined from
the raw signal data by

    χ_xp = G_r(f_c) / G_r(f_l)                                  (7.5.4)

where G_r(f_l) and G_r(f_c) are the SAR receive antenna like- and cross-polarized
gains, respectively. These signals, offset in frequency by f_c - f_l, will be shifted
by the one-way Doppler associated with the relative sensor to target position
for that range line. The quantity in Eqn. (7.5.4) can be measured in the ground
processor from a Fourier transform of each range line. Across the SAR azimuth
aperture, the received tone generator signal can migrate through several bins
in the FFT due to the Doppler shift. Thus, if azimuth summation of adjacent
range lines is required to reduce the signal estimation error, care should be taken
that the tone falls within a discrete FFT bin for each range line used.

Figure 7.15 Block diagram of continuous wave tone generators used to measure antenna
cross-polarization isolation.
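The EIRP recovery of Eqn. (7.5.3) can be sketched with a self-consistency check: forward-model the received power from a known EIRP, then invert it. All numerical values are hypothetical.

```python
import math

# Sketch of Eqn. (7.5.3): recovering the SAR EIRP from a ground receiver
# measurement of received power.

def eirp_watts(p_r, slant_range, g_r_db, g_e_db, wavelength):
    """EIRP = P_r (4 pi R)^2 / (G_r G_e lambda^2), Eqn. (7.5.3)."""
    g = 10.0 ** ((g_r_db + g_e_db) / 10.0)
    return p_r * (4.0 * math.pi * slant_range) ** 2 / (g * wavelength**2)

# Assumed values: C-band, 850 km slant range, 15 dB antenna, 30 dB electronics.
lam, R, g_r, g_e = 0.057, 850e3, 15.0, 30.0
eirp_true = 1.0e7
p_r = eirp_true * (10 ** ((g_r + g_e) / 10.0)) * lam**2 / (4 * math.pi * R) ** 2
eirp_est = eirp_watts(p_r, R, g_r, g_e, lam)   # recovers eirp_true
```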
L-band image of the Goldstone site acquired by the DC-8 SAR is shown in
Fig. 7.17. Since each reflector has been surveyed to determine its true location,
this image can be used to assess the scale and skew errors (Chapter 8) as well
as the absolute location error of the DC-8 system.
The elevation antenna pattern is determined by fitting the RCS measurements
from each device with a least squares error polynomial. Across the mainlobe
return, a quadratic fit is sufficient to characterize most antennas (Fig. 7.18).
The uncertainty in each estimate is given by the device errors (fabrication,
deployment, etc.), the uncertainty in the background contribution (i.e., σ⁰δxδR),
and the image measurement errors. Assuming these error sources are uncorrelated,
the pattern estimation error is given by

    s_p = [(s_CR² + s_BR² + s_M²)/M]^(1/2)                      (7.5.5)

where M is the number of devices used in the pattern estimate and s_CR, s_BR,
and s_M are the standard deviation of the device RCS estimate, the background
σ⁰ estimate, and the image measurement error, respectively.

Figure 7.16 Goldstone SAR calibration site layout, showing the deployed trihedral reflectors
(several sizes), L- and C-band PARCs, and tone generators near Goldstone Lake.

Figure 7.17 L-band total intensity image of Goldstone, California, acquired by the NASA/JPL
DC-8 SAR.

Figure 7.18 Fit of the two-way elevation antenna pattern to reflector RCS measurements with
tolerances.

The image measurement error as well as the background error can be
significantly reduced by using a technique proposed by Gray et al. (1990). Their
approach is to integrate the return power over a local area surrounding the
reflector, rather than to attempt to estimate the peak return. The total power
in an equivalent adjacent area is also estimated, and the difference between
these two powers is that attributed to the RCS of the reflector. Thus, the only
error in the estimation procedure is the variation in background σ⁰ between
the area containing the device and the reference area. This variation can be
minimized by selecting the calibration site such that the reflector is placed in
a large homogeneous backscatter region. The remaining error contributor is
that of the device itself, which can be mitigated by measuring the reflector (or
transponder) under controlled conditions such as in an anechoic chamber, or
on an antenna range.
The short term stability of the radar system can also be assessed by placing
a second group of devices at some distance down-track from the main calibration
site. These two calibration sites should be sufficiently close that the errors
associated with the platform attitude variation (e.g., roll angle errors) and
thermal variation can be neglected. The short-term stability performance
(short-term relative calibration) is an important measure for many scientific
analyses.

Figure 7.19 System gain characteristic illustrating the operating point for the calibration devices
(e.g., reflectors, transponders).
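The integration technique of Gray et al. (1990) described above can be sketched on a toy image; the array values and box sizes are illustrative only.

```python
# Sketch: integrate power over a window containing the reflector and subtract
# the power of an equal-sized background window; the difference is the power
# attributable to the reflector.

def integrated_rcs(image_power, target_box, background_box):
    """Difference of summed power between two equal-sized boxes.
    Each box is (row0, row1, col0, col1), half-open."""
    def box_sum(r0, r1, c0, c1):
        return sum(image_power[r][c] for r in range(r0, r1) for c in range(c0, c1))
    return box_sum(*target_box) - box_sum(*background_box)

# Toy image: uniform clutter of 1.0 with a point target of power 100 at (2, 2).
img = [[1.0] * 8 for _ in range(8)]
img[2][2] += 100.0
rcs_power = integrated_rcs(img, (0, 4, 0, 4), (4, 8, 4, 8))
```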
Eqn. (7.4.1 ), four parameters vary as functions of cross-track position within the
swath. They are:
( 1)
(2)
( 3)
(4)
Slant range, R
Ground range resolution, L\R/sin 'I
Elevation antenna pattern, G 2 ( </J)
Backscatter coefficient, u 0 ( 'I)
Both the look angle y and the incidence angle 'I can be written in terms of the
slant range, the platform ephemeris, and the platform attitude as given in
Eqn. (8.2.4) and Eqn. (8.2.5). Typically, the most important platform parameter
for calibration is the roll angle estimation error, which causes the antenna
pattern to be offset relative to its expected cross-track location. A plot of the
Seasat antenna pattern correction factor (roll= 0) as a function of slant range
(or equivalently cross-track pixel number in a slant range image) is shown in
~~~
To extract the antenna pattern from the range compressed signal data, the
received signal power variation due to u 0 , R, and sin 'I must first be estimated.
Typically the slant range, R, the range bandwidth, BR, and the platform position,
0
R., are well known. Additionally, for each of the main calibration sites, the u
versus η dependence is known, leaving just the elevation antenna pattern and
the roll angle as the key parameters to be estimated. It should be noted that
the total received power consists of both the signal power and the noise power.
Thus the noise power must be subtracted prior to performing any corrections
on the cross-track signal power. If the noise power is subtracted after range
compression, then the compression gain must be taken into account as described
in Section 7.6. In some cases, where the SNR is low, the thermal noise can
dominate the signal return power, resulting in a large antenna pattern estimation
error unless the noise power is known to a very high precision.
To reduce the effects of thermal noise, a large number of range compressed
(or range and azimuth compressed) lines can be incoherently added in the
along-track direction. The number of lines integrated must be short relative to
the rate of change of the roll angle. This technique was used by Moore (1988)
to estimate the SIR-B antenna pattern over the Amazon rain forest.
A similar echo tracker approach was implemented operationally in the SIR-B
correlator to estimate the roll angle prior to the antenna pattern correction
stage (Fig. 7.21). For each standard image frame, consisting of ~25 K range
lines, 1 K range compressed lines spaced throughout each 5 K block were
incoherently averaged, smoothed using a low pass filter, and fit with a least
square error (LSE) quadratic polynomial. The error function was weighted
according to the estimated SNR of each data sample. The peak of the estimated
pattern was extracted and averaged with estimates from the other four (5 K
line) image blocks to provide a single roll angle estimate for the image. As
expected, this technique worked well for regions of relatively low relief. In high
relief areas the LSE fit residuals were used to reject the estimate and revert to
attitude sensor measurements. A roll angle echo tracker technique was needed for
SIR-B because of the large uncertainty in the shuttle attitude determination.
The estimated (3σ) attitude sensor error was on the order of 1.5° in each axis
with drift rates as high as 0.03°/s (Johnson Space Center, 1988). Results using
this technique to measure the roll angle variation for SIR-B are shown in
Fig. 7.22 (Wall and Curlander, 1988).
The distributed target approach to antenna pattern and roll angle estimation
should not be considered as a replacement for the point target estimation
procedure. Rather, this technique should be treated as an approach (target of
opportunity) that can be used to fill gaps between the point target site estimates
for monitoring intra-orbital variation. Additionally, distributed targets can
measure performance over wide swath areas (e.g., the 100 km E-ERS-1 swath),
which is very costly using point target devices.
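The core of the echo tracker, fitting a quadratic to the averaged cross-track power profile and taking its peak as the boresight offset, can be sketched as follows. This is a simplified illustration of the idea, not the SIR-B correlator code, and omits the smoothing, SNR weighting, and residual tests described above.

```python
# Simplified echo tracker step: least-squares quadratic fit to a cross-track
# power profile; the peak location gives the apparent boresight (roll) offset.

def quadratic_peak(x, y):
    """Fit y ~ a x^2 + b x + c by least squares; return the peak at -b/(2a)."""
    # Power sums for the 3x3 normal equations (plain Python, no numpy).
    sx = [sum(v**k for v in x) for k in range(5)]
    sy = [sum(yv * xv**k for xv, yv in zip(x, y)) for k in range(3)]
    m = [[sx[4], sx[3], sx[2]], [sx[3], sx[2], sx[1]], [sx[2], sx[1], sx[0]]]
    r = [sy[2], sy[1], sy[0]]
    def det3(a):
        return (a[0][0]*(a[1][1]*a[2][2]-a[1][2]*a[2][1])
              - a[0][1]*(a[1][0]*a[2][2]-a[1][2]*a[2][0])
              + a[0][2]*(a[1][0]*a[2][1]-a[1][1]*a[2][0]))
    d = det3(m)
    def solve_col(i):          # Cramer's rule for coefficient i
        mm = [row[:] for row in m]
        for j in range(3):
            mm[j][i] = r[j]
        return det3(mm) / d
    a, b = solve_col(0), solve_col(1)
    return -b / (2.0 * a)

# Toy averaged profile peaked at 3.2 (e.g., degrees off nominal boresight).
xs = [i * 0.5 for i in range(16)]
ys = [-(v - 3.2) ** 2 + 10.0 for v in xs]
roll_offset = quadratic_peak(xs, ys)
```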
7.5.3 Polarimetric Calibration
Figure 7.21 Flowchart of the SIR-B echo tracker routine to estimate the platform roll angle.

Figure 7.22 Echo tracker roll angle estimate as a function of time for two SIR-B data segments.
Each estimate results from the integration of 1000 range lines.

The measured scattering matrix 𝒵 can be modeled as

    𝒵 = ℛ𝒮𝒯 + 𝒩                                                (7.5.6)

where ℛ and 𝒯 characterize the radar receive and transmit systems respectively,
𝒩 is the additive noise term, and the target scattering matrix is

    𝒮 = ( s_uu   s_uv )
        ( s_vu   s_vv )

For an ideal system, 𝒯 and ℛ could be characterized as identity matrices with
some complex scale factor. Polarimetric system errors can be modeled as
channel imbalance and cross-talk terms (Freeman et al., 1990a), i.e.,

    ℛ = A_r e^(jψ_r) ( 1     δ_1 )       𝒯 = A_t e^(jψ_t) ( 1     δ_3 )
                     ( δ_2   f_1 )                         ( δ_4   f_2 )    (7.5.7)
Inserting Eqn. (7.5.7) into Eqn. (7.5.6), we get an absolute phase term ψ_r + ψ_t,
which is not significant since it only represents the relative position of the
dominant scatterer within the resolution cell. The gain term A_r A_t represents
the common gain across all channels and is equivalent to the gain term in
Eqn. (7.3.1). This gain can be estimated from calibration site data as described
in the previous section. The cross-talk terms δ_1, δ_2, δ_3, and δ_4 represent
contamination resulting from the cross-polarized antenna pattern, as well as
poor isolation in the
transmitter switches and circulators. These terms can be directly measured using
polarization selective receivers and tone generators as described in the previous
section. The δ_1 and δ_2 terms are directly measurable from the raw signal data
by evaluating the ratio of like- and cross-polarized tone generator signals in
each H and V channel. Similarly, receivers with exceptionally good
cross-polarization isolation performance (> 40 dB), with antennas oriented for
like- and cross-polarized reception, can be used to estimate δ_3 and δ_4.
The channel imbalance terms f_1 and f_2 are generally complex numbers whose
amplitude and phase characteristics must be precisely known for many
polarimetric applications (Dubois et al., 1989). A reasonably good estimate of
the amplitude imbalance can be obtained from internal calibration procedures,
assuming the antenna H and V patterns are similar and the boresights are
aligned. However, the phase imbalance can only be estimated using external
targets, since the antenna contribution cannot be ignored. The relative gain and
phase of the channel imbalance terms f_1 and f_2 can also be estimated using
active devices such as transponders, where the scattering matrix of the target
can be controlled. It can be shown that three transponders with independent
scattering matrices, such as (Freeman et al., 1990a)
can be used to solve for all six error terms.

An alternative approach, using known characteristics of a distributed target scattering matrix in addition to passive corner reflectors, has been proposed by van Zyl (1990) and Klein (1990b). Given a target dominated by single-bounce surface scattering, the target imposes no cross-polarized term and the relative HH to VV phase is constant. Thus, assuming reciprocity (i.e., δ₁ = δ₄, δ₂ = δ₃, f₁ = f₂), these terms can be calibrated without the use of any point target calibration devices. To determine the channel amplitude imbalance, a corner reflector such as a triangular trihedral is required, whose scattering matrix is given by

$$S = A_{tr} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

where we have ignored errors in the device construction and deployment and $A_{tr} = \sqrt{\sigma_{tr}}$ is given by Eqn. (7.5.1). The relative channel phase imbalance can be estimated from a trihedral reflector or from a distributed target, assuming that the dominant scattering mechanism is a single bounce type scatter.

A limitation in the technique as presented by both van Zyl and Klein (other than the reciprocity assumption) is that the channel imbalance can only be estimated in a local area around the reflector. If the target scattering could be modeled such that the relative change in z_HH/z_VV were known as a function of incidence angle across the swath, then the amplitude balance as a function of cross track position could be estimated using a distributed target technique. The absolute value of z_HH/z_VV could then be determined using a single device or group of devices in a local area. In the NASA/JPL SAR processor for the DC-8 polarimetric system, the phase error between the H and V channels is routinely estimated using a distributed target (such as the ocean), and software has been distributed to the investigators to perform clutter calibration on their images using the approach proposed by van Zyl. It also should be noted that in the calibration of polarimetric data the cross-polarized terms z_HV, z_VH are averaged (after phase compensation) to obtain a single value (see Section 7.7). This approach is based on the fact that all natural targets are reciprocal, and therefore the difference between the cross-polarized terms is due only to system errors. A final point is that in all these techniques we have assumed the noise power to be negligible. For distributed target calibration techniques to be valid, the data should be averaged over a large number of independent samples to reduce the effective noise power, keeping in mind that the parameters to be estimated may be dependent on their spatial position, limiting the area over which the estimate can be performed.
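As a numerical illustration of the error model, the following sketch (assuming NumPy; all δ, f, gain, and phase values are hypothetical, not measured values from any system) distorts a trihedral scattering matrix with receive and transmit matrices of the form of Eqn. (7.5.7), then inverts them to recover it:

```python
import numpy as np

# Hypothetical cross-talk (delta) and channel imbalance (f) values.
d1, d2, d3, d4 = 0.05 - 0.02j, 0.03 + 0.01j, 0.04j, 0.02 + 0.0j
f1 = 1.10 * np.exp(1j * 0.20)
f2 = 0.95 * np.exp(-1j * 0.10)

# Receive and transmit system matrices in the form of Eqn. (7.5.7).
R = 1.5 * np.exp(1j * 0.3) * np.array([[1.0, d1], [d2, f1]])
T = 2.0 * np.exp(-1j * 0.5) * np.array([[1.0, d3], [d4, f2]])

# Trihedral corner reflector: s_HH = s_VV, no cross-polarized term.
S = np.eye(2, dtype=complex)

Z = R @ S @ T        # measured (distorted) scattering matrix, noise neglected

# With all six error terms known, calibration is a double matrix inversion.
S_hat = np.linalg.inv(R) @ Z @ np.linalg.inv(T)
print(np.allclose(S_hat, S))     # -> True
```

In practice, of course, the additive noise term and the uncertainty in the estimated δ and f values limit how exactly the inversion recovers S.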
In the SAR ground data system, the signal processing consists of a raw data correlation (Level 1A processing) to form the SAR image, followed by a post-processing stage (Level 1B processing) to perform the image radiometric and geometric corrections. The geometric correction algorithms will be addressed in Chapter 8. The remainder of this chapter will be used to describe the radiometric calibration processing, which involves analysis of the internal and external calibration data, generation of the calibration correction factors, and application of these corrections to the image data. The calibration processing data flow is shown in Fig. 7.23. There are three major ground data system elements. The calibration subsystem (CAL) is typically an off-line workstation tasked to perform analysis of the internal and external calibration data as well as the preflight test data. The catalog (CAT) is the data base management system responsible for archiving the calibration data, including preflight test data. The CAT is also responsible for reformatting the engineering telemetry data into time series records for each internal calibration device (e.g., P(tᵢ), i = 1, N). These data are then accessed by the CAL in conjunction with the calibration site imagery to derive the necessary radiometric correction parameters for the SAR correlator (COR). The corrections are precalculated and stored in the CAT for eventual access by the correlator during the image processing operations. Typically, the correction factors are also stored as time series (e.g., G(φ, tᵢ), i = 1, M), where the sampling frequency is dependent on the stability of the sensor and the calibration device used for the measurement.
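The time-series storage of correction factors implies a simple access pattern in the correlator: look up the archived samples and interpolate to the time of the image frame. A minimal sketch (assuming NumPy; the sample times and drift values are hypothetical):

```python
import numpy as np

# Hypothetical correction-factor time series, as archived in the CAT:
# sample times t_i (seconds from turn-on) and a slowly drifting gain
# correction (dB) derived from an internal calibration device.
t_i = np.array([0.0, 60.0, 120.0, 180.0, 240.0])
g_db = np.array([0.00, 0.12, 0.21, 0.27, 0.30])

def correction_at(t_frame):
    """Interpolate the archived correction to an image frame time."""
    return float(np.interp(t_frame, t_i, g_db))

# The correlator requests the correction at the frame center time.
print(round(correction_at(90.0), 3))   # -> 0.165
```

The required sampling interval of the stored series depends, as noted above, on how rapidly the sensor drifts between calibration measurements.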
Figure 7.23 Data flow diagram showing the transfer of calibration data between the correlator, the catalog and the calibration processor.
7.6.1 Calibration Processor
The calibration processor supports the system calibration during three phases
of operation:
1. Preflight test data analysis;
2. Calibration processing (i.e., correction factor generation/application);
3. Verification processing and performance analysis.
Each of these phases is described in the following subsections.
Preflight Test Data Analysis
The preflight test data analysis is used to derive the relationship between the
internal calibration device measurements and the radar performance parameters.
For example, the transmitter power output may depend uniquely on its baseplate
temperature. Preflight testing can establish the functional relationship between
the transmitter output power and the baseplate temperature sensors to provide
a means of indirectly calibrating the transmitter drift during operations.
Additionally, the stability of the sensor, which is established in preflight tests,
is used to determine the required sampling of the internal calibration data and
the number of external calibration sites.
The preflight testing is especially important for the SAR antenna characterization,
since its performance cannot be directly measured using internal calibration
devices. For the SIR-C active phased array antenna, the thermal sensors on the antenna backplane will be used to calibrate the T/R module output power and gain drift over the mission. Additional parameters, such as the DC current drawn by each panel, will be used to indicate if a T/R module or a phase shifter is performing anomalously.
Calibration Processing
The preflight test data analysis results are used to interpret the in-flight telemetry
in terms of the system performance. The key calibration parameters to be
estimated during the preprocessing are the radiated power, the antenna patterns,
the receiver gain, the noise power, and the roll angle.
Depending on the system stability, measurement of the amplitude and phase
drifts as functions of frequency across the system bandwidth may also be
required. Generally, the effects of quadratic and higher order phase and
amplitude errors on the radiometric calibration accuracy are neglected since
they do not affect the total power, but rather the shape of the impulse response
function (Chapter 6). If the area integration technique (Gray, 1990) is used to
estimate the device RCS, then matched filtering errors will not affect the
estimation accuracy of the calibration correction parameters. However, other
image quality characteristics, such as the geometric resolution and sidelobe
performance, will be degraded.
An overall calibration processing flowchart is shown in Fig. 7.24. This chart
is drawn assuming that the calibration corrections are incorporated into the
operational image processing chain. The functions attributed to the calibration
processor (CAL) are as follows:
1. Calibration site image analysis of single point targets to determine mainlobe broadening (K_ml), sidelobe characteristics (ISLR, PSLR), and absolute location accuracy;
2. Multiple point target analysis to determine geometric distortion (scale,
skew, orientation errors) and the elevation antenna pattern;
3. Raw data analysis of tone generator signals to determine cross-polarization
isolation of the receive antenna;
4. Engineering telemetry analysis to estimate drift in the system operating
point (i.e., change in receiver gain or transmitted power);
5. Generation of calibration correction factors, K(R, t;), including antenna
pattern and absolute calibration scale factor;
6. Distributed target calibration site analysis for antenna pattern estimation.
The correction factors are passed from the CAL to the SAR correlator (via the
CAT) for incorporation into the processing chain as shown in Fig. 7.24.
If the roll angle variation is slow relative to the azimuth coherent integration time, then the radiometric correction factor can be directly applied to the azimuth reference function, eliminating the need for an additional pass over the data.
Figure 7.24 Calibration processing flowchart illustrating the major software modules.
2. Monitor the caltone (SIR-C) or the pulse replica loop (E-ERS-1, X-SAR) during the data take to derive drifts in the system gain/phase characteristic;
3. Estimate the receive-only noise power during turn-on and turn-off sequences; derive the noise power at any point in the data acquisition sequence using drift measurements;
4. Perform echo-based attitude tracking using clutterlock and echo (roll) trackers;
5. Apply cross track radiometric corrections to image data;
6. Perform raw data quality analysis (QA) functions such as evaluation of the bit error rate (BER), histograms, and range spectra;
7. Incorporate all radar performance, calibration correction factors, and quality assurance data into the image ancillary data records.

For polarimetric SAR data calibration, the above list of correlator functions must be extended to include: (1) like-polarized return (i.e., z_HH, z_VV) phase and amplitude balancing using distributed targets; (2) phase compensation and averaging of cross-polarized terms (i.e., z_HV, z_VH); and (3) generation of the normalized Stokes matrix (Dubois and Norikane, 1987). A detailed description of the various software modules and data flow diagrams for the SIR-C calibration processor is given by Curlander et al. (1990).
An operations scenario for the calibration processing would be as follows.
The first step is to perform analysis of selected image and telemetry data over
the time interval for which the data is to be calibrated. The correction factors
are generated as a time sequence for each parameter and then stored in the
CAT database. The database generates a processing parameter file for each
image to be processed which includes the calibration correction parameters and
nominal system performance data, as well as the radar and mission parameters
for that time interval. In the COR, the calibration correction parameters are
applied to normalize the image data. Finally, the performance data is transferred
to the image ancillary data files and appended to the output data products.
Verification Processing and Performance Analysis
The absolute calibration accuracy and relative precision of the data products can be verified by establishing ground verification sites either equipped with point target devices, or covering homogeneous backscatter regions of known σ⁰ (Louet, 1986). For the verification site imagery, the nominal calibration corrections, as derived from the engineering telemetry and the calibration site data, are applied to the image products. The backscatter estimate, as derived from the image, is then compared to the point target RCS or the distributed target σ⁰ to derive the calibration error. These parameters, which define the calibration performance, are valid over a limited time interval that depends on the system stability. They should be appended to the data products as an ancillary file to aid the scientist in interpreting the data.
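The verification comparison itself is a simple power average. The sketch below (assuming NumPy) simulates one-look speckle over a homogeneous site of known σ⁰ with a hypothetical residual calibration bias, then recovers that bias by comparing the image-derived backscatter to the known site value:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma0_true_db = -7.0     # known sigma^0 of the homogeneous verification site
bias_db = 0.8             # hypothetical residual calibration error to detect

# One-look intensity over a uniform region: exponentially distributed
# speckle about the (miscalibrated) mean power.
mean_power = 10.0 ** ((sigma0_true_db + bias_db) / 10.0)
intensity = rng.exponential(mean_power, size=100_000)

# Backscatter estimate from the image versus the known site value.
sigma0_est_db = 10.0 * np.log10(intensity.mean())
cal_error_db = sigma0_est_db - sigma0_true_db
print(abs(cal_error_db - bias_db) < 0.1)   # -> True: the bias is recovered
```

Note that the precision of the recovered bias scales with the number of independent samples averaged, which is why homogeneous verification regions must be large.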
7.6.2 Correlator Implementation
The radar equation for the received signal power from a distributed target of uniform σ⁰ (Section 7.3) can be extended to the processed image. Recall that the mean received power is given by Eqn. (7.3.5)

(7.6.1)
After the azimuth and range compression operations are applied to the digitized video signal, the mean power in a homogeneous image is given by (Freeman and Curlander, 1989)

(7.6.2)

where δx, δR_g are the image azimuth and ground range resolution cell sizes, N_I = L_r L_az is the number of samples integrated during the correlation processing, and W_L = W_r W_az is the total loss in peak signal strength due to the range and azimuth weighting functions (e.g., Hamming weighting). The parameters L_r, L_az are the range and azimuth reference function lengths and W_r, W_az are the range and azimuth reference function weighting loss factors, respectively. The parameter L refers to the number of looks, or the number of resolution cells incoherently added (assuming no normalization) to reduce the speckle noise. The ratio of the two terms to the right of the equality in Eqn. (7.6.2) is equivalent to the multipulse SNR equation in Eqn. (2.8.8). The second term in Eqn. (7.6.2) is multiplied by N_I (rather than N_I²) since noise samples do not add coherently. Conversely, the signal power, represented by the first term in Eqn. (7.6.2), can be considered as a phase compensated coherent integration. The difference between the behavior of the signal power and noise power terms can be explained by noting that echo signals add coherently in voltage while noise terms are mutually incoherent and can only be added in power. A non-coherent integration (such as forming multiple looks) affects the signal and noise power terms equivalently.
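The coherent-versus-incoherent behavior described above is easy to check numerically: phase-compensated echoes summed in voltage produce a power that grows as N², while random-phase noise samples summed the same way produce a power that grows only as N on average. A sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024                                   # number of integrated samples

# Unit-power signal echoes, phase compensated before summation.
signal = np.ones(N, dtype=complex)
# Unit-power complex noise samples with random phases.
noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)

p_signal = abs(signal.sum()) ** 2          # coherent: grows as N^2
p_noise = abs(noise.sum()) ** 2            # incoherent: grows as N on average

print(p_signal == N ** 2)                  # -> True
print(p_noise < 20 * N)                    # noise power is O(N), not O(N^2)
```

This is exactly why the noise term in Eqn. (7.6.2) carries a factor N_I while the signal term carries N_I².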
The ratio of the image signal power to the received signal power is

$$\frac{\bar{P}_s^I}{\bar{P}_s} = \frac{\delta x\, \delta R\, L\, N_I^2\, W_L}{\Delta x\, \Delta R_s} \tag{7.6.3}$$

where ΔR_s, δR are the precompression and image slant range resolutions, and Δx, δx are the precompression and image azimuth resolutions, respectively. Equation (7.6.3) is sometimes called the processing compression ratio.
The question now arises as to whether there is an improvement in the signal to noise ratio (SNR) as a result of the signal processing. Again consider a distributed homogeneous target. We wish to evaluate the expression

$$\frac{\mathrm{SNR}^I}{\mathrm{SNR}} = \frac{\bar{P}_s^I\, \bar{P}_n}{\bar{P}_n^I\, \bar{P}_s} \tag{7.6.4}$$

where the superscript I refers to image data. Substituting from Eqns. (7.6.3), (7.6.2) and simplifying we get

$$\frac{\mathrm{SNR}^I}{\mathrm{SNR}} = \frac{\delta x\, \delta R\, N_I}{\Delta x\, \Delta R_s} \tag{7.6.5}$$

$$= O_{or}\, O_{oa} \tag{7.6.6}$$

where O_or, O_oa are the range and azimuth oversampling factors, respectively. Thus, there is no increase in the image SNR for returns from a uniform extended target as a result of the image formation, except by the product of the two oversampling factors. These oversampling factors are the ratio of the PRF to the azimuth Doppler bandwidth and the ratio of the complex sampling frequency to the range pulse bandwidth. No further increase in the signal to thermal noise ratio (SNR) (e.g., by using smaller processing bandwidths) is
$$\cdots + L\, N_I\, W_L\, \bar{P}_n \tag{7.6.8}$$

(7.6.9)

Assuming the mean received power is given by some mean image pixel value

$$\bar{P}^I = \sum_{i,j} n_{ij}^2 \Big/ M^2 \tag{7.6.10}$$

where n_ij is the image pixel value and the average is taken over an M × M block of pixels, the image correction factor is then

$$K^I(R) \tag{7.6.13}$$
Recall that the azimuth reference function size was assumed to be equal to the number of pulses spanning the azimuth footprint, i.e.,

$$L_{az} = \frac{\lambda R f_p}{L_a V_{st}} \tag{7.6.14}$$
Substituting Eqn. (7.6.14) into Eqn. (7.6.13) we see that the range dependence of K^I(R) is inversely proportional to R². It is also interesting to note from inserting Eqn. (7.6.14) into Eqn. (7.6.12) that the image noise power actually increases linearly with range.

Up to this point, we have assumed that no normalization is applied to the reference function or the multilook filter to compensate for the number of samples integrated. For example, if each term in the azimuth reference function is normalized by the number of azimuth samples L_az, as is done in many SAR processors, then the image correction factor K^I(R) is inversely proportional to R⁴ and the noise power varies as 1/R. Only if an azimuth reference function normalization of 1/√L_az is used will K^I(R) be inversely proportional to the traditional R³ that appears in many forms of the radar equation. A 1/√L_az normalization will also result in a constant noise power independent of range position within the image. These relationships are summarized in Table 7.2. Misunderstanding of the relationship between the image signal power and the slant range/attenuation factor may explain the range dependent variation in many SAR images found in the literature.
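The proportionalities summarized in Table 7.2 can be checked numerically. The sketch below (assuming NumPy) uses a simplified model consistent with the discussion above: image signal power proportional to δx·(K·L_az)²/R⁴ and image noise power proportional to K²·L_az, where K is the reference function normalization. Fitting log-log slopes recovers the table's exponents:

```python
import numpy as np

R = np.linspace(800e3, 900e3, 50)          # slant range samples (m)
Laz_var = R / 200.0                        # variable reference length, prop. to R
Laz_fix = np.full_like(R, 4096.0)          # fixed reference length
dx_const = np.ones_like(R)                 # azimuth cell preserved (variable L_az)
dx_grow = R / R[0]                         # azimuth cell grows with R (fixed L_az)

def slope(y):
    """Exponent n in y proportional to R^n, from a log-log fit."""
    return int(round(np.polyfit(np.log(R), np.log(y), 1)[0]))

cases = {
    # normalization K, reference length L_az, azimuth cell size dx
    "none, variable":   (np.ones_like(R), Laz_var, dx_const),
    "1/Laz, variable":  (1.0 / Laz_var,   Laz_var, dx_const),
    "1/sqrt, variable": (Laz_var ** -0.5, Laz_var, dx_const),
    "none, fixed":      (np.ones_like(R), Laz_fix, dx_grow),
}
results = {}
for name, (K, Laz, dx) in cases.items():
    sig = dx * (K * Laz) ** 2 / R ** 4     # image signal power vs range
    noi = K ** 2 * Laz                     # image noise power vs range
    results[name] = (slope(sig), slope(noi), slope(sig / noi))
    print(name, results[name])             # (signal, noise, SNR) exponents
```

The four cases reproduce the table rows: signal exponents -2, -4, -3, -3; noise exponents +1, -1, 0, 0; and an SNR exponent of -3 in every case.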
Consider the Seasat correlator as an example. The number of pulses in the azimuth footprint is given by Eqn. (7.6.14). Evaluating this equation using the values λ = 0.24 m, f_p = 1647 Hz, R = 850 km, L_a = 10.7 m, and V_st = 7.5 km/s, we get L_az = 4187 pulses. For the frequency domain fast convolution processor, only block sizes of powers of 2 can be used in the FFT. Thus, it is convenient to use a reference size of 4096 and an azimuth block size of 8192, resulting in 4096 good image samples per block. The azimuth reference function coefficients (i.e., f_DC, f_R) are adjusted as functions of R, but typically for Seasat the length is fixed at 4096 to maintain an even power of 2. Thus, the azimuth resolution cell size increases linearly with range such that there is a slight resolution degradation (~4%) across the swath. In this case, the average signal level varies as 1/R³, while the noise level is independent of range, resulting in an SNR proportional to 1/R³.
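The Seasat numbers above follow directly from Eqn. (7.6.14); a quick sketch (variable names are illustrative):

```python
# Seasat azimuth reference length, Eqn. (7.6.14): L_az = lambda*R*f_p/(L_a*V_st)
wavelength = 0.24       # lambda (m), L-band
prf = 1647.0            # f_p (Hz)
slant_range = 850e3     # R (m)
antenna_length = 10.7   # L_a (m)
v_st = 7.5e3            # swath velocity V_st (m/s)

L_az = wavelength * slant_range * prf / (antenna_length * v_st)
print(round(L_az))      # -> 4187 pulses

# Fast-convolution FFT sizing: a 4096-tap reference (nearest lower power
# of 2) in an 8192-sample block leaves 4096 good image samples per block.
ref_size, block_size = 4096, 8192
print(block_size - ref_size)   # -> 4096
```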
As a second example, consider the SIR-B correlator design implemented by NASA/JPL to perform the operational SAR processing (Curlander, 1986). In that design, the azimuth processing block size per look (for a four-look image) was fixed at 2048 samples. To accommodate the varying footprint size over the range of look angles (15° to 60°), the number of nonzero terms (i.e., L_az) in the
TABLE 7.2 Effect of Azimuth Reference Function Length L_az and Normalization on the Expected Image Power

Normalization   Length          Signal Power   Noise Power   SNR
None            Variable, ∝R    ∝1/R²          ∝R            ∝1/R³
1/L_az          Variable, ∝R    ∝1/R⁴          ∝1/R          ∝1/R³
1/√L_az         Variable, ∝R    ∝1/R³          Constant      ∝1/R³
None            Fixed           ∝1/R³          Constant      ∝1/R³
azimuth reference function was given by

(7.6.15)

assuming the full aperture is processed. For SIR-B the processing bandwidth was estimated using

$$B_p = (0.8) f_p \approx (0.8) B_D \tag{7.6.16}$$

(7.6.17)

The SIR-B reference function was always normalized by the azimuth FFT block size (i.e., 2048 samples) independent of L_az. Since this correction factor is independent of range, it does not affect the range dependence of either the expected signal power or the SNR. Hence for the SIR-B image products the signal power varies as 1/R² while the noise varies as R, with an SNR proportional to 1/R³.

To obtain an image with constant mean noise power, a reference function normalization of

$$K_r = 1/\sqrt{L_r W_r} \tag{7.6.19}$$

should be applied. This yields an image with constant mean noise power equal to the input noise level in the raw data. This is a useful representation since W_az W_r can be determined directly from the ratios of the processed to unprocessed mean receive-only noise power, with and without weighting applied. A second basic requirement is that all interpolations, such as the range cell migration correction or the slant-to-ground range reprojection, preserve the data statistics. The specific criteria for the interpolation coefficients such that the data statistics are preserved are presented in Chapter 8. Assuming the normalization factors in Eqn. (7.6.18) and Eqn. (7.6.19) are applied to the reference functions, the radar equation as given by Eqn. (7.6.2) becomes

(7.6.20)
The radiometric calibration algorithm should produce image products that are both relatively and absolutely calibrated. Simply stated, in a relatively calibrated image each pixel value (i.e., data number or gray level) can be uniquely related to some backscatter coefficient (within an error tolerance), independent of its cross-track position or time of acquisition. In an absolutely calibrated image, the coefficients specifying the relationship of each relatively calibrated data number to a backscatter value (within an error tolerance) are given. For example, assuming a linear relation, σ⁰ is given by

(7.6.21)

where we have assumed that a two parameter stretch, i.e., a gain K₀ and a bias K_e, is used to minimize the distortion noise associated with representing the image within the dynamic range of the output medium.
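The idea of an absolutely calibrated product can be sketched as follows (an illustrative example only: the specific form DN = K₀σ⁰ + K_e and the constants are assumptions, not the book's Eqn. (7.6.21)). Given the published gain and bias, any user can invert the stretch to recover σ⁰:

```python
import numpy as np

# Hypothetical two-parameter stretch: data number DN = K0 * sigma0 + Ke.
K0, Ke = 500.0, 20.0

sigma0_in = np.array([0.02, 0.10, 0.35])      # backscatter coefficients
dn = K0 * sigma0_in + Ke                      # absolutely calibrated data numbers

# A user recovers sigma0 from the data numbers and the published constants.
sigma0_out = (dn - Ke) / K0
print(np.allclose(sigma0_out, sigma0_in))     # -> True
```

The point is that K₀ and K_e travel with the product (in the ancillary records), so the data numbers remain interpretable after the stretch.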
To derive the image correction factor K^I(R), each of the parameters in Eqn. (7.6.13) must be estimated. The terms λ, L, W_L, R, L_r, L_az, δx, δR_g are all well known or easily measured and contribute little to the overall calibration error. Significant errors come only from uncertainty in the estimation of P_t, G_r, G²(φ), and P_n.

The thermal noise P_n can be estimated by averaging a block of samples from the turn-on and turn-off receive-only noise segments in each data take. Throughout the data take, the drift in receiver gain, G_r, can be estimated from
a caltone. Therefore, the thermal noise estimate at the center time of the image frame, t_c, is given by

(7.6.22)

where G_CAL(t_c) is the ratio of the system gain at time t_c to the gain at the turn-on time, t₀. This gain drift may also be characterized by other internal calibration devices such as a leakage chirp or thermal sensors.

The radiated power P_t is most accurately measured using a set of ground receivers. The variation in P_t over the time interval between ground receiver measurements can be tracked using internal meters (power, temperature) or by a leakage chirp. Similarly, the receiver gain can be directly measured by a calibration tone or a leakage chirp. The antenna is typically measured preflight to obtain a nominal pattern. In-flight variation from thermal stress or zero gravity unloading is typically measured using external targets. Either a distributed homogeneous target, or point targets (e.g., transponders or corner reflectors), can be used to measure the two way pattern from the SAR image. Alternatively, the transmit pattern can be directly measured using ground receivers and, if reciprocity can be assumed, the two way pattern inferred from this measure. The antenna boresight, or equivalently the pattern roll angle, can be refined by analysis of the antenna pattern modulation in an uncorrected image by estimating the location of the peak return power from a least square error fit of the image data.
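The boresight refinement described above can be sketched numerically. Assuming NumPy, a quadratic elevation-pattern model near the peak, and hypothetical angles and noise levels, a least-squares fit of the cross-track power profile locates the pattern peak and hence the roll offset:

```python
import numpy as np

rng = np.random.default_rng(2)

# Look angles across the swath; the true boresight (unknown to the fit).
look = np.linspace(18.0, 24.0, 200)
boresight = 21.4

# Mean range profile of an uncorrected image over uniform clutter: the
# two-way elevation pattern (quadratic in dB near the peak) plus noise.
profile_db = -0.5 * (look - boresight) ** 2 + 0.05 * rng.normal(size=look.size)

# Least-squares quadratic fit; the vertex estimates the pattern peak.
a, b, c = np.polyfit(look, profile_db, 2)
peak = -b / (2.0 * a)
print(abs(peak - boresight) < 0.1)    # -> True: roll offset recovered
```

Real elevation patterns are of course not exactly quadratic, so operational implementations fit the measured pattern shape rather than a parabola; the peak-location principle is the same.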
The polarimetric data products are typically represented in a Stokes matrix format. This is achieved by first performing a symmetrization of the scattering matrix. The symmetrization procedure is as follows (Zebker et al., 1987). Given four radiometrically corrected images (in a complex amplitude format) that represent the two like-polarized target backscatter measurements (i.e., z_HH and z_VV) and the two cross-polarized measurements (i.e., z_HV and z_VH), the symmetrization procedure is to average the cross-polarized terms such that

$$z'_{HV} = (z_{HV} + z_{VH})/2 \tag{7.7.1}$$

on a pixel by pixel basis. The inherent assumption in this process is that for all natural targets s_HV = s_VH. Therefore any differences between z_HV and z_VH must arise from radar system errors. If, for example, one receiver channel is 2.8 cm longer than the other, the two channels are 180° out of phase. Thus the balancing operation in Eqn. (7.7.1) would effectively cancel the cross-polarized return (in the absence of other system errors and noise), resulting in a value of zero for z'_HV independent of the target scattering characteristics. To compensate for this systematic phase offset, prior to balancing a phase difference correction must be applied to the data. The mean phase difference is given by

$$\bar{\phi}_x = \sum_{i=1}^{N} \arg\!\left(z_{HV}\, z_{VH}^*\right) \big/ N \tag{7.7.2}$$

where the summation is performed over some representative set of data samples spanning the entire image frame. Since just one cross-polarized channel need be corrected to compensate for this phase error, Eqn. (7.7.1) becomes

$$z'_{HV} = \left[z_{HV}\exp(-j\bar{\phi}_x) + z_{VH}\right]/2 \tag{7.7.3}$$

Similarly, the mean phase difference between the like-polarized channels is given by

$$\bar{\phi}_l = \sum_{i=1}^{N} \arg\!\left(z_{HH}\, z_{VV}^*\right) \big/ N \tag{7.7.4}$$

This correction is then applied to all pixels in one of the like-polarized images, i.e.,

(7.7.5)

Since the like-polarized phase difference may vary from one data take to another, a unique correction factor may be required for each image frame. Calibration of the like-polarized channel amplitude imbalance cannot be performed using distributed targets since the ratio s_HH/s_VV is very target dependent and cannot be predicted. Since the scattering matrix of a corner reflector such as a triangular trihedral is well known (s_HH/s_VV = 1), an analysis of the return from this target can be used to balance the like-polarized channel amplitude in that local area. Amplitude imbalance can arise from H, V pattern misalignment, which would require balancing to be performed at multiple points across the swath. This can be accomplished using an array of reflectors deployed across the ground track. Another, as yet untested, approach would be to perform the absolute like-polarized channel balancing at a single point within the swath (using a reflector), and then to use a distributed target such as the ocean to perform a relative balance at all other points across the swath. This approach requires that the target s_HH/s_VV not change as a function of cross-track position. However, it does not require that the ratio be known. The requirement that s_HH/s_VV remain constant is never valid for an airborne system, since the range of incidence angles is so large. However, for a spaceborne polarimetric SAR, where η varies over the entire swath by only a few degrees, this relative balancing technique may be feasible.

The final step in the polarimetric calibration is correction of the cross-polarized leakage terms that typically result from poor isolation in the antenna or transmitter switch, or from platform attitude variation. We believe this is best implemented using the previously described clutter based technique proposed by van Zyl (1990). These corrections can be applied as a post-processing step (on the Stokes matrix) and are typically not operationally applied in the SAR correlator.

Following the polarimetric calibration steps outlined above (except the cross-polarized leakage term correction), the Stokes matrix products are formed. This first requires generation of the six cross-products

$$J_{HHHH} = z_{HH}^* z_{HH} \tag{7.7.6a}$$

$$J_{HVHV} = z_{HV}^* z_{HV} \tag{7.7.6b}$$

$$J_{VVVV} = z_{VV}^* z_{VV} \tag{7.7.6c}$$

$$J_{HHHV} = z_{HH}^* z_{HV} \tag{7.7.6d}$$

$$J_{HVVV} = z_{HV}^* z_{VV} \tag{7.7.6e}$$

$$J_{HHVV} = z_{HH}^* z_{VV} \tag{7.7.6f}$$
7.8 SUMMARY
This chapter has addressed the issue of SAR radiometric calibration primarily
from the signal processing perspective. The basic terms were defined and an
end-to-end system view of the various error sources presented. Several internal
calibration schemes were described in detail to identify the system measures
that can and cannot be performed using built-in test equipment. We then
addressed the techniques and technology currently employed for external
calibration with ground sites. The relative merits of point target versus distributed target calibration sites were discussed and several techniques using clutter statistics for calibration were presented.
The second portion of the chapter concentrated on design of the ground
processor to utilize the acquired calibration data for operational correction of
the data products. We described a configuration using an off-line calibration
processor to analyze both the internal calibration device measurements and the
calibration site imagery. This system generates correction factors that are passed
to the correlator for application to the image data. We derived an appropriate
form of the radar equation that explicitly indicates the processor induced
gains/losses and discussed the effect of various processor implementations on
this equation. We concluded with a brief discussion of the calibration procedures
for a polarimetric SAR system.
REFERENCES
Aarons, J. (1982). "Global Morphology of Ionospheric Scintillations," Proc. IEEE, 70,
pp. 360-378.
Attema, E. (1988). "Engineering Calibration of the ERS-1 Active Microwave Instrument
in Orbit," Proc. IGARSS '88, Edinburgh, Scotland, pp. 859-862.
Blanchard, A. and D. Lukert ( 1985). "SAR Depolarization Ambiguity Effects," Proc.
IGARSS '85, Amherst, MA, pp. 478-483.
368
REFERENCES
369
IEEE Standard Test Procedures for Antennas (1979). ANSI/IEEE Std. 149-1979, Wiley,
New York.
Johnson Space Center ( 1988 ). "Payload Accommodations Document," NSTS 07700,
Vol. 14, Rev. J, Houston, TX.
Kasischke, E. S. and G. W. Fowler (1989). "A Statistical Approach for Determining
Radiometric Precisions and Accuracies in the Calibration of Synthetic Aperture Radar
Imagery," IEEE Trans. Geosci. and Remote Sensing, GE-27, pp. 417-427.
Kim, Y. (1989). "Determination of the Amplitude and Frequency of Caltone for SIR-C," Internal Memorandum, Jet Propulsion Laboratory, Pasadena, CA.
Klein, J. (1990a). "SIR-C Engineering Calibration Plan," JPL D-6998, Jet Propulsion Laboratory, Pasadena, CA.
Klein, J. (1990b). "Polarimetric SAR Calibration using Two Targets and Reciprocity,"
Proc. IGARSS '90, College Park, MD, pp. 1105-1108.
Louet, J. (1986). "The ESA Approach for ERS-1 Sensor Calibration and Performance
Verification," IGARSS'86, Zurich, pp. 167-174.
Moore, R. K. (1988). "Determination of the Vertical Pattern of the SIR-B Antenna," Inter. J. Remote Sensing, 9, pp. 839-847.
Raney, K. (1980). "SAR Response to Partially Coherent Phenomena," IEEE Trans. Ant. Prop., AP-28, pp. 777-787.
Rino, C. L. and J. Owen (1984). "The Effects of Ionospheric Disturbances on Satellite-borne Synthetic Aperture Radars," SRI International, Technical Report, Contract DNA011-83-C0131, Menlo Park, CA.
Robertson, S. D. (1947). "Targets for Microwave Radar Navigation,'' Bell Syst. Tech. J.,
26, pp. 852-869.
Ruck, G. T., D. E. Barrick, W. D. Stuart and C. K. Krichbaum (1970). Radar Cross
Section Handbook, Vol. I, Plenum Press, New York.
van Zyl, J. J. (1989). "Unsupervised Classification of Scattering Behavior using Radar
Polarimetry Data," IEEE Trans. Geosci. and Remote Sensing, GE-27, pp. 36-45.
van Zyl, J. J. (1990). "Calibration of Polarimetric Radar Images Using Only Image
Parameters and Trihedral Corner Reflector Responses,'' IEEE Trans. Geosci. and .
Remote Sensing, GE-28, pp. 337-348.
Wall, S. D. and J. C. Curlander (1988). "Radiometric Calibration Analysis of SIR-B
Imagery," Inter. J. Remote Sensing, 9, pp. 891-906.
Zebker, H., J. J. van Zyl and D. N. Held (1987). "Imaging Radar Polarimetry from Wave Synthesis," J. Geophys. Res., 92, pp. 683-701.
8
GEOMETRIC CALIBRATION OF SAR DATA
In Chapter 7 we discussed the procedures for relating the received signal data to the target backscatter coefficient.
8.1 DEFINITION OF TERMS
For many scientific applications (e.g., geologic mapping, land surveys) the
geometric fidelity of the data product is critically important. Geometric
distortion principally arises from platform ephemeris errors, error in the
estimate of the relative target height, and signal processing errors. We define
geometric calibration as the process of measuring the various error sources and
characterizing them in terms of the calibration accuracy parameters. The terms
geometric correction and geometric rectification will be used interchangeably to
describe the processing step where the image is resampled from its natural
(distorted) projection into a format better suited to scientific analysis. Geocoding
is the process of resampling the image data into a specific output image format,
namely a uniform earth-fixed grid, which typically is a standard map projection.
Mosaicking refers to the process of assembling, into a single frame, multiple (≥ 2)
independently processed (geocoded) image frames that are overlapping in their
coverage area.
The geometric calibration parameters can be divided into absolute error
terms, as referenced to some fixed coordinate system, and relative error terms,
which describe the distortion within an image frame. The absolute geometric
calibration of an image can be described by two parameters: location and
orientation. The absolute location error is the uncertainty in the estimate of any
image pixel relative to a given coordinate system (e.g. geodetic latitude and
longitude). The image orientation error is the angular uncertainty in the estimate
of a line in the image as compared to a line of reference, such as an axis of the
coordinate system (e.g., the angle between an image isorange line and the
equator).
The relative geometric calibration parameters describe the internal geometric
fidelity of the SAR image. The relative image calibration can be characterized
in terms of two parameters: scale and skew. The relative scale error is the
fractional error between a distance as represented in the image and the actual
geographic distance. This error term is typically specified in the range and
azimuth (or line and pixel) dimensions. The relative skew error is the error
between a given angle as represented in the image and the actual angle. For
example, two roads that intersect at a right angle may be represented in the
image at a crossing angle of 91°, which is a relative skew error of 1°.
For a multiple channel radar system, there is an additional parameter required
to describe the image-to-image misregistration. This relative misregistration error
is defined in the along-track and cross-track dimensions as the relative location
error (displacement) between two coincident pixels from image data acquired
by two separate radar channels.
The characterization of the image geometric calibration in terms of the above
listed parameters is not unique. The representation we present here is convenient,
since these parameters are directly measurable in the SAR image.
8.2 GEOMETRIC DISTORTION
Before describing the various techniques for geometric correction of the image
products, we first address the geometric distortions inherent in the uncorrected
image data and the source of these distortions. They can generally be categorized
as resulting from sensor instability, platform instability, signal propagation
effects, terrain height, and processor induced errors.
8.2.1 Sensor Errors

The sensor stability is a key factor controlling the internal geometric fidelity
of the data set. For example, the consistency of the interpulse or intersample
period is governed by the accuracy of the timing signals sent to the pulse
generator and the analog-to-digital converter (ADC). Variation in these timing
signals depends primarily on the stability of the local oscillator (stalo).
Typically, short term variation in the stalo frequency that produces sample-to-sample
variation (clock jitter) is negligible from an image geometric fidelity
standpoint. Perhaps more significant is the long-term drift of the stalo. For a
mapping mission, such as the Magellan Venus radar mapper, the stalo drift
must be measured over the course of the mission to determine the actual PRF,
since this establishes the along-track pixel spacing, that is,

δx_az = L V_sw / f_p    (8.2.1)

where L is the number of azimuth looks and f_p is the pulse repetition frequency
(PRF). The magnitude of the swath velocity V_sw is given by

(8.2.2)

where R_s and R_t are the magnitudes of the sensor and target position vectors
and V_s and V_t are the sensor and target velocity vectors, respectively. A fractional
error in the stalo frequency translates into a similar fractional error in the PRF
and therefore in the along-track pixel spacing, which results in an along-track
scale error.

A second sensor parameter that directly affects the geometric fidelity of the
data set is the electronic delay of the signal through the radar transmitter and
receiver. This electronic delay τ_e must be subtracted from the total (measured)
delay to derive the actual propagation time used in the slant range calculation,
that is,

R = c(τ − τ_e)/2    (8.2.3)

Here τ is the total delay from the time a control signal is sent to the exciter for
pulse generation until the echo is digitized by the ADC. This delay is precisely
known since it is controlled by the radar timing unit, which in turn is based on
the stalo frequency. Error in the estimate of the propagation time will result in
a slant range error, which in turn will bias the incidence angle estimate. From
Fig. 8.1, we can write

η = sin⁻¹[(R_s/R_t) sin γ]    (8.2.4)

where η is the incidence angle, γ is the look angle, R_s and R_t are the magnitudes
of the spacecraft and target position vectors relative to the center of the earth, and

γ = cos⁻¹[(R² + R_s² − R_t²)/(2RR_s)]    (8.2.5)

The corresponding ground range pixel spacing is

δx_gr = c/(2 f_s sin η)    (8.2.6)
where f_s is the complex sampling frequency. From Eqn. (8.2.6) we see that
errors in either γ or f_s translate into cross-track scale errors, as will be shown
in the following section on target location errors.
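The cross-track scale sensitivity implied by Eqn. (8.2.6) is easy to check numerically. A minimal sketch in Python; the sampling frequency and incidence angle below are illustrative, Seasat-like assumptions, not values from the text:

```python
import math

C = 2.998e8  # speed of light, m/s

def ground_range_spacing(f_s, eta_deg):
    """Ground range pixel spacing, Eqn. (8.2.6): c / (2 f_s sin eta)."""
    return C / (2.0 * f_s * math.sin(math.radians(eta_deg)))

# Illustrative, Seasat-like values: 22.76 MHz complex sampling, 23 deg incidence
dx = ground_range_spacing(22.76e6, 23.0)
# A 1% bias in the incidence angle estimate shows up as a cross-track scale error:
dx_biased = ground_range_spacing(22.76e6, 23.0 * 1.01)
scale_error_pct = (dx / dx_biased - 1.0) * 100.0
print(round(dx, 2), round(scale_error_pct, 3))
```

Note that near 23° incidence a 1% angle bias maps into a comparable (just under 1%) cross-track scale error, which is why the incidence angle estimate dominates the range scale budget.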
A third type of error, which may be more accurately classified as a platform
error than as a sensor error, is drift in the spacecraft clock. Any offset between
the spacecraft clock and the clock used to derive the ephemeris file from the
spacecraft tracking data will result in target location errors. If the spacecraft
ephemeris is in an inertial coordinate system, then the planet rotation must be
derived from the time difference between the actual data acquisition and the
reference time for the inertial coordinate system. Drift in the spacecraft clock
will result in an error in the target longitude estimate, where ω_e is the earth
rotational velocity, s_d is the clock drift, and ζ is the target latitude. An
along-track position error will also result from clock drift according to s_d V_sw,
where V_sw is the swath velocity.
Figure 8.1 Relationship between look angle, γ, and incidence angle, η, for a smooth spherical
geoid model. The spacecraft position is given by R_s = H + R where R is the radius of the earth
at nadir and H is the S/C altitude relative to the nadir point.

8.2.2 Target Location

The location of the (i, j) pixel in a given image frame can be derived from
knowledge of the sensor position and velocity (Curlander, 1982). More precisely,
the location of the antenna phase center in an earth referenced coordinate
system is required. The target location is determined by simultaneous solution
of three equations: (1) Range equation; (2) Doppler equation; and (3) Earth
model equation.

The range equation is given by

R = |R_s − R_t|    (8.2.7)

where R_s and R_t are the sensor and target position vectors, respectively. The
slant range R is given by Eqn. (8.2.3). For a given cross-track pixel number j
in the slant range image, the range to the jth pixel is

R_j = (c/2)(τ − τ_e) + (c/(2f_s))(j + ΔN)    (8.2.8)

where ΔN represents an initial offset in complex pixels (relative to the start of
the sampling window) in the processed data set. This offset, which is nominally
0, is required for pixel location in subswath processing applications, or for a
design where the processor steps into the data set an initial number of pixels
to compensate for the range walk migration.

The Doppler equation is given by

f_DC = −(2/(λR))(V_s − V_t)·(R_s − R_t)    (8.2.9)

where λ is the radar wavelength, f_DC is the Doppler centroid frequency, and V_s,
V_t are the sensor (antenna phase center) and target velocities, respectively. The
target velocity can be determined from the target position by

V_t = ω_e × R_t    (8.2.10)

where ω_e is the earth's rotational velocity vector. The Doppler centroid in
Eqn. (8.2.9) is the value of f_DC used in the azimuth reference function to form the
given pixel.
An offset between the value of f_DC in the reference function and the true f_DC
causes the target to be displaced in azimuth according to

Δx_az = (Δf_DC / f_R) V_sw    (8.2.11)

where Δf_DC is the difference between the true f_DC and the reference f_DC, f_R is the
Doppler rate used in the reference function, and V_sw is the magnitude of the
swath velocity. To compensate for this displacement when performing the target
location, the identical f_DC used in the reference function to form the pixel should be
used in Eqn. (8.2.9). The exception to this rule is if an ambiguous f_DC is used
in the reference function, that is, if the true f_DC is offset from the reference f_DC by
more than f_p/2. In this case, the pixel shift will be according to the Doppler
offset between the reference f_DC and the Doppler centroid of the ambiguous
Doppler spectrum, resulting in a pixel location error of

Δx_az = (m f_p / f_R) V_sw    (8.2.12)
where m is the number of PRFs the reference f_DC is offset from its true value
(i.e., the azimuth ambiguity number). Using Seasat as an example, with m = 1,
V_sw = 7.5 km/s, f_p = 1647 Hz, and f_R = 525 Hz/s, the azimuth target location
error associated with a processing Doppler centroid offset by one ambiguity is
approximately 23 km. Additionally, there is a small range offset which is given
by Eqn. (6.5.7). Nominally, for Seasat this is on the order of 200 m.
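The 23 km figure quoted above follows directly from Eqn. (8.2.12); a quick arithmetic check using the text's Seasat parameters:

```python
# Azimuth location error from processing with an ambiguous Doppler centroid,
# Eqn. (8.2.12): delta_x = m * f_p * V_sw / f_R
m = 1           # azimuth ambiguity number
f_p = 1647.0    # Seasat PRF, Hz
f_R = 525.0     # Doppler rate, Hz/s
V_sw = 7.5e3    # swath velocity, m/s (value used in the text)
delta_x = m * f_p * V_sw / f_R
print(round(delta_x / 1e3, 1))  # km; the text quotes "approximately 23 km"
```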
The third equation is the earth model equation. An oblate ellipsoid can be
used to model the earth's shape as follows

(x_t² + y_t²)/(R_e + h)² + z_t²/R_p² = 1    (8.2.13)

where R_e is the radius of the earth at the equator, h is the local target elevation
relative to the assumed model, and R_p, the polar radius, is given by

R_p = (1 − f)(R_e + h)    (8.2.14)

where f is the flattening factor. If a topographic map of the area imaged is used
to determine h, then the earth model parameters should match those used to
produce the map. Otherwise, a mean sea level model such as that given by
Wagner and Lerch (1977) can be used.
The target location as given by {x_t, y_t, z_t} is determined from the simultaneous
solution of Eqn. (8.2.7), Eqn. (8.2.9) and Eqn. (8.2.13) for the three unknown
target position parameters. This is illustrated pictorially in Fig. 8.2. This figure
shows the earth (geoid) surface intersected by a plane whose position is given
by the Doppler centroid equation. This intersection, a line of constant Doppler,
is then intersected by the slant range vector at a given point, the target location.
The left-right ambiguity is resolved by knowledge of the sensor pointing
direction.

The accuracy of this location procedure (assuming an ambiguous f_DC was not
used in the processing) depends on the accuracy of the sensor position and
velocity vectors, the measurement accuracy of the pulse delay time, and
knowledge of the target height relative to the assumed earth model. The location
does not require attitude sensor information. The cross-track target position is
established by the sampling window, independent of the antenna footprint
location (which does depend on the roll angle). Similarly, the azimuth squint
angle, or aspect angle resulting from yaw and pitch of the platform, is determined
by the Doppler centroid of the echo, which is estimated using a clutterlock
technique. Thus the SAR pixel location is inherently more accurate than that
of optical sensors, since the attitude sensor calibration accuracy does not
contribute to the image pixel location error. The following sections discuss the
relationship of platform ephemeris errors, ranging errors, and target elevation
errors to the image geometric calibration accuracy parameters.
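The simultaneous solution of the range, Doppler, and earth model equations can be sketched as a Newton iteration. The geometry below (spherical earth, made-up sensor state and wavelength) is purely illustrative; a production implementation would add convergence tests and the left/right ambiguity logic described above:

```python
import numpy as np

OMEGA_E = 7.2921e-5  # earth rotation rate, rad/s

def locate_target(R_s, V_s, R_slant, f_DC, lam, R_e, R_p, x0, n_iter=25):
    """Newton solution of the pixel location equations: range (8.2.7),
    Doppler (8.2.9) with V_t = omega_e x R_t (8.2.10), and the earth
    model (8.2.13) with h = 0.  The start point x0 selects the side."""
    w = np.array([0.0, 0.0, OMEGA_E])

    def F(x):
        dR = R_s - x
        R = np.linalg.norm(dR)
        V_t = np.cross(w, x)
        return np.array([
            R - R_slant,                                       # range eqn
            f_DC + 2.0 / (lam * R) * np.dot(V_s - V_t, dR),    # Doppler eqn
            (x[0]**2 + x[1]**2) / R_e**2 + x[2]**2 / R_p**2 - 1.0,  # geoid
        ])

    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        f0 = F(x)
        J = np.empty((3, 3))
        for k in range(3):            # finite-difference Jacobian, 1 m step
            step = np.zeros(3)
            step[k] = 1.0
            J[:, k] = F(x + step) - f0
        x = x - np.linalg.solve(J, f0)
    return x

# Self-consistency check on a made-up, spherical-earth geometry:
R_e = R_p = 6378.0e3
x_true = R_e * np.array([np.cos(0.1), 0.0, np.sin(0.1)])   # true target
R_s = np.array([R_e + 800.0e3, 0.0, 0.0])                  # sensor position
V_s = np.array([0.0, 7.5e3, 0.0])                          # sensor velocity
lam = 0.235                                                # L-band wavelength, m
dR = R_s - x_true
R_slant = np.linalg.norm(dR)
V_t_true = np.cross(np.array([0.0, 0.0, OMEGA_E]), x_true)
f_DC = -2.0 / (lam * R_slant) * np.dot(V_s - V_t_true, dR)
x0 = R_e * np.array([1.0, 0.0, 0.05])                      # biased to +z side
x_est = locate_target(R_s, V_s, R_slant, f_DC, lam, R_e, R_p, x0)
print(np.linalg.norm(x_est - x_true))  # residual should be well under a metre
```

The starting point plays the role of the left/right ambiguity resolution: beginning on the correct side of the orbital plane drives the iteration to the intended intersection of the iso-range and iso-Doppler surfaces with the ellipsoid.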
Figure 8.2 Geocentric coordinate system illustrating a graphical solution for the pixel location
equations.
8.2.3 Platform Ephemeris Errors

The platform position and velocity errors can be broken into three components:
(1) Along-track errors; (2) Cross-track errors; and (3) Radial errors. We will
examine the effects of each of these in terms of the azimuth and range target
positioning error.
Along-Track Position Error, ΔR_x. An along-track position error causes an
azimuth target location error according to

(8.2.15)

where ΔR_x is the along-track sensor position error. The cross-track or range
location error from an error in ΔR_x is negligible.
Cross-Track Position Error, ΔR_y. A cross-track sensor position error causes a
range target location error according to

(8.2.16)

where ΔR_y is the cross-track sensor position error. A small azimuthal target
displacement will result from a shift in the earth's rotational velocity at this
new cross-track target position according to Eqn. (8.2.11). However, the effect
is quite small and can be neglected for most applications.

Radial Position Error, ΔR_z. A sensor radial position error is essentially an error
in the estimate of the sensor altitude, H. From Eqn. (8.2.5) the change in look
angle for a given change in the sensor radial position is

Δγ = cos⁻¹[(R² + (R_s + ΔR_z)² − R_t²)/(2(R_s + ΔR_z)R)] − γ    (8.2.17)

A radial sensor position error will also cause an azimuthal target location error
according to the resultant Doppler shift Δf_DC, which is given by

Δf_DC = (2V_e/λ)(cos ζ_t sin α_i cos γ)Δγ    (8.2.22)

where V_e is the earth tangential speed at the equator, ζ_t is the geocentric latitude
of the target, α_i is the orbital inclination angle, and Δγ, the change in look angle,
is given by Eqn. (8.2.17). The resultant target azimuth location error is given
by Eqn. (8.2.11), which can be rewritten as

Δx₂ ≈ Δf_DC λ R V_sw / (2V_st²)    (8.2.23)

where V_st is the magnitude of the relative sensor-target velocity.

Perhaps a more severe effect resulting from a radial sensor position error
than the target location error is the image cross-track scale error. Consider
the look angle offset Δγ resulting from a radial position error ΔR_z. This
approximately translates into an equivalent incidence angle error (i.e., Δγ ≈ Δη).
Therefore, the ground range pixel spacing given by Eqn. (8.2.6), which is inversely
proportional to sin η, results in a range scale error of

k_r = [sin(η + Δη)/sin η − 1] · 100%    (8.2.24)

Sensor Velocity Errors, ΔV_x, ΔV_y, ΔV_z. The range location error from the
along-track, cross-track, and radial sensor velocity error components is
negligible. However, an along-track velocity error does produce an azimuth
scale error in the image according to

(8.2.25)
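The range scale error of Eqn. (8.2.24) can be evaluated directly; the incidence angle and the angle bias below are illustrative assumptions:

```python
import math

def range_scale_error_pct(eta_deg, d_eta_deg):
    """Cross-track scale error, Eqn. (8.2.24): [sin(eta+d_eta)/sin(eta) - 1]*100%."""
    eta = math.radians(eta_deg)
    d_eta = math.radians(d_eta_deg)
    return (math.sin(eta + d_eta) / math.sin(eta) - 1.0) * 100.0

# Illustrative: a 0.01 deg incidence angle bias at 23 deg incidence
k_r = range_scale_error_pct(23.0, 0.01)
print(round(k_r, 4))
```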
8.2.4 Signal Propagation Effects

In the preceding target location analysis we assumed that the propagation
velocity of the electromagnetic wave was equal to the speed of light, c. In general
this is a good approximation; however, under certain ionospheric conditions a
significant increase in the signal propagation time relative to propagation
in a vacuum can occur. This additional delay, τ_I, is given by

τ_I = K_I R_I / f_c²    (8.2.26)
where R_I is the propagation path length through the ionosphere, f_c is the radar
carrier frequency, and K_I is a scale factor that depends on the ionospheric
electron density. Figure 8.3 is a plot of ionospheric group delay versus
carrier frequency (Brookner, 1985).

Figure 8.3 Plot of ionospheric group delay (two-way) versus radar carrier frequency for both
severe and medium ionosphere (Brookner, 1985).

A slant range timing error translates into a ground range location error of

Δr₃ = cΔτ / (2 sin η)    (8.2.28)

where Δτ is the slant range timing error (e.g., Δτ_e, τ_I). For example, a 10 ns
electronic delay measurement error results in a 1.5 m location error in the slant
range image, which translates into Δr₃ = 4.4 m in the ground range image for
Seasat (η = 20°). Similarly, an unmodeled propagation delay of τ_I = 50 ns
results in a 7.5 m slant range error and a ground range error of Δr₃ = 22 m.
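The two worked examples above (10 ns and 50 ns delays at η = 20°) can be reproduced from the ground range error relation:

```python
import math

C = 2.998e8  # speed of light, m/s

def ground_range_error(delta_tau, eta_deg):
    """Ground range location error from a slant range timing error:
    delta_r3 = c * delta_tau / (2 sin eta)."""
    return C * delta_tau / (2.0 * math.sin(math.radians(eta_deg)))

# The two cases quoted in the text for Seasat (eta = 20 deg):
e1 = ground_range_error(10e-9, 20.0)  # 10 ns electronic delay error
e2 = ground_range_error(50e-9, 20.0)  # 50 ns unmodeled ionospheric delay
print(round(e1, 1), round(e2, 1))     # text quotes 4.4 m and 22 m
```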
A cross-track target displacement will result from a height estimation error Δh,
where η is the local incidence angle. The target range location error is then
given by

Δr₄ = Δh/tan η    (8.2.30)
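A quick illustration of Eqn. (8.2.30); the 100 m of relief and the 23° incidence angle are assumed values for the example:

```python
import math

def height_location_error(dh, eta_deg):
    """Cross-track location error from a target height error, Eqn. (8.2.30):
    delta_r4 = dh / tan(eta)."""
    return dh / math.tan(math.radians(eta_deg))

# Illustrative: 100 m of unmodeled relief at 23 deg incidence
err = height_location_error(100.0, 23.0)
print(round(err, 1))  # metres of cross-track displacement
```

At steep incidence angles the 1/tan η factor is large, so unmodeled relief is a dominant cross-track location error for steep-looking systems.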
Figure 8.5 Relationship between slant range and ground range image presentation for a side
looking radar.
sloped away from the radar (α⁻) have effectively a larger local incidence angle,
thus decreasing the range pixel size.

In relatively high relief areas, as shown in Fig. 8.6b, a layover condition may
exist such that the top of a mountain is at a nearer slant range than the base.
In this case, the image of the mountain will be severely distorted, with the peak
appearing in the image at a nearer range position than the base (see Fig. 8.21).
Additionally, echo signals from multiple target locations will arrive at the SAR
receiving antenna simultaneously. Therefore the fraction of scattered power
arising from each target cannot be resolved. To properly correct this type of
geometric distortion requires some assumption about the scattering model.
Theoretically, if the backscatter coefficient as a function of the incident geometry
for a particular target area is known, the relative power contribution of a
Figure 8.6 Geometric distortions in SAR imagery: (a) Foreshortening; (b) Layover; (c) Shadow;
(d) A combination of imaging geometries illustrating a secondary peak.
particular range bin from each iso-range target (in the layover region) can be
determined and assigned to the correct cross-track pixel in the resampled
(rectified) image. Practically, this would be an extremely difficult process, since
for each output pixel a search would be required over an area of digital elevation
data whose targets could produce the identical range and Doppler histories. Of
course, the available σ⁰ versus η model is only approximate, and therefore a
radiometrically calibrated image cannot be recovered and obviously the phase
information is lost.
An image distortion related to the layover effect is radar shadow. Shadowing
occurs when the local target slopes away from the radar at an angle whose
magnitude is greater than or equal to the incidence angle of the transmitted
wave (α⁻ ≥ η). When a shadow condition occurs, the shadow region does not
scatter any signal. In the rectified image, these areas are typically represented
at a signal level equal to the system thermal noise power. This will prevent a
negative power representation of shadow area in the noise subtracted imagery.

To perform scientific interpretation of data products with these types of
distortion, the scientist must relate the backscatter coefficient to the local incident
geometry of the EM wave. Therefore, as an ancillary data product, a local
incidence angle map (i.e., η_l(i, j)) should be provided with each terrain corrected
image. This map, in conjunction with the calibrated image, provides the
investigator with both the backscatter coefficient and the incidence angle
for each resolution cell. Given this ancillary data set, the user can directly
characterize the target reflectivity as a function of imaging geometry.
Additionally, the incidence angle map provides information on the location of
the radar layover and shadow regions, which is important since these data cannot
be calibrated in terms of σ⁰.
Although it is somewhat beyond the scope of this text to derive the full set
of geometric conditions which would result in radar layover and shadow, we
should point out that it is a rather complex process to search over regions of
the digital elevation map (DEM) to determine if a secondary peak is intersected
by the radar beam. Figure 8.6d again illustrates radar shadow and layover
regions. An incidence angle map should indicate that segment ab is a layover
region since the local slope is greater than the incidence angle. Segment bc is a
normally illuminated (foreshortened) region where the local η values are
provided. Segments cd and ef are shadow regions and should be indicated as
such; even though the local slope α(i, j) is less than η(j), each is intersected by a
hidden ray, not the actual radar beam. Similarly, segments de and fg are
foreshortened regions where the local incidence angle is

η_l(i, j) = η(j) − α(i, j)    (8.2.31)
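The incidence angle map and the simple per-pixel layover and shadow tests described above can be sketched as follows. This omits the hidden-ray search the text warns about, and all array values are illustrative:

```python
import numpy as np

def incidence_angle_map(eta, slope):
    """Local incidence angle map, Eqn. (8.2.31): eta_l(i,j) = eta(j) - alpha(i,j).
    eta:   incidence angle per cross-track pixel (degrees), shape (Nr,)
    slope: local slope alpha(i,j) (degrees), positive toward the radar.
    Returns eta_l plus simple per-pixel layover and shadow masks; the
    hidden-ray search described in the text is deliberately omitted."""
    eta_l = eta[np.newaxis, :] - slope
    layover = slope > eta[np.newaxis, :]   # local slope exceeds incidence angle
    shadow = -slope >= eta[np.newaxis, :]  # back slope at least incidence angle
    return eta_l, layover, shadow

eta = np.linspace(20.0, 26.0, 4)             # illustrative swath of 4 range bins
slope = np.array([[0.0, 25.0, -30.0, 5.0]])  # one azimuth line of local slopes
eta_l, layover, shadow = incidence_angle_map(eta, slope)
print(eta_l[0], layover[0], shadow[0])
```

Delivered alongside a calibrated image, such a map lets the user flag the pixels whose σ⁰ values cannot be trusted before any reflectivity-versus-geometry analysis.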
Figure 8.7

Given that the imaging geometry is known, the incidence angle map can be generated in advance of the processing,
using a DEM. The radar data is not required for this process.
A final, perhaps more subtle, source of geometric distortion is specular point
migration. This occurs as shown in Fig. 8.7 for rounded hilltops, where the
predominant scatterer location is dependent on the incidence angle of the
transmitted wave. This effect can be important when registering two image
frames acquired at different incidence angles. For example, in stereo imaging,
where the relative target displacement from two images at different incidence
angles determines the target height, specular point migration can be a significant
error source.
8.3 GEOMETRIC RECTIFICATION
In this section, we will present algorithms for performing the image geometric
rectification. Our algorithms are based on a model of the sensor imaging
mechanisms and do not require tiepointing to derive the correction factors.
Essentially, there are three main categories of geometric rectification algorithms:
(1) Ground plane, deskewed projection; (2) Geocoding to a smooth ellipsoid;
and (3) Geocoding to a topographic map. Each of these algorithms uses the
pixel location technique previously described in Section 8.2. Therefore the
geometric calibration accuracy of the corrected data products is directly related
to the target location error.
8.3.1 Image Resampling

Given an interpolation filter of the form

V_o(i) = Σ_j c_j V_I(i + j)    (8.3.1)

where V_I, V_o are the complex input and output (amplitude) images, respectively,
and the c_j are complex resampling coefficients, it can be shown that the
interpolation of Eqn. (8.3.1) preserves the statistical distribution of the input data,
including all moments, if

Σ_j |c_j|² = 1    (8.3.2)

In other words, if the input complex image data samples are uncorrelated, then
a unit energy interpolation filter preserves the image statistical information.
For correlated data samples, with an autocorrelation function given by ρ_V, the
filter requirement becomes (Quegan, 1989)

Σ_i |c_i|² + 2 Re Σ_i Σ_{j>i} c_i c_j* ρ_V(i − j) = 1    (8.3.3)

Even when the filter coefficients are constrained to preserve the distribution
and moments with the criteria of Eqn. (8.3.2) and Eqn. (8.3.3), the autocorrelation
function, and therefore the texture of the resampled output image, will be altered
(except in the special case of nearest neighbor resampling). Depending on the
application of the data, other criteria for determination of the filter coefficients
may be used which are a better match to the desired image characteristics
(e.g., the impulse response function and sidelobe levels). In any case, a data
analysis or interpretation scheme that utilizes textural information must account
for the effects of resampling.

It is not unusual for the image geometric rectification to be applied to a
detected (intensity) image product. The detection process, which involves
squaring the real and imaginary values, doubles the spectral bandwidth of the
original image and therefore requires twice the sampling frequency of the input
image (see Appendix A). If the sampling is not doubled (which is usually the
case), aliasing occurs (the severity of which depends on the scene content) and
the detected samples will be correlated.

In the case of resampling the intensity image, we are again interested in
preserving the output image statistical distribution and the moments relative
to the input image. Since, as was discussed in Section 5.2, the input intensity
image has an exponential rather than a Gaussian distribution (as in the real
and imaginary components of the complex image), the image statistical
distribution will not be preserved. Assuming the intensity image is oversampled,
such that the data are independent, the interpolated image can be described in
terms of gamma distributions (Madsen, 1986).

Given an interpolation filter of the form

I_o(i) = Σ_j d_j I_I(i + j)    (8.3.4)

where I_I, I_o are the input and output (intensity) images, respectively, and the d_j
are real interpolation coefficients, preservation of the image mean sets a condition
on the resampling coefficients of

Σ_j d_j = 1    (8.3.5)
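The unit-energy condition of Eqn. (8.3.2) can be verified numerically: a unit-energy complex kernel leaves the second moment of uncorrelated complex Gaussian samples unchanged. The kernel weights below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary short complex kernel, normalized to unit energy so that
# Eqn. (8.3.2), sum |c_j|^2 = 1, holds:
c = np.array([0.15, 0.6, 0.6, 0.15]) * np.exp(1j * 0.1)
c = c / np.sqrt(np.sum(np.abs(c) ** 2))
assert np.isclose(np.sum(np.abs(c) ** 2), 1.0)

# Uncorrelated circular-Gaussian samples, as in a well-sampled complex image:
v = rng.standard_normal(200_000) + 1j * rng.standard_normal(200_000)
out = np.convolve(v, c, mode="valid")
# The unit-energy filter preserves the second moment of the data:
print(round(float(np.mean(np.abs(v) ** 2)), 2),
      round(float(np.mean(np.abs(out) ** 2)), 2))
```

Note that the output samples are now correlated over the kernel length even though their moments are preserved, which is exactly the texture alteration the text cautions about.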
The preservation of the second moment and the variance requires (Quegan, 1989)

Σ_i Σ_j d_i d_j ρ_I(j − i) = 1    (8.3.6)

where ρ_I is the autocorrelation function of the intensity samples.

8.3.2 Ground Plane, Deskewed Projection

The output pixel spacings in the ground range and azimuth dimensions are
given by

Range: δx_gr(j) = c / (2 f_s sin η(j))    (8.3.8)

Azimuth: δx_az = V_sw / f_p    (8.3.9)

The parameter η(j) is the incidence angle at cross-track pixel number j. The
slant range to that pixel is given by Eqn. (8.2.8) and the magnitude of the swath
velocity V_sw is given by Eqn. (8.2.2).

The process to convert the input image to a ground plane deskewed projection
at uniform ground spacing is given by Curlander (1984). The output cross-track
and along-track pixel spacing arrays are first generated by

x_az(i) = i δx_az;   x'_az(i') = i' δx'_az;   i = 1, N_a;   i' = 1, N'_a    (8.3.10a)

x_gr(j) = Σ_{k=1}^{j} δx_gr(k);   x'_gr(j') = j' δx'_gr;   j = 1, N_r;   j' = 1, N'_r    (8.3.10b)

where x_az and x_gr are the azimuth and ground range input spacing arrays and
N_a, N_r are the input array sizes in azimuth and range, respectively. The primed
values are the output arrays. Typically the output spacing is chosen such that
δx'_az = δx'_gr, resulting in square pixels. The output spacing array thus serves as a pointer to
the input spacing array to generate the resampling coefficients. These coefficients
should be determined to preserve the image statistics according to the conditions
outlined in the previous section. The real and imaginary parts are resampled
separately.
In establishing the two one-dimensional resampling arrays in Eqn. (8.3.10),
we assumed that the azimuth and range input pixel spacings were independent.
While it is true that the range spacing is independent of azimuth, the azimuth
spacing does have some dependence on range position. This comes from the
target velocity term in Eqn. (8.2.2), which can be approximated by

(8.3.11)

where α_i is the orbit inclination angle and ζ_t is the geocentric latitude. We can
evaluate the error resulting from the assumption that V_t is constant within an
image frame. For a 100 km swath, the worst case latitude error at the swath
edge is less than 0.5° and the associated scale error is less than 0.05%. Therefore,
across a 100 km swath image, the assumption that azimuth pixel spacing is
independent of range position results in a worst-case distortion of 50 m.
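The worst-case figure quoted above is simple arithmetic:

```python
# Worst-case azimuth distortion from treating the azimuth pixel spacing as
# independent of range: a 0.05% scale error across a 100 km frame.
swath = 100e3          # frame extent, m
scale_error = 0.0005   # 0.05%
distortion = swath * scale_error
print(round(distortion, 6))  # metres
```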
An additional consideration is that the uncorrected SAR image is naturally
skewed unless the data is frequency shifted to zero Doppler during the processing.
For spaceborne systems, either the earth rotation or an off-broadside (squint)
imaging geometry will result in a Doppler shift in the echo data (Fig. 8.8a).
Assuming the processing is performed at the Doppler centroid, an image range
line is skewed relative to its orientation on the earth (Fig. 8.8b). Thus, the
output image must be deskewed according to its relative change in Doppler.
Using the near range pixel Doppler centroid as a reference (i.e., j = 1), this skew
is given by

(8.3.12)

where Δn_SK is in output azimuth pixels. For most systems this deskew can be
approximated as a linear function where

Δn_SK(j) ≈ k_SK j    (8.3.13)

where k_SK is a skew constant approximated from Eqn. (8.3.12). The deskew
operation is not required if the azimuth reference function is centered about
zero Doppler and the data is shifted (by applying a phase ramp) prior to azimuth
compression (Fig. 8.8c). The zero Doppler approach is efficient for small Doppler
shifts, but can cause significant complexity in the azimuth correlator for large
squint angles.

If the platform squint (yaw, pitch) rate requires that the Doppler centroid
be updated along track, then each azimuth processing block must be deskewed
separately and, in general, resampled prior to merging the blocks into the final
image frame. In practice this azimuth resampling can be avoided by including
a phase shift in the azimuth reference function. If the Doppler shift is increasing
block-to-block (i.e., larger skew), then an additional overlap between processing
blocks is required to ensure that there are no gaps in the merged image following
deskew.

Figure 8.8 Illustration of image skew from earth rotation induced Doppler shift: (a) Plot of
iso-Doppler lines; (b) Image format when processed to Doppler centroid; (c) Image format when
processed to zero Doppler (Courtesy of K. Leung).

The residual angular skew in the rectified image as referenced to an orthogonal
coordinate system is a key measure of geometric fidelity. Typical numbers for
high precision image products are skew errors less than 0.1° and image
orientation errors relative to some reference line (e.g., true north) of 0.2°. Skew
errors are predominantly processor induced artifacts from errors in the Doppler
parameter estimation, while orientation errors arise from both skew errors and
ephemeris errors (primarily platform velocity).

8.3.3 Geocoding

Geocoding is the process of resampling the image to an earth fixed grid such
as Universal Transverse Mercator (UTM) or Polar Stereographic (PS) map
projections (Graf, 1988). A key element for routine production of geocoded
products is the use of the radar data collection, processing, and platform
parameters to derive the resampling coefficients. The technique described here
is based on using a model of the SAR image geometric distortion rather than
operator intensive interactive routines such as tiepointing (Curlander et al.,
1987). The geocoding routine is based on the absolute pixel location algorithm
described in Section 8.2.2. Recall that this technique relies on the inherent
internal fidelity in the SAR echo data to determine precise sensor to target
range and antenna pointing (squint angle), without requiring specific information
about platform attitude or altitude above nadir. The geocoding procedure
generally consists of two steps: (1) Geometric rectification; and (2) Image
rotation.

Geometric Rectification to Map Grid. The initial step in the rectification
procedure is to generate a location map for each image pixel using the location
algorithm in Section 8.2.2. Here we assume a smooth geoid at some mean target
elevation for the entire image frame. Following generation of this location map,
the image pixels can be resampled into any desired cartographic projection by
mechanization of the equations appropriate for the desired earth grid. A good
reference for these map projections is published by the United States Geological
Survey (Snyder, 1983). The relationship between the complex image pixels in
the slant range-Doppler reference frame and the map projection can be
expressed in terms of coordinate transformations as follows (see Fig. 8.9)

(x, y) = T₁(x', y')    (8.3.14a)

(x', y') = T₂(ℓ, p)    (8.3.14b)

where (x, y) is the coordinate frame defined by the original SAR image, (x', y')
is the coordinate frame of the rectified image, (ℓ, p) is the coordinate frame
defined by the map grid, and β is the angle between grid north and y'
(Fig. 8.9). The coordinate system transformations are given by T₁ for the rectified
to original image and by T₂ for the geocoded to rectified image. A method for
calculating β is presented in the next subsection.

The rectified image is in a grid defined by (x', y') where the abscissa (x') is
parallel to the cross-track direction and the ordinate (y') is parallel to the
spacecraft velocity vector at the frame center. A rectified image in the geocoded
format is generated by rotation of the rectified image into a grid defined by
394
8.3
395
GEOMETRIC RECTIFICATION
where the coefficient set {ai, bi} of each block is derived from the corner locations.
The block size is selected according to the geometric error specification for the
output image.
The transformation in Eqn. (8.3.14a) requires resampling of the complex
image, which involves two-dimensional (2D) interpolation of each of the real
and imaginary components. To reduce the number of computations, these
equations can be rewritten such that each 2D resampling can be performed in
two one-dimensional (1D) passes. The decomposition of the 2D resampling
into two 1D resampling passes is performed as follows (Friedmann, 1981)
Pass 1:
x = e0 + e1u + e2v + e3uv,  y = v    (8.3.16)

Pass 2:
u = x',  v = f0 + f1x' + f2y' + f3x'y'    (8.3.17)
where the coefficient set {ei, fi} is determined from the set {ai, bi} for that block.
The first pass represents a rectification in the along-track direction and the
second pass represents a rectification in the cross-track direction as shown in
Figure 8.10. An intermediate image is generated by Pass 1 in the (u, v) grid and
the two-pass rectified image is in the desired (x', y') grid.
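The two-pass decomposition can be sketched numerically. The bilinear coefficient sets below are illustrative identity values, not coefficients derived from any real corner locations, and simple linear interpolation stands in for the higher-order interpolators a production correlator would use.

```python
import math

def resample_1d(line, coords):
    """Linearly interpolate `line` at the fractional positions `coords`."""
    out = []
    for c in coords:
        i = min(max(int(math.floor(c)), 0), len(line) - 2)
        frac = c - i
        out.append(line[i] * (1 - frac) + line[i + 1] * frac)
    return out

def two_pass_resample(img, e, f):
    """Pass 1: pull input column x = e0 + e1*u + e2*v + e3*u*v for each row v.
       Pass 2: pull intermediate row v = f0 + f1*x' + f2*y' + f3*x'*y' for each column x'."""
    rows, cols = len(img), len(img[0])
    # Pass 1: 1D resampling along each row (y = v unchanged).
    inter = [resample_1d(img[v],
                         [e[0] + e[1] * u + e[2] * v + e[3] * u * v for u in range(cols)])
             for v in range(rows)]
    # Pass 2: 1D resampling along each column (u = x' unchanged).
    out = [[0.0] * cols for _ in range(rows)]
    for xp in range(cols):
        col = [inter[v][xp] for v in range(rows)]
        vals = resample_1d(col,
                           [f[0] + f[1] * xp + f[2] * yp + f[3] * xp * yp for yp in range(rows)])
        for yp in range(rows):
            out[yp][xp] = vals[yp]
    return out

# Identity coefficient sets leave the image unchanged.
img = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
same = two_pass_resample(img, e=[0, 1, 0, 0], f=[0, 0, 1, 0])
```

With identity coefficients the image passes through both 1D stages unchanged; real coefficient sets would warp each row in Pass 1 and each column in Pass 2.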
Figure 8.9 Relationship between the rectified image coordinate frame (x', y') and the geocoded image coordinate frame (l, p); the p axis points to grid north.

Figure 8.10 Two-pass rectification: an along-track pass followed by a cross-track pass.
β ≈ sin^-1 (cos αi / cos ψ)    (8.3.18)

where αi is the orbit inclination angle and ψ is the geodetic latitude at the frame
center. The geocoded grid (l, p) is related to the rectified grid (x', y') by the rotation

(x')   ( cos β   sin β) (l)
(y') = (-sin β   cos β) (p)    (8.3.19)

where β is the image rotation angle. Again, 2D resampling of the complex image
to effect the rotation can be separated into two 1D resampling passes by
decomposing the rotation matrix into the following form

(x')   (   1       0   ) (cos β  sin β) (l)
(y') = (-tan β   sec β ) (  0      1  ) (p)    (8.3.20)

Pass 1:
q = l cos β + p sin β,  r = p

Pass 2:
x' = q,  y' = -q tan β + r sec β

where Pass 1 represents an image shear along y' and Pass 2 is an image shear
along l. An intermediate image is generated in the grid (r, q) and the desired
geocoded image in (l, p).

Figure 8.11 Image rotation implemented as a vertical shear followed by a horizontal shear.

To minimize aliasing of the image data, oversampling must be performed
prior to image rotation (Petersen and Middleton, 1962). The amount of
oversampling required to avoid overlapping image spectra is given by

gp = 1 + tan β    (8.3.21)

The two-pass rectification, the two shear passes, and the oversampling can be
combined into a three-pass procedure:

Pass 1:
x = e0 + e1u + e2v + e3uv,  y = v

Pass 2:
u = q,  v = f0 + f1q + f2y' + f3qy',  y' = -q tan β + r sec β

Pass 3:
q = l cos β + p sin β,  r = p

where the coefficient set {ei, fi} is determined from the set {ai, bi}. The images
in the grids defined by (u, v) and (q, r) are intermediate images generated during
the three stage resampling. The oversampling of the image data is incorporated
into the first pass. The cross-track rectification and an image shear are combined
into the second pass. The third pass is a second image shear and resampling
that takes the (q, r) coordinate intermediate image into a geocoded format.
Figure 8.12 illustrates the intermediate stages during generation of a geocoded
image using the above scheme. The along-track corrections are applied and the
image is oversampled in the first pass. In the second pass, the cross-track
corrections are applied and the image is sheared. A final shear and an
undersampling in azimuth then transform the image into the desired output
grid. An example of this algorithm as applied to Seasat data is given in
Fig. 8.13. This image is from an ascending pass (Revolution 545) of an area near
Yuma, Arizona (ψ ≈ 33°N). A small segment of the original 100 km image frame
was selected for processing. The unrectified image data (detected from the
complex format for illustration) is shown in Fig. 8.13a. This image is oriented
at an angle, β = 21.9°, relative to true north as determined from Eqn. (8.3.18)
for the Seasat inclination angle, αi = 108°. Figures 8.13b, c, d show the outputs
of the three resampling passes. Note that the final image in UTM projection
aligns the agricultural field boundaries with the image line and pixel axes.
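The rotation angle and the shear decomposition can be checked numerically, using the form β = sin^-1(cos αi / cos ψ) for the angle; the frame-center latitude below (33°N) is an approximation for the Yuma scene, and the product of the two shear matrices should reproduce the rotation matrix exactly.

```python
import math

# Rotation angle for the Seasat inclination and the approximate Yuma latitude.
alpha_i = math.radians(108.0)   # Seasat orbit inclination
psi = math.radians(33.0)        # approximate frame-center latitude (assumed)
beta = math.asin(math.cos(alpha_i) / math.cos(psi))
# abs(degrees(beta)) is about 21.6 deg for these inputs (the text quotes 21.9).

# Verify that the vertical-shear x horizontal-shear product equals the rotation matrix.
b = abs(beta)
shear_v = [[1.0, 0.0], [-math.tan(b), 1.0 / math.cos(b)]]
shear_h = [[math.cos(b), math.sin(b)], [0.0, 1.0]]
prod = [[sum(shear_v[i][k] * shear_h[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
rot = [[math.cos(b), math.sin(b)], [-math.sin(b), math.cos(b)]]
```

The second row of the product works out to (-tan β cos β, -tan β sin β + sec β) = (-sin β, cos β), which is exactly the second row of the rotation matrix.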
The UTM projected image in Fig. 8.13 can be compared with a geocoded
image from a descending Seasat pass covering the same area (Rev. 681) as
shown in Fig. 8.14. The ease with which changes can be detected between the
various fields in the two images demonstrates the benefits of using a common
coordinate system for representing the data products. A second example given
in Fig. 8.15 compares a geocoded Seasat scene to a SIR-B scene again covering
Figure 8.12 Three-pass resampling scheme: Pass 2 applies the rectification and a vertical shear; Pass 3 applies a horizontal shear to complete the rotation.
Figure 8.13 Seasat image of Yuma, AZ (Rev. 545) showing intermediate geocoded products: (a) Original image; (b) Pass 1 output is azimuth corrected and oversampled; (c) Pass 2 output is range corrected and range skewed; and (d) Pass 3 output is azimuth undersampled and azimuth skewed.
the same ground area. These data sets, acquired six years apart, demonstrate
the utility of the geocoded format for monitoring changes in land use. However,
the most striking difference in the two images is the distortion in the mountainous
region. Seasat had an incidence angle, η = 23°, while this particular SIR-B
image was acquired at η = 44°. Since the geocoding was performed assuming
a smooth oblate ellipsoidal earth model, the foreshortening distortion (which
is more severe for Seasat) remains in the final image product. An extension of this
geocoding technique to account for variation in the local topography is described
in the following section.
8.3.4 Geocoding with Terrain Correction
Figure 8.14 Multitemporal geocoded Seasat images near Yuma, AZ: (a) Rev. 545, ascending pass; (b) Rev. 681, descending pass.
Figure 8.15 Multisensor geocoded images near Oxnard, CA: (a) Seasat image acquired at η = 23° in 9/78; (b) SIR-B image acquired at η = 44° in 10/84.
time geocoding system is required, this approach greatly simplifies the design
(Chapter 9).
Given the target elevation values in the output grid, the next step is to
generate a latitude, longitude versus (i, j) pixel number map for the complex
slant range SAR image, using the location algorithm outlined in Section 8.2.2.
For a given element in the output grid (l0, p0), the fractional pixel location in
the original SAR image (i0, j0) is determined by a two-dimensional coordinate
transformation of the output image to the input grid, as described in the previous
section. This transformation provides the target location Rt(0) in the original
image assuming a smooth geoid as shown in Fig. 8.16. The pixel number (i0, j0)
uniquely identifies a time t(i0) and a range R(j0). This time is used to calculate
the spacecraft position Rs(i0) from an orbit ephemeris file by polynomial
interpolation. The spacecraft ephemeris is nominally in a geocentric rectangular
coordinate system. For simplicity we assume the coordinate system is rotating
with the x axis at longitude zero (Greenwich meridian), the z axis at grid north,
and the y axis completing the right hand system.
The next step is to convert the geodetic latitude and longitude of the target
into this rectangular coordinate system. Given the reference ellipsoid in
Eqn. (8.2.13), the target position Rt(0) can be represented in terms of its
geographical coordinates
Figure 8.16 Illustration of the geometry producing relief displacement of terrain features in radar imagery.
x0 = Rt cos ψ cos λ    (8.3.22a)
y0 = Rt cos ψ sin λ    (8.3.22b)
z0 = Rt sin ψ    (8.3.22c)

where

Rt = Re Rp / (Re^2 sin^2 ψ + Rp^2 cos^2 ψ)^1/2    (8.3.23)

and ψ, λ are the geodetic latitude and longitude of the target and Re, Rp are
the equatorial and polar radii of the DEM reference ellipsoid. Similarly, the
geographic coordinates of a point at an elevation h above the ellipsoid
are given by

xh = x0 + h cos ψ cos λ    (8.3.24a)
yh = y0 + h cos ψ sin λ    (8.3.24b)
zh = z0 + h sin ψ    (8.3.24c)
From the spacecraft position vector Rs(i0) and the target position vectors Rt(0),
Rt(h), the relative slant range vectors to each target position are given by

R(0) = Rs(i0) - Rt(0)    (8.3.25a)
R(h) = Rs(i0) - Rt(h)    (8.3.25b)
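Equations (8.3.22)-(8.3.24) can be exercised directly. The equatorial and polar radii below are WGS-84-like values inserted for illustration; they are not necessarily the DEM reference ellipsoid of any particular product.

```python
import math

RE = 6378.137   # equatorial radius, km (illustrative WGS-84-like value)
RP = 6356.752   # polar radius, km (illustrative WGS-84-like value)

def target_position(psi_deg, lam_deg, h_km=0.0):
    """Rectangular target position from latitude, longitude, and elevation,
    following Eqns. (8.3.22)-(8.3.24)."""
    psi, lam = math.radians(psi_deg), math.radians(lam_deg)
    # Geocentric radius of the ellipsoid surface (Eqn. 8.3.23).
    rt = RE * RP / math.sqrt(RE**2 * math.sin(psi)**2 + RP**2 * math.cos(psi)**2)
    x0 = rt * math.cos(psi) * math.cos(lam)
    y0 = rt * math.cos(psi) * math.sin(lam)
    z0 = rt * math.sin(psi)
    # Elevation offset (Eqn. 8.3.24).
    return (x0 + h_km * math.cos(psi) * math.cos(lam),
            y0 + h_km * math.cos(psi) * math.sin(lam),
            z0 + h_km * math.sin(psi))

equator = target_position(0.0, 0.0)    # radius collapses to Re at the equator
pole = target_position(90.0, 0.0)      # radius collapses to Rp at the pole
```

The two limiting cases make a quick sanity check: at ψ = 0 the radius reduces to Re, and at ψ = 90° it reduces to Rp.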
(8.3.26)

where Δn_az is in samples and fR(h) is the Doppler rate at range R(h). The azimuth
and range pixel numbers of the displaced target are given by

(8.3.27)

(8.3.28)

Figure 8.17 Flowchart of the procedure for image geocoding with terrain correction: extract h from the DEM; calculate the geodetic coordinates; calculate the displacement between R(h) and R(0); and adjust the resampling locations in Pass 1 (along-track correction, oversampling), Pass 2 (cross-track terrain correction, skew), and Pass 3 (along-track skew and undersampling), yielding the geocoded, terrain corrected image.
Figure 8.18 Flowchart of the procedure for registering the image to the DEM: calculate the geodetic coordinates versus pixel number; correlate the DEM with the image; and calculate ΔRs from the misregistration and the target height h.
Figure 8.21 Comparison of ascending and descending Seasat images near Los Angeles, CA, geocoded to a smooth ellipsoid, with the same images geocoded to a DEM: (a) Rev. 351, ascending, smooth; (b) Rev. 660, descending, smooth; (c) Rev. 351, ascending, DEM; and (d) Rev. 660, descending, DEM.
radiometrically saturated. This arises from the increase in the effective scattering
area of a resolution cell sloped away from the radar (i.e., α+ in Fig. 8.6a). The
ground range resolution is given by

δRg = c / (2 BR sin(η - α))    (8.3.29)
8.20c and 8.21c, d and an increase in the total image power. A more correct
representation of the scattered power (from a unit ground area) would be to
normalize each pixel by the actual resolution cell area, which depends on the
local slope as derived from the DEM. Assuming no radiometric corrections
have previously been applied, the corrected image should first be multiplied by
a factor

g1 = sin(η - α)    (8.3.30)

to account for the increased (decreased) cell area resulting from a positive
(negative) surface slope.
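The slope dependence of the ground range resolution and of the area-normalization factor g1 can be illustrated with Seasat-like numbers; the 19 MHz range bandwidth and 23° incidence angle below are assumptions for this sketch, not calibrated system values.

```python
import math

C = 3.0e8                  # speed of light, m/s
B_R = 19.0e6               # range bandwidth, Hz (Seasat-like, assumed)
ETA = math.radians(23.0)   # incidence angle (assumed)

def ground_range_res(alpha_deg):
    """Ground range resolution for local slope alpha, per Eqn. (8.3.29)."""
    return C / (2.0 * B_R * math.sin(ETA - math.radians(alpha_deg)))

def g1(alpha_deg):
    """Area-normalization factor of Eqn. (8.3.30)."""
    return math.sin(ETA - math.radians(alpha_deg))

flat = ground_range_res(0.0)      # about 20 m for these numbers
toward = ground_range_res(10.0)   # slope toward the radar: the cell grows
```

A positive slope shrinks sin(η − α), so the resolution cell area grows and the g1 factor correspondingly de-weights the brighter return.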
A second radiometric correction factor, that is also incidence angle dependent,
is the antenna pattern. Given the polar antenna gain function, G(φ), where φ
is the off-boresight angle relative to the look angle γ, we can project this pattern
onto the ellipsoid. From Eqn. (8.2.5)

(8.3.31)

where Rth = |Rt(h)| is given by Eqn. (8.3.24) and Rh = |R(h)| is from Eqn.
(8.3.25b). The parameter Rs is the S/C altitude relative to the center of the
ellipsoid and γ is the actual look angle (i.e., antenna electrical boresight relative
to nadir including the platform roll angle). Thus for a given target at some
height, h, the parameters Rh and Rth are determined from Eqn. (8.3.24) and
Eqn. (8.3.25b). The off-boresight angle in the polar pattern is then calculated
from Eqn. (8.3.31). From this pattern, a second radiometric correction factor
to be applied to the terrain geocoded image is determined

(8.3.32)

where we have assumed the antenna is reciprocal. A final correction factor for
the range attenuation is given by

(8.3.33)
Combining these three corrections and assuming they are applied to the complex
data, then

gT(h, α) = (g1 g2 g3)^1/2    (8.3.34)

where g1, g2, and g3 are given by Eqn. (8.3.30), Eqn. (8.3.32), and Eqn. (8.3.33),
respectively. Equation (8.3.34) is the relative radiometric correction required to
normalize the received amplitude signal from a target at elevation h on a slope
α relative to the ellipsoid. To date, no system has operationally applied both
the radiometric and geometric topographic corrections to SAR image products
as described in this section.
8.4 IMAGE REGISTRATION
for the geocoded data products to which all instrument processing systems
adhere. In this area, not only is there a lack of consistency among processing
centers handling data from different sensors, but there is often little agreement
across processors for the same sensor. In an effort to solve the problem, a
number of committees have been formed to provide recommendations for
standards in spaceborne data. One group, the Consultative Committee on Space
Data Systems (CCSDS), has dealt mainly with downlink data stream formats.
A second group, the Committee on Earth Observations Satellites (CEOS), has
addressed specifically both optical (Landsat) and SAR data products in terms
of image format and presentation. However, a community consensus has not
been reached on key items such as standards for the ellipsoid, the map projection,
the output image grid spacing, or image framing within the grid. These will be
important topics of discussion for the multi-national working groups being
formed under the EOS program.
8.4.1 Mosaicking

The generation of large scale maps using SAR imagery requires a capability to
assemble multiple image frames or strips into a common grid. These mosaics
could then be cut into standard quadrants and stored in the database according
to a grid structure. One possible convention for selecting these quadrants is the
US Geological Survey map quadrant system. For example, in this system, the
250,000:1 maps have a quadrant on the order of 100-150 km on a side.

Given that the image data from the various sensors has been geocoded to
a standard database, the generation of a large scale mosaic is a relatively simple
process (Kwok et al., 1990). It is analogous to assembling a jigsaw puzzle onto
a template, where in our case the template is a map grid. The analogy is poor
in the sense that the geocoded images do not fit nicely together. Rather, there
is generally an overlap or gap between adjacent frames, and therefore there
needs to be a convention on how to merge these data; specifically, which portion
of the data is to be discarded or how the gaps are to be filled when generating
the image mosaics.

In general, even if the systematic geometric distortions have been properly
corrected in generating the geocoded image products, there remains a random
residual error in registering each frame to the map base. It is therefore necessary
to cross-correlate adjacent image frames (assuming there is sufficient overlap
region) to determine this residual misregistration error. Typically the correlation
is performed over a number of small patches along the overlap region and the
average misregistration is used to correct the offset. The new image is then set
into the grid, replacing the existing image data in the overlap region.

To blend the seams, a feathering process is needed. This procedure consists
of deriving the mean of the image in a small area on either edge of the seam
from a data histogram. An averaging process is applied in this border region
by adjusting the mean using a linear ramp function. Obviously, if a larger
boundary region is selected then the seam transition will be smoother. However,
this empirical correction applied to the image data in the boundary region may
degrade the calibration. Given two adjacent images acquired at different
incidence angles, the data in the overlap region will have a different mean
intensity since the σ0 varies as a function of η. The feathering process to blend
the seams adjusts this mean, and therefore degrades the calibration accuracy,
to generate an aesthetically pleasing image product. In principle the effect of
the smoothing can be accounted for in the calibration scale factors; however,
practically it is relatively complex to keep track of these correction parameters.
Therefore, this process should only be performed when generating photoproducts
or video displays for visual interpretation.

Figure 8.22 Mosaic of three Seasat image frames near Wind River, Wyoming, geocoded using a USGS 24,000 to 1 DEM.

An example of a three-frame mosaic using Seasat data covering an area of
geologic interest near Wind River, Wyoming, is shown in Fig. 8.22. The images
were first geocoded to a UTM projection at 12.5 m spacing using USGS 24,000
to 1 DEM data. The images were radiometrically corrected assuming a smooth
geoid. The individual frames were registered to each other using cross-correlation
and the seams smoothed by feathering the output. A second example of the
mosaicking process is a larger-scale Southern California mosaic as shown in
Fig. 8.23. This image, which is comprised of 33 Seasat geocoded frames, covers
approximately 240,000 km². It is particularly useful for studying the geologic
formations and fault lines in the region. Figure 8.24 is a 32-orbit mosaic of
Venus compiled from data acquired by the Magellan spacecraft. The image
dimension is approximately 500 km on a side. Each image strip comprising the
mosaic is 20 km wide and extends the entire vertical dimension of the image.
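The linear-ramp feathering used to blend the seams reduces, in a sketch, to a position-weighted average across the overlap region; the strip values below are hypothetical brightness levels, not data from any of the mosaics shown.

```python
def feather(strip_a, strip_b):
    """Blend two overlapping strips with a linear ramp so the seam
    transitions smoothly from strip_a's level to strip_b's level."""
    n = len(strip_a)
    out = []
    for i in range(n):
        w = i / (n - 1)   # ramp weight: 0 at the a-side edge, 1 at the b-side edge
        out.append((1 - w) * strip_a[i] + w * strip_b[i])
    return out

# Two strips with different mean intensity across a 5-pixel overlap.
blended = feather([100.0] * 5, [140.0] * 5)
```

The output ramps linearly from one strip's mean to the other's, which is precisely the mean adjustment that complicates calibration bookkeeping in the border region.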
Figure 8.23 Mosaic of 33 Seasat image frames of the Southern California region covering approximately 240,000 km².

Figure 8.24 Multiorbit mosaic of the "crater farm" region of Venus, centered at 27°S, 339°E. The largest of the craters shown is 50 km in diameter.
8.4.2 Multisensor Registration
Given two data sets, such as Seasat SAR and Landsat Thematic Mapper, the
data from each sensor can be geocoded into a common projection (e.g., UTM)
and grid spacing. There remains, however, a residual misregistration between
the two scenes that must be corrected before the pixels can be said to be
coincident. Generally, this registration is a relatively simple process for similar
data sets (e.g., Landsat Band 3 and Band 4), but the SAR image brightness,
which depends on surface roughness and dielectric constant, may not correlate
with the optical image brightness, which depends on the reflectance (i.e., chemical
composition) of the surface.
A good example of this discrepancy is shown in Fig. 8.25. Figure 8.25a is a
geocoded Seasat image without terrain correction, while Fig. 8.25b is a Landsat
Band 4 image. Both cover approximately the same 75 x 75 km ground area
near Yuma, Arizona. In the upper region of the image pair there is essentially
a radiometric reversal in the relative brightness. In this area, the ground cover
is a bare, sandy soil, which to the SAR is a low backscatter target, while to the
Landsat Band 4 detectors this region appears very bright. Also notable is the
detailed terrain information in the Seasat image (lower right) as compared to
the Landsat data. A third distinct difference is the grainy appearance of the
SAR image resulting from the speckle noise. This image pair clearly demonstrates
that conventional cross-correlation techniques are not sufficient to register the
two images to subpixel accuracy.
A more rigorous approach to the image registration problem is to extract
some feature or set of features that is known to be invariant across the data
sets. The traditional approach to extracting this feature set is to manually select
a set of tiepoints that are common across the multisensor data set. These
common points are then used as input to a polynomial warping routine
to correct the misregistration (Siedman, 1977). We previously described in
Section 8.3.4 how the SAR image could be precisely registered to a DEM by
illuminating the map from the SAR imaging geometry. A similar procedure can
be applied to the Landsat data as shown in Fig. 8.26. In this case, the DEM
is illuminated from the same sun angle as the Landsat image to obtain the
correct shadowing effect. The Landsat image is then cross-correlated with the
illuminated DEM to determine the residual translational misregistration. If this
technique is used, then both the SAR and the Landsat images are registered to
a common map base (e.g., the USGS 24,000: 1 DEM), and therefore they are
also coregistered.
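Estimating the residual translational misregistration by cross-correlation can be sketched in one dimension; the profiles below are synthetic, and a real system would correlate 2D patches and interpolate the peak to subpixel precision.

```python
def correlation_offset(ref, img, max_lag):
    """Return the integer lag that maximizes the cross-correlation of
    `img` against `ref` (positive lag: img is shifted to the right)."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i in range(len(ref)):
            j = i + lag
            if 0 <= j < len(img):
                score += ref[i] * img[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

ref = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0]
img = [0, 0, 0, 0, 0, 1, 3, 7, 3, 1]   # the same feature shifted right by 3
```

The correlation peak recovers the 3-sample shift; the weak, broad peaks discussed in the text correspond to cases where no such common feature dominates the two signals.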
The technique described above works well in high relief areas where the
DEM data can provide a common reference. However, given a global data set,
most of the data is either relatively flat terrain (or ocean), or there are no
precision DEMs available. For these data, an alternative image registration
approach is required. A number of candidate image processing techniques for
both the feature extraction and matching can be found in the literature. Many
of these techniques, although originally intended for other applications, can be
Figure 8.25 Comparison of geocoded images near Yuma, AZ: (a) Seasat image without terrain correction; (b) Landsat Band 4 image.
Figure 8.27 Generalized processing flow for multisensor registration of geocoded products: patch pre-selection; segmentation (edges, region boundaries, principal components); and matching of sub-patches (chamfer matching, binary correlation, dynamic programming).
Figure 8.26 Comparison of Landsat image framelettes with simulated imagery from DEM images:
(a) and (c) are simulated images; (b) and (d) are Landsat data.
used for multisensor registration. Fig. 8.27 shows a generalized flow for a
multisensor registration algorithm, where a number of techniques are made
available at each stage of the processing to accommodate the variety of sensor
and target types, as well as varying environmental conditions. A report
evaluating this approach to multisensor registration, including a number of
candidate algorithms, has been published by Kwok et al. (1989).
Consider, for example, the image pair presented in Fig. 8.25. A simple
cross-correlation would yield a very weak correlation peak (or peak-to-mean
ratio) in the region of the sand dunes, as a result of the dramatic radiometric
difference between the two images. A better approach would be to extract
features that are invariant across the two scenes. Three candidate techniques
are: ( 1) Edge operators; (2) Statistical analysis using the stationarity properties
of local regions; and (3) Principal component analysis. In the remainder of this
section we will address the edge operators in some detail.
There is a large body of literature on the subject of edge detection. However,
in almost all cases only optical image data are considered. For the SAR imagery,
since it is corrupted by speckle noise, techniques based on the first and second
order directional derivatives (e.g., Sobel or Roberts operators) will perform
poorly. This is especially true in terms of localization of the edges, since these
operators produce large responses in the edge region. Similar performance
limitations are characteristic of statistical edge operators such as those proposed
by Frost et al. (1982) and Touzi et al. (1988). An alternative procedure, using
a two-dimensional smoothing operator such as a Marr-Hildreth operator
(Marr et al., 1980) or a Canny edge detector (Canny, 1983, 1986), exhibits
significantly improved localization and edge detection performance relative to
the derivative and statistical operators.
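A minimal Marr-Hildreth-style sketch: convolve with a sampled Laplacian-of-Gaussian kernel and mark sign changes in the response. This is pure Python on a synthetic step edge, an illustration of the idea rather than a production detector (no noise handling, and no threshold on the strength of a zero crossing).

```python
import math

def log_kernel(sigma, half):
    """Sampled Laplacian-of-Gaussian kernel of size (2*half+1)^2."""
    k = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            r2 = x * x + y * y
            row.append((r2 - 2 * sigma**2) / sigma**4
                       * math.exp(-r2 / (2 * sigma**2)))
        k.append(row)
    return k

def convolve(img, ker):
    """2D convolution with replicated borders."""
    half = len(ker) // 2
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    ii = min(max(i + dy, 0), rows - 1)
                    jj = min(max(j + dx, 0), cols - 1)
                    acc += img[ii][jj] * ker[dy + half][dx + half]
            out[i][j] = acc
    return out

# Synthetic image with a vertical step edge between columns 7 and 8.
img = [[1.0] * 8 + [5.0] * 8 for _ in range(16)]
resp = convolve(img, log_kernel(1.0, 3))
mid = resp[8]
crossings = [j for j in range(15) if mid[j] * mid[j + 1] < 0]
```

The response changes sign exactly between the two columns that straddle the step, which is the localization property that the gradient-based operators lack on speckled data.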
An example of a Canny edge detector as applied to Seasat, Landsat TM,
and SPOT images is shown in Fig. 8.28. This region, the Altamaha River, GA,
shows a variety of target types (rivers, fields, roads, etc.). The Seasat image,
acquired in July, 1978, has a significantly greater number of detected edges,
primarily due to the statistical characteristics of the original image. The SPOT
(Band 3) and the Landsat (Band 4), both acquired in July, 1984, are markedly
similar, although there are textural differences in the images that give rise to
some dissimilar lines. Perhaps the key point demonstrated by this example is
that the matching routines must be adaptive, to optimize their performance for
a given set of data and imaging conditions. For example, the width of the Seasat
edge operator could be increased to reduce the number of spurious edges as
compared to the optical data. An example of the effects of varying the spatial
filter width is given in Fig. 8.29. In fact, the matching routine may require an
iterative procedure in which, for each pass, the filter parameters would be
adjusted until some cross-image similarity criterion is satisfied.
Figure 8.29 Effect of variation in spatial filter width parameter σ in Canny edge detector for SAR image of Altamaha River, GA: (a) Original image (512 x 512 pixels); (b) Edge image with σ = 1 pixel; (c) Edge image with σ = 2 pixels; (d) Edge image with σ = 4 pixels.
is clear that no fixed procedure or set of procedures will satisfy all the matching
requirements. This task is perhaps a good candidate for an artificial intelligence,
rule based approach for selecting the optimal algorithm and determining its
parameters. Furthermore, the computational load for some of the more
complex algorithms may mandate a distributed (massively parallel) processing
architecture or a neural network type implementation.

Independent of the final system design selected to perform the multisensor
registration task, the payoff in developing a capability to routinely generate
registered multilevel data sets will be far reaching. These products are crucial
for presenting the data in a format allowing derivation of the geophysical
parameter information, which in turn is used to drive large scale models of
the earth's global processes.
8.5 SUMMARY
This chapter completes our discussion of the SAR image calibration and
the correction algorithms. In Chapter 7 we presented the techniques for
characterizing the radar system transfer function using both internal and
external calibration devices. Throughout Chapter 7 we assumed a smooth geoid
in order to concentrate on the issues associated with radiometric calibration.
The problem of geometric distortion in the SAR imagery was presented in
Chapter 8. We initially reviewed the basic definitions of the geometric calibration
terms and introduced a set of parameters to characterize the image accuracy.
This discussion was followed by an error analysis to identify the key sources
of geometric distortion and target location error. However, the bulk of the
chapter was dedicated to the geometric correction algorithms.

We presented automated techniques to map the natural SAR correlator
output image (i.e., without resampling) into a rectified format (uniform pixel
spacings), either in the SAR azimuth/range grid or into a standard map grid.
Much of the discussion was centered around the techniques to perform the
image rotation and to compensate for the terrain effects. We presented a
three-pass resampling technique that requires only one-dimensional resampling
operations. We proposed a technique to correct for the terrain induced distortion
during the second pass. Specific equations were presented to calculate the pixel
displacement as well as the radiometric correction factors resulting from the
local relief.

The chapter concluded with a discussion of an application of geocoded
imagery to multiframe image mosaicking and multisensor image registration.
A number of examples were presented from Seasat SAR and Landsat TM data
sets to illustrate the pros and cons of the various algorithms. We compared the
performance of a number of edge detectors for matching, concluding principally
that much work remains to be done in the area of multisensor image registration.
A final point is that we are only now beginning to mechanize these radiometric
and geometric algorithms in terms of making them a part of the automated
REFERENCES
CHAPTER 9
THE SAR GROUND SYSTEM
1. Process control;
2. Data management;
3. Flexibility, evolvability; and
4. Reliability, maintainability.
The design process for the SAR correlator generally consists of the following
steps:
1. Definition of the processor requirements;
9.1

9.1.1
The extreme bounds for the Doppler centroid, fDc, and the Doppler rate, fR, must
first be established. This includes the limiting values that each parameter can
assume, as well as the maximum rate of change in both the along- and cross-track
dimensions. The rate of change of the Doppler parameters in the along-track
direction becomes critically important in selection of the correlation algorithm
since, for the frequency domain fast convolution technique, there is an inherent
assumption that the Doppler parameters are constant over the synthetic aperture
period. These parameters can be expressed in terms of the relative sensor to
target position and velocity vectors as follows:
fDc = -2 (R · V) / (λ |R|)    (9.1.1)

where R and V are the relative sensor to target position and velocity vectors.

(9.1.2)

(9.1.3)

(9.1.5)

The second term in Eqn. (9.1.4) is a small contributor to the Doppler rate.

Figure 9.1 Plot of fDc and fR for SIR-C C-band SAR at worst case attitude (yaw = 1.4°, pitch = -1.8°) as a function of slant range for two orbit inclinations (57°, 90°).
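The Doppler centroid, in the standard form fDc = -2(R·V)/(λ|R|), can be evaluated directly from relative state vectors; the geometry below is illustrative, not an actual ephemeris.

```python
import math

def doppler_centroid(r_rel, v_rel, wavelength):
    """f_Dc = -2 (R . V) / (lambda |R|), with R and V the relative
    sensor-to-target position and velocity vectors."""
    dot = sum(a * b for a, b in zip(r_rel, v_rel))
    mag = math.sqrt(sum(a * a for a in r_rel))
    return -2.0 * dot / (wavelength * mag)

LAM = 0.235  # Seasat L-band wavelength, m
# Broadside geometry: relative velocity perpendicular to the line of sight.
f_broadside = doppler_centroid((850e3, 0.0, 0.0), (0.0, 7.5e3, 0.0), LAM)
# Slight forward squint: a closing radial velocity component raises the centroid.
f_squint = doppler_centroid((850e3, 0.0, 0.0), (-10.0, 7.5e3, 0.0), LAM)
```

At exact broadside the centroid vanishes; attitude errors such as the yaw and pitch values in Figure 9.1 introduce exactly this kind of radial velocity component.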
(9.1.6)

(9.1.7)

(9.1.8)

Figure 9.2 Relative error as a function of normalized distance.

u_fR = 1 / (ḟR^max τc^2)    (9.1.10)

where ḟR^max is the maximum along-track rate of change of fR, we have assumed
a π/4 phase error, and τc is the coherent integration time. For the frequency
domain fast convolution algorithm, the processed block duration (from center
to edge of aperture) is

(9.1.11)
where Naz is the FFT length and Laz is the azimuth reference function length.
The update requirement is therefore
(9.1.12)
since within a data block the fast convolution algorithm requires that the
Doppler parameters remain constant. If the requirement in Eqn. (9.1.12) is not
met, the data must be preprocessed to correct for the phase errors (motion
compensation) or an alternative algorithm (e.g. time domain convolution) could
be used.
A matched filtering error in the Doppler centroid fDc results in lost signal
power and increased azimuth ambiguities. The maximum time between reference
function updates for a given fDc drift (i.e., ḟDc^max) is given by

u_d = (0.1) BD / ḟDc^max    (9.1.14)

The update time u_d could be increased by
performing motion compensation of the data before processing. The application
of this technique would require precise attitude rate information to perform
phase adjustment of each line.
In almost all cases, the cross-track update rates are driven by the Doppler
rate dependence on the slant range, as shown in Eqn. (9.1.4). Similar error
analyses can be applied to determine the maximum number of samples between
updates. Typical numbers are on the order of two to eight bins, depending on
the error specification.
9.1.2
The fraction of the Doppler bandwidth (Bp) used in the processor is a design
parameter determined by the azimuth ambiguity to signal ratio specification
(AASR), as defined in Chapter 6. A typical AASR specification is on the order
of -20 dB. Given the azimuth antenna pattern and PRF, the bandwidth Bp
can be determined, assuming a homogeneous target area, by
AASR = [ Σ_{m=-∞, m≠0}^{∞} ∫_{-Bp/2}^{Bp/2} G²(f + m fp) df ] / [ ∫_{-Bp/2}^{Bp/2} G²(f) df ]    (9.1.15)
where we have assumed that the allowable centroid error is 10% of the Doppler
bandwidth BD, which produces a relatively small degradation in the SNR and
AASR. Thus, a further requirement to use the fast convolution technique is

(9.1.13)

where G²(f) is the two-way azimuth antenna pattern. For example, consider
a spaceborne system with a uniformly illuminated azimuth aperture. Assuming
fp = 1.1 BD, from Eqn. (9.1.15) a value Bp = 0.75 BD provides an AASR that
meets the -20 dB specification.

The azimuth reference function length (in samples) is

Laz = fp Bp / |fR|    (9.1.17)

Note that, since fR is range dependent, the length of the azimuth reference must
be updated as a function of cross-track position to keep the azimuth resolution
constant.
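Equation (9.1.15) can be evaluated numerically for the uniform-aperture case; modeling the two-way pattern as G²(f) = sinc⁴(f/B0) and normalizing all frequencies to B0 are assumptions of this sketch.

```python
import math

def sinc4(x):
    """Two-way power pattern of a uniformly illuminated aperture (assumed model)."""
    if x == 0.0:
        return 1.0
    s = math.sin(math.pi * x) / (math.pi * x)
    return s ** 4

def aasr_db(fp=1.1, bp=0.75, n_amb=5, steps=2000):
    """Eqn. (9.1.15) by Riemann sum; frequencies normalized to B0 = 1."""
    df = bp / steps
    main = amb = 0.0
    for k in range(steps):
        f = -bp / 2 + (k + 0.5) * df
        main += sinc4(f) * df
        for m in range(1, n_amb + 1):
            amb += (sinc4(f + m * fp) + sinc4(f - m * fp)) * df
    return 10.0 * math.log10(amb / main)

ratio = aasr_db()   # on the order of -20 dB for fp = 1.1 B0, Bp = 0.75 B0
```

Widening Bp pulls more of the ambiguous sidelobe energy into the processed band, so the AASR degrades; this is the trade between azimuth resolution and ambiguity suppression the text describes.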
Azimuth FFT Block Length
The range FFT block size is determined by the number of samples in the echo
window and the reference function length. The range reference function length is

Lr = fs τp    (9.1.22)

where fs is the complex sampling frequency and τp is the transmitted pulse
duration. The range FFT length, Nr, is usually chosen to be the smallest power
of 2 that satisfies

Nr ≥ Lr / (1 - gr)    (9.1.23)
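The FFT length selection of Eqn. (9.1.23) is a short loop; the sampling frequency, pulse duration, and efficiency factor gr below are illustrative values only, not the parameters of any particular radar.

```python
def range_fft_length(fs_hz, pulse_s, g_r):
    """Smallest power of 2 with N >= L_r / (1 - g_r), where L_r = f_s * t_p."""
    l_r = fs_hz * pulse_s          # range reference function length, samples
    n = 1
    while n < l_r / (1.0 - g_r):
        n *= 2
    return n

# Illustrative numbers: 19 MHz complex sampling, 33.4 us pulse, g_r = 0.25.
n_fft = range_fft_length(19.0e6, 33.4e-6, 0.25)
```

Here Lr is about 635 samples, so the requirement Nr ≥ Lr/0.75 ≈ 846 selects a 1024-point FFT.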
The selection of the appropriate SAR correlation algorithm for data processing
is dependent on the signal data characteristics, the system throughput
requirements, and the output image quality specifications. There is no simple
procedure for evaluating the trade-offs among these factors. Rather, a fairly
complex analysis is needed, requiring consideration of the design and
implementation constraints in conjunction with signal processor architectures
and the available technologies. A fundamental trade-off to be made is the relative
importance of system throughput versus image quality.
The key element in the processing chain is the azimuth processing stage,
which involves formation of the synthetic aperture to focus the azimuth return
into a high resolution image. In this section, we consider the trade-offs between
what we consider to be the two fundamental azimuth correlation techniques:
(1) spectral analysis (e.g., unfocussed SAR or SPECAN); and (2) matched
filtering (e.g., frequency domain or time domain convolution). We recognize
that there are a number of other possible techniques, such as the polar
processor with step transform (Chapter 10), the hybrid algorithm (Wu et al.,
1982), and the wave equation processor (Rocca et al., 1989). Generally, these
techniques are used for special situations (e.g., inverse SAR, large squint angles,
high phase precision) and will not be considered here.
The processor performance in terms of output image quality depends on the
characteristics of the echo data. A primary characteristic driving algorithm
selection is the time bandwidth product of the azimuth signal. This parameter,
which is the product of the processing bandwidth and the coherent integration
time, is given by

    TB = B_p τ_cs    (9.2.1)
We will consider two commonly used spectral analysis algorithms. These are
the unfocussed processor, which applies no phase compensation to correct for
the quadratic phase history of the target, and the deramp-FFT or SPECAN
    φ_q = π (V_st τ_cu)² / (2 λ R)    (9.2.2)

where φ_q is the relative change in quadratic phase between the center and the
edge of the aperture and τ_cu is the unfocussed aperture time. For φ_q = π/4, the
coherent integration time for unfocussed SAR processing is

    τ_cu = √(λR/2) / V_st    (9.2.3)

Thus, for a spaceborne system such as Seasat, where λ = 0.235 m and R = 850
km, δx ≈ 316 m, which is too coarse for most science applications. However,
in the case of an airborne X-band system such as the Canadian STAR-1, where
λ = 3.2 cm and R ≈ 10 km, an unfocussed azimuth resolution of δx ≈ 13 m is
achievable with φ_q = π/4. This is acceptable for many applications.
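The two resolution figures quoted above follow directly from δx = √(λR/2); a minimal sketch (the helper name is ours):

```python
import math

# Unfocussed SAR azimuth resolution for phi_q = pi/4:
# delta_x = sqrt(lambda * R / 2), per Eqns. (9.2.2)-(9.2.4).
def unfocussed_resolution(wavelength_m, slant_range_m):
    return math.sqrt(wavelength_m * slant_range_m / 2)

print(unfocussed_resolution(0.235, 850e3))  # Seasat L-band: ~316 m
print(unfocussed_resolution(0.032, 10e3))   # STAR-1 X-band: ~12.6 m
```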
The unfocussed SAR processor was used by many of the early SAR systems.
This processor does not compensate for the along-track phase shift resulting
from the change in sensor-to-target range. In its most rudimentary form this
processor consists of summing adjacent pixels over the unfocussed aperture
length. Since the azimuth resolution is given by

    δx = λR / (2 V_st τ_cu)    (9.2.4)
where τ_cu is given by Eqn. (9.2.3). However, in general, this will not produce
good quality imagery, since the inherent assumption is that the beam is steered
to zero Doppler. For squint angles producing a significant Doppler shift (e.g.,
f_DC > 0.25 B_0), the azimuth ambiguities will be severe. Additionally, there is
uncompensated range walk, which will cause the targets to be dispersed in the
range dimension. Thus, a more practical algorithm requires a preprocessing
step where the data is multiplied by a factor W_n = A_n exp{j2π f_DC n / f_p} that
shifts it to zero Doppler and also weights the terms in the summation to reduce
the sidelobes. The data block should also be skewed by the range walk,
Eqn. (9.1.6), prior to summing to minimize the range dispersion.
The computational complexity of the unfocussed SAR algorithm in terms of
floating point operations (FLOP) per input data sample (assuming complex
data) can be evaluated as follows:
1. Azimuth reference function multiply (to shift to zero Doppler and weight
the sidelobes) requires one complex multiply per input sample
2. Summation of the elements in the data block requires one complex add per
input sample
Thus the aggregate computational complexity for the unfocussed SAR processor
is

    C_U = 8    (FLOP/sample)

where we have assumed six operations (four multiplies, two adds) per complex
multiply and two operations per complex add. Also, we have ignored the
computations required for the reference function generation, which are negligible
assuming the Doppler centroid is slowly varying relative to the image frame size.

Figure 9.3  Sensor to target imaging geometry for SAR. Unfocussed aperture for φ_q = π/4
(i.e., δR = λ/16) is given by Eqn. (9.2.3).
[Figure 9.4 (two-page figure): functional block diagrams of the candidate azimuth correlation algorithms — unfocussed SAR, deramp (SPECAN), frequency domain convolution, and time domain convolution.]
where nu is the update interval in range samples times the update interval in
azimuth blocks.
16 real multiplies
12 real adds
9.2.2
Summing the total number of operations in Steps 1-4 above, the aggregate
computational complexity in floating point operations (FLOP) for azimuth
correlation with the SPECAN algorithm per sample input to the azimuth
correlator is
    C_SA = 7/n_u + 5 log2 N_az + 34    (FLOP/sample)    (9.2.6)
where N_az = τ_cs f_p is the azimuth block size and τ_cs is the coherent integration
time.
For multiple block processing, typically the blocks will be overlapped, with
the samples from the edges of the block discarded. The fractional block-to-block
overlap is

    1 - g_a = ΔN / N_az

where ΔN is the number of samples in the overlap region. Then the multiblock
azimuth correlator computational complexity is

    C'_SA = C_SA / g_a    (9.2.6a)
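Eqns. (9.2.6) and (9.2.6a) are easily captured as helper functions (names are ours) to see the multiblock overhead; g_a = 0.6, for example, inflates the per-sample cost by a factor of about 1.7.

```python
import math

# SPECAN azimuth correlation cost per input sample, Eqn. (9.2.6),
# and the multiblock form, Eqn. (9.2.6a), where g_a is the block
# efficiency (fraction of each block kept after overlap is discarded).
def specan_flops(n_u, n_az):
    return 7 / n_u + 5 * math.log2(n_az) + 34

def specan_flops_multiblock(n_u, n_az, g_a):
    return specan_flops(n_u, n_az) / g_a

print(specan_flops(1, 1024))                  # 91 FLOP/sample
print(specan_flops_multiblock(1, 1024, 0.6))  # ~152, i.e. 1.67x single-block
```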
A rule of thumb for determining whether the SPECAN algorithm can be
effectively used is that the range curvature must be less than 1 pixel (Sack et al.,
1985). From Eqn. (9.1.7), setting N_RC = 1 we get

    |f_R| τ_cs² = 8c / (λ f_s)    (9.2.8)
Given the requirement for a high precision azimuth correlator that can produce
imagery at an azimuth resolution near the fully focussed aperture ideal
performance, spectral analysis algorithms are inadequate. The frequency domain
convolution (FDC) algorithm, which consists of two one-dimensional matched
filters (as described in detail in Chapter 4 ), provides a close approximation to
the exact two dimensional matched filter. This algorithm can be used for most
spaceborne systems operating in the nominal strip imaging mode, assuming
secondary range compression (SRC) is employed. For large squint angles (i.e.,
> 10 OH), an additional processing stage may be required (Chang et al., 1992).
The modification entails performing the azimuth transform prior to application
of the SRC.
The computational complexity of the FDC azimuth correlator given in
Fig. 9.4b can be assessed as follows. Assuming Naz input samples constitute the
azimuth processing block, the number of computations per data sample input
to the azimuth correlator (for processing a single block of data) can be broken
down as follows:
8 real multiplies
6 real adds
3. Azimuth L_az-point reference function generation (time domain) and N_az-point transform
where B_p = τ_cs |f_R|. Thus Eqn. (9.2.8) gives the maximum TB, and therefore the
maximum block size that can be used in the SPECAN algorithm, assuming the
range curvature cannot exceed one range bin. For Seasat, where f_s = 22.76
Msamples/s and λ = 0.235 m, the maximum TB is 449. The resulting coherent
integration time is on the order of τ_cs = 0.95 s, which is equivalent to an azimuth
resolution at a range R = 850 km of δx ≈ 14 m, as compared to 19.5 km for
the real aperture resolution, 316 m for the unfocussed SAR processor, and about
6 m for the fully focussed synthetic aperture. For a system such as the ESA
ERS-1, where λ = 5.6 cm and f_s = 19 Msamples/s, the maximum TB = 2256,
which results in a maximum τ_cs = 1.0 s, which is greater than the nominal full
aperture observation time.
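The two TB figures quoted above can be reproduced directly from Eqn. (9.2.8), TB = |f_R| τ_cs² = 8c/(λ f_s); a quick check (the function name is ours):

```python
# Maximum time-bandwidth product under the one-range-bin curvature
# rule, Eqn. (9.2.8): TB_max = 8c / (lambda * f_s).
C_LIGHT = 3.0e8  # speed of light, m/s

def tb_max(wavelength_m, fs_hz):
    return 8 * C_LIGHT / (wavelength_m * fs_hz)

print(round(tb_max(0.235, 22.76e6)))  # Seasat: ~449
print(round(tb_max(0.056, 19e6)))     # ERS-1: ~2256
```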
where nu is the cross-track update interval (in samples) times the along track
update interval (in blocks)
4. Reference function multiply
1 complex multiply
5. N_az-point inverse FFT
8 real multiplies
6 real adds
3. Time domain L_az-point complex convolution:
   L_az complex multiplies
   L_az - 1 complex adds
(9.2.11)
Thus, for example, if the reference function length plus the block skew is 40%
of the block size, then g_a = 0.6 and 1.7 times as many computations per input
pixel are required for multiblock processing than for processing a single block.
We have also assumed that the squint angle is relatively small, such that the
standard frequency domain convolution algorithm can be used. For larger
squint angles, the algorithm must be modified to perform the forward azimuth
FFT prior to the secondary range compression, thus requiring an additional
two corner turns for the data and an additional complex multiply per sample.
9.2.3
The most precise approach for SAR correlation is the matched filter time domain
convolution (TDC) algorithm. Conceptually it is the simplest algorithm for
(9.2.12)
    T_p = 1/f_p = 607 μs

We have converted the Seasat real sampling frequency to complex samples by
dividing by 2. After range compression the range line length is

    (9.2.14)

Figure 9.5  Computational complexity versus azimuth reference function length (L = 2^n) for the time domain convolution and spectral analysis (deramp FFT) algorithms, with the ERS-1 and Seasat operating points indicated.
9.2.4
    (9.2.13)

λ = 0.235 m
R = 850 km
L_a = 10.7 m
the processor efficiency from Eqn. (9.1.19) is g_a = 0.5. Since we are performing
multiblock processing, the computational complexity from Eqn. (9.2.9) and
Eqn. (9.2.11) is
or

    C_SA ≈ 86 FLOP/input sample.

Thus

    C'_FDC = 328 FLOP/input sample

and the real-time processing rate is

    R'_FDC = C'_FDC N_R / T_p    (9.2.15)

    R'_FDC = 3.28 × 10^9 FLOPS

Thus, real-time full aperture azimuth compression of the Seasat SAR data using the
frequency domain fast convolution algorithm requires an azimuth correlator
capable of executing nearly 3.3 GFLOPS!
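The 3.3 GFLOPS figure follows from Eqn. (9.2.15). N_R below is not stated at this point in the excerpt; it is inferred from T_p = 1/f_p = 607 μs and the quoted result, so treat it as an assumption of this sketch:

```python
# Real-time azimuth compression rate for Seasat, Eqn. (9.2.15):
# R'_FDC = C'_FDC * N_R / T_p, with T_p the interpulse period.
fp = 1646.75        # Seasat PRF, Hz
Tp = 1.0 / fp       # interpulse period, ~607 us
C_fdc = 328         # FLOP per complex input sample (FDC)
N_R = 6073          # complex samples per range line (inferred, not from text)

R_fdc = C_fdc * N_R / Tp
print(R_fdc / 1e9)  # ~3.28 GFLOPS
```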
    R_TDC ≈ N_R C_TDC / T_p ≈ 328 GFLOPS
Example 9.2

From Example 9.1, the full aperture reference function length is L_az = 4099
samples. For quarter aperture, four-look processing, L_az = 1025, which is less
than the maximum block size constraint. Since the block length must be a power
of 2, we select

    N_az = 1024 pulses
which is 100 times as many operations as the FDC and over 300 times the
computational rate of the SPECAN algorithm.
In summary, the SPECAN algorithm requires the fewest computations of
the three azimuth correlators (excluding the unfocussed SAR) and can provide
reasonable image quality for small time bandwidth product (TB) data sets such
as the ESA ERS-1. To achieve the full azimuth resolution for larger TB data
sets, either the time domain or the frequency domain convolution algorithms
can be used. The time domain convolution is inherently more precise, but at
an extremely large computational cost for spaceborne systems, since its
computational complexity increases linearly with the number of pulses in the
synthetic aperture. The frequency domain convolution provides a good
compromise between throughput and image quality in that, for most systems,
the image degradation is very small relative to TDC, but the computational
requirements are on the order of the SPECAN algorithm.
9.2.5
Range Correlation
For the cross-track or range dimension processing we will only consider the
frequency domain fast convolution algorithm. Similar to the azimuth correlation,
For Seasat, f_p = 1646.75 Hz. The per-sample computational complexity of the
range correlator is

    C_r = (6 + 10 log2 N'_r) / g_r    (9.2.16)

The number of FFT blocks spanning the range line is

    N_e = Int[N_r / (N'_r - L_r + 1)] + 1    (9.2.17a)

where Int represents the integer operation. For Seasat, N_e = Int(4.7) + 1 = 5.
The range efficiency factor is given by

    g_r = N_r / (N_e N'_r) = 6840 / (5 × 2048) = 0.67    (9.2.17b)

Therefore

    C_r = 173 FLOP/input sample

The computational rate can be reduced by increasing the processor block size.
If a processing block of N'_r = 8192 were selected, then N_e = 1 and g_r = 0.83,
with a real-time rate from Eqn. (9.2.18) of
In the above analysis we have assumed that the residual block fraction at the
end of the range line is processed as a full block. If this fractional block is
Since the computational load on the processing system for range correlation is
dependent on the processor block size, unless there is a large change in Doppler
across a range line, requiring an update in the reference function secondary
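The arithmetic of Eqns. (9.2.16)-(9.2.17b) can be sketched as follows. Since the Seasat range reference length entering Eqn. (9.2.17a) is not reproduced in this excerpt, the function takes the block count N_e directly; all names are ours.

```python
import math

# Range correlator cost per input sample, Eqn. (9.2.16), with the
# efficiency factor g_r = N_r / (N_e * N'_r) of Eqn. (9.2.17b).
def range_flops(n_r, n_fft, n_e):
    g_r = n_r / (n_e * n_fft)
    return (6 + 10 * math.log2(n_fft)) / g_r

print(range_flops(6840, 2048, 5))  # ~173 FLOP/sample (g_r = 0.67)
print(range_flops(6840, 8192, 1))  # ~163 FLOP/sample (g_r = 0.83)
```

The larger block costs less per sample despite the longer FFT because far less of each block is discarded.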
9.3
9.3.1
The design process to determine the system architecture must consider more
than just the basic computational rate of a machine (Hwang, 1987). Initially,
a trade-off study should be performed to prioritize the relative importance of
the system throughput versus flexibility. In other words, the more specialized
we can make the processor to generate a single type of output with a similar
set of processing parameters (i.e., block size, FFT length, range migration, etc.),
the better we can tailor the architecture to achieve extremely high throughput.
A second, equally important, consideration is the radiometric accuracy
requirement. If high precision radiometric calibration is not required, we can
for example consider fixed point arithmetic for the mathematical operations,
or truncate the range correlator output prior to corner turn. If however a high
precision output is required, a full floating point (or a block floating point)
representation is needed, increasing the complexity of the correlator hardware.
A third key design parameter is the resolution requirement. The resolution
specification on the output image product not only impacts the number of
computations per input data sample, as discussed in the previous section, but
is also a key driver determining the required processor memory capacity.
To optimize the implementation of the azimuth correlator, an important
parameter to consider is the fraction of computations that are FFT operations.
This is shown in Figure 9.6 for the SPECAN and FDC algorithms. (The
unfocussed SAR and the time domain convolution do not require FFTs.) For
the frequency domain convolution, assuming the reference function length is
1-8 K samples, the fraction of FFT computations is over 80% of the total
computations. For the SPECAN algorithm this fraction is over 50%. Therefore,
the optimal architecture for implementation of these algorithms requires a highly
Figure 9.6  Fraction of the total computations that are FFT operations for the SPECAN and FDC algorithms, as a function of reference function length (L_az = 2^n).
efficient technique for performing FFTs. This will be addressed in detail in this
section for each of the architecture designs.
We will categorize the various SAR correlator architectures into what we
consider to be the three fundamental designs: (1) Pipeline; (2) Common Node;
and ( 3) Concurrent Processor. There are a number of possible variations or
combinations of these basic designs and we will address some of them with
examples of real systems. For each architecture, the key design parameters
to be considered are: (1) Peak I/O data rates; (2) Memory capacities;
(3) Computational requirements per processor; (4) Reliability/redundancy of
the design; ( 5) Maintainability/ evolvability of the design; and ( 6) Complexity of
the control system. These design parameters should be evaluated in conjunction
with the current technology to factor into the trade-off analysis the relative cost
of the hardware. For example, a memory requirement of 32 Mbytes is not
especially stringent with current technology, considering that 4 Mbit chips are
currently available. A typical cost per byte of RAM is on the order of 1/20 of
a cent. Thus, a 32 Mbyte memory might cost $16K. Conversely, if the architecture
requires an I/O bandwidth of 100 MB/s, that forces a departure from standard
data bus architectures (such as the VME bus), or even the newer fiber optic
ring networks (FDDI), to say an (as yet) unproven HSC star architecture,
which could be quite costly.
Perhaps the most important consideration that is overlooked by many system
designers is that the hardware technology evolves faster than the software.
Typically, new hardware (such as the high speed data bus architectures) will
operate in only a very limited environment. Using such equipment in a custom
designed SAR correlator could require a significant amount of software to be
developed at the microcode level. The software drivers necessary to communicate
with peripheral devices are a chronic problem for system engineers attempting
to incorporate the latest state-of-the-art technology into their system. It is
usually advisable when building an operational system to use equipment one
version removed from the most recent release. The system should be designed
such that technology upgrades can be incorporated within the basic structure,
requiring a minimum amount of redesign.
9.3.2
The optimal system architecture for achieving extremely high throughput SAR
correlation is the pipeline machine. A generalized functional block diagram of
a pipeline processor is presented in Fig. 9.7. The data is input to the processor
from some type of storage device (e.g., a high density tape drive or the SAR
sensor ADC). Each processor element (or functional unit) performs some type
of operation on the data array {x_1, x_2, ..., x_n} to generate a new array
{A_1(x_1, ..., x_n), ..., A_m(x_1, ..., x_n)}, where each operation A_i may be performed
on any or all of the input data samples. The pipeline consists of a series of these
functional units, connected by a data bus. The movement of data and the
arithmetic operations are controlled by a digital clock whose cycle time is
compatible with the hardware elements comprising each unit. The pipeline is
terminated by a second storage device whose I/O data rate requirements may
be either higher or lower than the input device, depending on the functional
operations applied to the data.
We can apply this generalized description of the pipeline processor to the
SAR correlator as shown in Fig. 9.8. In this simplified diagram of a pipeline
SAR processor, we first divide the processing task into modules that relate
directly to the major stages of the SAR processing: (1) Range correlation; (2)
Corner turn; (3) Azimuth correlation; (4) Geometric rectification; and (5)
Multilook filtering. Each of these modules may be further broken into functional
units. For example, the range correlator consists of a forward FFT unit followed
[Figure 9.8: simplified block diagram of a pipeline SAR correlator, with modules for range correlation, corner turn, azimuth correlation, geometric rectification, and multilook filtering.]
Figure 9.7  Generalized functional block diagram of a pipeline processor: functional units A, B, ..., N connected between input and output storage devices.
Figure 9.9 The Advanced Digital SAR Processor (ADSP) functional block diagram showing
control parameters to each module. (Courtesy of T. Bicknell.)
Figure 9.10
(8 boards/ set) are identical, and there are four such board sets (two for the
azimuth correlator and two for the range correlator). Similarly, the memory
boards used in the corner turn and multilook memories ( 14 total ) are designed
identically. This introduces the possibility of sharing these boards among the
various modules at the cost of throughput.
Consider the architecture of Fig. 9.11, where the range and azimuth
correlators share the same modules. Instead of a continuous data transfer, as
in the straight pipeline operation, the data is input to the bent pipe correlator
in bursts. Each burst is one processing block (N_az × N_r samples) of data. In
the first pass of the data through the system, the complex interpolator module is
bypassed, and the range reference function is read into the reference function
multiplier unit. The range compressed output is stored in RAM until range
processing of the data block is complete. The matrix transpose of this data
block is then fed back into the correlation module, which is reset for azimuth
compression. The complex interpolator can perform range migration correction
and slant-to-ground range correction in the same step, or alternatively it can
output the slant range imagery. The azimuth compressed output is again stored
in RAM until the block processing is complete. The feedback loop is then
switched, transferring the processing data block to the multilook module, while
the next block of data is input to the correlation module for range compression.
The correlator design described above is just one example of how a flexible
pipeline design could be used for SAR processing. In general, this approach is
less expensive in terms of the number of digital boards required to implement
the correlator. However, it does require a more complex control system to
switch the data paths, and it is significantly slower than the straight pipeline
architecture. The Alaska SAR Facility correlator was originally planned to be
a bent pipe design. However, a trade-off study of cost versus performance
indicated that the straight pipeline was the optimal approach.
Figure 9.11  Bent pipe correlator architecture: a correlation module (complex interpolator, FFT, reference function multiply, inverse FFT), multi-stage RAM corner-turn memory, and a detection/multilook filter module shared between the range and azimuth passes.
Figure 9.12  Generalized common node architecture: high density storage device(s), host/control CPU, arithmetic processor unit(s), and FFT unit(s) connected through a data transfer node/bus.
assuming the 5 bit real data stream is converted to a complex 8 bit I, 8 bit Q
representation in the input interface. This data is transferred via the node to
the range correlator module. Assuming both the reference function multiply
and the forward and inverse transforms are performed within the module, the output
data rate is

    r_2 ≈ 30 MB/s    (9.3.1)

where we have allocated 8 bytes for the complex floating point representation.
With r_1 = 22.53 MB/s at the input, we have similar transfer rates into and out of the
corner turn memory and the azimuth correlator before output to the HDDR. The
aggregate data rate through the node is therefore

    r_T = r_1 + 3r_2 = 112.5 MB/s    (9.3.2)

Figure 9.13a  Common node SAR processor architecture with computational units grouped
according to processing sequence.

If instead the computational units are grouped according to type, each intermediate
result passes through the node additional times and the aggregate rate grows to
r_T = r_1 + 7r_2 = 232.5 MB/s.

Figure 9.13b  Common node architecture with computational units grouped according to type.
For this system a 10: 1 slowdown is planned from the real-time E-ERS-1
data rate. This translates into an input data rate r 1 = 2 MB/s assuming the
data is unpacked into byte format. The data for each transfer is buffered in the
IOC-24; thus the aggregate data rate must include both inputs and outputs to
the IOC. Since the corner turn is performed in the IOC local memory, one
input/output transfer pair is eliminated. The aggregate data rate through the
IOC is therefore given by
(9.3.3)
f_p = 1680 Hz
τ_p = 37.1 μs
f_s = 15.0 Msamples/s
N_r = 6428 complex samples
L_r = f_s τ_p ≈ 557 complex samples

Figure 9.14  Common node architecture implemented by British Aerospace of Australia for
E-ERS-1 processing (Fenson, 1987). [Diagram: MicroVAX II high level control with console, APTEC IOC-24 with format buffer, range process (200 MFLOPS installed) and azimuth process (300 MFLOPS installed) PE banks, 106 Mb/s links.]
into Eqn. (9.3.4), we get r_2 = 3.94 MB/s, while from Eqn. (9.3.3) r_1 = 19.76 MB/s,
which provides an 18% margin for the peak I/O below the 24 MB/s maximum
capacity of the IOC-24.
Each processor element contains a RISC controller, local memory, and
arithmetic processors capable of 50 MFLOPs. To evaluate the number of PEs
required for E-ERS-1 azimuth correlation, from Eqn. (9.2.10) L_az = 1034
complex samples. Using L_az = 1024, N_az = 2L_az, g_a = 0.5, and n_u = 4, from
Eqn. (9.2.11) we get C'_FDC ≈ 267 FLOP per complex input sample. From
Eqn. (9.2.15), the real time processing rate is R'_FDC ≈ 2.63 GFLOPS. At one tenth
real-time (i.e., q = 0.1), the azimuth correlator must perform a minimum of
263 MFLOPs, which requires 6 PE boards. Assuming the range compression
is executed using the FDC algorithm with an 8K FFT, the range correlator
must perform 187 MFLOPS at one-tenth real-time, requiring four additional
PE boards.
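The board counts above reduce to a small amount of arithmetic; a sketch using the parameters of this example (variable names are ours, and N_r is taken as 5871 complex samples per range-compressed echo line, consistent with the 6428 raw samples and L_r ≈ 557 quoted earlier):

```python
import math

# PE sizing for one-tenth real-time E-ERS-1 processing with
# 50 MFLOP PE boards: required rate = C' * N_r * f_p * q.
PE_MFLOPS = 50.0
q = 0.1                                   # one-tenth real-time

az_mflops = 267 * 5871 * 1680 * q / 1e6   # ~263 MFLOPs for azimuth
rng_mflops = 187.0                        # range rate quoted in the text

print(math.ceil(az_mflops / PE_MFLOPS))   # 6 azimuth PE boards
print(math.ceil(rng_mflops / PE_MFLOPS))  # 4 range PE boards
```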
For processing at a rate one tenth real-time, the common node architecture
fits well within the current computing technology. Commercially available array
processors such as the STAR VP-3 (rated at 150 MFLOPS) or the Sky
Computer (SKYBOLT) board level processor (rated at 80 MFLOPS) could
also be used to meet the one tenth real-time performance requirement. For a
board level array processor system, the data transfer node becomes the internal
bus of the host CPU. At the one tenth real time rate it is feasible for a single
host computer, augmented by 6-7 SKYBOLT processors, to achieve the
450 MFLOPs necessary for simultaneous range and azimuth compression. There
is currently no single processor CPU that can meet this goal; however, there
have recently been a number of multiprocessor systems introduced (e.g., CRAY
Y-MP/4, ALLIANT 2800) that are capable of over 500 MFLOPS. However,
an architecture based around a supercomputer may not be the most cost effective
solution, since it does not readily provide for future expansion.
A reasonable compromise to the expense of the supercomputer would be
to use a superminicomputer class host with an attached processor to perform
the bulk of the computations. The SIR-C processing system employs such a
design. This architecture, shown in Fig. 9.15, utilizes an Alliant FX-8
superminicomputer with four high speed ports (HSPs) for data I/O (Test
et al., 1987). Each HSP is rated at 30 MB/s. The disk array, manufactured by
Maximum Strategy, performs an eight-way hardware stripe over a maximum
of 32 disks, providing a storage capacity of 64 GB, at an I/O data rate of 10-12
MB/s. This system is sufficiently flexible to process the large variety of SIR-C
data collection modes since it is fully programmable. Additionally, all elements
are redundant (including the disk array with a parity disk), providing for graceful
degradation with hardware failures. The SIR-C processor achieves 300 MFLOPS
across the two array processors and an additional 96 MFLOPS peak performance
with the eight Alliant processor boards. The limiting factor is the I/O to the
disk array since in this design the range and azimuth correlations are performed
within the array processor (32 MB memory) and the corner turn is performed
in the host CPU ( 192 MB memory). Each array processor operates on a different
data block, performing identical computations.
It should be noted that similar common node architectures are being utilized
by: NASDA for J-ERS-1; CCRS for the E-ERS-1 and RADARSAT; and the
DLR for the E-ERS-1 and X-SAR data processing. The popularity of this
architecture is traceable to its price/performance ratio, as well as the fact that
off the shelf computer hardware is adequate to meet most throughput requirements.
An advanced version of this architecture is being sponsored by several agencies
within the US Department of Defense. They plan to develop a general purpose
signal processor similar to that shown in Fig. 9.12. The primary objective of
this development program, which involves a number of major defense contractors
such as Hughes, TRW, IBM, and AT & T, is to develop a system incorporating
a high speed switch and a set of VHSIC processing modules that meet interface
standards in accordance with military specifications (e.g., ADA). The switch
(or data node) is more like an intelligent controller, recognizing when a process
(e.g., an FFT) is complete and routing the results to another processing module
for the next stage of processing. This system is a general purpose signal processor
specified to perform in the 2-3 GFLOPS range in an extremely compact
configuration (e.g., < 1 m³).
9.3.4
A third class of system architectures for the SAR signal processor is the
concurrent or parallel system. Here we are referring primarily to loosely coupled
multiprocessor systems with distributed local memory (Fig. 9.16), such as the
Figure 9.15 Common node architecture implemented by JPL for SIR-C processing.
Figure 9.16 Functional block diagram of a concurrent (massively parallel) processor: (a) Twodimensional topology; (b) Three-dimensional topology.
Single instruction multiple data (SIMD) systems are parallel processors which
operate synchronously under the same control unit. Physically the processor
elements (PEs) can be connected in any communication topology. For example,
the MPP is a two-dimensional (planar) array where each PE can transfer data
only to its four nearest neighbors. Conversely, the Connection Machine is an
n-cube topology where any PE can be connected to n other PEs according to
some predefined configuration that may be optimal for a given application
(Hillis, 1985).
The SAR correlation algorithm has been implemented on the MPP by a
group at the GSFC (Ramapriyan et al., 1984). A functional block diagram of
this system is shown in Fig. 9.17. The array unit (ARU) consists of a 128 x 128
(i.e., 16,384) array of PEs, each with its own 1024 bit local memory. The cycle
time is 100 ns (i.e., 10 MHz clock), however, each PE can perform only bit
serial arithmetic. The result is that this system is highly efficient for fixed point
operations, but its performance is dramatically reduced for floating point
operations. For example, the MPP is measured to perform 1.86 GOPS for
8 bit integer multiply operations, but only 39.2 × 10^6 complex multiplies per
second (Schaefer, 1985).
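Using the six-real-operation convention for a complex multiply adopted earlier in the chapter, the measured complex-multiply rate converts to an equivalent floating point rate, which makes the fixed/floating point disparity concrete:

```python
# MPP effective floating point rate implied by its measured
# complex-multiply throughput (6 real ops per complex multiply).
cmul_per_s = 39.2e6
flops_equiv = 6 * cmul_per_s
print(flops_equiv / 1e6)  # ~235 MFLOPS, versus 1.86 GOPS on 8-bit integers
```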
Data input (output) occurs through 128 bit wide ports at the 10 MHz clock
rate with 1 bit flowing to (from) each PE in the first (last) column of the array.
The array is controlled by an array control unit (ACU) which is microprocessor
based. The data management and application software are housed in the
Figure 9.17 Concurrent processor SIMD architecture used by Goodyear in the Massively Parallel
Processor (MPP).
The multiple instruction multiple data processors can be categorized into either
shared memory tightly coupled machines (e.g., Cray Y-MP/4), where a single
bus is shared by both the processors and the memory, or distributed memory
470
multicomputer systems, where each processor node has local memory and is
interconnected by some topology (e.g., ring, hypercube, etc.). In this section we
will address only the latter type of MIMD architecture. A number of MIMD
topologies have been created for specific processing applications, such as the
BBN butterfly switch (BBN Labs, 1986), where the arithmetic processors are
arranged to access other processors' local memories to efficiently execute the
FFT operation (among other signal processing tasks). As previously discussed,
both the communication efficiency and the program complexity are major
concerns in utilizing this type of architecture for SAR processing.
The EMMA hierarchy comprises 8 regions per system, 32 families per region,
and 128 PN boards per family.
The current system design uses 16 bit microprocessors (the iAPX 286 chip),
with a 32 bit bus architecture for future microprocessor upgrade.
The EMMA-2 architecture has been selected by the Italian Space Agency
(ASI) for E-ERS-1 fast delivery processing (Selvaggi, 198J). The requirement
is to produce three 100 km image frames per 100 minute orbit, which translates
into a throughput of about 1/120 of real-time. To achieve this throughput rate,
the SPECAN algorithm was selected with quarter aperture, multilook processing.
As previously discussed (Section 9.2.1 ), the maximum coherency time for spectral
analysis processing of the E-ERS-1 C-band data is on the order of 0.7 s such
that full aperture resolution can be achieved with negligible degradation.
Furthermore, the EMMA-2 implementation requires only quarter aperture
Figure 9.18 Concurrent processor MIMD architecture (EMMA-2).

coherency, which for E-ERS-1 is only 0.16 s. From Eqn. (9.2.6) and Eqn. (9.2.6a)

C_SA = 45

assuming f_p = 1680 Hz and N_g = 5871 complex pulses per echo line after range
compression.
To evaluate the computational rate for range compression, we will assume
the fast convolution (FDC) algorithm is used. The EMMA system architecture
of each processing node constrains the maximum FFT to N_F = 2048 complex
points. Inserting the E-ERS-1 parameter values of N_s = 6428 and L_r = 557
complex samples into Eqn. (9.2.17), we get g_r = 0.78 and N_e = 4.
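The segment count N_e can be reproduced with overlap-save bookkeeping for fast (FFT) convolution; the segment-count expression below is an assumption standing in for the book's Eqn. (9.2.17), but it recovers the same N_e = 4 for the E-ERS-1 parameters.

```python
import math

def fast_conv_segments(n_samples, l_ref, n_fft):
    """Number of FFT segments needed to range-compress one echo line
    with overlap-save fast convolution.

    n_samples : complex samples per range line
    l_ref     : reference function (chirp) length in samples
    n_fft     : maximum FFT size supported by the hardware
    """
    good_per_fft = n_fft - l_ref + 1      # valid outputs per segment
    good_total = n_samples - l_ref + 1    # valid outputs per line
    return math.ceil(good_total / good_per_fft)

# E-ERS-1 values from the text: N_s = 6428, L_r = 557, N_F = 2048
n_e = fast_conv_segments(6428, 557, 2048)
print(n_e)  # 4, matching N_e = 4 in the text
```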
From Eqn. (9.2.16)
and the multilook processor, about 100 PNs will be required for the E-ERS-1
task.
Concurrent Processor Reliability and Maintainability
9.4
POST-PROCESSOR SYSTEMS
Thus far in this chapter we have presented various aspects of the SAR correlator
architecture and design. The emphasis throughout the discussion was on the
need to produce image products at high data rates. The question that naturally
arises is what to do with the correlator output. In other words, there must be
a back-end data analysis and distribution system to handle the high output
data rate. In Fig. 9.19 we illustrate one possible approach to the design of the
back-end system. Following the correlator are two major processing elements:
the post-processor and the geophysical processor. The post-processor performs
Level 1B processing, which encompasses the radiometric and geometric correction
of the output imagery, as well as multilook averaging and the generation of
browse products. The geophysical processor (Level 2, 3) mosaics multiple SAR
image frames, formats them into map quadrants, performs SAR image registration
with other sensor or geographical data sets, and derives some geophysical
characteristic(s) from this product (e.g., wave height, soil moisture, surface
roughness). These geophysical measurements are then input to large scale models
for estimation of global processes such as ocean circulation or hydrological
cycles (Level 4 products). In this section, we will address specifically the
architectural trade-offs in the post-processing system. The details of the
post-processing algorithms were presented in Chapters 7 and 8.
In many SAR processing systems, the radiometric and geometric correction
procedures are not functionally separate from the SAR correlation process. In
fact, most of these operations can be incorporated into the SAR correlation
processing chain without additional passes over the data set. The functional
breakdown between correlation processing and post-processing assumed here
is just one possible design and is not necessarily optimal for the computational
performance aspects of the system. However, it does provide for maximum
flexibility in terms of the variety of output product types that can be produced.
A SAR processing system dedicated to a single application or user group may
combine a number of these processing steps with the range and azimuth
compression, since the variety of products is not required. Some of these
trade-offs were previously discussed in Chapters 7 and 8.

Figure 9.19 Functional block diagram of SAR ground data system: (a) Top level organization;
(b) Details of post-processing subsystem.

9.4.1 Post-Processing Requirements

The post-processor design depends on the data rate and data volume output
from the SAR correlator, the variety and accuracy requirements for the various
product types, and, perhaps most importantly, the precision required in the
computations. In our analysis, we will assume the SAR correlator produces
only single-look, complex, full resolution image data without any geometric
resampling or radiometric corrections applied. All multilook filtering and
detection operations are performed in the post-processor. In this formulation,
we move all output product options to the post-processor, resulting in a
correlator output that is of a single type, thus simplifying the archive. The
correlator processing is also reversible, allowing us to recover (most of) the raw
data by applying the inverse of the compression filters. This would permit an
archive of only the single-look image data without retention of the Level 0 raw
data. (To be fully reversible we must retain all partially filtered data, i.e., the
reference function length in each dimension, and perform full floating point
computations throughout the correlation.) The volume of data to be archived,
the location of the archive relative to the SAR correlator, and the quality of
the original raw data set (which indicates the amount of reprocessing required)
are key factors in determining at what level of data product the permanent
archive is to be maintained.

Data Volume and Throughput

For spaceborne SAR systems, such as SIR-C or E-ERS-1, the acquired data
volume is almost always constrained by the downlink data rate, r_OL. The range
line length in complex samples (ignoring the overhead from the ancillary data
headers) is given by

N_s = r_OL / (2 n_b f_p)    (9.4.1)
where n_b is the quantization. In Eqn. (9.4.1) we have assumed that the onboard
digital system time-expansion buffers the downlink data across the entire
interpulse period. After range compression, the number of good samples per
range echo line is given by

N_g = N_s - L_r    (9.4.2)
where T_dt is the datatake duration, then we can write the correlator instantaneous
output data rate as

r_co = n_u N_g f_p    (bytes/s)    (9.4.3)

where n_u is the number of bytes per pixel (e.g., for a 64 bit complex representation
n_u = 8). Substituting Eqn. (9.4.1) and Eqn. (9.4.2) into Eqn. (9.4.3), we get
(9.4.4)
where qd is the instrument duty cycle (i.e., the fraction of total time that the
SAR is operating). For a real-time processing system Eqn. (9.4.4) specifies the
input data rates that the post-processor must be capable of processing.
Example 9.6 Consider the Seasat parameters:

τ_p = 33.4 μs
n_u = 8 bytes/pixel
n_b = 5 bits/sample
f_p = 1646.75 Hz
f_s = 22.77 Msamples/s
r_OL = 112.7 Mbps

(Note that r_OL for Seasat, which had an analog downlink, represents the output
data rate from the ground digital units.) From Eqn. (9.4.4), the corresponding
correlator output data rate is

r_co = 40.1 MB/s
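Working backward from these numbers with Eqn. (9.4.3), r_co = n_u N_g f_p, gives the implied count of good samples per range line. This is a consistency check on the example, not a value quoted in the text.

```python
# Consistency check on the Seasat example via r_co = n_u * N_g * f_p.
r_co = 40.1e6        # correlator output rate, bytes/s
n_u = 8              # bytes per complex pixel
f_p = 1646.75        # pulse repetition frequency, Hz

n_g = r_co / (n_u * f_p)   # implied good complex samples per range line
print(round(n_g), "samples/line")

# Equivalently, the complex sample rate into the post-processor:
print(r_co / n_u / 1e6, "Mcomplex samples/s")  # ~5, as used in Section 9.4.3
```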
are to be applied to each input data sample. Since these correction algorithms
depend on system characteristics, such as the sensor stability over time, the
platform ephemeris and attitude accuracy, and the frequency and type of internal
calibration measurements, the number of operations could range from only a
few to several hundred per pixel, depending on the system stability. For this
reason, we emphasize the methodology for scoping the size of the post-processor,
followed by specific examples for a quantitative evaluation of the computational
rate.
9.4.2
Radiometric Correction
The internal calibration data includes: (1) engineering telemetry used to assess
system gain/phase errors or drift in the operating point of the system; (2)
receive-only noise power; and (3) calibration loop data such as injected
calibration tones or leakage pulses (e.g., chirps). The external calibration data
consists of images of point target calibration devices or distributed homogeneous
target sites.
For this analysis we assume that the calibration data evaluation in Steps 1
and 2 is performed offline by a dedicated calibration analysis workstation. This
is a reasonable assumption since a significant portion of the analysis may involve
operator interaction to select targets and interpret the telemetry data. Additionally,
much of the analysis is performed only occasionally since the time constants
for variation are large relative to the sampling period and the point target sites
are typically observed infrequently.
In its simplest form, the radiometric correction factor is a scalar array that
varies as a function of cross-track image pixel number. This correction factor
is dependent on
If we assume that the system is stable over some time period, after which a new
correction estimate is required, then the correction factor can be written as

K_r(I) = [ R^3(I) sin η(I) / ( G^2(I) G_STC(I) ) ]^(1/2)    (9.4.6)

where R(1) is the slant range to the first image pixel, η(I) is the incidence angle
at cross-track image pixel I, G(I) is the antenna pattern projected into the
image plane, and G_STC(I) is the sensitivity time control gain as a function of
time (sampling interval).
Typically, it is reasonable to assume that K_r(I) is independent of (slow) time
over scales of 10-15 s (which constitute an image frame), with the exception
of the roll angle rate. Changes in the roll angle will cause the antenna pattern
modulation and the incidence angle to change relative to the sampling window by

Δn_r/Δt = 2 f_s R θ̇ tan η / c    (samples/s)    (9.4.7)

where r_pi is the post-processor input rate in complex samples/s. For Seasat
real-time processing R_RC ≈ 10 MFLOPS for the radiometric correction.
9.4.3
Geometric Correction
Inherent in the SAR data is geometric distortion caused by the side looking
geometry, surface terrain, system sampling errors, and platform velocity
variation. Assuming the location of any pixel can be determined relative to a
fixed earth grid (e.g., UTM, Polar Stereographic), the images can be geometrically
rectified by performing a two-dimensional resampling (Siedman, 1977). The
pixel locations can be derived by tiepointing (either operator assisted or
automated), or predicted using a model for the sensor imaging geometry and
the target elevation. The latter approach requires precise knowledge of platform
(actually antenna phase center) position and velocity during the imaging period.
It should be noted that the geometric fidelity of the resampled image product
is not dependent on knowledge of the platform attitude. If the range and Doppler
information inherent in the echo data is used in the target location, as described
in Chapter 8, then the value of f_DC reflects the antenna yaw and pitch angles,
and the range gate is independent of roll angle. Therefore, the only significant
error contributors in the target location procedure are the satellite orbit
determination uncertainty and the target elevation relative to the reference geoid.
It has been shown that the aforementioned tiepointing procedure can be
used to geometrically rectify a SAR image using a polynomial warping algorithm
(Naraghi et al., 1983). However, this approach is ineffective for images with
significant relief due to the local distortion caused by foreshortening and layover
effects. A more precise technique, proposed by Kwok et al. (1987), uses only a
few point targets of known position (latitude, longitude, elevation) to refine the
accuracy of the ephemeris using the SAR range and Doppler equations. It
requires a minimum of two targets distributed in range to provide incidence
angle diversity and two targets in azimuth to determine the along-track scale
errors. This approach is described in detail in Chapter 8. The tiepoint selection
and image registration are performed offline in the calibration analysis workstation,
and therefore do not contribute to the post-processor computational rate
requirement.
Geometric Correction Procedure
For a spaceborne platform with a relatively small amount of drag, the position
errors (Δx, Δy, Δz) derived from a single site are highly correlated over a small
arc. Additionally, since the position and velocity errors are also highly correlated
with each other, the corrected platform ephemeris can be repropagated, thus
allowing all image data for that arc to be geometrically calibrated. The geometric
correction procedure is as follows:
To resample the image output from the SAR correlator to a ground projection,
with uniform pixel spacing in both azimuth and range directions, we first
generate a grid of location versus pixel number as discussed in the previous
section. The resampling process is as follows: Steps 1 and 2 interpolate the
data in azimuth (e.g., using an N_i point interpolator), requiring

N_i real multiplies per I and Q
(N_i - 1) real adds per I and Q

3. Repeat Steps 1 and 2 for the range dimension.
The aggregate number of floating point multiplies per complex input pixel is

(9.4.9)

where g_or and g_oa are the oversampling factors in range and azimuth,
respectively.
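The per-pass operation count above can be captured in a few lines. The aggregate expression here is an illustrative assumption (Eqn. (9.4.9) itself is not reproduced in this text), scaling the per-pass cost by the azimuth and range oversampling factors.

```python
def interp_flops_per_output(n_i):
    """FLOP per complex output sample for one interpolation pass:
    n_i real multiplies and (n_i - 1) real adds, each for I and Q."""
    return 2 * n_i + 2 * (n_i - 1)

def resample_flops_per_input(n_i, g_oa, g_or):
    """Illustrative aggregate cost per complex *input* pixel for a
    two-pass (azimuth then range) resampling: the azimuth pass emits
    g_oa outputs per input, the range pass g_oa * g_or."""
    per_out = interp_flops_per_output(n_i)
    return g_oa * per_out + g_oa * g_or * per_out

print(interp_flops_per_output(4))  # 14 FLOP per output sample for N_i = 4
```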
Example 9.7 Assume for the single-look Seasat image, where δx ≈ 6 m, that
a uniform output spacing of δx_az = 3.125 m is selected for the azimuth dimension
and δx_g = 12.5 m for the ground range dimension. The input slant range
spacing is

δx_r = c/(2 f_s) = 6.58 m    (9.4.10)
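The slant range spacing follows directly from the range sampling rate:

```python
C = 2.998e8          # speed of light, m/s

def slant_range_spacing(f_s):
    """Slant range sample spacing delta_x_r = c / (2 f_s)."""
    return C / (2.0 * f_s)

print(round(slant_range_spacing(22.77e6), 2))  # 6.58 m for Seasat
```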
where

δx_az = V_sw / f_p ≈ 4.07 m    (9.4.11)

and the range and azimuth oversampling factors are given by Eqn. (9.4.9) and
Eqn. (9.4.11), with δx_g and δx_az replaced by the output grid line and pixel
spacing δx_l and δx_p, respectively.
From Example 9.6 the Seasat real-time correlator output data rate is 5
Msamples/s, which would require a post-processor computational rate of 330
MFLOPS for real-time geometric correction.
If multilook detection is performed prior to geometric correction, the azimuth
input pixel spacing is reduced by the number of looks. However, if the output
spacing requirement is not reduced the number of computations remains the
same. Any resampling operation performed after detection should use intensity
data to preserve the first and second order statistics (Quegan, 1989).
Geocoding to a Smooth Geoid. To geocode the correlator output into a
standard map projection we perform a three-pass resampling process, as
described in Chapter 8 (Friedman, 1981). Pass 1 is azimuth geometric correction
and oversampling. Pass 2 is range geometric correction and skew. Pass 3 is
azimuth undersampling and a second skew to effect the desired image rotation.
The procedure is shown pictorially in Fig. 8.12. The azimuth oversampling is
to prevent aliasing from the rotation. The oversampling factor is given by

g_oa = 1 / cos β    (9.4.12)
where β is the rotation angle. Since a third pass must also be added to the
number of computations, the aggregate number of floating point operations
per input sample for geocoding to a smooth geoid is given by

(9.4.13)

where g_ua, the azimuth undersampling factor, is given by
g_or = 4.8
g_ua = 0.72

per complex input pixel. Assuming an input data rate of 5 Msamples/s, the
computational requirement for the geocoding from Eqn. (9.4.5) is

R_GC1
This extremely high computational rate results from the requirement for a
single-look complex output oversampled to a 4 m uniform spacing. A more
realistic post-processing scenario is presented in the following example.
Example 9.9 Assume we have a one tenth real-time Seasat processor, such
that the SAR correlator output data rate is

r_co = 0.5 Msamples/s

If the data is first L-look averaged, such that δx_az = L V_sw/f_p, requiring
4 FLOP per sample, the data rate is reduced to r_pi = r_co/L. Assuming L = 4,
with an output pixel spacing of δx_l = δx_p = 12.5 m, we get the following
oversampling factors
g_or = 1.54
g_ua = 1.0

(9.4.14)

where the superscript L refers to the look averaging. From Eqn. (9.4.14) for
N_i = 4, C_GC1^L = 106 FLOP/sample. For r_pi = 0.25 Msamples/s and L = 4
where g_om is the map oversampling factor. We have assumed in Eqn. (9.4.15) that
no rotation of the map is required and that the input and output DEM pixel
spacing is the same in both the line and pixel dimensions (e.g., northing and
easting for a UTM projection).
The computational complexity for the image resampling is given by
Eqn. (9.4.13), with the additional calculations required in Step 3 to determine
the foreshortening displacement. Thus the computational complexity for the
geocoding with terrain correction is
(9.4.16)
Example 9.10 Given a DEM with sample spacing δx_m = 25 m and an output
image grid of δx_l = δx_p = 12.5 m, the oversampling factor is g_om = 2. From
Eqn. (9.4.15)

C_DEM

For a 100 x 100 km map, there are N_DEM = 16 Msamples per frame. Assuming
one tenth real time throughput (i.e., Δt = 150 s), the computational rate is

R_DEM = C_DEM N_DEM / Δt = 7.6 MFLOPS
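The throughput arithmetic of Example 9.10 can be reproduced as follows. The per-sample complexity C_DEM ≈ 71 FLOP used here is back-solved from the quoted 7.6 MFLOPS, not taken from Eqn. (9.4.16).

```python
def dem_resample_rate(c_dem, n_dem, delta_t):
    """Computational rate (FLOPS) to process n_dem DEM samples of
    c_dem FLOP each within delta_t seconds."""
    return c_dem * n_dem / delta_t

# Example 9.10: 100 x 100 km map, 16 Msamples, one tenth real time.
# C_DEM ~ 71 FLOP/sample is an assumed value consistent with the text.
r_dem = dem_resample_rate(71, 16e6, 150.0)
print(f"{r_dem / 1e6:.1f} MFLOPS")  # ~7.6 MFLOPS
```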
Assuming a one tenth real-time rate, four looks, and one half of the data
geocoded (i.e., r_pi = 0.25 Msamples/s, L = 4)

R_GC1^L = 5.8 MFLOPS
9.4.4 Post-Processor Architecture

Figure 9.20 Example hardware architecture for real-time post-processor subsystem using only
commercial hardware.
archive is the I/O data transfer rate. Typically the sustained transfer rate is less
than 500 KB/s, which translates into a minimum of 200 s to download an
image. For real-time processing a network of these devices would be required
to achieve the required data rates (~6 MB/s).
In Fig. 9.20, one possible architecture for a real-time post-processor system
is shown. Assuming an input data rate of 40 MB/s (i.e., 5 M complex samples
per second), the data is first frame synchronized to identify the start of a range
line and the sample boundaries. This custom interface board can also be used
to demultiplex the data across several input channels to reduce the input data
rate to a value compatible with each post-processor unit. Since the input data
must be blocked into image frames for geocoding, the CPU memory must have
sufficient capacity to stage the input processing block, the DEM and workspace
for the intermediate products. This is on the order of 400 MB for a 100 km
single look complex Seasat image frame. To reduce the required memory, a
processing block smaller than the image frame can be used at the cost of a
significant increase in the complexity of the data handling software and large
I/O rates between the CPU and peripheral storage.
9.5

The high data rate output from the post-processor is not easily accessed by the
scientific community for visual interpretation of the imagery. The scenes are
often in a complex format and are too large for video display (i.e., 8 K x 8 K
pixels). Additionally, for real-time SAR correlation, the output data rates are
too high for electronic distribution across wide area communication networks
to scientists who may be located at a site remote from the SAR signal processing
facility. To provide the users rapid access to the most current data base, browse
image products are often generated and stored online. The scientists can then
log-on to the browse image data base management system and select imagery
for transfer to their home institutions across more conventional communication
channels.
An analogy to this data access scenario is the card catalog systems used in
a library (many of which are now electronic data bases). A user can search the
card catalog by title or author, if the specific book is known, to determine the
book location and status (e.g., on loan). Alternatively, if only the subject area
of interest is known, the subject catalog can be used to access all books related
to a specific topic within the library system. Contained in the catalog is a
synopsis or an abstract summarizing the book content, as well as detailed
information on its location. Similarly, an image browse system provides the
user with a low resolution summary of the image information contents. It could
be accessed by image file number or by site name if the user knows of a specific
scene. Also, as in the library catalog, if the user knows only of a location (i.e.,
latitude, longitude, area) a search can be made across all the image data products
in some specified region acquired during the time period of interest. The image
catalog contains information as to the processing status and the types of products
available.
The key science requirements in a browse data generation and distribution
system are twofold: good reconstructed image quality (at the user site); and a
short transfer delay time. The specifications controlling the browse system
performance are the channel capacity and the computational capacity of both
the transmitting and receiving computer systems. Generally, to achieve the
required access times for interactive browsing for some given link capacity,
spatial compression of the data products is required. The image compression
algorithm should be designed to minimize the number of computations needed
for image reconstruction since this capability must be replicated at each user
site. Additionally, the algorithm should be optimized for the unique characteristics
of the SAR image data, namely:
Following are the system requirements necessary for the design of a browse
data processing and distribution system:

Image Quality Specifications

reconstructed image resolution, Δx_az x Δx_Rg (m)
signal to compression noise ratio, SCNR (dB)
reconstructed image size, N_l x N_p (pixels)
Data Access
Given these inputs, we can then perform the analysis necessary to derive the
required compression ratio, d. Typically, the required compression is larger
than can be achieved by any lossless compression algorithm and, depending
on the required minimum signal to compression noise ratio, only a few lossy
compression algorithms are suited for SAR data compression (Chang et al.,
1988a).
9.5.2
To determine the required compression ratio we must establish the system access
load. We assume a Poisson distributed access pattern where each access consists
of a single image file transfer. For this analysis we will further assume that a
single serial port is shared by all users. We do this without loss of generality
since the extension to multiple image transfers and multiple communication
channels can be made simply by redefining the image size and the channel
capacity. The browse system will therefore be modeled as an M/D/1 queueing
system, where M represents a Poisson distribution, D is a deterministic time
required to encode and transmit the image file, and 1 indicates a single system
for processing and distribution.
It can be shown that for this system the mean response time, T, approaches
(Kleinrock, 1975)

T = W + T_e + T_t + T_d    (9.5.1)

where W is the waiting time to access the system and T_e, T_t, and T_d are the
encoding, transfer, and decoding times, respectively. The wait time is given by

W = λ(T_e + T_t)^2 / [2(1 - λ(T_e + T_t))]    (9.5.2)

where λ is the mean number of images transferred per second. The transfer time
is given by

T_t = n_b N_l N_p / (d r_c)    (9.5.3)

where n_b is the number of bits per image pixel, N_l and N_p are the line and pixel
dimensions of the image file, d is the compression ratio, and r_c is the channel
capacity in bits per second. Furthermore, we can write

T_e = C_e N_l N_p / R_e    (9.5.4)
and

T_d = C_d N_l N_p / R_d    (9.5.5)

where C_e and C_d are the numbers of computations per pixel required to encode
and decode the image and R_e and R_d are the computational rates (in FLOPS)
of the encoding and decoding processors, respectively.
We can now insert Eqn. (9.5.2)-Eqn. (9.5.5) into Eqn. (9.5.1) and write an
expression for the compression ratio, d. However, this relatively complex
algebraic equation is not very useful since, in most cases, the compression
algorithm encoding and decoding computational complexity factors (i.e., C_e,
C_d) depend on the compression ratio. Instead we will illustrate the use of these
equations with an example.
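A small sketch of the response-time model follows. The M/D/1 wait is taken from the Pollaczek-Khinchine formula with deterministic service time T_e + T_t, which is an assumption about the exact form of Eqn. (9.5.2).

```python
def transfer_time(n_b, n_l, n_p, d, r_c):
    """T_t = n_b * N_l * N_p / (d * r_c), Eqn. (9.5.3)."""
    return n_b * n_l * n_p / (d * r_c)

def response_time(lam, t_e, t_t, t_d=0.0):
    """Mean M/D/1 response time T = W + T_e + T_t + T_d, with the wait W
    from the Pollaczek-Khinchine formula for deterministic service."""
    x = t_e + t_t                   # deterministic service time
    rho = lam * x                   # server utilization
    if rho >= 1.0:
        return float("inf")         # queue is unstable
    w = lam * x * x / (2.0 * (1.0 - rho))
    return w + t_e + t_t + t_d

# 1 K x 1 K byte image, d = 15, 9.6 Kbps link, 20 requests/hour
lam = 20 / 3600.0
t_t = transfer_time(8, 1024, 1024, 15, 9600)
print(f"T_e = 0:    T = {response_time(lam, 0.0, t_t):.0f} s")
print(f"T_e = 60 s: T = {response_time(lam, 60.0, t_t):.0f} s")
```

With these numbers the d = 15 response with no encoding delay comes out near 70 s, consistent with the "less than 2 minutes" figure quoted below for Fig. 9.21a, while a 60 s encoding time pushes the utilization up and the queue delay grows accordingly.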
Example 9.11 Consider a browse system designed such that the images are
compressed upon receipt from the post-processor and stored in a compressed
format, so that the encoding time is T_e = 0. Furthermore, assume that the
decoding procedure is such that the receiving system can decode the data faster
than the channel can transmit,
allowing the decoding process to be fully overlapped with the image data transfer
(i.e., Td = 0). Equation (9.5.1) becomes
T = W + T_t    (9.5.6)
Inserting Eqn. (9.5.2) and Eqn. (9.5.3) into Eqn. (9.5.6) we can plot the total
access time (including the queue) as a function of access frequency, λ, and the
compression ratio, d, given the image size (N_l, N_p, n_b) and the link capacity
(r_c). If we assume a 1 K x 1 K pixel image is required for the user display, the
browse system must first reduce the original full resolution input image frame,
either by segmenting or averaging the original image. We will assume a byte
representation for each pixel and that the communication link is a 9.6 Kbps
line. No channel coding is included. The results shown in Fig. 9.21a indicate
that a compression ratio of 15-20 provides data access in less than 2 minutes
for 20 access requests per hour. If a 1 minute encoding time is required
(Fig. 9.21b) following the request receipt (i.e., T_e = 60 s), then the queue begins
to grow large as the request frequency approaches λ = 20 images/h. For this
case a reasonable solution would be to add a second 9.6 Kbps line.
Figure 9.21 Response time of browse (M/D/1) system as a function of access frequency λ and
compression ratio d for (a) Encoding time T_e = 0; (b) T_e = 1 min. (Courtesy of C. Y. Chang.)

9.5.3 Image Quality
the compression algorithm that can achieve the desired compression ratio, given
some image quality criterion.
For this measure, the traditional parameter used is a signal to compression
noise ratio

SCNR = 10 log[ Σ_{i,j} n_p(i,j)^2 / Σ_{i,j} (n_p(i,j) - n_p'(i,j))^2 ]    (9.5.7)
where n_p(i,j) is the pixel value in the original image and n_p'(i,j) is the reconstructed
pixel value following transmission and decompression of the data. To achieve
a visually good quality image, the compression noise should be of the same
magnitude or less than the other noise sources in the data. For SAR, system
noises such as thermal, bit error, quantization, and saturation are typically on
the order of 10-12 dB below the signal level, while the target dependent noises,
such as range and azimuth ambiguities, are nominally 15-18 dB down. The
exception is speckle noise. For a four-look image the signal to speckle noise
ratio is only 3 dB (Section 5.2). If 8 x 8 averaging is performed on this data
the speckle noise is then about 12 dB below the signal level and becomes
comparable to other noise factors.
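Equation (9.5.7) is straightforward to compute; a minimal sketch, where the pixel values are arbitrary illustrative data:

```python
import math

def scnr_db(original, reconstructed):
    """Signal to compression noise ratio, Eqn. (9.5.7):
    10 log10( sum(n_p^2) / sum((n_p - n_p')^2) )."""
    signal = sum(p * p for p in original)
    noise = sum((p - q) ** 2 for p, q in zip(original, reconstructed))
    return 10.0 * math.log10(signal / noise)

orig = [10.0, 12.0, 9.0, 11.0]      # arbitrary pixel values
recon = [10.5, 11.5, 9.2, 10.8]     # the same pixels after lossy coding
print(f"SCNR = {scnr_db(orig, recon):.1f} dB")
```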
The SCNR required for browse applications will therefore depend on the
processing applied to the image data before compression. If a low SCNR is
acceptable, as in the case of high speckle noise (one-look images), a large
compression ratio can be achieved, and thus we effectively trade distortion noise
for a higher resolution at a given link capacity. If we assume the browse image
size is that of a typical video display (i.e., 1 K x 1 K pixels), and that to achieve
this reduction we 8 x 8 average the four-look data, an SCNR ≈ 15 dB is required
for good quality reconstructed images.
An additional consideration is the spectral distribution of this noise power.
In the above comparison with the various system and target noise sources, we
assumed that the compression noise is essentially white across the spatial
spectrum of the image. In fact, many compression algorithms add a high
frequency noise characteristic, resulting from block encoding of the input data.
There are various techniques to distribute this noise more evenly across the
spectral bandwidth, although they typically result in an increased overall
compression noise (Ramamurthi and Gersho, 1986).
9.5.4
the browse application, only lossy techniques will be considered in detail. The
lossy algorithms can be grouped as follows:
1. Predictive Coding
2. Transform Coding
3. Vector Quantization
4. Ad Hoc Techniques (e.g., fractal geometry)
We will discuss each briefly as it applies to the SAR image browse application.
Transform Coding
Transform coding maps data from the spatial image domain to a representation
that is more efficient for encoding the image information. The most frequently
utilized transforms are the cosine and the Hadamard. The Hadamard transform
offers a lower computational complexity than the cosine transform at reduced
performance. However, transform coding almost always yields better performance
than predictive coding at the same compression ratios, and it offers more
flexibility in that any compression ratio can be specified if the resultant image
distortion is acceptable. The major disadvantage is the computational complexity,
since both the encoding and decoding procedures require a large number of
two dimensional transforms.
A comparative analysis of the compression algorithms listed at the beginning
of this section has recently been performed for SAR (Chang et al., 1988b). That
report concludes that an adaptive discrete cosine transform (ADCT) procedure
is the optimum approach for coding SAR image data in that it produces the
best SCNR for a given compression ratio. Essentially, the steps of the adaptive
C_ADCT = 2(2 log2 S - 1 + 1/S) + N_l N_p / (2 S^4)    (FLOP/pixel)    (9.5.8)
where the first term on the right is for the transform (Step 2) and the second
term is the sorting operation (Step 3). For example, a 1 K x 1 K browse image,
coded using block size S = 16, requires C_ADCT ≈ 22 FLOP per input pixel for
encoding and 14 FLOP/pixel for decoding, which does not require sorting. For
a 128 pixel block, the encoding and decoding complexity each increase to
26 FLOP/input pixel (i.e., the sorting is negligible). An alternative transform
algorithm, the Hadamard transform, is sometimes preferred, in which integer
arithmetic is employed since it requires only addition operations. The performance
of the Hadamard transform exhibits a slightly degraded SCNR relative to the
cosine transform.
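Equation (9.5.8) reproduces the complexity figures quoted above:

```python
import math

def adct_encode_flops(s, n_l=1024, n_p=1024):
    """Encoding complexity of Eqn. (9.5.8), FLOP per input pixel:
    transform term plus coefficient-sorting term for an S x S block."""
    transform = 2.0 * (2.0 * math.log2(s) - 1.0 + 1.0 / s)
    sorting = n_l * n_p / (2.0 * s ** 4)
    return transform + sorting

print(f"S = 16:  {adct_encode_flops(16):.1f} FLOP/pixel")   # ~22
print(f"S = 128: {adct_encode_flops(128):.1f} FLOP/pixel")  # ~26
```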
Some results from coding Seasat browse images (after 8 x 8 averaging of
the four-look image) are shown in Fig. 9.22 and Fig. 9.23. For this data we
have used a 16 pixel block size with four activity classes. Note that the image
becomes blocky at the higher compression ratios, even though the SCNR
remains above 15 dB. For the Detroit scene, the statistics vary widely from the
urban regio n to the la ke, thus sk ewing the classes and degrading the ADC T
performance.
For a browse application, where the user typically has little processing
ca pability at the home institution (or on a ship, or in the field), the transform
coding generally exceeds the maximum decoding complexity requirement.
Figure 9.22 Adaptive discrete cosine transform (ADCT) compression of Seasat image of Detroit, Michigan. (a) Original image; (b) Compression ratio d = 10, SCNR = 18.4 dB; (c) d = 30, SCNR = 16.0 dB; (d) d = 50, SCNR = 15.1 dB.
4. Transmit the index of the selected codeword for each vector and the image codebook.

The performance of this algorithm is dependent on how well the subset of the source data used to train the codebook (Step 2) represents the entire source data set. If the statistics vary at different portions of the image, such as in the Detroit scene of Fig. 9.22, and if the codebook does not contain vectors from the bright city areas, for example, these areas will be highly distorted in the reconstructed image. Assuming we select 2^m codewords as the codebook size, the maximum compression ratio is
d_max = S²n_b / [m + 2^m S⁴ n_b/(N_l N_p)]    (9.5.9)
Figure 9.24 Vector quantization (VQ) compression of (8 x 8) averaged Seasat images: (a) Original, Kennewick, Washington; (b) d = 14.8, SCNR = 14.3 dB; (c) Original, Detroit, Michigan; (d) d = 14.8, SCNR = 16.2 dB.
where the vector block size is S x S and n_b is the number of bits per pixel. The second denominator term is the overhead associated with transmitting the codebook. As an example, consider a 1K x 1K pixel browse image with S = 4, m = 8, n_b = 8. The compression ratio is 15:1. The codebook therefore represents approximately a 6% overhead.
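Assuming Eqn. (9.5.9) takes the form coded below (index bits per vector plus the codebook bits amortized over the image), the numbers quoted above can be verified directly; the function name is our own.

```python
def vq_ratio(S, m, nb, n_lines=1024, n_pixels=1024):
    """Maximum VQ compression ratio, Eqn. (9.5.9): S*S*nb input bits per
    vector versus m index bits plus the codebook overhead (2**m codewords
    of S*S samples at nb bits each, amortized over all image vectors)."""
    codebook_bits_per_vector = 2**m * S**4 * nb / (n_lines * n_pixels)
    return S * S * nb / (m + codebook_bits_per_vector)

d = vq_ratio(4, 8, 8)
# codebook bits divided by total index bits:
overhead = (2**8 * 4 * 4 * 8) / ((1024 * 1024 / 16) * 8)
print(round(d), round(100 * overhead, 2))   # 15:1 ratio, 6.25 % overhead
```

For the text's example (S = 4, m = 8, n_b = 8, 1K x 1K image) this gives d ≈ 15 and a codebook overhead of 6.25 %.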
The number of computations for the encoding procedure in Step 3, using a fully searched codebook, is

C_VQ = (Mq + 1)2^m    (9.5.10)

where q is the fraction of the original image used in training the codebook and M is the number of iterations required to train the codebook.
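The four VQ steps can be sketched in a few lines of numpy; this is a minimal illustration only, with ordinary k-means updates standing in for the Linde-Buzo-Gray design of the reference above, and with block size, codebook size, and training fraction chosen purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
S, m, iters, q = 2, 4, 5, 0.25          # 2x2 blocks, 2**4 codewords (illustrative)
image = rng.random((64, 64))

# Step 1: partition the image into S x S vectors
vecs = image.reshape(32, S, 32, S).swapaxes(1, 2).reshape(-1, S * S)

# Step 2: train the codebook on a fraction q of the vectors (k-means stand-in)
train = vecs[rng.choice(len(vecs), int(q * len(vecs)), replace=False)]
codebook = train[:2**m].copy()
for _ in range(iters):
    nearest = np.argmin(((train[:, None] - codebook)**2).sum(-1), axis=1)
    for k in range(2**m):
        if np.any(nearest == k):
            codebook[k] = train[nearest == k].mean(axis=0)

# Step 3: fully searched encoding; Step 4: transmit indices plus codebook
idx = np.argmin(((vecs[:, None] - codebook)**2).sum(-1), axis=1)
recon = codebook[idx].reshape(32, 32, S, S).swapaxes(1, 2).reshape(64, 64)
print(recon.shape, int(idx.max()) < 2**m)
```

If the training subset misses a bright region, its vectors map to poor codewords, which is exactly the distortion mechanism described for the Detroit scene.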
Ad Hoc Techniques
There are a number of other compression routines that do not fall into these
basic categories. Several of these have been evaluated for the SAR application.
Among those evaluated are (Chang et al., 1988)
Fractal Geometry
Micro Adaptive Picture Sequencing (MAPS)
Block Adaptive Truncation (BAT)
However, none of these algorithms could improve on the performance-to-complexity ratio of the VQ. Either the computational burden was too high (e.g., fractals), or the performance was poor (e.g., MAPS is very blocky in low activity areas), or they could not achieve a sufficiently high compression ratio (e.g., BAT is always less than d = 8). A more detailed consideration of these algorithms
is presented in a review paper by Jain ( 1981 ).
REFERENCES
Figure 9.25 VQ compression of averaged Seasat images: (a) Original, Los Angeles, California; (b) d = 15.1, SCNR = 11.7 dB; (c) Original, Beaufort Sea; (d) d = 15.1, SCNR = 17.2 dB.
Davis, D. N. and G. J. Princz (1981). "The CCRS SAR Processing System," 7th Canadian Symp. on Remote Sensing, Winnipeg, Manitoba, pp. 520-526.
Dongarra, J. J. (1988). "Performance on Various Computers Using Standard Linear
Equation Software in a Fortran Environment," Argonne National Laboratory
Technical Memorandum, No. 23.
Fenson, D. (1987). "British Aerospace of Australia, ERS-1 Data Acquisition Facility,"
Technical Document.
Friedman, D. E. (1981). "Operational Resampling for Correcting Images to a Geocoded
Format," 15th Inter. Symp. on Remote Sens. of Envir., Ann Arbor, Ml, p. 195.
Habibi, A. (1971). "Comparison of the nth-order DPCM Encoder with Linear
Transformations and Block Quantization Techniques," IEEE Trans. Comm. Tech.,
COM-19, pp. 948-956.
Heiskanen, W. A. and H. Moritz( 1967). Physical Geodesy, W. H. Freeman, San Francisco,
CA, pp. 181-183.
Hillis, W. D. (1985). The Connection Machine, MIT Press, Cambridge, MA.
Hwang, K. ( 1987). "Advanced Parallel Processing with Supercomputer Architectures,"
Proc. IEEE, 75, pp. 1348-1379.
Jain, A. K. ( 1981 ). "Image Data Compression: A Review," Proc. IEEE, 69, pp. 349-387.
Jin, M. and C. Wu (1984). "A SAR Correlation Algorithm which Accommodates Large
Range Migration," IEEE Trans. Geosci. and Remote Sensing, GE-22, No. 6.
Kleinrock, L. (1975). Queueing Systems, Vol. 1: Theory, Wiley, New York.
Kwok, R., J. C. Curlander and S. S. Pang (1987). "Rectification of Terrain Induced
Distortions in Radar Imagery," Photogram. Eng. and Rem. Sens., 53, pp. 507-513.
Lee, B. G. (1984). "A New Algorithm to Compute the Discrete Cosine Transform,"
IEEE Trans. Acoust. Speech Sig. Proc., ASSP-32, pp. 1243-1245.
Lewis, D. J., B. C. Barber and D. G. Corr ( 1984 ). "The Time Domain Experimental SAR
Processing Facility at the Royal Aircraft Establishment Farnborough," Satellite
Remote Sensing, Remote Sensing Society, Reading, England, pp. 289-299.
Linde, Y., A. Buzo and R. M. Gray ( 1980). "An Algorithm for Vector Quantizer Design,"
IEEE Trans. Comm., COM-28, pp. 84-95.
Naraghi, M., W. Stromberg and M. Daily (1983). "Geometric Rectification of Radar
Imagery using Digital Elevation Models," Photogram. Eng., 49, pp. 195-199.
Quegan, S. (1989). "Interpolation and Sampling in SAR Images," IGARSS '89 Symposium,
Vancouver, BC, Canada.
Ramamurthi, B. and A. Gersho (1986). "Nonlinear Space-Variant Postprocessing of
Block Coded Images," IEEE Trans. Acoust. Speech Sig. Proc., ASSP-34, pp. 1258-1268.
Ramapriyan, H. K., J. P. Strong and S. W. McCandless, Jr. ( 1986). "Development of
Synthetic Aperture Radar Signal Processing Algorithms on the Massively Parallel
Processor," NASA Symposium on Remote Sensing Retrieval Techniques, Williamsburg,
VA, December 1986.
Rocca, F., C. Cafforio and C. Prati (1989). "Synthetic Aperture Radar: A New
Application for Wave Equation Techniques," Geophysical Prospecting, 37, pp. 809-830.
Sack, M., M. R. Ito and I. G. Cumming (1985). "Application of Efficient Linear FM
Matched Filtering Algorithms to Synthetic Aperture Radar Processing," Proc. IEE,
132, pp. 45-57.
REFERENCES
501
Schaefer, D. H. (1985). "MPP Pyramid Computer," Proc. IEEE Syst. Man. Cyber Conj.,
Tucson, AZ.
Schreier, G., D. Kossman and D. Roth ( 1988). "Design Aspects of a System for Geocoding
Satellite SAR Images," ISPRS, Kyoto Comm. I, 1988.
Selvaggi, F. ( 1987). "SAR Processing on EMMA-2 Architecture," RI EN A Space Meeting
Proceedings, Rome, Italy.
Siedman, J. B. (1977). "VICAR Image Processing System Guide to System Use," Jet
Propulsion Laboratory Publication 77-37, Pasadena, CA.
Test, J., M. Myszewski and R. C. Swift ( 1987). "The Alliant FX Series: Automatic
Parallelism in a Multi-processor Mini-supercomputer," in Multiprocessors and Array
Processors, Simulation Councils, San Diego, CA, pp. 35-44.
van Zyl, J. (1990). "Data Volume Reduction for Single-Look Polarimetric Imaging
Radar Data," submitted to IEEE Trans. Geosci. and Remote Sensing.
Wolf, M. L., D. J. Lewis and D. G. Corr (1985). "Synthetic Aperture Radar Processing
on a Cray-1 Supercomputer," Telematics and Informatics, 2, pp. 321-330.
Wu, C., K. Y. Liu and M. Jin (1982). "Modeling and a Correlation Algorithm for
Spaceborne SAR Signals," IEEE Trans. Aero. Elec. Syst., AES-18, pp. 563-575.
10

OTHER IMAGING ALGORITHMS

Within the limitations imposed by depth of focus, the function Eqn. (10.0.1) corresponds to a stationary system function

(10.0.2)

where (R_c is the slant range at beam center) φ_1(R) = φ(2R/c). With the definition Eqn. (10.0.2), the response function Eqn. (10.0.1) is just

(10.0.3)

where P_v is the two dimensional spectrum of the basebanded data v(s, t) before range compression. The algorithm Eqn. (10.0.3) was developed in particular by Vant and Haslam (1980, 1990).
Another class of processing algorithms different from rectangular range-Doppler processing has grown up, based on alternate schemes for attaining range resolution in pulse compression radar. These are based on the "deramp" processing scheme for range compression (Section 10.1). The idea is to do whatever is necessary to salvage the process of simple frequency filtering on the Doppler spectrum of the azimuth signal, while at the same time making use of the full target spectrum, thereby attaining improved resolution (focussed processing). Such algorithms have been mainly developed for use in airborne systems, but are not restricted to such systems. They are, however, particularly well adapted to systems which are squinted away from side-looking so as to deliberately aim (say) forward at some limited region of interest, as for example in a spotlight mode SAR. Such systems are in contrast to the Seasat-like deployments we have been mainly considering so far, in which the objective is to map the terrain below the vehicle more or less uniformly, with squint only a nuisance to be compensated in the processing.

In the case of the large bandwidth-time product of the azimuth Doppler signal imposed by the usual geometries, high resolution azimuth processing can be done using the techniques of matched filter processing. From the point of view of the Green's function h(x, R|x', R') and its inversion (Section 3.2.1), the return signal v_r(x, R) of the radar, in response to a distributed target with complex reflectivity ζ(x', R'), is
v_r(x, R) = ∫∫ h(x, R|x', R') ζ(x', R') dx' dR'    (10.0.4)

where h⁻¹(x, R|x', R') is the inverse Green's function. In the case of the along track variable x, the kernel h(x, R|x', R') is approximately a linear FM, and the inversion kernel h⁻¹(x, R|x', R') is therefore another linear FM, the azimuth compression filter. Convolution is necessary to apply the inverse kernel to the data, as in Eqn. (10.0.4). Range migration enters as a complicating factor.

The algorithms we will describe in this chapter take an alternative point of view. The received radar data v_r(x, R) are pre-processed into signals ṽ_r(x, R) such that, in the corresponding superposition equation

ṽ_r(x, R) = ∫∫ h̃(x, R|x', R') ζ(x', R') dx' dR'

the kernel h̃(x, R|x', R') is of a very simple form, and in fact is just that kernel which is inverted by Fourier transformation. Thereby the image function ζ(x, R) results from Fourier transformation of the data function ṽ_r(x, R). Application of compression filters and inverse Fourier transformation as needed in the rectangular algorithm do not occur. The focussed image results by a single two dimensional Fourier transform operation. The cost is (perhaps considerable) data preprocessing to form the signals ṽ_r from the radar data v_r.

The algorithms of the class to be discussed go by various names in their variants, such as deramp FFT processing (sometimes called stretch processing), step transform processing, SPECAN processing, and polar processing. Ausherman et al. (1984) have given an overview of the class. All of these algorithms have links to the methods of tomographic imaging, which Munson et al. (1983) and Fitch (1988) discuss. We begin with a discussion of deramp processing, which is the direct predecessor of the step transform method of SAR imaging.

10.1 DERAMP PROCESSING

Figure 10.1

The transmitted pulse is taken as a linear FM:

s(t) = cos 2π(f_c t + Kt²/2)    (10.1.1)

If this is scattered back by a unit point target at range R_0 = ct_0/2, the received signal will be

v_r(t) = cos 2π[f_c(t − t_0) + K(t − t_0)²/2]    (10.1.2)

The deramp operation forms the product s(t)v_r(t), whose low frequency part is

cos 2π(Kt_0 t + f_c t_0 − Kt_0²/2)    (10.1.3)

The waveform Eqn. (10.1.3) to be Fourier analyzed is nonzero only over the time span for which the factors Eqn. (10.1.1) and Eqn. (10.1.2) overlap. If that overlap could be arranged to be the full pulsewidth τ_p, or nearly so, the frequency Kt_0 of the signal Eqn. (10.1.3) would be recovered with a resolution 1/τ_p, so that the resolution in t_0 approaches 1/(|K|τ_p) = 1/B, which is the full resolution afforded by the pulse compression waveform Eqn. (10.1.1). This can be done by delaying the pulse Eqn. (10.1.1) by some reference time t_r, say the midswath time:

s*(t − t_r) = exp(−j2πf_c t) exp[−jπK(t − t_r)²]    (10.1.5)

Figure 10.2

(Fig. 10.2). The reference pulse Eqn. (10.1.5) is generated such that its length Δt is the timewidth of the slant range swath over which returns are expected. The result of the reference mixing operation then is a preprocessed signal at baseband:

v_d(t) = (const) exp[j2πK(t_r − t_0)t],   |t − t_0| ≤ τ_p/2    (10.1.6)

This function v_d(t) is a constant frequency sinusoid, available over the full transmitted pulse duration τ_p, whose frequency K(t_r − t_0) is a direct measure of the target range parameter t_0. The precision to which that frequency can be measured is |K|δt_0 = 1/τ_p, so that target range resolution is
δR_0 = cδt_0/2 = c/2B_R    (10.1.7)

where B_R = |K|τ_p is the bandwidth of the transmitted pulse. Thus the resolution of full bandwidth pulse compression processing is realized.

All of the operations involved in carrying out the deramp procedure are linear, since the reference function Eqn. (10.1.5) is independent of target position in the swath. Therefore a complex reflectivity distribution ζ(R) across the swath is reproduced by the system of Fig. 10.1, with the squared magnitude of each complex Fourier coefficient at the output of the FFT processor used for filtering the deramped received signal being the real reflectivity |ζ(R)|² at the corresponding range. A radar system with this type of range processing has been called a stretch radar (Hovanessian, 1980, p. 114).

A practical difficulty arises in deramp processing. Normally the swath width Δt is considerably larger than the pulse length τ_p (Fig. 10.2). Since we need to allow for a target return at any position in the swath, the processor FFT must have time length Δt, even though any particular frequency bin is occupied by signal for at most a much shorter time τ_p. By lengthening the processor time to Δt we have degraded the signal to noise ratio of the system. Further, there are generally present signal frequencies in v_d(t) ranging from K(t_r − t_near) to K(t_r − t_far), where t_near and t_far correspond to the two extremes of the range swath. Thus the deramped signal v_d(t) has a bandwidth |K|Δt, whereas v_r(t), the radar return itself, has only the band |K|τ_p. Thus the sampling rate of the deramped signal must be artificially high. The system is simplest to arrange in the case that Δt and τ_p are roughly equal. This means that either the swath must be narrow, less than a pulse width, or that subswaths must be processed with multiple reference functions used to dechirp each subswath signal separately, perhaps using the step transform procedure discussed in Section 10.2.

The potential application of deramp processing to SAR azimuth compression is clear. The algorithm has recently been called the SPECAN (SPECtral ANalysis) algorithm in that context (Sack et al., 1985). A number of difficulties arise, however, which can make the procedure somewhat involved for high resolution image formation. In addition to the problems mentioned above in regard to range processing, which are also present in the application to azimuth processing, the phenomenon of range migration can make it necessary to assemble together from various range bins the data to be applied to the azimuth FFT processor. Finally, since the azimuth chirp constant f_R depends on slant range across the swath, the relation between FFT bin number and image point azimuth position changes with range, a circumstance which requires interpolation operations to construct a uniformly sampled image. The situation is discussed by Sack et al. (1985), and in detail by Wu and Vant (1984). Both Sack et al. (1985) and Wu and Vant (1984) give a detailed analysis of the step transform, an important modification to which we now pass.

10.2 THE STEP TRANSFORM
The basic idea of deramp processing can be realized in a version known as the
step transform. The method as applied to range compression is discussed by
Perry and Kaiser (1973) and by Martinson (1975). Perry and Martinson ( 1977)
also mention the technique in the context of along-track SAR processing. An
analysis of the along-track application is given by Sack et al. (1985), and by
Wu and Vant (1984). Wu and Vant (1985) analyze the modifications that need
to be made in the case of a highly squinted (spotlight) SAR, in which case the
along-track Doppler signal is not necessarily well approximated as a linear FM.
With simple deramp range processing (Section 10.1), difficulties arise if the range swath timewidth Δt over which a return signal can occur is noticeably longer than the width τ_p of a transmitted pulse. Even in the case of a swath only the width of the transmitted pulse, in deramp processing the deramped signal v_d(t) of Eqn. (10.1.6) will not capture the return signal over the majority
of its width unless the target is near the center of the swath (Fig. 10.2). This
suggests separating the full swath of interest into a number of subswaths, each
of width considerably less than that of a transmitted pulse, with each subswath
provided with its own local reference signal (Fig. 10.3). Thereby essentially all
of the signal span of any return can be captured, with different time segments
of the full pulse appearing in different subswaths. Full resolution processing
then requires simultaneous processing of the signals from multiple subswaths.
The step transform is the two-stage procedure which implements the scheme.
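Before turning to the digital details, the deramp-and-FFT idea underlying both Section 10.1 and the step transform can be sketched numerically: multiplying a delayed chirp by a conjugated reference chirp turns target delay into a constant tone frequency, which an FFT recovers. A minimal numpy sketch, with all parameters illustrative and full pulse overlap assumed:

```python
import numpy as np

K  = 1.0e12        # chirp rate, Hz/s (illustrative)
fs = 40e6          # sample rate, comfortably above the 10 MHz chirp bandwidth
N  = 400           # samples across the pulse: tau_p = N/fs = 10 us
t0 = 2e-6          # target delay relative to the reference center

t = (np.arange(N) - N // 2) / fs                # time axis across the pulse
v_r = np.exp(1j * np.pi * K * (t - t0)**2)      # received chirp (complex baseband)
ref = np.exp(-1j * np.pi * K * t**2)            # conjugated reference chirp

d = v_r * ref                                   # deramped: a tone at -K*t0
freqs = np.fft.fftfreq(N, 1 / fs)
t0_est = -freqs[np.argmax(np.abs(np.fft.fft(d)))] / K

tau_p = N / fs
print(abs(t0_est - t0) < 1 / (abs(K) * tau_p))  # delay recovered to 1/B precision
```

The FFT bin spacing 1/τ_p translates, through the chirp rate K, into exactly the range-time resolution 1/B discussed above.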
Coarse Range Coefficients
Consider then a single subswath, the nth, centered on a reference time t_n, with a target at range time t_0 (Fig. 10.4). The deramped signal for that subswath, similar to Eqn. (10.1.6), is

v_n(t) = exp(jφ) exp[j2πK(t_n − t_0)t],   |t − t_n| ≤ Δt/2    (10.2.1)
Its spectrum is

V_n(f) = exp(jφ) ∫ exp[j2πK(t_n − t_0)(t_n − Δt/2 + t)] exp(−j2πft) dt    (10.2.2)

where

φ = πK(t_0² − t_n²)    (10.2.3)

Carrying out Fourier transformation of this over the interval Δt centered on t_n determines the frequency K(t_n − t_0) to a resolution δf = 1/Δt (assuming that the interval in question is not at the end of the target pulse), and thereby determines the range R_0 of the target to a resolution δR = c/(2|K|Δt), coarser by the ratio Δt/τ_p than the full resolution capability c/(2|K|τ_p) of the system. This so-called coarse range processing yields the same range information in adjacent subswaths, since any particular target appears in multiple subswaths, although at different frequencies separated by the frequency step KΔt corresponding to the time shift Δt of the reference linear FM signals.

It is the further processing of the redundant coarse resolution information about each target in adjacent subswaths (subapertures) which leads to the fine resolution analysis. The subapertures are overlapped by an oversampling factor

β > 1    (10.2.4)

so that the band to be analyzed over the subaperture time variable is β/Δt. This requires a sample spacing in t_n of Δt/β, rather than Δt, in order to avoid aliasing. The oversampling factor β typically used is on the order of 2 or 3, unless the coarse resolution filter has very well controlled sidelobes. That is, two or three times as many subapertures are generated than are sketched in Fig. 10.3.
Digital Coarse Range Analysis

Sack et al. (1985) describe the digital algorithm of step transform range compression in detail. For a target at t_0 = mδt (Fig. 10.6a), the basebanded radar return, when multiplied by the appropriate deramping chirp in terms of the local index l on subaperture n, is

l = 0, …, L − 1    (10.2.5)
Figure 10.5 Coarse range bins in deramp range compression.

Figure 10.6 Time sampling in step transform. (a) Single subaperture; (b) Oversampling of subapertures for fine resolution transform (case β = 3).
(10.2.6)

Taking the FFT of the sequence Eqn. (10.2.6) over the aperture time variable l yields the discrete coefficients corresponding to the spectrum V_n(f):

V_r(k|n) = Σ_{l=0}^{L−1} v_r(l|n) exp(−j2πkl/L),   k = 0, …, L − 1    (10.2.7)

The coefficients Eqn. (10.2.7) provide a complete coarse range analysis in every subaperture n, with resolution 1/|K|Δt. The various subaperture coefficients Eqn. (10.2.7) are processed together, with respect to the time index n, to obtain the final fine resolution analysis of the range returns. We select sequential subapertures indexed by i and centered at uniformly spaced times t_0 = i(Nδt) (Fig. 10.6b). Allowing for oversampling to avoid ghost images in the final output, we have

δt_0 = Nδt = Δt/β = Lδt/β,   N = L/β    (10.2.8)

(10.2.9)

using Eqn. (10.2.8). In the expression Eqn. (10.2.7) for the coefficients V_r(k|n), we have thereby arranged for u to be held constant as n changes by iN from one subaperture to another. The remaining variation of V_r(k|n) with n resides in the phase angle,

v = πK(δt)²(n − m)(n − m − L) = πK(δt)²[n(n − L) + m(m + L) − 2mn]    (10.2.10)

The variation with n represented by 2mn is what will "do the job" in determining the target index m when we transform over n. The factor n(n − L) is an unwanted variation with n, and must be compensated. The compensated coefficients

V′_r(k|n) ∝ sin(Lu)/sin(u),   u = π[K(δt)²(n − m) − k/L]    (10.2.11)

are found by tracing through the matrix of values V_r(k|n) as in Fig. 10.7, remembering that the coefficients V_r(k|n) are periodic in k. The fine range analysis is then obtained by taking the I-point FFT of the coefficients Eqn. (10.2.11). The resulting coefficients g(r|A) are such that

Figure 10.7 Subaperture selection for fine resolution analysis in step transform (case β = 2).
(taking n_0 = 0):

|g(r|A)| = |sin(Lv)/sin(v)| |sin(Iw)/sin(w)|    (10.2.12)

where

v = πK(δt)²(NA − m)
w = π[KN(δt)²m + r/I]    (10.2.13)

r = 0, …, I − 1    (10.2.14)

then

Iw = π[K(δt)²mNI + r] = π[r + m sgn(K)]    (10.2.15)

m mod (I) = −r sgn(K)    (10.2.16)

(10.2.17)

Figure 10.8 Frequency bins and response functions in step transforms. (a) Coarse and fine frequency bins (L coarse bins of width 1/Δt spanning the band B = 1/δt; fine bins of width 1/τ_p). (b) Response functions sin(Lv)/sin(v), sin(Iw)/sin(w).

Thus there are I/β
fine resolution cells to cover a coarse cell, where we used Eqn. (10.2.14) and the definition β = L/N. But the fine analysis computes I fine bin coefficients, a proportion β more than needed. Therefore, only the first 1/β proportion of the fine resolution coefficients need be reported out of each fine resolution FFT, the next set of fine resolution bins being picked up as the first 1/β of the coefficients of the fine FFT for the next value of A.

This data selection process is what defeats sidelobe leakage in the coarse FFT. From Fig. 10.9, it is clear that any target which is in a particular coarse bin will have a frequency over aperture number i which is in the first 1/β part of the band of the fine resolution FFT. However, if the response in the coarse bin in question is in fact a sidelobe leakage from the next adjacent coarse bin (or if the mainlobe has been so broadened by weighting for sidelobe control that the adjacent bin comes in through a skirt of the mainlobe), it will have a frequency over i in the range below the lowest 1/β part of the fine band, and so forth. Thus, for say β = 3, we will not see sidelobe leakage until the response is leaking in from three coarse bins away from the one under analysis. That one we will see, because it is aliased into the lowest region of the ith aperture spectrum due to the periodicity of the discrete fine resolution spectrum. One then chooses β large enough that the aliased response is from a sidelobe far enough out on the coarse resolution response that it is of negligible amplitude.
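The bin bookkeeping just described reduces to a few integer relations; the parameter values below are illustrative, not taken from the text.

```python
# Step transform bin bookkeeping (illustrative parameter values).
L = 64          # coarse FFT length: L coarse bins across the band
beta = 2        # integer oversampling (overlap) factor, per Eqn. (10.2.9)
N = L // beta   # subaperture stepout in samples, N = L/beta, Eqn. (10.2.8)
I = 32          # fine FFT length over the subaperture index

keep = I // beta   # only the first I/beta fine coefficients are reported;
                   # the rest belong to the next coarse bin's analysis, and
                   # discarding them is what rejects coarse-bin leakage
print(N, keep)
```

With β = 2, half of each fine FFT's output is discarded, and a leaking response must come from at least β coarse bins away before it aliases into the reported region.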
Figure 10.9 Leakage and the aliased active region in the fine resolution spectrum.
Azimuth Compression

When the step transform algorithm is used for azimuth SAR processing, it is the linear FM of the sampled Doppler signal which is analyzed. The procedure is just that which has been detailed above, with the usual complications of range migration and change of Doppler frequency rate f_R with range to be considered. Sack et al. (1985) have given a good discussion.

So far as change of f_R with range is concerned, the step transform procedure is simply adjusted every so often across the range swath as f_R is updated. Since the input and output sampling rates δt of the algorithm are independent of f_R, no interpolation is needed on the output to produce a uniform image grid. The only complication internal to the algorithm is the requirement Eqn. (10.2.9) that the coarse range bin stepout β used from one subaperture to another be an integer. This is most conveniently done by adjusting N inversely to the change in f_R, so that the overlap ratio β depends on f_R. Since the percent change in f_R over the swath is normally small in space based systems, no great change results in the system operation on that account. However, whatever N is used must also perforce be an integer, and changes in N involving a fractional part of an integer cannot be accommodated. Sack et al. (1985) suggest then using some number, say J, of reference ramps, with time origins spaced at multiples j of δs/J:

s_n(j) = (n + j/J)δs

The term 2al in the exponent is accounted for from one aperture to the next by the nature of the noninteger stepout in n. However, the factor a(a + L) represents a term depending on n, since a is in general different for every n, and must be compensated in forming the sequences V′_r(k|n) of Eqn. (10.2.11) from the coarse coefficients V_r(k|n), just as was the previous term n(n − L).

Range migration is handled much as in the algorithms of Section 4.2.3. That is, the appropriate data is gathered together along the curved migration path in range/azimuth memory before Fourier transformation in the Doppler domain, but after range compression. If we assume the nominal linear range walk is removed in the time domain as described in Section 4.2.3, then we deal essentially only with range curvature. Each coarse resolution FFT in the Doppler domain operates over some limited span S′ of azimuth time. If S′ is adequately small, the residual migration (mainly quadratic) over the aperture S′ will be less than one range bin, or half a range bin, or whatever is desired, depending on the precision of processing needed. As Sack et al. (1985) note, this establishes an upper bound to the coarse aperture time S′. Since we have as always the nominal migration locus (after walk removal):

R(s) = R_c + (V_st²/2R_c)s²    (10.2.18)

where s is slow (azimuth) time, the worst case situation, at the end of the full synthetic aperture, yields a range migration due to the curvature over the subaperture, and a limit (for quarter-cell accuracy):

(V_st²/2R_c)[(S/2)² − (S/2 − S′)²] ≤ δR/4    (10.2.19)

Provided 2R_c δR/V_st²S² > 1, this is always the case. Otherwise, for correction to within a quarter range resolution cell the subaperture time is bounded
as
With apertures S′ chosen in each range bin, the appropriate coarse resolution bins are then patched together to form the input to the fine resolution FFT.

Another constraint is imposed by the necessity for range migration correction when using the step transform for slow time processing. Each fine resolution FFT relates to a number of targets separated in slow time by the sampling interval δs, which in this case is the radar pulse repetition interval. In frequency, these targets fill the band of width 1/S′ corresponding to the resolution of the coarse resolution FFT. Thus each coarse resolution frequency coefficient relates to a number of targets, which are separated in frequency by up to 1/S′ Hz, corresponding to a maximum separation in slow time of 1/|f_R|S′. (Since the analysis band |f_R|S of the coarse resolution FFT is just 1/δs, this can also be written as S(δs)/S′.)

The data for each fine resolution FFT is gathered together by selecting a single coarse resolution coefficient from each coarse resolution time interval and applying the appropriate range curvature correction (again assuming the linear range walk has been previously compensated). Therefore, each of the targets contributing to a particular coarse bin must have the same curvature correction to be applied, again to within a range bin, or some appropriate fraction (say a quarter) thereof. Now consider (Fig. 10.10) two targets, in the same coarse Doppler bin, separated by the maximum amount Δf_D = 1/S′. In slow time this corresponds to

Δs = 1/|f_R|S′ = S/B_D S′ = S(δs)/S′

The two targets are not in general at the same R_c. The largest discrepancy in range curvature correction required by any segment of length S′ common to the two targets occurs at the positions shown. From Eqn. (10.2.18), the difference in range curvature corrections required for those two targets, assuming f_R to be the same for both, is

(V_st²/2R_c){(S/2)² − [S/2 − S(δs)/S′]²} = (V_st²/2R_c)[S²(δs)/S′](1 − δs/S′) ≤ δR/4    (10.2.20)

Figure 10.10
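The quarter-cell curvature test of Eqn. (10.2.20) is easy to evaluate numerically. The Seasat-like numbers below (velocity, slant range, aperture and resolution values) are our own illustrative assumptions, not figures from the text.

```python
# Residual range curvature discrepancy across one coarse subaperture,
# Eqn. (10.2.20). All parameter values are illustrative assumptions.
V_st = 7.0e3      # effective along-track velocity, m/s
R_c  = 850e3      # slant range at beam center, m
S    = 2.5        # full synthetic aperture time, s
ds   = 1 / 1600   # slow-time sample (pulse repetition) interval, s
Sp   = 0.1        # coarse subaperture time S', s
dR   = 7.0        # slant range resolution, m

delta = (V_st**2 / (2 * R_c)) * (S**2 * ds / Sp) * (1 - ds / Sp)
print(delta, delta < dR / 4)   # discrepancy in meters; within a quarter cell?
```

For these values the discrepancy is near one meter, inside the quarter-cell budget δR/4; shrinking S′ tightens the bound at the cost of coarser Doppler bins.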
10.3 POLAR PROCESSING

Let us consider first the situation of a unit point target located at a fixed vector position R_t, possibly in space (Fig. 10.11). The origin of coordinates is some arbitrary point in the general vicinity of the region to be imaged. A radar moves in space along some path described by a vector R_an(t), which is assumed to be known at every instant. The radar transmits pulses s(t), assumed to be linear FM with frequency rate K:

s(t)exp(j2πf_c t) = exp[j2π(f_c t + Kt²/2)]    (10.3.1)

For the moment we ignore the change in range to target from time of pulse transmission until reception. With the origin of time taken at the instant of transmission of a pulse, the received waveform is then

(10.3.2)

Figure 10.11

For each pulse, the received signal Eqn. (10.3.2) is deramped, just as in the first step of deramp range compression (Section 10.1), using the waveform Eqn. (10.3.1), delayed and conjugated:

(10.3.3)

where t_an = 2R_an/c is the known range delay from radar to coordinate origin during pulse period n. The result is a video signal

g(t) = d(t)v_r(t) = exp{−j2π[(f_c + Kt)(t_n − t_an) − K(t_n² − t_an²)/2]}    (10.3.4)
From Fig. 10.11, the range from radar to target during pulse n is

R_n = [(R_t − R_an)·(R_t − R_an)]^{1/2} = R_an[1 − 2R_t·R_an/R_an² + (R_t/R_an)²]^{1/2}    (10.3.5)

g_n(t) ≈ exp{−j(4π/c)[f_c + K(t − 2R_an/c)]R̂_an·R_t}    (10.3.7)

Figure 10.12
The name polar processing for this process derives from the fact that, as indicated in Eqn. (10.3.8), data for a particular pulse are indexed, in t, along the radial vector R̂_an of a spherical coordinate system (Fig. 10.12). For a two dimensional image (the usual case), this becomes a polar coordinate system. The dimensions of the data region Fig. 10.12 in the space of the wavenumber κ of Eqn. (10.3.8) depend on the pulsewidth in time, along the direction of the radar position vector R_an at each pulse. In the cross-track dimension, normal to R_an, the data span is a circular segment of nominal extent 2(Δθ)/λ, where Δθ is the span of aspect angles traversed by the radar while viewing the target. For a radar far removed from the target region, since the origin is local to the target, the span of angle Δθ is nearly the span of aspects through which the radar views the target. In turn, the span Δθ leads to an approximate linear span of data of extent 2(Δθ)R_an/λ, where R_an is the nominal range to target.
As always, invoking linearity of the process, for a distributed region with complex reflectivity ζ(R_t) to be imaged, the signal field corresponding to Eqn. (10.3.9) is

(10.3.10)

This is the two dimensional inverse Fourier transform of the sought image ζ(R_t). The inverse Green's function (compression) operation is then simply
The expression Eqn. (10.3.11) at once yields the nominal resolution of the polar processor. In range, the extent of data in that wavenumber dimension (Fig. 10.12) yields the range resolution; in the cross dimension, the angular span yields a nominal azimuth resolution λ/2(Δθ).

Figure 10.13 Index grid (○) for polar format data with required index grid (×) for FFT processing.
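The regridding requirement of Fig. 10.13 can be seen by generating the wavenumber sample locations directly: each pulse lays its frequency samples along the radial direction R̂_an, at radii proportional to 2f/c, so the raw sample raster is polar and must be interpolated onto a Cartesian grid before a two dimensional FFT. A minimal sketch, with an assumed (illustrative) L-band geometry:

```python
import numpy as np

c, fc, B = 3e8, 1.275e9, 19e6                    # illustrative carrier and bandwidth
angles = np.radians(np.linspace(-1.0, 1.0, 5))   # aspect angles of 5 pulses
freqs = fc + np.linspace(-B / 2, B / 2, 8)       # frequency samples per pulse

# wavenumber samples kappa = (2f/c) times the unit vector toward the radar
kx = (2 * freqs[None, :] / c) * np.cos(angles[:, None])
ky = (2 * freqs[None, :] / c) * np.sin(angles[:, None])

# every sample at a given frequency lies at the same radius 2f/c,
# regardless of pulse: the raster is polar, not Cartesian
print(np.allclose(np.hypot(kx, ky), 2 * freqs[None, :] / c))
```

The radial extent of this raster (set by the bandwidth B) fixes range resolution, and its angular extent (set by Δθ) fixes azimuth resolution, as stated above.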
v.(t) = s(t -
(10.3.12)
In some situations the deramp processing described in Section 10.1 may indeed
be feasible, if implemented at some video offset frequency. This is especially the
case if the region of targets to be imaged is small enough that the entire region
is covered simultaneously by a single pulse, so that a single reference ramp of
reasonable width can be used to deramp the return from any target point
(Fig. 10.2).
More generally, the equivalent of the deramp operation Eqn. (10.3.4) can be
realized in the Fourier transform domain. To that end, the data Eqn. (10.3.2)
across the full target region are first downconverted to some convenient video
offset frequency f_1, then time sampled and Fourier transformed. For a
transmitted pulse Eqn. (10.3.1), and again assuming the range of the radar from
the target can be considered constant during the time of pulse period n, the
spectrum of the received pulse is

    V_n(f) = S(f) exp(−j2πf t_n) exp(−j2πf_c t_n)    (10.3.13)
where S(f) is the baseband spectrum of the transmitted pulse. After complex
basebanding, we have available frequency samples of

    V_n(f) = a_n S(f) exp(−j2πf t_n)    (10.3.14)

where

    a_n = exp(−j2πf_c t_n)    (10.3.15)

is constant, depending however on pulse number.
A result equivalent to that of the time domain deramp operation Eqn. (10.3.4)
can now be realized by frequency domain operations on the spectrum
Eqn. (10.3.14). The procedure consists in adding values 2πf t_an to the phase of the
spectrum Eqn. (10.3.14), where t_an = 2R_an/c, and dividing out the known
spectrum S(f). This results in deramped data

    G(f) = a_n exp[−j2πf(t_n − t_an)]    (10.3.16)

         = a_n exp[−j(4πf/c)(R_n − R_an)]    (10.3.17)
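The frequency domain deramp just described can be sketched numerically: add the phase 2πf t_an to the spectrum samples of Eqn. (10.3.14) and divide out S(f); for a point target the residual phase is then linear in f with slope set by t_n − t_an. (A toy sketch with assumed values; S(f) is taken flat for simplicity.)

```python
import cmath, math

def deramp_spectrum(S, freqs, t_n, t_an, fc):
    """Frequency-domain deramp, Eqn. (10.3.16): the received spectrum
    a_n*S(f)*exp(-j*2*pi*f*t_n) is multiplied by exp(+j*2*pi*f*t_an)
    and divided by the known pulse spectrum S(f)."""
    a_n = cmath.exp(-2j * math.pi * fc * t_n)   # Eqn. (10.3.15)
    out = []
    for f, Sf in zip(freqs, S):
        Vn = a_n * Sf * cmath.exp(-2j * math.pi * f * t_n)  # Eqn. (10.3.14)
        out.append(Vn * cmath.exp(2j * math.pi * f * t_an) / Sf)
    return out

# Point target, illustrative numbers: +-5 MHz band, 1 MHz spacing
freqs = [f * 1.0e6 for f in range(-5, 6)]
S = [1.0 + 0j for _ in freqs]                   # flat pulse spectrum (toy)
G = deramp_spectrum(S, freqs, t_n=2.0e-5, t_an=1.99e-5, fc=1.0e9)
```

The phase step between adjacent bins is −2πΔf(t_n − t_an), here −0.2π per 1 MHz bin.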
For a point target at position R_t relative to the reference point, the
deramped data Eqn. (10.3.16) are

    G(f) = a_n exp(j4πf R̂_an · R_t/c)    (10.3.18)

Defining the storage index by

    r_n = (2f/c) R̂_an    (10.3.19)

these are

    G(r_n) = a_n exp(j2π r_n · R_t)    (10.3.20)

which are essentially the same numbers as in Eqn. (10.3.10). Fourier
transformation of the data Eqn. (10.3.20) yields the complex function a_n ζ(R_t),
however. Since the amplitude of a_n is unity, as in Eqn. (10.3.15), the factor
disappears when the image |ζ|² is taken.

Intra-Pulse Range Variation

As we have discussed in Section 4.1.1, the approximation that the target range
is a constant value R_n during the time of pulse period n is reasonable in the
case of a side looking radar, and in the case of the longer wavelengths, in
particular at L-band. However, for a higher frequency system, X-band, say, and
particularly for a system with considerable forward squint, this may not be the
case. We need to examine the approximation again, and to describe a means
for compensation of the resulting effects if it is not well satisfied.

As in Section 4.1.1, suppose that v_r(t) is the signal received at time t in
response to a unit point target. This is the value of the signal transmitted at
some time t′ which is earlier than t by the two-way transit time T. The time T
in general depends on t, or equivalently on t′: T = T(t′). Let R(t) be the slant
range of the radar from the target at time t. Then the total travel time T(t′)
must satisfy

    cT(t′) = R[t′ + T(t′)] + R(t′)    (10.3.21)

Since the transit time T is not constant, the received signal v_r(t) is not simply
a time shifted version of the transmitted signal s(t), but rather has a different
waveform, which furthermore depends on the specific form of the function R(t).
Just as was the case with matched filter range compression, this can introduce
a difficulty. The matter involves further consideration of the form of the delay
function T(t′) and the way in which it affects the received signal waveform v_r(t).
We consider only signals in the baseband.

To that end, let us consider the spectrum of the received signal:

    V_r(f) = ∫_{−T_p/2}^{T_p/2} s(t′) exp{−j2πf[t′ + T(t′)]} dt′    (10.3.22)

We can use an expansion of T(t′) around the time of launch of the midpoint
(say) of the pulse, taken as t′ = 0. Using Eqn. (10.3.21), and defining R_1 and R_r
as the ranges to target at the times of transmission and reception of the midpoint
of the pulse, i.e., R_1 = R(0) and R_r = R[T(0)], we obtain from Eqn. (10.3.21)

    T(0) = T_0 = (R_1 + R_r)/c

    Ṫ(0) = τ_0 = (Ṙ_1 + Ṙ_r)/(c − Ṙ_r)    (10.3.23)
Using these, the factor in the phase in the exponent of the integrand of the
received spectrum Eqn. (10.3.22) becomes, to first order,

    t′ + T(t′) = T_0 + (1 + τ_0)t′    (10.3.24)

The frequency domain deramp operation applied to the spectrum Eqn. (10.3.22)
yields

    G(f) = {∫_{−T_p/2}^{T_p/2} s(t′) exp{−j2πf[t′ + T(t′)] + j2πf t_an} dt′}/S(f)    (10.3.25)

Using Eqn. (10.3.24), we have

    G(f) = exp[−j2πf(T_0 − t_an)] {∫_{−T_p/2}^{T_p/2} s(t′) exp[−j2πf(1 + τ_0)t′] dt′}/S(f)    (10.3.26)

that is,

    G(f) = exp[−j2πf(T_0 − t_an)] S[(1 + τ_0)f]/S(f)    (10.3.27)

For the linear FM pulse

    s(t) = exp(jπKt²),   |t| ≤ T_p/2

the spectrum Eqn. (3.2.29) is

    S(f) = exp(−jπf²/K)

to within a constant. From Eqn. (10.3.27) we have (dropping the quadratic term):

    G(f) = exp(j2π r_n · R_t) S[(1 + τ_0)f]/S(f)    (10.3.29)

Since approximately (1 + τ_0)² = 1 + 2τ_0, and since approximately

    τ_0 = 2Ṙ/c

the ratio in Eqn. (10.3.29) is S[(1 + τ_0)f]/S(f) = exp(−j2πτ_0 f²/K), a
quadratic phase error across the pulse band.
10.3.3
The deramp operation described in Section 10.3.2, in which a linear phase term
exp(j2nftan), with phase proportional to the range Ran = ctan/2 of the radar
from a reference point, is subtracted from the data spectrum, obviates the
problem of range migration correction in forming the image in polar processing.
In addition, it yields a kernel (Green's function) in Eqn. (10.3.20) which is
trivially inverted (by Fourier transformation). However, in order to carry out
the procedure, it is necessary to know the range Ran of the radar antenna from
the reference point (the origin in Fig. 10.11), and to know that pulse by pulse.
Any errors in this range, due perhaps to errors in the navigation system of an
aircraft, or tracking errors in the case of a satellite platform, will introduce
phase errors into the deramped data corresponding to Eqn. (10.3.20), and
degrade the image. Following the work of Walker (1980), who describes the
effects of such errors on the image, and drawing upon the subaperture correlation
procedure for autofocus described in Section 5.3.2 in reference to the rectangular
algorithm, it is possible to suggest a scheme for the automatic compensation
of some of these motion compensation errors which can arise in polar processing.
Let us first make a specialization of the procedures described in
Section 10.3.2 to the (usual) case of a planar image, so that R_t (Fig. 10.11) is a
two-dimensional vector. This allows use of a two-dimensional data field. Let
us also assume that in the deramp operation leading to Eqn. (10.3.16) a
(measured or presumed) vector R′_an is used, which might not be the same as
the true radar position vector R_an at pulse n. As before, the single pulse data
transform Eqn. (10.3.14) is essentially

    G(f) = S(f) exp(−j4πf R_n/c)    (10.3.31)

where we have neglected any of the Doppler effects discussed in Section 10.3.2.
The deramp operation is carried out with a phase factor exp(j4πf R′_an/c) to
yield data Eqn. (10.3.16) to be Fourier transformed as

    G(f) = exp[−j4πf(R_n − R′_an)/c]    (10.3.32)
Suppose that the measured antenna position is

    R′_an = R_an + ε(t_n)    (10.3.33)

where ε(t) is some error. Then (Fig. 10.11)

    (10.3.34)

where we assume both R_t and ε are small with respect to R′_an, keeping only
first order terms. Then, again to first order,

    (10.3.35)

Figure 10.14  True (R_a) and measured (R′_a) antenna positions with planar target (R_t) region.
    (10.3.36)

We now take explicit account of the assumption that R_t lies in a plane, say the
(x, y) plane, and write

    (10.3.37)

where R̂_ap is the unit vector in the (x, y) plane (Fig. 10.14):

    R̂_ap = i cos(ψ′_an) + j sin(ψ′_an)    (10.3.38)

so that we can write Eqn. (10.3.36) as

    G(f) = exp[j2π(r_p · R_t + 2f R̂_a · ε/c)]    (10.3.39)
For simplicity, we will henceforth drop the subscript n, as well as the prime, so
that all quantities subscripted "a", referring to the antenna position, are
considered as measured values.
This can be done if we assume the antenna position azimuth angle ψ_a(t)
(Fig. 10.14) to be a monotonic function over the full synthetic aperture
observation time.

From Fig. 10.14 we have

    y/x = tan[ψ_a(t)]    (10.3.42)

From this we can determine t for any point (x, y) in the data array, possibly
by table look-up if ψ_a(t) is a complicated function. More usually, ψ_a(t) will
admit a low order rational fraction approximation. For example, if the path of
motion of the radar vehicle is nominally a straight line, so that

    x_a(t) = x_0 + ẋ_0 t,   y_a(t) = y_0 + ẏ_0 t    (10.3.43)

then

    tan[ψ_a(t)] = (y_0 + ẏ_0 t)/(x_0 + ẋ_0 t)    (10.3.44)

For a specified point in the data array, Eqn. (10.3.44) can be solved for t as an
explicit function of y/x. The same is the case if x_a(t), y_a(t) are quadratic, or
can at least be so approximated over the span of time of interest in the process.
In any case, from the measured radar position R′_a relative to the reference point,
which corresponds to a particular data index r_p, values of t(x, y) can be found
for any index in the data array.

Let us now consider the distortion term in the data G(f) of Eqn. (10.3.39).
We first consider an expansion of ε(t) to some adequate order about some
nominal time t_0, taken as t = 0:

    (10.3.45)

where for illustration we truncate at the second order. The distortion term in
Eqn. (10.3.39) is then

    (4πf/c) R̂_a · ε = (4πf/c){k̂ cos(α) + [sin(α)/r_p] r_p} · ε
                   = 2π r_p · ε + (4πf/c) k̂ · ε cos(α)    (10.3.46)

where we use Eqn. (10.3.37) and

    r_p = (2f/c) sin(α)    (10.3.41)

and note from Eqn. (10.3.40) that R̂_ap = r̂_p = r_p/r_p. Substituting the
expansion Eqn. (10.3.45) into the first term of Eqn. (10.3.46), it becomes
2π[x_p ε_x0 + ⋯], and we define

    (10.3.47)

In Eqn. (10.3.47) we recall that t(x_p, y_p) is a known function, and note that
g(x_p, y_p) depends only linearly on all the parameters of the motion error as
written in Eqn. (10.3.45).

Now consider some point (x_k, y_k) in the data array. Expanding g(x_p, y_p) of
Eqn. (10.3.47) at that point, we have

    (10.3.48)

where (x, y) are the deviations of (x_p, y_p) from (x_k, y_k). If we now make an
image using the data over the subregion (subaperture) locally around (x_k, y_k),
a target at (x_i, y_i) will appear in the image displaced by the phase error terms
in Eqn. (10.3.36) which are linear in the data coordinates. Those are the linear
terms of Eqn. (10.3.46). The target point will therefore appear with image
coordinates

    x_ik = x_i + ε_x0 + ∂g/∂x_k
    y_ik = y_i + ε_y0 + ∂g/∂y_k    (10.3.50)

The left sides of these two equations can be measured. The right sides are linear
in the error parameters ε_x0, ε_y0, .... Therefore, by computing at least as
many correlation pairs as there are parameters in the model Eqn. (10.3.45) to
be determined, we can solve for the error parameters, perhaps using a least
squares procedure if the set of equations Eqn. (10.3.50) is over-determined.
With specific values of the coefficients in Eqn. (10.3.45) in hand, the function
g(x_p, y_p) of Eqn. (10.3.47) is fully determined, again using the function t(x_p, y_p),
perhaps by a look-up procedure on tan[ψ_a(t)]. The data function Eqn. (10.3.36)
can then be compensated by multiplying the data array entries point by point
by the compensator

    (10.3.52)

The image is thereby fully corrected for the distortions due to motion
compensation errors, except that it appears with a constant position shift
Eqn. (10.3.51) in accord with the constant vehicle position offset.

In carrying out the procedure indicated, values for the coefficients in the
derivatives ∂g/∂x_k, ∂g/∂y_k are needed in Eqn. (10.3.50). These involve the
derivatives dt/dx_p, dt/dy_p, which may be known explicitly if the motion of the
vehicle was taken as a simple approximation such as in Eqn. (10.3.43). Otherwise,
we can write

    d(y_p/x_p)/dx_p = d{tan[ψ_a(t)]}/dx_p = {d tan[ψ_a(t)]/dt}(dt/dx_p)    (10.3.53)

In this the left side is a simple function of the point (x_p, y_p), which can be taken
specifically at any of the subaperture center points (x_k, y_k) of interest. The first
term on the right can be found, numerically if necessary, from the vehicle
trajectory in the vicinity of the subaperture points. The needed term dt/dx_k is
then calculated at the point (x_k, y_k). A similar procedure yields the derivative
dt/dy_k.

Polar processing in general, even without autofocus considerations, involves
a considerable amount of data interpolation. The ultimate image formation,
the Fourier transform of the data field G(x_p, y_p), will be done digitally using
the FFT. It is therefore necessary that data values be available referred to a
rectangular grid on the data array. In contrast, the uniformly spaced sampling
done by the video digitization for each pulse produces values uniformly indexed
along rays in the data array, with the rays not in general parallel, but rather
in the polar format. Without careful consideration of the computational
operations, it is difficult to make a clear statement about the relative desirability
of polar processing and the rectangular algorithm. The trade-offs involved will
also depend on the radar system deployment, specifically the slant range, squint
angle, and swath size. This algorithm is commonly used in airborne SAR systems
for target detection.

REFERENCES

Ausherman, D. A., A. Kozma, J. L. Walker, H. M. Jones and E. C. Poggio (1984).
"Developments in radar imaging," IEEE Trans. Aero. and Elec. Sys., AES-20(4),
pp. 363-400.
APPENDIX A

DIGITAL SIGNAL PROCESSING

A.1

The first part of the definition of a linear system is that the system output in
response to the sum of two inputs is the sum of the outputs in response to the
two inputs taken separately. Symbolically, if the system operation is expressed
as O( ), and if we choose to think of the inputs and outputs as time functions
f(t) and g(t) respectively, then a system is linear (almost) if and only if

    O[f_1(t) + f_2(t)] = O[f_1(t)] + O[f_2(t)]    (A.1.1)

for any inputs f_1, f_2 from the class of functions for which the system output is
defined. (The system output must be well defined, in the sense that an input
f(t) uniquely determines the output g(t), although the reverse need not be true.)
It follows from Eqn. (A.1.1) that, for integer m, n, we have:

    mO[f] = O[mf],   O[f] = O[n(f/n)] = nO[f/n]    (A.1.3)

so that

    mO[f/n] = O[(m/n)f] = (m/n)O[f]

That is, such a system is homogeneous over the rational numbers. Although
this is good enough for practical purposes, strictly speaking one must specify
homogeneity as a separate property in the definition of a linear system. That
is, in addition to Eqn. (A.1.1) we require independently that

    O[af_1(t)] = aO[f_1(t)]    (A.1.2)

for arbitrary scalar a. (Note that the output of a linear system must be identically
zero if the input is zero.)

For linearity to hold we do not require that

    O[f(t − t_1)] = g(t − t_1)

If this latter property is true, that is, if a time shift of the input causes only a
corresponding time shift of the output, the system is in addition called stationary.
Stationarity, although convenient, is not a fundamental property on a par with
linearity. Without stationarity, the world of signal processing can proceed
relatively unimpeded, but without linearity considerable complications ensue.

Impulse Response and Convolution
Define the impulse response h(t|t′) of the system as the output in response to
a unit impulse applied at time t′: h(t|t′) = O[δ(t − t′)]. An arbitrary input may
be written

    f(t) = ∫_{−∞}^{∞} f(t′) δ(t − t′) dt′    (A.1.4)

The linearity properties Eqn. (A.1.1) and Eqn. (A.1.2), and the definition of the
impulse response h(t|t′), then yield at once

    g(t) = O[f(t)] = O[∫_{−∞}^{∞} f(t′) δ(t − t′) dt′] = ∫_{−∞}^{∞} f(t′) h(t|t′) dt′    (A.1.5)

This is the convolution integral, expressing the output g(t) of a linear system
in terms of the input f(t) and the impulse response h(t|t′).
For a stationary system the impulse response depends only on the difference
of its arguments:

    h(t|t′) = O[δ(t − t′)] = h(t)|_{t→t−t′} = h(t − t′)    (A.1.6)

(In the last step of Eqn. (A.1.6) we have used a common abuse of notation in
designating both a function of two variables t, t′ and a function of one variable
t by the same letter h.) For a stationary system, the convolution integral
Eqn. (A.1.5) then becomes

    g(t) = ∫_{−∞}^{∞} f(t′) h(t − t′) dt′    (A.1.7)

which has the graphical interpretation shown in Fig. A.1. By a change of variable,
Eqn. (A.1.7) becomes also

    g(t) = ∫_{−∞}^{∞} h(t′) f(t − t′) dt′    (A.1.8)

Figure A.1

In this form it is clear that realizability is unnecessary in this sense. The system
in effect can look ahead if we introduce some time delay in the processing and
store the input data for that length of time.

System Transfer Function

In any system analysis problem, matters proceed more simply if the waveforms
of interest, such as f(t) and g(t) in Eqn. (A.1.8), are expressed in terms of the
eigenfunctions of the operators of the system in question. In the case of a linear
stationary system, for an input

    f(t) = exp(st)

where s is an arbitrary complex number, from Eqn. (A.1.8) we have the output
(assuming convergence of the integral):

    g(t) = ∫_{−∞}^{∞} h(t′) exp[s(t − t′)] dt′ = exp(st) H(s)    (A.1.9)

where

    H(s) = ∫_{−∞}^{∞} h(t′) exp(−st′) dt′    (A.1.10)

From Eqn. (A.1.9), for any s the function exp(st) is an eigenfunction of the
system. The corresponding eigenvalue is H(s) of Eqn. (A.1.10). We then hope
to be able to find coefficients F(s) such that an arbitrary input function can be
expressed in terms of the set of eigenfunctions exp(st) by the linear combination

    f(t) = ∫ F(s) exp(st) ds    (A.1.11)

If this can be done, then the corresponding output function has the expression

    g(t) = O[f(t)] = O[∫ F(s) exp(st) ds] = ∫ F(s) O[exp(st)] ds
         = ∫ F(s) H(s) exp(st) ds = ∫ G(s) exp(st) ds    (A.1.12)

where we define

    G(s) = H(s) F(s)    (A.1.13)
Thus convolution in the time domain, Eqn. (A.1.8), has been transformed into
multiplication in the eigenfunction ("frequency") domain, Eqn. (A.1.13).

For the class of functions satisfying

    ∫_{−∞}^{∞} |f(t)| dt < ∞    (A.1.14)

which suffices for our needs, the set exp(jωt), |ω| ≤ ∞, is appropriate, and leads
to the Fourier transform theory. Then we have the Fourier transform pair:

    f(t) = F⁻¹[F(jω)] = ∫_{−∞}^{∞} F(jω) exp(jωt) dω/2π    (A.1.15)

    F(jω) = F[f(t)] = ∫_{−∞}^{∞} f(t) exp(−jωt) dt    (A.1.16)

Note that F(jω) = F*(−jω) if f(t) is real, from Eqn. (A.1.16). For a periodic
function we have instead the Fourier series

    f(t) = Σ_{n=−∞}^{∞} f_n exp(jn2πt/T)    (A.1.17)

with

    f_n = (1/T) ∫_{−T/2}^{T/2} f(t) exp(−jn2πt/T) dt    (A.1.18)

For such a periodic input to a linear stationary system, the output coefficients
follow from the transfer function as

    g_n = H(jn2π/T) f_n    (A.1.19)

A.2  Signal Representation

A waveform f(t) is bandlimited to a band of total width B about a center
frequency f_c if

    |F(jω)| = 0,   |f − f_c| > B/2    (A.2.1)
It was known very early that discrete samples taken at any rate f_s > B sufficed
to represent exactly such a bandlimited waveform. The minimum sampling rate
is called the Nyquist frequency. Shannon (1949) gave a proof of this fact, and the
result is sometimes called the Shannon sampling theorem. The theorem is stated
as follows.

Let a signal f(t) be integrable (Eqn. A.1.14) and bandlimited as in
Eqn. (A.2.1), and let f_s > B be the sampling frequency. Then exactly (there is
no interpolation error):
    f(t) = Σ_{n=−∞}^{∞} f(n/f_s) sinc[πf_s(t − n/f_s)]    (A.2.2)

where

    sinc(u) = [sin(u)]/u    (A.2.3)

Eqn. (A.2.2) expresses the bandlimited function f(t) at any arbitrary t exactly
in terms of the infinite sequence of its samples f_n = f(n/f_s) at discrete times
t_n = n/f_s. Equation (A.2.2) is also Whittaker's formula for interpolation of
bandlimited functions.

Since the representation Eqn. (A.2.2) is exact for a bandlimited function, and
since the function on the right is analytic, we can conclude that no bandlimited
function (not identically zero) can vanish except at isolated points. Thus, a
strictly bandlimited function which vanishes everywhere outside some interval
|t| ≤ T/2 is an impossibility. Nonetheless, a bandlimited function may well
vanish at a countably infinite set of isolated points, as witness Whittaker's
function Eqn. (A.2.2) with all f(n/f_s) set to zero except for |n| ≤ N_0. Further,
since the interpolating functions Eqn. (A.2.3) have a bound which decays
monotonically with increasing u, beyond some distance away from the first and
last nonzero samples the bandlimited function so constructed becomes and
remains small, although not strictly zero.

Whittaker's interpolation formula Eqn. (A.2.2) is valid without error only for
functions which are strictly bandlimited, Eqn. (A.2.1), and only provided the
sampling frequency f_s > B. A different view of the same result gives insight
into the errors which are introduced if either of these requirements is violated.
Suppose then that f(t) is any function, bandlimited or not, having a Fourier
transform F(jω). Let f_s be some arbitrary sampling frequency, and let
f_n = f(n/f_s) be the time samples of f(t). Let these be encoded into a
mathematical construct called (historically) the "sampled signal":

    f_s(t) = Σ_{n=−∞}^{∞} f_n δ(t − n/f_s)    (A.2.4)

This sampled signal has a Fourier transform

    F_s(jω) = Σ_{n=−∞}^{∞} f_n exp(−jωn/f_s) = F(z)|_{z=exp(jω/f_s)}    (A.2.5)

which is a periodic function of frequency f with period f_s. The last notation
on the right in Eqn. (A.2.5) introduces incidentally the Z-transform of the
sequence of numbers f_n:

    F(z) = Σ_{n=−∞}^{∞} f_n z^{−n}    (A.2.6)

where

    f_n = ∮_{|z|=1} F(z) z^{n−1} dz/2πj    (A.2.7)

displaying incidentally the inversion formula for the Z-transform. On the other
hand, from the inversion formula Eqn. (A.1.15) for the Fourier transform, we
must have

    f_n = f(n/f_s) = Σ_{k=−∞}^{∞} ∫_{−f_s/2+kf_s}^{f_s/2+kf_s} F(jω) exp(jωn/f_s) df
        = ∫_{−f_s/2}^{f_s/2} Σ_{k=−∞}^{∞} F[j(ω + kω_s)] exp[j(ω + kω_s)n/f_s] df    (A.2.8)

Comparing Eqn. (A.2.8) with the corresponding inversion

    (1/f_s) ∫_{−f_s/2}^{f_s/2} F_s(jω) exp(jnω/f_s) df = f_n

of Eqn. (A.2.5), we conclude that

    F_s(jω) = f_s Σ_{k=−∞}^{∞} F[j(ω + kω_s)]    (A.2.9)
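Whittaker's formula Eqn. (A.2.2) can be exercised numerically. The sketch below uses an assumed test signal (itself a sinc pulse, strictly bandlimited) sampled above the Nyquist rate; with a finite number of samples the reconstruction is only approximately exact, as the truncation discussion above predicts:

```python
import math

def sinc(u):
    """sinc(u) = sin(u)/u, as in Eqn. (A.2.3)."""
    return 1.0 if u == 0.0 else math.sin(u) / u

def whittaker(samples, n0, fs, t):
    """Whittaker interpolation, Eqn. (A.2.2): reconstruct f(t) from the
    samples f(n/fs); samples[k] holds f((n0 + k)/fs)."""
    return sum(f_n * sinc(math.pi * fs * (t - (n0 + k) / fs))
               for k, f_n in enumerate(samples))

# Test signal bandlimited to |f| <= 1 Hz (B = 2 Hz), sampled at fs = 4 > B
fs = 4.0
f = lambda t: sinc(2.0 * math.pi * t)
samples = [f(n / fs) for n in range(-800, 801)]
value = whittaker(samples, -800, fs, 0.3)    # off-sample evaluation point
```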
Within the baseband, the spectrum of the sampled signal may be returned to
that of the analog signal by the ideal low pass reconstruction filter

    H(jω) = 1/f_s,   |f| ≤ f_s/2
          = 0,   elsewhere    (A.2.10)

Figure A.2  Bandlimited spectrum (solid line); replications due to sampling (broken lines), without aliasing; and reconstruction filter.

By linearity, the response of the filter Eqn. (A.2.10) to the input f_s(t) of
Eqn. (A.2.4) is

    g(t) = Σ_{n=−∞}^{∞} f_n h(t − n/f_s) = Σ_{n=−∞}^{∞} f_n sinc[πf_s(t − n/f_s)]

which is again Whittaker's formula Eqn. (A.2.2).
Figure A.3  Spectrum of bandlimited signal and replications due to sampling. Aliasing present due to f_s < B.
If f_s < B, some of the shifted replicas in Eqn. (A.2.9) overlap into the baseband
of F_s(jω) (Fig. A.3), and will be passed by the low pass interpolation filter
Eqn. (A.2.10). This is the notorious "aliasing" effect, so called because energy
content of f(t) which is actually at frequencies above f_s/2 is mistaken by us
as belonging to frequencies below f_s/2. The corresponding energy is traveling
under an assumed name. This problem is impossible to recover from once
induced, so that it is essential to include an appropriate bandlimiting filter in
the processing just ahead of any sampling operation.
The representation Eqn. (A.2.9) of F_s(jω) in terms of shifted replicas of F(jω)
is the starting point for analyses which seek to estimate or bound the errors
we make in interpolating using Whittaker's formula, Eqn. (A.2.2), in the case
that the samples f(n/f_s) are not properly obtained (either because we are
sampling a function which is not in fact bandlimited, or because we are sampling
too slowly). Papoulis (1966) has developed a number of such results. Usually
the SAR data system will have been designed such that aliasing is not a problem
in the radar receiver data. However, the azimuth Doppler signal is bandlimited
only by the falloff of the azimuth antenna pattern, and some analysis of aliasing
is needed before setting the Doppler sampling rate (the PRF). This is considered
in terms of azimuth ambiguity analysis in Section 6.5.1.
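The aliasing mechanism is easy to exhibit numerically: a tone above f_s/2 produces exactly the same samples as the baseband tone it folds onto. (The frequencies below are illustrative, not from the text.)

```python
import math

fs = 10.0                 # sampling rate, Hz
f_true = 7.0              # tone above fs/2 = 5 Hz
f_alias = fs - f_true     # folds onto 3 Hz in the baseband

# cos(2*pi*7*n/10) = cos(2*pi*n - 2*pi*3*n/10) = cos(2*pi*3*n/10),
# so the two tones are indistinguishable from their samples alone.
x_true  = [math.cos(2 * math.pi * f_true  * n / fs) for n in range(20)]
x_alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(20)]
```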
Now that we have in hand the fundamental result that the transform F_s(jω),
Eqn. (A.2.9), of the sampled signal f_s(t), Eqn. (A.2.4), and the transform F(jω)
of the analog signal f(t) are in fact identical (up to scale) on the baseband
|f| ≤ f_s/2, provided bandlimiting and adequate sampling rate hold, we can
develop some useful relations allowing computation of system output functions
in terms of the discrete samples of the input.
A.3
DISCRETE CONVOLUTION
The main operations involved in SAR image formation are filtering of signals
by linear stationary systems. If we regard an input signal as a function f(t)
of continuous time, and if the filter is a stationary system with impulse response
h(t), then the output is exactly the ("linear") convolution, Eqn. (A.1.8):

    g(t) = ∫_{−∞}^{∞} h(t′) f(t − t′) dt′    (A.3.1)

If both f(t) and h(t) are time sampled at a rate f_s, then certainly the integral
is approximated by the sum

    g_k = g(k/f_s) ≈ Σ_{m=−∞}^{∞} h[(k − m)/f_s] f(m/f_s) (1/f_s)    (A.3.2)

which is the discrete convolution of the two sample sequences h(n/f_s), f(n/f_s),
scaled by 1/f_s.
Discrete Convolution of Bandlimited Signals

The key fact is that, if both f(t) and h(t) are bandlimited, such that
F(jω) = H(jω) = 0, |f| > B/2, and if the sampling is at or above the Nyquist
rate, f_s > B, then Eqn. (A.3.2) is exact. Since the output g(t) of a linear analog
system with bandlimited input is necessarily also bandlimited, then its exact
samples g_k, computed by Eqn. (A.3.2), suffice to reconstruct the complete output
g(t) exactly at all times, in principle by using Whittaker's interpolation formula,
Eqn. (A.2.2). Hence, in the case of a bandlimited input, the exact analog output
g(t) can be computed in terms of time samples of the input. (If the system
function H(jω) is not bandlimited, the samples h_n of its impulse response are
not used in Eqn. (A.3.2). Rather, it is necessary to replace the system by
its bandlimited companion.)

That the discrete convolution Eqn. (A.3.2) is exact for bandlimited inputs
follows at once from the fact that, in the band |f| ≤ B/2, from Eqn. (A.2.5)
and Eqn. (A.2.9) we have
    f_s F(jω) = Σ_{n=−∞}^{∞} f_n z^{−n}    (A.3.3)

where z = exp(jω/f_s), and similarly for H(jω). Then in the same band we have

    f_s G(jω) = f_s H(jω) F(jω) = (1/f_s) Σ_{m,n=−∞}^{∞} f_m h_n z^{−n−m}
              = Σ_{k=−∞}^{∞} {(1/f_s) Σ_{m=−∞}^{∞} h[(k − m)/f_s] f(m/f_s)} z^{−k}
              = Σ_{k=−∞}^{∞} g_k z^{−k}    (A.3.4)

with g_k exactly as in Eqn. (A.3.2).

The pair

    f_n = (1/N) Σ_{k=0}^{N−1} F_k exp(j2πkn/N),   0 ≤ n ≤ N − 1

    F_k = Σ_{n=0}^{N−1} f_n exp(−j2πkn/N),   0 ≤ k ≤ N − 1    (A.3.5)

is an identity in the two sets of numbers {f_n}, {F_k}, which are in general
complex. This pair is the discrete Fourier transform (DFT) and its inverse. In
case the numbers f_n, 0 ≤ n < N, are the only nonzero samples of a properly
sampled bandlimited function, then the numbers F_k, 0 ≤ k < N, have the
interpretation (Eqn. (A.2.5) and Eqn. (A.2.9)):

    F_s(j2πk f_s/N) = F_k    (A.3.6)

They are therefore uniformly spaced samples of the spectrum F(jω) on the band
|f| < f_s/2.
Let us now investigate what Oppenheim and Schafer (1975) have called the
"periodic convolution" of two sequences f_n, h_n which vanish except on the base
interval [0, N − 1]. The periodic convolution can be calculated using discrete
Fourier transforms Eqn. (A.3.5), and with care can be made identical to the
linear convolution Eqn. (A.3.2). The periodic convolution is defined as the
convolution of sequences f̃_n, h̃_n which are the results of extending f_n, h_n from
the base span periodically, with period N:

    g̃_m = Σ_{n=0}^{N−1} h̃_{m−n} f̃_n    (A.3.7)

Expressing f̃_n, h̃_n through the inverse transforms of Eqn. (A.3.5), the DFT of
g̃_m is

    G̃_k = (1/N²) Σ_{m,n,k′,k″=0}^{N−1} H_{k′} F_{k″} W^{−km+(m−n)k′+nk″}
        = (1/N) Σ_{m,k′=0}^{N−1} H_{k′} F_{k′} W^{m(k′−k)} = H_k F_k    (A.3.8)

where we defined W = exp(j2π/N). Thus we have a transform analysis of the
periodic convolution operation:

    g̃_n = D⁻¹{D[f_n] D[h_n]}    (A.3.9)

where D and D⁻¹ indicate the operations Eqn. (A.3.5) of taking the discrete
Fourier transform and its inverse. The sequence g̃_n results as the indicated DFT
operations on the sequences f_n, h_n, which vanish outside [0, N − 1], but which
are identical to f̃_n, h̃_n on that interval. The sequence g̃_n so computed, the periodic
convolution of the sequences f̃_n, h̃_n, is the "circular" convolution of the sequences
f_n, h_n, rather than the (linear) convolution Eqn. (A.3.2) of the sequences f_n, h_n.

Figure A.4  Time waveforms in case of linear convolution realized by circular convolution using DFT. Case of inadequate sample span.

The situation is made clear by the diagrams of Fig. A.4. This shows the two
periodic sequences f̃_n, h̃_n whose convolution g̃_n is obtained by inverting the
coefficient sequence F_k H_k. The heavy outlining indicates the base periods of f̃_n
and h̃_n, which are the only spans over which f_n and h_n have nonzero values.
Any value of g̃_n on the base interval [0, N − 1] involves both values of h̃_n
arising from the base interval and values arising from a neighboring period of
h̃_n, whereas the values of g_n on the base interval involve only values of h_n on
the base interval. Therefore g_n and g̃_n will be different numbers, even on the
base interval. Further, the linear convolution values g_n are potentially nonzero
for 0 ≤ n ≤ 2(N − 1). Since the values g̃_n simply repeat beyond n = N − 1, the
values of g_n for N ≤ n ≤ 2(N − 1) are not available from the circular convolution.
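The difference between the circular and the linear convolution, and its disappearance once the period is extended to N ≥ L + M − 1 (developed below), can be checked directly. In this sketch the circular convolution is computed by its defining sum rather than by the DFT route of Eqn. (A.3.9):

```python
def linear_conv(f, h):
    """Direct (linear) convolution, the discrete sum of Eqn. (A.3.2)."""
    g = [0.0] * (len(f) + len(h) - 1)
    for m, fm in enumerate(f):
        for n, hn in enumerate(h):
            g[m + n] += fm * hn
    return g

def circular_conv(f, h, N):
    """N-point circular convolution of zero-padded copies of f and h."""
    fp = f + [0.0] * (N - len(f))
    hp = h + [0.0] * (N - len(h))
    return [sum(fp[m] * hp[(k - m) % N] for m in range(N)) for k in range(N)]

f, h = [1.0, 2.0, 3.0], [4.0, 5.0]     # L = 3, M = 2
g_lin = linear_conv(f, h)              # length L + M - 1 = 4
g_bad = circular_conv(f, h, 3)         # N = 3 < 4: wraparound corrupts g[0]
g_ok  = circular_conv(f, h, 4)         # N = L + M - 1: equals g_lin
```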
Linear Convolution Realized as Circular Convolution
Figure A.5  Linear convolution realized by circular convolution. Sample span extended to avoid circulatory replications.

The remedy (Fig. A.5) is to extend the period N sufficiently such that, for
computation of the last value of g̃_n, the troublesome replication of h̃_n in a
neighboring period interval has not yet moved in to take part in the computation,
while the first replication of f_n after the base period has not yet begun. If
f_n = 0 outside the span 0 ≤ n ≤ L − 1, and if h_n = 0 outside the span
0 ≤ n ≤ M − 1, then the linear convolution sequence g_n in Eqn. (A.3.2) has
nonzero values at most over 0 ≤ n ≤ L + M − 2. We must have N ≥ L + M − 1
in order that the last value g_n of interest not be replaced by a replication of g_0.
This also assures that the first replication of h_n outside the base period has just
not begun to overlap the values of f_n in the base period. Thus, for g̃_n and g_n
to be identical over the base period 0 ≤ n ≤ N − 1, it is both necessary and
sufficient that N be chosen for the computation such that N ≥ L + M − 1.
Thereby all possible nonzero values of g_n occur on the base span, and these are
identical to the values g̃_n. The DFT procedure Eqn. (A.3.9) then computes the
linear convolution Eqn. (A.3.2).

Another way of saying this is that, if we have sequences f_n, h_n of lengths L
and M, respectively, and if we attempt to compute N values g̃_n, 0 ≤ n ≤ N − 1,
of their linear convolution by the discrete Fourier transform procedure, the
transform size must satisfy N ≥ L + M − 1.

Figure A.6  "Overlap-add" method of filtering a long data stream using N-point transform.

To filter an ongoing data stream f_n, the input may be segmented (Fig. A.6)
into contiguous blocks of length L,

    f_n^i = f_{iL+n},   0 ≤ n ≤ L − 1    (A.3.10)
Then certainly, by linearity,

    g_n = Σ_i g_{n−iL}^i    (A.3.11)

where g_n^i is the output of the system in response to the ith input segment of
Eqn. (A.3.10). If the impulse response sequence h_n is of length M, we need only
choose some convenient value N ≥ L + M − 1 to carry out the component
convolutions in Eqn. (A.3.11) correctly. The results are simply added. As shown
in Fig. A.6, the result of convolution of the input sequence f_n^i with the impulse
response sequence h_n will generally spread over more than one data interval.
Then part of the output from one ith component convolution Eqn. (A.3.11)
must be added to the outputs of other segments with which it overlaps. This
procedure is called the overlap-add method of filtering an ongoing data stream.
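The overlap-add bookkeeping can be sketched as follows; for clarity the component convolutions are done directly rather than with N-point DFTs, which does not change the segmentation of Eqn. (A.3.10) or the adding of overlapped block outputs in Eqn. (A.3.11):

```python
def direct_conv(f, h):
    """Direct linear convolution, used as the reference result."""
    g = [0.0] * (len(f) + len(h) - 1)
    for m, fm in enumerate(f):
        for n, hn in enumerate(h):
            g[m + n] += fm * hn
    return g

def overlap_add(f, h, L):
    """Overlap-add: segment the input into blocks of length L as in
    Eqn. (A.3.10), filter each block separately, and add the overlapping
    tails of the block outputs as in Eqn. (A.3.11)."""
    M = len(h)
    out = [0.0] * (len(f) + M - 1)
    for i in range(0, len(f), L):
        block_out = direct_conv(f[i:i + L], h)   # spreads to length <= L+M-1
        for n, v in enumerate(block_out):
            out[i + n] += v                      # "add" in the overlap region
    return out

f = [float(n % 7) for n in range(30)]
h = [1.0, -2.0, 0.5]
y_blocked = overlap_add(f, h, L=8)
```

In practice each block convolution would be carried out with an N-point DFT, N ≥ L + M − 1, exactly as in the text.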
Alternatively (Fig. A.7), the input stream can be segmented into blocks of
length L = N, the contemplated transform size. If the impulse response sequence
h_n is of length M, then we know from above that the first L + M − 1 − N = M − 1
points of the calculated convolution are incorrect, and only the remaining
N − M + 1 points represent progress in computing the output data stream g_n.
The procedure is then to shift the input segment M − 1 points farther back on
the input stream than would otherwise be necessary, with the result that the N
input points to each convolution calculation are made up of the last M − 1
points of the previous section, saved and reused, and the first N − M + 1 points
of the data stream f_n which have not yet been used. The good N − M + 1
output points from each calculation are pieced together appropriately to form
the output stream g_n. This procedure is called the overlap-save method, since
the input segments must be overlapped, necessitating saving some of the input
points from one computation to another.
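The overlap-save bookkeeping can be sketched in the same spirit; here each block is circularly convolved by its defining sum (rather than through the DFT), the first M − 1 outputs of each block are discarded, and successive input blocks overlap by M − 1 points:

```python
def direct_conv(f, h):
    """Direct linear convolution, used as the reference result."""
    g = [0.0] * (len(f) + len(h) - 1)
    for m, fm in enumerate(f):
        for n, hn in enumerate(h):
            g[m + n] += fm * hn
    return g

def overlap_save(f, h, N):
    """Overlap-save: each N-point block reuses the last M-1 input points
    of the previous block; the first M-1 points of each N-point circular
    convolution are discarded and the good N-M+1 points are kept."""
    M = len(h)
    step = N - (M - 1)
    x = [0.0] * (M - 1) + list(f)      # prime the stream with M-1 zeros
    hp = list(h) + [0.0] * (N - M)     # zero-pad the filter to length N
    out = []
    pos = 0
    while pos < len(f) + M - 1:
        block = x[pos:pos + N]
        block += [0.0] * (N - len(block))
        circ = [sum(block[m] * hp[(k - m) % N] for m in range(N))
                for k in range(N)]
        out.extend(circ[M - 1:])       # discard the M-1 corrupted points
        pos += step
    return out[:len(f) + M - 1]

f = [float((3 * n) % 5) for n in range(20)]
h = [1.0, -2.0, 0.5]
y_blocked = overlap_save(f, h, N=8)
y_direct = direct_conv(f, h)
```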
Figure A.7  "Overlap-save" method of filtering a long data stream using N-point transform.

It might be noted that, so far as computation time is concerned, we lose by
using the discrete Fourier transform procedures as we have described them to
compute a convolution sum. For suppose we want to convolve an L-point
sequence with an M-point sequence. The output sequence is of length M + L − 1.
Each of the middle L − M + 1 points requires M multiplies and M − 1 adds
for its calculation, while the M − 1 points on each end of the output sequence
require (M − 1)² operations, a total operation count of L(2M − 1) − M − 1,
of order 2ML. If we use the discrete Fourier transform technique, we need to
calculate two N-point transforms, one requiring L multiplies and L − 1 adds
for each of the N values F_k and the other requiring M multiplies and M − 1
adds for each of N values H_k. We then have N multiplies to calculate the G_k,
plus an N-point inverse transform. The gain comes only with the fast Fourier
transform (FFT) algorithm, which has made possible much of what is now called
signal processing, and to which we turn attention in the next section.
A.4

The discrete Fourier transform pair is

    f_n = (1/N) Σ_{k=0}^{N−1} F_k exp(j2πkn/N),   n = 0, N − 1    (A.4.1)

    F_k = Σ_{n=0}^{N−1} f_n exp(−j2πkn/N),   k = 0, N − 1    (A.4.2)

Since we have to do here with finite sums, convergence questions do not arise.
The pair is valid as an identity for complex data numbers f_n as well as for real
data.
Two distinct uses are made of the DFT. The first is spectral analysis, in
which we seek the Fourier spectrum Eqn. (A.1.16) of a time waveform which
is bandlimited to |f| ≤ B/2. As in Eqn. (A.2.9), the signal spectrum F(jω) on
the band exactly equals the scaled spectrum Fs(jω)/fs of the sampled signal
fs(t) of Eqn. (A.2.4), constructed from the samples fn taken at a rate fs > B.
Since in practice only some finite number N of the samples fn are nonzero, the
spectrum function Fs(jω) has sampled values as in Eqn. (A.3.6):
Fs(j2πkfs/N) = Fk,  k = 0, N - 1

Let us consider first the decimation in frequency scheme, and suppose for
simplicity that N is an even number. Decimation in frequency algorithms evolve
by first noting from Eqn. (A.4.2) that

Fk = Σ_{n=0}^{N-1} fn exp(-j2πkn/N) = Σ_{n=0}^{N/2-1} fn exp(-j2πkn/N) + Σ_{n=N/2}^{N-1} fn exp(-j2πkn/N)   (A.4.3)
From this, for m = 0, N/2 - 1,

F_{2m} = Σ_{n=0}^{N/2-1} (fn + f_{n+N/2}) exp[-j2πmn/(N/2)]   (A.4.4)

F_{2m+1} = Σ_{n=0}^{N/2-1} (fn - f_{n+N/2}) exp(-j2πn/N) exp[-j2πmn/(N/2)]   (A.4.5)
Thus the even numbered Fk, Eqn. (A.4.4), are calculated as the N/2 point
transform of the sequence fn + f_{n+N/2}, while the odd numbered Fk, Eqn. (A.4.5),
are the N/2 point transform of the sequence (fn - f_{n+N/2}) exp(-j2πn/N), where
the exponential multipliers are the so-called "twiddle factors". The two N/2
point transforms are further subdivided into N/4 point transforms, and so forth,
until we deal ultimately with two-point transforms.
For each n, the operations in forming the sequences to be transformed as in
Eqn. (A.4.4) and Eqn. (A.4.5):

gn = fn + f_{n+N/2}   (A.4.6)

g′n = (fn - f_{n+N/2}) exp(-j2πn/N)   (A.4.7)
are just those of taking a two-point transform (the case N = 2 of Eqn. (A.4.2)),
followed by adjustment of the second output coefficient by a twiddle factor.
Thus, for say N = 8, we first do four 2-point transforms using (f0, f4), ..., (f3, f7).
The four output coefficients gn of Eqn. (A.4.6) are the inputs to a four-point
transform, and the four twiddled outputs g′n of Eqn. (A.4.7) are the inputs to
another four-point transform. In carrying out the first of these, we first do two
two-point transforms using (g0, g2) and (g1, g3) and twiddle the second output
coefficients of each. This yields numbers (h0, h1), (h′0, h′1), each pair of which is
the input to a final two-point transform. Collecting all these together, we have
done four (N/2) (complex) two-point transforms at each of three (log2 N)
computation stages, with twiddle factors applied at each stage.
Looked at another way, in Eqn. (A.4.4) and Eqn. (A.4.5) we have N/2
two-point transforms, with the resulting coefficients adjusted by twiddle factors
(one of each pair of which is unity), and finally two (N/2)-point transforms on
the resulting adjusted coefficients.

If then N is a power of 2, N = 2^m say, we need m stages of decimation to
get down to the final two-point transforms. Each two-point transform (a
"butterfly") requires two (complex) additions, and since there are altogether
m(N/2) two-point transforms, we require mN complex additions. Applying the
twiddle factors uses N/2 complex multiplications for each stage except the last,
for a total of N(m - 1)/2 multiplications involving twiddle factors. The grand
total of complex operations needed for the N-point transform is thus:

mN + (m - 1)N/2 = (3/2)N(log2 N - 1/3)

Various reorderings of the computation can reduce the operation count even
below this, but the lion's share of the improvement is indicated by the transition
from N² behavior to N log2(N).
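The decimation in frequency recursion of Eqns. (A.4.4) and (A.4.5) can be sketched directly; this is a minimal illustration under our own naming, not an optimized routine:

```python
import cmath

def fft_dif(f):
    """Radix-2 decimation-in-frequency FFT (length must be a power of 2).

    Even-index coefficients are the N/2-point transform of f[n] + f[n+N/2];
    odd-index coefficients are the N/2-point transform of
    (f[n] - f[n+N/2]) * exp(-j*2*pi*n/N), the "twiddle factor" sequence.
    """
    N = len(f)
    if N == 1:
        return list(f)
    half = N // 2
    a = [f[n] + f[n + half] for n in range(half)]
    b = [(f[n] - f[n + half]) * cmath.exp(-2j * cmath.pi * n / N)
         for n in range(half)]
    F = [0] * N
    F[0::2] = fft_dif(a)   # even-numbered F_k, Eqn. (A.4.4)
    F[1::2] = fft_dif(b)   # odd-numbered F_k,  Eqn. (A.4.5)
    return F
```

The recursion reproduces the direct evaluation of Eqn. (A.4.2) while performing only on the order of N log2(N) operations.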
Decimation In Time

Decimation in time algorithms instead split the input sequence into its even-
and odd-indexed samples, so that

Fk = Σ_{m=0}^{N/2-1} f_{2m} exp[-j2πkm/(N/2)] + exp(-j2πk/N) Σ_{m=0}^{N/2-1} f_{2m+1} exp[-j2πkm/(N/2)]   (A.4.8)
Thus, the coefficients in the N-point DFT appear as sums of coefficients in two
DFTs, each of length N/2, with adjustment of the second set by twiddle factors
exp(-j2πk/N). Assuming that N/2 is even, each of these constituent (N/2)-point
DFTs can be calculated by further subdividing the sequences f_{2m} and f_{2m+1}
and carrying out a total of four transforms, each of length N/4. The process
cycles until we finally deal with constituent transforms of length just 2 points.
The total operation count in carrying out the original N-point transform with
this procedure turns out to be precisely the same as for the decimation in
frequency procedure, and in fact the graphs of the computations are duals of one another.
Coefficient Ordering
Since the FFT computation is the heart of fast convolution SAR image
formation algorithms, as well as of so much of signal processing, it is worth
giving some additional discussion of the choices that can be made, as well as
some indication of the more recent developments in the subject of fast
convolution in general.
A.5

We often have to do with data which are real numbers, for example, the time
samples of the real offset video signal resulting from each pulse of a SAR system.
There are two standard ways of computing an FFT for a real data sequence.
In the first (Brigham, 1974, Chapter 10), the N-point transform sequence Eqn.
(A.4.2) of N real numbers fn is computed using a complex FFT routine with
N/2 input points yn = f_{2n} + jf_{2n+1}, n = 0, N/2 - 1. The transform coefficients
are:

Fk = Gk + Hk exp(-j2πk/N),  k = 0, N/2

Fk = F*_{N-k},  k = N/2 + 1, N - 1   (A.5.1)

where

Gk = (Yk + Y*_{N/2-k})/2

Hk = (Yk - Y*_{N/2-k})/2j,  k = 0, N/4

Gk = G*_{N/2-k},  Hk = H*_{N/2-k},  k = N/4 + 1, N/2

with the sequence Yk being the (N/2)-point complex transform of the numbers yn.
Similarly, an N-point inverse transform Eqn. (A.4.1) leading to a real sequence
is computed using an (N/2)-point complex transform as:

f_{2n} = Re(yn)

f_{2n+1} = Im(yn),  n = 0, N/2 - 1   (A.5.2)

with

Gk = (Fk + F*_{N/2-k})/2

Hk = exp(-j2πk/N)(Fk - F*_{N/2-k})/2,  k = 0, N/4

Gk = G*_{N/2-k},  Hk = H*_{N/2-k},  k = N/4 + 1, N/2

In the second way of dealing with real data (Bergland, 1968b), a complex FFT
algorithm is pruned to remove all redundancy in computing a value and its
complex conjugate, and to eliminate all computation of values known to be zero.
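The packing trick of Eqn. (A.5.1) can be sketched with NumPy; a minimal illustration with our own naming, assuming N even:

```python
import numpy as np

def real_fft_via_packing(f):
    """N-point DFT of a real sequence using one complex FFT of length N/2.

    Packs y_n = f_{2n} + j f_{2n+1}, transforms, then separates the spectra
    G_k and H_k of the even and odd samples, as in Eqn. (A.5.1).
    """
    f = np.asarray(f, dtype=float)
    N = len(f)
    M = N // 2
    y = f[0::2] + 1j * f[1::2]             # packed half-length sequence
    Y = np.fft.fft(y)                      # M-point complex FFT
    k = np.arange(M + 1)
    Yk = Y[k % M]                          # Y_M wraps around to Y_0
    Yc = np.conj(Y[(M - k) % M])           # conj(Y_{M-k})
    G = (Yk + Yc) / 2                      # spectrum of even samples
    H = (Yk - Yc) / (2j)                   # spectrum of odd samples
    Fk = G + H * np.exp(-2j * np.pi * k / N)
    # remaining coefficients by conjugate symmetry, F_{N-k} = conj(F_k)
    return np.concatenate([Fk, np.conj(Fk[1:M][::-1])])
```

The output matches a full-length complex FFT of the real sequence while transforming only N/2 complex points.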
and its inverse can be formulated which maps sequence convolution into
transform multiplication. The transform, when realized on a binary machine,
requires no multiplies, N(N - 1) adds, and (N - 1)² circular register shifts for
an N-point input sequence. The algorithm is particularly suitable for fixed point
realization of convolutions of relatively short sequences. If, in addition, in this
transform the prime N is of the form N = 2^m + 1, in which case N is a Fermat
number, the Fermat transform is obtained, which requires only (m + 1)N adds
and no multiplies.
The entire field of fast algorithms for signal processing applications is
discussed in depth and generality in the useful texts by Elliott and Rao (1982)
and by Blahut (1985). The advantages to be gained from use of algorithms
other than traditional FFT procedures of radix 4 + 2 in SAR calculations are
relatively unexplored at this time. We therefore will end our discussion of the
matter here, having pointed out perhaps some possibilities.
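The idea behind such number-theoretic transforms can be illustrated over the small Fermat prime 257, in which the element 4 has order 8. This direct O(N²) sketch (our own construction, not the multiply-free shift arithmetic described above) shows convolution mapping to pointwise products exactly as with the DFT:

```python
def ntt(x, root, p):
    """Number-theoretic transform of x over the integers mod p."""
    N = len(x)
    return [sum(v * pow(root, k * n, p) for n, v in enumerate(x)) % p
            for k in range(N)]

def cyclic_convolve_mod(a, b, p=257, root=4):
    """Length-8 cyclic convolution mod the Fermat prime 257.

    Since 4 has multiplicative order 8 mod 257, it serves as the transform
    kernel; convolution becomes pointwise multiplication of transforms.
    """
    N = len(a)
    A = ntt(a, root, p)
    B = ntt(b, root, p)
    C = [(x * y) % p for x, y in zip(A, B)]
    inv_root = pow(root, -1, p)            # modular inverses exist, p prime
    inv_N = pow(N, -1, p)
    return [(inv_N * v) % p for v in ntt(C, inv_root, p)]
```

All arithmetic is exact integer arithmetic, which is the attraction of these transforms for fixed point hardware.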
A.6
Let us finally consider interpolation of a bandlimited low pass signal l(t), with
spectrum F(jw) which vanishes for Ill> B/2. As always, we assume the signal
to have been sampled at an adequate rate f. > B to produce a sequence
f., = l (n/ f.). Suppose further that all but a finite number N of the samples are
zero. Then we know from Eqn. (A.2.2) that we have exactly
N-1
l(t) =
n=O
(A.6.1)
valid for all t, which provides error free interpolation (or extrapolation) of the
given N samples. Beyond this there is only the question of implementation to
consider.
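Eqn. (A.6.1) is a one-line computation with NumPy's sinc function; a minimal sketch under our own naming:

```python
import numpy as np

def whittaker_interp(samples, fs, t):
    """Evaluate Eqn. (A.6.1): reconstruct f(t) from the N samples f(n/fs).

    Exact for signals bandlimited to |f| <= fs/2 whose only nonzero
    samples are the N given ones.  np.sinc(x) is sin(pi x)/(pi x).
    """
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t - n))
```

For example, a unit sample at n = 3 reconstructs the shifted kernel sin[π(fs t − 3)]/[π(fs t − 3)] exactly.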
We at once remark, however, that Whittaker's formula, Eqn. (A.2.2), is often
not the most reasonable way to carry out such an interpolation computationally.
It is usually more efficient to introduce a time shift in the transform domain,
since we now have at hand an efficient way to compute transform coefficients
(the FFT).

In general, if f(t) is a function with Fourier transform F(jω), the transform
of the delayed version g(t) = f(t - T) is

G(jω) = exp(-jωT)F(jω)   (A.6.2)

Therefore, if f(t) is bandlimited to the band |f| ≤ B/2, so also will be g(t), and
samples of g(t) at the same rate as those of f(t) (fs > B) will suffice for
reconstruction. From Eqn. (A.2.9) and Eqn. (A.6.2), the transforms of the
sampled signals are similarly related:

Gs(jω) = exp(-jωT)Fs(jω),  |f| ≤ B/2   (A.6.3)

so that, at the sample frequencies ω = 2πkfs/N,

Gk = exp(-j2πkfsT/N)Fk,  k = 0, N - 1   (A.6.4)

The sought samples gn = g(n/fs), n = 0, N - 1, are then just the inverse FFT
of the numbers Eqn. (A.6.4).
As a special case, suppose that we want to interpolate to the midpoints of
the original sampling intervals. Then we have T = 1/2fs, and Eqn. (A.6.4)
becomes

gn = (1/N) Σ_{k=0}^{N-1} Fk exp[jπk(2n - 1)/N],  n = 0, N - 1
If we define a sequence F′k by

F′k = Fk,  k = 0, N - 1

F′k = 0,  k = N, 2N - 1

so that F′k is the zero-padded version of Fk, then the inverse FFT of the sequence
F′k is

g′n = (1/2N) Σ_{k=0}^{N-1} Fk exp(jπkn/N),  n = 0, 2N - 1

Apart from the factor 1/2, the even numbered points g′_{2n}, n = 0, N - 1,
reproduce the original samples fn, while the odd numbered points g′_{2n+1},
n = 0, N - 1, are the sought interpolation of the original function f(t).

This procedure obviously generalizes to the case T = 1/mfs, in which case
we compute the N-point FFT of the sequence fn, extend the resulting sequence
Fk by appending zeros to fill out the N-point Fk sequence to a length mN,
and compute the inverse FFT over mN points. The result is a sequence of mN
points g′n, so that we obtain the full set of interpolation points on the grid of
fineness 1/mfs with one operation.
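Zero-padding interpolation is easily sketched with NumPy. One refinement over the one-sided padding written above: for a general real or complex sequence the spectrum is split at N/2 so that the negative-frequency bins stay in place (our own choice of detail; for even N a nonzero bin at exactly N/2 would strictly need to be split in half as well):

```python
import numpy as np

def interpolate_by_zero_padding(f, m):
    """Interpolate N samples onto a grid m times finer via the FFT.

    Transforms f, inserts (m - 1)*N zeros in the middle of the spectrum,
    and inverse transforms over m*N points.  The factor m restores the
    amplitude scaling of the longer inverse transform.
    """
    N = len(f)
    F = np.fft.fft(f)
    half = N // 2
    Fpad = np.concatenate([F[:half], np.zeros((m - 1) * N), F[half:]])
    return m * np.fft.ifft(Fpad)
```

For a tone below the folding frequency the interpolated grid reproduces the underlying sinusoid exactly.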
Then, from

Σ_{i=0}^{pN-1} gi exp(-j2πki/pN) = Σ_{n=0}^{N-1} fn exp(-j2πkn/N),  k = 0, pN - 1

making the change of variable n = i/p, so that the Gk sequence is just the Fk
sequence, but considered over p of its base periods of length N. If then the gi
sequence is low pass filtered in the time domain by a digital filter to remove
spectral components from k = N through k = pN - 1, the result has a spectrum
which is exactly the Fk sequence padded out by zeros to a length pN. The result
therefore is the sequence with the values f(n/fs) interpolated on the mesh with
fineness 1/pfs. Discarding all but every qth sample in the result therefore
accomplishes sample rate increase by the rational factor p/q.
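The rational rate change can be sketched by combining the FFT zero-padding interpolator with decimation; a minimal illustration under our own naming (the frequency-domain padding plays the role of the low pass filter described above):

```python
import numpy as np

def resample_rational(f, p, q):
    """Change the sample rate of f by the rational factor p/q.

    Interpolates onto a grid p times finer by zero padding the spectrum
    (split at N/2), then keeps every qth point of the fine-grid result.
    """
    N = len(f)
    F = np.fft.fft(f)
    half = N // 2
    Fpad = np.concatenate([F[:half], np.zeros((p - 1) * N), F[half:]])
    fine = p * np.fft.ifft(Fpad)           # p*N points at fineness 1/(p*fs)
    return fine[::q]
```

A tone resampled by 3/2 in this way lands exactly on the samples of the same tone at the new rate.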
REFERENCES
Agarwal, R. C. and C. S. Burrus (1974). "Fast one-dimensional digital convolution by
multidimensional techniques," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-22(1),
pp. 1-10.
Bergland, G.D. (1968a). "A fast Fourier transform algorithm using base 8 iterations,"
Math. Computation, 22, pp. 275-279.
Bergland, G. D. (1968b). "A fast Fourier transform algorithm for real-valued series,"
Comm. ACM, 11(10), pp. 703-710.
Blahut, R. E. (1985). Fast Algorithms for Digital Signal Processing, Addison-Wesley,
Reading, MA.
Brigham, E. O. (1974). The Fast Fourier Transform, Prentice-Hall, Englewood Cliffs, NJ.
Burrus, C. S. and P. W. Eschenbacher (1981). "An in-place, in-order prime factor FFT
algorithm," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-29(4), pp. 806-817.
Chu, S. and C. S. Burrus (1982). "A prime factor FFT algorithm using distributed
arithmetic," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-30(2), pp. 217-227.
Cooley, J. W. and J. W. Tukey (1965). "An algorithm for the machine calculation of
complex Fourier series," Math. Computation, 19(90), pp. 297-301.
Crochiere, R. E. and L. R. Rabiner (1981). "Interpolation and decimation of digital
signals - A tutorial review," Proc. IEEE, 69(3), pp. 300-331.
Elliott, D. F. and K. R. Rao (1982). Fast Transforms: Algorithms, Analyses, Applications,
Academic Press, New York.
Gentleman, W. M. and G. Sande (1966), Fast Fourier transforms - for fun and profit,
AFIPS Fall Joint Computer Conf, San Francisco, November 1966, Spartan Books,
Washington, DC, pp. 563-578.
Johnson, H. W. and C. S. Burrus ( 1982). "The design of optimal DFT algorithms using
dynamic programming," Proc. IEEE Inter. Conf Acoust., Speech, and Sig. Proc., Paris
(May), pp. 20-23.
Oppenheim, A. V. and R. W. Schafer (1975). Digital Signal Processing, Prentice-Hall,
Englewood Cliffs, NJ.
Papoulis, A. ( 1966). "Error analysis in sampling theory," Proc. IEEE, 54( 7), pp. 947-955.
Pease, M. C. ( 1968). "An adaptation of the fast Fourier transform for parallel processing,"
J. ACM, 15, pp. 252-264.
Rader, C. M. (1972). "Discrete convolutions via Mersenne transforms," IEEE Trans.
Computers, C-21, pp. 1269-1273.
Shannon, C. E. (1949). "Communication in the presence of noise," Proc. IRE, 37(1),
pp. 10-21
Silverman, H. F. (1977). "An introduction to programming the Winograd Fourier
transform algorithm (WFTA)," IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-25(2), pp. 152-165.
Temperton, C. (1979). "Fast Fourier transforms and Poisson solvers on CRAY-1," pp.
361-379 in: Super-Computers, Vol. 2, Infotech International.
Zohar, S. (1979). "A prescription of Winograd's discrete Fourier transform algorithm,"
IEEE Trans. Acoust., Speech, Sig. Proc., ASSP-27(4), pp. 409-421.
APPENDIX B
The essence of SAR, and the root of its dramatic resolution properties, lies in
the possibility of carrying out compression processing of the Doppler shifted
carrier signal in the azimuth coordinate as the vehicle flies by the target
(Section 1.2.2). For an isolated point target, the waveform of that Doppler
signal, which is the slow (azimuth) time variation of the phase of the output
of the range compression filter, is (Eqn. (4.1.24)):

g(s) = exp[-j4πR(s)/λ]   (B.0.1)

where R(s) is range from radar to target and s is slow time. This signal is
intrinsically sampled by the pulsed nature of the radar, with a sampling frequency
fp which is just the radar PRF. The waveform of this point target response is
needed in order to construct the compression filter. The detailed nature of the
range function R(s) is therefore crucial to the construction of a SAR processor,
and it is that behavior to which this Appendix is devoted.
The range function R(s) for practical geometries is a complicated expression
involving many parameters of the relative motion of satellite and target, the
latter being carried along by the rotating earth, and possibly having some
motion of its own relative to the earth surface. However, because of the limited
beamwidth of the radar antenna pattern, a specific point target creates radar
signal only during a limited span of slow time. That time span, the integration
time of the SAR, is usually small enough that a Taylor series expansion of R(s)
about a nominal center time, say sc, can be terminated after the first few terms
to yield an adequately accurate approximation to the full function R(s).
Typically, only terms through the quadratic in slow time need to be retained,
in which case the azimuth function Eqn. (B.0.1) is a linear FM signal in the
Doppler frequency domain. Therefore, in this Appendix we will seek expressions
for R(s) and its first few derivatives evaluated at the time sc at which the target
in question is in the center of the radar beam. Those lead to the parameters
needed in the azimuth compression filter of a SAR processor.
We will derive three different forms of expression for these derivatives of
R(s) evaluated at beam center, arranged to accord with three different situations
in which one wants to calculate them. First, we will need versions of the
parameter computations which can use the accurate values of satellite position
and velocity obtained by observing the trajectory of the vehicle. Second, for
prediction of the azimuth filter parameters during system design, as well as for
their computation in the case that the satellite orbit and orientation are known
rather precisely, it is useful to have accurate formulas in terms of these quantities.
Third, we need analytical models upon which to base the data fitting procedures
involved in clutterlock and autofocus (Chapter 5).
The position of the satellite is Rs, that of the target Rt, and the vector from
target to satellite is

R(s) = Rs(s) - Rt(s)   (B.1.1)

with magnitude

R(s) = |R(s)|   (B.1.2)

the scalar slant range. It is convenient to expand R(s) as a Taylor series about
some time sc, which will be the time of passage of the nominal beam center
across the target, but which we can take as arbitrary for the time being. Then
we can write

R(s) = Rc + Ṙc(s - sc) + R̈c(s - sc)²/2 + ...   (B.1.3)

and seek the various derivatives of R(s) evaluated at the special time of beam
center on the target, rather than seeking the analytical form of R(s) directly.
From Eqn. (B.1.2), suppressing henceforth the explicit appearance of the slow
time variable s:

R Ṙ = R · Ṙ   (B.1.4)

For generality, suppose that the target moves with respect to the surface of the
rotating earth. Let rt be the target position, with coordinates taken relative to
a set of axes fixed on the rotating earth's surface. That is,

(B.1.5)

Figure B.1
system. Then:

(B.1.6)

in which

(B.1.7)

is the acceleration of the target relative to the earth's surface, and we is the
(constant) earth's angular velocity. Further (e.g. Hay, 1953, p. 80),

(B.1.8)

(B.1.9)

(B.1.10)

so that, in particular, the inertial velocity of the target is

Ṙt = we × Rt + vt   (B.1.11)

From Eqn. (B.1.7), with Eqns. (B.1.8), (B.1.9), and (B.1.10) taken into account,
we obtain

R Ṙ = (Rs - Rt) · (vs - we × Rt - vt)   *(B.1.14)

This expresses the first derivative Ṙ of slant range in terms of the target position
and velocity on the earth and the satellite position and velocity in its orbit.
Differentiating Eqn. (B.1.14), we have

R R̈ + Ṙ² = |vs - we × Rt - vt|² + (Rs - Rt) · As - (Rs - Rt) · [at + 2we × vt + we × (we × Rt)]   (B.1.15)

Using Eqn. (B.1.11) and standard identities, Eqn. (B.1.15) becomes

R R̈ + Ṙ² = |vs|² + |vt|² - 2vs · (we × Rt + vt) + 2(we × Rs) · vt + (we × Rs) · (we × Rt) + (Rs - Rt) · (As - at)   *(B.1.16)

Considering the Doppler signal Eqn. (B.0.1), with phase

φ(s) = -4πR(s)/λ

the instantaneous Doppler frequency is

fD(s) = -2Ṙ(s)/λ

In particular,

fDC = -2Ṙc/λ   (B.1.17)

fR = -2R̈c/λ   (B.1.18)

corresponding to the range function expansion in Eqn. (B.1.3), are the Doppler
center frequency and Doppler rate for use in the SAR processor.
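Equations (B.1.17) and (B.1.18) are direct to evaluate. The numbers below are illustrative assumptions only (roughly L-band, Seasat-like magnitudes), not values taken from the text:

```python
# Doppler centroid and Doppler rate from the range derivatives at beam
# center, Eqns. (B.1.17) and (B.1.18).  All three inputs are assumed,
# illustrative values.
wavelength = 0.235          # L-band wavelength, m (assumed)
R_dot = -50.0               # slant-range rate at beam center, m/s (assumed)
R_ddot = 60.0               # slant-range acceleration, m/s^2 (assumed)

f_DC = -2.0 * R_dot / wavelength    # Doppler center frequency, Hz
f_R = -2.0 * R_ddot / wavelength    # Doppler rate, Hz/s

print(f_DC, f_R)
```

A closing range (negative Ṙ) gives a positive Doppler centroid, and a positive R̈ gives the negative Doppler rate characteristic of the azimuth chirp.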
The expressions Eqn. (B.1.14), Eqn. (B.1.16) indicate explicitly how target
motion relative to the earth surface affects the parameters fDC and fR through
changes in Ṙ(s) and R̈(s). The expressions form the basis for assessment of
defocusing caused by uncompensated target motion, in conjunction with depth
of focus considerations (Section 4.1.3). Henceforth, however, we will assume the
target is a terrain point fixed on the earth surface, and take

at = vt = 0
Satellite Acceleration Given Earth Potential Function
To use the expressions Eqn. (B.1.14) and Eqn. (B.1.16) to obtain the Doppler
parameters fDC and fR, we need to know the motion of the satellite. The tracking
system and the orbit smoothing processor will normally provide the satellite
position and velocity R., v. as functions of slow time s (vehicle flight time),
although some interpolation may be needed between the times at which these
The satellite acceleration As follows from the earth's potential function U:

As = -∇U[Rs(s)]   (B.1.20)

Through terms of second order, the potential function is

U(ρ) = (μ/ρ){1 + (B2/2ρ²)[1 - 3 sin²(ζ)]}   (B.1.21)

where ζ is the latitude of the satellite on a sphere with center at the earth
center. The coefficient is B2 = -4.405 × 10¹⁰ m² (El'yasberg, 1967, p. 199).
With the form of the potential Eqn. (B.1.21), and taking a coordinate system
(Fig. B.1) such that

sin(ζs) = z/Rs

we obtain

As = -(μ/Rs³){[1 + (1.5B2/Rs²)(1 - 5 sin²(ζs))]Rs + (3B2 z/Rs²)k̂}   (B.1.22)

This shows both the nonuniformity and noncentrality of the force field, through
the terms with ζs and k̂, respectively. Higher order terms in the potential function
Eqn. (B.1.21) are available, and could be used for more accurate calculation of
As, or for calculation of Ȧs in a third order expansion of the slant range function
R(s).

The expansion Eqn. (B.1.22), when used in Eqn. (B.1.16), together with
Eqn. (B.1.14), yields the parameters fDC and fR. For example, if we assume a
central force field, B2 = 0, and a target fixed on the earth, we obtain

*(B.1.23)

*(B.1.24)

All quantities here are evaluated at the time sc of passage of the terrain point
of interest through the radar beam center.

The expression Eqn. (B.1.23) exhibits terms due to satellite motion (perceived
due to squint and orbit eccentricity) and earth rotation. Both of these are
generally significant. In expression (B.1.24), however, for rough calculations it
may be adequate to use the approximation

(B.1.25)

where

*(B.1.26)

The expression Eqn. (B.1.25) differs from Eqn. (B.1.24) only in the small
centripetal acceleration and Ṙ² terms, and in the small term (we × Rs) · (we × Rt).
More accurately, Eqn. (B.1.24) defines a speed parameter V for use in
Eqn. (B.1.25) in place of Vst. The matter is discussed in more detail in Section B.4.
Coordinate System
rectangular system with the z axis the axis of rotation of the earth, positive
towards the north pole (Fig. B.1 ). The positive x-axis points in a fixed direction
in inertial space, the direction of the vernal equinox, also called the first point
in Aries, and denoted symbolically as ϒ, the sign of the ram. The earth rotates
on its axis in this fixed coordinate system. The origin of the system can be
regarded as fixed at the center of the earth, so that the system itself moves with
fixed orientation around the sun as the earth travels in its orbit. Since throughout
we neglect the influence of the sun on the satellite, that this coordinate system
moves around the sun is of no concern.
The vernal equinox in this context is a specific direction, rather than a time.
The xy-plane of the equatorial coordinate system of Fig. B.1 is the plane of the
earth's equator. The z-axis, the axis of rotation of the earth, is tilted at nominally
23.5° with respect to the plane of the earth's orbit around the sun. As a result,
for nominally six months of each year the sun lies below the xy-plane, while
for the other six months it is above. At one precise instant each year, to an
observer at the earth's center the center of the sun would appear to pass through
the xy-plane headed into the positive-z hemisphere. That instant, nominally
some time on March 21, is the time of the vernal equinox, and the direction
from earth center to sun center at that instant is called the vernal equinox.
A slight complication arises in use of the vernal equinox as a coordinate
direction. Because of a variety of perturbing effects, the earth's axis of rotation
moves with respect to the plane of the earth's orbit. There is a mean circular
movement (precession) around the cone with central half angle 23.5°, with a
period of 25800 years, and a small wobbling (nutation) about that mean, with
a period of 18.6 years. As a result, the direction of the vernal equinox moves,
and it is necessary to specify a date to which the equatorial coordinate system
in question relates. Until 1984, that was taken as the beginning of 1950, precisely
defined. Since 1984, the year 2000 is the convention. The vernal equinox actually
occurred at the first point (horn) of Aries (the ram) about 2000 years ago, so
that the current equatorial system has an x-axis rotated about (2000/25800) ×
360° ≈ 28° away from that star.
Before considering detailed formulas for target position Rt in Eqn. (B.1.14)
and Eqn. (B.1.15), we will describe determination of the satellite vectors from
orbital elements.
B.2
A satellite which finds itself in a central force field, Eqn. (B.1.19), will move in
a planar orbit which is one of the conic sections (Haymes, 1971, p. 41). If the
satellite is to be of use for remote sensing, it must be in an orbit which is
nominally elliptical. The elliptical orbit is further often arranged to be a near
circle. Were the earth to be a uniform sphere, a satellite would move in a strict
elliptical orbit, with the center of the earth at one of the foci of the ellipse.
(Note that thereby the origin of the equatorial coordinate system is at a focus
of the orbit ellipse, rather than at its center.) Such an orbit can be described by its
"orbital elements", these being constants of the ellipse and of its orientation
relative to the equatorial coordinate system (Fig. B.2). They are (Haymes, 1971,
p. 498):

a, the semi-major axis of the ellipse;

e, the eccentricity of the ellipse;

αi, the inclination of the ellipse (the angle between the plane of the ellipse
and the xy-coordinate plane);

Ω, the longitude of the ascending node (the azimuthal angle of the point at
which the orbit cuts the xy-plane in passage of the satellite from the lower
hemisphere to the upper, that point being the "ascending node");

ω, the argument of perigee (the angle ("argument") along the orbit plane,
taken positive in the direction of satellite travel, from ascending node to
the point of closest approach of the satellite to the earth center
("perigee"), that point being on the major axis of the ellipse);

P, the sidereal period (the time required for one transit of the ellipse by the
satellite); this is not an independent parameter of the orbit;

T, the time of perigee passage (the absolute time at which the satellite passed
through the point of perigee during the orbit in question).

Figure B.2
Rs × vs = const   (B.2.2)

Equation (B.2.2), the first integral of Eqn. (B.2.1), is the "areal integral", and
indicates that a vector normal to the plane of Rs(s) and vs(s) is a constant in
time. Hence Rs, vs evolve in a plane, and the orbit is planar.

Since the orbit is planar, we can confine attention to that plane and introduce
the polar coordinates shown in Fig. B.3. Then

Rs = Rs ur   (B.2.3)

vs = Ṙs ur + Rs α̇ uα   (B.2.4)

and the equation of motion Eqn. (B.2.1) becomes

(R̈s - Rs α̇²)ur + (2Ṙs α̇ + Rs α̈)uα = -(μ/Rs²)ur   (B.2.5)

The tangential component of Eqn. (B.2.5) has the first integral

Rs² α̇ = K   (B.2.6)

while the radial component reads

R̈s - Rs α̇² = -μ/Rs²   (B.2.7)

Figure B.3

To find Rs as a function of α, which will turn out to be the equation of an
ellipse, we use Eqn. (B.2.6) to eliminate time s from Eqn. (B.2.7). Introducing
the transformation Rs = 1/u and using Eqn. (B.2.6) and Eqn. (B.2.7), there results

-K²u²(d²u/dα²) - K²u³ = -μu²   (B.2.8)

Thus

d²u/dα² + u = μ/K²

with solution

u = A cos(α - ω) + μ/K²

where A and ω are constants of integration. Transforming from u back to
Rs = 1/u, and defining

e = AK²/μ   (B.2.9)

yields

Rs = (e/A)[1 + e cos(α - ω)]⁻¹   (B.2.10)
Therefore Rs has minimum and maximum values (the values at perigee and
apogee):

(Rs)min = (e/A)/(1 + e),  (Rs)max = (e/A)/(1 - e)   (B.2.11)

Then defining the semi-major axis a through

(Rs)min + (Rs)max = 2a = 2(e/A)/(1 - e²)   (B.2.12)

there results

Rs = a(1 - e²)/[1 + e cos(α - ω)]   *(B.2.13)

The ellipse Eqn. (B.2.13) in the orbit plane is described by three of the orbital
elements: the semi-major axis a, the eccentricity e, and the phase angle ω
(the argument of perigee). Two other elements, the longitude Ω of the ascending
node and the inclination αi of the orbit, relate the orbital plane to the equatorial
coordinate system. The sidereal period P and the time T of passage of the
satellite through perigee serve to locate the satellite in its orbit for any given
time s.

It is convenient to introduce a number of angles with regard to the movement
of a satellite around its orbit (Fig. B.3). The angle

f = α - ω

which appears in Eqn. (B.2.13) is the "true anomaly". The angle E (Fig. B.3)
is the "eccentric anomaly". This is the central angle, measured from perigee, of
the point on the circumscribed circle where a line through the satellite parallel
to the minor axis intersects the circle. Finally, the "mean anomaly" is defined as

M = 2π(s - T)/P

This is the angle of the satellite, measured from perigee, if the motion were
circular with period P, the sidereal period. The anomalies E and M are related
by Kepler's equation

E - e sin(E) = M

This must be solved numerically for E given M. (Since, in cases of interest to
us, e is very small, Newton's method converges in a few steps.) More geometry
leads to

tan(f/2) = [(1 + e)/(1 - e)]^(1/2) tan(E/2)

The unit vectors ur, uα of the polar coordinates in the orbit plane are expressed
in the equatorial system by the rotations

[ur]   [ cos α   sin α   0] [i″]
[uα] = [-sin α   cos α   0] [j″]   (B.2.14)
[up]   [   0       0     1] [k″]

where

[i″]   [1     0        0    ] [i′]
[j″] = [0   cos αi   sin αi ] [j′]
[k″]   [0  -sin αi   cos αi ] [k′]

and the primed axes are obtained from the equatorial axes by rotation through
the longitude Ω of the ascending node.
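The Newton iteration for Kepler's equation can be sketched as follows; a minimal illustration, with the function name and tolerance our own:

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation E - e*sin(E) = M for E by Newton's method.

    For the near-circular orbits of interest (e << 1) a few iterations
    suffice, and E = M is an adequate starting value.
    """
    E = M
    while True:
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E
```

Because the derivative 1 − e cos(E) stays close to 1 for small e, the iteration converges quadratically from the start.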
In the equatorial system the z-component of the satellite position is

zs = Rs sin(αi) sin(α)   (B.2.15)

Let us now consider the satellite velocity Eqn. (B.2.4). From Eqn. (B.2.13)
we have

Ṙs = (μ/p)^(1/2) e sin(α - ω)   *(B.2.16)

α̇ = (μp)^(1/2)/Rs²   *(B.2.17)

in which, from Eqn. (B.2.12),

(Rs)min + (Rs)max = 2a = 2(e/A)/(1 - e²)   (B.2.18)

Using Eqn. (B.2.13), Eqn. (B.2.17), and Eqn. (B.2.18) in Eqn. (B.2.16), we have
Eqn. (B.2.4) as

vs = (μ/p)^(1/2){e sin(α - ω) ur + [1 + e cos(α - ω)] uα}

where

p = a(1 - e²)   *(B.2.18a)

and we identify the radial and tangential speeds. Writing the vectors ur, uα
in terms of the inertial system, using Eqn. (B.2.14), then yields the components
of vs in the equatorial system.

Higher derivatives of Rs could be found in the same general fashion, but we
will refrain from doing that, except to note that

α̈ = -(2μ/Rs³) e sin(α - ω)   *(B.2.19)

Perturbations of the Nominal Orbit

All of the above has assumed a central force field, Eqn. (B.2.1), leading to the
orbit being strictly an ellipse in a plane. However, since the earth is an oblate
spheroid, bulging at the equator, with somewhat of a pear shape (larger below
the equator), and with even higher order nonsphericities, the force field acting
on the satellite is not central. The result is that the satellite orbit is not a simple
ellipse in an invariant plane. The analytical treatment of the changes in the
elliptical orbit due to noncentrality of the force field is straightforward, but of
some complexity. A detailed treatment is given by El'yasberg (1967, Chapter
13), taking into account the first term of the earth's potential function,
Eqn. (B.1.21), past the simple inverse square force behavior (that involving the
coefficient B2).

The perturbing effects of higher order terms in the earth's potential function
are most conveniently expressed as perturbations of the nominal elliptical orbit.
To first order, two of the orbital elements increase or decrease monotonically
with time ("secular perturbations"): the argument of perigee ω and the ascending
node Ω. Three of the orbital elements remain constant, again to first order: the
length a of the semi-major axis, the eccentricity e, and the inclination αi. Over
the course of one revolution of the satellite in its orbit, the accumulated changes
in the perturbed elements are (El'yasberg, 1967, p. 212):

δω = (2πε/μp²)(2 - 2.5 sin²αi)

δΩ = -(2πε/μp²) cos(αi)   (B.2.20)

where

ε = -1.5B2μ = 2.634 × 10²⁵ m⁵/s²   (B.2.21)

The corresponding mean rates of change are

ω̇ = δω/P,  Ω̇ = δΩ/P   (B.2.22)

where the sidereal period is

P = 2πa^(3/2)/μ^(1/2)   (B.2.23)

As an example, for Seasat with a = 7161.39 km, e = 1.86 × 10⁻³, αi = 108.02°,
from Eqn. (B.2.23) we have a period P = 100.5 min. Then Eqn. (B.2.20),
Eqn. (B.2.21), and Eqn. (B.2.22) yield
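The quoted Seasat period can be checked from Kepler's third law. The gravitational parameter μ is a standard value assumed here, not given in the text:

```python
import math

# Sidereal period P = 2*pi*a**1.5 / mu**0.5, Eqn. (B.2.23), for the
# Seasat semi-major axis quoted in the text.
mu = 3.986004418e14         # earth's GM, m^3/s^2 (standard assumed value)
a = 7161.39e3               # Seasat semi-major axis, m

P = 2.0 * math.pi * math.sqrt(a**3 / mu)
print(P / 60.0)             # period in minutes
```

The result is close to 100.5 minutes, in agreement with the example above.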
B.3

From Eqn. (B.1.1),

Rt = Rs - R   (B.3.1)

We will use this to eliminate Rt from the formulations of Section B.1 in favor
of R, which latter can be expressed in terms of the pointing parameters R, γ,
and ψ.
We have

(B.3.2)

Since we continue to assume that the target point is fixed to the rotating earth,
Rt is the local radius at the target point, and is constant. On the other hand,

(B.3.3)

where

(B.3.4)

Since the target is fixed to the rotating earth, Ṙt = we × Rt, and Eqn. (B.1.14)
becomes

R Ṙ = vs · R - we · (Rs × R)   (B.3.5)

From Fig. B.4, with Eqn. (B.3.1) and Eqn. (B.3.5), the range rate can be written
in terms of the pointing angles as

(B.3.6)

Equation (B.3.6) yields the range rate Ṙ, and thus, from fD = -2Ṙ(s)/λ, the
instantaneous Doppler frequency fD, for arbitrary pointing angles γ, ψ from
satellite to target point. If we have particular reference to the compression filter
parameter fDC, however, we are interested in pointing along the center of the
radar beam. In that case, for an exactly side-looking radar, we have ψ = π/2
(looking to the right of track), or ψ = -π/2 (looking left). In operation, however,
slight yaw and pitch of the satellite about its nominal forward path lead to an
angle ψ which differs from π/2 by something typically less than a degree, a
squint, described by the angle

(B.3.7)

(With this definition, for a right looking radar with forward squint of say 1°,
θs = 1°, while for a left looking radar squinted 1° forward we have θs = 179°.)
From Fig. B.4,

(B.3.8)

(B.3.9)

where we write

ωs = dα/ds   (B.3.11)

for the instantaneous satellite angular velocity and use Eqn. (B.2.14). Further,

sin(ν) cos(ζs) = cos(αi)   (B.3.10)

which also holds for sin(α) = 0, as is seen directly from Fig. B.1.
Using Eqn. (B.3.7), Eqn. (B.3.8), and Eqn. (B.3.10) in Eqn. (B.3.6), we obtain:

R Ṙ = Ṙs[Rs - Rt cos(θ)] - ωs Rs Rt sin(θ)[sin(θs) - (ωe/ωs) cos(ζs) cos(θs - ν)]   *(B.3.12)

This form of Eqn. (B.3.6) is a result of Barber (1985), taking into account that
we have defined ν with the opposite sign.

For the special case of a circular orbit, so that Ṙs = 0, Eqn. (B.3.6) becomes

(B.3.13)
584
RR
+ R2 =
As R
585
+ V R
- w. [V. x R
+ R. x R]
(B.3.14)
R = v. - w. x Rt= v. - w. x (R. -
R)
(B.3.15)
Substituting Eqn. (B.3.15) into Eqn. (B.3.14), and using simple identities, we
obtain
RR+
R2 =
A5R
+V
V. - 2w.[(R. - R) x V.]
+ (w x R.)[w x (R.-R)]
0
(B.3.16)
Substituting for R and R_t and its derivatives in terms of the unit vectors u_r, u_θ, and u_p from Eqn. (B.2.3), Eqn. (B.2.4), Eqn. (B.2.5), and Eqn. (B.3.1), and substituting u_r, u_θ, and u_p in terms of their rectangular components from Eqn. (B.2.14), in order to carry out the operations indicated in Eqn. (B.3.17), we obtain the case considered specifically by Raney (1987). If we further drop the terms of second order in the small quantity ω_e/ω_s, that is, the terms involving ω_e² and Ṙ², Eqn. (B.3.17) becomes

    RR̈ + Ṙ² = ω_s²{ ... R cos(γ)]
        − R sin(γ) sin(α) sin(α_i)[cos(ψ) sin(α_i) cos(α) − sin(ψ) cos(α_i)]}
With this, f_R follows from Eqn. (B.1.18). It is worth noting explicitly in this that R, γ are not independent, either one determining the other through (Fig. B.4):

    R_t² = R_s² + R² − 2R_sR cos(γ)                                   (B.3.18)

    R_s − R cos(γ) = R_t cos(θ)                                       (B.3.19)

and

    R sin(γ) = R_t sin(θ)                                             (B.3.20)
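The mutual consistency of Eqn. (B.3.18) through Eqn. (B.3.20) is easy to check numerically. In the sketch below, R_s = 7163.09 km and R = 850 km are the Seasat values used later in this appendix; the look angle γ = 20° is an assumed illustrative value, not from the text:

```python
import math

def target_geometry(R_s, R, gamma):
    """Given satellite radius R_s, slant range R, and look angle gamma,
    return the target geocentric radius R_t (Eqn. B.3.18) and the
    earth-center angle theta (Eqn. B.3.19, B.3.20)."""
    # Law of cosines across the earth-center/satellite/target triangle (B.3.18)
    R_t = math.sqrt(R_s**2 + R**2 - 2.0 * R_s * R * math.cos(gamma))
    # (B.3.19) and (B.3.20) give theta from its cosine and sine parts
    theta = math.atan2(R * math.sin(gamma), R_s - R * math.cos(gamma))
    return R_t, theta

R_t, theta = target_geometry(7163.09, 850.0, math.radians(20.0))
# R_t comes out very close to the nominal earth radius, as it should
# for a target on the surface; (B.3.20) holds for the returned theta.
assert abs(850.0 * math.sin(math.radians(20.0)) - R_t * math.sin(theta)) < 1e-9
```

The identity used in the assertion holds exactly because (R_s − R cos γ)² + (R sin γ)² expands to the right side of Eqn. (B.3.18).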
    ... + tan(θ) sin(ψ) sin(α) sin(α_i)]}                             *(B.3.21)

where the speed of the spacecraft is V_s = ... and the speed V_a of the point P (Fig. B.4) below the spacecraft nadir point on the earth is V_a = ...
Returning to the general expression Eqn. (B.3.17), we can use the expression Eqn. (B.3.12) for RṘ, together with various geometric relations in Fig. B.1 and Fig. B.5 to write, after some labor,

    RR̈ = ... + R_s R_t ω_s² cos(θ) ...                                (B.3.22)

Figure  (a) Doppler center frequency for the Seasat example; (b) azimuth chirp constant for the Seasat example, as functions of orbit angle α.

Examples
From Eqn. (B.2.13), R_s = 7163.09 km; from Eqn. (B.2.17) and Eqn. (B.2.18), ω_s = 1.04 × 10⁻³ rad/s; from Eqn. (B.2.16), Ṙ_s = 13.76 m/s; from Eqn. (B.2.19), R̈_s = −1.9 × 10⁻³ m/s² and ω̇_s = −4 × 10⁻⁹ rad/s². Supposing we are interested in targets at range R = 850 km, and assuming the nominal unsquinted value ψ = π/2, we only need yet a value for R_t to complete the calculation, Eqn. (B.3.6), Eqn. (B.3.17). This can be taken as the earth radius at the subsatellite point:

    R_t = (x_t² + y_t² + z_t²)^(1/2)                                  (B.3.23)

where the coordinates satisfy

    (x_t² + y_t²)/R_e² + z_t²/R_p² = 1                                (B.3.24)

with R_e = 6378.16 km, R_p = 6356.78 km being the equatorial and polar radii of the ellipsoidal earth. Also,

    ... 1 − θ²/2 ...                                                  (B.3.25)

B.4

For certain purposes, expressions for f_DC and f_R having less accuracy than those developed in Section B.1, Section B.2, and Section B.3 suffice. Indeed, for clutterlock and autofocus procedures (Section 5.3), such simplified models are necessary. In this section we will examine some appropriate approximations.

Let us first consider the behavior of f_DC and f_R as functions solely of slant range R from radar to point target. From Eqn. (B.3.12), we have

    RṘ = Ṙ_s[R_s − R_t cos(θ)]
         − R_s R_t ω_s[sin(θ_s) − (ω_e/ω_s) cos(ζ_s) cos(θ_s − ν)] sin(θ)     (B.4.1)

In this, as a point target moves across the swath only R and θ change (Fig. B.4). From Fig. B.4, we have

    ... + R² − 2R_sR cos(γ)                                           (B.4.2)

    ... × [sin(θ_s) − (ω_e/ω_s) cos(ζ_s) cos(θ_s − ν)]                (B.4.3)

Thus, from Eqn. (B.4.3), we have the approximate model for variation of f_DC across the range swath:

    ...                                                               *(B.4.4)

where the constants are

    C_1 = −2Ṙ_sH/λ,    C_2 = [2ω_s(R_sR_t)^(1/2) ...]                 (B.4.5)

For a circular orbit, Ṙ_s vanishes, and only the second term of the model Eqn. (B.4.4) is present. For an unsquinted beam, with θ_s = 0, only the earth rotation term remains in Eqn. (B.4.5). However, even with the nearly circular orbit of Seasat (eccentricity e = 0.00186) the satellite can have a dive angle sufficient for the first term of the model Eqn. (B.4.4) to make a noticeable contribution to f_DC. Similarly, a squint of only a small amount can cause the satellite motion term to dominate in Eqn. (B.4.5).
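For a point at geocentric latitude δ (a symbol introduced here for illustration), Eqn. (B.3.23) and Eqn. (B.3.24) combine into the closed form R_t = R_eR_p/(R_p² cos²δ + R_e² sin²δ)^(1/2), which the following sketch evaluates; the latitude values are illustrative:

```python
import math

R_E = 6378.16  # equatorial radius, km (text value)
R_P = 6356.78  # polar radius, km (text value)

def earth_radius(geocentric_lat):
    """Geocentric radius R_t of the ellipsoidal earth (combining
    Eqn. B.3.23 and B.3.24) at the given geocentric latitude, in km."""
    c, s = math.cos(geocentric_lat), math.sin(geocentric_lat)
    return R_E * R_P / math.sqrt((R_P * c)**2 + (R_E * s)**2)

r_equator = earth_radius(0.0)            # -> 6378.16 (equatorial radius)
r_pole    = earth_radius(math.pi / 2.0)  # -> 6356.78 (polar radius)
r_60deg   = earth_radius(math.radians(60.0))  # intermediate value
```

At intermediate latitudes the radius falls smoothly between the two limiting values, so a subsatellite-point radius is obtained directly from the satellite ephemeris latitude.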
    V² = R_sR_tω_s²{1 − ...}
Variation of Doppler Rate with Range
In seeking a similar model for variation of f_R across the range swath, from Eqn. (B.3.17) we could express R̈ in terms of R, sin(γ), cos(γ), and coefficients which do not vary with R along a fixed pointing direction for a frozen satellite, that is, across the swath. The coefficients in such an expression are rather complicated, however, and it is preferable to examine the terms in R̈ for order of magnitude directly before attempting to approximate the significant ones. As Barber (1985) has indicated, the terms involving parameters of noncircularity of the orbit, that is, Ṙ_s, R̈_s, and ω̇_s, are negligible in determining f_R, for an orbit of the normal small eccentricity used for remote sensing of the earth surface. From Eqn. (B.3.22), for a circular orbit we have:
    RR̈ = −Ṙ² + ... (ω_e/ω_s) cos(ζ_s) cos(θ_s − ν)]}                 (B.4.9)

using Eqn. (B.3.9). With the approximation that sin²(θ) is small, the parameter V is nearly independent of range, leading to the model

    f_R = −2V²/(λR)                                                   *(B.4.10)
as R varies across the swath. This is the model used in autofocus procedures. Dropping the earth rotation terms in Eqn. (B.4.9), there results

    V² = V_s²(R_t/R_s)                                                (B.4.11)

where V_s is the satellite speed. Introducing the spacecraft altitude H, and recognizing that R_t ≈ R_0, the nominal earth radius, Eqn. (B.4.11) results in the simple approximation

    V ≈ V_s[R_0/(R_0 + H)]^(1/2)                                      *(B.4.12)

This expression for V is not accurate enough for other than rough computation, however.

    ... + R_sR_tω_s²{cos(θ) ...                                       (B.4.6)

For small squint angle θ_s, the earth rotation term may dominate this, in which case

    ...                                                               (B.4.7)

from Fig. B.4. For a nominal γ = 45°, say, and squint θ_s < 8°, Eqn. (B.4.7) amounts to at most only 0.01. In both cases, the corresponding term of Eqn. (B.4.6) can be dropped. For a nominally side-looking system, we can then take Eqn. (B.4.6) as

    RR̈ = V² − ...                                                    (B.4.8)

REFERENCES
Raney, R. K. (1987). "A comment on Doppler FM rate," Int. J. Rem. Sens., 8(7), pp. 1091-1092.
APPENDIX C

Figure C.1  ASF functional block diagram: antenna and RF system, Receiving Ground Station (RGS), Alaska SAR Processor and post-processor, Archive and Operations System (AOS) with archive and ice/ocean geophysical catalog, Geophysical Processor System (GPS), image analysis workstation, mission planning system, and users.
The NASA Office of Space Science and Applications (OSSA) is sponsoring the development and operation of an integrated SAR ground data system capable of receiving, processing, and distributing data from a series of non-NASA sensors. We present an overview of this system, the Alaska SAR Facility (ASF), as a design example of a recent implementation to illustrate the data flow from reception to the end user. The primary application of the data received at the ASF is polar oceans research, including the study of sea ice, open oceans, and glaciology. Additionally, a number of studies will be conducted in the areas of hydrology, geology, and forest ecosystems (Carsey and Weeks, 1989). This system, which is installed at the University of Alaska in Fairbanks, will receive SAR data from three satellites: (1) The European Space Agency (ESA) sponsored European Remote Sensing Satellite (E-ERS-1); (2) the National Space Development Agency of Japan (NASDA) sponsored Earth Resources Satellite (J-ERS-1); and (3) The Canadian Space Agency sponsored Radarsat System. The first data from ERS-1 is expected in the summer of 1991, with the J-ERS-1 data to follow in 1992 and that from Radarsat in 1995. The key parameters for these satellites are given in Table 1.2.

The unique feature of the ASF is that this facility is the first fully integrated ground data system, in the sense that it operationally processes the received signal data from the modulated RF downlink to a variety of high level data products, including area maps of certain geophysical quantities (ocean wave spectra, sea ice type, and ice kinematics). The facility consists of four major elements: (1) The Receiving Ground Station (RGS); (2) The SAR Processor System (SPS); (3) The Archive and Operations System (AOS); and (4) The Geophysical Processor System (GPS).
C.1  ASF OPERATIONS
forwarded to the station. These data collection periods are entered into the RGS tracking computer along with the most recent satellite orbital elements. During a data acquisition pass the receiving antenna tracks the satellite to receive the (real-time) SAR downlink. The acquired data is demodulated and routed (at 105 Mbps for ERS-1) to high density digital recorders (HDDRs) for temporary staging until the high precision (restituted) ephemeris is available for processing. After reception of the restituted ephemeris (24-48 hours after data acquisition), the recorded data is transferred from the HDDR to the SPS via an interface that is customized for each of the three satellites due to the differing formats. This interface performs data synchronization into range line records, decoding or descrambling the data if necessary, and unpacking it into an integer (two bytes per sample) format for processing. The SPS then processes this reformatted raw digitized video data into image products. The system is designed to generate simultaneously two types of output imagery. The high resolution (one-look complex or four-look detected) image data is routed to the HDDRs. The second product, an 8 x 8 averaged low resolution image, is passed directly to a post-processor over the ASF local area network (LAN). This low resolution imagery is formatted to the CEOS standard (1988) and is then transferred to the AOS for online storage in an optical disk jukebox. All ancillary data for each image frame (which covers approximately 100 km x 100 km) are stored in the AOS Database Management System (DBMS) for access by the science team and the GPS.
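The 8 x 8 averaging step that produces the low resolution product can be sketched as follows; this is a toy stand-in for the SPS averaging module, not the operational hardware:

```python
import numpy as np

def browse_image(full_res, block=8):
    """Form a low-resolution browse product by block-averaging a
    full-resolution detected image, as the SPS's 8 x 8 averaging
    module does (sketch only; the real system averages the stream)."""
    rows, cols = full_res.shape
    # Trim to a multiple of the block size, then average each tile
    r, c = rows - rows % block, cols - cols % block
    tiles = full_res[:r, :c].reshape(r // block, block, c // block, block)
    return tiles.mean(axis=(1, 3))

img = np.arange(64.0).reshape(8, 8)  # toy 8 x 8 "image"
low = browse_image(img)              # collapses to a single browse pixel
```

For a 25 m four-look product, this averaging yields the < 200 m browse resolution listed in Table C.1.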
The data products (including both high and low resolution images, as well as the geophysical data) are ordered electronically via the AOS DBMS using the NASA Space Physics Analysis Network (SPAN). These products are copied onto 9 track, 1/2 inch computer compatible tape (CCT) or 5 1/4 inch write-once read-many digital optical disk (WORM DOD) and shipped to the user. A special processing request is required for generation of the one-look complex images or for geocoded image products (Chapter 8). The projected turnaround for the standard ASF output product is 2-4 days. For the special products, an additional delay of 3-4 days is to be expected.
The geophysical processor has two modes of operation. In the automated mode, the system daily initiates its own query of the AOS database and selects a set of low resolution images for processing, based on preset criteria such as image location. After processing, it automatically returns geophysical products to the AOS for archive and electronic transfer to users. In the manual mode, the GPS responds to requests from either remote users (via the AOS interface), or from local users (via the workstation console). The system is designed such that the automated processing is interrupted by a user request, which carries a higher priority. Following servicing of the user request, the system then returns to its automated processing routine. The GPS is designed to handle a daily minimum of 10 pairs of ice motion vector maps, 20 ice classification images, and 20 image framelettes (512 x 512 pixels) which are processed into ocean wave spectra plots. The algorithms for the GPS will be described in Section C.5.
Figure C.2  Station masks for ERS-1 at 5° elevation: Kiruna, Sweden (top); Gatineau, Quebec (right); and Fairbanks, Alaska (bottom).
The Alaska SAR Facility operates in conjunction with several other primary
ground receiving stations and a number of portable stations deployed around
the world. Since ERS-1 does not have a high rate data recorder on-board, the
SAR acquisition area is limited to the station reception range (mask). Figure C.2
illustrates the station masks for the three primary Northern Hemisphere stations,
located at: Kiruna, Sweden; Gatineau, Quebec; and Fairbanks, Alaska.
All elements of the ASF were built by JPL, with the exception of the RGS
which was built by the Scientific Atlanta Corporation under contract to JPL.
The facility is operated by University of Alaska students and staff, with sustaining
engineering support from JPL. In the following sections we will describe each
major element of the ASF in more detail. A summary of the ASF data products
produced from the ERS-1 SAR data is given in Table C.1.
TABLE C.1  ASF Data Products from the ERS-1 SAR

Level*  Product             Description                                   Source  Volume/Day
1A      Signal Data         Complex samples, 5 bits I, 5 bits Q           RGS     < 40 minutes at 105 Mbps
1B      Basic Image         One-look complex, 16 I, 16 Q bits/sample,     SPS     2 frames (30 km x 50 km)
                            resolution < 10 m
1B      Bulk Image          Four-look detected, 8 bits/pixel,             SPS     30-50 frames (100 km x 100 km)
                            resolution ~ 25 m
1B      Browse Image        256-look (8 x 8 avg), 8 bits/pixel,           SPS     30-50 frames (100 km x 100 km)
                            resolution < 200 m
1B      Geocoded Image      From either Bulk or Browse Image,             AOS     20 browse frames, 1 bulk frame
                            8 bits/pixel, UTM or PS projection                    (100 km x 100 km)
        Ice Classification  3-4 classes, 4 bits/pixel; online: 5 km       GPS     20 frames (100 km x 100 km)
                            and 100 m grid products
        Wave Spectra        Online: contour plots; offline: spectra,      GPS     20 framelettes (512 x 512 pixels)
                            resolution ~ 25 m
        Ice Motion          Ice velocity vectors; online: 5 km and        GPS     10 pairs (100 km x 100 km),
                            100 km grid products                                  regional (1000 km x 1000 km)

*Definition of data product levels can be found in Table 6.1. UTM, Universal Transverse Mercator; PS, Polar Stereographic.
C.2  RECEIVING GROUND STATION (RGS)

The Receiving Ground Station (RGS) is designed to collect data from each of the three SAR satellites. However, the initial implementation of the RGS will only support ERS-1 data reception. This system will acquire and track the satellite using both the ephemeris predicts (as provided by the mission control center) and an S-band satellite beacon which is in continuous operation. The RGS can track the satellite down to a 5° elevation angle from the horizon, including zenith passes. The SAR data is downlinked from the satellite on an X-band, 8.14 GHz, carrier signal. This QPSK modulated signal is demodulated in the RGS receiver and the resulting complex 105 Mbps, 5 bit/sample data stream is routed to the HDDRs via a Signal and Data Routing Assembly (SARA). The RGS was designed, built, and installed by Scientific Atlanta Corp. under contract to JPL.

The antenna layout is shown in Fig. C.3. The parabolic dish is 10 m in diameter and supports both X-band and S-band feeds. The pedestal is controlled by servo devices to perform tracking at a slew rate of up to 15°/s in a thermal environment that ranges from −65 to +90°F, with wind gusts to 40 mph. The system is completely automated to acquire and track using program commands from a Hewlett-Packard HP-320 computer. Test and installation were completed in June 1989. The RGS antenna is located on the roof of the Elvey Building at the University of Alaska (Fig. C.4). This system is also capable of receiving Landsat and SPOT downlink data streams.

Figure C.3  RGS antenna layout: 10 meter reflector, frequency selective surface (dichroic) subreflector, and X-band feed.
Figure C.4  The RGS antenna on the roof of the Elvey Building at the University of Alaska.

C.3  SAR PROCESSOR SYSTEM (SPS)
The image correlator is a custom designed system comprising a single rack (35 boards) of digital hardware. The system is a second generation design based on the Advanced Digital SAR Processor (ADSP) built by NASA/JPL for Magellan and Seasat. The two processors compare as follows:

                      ADSP                ASP
Boards                76                  35
Board types           27                  18
Racks                 2                   1
Integrated circuits   27,500              13,000
Clock rate            20 MHz              10 MHz
Throughput            6.5 GFLOPS          3.3 GOPS
Power                 12.5 kW             5 kW
Missions              Magellan, Seasat    E-ERS-1, J-ERS-1, Radarsat
Figure C.7  ASP correlator functional block diagram: DCRSi/HDDR input interface, range processor, range radiometric correction, corner turn memory (resampling to ground pixels), multi-look memory, deskew buffer, reduction and output interfaces to the post-processor, with clutterlock and autofocus under control of a MASSCOMP processor.
in a range-line format. The output data is sent both to a HDDR for recording and to an averaging module (8 pixels x 8 pixels) to produce a low resolution image for on-line archive and image display. All ancillary data is transferred to the post-processor along with the low resolution imagery for preparation of the CEOS standard format data files. This data is then transferred to the dual ported disks for data staging, prior to file transfer to the AOS.
Figure C.6  The custom SAR correlator (left rack), the Ampex DCRSi recorders, and the data routing assembly (courtesy of T. Bicknell).

C.4  ARCHIVE AND OPERATIONS SYSTEM (AOS)
C.5  GEOPHYSICAL PROCESSOR SYSTEM (GPS)

Figure C.9  GPS functional block diagram: geocoded and high-resolution images plus ancillary data feed the ice motion tracking, ice type classification, and wave spectra analysis modules, supported by a spatial database (buoy, temperature, and wind data, ice edge map) and an image/product database.

The Geophysical Processor System (GPS) derives information about the surface characteristics from the SPS image products. This system has three primary functions: (a) Multitemporal ice motion tracking; (b) Ice type classification; and (c) Wave spectra analysis (Fig. C.9). The system is designed for fully automated operations, performing quality assurance checks to ensure product
consistency. A high level interface with the AOS performs database queries, electronic transfer of image data to the GPS, and transfer of geophysical products to the AOS. The software is designed to be modular, to allow flexibility in its architecture for post-launch optimization of the system performance. The GPS requires input data that has been previously geocoded to either a Polar Stereographic (PS) or Universal Transverse Mercator (UTM) projection. Additionally, the GPS requires that the SPS perform radiometric corrections to remove the cross-track power variation (see Chapter 7), although it does
The ice motion tracker performs matching of common ice floes in SAR image
pairs separated by time scales of days to weeks. A diagram of the motion
algorithm is shown in Fig. C.10. The candidate image pairs are selected from
a listing of all recently acquired image data received daily from the AOS
database. The location of each newly acquired image is input to a motion
predictor algorithm that uses National Weather Service (NWS) wind and
drifting buoy data to select the most probable archive images for matching
(Colony and Thorndike, 1984).
Figure C.10  Ice motion and ice classification algorithms (Kwok et al., 1990).
Figure C.11  Ice motion pair from the Beaufort Sea acquired by Seasat illustrating the rotation and deformation over a three day period: (a) Rev. 1438 acquired October 5, 1978; (b) Rev. 1481 acquired October 8, 1978; (c) Edge maps; and (d) Motion vectors.
Figure C.12  Ice motion output products on a 5 km grid from the Beaufort Sea acquired by Seasat: (a) Rev. 1409 acquired October 3, 1978; (b) Rev. 1452 acquired October 6, 1978; (c) Translational vector grid; and (d) Rotational grid.
The selected image pair is first evaluated using a coarse feature extraction technique to generally determine the area of common floes. The image pair is then categorized as either pack ice or marginal zone ice. Since the pack ice undergoes little rotation within the time scales considered in this system, this category of ice imagery can be processed using a straightforward hierarchical area correlation technique. Conversely, ice in the marginal zone can move several dozen kilometers per day and undergoes both deformation and rotation (Fig. C.11). The matching procedure for ice in the margin is based on a feature extraction procedure that is invariant to rotation and insensitive to deformation (Kwok et al., 1990). This procedure is used to derive a sparse field of rotational and translational vectors that are in turn used to initialize an area correlation routine which produces a regular gridded output product (Fig. C.12). These ice motion products (100 km x 100 km) are averaged and overlaid on land boundary maps to derive a regional (Arctic) time series product.
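A minimal stand-in for one level of the area correlation (pack-ice case) might look like the following; the chip and window sizes are illustrative, and the operational matcher (Kwok et al., 1990) is considerably more elaborate:

```python
import numpy as np

def match_offset(ref_chip, search_win):
    """Slide a reference chip over a search window and return the
    (row, col) offset maximizing the normalized cross-correlation --
    one step of a hierarchical area correlation (illustrative only)."""
    kr, kc = ref_chip.shape
    sr, sc = search_win.shape
    ref = ref_chip - ref_chip.mean()
    best, best_off = -np.inf, (0, 0)
    for i in range(sr - kr + 1):
        for j in range(sc - kc + 1):
            win = search_win[i:i + kr, j:j + kc]
            w = win - win.mean()
            denom = np.sqrt((ref**2).sum() * (w**2).sum())
            if denom == 0:
                continue  # flat window: correlation undefined, skip it
            score = (ref * w).sum() / denom
            if score > best:
                best, best_off = score, (i, j)
    return best_off

rng = np.random.default_rng(0)
scene = rng.random((32, 32))
chip = scene[10:18, 12:20]          # an 8 x 8 "floe" cut from the scene
offset = match_offset(chip, scene)  # recovers the (10, 12) cut location
```

In the real system the coarse levels of the hierarchy use subsampled imagery, so only a small neighborhood need be searched at full resolution.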
Ice Classification
The ice classification routines identify the various types of ice based on the
measured radiometric brightness of the SAR image pixels. The algorithm requires
information on surface temperature derived either from NWS data or by passive
radiometry (e.g., AVHRR). The temperature information is used to select a
look-up table (LUT) for the classification. The image is first segmented into
three or four classes using a clustering algorithm. These classes are then related
to ice types using a maximum likelihood classifier given the target scattering
information in the LUT. The scattering characteristics of the various ice types
are based on ground scatterometer measurements (e.g., Onstott et al., 1979).
The major ice types, categorized by age, are: (a) Multi-year ice; (b) First-year ice; (c) New ice; and (d) Open water. Currently, it is expected that this procedure can produce reliable (95% correct) results only during the winter season, October to May. The large difference in dielectric constant between sea ice covered with dry snow and sea ice covered with snow containing free water molecules and melt ponds makes classification during the summer (June to September) season significantly more complex and not sufficiently reliable for an operational system. An illustration of the performance of the classification algorithm is shown by the comparison of simulated E-ERS-1 data (from the NASA/JPL DC-8 aircraft) with the NORDA KRMS 33.6 GHz passive radiometer system (Eppler et al., 1986) in Fig. C.13 (Holt et al., 1990a).

Figure C.13  Comparison of classified SAR images with those acquired simultaneously by the KRMS 33.6 GHz passive radiometer (Holt et al., 1990a). White: multi-year (MY); grey: first-year rough; dark grey: first-year smooth; black: new ice.

Figure C.14  Wave product generation algorithm: Fourier analysis of the subscene, spectrum smoothing, peak detection, and contour plot generation, yielding the output product (wavenumbers, wave directions, plot).

Figure C.15  Seasat SAR image of Chukchi Sea (10/9/78) showing four 512 x 512 pixel framelettes and their spectral contour plots: (A) Open sea; (B) Frazil ice; (C) Pancake ice; and (D) Open sea (Holt et al., 1990b).
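The clustering-plus-LUT flow can be sketched as below. The backscatter values in the look-up table are hypothetical placeholders (not the calibrated ASF tables), and a simple 1-D k-means with nearest-LUT labeling stands in for the segmentation and maximum likelihood steps:

```python
import numpy as np

# Hypothetical winter backscatter look-up table (dB); the real ASF LUTs
# come from ground scatterometer measurements and are temperature-selected.
ICE_LUT = {"multi-year": -9.0, "first-year": -14.0,
           "new ice": -20.0, "open water": -25.0}

def classify(image_db, n_iter=20):
    """Cluster sigma0 values (dB) by 1-D k-means into len(ICE_LUT)
    classes, seeding each cluster at a LUT backscatter value, then label
    pixels with the ice type of their cluster (illustrative sketch)."""
    types = list(ICE_LUT)
    centers = np.array([ICE_LUT[t] for t in types], dtype=float)
    px = image_db.ravel()
    for _ in range(n_iter):
        labels = np.argmin(np.abs(px[:, None] - centers[None, :]), axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = px[labels == k].mean()
    labels = np.argmin(np.abs(px[:, None] - centers[None, :]), axis=1)
    return np.array(types, dtype=object)[labels].reshape(image_db.shape)

toy = np.array([[-9.5, -24.8], [-13.5, -19.7]])  # toy sigma0 image, dB
classes = classify(toy)
```

A maximum likelihood classifier would additionally weight each assignment by the measured spread of each class, rather than by distance to the class mean alone.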
Ocean Wave Spectra

The GPS ocean wave spectra routine is designed to extract wave parameters from the SAR image data (Holt et al., 1990b). Its input is a full resolution (four-look) image that is output from the SPS on HDDT. These data are read into the AOS where they are subdivided into 512 x 512 pixel blocks for processing by the GPS. The functional block diagram of the wave product generation algorithm is given in Fig. C.14. The processing consists of a two-dimensional transform of each 512 x 512 block of data, followed by a Gaussian smoothing filter to reduce the noise. The width of this filter is a parameter that can be adjusted by the user. A peak finding routine is then used to locate the significant wave peaks in the smoothed spectra. These peaks are defined as local maxima, when compared to the image mean, which are separated by some minimum distance from other maxima. From these peak locations, the wave number is given by the radial distance of the peak from the origin; the wave direction is given by its polar angle relative to the image axis. No corrections are applied to compensate for the SAR system impulse response function, or for nonlinear moving surface modulations. An example of the image spectra and resultant contour plots is given in Fig. C.15. The contours will be available to users as an online graphic display; however, the smoothed spectra will only be distributed on hard copy digital media.
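The transform/smooth/peak-pick chain can be sketched as follows. A small fixed kernel stands in for the adjustable Gaussian filter, the block size is reduced from 512 to 64 for illustration, and the 25 m pixel spacing matches the four-look product:

```python
import numpy as np

def wave_peak(block, pixel_m=25.0):
    """Sketch of the wave-spectra chain: 2-D FFT of an image block,
    smoothing of the power spectrum, then the dominant off-origin peak
    converted to a wavenumber (rad/m) and a direction (degrees from
    the image axis)."""
    n = block.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(block - block.mean())))**2
    # 3x3 weighted kernel as a stand-in for the Gaussian smoothing filter
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    sm = sum(kernel[i + 1, j + 1] * np.roll(np.roll(spec, i, 0), j, 1)
             for i in (-1, 0, 1) for j in (-1, 0, 1))
    ky, kx = np.unravel_index(np.argmax(sm), sm.shape)
    fy = (ky - n // 2) / (n * pixel_m)       # cycles per meter
    fx = (kx - n // 2) / (n * pixel_m)
    k = 2.0 * np.pi * np.hypot(fx, fy)       # radial wavenumber, rad/m
    direction = np.degrees(np.arctan2(fy, fx))
    return k, direction

# Toy 64 x 64 block containing a 200 m wave travelling along the x axis
x = np.arange(64) * 25.0
block = np.cos(2.0 * np.pi * x[None, :] / 200.0) * np.ones((64, 1))
k, direction = wave_peak(block)
wavelength = 2.0 * np.pi / k                 # recovers the 200 m wave
```

The real spectrum is symmetric through the origin, so the recovered direction carries a 180° ambiguity, which is why the direction is quoted relative to the image axis.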
C.6  SUMMARY
The Alaska SAR Facility is the first fully integrated SAR ground data system
in that it routinely acquires and processes SAR data from Level 0 to Level 3.
Additionally, this facility performs mission planning and has a limited capacity
for electronic distribution of data, permitting rapid data access by the science
team. We present this system as an example of the type of end-to-end design
required for the "EOS era" ground data and information system. This system
has a throughput capacity of close to a terabit of SAR data per (24 hour) day.
This computational capacity is balanced with automated systems for archiving
and cataloging these image products, as well as a system to derive information
from the images and to produce a large number of reduced volume (non-image)
high level data products for direct analysis by the science team.
We consider the ASF as a pathfinder system in that it addresses a number
of the technical challenges facing the EOS ground data system. NASA plans
to use this facility for testing advanced concepts in mission planning, data
integration, electronic browse, and high rate data distribution. We fully expect
that the ASF will contribute significantly not only to our understanding of
polar oceanography, but also to our ability to develop and operate large,
integrated ground data systems.
REFERENCES
Carsey, F. and W. Weeks, eds (1989). "Science Plan for the Alaska SAR Facility," JPL Pub. 89-14, Jet Propulsion Laboratory, Pasadena, CA.
Carande, R. E. and B. Charney (1988). "The Alaska SAR Processor," Proc. IGARSS '88, Edinburgh, Scotland, ESA SP-284, pp. 695-698.
CEOS (1988). "Committee on Earth Observations Satellites: SAR Data Product Format Standard," Rev. 2, ESA ESRIN, Frascati, Italy.
Colony, R. and A. S. Thorndike (1984). "An Estimate of the Mean Field of Arctic Sea Ice Motion," J. Geophys. R., 89, pp. 10623-10629.
Eppler, D. T., L. D. Farmer and A. W. Lohanick (1986). "Classification of Sea Ice Types with Single-Band (33.6 GHz) Airborne Passive Microwave Imagery," J. Geophys. R., 91, pp. 10661-10695.
Holt, B., R. Kwok and E. Rignot (1990a). "Status of the Ice Classification Algorithm in the Alaska SAR Facility Geophysical Processor System," Proc. IGARSS '90, Washington, DC, pp. 2221-2224.
Holt, B., R. Kwok and J. Shimada (1990b). "Ocean Wave Products from the Alaska SAR Facility Geophysical Processor System," Proc. IGARSS '90, Washington, DC, pp. 1469-1473.
Kwok, R., J. C. Curlander, R. McConnell and S. S. Pang (1990). "An Ice-Motion Tracking System for the Alaska SAR Facility," IEEE J. Oceanic Eng., 15, pp. 44-54.
Onstott, R. G., R. K. Moore and W. F. Weeks (1979). "Surface-Based Scatterometer Measurements of Sea Ice," IEEE Trans. Geosci. Elec., GE-17, pp. 78-85.
APPENDIX D
NONLINEAR DISTORTION
ANALYSIS
For a linear, time invariant system, where the principle of superposition applies,
a stimulus such as a step or a series of sinusoids is a suitable input for a complete
characterization of the system. Assuming causality, the resulting impulse
response can be used to predict the output for any input from the standard
convolution integral:
    r(t) = ∫_0^∞ h(t') s(t − t') dt'                                  (D.1)
where s(t) is the input signal, r(t) is the output, and h(t) is the impulse response
function (Appendix A). In a nonlinear system, however, the output function is
not a simple convolution using the input. A sinusoidal stimulus can produce
an output not only at the input (fundamental) frequency, but at all higher
harmonics of this frequency. Since the relative contribution of these harmonics
to the system response is dependent on the stimulus amplitude and frequency
characteristic, there can be no single transfer function that will predict the
response to a general input. Instead, a separate characterizing function would
be required for the response of the system to each amplitude and frequency of
input.
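The linear case of Eqn. (D.1) is worth fixing with a discrete sketch; a unit impulse input reproduces the impulse response, exactly as superposition requires:

```python
def convolve(s, h):
    """Discrete form of Eqn. (D.1): r[n] = sum_k h[k] * s[n - k],
    the output of a causal linear time-invariant system with impulse
    response h driven by input s."""
    r = []
    for n in range(len(s) + len(h) - 1):
        acc = 0.0
        for k in range(len(h)):
            if 0 <= n - k < len(s):   # causal: only past/present input
                acc += h[k] * s[n - k]
        r.append(acc)
    return r

# A unit impulse input reproduces the impulse response itself
out = convolve([1.0, 0.0, 0.0], [0.5, 0.3, 0.2])  # -> [0.5, 0.3, 0.2, 0.0, 0.0]
```

For a nonlinear system no such single characterizing sequence exists, which is the point of the Wiener development that follows.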
To circumvent this problem, the traditional approach in characterizing a
system is to assume linearity and to use a small amplitude test input to produce
a transfer function dependent only on the fundamental component of the
frequency response. This approach has been used extensively in the characterization
of nonlinear physiological systems (Rodieck, 1965; Tate, 1971; Toyoda, 1974).
f f
r(t)=n~O _00
00
-ookn(t;,
... ,t~)
(D.6)
which is simply the convolution integral. The first order functional, in fact, can
be used as a measure of the linearity of a system since the difference between
the measured nonlinear response and the predicted linear response is that system
component which cannot be characterized as linear.
For nonlinear systems, higher order functionals are required to describe the
system behavior. The response of the system can be expanded in a Volterra
series,

    r(t) = \sum_{n=1}^{\infty} \int_0^{\infty} \cdots \int_0^{\infty}
           k_n(t'_1, \ldots, t'_n)\, s(t - t'_1) \cdots s(t - t'_n)\,
           dt'_1 \cdots dt'_n                                        (D.2)

where k_n(t'_1, ..., t'_n) are the Volterra kernels. The Volterra series
description is very powerful conceptually, but practically it is rarely used
since no simple method exists for calculating the kernels of the system. This
problem can be solved by using a functional series originally proposed by
Wiener (1958) that simplifies evaluation of the system kernels by making the
terms of the Volterra series orthogonal for a specific stimulus. Since, to
exhaustively test a nonlinear system, the stimulus must cover all possible
amplitudes and frequencies over which the system operates, Wiener chose a
Gaussian white noise (GWN) input from which to construct a hierarchy of
orthogonal functionals. Setting the zero order Wiener functional to a constant
value h_0, i.e.,

    G_0[h_0; s(t)] = h_0                                             (D.3)

the first order Wiener functional is written as

    G_1[h_1; s(t)] = \int_0^{\infty} h_1(t')\, s(t - t')\, dt' + k_1 (D.4)

where h_1(t') is the first order Wiener kernel and k_1 is the constant term
necessary to make the first order functional orthogonal to G_0. To determine
k_1, we solve the equation

    \mathscr{E}\{ G_0[h_0; s(t)]\, G_1[h_1; s(t)] \} = 0             (D.5)

where \mathscr{E}\{\cdot\} is the expected value. For an s(t) of zero mean,
k_1 = 0 and the first order functional becomes

    G_1[h_1; s(t)] = \int_0^{\infty} h_1(t')\, s(t - t')\, dt'       (D.6)

To construct the higher order functionals, a procedure similar to the
Gram-Schmidt orthogonalization technique is used. The resulting second and
third order functionals are given by

    G_2[h_2; s(t)] = \int_0^{\infty}\!\int_0^{\infty} h_2(t'_1, t'_2)\,
                     s(t - t'_1)\, s(t - t'_2)\, dt'_1\, dt'_2
                     - P \int_0^{\infty} h_2(t', t')\, dt'           (D.7)

    G_3[h_3; s(t)] = \int_0^{\infty}\!\int_0^{\infty}\!\int_0^{\infty}
                     h_3(t'_1, t'_2, t'_3)\, s(t - t'_1)\, s(t - t'_2)\,
                     s(t - t'_3)\, dt'_1\, dt'_2\, dt'_3
                     - 3P \int_0^{\infty}\!\int_0^{\infty}
                     h_3(t'_1, t'_2, t'_2)\, s(t - t'_1)\, dt'_1\, dt'_2  (D.8)

where P is the power spectral density of the white noise. The power level P of
the white noise input is assumed constant for all frequencies over which the
system operates and is equivalent to the Fourier transform of the
autocorrelation function of the GWN input. In a form similar to the Volterra
series in Eqn. (D.2), the response of a nonlinear system can be written in
terms of Wiener functionals as

    r(t) = \sum_{m=0}^{\infty} G_m[h_m; s(t)]                        (D.9)

where the functionals are now orthogonal and satisfy the equation

    \mathscr{E}\{ G_i[h_i; s(t)]\, G_j[h_j; s(t)] \} = 0             (D.10)

for all i \neq j.
Because of this relationship between the functionals, the kernels can be easily
calculated as follows (Lee and Schetzen, 1965):

    h_0 = \mathscr{E}\{ r(t) \}                                      (D.11)

    h_1(t') = \frac{1}{P}\, \mathscr{E}\{ r(t)\, s(t - t') \}        (D.12)

    h_2(t'_1, t'_2) = \frac{1}{2P^2}\,
                      \mathscr{E}\{ [r(t) - G_0 - G_1]\,
                      s(t - t'_1)\, s(t - t'_2) \}                   (D.13)

    h_n(t'_1, \ldots, t'_n) = \frac{1}{n!\, P^n}\,
        \mathscr{E}\Big\{ \Big[ r(t) - \sum_{m=0}^{n-1} G_m[h_m; s(t)] \Big]\,
        s(t - t'_1) \cdots s(t - t'_n) \Big\}                        (D.14)

Therefore, using Eqn. (D.9) the output r(t) of any nonlinear system can be
exactly characterized for any input signal s(t). The number of terms required
in the summation depends on the degree of nonlinearity of the system. Typically
three to four terms are sufficient.
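The cross-correlation estimates of Eqns. (D.11) and (D.12) can be sketched numerically. In the sketch below, the test system (a linear filter followed by a constant offset and a mild square-law term), its coefficients, the kernel memory length, the record length, and the random seed are all illustrative assumptions, not taken from the text; it shows only that averaging lagged products of a GWN stimulus and the response recovers the zero and first order Wiener kernels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test system (illustrative): a linear filter g followed by
# a constant offset and a mild square-law distortion term.
M = 32                                  # assumed kernel memory, in samples
g = np.exp(-np.arange(M) / 8.0)         # true first order impulse response

def system(s):
    lin = np.convolve(s, g)[: len(s)]   # linear part of the response
    return 0.5 + lin + 0.1 * lin ** 2   # offset + linear + quadratic term

# Gaussian white noise stimulus of power level P, zero mean.
N = 200_000
P = 1.0
s = rng.normal(0.0, np.sqrt(P), N)
r = system(s)

# Zero order kernel, Eqn. (D.11): h0 = E{r(t)}.
h0 = r.mean()

# First order kernel, Eqn. (D.12): h1(t') = (1/P) E{r(t) s(t - t')},
# estimated by averaging the lagged product over the record.
h1 = np.array([np.mean(r[t:] * s[: N - t]) for t in range(M)]) / P

# For zero-mean Gaussian input the even order (square-law) term is
# uncorrelated with s(t - t'), so h1 recovers g despite the distortion,
# while h0 absorbs the offset plus the mean of the quadratic term.
print(f"h0 estimate: {h0:.3f}")
print(f"max |h1 - g|: {np.max(np.abs(h1 - g)):.3f}")
```

A second-order estimate via Eqn. (D.13) would proceed the same way, averaging the residual r(t) - G_0 - G_1 against pairs of lagged input samples.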
REFERENCES
Lee, Y. W. and M. Schetzen (1965). "Measurement of the Wiener Kernels of a Nonlinear
System by Cross-correlation," Int. J. Control, 2, pp. 237-254.
Marmarelis, P. Z. and V. Z. Marmarelis (1978). Analysis of Physiological Systems: A
White Noise Approach, Plenum Press, New York.
Rodieck, R. W. (1965). "Quantitative Analysis of Cat Retinal Ganglion Cell Response
to Visual Stimuli," Vision Res., 5, pp. 583-601.
Tate, C. and M. M. Woolfson (1971). "On Modeling Neural Networks in the Retina,"
Vision Res., 11, pp. 167-633.
Toyoda, J. (1974). "Frequency Characteristics of Retinal Neurons in the Carp," J. Gen.
Physiol., 63, pp. 214-234.
Volterra, V. (1959). Theory of Functionals and of Integral and Integro-Differential
Equations, Dover Publications, New York.
Wiener, N. (1958). Nonlinear Problems in Random Theory, Wiley, New York.
BIBLIOGRAPHY

Among some materials which are of general interest and help in the study of
SAR are the following.
Books
Apel, J. R. Principles of Ocean Physics, Academic Press, New York, 1987. Chapter 8 is
devoted to scattering of electromagnetic waves from the sea surface.
Barton, D. K. Modern Radar System Analysis, Artech House, Norwood, MA, 1988.
Comprehensive coverage of topics in conventional radar systems.
Elachi, C. Introduction to the Physics and Techniques of Remote Sensing, Wiley, New
York, 1987. A tutorial overview of the field.
Elachi, C. Spaceborne Radar Remote Sensing: Applications and Techniques, IEEE Press,
New York, 1988. Technical description of principles and instruments for radar imaging,
altimetry, and scatterometry.
Fitch, J. P. Synthetic Aperture Radar, Springer-Verlag, New York, 1988. A concise
well-illustrated survey.
Harger, R. 0. Synthetic Aperture Radar Systems, Theory and Design, Academic Press,
New York, 1970. A detailed treatment from the point of view of communication theory
and signal processing.
Skolnik, M. I. Introduction to Radar Systems, 2nd Ed., McGraw-Hill, New York, 1980.
Broad range of topics in radar, with extensive pointers to the literature.
Ulaby, F. T., R. K. Moore and A. K. Fung, Microwave Remote Sensing: Active and Passive.
Vol. 1, Microwave Remote Sensing Fundamentals and Radiometry, Addison-Wesley,
Reading, MA, 1981. Vol. 2, Radar Remote Sensing and Surface Scattering and Emission
Theory, Addison-Wesley, Reading, MA, 1982. Vol. 3, From Theory to Applications,
Artech House. Encyclopedic. Remote sensing physics, with a long chapter on SAR
(Vol. 2, Ch. 9).
Wehner, D.R. High Resolution Radar, Artech House, Norwood, MA, 1987. Techniques
for wideband radar systems, including synthetic aperture. Special attention to inverse
synthetic aperture (ISAR) systems.
Survey Articles
Ausherman, D. A., A. Kozma, J. L. Walker, H. M. Jones and E. C. Poggio, "Developments
in radar imaging," Trans. IEEE Aero. and Elec. Sys., AES-20(4), 1984, pp. 363-400.
Barber, B. C. "Theory of digital imaging from orbital synthetic-aperture radar," Int. J.
Rem. Sens., 6(7), 1985, pp. 1009-1057.
Elachi, C., T. Bicknell, R. L. Jordan and C. Wu, "Spaceborne synthetic-aperture
imaging radars: Applications, techniques, and technology," Proc. IEEE, 70(10), 1982,
pp. 1174-1209.
Moore, R. K. "Radar fundamentals and scatterometers," Chapter 9 in Manual of Remote
Sensing, 2nd Ed., Vol. I Theory, Instruments, and Techniques (Colwell, R. N.,
D. S. Simonett and F. T. Ulaby, eds.), American Society of Photogrammetry, Falls
Church, VA, 1983.
Moore, R. K. "Imaging radar systems," Chapter 10 in Manual of Remote Sensing, 2nd
Ed., Vol. I Theory, Instruments, and Techniques (Colwell, R. N., D. S. Simonett and
F. T. Ulaby, eds.), American Society of Photogrammetry, Falls Church, VA, 1983.
Tomiyasu, K. "Tutorial review of synthetic-aperture radar (SAR) with applications to
imaging of the ocean surface," Proc. IEEE, 66(5), 1978, pp. 563-583.
Reports
Cimino, J. B., B. Holt and A. H. Richardson, "The Shuttle Imaging Radar B (SIR-B)
Experiment Report," Publ. 88-2, Jet Propulsion Lab., Pasadena, March 15, 1988.
Ford, J. P., R. G. Blom, M. L. Bryan, M. I. Daily, T. H. Dixon, C. Elachi and E. C. Xenos,
"Seasat Views North America, the Caribbean, and Western Europe with Imaging
Radar," Publ. 80-67, Jet Propulsion Lab., Pasadena, November 1, 1980.
Ford, J. P., J. B. Cimino, B. Holt and M. R. Ruzek, "Shuttle Imaging Radar Views the
Earth From Challenger: The SIR-B Experiment," Publ. 86-10, Jet Propulsion Lab.,
Pasadena, March 15, 1986.
Fu, L.-L. and B. Holt, "Seasat Views Oceans and Sea Ice With Synthetic-Aperture
Radar," Publ. 81-120, Jet Propulsion Lab., Pasadena, February 15, 1982.
Kasischke, E. S., G. A. Meadows and P. L. Jackson, "The Use of Synthetic Aperture
Radar Imagery to Detect Hazards to Navigation," Environmental Research Institute
of Michigan, Ann Arbor, 1984.
Pravdo, S. H., B. Huneycutt, B. M. Holt and D. N. Held, "Seasat Synthetic-Aperture
Radar Data User's Manual," Publ. 82-90, Jet Propulsion Lab., Pasadena, March 1,
1983.
"Spaceborne Imaging Radar Symposium, January 17-20, 1983," Publ. 83-11, Jet
Propulsion Lab., Pasadena, July 1, 1983.
"The Second Spaceborne Imaging Radar Symposium, April 28-30, 1986," Publ. 86-26,
Jet Propulsion Lab., Pasadena, December 1, 1986.
"Shuttle Imaging Radar-C Science Plan," Publ. 85-29, Jet Propulsion Lab., Pasadena,
September 1, 1986.
MATHEMATICAL SYMBOLS
Mismatch loss
Range reference function length in complex samples
Radar system electronics losses
Number of quantization levels
Number of bits in block floating point quantized sample
Mean anomaly of satellite orbital position
Number of quantizer bits per sample
Number of bytes per sample
Complex raw data number
Number of slant range pulses to horizon
Complex pixel data number
Number of bits in block floating point quantizer threshold
One-sided noise power spectral density, W/Hz
Available noise power spectral density, W/Hz
Azimuth processing block length in samples
Samples per azimuth aperture
Bit error noise
Number of range processing blocks
Available external noise power spectral density
One-sided available input noise power spectral density
Effective output available noise power density of self noise
Data samples in two-dimensional correlator pattern
One-sided available output noise power spectral density
Quantization noise
Complex samples per range pulse; range processing block length
Source noise power spectral density
Saturation noise
Sidereal period of orbit
Antenna radiation pattern, W/sr
Average power over T_p
Data link bit error probability
Input available power (W)
Receiver noise power
Image noise power at a resolution cell
Output available power (W)
Receiver total power = P_s + P_n
Image power at a resolution cell
Antenna radiated power
Receiver signal power
Image signal power at a resolution cell
Power delivered to antenna structure
Instrument duty cycle fraction
Fractional real time rate
Platform roll angle
Platform roll angle rate
Horizontal beamwidth = λ/L_a
Radar beam squint angle relative to broadside
Vertical beamwidth = λ/W_a
Carrier wavelength
Bragg wavelength
Gravitational constant times earth mass = 3.986 × 10^14 m^3/s^2
Permeability of free space = 4π × 10^-7
Antenna efficiency = ρ_a ρ_e
Reflectivity
Aperture efficiency of antenna
Radiation efficiency of antenna
Target cross section (m^2)
Mean cross section of extended homogeneous region
Specific backscatter coefficient = dσ/dA
Mean specific backscatter coefficient
Range delay time
Azimuth integration time
Pulse delay in radar electronics relative to vacuum
Pulse delay through ionosphere relative to vacuum
Integration time interval
Radar pulse length in time
Receiver protect window extension about transmitted pulse
Range sampling time window
Antenna elevation pattern angle
One of the polar coordinates (R, θ, φ)
Phase angle
Quadratic phase error at aperture edge
Geodetic longitude
Geocentric longitude
Satellite longitude (spherical coordinate)
Target longitude (spherical coordinate)
Angle of Fresnel coefficient
Spectrum phase function
Argument of perigee of orbit
Radian frequency = 2πf
Earth angular velocity, ω_e = 7.29212 × 10^-5 rad/s
Longitude of ascending node of orbit
Solid angle, sr
LIST OF ACRONYMS
1D      One-Dimensional
2D      Two-Dimensional
A/C     Aircraft
AASR    Azimuth Ambiguity to Signal Ratio
ADC     Analog to Digital Convertor
ADCT    Adaptive Discrete Cosine Transform
ADSP    Advanced Digital SAR Processor
AGC     Automatic Gain Control
AOS     Archive and Operations System
APL     Applied Physics Laboratory
ASF     Alaska SAR Facility
ASI     Italian Space Agency
BAQ     Block Adaptive Quantizer
BER     Bit Error Rate
BFPQ    Block Floating Point Quantization
BITE    Built-In Test Equipment
bps     Bits per Second
Bps     Bytes per Second
CAL     Calibration Subsystem
CAT     Catalog and Database Subsystem
CCRS    Canadian Centre for Remote Sensing
CCSDS   Consultative Committee on Space Data Systems
CCT     Computer Compatible Tape
CE      Computational Element
CEOS    Committee on Earth Observation Satellites
CMOS    Complementary Metal Oxide Semiconductor
CPU     Central Processing Unit
CRT     Cathode Ray Tube
CW      Continuous Wave
INDEX
A-scan, 72-73
Aarons, J., 315
Absolute location of target:
  algorithm, 374-376, 600
  error sources, 345, 377-382
Acceleration of satellite, 569-571
Adaptive discrete cosine transform (ADCT), 493-496
Advanced Digital SAR Processor (ADSP), 456-458, 600, 601
Agarwal, R. C., 560
Alaska SAR Facility (ASF), 13, 592-614
  Archive and Operations System (AOS), 603-605
  Geophysical Processor System (GPS), 605-613
  Mission Planning Subsystem (MPS), 603
  Receiving Ground Station (RGS), 596-597
  SAR Processor System (SPS), 458, 598-603
  station reception mask, 595
Aliasing:
  of Doppler spectra, 238, 241
  in image resampling, 389, 396, 482
  of sampled signal, 211, 544-545
  in step transform, 510-511
Allan variance, 263-264
Alliant computer, 466, 486, 487
ALMAZ, 12-13
Alpers, W., 52
Antenna:
  active array, 276-277, 318-319, 335-336, 357
  aperture efficiency, 83, 273
  aperture field distribution, 78-79, 85, 94-96
  ASF ground station, 597, 598
  beamwidth, 87-88
  cross-polarized pattern, 278, 351
  cross-talk, 351
  current distribution function, 150
  directional temperature, 105
  directivity pattern, 77, 83-91, 104, 273, 335
  effective aperture, 95
  effective isotropic radiated power (EIRP), 74, 341
  Fraunhofer region, 79
  Fresnel region, 78
  gain function, 76, 80-84, 127, 223, 273
  Goldstone ground station, 176, 178, 179, 180, 181
  microstrip phased array, 274-275
  minimum ambiguity area, 21, 274
  noise, 106-108
  polarization purity, 278
  power pattern, 81
  quad-polarized design, 274-278
  radiation efficiency, 74, 82-84, 106, 273
  reciprocity, 95, 341, 352, 364
  sidelobes, 86-88
  slotted waveguide array, 275
  two-way power pattern, 228
  uniformly illuminated aperture, 85-86, 88
  Yagi, 34
AOS (Archive and Operations System), 592, 603-605
Apogee, 573, 576
Apollo Lunar Sounder Experiment, 34-38
  command module diagram, 38
  optical recorder, 37
Appiani, E. G., 470
Applied Physics Laboratory, see Johns Hopkins University Applied Physics Laboratory
Aptec, 464
Archive and Catalog Subsystem, 603
Archive and Operations System (AOS), 592, 603-605
Argument of perigee, 573, 576
Ascending node, 576
Ascending node of satellite orbit, 576
ASF, see Alaska SAR Facility
Aspect angle, 156, 217
Atmospheric:
  absorption spectrum, 5, 48
  amplitude scintillation, 315
ELSAG, 468
El'yasberg, P. E., 570, 574, 577, 579, 580
Emissivity, 117
EMMA multiprocessor, 470-472
  computational analysis, 471-472
  functional block diagram, 471
Entropy, 288-289
Environmental Research Institute of Michigan (ERIM), 33-35
Environmental Science Services Administration (ESSA), 381
EOS (Earth Observing System), 9-11, 613
Ephemeris (restituted), 594
Eppler, D. T., 613
Equation of motion, 570
Equatorial coordinate system, 572
Equipartition principle, 97-98
ERIM (Environmental Research Institute of Michigan), 33-35
ESA (European Space Agency), 10-13, 44, 592
Euclid's algorithm, 243-245
European Remote Sensor (E-ERS-1), 274, 329, 331, 375, 467, 470, 471, 592
European Space Agency (ESA), 10-13, 44, 592
Exciters:
  analog (SAW) designs, 265-266
  autocorrelation function, 267, 268
  digital designs, 267-268
  pulse codes, 265
  pulse jitter effects, 268
  SAW geometries, 266
Exponential probability distribution function, 215, 216, 228
External calibration, 337-349
  distributed targets, 347-349
  ground sites, 327, 344-346, 357
  point targets, 337-343
Fairbanks, Alaska, 592, 595
Farnett, E. C., 150, 213
Farr, T., 54
Fast convolution, see Frequency domain convolution
Feature extraction, 610
Fenson, D., 464
Fermat transform, 561
Filtering, 551-553
Filter weighting functions, 148
Fitch, J. P., 504
Foreshortening distortion, 382-384, 399, 479, 484
Fourier:
  pair, 540
  series, 540
  spectrum, 554
Fourier transform:
  algorithms:
    bit-reversed ordering, 557
    in-place, 557
    not-in-place, 557
  butterfly operation, 556
  coefficient ordering, 557
  computational analysis, 555
  discrete, 547-549
  fast, 553, 554-558
  pair, 540
  radix formulation, 558
  three-dimensional inverse, 523
  twiddle factors, 555, 556
  zero padding, 188, 212
Freden, S. C., 7
Fredholm integral equation, 140
Freeman, A., 327, 341, 343, 344, 349, 351, 358
Frequency domain (fast) convolution, 169, 187
  ADSP design, 456-457
  azimuth algorithm, 196-208
  azimuth computational complexity, 443-444, 446-448, 452-453
  azimuth processing block size, 435-436
  range algorithm, 182-187
  range computational complexity, 449-452
  range processing block size, 436
Frequency shift, 162
Fresnel, see also Reflectivity of target (scene)
  integral, 145
  reflection coefficient, 136-139, 231
  region of antenna, 78
Friedman, D. E., 395, 482
Frost, V. S., 419
Fu, L.-L., 37
Functionals, 616
Fundamental component of frequency response, 615
Fung, A. K., 55
Gagliardi, R., 104, 105, 106
Gamma probability distribution function, 220
Gatineau, Quebec, 595
Gaussian probability distribution function, 97, 215
Gaussian smoothing filter, 613
Gentleman, W. M., 558
Geocoding:
  computational analysis, 482-486
  definition, 371
Lewis, A., 382
Lewis, D. J., 445
Li, F. K., 5, 217, 223, 224, 227, 228, 241, 283, 285, 299, 301
Like-polarized reflection coefficient, 137
Linde, Y., 496
Linear convolution, 545-553
Linear FM waveform, 133, 134, 144-146, 168, 173, 504
Linear range migration, 172, 190, 193, 194, 431-432
Linear span of data in polar coordinates, 522
Linear systems, 536-541
  amplitude error effects, 259-260
  convolution, 537-538
  distortion analysis, 257-261
  nonstationary, 540
  phase error effects, 259-260
  radar characterization, 136-139
  stationary, 128-129, 141, 541
  transfer function, 539-540
Location of target:
  algorithm, 374-376, 600
  error sources, 345, 377-382
Louet, J., 357
Low pass filter, 544, 545
  interpolation, 561-563
Low pass waveform, 541
Luscombe, A. P., 238
MacArthur, J. L., 52
MacDonald, H. C., 28
MacDonald-Dettwiler and Associates (MDA), 33, 155
Madsen, S. N., 223, 231, 233, 234, 389, 390
Magellan (MGN) radar, 39-42, 265, 273, 292-294, 317, 415, 456
Magnetron, 30
Mainlobe broadening, 256
Maitre, H., 422
Map projections:
  datum, 371
  Polar Stereographic (PS), 393, 479
  Universal Transverse Mercator (UTM), 370, 393, 479
Marginal zone ice, 610
Marmarelis, P. Z., 616
Marr, D., 419
Martinson, L., 507
Mass of the earth, 570
Massachusetts Institute of Technology (MIT) Radiation Laboratory, 27
Masscomp computer, 598, 600, 603
Massively Parallel Processor (MPP), 468-469. See also Concurrent processor
Noise:
  ambiguity, 296-305
  antenna, 101, 106-108
  bandwidth, 110-111
  bit error, 283-286, 357
  distortion, 251, 281, 293-294
    crossover, 262
    harmonic, 261-262, 270-271
    intermodulation, 261-262, 271
  equivalent noise temperature, 100, 107, 110, 118-119
  external, 101-106
  factor, 75, 108, 111-114
  figure, 100, 108, 271
  galactic, 105-106
  operating noise factor, 113-114, 119-120
  operating noise temperature, 110, 118-119
  power spectral density, 75, 129, 230
  quantization, 279-283
  radio, 106
  receiver, 108-119
  saturation, 280-281
  source, 101-108
  spatial image compression, 489, 492
  speckle, 93, 121, 214-217, 314, 324
  system noise factor, 113-114
  temperature ratio, 117
  thermal, 96-99, 220-221, 251, 359
Nominal satellite orbit, 574
Nominal target migration locus, 517
Nonlinear system analysis, 615-618
Nonstationary linear system, 198, 540
NORDA KRMS passive radiometer, 612, 613
North, D. O., 128
Number theoretic transforms, 560
Numerical transform theory, 560
Nutation of earth rotation, 572
Nyquist:
  frequency, 542
  rate, 184, 283, 388, 546
  theorem, 99
Ocean waves:
  Bragg resonance, 51
  capillary waves, 51
  spectra, 52-53, 592, 613
Office of Space Science and Applications (OSSA), 592
Offset video frequency, 183, 211
  of Seasat, 184
One-bit SAR, 211
Onstott, R. G., 611
Oppenheim, A. V., 548
Optical correlator, 30, 31, 32
  sensor-to-target, 159-160
  variation (intra-pulse), 159-163
Range migration, 171-172, 193, 197, 504
  correction, 181, 187, 189, 217
  curvature, 172, 190
  interpolation, 188
  maximum bounds, 431-432
  memory, 432, 469
  nominal migration locus, 517
  phase history, 23-25
  Seasat example, 178
  walk, 172, 190, 193, 239, 431-432
Range signal processing:
  analog formulation, 182-187
  compression filter parameters, 213-214
  computational complexity, 449-452
  digital formulation, 210-214
  efficiency factor, 450
  overview, 165-167
  processing blocks, 450
Rawson, R., 33
Rayleigh-Jeans law, 103
Rayleigh probability distribution function, 216, 323
Real aperture radar, see Side-looking real aperture radar (SLAR)
Receivers:
  for ground calibration, 341-342
  in SAR sensor, 271-273
Receiving Ground Station (RGS), 592, 596-597
Rectangular algorithm, 155-208. See also Compression processing
Reed, C. J., 289
Reference functions (matched filter):
  azimuth, 588-591
  bandwidth, 434-435
  length (in samples), 435
  normalization factors, 361-364
  range, 213-214
Reference mixing operation, 506
Reflectivity of target (scene), 136-139, 141, 149, 155, 214, 215, 224, 228, 231, 237, 506
Remote sensing programs, 7-13
Resolution:
  azimuth:
    in matched filtering processing, 26, 169-171
    in polar processing, 524
    in real aperture radar, 16
    in spectral analysis processing, 23, 439
  range:
    in deramp-FFT processing, 506, 524
    in matched filtering processing, 15, 162
    in uncoded pulse, 15
Test, J., 466
Three-dimensional inverse Fourier transform, 523
Tikhonov, A. N., 149
Time bandwidth product, see Bandwidth time product
Time domain:
  convolution, 537-538
  filter weighting function, 151
Time domain processor:
  computational complexity, 444-445, 446, 448-449
  image formation algorithm, 167, 187-196
Time of echo propagation, 158-159
Time of perigee passage, 573
Titan Radar Mapper (CTRM), 40, 42
Tomographic analysis, 540
Tomographic imaging, 504
Tone generators, 342-343
TOPEX, 381
Touzi, R. A., 419
Toyoda, J., 615
Transform:
  Fermat, 561
  Fourier, see Fourier transform
  Hadamard, 493
  Laplace, 540
  number theoretic, 560
  prime factor, 560
  z, 543, 547
Transmit interference, 305-307
Transmitter:
  solid state, 270
  traveling wave tube, 269-270
Transponders, 341
Transverse scan cassette drives, 598
Traveling Wave Tube (TWT), 30
True anomaly of satellite orbit, 576, 577
Twiddle factors in FFT, 555, 556
UHF band, 8, 27
Ulaby, F. T., 6, 47, 55, 56, 62, 63, 64, 65, 76, 92, 100, 101, 102, 121, 122
Unfocussed SAR, 23-24, 31, 438-440
Uniform quantizer, 291
Uniform spherical earth, 570
United States Geological Survey (USGS), 393, 412, 414, 415, 416
Universal Transverse Mercator (UTM), see Map projections
University of Alaska, 592
Van der Ziel, A., 96, 97, 99
Van Roessel, J. W., 28
Vant, M. R., 503
Zebker, H. A., 57, 364, 402
Zeoli, G. W., 280
Zero padding of FFT, 188, 212
Zohar, S., 560
Curlander / McDonough
Synthetic Aperture Radar: Systems and Signal Processing
John C. Curlander and Robert N. McDonough
Wiley-Interscience
ISBN 047185770X