

Main level: Epsilon Nought - Radar Remote Sensing

- Synthetic Aperture Radar -


Basic Concepts and Image Formation
Andreas Reigber

This tutorial gives a basic introduction to the image formation process of synthetic
aperture radar. It is intended for everybody who is interested in the basic concepts of SAR.
This tutorial does not focus on advanced processing methods or all the problems associated
with SAR image generation. It just gives a quick look at the basic ideas. For further
studies, numerous references are included.

The last chapter also includes some practical exercises in the form of processing a small SAR
raw data set. This is well suited as a first step into SAR remote sensing (for beginning
PhD students or similar).

For comments, corrections and so on, send a mail to: anderl-at-nought-point-de

Version 1.0 (24.5.2001)

 Short overview on SAR remote sensing


o SAR remote sensing
o SAR interferometry
o SAR polarimetry
 Synthetic Aperture Radar
o Principles of radar imaging
o Resolution of a synthetic aperture radar
o Interpretation of SAR images
 Processing of SAR-data
o Phase history of a point target
o Processing in azimuth
o Processing in range
o Structure of a SAR-processor
 Exercise
o Reading and visualizing the raw data
o Azimuth focussing
o Range adaptation
o Multilooking
 Bibliography
 About this document ...



Short overview on SAR remote sensing


SAR remote sensing
Imaging radars are airborne or spaceborne radars which generate a reflectivity map of an
illuminated area through transmission and reception of electromagnetic energy. Among other
types of microwave sensors, special attention has been paid in the past to synthetic aperture
radar (SAR) because of its high spatial resolution and multifarious information content.

The development of the synthetic array radar originated in 1951 with Carl Wiley, who
postulated the use of DOPPLER information for increasing the azimuth resolution of the
conventional side-looking aperture radar (SLAR) [1]. Based on this idea and following
developments, the first SAR image was produced by researchers at the University of
Michigan in 1958, using an optical processing method [2]. Precision optical processors and
hologram radars were developed and fine resolution strip maps were obtained by the mid
1960's. Later on in the early 1970's digital signal processing methods were introduced to
obtain off-line or non-realtime SAR images of high quality [3],[4]. Since these early days,
SAR systems have evolved into an essential and powerful tool in geosciences and remote sensing.
SAR data are applicable in many scientific fields. Besides traditional applications in
geography and for topographic and thematic mapping, nowadays SAR sensors are also
utilised in areas such as oceanography, forestry, agriculture, urban planning, environmental
sciences and prediction and evaluation of natural disasters.
SAR sensors operate in the microwave region of the electromagnetic spectrum with typical
wavelengths between 1cm and several metres. As an active system, a SAR itself emits
microwave radiation to the ground and measures the electric field backscattered by the
illuminated ground patch. These measurements are transformed into a high resolution image.
Because SAR systems provide their own illumination, they can perform equally
well during day and night.

The transmission spectrum of the Earth's atmosphere shows that it is nearly
transparent in the microwave region (see Fig. 1.1). Electromagnetic waves with wavelengths
longer than 1cm even pass almost undisturbed through small water droplets. Thus the operation of
a SAR is possible even in the presence of clouds, fog and rain, which are a limiting factor in
optical remote sensing. This is of great importance for areas which are regularly covered by
haze or clouds, like for example rain forests in the tropics. Here topographic and thematic
mapping was nearly impossible before the introduction of imaging radars. Weather
independence combined with day and night operation capabilities makes SAR an operational
monitoring device for the entire earth's surface, a task which cannot be achieved with optical
sensors.

Figure 1.1: The electromagnetic spectrum of the sun (top), and the transmission spectrum of
the Earth's atmosphere (middle). At the bottom the most important spectral bands in remote
sensing are marked.

Radar images contain quite different information than images obtained from optical or
infrared sensors. While in the optical range mainly molecular resonances on the object
surfaces are responsible for the characteristic object reflectivity, in the microwave region
dielectric and geometrical properties become relevant for the backscattering. Radar images
therefore emphasise the relief and morphological structure of the observed terrain as well as
changes in the ground conductivity, for example caused by differences in soil moisture.
Because of the sensitivity to dielectric properties, SAR images, in principle, also can provide
information about the condition of vegetation, an important fact for agricultural and forestry
applications.

Another important feature of SAR data results from the propagation characteristics of
microwaves. Due to their long wavelength, microwaves are capable of penetrating vegetation
and even the ground up to a certain depth [5],[6]. The penetration capabilities depend on the
wavelength as well as on the complex dielectric constants, conductivities and densities of the
observed targets. Shorter wavelengths, like the X-band, typically show a high attenuation and
are mainly backscattered at the surface or the top of the vegetation. Consequently, at these
wavelengths mainly information about this layer is collected. Longer wavelengths, like L- and
P-band, normally penetrate deep into vegetation and often also into the ground. The
backscattering then contains contributions from the entire volume.

Figure 1.2: Wavelength dependency of the penetration capabilities of microwaves in
vegetation and ground.

One main problem in the analysis of SAR data is to determine exactly this superposition of
different scattering contributions. Even though certain desired information is contained in the
data, it is not accessible because only the total backscattering can be measured. An inversion
of the measured data to parameters of interest is often ambiguous and cannot be solved
without `a priori' information. Another problem is that the exact origin of the backscattering
is, in principle, unknown as the entire SAR geometry has an elevational symmetry. Thus the
elevation angle and, correspondingly, the topographic height of the observed scatterer remain
unknown. Two main extensions of conventional SAR have been pursued in the past to resolve
these limitations: SAR interferometry and polarimetry.
SAR interferometry
SAR interferometry (INSAR) is a technique which analyses the phase difference between
two SAR images acquired from slightly different positions. This phase difference is related to
the terrain topography of the scene and can be used to generate high resolution digital
elevation models (DEMs). The first basic experiments were made in 1974 by L.C. Graham
demonstrating the capability of interferometric SAR for topographic mapping [7]. In the
following years little attention was paid to SAR interferometry. Experimental efforts
were made by R.M. Goldstein and H.A. Zebker in the 80's, using a simple two-antenna
modification of the NASA/JPL airborne AIRSAR system to demonstrate single-pass SAR
interferometry [8]. In 1988 A. Gabriel and R.M. Goldstein showed the possibility of repeat-
pass interferometry using spaceborne L-band data acquired by the SEASAT sensor [9].
Finally, first results from repeat-pass interferometry using the Canadian airborne CCRS
sensor, in X- and C-band, were published in 1992 by A.L. Gray and P.J. Farris-Manning [10].

In 1991, the European Space Agency ESA launched the ERS-1 remote sensing satellite,
carrying, besides a number of other sensors, a C-band SAR system. This was the starting point
for numerous advances in SAR interferometry [11]-[14]. ERS-1, originally designed mainly
for oceanographic applications, quickly turned out to possess great potential for the generation
of large scale DEMs of high precision from areas all over the world. Main reasons for this
have been the continuous and nearly global mapping performed by ERS-1, combined with a
very good orbit restitution necessary for operational DEM generation. ERS-2 followed ERS-1
in 1995 to continue its operation; and, for a while during the `TANDEM' mission period, it
was possible to operate both in parallel. In 1994, two missions with the Shuttle Imaging Radar
SIR-C/X-SAR were flown, for the first time providing spaceborne multi-frequency (X-, C-
and L-band) interferometric data in a repeat-pass mode, and over some areas even fully
polarimetric [15],[16]. Parts of the same instrument, augmented by a second antenna mounted
on a 60m boom, were used again in February 2000 for the Shuttle Radar Topography Mission
(SRTM). Its purpose was to generate a high-resolution global DEM with single-pass
interferometry between 60°N and 60°S [17].

In addition to these and some other spaceborne sensors, several airborne single-pass
interferometric systems are also in use. Some significant examples currently are the NASA/JPL
TOPSAR system (C- and L-band) in the US; the Japanese NASDA/CRL airborne SAR (X-
and L-band); the EMISAR system (C-band) of the Danish Center for Remote Sensing; the
commercial German AeS-1 (X-band) of the company AeroSensing; and the E-SAR system of
the German Aerospace Center (DLR) which operates in X-band in a single-pass mode and in
L- and P-band fully polarimetrically in a repeat-pass mode [18]-[21].

Besides topographic mapping, an extended version of SAR interferometry, called differential
interferometry, can be used for precise mapping of elevation changes. This technique allows
the detection of surface deformations on a scale smaller than the radar wavelength, usually in
the millimetre range. This extremely high precision enables a large scale detection and
monitoring of geological stress-change processes like sudden co-seismic displacements and
long-term tectonic movements with spaceborne sensors. Also volcanic bulging before
eruptions, land subsidence in mining areas, land sliding in mountainous areas as well as ice
deformations and glacier dynamics can be detected with this method [22]-[27].
Interferometric SAR data additionally have content of a different nature than SAR images
alone. The correlation or coherence between two SAR images is very sensitive to changes in
the arrangement of the scatterers inside the resolution cells. Particularly, the coherence of
multi-temporal, multi-frequency or multi-polarised repeat-pass data sets can be used to
analyse and characterise changing processes, for example taking place in vegetation layers
[28],[29], or on natural surfaces [30]. Finally, multi-baseline approaches take into account
influences of the imaging geometry on the interferometric coherence and can even resolve, to
a limited extent, the spatial distribution of scatterers in a volume [31],[32].

SAR polarimetry
SAR polarimetry (POLSAR) is another major extension of conventional single channel SAR
imaging. Like all electromagnetic waves, microwaves have a vectorial nature, and a
complete description of the scattering problem in radar science requires a vectorial matrix
formulation. This is the task of radar polarimetry, a technique which was initiated by the
introduction of the concept of the `scattering matrix' by G.W. Sinclair in 1948 [33],[34].
Since radar polarimetry requires advanced hardware devices, which were not available in
the late 1940's and the 1950's, it remained only a theoretical concept and its
practical use for civil applications was not really recognised.

This situation changed at the latest in the early 1980's with the availability of polarimetric
SAR data from the NASA/JPL airborne AIRSAR system, which allowed in practice the
implementation of more recent works by E.M. Kennaugh, J.R. Huynen [35] and W.-
M. Boerner [36]. Since then, SAR polarimetry has become an established technique in remote
sensing. This was supported by the growing number of polarimetric airborne sensors like
DLR's E-SAR or NASA/JPL's AIRSAR systems, providing high resolution polarimetric data
in several frequency bands. Additionally, in 1994, two SIR-C/X-SAR Shuttle missions took
place, recording for the first time spaceborne fully polarimetric data in C- and L-band. During
the second mission this was already combined with repeat-pass interferometric data
acquisition.

One special characteristic of SAR polarimetry is that it allows a discrimination of different
types of scattering mechanisms. This becomes possible because the observed polarimetric
signatures depend strongly on the actual scattering process. In comparison to conventional
single-channel SAR, the inclusion of SAR polarimetry consequently can lead to a significant
improvement in the quality of classification and segmentation results [37]-[39]. Certain
polarimetric scattering models [40] even provide a direct physical interpretation of the
scattering process, allowing an estimation of physical ground parameters like soil moisture
and surface roughness [41], as well as unsupervised classification methods with automatic
identification of different scatterer characteristics and target types [42],[43].

SAR polarimetry additionally offers some limited capability for separating multiple scattering
mechanisms occurring inside the same resolution cell, and can be regarded as a first step in
resolving the ambiguous scattering problem in SAR mentioned above. With polarimetric
decomposition techniques a received signal can be split into a sum of three (intrinsically four
in the case of SAR, but under the assumption of reciprocal symmetric backscattering the two
cross-polar components are equal) scattering contributions with orthogonal polarimetric
signatures [40]. This can be used for extracting the corresponding target types in the image,
even when they occur superimposed. Also, if a signal is disturbed by
undesired orthogonal contributions, the relevant components can be extracted in this way,
improving results for diverse applications [44].

Finally, polarimetric SAR interferometry (POLINSAR) combines the capability of


interferometry to extract height information with polarimetric decomposition techniques [45].
With POLINSAR the topographic height of the phase centre of each extracted scattering
mechanism can be estimated independently. This means that a limited volumetric imaging can
be achieved with this technique, allowing for example an estimation of tree heights or
underlying ground topography [46]. In recent times, increased attention has also been paid to
model-based parameter estimation based on polarimetric interferometric data. This technique
tries to invert scattering models in order to determine from the measured signals the physical
parameters of interest [47],[48].

The concepts of SAR polarimetry nowadays are applied in many scientific fields. Significant
areas of application are particularly found in agriculture and forestry for crop monitoring,
species identification and biomass estimation. Also in geology and hydrology the possibility
of a characterisation of ground roughness as well as soil and snow moisture content is of great
interest. Finally, in oceanography SAR polarimetry is used for monitoring of wave systems,
thermal and current fronts, and for estimating ice age and thickness in polar regions
[49]-[54].


Synthetic Aperture Radar


Principles of radar imaging
The objective of radar imaging is to generate a two-dimensional reflectivity map of an
examined scene in the microwave region of the electromagnetic spectrum. Radar systems are
commonly based on the measurement of signal time delays (RADAR = Radio Detection And
Ranging) [55]. A normal monostatic imaging radar system consists of a microwave
transmitter and receiver, operated on a moving platform like an airplane or satellite. In the
simplest case the antenna is oriented parallel to the flight direction, i.e. it is looking sideways
to the ground (see Fig. 2.1). The look direction of the antenna is normally called `range' or
`slant-range'. The transmitter emits short radar pulses to the ground in fast succession. These
pulses are reflected from a scatterer on the ground and after a certain time delay they reach
the receiver again. This time delay t is a function of the distance R between the sensor and the
scatterer

    t = 2R / c    (1)

where c denotes the speed of light. Different scatterers can be resolved because their echoes
show different time delays. Thus, the achievable resolution δ_r in slant-range depends on
the transmitted pulse length τ, or alternatively, on the bandwidth B of the pulse:

    δ_r = c τ / 2 = c / (2 B)    (2)

Obviously, the range resolution is independent of the distance between the scatterer and the
sensor. To achieve high resolution in range, very short pulse durations are necessary. The
resulting energy densities are often difficult to handle in practice. Therefore, in modern
radars, normally a high bandwidth is reached by transmitting a longer pulse with a linear
frequency modulation (chirp) instead. The energy of this pulse is distributed over a longer
duration, but it can be compressed again after reception by a matched filtering operation
[56]. For example, a bandwidth of 100MHz corresponds to a resolution of 1.5m.
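The numbers above can be checked with a few lines of code. The following sketch computes the range resolution from the bandwidth and demonstrates pulse compression of a linear FM chirp by matched filtering; apart from the 100MHz bandwidth from the text, all pulse parameters are illustrative assumptions:

```python
import numpy as np

# Range resolution from the pulse bandwidth, Eq. (2): delta_r = c / (2 B)
c = 3.0e8              # speed of light [m/s]
B = 100e6              # chirp bandwidth [Hz], value from the text
delta_r = c / (2 * B)  # -> 1.5 m, as stated in the text

# Pulse compression of a linear FM chirp by matched filtering.
# Pulse duration and sampling rate are assumed, illustrative values.
T = 10e-6                         # pulse duration [s]
fs = 2 * B                        # complex sampling rate [Hz]
N = int(round(T * fs))            # number of samples
t = (np.arange(N) - N // 2) / fs  # time axis centred on the pulse
k = B / T                         # chirp rate [Hz/s]
chirp = np.exp(1j * np.pi * k * t**2)

# Correlation with the (conjugated) replica compresses the long pulse
# into a narrow peak at zero lag.
compressed = np.correlate(chirp, chirp, mode="same")
peak_index = int(np.argmax(np.abs(compressed)))
peak_value = np.abs(compressed).max()  # equals N for a unit-amplitude chirp
```

The energy of the long pulse is concentrated into a single narrow peak, with an amplitude gain equal to the number of samples in the pulse.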

In the along-track or azimuth direction, the resolution of a simple side-looking radar
corresponds to the size of the antenna footprint on the ground. The angular resolution γ_a of
an antenna of length L in the azimuth direction is limited due to diffraction effects on its
aperture. For a wavelength λ it is given by

    γ_a = λ / L    (3)

The spatial resolution in azimuth at a given range R then results as

    δ_a = R γ_a = λ R / L    (4)
Apparently, the resolution in azimuth decreases with increasing flight heights and the
corresponding longer distances to the object. High resolution in azimuth requires large
antennas and short object distances. For example, spaceborne systems with an orbital height
of 800km and an antenna aperture of 15m would show only a resolution of approximately
3km.
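As a quick check of this example (the wavelength is an assumption here; a C-band system at 5.6cm is taken, which is not stated in the text):

```python
# Real-aperture (SLAR) azimuth resolution, Eq. (4): delta_a = lambda * R / L
wavelength = 0.056  # assumed C-band wavelength [m]
R = 800e3           # orbital height / range [m], value from the text
L = 15.0            # antenna length [m], value from the text
delta_a = wavelength * R / L
print(delta_a)      # about 2987 m, i.e. roughly the 3 km quoted in the text
```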

Figure 2.1: Left: SLAR/SAR imaging geometry in strip-map mode. Right: geometric

representation of the maximum possible synthetic aperture length

Resolution of a synthetic aperture radar


A Synthetic Aperture Radar (SAR) overcomes these problems and is designed to achieve high
resolutions with small antennas over long distances [57]. A SAR system takes advantage of
the fact that the response of a scatterer on the ground is contained in more than a single radar
echo, and shows a typical phase history over the illumination time. An appropriate coherent
combination of several pulses leads to the formation of a synthetically enlarged antenna - the
so-called ` synthetic aperture'. This formation is very similar to the control of an antenna
array, with the difference that only one antenna is used and the different antenna positions are
generated sequentially in time by the movement of the platform.

The angular resolution γ_s of a synthetic aperture of length L_s is again given by the
diffraction limit, and is two times higher than that of a real aperture of the same length:

    γ_s = λ / (2 L_s)    (5)

The factor of `two' is the result of the synthetic aperture formation. The phase differences
between elements of the synthetic aperture result from a two-way path difference and are,
therefore, two times larger than in the case of a real antenna.
The maximum length L_s,max of the synthetic aperture is the length of the flight path from
which a scatterer is illuminated. This is equal to the size of the antenna footprint on the
ground at the distance R where the scatterer is located:

    L_s,max = λ R / L    (6)

If the full synthetic aperture is formed, the azimuthal spatial resolution at the distance R
results as

    δ_a = R γ_s = λ R / (2 L_s,max) = L / 2    (7)

Interestingly, the achieved resolution is now completely independent of the range distance and
is determined only by the size of the real antenna. This is a result of the increasing length of
the synthetic aperture for longer distances to the object. In contrast to a SLAR, a shorter
antenna now produces higher resolution because of its wider angular radiation characteristics.
For higher flight heights, there still remains the problem of low backscattered power.
Therefore, a sufficiently long antenna is necessary for adequate focusing of the power. Due to
several technical limitations (available power, data-rate) a spaceborne SAR typically has
lower resolution (approximately 5m) than an airborne SAR (up to 30cm). However, with SAR
systems, high resolution radar imaging becomes possible even in the case of very long range
distances.
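The range independence of Eq. (7) can be verified numerically; wavelength, antenna length and the two ranges below are illustrative assumptions:

```python
# SAR azimuth resolution with the full synthetic aperture, Eqs. (6) and (7):
# L_s = lambda * R / L, so delta_a = lambda * R / (2 * L_s) = L / 2.
wavelength = 0.056  # assumed C-band wavelength [m]
L = 15.0            # real antenna length [m] (assumed)

for R in (10e3, 800e3):                   # airborne-like vs spaceborne-like range [m]
    L_s = wavelength * R / L              # maximum synthetic aperture length
    delta_a = wavelength * R / (2 * L_s)  # range dependence cancels
    print(R, delta_a)                     # delta_a = L / 2 = 7.5 m in both cases
```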

Interpretation of SAR images


It is obvious that radar images look very different from optical remote sensing images.
Because different parts of the electromagnetic spectrum are used, the information content
is not comparable. Additionally, due to the different imaging geometries as well as the use of
a coherent sensor system in the case of SAR, further differences are introduced. For a correct
interpretation of SAR images, a detailed understanding of the characteristics of SAR
images is therefore necessary.

First of all, due to the coherent data recording, SAR images have complex pixel values
(amplitude and phase), in contrast to optical images where only the image amplitude can be
recorded. To form a SAR image, usually only the amplitude is used for the image
brightness, as the image phase has a random distribution. Nevertheless, the image phase
becomes important in SAR interferometry and polarimetry.
Looking at a distributed scatterer, i.e. an image pixel where many different individual
scatterers are located, each of them contributes to the total backscattering. Due to the
coherent radar wave, all individually backscattered waves interfere when forming the total
signal. As depicted in Fig. 2.2, the result is the coherent vector sum of all contributions, which
has a random phase, and an amplitude which is distributed about the true backscattering
amplitude of the individual scatterers.

A characteristic of SAR images is therefore the so-called "speckle", which denotes the
distribution of the measured amplitude around an average value, even over homogeneous
areas. SAR images therefore do not appear as "smooth" as optical images.
However, this is not a result of low resolution, but a direct consequence of the coherent imaging
process. Examples of distributed targets are nearly all types of natural surfaces, like bare soil,
meadows or forests.
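The origin of speckle can be illustrated with a small simulation: each pixel is modelled as the coherent sum of many unit-amplitude scatterers with random phases. The number of scatterers and pixels below are arbitrary assumptions, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_scat = 20_000, 50  # assumed numbers of pixels and scatterers

# Coherent vector sum of the individual scattering contributions in each cell
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_pixels, n_scat))
pixels = np.exp(1j * phases).sum(axis=1)

amplitude = np.abs(pixels)  # Rayleigh-distributed: the speckle
phase = np.angle(pixels)    # uniformly distributed in [-pi, pi)

# Even though every cell has the same "true" reflectivity, the measured
# amplitude fluctuates strongly around its mean value.
print(amplitude.mean(), amplitude.std())
```

Although every simulated pixel contains identical scatterers, the measured amplitudes scatter widely, which is exactly the grainy appearance of homogeneous areas in SAR images.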


Figure 2.2: Coherent vector sum of individual scattering contributions inside one resolution
element.

Point targets are scatterers which have only one dominant signal in each
resolution cell. This is the case for larger metallic objects, other man-made targets like
buildings, or radar reflectors which have a strong reflection due to their geometry. In this case
the measured amplitude and phase are a direct result of the reflectivity and phase shifts during
the backscattering process. On such targets, the true image resolution can be estimated.

Figure 2.3: Layover and shadow areas caused by terrain slope.


Figure 2.4: Example of a SAR image. Area: Sicily; sensor: ERS-1, C-band.

Further differences to optical images result from the different imaging geometry. With radar
systems, time delays are measured, i.e. points at the same distance to the antenna are imaged
at the same position in the image. This becomes a problem if the examined area has strong
topography. In Fig. 2.3, the interaction of a plane wavefront with a mountain is shown. It can
be observed that the signal arrives at points 1 and 2 at the same time. The whole area between
1 and 2 is therefore imaged at approximately the same position in the image, and cannot be
resolved. Whenever the surface slope exceeds the look angle of the radar, the effect called
'layover' occurs. But even if the surface slope is smaller, the mountains seem to tilt towards the
sensor, because the time delay from the top is reduced due to the larger topographic height.

On the back side of the mountains it can happen that the radar signal is shadowed by the
topography. In an optical image, whose imaging technique is based on the measurement of
angles, points 2 and 3 would appear at the same image position. In a radar image, where time
delays are measured, no echo is received between the responses from point 2 and point 3. This
causes an area in the image without backscattering, called 'shadow'. It contains only system
noise and appears mostly black.

In order to give an impression, Fig. 2.4 shows an ERS-1 SAR image of a mountainous area in
Sicily/Italy. The geometric disturbances due to the topography as well as the speckle
can easily be observed. Shadowing does not occur here: ERS-1 has a very steep look angle,
and the effect occurs only for surface slopes greater than 90° minus the look angle.

Processing of SAR-data
Phase history of a point target

Figure 3.1: Imaging geometry of a SAR-system


Fig. 3.1 depicts the illumination of a point target by a SAR sensor during data acquisition.
The sensor is moved along the x-axis (azimuth) and emits the radar pulses to the ground,
perpendicular to the flight direction. The distance R(x) between the sensor at position x and
the target can be expressed as

    R(x) = √(R0² + x²)    (8)

where R0 denotes the minimum distance between both at x = 0. As the extension of the radar
footprint on the ground is (usually) much smaller than the target distance (x << R0), the
following approximation can be made:

    R(x) ≈ R0 + x² / (2 R0)    (9)

The phases φ of the received echoes, resulting from the two-way distance 2 R(x), are:

    φ(x) = - (4π / λ) R(x)    (10)

Assuming a constant sensor velocity v and the abbreviation x = v t, a quadratic phase
behaviour in time results, neglecting the constant phase term, which has no time
dependency:

    φ(t) = - 2π v² t² / (λ R0)    (11)

The quadratic phase behaviour corresponds to a linear change in the received azimuth
frequency f_D, the so-called DOPPLER-effect:

    f_D = (1 / 2π) dφ/dt = - 2 v² t / (λ R0)    (12)

This linear DOPPLER-effect is only present as long as x is really small in comparison to R0.
Otherwise higher order components occur and the correct hyperbolic phase history has to
be taken into account. Particularly, this is the case for SAR sensors with very long apertures
and for those operating not exactly perpendicular to the flight direction but under a so-called
squint-angle.

The maximal illumination time T of a point target is defined by the extension of the antenna
footprint in azimuth. This length, equal to the length of the synthetic aperture, is determined
by:

    T = L_s / v = λ R0 / (L v)    (13)

The bandwidth B_a of the signal in azimuth is, therefore,

    B_a = |f_D(-T/2) - f_D(T/2)| = 2 v / L    (14)

This bandwidth in azimuth also sets the lower limit of the pulse repetition frequency (PRF) of
the radar, with which the radar pulses are emitted to the ground. After eliminating the
carrier frequency (demodulation in the receiver hardware), frequencies between -v/L and
+v/L are present in the complex signal. According to the NYQUIST-criterion, a sampling
frequency of two times the maximum frequency is necessary for an unambiguous recording of
the data. Here, the sampling frequency is given by the PRF.

Processing in azimuth
The echo of a single point target is contained in many received radar pulses and therefore
appears defocused. The aim of SAR processing, also called compression, is to focus all the
received energy of a target, distributed over the illumination time, into one point. To
achieve this, the typical phase history originating from the data acquisition process is used.
Assuming the backscattering of a point target to be time- and angular-independent, and also
dominant over other signals like noise and background reflections, the received signal u(t) in
azimuth direction can be written as

    u(t) = a exp(i φ(t))    (15)

with a denoting the backscattering amplitude of the point target (a complex value). The idea of
azimuth compression is now to adjust all these phase values to the same value, followed by a
coherent summation. To achieve this, a correlation of u(t) with a reference function s(t)
is performed. This reference function is constructed in such a way that it has
at every point exactly the opposite phase of the ideal impulse response in Eq. (15).

As the length of the synthetic aperture, and with that also the length of the signal, is limited, it
makes sense to also limit the length of the reference function by a box-like weighting
function w(t):

    w(t) = 1 for |t| <= T/2, 0 otherwise    (16)

    s(t) = w(t) exp(-i φ(t))    (17)

The result of the correlation is then

    I(t) = ∫ u(t') s(t' - t) dt'    (18)

         = a ∫ w(t' - t) exp(i [φ(t') - φ(t' - t)]) dt'    (19)

With the DOPPLER rate k = 2 v² / (λ R0), i.e. φ(t) = -π k t², the phase difference
becomes φ(t') - φ(t' - t) = π k t² - 2π k t t'.

Using that only small values of t are important, the approximations w(t' - t) ≈ w(t')
and exp(iπ k t²) ≈ 1 can be made. In the following, F denotes a FOURIER-transform.
With this, the correlation result can be written as

    I(t) ≈ a ∫ w(t') exp(-2πi k t t') dt' = a F{w}(k t) = a T sinc(π k T t)    (25)

with sinc(x) = sin(x) / x.

The result of this correlation is the image. The principal shape of the resulting impulse
response thereby corresponds to the FOURIER-transform of the weighting function. If the
weighting function is box-like, as above, the impulse response is a sinus cardinalis (sinc)
function. In Fig. 3.2 this process is illustrated. The received signal, also called
'chirp', has a constant amplitude and a parabolic phase behaviour (shown is only the real part
of the complex signal). The reference function has an amplitude of one and exactly the
opposite phase of the signal itself. After the correlation with s(t), the signal appears well
localised at t = 0. Its maximum amplitude has increased from |a| to |a| T and the peak
phase is zero. In reality, the neglected phase term proportional to the two-way sensor-object
distance as well as the object phase appear here.

Figure 3.2: Signal compression. Real part of the complex signal of an ideal point target
response (left) and amplitude of the compressed signal (right).

It can be recognised that the larger T gets, i.e. the longer the synthetic aperture gets, the
more I(t) resembles a DIRAC delta function. Defining the resolution as half the distance
between the first minima of the main peak at t = 0, a synthetic aperture of length L_s
consequently has an azimuthal resolution of:

    δ_a = λ R0 / (2 L_s)    (26)

Using the more correct definition of the resolution as the half width at half maximum, a 14%
bigger value is resulting. The first sidelobes are -13dB lower than the main peak. This can
cause problems, if a strong target is near to some weaker targets. Therefore, instead of using a
box-like weigthing function, often instead other shapes are used, whose FOURIER-transform
shows a better Peak-Sidelobe Ratio (PSLR). A very common function for this is the so-called
HAMMING-weighting:

(27)
Figure 3.3: Signal compression using a HAMMING weighting function.
Choosing the HAMMING parameter as 0.54, the first sidelobes of the FOURIER-transform are
completely suppressed (Fig 3.3). The PSLR is now much better and has a value of only -43dB.
However, the height of the maximum is lowered and the resolution of the main peak is degraded
by almost 30%. Despite these disadvantages, images processed using a HAMMING-weighting often
appear to be better focused.
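The two PSLR values can be checked numerically. The following sketch (in Python/NumPy, used here purely as an illustration language) measures the peak-to-sidelobe ratio of the impulse response for a box-like and for a HAMMING weighting; the window length of 256 is an arbitrary demo value:

```python
import numpy as np

def pslr_db(window):
    """Peak-to-sidelobe ratio (in dB) of the impulse response
    obtained with a given spectral weighting function."""
    n = len(window)
    # Heavy zero padding samples the impulse response (the window's
    # Fourier transform) finely enough to catch the sidelobe peaks.
    response = np.abs(np.fft.fftshift(np.fft.fft(window, 64 * n)))
    peak = int(response.argmax())
    i = peak
    while response[i + 1] < response[i]:   # walk to the first null
        i += 1
    return 20 * np.log10(response[i:].max() / response[peak])

n = 256
boxcar = np.ones(n)
# HAMMING weighting with the classical parameter 0.54.
hamming = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(n) / (n - 1))

print(pslr_db(boxcar))    # about -13 dB
print(pslr_db(hamming))   # about -43 dB
```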

The process of azimuth focussing, as presented here, is computationally very intensive, as for
every single pixel a correlation has to be calculated, consisting of a great number of
additions and multiplications. It can be significantly accelerated by utilizing the convolution
theorem [58]:

FT{ f * g } = FT{f} · FT{g} (28)

According to this theorem, the convolution of two functions is equal to the multiplication of
their FOURIER-transforms in the spectral domain. A correlation corresponds to a convolution
with a time-inverted and complex conjugated function [58]:

Corr(s, h)(t) = s(t) * conj(h(-t)) (29)

Therefore, the desired compressed signal can also be obtained as in the following:

s_comp = FT^(-1){ FT{s} · conj(FT{ref}) } (30)

In practice, the occurring DOPPLER-rates of the signal vary with the target distance. It is
therefore necessary to adapt the reference function to the respective data line under
investigation. Once the correct reference function is calculated, it can be used to
focus a whole azimuth line in one step, using the convolution theorem.
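This equivalence is easy to verify numerically. The following Python/NumPy sketch (an illustration, not part of the IDL exercise below) compares a direct circular correlation with the FOURIER-domain shortcut on random complex data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 512
signal = rng.standard_normal(n) + 1j * rng.standard_normal(n)
reference = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Direct circular correlation: for every lag k, multiply the signal
# with the conjugated, shifted reference and sum up.
direct = np.array([np.sum(signal * np.conj(np.roll(reference, k)))
                   for k in range(n)])

# The same result via the convolution theorem: one multiplication of
# the spectra (reference spectrum conjugated) and an inverse FFT.
fast = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(reference)))

print(np.allclose(direct, fast))  # True
```

The direct loop costs O(n^2) operations, the FFT variant O(n log n), which is where the speed-up of a digital SAR processor comes from.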
Another problem of the conventional processing described here are signal contributions with
higher DOPPLER-rates. They occur under larger angles and consequently have a larger time
delay. These signal parts may be recorded in later range cells ('range cell migration'). The
echo energy is then distributed over several range lines, and the SAR azimuth focussing
process becomes a two-dimensional operation. A conventional processing would therefore no
longer be able to focus the whole energy. In this case more advanced processing methods are
necessary, which are able to take this effect into account [59]-[63].

Processing in range
In range direction a SAR can work just like a conventional radar. To achieve a high resolution
in the direction perpendicular to the flight direction, a short pulse duration is necessary.
In practice, it can be problematic to generate a very short and high power pulse, as the
resulting energy densities are hard to handle. In the spectral domain, a short pulse duration
corresponds to a high signal bandwidth: a high resolution is therefore tantamount to a
high signal bandwidth. A second possibility to generate a high signal bandwidth is to use a
long, but frequency modulated pulse. It is common to use a linear frequency
modulation for this (called 'chirp'):

(31)

with B denoting the bandwidth of the emitted pulse. Like in azimuth, this introduces a
'typical' phase history into the signal, which can later be used to compress the signal. The
chirp rate is now:

k_rg = B / tau_p (32)

where tau_p denotes the pulse duration.

In order to compress the extended signal, a new reference function has to be constructed,
which takes into account the typically much faster frequency variation compared to the
azimuth case. The signal compression itself proceeds exactly in the same way, i.e. a
correlation of the signal with the new reference function has to be calculated. The result is,
similar to Eq. 3.18:

(33)

The resulting resolution in range direction is:

delta_rg = c / (2 * B) (34)
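This can be demonstrated with a small Python/NumPy sketch (pulse duration and sampling rate are assumed demo values): matched filtering a linear chirp of bandwidth B = 100 MHz yields a peak-to-first-null distance of c/(2B) = 1.5 m, the range resolution used later in the exercise.

```python
import numpy as np

c = 3.0e8     # speed of light [m/s]
B = 1.0e8     # chirp bandwidth [Hz]; expected resolution c/(2B) = 1.5 m
tau = 5.0e-6  # pulse duration [s] (assumed demo value)
fs = 8.0e8    # sampling rate [Hz], oversampled to resolve the peak shape

t = np.arange(-tau / 2, tau / 2, 1 / fs)
k = B / tau                            # chirp rate
chirp = np.exp(1j * np.pi * k * t**2)  # linear frequency modulated pulse

# Matched filtering via the Fourier domain (fast correlation).
n = 2 * len(t)
compressed = np.abs(np.fft.ifft(np.fft.fft(chirp, n) *
                                np.conj(np.fft.fft(chirp, n))))

# The peak sits at lag zero; walk to the first null beside it.
i = 1
while compressed[i] < compressed[i - 1]:
    i += 1
resolution = (i - 1) / fs * c / 2      # peak-to-first-null distance [m]
print(resolution)                      # close to c / (2 * B) = 1.5 m
```

Note that the resolution depends only on the bandwidth, not on the pulse duration: making tau longer leaves the compressed peak width unchanged.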

If SAR raw data is processed in azimuth and range, the result is a two-dimensional impulse
response, which is the product of the two individual one-dimensional impulse
responses (see Fig. 3.4). This function represents the intensity distribution of a point-like
target in the final SAR image.

Figure 3.4: Two-dimensional point target response (without weighting)

Structure of a SAR-processor
A SAR processor is the technical realization of the signal compression in range and azimuth.
Its purpose is to derive the high resolution image from the SAR raw data as recorded by the
sensor. Starting from optical techniques, over analog electronics, up to modern digital
SAR processors, several possibilities exist to realize the necessary computational steps.
Nowadays, in the era of very powerful digital hardware, mostly digital methods are used,
either realized in software or with dedicated signal-processing hardware.

The principal sequence of processing SAR raw data is shown in Fig. 3.5. The input is the
complex signal as recorded by the SAR sensor. After a one-dimensional FOURIER-transform
in range direction, each range line is multiplied with the FOURIER-transform of the reference
function in range. After the inverse FFT back to the time domain, the data are compressed in
range, but still defocused in azimuth. At this point a correction of the range-cell-migration
can take place. Then a FOURIER-transform in azimuth is performed, followed by a multiplication
with the FOURIER-transform of the reference function in azimuth. This function has to be
adapted to the current range distance under investigation. After the back-transformation, the
complex image result is obtained.

In Fig. 3.6 a simple SAR processing scheme is shown, on the basis of an ideal point target
response. It can be observed how the initially defocused signal is first compressed in
range and then in azimuth direction.
Figure 3.5: Block-diagram of a simple SAR processor
Figure 3.6: Processing of an ideal point target response (no range cell migration)
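The whole chain of Fig. 3.5 can be sketched on a toy point target without range cell migration, where the 2-D signal separates into a range chirp and an azimuth phase history. Python/NumPy is used as illustration language, and the chirp rates and array sizes are arbitrary demo values:

```python
import numpy as np

# Toy raw data of a single point target (no range cell migration).
n_rg, n_az = 256, 256
t = np.arange(n_rg) - n_rg / 2
x = np.arange(n_az) - n_az / 2
rg_chirp = np.exp(1j * np.pi * 0.002 * t**2)   # range chirp
az_hist = np.exp(1j * np.pi * 0.002 * x**2)    # azimuth phase history
raw = np.outer(rg_chirp, az_hist)              # [range, azimuth]

def compress(data, ref, axis):
    """Matched filter along one axis via the convolution theorem."""
    shape = [1, 1]
    shape[axis] = data.shape[axis]
    ref_spec = np.conj(np.fft.fft(ref)).reshape(shape)
    return np.fft.ifft(np.fft.fft(data, axis=axis) * ref_spec, axis=axis)

img = compress(raw, rg_chirp, axis=0)   # range compression first ...
img = compress(img, az_hist, axis=1)    # ... then azimuth compression

peak = np.unravel_index(np.abs(img).argmax(), img.shape)
print(peak)  # the target collapses to a single bright pixel at lag (0, 0)
```

With range cell migration the two axes would no longer separate, and this simple two-pass scheme would leak energy into neighbouring range lines, as discussed above.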

Exercise
In order to obtain some practice in SAR processing, in this chapter a small SAR raw data set
will be processed. The provided data are already range compressed, therefore only the azimuth
focussing step is necessary. In this exercise we use the Mount Britney raw data set, which can
be found in the 'data' section of Epsilon Nought (8.7MB Download). The data are simulated
with parameters which are typical for an airborne sensor, i.e. a relatively low flight level
and velocity. Nevertheless, for simplification, range-cell-migration was not simulated and
does not have to be taken into account during the processing. All programming examples are
given in the language IDL, but I would be happy if someone provides an alternative
formulation in C.

Warning: To process the demo data set with the presented programs, you might need around
100MB of memory. But of course, more memory efficient programming is possible.

Reading and visualizing the raw data


The first step is of course to read the data correctly from disk and to have a first look at
it. A common technique to store complex data efficiently is an interleaved I/Q
(real/imaginary) byte format: for every complex value, one byte for its real part
and one byte for its imaginary part is stored. This consumes only 2 bytes instead of 8 or 16
bytes per complex value. For compatibility between different platforms, we always use the
XDR (IEEE) data representation.

The data file starts with a header of two long integer values (4 bytes each), describing the
dimensions of the raw data in range and azimuth. We start by opening the raw data file,
defining the two long integer variables size_rg and size_az, reading them and printing their
values.

openr,lun,'mt_britney.dat',/xdr,/get_lun
size_rg = 0l
size_az = 0l
readu,lun,size_rg
readu,lun,size_az
print,'Range size :',size_rg
print,'Azimuth size :',size_az

If everything is correct, you should get size_rg = 2048 and size_az = 4096. With this
information we can now generate a complex array to hold the raw data, as well as a temporary
byte array to read one line of the interleaved byte data from disk. It has two times the
length of size_rg because it needs to pick up the real part as well as the imaginary part of
the raw data.

line = bytarr(2*size_rg)
data = complexarr(size_rg,size_az)

With a for-loop we can now read the data line by line and put it into the complex array data.
The problem here is to correctly separate the interleaved real and imaginary parts to
construct the complex values. To do so, we first define two index arrays index_q and index_i
which contain the index values of the real and imaginary bytes, respectively.

index_q = findgen(size_rg)*2
index_i = findgen(size_rg)*2+1

In a descriptive way, index_q contains [0,2,4,...] while index_i contains [1,3,5,...].


Additionally, the I/Q offset of 128 has to be subtracted from both real and
imaginary part. This is because bytes can only hold positive values, and in order to
convert the measured raw data to bytes, this offset has been added.
re = line[index_q]-128.0
im = line[index_i]-128.0
data = complex(re,im)

Finally, here is the whole program to read the raw data file into the variable data:

openr,lun,'mt_britney.dat',/xdr,/get_lun
size_rg = 0l
size_az = 0l
readu,lun,size_rg
readu,lun,size_az
line = bytarr(2*size_rg)
data = complexarr(size_rg,size_az)
index_q = findgen(size_rg)*2
index_i = findgen(size_rg)*2+1
for i=0,size_az-1 do begin
readu,lun,line
re = line[index_q]-128.0
im = line[index_i]-128.0
data[*,i] = complex(re,im)
endfor
free_lun,lun

Now we can have a look at the data. As it is complex data, it is appropriate to take the
absolute value before visualizing. An image can be derived with
tv,bytscl(rebin(abs(data),512,512)). The result is depicted in Fig. 4.1. You should get
something like this! Obviously, not a lot of structures can be recognized yet in the image. No
wonder, as the data is completely unfocused in azimuth, with a resolution of only slightly
below one kilometer. In range direction the image already has a high resolution of 1.5 meters.
This can be recognized on some linear structures along azimuth.

Figure 4.1: Amplitude of range-compressed raw data


Azimuth focussing
Now the data should be compressed in azimuth. To do so, some sensor/imaging parameters
are necessary:
wavelength 0.056 [m]
sensor velocity 100 [m/s]
sensor height 3000 [m]
range distance 4000 [m]
PRF 500 [Hz]
range sampling 1.0e8 [Hz]
azimuth resolution 0.5 [m]

With this information, first of all, we should calculate the length of the synthetic aperture in
meters and pixels, which is necessary to obtain the given azimuth resolution:

lambda = 0.056
r = 4000.0
v = 100.0
prf = 500.0
res_az = 0.50
len_sa = lambda*r/2/res_az
nr_sa = floor(len_sa/v*prf)

The next step is to generate the correct reference function. It can be calculated from the
imaging geometry. The azimuth positions of every pixel inside the synthetic aperture are:

x = (findgen(nr_sa)-nr_sa/2)*len_sa/nr_sa

As we know the range distance, the true sensor-scatterer distance for every azimuth position is
obtained by:

dr = shift(sqrt(r^2+x^2),nr_sa/2)

This distance corresponds to a phase (real and complex representation) of:

phase = 4*!pi/lambda*dr
cpha = exp(complex(0,phase))

The function cpha already represents the desired reference function. But as we want to use the
FOURIER-domain representation for efficiency reasons, we have to perform a so-called zero-
padding to expand this function to the full azimuth length of the data:

ref = complexarr(size_az)
ref[0:nr_sa/2-1] = cpha[0:nr_sa/2-1]
ref[size_az-nr_sa/2:size_az-1] = cpha[nr_sa/2:*]
The rest is quite simple: we just have to multiply the spectrum of every azimuth line with the
spectrum of the reference function and afterwards perform an inverse FOURIER-transform:

for i=0,size_rg-1 do begin


data[i,*] = fft(fft(data[i,*],-1)*conj(fft(ref,-1)),1)
endfor

Finally, again the whole program:

lambda = 0.056
r = 4000.0
v = 100.0
prf = 500.0
res_az = 0.50
len_sa = lambda*r/2/res_az
nr_sa = floor(len_sa/v*prf)

x = (findgen(nr_sa)-nr_sa/2)*len_sa/nr_sa
dr = shift(sqrt(r^2+x^2),nr_sa/2)
phase = 4*!pi/lambda*dr
cpha = exp(complex(0,phase))
ref = complexarr(size_az)
ref[0:nr_sa/2-1] = cpha[0:nr_sa/2-1]
ref[size_az-nr_sa/2:size_az-1] = cpha[nr_sa/2:*]

for i=0,size_rg-1 do begin


data[i,*] = fft(fft(data[i,*],-1)*conj(fft(ref,-1)),1)
endfor
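As a cross-check, the same reference-function construction and focusing can be written in NumPy and tested on a simulated azimuth line containing a single point target (the simulated signal is built with the same phase sign convention as the reference function):

```python
import numpy as np

lam, r, v, prf, res_az = 0.056, 4000.0, 100.0, 500.0, 0.5
size_az = 4096

len_sa = lam * r / 2 / res_az        # synthetic aperture length: 224 m
nr_sa = int(len_sa / v * prf)        # aperture length in samples: 1120
nr_sa += nr_sa % 2                   # force an even number of samples

# Reference function, shifted so the aperture centre sits at index 0
# (equivalent to the IDL shift() call above).
x = (np.arange(nr_sa) - nr_sa // 2) * len_sa / nr_sa
dr = np.roll(np.sqrt(r**2 + x**2), nr_sa // 2)
cpha = np.exp(1j * 4 * np.pi / lam * dr)

ref = np.zeros(size_az, dtype=complex)   # zero padding in the middle
ref[:nr_sa // 2] = cpha[:nr_sa // 2]
ref[-(nr_sa // 2):] = cpha[nr_sa // 2:]

# Simulated azimuth line with one target at sample 1000.
m = np.arange(size_az)
dist = np.sqrt(r**2 + ((m - 1000) * v / prf)**2)
line = np.exp(1j * 4 * np.pi / lam * dist)

focused = np.fft.ifft(np.fft.fft(line) * np.conj(np.fft.fft(ref)))
print(np.abs(focused).argmax())  # the target focuses at sample 1000
```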

The data should now be focused, so let us have a look at it. It is still complex data, and
the image information is contained in its absolute value. So, just generate an image window
by:

tva,rotate(rebin(abs(data),512,512),1),/m

The program tva.pro used here is not part of IDL. It can be downloaded in the IDL section
of Epsilon Nought. The /m keyword scales the data from 0 to 2.5 times the average of the
data array. This is not too important for this simulated data set, but is absolutely
necessary when visualizing real SAR data, as single bright points would otherwise dominate
the brightness scaling process.

The new result is depicted in Fig. 4.2. The image now clearly shows some structures in
azimuth, but still appears a bit defocused. In the next section we will address this problem.
Another feature of SAR data can already be observed: the characteristic speckle. The image
does not appear smooth but kind of noisy. As already discussed in this tutorial, this is a
result of the coherent imaging technique, and has nothing to do with low resolution.
Figure 4.2: Image focussed with a fixed reference function.

Range adaptation
So what did we do wrong? When calculating the reference function, the range distance of 4000m
was used. In fact, this is only correct for the first azimuth line. The following lines have a
growing distance to the sensor. The spacing between two adjacent lines can be obtained from
the range sampling rate of the sensor and the speed of light:

rg_sam = 1.0e8
c = 2.99e8
dist_r = c/2 / rg_sam

The resulting value of dist_r ≈ 1.5 shows that every 'range bin' has a size of about 1.5 meters.

The growing distance to the target has two main effects: the chirp rate of the reference
function changes, as well as the length of the synthetic aperture in azimuth. In order to
obtain a high quality image, our SAR processor has to adapt these two parameters with range.
With the experience we have now, this should be no problem. We just have to include their
calculation in the main processing loop.

One important thing to take care of is that the length of the synthetic aperture has an even
number of pixels. This is required by the splitting of the reference function into two parts
(needed for the FOURIER-domain formulation), and can be ensured by a simple if-condition.

Finally, the whole modified processing:

rg_sam = 1.0e8
c = 2.99e8
dist_r = c/2 / rg_sam
for i=0,size_rg-1 do begin
r_real = r+i*dist_r
len_sa = lambda*r_real/2/res_az
nr_sa = floor(len_sa/v*prf)
if nr_sa mod 2 eq 1 then nr_sa=nr_sa+1
x = (findgen(nr_sa)-nr_sa/2)*len_sa/nr_sa
dr = shift(sqrt(r_real^2+x^2),nr_sa/2)
phase = 4*!pi/lambda*dr
cpha = exp(complex(0,phase))
ref = complexarr(size_az)
ref[0:nr_sa/2-1] = cpha[0:nr_sa/2-1]
ref[size_az-nr_sa/2:size_az-1] = cpha[nr_sa/2:*]
data[i,*] = fft(fft(data[i,*],-1)*conj(fft(ref,-1)),1)
endfor
tva,rotate(rebin(abs(data),512,512),1),/m
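The variation across the scene is substantial; a quick Python sketch with the same parameters shows the aperture growing from 1120 samples at near range to almost 2000 at the far edge of the 2048 range bins, which is why a single fixed reference function defocuses most of the image:

```python
lam, v, prf, res_az = 0.056, 100.0, 500.0, 0.5
rg_sam, c = 1.0e8, 2.99e8
dist_r = c / 2 / rg_sam              # about 1.5 m per range bin

for i in (0, 2047):                  # nearest and farthest range line
    r_real = 4000.0 + i * dist_r
    len_sa = lam * r_real / 2 / res_az
    nr_sa = int(len_sa / v * prf)
    nr_sa += nr_sa % 2               # force an even number of samples
    print(i, r_real, nr_sa)
```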

The final result is depicted in Fig. 4.3. And yes, the image looks much better now, doesn't
it? Well, maybe you even already know Mt. Britney.

I apologize for the speckled nature of the image, but SAR is like that. To make everybody
happy, we'll approach this problem in the next section and try to get the image a bit smoother.

Figure 4.3: Image focussed with an adaptive reference function.

If you want, you can play a bit with the parameters to see what happens if you use lower or
higher resolution, wrong forward velocities and so on. Britney will do all you want....

For the lazy ones: Download here the full program sar_proz.pro
Hint: Have also a look at "Britney's Introduction to Semiconductor Physics".

Multilooking
Coming soon.....

Bibliography
1

C. Wiley:
"Pulsed Doppler Radar Method and Means",
US Patent No. 3.196.436, 1954.

2

L.J. Cutrona, E.N. Leith, C.J. Palermo and L.J. Porcello:
"Optical Data Processing and Filtering Systems",
IRE Transactions on Information Theory, Vol. IT-6, pp. 386-400, 1960.

3

J.R. Bennett, I.G. Cumming and R.A. Deane:
"The Digital Processing of SEASAT Synthetic Aperture Radar Data",
Proceedings of the IEEE International Radar Conference, pp. 168-174, 1980.

4

C. Wu:
"A Digital Fast Correlation Approach to Produce SEASAT SAR Imagery",
Proceedings of the IEEE International Radar Conference, pp. 153-160, 1980.

5

F.T. Ulaby, R.K. Moore and A.K. Fung:
"Microwave Remote Sensing Volume I",
Addison-Wesley, Reading, MA, 1981.

6

F.T. Ulaby, R.K. Moore and A.K. Fung:
"Microwave Remote Sensing Volume II",
Addison-Wesley, Reading, MA, 1982.

7

L.C. Graham:
"Synthetic Interferometric Radar for Topographic Mapping",
Proceedings of the IEEE, Vol. 62, pp. 763-768, 1974.
8

H.A. Zebker and R.M. Goldstein:


"Topographic Mapping from Interferometric Synthetic Aperture Radar Observations",
Journal of Geophysical Research, Vol. 91, pp. 4993-4999, 1986.

9

A. Gabriel and R.M. Goldstein:
"Crossed Orbit Interferometry: Theory and Experimental Results from SIR-B",
International Journal of Remote Sensing, Vol. 9, No. 5, pp. 857-872, 1988.

10

A.L. Gray and P.J. Farris-Manning:


"Repeat-Pass Interferometry with Airborne Synthetic Aperture Radar",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 31, No. 1, pp. 180-191, 1993.

11

"Space at the Service of Our Environment",


Proceedings of the first ERS-1 symposium, Vol. 1-2, Cannes, France, 1992.

12

"Space at the Service of Our Environment",


Proceedings of the second ERS-1 symposium, Vol. 1-2, Hamburg, Germany, 1993.

13

"ERS SAR Interferometry",


Proceedings of the Fringe'96 workshop, Vol. 1-2, Zurich, Switzerland, 1996.

14

"Space at the Service of Our Environment",


Proceedings of the third ERS-1 symposium, Vol. 1-3, Florence, Italy, 1997.

15

J. Way and E.A. Smith:


"The Evolution of Synthetic Aperture Radar Systems and their Progression to the EOS SAR",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 29, No. 6, pp. 962-985, 1991.

16
R.L. Jordan, B.L. Huneycutt and M. Werner:
"The SIR-C/X-SAR Synthetic Aperture Radar System",
Proceedings of the IEEE, Vol. 79, No. 6, pp. 827-838, 1991.

17

M. Werner:
"Shuttle Radar Topography Mission (SRTM) - Mission Overview",
Proceedings of EUSAR'2000, pp. 209-212, Munich, Germany, 2000.

18

S.N. Madsen, J. Martin and H.A. Zebker:


"Analysis and Evaluation of the NASA/JPL TOPSAR Across-Track Interferometric System",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 33, No. 2, pp. 383-391, 1995.

19

E.L. Christensen, N. Skou, J. Dall, K.W. Woelders, J.H. Jørgensen, J. Granholm and S.N.
Madsen:
"EMISAR: An Absolutely Calibrated Polarimetric L- and C-band SAR",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 36, No. 6, pp. 1852-1865,
November 1998

20

F. Holecz, J. Moreira, P. Pasquali, S. Voigt, E. Meier and D. Nüesch:


"Height Model Generation, Automatic Geocoding and Mosaicing Using Airborne AeS-1 InSAR
Data",
Proceedings of IGARSS'97, pp. 1929-1931, Singapore, 1997.

21

R. Scheiber:
"Single-Pass Interferometry with the E-SAR Sysyem of DLR",
Proceedings of EUSAR'98, pp. 47-50, Friedrichshafen, Germany, 1998.

22

D. Massonnet, M. Rossi, C. Carmona, F. Adragna, G. Peltzer, K. Feigl and T. Rabaute:


"The Displacement Field of the Landers Earthquake mapped by Radar Interferometry",
Nature, Vol. 364, pp. 138-142, 1993.

23

D. Massonnet, P. Briole and A. Arnaud:


"Deflation of Mount Etna Monitored by Spaceborne Radar Interferometry",
Nature, Vol. 375, pp. 567-570, 1995.

24
A. Ferretti, C. Prati, and F. Rocca:
"Nonlinear Subsidence Rate Estimation Using Permanent Scatterers in Differential SAR
Interferometry",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 38, No. 5, pp. 2202-2212,
September 2000.

25

H. Rott, C. Mayer and A. Siegel:


"On the Operational Potential of SAR Interferometry for Monitoring Mass Movements in
Alpine Areas",
Proceedings of EUSAR'2000, pp. 43-46, Munich, Germany, 2000.

26

R. Kwok and M.A. Fahnestock:


"Ice Sheet Motion and Topography from Radar Interferometry",
IEEE Transactions on Geoscience and Remote Sensing Vol. 34, No. 1, pp. 189-220, 1996.

27

X. Wu, K.H. Thiel and P. Hartl:


"Estimating Ice Changes by SAR Interferometry",
Proceedings of the 3rd International Airborne Remote Sensing Conference and Exhibition,
pp. 110-117, Copenhagen, Denmark, 1997.

28

J.O. Hagberg, L.M. Ulander, and J. Askne:


"Repeat-Pass Interferometry over Forested Terrain",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 33, No. 2, pp. 331-340, 1995.

29

J. Askne, P.B. Dammert, L.M. Ulander, and G. Smith:


"C-Band Repeat-Pass Interferometric SAR Observations of the Forest",,
IEEE Transactions on Geoscience and Remote Sensing, Vol. 35, No. 1, pp. 25-35, 1997.

30

K.P. Papathanassiou, A. Reigber and M. Coltelli:


"On the Interferometric Coherence: A Multifrequency and Multitemporal Analysis",
Proceedings of the Fringe'96 workshop, pp. 319-330, Zurich, Switzerland, 1996.

31

R.N. Treuhaft, S.N. Madsen, M. Moghaddam, and J.J. van Zyl:


"Vegetation Characteristics and Underlying Topography from Interferometric Data",
Radio Science, Vol. 31, pp. 1449-1495, 1996.
32

R.N. Treuhaft, and P.R. Siqueira:


"The Vertical Structure of Vegetated Land Surfaces from Interferometric and Polarimetric
Radar",
Radio Science, Vol. 35, No. 1, pp. 141-177, 2000.

33

G. Sinclair:
"Modification of the Radar Range Equation for Arbitrary Targets and Arbitrary Polarization",
Report 302-19, Antenna Laboratory, The Ohio State University Research Foundation, 1948.

34

G. Sinclair:
"The Transmission and Reception of Elliptically Polarized Waves",
Proceedings of the IRE, Vol. 38, No. 2, pp. 148-151, 1950.

35

J.R. Huynen:
"Phenomenological Theory of Radar Targets",
PhD. thesis, University of Technology, Delft, The Netherlands, 1970.

36

W.M. Boerner and M.B. El-Arini:


"Polarisation Dependence in Electromagnetic Inverse Problems",
IEEE Transactions on Antennas and Propagation, Vol. 29, No. 2, pp. 262-271, 1981.

37

E. Rignot and R. Chellappa:


"Segmentation of Polarimetric Synthetic Aperture Radar Data",
IEEE Transactions on Image Processing, Vol. 1, pp. 281-300, 1992.

38

J.S. Lee, M.R. Grunes and R. Kwok:


"Classification of Multi-look Polarimetric SAR Imagery Based on Complex Wishart
Distribution",
International Journal of Remote Sensing, Vol. 15, No. 11, pp. 2299-2311, 1994.

39

S.R. Cloude and E. Pottier:


"An Entropy Based Classification Scheme for Land Applications of Polarimetric SAR",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 35, No. 1, pp. 68-78, 1997.

40
S.R. Cloude and E. Pottier:
"A Review of Target Decomposition Theorems in Radar Polarimetry",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 34, No. 2, pp. 498-518, 1996.

41

S.R. Cloude, I. Hajnsek and K.P. Papathanassiou:


"Eigenvector Methods for the Extraction of Surface Parameters in Polarimetric SAR",
Proceedings of CEOS SAR Workshop 1999, CNES, Toulouse, France, 26-29 October 1999.

42

J.S. Lee, M.R. Grunes, T. Ainsworth, D. Lu, D.L. Schuler and S.R. Cloude:
"Unsupervised Classification Using Polarimetric Decomposition and Complex Wishart
Distribution",
Proceedings of IGARSS'98, pp. 2178-2180, Seattle, USA, 1998.

43

E. Pottier and J.S. Lee:


"Unsupervised Classification Scheme of POLSAR Images Based on the Complex Wishart
Distribution and the "H/A/α" Polarimetric Decomposition Theorem",
Proceedings EUSAR'2000, pp. 265-268, Munich, Germany, 2000.

44

I. Hajnsek, S.R. Cloude, J.S. Lee and E. Pottier:


"Inversion of Surface Parameters from Polarimetric SAR Data",
Proceedings of IGARSS'2000, pp. 1095-1097, Honolulu, Hawaii, 2000.

45

K.P. Papathanassiou:
"Polarimetric SAR Interferometry",
PhD. thesis, Technische Universität Graz, Austria, February 1999.

46

S.R. Cloude and K.P. Papathanassiou:


"Polarimetric SAR Interferometry",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 36, No. 5, pp. 1551-1565, 1998.

47

K.P. Papathanassiou, S.R. Cloude, and A. Reigber:


"Estimation of Vegetation Parameters using Polarimetric SAR Interferometry Part I and II",
Proceedings of CEOS SAR Workshop 1999, CNES, Toulouse, France, 26-29 October 1999.

48
K.P. Papathanassiou, S.R. Cloude and A. Reigber:
"Single and Multi-baseline Polarimetric SAR Interferometry over Forested Terrain",
Proceedings of EUSAR'2000, pp. 123-126, Munich, Germany, 2000.

49

G.F. Lemoine, D.H. Hoekman and A.J. Sieber:


"Polarimetric Contrast Classification of Agricultural Fields Using MAESTRO-1 AIR-SAR Data",
International Journal of Remote Sensing, Vol. 15, No. 14, pp. 2851-2869, 1994.

50

E. Rignot, J. Way, C. Williams and L. Viereck:


"Radar Estimates of Aboveground Biomass in Boreal Forest Using SAR Imagery",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 32, No. 5, pp. 1117-1123, 1994.

51

D.L. Evans, T.G. Farr, J.J. van Zyl and H. Zebker:


"Imaging Radar Polarimetry: Analysis Tools and Applications",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 26, No. 5, pp. 774-789, 1988.

52

Y. Oh, K. Sarabandi and F.T. Ulaby:


"An Empirical Model and an Inversion Technique for Radar Scattering from Bare Soil
Surfaces",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 30, No. 2, pp. 370-381, 1992.

53

D.L. Schuler, J.S. Lee, and K.W. Hoppel:


"Polarimetric SAR Image Signatures of the Ocean and Gulf Stream Features",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 31, No. 6, pp. 1210-1221, 1993.

54

W.M. Boerner, H. Mott, E. Lüneburg, C. Livingstone, B. Brisco, R.J. Brown and J.S. Paterson:
"Polarimetry in Radar Remote Sensing: Basic and Applied Concepts",
Manual of Remote Sensing Vol. 2, Chapter 5, John Wiley & Sons, New York, 1998.

55

R.M. Page:
"The Origin of Radar",
Anchor Books, Garden City, NY, 1962.

56
J.R. Klauder, A.C. Price, S. Darlington and W.J. Albersheim:
"The Theory and Design of Chirp Radars",
The Bell System Technical Journal, Vol. 39, Nr. 4, July 1960.

57

J.C. Curlander and R.N. McDonough:


"Synthetic Aperture Radar: Systems and Processing",
John Wiley and Sons, New York, 1991.

58

Bronstein, Semendjajew:
"Taschenbuch der Mathematik",
Verlag Harri Deutsch, Frankfurt/Main, 1989.

59

R.K. Raney, H. Runge, R. Bamler, I. Cumming and F. Wong:


"Precision SAR Processing without interpolation for range cell migration correction",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 32, pp. 786-799, July 1994.

60

A. Moreira, J. Mittermayer and R. Scheiber:


"Extended Chirp Scaling Algorithm for Air- and Spaceborne SAR Data Processing in Stripmap
ans ScanSAR Imaging Modes",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 34, No. 5, pp. 1123-1136, Sep.
1996.

61

R. Bamler:
"A comparison of range-doppler and wavenumber domain SAR focusing algorithms",
IEEE Trans. on Geoscience and Remote Sensing, Vol. 30, pp. 706-713, July 1992.

62

J. Mittermayer, A. Moreira and O. Loffeld:


"The Frequency Scaling Algorithm for Spotlight SAR Data Processing",
IEEE Transactions on Geoscience and Remote Sensing, Vol. 37, No. 5, September 1999.

63

F. Rocca, C. Prati, and A. Monti Guarnieri:


"New Algorithms for Processing of SAR Data",
ESA Contract Report, ESRIN Contract no. 7998/88/F/FL(SC), 1989.
