
1  Introduction

In the last few years, high quality images of the Earth produced by synthetic
aperture radar (SAR) systems carried on a variety of airborne and spaceborne
platforms have become increasingly available. Two major leaps forward were
provided by the launch of the ERS-1 satellite by the European Space Agency
in 1991 and the advent of very flexible airborne systems carrying multifre-
quency polarimetric SARs, of which the NASA/JPL AirSAR provides perhaps
the single most influential example. These systems ushered in a new era of
civilian radar remote sensing because of their emphasis on SAR as a measure-
ment device, with great attention being paid to data quality and calibration.
This emphasis continues to play a major part in the development, deployment,
and application of current systems.
ERS-1 was the first in a series of orbital SARs planned to have long
lifetimes and semioperational capabilities. The JERS-1, ERS-2, and Radarsat
satellite systems are currently in orbit, with ENVISAT planned for launch in
1999. By providing a long time series of accurate measurements of the backscat-
tering coefficient, these satellites allow dynamic processes to be observed over
most of the Earth's surface, with impacts in many areas, such as vegetation
mapping and monitoring, hydrology, sea-ice mapping, and geology. The unique
capability of SAR to exploit signal phase in interferometry has given rise to
completely new tools for glaciology and the study of tectonic activity.
Because of the constraints imposed by deploying a radar in space, these
systems are simple, in the sense of using single frequencies and polarizations
with modest resolution. By contrast, airborne systems have been able to dem-
onstrate the advantages of having multiple frequencies and polarizations avail-
able. These advantages were further demonstrated, but from space, by the
SIR-C/X-SAR mission of the Space Shuttle. In addition, the AirSAR system
indicated that longer wavelength radars can have a special role in Earth ob-
servation due to their ability to penetrate vegetation canopies and interact
with structural elements of trees and the underlying soil. Much longer wave-
length systems are now in operation and promise to provide effective methods
for reconnaissance and remote sensing over heavily vegetated areas.
While civilian systems have concentrated on radiometric accuracy and
investigation of natural targets, the priority of military systems is the detection
and recognition of man-made targets (often vehicles) against a clutter back-
ground. The need for rapid reconnaissance has placed considerable emphasis
on airborne systems that can be deployed on demand. By its nature, the
military recognition task usually demands resolution better than 1m and sys-
tems that can operate at long range, for survivability. These two requirements
enforce long synthetic apertures that, because airborne systems are preferred,
have necessitated the development of sophisticated motion-compensation (MOCO)
techniques.
While an enormous amount of effort has been expended on systems to
acquire SAR data, comparatively little has been expended on making the best
use of those data. Two lines of attack are needed to achieve this. One is the
investigation of the physical content of the data, by means of experiment,
observation, and modeling. The other is to examine the actual properties of the
data in light of what is known about the physics, the general properties of the
world, and the applications of interest, to identify and extract optimal estimates
of the information-bearing components of the data. This bridging role between
what we know about the data and how we best get at the information they
contain is the subject of this book.
Radar systems are capable of producing very high quality images of the
Earth, as demonstrated by the high-resolution image shown in Figure 1.1. In
order for this imagery to have value it must be interpreted so as to yield
information about the region under study. An image analyst soon learns to
recognize features such as trees, hedges, fields (with internal structure and
texture), shadows, and buildings, which encompass a range of natural and
man-made objects. However, it is a time-consuming task to extract the informa-
tion, particularly when large areas must be examined. Furthermore, there is no
guarantee of consistency between analysts or measure of the performance they
achieve. These limitations motivate the search for automatic algorithms to
derive the relevant information more quickly and reproducibly, or, in some
circumstances, more sensitively.
The need for automatic or semiautomatic methods becomes even more
pressing when polarimetric, multifrequency, and/or multitemporal images are
available.

Figure 1.1 High-resolution (< 1m) DRA X-band SAR image of typical rural scene. (British
Crown Copyright 1997/DERA.)

For such multidimensional data it is first necessary to define the
quantities to be made available to the analyst: which parameters carry the infor-
mation? When there are more independent pieces of information at each pixel
than can be sensibly represented, for example, by color overlays, how should the
data be organized and displayed to pick out its salient points? In such circum-
stances, it may be better to let some of the decisions be made automatically as
long as they have a solid basis in understanding how information is embodied in
the data. This is particularly true when there are time constraints and large
amounts of data to analyze.
Automatic image understanding must take place on two different scales.

• Low-level analysis provides a series of tools for identifying and quantifying
details of local image structure.
• High-level analysis then uses this detail to construct a global structural
description of the scene, which is the information required by the image
analyst.

Note that low-level algorithms are not sufficient; local details have to be
incorporated into some overall form of understanding. This demands methods
for overcoming defects in the detailed picture, for example, by joining edges
across gaps or merging regions where appropriate. On the other hand, sophisti-
cated high-level techniques are of no value if the underlying tools are incapable
of bringing out the information with sufficient sensitivity. The aim of this book
is to develop and relate the tools needed at both these levels. It comprises
the following six principal components:

• The imaging process (Chapters 2 and 3);
• Properties of SAR images (Chapters 4 and 5);
• Single-image exploitation (Chapters 6 to 10);
• Multiple-image exploitation (Chapters 11 and 12);
• Application in image classification (Chapter 13);
• Further developments (Chapter 14).

Radar images have properties that are unfamiliar to those of us more
used to optical data and photographs. Anyone intending to make effective use
of SAR needs to understand and appreciate some of these basic properties.
Those connected with the sensor-scene geometry and the signal processing
involved in image formation are described in Chapter 2. Here the basic prin-
ciples of SAR image formation are outlined and used to define two funda-
mental characteristics of the system and the impact they have on the images
being produced. The first of these is the spatial resolution of the imagery. The
azimuth resolution of the SAR is independent of range, but the range resolu-
tion varies with position if an image is displayed in a map reference frame.
This has important consequences for the visibility of objects in the images and
the number of pixels available to measure their properties. The second funda-
mental quantity is the point spread function (PSF), which determines the cor-
relation and sampling properties of the imagery. It is an essential concept when
describing the properties of images of both point and distributed targets. The
interplay of these two generic target types is shown to be central in calibrating
the SAR. Calibration is essential if accurate measurements of the geophysical
quantities known as the radar cross section and radar backscattering coefficient
are to be made. Only by working in terms of such sensor-independent quan-
tities can measurements from different sensors, images gathered at different
times, or measurements from different parts of an image be sensibly compared.
The treatment in Chapter 2 assumes an ideal SAR system, but, in practice,
images can suffer from a number of defects. Inadequate calibration leading to
radiometric or phase distortion forms only part of the problem. Particularly for
airborne systems, unknown or uncorrected perturbations of the sensor position
(on the scale of fractions of a wavelength) about its expected trajectory can cause
defocusing, geometric and radiometric distortion, and increased (unpredictable)
sidelobe levels. In Chapter 3 we show how autofocus techniques can be com-
bined with a simple dynamics model for the sensor platform to recover nearly
ideal image quality, as epitomized by Figure 1.1. This signal-based motion-
compensation scheme is equally applicable to airborne and satellite systems.
Chapter 3 is atypical in that the techniques it describes use the raw (unproc-
essed) SAR data, whereas elsewhere we start from image data. However, it
should be noted that the methods involve postprocessing algorithms that can be
applied whenever full bandwidth complex data are available. For example, they
are applicable in principle to the ERS-1 SLC product. Readers with no desire
to become embroiled in the details of SAR processing may prefer to ignore this
chapter. They should nonetheless be aware that images of the highest quality can
be routinely produced, even by airborne systems operating at high resolutions
and long ranges, by making use of the techniques set out here. This is particu-
larly important where high-resolution data are essential to enable the detailed
structure and position of objects within the scene to be determined.
Succeeding chapters deal with the issue of extracting information from the
images themselves. Chapter 4 is concerned with the fundamental properties of
SAR images and how we represent the nature of information within them. It
sets up a framework within which the properties of the data are described in a
series of physical and empirical models (data models). Prior knowledge is encap-
sulated in a series of world models that can be applied to different image-inter-
pretation functions. Both types of model can be combined in a Bayesian
approach in which the output takes the form of a maximum likelihood (ML) or
maximum a posteriori (MAP) solution, given the data and the world models.
After setting up this framework, we examine the consequences of the fact
that the SAR is a linear measurement system providing an estimate of the
complex reflectivity at each pixel. There are then various ways of representing
the SAR data, all based on the complex image. These different image types all
contain different manifestations of the phenomenon known as speckle, which
results from interference between many random scatterers within a resolution
cell. Speckle influences our ability to estimate image properties and thus is
central to information retrieval from individual SAR images. For many purposes
the phase inherent in the single-look complex imagery carries no useful infor-
mation and can be discarded. The information at each pixel is then carried by
the radar cross section (RCS) or backscattering coefficient σ⁰. For distributed
targets, where resolution is not a critical issue, discarding the phase allows for
the incoherent averaging of pixels or independent images to provide better
estimates of σ⁰ (a process known as multilooking). Finally, we introduce the
image types that become available when multichannel data are available. In
doing so, the nature of speckle as a true electromagnetic scattering phenomenon
becomes manifest through its exploitation in polarimetry and interferometry.
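The speckle and multilooking behavior described above can be illustrated numerically. The following is a minimal sketch, not code from this book: each pixel's field is modeled as the coherent sum of many unit-amplitude scatterers with uniformly random phases, so the one-look intensity has a normalized variance near 1, and incoherently averaging L independent looks reduces it to roughly 1/L. All function names and sample sizes here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(n_pixels, n_scatterers=100):
    """One-look intensity: coherent sum of many unit scatterers with
    uniformly random phases (fully developed speckle, unit mean)."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_pixels, n_scatterers))
    field = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)
    return np.abs(field) ** 2

def multilook(looks):
    """Incoherent average of independent looks stacked on axis 0."""
    return looks.mean(axis=0)

I1 = speckle_intensity(50_000)
norm_var_1 = I1.var() / I1.mean() ** 2      # close to 1 for one-look speckle

L = 4
IL = multilook(np.stack([speckle_intensity(50_000) for _ in range(L)]))
norm_var_L = IL.var() / IL.mean() ** 2      # close to 1/L after multilooking
```

The drop in normalized variance from 1 to 1/L is exactly the gain in radiometric precision that motivates discarding phase and multilooking when estimating σ⁰ over distributed targets.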
Figure 1.1 indicates that information about the scene is contained in the
variations of the RCS. These can be at large scales (e.g., the hedgerows, shadows,
and bright areas of woodland present in this image) or at a smaller scale where
they provide a textural effect within a single type of land cover. In Chapter 5 we
discuss models that describe this latter type of image property. Most impor-
tantly, we introduce the product model that describes SAR images, particularly
those arising from natural clutter, as a combination of an underlying RCS and
speckle. A gamma-distributed RCS is shown to lead to a K-distributed intensity
or amplitude that provides a good description of many natural clutter regions.
This model provides the crucial component in our approach to extracting
information from single SAR images, as described in Chapters 6 to 10.
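The product model just introduced lends itself to a simple numerical check. In the sketch below (an illustration under stated assumptions, not the book's implementation), the RCS is drawn from a gamma distribution of order ν with unit mean and multiplied by unit-mean exponential speckle; the resulting intensity is K-distributed, with normalized variance 1 + 2/ν rather than the value 1 characteristic of pure speckle. The order parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
order = 3.0      # gamma order parameter (nu); illustrative value
mean_rcs = 1.0

# Product model: intensity = (gamma-distributed RCS) x (unit-mean speckle)
rcs = rng.gamma(shape=order, scale=mean_rcs / order, size=n)
speckle = rng.exponential(scale=1.0, size=n)
intensity = rcs * speckle    # K-distributed intensity

# Normalized variance: 1 for pure speckle, 1 + 2/nu for the K distribution
nv = intensity.var() / intensity.mean() ** 2
expected = 1.0 + 2.0 / order
```

The excess of the normalized variance over 1 is the textural signature carried by the underlying RCS fluctuations, and estimating ν from it is one route to the texture parameters exploited in later chapters.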
From Figure 1.1 it is obvious that the overall structure in the scene is
dominated by the deterministic manner in which the RCS varies as a function
of position. Strong returns correspond to identifiable objects, such as individual
trees. In Chapter 6 we introduce a series of world models (discussed in Chapter
4) and exploit them in reconstructing this deterministic RCS by separating it
from the speckle. In Chapter 7 we introduce the cartoon model, which asserts
that the image is made up of regions of constant RCS, leading to segmentation.
The role of the exploitation tool is then to define the position and strength of
the different segments within the scene. We compare both the reconstructions
(from Chapter 6) and the segmentations (from Chapter 7) in terms of a quality
measure based on the speckle model. This enables a quantitative assessment of
the different algorithms to be made.
The underlying variation of RCS is not the only source of information
within a SAR image. Many of the fields in Figure 1.1 show texture, correspond-
ing to plowed furrows. In this context the presence of furrows, indicating a
property of the field, is often more important than their individual positions. The
same is true for many types of natural clutter, such as woodland. Again, we are
not so much concerned with the positions of individual trees but with the
texture properties that characterize large regions of woodland. These properties,
encapsulated in the single-point statistics (described in Chapter 8) and the
correlation properties (discussed in Chapter 9), can then be exploited for region
classification or segmentation.
In Chapter 10 we introduce a new model that is intended to describe
man-made objects, such as targets in a military application. Extracting informa-
tion about these can be factorized into processes of detection, discrimination,
and classification. These functions operate on the detailed local level to provide
a global picture of the presence of targets within the scene. Initially we introduce
a Bayesian treatment of target detection and apply it to a uniform or textured
RCS applied to both background and targets. We discuss suitable simple dis-
criminants capable of rejecting false detections, leading to a formulation of
classification based on a Bayesian approach. In practice, this may be too com-
plicated and associated suboptimum classifiers may be preferable. Man-made
objects generally comprise only a few individual scatterers, so the deterministic
interference between these becomes crucial. This can be exploited in super-
resolution.
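In its simplest form, the detection stage described above amounts to comparing each pixel with an adaptive threshold derived from the surrounding clutter. The following one-dimensional cell-averaging sketch illustrates the idea generically; it is not the Bayesian formulation developed in Chapter 10, and the guard size, training size, and threshold factor are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def ca_cfar_1d(intensity, guard=2, train=8, threshold_factor=5.0):
    """Cell-averaging detector: flag cells whose intensity exceeds
    threshold_factor times the mean of nearby training cells, with
    guard cells excluded so the target does not bias its own estimate."""
    n = len(intensity)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        window = np.r_[intensity[lo:max(0, i - guard)],
                       intensity[min(n, i + guard + 1):hi]]
        if window.size and intensity[i] > threshold_factor * window.mean():
            detections[i] = True
    return detections

# Unit-mean exponential (speckled) clutter with one injected bright target
clutter = rng.exponential(1.0, size=200)
clutter[100] = 60.0            # point target well above the clutter mean
det = ca_cfar_1d(clutter)      # det[100] is True; few false alarms elsewhere
```

Discrimination and classification then operate on the surviving detections, which is where the false alarms admitted by such a simple test are rejected.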
Earlier chapters deal with individual SAR images for which speckle can be
treated as a noiselike phenomenon dominating information extraction. How-
ever, increasingly important are multichannel data in which images of the same
scene are acquired under different operating conditions. Multidimensional data
types form the subject matter of Chapter 11. We concentrate initially on the
properties of polarimetric SAR, since this type of data illustrates many of the
major issues involved in combining images, in particular those properties asso-
ciated with interchannel coherence. As would be expected, the existence of extra
channels provides an enhanced capacity for extracting information from the
images. Indeed, fully polarimetric systems provide a complete description of the
scattering properties of the scene for a given operating frequency. Physically,
different polarization configurations interact differently with the various com-
ponents of the scattering medium. Particularly when used in combination with
frequency diversity, polarization provides a means of probing the physical prop-
erties of the medium. We provide strong theoretical and experimental evidence
to illustrate that the one-dimensional product model for texture extends readily
to polarimetric data, and this reveals that the multidimensional Gaussian model
plays a central role in representing the data. All the important image types
needed to capture the information in the data follow immediately from this data
model. We examine the properties of these images and develop a complete
theory of the distributions involved, including the estimation properties of the
important parameters. These theoretical results are verified by comparison with
measurements from multifrequency polarimetric systems. We also develop the
theory needed to take account of fluctuations in the number of scatterers
contributing to the polarimetric signal and examine the textural information
present in data as a function of frequency. Finally, the theory developed for
polarimetric data is related to other multifrequency data types, in particular
those encountered in interferometric, multifrequency, and multitemporal
images.
Chapter 12 is concerned with using the image model derived and verified
in Chapter 11 to develop techniques for extracting information from a variety
of multidimensional images. We first deal with the tools needed to estimate the
key parameters defined by the product model. These are very much based on
methods developed for single-channel data; it is very satisfying to see how
readily these extend to higher dimensional data. However, the extension is not
entirely straightforward. In particular, the concept of speckle that permeated
almost all the work for single-channel data has no obvious counterpart in higher
dimensions, unless all the channels are uncorrelated. This changes the nature of
some of the optimization problems that must be posed and solved to get at the
information in the data. However, the distinction between structural and para-
metric information that runs as a thread through all the work on single channels
is just as pertinent here. Methods are described that improve our ability to
perceive structure, by forming intensity images with reduced normalized vari-
ance compared with single-channel speckle. Change detection is an important
process when handling multiple images gathered at different times. This is, in
principle, a higher level process since it exploits joint properties of different
images. The information in this type of data can take two forms: statistical
information that describes individual regions, such as the return from a distrib-
uted target, and structural information that is concerned with the boundaries
between these regions. We show how the type of information required has a
direct impact on the methods used to interpret the data. The ultimate test of
any image-understanding technique is the extent to which it brings out the
information required in a given application. A high-level approach is adopted
in Section 12.6, which describes a case study comparing the ability of different
algorithms to provide useful information in tropical forest monitoring.
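A common elementary form of the change detection discussed above is thresholding the log-ratio of coregistered multilooked intensity images, since the ratio is insensitive to the absolute RCS level and calibration offsets. The sketch below is an illustration under simplified assumptions (independent gamma speckle at each date, a single brightened patch), not a method taken from Chapter 12, and the look number and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 16   # number of looks; illustrative value

def multilooked_image(rcs):
    """Simulated L-look intensity: RCS times unit-mean gamma speckle."""
    return rcs * rng.gamma(shape=L, scale=1.0 / L, size=rcs.shape)

def log_ratio_change(img1, img2, threshold):
    """Flag pixels whose absolute log intensity ratio between the two
    dates exceeds the threshold."""
    lr = np.abs(np.log(img2) - np.log(img1))
    return lr > threshold

rcs_before = np.ones((64, 64))
rcs_after = rcs_before.copy()
rcs_after[20:30, 20:30] *= 10.0          # a patch brightens tenfold

img1 = multilooked_image(rcs_before)
img2 = multilooked_image(rcs_after)
mask = log_ratio_change(img1, img2, threshold=np.log(4.0))
```

Even this pixelwise test already shows the statistical/structural distinction drawn above: the threshold is set by the per-region ratio statistics, while the useful output is the boundary of the changed patch.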
Throughout this book our approach consists of building optimized tools
for information extraction based on our best understanding of the data and
world models underlying the observations. These are continually modified as we
apply them to real data and assess their performance. A focal point of this effort
occurs in Chapter 13, which addresses one of the central issues for many
applications, namely, image classification. Here much of our work on learning
image structure and making optimal estimates of the parameters describing
objects in the scene comes face to face with attaching meaning to these con-
structs. In particular, we wish to associate parameter values at a pixel or within
a region to empirically defined classes of objects in the scene, such as fields,
woods, and towns. Approaches to this problem fall into three broad categories:
(1) purely data-driven methods, whose fundamental weakness is their unknown
scope for generalization; (2) classification purely by scattering mechanism, which in
principle gives considerable insight into what the classification means but pro-
vides no discrimination when desired target classes are dominated by similar
mechanisms; and (3) rule-based methods based on an understanding of the
physical processes underlying observations. Many of the most promising meth-
ods fall into this latter category, which combines pragmatism with insight to
provide robust, transferable classification schemes. We also show how the par-
ticular properties of SAR can allow an initial classification to be refined to
retrieve more information about individual classes in the scene.
In the final chapter we review the extent to which our model-based
approach provides a complete, accurate, tractable, and useful framework for
information extraction from SAR images. In doing so we establish the status of
our best methods and algorithms and indicate directions for progress in light of
the real-world examples encountered throughout the book. In our view, the key
issue is the extent to which this work becomes embodied in applications of SAR
data. The tools have been brought to a high level of refinement but must be
judged by their ability to help in the use of the data. The interplay of tools and
the tasks to which they are applied will provide the major motivation for any
further development.
