
UNIVERSITY OF MINES AND TECHNOLOGY (UMaT)

DEPARTMENT OF GEOMATIC ENGINEERING

PHOTOGEOLOGY AND REMOTE SENSING (GL 279)

[Cover figure: Aduah et al. (2015), Analysis of land cover changes in the Bonsa Catchment]

2021/2022 ACADEMIC YEAR

(Handout: Modified after Dr E. E. Duncan)


LECTURE HANDOUT FOR PHOTOGEOLOGY AND REMOTE SENSING

COURSE DESCRIPTION
This course concerns the application of photogrammetry and remote sensing to visually
extract geologic information from aerial photos and remote sensing imagery. The
following will be covered:

• Introduction to photogeology and photogrammetry
• Types and characteristics of aerial photographs
• Geometry of aerial photographs
• The rapid and careful methods for obtaining a stereoscopic view
• Relief displacement
• Measurements on aerial photographs
• Photo-interpretation of geologic features
• Introduction to remote sensing
• The electromagnetic spectrum and the atmospheric windows available to remote sensing and of interest to the geological community
• Remote sensing platforms – the Landsat and SPOT systems
• Microwave systems
• Digital image processing – image restoration, image enhancement and image classification methodologies
• Applications of remote sensing techniques to geology
• Policy issues for remote sensing data acquisition


References and Recommended Textbooks

Jensen, J. R. (2015), Introductory Digital Image Processing: A Remote Sensing Perspective, Pearson Education, 544 pp.

Mather, P. M. (2010), "Is there any sense in remote sensing?", Progress in Physical Geography, 34(6), pp. 1-19.

Lillesand, T. M. and Kiefer, R. W. (2000), Remote Sensing and Image Interpretation, Wiley, 780 pp.

Jensen, J. R. (2000), Remote Sensing of the Environment: An Earth Resource Perspective, Prentice-Hall, New Jersey, USA.

Cracknell, A. P. and Hayes, W. B. (1991), Introduction to Remote Sensing, Taylor and Francis, 420 pp.

Legg, C. (1992), Remote Sensing and Geographic Information Systems: Geological Mapping, Mineral Exploration and Mining, Ellis Horwood, 278 pp.

Lattman, L. H. and Ray, R. G. (1965), Aerial Photographs in Field Geology, Holt, Rinehart and Winston Incorporated, 320 pp.

ASSESSMENT OF STUDENTS
Assessment of students will be in two forms: continuous assessment (40%) and the end of semester examination (60%). The continuous assessment shall include quizzes, class attendance and assignments. The end of semester examination shall be marked over 60.


PLAGIARISM:
Plagiarism is an academic offence and the instructor will treat acts of plagiarism seriously. Culprits will be punished when caught. Information on plagiarism is readily available online.

All handed-in work must be your own. You may seek advice from other students regarding design, techniques or software operations, but you must not share or duplicate files. This includes finding another student's saved file on a computer, making minor modifications and passing the work off as your own. Any offence will attract a punishment such as deductions from accumulated marks.

Schedule for this semester (Nov 2021 to March 2022) (subject to minor modifications)
TOPICAL OUTLINE
WEEK 1 - 2: INTRODUCTION TO PHOTOGRAMMETRIC PROCESSES
WEEK 3 - 4: PHOTOGRAMMETRIC MEASUREMENTS
WEEK 5 - 6: INTERPRETATION EXERCISES
WEEK 7 - 10: REMOTE SENSING AND ITS APPLICATIONS
WEEK 11 – 12: REVISION EXERCISES
Demonstration: Introduction to some photogrammetric equipment, remote sensing
websites and tutorials
Relevance of course: To enable students to appreciate the processes and decisions taken
in geologic data gathering and interpretation using aerial photos and satellite imagery
from remote sensing. Students will also go through some of the processes involved in
digital image processing.

Note: This handout is a working document; students should always bring it to lectures so that they can add vital notes and perform some of the exercises in it. Thank you.


PHOTOGEOLOGY

Photogeology is defined as 'the visual extraction of geologic information from an aerial photograph or satellite imagery by conventional photo-interpretation or digital techniques'.

Photogrammetry, in this context, is the process of mounting a camera on an airplane and taking aerial photographs of the terrain underneath for mapping and other specific purposes. The films used are of two types:
i. Panchromatic film – black and white, or shades of grey.
ii. Colour film – multispectral (RGB).
Aerial photographs may also be superspectral or hyperspectral. Panchromatic film is generally cheap, whereas colour film is expensive.

Aerial photos are classified according to the orientation of the optical axis of the camera.
The optical axis can be defined as the line along which the camera points. It connects the
centre of the film with the centre of the lens and extends straight out from the front of the
camera.

There are two main types of aerial photos:

1. An oblique aerial photo is intentionally inclined from the vertical. It is extremely difficult to compile map data from oblique aerial photos, so they are not used in routine photo-interpretation. The scale decreases from the foreground to the background. Their advantages are the large coverage and the better illustration of terrain features.
2. Vertical aerial photos – the optical axis at the time of exposure is plumb, but due to disturbances in the attitude of the airplane during exposure the optical axis may incline a few degrees from the vertical. This is termed 'tilt'. Practically all aerial survey for mapping is carried out using vertical photos. Vertical photos

present the land surface from a comparatively unfamiliar angle: the appearance is that of a pictorial map, unless shadows are present to accentuate some features on the photos.

Geometry of Aerial Photos

[Figure: geometry of an aerial photograph – the film plane (with points b1, p1 and a1), the focal length f, the perspective centre (exposure station), the flight height H, the optical axis, and the ground points A, P and B.]

The orthogonal projection of the perspective centre onto the photograph is called the
principal point. It is indicated as P’ on the photo and P on the ground. The identification
of the principal point will be explained in the laboratory exercises.
Scale of Aerial Photograph
The scale of an aerial photograph is equal to the ratio between the focal length f and the flight height H:

Scale = f / H

These two parameters can be obtained from any aerial photograph.


The scale is constant for vertical photographs of flat areas, when the camera height is constant. Over non-flat terrain the height of the camera above the ground is not constant, and this produces changes in scale.

For non-flat terrain the scale is given by:

Scale = f / (H - h)

where h is the height of the terrain above the datum and H is the flight height above the datum.
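As a quick numerical sanity check, the scale relation Scale = f/(H - h) can be evaluated directly. The sketch below is illustrative only; the camera focal length and the flight and terrain heights are assumed values, not taken from any particular survey.

```python
def photo_scale(f_m: float, H_m: float, h_m: float = 0.0) -> float:
    """Local scale of a vertical aerial photograph: Scale = f / (H - h).

    f_m: focal length (m); H_m: flight height above datum (m);
    h_m: terrain height above datum (m). Returns the scale as a
    fraction (e.g. 0.00005 for 1:20,000).
    """
    return f_m / (H_m - h_m)

# Assumed example: a 152 mm camera flown 3,800 m above datum,
# over terrain lying 760 m above datum.
scale = photo_scale(0.152, 3800.0, 760.0)
print(f"1:{1 / scale:,.0f}")  # prints 1:20,000
```

Note that the same photograph has a different local scale wherever the terrain height h changes, which is exactly the scale variation discussed below.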

Stereoscopy and the use of stereoscopes

1. Stereoscopic vision – when you look at objects with two eyes, the eyes receive two slightly different views, which are fused physiologically by the brain into a 'model' having three dimensions; the third dimension is only provided when objects are viewed with both eyes. This is called binocular or stereoscopic vision. It is also possible to get a three-dimensional impression if, instead of 'nature', we offer to each of our eyes a photo taken from a different camera position – the so-called stereo pair.
For a three-dimensional impression, a stereoscope and two sequential aerial photos are usually used. The two main types of stereoscope are:
1. Lens stereoscope (pocket stereoscope)
2. Mirror stereoscope
1. Lens stereoscope – this makes use of a pair of simple magnifying glasses to keep the lines of sight approximately parallel. Most lens stereoscopes have a magnifying power of 2-3 diameters. The primary drawback of the lens stereoscope is that only 1/3 to 1/2 of the standard overlap can be studied stereoscopically at a time.
2. Mirror stereoscope – this provides a view of the entire overlap through its mirrors and prisms. Most basic models afford no magnification, but 3x-8x


binocular attachments are available as options. The greater the enlargement, however, the smaller the field of view.
Stereoscopic Exaggeration:
The appearance of the stereoscopic image is that of a relief model or stereo model giving
the impression of solidity and depth. For vertical photographs, having 60% overlap, the
vertical scale of the model appears to be considerably exaggerated and the impression of
relief is far greater than what an observer on the airplane would receive visually.
Mountains and hills appear higher, and their slopes steeper, than they actually are in nature. This vertical exaggeration is due to the much greater angular difference between the rays from any given ground point to two successive exposure stations, compared to that between the rays from the same ground point to the observer's eyes.

Pseudoscopic view
In stereoscopic viewing it is important to orientate the photos in such a way that the left eye sees only the left photo and the right eye sees only the right photo. If the photos are viewed in reverse, a pseudoscopic view results in which ups and downs are reversed, e.g. valleys appear as ridges and hills appear as depressions. This is also termed relief reversal.

Students must also be able to explain Monoscopic view.

Geometry of Vertical Aerial photographs


The vertical aerial photograph is a central perspective view in which features further from the camera lens at the time of exposure (such as valleys) are shown at a smaller scale than features nearer the lens (such as hills). Scale variation is a fundamental characteristic of all aerial photographs, and thus photographic images almost always occupy positions other than their true relative map positions. Only in a few areas where the terrain has almost no relief is scale variation negligible. The greater the amount of relief, the greater the scale variation. This is significant to the field geologist mapping on aerial photographs, because the positions of images affect the map positions of geologic observations. Scale variation also affects geologic interpretation and measurements from photographs.


It is therefore desirable that scale variation be understood and reconciled as much as possible in the field. Fortunately, scale differences that are significant in field mapping can be greatly reduced or removed by simple field procedures.

It is important to note that although scale variation is characteristic of aerial photographs and must be compensated for in positioning geologic observations on a base map, the very causes of these scale variations make it possible to determine vertical intervals, which in turn allow the computation of stratigraphic thickness, dip of beds and related geologic measurements. Scale differences reflect differential displacement of images, and the geometry of the aerial photograph may generally be considered in terms of:
1. displacement of images on the single photograph that causes errors in map position (planimetric errors), and
2. displacement of images on pairs of photographs viewed stereoscopically, in which differential displacements are related to heights of terrain features.

GEOMETRY OF THE PHOTOGRAPH

The terminology and geometric elements of the single vertical photograph are as shown below:


[Figure: geometry of the single vertical photograph – the film negative (with image points a, i and b), the camera and its focal length, the contact print, the centre point, the flight height, the optical axis, and the terrain points d, e and O.]

The dashed line drawn through the centre of the film and the lens represents the optical axis of the camera; this is perpendicular to the plane of the photograph (film), which is horizontal.

The position at which the axis passes through the photograph is termed the centre point or
principal point of the photograph. In a truly vertical photograph this also represents the
plumb point or nadir point, which is defined as the photographic position representing the
point on the earth's surface vertically beneath the camera lens at the time of exposure. In
practical terms the vertical photograph is rarely absolutely vertical and the nadir and
centre point do not coincide.

The deviation from the vertical is called tilt; in modern photography this is generally small (often less than two degrees), but in older photos it could be more.


The distance between the camera lens and the ground represents the flight height of the airplane; the distance between the lens and the film represents the focal length of the camera. The above diagram is not to scale.

The most significant geometric relationship shown is the fact that equal angles are subtended at the camera lens by an object and by its photographic image. This relationship holds regardless of the focal length of the camera lens and the flight height of the airplane. Because equal angles are subtended by the object and its photographic image, the very basic relationship among focal length (f), flight height (H), size of ground object (O) and size of film image (i) can be derived.

Triangles abc and cde are similar, thus

f / H = i / O

The ratio of image size to object size is the general scale of the aerial photograph, thus

Scale = focal length / flight height

The above relationship indicates that as focal length increases the scale of the photograph becomes larger, and as flight height increases the scale becomes smaller.
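Because f/H = i/O, a distance measured on the photograph can be converted directly to a ground distance. The minimal sketch below assumes illustrative values for the focal length and flight height; they are not from any particular mission.

```python
def ground_size(image_size_m: float, f_m: float, H_m: float) -> float:
    """Ground size O of an object from its image size i, using f/H = i/O,
    i.e. O = i * H / f. All lengths in metres."""
    return image_size_m * H_m / f_m

# Assumed example: a feature measuring 4 mm on a photo taken with a
# 152 mm lens from 3,040 m above the ground.
print(round(ground_size(0.004, 0.152, 3040.0), 3))  # 80.0 (metres)
```

The same relation, rearranged, recovers the photo scale 1:(H/f) used earlier.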

Preparing and arranging photographs for stereo viewing


a. Rapid adjustment
b. Careful adjustment

Rapid adjustment
i. Place the photos approximately in position under the stereoscope by estimation. The overlapping parts of the two photos must be adjacent to each other, and the shadows visible on the photos must fall towards the observer.
ii. Place the right and left hand index fingers on corresponding points on the right and left hand photos respectively.
iii. While viewing through the stereoscope, slowly move the photos with each hand until the two fingers merge. Take the fingers from the points.
iv. Adjust the photos by slight rotation or linear motion so that stereoscopic imagery is visible over the whole field of view.


v. Tape down the photos

NB: This will be demonstrated during lectures and students have to learn to represent
these in a diagrammatic way where possible.

b. Careful adjustment

i. Place the photos under the stereoscope with the overlapping parts of the two photos next to each other. The shadows visible on the photos must fall towards the observer.

ii. Locate and mark the principal points on each photo (m1 and m2). This is done by aligning opposite sets of fiducial marks with a straight edge.

iii. Transfer the principal points of the two overlapping photos and mark them (m1' and m2'). By connecting the principal points and the transferred principal points, you obtain the flight line.

iv. Lay a straight edge on both photographs and arrange the stereo pair in such a way that the points m1, m2, m1' and m2' are lined up in a straight line. The flight line of the left photo is then in one line with the flight line of the right photo. The distance between m1 and m1', or m2 and m2', must be about the same as the stereo base of the stereoscope used.

v. The mirror stereoscope is placed over the stereo pair in such a way that the line
joining the centres of the stereoscopic lenses is parallel to the flight line.

vi. Although the photos should now be seen in three dimensions, a little adjustment of the distance between the photos may still be necessary, so the right hand photo is moved sideways until the spacing between corresponding images produces comfortable stereoscopic viewing.

vii. Tape down the right hand photo.

NB: Accurate and comfortable stereoscopic viewing requires that the eye base, the line joining the centres of the stereoscopic lenses (the instrument base) and the photo base (flight line) all be parallel. All parts of the stereo model can be observed by moving the


stereoscope while maintaining this parallelism. Objects that change position between exposures (e.g. automobiles, trains, boats) cannot be viewed stereoscopically.

Radial displacement due to relief


Because of terrain relief, the images of ground positions are shifted or displaced about the
central projection of an aerial photograph. If a photograph is truly vertical, the
displacement of images is in a direction radial from the photograph centre. This
displacement is called radial displacement due to relief. It represents an error in map
positioning and is the principal geometric characteristic the geologist must consider in
compiling his geologic data from aerial photographs.

Radial displacement due to relief is also responsible for scale differences within any one
photograph and for this reason a photograph is not an accurate map. The fundamental
difference between a photograph and a map can be demonstrated by comparing the
central projection of a single photograph, in which all objects are positioned as though
viewed from the same point with the orthographic projection of a map, in which each
terrain point is positioned as though viewed from vertically above.

On any one photograph the amount of displacement due to relief increases with increasing distance from the centre point, and with increasing difference in elevation between any point and the selected datum reference, as shown below.


Points A and B are at the same height above the datum plane, but point A is farther from the ground nadir than B. The figure shows that the radial displacement due to relief is greater for point A than for B. This represents a real difference in terms of distance on the ground. Note also that point C is at the same distance as point A from the ground nadir, but because of its lesser height it is not displaced as great a distance as point A.

For any one photograph the amount of radial displacement of the top of an object, with respect to its base and with respect to the image position of the base on the photograph, can be determined conveniently from

m = (r * h) / H

where:
m = radial displacement of the top of an object with respect to its base
r = radial distance on the photograph from the centre point to the base of the displaced image
h = height of the object displaced
H = flight height above the base of the object displaced

From the above relationship, radial displacement due to relief increases with increasing distance (r) from the photograph centre point and with increasing height (h) of the object. Relief displacement also varies with flight height and, seemingly, with focal length.
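The displacement formula m = r*h/H is straightforward to evaluate. The sketch below uses assumed values (an 80 mm radial distance, a 150 m high feature, a 3,000 m flight height above its base) purely for illustration; note that m comes out in the same units as r.

```python
def relief_displacement(r_mm: float, h_m: float, H_m: float) -> float:
    """Radial displacement m (in the units of r) of an object's image top
    relative to its base: m = r * h / H.

    r_mm: radial distance on the photo from centre point to the image base;
    h_m: object height; H_m: flight height above the object's base
    (h and H in the same ground units).
    """
    return r_mm * h_m / H_m

# Assumed example: image base 80 mm from the centre point, a 150 m tall
# ridge, flight height 3,000 m above the ridge base.
print(relief_displacement(80.0, 150.0, 3000.0))  # 4.0 (mm on the photo)
```

Doubling r or h doubles the displacement, while doubling H halves it, matching the stated behaviour.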

GEOMETRY OF THE STEREOSCOPIC MODEL


The stereoscopic model or stereo model is the three-dimensional representation of the
terrain seen by simultaneously viewing an overlapping pair of aerial photographs.
Airbase is the horizontal distance between the camera positions, or camera stations, for
any two successive photographs in the flight line. It is also equal to the distance between
the ground nadir points for each exposure. The equivalent distance as shown on each
photograph of the model is termed the photobase. The photobase is represented on each
photograph by the distance between the centre point (assumed to be the photograph nadir
point) and the image representing the centre point of the adjacent photograph.


The flight direction lies along a line connecting one exposure position to the next, and is represented on each photograph of an overlapping pair by a line drawn through the centre point and the image representing the centre point of the adjacent photograph; it is thus parallel to the photobase. This is illustrated below:

Parallax In The Stereoscopic Model


Parallax is defined as an apparent shift in the position of an object with respect to some reference system, caused by a shift in the point of observation. This may be illustrated simply as follows. Hold a pencil vertically at arm's length and view it with the left eye, keeping the right eye closed. Note the position of the pencil with respect to some


background object, such as the wall across the room (the reference system). Now close
the left eye and view the pencil with the right eye and note the apparent shift of the pencil
with respect to the background object. This apparent shift in the position of the pencil is
parallax or parallactic displacement.

Similarly terrain features on two overlapping aerial photographs taken from different
positions will exhibit parallactic displacement. The amount of parallactic displacement is
related to the height of a feature and to the geometry of the stereoscopic model; it can be
measured from a pair of overlapping aerial photographs and used to calculate vertical
intervals that in turn can be used in determining stratigraphic thicknesses, dips of beds,
and other geologic parameters.

Vertical measurements by the parallax method


When an object is viewed or photographed from two different positions, as on two
overlapping vertical aerial photographs, an apparent shift in the position of that object
takes place; this is known as parallactic displacement. This apparent shift in position is a
measurable linear distance that is related to the height of the object.
When the stereometer is used to determine vertical distances, two linear horizontal distances are actually measured from the photographs in order to calculate the vertical interval between two points in the stereoscopic model. Although in photogrammetric terms the determination of height is related to the measurement of the absolute stereoscopic parallaxes of the two points in question, in actual practice one merely measures distances A and B as shown below:


To find the parallax difference between the bottom and top of a pole, represented by MN on the left photograph and by M'N' on the right photograph, it is necessary only to measure distances A and B. The distance (A - B) is the parallax difference. The figure also shows the distances to be measured in determining the adjusted photobase; these distances are CC' and MM', and the photobase adjusted to the base of the pole is CC' - MM'.
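The bookkeeping here is simple differencing: the parallax difference is A - B and the adjusted photobase is CC' - MM'. The values below are hypothetical millimetre readings, chosen only to show the arithmetic:

```python
# Hypothetical stereometer/scale readings, in millimetres.
A, B = 91.4, 89.1      # separations measured for the top and bottom of the pole
CC_, MM_ = 90.2, 2.0   # CC' and MM', used for the adjusted photobase

parallax_difference = A - B      # dp = A - B
adjusted_photobase = CC_ - MM_   # b  = CC' - MM'
print(round(parallax_difference, 1), round(adjusted_photobase, 1))  # 2.3 88.2
```

These two quantities (dp and b) are exactly the inputs required by the parallax equation derived later in this section.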

Parallax measurement procedure


Prior to measuring parallax distances, the photographs must be oriented properly for
stereoscopic viewing as previously described. The necessary measurements are always
made in the x-direction or flight line as follows:
I. Place the stereometer and the stereoscope over the stereoscopic pair so that they
are parallel to the flight line such that the left-hand target dot of the stereometer is
over one of the points to be measured on the left photograph of the stereo model.
The left hand target dot will remain in a fixed position.

II. Adjust the horizontal separation of the dots by turning the drum screw, which
moves the right hand target plate until the two target dots appear to fuse into a
single dot. The single fused dot, seen stereoscopically is raised or lowered by
turning the drum screw until it appears to rest or float on the ground surface at the
first point selected.

III. For convenience and to obtain more consistent readings, turn the drum screw until the floating dot rises above the terrain point in question, then turn back the screw until the floating dot descends to the ground surface. The instrument reading is recorded and the procedure is repeated for the second point selected. This is as shown below:


The above diagram illustrates the final separation distance of the stereometer targets in measuring the parallax difference between the top and bottom of the pole.
The difference in readings is the parallax difference between the two terrain points and is the figure used to calculate the vertical interval between the points as described above. It is best to take two or three readings of parallax. In making parallax measurements the fused dot will readily be seen where it floats above the apparent ground surface of the stereoscopic model, but it will appear to split into its two component dots as it is lowered below the ground surface.

Relation of parallax to difference in elevation


The geometric relationship of parallax to the elevation or height of an object is fundamentally related to the radial displacements of images on the individual photographs of the stereoscopic model, as illustrated by the diagram below:


Assume that the vertical pole is located exactly on the flight path. On each photograph of the stereoscopic pair the top of the pole would appear to be displaced radially outward from the centre point, as shown below:

In order to determine the height of the pole, the displacement of the top relative to the bottom must be known. This is fundamentally related to the absolute stereoscopic parallaxes of the top and bottom of the pole as determined from the stereoscopic model. In photogrammetric terms, the absolute stereoscopic parallax of a point is defined as the algebraic difference, measured parallel to the photobase, of the distances of the two images of the point (e.g. the top of the pole on the left hand photograph and the top of the pole on the right hand photograph) from their respective centre points.
From the diagram above, the absolute stereoscopic parallax of the top of the pole is given by


y - (-y') or (y + y')

and that of the bottom of the pole is

x - (-x') or (x + x')

The difference between the absolute stereoscopic parallaxes of the top and bottom of the pole is (y + y') - (x + x').

This is termed the parallax difference, and it can be shown by simple algebra to be equal to d1 + d2 in the last two diagrams above.

In practice, when determining heights from aerial photographs, the quantity d1 + d2 is determined directly by subtracting distance B from distance A as shown in the diagram above.

The basic relationship between the height of an object and the various geometric
elements of the stereoscopic model can be derived. In accordance with the law of similar
triangles it can be shown from the figure that
OM/P1 = O'M'/P2 = (H - h)/h

and since

B/P = OM/P1 = O'M'/P2 = (H - h)/h

then

P = (B * h)/(H - h) .............................. eqn 1

Now the parallax difference (Δp) between the top and bottom of the pole has been shown above to equal d1 + d2, and

Δp = d1 + d2 = (f * P1)/H + (f * P2)/H = (f * P)/H

From this,

P = (H * Δp)/f

Substituting for P in eqn 1:

(H * Δp)/f = (B * h)/(H - h),  but photobase b = (f/H) * B

Further substitution for B gives

Δp = (b * h)/(H - h)

Solving for h:

h = (H * Δp)/(b + Δp) .............................. eqn 2
Eqn 2 is the basic relationship of the parallax difference to the elements of the stereoscopic model and is called the parallax equation. From it, measurements made directly from the stereoscopic model, such as the photobase b and the parallax difference Δp, can be combined with other known or easily obtainable data to determine the heights of objects, such as an outcrop, or heights of terrain.
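A minimal sketch of the parallax equation h = H*Δp/(b + Δp). The flight height, photobase and parallax difference below are assumed round numbers for illustration only; in practice b and Δp come from stereometer measurements as described above.

```python
def height_from_parallax(H_m: float, b_mm: float, dp_mm: float) -> float:
    """Parallax equation: h = H * dp / (b + dp).

    H_m: flight height above the lower point (m); b_mm: adjusted
    photobase (mm); dp_mm: measured parallax difference (mm).
    b and dp need only share the same units; h returns in the units of H.
    """
    return H_m * dp_mm / (b_mm + dp_mm)

# Assumed example: flight height 3,000 m, adjusted photobase 88 mm,
# parallax difference 2 mm.
print(round(height_from_parallax(3000.0, 88.0, 2.0), 1))  # 66.7 (metres)
```

Because b and Δp appear only as a ratio, they may be measured in any common unit (mm on the print is usual), while h is returned in the units of H.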

Aerial Photo interpretation

Aerial photographs can provide us with two types of information: metric and semantic.

1. Metric information concerns the position of objects on the earth's surface. The exact position of an object is determined by measuring linear dimensions, angles etc.; this is the field of photogrammetry.
2. Semantic information concerns the nature or identity of objects imaged on aerial photography, interpreted from cues such as tone, texture, reflectance etc.
Aerial photo-interpretation techniques are widely used in geography, geology, hydrology and geomorphology. The aerial photograph cannot be regarded as a sole source of information; however, efficient photo-interpretation should make available all the semantic information registered on the photograph.

Phases in the process of aerial photograph interpretation

The degree or quality of identification possible from the stereoscopic photo image depends largely on the inherent visibility of the object under study. Some objects, such as streets, houses and roads, are inherently visible in the photo image, although the actual visibility in any specific case depends on the scale, the quality of the photo image and on such incidental factors as the superposition of objects (e.g. trees, roads, shadows and clouds). Other objects, such as soils, subsurface water and many rock types, are inherently invisible. Their identification from the stereoscopic photo image is only partly possible; here


deductions must be employed to assist analysis and classification. This is why photo-interpretation must be combined with field and laboratory investigations. Generally, four phases are defined in photo-interpretation. These are:
a. photo reading (i.e. detection, recognition and identification)
b. analysis
c. classification
d. deduction

a. Photo reading – The first step in photo reading is detection, i.e. the mere discovery that something is there. The second step is recognition: through its shape, size and other visible properties the interpreter recognizes a familiar object. Finally there is the step of identification, in which he/she identifies the object or feature as something known by a specific name or term. Recognition and identification can be aided by the provision of photo keys, which label objects for easy recognition on aerial photographs. Identification too is aided by some measure of deduction.

b. Analysis – Procedure:
1. Objects and features to be analysed must be chosen.
2. A legend to aid the survey must be drawn up.
3. Boundaries must be drawn on the aerial photographs according to the legend.
Analysis must be systematic; all the photographs concerned must be systematically scanned.

c. Classification – When units are to be compared on the basis of varying physical and cultural characteristics as identified on the photographs, the photo interpreter will define units. A comparison based on the defined characteristics of the units resulting from the analysis constitutes the third phase of photo-interpretation. Classification may yield all the information needed; however, when the objects under study are not clearly visible on the photograph, fieldwork or other investigations will be needed.

d. Deduction – This is sometimes considered the fourth phase of photo-interpretation and is defined as the phase dealing with the combination of photographic observations and knowledge drawn from other sources in order to acquire information that cannot be obtained from the photo image alone. The term deduction is also used when the interpreter arrives at a conclusion on the basis of a number of photographic observations.

The interpretation of aerial photographs provides valuable geologic data to field geologists. The field geologist mapping with the aid of aerial photographs practises photo-interpretation to some extent, either consciously or subconsciously.

Photo-interpretation involves the observation of certain tones, shapes and other characteristics of photographic images, and then determining their geologic significance by deductive or inductive reasoning, or a combination thereof. Obtaining maximum

information usually requires careful study. Some study illustrations are provided as an appendix.

Basic considerations of photo interpretation in photographic images.

The basic observational criteria used for geologic interpretation of aerial photographs
include the following: photographic tone or shades of grey, texture, pattern, shape and
size. Other observational elements are usually combinations of the above criteria.

1. Tone – Photographic tone is a measure of the relative amount of light reflected
   by an object and actually recorded on the black-and-white photograph. This
   usually appears as shades of grey and rarely as pure black or white. Photographic
   tone is subject to considerable variation depending on the angle of the sun,
   time of day, type of film and filter used, the printing process, and other
   factors; this implies that the same rock or feature photographed at different
   times will not necessarily have similar tones on each of the photos taken. Thus
   the significance of tone derives from the contrasts that can be observed. The
   human eye has a delicate sensitivity to tonal contrasts and is able to observe
   subtle differences in the terrain which are geologically significant.

2. Texture – This is the aggregate arrangement of minute images as expressed by
   tone, shape, size and pattern. It is applied to the general appearance of an
   area within the photograph; photographic texture is therefore a comparative
   feature within any one scale of photography. Texture is also used to describe
   the density of a drainage network, or the relative spacing of streams, and this
   may be called drainage texture: wide spacing of streams results in a 'coarse'
   drainage texture and close spacing gives a fine drainage texture. In addition,
   texture can be used to describe the degree of dissection of the terrain, termed
   'erosional texture'. The differences in erosional resistance of various rocks,
   drainage texture and erosional texture may permit broad delineation of rock
   units on aerial photographs.

3. Patterns – This refers to the orderly spatial arrangement of features on the
   terrain. These may be arrangements of vegetation, photographic tones, lines,
   topographic features, streams and other features. Patterns resulting from
   particular distributions of gently curved or straight lines commonly are of
   structural significance; they may represent faults, fault zones, joints, dikes or
   bedding. A single line is also an illustration of pattern: it may be an orderly
   arrangement of stream segments, trees, depressions or other features; it may
   also be a continuous alignment of geologic, topographic or vegetation features,
   but more commonly it is a discontinuous alignment. Drainage patterns also
   reflect underlying geologic structure and are likewise important. Patterns of
   vegetation may also be conspicuous on many aerial photographs. It has been
   suggested that narrow linear or parallel alignments of vegetation may represent
   fractures, whereas wide curved alignments may signify the distribution of
   outcrops of gently, moderately or even steeply dipping beds. Where curved
   alignments form closed loops they generally represent horizontal or nearly
   flat-lying beds. Where certain vegetation grows preferentially on one formation
   it may provide some clue concerning the lithology of that formation.

4. Shape – This concerns the general form or configuration of terrain features. It is
   an important element in geologic interpretation of aerial photographs,
   especially as it relates to relief or topographic expression. The most obvious
   use of shape is in recognising constructional landforms such as moraines, sand
   dunes, river terraces and lava flows, most of which have rather diagnostic shape
   characteristics.

5. Size – This is a measure of the surface or volume dimensions of an object. Size
   is a quantitative measure that includes the amount of dip of beds and the
   amount of displacement on faults, as well as intervals such as stratigraphic
   thickness. It is of real significance in applying aerial photo interpretation
   techniques to geologic mapping and interpretation.

Vertical Exaggeration

The exaggeration of vertical distances with respect to horizontal distances is
characteristic of almost all stereoscopic models. Commonly this exaggeration helps the
geologist to delineate and interpret certain details of the terrain, especially where small
but significant differences in height are important. For example, beds dipping at an angle
of only 2 to 3 degrees may appear conspicuous in the stereoscopic model, because
vertical exaggeration makes them appear to dip at angles as great as 10 degrees.
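
This effect can be sketched numerically. Assuming a typical stereomodel vertical exaggeration (VE) of about 3 to 4 (this factor is not stated in the handout), the apparent dip follows tan(apparent dip) = VE x tan(true dip):

```python
import math

def apparent_dip(true_dip_deg, ve):
    """Dip as it appears in a stereomodel with vertical exaggeration `ve`.

    Uses tan(apparent) = ve * tan(true). A VE of roughly 3-4 is an assumed
    typical value for aerial photo stereomodels, not a figure from the handout.
    """
    return math.degrees(math.atan(ve * math.tan(math.radians(true_dip_deg))))

print(round(apparent_dip(2.5, 4.0), 1))  # a 2.5 degree bed appears to dip ~9.9 degrees
```

With VE = 4, a bed dipping 2.5 degrees appears to dip almost 10 degrees, consistent with the observation above.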

Scale

The scale of photographs obviously bears on the ability to see details of terrain, and as
scale decreases less detail can be discend. It is almost unfortunate that small-scale
(1:50000 or smaller) and large- scale (1:20000 or larger) photographs both have useful
(and commonly different) applications in geologic mapping. it is important to remember
that only one stereoscopic pair 1:60000- scale photographs, for example, gives the same
ground coverage as nearly a dozen 1:20000- scale photographs of the same area.
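
The coverage figure can be checked. Assuming the standard 23 cm square aerial film format (the handout does not state a format), the ground area of one photo scales with the square of the scale number:

```python
def ground_area_km2(scale_number, film_side_m=0.23):
    """Ground area covered by one photo, assuming a 23 cm square format."""
    side_km = film_side_m * scale_number / 1000.0  # ground side length in km
    return side_km ** 2

a60 = ground_area_km2(60000)   # 13.8 km x 13.8 km = ~190 km^2
a20 = ground_area_km2(20000)   # 4.6 km x 4.6 km = ~21 km^2
print(a60 / a20)               # 9.0
```

The raw ratio is (60000/20000)^2 = 9; allowing for the forward overlap and sidelap needed for stereo coverage, this becomes the "nearly a dozen" quoted above.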

GEOLOGIC INFORMATION FROM AERIAL PHOTOGRAPHS

In general, aerial photographs give geologic information in two broad areas, i.e. structural
and lithological.
Photographs of sedimentary terrains yield a greater amount of structural and lithologic
information than those covering areas of igneous and metamorphic rocks, because the
non-homogeneous nature of sedimentary terrain results in marked differential erosion
characteristics that stand out on aerial photographs. These erosional contrasts are
particularly conspicuous because they occur within relatively short outcrop distances.

On the other hand, diagnostic landforms may be locally important in studies of igneous
terrains, especially where extrusive rocks prevail. Photographs of metamorphic terrains,
however, may reveal little geologic information because of the very nature of the
metamorphic processes, which tend to destroy the characteristics of sedimentary and
igneous rocks.

Regardless of rock type, the climatic environment also affects the relative amount of
geologic information that can be interpreted from aerial photographs. In general, more
data can be observed in arid and semiarid regions, where more rock surface is
exposed to view, than in heavily vegetated humid areas. In addition, the stage of erosional
development of the terrain affects what can be interpreted from aerial photographs. The
amount of information, particularly structural, generally will be greatest for a particular
type of geologic terrain and climatic environment during the so-called mature stage of
erosional development. At this stage streams show their greatest adjustment to the
underlying structure, and a greater third dimension of the terrain is visible for study
in the stereoscopic model.

Structural information

Flat-lying beds – Flat-lying or nearly horizontal beds are readily recognised as such where
different sedimentary rocks exhibit contrasting photographic tones expressed as irregular
bands extending along the topographic contour. Tonal contrast is especially well shown
where vegetation is sparse, as in many arid and semiarid regions, especially desert
areas such as the northern parts of Ghana, Libya and the western United States.
For flat-lying strata the topographic break extends along the contour. In areas of heavy
vegetation the trace of the slope break may be the principal indication of flat-lying beds.

Other indications of flat-lying beds include closed-loop patterns that reflect different
beds, or vegetation growing preferentially on one or more beds. If drainage lines are well
developed, they exhibit dendritic patterns on horizontal strata.

Dipping beds – Numerous expressions of the dip of sedimentary beds may be seen on aerial
photographs. Dip direction generally is conspicuous where topographic surfaces coincide
with bedding surfaces. Where bedding is expressed by bands of differing photographic
tone or by topographic breaks in slope due to resistant units, the direction of dip may be
apparent where a stream cuts across the strata and the bedding trace forms a V-shape in
plan view. If beds are obscured by vegetation or surface materials, the direction of dip
sometimes can be deduced from the drainage characteristics of the area. Major streams
commonly flow parallel to the stratified rocks. Dipping beds on the nose of a fold may be
reflected in major streams that curve around the nose; the convex side of the curve
indicates the direction of plunge of anticlinal folds.

Faults: High-angle faults commonly stand out on aerial photographs. This is a direct
result of the aerial view, which allows a large area and the gross features within it to be
seen at one time. For example, many alignments that are inconspicuous on the ground are
clearly seen on aerial photographs. Most high-angle faults are expressed on photographs as
straight or gently curving lines, and features that are not obviously man-made should be
carefully noted on the photographs for later field examination.

Lithologic Information

The appearance on aerial photographs of a particular rock type may vary considerably,
depending especially on the climate and the amount of relief in the areas where the rock
occurs. Hence, it is usually not possible to establish a set of criteria for the recognition of
rock types that will be applicable to all areas. For some heavily vegetated areas, it may be a
real accomplishment merely to distinguish on photographs rocks of igneous,
metamorphic or sedimentary categories from one another. In a broad way, however,
certain lithologic information can be obtained from photographs.

Sedimentary rocks (consolidated) – The observation of bedding is fundamental to the
interpretation of sedimentary rocks, as bedding is the most readily observable
characteristic in aerial view. Because of differential resistance to erosion, sedimentary
beds develop a typical banded pattern that generally can be seen on aerial photographs.
Moreover, the colour of rocks may reflect certain lithologic characteristics; thus, as seen on
photographs, shales and other fine-grained sedimentary rocks tend to have relatively dark
photographic tones. They also tend to show fine-textured drainage and relatively closely
and regularly spaced joints. Coarse-grained clastic rocks, in contrast, tend to yield
relatively light photographic tones and to be associated with coarse-textured drainage and
relatively widely and regularly spaced joints. In general terms, drainage density increases
with decreasing permeability.

In the absence of information that might reveal the lithologic character of rocks in an area, it
is always desirable to delineate areas of difference on the photographs and to thoroughly
investigate these differences in a ground study.

Sedimentary rocks (unconsolidated, mostly superficial deposits) – Unconsolidated
sedimentary rocks are readily distinguished on aerial photographs from consolidated
rocks, although thin veneers of superficial material may be difficult to identify. The most
significant criterion for identifying and interpreting superficial deposits from aerial
photographs is probably landform, which is significant because many superficial deposits
are transported materials that have diagnostic topographic form. Sand dunes, river terraces,
alluvial fans, end moraines and eskers are examples of well-known features that show
characteristic form.

Igneous rocks – Intrusive igneous rocks, particularly those forming stocks, cupolas and
batholiths, commonly reveal a crisscross pattern of joints that is rather diagnostic of
igneous terrain. This pattern often can be seen not only in areas of good exposures but
also in many vegetation-covered areas, and it is perhaps the most suggestive criterion of
igneous rocks as seen on aerial photographs.

Metamorphic rocks – Metamorphic rocks are difficult to interpret from aerial photographs
because large-scale distinguishing characteristics are generally lacking. The best clue that
rocks are metamorphic is probably the conspicuous parallel alignment of minor ridges
and intervening low areas that may reflect regional cleavage or foliation. Not all
metamorphic rocks develop conspicuous cleavage or foliation, but where such structures
are present the ridges and low areas comprise a "topographic grain" that is much finer
than that found in sedimentary areas. Although the criteria for interpreting metamorphic rocks
from aerial photographs seem meagre, it should be noted that few investigations have been
made with the specific objective of interpreting metamorphic terrains.

END OF PHOTOGEOLOGY LECTURES


REMOTE SENSING

1.0 Introduction to Remote Sensing

Remote Sensing (RS) is the study of objects from a distance, without coming into contact
with them. Early photographers such as Nadar in 1858 realised that they needed to get
up higher to see a wider view of the Earth. Cameras were therefore sent up in gas balloons
and later in aeroplanes, and thus Earth remote sensing, or Earth observation, was born.

This term applies to the acquisition of information, usually in image form, about the
surface of landmasses, the oceans and the atmosphere above them, by airborne or spaceborne
sensors. These sensors can be active or passive and receive reflected or emitted
electromagnetic radiation.

RS includes airborne and spaceborne radar and lidar systems. There is no clear distinction
between RS and airborne geophysics, since both passive geophysical techniques
(gamma-ray spectrometry, magnetometry) and active techniques (airborne
electromagnetics) also belong to various aspects of RS.

In the last 200 years RS has progressed from gas balloons used to take photographs to
highly sophisticated sensors acquiring imagery of various parts of the world.

Table 1: Status of world topographic mapping in 1993

Area                  % mapped     % mapped     % mapped      % mapped
                      at 1:25000   at 1:50000   at 1:100000   at 1:250000
Africa                2.9          41.1         21.7          89.1
Asia                  15.2         84.0         66.4          100.0
Australia & Oceania   18.3         24.3         54.4          100.0
Europe                86.9         96.2         87.5          90.9
North America         45.1         77.7         37.3          99.2
South America         7.0          33.0         57.9          84.4
Former USSR           100.0        100.0        100.0         100.0
WORLD 1993            33.5         65.5         55.7          95.1
WORLD 1987            17.0         59.0         56.0          90.0

Check the following website for some interesting basic material about RS:
http://observe.arc.nasa.gov/nasa/education/gis/opening.html


1.1 Relationship between aerial photographs and satellite imagery.

Aerial photographs are obtained from photographic films, which use radiation from the
visible part of the spectrum; satellite imagery records reflectance or radiation from
both the visible and invisible parts of the spectrum, using electronic detectors.

Aerial photographs do not have the same geometric properties as maps because of TILT
and RELIEF DISTORTIONS.

The effect of TILT distortion can be removed by re-projecting the photograph or by
carrying out a software correction to a data file derived from scanning the aerial
photograph.
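
As a minimal sketch of what such a software correction does, a tilt can be modelled as a 3x3 projective transform applied to each scanned pixel; the matrix values below are purely illustrative, not from any real calibration:

```python
def apply_homography(h, x, y):
    """Map a scanned-photo pixel (x, y) through a 3x3 projective transform
    h (given row-major as a flat list), the kind of correction used to
    re-project a tilted photograph."""
    xs = h[0] * x + h[1] * y + h[2]
    ys = h[3] * x + h[4] * y + h[5]
    w = h[6] * x + h[7] * y + h[8]
    return xs / w, ys / w  # divide by w: the projective part of the mapping

# Identity plus a small assumed tilt term: points further up the photo are
# displaced more, as happens when the camera axis is not exactly vertical.
H = [1, 0, 0,
     0, 1, 0,
     0, 1e-4, 1]
print(apply_homography(H, 100, 100))  # (99.0099..., 99.0099...)
```

In practice the nine coefficients would be solved from ground control points; the function above only shows the mapping itself.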

The effect of RELIEF distortion cannot be removed, because it varies all over the
photograph. However, with two aerial photographs of the same piece of terrain taken
from two different aircraft locations, features can be matched in BOTH photographs. If
this is done, the same feature in the two photographs will have a different height
displacement in each, and this difference gives the height or elevation of the feature.
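
This is the standard parallax-height relation, h = H * dp / (b + dp), where H is the flying height above the datum, b the absolute parallax at the datum (photo base) and dp the measured parallax difference. The sketch below uses illustrative numbers, not values from the handout:

```python
def height_from_parallax(flying_height_m, base_parallax_mm, dp_mm):
    """Object height from a parallax difference measured on a stereo pair.

    h = H * dp / (b + dp). The sample values used below are illustrative
    assumptions for demonstration only.
    """
    return flying_height_m * dp_mm / (base_parallax_mm + dp_mm)

# e.g. flying height H = 3000 m, photo base b = 90 mm, measured dp = 1.5 mm
print(round(height_from_parallax(3000, 90.0, 1.5), 1))  # ~49.2 m
```

A parallax difference of only 1.5 mm thus corresponds to a feature roughly 49 m high under these assumed conditions.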

Satellite imagery interpretation

Every image has geometric and spectral characteristics. Consider a square house roofed
with orange tiles and red tiles: you may think of an imaging system which records the
house as a perfect square (a system with high geometric fidelity) or one that records the
house as an irregular quadrilateral (a system with low geometric fidelity).

Also consider an imaging system which records the two distinct orange and red tones
(high spectral fidelity) or one that records only one tone (lower spectral fidelity).

Consider the illustrations below and comment.

[Figure: original square house with red tiles and orange tiles, as recorded by systems of
differing geometric and spectral fidelity]


Interpret the following:

If you were able to interpret the above, then we are getting somewhere!

Furthermore, the imaging systems are electronic, not photographic; thus they can
detect reflectances from the Earth's surface which are not detectable by film. For example,
imaging systems may detect variations in the infrared radiation reflected by trees in a
park; panchromatic film would not detect this.

Thus the student should be aware that, despite superficial similarities, satellite images are
both GEOMETRICALLY AND SPECTRALLY different from aerial photographs.


By correcting the geometric distortions known to exist in an aerial photograph we can get
an ORTHOPHOTO which can be digitized or otherwise handled like a paper map.

Likewise you will understand that by correcting the different geometric distortions and
other corrections known to exist in satellite images we can get an ORTHOIMAGE which
can also be digitized like a paper map.

Satellite images have different geometric distortions from aerial photographs, and these
must be corrected for in a different way (less tilt and height distortion, more curvature
distortion). Satellite images also have different spectral characteristics, due to the Earth's
atmosphere and the fact that recording is electronic, thus resulting in different information
being available in orthoimages from that in orthophotos.

Students should identify the basic differences between an aerial photo and a satellite
image.


1.2 The Electromagnetic (em) Spectrum

Electromagnetic radiation includes a very wide range of energy, from X-rays through
visible light to radio waves. All of this energy can be transmitted through a vacuum and
requires no medium for its transmission. Only a small portion of the spectrum is actually
used for Remote Sensing, even though the various systems in use span almost the whole
of the spectrum, as shown below:

1. Ultraviolet – This is of great interest to geologists, since many minerals show
   characteristic fluorescence at these wavelengths. This feature is used for mineral
   identification and sometimes in ground prospecting; it has been used to detect
   scheelite associated with gold mineralisation and for monitoring oil slicks. The
   fluorescence of oil films on the surface of the sea can assist in the identification of
   the type and source of the oil, so it can also be used to detect oil spills in
   environmental applications.
2. Visible wavelengths – These are not the most useful for geological purposes
   because rocks, minerals and soils do not show distinctive spectral differences in
   the visible portion. This region was used essentially for water quality, pollution
   and coastal bathymetry studies. Two of the four Landsat MSS bands and two of
   the three SPOT multispectral bands used visible light.
3. Near Infrared – Imagery is usually sharp with good contrast, and this was of great
   value for topographic mapping purposes. The geological information content of this
   waveband, in terms of lithological or mineralogical discrimination, was generally
   low.
4. Mid Infrared – This is the most important region for geological and vegetation
   studies. The inclusion of band 7 on the TM proved to be of great value in
   lithological mapping and in the discrimination of clay-rich alteration zones
   associated with mineral deposits. The Australian Geoscan scanner was used to
   distinguish different mineral species and proved of great value in mineral
   exploration.
5. Thermal Infrared – Geological interest depends more on differences in thermal
properties between rock and soil units.
6. Microwave – The main use of microwave RS in the minerals industry is likely to
be as an aid to structural interpretation. The microwave portion of the spectrum
can be divided into a series of bands as shown in the figure below.

In general, the short wavelengths are absorbed by natural materials, especially water,
while longer wavelengths penetrate further into soils and overburden, especially when
these are dry. The amount of energy scattered back to the sensor depends on:

1. The dielectric constant of the material;
2. The surface roughness as a function of the wavelength being used;
3. The relative inclination of the surface with respect to the radar beam.

The dielectric constant is a function of lithology and also of moisture content.


The main use of microwave remote sensing in the minerals industry is likely to be as an
aid to structural interpretation. In many parts of the globe, especially the equatorial rain
forest regions, cloud cover prevents the acquisition of imagery at optical wavelengths;
there, microwave remote sensing is the only source of imagery and must be used, as a
sensor of last resort.


1.3 RS systems are of two types:

1. Direct System – the incoming radiation hits a surface where the image will be
formed (i.e. a film).
2. Indirect System – incoming radiation or energy is measured and recorded in a
non-photographic manner by a sensor.

DIRECT SYSTEM

Advantages:
1. Camera is very cheap.
2. Operation is simple.
3. Processing and printing are easily carried out.

Disadvantages:
1. Limited to a small part of the em spectrum.
2. Passive system.
3. Weather dependent.

INDIRECT SYSTEM

Advantages:
1. Operates over a wider range of wavelengths, including microwave and IR.
2. Most systems are weather immune.
3. Data are collected and stored in computer-compatible form.

Disadvantages:
1. High cost.
2. Variable geometry.
3. Resolution (early pixel sizes were 80 m; now products with 80 cm pixels exist!).

The sensors used in these systems are either passive or active.

Table 2: Sensors employed in RS systems

PASSIVE SENSORS – simply record reflected or emitted electromagnetic radiation;
these include photographic (analogue) systems and scanner (digital) systems.

ACTIVE SENSORS – produce their own illumination and record its effects; these
include imaging radar and laser (lidar) systems.

The scanners are also in two main groups.


1. Optomechanical scanners – These use a rapidly moving mirror to scan the surface
   below in a predetermined fashion, e.g. the Landsat Multispectral Scanner launched
   in July 1972 (the Thematic Mapper followed on Landsat 4 in 1982).
2. Pushbroom scanners – These have no moving parts, but record radiation from
   the surface below by means of arrays of sensitive semiconductors (charge-coupled
   devices, CCDs), e.g. the SPOT satellite launched in 1986.

An ideal RS System
This should consist of the following:

SOURCE → TARGET → DETECTOR

 Source – systems can be active or passive
 Energy path to Target – attenuation
 Scatter signal – when the energy hits the target there is a scatter signal resulting in
severe energy loss; the reflected energy then goes back to the detector with the
energy received severely diminished
 Target
 Emitted energy
 Energy path to Detector
 Detectors

1. Source – Systems can be active or passive.
Active systems use their own illumination; passive systems depend on other, natural
sources for illumination.


2. Energy path to Target, i.e. attenuation

As electromagnetic radiation propagates through the atmosphere, it interacts with
the various layers it encounters. This interaction may, as we have seen, take the form of
scattering, where energy is not lost but merely redirected, or of absorption, where energy
is absorbed by the layers or species and is lost. The combined effect of scattering
and absorption is called attenuation.

3. Scatter signal – when the energy hits the target there is scattering, resulting in severe
energy loss; the reflected energy then goes back to the detector with the signal
appreciably diminished.

4. em spectrum – In the visible region nearly 100% of the energy gets to the target, i.e.
is transmitted; in other regions of the em spectrum limited energy is transmitted owing to
the presence of atmospheric gases and layers, i.e. water vapour, carbon dioxide and the
ozone layer. Indirect systems are capable of recording a greater part of the em spectrum,
for example the Landsat MSS.

5. Detectors – these have to be kept cool.

6. Target – different targets reflect or emit energy differently.

What do Remote Sensing devices record and why?

I will illustrate this with a simple example, hoping not to confuse the student!

Our skin is a natural infrared (heat) sensing device. You should expect the
underside of your arm to feel warmer over vegetation than over bare soil, and also
warmer over soil than over water!

HOT: green vegetation        WARM: dry bare soil        COOL: water body

Thus remote sensing devices record reflected sunlight in different bands.

For example a very well known Satellite Remote Sensing system called THEMATIC
MAPPER, records reflected sunlight in the following bands:

1. 0.45-0.52 micrometers = used for e.g. coastal water mapping, soil or vegetation
   differentiation, deciduous and coniferous differentiation
2. 0.52-0.60 micrometers = used for e.g. monitoring healthy vegetation
3. 0.63-0.69 micrometers = used for e.g. plant species differentiation
4. 0.76-0.90 micrometers = used for e.g. delineating water bodies
5. 1.55-1.75 micrometers = used for e.g. snow/cloud delineation
6. 10.4-12.5 micrometers = used for e.g. stressed plant detection
7. 2.08-2.35 micrometers = used for e.g. hydrothermal mapping

The above are also some of the application areas of Remote Sensing; a lot more of this
will be studied during the main course.

The SPOT satellite also records sunlight in the following bands:

i. 0.50 - 0.59 micrometers
ii. 0.61 - 0.68 micrometers
iii. 0.79 - 0.89 micrometers

Thus by applying remote sensing principles we may with an appropriate mix of data from
different bands detect e.g. sick conifers in mixed woodland or areas of new flooding.
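
As a small worked example, the TM band table above can be turned into a lookup. The band limits are those listed in the handout; the helper function itself is only an illustrative sketch:

```python
# TM band limits in micrometers, as listed above (band 6 is the thermal band)
TM_BANDS = {
    1: (0.45, 0.52), 2: (0.52, 0.60), 3: (0.63, 0.69),
    4: (0.76, 0.90), 5: (1.55, 1.75), 6: (10.4, 12.5),
    7: (2.08, 2.35),
}

def bands_covering(wavelength_um):
    """Return the TM band numbers whose range contains a given wavelength."""
    return [b for b, (lo, hi) in TM_BANDS.items() if lo <= wavelength_um <= hi]

print(bands_covering(0.65))   # [3] - red light, plant species differentiation
print(bands_covering(11.0))   # [6] - thermal, stressed plant detection
print(bands_covering(1.0))    # [] - falls in an atmospheric gap between bands
```

Note that some wavelengths fall in none of the bands; the sensor bands are chosen to sit inside atmospheric windows.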


How will these devices record?

A Remote Sensing system will have a detector, or a set of detectors, for each band in
which there is some interest.

The detector is electronic – not film! A great variety of detectors are found, but, stated
simply, a detector is coated with a material (such as lead sulphide, indium antimonide,
cadmium compounds, etc.) which, when irradiated by specific wavelengths, will generate
an electric current. Indium antimonide (InSb) is sensitive in the range 1.55-1.75
micrometers – a wavelength band in which snow is highly reflective – thus an InSb
detector held over snow would be expected to generate a considerable electric current,
but over green vegetation hardly any at all. So, depending on where the detector is
directed, we can tell what is there; also, by recording the electric current generated in a
data file, we have a permanent record of what was there – or at least of where the
detector was pointing.

The detectors are carried on board satellites. Each detector's record is called a pixel and
will have a value related to the amount of electric current generated at each recording
position. Satellites follow a strictly predefined orbital path, so the actual position of a
satellite at any time of the day or night is known.

Within the satellites themselves the detectors too have strictly predefined pointing
directions (achieved in a great variety of ways, depending on the class of remote sensing
imaging system – such as regularly scanning in a certain way, or always being fixed in a
particular direction). More of this later!
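
The "value related to the amount of electric current" is normally stored as a quantised digital number (DN). The sketch below assumes 8-bit quantisation between illustrative minimum and maximum radiances; none of these numbers come from the handout:

```python
def radiance_to_dn(radiance, l_min=0.0, l_max=1.0, bits=8):
    """Quantise a detector radiance into an integer digital number (DN)."""
    levels = 2 ** bits - 1                       # 255 levels for 8-bit data
    frac = (radiance - l_min) / (l_max - l_min)  # position within sensor range
    frac = min(1.0, max(0.0, frac))              # clip saturated / dark pixels
    return round(frac * levels)

print(radiance_to_dn(0.2))   # 51
print(radiance_to_dn(1.5))   # 255 (saturated pixel)
```

Each pixel in a band's data file is one such DN, which is why the reflectivity record is both compact and permanent.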

Summary of the distinguishing characteristics of Remote Sensing Images, Aerial
Photographs and Maps:

 Remote Sensing images are produced by electronically recording the reflectivity
of the Sun's radiation at wavelengths within, and well outside, those parts of the
Electromagnetic Spectrum (em spectrum) to which our eyes are sensitive.
Photographic film records only that part of the em spectrum within or very close
to the part to which our eyes are sensitive.
 Remote Sensing images are recorded as data files, one data file per band. Each
data file will show, pixel by pixel, the reflectivity in a band. The reflectivity
information is thus much more detailed than with photographic film which
records over a single very broad band.
 Because of all the bands we can make visible images of what was 'invisible' –
i.e. outside the visible part of the spectrum – e.g. thermal images and false
colour images.


 Geometrically, aerial photographs are much affected by tilt, but Remote
Sensing images are not at all. Both aerial photographs and Remote Sensing images
are affected by relief distortion, but the seriousness of this distortion is much
reduced for satellite Remote Sensing images. Earth curvature introduces
considerable distortion into satellite Remote Sensing images and very little into
aerial photography.
 Maps are usually produced from vector data sets and represent selected
information; names, grid, graticule etc. are added to maps. Images have much
more complex textural information than maps.

1.4 Satellite Orbits

There are three main classes of orbits into which satellites can be placed:

1. Equatorial orbits
The satellite orbits the earth in or near the plane of the equator. The only
equatorial orbit currently used for remote sensing satellites is the geostationary
orbit: a satellite in this orbit appears to remain fixed over the same point on
the equator, hence the term geostationary. These orbits are mainly used for
meteorological remote sensing, where great surface detail is not required but very
frequent images are essential.

2. Polar orbits
The satellite orbits around or near the North and South poles. Most remote sensing
satellites are in polar orbit. The reasons for this are:
 The earth rotates beneath the ground track as the satellite travels from pole
to pole. The satellite orbit is essentially fixed in space relative to the centre
of the earth, but the earth itself rotates in a plane approximately at right
angles to the plane of the orbit. This permits a single satellite to eventually
cover the entire surface of the earth in successive orbits.
 Because the orbit is slightly offset from the poles and oblique to lines of
longitude, the local sun time of each point along the orbital track will be the
same. Most earth observation satellites are in such sun-synchronous orbits and
acquire images between 9.30 am and 10.30 am, because this is the period of
least cloud cover. It also provides oblique illumination, which highlights
relief on satellite imagery.
3. Molniya orbits
These are usually, but not always, near-polar. In this case the satellite travels far
out into space on one side of the earth and then passes very close to the opposite
side. They are not used for remote sensing, but only for communication purposes
at extreme northern latitudes.

The orbital period is governed by the height of the orbit. Most remote sensing satellites
are at altitudes between 700 and 1000 km, giving orbital periods between about 98 and
103 minutes. Some are placed in lower orbits, around 300 km; in particular, the American
Space Shuttle was used at such altitudes to acquire more detailed imagery. Some
extremely low orbits were also used for specialized military remote sensing satellites;
however, the effects of atmospheric drag at these levels become serious.
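
The period-height relation is Kepler's third law, T = 2π√(a³/GM), with a the orbital radius (Earth radius plus altitude). A quick check against typical altitudes (the Earth constants and the 705 km and 832 km altitudes are standard reference values, not figures from the handout):

```python
import math

GM_EARTH = 3.986004e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6        # m, mean Earth radius

def orbital_period_min(altitude_km):
    """Circular-orbit period in minutes, from T = 2*pi*sqrt(a^3 / GM)."""
    a = R_EARTH + altitude_km * 1e3  # orbital radius in metres
    return 2 * math.pi * math.sqrt(a ** 3 / GM_EARTH) / 60.0

print(round(orbital_period_min(705), 1))  # Landsat-class altitude: ~98.7 min
print(round(orbital_period_min(832), 1))  # SPOT-class altitude: ~101.4 min
```

Both values fall inside the 98-103 minute range quoted for 700-1000 km satellites.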
Most civilian remote sensing satellites orbit the earth at approximately 700-1000 km
above the Earth’s surface. There are several consequences of this:

i. Above the Earth’s atmosphere there is no air turbulence, so there is
little tilt distortion;
ii. The whole of the Earth’s atmosphere is between the imager and the
terrain, so there may be considerable atmospheric degradation;
iii. The great height of the imager means that the image scale will be very
small, or at least that the amount of detail shown on the imagery will be
much less than that of an aerial photograph;
iv. The great height of the imager means that height distortion will be
reduced, or at least much less (relatively) compared with aerial
photographs;
v. The great height of the imager means that the Earth’s curvature
becomes something which needs to be considered; this is not often the
case with aerial photographs.


Typical examples of some satellites and their characteristics are as shown below.

1.4 Data reception and Distribution

Satellites such as the Indian IRS-1 and Japanese MOS-1 transmit data at rates of about
25 Mbit/s, and this order of data transfer requires specialized equipment. Antennae must
be large to increase the signal-to-noise ratio and thus minimize bit error rates, steering
and programming of the antenna must be very precise, and high-density data recorders
must be used. Data rates increase still further with finer spatial resolution and more
spectral bands: SPOT and Landsat TM have data rates of 50 and 85 Mbit/s
respectively. This tends to limit reception to national and regional facilities, and this mode


of reception is actively encouraged by the satellite operators, who depend partly on
data sales and who maintain some degree of control over the reception
and distribution of their data.

Previously, satellite data operators archived the data on magnetic tape. If the archive is
connected to a receiving station, the raw imagery is usually stored in the high-density
form (HDDTs) in which it is received.
These HDDTs are then converted to standard Computer Compatible Tapes (CCTs) at
customer request. Some processing of the raw data is usually carried out during
conversion from HDDT to CCT.

The digital values are rescaled using standard gain and offset values to occupy a full eight
bits per band, and a crude geometric correction is usually carried out to compensate for
earth rotation beneath the satellite during imaging.

Magnetic tapes are not necessarily the ideal medium for the storage or distribution of
image data. The supply of satellite imagery on floppy disk is not a practical proposition
except for educational purposes; a CD-ROM is more likely to be used for the storage of
remote sensing imagery. A single 7-band Landsat TM scene occupies about 240 megabytes.

2.0 Remote Sensing Systems

2.1 Commercial Systems


The Landsat System (Whiskbroom)

1. The Return Beam Vidicon (RBV) camera.


The RBV camera is an indirect (framing) system with a low-distortion lens. Images are
formed as charge patterns on a photoconductive target, which an electron gun then
systematically scans to read out the imagery. Its first use was for earth resources
mapping: on Landsat 1 and 2, three RBV cameras were bolted together, each
provided with a filter to obtain an image from one part of the EM spectrum.
Objects smaller than about 200 m could not be resolved; when this was realised,
Landsat 3 was launched with two RBV cameras operated in a different manner,
as a single-channel system.


[Figure: Schematic of the RBV camera, showing the glass tube, cathode, deflection coil,
wall anode, focus coil, mesh electrode, photoconductive target and lens.]

This produced good imagery but poor geometry. It was thought to solve problems in
mapping, as it implied that 1:1 000 000 scale maps could be obtained.
RBV Landsat 3

[Figure: Landsat 3 RBV coverage, showing 185 km swaths with 14% overlap.]

This had too many geometric problems; to address them, reseau marks (crosses) were
deposited on the face plate of the camera so that their exact positions could be measured.
The geometric distortions could then be corrected using a fairly simple linear
relationship. The RBV had great potential, but unfortunately it is no longer used and is
now mainly of historical interest.


2. The optical/ mechanical scanner


This is the multispectral scanner (MSS) carried on the Landsat satellites. Scanning is
achieved in the west-east direction, and the image is built up by the motion of the
satellite.
[Figure: scan mirror, stationary mirrors, shutter, fibre-optic bundles collecting radiation
at the focus, and six detectors for each of the four bands; radiation reflected from the
earth enters via the scan mirror.]

Figure 2: Schematic diagram of Landsat MSS, based upon a NASA diagram

Operation
The MSS performs an active scan from west to east in a near-polar orbit and records four
parts of the spectrum, using an oscillating mirror to gather data from the scanned line.
Six lines are scanned at a time; four bands for six lines implies 24 detectors working at
the same time. Radiation from the earth is reflected by the scan mirror and focused
through the fibre optics onto the detectors for the four spectral bands.

The MSS system makes use of channels 4, 5, 6 and 7.


Landsat - MSS

Properties:

Focal length = 0.82 m
Altitude = 920 km
IFOV = 0.086 mrad, i.e. 79 m * 79 m on the ground
Velocity = 6.456 km/s
Pixel size = 56 m * 79 m
Swath width = 185 km
Scene = 185 km * 185 km
= 3240 pixels * 2340 scanned lines
= 7.58 million pixels per band, or about 30 million pixels for all four channels, collected
in 29 seconds. Landsat collects data at about 1 million bits per second.

Mirror Oscillation

Frequency 13.62 Hz
Complete scan cycle 73.42 milliseconds
Mirror scan and retrace periods 36.71 msec each
Active scan period 33.00 msec
Data from a swath of 185 km are collected in each 33 msec active scan.
It is not possible to produce a scan mirror whose oscillation is perfectly uniform, so the
mirror motion is calibrated and the calibration is applied during production of the image.
(Note that the orbit itself is near-polar rather than exactly polar.)

Filters are placed in front of the detectors; the detectors measure the brightness
(radiance or reflectance), and known sources are used to calibrate each detector. The
detector voltage is sampled at regular intervals. The instantaneous field of view is
79 m * 79 m, but because the mirror is moving the signal is sampled every 56 m along
the scan, so successive samples partly overlap, giving a pixel size of 56 m * 79 m.

[Figure: a 79 m IFOV sampled at 56 m intervals along the scan line.]

In Landsat the sampling produces a rectangular pixel of 56 m * 79 m because the
condition of no gaps in coverage has to be achieved.
Data are received from the sensors at ground stations positioned at various points on the
earth. These ground stations are handled by the Landsat programme, and data are bought
from its distributors. Each detector of the MSS integrates the radiance along the scan to
produce the brightness value of the pixel.

50 50 74 80
50 80 80 70
80 80 81 60      BV (road) = 20
79 82 60 60      BV (field) = 80

The brightness of a feature affects the values recorded for features close to it, depending
on the contrast with the background: the intermediate values above are mixed pixels in
which the dark road (BV 20) is blended with the bright field (BV 80). This leads to the
problem of resolution.

The Landsat Thematic Mapper (TM)

A more recent Landsat sensor is the Thematic Mapper (TM).

Major differences from the MSS:
1. The mirror scans in both directions
2. Each scan is active


3. Detectors in the focal plane
4. Seven spectral bands of unequal width = 7 channels
5. Increased radiometric sensitivity
6. Smaller pixel size (30m X 30m)
7. Better communications

Explanation of differences

The TM uses an oscillating mirror which scans in both directions; this was not so on the
previous Landsat sensors. To compensate for gaps and overlaps in the ground coverage,
a pair of rotating parallel mirrors called the Scan Line Corrector is included in the TM
optical chain.

TM detectors are located in the focal plane, with the number of spectral bands increased
to seven, of unequal width. The corrector advances and retards the line of sight during
both the forward and reverse scans of the primary mirror such that the scan projections
on the ground fall alongside each other. The TM takes advantage of the increased dwell
time available from this correction to increase the spatial and spectral resolution over the
MSS. To achieve this, the ground coverage per mirror sweep is kept approximately the
same, giving pixel sizes of 30 m.
The TM also has better communications: data are relayed to the ground via the TDRSS
communications satellites and via DOMSAT, another means of communication.
TM has proved to be better than the MSS.
Landsat 7 was launched on 15 April 1999 carrying the Enhanced Thematic Mapper Plus
(ETM+).


The SPOT System (Push broom)


An alternative is to have the detectors in a linear array; this is termed the push-broom
method.

[Figure: push-broom geometry, showing the linear array of individual detector elements,
the lens, the field of view and the recorded ground line.]

At a particular instant the linear array picks up the data for a whole line on the ground;
the entire line is recorded simultaneously by the detectors in the array. This approach
was adopted in the SPOT system.
The main advantages are:

1. Uses a linear array
2. The complex mechanical scan mechanism is eliminated
3. Dwell time per pixel is increased, giving better sensitivity
4. Excellent geometry (central projection) along the scan axis.

The orientation is the same for every pixel in a line; it can, however, change for the next
line.

Disadvantages

- Many detectors are needed in the detector array.
- 6000 detectors have to be calibrated, as against 6 previously.
- The detectors had to be very small.
- The linear array had to be moved very precisely.

The French launched the SPOT system, the first fully commercial remote sensing
system.


Characteristics of the Spot system


Focal length = 1.082 m
Altitude = 832 km
Swath width = 60 km (variable)
IFOV = 20 m in the MSS mode with 3 channels
= 10 m in the panchromatic mode with 1 channel

The problem: 6000 detector elements must fit on the focal plane. With f = 1.082 m,
H = 832 km and a 60 km swath:

Array length = (60 / 832) * 1082 mm = 78.03 mm

and with 6000 elements, 78.03 mm / 6000 = 13 micrometres per element.

It took a long time before a linear array like this could be manufactured.
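The array-length arithmetic above can be reproduced directly (a sketch using only the figures quoted in the text):

```python
# SPOT HRV linear-array sizing, using the values quoted above.
focal_length_mm = 1082.0   # f = 1.082 m
altitude_km = 832.0        # H
swath_km = 60.0
n_detectors = 6000

# The swath maps onto the focal plane scaled by f / H (the km units cancel).
array_length_mm = swath_km / altitude_km * focal_length_mm
pitch_um = array_length_mm / n_detectors * 1000.0   # mm -> micrometres

print(round(array_length_mm, 2))   # 78.03 mm
print(round(pitch_um, 1))          # 13.0 micrometres per detector element
```

Note the per-element size comes out in micrometres, not metres: 78 mm shared among 6000 detectors leaves about 13 µm each.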

Spot can operate in two modes

1. MSS (multispectral) mode, with a resolution of 20 m pixel size
2. Panchromatic mode, with a resolution of 10 m pixel size

The SPOT system presented:

Better sensitivity
Better reliability
Better geometry

This means photogrammetric procedures can be employed for these scanners.

The SPOT sensor does not need to point vertically to the ground, as it can be tilted.
This tilting facility allows images of an area to be obtained from a number of different
passes, which proved very useful for monitoring; SPOT was more useful for monitoring
than the MSS. It also provided the first stereoscopic images obtained from space.


A major advantage of the SPOT system is that normal photogrammetric procedures can
be used with its images.

SPOT 5 was launched in 2002 with a 5 m resolution. SPOT 5 has the advantage of both
cross-track stereo and along-track stereo.

4.0 Microwave Systems

Microwave systems:

- are good in all weathers;
- can penetrate cloud, snow, etc.;
- produce signals that are usually weak.

In practice this has not been widely applied; the best known application is the active
microwave system (radar).

SLAR - Side-Looking Airborne Radar

One measures the time between transmission of a pulse and reception of its reflection.


[Figure: SLAR block diagram, comprising pulse generation, transmission, duplexer,
antenna, time comparison, radar signal converter, CRT, lens and film recorder.]

Pulses are sent to the transmitter and, at the same time, to the time comparison unit.
The transmitted pulse passes through the duplexer to the antenna and out to the target;
the echo returns through the duplexer, is timed against the original pulse in the time
comparison unit, passes through the radar signal converter to a CRT and is recorded on
film. All this takes place in a fraction of a second before the next operation begins.
Pulses are sent many times per second, at right angles to the aircraft's track. The antenna
is a cylinder which can be up to 4 m long; one can be mounted on each side of the
aircraft to scan both sides of the flight line.

Why Radar does not look vertically downwards

Directly beneath the aircraft, echoes from targets on either side would arrive at the same
time, so the sensor could not tell the features apart: all returns would be registered at
once.


The SLAR looks to one side of the flight direction and is capable of producing a
continuous strip-map of the imaged surface. It transmits short pulses of radio-frequency
energy rather than a continuous wave. The SLAR looks to one side only (hence the
name) at a time, so as to remove the side-to-side ambiguity in relating pulse delay to
target position which would otherwise occur.

Flight Planning and Positioning of the aircraft (FOR SLAR).

The Aircraft
- A sophisticated aircraft navigation system is employed, usually combining GPS with
an inertial system; with these the aircraft can fly day or night with no need for
visibility.
- An autopilot keeps the flight steady.
- The antenna is able to move to remove tilt or any distortion caused by aircraft
movement.
- The rate at which pulses are emitted and the speed of the aircraft must be controlled.
- The cost is high; previously a Doppler navigation system was used.
A SLAR image is produced with one antenna at a time.

3.0 DIGITAL IMAGE PROCESSING

Introduction

Image processing and analysis can be defined as the “act of examining images for the
purpose of identifying objects and judging their significance”. The image analyst studies
the remotely sensed data and attempts, through logical processes, to detect, identify,
classify, measure and evaluate the significance of physical and cultural objects, their
patterns and spatial relationships.

Digital Data
In the most general terms, a digital image is an array of numbers depicting the spatial
distribution of a certain field parameter (such as reflectivity of EM radiation, emissivity,
temperature, or some geophysical or topographical elevation). A digital image consists
of discrete picture elements called pixels. Associated with each pixel is a number, the
DN (Digital Number), that depicts the average radiance of a relatively small area within
the scene; DN values normally range from 0 to 255. The size of this area affects the
reproduction of detail within the scene: as the pixel size is reduced, more scene detail is
preserved.

Remote sensing images are recorded in digital form and processed by computers to
produce images for interpretation. Images are available in two forms: photographic film
and digital. On photographic film, variations in the scene characteristics are represented
as variations in brightness.


Data Formats for Digital Satellite Imagery


Digital data from the various satellite systems are supplied to the user in the form of
computer-readable tapes or CD-ROM. No worldwide standard for the storage and
transfer of remotely sensed data has yet been agreed upon, though the CEOS (Committee
on Earth Observation Satellites) format is becoming accepted as the standard. Digital
remote sensing data are often organized using one of three common formats. Consider,
for instance, an image consisting of four spectral channels, which can be visualized as
four superimposed images, with corresponding pixels in one band registering exactly to
those in the other bands. The common formats are:
 Band Interleaved by Pixel (BIP)
 Band Interleaved by Line (BIL)
 Band Sequential (BSQ)
These formats indicate the ways in which images can be reformatted.

Digital image processing will be discussed using Landsat images.

Early imagery from Mars was observed to be dull and to contain a lot of noise as well as
geometric distortions; the noise and geometric distortions had to be corrected for. A
digital image stored in a computer is a 2-D array of pixel values; each element is a
picture element, or pixel, whose brightness value represents the scene.

1. In a Landsat image of 56 m * 79 m pixels with a swath width of 185 km, we have
about 30 million pixels, each with its own brightness value. A good photographic
film can show only 20 or 30 discrete grey levels, whereas the digital data range
from 0 to 255; the human eye likewise has a limited range of grey values.
2. A display is limited to three independent spectral bands that can be gathered
together at a time.
These limitations make digital image processing important.

Preprocessing techniques

This takes care of imperfections in the detectors; sometimes it is done by the image
distributor. Part of this process is termed reformatting the data: the data are converted to
the form with which your processing software can cope.

Reformatting of data

1. Band Interleaved by Pixel (BIP)
2. Band Interleaved by Line (BIL)
3. Band Sequential (BSQ)

For most purposes the band-sequential format is used. The three formats are illustrated
below:

1. BIP: all band values for each pixel are stored together, pixel by pixel:
pixel 1 (bands 4, 5, 6, 7), pixel 2 (bands 4, 5, 6, 7), etc.

2. BIL: each scan line is stored band by band:
line 1 (channel 4, ..., channel 7), line 2 (channel 4, ..., channel 7), etc.

3. BSQ: each band is stored as a complete image:
band 4 (lines 1 to n), band 5 (lines 1 to n), etc.
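The three orderings can be illustrated with a small NumPy sketch (a toy 4-band, 2-line, 3-pixel image; the DN values are invented so that the origin of each value is visible in its digits):

```python
import numpy as np

# Toy image: DN = band*100 + line*10 + pixel, stored as (band, line, pixel).
bands, lines, pixels = 4, 2, 3
img = np.fromfunction(lambda b, l, p: b * 100 + l * 10 + p,
                      (bands, lines, pixels), dtype=int)

bsq = img.reshape(-1)                     # whole of band 0, then band 1, ...
bil = img.transpose(1, 0, 2).reshape(-1)  # line 0 of every band, then line 1, ...
bip = img.transpose(1, 2, 0).reshape(-1)  # every band of pixel 0, then pixel 1, ...

print(bsq[:6].tolist())   # [0, 1, 2, 10, 11, 12]    : band 0, lines 0 and 1
print(bil[:6].tolist())   # [0, 1, 2, 100, 101, 102] : line 0 in bands 0 and 1
print(bip[:4].tolist())   # [0, 100, 200, 300]       : the four band values of one pixel
```

The same values are stored in every case; only the order in which they are written to the file differs.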

Digital image analysis is usually conducted using raster data structures: each image is
treated as an array of values. This offers advantages for the manipulation of pixel values
by an image processing system, as it is easy to locate pixels and their values. The
disadvantages become apparent when one needs to represent the array of pixels as
discrete patches or regions, whereas vector data structures use polygonal patches and


their boundaries as the fundamental units for analysis and manipulation. The vector
format is, however, not appropriate for the analysis of remotely sensed data.

Image Resolution

Resolution can be defined as ‘the ability of an imaging system to record fine details in a
distinguishable manner’. A working knowledge of resolution is essential for
understanding both practical and conceptual details of remote sensing. Along with the
actual positioning of the spectral bands, the resolutions are of paramount importance in
determining the suitability of remotely sensed data for a given application. The major
characteristics of an imaging remote sensing instrument operating in the visible and
infrared spectral regions are described in terms of the following:
 Spectral Resolution
 Radiometric resolution
 Spatial resolution
 Temporal resolution

Spectral resolution refers to the width of the spectral bands. Different materials on the
earth’s surface exhibit different spectral reflectances and emissivities, and these spectral
characteristics define the spectral positions and sensitivities needed to distinguish
materials. There is a trade-off between spectral resolution and signal-to-noise ratio. The
use of well-chosen and sufficiently numerous spectral bands is a necessity if different
targets are to be successfully identified on remotely sensed images.

Radiometric resolution, or radiometric sensitivity, refers to the number of digital levels
used to express the data collected by the sensor. It is commonly expressed as the number
of bits (binary digits) needed to store the maximum level; for example, Landsat TM data
are quantized to 256 levels (equivalent to 8 bits). Here also there is a trade-off between
radiometric resolution and signal-to-noise ratio: there is no point in having a step size
smaller than the noise level in the data, so a high nominal resolution is of little value in a
noisy instrument compared with a high-quality, high signal-to-noise-ratio instrument.
Higher radiometric resolution may also conflict with data storage and transmission
rates.

The spatial resolution of an imaging system is defined through various criteria: the
geometric properties of the imaging system, the ability to distinguish between point
targets, the ability to measure the periodicity of repetitive targets, and the ability to
measure the spectral properties of small targets.

The most commonly quoted quantity is the instantaneous field of view (IFOV), which is
the angle subtended by the geometrical projection of a single detector element onto the
earth’s surface. It may also be given as the distance, D, measured along the ground, in
which case the IFOV is clearly dependent on sensor height, from the relation D = hb,
where h is the height and b is the angular IFOV in radians.
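The relation D = hb can be checked against the Landsat MSS figures quoted earlier (920 km altitude, 0.086 mrad IFOV); a one-line sketch:

```python
def ifov_ground_distance_m(height_km: float, ifov_mrad: float) -> float:
    """Ground-projected IFOV from D = h * b, with b in radians."""
    return (height_km * 1e3) * (ifov_mrad * 1e-3)

# Landsat MSS: 920 km altitude, 0.086 mrad angular IFOV
print(round(ifov_ground_distance_m(920, 0.086)))   # 79 m on the ground
```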

A problem with IFOV definition, however, is that it is a purely geometric definition and
does not take into account spectral properties of the target. The effective resolution


element (ERE) has been defined as “the size of an area for which a single radiance value
can be assigned with reasonable assurance that the response is within 5% of the value
representing the actual relative radiance”. Being based on actual image data, this
quantity may be more useful in some situations than the IFOV.

Other methods of defining the spatial resolving power of a sensor are based on the ability
of the device to distinguish between specified targets, such as the ratio of the modulation
of the image to that of the real target.
Modulation, M, is defined as:

M = (Emax - Emin) / (Emax + Emin)

where Emax and Emin are the maximum and minimum radiance values recorded over the
image.
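A minimal sketch of the modulation formula (the radiance values here are invented for illustration):

```python
def modulation(e_max: float, e_min: float) -> float:
    """Modulation M = (Emax - Emin) / (Emax + Emin); ranges from 0 to 1."""
    return (e_max - e_min) / (e_max + e_min)

print(round(modulation(200.0, 100.0), 3))   # 0.333 : moderate contrast
print(modulation(150.0, 150.0))             # 0.0   : no contrast at all
```

Comparing the modulation of the image with that of the real target gives a measure of how much contrast the sensor preserves.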

Temporal resolution refers to the frequency with which images of a given geographic
location can be acquired. Satellites not only offer the best chances of frequent data
coverage but also of regular coverage. The temporal resolution is determined by orbital
characteristics and swath width, the width of the imaged area.

Swath width is given by

Swath = 2h tan(FOV / 2)

where h is the altitude of the sensor and FOV is the angular field of view of the sensor.
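A sketch of the swath-width formula; the 11.56-degree field of view used here is an assumed figure for the Landsat MSS, chosen so the result can be compared with the 185 km swath quoted earlier:

```python
import math

def swath_width_km(altitude_km: float, fov_deg: float) -> float:
    """Swath = 2 * h * tan(FOV / 2); a flat-Earth approximation."""
    return 2.0 * altitude_km * math.tan(math.radians(fov_deg) / 2.0)

# 920 km altitude with an assumed total field of view of 11.56 degrees
print(round(swath_width_km(920, 11.56)))   # about 186 km, close to the quoted 185 km
```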

Analysis of remotely sensed data is done using various image processing techniques and
methods that include:
i. Analogue image processing
ii. Digital Image processing

Visual or analogue processing techniques are applied to hardcopy data such as
photographs or printouts. Visual image analysis adopts certain fundamental elements of
interpretation, which are listed in the table at the end of this section.

The use of these fundamental elements depends not only on the area being studied, but
on the knowledge the analyst has of the study area. For example, the texture of an object
is very useful in distinguishing objects that may appear identical if judged solely on
tone: water and tree canopy may have the same mean brightness values, but their
textures are much different. Association is a very powerful image analysis tool when
coupled with general knowledge of the site. Thus we are adept at applying collateral
data and personal knowledge to the task of image processing. The multi-concept of
examining remotely sensed data (multi-spectral, multi-temporal, multi-scale and in
conjunction with multiple disciplines) allows us to reach a verdict not only as to what an
object is but also as to its importance. Apart from these, analogue image processing


also includes optical photogrammetric techniques allowing for precise measurement of
the height, width, location, etc. of an object. The table below shows the elements of
image interpretation.

Table: Elements of image interpretation

Primary elements: black-and-white tone; colour; stereoscopic parallax
Spatial arrangement of tone and colour: size; shape; texture; pattern
Analysis of primary elements: height; shadow
Contextual elements: site; association

Digital image processing is a collection of techniques for the manipulation of digital
images by computer. The raw data received from the imaging sensors on the satellite
platforms contain flaws and deficiencies. To overcome these flaws and deficiencies and
recover the original information in the data, the imagery needs to undergo several steps
of processing. These vary from image to image depending on the type of image format,
the initial condition of the image, the information of interest and the composition of the
image scene. Digital image processing comprises three general steps:
i. image restoration
ii. image enhancement
iii. information extraction

The three main processes involved in digital image processing are illustrated below:

Pre-processing consists of those operations that prepare data for subsequent analysis and
attempt to correct or compensate for systematic errors. The digital imageries are
subjected to several corrections such as geometric, radiometric and atmospheric, though
all these corrections might not be necessarily applied in all cases. These errors are
systematic and can be removed before they reach the user. The investigator should decide
which pre-processing techniques are relevant on the basis of the information to be
extracted from remotely sensed data. After pre-processing is complete, the analyst may
use feature extraction to reduce the dimensionality of the data. Thus feature extraction is
the process of isolating the most useful components of the data for further study while
discarding the less useful aspects (errors, noise etc). Feature extraction reduces the
number of variables that must be examined, thereby saving time and resources.


1. IMAGE RESTORATION
   Geometric corrections: altitude and attitude; scanner distortions; earth motion.
   Radiometric corrections: variable detector response; detector failure.

2. IMAGE ENHANCEMENT
   Density slicing; contrast stretching (smoothing); false colour composites; ratio
   images.

3. INFORMATION EXTRACTION
   Unsupervised classification: purely statistical; the user correlates the output with
   ground truth.
   Supervised classification: training sites are selected; the output is a map of classified
   areas.

RESULT (orthoimage)

Figure: Flow chart of Digital Image Processing

Image restoration is usually carried out by the distributor.

Various procedures of resampling are available:


1. Nearest neighbour: the BV of the nearest input pixel is assigned to the output
pixel, i.e. I(X,Y) = I(U,V).

2. Bilinear interpolation.

3. Cubic convolution.

Of these resampling methods, cubic convolution generally gives the best-looking output.
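Nearest-neighbour resampling can be sketched as follows (a toy inverse mapping, here a simple one-pixel shift, stands in for the real geometric transformation):

```python
import numpy as np

def nearest_neighbour(img, inverse_map):
    """For each output pixel (x, y), find the source coordinate (u, v) via the
    inverse mapping and copy the BV of the nearest input pixel: I(X,Y) = I(U,V)."""
    out = np.zeros_like(img)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            u, v = inverse_map(x, y)
            i, j = int(round(v)), int(round(u))   # nearest source row, column
            if 0 <= i < rows and 0 <= j < cols:
                out[y, x] = img[i, j]
    return out

img = np.arange(16).reshape(4, 4)
# Toy geometric correction: shift the image one pixel to the right.
shifted = nearest_neighbour(img, lambda x, y: (x - 1, y))
print(shifted[0].tolist())   # [0, 0, 1, 2]
```

Because each output BV is copied unchanged from an input pixel, nearest neighbour preserves the original radiometry; bilinear and cubic resampling compute new BVs from several neighbours instead.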

Since 1979 these corrections have been performed operationally, relieving the users of
this stage. However, the BV is changed: it is placed somewhere else and replaced by the
BV of the nearest neighbour. It may therefore be better to interpret the original data (i.e.
ask for the original data), otherwise your image may come distorted.

The following image restoration techniques or corrections have to be applied.

1. Geometric corrections

This seeks to remove distortions due to altitude and attitude, scanner distortion and earth
motion. To take these out we need ground control points, such as large islands or other
very large features whose coordinates in latitude and longitude are known; large ground
controls are always needed. Transformations then have to be undertaken to remove the
distortions; this is termed ‘resampling’.

Raw digital images often contain serious geometric distortions that arise from earth
curvature, platform motion, relief displacement and non-linearities in the scanning
motion. The distortions involved are of two types:

1. non-systematic distortions
2. systematic distortions

Rectification is the process of projecting image data onto a plane and making it conform
to a map projection system. Registration is the process of making image data conform to
another image; a map coordinate system is not necessarily involved. Rectification
involves rearrangement of the input pixels onto a new grid which conforms to the
desired map projection and coordinate system.

2. Radiometric corrections

This seeks to remove noise, detector failure, variable detector response, etc. Radiometric
corrections are needed because, when image data are recorded by the sensors, they
contain errors in the measured brightness values of the pixels. These errors are referred
to as radiometric errors and can result from:


1. the instruments used to record the data;
2. the effect of the atmosphere.

Radiometric processing influences the brightness values of an image to correct for
sensor malfunctions or to adjust the values to compensate for atmospheric degradation.
Radiometric distortion can be of two types:
1. The relative distribution of brightness over an image in a given band
can be different from that in the ground scene.
2. The relative brightness of a single pixel from band to band can be
distorted compared with the spectral reflectance character of the
corresponding region on the ground.

Correction also takes out variations due to the sun angle.

The detector will record two types of radiation:

1. radiation scattered from the sun by the atmosphere;
2. radiation reflected by the target.

Deep water and similar dark bodies should appear black, i.e. with a BV of 0; any higher
minimum value is attributed to atmospheric scattering.

E.g.

Band   Lowest BV   Highest BV
4      11 (0)      60 (49)
5      4 (0)       69 (65)
6      3 (0)       75 (72)
7      0           31

(The values in brackets are the BVs after shifting each band so that its lowest value
becomes 0.) Each band is then plotted, and applying shifts like this takes out the effect
of scattering.

These corrections are done automatically. Once these processes have been completed,
image enhancement can be carried out.
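The shift described above is in effect a dark-object subtraction; a minimal sketch (the BVs are invented, with the band minimum of 11 taken from the band-4 row of the table):

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the per-band minimum BV so the darkest target (e.g. deep water)
    goes to 0, removing a constant atmospheric-scattering offset."""
    return band - band.min()

band4 = np.array([[11, 25, 60],
                  [30, 11, 45]])   # lowest BV is 11, as in the band-4 example
print(dark_object_subtraction(band4).tolist())   # [[0, 14, 49], [19, 0, 34]]
```

The band-4 values 11 and 60 become 0 and 49, exactly as in the bracketed figures of the table.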


Image enhancement

Image enhancement techniques are employed to make satellite imagery more
informative and to help achieve the goal of image interpretation. The term enhancement
is used to mean the alteration of the appearance of an image in such a way that the
information contained in that image is more readily interpreted visually in terms of a
particular need. The image enhancement techniques are applied either to single-band
images or separately to the individual bands of a multiband image set.

These techniques are applied to simplify the image or to prepare the way for some of the
information extraction to follow.

A. Contrast stretching
The operating or dynamic ranges of remote sensors are designed with a variety of
eventual data applications in mind. For any particular area that is being imaged it is
unlikely that the full dynamic range of the sensor will be used, and the corresponding
image is dull and lacking in contrast, or over-bright. Landsat TM images can end up
being used to study deserts, ice sheets, oceans, forests, etc., requiring relatively low-gain
sensors to cope with the widely varying radiances from dark, bright, hot and cold
targets. Consequently, it is unlikely that the full radiometric range of a band is utilised in
an image of a particular area. The result is an image lacking in contrast, but by
remapping the DN distribution to the full display capabilities of an image processing
system we can recover a much better image. Contrast stretching falls into the following
categories:

1. Linear contrast stretching: the data can consist of BVs from 0 to 255, but it is most
unusual for a single band to span this whole range, and the human eye cannot
distinguish that many grey levels anyway. Most image enhancement software will show
the minimum, the maximum and the standard deviation of the band.

The spread of BVs will be less than the full 0-255 range.


E.g. suppose the BVs in a band span only 15 to 51. The occupied range is then
(51 - 15)/255 = 36/255, or about 14%: every pixel lies within 14% of the 0-255 range.
This narrow range is stretched to the full range as follows:

BVout = (BVin - MIN) / (MAX - MIN) * 255

Image contrast is increased and the image can more easily be interpreted. This assigns a
linear stretch, which can be a big disadvantage.
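The linear stretch formula can be sketched as follows (the BVs are invented, spanning 15-51 as in the example above):

```python
import numpy as np

def linear_stretch(band):
    """BVout = (BVin - MIN) / (MAX - MIN) * 255, remapped to the full 8 bits."""
    bv_min, bv_max = band.min(), band.max()
    out = (band.astype(float) - bv_min) / (bv_max - bv_min) * 255.0
    return out.round().astype(np.uint8)

band = np.array([[15, 20, 33],
                 [40, 51, 27]])   # raw BVs occupy only 15-51 of the 0-255 range
print(linear_stretch(band).tolist())   # [[0, 35, 128], [177, 255, 85]]
```

The minimum BV maps to 0 and the maximum to 255, so the full display range is used; because the mapping is linear, every intermediate value is stretched by the same factor regardless of how the BVs are actually distributed.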

Other categories that can be applied are:

i. Histogram stretches, such as histogram equalization and the Gaussian stretch, which
redistribute the BVs according to their frequency of occurrence;

ii. Step stretches, which stretch only over a certain range of values, depending on what
you are after.

2. Density Slicing
The total range of BVs is divided into a number of bands, say eight (i.e. "slices"),
across 0 – 255, and each slice is given a special symbol or grey level: the BVs of a given
band are "sliced" into distinct classes. For example, for band 4 of an 8-bit TM image, we
might divide the contiguous 0-255 range into discrete intervals of 0-63, 64-127, 128-191
and 192-255 and display these four classes as four different grey levels. This is a form of
generalization: it is useful for producing grey-scale maps and for displaying pixels on the
assumption that areas falling in the same slice are likely to be the same. It also finds
application in the display of temperature maps.
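The four-interval TM band 4 example can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def density_slice(band, n_slices=4, levels=256):
    """Map contiguous, equal-width DN intervals to class numbers 0..n_slices-1."""
    width = levels // n_slices                   # 256 // 4 = 64-DN-wide slices
    return np.minimum(band // width, n_slices - 1).astype(np.uint8)

# 0-63 -> class 0, 64-127 -> 1, 128-191 -> 2, 192-255 -> 3
print(density_slice(np.array([0, 63, 64, 191, 255])))  # [0 0 1 2 3]
```

Each class number can then be assigned its own grey level or colour when the sliced image is displayed.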

3. Ratio images
Sometimes ratios of BVs can be more useful than the original data. For two channels
with values DN1 (BV of the 1st channel) and DN2 (BV of the 2nd channel), a ratio image
is formed as:

DNo = a × (DN1 / DN2) + b

where a and b are constants selected to scale the output. Normalised differences of bands
are also used, e.g.:

(Band 7 – Band 5) / (Band 7 + Band 5)

Lots of experiments were undertaken to find the best ratio for an area of interest.
Ratio images were also used to eliminate the effect of topography and shadow on images.


E.g. consider a sandstone outcrop which appears quite bright where it is lit and quite
dark where it is in shadow:

SANDSTONE
Illumination    Band 4   Band 5   Ratio 4/5   New BV
Lit area        28       42       0.66        66
Shadow area     22       34       0.65        65

The near-identical ratios mean the two areas are likely to be the same material. Ratioing
is also used to detect change; these operations apply to individual pixels.
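The sandstone figures above can be reproduced with a small sketch; the scaling constant a = 100 is chosen here simply to turn the ratios 0.66 and 0.65 into new BVs near 66 and 65:

```python
import numpy as np

def band_ratio(dn1, dn2, a=100.0, b=0.0):
    """DNo = a * (DN1 / DN2) + b, guarding against division by zero."""
    return a * (np.asarray(dn1, float) / np.maximum(np.asarray(dn2, float), 1e-6)) + b

lit = band_ratio(28, 42)      # lit sandstone:    28/42 ~ 0.667
shadow = band_ratio(22, 34)   # shadowed outcrop: 22/34 ~ 0.647
print(f"{float(lit):.1f} {float(shadow):.1f}")  # 66.7 64.7
```

The two values are nearly equal despite the large brightness difference between the lit and shadowed pixels, which is exactly why ratioing suppresses topographic shading.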

4. Filtering (edge detection)

Filtering compares the BV of each pixel with those of its neighbours and changes the BV
of the pixel within the image. Looking at the tone changes along each scan line shows
the spatial frequency and texture of the pixels.

Filtering can be carried out to do one of two things:

a. Suppress high-frequency changes
b. Enhance high-frequency changes

Types of filtering:
i. Low-pass filtering (sometimes referred to as a smoothing filter)
ii. High-pass filtering

i. Low-pass filtering: for a homogeneous area

6  7  7        6  7  7
7  20 6   →    7  8  6
6  7  8        6  7  8


The simplest low-pass filter takes each pixel value and replaces it with, say, the mean of
its neighbourhood; filters that write the result to a new image are called non-recursive
filters. These filters mainly reduce noise: low-pass filtering smooths the image, thereby
reducing the noise and leaving the general features.
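The smoothing of the noisy centre pixel above can be sketched as a 3×3 mean filter; border pixels are simply copied in this minimal version:

```python
import numpy as np

def low_pass_3x3(image):
    """Replace each interior pixel with the mean of its 3x3 neighbourhood."""
    img = image.astype(float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].mean()
    return np.round(out).astype(int)

window = np.array([[6, 7, 7],
                   [7, 20, 6],
                   [6, 7, 8]])
print(low_pass_3x3(window))  # the noisy 20 is smoothed to 8
```

The mean of the whole 3×3 window is 74/9 ≈ 8.2, which rounds to the 8 shown in the smoothed grid above.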

ii. High-pass filtering

Take the original image, smooth it, and find the difference between the two images.
Or: take the local average, take the difference between the local average and the pixel,
and add double that difference to the local average:

6  7  7
7  20 6     local average ≈ 6
6  7  8     difference = 20 – 6 = 14
            double difference = 14 × 2 = 28
            new value = 28 + 6 = 34

Computationally, DNo = 2 × DNI – A, where A is the local average.

These filters can be made directional.

- Discontinuities will be picked up which might not have been seen in the original image.

- Once an image is filtered there is a tendency to store it, and this takes up space.
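The DNo = 2 × DNI – A rule can be sketched as follows. Here A is taken as the average of the eight neighbours, which for the worked example above is 54/8 = 6.75; the handout rounds it to 6, hence its new value of 34 versus 33 here:

```python
import numpy as np

def high_pass_3x3(image):
    """DNo = 2 * DNI - A, where A is the mean of the 8 surrounding pixels."""
    img = image.astype(float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            a = (img[i - 1:i + 2, j - 1:j + 2].sum() - img[i, j]) / 8.0
            out[i, j] = 2 * img[i, j] - a
    return np.round(out).astype(int)

window = np.array([[6, 7, 7],
                   [7, 20, 6],
                   [6, 7, 8]])
print(high_pass_3x3(window)[1, 1])  # 2*20 - 6.75 = 33.25 -> 33
```

Pixels that differ sharply from their surroundings are exaggerated, which is why this filter brings out edges and discontinuities.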

Production of colour composites (false-colour composites):

- The output can be in black and white, one band at a time.

- Three bands (e.g. bands 1, 2 and 3) are then combined to form a false-colour
composite, from which hard copies can be obtained; this helps in interpretation.

INFORMATION EXTRACTION

Image classification forms an important part of the fields of remote sensing, image
analysis and pattern recognition. In some instances, the classification itself may be
the object of the analysis. Digital image classification is the process of sorting all the
pixels in an image into a finite number of individual classes. The classification process
groups pixels on the basis of:


 Patterns of their DNs, usually in multichannel data (spectral classification).
 Spatial relationships with neighbouring pixels.
 Relationships between the data acquired on different dates.

Pattern recognition, spectral classification, textural analysis and change detection are
different forms of classification that are focused on three main objectives:

1. Detection of different kinds of features in an image.
2. Description of distinctive shapes and spatial patterns.
3. Identification of temporal changes in images.

Fundamentally, spectral classification forms the basis for objectively mapping the areas of
the image that have similar spectral reflectance/emissivity characteristics. Depending on
the type of information required, spectral classes may be associated with identified
features in the image (supervised classification) or may be chosen statistically
(unsupervised classification). Classification has also been seen as a means of compressing
image data by reducing the large range of DNs in several spectral bands to a few classes
in a single image. Classification reduces the large spectral space into relatively few
regions and obviously results in a loss of numerical information from the original image.
There is no theoretical limit to the dimensionality used for the classification, though
obviously the more bands involved, the more computationally intensive the process
becomes. It is often wise to remove redundant bands before classification.

Classification generally comprises four steps:

 Pre-processing, e.g. atmospheric correction, noise suppression, band ratioing,
Principal Component Analysis, etc.
 Training – selection of the particular features which best describe the pattern.
 Decision – choice of suitable method for comparing the image patterns with the
target patterns.
 Assessing the accuracy of the classification.

The image may be left in digital form, so that numerical methods can be employed, or it
can be displayed in graphical mode for visual inspection:
Vegetation – different shades of green
Water – black most times
Bare soil – light blue
Built-up areas – blue grey
Clouds and snow – white
Sand – white-yellow

Interpreting an image visually has many disadvantages; an image is best interpreted by
digital methods. Two forms of digital classification can be used:


i. Unsupervised classification
ii. Supervised classification

Information Extraction

Unsupervised classification employs statistical methods coupled with ground truth;
supervised classification relies on identified training sites.

Unsupervised classification
If an area is identified as, say, vegetation (the ground truth has to be known), then all
similar areas can be interpreted as such.

Supervised Classification
1. From previous experience you know the BVs of a class in a particular band; these
are fed into the computer before the classification is performed.
2. Alternatively, find training sites, areas where the land-cover types are known;
other pixels can then be identified against them.

Supervised classification is widely used, e.g. classification of road surfaces:

Surface of interest   PAN   IR
Grass                 D     L
Cement                L     L
Asphalt               D     D
Soil                  L     D

(D – dark tone; L – light tone)

No single band separates all four surfaces, but using both together solves the problem.
Thus it may be possible to classify images from four or more bands.

Simplest classification of unknowns

1. Nearest-neighbour classification

- Work out the mean BV of the pixels in each training area.
- Compute the distance of each unknown pixel from these means and assign it to the nearest.
- Very easy to compute.

Problems
a. Every pixel will be classified into one class or another.
b. Being strictly nearest-neighbour, the classifier will not consider that a pixel far
from every mean could be something else.
c. No account is taken of the different spreads in the data.
d. You can't use this to go after one particular feature, e.g. water.
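The nearest-mean idea can be sketched as below; the two-band (PAN, IR) training means are invented purely to echo the road-surface table above:

```python
import numpy as np

def classify_nearest_mean(pixel, class_means):
    """Assign a pixel (one BV per band) to the class whose mean BV is closest."""
    names = list(class_means)
    dists = [np.linalg.norm(np.asarray(pixel, float) - class_means[n]) for n in names]
    return names[int(np.argmin(dists))]

means = {"grass":   np.array([40.0, 200.0]),   # dark in PAN, light in IR
         "asphalt": np.array([30.0, 35.0]),    # dark in both
         "soil":    np.array([180.0, 50.0])}   # light in PAN, dark in IR
print(classify_nearest_mean([35, 190], means))  # closest to the grass mean
```

Note that every pixel gets assigned to some class, however far away it lies, which is exactly problem (b) above.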

2. Parallelepiped classifier approach

In this approach each class is bounded by a rectangular decision region; the range can
be defined in a number of ways:
i. in 2D feature space the decision region is a rectangle;
ii. in multi-dimensional space it forms a parallelepiped.

Confusion is caused by overlap between the boxes, which is the result of high correlation
between the data.
- A rectangular decision area is not good for highly correlated data; bands 4 and 5, for
example, are highly correlated. A stepped decision boundary can be employed to solve
this. Alternatively, the axes can be rotated along the lines of correlation; this is
termed Principal Component Analysis.

An acceptable training area can be spoiled by one or two pixels in the training area.

[Figure: scatter plot of Band 5 against Band 4 showing a compact cluster of training
pixels with two outliers A and B.]

Suppose A and B were not present: the decision area would be only the box around the
main cluster. A and B have therefore increased the decision area. How does this come
about? Because we took the minimum and maximum values of the data.


Computing the box from the raw BV(4) and BV(5) values of every training pixel can
really take a long time. To solve this:

1. Work out the mean.
2. Work out the SD.
3. Determine the range, i.e. (mean – SD) to (mean + SD).

This is an automatic way of getting rid of points A and B. The procedure works well if
the BVs are normally distributed around the mean.

Another possibility:
Find the highest and lowest values and go a little bit lower and a little bit higher than
those values.
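A sketch of the box test, with the per-band ranges built as mean ± 1 SD exactly as in the steps above; the training values are made up to mimic a tight cluster plus two outliers like A and B:

```python
import numpy as np

def train_box(training_pixels):
    """Per-band [mean - SD, mean + SD] decision range for one class."""
    t = np.asarray(training_pixels, float)
    return t.mean(axis=0) - t.std(axis=0), t.mean(axis=0) + t.std(axis=0)

def in_box(pixel, box):
    """True if the pixel falls inside the class's rectangular decision region."""
    lo, hi = box
    return bool(np.all(pixel >= lo) and np.all(pixel <= hi))

# Band 4 / Band 5 training pixels: a tight cluster plus two outliers
training = [[30, 40], [32, 41], [31, 39], [33, 42], [90, 95], [5, 8]]
box = train_box(training)
print(in_box(np.array([31, 40]), box))   # cluster pixel: inside the box
print(in_box(np.array([90, 95]), box))   # extreme outlier: rejected
```

Because the range is built from the mean and SD rather than the raw minimum and maximum, the extreme training pixels no longer dictate the size of the decision box.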

Another possibility:

Maximum Likelihood Classifier

- Uses an elliptical decision area to overcome the problems of the parallelepiped.
- This is much slower to run but gives excellent results.

All the above methods are termed hard classifiers.

There is a second group of classifiers called soft classifiers or fuzzy classifiers.
A fuzzy classifier labels each pixel with class probabilities, saying, for example: this
pixel is water with a probability of 70%. Fuzzy classifiers are becoming increasingly
popular. All of the above are supervised classifiers.


Detailed procedures under supervised classification

In this system the analyst supervises the categorization of the data by specifying to the
computer algorithm numerical descriptors of the various class types. There are three basic
steps involved in typical supervised classification.

Training Stage
The analyst identifies the training areas and develops a numerical description of the
spectral attributes of each class or land-cover type. During the training stage the
location, size, shape and orientation of the training areas for each class are determined.

Classification Stage
Each pixel is categorized into the land-cover class it most closely resembles. If the pixel
is not similar to any of the training data, it is labeled as unknown. Numerical approaches
to spectral pattern recognition fall into several categories.

1. Measurements on a Scatter Diagram

Each pixel value is plotted on a scatter diagram indicating the category of the
class; the two-dimensional digital values attributed to each pixel are plotted on
the graph.

2. Minimum Distance to Mean Classifier / Centroid Classifier

This is a simple classification strategy. First the mean vector for each category is
determined from the average DN in each band for each class. An unknown pixel
can then be classified by computing the distance from its spectral position to each
of the means and assigning it to the class with the closest mean. One limitation of
this technique is that it overlooks the different degrees of variation within classes.
3. Parallelepiped Classifier
For each class, estimates of the maximum and minimum DN in each band are
determined. Parallelepipeds are then constructed so as to enclose the scatter in each
theme, and each pixel is tested to see whether it falls inside any of the
parallelepipeds. The approach has limitations:

a. A pixel may fall outside every parallelepiped and remain unclassified.

b. Theme data may be so strongly correlated that a pixel vector plotting at some
distance from the theme scatter may still fall within the decision box and be
classified erroneously.

c. Parallelepipeds may sometimes overlap, in which case the decision becomes more
complicated and the boundaries are stepped.

4. Gaussian Maximum Likelihood Classifier


This method determines the mean vector, variance and covariance of each theme, providing
a probability density function. This is then used to classify an unknown pixel by
calculating, for each class, the probability that it lies in that class. The pixel is then
assigned to the most likely class or, if its probability fails to reach a defined threshold
in every class, is labeled as unclassified. Reducing the data dimensionality beforehand is
one approach to speeding the process up.
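A sketch of the per-class log-likelihood computation; the class means, covariances and rejection threshold are all illustrative values, not from any real training data:

```python
import numpy as np

def log_likelihood(pixel, mean, cov):
    """Log of the multivariate normal density for one class."""
    diff = np.asarray(pixel, float) - mean
    return -0.5 * (np.log(np.linalg.det(cov))
                   + diff @ np.linalg.inv(cov) @ diff
                   + len(mean) * np.log(2.0 * np.pi))

def max_likelihood_classify(pixel, classes, threshold=-50.0):
    """Most likely class, or 'unclassified' if every class falls below the threshold."""
    scores = {name: log_likelihood(pixel, m, c) for name, (m, c) in classes.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else "unclassified"

classes = {"water": (np.array([10.0, 5.0]),   np.eye(2) * 4.0),
           "soil":  (np.array([120.0, 80.0]), np.eye(2) * 25.0)}
print(max_likelihood_classify([11, 6], classes))    # water
print(max_likelihood_classify([60, 40], classes))   # unclassified
```

Because the covariance enters the score, classes with elongated, correlated scatter get elliptical decision regions rather than the rectangular boxes of the parallelepiped.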

Unsupervised classification

The simplest of these is density slicing; this needs ground truth for checking.

NB: No single classifier will give good results on its own; classifiers are used
interactively, drawing on the skill and experience of the operator.

This system of classification does not utilize training data as the basis of classification.
It involves algorithms that examine the unknown pixels in the image and aggregate them
into a number of classes based on the natural groupings or clusters present in the image.
The classes that result from this type of classification are spectral classes.
Unsupervised classification is the identification, labeling and mapping of these natural
classes. This method is usually used when there is little information about the data
before classification.
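The natural-grouping idea behind unsupervised classification can be sketched with a tiny k-means clusterer; operational packages typically use refinements such as ISODATA, so this is just the core loop:

```python
import numpy as np

def kmeans(pixels, k=2, iters=10, seed=0):
    """Aggregate pixel vectors into k spectral classes by repeated
    nearest-centroid assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for c in range(k):
            if np.any(labels == c):          # leave empty clusters in place
                centroids[c] = pixels[labels == c].mean(axis=0)
    return labels, centroids

# Two obvious natural groupings in two-band space
pixels = np.array([[10., 12.], [11., 10.], [200., 198.], [198., 202.]])
labels, _ = kmeans(pixels)
print(labels)  # the two dark pixels share one label, the two bright ones the other
```

The resulting clusters are spectral classes only; the analyst must still identify and label them against ground truth, which is the step that distinguishes this from supervised classification.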

5.0 Current Developments and Mapping from satellite imagery

 Use of satellite imagery for Topographic mapping projects.


Satellite imagery has not achieved much in this area; the lack of success is due to the
fact that space imagery has not yet been developed to give sufficiently accurate geometry.
The comparison below depicts the current position of satellite imagery.
[Figure: "Years" (vertical axis: ground survey 50, aerial survey 5, space 3, SPOT 28,
Landsat 16, NOAA 12, METEOSAT 30) plotted against ground resolution (horizontal axis:
0.1 m, 1 m, 5 m, 10-20 m, 30 m, 1 km, 5 km).]


Scanner Imagery
- Landsat will continue to be available; it has been around for a long time.
- MSS hard copy has been used to interpret topographic maps.
- Topographic mapping is interested in boundary lines and lines of communication,
whereas thematic mapping is after areas.
- Line features are difficult to pick out with TM.
- Lack of stereo cover to aid interpretation.
- SPOT received a lot of attention as it could show greater detail; line maps were placed
on such images.
- SPOT tells you where a change has taken place.

RADAR SYSTEMS
- Earlier attempts produced spectacular results.
- Lots of radar systems are under development.

Importance to attach to RS
1. RS has been very successful in thematic mapping.
2. Image enhancement and supervised classification techniques will be with us for a long time.
3. Thematic mapping at global scales has been quite spectacular:
- Sea temperature
- Oil spills
- Changes
- Global wind fields
- Ozone layers

Recall successes of RS

1. RS has been successful in thematic mapping.
2. Experimental systems like RADAR have yielded some good results in some
localized areas, but could not become worldwide commercial systems.
3. The production systems are Landsat and SPOT.

Future of RS

Four important developments:

1. Improvements from existing suppliers (Landsat and SPOT):
Landsat MSS → Landsat TM → Landsat ETM, with increasing resolution.
2. Release of Russian data, especially photography.
3. Transformation of systems from experimental to production mode;
MOMS would have given substantial information.
4. Development of new, fully commercial systems, especially from the USA.


From 1996 some commercial companies have tried to launch systems, e.g. EarthWatch.
These systems brought:
1. A big change from earlier systems
2. High geometric fidelity
3. Very flexible and rapid image delivery, within 6 – 24 hrs
4. Ability to deliver stereo imagery of high quality

Why is all this possible now? Encouragement for commercial developments in the USA:

1. End of the Cold War: with the collapse of the Soviet Union, technology once
reserved for defence had to be put to use by the civilian population.
2. Failure of Landsat 6: seen as an opportunity for commercial satellites or systems
to be put up.
3. Advances in digital technology: the cost of a satellite now is 10% of the cost
ten years ago.
4. Attitude of the US Department of Commerce: agreed to the sale of digital
information in terms of collection and supply.

Some Commercial Groups:

1. Very High Resolution (VHR) Missions (all with stereo)

Nine (9) companies were given licenses to put up satellites and obtain satellite data;
two of these licenses are held by EarthWatch, developed as part of its strategic
planning. Some of these commercial groups are:
1. EarthWatch – EarlyBird, QuickBird
2. Indian Space Research Organisation – IRS-1C
3. ORBIMAGE – OrbView-3
4. Space Imaging – Carterra-1
5. SPOT Image – SPOT 4, SPOT 5
6. Russian data suppliers (WorldMap) – RESOURS-F1, F2, F3 and KOSMOS missions

1. EarthWatch – EarlyBird 1

- Launch scheduled for May 1997; launched 24/12/97
- H = 470 km, revisit 2 – 5 days, inclination 97.3 degrees
- Two sensors with resolutions of 3 m (Pan) and 15 m (MS)
- 30-degree fore–aft and 28-degree side-to-side pointing
- No plans to launch another one (1st license).

EarthWatch – QuickBird 1

- Launch scheduled for late 1998, then 1999
- H = 600 km, revisit 1 – 4 days, inclination 52 degrees
- Two sensors with resolutions of 0.82 m (Pan) and 3.2 m (MS)
- 30-degree fore–aft and 30-degree side-to-side pointing
- Likely to be launched in 2000? Not confirmed yet.
- Along-track and cross-track stereo will be possible.

2. Indian Space Research Organisation – IRS-1C

- Launched in Dec 1995
- H = 817 km, revisit 5 days, inclination 98.69 degrees
- Three sensors with resolutions of 5.8 m (Pan), 23.5 m (IR) and 188 m (WiFS)
- 26-degree side-to-side pointing
- Better than SPOT; cross-track stereo possible, along-track not possible.
This has been very successful.

3. ORBIMAGE – OrbView-3

- Launch scheduled for mid 1998, then 1999; current status not known.
- H = 470 km, revisit 3 days
- Two sensors, resolutions of 1 – 2 m (Pan) and 4 m (MS)

OrbView-4 is scheduled for 2000.

4. Space Imaging EOSAT – Carterra-1 (now IKONOS)

- Launch scheduled for late 1997, then Spring 98, then Spring 99 (failed);
successful launch 24 September 99
- H = 680 km, revisit 4 days, inclination 98.2 degrees
- Resolutions of 1 m (Pan) and 4 m (MS)

This is the first 1 m resolution RS system in space, and it has yielded fantastic results.
It is owned by a consortium: E-Systems, joined by Mitsubishi in 1995 and EOSAT in 1996.
At the heart of the system is the IKONOS camera telescope.

IKONOS - Quick Facts

The camera system consists of four main parts.
Total system weight: 376 lbs (171 kg)
Total system power consumption: 350 W

i. Optical Telescope Assembly

Assembly size: 1524 mm × 787 mm
Assembly weight without focal plane unit: 100 kg
Optical design: three-mirror anastigmat
Focal length / focal ratio: 10 m / f/14.3
Image resolution (at nadir): one-metre panchromatic, four-metre multispectral
Primary mirror: 0.7 m diameter × 100 mm thick (13.4 kg)


ii. Imaging Sensors and Electronics

Focal plane unit size: 25 cm × 23 cm × 23 cm
Panchromatic sensor: 12-micron pixel pitch, 13,500 pixels
Multispectral sensor: 48-micron pixel pitch, 3,375 pixels

iii. Digital Processing Unit

Unit size: 46 cm × 19 cm × 31 cm
Compression rate: 11 bits per pixel compressed to 2.6 bpp
Compression speed: 4 million pixels per second per processing channel

iv. Power Supply Unit

Unit size: 18 cm × 20 cm × 41 cm

5. SPOT Image – This has been very successful; SPOT 1 was launched in 1986.

SPOT 4
- Launched 24 March 1998
- Two sensors, resolutions of 10 m (Pan) and 20 m (MS)

SPOT 5
- Launch scheduled for 2002; resolutions of 5 m and 2.5 m (Pan) and 10 m (MS)

SPOT 5 has been designed with mapping applications in mind: it is intended to meet
planimetric standards of 10 m accuracy to the 1:50,000-scale specification, with a
vertical accuracy of 5 m.

6. Russian Data Sources

RESOURS-F
- Camera systems with resolutions of 2 – 20 m
- Several 30-day missions each year
- Archive of more than 2 million multispectral and panchromatic scenes from all
parts of the world

KOSMOS
- Used two cameras. The Pavoda systems, for example, are non-military missions;
more than 2 million MSS scenes exist, many at 2 m resolution, and the data is
readily available.

7. Other Systems
These are defined as secondary because they were not set up for commercial purposes,
i.e. they exist for national or scientific use.


a) Taiwan – Rocsat 3 (2002); Rocsat 1 launched 26/1/99.

These are experimental, and it is very unlikely that the data will be accessible on the
open market.

b) Germany – Tubsat-B, MOMS-2P (1996)

Tubsat-B is a small satellite and not commercial.

c) Israel – EROS (1997, failed January 98), David (1998)

d) Japan – ADEOS (1996), ALOS (2002)

ALOS will carry a phased-array SAR.

e) USA – Clark (97), Lewis (97), Corona and Lanyard (61 – 72).

NB: In 1999 there were 78 launches, with 9 failures.

Detection of change and monitoring: RS is doing a lot in the areas of geography,
geology, agriculture, topographic mapping, etc.

Applications of satellite imagery

Cartography
- Ortho-rectified images
- Map revision at 1:25,000

Forestry
- Disease and windblow detection
- Forest degradation
- Timber volume estimation is possible

Agriculture
- Precision farming: blanket application of fertilizer and pesticides over the whole
of a farm is being discouraged
- Growth monitoring
- Shortage prediction

Environmental monitoring
- Coastal studies
- Land-use changes
- Compliance and regulations
- Mining operations

News gathering
- Images of disasters and world events for CNN etc.


Alongside the technical problems, there are certain legal and policy issues to be raised.

POLICY ISSUES
- United Nations Principles on Remote sensing
- US Policy on Earth observation
- CEOS Principles
- IEOS Principles
- European Commission Directive on Databases.

United Nations Principles on Remote Sensing

Fifteen principles were approved unanimously by the UN General Assembly on
11 December 1986. Among them:

II. Remote sensing shall be carried out for the benefit of all countries.
V. Promotion of international co-operation.
VI. Maximise the benefits of remote sensing.
XII. The territory sensed should have access to the data.

Principle XII gives a sensed state or country the right to gain access to data of its
own territory under three conditions:
- as soon as the data is produced;
- access will be non-discriminatory;
- access will be on reasonable cost terms.

These principles were not easy to implement, as they make no mention of the cost of
putting up the satellite. Does this happen in practice?

United States Policies on Remote Sensing:

- The Land Remote Sensing Policy Act of 1992 and the Remote Sensing Policy signed by
President Clinton in 1994 cleared the way for the development of the new, high-resolution
systems, BUT:

- the national security of the US and the international obligations and foreign policies
of the US must be observed by system operators;

- certain data must not be collected at times of national security alerts.

Remote Sensing and GIS in Mineral exploration

The main fields of application of remote sensing and GIS in mineral exploration are:


1. Logistics

2. Mapping structure

3. Lithology

4. Location of alteration zones

At all levels of mineral exploration, RS and GIS can be of great benefit when used in
conjunction with other, more conventional sources of information, ranging from
topographical and geological maps to geophysical and geochemical data.

1. Logistics

The use of remote sensing as a logistical aid is widespread and generally accepted. False-
colour composite imagery derived from Landsat TM or SPOT, geometrically corrected
to standard map projections with the usual legend and geographical grids, is a common
aid to field geologists and geophysicists.

Satellite imagery was generally accepted as a logistical aid in exploration for the
following reasons:

1. Mineral exploration takes place in remote areas of the world and in most cases these
areas are poorly mapped, thus the need for base maps to plan any exploration program.

2. Desert areas without permanent human settlement or sources of surface water can be
transformed to irrigated farmland removing problems of water supply and access for field
work but greatly increasing the difficulty of actually carrying out the work.

3. Government agencies may often be poorly informed about such changes, or even
unwilling to discuss them with foreigners.

4. The need for accurate and timely information on access and land-cover types can often
only be met by remote sensing satellites, whose observations are not restricted by local or
national boundaries and whose imagery can provide quantitative, map-like information.

A preliminary reconnaissance of an unknown area is unlikely to justify more than a
standard false-colour composite, purchased from a distributor. This can be used as a
source of basic information about the area and serve as a planning base for the first stage
of field work. The use of different band combinations and special filters can enhance
those features which are of particular interest to the project, and geometric correction is
often desirable at this stage. The Directorate General of Mineral Resources in Saudi Arabia
makes very extensive use of satellite-derived map-like products in its mapping and
exploration programme.


The type of imagery required for logistical purposes depends to a large extent on the
size of the area to be covered, but it is usually desirable to have the finest resolution
affordable. This imagery can then be used to locate access tracks, bridges and similar
man-made features, so the use of a resolution which will unambiguously display these
features is important. Colour imagery is also almost essential, as it allows discrimination
of densely vegetated land and areas of water. Since any mineral exploration programme
is in essence a process of progressive elimination of less promising ground and the
retention of only the most promising, often widely scattered, portions, the cost of
regional coverage with such attractive products is rarely justified, although they can be
of great value during the detailed exploration and prospect-evaluation stages of a programme.

2. Regional Geological Mapping – Lithology and Structure

The process of preparing a geological map consists in most cases of scattered ground
observations of rock outcrops, followed by what is often a highly subjective interpolation
between the observation points. This was usually carried out with the aid of air-photo
interpretation.

In any part of the world the collection of surface data is restricted by a lack of natural
rock outcrops, and it may even be necessary to create artificial outcrops by pitting,
trenching or drilling where there are severe ambiguities in the geological interpretation
or where critical exposures are lacking. In areas of considerable exposure, time rarely
permits the detailed mapping of all rock outcrops.

Satellite remote sensing has not fully replaced conventional air-photo interpretation in
most geological mapping projects. Satellite imagery is less costly than air photography,
and it is usually available to the field geologist at well-established field camps or back
at headquarters. Modern satellite imagery has the capability of greatly enhancing
geological mapping, as it now provides fine spatial resolution together with its
multispectral nature and the power of digital image processing.

3. Alteration zones

Mineral deposits often have extensive alteration zones associated with them; these occupy
volumes which are sometimes two or three orders of magnitude greater than the actual
mineral deposit. These alteration zones may be:

i. Geochemical only, with no change in the bulk chemistry or mineralogy of the affected
rocks.

ii. Volumetrically small mineralogical modifications, e.g. conversion of goethite to
magnetite or pyrite to pyrrhotite, which have dramatic effects on the geophysical response
of the zone but are usually not detected by remote sensing.

iii. Major mineralogical changes, e.g. sericitisation of feldspars and introduction of
iron in the form of oxides and sulphides. These mineralogical modifications are often
exaggerated in the zone of weathering, since the altered rocks are often more susceptible
to chemical weathering than their unaltered counterparts. These are the alteration zones
which can, in arid and semi-arid environments, be located using satellite remote sensing.

The main characteristics of these alteration zones which are susceptible to detection using
satellite remote sensing are:
i. a general increase in the overall reflectance or albedo;
ii. the presence of ferruginous staining;
iii. the presence of characteristic assemblages of clay minerals.

The first two of these have been detectable since the days of the early Landsat satellites.
The MSS, with its four wavebands in the visible and near-infrared portions of the spectrum
and its spatial resolution of 80 metres, was able to detect large areas of bleached rock or
weathered material which had a much higher reflectance at visible wavelengths than the
surrounding rocks.

Band 5 of the MSS is in the red portion of the spectrum and is sensitive to ferruginous
staining, which turns rocks and their weathered products red. A whole range of ratios was
devised by mineral exploration experts in an attempt to highlight these ferruginous zones
in MSS imagery.

The launch of Landsat 4 in 1982, carrying the TM scanner with its greatly increased
waveband coverage and finer spatial resolution, and in particular with the 2.2-micron band
(band 7) added mainly in response to appeals by the geological community, greatly
enhanced the chances of detecting alteration zones using satellite imagery.

The importance of data integration

RS can provide three types of information of great importance in mineral exploration;
these are:

i. Structural – derived from satellite or airborne digital imagery.

ii. Lithostratigraphic – information on the spatial distribution of the significant
rock units.

iii. Mineralogical – the distinctive mineralogy of alteration zones associated with
mineral deposits can be discriminated using RS.

In a few cases, the geochemical halo associated with mineralisation may be sufficiently
toxic to produce vegetation changes, and RS geobotany may be possible.

Undiscovered deposits are more subtle, and their recognition demands the careful
combination and analysis of data from a wide range of sources: structural information
from RS and surface mapping; lithostratigraphic data from regional mapping,
supplemented possibly by remote sensing; geochemical data from stream and soil
samples, supplemented possibly by remote observation of geobotanical or mineralogical
halos; geophysical information from airborne and ground surveys; and the results of all
previous exploration in the study area. All of these need to be combined in such a way
that the experienced explorationist can observe the inter-relationships of the multiple
parameters and then use his judgement to decide on the next step in the exploration
programme.

GIS is a tool which will enable the explorationist to make more efficient use of the data
he acquires, either during exploration or from other sources such as digital maps from
the OS. Remotely sensed data often forms a vital component of mineral exploration GISs
because its digital nature lends it readily to integration with other data sets, because
it can provide valuable indirect indications of the presence of mineralisation
complementary to information from geochemistry and geophysics, and also because
remotely sensed imagery can provide a recognisable geographic background against
which to display other data sets.
