Photogeology Remote and GIS Lecture Material Aduah Student
COURSE DESCRIPTION
This course concerns the application of photogrammetry and remote sensing to visually
extract geologic information from aerial photos and remote sensing imagery. The
following will be covered:
Relief displacement
Microwave systems
LECTURE HANDOUT FOR PHOTOGEOLOGY AND REMOTE SENSING
RECOMMENDED READING
Mather, P. M. (2010), "Is there any sense in remote sensing?", Progress in Physical Geography, 34(6), pp. 1-19.
Lillesand, T. M. and Kiefer, R. W. (2000), Remote Sensing and Image Interpretation, Wiley, 780 pp.
Jensen, J. R. (2000), Remote Sensing of the Environment: an Earth Resource Perspective, Prentice-Hall, New Jersey, USA.
Cracknell, A. P. and Hayes, W. B. (1991), Introduction to Remote Sensing, Taylor and Francis, 420 pp.
Legg, C. (1992), Remote Sensing and Geographic Information Systems: Geological Mapping, Mineral Exploration and Mining, Ellis Horwood, 278 pp.
Lattman, L. H. and Ray, R. G. (1965), Aerial Photographs in Field Geology, Holt, Rinehart and Winston Incorporated, 320 pp.
ASSESSMENT OF STUDENTS
Assessment of students will be in two forms: Continuous Assessment (40%)
and an End of Semester Examination (60%). The Continuous Assessment shall
include quizzes, class attendance and assignments. The End of Semester
Examination shall be marked over 60.
PLAGIARISM:
Plagiarism is an academic offence and the instructor will take acts of plagiarism
seriously. Culprits will be punished when caught in such acts. Information on
plagiarism can readily be found online.
All handed-in work must be your own. You may seek advice from other students
regarding design, techniques or software operations, but you must not share or duplicate
files. This includes finding another student's saved file on a computer, making minor
modifications and passing the work off as your own. Any offence will attract a
punishment such as deductions from accumulated marks.
Note: This is a working handout. Students should always bring it to lectures so that
they can add vital notes and complete the exercises it contains. Thank you.
PHOTOGEOLOGY
Aerial photos are classified according to the orientation of the optical axis of the camera.
The optical axis can be defined as the line along which the camera points. It connects the
centre of the film with the centre of the lens and extends straight out from the front of the
camera.
present the land surface from a comparatively unfamiliar angle. The appearance is
that of a pictorial map, unless shadows are present to accentuate some features on
the aerial photos.
[Figure: geometry of a vertical aerial photograph, showing the film plane with image points b1, p1 and a1, the focal length f, the perspective centre/exposure station, the flight height H, the optical axis, and the ground points A, P and B.]
The orthogonal projection of the perspective centre onto the photograph is called the
principal point. It is indicated as P’ on the photo and P on the ground. The identification
of the principal point will be explained in the laboratory exercises.
Scale of Aerial photograph
The scale of an aerial photograph is equal to the ratio between the focal length f and the
flight height H:

Scale = f / H
The scale is constant for vertical photographs of flat areas when the camera height is
constant. Over non-flat terrain the height of the camera above the ground is not
constant, and this produces changes in scale across the photograph:

Scale = f / (H - h)

Where h is the height of the terrain above the datum.
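The two scale relationships above can be sketched in a few lines of code. The focal length, flight height and terrain height below are hypothetical example values, not figures from the handout:

```python
def photo_scale(f_m, H_m, h_m=0.0):
    """Scale of a vertical aerial photograph: f / (H - h).

    f_m: focal length in metres
    H_m: flight height above the datum in metres
    h_m: terrain height above the datum (0 for terrain at datum level)
    """
    return f_m / (H_m - h_m)

# Example: a 152 mm camera flown 3,800 m above the datum
s = photo_scale(0.152, 3800.0)
print(f"flat terrain scale: 1:{1/s:,.0f}")   # 1:25,000

# Terrain 760 m above the datum gives a larger scale
s2 = photo_scale(0.152, 3800.0, 760.0)
print(f"high terrain scale: 1:{1/s2:,.0f}")  # 1:20,000
```

Note how the same camera and flight height yield a larger scale over higher ground, which is exactly the within-photograph scale variation described above.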
1. Stereoscopic vision - when you look at an object with two eyes, the eyes receive two
slightly different views, which are fused physiologically by the brain into a
'model' having three dimensions; the third dimension is only perceived when
objects are viewed with both eyes. This is called binocular or stereoscopic
vision. It is also possible to get a three-dimensional impression if we offer to
each of our eyes, instead of 'nature', a photo taken from a different camera
position - the so-called stereo pair.
For three-dimensional impression, usually stereoscopes and two sequential aerial
photos are used. The two main types of stereoscopes are:
1. the lens stereoscope (pocket stereoscope)
2. the mirror stereoscope
1. Lens stereoscope - this makes use of a pair of simple magnifying glasses to
keep the lines of sight approximately parallel. Most lens stereoscopes have a
magnifying power of 2-3 diameters. The primary drawback of the lens
stereoscope is that only 1/3 to 1/2 of the standard overlap can be studied
stereoscopically at a time.
2. Mirror stereoscope - this provides a view of the entire overlap, seen through a
system of mirrors and prisms. Most basic models afford no magnification, but
binocular attachments of 3x-8x magnification are available.
Pseudoscopic view
In stereoscopic viewing it is important to orient the photos in such a way that the left eye
sees only the left photo and the right eye sees only the right photo. If the photos are
viewed in reverse, a pseudoscopic view results, in which ups and downs are reversed:
for example, valleys appear as ridges and hills appear as depressions. This is also
termed relief reversal.
The terminology and geometric elements of a single vertical photograph are as shown
below:
[Figure: geometry of a single vertical photograph, showing the film negative with image points a, i and b, the camera and its focal length, the contact print, the centre point, the flight height, the optical axis, and the terrain with ground points d, O and e.]
The dashed line drawn through the centre of the film and the lens represents the optical
axis of the camera; this is perpendicular to the plane of the photograph (film), which is
horizontal.
The position at which the axis passes through the photograph is termed the centre point or
principal point of the photograph. In a truly vertical photograph this also represents the
plumb point or nadir point, which is defined as the photographic position representing the
point on the earth's surface vertically beneath the camera lens at the time of exposure. In
practical terms the vertical photograph is rarely absolutely vertical and the nadir and
centre point do not coincide.
The deviation from the vertical is called tilt, but in modern photography this is generally
small (less than two degrees); in older photographs it could be more.
The distance between the camera lens and the ground represents the flight height of the
airplane; the distance between the lens and the film represents the focal length of the
camera. The above diagram is not to scale.
The most significant geometric relationship shown is the fact that equal angles are
subtended at the camera lens by an object and by its photographic image. This relationship
holds regardless of the focal length of the camera lens and the flight height of the airplane.
Because equal angles are subtended by the object and its photographic image, the very
basic relationship among focal length (f), flight height (H), size of the ground object (O)
and size of the film image (i) can be derived:
f / H = i / O
The ratio of image size to object size is the general scale of the aerial photograph, thus:

Scale = Focal length / Flight height
The above relationship indicates that as focal length increases the scale of the photograph
becomes larger, and as flight height increases the scale becomes smaller.
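The relation f / H = i / O also lets us recover the ground size of a feature from its measured image size. A minimal sketch, with hypothetical measurement values:

```python
def ground_size(image_size_m, f_m, H_m):
    """From f / H = i / O: ground object size O = i * H / f.

    image_size_m: measured size of the feature on the photo (metres)
    f_m:          focal length (metres)
    H_m:          flight height above the ground (metres)
    """
    return image_size_m * H_m / f_m

# A 2 mm image feature, 152 mm lens, 3,040 m flight height
O = ground_size(0.002, 0.152, 3040.0)
print(f"ground size: {O:.0f} m")  # 40 m
```

This is simply the scale relationship rearranged: at 1:20,000 a 2 mm image distance corresponds to 40 m on the ground.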
Rapid adjustment
i. Place the photos approximately in position under the stereoscope by
estimation. The overlapping parts of the two photos must be adjacent
to each other. Shadows visible on the photos must fall towards the
observer.
ii. Place the right and left index fingers on corresponding points on the
right and left photos respectively.
iii. While viewing through the stereoscope, slowly move the photos with
each hand until the two fingers merge. Take the fingers from the
points.
iv. Adjust the photos by slight rotation or linear motion so that
stereoscopic imagery is visible over the whole field of view.
NB: This will be demonstrated during lectures, and students should learn to represent
these procedures diagrammatically where possible.
b. Careful adjustment
i. Place the photos under the stereoscope with the overlapping parts of the two photos
next to each other. The shadows visible on the photos must fall towards the
observer.
ii. Locate and mark the principal points on each photo (m1 and m2). This is done by
aligning opposite sets of fiducial marks with a straight edge.
iii. Transfer the principal points of the two overlapping photos and mark them (m1' and
m2'). By connecting the principal points and the transferred principal points, you
have the flight line.
iv. Lay a straight edge on both photographs and arrange the stereo pair in such a way
that the points m1, m2, m1' and m2' line up in a straight line. The flight line of the
left photo is then in one line with the flight line of the right photo. The distance
between m1 and m1', or m2 and m2', must be about the same as the stereo base of
the stereoscope used.
v. The mirror stereoscope is placed over the stereo pair in such a way that the line
joining the centres of the stereoscopic lenses is parallel to the flight line.
vi. Although the photos should now be seen in three dimensions, a little adjustment in
the distance between the photos may still be necessary, so the right-hand photo is
moved sideways until the spacing between corresponding images produces
comfortable stereoscopic viewing.
NB: Accurate and comfortable stereoscopic viewing requires that the eye base, the line
joining the centres of the stereoscopic lenses (the instrument base) and the photo base
(flight line) all be parallel. All parts of the stereo model can be observed by moving the
stereoscope while maintaining this parallelism. Objects that change position between
exposures (e.g. automobiles, trains, boats) cannot be viewed stereoscopically.
Radial displacement due to relief is also responsible for scale differences within any one
photograph and for this reason a photograph is not an accurate map. The fundamental
difference between a photograph and a map can be demonstrated by comparing the
central projection of a single photograph, in which all objects are positioned as though
viewed from the same point with the orthographic projection of a map, in which each
terrain point is positioned as though viewed from vertically above.
On any one photograph the amount of displacement due to relief increases with increasing
distance from the centre point and with increasing difference in elevation between any
point and the selected datum reference, as shown below.
Points A and B are at the same height above the datum plane, but point A is farther from
the ground nadir than B. The figure shows that the radial displacement due to relief is
greater for point A than for B. This represents a real difference in terms of distance on the
ground. Note also that point C is at the same distance as point A from the ground nadir,
but because of its lesser height it is not displaced as great a distance as point A.
For any one photograph the amount of radial displacement of the top of an object, with
respect to its base and with respect to the image position of the base on the photograph,
can be determined conveniently from:

m = (r * h) / H

Where:
m = radial displacement of the top of an object with respect to its base
r = radial distance on the photograph from the centre point to the base of the displaced image
h = height of the displaced object
H = flight height above the base of the displaced object

From the above relationship, radial displacement due to relief increases with increasing
distance (r) from the photograph centre point and with increasing height (h) of an object.
Relief displacement also varies with flight height and, indirectly, with focal length.
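The relief displacement formula above can be sketched directly. The tower height, radial distance and flight height below are hypothetical illustration values:

```python
def relief_displacement(r_m, h_m, H_m):
    """Radial relief displacement on the photo: m = r * h / H.

    r_m: radial distance on the photo from the centre point to the
         image of the object's base (metres on the photo)
    h_m: height of the object above its base (metres on the ground)
    H_m: flight height above the object's base (metres)
    """
    return r_m * h_m / H_m

# Hypothetical example: a 50 m tower imaged 60 mm from the centre
# point of a photo taken from 3,000 m above the tower's base.
m = relief_displacement(0.06, 50.0, 3000.0)
print(f"displacement on photo: {m*1000:.1f} mm")  # 1.0 mm
```

Doubling either the object height or its radial distance from the centre point doubles the displacement, while doubling the flight height halves it, which matches the behaviour stated above.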
The flight direction lies along a line connecting one exposure position to the next and is
represented on each photograph of an overlapping pair by a line drawn through the centre
point and the image representing the centre point of the adjacent photograph; it is thus
parallel to the photobase. This is illustrated below:
Parallax can be demonstrated by holding a pencil upright at arm's length and viewing it
with the left eye (right eye closed) against a background object, such as the wall across
the room (the reference system). Now close the left eye and view the pencil with the right
eye, and note the apparent shift of the pencil with respect to the background object. This
apparent shift in the position of the pencil is parallax, or parallactic displacement.
Similarly terrain features on two overlapping aerial photographs taken from different
positions will exhibit parallactic displacement. The amount of parallactic displacement is
related to the height of a feature and to the geometry of the stereoscopic model; it can be
measured from a pair of overlapping aerial photographs and used to calculate vertical
intervals that in turn can be used in determining stratigraphic thicknesses, dips of beds,
and other geologic parameters.
To find the parallax difference between the bottom and top of a pole, represented by
MN on the left photograph and by M'N' on the right photograph, it is necessary only to
measure the distances A and B. The distance (A - B) is the parallax difference. The figure
also shows the distances to be measured in determining the adjusted photobase; these
distances are CC' and MM', and the photobase adjusted to the base of the pole is CC' - MM'.
II. Adjust the horizontal separation of the dots by turning the drum screw, which
moves the right hand target plate until the two target dots appear to fuse into a
single dot. The single fused dot, seen stereoscopically is raised or lowered by
turning the drum screw until it appears to rest or float on the ground surface at the
first point selected.
III. For convenience, and so that more consistent readings are obtained, turn the drum
screw until the floating dot rises above the terrain point in question, then turn the
screw back until the floating dot descends to the ground surface. The instrument
reading is recorded and the procedure is repeated for the second point selected.
This is as shown below;
The above diagram illustrates the final separation distance of the stereometer targets in
measuring the parallax difference between the top and bottom of the pole.
The difference in readings is the parallax difference between the two terrain points and is
the figure used to calculate the vertical interval between the points as described above. It
is best to take two or three readings of parallax. In making parallax measurements the
fused dot will readily be seen where it floats above the apparent ground surface of the
stereoscopic model, but it will appear to split into its two component dots as it is lowered
below the ground surface.
Assume that the vertical pole is located exactly on the flight path. On each photograph
of the stereoscopic pair the top of the pole would appear to be displaced radially outward
from the centre point, as shown below:
In order to determine the height of the pole, the displacement of the top relative to the
bottom must be known. This is fundamentally related to the absolute stereoscopic
parallax of the top and bottom of the pole as determined from the stereoscopic model. In
photogrammetric terms, the absolute stereoscopic parallax of a point is defined as the
algebraic difference, parallel to the photobase, of the distances of the two images of the
point (the top of the pole on the left-hand photograph and the top of the pole on the
right-hand photograph) from their respective centre points.
From the diagram above, the absolute stereoscopic parallax will be given by

y - (-y'), i.e. (y + y'), for the top of the pole
x - (-x'), i.e. (x + x'), for the bottom of the pole

The difference between the absolute stereoscopic parallax of the top and bottom of the
pole is (y + y') - (x + x'). This is termed the parallax difference and can be shown by
simple algebra to be equal to d1 + d2 in the last two diagrams above.
The basic relationship between the height of an object and the various geometric
elements of the stereoscopic model can be derived. In accordance with the law of similar
triangles it can be shown from the figure that

(OM + O'M) / (P1 + P2) = (H - h) / h

and since B = OM + O'M and P = P1 + P2,

B / P = (H - h) / h

Then

P = (B * h) / (H - h) ………………………………………..eqn 1

Now the parallax difference (Δp) between the top and bottom of the pole has been shown
above to equal d1 + d2, and

Δp = d1 + d2 = (f * P1) / H + (f * P2) / H = (f * P) / H

From this

P = (H * Δp) / f

Substituting for P in eqn 1:

(H * Δp) / f = (B * h) / (H - h)

But the photobase b = (f * B) / H, so B = (b * H) / f. Further substitution for B gives

Δp = (b * h) / (H - h)

Solving for h:

h = (H * Δp) / (b + Δp) ………………………………………..eqn 2
Eqn 2 is the basic relationship of parallax difference to elements of the stereoscopic
model and is called the parallax equation. From it, measurements made directly from the
stereoscopic model, such as the photobase b and the parallax difference Δp, can be
combined with other known or easily obtainable data to determine the heights of
objects such as outcrops, or heights of the terrain itself.
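The parallax equation can be sketched as a small function. The flight height, photobase and parallax readings below are hypothetical illustration values, not measurements from the handout:

```python
def height_from_parallax(H_m, b, dp):
    """Parallax equation (eqn 2): h = H * Δp / (b + Δp).

    H_m: flight height above the base of the object (metres)
    b:   adjusted photobase measured on the photos
    dp:  parallax difference between the two terrain points
    b and dp may be in any unit, as long as it is the same for both.
    """
    return H_m * dp / (b + dp)

# Hypothetical stereometer readings: H = 3,000 m, photobase 88.5 mm,
# parallax difference 1.5 mm between the two terrain points.
h = height_from_parallax(3000.0, 88.5, 1.5)
print(f"vertical interval: {h:.1f} m")  # 50.0 m
```

Because b and Δp enter only as a ratio, the photobase and parallax difference need not be converted to ground units; only the flight height fixes the scale of the answer.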
Aerial photographs can provide us with two types of information: metric and semantic.
1. Metric information concerns the position of objects on the earth's surface. The
exact position of an object is determined by measuring linear dimensions such as
distances, angles etc.; this is the field of photogrammetry.
2. Semantic information concerns the nature or identity of objects imaged on aerial
photography, conveyed by properties such as tone, texture, reflectance etc.
Aerial photo interpretation techniques are widely used in geography, geology, hydrology
and geomorphology. Aerial photographs cannot be regarded as a sole source of
information; however, efficient photo interpretation should make available all the
semantic information registered on the photograph.
The degree or quality of identification possible from the stereoscopic photo image
depends largely on the inherent visibility of the object under study. Some objects such as
streets, houses and roads are inherently visible in the photo image, although the actual
visibility in any specific case depends on the scale, the quality of the photo image and on
such incidental factors as the superposition of objects (e.g. trees, roads, shadows and
clouds). Other objects such as soils, subsurface water and many rock types are inherently
invisible. Their identification from the stereoscopic photo image is only partly possible; here
deductions must be employed to assist analysis and classification. This is the reason why
photo interpretation must be combined with field and laboratory investigations. Generally
four phases are defined in photo interpretation; these are:
a. photo reading (i.e. detection, recognition and identification)
b. analysis
c. classification
d. deduction
a. Photo reading – The first step in photo reading is detection, i.e. the mere discovery
that something is there. The second step is recognition: through its shape, size
and other visible properties the interpreter recognizes a familiar object. Finally
there is the step of identification, in which he/she identifies the object or feature as
something known by a specific name or term. Recognition and identification can
be aided by the provision of photo keys, which label objects for easy recognition
on aerial photographs. Identification is also aided by some measure of deduction.
c. Classification – When units are to be compared on the basis of varying physical and
cultural characteristics as identified on the photographs, the photo interpreter will
define units. A comparison based on the defined characteristics of the units
resulting from the analysis constitutes the third phase of photo interpretation.
Classification may yield all the information needed; however, when the objects under
study are not clearly visible on the photograph, fieldwork or other investigations will
be needed.
The interpretation of aerial photographs provides a great deal of geologic data to field
geologists. The field geologist mapping with the aid of aerial photographs consciously or
subconsciously practises photo interpretation to some extent.
Photo interpretation involves the observation of certain tones, shapes and other
characteristics of photographic images and then determining their geologic significance
by deductive or inductive reasoning or a combination thereof. Obtaining maximum
information usually requires careful study. Some illustrations of this are provided
as an appendix.
The basic observational criteria used for geologic interpretation of aerial photographs
include the following: photographic tone or shades of gray, texture, pattern, shape and
size. Other observational elements are usually just a combination of the above criteria.
Vertical Exaggeration
Scale
The scale of photographs obviously bears on the ability to see details of terrain, and as
scale decreases less detail can be discerned. It is perhaps unfortunate that small-scale
(1:50,000 or smaller) and large-scale (1:20,000 or larger) photographs both have useful
(and commonly different) applications in geologic mapping. It is important to remember
that one stereoscopic pair of 1:60,000-scale photographs, for example, gives the same
ground coverage as nearly a dozen 1:20,000-scale photographs of the same area.
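The coverage comparison above can be checked with a quick calculation, assuming the standard 23 cm x 23 cm aerial film format for both scales (an assumption, since the handout does not state the format):

```python
# Ground footprint of a 23 cm x 23 cm frame at two photo scales.
frame_m = 0.23  # film frame side in metres (assumed standard format)

side_60k = frame_m * 60000  # ground side length at 1:60,000, metres
side_20k = frame_m * 20000  # ground side length at 1:20,000, metres

area_ratio = (side_60k / side_20k) ** 2
print(f"one 1:60,000 photo covers {area_ratio:.0f} times the area "
      f"of a 1:20,000 photo")
```

The nominal ratio is 9; allowing for the forward and side overlap required between adjacent 1:20,000 photographs, the number actually needed approaches the "nearly a dozen" quoted above.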
In general, aerial photographs give geologic information in two broad areas, i.e. structural
and lithologic information.
Photographs of sedimentary terrains yield a greater amount of structural and lithologic
information than those covering areas of igneous and metamorphic rocks, because the
non-homogeneous nature of sedimentary terrain results in marked differential erosion
characteristics that stand out on aerial photographs. Those erosional contrasts are
particularly conspicuous because they occur within relatively short outcrop distances.
On the other hand, diagnostic landforms may be locally important in studies of igneous
terrains, especially where extrusive rocks prevail. Photographs of metamorphic terrains,
however may reveal little geologic information because of the very nature of the
metamorphic processes, which tend to destroy the characteristics of sedimentary and
igneous rocks.
Regardless of rock type the climatic environment also affects the relative amounts of
geologic information that can be interpreted from aerial photographs. In general, more
data can be observed in the arid and semiarid regions, where more rock surface is
exposed to view than in heavily vegetated humid areas. In addition, the stage of erosional
development of the terrain affects what can be interpreted from aerial photographs. The
amount of information, particularly structural, generally will be greater for a particular
type of geologic terrain and climatic environment during the so-called mature stage of
erosional development. At this stage streams show their greatest adjustment to the
structure, and a greater third dimension of the terrain is visible for study in the
stereoscopic model.
Structural information
Flat-lying beds- Flat-lying or nearly horizontal beds are readily recognised as such where
different sedimentary rocks exhibit contrasting photographic tones expressed as irregular
bands extending along the topographic contour. Tonal contrast is especially well shown
where vegetation is sparse, as in many arid and semiarid regions, especially in desert
areas such as the northern parts of Ghana, Libya and the western United States.
For flat-lying strata the topographic break extends along the contour. In areas of heavy
vegetation the trace of the slope break may be the principal indication of flat-lying beds.
Other indications of flat-lying beds include closed-loop patterns that reflect different
beds, or vegetation growing preferentially on one or more beds. If drainage lines are well
developed, they exhibit dendritic patterns on horizontal strata.
Dipping beds- Numerous expressions of dip of sedimentary beds may be seen on aerial
photographs. Dip direction generally is conspicuous where topographic surfaces coincide
with bedding surfaces. Where bedding is expressed by bands of differing photographic
tone or by topographic breaks in slope due to resistant units, the direction of dip may be
apparent where a stream cuts across the strata and the bedding trace forms a V-shape in
plan view. If beds are obscured by vegetation or surface materials, the direction of dip
sometimes can be deduced from drainage characteristics of the area. Major streams
commonly flow parallel to the stratified rocks. Dipping beds on the nose of a fold may be
reflected in major streams that curve around the nose; the convex side of the curve
indicates the direction of plunge of anticlinal folds.
Faults: High-angle faults commonly stand out on aerial photographs. This is a direct
result of the aerial view, which allows a large area and the gross features within it to be
seen at one time. For example, many alignments that are inconspicuous on the ground are
clearly seen on aerial photographs. Most high-angle faults are expressed on photographs
as straight or gently curving lines, and features that are not obviously man-made should
be carefully noted on the photographs for later field examination.
Lithologic Information
The appearance on aerial photographs of a particular rock type may vary considerably,
depending especially on the climate and the amount of relief in areas where the rock
occurs. Hence, it is usually not possible to establish a set of criteria for the recognition of
rock types that will be applicable to all areas. For some heavily vegetated areas, it may be
a real accomplishment merely to distinguish on photographs rocks of the igneous,
metamorphic and sedimentary categories from one another. In a broad way, however,
certain lithologic information can be obtained from photographs.
In the absence of information that might reveal the lithologic character of rocks in an area,
it is always desirable to delineate areas of difference on the photographs and to
investigate these differences thoroughly in a ground study.
Igneous rocks - Intrusive igneous rocks, particularly those forming stocks, cupolas and
batholiths, commonly reveal a crisscross pattern of joints that is rather diagnostic of
igneous terrain. This pattern often can be seen not only in areas of good exposures but
also in many vegetation-covered areas, and it is perhaps the most suggestive criterion of
igneous rocks as seen on aerial photographs.
Metamorphic rocks - Metamorphic rocks are difficult to interpret from aerial photographs
because large scale distinguishing characteristics are generally lacking. The best clue that
rocks are metamorphic is probably the conspicuous parallel alignments of minor ridges
and intervening low areas that may reflect regional cleavage or foliation. Not all
metamorphic rocks develop conspicuous cleavage or foliation, but where such structures
are present, the ridges and low areas comprise a "topographic grain" that is much finer
than that found in sedimentary areas. Although the criteria for interpreting metamorphic
rocks from aerial photographs seem meager, it should be noted that few investigations
have been made with the specific objective of interpreting metamorphic terrains.
REMOTE SENSING
Remote Sensing (RS) is the study of objects from a distance, without coming into contact
with them. Early photographers such as Nadar in 1859 figured out that they needed to get
up higher to see a wider view of the Earth. Thus cameras were sent up in gas balloons
and later in airplanes, and Earth remote sensing, or Earth observation, was born.
The term applies to the acquisition of information, usually in image form, about the
surface of landmasses and oceans and the atmosphere above them, by airborne or
spaceborne sensors. These sensors can be active or passive and receive reflected or
emitted electromagnetic radiation.
RS includes airborne and spaceborne radar and lidar systems. There is no clear distinction
between RS and airborne geophysics, since passive geophysical techniques (gamma-ray
spectrometry, magnetometry) and active techniques (airborne electromagnetics) also
belong to various aspects of RS.
In the last 200 years RS has progressed from using gas balloons to take photographs to
highly sophisticated sensors taking imagery of various parts of the world.
Check the following website to see some interesting basic material about RS:
http://observe.arc.nasa.gov/nasa/education/gis/opening.html
Aerial photographs are obtained from photographic film, which uses radiation from the
visible part of the spectrum; satellite imagery records reflectance or radiation from both
the visible and invisible parts of the spectrum, using electronic detectors.
Aerial photographs do not have the same geometric properties as maps because of TILT
and RELIEF DISTORTIONS.
The effect of RELIEF distortion cannot be removed directly, because it varies across the
photograph. However, with two aerial photographs of the same piece of terrain taken
from two different aircraft locations, features can be matched in BOTH photographs. The
same feature in the two photographs will then show different relief displacements, and
this difference gives the height or elevation of the feature.
Every image has geometric and spectral characteristics. Consider a square house tiled
with orange tiles and red tiles: you may think of an imaging system which records the
house as a perfect square (a system with high geometric fidelity), or one that records the
house as an irregular quadrilateral (a system with low geometric fidelity).
Also consider an imaging system which records the two distinct orange and red tones
(high spectral fidelity), or one that records only one tone (lower spectral fidelity).
[Figure: the original house with its red and orange tiles, alongside versions as recorded by systems of differing geometric and spectral fidelity.]
If you were able to interpret the above, then we are getting somewhere!
Furthermore, the imaging systems are electronic, not photographic; thus they can detect
reflectances from the Earth's surface which are not detectable by film. For example,
imaging systems may detect variations in the infrared radiation reflected by trees in a
park, which panchromatic film would not detect.
Thus the student should be aware that despite superficial similarities satellite images are
both GEOMETRICALLY AND SPECTRALLY different from aerial photographs.
By correcting the geometric distortions known to exist in an aerial photograph we can get
an ORTHOPHOTO which can be digitized or otherwise handled like a paper map.
Likewise you will understand that by correcting the different geometric distortions and
other corrections known to exist in satellite images we can get an ORTHOIMAGE which
can also be digitized like a paper map.
Satellite images have different geometric distortions from aerial photographs, and these
must be corrected for in a different way (less tilt and height distortion, more curvature
distortion). Satellite images also have different spectral characteristics, due to the earth's
atmosphere and the fact that recording is electronic, thus resulting in different information
being available in orthoimages from that in orthophotos.
Students should identify the basic differences between an aerial photo and a satellite
image.
Electromagnetic radiation includes a very wide range of energy, from X-rays through visible light to radio waves. All this energy can be transmitted through a vacuum; no medium is required for its transmission. Only a small portion of the spectrum is actually used for Remote Sensing, although the various systems in use span almost the whole of it, as shown below:
1. Ultraviolet – This is of great interest to geologists, since many minerals show characteristic fluorescence at these wavelengths. This feature is used for mineral identification and sometimes in ground prospecting; it has been used to detect scheelite associated with gold mineralisation and for monitoring oil slicks. The fluorescence of oil films on the surface of the sea can assist in identifying the type and source of the oil, so it can also detect oil spills for environmental applications.
2. Visible wavelengths – These are not the most useful for geological purposes, because rocks, minerals and soils do not show distinctive spectral differences in the visible portion. They were used essentially for water quality, pollution and coastal bathymetry. Two of the four Landsat MSS bands and two of the SPOT multispectral bands used visible light.
3. Near Infrared – Imagery is usually sharp, with good contrast, and this was of great value for topographical mapping purposes. The geological information content of this band is discussed below.
In general, the short wavelengths are absorbed by natural materials, especially water, while longer wavelengths penetrate further into soils and overburden, especially when these are dry. The amount of energy scattered back to the sensor depends on:
1. Direct system – the incoming radiation hits a surface where the image is formed (i.e. a film).
2. Indirect system – the incoming radiation or energy is measured and recorded in a non-photographic manner by a sensor.
DIRECT SYSTEM
Advantages Disadvantages
INDIRECT SYSTEM
Advantages Disadvantages
1. Optomechanical scanners – These use a rapidly moving mirror to scan the surface below in a predetermined fashion, e.g. the Landsat Multispectral Scanner, first launched in July 1972 (the Thematic Mapper followed on Landsat 4 in 1982).
2. Pushbroom scanners – These have no moving parts, but record radiation from the surface below by means of arrays of sensitive semiconductors (charge-coupled devices, CCDs), e.g. the SPOT satellite launched in 1986.
An ideal RS System
This should consist of the following:
(Diagram: an ideal remote sensing system, comprising source, target and detector.)
3. Scattered signal – when the energy hits the target it is scattered, resulting in severe energy loss; the reflected energy then returns to the detector with the signal considerably diminished.
4. EM spectrum – in the visible region nearly 100% of the energy is transmitted to the target; in other regions of the EM spectrum only limited energy is transmitted, because of atmospheric gases such as water vapour and carbon dioxide, and because of the ozone layer. Indirect systems are capable of recording a greater part of the EM spectrum, for example the Landsat MSS.
I will illustrate this with a simple example, hoping not to confuse the student! Our skin is a natural infrared (heat) sensing device. You should expect the underside of your arm to feel warmer over vegetation than over bare soil, and also warmer over soil than over water.
For example, a very well known satellite remote sensing system, the THEMATIC MAPPER, records reflected sunlight in the following bands:
1. 0.45-0.52 micrometers = used for e.g. coastal water mapping , soil or vegetation
differentiation, deciduous and coniferous differentiation.
2. 0.52-0.60 micrometers = used for e.g. monitoring healthy vegetation
3. 0.63-0.69 micrometers = used for e.g. plant species differentiation
4. 0.76-0.90 micrometers = used for e.g. delineating water bodies
5. 1.55-1.75 micrometers = used for e.g. snow / cloud delineation
6. 10.4-12.5 micrometers = used for e.g. stressed plant detection
7. 2.08-2.35 micrometers = used for e.g. hydrothermal mapping
The above also indicates some of the application areas of Remote Sensing; many more will be studied during the main course. Thus, by applying remote sensing principles, we may, with an appropriate mix of data from different bands, detect e.g. sick conifers in mixed woodland or areas of new flooding.
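The band arithmetic behind such detection can be sketched as a normalized-difference index. This is a minimal sketch with invented DN values: healthy vegetation reflects strongly in TM band 4 (near-IR) and weakly in band 3 (red), so the index is high for healthy plants, lower for stressed ones, and negative over water.

```python
# Sketch: per-pixel normalized difference of two bands, (b4 - b3) / (b4 + b3).
# The DN lists below are hypothetical, not real scene data.

def norm_diff(b3, b4):
    """Per-pixel (nir - red) / (nir + red); inputs are equal-length lists."""
    return [(nir - red) / (nir + red) for red, nir in zip(b3, b4)]

red = [30, 40, 60]    # band 3 DNs: healthy tree, stressed tree, water
nir = [120, 70, 20]   # band 4 DNs for the same three pixels
print(norm_diff(red, nir))  # healthy ~0.6, stressed ~0.27, water negative
```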
A Remote Sensing system will have a detector, or a set of detectors, for each band of interest.
The detector is electronic – not film! A great variety of detectors exist, but, stated simply, a detector is coated with a material (such as lead sulphide, indium antimonide, cadmium compounds, etc.) which, when irradiated by specific wavelengths, generates an electric current. Indium antimonide (InSb) is sensitive in the range 1.55-1.75 micrometres, a wavelength band in which clouds are highly reflective but snow absorbs strongly; thus an InSb detector held over cloud would be expected to generate considerable electric current, but over snow hardly any. So, depending on where the detector is directed, we can tell what is there; and by recording the electric current generated in a data file we have a permanent record of what was there - or at least of where the detector was pointing.
The detectors are carried on board satellites. Each detector reading is called a pixel and has a value related to the amount of electric current generated at each recording position. Satellites follow a strictly predefined orbital path, so the actual position of a satellite at any time of the day or night is known.
Within the satellites themselves, the detectors too have strictly predefined pointing directions (achieved in a great variety of ways, depending on the class of remote sensing imaging system, such as regularly scanning in a certain way, or always being fixed in a particular direction). More of this later!
Geometrically, aerial photographs are much affected by tilt, but Remote Sensing images hardly at all. Both aerial photographs and Remote Sensing images are affected by relief distortion, but the seriousness of this distortion is much reduced for satellite Remote Sensing images. Earth curvature introduces considerable distortion into satellite Remote Sensing images and very little into aerial photography.
Maps are usually produced from vector data sets and represent selected information; names, grid, graticule, etc. are added to them. Images carry much more complex textural information than maps.
There are three main classes of orbits into which satellites can be placed:
1. Equatorial Orbits
The satellite orbits the earth in or near the plane of the equator. The only equatorial orbit currently used for a remote sensing satellite is the geostationary orbit: a satellite in this orbit appears to remain fixed over the same point on the equator, hence the term. These orbits are mainly used for meteorological remote sensing, where great surface detail is not required but very frequent images are essential.
2. Polar Orbits
The satellite orbits around or near the North and South poles. Most remote sensing satellites are in polar orbits. The reasons for this are:
The earth rotates beneath the ground track as the satellite travels from pole to pole. The satellite orbit is essentially fixed in space relative to the centre of the earth, but the earth itself rotates in a plane approximately at right angles to the plane of the orbit. This permits a single satellite to eventually cover the entire surface of the earth in successive orbits.
Because the orbit is slightly offset from the poles and oblique to lines of longitude, the local sun time of each point along the orbital track will be the same. Most earth observation satellites are in such sun-synchronous orbits and acquire images between 9.30 am and 10.30 am, because this is the period of least cloud cover. It also provides oblique illumination, which highlights relief in satellite imagery.
3. Molniya orbits
These are usually, but not always, near polar. The satellite travels far out into space on one side of the earth and then passes very close to the opposite side. They are not used for remote sensing, but only for communication purposes at extreme northern latitudes.
The orbital period is governed by the height of the orbit. Most of the satellites are at altitudes between 700 and 1000 km, giving orbital periods between about 98 and 105 minutes. Some are flown in lower orbits, around 300 km; the American space shuttle, in particular, was used from such orbits to acquire more detailed imagery. Some extremely low orbits were also used for specialized military remote sensing satellites; however, the effects of atmospheric drag at these levels become serious.
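The height-period relation can be sketched with Kepler's third law, T = 2*pi*sqrt(a^3/mu). The constants below are standard textbook values, not from the handout; this is an illustration, not a precise orbit model.

```python
# Sketch: orbital period of a circular orbit from its altitude.
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def period_minutes(altitude_km):
    a = R_EARTH + altitude_km * 1e3          # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU) / 60

print(round(period_minutes(700), 1))   # ~98.6 min
print(round(period_minutes(1000), 1))  # ~105.0 min
```

This reproduces the roughly 100-minute periods typical of remote sensing satellites at 700-1000 km.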
Most civilian remote sensing satellites orbit the earth at approximately 700-1000 km above the Earth's surface. There are several consequences of this:
Typical examples of some satellites and their characteristics are shown below. Satellites such as the Indian IRS-1 and the Japanese MOS-1 transmit data at rates of about 25 Mbit/s, and this order of data transfer requires specialized equipment: antennae must be large to increase the signal-to-noise ratio and thus minimize bit error rates, steering and programming of the antenna must be very precise, and high-density data recorders must be used. Data rates increase still further with finer spatial resolution and more spectral bands; SPOT and Landsat TM have data rates of 50 and 85 Mbit/s respectively. This tends to limit reception to national and regional facilities.
Previously, satellite data operators archived the data on magnetic tape. If the archive is connected to a receiving station, the raw imagery is usually stored in the high-density form (HDDTs) in which it is received. These HDDTs are then converted to standard computer-compatible tapes (CCTs) at customer request. Some processing of the raw data is usually carried out during conversion from HDDT to CCT: the digital values are rescaled using standard gain and offset values to occupy a full eight bits per band, and a crude geometric correction is usually applied to compensate for earth rotation beneath the satellite during imaging.
Magnetic tape is not necessarily the ideal medium for the storage or distribution of image data. The supply of satellite imagery on floppy disk is not a practical proposition except for educational purposes; a CD-ROM is more likely to be used for the storage of remote sensing imagery. A single 7-band Landsat TM scene occupies about 240 megabytes.
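The 240-megabyte figure can be checked roughly. The 185 km x 170 km scene size and 30 m pixels are the usual quoted TM frame parameters, assumed here (the thermal band's coarser pixels are ignored, so this slightly overestimates):

```python
# Rough check of the ~240 MB size of a 7-band Landsat TM scene.
pixels_across = 185_000 // 30          # samples per line at 30 m
lines         = 170_000 // 30          # lines per scene at 30 m
bands, bytes_per_pixel = 7, 1          # 7 bands, 8 bits per pixel per band

size_mb = pixels_across * lines * bands * bytes_per_pixel / 1e6
print(round(size_mb))  # ~245, consistent with the ~240 MB quoted above
```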
The RBV (Return Beam Vidicon) has a lens with no distortion. Images are in the form of charges registered on a photoconductive target, which an electron gun systematically scans. Its first use for earth resources mapping was on Landsats 1 and 2: three RBV cameras were bolted together, each provided with a filter to obtain an image from one part of the EM spectrum.
If an object is not bigger than about 200 m it cannot be seen; when this was realised, Landsat 3 was launched with two RBV cameras operated in a different manner, using a single-channel system.
(Diagram: RBV camera – lens, glass tube, focus coil, mesh electrode, photoconductive target.)
The RBV produced good imagery but poor geometry. It was thought to solve problems in mapping, since it implied that 1:1M scale maps could be obtained.
(Diagram: RBV on Landsat 3, 185 km swath.)
The system had too many geometric problems, so to address them reseau marks were deposited on the face plate of the camera, allowing the exact position of features to be measured; the geometric distortions could then be corrected using a fairly simple linear relationship. The RBV had great potential, but unfortunately it is no longer used; it is now mainly of historical interest.
Operation
The MSS actively scans from W → E in a near-polar orbit, recording 4 parts of the spectrum, and uses an oscillating mirror to gather data along each scanned line. It scans six lines at a time; 4 bands for six lines implies 24 detectors working at the same time. Radiation from the earth is reflected by the scan mirror and focused via fibre optics onto the detectors for the 4 spectral bands.
Landsat MSS properties:
Mirror oscillation frequency: 13.62 Hz
Complete scan: 13.42 milliseconds
Filters are placed in front of the detectors; the detectors measure the brightness (radiance or reflectance), and this is used to calibrate the detector. The voltage is sampled at regular intervals. The instantaneous field of view is 79 m × 79 m, but because the mirror is moving and sampling occurs every 56 m along the scan, some areas are rescanned, giving a pixel size of 79 m × 56 m.
(Diagram: the 79 m field of view sampled at 56 m intervals along the scan, giving the 56 m × 79 m pixel.)
In Landsat the sampling is rectangular, producing a pixel size of 56 m × 79 m, because the condition of no gaps between samples must be achieved.
Data are transmitted from the sensors to ground stations at various points on the earth. These ground stations are operated by the Landsat programme itself, and data are bought from its distributors. Each detector of the MSS integrates the radiance along the scan to produce the brightness value of the pixel.
50 50 74 80
50 80 80 70
80 80 81 60      (BV of road = 20)
79 82 60 60      (BV of field = 80)
A road (BV 20) crossing a field (BV 80) produces intermediate BVs in the pixels it partly covers. The brightness of a feature affects nearby pixels, depending on the contrast with the background; this leads to the problem of effective resolution.
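The mixed-pixel effect in the grid above can be sketched as an area-weighted mean, using the text's road (BV 20) and field (BV 80) values; the area fractions below are invented for illustration:

```python
# Sketch: a detector integrates brightness over a pixel, so the recorded BV
# is an area-weighted mean of the cover types inside the pixel.

def mixed_pixel_bv(fractions_and_bvs):
    """fractions_and_bvs: list of (area_fraction, brightness_value) pairs."""
    return sum(f * bv for f, bv in fractions_and_bvs)

print(mixed_pixel_bv([(0.5, 80), (0.5, 20)]))   # 50.0 (half road, half field)
print(mixed_pixel_bv([(0.9, 80), (0.1, 20)]))   # 74.0 (road clips a corner)
```

Neither 50 nor 74 is the BV of any real surface, which is why sub-pixel features blur rather than disappear.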
Major differences between the TM and the MSS:
1. The TM mirror scans in both directions.
2. Each scan is active (data are recorded on both the forward and reverse sweeps).
Explanation of differences
The TM uses an oscillating mirror which scans in both directions; this was not so in the previous Landsats. To compensate for gaps and overlaps in the ground covered, a pair of rotating parallel mirrors called the Scan Line Corrector is included in the TM optical chain. The corrector advances and retards the scan during both the forward and reverse sweeps of the primary mirror, so that the scan projections on the ground fall alongside each other.
The TM detectors are located in the focal plane, with the number of spectral bands increased to seven, of unequal width. The TM takes advantage of the increased dwell time available from this design to increase the spatial and spectral resolution over the MSS: the ground coverage per mirror sweep is kept approximately the same, giving pixel sizes of 30 m.
The TM also has better communications: the TDRSS relay satellites and DOMSAT, another communications satellite, were launched primarily to send data back to the ground.
The TM has proved to be better than the MSS. Landsat 7, carrying the ETM+, was launched on 15 April 1999.
(Diagram: pushbroom scanner – lens, field of view, and the recorded line on the ground.)
At a particular instant the linear array picks up data from a whole line on the ground; the entire line is recorded simultaneously by the detectors in the array. This principle, originally developed for military systems, was used in the SPOT system.
The main advantage is the same orientation for every point in a line (though this can change for the next line).
Disadvantages:
The French launched the SPOT system, the first fully commercial Remote Sensing system.
(SPOT geometry: a 6000-element linear array, focal length f = 1.082 m, orbital height H = 832 km, 60 km swath.)
It took a long time to be able to produce a linear array like this.
Advantages of the linear array:
Better sensitivity
Better reliability
Better geometry
A major advantage of the SPOT system was that normal photogrammetric procedures could be applied to its images. SPOT 5 was launched in 2002 with a 5 m resolution and has the advantage of both cross-track and along-track stereo.
Radar works by measuring the time between transmission of a pulse and the time its reflection is received back (T = transmitter, R = receiver).
(Block diagram: pulse generation feeds both the transmission chain and the time comparison; the duplexer, antenna, CRT, lens and film record the returns.)
- Pulses are sent to the transmitter and to the time-comparison unit at the same time. The transmitted pulse goes from the duplexer to the antenna to the target, returns through the duplexer and is recorded; it passes through the time comparison and the radar signal converter to a CRT and onto film. All this takes place in a fraction of a second before the next operation begins.
- Pulses are sent many times per second, directed at right angles to the aircraft's track.
- The antenna is a cylinder and can be up to 4 m long; antennas on both sides of the aircraft allow both sides to be scanned.
The sensor itself cannot separate the features involved, as all returns along a pulse are registered together.
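The time comparison converts directly to distance via the slant-range relation R = ct/2 (the pulse travels out and back, hence the factor of two). The delay value below is invented for illustration:

```python
# Sketch: slant range from pulse delay, R = c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def slant_range_m(delay_s):
    return C * delay_s / 2

print(round(slant_range_m(66.7e-6)))  # ~9998 m, i.e. a target about 10 km away
```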
The SLAR looks to one side of the flight direction and is capable of producing a continuous strip map of the imaged surface. It transmits short pulses of radio-frequency energy rather than a continuous wave. The SLAR looks to one side only at a time (hence the name) so as to remove the side-to-side ambiguity in relating pulse delay to target position which would otherwise occur.
The Aircraft
- A sophisticated aircraft navigation system is employed, usually combining GPS with an inertial system; with these the aircraft can fly day or night with no need for visual sight of the ground.
- An autopilot keeps the flight steady.
- The antenna can be moved to compensate for tilt or any distortion caused by aircraft movement.
- The rate at which pulses are emitted, and the speed of the aircraft, must be controlled.
- Cost is high; previously a Doppler navigation system was used.
A SLAR image is produced with one antenna at a time.
Introduction
Image Processing and Analysis can be defined as the "act of examining images for the purpose of identifying objects and judging their significance". The image analyst studies the remotely sensed data and attempts, through logical processes, to correct, detect, identify, classify, measure and evaluate the significance of physical and cultural objects, their patterns and spatial relationships.
Digital Data
In the most general terms, a digital image is an array of numbers depicting the spatial distribution of a certain field parameter (such as the reflectivity of EM radiation, emissivity, temperature, or some geophysical or topographical elevation). A digital image consists of discrete picture elements called pixels. Associated with each pixel is a number, its DN (Digital Number), which depicts the average radiance of a relatively small area within the scene; the range of DN values is normally 0 to 255. The size of this area affects the reproduction of detail within the scene: as the pixel size is reduced, more scene detail is preserved.
Remote sensing images are recorded in digital form and processed by computers to produce images for interpretation purposes. Images are available in two forms: photographic film form and digital form. On photographic film, variations in scene characteristics are represented as variations in brightness.
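The DN idea can be sketched as a simple quantization of radiance onto the 0-255 range. The calibration limits below are hypothetical, not values from any real sensor:

```python
# Sketch: mapping a continuous radiance value onto an 8-bit DN, with
# clipping at both ends of the range.
def to_dn(radiance, rad_min=0.0, rad_max=2.0):
    """Map radiance linearly onto 0-255, clipping values outside the limits."""
    scaled = (radiance - rad_min) / (rad_max - rad_min) * 255
    return max(0, min(255, round(scaled)))

print(to_dn(0.0), to_dn(1.0), to_dn(2.5))  # 0 128 255 (the last value is clipped)
```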
It was observed that early imagery (for example from Mars missions) was dull and noisy, quite apart from its geometric distortions; the noise and geometric distortions had to be corrected for. The digital image stored in a computer is a 2D array of pixel values; each element is a picture element, or pixel, whose brightness value represents the scene at that point.
Preprocessing techniques
These take care of imperfections in the detectors; sometimes this is done by the image distributor. The first step is reformatting the data: the data are converted to a form with which your processing software can cope.
Reformatting of data
For most purposes the band-sequential format is used. The three common formats are illustrated below:
1. BIP (band interleaved by pixel): the values of all bands are stored together for each pixel in turn.
2. BIL (band interleaved by line): line 1 of channels 4 to 7, then line 2 of channels 4 to 7, etc.
3. BSQ (band sequential): lines 1 to n of band 4, then lines 1 to n of band 5, etc.
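The three layouts can be sketched on a toy image; this is a minimal sketch using nested Python lists rather than a real tape format, with two invented bands:

```python
# Toy 2-band, 2-line, 3-pixel image, stored band-major: image[band][line][pixel].
image = [
    [[1, 2, 3], [4, 5, 6]],        # band A
    [[10, 20, 30], [40, 50, 60]],  # band B
]
bands, lines, pixels = 2, 2, 3

# BSQ: all of band A, then all of band B
bsq = [v for band in image for line in band for v in line]
# BIL: for each line, band A's line then band B's line
bil = [v for ln in range(lines) for band in image for v in band[ln]]
# BIP: for each pixel position, its value in every band
bip = [image[b][ln][p] for ln in range(lines) for p in range(pixels)
       for b in range(bands)]

print(bsq)  # [1, 2, 3, 4, 5, 6, 10, 20, 30, 40, 50, 60]
print(bil)  # [1, 2, 3, 10, 20, 30, 4, 5, 6, 40, 50, 60]
print(bip)  # [1, 10, 2, 20, 3, 30, 4, 40, 5, 50, 6, 60]
```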
Digital image analysis is usually conducted using raster data structures: each image is treated as an array of values. This offers advantages for the manipulation of pixel values by an image processing system, as it is easy to locate pixels and find their values. Its disadvantages become apparent when one needs to represent the array of pixels as discrete patches or regions, whereas vector data structures use polygonal patches and their boundaries as the fundamental units for analysis and manipulation. The vector format is, however, not appropriate for the analysis of remotely sensed data.
Image Resolution
Resolution can be defined as "the ability of an imaging system to record fine details in a distinguishable manner". A working knowledge of resolution is essential for understanding both the practical and the conceptual details of remote sensing. Along with the actual positioning of the spectral bands, the resolutions are of paramount importance in determining the suitability of remotely sensed data for a given application. The major characteristics of imaging remote sensing instruments operating in the visible and infrared spectral regions are described in terms of the following:
Spectral Resolution
Radiometric resolution
Spatial resolution
Temporal resolution
Spectral resolution refers to the width of the spectral bands. Different materials on the earth's surface exhibit different spectral reflectances and emissivities, and these spectral characteristics define the spectral positioning and sensitivity needed to distinguish materials. There is a trade-off between spectral resolution and signal-to-noise ratio. The use of well-chosen and sufficiently numerous spectral bands is a necessity if different targets are to be successfully identified on remotely sensed images.
The most commonly quoted quantity for spatial resolution is the instantaneous field of view (IFOV), which is the angle subtended by the geometrical projection of a single detector element onto the earth's surface. It may also be given as the distance, D, measured along the ground, in which case the IFOV is clearly dependent on sensor height, from the relation D = hβ, where h is the height and β is the angular IFOV in radians.
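A quick check of D = hβ; the 86-microradian IFOV and 919 km altitude are the commonly quoted Landsat MSS figures, assumed here for illustration:

```python
# Sketch: ground-projected IFOV from sensor height and angular IFOV.
def ground_ifov_m(height_m, beta_rad):
    return height_m * beta_rad

print(round(ground_ifov_m(919e3, 86e-6)))  # ~79 m, matching the MSS IFOV above
```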
A problem with the IFOV definition, however, is that it is purely geometric and does not take into account the spectral properties of the target. The effective resolution element (ERE) has been defined as "the size of an area for which a single radiance value can be assigned with reasonable assurance that the response is within 5% of the value representing the actual relative radiance". Being based on actual image data, this quantity may be more useful in some situations than the IFOV.
Other methods of defining the spatial resolving power of a sensor are based on the ability of the device to distinguish between specified targets, such as the ratio of the modulation of the image to that of the real target. Modulation, M, is defined as:
M = (Emax - Emin) / (Emax + Emin)
where Emax and Emin are the maximum and minimum radiance values recorded over the image.
Temporal resolution refers to the frequency with which images of a given geographic location can be acquired. Satellites offer the best chance not only of frequent coverage but also of regular coverage. The temporal resolution is determined by the orbital characteristics and the swath width, the width of the imaged area, given by:
swath = 2h tan(FOV/2)
where h is the altitude of the sensor and FOV is its angular field of view.
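A sketch of the swath relation; the 11.56° field of view is an assumed value, chosen because from roughly the MSS altitude it reproduces a 185 km swath:

```python
# Sketch: swath width from altitude and angular field of view,
# swath = 2 * h * tan(FOV / 2).
import math

def swath_km(height_km, fov_deg):
    return 2 * height_km * math.tan(math.radians(fov_deg) / 2)

print(round(swath_km(919, 11.56)))  # ~186 km
```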
Analysis of remotely sensed data is done using various image processing techniques and methods, which include:
i. Analogue image processing
ii. Digital image processing
The use of the fundamental elements of interpretation depends not only on the area being studied but also on the knowledge the analyst has of the study area. For example, the texture of an object is very useful in distinguishing objects that may appear similar if judged solely on tone: water and tree canopy may have the same mean brightness value, but their textures are very different. Association is a very powerful image analysis tool when coupled with general knowledge of the site. Thus we are adept at applying collateral data and personal knowledge to the task of image interpretation. Combining this with the multi-concept of examining remotely sensed data (multi-spectral, multi-temporal, multi-scale and multi-disciplinary) allows us to reach a verdict not only on what an object is but also on its importance. Beyond these analogue approaches lies digital image processing.
Digital image processing is a collection of techniques for the manipulation of digital images by computers. The raw data received from the imaging sensors on satellite platforms contain flaws and deficiencies, and to restore the true character of the data they must undergo several steps of processing. These vary from image to image, depending on the image format, the initial condition of the image, the information of interest and the composition of the image scene. Digital image processing involves three general steps:
i. Image restoration
ii. Image enhancement
iii. Information extraction
The three main processes involved in digital image processing are as illustrated below. Pre-processing consists of those operations that prepare the data for subsequent analysis by attempting to correct or compensate for systematic errors. The digital imagery is subjected to several corrections, such as geometric, radiometric and atmospheric, though not all of these corrections need be applied in every case. Systematic errors can be removed before the data reach the user; the investigator should decide which pre-processing techniques are relevant on the basis of the information to be extracted from the remotely sensed data. After pre-processing is complete, the analyst may use feature extraction to reduce the dimensionality of the data. Feature extraction is thus the process of isolating the most useful components of the data for further study while discarding the less useful aspects (errors, noise, etc.); it reduces the number of variables that must be examined, thereby saving time and resources.
1. IMAGE RESTORATION
2. IMAGE ENHANCEMENT
- Density Slicing
- Contrast stretching
- False colour composites
- Ratio images
3. INFORMATION EXTRACTION
RESULT (orthoimage)
1. Nearest neighbour – the BV of the nearest input pixel is assigned to the output pixel, i.e. I(X,Y) = I(U,V).
2. Bilinear interpolation
3. Cubic convolution
Since 1979 these have been done operationally, relieving users of this stage. However, the BV is changed, placed elsewhere and replaced by the BV of the nearest neighbour; it may therefore be better to interpret the original data (i.e. ask for the original data), otherwise your image may come distorted.
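Nearest-neighbour resampling can be sketched as follows; the tiny image and the 2× scale factor are invented for illustration:

```python
# Sketch of nearest-neighbour resampling: each output pixel (x, y) is mapped
# back into the input grid and takes the BV of the corresponding input pixel,
# I(X, Y) = I(U, V), so no new BVs are invented.

def nearest_neighbour(image, scale):
    """image: list of rows of BVs; scale: output pixels per input pixel."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(int(h * scale)):
        row = []
        for x in range(int(w * scale)):
            u, v = int(x / scale), int(y / scale)  # source pixel for this position
            row.append(image[v][u])
        out.append(row)
    return out

small = [[10, 20],
         [30, 40]]
for row in nearest_neighbour(small, 2):
    print(row)
# [10, 10, 20, 20]
# [10, 10, 20, 20]
# [30, 30, 40, 40]
# [30, 30, 40, 40]
```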
1. Geometric corrections
These seek to remove distortions due to altitude and attitude, scanner distortion and earth motion. To take these out we need ground control points (large islands or other large, well-defined areas) whose coordinates are known in latitude and longitude; large ground controls are always needed. Transformations are then undertaken to remove the distortions; this is termed 'resampling'.
Raw digital images often contain serious geometric distortions arising from earth curvature, platform motion, relief displacement and non-linearities in the scanning motion. The distortions are of two types:
1. Non-systematic distortions
2. Systematic distortions
Rectification is the process of projecting image data onto a plane and making it conform to a map projection system. Registration is the process of making image data conform to another image; a map coordinate system is not necessarily involved. Rectification involves rearrangement of the input pixels onto a new grid which conforms to the desired map projection and coordinate system.
2. Radiometric corrections
These seek to remove noise, detector failure, variable detector response, etc. Radiometric corrections are needed because, when image data are recorded by the sensors, they contain errors in the measured brightness values of the pixels. These errors are referred to as radiometric errors and can result from:
These corrections are done automatically. Once these processes are complete, image enhancement can be carried out.
Image enhancement
Image enhancement techniques are applied to make satellite imagery more informative and to help achieve the goal of image interpretation. The term enhancement means the alteration of the appearance of an image in such a way that the information contained in it is more readily interpreted visually for a particular need. Image enhancement techniques are applied either to single-band images or separately to the individual bands of a multiband image set. These techniques are applied to simplify the image or to prepare the way for information extraction.
A. Contrast stretching
The operating or dynamic ranges of remote sensors are designed with a wide variety of eventual data applications in mind. Landsat TM images, for example, may be used to study deserts, ice sheets, oceans and forests, requiring relatively low-gain sensors to cope with the widely varying radiances from dark, bright, hot and cold targets. For any particular area being imaged, it is therefore unlikely that the full dynamic range of the sensor will be used, and the corresponding image is dull and lacking in contrast, or over-bright. By remapping the DN distribution onto the full display range of an image processing system, we can recover a much better image. Contrast stretching falls into three categories:
1. Linear contrast stretching – the data can consist of BVs from 0 to 255, but it is most unusual for a single band to span this whole range (and the human eye cannot distinguish so many levels anyway). Most image enhancement software will show the minimum, the maximum and the standard deviation.
E.g. if the BVs in a band span only 15 to 51, every pixel lies within about 14% of the full 0-255 range. This interval is stretched to the full range, increasing the image contrast so the image can more easily be interpreted. The stretch applied is linear, which can be a big disadvantage.
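A linear stretch can be sketched directly; the 15-51 occupied range is the example used above:

```python
# Sketch: linear contrast stretch remapping the occupied BV range [lo, hi]
# onto the full 0-255 display range.
def linear_stretch(bvs, lo, hi):
    return [round((bv - lo) / (hi - lo) * 255) for bv in bvs]

print(linear_stretch([15, 33, 51], 15, 51))  # [0, 128, 255]
```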
i. Histogram stretch
ii. Step stretches – stretches over a certain range, depending on what you are after
NB: The above illustrates the ideas of histogram equalization and the Gaussian stretch.
2. Density Slicing
The total range of BVs (0 to 255) is divided into a number of bands, say eight (i.e.
slices), and each band is given a special symbol or grey level: the BVs are "sliced"
into distinct classes. For example, for band 4 of an 8-bit TM image, we might divide
the contiguous 0-255 range into the discrete intervals 0-63, 64-127, 128-191 and
192-255 and display these four classes as four different grey levels. This is useful
for producing grey-scale maps and is a form of generalization: areas falling in the
same slice are likely to be similar. It also finds application in the display of
temperature maps.
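The four TM intervals above can be produced directly with `np.digitize`; this sketch returns the slice index (display grey level) for each pixel:

```python
import numpy as np

def density_slice(band, n_slices=4, levels=256):
    """Slice the 0-255 DN range into equal contiguous intervals and
    return, for each pixel, the index of the slice it falls in."""
    edges = np.arange(1, n_slices) * (levels // n_slices)   # [64, 128, 192]
    return np.digitize(band, edges)

dns = np.array([10, 63, 64, 130, 200, 255])
print(density_slice(dns))    # intervals 0-63, 64-127, 128-191, 192-255
```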
3. Ratio images
Sometimes ratios of BVs can be more useful than the original data. A ratio image is
formed as

    DNo = a * (DN1 / DN2) + b

where DN1 is the BV in the first channel, DN2 the BV in the second channel, and a and b
are scaling constants. A related form is the normalized difference, e.g.

    (Band 7 - Band 5) / (Band 7 + Band 5)

Many experiments have been undertaken to find the best ratio for a given area of
interest. Ratio images are also used to suppress the effect of topography and shadow on
images.
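Both forms of ratioing can be sketched as follows (the guard against zero denominators is my addition, and the DN values are invented to show how shading cancels):

```python
import numpy as np

def band_ratio(dn1, dn2, a=1.0, b=0.0):
    """DNo = a * (DN1 / DN2) + b, guarding against division by zero."""
    return a * dn1 / np.maximum(dn2, 1e-6) + b

def normalized_difference(dn1, dn2):
    """(DN1 - DN2) / (DN1 + DN2): a ratio form bounded to [-1, 1]."""
    return (dn1 - dn2) / np.maximum(dn1 + dn2, 1e-6)

# Same material on a sunlit and a shadowed slope: the absolute DNs halve,
# but the ratio is unchanged, so topographic shading largely cancels.
band7 = np.array([100.0, 50.0])
band5 = np.array([ 50.0, 25.0])
print(band_ratio(band7, band5))    # ratio is 2.0 for both pixels
```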
E.g. where two pixels have the same ratio value, the surface material is likely to be
the same. Ratios are also used to detect change; these operations work on individual
pixels.
Filtering, in contrast, compares the BV of a pixel with those of its neighbours and
changes the BV of the pixel accordingly. A look at how tone changes along each scan
line reveals the spatial frequency and texture of the image.
Types of filtering
i. Low-pass filtering
ii. High-pass filtering
A low-pass filter is sometimes referred to as a smoothing filter. Consider the
following block of BVs, containing a noise spike of 20:
6  7  7  6  7  7
7 20  6  7  8  6
6  7  8  6  7  8
The simplest low-pass filter takes each value and replaces it with, say, the mean of
its neighbourhood. The filtered image is a new image (such filters are called
non-recursive filters). These filters mainly reduce noise: low-pass filtering smooths
the image, thereby suppressing noise and retaining the general features.
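Applying a 3x3 moving average to the block of BVs above (a minimal pure-NumPy sketch; border pixels are simply left unchanged here):

```python
import numpy as np

def mean_filter3(img):
    """3x3 moving-average (low-pass) filter; border pixels keep their values."""
    out = img.astype(float)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = img[r-1:r+2, c-1:c+2].mean()
    return np.round(out).astype(int)

img = np.array([[6,  7, 7, 6, 7, 7],
                [7, 20, 6, 7, 8, 6],
                [6,  7, 8, 6, 7, 8]])
print(mean_filter3(img)[1])   # the 20 spike is pulled down towards its neighbours
```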
A high-pass filter takes the original image, smooths it, and takes the difference
between the two images, retaining the high-frequency detail.
Or: take the local average, take the difference between the pixel and the local
average, double the difference and add it back:
6  7  7
7 20  6      local average = 6
6  7  8      difference = 20 - 6 = 14
             doubled difference = 14 * 2 = 28
             new value = 6 + 28 = 34
- Discontinuities will be picked up which might not have been seen in the original image.
- Once an image is filtered there is the tendency to store it, and this takes up space.
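The worked arithmetic above amounts to adding back an amplified deviation from the local mean; as a one-line sketch:

```python
def edge_enhance(pixel, local_avg, k=2):
    """High-pass sharpening: new value = local average + k * (pixel - local average).
    With k=2 this reproduces the worked example above: 6 + 2 * (20 - 6) = 34."""
    return local_avg + k * (pixel - local_avg)

print(edge_enhance(20, 6))   # -> 34
```

Pixels equal to their local average are unchanged; deviations (edges, spikes) are exaggerated.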
INFORMATION EXTRACTION
Image Classification has formed an important part of the fields of Remote Sensing, Image
Analysis and Pattern Recognition. In some instances, the classification itself may form
the object of the analysis. Digital Image Classification is the process of sorting all the
pixels in an image into a finite number of individual classes. The classification process is
based on a number of assumptions about the data.
Pattern Recognition, Spectral Classification, Textural Analysis and Change Detection
are different forms of classification focused on three main objectives.
Fundamentally, spectral classification forms the basis for objectively mapping the
areas of the image that have similar spectral reflectance/emissivity characteristics.
Depending on the type of information required, spectral classes may be associated with
identified features in the image (supervised classification) or may be chosen
statistically (unsupervised classification). Classification can also be seen as a means
of compressing image data, by reducing the large range of DNs in several spectral bands
to a few classes in a single image. Classification reduces the large spectral space
into relatively few regions and obviously results in loss of numerical information from
the original image. There is no theoretical limit to the dimensionality used for the
classification, though obviously the more bands involved, the more computationally
intensive the process becomes. It is often wise to remove redundant bands before
classification.
The image may be left in digital form, in which case numerical methods can be employed;
or it can be displayed in graphical mode for visual inspection, where typically:
Vegetation – different shades of green
Water – black most times
Bare soil – light blue
Built-up areas – blue grey
Clouds and snow – white
Sand – white-yellow
Interpreting an image visually comes with a number of disadvantages, and an image is
often better interpreted by digital methods. Two forms can be used for image
interpretation:
i. Unsupervised classification
ii. Supervised classification
Unsupervised classification
If a cluster is identified as, say, vegetation, ground truth has to be known; all
similar areas can then be interpreted as such.
Supervised Classification
1. From previous experience you know the BVs of a feature in a particular band; this is
fed into the computer before the classification is performed.
2. Alternatively, find training sites, areas where the land cover types are known;
similar pixels can then be identified.
Problems
a. Every pixel will be classified into one class or another.
b. A strictly nearest-neighbour classifier will not consider that a pixel lying further
away could be something else.
c. No account is taken of the different spreads in the data.
d. You cannot use this to go after one particular feature, e.g. water.
An otherwise acceptable training area can be spoiled by one or two outlying pixels.
[Figure: scatter plot of training pixels in Band 4 / Band 5 feature space, with two
outlying pixels A and B lying away from the main cluster and the decision box drawn
around all the points.]
Had A and B not been present, the decision box would have been the smaller marked area
above; A and B have therefore enlarged the decision area. How does this come about? We
took the minimum and maximum values of the data.
[Figure: histograms of BV(4) and BV(5) for the training data.]
To solve this, the class limits can instead be derived from the mean and standard
deviation of the training data. This is an automatic way of getting rid of points A and
B, and the procedure works well if the BVs are normally distributed around the mean.
Another possibility: find the highest and lowest values and set the limits a little
lower and a little higher than those values.
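The min/max box with an optional margin is essentially a parallelepiped classifier. A sketch, with invented training BVs for two classes:

```python
import numpy as np

def box_classifier(pixel, training, margin=0):
    """Assign the pixel to the first class whose [min - margin, max + margin]
    box (in every band) contains it; otherwise label it unknown."""
    for name, samples in training.items():
        lo = samples.min(axis=0) - margin
        hi = samples.max(axis=0) + margin
        if np.all((pixel >= lo) & (pixel <= hi)):
            return name
    return "unknown"

training = {"water": np.array([[10, 12], [14, 15], [12, 13]]),   # (band 4, band 5)
            "soil":  np.array([[80, 90], [85, 95], [90, 88]])}
print(box_classifier(np.array([11, 14]), training))   # -> water
print(box_classifier(np.array([50, 50]), training))   # -> unknown
```

Note how one or two outliers in the training samples (the A and B pixels) would inflate the box, which is exactly the weakness discussed above.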
Another possibility: there is a second group of classifiers, called soft or fuzzy
classifiers. A fuzzy classifier takes each pixel and says: this can be, say, water with
a probability of, say, 70%. Fuzzy classifiers are becoming a lot more popular.
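A soft classifier returns a membership grade per class instead of one hard label. The handout does not specify a method, so this is an illustrative sketch only: memberships are computed from a softmax over distances to hypothetical class means.

```python
import numpy as np

def soft_classify(pixel, class_means):
    """Return a probability-like membership for every class, from a
    softmax over negative Euclidean distances to the class means."""
    names = list(class_means)
    d = np.array([np.linalg.norm(pixel - class_means[n]) for n in names])
    w = np.exp(-d)
    return dict(zip(names, w / w.sum()))

means = {"water": np.array([12.0, 13.0]), "soil": np.array([85.0, 91.0])}
probs = soft_classify(np.array([14.0, 15.0]), means)
print(probs)   # high membership for water, low for soil; memberships sum to 1
```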
The above are supervised classification approaches.
In this system the analyst supervises the categorization of the data by specifying to
the computer algorithm numerical descriptors of the various class types. There are
three basic steps involved in a typical supervised classification.
Training Stage
The analyst identifies training areas and develops a numerical description of the
spectral attributes of each class or land cover type. During the training stage the
location, size, shape and orientation of the training area for each class are
determined.
Classification Stage
Each pixel is categorized into the land cover class it most closely resembles. If the
pixel is not sufficiently similar to any training data, it is labeled as unknown.
Numerical (mathematical) approaches to spectral pattern recognition have been
classified into various categories.
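The "closest resemblance, else unknown" rule can be sketched as a minimum-distance-to-means classifier with a rejection threshold (the class means and threshold below are invented for illustration):

```python
import numpy as np

def min_distance_classify(pixel, class_means, threshold):
    """Assign the pixel to the class with the nearest mean vector,
    or label it unknown if even the nearest mean is beyond threshold."""
    best, best_d = "unknown", threshold
    for name, mean in class_means.items():
        d = np.linalg.norm(pixel - mean)
        if d < best_d:
            best, best_d = name, d
    return best

means = {"water": np.array([12.0, 13.0]), "soil": np.array([85.0, 91.0])}
print(min_distance_classify(np.array([14.0, 15.0]), means, threshold=10))  # water
print(min_distance_classify(np.array([50.0, 50.0]), means, threshold=10))  # unknown
```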
5. Theme data are often so strongly correlated that a pixel vector which plots at some
distance from the theme scatter may yet fall within the decision box and be classified
erroneously.
6. Sometimes parallelepipeds may overlap, in which case the decision becomes more
complicated and the boundaries are stepped.
The maximum likelihood method determines the variance and covariance of each theme,
providing a probability function. This is then used to classify an unknown pixel by
calculating, for each class, the probability that it lies in that class. The pixel is
then assigned to the most likely class, or, if its probability value fails to reach an
analyst-defined threshold in any of the classes, labeled as unclassified. Reducing the
data dimensionality beforehand is one approach to speeding the process up.
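A sketch of the maximum likelihood rule using per-class mean vectors and covariance matrices (the training statistics here are invented):

```python
import numpy as np

def max_likelihood_classify(pixel, stats, min_log_like=-1e9):
    """Assign the pixel to the class with the highest Gaussian
    log-likelihood; reject to 'unclassified' below min_log_like."""
    best, best_ll = "unclassified", min_log_like
    for name, (mean, cov) in stats.items():
        d = pixel - mean
        ll = -0.5 * (np.log(np.linalg.det(cov))       # log of the multivariate
                     + d @ np.linalg.inv(cov) @ d     # normal density,
                     + len(d) * np.log(2 * np.pi))    # constants included
        if ll > best_ll:
            best, best_ll = name, ll
    return best

stats = {"water": (np.array([12.0, 13.0]), np.array([[4.0, 1.0], [1.0, 3.0]])),
         "soil":  (np.array([85.0, 91.0]), np.array([[9.0, 2.0], [2.0, 8.0]]))}
print(max_likelihood_classify(np.array([14.0, 14.0]), stats))   # -> water
```

Because the covariance appears in every evaluation, this is more expensive than the box or minimum-distance rules, which is why reducing the dimensionality first helps.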
Unsupervised classification
The simplest approach here is density slicing, though this needs ground truth for
checking.
NB: No single classifier will give good results on its own; they are used
interactively, with the skill and experience of the operator.
This system of classification does not utilize training data as the basis of
classification. It involves algorithms that examine the unknown pixels in an image and
aggregate them into a number of classes based on the natural groupings or clusters
present in the image. The classes that result from this type of classification are
spectral classes. Unsupervised classification is thus the identification, labeling and
mapping of these natural classes. The method is usually used when there is little
information about the data before classification.
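Clustering pixels into natural spectral groupings is commonly done with a k-means style algorithm; a minimal sketch (the pixel vectors are invented):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Aggregate pixel vectors into k spectral classes by repeatedly
    assigning each pixel to the nearest cluster mean and updating means."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return labels, centres

pixels = np.array([[10.0, 12], [12, 14], [11, 13],    # one natural grouping
                   [80, 90], [82, 88], [85, 91]])     # another
labels, centres = kmeans(pixels, k=2)
```

The resulting labels are spectral classes only; attaching land cover meaning to them (the identification and labeling step) still needs ground truth.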
5 AERIAL SURVEY
3 SPACE
28 SPOT
16 LANDSAT
12 NOAA
30 METEOSAT
Scanner Imagery
- Landsat will continue to be available; it has been around for a long time.
- MSS hard copy has been used to interpret topographic maps.
- Topographic mapping is interested in boundary lines and lines of communication.
- Thematic mapping is after areas.
- Picking out line features is difficult with TM.
- Lack of stereo cover hinders interpretation.
- SPOT received a lot of attention as it could show greater detail; line maps were
compiled from such images.
- SPOT can tell you where a change has taken place.
RADAR SYSTEM
- Earlier attempts produced spectacular results.
- Lots of radar systems are under development.
Importance to attach to RS
2. RS has been very successful in thematic mapping.
3. Image enhancement and supervised classification techniques will be with us for a
long time.
4. Thematic mapping at global scales has been quite spectacular:
- Sea temperature
- Oil spills
- Changes
- Global wind fields
- Ozone layers
Recall successes of RS
1. RS has been successful in thematic mapping.
2. Experimental systems like radar have yielded good results in some localized areas,
but could not become a worldwide commercial system.
3. The production systems are Landsat and SPOT.
Future of RS
From 1996 some commercial companies have tried to launch systems, e.g. EarthWatch.
These systems brought:
1. A big change from earlier systems
2. High geometric fidelity
3. Very flexible and rapid image delivery – within 6-24 hrs
4. The ability to deliver stereo imagery of high quality
Nine (9) companies were given licenses to put up satellites and obtain satellite data;
two of these licenses are held by EarthWatch. This was developed as part of strategic
planning. Some of these commercial groups are:
1. EarthWatch – EarlyBird, QuickBird
2. Indian Space Research Organisation – IRS-1C
3. ORBIMAGE – OrbView-3
4. Space Imaging – Carterra-1
5. SPOT Image – SPOT 4, SPOT 5
6. Russian data suppliers (world map) – RESOURS-F1, F2, F3, KOSMOS missions
3. ORBIMAGE
OrbView-3
- Launch scheduled for mid-1998, then 1999, … current status not known.
- H = 470 km; revisit 3 days.
- Two sensors, resolutions of 1-2 m (Pan) and 4 m (MS).
OrbView-4 scheduled for 2000.
5. SPOT Image – This has been very successful; SPOT 1 was launched in 1986.
SPOT 4
- Launched 24 March 1998
- Two sensors, resolutions of 10 m (Pan) and 20 m (MS)
SPOT 5
- Launch scheduled for 2002; resolutions of 5 m and 2.5 m (Pan) and 10 m (MS)
SPOT 5 has been designed with mapping applications in mind: it is intended to meet
planimetric standards of 10 m accuracy at the 1:50,000 scale specification and a
vertical accuracy of 5 m.
RESOURS-F
- Camera systems with resolutions of 2-20 m
- Several 30-day missions each year
- An archive of more than 2 million multispectral and panchromatic scenes from all
parts of the world.
E.g. the Pavoda systems are non-military missions; the archive holds more than 2
million multispectral scenes, many of these at 2 m resolution, and the data is readily
available.
7. Other Systems
These are described as secondary because they were not set up for commercial purposes,
i.e. they serve national data or scientific uses.
e) USA – Clark (97), Lewis (97), Corona and Lanyard (61-72).
Applications include:
Cartography
- Ortho-rectified images
- Map revision at 1:25,000
Forestry
- Disease and windblow detection
- Forest degradation
- Timber volume estimation is possible
Agriculture
- Precision farming
- Growth monitoring
- Shortage prediction
Environmental monitoring
- Coastal studies
- Land-use changes
- Compliance and regulations
- Mining operations
News gathering
E.g. in agriculture, blanket application of fertilizer and pesticides across every area
of a farm is being discouraged; imagery supports precision application and growth
monitoring.
POLICY ISSUES
- United Nations Principles on Remote sensing
- US Policy on Earth observation
- CEOS Principles
- IEOS Principles
- European Commission Directive on Databases.
Principle XII gives the right of a sensed state or country to gain access to data of
its own territory under three conditions:
- as soon as the data are produced;
- access will be non-discriminatory;
- access will be on reasonable cost terms.
Implementing these conditions was not easy, as the principle makes no mention of the
cost of putting up the satellite. Does this happen in practice?
- The Land Remote Sensing Policy Act of 1992 and the Remote Sensing Policy signed by
President Clinton in 1994 cleared the way for the development of the new, high
resolution systems, BUT.
The main fields of application of remote sensing and GIS to mineral exploration are in
the areas of:
1. Logistics
2. Mapping structure
3. Lithology
At all levels of mineral exploration, RS and GIS can be of great benefit when used in
conjunction with other, more conventional sources of information, ranging from
topographical and geological maps to geochemical and geophysical data.
1. Logistics
The use of remote sensing as a logistical aid is widespread and generally accepted.
False-colour composite imagery derived from Landsat TM or SPOT, geometrically corrected
to standard map projections with the usual legend and geographical grids, is a common
aid to field geologists and geophysicists.
Satellite imagery is generally accepted as a logistical aid in exploration for the
following reasons:
1. Mineral exploration often takes place in remote areas of the world, and in most
cases these areas are poorly mapped; hence the need for base maps to plan any
exploration programme.
2. Desert areas without permanent human settlement or sources of surface water can be
transformed to irrigated farmland, removing problems of water supply and access for
field work but greatly increasing the difficulty of actually carrying out the work.
3. Government agencies may often be poorly informed about these changes, or even
unwilling to discuss them with foreigners.
4. The need for accurate and timely information on access and land cover types can
often only be met by remote sensing satellites, whose observations are not restricted
by local or national boundaries and whose imagery can provide quantitative, map-like
information.
The type of imagery required for logistical purposes depends to a large extent on the
size of the area to be covered, but it is usually desirable to have the finest
resolution affordable. The imagery can then be used to locate access tracks, bridges
and similar man-made features, so the use of a resolution which will unambiguously
display these features is important. Colour imagery is also almost essential, as it
allows discrimination of densely vegetated land and areas of water. Since any mineral
exploration programme is in essence a process of progressive elimination of less
promising ground and the retention of only the most promising, and often widely
scattered, portions, the cost of regional coverage with such attractive products is
rarely justified, although they can be of great value during the detailed exploration
and prospect evaluation stages of a programme.
2. Mapping structure and lithology
The process of preparing a geological map consists in most cases of scattered ground
observations of rock outcrops, followed by what is often a highly subjective
interpolation between the observation points. This was usually carried out with the aid
of air photo interpretation.
In any part of the world the collection of surface data is restricted by the lack of
natural rock outcrops, and it may even be necessary to create artificial outcrops by
pitting, trenching or drilling in cases where there are severe ambiguities in the
geological interpretation or where critical exposures are lacking. In areas of
considerable exposure, time rarely permits the detailed mapping of all rock outcrops.
Satellite remote sensing has not fully replaced conventional air photo interpretation
in most geological mapping programmes. Satellite imagery is less costly than air
photography, and it is usually available to the field geologist at well established
field camps or back at headquarters. Modern satellite imagery nevertheless has the
capability of greatly enhancing geological mapping, thanks to its improved spatial
resolution, its multispectral nature and the power of digital image processing.
3. Alteration zones
Mineral deposits often have extensive alteration zones associated with them; these
occupy volumes which are sometimes two or three orders of magnitude greater than the
actual mineral deposit. An alteration zone may show major mineralogical changes, e.g.
sericitisation of feldspars and the introduction of iron in the form of oxides and
sulphides. These mineralogical modifications are often exaggerated in the zone of
weathering, since the altered rocks are often more susceptible to chemical weathering
than their unaltered counterparts. These are the alteration zones which can, in arid
and semi-arid environments, be located using satellite remote sensing.
The main characteristics of these alteration zones which are susceptible to detection
using satellite remote sensing are:
i. a general increase in the overall reflectance or albedo;
ii. the presence of ferruginous staining;
iii. the presence of characteristic assemblages of clay minerals.
The first two of these have been detectable since the days of the early Landsat
satellites. The MSS, with its four wavebands in the visible and near infrared portions
of the spectrum and its spatial resolution of 80 metres, was able to detect large areas
of bleached rock or weathered material which had a much higher reflectance at visible
wavelengths than the surrounding rocks.
Band 5 of the MSS is in the red portion of the spectrum and is sensitive to ferruginous
staining, which turns rocks and their weathered products red. A whole range of ratios
was devised by mineral exploration experts in an attempt to highlight these ferruginous
zones in MSS imagery.
The launch of Landsat 4 in 1982, carrying the TM scanner with its greatly increased
waveband coverage and finer spatial resolution, and in particular with the 2.2 micron
band (band 7), which was added mainly due to appeals from the geological community,
greatly enhanced the chances of detecting alteration zones using satellite imagery.
In a few cases, the geochemical halo associated with mineralisation may be sufficiently
toxic to produce vegetation changes, and remote sensing geobotany may be possible.
Undiscovered deposits are more subtle, and their recognition demands the careful
combination and analysis of data from a wide range of sources: structural information
from RS and surface mapping, lithostratigraphic data from regional mapping,
supplemented possibly by remote sensing, and geochemical data from stream and soil
sampling.
GIS is a tool which enables the explorationist to make more efficient use of the data
he acquires, either during exploration or from other sources such as digital maps from
the OS. Remotely sensed data often forms a vital component of mineral exploration GISs:
its digital nature lends it readily to integration with other data sets; it can provide
valuable indirect indications of the presence of mineralisation, complementary to
information from geochemistry and geophysics; and remotely sensed imagery can provide a
recognisable geographic background against which to view other data sets.
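As a toy illustration of the data-integration point (everything below — the layers, scores and weights — is invented, not from this handout), a GIS-style weighted overlay of co-registered evidence rasters might look like:

```python
import numpy as np

# Hypothetical evidence layers on a common 2x2 raster grid, each scored 0-1:
structure = np.array([[0.9, 0.2], [0.6, 0.1]])   # RS structural interpretation
lithology = np.array([[0.8, 0.3], [0.5, 0.2]])   # favourable rock types
geochem   = np.array([[0.7, 0.1], [0.9, 0.4]])   # stream/soil geochemistry

# Index-overlay combination: the weights express the analyst's judgement
# of the relative importance of each evidence layer.
weights = {"structure": 0.4, "lithology": 0.3, "geochem": 0.3}
prospectivity = (weights["structure"] * structure
                 + weights["lithology"] * lithology
                 + weights["geochem"] * geochem)
print(prospectivity)   # high-scoring cells are worth following up on the ground
```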
References
Jensen, J. R. (2015), Introductory Digital Image Processing: A Remote Sensing
Perspective, Pearson Education, 544 pp.
Mather, P. M. (2010), “Is there any sense in remote sensing?”, Progress in Physical
Geography, 34(6), pp. 1-19.
Lillesand, T. M. and Kiefer, R. W. (2000), Remote Sensing and Image Interpretation,
Wiley, 780 pp.
Jensen, J. R. (2000), Remote Sensing of the Environment: An Earth Resource Perspective,
Prentice-Hall, New Jersey, USA.
Cracknell, A. P. and Hayes, W. B. (1991), Introduction to Remote Sensing, Taylor and
Francis, 420 pp.
Legg, C. (1992), Remote Sensing and Geographic Information Systems: Geological Mapping,
Mineral Exploration and Mining, Ellis Horwood, 278 pp.
Lattman, L. H. and Ray, R. G. (1965), Aerial Photographs in Field Geology, Holt,
Rinehart and Winston Incorporated, 320 pp.