Lectures 6-8


Name of the Presentation

Image Geometric Correction

Why Geometric Correction?

• To remove inherent optical distortions in the image.
• To achieve better cartographic accuracy for the generation of maps.
• To prepare a mosaic of images.
• To register multi-temporal data.
• To bring images of different spatial resolutions to the same scale.

2/6/2019 1

It is usually necessary to preprocess remotely sensed data and remove geometric distortion so that individual picture elements
(pixels) are in their proper planimetric (x, y) map locations. This
allows remote sensing–derived information to be related to other
thematic information in geographic information systems (GIS) or
spatial decision support systems (SDSS). Geometrically corrected
imagery can be used to extract accurate distance, polygon area, and
direction (bearing) information.

Internal and External Geometric Error

Remotely sensed imagery typically exhibits internal and external geometric error. It is important to recognize the
source of the internal and external error and whether it is
systematic (predictable) or nonsystematic (random).
Systematic geometric error is generally easier to identify
and correct than random geometric error.


Internal Geometric Error


Internal geometric errors are introduced by the remote sensing
system itself or in combination with Earth rotation or curvature
characteristics. These distortions are often systematic (predictable)
and may be identified and corrected using pre-launch or in-flight
platform ephemeris (i.e., information about the geometric
characteristics of the sensor system and the Earth at the time of
data acquisition). Geometric distortions in imagery that can
sometimes be corrected through analysis of sensor characteristics
and ephemeris data include:
• skew caused by Earth rotation effects,
• scanning system–induced variation in ground resolution cell
size,
• scanning system one-dimensional relief displacement, and
• scanning system tangential scale distortion.

Image Offset (Skew) Caused by Earth Rotation Effects
Earth-observing Sun-synchronous satellites are
normally in fixed orbits that collect a path (or swath) of
imagery as the satellite makes its way from the north to
the south in descending mode. Meanwhile, the Earth
below rotates on its axis from west to east making one
complete revolution every 24 hours. This interaction
between the fixed orbital path of the remote sensing
system and the Earth’s rotation on its axis skews the
geometry of the imagery collected.
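The magnitude of this skew can be estimated with a short calculation. A minimal sketch follows; every number used (frame length, ground-track speed, latitude) is illustrative rather than a sensor specification:

```python
import math

def earth_rotation_skew(frame_length_m, ground_speed_ms, latitude_deg):
    """Approximate eastward offset of the last scan line relative to the
    first, caused by Earth rotation during frame acquisition."""
    acquisition_time_s = frame_length_m / ground_speed_ms
    # Surface speed at the equator: circumference / sidereal day
    equator_speed_ms = 2 * math.pi * 6_378_137 / 86_164
    return equator_speed_ms * acquisition_time_s * math.cos(math.radians(latitude_deg))

# Hypothetical Landsat-like frame: 185 km long, ~6.6 km/s ground-track speed,
# imaged at 35 degrees latitude -> roughly a 10-11 km eastward skew
skew_m = earth_rotation_skew(185_000, 6_600, 35.0)
```

The cos(latitude) term reflects that the ground moves eastward fastest at the equator, so skew correction is latitude-dependent.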


Scanning System-induced Variation in Ground Resolution Cell Size
An orbital multispectral scanning system scans through just a few
degrees off-nadir as it collects data hundreds of kilometers above
the Earth’s surface (e.g., Landsat 7 data are collected at 705 km AGL).
This configuration minimizes the amount of distortion introduced by
the scanning system. Conversely, a suborbital multispectral scanning
system may be operating just tens of kilometers AGL with a scan field
of view of perhaps 70°. This introduces numerous types of geometric
distortion that can be difficult to correct.

The ground resolution cell size along a single across-track scan is a function of:
a) the distance from the aircraft to the observation, which is H (the altitude above ground level, AGL) at nadir and H sec(φ) off-nadir;
b) the instantaneous field of view of the sensor, β, measured in radians; and
c) the scan angle off-nadir, φ.
Pixels off-nadir have semi-major and semi-minor axes (diameters) that define the resolution cell size.
d) The total field of view of one scan line is θ. One-dimensional relief displacement and tangential scale distortion occur in the direction perpendicular to the line of flight and parallel with a line scan.
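Assuming the common textbook approximations for an across-track scanner (cell dimension H·β·sec²(φ) in the scan direction and H·β·sec(φ) in the flight direction), the cell growth off-nadir can be computed directly; the H, β, and φ values below are hypothetical:

```python
import math

def cell_size(H_m, beta_rad, phi_deg):
    """Ground resolution cell dimensions for an across-track scanner.
    At nadir the cell is H*beta on a side; off-nadir it stretches to
    roughly H*beta*sec^2(phi) along-scan and H*beta*sec(phi) in the
    flight direction (a standard approximation, not a sensor spec)."""
    phi = math.radians(phi_deg)
    nadir = H_m * beta_rad
    along_scan = H_m * beta_rad / math.cos(phi) ** 2
    flight_dir = H_m * beta_rad / math.cos(phi)
    return nadir, along_scan, flight_dir

# e.g. H = 3000 m AGL, beta = 2.5 mrad IFOV, 35 degrees off-nadir
nadir, major, minor = cell_size(3000, 0.0025, 35)
```

The 7.5 m nadir cell grows to roughly 11 m by 9 m at 35° off-nadir, which is why wide-FOV suborbital scanners suffer far more distortion than narrow-FOV orbital ones.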


Ground Swath Width

The ground swath width (gsw) is the length of the terrain strip remotely sensed by the system during one complete across-track sweep of the scanning mirror. It is a function of the total angular field of view of the sensor system, θ, and the altitude of the sensor system above ground level, H. It is computed as:

gsw = tan(θ/2) × H × 2

The ground swath width of an across-track scanning system with a 90° total field of view and an altitude above ground level of 6,000 m would be 12,000 m:

gsw = tan(45°) × 6,000 × 2 = 12,000 m
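The formula translates directly to code; this reproduces the worked example from the text:

```python
import math

def ground_swath_width(theta_deg, H_m):
    """gsw = tan(theta/2) * H * 2, where theta is the total angular field
    of view and H the altitude above ground level."""
    return 2 * H_m * math.tan(math.radians(theta_deg) / 2)

# The text's example: 90 degree total FOV at 6000 m AGL
gsw = ground_swath_width(90, 6000)  # -> 12000 m
```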


Scanning System One-Dimensional Relief Displacement
Images acquired using an across-track scanning system also contain
relief displacement. However, instead of being radial from a single
principal point as in a vertical aerial photograph, the displacement
takes place in a direction that is perpendicular to the line of flight for
each and every scan line. In effect, the ground-resolution element at
nadir functions like a principal point for each scan line. At nadir, the
scanning system looks directly down on a tank, and it appears as a
perfect circle. The greater the height of the object above the local
terrain and the greater the distance of the top of the object from nadir
(i.e., the line of flight), the greater the amount of one-dimensional
relief displacement present. One-dimensional relief displacement is
introduced in both directions away from nadir for each sweep of the
across-track mirror.

Scanning System Tangential Scale Distortion


The mirror on an across-track scanning system rotates at a constant speed and typically views from 70° to 120° of terrain during a complete line scan. Of
course, the amount depends on the specific sensor system. The terrain
directly beneath the aircraft (at nadir) is closer to the aircraft than the terrain
at the edges during a single sweep of the mirror. Therefore, because the
mirror rotates at a constant rate, the sensor scans a shorter geographic
distance at nadir than it does at the edge of the image. This relationship
tends to compress features along an axis that is perpendicular to the line
of flight. The greater the distance of the ground-resolution cell from nadir,
the greater the image scale compression. This is called tangential scale
distortion. Objects near nadir exhibit their proper shape. Objects near the
edge of the flight line become compressed and their shape distorted. For
example, consider the tangential geometric distortion and compression of the
circular swimming pools and one hectare of land the farther they are from
nadir in the hypothetical diagram.


Scanning System Tangential Scale Distortion

The tangential scale distortion and compression in the far range cause linear features such as roads, railroads, utility rights-of-way, etc., to have an s-shaped, or sigmoid, distortion when recorded on scanner imagery. Interestingly, if the linear feature is parallel with or perpendicular to the line of flight, it does not experience sigmoid distortion.

a) Hypothetical perspective geometry of a vertical aerial photograph obtained over level terrain. Four
50-ft-tall tanks are distributed throughout the landscape and experience varying degrees of radial relief
displacement the farther they are from the principal point (PP). b) Across-track scanning system
introduces one-dimensional relief displacement perpendicular to the line of flight and tangential scale
distortion and compression the farther the object is from nadir. Linear features trending across the
terrain are often recorded with s-shaped or sigmoid curvature characteristics due to tangential scale
distortion and image compression.


Figure: original image and restored image.

External Geometric Error

External geometric errors are usually introduced by phenomena that vary in nature through space and time. The most important external variables that can cause geometric error in remote sensor data are random movements by the aircraft (or spacecraft) at the exact time of data collection, which usually involve:

• altitude changes, and/or
• attitude changes (roll, pitch, and yaw).


Attitude Changes
Remote sensing systems flown at a constant altitude above ground level
(AGL) result in imagery with a uniform scale all along the flightline. For
example, a camera with a 12-in. focal length lens flown at 20,000 ft. AGL will
yield 1:20,000-scale imagery. If the aircraft or spacecraft gradually changes
its altitude along a flightline, then the scale of the imagery will change.
Increasing the altitude will result in smaller-scale imagery (e.g., 1:25,000-
scale). Decreasing the altitude of the sensor system will result in larger-scale
imagery (e.g., 1:15,000). The same relationship holds true for digital remote sensing systems collecting imagery on a pixel-by-pixel basis.

The diameter of the spot size on the ground (D; the nominal spatial resolution) is a function of the instantaneous field of view (β) and the altitude above ground level (H) of the sensor system, i.e., D = β × H.
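Both relationships can be checked with a few lines; the IFOV and altitude used for the spot-size example are hypothetical, while the scale example reproduces the 12-in lens at 20,000 ft from the text:

```python
def photo_scale_denominator(focal_length, altitude_agl):
    """Scale of a vertical photo is f / H, i.e. 1 : (H / f).
    Units must match: the text's example uses f = 12 in = 1 ft
    and H = 20,000 ft, giving 1:20,000."""
    return altitude_agl / focal_length

def spot_size(beta_rad, H_m):
    """Nominal ground spot diameter D = beta * H for IFOV beta in radians."""
    return beta_rad * H_m

denom = photo_scale_denominator(1.0, 20_000)  # 1:20,000 as in the text
D = spot_size(0.0025, 3000)                   # hypothetical 2.5 mrad IFOV at 3 km
```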

a) Geometric modification in imagery may be introduced by changes in the aircraft or satellite platform altitude above ground level (AGL) at the time of data collection. Increasing altitude results in smaller-scale imagery while decreasing altitude results in larger-scale imagery.
b) Geometric modification may also be introduced by aircraft or spacecraft changes in attitude, including roll, pitch, and yaw. An aircraft flies in the x-direction. Roll occurs when the aircraft or spacecraft fuselage maintains directional stability but the wings move up or down, i.e., they rotate about the x-axis (angle omega: ω). Pitch occurs when the wings are stable but the fuselage nose or tail moves up or down, i.e., they rotate about the y-axis (angle phi: φ). Yaw occurs when the wings remain parallel but the fuselage is forced by wind to be oriented some angle to the left or right of the intended line of flight, i.e., it rotates about the z-axis (angle kappa: κ). Thus, the plane flies straight but all remote sensor data are displaced by κ. Remote sensing data often are distorted due to a combination of changes in altitude and attitude (roll, pitch, and yaw).


Attitude Changes
Satellite platforms are usually stable because they are not buffeted by
atmospheric turbulence or wind. Conversely, suborbital aircraft must
constantly contend with atmospheric updrafts, downdrafts, head-winds,
tail-winds, and cross-winds when collecting remote sensor data. Even
when the remote sensing platform maintains a constant altitude AGL, it
may rotate randomly about three separate axes that are commonly
referred to as roll, pitch, and yaw.
Quality remote sensing systems often have gyro-stabilization
equipment that isolates the sensor system from the roll and pitch
movements of the aircraft. Systems without stabilization equipment
introduce some geometric error into the remote sensing dataset through
variations in roll, pitch, and yaw that can only be corrected using ground
control points.

Earth as an Ellipsoid

•Spherical earth models represent the shape of the earth with a sphere of a specified radius. Spherical earth models are often used for short-range navigation (VOR-DME) and for global distance approximations. They may be used for small-scale maps (1:5,000,000 and smaller).

•Ellipsoidal earth models are required for accurate range and bearing
calculations over long distances. Loran-C, and GPS navigation receivers use
ellipsoidal earth models to compute position and waypoint information.



Geodetic Datum
• Geodetic datums define the size and shape of the earth and the origin and orientation of the coordinate systems used to map the earth.
• A datum is the minimum set of parameters required to define the location and orientation of a local system with respect to a global reference system.

•A Cartesian datum has seven parameters: three translations (X, Y, Z), three rotations, and a scale factor.

•An ellipsoidal datum has, in addition, the semi-major axis a and flattening f.

• Referencing geodetic coordinates to the wrong datum can result in position errors of hundreds of meters. Different nations and agencies use different datums as the basis for coordinate systems used to identify positions in geographic information systems, precise positioning systems, and navigation systems.

Figure: an Earth-centered datum compared with a local datum.

Geoid
•The equipotential surface of the earth's gravity field which would coincide with the ocean surface if the earth were undisturbed and without topography.

•Geoid models attempt to represent the surface of the entire earth over both land and ocean as though the surface resulted from gravity alone.


Map Projection
•Map projections are systematic transformations that allow the orderly
representation of the Earth's spherical graticule on a flat map. Some distortions
of conformality, distance, direction, scale, and area always result from this
process.
•Conformality
When the scale of a map at any point on the map is the same in any direction,
the projection is conformal. Meridians (lines of longitude) and parallels (lines
of latitude) intersect at right angles. Shape is preserved locally on conformal
maps.

•Distance
A map is equidistant when it correctly portrays distances from the center of the projection to any other place on the map.

•Direction
A map preserves direction when azimuths (angles from a point on a line to
another point) are portrayed correctly in all directions.

•Scale
Scale is the relationship between a distance portrayed on a map and the same
distance on the Earth.

•Area
When a map portrays areas over the entire map so that all mapped areas have the same proportional relationship to the areas on the Earth that they represent, the map is an equal-area (equivalent) map.

•Different map projections result in different spatial relationships between regions.


Map projections fall into three general classes:

• Cylindrical
• Conical
• Planar


•Cylindrical projections result from projecting a spherical surface onto a cylinder.

•When the cylinder is tangent to the sphere, contact is along a great circle (the circle formed on the surface of the Earth by a plane passing through the center of the Earth).

Meridians of longitude are straight lines, equally spaced along, and perpendicular to, the Equator. Parallels of latitude are represented as straight lines parallel to, and having the same length as, the Equator. In the tangent case, the Equator is true to scale and distortion increases with distance from the Equator. In the secant case, the standard parallels, which lie equidistant north and south of the Equator, are true to scale and distortion increases with distance from the standard lines.


Gnomonic
The cylindrical gnomonic projection illustrates the basic pattern of normal
cylindrical projections. The principles are the same as for the azimuthal
gnomonic projection. A light source positioned at the centre of the globe casts
shadows of the graticule on the projection surface, which in this instance, is a
cylinder placed tangent to the globe along the Equator. The Equator is shown
as true to scale on the map, and as is typical of cylindrical projections, there
is a narrow band along the Equator in which distortion of all geometric
characteristics is minimal. The spacing of parallels increases rapidly toward
the poles. The polar regions cannot be represented since the poles would be
located an infinite distance from the Equator.

In India:
• For defense: Everest datum, Polyconic projection
• For civilian use: WGS84 datum, UTM projection


Ground Control Points

Geometric distortions introduced by sensor system attitude (roll, pitch, and yaw) and/or altitude changes can be corrected using ground control points and appropriate mathematical models. A ground control point (GCP) is a location on the surface of the Earth (e.g., a road intersection) that can be identified on the imagery and located accurately on a map. The image analyst must be able to obtain two distinct sets of coordinates associated with each GCP:

• image coordinates specified in i rows and j columns, and
• map coordinates (e.g., x, y measured in degrees of latitude and longitude, feet in a state plane coordinate system, or meters in a Universal Transverse Mercator projection).

The paired coordinates (i, j and x, y) from many GCPs (e.g., 20) can be modeled to
derive geometric transformation coefficients. These coefficients may be used to
geometrically rectify the remote sensor data to a standard datum and map projection.

Ground Control Points

Several alternatives for obtaining accurate ground control point (GCP) map coordinate information for image-to-map rectification include:
• hard-copy planimetric maps (e.g., SOI 1:50,000-scale topographic maps)

• digital planimetric maps

• digital orthophotoquads that are already geometrically rectified

• global positioning system (GPS) instruments that may be taken into the field to obtain the coordinates of objects to within ±20 cm if the GPS data are differentially corrected.


Types of Geometric Correction

Commercial remote sensor data (e.g., SPOT Image, DigitalGlobe, Space Imaging) already have much of the systematic error removed. Unless otherwise processed, however, unsystematic random error remains in the image, making it non-planimetric (i.e., the pixels are not in their correct x, y planimetric map position). Two common geometric correction procedures are often used by scientists to make digital remote sensor data of value:
• image-to-map rectification, and
• image-to-image registration.
The general rule of thumb is to rectify remotely sensed data to a
standard map projection whereby it may be used in conjunction with
other spatial information in a GIS to solve problems. Therefore, most
of the discussion will focus on image-to-map rectification.

Image to Map Rectification


Image-to-map rectification is the process by which the
geometry of an image is made planimetric. Whenever accurate
area, direction, and distance measurements are required,
image-to-map geometric rectification should be performed. It
may not, however, remove all the distortion caused by
topographic relief displacement in images. The image-to-map
rectification process normally involves selecting GCP image
pixel coordinates (row and column) with their map coordinate
counterparts (e.g., meters northing and easting in a Universal
Transverse Mercator map projection).


Image to Image Registration


Image-to-image registration is the translation and rotation
alignment process by which two images of like geometry
and of the same geographic area are positioned coincident
with respect to one another so that corresponding elements
of the same ground area appear in the same place on the
registered images. This type of geometric correction is used
when it is not necessary to have each pixel assigned a unique
x, y coordinate in a map projection. For example, we might
want to make a cursory examination of two images obtained
on different dates to see if any change has taken place.

Image to Map Geometric Rectification Logic

Two basic operations must be performed to geometrically rectify a remotely sensed image to a map coordinate system:

• Spatial interpolation, and
• Intensity interpolation.


Spatial Interpolation
The geometric relationship between the input pixel coordinates (column and row, referred to as x′, y′) and the associated map coordinates of this same point (x, y) must be identified. A number of GCP pairs are used to establish the nature of the geometric coordinate transformation that must be applied to rectify or fill every pixel in the output image (x, y) with a value from a pixel in the unrectified input image (x′, y′). This process is called spatial interpolation.

Intensity Interpolation
Pixel brightness values must be determined. Unfortunately,
there is no direct one-to-one relationship between the
movement of input pixel values to output pixel locations. It
will be shown that a pixel in the rectified output image often
requires a value from the input pixel grid that does not fall
neatly on a row-and-column coordinate. When this occurs,
there must be some mechanism for determining the brightness
value (BV ) to be assigned to the output rectified pixel. This
process is called intensity interpolation.


Spatial Interpolation Using Coordinate Transformations

Image-to-map rectification requires that polynomial equations be fit to the GCP data using least-squares criteria to model the corrections directly in the image domain without explicitly identifying the source of the distortion. Depending on the distortion in the imagery, the number of GCPs used, and the degree of topographic relief displacement in the area, higher-order polynomial equations may be required to geometrically correct the data. The order of the rectification is simply the highest exponent used in the polynomial.

Concept of how different-order transformations fit a hypothetical surface illustrated in cross-section:
a) Original observations.
b) First-order linear transformation fits a plane to the data.
c) Second-order quadratic fit.
d) Third-order cubic fit.


Spatial Interpolation Using Coordinate Transformations

Generally, for moderate distortions in a relatively small area of an image (e.g., a quarter of a Landsat TM scene), a first-order, six-parameter, affine (linear) transformation is sufficient to rectify the imagery to a geographic frame of reference.

This type of transformation can model six kinds of distortion in the remote sensor data, including:

• translation in x and y,
• scale changes in x and y,
• skew, and
• rotation.
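The six-parameter affine can be sketched as a single function; setting its coefficients appropriately reproduces the translation and rotation examples that follow (the sample input points are hypothetical):

```python
import math

def affine(xp, yp, a0, a1, a2, b0, b1, b2):
    """First-order, six-parameter affine transform:
    x = a0 + a1*x' + a2*y',  y = b0 + b1*x' + b2*y'.
    With suitable coefficients this expresses translation, x/y scale
    changes, skew, and rotation in one expression."""
    return a0 + a1 * xp + a2 * yp, b0 + b1 * xp + b2 * yp

# Pure translation X' = X + 4, Y' = Y + 3 (the text's example), applied to (2, 5):
tx, ty = affine(2, 5, 4, 1, 0, 3, 0, 1)
# Pure rotation by 90 degrees applied to (1, 0):
t = math.radians(90)
rx, ry = affine(1, 0, 0, math.cos(t), -math.sin(t), 0, math.sin(t), math.cos(t))
```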

Translation

Translation: X’=X+4, Y’=Y+3


Rotation

Rotation: X’=X cos(θ)-Y sin(θ), Y’=X sin(θ) + Y cos(θ)

Scale

Scale: X’= X *Sx, Y’=Y *Sy


Skew

Skew (shear): the X axis is transformed differently from the Y axis.

Spatial Interpolation Using Coordinate Transformations:

When all of these operations are combined into a single expression it becomes:

x = a0 + a1x′ + a2y′ + a3x′² + a4x′y′ + a5y′²
y = b0 + b1x′ + b2y′ + b3x′² + b4x′y′ + b5y′²    (2nd-order polynomial; the first three terms of each equation alone give the first-order affine)

where x and y are positions in the output-rectified image or map, and x′ and y′ represent corresponding positions in the original input image. These two equations can be used to perform what is commonly referred to as input-to-output, or forward, mapping. In this example, each pixel in the input grid (e.g., value 15 at x′, y′ = 2, 3) is sent to an x, y location in the output image according to the six coefficients shown.


Example with three GCPs (first-order fit):

x′     y′     x        y
23     31     4123     4567
20     30     4100     4500
40     50     5100     5200

x = ax′ + by′ + c            y = dx′ + ey′ + f

4123 = 23a + 31b + c         4567 = 23d + 31e + f
4100 = 20a + 30b + c         4500 = 20d + 30e + f
5100 = 40a + 50b + c         5200 = 40d + 50e + f

Accuracy of polynomial model / number of GCPs required:

N = (P + 1) × (P + 2) × 0.5

where P is the order of the polynomial and N is the minimum number of GCPs.
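The formula translates directly to code:

```python
def min_gcps(order):
    """Minimum number of GCPs for a polynomial of order P:
    N = (P + 1)(P + 2) / 2."""
    return (order + 1) * (order + 2) // 2

# 1st order (affine) needs 3 GCPs, 2nd order needs 6, 3rd order needs 10;
# in practice many more are collected so a least-squares fit is possible.
```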

In matrix form this is A x = L, where A is the coefficient matrix, x the vector of unknowns, and L the vector of known values.


Solution contd…

x = (Inverse of A) * L

          [ 0.5   -0.475  -0.025 ]
A^-1  =   [-0.5    0.425   0.075 ]
          [ 5     -2.25   -1.75  ]

[a b c]ᵀ = A^-1 [4123 4100 5100]ᵀ  ->  a = -13.5, b = 63.5, c = 2465
[d e f]ᵀ = A^-1 [4567 4500 5200]ᵀ  ->  d = 16,    e = 19,   f = 3610

Check:
x = -13.5 × 23 + 63.5 × 31 + 2465 = 4123
y = 16 × 23 + 19 × 31 + 3610 = 4567
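The worked example can be reproduced end to end. Any linear solver would do (e.g., numpy.linalg.solve); Cramer's rule keeps this sketch dependency-free:

```python
def solve3(A, L):
    """Cramer's-rule solution of a 3x3 linear system, sufficient for the
    three-GCP first-order example in the text."""
    def det(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det(A)
    out = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = L[r]          # replace column i with the known vector
        out.append(det(Ai) / D)
    return out

A = [[23, 31, 1], [20, 30, 1], [40, 50, 1]]   # one row [x', y', 1] per GCP
a, b, c = solve3(A, [4123, 4100, 5100])        # easting coefficients
d, e, f = solve3(A, [4567, 4500, 5200])        # northing coefficients
```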

Compute the Root-Mean-Squared Error of the Inverse Mapping Function

Using the six coordinate transform coefficients that model distortions in the original scene, it is possible to use the output-to-input (inverse) mapping logic to transfer (relocate) pixel values from the original distorted image x′, y′ to the grid of the rectified output image, x, y.
However, before applying the coefficients to create the rectified output
image, it is important to determine how well the six coefficients derived
from the least-squares regression of the initial GCPs account for the
geometric distortion in the input image. The method used most often
involves the computation of the root-mean-square error (RMSerror) for
each of the ground control points.


Spatial Interpolation Using Coordinate Transformation

A way to measure the accuracy of a geometric rectification algorithm (actually, its coefficients) is to compute the Root Mean Squared Error (RMSerror) for each ground control point using the equation:

RMSerror = sqrt( (x′ − x_orig)² + (y′ − y_orig)² )

where x_orig and y_orig are the original row and column coordinates of the GCP in the image, and x′ and y′ are the computed or estimated coordinates in the original image when we utilize the six coefficients. Basically, the closer these paired values are to one another, the more accurate the algorithm (and its coefficients). The square root of the squared deviations represents a measure of the accuracy of each GCP. By computing RMSerror for all GCPs, it is possible to (1) see which GCPs contribute the greatest error, and (2) sum all the RMSerror.
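The per-GCP error is just a Euclidean distance; the sample coordinates below are hypothetical:

```python
import math

def gcp_rms_error(x_prime, y_prime, x_orig, y_orig):
    """RMS error of one GCP: the distance between its original image
    position and the position predicted by the fitted transformation."""
    return math.hypot(x_prime - x_orig, y_prime - y_orig)

# A GCP retransformed to (151.2, 184.1) against its true location (150, 185)
err = gcp_rms_error(151.2, 184.1, 150, 185)
```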

Spatial Interpolation Using Coordinate Transformation

All of the original GCPs selected are usually not used to compute the final six-parameter
coefficients and constants used to rectify the input image. There is an iterative process
that takes place. First, all of the original GCPs (e.g., 20 GCPs) are used to compute an
initial set of six coefficients and constants. The root mean squared error (RMSE)
associated with each of these initial 20 GCPs is computed and summed. Then, the
individual GCPs that contributed the greatest amount of error are determined and
deleted. After the first iteration, this might only leave 16 of 20 GCPs. A new set of coefficients is then computed using the 16 GCPs. The process continues until the RMSE
reaches a user-specified threshold (e.g., <1 pixel error in the x-direction and <1 pixel
error in the y-direction). The goal is to remove the GCPs that introduce the most error
into the multiple-regression coefficient computation. When the acceptable threshold is
reached, the final coefficients and constants are used to rectify the input image to an
output image in a standard map projection as previously discussed.
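The iterative culling loop above can be sketched as follows. This is a schematic of the procedure described in the text, not any particular package's algorithm; the 1-D demonstration fit and its control points are invented for illustration:

```python
def cull_gcps(gcps, fit, rmse, threshold):
    """Iteratively refit, score each GCP, and delete the worst offender
    until the summed RMSE drops below the user-specified threshold
    (or too few GCPs remain for a first-order fit)."""
    gcps = list(gcps)
    while True:
        coeffs = fit(gcps)
        errors = [rmse(g, coeffs) for g in gcps]
        if sum(errors) < threshold or len(gcps) <= 3:
            return gcps, coeffs
        gcps.pop(errors.index(max(errors)))  # drop the worst GCP and refit

# Toy demonstration: a 1-D "transformation" y = s * x fit by least squares,
# with one blundered control point that inflates the error.
fit = lambda pts: sum(x * y for x, y in pts) / sum(x * x for x, y in pts)
rmse = lambda pt, s: abs(pt[1] - s * pt[0])
points = [(1, 2), (2, 4), (3, 6), (4, 8), (1, 50)]   # last one is the blunder
kept, slope = cull_gcps(points, fit, rmse, threshold=0.5)
```

After one iteration the blundered point is deleted and the remaining points fit the true slope exactly.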


Characteristics of Ground Control Points

Point    Order of points   Easting on   Northing on   X′ pixel   Y′ pixel   Total RMS error after
number   deleted           map (x)      map (y)                             this point deleted
1        12                597,120      3,627,050     150        185        0.501
2        9                 597,680      3,627,800     166        165        0.663
…
20       1                 601,700      3,632,580     283        12         8.542

If we delete GCP #20, the total RMSE will be 8.542.

Intensity Interpolation
Intensity interpolation involves the extraction of a brightness value from an x′, y′ location in the original (distorted) input image and its relocation to the appropriate x, y coordinate location in the rectified output image. This pixel-filling logic is used to produce the output image line by line, column by column. Most of the time the x′ and y′ coordinates to be sampled in the input image are floating point numbers (i.e., they are not integers). For example, in the Figure we see that pixel 5, 4 (x, y) in the output image is to be filled with the value from coordinates 2.4, 2.7 (x′, y′) in the original input image.
When this occurs, there are several methods of brightness value (BV) intensity
interpolation that can be applied, including:

• nearest neighbor,
• bilinear interpolation, and
• cubic convolution.

The practice is commonly referred to as resampling.


Nearest-Neighbor Resampling
The brightness value closest to the predicted x’, y’ coordinate
is assigned to the output x, y coordinate.
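A minimal sketch; the grid values and sample coordinates are hypothetical, and indices here are 0-based:

```python
def nearest_neighbor(grid, x, y):
    """Nearest-neighbor resampling: the output pixel takes the brightness
    value of the input pixel whose center is closest to the requested
    (x', y') position."""
    return grid[round(y)][round(x)]

grid = [[10, 20, 30],
        [40, 50, 60],
        [70, 80, 90]]
bv = nearest_neighbor(grid, 1.4, 1.7)  # nearest pixel center is column 1, row 2
```

Because the original brightness values are passed through unaltered, nearest-neighbor is the usual choice when the imagery will later be classified.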

Bilinear Interpolation
Assigns output pixel values by interpolating brightness values in two orthogonal directions in the input image. It basically fits a plane to the 4 pixel values nearest to the desired position (x′, y′) and then computes a new brightness value based on the weighted distances to these points. For example, the distances from the requested (x′, y′) position at 2.4, 2.7 in the input image to the closest four input pixel coordinates (2,2; 3,2; 2,3; 3,3) are computed. Also, the closer a pixel is to the desired x′, y′ location, the more weight it will have in the final computation of the average.

BVwt = Σ(Zk / D²k) / Σ(1 / D²k)

where Zk are the surrounding four data point values, and D²k are the distances squared from the point in question (x′, y′) to these data points.
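The distance-weighted average described in the text can be sketched directly; the four neighbor brightness values below are hypothetical, while the (2.4, 2.7) position and neighbor coordinates come from the text's example:

```python
def weighted_bv(x, y, neighbors):
    """Distance-weighted brightness value as given in the text:
    BV = sum(Z_k / D_k^2) / sum(1 / D_k^2), over the four pixels
    surrounding the requested (x', y') position."""
    num = den = 0.0
    for px, py, z in neighbors:
        d2 = (px - x) ** 2 + (py - y) ** 2   # squared distance to (x', y')
        num += z / d2
        den += 1 / d2
    return num / den

# The text's example position (2.4, 2.7) with its four neighbors
neighbors = [(2, 2, 50), (3, 2, 60), (2, 3, 70), (3, 3, 80)]  # values hypothetical
bv = weighted_bv(2.4, 2.7, neighbors)
```

The nearest neighbor (2, 3) gets the largest weight, so the result is pulled toward its value of 70.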


Bilinear Interpolation

Cubic Convolution
Assigns values to output pixels in much the same manner as bilinear
interpolation, except that the weighted values of 16 pixels surrounding
the location of the desired x’, y’ pixel are used to determine the value
of the output pixel.

BVwt = Σ(Zk / D²k) / Σ(1 / D²k)

where Zk are the surrounding sixteen data point values, and D²k are the distances squared from the point in question (x′, y′) to these data points.


Cubic Convolution

Image Mosaicking
Mosaicking n rectified images requires several steps:

1. Individual images should be rectified to the same map projection and datum. Ideally, rectification of the n images is performed using the same intensity interpolation resampling logic (e.g., nearest-neighbor) and pixel size (e.g., multiple Landsat TM scenes to be mosaicked are often resampled to 30 × 30 m).


Image Mosaicking
2. One of the images to be mosaicked is designated as the base image.
The base image and image 2 will normally overlap a certain amount (e.g.,
20% to 30%).

3. A representative geographic area in the overlap region is identified. This area in the base image is contrast stretched according to user specifications. The histogram of this geographic area in the base image is
extracted. The histogram from the base image is then applied to image 2
using a histogram-matching algorithm. This causes the two images to
have approximately the same grayscale characteristics.

Image Mosaicking
4. It is possible to have the pixel brightness values in one scene simply
dominate the pixel values in the overlapping scene. Unfortunately, this
can result in noticeable seams in the final mosaic. Therefore, it is common
to blend the seams between mosaicked images using feathering. Some
digital image processing systems allow the user to specify a feathering
buffer distance (e.g., 200 pixels) wherein 0% of the base image is used in
the blending at the edge and 100% of image 2 is used to make the output
image. At the specified distance (e.g., 200 pixels) in from the edge, 100%
of the base image is used to make the output image and 0% of image 2 is
used. At 100 pixels in from the edge, 50% of each image is used to make
the output file.
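The linear feathering described above can be sketched as a weight function; the 200-pixel buffer follows the text's example:

```python
def feather_weight(dist_from_edge, buffer_px=200):
    """Edge-feathering weight for the base image: 0% at the overlap edge,
    rising linearly to 100% at the buffer distance. Image 2 gets the
    complement, so the seam blends smoothly."""
    return min(max(dist_from_edge / buffer_px, 0.0), 1.0)

def blend(bv_base, bv_image2, dist_from_edge, buffer_px=200):
    w = feather_weight(dist_from_edge, buffer_px)
    return w * bv_base + (1 - w) * bv_image2

# At 100 px in from the edge each image contributes 50%, as in the text:
val = blend(100, 200, 100)  # -> 150.0
```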


Mosaicking
The seam between adjacent images
being mosaicked may be
minimized using
a) cut-line feathering logic, or
b) edge feathering.

Image Mosaicking
Sometimes analysts prefer to use a linear feature such as a river
or road to subdue the edge between adjacent mosaicked images.
In this case, the analyst identifies a polyline in the image (using
an annotation tool) and then specifies a buffer distance away from
the line as before where the feathering will take place. It is not
absolutely necessary to use natural or man-made features when
performing cut-line feathering. Any user-specified polyline will
do.

