
DIP FUNDAMENTALS
&
IMAGE RECTIFICATION

T S Viswanadham
viswamandham_ts@nrsc.gov.in

A beautiful picture!


Analog vs. Digital
(Continuous vs. Discrete)

A continuous-time signal will contain a value for all real numbers along the time axis. In contrast to this, a discrete-time signal, often created by sampling a continuous signal, will only have values at equally spaced intervals along the time axis.

Rows: 153
Columns: 171

Digital Photo (Blue band only)


DIGITAL IMAGE

A single-band (B&W) digital image is a two-dimensional array of numbers. The image consists of M x N pixels, and the location of each pixel is specified by (i, j).

The amplitude f at any pair of coordinates (i, j) is called the intensity, gray level, pixel value or DN of the image at that point.
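As a minimal sketch (using a synthetic NumPy array in place of real sensor data), the M x N array view and the per-pixel DN access described above might look like this:

```python
import numpy as np

# Synthetic single-band image: M = 153 rows, N = 171 columns, 8-bit DNs
M, N = 153, 171
image = np.random.randint(0, 256, size=(M, N), dtype=np.uint8)

i, j = 10, 25          # row, column of one pixel
dn = image[i, j]       # f(i, j): intensity / gray level / DN at that point
print(image.shape, dn)
```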


RASTER MODEL

► Cell or "pixel" is the basic spatial unit for a digital image, also known as Raster data
► Pixels are generally square in shape (Square Tessellation)
► Pixels are organized into an array of Rows and Columns called a Raster/Grid
► Rows and columns are numbered from 0
► Hence, the origin for raster data is the upper-left corner
► Pixel locations are referenced by their row and column position
► Every pixel can be uniquely identified by its row and column position
► Pixels are assigned an integer or floating-point value, known as the Gray Value / DN
► Each pixel represents some kind of geographic phenomenon
► The number of rows and columns does not have to be the same

Number of bits: n
Number of gray levels: 2^n
Range of gray levels: 0 to 2^n - 1
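A short sketch of the bit-depth relations above (the bit depths chosen are only illustrative):

```python
# Number of gray levels and their range for an n-bit image
for n in (1, 8, 10, 12):
    levels = 2 ** n
    print(f"{n}-bit image: {levels} gray levels, range 0 to {levels - 1}")
```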


Binary image

Binary images are also called bi-level or two-level. Binary images often arise in image processing as masks or as the result of certain operations such as segmentation, thresholding, and dithering.
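Because binary images commonly arise from thresholding, a minimal thresholding sketch (synthetic grayscale data, NumPy assumed) might look like this:

```python
import numpy as np

gray = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)
threshold = 128
binary = (gray >= threshold).astype(np.uint8)   # 1 where bright, 0 elsewhere
print(np.unique(binary))                        # -> [0 1]
```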

Grayscale image

Grayscale images, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the lowest intensity to white at the highest. Grayscale images are also called monochromatic, denoting the absence of any chromatic variation. Grayscale images are often the result of measuring the intensity of light at each pixel in a single band of the electromagnetic spectrum.


Multispectral image

A multispectral image is one that captures image data at specific wavelength regions across the electromagnetic spectrum. The wavelengths may be separated by filters or by the use of instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible range, such as infrared. Multispectral imaging can allow extraction of additional information that the human eye fails to capture with its receptors for red, green and blue.


PIXEL SIZE
vs.
SCALE

The higher the resolution of a raster, the smaller the cell size and, thus, the greater the detail. This is the opposite of scale: the smaller the scale, the less detail shown.


Colour Image Display Mechanism

A colour monitor works on the "additive" principle of mixing Red, Green and Blue light in the right proportions in order to create almost any colour. You can see this "RGB" pattern by looking at a TV screen or computer monitor with a magnifying glass; you will typically see a pattern of dots.
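A minimal sketch of the band-to-colour-gun assignment described here, stacking three single-band arrays into one RGB display array (synthetic bands are assumed):

```python
import numpy as np

# Three co-registered single-band images (synthetic here)
red_band   = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
green_band = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
blue_band  = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

# Assign bands to the R, G, B colour guns: shape (rows, cols, 3)
rgb = np.dstack([red_band, green_band, blue_band])
print(rgb.shape)
```

For a false colour composite, the NIR band would be assigned to the red gun instead of the red band.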

Single-band images → image-to-colour-gun assignment → displayed colour image (TRUE COLOUR IMAGE)


IRS LISS II Single band images - Kakinada, A.P.


Band 1 Band 2

Band 3 Band 4

Concept of False Colour Composite - Kakinada, AP


Band 2 Band 3

Band 4 FCC


FCC (341) FCC (324)

FCC (132)

IMAGE HISTOGRAM

IRS– 1C NIR Band image of New Delhi


[Histogram plot: Frequency (y-axis) vs. DN values (x-axis)]

The histogram shows the overall tonal (quantitative) distribution of DN values in the image, but not their spatial distribution.
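A minimal sketch of computing such a histogram with NumPy (a synthetic band stands in for the NIR image):

```python
import numpy as np

band = np.random.randint(8, 90, size=(512, 512), dtype=np.uint8)   # synthetic NIR band
counts, bin_edges = np.histogram(band, bins=256, range=(0, 256))
# counts[k] = number of pixels with DN == k (tonal, not spatial, distribution)
print(counts[:10])
```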

Linear Stretch (NIR band)

The input DN range is from 8 to 89.

Because of the bimodal histogram and the wide gap between the modes, the full dynamic range of the output device is not utilized.
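A minimal linear-stretch sketch for the case above, mapping the input range 8 to 89 onto the full 0 to 255 output range (synthetic data assumed):

```python
import numpy as np

band = np.random.randint(8, 90, size=(512, 512)).astype(np.float64)

dn_min, dn_max = 8, 89                      # input DN range from this slide
stretched = (band - dn_min) / (dn_max - dn_min) * 255.0
stretched = np.clip(stretched, 0, 255).astype(np.uint8)
print(stretched.min(), stretched.max())
```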


Non-Linear Enhancement

X_out = f(X_in), where f is a non-linear function.

A simple one is the power law:
X_out = (X_in)^N
When N = 1 it is linear; N = 2: square; N = 3: cube
When N = 0.5: square root; N = 0.33: cube root
X_out = Log(X_in)  : Logarithmic stretch
X_out = Exp(X_in)  : Exponential stretch
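A sketch of the non-linear functions listed above; normalising the DNs to [0, 1] first is an added assumption so that the logarithmic and exponential curves stay well-behaved:

```python
import numpy as np

band = np.random.randint(8, 90, size=(256, 256)).astype(np.float64)
x = (band - band.min()) / (band.max() - band.min())   # normalise DNs to [0, 1]

power    = x ** 2                          # N = 2: square stretch (darkens mid-tones)
root     = x ** 0.5                        # N = 0.5: square-root stretch (brightens)
logarith = np.log1p(x) / np.log(2.0)       # logarithmic stretch, rescaled to [0, 1]
expo     = (np.exp(x) - 1) / (np.e - 1)    # exponential stretch, rescaled to [0, 1]

out = (root * 255).astype(np.uint8)        # back to 8-bit DNs for display
```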

Logarithmic & Exponential Stretches

Exponential
Logarithmic


Linear stretch Histogram equalization

Logarithmic stretch Exponential stretch

Raw Data

A low-pass filter smoothens the image, and hence sharpness in the image is lost. Usually low-pass filters are not applied to an image except when speckle needs to be suppressed.

Low Pass filtered Image

3 x 3 mean kernel:
(1/9) x
1 1 1
1 1 1
1 1 1
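A sketch of applying the 3 x 3 mean (low-pass) kernel shown above; scipy.ndimage is assumed to be available:

```python
import numpy as np
from scipy import ndimage

image = np.random.randint(0, 256, (256, 256)).astype(np.float64)

kernel = np.ones((3, 3)) / 9.0                      # the (1/9) x all-ones kernel above
smoothed = ndimage.convolve(image, kernel, mode="reflect")
```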


High Pass Filters

Edge detection kernel:
-1 -1 -1
-1  8 -1
-1 -1 -1

Edge enhancement kernel (edge detection + identity):
-1 -1 -1     0 0 0     -1 -1 -1
-1  8 -1  +  0 1 0  =  -1  9 -1
-1 -1 -1     0 0 0     -1 -1 -1
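A sketch of the edge-detection and edge-enhancement kernels above, again assuming scipy.ndimage:

```python
import numpy as np
from scipy import ndimage

image = np.random.randint(0, 256, (256, 256)).astype(np.float64)

edge_detect = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

identity = np.zeros((3, 3))
identity[1, 1] = 1.0
edge_enhance = edge_detect + identity               # centre weight becomes 9

edges     = ndimage.convolve(image, edge_detect,  mode="reflect")
sharpened = ndimage.convolve(image, edge_enhance, mode="reflect")
```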

Raw data Low pass

Sharpen Edge Detection


Normalized Difference Vegetation Index


• Normalized Difference Vegetation Index (NDVI) is computed from the Near-IR and Red spectral bands using the equation below (a minimal computation sketch follows):

  NDVI = (NIR - Red) / (NIR + Red)

• The NDVI value is negative for water, near 0 for bare soil, and > 0 for vegetation, depending on biomass and health condition
• NDVI is used to study vegetation status
• NDVI reduces the topographic effect. It also reduces the effect of the gain and offset factors of the sensor
• NDVI is less sensitive to noise compared to the Ratio Vegetation Index
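A minimal NDVI computation sketch (synthetic NIR and Red bands; the small epsilon is an added safeguard against division by zero):

```python
import numpy as np

nir = np.random.randint(0, 256, (256, 256)).astype(np.float64)   # synthetic NIR band
red = np.random.randint(0, 256, (256, 256)).astype(np.float64)   # synthetic Red band

ndvi = (nir - red) / (nir + red + 1e-9)
# ndvi < 0 : water;  ndvi near 0 : bare soil;  ndvi > 0 : vegetation
```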

False Colour Composite and NDVI image of Kakinada and surroundings, A.P.
FCC    NDVI Image (B/W)


IRS- LISS-II, FCC (4,3,2 in R,G,B) part of Jaipur

PC1 PC2

PC3
PC4


FCC of PC1,2,3

Eigenvalues: 170.06, 29.96, 6.40, 0.98

Eigenvectors:
PC1:  0.362   0.333   0.690   0.531
PC2: -0.308  -0.216  -0.383   0.840
PC3: -0.819  -0.125   0.553  -0.081
PC4:  0.319  -0.909   0.267   0.005
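A minimal sketch of how such eigenvalues and eigenvectors can be obtained from a 4-band image with NumPy (synthetic bands here; the values on this slide come from the actual LISS-II scene):

```python
import numpy as np

# Synthetic 4-band image reshaped to (n_pixels, 4)
bands = np.random.randint(0, 256, size=(512, 512, 4)).astype(np.float64)
pixels = bands.reshape(-1, 4)

cov = np.cov(pixels, rowvar=False)                 # 4 x 4 band covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)    # eigh: symmetric matrix

# Sort by decreasing variance; columns of 'eigenvectors' define PC1..PC4
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

pcs = (pixels - pixels.mean(axis=0)) @ eigenvectors   # principal-component images
pcs = pcs.reshape(512, 512, 4)
```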

Objective of Classification

• The objective of classification is to allocate all pixels in an image to one of a set of land cover classes or themes.
• Hard classifications assign each pixel to a single class. Such techniques include density slicing, maximum likelihood classification, and ISODATA classification.
• More recent soft classification techniques (e.g., fuzzy classification, spectral mixture models) attempt to consider the varying proportions of specific classes within each pixel.


Supervised vs. Unsupervised Classification
• Supervised classification algorithms require maximum human supervision.
• The 'supervision' is accomplished using training samples of the desired classes present on the image.
• Unsupervised classifiers require less human interaction.
• Based on the number of classes needed, the computer groups spectrally similar pixels into the desired number of classes using clustering methods, as sketched below.
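A minimal sketch of the unsupervised approach, grouping pixels of a multi-band image into a chosen number of classes; scikit-learn's KMeans is used here as a stand-in for the ISODATA-style clustering mentioned above:

```python
import numpy as np
from sklearn.cluster import KMeans

bands = np.random.randint(0, 256, size=(256, 256, 4)).astype(np.float64)
pixels = bands.reshape(-1, bands.shape[-1])        # one row per pixel, one column per band

n_classes = 6
labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(pixels)
classified = labels.reshape(bands.shape[:2])       # thematic map: one class ID per pixel
```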

An overview of classification


Preparation for Supervised Image Classification

FCC 4,3,2 (RGB) Marking training sets

Image Classification

Supervised (Maximum likelihood) Unsupervised 6 Classes


PROJECTIONS
A map projection is the manner in which the
spherical surface of the Earth is represented on a
flat (two-dimensional) surface. This can be
accomplished by direct Geometric projection or
by a Mathematically derived transformation.

or
A method by which the curved 3-D surface of the
earth is represented on a flat 2-D map surface.

Geometrical projections by developable surfaces

A surface that can be unfolded or unrolled into a flat plane or sheet without stretching, tearing or shrinking is called a 'developable surface'.

The three most common developable surfaces are the cylinder, cone, and plane.
• Cylindrical projections result from projecting a spherical surface onto a cylinder
• Conic projections result from projecting a spherical surface onto a cone
• Azimuthal projections result from projecting a spherical surface onto a plane
• Miscellaneous projections include un-projected rectangular latitude and longitude grids and other examples that do not fall into the cylindrical, conic, or azimuthal categories


Cylindrical projections

Normal aspect: cylindrical axis is parallel to the polar axis of the Earth
Transverse aspect: cylindrical axis is perpendicular to the polar axis of the Earth
Oblique aspect: cylindrical axis is at an angle to the polar axis of the Earth

Classification of projections according to aspect


Cylindrical Equal Area Projection with Normal aspect

Cylindrical Equal Area Projection with Transverse aspect

Cylindrical Equal Area Projection with Oblique aspect

Standard parallels: Tangent | Secant | Polycylindrical

Classification of projections according to earth contact with the developable surface (cylinder)


Map projections always introduce error and distortion.

Distortion may be minimized in one or more of the following properties:

Shape - Conformal
Area - Equivalence / equal area
Distance - Equidistant
Direction - True direction


POLYCONIC

A conic projection, mathematically based on an infinite number of cones tangent to an infinite number of parallels.

► Directions are true only along the central meridian.
► Distances are true only along each parallel and along the central meridian.
► Shapes and areas are true only along the central meridian.
► Distortion increases away from the central meridian.
► The map is a compromise of many properties.
► It is not conformal, perspective, or equal area.

Universal Transverse Mercator (UTM)

No. of zones: 60 (between 84° N & 80° S; the remaining polar areas are covered by the UPS system)
Zone dimension: 6° longitude x 8° latitude
Scale factor: 0.9996 at the central meridian (CM)
Projection: Secant TM (to maintain an accuracy of 1 in 2500)
Unit: metre
Latitude of origin: Equator
False Easting: 500,000 m
Standard meridians: 180 km on either side of the CM
False Northing: 0 m (10,000,000 m for the southern hemisphere)
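A sketch of converting geographic coordinates to UTM with pyproj (assumed installed); EPSG:32644 (WGS 84 / UTM zone 44N) and the approximate Kakinada coordinates are illustrative choices, not values from this slide:

```python
from pyproj import Transformer

# Geographic (lon/lat, WGS 84) -> UTM zone 44N (EPSG:32644), in metres
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32644", always_xy=True)

lon, lat = 82.25, 16.95          # approximate location of Kakinada, A.P.
easting, northing = to_utm.transform(lon, lat)
print(round(easting), round(northing))   # easting includes the 500,000 m false easting
```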


IMAGE REGISTRATION

All remote sensing images are subject to some form of geometric distortion, depending on the manner in which the data are acquired. These errors may be due to a variety of factors, including one or more of the following:

► perspective of the sensor optics
► motion of the scanning system
► motion and (in)stability of the platform
► platform altitude, attitude, and velocity
► terrain relief, and
► curvature and rotation of the Earth


IMAGE DISTORTIONS

Geometric distortions

Systematic: Earth rotation, Earth curvature, panoramic distortion
Random: altitude variations; attitude variations (pitch, roll, yaw)





ATTITUDE DISTORTIONS

Source: Remote sensing geology, by Ravi P. Gupta

Geometric correction involves:

1. Registration
2. Resampling

Usually, rectification is the conversion of data file coordinates to some other grid and coordinate system, called a reference system. Rectifying or registering image data involves the following general steps, regardless of the application:

1. Locate GCPs.
2. Compute and test a transformation.
3. Create an output image file.

The pixels must be resampled to conform to the new grid.
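A minimal sketch of step 2, fitting a first-order (affine) polynomial from GCP image coordinates to reference-map coordinates by least squares; the GCP values below are synthetic:

```python
import numpy as np

# GCPs: (col, row) in the image and (x, y) in the reference system (synthetic values)
src = np.array([[10, 12], [200, 15], [25, 180], [210, 190]], dtype=float)     # image col, row
dst = np.array([[500100, 1896500], [500860, 1896490],
                [500160, 1895830], [500900, 1895800]], dtype=float)           # map x, y

# First-order polynomial: x = a0 + a1*col + a2*row (same form for y)
A = np.column_stack([np.ones(len(src)), src])          # design matrix [1, col, row]
coef, *_ = np.linalg.lstsq(A, dst, rcond=None)         # 3 x 2 coefficient matrix

residuals = A @ coef - dst                             # RMS error tests the transformation
rms = np.sqrt((residuals ** 2).sum(axis=1).mean())
print(coef, rms)
```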


Rectification
The process of transforming the data from one grid system into
another grid system using a geometric transformation. Rectification
is not necessary if there is no distortion in the image.

Registration
The process of making an image conform to another image. A map
coordinate system is not necessarily involved.

Georeferencing
The process of assigning map coordinates to image data. The
image data may already be projected onto the desired plane, but
not yet referenced to the proper coordinate system. Rectification,
by definition, involves georeferencing.

Orthorectification
Orthorectification is a form of rectification that corrects for terrain
displacement and can be used if there is a DEM of the study area.
It is based on collinearity equations, which can be derived by using
3D GCPs. In relatively flat areas, orthorectification is not necessary,
but in mountainous areas (or on aerial photographs of buildings),
where a high degree of accuracy is required, orthorectification is
recommended.

Resampling Methods (see the sketch below)

► Nearest Neighbor
► Bilinear Interpolation
► Cubic Convolution
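A sketch of the three methods using scipy.ndimage.map_coordinates (assumed available); the interpolation order selects nearest neighbour (0), bilinear (1), or a cubic spline (3), the latter standing in for cubic convolution:

```python
import numpy as np
from scipy.ndimage import map_coordinates

image = np.random.randint(0, 256, (100, 100)).astype(np.float64)

# Output grid expressed in (fractional) input-image coordinates
rows, cols = np.meshgrid(np.linspace(0, 99, 150), np.linspace(0, 99, 150), indexing="ij")

nearest  = map_coordinates(image, [rows, cols], order=0)   # nearest neighbour
bilinear = map_coordinates(image, [rows, cols], order=1)   # bilinear interpolation
cubic    = map_coordinates(image, [rows, cols], order=3)   # cubic (spline) interpolation
```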


There are several reasons for rectifying image data:

► comparing pixels scene to scene in applications such as change detection or thermal inertia mapping (day and night comparison)
► developing GIS databases for GIS modeling
► identifying training samples according to map coordinates prior to classification
► creating accurately scaled photomaps
► overlaying an image with vector data, such as ArcInfo
► comparing images that are originally at different scales
► extracting accurate distance and area measurements
► mosaicking images
► performing any other analyses requiring precise geographic locations


