Rs and Gis-Unit-3
❑ Specialized printing equipment was used to code the images, which were reproduced at the receiver using
telegraphic printers.
In 1921, a photographic printing press improved the resolution and tonal quality of images.
The Bartlane system was initially capable of coding 5 distinct brightness levels; this increased to 15 by 1929.
The Landsat 7 sensor records 8-bit images; thus it can measure 256 unique grey values of the reflected
energy, while Ikonos-2 has an 11-bit radiometric resolution (2048 grey values).
History
In 1964, computer processing techniques were used at JPL to improve pictures of the Moon transmitted
by Ranger 7.
Typical applications:
➢ Noise filtering
➢ Content Enhancement
➢ Contrast enhancement
➢ Deblurring
Filtering is a technique for modifying or enhancing an image.
https://www.youtube.com/watch?v=mG45mTcz_M4
Filtering is a neighborhood operation, in which the value of any given pixel in the
output image is determined by applying some algorithm to the values of the pixels in
the neighborhood of the corresponding input pixel.
A pixel's neighborhood is some set of pixels, defined by their locations relative to that
pixel.
• Low pass filters (Smoothing): low pass filtering, also known as smoothing, is employed to remove high spatial frequency noise from a
digital image.
• High pass filters (Edge Detection, Sharpening): a high-pass filter can be used to make an image appear sharper.
https://www.mathworks.com/discovery/image-enhancement.html
Image enhancement refers to the process of highlighting certain information in an image, as well as weakening
or removing any unnecessary information according to specific needs.
Fig. Thermal / IR view of a Chip
Fig. IR satellite view of Augustine volcano
Fig. IR image of a sloth at night
Night vision system used by soldiers
Fig. IR Satellite Imagery
Nebula NGC 151401 in Visible (left) and Infrared (right)
Fig. Fetus and Thyroid using ultrasound
Fig. Detecting Earthquakes and its cause using cross sectional view
Seismic imaging directs an intense sound source into the ground to evaluate subsurface conditions and
to possibly detect high concentrations of contamination.
Fig. Detect ozone layer damage
Fig. Identify sun spots
Fig. Identify solar flares
Fig. X ray of neck
Fig. X ray of head
Fig. X ray of a Glucose meter circuit
Fig. Gamma ray exposed images
Fig. Gamma ray bursts in space
Fig. Satellite images of Mumbai suburban(Left) and Gateway of India (Right)
Since the digital image is “invisible”, it must be
prepared for viewing on one or more output devices
(laser printer, monitor, etc.)
It might be possible to analyze the image in the computer and provide cues to the radiologists
to help detect important/suspicious structures (e.g. Computer-Aided Diagnosis, CAD)
DIGITAL IMAGE PROCESSING
• Remote sensing data can be
interpreted visually using a hard copy
of the image, or digitally from the
digital image using suitable
software.
1.Preprocessing
2.Enhancement
3.Transformation
4.Classification
Image Preprocessing
Operations aim to correct distorted or degraded image
data to create a more faithful representation of the
original scene.
Pre – processing
Remotely sensed raw data generally contains flaws and deficiencies as received from the imaging sensor mounted on
the satellite.
The correction of deficiencies and removal of flaws present in the data through some methods are termed as
pre–processing methods.
This correction model involves correcting geometric distortions, calibrating the data radiometrically, and
eliminating noise present in the data.
All the pre–processing methods are considered under three heads, namely,
a) Radiometric correction method,
b) Atmospheric correction method,
c) Geometric correction methods.
Radiometric correction method
Radiometric corrections are transformations on the data in order to remove errors that are geometrically
independent.
Radiometric corrections are also called cosmetic corrections and are done to improve the visual appearance
of the image.
1. Correction for missing lines
2. Correction for periodic line striping
3. Random noise correction
4. Sun spot
5. Shadowing
6. Random Noise
7. Line-Dropouts
8. De-Striping
Correction for Missing Lines
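A standard correction for a dropped scan line is to replace it with the average of the adjacent good lines. A minimal NumPy sketch (the function name and the tiny example band are illustrative, not from the original slides):

```python
import numpy as np

def fix_missing_lines(band, bad_rows):
    """Replace dropped scan lines with the average of the
    adjacent lines (a standard line-dropout correction)."""
    out = band.astype(float).copy()
    for r in bad_rows:
        above = out[r - 1] if r > 0 else out[r + 1]
        below = out[r + 1] if r < out.shape[0] - 1 else out[r - 1]
        out[r] = (above + below) / 2.0
    return out.astype(band.dtype)

# A tiny 4x4 band with row 2 dropped (all zeros):
band = np.array([[10, 10, 10, 10],
                 [20, 20, 20, 20],
                 [ 0,  0,  0,  0],
                 [40, 40, 40, 40]], dtype=np.uint8)
fixed = fix_missing_lines(band, bad_rows=[2])
# row 2 becomes (20 + 40) / 2 = 30
```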
The atmosphere affects the measured brightness value of a pixel. Other
difficulties are caused by variation in the illumination geometry. Atmospheric
path radiance introduces haze in the imagery, thereby decreasing the contrast of
the data.
Figure 1.5: Major Effects due to Atmosphere.
Figure 1.6: Atmospheric correction
Geometric correction methods
Remotely sensed images are not maps.
Frequently information extracted from remotely sensed images is integrated with the map data in Geographical
Information System (GIS).
The transformation of the remotely sensed image into a map with the desired scale and projection properties is called
geometric correction.
Geometric correction of a remotely sensed image is required when the image is to be used in one of the
following circumstances:
To transform an image to match your map projection
To locate points of interest on the map and image.
To overlay a temporal sequence of images of the same area, perhaps acquired by different sensors.
To integrate remotely sensed data with GIS.
Image Geometric Correction
Geometric Correction is the process of giving an image a Real World
coordinate system.
What is Geometric correction?
In a 2-bit image, there are four possible combinations: 00, 01, 10, and 11. If "00" represents
black, and "11" represents white, then "01" equals dark gray and "10" equals light gray. The
bit depth is two, but the number of tones that can be represented is 2² or 4. At 8 bits, 256
(2⁸) different tones can be assigned to each pixel.
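The tone counts above follow directly from the bit depth, as a one-line Python check shows:

```python
# Number of distinct tones for a given bit depth is 2 ** bits.
tones = {bits: 2 ** bits for bits in (1, 2, 8, 11)}
# 2 bits -> 4 tones (00, 01, 10, 11); 8 bits -> 256; 11 bits -> 2048
print(tones)
```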
Image Processing:
Digital Image Processing contains five different operations, namely,
A.Image Registration
B. Image Enhancement
C. Image Filtering
D.Image Transformation
E. Image Classification
All these are discussed in detail below:
Sequence for digital image analysis
Fig. Merging of LISS-III and PAN data into a merged product; image mosaicking of adjacent scenes (57 M 6 and 57 M 10).
Image negative: y = L − x, for 0 ≤ x ≤ L

Contrast Stretching

A piecewise linear stretch maps input grey level x to output y using break points (a, ya) and (b, yb):

y = (ya / a) · x, for 0 ≤ x < a
y = ((yb − ya) / (b − a)) · (x − a) + ya, for a ≤ x < b
y = ((L − yb) / (L − b)) · (x − b) + yb, for b ≤ x ≤ L
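The piecewise linear stretch above can be sketched in NumPy as follows (parameter names mirror the formula; the break-point values in the example are illustrative, and a is assumed to be greater than 0):

```python
import numpy as np

def piecewise_stretch(x, a, b, ya, yb, L=255):
    """Piecewise linear contrast stretch with break points
    (a, ya) and (b, yb) on the 0..L grey-level range."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    lo = x < a
    mid = (x >= a) & (x < b)
    hi = x >= b
    y[lo] = (ya / a) * x[lo]
    y[mid] = (yb - ya) / (b - a) * (x[mid] - a) + ya
    y[hi] = (L - yb) / (L - b) * (x[hi] - b) + yb
    return y

vals = piecewise_stretch([0, 50, 100, 200, 255],
                         a=50, b=200, ya=20, yb=230)
# the break points map exactly: x=50 -> 20 and x=200 -> 230
```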
Other pixel values are preserved.
Fig. Histogram of the image (pixel count vs. grey level).
Why Histogram?
Fig. Image histogram (pixel counts on the order of 10⁴).
Image Example
Fig. Image before (left) and after (right) enhancement.
Histogram Comparison
Fig. Histograms before (left) and after (right) enhancement.
A digital image processing system allows for assigning a different colour to each DN or
a range of DNs in an image.
IMAGE FILTERING / SPATIAL FILTERING
❖Spatial filtering encompasses another set of digital processing functions that are used to
enhance the appearance of an image.
❖Spatial filters highlight or suppress specific features in an image based on their spatial
frequency which is related to the concept of image texture (manner in which grey scale
values change relative to their neighbours within an image).
❖ A common filtering procedure involves moving a window of a few pixels in dimension
over each pixel in the image, applying a mathematical technique using the pixel values
under the window, and replacing the central pixel with the new value.
❖A low pass filter is designed to emphasize larger, homogeneous areas of similar tone and
reduce the smaller detail in an image. It serves to smooth the appearance of the image.
❖High pass filters serve to sharpen the appearance of fine details in an image.
Highpass filters (sharpening)
A high pass filter is the basis for most sharpening methods. An
image is sharpened when contrast is enhanced between adjoining
areas with little variation in brightness or darkness.
A high pass filter tends to retain the high frequency information
within an image while reducing the low frequency information.
The kernel of the high pass filter is designed to increase the
brightness of the center pixel relative to neighboring pixels. The
kernel array usually contains a single positive value at its center,
which is completely surrounded by negative values. The
following array is an example of a 3 by 3 kernel for a high pass
filter:
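A commonly used 3 × 3 high-pass kernel of this form (shown here as a NumPy array; the exact values are illustrative and not necessarily those from the original slide) is:

```python
import numpy as np

# A 3x3 high-pass (sharpening) kernel: a single positive centre
# value surrounded by negatives. The weights sum to zero, so
# flat (low-frequency) regions map to zero and edges stand out.
highpass = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]])
```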
Fig. Original image (left) and high-pass filtered image (right).
Lowpass filter (smoothing)
A low pass filter is the basis for most smoothing methods. An image is smoothed by decreasing the disparity between pixel
values by averaging nearby pixels. Using a low pass filter tends to retain the low frequency information within an image while
reducing the high frequency information.
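A minimal sketch of a 3 × 3 averaging (low pass) filter in NumPy, showing how a single noisy bright pixel is smoothed away (edge pixels are left unfiltered in this simplified version):

```python
import numpy as np

def mean_filter3(img):
    """3x3 moving-average (low pass) filter; border pixels are
    left unchanged in this minimal sketch."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            # Replace the centre pixel with the mean of its 3x3 window.
            out[i, j] = img[i - 1:i + 2, j - 1:j + 2].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 9.0            # a single bright "noise" pixel
sm = mean_filter3(img)
# the spike is spread over its 3x3 neighbourhood: sm[2, 2] == 1.0
```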
Edge Enhancement
Much valuable information can be derived from edges. Thus, edge enhancement delineates
these edges and makes the shape and details comprising the image more conspicuous and
perhaps easier to analyze. The edges may be enhanced using either linear or non-linear edge
enhancement techniques.
Non-linear edge enhancement: Nonlinear filters are widely used in image processing for
reducing noise without blurring or distorting edges. Median filters were among the first
nonlinear filters used for this purpose.
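A minimal median-filter sketch in NumPy, showing the key property named above: an impulse-noise pixel is removed while a step edge stays sharp (the example array is illustrative):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: removes impulse noise while keeping
    edges sharp (border pixels left unchanged here)."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# A vertical step edge (0 | 9) with one salt-noise pixel at (2, 1):
img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 9, 9, 9],
                [0, 0, 9, 9]], dtype=float)
filtered = median_filter3(img)
# noise at (2, 1) is removed (-> 0) but the edge at (1, 2) stays 9
```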
Recently a wider variety of options has become available, including mathematical morphology,
anisotropic diffusion, and wavelet-based enhancement techniques. Most of these methods focus
on the problem of …
Image Transformation
All transformations of an image are based on arithmetic operations. The resulting images
may have properties that make them more suited for a particular purpose than the
original.
Two transformations are commonly used: the Normalized Difference Vegetation Index (NDVI) and Principal Component Analysis (PCA).
Typically, the spectral bands used for NDVI are the visible Red band and the Near Infrared
band.
NDVI = (NIR − Red) / (NIR + Red),   −1 < NDVI < 1
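The NDVI formula is a one-liner in NumPy; the small epsilon below is an illustrative guard against division by zero, and the reflectance values in the example are hypothetical:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); values fall in (-1, 1)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-10)  # epsilon avoids 0/0

# Healthy vegetation reflects strongly in NIR:
veg = ndvi(0.50, 0.08)    # high NDVI: dense vegetation
soil = ndvi(0.30, 0.30)   # near 0: bare soil / built-up
```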
Principal Component Analysis (PCA)
PCA is a method of re-expressing the information content of a multispectral image set of “m”
images in terms of “m” principal components.
The application of PCA on raw remote sensing data produces a new image which is
more interpretable than the original data.
The objective of this transformation is to reduce the number of bands in the data and
compress as much of the information in the original bands into fewer bands. The “new”
bands that result from this statistical procedure are called Principal Components.
This process attempts to maximize the amount of information from the original data into
the least number of new components.
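A minimal NumPy sketch of PCA on a band stack (eigendecomposition of the band-to-band covariance matrix; the tiny synthetic stack is illustrative):

```python
import numpy as np

def pca_bands(stack, n_components=2):
    """PCA on an (m, rows, cols) multispectral stack: returns
    n_components new 'bands' ordered by explained variance."""
    m, r, c = stack.shape
    X = stack.reshape(m, -1).astype(float)     # one row per band
    X -= X.mean(axis=1, keepdims=True)         # centre each band
    cov = np.cov(X)                            # m x m band covariance
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(vals)[::-1]             # largest variance first
    comps = vecs[:, order[:n_components]].T @ X
    return comps.reshape(n_components, r, c)

# Three perfectly correlated 4x4 bands compress into one component:
rng = np.random.default_rng(0)
base = rng.random((4, 4))
stack = np.stack([base, 2 * base, 3 * base])
pcs = pca_bands(stack, n_components=2)
# pcs[0] carries essentially all the variance; pcs[1] is ~zero
```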
IMAGE TRANSFORMATIONS
Fig. Original image and its PCA result.
Fig. FCC and NDVI images.
IMAGE CLASSIFICATION
Spectral pattern recognition refers to the family of classification procedures that utilize
pixel-by-pixel spectral information as the basis for automated land cover classification. Spatial
pattern recognition classifies an image based on the spatial relationship of each pixel with its surrounding pixels.
➢ Types of Classification :
1. Supervised Classification
2. Unsupervised Classification
LEGEND
Kharif crop
Fallow
Plantation
Forest
Wasteland/Scrub
Cloud/shadow
Waterbody
Builtup area
Snow
Mangrove
Grassland
Shifting cultivation
SUPERVISED CLASSIFICATION
➢ Once the computer has determined the signatures for each class, each pixel
in the image is compared to these signatures and labeled as the class it most
closely resembles digitally.
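A minimal sketch of this labelling step using the minimum-distance-to-means rule (one common supervised classifier; the two-band class signatures below are hypothetical):

```python
import numpy as np

def min_distance_classify(pixels, signatures):
    """Assign each pixel to the class whose mean signature is
    nearest in spectral space (minimum-distance-to-means rule)."""
    pixels = np.asarray(pixels, dtype=float)        # (n, bands)
    sigs = np.asarray(signatures, dtype=float)      # (k, bands)
    # Euclidean distance from every pixel to every class mean:
    d = np.linalg.norm(pixels[:, None, :] - sigs[None, :, :], axis=2)
    return d.argmin(axis=1)                         # class index per pixel

sigs = [[0.1, 0.5],   # e.g. water (hypothetical 2-band mean)
        [0.4, 0.2]]   # e.g. soil
labels = min_distance_classify([[0.12, 0.48], [0.38, 0.25]], sigs)
# -> [0, 1]: each pixel is labelled with its nearest signature
```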
Fig. Feature-space clusters (Clusters 1–6) and the next unknown pixel to be classified.
Unsupervised Classification
The result of the unsupervised classification is not yet information until the analyst
determines the ground cover for each of the clusters:
??? → Water
??? → Water
??? → Conifer
??? → Conifer
??? → Hardwood
??? → Hardwood
Unsupervised Classification
Unsupervised classifiers do not utilize training data as the basis for classification.
These classifiers examine the unknown pixels and aggregate them into a number of classes based on natural groupings
or clusters present in the image values.
The classes that result from unsupervised classification are spectral classes. The analyst must compare the
classified data with some form of reference data to identify the informational value of the spectral classes.
It has been found that in areas of complex terrain, the unsupervised approach is preferable.
In such conditions, if the supervised approach is used, the user will have difficulty in selecting training sites
because of the variability of spectral response within each class. Consequently, ground data must be
collected, which is very time consuming.
Also, the supervised approach is subjective in the sense that the analyst tries to classify information categories,
which are often composed of several spectral classes, whereas spectrally distinguishable classes will be
revealed by the unsupervised approach.
Additionally, the unsupervised approach has the potential advantage of revealing discriminable classes unknown
from a previous supervised classification.
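The automatic grouping at the heart of unsupervised classification can be sketched with a minimal k-means clusterer (a simplified stand-in for algorithms such as ISODATA, not the specific method used in ENVI; the 2-band pixel values are illustrative):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: groups pixels into k spectral clusters
    without any training data."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    # Start from k distinct pixels as initial cluster centres:
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centre...
        d = np.linalg.norm(X[:, None] - centres[None, :], axis=2)
        labels = d.argmin(axis=1)
        # ...then move each centre to the mean of its members.
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

# Two well-separated spectral groups cluster cleanly:
X = np.array([[0.1, 0.1], [0.12, 0.09], [0.8, 0.85], [0.82, 0.8]])
labels, _ = kmeans(X, k=2)
# the first two pixels share one label, the last two the other
```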
Figure 1.22: In an unsupervised classification, a thematic map of the input image is created
automatically in ENVI without requiring any pre-existing knowledge of the input image. The
unsupervised classification method performs an automatic grouping; the next step in this workflow is
for the user to assign names to each of the automatically generated regions.
Primary ordering of image elements
Tone/Color
Tone can be defined as each distinguishable variation from white to black.
If there is insufficient contrast between an object and its background to permit at least detection,
there can be no identification.
While a human interpreter may only be able to distinguish between ten and twenty shades of gray,
interpreters can distinguish at least 100 times more variations of color on color photography than
shades of gray on black-and-white photography.
Tone/Color
Resolution
Resolution can be defined as the ability of the entire photographic system, including
lens, exposure, processing, and other factors, to render a sharply defined image.
There are resolution targets that help to determine this when testing camera lenses for
metric quality.
Photo interpreters often talk about resolution in terms of ground resolved distance
which is the smallest normal contrast object that can be identified and measured.
Resolution
Geometrical - size
Size of objects in an image is a function of scale.
Size can be important in discriminating objects and features (cars vs. trucks or buses,
single family vs. multifamily residences, brush vs. trees, etc. ).
In the use of size as a diagnostic characteristic both the relative and absolute sizes of
objects can be important.
Geometrical - size
The size of runways gives an indication of the types of aircraft that can be
accommodated.
Geometrical - shape
Shape refers to the general form, structure, or outline of individual objects
The shape of objects/features can provide diagnostic clues that aid identification.
Man-made features have straight edges, natural features tend not to. Roads can
have right angle (90°) turns, railroads can't.
Geometrical - shape
The depth of an excavation can indicate the amount of material that was
removed in mining operations.
Geologists like low sun angle photography because of the features that shadow
patterns can help identify (e.g. fault lines and fracture patterns).
In infrared aerial photography, shadows are typically very black and can render
targets in shadows uninterpretable.
Height and shadow
Association
Association takes into account the relationship between other recognizable
objects or features in proximity to the target of interest
The identification of features that one would expect to associate with other features
may provide information to facilitate identification.