
❑ In the 1920s, submarine cables (https://www.submarinecablemap.com/) were used to transmit digitized newspaper pictures between London and New York using the Bartlane system.

❑ Specialized printing equipment was used to code the images, which were reproduced at the receiver using telegraphic printers.

Fig. Image produced using telegraphic printer


History

Fig. 2 Fig. 3

In 1921, a photographic printing process improved the resolution and tonal quality of images.

The Bartlane system was capable of coding 5 distinct brightness levels; this increased to 15 by 1929.

The Landsat 7 sensor records 8-bit images; thus it can measure 256 unique grey values of the reflected energy, while Ikonos-2 has an 11-bit radiometric resolution (2048 grey values).
History

In 1964, computer processing techniques were used at JPL to improve pictures of the Moon transmitted by Ranger 7.

Fig. 4 Fig. 5 Fig. 6


 Why do we need Image Processing?

It is motivated by 3 major applications:

• Improvement of pictorial information for human perception.

• Image processing for autonomous machine applications.

• Efficient storage & transmission.


Improvement of pictorial information for human interpretation
and analysis.

Typical applications:
➢ Noise filtering
➢ Content enhancement
➢ Contrast enhancement
➢ Deblurring

Filtering is a technique for modifying or enhancing an image.

https://www.youtube.com/watch?v=mG45mTcz_M4

Filtering is a neighborhood operation, in which the value of any given pixel in the
output image is determined by applying some algorithm to the values of the pixels in
the neighborhood of the corresponding input pixel.
A pixel's neighborhood is some set of pixels, defined by their locations relative to that
pixel.

• Low pass filters (smoothing): low pass filtering is employed to remove high spatial frequency noise from a digital image.
• High pass filters (edge detection, sharpening): a high pass filter can be used to make an image appear sharper.
https://www.mathworks.com/discovery/image-enhancement.html
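To make the neighborhood operation concrete, here is a minimal sketch of a low pass (mean) filter in Python with NumPy; the function name and the default 3x3 window are illustrative assumptions, not from the original slides:

```python
import numpy as np

def mean_filter(image, size=3):
    """Low pass (smoothing) filter: each output pixel is the average of the
    pixels in a size x size neighborhood of the corresponding input pixel."""
    pad = size // 2
    # Replicate edge pixels so the output has the same shape as the input
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out.astype(image.dtype)
```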

Image enhancement refers to the process of highlighting certain information in an image, as well as weakening or removing any unnecessary information, according to specific needs.
Fig. Thermal / IR view of a Chip
Fig. IR satellite view of Augustine volcano
Fig. IR image of a sloth at night
Night vision system used by soldiers
Fig. IR Satellite Imagery
Nebula NGC 151401 in Visible (left) and Infrared (right)
Fig. Fetus and Thyroid using ultrasound
Fig. Detecting Earthquakes and its cause using cross sectional view

Seismic imaging directs an intense sound source into the ground to evaluate subsurface conditions and
to possibly detect high concentrations of contamination.
Fig. Detect ozone layer damage
Fig. Identify sun spots
Fig. Identify solar flares
Fig. X ray of neck
Fig. X ray of head
Fig. X ray of a Glucose meter circuit
Fig. Gamma ray exposed images
Fig. Gamma ray bursts in space
Fig. Gamma ray bursts in space
Fig. Satellite images of Mumbai suburban(Left) and Gateway of India (Right)
Since the digital image is “invisible”, it must be prepared for viewing on one or more output devices (laser printer, monitor, etc.).

The digital image can be optimized for the application by enhancing or altering the appearance of structures within it (based on body part, diagnostic task, viewing preferences, etc.). http://www.sprawls.org/resources/DICHAR/module.htm

It might be possible to analyze the image in the computer and provide cues to help radiologists detect important/suspicious structures (e.g., Computer Aided Diagnosis, CAD).
DIGITAL IMAGE PROCESSING
• Remote sensing data can be interpreted visually using a hard copy of the image, or digitally using suitable image processing software.

• Digital Image Processing is the task of processing and analysing the digital data using image processing software.
Digital Image Processing Steps

1. Preprocessing
2. Enhancement
3. Transformation
4. Classification
Image Preprocessing
Operations aim to correct distorted or degraded image
data to create a more faithful representation of the
original scene.
Pre-processing
 Remotely sensed raw data generally contains flaws and deficiencies introduced by the imaging sensor mounted on the satellite.
 The correction of deficiencies and the removal of flaws present in the data are termed pre-processing methods.

 These corrections involve correcting geometric distortions, calibrating the data radiometrically, and eliminating noise present in the data.

 All the pre-processing methods are considered under three heads, namely,
a) Radiometric correction method,
b) Atmospheric correction method,
c) Geometric correction methods.
Radiometric correction method

 Radiometric errors are caused by detector imbalance and atmospheric deficiencies.

 Radiometric corrections are transformations on the data that remove errors which are geometrically independent.
 Radiometric corrections are also called cosmetic corrections and are done to improve the visual appearance of the image.
1. Correction for missing lines
2. Correction for periodic line striping
3. Random noise correction
4. Sun spot
5. Shadowing
6. Random Noise
7. Line-Dropouts
8. De-Striping
Correction for Missing Lines

1. Replace the missing line by either the preceding or the succeeding line
2. Average the neighboring pixel values above and below the missing line
3. Replace the line with the corresponding line from another, highly correlated band
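A minimal sketch of method 2 (averaging); the function name and the assumption that the dropped line is a single full row are illustrative, not from the original slides:

```python
import numpy as np

def fix_line_drop(image, row):
    """Replace a dropped scan line with the average of the scan lines
    immediately above and below it (correction method 2)."""
    fixed = image.astype(float).copy()
    fixed[row] = (fixed[row - 1] + fixed[row + 1]) / 2.0
    return fixed.astype(image.dtype)
```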
RADIOMETRIC CORRECTION: LINE DROP CORRECTION

Fig. Original image containing two line drops (left) and restored image obtained by averaging the pixel values above and below (right)

Figure 1.4: Correction for periodic line striping

Figure 1.3a: Random noise correction

Figure 1.3b: Random noise correction


Atmospheric correction method
 The value recorded at any pixel location on the remotely sensed image is not a record of the true ground-leaving radiance at that point, because the signal is attenuated due to absorption and scattering.

 The atmosphere has an effect on the measured brightness value of a pixel. Other difficulties are caused by variation in the illumination geometry. Atmospheric path radiance introduces haze into the imagery, thereby decreasing the contrast of the data.
Figure 1.5: Major Effects due to Atmosphere.
Figure 1.6: Atmospheric correction
Geometric correction methods
 Remotely sensed images are not maps.

 Frequently, information extracted from remotely sensed images is integrated with map data in a Geographical Information System (GIS).

 The transformation of the remotely sensed image into a map with proper scale and projection properties is called geometric correction.

 Geometric correction of a remotely sensed image is required when the image is to be used in one of the following circumstances:
 To transform an image to match a map projection
 To locate points of interest on map and image
 To overlay a temporal sequence of images of the same area, perhaps acquired by different sensors
 To integrate remotely sensed data with GIS
Image Geometric Correction

What is geometric correction? Geometric correction is the process of giving an image a real-world coordinate system.

Figure 1.7: Sample Geometric Distortions


Figure 1.8: Geometric correction
A grayscale image is composed of pixels represented by multiple bits of information,
typically ranging from 2 to 8 bits or more.

In a 2-bit image, there are four possible combinations: 00, 01, 10, and 11. If "00" represents black and "11" represents white, then "01" equals dark gray and "10" equals light gray. The bit depth is two, but the number of tones that can be represented is 2^2, or 4. At 8 bits, 256 (2^8) different tones can be assigned to each pixel.
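A one-line check of this arithmetic using the bit depths mentioned in this section (the loop and print format are just for illustration):

```python
# Number of distinct tones at a given bit depth is 2 ** bits
for bits in (2, 8, 11):
    print(f"{bits}-bit image -> {2 ** bits} grey levels")
# 2-bit -> 4, 8-bit -> 256 (Landsat 7), 11-bit -> 2048 (Ikonos-2)
```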
Image Processing:
 Digital Image Processing contains five different operations, namely,
A.Image Registration
B. Image Enhancement
C. Image Filtering
D.Image Transformation
E. Image Classification
All these are discussed in detail below:
Sequence for digital image analysis

 There is no single enhancement procedure which is best; the best one is that which best displays the features of interest to the analyst.
IMAGE REGISTRATION

❖ The process of geocoding and registering an image to a real-world coordinate system.


IMAGE MERGING

Fig. LISS-III and PAN images combined into a merged product
UTILITIES

Image Mosaicking

Mosaicking is the blending together of several arbitrarily shaped images to form one large, radiometrically balanced image so that the boundaries between the original images are not seen.

Fig. Mosaicked image of sheets 57 M 6 & 57 M 10
Fig. IRS P6 AWiFS data of entire India, Oct 2004
Figure 1.11: Image Registration

Figure 1.12: Image Registration


IMAGE ENHANCEMENT

Sometimes the image is NOT PERFECT ...


IMAGE ENHANCEMENT

The human eye is poor at discriminating the slight radiometric or spectral differences that may characterize features.

Image enhancement increases the contrast between the classes or features of interest.

The main aim of digital image enhancement is to amplify these slight differences for better clarity of the image scene.

Image enhancement may be defined as mathematical operations applied to digital remote sensing input data to improve the visual appearance of an image for accurate analysis.
Fig. Grey-level transformation: thresholding converts to black & white; the linear part contributes to contrast stretching

Digital Negative

For grey levels in the range [0, L], the negative transformation is

y = L − x, for 0 ≤ x ≤ L

Contrast Stretching

y = αx for 0 ≤ x < a
y = β(x − a) + ya for a ≤ x < b
y = γ(x − b) + yb for b ≤ x ≤ L

Example parameters: a = 50, b = 150, α = 0.2, β = 2, γ = 1, ya = 30, yb = 200
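A minimal NumPy sketch of both transformations, assuming an 8-bit image (L = 255); the function names and the clipping to [0, L] are illustrative choices:

```python
import numpy as np

L = 255  # maximum grey level of an 8-bit image

def negative(x):
    """Digital negative: y = L - x."""
    return L - x

def contrast_stretch(x, a=50, b=150, alpha=0.2, beta=2.0, gamma=1.0, ya=30, yb=200):
    """Piecewise-linear contrast stretch using the example parameters above."""
    x = np.asarray(x, dtype=float)
    y = np.where(x < a, alpha * x,
        np.where(x < b, beta * (x - a) + ya,
                 gamma * (x - b) + yb))
    return np.clip(y, 0, L).astype(np.uint8)
```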


Piecewise-Linear Transformations

An example of a piecewise-linear transformation function
Piecewise-Linear Transformations: Grey-Level Slicing

Pixel values between [A, B] are highlighted. Depending on the variant used, the other pixel values are either preserved or darkened.
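A sketch of both grey-level slicing variants; the `preserve` flag and the default highlight value of 255 are illustrative assumptions:

```python
import numpy as np

def grey_level_slice(image, A, B, high=255, preserve=True):
    """Highlight pixels with values in [A, B]. If preserve is True the
    remaining pixels keep their original values; otherwise they are darkened."""
    image = np.asarray(image)
    mask = (image >= A) & (image <= B)
    background = image if preserve else np.zeros_like(image)
    return np.where(mask, high, background).astype(np.uint8)
```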


Histogram based Enhancement
Histogram of an image represents the relative frequency
of occurrence of various gray levels in the image
Fig. Histogram of an example image (number of pixels vs. grey levels 0–200)
Why Histogram?
Fig. Histogram concentrated at the low end of the grey scale (counts ×10^4 vs. grey levels 0–250)

It is a baby in a cradle! The histogram information reveals that the image is under-exposed.
Histogram Equalization

Fig. Image example: before (left) and after (right) histogram equalization
Histogram Comparison
Fig. Histogram before equalization (left) and after equalization (right)


Figure 1.13: Contrast Enhancement by Histogram Stretch
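A compact sketch of histogram equalization for an 8-bit image, remapping grey levels through the normalized cumulative histogram; the exact normalization convention varies between implementations, and this is one common choice:

```python
import numpy as np

def equalize(image):
    """Histogram equalization for an 8-bit greyscale image: remap grey levels
    through the normalized cumulative distribution function (CDF)."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    # Stretch the CDF so output grey levels span the full 0..255 range
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[image].astype(np.uint8)
```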
Application
Density Slicing

A digital image processing system allows for assigning a different colour to each DN or
a range of DNs in an image.
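A sketch of density slicing; the slice boundaries and colours below are hypothetical examples, not values from the slides:

```python
import numpy as np

def density_slice(image, bounds, colours):
    """Assign an RGB colour to each range of digital numbers (DNs).
    bounds gives the upper edge of each slice, e.g. [64, 128, 192, 256]."""
    image = np.asarray(image)
    out = np.zeros(image.shape + (3,), dtype=np.uint8)
    lower = 0
    for upper, colour in zip(bounds, colours):
        out[(image >= lower) & (image < upper)] = colour
        lower = upper
    return out

# Example: four slices mapped to blue, green, yellow, red
# sliced = density_slice(img, [64, 128, 192, 256],
#                        [(0, 0, 255), (0, 255, 0), (255, 255, 0), (255, 0, 0)])
```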
IMAGE FILTERING / SPATIAL FILTERING

❖Spatial filtering encompasses another set of digital processing functions that are used to
enhance the appearance of an image.

❖Spatial filters highlight or suppress specific features in an image based on their spatial
frequency which is related to the concept of image texture (manner in which grey scale
values change relative to their neighbours within an image).
❖ A common filtering procedure involves moving a window of a few pixels in dimension
over each pixel in the image, applying a mathematical technique using the pixel values
under the window, and replacing the central pixel with the new value.

❖ A low pass filter is designed to emphasize larger, homogeneous areas of similar tone and reduce the smaller detail in an image. It serves to smooth the appearance of the image.

❖ High pass filters serve to sharpen the appearance of fine detail in an image.
Highpass filters (sharpening)
A high pass filter is the basis for most sharpening methods. An
image is sharpened when contrast is enhanced between adjoining
areas with little variation in brightness or darkness.
A high pass filter tends to retain the high frequency information
within an image while reducing the low frequency information.
The kernel of the high pass filter is designed to increase the
brightness of the center pixel relative to neighboring pixels. The
kernel array usually contains a single positive value at its center,
which is completely surrounded by negative values. The
following array is an example of a 3 by 3 kernel for a high pass
filter:
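The kernel array itself did not survive the conversion to text; shown below is a commonly used 3x3 high pass kernel matching the description (a single positive center surrounded by negative values), together with a minimal convolution sketch whose border handling is an illustrative choice:

```python
import numpy as np

# A typical 3x3 high pass kernel: single positive center value surrounded
# by negative values (the coefficients sum to zero)
HIGH_PASS = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]])

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel at each interior pixel (borders left unchanged)."""
    img = image.astype(float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * kernel)
    return np.clip(out, 0, 255).astype(np.uint8)
```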
Fig. Original image (left) and high pass filtered image (right)
Lowpass filter (smoothing)
A low pass filter is the basis for most smoothing methods. An image is smoothed by decreasing the disparity between pixel
values by averaging nearby pixels. Using a low pass filter tends to retain the low frequency information within an image while
reducing the high frequency information.

Figure 1.17: In this example, some sinusoidal noise is added to an image and filtered out with a band pass filter

Edge Enhancement
The most valuable information can often be derived from edges. Edge enhancement delineates these edges and makes the shapes and details comprising the image more conspicuous and perhaps easier to analyze. The edges may be enhanced using either linear or non-linear edge enhancement techniques.

 Linear edge enhancement: A straightforward method of extracting edges in remotely sensed imagery is the application of a directional first-difference algorithm, which approximates the first derivative between two adjacent pixels. The algorithm produces the first difference of the input image in the horizontal, vertical, and diagonal directions.

Non-linear edge enhancement: Nonlinear filters are widely used in image processing for
reducing noise without blurring or distorting edges. Median filters were among the first
nonlinear filters used for this purpose.

 Recently, a wider variety of options has become available, including mathematical morphology, anisotropic diffusion, and wavelet-based enhancement techniques. Most of these methods focus on the problem of reducing noise while preserving edges.
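Minimal sketches of both approaches follow: a directional first difference (linear) and a 3x3 median filter (non-linear); the function names and border handling are illustrative choices:

```python
import numpy as np

def first_difference(image, axis=1):
    """Linear edge extraction: absolute difference between each pixel and
    its neighbor along the given axis (0 = vertical, 1 = horizontal)."""
    return np.abs(np.diff(image.astype(float), axis=axis))

def median_filter3(image):
    """Non-linear 3x3 median filter: reduces noise without blurring edges."""
    img = image.astype(float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out.astype(image.dtype)
```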
Image Transformation
All image transformations are based on arithmetic operations. The resulting images may have properties that make them better suited to a particular purpose than the original.

Two transformations are commonly used. These techniques are:

1. Normalized Difference Vegetation Index (NDVI)

2. Principal Component Analysis (PCA)


Normalized Difference Vegetation Index (NDVI)
Remotely sensed data is used extensively for large area vegetation monitoring.

Typically, the spectral bands used for this purpose are the visible Red band and the Near Infrared (NIR) band.

NDVI = (NIR − Red) / (NIR + Red), with −1 < NDVI < 1

Healthy vegetation reflects a lot of NIR but absorbs Red.


HIGH NDVI= Dense and healthy vegetation
Low NDVI= Less healthy vegetation
Negative NDVI= Water
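A per-pixel NDVI sketch; the zero-denominator guard is an illustrative safety choice, not part of the definition:

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Guard against division by zero where both bands are zero
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))
```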
Figure 1.18: Normalised Difference Vegetation Index of Honshu, Japan
Figure 1.19: Normalised Difference Vegetation Index of India
PCA – Principal Component Analysis

It is a method of re-expressing the information content of a multispectral set of “m” images in terms of “m” principal components.

The application of PCA to raw remote sensing data produces a new image which is more interpretable than the original data.

The objective of this transformation is to reduce the number of bands in the data and compress as much of the information in the original bands as possible into fewer bands. The “new” bands that result from this statistical procedure are called Principal Components.

This process attempts to pack the maximum amount of information from the original data into the least number of new components.
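A sketch of PCA on an m-band image via the band covariance matrix; the array shapes and the eigendecomposition route are illustrative assumptions (implementations often use SVD instead):

```python
import numpy as np

def pca_transform(bands):
    """PCA on a multispectral image. bands: array of shape (m, rows, cols).
    Returns m principal-component images ordered by decreasing variance."""
    m, rows, cols = bands.shape
    X = bands.reshape(m, -1).astype(float)     # one row per band
    X -= X.mean(axis=1, keepdims=True)         # center each band
    cov = np.cov(X)                            # m x m band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]          # re-order: largest variance first
    components = eigvecs[:, order].T @ X       # project pixels onto components
    return components.reshape(m, rows, cols)
```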

Figure 1.20: The application of Principal Component Analysis (PCA) on raw remotely sensed data of an urban area by unsupervised classification.
Principal Component Analysis

Fig. Original image (left) and PCA result (right)

IMAGE TRANSFORMATIONS

Fig. PCA, original, FCC, and NDVI products
IMAGE CLASSIFICATION

➢ Image Classification is a procedure to automatically categorize all pixels in an image of a terrain into land cover classes.

Spectral pattern recognition refers to the family of classification procedures that utilizes this pixel-by-pixel spectral information as the basis for automated land cover classification. Spatial pattern recognition classifies an image based on the spatial relationships of pixels with their surroundings.

➢ Types of Classification :
1. Supervised Classification
2. Unsupervised Classification

Classification process is a form of pattern recognition.


NATIONAL LAND USE AND LAND COVER MAPPING USING
MULTI-TEMPORAL AWiFS DATA
September-October 2004

LEGEND
Kharif crop

Fallow

Plantation

Forest

Wasteland/Scrub

Cloud/shadow

Waterbody

Builtup area

Snow

Mangrove

Grassland

Shifting cultivation
SUPERVISED CLASSIFICATION

In a supervised classification, the analyst identifies in the imagery homogeneous, representative samples of the different classes of interest. These samples are referred to as training areas.

Thus, the analyst is supervising the categorization of a set of specific classes.
➢ The computer uses special programs or algorithms to determine the numerical signatures for each training class.

➢ Once the computer has determined the signatures for each class, each pixel in the image is compared to these signatures and labeled as the class it most closely resembles digitally.

➢ The success of this classification depends upon the training data used to identify the different classes.
SUPERVISED CLASSIFICATION

Fig. Training the computer for classification (left) and the computer-classified map (right)
Supervised Classification
 The different phases in the supervised classification technique are as follows (a minimal algorithm sketch follows the list):
(i) An appropriate classification scheme for the analysis is adopted.
(ii) Representative training sites are selected and spectral signatures are collected.
(iii) Statistics for the training site spectral data are evaluated.
(iv) The statistics are analyzed to select the appropriate features to be used in the classification process.
(v) An appropriate classification algorithm is selected.
(vi) The image is then classified into ‘n’ classes.
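A sketch of one common algorithm choice for step (v), a minimum-distance-to-means classifier; the function name and array shapes are illustrative assumptions:

```python
import numpy as np

def minimum_distance_classify(image, class_means):
    """Supervised minimum-distance classifier. image: (rows, cols, bands);
    class_means: (n_classes, bands) signatures from the training areas."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    # Euclidean distance from every pixel to every class signature
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    labels = dists.argmin(axis=1)   # assign each pixel to the nearest signature
    return labels.reshape(image.shape[:2])
```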

Figure 1.21: Supervised Classification.


Unsupervised Classification

Fig. Each unknown pixel is compared with the saved clusters (Clusters 1–6) and assigned to the nearest one in the output classified image
Unsupervised Classification

The result of the unsupervised classification is not yet information until the analyst determines the ground cover for each of the clusters:

??? → Water
??? → Water
??? → Conifer
??? → Conifer
??? → Hardwood
??? → Hardwood
Unsupervised Classification
Unsupervised classifiers do not utilize training data as the basis for classification.
These classifiers examine the unknown pixels and aggregate them into a number of classes based on the natural groupings or clusters present in the image values.
The classes that result from unsupervised classification are spectral classes. The analyst must compare the classified data with some form of reference data to identify the informational value of the spectral classes.

It has been found that in areas of complex terrain, the unsupervised approach is preferable.

In such conditions, if the supervised approach is used, the user will have difficulty selecting training sites because of the variability of spectral response within each class; ground data must then be collected, which is very time consuming.

Also, the supervised approach is subjective in the sense that the analyst tries to classify information categories, which are often composed of several spectral classes, whereas spectrally distinguishable classes will be revealed by the unsupervised approach.

Additionally, the unsupervised approach has the potential advantage of revealing discriminable classes unknown from a previous supervised classification.
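A sketch of unsupervised clustering via a simple k-means loop; the value of k, the iteration count, and the random initialization are illustrative assumptions:

```python
import numpy as np

def kmeans_classify(image, k=6, iters=10, seed=0):
    """Unsupervised classification: group pixels into k spectral clusters
    with no training data. image: (rows, cols, bands)."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]  # initial means
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)               # assign to the nearest center
        for c in range(k):
            if np.any(labels == c):             # recompute each cluster mean
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(image.shape[:2])
```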
Figure 1.22: In an unsupervised classification, a thematic map of the input image is created automatically in ENVI without requiring any pre-existing knowledge of the input image. The unsupervised classification method performs an automatic grouping; the next step in this workflow is for the user to assign names to each of the automatically generated regions.
Primary ordering of image elements
Tone/Color
Tone can be defined as each distinguishable variation from white to black.

 Color may be defined as each distinguishable variation on an image produced by a multitude of combinations of hue, value and chroma.

 If there is not sufficient contrast between an object and its background to permit at least detection, there can be no identification.

 While a human interpreter may only be able to distinguish between ten and twenty shades of gray, interpreters can distinguish at least 100 times more variations of color on color photography than shades of gray on black-and-white photography.
Tone/Color
Resolution
 Resolution can be defined as the ability of the entire photographic system, including
lens, exposure, processing, and other factors, to render a sharply defined image.

 An object or feature must be resolved in order to be detected and/or identified.


Resolution is one of the most difficult concepts to address in image analysis; for camera lenses it can be discussed in terms of so many line pairs per millimeter.

 There are resolution targets that help to determine this when testing camera lenses for
metric quality.
 Photo interpreters often talk about resolution in terms of ground resolved distance, which is the smallest normal-contrast object that can be identified and measured.
Resolution
Geometrical - size
 Size of objects in an image is a function of scale.

 Size can be important in discriminating objects and features (cars vs. trucks or buses, single-family vs. multifamily residences, brush vs. trees, etc.).

 In the use of size as a diagnostic characteristic both the relative and absolute sizes of
objects can be important.
Geometrical - size

The size of runways gives an indication of the types of aircraft that can be
accommodated.
Geometrical - shape
 Shape refers to the general form, structure, or outline of individual objects

 The shape of objects/features can provide diagnostic clues that aid identification.

 Man-made features have straight edges; natural features tend not to. Roads can have right-angle (90°) turns; railroads can't.
Geometrical - shape

Goryokaku, an old castle in Hokkaido, has a diagnostic shape.

Other examples include freeway interchanges and the Rainbow Bridge in Tokyo.


Spatial - texture and pattern
 Texture is the frequency of change and arrangement of tones and is a micro image characteristic.

 The visual impression of smoothness or roughness of an area can often be a valuable clue in image interpretation.

 Pattern is the spatial arrangement of objects. Patterns can be either man-made or natural, and pattern is a macro image characteristic.

 Typically, an orderly repetition of similar tones and textures will produce a distinctive and ultimately recognizable pattern.
Spatial - texture and pattern
height and shadow
 Height can add significant information in many types of interpretation tasks,
particularly those that deal with the analysis of man-made features and vegetation.

 The depth of an excavation can tell something about the amount of material that was removed (e.g., in mining operations).

 Geologists like low sun angle photography because of the features that shadow
patterns can help identify (e.g. fault lines and fracture patterns).

 In infrared aerial photography, shadows are typically very black and can render targets in shadows uninterpretable.
height and shadow
height and shadow
Association
 Association takes into account the relationship between other recognizable objects or features in proximity to the target of interest.
 The identification of features that one would expect to be associated with other features may provide information to facilitate identification.
