
Major steps in DIP

1. Image correction: Corrects errors in image data related to geometry and brightness using mathematical models.
2. Image enhancement: Modifies pixel brightness to improve visual impact. This is done by adjusting the brightness value of a pixel based on its current value or the values of surrounding pixels.
3. Image transformation: Transforms multi-spectral image data into new components or bands to highlight certain information or reduce dimensions while preserving essential information.
4. Image classification: Automatically categorizes all pixels in an image into land cover classes.

Band Ratioing

Sometimes differences in brightness values from identical surface materials are caused by topographic slope and aspect, shadows, changes in atmospheric constituents, or seasonal changes in sun angle and intensity. Band ratioing can be applied to reduce the effects of such environmental conditions (a short sketch follows the definitions below). In addition, band ratios also help to discriminate between soils and vegetation.

$BV_{i,j,ratio} = BV_{i,j,k} / BV_{i,j,l}$

Where:
- $BV_{i,j,k}$ is the original input brightness value in band k
- $BV_{i,j,l}$ is the original input brightness value in band l
- $BV_{i,j,ratio}$ is the ratio output brightness value
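A minimal NumPy sketch of band ratioing, assuming two co-registered bands held as arrays of brightness values (the nir/red names are illustrative):

```python
import numpy as np

def band_ratio(band_k, band_l, eps=1e-6):
    """Return the per-pixel ratio BV_k / BV_l, guarding against division by zero."""
    return band_k.astype(np.float64) / (band_l.astype(np.float64) + eps)

# Synthetic 3x3 "bands": identical surface material with one shaded column.
# The ratio stays near 2.0 despite the brightness drop in the shaded pixels.
nir = np.array([[100, 100, 50], [102, 98, 51], [99, 101, 50]])
red = np.array([[50, 50, 25], [51, 49, 26], [50, 50, 25]])
print(band_ratio(nir, red))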
Major DIP data formats

1. Band Interleaved by Pixel (BIP) Format
The BIP format places the brightness values of the n bands associated with each pixel in the dataset in sequential order. The brightness values for pixel (1,1) are placed first, then those for pixel (1,2), and so on. An end-of-file (EOF) marker is placed at the end of the dataset.
2. Band Interleaved by Line (BIL) Format
The BIL format creates a file that places the brightness values of the n bands associated with each line in the dataset in sequential order.
3. Band Sequential (BSQ) Format
The BSQ format places all of the individual pixel values of each band in a separate, unique file. Each band has its own beginning header record and EOF marker, so the data stored in one band are kept separate from every other band. A layout sketch follows.
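A small NumPy sketch of how the same 2x2, 3-band image would be serialized under each format (the values are arbitrary):

```python
import numpy as np

img = np.arange(12).reshape(2, 2, 3)           # indexed as img[row, col, band]

bip = img.reshape(-1)                          # pixel-major: all bands of pixel (1,1), then pixel (1,2), ...
bil = np.transpose(img, (0, 2, 1)).reshape(-1) # line-major: band 1 of line 1, band 2 of line 1, ...
bsq = np.transpose(img, (2, 0, 1)).reshape(-1) # band-major: entire band 1, then band 2, then band 3

print("BIP:", bip)
print("BIL:", bil)
print("BSQ:", bsq)
```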
OIF

OIF (Optimum Index Factor) is one of the most common statistical methods applied to designate the most favorable three-band combination, i.e. the one carrying the most information with the least amount of duplication. It is based on the total variance within bands and the correlation coefficients between bands:

$OIF = \dfrac{S_1 + S_2 + S_3}{|r_{12}| + |r_{13}| + |r_{23}|}$

- $S_k$ – standard deviation of band k
- $r$ – correlation matrix value (correlation coefficient between a pair of the three bands)
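A minimal sketch of ranking three-band combinations by OIF, assuming the bands are stacked as an (n_bands, n_pixels) array; the data here are synthetic:

```python
import numpy as np
from itertools import combinations

def best_oif_triplet(bands):
    """Return the 3-band combination with the highest OIF score."""
    std = bands.std(axis=1)                 # S_k per band
    corr = np.corrcoef(bands)               # correlation matrix r
    return max(
        combinations(range(bands.shape[0]), 3),
        key=lambda t: (std[t[0]] + std[t[1]] + std[t[2]])
        / (abs(corr[t[0], t[1]]) + abs(corr[t[0], t[2]]) + abs(corr[t[1], t[2]])),
    )

rng = np.random.default_rng(0)
bands = rng.normal(size=(5, 1000)) * np.arange(1, 6)[:, None]  # synthetic bands
print("Best 3-band combination:", best_oif_triplet(bands))
```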
Principal Component Analysis

Principal Component Analysis is a pre-processing technique used to create new, uncorrelated images from multispectral image data. It operates on all bands together, making it easier to select appropriate bands. Principal components describe data more efficiently than the original band reflectance values, accounting for up to 98% of the variance. They are used for spectral pattern recognition and image enhancement, especially in areas with limited information.
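A minimal PCA sketch via eigen-decomposition of the band covariance matrix, with synthetic bands shaped (n_bands, n_pixels):

```python
import numpy as np

def principal_components(bands):
    """Project a band stack onto its principal components."""
    centered = bands - bands.mean(axis=1, keepdims=True)
    cov = np.cov(centered)                   # n_bands x n_bands covariance
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]           # largest variance first
    pcs = vecs[:, order].T @ centered        # the new component "images"
    return pcs, vals[order] / vals.sum()

rng = np.random.default_rng(1)
base = rng.normal(size=(1, 1000))            # shared signal -> correlated bands
bands = base * np.array([[1.0], [0.9], [0.8]]) + 0.05 * rng.normal(size=(3, 1000))
pcs, explained = principal_components(bands)
print("Variance explained per component:", np.round(explained, 3))
# The first component typically carries most of the variance (up to ~98%).
```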
2 main image classification methods

Unsupervised classification involves software analyzing an image without user input for sample classes. The computer uses clustering techniques to determine related pixels and group them into classes. The user can specify algorithms and output classes, but does not otherwise aid the classification. The user must know the area being classified so that the resulting groupings can be related to actual features on the ground.

Supervised classification involves selecting sample pixels in an image that represent specific classes and using these as references for image processing. The user selects training sites based on their knowledge and sets the similarity bounds for other pixels, often based on spectral characteristics. The user also designates the number of classes the image is classified into. Many analysts use a combination of supervised and unsupervised classification processes for final output analysis and classified maps.
Elements of Visual Image Interpretation

1. Tone: Measures the intensity of reflected or emitted radiation from objects. Lower reflection appears dark; higher reflection appears bright.
2. Shape: The form or outline of an object. Regular shapes indicate man-made objects; irregular shapes indicate natural features.
3. Size: The dimensions of objects in relation to the image scale or resolution. It helps in interpreting the target.
4. Pattern: The spatial arrangement of objects. Repetition of certain forms or relationships creates a pattern.
5. Texture: The frequency of tonal variation in an image, produced by aggregate units of features. It depends on the shape, size, pattern, and shadow of terrain features.
6. Shadow: Provides an impression of the profile view of objects and can help in height estimation. It can also create difficulties in object identification.
7. Association: The occurrence of certain features in relation to others. For example, a smooth vegetation pattern in an urban area generally indicates a playground or grassland, not agricultural land.
8. Site: The topographic or geographic location of an object. It is important when objects are not clearly identified using the previous elements.
Steps involved in supervised classification

1. Definition of Information Classes
2. Training/Calibration Site Selection - delineate areas of known identity on the digital image
3. Generation of Statistical Parameters - define the unique spectral characteristics (signatures)
4. Classification - assignment of "unknown" pixels to the appropriate information class
5. Accuracy Assessment - test/validation data for accuracy assessment
6. Output Stage

Steps involved in Unsupervised Classification

1. Definition of the minimum and maximum number of categories to be generated by the particular classification algorithm (based on an analyst's knowledge or user requirements).
2. Random selection of pixels to form cluster centres.
3. The algorithm then finds distances between pixels and forms initial estimates of cluster centers as permitted by user-defined criteria.
4. As pixels are added to the initial estimates, new class means are calculated. This is an iterative process that continues until the means do not change significantly from one iteration to the next (a clustering sketch follows this list).
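A minimal sketch of this iterative clustering loop (k-means style), where k and the tolerance stand in for the analyst-defined criteria; the pixel data are synthetic:

```python
import numpy as np

def unsupervised_classify(pixels, k=3, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]  # step 2: random cluster centres
    while True:
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)  # step 3: distances
        labels = d.argmin(axis=1)
        new_centers = np.array([pixels[labels == i].mean(axis=0) if np.any(labels == i)
                                else centers[i] for i in range(k)])           # step 4: new class means
        if np.linalg.norm(new_centers - centers) < tol:  # stop when means no longer change
            return labels, new_centers
        centers = new_centers

pixels = np.vstack([np.random.default_rng(2).normal(loc, 0.3, (50, 2))
                    for loc in ([0, 0], [3, 3], [0, 3])])   # three synthetic spectral groups
labels, centers = unsupervised_classify(pixels)
print("Cluster centres:\n", centers.round(2))
```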
3 major classification algorithms

Minimum distance to mean classifier

1. Calculate the mean spectral value in each band for each category.
2. Represent the mean values of each category as a mean vector.
3. A pixel of unknown identity is classified by computing the distance between the value of the unknown pixel and each of the category means.
4. After computing the distances, the unknown pixel is assigned to the closest class.

This process calculates a "centroid" for each class from the mean value in each band, then computes the n-dimensional distance from each image pixel to each centroid (see the sketch after this list).

Benefits: mathematically simple and computationally efficient.

Drawback: insensitive to different degrees of variance in spectral response data.
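A minimal sketch of the classifier with toy two-band training data (the class names are illustrative):

```python
import numpy as np

def train_means(training_pixels, training_labels):
    """Step 1: compute the per-band mean vector (centroid) of each class."""
    classes = np.unique(training_labels)
    return classes, np.array([training_pixels[training_labels == c].mean(axis=0)
                              for c in classes])

def classify_min_distance(pixels, classes, means):
    """Steps 3-4: distance to every centroid, then assign the closest class."""
    d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

# Toy example: "water" is dark in both bands, "veg" bright in band 2.
train = np.array([[10, 12], [12, 10], [60, 90], [58, 88]], dtype=float)
labels = np.array(["water", "water", "veg", "veg"])
classes, means = train_means(train, labels)
print(classify_min_distance(np.array([[11.0, 11.0], [55.0, 85.0]]), classes, means))
```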
Scattering

Scattering is the redirection of electromagnetic energy by suspended particles in the atmosphere. The type and amount of scattering that occurs depend on the size of the particles and the wavelength of the energy. Atmospheric constituents such as gas molecules, smoke, dust, and water vapor may cause scattering. The three main constituents which absorb radiation are ozone, carbon dioxide, and water vapor.

1. Ozone absorbs the harmful (to most living things) ultraviolet radiation from the sun. Without this protective layer in the atmosphere, our skin would burn when exposed to sunlight.
2. Carbon dioxide absorbs in the far infrared portion of the spectrum, which is related to thermal heating, and results in a 'greenhouse' effect.
3. Water vapor absorbs energy depending upon its location and concentration, and forms a primary component of the Earth's climatic system.

Advantages of radar
• All weather, day or night
– Some areas of Earth are persistently cloud covered.
• Penetrates clouds, vegetation, dry soil, dry snow.
• Sensitive to water content, surface roughness
– Can measure waves in water.
• Sensitive to polarization and frequency
• Interferometry (later) using 2 receiving antennas.
Disadvantages of radar
• Penetrates clouds, vegetation, dry soil, dry snow.
– Signal is integrated over a depth range and a variety of materials.
• Sensitive to water content, surface roughness
– Small amounts of water affect signal.
– Hard to separate the volume response from the surface response.
• Sensitive to polarization and frequency
– Many choices for instrument, expensive to cover range of possibilities.
– The math can be formidable.
a) Shadows: Shadows occur in RADAR remote sensing when the radar beam is unable to illuminate the ground
surface. This typically happens behind vertical features or steep slopes, towards the far range.
b) Layover: Layover happens when the radar beam reaches the top of a tall feature before it reaches the base. As a
result, the top of the feature is displaced towards the radar from its true position on the ground, and “lays over” the
base of the feature.
c) Foreshortening: Foreshortening occurs when the radar beam reaches the base of a tall feature (like a mountain)
tilted towards the radar before it reaches the top. Because the radar measures distance in slant-range, the slope will
appear compressed and the length of the slope will be represented incorrectly.
LIDAR (Light Detection and Ranging) - a remote sensing technology similar to RADAR, but using shorter wavelengths of the EM spectrum. It measures properties of scattered light, and the system is based on a laser sensor.
LASER (Light Amplification by Stimulated Emission of Radiation) - Laser + receiver system = LIDAR. Laser light is monochromatic, directional, and coherent. Lasers can measure objects that are the same size as, or larger than, their wavelength. The scattering processes involved are Rayleigh, Raman, and fluorescence. Performance differs on solid surfaces, on water, and on vegetation; lidar allows pinpoint targeting and operates in the UV, visible, and infrared regions.
Technologies in LIDAR: Lasers - laser sensor; Global Positioning System (GPS) - sensor position; Inertial Navigation System (INS) - exact sensor orientation.

LIDAR vs RADAR
- LIDAR uses optical signals (near IR, visible; wavelengths ~1 um); RADAR uses microwave signals (wavelengths ~1 cm, roughly 10,000 times longer than near IR).
- LIDAR's shorter wavelengths allow detection of smaller objects (cloud particles, aerosols); RADAR's detectable target size is limited by its longer wavelength.
- LIDAR's focused beam and high frequency permit high spatial resolution (<1 m horizontal); RADAR's beam width and antenna length limit spatial resolution (tens of meters).
- LIDAR is a downward-looking sensor; RADAR is a side-looking sensor.
- LIDAR is limited to clear atmospheric conditions; RADAR can operate in the presence of clouds. Both provide daytime or nighttime coverage.
LIDAR TYPES - Based on the physical process (range finders, DIAL, doppler lidar), Based on scattering process
(Mie, Rayleigh, Raman, Fluorescence Lidar), Based on the platform (Ground based, Airborne, Spaceborne)
What can we measure with LIDAR? Clouds, Aerosol, Water Vapor, Minor constituents (e.g.: Ozone,
hydrocarbons), Temperature
A lidar system consists of three main subsystems: the Transmitter, the Receiver, and the Data Acquisition & Control System.
Function of Transmitter
1. Provides laser pulses.
2. Consists of lasers, a wavelength control system, and diagnostic equipment.
3. Determines the performance of the lidar system.
The Parallelepiped classifier considers the range of spectral measurements, using the highest and lowest digital numbers assigned to each band from the training data. The minimum and maximum DNs are used as thresholds, and unknown pixels are classified based on their location relative to these class ranges. However, overlaps between classes can cause issues, which can be resolved by introducing stepped borders. A sketch follows the drawbacks below.

Benefits: simple to train and use, computationally fast.

Drawbacks: pixels in the gaps between the parallelepipeds cannot be classified; pixels in the region of overlapping parallelepipeds cannot be classified unambiguously.
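A minimal sketch using per-class min/max DNs as the thresholds; -1 marks pixels that fall in a gap between the boxes:

```python
import numpy as np

def parallelepiped(pixels, class_mins, class_maxs):
    """Label each pixel by the first class box containing it; -1 = unclassified."""
    labels = np.full(len(pixels), -1)
    for c, (lo, hi) in enumerate(zip(class_mins, class_maxs)):
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == -1)] = c   # first matching box wins
    return labels

mins = np.array([[0, 0], [50, 70]], dtype=float)     # min DNs for class 0 and class 1
maxs = np.array([[20, 20], [90, 100]], dtype=float)  # max DNs for class 0 and class 1
test = np.array([[10, 10], [60, 80], [35, 40]], dtype=float)
print(parallelepiped(test, mins, maxs))  # [0 1 -1]: the last pixel lies in a gap
```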
The Maximum Likelihood Classifier is a classification method that uses the variance and covariance of trained spectral response patterns to determine the fate of an unknown pixel. It assumes a normal distribution of points for each cover type and computes the probability that an unknown pixel belongs to a particular category. The classifier determines a probability surface for a given DN, assigning each pixel to the most likely class, or to "Other" if the probability does not exceed a certain threshold (a sketch follows).

Benefits: takes variation in spectral response into consideration.

Drawbacks: computationally inefficient; multimodal or non-normally distributed classes can be misclassified.
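A minimal sketch assuming Gaussian classes, with per-class mean vectors and covariance matrices standing in for trained statistics; the log-likelihood threshold is an arbitrary illustrative value:

```python
import numpy as np

def gaussian_loglike(x, mean, cov):
    """Log-likelihood of pixel x under a multivariate normal class model."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + d @ np.linalg.inv(cov) @ d + len(x) * np.log(2 * np.pi))

def classify_ml(pixels, means, covs, threshold=-50.0):
    labels = []
    for x in pixels:
        ll = [gaussian_loglike(x, m, c) for m, c in zip(means, covs)]
        # Most likely class, or "Other" when no class is probable enough.
        labels.append(int(np.argmax(ll)) if max(ll) > threshold else "Other")
    return labels

means = [np.array([10.0, 10.0]), np.array([60.0, 80.0])]
covs = [np.eye(2) * 4.0, np.array([[25.0, 5.0], [5.0, 16.0]])]
print(classify_ml(np.array([[11.0, 9.0], [58.0, 83.0], [200.0, 0.0]]), means, covs))
# -> [0, 1, 'Other']
```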
Main types of scattering

1. Rayleigh Scatter: Occurs when light interacts with particles smaller than its wavelength. Shorter wavelengths (blue and violet) are scattered more, causing the sky's blue color and image haze. At sunrise/sunset, the longer path scatters the short wavelengths completely, leaving only red and orange visible.
2. Mie Scatter: Occurs when the radiation wavelength is similar to the size of atmospheric particles. It influences radiation from the near UV to the mid-infrared spectrum. It mostly occurs in the lower atmosphere, where larger particles are abundant, causing image haze.
3. Non-selective Scattering: Occurs when atmospheric particles are much larger than the radiation wavelength. Primarily caused by water droplets, it scatters all visible light evenly, making fog and clouds appear white. It can block all energy from reaching the Earth's surface, causing difficulties in interpreting remotely sensed imagery.
3 sampling techniques in geometric rectification of DIP

1. Nearest Neighbor Resampling: Uses the value of the closest pixel from the original image. It is simple and retains original values, but can result in blocky images due to duplicated or lost pixel values.
2. Bilinear Interpolation Resampling: Computes a weighted average of the four nearest pixels. This alters original values and creates new ones in the output, leading to sharper images but potentially affecting further analysis.
3. Cubic Convolution Resampling: Calculates a distance-weighted average from a block of sixteen surrounding pixels. Like bilinear interpolation, it creates new pixel values and results in sharper images.

A comparison sketch of the first two techniques follows.
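A minimal sketch contrasting nearest-neighbour and bilinear resampling at one fractional source coordinate; cubic convolution extends the same idea to a 4x4 block:

```python
import numpy as np

def nearest(img, r, c):
    """Nearest neighbour: returns an unmodified original DN."""
    return img[int(round(r)), int(round(c))]

def bilinear(img, r, c):
    """Bilinear: weighted average of the four surrounding DNs (a new value)."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    top = (1 - dc) * img[r0, c0] + dc * img[r0, c0 + 1]
    bot = (1 - dc) * img[r0 + 1, c0] + dc * img[r0 + 1, c0 + 1]
    return (1 - dr) * top + dr * bot

img = np.array([[10.0, 20.0], [30.0, 40.0]])
print(nearest(img, 0.4, 0.6), bilinear(img, 0.4, 0.6))  # 20.0 vs 24.0
```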
A Radar system has three primary functions:
• It transmits microwave (radio) signals towards a scene.
• It receives the portion of the transmitted energy backscattered from the scene.
• It observes the strength (detection) and the time delay (ranging) of the return signals.

RAR and SAR

Radar images are categorized into circularly scanning plan-position indicator (PPI) images and side-looking images. PPI is primarily used for monitoring air and naval traffic, while side-looking images, produced by real aperture radar (RAR) and synthetic aperture radar (SAR), are used for remote sensing applications.
The SLAR imaging system uses a long antenna on a platform, emitting pulses of electromagnetic energy perpendicular to the platform's flight path and downward to Earth's surface. These pulses scatter in various directions, including back towards the antenna, and the return echoes arrive at the antenna at different times, depending on the distance of the scattering object.

SAR uses the Doppler history of radar echoes to synthesize a large antenna, allowing high azimuth resolution in images. As the radar moves, pulses are transmitted and the return echoes are recorded in an echo store. SAR requires a complex array of onboard navigational and control systems for location accuracy. SAR improves range resolution using the pulse compression technique, while synthetic aperture processing, which involves complex data processing of the received signals and phases from moving targets, improves azimuth resolution.

This is the reason why SAR achieves high azimuth resolution with a small antenna, regardless of the slant range or the very high altitude of a satellite. SLAR measures the ground scattering range directly, producing low-resolution images, while Synthetic Aperture Radar uses image processing to create high-resolution images.
Range resolution and Azimuth resolution

The spatial resolution of RAR is primarily determined by the size of the antenna used: the larger the antenna, the better the spatial resolution. Other determining factors include the pulse duration (τ) and the antenna beam width.

Range resolution:

$R_r = \dfrac{c\,\tau}{2\cos\gamma}$

where c is the speed of light, τ is the pulse length, and γ is the depression angle.

Azimuth resolution:

$R_a = \dfrac{S\,\lambda}{L}$

where S is the slant range, λ is the wavelength, and L is the antenna length.

For systems where the antenna beam width is controlled by the physical length of the antenna, typical azimuth resolutions are on the order of several kilometres. A worked numeric example follows.
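A worked example of both formulas, using illustrative numbers (not from any particular sensor):

```python
import math

c = 3e8                     # speed of light (m/s)
tau = 0.1e-6                # pulse length: 0.1 microseconds
gamma = math.radians(45)    # depression angle

Rr = (c * tau) / (2 * math.cos(gamma))
print(f"Range resolution: {Rr:.1f} m")      # ~21.2 m

S, wavelength, L = 100e3, 0.03, 10.0        # 100 km slant range, 3 cm wavelength, 10 m antenna
Ra = S * wavelength / L
print(f"Azimuth resolution: {Ra:.1f} m")    # 300 m: why RAR needs very long antennas
```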
2 geometric correction procedures

The purpose of georeferencing is to transform the image coordinate system (u,v), which may be distorted due to several factors, to a specific map projection (x,y). There are two kinds of geometric correction procedures: image-to-image registration and image-to-map registration.

Image-to-image registration refers to transforming one image coordinate system into another image coordinate system. Image-to-map registration refers to the transformation of an image coordinate system to a map coordinate system resulting from a particular map projection. A minimal fitting sketch follows.
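A minimal sketch of image-to-map registration using a first-order (affine) polynomial fitted to ground control points; the GCP coordinates below are made up for illustration:

```python
import numpy as np

def fit_affine(uv, xy):
    """Least-squares fit of x = a*u + b*v + c and y = d*u + e*v + f from GCPs."""
    A = np.column_stack([uv, np.ones(len(uv))])      # rows are [u, v, 1]
    coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)  # 3x2 coefficient matrix
    return coeffs

# Four hypothetical GCPs: image (u, v) -> map (x, y) coordinates.
uv = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
xy = np.array([[500000, 4200000], [503000, 4200100],
               [499900, 4197000], [502900, 4197100]], dtype=float)

coeffs = fit_affine(uv, xy)
print(np.array([50.0, 50.0, 1.0]) @ coeffs)  # map (x, y) of image pixel (u=50, v=50)
```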
High-frequency filters in spatial domain

Spatial filtering enhances images by altering pixel values based on their spatial characteristics. High-frequency filters emphasize high spatial frequencies, preserving rapid local variation and removing the slowly varying components. They are used for edge detection and enhancement. There are linear and nonlinear types of high-pass filters: linear filters work on linear combinations of BVs, while nonlinear filters emphasize shapes and details.
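A minimal sketch of a linear high-pass filter: a kernel whose weights sum to zero, applied by convolution, drives smooth regions toward zero and leaves abrupt changes standing out:

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 2-D convolution with edge padding (for illustration only)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

high_pass = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float) / 9.0  # weights sum to zero

img = np.tile([10.0] * 4 + [50.0] * 4, (8, 1))  # flat image with one vertical edge
print(convolve2d(img, high_pass).round(1))       # response only near the edge
```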
Linear Contrast Stretch

Satellite sensors are designed to handle various illumination conditions, but their pixel values often have low display contrast. To improve image contrast, a contrast enhancement is used. The simplest enhancement is a linear contrast stretch, which involves identifying lower and upper bounds from the histogram and applying a transformation that stretches this range to fill the full display range. This enhances contrast in the image, making light-toned areas appear lighter and dark areas appear darker, and easing visual interpretation. The technique is particularly useful in imagery with limited brightness levels.
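A minimal sketch of a linear stretch, using the 2nd and 98th percentiles as illustrative lower and upper bounds:

```python
import numpy as np

def linear_stretch(img, low_pct=2, high_pct=98):
    """Map DNs between the chosen histogram bounds onto the full 0-255 range."""
    lo, hi = np.percentile(img, [low_pct, high_pct])  # assumes hi > lo
    out = (img.astype(float) - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(3)
dull = rng.integers(90, 121, size=(4, 4))   # low-contrast DNs clustered at 90-120
print("before:", dull.min(), dull.max())
print("after :", linear_stretch(dull).min(), linear_stretch(dull).max())
```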
Edge Enhancement

For many remote sensing Earth science applications, the most valuable information that may be derived from an image is contained in the edges surrounding the various objects of interest. Edge enhancement delineates these edges. Edges may be enhanced using either linear or nonlinear edge enhancement techniques. There are two kinds of linear edge enhancement techniques:
- Directional: vertical, horizontal, or any other direction.
- Laplacian: highlights points, lines, and edges while suppressing uniform and smooth regions (a sketch follows).
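A minimal sketch of Laplacian edge enhancement using SciPy's ndimage.convolve: the Laplacian response is added back to the image, sharpening the edges while smooth regions keep their original brightness:

```python
import numpy as np
from scipy import ndimage

laplacian = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)   # responds only to brightness changes

img = np.tile([10.0] * 4 + [50.0] * 4, (8, 1))      # flat image with one vertical edge
edges = ndimage.convolve(img, laplacian, mode="nearest")
enhanced = img + edges                               # add the edge response back

print(enhanced[0].round(1))  # overshoot on both sides marks the sharpened edge
```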

Function of Receiver
1. Collects and detects returned photon signals.
2. Consists of telescopes, filters, photon detectors, discriminators, etc.
3. Distinguishes the returned photons.
Function of Data Acquisition & Control System
1. Records returned data and time-of-flight.
2. Provides system control and coordination.
3. Consists of a multi-channel scaler, discriminator, computer, and software.
4. Enables various data acquisition modes.
APPLICATIONS
Agriculture - Create a topographic map of the fields.
Archaeology - Provide an overview of hidden sites
Biology & Conservation - Used to retrieve forest canopy structural information.
Hydrology - Used for underwater investigation and flood risk mapping.
Military & Law enforcement - Lidar speed guns measure the speed of vehicles and identify
one vehicle from the traffic stream.
Physics & astronomy - Measure the distance to reflectors placed on the moon, Used in Mars-Orbiting Satellite, Used
to detect snow in Mars atmosphere, Used to measure molecular density, calculate temperature
Meteorology - Used for studies of atmospheric conditions, clouds, and aerosols; measurement of atmospheric
gases; measuring wind speed. Uses Mie scattering, DIAL, and space-based Doppler wind lidar (DWL).
Robotics - Allows a robot to map the surrounding area and avoid obstacles.
Geology - Detects faults and measures uplift, monitors glaciers; uses terrestrial & airborne LIDARs.
Shoals-Specific Applications - Undersea cable routing, wreck detection, charting and safety issues.
ADVANTAGES - Higher accuracy, Minimum human dependence, Weather or Light independence, Canopy
penetration, Higher data density.
DISADVANTAGES - Most lidar data are collected at night, but unlike RADAR, LIDAR cannot penetrate clouds,
rain, or dense haze and must be flown during fair weather.

COMPONENTS of LIDAR

Terrestrial lidar (example components): Laser (Nd:YAG) - generates an optical pulse; Optical Telescope; Mirror Shutter; Photomultiplier tubes; Detector (spectrometer).

Airborne lidar (ranging and positioning):
1. Return pulse.
2. High-speed counter.
3. Calculate distance: Distance = (Speed of Light × Time of Flight) / 2 (a worked example follows this list).
4. Lidar Transmitter, Scanner, and Receiver.
5. Aircraft Positioning - Differential GPS (with post-processing).
6. Aircraft Attitude - Pitch, Roll, Yaw - Inertial Navigation System.
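A worked example of the ranging relation, with an illustrative round-trip time:

```python
# Distance = (speed of light * time of flight) / 2, since the pulse travels out and back.
c = 3e8              # speed of light (m/s)
t = 6.67e-6          # round-trip time of flight (s), illustrative value

distance = c * t / 2
print(f"Target distance: {distance:.0f} m")  # ~1000 m
```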
