Fundamental Concepts of Remote Sensing
Energy-frequency-wavelength relationship
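The standard relation linking these three quantities (stated here for reference) is:
E = hν = hc / λ
where E is the photon energy, ν the frequency, λ the wavelength, h Planck's constant and c the
speed of light; shorter wavelengths therefore correspond to higher frequencies and higher photon
energies.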
Stefan–Boltzmann Law
The Stefan–Boltzmann law describes the power radiated from a black body in terms of
its temperature. Specifically, the Stefan–Boltzmann law states that the total energy radiated per
unit surface area of a black body across all wavelengths per unit time (also known as the black-
body radiant exitance or emissive power), M, is directly proportional to the fourth power of the
black body's thermodynamic temperature T:
M = σT⁴
where σ ≈ 5.67 × 10⁻⁸ W m⁻² K⁻⁴ is the Stefan–Boltzmann constant.
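As an illustrative check (a minimal sketch; the 300 K and 5800 K temperatures are representative
values only, not taken from the text):

# Stefan-Boltzmann law: total radiant exitance of a black body
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_exitance(temperature_k):
    """Total power emitted per unit area (W/m^2) by a black body at T kelvin."""
    return SIGMA * temperature_k ** 4

print(blackbody_exitance(300))    # ~459 W/m^2, a surface near 300 K
print(blackbody_exitance(5800))   # ~6.4e7 W/m^2, roughly the Sun's photosphere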
Wien's Displacement Law
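Wien's displacement law relates the wavelength of peak emission to temperature (standard form,
added here for reference):
λmax = b / T, where b ≈ 2898 µm·K.
For example, the Sun (~6000 K) peaks near 0.48 µm in the visible, while the Earth (~300 K)
peaks near 9.7 µm in the thermal infrared.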
Planck's law of radiation
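Planck's law gives the spectral radiance of a black body as a function of wavelength and
temperature (standard form, stated here for reference):
B(λ, T) = (2hc² / λ⁵) · 1 / (exp(hc / (λkT)) − 1)
where h is Planck's constant, c is the speed of light and k is Boltzmann's constant. Both the
Stefan–Boltzmann law and Wien's displacement law follow from this expression.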
Electromagnetic radiation spectrum
The electromagnetic spectrum ranges from the shorter wavelengths (including
gamma and x-rays) to the longer wavelengths (including microwaves and broadcast
radio waves). There are several regions of the electromagnetic spectrum which are
useful for remote sensing.
Visible Spectrum
The light which our eyes can detect forms the visible spectrum. It is important to note
how small a portion of the electromagnetic spectrum is represented by the visible
region.
Radiation that is not absorbed or scattered in the atmosphere can reach and interact
with the Earth's surface. There are three (3) forms of interaction that can take place
when energy strikes, or is incident (I) upon, the surface. These are: absorption
(A); transmission (T); and reflection (R).
Reflection: reflected light is what we perceive as color; e.g. chlorophyll in plants reflects
green light.
Absorption: the incident energy is neither reflected nor transmitted but is transformed into
another form, such as heat (e.g. in a rock), or is taken up by chlorophyll in the process
of photosynthesis.
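Since these three interactions account for all of the incident energy, I = A + T + R at any given
wavelength. A minimal sketch of that energy balance (the numbers are purely illustrative):

# energy balance at the surface: incident = absorbed + transmitted + reflected
def reflected_fraction(incident, absorbed, transmitted):
    """Fraction of the incident energy that is reflected back toward the sensor."""
    reflected = incident - absorbed - transmitted
    return reflected / incident

# hypothetical leaf in the green band: most energy absorbed, a little transmitted
print(reflected_fraction(incident=100.0, absorbed=75.0, transmitted=10.0))   # 0.15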
The Earth's atmosphere acts as a filter to remove radiation such as cosmic rays,
gamma rays, x-rays, UV rays, and large portions of the electromagnetic spectrum
through the processes of absorption and scattering by gases, water vapor, and particulate
matter (dust).
Rayleigh Scatter occurs when particles are very small compared to the wavelength of
the radiation. These could be particles such as small specks of dust or nitrogen and
oxygen molecules. This scattering is the cause of the blue sky; the sky appears red in the
mornings and evenings because the light then travels a longer path through the atmosphere
and the blue (shorter) wavelengths are scattered away so completely that only the red
(longer) wavelengths remain.
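Because Rayleigh scattering strength varies roughly as 1/λ⁴, shorter (blue) wavelengths are
scattered far more strongly than longer (red) ones; a quick sketch with representative
wavelengths (chosen purely for illustration):

# Rayleigh scattering intensity is proportional to 1 / wavelength**4
def rayleigh_strength(wavelength_nm):
    return 1.0 / wavelength_nm ** 4

blue_nm, red_nm = 450.0, 700.0   # representative band centres
print(rayleigh_strength(blue_nm) / rayleigh_strength(red_nm))   # roughly 6x more scattering for blue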
Mie Scattering occurs when the particles in the atmosphere are the same size as the
wavelengths being scattered. Dust, pollen, smoke and water vapour are common
causes of Mie scattering, which tends to affect longer wavelengths than those affected by
Rayleigh scattering. Mie scattering
occurs mostly in the lower portions of the atmosphere where larger particles are more
abundant, and dominates when cloud conditions are overcast.
Non-Selective Scattering occurs when the particles are much larger than the
wavelength of the radiation. Water droplets and large dust particles can cause this type
of scattering, which makes fog and clouds appear white to our eyes because blue,
green, and red light are all scattered in approximately equal quantities
(blue + green + red light = white light).
Atmospheric Absorption
Ozone serves to absorb the harmful (to most living things) ultraviolet radiation from
the sun. Without this protective layer in the atmosphere our skin would burn when
exposed to sunlight.
Carbon Dioxide absorbs in the far infrared portion of the spectrum which is related to
thermal heating and results in a 'greenhouse' effect.
Water Vapor absorbs energy depending upon its location and concentration, and
forms a primary component of the Earth's climatic system.
Atmospheric Windows
Solar Reflectance
Solar Reflectance is the fraction of the incident solar energy which is reflected by the
surface in question. The best standard technique for its determination uses
spectrophotometric measurements, with an integrating sphere to determine the
reflectance at each different wavelength. The overall solar reflectance is then determined
by averaging these spectral values, weighted by a standard solar spectrum.
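A sketch of that weighting step, assuming the spectral reflectance and a standard solar
irradiance spectrum have already been sampled at the same wavelengths (all array values below
are illustrative placeholders):

import numpy as np

wavelengths_nm   = np.array([400.0, 500.0, 600.0, 700.0, 800.0])
reflectance      = np.array([0.20, 0.25, 0.30, 0.35, 0.40])   # from the integrating sphere
solar_irradiance = np.array([1.6, 1.9, 1.8, 1.5, 1.1])        # standard solar spectrum, W m^-2 nm^-1

# solar reflectance = solar-weighted average of the spectral reflectance
solar_reflectance = np.sum(reflectance * solar_irradiance) / np.sum(solar_irradiance)
print(solar_reflectance)   # ~0.29 for these placeholder values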
The ratio of the size of any object, feature, or area within the photo to its actual size on
the ground is called the scale.
Q. One meaning of scale is this: 1 inch on the photo equals X inches on the
ground. For the 1:100,000 photo above, determine how many feet are represented
by an inch (on the photo, or in this case, the image on your screen) and likewise
how many mile(s) extend across that inch.
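As a check on the arithmetic for a 1:100,000 image: one inch on the photo represents 100,000
inches on the ground, which is 100,000 / 12 ≈ 8,333 ft, or 8,333 / 5,280 ≈ 1.58 miles.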
Let's move now from the spectral part of photogrammetry to the spatial part. Scale,
mentioned before, is just the comparison of the dimensions of an object or feature in a
photo or map to its actual dimensions in the target. We state scale in several ways,
such as, "six inches to the mile", "1/20,000", and, most common "1:2,000". These
mean that one measurement unit in the numerator (image or map) is equivalent to the
stated number of that unit in the denominator (scene). Thus, 1:2,000 simply states that
one of any length unit, such as an inch, in the photo corresponds to 2,000 inches on
the ground or air (cloud). Or, one cm is equivalent to 20,000 cm. "six inches to the
mile" translates to six inches in the photo represents 63,360 (5,280 ft x 12 in/ft) inches
in the real world, but we can further reduce it to 1:10,560, because six and 63,360 are
divisible by six. Note that, if we enlarge or contract a photo of a given scale, say by
projection as a transparency onto a screen, then one inch on the screen no longer
corresponds to the same denominator number but now represents some other scale
determined by the magnification factor. However, the effective resolution, the area
covered, and the relative details remain the same.
AERIAL PHOTOGRAPHS
We determine the scale of the aerial photo, expressed as its Representative Fraction
(RF) by the height of the moving platform and by the focal length of the camera,
according to this equation:
RF = f/H*,
where H* = H - h,
so that H - h is the height of the platform above the ground at the point imaged (assuming
a flat ground surface; in rugged terrain the scale effectively varies with the elevation).
We can also show that RF is proportional to resolution and distance ratios, as
given by RF = rg/rs = d/D, where rg is ground resolution (in line pairs per meter; see
below) and rs is the sensor system resolution (in line pairs per millimeter); d is the
distance between two points in the photo and D is the actual distance between these
points on the ground (the definition of scale).
Q. Two points appear on a map 1.75 inches apart. The actual horizontal ground
distance between the two points is 1108.0 meters. What is the denominator of the
scale fraction (RF) for this map?
Q. A vertical airphoto is taken from a flying height of 5000 ft above sea level with a
camera having a focal length of 6 inches. The flat ground surface is 1000 ft above sea
level. What is the RF for the resulting photo?
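One way to work both questions, as a minimal sketch (the unit conversions are noted in the
comments; the answers follow directly from the definitions above):

# Scale from a measured map distance:  RF denominator = D / d (same units)
map_distance_in    = 1.75
ground_distance_m  = 1108.0
ground_distance_in = ground_distance_m * 100 / 2.54       # metres -> inches
print(round(ground_distance_in / map_distance_in))         # ~24,927, i.e. roughly 1:25,000

# Scale of a vertical airphoto:  RF = f / (H - h)
focal_length_ft  = 6 / 12.0     # 6-inch lens, expressed in feet
flying_height_ft = 5000.0       # above sea level (the datum)
terrain_elev_ft  = 1000.0
print((flying_height_ft - terrain_elev_ft) / focal_length_ft)   # denominator = 8000, i.e. 1:8,000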
TRUE COLOR
To understand false color, a look at the concept behind true color is helpful. An image
is called a "true-color" image when it offers a natural color rendition, or when it
comes close to it. This means that the colors of an object in an image appear to a
human observer the same way as if this observer were to directly view the object: A
green tree appears green in the image, a red apple red, a blue sky blue, and so
on. When applied to black-and-white images, true-color means that the perceived
lightness of a subject is preserved in its depiction.
False color
In contrast to a true-color image, a false-color image sacrifices natural color rendition
in order to ease the detection of features that are not readily discernible otherwise –
for example the use of near infrared for the detection of vegetation in satellite
images. While a false-color image can be created using solely the visual spectrum
(e.g. to accentuate color differences), typically some or all data used is from
electromagnetic radiation (EM) outside the visual
spectrum (e.g. infrared, ultraviolet or X-ray). The choice of spectral bands is governed
by the physical properties of the object under investigation.
As the human eye uses three "spectral bands", three spectral bands are commonly
combined into a false-color image. At least two spectral bands are needed for a false-
color encoding, and it is possible to combine more bands into the three visual RGB
channels – with the eye's ability to discern three channels being the limiting factor. In
contrast, a "color" image made from a single spectral band, or an image made from
non-EM data (e.g. elevation, temperature, tissue type), is a pseudocolor image.
For true color, the RGB channels (red "R", green "G" and blue "B") from the camera
are mapped to the corresponding RGB channels of the image, yielding an
"RGB→RGB" mapping. For false color this relationship is changed. The simplest
false-color encoding is to take an RGB image in the visible spectrum, but map it
differently, e.g. "GBR→RGB". For "traditional false-color" satellite
images of Earth a "NRG→RGB" mapping is used, with "N" being the near-infrared
spectral band (and the blue spectral band being unused) – this yields the typical
"vegetation in red" false-color images.
Satellites are placed at various heights and orbits to achieve desired coverage of the
Earth's surface. When the orbital speed exactly matches that of the Earth's rotation, the
satellite stays above the same point at all times, in a geostationary orbit.
This is useful for communications and weather monitoring satellites. Satellite
platforms for electro-optical imaging systems are usually placed in a sun-
synchronous, low-Earth orbit so that images of a given place are
always acquired at the same local time. The revisit time for a particular location is a
function of the individual platform and sensor, but generally it is on the order of
several days to several weeks. While orbits are optimized for time of day, the satellite
track may not always coincide with cloud-free conditions or specific vegetation
conditions of interest to the end-user of the imagery. Therefore, it is not a given that
usable imagery will be collected on every sensor pass over a given site.
Aircraft often have a definite advantage because of their mobilization flexibility. They
can be deployed wherever and whenever weather conditions are favorable. Clouds
often appear and dissipate over a target over a period of several hours during a given
day. Aircraft on site can respond at a moment's notice to take advantage of clear
conditions, while satellites are locked into a schedule dictated by orbital parameters.
Aircraft can also be deployed in small or large numbers, making it possible to collect
imagery seamlessly over an entire county or state in a matter of days or weeks simply
by having lots of planes in the air at the same time.
Flight planning
Geosynchronous and sun-synchronous orbits
Geosynchronous Orbits:
A geostationary orbit (or Geostationary Earth Orbit - GEO) is a geosynchronous
orbit directly above the Earth's equator (0° latitude), with a period equal to the Earth's
rotational period and an orbital eccentricity of approximately zero. An object in a
geostationary orbit appears motionless, at a fixed position in the sky, to ground
observers. Communications satellites and weather satellites are often given
geostationary orbits, so that the satellite antennas that communicate with them do not
have to move to track them, but can be pointed permanently at the position in the sky
where they stay. Due to the constant 0° latitude and circularity of geostationary orbits,
satellites in GEO differ in location by longitude only.
Geostationary orbits are useful because they cause a satellite to appear stationary with
respect to a fixed point on the rotating Earth, allowing a fixed antenna to maintain a
link with the satellite. The satellite orbits in the direction of the Earth's rotation, at an
altitude of 35,786 km (22,236 mi) above ground, producing an orbital period equal to
the Earth's period of rotation, known as the sidereal day.
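That altitude follows from Kepler's third law for a circular orbit whose period equals one
sidereal day; a quick sketch of the calculation (standard constants):

import math

MU_EARTH     = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1          # seconds
EARTH_RADIUS = 6378.137e3       # equatorial radius, m

# Kepler's third law: T^2 = 4*pi^2 * a^3 / mu  ->  a = (mu * T^2 / (4*pi^2))^(1/3)
semi_major_axis = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
altitude_km = (semi_major_axis - EARTH_RADIUS) / 1000
print(round(altitude_km))   # ~35,786 km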
Sun-Synchronous Orbit
A Sun-synchronous orbit (sometimes incorrectly called a heliosynchronous orbit) is a
geocentric orbit which combines altitude and inclination in such a way that an object
on that orbit ascends or descends over any given point of the Earth's surface at the
same local mean solar time. The surface illumination angle will be nearly the same
every time. This consistent lighting is a useful characteristic for satellites that image
the Earth's surface in visible or infrared wavelengths (e.g. weather and spy satellites)
and for other remote sensing satellites (e.g. those carrying ocean and atmospheric
remote sensing instruments that require sunlight). For example, a satellite in sun-
synchronous orbit might ascend across the equator twelve times a day, each time at
approximately 15:00 mean local time.
These orbits allow a satellite to pass over a section of the Earth at the same local time
of day. Since there are about 365 days in a year and 360 degrees in a circle, the orbital
plane has to shift (precess) by approximately one degree per day (360 / 365 ≈ 0.99°) to
keep pace with the Sun. These satellites typically orbit at altitudes between 700 and
800 km.
Sensors
Resolution
In general, the resolution is the minimum distance between two objects that can be
distinguished in the image. Objects closer than the resolution appear as a single object
in the image. However, in remote sensing the term resolution is used to represent the
resolving power, which includes not only the capability to identify the presence of two
objects, but also their properties. In qualitative terms, resolution is the amount of
detail that can be observed in an image. Thus an image that shows finer detail is said
to be of finer resolution than an image that shows coarser detail. Four types of
resolution are defined for remote sensing systems:
Spatial resolution
Spectral resolution
Temporal resolution
Radiometric resolution
Spatial resolution
A digital image consists of an array of pixels. Each pixel contains information about a
small area on the land surface, which is considered as a single object. Spatial
resolution is a measure of the area or size of the smallest dimension on the Earth’s
surface over which an independent measurement can be made by the sensor. It is
expressed as the size of the pixel on the ground, in meters. Fig. 1 shows examples of
a coarse-resolution image and a fine-resolution image.
A measure of size of pixel is given by the Instantaneous Field of View (IFOV). The
IFOV is the angular cone of visibility of the sensor, or the area on the Earth’s surface
that is seen at one particular moment of time. IFOV is dependent on the altitude of the
sensor above the ground level and the viewing angle of the sensor.
The size of the area viewed on the ground can be obtained by multiplying the IFOV
(in radians) by the distance from the ground to the sensor. This area on the ground is
called the ground resolution or ground resolution cell. It is also referred as the spatial
resolution of the remote sensing system.
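A sketch of that multiplication, with hypothetical but representative numbers (an IFOV of
about 0.043 mrad viewed from 705 km gives a ground cell of roughly 30 m, similar to a
Landsat-class sensor):

# ground resolution cell size = IFOV (radians) * distance from sensor to ground
def ground_cell_size_m(ifov_mrad, altitude_km):
    """Side length (m) of the ground resolution cell for a nadir view."""
    return (ifov_mrad * 1e-3) * (altitude_km * 1e3)

print(ground_cell_size_m(ifov_mrad=0.043, altitude_km=705))   # ~30 m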
Spectral resolution
Spectral resolution represents the spectral band width of the filter and the sensitivity of
the detector. It may be defined as the ability of a sensor to define fine wavelength
intervals, or the ability of a sensor to resolve the energy received in a spectral
bandwidth so as to characterize different constituents of the Earth's surface. The finer
the spectral resolution, the narrower the wavelength range for a particular channel or
band.
Many remote sensing systems are multi-spectral, recording energy over separate
wavelength ranges at various spectral resolutions. For example, IRS LISS-III uses 4
bands: 0.52-0.59 µm (green), 0.62-0.68 µm (red), 0.77-0.86 µm (near-IR) and
1.55-1.70 µm (mid-IR). The Aqua/Terra MODIS instruments use 36 spectral bands,
including three in the visible spectrum. A more recent development is hyper-spectral
sensors, which detect hundreds of very narrow spectral bands. Figure 5 shows a
hypothetical representation of remote sensing systems with different spectral
resolutions. The first representation shows the DN values obtained over 9 pixels using
imagery captured in a single band; similarly, the second and third representations
depict the DN values obtained in 3 and 6 bands using the respective sensors. If the
area imaged is, say, A km², the same area is viewed using 1, 3 and 6 bands
respectively.
Generally surface features can be better distinguished from multiple narrow bands,
than from a single wide band. For example, in Fig. 6, using the broad wavelength
band 1, the features A and B cannot be differentiated. However, the spectral
reflectance values of the two features are different in the narrow bands 2 and 3. Thus,
a multi-spectral image involving bands 2 and 3 can be used to differentiate the
features A and B.
Temporal resolution
Temporal resolution is the frequency of flyovers by the satellite or aircraft, and is only
relevant in time-series studies or those requiring an averaged or mosaic image, as in
deforestation monitoring. It was first exploited by the intelligence community, where
repeated coverage revealed changes in infrastructure, the deployment of units or the
modification/introduction of equipment. Cloud cover over a given area or object may
also make it necessary to repeat the collection for that location.
Radiometric Resolution
Radiometric Resolution refers to the smallest change in intensity level that can be
detected by the sensing system. The intrinsic radiometric resolution of a sensing
system depends on the signal to noise ratio of the detector. In a digital image, the
radiometric resolution is limited by the number of discrete quantization levels used to
digitize the continuous intensity value. The following images illustrate the effects of
the number of quantization levels on the digital image. The first image is a SPOT
panchromatic image quantized at 8 bits (i.e. 256 levels) per pixel. The subsequent
images show the effects of degrading the radiometric resolution by using fewer
quantization levels.
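A minimal sketch of what reducing the number of quantization levels does to the recorded
values (illustrative only; it simply re-quantizes an 8-bit array to a smaller number of grey
levels):

import numpy as np

def requantize(image_8bit, bits):
    """Reduce an 8-bit image (values 0-255) to 2**bits grey levels."""
    levels = 2 ** bits
    step = 256 // levels
    return (image_8bit // step) * step

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
print(len(np.unique(requantize(img, 2))))   # at most 4 distinct grey levels remain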
Across-track scanners
The IFOV (C) of the sensor and the altitude of the platform determine
the ground resolution cell viewed (D), and thus the spatial resolution.
The angular field of view (E) is the sweep of the mirror, measured in
degrees, used to record a scan line, and determines the width of the
imaged swath (F). Airborne scanners typically sweep large angles (between
90º and 120º), while satellites, because of their higher altitude, need only to
sweep fairly small angles (10-20º) to cover a broad region. Because the
distance from the sensor to the target increases towards the edges of the
swath, the ground resolution cells also become larger and introduce
geometric distortions to the images. Also, the length of time the IFOV "sees"
a ground resolution cell as the rotating mirror scans (called the dwell time),
is generally quite short and influences the design of the spatial, spectral, and
radiometric resolution of the sensor.
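As a rough illustration of how the angular field of view and the platform altitude set the swath
width, here is a flat-Earth approximation with representative numbers (an assumption for
illustration, not values from the text):

import math

def swath_width_km(altitude_km, field_of_view_deg):
    """Approximate swath width for a nadir-pointing scanner over a flat Earth."""
    half_angle = math.radians(field_of_view_deg / 2)
    return 2 * altitude_km * math.tan(half_angle)

print(round(swath_width_km(altitude_km=3, field_of_view_deg=100)))   # airborne scanner: ~7 km swath
print(round(swath_width_km(altitude_km=705, field_of_view_deg=15)))  # satellite scanner: ~186 km swath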
Along-track scanners also use the forward motion of the platform to
record successive scan lines and build up a two-dimensional image,
perpendicular to the flight direction. However, instead of a scanning mirror,
they use a linear array of detectors (A) located at the focal plane of the
image (B) formed by lens systems (C), which are "pushed" along in the flight
track direction (i.e. along track). These systems are also referred to as
pushbroom scanners, as the motion of the detector array is analogous to
the bristles of a broom being pushed along a floor. Each individual detector
measures the energy for a single ground resolution cell (D) and thus the size
and IFOV of the detectors determines the spatial resolution of the system. A
separate linear array is required to measure each spectral band or channel.
For each scan line, the energy detected by each detector of each linear array
is sampled electronically and digitally recorded.
Along-track scanners with linear arrays have several advantages over
across-track mirror scanners. The array of detectors combined with the
pushbroom motion allows each detector to "see" and measure the energy
from each ground resolution cell for a longer period of time (dwell time).
This allows more energy to be detected and improves the radiometric
resolution.
The increased dwell time also facilitates smaller IFOVs and narrower
bandwidths for each detector. Thus, finer spatial and spectral resolution can
be achieved without impacting radiometric resolution. Because detectors are
usually solid-state microelectronic devices, they are generally smaller,
lighter, require less power, and are more reliable and last longer because
they have no moving parts. On the other hand, cross-calibrating thousands
of detectors to achieve uniform sensitivity across the array is necessary and
complicated.
Thermal scanners
Microwave remote sensing
UNIQUE CAPABILITIES
Sensors operating in the microwave region of the electromagnetic spectrum are
used for remote sensing of regions covered by clouds and for 24 h data collection.
Microwave sensors have advantages and unique capabilities over the optical
sensors. These are:
All-weather capability (penetration through clouds)
Under cloudy conditions it is impossible to get results using optical remote
sensing techniques, and only sensors operating at microwave frequencies
produce good results.
Day and night capability (independent of intensity and angle of sun illumination)
At microwave frequencies, the signals received by the sensor depend wholly on
the dielectric properties and physical properties of the target, such as its surface
roughness and texture. The received signal therefore does not depend upon the
Sun's illumination, its angle or its intensity, and one gets information about the
target object even during the night. This is not possible with optical sensors,
because they need illumination to image the target; similarly, infrared sensors
also require solar illumination to some extent.
Penetration through vegetation and soil to a certain extent
Sensitivity to moisture (in liquid or vapour forms).
Introduction to Geographic Information Systems (GIS)
Introduction
Geographical Information Systems (GIS) are computer-based systems that enable
users to collect, store, process, analyze and present spatial data.
A GIS provides an electronic representation of information, called spatial data, about
the Earth's natural and man-made features. A GIS references these real-world
spatial data elements to a coordinate system. These features can be separated into
different layers. A GIS system stores each category of information in a separate
"layer" for ease of maintenance, analysis, and visualization. For example, layers
can represent terrain characteristics, census data, demographics information,
environmental and ecological data, roads, land use, river drainage and flood plains,
and rare wildlife habitats. Different applications create and use different layers. A
GIS can also store attribute data, which is descriptive information of the map
features. This attribute information is placed in a database separate from the
graphics data but is linked to them. A GIS allows the examination of both spatial
and attribute data at the same time.
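A toy sketch of that separation between graphic (layer) data and attribute data, linked through
a common feature ID (plain Python, purely illustrative; a real GIS would hold this in spatial
files or a DBMS):

# each layer holds geometric features keyed by an ID; attributes live in a
# separate table linked back to the features through the same ID
roads_layer = {
    101: {"type": "LineString", "coords": [(0.0, 0.0), (1.2, 0.8)]},
    102: {"type": "LineString", "coords": [(1.2, 0.8), (2.5, 1.1)]},
}
roads_attributes = {
    101: {"name": "Main St", "surface": "paved", "lanes": 2},
    102: {"name": "Mill Rd", "surface": "gravel", "lanes": 1},
}

# a combined spatial + attribute query: geometries of paved roads only
paved = [roads_layer[fid] for fid, attrs in roads_attributes.items()
         if attrs["surface"] == "paved"]
print(len(paved))   # 1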
Defining GIS
A “geographic information system” (GIS) is a computer-based tool that allows you
to create, manipulate, analyze, store and display information based on its location.
GIS makes it possible to integrate different kinds of geographic information, such
as digital maps, aerial photographs, satellite images and global positioning system
data (GPS), along with associated tabular database information (e.g., ‘attributes' or
characteristics about geographic features).
In short, a GIS can be defined as a computer system capable of assembling, storing,
manipulating, and displaying geographically referenced information.
GIS SUBSYSTEMS
1. data input subsystem;
2. data storage and retrieval subsystem;
3. data manipulation and analysis subsystem; and
4. data output and display subsystem.
COMPONENTS OF A GIS
Hardware
Hardware is the computer system on which a GIS operates. Today, GIS software
runs on a wide range of hardware types, from centralized computer servers to
desktop computers used in stand-alone or networked configurations.
Software
GIS software provides the functions and tools needed to store, analyze, and display
geographic information. A review of the key GIS software subsystems is provided
above.
Data
Perhaps the most important component of a GIS is the data. Geographic data and
related tabular data can be collected in-house, compiled to custom specifications
and requirements, or occasionally purchased from a commercial data provider. A
GIS can integrate spatial data with other existing data resources, often stored in a
corporate DBMS. The integration of spatial data (often proprietary to the GIS
software), and tabular data stored in a DBMS is a key functionality afforded by
GIS.
People
GIS technology is of limited value without the people who manage the system and
develop plans for applying it to real world problems. GIS users range from
technical specialists who design and maintain the system to those who use it to
help them perform their everyday work. The identification of GIS specialists
versus end users is often critical to the proper implementation of GIS technology.
Methods
A successful GIS operates according to a well-designed implementation plan and
business rules, which are the models and operating practices unique to each
organization.
SPATIAL DATA MODELS
Traditionally spatial data has been stored and presented in the form of a map.
Three basic types of spatial data models have evolved for storing geographic data
digitally. These are referred to as:
Vector; Raster; Image