
Fundamental concepts of remote sensing

Energy-frequency-wavelength relationship
Stefan–Boltzmann Law

The Stefan–Boltzmann law describes the power radiated from a black body in terms of
its temperature. Specifically, the Stefan–Boltzmann law states that the total energy radiated per
unit surface area of a black body across all wavelengths per unit time (also known as the black-
body radiant exitance or emissive power), M, is directly proportional to the fourth power of the
black body's thermodynamic temperature T:

M = σT⁴

where σ is the Stefan–Boltzmann constant.
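As a quick numerical illustration of the law (a minimal sketch in Python; the 300 K temperature is just an assumed example value, roughly that of the Earth's surface):

# Stefan-Boltzmann law: M = sigma * T**4
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temperature_k):
    # Total energy radiated per unit area per unit time by a black body at T kelvin
    return SIGMA * temperature_k ** 4

print(radiant_exitance(300.0))  # about 459 W/m^2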

Wien's Law
Planck's law of radiation
Electromagnetic radiation spectrum
The electromagnetic spectrum ranges from the shorter wavelengths (including
gamma and x-rays) to the longer wavelengths (including microwaves and broadcast
radio waves). There are several regions of the electromagnetic spectrum which are
useful for remote sensing.

Visible Spectrum

The light which our eyes can detect forms the visible spectrum. It is important to note
how small a portion of the electromagnetic spectrum is represented by the visible
region.

Electromagnetic energy and its interactions in the atmosphere and with terrain features
Radiation Interaction with the Earth

Radiation that is not absorbed or scattered in the atmosphere can reach and interact
with the Earth's surface. There are three (3) forms of interaction that can take place
when energy strikes, or is incident (I) upon, the surface. These are: absorption
(A); transmission (T); and reflection (R).

Reflection: reflected light is what we know as color; e.g. chlorophyll in plants reflects
green light.
Absorption: the incident energy is not reflected or transmitted but is transformed into
another form, such as heat (e.g. by a rock), or is absorbed by chlorophyll in the process
of photosynthesis.

Transmission: when energy propagates through a medium, whatever is not absorbed or
reflected is transmitted through; e.g. an ultraviolet filter on a camera absorbs UV
rays but allows the remaining energy to expose the film. Changes in density can also
slow the velocity of the energy, resulting in refraction, such as light through a prism.

Radiation Interaction with the Atmosphere

The Earth's atmosphere acts as a filter to remove radiation such as cosmic rays,
gamma rays, x-rays, UV rays, and large portions of the electromagnetic spectrum
through the processes of absorption and scattering by gases, water vapor, and particulate
matter (dust).

Rayleigh Scatter occurs when particles are very small compared to the wavelength of
the radiation. These could be particles such as small specks of dust or nitrogen and
oxygen molecules. This is the cause of the blue sky; the sky appears red in the mornings and
evenings because the light has a longer path through the atmosphere and the blue
(shorter) wavelengths are scattered so completely that only the red (longer)
wavelengths remain.
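The strength of Rayleigh scattering varies roughly as the inverse fourth power of wavelength, which is why the shorter blue wavelengths dominate scattered skylight. A minimal sketch of that ratio (the 450 nm and 700 nm values are assumed, representative blue and red wavelengths):

# Rayleigh scattering intensity is roughly proportional to wavelength**-4
blue_nm, red_nm = 450.0, 700.0  # assumed representative wavelengths
ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered about {ratio:.1f} times more strongly than red light")  # ~5.9x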

Mie Scattering occurs when the particles in the atmosphere are the same size as the
wavelengths being scattered. Dust, pollen, smoke and water vapour are common
causes of Mie scattering which tends to affect longer wavelengths. Mie scattering
occurs mostly in the lower portions of the atmosphere where larger particles are more
abundant, and dominates when cloud conditions are overcast.
Non-Selective Scattering occurs when the particles are much larger than the
wavelength of the radiation. Water droplets and large dust particles can cause this type
of scattering; it causes fog and clouds to appear white to our eyes because blue,
green, and red light are all scattered in approximately equal quantities
(blue + green + red light = white light).

Atmospheric Absorption

In addition to the scattering of EM radiation, the atmosphere also absorbs
electromagnetic radiation. The three main constituents which absorb radiation
are Ozone, Carbon Dioxide, and Water Vapor.

Ozone serves to absorb the harmful (to most living things) ultraviolet radiation from
the sun. Without this protective layer in the atmosphere our skin would burn when
exposed to sunlight.

Carbon Dioxide absorbs in the far infrared portion of the spectrum which is related to
thermal heating and results in a 'greenhouse' effect.

Water Vapor absorbs energy depending upon its location and concentration, and
forms a primary component of the Earth's climatic system.

Atmospheric Windows

Absorption by various constituents in the atmosphere prevents portions of the
electromagnetic radiation from reaching the Earth. For remote sensing, this restricts us
to portions of the electromagnetic spectrum where radiation is not strongly absorbed;
these regions are known as atmospheric windows.
Reflectance and emittance

Solar Reflectance
Solar Reflectance is the fraction of the incident solar energy which is reflected by the
surface in question. The best standard technique for its determination uses
spectrophotometric measurements, with an integrating sphere to determine the
reflectance at each different wavelength. The average reflectance is then determined
by an averaging process, using a standard solar spectrum.
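In other words, the solar reflectance is an irradiance-weighted mean of the spectral reflectance. A minimal sketch under that assumption (the wavelength grid, reflectance values and weights below are invented for illustration and do not represent a real standard solar spectrum):

# Solar reflectance as a weighted average of spectral reflectance,
# using the solar spectral irradiance as the weights
wavelengths_nm = [400, 500, 600, 700, 800]        # illustrative wavelength grid
reflectance    = [0.05, 0.10, 0.20, 0.40, 0.45]   # spectrophotometric measurements (invented values)
solar_weight   = [1.6, 1.9, 1.8, 1.5, 1.1]        # standard solar spectrum weights (invented values)

solar_reflectance = sum(r * w for r, w in zip(reflectance, solar_weight)) / sum(solar_weight)
print(round(solar_reflectance, 3))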

Elements of photographic systems


An aerial photo is just a black and white (b & w) or color "picture" of an area on the
Earth's surface (plus clouds, often), either on print or on transparency. A film camera
shoots the picture from a free-flying platform (airplane, helicopter or balloon) some
preplanned distance above the surface. Two types are distinguished by the angle of view
relative to the surface. The first, oblique photography, snaps images from an angle,
low to high relative to vertical; the most common type is the high oblique.
The second type of aerial photos is oriented vertically, that is, it results from pointing
the camera straight down (to the nadir, at the photo center point) to show the surface
directly from above. The size of the photo and the sizes of the features represented
within the photos can vary depending on the following: the camera's optical
parameters, the surface area of the exposed film (frame size), the subsequent printing
sizes (e.g., enlargement), and the altitude of the camera platform.
Image Scale

The ratio of the size of any object, feature, or area within the photo to its actual size on
the ground is called the scale.

Ex. Landsat image (scale = 1:1,000,000)

Q. One meaning of scale is this: 1 inch on the photo equals X inches on the
ground. For a 1:100,000 photo, determine how many feet are represented by an inch
on the photo (or, in this case, on the image on your screen) and likewise how many
mile(s) extend across that inch.

ANS. 1 inch = 100,000 inches ÷ 12 = 8,333.3 feet. In miles, that is 8,333.3/5,280 = 1.578 miles.
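The same arithmetic as a small sketch (the function names are just illustrative):

# Ground distance represented by 1 photo inch, for a given scale denominator
def feet_per_inch(scale_denominator):
    return scale_denominator / 12.0                   # 12 inches per foot

def miles_per_inch(scale_denominator):
    return feet_per_inch(scale_denominator) / 5280.0  # 5,280 feet per mile

print(feet_per_inch(100_000))    # 8333.3 ft
print(miles_per_inch(100_000))   # about 1.58 mi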

Let's move now from the spectral part of photogrammetry to the spatial part. Scale,
mentioned before, is just the comparison of the dimensions of an object or feature in a
photo or map to its actual dimensions on the ground. We state scale in several ways,
such as "six inches to the mile", "1/20,000", and, most commonly, "1:20,000". These
mean that one measurement unit in the numerator (image or map) is equivalent to the
stated number of that unit in the denominator (scene). Thus, 1:20,000 simply states that
one of any length unit, such as an inch, in the photo corresponds to 20,000 inches on
the ground or in the air (cloud); or, one cm is equivalent to 20,000 cm. "Six inches to the
mile" translates to six inches in the photo representing 63,360 (5,280 ft x 12 in/ft) inches
in the real world, which we can further reduce to 1:10,560 because six and 63,360 are
divisible by six. Note that, if we enlarge or contract a photo of a given scale, say by
projection as a transparency onto a screen, then one inch on the screen no longer
corresponds to the same denominator number but now represents some other scale
determined by the magnification factor. However, the effective resolution, the area
covered, and the relative details remain the same.

AERIAL PHOTOGRAPHS

We determine the scale of the aerial photo, expressed as its Representative Fraction
(RF) by the height of the moving platform and by the focal length of the camera,
according to this equation:

RF = f/H*,

where H* = H - h,

with H = height (elevation with reference to sea level) of the camera

and h is the height of a reference point on the surface,

so that H - h is the distance between the platform and the point (assuming a flat ground
surface; in rugged terrain, scale in effect varies with the elevations).

We can also show that RF is also proportional to resolution and distance ratios, as
given by RF = rg/rs = d/D, where rg is ground resolution (in line pairs per meter; see
below) and rs is the sensor system resolution (in line pairs per millimeter); d is the
distance between two points in the photo and D is the actual distance between these
points on the ground (the definition of scale).
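The practice questions that follow can be checked with a minimal sketch of these two relationships (the function and variable names are just illustrative):

# RF = f / (H - h): focal length over flying height above the terrain (same units)
def rf_from_camera(focal_length, flying_height, terrain_elevation=0.0):
    return focal_length / (flying_height - terrain_elevation)

# RF = d / D: photo distance over the corresponding ground distance (same units)
def rf_from_distances(photo_distance, ground_distance):
    return photo_distance / ground_distance

# e.g. f = 0.5 ft (6 in), H = 5000 ft, h = 1000 ft  ->  scale 1:8000
print(1 / rf_from_camera(0.5, 5000.0, 1000.0))  # 8000.0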

QSN. Two points appear on a map 1.75 inches apart. The actual horizontal ground
distance between the two points is 1108.0 meters. What is the denominator of the
scale fraction (RF) for this map?

ANS. 1.75 inches × (1 meter/39.37 inches) ≈ 0.0444 meters. RF = 0.0444/1108 ≈ 1/24,900.

Qsn. A vertical airphoto is taken from a flying height of 5000 ft relative to the
ocean with a camera having a focal length of 6 inches. The flat surface is 1000 ft
above sealevel. What is the RF for the resulting photo?

Ans. Scale = f/H* = 0.5/(5000 - 1000) = 1/8000


QSN. Points A and B are 2.2 inches apart on a map having a RF =
1/20000. They are 6.83 inches apart on an airphoto. What is the scale of
the airphoto?

ANS. 2.2" x 20000/1 = 44000. 6.83/44000 = 1/6442.

Advantages of Aerial Photographic Systems

False color composites


False color (or false colour) refers to a group of color rendering methods used to
display images in color which were recorded in the visible or non-visible parts of
the electromagnetic spectrum. A false-color image is an image that depicts an object
in colors that differ from those a photograph (a "true-color" image) would show. In
addition, variants of false color such as pseudocolor (see discussion), density
slicing (see discussion), and choropleths (see discussion) are used for information
visualization of either data gathered by a single grayscale channel or data not
depicting parts of the electromagnetic spectrum (e.g. elevation in relief maps or tissue
types in magnetic resonance imaging).

TRUE COLOR

To understand false color, a look at the concept behind true color is helpful. An image
is called a "true-color" image when it offers a natural color rendition, or when it
comes close to it. This means that the colors of an object in an image appear to a
human observer the same way as if this observer were to directly view the object: A
green tree appears green in the image, a red apple red, a blue sky blue, and so
on. When applied to black-and-white images, true-color means that the perceived
lightness of a subject is preserved in its depiction.

False color
In contrast to a true-color image, a false-color image sacrifices natural color rendition
in order to ease the detection of features that are not readily discernible otherwise –
for example the use of near infrared for the detection of vegetation in satellite
images. While a false-color image can be created using solely the visual spectrum
(e.g. to accentuate color differences), typically some or all data used is from
electromagnetic radiation (EM) outside the visual
spectrum (e.g. infrared, ultraviolet or X-ray). The choice of spectral bands is governed
by the physical properties of the object under investigation.

As the human eye uses three "spectral bands", three spectral bands are commonly
combined into a false-color image. At least two spectral bands are needed for a false-
color encoding, and it is possible to combine more bands into the three visual RGB
bands – with the eye's ability to discern three channels being the limiting factor. In
contrast, a "color" image made from one spectral band, or an image made from data
consisting of non-EM data (e.g. elevation, temperature, tissue type), is a pseudocolor
image.

For true color, the RGB channels (red "R", green "G" and blue "B") from the camera
are mapped to the corresponding RGB channels of the image, yielding a
"RGB→RGB" mapping. For false color this relationship is changed. The simplest
false-color encoding is to take an RGB image in the visible spectrum, but map it
differently, e.g. "GBR→RGB". For "traditional false-color" satellite
images of Earth a "NRG→RGB" mapping is used, with "N" being the near-infrared
spectral band (and the blue spectral band being unused) – this yields the typical
"vegetation in red" false-color images.

Remote sensing platforms, flight planning


Using the broadest definition of remote sensing, there are innumerable types of
platforms upon which to deploy an instrument. The sensors typically deployed on these
platforms include film and digital cameras, light-detection and ranging (lidar)
systems, synthetic aperture radar (SAR) systems, and multispectral and hyperspectral
scanners.
Ground-Based Platforms
Many of these instruments can also be mounted on land-based platforms, such as
vans, trucks, tractors, and tanks. In the future, it is likely that a significant percentage
of GIS and mapping data will originate from land-based sources; however, due to time
constraints, we will only cover satellite and aircraft platforms in this course.

Ground-Based Platforms: Portable Masts


AIRBORNE PLATFORMS
BALLOON BASED MISSIONS AND MEASUREMENTS
High flying balloons provide an important tool for probing the atmosphere. Such balloon
launches form an essential part of high altitude atmospheric research.
There are three major advantages of the balloon program, including the following:
A) Balloons can cover an extensive altitude range. They provide a unique way of
covering a broad range of altitudes for in-situ or remote sensing measurements in the
stratosphere. Of particular interest is the 22-40 km region, which is higher than the altitude range
of current aircraft such as the ER-2.
B) The balloon instruments provide the opportunity for additional, correlative data for satellite
based measurements, including both validation ("atmospheric truth") and complementary data
(for example, measurement of species not measured from the space based instrument).
C) Balloon based platforms constitute an important and inexpensive venue for testing
instruments under development. These can be either potential instruments for unmanned aerial
vehicles (UAV) or, in some cases, for satellite based remote sensing instruments.
AIRCRAFT PLATFORMS

In airborne remote sensing, downward or sideward looking sensors are mounted on an
aircraft to obtain images of the earth's surface. An advantage of airborne remote
sensing, compared to satellite remote sensing, is the capability of offering very high
spatial resolution images (20 cm or less). The disadvantages are low coverage area
and high cost per unit area of ground coverage. It is not cost-effective to map a large
area using an airborne remote sensing system. Airborne remote sensing missions are
often carried out as one-time operations, whereas earth observation satellites offer the
possibility of continuous monitoring of the earth.
SPACE BORNE PLATFORMS.
In space borne remote sensing, sensors are mounted on-board a spacecraft (space
shuttle or satellite) orbiting the earth. Space borne platforms include the following:
Rockets, Satellites and space shuttles. Space borne platforms range from 100 to 36000
km above the earth’s surface.
While the number of satellite platforms is quite low compared to the number of
airborne platforms, the optical capabilities of satellite imaging sensors are
approaching those of airborne digital cameras. However, there will always be
important differences, strictly related to characteristics of the platform, in the
effectiveness of satellites and aircraft to acquire remote sensing data.
One obvious advantage satellites have over aircraft is global accessibility; there are
numerous governmental restrictions that deny access to airspace over sensitive areas
or over foreign countries. Satellite orbits are not subject to these restrictions, although
there may well be legal agreements to limit distribution of imagery over particular
areas.
Towers

Satellites are placed at various heights and orbits to achieve desired coverage of the
Earth's surface. When the orbital speed exactly matches that of the Earth's rotation, the
satellite stays above the same point at all times, in a geostationary orbit.
This is useful for communications and weather monitoring satellites. Satellite
platforms for electro-optical imaging systems are usually placed in a sun-
synchronous, low-earth orbit so that images of a given place are
always acquired at the same local time. The revisit time for a particular location is a
function of the individual platform and sensor, but generally it is on the order of
several days to several weeks. While orbits are optimized for time of day, the satellite
track may not always coincide with cloud-free conditions or specific vegetation
conditions of interest to the end-user of the imagery. Therefore, it is not a given that
usable imagery will be collected on every sensor pass over a given site.
Aircraft often have a definite advantage because of their mobilization flexibility. They
can be deployed wherever and whenever weather conditions are favorable. Clouds
often appear and dissipate over a target over a period of several hours during a given
day. Aircraft on site can respond at a moment's notice to take advantage of clear
conditions, while satellites are locked into a schedule dictated by orbital parameters.
Aircraft can also be deployed in small or large numbers, making it possible to collect
imagery seamlessly over an entire county or state in a matter of days or weeks simply
by having lots of planes in the air at the same time.
Flight planning
Geosynchronous and sun synchronous orbits
Geo Synchronous Orbits:
A geostationary orbit (or Geostationary Earth Orbit - GEO) is a geosynchronous
orbit directly above the Earth's equator (0° latitude), with a period equal to the Earth's
rotational period and an orbital eccentricity of approximately zero. An object in a
geostationary orbit appears motionless, at a fixed position in the sky, to ground
observers. Communications satellites and weather satellites are often given
geostationary orbits, so that the satellite antennas that communicate with them do not
have to move to track them, but can be pointed permanently at the position in the sky
where they stay. Due to the constant 0° latitude and circularity of geostationary orbits,
satellites in GEO differ in location by longitude only.
Geostationary orbits are useful because they cause a satellite to appear stationary with
respect to a fixed point on the rotating Earth, allowing a fixed antenna to maintain a
link with the satellite. The satellite orbits in the direction of the Earth's rotation, at an
altitude of 35,786 km (22,236 mi) above ground, producing an orbital period equal to
the Earth's period of rotation, known as the sidereal day.
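The 35,786 km altitude follows from Kepler's third law using the sidereal day as the orbital period. A minimal sketch of that calculation (standard constants, rounded):

import math

MU_EARTH     = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH      = 6_378_137.0      # Earth's equatorial radius, m
SIDEREAL_DAY = 86_164.1         # s

# Kepler's third law: a**3 = mu * T**2 / (4 * pi**2)
a = (MU_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)
print(round((a - R_EARTH) / 1000.0))  # ~35786 km above the surface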
Sun-Synchronous Orbit
A Sun-synchronous orbit (sometimes incorrectly called a heliosynchronous orbit) is a
geocentric orbit which combines altitude and inclination in such a way that an object
on that orbit ascends or descends over any given point of the Earth's surface at the
same local mean solar time. The surface illumination angle will be nearly the same
every time. This consistent lighting is a useful characteristic for satellites that image
the Earth's surface in visible or infrared wavelengths (e.g. weather and spy satellites)
and for other remote sensing satellites (e.g. those carrying ocean and atmospheric
remote sensing instruments that require sunlight). For example, a satellite in sun-
synchronous orbit might ascend across the equator twelve times a day each time at
approximately 15:00 mean local time.
These orbits allow a satellite to pass over a section of the Earth at the same time of
day. Since there are 365 days in a year and 360 degrees in a circle, the satellite has to
shift its orbit by approximately one degree per day. These satellites orbit at an altitude
of between 700 and 800 km.
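The "approximately one degree per day" follows from spreading one full 360° rotation of the orbital plane over one year, so that the plane keeps pace with the Sun:

# Required daily precession of the orbital plane for a Sun-synchronous orbit
degrees_per_day = 360.0 / 365.25   # using the mean length of a year in days
print(round(degrees_per_day, 4))   # ~0.9856 deg/day, i.e. roughly one degree per day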

Sensors
Resolution
In general, the resolution is the minimum distance between two objects that can be
distinguished in the image. Objects closer than the resolution appear as a single object
in the image. However, in remote sensing the term resolution is used to represent the
resolving power, which includes not only the capability to identify the presence of two
objects, but also their properties. In qualitative terms, resolution is the amount of
detail that can be observed in an image. Thus an image that shows finer details is said
to be of finer resolution compared to an image that shows coarser details. Four types
of resolutions are defined for the remote sensing systems.
 Spatial resolution
 Spectral resolution
 Temporal resolution
 Radiometric resolution
Spatial resolution
A digital image consists of an array of pixels. Each pixel contains information about a
small area on the land surface, which is considered as a single object. Spatial
resolution is a measure of the area or size of the smallest dimension on the Earth’s
surface over which an independent measurement can be made by the sensor. It is
expressed by the size of the pixel on the ground in meters. Fig. 1 shows examples
of a coarse resolution image and a fine resolution image.

A measure of size of pixel is given by the Instantaneous Field of View (IFOV). The
IFOV is the angular cone of visibility of the sensor, or the area on the Earth’s surface
that is seen at one particular moment of time. IFOV is dependent on the altitude of the
sensor above the ground level and the viewing angle of the sensor.
The size of the area viewed on the ground can be obtained by multiplying the IFOV
(in radians) by the distance from the ground to the sensor. This area on the ground is
called the ground resolution or ground resolution cell. It is also referred as the spatial
resolution of the remote sensing system.
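A minimal sketch of that calculation (the IFOV and altitude values below are illustrative and do not describe any particular sensor):

# Ground resolution cell size = IFOV (in radians) * distance from the sensor to the ground
def ground_cell_size_m(ifov_mrad, altitude_m):
    return (ifov_mrad * 1e-3) * altitude_m

# e.g. an assumed 0.043 mrad IFOV viewed from 705 km altitude
print(round(ground_cell_size_m(0.043, 705_000)))  # ~30 m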

Spectral resolution
Spectral resolution represents the spectral bandwidth of the filter
and the sensitivity of the detector. The spectral resolution may be defined as the
ability of a sensor to define fine wavelength intervals or the ability of a sensor to
resolve the energy received in a spectral bandwidth to characterize different
constituents of earth surface. The finer the spectral resolution, the narrower the
wavelength range for a particular channel or band.
Many remote sensing systems are multi-spectral, recording energy over separate
wavelength ranges at various spectral resolutions. For example, IRS LISS-III uses 4
bands: 0.52-0.59 µm (green), 0.62-0.68 µm (red), 0.77-0.86 µm (near IR) and 1.55-1.70 µm (mid-IR).
The Aqua/Terra MODIS instruments use 36 spectral bands, including three in the
visible spectrum. Recent development is the hyper-spectral sensors, which detect
hundreds of very narrow spectral bands. Figure 5 shows the hypothetical
representation of remote sensing systems with different spectral resolution. The first
representation shows the DN values obtained over 9 pixels using imagery captured in
a single band. Similarly, the second and third representations depict the DN values
obtained in 3 and 6 bands using the respective sensors. If the area imaged is, say, A
km², the same area is viewed using 1, 3 and 6 bands, respectively.
Generally surface features can be better distinguished from multiple narrow bands,
than from a single wide band. For example, in Fig. 6, using the broad wavelength
band 1, the features A and B cannot be differentiated. However, the spectral
reflectance values of the two features are different in the narrow bands 2 and 3. Thus,
a multi-spectral image involving bands 2 and 3 can be used to differentiate the
features A and B.

Temporal resolution
Temporal resolution is the frequency of flyovers by the satellite or plane, and is only
relevant in time-series studies or those requiring an averaged or mosaic image, as in
deforestation monitoring. This was first used by the intelligence community, where
repeated coverage revealed changes in infrastructure, the deployment of units or the
modification/introduction of equipment. Cloud cover over a given area or object also
makes it necessary to repeat the collection over that location.
Radiometric Resolution
Radiometric Resolution refers to the smallest change in intensity level that can be
detected by the sensing system. The intrinsic radiometric resolution of a sensing
system depends on the signal to noise ratio of the detector. In a digital image, the
radiometric resolution is limited by the number of discrete quantization levels used to
digitize the continuous intensity value. The following images illustrate the effects of
the number of quantization levels on the digital image. The first image is a SPOT
panchromatic image quantized at 8 bits (i.e. 256 levels) per pixel. The subsequent
images show the effects of degrading the radiometric resolution by using fewer
quantization levels.
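A minimal numerical sketch of what degrading the radiometric resolution means: requantizing an 8-bit array onto fewer levels (random data stands in here for the SPOT scene):

import numpy as np

def requantize(img_8bit, n_bits):
    # Collapse an 8-bit image onto 2**n_bits quantization levels
    step = 256 // (2 ** n_bits)
    return (img_8bit // step) * step

img = np.random.randint(0, 256, (100, 100), dtype=np.uint8)  # stand-in for the SPOT image
for bits in (8, 4, 2, 1):
    levels_used = len(np.unique(requantize(img, bits)))
    print(f"{bits} bits -> {2 ** bits} levels ({levels_used} distinct values in this image)")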

Parallax and vertical exaggeration


Parallax is the apparent displacement of a viewed point or small object at a distance
that results from a change in the point of observation. For a person, that change in the
point of observation could simply be from one eye to the other at a fixed location or
from relocating from one viewing spot to another.
Our line of sight from each eye is not quite parallel to the line between our nose and
the selected target, but converges from the eye pair, so that the left eye sees a bit of
the left side of the target not seen by the right eye, while the right eye sees a bit of the
right side, missed by the left eye.
In this way our eyes send a signal to the brain which, on further processing, creates
the impression of depth.
This same effect lies at the heart of stereo viewing of photo pairs, taken either from
two lateral positions on the ground, or, in aerial photography, as successive photo
pairs with about 50% overlap along a single flight line, or with similar sidelap
between pairs from adjacent flight lines, with some of the scene in common.
When we position the stereo pair properly left-right and then view them through a
stereoscope, the eye-brain reaction is an impression of surface curvature or relief, as
though we're looking down from a plane at the ground. A pocket stereoscope consists
of two lenses that we can adjust along a slide bar to be as far apart as our eyes, placed
in a raised mount (on collapsible legs) about six inches above the central region of the
stereo pair.
The sense of relief may be exaggerated relative to reality. The degree of vertical
exaggeration (VE) depends on the base to height ratio (B/H), which depends on the
scale of the photos. The scale, in turn, shows the actual horizontal ground distance (B)
between any two equivalent points, identifiable in the two photos, and the height (H)
of the camera, during the exposure of each photo in the pair. These points will, of
course, not occupy the same position in the two photos because of the forward motion
of the imaging platform.
The vertical exaggeration also depends on the apparent height (h) of the viewer's eyes
and the breadth (b) between the eye centers of the particular viewer. So VE =
(B/H)(h/b).
Qsn.Calculate the vertical exaggeration of this set of conditions:
Distance on ground = 1800 meters; Height of camera above ground =
4000 meters; Apparent height of viewer's eyes = 40 cm; width between
eyes = 6 cm.
Ans. B = 1800 m, H = 4000 m, h = 40 cm, b = 6 cm (the units cancel within each ratio).
(1800/4000) x (40/6) = 3.0 (the vertical exaggeration is threefold)
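The same calculation as a small sketch (note that each ratio is dimensionless, so B and H only need to share units, as do h and b):

def vertical_exaggeration(air_base, flying_height, eye_height, eye_base):
    # VE = (B / H) * (h / b)
    return (air_base / flying_height) * (eye_height / eye_base)

print(vertical_exaggeration(1800, 4000, 40, 6))  # 3.0, i.e. threefold exaggeration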
Relief displacement
Mosaic
Multispectral Scanning
Many electronic (as opposed to photographic) remote sensors acquire data
using scanning systems, which employ a sensor with a narrow field of view
(i.e. IFOV) that sweeps over the terrain to build up and produce a two-
dimensional image of the surface. Scanning systems can be used on both
aircraft and satellite platforms and have essentially the same operating
principles. A scanning system used to collect data over a variety of different
wavelength ranges is called a multispectral scanner (MSS), and is the
most commonly used scanning system. There are two main modes or
methods of scanning employed to acquire multispectral image data -
across-track scanning, and along-track scanning.
Across-track scanners scan the Earth in a series of lines. The lines are
oriented perpendicular to the direction of motion of the sensor platform (i.e.
across the swath). Each line is scanned from one side of the sensor to the
other, using a rotating mirror (A). As the platform moves forward over the
Earth, successive scans build up a two-dimensional image of the Earth's
surface. The incoming reflected or emitted radiation is separated into several
spectral components that are detected independently. The UV, visible, near-
infrared, and thermal radiation are dispersed into their constituent
wavelengths. A bank of internal detectors (B), each sensitive to a specific
range of wavelengths, detects and measures the energy for each spectral
band; the electrical signals are then converted to digital data and recorded for
subsequent computer processing.

The IFOV (C) of the sensor and the altitude of the platform determine
the ground resolution cell viewed (D), and thus the spatial resolution.
The angular field of view (E) is the sweep of the mirror, measured in
degrees, used to record a scan line, and determines the width of the
imaged swath (F). Airborne scanners typically sweep large angles (between
90° and 120°), while satellites, because of their higher altitude, need only to
sweep fairly small angles (10-20°) to cover a broad region. Because the
distance from the sensor to the target increases towards the edges of the
swath, the ground resolution cells also become larger and introduce
geometric distortions to the images. Also, the length of time the IFOV "sees"
a ground resolution cell as the rotating mirror scans (called the dwell time),
is generally quite short and influences the design of the spatial, spectral, and
radiometric resolution of the sensor.
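A minimal sketch relating the quantities just described: the width of the imaged swath from the total angular field of view, and the ground resolution cell at nadir from the IFOV (the altitude and angle values are illustrative):

import math

def swath_width_m(altitude_m, field_of_view_deg):
    # Imaged swath width for a scanner sweeping the given total angular field of view
    return 2.0 * altitude_m * math.tan(math.radians(field_of_view_deg) / 2.0)

def nadir_cell_m(altitude_m, ifov_mrad):
    # Ground resolution cell directly beneath the sensor
    return altitude_m * ifov_mrad * 1e-3

# e.g. an assumed airborne scanner at 3 km altitude with a 90-degree FOV and 2.5 mrad IFOV
print(round(swath_width_m(3000, 90)))     # ~6000 m
print(round(nadir_cell_m(3000, 2.5), 1))  # 7.5 m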
Along-track scanners also use the forward motion of the platform to
record successive scan lines and build up a two-dimensional image,
perpendicular to the flight direction. However, instead of a scanning mirror,
they use a linear array of detectors (A) located at the focal plane of the
image (B) formed by lens systems (C), which are "pushed" along in the flight
track direction (i.e. along track). These systems are also referred to as
pushbroom scanners, as the motion of the detector array is analogous to
the bristles of a broom being pushed along a floor. Each individual detector
measures the energy for a single ground resolution cell (D) and thus the size
and IFOV of the detectors determines the spatial resolution of the system. A
separate linear array is required to measure each spectral band or channel.
For each scan line, the energy detected by each detector of each linear array
is sampled electronically and digitally recorded.
Along-track scanners with linear arrays have several advantages over
across-track mirror scanners. The array of detectors combined with the
pushbroom motion allows each detector to "see" and measure the energy
from each ground resolution cell for a longer period of time (dwell time).
This allows more energy to be detected and improves the radiometric
resolution.

The increased dwell time also facilitates smaller IFOVs and narrower
bandwidths for each detector. Thus, finer spatial and spectral resolution can
be achieved without impacting radiometric resolution. Because detectors are
usually solid-state microelectronic devices, they are generally smaller,
lighter, require less power, and are more reliable and last longer because
they have no moving parts. On the other hand, cross-calibrating thousands
of detectors to achieve uniform sensitivity across the array is necessary and
complicated.
Thermal scanners
Microwave remote sensing
UNIQUE CAPABILITIES
Sensors operating in the microwave region of the electromagnetic spectrum are
used for remote sensing of regions covered by clouds and for 24 h data collection.
Microwave sensors have advantages and unique capabilities over the optical
sensors. These are:
All weather penetration capability through clouds
In cloudy or otherwise poor weather conditions it is impossible to get results
using optical remote sensing techniques, and only sensors operating at
microwave frequencies produce good results.
Day and night capability (independent of intensity and angle of sun illumination)
At microwave frequencies, the sensors receive target signals that depend
wholly on the target's dielectric properties and physical properties such as
surface roughness and texture. Thus, the signal received by the sensor does
not depend upon the sun's illumination, its angle or its intensity, and at
microwave frequencies one gets information about the target object even
during the night. This is not possible using optical sensors because they need
illumination to image the target; similarly, infrared sensors also require solar
illumination to some extent.
Penetration through vegetation and soil to a certain extent
Sensitivity to moisture (in liquid or vapour forms).
Introduction to Geographic Information Systems (GIS)
Introduction
Geographical Information Systems (GIS) are computer-based systems that enable
users to collect, store, process, analyze and present spatial data.
A GIS provides an electronic representation of information, called spatial data, about
the Earth's natural and man-made features. A GIS references these real-world
spatial data elements to a coordinate system. These features can be separated into
different layers. A GIS system stores each category of information in a separate
"layer" for ease of maintenance, analysis, and visualization. For example, layers
can represent terrain characteristics, census data, demographics information,
environmental and ecological data, roads, land use, river drainage and flood plains,
and rare wildlife habitats. Different applications create and use different layers. A
GIS can also store attribute data, which is descriptive information of the map
features. This attribute information is placed in a database separate from the
graphics data but is linked to it. A GIS allows the examination of both spatial
and attribute data at the same time.
Defining GIS
A “geographic information system” (GIS) is a computer-based tool that allows you
to create, manipulate, analyze, store and display information based on its location.
GIS makes it possible to integrate different kinds of geographic information, such
as digital maps, aerial photographs, satellite images and global positioning system
data (GPS), along with associated tabular database information (e.g., ‘attributes' or
characteristics about geographic features).
In short, a GIS can be defined as a computer system capable of assembling, storing,
manipulating, and displaying geographically referenced information.
GIS SUBSYSTEMS
1. data input subsystem;
2. data storage and retrieval subsystem;
3. data manipulation and analysis subsystem; and
4. data output and display subsystem.
COMPONENTS OF A GIS

Hardware
Hardware is the computer system on which a GIS operates. Today, GIS software
runs on a wide range of hardware types, from centralized computer servers to
desktop computers used in stand-alone or networked configurations.
Software
GIS software provides the functions and tools needed to store, analyze, and display
geographic information. A review of the key GIS software subsystems is provided
above.
Data
Perhaps the most important component of a GIS is the data. Geographic data and
related tabular data can be collected in-house, compiled to custom specifications
and requirements, or occasionally purchased from a commercial data provider. A
GIS can integrate spatial data with other existing data resources, often stored in a
corporate DBMS. The integration of spatial data (often proprietary to the GIS
software), and tabular data stored in a DBMS is a key functionality afforded by
GIS.
People
GIS technology is of limited value without the people who manage the system and
develop plans for applying it to real world problems. GIS users range from
technical specialists who design and maintain the system to those who use it to
help them perform their everyday work. The identification of GIS specialists
versus end users is often critical to the proper implementation of GIS technology.
Methods
A successful GIS operates according to a well-designed implementation plan and
business rules, which are the models and operating practices unique to each
organization.
SPATIAL DATA MODELS
Traditionally spatial data has been stored and presented in the form of a map.
Three basic types of spatial data models have evolved for storing geographic data
digitally. These are referred to as:
Vector; Raster; Image

Data Retrieval and Querying


The ability to retrieve data is based on the unique structure of the DBMS and
command interfaces are commonly provided with the software. Most GIS software
also provides a programming subroutine library, or macro language, so the user
can write their own specific data retrieval routines if required. Querying is the
capability to retrieve data, usually a data subset, based on some user-defined
formula. These data subsets are often referred to as logical views. Often the
querying is closely linked to the data manipulation and analysis subsystem.
Querying can be either by example or by content.
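A minimal sketch of the idea of a logical view: selecting a subset of feature records with a user-defined formula (the layer name, fields and values are invented for illustration):

# Invented attribute table for a "parcels" layer
parcels = [
    {"id": 1, "landuse": "residential", "area_ha": 0.4},
    {"id": 2, "landuse": "forest",      "area_ha": 12.0},
    {"id": 3, "landuse": "forest",      "area_ha": 3.2},
]

# A logical view: forest parcels larger than 5 ha
view = [p for p in parcels if p["landuse"] == "forest" and p["area_ha"] > 5]
print(view)  # [{'id': 2, 'landuse': 'forest', 'area_ha': 12.0}]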
Data Manipulation and Analysis Subsystem
The Data Manipulation and Analysis subsystem allows the user to define and
execute spatial and attribute procedures to generate derived information. This
subsystem is commonly thought of as the heart of a GIS, and usually distinguishes
it from other database information systems and computer-aided drafting (CAD)
systems.
Manipulation and Transformations of Spatial Data
The maintenance and transformation of spatial data concerns the ability to input,
manipulate, and transform data once it has been created. Some specific functions
are:
• Coordinate thinning: reducing the number of coordinate pairs (X and Y) that define arcs.
• Geometric Transformations
• Map Projection Transformations
• Edge Matching
• Interactive Graphic Editing
Analytical Functions in a GIS
The primitive analytical functions that must be provided by any GIS are:
• Retrieval, Reclassification, and Generalization
• Topological Overlay Techniques
• Neighbourhood Operations
• Connectivity Functions
