
REMOTE SENSING TECHNOLOGY

A Seminar report submitted in partial fulfillment of the requirements

For the award of the degree of

BACHELOR OF TECHNOLOGY

In

ELECTRONICS AND COMMUNICATION ENGINEERING

Under the esteemed guidance of


R. MOHANA RANGA RAO

Asst. Prof., ECE

By

T. RAJ KUMAR 09QQ1A0430

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

KITE COLLEGE OF PROFESSIONAL ENGINEERING AND SCIENCES

(Affiliated to JNTU, Hyderabad) SHABAD, R.R. DISTRICT - 509217 (2009-2013)


ACKNOWLEDGEMENT

I take this opportunity to remember and acknowledge the cooperation, goodwill and support, both
moral and technical, extended by several individuals, out of which this seminar has evolved. I
shall always cherish my association with them.

I am greatly thankful to Dr. K. BRAHMANANDAM, PRINCIPAL of our college, for extending
his help. I shall forever cherish my association with him for his encouragement, perennial
approachability, and absolute freedom of thought and action.

I am greatly thankful to M. RAMANJANEYULU, HEAD OF THE DEPARTMENT, Department
of ELECTRONICS AND COMMUNICATION ENGINEERING, for his enthusiastic assistance.

I have immense pleasure in expressing my thanks and deep sense of gratitude to my guide Mr.
R. Mohana Ranga Rao, Asst. Prof., E.C.E. Dept., for his guidance and assistance offered in an amiable and
pleasant manner throughout my seminar.

A lot of thanks to other faculty members of the department who gave their valuable suggestions
at different stages of my seminar.

I am very much thankful to my parents, who always helped me with utmost friendliness and
warmth. They kept my spirits flying high and persistently encouraged me to undertake and complete this
seminar.

By

T. RAJ KUMAR 09QQ1A0430


CERTIFICATE
Date:

This is to certify that the project report entitled "REMOTE SENSING TECHNOLOGY" being
submitted by T. RAJ KUMAR, bearing Roll No. 09QQ1A0430, during 2009-2013 in partial fulfillment
of the requirements for the award of the degree of BACHELOR OF TECHNOLOGY in
ELECTRONICS AND COMMUNICATION ENGINEERING is a bona fide work carried out by him.

The results enclosed in this report have been verified and found correct. The results embodied
in this project report have not been submitted to any other University or Institute for the award of any
degree or diploma.

Internal Guide:
R. MOHANA RANGA RAO
Asst. Prof., E.C.E.

Head of the Department:
M. RAMANJANEYULU, B.E., M.E.
HOD & Assoc. Professor
Department of ECE
ABSTRACT

For the purposes of this report, we will use the following general definition: remote sensing "is
the technology of measuring the characteristics of an object or surface from a distance".
In the case of earth resource monitoring, the object or surface is on the land mass of the earth or in the sea,
and the observing sensor is in the air or in space.

In order for an observing sensor to acquire knowledge about a remote object, there must be a flow
of information between the object and the observer, and there has to be a carrier of that information.
In our case, the carrier is electromagnetic radiation (EMR).

CONTENT LIST

Introduction

Spatial Technologies
Geographic Information Systems
The Global Positioning System
Remote Sensing

Remote Sensing

• 1 Overview
• 2 Data acquisition techniques
• 2.1 Applications of remote sensing data
• 2.2 Geodetic
• 2.3 Acoustic and near-acoustic
• 3 Data processing
• 3.1 Data processing levels
• 4 History
• 5 Training and Education
• 6 Remote Sensing software
• How does remote sensing work

Fundamental Considerations

Types of Sensors

Passive Sensors
• 1 ASTER Bands
• 2 ASTER Global Digital Elevation Model
• 2.1 Version 1
• 2.2 Version 2
Advanced Very High Resolution Radiometer
LiDAR

Applications

Conclusion
INTRODUCTION
The rapid development of spatial technologies in recent years has made available new tools
and capabilities to Extension services and clientele for management of spatial data. In particular,
the evolution of geographic information systems (GIS), the global positioning system (GPS), and
remote sensing (RS) technologies has enabled the collection and analysis of field data in ways
that were not possible before the advent of the computer.

How can potential users with little or no experience with GIS-GPS-RS technologies determine if
they would be useful for their applications? How do potential users learn about these
technologies? Once a need is established, what potential pitfalls or problems should the user
know to avoid? This article describes some uses of GIS-GPS-RS in agricultural and resource
management applications, provides a roadmap for becoming familiar with the technologies, and
makes recommendations for implementation.

SPATIAL TECHNOLOGIES
Geographic Information Systems:

GIS applications enable the storage, management, and analysis of large quantities of spatially
distributed data. These data are associated with their respective geographic features. For
example, water quality data would be associated with a sampling site, represented by a point.
Data on crop yields might be associated with fields or experimental plots, represented on a map
by polygons.

A GIS can manage different data types occupying the same geographic space. For example, a
biological control agent and its prey may be distributed in different abundances across a variety
of plant types in an experimental plot. Although predator, prey, and plants occupy the same
geographic region, they can be mapped as distinct and separate features.

The ability to depict different, spatially coincident features is not unique to a GIS, as various
computer aided drafting (CAD) applications can achieve the same result. The power of a GIS lies
in its ability to analyze relationships between features and their associated data (Samson, 1995).
This analytical ability results in the generation of new information, as patterns and spatial
relationships are revealed.
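
As an illustration, the following is a minimal Python sketch of one of the simplest such spatial
relationship tests: checking whether a sampling point falls inside a field polygon, using the
standard ray-casting algorithm. The coordinates and the field are hypothetical.

def point_in_polygon(px, py, polygon):
    """polygon: list of (x, y) vertices; returns True if (px, py) lies inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray cast from the point
        if (y1 > py) != (y2 > py):
            if px < (x2 - x1) * (py - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

field = [(0, 0), (100, 0), (100, 60), (0, 60)]  # a rectangular plot
print(point_in_polygon(40, 30, field))  # True: the sample lies in the field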
The Global Positioning System:

GPS technology has provided an indispensable tool for management of agricultural and natural
resources. GPS is a satellite- and ground-based radio navigation and locational system that
enables the user to determine very accurate locations on the surface of the Earth. Although GPS
is a complex and sophisticated technology, user interfaces have evolved to become very
accessible to the non-technical user. Simple and inexpensive GPS units are available with
accuracies of 10 to 20 meters, and more sophisticated precision agriculture systems can obtain
centimeter level accuracies.

Remote Sensing:

Remote sensing technologies are used to gather information about the surface of the earth from a
distant platform, usually a satellite or airborne sensor. Most remotely sensed data used for
mapping and spatial analysis is collected as reflected electromagnetic radiation, which is
processed into a digital image that can be overlaid with other spatial data.

Reflected radiation in the infrared part of the electromagnetic spectrum, which is invisible to the
human eye, is of particular importance for vegetation studies. For example, chlorophyll strongly
absorbs blue (0.48 µm) and red (0.68 µm) wavelength radiation and reflects near-infrared
radiation (0.75-1.35 µm). Leaf vacuole water absorbs radiation in the infrared region from 1.35
to 2.5 µm (Samson, 2000). The spectral properties of vegetation in different parts of the spectrum
can be interpreted to reveal information about the health and status of crops, rangelands, forests
and other types of vegetation.
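
This red/near-infrared contrast is the basis of widely used vegetation indices such as the
Normalized Difference Vegetation Index (NDVI). A minimal Python sketch, using hypothetical
reflectance values:

def ndvi(nir: float, red: float) -> float:
    """NDVI = (NIR - red) / (NIR + red); high values indicate healthy vegetation."""
    return (nir - red) / (nir + red)

print(ndvi(nir=0.45, red=0.08))  # dense, healthy canopy -> about 0.70
print(ndvi(nir=0.20, red=0.15))  # sparse or stressed cover -> about 0.14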

Definition:
Remote sensing can be defined as any process whereby information is gathered about an object,
area or phenomenon without being in contact with it. Our eyes are an excellent example of a
remote sensing device. We are able to gather information about our surroundings by gauging the
amount and nature of the reflectance of visible light energy from
some external source (such as the sun or a light bulb) as it reflects off objects in our field of
view. Contrast this with a thermometer, which must be in contact with the phenomenon it
measures, and thus is not a remote sensing device.
Remote sensing:

Fig: Synthetic aperture radar image of Death Valley colored using polarimetry.

Remote sensing is the acquisition of information about an object or phenomenon without
making physical contact with the object. In modern usage, the term generally refers to the use of
aerial sensor technologies to detect and classify objects on Earth (both on the surface, and in the
atmosphere and oceans) by means of propagated signals (e.g. electromagnetic radiation emitted
from aircraft or satellites).[1][2]


There are two main types of remote sensing: passive remote sensing and active remote sensing.
Passive sensors detect natural radiation that is emitted or reflected by the object or surrounding
areas. Reflected sunlight is the most common source of radiation measured by passive sensors.
Examples of passive remote sensors include film photography, infrared, charge-coupled devices,
and radiometers. Active collection, on the other hand, emits energy in order to scan objects and
areas whereupon a sensor then detects and measures the radiation that is reflected or
backscattered from the target. RADAR and LiDAR are examples of active remote sensing where
the time delay between emission and return is measured, establishing the location, speed and
direction of an object.

Remote sensing makes it possible to collect data on dangerous or inaccessible areas. Remote
sensing applications include monitoring deforestation in areas such as the Amazon Basin, glacial
features in Arctic and Antarctic regions, and depth sounding of coastal and ocean depths.
Military collection during the Cold War made use of stand-off collection of data about dangerous
border areas. Remote sensing also replaces costly and slow data collection on the ground,
ensuring in the process that areas or objects are not disturbed.

Orbital platforms collect and transmit data from different parts of the electromagnetic spectrum,
which in conjunction with larger scale aerial or ground-based sensing and analysis, provides
researchers with enough information to monitor trends such as El Niño and other natural long
and short term phenomena. Other uses include different areas of the earth sciences such as
natural resource management, agricultural fields such as land usage and conservation, and
national security and overhead, ground-based and stand-off collection on border areas.[4]

Images from satellites, aircraft, spacecraft, buoys, ships, and helicopters are used to analyze and
compare things like vegetation rates, erosion, pollution, forestry, weather, and land use. These
things can be mapped, imaged, tracked and observed. The process of remote sensing is also
helpful for city planning, archaeological investigations, military observation and
geomorphological surveying.

Data acquisition techniques

The basis for multispectral collection and analysis is that examined areas or objects reflect or
emit radiation that stands out from surrounding areas.
Applications of remote sensing data

• Conventional radar is mostly associated with aerial traffic control, early warning, and
certain large-scale meteorological data. Doppler radar is used by local law enforcement's
monitoring of speed limits and in enhanced meteorological collection such as wind speed
and direction within weather systems. Other types of active collection include plasmas
in the ionosphere. Interferometric synthetic aperture radar is used to produce precise
digital elevation models of large-scale terrain (see RADARSAT, TerraSAR-X,
Magellan).

• Laser and radar altimeters on satellites have provided a wide range of data. By measuring
the bulges of water caused by gravity, they map features on the seafloor to a resolution of
a mile or so. By measuring the height and wavelength of ocean waves, the altimeters
measure wind speeds and direction, and surface ocean currents and directions.

• Light detection and ranging (LIDAR) is well known in examples of weapon ranging and
laser-illuminated homing of projectiles. LIDAR is used to detect and measure the
concentration of various chemicals in the atmosphere, while airborne LIDAR can be used
to measure the heights of objects and features on the ground more accurately than with
radar technology. Vegetation remote sensing is a principal application of LIDAR.

• Radiometers and photometers are the most common instruments in use, collecting
reflected and emitted radiation in a wide range of frequencies. The most common are
visible and infrared sensors, followed by microwave, gamma ray and, rarely, ultraviolet.
They may also be used to detect the emission spectra of various chemicals, providing
data on chemical concentrations in the atmosphere.

• Stereographic pairs of aerial photographs have often been used to make topographic maps
by imagery and terrain analysts in trafficability and highway departments for potential
routes.

• Simultaneous multi-spectral platforms such as Landsat have been in use since the 1970s.
These thematic mappers take images in multiple wavelengths of electromagnetic
radiation (multi-spectral) and are usually found on Earth observation satellites, including
(for example) the Landsat program or the IKONOS satellite.
Data processing levels

Level 0: Reconstructed, unprocessed instrument and payload data at full resolution, with any and
all communications artifacts (e.g., synchronization frames, communications headers, duplicate
data) removed.

Level 1a: Reconstructed, unprocessed instrument data at full resolution, time-referenced, and
annotated with ancillary information, including radiometric and geometric calibration
coefficients and georeferencing parameters (e.g., platform ephemeris) computed and appended
but not applied to the Level 0 data (or, if applied, applied in a manner such that Level 0 is fully
recoverable from Level 1a data).

Level 1b: Level 1a data that have been processed to sensor units (e.g., radar backscatter cross
section, brightness temperature, etc.); not all instruments have Level 1b data; Level 0 data is not
recoverable from Level 1b data.

Level 2: Derived geophysical variables (e.g., ocean wave height, soil moisture, ice
concentration) at the same resolution and location as the Level 1 source data.

Level 3: Variables mapped on uniform space-time grid scales, usually with some completeness
and consistency (e.g., missing points interpolated, complete regions mosaicked together from
multiple orbits, etc.).

Level 4: Model output or results from analyses of lower-level data (i.e., variables that were not
measured by the instruments but instead are derived from these measurements).

A Level 1 data record is the most fundamental (i. e., highest reversible level) data record that has
significant scientific utility, and is the foundation upon which all subsequent data sets are
produced. Level 2 is the first level that is directly usable for most scientific applications; its value
is much greater than the lower levels. Level 2 data sets tend to be less voluminous than Level 1
data because they have been reduced temporally, spatially, or spectrally. Level 3 data sets are
generally smaller than lower level data sets and thus can be dealt with without incurring a great
deal of data handling overhead. These data tend to be generally more useful for many
applications. The regular spatial and temporal organization of Level 3 datasets makes it feasible
to readily combine data from different sources.
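
As an illustration of the Level 2 to Level 3 step, here is a toy Python sketch that maps
irregularly located retrievals onto a uniform latitude/longitude grid by cell averaging. The grid
size and sample values are hypothetical, and operational gridding is far more sophisticated.

import numpy as np

def grid_level3(lats, lons, values, cell_deg=1.0):
    """Average point retrievals into a uniform global grid (NaN where no data)."""
    ny, nx = int(180 / cell_deg), int(360 / cell_deg)
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    for lat, lon, v in zip(lats, lons, values):
        i = min(int((lat + 90) / cell_deg), ny - 1)   # row index from latitude
        j = min(int((lon + 180) / cell_deg), nx - 1)  # column index from longitude
        total[i, j] += v
        count[i, j] += 1
    with np.errstate(invalid="ignore"):
        return np.where(count > 0, total / count, np.nan)

grid = grid_level3([10.2, 10.7, -33.0], [120.1, 120.4, 18.9], [1.5, 2.1, 0.9])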
History

Fig: The TR-1 reconnaissance/surveillance aircraft.

Fig: The 2001 Mars Odyssey used spectrometers and imagers to hunt for evidence of past or
present water and volcanic activity on Mars.

The modern discipline of remote sensing arose with the development of flight. The balloonist G.
Tournachon (alias Nadar) made photographs of Paris from his balloon in 1858. Messenger
pigeons, kites, rockets and unmanned balloons were also used for early images. With the
exception of balloons, these first, individual images were not particularly useful for map making
or for scientific purposes.

Systematic aerial photography was developed for military surveillance and reconnaissance
purposes beginning in World War I and reaching a climax during the Cold War with the use of
modified combat aircraft such as the P-51, P-38, RB-66 and the F-4C, or specifically designed
collection platforms such as the U2/TR-1, SR-71, A-5 and the OV-1 series both in overhead and
stand-off collection. A more recent development is that of increasingly smaller sensor pods such
as those used by law enforcement and the military, in both manned and unmanned platforms.
The advantage of this approach is that this requires minimal modification to a given airframe.
Later imaging technologies would include infrared, conventional radar, Doppler radar and
synthetic aperture radar.
The development of artificial satellites in the latter half of the 20th century allowed remote
sensing to progress to a global scale as of the end of the Cold War. Instrumentation aboard
various Earth observing and weather satellites such as Landsat, the Nimbus and more recent
missions such as RADARSAT and UARS provided global measurements of various data for
civil, research, and military purposes. Space probes to other planets have also provided the
opportunity to conduct remote sensing studies in extraterrestrial environments; synthetic aperture
radar aboard the Magellan spacecraft provided detailed topographic maps of Venus, while
instruments aboard SOHO allowed studies to be performed on the Sun and the solar wind, just to
name a few examples.

More recent developments began in the 1960s and 1970s with the development of image
processing of satellite imagery. Several research groups in Silicon Valley, including NASA Ames
Research Center, GTE and ESL Inc., developed Fourier transform techniques leading to the first
notable enhancement of imagery data.

Training and Education

Remote Sensing has a growing relevance in the modern information society. It represents a key
technology as part of the aerospace industry and bears increasing economic relevance: new
sensors, e.g. TerraSAR-X and RapidEye, are developed constantly, and the demand for skilled
labour is increasing steadily. Furthermore, remote sensing increasingly influences everyday life,
ranging from weather forecasts to reports on climate change or natural disasters. As an example,
80% of German students use the services of Google Earth; in 2006 alone the software was
downloaded 100 million times. But studies have shown that only a fraction of them know more
about the data they are working with.[9] There exists a huge knowledge gap between the
application and the understanding of satellite images. Remote sensing only plays a tangential role
in schools, regardless of the political claims to strengthen the support for teaching on the subject.
A lot of the computer software explicitly developed for school lessons has not yet been
implemented due to its complexity. As a result, the subject is either not integrated into the
curriculum at all or does not progress beyond the interpretation of analogue images. In fact, the subject
of remote sensing requires a consolidation of physics and mathematics as well as competences in
the fields of media and methods apart from the mere visual interpretation of satellite images.
Many teachers have great interest in the subject “remote sensing”, being motivated to integrate
this topic into teaching, provided that the curriculum is considered. In many cases, this
encouragement fails because of confusing information.[11] In order to integrate remote sensing in
a sustainable manner, organizations like the EGU or Digital Earth encourage the development of
learning modules and learning portals (e.g. FIS - Remote Sensing in School Lessons, or Landmap
- Spatial Discovery) promoting media and method qualifications as well as independent working.

Remote Sensing software

Remote Sensing data is processed and analyzed with computer software, known as a remote
sensing application. A large number of proprietary and open source applications exist to process
remote sensing data. Remote Sensing Software packages include:

• TNTmips from MicroImages,

• PCI Geomatica made by PCI Geomatics, the leading remote sensing software package in
Canada,

• IDRISI from Clark Labs,

• Image Analyst from Intergraph,

• and RemoteView made by Overwatch Textron Systems.

• Dragon/ips is one of the oldest remote sensing packages still available, and is in some
cases free.

Open source remote sensing software includes:

• OSSIM,

• Opticks (software),

• Orfeo toolbox

• Others mixing remote sensing and GIS capabilities are: GRASS GIS, ILWIS, QGIS, and
TerraLook.

How Remote Sensing Works


The NMA has long been a proponent of remote sensing for air quality standards, preferring it
over the "test-everyone" approach embodied in the BAR90, ASM or I/M-240 tests where all
motorists must drive their vehicle to the testing station.

Remote sensing offers all of the advantages and none of the drawbacks of centralized, drive-in
tests.

The fact is that one out of every ten vehicles is responsible for the same amount of pollution as
the other nine combined. This pattern of "gross emitters" is consistent regardless of what form of
testing program is in place.

How do we identify gross emitters now? Most localities where emissions testing is required use
some form of I/M (inspection and maintenance) drive-in test on a regular basis. However, there
is a lot of room for improvement, and remote sensing provides the answer.

How does a remote sensing system work?

Remote sensing of vehicle emissions was developed by Dr. Donald Stedman at the University of
Denver, Colorado. The system uses one basic principle: different gases absorb infrared light at
different rates. An infrared (IR) light transmitter is placed on one side of the road with its beam
aimed into an IR receiver on the other side. When a vehicle drives through the beam, a computer
compares the spectrum of the light after it has passed through the exhaust plume to that of the
normal IR light. It then calculates the percentages of hydrocarbons (HC), oxides of nitrogen
(NOx), carbon dioxide (CO2) and carbon monoxide (CO). If, and only if, your vehicle is over the
maximum limits, a camera records your license plate number and the state is notified.
(Enforcement actions or penalties depend on state laws.)
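
The measurement rests on the Beer-Lambert absorption law. The following simplified Python
sketch illustrates that principle only; the absorptivity and path length below are hypothetical, not
calibrated instrument values, and real systems report pollutant ratios rather than a single
absolute concentration.

import math

def gas_concentration(i_received: float, i_emitted: float,
                      absorptivity: float, path_length_m: float) -> float:
    """Invert I = I0 * exp(-a * c * L) for the gas concentration c."""
    return -math.log(i_received / i_emitted) / (absorptivity * path_length_m)

# Example: 80% of the IR beam survives a 10 m path through the exhaust plume
print(gas_concentration(0.8, 1.0, absorptivity=0.05, path_length_m=10.0))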

Remote sensing devices are being sold or rented commercially now. The average price of a
stationary setup is over $90,000, which includes training. If a mobile remote sensing station is
desired, the cost is over $140,000. Expensive? You bet. But it's far cheaper than constructing a
slew of drive-in "test-everyone" stations.

Fig: Remote sensing system diagram.


COMPARISON OF REMOTE SENSING VS. I/M

Criterion     Remote Sensing                  I/M
Cost          Less than 50 cents per car      Greater than $10 per car
Time          3,600 cars per hour             10 cars per hour
Accuracy      Better than 10%                 Better than 10%
Convenience   Unobtrusive, on-road test       Everyone must make a special trip
Tampering     No way to cheat                 Advance notice helps cheating

Energy Source
Sensors can be divided into two broad groups—passive and active. Passive sensors measure
ambient levels of existing sources of energy, while active ones provide their own source of
energy. The majority of remote sensing is done with passive
sensors, for which the sun is the major energy source. The earliest example of this is
photography. With airborne cameras we have long been able to measure and record the reflection
of light off earth features. While aerial photography is still a major form of remote sensing,
newer solid state technologies have extended capabilities for viewing in the visible
and near-infrared wavelengths to include longer wavelength solar radiation as well. However,
not all passive sensors use energy from the sun. Thermal infrared and passive microwave sensors
both measure natural earth energy emissions.
The texture of earth surface materials causes significant interactions with several of the
microwave wavelength regions. Radar can thus be used as a supplement to information gained in
other wavelengths, and also offers the significant advantage of being usable at night (because, as
an active system, it is independent of solar radiation) and in regions of persistent cloud cover
(since radar wavelengths are not significantly affected by clouds).
Fig: The electromagnetic spectrum, from cosmic rays, gamma rays and X-rays through visible
light and mid-infrared to radio wavelengths.

The choice of a sensor system depends in part on the spectral bands that the satellite sensor
detects. In addition, issues of cost and imagery availability must also be considered.
LANDSAT
The LANDSAT system of remote sensing satellites is currently operated by the EROS Data
Center of the United States Geological Survey. This is a new arrangement following a period of
commercial distribution under the Earth Observation Satellite Company (EOSAT) which was
recently acquired by Space Imaging Corporation. As a result, the cost of imagery has
dramatically dropped, to the benefit of all. Full or quarter scenes are available on a variety of
distribution media, as well as photographic products of MSS and TM scenes in false color and
black and white. There have been seven LANDSAT satellites, the first of which was launched in
1972. The LANDSAT 6 satellite was lost on launch. However, as of this writing, LANDSAT 5 is
still operational. LANDSAT 7 was launched in April, 1999.
LANDSAT carries two multispectral sensors. The first is the Multi-Spectral Scanner (MSS),
which acquires imagery in four spectral bands: blue, green, red and near-infrared. The second is
the Thematic Mapper (TM), which collects seven bands: blue, green, red, near-infrared, two
mid-infrared and one thermal infrared. The MSS has a spatial resolution of 80 meters, while that
of the TM is 30 meters. Both sensors image a 185 km wide swath, passing over each day at
09:45 local time, and returning every 16 days. With LANDSAT 7, support for TM imagery is
continued with the addition of a co-registered 15 m panchromatic band.

Fig: ASTER image draped over a terrain model of Mount Etna.

ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer) is a
Japanese sensor which is one of five remote sensing devices on board the Terra satellite
launched into Earth orbit by NASA in 1999. The instrument has been collecting surface data
since February 2000.

ASTER provides high-resolution images of the planet Earth in 15 different bands of the
electromagnetic spectrum, ranging from visible to thermal infrared light. The resolution of
images ranges between 15 and 90 meters. ASTER data are used to create detailed maps of surface
temperature of land, emissivity, reflectance, and elevation.

ASTER Bands

Fig: ASTER image of Rub' al Khali (Arabia's Empty Quarter).


Band  Label        Wavelength (µm)  Resolution (m)  Nadir or Backward  Description
B1    VNIR_Band1   0.520–0.600      15              Nadir              Visible green/yellow
B2    VNIR_Band2   0.630–0.690      15              Nadir              Visible red
B3    VNIR_Band3N  0.760–0.860      15              Nadir              Near infrared
B4    VNIR_Band3B  0.760–0.860      15              Backward           Near infrared
B5    SWIR_Band4   1.600–1.700      30              Nadir              Short-wave infrared
B6    SWIR_Band5   2.145–2.185      30              Nadir              Short-wave infrared
B7    SWIR_Band6   2.185–2.225      30              Nadir              Short-wave infrared
B8    SWIR_Band7   2.235–2.285      30              Nadir              Short-wave infrared
B9    SWIR_Band8   2.295–2.365      30              Nadir              Short-wave infrared
B10   SWIR_Band9   2.360–2.430      30              Nadir              Short-wave infrared
B11   TIR_Band10   8.125–8.475      90              Nadir              Long-wave (thermal) infrared
B12   TIR_Band11   8.475–8.825      90              Nadir              Long-wave (thermal) infrared
B13   TIR_Band12   8.925–9.275      90              Nadir              Long-wave (thermal) infrared
B14   TIR_Band13   10.250–10.950    90              Nadir              Long-wave (thermal) infrared
B15   TIR_Band14   10.950–11.650    90              Nadir              Long-wave (thermal) infrared

ASTER Global Digital Elevation Model

Version 1

On 29 June 2009, the Global Digital Elevation Model (GDEM) was released to the public. A
joint operation between NASA and Japan's Ministry of Economy, Trade and Industry (METI),
the Global Digital Elevation Model is the most complete mapping of the earth ever made,
covering 99% of its surface. The previous most comprehensive map, NASA's Shuttle Radar
Topography Mission, covered approximately 80% of the Earth's surface, with a global resolution
of 90 meters, and a resolution of 30 meters over the USA. The GDEM covers the planet from 83
degrees North to 83 degrees South (surpassing SRTM's coverage of 56 °S to 60 °N), becoming
the first earth mapping system that provides comprehensive coverage of the polar regions. It was
created by compiling 1.3 million VNIR images taken by ASTER using single-pass stereoscopic
correlation techniques,[1] with terrain elevation measurements taken globally at 30 meter (98 ft)
intervals.

Despite the high nominal resolution, however, some reviewers have commented that the true
resolution is considerably lower, and not as good as that of SRTM data, and serious artifacts are
present.

Some of these limitations have been confirmed by METI and NASA, who point out that the
current version of the GDEM product is "research grade".[10]

Advanced Very High Resolution Radiometer

From Wikipedia, the free encyclopedia


Jump to: navigation, search

Fig: An image of global sea surface temperatures acquired from the NOAA/AVHRR satellite.

The Advanced Very High Resolution Radiometer (AVHRR) is a space-borne sensor carried on
the National Oceanic and Atmospheric Administration (NOAA) family of polar-orbiting
platforms (POES). AVHRR instruments measure the reflectance of the Earth in 5 relatively wide
(by today's standards) spectral bands. The first two are centered around the red (0.6 micrometres,
500 THz) and near-infrared (0.9 micrometres, 300 THz) regions, the third one is located around
3.5 micrometres, and the last two sample the thermal radiation emitted by the planet, around 11
and 12 micrometres respectively. The NOAA satellites have equator crossing times of 0730 and
1930 local solar time.[1]

The first AVHRR instrument was a 4-channel radiometer, while the latest version (known as
AVHRR/3, first carried on the NOAA-15 platform launched in May 1998) acquires data in a 6th
channel located at 1.6 micrometres.

Operational procedures
NOAA has at least two polar-orbiting meteorological satellites in orbit at all times, with one
satellite crossing the equator in the early morning and early evening and the other crossing the
equator in the afternoon and late evening. The primary sensor on board both satellites is the
AVHRR instrument. Morning-satellite data are most commonly used for land studies, while data
from both satellites are used for atmosphere and ocean studies. Together they provide twice-daily
global coverage, and ensure that data for any region of the earth are no more than six hours old.
The swath width, the width of the area on the Earth's surface that the satellite can "see", is
approximately 2,500 kilometers (about 1,500 miles). The satellites orbit at either 833 or 870
kilometers (+/- 19 kilometers; between 516 and 541 miles) above the surface of the Earth.[2]

The highest ground resolution that can be obtained from the current AVHRR instruments is 1.1
kilometers (0.68 mi), which means that the satellite records discrete information for areas on the
ground that are 1.1 by 1.1 kilometers. This smallest recorded unit is called a pixel. AVHRR data
have been collected continuously since 1981.[2]

Applications

The primary purpose of these instruments is to monitor clouds and to measure the thermal
emission (cooling) of the Earth. These sensors have proven useful for a number of other
applications, however, including the surveillance of land surfaces, ocean state, aerosols, etc.
AVHRR data are particularly relevant to the study of climate change and environmental degradation
because of the comparatively long records of data already accumulated (over 20 years). The
main difficulty associated with these investigations is to properly deal with the many limitations
of these instruments, especially in the early period (sensor calibration, orbital drift, limited
spectral and directional sampling, etc.).

The AVHRR instrument also flies on the MetOp series of satellites. The three planned MetOp
satellites are part of the Eumetsat Polar System (EPS) run by Eumetsat.

Calibration and validation

Remote sensing applications of the AVHRR sensor are based on validation (matchup) techniques
of co-located ground observations and satellite observations. Alternatively, radiative transfer
calculations are performed. There are specialized codes which allow simulation of the AVHRR
observable brightness temperatures and radiances in near infrared and infrared channels.[3][4]
Next-generation system

Operational experience with the Moderate Resolution Imaging Spectroradiometer (MODIS)[5]
sensor onboard NASA's Terra and Aqua led to the development of AVHRR's follow-on,
VIIRS.[6] VIIRS is currently operating onboard the NPOESS Preparatory Project satellite and
will be deployed onboard the Joint Polar Satellite System in the late 2016 timeframe.[7]

Launch and service dates

Satellite name Launch date Service start Service end


TIROS-N 13 Oct 1978 19 Oct 1978 30 Jan 1980
NOAA-6 27 Jun 1979 27 Jun 1979 16 Nov 1986
NOAA-7 23 Jun 1981 24 Aug 1981 7 Jun 1986
NOAA-8 28 Mar 1983 3 May 1983 31 Oct 1985
NOAA-9 12 Dec 1984 25 Feb 1985 11 May 1994
NOAA-10 17 Sep 1986 17 Nov 1986 17 Sep 1991
NOAA-11 24 Sep 1988 8 Nov 1988 13 Sep 1994
NOAA-12 13 May 1991 14 May 1991 15 Dec 1994
NOAA-14 30 Dec 1994 30 Dec 1994 23 May 2007
NOAA-15 13 May 1998 13 May 1998 Present
NOAA-16 21 Sep 2000 21 Sep 2000 Present
NOAA-17 24 Jun 2002 24 Jun 2002 Present
NOAA-18 20 May 2005 30 Aug 2005 Present
NOAA-19 6 Feb 2009 2 Jun 2009 Present
MetOp-A[8] 19 Oct 2006 20 Jun 2007 Present
TIROS/NOAA dates from USGS website[9] and from NOAA POES Status website[10]

What is LiDAR?

Light Detection and Ranging in more detail.


LiDAR, or 3D laser scanning, was conceived in the 1960s for submarine detection from aircraft,
and early models were used successfully in the early 1970s in the US, Canada and Australia.
Over the past ten years there has been a proliferation in the use of LiDAR sensors in the United
Kingdom, with several regularly used in both airborne and ground surveying. This has been
accompanied by an increase in the awareness and understanding of LiDAR in previously
unrelated industries as the application of LiDAR has been adopted.

Airborne LiDAR

Most airborne LIDAR systems are made up of the LIDAR sensor, a GPS receiver, an inertial
measurement unit (IMU), an onboard computer and data storage devices.

The LIDAR system pulses a laser beam onto a mirror and projects it downward from an airborne
platform, usually a fixed-wing airplane or a helicopter. The beam is scanned from side to side as
the aircraft flies over the survey area, measuring between 20,000 and 100,000 points per second.
When the laser beam hits an object it is reflected back to the mirror, and the time interval
between the pulse leaving the airborne platform and its return to the LIDAR sensor is measured.
Following the LiDAR mission, the data is post-processed: the time-interval measurements from
the pulse being sent to the return pulse being received are converted to distances and corrected
using data from the aircraft's onboard GPS receiver, the IMU, and ground-based GPS stations.
The GPS very accurately determines the aircraft's position in terms of latitude, longitude and
altitude, also known as the x, y and z coordinates. The LiDAR sensor collects a huge amount of
data, and a single survey can easily generate millions of points totalling several terabytes.

An IMU is used to determine the attitude of the aircraft as the sensor is taking measurements.
These are recorded in degrees to an extremely high accuracy in all three dimensions as roll, pitch
and yaw, the rotational movements of the aircraft in flight. From these two datasets the laser
beam's exit geometry is calculated relative to the Earth's surface coordinates to a very high
accuracy.
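
A highly simplified Python sketch of this georeferencing step: combine the GPS position with
the IMU attitude (roll, pitch, yaw) to place a single return in local ground coordinates. All values
are hypothetical, and real systems also model the scan-mirror angle, lever arms and datum
transformations.

import numpy as np

def attitude_matrix(roll, pitch, yaw):
    """Rotation from the sensor frame to a local level frame (angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    return Rz @ Ry @ Rx

def ground_point(aircraft_xyz, roll, pitch, yaw, beam_dir_sensor, range_m):
    """Aircraft position plus the rotated beam vector scaled by the measured range."""
    beam = attitude_matrix(roll, pitch, yaw) @ (range_m * np.asarray(beam_dir_sensor))
    return np.asarray(aircraft_xyz, dtype=float) + beam

# A nadir-pointing beam from 1000 m altitude with small attitude angles
print(ground_point([0.0, 0.0, 1000.0], 0.01, -0.02, 0.5, [0, 0, -1], 1000.0))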

The initial LiDAR data can be further enhanced with additional post-processing, some of which
can be automated and some of which is manual. Further processing utilises the multiple return signals
from each laser pulse. By evaluating the time differences between the multiple return signals the
post processing system can differentiate between buildings and other structures, vegetation, and
the ground surface. This process is used to remove surface features to produce bare earth models
(DTM) and other enhanced data products.
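
A toy Python illustration of this idea: keep the lowest return in each grid cell as a crude
bare-earth (DTM) estimate. The sample points are hypothetical, and production workflows use
far more robust ground-classification algorithms.

from collections import defaultdict

def crude_dtm(points, cell=1.0):
    """points: iterable of (x, y, z); returns the lowest z per grid cell."""
    lowest = defaultdict(lambda: float("inf"))
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        lowest[key] = min(lowest[key], z)  # canopy/building returns are discarded
    return dict(lowest)

print(crude_dtm([(0.2, 0.3, 12.0), (0.8, 0.1, 3.1), (1.5, 0.4, 2.9)]))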

It is also possible to do selective feature extraction, for example, the removal of trees and other
vegetation to leave just the buildings.

Ground Based LiDAR

Ground based LiDAR systems are very similar, except that an IMU is not required: the LiDAR is
usually mounted on a tripod, on which the sensor rotates through 360 degrees. The pulsed laser
beam is reflected from objects such as building fronts, lamp posts, vegetation, cars and even
people. The return pulses are recorded and the distance between the sensor and the object is
calculated. The data produced is in a 'point cloud' format, a 3-dimensional array of points, each
having x, y and z positions relative to a chosen coordinate system.
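
A minimal Python sketch of how one such measurement becomes a point-cloud entry: the
measured range plus the scanner's azimuth and elevation angles converted to x, y and z. Angle
conventions vary between instruments; this is one common choice, with hypothetical values.

import math

def to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert a range/angle measurement to Cartesian coordinates."""
    x = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# A return 25 m away, 45 degrees round from north, 10 degrees above horizontal
print(to_point(25.0, math.radians(45), math.radians(10)))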

How does LiDAR work?

The science behind the technology.

The principle behind LiDAR is really quite simple: shine a small light at a surface and measure
the time it takes to return to its source. When you shine a torch on a surface, what you are
actually seeing is the light being reflected and returning to your retina. Light travels very fast -
about 300,000 kilometres per second, 186,000 miles per second or 0.3 metres per nanosecond -
so turning a light on appears to be instantaneous. Of course, it's not! The equipment required to
measure this needs to operate extremely fast. Only with the advancements in modern computing
technology has this become possible.

The actual calculation for measuring how far a returning light photon has travelled to and from
an object is quite simple:

Distance = (Speed of Light x Time of Flight) / 2
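
In Python, this calculation is a one-liner (the two-microsecond time of flight below is just an
example value):

C = 299_792_458.0  # speed of light in metres per second

def lidar_range(time_of_flight_s: float) -> float:
    """Distance = (speed of light * round-trip time) / 2."""
    return C * time_of_flight_s / 2.0

print(lidar_range(2e-6))  # a return received after 2 microseconds -> about 299.8 m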

The LiDAR instrument fires rapid pulses of laser light at a surface, some at up to 150,000 pulses
per second. A sensor on the instrument measures the amount of time it takes for each pulse to
bounce back. Light moves at a constant and known speed, so the LiDAR instrument can calculate
the distance between itself and the target with high accuracy. By repeating this in quick
succession the instrument builds up a complex 'map' of the surface it is measuring. With airborne
LiDAR, other data must be collected to ensure accuracy: because the sensor is moving, the
height, location and orientation of the instrument must be included to determine the position of
the laser pulse at the time of sending and the time of return. This extra information is crucial to
the data's integrity. With ground-based LiDAR, a single GPS location can be added for each
location where the instrument is set up.

Generally there are two types of LiDAR detection methods: direct energy detection, also known
as incoherent detection, and coherent detection. Coherent systems are best for Doppler or
phase-sensitive measurements and generally use optical heterodyne detection. This allows them
to operate at much lower power, at the expense of more complex transceiver requirements. In
both types of LiDAR there are two main pulse models: micropulse and high-energy systems.
Micropulse systems have developed as a result of more powerful computers with greater
computational capabilities. These lasers are lower powered and are classed as eye-safe, allowing
them to be used with few safety precautions. High-energy systems are more commonly used for
atmospheric research, where they are often used for measuring a variety of atmospheric
parameters such as the height, layering and density of clouds, cloud particle properties,
temperature, pressure, wind, humidity and trace gas concentrations.

Most LiDAR systems use four main components: lasers; scanners and optics; photodetectors and
receiver electronics; and navigation and positioning systems.

Lasers

Lasers are categorised by their wavelength. 600-1000nm lasers are more commonly used for
non-scientific purposes but, as they can be focused and easily absorbed by the eye, the maximum
power has to be limited to make them 'eye-safe'. Lasers with a wavelength of 1550nm are a
common alternative as they are not focused by the eye and are 'eye-safe' at much higher power
levels. These wavelengths are used for longer range and lower accuracy purposes. Another
advantage of 1550nm wavelengths is that they do not show under night-vision goggles and are
therefore well suited to military applications.

Airborne LiDAR systems use 1064 nm diode-pumped YAG lasers, whilst bathymetric systems
use 532 nm double diode-pumped YAG lasers, which penetrate water with much less attenuation
than the airborne 1064 nm version. Better resolution can be achieved with shorter pulses,
provided the receiver detector and electronics have sufficient bandwidth to cope with the
increased data flow.
Scanners and Optics

The speed at which images can be developed is affected by the speed at which they can be
scanned into the system. A variety of scanning methods are available for different purposes, such
as azimuth and elevation, dual oscillating plane mirrors, dual axis scanners and polygonal
mirrors. The type of optics determines the resolution and range that can be detected by a system.

Navigation and positioning systems

When a LiDAR sensor is mounted on a mobile platform such as a satellite, airplane or
automobile, it is necessary to determine the absolute position and orientation of the sensor to
retain usable data. Global Positioning Systems provide accurate geographical information
regarding the position of the sensor, and an Inertial Measurement Unit (IMU) records the precise
orientation of the sensor at that location.

The uses of LiDAR

What applications are there for LiDAR systems?

Airborne LiDAR Mapping

Forestry Management and Planning

LiDAR is unique in its ability to measure the vertical structure of forest canopies. As well as
mapping the ground beneath the forest, LiDAR is able to predict canopy bulk density and canopy
base height. Both of these factors can be used for, amongst other things, estimating canopy fuel
capacity for use in fire behaviour models. LiDAR allows large-scale surveys to be undertaken
with a level of cost-effectiveness not previously available. Another use of LiDAR is the
measurement of peak height to estimate root expanse.

Fig: Flood plain modelling.


Flood Modelling

Features like buildings, constructed river banks and roads have a great effect on flow dynamics
and flood propagation, and only high-resolution input data can capture both the system's
topography and these features. Frequent urban flooding has been observed in many parts of the
world over the past decades, and there is an urgent need to improve and expand modelling
efforts and to address the effect that model input data have on simulation results. Even
differences of a few metres can mean a great deal in loss calculations for urban areas. LiDAR
has brought this level of detail to the industry, allowing much more accurate flood prediction
models to be created.

LiDAR data can also be incorporated into relief, rescue and flood simulation software to provide
advanced topographical information.

Pollution Modelling

LiDAR has a unique ability to detect particles in both water and air. Because LiDAR uses short
wavelengths of light, typically ultraviolet, visible or near-infrared, it is possible to image objects
or features about the same size as the wavelength or larger. This makes it particularly sensitive
to aerosols, cloud particles and air molecules. Pollutants such as carbon dioxide, sulphur dioxide
and methane are all detectable with LiDAR. Combined with a building or terrain model, this
allows researchers to monitor, and more effectively reduce, pollutant build-up in certain areas.

In addition to detecting airborne pollutants, LiDAR can assist in the detection of noise and light
pollution. Using information gathered from known factors, such as the direction of light and the
source of noise, it is also possible to determine affected areas.

Mapping and Cartography

LiDAR's high resolution and accuracy have enabled it to be used in the creation of maps. Used in
conjunction with aerial photography, LiDAR can assist in road, building and vegetation
mapping. The 3D aspect of LiDAR makes it especially suitable for mapping terrain models,
including complex mountain topography. Other topographical data, such as high-resolution
contour maps, can also be derived from LiDAR.
Basic Image Rendering
Once the image data has been converted to physical units, and the geometry is understood, it is
possible to generate perspective view and animation products. This was first done at JPL in the
early 1980s by a team led by Kevin Hussey. Hussey's team produced L.A. - the Movie, an
animated sequence that simulated a fly-over of Southern California utilizing multispectral image
data acquired by the Landsat earth orbiting spacecraft. The remotely sensed imagery was
rendered into perspective projections using digital elevation data sets available for the area
within a Landsat image. Figure 2 illustrates the basic process. The upper left image shows one
band extracted from the Landsat image. A segment from the image has been selected for
rendering, and the perspective viewpoint has been defined as shown by the green and blue
graphics overlay. The upper right image is a gray scale representation of the elevation data
available for the image segment, with the same perspective viewpoint indicated. The elevation
along the blue path in these images is shown graphically in the lower left image. Once the
animation producer is satisfied with the viewpoint and perspective, the scene is rendered in 3D
perspective as shown in the lower right hand image.

The scientist or animation director sketches out a desired flight path, as shown in Figure 3. The
flight path is defined by a set of "key frames." Each key frame is characterized by a specific
viewing geometry and viewpoint, and software interpolates between key frames defined along
the flight path to render intermediate frames to produce the final animation. The animator
controls the simulated speed of the flyover by specifying the number of frames to be interpolated
between each key frame. Figure 4 shows one frame from the film L.A. - the Movie, showing the
Rose Bowl with JPL in the background against the San Gabriel mountains. The vertical scale is
exaggerated by a factor of 2.5 to show small scale features.
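
A minimal Python sketch of the key-frame interpolation described above, linearly interpolating
the camera position between two key frames. All values are hypothetical, and real flight-path
tools also interpolate orientation and typically use smoother spline curves.

def interpolate_frames(key_a, key_b, n_intermediate):
    """key_a, key_b: (x, y, z) camera positions; returns the in-between frames."""
    frames = []
    for k in range(1, n_intermediate + 1):
        t = k / (n_intermediate + 1)  # fraction of the way from key_a to key_b
        frames.append(tuple(a + t * (b - a) for a, b in zip(key_a, key_b)))
    return frames

# Five interpolated frames between two hypothetical viewpoints
print(interpolate_frames((0.0, 0.0, 1000.0), (500.0, 200.0, 800.0), 5))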

The Space Shuttle has carried synthetic aperture radar systems on three separate occasions,
obtaining high resolution radar imagery of the earth's surface. The third mission, referred to as
SIR-C (Shuttle Imaging Radar mission C) provided coverage of the Mammoth Mountain area of
California in 1995. Figure 7 shows a three-dimensional perspective view created from SIR-C
SAR images acquired by the radar system. SAR imagery requires different interpretation than
imagery acquired by a more conventional imaging system. Brightness differences in SAR
imagery represent differences in surface texture and the orientation of surface features, rather
than the color or reflectance of the surface. Bright features are oriented normal to
the direction in which the radar signal travels, since the radar will be reflected strongly from
surfaces normal to the radar beam. Dark features are generally more aligned with the direction of
radar signal travel. Differing textures will also reflect the radar beam differently. This is
illustrated in Figure 8 which shows a false color perspective projection of the same area. Here,
false color is used as an interpretive aid to highlight differences in surface feature orientation and
surface texture. This false color rendered representation provides an extremely useful tool for
scientific interpretation.

Mission Planning
Visualization and animation are also useful for mission planning and mission operations. It is
possible to incorporate CAD models of spacecraft with remotely sensed imagery in animations to
illustrate spacecraft trajectories and data acquisition strategies. Animation displays are also
provided to explain planned mission events during flight operations to members of the press and
the public via the news media. Figure 9 shows one frame extracted from an animation of the
Galileo spacecraft approach to Jupiter in December 1995. The spacecraft model was rendered
from a CAD model of the spacecraft obtained from the spacecraft design team. The star
background is produced from a standard reference star catalog, and the Jupiter image was
acquired by the Hubble Space Telescope. The spacecraft trajectory and planet motion models
were derived for the animation from mission navigation files and command sequence files.

A Growing Role
Visualization and animation are becoming increasingly important tools in planetary exploration.
High speed computing equipment and increasingly sophisticated software systems are making it
possible to produce the types of products shown in this article on rapid time scales. These
products are extremely useful in science analysis during flight operations, and are beginning to
play an increasingly important role in supporting future mission planning and data acquisition
strategies.

National Imagery Transmission Format

The National Imagery Transmission Format Standard (NITFS) is a U.S. Department of Defense
(DoD) and Federal Intelligence Community (IC) suite of standards for the exchange, storage,
and transmission of digital-imagery products and image-related products.

DoD policy is that other image formats can be used internally within a single system; however,
NITFS is the default format for interchange between systems. NITFS provides a package
containing information about the image, the image itself, and optional overlay graphics (i.e., a
"package" containing one or more images, subimages, symbols, labels, and text, as well as other
information related to the images). NITFS supports the dissemination of secondary digital
imagery from overhead collection platforms.

The remote sensing process involves the following seven elements:

• 1. Energy Source or Illumination (A) - the first requirement for remote sensing is to
have an energy source which illuminates or provides electromagnetic energy to the target
of interest.
• 2. Radiation and the Atmosphere (B) - as the energy travels from its source to the
target, it will come into contact and interact with the atmosphere it passes through. This
interaction may take place a second time as the energy travels from the target to the sensor.
• 3. Interaction with the Target (C) - once the energy makes its way to the target through
the atmosphere, it interacts with the target depending on the properties of both the target
and the radiation.
• 4. Recording of Energy by the Sensor (D) - after the energy has been scattered by, or
emitted from, the target, we require a sensor (remote - not in contact with the target) to
collect and record the electromagnetic radiation.
• 5. Transmission, Reception, and Processing (E) - the energy recorded by the sensor has to
be transmitted, often in electronic form, to a receiving and processing station where the
data are processed into an image (hardcopy and/or digital).
• 6. Interpretation and Analysis (F) - the processed image is interpreted, visually and/or
digitally or electronically, to extract information about the target which was illuminated.
• 7. Application (G) - the final element of the remote sensing process is achieved when we
apply the information we have been able to extract from the imagery about the target in
order to better understand it, reveal some new information, or assist in solving a particular
problem.
APPLICATIONS OF REMOTE SENSING:

Remote sensing has enabled the mapping, studying, monitoring and management of various
resources such as agriculture, forestry, geology, water and oceans. It has further enabled
monitoring of the environment, thereby helping in conservation. In the last four decades it has
grown as a major tool for collecting information on almost every aspect of the earth. With the
availability of very high spatial resolution satellites in recent years, the applications have
multiplied. In India, remote sensing has been used for various applications during the last four
decades and has contributed significantly towards development.

Major application activities using satellite remote sensing data in the country
include:

Groundwater Prospects and Recharge Zone Mapping
National Wastelands Monitoring
National Wetlands Inventory and Assessment
Snow and Glacier Studies
Coastal Zone Studies
Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL)
Assessment of Irrigation Potential under the Accelerated Irrigation Benefit Program (AIBP)
National Agricultural Drought Assessment and Monitoring System
Biodiversity Characterisation
National Urban Information System (NUIS)
Indian Forest Fire Response and Assessment System (INFFRAS)
Water Resources Information System (WRIS)
Space Based Information System for Decentralized Planning (SIS-DP)
Natural Resources Census (NRC)
Flood Mapping and Monitoring
Watershed Monitoring and Development
Potential Fishery Zone (PFZ) Forecasting

CONCLUSIONS

• Remotely sensed data is important to a broad range of disciplines. This will continue to
be the case and will likely grow with the greater availability of data promised by an
increasing number of operational systems.

• The availability of this data, coupled with the computer software necessary to analyze it,
provides opportunities for environmental scholars and planners, particularly in the areas
of landuse mapping and change detection, that would have been unheard of only a few
decades ago.

The inherent raster structure of remotely sensed data makes it readily compatible with raster GIS.
Thus, while IDRISI provides a wide suite of image processing tools, they are completely
integrated with the broader set of raster GIS tools the system provides.

References

 Wikipedia.org
 IEEE.com
 Google.com
 Electronicsforyou.com

Books:

 Remote Sensing Technology by Thomas M. Lillesand & Ralph W. Kiefer, 4th Edition
