
Chapter 1.

Introduction
1.1 Definition of Photogrammetry
Photogrammetry is defined as "the measurement and analysis of objects from photographs or images". The science of photogrammetry has a history of more than 100 years, dating back to the invention of the camera in the nineteenth century. In photogrammetry, the light (or electromagnetic energy) from objects is sensed and recorded as photographs by sensors. This is accomplished
without physical contact with the objects, which is, in essence, the most obvious difference from conventional surveying.
The information received remotely through photographs can be grouped into four categories:
- Geometric information involves the spatial position and the three-dimensional (3D) shape of objects. It is the most important information source in photogrammetry.
- Physical information refers to properties of electromagnetic radiation, e.g., radiant energy and wavelength.
- Semantic information is related to the meaning of an image. It is usually obtained by interpreting the recorded data.
- Temporal information is related to the change of an object over time, usually obtained by comparing several images recorded at different times.
Various types of cameras, such as older film cameras, modern digital cameras, and area- or line-sensor types, are used for taking photographs. Airplanes, including UAVs (UAS, drones), satellites, mobile cars, and even boats or ships are used as "platforms" for aerial, space, and terrestrial photography. The images are then processed and analyzed using human visual perception, knowledge, special software, and instruments called stereo plotters in order to obtain geospatial information or photogrammetric products such as digital line maps, digital terrain data (Digital Elevation Models), digital orthophoto maps, etc.
Today, photogrammetric technologies are applied in various scientific and engineering disciplines. This textbook, however, focuses on photogrammetric techniques that use aerial and satellite photographs for surveying and mapping.

1.2 Vertical and Oblique Aerial Photographs


Aerial and satellite photographs are generally classified according to the orientation of the camera axis. Fig. 1.1 illustrates the different cases.
●Vertical photograph: A vertical photograph is obtained when the camera's optical axis is vertical (perpendicular) to the Earth's level surface (Fig. 1.1(a)). The deviation from the vertical is called "tilt"; it must not exceed certain limits (Fig. 1.1(b)). Gyroscopically controlled mounts stabilize the camera so that the tilt is usually less than two to three degrees. Fig. 1.2(a) and (b) show examples of typical vertical aerial and satellite images used for photogrammetric work.
●Oblique photograph: A photograph taken with the camera axis intentionally tilted between the vertical and the horizontal. A high oblique photograph, depicted in Fig. 1.1(c), is tilted so much that the horizon is visible in the photograph. The main application of oblique photographs is in reconnaissance (Fig. 1.1(c) and (d), and Fig. 1.2(c)).
Vertical photographs provide imagery that is very similar to maps and can be visually interpreted to extract detailed qualitative information about surface land use, geomorphology, and hydrology within the vertical field of view. However, people are generally better at photo-interpreting oblique aerial photography than vertical aerial photography, because they have a lifetime of experience viewing the sides of objects as they navigate their daily environment.
[Figure 1.1: schematic panels (a)-(d); labels include "Photograph", "Point of Photography", "Horizontal", "Convergence angle", and "Roads".]
Figure 1.1: Classification of photographs according to camera orientation. In (a) the schematic diagram of a true vertical photograph is shown; (b) shows a low oblique and (c) depicts a high oblique photograph.

[Figure 1.2: three images with scale annotations of 35 km and 1300 m. (a) Vertical aerial photo; (b) satellite image; (c) high oblique photo.]
Fig. 1.2 Vertical and stereo aerial and satellite images used in photogrammetry (Images: Geospatial Data Infrastructure, Geospatial Information Authority of Japan (GSI), and ALOS/PRISM image (JAXA))

1.3 Principle of Stereo Photogrammetry


(1) Stereo Photographs
Humans have two eyes located side by side at the front of the head. Each eye views the same area from a slightly different angle. The two views have much in common, but each eye picks up visual information the other does not. Each eye captures its own view, and the two separate images are sent to the brain for processing. When the two images arrive simultaneously at the back of the brain, they are united into one picture. The combined image is more than the sum of its parts: it is a three-dimensional stereo picture. With stereo vision you see an object as a solid in three spatial dimensions (width, height, and depth, or x, y, and z). This is the so-called stereoscopy, or 3D perception. When the two eyes are focused on a certain point, their optical axes converge on that point, forming a parallactic angle (θ). The nearer the object, the greater the parallactic angle; for example, θA (Point A) < θB (Point B) in Fig. 1.3(b). The brain has learned to associate the distances to Points A and B with the corresponding parallactic angles θA and θB, giving the viewer the visual and mental impression that Point B is closer than Point A.
In photogrammetry, we employ this principle of human visual perception for three-dimensional measurement. The light-sensitive camera is analogous to the light-sensitive eye. Both the eye and the camera use a lens to focus reflected light from the real world onto the retina or a light-sensitive CCD (MOS or film), and record the image data as electric signals or photographs. As shown in Fig. 1.3(a), we take overlapping aerial photographs (usually with 60% overlap) at exposure stations 1 and 2 along a flight line, and measure the parallactic angles (intersection angles) of objects on the ground using the photographs. The change in the position of an object with height, from one photograph to the next relative to its background, caused by the aircraft's movement, is called "stereoscopic parallax". Parallax is the basis for three-dimensional stereoscopic viewing. Differences in the parallax of various objects of interest (called differential parallax) can be used to measure the heights of objects and extract topographic information by means of a stereoscopic instrument.

[Figure 1.3: (a) two exposure stations along a flight line forming a parallactic angle θ at a ground object; (b) two eyes viewing Points A and B with parallactic angles θA and θB.]
Fig. 1.3 Principle of stereo photogrammetry, which is analogous to the visual perception of the human eyes
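The height measurement from differential parallax described above can be sketched with the standard parallax equation h = H·dP / (p + dP). The flying height, base parallax, and differential parallax values below are hypothetical, and the formula assumes vertical photographs over fairly flat terrain.

```python
def height_from_parallax(flying_height_m, base_parallax_mm, diff_parallax_mm):
    """Object height from differential parallax on a vertical stereo pair:
    h = H * dP / (p + dP), where p is the absolute stereoscopic parallax
    at the object's base and dP the differential parallax at its top."""
    return flying_height_m * diff_parallax_mm / (base_parallax_mm + diff_parallax_mm)

# Hypothetical values: 1200 m flying height, 90 mm base parallax,
# 2.3 mm differential parallax measured for a tower.
h = height_from_parallax(1200.0, 90.0, 2.3)  # about 30 m
```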

1.4 Advantages and Factors Affecting Accuracy in Photogrammetry

(1) Advantages of Photogrammetry


The basic advantages of aerial (including space) photography over on-the-ground observation are as follows:
① With aerial photography, we can see the whole picture of the observable Earth surface.
② Photographs can give us a stop-action view of dynamic conditions on the Earth.
③ Aerial photographs are permanent records of the existing conditions of objects. These records can be studied under office rather than field conditions.
④ Photographic sensors such as CCDs have broad spectral sensitivity and can see and record over a wavelength range about twice as broad as the one we can see.
⑤ With the proper selection of camera, film, and flight parameters, we are able to record more spatial detail on a photograph than we can see with the ordinary eye.
⑥ We can also obtain accurate measurements of position, distance, direction, area, height, volume, and slope from photos in the office rather than under field conditions.
By taking aerial and space photos (images), photogrammetry can be applied to mapping the Earth's surface for many purposes: coastal mapping, engineering projects, land development, utility control, natural resource management and exploration, and countless others. It is the fastest and most economical technology for producing a detailed map of a wide and/or remote area of the Earth.

(2) Factors Affecting Accuracy in Photogrammetry


There are a number of factors that determine the accuracy of a photogrammetric project.
The main factors are:
① Spatial and Radiometric Resolution: The higher the spatial and radiometric resolution of the images, the better the chance of achieving high accuracy, because mapping items can be located and analyzed more precisely. Spatial resolution is defined by the pixel size, which is determined by the capabilities of the digital camera.
Ground resolution is the area of terrain covered by one pixel of the camera. The flying altitude, the focal length of the camera lens, and the spatial resolution of the camera together determine the size of the ground resolution cell (Fig. 1.4).
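This relationship is commonly written as GSD = pixel size × flying height / focal length. A minimal sketch follows; the camera parameters in the example are hypothetical.

```python
def ground_sample_distance(pixel_size_m, focal_length_m, flying_height_m):
    """Ground footprint of one pixel: GSD = pixel size * H / f
    (vertical photo over flat terrain)."""
    return pixel_size_m * flying_height_m / focal_length_m

# Hypothetical camera: 6 micrometre pixels, 120 mm lens, flown at 2000 m.
gsd = ground_sample_distance(6e-6, 0.120, 2000.0)  # 0.1 m (10 cm) per pixel
```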
On the other hand, the sensitivity of a sensor to the light from the sensed objects is described by its radiometric resolution. Radiometric resolution is the number of levels at which a sensor can record spectral information (light intensity). Such information generally ranges from 0-255 (8-bit) to 0-65,535 (16-bit) as the DN (Digital Number) value of each pixel. The most common range is 0-255, which corresponds to the storage capacity of an 8-bit computer byte; a single byte can hold one integer (whole-number) value from 0 to 255. This value represents the amount of reflected or emitted energy recorded by the sensor for a particular spot on the Earth's surface.

[Figure 1.4: lens and focal length, flying altitude, pixel (spatial resolution), ground resolution on the terrain, and radiometric resolution (e.g., 8-bit or 16-bit DN value).]
Fig. 1.4 Spatial and radiometric resolution of a photograph
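The level count follows directly from the bit depth (levels = 2^bits), which can be checked in a couple of lines:

```python
def dn_levels(bits):
    """Number of gray levels and maximum DN value for a given bit depth."""
    levels = 2 ** bits
    return levels, levels - 1

assert dn_levels(8) == (256, 255)       # 8-bit sensor: DN 0-255
assert dn_levels(16) == (65536, 65535)  # 16-bit sensor: DN 0-65,535
```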

② Camera Calibration Parameters: Calibration is the process of determining the camera's focal length, format size, principal point, and lens distortion (Fig. 1.5). There are a number of ways of obtaining this camera information, and using a fully calibrated camera will give the best results.

[Figure 1.5: format size (number of pixels, uniformity of the sensor element spacing, and flatness of the array), principal point, focal length, lens distortion, and optical axis of the lens.]
Fig. 1.5 Camera calibration parameters
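Lens distortion is commonly modeled with a radial polynomial. The sketch below assumes a simple two-term model with coefficients k1 and k2; these names and the model form are illustrative conventions, not this textbook's specific calibration model.

```python
def correct_radial_distortion(x, y, k1, k2):
    """Correct image coordinates (x, y), measured from the principal
    point, with a two-term radial polynomial (an assumed, common model):
        x' = x * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# With zero coefficients the correction is the identity.
assert correct_radial_distortion(1.0, 0.0, 0.0, 0.0) == (1.0, 0.0)
```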
③ Angles Between Photos: Points and objects that appear only on photographs with very small subtended angles (for example, a point that appears in only two photographs taken very close to each other) have much lower accuracy than objects on photos that are closer to 90 degrees apart. Making sure the camera positions are well spread will give the best results.

[Figure 1.6: intersection geometry for 60% overlap and 80% overlap.]
Fig. 1.6 Angles between photos
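As an illustration of this effect, the intersection (convergence) angle at a ground point can be estimated from the air base B and flying height H as θ = 2·arctan(B / 2H). The coverage, height, and base values below are hypothetical.

```python
import math

def intersection_angle_deg(air_base_m, flying_height_m):
    """Approximate angle subtended at a ground point by two exposure
    stations: theta = 2*atan(B / 2H)."""
    return math.degrees(2.0 * math.atan(air_base_m / (2.0 * flying_height_m)))

# Hypothetical strip: 1500 m ground coverage per photo, flown at 1300 m.
# 60% overlap leaves a 600 m air base; 80% overlap leaves only 300 m.
a60 = intersection_angle_deg(600.0, 1300.0)
a80 = intersection_angle_deg(300.0, 1300.0)
assert a60 > a80  # the longer base gives stronger intersection geometry
```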


④ Photo Orientation Quality: In photogrammetry, it is necessary to compute the location and tilt angles of the camera for each photo; this is called exterior orientation. Determining the exterior orientation of a photo is analogous to setting up a theodolite on its tripod over a traverse or control point in ground surveying (Fig. 1.7). One factor that contributes greatly to the accuracy of a project is an accurate orientation for every camera position. The orientation quality improves as the number of well-positioned Ground Control Points increases and as those points cover a greater percentage of the photograph area.

[Figure 1.7: rotation angles κ, φ, ω and camera position (Xo, Yo, Zo).]
Fig. 1.7 Photo orientation quality (tilt and position of photography)
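The three tilt angles (ω, φ, κ) together with the position (Xo, Yo, Zo) define the exterior orientation. As a sketch, the rotation part can be assembled as R = Rz(κ)·Ry(φ)·Rx(ω); this is one common convention assumed here, and angle order and sign conventions vary between systems.

```python
import math

def rot_x(a):  # rotation about the x-axis (omega)
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(a):  # rotation about the y-axis (phi)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):  # rotation about the z-axis (kappa)
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def exterior_rotation(omega, phi, kappa):
    """Rotation matrix of the exterior orientation, assembled here as
    R = Rz(kappa) @ Ry(phi) @ Rx(omega) -- one common convention."""
    return matmul(rot_z(kappa), matmul(rot_y(phi), rot_x(omega)))
```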

⑤ Photo Redundancy: A point's or object's position is usually measured more accurately when it appears on many photographs rather than on the minimum of two.

[Figure 1.8: rays from four photos versus rays from two photos.]
Fig. 1.8 Photo redundancy
⑥ Targets: The accuracy of a 3D point is tied to the precision of its locations in the images. This image positioning can be improved by using signalized targets with higher contrast. A digital plotter uses the image data to mark the point at sub-pixel precision, which increases the precision of its placement and hence the overall accuracy of the point's computed 3D location.
Various combinations of these factors contribute to project accuracy.

[Figure 1.9: (a) natural and artificial targets of lower and higher contrast; (b) sub-pixel measurement on a digital plotter.]
Fig. 1.9 Targets for measurement
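One simple way a plotter can locate a high-contrast target to sub-pixel precision is an intensity-weighted centroid over a small image window. The sketch below assumes that method; real digital plotters may use more sophisticated matching.

```python
def weighted_centroid(window):
    """Sub-pixel target position as the intensity-weighted centroid of a
    small image window (a list of rows of DN values).  Returns (x, y) in
    pixel coordinates relative to the window's top-left pixel."""
    total = float(sum(sum(row) for row in window))
    x = sum(j * v for row in window for j, v in enumerate(row)) / total
    y = sum(i * v for i, row in enumerate(window) for v in row) / total
    return x, y

# A symmetric high-contrast blob: the centroid falls exactly on the
# centre pixel of the 3x3 window.
win = [[0, 10, 0],
       [10, 100, 10],
       [0, 10, 0]]
assert weighted_centroid(win) == (1.0, 1.0)
```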

1.5 History
Photogrammetry has a long history of evolution. It is said that the first known aerial photograph was taken from a balloon at an altitude of 1,200 feet over Paris in 1858.
World War II (1939-1945) brought about the development of sophisticated techniques in aerial photogrammetry, such as large-format film (20 cm-wide film) and cameras with wide-angle lenses (60 to 90 degrees of angular coverage with 15 to 20 cm focal lengths), highly sensitive film, high-precision mechanical and optical measuring instruments, etc. It also became possible to take aerial photos from higher altitudes for reconnaissance purposes.
After WWII, photogrammetry became a very common surveying and mapping technology, not only for military but also for civilian purposes. Many private photogrammetric mapping companies were founded around the world. To support such activities, many advanced technologies were also developed over time, including color and infrared films, automatic cameras, high-precision measuring instruments, computer-based data processing technologies, and new mapping products such as image maps and digital maps. Among these, computer-based information processing played the most important role in the advancement of photogrammetry. Computers are used not only for data processing but also as electronic control devices in many modern photogrammetric instruments. With such advancements and improved productivity, photogrammetry has contributed greatly to the development of various social infrastructures (roads, houses, ports, farmland, forests) as well as to disaster prevention and environmental conservation around the world.
Today, photogrammetry, as a geospatial information technology, has entered a new era through its merging with related technologies such as GPS, GIS (Geographic Information Systems), optical sensors and digital cameras, laser scanners, position and orientation systems with optical gyros (GNSS/IMU), satellites, UAV/UAS (Unmanned Aerial Vehicles/Unmanned Aircraft Systems) and cars as platforms, powerful computers (PCs), and various data communication technologies including the Internet. The world is entering the "Information Age": a time when information is becoming a major product of, and foundation for, economic and social progress. Photogrammetry, as a major information-gathering and mapping technology, is an important component of this evolution.
Fig. 1.10 shows the historical development of photogrammetry in terms of camera/film, plotting instruments, data processing methodology, and platforms.

1.6 Basic Elements of Photogrammetric Mapping and Products


The basic operations in conducting photogrammetric mapping are as follows:
(1) Photography: obtaining suitable photography for mapping;
(2) Control: obtaining sufficient control through field surveys and/or GNSS/IMU, with extension by photogrammetric methods (aerial triangulation);
(3) Map Compilation or Digital Mapping: plotting planimetric and/or topographic features by photogrammetric methods;
(4) Map Completion: refining the map by editing in the office and, where needed, by special surveys in the field, followed by final map drafting and/or digital map filing, i.e., completing the map by plotting and filing the digital map data;
(5) Digital Surface Model (DSM) or Digital Elevation Model (DEM) measurement;
(6) Orthophoto (orthoimage) production; and
(7) Preparation of a geospatial database for GIS.
Project planning and quality control are also very important components of a successful project. Fig. 1.11 shows the overall procedure of a photogrammetric mapping project.
The requirements of those responsible for the project are converted into numbers that specify the region to be mapped, the scale at which it is to be mapped, the accuracy of the final maps and products, the date by which the map should be completed, and the approximate cost of the project (or an upper limit on the cost).
Fig. 1.12 shows typical photogrammetric mapping products and their usages.

[Figure 1.10: timeline from 1850 to 2000 tracing cameras/film, platforms (GPS, GNSS/IMU position and orientation systems, satellites, UAV/UAS), plotting instruments (analog map, analytical instrument, digital plotter), and data processing (digital mapping: digital map, digital orthophoto, DEM), with the stages labeled Optical Photogrammetry, Optical-Mechanical, Analytical, and Digital Photogrammetry.]
Fig. 1.10 Evolution of photogrammetry
[Figure 1.11: flowchart. Planning of Photogrammetric Project → Ground Control Survey and Signalization → Flight for Aerial Photography, with GNSS/IMU Observation and GNSS Observation at the Ground Station → Image Processing of Aerial Photographs and GNSS/IMU Data Analysis → Orientation of Photographs (Aerial Triangulation) → Stereo Compilation (Digital Mapping) and Measurement of Digital Surface Model (DSM) or Digital Elevation Model (DEM) → Field Completion, Editing of Vector Data, Editing of DSM/DEM Data, Orthophoto Production and Mosaicing → Geospatial Database for GIS (Vector Data, Raster Data, and Image Data Products), DSM/DEM Data Products, and Orthophoto Map.]
Fig. 1.11 Basic elements of a photogrammetric mapping project

[Figure 1.12: a. line map; b. three-dimensional model; c. orthophoto map (image map); d. tsunami simulation analysis.]
Fig. 1.12 Photogrammetric products and their usages
