Elements of Visual Image Interpretation

Humans are adept at interpreting images of objects. After all, they have been doing this all their lives. With some instruction they can become excellent image analysts. Photo or image interpretation is defined as the examination of images for the purpose of identifying objects and judging their significance (Philipson, 1997; McGlone, 2004).

This chapter introduces the fundamental concepts associated with the visual interpretation of images of objects recorded primarily by remote sensing systems operating in the optical blue, green, red, and reflective near-infrared portions of the electromagnetic spectrum. The imagery that is interpreted may be acquired using a variety of sensors, including traditional analog cameras (e.g., Leica RC 30), digital cameras (e.g., Leica ADS 40), multispectral scanners (e.g., Landsat Thematic Mapper), and linear or area-array sensor systems (e.g., SPOT, IRS-1C, MODIS, IKONOS, QuickBird, OrbView-3, ImageSat).

Introduction

There are a number of important reasons why photo or image interpretation is such a powerful scientific tool, including:

+ the aerial/regional perspective;
+ three-dimensional depth perception;
+ the ability to obtain knowledge beyond our human visual perception;
+ the ability to obtain a historical image record to document change.

This chapter discusses these considerations and then introduces the fundamental elements of image interpretation used by image analysts to implement them. Various methods of search are also presented, including the use of collateral (ancillary) information, convergence of evidence, and application of the multi-concept in image analysis.

From Chapter 5 of Remote Sensing of the Environment: An Earth Resource Perspective, Second Edition, John R. Jensen. Copyright © 2007 by Pearson Education, Inc. Published by Pearson Prentice Hall. All rights reserved.

The Aerial/Regional Perspective

A vertical or oblique aerial photograph or other type of visible/near-infrared image records a detailed but much reduced version of reality. A single image usually encompasses much more geographic area than human beings could possibly traverse or really appreciate in a given day. For example, consider the photograph obtained by the astronauts through a porthole on Apollo 17 that captures one-half of the entire Earth (a hemisphere) at one time (Figure 1). Much of Africa is visible, from the arid Sahara to the dark vegetation of the Congo to the Cape of Good Hope shrouded in clouds. Conversely, a single 9 × 9 in. 1:63,360-scale (1 in. = 1 mi) vertical aerial photograph records only 81 mi² of geography at one time.
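The 81 mi² figure follows directly from the scale arithmetic (9 in. × 1 mi per in. = 9 mi on a side). The short sketch below generalizes the calculation; it is our illustration, not from the text, and the function name is hypothetical.

```python
# Ground coverage of a square vertical aerial photograph from its scale.
# Values follow the 9 x 9 in., 1:63,360 example above (1 in. = 1 mi).

def ground_coverage_mi2(photo_side_in: float, scale_denominator: float) -> float:
    """Return the ground area (mi^2) covered by a square photo."""
    inches_per_mile = 63360.0                       # 5,280 ft x 12 in.
    ground_side_mi = photo_side_in * scale_denominator / inches_per_mile
    return ground_side_mi ** 2

print(ground_coverage_mi2(9, 63360))   # 81.0 mi^2, as stated in the text
```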
Examination of the Earth from an aerial perspective allows scientists and the general public to identify objects, patterns, and human-land interrelationships that may never be completely understood if we were constrained to a terrestrial, Earth-bound vantage point. It does not matter whether the aerial perspective is from the top of a tall building, an elevated hillside, a light plane, a high-altitude jet, or a satellite platform. The resultant remotely sensed image provides spatial terrain information that we would not be able to acquire and appreciate in any other manner. This is why remote sensing image interpretation is so important for military reconnaissance and civilian Earth resource investigation.

Care must be exercised, however, when interpreting vertical and oblique imagery. Human beings are accustomed to looking at the facade (side) of objects from a terrestrial vantage point and do not normally have an appreciation for what objects look like when they are recorded from a vertical or oblique perspective (Haack et al., 1997). In addition, we are not used to looking at and interpreting the significance of many square kilometers of terrain at one time. Our line of sight on the ground is usually less than a kilometer. Therefore, the regional analysis of vertical and oblique remote sensor data requires training and practice.

Figure 1  A photograph of the Earth obtained by the astronauts onboard Apollo 17, shooting through a porthole of the spacecraft. Almost the entire continent of Africa is visible, as well as Saudi Arabia and part of Iraq and India. Note the arid Sahara and the dark, vegetated terrain of the rain forest along the equator in central Africa. Antarctica is especially apparent at the South Pole. Photographs like this helped mankind realize how vulnerable and precious the Earth is as it rests like a multicolored jewel in the blackness of space (courtesy of NASA).

Three-Dimensional Depth Perception

We can view a single aerial photograph or image with our eyes and obtain an appreciation for the geographic distribution of features in the landscape. However, it is also possible to obtain a three-dimensional view of the terrain as if we were actually in an airborne balloon or aircraft looking out the window. One way to obtain this three-dimensional effect is to obtain two photographs or images of the terrain from two slightly different vantage points. We can train our eyes to view the two images of the terrain at the same time. Our mind fuses this stereoscopic information into a three-dimensional model of the landscape that we perceive as being real (Linder, 2003). For example, the stereopair of downtown St. Louis, Missouri, shown in Figure 2 provides detailed three-dimensional information when viewed using a stereoscope.

Figure 2  This stereopair of St. Louis, Missouri, consists of two views of the terrain obtained at two different exposure stations along a single flightline. Three-dimensional information about the terrain can be obtained by viewing the model using a stereoscope. The human mind uses the parallax information inherent in the images to produce a three-dimensional model that can yield detailed terrain information.

We know from life that it is important to know not only the size and shape of an object but also its height, depth, and volume, which are very diagnostic characteristics. Analysis of stereoscopic imagery in three dimensions allows us to appreciate the three-dimensional nature of the undulating terrain and the slope and aspect of the land. In addition, the stereoscopic analysis process usually exaggerates the height or depth of the terrain, allowing us to appreciate very subtle differences in object height and terrain slope and aspect that we might never appreciate from a terrestrial vantage point. Three-dimensional information can also be obtained by analyzing RADAR, LIDAR, and SONAR remote sensor data.

Obtaining Knowledge Beyond Our Human Visual Perception

Our eyes are sensitive primarily to blue, green, and red light.
Therefore, we sample a very limited portion of the electromagnetic energy that is actually moving about in the environment and interacting with soil, rock, water, vegetation, the atmosphere, and urban structure. Fortunately, ingenious sensors have been invented that can measure X-ray, ultraviolet, near-infrared, middle-infrared, thermal infrared, microwave, and radiowave energy. Carefully calibrated remote sensor data provide new information about an object that humans might never be able to appreciate in any other manner (Robbins, 1999).

For example, consider the imagery of an agricultural area in Saudi Arabia shown in Figure 3. Healthy vegetation absorbs much of the green and red light from the Sun for photosynthesis. Therefore, agricultural fields show up in dark shades of gray in green and red multispectral imagery. Conversely, the greater the amount of biomass present in an agricultural field, the greater the amount of near-infrared energy reflected, causing heavily vegetated fields to appear bright in near-infrared imagery. The green and red images suggest that vegetation is present in almost all of the dark center-pivot fields. The near-infrared imagery provides more definitive information about the spatial distribution and amount of vegetation (biomass) found within the fields. A color composite of the bands is found in Color Plate 1.

Figure 3  Indian IRS-1C LISS III imagery (23 × 23 m) of an agricultural area in Saudi Arabia. a) Vegetation absorbs most of the green and b) red incident energy, causing vegetated fields to appear dark. c) Conversely, vegetation reflects a substantial amount of incident near-infrared energy, causing it to appear bright. In this example, several fields appear dark in the green and red images (possibly due to recent irrigation, stubble from a previous crop, or plowing), suggesting that vegetation is present. Careful examination of the same fields in the near-infrared image reveals that very little vegetation is present. The near-infrared image also provides detailed information about the spatial distribution of the biomass present in each field. A color composite of this imagery is found in Color Plate 1 (images courtesy of Indian National Remote Sensing Agency).
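A small illustrative calculation makes the band comparison concrete. The reflectance values below are typical textbook magnitudes we have assumed for illustration; they are not measurements from the IRS-1C scene.

```python
# Illustrative percent-reflectance values (assumed, not measured) showing
# why the near-infrared band reveals biomass differences that the visible
# bands hide: both canopies are dark in green and red, but NIR separates them.
reflectance = {
    #                 green  red   NIR
    "sparse canopy": (0.12, 0.08, 0.25),
    "dense canopy":  (0.10, 0.05, 0.50),
}
for i, band in enumerate(("green", "red", "NIR")):
    sparse = reflectance["sparse canopy"][i]
    dense = reflectance["dense canopy"][i]
    print(f"{band:5s} sparse={sparse:.2f} dense={dense:.2f} "
          f"difference={abs(dense - sparse):.2f}")
# green and red differ by only 0.02-0.03, while NIR differs by 0.25,
# so the NIR image best depicts the amount of biomass in each field.
```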
Historical Image Record and Change Detection Documentation

A single aerial photograph or image captures the Earth's surface and atmosphere at a unique moment in space and time, never to be repeated. These photographs or images are valuable historical records of the spatial distribution of natural and human-made phenomena. When we acquire multiple images of the Earth, we can compare the historic imagery with the new imagery to determine if there are any subtle, dramatic, or particularly significant changes (Jensen and Cowen, 1999; McCoy, 2005). The study of change usually increases our understanding of the natural and human-induced processes at work in the landscape. Knowledge about the spatial and temporal dynamics of phenomena allows us to develop predictive models about what has happened in the past and what may happen in the future (Lunetta and Elvidge, 1998). Predictive modeling is one of the major goals of science. Remote sensing image interpretation is playing an increasingly important role in predictive modeling and simulation (Goetz, 2002; Miller et al., 2003; Jensen et al., 2005).

Remote sensing is especially useful for monitoring human activity through time, which can hopefully lead to sustainable development and good governance. For example, consider Figure 4, which documents the effects of President Robert G. Mugabe's order to demolish urban poor informal settlements in Zimbabwe in 2005. President Mugabe said his urban clean-up campaign, Operation Murambatsvina, was needed to "restore sanity" to Zimbabwe's cities, which he said had become overrun with criminals. The opposing political party has its support base among the urban poor, and says Operation Murambatsvina was aimed at forcing them into rural areas where the Mugabe government could more easily control them. It is estimated that more than 200,000 people became homeless. Remote sensing provided a detailed historic record of the tragic events (BBC, 2005).

Figure 4  High spatial resolution (61 × 61 cm) panchromatic satellite imagery captured the destruction (razing) of informal housing in Harare, Zimbabwe, that began May 25, 2005. a) QuickBird 61-cm image obtained on April 16, 2005. b) QuickBird 61-cm image obtained on June 4, 2005. President Mugabe ordered police with bulldozers and sledgehammers to demolish more than 20,000 informal housing structures, causing more than 200,000 people to become homeless (images courtesy of DigitalGlobe, Inc.).

Elements of Image Interpretation

To perform regional analysis, view the terrain in three dimensions, interpret images obtained from multiple regions of the electromagnetic spectrum, and perform change detection, it is customary to use principles of image interpretation that have been developed through empirical experience for more than 150 years (Estes et al., 1983; Kelly et al., 1999; McGlone, 2004). The most fundamental of these principles are the elements of image interpretation that are routinely used when visually photo-interpreting an image (Bossler et al., 2002). The elements of image interpretation include location, tone and color, size, shape, texture, pattern, shadow, height and depth, volume, slope, aspect, site, situation, and association (Figure 5). Some of the adjectives associated with each of these elements of image interpretation are summarized in Table 1. Each image is composed of individual silver halide crystals or pixels that have a unique color or tone at a certain geographic location. This is the fundamental building block upon which all other elements are based. Therefore, we may consider these the primary or first-order elements of image interpretation (Konecny, 2003). The secondary and tertiary elements are basically spatial arrangements of tone and color. The higher-order elements of site, situation, and association are often based on different methods of search, including the use of collateral information, convergence of evidence, and the use of the multi-concept.

A well-trained image interpreter uses many of the elements of image interpretation during analysis without really thinking about them (Lloyd et al., 2002). However, a novice interpreter may have to systematically force himself or herself to consciously evaluate an unknown object with respect to these elements to finally identify it and judge its significance in relationship to all the other phenomena in the scene.
Figure 5  Photo or image interpretation is usually based on the use of the elements of image interpretation. The locations of individual silver halide crystals in a photograph or pixels in an image represent the primary (first-order) elements of image interpretation. Secondary and tertiary elements are spatial arrangements of tone and color. The higher-order elements of site, situation, and association often make use of various search methods to perform accurate image interpretation.

x,y Location

There are two primary methods of obtaining precise x,y coordinate information about an object: 1) survey it in the field using traditional surveying techniques or global positioning system (GPS) instruments, or 2) collect remote sensor data of the object, register (rectify) the image to a basemap, and then extract the x,y coordinate information directly from the rectified image.

If option one is selected, most scientists now use relatively inexpensive GPS instruments in the field (Figure 6) to obtain a precise measurement of an object's location in degrees of longitude and latitude on the Earth's graticule or in meters easting and northing in a map projection (e.g., Universal Transverse Mercator) (McCoy, 2005). Scientists must then transfer the coordinates of the point (e.g., a specific tree location) or polygon (e.g., the perimeter of a small lake) onto an accurate planimetric map. Most scientists in the United States use U.S. Geological Survey 7.5-minute quadrangle maps. In Britain they use Ordnance Survey maps.

Most aircraft or spacecraft used to collect remote sensor data now have a GPS receiver. This allows the remote sensing instrument onboard to obtain accurate x,y,z GPS coordinates at each photograph exposure station or at the center of each scan line. In the case of aerial photography, this means that we can obtain information about the exact location of the center of each aerial photograph (i.e., the principal point) at the instant of exposure. We can use the GPS information collected by the sensor (and perhaps some collected on the ground) to register (rectify) the uncontrolled photo or image to UTM or another map projection. If we also correct for the relief displacement of the topography, then the photo or image becomes an orthophoto or orthoimage with all the metric qualities of a line map (McGlone, 2004). Geographic coordinates (x,y) of points and polygons can then be extracted directly from the rectified image. Jensen (2005) describes methods used to digitally rectify (i.e., register) remote sensor data to a standard map projection.
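As a rough illustration of what image-to-map registration involves, the following sketch fits a simple affine transformation to a handful of ground control points by least squares. It is a minimal stand-in for the rigorous methods Jensen (2005) describes; all coordinate values and names are hypothetical.

```python
# A minimal sketch of image-to-map registration using ground control
# points (GCPs), assuming a simple affine model.
import numpy as np

# (column, row) locations of GCPs in the unrectified image ...
image_xy = np.array([[120, 340], [890, 310], [450, 720], [860, 880]], float)
# ... and the same points in UTM easting/northing (hypothetical values)
map_xy = np.array([[432150, 3760890], [434020, 3761010],
                   [432960, 3759940], [433980, 3759570]], float)

# Solve [easting, northing] = [col, row, 1] @ coeffs by least squares
design = np.column_stack([image_xy, np.ones(len(image_xy))])
coeffs, *_ = np.linalg.lstsq(design, map_xy, rcond=None)

def image_to_map(col, row):
    """Convert an image (col, row) location to map coordinates."""
    return np.array([col, row, 1.0]) @ coeffs

print(image_to_map(500, 500))   # approximate easting, northing
```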
Tone and Color

Real-world surface materials such as vegetation, water, and bare soil often reflect different proportions of energy in the blue, green, red, and near-infrared portions of the electromagnetic spectrum. We can plot the amount of energy reflected from each of these materials at specific wavelengths and create a spectral reflectance curve, sometimes called a spectral signature (Jensen et al., 2005). For example, generalized spectral reflectance curves for objects found in a south Florida mangrove ecosystem are shown in Figure 7a (Davis and Jensen, 1998). Spectral reflectance curves of selected materials provide insight as to why they appear as they do on black-and-white or color imagery. We will first consider why objects appear in certain grayscale tones on black-and-white images.

Table 1. Elements of image interpretation and common adjectives (quantitative and qualitative).

x,y Location — image coordinates: column (x) and row (y) coordinates in an unrectified image; image map coordinates: silver halide crystals or pixels in a photograph or image are rectified to a map projection (e.g., UTM).
Tone/Color — gray tone: light (bright), intermediate (gray), dark (black); color: IHS = intensity, hue, saturation; RGB = red, green, and blue; Munsell.
Size — length, width, perimeter, area (m²); small, medium (intermediate), large.
Shape — an object's geometric characteristics: linear, curvilinear, circular, elliptical, radial, square, rectangular, triangular, hexagonal, pentagonal, star, amorphous, etc.
Texture — characteristic placement and arrangement of repetitions of tone or color: smooth, intermediate (medium), rough (coarse), mottled, stippled.
Pattern — spatial arrangement of objects on the ground: systematic, unsystematic or random, linear, curvilinear, rectangular, circular, elliptical, parallel, centripetal, serrated, striated, braided.
Shadow — silhouette caused by solar illumination from the side.
Height/Depth — z: elevation (height), bathymetry (depth).
Volume/Slope/Aspect — volume (m³), slope (°), aspect (°).
Site/Situation/Association — site: elevation, slope, aspect, exposure, adjacency to water, transportation, utilities; situation: objects are placed in a particular order or orientation relative to one another; association: related phenomena are usually present.

Figure 6  Scientist collecting global positioning system (GPS) x,y,z location data in smooth cordgrass (Spartina alterniflora) in Murrells Inlet, SC.

Tone

A band of electromagnetic energy (e.g., green light from 0.5 – 0.6 μm) recorded by a remote sensing system may be displayed in shades of gray ranging from black to white. These shades of gray are usually referred to as tone. We often say, "This part of an image has a 'bright' tone, this area has a 'dark' tone, and this feature has an intermediate 'gray' tone." Of course, the degree of darkness or brightness is a function of the amount of light reflected from the scene within the specific wavelength interval (band).
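The mapping from reflected energy to displayed tone can be made concrete with a one-line conversion. This sketch assumes a simple linear scaling of reflectance to 8-bit brightness; operational systems also apply sensor calibration and contrast stretching. The mangrove reflectance values used below are the ones quoted in the following paragraphs.

```python
# How reflectance becomes a display gray tone: a minimal sketch assuming
# a linear scaling of reflectance (0-1) to 8-bit brightness (0 = black).
def gray_tone(reflectance: float, max_reflectance: float = 1.0) -> int:
    """Map reflectance to an 8-bit display value."""
    return round(255 * reflectance / max_reflectance)

print(gray_tone(0.14))  # mangrove, green band: 36 -> dark tone
print(gray_tone(0.09))  # mangrove, red band:   23 -> very dark tone
print(gray_tone(0.28))  # mangrove, NIR band:   71 -> brighter tone
```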
For example, consider three black-and-white images of a south Florida mangrove ecosystem (Figure 7b–d). The three images record the amount of green, red, and near-infrared energy reflected from the scene, respectively.

Incident green light (0.5 – 0.6 μm) penetrates the water column farther than red and near-infrared energy and is reflected off the sandy bottom or the coral reef (Figure 7b). Therefore, the green band provides subsurface detail about the reef structure surrounding the mangrove islands. Mangrove vegetation absorbs approximately 86 percent of the incident green light for photosynthetic purposes and reflects approximately 14 percent. This causes mangroves to appear relatively dark in a single-band green image. Sand reflects high, nearly equal proportions of blue, green, red, and near-infrared incident energy, so it appears bright in all images.

Mangroves reflect approximately 9 percent of the incident red energy (0.6 – 0.7 μm) while absorbing approximately 91 percent of the incident energy for photosynthetic purposes. This causes the mangroves to appear very dark in the red photograph (Figure 7c). Red light does not penetrate as well into the water column, so the water has a slightly darker tone, especially in the deeper channels. As expected, sandy areas have bright tones.

Figure 7  Elements of Image Interpretation — Tone and Color. a) Spectral reflectance curves for sand, mangrove, and water in Florida. b) Black-and-white photograph of green reflected energy. c) Black-and-white photograph of red reflected energy. d) Black-and-white photograph of near-infrared reflected energy. e) Stand of pine (evergreen) surrounded by hardwoods (courtesy Emerge, Inc.). f) Landsat Thematic Mapper band 3 image: vegetation is dark, fallow is bright, and turbid water is gray (GeoEye, Inc.). g) U-2 photograph of a Russian Sputnik launch site (courtesy John Pike, FAS). h) High-contrast terrestrial photograph of a Dalmatian. i) High-contrast terrestrial photograph of a cow.

In the black-and-white image recording only near-infrared energy (0.7 – 0.92 μm), vegetation is displayed in bright tones (Figure 7d). Healthy vegetation reflects much of the incident near-infrared energy (approximately 28 percent). Generally, the brighter the tone from a vegetated surface, the greater the amount of biological matter (biomass) present (Jensen et al., 1999). Conversely, water absorbs most of the incident near-infrared energy, causing the water to appear dark. There is great contrast between the bright upland consisting of vegetation and sand, and the dark water. Therefore, it is not surprising that the near-infrared region is considered the best for discriminating the upland-water interface.

A black-and-white infrared image of an evergreen pine stand surrounded by deciduous hardwood forest is shown in Figure 7e. The tonal contrast makes it easy to discriminate between the two major species.

One must be careful, however, when interpreting individual-band black-and-white images. For example, consider the Landsat Thematic Mapper band 3 (red) image of a Colorado agricultural area (Figure 7f). As expected, the greater the amount of vegetation, the greater the absorption of the incident red light by the vegetation, and the darker the vegetated area within the center-pivot irrigation system. Conversely, fallow fields and areas not in agricultural production show up in much brighter tones. Unfortunately, the lake also shows up as an intermediate shade of gray. This is because the lake has received a substantial amount of suspended sediment, causing it to reflect more red radiant energy than it normally would if it were deep, nonturbid water. If we had additional blue, green, and perhaps near-infrared bands to analyze, it would probably be clear that this is a water body.
However, when viewing only a single band of imagery displayed in black-and-white tones, we could come to the conclusion that the lake is simply a large vegetated field, perhaps in the early stages of development when some of the sandy bare soil is still visible through the canopy. In fact, it has approximately the same gray tone as several of the adjacent fields within the center-pivot irrigation systems.

Human beings can differentiate between approximately 40 to 50 individual shades of gray in a black-and-white photograph or remote sensor image. However, it takes practice and skill to extract useful information from broadband panchromatic black-and-white images or black-and-white images of individual bands. For example, consider the U-2 photograph of a Russian Sputnik launching site shown in Figure 7g. Careful examination of the gray tones and the shadows by a trained analyst reveals that the excavated earth from the blast area depression was deposited in a large mound nearby. Human beings simply are not used to viewing the tops of objects in shades of gray. They must be trained. Furthermore, humans often have a very difficult time identifying features if the scene is composed of very high contrast information. This is exemplified by viewing terrestrial photographs of two very well-known objects: a Dalmatian and a cow in Figure 7h and i, respectively. Many novice analysts simply cannot find the Dalmatian or the cow in the photographs. This suggests that extremely high contrast aerial photographs or images are difficult to interpret and that it is best to acquire and interpret remotely sensed imagery that has a continuum of grayscale tones from black to gray to white, if possible.

Color

We may use additive color-combining techniques to create color composite images from the individual bands of remote sensor data. This introduces hue (color) and saturation in addition to grayscale tone (intensity). A color composite of the green, red, and near-infrared bands of the mangrove study area is found in Color Plate 2. Notice how much more visual information is present in the color composite. Humans can discriminate among thousands of subtle colors. In this false-color image, all vegetation is depicted in shades of red (magenta), sand is bright white, and the water is in various shades of blue. Most scientists prefer to acquire some form of multispectral data so that color composites can be made. This may include the collection of natural color aerial photography, color-infrared aerial photography, or multispectral data, where perhaps many individual bands are collected and a select few are additively color-combined to produce color images.
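A minimal sketch of the additive color combining just described: the near-infrared, red, and green bands are assigned to the red, green, and blue display channels to form a false-color composite in which vegetation appears red/magenta. The array names and tiny synthetic bands are ours, for illustration only.

```python
# Additive color combining of three bands into a false-color composite.
# Bands are assumed to be 8-bit numpy arrays of identical shape.
import numpy as np

def false_color_composite(nir: np.ndarray, red: np.ndarray,
                          green: np.ndarray) -> np.ndarray:
    """Stack NIR->R, red->G, green->B into an (rows, cols, 3) array."""
    return np.dstack([nir, red, green]).astype(np.uint8)

# Example with tiny synthetic 2 x 2 bands:
nir   = np.array([[200, 40], [180, 30]])
red   = np.array([[ 30, 20], [ 40, 10]])
green = np.array([[ 40, 25], [ 50, 15]])
rgb = false_color_composite(nir, red, green)
print(rgb.shape)   # (2, 2, 3); pixels bright in NIR display as red
```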
Unfortunately, some people's color perception is impaired. This means that they do not experience the same mental impression of a color (e.g., green) as does the vast majority of the population. While this may be somewhat of a disadvantage when selecting a shirt or tie to wear, many excellent image analysts have some color perception disorder. There are special tests, such as those shown in Color Plate 3, that can be used to determine if color blindness is present.

Size — Length, Width, Perimeter, and Area

The size of an object is one of its most distinguishing characteristics and one of the most important elements of image interpretation. The most commonly measured parameters are length (m), width (m), perimeter (m), area (m²), and occasionally volume (m³). The analyst should routinely measure the size of unknown objects. To do this it is necessary to know the scale of the photography (e.g., 1:24,000) and its general unit equivalent or verbal scale (i.e., 1 in. = 2,000 ft). In the case of digital imagery it is necessary to know the nominal ground spatial resolution of the sensor system (e.g., 1 × 1 m).

Measuring the size of an unknown object allows the interpreter to rule out many possible alternatives. One must be careful, however, because all of the objects in remote sensor data are at a scale less than 1:1, and we are not used to looking at a miniature version of an object that may measure only a few centimeters in length and width on the image. Measuring the size of a few well-known objects in an image, such as car length, road and railroad width, the size of a typical single-family house, etc., allows us to understand the size of unknown features in the image and eventually to identify them. There are several subjective relative size adjectives, including small, medium, and large. These adjectives should be used sparingly.

Figure 8  Elements of Image Interpretation — Size. a) Automobiles: diverse, but approximately 15 ft in length and 6 ft wide. b) Railroad: 4.71 ft between rails. c) Tractor-trailer rigs with trailers 45 to 50 ft long. d) Baseball: 90 ft between bases; 60 ft from home plate to the pitcher's mound. e) Diving board: approximately 12 ft in length. f) Cars and trucks can be used to scale the size of the air-conditioning units.

Objects that have relatively unique sizes can be used to judge the size of other objects in the scene. For example, midsize cars are approximately 15 ft long and 6 ft wide in the United States (Figure 8a). They may be two-thirds that size in Europe, Asia, etc. Notice that it is possible to differentiate between automobiles and pickup trucks. Also note that the 6-in. white line separating parking spaces is quite visible, giving some indication of the high spatial resolution of this aerial photography. The distance between regular-gauge railroad tracks is 4.71 ft in the United States (Figure 8b). This provides diagnostic information about the length of the individual railroad cars. The average length of a trailer on a tractor-trailer rig (Figure 8c) is 45 to 50 ft, allowing us to appreciate the size of the adjacent warehouse.

Field dimensions of major sports such as soccer, baseball (Figure 8d), football, and tennis are standardized worldwide. The distance between the bases on a baseball diamond is 90 ft, while the distance from the pitcher's mound to home plate is 60 ft. Most swimming pool diving boards (Figure 8e) are 12 ft long. If these objects are visible within an image, it is possible to determine the size of other objects in the scene by comparing their dimensions with those of the known object's dimensions. For example, the diameter of the two rooftop air-conditioning units shown in Figure 8f is at least the length of the car and truck also visible in the image.
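The known-object technique reduces to simple division. A hedged sketch follows; the function name and measured values are illustrative, not from the text.

```python
# Deriving a ground scale from an object of known size, then using it
# to size an unknown feature (all values are hypothetical).
def ground_units_per_pixel(known_length_ft: float, measured_pixels: float) -> float:
    """Ground feet represented by one pixel."""
    return known_length_ft / measured_pixels

# A midsize U.S. car (~15 ft long) spans 50 pixels in the image:
gsd = ground_units_per_pixel(15.0, 50)   # 0.3 ft per pixel
warehouse_pixels = 800
print(warehouse_pixels * gsd)            # ~240 ft long warehouse
```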
It is risky to measure the precise length, perimeter, and area of objects in unrectified aerial photography or other types of unrectified remote sensor data. The terrain is rarely completely flat within the instantaneous field of view of an aerial photograph or other type of image. This causes points that are higher than the average elevation to be closer to the sensor and points that are lower than the average elevation to be farther away from the sensor system. Thus, different parts of the image have different scales. Tall buildings, hills, and depressions may have significantly different scales than features at the average elevation within the photograph. Therefore, the optimum situation is where the aerial photography or other image data have been geometrically rectified and terrain-corrected to become, in effect, an orthophotograph or orthoimage where all objects are in their proper planimetric x,y location. It is then possible to measure the length, perimeter, and area of features using several methods, including polar planimeter, tablet digitization, dot-grid analysis, or digital image analysis.
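For the digital option, once a feature's boundary has been digitized from a rectified image, its perimeter and area follow from coordinate geometry. A minimal sketch using the standard shoelace formula; coordinates are assumed to be planimetric meters (e.g., UTM).

```python
# Perimeter and area of a polygon digitized from a rectified image.
import math

def perimeter_and_area(vertices):
    """vertices: list of (x, y) tuples tracing the polygon once."""
    n = len(vertices)
    perim = sum(math.dist(vertices[i], vertices[(i + 1) % n]) for i in range(n))
    # Shoelace formula for the enclosed area
    area = 0.5 * abs(sum(vertices[i][0] * vertices[(i + 1) % n][1]
                         - vertices[(i + 1) % n][0] * vertices[i][1]
                         for i in range(n)))
    return perim, area

# A 100 m x 50 m rectangular field:
print(perimeter_and_area([(0, 0), (100, 0), (100, 50), (0, 50)]))
# (300.0, 5000.0) -> 300 m perimeter, 5,000 m^2 area
```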
Shape

It would be wonderful if everything had a unique shape that could be easily discerned from a vertical or oblique perspective. Unfortunately, novice interpreters sometimes have difficulty even identifying the shape of the building they are in, much less appreciating the planimetric x,y shape of natural and man-made objects recorded in aerial photography or other imagery. Nevertheless, many features do have very unique shapes. There are numerous shape adjectives, such as (Table 1) linear, curvilinear, circular, elliptical, radial, square, rectangular, triangular, hexagonal, star, elongated, and amorphous (no unique shape).

There is an infinite variety of uniquely shaped natural and man-made objects in the real world. Unfortunately, we can only provide a few examples (Figure 9). Modern jet aircraft (Figure 9a) typically have a triangular (delta) shape and distinctively shaped shadows. Humankind's residential housing and public/commercial buildings may range from very simple rectangular mobile homes for sale (Figure 9b) to complex geometric patterns such as the Pentagon in Washington, DC (Figure 9c). The 0.5 × 0.5 m black-and-white infrared image of the Pentagon was obtained using a digital camera. Human transportation systems (Figure 9d) in developed countries usually have a curvilinear shape and exhibit extensive engineering.

Humankind modifies nature in a tremendous variety of ways, some of them very interesting. For example, Figure 9e depicts the curvilinear shape of carefully engineered levees (rising just 2 ft above the ground) that direct water continuously through a rice field in Louisiana. An adjacent field has been systematically plowed. But nature designs the most beautiful shapes, patterns, and textures, including the radial frond pattern of palm trees shown in Figure 9f. The best image interpreters spend a great amount of time in the field viewing and appreciating natural and man-made objects and their shapes. They are then in a good position to understand how these shapes appear when recorded on vertical or oblique imagery.

Figure 9  Elements of Image Interpretation — Shape. a) Triangular (delta) shape of a typical passenger jet. b) Rectangular single- and double-wide mobile homes for sale. c) Black-and-white infrared image of the Pentagon (courtesy Positive Systems, Inc.). d) A curvilinear cloverleaf highway intersection in the United States. e) The curvilinear shape of carefully engineered rice field levees in Louisiana. f) Radial palm tree fronds in San Diego, CA.

Texture

Texture is the characteristic placement and arrangement of repetitions of tone or color in an image. In an aerial photograph, it is created by tonal repetitions of groups of objects that may be too small to be discerned individually. Sometimes two features that have very similar spectral characteristics (e.g., similar black-and-white tones or colors) exhibit different texture characteristics that allow a trained interpreter to distinguish between them. We often use the textural adjectives smooth (uniform, homogeneous), intermediate, and rough (coarse, heterogeneous).

It is important to understand that the texture in a certain portion of a photograph is strictly a function of scale. For example, in a very large-scale aerial photograph (e.g., 1:500) we might be able to actually see the leaves and branches in the canopy of a stand of trees and describe the area as having a coarse texture. However, as the scale of the imagery becomes smaller (e.g., 1:5,000), the individual leaves and branches and even the tree crowns might coalesce, giving us the impression that the stand now has an intermediate texture, i.e., it is not smooth but definitely not rough. When the same stand of trees is viewed at a very small scale (e.g., 1:50,000), it might appear to be a uniform forest stand with smooth texture. Thus, texture is a function of the scale of the imagery and the ability of the interpreter to perceive and describe it.
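Digital analysts often quantify this element as local tonal variability. The sketch below scores each pixel by the standard deviation of gray tones within a moving window; the window size plays the role of scale discussed above. This is a generic texture measure of our own choosing, not a method prescribed by the text.

```python
# Texture as local variability of tone: smooth surfaces (water, concrete)
# score low; coarse canopies score high.
import numpy as np
from scipy.ndimage import uniform_filter

def texture_image(gray: np.ndarray, window: int = 7) -> np.ndarray:
    """Local standard deviation of a grayscale image within a moving window."""
    g = gray.astype(float)
    mean = uniform_filter(g, size=window)
    mean_sq = uniform_filter(g * g, size=window)
    # Variance = E[x^2] - E[x]^2, clipped at zero for numerical safety
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0))
```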
Several other texture adjectives are often used, including mottled, stippled, etc. It is difficult to define exactly what is meant by each of these textures. It is simply better to present a few examples, as shown in Figure 10. Both the avocado orchard and the trees in the courtyard have a coarse texture in this large-scale photograph (Figure 10a). Conversely, the concrete road and much of the grass yard have a smooth texture. Just behind the pool, the soil exhibits varying degrees of moisture content, causing a mottled texture.

In Figure 10b, the pine forest on the left has a relatively coarse texture, as the individual tree crowns are visible. The bright sandy beach has a smooth texture. Both the cattails near the shore and the waterlilies farther out into L-Lake on the Savannah River Site exhibit intermediate to rough textures. Finally, the waterlilies give way to dark, smooth-textured water.

Two piles of freshly cut pine logs at a sawmill in Georgia are shown in Figure 10c. The logs exhibit a coarse, heterogeneous texture with a linear pattern. The shadow between the stacks has a smooth texture.

Figure 10d is an interesting photograph of systematically placed circular marijuana plants interspersed in a field of corn. The physiology (structure) of the two types of plants, their spacing, and orientation combine to produce a coarse-textured agricultural field. The shadows produced by the marijuana plants contribute substantially to the texture of the area. Interestingly, the farmers' strategy appears to be working: few novice interpreters appreciate the subtle differences in texture visible in the field.

Part of the agricultural field in Figure 10e is being cultivated while the remainder is in fallow. The vegetated southwest portion of the center-pivot irrigation system has a relatively smooth texture. However, the remaining fallow portion of the field appears to have areas with varying amounts of soil moisture or different soil types. This causes this area to have a mottled texture. One part of the mottled-texture region still bears the circular scars of the six wheels of the center-pivot irrigation system.

Various vegetation and sand bar textures are present in the large-scale photograph of a tributary to the Mississippi River in Figure 10f. A dense stand of willows parallels the lower shoreline, creating a relatively fine texture when compared with the hardwoods behind it with their coarse texture. The sand bars interspersed with water create a unique, sinuous texture as well as a serrated pattern. Some of the individual tree crowns in the upper portion of the image are spaced well apart, creating a more coarse texture.

Figure 10  Elements of Image Interpretation — Texture. a) Relatively coarse-texture avocado orchard; the grass and road have a smooth texture. b) A variety of textures along L-Lake on the Savannah River Site. c) Coarse texture of freshly cut pine logs at a sawmill in Georgia. d) Coarse-texture South Carolina cornfield interspersed with circular marijuana plants. e) Mottled texture on fallow soil in a Georgia center-pivot irrigation system. f) A variety of textures along a tributary of the Mississippi.

Pattern

Pattern is the spatial arrangement of objects in the landscape (Figure 11). The objects may be arranged randomly or systematically. They may be natural, as with a drainage pattern, or human-made, as with the Township and Range land tenure system present in the western United States. Pattern is a very diagnostic characteristic of many features. Typical pattern adjectives include random, systematic, circular, centripetal, oval, curvilinear, linear, radiating, rectangular, hexagonal, pentagonal, octagonal, etc.

Examples of typical patterns captured on remote sensor data are shown in Figure 11. The first example depicts the systematic, triangular pattern of B-52s being dismantled at Davis-Monthan Air Force Base (Figure 11a). A large metal blade cuts the fuselage into a specific number of parts. The parts must remain visible for a certain number of days so that foreign countries can use their own aerial reconnaissance technology to verify that the specified number of B-52s have been removed from service as part of the strategic arms limitation process. Heavy equipment moving between the aircraft creates a unique curvilinear transportation pattern.

Seven large silos used to store agricultural grain are seen in Figure 11b. The individual silos are circular, but they are situated in a curvilinear pattern on the landscape. Numerous rectangular farm buildings oriented north-south are arranged in a random fashion.

A random, sinuous braided stream pattern is present at the mouth of Pen Branch, SC, in Figure 11c. This particular pattern resembles braided hair, hence the terminology.

Figure 11d depicts the systematically surveyed Township & Range cadastral system superimposed on an agricultural region in Texas. The NAPP photograph reveals small farmsteads separated by large tracts of agricultural land. The soil moisture and soil type differences in the fields combine to create an unsystematic, mottled soil texture.

Potatoes arranged in systematically spaced linear rows are shown in Figure 11e. The various rows are arranged in a rectangular pattern.
This near-infrared photograph reveals that some of the fields that appear dark have experienced late blight damage. A KVR-1000 Russian satellite photograph reveals the sys- tematic, radiating road pattern centered on the Arch de Tri- ‘mph in Paris (Figure 119) Elements of Visual Image interpr Shadow 4 People and benches recorded in kite ‘photography (courtesy Crs Benton) bs, Shadows cast from La Glonette- Arch ‘of Glory in Vienna, Austria. Bridge and sign shadows provide valuable information 4. Pyramids of Giza (couresy of Sovin- orsptaik and Aeral Images, Ine) «Shadows provide information about ‘object heights (Emerge, Ine) £ Orient images so that shadows toward the viewer during image azalysis. Figure 12 Elements of Image Interpretation — Shadow Shadow Most remote sensor data is collected within + 2 hours of solar noon to avoid extensive shadows in the imagery. This is because shadows from objects ean obscure other objects chat might otherwise be detected and identified, On the other hand, the shadow or silhouette cast by an object may be the only real clue to an object's identity. For example, consider che shadows cast by two people standing on a pier and the shadows cast by benches in Figure 12a. The shadows in che image actually provide more information than the objects themselves, La Gloriette — Arch of Glory — in Vienna, Austria, has unique statues on top of it (Figure 12b). Through careful evaluation of the shadows in the ver~ sical photograph, itis possible to determine the location of che statues on top of the building. Similarly, shadows east by signs or bridges (Figure 12c) are often more inf chan the objects themselves in vertical aerial photography. Very small-scale photography or imagery usually daes not contain shadows of objects unless they protrude a great dis- tance above surrounding terrain such as mountains, extremely tall buildings, etc. For example, consider the shadows cast by the great pyramids of Giza in Egypt (Figure 12d). The distinctive shadows are very diagnostic during image interpretation. In certain instances, shadows can provide clues about the height of an object when the image interpreter does not have access to stereoscopic imagery. For example, the building shadows in Figure 12e provide valuable information about the relative height ofthe building above the ground, i., that it is a one-story single-family residence. When interpreting imagery with substantial shadows, itis a good practice to orient the imagery so that the shadows fall 19 120 Elements of Visual Image interpr Height and Depth & Relief displacement i an important mnoscopie cue about object height. Francisco (courtesy GeoEye, Ine). jown San__¢, Bathymetry of Monteray Bay, CA, (courtesy EUW, Inc; SPOT Image, Inc) Figure 13 Elements of Image Interpretation — Height and Depth, coward the image interpreter such as those shown in Figure 12f, This keeps the analyst from experiencing pseudo- scopic illusion where low points appear high and vice versa, For example, itis difficult to interpret the photograph of the forest and wetland shown in Figure 12f when it is viewed with the shadows falling away from the viewer. Please turn che page around 180° and see how difficult it isto interpret correctly, Unfortunately, most aerial photography of the northem hemisphere is obtained during the leaf-off spring ‘months when the Sun casts shadows northward. This ean be quite disconcerting. The solution is to reorient the photo- graphs so that south is at the top. 
When interpreting imagery with substantial shadows, it is good practice to orient the imagery so that the shadows fall toward the image interpreter, as shown in Figure 12f. This keeps the analyst from experiencing pseudoscopic illusion, where low points appear high and vice versa. For example, it is difficult to interpret the photograph of the forest and wetland shown in Figure 12f when it is viewed with the shadows falling away from the viewer. Please turn the page around 180° and see how difficult it is to interpret correctly. Unfortunately, most aerial photography of the northern hemisphere is obtained during the leaf-off spring months when the Sun casts shadows northward. This can be quite disconcerting. The solution is to reorient the photographs so that south is at the top. Unfortunately, if we have to make a photomap or orthophotomap of the study area, it is cartographic convention to orient the map with north at the top. This can then cause some problems when laypersons interpret the photomap, because they do not know about pseudoscopic illusion.

Shadows on radar imagery are completely black and contain no information. Fortunately, this is not the case with shadows on aerial photography. While it may be relatively dark in the shadow area, there may still be sufficient light scattered into the area by surrounding objects to illuminate the terrain to some degree and enable careful image interpretation to take place.

Height and Depth

The ability to visually appreciate and measure the height (elevation) or depth (bathymetry) of an object or landform is one of the most diagnostic elements of image interpretation (Figure 13). Stereoscopic parallax is introduced to remotely sensed data when the same object is viewed from two different vantage points along a flightline. Viewing these overlapping photographs or images using stereoscopic instruments is the optimum method for visually appreciating the three-dimensionality of the terrain and for extracting accurate x, y, and z topographic and/or bathymetric information.
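The quantitative use of stereoscopic parallax belongs to photogrammetry, but the standard relation is worth noting here. Using the conventional photogrammetric notation (not introduced in this chapter), object height h follows from the flying height H above the local datum, the absolute stereoscopic parallax P measured at the base of the object, and the differential parallax dP between the object's top and base:

```latex
h = \frac{H \cdot dP}{P + dP}
```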
However, there are also monoscopic cues that we can use to appreciate the height or depth of an object. For example, any object such as a building or utility pole that protrudes above the local datum will exhibit radial relief displacement outward from the principal point of a typical vertical aerial photograph. In effect, we are able to see the side of the feature, as demonstrated in Figure 13a. Also, all objects protruding above the local datum cast a shadow that provides diagnostic height or elevation information, such as the various buildings in San Francisco shown in Figure 13b. Masking also takes place in some images, where tall objects obscure objects behind them, making it clear that one object has greater elevation than another. For example, the building at the top of Figure 13b is masking the buildings behind it, suggesting that it has greater height.

The optimum method of obtaining bathymetric measurements is to use a sonar remote sensing device, which sends out a pulse of sound and measures how long it takes for the sound to pass through the water column, bounce off the bottom, and be recorded by the sensor. The image of Monterey Bay, CA, in Figure 13c was obtained using SONAR and merged with a SPOT image of the terrestrial landscape.

Figure 13  Elements of Image Interpretation — Height and Depth. a) Relief displacement is an important monoscopic cue about object height. b) Downtown San Francisco (courtesy GeoEye, Inc.). c) Bathymetry of Monterey Bay, CA (courtesy EUW, Inc.; SPOT Image, Inc.).

Site, Situation, and Association

Site, situation, and association characteristics are very important when trying to identify an object or activity. A site has unique physical and/or socioeconomic characteristics. The physical characteristics might include elevation, slope, aspect, and type of surface cover (e.g., bare soil, grass, shrub/scrub, rangeland, forest, water, asphalt, concrete, housing, etc.). Socioeconomic site characteristics might include the value of the land, the land-tenure system at the site (metes and bounds versus Township and Range), adjacency to water, and/or adjacency to a certain type of population (professional, blue-collar, retired, etc.).

Situation refers to how certain objects in the scene are organized and oriented relative to one another. Often, certain raw materials, buildings, pipelines, and finished products are situated in a logical, predictable manner.

Association refers to the fact that when you find a certain phenomenon or activity, you almost invariably encounter related or associated features or activities. Site, situation, and association elements of image interpretation are rarely used independently when analyzing an image. Rather, they are used synergistically to arrive at a logical conclusion. For example, sewage disposal plants are almost always located on flat sites situated near a water source so they can dispose of the treated water, and they exist relatively close to the producing community. Large commercial shopping malls typically have multiple large buildings on level sites, massive parking lots, and are ideally situated near major transportation arteries and population centers.

Thermal electric power plants such as the Haynes Steam Plant in Long Beach, CA, shown in Figure 14a are usually located on flat, well-engineered sites with large tanks of petroleum (or another natural resource) nearby that is burned to create steam to propel the electric generators. Man-made levees, called "revetments," often encompass the tank farm to contain the petroleum in the event of an accident. Thermal electric power plants are often associated with some type of recirculating cooling ponds. The water is used to cool critical steam-generating components.

Sawmills such as the one shown in Figure 14b are usually sited on flat terrain within 20 km of many stands of trees and associated with large piles of raw timber, well-organized piles of finished lumber, a furnace to dispose of wood waste products, and an extensive processing facility. Railroad spurs are often used to transport the finished lumber or wood-chip products to market.

Nuclear power plants exist on extremely well-engineered level sites. They have large concrete reactor containment building(s). The site may contain large recirculating cooling-water ponds or enormous cooling towers, such as those under construction at the Vogtle Nuclear Power Plant near Augusta, GA, in Figure 14c. Power-generating plants do not need to be adjacent to the consuming population, as electricity can be transported economically over great distances.

Figure 14  Elements of Image Interpretation — Site, Situation, and Association. a) Thermal electric Haynes Steam Plant in Long Beach, CA. b) A sawmill with its associated piles of raw and finished lumber. c) Vogtle Nuclear Power Plant near Augusta, GA.

Expert image analysts bring site, situation, and association knowledge to bear on an image interpretation problem. Such knowledge is obtained by observing phenomena in the real world. The best image analysts have seen and appreciate a diverse array of natural and man-made environments. It is difficult to identify an object in an image if one has never seen the object in the real world and does not appreciate its site, situation, and association characteristics.

Methods of Search

Photo-interpretation has been taking place since Gaspard-Félix Tournachon (Nadar) took the first successful aerial photograph in France in 1858. Over the years, scientists have developed some valuable approaches to interpreting remotely sensed data, including: 1) utilizing collateral (ancillary) information, 2) converging the evidence, and 3) applying the multi-concept in image analysis.
Using Collateral Information

Trained image interpreters rarely interpret aerial photography or other remote sensor data in a vacuum. Instead, they collect as much collateral (often called ancillary) information about the subject and the study area as possible. Some of the major types of collateral information are summarized in Table 2, including the use of a variety of maps for orientation, political boundary information, property-line cadastral data, geodetic control (x,y,z), forest stand data, geologic data, hazard information, surface and subsurface hydrologic data, socioeconomic data, soil taxonomy, topographic and bathymetric data, transportation features, and wetland information. Ideally, these data are stored in a geographic information system (GIS) for easy retrieval and overlay with the remote sensor data.

It is useful to contact the local National Weather Service to obtain quantitative information on the meteorological conditions that occurred on the day the remote sensor data were collected (cloud cover, visibility, and precipitation) and for the days preceding data collection. USGS water-supply and stream-gauge reports are also useful.

Scientists also obtain local street maps, terrestrial photographs, local and regional geography books, and journal and popular magazine articles about the locale or subject matter. They talk with local experts. Well-trained image analysts get into the field to appreciate firsthand the lay of the land, its subtle soil and vegetation characteristics, the drainage and geomorphic conditions, and human cultural impact.

Often much of this collateral spatial information is stored in a GIS. This is particularly useful since the remote sensor data can be geometrically registered to the spatial information in the GIS database and important interrelationships evaluated.

Table 2. Collateral information often used in the interpretation of aerial photography and other remotely sensed data in the United States.

General orientation — International Map of the World 1:1,000,000; National Geospatial-Intelligence Agency (NGA) 1:100,000 and 1:250,000; USGS 7.5-min 1:24,000; USGS 15-min 1:63,360; image browsing systems: Google Earth, Space Imaging, DigitalGlobe, SPOT
Boundaries or districts — USGS 7.5-min 1:24,000; USGS 15-min 1:63,360; boards: state, county, city, school, fire, voting, water/sewer
Cadastral — city and county tax maps
Geodetic control — USGS digital line graph - elevation; NGS - nautical and bathymetric charts
Forestry — USFS - forest stand information
Geology — USGS - surface and subsurface
Hazards — FEMA - flood insurance maps; USCG - environmental sensitivity index
Hydrology — USGS digital line graph - surface hydrology; NGS - nautical and bathymetric charts; USGS - water-supply reports; USGS - stream-gauge reports
Socioeconomic — Bureau of the Census - demographic data; TIGER block data - census tracts
Soils — SCS, NRCS - soil taxonomy maps
Topography/bathymetry — USGS - National Elevation Dataset (NED); NGA - digital terrain elevation data (DTED); USCG - nautical and bathymetric charts
Transportation — USGS digital line graph - transportation; county and state transportation maps
Weather/atmosphere — National Weather Service - NEXRAD
Wetland — USGS - National Wetland Inventory maps; NOAA - Coastal Change Analysis Program
Convergence of Evidence

It is a good idea to work from the known to the unknown. For example, perhaps we are having difficulty identifying a particular type of industry in an aerial photograph. Careful examination of what we do know about the things surrounding and influencing the object of interest can provide valuable clues that allow us to make the identification. This might include a careful interpretation of the building characteristics (length, width, height, number of stories, type of construction), the surrounding transportation pattern (e.g., parking facilities, a railroad spur to the building, adjacency to an interstate), site slope and aspect, site drainage characteristics, unique utilities coming into or out of the facility (pipelines, water intake or output), unusual raw materials or finished products in view outside the building, and methods of transporting the raw and finished goods (tractor-trailers, loading docks, ramps, etc.). We bring all the knowledge we have to the image interpretation problem and converge our evidence to identify the object or process at work.

Let us consider another example of convergence of evidence. Suppose we were asked to describe the type of airport facility shown in Figure 15. At first glance we might conclude that this is a civilian airport with commercial jets. However, upon closer inspection we see that jets 2, 3, and 6 appear normal (e.g., large delta-shaped Boeing 707s), but jets 4 and 5 exhibit substantially different shadow patterns on the fuselage and on the ground. Jet number 1 also exhibits some unusual shadow characteristics. Furthermore, we note that jets 4 and 5 have an unusual dark circular object with a white bar on it that appears to lie on top of the jets.

An image analyst who has seen a Boeing E-3 Airborne Warning and Control System (AWACS) aircraft on the ground would probably identify the AWACS aircraft quickly. Nonmilitary image analysts would need to:

+ note the absence of commercial airport passenger boarding/unloading ramp facilities, suggesting that this is a military airport;
+ examine the unusual shadow patterns;
+ consult manuals containing examples of various candidate aircraft that could cast such shadows (e.g., Figure 15);
+ converge the evidence to arrive at the correct conclusion.

The first step to camouflage an E-3 sitting on the tarmac would be to align the white bar with the fuselage (please refer to jet 1).

The Multi-concept

Robert Colwell of the Forestry Department at the University of California at Berkeley put forth the multi-concept in image interpretation in the 1960s (Colwell, 1997). He suggested that the most useful and accurate method of scientific image interpretation consisted of performing the following types of analysis: multispectral, multidisciplinary, multiscale, and multitemporal. The multi-concept was further elaborated upon by Estes et al. (1983) and Teng (1997).

Colwell pioneered the use of multiband aerial photography and multispectral remote sensor data.
The Multi-concept

Robert Colwell of the Forestry Department at the University of California at Berkeley put forth the multi-concept in image interpretation in the 1960s (Colwell, 1997). He suggested that the most useful and accurate method of scientific image interpretation consisted of performing the following types of analysis: multispectral, multidisciplinary, multiscale, and multitemporal. The multi-concept was further elaborated upon by Estes et al. (1983) and Teng (1997).

Colwell pioneered the use of multiband aerial photography and multispectral remote sensor data. He documented that in agricultural and forest environments, measurements made in multiple discrete wavelength regions (bands) of the electromagnetic spectrum were usually more valuable than acquiring single broadband panchromatic-type imagery. For example, Figure 7 documented the significant difference in information content found in green, red, and near-infrared multispectral images of mangrove.
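The value of discrete bands can be illustrated with the widely used normalized difference vegetation index (NDVI), which contrasts red and near-infrared reflectance. NDVI is a standard index rather than something introduced in this chapter, and the reflectance arrays below are hypothetical:

    # Illustrative sketch: NDVI from hypothetical red and near-infrared
    # reflectance arrays. Healthy vegetation absorbs red light and
    # reflects near-infrared energy, so NDVI approaches 1 over vigorous
    # vegetation and falls toward 0 over bare soil.
    import numpy as np

    red = np.array([[0.08, 0.10], [0.20, 0.22]])   # hypothetical red band
    nir = np.array([[0.45, 0.50], [0.22, 0.25]])   # hypothetical NIR band

    ndvi = (nir - red) / (nir + red + 1e-9)        # small term avoids /0
    print(np.round(ndvi, 2))  # high values (top row) indicate vegetation

No single panchromatic brightness value could separate these two surface conditions as cleanly as the ratio of two discrete bands does.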
Colwell also suggested that multiscale (often called multistage) photography or imagery of an area was very important. Smaller-scale imagery (e.g., 1:80,000) was useful for placing intermediate-scale imagery (e.g., 1:40,000) in its proper regional context. Then, very large-scale imagery (e.g., > 1:10,000) could be used to provide detailed information about local phenomena. In situ field investigation is the largest scale utilized and is very important. Each scale of imagery provides unique information that can be used to calibrate the others.
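In the digital domain, a rough analogue of multiscale analysis (an illustration only, not Colwell's procedure) is to aggregate a fine-resolution band into a coarser overview, examining the overview for regional context before returning to the full-resolution data:

    # Illustrative sketch: build a coarser "regional context" image from
    # a fine-resolution band by 2 x 2 block averaging (one pyramid level).
    import numpy as np

    fine = np.arange(16, dtype=float).reshape(4, 4)  # hypothetical fine band

    # Group pixels into 2 x 2 blocks and average each block.
    coarse = fine.reshape(2, 2, 2, 2).mean(axis=(1, 3))
    print(coarse)  # a 2 x 2 overview: detail traded for regional context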
Professor Colwell was a great believer in bringing many multidisciplinary experts together to focus on a remote sensing image analysis or information extraction problem. The real world consists of soils, surface and subsurface geology, vegetation, water, atmosphere, and man-made urban structure. In this age of increasing scientific specialization, it is difficult for any one person to understand and extract all the pertinent and valuable information present within a remote sensor image. Therefore, Colwell strongly suggested that image analysts embrace the input of other multidisciplinary scientists in the image interpretation process. This philosophy and process often yields synergistic, novel, and unexpected results as multidisciplinary scientists each bring their expertise to bear on the landscape appreciation problem (Donnay et al., 2001). Table 3 lists the disciplines of colleagues that often collaborate when systematically studying a certain topic.

Table 3. Multidisciplinary scientists bring their unique training to the image interpretation process.

Agriculture: agronomy, agricultural engineering, biology, biogeography, geology, landscape ecology, soil science.
Biodiversity, habitat: biology, zoology, biogeography, landscape ecology, marine science, soil science.
Database and algorithm preparation: cartography, GIS, computer science, photogrammetry, programming, analytical modeling.
Forestry, rangeland: forestry, agronomy, rangeland ecology, landscape ecology, biogeography, soil science.
Geodetic control: geodesy, surveying, photogrammetry.
Geology: geology, geomorphology, agronomy, soils, geography.
Hazards: geology, hydrology, urban and physical geography, human geography, sociology.
Hydrology: hydrology, chemistry, geology, geography.
Topography/bathymetry: geodesy, surveying, photogrammetry.
Transportation: transportation engineering, city planning, urban geography.
Urban studies: urban, economic, and political geography; city planning; transportation engineering; civil engineering; landscape ecology.
Weather/atmosphere: meteorology, climatology, physics, chemistry, atmospheric science.
Wetland: biology, landscape ecology, biogeography.

While single-date remote sensing investigations can yield important information, they do not always provide information about the processes at work. Conversely, a multitemporal remote sensing investigation obtains more than one image of an object. Monitoring the phenomena through time allows us to understand the processes at work and to develop predictive models (Lunetta and Elvidge, 1998; Schill et al., 1999). Colwell pioneered the concept of developing crop phenological calendars in order to monitor the spectral changes that take place as plants progress through the growing season. Once crop calendars are available, they may be used to select the optimum dates during the growing season to acquire remote sensor data. Many other phenomena, such as residential urban development, have been found to undergo predictable cycles that can be monitored using remote sensor data. A trained image analyst understands the phenological cycle of the phenomena he or she is interpreting and uses this information to acquire the optimum type of remote sensor data on the optimum days of the year.
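A minimal multitemporal sketch, assuming two co-registered NDVI arrays from different dates (the values and the 0.30 change threshold are hypothetical), flags pixels whose vegetation signal changed markedly between acquisitions:

    # Illustrative sketch: multitemporal change detection by differencing
    # two co-registered NDVI arrays acquired on different dates.
    import numpy as np

    ndvi_spring = np.array([[0.65, 0.60], [0.15, 0.12]])
    ndvi_summer = np.array([[0.70, 0.20], [0.18, 0.55]])

    change = ndvi_summer - ndvi_spring
    changed = np.abs(change) > 0.30   # flag large gains or losses
    print(changed)  # True where the vegetation signal changed markedly

Knowing the local crop calendar tells the analyst whether a flagged change (e.g., the sharp loss in the upper-right pixel) is expected senescence or something that warrants investigation.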
Conclusion

We now have an understanding of the fundamental elements of image interpretation. We can utilize the elements of image interpretation to carefully analyze aerial photography or other types of optical (blue, green, red, and near-infrared wavelength) remote sensor data. Based on this foundation, we are prepared to progress to more sophisticated image analysis techniques, including the extraction of quantitative information from remote sensor data using principles of photogrammetry.

References

BBC, 2005, "What Lies Behind the Zimbabwe Demolitions?" BBC News World Edition, London: BBC News, July 26.

Boeing, 2005, Airborne Warning and Control System (AWACS) Aircraft, Seattle: Boeing Aircraft Corporation, www.boeing.com/defense-space/ic/awacs/e3svc/e3overview.html.

Bossler, J. D., Jensen, J. R., McMaster, R. B. and C. Rizos, 2002, Manual of Geospatial Science and Technology, London: Taylor & Francis, 623 p.

Colwell, R. N., 1997, "History and Place of Photographic Interpretation," Manual of Photographic Interpretation, W. Philipson (Ed.), 2nd Ed., Bethesda: ASP&RS, 3-48.

Davis, B. A. and J. R. Jensen, 1998, "Remote Sensing of Mangrove Biophysical Characteristics," Geocarto International, 13(4):55-64.

Donnay, J., Barnsley, M. J. and P. A. Longley, 2001, Remote Sensing and Urban Analysis, N.Y.: Taylor & Francis, 268 p.

Estes, J. E., Hajic, E. J. and L. R. Tinney, 1983, "Fundamentals of Image Analysis: Analysis of Visible and Thermal Infrared Data," Manual of Remote Sensing, R. N. Colwell (Ed.), Bethesda: ASP&RS, 1:1039-1040.

Goetz, S. J., 2002, "Recent Advances in Remote Sensing of Biophysical Variables: An Overview of the Special Issue," Remote Sensing of Environment, 79:145-146.

Haack, B., Guptill, S., Holz, R., Jampoler, S., Jensen, J. R. and R. A. Welch, 1997, "Urban Analysis and Planning," Manual of Photographic Interpretation, W. Philipson (Ed.), 2nd Ed., Bethesda: ASP&RS, 517-547.

Jensen, J. R., 2005, Introductory Digital Image Processing: A Remote Sensing Perspective, 3rd Ed., Upper Saddle River: Prentice-Hall, Inc., 525 p.

Jensen, J. R. and D. C. Cowen, 1999, "Remote Sensing of Urban/Suburban Infrastructure and Socioeconomic Attributes," Photogrammetric Engineering & Remote Sensing, 65(5):611-622.

Jensen, J. R., Coombs, C., Porter, D., Jones, B., Schill, S. and D. White, 1999, "Extraction of Smooth Cordgrass (Spartina alterniflora) Biomass and LAI Parameters from High Resolution Imagery," Geocarto International, 13(4):25-34.

Jensen, J. R., Saalfeld, A., Broome, F., Cowen, D., Price, K., Ramsey, D., Lapine, L. and E. L. Usery, 2005, "Chapter 2: Spatial Data Acquisition and Integration," in R. B. McMaster and E. L. Usery (Eds.), A Research Agenda for Geographic Information Science, Boca Raton: CRC Press, 17-40.

Jensen, R. R., Gatrell, J. D. and D. D. McLean (Eds.), 2005, Geo-Spatial Technologies in Urban Environments, N.Y.: Springer, 176 p.

Kelly, M., Estes, J. E. and K. A. Knight, 1999, "Image Interpretation Keys for Validation of Global Land-Cover Data Sets," Photogrammetric Engineering & Remote Sensing, 65:1041-1049.

Konecny, G., 2003, Geoinformation: Remote Sensing, Photogrammetry and GIS, London: Taylor & Francis, 248 p.

Linder, W., 2003, Digital Photogrammetry: Theory and Applications, Berlin: Springer-Verlag, 189 p.

Lloyd, R., Hodgson, M. E. and A. Stokes, 2002, "Visual Categorization with Aerial Photographs," Annals of the Association of American Geographers, 92(2):241-266.

Lunetta, R. S. and C. D. Elvidge, 1998, Remote Sensing Change Detection: Environmental Monitoring Methods and Applications, Ann Arbor: Ann Arbor Press, 318 p.

McCoy, R. M., 2005, Field Methods in Remote Sensing, N.Y.: Guilford Press, 159 p.

McGlone, J. C., 2004, Manual of Photogrammetry, 5th Ed., Bethesda: ASP&RS, 1151 p.

Miller, R. B., Abbott, M. R., Harding, L. W., Jensen, J. R., Johannsen, C. J., Macauley, M., MacDonald, J. S. and J. S. Pearlman, 2003, Using Remote Sensing in State and Local Government: Information for Management and Decision Making, Washington: National Academy Press, 97 p.

Philipson, W., 1997, Manual of Photographic Interpretation, 2nd Ed., Bethesda: ASP&RS, 555 p.

Robbins, J., 1999, "High-Tech Camera Sees What Eyes Cannot," New York Times, Science Section, September 14, D5.

Schill, S., Jensen, J. R. and D. C. Cowen, 1999, "Bridging the Gap Between Government and Industry: the NASA Affiliate Research Center Program," Geo Info Systems, 9(9):26-33.

Teng, W. L., 1997, "Fundamentals of Photographic Interpretation," Manual of Photographic Interpretation, W. Philipson (Ed.), 2nd Ed., Bethesda: ASP&RS, 49-113.

Wolf, P. R. and B. A. Dewitt, 2000, Elements of Photogrammetry with Applications in GIS, N.Y.: McGraw-Hill, 608 p.

Remote Sensing Can Provide Knowledge by Measuring Energy Characteristics in Spectral Regions Beyond Our Human Visual Perception

[Panel titles: a. Green reflectance of an agricultural area in Saudi Arabia. b. Red reflectance. c. Near-infrared reflectance. d. Color composite (RGB = near-infrared, red, green).]

Color Plate 1. Multispectral imagery of center-pivot agriculture in Saudi Arabia. a,b) Vegetation absorbs most of the green and red incident energy, causing vegetated fields to appear dark in green and red multispectral images. c) Conversely, vegetation reflects a substantial amount of the incident near-infrared energy, causing it to appear bright. In this example, several fields appear dark in the green and red images (possibly due to recent irrigation, stubble from a previous crop, or plowing), suggesting that vegetation is present. Careful examination of the same fields in the near-infrared image reveals that very little vegetation is present. The near-infrared image also provides detailed information about the spatial distribution of the biomass present in each field. d) A color-infrared color composite makes it clear which fields are vegetated (IRS-1C LISS III images courtesy of Indian Space Agency).
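As a hedged illustration of how a color-infrared composite such as Color Plate 1d is assembled digitally (the arrays below are hypothetical and the procedure is not given in the text), the near-infrared, red, and green bands are simply stacked into the red, green, and blue display channels:

    # Illustrative sketch: stacking near-infrared, red, and green bands
    # into the red, green, and blue display channels to form a
    # color-infrared composite. The 2 x 2 arrays are hypothetical.
    import numpy as np

    green = np.array([[0.10, 0.12], [0.11, 0.13]])
    red   = np.array([[0.08, 0.30], [0.09, 0.28]])
    nir   = np.array([[0.50, 0.20], [0.48, 0.22]])

    # Display assignment: R <- NIR, G <- red, B <- green, so vigorous
    # vegetation (high NIR reflectance) renders red in the composite.
    composite = np.dstack([nir, red, green])
    print(composite.shape)  # (2, 2, 3) array ready for display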

A Mangrove Island near Key West, FL

Color Plate 2. Calibrated Airborne Multispectral Scanner (CAMS) data of one of the island keys near Key West, FL. The spatial resolution is 2.5 × 2.5 m. The image is a color composite of green, red, and near-infrared bands. The dense stands of healthy red mangrove (Rhizophora mangle) are 6 to 12 m tall. The tallest trees are found around the periphery of the island, where tidal flushing provides a continuous flow of nutrients. Two areas of bare sand are located in the interior of the island. A coral reef surrounds the island (Davis and Jensen, 1998).

Color Perception Charts

Color Plate 3. Some people do not perceive color like the majority of the population. This does not preclude them from becoming excellent image analysts; they only need to understand the nature of their color perception differences and adjust. Two color perception charts are provided. The viewer should see a "3" in (a) and a "6" in (b). If you see a "5," the next time you have an eye examination you might ask the optometrist to give you a color perception test to identify the nature of your color perception (courtesy Smith, J. T., 1968, Manual of Color Aerial Photography, American Society for Photogrammetry & Remote Sensing).