
Geographical Information Systems (GIS) and Remote Sensing


G. Rantitsch, Department für Angewandte Geowissenschaften und Geophysik, Montanuniversität Leoben, gerd.rantitsch@mu-leoben.at

CONTENT
Literature and Information sources
1. Introduction and Definition (Definition, Components, Data representation, Geographic Database Concepts, Analysis in GIS, Description of a GIS study, Advanced technologies)
2. Data sources and Data (Database development, Data structure, Attribute data, GPS data, Data from remote sensing)
3. Georeferencing
4. GIS Databases (Data types, Geodatabases, SQL, Metadata)
5. Visualization
6. GIS Analysis (Statistics, Geostatistics, Query, Overlay operations, Image processing)
7. 3D-GIS

Literature
Text books:
BARTELME, N.: Geoinformatik. Modelle, Strukturen, Funktionen. 4. Auflage. Springer Verlag, 2005.
BÄHR, H.P., VÖGTLE, Th. (Hrsg.): Digitale Bildverarbeitung. Anwendungen in Photogrammetrie, Fernerkundung und GIS. 4. Auflage. Herbert Wichmann Verlag, 2005.
BONHAM-CARTER, G.F.: Geographical Information Systems for Geoscientists. Modelling with GIS. Pergamon, 1994.
CARR, J.R.: Data Visualization in the Geosciences. Prentice Hall, 2002.
DAVIS, J.C.: Statistics and data analysis in Geology. 2nd edition. John Wiley & Sons, 1986; 3rd edition, 2002.
HOHN, M.E.: Geostatistics and Petroleum Geology. Kluwer, 1999.
ISAAKS, E.H., SRIVASTAVA, R.M.: An introduction to applied geostatistics. Oxford University Press, 1989.
LILLESAND, Th.M., KIEFER, R.W.: Remote sensing and image interpretation. 4th edition. Wiley & Sons, 2005.
LONGLEY, P.A., GOODCHILD, M.F., MAGUIRE, D.J., RHIND, D.W. (Eds.): Geographical Information Systems. Principles and technical issues. 2 volumes, Wiley, 1999.
ORMSBY, T.: Getting to know ArcGIS desktop. Basics of ArcView, ArcEditor, and ArcInfo; updated for ArcGIS 9. ESRI Press, Redlands, Calif., 572 pp, 2007.

Journals:
Computers & Geosciences (downloadable software): http://www.elsevier.com/wps/find/journaldescription.cws_home/398/description#description
Mathematical Geosciences (theoretical aspects): http://springerlink.metapress.com/content/121014/?p=e375bfd9579d463186c5d52a4d91b79d&pi=0

Internet resources:
Software: International Association for Mathematical Geology [http://www.iamg.org/]
Image processing: Idrisi [http://www.clarklabs.org/]
ArcView: ESRI [http://www.esri.com/]; local distributor in Austria - Synergis [http://www.synergis.at/]
Geostatistics: AI-GEOSTATS, The Central Server for GIS & Spatial Statistics on the Internet [http://www.ai-geostats.org/]; GSLIB, professional geostatistical software (freeware) [http://www.gslib.com/]
Further education: UNIGIS distance learning program [http://www.unigis.ac.at]; GIS courses at the Zentrum für Geoinformatik Salzburg [http://www.zgis.at]; ESRI virtual campus [http://campus.esri.com]

Document Credit: This document is compiled mainly from Idrisi (Eastman, 1999 a, b) and ESRI documents and from software help information. It is intended for personal use only.

1. GIS Introduction and Definition


Definition: Geography is information about the earth's surface and the objects found on it, as well as a framework for organizing knowledge. A Geographical Information System (GIS) is a computer-assisted system for the acquisition, storage, analysis and display of geographical data. A GIS fulfils these major roles through one or more of the following activities with spatial data: organization, visualization, query, combination, analysis, prediction. People use GIS to measure change, do spatial analysis, perform spatial modeling, transact geographic accounting, and obtain decision support. GIS solutions are emerging as an integral component in nearly every type of business and government service.

In contrast to GIS, computer aided drawing (CAD) systems on their own are not designed to handle non-spatial attributes, except in a basic manner, and are unsuitable for manipulating digital images. They are therefore not good for handling data tables or gridded data, nor are they able to provide the analytical functions of a GIS.

GIS has had an enormous impact on virtually every field that manages and analyzes spatially distributed data. For those who are unfamiliar with the technology, it is easy to use it as a magic box. To experienced analysts, however, GIS becomes simply an extension of one's own analytical thinking, a tool for thought. The system and the analyst cannot be separated.

Most commercial GIS are not specifically oriented towards the geosciences. GIS designed specifically for geological work, particularly for mining and oil exploration, need to be fully three-dimensional, so that each data object is characterized by its location in space with three spatial coordinates. For these tasks specialized software solutions are needed.

Geological applications of GIS include: hazard mapping (e.g. slope stability, earthquake damage zonation, volcanic eruption impacts, flood damage from rivers and tsunamis, coastal erosion, impacts of pollution, global warming); site selection for engineering projects (e.g. waste disposal, pipeline, road and railway routing, dams, building developments); resource evaluation for a variety of geological commodities (e.g. mineral exploration); investigation of possible cause and effect linkages of environmental interest between different spatial datasets (e.g. incidence of disease in relationship to environmental geochemistry); and exploratory investigations of spatial inter-relationships between datasets during the course of geological research (e.g. evaluation of spectral signatures from satellite images in relationship to lithology and vegetation).

Components: Spatial and Attribute database: The database is a collection of maps and associated information in digital form. The database is comprised of a spatial database describing the geography (shape and position) of earth surface features, and an attribute database describing the characteristics of these features. In the most modern systems, the spatial and attribute databases are closely integrated into a single entity.

Cartographic display system: This component allows one to take selected elements of the database and produce map output on the screen or on hardcopy devices.
Digitizing system: With a digitizing system (a digitizing tablet, or digitizing of scanned images on the screen), one can take existing paper maps and convert them into digital form.
Database management system: A DBMS is used to input, manage and analyze the attribute and spatial components of the data stored.
Geographic analysis system: With a GIS, the capabilities of traditional database query are extended to include the ability to analyze data based on their location. The system can contribute the results of an analysis as a new addition to the database. Thus, a GIS plays a vital role in extending the database through the addition of knowledge about relationships between features.
Image processing system: Some software systems (hybrid GIS systems) also include the ability to analyze (remotely sensed) images and provide specialized statistical analyses.
Statistical analysis system: A GIS offers both traditional statistical procedures and some specialized routines for the statistical analysis of spatial data.

Data Representation: With vector data, the boundaries of a feature are defined by a series of points which, when joined with straight lines, form the graphical representation of that feature. The points themselves are encoded with their coordinates. With raster data, the study area is subdivided into a fine mesh of grid cells in which we record the attribute of the earth's surface at that point. Raster data are typically data intensive since they must record a value at every cell location regardless of whether that cell holds information of interest or not. However, the advantage is that geographical space is uniformly defined in a simple and predictable fashion. As a result, raster systems have substantially more analytical power than their vector counterparts in the analysis of continuous space and are thus ideally suited to the study of data that are continuously changing over space. Raster systems also tend to be very rapid in the evaluation of problems that involve various mathematical combinations of the data in multiple layers. While raster systems are predominantly analysis oriented, vector systems tend to be more database management oriented. Vector systems are quite efficient in their storage of map data because they only store the boundaries of features and not that which is inside those boundaries. They excel at problems concerning movements over a network. For many, it is the simple database management functions and excellent mapping capabilities that make vector systems attractive.
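As a simple illustration of the two data models, the following sketch (Python with NumPy; the coordinates and class codes are invented for illustration) stores a vector feature as a series of coordinate pairs and a raster layer as a grid of cell values:

import numpy as np

# Vector representation: a feature boundary as a series of coordinate pairs
# (hypothetical map coordinates in metres).
stream = [(451200.0, 5272100.0), (451430.0, 5272260.0), (451700.0, 5272300.0)]

# Raster representation: a grid of cells, each recording an attribute value.
landuse = np.zeros((200, 300), dtype=np.uint8)   # 200 rows x 300 columns, class 0 = unclassified
landuse[50:80, 120:180] = 3                      # e.g. class 3 = forest in one part of the grid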

Geographic Database Concepts: A geographic database is organized in a fashion similar to a collection of maps (i.e. coverages). Each map will typically contain information on only a single feature type, and may contain a whole series of attributes that pertain to those features. Usually data sets are divided into unitary layers; a layer contains all the data for a single attribute. When map data are encoded in digital form, scale differences are removed: the digital data may be displayed or printed at any scale, and digital data layers that were derived from paper maps of different scales, but covering the same geographic area, may be combined. Many GIS packages provide utilities for changing the projection and reference system of digital layers. This allows multiple layers, digitized from maps having various projections and reference systems, to be converted to a common system, and such layers can then be merged with ease. It is important to note that the issue of the resolution of the information in the data layer remains. Although features digitized from a poster-sized world map could be combined in a GIS with features digitized from a very large scale local map, this would normally not be done: the level of accuracy and detail of the digital data can only be as good as that of the original maps. All spatial data in a GIS are georeferenced. Georeferencing refers to the location of a layer or coverage in space as defined by a known coordinate referencing system.

Analysis in GIS: The organization of the database into layers provides rapid access to the data elements, which is essential for geographic analysis. Spatial analyses study the locations and shapes of geographic features and the relationships between them. Spatial analysis is useful when evaluating suitability, when making predictions, and for gaining a better understanding of how geographic features and phenomena are located and distributed. Database query asks questions about the currently stored information. We can query by location (e.g. what lithology is at this location?) or by attribute (e.g. what areas have high levels of Pb?). Complex combinations of conditions may also be asked (e.g. where are the wetlands that are larger than 1 ha and that are adjacent to industrial lands?). Spatial modeling is a methodology or a set of analytical procedures that simulate real-world conditions within a GIS using the spatial relationships of geographic features. For example, a spatial model could simulate the conditions that lead to the contamination of an aquifer or the spread of a forest fire. There are three categories of spatial modeling functions that can be applied to geographic features within a GIS: geometric modeling (generating buffers, calculating areas and perimeters, and calculating distances between features); coincidence modeling (topological overlay); and adjacency modeling (path finding, redistricting, and allocation). Map algebra combines map layers mathematically. Modeling in particular requires the ability to combine layers according to various mathematical equations (modifying the attribute data by a constant, transforming the attribute data by a mathematical operation, mathematical combination of different data layers to produce a composite result). It is mainly applied in raster GIS systems.
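As a sketch of map algebra, the following lines (Python with NumPy; layer names and values are invented placeholders, not data from the text) apply the three kinds of operation just listed to two co-registered raster layers:

import numpy as np

# Two hypothetical co-registered raster layers (identical rows, columns and georeferencing).
elevation = np.random.uniform(300.0, 2000.0, size=(100, 100))   # metres
rainfall = np.random.uniform(500.0, 1500.0, size=(100, 100))    # mm per year

elevation_ft = elevation * 3.28084      # modify a layer by a constant
log_rainfall = np.log10(rainfall)       # transform a layer by a mathematical operation
ratio = rainfall / elevation            # combine two layers into a composite result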

Distance operators are a set of techniques where distance plays a key role in the analysis undertaken. Buffer zones describe areas within a specific distance of a feature type. An operator can evaluate the distance of all locations to the nearest of a set of designated features. Others can incorporate frictional effects (cost distances, cost surfaces) and barriers into distance calculations. Context operators (neighborhood or local operators) create new layers based on the information in an existing map and the context in which it is found (e.g. surface analysis, digital filtering). Because of their simple and uniform data structure, raster systems tend to offer a broad range of context operators. Derivative mapping combines selected components of a map to yield new derivative layers which express new knowledge of relationships between database elements. The relationship that forms the model needs to be known. For example, digital elevation data are used to derive slope gradients, and the slope data are then combined with information on soil type and rainfall regime to produce a new map of soil erosion potential (see the sketch after this paragraph). Process modeling is based on the notion that the GIS doesn't simply represent the environment, it is an environment: the database acts as a laboratory for the exploration of processes in a complex environment. Decision support models help decision makers develop more explicitly rational and well-informed decisions.
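A minimal sketch of the slope-gradient derivation mentioned above, assuming the DEM is held as a NumPy array with square cells (a finite-difference approximation, comparable to the neighborhood operators of a raster GIS):

import numpy as np

def slope_degrees(dem, cell_size):
    """Approximate slope gradient (degrees) from a DEM with square cells."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)           # rate of elevation change in y and x
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Hypothetical 5 x 5 DEM with 10 m cells.
dem = np.array([[500, 502, 504, 506, 508],
                [501, 503, 505, 507, 509],
                [502, 504, 506, 508, 510],
                [503, 505, 507, 509, 511],
                [504, 506, 508, 510, 512]], dtype=float)
print(slope_degrees(dem, 10.0))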

Description of a GIS study
Step 1: Building a spatial database. This is the most time-consuming phase of a project. It involves establishing the spatial extents of the study area, deciding on an appropriate working projection, and assembling the various spatial data to be used in the study in digital form, properly registered so that the spatial components overlap correctly.
Step 2: Data processing. The second step is to manipulate the data to extract and derive those spatial patterns relevant to the aims of the project.
Step 3: Integration modeling. The third step consists of combining the various evidence maps.

Advanced technologies: GIS technology can be deployed on a range of mobile systems from lightweight devices to PDAs, laptops, and Tablet PCs (mobile GIS). A GIS coupled with a wireless, location-enabled mobile device is widely used for data collection and GIS information access in the field. Mobile GIS is the expansion of a GIS from the office into the field. A mobile GIS enables field-based personnel to capture, store, update, manipulate, analyze, and display geographic information. Mobile GIS integrates one or more of the following technologies:

Mobile devices
Global Positioning Systems (GPS)
Wireless communications for Internet GIS access

Traditionally, the processes of field data collection and editing have been time consuming and error prone. Geographic data travelled into the field in the form of paper maps. Field edits were performed using sketches and notes on paper maps and forms. Once back in the office, these field edits were deciphered and manually entered into the GIS database. The result has been that GIS data has often not been as up-to-date or accurate as it could have been. Mobile GIS has enabled GIS to be taken into the field as digital maps on compact, mobile computers, providing field access to enterprise geographic information. This enables organizations to add real-time information to their database and applications, speeding up analysis, display, and decision making by using up-to-date, more accurate spatial data.

GIS Web services give a diverse user community access to geospatial content and capabilities. For example, ArcIMS (ESRI) is a solution for delivering dynamic maps and GIS data and services via the Web. Examples:
GIS Andes (a homogeneous information system of the entire Andes Cordillera): http://gisandes.brgm.fr/
GIS Steiermark (a local governmental GIS with many environmental data): http://www.gis.steiermark.at
Austrian soil map: http://geoinfo.lfrz.at/website/ebod/viewer.htm
The 1 : 5 Million International Geological Map of Europe and Adjacent Areas: http://www.bgr.de/index.html?/karten/IGME5000/igme5000.htm
Some commercial GIS products provide access to data which are linked to a GIS; such a GIS provides analytical facilities only for the embedded data. Example:
Interaktives Rohstoff-Informations-System (a spatial database of the Austrian mineral resources): http://www.geolba.ac.at/de/GEOMARKT/iris.html

2. Data and Data sources


Database development
The first stage in database development is typically the identification of the data layers necessary for the project. It is also important to determine what resolution (precision) and what level of accuracy are required. There are several issues to be considered:
Resolution: Resolution affects storage and processing time if too fine and limits the questions you can ask of the data if too coarse. Resolution may refer to the spatial, temporal or attribute components of the database. Spatial resolution refers to the size of the pixel in raster data or the scale of the map that was digitized to produce vector data. Temporal resolution refers to the currency of the data and whether the images in the time series are frequent enough and spaced appropriately. Attribute resolution refers to the level of detail captured by the data values.
Accuracy: While accuracy does have a relationship to resolution, it is also a function of the methods and care with which the data were collected and digitized. Unfortunately, it is not always easy to evaluate the accuracy of data layers. However, most government mapping agencies do have mapping standards that are available. In good data sets the metadata (documentation) include accuracy information, but many other data sets carry no accuracy information. In addition, even when such information is reported, the intelligent use of accuracy information in GIS analysis is still an avenue of active research.
Georeferencing: Data layers from various sources will often be georeferenced using different reference systems. For display and GIS analysis, all data layers that are to be used together must use the same reference system. However, it may be possible to use graphic files that are not georeferenced. Also, while most GIS support many projections, you may find data that are in an unsupported projection. In both cases, you will not be able to project the data. It is sometimes possible to georeference an unreferenced file, or a file with an unsupported projection, through resampling if points of known locations can be found on the unreferenced image.
Cost: This must be assessed in terms of both time and money. Other considerations include whether the data will be used once or many times, the accuracy level necessary for the particular (and future) uses of the data, and how often it must be updated to remain useful.
There are five main ways to get data into the database:
Find data in digital format and import it.
Find data in hard-copy format and digitize it.
Collect data yourself in the field, and then enter it.
Substitute an existing layer as a surrogate.
Derive new data from existing data.

Data in digital format
In many countries, governmental organizations provide data as a service. Commercial companies also provide data. With the proliferation of GIS and image processing technologies, many non-governmental and academic institutions may also possess data layers that would be useful. Once the data are collected, they have to be imported into the software system. Ideally, these format conversions are trivial. The file format (raster data, vector data, spreadsheet data) of the data should be known. More specifically, it is important to know if it is in a particular software format, an agency format, or some sort of data interchange format. Metadata allow the full documentation of the history and processes behind the data sources as well as information such as the author, available attribute fields, date last updated, extent, or keywords.
Data in hard-copy format
Some hard-copy data might be digitized by simply typing it into an ASCII editor or a database table. More commonly, hard-copy data is in the form of a map, an ortho-photo, or an aerial photograph. If particular features (e.g. elevation contours, well-head locations, land use boundaries) have to be extracted from a map, then these have to be digitized as vector features using a digitizing tablet. Alternatively, features can be captured with on-screen digitizing. A digitizing tablet contains a fine mesh of wires that define a Cartesian coordinate system for the board. The user attaches the hard copy map to the board, then traces features on the map with the digitizing puck (a mouse-like device) or stylus (a pen-like device). The digitizing tablet senses the x/y positions of the puck as the features are traced and communicates these to the digitizing software. The digitizing software allows the user to register the map on the digitizing tablet. This process establishes the relationship between the tablet's coordinates and the coordinate system of the map. The software compares the tablet coordinates and the map coordinates for a set of control points and then derives a best-fit translation function (a least-squares sketch of this step follows below). This function is then applied to all the coordinates sent from the board to the software. Scanners can be used to digitize hard copy images. Scanned images may be extracted into a vector file format using specialized software (e.g. ArcScan, an extension for ArcGIS, provides a set of tools for raster-to-vector conversion). A more common method is on-screen digitizing (heads-up digitizing).
Data collection in the field
For many projects, it is necessary to collect data in the field. In doing so, it is imperative to know the location of each data point collected. Depending upon the nature of the project and the level of accuracy required, paper maps may be used in conjunction with physical landmarks to determine locations. However, for many projects, traditional surveying instruments or Global Positioning System (GPS) devices are necessary to accurately locate data points. Locational coordinates from traditional surveys are typically processed with coordinate geometry.
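The registration step referred to above can be sketched as a least-squares fit. The text speaks of a best-fit translation; here this is generalized to an affine transformation (an assumption of this sketch), and the control point values are invented for illustration:

import numpy as np

def fit_affine(tablet_xy, map_xy):
    """Least-squares fit of an affine transformation map_xy ~ [x y 1] @ A."""
    tablet_xy = np.asarray(tablet_xy, dtype=float)
    map_xy = np.asarray(map_xy, dtype=float)
    design = np.hstack([tablet_xy, np.ones((len(tablet_xy), 1))])
    coeffs, *_ = np.linalg.lstsq(design, map_xy, rcond=None)
    return coeffs                                      # 3 x 2 coefficient matrix

def apply_affine(coeffs, xy):
    xy = np.asarray(xy, dtype=float)
    return np.hstack([xy, np.ones((len(xy), 1))]) @ coeffs

# Hypothetical control points: tablet coordinates (cm) and map coordinates (m).
tablet = [[2.0, 3.0], [25.0, 3.5], [24.0, 18.0], [3.0, 17.5]]
mapped = [[451000.0, 5272000.0], [453300.0, 5272050.0],
          [453200.0, 5273500.0], [451100.0, 5273450.0]]
A = fit_affine(tablet, mapped)
print(apply_affine(A, [[10.0, 10.0]]))   # a digitized point converted to map coordinates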

Substitute an existing layer as a surrogate
At times, there is simply no way to find or create a particular data layer. In these cases, it may be possible to substitute existing data as a surrogate. For example, suppose an analysis requires powerline location information, but a powerlines data file is not available and you don't have time or funds to collect the data in the field. You know, however, that in your study area powerlines generally follow paved roads. For the purposes of your analysis, if the potential level of error introduced is acceptable, you may use a paved roads layer as a surrogate for powerlines.
Derive new data from existing data
This is the primary way in which a GIS database grows. With derivative mapping, some knowledge of relationships is combined with existing data layers to create new data layers. Another common form of derivative mapping is the interpolation of a raster surface (e.g. an elevation model or a temperature surface) from a set of discrete points or isolines using TIN modeling or geostatistics.

Data structure
The real world exhibits properties that are either spatially continuous or discontinuous. Discrete spatial entities can be treated as natural spatial objects, generally irregular in shape, in a data model. For variables that are, in reality, spatially continuous, space must be divided into discrete spatial objects that can be either irregular or regular in shape. Continuous fields can also be divided into naturally-shaped irregular objects bounded by contour lines. Discrete data, sometimes called categorical or discontinuous data, mainly represent objects in both the feature and raster data storage systems. A discrete object has known and definable boundaries: it is easy to define precisely where the object begins and ends. A lake is a discrete object within the surrounding landscape; where the water's edge meets the land can be definitively established. Other examples of discrete objects include buildings, roads, and land parcels. Discrete objects are usually nouns. Continuous data, or a continuous surface, represent phenomena where each location on the surface is a measure of a concentration level or of its relationship to a fixed point in space or to an emitting source. Continuous data are also referred to as field, nondiscrete, or surface data. One type of continuous surface data is derived from those characteristics that define a surface, where each location is measured from a fixed registration point. These include elevation (the fixed point being sea level) and aspect (the fixed point being direction: north, east, south, and west). Spatial objects can be grouped into points, lines, areas, surfaces and volumes, varying in spatial dimension. Natural spatial objects correspond to discrete spatial entities recognizable in the real world. Imposed spatial objects are artificial or man-made objects, like a property boundary or a pixel.
Raster model

One of the advantages of the raster model is that spatial data of different types can be overlaid without the need for the complex calculations required for overlaying different maps in the vector model. Each layer of grid cells in a raster model records a separate attribute. The cells are constant in size, and are generally square. The locations of cells are addressed by row and column number. Spatial coordinates are not usually explicitly stored for each cell; information about the number of rows and columns, plus the geographic location of the origin, is saved with each layer. The spatial resolution of a raster is the size of one of its pixels on the ground (e.g. at 10 m resolution, a study area of 100 x 100 km requires 10,000 columns and 10,000 rows, i.e. 100 million pixels, or about 100 MB of storage at 1 byte (0-255) per pixel). The size chosen for a raster cell depends on the data resolution required for the most detailed analysis. The cell must be small enough to capture the required data, but large enough so that computer storage and analysis can be performed efficiently. The more homogeneous an area is for critical variables, the larger the cell size can be without affecting accuracy. A cell finer than the input resolution will not produce more accurate data than the input data! Points are represented as single pixels and lines by strings of connected pixels. The raster model is well suited for modeling spatial continua, particularly where an attribute shows a high degree of spatial variation, such as data on satellite images. The regular spacing of pixels in a lattice is ideal for calculating and representing spatial gradients. The major disadvantage of the raster model is the loss of resolution that accompanies the restructuring of data to fixed raster-cell boundaries (raster encoding).
Vector model
The basic type of vector model has come to be known as the spaghetti model. Points are represented as pairs of spatial coordinates, lines as strings of coordinate pairs, and areas as lines that form closed polygons. In this model, the boundary between two adjacent polygons is stored twice, once for each polygon. In the topological model, the boundaries of polygons are broken down into a series of arcs and nodes. The spatial relationships between arcs, nodes and polygons are explicitly defined in attribute tables. Topological attributes of spatial objects are those spatial characteristics that are unchanged by transformations such as translation, scaling, rotation or shear; spatial coordinates are affected by such operations. Digital files of spatial data organized by the vector model usually take up less storage space than equivalent raster files. The vector model is well-suited for representing graphical objects on the map. On the other hand, cartographic fidelity is often more apparent than real.
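The storage arithmetic and cell addressing of the raster model described above can be reproduced in a few lines; the origin coordinates here are placeholders chosen for illustration:

# Reproducing the storage arithmetic of the raster example above.
cell_size = 10.0                        # metres per cell
extent = 100_000.0                      # 100 km on a side
rows = cols = int(extent / cell_size)   # 10,000 rows and 10,000 columns
cells = rows * cols                     # 100,000,000 cells
print(cells, cells / 1e6, "MB at 1 byte (0-255) per cell")

# Cells are addressed by row and column; map coordinates follow from the origin.
origin_x, origin_y = 500_000.0, 5_300_000.0   # hypothetical upper-left corner
row, col = 1234, 567
x = origin_x + (col + 0.5) * cell_size        # centre of the cell in map coordinates
y = origin_y - (row + 0.5) * cell_size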

Surface representation
Any GIS layer, whether raster or vector, that describes all locations in a study area might be called a surface. However, in surface analysis we are particularly interested in those surfaces where the attributes are quantitative and vary continuously over space. A raster Digital Elevation Model (DEM), for instance, is such a surface. It is normally impossible to measure the value of an attribute for every pixel in an image (an exception is a satellite image, which measures average reflectance for every pixel). More often, one needs to fill in the gaps between sample data points to create a full surface. This process is called interpolation.
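A minimal sketch of one common interpolation method, inverse-distance weighting (a technique chosen for this example, not one prescribed by the text):

import numpy as np

def idw(sample_xy, sample_values, target_xy, power=2.0):
    """Inverse-distance weighted interpolation of point samples onto target locations."""
    sample_xy = np.asarray(sample_xy, dtype=float)
    target_xy = np.asarray(target_xy, dtype=float)
    values = np.asarray(sample_values, dtype=float)
    dist = np.linalg.norm(target_xy[:, None, :] - sample_xy[None, :, :], axis=2)
    dist = np.maximum(dist, 1e-12)            # avoid division by zero at sample locations
    weights = 1.0 / dist ** power
    return (weights @ values) / weights.sum(axis=1)

# Hypothetical elevation samples interpolated to two unsampled locations.
samples = [[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]]
elev = [500.0, 520.0, 510.0, 540.0]
print(idw(samples, elev, [[50.0, 50.0], [10.0, 90.0]]))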

Thiessen or Voronoi Tessellation
The term tessellation means to break an area into pieces or tiles. With a Thiessen tessellation, the study area is divided into regions around the sample data points such that every pixel in the study area is assigned to (and takes on the value of) the data point to which it is closest. Because it produces a tiled rather than a continuous surface, this interpolation technique is seldom used to produce a surface model. More commonly it is used to identify the zones of influence for a set of data points. The Thiessen polygons are constructed as follows:

1. All points are triangulated into a triangulated irregular network (TIN) that meets the Delaunay criterion.
2. The perpendicular bisectors for each triangle edge are generated, forming the edges of the Thiessen polygons. The locations at which the bisectors intersect determine the locations of the Thiessen polygon vertices.
3. The Thiessen polygons are built to generate polygon topology.
4. The locations of the points are used as the label points for the Thiessen polygons.
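Both the TIN and the Thiessen polygons can be computed with standard computational-geometry routines; a minimal sketch using scipy.spatial (a library choice for this example) with a handful of invented sample points:

import numpy as np
from scipy.spatial import Delaunay, Voronoi

# Hypothetical sample data points (x, y).
points = np.array([[0.0, 0.0], [4.0, 1.0], [2.0, 5.0],
                   [6.0, 4.0], [1.0, 3.0], [5.0, 6.0]])

tin = Delaunay(points)     # Delaunay triangulation = the TIN facets
vor = Voronoi(points)      # Thiessen/Voronoi tessellation of the same points

print(tin.simplices)       # vertex indices of each triangular facet
print(vor.vertices)        # Thiessen polygon vertices (circumcentres of the triangles)
print(vor.regions)         # polygon definitions (vor.point_region maps data points to regions)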

Triangulated Irregular Network
A Triangulated Irregular Network (TIN) is a vector data structure. The sample data points become the vertices of a set of triangular facets (e.g. Delaunay triangles) that completely cover the study area. There are many different methods of triangulation; the Delaunay triangulation process is most commonly used in TIN modeling. A Delaunay triangulation is defined by three criteria: 1) a circle passing through the three points of any triangle (i.e., its circumcircle) does not contain any other data point in its interior, 2) no triangles overlap, and 3) there are no gaps in the triangulated surface.

Attribute data
The attributes of objects can be divided into three types: spatial, temporal and thematic. They are usually organized into lists or tables using a database management system (DBMS). The type of measurement system used may have a dramatic effect on the interpretation of the resulting values: not all numbers can be treated the same. Measurement values can be broken into four types:
Ratio: The values are derived relative to a fixed zero point on a linear scale (e.g. distance, weight, volume). Mathematical operations can be used on these values with predictable and meaningful results.
Interval: The values are derived from a linear calibrated scale with successive constant intervals, but they are not relative to a true zero point in time or space (e.g. time, temperature, pH value). Because there is no true zero point, relative comparisons can be made between the measurements, but ratios and proportional determinations are not useful.
Ordinal: Ordinal values determine position (e.g. Mohs hardness). They do not establish magnitude or relative proportions.
Nominal: The values are used to identify one instance from another (e.g. color). They may also establish the group, class, member, or category with which the object is associated.
Each measurement scale is associated with appropriate operations and average values (e.g. the mode for nominal data, the median for ordinal data, and the arithmetic mean for interval and ratio data).

A second subdivision is whether the values represent discrete or continuous data. Discrete (categorical) data represent objects with known and predictable boundaries; they are best represented by ordinal or nominal values. Continuous data vary without limits (e.g. elevation) and may be best represented by ratio and interval values.

GPS data
GPS is composed of 24 US satellites orbiting the earth at approximately 20,000 km. Each satellite continuously transmits a time and location signal. A GPS receiver processes the satellite signals and calculates its position. The level of error in the position depends on the quality of the receiver and whether differential correction is undertaken. Differential correction is a post-collection processing step in which a base station file is used to correct the locations gathered by the mobile unit. A GPS receiver must acquire signals from at least four satellites to reliably calculate a three-dimensional position. Ideally, these satellites should be distributed across the sky. The receiver performs mathematical calculations to establish the distance from each satellite, which in turn is used to determine its position. The GPS receiver knows where each satellite is the instant its distance is measured. This position is displayed on the data logger and saved along with any other descriptive information entered in the field software. GPS can provide worldwide, three-dimensional positions, 24 hours a day, in any type of weather. However, the system does have some limitations. There must be a relatively clear "line of sight" between the GPS antenna and four or more satellites. Objects such as buildings, overpasses, and other obstructions that shield the antenna from a satellite can potentially weaken a satellite's signal such that it becomes too difficult to ensure reliable positioning.

These difficulties are particularly prevalent in urban areas. The GPS signal may also bounce off nearby objects, causing another problem called multipath interference.

Until 2000, civilian users had to contend with Selective Availability (SA): the DoD intentionally introduced random timing errors into satellite signals to limit the effectiveness of GPS and its potential misuse by adversaries of the United States. These timing errors could affect the accuracy of readings by as much as 100 meters. With SA removed, a single GPS receiver from any manufacturer can achieve accuracies of approximately 10 meters. To achieve the accuracies needed for quality GIS records (from one to two meters down to a few centimeters) requires differential correction of the data. The majority of data collected using GPS for GIS is differentially corrected to improve accuracy.

The underlying premise of differential GPS (DGPS) is that any two receivers that are relatively close together will experience similar atmospheric errors. DGPS requires that a GPS receiver be set up on a precisely known location. This GPS receiver is the base or reference station. The base station receiver calculates its position based on satellite signals and compares this location to the known location. The difference is applied to the GPS data recorded by the second GPS receiver, which is known as the roving receiver. The corrected information can be applied to data from the roving receiver in real time in the field using radio signals, or through post-processing after data capture using special processing software.

Real-time DGPS occurs when the base station calculates and broadcasts corrections for each satellite as it receives the data. The correction is received by the roving receiver via a radio signal if the source is land based, or via a satellite signal if it is satellite based, and applied to the position it is calculating. As a result, the position displayed and logged to the data file of the roving GPS receiver is a differentially corrected position.

Differentially correcting GPS data by post-processing uses a base GPS receiver that logs positions at a known location and a rover GPS receiver that collects positions in the field. The files from the base and the rover are transferred to the office processing software, which computes corrected positions for the rover's file. The resulting corrected file can be viewed in or exported to a GIS. There are many permanent GPS base stations currently operating throughout the world that provide the data necessary for differentially correcting GPS. Depending on the technology preferred by the base station owner, these data can be downloaded from the Internet or via a bulletin board system (BBS). Because base station data are consistent (i.e., with no gaps due to multipath errors) and very reliable (base stations usually run 24 hours a day, seven days a week), they are ideal for many GIS and mapping applications.
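The post-processing idea described above can be caricatured in a few lines. This is a deliberately simplified sketch with invented coordinates; real correction software works on the raw satellite measurements for each epoch rather than on final positions:

# Simplified post-processed differential correction (invented example coordinates).
base_known    = (451210.50, 5272105.30)   # surveyed base station position
base_measured = (451212.10, 5272103.70)   # position computed from the satellite signals

# Error observed at the base station ...
dx = base_known[0] - base_measured[0]
dy = base_known[1] - base_measured[1]

# ... applied to the rover's recorded position.
rover_measured  = (452890.40, 5273001.10)
rover_corrected = (rover_measured[0] + dx, rover_measured[1] + dy)
print(rover_corrected)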

Data from remote sensing
Remote sensing can be defined as any process whereby information is gathered about an object, area or phenomenon without being in contact with it. The term remote sensing has come to be associated more specifically with the gauging of interactions between earth surface materials and electromagnetic energy. Sensors can be divided into two broad groups: passive and active. Passive sensors measure ambient levels of existing sources of energy, while active ones provide their own source of energy. The majority of remote sensing is done with passive sensors, for which the sun is the major energy source.

While aerial photography is still a major form of remote sensing, newer solid-state technologies have extended capabilities for viewing in the visible and near-infrared wavelengths to include longer wavelength solar radiation as well. However, not all passive sensors use energy from the sun. Thermal infrared and passive microwave sensors both measure natural earth energy emissions.

In the visual interpretation of remotely sensed images, a variety of image characteristics are brought into consideration: color (or tone in the case of panchromatic images), texture, size, shape, pattern, context, and the like. However, with computer-assisted interpretation, it is most often simply color (i.e., the spectral response pattern) that is used. It is for this reason that a strong emphasis is placed on the use of multispectral sensors (sensors that, like the eye, look at more than one place in the spectrum and thus are able to gauge spectral response patterns), and on the number and specific placement of these spectral bands. The Landsat satellite is a commercial system providing multi-spectral imagery in seven spectral bands at a 30 meter resolution.

It can be shown through analytical techniques, such as Principal Components Analysis, that in many environments the bands that carry the greatest amount of information about the natural environment are the near-infrared and red wavelength bands. Water strongly absorbs infrared wavelengths and is thus highly distinctive in that region. In addition, plant species typically show their greatest differentiation here. The red area is also very important because it is the primary region in which chlorophyll absorbs energy for photosynthesis. Thus it is this band which can most readily distinguish between vegetated and non-vegetated surfaces. Given this importance of the red and near-infrared bands, it is not surprising that sensor systems designed for earth resource monitoring will invariably include these in any particular multispectral system. Other bands will depend upon the range of applications envisioned. Many include the green visible band since it can be used, along with the other two, to produce a traditional false color composite: a full color image derived from the green, red, and infrared bands (as opposed to the blue, green, and red bands of natural color images). This format became common with the advent of color infrared photography, and is familiar to many specialists in the remote sensing field. In addition, the combination of these three bands works well in the interpretation of the cultural landscape as well as natural and vegetated surfaces. However, it is increasingly common to include other bands that are more specifically targeted to the differentiation of surface materials. For example, Landsat TM Band 5 is placed between two water absorption bands and has thus proven very useful in determining soil and leaf moisture differences. Similarly, Landsat TM Band 7 targets the detection of hydrothermal alteration zones in bare rock surfaces. By contrast, the AVHRR system on the NOAA series satellites includes several thermal channels for the sensing of cloud temperature characteristics.

Aerial photography is the oldest and most widely used method of remote sensing. Cameras mounted in light aircraft flying between 200 and 15,000 m capture a large quantity of detailed information. Aerial photos provide an instant visual inventory of a portion of the earth's surface and can be used to create detailed maps.
Aerial photographs are commonly taken by commercial aerial photography firms which own and operate specially modified aircraft equipped with large format (23 cm x 23 cm) mapping-quality cameras. Aerial photos can also be taken using small format cameras (35 mm and 70 mm), hand-held or mounted in unmodified light aircraft. Camera and platform configurations can be grouped in terms of oblique and vertical. Oblique aerial photography is taken at an angle to the ground; the resulting images give a view as if the observer is looking out an airplane window.

These images are easier to interpret than vertical photographs, but it is difficult to locate and measure features on them for mapping purposes. Vertical aerial photography is taken with the camera pointed straight down. The resulting images depict ground features in plan form and are easily compared with maps. Vertical aerial photos are always highly desirable, but are particularly useful for resource surveys in areas where no maps are available. Aerial photos depict features such as field patterns and vegetation which are often omitted on maps. Comparison of old and new aerial photos can also capture changes within an area over time. Vertical aerial photos contain subtle displacements due to relief, tip and tilt of the aircraft, and lens distortion. Vertical images may be taken with overlap, typically about 60 percent along the flight line and at least 20 percent between lines. Overlapping images can be viewed with a stereoscope to create a three-dimensional view, called a stereo model.

LANDSAT
The Landsat system of remote sensing satellites is currently operated by the EROS Data Centre (http://edc.usgs.gov) of the United States Geological Survey. This is a new arrangement following a period of commercial distribution under the Earth Observation Satellite Company (EOSAT), which was recently acquired by Space Imaging Corporation. As a result, the cost of imagery has dropped dramatically, to the benefit of all. Full or quarter scenes are available on a variety of distribution media, as well as photographic products of MSS and TM scenes in false color and black and white. There have been seven Landsat satellites, the first of which was launched in 1972. The LANDSAT 6 satellite was lost on launch. However, as of this writing, LANDSAT 5 is still operational. Landsat 7 was launched in April 1999. Landsat carries two multispectral sensors. The first is the Multi-Spectral Scanner (MSS), which acquires imagery in four spectral bands: green, red and two near-infrared bands. The second is the Thematic Mapper (TM), which collects seven bands: blue, green, red, near-infrared, two mid-infrared and one thermal infrared. The MSS has a spatial resolution of 80 meters, while that of the TM is 30 meters. Both sensors image a 185 km wide swath, crossing the equator at about 09:45 local time and returning every 16 days. With Landsat 7, support for TM imagery is continued with the addition of a co-registered 15 m panchromatic band.

SPOT
The Système Pour l'Observation de la Terre (SPOT) (www.spot.com) was launched and has been operated by a French consortium since 1985. SPOT satellites carry two High Resolution Visible (HRV) pushbroom sensors which operate in multispectral or panchromatic mode. The multispectral images have 20 meter spatial resolution while the panchromatic images have 10 meter resolution. SPOT satellites 1-3 provide three multispectral bands: green, red and infrared. SPOT 4, launched in 1998, provides the same three bands plus a short wave infrared band. The panchromatic band for SPOT 1-3 is 0.51-0.73 µm, while that of SPOT 4 is 0.61-0.68 µm. SPOT 5 was launched in 2002. The main improvements over SPOT 4 include: higher ground resolution for the panchromatic bands of 2.5 and 10 meters; higher resolution for multispectral imagery of 10 meters in all three visible and near infrared bands; and a dedicated instrument for along-track stereo acquisition. All SPOT images cover a swath 60 kilometers wide. The SPOT sensor may be pointed to image along adjacent paths.
This allows the instrument to acquire repeat imagery of any area 12 times during its 26-day orbital cycle. SPOT Image Inc. sells a number of products, including digital images on a choice of distribution media. Existing images may be purchased, or new acquisitions ordered. Customers can request the satellite to be pointed in a particular direction for new acquisitions.

IKONOS
The IKONOS satellite was launched in 1999 (www.spaceimaging.com) and was the first commercial venture for high resolution satellite imagery capture and distribution. IKONOS orbits the earth every 98 minutes at an altitude of 680 kilometers, passing a given longitude at about the same time daily (approximately 10:30 A.M.). The IKONOS data products include 1 meter panchromatic (0.45-0.90 µm) and 4 meter multispectral (blue (0.45-0.52 µm), green (0.51-0.60 µm), red (0.63-0.70 µm), and near infrared (0.76-0.85 µm)) imagery taken in 10.5 km swaths. Space Imaging provides a variety of data products. Customers can customize their acquisition.

Shuttle Radar Topography Mission
On February 11, 2000, the Shuttle Radar Topography Mission (SRTM) payload onboard Space Shuttle Endeavour was launched into space. With its radars sweeping most of the Earth's surfaces, SRTM acquired enough data during its ten days of operation to obtain the most complete near-global high-resolution database of the Earth's topography. In order to gather topographic (elevation) data of the Earth's surface, SRTM used the technique of interferometry, in which two images are taken from different vantage points of the same area; the slight difference between the two images allows scientists to determine the height of the surface. SRTM obtained elevation data (25 m pixel, 20 m) on a near-global scale to generate the most complete high-resolution digital topographic database of the Earth (http://www.dlr.de/srtm/).

GTOPO30 data
GTOPO30 is a global digital elevation model (DEM) with a horizontal grid spacing of 30 arc seconds (approximately 1 kilometer). GTOPO30 was derived from several raster and vector sources of topographic information (http://edcdaac.usgs.gov/gtopo30/gtopo30.asp).

Some useful links:
Global Land Cover Facility (GLCF): http://glcf.umiacs.umd.edu/ - The GLCF develops and distributes remotely sensed satellite data and products concerned with land cover from the local to global scales.
GEOSPACE: http://ofd.ac.at/ - GEOSPACE offers Remote Sensing Services and Remote Sensing Products to public institutions and the private sector.
Seamless Data Distribution System: http://seamless.usgs.gov/ - The Seamless Data Distribution System (SDDS) is a location to explore and retrieve data. The U.S. Geological Survey (USGS) and the EROS Data Centre (EDC) are committed to providing access to geospatial data through The National Map. One approach is to provide free downloads of national base layers, as well as other geospatial data layers.
ESRI data portal: http://www.esri.com/data/data_portals.html
3DEM: http://www.visualizationsoftware.com/3dem.html - 3DEM will produce three-dimensional terrain scenes and flyby animations from a wide variety of freely available data sources.
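As an illustration of the false color composite described earlier in this chapter, the sketch below stacks three co-registered bands into an RGB image. Random arrays stand in for real band data (which would be read from the scene's band files), and the linear stretch is a simple display choice:

import numpy as np

# Hypothetical co-registered green, red and near-infrared bands.
shape = (400, 400)
green = np.random.randint(0, 256, shape).astype(float)
red = np.random.randint(0, 256, shape).astype(float)
nir = np.random.randint(0, 256, shape).astype(float)

def stretch(band):
    """Linear stretch of a band to the 0-1 display range."""
    return (band - band.min()) / (band.max() - band.min())

# Traditional false color composite: NIR -> red channel, red -> green, green -> blue.
composite = np.dstack([stretch(nir), stretch(red), stretch(green)])
print(composite.shape)   # (400, 400, 3), ready for display as an RGB image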


3. Georeferencing
Georeferencing refers to the manner in which map locations are related to earth surface locations. Georeferencing requires several ingredients: a logic for referring to earth surface locations, a concern of the field of geodesy; a specific implementation of that logic, known as a geodetic datum, a concern of the field of surveying; a logic for referring locations to their graphic positions, a concern of the field of cartography; and an implementation of that logic, known as a data structure.

The oldest reference surface used for mapping is known as the geoid. The geoid can be thought of as mean sea level, or where mean sea level would be if the oceans could flow under the continents. More technically, the geoid is an equipotential surface of gravity defining all points at which the force of gravity is equivalent to that experienced at the ocean's surface. Since the earth spins on its axis and causes gravity to be counteracted by centrifugal force progressively towards the equator, one would expect the shape of the geoid to be an oblate spheroid, a sphere-like object with a slightly fatter middle and flattened poles. In other words, the geoid would have the nature of an ellipsoid. Unfortunately, as it turns out, the geoid is itself somewhat irregular. Because of broad differences in earth materials (such as heavier ocean basin materials and lighter continental materials, irregular distributions such as mountains, and isostatic imbalances), the geoid contains undulations that also introduce ambiguities of distance and location.

As a result, it has become the practice of modern geodetic surveys to use abstract reference surfaces that are close approximations to the shape of the geoid, but which provide perfectly smooth reference ellipsoids. By choosing one that is as close an approximation as possible, the difference between the level of a surveying instrument (defined by the irregular geoid) and the horizontal of the reference ellipsoid is minimized. Moreover, by reducing all measurements to this idealized shape, ambiguities of distance (and position) are removed. There are many different ellipsoids in geodetic use. They can be defined either by the lengths of the major (a) and minor (b) semi-axes, or by the length of the semi-major axis along with the degree of flattening [f = (a - b) / a]. The reason for having so many different ellipsoids is that different ones give better fits to the shape of the geoid at different locations. The ellipsoid chosen for use is that which best fits the geoid for the particular location of interest.

Selecting a specific reference ellipsoid to use for a specific area and orienting it to the landscape defines what is known in geodesy as a datum. A datum thus defines an ellipsoid (itself defined by the major and minor semi-axes), an initial location, an initial azimuth (a reference direction to define the direction of north), and the distance between the geoid and the ellipsoid at the initial location. Establishing a datum is the task of geodetic surveyors, and is done in the context of the establishment of national or international geodetic control survey networks. A datum is thus intended to establish a permanent reference surface, although recent advances in survey technology have led many nations to redefine their current datums. Most datums only attempt to describe a limited portion of the earth (usually on a national or continental scale).
For example, the North American Datum (NAD) and the European Datum each describe large portions of the earth, while the Kandawala Datum is used for Sri Lanka alone. Regardless, these are called local datums since they do not try to describe the entire earth. By contrast, we are now seeing the emergence of World Geodetic Systems (such as WGS84) that do try to provide a single smooth reference surface for the entire globe. Such systems are particularly appropriate for measuring systems that do not use gravity as a reference frame, such as GPS.

However, presently they are not very commonly found as a base for mapping. More typically one encounters local datums, of which several hundred are currently in use. Perhaps the most important thing to bear in mind about datums is that each defines a different concept of geodetic coordinates, latitude and longitude. Thus, in cases where more than one datum exists for a single location, more than one concept of latitude and longitude exists. It can almost be thought of as a philosophical difference. It is common to assume that latitude and longitude are fixed geographic concepts, but they are not. There are several hundred different concepts of latitude and longitude currently in use (one for each datum). It might also be assumed that the differences between them would be small. However, that is not necessarily the case. Clearly, combining data from sources measured according to different datums can lead to significant discrepancies. The possibility that more than one datum will be encountered in a mapping project is actually reasonably high. In recent years, many countries have found the need to replace older datums with newer ones that provide a better fit to local geoidal characteristics. In addition, regional or international projects involving data from a variety of countries are very likely to encounter the presence of multiple datums. As a result, it is imperative to be able to transform the geodetic coordinates of one system to those of another.

Some examples:
Austrian Reference System (MGI, i.e. Militär-Geographisches Institut): Bessel ellipsoid, a = 6 377 397.155 m, b = 6 356 078.963 m; ellipsoid centre eccentric to the gravity centre.
International Reference System WGS 84 (World Geodetic System): GRS 80 ellipsoid, a = 6 378 137.000 m, b = 6 356 752.314 m; ellipsoid centre at the gravity centre.
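The flattening f = (a - b) / a introduced earlier can be checked directly from these semi-axes:

# Flattening f = (a - b) / a for the two ellipsoids quoted above.
ellipsoids = {"Bessel (MGI)": (6_377_397.155, 6_356_078.963),
              "GRS 80 (WGS 84)": (6_378_137.000, 6_356_752.314)}
for name, (a, b) in ellipsoids.items():
    f = (a - b) / a
    print(name, f, 1.0 / f)   # 1/f is roughly 299.15 for Bessel and 298.26 for GRS 80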

Normally, a datum is related to the WGS system by: Δx, Δy, Δz - shift of the ellipsoid centre; α, β, γ - rotation angles about the x, y and z axes; m - scale factor. E.g. MGI → WGS84 [http://www.bev.gv.at/prodinfo/koordinatensysteme/koordinatensysteme_3f.htm]: Δx = -577.326 m, Δy = -90.129 m, Δz = -463.919 m, α = -15.8537, β = -4.5500, γ = -16.3489, m = -2.4232 ppm.
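A minimal sketch of such a seven-parameter (Helmert) transformation applied to geocentric XYZ coordinates, assuming the small-angle position-vector convention with rotation angles given in arc seconds. The parameter values and the test point below are placeholders, not the official BEV set quoted above, whose angular unit and sign convention must be checked against the published documentation:

import numpy as np

def helmert(xyz, dx, dy, dz, rx, ry, rz, scale_ppm):
    """Small-angle seven-parameter datum transformation of geocentric XYZ coordinates.
    Rotations rx, ry, rz are in arc seconds (position-vector convention); the sign
    and angular unit must match the parameter set being used."""
    arcsec = np.pi / (180.0 * 3600.0)
    rx, ry, rz = rx * arcsec, ry * arcsec, rz * arcsec
    m = 1.0 + scale_ppm * 1e-6
    R = np.array([[1.0, -rz, ry],
                  [rz, 1.0, -rx],
                  [-ry, rx, 1.0]])
    return np.array([dx, dy, dz]) + m * (R @ np.asarray(xyz, dtype=float))

# Placeholder parameters and test point, for illustration only.
print(helmert([4_200_000.0, 1_130_000.0, 4_650_000.0],
              577.3, 90.1, 463.9, 5.1, 1.5, 5.3, 2.4))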

The process of transforming spheroidal geodetic coordinates to plane coordinate positions is known as projection, and falls traditionally within the realm of cartography. Originally, the concern was only with a one-way projection of geodetic coordinates to the plane coordinates of a map sheet. With the advent of GIS, however, this concern has broadened to include the need to undertake transformations in both directions in order to develop a unified database incorporating maps that are all brought to a common projection. Thus, for example, a database developed on a Transverse Mercator projection might need to incorporate direct survey data in geodetic coordinates along with map data in several different projections. Back-projecting digitized data from an existing projection to geodetic coordinates and subsequently using a forward projection to bring the data to the final projection is thus a very common activity in GIS.

A grid referencing system can be thought of very simply as a systematic way in which the plane coordinates of the map sheet can be related back to the geodetic coordinates of measured earth positions. Clearly, a grid referencing system requires a projection (most commonly a conformal one). It also requires the definition of a plane Cartesian coordinate system to be superimposed on top of that projection. This requires the identification of an initial position that can be used to orient the grid to the projection, much like an initial position is used to orient a datum to the geoid. This initial position is called the true origin of the grid, and is commonly located at the position where distortion is least severe in the projection. Then, like the process of orienting a datum, a direction is established to represent grid north. Most commonly, this will coincide with the direction of true north at the origin. However, because of distortion, it is impossible for true north and grid north to coincide over many other locations. Once the grid has been oriented to the origin and true north, a numbering system and units of measure are determined. For example, the UTM (Universal Transverse Mercator) system uses the intersection of the equator and the central meridian of a 6-degree wide zone as the true origin for the northern hemisphere. This point is then given arbitrary coordinates of 500,000 meters East and 0 meters North, which places a false origin 500 kilometers to the west of the true origin. In other words, the false origin marks the location where the numbering system is 0 in both axes.

A geographic coordinate system (GCS) uses a three-dimensional spherical surface to define locations on the earth. A GCS includes an angular unit of measure, a prime meridian, and a datum (based on a spheroid). A point is referenced by its longitude and latitude values. Longitude and latitude are angles measured from the earth's centre to a point on the earth's surface. A projected coordinate system is defined on a flat, two-dimensional surface. Unlike a geographic coordinate system, a projected coordinate system has constant lengths, angles, and areas across the two dimensions. A projected coordinate system is always based on a geographic coordinate system that is based on a sphere or spheroid.

Conformal projections preserve local shape. To preserve the individual angles describing spatial relationships, a conformal projection must show the perpendicular graticule lines intersecting at 90-degree angles on the map. A map projection accomplishes this by maintaining all angles.
The drawback is that the area enclosed by a series of arcs may be greatly distorted in the process. No map projection can preserve shapes of larger regions.

Equal area projections preserve the area of displayed features. To do this, the other properties (shape, angle, and scale) are distorted. In Equal area projections, the meridians and parallels may not intersect at right angles. In some instances, especially maps of smaller regions,

shapes are not obviously distorted, and distinguishing an Equal area projection from a Conformal projection is difficult unless documented or measured.

Equidistant maps preserve the distances between certain points. Scale is not maintained correctly by any projection throughout an entire map. However, there are, in most cases, one or more lines on a map along which scale is maintained correctly. Most Equidistant projections have one or more lines in which the length of the line on a map is the same length (at map scale) as the same line on the globe, regardless of whether it is a great or small circle, or straight or curved. Such distances are said to be true. For example, in the Sinusoidal projection, the equator and all parallels are their true lengths. In other Equidistant projections, the equator and all meridians are true. Still others (for example, Two-point Equidistant) show true scale between one or two points and every other point on the map. Keep in mind that no projection is equidistant to and from all points on a map.

The shortest route between two points on a curved surface such as the earth is along the spherical equivalent of a straight line on a flat surface: the great circle on which the two points lie. True-direction, or Azimuthal, projections maintain some of the great circle arcs, giving the directions or azimuths of all points on the map correctly with respect to the centre. Some True-direction projections are also conformal, equal area, or equidistant.

Because maps are flat, some of the simplest projections are made onto geometric shapes that can be flattened without stretching their surfaces. These are called developable surfaces. Some common examples are cones, cylinders, and planes. A map projection systematically projects locations from the surface of a spheroid to representative positions on a flat surface using mathematical algorithms. The first step in projecting from one surface to another is creating one or more points of contact. Each contact is called a point (or line) of tangency. A Planar projection is tangential to the globe at one point. Tangential cones and cylinders touch the globe along a line. If the projection surface intersects the globe instead of merely touching its surface, the resulting projection is a secant rather than a tangent case. Whether the contact is tangent or secant, the contact points or lines are significant because they define locations of zero distortion. Lines of true scale include the central meridian and standard parallels and are sometimes called standard lines. In general, distortion increases with the distance from the point of contact. Many common map projections are classified according to the projection surface used: conic, cylindrical, or planar.

Each map projection has a set of parameters (linear, angular, and unitless parameters) that you must define. The parameters specify the origin and customize a projection for your area of interest. Angular parameters use the geographic coordinate system units, while linear parameters use the projected coordinate system units.

Gauß-Krüger System
Also known as Transverse Mercator, this projection is similar to the Mercator except that the cylinder is longitudinal along a meridian instead of the equator. The result is a conformal projection that does not maintain true directions. The central meridian is placed on the region to be highlighted. This centering minimizes distortion of all properties in that region. This projection is best suited for land masses that stretch north-south. The Gauß-Krüger (GK) coordinate system is based on the Gauß-Krüger projection.
Austria: reference meridians at 28°, 31°, and 34° east of Ferro

The origin is defined at the point of intersection of the chosen meridian with the equator.
Unit = m
Ellipsoid = Bessel
Along the reference meridian, distances are measured accurately; towards the zone margins the distortion reaches the cm-scale.
Rechtswert (x) = distance to the reference meridian
Hochwert (y) = distance to the equator (reduced by 5,000,000 m)

Österreichisches Bundesmeldenetz (BMN): the reference meridians (Bezugsmeridiane) are counted from Ferro, which lies 17°40' west of Greenwich:
M28: 10°20' east of Greenwich, false easting 150,000 m
M31: 13°20' east of Greenwich, false easting 450,000 m
M34: 16°20' east of Greenwich, false easting 750,000 m
This gives positive 6-digit coordinates.

Universal Transverse Mercator
The Universal Transverse Mercator (UTM) system is a specialized application of the Transverse Mercator projection. The globe is divided into 60 north and south zones, each spanning 6° of longitude. Each zone has its own central meridian. Zones 1N and 1S start at 180° W. The limits of each zone are 84° N and 80° S, with the division between north and south zones occurring at the equator. The Polar Regions use the Universal Polar Stereographic coordinate system. Central (reference) meridians in Europe lie at 3° (zone 31), 9° (zone 32), and 15° (zone 33).
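As a quick illustration of moving between geodetic and projected grid coordinates, the following sketch uses the pyproj library (assumed to be installed) to project a WGS84 longitude/latitude pair into WGS84 / UTM zone 33N and back; the coordinate values are made-up example numbers.

from pyproj import Transformer

# WGS84 geographic (EPSG:4326) to WGS84 / UTM zone 33N (EPSG:32633)
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)

lon, lat = 15.09, 47.38            # hypothetical point in Styria, Austria
easting, northing = to_utm.transform(lon, lat)
print(easting, northing)           # the easting includes the 500,000 m false easting

# and back again to longitude/latitude
to_geo = Transformer.from_crs("EPSG:32633", "EPSG:4326", always_xy=True)
print(to_geo.transform(easting, northing))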


4. Geodatabases
Data types: When creating tables, you will need to select a data type for each field in your table. The available types include a variety of number types, text, date, binary large objects (BLOBs), or globally unique identifiers (GUIDs). Choosing the correct data type allows you to correctly store the data and will facilitate your analysis, data management, and business needs.

General field types found in database systems:
Text, String, Character (text with <256 letters)
Numbers: Integer; Decimal number (Float, Real, Double, Number, Numeric); Exponential number
Logical state
Date, Time, Current
Memo (long text)
Multimedia data

Field data types (name; specific range, length, or format; size in bytes; typical applications):
Short integer: -32,768 to 32,767; 2 bytes; numeric values without fractional values within a specific range; coded values
Long integer: -2,147,483,648 to 2,147,483,647; 4 bytes; numeric values without fractional values within a specific range
Single precision floating point number (Float): approximately -3.4E38 to 1.2E38; 4 bytes; numeric values with fractional values within a specific range
Double precision floating point number (Double): approximately -2.2E308 to 1.8E308; 8 bytes; numeric values with fractional values within a specific range
Text: up to 64,000 characters; size varies; names or other textual qualities
Date: mm/dd/yyyy hh:mm:ss AM/PM; 8 bytes; date and/or time
BLOB: length varies; size varies; images or other multimedia
Numeric fields can be stored as one of four numeric data types: short integers; long integers; single-precision floating point numbers, often referred to as floats; and double-precision floating point numbers, commonly called doubles. Each of these numeric data types varies in the size and method of storing a numeric value. In numeric data storage, it is important to understand the difference between decimal and binary numbers. The majority of people are accustomed to decimal numbers, which are a series of digits between zero and nine with negative or positive values and the possible placement of a decimal point. On the other hand, computers store numbers as binary numbers. A binary number is simply a series of 0s and 1s. In the different numeric data types, these 0s and 1s represent different coded values, including the positive or negative nature of the number, the

actual digits involved, and the placement of a decimal point. Understanding this type of number storage will help you make the correct decision in choosing numeric data types.

In choosing the numeric data type, there are two things to consider. First, it is always best to use the smallest byte size data type needed. This will not only minimize the amount of storage required for your geodatabase but will also improve performance. You should also consider the need for exact numbers versus approximate numbers. For example, if you need to express a fractional number, and seven significant digits will suffice, use a float. However, if the number must be more precise, choose a double. If the field values will not include fractional numbers, choose either a short or long integer.

The most basic numeric data type is the short integer. This type of numeric value is stored as a series of 16 0s or 1s, commonly referred to as 16 bits. Eight bits are referred to as a byte; thus, a short integer takes up two bytes of data. One bit states whether the number is positive or negative and the remaining 15 translate to a numeric value with five significant digits. The actual range for a short integer is approximately between -32,000 and +32,000.

A long integer is a four-byte number. Again, one bit stores the positive or negative nature of the number while the remaining bits translate to a numeric value with 10 significant digits. The actual range for a long integer is approximately between -2 billion and +2 billion. Both short and long integers can store only whole numbers. In other words, you cannot have fractions or numbers to the right of the decimal place.

To store data with decimal values, you will need to use either a float or a double. Floats and doubles are both binary number types that store the positive or negative nature of the number, a series of significant digits, and a coded value to define the placement of a decimal point. This is referred to as the exponent value. Floats and doubles are coded in a format similar to scientific notation. For example, if you wanted to represent the number -3,125 in scientific notation, you would say -3.125 × 10³ or -3.125E3. The binary code would break this number apart and assign one bit to state that it is a negative number; another series of bits would define the significant digits 3125; another bit would indicate whether the exponent value is positive or negative; and the final series of bits would define the exponent value of 3.

A float is a four-byte number and can store up to seven significant digits, producing an approximate range of values between -3.4E38 and -1.2E-38 for negative numbers and between 1.2E-38 and 3.4E38 for positive numbers. A double is an eight-byte number and can store up to 15 significant digits, producing an approximate range of values between -2.2E308 and -1.8E-308 for negative numbers and between 1.8E-308 and 2.2E308 for positive numbers.

It is important to note, however, that floats and doubles are approximate numbers. This is due to two factors. First, the number of significant digits is a limiting factor. For example, you could not express the number 1,234,567.8 as a float because this number contains more than the permissible seven digits. To store the number as a float, it will be rounded to 1,234,568, a number containing the permissible seven digits. This number could easily be expressed as a double, since it contains fewer than the permissible 15 significant digits.
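A short numeric illustration of these storage limits, as a sketch using numpy's float32 and float64 types (which correspond to the float and double types described above); the test values are arbitrary:

import numpy as np

x = 123456789.0
print(int(np.float32(x)))          # 123456792 : only about 7 significant digits survive in a float
print(int(np.float64(x)))          # 123456789 : a double keeps all the digits of this value

print("%.10f" % np.float32(0.1))   # 0.1000000015 : 0.1 has no exact binary representation
print("%.20f" % 0.1)               # 0.10000000000000000555 : even a double only approximates it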
There are also some limitations to the numbers a binary value can represent. One analogy is expressing fractions as decimals. The fraction 1/3 represents a particular value; however, if you try to express this number as a decimal, it will need to be rounded at some point. It could be expressed as 0.3333333, but this is still an approximation of the actual value. Just as fractions cannot always be expressed exactly as decimals, some numbers cannot be exactly expressed in binary code, and these numbers are replaced by approximate values. One example of such a number is 0.1. This number cannot be expressed

exactly as a binary number. A nearby value such as 0.099999 can, however, be represented, so 0.1 would be replaced with an approximate value of that kind.

A text field represents a series of alphanumeric symbols. This can include street names, attribute properties, or other textual descriptions. An alternative to using repeating textual attributes is to establish a coded value: a textual description is coded with a numeric value. For example, you might code road types with numeric values, assigning a 1 to paved improved roads, a 2 to gravel roads, and so on. This has the advantage of using less storage space in the geodatabase; however, the coded values must be understood by the data user. If you define your coded values in a coded value domain in the geodatabase and associate the domain with the integer field storing your codes, the geodatabase will display the textual description when the table is viewed.

A BLOB, or binary large object, is simply some data stored in the geodatabase as a long sequence of binary numbers. Items such as images, multimedia, or bits of code can be stored in this type of field.

Geodatabase
The geodatabase provides a framework for geographic information and supports topologically integrated feature classes. These datasets are stored, analyzed, and queried as layers. The geodatabase also extends these models with support for complex networks, topologies, relationships among feature classes, and other object-oriented elements. This framework can be used to define and work with a wide variety of different user- or application-specific models. The geodatabase supports both vector and raster data. Entities are represented as objects with properties, behavior, and relationships. Support for a variety of different geographic object types is built into the system. These object types include simple objects, geographic features, network features, annotation features, and other more specialized feature types. The model allows you to define relationships between objects and rules for maintaining referential and topological integrity between objects.

How data is stored in the database, the applications that access it, and the client and server hardware configurations are all key factors to a successful multiuser geographic information system (GIS). Setting up a single-user system involves fewer considerations. In both cases, though, successfully implementing a GIS starts with a good data model design. Designing a geodatabase requires planning and revision until you reach a design that meets your requirements. You can either start with an existing geodatabase design or design your own from scratch.

A collection of multiple files is called a database. The complexity of working with multiple files requires a database management system (DBMS). There are three basic types of DBMS.

Hierarchical Data Structures describe a one-to-many or parent-child relationship. The parents and children are directly linked, making access to data both simple and straightforward. Branches of the structure are based on formal criteria or key descriptors that act as a decision rule for moving from one branch to another. Each relationship has to be explicitly defined before the structure and its decision rules are developed.

Network Systems allow, in addition to one-to-one and many-to-one relationships, many-to-many relationships, in which a single entity may have many attributes, and each attribute is linked explicitly to many entities. Each entity can be linked directly by pointers anywhere in the database. They reduce redundancy of the data. However, in complex databases the number of pointers can get quite large.

The relational model has become the most widely used. In a relational DBMS the data are stored as ordered records or rows of attribute values (tuples). Tuples with corresponding attributes are grouped together in the form of relations (tables). Relational algebra provides a specific set of rules for the design and function of the system. A primary key field is an attribute that uniquely identifies a tuple and thereby describes an object or entity. A foreign key links rows of one table to the primary key of another table.

Properties of a true relational database:
1. All data must be represented in tabular form.
2. All data must be atomic. This means that any cell in a table can contain only a single value.
3. No duplicates are allowed.
4. Tuples can be rearranged without changing the meaning of the relations.

An essential concept in the design of a relational database is normalization, which is the process of converting complex relations into a larger number of simpler relations that satisfy relational rules.
1st normal form: A table must contain columns and rows with only a single value in each row location.
2nd normal form: Every column that is not the primary key has to be totally dependent on the primary key.
3rd normal form: Attributes that are not part of the primary key must not depend on other non-key attributes.

e.g.: digitizing a geological map
Table: POLYGON (Poly_ID, Formation_name, Lithology, Age)
Introducing ID attributes for the textual attributes:
POLYGON (Poly_ID, Formation_ID, Formation_name, Lithology, Lithology_ID, Age_ID, Age)
The resulting table contains repeated formation numbers (more than one polygon can belong to the same formation). Therefore, 1st normalization (eliminating repeating attributes):
POLYGON (Poly_ID, Formation_ID)
FORMATION (Formation_ID, Formation_name, Lithology_ID, Lithology, Age_ID, Age)
Further rectification (1st normal form) and 2nd normal form (each non-identifying attribute is functionally dependent on the whole key; no composite key is present):
POLYGON (Poly_ID, Formation_ID)
FORMATION (Formation_ID, Formation_name, Lithology_ID, Lithology, Age_ID)

AGE (Age_ID, Age)
3rd normal form (non-identifying attributes are mutually independent):
POLYGON (Poly_ID, Formation_ID)
FORMATION (Formation_ID, Formation_name, Lithology_ID, Age_ID)
LITHOLOGY (Lithology_ID, Lithology)
AGE (Age_ID, Age)

Relational algebra is a set of operations that manipulate relations as they are defined in the relational model and as such describes part of the data manipulation aspect of this data model. Because of their algebraic properties these operations are often used in database query optimization as an intermediate representation of a query, to which certain rewrite rules can be applied to obtain a more efficient version of the query.

Structured Query Language (SQL)
Structured Query Language (SQL) is an ANSI (American National Standards Institute) standard computer language for accessing and manipulating database systems. SQL statements are used to retrieve and update data in a database. SQL works with database programs like MS Access, DB2, Informix, MS SQL Server, Oracle, Sybase, etc. Unfortunately, there are many different versions of the SQL language, but to be in compliance with the ANSI standard, they must support the same major keywords in a similar manner (such as SELECT, UPDATE, DELETE, INSERT, WHERE, and others).

SQL is a syntax for executing queries, but the SQL language also includes a syntax to update, insert, and delete records. These query and update commands together form the Data Manipulation Language (DML) part of SQL:
SELECT - extracts data from a database table
UPDATE - updates data in a database table
DELETE - deletes data from a database table
INSERT INTO - inserts new data into a database table

The Data Definition Language (DDL) part of SQL permits database tables to be created or deleted. We can also define indexes (keys), specify links between tables, and impose constraints between database tables. The most important DDL statements in SQL are:
CREATE TABLE - creates a new database table
ALTER TABLE - alters (changes) a database table
DROP TABLE - deletes a database table
CREATE INDEX - creates an index (search key)
DROP INDEX - deletes an index

The SELECT statement is used to select data from a table. The tabular result is stored in a result table (called the result-set).
Syntax:
SELECT column_name(s) FROM table_name

Select Some Columns
To select the columns named "LastName" and "FirstName", use a SELECT statement like this:
SELECT LastName, FirstName FROM Persons

Select All Columns
To select all columns from the "Persons" table, use a * symbol instead of column names, like this:
SELECT * FROM Persons

The WHERE Clause
To conditionally select data from a table, a WHERE clause can be added to the SELECT statement.
Syntax:
SELECT column FROM table WHERE column operator value

With the WHERE clause, the following operators can be used:
= Equal
<> Not equal
> Greater than
< Less than
>= Greater than or equal
<= Less than or equal
BETWEEN Between an inclusive range
LIKE Search for a pattern

SQL uses single quotes around text values (most database systems will also accept double quotes). Numeric values should not be enclosed in quotes. e.g. SELECT * FROM Persons WHERE FirstName='Tove'
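The statements above can be tried in any SQL-capable system. As a small self-contained sketch, the following Python script (using the built-in sqlite3 module) creates two of the normalized tables from the geological-map example, fills them with made-up rows, and runs a SELECT with a WHERE clause and a join:

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# two of the normalized relations from the geological-map example
cur.execute("CREATE TABLE FORMATION (Formation_ID INTEGER PRIMARY KEY, "
            "Formation_name TEXT, Lithology_ID INTEGER, Age_ID INTEGER)")
cur.execute("CREATE TABLE POLYGON (Poly_ID INTEGER PRIMARY KEY, Formation_ID INTEGER)")

# hypothetical example rows
cur.executemany("INSERT INTO FORMATION VALUES (?, ?, ?, ?)",
                [(1, "Werfen Formation", 10, 100),
                 (2, "Dachstein Limestone", 11, 101)])
cur.executemany("INSERT INTO POLYGON VALUES (?, ?)",
                [(1, 1), (2, 1), (3, 2)])

# WHERE clause on a single table
cur.execute("SELECT Formation_name FROM FORMATION WHERE Formation_ID = 1")
print(cur.fetchall())

# join: which formation does each polygon belong to?
cur.execute("SELECT p.Poly_ID, f.Formation_name "
            "FROM POLYGON p JOIN FORMATION f ON p.Formation_ID = f.Formation_ID")
print(cur.fetchall())

con.close()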

Joining
Data comes from a variety of sources. Often, the data you want to display on your map is not directly stored with your geographic data. For instance, you might obtain data from other departments in your organization, purchase commercially available data, or download data from the Internet. If this information is stored in a table, such as a dBASE, INFO, or geodatabase table, you can associate it with your geographic features and display the data on your map. Typically, you'll join a table of data to a layer based on the value of a field that can be found in both tables. The name of the field does not have to be the same, but the data type has to be the same; you join numbers to numbers, strings to strings, and so on.

When editing joined data, you cannot edit the joined columns directly. However, you can directly edit the columns of the origin table. When you join tables, you establish a one-to-one or many-to-one relationship between the layer's attribute table and the table containing the information you wish to join.

Unlike joining tables, relating tables simply defines a relationship between two tables. The associated data isn't appended to the layer's attribute table like it is with a join. Instead, you can access the related data when you work with the layer's attributes. For example, if you select a building, you can find all the tenants that occupy that building. Similarly, if you select a tenant, you can find what building it resides in (or several buildings, in the case of a chain of stores in multiple shopping centres: a many-to-many relationship).

When the layers on your map don't share a common attribute field, you can join them using a spatial join, which joins the attributes of two layers based on the location of the features in the layers. With a spatial join, you can:
Find the closest feature to another feature.
Find what's inside a feature.
Find what intersects a feature.

Metadata
Metadata, or "data about data", describes the content, quality, condition, and other characteristics of data. Metadata is critical for sharing tools, data, and maps and for searching to see if the resources you need already exist. Metadata describes GIS resources in the same way a card in a library's card catalog describes a book. Once you've found a resource with a search, its metadata will help you decide whether it's suitable for your purposes. To make this decision, you may need to know how accurate or current the resource is and if there are any restrictions on how it can be used. Metadata can answer these questions.

Metadata are defined by the FGDC and ISO XML Document Type Definitions (DTDs). The Federal Geographic Data Committee (FGDC) is an organization established by the United States Federal Office of Management and Budget responsible for coordinating the development, use, sharing, and dissemination of surveying, mapping, and related spatial data (see http://www.fgdc.gov/metadata/metadata.html). The committee is comprised of representatives from federal and state government agencies, academia, and the private sector. The FGDC defines spatial data metadata standards for the United States in its Content Standard for Digital Geospatial Metadata and manages the development of the National Spatial Data Infrastructure (NSDI). The International Organization for Standardization (ISO) is a federation of national standards institutes from 145 countries that works with international organizations, governments, industries, businesses and consumer representatives to define and maintain criteria for international standards.

eXtensible Markup Language (XML) is similar to HyperText Markup Language (HTML). An HTML file contains both data and information about how it's presented. An XML file contains data only; presentation information is defined in a separate file, a stylesheet. For example, in an HTML file, data is embedded within tags that tell a Web browser how it should be

presented; <B>24</B> will display "24" in a bold font. With HTML you only know what "24" means from the context in which it appears on the page; if it precedes the text "°C" you would know it represented a temperature. XML data is embedded within tags that add meaning. For example, <price>24</price> declares "24" to be a price. In XML terms this price is referred to as an element. Other elements might be product names, quantities, or totals. While a person can look at the XML and determine that "24" is a price, what's more important is that software can extract price elements from the file; this isn't possible with the HTML file, where there is nothing to distinguish "24" from "°C".

XML data can be displayed in a Web browser using eXtensible Style Language Transformations (XSLT) stylesheets that transform the XML data into an HTML page. A stylesheet is similar to an SQL query that selects, orders, and formats values from tables in an RDBMS and presents them as a report. You can display the same XML data in many different ways by using different stylesheets. Only values from selected XML elements will appear in the output HTML page.

Contents of a metadata record:
Identification: What is the name of the data set? Who developed the data set? What geographic area does it cover? What themes of information does it include? How current are the data? Are there restrictions on accessing or using the data?
Data Quality: How good are the data? Is information available that allows a user to decide if the data are suitable for his or her purpose? What is the positional and attribute accuracy? Are the data complete? Was the consistency of the data verified? What data were used to create the data set, and what processes were applied to these sources?
Spatial Data Organization: What spatial data model was used to encode the spatial data? How many spatial objects are there? Are methods other than coordinates, such as street addresses, used to encode locations?
Spatial Reference: Are coordinate locations encoded using longitude and latitude? Is a map projection or grid system, such as the State Plane Coordinate System, used? What horizontal and vertical datums are used? What parameters should be used to convert the data to another coordinate system?
Entity and Attribute Information: What geographic information (roads, houses, elevation, temperature, etc.) is included? How is this information encoded? Were codes used? What do the codes mean?
Distribution: From whom can I obtain the data? What formats are available? What media are available? Are the data available online? What is the price of the data?
Metadata Reference: When were the metadata compiled? By whom?
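To illustrate how software can pull typed values out of XML (in contrast to the HTML fragment above), here is a minimal sketch using Python's built-in xml.etree module on a made-up, metadata-like snippet; the element names are invented for the example and do not follow the full FGDC or ISO schema.

import xml.etree.ElementTree as ET

# invented, simplified metadata fragment (not a complete FGDC/ISO record)
xml_text = """
<metadata>
  <title>Geological map of the Eastern Alps</title>
  <price>24</price>
  <bounding_box west="9.5" east="17.2" south="46.3" north="49.0"/>
</metadata>
"""

root = ET.fromstring(xml_text)
print(root.findtext("title"))            # the tag tells us this is a title
print(float(root.findtext("price")))     # "24" is unambiguously a price here
print(root.find("bounding_box").attrib)  # attributes are parsed into a dictionary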


5. Visualization
In an interactive map, readers can directly display and query the tabular attribute information associated with the map features. In a static map, this information, if displayed at all, is usually in a formatted report or is used to symbolize or label map features. We can discriminate:
Topographic cartography produces maps at all scales in order to give a general spatial orientation. They contain real objects.
Thematic cartography produces maps showing special themes on a simplified topographic base map for a specific purpose.

Components of a cartographic image
Point data are represented with graphical symbols. The type, size and color of symbols can be set to constant values, or they can be allowed to vary according to one or more fields in the point attribute table. Some symbols may record vector rather than scalar data (e.g. dip symbols).
Line data are represented either with continuous lines that can vary in thickness and color, or they can be displayed in different styles.
Irregular polygon data are represented as closed polygons filled with color or black and white (or grey) fill, either as solid color or as various kinds of pattern. The color and pattern of the polygon fill is defined by values in one or more of the fields in polygon attribute tables. The colors are defined using a palette (color lookup table).
Pixel data are represented exclusively with color, or shades of gray. In the case of a raster with no attribute table, the color is determined from the pixel value. On the other hand, the pixel value may be a pointer to an attribute table, in which case the value of a selected field is used to determine the color. Up to three raster images can be displayed at one time.
Cartographic annotation is generally plotted last, overwriting the graphical elements of the image. Annotation includes labels, titles, blocks of text, legends, a scale bar, a graticule and a north arrow. Fields in an attribute table can be used to define the text string, the font, the size, color and orientation of labels.
The visual variables are: size, brightness, shape, pattern, orientation, color.

Some cartographic rules
All elements must be recognizable and distinguishable by eye. The minimum size of points is 0.2 mm, the minimum width of a line is 0.05 mm and the minimum diameter of an area is 0.3 mm. Gray shadings should not use more than 6 to 7 levels, color shadings not more than 12.

To make the data easy to store and maintain, attribute data in a GIS database is often abbreviated, coded, or unformatted. To make it easy for map readers to understand the data, you should reformat it, provide aliases, or filter the data before including it in the map. Here are some things to keep in mind when presenting attribute data to the map readers:
Display only relevant data: There may be attributes in the layer's data table that do not convey useful information for a given map. Make sure these fields are not visible.
Field name aliases: Field aliases should clearly describe what the attribute values represent, including how they are measured (for example, "(m)" for meters or "(km)" for kilometers). Aliases should also communicate whether the values represent a raw count, a rate, or a ratio of other information, such as a percentage (%).
Attribute data that represent specific kinds of information, such as currency or dates, should be formatted so that map readers will know what kind of information they are reading. Large numbers should be formatted with the appropriate thousands separator to make them easier to read. When data from a geodatabase is displayed, ensure that the descriptions for coded domains or subtypes are shown.

Classification techniques
Quantile: Each class contains an equal number of features. A quantile classification is well suited to linearly distributed data. Because features are grouped by the number in each class, the resulting map can be misleading. Similar features can be placed in adjacent classes, or features with widely different values can be put in the same class. You can minimize this distortion by increasing the number of classes.
Natural breaks: Classes are based on natural groupings inherent in the data. Break points are identified by picking the class breaks that best group similar values and maximize the differences between classes. The features are divided into classes whose boundaries are set where there are relatively big jumps in the data values.
Equal interval: This classification scheme divides the range of attribute values into equal-sized subranges, allowing you to specify the number of intervals. For example, if features have attribute values ranging from 0 to 300 and you have three classes, each class represents a range of 100 with class ranges of 0-100, 101-200, and 201-300. This method emphasizes the amount of an attribute value relative to other values, for example, to show that a store is part of the group of stores that made up the top one-third of all sales. It's best applied to familiar data ranges such as percentages and temperature.
Defined interval: This classification scheme allows you to specify an interval by which to equally divide a range of attribute values. Rather than specifying the number of intervals as in the equal interval classification scheme, with this scheme you specify the interval value.
Standard deviation: This classification scheme shows you how much a feature's attribute value varies from the mean. The software calculates the mean value and the standard deviations from the mean. Class breaks are then created using these values.
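A small sketch of how two of these schemes can be computed from an attribute column (quantile and equal interval breaks, using numpy); the attribute values are made up:

import numpy as np

values = np.array([2, 3, 5, 8, 13, 21, 34, 55, 89, 144], dtype=float)
n_classes = 4

# quantile classification: each class gets (roughly) the same number of features
quantile_breaks = np.quantile(values, np.linspace(0, 1, n_classes + 1)[1:])

# equal interval classification: the value range is split into equal-sized sub-ranges
lo, hi = values.min(), values.max()
equal_breaks = lo + (hi - lo) * np.arange(1, n_classes + 1) / n_classes

print("quantile breaks:      ", quantile_breaks)
print("equal interval breaks:", equal_breaks)

# assign each feature to a class (0 .. n_classes-1) using the equal interval breaks
print(np.digitize(values, equal_breaks[:-1]))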

Color
The use of color for the display of maps and images enormously enhances the ability to communicate spatial information. The human eye is able to perceive subtle color differences and to recognize color patterns. Any color can be produced by adding together red, green, and blue light in various combinations.

Color models control the display of rasters and raster composites using the RGB and HSV color models. These models allow you to define which raster will be assigned to each component of each model. That is, you can assign which rasters will represent the red, green, and blue colors in the RGB model and the hue, saturation, and value components in the HSV model. The group of rasters for each model can then be displayed in a composite.

RGB is a color model based upon additive primary colors. Colors can be viewed spatially by using the RGB cube. In this cube, the primary colors are arranged at three corners of a color cube. In the lower corner of the cube, all three primary colors are of zero intensity, and the result is the color black. The color corresponding to any point in the cube can be described by the displacement on the red, green and blue axes. The diagonal axis cutting across the cube is an intensity axis, ranging from black at the origin to white, where intensities are expressed in percent. Red, green and blue are additive colors, because new colors are obtained by adding them to black. The subtractive primary colors (used for many color hardcopy devices) can also be represented in the color cube. They are cyan, magenta and yellow, as shown on the remaining corners. Colors are defined by subtracting their complementary colors from white.

In the RGB color model, pure gray shades are obtained by combining equal quantities of all three color values: red, green and blue. If all three values are set to 255, the total presence of color will illuminate white and, conversely, if all three color values are set to 0, the absence of color will illuminate black. This leaves 1 through 254 available indices for shades of gray. Approximately twenty shades of gray are discernible by the human eye. One of the problems with the RGB model is that linear changes of position within the color cube do not lead to a corresponding linear change in color perception by the human eye; the eye is relatively less sensitive to blue than to green and red.

The HSV color model is based upon a color system in which the color space is represented by a single cone. The three components of the cone are the hue, saturation, and value. Hue specifies the hue (color) to which a color will be set. Hue is given as an integer between 0 and 360 based on the Tektronix color standard (in which the hue is given as an angle counterclockwise around the color cone). The primary and secondary colors have the following hue values: red = 0, yellow = 60, green = 120, cyan = 180, blue = 240, magenta = 300. Saturation specifies the intensity of saturation to which a color will be set. Saturation is given as an integer between 0 and 100. The saturation of a color refers to the extent to which it departs from a neutral color such as gray, or in simpler terms, its colorfulness. When saturation is

100, the color is fully saturated. When saturation is 0, the color is unsaturated and will appear gray (unless value is set to 0 or 100, in which case it will appear black or white). Value (or intensity) specifies the intensity of white in the color. Value is given as an integer between 0 and 100. A color with value set to 0 will appear black. A color with value set to 100 and saturation set to 0 will appear white.

The main advantage of the HSV model is that hue describes the human perception of color better than an RGB combination. For example, an orange hue can be modified, making it brighter by changing intensity or paler by changing the saturation, adjustments that are difficult to guess at intuitively in RGB space. Similarly, saturation and intensity can be held constant, and hue can be altered to a neighboring value in the HSV cone.

Text
Maps convey information about geographic features, yet displaying only features on a map, even with symbols that convey their meaning, isn't always enough to make your point. Adding text to your map enhances how well your map conveys geographic information. There are various kinds of text you can add to your map. First, descriptive text can be placed near individual map features; for example, your map might show the name of each major city in Africa. You can also add a few pieces of text to draw attention to a particular area of the map, such as adding text to indicate the general location of the Sahara Desert. Finally, you can add text that improves the presentation of your map. For example, a map title provides context; you might also consider adding other information such as map author, data source, and date.

The main types are labels, annotation, and graphic text. A label is a piece of text that is automatically positioned and whose text string is based on feature attributes. Labels offer the fastest and easiest way to add descriptive text to your map for individual features. For example, you can turn on dynamic labeling for a layer of major cities to quickly add city names to your map. Because labels are based on attribute fields, they can only be used to add feature-descriptive text. The second main option when working with text is to use annotation. Annotation can be used to describe particular features as well as add general information to the map. You can use annotation, much like labels, to add descriptive text for many map features, or just to add a few pieces of text manually to describe an area of your map. Unlike labels, each piece of annotation stores its own position, text string, and display properties. Compared to labels, annotation provides more flexibility over the appearance and placement of your text because you can select individual pieces of text and edit them.

Some rules:
Use as few font types as possible. In the map view do not use more than 2-4 font types and 2-4 font sizes.
Think about the readability in the context of the map user.
Prefer sans serif fonts (i.e. fonts without small hooks on the ends of the letters).
Use the design elements (e.g. italics, underlining) sparingly.
Emphasize the important information.
Group the text elements into logical blocks.

Text components:
Title
Author(s), editor(s)
Editorial remarks
Sources
Usage remarks
Production date
Validity
Copyright notice
Definition of the orientation
Legend

Legend
Areal signatures are defined in a box with a side ratio of 1:2 to 1:3. As a rule of thumb, the length of a box should be approximately 1 to 4 % of the longest map dimension. The minimum dimension is 0.6 x 0.4 cm. The boxes are separated by a distance of at least 1/6 of the box height. The description of classified boxes should be continuous.

Definition of the orientation
The spatial orientation of the map should be clearly documented by the definition of the reference system, by the scale (numerical or graphical) and the north arrow.

Visualization of surfaces

Contours
Contours are polylines that connect points of equal value (such as elevation, temperature, precipitation, pollution, or atmospheric pressure). The distribution of the polylines shows how values change across a surface. Where there is little change in a value, the polylines are spaced farther apart. Where the values rise or fall rapidly, the polylines are closer together. By following the polyline of a particular contour, you can identify which locations have the same value. Contours are also a useful surface representation because they allow you to simultaneously visualize flat and steep areas (distance between contours) and ridges and valleys (converging and diverging polylines).

Hillshade
The Hillshade function obtains the hypothetical illumination of a surface by determining illumination values for each cell in a raster. It does this by setting a position for a hypothetical light source and calculating the illumination values of each cell in relation to neighboring cells. It can greatly enhance the visualization of a surface for analysis or graphical display, especially when using transparency. By default, shadow and light are shades of grey associated with integers from 0 to 255 (increasing from black to white). The azimuth is the angular direction of the sun, measured from north in clockwise degrees from 0 to 360. An azimuth of 90° is east. The default is 315° (NW). By placing an elevation raster on top of a created hillshade, then making the elevation raster transparent, you can create realistic images of the landscape. Add other layers, such as roads, streams, or vegetation, to further increase the informational content in the display.
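The standard hillshade calculation can be written compactly from a grid of elevations. The following is a sketch using numpy and the usual slope/aspect formulation (with the default azimuth of 315° and an assumed sun altitude of 45°); it illustrates the idea rather than reproducing the exact code of any particular GIS package, and the demonstration DEM is synthetic.

import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    # Return a 0-255 hillshade grid from a 2-D elevation array.
    az = np.radians(360.0 - azimuth_deg + 90.0)   # convert compass azimuth to a math angle
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dem, cellsize)       # surface gradients in y and x
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    shaded = (np.sin(alt) * np.cos(slope) +
              np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return (255 * np.clip(shaded, 0, 1)).astype(np.uint8)

# tiny synthetic "hill" as a demonstration DEM
y, x = np.mgrid[0:50, 0:50]
dem = 100 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 200.0)
print(hillshade(dem, cellsize=10.0))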

By modeling shade you can calculate the local illumination and whether the cell falls in a shadow or not. By modeling shadow, you can identify those cells that will be in the shadow of another cell at a particular time of day. Cells that are in the shadow of another cell are coded 0; all other cells are coded with integers from 1 to 255. You can reclassify all values greater than 1 to 1, producing a binary output raster.

TIN
TINs are made up of triangular facets and the nodes and edges that make up the triangles. They may also contain breaklines: lines that follow sets of edges that play important roles in defining the shape of the surface. Examples of breaklines are ridgelines, roads, or streams. You can display one type of TIN feature in a map or scene (for example, just the triangles) or all of the TIN features. You can also symbolize each type of feature in different ways. TIN nodes and triangles can be tagged with integer values to allow you to store additional information about them. These integer values can be used as lookup codes, for example, to indicate the accuracy of the input feature data source, or the land use type code for areas on the surface. The codes can be derived from fields in the input feature classes. You can symbolize tagged features with unique values.

3D data
3D data is data that has elevation or height information incorporated in its geometry. This information might be z-values of a feature class, cell values of a raster surface, or components of a TIN. In addition, there are ways to make 2D data render in 3D, for example, by using a surface as a source of base heights or using an attribute of a feature that contains elevation information. Although you can display 2D features by draping them over a surface, 3D features are displayed more rapidly, and you can share them with others without having to send along the surface data. You can convert existing 2D features to 3D in several ways:

By interpolating a shape from a surface using geoprocessing tools
By deriving the height values of the features from a surface
By deriving the height value from an attribute of the features
By deriving the features' height from a constant value

You can also digitize new features over a surface and interpolate the features' z-values from the surface during digitizing. Viewing data in three dimensions gives you new perspectives. 3D viewing can provide insights that would not be readily apparent from a planimetric map of the same data. For example, instead of inferring the presence of a valley from the configuration of contour lines, you can actually see the valley and perceive the difference in height between the valley floor and a ridge.


6. GIS Analysis
Query
Often, just looking at a map isn't enough; you must also query it according to feature locations and attributes to solve problems. You can discover new spatial relationships when you start asking questions such as:

Where is...? Where's the closest? What's inside? What intersects?

In many GIS, you can:
Find out what a feature is by pointing to it.
Find features with particular attributes, such as cities with a population greater than one million.
Find features with a particular spatial relationship. For instance, you can find the wildlife habitats within 50 kilometers of an oil spill or find all traffic accidents that occurred along a particular stretch of road.
Aggregate features in a layer by removing the boundary between similar features, merging layers together, and clipping the boundary of a layer with another layer such as a study area.

A query is a request that selects features or records from a database. A query is often written as a SQL statement or logical expression. Once you've found the features, you can:
Display their attributes and statistics.
Create reports for them.
Create graphs for them.
Export them to a new feature class.

A Selection operation extracts selected features from an input coverage and stores them in the output coverage. Features are selected for extraction based on logical expressions or by applying the criteria contained in a selection file. Any item, including redefined items, in the specified feature attribute table of the input coverage can be used. A selection is based on spatial criteria:
Interactive single- or multiple-selection
Intersection with other objects
Distance to objects
Neighborhood with polygons or lines

Or on attribute-based criteria:
Interactive single- or multiple-selection
Identity (=) or values (<>)
Composite selection with inclusions or exclusions (and, or, not)

Both categories can be applied in combination. The results are often used to aggregate databases. In the raster context, an aggregate procedure generates a reduced-resolution version of a raster where each output cell contains the SUM, MIN, MAX, MEAN, or MEDIAN of the input cells that are encompassed by the extent of the output cell.
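A minimal sketch of combining an attribute criterion with a distance-based (spatial) criterion on a small set of point features, using plain Python with made-up coordinates and attributes:

from math import hypot

# hypothetical point features: (name, x, y, population)
cities = [("A", 10.0, 20.0, 1_500_000),
          ("B", 11.5, 20.5,   300_000),
          ("C", 40.0,  5.0, 2_100_000)]

spill_site = (12.0, 21.0)
max_dist = 5.0

selected = [c for c in cities
            if c[3] > 1_000_000                                                 # attribute criterion
            and hypot(c[1] - spill_site[0], c[2] - spill_site[1]) <= max_dist]  # distance criterion

print(selected)   # only city "A" satisfies both conditions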

Raster cartographic modeling
The functions associated with raster cartographic modeling can be divided into five types:

Those that work on single cell locations (Local functions)
Those that work on cell locations within a neighborhood (Focal functions)
Those that work on cell locations within zones (Zonal functions)
Those that work on all cells within the raster (Global functions)
Those that perform a specific application (for example, hydrologic analysis functions)

Each of these categories can be influenced by, or based on, the spatial or geometric representation of the data, and not solely on the attributes that the cells portray. For example, a function that adds two layers together (working on single cell locations) is dependent on the cell's location and the value of its counterpart in the second layer. Functions applied to cell locations within neighborhoods or zones rely on the spatial configuration of the neighborhood or zone as well as the cell values in the configuration.

Local functions, or per-cell functions, compute a raster output dataset where the output value at each location (cell) is a function of the value associated with that location on one or more raster datasets. That is, the value of the single cell, regardless of the values of neighboring cells, has a direct influence on the value of the output. A per-cell function can be applied to a single raster dataset or to multiple raster datasets. For a single dataset, examples of per-cell functions are the trigonometric functions (for example, sin) or the exponential and logarithmic functions (for example, exp and log). Examples of local functions that work on multiple raster datasets are functions that return the minimum, maximum, majority, or minority value for all the values of the input raster datasets at each cell location.

Focal, or neighborhood, functions produce an output raster dataset in which the output value at each cell location is a function of the input value at a cell location and the values of the cells in a specified neighborhood around that location. A neighborhood configuration determines which cells surrounding the processing cell should be used in the calculation of each output value. Neighborhood functions can return the mean, standard deviation, sum, and range of values within the immediate or extended neighborhood.

Zonal functions compute an output raster dataset where the output value for each location depends on the value of the cell at the location and the association that location has within a cartographic zone. Zonal functions are similar to focal functions except that the definition of the neighborhood in a zonal function is the configuration of the zones or features of the input zone dataset, not a specified neighborhood shape. However, zones do not necessarily have any order or specific shapes. Each zone can be unique. Zonal functions return the mean, sum,

minimum, maximum, or range of values from the first dataset that fall within a specified zone of the second.

Global or per-raster functions compute an output raster dataset in which the output value at each cell location is potentially a function of all the cells combined from the various input raster datasets. There are two main groups of global functions: Euclidean distance and weighted distance. Euclidean distance global functions assign to each cell in the output raster dataset its distance from the closest source cell (a source may be the location from which to start a new road). The direction of the closest source cell can also be assigned as the value of each cell location in an additional output raster dataset. A global weighted distance function determines the cost of moving from a destination cell (the location where you want to end the road) to the nearest source cell (the location where you want to start the road) over a cost surface (cost being determined by some cost schema such as cost of construction). To take this one step further, the shortest or least-cost path over a cost surface can be calculated over a non-networked surface from a source cell to a destination cell using the global least-cost path function. In all the global calculations, knowledge of the entire surface is necessary to return the solution.

There is a wide series of cell-based modeling functions developed to solve specific applications. An application function performs an analysis that is specific to a discipline. For example, hydrology functions create a stream network and delineate a watershed. The local, focal, zonal, and global functions are general functions and are not specific to any application. There is some overlap in the categorization of an application function and the local, focal, zonal, and global functions (such as the fact that even though slope is usually used in the application of analyzing surfaces, it is also a focal function). Some of the application functions are more general in scope, such as surface analysis, while other application functions are more narrowly defined, such as the hydrologic analysis functions. The categorization of the application functions is an aid to group and understand the wide variety of Spatial Analyst operators and functions. You may find that a specific application function can manipulate cell-based data for an entirely different application than its category. For example, calculating slope is a surface analysis function that can be useful in hydrologic analysis as well. Application functions include the following:

Density analysis
Surface generation
Surface analysis
Hydrologic analysis
Geometric transformation
Generalization
Resolution altering
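To make the local/focal/zonal distinction above concrete, here is a numpy/scipy sketch of a local operation (cell-by-cell addition of two layers), a focal operation (a 3 x 3 mean filter) and a simple zonal summary; it illustrates the concepts only and is not the implementation used by any particular GIS.

import numpy as np
from scipy.ndimage import uniform_filter

a = np.arange(25, dtype=float).reshape(5, 5)
b = np.ones((5, 5))

# local (per-cell) function: the output depends only on the same cell in each input
local_sum = a + b

# focal (neighborhood) function: the output depends on a 3x3 window around each cell
focal_mean = uniform_filter(a, size=3, mode="nearest")

# zonal function: the output depends on the zone a cell belongs to
zones = np.array([[0] * 3 + [1] * 2] * 5)              # two made-up zones
zonal_mean = {z: a[zones == z].mean() for z in np.unique(zones)}

print(local_sum)
print(focal_mean)
print(zonal_mean)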

Overlay operations
The Overlay toolset contains tools to overlay multiple feature classes to combine, erase, modify, or update spatial features in a new feature class. New information is created when overlaying one set of features with another. There are six types of overlay operations; all involve joining two existing sets of features into a single set of features to identify spatial relationships between the input features.

In the vector context, the Identity tool is used to perform overlay analysis on feature classes. This tool combines the portions of features that overlap the identity features to create a new feature class. It computes a geometric intersection of the Input Features and Identity Features. The Input Features or portions thereof that overlap Identity Features will get the attributes of those Identity Features. The Intersect tool is used to perform overlay analysis on feature classes. This tool builds a new feature class from the intersecting features common to both feature classes. The Union tool is used to perform overlay analysis on feature classes. This tool builds a new feature class by combining the features and attributes of each feature class. The Clip tool allows you to extract a portion of a raster dataset, based on a rectangular extent. A Mask tool identifies those cells within the analysis extent that will be considered when performing an operation or a function. Setting an analysis mask means that processing will only occur on selected locations and that all other locations will be assigned values of NoData. An Update tool (stamp overlay) computes a geometric intersection of the Input Features and Update Features. The attributes and geometry of the Input Features are updated by the Update Features. The results are written to a new feature class. In contrast, a join overlay combines the attributes of the implicated layers by a spatial join.

A raster overlay performs mathematical operations between two input images to produce a single output image:
Add: Image 1 + Image 2
Subtract: Image 1 - Image 2
Multiply: Image 1 x Image 2
Ratio (Divide): Image 1 / Image 2
Normalized Ratio: (Image 1 - Image 2) / (Image 1 + Image 2)
Exponentiate: each pixel of Image 1 raised to the power of the corresponding pixel in Image 2
Minimize: the minimum of Image 1 and Image 2
Maximize: the maximum of Image 1 and Image 2
Cover: Image 1 covers Image 2, except where Image 1 is zero
All overlay operations work on a cell-by-cell basis.
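A cell-by-cell raster overlay is simply element-wise arithmetic on co-registered grids. The sketch below computes a ratio, a normalized ratio and a cover operation for two small made-up rasters with numpy:

import numpy as np

img1 = np.array([[4.0, 8.0], [2.0, 6.0]])
img2 = np.array([[2.0, 4.0], [1.0, 3.0]])

ratio = img1 / img2                                # cell-by-cell division
norm_ratio = (img1 - img2) / (img1 + img2)         # normalized ratio (NDVI-style)
cover = np.where(img1 != 0, img1, img2)            # "cover": img1 covers img2 except where img1 is zero

print(ratio)
print(norm_ratio)
print(cover)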

Geostatistics
In GIS, we often want to combine information from several layers in analyses. If we only know the values at a selection of points and these sample points do not coincide between the layers, then such analyses would be impossible. Even if the sample points do coincide, we often want to describe a process for all the locations within a study area, not just for selected points. In addition, we need full surfaces because many processes modeled in GIS act continuously over a surface, with the value at one location being dependent upon neighboring values.

Geostatistics is based on the work of Georges Matheron, with the publications "Traité de géostatistique appliquée" (1963) and "La théorie des variables régionalisées, et ses applications" (1970). Waldo Tobler formulated the First Law of Geography in 1970: "Everything is related to everything else, but near things are more related than distant things."

Any GIS layer, whether raster or vector, that describes all locations in a study area might be called a surface. However, in surface analysis, we are particularly interested in those surfaces where the attributes are quantitative and vary continuously over space. A raster Digital Elevation Model (DEM), for instance, is such a surface. Other example surfaces might describe NDVI, population density, or temperature. In these types of surfaces, each pixel may have a different value than its neighbors. A landcover map, however, would not be considered a surface by this definition. The values are qualitative, and they also do not vary continuously over the map. Another example of an image that does not fit this particular surface definition would be a population image where the population values are assigned uniformly to census units. In this case, the data are quantitative, yet they do not vary continuously over space. Indeed, change in values is present only at the borders of the census units.

No GIS surface layer can match reality at every scale. Thus the term model is often applied to surface images. The use of this term indicates a distinction between the surface as represented digitally and the actual surface it describes. It also indicates that different models may exist for the same phenomenon. The choice of which model to use depends upon many things, including the application, accuracy requirements, and availability of data.

It is normally impossible to measure the value of an attribute for every pixel in an image. (An exception is a satellite image, which measures average reflectance for every pixel.) More often, one needs to fill in the gaps between sample data points to create a full surface. This process is called interpolation. Interpolation techniques may be described as global or local. A global interpolator derives the surface model by considering all the data points at once. The resulting surface gives a "best fit" for the entire sample data set, but may provide a very poor fit in particular locations. A local interpolator, on the other hand, calculates new values for unknown pixels by using the values of known pixels that are nearby. Interpolators may define "nearby" in various ways. Many allow the user to determine how large an area or how many of the nearest sample data points should be considered in deriving interpolated values. Interpolation techniques are also classified as exact or inexact. An exact interpolation technique always retains the original values of the sample data points in the resulting surface, while an inexact interpolator may assign new values to known data points.


Trend Surface Analysis

Trend surfaces are typically used to determine whether spatial trends exist in a data set, rather than to create a surface model to be used in further analyses. Trend surfaces may also be used to describe and remove broad trends from data sets so that more local influences may be better understood. Because the resulting surface is an ideal mathematical model, it is very smooth and free from local detail.

The Trend Surface Interpolator is a global interpolator, since it calculates a surface that gives the best fit, overall, to the entire set of known data points. It is also an inexact interpolator: the values at known data points may be modified to correspond to the best-fit surface for the entire data set. The Trend Surface Interpolator fits mathematically defined ideal surface models (e.g. linear, quadratic, or cubic) to the input point data set.

To visualize how it works, we will use an example of temperature data at several weather stations. The linear surface model is flat (i.e., a plane). Imagine the temperature data as points floating above a table top, the height of each point above the table top depending on its temperature. Now imagine a flat piece of paper positioned above the table. Without bending it at all, one adjusts the tilt and height of the paper in such a way that the sum of the squared vertical distances between it and every point is minimized. Some points would fall above the plane of the paper and some below; indeed, it is possible that no points would actually fall on the paper itself. However, the overall separation between the model (the plane) and the sample data points is minimized. Every pixel in the study area could then be assigned the temperature that corresponds to the height of the paper at that pixel location.

One could use the same example to visualize the quadratic and cubic trend surface models. However, in these cases, you would be allowed to bend the paper (but not crease it). The quadratic surface allows for broad bends in the paper, while the cubic allows even more complex bending. The Trend Surface Interpolator operates much like this analogy, except that a polynomial formula describing the ideal surface model replaces the paper. This formula is used to derive values for all pixels in the image. In addition to the interpolated surface produced, the interpolator reports how well the chosen model fits the input points.
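A minimal sketch of a first-order (linear) trend surface fitted by least squares with NumPy; the point coordinates and values are invented for illustration:

import numpy as np

# Sample points: x, y coordinates and measured values (e.g. temperature).
x = np.array([0.0, 10.0, 20.0, 5.0, 15.0])
y = np.array([0.0,  5.0, 10.0, 20.0, 15.0])
z = np.array([12.1, 13.0, 14.2, 11.5, 13.6])

# Design matrix for z = a + b*x + c*y (the tilted plane of the analogy above).
A = np.column_stack([np.ones_like(x), x, y])
coeff, residuals, rank, sv = np.linalg.lstsq(A, z, rcond=None)
a, b, c = coeff

# Evaluate the fitted plane on a regular grid to obtain the surface model.
gx, gy = np.meshgrid(np.linspace(0, 20, 5), np.linspace(0, 20, 5))
trend = a + b * gx + c * gy

# Goodness of fit: how well the ideal surface matches the input points.
fitted = A @ coeff
r2 = 1 - np.sum((z - fitted) ** 2) / np.sum((z - z.mean()) ** 2)
print(r2)

A quadratic or cubic trend surface would simply add x^2, x*y, y^2 (and higher-order) columns to the design matrix, corresponding to the bent sheet of paper in the analogy.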

Inverse Distance Weighted (IDW) Interpolation

IDW is a method of interpolation that estimates cell values by averaging the values of sample data points in the neighborhood of each processing cell. The closer a point is to the center of the cell being estimated, the more influence, or weight, it has in the averaging process. The distance-weighted average preserves sample data values and is therefore an exact interpolation technique.

The user may choose to use this technique either as a global or a local interpolator. In the global case, all sample data points are used in calculating all the new interpolated values. In the local case, only the 4-8 sample points that are nearest to the pixel to be interpolated are used in the calculation. The local option is generally recommended, unless data points are very uniformly distributed and the user wants a smoother result. With the local option, a circle defined by a search radius is drawn around each pixel to be interpolated. The search radius is set to yield, on average, 6 control points within the circle. This is calculated by dividing the total study area by the number of points and determining a radius that would enclose, on average, 6 points. This calculation assumes an even distribution of points, however, so some flexibility is built in. If fewer than 4 control points are found in the calculated search area, then the radius is expanded until at least 4 points are found. On the other hand, if more than 8 control points are found in the calculated search area, then the radius is decreased until at most 8 control points are found. At least 4 points must be available to interpolate any new value.

With either the global or local implementation, the user can define how the influence of a known point varies with distance to the unknown point. The idea is that the attribute of an interpolated pixel should be most similar to that of its closest known data point, a bit less similar to that of its next closest known data point, and so on. Most commonly, the function used is the inverse square of distance (1/d², where d is distance). For every pixel to be interpolated, the distance to every sample point to be used is determined and the inverse square of the distance is computed. Each sample point attribute is multiplied by its respective inverse square distance term and all these values are summed. This sum is then divided by the sum of the inverse square distance terms to produce the interpolated value. The user may choose to use an exponent other than 2 in the function. Using an exponent greater than 2 causes the influence of the closest sample data points to have relatively more weight in deriving the new attribute. Using an exponent of 1 would cause the data points to have more equal influence on the new attribute value.

The distance-weighted average will produce a smooth surface in which the minimum and maximum values occur at sample data points. In areas far from data points, the surface will tend toward the local average value, where local is determined by the search radius. The distribution of known data points greatly influences the utility of this interpolation technique. It works best when sample data are many and fairly evenly distributed.
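A minimal sketch of the inverse-distance-weighted average for a single unknown location, assuming the simple global case with an exponent of 2; the sample points are invented, and the local search-radius logic described above is omitted:

import numpy as np

# Known sample points (x, y, value) and the location to interpolate.
px = np.array([0.0, 4.0, 1.0, 5.0])
py = np.array([0.0, 0.0, 4.0, 5.0])
pz = np.array([10.0, 14.0, 12.0, 18.0])
x0, y0 = 2.0, 2.0
power = 2                     # inverse-square weighting (1/d**2)

d = np.sqrt((px - x0) ** 2 + (py - y0) ** 2)
if np.any(d == 0):            # an exact interpolator keeps known values
    z0 = pz[np.argmin(d)]
else:
    w = 1.0 / d ** power
    z0 = np.sum(w * pz) / np.sum(w)

print(z0)

Raising the exponent above 2 gives the nearest points even more weight; an exponent of 1 evens out the influence of the points, as discussed above.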

Spline interpolation

The basic form of the minimum-curvature Spline interpolation imposes two conditions on the interpolant:
1) The surface must pass exactly through the data points.
2) The surface must have minimal curvature: the cumulative sum of the squares of the second-derivative terms of the surface, taken over each point on the surface, must be a minimum.
The basic minimum-curvature technique is also referred to as thin-plate interpolation. It ensures a smooth (continuous and differentiable) surface, together with continuous first-derivative surfaces. Rapid changes in gradient or slope (the first derivative) may occur in the vicinity of the data points; hence, this model is not suitable for estimating second derivatives (curvature).
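A short sketch using SciPy's radial basis function interpolator with a thin-plate kernel, which follows the minimum-curvature idea described above; the sample data are invented:

import numpy as np
from scipy.interpolate import Rbf

x = np.array([0.0, 3.0, 6.0, 1.0, 5.0])
y = np.array([0.0, 2.0, 1.0, 5.0, 6.0])
z = np.array([4.0, 6.5, 5.0, 7.2, 8.1])

# Thin-plate spline: passes exactly through the data points,
# with minimal overall curvature between them.
spline = Rbf(x, y, z, function='thin_plate')

gx, gy = np.meshgrid(np.linspace(0, 6, 7), np.linspace(0, 6, 7))
surface = spline(gx, gy)
print(surface.shape)

Because the default smoothing is zero, the fitted surface reproduces the input values exactly, i.e. it behaves as an exact interpolator.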

Kriging

The main difference between kriging methods and a simple distance-weighted average is that they allow the user great flexibility in defining the model to be used in the interpolation for a particular data set. These customized models are better able to account for changes in spatial dependence across the study area. Spatial dependence is simply the idea that points that are closer together have more similar values than points that are further apart. Kriging recognizes that this tendency to be similar to nearby points is not restricted to a Euclidean distance relationship and may exhibit many different patterns.

The kriging procedure produces, in addition to the interpolated surface, a second image of variance. The variance image provides, for each pixel, information about how well the interpolated value fits the overall model that was defined by the user. The variance image may thereby be used as a diagnostic tool to refine the model. The goal is to develop a model with an even distribution of variance that is as close as possible to zero. Kriging produces a smooth surface. Simulation, on the other hand, incorporates per-pixel variability into the interpolation and thereby produces a rough surface. Typically hundreds of such surfaces are generated and summarized for use in process modeling.

The underlying notion that fuels geostatistical methods is quite simple. For continuously varying phenomena (e.g., elevation, rainfall), locations that are close together in space are more likely to have similar values than those that are further apart. This tendency to be most similar to one's nearest neighbors is quantified in geography through measures of spatial autocorrelation and continuity. In geostatistics, the complement of continuity, variability, is more often the focus of analysis.

The first task in using geostatistical techniques to create surfaces is to describe as completely as possible the nature of the spatial variability present in the sample data. Spatial variability is assessed in terms of distance and direction. The analysis is carried out on pairs of sample data points. Every data point is paired with every other data point. Each pair may be characterized by its separation distance (the Euclidean distance between the two points) and its separation direction (the azimuth in degrees of the direction from one point to the other). The distance measure is typically referred to in units of lags, where the length of a lag (i.e., the lag distance or lag interval) is set by the user. In specifying a particular lag during the analysis, the user is limiting the pairs under consideration to those that fall within the range of distances defined by the lag. If the lag were defined as 20 meters, for example, an analysis of data at the third lag would include only those data pairs with separation distances of 40 to 60 meters. Direction is measured in degrees. As with distance, direction is typically specified as a range rather than a single azimuth.

The h-scatterplot is used as a visualization technique for exploring the variability in the sample data pairs. In the h-scatterplot, the X axis represents the attribute at one point of the pair (the from point) and the Y axis represents that same attribute at the other point of the pair (the to point). The h-scatterplot may be used to plot all of the pairs, but is more often restricted to a selection of pairs based on a certain lag and/or direction. The h-scatterplot is typically used to get a sense of what aspects of the data pair distribution are influencing the summary of variability for a particular lag. H-scatterplots are interpreted by assessing the dispersion of the points. For example, if the pairs were perfectly linearly correlated (i.e., no variability at this separation and direction), then all the points would fall along a line. A very diffuse point pattern in the h-scatterplot indicates high variability for the given ranges of distance and direction.
The semivariogram is another tool for exploring and describing spatial variability. The semivariogram summarizes the variability information of the h-scatterplots and may be presented both as a surface graph and a directional graph. The surface graph shows the average variability in all directions at different lags. The center position in the graph, called the origin, represents zero lag. The lags increase from the center toward the edges.

\gamma(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} \left[ z(x_i) - z(x_i + h) \right]^2

where N(h) is the number of data pairs separated by the lag vector h.
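A minimal sketch of this estimator for an omnidirectional experimental semivariogram with NumPy; the sample data, lag width and number of lags are invented, and directional (anisotropic) analysis is ignored here:

import numpy as np

# Sample locations and attribute values (invented).
xy = np.array([[0, 0], [1, 0], [0, 1], [2, 1], [1, 2], [3, 3], [2, 3]], float)
z  = np.array([5.0, 5.4, 5.1, 6.0, 5.8, 7.1, 6.7])

lag_width = 1.0
n_lags = 4

# All pairwise separation distances and squared value differences.
diff_xy = xy[:, None, :] - xy[None, :, :]
dist = np.sqrt((diff_xy ** 2).sum(axis=-1))
sqdif = (z[:, None] - z[None, :]) ** 2

# Use each pair only once (upper triangle of the matrices).
iu = np.triu_indices(len(z), k=1)
dist, sqdif = dist[iu], sqdif[iu]

gamma = np.zeros(n_lags)
for k in range(n_lags):
    sel = (dist > k * lag_width) & (dist <= (k + 1) * lag_width)
    if sel.any():
        # 1 / (2 N(h)) times the sum of squared differences in this lag bin
        gamma[k] = sqdif[sel].sum() / (2.0 * sel.sum())

print(gamma)

A model curve (e.g. spherical, exponential or Gaussian) would then be fitted to these experimental values to obtain the nugget, sill and range discussed below.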

The structure of the data may be described by four parameters: the sill, the range, the nugget, and anisotropy.

In most cases involving environmental data, spatial variability between sample pairs increases as the separation distance increases. Eventually, the variability reaches a plateau where an increase in separation distance between pairs no longer increases the variability between them, i.e., there is no spatial dependence at this and larger distances. The variance value at which the curve reaches the plateau is called the sill. The separation distance at which the curve reaches the sill is known as the range. The range signifies the distance beyond which sample data should not be considered in the interpolation process when selecting points that define a local neighborhood. The nugget refers to the variance at a separation distance of zero, i.e., the y-intercept of the curve that is fit to the data. In theory, we would expect this to be zero. However, noise or uncertainty in the sample data may produce variability that is not spatially dependent, and this will result in a non-zero value, or nugget effect. A nugget structure increases the variability uniformly across the entire graph because it is not related to distance or direction of separation. The fourth parameter that defines the structure is the anisotropy of the data set. The transition of spatial continuity may be equal in all directions, i.e., variation may depend on the separation distance only. This is known as an isotropic model: a model fit to any direction is good for all directions. In most environmental data sets, however, variability is not isotropic. Anisotropy is described by directional axes of minimum and maximum continuity. To determine the parameters to be used, the user views directional semivariograms for multiple directions.

In kriging and simulation interpolation processes, structures that describe the pattern of spatial variability represented by directional semivariograms are used to determine the influence of spatial dependence on the neighborhoods of sample points selected to predict unknown points. The structures influence how their attributes should be weighted when combined to produce an interpolated value. Because experimental semivariograms are based on inherently incomplete sample data, smoother curves that define the shape of the spatial variability across all separation distances must be fitted to them. Using ancillary information and the semivariograms, mathematical functions are combined to delineate a smooth curve of spatial variability. At this stage, a nugget structure, and the sills, ranges, and anisotropies of additional structures, are defined for the smooth curve. In model fitting, several mathematical functions are used to design a curve for the spatial variability. Those functions that do not plateau at large separation distances, such as the linear and the power functions, are termed non-transitional. Those that do reach a plateau, such as the Gaussian and exponential functions, are called transitional functions. Together, the nugget structure and the sills, ranges, and anisotropies of additional structures mathematically define a nested model of spatial variability. This model is used when locally deriving weights for the attributes of sample data within the neighborhood of a location to be interpolated. The user thus fits a mathematical curve, described by sills, ranges, a nugget, anisotropy and the selected functions, to the detected spatial variability. This curve is used to derive the weights

applied to locally selected samples during the interpolation by kriging or conditional simulation.

Semivariograms are statistical measures that assume the input sample data are normally distributed and that local neighborhood means and standard deviations show no trends. Each sample data set must be assessed for conformity to these assumptions. Transformations of the data, editing of the data set, and the selection of different statistical estimators of spatial variability are all used to cope with data sets that diverge from the assumptions. The ability to identify true spatial variability in a data set depends to a great extent on ancillary knowledge of the underlying phenomenon measured. This detection process can also be improved with the inclusion of other attribute data. The crossvariogram, like the semivariogram, plots variability against separation distance, but for two datasets jointly, and uses one set of data to help explain and improve the description of variability in the other. For example, when interpolating a rainfall surface from point rainfall data, incorporating a highly correlated variable such as elevation could help improve the estimation of rainfall. In such a case, where the correlation is known, sampled elevation data could be used to help in the prediction of a rainfall surface, especially in those areas where rainfall sampling is sparse.

Kriging utilizes the model developed above to interpolate a surface. The model is used to derive spatial continuity information that defines how sample data will be weighted when combined to produce values for unknown points. The weights associated with the sample points are determined by direction and distance to other known points, as well as by the number and character of data points in a user-defined local neighborhood. The estimate at a location p is a weighted sum of the n neighborhood values:

Z_p = \sum_{i=1}^{n} W_i Z_i

The goal of kriging is to reduce the degree of variance error in the estimation across the surface. The variance error is a measure of the accuracy of the fit of the model and neighborhood parameters to the sample data, not of the actual measured surface. The variance error is given by:

\sigma_e^2 = \frac{\sum_{i=1}^{n} \left( Z_i - Z_p \right)^2}{n}

With kriging, the variance of the errors of the fit of the model is minimized; it is therefore known as a Best Linear Unbiased Estimator (B.L.U.E.). By fitting a smooth model of spatial variability to the sample data and by minimizing the error of the fit to the sample data, kriging tends to overestimate low values and underestimate high values (a smoothing effect). Kriging minimizes the error produced by the differences in the fit of the spatial continuity to each local neighborhood. In so doing, it produces a smooth surface.

Kriging equation system for three local neighborhood points (\mu is the Lagrange multiplier):

W_1 \gamma(h_{11}) + W_2 \gamma(h_{12}) + W_3 \gamma(h_{13}) + \mu = \gamma(h_{1p})
W_1 \gamma(h_{12}) + W_2 \gamma(h_{22}) + W_3 \gamma(h_{23}) + \mu = \gamma(h_{2p})
W_1 \gamma(h_{13}) + W_2 \gamma(h_{23}) + W_3 \gamma(h_{33}) + \mu = \gamma(h_{3p})
W_1 + W_2 + W_3 + 0 = 1

Ordinary Kriging is the most general and widely used of the kriging methods. It assumes that the constant mean is unknown. This is a reasonable assumption unless there is some scientific reason to reject it.

Universal Kriging assumes that there is an overriding trend in the data which can be modelled by a deterministic function, a polynomial. This polynomial is subtracted from the original measured points, and the autocorrelation is modelled from the random errors. Once the model is fit to the random errors, and before making a prediction, the polynomial is added back to the predictions to give meaningful results. Universal Kriging should only be used when there is a known trend in the data and a scientific justification to describe it.

Cokriging is an extension of kriging that uses a second set of points of a different attribute to assist in the prediction process. The two attributes must be highly correlated with each other to derive any benefit. The description of spatial variability of the added variable can be used in the interpolation process, particularly in areas where the original sample points are sparse.

In conditional simulation, a non-spatially dependent element of variability is added to the model previously developed. The variability of each interpolated point is used to randomly choose another estimate. The resulting surface maintains the spatial variability as defined by the semivariogram model, but also represents pixel-by-pixel variability; it is not smooth. Typically many of these surfaces (perhaps hundreds) are produced, each representing one model of reality. The surfaces differ from each other because of the random selection of estimates. Conditional simulation is best suited for developing multiple representations of a surface that may serve as inputs to a Monte Carlo analysis of a process model.
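A minimal sketch of solving this ordinary kriging system for three neighborhood points with NumPy; the semivariance values and observations are invented and would normally come from the fitted semivariogram model and the sample data:

import numpy as np

# gamma(h_ij): modelled semivariances between the three known points, and
# gamma(h_ip): semivariances between each known point and the location p
# to be estimated (all values invented for illustration).
g11, g12, g13 = 0.0, 0.30, 0.45
g22, g23, g33 = 0.0, 0.25, 0.0
g1p, g2p, g3p = 0.20, 0.35, 0.40

A = np.array([[g11, g12, g13, 1.0],
              [g12, g22, g23, 1.0],
              [g13, g23, g33, 1.0],
              [1.0, 1.0, 1.0, 0.0]])
b = np.array([g1p, g2p, g3p, 1.0])

w1, w2, w3, mu = np.linalg.solve(A, b)     # weights and Lagrange multiplier

# Estimate at p from the observed values at the three points (also invented).
z = np.array([4.2, 5.0, 4.7])
z_p = w1 * z[0] + w2 * z[1] + w3 * z[2]
print(w1 + w2 + w3, z_p)                   # the weights sum to 1

The last row of the matrix enforces the unbiasedness condition (weights summing to one), which is why the Lagrange multiplier appears in the system.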

Image Processing

Digital Image Processing is largely concerned with four basic operations: image restoration, image enhancement, image classification, and image transformation. Image restoration is concerned with the correction and calibration of images in order to achieve as faithful a representation of the earth's surface as possible, a fundamental consideration for all applications. Image enhancement is predominantly concerned with the modification of images to optimize their appearance to the visual system. Visual analysis is a key element, even in digital image processing, and the effects of these techniques can be dramatic. Image classification refers to the computer-assisted interpretation of images, an operation that is vital to GIS. Finally, image transformation refers to the derivation of new imagery as a result of some mathematical treatment of the raw image bands.

Image Restoration
Remotely sensed images of the environment are typically taken at a great distance from the earth's surface. As a result, there is a substantial atmospheric path that electromagnetic energy must pass through before it reaches the sensor. Depending upon the wavelengths involved and the atmospheric conditions (such as particulate matter, moisture content and turbulence), the incoming energy may be substantially modified. The sensor itself may then modify the character of that data, since it may combine a variety of mechanical, optical and electrical components that serve to modify or mask the measured radiant energy. In addition, during the time the image is being scanned, the satellite is following a path that is subject to minor variations at the same time that the earth is moving underneath. The geometry of the image is thus in constant flux. Finally, the signal needs to be telemetered back to earth, and subsequently received and processed to yield the final data we receive. Consequently, a variety of systematic and apparently random disturbances can combine to degrade the quality of the image we finally receive. Image restoration seeks to remove these degradation effects. Broadly, image restoration can be broken down into the two sub-areas of radiometric restoration and geometric restoration.

Radiometric Restoration
Radiometric restoration refers to the removal or diminishment of distortions in the degree of electromagnetic energy registered by each detector. A variety of agents can cause distortion in the values recorded for image cells. Some of the most common distortions for which correction procedures exist include: uniformly elevated values, due to atmospheric haze, which preferentially scatters short wavelength bands (particularly the blue wavelengths); striping, due to detectors going out of calibration; random noise, due to unpredictable and unsystematic performance of the sensor or transmission of the data; and scan line drop out, due to signal loss from specific detectors. It is also appropriate to include here procedures that are used to convert the raw, unitless relative reflectance values (known as digital numbers, or DN) of the original bands into true measures of radiance.
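A minimal sketch of the DN-to-radiance conversion using a generic linear gain/offset calibration, together with a simple dark-object haze correction; the gain and offset values are placeholders, not those of any particular sensor:

import numpy as np

# Raw 8-bit digital numbers of one band (invented values).
dn = np.array([[37, 52, 121],
               [64, 88, 200]], dtype=float)

# Linear calibration coefficients, normally taken from the image header
# or the sensor handbook (placeholder values here).
gain   = 0.7657    # radiance per DN
offset = -1.52     # radiance at DN = 0

radiance = gain * dn + offset          # typical units: W m-2 sr-1 um-1

# A simple haze correction: subtract the band minimum, assuming the darkest
# pixel should have (near) zero reflected radiance.
haze_corrected = radiance - radiance.min()
print(haze_corrected)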

Geometric Restoration
For mapping purposes, it is essential that any form of remotely sensed imagery be accurately registered to the proposed map base. With satellite imagery, the very high altitude of the sensing platform results in minimal image displacements due to relief. As a result, registration can usually be achieved through the use of a systematic rubber-sheet transformation process that gently warps an image (through the use of polynomial equations) based on the known positions of a set of widely dispersed control points. With aerial photographs, however, the process is more complex. Not only are there systematic distortions related to tilt and varying altitude, but variable topographic relief leads to very irregular distortions (differential parallax) that cannot be removed through a rubber-sheet transformation procedure. In these instances, it is necessary to use photogrammetric rectification to remove these distortions and provide accurate map measurements. Failing this, the central portions of high altitude photographs can be resampled with some success.

Satellite-based scanner imagery contains a variety of inherent geometric problems such as skew (caused by rotation of the earth underneath the satellite as it is in the process of scanning a complete image) and scanner distortion (caused by the fact that the instantaneous field of view (IFOV) covers more territory at the ends of scan lines, where the angle of view is very oblique, than in the middle). With commercially marketed satellite imagery, such as Landsat, IRS and SPOT, most elements of systematic geometric restoration associated with image capture are corrected by the distributors of the imagery. Thus, for the end user, the only geometric operation that typically needs to be undertaken is a rubber-sheet resampling in order to rectify the image to a map base. Many commercial distributors will perform this rectification for an additional fee.

Photogrammetry is the science of taking spatial measurements from aerial photographs. In order to provide a full rectification, it is necessary to have stereoscopic images: photographs which overlap enough (e.g., 60% in the along-track direction and 10% between flight lines) to provide two independent images of each part of the landscape. Using these stereoscopic pairs and ground control points of known position and height, it is possible to fully recreate the geometry of the viewing conditions, and thereby not only rectify measurements from such images, but also derive measurements of terrain height. The rectified photographs are called orthophotos. The height measurements may be used to produce digital elevation models.

Image Enhancement
Image enhancement is concerned with the modification of images to make them more suited to the capabilities of human vision. Regardless of the extent of digital intervention, visual analysis invariably plays a very strong role in all aspects of remote sensing. While the range of image enhancement techniques is broad, the following fundamental issues form the backbone of this area:

Contrast Stretch
Digital sensors have a wide range of output values to accommodate the strongly varying reflectance values that can be found in different environments. However, in any single environment, it is often the case that only a narrow range of values will occur over most areas. Grey level distributions thus tend to be very skewed. Contrast manipulation procedures are therefore essential to most visual analyses. This is normally used for visual analysis only; original data values are used in numeric analyses.

Composite Generation
For visual analysis, color composites make fullest use of the capabilities of the human eye.
Depending upon the graphics system in use, composite generation ranges from simply selecting the bands to use, to more involved procedures of band combination and associated contrast stretch.
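A minimal sketch of a linear contrast stretch with 2% saturation at both ends, intended for display only (as noted above, the original values are kept for numeric analyses); the input band is invented random data:

import numpy as np

band = np.random.default_rng(0).integers(40, 90, size=(100, 100)).astype(float)

# Stretch the 2nd..98th percentile of the input to the full 0..255 display range.
lo, hi = np.percentile(band, (2, 98))
stretched = np.clip((band - lo) / (hi - lo), 0.0, 1.0) * 255.0
display = stretched.astype(np.uint8)

print(band.min(), band.max(), display.min(), display.max())

Applying such a stretch independently to three bands and assigning them to the red, green and blue display channels is one simple way of producing the color composites mentioned above.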

Digital Filtering
One of the most intriguing capabilities of digital analysis is the ability to apply digital filters. Filters can be used to provide edge enhancement (sometimes called crispening), to remove image blur, and to isolate lineaments and directional trends, to mention just a few applications. Filtering creates a new image by calculating new values using a mathematical operation on the original cell value and its neighbors. The nature of this operation is determined by the values stored in a 3 by 3, 5 by 5, 7 by 7, or variable-sized template or kernel that is centered over each pixel as it is processed. The simplest filter is a mean filter, in which the new value is the average of the original value and that of its neighbors; the result is an image that is "smoother" than the original (a short convolution sketch is given after the classification overview below). Mean and Gaussian filters are often used to generalize an image or to smooth terrain data after interpolation. The mode filter, which assigns the most common value to the center pixel, is also commonly used to remove very small areas from a qualitative image, or slivers left after rasterizing vector polygons. A median filter is useful for random noise removal in quantitative images. The adaptive box filter is good for grainy random noise and also for data where pixel brightness is related to the image scene but with an additive or multiplicative noise factor. Edge enhancement filters, such as the Laplacian, accentuate areas of change in continuous surfaces. High pass filters emphasize areas of abrupt change relative to those of gradual change. The Sobel edge detector extracts edges between features or areas of abrupt change. Finally, the user-defined filter option allows the user to specify any kernel size as well as the mathematical operation, and is useful for simulation modeling.

Image Classification
Image classification refers to the computer-assisted interpretation of remotely sensed images. Although some procedures are able to incorporate information about such image characteristics as texture and context, the majority of image classification is based solely on the detection of the spectral signatures (i.e., spectral response patterns) of land cover classes. The success with which this can be done will depend on two things: 1) the presence of distinctive signatures for the land cover classes of interest in the band set being used; and 2) the ability to reliably distinguish these signatures from other spectral response patterns that may be present.

There are two general approaches to image classification: supervised and unsupervised. They differ in how the classification is performed. In the case of supervised classification, the software system delineates specific landcover types based on statistical characterization data drawn from known examples in the image (known as training sites). With unsupervised classification, however, clustering software is used to uncover the commonly occurring landcover types, with the analyst providing interpretations of those cover types at a later stage.

Supervised Classification
The first step in supervised classification is to identify examples of the information classes (i.e., land cover types) of interest in the image. These are called training sites. The software system is then used to develop a statistical characterization of the reflectances for each information class.
This stage is often called signature analysis and may involve developing a characterization as simple as the mean or the range of reflectances on each band, or as complex as detailed analyses of the mean, variances and covariances over all bands. Once a statistical characterization has been achieved for each information class, the image is then classified by examining the reflectances for each pixel and making a decision about which of the signatures it resembles most. There are several techniques for making these decisions, called classifiers. Most Image Processing software will offer several, based on varying decision rules.
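Returning to the digital filtering described above, a minimal sketch of a 3 x 3 mean filter and a Laplacian edge-enhancement kernel using SciPy; the test image is invented random data:

import numpy as np
from scipy import ndimage

image = np.random.default_rng(1).normal(100.0, 10.0, size=(50, 50))

# 3 x 3 mean filter: each output cell is the average of the cell and its neighbors.
smoothed = ndimage.uniform_filter(image, size=3, mode='nearest')

# Laplacian kernel: accentuates areas of abrupt change (edge enhancement).
laplacian = np.array([[0, -1,  0],
                      [-1, 4, -1],
                      [0, -1,  0]], float)
edges = ndimage.convolve(image, laplacian, mode='nearest')

print(smoothed.shape, edges.shape)

The same convolution mechanism underlies the other kernel filters mentioned above; only the kernel values (and, for median or mode filters, the ranking rule) change.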

Hard Classifiers
The distinguishing characteristic of hard classifiers is that they all make a definitive decision about the landcover class to which any pixel belongs. Five supervised classifiers fall into this group: Parallelepiped, Minimum Distance to Means, Maximum Likelihood, Linear Discriminant Analysis, and Artificial Neural Network. They differ only in the manner in which they develop and use a statistical characterization of the training site data. Of the five, the Maximum Likelihood procedure is unquestionably the most widely used classifier in the classification of remotely sensed imagery.

Minimum-Distance-to-Means
Based on training site data, this classifier characterizes each class by its mean position on each band. For example, if only two bands were to be used, each axis of a two-dimensional band space indicates reflectance on one of the bands. Using the mean reflectances on these bands as X,Y coordinates, the position of the class mean can be placed in this band space. Similarly, the position of any unclassified pixel can also be placed in this space by using its reflectances on the two bands as its coordinates. To classify an unknown pixel, this classifier then examines the distance from that pixel to each class mean and assigns it the identity of the nearest class. Despite the simplicity of this approach, it actually performs quite well. It is reasonably fast and can employ a maximum distance threshold, which allows any pixels that are unlike any of the given classes to be left unclassified. However, the approach does suffer from problems related to signature variability. By characterizing each class by its mean band reflectances only, it has no knowledge of the fact that some classes are inherently more variable than others. This, in turn, can lead to misclassification. The problem of variability can be overcome if the concept of raw distance is replaced by that of standardized distance, which expresses distance in standard deviation units: standardized distance = (original distance - mean) / standard deviation.

Parallelepiped
The parallelepiped procedure characterizes each class by the range of expected values on each band. This range may be defined by the minimum and maximum values found in the training site data for that class, or (more typically) by some standardized range of deviations from the mean (e.g., ±2 standard deviations). With multispectral image data, these ranges form an enclosed box-like polygon of expected values known as a parallelepiped. Unclassified pixels are then given the class of any parallelepiped box they fall within. If a pixel does not fall within any box, it is left unassigned. This classifier has the advantage of speed and the ability to take into account the differing variability of classes. In addition, the rectangular shape accommodates the fact that variability may be different along different bands. However, the classifier generally performs rather poorly because of the potential for overlap of the parallelepipeds. For example, parallelepipeds for conifer and deciduous forest may overlap, leaving a zone of ambiguity in the overlap area; clearly, any choice of a class for pixels falling within the overlap is arbitrary. It may seem that the problem of overlapping parallelepipeds would be unlikely. However, overlaps are extremely common because image data are often highly correlated between bands. This leads to a cigar-shaped distribution of likely values for a given class that is very poorly approximated by a parallelepiped. A classifier based on distance to the class means, such as the Minimum-Distance-to-Means procedure with standardized distances, would not encounter this problem, since the line of separation between such classes would fall between the two distributions. In this context of correlation between bands (which is virtually guaranteed), however, the parallelepiped procedure produces both zones of overlap and highly non-representative areas that really should not be included in the class. In general, then, the parallelepiped procedure should be avoided, despite the fact that it is the fastest of the supervised classifiers.
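A minimal sketch of the Minimum-Distance-to-Means decision rule in its standardized-distance form, with invented two-band class signatures and pixel values:

import numpy as np

# Training statistics per class and band (invented): mean and standard deviation.
class_names = ['water', 'forest', 'urban']
means = np.array([[20.0, 10.0],     # band 1, band 2 means for each class
                  [40.0, 80.0],
                  [90.0, 60.0]])
stds  = np.array([[ 3.0,  2.0],
                  [ 8.0, 12.0],
                  [15.0, 10.0]])

# Pixels to classify: reflectance in band 1 and band 2.
pixels = np.array([[22.0, 11.0],
                   [85.0, 55.0]])

# Standardized distance: the difference from the class mean in each band is
# divided by that class's standard deviation before the bands are combined;
# each pixel is then assigned to the nearest class.
d = np.sqrt((((pixels[:, None, :] - means[None, :, :]) / stds[None, :, :]) ** 2).sum(axis=2))
labels = [class_names[i] for i in d.argmin(axis=1)]
print(labels)

A maximum-distance threshold could be added so that pixels far from every class mean are left unclassified, as described above.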

Maximum Likelihood
To compensate for the main deficiencies of both the Parallelepiped and Minimum-Distance-to-Means procedures, the Maximum Likelihood procedure is used. The Maximum Likelihood procedure is based on Bayesian probability theory. Using the information from a set of training sites, the procedure uses the mean and variance/covariance data of the signatures to estimate the posterior probability that a pixel belongs to each class. In many ways, the procedure is similar to the Minimum-Distance-to-Means procedure with the standardized distance option. The difference is that the Maximum Likelihood procedure accounts for intercorrelation between bands. By incorporating information about the covariance between bands as well as their inherent variance, the Maximum Likelihood procedure produces what can be conceptualized as an elliptical zone of characterization of the signature. In actuality, it calculates the posterior probability of belonging to each class, where the probability is highest at the mean position of the class and falls off in an elliptical pattern away from the mean.
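A minimal sketch of the Maximum Likelihood decision rule with SciPy, using invented class means and variance/covariance matrices for two bands and equal prior probabilities:

import numpy as np
from scipy.stats import multivariate_normal

class_names = ['conifer', 'deciduous']
means = [np.array([30.0, 55.0]), np.array([45.0, 70.0])]
covs  = [np.array([[25.0, 15.0], [15.0, 30.0]]),     # the covariance terms capture
         np.array([[60.0, 40.0], [40.0, 50.0]])]     # the inter-band correlation

pixels = np.array([[32.0, 58.0],
                   [50.0, 74.0]])

# Log-likelihood of each pixel under each class signature; with equal priors,
# the class with the highest likelihood also has the highest posterior probability.
loglik = np.column_stack([multivariate_normal.logpdf(pixels, mean=m, cov=c)
                          for m, c in zip(means, covs)])
labels = [class_names[i] for i in loglik.argmax(axis=1)]
print(labels)

Because the covariance matrix enters the Gaussian density, the zone of characterization around each class mean is elliptical rather than circular, which is exactly the property described above.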

Soft Classifiers
Contrary to hard classifiers, soft classifiers do not make a definitive decision about the land cover class to which each pixel belongs. Rather, they develop statements of the degree to which each pixel belongs to each of the land cover classes being considered. Thus, for example, a soft classifier might indicate that a pixel has a 0.72 probability of being forest, a 0.24 probability of being pasture, and a 0.04 probability of being bare ground. A hard classifier would resolve this uncertainty by concluding that the pixel was forest. A soft classifier, however, makes this uncertainty explicitly available, which can be useful for a variety of reasons. For example, the analyst might conclude that the uncertainty arises because the pixel contains more than one cover type and could use the probabilities as indications of the relative proportion of each. This is known as subpixel classification. Alternatively, the analyst may conclude that the uncertainty arises because of unrepresentative training site data and therefore may wish to combine these probabilities with other evidence before hardening the decision to a final conclusion.

Hyperspectral Classifiers
All of the classifiers mentioned above operate on multispectral imagery: images where several spectral bands have been captured simultaneously as independently accessible image components. Extending this logic to many bands produces what has come to be known as hyperspectral imagery. Although there is essentially no difference between hyperspectral and multispectral imagery (i.e., they differ only in degree), the volume of data and high spectral resolution of hyperspectral images does lead to differences in the way they are handled.

Unsupervised Classification
In contrast to supervised classification, where we tell the system about the character (i.e., signature) of the information classes we are looking for, unsupervised classification requires no advance information about the classes of interest. Rather, it examines the data and breaks it into the most prevalent natural spectral groupings, or clusters, present in the data. The analyst then identifies these clusters as landcover classes through a combination of familiarity with the region and ground truth visits. The logic by which unsupervised classification works is known as cluster analysis. A cluster analysis performs classification based on a set of input images using a multi-dimensional histogram peak technique. It is important to recognize, however, that the clusters unsupervised classification produces are not information classes, but spectral classes (i.e., they group together features (pixels) with similar reflectance patterns). It is thus usually the case that the analyst needs to reclassify spectral classes into information classes. For example, the system might identify classes for asphalt and cement, which the analyst might later group together, creating an information class called pavement. With suitable ground truth and accuracy assessment procedures, this procedure can provide a remarkably rapid means of producing quality land cover data on a continuing basis.

Accuracy Assessment
A vital step in the classification process, whether supervised or unsupervised, is the assessment of the accuracy of the final images produced. This involves identifying a set of sample locations that are visited in the field. The land cover found in the field is then compared to that which was mapped in the image for the same location. Statistical assessments of accuracy may then be derived for the entire study area, as well as for individual classes. In an iterative approach, the error matrix produced (sometimes referred to as a confusion matrix) may be used to identify particular cover types for which errors are in excess of that desired. The information in the matrix about which covers are being mistakenly included in a particular class (errors of commission) and those that are being mistakenly excluded from that class (errors of omission) can be used to refine the classification approach.

Image Transformation
Digital Image Processing offers a limitless range of possible transformations on remotely sensed data.

Principal Components Analysis
Principal Components Analysis (PCA) is a linear transformation technique related to Factor Analysis. Given a set of image bands, PCA produces a new set of images, known as components, that are uncorrelated with one another and are ordered in terms of the amount of variance they explain from the original band set. PCA has traditionally been used in remote sensing as a means of data compaction. For a typical multispectral image band set, it is common to find that the first two or three components are able to explain virtually all of the original variability in reflectance values. Later components thus tend to be dominated by noise effects. By rejecting these later components, the volume of data is reduced with no appreciable loss of information. Given that the later components are dominated by noise, it is also possible to use PCA as a noise removal technique. By zeroing out the coefficients of the noise components in the reverse transformation, a new version of the original bands can be produced with these noise elements removed. Recently, PCA has also been shown to have special application in environmental monitoring. In cases where multispectral images are available for two dates, the bands from both images are submitted to a PCA as if they all came from the same image. In these cases, changes between the two dates tend to emerge in the later components. More dramatically, if a time series of NDVI images (or a similar single-band index) is submitted to the analysis, a very detailed analysis of environmental changes and trends can be achieved. In this case, the first component will show the typical NDVI over the entire series, while each successive component illustrates change events in an ordered sequence of importance. By examining these images, along with graphs of their correlation with the individual bands in the original series, important insights can be gained into the nature of changes and trends over the time series.

Cluster Analysis
The procedure can best be understood from the perspective of a single band. If one had a single band of data, a histogram of the reflectance values on that band would show a number of peaks and valleys.
The peaks represent clusters of more frequent values associated with commonly occurring cover types.

The Cluster Analysis procedure thus searches for peaks by looking for cases where the frequency is higher than that of its immediate neighbors on either side. In the case of two bands, these peaks would be hills, while for three bands they would be spheres, and so on. The concept can thus be extended to any number of bands. Once the peaks have been located, each pixel in the image can then be assigned to its closest peak, with each such class being labeled as a cluster. It is the analyst's task to then identify the thematic meaning of each cluster by looking at the cluster image and comparing it to ground features.
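A minimal sketch of the Principal Components Analysis of a multispectral band set with NumPy, as described under Image Transformation above; the band stack is invented, artificially correlated random data:

import numpy as np

# Band stack: n_bands x rows x cols (invented correlated bands).
rng = np.random.default_rng(2)
base = rng.normal(size=(100, 100))
bands = np.stack([base + 0.1 * rng.normal(size=(100, 100)) for _ in range(4)])
n_bands, rows, cols = bands.shape

# Arrange as (pixels x bands) and compute the band-to-band covariance matrix.
X = bands.reshape(n_bands, -1).T
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

# Eigen-decomposition: eigenvectors are the component loadings,
# eigenvalues give the variance explained by each component.
eigval, eigvec = np.linalg.eigh(cov)
order = eigval.argsort()[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

components = (Xc @ eigvec).T.reshape(n_bands, rows, cols)
explained = eigval / eigval.sum()
print(explained)      # the first component should explain most of the variance

Zeroing the scores of the noisy later components and applying the reverse transformation (scores multiplied by the transposed loadings, plus the band means) would give the noise-reduced version of the original bands mentioned above.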
