
IPRC HUYE

CIVIL ENGINEERING DEPARTMENT

LAND SURVEYING OPTION

LSVMC 601: GIS IN MAPPING (CARTOGRAPHY)

HANDOUT NOTES ON:

L.U.3. PERFORM MAP MAKING

L.O.3.4. PLAN AND MAP GENERALIZATION

Cartographic Generalization

Map generalization is defined as the process of reducing the information content of maps due to
scale change, map purpose, intended audience, and/or technical constraints. Generalization
methods are used to remove detail that is unnecessary for an application, in order to reduce data
volume and speed up operation. One constraint on the amount of generalization is introduced by
the scale of the map. The smaller the scale, the more the amount of generalization and vise versa.
This is because the amount of space available to show any given feature decreases as scale
decreases.
Generalization is also strongly influenced by the purpose for which the map was designed. Two
maps of a given area may vary significantly in content even if they are of similar scale,
depending upon their purpose. If generalization is done properly, the distinguishing characteristic
of the mapped features will still be effectively represented.
Generalization Problem
When a map is reduced in scale, there is a tendency for the clarity of the map to be reduced as
well. Features such as roads, rivers, or cities often end up so close together on the reduced-scale map that they collide, coalesce, or become so tiny as to be essentially invisible to the person using the map. Cartographic generalization attempts to resolve this issue by redrawing the map in a different fashion when the scale is reduced, producing a more usable small-scale map.
Models of Generalization
To better understand the complexity of generalization, researchers have attempted to design
models of the process. Some efforts have focused on fundamental operations and the relationship
among them, whereas others have created complex models.
Robinson et al.'s Model
Arthur Robinson and his colleagues (1978) developed one of the first formal models or
frameworks to better understand the generalization process. They separated the process into two
major steps: selection (a preprocessing step) and the actual process of generalization.
Generalization involves the processes of simplification, classification and symbolization.
Selection
One means of generalization that cartographers use is the selection and retention of the more
important features in an area and the elimination of the less important ones. If a series of lakes is
shown, for example, some of the smaller lakes in the group are eliminated as scale is reduced, so
that the larger ones can be retained.
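As an illustration of selection in a digital setting, the sketch below retains only the lakes whose area exceeds a minimum legible size. This is a minimal sketch: the lake geometries and the threshold are hypothetical, and the shapely library is assumed to be available; real thresholds come from the map specification for the target scale.

```python
# A minimal sketch of selection: retain only lakes whose area exceeds a
# scale-dependent threshold. Geometries and threshold are hypothetical.
from shapely.geometry import Polygon

lakes = [
    Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]),          # large lake, kept
    Polygon([(6, 6), (6.5, 6), (6.5, 6.4), (6, 6.4)]),  # small lake, dropped
]

MIN_AREA = 1.0  # smallest area (map units^2) still legible at the target scale

selected = [lake for lake in lakes if lake.area >= MIN_AREA]
print(f"{len(selected)} of {len(lakes)} lakes retained")
```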
Simplification
A second technique of generalization is the simplification of the shapes of the features retained
on the map. To continue the previous example, this approach means that the shorelines of the
lakes are made less complex at smaller scales. Ideally, enough detail is retained so that the major
characteristics of the features are identifiable. A coastline with many bays and headlands, for
example, should not be reduced to a smooth curve. Instead, sufficient complexity should be retained so that the nature of the actual coastline can be recognized, even though the individual features are not all retained.
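The same idea can be sketched digitally. Assuming the shapely library is available, the fragment below simplifies a hypothetical shoreline with the library's built-in simplifier; the tolerance controls how much fine detail is planed away.

```python
# A minimal sketch of line simplification, assuming shapely is installed.
# The shoreline coordinates and the tolerance are illustrative only.
from shapely.geometry import LineString

shoreline = LineString([(0, 0), (1, 0.4), (2, -0.3), (3, 0.5), (4, 0), (5, 0.1)])

# A larger tolerance removes more vertices; it should be chosen from the
# target scale so that major bays and headlands survive.
simplified = shoreline.simplify(tolerance=0.5, preserve_topology=True)

print(len(shoreline.coords), "->", len(simplified.coords), "vertices")
```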
Combination
Another step that cartographers take in the generalization process is the combination of two or
more similar features into a single symbol. If there are many small wooded areas in a region, for
example, two or more of these items may be combined and shown as one. If the combination of
features is carried too far, however, there is a danger that the map user will not be aware that the
region is characterized by scattered areas of woodland, separated by unwooded areas.
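One hedged way to sketch combination in code is a dilate-merge-erode sequence: polygons are buffered outward so that small gaps close, merged, and then buffered back inward. The woodland geometries and the gap distance below are hypothetical, and shapely is assumed.

```python
# Sketch: combine wooded polygons lying within a given distance of one
# another into a single symbol, via dilate-union-erode. Values are invented.
from shapely.geometry import Polygon
from shapely.ops import unary_union

woods = [
    Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
    Polygon([(1.2, 0), (2.2, 0), (2.2, 1), (1.2, 1)]),  # 0.2 units away
]

GAP = 0.3  # gaps narrower than this would vanish at the target scale

# Buffer outward to bridge small gaps, merge, then buffer back inward.
combined = unary_union([w.buffer(GAP / 2) for w in woods]).buffer(-GAP / 2)
print(combined.geom_type)  # a single Polygon if the gap was bridged
```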
Location Shift and Size Exaggeration
The generalization of a particular map may also require the cartographer to shift the location of
some features. For example, a road, a railroad, and a river may all be crowded into a narrow valley, but the purpose of the map makes it necessary to show all of them. Because of the need for legibility, the weight of the linework on the map must be sufficient to be distinguishable, and there must be sufficient space between the lines to keep them from running together.
Faced with the incompatible requirements of retaining features as well as legibility, the
cartographer may exaggerate the size of the features to improve visibility and may shift their
location slightly to gain the necessary space. This combination of techniques retains legibility
even though the features lie in slightly shifted positions and occupy more space than they would
if they were represented to precise scale.
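A minimal sketch of such a location shift, assuming shapely: the river is translated sideways until the gap between it and the road reaches a hypothetical legibility minimum.

```python
# Sketch: shift a river line sideways so it no longer crowds a road that
# shares the same valley. The geometries and the minimum separation are
# hypothetical.
from shapely.geometry import LineString
from shapely.affinity import translate

road = LineString([(0, 0), (10, 0)])
river = LineString([(0, 0.1), (10, 0.1)])  # too close to the road to read

MIN_SEPARATION = 0.5  # smallest gap still legible at the target scale

shift = MIN_SEPARATION - road.distance(river)
if shift > 0:
    river = translate(river, yoff=shift)  # move the river up, away from the road
print(road.distance(river))  # now >= MIN_SEPARATION
```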
Kilpelainen's Model
Kilpelainen developed alternative frameworks for the representation of multi-scale databases.
Assuming a master cartographic database called the Digital Landscape Model (DLM), she
proposed a series of methods for generating smaller scale Digital Cartographic Models (DCMs).

The master DLM is the largest scale, most accurate database possible, whereas secondary DLMs
are generated for smaller scale applications. The DLM is only a computer representation and
cannot be visualized. DCMs, on the other hand, are the actual graphical representations, derived
through a generalization or symbolization of the DLM. In her model, each smaller scale DLM is generated either directly from the initial master database or from the previous version; a separate DLM is created for each scale or resolution, and a DCM is generated directly from each DLM. The master DLM is thus used to generate smaller scale DLMs, which are then used to generate a DCM at each level. The assumption is that DCMs are generated on an as-needed basis.
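The structure of this model can be sketched in code. The classes and placeholder functions below are illustrative only, not from Kilpelainen's work; they simply make the separation between the data-only DLM and the drawable DCM concrete.

```python
# A minimal sketch of the multi-scale structure: a master DLM, smaller-scale
# DLMs derived from it, and a DCM generated from each. Names are illustrative.
from dataclasses import dataclass

@dataclass
class DLM:                 # Digital Landscape Model: data only, never drawn
    scale: int
    features: list

@dataclass
class DCM:                 # Digital Cartographic Model: the drawable product
    scale: int
    symbols: list

def generalize(dlm: DLM, target_scale: int) -> DLM:
    # placeholder: apply selection/simplification suited to target_scale
    return DLM(target_scale, dlm.features)

def symbolize(dlm: DLM) -> DCM:
    # placeholder: convert features to graphic symbols
    return DCM(dlm.scale, list(dlm.features))

master = DLM(25_000, ["roads", "rivers", "buildings"])
dlm_100k = generalize(master, 100_000)   # smaller-scale DLM from the master
dcm_100k = symbolize(dlm_100k)           # DCM generated on an as-needed basis
```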

Brassel and Weibel's Model


Brassel and Weibel (1988) have worked extensively at the University of Zurich in developing
methods for terrain generalization. Their research has two primary objectives: to design a
strategy for terrain generalization that is adaptive to different terrain types, scales, and map
purposes, and to implement this strategy in an automated environment as fully as possible.
Toward these ends, they have developed a model of terrain generalization that consists of five
major stages: structure recognition, process recognition, process modeling, process execution,
and data display and evaluation of results.
McMaster and Shea's Model
In an attempt to create a comprehensive conceptual model of the generalization process, McMaster and Shea (1992) identified three significant components: the theoretical objectives, or why to generalize; the cartometric evaluation, or when to generalize; and the fundamental operations, or how to generalize.

Theoretical Objectives of Generalization

The theoretical or conceptual elements of generalization include reducing complexity, maintaining spatial accuracy, maintaining aesthetic quality, maintaining a logical hierarchy, and consistently applying the rules of generalization. Reducing complexity is perhaps the most significant goal of generalization. The question for the cartographer is relatively straightforward: how does one take a map at a scale of, perhaps, 1:24,000 and reduce it to 1:100,000? More important, how does the cartographer reduce the information content so that it is appropriate for the scale? Obviously, the complexity of detail that is provided at a scale of 1:24,000 cannot be represented at 1:100,000; some features must be eliminated and some detail must be modified. For centuries, through considerable experience, cartographers developed a sense of what constituted appropriate information content.
The figure below depicts the very same features, a portion of Ponta Creek in Kemper and Lauderdale counties, Mississippi, at four different scales: 1:24,000, 1:50,000, 1:100,000, and 1:250,000. These effectively show the significantly different information content as one reduces scale from 1:24,000 to 1:250,000. In the top portion of the illustration, the general form of the line stays the same, although the fine-level detail is lost. The bottom part of the illustration depicts an enlargement of the smaller scale features to match the features at 1:24,000. Note, for instance, the enlargement of the 1:250,000 scale line by 1,041.67% to match Ponta Creek at 1:24,000. In this case the cartographer has manually generalized Ponta Creek through a series of transformations including simplification, smoothing, and enhancement as a holistic process, unlike current computer approaches that require a separation of these often linked processes. The set of decisions required to generalize cartographic features based on their inherent complexity is difficult if not impossible to quantify.
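As a check on the quoted figure, enlarging a line from 1:250,000 to 1:24,000 requires a linear scale factor of 250,000 / 24,000 ≈ 10.4167, which corresponds to an enlargement of approximately 1,041.67%.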

Framework for the Fundamental Operations
Most of the research in generalization assumes that the process can be broken down into a series of logical operations that can be classified according to the type of geometry of the feature. For instance, a simplification operator is designed for linear features, whereas an amalgamation operator works on areal features. The types of generalization operations for vector and raster processing are fundamentally different. Vector-based operations require more complicated strategies because they operate on strings of x-y coordinate pairs and require complex searching strategies. In raster-based generalization, it is much easier to determine the proximity relationships that are often the basis for determining conflict among features. Fig 3.0 provides graphic depictions of some key operations.
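To make the contrast concrete, the sketch below shows how directly a proximity conflict can be detected in a raster: a single pass over a small hypothetical grid flags cells of one feature class that have the other class in their 3x3 neighborhood. The grid values and class codes are invented for illustration.

```python
# Sketch of raster proximity testing: flag cells where two feature classes
# meet within a 3x3 neighborhood. Grid and class codes are hypothetical.
import numpy as np

grid = np.array([
    [1, 1, 0, 0],
    [0, 1, 2, 0],    # class 1 (e.g. road) adjacent to class 2 (e.g. river)
    [0, 0, 2, 2],
])

conflict = np.zeros_like(grid, dtype=bool)
rows, cols = grid.shape
for r in range(rows):
    for c in range(cols):
        if grid[r, c] == 1:
            # examine the 3x3 neighborhood for the other feature class
            window = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            conflict[r, c] = (window == 2).any()

print(conflict.any())  # True: the two classes collide somewhere
```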

Generalization in the Digital Domain


Unfortunately, a process (generalization) that was reasonably well understood for paper maps, and even codified in certain instances by agencies such as the United States Geological Survey, has become a classical “ill-defined” problem in the digital domain. The problem is how to identify the appropriate “techniques” for computer algorithms to accomplish what had been for many centuries a manual process.
In a digital environment, Robert McMaster and Stuart Shea (1992) noted that “the generalization process supports a variety of tasks, including digital data storage reduction, scale manipulation, and statistical classification and symbolization.” Digital generalization can be defined as the process of deriving, from a data source, a symbolically or digitally encoded cartographic data set through the application of spatial and attribute transformations. McMaster and Shea listed the objectives of digital generalization as: the reduction in scope and amount, type, and cartographic portrayal of mapped or encoded data consistent with the chosen map purpose and intended audience; and the maintenance of graphical clarity at the target scale. The theoretical “problem” of generalization in the digital domain is straightforward: the identification of areas to be generalized and the application of appropriate operations.
When is Generalization Required?
In a digital cartographic environment, it is necessary to identify those specific conditions under which generalization will be required. Although many such conditions can be identified, six of the fundamental conditions include:
i) Congestion
ii) Coalescence
iii) Conflict
iv) Complication
v) Inconsistency
vi) Imperceptibility
Congestion refers to the problem that arises when, under scale reduction, too many objects are compressed into too small a space, resulting in overcrowding due to high feature density. Significant congestion results in decreased communication, for instance where many buildings are in close proximity.
Coalescence refers to the condition in which features graphically collide due to scale change. In these situations, features actually touch. This condition thus requires the implementation of the displacement operation.
The condition of conflict results when, due to generalization, an inconsistency between or among
features occurs. For instance, if generalization of a coastline eliminated a bay with a city located
on it, either the city or the coastline would have to be moved to ensure that the urban area
remained on the coast. Such spatial conflicts are difficult to both detect and correct.
The condition of complication is dependent on the specific conditions that exist in a defined
space. An example is a digital line that changes in complexity from one part to the next, such as
a coastline that progresses from very smooth to very crenulated.
Vector-Based Operations of Generalization
Despite the fact that many problems in generalization require the development and
implementation of mathematical, statistical or geometric measures, little work on generalization
measurements has been reported.
Unfortunately, the effects of the Douglas-Peucker algorithm, one of the most widely used simplification algorithms, are, as with most generalization processes, not consistent: the algorithm behaves differently depending on the geometric or geomorphological significance of the feature. In areas that are more “natural,” such as streams and rivers, the simplification produces a relatively good approximation.

However, in urban areas, where the census geography follows the rectangular street network,
the results are less satisfactory. In many cases, logical right angles are simplified to diagonals.
A significant problem in the generalization process involves the identification of the
appropriate tolerance values for simplification. Unfortunately, this is mostly an experimental
process, where the user has to test and retest values until an acceptable solution is empirically
found.
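To make the role of the tolerance value concrete, here is a minimal recursive sketch of the Douglas-Peucker algorithm. The coordinates and tolerance values are illustrative only; production systems would normally rely on a library implementation.

```python
# A minimal recursive Douglas-Peucker sketch. Vertices farther from the
# endpoint chord than the tolerance are kept; the rest are discarded.
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, tolerance):
    if len(points) < 3:
        return points
    # find the vertex farthest from the chord joining the endpoints
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            index, dmax = i, d
    if dmax <= tolerance:          # all vertices within tolerance: keep chord
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right       # join halves without repeating the split point

line = [(0, 0), (1, 0.9), (2, -0.2), (3, 1.1), (4, 0)]
print(douglas_peucker(line, tolerance=1.0))   # coarse: only major bends survive
print(douglas_peucker(line, tolerance=0.3))   # finer: more vertices retained
```

As the two calls suggest, the user has little choice but to experiment with tolerance values, as the text notes, until the output is acceptable for the target scale.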

Smoothing, although often assumed to be identical to simplification, is a much different process. The smoothing operation shifts the position of points to improve the appearance of the feature. Smoothing algorithms relocate points in an attempt to plane away small perturbations and capture only the most significant trends of a line (McMaster and Shea, 1992). Research has shown that a careful integration of simplification and smoothing routines can produce a simplified yet aesthetically acceptable result.
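A minimal sketch of the distinction, using numpy: a three-point moving average relocates interior vertices (smoothing) but, unlike simplification, leaves the vertex count unchanged. The coordinates are illustrative.

```python
# Sketch of smoothing as distinct from simplification: a 3-point moving
# average shifts interior vertices without removing any of them.
import numpy as np

line = np.array([(0, 0), (1, 0.8), (2, -0.6), (3, 0.7), (4, 0)], dtype=float)

smoothed = line.copy()
# average each interior vertex with its two neighbours; endpoints stay fixed
smoothed[1:-1] = (line[:-2] + line[1:-1] + line[2:]) / 3.0

print(smoothed.round(2))  # same number of vertices, small perturbations planed away
```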
Aggregation, as depicted in Fig 3.0, involves the joining together of multiple point features, such as a cluster of buildings. This process involves grouping point locations and representing them as areal units. The critical problem in this operation is determining both the density of points needed to identify a cluster to be aggregated and the boundary around the cluster. The most common approach is to create a Delaunay triangulation of the points and use measures of distance along the Delaunay edges to calculate density and boundary.
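The sketch below follows that approach under stated assumptions: scipy's Delaunay triangulation supplies the edges, edges shorter than a hypothetical threshold link points into clusters (via a small union-find), and each cluster's convex hull stands in for the boundary. Points and threshold are invented.

```python
# Sketch of point aggregation via Delaunay edges: short edges link points
# into clusters; each cluster's hull becomes the areal unit.
import numpy as np
from scipy.spatial import Delaunay, ConvexHull

points = np.array([[0, 0], [1, 0], [0.5, 1],         # tight cluster of buildings
                   [10, 10], [11, 10], [10.5, 11]])  # a second, distant cluster
MAX_EDGE = 2.0  # hypothetical: edges longer than this do not join a cluster

tri = Delaunay(points)
parent = list(range(len(points)))          # union-find structure for clustering

def find(i):
    while parent[i] != i:
        i = parent[i]
    return i

# merge points connected by a sufficiently short triangulation edge
for s in tri.simplices:
    for i, j in [(s[0], s[1]), (s[1], s[2]), (s[0], s[2])]:
        if np.linalg.norm(points[i] - points[j]) <= MAX_EDGE:
            parent[find(i)] = find(j)

clusters = {}
for idx in range(len(points)):
    clusters.setdefault(find(idx), []).append(idx)

for members in clusters.values():
    hull = ConvexHull(points[members])     # boundary of the aggregated unit
    print(points[members][hull.vertices])
```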
Amalgamation is the process of fusing together nearby polygons, and is needed for both non-continuous and continuous areal data. A non-continuous example is a series of small islands in close proximity whose size and detail cannot be depicted at the smaller scale. A continuous example is census tract data, where several tracts with similar statistical attributes can be joined together. Amalgamation is a very difficult problem in urban environments, where a series of complex buildings might need to be joined.
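A hedged sketch of the continuous case: adjacent census tracts are fused when their attribute values are similar. The geometries, rates, and similarity threshold are hypothetical, and shapely is assumed; a real implementation would test full adjacency rather than this simple sequential pass.

```python
# Sketch of amalgamation for continuous data: fuse adjacent census tracts
# whose attribute values are similar. All values are invented.
from shapely.geometry import Polygon
from shapely.ops import unary_union

tracts = [
    (Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]), 0.42),  # (geometry, rate)
    (Polygon([(1, 0), (2, 0), (2, 1), (1, 1)]), 0.44),  # similar neighbour
    (Polygon([(2, 0), (3, 0), (3, 1), (2, 1)]), 0.90),  # dissimilar neighbour
]

SIMILARITY = 0.05  # fuse neighbours whose rates differ by less than this

merged = [tracts[0]]
for geom, value in tracts[1:]:
    last_geom, last_value = merged[-1]
    if last_geom.touches(geom) and abs(value - last_value) < SIMILARITY:
        merged[-1] = (unary_union([last_geom, geom]), (last_value + value) / 2)
    else:
        merged.append((geom, value))

print(len(tracts), "tracts ->", len(merged), "amalgamated units")
```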
The collapse operation involves the conversion of geometry. For instance, a complex urban area might be collapsed to a point due to scale change and resymbolized with a geometric form such as a circle. A complex set of buildings may be replaced with a simple rectangle, which might also involve amalgamation.
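A minimal sketch of collapse, assuming shapely: a footprint below a hypothetical area threshold collapses to its centroid (area to point); otherwise it is re-symbolized as its minimum rotated rectangle (complex outline to simple rectangle).

```python
# Sketch of collapse: replace a detailed building footprint with a point
# symbol or a simple rectangle. Coordinates and threshold are hypothetical.
from shapely.geometry import Polygon

footprint = Polygon([(0, 0), (4, 0), (4, 2), (3, 2), (3, 1), (0, 1)])  # L-shaped

MIN_AREA = 10.0  # below this, the feature collapses to a point at target scale

if footprint.area < MIN_AREA:
    symbol = footprint.centroid                    # collapse: area -> point
else:
    symbol = footprint.minimum_rotated_rectangle   # or simplify to a rectangle

print(symbol.geom_type)
```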

Merging is the operation of fusing together groups of line features, such as parallel railway lines or the edges of a river or stream. This is a form of collapse, in which an areal feature is converted to a line. A simple solution is to average the two or more sides of a feature and use this average to calculate the new feature's position.
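That averaging solution can be sketched in a few lines of numpy, assuming the two lines are digitized in the same direction with matching vertex counts (real data would first need resampling):

```python
# Sketch of merging: average the two banks of a river (or twin rail lines)
# vertex by vertex to obtain a single centreline. Coordinates are invented.
import numpy as np

left_bank  = np.array([(0, 0.0), (1, 0.2), (2, 0.1), (3, 0.3)])
right_bank = np.array([(0, 0.6), (1, 0.8), (2, 0.7), (3, 0.9)])

centreline = (left_bank + right_bank) / 2.0  # midpoint of corresponding vertices
print(centreline)
```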
Refinement is another form of re-symbolization, much like collapse. However, refinement is an operation that involves reducing a multiple set of features, such as roads, buildings, and other types of urban structures, to a simplified representation. The concept behind refinement is that such complex geometries are re-symbolized to a simpler form, a “typification” of the objects.

Exaggeration is one of the most commonly applied generalization operations. Often it is necessary to amplify a specific part of an object to maintain clarity under scale reduction.
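A minimal sketch of exaggeration, assuming shapely: the feature is enlarged about its own centroid by a hypothetical factor so that it remains visible after scale reduction.

```python
# Sketch of exaggeration: enlarge a feature about its centroid so it stays
# legible after scale reduction. Geometry and factor are hypothetical.
from shapely.geometry import Polygon
from shapely.affinity import scale

bay = Polygon([(0, 0), (1, 0), (1, 0.4), (0, 0.4)])  # too narrow at target scale

# enlarge 1.5x in both axes about the geometry's own centre
exaggerated = scale(bay, xfact=1.5, yfact=1.5, origin='centroid')
print(bay.area, "->", exaggerated.area)
```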
Displacement is perhaps the most difficult of the generalization operations, as it requires complex measurements.
