Notes L.o.3.4. Map and Plan Generalization
Cartographic Generalization
Map generalization is defined as the process of reducing the information content of maps due to
scale change, map purpose, intended audience, and/or technical constraints. Generalization
methods are used to remove detail that is unnecessary for an application, in order to reduce data
volume and speed up operation. One constraint on the amount of generalization is introduced by
the scale of the map. The smaller the scale, the greater the amount of generalization, and vice versa.
This is because the amount of space available to show any given feature decreases as scale
decreases.
Generalization is also strongly influenced by the purpose for which the map was designed. Two
maps of a given area may vary significantly in content even if they are of similar scale,
depending upon their purpose. If generalization is done properly, the distinguishing characteristic
of the mapped features will still be effectively represented.
Generalization Problem
When a map is reduced in scale, there is a tendency for the clarity of the map to be reduced as
well. Features such as roads, rivers, or cities often end up so close together in the reduced-scale
map that they collide, coalesce, or become so tiny as to be essentially invisible
to the person using the map. Cartographic generalization attempts to resolve this issue by
redrawing the map in a different fashion when the scale is reduced, producing a more useable
small scale map.
Models of Generalization
To better understand the complexity of generalization, researchers have attempted to design
models of the process. Some efforts have focused on fundamental operations and the relationship
among them, whereas others have created complex models.
Robinson et al.'s Model
Arthur Robinson and his colleagues (1978) developed one of the first formal models or
frameworks to better understand the generalization process. They separated the process into two
major steps: selection (a preprocessing step) and the actual process of generalization.
Generalization involves the processes of simplification, classification and symbolization.
Selection
One means of generalization that cartographers use is the selection and retention of the more
important features in an area and the elimination of the less important ones. If a series of lakes is
shown, for example, some of the smaller lakes in the group are eliminated as scale is reduced, so
that the larger ones can be retained.
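As a rough sketch of this selection step, the fragment below retains only the lakes whose area exceeds a threshold; the Lake record, the lake names, and the threshold value are illustrative assumptions, not taken from the notes:

```python
# Illustrative sketch of the selection operation: keep only features
# large enough to remain legible at the target scale.
from collections import namedtuple

Lake = namedtuple("Lake", ["name", "area_km2"])

def select_by_area(lakes, min_area_km2):
    """Retain the more important (larger) lakes; drop the rest."""
    return [lake for lake in lakes if lake.area_km2 >= min_area_km2]

lakes = [Lake("Big", 12.0), Lake("Mid", 3.5), Lake("Pond", 0.2)]
# At a smaller scale the threshold rises, so the small pond is dropped.
kept = select_by_area(lakes, min_area_km2=1.0)
```

In practice the threshold would be tied to the target scale, so that the same database yields progressively sparser selections as scale decreases.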
Simplification
A second technique of generalization is the simplification of the shapes of the features retained
on the map. To continue the previous example, this approach means that the shorelines of the
lakes are made less complex at smaller scales. Ideally, enough detail is retained so that the major
characteristics of the features are identifiable. A coastline with many bays and headlands, for
example, should not be reduced to a smooth curve. Instead, sufficient complexity should be
retained so that the nature of the actual coastline can be realized, even though the individual
features are not all retained.
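A very naive sketch of line simplification keeps every n-th vertex of the shoreline while preserving both endpoints; the coordinates here are invented. Real cartographic simplification uses algorithms that weight vertices by geometric importance rather than position in the list:

```python
def every_nth_point(line, n):
    """Naive simplification: keep every n-th vertex, always retaining
    the first and last so the line's endpoints are preserved."""
    if len(line) <= 2:
        return list(line)
    kept = line[:-1:n]          # every n-th vertex, excluding the last
    if kept[-1] != line[-1]:
        kept = kept + [line[-1]]  # make sure the endpoint survives
    return kept

shoreline = [(0, 0), (1, 2), (2, 1), (3, 3), (4, 0), (5, 2), (6, 1)]
simplified = every_nth_point(shoreline, 2)
```

The weakness of this scheme is exactly the point made above: it can discard the vertices that define a bay or headland, destroying the character of the coastline.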
Combination
Another step that cartographers take in the generalization process is the combination of two or
more similar features into a single symbol. If there are many small wooded areas in a region, for
example, two or more of these items may be combined to show as one. If the combination of
features is carried too far, however, there is a danger that the map user will not be aware that the
region is characterized by scattered areas of woodland, separated by unwooded areas.
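The combination step can be sketched as grouping wooded areas whose centroids lie within a gap threshold; the coordinates and threshold are invented, and this single-pass grouping is a simplification of what a production system would do (which would merge groups transitively and then dissolve the polygon geometry):

```python
from math import hypot

def combine_close_features(centroids, max_gap):
    """Group feature centroids that lie within max_gap of some member
    of an existing group; each group is then drawn as one symbol."""
    groups = []
    for pt in centroids:
        placed = False
        for group in groups:
            if any(hypot(pt[0] - q[0], pt[1] - q[1]) <= max_gap
                   for q in group):
                group.append(pt)
                placed = True
                break
        if not placed:
            groups.append([pt])
    return groups

woods = [(0, 0), (1, 0), (10, 10), (11, 10), (50, 50)]
groups = combine_close_features(woods, max_gap=2.0)
# Five small woods reduce to three combined symbols.
```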
Location Shift and Size Exaggeration
The generalization of a particular map may also require the cartographer to shift the location of
some features. For example, a road, railroad and a river may all be crowded into a narrow valley,
but the purpose of the map makes it necessary to show all of them. Because of the need for
legibility, the weight of the linework on the map must be sufficient to be distinguishable, and
there must be sufficient space between the lines to keep them from running together.
Faced with the incompatible requirements of retaining features as well as legibility, the
cartographer may exaggerate the size of the features to improve visibility and may shift their
location slightly to gain the necessary space. This combination of techniques retains legibility
even though the features lie in slightly shifted positions and occupy more space than they would
if they were represented to precise scale.
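A minimal sketch of the displacement idea, assuming the parallel features in the valley are reduced to their cross-valley offsets (in map millimetres, sorted from one side of the valley to the other; all values here are invented):

```python
def enforce_min_separation(offsets, min_sep):
    """Sweep outward from the first feature, shifting each subsequent
    feature just enough to keep at least min_sep between neighbours.
    Assumes offsets are already sorted in ascending order."""
    shifted = [offsets[0]]
    for pos in offsets[1:]:
        shifted.append(max(pos, shifted[-1] + min_sep))
    return shifted

# River at 0, road squeezed in at 0.3, railway at 0.5 (map mm):
shifted = enforce_min_separation([0.0, 0.3, 0.5], min_sep=0.6)
```

The road and railway end up slightly displaced from their true scaled positions, which is exactly the trade-off described above: positional accuracy is sacrificed for legibility.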
Kilpelainen's Model
Kilpelainen developed alternative frameworks for the representation of multi-scale databases.
Assuming a master cartographic database called the Digital Landscape Model (DLM), she
proposed a series of methods for generating smaller scale Digital Cartographic Models (DCMs).
The master DLM is the largest scale, most accurate database possible, whereas secondary DLMs
are generated for smaller scale applications. The DLM is only a computer representation and
cannot be visualized. DCMs, on the other hand, are the actual graphical representations, derived
through generalization or symbolization of the DLM. In her model, a separate DLM is created for
each scale or resolution, each derived either from the master database or from the previous
(larger scale) DLM, and a DCM is generated directly from each DLM. Thus the master DLM is
used to generate smaller scale DLMs, which in turn yield a DCM at every level. The assumption
is that DCMs are generated on an as-needed basis.
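The chain of derivations can be sketched as follows; the dictionary layout, the feature-halving generalizer, and the label-producing symbolizer are all toy stand-ins for illustration, not part of Kilpelainen's model:

```python
def build_models(master, scales, generalize, symbolize):
    """Derive a DLM per scale (each from the previous DLM), then a DCM
    from each DLM. Scales are denominators: larger = smaller scale."""
    dlms = {master["scale"]: master}
    current = master
    for s in sorted(scales):
        current = generalize(current, s)
        dlms[s] = current
    dcms = {s: symbolize(dlm) for s, dlm in dlms.items()}
    return dlms, dcms

# Toy stand-ins: generalization halves the feature count,
# symbolization produces a printable label for the graphic product.
master = {"scale": 10_000, "features": 64}
gen = lambda dlm, s: {"scale": s, "features": dlm["features"] // 2}
sym = lambda dlm: f"DCM 1:{dlm['scale']} ({dlm['features']} features)"

dlms, dcms = build_models(master, [25_000, 50_000], gen, sym)
```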
McMaster and Shea's Model
McMaster and Shea (1992) identified three significant components: the theoretical objectives, or
why to generalize; the cartometric evaluation, or when to generalize; and the fundamental
operations, or how to generalize.
Framework for the fundamental operations
Most of the research in generalization assumes that the process can be broken down into a series
of logical operations that can be classified according to the type of geometry of the feature. For
instance, a simplification operation is designed for linear features, whereas an amalgamation
operator works on areal features. The type of generalization operations for vector and raster
processing are fundamentally different. Vector based operations require more complicated
strategies because they operate on strings of x-y coordinate pairs and require complete searching
strategies. In raster based generalization, it is much easier to determine the proximity
relationships that are often the basis for determining conflict among features. Fig 3.0 provides
graphic depictions of some key operations.
Much of this research seeks appropriate "techniques" for computer algorithms, to accomplish
what had been for many centuries a manual process.
In a digital environment, Robert McMaster and Stuart Shea (1992) noted that "the generalization
process supports a variety of tasks including: digital data storage reduction; scale manipulation;
and statistical classification and symbolization." Digital generalization can be defined as the
process of deriving, from a data source, a symbolically or digitally encoded cartographic data set
through the application of spatial and attribute transformations. McMaster and Shea listed the
objectives of digital generalization as: the reduction in scope and amount, type, and cartographic
portrayal of mapped or encoded data consistent with the chosen map purpose and intended
audience; and the maintenance of graphical clarity at the target scale. The theoretical "problem"
of generalization in the digital domain is straightforward: the identification of areas to be
generalized and the application of appropriate operations.
When is Generalization Required?
In a digital cartographic environment, it is necessary to identify the specific conditions under
which generalization will be required. Although many such conditions can be identified, six
fundamental ones are:
i) Congestion
ii) Coalescence
iii) Conflict
iv) Complication
v) Inconsistency
vi) Imperceptibility
Congestion- refers to the problem when, under scale reduction, too many objects are compressed
into too small a space, resulting in overcrowding due to high feature density.
Significant congestion results in decreased communication, for instance, where many buildings
are in close proximity.
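One way to detect congestion is to grid the map and flag cells whose symbol density exceeds a legibility limit; the cell size, counts, and threshold below are illustrative values, not taken from the notes:

```python
def congestion_flags(cell_counts, cell_area_mm2, max_density):
    """Flag map-grid cells where the density of symbols (features per
    square map millimetre) exceeds a chosen legibility limit."""
    return [count / cell_area_mm2 > max_density for count in cell_counts]

# Buildings counted per 5 x 5 mm map cell after scale reduction:
counts = [3, 18, 7]
flags = congestion_flags(counts, cell_area_mm2=25.0, max_density=0.5)
# Only the middle cell is congested and needs further generalization.
```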
Coalescence- refers to the condition where features graphically collide due to scale change. In
these situations, features actually touch. This condition thus requires the implementation of the
a displacement operation.
The condition of conflict results when, due to generalization, an inconsistency between or among
features occurs. For instance, if generalization of a coastline eliminated a bay with a city located
on it, either the city or the coastline would have to be moved to ensure that the urban area
remained on the coast. Such spatial conflicts are difficult to both detect and correct.
The condition of complication is dependent on the specific conditions that exist in a defined
space. An example is a digital line that changes in complexity from one part to the next, such as
a coastline that progresses from very smooth to very crenulated.
Vector Based Operations of Generalization
Despite the fact that many problems in generalization require the development and
implementation of mathematical, statistical or geometric measures, little work on generalization
measurements has been reported.
Unfortunately, the effect of the Douglas-Peucker algorithm, as with most generalization
processes, is not consistent: the algorithm behaves differently depending on the geometric or
geomorphological significance of the feature. For features that are more "natural", such as
streams and rivers, the simplification produces a relatively good approximation.
However, in urban areas, where the census geography follows the rectangular street network,
the results are less satisfactory. In many cases, logical right angles are simplified to diagonals.
A significant problem in the generalization process involves the identification of the
appropriate tolerance values for simplification. Unfortunately, this is mostly an experimental
process, where the user has to test and retest values until an acceptable solution is empirically
found.
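A minimal sketch of the Douglas-Peucker simplification discussed above; the input line and the tolerance are invented values of the kind a user would test and retest:

```python
from math import hypot

def _point_segment_dist(p, a, b):
    """Perpendicular distance from p to the line through a and b
    (falls back to distance to a when the chord is degenerate)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return hypot(px - ax, py - ay)
    # Parallelogram area divided by base length gives the height.
    return abs(dy * (px - ax) - dx * (py - ay)) / hypot(dx, dy)

def douglas_peucker(points, tolerance):
    """Keep the vertex farthest from the chord and recurse on each half;
    when every intermediate vertex lies within tolerance of the chord,
    keep only the endpoints."""
    if len(points) < 3:
        return list(points)
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_segment_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: index + 1], tolerance)
    right = douglas_peucker(points[index:], tolerance)
    return left[:-1] + right   # avoid duplicating the split vertex

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(line, tolerance=1.0)
```

Raising the tolerance prunes more vertices; lowering it keeps the line closer to the original, which is the trial-and-error loop described above.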
Merging is the operation of fusing together groups of line features, such as parallel railway
lines, or edges of a river or stream. This is a form of collapse, where an areal feature is
converted to a line.
A simple solution is to average the two or multiple sides of a feature and use this average to
calculate the new feature’s position.
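The averaging solution just described can be sketched as follows, with invented bank coordinates; it assumes the two polylines have the same number of vertices and are digitized in the same direction:

```python
def merge_parallel_lines(left, right):
    """Collapse two roughly parallel polylines (e.g. the two banks of
    a river) into a single centerline by averaging vertex pairs."""
    return [((x1 + x2) / 2, (y1 + y2) / 2)
            for (x1, y1), (x2, y2) in zip(left, right)]

bank_a = [(0, 0), (2, 1), (4, 1)]
bank_b = [(0, 2), (2, 3), (4, 3)]
centerline = merge_parallel_lines(bank_a, bank_b)
```

When vertex counts differ, a production implementation would first resample one line against the other rather than pair vertices by index.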
Refinement is another form of re-symbolization, much like collapse. However, refinement is
an operation that involves reducing a multiple set of features such as roads, buildings, and
other types of urban structures to a simplified representation. The concept with refinement is
that such complex geometries are re-symbolized to a simpler form, a "typification" of the
objects.
Exaggeration is one of the most commonly applied generalization operations. Often it is
necessary to amplify a specific part of an object to maintain clarity under scale reduction.
Displacement is perhaps the most difficult of the generalization operations, as it requires
complex measurements.