Image-Processing Techniques Applied to Road Problems
Journal of Transportation Engineering, January 1992
DOI: 10.1061/(ASCE)0733-947X(1992)118:1(62)

IMAGE-PROCESSING TECHNIQUES APPLIED TO
ROAD PROBLEMS^a
By M. R. Wigan^1
(Reviewed by the Urban Transportation Division)

ABSTRACT: Relevant areas of image-processing applications were reviewed, and several initial investigations were carried out to assess the potential of applying digital imaging techniques to road problems. The areas chosen were: road-surface rating, measurement, and discrimination; calibrated sieve measurement; vehicle outline detection; and vehicle registration number recognition. Practical selective video data-acquisition equipment was built and tested as a result of this investigation. Analysis of the characteristics of road and traffic applications showed that these and other steps need to be taken to reduce the quantity of unnecessary images being collected for analysis. Effective test results were obtained in most of these trial areas, and the conclusions are that adaptive Laws masks show promise for defect classification, automatic detection of speed and shape classification is practical, sieve mensuration and calibration is a practical objective, vehicle number recognition may require ancillary equipment, road-surface defects could be addressed directly, and discrimination of road surfaces and their condition may be addressed. This paper includes a review of the various image-processing methods available for invariant moment analysis and surface-texture discrimination and classification, and concludes that Laws masks are currently the preferred technique.

INTRODUCTION

The initial identification of profitable and practical uses of noncontact
sensors and digital image-processing techniques for road and traffic data
acquisition has been under consideration by the writer for some time (Wi-
gan 1983; Wigan and Cullinan 1984). Readers unfamiliar with the general
area of image processing for roads may find these earlier reports a useful
introduction. Subsequently, a project was set up by the Australian Road
Research Board (ARRB) on image-processing applications for roads and
traffic to select and test a number of specific target areas. An extensive
report of this project, containing a considerably more detailed review of
imaging techniques and the underpinning mathematics, appears in Wigan
and Cullinan (1987).
One of the motivations for the work was that while the costs of the
necessary equipment for encoding complete images could be seen to be
declining rapidly even in 1983, the effective data reduction and problem-
oriented work required to take advantage of these developments was not
being advanced at a similar rate in road and transport engineering. The lead
time for such investigations is such that by the time inexpensive equipment
is immediately available, the ability, understanding, and analysis techniques
^a Sections based on "Image Processing Applied to Roads: Surface Texture, Men-
suration, Vehicle Shape and Number Detection," ARR 145, by M. R. Wigan and
M. C. Cullinan (1987).
^1 Head, Dept. of Computing and Quantitative Methods, Victoria Coll.-Burwood
Campus, 221 Burwood Hwy., Burwood 3125, Victoria, Australia.
Note. Discussion open until June 1, 1992. To extend the closing date one month,
a written request must be filed with the ASCE Manager of Journals. The manuscript
for this paper was submitted for review and possible publication on October 26,
1989. This paper is part of the Journal of Transportation Engineering, Vol. 118, No.
1, January/February, 1992. ©ASCE. ISSN 0733-947X/92/0001-0062/$1.00 + $.15 per
page. Paper No. 26511.
to make efficient applied use of the results would not be available unless
investigative and subsequently developmental work started promptly. This
proved by 1988 to have been an entirely accurate reading of the situation,
because since then there has been a steady stream of high-quality and ever-
cheaper frame-grabber cards for IBM PCs, Apple Macintoshes, and Sun
workstations to underpin the current (and improving) practicality of imple-
menting these methods economically.
Several selected feasibility-assessment areas are considered in turn. The
major areas selected for this preliminary project were: (1) Road-surface
features and defects (adaptive texture classification); (2) classification of
vehicles (lateral profile); (3) speed measurements (line-scan camera); (4)
number-plate recognition (moment methods of number recognition); and
(5) areal mensuration (aggregate sieve calibration was the example). The
closely related subject of pattern recognition per se is not addressed in this
paper, but the reader should be aware that a wide range of the many
applications now seen to be practical and possible will require the use of
such techniques as neural networks.

ROAD-SURFACE TEXTURES AND DEFECTS

Road-surface visual rating is a standard tool for pavement management.


Consistent ratings require considerable amounts of training and care in cross
validation by experienced staff. The diagnostic value of such ratings over
time and space depends on the associations that can be built up over time,
relating ratings to vehicle and pavement performance. There are at least
three different ways in which digital processing of pictures of road surfaces
can be used to help in this task.
One approach is to compute an overall characteristic visual texture of the
surface, which would be sensitive to cracks, patching, and other surface
defects but would not measure each as a specific goal. These ratings could
then be added to subjective visual rating factors and over a period both
could be jointly assessed for their value in terms of correlations between
different manual raters, with the physical and perceptual texture values and
with road performance over time.
A second approach is to use the technique specifically to automatically
pick out the presence of cracks, patches, and other features of the surface
and compute an index weighted by the frequency of each feature identified.
A third technique is to take small pictures of developing crack areas, and
use digital image processing to automatically compute the total area and
length of the cracks: an application of image processing already in use in
manufacturing that does not answer the need for an additional tool for
linking appearance to performance (because it simply replicates the manual
task of physical measurement) and thus was not included in this set of
exploratory trials. Subsequent to the work reported here, Cox et al. (1986a)
reported on highly sophisticated equipment designed to meet the explicit
measurement goal and devoted much of a second paper (Cox et al. 1986b)
to the problems of recombining the very detailed data, extracted at so much
effort, into a simple index, thereby returning to the first approach by a long
indirect route.
The present paper concentrates on a feasibility analysis of the first two
of these options, because the third is a well-established robotic-vision and
photogrammetric technique and could reasonably be delayed until the field
application development program was initiated. However, a very small scale
test of this family of methods was the sieve mensuration test described later
in this report.
The two primary chosen options required basic research investigations
before a field development stage could be entertained. Benke (1986) and
Newlands (1985) contributed to the work on texture analysis and number
recognition when working as advanced students at Deakin University, Gee-
long, Australia, partially supported by this project.
Published research on the subject of texture discrimination, as applied to
high-speed analysis of road-surface condition, was reviewed in detail by
Wigan and Cullinan (1987), and only a brief summary is given here. The
advantages of adopting a philosophy based on the use of convolution masks
for texture discrimination are described.
Convolution masks are matrices of weighting factors applied successively
to each point (pixel) in the image. The matrix-weighted sum of the values
of this and the neighboring pixels then replaces the current value of the
central pixel. These masks can carry out powerful transformations of an
image. Convolution masks were applied to the following images: (1) Artificial
structured textures; (2) artificial stochastic textures; and (3) natural textures,
e.g., road surfaces.
The masks were tested in the following ways as part of the project: (1)
Differences in normalization, comparing two types of histogram modifica-
tion; (2) differences between synthesized stochastic textures due to changes
in the mean and variance of gray levels; and (3) differences in the resolution
of the digitization. A basic familiarity with the terminology of digital image
and signal processing is assumed at the level of an introductory text such
as Castleman (1979).
Historically, the properties most often associated with texture are coarse-
ness and directionality. Many analytic approaches are primarily oriented
toward characterizing these two properties. Over the years, the multiplicity
of theories applied to the analysis of textures has coalesced into two schools
of thought-the "statistical" approach and the "structural" approach.
Statistical models are quite general in application; structural models re-
quire placement rules and are normally applicable only when texture con-
tains repetitive elements with fairly simple geometries. Consequently, struc-
tural approaches are more suitable for artificial and highly regular patterns
than for the often more complex natural textures, which are subject to
statistical variability. Laws (1980) notes that structural approaches have the
fundamental weakness that texture elements must be located, classified, and
studied before the texture itself is analyzed. This represents a severe
computational burden, which is accentuated in the case of noisy, blurred,
or stochastic textures.
Many natural textures, such as road surfaces, have no easily discernible
structure and are more efficiently analyzed using a statistical approach. A
problem common to all such analytic approaches is that texture primitives
may be nested and can exist as microtextures within macrotextures. Scale
is therefore important.
The primary applied goal for road-surface assessment is to classify the
generalized textures presented by images of the road surface as a means of
summarizing the visual character of the surface and its faults and virtues,
and thereby to avoid the need to measure and identify every individual
element and feature on the surface.
Image quality in texture analyses can be degraded due to the monotonic
transformations used in forming the image. To avoid the complexity of image
restoration procedures, the investigator attempts to digitize the textures
under controlled conditions. The image requires normalization to compen-
sate for variations in brightness and contrast and this is often achieved by
histogram standardization across the various intensity levels in the image.
This may involve a transformation designed to yield a Gaussian distribution
of gray levels with a fixed mean and variance, or, alternatively, the global
equalization of the image histogram.
Contemporary research has shown that high classification accuracies are
possible, albeit sometimes at great complexity and considerable computa-
tional expense. Such methods include planar random-walk techniques
(Wechsler and Kidode 1980), fractal theory (Pentland 1984), gray-level
cooccurrence matrices (Vickers and Modestino 1982), and a combination
of cooccurrence and convolution (Ade 1981).
The challenge faced when addressing the problem of texture classification
lies in producing measures that are both very fast and accurate. Comparisons
between the many various analytical techniques are difficult, and the results
may be confounded by extraneous conditions. The performance of a texture
analyzer can be influenced by the following variables: (1) Image database
used; (2) instrument effects, i.e., differences in digitization; (3) resolution
of data; (4) normalization and other preprocessing of the image; (5) number
and type of features analyzed; (6) number of training samples used; and (7)
classification scheme used.
Convolution masks (Laws 1980) are local operators: because their coefficients
sum to zero, they produce a zero response over areas of uniform brightness
(Duff 1983). Consequently, they are particularly suitable for segmenting
scenes composed of many textures. Such masks are a mathematical repre-
sentation of feature extractors used by the human visual system (Laws 1980;
Julesz and Bergen 1983; Brady 1982; Hubel and Wiesel 1962; Barlow 1969).
Gagalowicz (1980) and Wermser and Liedke (1982) support the use of local
convolution operators that are suitable for simulating human perceptions
of texture. A pattern-classification system will produce results that are con-
sistent with human judgment if it operates in a similar manner.
The links between the filtering and discrete views of these operators are
direct: the digital Laplacian is an example of a high-pass filter. It is an
isotropic operator that detects points and lines, suppressing low frequencies
and retaining the high ones. The operation is carried out in the spatial
domain and involves the convolution of the image with the impulse response
of the spatial frequency filter. The impulse response, or mask, consists of
a matrix of weights applied over a moving window. When the image is
filtered by the mask, it results in a discrete approximation to a classical two-
dimensional convolution integral (Figs. 1 and 2).

h(x,y) = f*g = ∫∫ f(u,v) g(x-u, y-v) du dv .......................... (1)

FIG. 1. Convolution Operation for Two Continuous Functions

FIG. 2. Convolution Operation for Two Discrete Functions

The digital convolution of two functions, F and G, which are discrete and
nonzero over a finite domain, is represented by
H(i,j) = F*G = Σ_m Σ_n F(m,n) G(i-m, j-n) .......................... (2)

where H(i,j) = a pixel in the output image and the summations occur over
an area of nonzero overlap (Castleman 1979; Gonzalez and Wintz 1987).
Each pixel in turn has its value replaced by the sum of all pixel values within
the window, weighted according to the mask coefficients.
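As a concrete illustration of Eq. 2, the discrete convolution can be sketched in a few lines of Python; the function and variable names here are illustrative, not taken from the original work, and only the "valid" (fully overlapping) output region is computed.

```python
import numpy as np

def convolve2d(image, mask):
    """Discrete 2-D convolution of Eq. 2: each output pixel H(i,j) is the
    mask-weighted sum of the pixel and its neighbors (valid region only)."""
    m, n = mask.shape
    flipped = mask[::-1, ::-1]          # flip the kernel, per G(i-m, j-n)
    rows = image.shape[0] - m + 1
    cols = image.shape[1] - n + 1
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.sum(image[i:i + m, j:j + n] * flipped)
    return out

# A 3x3 digital Laplacian is a zero-sum, high-pass mask: applied to an
# image of uniform brightness it gives zero response everywhere, as the
# text describes for local operators.
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])
flat = np.full((6, 6), 7.0)
print(convolve2d(flat, laplacian))  # all zeros
```

The flip step distinguishes true convolution from correlation; for the symmetric masks used throughout this paper the two operations coincide.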
High classification accuracies are possible, albeit at great complexity and
huge computational expense. Neural-network techniques show promise to
bridge this gap in computational speed and interpretation. At present, how-
ever, the 5×5 kernels used by Laws remain the benchmark (Ade 1981) for
convolution filtering. Mask sizes matter: 3×3 masks have been generally
used for both edge detection and texture analysis, although they are sensitive
to noise (Haralick 1984). Marr (1982) defined large masks for edge detection
based on a Gaussian smoothing function convolved with a Laplacian op-
erator. Unfortunately, larger kernels also produce thicker edges, apart from
adding to the computational burden. Masks that are 5×5 and larger are an
effective compromise (Nevatia 1982; Duff 1983; Laws 1980). Several at-
tempts have been made to improve on the basic Laws approach. Normal-
ization requirements were weakened by Harwood et al. (1983); and Duncan
and Frei (1982) endorsed it as perhaps the most successful approach to
analyzing texture and cluttered scenes, and suggested hardware very large
scale integration (VLSI) implementations to enhance operational speed.
The critical problem of texture discrimination may be approached either
from the viewpoint of models based on human vision or by heuristic ap-
proaches suitable for efficient computer implementation (Van Gool et al.
1985).
The four most important Laws masks that are commonly used are the
following. R5R5 is

 1   -4    6   -4    1
-4   16  -24   16   -4
 6  -24   36  -24    6
-4   16  -24   16   -4
 1   -4    6   -4    1

E5S5 is

-1    0    2    0   -1
-2    0    4    0   -2
 0    0    0    0    0
 2    0   -4    0    2
 1    0   -2    0    1

L5E5 is

-1   -2    0    2    1
-4   -8    0    8    4
-6  -12    0   12    6
-4   -8    0    8    4
-1   -2    0    2    1

and L5S5 is

-1    0    2    0   -1
-4    0    8    0   -4
-6    0   12    0   -6
-4    0    8    0   -4
-1    0    2    0   -1
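The 5×5 masks above can be generated as outer products of the 1-D Laws vectors (Level, Edge, Spot, Ripple). A minimal sketch follows, assuming the standard vector definitions; sign conventions for E5 vary in the literature and do not affect the texture-energy measure.

```python
import numpy as np

# 1-D Laws vectors; L5 is a local average, the other three are zero-sum.
L5 = np.array([1, 4, 6, 4, 1])      # Level
E5 = np.array([-1, -2, 0, 2, 1])    # Edge (gradient)
S5 = np.array([-1, 0, 2, 0, -1])    # Spot (second derivative)
R5 = np.array([1, -4, 6, -4, 1])    # Ripple

# Each 5x5 Laws mask is the outer product of two 1-D vectors.
L5E5 = np.outer(L5, E5)
L5S5 = np.outer(L5, S5)
E5S5 = np.outer(E5, S5)
R5R5 = np.outer(R5, R5)

# Any mask built from at least one zero-sum vector is itself zero-sum,
# so it gives zero response over regions of uniform brightness.
for mask in (L5E5, L5S5, E5S5, R5R5):
    assert mask.sum() == 0

print(R5R5[0])  # -> [ 1 -4  6 -4  1], the first row shown above
```

Because R5R5 is the outer product of a vector with itself, it is symmetric, which is why the text can describe it as isotropic.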
The performance of these masks was tested for a variety of conditions.
The masks were convolved with binary images of vertical and horizontal
bars and checkerboard arrays, all with a fundamental periodicity of four
pixels and at three levels of intensity [see Wigan and Cullinan (1987) for
greater detail]. Under certain conditions the texture energies are propor-
tional to the variance of the filtered textures.
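The texture-energy measure used in these tests can be sketched as the variance of the filtered image. The bar pattern below mimics the four-pixel-period test images; the function name and the specific pattern are illustrative.

```python
import numpy as np

def texture_energy(image, mask):
    """Texture energy as used here: the variance of the mask-filtered
    image (valid convolution region only)."""
    m, n = mask.shape
    rows, cols = image.shape[0] - m + 1, image.shape[1] - n + 1
    filtered = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            filtered[i, j] = np.sum(image[i:i + m, j:j + n] * mask)
    return filtered.var()

# Vertical bars with a fundamental periodicity of four pixels.
bars = np.tile(np.array([0, 0, 1, 1] * 4), (16, 1)).astype(float)

# L5S5 (smooth vertically, spot-detect horizontally) responds to the
# vertical bars; its transpose, oriented for horizontal structure, gives
# zero response on this pattern.
L5 = np.array([1, 4, 6, 4, 1])
S5 = np.array([-1, 0, 2, 0, -1])
L5S5 = np.outer(L5, S5)
print(texture_energy(bars, L5S5) > texture_energy(bars, L5S5.T))  # True
```

This orientation sensitivity is exactly the behavior reported below: masks containing an E5 or S5 component along one axis discriminate bars running perpendicular to that axis.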
The E5S5 (diagonal) and R5R5 (isotropic) masks did not discriminate
between horizontal and vertical bar patterns. Mask R5R5 discriminates well
between disks and checkerboard patterns. Masks L5E5 and L5S5 detect
vertical edges and discriminate well between all the patterns except hori-
zontal bars. The pattern discrimination produced by each mask is similar
at each level of contrast. At high contrast the patterns respond much more
strongly to each mask, resulting in much higher energies. This is due to the
greater intensity differences across the edges coupled with the operation of
differentiation performed by the masks. The critical observation is that
discrimination is a function of contrast. It is therefore necessary to apply
some form of normalization for brightness and contrast.
Synthesized Gaussian noise fields with variable mean and standard de-
viation were then convolved with three gray-level intensities with four dif-
ferent variances of the mean levels. All the masks produced good separation
between the images. Mask R5R5 (spot detector) produced the highest ener-
gies for these random images. Increasing the standard deviation is equivalent
to increasing the image contrast. Textures should be compared within a
framework of contrast invariance. These experiments emphasized the need
for a universal method of normalization. A common method is to normalize
an image by standardizing its brightness histogram.
In Table 1, the histogram population statistics are given for the three
textures in Fig. 3 [from Brodatz (1966)], with the mean and variance of the
gray levels for two types of normalization.

TABLE 1. Texture Parameters for Three Standard Surfaces

              Histogram       Normalization    Equalization
Surface       Mean    S.D.    Mean    S.D.     Mean    S.D.
(1)           (2)     (3)     (4)     (5)      (6)     (7)
Grass         62.4    38.9    127.5   36.7     128.7   73.4
Raffia        97.2    50.4    127.5   36.7     128.3   73.7
Sand          121.2   47.0    127.5   36.7     128.3   74.0

FIG. 3. Standard Visual Textures from Brodatz (1966): (a) Grass; (b) Raffia; (c)
Sand

The first type of normalization forces a specified mean and standard deviation
on the image and is useful for psychophysical experiments. The second is a
histogram equalization (Ahlers and Alexander 1985). In this case, each pixel
in the image is subjected to the following transformation:

g(q) = (M/N²) ∫₀^q h(p) dp ........................................ (3)

where g(q) = new value; h(p) = old value; M = number of gray levels;
and N² = number of pixels in a square image of size N×N. Overall contrast
is enhanced for most of the image pixels, resulting in greatly improved
visibility. Masks were convolved with homogeneous samples of grass, raffia,
and sand. The results for the two methods of normalization are given in
Tables 2 and 3. The texture energies are much higher for histogram equal-
ization. The level of discrimination is good for both types of normalization.
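A discrete version of the Eq. 3 equalization can be sketched as follows; the helper name and the toy image are illustrative, not from the original work.

```python
import numpy as np

def equalize(image, M=256):
    """Histogram equalization per Eq. 3: the new gray value g(q) is the
    scaled cumulative histogram of the old values (discrete sketch)."""
    N2 = image.size                       # number of pixels (N*N)
    hist = np.bincount(image.ravel(), minlength=M)
    cdf = np.cumsum(hist)                 # discrete form of the integral
    lut = (M - 1) * cdf / N2              # scale into 0..M-1
    return lut[image].astype(np.uint8)

# A low-contrast 4x4 image crowded into gray levels 100-103 spreads out
# across the full 0-255 range after equalization.
img = np.array([[100, 100, 101, 101],
                [100, 102, 101, 103],
                [102, 102, 103, 103],
                [100, 101, 102, 103]], dtype=np.uint8)
eq = equalize(img)
print(eq.min(), eq.max())  # 63 255
```

The spreading of gray levels is why the equalized images in Tables 2 and 3 show much higher texture energies than the Gaussian-normalized ones: the differentiating masks see larger intensity differences across edges.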
Tables 4 and 5 apply to textures of conventional road surfaces digitized

TABLE 2. Relative Response of Each Laws Mask to Textures in Fig. 3 after Nor-
malization

                    Laws Mask
Texture      L5E5    L5S5    E5S5    R5R5
(1)          (2)     (3)     (4)     (5)
Grass        78.9    29.9    2.4     21.4
Raffia       76.2    17.6    0.8     3.8
Sand         76.0    20.3    1.7     9.8

TABLE 3. Relative Response of Each Laws Mask to Textures in Fig. 3 after His-
togram Equalization

                    Laws Mask
Texture      L5E5    L5S5    E5S5    R5R5
(1)          (2)     (3)     (4)     (5)
Grass        316.7   119.3   9.3     85.7
Raffia       282.9   61.7    3.0     15.2
Sand         307.9   82.9    7.0     45.8

TABLE 4. Relative Response of Each Laws Mask after Convolution with Low-
Resolution Textures of Road Surfaces

                      Laws Mask
Texture        L5E5    L5S5    E5S5    R5R5
(1)            (2)     (3)     (4)     (5)
Roadlo 0.002   198.0   84.6    10.9    129.0
Roadlo 0.004   204.0   81.6    10.7    133.0
Roadlo 0.006   249.0   71.6    7.5     45.3
Roadlo 0.008   148.0   47.9    4.8     68.7

TABLE 5. Relative Response of Each Laws Mask after Convolution with High-
Resolution Textures of Road Surfaces

                      Laws Mask
Texture        L5E5    L5S5    E5S5    R5R5
(1)            (2)     (3)     (4)     (5)
Roadhi 0.002   321.0   77.8    6.4     24.0
Roadhi 0.004   298.0   65.5    5.0     6.8
Roadhi 0.006   229.0   32.0    1.9     10.2
Roadhi 0.008   145.0   28.9    2.0     24.1

FIG. 4. Road Surface Textures Digitized at Two Different Levels of Resolution

at two resolution levels (Fig. 4). The three textures are visually similar, but
increase in coarseness of the grain size. Each mask produces a better dis-
crimination when applied to the higher-resolution images, with here a mon-
otonically decreasing response for all masks barring the R5R5 spot detector.
The high response for the L5E5 mask suggests an underlying linear structure
within the coarse, grainy road surface. Textures are scale dependent, and
must be compared at a fixed resolution.
To improve upon the performance of the Laws masks, we needed new
masks with different (noninteger) coefficients, unlike most published models.
Mask coefficients with absolute integer values as high as 100, however,
increase the degree of conformity with local pattern primitives without the
additional computing expense of real arithmetic. A plausible approach to
the problem of texture classification is to filter the image with a number of
primitive masks that are feature selective.
The texture classification problem reduces to searching for symmetrical
and antisymmetrical zero-sum convolution masks with integer coefficients
in the range of -100 to 100. For a 5×5 kernel, the feasible region contains
more than 10²³ possible masks. The task is to define optimal masks subject
to specified performance criteria. This means discarding traditional masks
with fixed coefficients and replacing them with convolution masks containing
variables as coefficients. The proposed method involves mask optimization
and can be interpreted as solving a problem in nonlinear mathematical
programming. In particular, it is possible to generate a single convolution
mask that is capable of producing high texture classification accuracies when
tested on well-known natural textures.
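The optimization step can be sketched as a search over zero-sum integer masks. The random hill-climb below merely stands in for the nonlinear-programming formulation described in the text, and all names and the synthetic "crack" image are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(image, mask):
    """Response measure: variance of the mask-filtered image."""
    m, n = mask.shape
    out = np.array([[np.sum(image[i:i + m, j:j + n] * mask)
                     for j in range(image.shape[1] - n + 1)]
                    for i in range(image.shape[0] - m + 1)])
    return out.var()

def adapt_mask(background, defect, iters=200):
    """Random hill-climb over zero-sum integer 5x5 masks, maximizing the
    defect/background response ratio (the figure of merit is swappable,
    as the text notes).  The center weight absorbs the zero-sum
    constraint; a sketch, not the paper's actual optimizer."""
    mask = rng.integers(-100, 101, size=(5, 5))
    mask[2, 2] -= mask.sum()           # enforce zero sum
    best = energy(defect, mask) / (energy(background, mask) + 1e-12)
    for _ in range(iters):
        trial = mask + rng.integers(-10, 11, size=(5, 5))
        trial[2, 2] -= trial.sum()     # keep the trial mask zero-sum
        score = energy(defect, trial) / (energy(background, trial) + 1e-12)
        if score > best:
            mask, best = trial, score
    return mask, best

# Synthetic 20x20 "road surface": Gaussian texture, plus a bright line
# standing in for a crack in the defect sample.
background = rng.normal(128, 10, (20, 20))
defect = background + 0.0
defect[10, :] += 40.0                  # the "crack"
mask, ratio = adapt_mask(background, defect)
print(ratio > 1.0)                     # the mask separates defect from background
```

Training on a background sample and scoring against a defect sample mirrors the Q4/Q1 experiment reported below, where the ratio of the two responses is the discrimination measure.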
The speed resulting from using only one mask is not compromised by
inaccuracy or unreliability, because the mask is adaptive and accommodates
new circumstances. Prior training enables it to compensate for textures
with new features and for the effects of digitization. The procedure for
generating an adaptive mask is flexible in that any figure of merit may be
used for optimization. In addition, the mask itself can be varied in size and
operates as a feature extractor that can be trained on any arbitrary set of
input patterns.
The utility of an adaptive mask is demonstrated in the following example.
In Fig. 4, three different road surfaces are illustrated, before [Fig. 4(a,b)]
and after [Fig. 4(e,f)] normalization. Histogram equalization has resulted
in a noticeable increase in contrast. The image in Fig. 4(c,d) was digitized
as a 164×200 array, and was further divided into four quadrants, Q1-Q4.
Quadrant Q1 featured the defect, a crack pattern; quadrant Q4 consisted
of a relatively uniform area of road surface and was used as the background.
The background Q4 was further divided into four quadrants, Q4a-Q4d.
These subimages are now only 41×50 arrays. A mask was optimized to detect
the background by training it on sample Q4a:
-43  -72  -87  -72  -43
-37  -61  -83  -61  -37
 -7  -16   -2  -16   -7
 36   68   74   68   36
 44   87  100   87   44
The convolution of this mask with the Q4a pattern yields a response that
is a function of the variance of the filtered image. The value was found to
be E = 0.192. This was repeated for a second, morphologically similar
background sample, Q4c.
The result for sample Q4c was E = 0.207, which shows good agreement,
especially in view of the very small sample sizes. We also convolved the
mask with the other two background samples, Q4b and Q4d, which were
taken from the right-hand side of the Q4 image. These samples were more
uniform because they were more distant from the crack and were not subject
to disruptions arising from the transition (edge effects). The results for Q4b
and Q4d are 0.152 and 0.153, respectively. This is an excellent level of
agreement.
The pooled average of the four samples of Q4 was E = 0.176, compared
with a value of 0.286 when the mask was applied to the crack pattern in
quadrant Q1. The response ratio is 1.63, which is a significant level of
discrimination. These results indicate that the mask is capable of providing
reproducible results on the background samples, and can clearly discriminate
between the crack pattern and the background. The second part of the
experiment involved the derivation of a mask to produce maximum dis-
crimination between the crack pattern in Q1 and the background road
surface in Q4, producing:
 -40   10   38   10  -40
  81  -21  -75  -21   81
-100   33   82   33 -100
  84  -31  -62  -31   84
 -38   17   27   17  -38
This mask produced a response of 0.0147 for sample Q1 versus 0.0327
for sample Q4, for a response ratio of 2.22, which represents significantly
better discrimination than for the last mask. The twofold increase in the
response of Q4 relative to Q1 greatly reduces the probability of misclassi-
fication. It is evident that even with the small image sizes used here, adaptive
masks provide a very useful analytic approach for the identification and
quantification of defects.
For pavement-crack-length determination, it became apparent that the
filter response is sensitive to the amount of cracking within the trial region.
This has not yet been pursued. Further work requires a precise quantitative
specification of crack length and (in morphological terms) of an algorithm
to compute it.
Cox et al. (1986a,b) have subsequently concentrated on finding and im-
plementing specialized operators to discriminate particular features of in-
terest, such as crack sizes and orientations. This approach leaves unanswered
the question of how to combine the numerate results for each individual
feature to make an index, a problem shared by Haas et al. (1985). Cox et
al. used a generalized scoring vector, initially set up for the published weight-
ings from the Nevada pavement management scoring system, for this pur-
pose. It is worth asking the question: How do the texture values from a
direct adaptive mask system (such as the one described) relate to these
overall scores? Perhaps the painful (and computationally intensive) process
of measuring each individual component and then combining the values
obtained before using them is no more effective, and it certainly takes
longer to do.
The basic question is to find an overall score that correlates well with
judgments on the road condition and the bases used by people to decide
what to do about it. This area is probably not sufficiently well investigated
to justify the massive computational efforts of Cox et al. (1986a,b), and
needs addressing. Pattern-matching representations and recognition tech-
niques, such as back-propagation (McClelland and Rumelhart 1988), ca-
pable of undertaking most of the necessary intensive computation prior to
the data-acquisition process are definitely desirable.
The approach described is eminently well suited to a VLSI implementation
of image convolution with a 5×5 mask. A texture-based characterization of
road surfaces and their defects can be carried out at the full video frame
rate of 30 frames/sec, using road images of a resolution and coverage
determined by lens aperture, image size, and the image-capture limitations
of the camera.
A review of the recent analytic aspects of imaging research was carried
out, discussing advantages and disadvantages of a number of important
approaches to texture classification and the motivation and rationale for
developing a practical approach to texture classification. The theme of tex-
ture discrimination based on the use of convolution masks was developed,
with comparisons to the Laws approach.
Convolution masks have a number of fundamental advantages over other
methods of texture classification. They are faster and more accurate texture
analyzers than Fourier techniques or cooccurrence matrices. Convolution
masks are a mathematical representation of feature extractors used by the
human visual system. A pattern-classification system will produce results
that are consistent with human judgment if it operates in a similar manner.
This can be an important consideration in applications involving the auto-
mation of human visual inspection. The masks are local operators, and
because of this they are particularly suitable for segmentation of a scene
composed of many textures.
A number of experiments were carried out with different convolution
masks applied to both test patterns and road surfaces. The results confirm
the need for normalization with respect to brightness and contrast. Textures
should also be compared at a fixed scale. Although transformations for scale
invariance are possible, there is the risk of losing the texture attribute
known as coarseness. The results of these tests demonstrate the need for
new and improved masks in order to increase classification accuracy.
Improved masks may also lead to a decrease in the total number of masks
required, with a corresponding gain in operational speed. The present work
has narrowed the search for these unknown masks by defining a number of
appropriate constraints.
The present work introduces the idea of replacing fixed convolution masks
with masks containing variables as coefficients. The task faced is to develop
a formal procedure for generating such convolution masks. These pro-
grammable masks are called adaptive masks and could be generated to
produce optimal performance with respect to texture recognition or texture
discrimination. The applications include the assessment of surface inho-
mogeneity as well as defect classification.
It is suggested that direct use of texture ratings via adaptive masks be
made in conjunction with manual ratings and diagnoses to determine if the
intermediate stage of detailed computation of the surface elements is entirely
necessary.
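The fixed-mask texture measures that such adaptive masks would replace can be sketched briefly. The following is a minimal illustration, assuming numpy; the 5-element Laws kernels are standard, but the synthetic "smooth" and "striped" surfaces and all function names are invented for the demonstration:

```python
import numpy as np

# 1-D Laws kernels (level, edge, spot); 2-D masks are their outer products.
L5 = np.array([1, 4, 6, 4, 1], float)    # level (local average)
E5 = np.array([-1, -2, 0, 2, 1], float)  # edge
S5 = np.array([-1, 0, 2, 0, -1], float)  # spot

def normalize(img):
    """Remove brightness (mean) and contrast (std) before texture analysis."""
    img = img.astype(float)
    return (img - img.mean()) / (img.std() + 1e-9)

def conv2_sep(img, kr, kc):
    """Separable 'valid' 2-D convolution via two 1-D passes."""
    rows = np.apply_along_axis(lambda r: np.convolve(r, kc, mode="valid"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kr, mode="valid"), 0, rows)

def texture_energy(img, kr, kc):
    """Mean absolute filter response: one Laws-style 'energy' feature."""
    return np.abs(conv2_sep(normalize(img), kr, kc)).mean()

# A striped surface gives higher L5/E5 energy than a featureless one.
smooth = np.full((16, 16), 128.0)
stripes = np.tile(np.array([0.0, 0.0, 255.0, 255.0]), (16, 4))
```

Normalizing for brightness and contrast before filtering, as the experiments above require, makes the energy measure respond to texture rather than to illumination.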
Since this exploratory work was completed in 1987, the commercial avail-
ability of practical dedicated neural-network engines inside IBM PCs has
emerged (from de Anza, SAIC, and AI Ware amongst others), and the
task defined matches the capacities of such systems very well. Neural-net-
work systems are still of restricted capacity, but have already proved to be
very effective (and extremely fast) in image-recognition applications. The
training task is also better managed than in most other image-reduction
techniques. It would therefore be well worth carrying out a further inves-
tigation applying neural-network systems to the tasks of road-surface-feature
identification, classification, and characterization.
The direct links between road-surface features and remedial or diagnostic
actions will require the integration of any feature- or surface-rating system
into a decision-making tool at some point, and it is this stage that now
justifies closer attention. Neural-network methods have much to offer in
this area although the fundamental problems of speed and training (ad-
dressed here by adaptive mask generation) still leave much to be desired.

CLASSIFICATION OF VEHICLES IN MOTION

The selective collection of data is essential when using image-processing
and video-data-acquisition methods (Wigan 1986). The sheer quantity of
information obtained in a short time when using videotape can take several
orders of magnitude longer to reduce to frames of special interest, and the
parameters of these frames still have to be measured or recorded. The
reduction of the quantity of data required to arrive at the frames of interest
was discussed elsewhere (Wigan 1986).
Once such a (multistage) selection process has been applied, it then be-
comes a practical proposition to use digital image-processing techniques to
automate the extraction of appropriate values from the images on the frame
(which can easily contain over 10^6 bytes of raw pixel data). An even greater
degree of selectivity can be obtained in the field, such that appropriate sets
of values are extracted requiring only the order of 10-100 bytes to be
recorded with the rest discarded.
Using a line-scan array camera, a passing vehicle detects itself auto-
matically and, by its movement, clocks itself across the picture. Speed and
shape discrimination can be extracted directly from the result. The feasibility stud-
ies of this project took this work to the stage of automatic construction of
pictures from a vehicle passing a line array camera. This investigation is
also significant for road-surface-texture analysis, because the speed and
resolution limitations of the frame-acquisition rate of the camera provide
a physical limit (depending also on the lenses used and the size of the area
of road chosen to fill the camera frame).
In the first part of this experiment, a line-scan array was situated laterally
with respect to vehicle path, and perpendicular to the vertical plane con-
taining the line of vehicle motion. As the vehicle passed the scan array, data
acquisition was initiated for each of the next 512 ms. The 512 × 1 strip image
formed by the array camera was transmitted to an Arlunya TF5000 model
temporal filter fitted with a slow-scan converter, where it was stored in
successive rows of the image store. In this way an image of a moving vehicle
was built up one line at a time. The line-array camera output was "grabbed"
each millisecond simply because the actual task for which the system was
constructed requires millisecond intervals between successive grabs. (One
line per millisecond is also the fastest rate at which the temporal filter slow-
scan converter accepted input data.)
Several model vehicles were successively rolled down a plane incline at
a low angle. The line-scan array was set up at the side of this plane and at
right angles to it, the angle of the plane having been selected to give a
reasonable image, given that the clocking rate in the apparatus was not
variable.
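The row-by-row synthesis just described can be sketched as follows, assuming numpy; the silhouette "scene", the clocking rate, and the function name are invented for illustration, with each "scan" taking one column of the silhouette as the object advances past the fixed scan line:

```python
import numpy as np

def synthesize_linescan(scene, speed_px_per_ms, n_scans):
    """Rebuild a moving object's image from a fixed one-pixel-wide line scan.

    scene: 2-D array (height x length) of the object silhouette.
    Each 'millisecond' the object advances speed_px_per_ms pixels past
    the scan column; the column seen at that instant becomes one strip,
    stored as a successive row of the image, as in the slow-scan store.
    """
    h, w = scene.shape
    strips = []
    for t in range(n_scans):
        x = int(t * speed_px_per_ms)
        strips.append(scene[:, x] if x < w else np.zeros(h))
    return np.stack(strips)  # one scan per row
```

When the clocking rate is mismatched to the object's speed, the synthesized image is longitudinally compressed or stretched, which is the artifact discussed for Figs. 6-8.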
Illumination consisted of a small floodlight positioned so that the moving
vehicle passed between it and the image sensor. Fig. 5 shows a truck model
FIG. 5. Truck Model Used in Line-Scan Detection

FIG. 6. Line-Scan Synthesis of Laden Truck Image

FIG. 7. Line-Scan Synthesis of Unladen Truck Image

used in this experiment, and Figs. 6, 7, and 8 the images constructed by the
system, displayed here with the scan axis vertical. The apparent longitudinal
shortening of the models in the constructed images is an artifact of the
clocking rate, but reference to Figs. 6 and 7 will show that the vertical
profile is correctly captured. The line-to-line and pixel-to-pixel intensity
variations that are evident at some places in the photographs have been
traced to radio-frequency (RF) shielding deficiencies and residual power-
supply problems.
In actual application of the technique, the clocking rate would be con-
structed separately from a knowledge of vehicle speed for each vehicle
passing the scanning point. The aim would be to sample each vehicle at
some fixed interval along its length, for example once each 100 mm, and
the reconstruction of the outline would not be necessary, because processing
of the outline character would be done at the data-capture stage and only
the speed and vehicle classification would be stored.
Vehicle speed can be measured very easily using a minor variant of this
line-scan technique. When the line-scan array is rotated so that it is parallel
to the plane of motion of the vehicle (i.e., so that the array strip is parallel
to the velocity vector of the vehicle), vehicle velocity is quite readily esti-
mated. Fig. 9 shows the trace resulting from the motion of the model down
the inclined plane. Suitably normalized, the slope of this trace can be used
to yield a direct estimate of vehicle speed. In Fig. 9 the front of the vehicle
FIG. 8. Line-Scan Synthesis of Car Image

FIG. 9. Velocity Slew Line-Scan Camera Image Output

traverses the 512 pixels of the linear array in about 160 scan lines (i.e.,
about 160 ms). By direct measurement, this distance of 512 pixels corresponds
to approximately 72 mm of the "road," giving an effective velocity of around
0.45 m/s. Some line array cameras permit direct measurement of this "slope"
parameter without an intervening image-analysis system.
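The slope-to-speed conversion can be sketched as a straight-line fit to the leading-edge trace. This is a hypothetical illustration (the trace values and scale factor are invented), but it uses the same arithmetic as above: pixels per millisecond times millimetres per pixel gives metres per second directly.

```python
import numpy as np

def speed_from_trace(front_px, mm_per_px, ms_per_scan=1.0):
    """Least-squares slope of the leading edge across successive scans.

    front_px[t] = pixel position of the vehicle front on scan t.
    Slope comes out in px/ms; scaled by mm/px it is mm/ms, i.e. m/s.
    """
    t = np.arange(len(front_px)) * ms_per_scan        # elapsed time, ms
    slope_px_per_ms = np.polyfit(t, front_px, 1)[0]
    return slope_px_per_ms * mm_per_px                # mm/ms == m/s
```

With the figures quoted above (512 pixels spanning about 72 mm, traversed in about 160 scans at 1 ms each), the fit gives 3.2 px/ms × 0.1406 mm/px ≈ 0.45 m/s.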
Once again, the capture of the actual image is not necessary, and any
field device would not have to replicate the imaging and slow-scan conver-
sion equipment used here to study the system. The limitations of this tech-
nique are set by the speed at which the charge-coupled devices can be read
from the line array of the camera. This may debar the technique from very
high-resolution acquisition, which would be required for number-plate rec-
ognition.
Vehicle speed, as gaged by the line-scan method just sketched or some
other suitable technique, is used to clock lateral line-scan image capture so
that the vehicle length of this lateral image can be captured and analyzed
once each 100 mm, say, to give a height reading for the vehicle at that point.
The resulting vector of height measurements forms the input to a trained
pattern classifier, which compares the vector to each vector of a set of height-
vector templates to determine which class the input vector belongs to (sedan,
truck, etc.).
This is a task at which neural-network methods excel. In particular, the
length of the vehicle may be determined from a knowledge of the vehicle's
speed and its height profile (the height output of the system will obviously
be zero immediately prior to the first nonnull scan, as well as immediately
after the last one; or, rather, the start of the vehicle will be signified by the
first nonnull scan and its end by a null scan).
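A minimal non-neural sketch of such a height-vector classifier is given below, assuming numpy; the two template profiles are invented placeholders, not measured vehicle data:

```python
import numpy as np

TEMPLATES = {  # illustrative height profiles (mm), sampled every 100 mm
    "sedan": np.array([ 600,  900, 1400, 1400, 1400,  900,  600], float),
    "truck": np.array([2400, 2400, 1000, 3600, 3600, 3600, 3600], float),
}

def classify(height_vec):
    """Nearest template by Euclidean distance, after resampling to a
    common length (vehicles differ in length and so in sample count)."""
    def resample(v, n):
        return np.interp(np.linspace(0, 1, n), np.linspace(0, 1, len(v)), v)
    n = len(next(iter(TEMPLATES.values())))
    v = resample(np.asarray(height_vec, float), n)
    return min(TEMPLATES, key=lambda k: np.linalg.norm(v - TEMPLATES[k]))
```

A trained neural network would replace the fixed Euclidean matching, but the input/output contract — height vector in, vehicle class out — is the same.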
Although the investigation was supported by a sophisticated image-anal-
ysis system in the experiment described here, a practical realization of the
technique would be based on direct electronic processing of the line-array-
output signal; an image analysis system per se would not be required.
On the basis of the quality of images obtained under the foregoing lab-
oratory conditions, it appears to be feasible to perform a complete classi-
fication of vehicles by a vertical-feature vector describing the height vari-
ations in profile on the basis of data derived from a suitably positioned line-
scan array. Furthermore, there seems to be no reason why the bulk of this
analysis should not be performed quite cheaply using dedicated electronic
hardware. It would now be worth building field-ready trial equipment.

DIGITAL IMAGE MENSURATION EXAMPLE

The object of the next experiment was to gain insight into the application
of digital image region analysis to repetitive areal and linear measurement
in which precision is required. The specific example of this type was the
calibration and assessment of fine-particle sieves of the kind used in road
construction materials assessment. Such sieves are subject to deterioration,
and efficient calibration and recalibration processes for both sieves and the
ball bearings used for field checks would be highly desirable. A suite of
programs was developed to digitize an image of a set of particles of unknown
size, thresholded to produce white regions against a dark background. The
image is then analyzed to provide particle x- and y-extents, areas, diameters
of the area-equivalent circle, and the population mean and variance, together
with a histogram of the areas.
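The measurement pipeline just outlined (threshold, label regions, measure each) can be sketched with a simple 4-connected flood fill, assuming numpy; this is an illustrative reimplementation, not the proprietary programs used in the study:

```python
import numpy as np
from collections import deque

def measure_particles(binary):
    """Label white regions (4-connected flood fill) and report, per particle,
    area, x/y extents, and the diameter of the area-equivalent circle."""
    binary = np.asarray(binary, bool)
    seen = np.zeros_like(binary)
    stats = []
    for i, j in zip(*np.nonzero(binary)):
        if seen[i, j]:
            continue
        q, pix = deque([(i, j)]), []
        seen[i, j] = True
        while q:
            r, c = q.popleft()
            pix.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < binary.shape[0] and 0 <= cc < binary.shape[1]
                        and binary[rr, cc] and not seen[rr, cc]):
                    seen[rr, cc] = True
                    q.append((rr, cc))
        rs, cs = zip(*pix)
        area = len(pix)
        stats.append({
            "area": area,
            "y_extent": max(rs) - min(rs) + 1,
            "x_extent": max(cs) - min(cs) + 1,
            "equiv_diam": 2.0 * np.sqrt(area / np.pi),
        })
    areas = np.array([s["area"] for s in stats], float)
    return stats, areas.mean(), areas.var()
```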
The application of this technique to measurement of sieve characteristics
(as used for road materials standardization) is straightforward. Computing
time for a grid of cells was on the order of several minutes on a small 8086
PC used with an Imaging Technology PC-Vision image digitizer and frame
store and a Dage MT165 (black-and-white) camera mounted on a standard
Polaroid lighting base modified to permit back-illumination of samples and
dimmer control of light levels. Analysis of the results showed that the edge
distortions would require special attention in an applied system, but that
the central areas were well dealt with by the imaging system. Refinement
would present only developmental, rather than basic, research problems.
These and many other mensuration problems are now economically ad-
dressable using personal computer (PC) based equipment, but are not really
a subject for research (although the development efforts may be more de-
manding than this would seem to suggest).
AUTOMATED DETERMINATION OF MOTOR VEHICLE REGISTRATION
NUMBERS

Work on selective acquisition of vehicle data has been considered else-
where (Wigan 1986), but the last stage of a highly selective data-acquisition
system is to actively identify the particular vehicle. While such methods as
passive (and active) transponders (electronic number plates) are coming
into use, they are unlikely to be available for data collection on all vehicles
for a considerable time. An alternative approach is to capture and interpret
the registration markings on the vehicle. Several countries have started
developing commercial equipment. Computer Recognition Systems, in the
U.K., developed a specialized neural-net-based hardware platform that, when
connected to a data base of stolen vehicles, proved able to recognize up to 50%
of number plates via infrared cameras (Dickinson and Waterfall 1986).
It was therefore sensible to limit our investigative research to the minimum
to enable such techniques to be properly evaluated and applied and reserve
available effort for the task of evaluating equipment when it becomes avail-
able for purchase. Australian work at CSIRO (Commonwealth Scientific
and Industrial Research Organisation) has been directed specifically toward
real-time digitization and multiple-moment generation; and toward direct
fast Fourier transform generation using a special-purpose chip. The relevant
investigation in the traffic and transport domain was therefore to work on
the requirements for number recognition using a restricted number of gen-
erated moments (Newlands 1985). The ancillary problem of locating num-
bers on an image is a separate one, on which published experience already exists.
The recognition of visual patterns by computer is subject to a number of
constraints that are difficult to satisfy simultaneously. The recognition method
should not be disabled by translation, rotation, or change in size of the
image (Hu 1962). In addition, between-class discrimination features should
be insensitive to within-class variations and vice versa (Abu-Mostafa and
Psaltis 1985). Traditional approaches include property lists and statistical
(template) matching. Moment invariants developed by Hu (1962) are indeed
insensitive to variations in size, position, and orientation. The method uses
two-dimensional invariants derived from moments that are themselves de-
rived from the intensity distribution of the image.
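The translation-invariance property can be illustrated directly: moments computed about the object's centroid do not change when the object is shifted in the frame. A minimal numpy sketch of geometric central moments follows (these are the raw ingredients of Hu's invariants, not the full rotation- and scale-invariant set):

```python
import numpy as np

def central_moments(img, max_order=3):
    """Geometric moments about the image centroid: translating the object
    leaves these (and hence Hu's invariants built from them) unchanged."""
    img = np.asarray(img, float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return {(p, q): ((x - xc)**p * (y - yc)**q * img).sum()
            for p in range(max_order + 1) for q in range(max_order + 1)
            if p + q <= max_order}
```

By construction the first-order central moments vanish, and every higher-order value is identical for any translated copy of the same object.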
Investigating the moment-invariant literature was useful both for this
imaging work and the characteristics of the information-retrieval results
themselves (Wigan 1985a). One might expect that most of the references
in this fast moving area would be very recent. This was the case only for
online searches of a specialist data base. The most useful review literature
and textbooks were of a far earlier vintage. Fig. 10 shows the age profiles
of references found in three different types of resource. This should en-
courage those seeking information on new subjects to include searches of
much earlier material, especially when a clear view of the fundamentals is
needed. For example, in the case of shape recognition by moment invariants
the seminal paper is Hu (1962).
An infinite series of these moment invariants exists for any image cor-
responding to the infinite amount of detail in the image. All of the infor-
mation in a digitized picture can be captured in a series of moment invariants
of finite length. Even as few as two members of this series can sometimes
be sufficient to discriminate between images. This approach to number-
plate-character recognition is likely to offer effective field-data reduction,
and far less computation effort than template-matching approaches. The
reconstruction of alphanumeric images from a subset of the moments of the
images was the small step taken as part of this investigation project. The
reconstructions show what information is contained in the moments of var-
ious orders.

FIG. 10. Immediacy of Identification of Imaging Literature Sources: Percentage
of Citations Identified within N Years of Host Publication Date (hosts: 1984-85
online search; 1978 review article in 1983 anthology; 1977 reference text)
Hu's (1962) Cartesian moments are computationally difficult both at high
orders and in image reconstruction. A new set of (Legendre) moments was
defined that does not suffer these problems (Newlands 1985; Wigan and
Cullinan 1987); they are easily extended to higher orders and straightforward
to use to reconstruct an image.
For each object, the list of image points is processed to extract the centroid
coordinates of the object and then again to calculate the moments relative
to the centroid, to obtain moments with translational invariance. For small
pictures it is found that moments up to order seven are sufficient for good-
quality reconstructions (less than 3% error) and that moments up to order
11 will enhance the edges of the image (less than 0.5% error). The higher-
order moments appear to be responsible for the better definition of corners
in the image. There are disparities between the present work and results
published by others in this field. The availability of cheap, robust imaging
devices for the field capable of direct moment computation suggests that
this approach could well become very effective.
A detailed review of moment invariance is given elsewhere (Wigan 1985a;
Wigan and Cullinan 1987). Particularly relevant work is reported by Hu
(1962), Maitra (1979), Hsia (1981), and Yin and Mack (1981). Boyce et al.
applied Zernike moment invariants to a picture of a model vehicle using a
64 x 64 pixel array with 256 gray levels. The picture was then rotated in
10° steps up to 90°, and a number of dilations and contrast changes were
carried out. The values of the invariants varied within an acceptable range,
and the image could be reconstructed to an accuracy of over 90% using in-
variants up to the eighth order, but the visual image was still almost un-
recognizable. A substantial subjective improvement was gained by using
moments up to order 20 even though the reconstruction error was still 6%.
As few as two invariants can be used to classify objects effectively (e.g., a
subset of the alphabet); more are normally used. Cartesian invariants per-
form better on a silhouette than on variable-intensity images, suggesting that
the internal detail in objects is not well represented by these moments.
Teague (1980) reconstructed the letters E and F using a 21 x 21 pixel
array. Newlands (1985) revised the basis of these functions to use Legendre
polynomials, which are both orthogonal and simple to calculate. The results
were used to assess the data compression and discrimination available using
this moment-invariant method. A recursive specification can be based on
the following equations:
P_0(x) = 1 .................................................. (4)
P_1(x) = x .................................................. (5)
(n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) - n P_{n-1}(x) ....... (6)
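The recursion of Eqs. (4)-(6) is easily checked numerically; a minimal sketch, assuming numpy:

```python
import numpy as np

def legendre(n, x):
    """P_n(x) evaluated via the three-term recursion of Eqs. (4)-(6)."""
    x = np.asarray(x, float)
    p_prev, p = np.ones_like(x), x.copy()   # P_0 and P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        # (k + 1) P_{k+1} = (2k + 1) x P_k - k P_{k-1}
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

Orthogonality of these polynomials on [-1, 1] is what makes the reconstruction of Eq. (8) a direct summation rather than the solution of coupled equations.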
Cartesian moments (M_jk) and Legendre moments (L_mn) have a simple
relationship, through the coefficients C_mj of the Legendre polynomials:

L_mn = (1/4)(2m + 1)(2n + 1) Σ_{j=0}^{m} Σ_{k=0}^{n} C_mj C_nk M_jk .......... (7)

The reconstruction of images from these orthogonal Legendre moments is
done by summation on a finite diagonal terminated at an (arbitrary) max-
imum value N_max:

I(x, y) = Σ_{N=0}^{N_max} Σ_{n=0}^{N} L_{n,N-n} P_n(x) P_{N-n}(y) .......... (8)

The good behavior of the Legendre polynomials as N_max → ∞ is in contrast
to the Cartesian series, and provides a direct summation instead of an
algebraic solution of coupled equations. The reconstructed image intensity
is continuous, and so thresholding is required to recover the original two-
level image.
The results of applying these methods show that moments up to order
seven are highly effective in reconstructing the images, in good agreement
with previous work [e.g., Hsia (1981)] on Cartesian moments. However, a
marked deterioration in accuracy was found with the addition of high-order
moments (20+). The extension to images of other types is not certain, given
the experience of Boyce et al. (Hsia 1981) with Zernike polynomials to
order 20.
The limited work reported here on moment-invariant applications pro-
vides the basis for a reasonable alternative to template-matching techniques,
but confirms that limiting computation to order-seven moments is an ef-
fective means of discriminating and reconstructing binary intensity-level
images of alphanumeric characters.

SELECTION OF FULL DEVELOPMENTAL APPLICATIONS

This paper summarizes some of the scope of practical application areas
for digital image analysis in road and traffic research, and demonstrates that
at least the first stage of such applications is now both practical and eco-
nomical.
The importance of such applications is rising as the cost of equipment
drops and the cost of staff effort rises. Imaging applications are also be-
coming of practical importance in other areas of road research, such as
visual discrimination. Gallagher and Lerner (1983) describe U.S. work in
this area, using regionalization and successive simplification as a means of
determining a measure of the visual complexity of a scene. This is one of
the pattern-recognition applications referred to in the introduction as being
in the next phase of applications work on this line of applied research.
The two major initial applied-research targets should be road-surface
ratings and defects, including the basic correlation studies proposed between
visual ratings, texture ratings, and diagnostic ratings by individuals; and
vehicle classification. Automated sieve calibration, crack-size measurement,
and other linear and areal measurement tasks are now simply developmental
tasks; the example reported suffices to demonstrate the techniques available.
Road-surface rating and defect classification is recommended as the best
choice for further work, with vehicle classification methods as the second
priority. The close associations with video data logging, pavement manage-
ment systems, expert-systems decision support, and consistency checks on
surface ratings have been addressed elsewhere (Wigan 1985b), and are the
basis for the choice of surface rating as the preferred next project, especially
as visual rating correlations are an independently valuable and timely re-
source that should be obtained in parallel.
The equipment required to do this is now simply that required for general
image-processing work. Interested bodies can now confidently obtain at
least the basic in-house equipment for capturing, processing, and re-pre-
senting imaging materials and know that it can be used effectively for
imaging applications. A wide range of microsystem options was already
available in 1985, from the most primitive digitizer to a full-scale automatic
motion-analysis system relying totally on video inputs. Currently this type
of equipment is well matched to desktop processing power. Advanced
tool kits for image-processing development and neural networks are readily
available from a number of vendors for all the tasks addressed in this paper.
The cost of such equipment has fallen by a factor of 20 and more since this
investigative project was initiated, and its power has increased; improvements
in the readiness of the road-engineering community to exploit the productivity
and technical gains now available have not kept pace.

ACKNOWLEDGMENTS

The work reported here drew from a number of different projects pursued
under this joint program. Those at Deakin were under the supervision of
Michael Cullinan. The discussion of road-surface textures and defects was
drawn from material developed toward a Ph.D. thesis to be submitted by
Kurt Benke. The application of moments to number recognition was drawn
from a B.Sc. honors thesis by Douglas Newlands. The sieve mensuration
results were derived using proprietary programs written by Imaging Appli-
cation Pty Ltd. and equipment from a wide range of sources including the
writer and Deakin and Melbourne Universities. Subsequent to the speci-
fication defined here, the selective video trigger system was built by John
Dods at ARRB.

REFERENCES

Ade, F. (1981). "Characterization of textures by eigenfilters." Signal Processing, 5, 451.
Abu-Mostafa, Y. S., and Psaltis, D. (1985). "Image normalization by complex moments." IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-7(1), 46-55.
Ahlers, K. D., and Alexander, D. R. (1985). "Microcomputer based digital image processing system developed to count and size laser generated small particle images." Optical Engrg., 24(6), 1060-1065.
Barlow, H. B. (1969). "Pattern recognition and the responses of sensory neurons." Ann. N.Y. Acad. Sci., 156, 822-881.
Benke, K. K. (1986). "The analysis of visual texture," thesis, presented to Deakin University, at Geelong, Australia, in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Brady, M. (1982). "Computational approaches to image understanding." Comput. Surv., 14(1), 3-71.
Brodatz, P. (1966). Textures: A photographic album for artists and designers. Dover, New York, N.Y.
Casasent, D., and Psaltis, D. (1979). "Optical pattern recognition using invariant moments." Soc. of Photo-Optical Instrum. Engineers, (201), 107-114.
Castleman, K. R. (1979). Digital image processing. Prentice Hall, Englewood Cliffs, N.J.
Cox, G., Curphey, D., Fronek, D., and Wilson, J. (1986a). "Remote sensing of highway pavements at road speeds: Using the Motorola 68020 microprocessor." Microcomputers in Civ. Engrg., 1(1), 1-13.
Cox, G., Merrill, R. C., and Fronek, D. (1986b). "Pavement management system with realtime microprocessor-based computation." Microcomputers in Civ. Engrg., 1(2), 95-105.
Dickinson, K. W., and Waterfall, R. C. (1984). "Image processing applied to traffic. I: General review." Traffic Engrg. Control, (1), 6-13.
Duff, M. J. B. (1983). Neighborhood operators in biological processes of images. E. J. Braddick and A. C. Sleigh, eds., Springer-Verlag, Berlin, Germany.
Duncan, J. S., and Frei, W. (1982). "Very large scale integration (VLSI) approach to feature extraction." SPIE Applications of Digital Image Processing, (4), 359, 378-385.
Gagalowicz, A. (1980). "Visual discrimination of stochastic texture fields based upon their second order statistics." Proc., 5th Int. Conf. Pattern Recognition, Institute of Electrical and Electronics Engineers, New York, N.Y., 786-788.
Gallagher, V. P., and Lerner, N. (1983). "A model of visual complexity of highway scenes." Report FHWA/RD-83/083, U.S. Dept. of Transportation, Washington, D.C.
Gonzalez, R. C., and Wintz, P. (1987). Digital image processing, 2nd ed., Addison-Wesley, Reading, Mass.
Haas, L., Shen, H., Phang, W. A., and Haas, R. (1985). "Application of image analysis technology to automation of pavement condition surveys. Transportation towards the year 2000." Proc., Int. Transp. Congress RTAC, Ottawa, Canada, (5), C57-C72.
Haralick, R. M. (1984). "Digital step edges from zero crossings of second directional derivatives." IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, 58-68.
Harwood, D., Subbarao, M., and Davis, L. S. (1983). "Texture classification by local rank order correlation." TR-1314, Computer Vision Laboratory, Center for Automation Research, University of Maryland, Baltimore, Md.
Hu, M.-K. (1962). "Visual pattern recognition by invariant moments." IRE Trans. on Info. Theory, 8(Feb.), 179-187.
Hubel, D. H., and Wiesel, T. N. (1962). "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex." J. Physiology, London, England, (160), 106-154.
Hsia, T. C. (1981). "A note on invariant moments in image processing." IEEE Trans. on Systems, Man and Cybernetics, SMC-11(12), 871-874.
Laws, K. I. (1980). "Textured image segmentation." Tech. Report 940, Image Processing Institute, University of Southern California, Los Angeles, Calif.
Maitra, S. (1979). "Moment invariants." Proc., IEEE, 67(4), 697-699.
Marr, D. (1982). Vision. W. H. Freeman and Co., San Francisco, Calif.
McClelland, J. L., and Rumelhart, D. E. (1988). Explorations in parallel distributed processing. MIT Press, Cambridge, Mass.
Newlands, D. A. (1985). "Image reconstruction from moment series," thesis presented to Deakin University, Geelong, Australia, in partial fulfillment of the requirements for the degree of Bachelor of Science.
Pentland, A. P. (1984). "Fractal-based description of natural scenes." IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, 661-674.
Teague, M. R. (1980). "Image analysis via the general theory of moments." J. Optical Soc. America, 70(8), 920-930.
Triendl, E. E. (1972). "Automatic terrain mapping by texture recognition." Proc., 8th Int. Symp. on Remote Sensing of Environment, Environmental Research Institute of Michigan, Ann Arbor, Mich.
Van Gool, L., Dewaele, P., and Oosterlinck, A. (1985). "Texture analysis Anno 1983." Computer Vision, Graphics and Image Processing, (29), 336-357.
Vickers, A. L., and Modestino, J. W. (1982). "A maximum-likelihood approach to texture classification." IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-4(1), 61-68.
Wechsler, H., and Kidode, M. (1980). "A random walk procedure for texture discrimination." IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-1(3), 272-280.
Wermser, D., and Liedtke, C. E. (1982). "Texture analysis using a model of the visual system." Proc., 6th Int. Conf. on Pattern Recognition, Munich, Germany, 1078-1080.
Wigan, M. R. (1983). "Information technology and transport: Research proposals." Technical Note TN 126, Institute for Transport Studies, University of Leeds, England.
Wigan, M. R. (1985a). "Image processing for roads: An on-line literature review and text data base assessment." Australian Road Res., Melbourne, Australia, 15(1), 50-55.
Wigan, M. R. (1985b). "The potential for computer based data logging and interpretation." Proc., Pavement Management Systems Workshop, Australian Road Research Board, Vermont, Australia, 65-71.
Wigan, M. R. (1986). "Selective road and traffic data acquisition (SDA) for roads and traffic and video capture control." Internal Report AIR 413-3, Australian Road Research Board, Vermont, Australia.
Wigan, M. R., and Cullinan, M. C. (1984). "Machine vision for road research: New tasks and old problems." Proc., 12th ARRB Conf., Vermont, Australia, 12(4), 76-86.
Wigan, M. R., and Cullinan, M. C. (1986). "Digital image processing: An applications review for road research applications." Proc., 2nd AUSGRAPH Conf., Australian Computer Graphics Association, Melbourne, Australia, 57-60.
Wigan, M. R., and Cullinan, M. C. (1987). "Image processing applied to roads: Surface texture, mensuration, vehicle shape and number detection." Res. Report ARR 145, Australian Road Research Board, Vermont, Australia.
Yin, B. H., and Mack, H. (1981). "Target classification algorithms for video and forward looking infrared (FLIR) imaging." Soc. Photo-Optical Instrument Engineers, (302), 134-140.
