Image-Processing Techniques Applied to Road Problems
Journal of Transportation Engineering, January 1992
Marcus Wigan
INTRODUCTION
FIG. 1. Convolution Operation for Two Continuous Functions

FIG. 2. Convolution Operation for Two Discrete Functions F(m, n) and G(i - m, j - n)
The digital convolution of two functions, F and G, which are discrete and
nonzero over a finite domain, is represented by
H(i,j) = F · G = Σ_m Σ_n F(m,n)G(i - m, j - n) .............. (2)
where H(i,j) = a pixel in the output image and the summations occur over
an area of nonzero overlap (Castleman 1979; Gonzalez and Wintz 1987).
Each pixel in turn has its value replaced by the sum of all pixel values within
the window, which are weighted according to the mask coefficients.
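The windowed sum just described can be sketched directly in code. Below is a minimal Python/NumPy version of the digital convolution of Eq. 2; it uses the scatter form (each input pixel spreads a weighted copy of the mask), which is equivalent to the gather form of the equation:

```python
import numpy as np

def convolve2d(F, G):
    """Direct digital convolution H(i,j) = sum_m sum_n F(m,n) G(i-m, j-n),
    evaluated over the full region of nonzero overlap."""
    fh, fw = F.shape
    gh, gw = G.shape
    H = np.zeros((fh + gh - 1, fw + gw - 1))
    for m in range(fh):
        for n in range(fw):
            # Each input pixel F(m, n) spreads a weighted copy of G
            H[m:m + gh, n:n + gw] += F[m, n] * G
    return H

# A unit impulse convolved with a mask reproduces the mask at the impulse
image = np.array([[0, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0]], dtype=float)
mask = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]], dtype=float)
H = convolve2d(image, mask)
```

Replacing each pixel by the mask-weighted sum over its window, as described above, is this same operation restricted to the valid (fully overlapping) region.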
High classification accuracies are possible, albeit at great complexity and
huge computational expense. Neural-network techniques show promise in
bridging this gap in computational speed and interpretation. At present,
however, the 5 × 5 kernels used by Laws remain the benchmark (Ade 1981)
for convolution filtering. Mask size matters: 3 × 3 masks have been generally
used for both edge detection and texture analysis, although they are sensitive
to noise (Haralick 1984). Marr (1982) defined large masks for edge detection
based on a Gaussian smoothing function convolved with a Laplacian operator.
Unfortunately, larger kernels also produce thicker edges, apart from
adding to the computational burden. Masks that are 5 × 5 and larger are an
effective compromise (Nevatia 1982; Duff 1983; Laws 1980). Several attempts
have been made to improve on the basic Laws approach. Normalization
requirements were weakened by Harwood et al. (1983); and Duncan
and Frei (1982) endorsed it as perhaps the most successful approach to
analyzing texture and cluttered scenes, and suggested hardware very large
scale integration (VLSI) implementations to enhance operational speed.
The critical problem of texture discrimination may be approached either
from the viewpoint of models based on human vision or by heuristic
approaches suitable for efficient computer implementation (Van Gool et al.
1985).
The four most important Laws masks in common use are the following:

R5R5 is                       E5S5 is
 1   -4    6   -4    1        -1    0    2    0   -1
-4   16  -24   16   -4        -2    0    4    0   -2
 6  -24   36  -24    6         0    0    0    0    0
-4   16  -24   16   -4         2    0   -4    0    2
 1   -4    6   -4    1         1    0   -2    0    1

and L5E5 is                   and L5S5 is
-1   -2    0    2    1        -1    0    2    0   -1
-4   -8    0    8    4        -4    0    8    0   -4
-6  -12    0   12    6        -6    0   12    0   -6
-4   -8    0    8    4        -4    0    8    0   -4
-1   -2    0    2    1        -1    0    2    0   -1
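Each of these 2-D masks is the outer product of two of Laws's 1-D vectors (L5 level, E5 edge, S5 spot, R5 ripple). A sketch in Python follows; note that sign conventions for the 1-D vectors vary between authors, so the E5S5 generated here may differ from a tabulated version by an overall sign, which does not affect texture energies:

```python
import numpy as np

# Laws's 1-D kernels (one common sign convention)
L5 = np.array([1, 4, 6, 4, 1])      # level (local average)
E5 = np.array([-1, -2, 0, 2, 1])    # edge
S5 = np.array([-1, 0, 2, 0, -1])    # spot
R5 = np.array([1, -4, 6, -4, 1])    # ripple

# The 5 x 5 masks are outer products of the 1-D kernels
R5R5 = np.outer(R5, R5)   # isotropic ripple mask
L5E5 = np.outer(L5, E5)   # vertical-edge mask
L5S5 = np.outer(L5, S5)   # vertical-spot (line) mask
E5S5 = np.outer(E5, S5)   # diagonal mask
```

Because E5, S5, and R5 each sum to zero, any mask built from at least one of them is zero-sum, which is the property exploited later for illumination-insensitive filtering.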
The performance of these masks was tested for a variety of conditions.
The masks were convolved with binary images of vertical and horizontal
bars and checkerboard arrays, all with a fundamental periodicity of four
pixels and at three levels of intensity [see Wigan and Cullinan (1987) for
greater detail]. Under certain conditions the texture energies are proportional
to the variance of the filtered textures.
The E5S5 (diagonal) and R5R5 (isotropic) masks did not discriminate
between horizontal and vertical bar patterns. Mask R5R5 discriminates well
between disks and checkerboard patterns. Masks L5E5 and L5S5 detect
vertical edges and discriminate well between all the patterns except horizontal
bars. The pattern discrimination produced by each mask is similar
at each level of contrast. At high contrast the patterns respond much more
strongly to each mask, resulting in much higher energies. This is due to the
greater intensity differences across the edges coupled with the operation of
differentiation performed by the masks. The critical observation is that
discrimination is a function of contrast. It is therefore necessary to apply
some form of normalization for brightness and contrast.
Synthesized Gaussian noise fields with variable mean and standard deviation
were then convolved with three gray-level intensities with four different
variances of the mean levels. All the masks produced good separation
between the images. Mask R5R5 (spot detector) produced the highest energies
for these random images. Increasing the standard deviation is equivalent
to increasing the image contrast. Textures should be compared within a
framework of contrast invariance. These experiments emphasized the need
for a universal method of normalization. A common method is to normalize
an image by standardizing its brightness histogram.
In Table 1, the histogram population statistics are given for the three
textures in Fig. 3 [from Brodatz (1966)], with the mean and variance of the
gray levels for two types of normalization.

FIG. 3. Standard Visual Textures from Brodatz (1966): (a) Grass; (b) Raffia; (c) Sand

The first forces a specified mean
and standard deviation on the image and is useful for psychophysical ex-
periments. The second is a histogram equalization (Ahlers and Alexander
1985). In this case, each pixel in the image is subjected to the following
transformation:

g(q) = (M/N²) Σ (p = 0 to q) h(p) ............................ (3)

where g(q) = new gray value; h(p) = the histogram count of old gray value
p; M = number of gray levels; and N² = number of pixels in a square
image of size N × N. Overall contrast
is enhanced for most of the image pixels, resulting in greatly improved
visibility. Masks were convolved with homogeneous samples of grass, raffia,
and sand. The results for the two methods of normalization are given in
Tables 2 and 3. The texture energies are much higher for histogram equal-
ization. The level of discrimination is good for both types of normalization.
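The cumulative-histogram transformation just described can be sketched as a lookup-table operation. This is a minimal Python/NumPy version, assuming 8-bit gray levels:

```python
import numpy as np

def equalize(image, M=256):
    """Histogram equalization: map each gray level q to a value
    proportional to the cumulative histogram up to q."""
    h, _ = np.histogram(image, bins=M, range=(0, M))
    cdf = np.cumsum(h)                                   # running pixel count
    lut = np.floor(cdf * (M - 1) / image.size).astype(np.uint8)
    return lut[image]

# A low-contrast 4 x 4 test image confined to gray levels 100-103
img = np.array([[100, 100, 101, 101],
                [100, 102, 101, 102],
                [103, 102, 101, 100],
                [103, 103, 102, 100]], dtype=np.uint8)
out = equalize(img)
```

After equalization the four occupied gray levels are spread across most of the available range, which is the increase in overall contrast noted above.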
Tables 4 and 5 apply to textures of conventional road surfaces digitized
TABLE 2. Relative Response of Each Laws Mask to Textures in Fig. 3 after Normalization

Texture     L5E5    L5S5    E5S5    R5R5
Grass       78.9    29.9     2.4    21.4
Raffia      76.2    17.6     0.8     3.8
Sand        76.0    20.3     1.7     9.8
TABLE 3. Relative Response of Each Laws Mask to Textures in Fig. 3 after Histogram Equalization

Texture     L5E5    L5S5    E5S5    R5R5
Grass      316.7   119.3     9.3    85.7
Raffia     282.9    61.7     3.0    15.2
Sand       307.9    82.9     7.0    45.8
TABLE 4. Relative Response of Each Laws Mask after Convolution with Low-Resolution Textures of Road Surfaces

Texture          L5E5    L5S5    E5S5    R5R5
Roadlo 0.002    198.0    84.6    10.9   129.0
Roadlo 0.004    204.0    81.6    10.7   133.0
Roadlo 0.006    249.0    71.6     7.5    45.3
Roadlo 0.008    148.0    47.9     4.8    68.7
TABLE 5. Relative Response of Each Laws Mask after Convolution with High-Resolution Textures of Road Surfaces

Texture          L5E5    L5S5    E5S5    R5R5
Roadhi 0.002    321.0    77.8     6.4    24.0
Roadhi 0.004    298.0    65.5     5.0     6.8
Roadhi 0.006    229.0    32.0     1.9    10.2
Roadhi 0.008    145.0    28.9     2.0    24.1
at two resolution levels (Fig. 4). The three textures are visually similar, but
increase in coarseness of grain size. Each mask produces a better discrimination
when applied to the higher-resolution images, with here a monotonically
decreasing response for all masks barring the R5R5 spot detector.
The high response for the L5E5 mask suggests an underlying linear structure
within the coarse, grainy road surface. Textures are scale dependent, and
must be compared at a fixed resolution.
To improve upon the performance of the Laws masks we needed new
masks with different (noninteger) coefficients, unlike most published models.
Mask coefficients with absolute integer values as high as 100 increase the
degree of conformity with local pattern primitives without the additional
computing expense of real arithmetic. A plausible approach to the problem
of texture classification is to filter the image with a number of primitive
masks that are feature selective.
The texture classification problem reduces to searching for symmetrical
and antisymmetrical zero-sum convolution masks with integer coefficients
in the range of -100 to 100. For a 5 × 5 kernel, the feasible region contains
more than 10^23 possible masks. The task is to define optimal masks subject
to specified performance criteria. This means discarding traditional masks
with fixed coefficients and replacing them with convolution masks containing
variables as coefficients. The proposed method involves mask optimization
and can be interpreted as solving a problem in nonlinear mathematical
programming. In particular, it is possible to generate a single convolution
mask that is capable of producing high texture classification accuracies when
tested on well-known natural textures.
The speed resulting from using only one mask is not compromised by
inaccuracy or unreliability, because the mask is adaptive and accommodates
new circumstances. Prior training enables it to compensate for textures
with new features and for the effects of digitization. The procedure for
generating an adaptive mask is flexible in that any figure of merit may be
used for optimization. In addition, the mask itself can be varied in size and
operates as a feature extractor that can be trained on any arbitrary set of
input patterns.
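The idea of generating a mask against a figure of merit can be illustrated with a deliberately simple random search over zero-sum integer masks, using the ratio of filtered-image variances between two training textures as the merit function. The actual work used nonlinear mathematical programming, so this is only a sketch of the principle, with synthetic textures standing in for real samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def filter_energy(image, mask):
    """Texture energy: variance of the (valid-region) filtered image."""
    k = mask.shape[0]
    h, w = image.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * mask)
    return out.var()

def random_zero_sum_mask(size=5, bound=100):
    """Draw an integer mask in [-bound, bound] and force it to zero sum
    (the centre adjustment may exceed the bound; fine for illustration)."""
    m = rng.integers(-bound, bound + 1, size=(size, size))
    m[size // 2, size // 2] -= m.sum()
    return m

def search(texture_a, texture_b, trials=200):
    """Keep the mask whose energy ratio between the two textures is largest."""
    best, best_ratio = None, 0.0
    for _ in range(trials):
        m = random_zero_sum_mask()
        ra = filter_energy(texture_a, m)
        rb = filter_energy(texture_b, m)
        ratio = max(ra, rb) / (min(ra, rb) + 1e-12)
        if ratio > best_ratio:
            best, best_ratio = m, ratio
    return best, best_ratio

# Two synthetic "textures": smooth noise versus vertical bars
smooth = rng.normal(100, 2, size=(32, 32))
bars = 100 + 20 * (np.arange(32)[None, :] % 4 < 2) + rng.normal(0, 2, (32, 32))
mask, ratio = search(smooth, bars)
```

The zero-sum constraint makes the response insensitive to the mean brightness, mirroring the normalization requirement discussed earlier.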
The utility of an adaptive mask is demonstrated in the following example.
In Fig. 4, three different road surfaces are illustrated, before [Fig. 4(a,b)]
and after [Fig. 4(e,f)] normalization. Histogram equalization has resulted
in a noticeable increase in contrast. The image in Fig. 4(c,d) was digitized
as a 164 × 200 array, and was further divided into four quadrants, Q1-Q4.
Quadrant Q1 featured the defect, a crack pattern; quadrant Q4 consisted
of a relatively uniform area of road surface and was used as the background.
The background Q4 was further divided into four quadrants, Q4a-Q4d.
These subimages are now only 41 × 50 arrays. A mask was optimized to detect
the background by training it on sample Q4a:
-43  -72  -87  -72  -43
-37  -61  -83  -61  -37
 -7  -16   -2  -16   -7
 36   68   74   68   36
 44   87  100   87   44
The convolution of this mask with the Q4a pattern yields a response that
is a function of the variance of the filtered image. The value was found to
be E = 0.192. This was repeated for a second, morphologically similar
background sample, Q4c.
The result for sample Q4c was E = 0.207, which shows good agreement,
especially in view of the very small sample sizes. We also convolved the
mask with the other two background samples, Q4b and Q4d, which were
taken from the right-hand side of the Q4 image. These samples were more
uniform because they were more distant from the crack and were not subject
to disruptions arising from the transition (edge effects). The results for Q4b
and Q4d are 0.152 and 0.153, respectively. This is an excellent level of
agreement.
The pooled average of the four samples of Q4 was E = 0.176, compared
with a value of 0.286 when the mask was applied to the crack pattern in
quadrant Q1. The response ratio is 1.63, which is a significant level of
discrimination. These results indicate that the mask is capable of providing
reproducible results on the background samples, and can clearly discriminate
between the crack pattern and the background. The second part of the
experiment involved the derivation of a mask to produce maximum
discrimination between the crack pattern in Q1 and the background road
surface in Q4, producing
 -40   10   38   10  -40
  81  -21  -75  -21   81
-100   33   82   33 -100
  84  -31  -62  -31   84
 -38   17   27   17  -38
This mask produced a response of 0.0147 for sample Q1 versus 0.0327
for sample Q4, for a response ratio of 2.22, which represents significantly
better discrimination than for the last mask. The twofold increase in the
response of Q4 relative to Q1 greatly reduces the probability of misclassification.
It is evident that even with the small image sizes used here, adaptive
masks provide a very useful analytic approach for the identification and
quantification of defects.
For pavement-crack-length determination it became apparent that the
filter response is sensitive to the amount of cracking within the trial region.
This has not yet been pursued. Further work requires a precise quantitative
specification of crack length and (in morphological terms) an algorithm
to compute it.
Cox et al. (1986a,b) have subsequently concentrated on finding and implementing
specialized operators to discriminate particular features of interest,
such as crack sizes and orientations. This approach leaves unanswered
the question of how to combine the numerical results for each individual
feature to make an index, a problem shared by Haas et al. (1985). Cox et
al. used a generalized scoring vector, initially set up for the published weightings
from the Nevada pavement management scoring system, for this purpose.
It is worth asking the question: How do the texture values from a
direct adaptive-mask system (such as the one described) relate to these
overall scores? Perhaps the painful (and computationally intensive) process
of measuring each individual component and then combining the values
obtained before using them is no more effective, and it certainly takes
longer to do.
The basic question is to find an overall score that correlates well with
judgments on the road condition and the bases used by people to decide
what to do about it. This area is probably not sufficiently well investigated
to justify the massive computational efforts of Cox et al. (1986a,b), and
needs addressing. Pattern-matching representations and recognition techniques,
such as back-propagation (McClelland and Rumelhart 1988), capable
of undertaking most of the necessary intensive computation prior to
the data-acquisition process are definitely desirable.
The approach described is eminently well suited to a VLSI implementation
of image convolution with a 5 × 5 mask. A texture-based characterization
of road-surface defects can be carried out at the full video frame rate of
30 frames/sec, using road images of a resolution and coverage determined
by lens aperture, image size, and the image-capture limitations of the camera.
A review of the recent analytic aspects of imaging research was carried
out, discussing advantages and disadvantages of a number of important
approaches to texture classification and the motivation and rationale for
developing a practical approach to texture classification. The theme of tex-
ture discrimination based on the use of convolution masks was developed,
with comparisons to the Laws approach.
Convolution masks have a number of fundamental advantages over other
methods of texture classification. They are faster and more accurate texture
analyzers than Fourier techniques or cooccurrence matrices. Convolution
masks are a mathematical representation of feature extractors used by the
human visual system. A pattern-classification system will produce results
that are consistent with human judgment if it operates in a similar manner.
This can be an important consideration in applications involving the auto-
mation of human visual inspection. The masks are local operators, and
because of this they are particularly suitable for segmentation of a scene
composed of many textures.
A number of experiments were carried out with different convolution
masks applied to both test patterns and road surfaces. The results confirm
the need for normalization with respect to brightness and contrast. Textures
should also be compared at a fixed scale. Although transformations for scale
invariance are possible, there is the risk of losing the texture attribute
known as coarseness. The results of these tests demonstrate the need for
new and improved masks in order to increase classification accuracy.
Improved masks may also lead to a decrease in the total number of masks
required, resulting in improved operational speed. The search for these
unknown masks has been narrowed by defining a number of appropriate
constraints.
The present work introduces the idea of replacing fixed convolution masks
with masks containing variables as coefficients. The task faced is to develop
a formal procedure for generating such convolution masks. These pro-
grammable masks are called adaptive masks and could be generated to
produce optimal performance with respect to texture recognition or texture
discrimination. The applications include the assessment of surface inho-
mogeneity as well as defect classification.
It is suggested that direct use of texture ratings via adaptive masks be
made in conjunction with manual ratings and diagnoses to determine if the
intermediate stage of detailed computation of the surface elements is entirely
necessary.
Since this exploratory work was completed in 1987, the commercial avail-
ability of practical dedicated neural-network engines inside IBM PCs has
emerged (from de Anza, SAIC, and AI Ware amongst others), and the
task defined matches the capacities of such systems very well. Neural-net-
work systems are still of restricted capacity, but have already proved to be
very effective (and extremely fast) in image-recognition applications. The
training task is also better managed than in most other image-reduction
techniques. It would therefore be well worth carrying out a further investigation
applying neural-network systems to the tasks of road-surface-feature
identification, classification, and characterization.
The direct links between road-surface features and remedial or diagnostic
actions will require the integration of any feature- or surface-rating system
into a decision-making tool at some point, and it is this stage that now
justifies closer attention. Neural-network methods have much to offer in
this area although the fundamental problems of speed and training (ad-
dressed here by adaptive mask generation) still leave much to be desired.
used in this experiment, and Figs. 6, 7, and 8 the images constructed by the
system, displayed here with the scan axis vertical. The apparent longitudinal
shortening of the models in the constructed images is an artifact of the
clocking rate, but reference to Figs. 6 and 7 will show that the vertical
profile is correctly captured. The line-to-line and pixel-to-pixel intensity
variations that are evident at some places in the photographs have been
traced to radio frequency (RF) shielding deficiencies and residual power-supply
problems.
In actual application of the technique, the clocking rate would be computed
separately from a knowledge of vehicle speed for each vehicle
passing the scanning point. The aim would be to sample each vehicle at
some fixed interval along its length (for example, once each 100 mm), and
the reconstruction of the outline would not be necessary, because processing
of the outline character would be done at the data-capture stage and only
the speed and vehicle classification would be stored.
Vehicle speed can be measured very easily using a minor variant of this
line-scan technique. When the line-scan array is rotated so that it is parallel
to the plane of motion of the vehicle (i.e., so that the array strip is parallel
to the velocity vector of the vehicle), vehicle velocity is quite readily estimated.
Fig. 9 shows the trace resulting from the motion of the model down
the inclined plane. Suitably normalized, the slope of this trace can be used
to yield a direct estimate of vehicle speed. In Fig. 9 the front of the vehicle
traverses the 512 pixels of the linear array in about 160 scan lines (i.e., about
160 ms). By direct measurement, this distance of 512 pixels corresponds to
approximately 72 mm of the "road," giving an effective velocity of around
0.45 m/s. Some line-array cameras permit direct measurement of this "slope"
parameter without an intervening image-analysis system.

FIG. 8. Line-Scan Synthesis of Car Image
Once again, the capture of the actual image is not necessary, and any
field device would not have to replicate the imaging and slow-scan conversion
equipment used here to study the system. The limitations of this technique
are set by the speed at which the charge-coupled devices can be read
from the line array of the camera. This may debar the technique from very-high-resolution
acquisition, which would be required for number-plate recognition.
Vehicle speed, as gaged by the line-scan method just sketched or some
other suitable technique, is used to clock lateral line-scan image capture so
that the vehicle length of this lateral image can be captured and analyzed
once each 100 mm, say, to give a height reading for the vehicle at that point.
The resulting vector of height measurements forms the input to a trained
pattern classifier, which compares the vector to each vector of a set of height-vector
templates to determine which class the input vector belongs to (sedan,
truck, etc.).
This is a task at which neural-network methods excel. In particular, the
length of the vehicle may be determined from a knowledge of the vehicle's
speed and its height profile (the height output of the system will obviously
be zero immediately prior to the first nonnull scan, as well as immediately
after the last one; or, rather, the start of the vehicle will be signified by the
first nonnull scan and its end by a null scan).
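The template comparison described above can be sketched as nearest-neighbor classification of the height vector. The class names, template values, and sampling interval below are illustrative only, not taken from the experiment:

```python
import numpy as np

# Illustrative height-vector templates (metres), one reading per 100 mm
templates = {
    "sedan": np.array([0.8, 1.2, 1.4, 1.4, 1.3, 0.9]),
    "truck": np.array([2.5, 2.6, 2.6, 2.6, 2.6, 2.5]),
    "van":   np.array([1.8, 1.9, 1.9, 1.9, 1.9, 1.8]),
}

def classify(height_vector):
    """Assign the class whose template is nearest in Euclidean distance."""
    dists = {name: np.linalg.norm(height_vector - t)
             for name, t in templates.items()}
    return min(dists, key=dists.get)

# A measured height profile close to the sedan template
observed = np.array([0.9, 1.1, 1.4, 1.5, 1.2, 0.9])
label = classify(observed)
```

A trained neural network replaces the fixed distance measure with a learned decision boundary, but the input and output of the task are the same.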
Although the investigation was supported by a sophisticated image-anal-
ysis system in the experiment described here, a practical realization of the
technique would be based on direct electronic processing of the line-array-
output signal; an image analysis system per se would not be required.
On the basis of the quality of images obtained under the foregoing lab-
oratory conditions, it appears to be feasible to perform a complete classi-
fication of vehicles by a vertical-feature vector describing the height vari-
ations in profile on the basis of data derived from a suitably positioned line-
scan array. Furthermore, there seems to be no reason why the bulk of this
analysis should not be performed quite cheaply using dedicated electronic
hardware. It would now be worth building field-ready trial equipment.
The object of the next experiment was to gain insight into the application
of digital image region analysis to repetitive areal and linear measurement
in which precision is required. The specific example of this type was the
calibration and assessment of fine-particle sieves of the kind used in road
construction materials assessment. Such sieves are subject to deterioration,
and efficient calibration and recalibration processes for both sieves and the
ball bearings used for field checks would be highly desirable. A suite of
programs was developed to digitize an image of a set of particles of unknown
size thresholded to produce white regions against a dark background. The
image is then analyzed to provide particle x- and y-extents, areas, diameters
of the area-equivalent circle, and the population parameters of mean and
variance, with a histogram of the areas computed.
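The region analysis just described can be sketched with a simple 4-connected flood fill over the thresholded image, followed by per-particle measurements (a minimal standard-library Python version; the proprietary programs actually used are not reproduced here):

```python
import math
from collections import deque

def label_regions(binary):
    """4-connected component labeling of a thresholded (0/1) image."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                # Flood fill one particle
                q, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                regions.append(pixels)
    return regions

def measure(pixels):
    """x/y extents, area, and diameter of the area-equivalent circle."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    area = len(pixels)
    return {
        "x_extent": max(xs) - min(xs) + 1,
        "y_extent": max(ys) - min(ys) + 1,
        "area": area,
        "equiv_diameter": 2 * math.sqrt(area / math.pi),
    }

# Two white particles against a dark background
img = [[0, 0, 0, 0, 0, 0],
       [0, 1, 1, 0, 0, 0],
       [0, 1, 1, 0, 1, 0],
       [0, 0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0, 0]]
stats = [measure(r) for r in label_regions(img)]
```

The population mean, variance, and area histogram then follow directly from the list of per-particle areas.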
The application of this technique to measurement of sieve characteristics
(as used for road materials standardization) is straightforward. Computing
time for a grid of cells was on the order of several minutes on a small 8086
PC used with an Imaging Technology PC-Vision image digitizer and frame
store and a Dage MT165 (black-and-white) camera mounted on a standard
Polaroid lighting base modified to permit back-illumination of samples and
dimmer control of light levels. Analysis of the results showed that the edge
distortions would require special attention in an applied system, but that
the central areas were well dealt with by the imaging system. Refinement
would present only developmental, rather than basic, research problems.
These and many other mensuration problems are now economically ad-
dressable using personal computer (PC) based equipment, but are not really
a subject for research (although the development efforts may be more de-
manding than this would seem to suggest).
AUTOMATED DETERMINATION OF MOTOR VEHICLE REGISTRATION
NUMBERS
FIG. Immediacy of identification of literature sources in imaging: citations plotted against years after host publication date
images was the small step taken as part of this investigation project. The
reconstructions show what information is contained in the moments of var-
ious orders.
Hu's (1962) Cartesian moments are computationally difficult both at high
orders and in image reconstruction. A new set of (Legendre) moments was
defined that does not suffer these problems (Newlands 1985; Wigan and
Cullinan 1987); they are easily extended to higher orders and straightforward
to use to reconstruct an image.
For each object, the list of image points is processed to extract the centroid
coordinates of the object and then again to calculate the moments relative
to the centroid, to obtain moments with translational invariance. For small
pictures it is found that moments up to order seven are sufficient for good-quality
reconstructions (less than 3% error) and that moments up to order
11 will enhance the edges of the image (less than 0.5% error). The higher-order
moments appear to be responsible for the better definition of corners
in the image. There are disparities between the present work and results
published by others in this field. The availability of cheap, robust imaging
devices for the field capable of direct moment computation suggests that
this approach could well become very effective.
A detailed review of moment invariance is given elsewhere (Wigan 1985;
Wigan and Cullinan 1987). Particularly relevant work is reported by Hu
(1962), Maitra (1979), Hsia (1981), and Yin and Mack (1981). Boyce et al.
applied Zernike moment invariants to a picture of a model vehicle using a
64 × 64 pixel array with 256 gray levels. The picture was then rotated in
10° steps up to 90°, and a number of dilations and contrast changes were
carried out. The values of the invariants varied within an acceptable range,
and the image could be reconstructed to an accuracy of better than 90% using
invariants up to the eighth order, but the visual image was still almost
unrecognizable. A substantial subjective improvement was gained by using
moments up to order 20 even though the reconstruction error was still 6%.
As few as two invariants can be used to classify objects effectively (e.g., a
subset of the alphabet); more are normally used. Cartesian invariants perform
better on a silhouette than on variable-intensity images, suggesting that
the internal detail in objects is not well represented by these moments.
Teague (1980) reconstructed the letters E and F using a 21 x 21 pixel
array. Newlands (1985) revised the basis of these functions to use Legendre
polynomials, which are both orthogonal and simple to calculate. The results
were used to assess the data compression and discrimination available using
this moment-invariant method. A recursive specification can be based on
the following equations:

P0(x) = 1 .................................................. (4)

P1(x) = x .................................................. (5)

(n + 1)Pn+1(x) = (2n + 1)xPn(x) - nPn-1(x) ................. (6)
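The three-term recursion of Eqs. 4-6 translates directly into code (a minimal sketch):

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the recursion
    (n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) - n P_{n-1}(x)."""
    p_prev, p = 1.0, x            # P_0 and P_1
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

Because each step reuses only the previous two values, the polynomials are cheap to extend to the higher orders needed for edge-enhancing reconstructions.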
Cartesian (Cjk) and Legendre (Lmn) moments have a simple relationship

Lmn = Σ (j = 0 to m) Σ (k = 0 to n) Mjk Cjk .................. (7)
ACKNOWLEDGMENTS
The work reported here drew from a number of different projects pursued
under this joint program. Those at Deakin were under the supervision of
Michael Cullinan. The discussion of road-surface textures and defects was
drawn from material developed toward a Ph.D. thesis to be submitted by
Kurt Benke. The application of moments to number recognition was drawn
from a B.Sc. honors thesis by Douglas Newlands. The sieve mensuration
results were derived using proprietary programs written by Imaging Application
Pty Ltd. and equipment from a wide range of sources including the
writer and Deakin and Melbourne Universities. Subsequent to the specification
defined here, the selective video trigger system was built by John
Dods at ARRB.
REFERENCES
Marr, D. (1982). Vision. W. H. Freeman and Co., San Francisco, Calif.
McClelland, J. L., and Rumelhart, D. E. (1988). Explorations in parallel distributed
processing. MIT Press, Cambridge, Mass.
Newlands, D. A. (1985). "Image reconstruction from moment series," thesis pre-
sented to Deakin University, Geelong, Australia, in partial fulfillment of the re-
quirements for the degree of Bachelor of Science.
Pentland, A. P. (1984). "Fractal-based description of natural scenes." IEEE Trans.
Pattern Analysis and Machine Intelligence, PAMI-6, 661-674.
Teague, M. R. (1980). "Image analysis via the general theory of moments." J. Optical
Soc. America, 70(8), 920-930.
Triendl, E. E. (1972). "Automatic terrain mapping by texture recognition." Proc.,
8th Int. Symp. on Remote Sensing of Environment, Environmental Research Institute
of Michigan, Ann Arbor, Mich.
Van Gool, L., Dewaele, P., and Oosterlinck, A. (1985). "Texture analysis Anno
1983." Computer Vision, Graphics and Image Processing, (29), 336-357.
Vickers, A. L., and Modestino, J. W. (1982). "A maximum-likelihood approach to
texture classification." IEEE Trans. Pattern Analysis and Machine Intelligence,
PAMI-4(1), 61-68.
Wechsler, H., and Kidode, M. (1980). "A random walk procedure for texture
discrimination." IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-1(3),
272-280.
Wermser, D., and Liedtke, C. E. (1982). "Texture analysis using a model of the
visual system." Proc., 6th Int. Conf. on Pattern Recognition, Munich, Germany,
1078-1080.
Wigan, M. R. (1983). "Information technology and transport: Research proposals."
Technical Note TN 126, Institute for Transport Studies, University of Leeds, England.
Wigan, M. R. (1985a). "Image processing for roads: An on-line literature review and
text data base assessment." Australian Road Res., Melbourne, Australia, 15(1), 50-55.
Wigan, M. R. (1985b). "The potential for computer based data logging and interpretation."
Proc., Pavement Management Systems Workshop, Australian Road
Research Board, Vermont, Australia, 65-71.
Wigan, M. R. (1986). "Selective road and traffic data acquisition (SDA) for roads
and traffic and video capture control." Internal Report AIR 413-3, Australian Road
Research Board, Vermont, Australia.
Wigan, M. R., and Cullinan, M. C. (1984). "Machine vision for road research: New
tasks and old problems." Proc., 12th ARRB Conf., Vermont, Australia, 12(4),
76-86.
Wigan, M. R., and Cullinan, M. C. (1986). "Digital image processing: An applications
review for road research applications." Proc., 2nd AUSGRAPH Conf.,
Australian Computer Graphics Association, Melbourne, Australia, 57-60.
Wigan, M. R., and Cullinan, M. C. (1987). "Image processing applied to roads:
Surface texture, mensuration, vehicle shape and number detection." Res. Report
ARR 145, Australian Road Research Board, Vermont, Australia.
Yin, B. H., and Mack, H. (1981). "Target classification algorithms for video and
forward looking infrared (FLIR) imaging." Soc. Photo-Optical Instrument Engineers,
(302), 134-140.