Automated System for Detection and Classification of Leather Defects
Antonella Branca
Maria Tafuri
Giovanni Attolico
Arcangelo Distante
Consiglio Nazionale delle Ricerche
Istituto Elaborazione Segnali ed Immagini
Via Amendola 166/5
70126 Bari, Italy
E-mail: attolico@iesi.ba.cnr.it

Abstract. A leather inspection system based on visual textural properties of the material surface is presented. Defects are isolated from the complex and nonhomogeneous background by analyzing their oriented structure. The patterns to be analyzed are represented in an appropriate parameter space using a neural network; in this way, a parameter vector is associated to each different textured region in the original image. Finally a filter process, based on knowledge about the parameter vectors representing the leather without defects, detects and classifies any abnormality. The resulting system is flexible and does not depend on dimensions, structure, and color of defects. © 1996 Society of Photo-Optical Instrumentation Engineers.
Opt. Eng. 35(12), 3485–3494 (December 1996). 0091-3286/96/$6.00 © 1996 Society of Photo-Optical Instrumentation Engineers.
criterion for determining the final coefficients.

This optimization is efficiently performed through a neural network initialized to recognize a limited number of patterns (the basis vector fields). The parameters provided by the neural network are able to identify a broader family of patterns by analyzing the estimates of their projection coefficients.

In Sec. 2 oriented field modeling and a method for its computation are presented. Section 3 describes the parameter space modeling and the neural network implementation. The problem of defect detection and classification on leather is treated in Sec. 4. Finally, Sec. 5 shows experimental results obtained on several defects on different kinds of leather.

2 Oriented Flow Field Modeling

The first step in the proposed analysis of oriented textures is the estimation of the associated orientation field, based on the gradient of Gaussian filtering of the textured image (Ref. 16). The orientation field of an image I(x,y) comprises the angle image and the coherence image. The former (representing the dominant local orientation) is computed over

the vector fields associated to defects can be defined. The coefficients of these projections are the parameters by means of which texture classification can be performed.

To represent a vector V(x,y), either exactly or in some optimal sense, by projecting it onto a chosen set of vectors {ψ_i(x,y)}, we must find the projection coefficients c_i making the linear combination of the basis H(x,y) either identical to or with minimum distance from V(x,y). The desired set of coefficients is determined by minimizing the squared norm of the difference vector:

E = || V(x,y) − H(x,y) ||².  (3)

This quadratic error function E reaches its global minimum only when its partial derivatives with respect to any of the M coefficients c_i equal 0. Satisfying this condition for each c_i generates a system of M simultaneous equations in M unknowns:

∑_(x,y) V(x,y) · ψ_i(x,y) = ∑_(x,y) [ ∑_{k=1}^{M} c_k ψ_k(x,y) ] · ψ_i(x,y).  (4)
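The stationarity condition of Eq. (4) is an ordinary linear system in the coefficients c_i. As a rough sketch (not the paper's implementation: the array shapes, function names, and the iterative step size are assumptions), the system can be solved directly, or approached by repeated corrections of an initial zero estimate:

```python
import numpy as np

def projection_coefficients(V, psi):
    """Solve Eq. (4) directly.

    V:   (H, W, 2) vector field to approximate (assumed layout)
    psi: (M, H, W, 2) basis vector fields
    """
    G = np.einsum('khwc,ihwc->ik', psi, psi)  # Gram matrix of the basis
    b = np.einsum('hwc,ihwc->i', V, psi)      # projections of V on the basis
    return np.linalg.solve(G, b)              # stationary point of Eq. (3)

def projection_coefficients_iterative(V, psi, eta=0.5, iters=400):
    """Reach the same coefficients iteratively: each step moves c_i by the
    difference between the feed-forward term <V, psi_i> and the feedback
    term <sum_k c_k psi_k, psi_i> (eta is an assumed step size)."""
    c = np.zeros(psi.shape[0])                # coefficients initialized to 0
    b = np.einsum('hwc,ihwc->i', V, psi)      # fixed feed-forward term
    for _ in range(iters):
        H = np.einsum('k,khwc->hwc', c, psi)  # current linear combination
        c += eta * (b - np.einsum('hwc,ihwc->i', H, psi))
    return c
```

With an orthonormal basis the Gram matrix is the identity, and a single correction with eta = 1 already yields the exact coefficients.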
vectors of the corresponding basis field. The output of this first layer, the dot products between the input field and the basis fields, is fed to the second layer.

At the same time, the fourth layer computes the dot products between the linear combination of the basis vectors (evaluated by the third layer using the current estimate of {c_i}_{i=1,...,M}) and the basis fields. Also these dot products are fed back to the second layer, which, comparing its inputs, evaluates the updated estimate of the c_i.

The weights on this second layer, initialized to 0 and updated iteratively by an amount Δ_i depending on the difference between the feed-forward signal (the level of activity of the first layer) and the feedback signal (the level of activity of the fourth layer), provide, at the stable state, the desired optimal coefficients representing the texture parameters of the considered image. On 8×8 windows, the convergence requires 300 to 400 iterations (with a time on the order of 10⁻³ s). The window size must be chosen as a trade-off between accuracy (4×4 windows produce untrustable results) and computational load (16×16 windows require more than 2000 iterations).

The weight adjustment rule is

Δ_i = −(1/2) ∂E/∂c_i = ∑_(x,y) V(x,y) · ψ_i(x,y) − ∑_(x,y) [ ∑_{k=1}^{M} c_k ψ_k(x,y) ] · ψ_i(x,y).  (5)

A region-growing method has been designed that attempts to extend seed patches if their parameters are able to describe the textural contents of their neighborhoods. The reference system is kept centered on the seed patch during all the growing trials. This provides a meaningful estimation of textural similarity and better final results.

The region-growing algorithm is quite simple. It starts from a temporarily classified patch (seed patch) and considers a set of surrounding patches (having the same dimension of the seed patch and the center lying on its border) that still need to be classified. They are allowed to overlap no more than half of their size. The orientation field in each patch is compared with the linear combination of the basis field vectors (expressed with respect to the reference system of the seed patch) weighted using the texture parameters of the seed patch. A single network iteration allows the merging decision, based on the difference between the orientation field of the patch and its approximation (evaluated with respect to the reference system and the texture parameters of the seed patch). During each patch-growing process the weights of the second layer are kept fixed to the values c_i estimated by classifying the seed patch. The number of patches in the final region is used for judging the correctness of texture parameters. In fact, wrong classifications do not expand. In those cases, the seed patch remains unclassified, waiting for its subsequent inclusion into another region. In the case of successful expansion, all the patches in the region are marked as classified and excluded from any further processing.

This extension policy increases the computational efficiency of the process. Testing the validity of a parameter vector in a neighboring patch requires a single iteration of the neural net instead of the 300 to 400 iterations required for solving Eq. (4). Furthermore, it accounts for the misclassification that a local analysis of orientation fields could introduce, by keeping the reference system used for the basis vector fields fixed onto the seed patch during all the growing process.

ψ₅(x,y) = (−x, y).  (11)

Essentially the whole inspection process can be divided into two stages.

4.1 Off-Line Learning

Each different kind of leather requires [see Fig. 2(a)] the determination of, first, the optimal values for the
Fig. 2 Defect detection process: (a) for each kind of defect-free leather, small patterns are processed to determine the related BCV and (b) a generic pattern is processed and its map of coefficient vectors is matched with the associated BCV to determine the map of defects.
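The matching step sketched in Fig. 2(b) reduces to comparing, pixel by pixel, the computed coefficient map with the BCV. A minimal illustration (the array shapes, names, and tolerance are assumptions, not the paper's code):

```python
import numpy as np

def defect_map(C, bcv, tau=1e-3):
    """Compare the computed map of coefficient vectors with the BCV.

    C:   (H, W, M) coefficient vector estimated at each pixel (assumed shape)
    bcv: (M,) coefficient vector of the defect-free leather
    tau: assumed tolerance; the difference map should be null (up to noise)
         outside defective areas
    """
    D = np.abs(C - bcv)                      # per-coefficient difference map
    mask = np.linalg.norm(D, axis=-1) > tau  # pixels flagged as defective
    return D, mask
```

On a synthetic map equal to the BCV everywhere except a perturbed patch, only that patch is flagged.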
Fig. 3 Lash: (a) original image, (b) image of coherence, (c) orientation field, and (d) linear combination of the basis vectors using the coefficients estimated by the neural network.
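The coherence and orientation-field images shown in Fig. 3 follow from the gradient of Gaussian estimation outlined in Sec. 2 (Ref. 16). A pure-numpy sketch using a standard double-angle averaging; sigma, the window size, and the function names are assumptions, not necessarily the authors' exact formulation:

```python
import numpy as np

def _filter1d(a, kernel, axis):
    # Separable 1-D convolution along one axis ('same' length output)
    return np.apply_along_axis(lambda m: np.convolve(m, kernel, mode='same'), axis, a)

def orientation_field(image, sigma=2.0, win=8):
    # Gaussian smoothing kernel (truncated at 3 sigma)
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    g /= g.sum()
    smooth = _filter1d(_filter1d(image, g, 0), g, 1)
    gy, gx = np.gradient(smooth)          # gradient of the Gaussian-filtered image
    box = np.ones(win) / win              # local averaging window (assumed size)
    avg = lambda a: _filter1d(_filter1d(a, box, 0), box, 1)
    j11 = avg(gx * gx - gy * gy)          # double-angle representation of the
    j12 = avg(2.0 * gx * gy)              # gradient directions, averaged locally
    energy = avg(gx * gx + gy * gy)
    # Dominant local orientation (perpendicular to the mean gradient direction)
    angle = 0.5 * np.arctan2(j12, j11) + np.pi / 2.0
    # Coherence in [0, 1]: high where the gradient directions agree
    coherence = np.sqrt(j11**2 + j12**2) / (energy + 1e-12)
    return angle, coherence
```

On a vertically striped test pattern the interior coherence is close to 1 and the angle image is close to π/2, matching the bright, strongly oriented regions described for Fig. 3.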
{C_i(x,y)}_{i=1,...,M} to each pixel (x,y) of the image. We call this map the computed map. The difference between the {C̄_i} (the BCV associated to the leather without defects) and the {C_i(x,y)} (computed coefficients) produces a new map {D_i(x,y)}_{i=1,...,M} (defect map) of 5-D coefficient vectors, which are not null only for pixels in defective areas:

D_i(x,y) = || C_i(x,y) − C̄_i ||.  (12)

This filter process is able to isolate defects from the background.

5 Experimental Results

For our experiments, we analyzed images (512×512) acquired from leather samples of about 10 to 15 × 10 to 15 cm using the same geometrical and lighting setup (the camera was held perpendicular to the leather sample and the light source was at about 45 deg from the optical axis). The images were digitized using a COHU black-and-white camera.

Several samples were considered with different kinds of defects. Their names are related to the events, which often occurred during the animal's life, that caused them. Each defect produces a different oriented pattern that the proposed method is able to separate from the textural structure of the background.

The time required for preprocessing an image is about 1.5 s for the Gaussian convolution and about 10 s for the orientation field estimation. They are currently accomplished in software (without optimization). Being standard operations for which special boards can be used, they are not expected to pose timing problems in engineering the final system.

Figure 3 shows, for one of the samples, the original gray-level image, the computed coherence image ρ(x,y), the associated orientation field V(x,y), and finally the linear combination of the basis vectors using the coefficients estimated by the neural network. The bright areas in the coherence images correspond to regions that exhibit oriented textures and therefore a stronger accordance among gradient vectors. In the same figure, it is possible to appreciate the expressiveness of the coefficients in describing the oriented structures characterizing the defective areas.

Figures 4 through 12 show the results obtained on the other samples, juxtaposing the original textured image and the vector field obtained as a linear combination of the chosen basis vectors using the coefficients estimated by the proposed system. In all these cases, the basis vectors provide an effective tool for describing the oriented textural patterns associated to defective areas.

Fig. 4 Fold: (a) first original image, (b) linear combination of the basis vectors using the coefficients estimated by the neural network for (a), (c) second original image, and (d) same as (b) but for (c).

6 Conclusions and Future Work

The technique presented in this paper is able to analyze oriented texture in an efficient way, enabling defect detection and classification in all contexts where such oriented structures are present. It has proved to be effective also on other materials (Ref. 17) (wood, ferromagnetic materials, etc.). It is especially suited for the analysis of natural materials exhibiting complex and nonhomogeneous visual appearances.

It seems promising for on-line automated inspection since the most computationally expensive task (finding the optimal values for the sab parameters and determining the BCV) is made once as an off-line training phase. Therefore the best setup for each particular kind of leather can be recalled every time such kinds of material need to be employed in the manufacturing process. Nevertheless further
Fig. 5 Fold: (a) first original image, (b) linear combination of the basis vectors using the coefficients estimated by the neural network for (a), (c) second original image, and (d) same as (b) but for (c).

Fig. 6 Cut: (a) original image and (b) linear combination of the basis vectors using the coefficients estimated by the neural network.

Fig. 7 Scar: (a) original image and (b) linear combination of the basis vectors using the coefficients estimated by the neural network.

Fig. 8 Gored: (a) first original image, (b) linear combination of the basis vectors using the coefficients estimated by the neural network for (a), (c) second original image, and (d) same as (b) but for (c).

Fig. 9 Wart: (a) original image and (b) linear combination of the basis vectors using the coefficients estimated by the neural network.

Fig. 10 Eczema: (a) original image and (b) linear combination of the basis vectors using the coefficients estimated by the neural network.

Fig. 11 Stretch mark: (a) original image and (b) linear combination of the basis vectors using the coefficients estimated by the neural network.

Fig. 12 Nonuniform color: (a) original image and (b) linear combination of the basis vectors using the coefficients estimated by the neural network.
investigations are in progress to fix all the parameters in an automatic or semiautomatic way.

Current research is devoted also to the optimization of the time required by the on-line process for dealing with the large pieces of leather (up to 5 m²) often involved in manufacturing processes. This activity will involve:

Acknowledgments

This work was partially supported by a grant from the National Council of Research project on Innovative Automatic Systems for Industrial Inspection.

References

1. S. M. Sokolov and A. S. Treskunov, "Automatic vision system for final test of liquid crystal displays," in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 1578–1582, Nice (1992).
2. H. A. Beyer, "Automated dimensional inspection of cars in crash test with digital photogrammetry," in Industrial Vision Metrology, Proc. SPIE 1526, 134–141 (1991).
3. K. Ratcliff, A. Harris, C. Allen, and R. Bicker, "3-dimensional datuming and inspection of mechanical parts using wire-frame modeling," Proc. SPIE 1197, 230–236 (1989).
4. W. Xian, Y. Zhang, Z. Tu, and E. L. Hall, "Unrolled problem of rotary objects and automatic visual inspection of appearance defects of bearing rollers," Proc. SPIE 1004, 69–76 (1988).
5. J. Wilder, "Finding and evaluating defects in glass," in Machine Vision for Inspection and Measurement, H. Freeman, Ed., pp. 237–255, Academic Press, New York (1989).
6. O. Laligant, F. Truchetet, and E. Fauvet, "Wavelets transform in artificial vision inspection of threading," in Proc. IEEE Int. Conf. on Industrial Electronics, Control, and Instrumentation (IECON) '93, pp. 513–518 (1993).
7. J. Olsson and S. Gruber, "Web process inspection using neural classification of scattering light," in Proc. IEEE Int. Conf. on Industrial Electronics, Control, Instrumentation and Automation (IECON) '92, pp. 1443–1448, San Diego (1992).
8. Y. Marutani, "Inspection of the silk crapes by image processing," in Proc. Japan/USA Symp. on Flexible Automation, pp. 1219–1224, San Francisco (1992).
9. R. W. Conners, C. W. McMillin, K. Lin, and R. E. Vasques-Espinosa, "Identifying and locating surface defects in wood: part of an automated lumber processing system," IEEE Trans. Pattern Anal. Mach. Intell. 5, 573–583 (1983).
10. C. Fernandez, C. Platero, P. Campoy, and R. Arcil, "Vision system for on-line surface inspection in aluminum casting process," in Proc. IEEE Int. Conf. on Industrial Electronics, Control, and Instrumentation (IECON) '93, pp. 1854–1859 (1993).
11. P. Dewaele, P. Van Gool, and A. Oosterlinck, "Texture inspection with self-adaptive convolution filters," in Proc. 9th Int. Conf. on Pattern Recognition, pp. 56–60, Rome (Nov. 1988).
12. D. Chetverikov, "Detecting defects in texture," in Proc. 9th Int. Conf. on Pattern Recognition, pp. 61–63, Rome (Nov. 1988).
13. J. Chen and A. K. Jain, "A structural approach to identify defects in textured images," in Proc. IEEE Int. Conf. on Systems, Man, and Cybernetics, pp. 29–32, Beijing (1988).
14. L. H. Siew, R. M. Hodgson, and E. J. Wood, "Texture measures for carpet wear assessment," IEEE Trans. Pattern Anal. Mach. Intell. 10, 92–105 (1988).
15. A. K. Jain, F. Farrokhnia, and D. H. Alman, "Texture analysis of automotive finishes," in Proc. of SME Machine Vision Applications Conf., pp. 1–16, Detroit (Nov. 1990).
16. R. Rao and B. G. Schunck, "Computing oriented texture fields," CVGIP: Graph. Models Image Process. 53, 157–185 (1991).
17. A. Branca, O. Quarta, W. Delaney, and A. Distante, "A neural network for defect classification in industrial inspection," Proc. SPIE 2423, 236–247 (1995).

Maria Tafuri received a degree in computer science from the University of Bari in 1994 and has since been a research associate at the Institute for Signal and Image Processing under a grant from the Italian National Research Council. Her interests include computer vision, artificial neural networks, and robotics.

Giovanni Attolico received a degree in computer science from the University of Bari in 1986. From 1986 to 1992 he was a research associate at the Institute for Signal and Image Processing on applications of computer vision to robotics. He was with Olivetti Ricerca for 2 years working on image acquisition, archiving, and processing applied to medicine. He is currently a researcher at the Institute for Signal and Image Processing working on image analysis and machine learning.

Arcangelo Distante received his degree in computer science from the University of Bari, Italy, in 1976. Until 1981 he was with INFN (National Nuclear Physics Institute) and was subsequently with IESI (Image and Signal Processing Institute) of the CNR (National Research Council). Currently he is the coordinator of the Robot Vision Group at IESI-CNR and directs the IESI Institute. His research interests are 3-D object reconstruction, representation of visual information and generation of 3-D modeling, shape representation for image understanding, vision for robotic navigation, and architectures for computer vision. Dr. Distante is a member of the IAPR.