
VISION-BASED DEFECT DETECTION ALGORITHM DEVELOPMENT FOR MAS PAD PRINTING MACHINE

Author: Sudeera Milakshan

University of Wolverhampton
School of Engineering

Award: BEng (Hons) Mechatronics Engineering


Student ID: 1932603
Presented in partial fulfilment of the assessment requirements for the above
Award
Project Undertaken During: Semesters 1 & 2 2021/2022
University Supervisor: Mr. Randeera Liyanage
Credit Rating of Project: 40
Mode of Attendance: FT

Grade Awarded: ……………………………………….


Signed: ……………………………………….

1|Page
Declaration

This work or any part thereof has not previously been presented in any form to the University or
any other institutional body whether for assessment or other purposes. Save for any express
acknowledgements, references, and bibliographies cited in the work, I confirm that the intellectual
content of the work is the result of my efforts and no other person’s.

Signature:
Date: 20th of September 2022

Abstract

Computer vision, digital image processing, and machine learning have a wide range of applications across applied disciplines and automated industrial processes. In the apparel and textile manufacturing industry, pad printing is one of the most important printing methods. This thesis presents a visual online detection approach that identifies pad printing defects and removes them from the manufacturing line in real time to assure label quality. Previously, the textile industry relied on manual human inspection to identify defects in the fabric production process; the primary disadvantages of manual inspection include lapses in focus, human weariness, and time consumption. This research applies AI-based machine learning algorithms and image processing techniques to identify problems in the pad printing process. The computer vision system checks the fabric surface for wrinkles and folds before the pad printing process starts, and checks the pad print for defects after the process is completed. Selecting the vision equipment, such as a camera and a central processing unit (CPU), and installing the computer vision system in the right place are further tasks of this research. The main purpose of this thesis is to develop image-processing-based machine learning algorithms for a vision-based defect detection system.

Keywords: machine learning, pad prints, defect detection, computer vision, computer vision equipment, image processing, wrinkles and folds

Acknowledgement

First and foremost, I would like to express my deepest gratitude and respect to my project supervisor, Mr. Randeera Liyanage, for his unwavering support, direction, and knowledge throughout this project, and to Miss Hansani Aravindya for her ongoing encouragement, inspiration, and support.

In addition, I must express my gratitude and respect to Mr. Amila Cabral, the automation manager and training supervisor of Ansell Lanka in Biyagama, who gave me tremendous support in completing this work successfully.

Finally, I would like to express my appreciation to the entire Faculty of Engineering and Technology at CINEC, as well as the project module leader at the University of Wolverhampton.

Table of Figures

Figure 1: Pad Printing Process. .............................................................................................................. 12


Figure 2: Defect-free pad print. ............................................................................................................... 13
Figure 3: Defective pad prints. ................................................................................................................ 13
Figure 4:The image classification mode's training procedure (Gafurov, A.N., et al., 2022). ......... 20
Figure 5: Detection example of the printing defect detector (Gafurov, A.N., et al., 2022). ........... 21
Figure 6: Schematic diagram of system light field design (Xu, B., et al., 2018).............................. 22
Figure 7: Character Extraction (Sato, K., Kan'no, H. and Ito, T., 1991). .......................................... 23
Figure 8: Segmentation of Character (Sato, K., Kan'no, H. and Ito, T., 1991)................................ 24
Figure 9: Warp and weft pattern of a plain weave (Vladimir, G., et al., 2019). ............................... 25
Figure 10: Threshold image (Vladimir, G., et al., 2019)...................................................................... 25
Figure 11: Detected defects (Vladimir, G., et al., 2019). .................................................................... 26
Figure 12: Process of Gas Turbine Anomaly Detection (Shisheng Zhong, et al., 2019) ............... 27
Figure 13: Selected Vision Camera. ...................................................................................................... 29
Figure 14: Algorithm Testing with Raspberry PI 4B. ........................................................................... 30
Figure 15: Algorithm testing with Nvidia Jetson Nano. ....................................................................... 31
Figure 16: Jetson Nano Developer Kit. ................................................................................................. 31
Figure 17: Jetson Nano Developer Kit with Waveshare case. .......................................................... 32
Figure 18: Isolation Forest Isolation structure. (Liu, F.T., et al., 2008)............................................. 34
Figure 19: Isolation Forest Anomaly Score contour (Liu, F.T., et al., 2012). ................................... 36
Figure 20: Program of Data Collecting. ................................................................................................. 37
Figure 21: Program of colour histogram extraction implementing. ................................................... 37
Figure 22: Original Image. ....................................................................................................................... 38
Figure 23: Colour Histogram of Image. ................................................................................................. 38
Figure 24: Program of Dataset Training. ............................................................................................... 39
Figure 25: Program of Real-Time Testing. ........................................................................................... 40
Figure 26: Camera setup for testing. ..................................................................................................... 41
Figure 27: Defect-free pad print (Anomaly Score: + 0.13172575). ................................................... 42
Figure 28: Defect-free pad print (Anomaly Score: - 0.16278505). .................................................... 42
Figure 29: Defect-free pad print (Anomaly Score: - 0.22621908). .................................................... 43
Figure 30: Defect-free pad print (Anomaly Score: - 0.22621908). .................................................... 43
Figure 31: Defect-free pad print (Anomaly Score: - 0.16278505). .................................................... 44
Figure 32: Defective pad print (5) ........................................................................................................... 45
Figure 33: Cropped pad print (5). ........................................................................................................... 45
Figure 34: Defect-free pad print but tested in different light conditions (2). ..................................... 46
Figure 35: Defect-free pad print in experiment 02 I. ............................................................................ 47
Figure 36: Defective pad print in experiment 02 II. .............................................................................. 47
Figure 37: Defective pad print in experiment 02 III.............................................................................. 48
Figure 38: Defective pad print in experiment 02 IV ............................................................................. 48
Figure 39: Defective pad print in experiment 02 V. ............................................................................. 49

Figure 40: Defective pad print in experiment 02 VI. ............................................................................ 49
Figure 41: Defective pad print in experiment 02 VII. ........................................................................... 50
Figure 42: Example of Pad Print that has only Characters. ............................................................... 51
Figure 43: Algorithm of Image Processing techniques in this method. ............................................ 52
Figure 44: Grayscale Image. ................................................................................................................... 52
Figure 45: Blurred Image. ........................................................................................................................ 52
Figure 46: Algorithm of OCR process and contour Drawing.............................................................. 53
Figure 47: Contour Detection of character............................................................................................ 53
Figure 48: Coordinate Comparing Program of this method. .............................................................. 54
Figure 49: Complete Program of OCR-Based Position Checking Method. ..................................... 55
Figure 50: Examples of characteristic Pad Prints I .............................................................................. 56
Figure 51: Results of OCR-based position checking method I .......................................................... 56
Figure 52: Examples of characteristic Pad Prints II. ........................................................................... 57
Figure 53: Results of OCR-based position checking method II. ....................................................... 57
Figure 54: Examples of characteristic Pad Prints III. .......................................................................... 58
Figure 55: Results of OCR-based position checking method III. ...................................................... 58
Figure 56: MAKESENSE.AI Online Tool. ............................................................................. 59
Figure 57: Installed Library Packages in Object Detection Method. ................................................. 60
Figure 58: The Program of Image Augmentation. ............................................................................... 60
Figure 59: The Program of Training Process. ...................................................................................... 61
Figure 60: The Program of testing. ........................................................................................................ 61
Figure 61: Object Detection Result. ...................................................................................................... 62
Figure 62: Reference Coordinate. .......................................................................................................... 62
Figure 63: Experiment of Position Detection. ....................................................................................... 63
Figure 64: Result of the Experiment. ..................................................................................................... 63
Figure 65: Example of fold on the fabric. ................................................................................ 64
Figure 66: Example of wrinkles on the fabric. ........................................................................ 64
Figure 67: Example of a good fabric surface........................................................................................ 64
Figure 68: Testing results for good fabric surfaces. ............................................................................ 65
Figure 69: Testing results for fold fabric surfaces................................................................................ 66
Figure 70: Testing results for wrinkled fabric surfaces. ...................................................................... 66
Figure 71: Camera mount of final implementation. ............................................................... 67
Figure 72: The place where the final implementation was performed. ............................................. 68
Figure 73: Experiment 01 result I. .......................................................................................................... 71
Figure 74: Experiment 01 result II. ......................................................................................................... 72
Figure 75: Experiment 01 result III ......................................................................................................... 73
Figure 76: Experiment 02 result I. .......................................................................................................... 74
Figure 77: Experiment 03 result I. .......................................................................................................... 75
Figure 78: Experiment 01 result II .......................................................................................................... 76
Figure 79: Testing results with white balancing issue. ........................................................................ 77
Figure 80: Testing results without white balancing issue. .................................................................. 78
Figure 81: The program used for fixed camera parameters. ............................................................. 78
Figure 82: The source code of converting RGB image to grayscale. ............................................... 79

Figure 83: Undetected defective pad print. ............................................................................ 80
Figure 84: Cropped Image. ..................................................................................................................... 80
Figure 85: Enclosed computer vision system ....................................................................................... 81

Tables

Table 1: Parameter values of Isolation forest ....................................................................................... 28


Table 2: Gantt Chart. ................................................................................................................................ 82
Table 3: Cost Calculation of Project. ..................................................................................................... 83

Table of Contents

Abstract ......................................................................................................................................... 3
Acknowledgement ......................................................................................................................... 4
Table of Figures ............................................................................................................................ 5
NOTATIONS .................................................................................................................... 10
1. INTRODUCTION .................................................................................................................. 11
1.1 Background ................................................................................................................... 11
1.2 Problem Identification.................................................................................................... 12
1.3 Objectives ..................................................................................................................... 15
1.4 Limitations ..................................................................................................................... 16
1.4.1 Camera Mounting (Position & Angle) ........................................................................ 16
1.5 Methodology ................................................................................................................. 17
2. LITERATURE REVIEW ........................................................................................................ 18
3. DEVELOPMENT OF A SUITABLE IMAGE ACQUISITION SYSTEM.................................. 29
3.1 Camara selection .......................................................................................................... 29
3.2 CPU selection ............................................................................................................... 30
4. MODELING, DEVELOPMENT AND TESTING THE ALGORITHM ..................................... 33
4.1 Algorithm for Detecting the defective pad prints ........................................................... 33
Anomaly detection (Isolation Forest) .................................................................................... 33
Mathematical Modeling of isolation Forest Concept............................................................. 33
Development of algorithm in python ..................................................................................... 36
Testing and Results: ............................................................................................................ 41
Problems Encountered During Testing and Their Solutions ................................................ 44
4.2 Algorithm for checking the position of pad prints .......................................................... 51
OCR-Based Position Checking Method ............................................................................... 51
Testing and Results of OCR-based position checking method: ........................................... 56
Object Detection-Based Position Checking Method............................................................. 59
Testing and Results of Object detection-based position checking method .......................... 63
4.3 Algorithm of Identifying Folds and Wrinkles .................................................................. 64
Testing and Results of Folds and Wrinkles Identifying Algorithm ........................................ 65
5. FINAL IMPLEMENTATION AND RESULTS ........................................................................ 67
Results ..................................................................................................................................... 70
Experiment 01 ...................................................................................................................... 71
Experiment 02 ...................................................................................................................... 74

Experiment 03 ...................................................................................................................... 75
6. DISCUSSIONS .................................................................................................................... 77
7. PROJECT MANAGEMENT .................................................................................... 82
7.1 Gantt Chart ........................................................................................................................ 82
7.2 Cost calculation for the project........................................................................................... 83
8. CONCLUSIONS .................................................................................................................... 84
9. FURTHER DEVELOPMENT ................................................................................................... 85
10. APPENDIX ........................................................................................................................... 86
References .................................................................................................................................. 90

NOTATIONS

AI - Artificial Intelligence

GUI - Graphical User Interface

OCR - Optical Character Recognition

ICR - Intelligent Character Recognition

DNN - Deep Neural Networks

IOU - Intersection Over Union

MFC - Mean-field Control

CNN - Convolutional Neural Network

SVM - Support Vector Machine

CPU - Central Processing Unit

RANSAC - Random Sample Consensus

RBF - Radial Basis Function

FOV - Field of View

GPU - Graphics Processing Unit

PLC - Programmable Logic Controller

GPIO - General Purpose Input Output

1. INTRODUCTION

1.1 Background

The purpose of computer vision is to transfer into computers the human abilities of sensing data, analysing it, and taking action based on prior and present outcomes. Computer vision is a field of artificial intelligence (AI) that enables computers and systems to extract meaningful information from images, videos, and other visual sources, and to take actions or make suggestions based on that information. If artificial intelligence allows computers to think, computer vision enables them to sense, analyse, and learn from data captured through a camera with image processing.

Computer vision is similar to human vision in several ways. Human vision, however, is remarkable in its ability to tell objects apart, judge how far away they are, notice whether they are moving, and spot when something in a scene is wrong. Computer vision systems train machines to perform similar jobs with high accuracy, using suitable camera equipment, collected data, and efficient algorithms, in far less time than human vision requires. The right machine learning algorithm, good training datasets, suitable ambient light conditions, and well-performing equipment such as processing units, cameras, sensors, and connecting cables lead to successful computer vision projects.

Automatic surface defect detection methods using computer vision systems are widely utilized across fields including semiconductor production, electronic component manufacturing, textile manufacturing, and print manufacturing. Defect identification is critical in the printing business, especially for assuring the quality of the printed output.

This research focuses on how to identify defects in pad printing in the apparel industry using computer vision technology. The conventional technique of defect identification still depends on human visual observation, which has many disadvantages, such as slower speed, higher cost, and subjective instability. Therefore, computer vision-based defect identification systems are in high demand in the industry.

1.2 Problem Identification

The pad printing process will be discussed step by step below.

Figure 1: Pad Printing Process.

• The engraved cells are filled with ink and the excess is removed from the surface of the printing form by blading (a).
• A soft silicone pad is pressed against the printing form (b).
• Next, the pad is lifted from the printing form, picking up the ink from the cells (c).
• Meanwhile, the cells of the printing form are filled with ink again (d).
• The ink is then deposited on the substrate's surface (e).
• Finally, the pad is lifted from the substrate's surface (f).

This whole process takes less than a second. Three basic types of defects commonly arise in this fast procedure:

• An incomplete print or a print with many voids
• Distorted or blurred prints
• Excess ink or dirt outside of the desired image on a pad print

Some example images of good pad prints and defective pad prints are shown below.

Figure 2: Defect-free pad print.

Figure 3: Defective pad prints.

The task of this computer vision project is to decide whether a pad print is defect-free or defective before the next pad printing cycle starts. To do this, after the pad printing process is completed, the vision camera should capture an image of the pad print and the machine learning algorithm should process it in less than half a second. The whole process should be highly accurate and high-speed. Furthermore, the machine learning algorithm should be able to highlight defects in the pad print in real time and emit an output signal to stop the operation. The pad print should also be printed in the same place on the surface, which is likewise checked before the next pad printing process starts.
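The capture-score-signal cycle described above, with its half-second budget, can be sketched as a simple timed inspection step. Everything here is illustrative: the `inspect_print` helper, the sign convention of the threshold, and the brightness-based `dummy_score` function are assumptions standing in for the trained model developed later in this thesis, not the actual implementation.

```python
import time
import numpy as np

def inspect_print(frame, score_fn, threshold=0.0):
    """Classify one captured pad print within the cycle-time budget.

    frame:     HxWx3 image array captured after the printing stroke.
    score_fn:  a scoring function (stand-in for the trained model);
               here, scores at or above `threshold` mean defect-free.
    Returns (is_defect_free, elapsed_seconds).
    """
    start = time.perf_counter()
    score = score_fn(frame)
    elapsed = time.perf_counter() - start
    return score >= threshold, elapsed

# Dummy scorer: mean brightness stands in for a trained model.
dummy_score = lambda img: img.mean() / 255.0 - 0.5

good = np.full((64, 64, 3), 200, dtype=np.uint8)  # bright, "clean" print
bad = np.full((64, 64, 3), 40, dtype=np.uint8)    # dark, "defective" print

ok, t = inspect_print(good, dummy_score)
print(bool(ok), f"{t * 1000:.3f} ms")  # decision plus processing time
```

In the real system the decision would additionally drive an output signal (e.g. a GPIO line on the Jetson Nano) to stop the machine; that wiring is omitted here.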

1.3 Objectives

1. Complete the collection of defective and defect-free pad prints for a machine learning training dataset.
2. Develop a machine learning algorithm that can obtain highly accurate results with a minimal training dataset.
3. Develop a deep learning algorithm to identify defects in pad prints in real time and provide an alert during operation if a defect is found.
4. Develop an image processing technique to make sure the pad print is printed in the right place on the surface.
5. Develop an algorithm to identify wrinkles and folds on the fabric panel before the pad printing process starts.
6. Run the machine learning algorithm developed on the desktop computer successfully on the NVIDIA Jetson Nano module.
7. Mount the vision camera and lights on the pad printing machine and test the system.

1.4 Limitations

1.4.1 Camera Mounting (Position & Angle)

With the attached pneumatic hoses, wires, and moving parts, installing a camera on a printing machine is difficult. Selecting a mounting position that gives a clear image while keeping the camera safe from the moving parts was therefore very challenging. As a result of these problems, the vision camera initially intended for this project could not be used because of its large form factor; a camera with a smaller form factor was selected instead.

1.4.2 Lighting

In computer vision projects, it is essential to keep the lighting conditions constant. This is commonly achieved by using flicker-free LED lights mounted with the camera. However, the limited size of the camera mounting area made it impossible to maintain the lighting conditions with such equipment.

1.4.3 Number of samples


Usually, a machine learning algorithm is trained on a large number of samples to improve its accuracy. Since the client (MAS Holdings) requested that the algorithm be trained with a limited number of samples, so that the system stays versatile across different pad print designs, the AI algorithm was optimized to be trained on a smaller number of samples.

1.4.4 Processing Power and Limitations of Resolutions.


When the prototype of this experiment was built, a desktop computer was assigned to train the dataset. With the higher processing power of the computer, the time taken to complete a training run was far shorter. However, to meet the industrial requirements, the dataset training process was carried out on an Nvidia Jetson Nano processor, which took more time to train on the samples than the desktop computer.

1.5 Methodology

1. Selection of a suitable image acquisition system
2. Modelling and development of the algorithm
3. Testing and results
4. Mechanical setup development
5. Testing and training the algorithm under realistic conditions
6. Graphical user interface (GUI) design

2. LITERATURE REVIEW

For almost 60 years, engineers and researchers have been attempting to develop methods for computers to understand and analyze visual signals (Manovich, L., 1996). In the early 1920s, one of the first applications of digital imaging was in the newspaper industry (Sarfraz, M., 2020). By the 1950s, digital computer technology was developing rapidly, and new image processing methods, methodologies, and techniques were being added to the field in various places around the world. At that time, image processing was primarily used to improve the quality of images, applying basic techniques such as compression, restoration, encoding, and enhancement (Sarfraz, M., 2020). Around the same time, the first computer image scanning technology, which allows computers to digitize and capture images, was invented (Wongsuphasawat, K., et al., 2017). Although the concept of artificial intelligence (AI) was introduced in the 1960s, it remained at the academic level due to the lack of computers advanced enough to run it. In 1974, optical character recognition (OCR) and intelligent character recognition (ICR) were invented (Mori, S., et al., 1992). Developed as a combination of image processing and AI, OCR and ICR technology was used for common applications such as document processing (Unoski, J., et al., 2000). David Marr, a neuroscientist, developed methods enabling computers to identify fundamental structures such as edges in 1982 (Viola, P. and Jones, M.J., 2004). By 2000, scientists and engineers researching image processing and AI had focused on feature-based object recognition methods, and by 2001 Paul Viola and Michael Jones were able to introduce the first real-time face detection system to the world (Viola, P. and Jones, M.J., 2004). From the 2000s onwards, technologically advanced cameras and computers emerged. Today, as a result of this research and technological development, vision-based defect identification systems are commonly used in industry. They are highly accurate and fast, which speeds up the manufacturing process and enhances product quality. The methodologies used to develop such systems, and their success, are described below in this section.

Erhu Zhan et al reported automatic defect detection in web printing (Jing, C., et al., 2019). The first
step in this research was template development, and the second was defect identification for the
printed image. Multiple defect-free samples were collected in the first stage and then aligned to
create a template of web print images that include the reference image, and bright and dark
images. The reference image is generated by taking the average of several aligned defect-free
images. The dark image and the bright image showed the highest and lowest pixel value
deviations, respectively, and serve as template images for additional defect detection when
combined with the reference image. In the second part of this defect identification system, the first
step was that the image processing algorithm initially cut the image to the size of the reference
image. Following that, the image was aligned with the reference image and compared with the
bright and dark images. A defect pixel was identified when a pixel value of the detected image is
not in the range of the bright and dark images. The methods presented in this research were tested
on a desktop computer with an Intel Core i3-2100 and running VC++6.0. After conducting

18 | P a g e
experiments that collected defect-free images of a different number of samples, it was concluded
that selecting 30 defect-free images is the best option for developing excellent template images.
This method took less than 160 milliseconds to detect the defects in each image and achieved
96.03 % accuracy. In image processing, this method is called template matching. In this study,
the defect identification method depended on the range of pixel values. That technique cannot
be used in our MAS vision-based pad printing defect identification project, because the location
of the pad printing machine may change, and the image brightness changes with the location. In
addition, the pad print design may also change with the design of the apparel, so the reference
template would have to be recreated from time to time. The most suitable approach for our
vision-based pad printing defect identification project is to develop a deep-learning-based
algorithm that can be trained with a minimum number of sample images.
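The bright/dark range check described above can be sketched in a few lines of NumPy. This is a minimal illustration of the idea, not the cited study's VC++ implementation; the function names, the tolerance parameter, and the use of per-pixel min/max as the dark/bright templates are our own assumptions:

```python
import numpy as np

def build_templates(aligned_samples):
    """Build reference, bright and dark templates from a list of
    aligned defect-free images of identical size."""
    stack = np.stack(aligned_samples).astype(np.float32)
    reference = stack.mean(axis=0)   # average of the defect-free images
    bright = stack.max(axis=0)       # highest pixel values observed
    dark = stack.min(axis=0)         # lowest pixel values observed
    return reference, bright, dark

def defect_mask(image, bright, dark, tolerance=5.0):
    """Flag pixels whose values fall outside the [dark, bright] band."""
    img = image.astype(np.float32)
    return (img > bright + tolerance) | (img < dark - tolerance)

# Tiny demonstration with flat synthetic "images"
samples = [np.full((4, 4), v, dtype=np.float32) for v in (98, 100, 102)]
reference, bright, dark = build_templates(samples)
inspected = np.full((4, 4), 100, dtype=np.float32)
inspected[1, 2] = 200                # simulate one defective pixel
mask = defect_mask(inspected, bright, dark)
```

Any pixel outside the tolerance band around the observed range is flagged, which mirrors the paper's per-pixel comparison against the bright and dark template images.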

The study “AI-assisted reliability assessment for gravure offset printing system” was carried out
by (Gafurov, A.N., et al., 2022) based on artificial intelligence (AI) and the deep convolutional
neural network approach to computer vision. This report explains defect identification for
gravure offset printing in the electronic device manufacturing industry. Gravure offset printing is
one of the printing processes used to manufacture different electronics, such as planar inductors
and pressure sensors, using fine-line patterning. The purpose of this research was to design a
technique for evaluating printing quality based on modern AI computer vision methods for printing
reliability. This AI computer vision technique had several steps to achieve its goal. After each
printing, the camera captures images of the printed pattern at a resolution of 2.41 megapixels. The
computer used to execute the model had a Core i7 processor and Nvidia GeForce GTX 750 Ti 1
Gb graphics card. Images of the printed lines were obtained during the printing process and
labelled according to the presence of local defects and overall quality. There are 299 images in
all, split into two categories: 225 defect-free images and 74 images with defects. All 74 images
with unique local defects were selected and labelled using the Labeling program for local defect
detection. For model validation, 25 % of the whole data set was selected for the classification
task and 20 % for the object detection task. In this experiment, the DNN (Deep
neural networks) model with skip connections had been developed and trained from scratch using
the data set for the image classification task. TensorFlow and Keras frameworks in Python were
used to program that. TensorFlow is an open-source machine learning platform that has a flexible
ecosystem of tools, libraries, and community resources (Wongsuphasawat, K., et al., 2017). Keras is
a TensorFlow-based high-level neural network library (Gulli, A. and Pal, S., 2017). The combination
of TensorFlow and Keras makes building and training models easy because of the
high-level application programming interface (API) (Thera, J., 2022). DNN models are capable of
resolving extremely nonlinear problems like image classification or object recognition. These
approaches might help with reliability analysis by estimating the printed pattern either qualitatively
by evaluating if the entire image of offset printing meets excellent quality standards or
quantitatively by detecting the number of local printing defects classified by class. The training
procedure is shown in the diagram below. It was performed for four hundred epochs, and
predictions were made using the model weights with the highest validation accuracy.

Figure 4: The image classification model's training procedure (Gafurov, A.N., et al., 2022).

Here, the training loss is a metric used to assess how well a deep learning model fits the training
data (Urtasun, R., et al., 2018). In other words, it assesses the model's inaccuracy on the training
data. Validation loss is a statistic used to evaluate a deep learning model's performance on the
validation set (Chun, J., et al., 2020). In this study, the fine-tuning technique with pre-trained
weights was used to retrain the YOLO model, which was designed to predict the position of objects
in images by providing bounding box positions. High detection speed, prediction within a single
step, and high accuracy were the features of this YOLO model. The data set's ground-truth
bounding boxes were analysed using the standard unsupervised learning approach k-means
clustering, with intersection over union (IoU) metrics, to enhance the accuracy of the proposed
bounding boxes. These kinds of AI algorithms are called ensemble learning algorithms.
Ensemble learning is a basic meta-machine learning technique that aggregates predictions from
many models to enhance predictive performance (Che, D., et al., 2011). As a result of combining
these AI algorithms, the printing defect detector was able to predict printing defects within the
captured image, as shown in Figure 5.

Figure 5: Detection example of the printing defect detector (Gafurov, A.N., et al., 2022).
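The anchor-box refinement step mentioned above, k-means clustering of ground-truth boxes under an IoU metric, can be sketched as follows. This is a generic illustration of the technique, assuming boxes are given as (width, height) pairs sharing a common corner; it is not the code from the cited study:

```python
import numpy as np

def iou_wh(box, clusters):
    """IoU between one (w, h) box and k cluster (w, h) centroids,
    assuming boxes share the same top-left corner."""
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_iou(boxes, k, iters=100, seed=0):
    """Cluster (w, h) boxes using 1 - IoU as the distance metric."""
    rng = np.random.default_rng(seed)
    clusters = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        dists = np.array([1 - iou_wh(b, clusters) for b in boxes])
        nearest = dists.argmin(axis=1)
        new = np.array([boxes[nearest == i].mean(axis=0) if np.any(nearest == i)
                        else clusters[i] for i in range(k)])
        if np.allclose(new, clusters):
            break
        clusters = new
    return clusters

# Two synthetic box populations: small ~10x10 and large ~50x50
boxes = np.array([[10, 10], [11, 10], [10, 11],
                  [50, 50], [51, 50], [50, 51]], dtype=float)
anchors = kmeans_iou(boxes, k=2)
```

Because the distance is 1 − IoU rather than Euclidean distance, the resulting anchors reflect box shape overlap, which is what matters for a YOLO-style detector.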

In this study, advanced deep learning methods were used, and a powerful computer was needed
to run them. This level of detection accuracy is also expected in our vision-based pad printing
defect identification system, but in our case a Jetson Nano Developer Kit is expected to be used.
The ensemble learning method used here could also be applied to obtain good results.

A similar study on the design of a machine vision defect detection system based on Halcon was
done by Bin Xu et al. (Xu, B., et al., 2018). In this work, a defect detection machine vision system
based on the Halcon image processing software was developed to detect defective spots on
screens, cracks and scratches in ceramic tiles, and PCB circuit board defects on their respective
production lines. This paper explained in detail the vision camera setup and light field in the
ceramic tile production lines.

Figure 6: Schematic diagram of system light field design (Xu, B., et al., 2018).

To provide a strong light field, a ring LED light source was utilized, which has a longer life and
more stable performance. In this experiment, the HALCON commercial software package was used
for image processing and defect detection. HALCON has a library of image processing methods
with over 1000 individual algorithms and a data management core. Filtering, rectification, shape
search, mathematical transformations, classification and identification, computational analysis,
and other fundamental geometry and image computing tasks are included in this software. The
library files used in this project were OpenCV2.4.9 and Halcon12, respectively, and the project
was originally configured on Visual Studio2010. The project was debugged with Visual Studio2017
under the Windows10 system due to development and debugging requirements. Based on MFC
and Halcon mixed programming configurations, this research proposed an efficient image
processing approach. Each image defect detection took about 141ms in this method. In this study,
A Halcon software-based defect identifier system has been implemented. There are efficient and
accurate image processing algorithm methods in Halcon. But the thing is, Halcon software is very
expensive. Therefore, we should carry out thorough research for our vision-based pad printing
defect identification system and develop an efficient deep learning algorithm that fulfils the
objectives without resorting to such a method.

Pandia Rajan Jeyaraj et al. designed and tested an efficient, automated computer vision-based
fabric defect detection and classification system using a prototype model implemented on a
real-time industrial platform (Jeyaraj, P. R. & Nadar., E. R. S., 2019). The system was developed
with two modules: an offline learning phase for defective textiles and an online real-time testing
phase for defective fabric materials. In this study, a multi-scaling CNN was used to
develop a deep learning algorithm for intelligent fabric defect classification in real-time in an
industrial platform. The multi-scaling Deep CNN algorithm for fabric defect-detecting systems was
less complicated than conventional automated systems such as SVM-based classifiers. According
to experimental result validation, the multi-scaling Deep CNN algorithm achieved 96.55 %
accuracy with a 0.94 success rate. In this method, a deep learning algorithm has been designed
for use in real-time applications. The use of that kind of method is more apt for our vision-based
pad printing defect identification system as well.

The study carried out by Kazuhiko Sato et al. explored and developed a system for inspecting
pad-printed characters on the labels of video cassettes using the normalized correlation of
segmented character images (Sato, K., Kan'no, H. and Ito, T., 1991). This research is about
detecting blurred characters, characters with missing parts, and undesirable spots. The algorithm
was a template-matching technique utilizing the normalized correlation of the grayscale image.
The method split a character into segments and calculated the normalized correlation between
each segment and the main segment. In this research, defect identification was divided into three
main methods: character extraction, positioning, and segmental inspection using correlation. In
character extraction, there was a little variation in position between cassettes across lines, so
each character was extracted using the standard horizontal and vertical profile approach. The x
and y coordinates of each character's upper-left corner can be obtained in this way; this point
was called the reference point.

Figure 7: Character Extraction (Sato, K., Kan'no, H. and Ito, T., 1991).
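The horizontal/vertical profile approach for character extraction can be sketched with NumPy projection profiles. This is our own minimal reconstruction of the idea on a binary image, not the 1991 system's code; the profile threshold is an assumption:

```python
import numpy as np

def character_reference_points(binary, axis_threshold=1):
    """Locate characters with horizontal/vertical projection profiles.
    `binary` is a 0/1 image where 1 marks printed (dark) pixels.
    Returns the upper-left (x, y) reference point of each character."""
    col_profile = binary.sum(axis=0)        # vertical projection
    on = col_profile >= axis_threshold      # columns containing print
    refs = []
    x = 0
    while x < len(on):
        if on[x]:
            x0 = x
            while x < len(on) and on[x]:    # walk across one character
                x += 1
            rows = binary[:, x0:x].sum(axis=1)
            y0 = int(np.argmax(rows >= axis_threshold))
            refs.append((x0, y0))           # upper-left reference point
        else:
            x += 1
    return refs

# Two synthetic "characters" as filled blocks
binary = np.zeros((10, 20), dtype=np.uint8)
binary[2:6, 3:6] = 1
binary[4:8, 10:14] = 1
refs = character_reference_points(binary)
```

Each run of non-empty columns is treated as one character, and its first non-empty row gives the y coordinate of the reference point, mirroring the profile method described above.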

In the positioning stage, the position of the characters sometimes varies from where it should be
on the cassette, due to blurred characters and characters with missing parts. In this research, a
method has been explained to identify that kind of defect by doing calculations

with the pixel values of the image. In the segmental inspection part, every character in the image
of the cassette was divided into several segments, and pixel values were calculated and
compared with threshold values to make sure the character was completely printed in the
designated place. It was mentioned that segmental inspection can detect defects that could not
be detected in the positioning and character extraction parts. These defect identification methods
are old methods used around 1991, and it took about 1.2 seconds to detect the defects in one
image. Today, the characters of the print can easily be identified using the optical character
recognition (OCR) method (Mathur, A., et al., 2019), an application of machine learning. In this
study, the basic techniques of image processing were used. In our vision-based pad printing
defect identification system, however, it is necessary to find out whether the print has been
placed in the right position on the surface. For that, the positioning and character extraction
methods can be developed further with modern image processing techniques.

Figure 8: Segmentation of Character (Sato, K., Kan'no, H. and Ito, T., 1991).

An experimental study on Computer Vision-Based Crack Detection and Analysis on concrete


surfaces was done by Prateek Prasanna et al. (Prasanna, P., et al., 2012). It is essential to
examine the quality of concrete platforms at regular intervals to ensure safety. This study focused
on using machine-learning-based classification methods to detect and analyze faults. Regarding
the methodology of this experiment, a histogram-based classification technique was used. The
first step was to collect images of cracked and uncracked concrete surfaces and label the data.
The Canny edge detection method was used for edge detection as the second step, and image
blurring, a basic image processing method, was used to remove spurious edges. Next, using the
bioinformatics toolbox in MATLAB, the Support Vector Machine (SVM) algorithm was run with
linear, quadratic, and RBF (Radial Basis Function) kernels, respectively. Classification using
histogram-based features was the third step; Random Sample Consensus (RANSAC) (Taylor,
C.N., et al., 2014) and least-squares estimation (Chi, H., et al., 2015) were the methods used in
this step. The classifier's detection rate was estimated to be around 76 %. It can be seen that this
method achieved only a medium detection accuracy. There are also differences between
concrete cracks and pad printing defects, but if this method is applied to pad printing defect
detection, it

may show high accuracy. Therefore, this method can also be tested in our vision-based pad
printing defect identification research.

The research paper published in the IEEE Xplore in 2019 by Gorbunov Vladimir et al gives details
about the automatic detection and classification of weaving fabric defects based on Digital Image
Processing (Vladimir, G., et al., 2019). The goal of the project was to detect defects faster and
more accurately than human eyesight and to identify the cause of the defects. The experiment
was carried out using the OpenCV, Imutils, and NumPy libraries and the Python programming
language. The pattern of the plain weave is shown in Figure 9.

Figure 9: Warp and weft pattern of a plain weave (Vladimir, G., et al., 2019).

When it comes to the experimental process, there were several steps. To detect and
characterize the defects, a combination of morphological procedures and contour detection
algorithms was used. In the first stage, the noise of the image was removed. Thresholding was
then done with the Threshold Binary function, meaning the maximum and minimum intensities
were set to 255 and 0, respectively.

Figure 10: Threshold image (Vladimir, G., et al., 2019).

Then, to eliminate noise from the foreground image, the morphological closing method was used.
In the next step, the contours of wefts and warps were defined using OpenCV functions. A
contour is defined as a line that links all of the points with the same intensity along the image's
boundaries (Arbelaez, P., et al., 2010). In this research, defects in plain weave fabric were
detected by considering the pattern of contour centroids and the size and shape of the contours.
The detected defects in a sample of plain weave fabric are shown in Figure 11, with the defect
regions highlighted in red.

Figure 11: Detected defects (Vladimir, G., et al., 2019).

From this study, it can be seen that the method used here is better suited to detecting defects
on surfaces that have the same pattern throughout. A pad print, however, also contains
characters, numbers, and logos. Therefore, it is more appropriate to develop an AI-based
algorithm to detect defects in pad printing.

An experimental study on the Performance comparative of the OpenCV Template Matching


method on Jetson TX2 and Jetson Nano developer kits was done by (Basulto-Lantsova, A., et al.,
2020). Because of their high performance and low power consumption, embedded systems such
as those in the NVIDIA Jetson family (Vladimir, G., et al., 2019) are increasingly popular for
computer vision projects. Because these platforms have Graphics Processing Units (GPUs), they
provide significant processing capacity and allow operations to be executed in parallel, in
addition to their mobility. The Jetsons use the Linux operating system, which allows them to utilize a range

of image-processing packages. The Open Source Computer Vision Library (OpenCV), a
multi-platform computer vision library, is one of the most widely used. This research provides a
comparison between the Jetson TX2 and the Jetson Nano developer kit's performance in image
processing template matching. Considering the results of this experiment, it can be seen that the
Jetson TX2 is faster than the Jetson Nano developer kit. Although the Jetson TX2 is more
powerful, the Jetson Nano developer kit is expected to be used in the pad printing defect
identification system, because its performance is more than sufficient to accomplish those
requirements.

The research paper published in IEEE Xplore in 2019 by Shisheng Zhong et al. gives details
about novel unsupervised anomaly detection for gas turbines using the Isolation Forest
(Shisheng Zhong, et al., 2019). In this research, the Isolation Forest method was used for
anomaly detection. (Liu, F.T., Ting, K.M. and Zhou, Z.H., 2008) introduced the Isolation Forest
method as an unsupervised anomaly detection algorithm for data sets. There are three main
processes in this research paper: categorizing the gas turbine monitoring data, preliminary
anomaly detection using the isolation forest method with low contamination, and precise
anomaly detection using the isolation forest method with high contamination.

Figure 12: Process of Gas Turbine Anomaly Detection (Shisheng Zhong, et al., 2019).

The equation for the contamination of the isolation forest is shown below:

contamination = defective pad print images (anomaly samples) / defect-free pad print images (normal samples)

The aim of utilizing a low-contamination isolation forest for preliminary anomaly detection is to
pick out the unusual group and eliminate the normal group. The isolation forest with high
contamination is then used to detect anomalies precisely. The parameters of these two methods
are shown below.

PARAMETERS          METHOD 01    METHOD 02
Number of iTrees    100          100
Max Samples         50           50
Contamination       0.005        0.04
Max Features        4            4

Table 1: Parameter values of the Isolation Forest

Using a combination of these two methods, the gas turbine anomaly detection system achieved
more than 94 % prediction accuracy (Shisheng Zhong, et al., 2019). The method used in this
research is unsupervised anomaly detection, which is well suited to our pad printing defect
identification project, because many new kinds of defects can occur frequently in the pad printing
process, making it difficult to label the data for a supervised machine learning method.
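The two-stage scheme with the parameters in Table 1 can be sketched with scikit-learn's IsolationForest. The data here is synthetic, standing in for the turbine measurements purely for illustration; only the parameter values come from the table:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(0.0, 1.0, size=(500, 4))    # stands in for healthy data
anomalies = rng.normal(6.0, 1.0, size=(10, 4))  # stands in for abnormal data
data = np.vstack([normal, anomalies])

# Stage 1: low contamination pulls out only the most obvious outliers.
coarse = IsolationForest(n_estimators=100, max_samples=50,
                         contamination=0.005, max_features=4,
                         random_state=0).fit(data)
# Stage 2: high contamination refines the remaining suspicious group.
fine = IsolationForest(n_estimators=100, max_samples=50,
                       contamination=0.04, max_features=4,
                       random_state=0).fit(data)
labels = fine.predict(data)   # -1 = anomaly, +1 = normal
```

The contamination value only moves the decision threshold, so the low-contamination stage flags a handful of extreme points while the high-contamination stage flags a larger set for precise inspection.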

3. DEVELOPMENT OF A SUITABLE IMAGE ACQUISITION SYSTEM

3.1 Camera selection

Usually, in industry, two types of cameras are used for computer vision systems: area scan and
line scan cameras. A line-scan camera captures images as lines from the fabric surface at
exceptionally high speeds. In an area-scan camera, a rectangular sensor captures an image in a
single frame, and the width and height of the generated image are identical to the number of
pixels on the sensor. Because of that, area-scan cameras are well suited for machine vision
applications. In a pad printing defect identification system, there is no need to capture images at
high speed. Therefore, an area-scan camera was chosen for this project.

In the process of choosing a vision camera, there were a few things to consider. First, the
camera should fit in a small mounting space. Second, the camera must be able to connect to the
processing board used in this project. Furthermore, the focusing range is also important when
selecting a vision camera; the camera used here should be a manual-focus camera.

Figure 13: Selected Vision Camera.

Specifications of the selected camera,

 5-megapixel sensor
 720p resolution
 Manual focus
 6–22 mm lens focal length

3.2 CPU selection

In the beginning, it was suggested to use an industrial PC, but further investigation revealed that
even a Raspberry Pi single-board computer or a Jetson Nano Developer Kit can be used for this
purpose. Algorithm testing experiments were carried out with both of these devices while
developing the computer vision defect identification system.

Figure 14: Algorithm testing with Raspberry Pi 4B.

Figure 15: Algorithm testing with Nvidia Jetson Nano.

The NVIDIA Jetson Nano has a 128-core Maxwell GPU running at 921 MHz. Comparing the two
boards, the Jetson Nano's GPU is more powerful than the Raspberry Pi 4's, which makes the
Jetson Nano more suitable for AI and ML applications. Therefore, we decided to use the Jetson
Nano Developer Kit as the processing unit for this vision system.

Figure 16: Jetson Nano Developer Kit.

The Jetson Nano was built by the California-based technology company NVIDIA to address
some of the most difficult challenges in computer vision. NVIDIA's Jetson is an embedded
accelerator device produced for modern artificial intelligence techniques. The Jetson Nano
board is a low-power, low-cost, compact, and capable AI computer that allows a large number of
AI algorithms to execute in parallel. The NVIDIA Jetson Nano is similar to a minicomputer: to
work with it, equipment such as a mouse, keyboard, and monitor is needed. The Jetson Nano
Developer Kit requires a microSD card for boot and primary storage; usually, a 32 GB UHS-1
microSD card is recommended for fast operation and enough space. The kit requires an
AC-to-DC 5 V 2000 mA barrel-jack wall power supply or a micro-USB cable for power. The
Nvidia Jetson Nano Developer Kit uses an Ubuntu Linux-based operating system to run its
applications (KOCER, et al., 2021).

The NVIDIA Jetson Nano Developer Kit was fitted in a special metal case for industrial use. This
metal case provides excellent protection for the Jetson Nano. The enclosure includes a cooling
fan, a WiFi module with antennas, power and reset buttons, and a micro-SD card adapter board.
The integrated power and reset buttons allow the Jetson Nano to be powered on safely. The
casing has openings specifically designed for the GPIO pins and the SD card slot. It also
contains a mesh that allows the fan to draw cool air from the surroundings, which helps maintain
airflow and avoid overheating. This protective metal case for the Jetson Nano was designed by
Waveshare Electronics.

Figure 17: Jetson Nano Developer Kit with Waveshare case.

4. MODELING, DEVELOPMENT AND TESTING THE ALGORITHM

There are three main algorithms that have to be developed in this pad printing defect
identification system.

1. Algorithm for detecting defective pad prints
2. Algorithm for checking the position of pad prints
3. Algorithm for identifying folds and wrinkles

4.1 Algorithm for Detecting the defective pad prints

This section discusses the anomaly detection algorithm, the main algorithm of this project.
Capturing training image samples, training and testing the image dataset, machine learning,
image processing, and deciding in real time whether a print is defective or defect-free are
covered in this section. Mainly, the Python programming language and the OpenCV library are
used for developing the algorithms.

Anomaly detection (Isolation Forest)

Anomaly detection is a data mining technique for detecting data points or observations that
deviate from a dataset's expected behaviour (Liu, F.T., Ting, K.M. and Zhou, Z.H., 2012). There
are several anomaly detection algorithms; in this section, the Isolation Forest algorithm is used.
The isolation forest algorithm exploits the "few" and "different" properties of anomalous samples.
The "few" property is the presence of only a small number of anomalous samples in the dataset;
the "different" property is that anomalous samples have values/attributes that differ strongly from
those of normal samples. The function of our defect identification system is likewise to identify
defective pad prints that differ from normal pad prints. Therefore, the Isolation Forest anomaly
detection algorithm seems appropriate for this defect identification system.

Mathematical Modeling of the Isolation Forest Concept

The majority of current model-based anomaly detection algorithms first establish a normal profile
and then identify occurrences that do not belong to the normal profile as anomalies. This generic
strategy is used in well-known examples such as statistical approaches, classification-based
methods, and clustering-based methods. This work suggests an alternative kind of model-based
approach that explicitly isolates anomaly instances instead of profiling normal instances.
Anomalies are more susceptible to isolation than normal points because they are few and

unusual. The tree structure is used in this method to isolate anomaly instances efficiently. This
isolation forest method builds an ensemble of iTrees for a given data collection, where anomalies
are instances with short average path lengths on the iTrees (Liu, F.T., et al., 2008). This
approach has only two parameters: the number of trees to build and the sub-sampling size.

Figure 18: Isolation Forest Isolation structure. (Liu, F.T., et al., 2008).

As shown in the figures above, in the same data set, the number of partitions needed to isolate
normal data values is greater than the number of partitions required to isolate anomalous data
values. The path length is equivalent to the number of partitions needed to isolate a point.
Individual trees are generated with different sets of partitions since each partition is generated at
random. The average path length across several trees is taken as the estimated path length. It
can be seen that the path length of x_i (a normal point) is higher than the path length of x_0 (an
anomaly). This demonstrates

that anomalies have shorter path lengths than normal cases. One method for detecting
anomalies is to order data points by their path lengths; anomalies are the points at the top of the
list (Liu, F.T., et al., 2008).

Any anomaly detection method requires an anomaly score to make a decision:

s(x, n) = 2^( -E(h(x)) / c(n) )

Here,

s(x, n) - the anomaly score of instance x for a sub-sample size n
h(x) - the path length of instance x in an iTree
E(h(x)) - the average of h(x) over a collection of iTrees
c(n) - the average of h(x) given n, used to normalize the score:

c(n) = 2H(n - 1) - 2(n - 1)/n

where H(i) is the harmonic number, which can be estimated by ln(i) + 0.5772156649 (Euler's
constant).

• When E(h(x)) → c(n), s → 0.5
• When E(h(x)) → 0, s → 1
• When E(h(x)) → n - 1, s → 0

If the anomaly score is close to 1, there is a high chance that the instance is an anomaly. For
normal instances, the anomaly score is much smaller than 0.5. Therefore, an anomaly score of
0.5 can be used as the threshold value of this method (Ahmed, S., et al., 2019).
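The score definition above can be checked numerically with a short helper; H(i) is estimated with ln(i) plus Euler's constant, as in Liu et al.:

```python
import math

EULER_GAMMA = 0.5772156649

def c(n):
    """Normalization term c(n) = 2H(n-1) - 2(n-1)/n, the average path
    length of an unsuccessful search in a binary search tree."""
    if n <= 1:
        return 0.0
    harmonic = math.log(n - 1) + EULER_GAMMA   # H(i) ~ ln(i) + Euler's constant
    return 2.0 * harmonic - 2.0 * (n - 1) / n

def anomaly_score(mean_path_length, n):
    """s(x, n) = 2 ** (-E(h(x)) / c(n))."""
    return 2.0 ** (-mean_path_length / c(n))
```

Evaluating it confirms the limiting cases listed above: an average path length of 0 gives a score of 1, an average path length equal to c(n) gives 0.5, and very long paths drive the score toward 0.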

The basics of the iTree concept have been discussed so far. When it comes to the iForest
concept, it does not isolate all normal instances. iForest can perform effectively with a partial
model, without isolating all normal points, and generates models with a minimal sample size. In
fact, the iForest method works more efficiently with a small sub-sample size: a large sampling
size limits iForest's capability to isolate anomalies, since normal instances can interfere with the
isolation process and prevent it from cleanly identifying anomalies (Liu, F.T., et al., 2008).

The anomaly score contour of the Isolation Forest method for a Gaussian distribution of
sixty-four points is shown below.

Figure 19: Isolation Forest Anomaly Score contour (Liu, F.T., et al., 2012).

Development of the algorithm in Python

First, all required libraries and modules should be installed in the virtual environment in which
the algorithm is developed. A virtual environment is a Python workspace in which the Python
interpreter, libraries, and scripts are isolated from those in other virtual environments.

Required libraries - OpenCV, imutils, NumPy, scikit-learn, glob, matplotlib

First of all, a large data set should be collected. A simple method is used for that here; the
program is shown below.

Figure 20: Program for data collection.

After importing the library files, the path to save the images is specified. In line number 5, the
program connects to the camera. The code for capturing images, cropping them, and naming
them is placed inside a for loop, so the number of images captured equals the number of loop
iterations. The above program is arranged in such a way that 6000 images are captured.

The next step was preparing the dataset for training. To detect outliers, we must train a model
using a machine learning training algorithm that can analyze and categorize the contents of our
training dataset. To analyze the training images, this program generates a colour histogram for
each image. Colour histograms are a basic but efficient way of characterizing an image's colour
distribution. The program that produces the colour histogram is shown below.

Figure 21: Program implementing colour histogram extraction.

An example of a color histogram of the image and the original image is shown below.

Figure 22: Original Image.

Figure 23: Colour Histogram of Image.

In lines 1-3, libraries are imported. To create a list of all images in an input directory, the paths
module from the imutils package is utilized. Histograms are calculated and normalized using
OpenCV, and NumPy is used for array operations. In lines 6-9, the colour histogram is calculated
and normalized. The next step is accepting a directory path that contains our image data set and
looping over the image paths while quantifying each image using our colour histogram extraction
method.

Figure 24: Program of Dataset Training.

In the above training process, 11000 defect-free images and 1500 defective images were trained
under these parameters. Defective images are labelled "anomaly images" and defect-free
images are labelled "normal images". Ten defect-free samples were used to capture the 11000
defect-free images, i.e. 1100 images per sample, because this computer vision system is not
fixed in an enclosed environment and lighting conditions may therefore affect the system's
performance. The isolation forest parameters are defined below:

• n_estimators: the number of base estimators in the ensemble.
• contamination: the percentage of outliers, defined as
  contamination = defective pad print images (anomaly samples) / defect-free pad print images (normal samples)
• max_samples: the sum of all defective and defect-free samples.
• random_state: the seed value of the random number generator, used for repeatability.

After completing this step, an anomaly detection model is created and used to test this
algorithm. An unsupervised data set is used to generate the model file here. Unsupervised
learning refers to algorithms that find patterns in data sets comprising data points that are neither
classed nor labelled; as a result, the algorithms can categorize, label, and organize data points
inside data sets without any external assistance (Dridi, S., 2021). Many tests were done in this
data set training section to find the most accurate parameters, and the number of training
samples is one of the main factors affecting the accuracy of this project's results. According to
this program, the model file is saved as "finalized_model.sav". After completing this training
process under these parameters, the model file can be used for testing in real time.

Figure 25: Program of Real-Time Testing.

In lines 1-3, library files are imported. In line 6, the computer vision camera is connected to the
program. The image of the pad print is captured in line 8, and in line 9 the image is cropped so
that only the pad print is visible. Loading the image and converting it to the HSV colour space,
extracting the features of the image, determining whether the captured image is defective by
using the anomaly detector model, and extracting features from the testing images are done in
lines 12 to 18. Lines 21, 22, 25, and 26 are the most important lines in this program. According
to the extracted features of the testing image, this algorithm gives a value, named "x" here. For
the testing images of pad prints, "x" varied from +3 to -3, although in some rare cases the "x"
value falls outside that range. Defective cases are labelled "anomaly" and defect-free samples
are labelled "normal". If the testing image is defect-free, the "x" value will usually be between 0
and +3; otherwise, if the testing image is defective, the "x" value will be between 0 and -3. In that
sense, "0" acts as the threshold value of this computer vision system. In some cases, however,
that value has to be changed. For example, according to MAS, the samples are not 100 % the
same: there may be fabric variations that are invisible to the eye and do not affect quality, but
this computer vision algorithm can detect that type of fabric variation as a defect. Changes in
lighting conditions also contribute to these issues. Those issues can be avoided by adjusting the
threshold value slightly; here, the threshold value was set to -0.08. According to this program's
instructions, if the image of the pad print is identified as a defective pad print, it is labelled
"anomaly" in the middle of the image; otherwise, it is labelled "normal". This algorithm
determines whether the pad print is defective or not, and the next algorithm then checks whether
it has been printed in the correct position.

Testing and Results:

Defect-free pad prints and different types of defective pad prints were tested with this algorithm
in real-time. The algorithm displays the anomaly score in the PyCharm command prompt, making it
easy to get an idea of how anomalous the pad print is.
There are different types of defective pad prints. The defects in some pad prints are easily visible
whereas some defects might not be visible to the naked eye. The algorithm should be able to
detect both types of defects. The results of the testing are shown below.

Figure 26: Camera setup for testing.

Figure 27: Defect-free pad print (Anomaly Score: + 0.13172575).

It can be seen that the algorithm gives a positive value as the anomaly score for defect-free pad
prints. Some examples of defective pad prints that were successfully identified are shown below.

Figure 28: Defective pad print (Anomaly Score: - 0.16278505).

Figure 29: Defective pad print (Anomaly Score: - 0.22621908).

Figure 30: Defective pad print (Anomaly Score: - 0.22621908).

Figure 31: Defective pad print (Anomaly Score: - 0.16278505).

Problems Encountered During Testing and Their Solutions

Problem 01:

• These results were obtained in the first testing experiment. Although the algorithm
performed well for the easily visible defects in pad prints, some defects that might not be
visible to the naked eye were not detected. The algorithm ignored such small defects and
considered them defect-free pad prints.

Figure 32: Defective pad print (5)

The characters in the last row are not printed properly, but the algorithm considers these kinds of
defective pad prints as defect-free. To detect such small defects, some changes were made to the
size of the testing and training images.

Solution:

 The training and testing images were cropped so that only the pad print was visible.

Figure 33: Cropped pad print (5).
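Cropping is plain NumPy slicing on the captured frame. A minimal sketch, assuming a 640x480 camera frame and the crop window used in the appendix source code:

```python
import numpy as np

# A captured frame (camera resolution assumed 640x480 for illustration).
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Keep only the region where the pad print appears:
# rows 40-439 and columns 170-449, the window used in the appendix code.
cropped = frame[40:440, 170:450]
print(cropped.shape)  # (400, 280, 3)
```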

Problem 02:

The algorithm performs well for defect-free pad prints. However, as mentioned above, all of the
training images were captured under the same lighting conditions, so the algorithm sometimes gives
wrong predictions for images captured under different lighting conditions.

Figure 34: Defect-free pad print but tested in different light conditions (2).

It can be seen that the above pad print is defect-free, but it was considered defective by the
algorithm. The reason is that this pad print was tested under lower light conditions than those
under which the training images were captured.

Solution:

The best solution is to enclose the computer vision camera system and supply an external flicker-
free light source to keep the lighting conditions the same in both the testing and training
scenarios. To mitigate this problem, it is also advisable to train on images of pad prints captured
under different lighting conditions.

After making the above-mentioned improvements to the algorithm, the second experiment was
done. The results of the second experiment are shown below. If the algorithm identifies the pad
print as defective, "an" is displayed in the middle of the pad print in red. Otherwise, "no" is
displayed in the middle of the pad print in green.

Figure 35:Defect-free pad print in experiment 02 I.

Figure 36: Defective pad print in experiment 02 II.

Figure 37: Defective pad print in experiment 02 III.

Figure 38: Defective pad print in experiment 02 IV

Figure 39: Defective pad print in experiment 02 V.

Figure 40: Defective pad print in experiment 02 VI.

Figure 41: Defective pad print in experiment 02 VII.

The results of the second experiment confirmed that the issues the algorithm showed in the first
experiment had been resolved by these improvements.

4.2 Algorithm for checking the position of pad prints

In this research, two algorithms have been developed for checking the position of the pad print.

• OCR-based position checking method


• Object detection-based position checking method

OCR-Based Position Checking Method

In some cases, pad prints that contain only characters have to be checked to confirm that they are
printed in the right place. This OCR-based position checking algorithm can be used efficiently in
these types of situations.

Figure 42: Example of Pad Print that has only Characters.

Optical character recognition (OCR) technology is used to convert printed text into editable text.
In this method, the words of the pad print are separated using OCR. To identify the characters in
the pad print clearly, several image processing techniques have been used. First, the coloured
image of the pad print is converted into a grayscale image. Next, the grayscale image is blurred
to reduce noise. The algorithm of this image processing step is shown below.

Figure 43: Algorithm of Image Processing techniques in this method.

Figure 44: Grayscale Image.

Figure 45: Blurred Image.

After identifying all the characters on the pad print, the algorithm selects the last character
among them and draws a rectangular contour around it. The algorithm of the OCR process and
contour drawing is shown below.

Figure 46: Algorithm of OCR process and contour Drawing.

Figure 47: Contour Detection of character.

The last character identified through the OCR technique has a unique coordinate corresponding to
the place where the contour is drawn. In this method, this coordinate is obtained as a list of
numbers. The coordinate should be the same for each pad print if the pad prints are printed in the
correct place. At the beginning of the method, a reference coordinate must be introduced: the
coordinate that the program should return when the pad print is printed in the correct position.
By comparing the reference coordinate with the coordinate of the pad print being checked, it can
be determined whether the pad print has been printed in the correct position. The program for
comparing the reference coordinate with a sample pad print is shown below.

Figure 48 : Coordinate Comparing Program of this method.
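A minimal sketch of this comparison step, assuming the five-element list layout shown in the results (four contour coordinates followed by the recognised last character); the function name is illustrative:

```python
def position_ok(reference, sample):
    # Both lists are assumed to look like [x, y, w, h, "A"]: four contour
    # coordinates followed by the recognised last character.
    ref_coords, ref_char = reference[:4], reference[4]
    smp_coords, smp_char = sample[:4], sample[4]
    # The print is in position only if the coordinates AND the recognised
    # character both match the stored reference.
    return ref_coords == smp_coords and ref_char == smp_char
```

For example, `position_ok([120, 80, 30, 40, "A"], [120, 80, 30, 40, "A"])` returns True, while any shifted or rotated print produces different contour coordinates and returns False.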

The complete program, with all sections of this OCR-based position checking method, is shown
below.

Figure 49 : Complete Program of OCR-Based Position Checking Method.

Three examples of this method and their results are shown below.

• Example 01

Figure 50: Examples of characteristic Pad Prints I

It can be seen that there is no difference between reference pad print and testing pad print.

Testing and Results of OCR-based position checking method:

Figure 51: Results of OCR-based position checking method I

This program shows the reference pad print coordinate and the testing pad print coordinate in the
command prompt, presented as a list with five elements. While the first four elements of the list
give the coordinate of the position of the drawn contour, the last element shows the identified
last character of the pad print. It can be seen that "A" has been recognized as the last character.

• Example 02

Figure 52: Examples of characteristic Pad Prints II.

It can be seen that the testing image has been rotated about 50 degrees anticlockwise.

Results:

Figure 53: Results of OCR-based position checking method II.

• Example 03

Figure 54: Examples of characteristic Pad Prints III.

It can be seen that the testing image has been rotated about 20 degrees clockwise. This algorithm
has the ability to identify even these kinds of small deviations.

Results:

Figure 55: Results of OCR-based position checking method III.

Object Detection-Based Position Checking Method

In some cases, pad prints that contain only a logo have to be checked to confirm that they are
printed in the right place. This object detection-based position checking algorithm can be used
efficiently in these types of situations. In this method, whether the pad print was printed in the
right position is checked based on the logo of the printed pad print.

There are five steps in this method.

Step 01

The initial stage in every object detection model is to gather images, create a good data set and
annotate it. Fifteen images of the same pad print were collected for annotation. A data set of 15
images was sufficient for this task, although complex object detection tasks usually need a large
data set to get good results.

The next step is to label the images. To accomplish that task, MAKESENSE.AI, an online labelling
tool, has been used in this project.

Figure 56: MAKESENSE.AI Online Tool.

The logo of each pad print should be labelled individually. After the labelling process is
completed, the online tool generates the annotation file in XML format, which can then be used
for the training process.
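Since the tool exports the annotations as XML, a small standard-library sketch of reading the bounding boxes can look like this. It assumes a Pascal VOC style schema (a common export format for such tools, but an assumption here):

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_text):
    # Parse a Pascal VOC style annotation and return
    # (label, xmin, ymin, xmax, ymax) tuples for every labelled object.
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        b = obj.find("bndbox")
        boxes.append((name,
                      int(b.findtext("xmin")), int(b.findtext("ymin")),
                      int(b.findtext("xmax")), int(b.findtext("ymax"))))
    return boxes
```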

Step 02

These four library packages should be installed in Python to run this program.

• NumPy
• PyTorch
• Detecto
• Matplotlib

Figure 57: Installed Library Packages in Object Detection Method.

Step 03

The next step is image augmentation: the process of artificially enlarging the data set by
generating modified versions of the images.

Figure 58: The Program of Image Augmentation.

Step 04

The model training process is the next step in this method.

Figure 59: The Program of Training Process.

The training process takes some time to train the data set and generate the model file.

Step 05

The last step of this method is testing. The predictions can be made by using the trained model
file.

Figure 60: The Program of testing.

Figure 61:Object Detection Result.

It can be seen that the program has identified the logo and outputs the tensor coordinate for the
position of the logo. By comparing that tensor coordinate with a reference, the position of the
pad print can be checked. The output tensor coordinate given for the correct position of the pad
print should be introduced as the reference coordinate.

Figure 62:Reference Coordinate.

One experiment of this method and its result are shown below.

Figure 63: Experiment of Position Detection.

It can be seen that the pad print used for testing has been rotated slightly clockwise, which
means the testing pad print is not printed in the correct position. In image (b), there is a tensor
coordinate named the sample tensor, which represents the tensor coordinate of the testing pad
print. There is a difference between the reference tensor coordinate and the sample tensor
coordinate; by comparing those two coordinates, the position of the pad print can be checked.

Testing and Results of Object detection-based position checking method

If those tensor coordinates are equal, this program outputs "True"; otherwise it outputs "False".
The results of the previous experiment are shown below.

Figure 64: Result of the Experiment.
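In practice, detected box coordinates rarely match the reference exactly, so an element-wise comparison with a small tolerance is the natural check. A sketch, with the function name and the 0.2 tolerance as illustrative assumptions:

```python
def coordinates_match(reference, sample, tolerance=0.2):
    # Compare the reference box coordinates with the detected ones element
    # by element; exact equality almost never holds, so a small tolerance
    # (0.2 here, an assumed value) is allowed.
    if len(reference) != len(sample):
        return False
    return all(abs(r - s) <= tolerance for r, s in zip(reference, sample))
```

With the reference coordinate quoted later in the thesis, `coordinates_match([90.1451, 14.4568, 410.4589, 106.7896], [90.2, 14.5, 410.5, 106.8])` returns True, while a visibly rotated print falls outside the tolerance.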

4.3 Algorithm of Identifying Folds and Wrinkles

During the pad printing process, after printing on a piece of fabric is finished, the next piece of
fabric is brought mechanically to the place where the print is made. Some folds and wrinkles may
occur on the fabric surface due to mechanical issues during this process. Such fold and wrinkle
cases are shown below.

Figure 65: Example of fold on the fabric Figure 66: Example of wrinkles on the fabric.

Figure 67: Example of a good fabric surface

If the fabric surface to be printed has these types of defects, the resulting pad print will be
defective. Before the pad printing process starts, the fabric surface to be printed should be free
of folds and wrinkles. Checking for such defects is another part of this project. The same
isolation forest anomaly detection method can be used to identify these cases.

First, a good data set must be prepared to identify these cases using the isolation forest method.
The following are the results of an experiment using the isolation forest anomaly detection method
after training on 3000 images of good fabric surfaces and 500 images of folds and wrinkles on the
fabric.

Testing and Results of Folds and Wrinkles Identifying Algorithm

Figure 68: Testing results for good fabric surfaces.

Figure 69: Testing results for fold fabric
surfaces. Figure 70: Testing results for wrinkled fabric
surfaces.

This method separates fabric surfaces that are in a suitable condition for pad printing as normal
cases and fabric surfaces that are in unsuitable conditions as anomaly cases. The isolation forest
anomaly detection method used to detect defects in pad prints seems to work well for this folds
and wrinkle identification task as well.

5. FINAL IMPLEMENTATION AND RESULTS

For the final implementation of this project, several steps had to be completed for the pad
printing identification system to succeed. All the algorithms discussed individually above are
brought together in this section in a defined order. Before starting the final implementation, the
devices used here, the camera, the Jetson Nano developer kit and the other components, should be
mounted systematically on the pad printing machine. The pad printing machine is operated by a PLC.
This pad printing defect identification computer vision system should start after each pad printing
pass is finished and should give a decision on the pad print before the next pass starts.
Therefore, the system should be set up so that this process starts according to the signal received
from the PLC. If a defective pad print is detected, it should also be possible to signal the PLC to
stop the pad printing process before the next pass starts. The GPIO (General Purpose Input Output)
pins of the Jetson Nano developer kit were used to establish a connection with the PLC by sending
and receiving signals. For the final implementation, the camera had to be fixed on the pad printing
machine so that a clear and visible image of the pad print could be obtained.

Figure 71: Camera mount of the final implementation.

Figure 72: The place where the final implementation was performed.

After the camera was mounted and the other equipment positioned correctly, the pre-developed
algorithm was modified for this implementation. In the final implementation, the algorithms
discussed in separate sections had to be combined: checking for folds and wrinkles before the pad
printing process starts, and checking the defects and position of the printed pad print after
printing. The sequence of algorithms executed for the final implementation is shown below.

Before the training process, a good data set should be collected. Here, 6000 defect-free pad print
images and 500 defective pad print images were collected and trained with the defect detection
algorithm. The 6000 defect-free images were captured from 15 good samples, and the 500 defective
images were captured from five different defective samples. In addition, for the process of
identifying wrinkles and folds, 3000 images of good fabric surfaces and 300 images with wrinkles
and folds on the fabric surface were trained. In the final implementation, the object detection-
based position checking method was used; for that, the NIKE logo in 15 samples was labelled and
trained.

According to the flow chart, the wrinkle and fold checking process starts after the start signal is
received from the PLC. After the pad printing process is completed, the algorithm receives a second
signal from the PLC to start the defect and position checking process of the pad print. The process
continues in this way and stops the pad printing process if an error condition is detected in the
printed pad print. To stop the pad printing process, the algorithm outputs a signal to the PLC
through the GPIO pins of the Jetson Nano developer kit.

Results

Below are the results of three different experiments. All experiments were performed under the
following conditions.

• The same lighting conditions.
• The same components were used.
• The same training models were used.
• In the wrinkle and fold identification process, an anomaly score of -0.001 was taken as the
threshold value.
  - If the anomaly score is higher than -0.001, the algorithm considers the fabric surface to be
    good.
  - If the anomaly score is lower than -0.001, the algorithm considers the fabric surface to have
    wrinkles or folds.
• In the defect identification process, an anomaly score of -0.07 was taken as the threshold value.
  - If the anomaly score is higher than -0.07, the algorithm considers the pad print to be
    defect-free.
  - If the anomaly score is lower than -0.07, the algorithm considers the pad print to be
    defective.
• In the position verifying process, the coordinate [90.1451, 14.4568, 410.4589, 106.7896] was
taken as the reference coordinate.
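The threshold rules above can be collected into two small helper functions. The function names are illustrative; the threshold values are the ones quoted in the bullets:

```python
# Threshold values quoted for the final implementation.
WRINKLE_THRESHOLD = -0.001
DEFECT_THRESHOLD = -0.07

def surface_ok(anomaly_score):
    # True when the fabric surface is good (no folds or wrinkles).
    return anomaly_score > WRINKLE_THRESHOLD

def print_ok(anomaly_score):
    # True when the printed pad print is classified as defect-free.
    return anomaly_score > DEFECT_THRESHOLD
```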

Experiment 01

1. Checking the wrinkles and folds

Figure 73: Experiment 01 result I.

2. Checking defects

Figure 74: Experiment 01 result II.

3. Checking the position of pad print

Figure 75: Experiment 01 result III

It can be seen that there is a small difference between the reference coordinate and the sample
coordinate given for this NIKE logo pad print. Even when the pad print is printed in the right
position, in practice it does not always give exactly the same coordinate. Therefore, the reference
coordinate is given a tolerance of ±0.2.

The results of experiment 01 are shown above. The pad print completed the printing process
successfully without any defects. Here, it took 0.1 seconds to perform the wrinkle and fold
identification process and give the results, and about 0.4 seconds to perform both the defect
identification and position checking processes and give the results.

Experiment 02

1. Checking the wrinkles and folds

Figure 76: Experiment 02 result I.

In this experiment, the surface was identified by the algorithm as having wrinkles and folds.
Printing on this type of surface is bound to be defective. In such cases, the algorithm sends a
signal to the PLC to stop the printing process; therefore, the second experiment ended here.

Experiment 03

1. Checking the wrinkles and folds

Figure 77: Experiment 03 result I.

2. Checking defects

Figure 78: Experiment 03 result II.

In this experiment, the printed pad print was identified as defective; therefore, the third
experiment ended at this stage.

6. DISCUSSIONS

The pad print defect identifier computer vision system is basically designed to detect defects in
real-time while the pad printing process is running. One pad print takes less than 500 milliseconds
to complete, so the three tests above (defect detection, position detection, and wrinkle and fold
detection) should be done within the same period. To perform these actions quickly, the algorithm
should be fully optimized and the processing speed of the compute unit should be sufficient. In
this case, the Nvidia Jetson Nano developer kit was used: a fast and efficient embedded computing
platform widely used for modern AI-based computer vision systems.

Many challenges with lighting conditions, such as white balance, had to be dealt with while this
computer vision system was being deployed in the field. Examples of these kinds of problems are
shown below.

Figure 79: Testing results with white balancing issue.

Figure 80:Testing results without white balancing issue.

The same defect-free pad print has been used under both of the above conditions. It can be seen
that the white balance of the two testing samples is different, which caused the sample shown in
figure 79 to give incorrect results. Therefore, camera parameters such as focal length, aperture,
field of view, resolution and exposure should not be changed while the computer vision system is
working. To avoid this white balancing issue, the camera parameters should be fixed before the
computer vision system starts. In addition, the camera used here should not be an autofocus
camera; it must use manual focus, because with an autofocus camera it is difficult to keep the
camera parameters constant.

The program used for fixed camera parameters is shown below.

Figure 81:The program used for fixed camera parameters.

Although the camera parameters were fixed, the white balance temperature sometimes varied
automatically, which caused wrong predictions. Usually, the image captured by this algorithm is an
RGB (Red Green Blue) image; a single RGB image is represented by a three-dimensional (3D) NumPy
array. To avoid the white balance temperature issue, the RGB image is converted to a grayscale
image, which is represented by a two-dimensional (2D) NumPy array. By converting the RGB image to
grayscale, the hue and saturation information of the image can be eliminated while retaining the
luminance. The source code for converting an RGB image to grayscale is shown below.

Figure 82: The source code of converting RGB image to grayscale.

The main isolation forest anomaly detection algorithm needs a three-dimensional (3D) NumPy array
to generate colour histograms of the images. Therefore, the grayscale image is converted back to
an RGB image. Since the hue and saturation information was removed in the conversion to grayscale,
it is not regenerated when converting back to RGB. The white balance temperature issue can
therefore be avoided by using this method.

Sometimes, even though the algorithm successfully detected the easily noticeable pad print defects,
some defects that might not be visible to the naked eye were not caught. Such small defects were
neglected by the program and considered defect-free pad prints. For example, the characters in the
last row of figure 83 are not printed properly, but the algorithm considers those kinds of
defective pad prints defect-free. To detect these types of small defects, some changes were made
to the size of the testing and training images.

Figure 83: Undetected defective pad print.

To solve this problem, we cropped the training and testing images so that only the pad print is
visible. An example of a cropped image is shown below.

Figure 84: Cropped Image.

The second issue is that, despite the algorithm's good performance for defect-free pad prints, all
of the training images should be captured under the same lighting conditions, which is not an easy
task in practice. Consequently, the algorithm sometimes gives wrong predictions for images captured
under different lighting conditions. The best solution is to enclose the computer vision camera
system and supply an external flicker-free light source to keep the lighting the same in both the
testing and training scenarios. To mitigate this problem, it is also advisable to train on images
of pad prints captured under different lighting conditions. Considering the results of both
approaches, however, enclosing the vision area and fitting a flicker-free light source can be
recommended as the best solution.

In our pad printing machine, it is difficult to enclose the vision area of the computer vision system
completely.

Figure 85: Enclosed computer vision system.

As can be seen in figure 85 above, the effect caused by ambient light can be minimized by
enclosing the vision area as much as possible and by increasing the flicker-free light intensity.

Although good light intensity is essential to get a clear image, very high light intensity can
hinder the identification of small defects in pad prints, because it increases the brightness of
the image too much. To avoid that problem, the shutter speed of the camera was increased to reduce
the exposure time.

This project was done to identify pad printing defects in the apparel industry. In the apparel
industry, the same pad print design is not used every day; designs change frequently. This computer
vision system has to be trained separately for each different pad print design. In some cases, a
threshold value has to be determined to separate defective and defect-free pad prints, and that
threshold value can also change from one pad print design to another. Therefore, even if a GUI
(Graphical User Interface) is designed for this computer vision system, it is preferable to employ
a person with some experience or knowledge to control it.

For this system to work efficiently and accurately, it should be installed in a suitable
environment; it is not at all suitable for working outdoors. It is preferable to install this
computer vision system in an air-conditioned location where the light intensity does not change
frequently and is not obstructed by particulate matter such as dust. When the system runs
continuously, the Jetson Nano developer kit can overheat, which is another reason an air-
conditioned environment is recommended.

7. PROJECT MANAGEMENT

7.1 Gantt Chart

Table 2: Gantt Chart.

7.2 Cost calculation for the project

Component                               Quantity   Price ($)   Total
Nvidia Jetson Nano Developer Kit 4GB       1          99         99
Manual Focus Vision Camera                 1          50         50
Waveshare Protection Case                  1          25         25
Power Adapter 5V 2A (Jetson Nano)          1           5          5
Camera Mounting Bracket                    1           5          5
USB Extension Cable                        2           2          4
Flicker-free LED Light Panel               2          10         20

TOTAL COST ($)                                                  208


Table 3: Cost Calculation of Project.

8. CONCLUSIONS

An overview of pad print defect detection in the apparel industry was offered in this study. Due to
increased market competition and demand, the manual human inspection technique is insufficient and
ineffective for meeting today's needs in the textile sector. The current need is for the inspection
process to be done using industrial automation, to improve the quality and lower the manufacturing
cost of the finished textile product. Computer vision, digital image processing and artificial
intelligence can provide a foundation for solving this industrial need.

In this thesis, three main algorithms have been developed to improve the quality of pad prints and
increase production speed. The computer vision system checks for folds and wrinkles before the pad
printing process starts; if there is any wrinkle or fold on the fabric surface, the pad printing
process is not allowed to proceed, because printing on a wrinkled or folded surface would produce a
defective pad print. In this way the system also prevents the wastage of raw material. After the
pad printing process is completed, the system checks for defects and verifies the position of the
print. Some small defects, and the position of the pad print, are not easy to check by manual human
inspection; in some cases, human inspection may be incorrect or take more time.

The current tendency is to apply deep learning models, which require a high computational cost for
training and a large amount of training data. In this research, however, a colour histogram-based
anomaly detection machine learning algorithm has been used to reduce the computational complexity
and for ease of use in industry. This thesis has described in depth the techniques used to increase
the accuracy of the histogram-based anomaly detection algorithm.

Finally, this system was installed on a pad printing machine and was able to provide good results
with more accuracy when tested in real-time.

9. FURTHER DEVELOPMENT

This computer vision system is based on pad printing. Considering the final results of the defect
detection algorithm, it can be seen that it performs well in detecting pad print defects. However,
the computer vision system alone is not enough to fully automate the pad printing process: although
the system identifies a defective pad print, there is no mechanism to remove it and continue the
pad printing process. To design a fully automated pad printing machine, a PLC (programmable logic
controller) and pneumatic-based automation system can be suggested as further development.

When it comes to further development of the AI-based computer vision algorithm, many improvements
can be made. Here, the isolation forest method was used for anomaly detection. Although the
efficiency of the algorithm is sufficient for this pad printing defect identification project, it
can be optimized further by continuing this research. Making the algorithm more efficient increases
the success of the entire process.

The Jetson Nano developer kit with a Wi-Fi module was used in this industrial experiment, but the
Wi-Fi module is not yet used for any particular purpose. Using it, an IoT-based remote real-time
data monitoring system could be developed, along with a cloud database. If such an IoT-based
system were implemented, the number of defect-free pad prints, the number of defective pad
prints, and whether the pad printing process is running correctly at the time of printing could be
checked from anywhere in the world. Modern Industry 4.0 production lines commonly use this
kind of IoT-based remote monitoring system and production database. By analyzing the data
obtained from the database, defects in the production process are easy to identify, and the data
helps in designing new concepts to increase production volume.
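A minimal sketch of the data such a monitoring system might publish is shown below. The field names and machine identifier are hypothetical, and the actual transport (an MQTT client, HTTP POST, or a cloud SDK over the Wi-Fi module) would replace the final `print`.

```python
import json
import time

class ProductionCounter:
    """Accumulates pad-print statistics for remote monitoring."""

    def __init__(self, machine_id):
        self.machine_id = machine_id
        self.good = 0
        self.defective = 0

    def record(self, is_defect):
        """Call once per inspected pad print with the detection result."""
        if is_defect:
            self.defective += 1
        else:
            self.good += 1

    def payload(self):
        """JSON document ready to publish to the cloud database."""
        return json.dumps({
            "machine": self.machine_id,
            "timestamp": time.time(),
            "good": self.good,
            "defective": self.defective,
            "total": self.good + self.defective,
        })

# Example: four inspections, one of which was flagged defective
counter = ProductionCounter("pad-printer-01")
for defect in (False, False, True, False):
    counter.record(defect)
print(counter.payload())
```

Publishing one such document per inspection (or per shift) would give the remote dashboard everything described above: good counts, defect counts, and a heartbeat showing the line is running.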

10. APPENDIX

The source code of the final implementation for pad printing defect detection is given below.

# Importing libraries
from features import quantify_image  # colour-histogram feature extractor (features.py)
import pickle
import cv2

# Connected with GPIO pins
import Jetson.GPIO as GPIO

GPIO.setmode(GPIO.BOARD)
mode = GPIO.getmode()
GPIO.setwarnings(False)

# Channel numbers are based on the BOARD pin-numbering mode set above
channel = 12
channel2 = 40

GPIO.setup(channel, GPIO.IN)
GPIO.setup(channel2, GPIO.IN)

# Webcam connected to Python
vid = cv2.VideoCapture(0)

# Wait for a start signal on the two input pins
y = 0
while y == 0:
    if GPIO.input(channel) == 0 and GPIO.input(channel2) == 1:
        y = 10  # start the wrinkle-and-fold check
    elif GPIO.input(channel) == 1 and GPIO.input(channel2) == 0:
        y = 20  # start the pad-print defect check
    else:
        y = 0   # no valid signal yet; keep polling

while y == 10 or y == 20:

    # Check the surface for folds and wrinkles before printing
    while y == 10:
        ret, image = vid.read()
        image = image[40:440, 170:450]  # crop to the printing area
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

        print("[Loading] loading anomaly detection model_AD...")
        filename = 'finalized_model_AD.sav'
        model = pickle.load(open(filename, 'rb'))

        # Convert to HSV and quantify the image as a colour histogram
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        features = quantify_image(hsv, bins=(3, 3, 3))

        # Use the model to classify the extracted features
        preds = model.predict([features])[0]
        x = model.decision_function([features])

        label = "anomaly" if x < -0.08 else "normal"
        color = (0, 0, 255) if x < -0.08 else (0, 255, 0)
        print(x)
        print(preds)

        # Draw the predicted label text on the original image
        cv2.putText(image, label, (100, 250), cv2.FONT_HERSHEY_SIMPLEX, 5, color, 2)
        cv2.imshow("Output", image)
        cv2.waitKey(1000)

        y = 0  # fold-and-wrinkle check complete; wait for the next signal

    # Defect check on the finished pad print
    while y == 20:
        ret, image = vid.read()
        image = image[40:440, 170:450]
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

        print("[Loading] loading anomaly detection model_F&W...")
        filename = 'finalized_model_F&W.sav'
        model = pickle.load(open(filename, 'rb'))

        # Convert to HSV and quantify the image as a colour histogram
        hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        features = quantify_image(hsv, bins=(3, 3, 3))

        # Use the model to classify the extracted features
        preds = model.predict([features])[0]
        x = model.decision_function([features])

        label = "anomaly" if x < -0.08 else "normal"
        color = (0, 0, 255) if x < -0.08 else (0, 255, 0)
        print(x)
        print(preds)

        # Draw the predicted label text and display the image
        cv2.putText(image, label, (100, 250), cv2.FONT_HERSHEY_SIMPLEX, 5, color, 2)
        cv2.imshow("Output", image)
        cv2.waitKey(1000)

        y = 0  # defect detecting process complete

vid.release()
cv2.destroyAllWindows()

References

1. Wongsuphasawat, K., Smilkov, D., Wexler, J., Wilson, J., Mane, D., Fritz, D., Krishnan,
D., Viégas, F.B. and Wattenberg, M., 2017. Visualizing dataflow graphs of deep learning
models in TensorFlow. IEEE transactions on visualization and computer graphics, 24(1),
pp.1-12.

2. Gafurov, A.N., Phung, T.H., Kim, I. and Lee, T.M., 2022. AI-assisted reliability
assessment for gravure offset printing system. Scientific Reports, 12(1), pp.1-11.

3. Xu, B., Ye, W. and Wang, Y., 2018, May. Design of Machine Vision Defect Detecting
System Based on Halcon. In Proc. 2018 International Conference on Mechanical,
Electrical, Electronic Engineering & Science (MEEES) (pp. 361-365).

4. Mathur, A., Pathare, A., Sharma, P. and Oak, S., 2019, June. AI-based reading system
for the blind using OCR. In 2019 3rd International conference on Electronics,
Communication and Aerospace Technology (ICECA) (pp. 39-42). IEEE.

5. Unoski, J., 2000. The history of recognition in banking. American Bankers Association.
ABA Banking Journal, 92(5), p.69.

6. Che, D., Liu, Q., Rasheed, K. and Tao, X., 2011. Decision tree and ensemble learning
algorithms with their applications in bioinformatics. Software tools and algorithms for
biological systems, pp.191-199.

7. Chi, H., 2015, December. A discussion on the least-square method in the course of error
theory and data processing. In 2015 International conference on computational
intelligence and communication networks (CICN) (pp. 486-489). IEEE.

8. Demuthush, R., 2019. A Brief History of Computer Vision (and Convolutional Neural
Networks). [Online]
Available at: https://medium.com/hackernoon/a-brief-history-of-computer-vision-and-
convolutional-neural-networks-8fe8aacc79f3
[Accessed 11 04 2022].

9. Vladimir, G., Evgen, I. and Aung, N.L., 2019, January. Automatic Detection and
Classification of Weaving Fabric Defects Based on Digital Image Processing. In 2019
IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering
(EIConRus) (pp. 2218-2221). IEEE.

10. Prasanna, P., Dana, K., Gucunski, N. and Basily, B., 2012, April. Computer-vision based
crack detection and analysis. In Sensors and smart structures technologies for civil,
mechanical, and aerospace systems 2012 (Vol. 8345, pp. 1143-1148). SPIE.

11. Jeyaraj, P.R. and Nadar, E.R.S., 2019. Computer vision for automatic detection and
classification of fabric defects employing deep learning algorithm. [Online]
Available at: https://www.proquest.com/docview/2269411392?accountid=14685
[Accessed 16 04 2022].

12. Liu, F.T., Ting, K.M. and Zhou, Z.H., 2012. Isolation-based anomaly detection. ACM
Transactions on Knowledge Discovery from Data (TKDD), 6(1), pp.1-39.

13. Sato, K., Kan'no, H. and Ito, T., 1991, October. System for inspecting pad-printed
characters using the normalized correlation of the segmented character images.
In Proceedings IECON'91: 1991 International Conference on Industrial Electronics,
Control and Instrumentation (pp. 1929-1932). IEEE.

14. Chun, J., Kim, Y., Shin, K.Y., Han, S.H., Oh, S.Y., Chung, T.Y., Park, K.A. and Lim, D.H.,
2020. Deep learning–based prediction of refractive error using photorefraction images
captured by a smartphone: model development and validation study. JMIR Medical
Informatics, 8(5), p.e16225

15. Viola, P. and Jones, M.J., 2004. Robust real-time face detection. International journal of
computer vision, 57(2), pp.137-154.

16. Sarfraz, M., 2020. Introductory Chapter: On Digital Image Processing. [Online]
Available at: https://www.intechopen.com/chapters/71817
[Accessed 10 04 2021].

17. Arbelaez, P., Maire, M., Fowlkes, C. and Malik, J., 2010. Contour detection and
hierarchical image segmentation. IEEE transactions on pattern analysis and machine
intelligence, 33(5), pp.898-916.

18. Thera, J., 2022. Keras vs Tensorflow vs Pytorch: Key Differences Among the Deep
Learning Framework. [Online]
Available at: https://www.simplilearn.com/keras-vs-tensorflow-vs-pytorch-
article#:~:text=TensorFlow%20is%20an%20open-sourced,because%20it's%20built-
in%20Python

19. Zhang, E., Chen, Y., Gao, M., Duan, J. and Jing, C., 2019. Automatic defect detection for
web offset printing based on machine vision. Applied sciences, 9(17), p.3598.

20. Yu, H., Keshavamurthy, S., Bai, H., Sheorey, S., Nguyen, H. and Taylor, C.N., 2014,
December. Uncertainty estimation for random sample consensus. In 2014 13th
International Conference on Control Automation Robotics & Vision (ICARCV) (pp. 395-
400). IEEE.

21. Ren, M., Zeng, W., Yang, B. and Urtasun, R., 2018, July. Learning to reweight examples
for robust deep learning. In International conference on machine learning (pp. 4334-
4343). PMLR.

22. Manovich, L., 1996. The automation of sight: from photography to computer
vision. Electronic Culture: Technology and Visual Representation, pp.229-239.

23. Mori, S., Suen, C.Y. and Yamamoto, K., 1992. Historical review of OCR research and
development. Proceedings of the IEEE, 80(7), pp.1029-1058.

24. Gulli, A. and Pal, S., 2017. Deep learning with Keras. Packt Publishing Ltd.

25. Basulto-Lantsova, A., Padilla-Medina, J.A., Perez-Pinal, F.J. and Barranco-Gutierrez,
A.I., 2020, January. Performance comparative of OpenCV Template Matching method on
Jetson TX2 and Jetson Nano developer kits. In 2020 10th Annual Computing and
Communication Workshop and Conference (CCWC) (pp. 0812-0816). IEEE.

26. Dridi, S., 2021. Unsupervised Learning-A Systematic Literature Review.

27. Liu, F.T., Ting, K.M. and Zhou, Z.H., 2008, December. Isolation forest. In 2008 Eighth
IEEE International Conference on Data Mining (pp. 413-422). IEEE.

28. Ahmed, S., Lee, Y., Hyun, S.H. and Koo, I., 2019. Unsupervised machine learning-based
detection of covert data integrity assault in smart grid networks utilizing isolation forest.
IEEE Transactions on Information Forensics and Security, 14(10), pp.2765-2777.

29. Kocer, S., Dundar, O. and Butuner, R., 2021. Programmable Smart
Microcontroller Cards.

30. Zhong, S., Fu, S., Lin, L., Fu, X., Cui, Z. and Wang, R., 2019, June. A novel
unsupervised anomaly detection for gas turbine using isolation forest. In 2019 IEEE
International Conference on Prognostics and Health Management (ICPHM) (pp. 1-6).
IEEE.

31. Liu, F.T., Ting, K.M. and Zhou, Z.H., 2012. Isolation-based anomaly detection. ACM
Transactions on Knowledge Discovery from Data (TKDD), 6(1), pp.1-39.

32. Rasheed, A., Zafar, B., Rasheed, A., Ali, N., Sajid, M., Dar, S.H., Habib, U., Shehryar, T.
and Mahmood, M.T., 2020. Fabric defect detection using computer vision techniques: a
comprehensive review. Mathematical Problems in Engineering, 2020.
