
IAES International Journal of Artificial Intelligence (IJ-AI)

Vol. 12, No. 2, June 2023, pp. 840~850


ISSN: 2252-8938, DOI: 10.11591/ijai.v12.i2.pp840-850

Iban plaited mat motif classification with adaptive smoothing

Silvia Joseph, Irwandi Hipiny, Hamimah Ujir


Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Sarawak, Malaysia

Article Info

Article history:
Received May 29, 2022
Revised Aug 19, 2022
Accepted Sep 18, 2022

Keywords:
Binary robust invariant scalable keypoints
Canny edge detector
Image classification
Morphological dilation
Plaited mat motifs
Probabilistic hough line transform
Random sample consensus

ABSTRACT

Decorative mats plaited by the Iban communities in Borneo contain motifs that reflect their traditional beliefs. Each motif has its own special meaning and taboos. A typical mat motif contains multiple smaller patterns that surround the main motif and is hence likely to cause misclassification. We introduce a classification framework with adaptive smoothing to remove smaller features whilst retaining larger (and discriminative) image structures. A Canny filter and the probabilistic Hough transform are gradually applied to a clean greyscale image until a threshold value pertaining to the image's structural information is reached. Morphological dilation is then applied to improve the appearance of the retained edges. The resulting image is described using binary robust invariant scalable keypoints (BRISK) features with random sample consensus (RANSAC). We report the classification accuracy against six common image deformations at incremental degrees: scale+rotation, viewpoint, image blur, joint photographic experts group (JPEG) compression, scale, and illumination. From our sensitivity analysis, we found the optimal threshold for adaptive smoothing to be 75.0%. The optimal scheme obtained 100.0% accuracy for the JPEG compression, illumination, and viewpoint sets. Using adaptive smoothing, we achieved an average increase in accuracy of 11.0% compared to the baseline.
This is an open access article under the CC BY-SA license.

Corresponding Author:
Silvia Joseph
Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak
94300 Kota Samarahan, Sarawak, Malaysia
Email: silvia.joe8@gmail.com

1. INTRODUCTION
Borneo is widely known as home to some of the world's richest and most aesthetically appealing plait work traditions. These traditions derive from the island's diverse cultural history and are a product of the creativity and ingenuity of its different communities [1]. Decorative mat plaiting is a craft skill known in most indigenous communities in Borneo. Plaited mats are used as basic floorcovering, for sitting, for sleeping, and for ritual purposes during religious ceremonies. Sleeping and ritual mats tend to be the most elaborately decorated and may feature sacred motifs believed to carry spiritually powerful patterns that establish a link between humans and the gods [2]. The modern practice of naming plaited mat motifs has suffered from relative neglect in material culture studies, often leaving the names devoid of meaning. The relation between a motif and the name by which it is called is often not relevant; thus, it is specious to interpret motifs only through their names [3]. Our previous work [4] investigated the use of an invariant image-based feature descriptor for recognizing Iban plaited mat motifs. We argue that image-based classification is the best route to automate the mat motif recognition task. With modern smartphones being equipped with cameras as standard, this feature can be integrated into educational and edutainment apps, which are an excellent way to promote this invaluable cultural heritage and educate the public about it.


This paper focuses on plaited mat work by the Iban communities. Plaited mats contain unique motifs that are either simple or complex. The motifs are mostly stylized beyond recognition. Most motifs are plaited based on loose categories that include natural elements such as plants, animals, firmaments, and faerie [5]. The main motif plays a dominant role in determining the specific meaning, whereas the smaller motifs only serve as fillers for the empty areas surrounding the main motif. These motifs are usually arranged repeatedly to form the mat pattern as a whole, see Figure 1. Each motif has its own distinctive features. Thus, to recognize a plaited mat motif, we need to extract discriminative as well as robust visual features. Nevertheless, due to the diversity of appearances and the variety of positions, scales, and rotations of the major and minor motifs, the recognition task is non-trivial. Furthermore, the image captured by the camera may undergo common image deformations, making the task even harder.

Figure 1. An example of an Iban plaited mat motif, i.e., Buah Tungku. Items (1) and (2) are small ornaments that act as fillers, i.e., Buah Mata Punai. The image was sourced from [4]

To date, a significant amount of research has been done on the robust detection, description, and matching of invariant features for motif and pattern classification. Feature extraction and classification methods have been applied to batik motif and batik-making recognition using convolutional neural network (CNN) architectures [6]–[10], multiwindow and multiscale extended center symmetric local binary patterns (MU2ECS-LBP) [11], [12], scale invariant feature transform (SIFT) [13], [14], and gray level co-occurrence matrices (GLCM) [15], [16], as well as to fine arts [17]–[19], license plate recognition [20], [21], and tattoo recognition [22]–[24]. Even though the reported performance was quite high, these methods still suffer from false positives due to similar features, images that contain more than one pattern, and noisy backgrounds. To deal with these problems, we introduce a classification framework with adaptive smoothing to remove small image features whilst protecting the discriminative structural information (i.e., edges). The appearance of the retained edges is enhanced using morphological dilation. The rest of this paper is organized as follows: section 2 details the data collection and pre-processing phase, section 3 outlines the proposed method, the results and analysis are presented in section 4, and the conclusion is provided in section 5.

2. DATA COLLECTION AND PRE-PROCESSING


Red, green and blue (RGB) images of the decorative plaited mat motifs were captured indoors using a setup consisting of a downward-facing camera with default settings. For this initial work, only 20 motifs out of the possible 50 were used in the experiments. We labelled each class by the most prominent motif visible on the plaited mat upon consultation with the experts. The images were resized to a resolution of 800×533 pixels (the original resolution is 5,184×3,456 pixels) for performance reasons. The resized RGB images were then cropped and perspective-corrected to produce synthetic images. Following the decision made in our previous work [4], synthetic images are used as the target images instead of the actual photographs to prevent background noise from polluting the resulting image descriptor. Examples of synthetic plaited mat motif images of different classes are shown in Figure 2.
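A minimal sketch of this pre-processing step, assuming an OpenCV implementation, is given below; the corner coordinates and the function name are hypothetical placeholders rather than part of our actual pipeline.

    # Illustrative sketch (assumed OpenCV implementation, not our exact code) of
    # producing a synthetic, perspective-corrected motif image.
    import cv2
    import numpy as np

    def make_synthetic(path, corners, out_size=(800, 533)):
        # corners: 4x2 array of the motif's corner pixels in the resized image,
        # ordered top-left, top-right, bottom-right, bottom-left (hypothetical
        # manual annotation)
        img = cv2.imread(path)                      # original 5,184x3,456 photograph
        img = cv2.resize(img, (800, 533))           # downscale for performance
        w, h = out_size
        src = np.float32(corners)
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        H = cv2.getPerspectiveTransform(src, dst)   # homography from the 4 point pairs
        return cv2.warpPerspective(img, H, (w, h))  # fronto-parallel synthetic image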
Common image deformations [25] of incremental degrees were applied to the synthetic images to produce our query sets. We used the same set of incremental degree values as in our previous work [4]. Since we have a total of 20 classes, the precision of random classification by chance is 1/20 = 0.05.


Figure 2. Examples of synthetic images, i.e., Buah Lang Antu, Buah Burung Garuda, Buah Bandau, Buah
Lintan Tanah and Buah Nabau Besundang (from left to right)

3. PROPOSED METHOD
The proposed method utilizes adaptive smoothing and morphological dilation to not only remove noise but also recover lost image information and enhance image details. We start by convolving the 800×533 greyscale image with a 5×5 Gaussian kernel, i.e., the first derivative of the 2D Gaussian smoothing filter function. The purpose of this smoothing step is to destroy the small image features that we consider as noise. The smoothed image is then converted into an edge image using the Canny filter. In our proposed classification framework with adaptive smoothing, we gradually reduce the number of remaining edges by iteratively applying the Canny filter with an increasing low threshold value to the original edge image, using a fixed increment of 5 units. By setting a higher value for the Canny filter's low threshold parameter, fewer edges are kept as the algorithm becomes more stringent in rejecting weak candidates. We stop the iteration once the percentage of remaining edges (PoRE) is lower than or equal to the set threshold value (i.e., 100.0%, 90.0%, 85.0%, 80.0%, 75.0%, 70.0%, 50.0%, or 20.0%). These threshold values were selected arbitrarily, as it is not possible to tune the threshold and locate the optimal value directly; the thresholds are instead evaluated using a validation set. PoRE is calculated using (1),
PoRE = (current number of detected edges using PHT / initial number of detected edges using PHT) × 100    (1)

We calculate the current percentage at each iteration using the probabilistic Hough transform (PHT) to extract the image's structural information. The iteration stops once PoRE is equal to or less than the target value; see Figure 3 for a flowchart of the adaptive smoothing function. This function is implemented inside our classification framework.
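The following is a minimal sketch of the adaptive smoothing loop described above, assuming an OpenCV implementation; the Canny starting thresholds and PHT parameters are illustrative assumptions rather than the exact values used in our experiments.

    # Illustrative sketch of the adaptive smoothing loop (assumed OpenCV code).
    import cv2
    import numpy as np

    def count_edges_pht(edge_img):
        # Structural information proxy: number of line segments found by the
        # probabilistic Hough transform (PHT); parameters are illustrative.
        lines = cv2.HoughLinesP(edge_img, 1, np.pi / 180, threshold=50,
                                minLineLength=20, maxLineGap=5)
        return 0 if lines is None else len(lines)

    def adaptive_smoothing(grey, pore_target=75.0, low0=50, high=150, step=5):
        blurred = cv2.GaussianBlur(grey, (5, 5), 0)        # remove small features
        edges = cv2.Canny(blurred, low0, high)
        initial = count_edges_pht(edges)                   # denominator of Eq. (1)
        low = low0
        while initial > 0:
            pore = 100.0 * count_edges_pht(edges) / initial   # Eq. (1)
            if pore <= pore_target or low >= high:
                break
            low += step                                    # stricter edge rejection
            edges = cv2.Canny(blurred, low, high)
        kernel = np.ones((3, 3), np.uint8)
        return cv2.dilate(edges, kernel, iterations=1)     # improve edge appearance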

Figure 3. Our classification framework with adaptive smoothing


To validate our classification framework, we conducted an experiment on motif image classification against the following common image deformations: image blur, illumination changes, JPEG compression, viewpoint changes, scale+rotation changes, and scale changes. Using our dataset, we prepared a test sequence (i.e., query images) for each motif class. A test sequence contains five images; each image exhibits a gradual increment of geometric and/or photometric transformation of the initial image. Following the recommendation by [25], we used the same six image deformation types as in our previous work [4]. For each motif class, both the test image (i.e., the synthetic image) and the test sequence's images were converted into edge images at different PoRE values. The final edge images were used during the classification exercise. As a comparison set, we also produced a second set by applying PHT on the final edge images.
In total, we have 20 motif classes × 6 image deformation types × 5 deformation levels × 8 PoRE values (100%, 90%, 85%, 80%, 75%, 70%, 50%, and 20%), yielding a final total of 9,600 query images. Figure 4(a) shows examples of query images produced at different PoRE values, whilst Figure 4(b) shows the corresponding query images with dilation (to improve the appearance of edges). Figure 5 shows selected query images with incremental degrees of image deformation. For the feature matching and classification of the prepared target and query images, binary robust invariant scalable keypoints (BRISK) [26] were used. BRISK detects corners using the adaptive and generic accelerated segment test (AGAST) algorithm [27] and performs filtering with the features from accelerated segment test (FAST) corner score while searching for maxima inside the scale-space pyramid. Outlier rejection from matched features was performed using random sample consensus (RANSAC). RANSAC estimates the homography matrix and uses that information to eliminate mismatches/possible false key points. We used the readily available implementations of BRISK and RANSAC in the VLFeat Version 0.9.21 library [28]. This work was implemented on a Windows laptop with the following specifications: Intel Core i7-8550U @ 1.80 GHz processor and 8 GB RAM.
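Our experiments use the VLFeat implementations; the sketch below illustrates an equivalent BRISK + RANSAC matching step using OpenCV, where the ratio test and the RANSAC reprojection threshold are assumed values rather than the parameters used in our evaluation.

    # Illustrative OpenCV equivalent of the BRISK + RANSAC matching step.
    import cv2
    import numpy as np

    def match_brisk_ransac(query_img, target_img, ratio=0.8):
        # query_img, target_img: final (dilated) edge images
        brisk = cv2.BRISK_create()
        kq, dq = brisk.detectAndCompute(query_img, None)
        kt, dt = brisk.detectAndCompute(target_img, None)
        if dq is None or dt is None:
            return 0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)          # binary descriptors
        pairs = matcher.knnMatch(dq, dt, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) < 4:
            return 0                                       # homography needs >= 4 pairs
        src = np.float32([kq[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kt[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return 0 if mask is None else int(mask.sum())      # number of correct matches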


Figure 4. From left to right, query images produced using PoRE values of 100.0%, 75.0%, and 20.0% for the
following motif classes (top to bottom row) Buah Lang Antu, Buah Burung Garuda, Buah Bandau, Buah
Tungku Lebur and Buah Nabau Besundang. Two versions are shown, i.e., (a) without dilation and
(b) with dilation


Deformation level   Blur   JPEG compression   Illumination   Viewpoint                                   Scale   Scale+rotation
1                   5%     5%                 5%             Fronto-parallel surface to the right, 10º   95%     15º
2                   35%    30%                20%            Fronto-parallel surface to the left, 10º    65%     50º
3                   50%    60%                40%            Slanted surface, 60º                        35%     100º
4                   65%    90%                60%            Slanted surface, 40º                        15%     200º
5                   80%    100%               90%            Slanted surface, 20º                        5%      300º

Figure 5. From top to bottom, selected query images with incremental degrees of image deformation. From left to right, columns showing query images of the Buah Lang Antu sequence for blur changes, Buah Burung Garuda sequence for JPEG compression changes, Buah Bandau sequence for illumination changes, Buah Tungku Lebur sequence for viewpoint changes, Buah Nabau Besundang sequence for scale changes, and Buah Tangkai Gajai Besembah sequence for scale+rotation changes

4. RESULTS AND ANALYSIS


We recorded the number of detected BRISK key points inside the query vs. the template image, the number of correspondences, and the number of correct matches after RANSAC. For image blurring, when the level of distortion increases, the number of detected BRISK+RANSAC key points decreases dramatically, especially at the 5th level of deformation without dilation, as shown in Table 1. However, with dilation enabled at PoRE=75.0%, the decrease in number does not correlate with a loss of accuracy. False matches only started at the 3rd deformation level for image blurring, while for scale+rotation, we observed that BRISK+RANSAC is able to detect a considerable number of key points even at increasing degrees of deformation. We also found that our method performed well for viewpoint changes with fronto-parallel surface images at a value of 10º (left/right) and slanted surface images at values ranging from 20º to 60º. BRISK+RANSAC managed to detect enough key points even at the worst deformation level of scale+rotation changes. It is clearly demonstrated that BRISK+RANSAC using our proposed method performed well against JPEG compression, viewpoint, and illumination changes, as shown in Table 2. The unique properties of symmetrical, repetitive, similar, and complex motifs, i.e., Buah Tangkai Gajai Besembah and Buah Tungku Lebur, create a challenge for the classification task that leads to some errors during the matching process, as shown in Figures 6(a) and 6(b). Motifs with simpler features, i.e., Buah Mata Punai, which sometimes acts as a filler for other motifs, have a smaller number of strong edges, resulting in a reduced number of discovered key points, as shown in Figure 6(c).


Figure 6. Three motifs with high misclassification rate (a) Buah Bulan Bekarung,
(b) Buah Tangkai Gajai Besembah, and (c) Buah Mata Punai

Table 1. Number of detected BRISK+RANSAC keypoints using the optimal PoRE value of 75.0% for each
image deformation category, with and without morphological dilation
Image deformation    w/o dilation (1st level)   w/o dilation (5th level)   w/ dilation (1st level)   w/ dilation (5th level)
Blur                 155                        0                          7882                      164
Illumination         117275                     50991                      140871                    64996
Viewpoint            457                        36                         2994                      231
JPEG compression     82992                      1300                       128084                    10379
Scale+rotation       397                        30                         1307                      122
Scale                795                        138                        2516                      527

The following metrics were calculated:


a) True positives (TP): the number of query images being correctly classified with a similarity score higher
than the threshold.
b) False positives (FP): the number of query images being incorrectly classified with a similarity score
higher than the threshold.
c) False negatives (FN): the number of instances where multiple top matches were returned, or the top match's similarity score is lower than the threshold.
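A hedged sketch of how these outcomes could be assigned per query is given below; the similarity score (e.g., the number of RANSAC inliers) and the threshold value are assumptions for illustration only.

    # Illustrative (assumed) assignment of a query to TP/FP/FN under the rules above.
    def classify_outcome(scores, true_class, threshold):
        # scores: dict mapping each of the 20 candidate classes to a similarity
        # score for one query image (e.g., number of RANSAC inliers)
        best = max(scores.values())
        top = [c for c, s in scores.items() if s == best]
        if best < threshold or len(top) > 1:
            return "FN"          # unresolved: weak or ambiguous top match
        return "TP" if top[0] == true_class else "FP"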

Table 2. Comparative failure point of using BRISK+RANSAC with optimal PoRE value of 75.0%,
for each image deformation category. We show the failure point (a) w/o morphological dilation, and
(b) w/ morphological dilation
(a)
Level of image deformation Blur Illumination JPEG compression Viewpoint Scale+rotation Scale
Image1 FN TP TP TP FN FN
Image2 FN TP TP TP FN FN
Image3 FN TP TP TP FN FN
Image4 FN TP TP FN FN FN
Image5 FN TP TP FN FN FN
(b)
Level of image deformation Blur Illumination JPEG compression Viewpoint Scale+rotation Scale
Image1 TP TP TP TP TP TP
Image2 TP TP TP TP FN TP
Image3 FN TP TP TP TP TP
Image4 FN TP TP TP FN TP
Image5 FN TP TP TP FN FN

The results of keypoint matching between the query and template images are summarized in the form of confusion matrices, as shown in Figures 7(a) to 7(f). We have 20 actual classes and 20+1 predicted classes. The additional predicted class, i.e., the unresolved class, represents cases where our classification returns multiple top matches or a similarity score lower than the threshold. We consider such cases as FN to penalise the configuration. Based on the confusion matrices obtained from the classification results using our proposed framework with the optimal PoRE value of 75.0% + dilation, the blur set in Figure 7(a) shows the highest number of unresolved classes, followed by the scale set in Figure 7(d) and the scale+rotation set in Figure 7(f). Most of the unresolved results for the blur set are due to the false top matches produced by the simple motif Buah Mata Punai and the complex motifs Buah Bulan Bekarung, Buah Kandung Nibung Berayah, Buah Sebayan, and Buah Tangkai Gajai Besembah. From our experiment, we observed that blurring is the most destructive image deformation type (compared to the rest) as its application produces many false top matches. There are no unresolved classes for the JPEG compression, illumination, and viewpoint sets, see

Figures 7(b) to 7(e). Figure 8(a) shows a synthetic image of Buah Tungku Lebur, and Figure 8(b) shows the resulting Canny edge image. At the PoRE value of 75.0%, the detected BRISK features after blurring at increasing levels are visualized in Figures 8(c) to 8(f). Side-by-side comparisons of visualized matching results for selected motifs using BRISK+RANSAC at deformation level 1 (left column) vs. level 5 (right column) are shown in Figure 9. We include a sample visualized matching result for each of the following image deformation types: i) blur, see Figure 9(a); ii) viewpoint, see Figure 9(b); iii) scale, see Figure 9(c); and iv) scale+rotation, see Figure 9(d). It is clearly demonstrated that with our proposed method we managed to match enough key points, even at the highest deformation level. Based on the confusion matrix, accuracy is calculated as (2),
Accuracy = (TP + TN) / P    (2)

where TP = true positives, TN = true negatives (0 by default), and P = the population size.
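As an illustration, (2) can be computed directly from the confusion matrix as sketched below, assuming a 20×21 array with the unresolved column appended last; the array layout is an assumption made for this example.

    # Illustrative computation of Eq. (2) from a confusion matrix.
    import numpy as np

    def accuracy(confusion):
        # confusion: 20 x 21 array, rows = actual classes, columns = 20 predicted
        # classes followed by the unresolved class
        n = confusion.shape[0]
        tp = np.trace(confusion[:, :n])    # diagonal entries are correct matches
        tn = 0                             # TN is 0 by definition in this setup
        population = confusion.sum()
        return (tp + tn) / population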

[Figure 7, panels (a)-(f): 20×21 confusion matrices with actual classes as rows and predicted classes (20 motif classes plus an unresolved class) as columns.]

Figure 7. Confusion matrix obtained from the classification results using our proposed framework with
optimal PoRE value of 75.0%+dilation, for (a) blur set, (b) JPEG compression set, (c) illumination set, (d)
scale set, (e) viewpoint set, (f) scale+rotation set


Figure 8. Synthetic image of (a) Buah Tungku Lebur, (b) the resulting edge image produced by the Canny filter, and the detected BRISK key points when using the optimal PoRE value of 75.0% at (c) blur level 1, (d) level 2, (e) level 3, and (f) level 5

In Figure 10, we present the average matching accuracy achieved by the following schemes: scheme 1, synthetic image and BRISK+RANSAC; scheme 2, synthetic image and BRISK+RANSAC with Canny filter; scheme 3, synthetic image and BRISK+RANSAC with Canny filter and PoRE (without dilation); and scheme 4, synthetic image and BRISK+RANSAC with Canny filter and PoRE (with dilation). The average accuracies are 85.0%, 95.0%, 79.0%, and 96.0%, respectively. Comparing dilation vs. no dilation, we observed that with dilation enabled, the average accuracy increases by 17.0%. The optimal scheme, i.e., scheme 4, offers an average increase of 11.0% compared to the baseline, i.e., scheme 1, and an average increase of 1.0% compared to scheme 2. Figure 11 illustrates the average matching accuracy using BRISK+RANSAC at different PoRE values for each image deformation type. The optimal PoRE value is observed to be 75.0%, with comparably higher average matching accuracies of 83.0%, 100.0%, 100.0%, 100.0%, 92.0%, and 99.0% across all image deformations (blur, illumination, JPEG compression, viewpoint, scale+rotation, and scale) compared to the rest.


Figure 9. Side-by-side comparisons of visualized matching results of Buah Lang Antu, Buah Tungku Lebur,
Buah Nabau Besundang and Buah Tangkai Gajai Besembah using BRISK+ RANSAC at deformation level 1
(left column) vs. level 5 (right column). We show a sample matching result (optimal PoRE=75.0%, dilation
enabled) each for the following image deformation types, i.e., (a) blur, (b) viewpoint, (c) scale, and
(d) scale+rotation

Figure 10. Performance comparison of average matching accuracy using BRISK+RANSAC at different
schemes, following six image deformations

Figure 11. Average matching accuracy of BRISK+RANSAC at different PoRE values (with dilation) following six image deformations

5. CONCLUSION
In this work, we presented an adaptive smoothing scheme consisting of a Canny edge detector and PHT to gradually remove smaller image structures from a given mat motif greyscale image. Dilation is then used to restore broken edges after the application of the smoothing scheme to improve detection. Motifs are matched using BRISK features and RANSAC against six common image deformations: combined scale and rotation, viewpoint, image blur, JPEG compression, scale, and illumination changes. Varying degrees of each deformation type were applied to all cropped and perspective-corrected mat motif images. The optimal PoRE value is 75.0%, with a reported 100.0% matching accuracy for the JPEG compression, illumination, and


viewpoint change set. Our optimal scheme achieved an average increase in accuracy of 11.0% compared to
classification using only BRISK+RANSAC on synthetic RGB images (i.e., baseline method). It is observed
that the proposed scheme improves the classification accuracy of mat motif images with complex motif
across common image deformation types.

ACKNOWLEDGEMENT
We wish to acknowledge Universiti Malaysia Sarawak (UNIMAS) for providing the necessary funds towards the publication of this research work.

REFERENCES
[1] B. Sellato (Ed.), “Plaited arts from the Borneo rainforest,” Lontar Foundation, 2012, doi: 10.4000/moussons.2950.
[2] B. Sellato and J. Fogel, “Decorated mats of the peoples of the Borneo hinterland, & Tapis décorés des peuples de l’arrière-pays de
Bornéo,” Tribal Art Magazine, vol. XVII, no. 4, pp. 126–137, 2013.
[3] B. Sellato, “Variation as norm: names, meanings, and referents in Borneo basketry decoration.,” Borneo Research Bulletin, vol.
48, pp. 185–200, 2017.
[4] S. Joseph, I. Hipiny, H. Ujir, S. F. S. Juan, and J. L. Minoi, “Performance evaluation of SIFT against common image
deformations on iban plaited mat motif images,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 23,
no. 3, pp. 1470–1477, 2021, doi: 10.11591/ijeecs.v23.i3.pp1470-1477.
[5] B. Sellato, “Art and identity in the plaited arts of Borneo: an introduction,” Plaited Art from the Borneo Rainforest, pp. 2–38,
2012.
[6] D. G. T. Meranggi, N. Yudistira, and Y. A. Sari, “Batik classification using convolutional neural network with data
improvements,” International Journal on Informatics Visualization, vol. 6, no. 1, pp. 6–11, 2022, doi: 10.30630/joiv.6.1.716.
[7] J. Tristanto, J. Hendryli, and D. Erny Herwindiati, “Classification of Batik motifs using convolutional neural networks,” SSRN
Electronic Journal, 2018, doi: 10.2139/ssrn.3258935.
[8] H. Prasetyo and B. A. Putra Akardihas, “Batik image retrieval using convolutional neural network,” Telkomnika
(Telecommunication Computing Electronics and Control), vol. 17, no. 6, pp. 3010–3018, 2019, doi:
10.12928/TELKOMNIKA.v17i6.12701.
[9] M. A. Rasyidi and T. Bariyah, “Batik pattern recognition using convolutional neural network,” Bulletin of Electrical Engineering
and Informatics, vol. 9, no. 4, pp. 1430–1437, 2020, doi: 10.11591/eei.v9i4.2385.
[10] M. A. Rasyidi, R. Handayani, and F. Aziz, “Identification of batik making method from images using convolutional neural
network with limited amount of data,” Bulletin of Electrical Engineering and Informatics, vol. 10, no. 3, pp. 1300–1307, 2021,
doi: 10.11591/eei.v10i3.3035.
[11] A. H. Rangkuti, A. Harjoko, and A. Putra, “A novel reliable approach for image batik classification that invariant with scale and
rotation using MU2ECS-LBP algorithm,” Procedia Computer Science, vol. 179, pp. 863–870, 2021, doi:
10.1016/j.procs.2021.01.075.
[12] A. H. Rangkuti, A. Harjoko, and A. E. Putra, “Improvement of accuracy in batik image classification due to scale and rotation
changes using M2ECS-LBP algorithm,” Journal of Theoretical and Applied Information Technology, vol. 97, no. 14, pp. 3859–
3870, 2019.
[13] M. Alkaff, H. Khatimi, N. Lathifah, and Y. Sari, “Sasirangan motifs classification using scale-invariant feature transform (SIFT)
and support vector machine (SVM),” MATEC Web of Conferences, vol. 280, p. 5023, 2019, doi:
10.1051/matecconf/201928005023.
[14] I. Nurhaida, A. Noviyanto, R. Manurung, and A. M. Arymurthy, “Automatic Indonesian’s Batik pattern recognition using SIFT
approach,” Procedia Computer Science, vol. 59, pp. 567–576, 2015, doi: 10.1016/j.procs.2015.07.547.
[15] A. C. Siregar and B. C. Octariadi, “Classification of sambas traditional fabric ‘Kain Lunggi’ using texture feature,” IJCCS
(Indonesian Journal of Computing and Cybernetics Systems), vol. 13, no. 4, p. 389, 2019, doi: 10.22146/ijccs.49782.
[16] I. Nurhaida, R. Manurung, and A. M. Arymurthy, “Performance comparison analysis features extraction methods for Batik
recognition,” 2012 International Conference on Advanced Computer Science and Information Systems, ICACSIS 2012 -
Proceedings, pp. 207–212, 2012.
[17] F. Albert, J. M. Gómis, J. Blasco, J. M. Valiente, and N. Aleixos, “A new method to analyse mosaics based on symmetry group
theory applied to Islamic geometric patterns,” Computer Vision and Image Understanding, vol. 130, pp. 54–70, 2015, doi:
10.1016/j.cviu.2014.09.002.
[18] O. Chaowalit and P. Kuntitan, “Using deep learning for the image recognition of motifs on the Center of Sukhothai Ceramics,”
Current Applied Science and Technology, vol. 22, no. 2, 2021, doi: 10.55003/cast.2022.02.22.002.
[19] Y. Ma and O. Arandjelović, “Classification of ancient roman coins by denomination using colour, a forgotten feature in automatic
ancient coin analysis,” Sci, vol. 2, no. 2, p. 37, 2020, doi: 10.3390/sci2020037.
[20] F. Öztürk and F. Özen, “A new license plate recognition system based on probabilistic neural networks,” Procedia Technology,
vol. 1, pp. 124–128, 2012, doi: 10.1016/j.protcy.2012.02.024.
[21] P. Meghana, S. Sagar Imambi, P. Sivateja, and K. Sairam, “Image recognition for automatic number plate surveillance,”
International Journal of Innovative Technology and Exploring Engineering, vol. 8, no. 4, pp. 9–12, 2019.
[22] X. Di and V. M. Patel, “Deep tattoo recognition,” IEEE Computer Society Conference on Computer Vision and Pattern
Recognition Workshops, pp. 119–126, 2016, doi: 10.1109/CVPRW.2016.22.
[23] Z. H. Sun, J. Baumes, P. Tunison, M. Turek, and A. Hoogs, “Tattoo detection and localization using region-based deep learning,”
Proceedings - International Conference on Pattern Recognition, vol. 0, pp. 3055–3060, 2016, doi: 10.1109/ICPR.2016.7900103.
[24] R. T. Silva and H. S. Lopes, “A transfer learning approach for the tattoo detection problem,” pp. 1–8, 2021, doi:
10.21528/cbic2021-34.
[25] K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 27, no. 10, pp. 1615–1630, 2005, doi: 10.1109/TPAMI.2005.188.
[26] S. Leutenegger, M. Chli, and R. Y. Siegwart, “BRISK: binary robust invariant scalable keypoints,” Proceedings of the IEEE
International Conference on Computer Vision, pp. 2548–2555, 2011, doi: 10.1109/ICCV.2011.6126542.

Iban plaited mat motif classification with adaptive smoothing (Silvia Joseph)
850  ISSN: 2252-8938

[27] E. Mair, G. D. Hager, D. Burschka, M. Suppa, and G. Hirzinger, “Adaptive and generic corner detection based on the accelerated
segment test,” Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes
in Bioinformatics), vol. 6312 LNCS, no. PART 2, pp. 183–196, 2010, doi: 10.1007/978-3-642-15552-9_14.
[28] A. Vedaldi and B. Fulkerson, “Vlfeat - an open and portable library of computer vision algorithms,” MM’10 - Proceedings of the
ACM Multimedia 2010 International Conference, pp. 1469–1472, 2010, doi: 10.1145/1873951.1874249.

BIOGRAPHIES OF AUTHORS

Silvia Joseph is currently a postgraduate student at the Faculty of Computer Science and Information Technology (FCSIT), Universiti Malaysia Sarawak (UNIMAS). She works as an Information Technology Officer at a government hospital under the Ministry of Health, Malaysia. She obtained her Master in Advanced IT (MAIT) in 2016. Her research interests are in feature recognition in the fields of pattern analysis, image processing applications, and image retrieval systems. She can be contacted at email: silvia.joe8@gmail.com.

Dr Irwandi Hipiny is a Senior Lecturer at the Faculty of Computer Science and Information Technology (FCSIT), Universiti Malaysia Sarawak (UNIMAS). He obtained his PhD in Computer Vision from the University of Bristol in 2014. His current research interests are in computer vision and animal re-identification. He can be contacted at email: mhihipni@unimas.my.

Ts. Dr Hamimah Ujir is a Senior Lecturer at the Faculty of Computer Science and Information Technology (FCSIT), Universiti Malaysia Sarawak (UNIMAS). She obtained her PhD from the School of Electronic, Electrical and Computer Engineering, University of Birmingham, United Kingdom, in 2013. Her research interests lie in the interdisciplinary field of computer vision, with related interests in computer graphics, image processing, and mathematical methods. Her previous and current works include 3D physical simulation and 3D static as well as dynamic facial expression analysis. She can be contacted at email: uhamimah@unimas.my.
