
Fibers and Polymers 2020, Vol.21, No.3, 590-603
DOI 10.1007/s12221-020-9425-7
ISSN 1229-9197 (print version)
ISSN 1875-0052 (electronic version)

Surface Parameter Measurement of Braided Composite Preform Based on Faster R-CNN

Zhitao Xiao1,2, Lei Pei1, Lei Geng1,2*, Ying Sun3, Fang Zhang1,2, and Jun Wu1,2

1 School of Electronics and Information Engineering, TianGong University, Tianjin 300387, China
2 Tianjin Key Laboratory of Optoelectronic Detection Technology and System, Tianjin 300387, China
3 Institute of Textile Composites, TianGong University, Tianjin 300387, China

*Corresponding author: genglei@tjpu.edu.cn

(Received April 26, 2019; Revised August 19, 2019; Accepted August 25, 2019)

Abstract: Pitch length and surface braiding angle are two important parameters of braided composite preforms. In this paper, a method based on Faster R-CNN is proposed to measure the two parameters. First, after image acquisition, a fabric image database including initial cropped images, augmented images, and target images is established. Second, the target images are classified into four categories according to their gray change characteristics. Third, a Faster R-CNN fabric detection model is trained on the fabric image database. Fourth, targets are detected by the trained network, and corners are detected based on the detected targets. Finally, pitch lengths and surface braiding angles are measured based on the detected corners. Experimental results show that the proposed method achieves the automatic measurement of the pitch lengths and surface braiding angles of 2D and 3D braided composite preforms with high accuracy.
Keywords: Braided composite preform, Pitch length, Surface braiding angle, Deep learning, Faster R-CNN

Introduction

Fiber-reinforced/braided composites are increasingly used in the fields of aerospace, automotive, military, energy, sports, etc. due to their advantageous features, such as high shear and torsional strength and stiffness, high transverse strength and modulus, damage tolerance and fatigue life, and notch insensitivity [1-5]. Braided composites are produced from fibrous architectures developed by braiding technology [6,7]. Braided structures are produced by intertwining two or more yarns and are distinguished from other fibrous architectures by the yarns being aligned diagonally to the structure axis [2]. Braided structures are mainly separated into two categories according to how the fibrous reinforcements are interlaced in the spatial directions, i.e., 2D (two-dimensional) and 3D (three-dimensional) braided structures [8]. Two-dimensional braided structures can be classified as diamond (1/1 repeat), regular (2/2 repeat) and Hercules (3/3 repeat) based on the weave pattern [9]. Three-dimensional braided structures can be classified as 3D four-directional, 3D five-directional and 3D six-directional structures according to the reinforcing yarns added in different directions.
The surface braiding angle θ is defined as the angle between the surface braiding yarn axis and the fabrication direction [10], and the pitch length h is the length that the yarns advance along the braiding direction in each carrier motion step [11] (see Figure 1). The surface braiding angle and pitch length are important parameters of braided composites because they affect the mechanical properties of braided composites [10,12-15].

Figure 1. Pitch length and surface braiding angle; (a) in the ideal scenario, (b) in a practical scenario (2D fabric), and (c) in a practical scenario (3D fabric).



However, in industry, manual measurement of the surface braiding angle and pitch length is still the main approach; it is time-consuming and labor-intensive. Thus, it is desirable to develop a method for the objective, automatic, and accurate measurement of the two parameters.
Several measurement techniques based on image processing have been proposed for 3D braided composite preforms. In the study of Wan et al. [16], an approach based
on curve fitting was proposed to measure the surface
braiding angle. Gong et al. [17] proposed an automatic
method based on multi-resolution analysis of a wavelet
transform to measure the pitch length. Wan et al. [18]
developed an algorithm that measures the average surface
braiding angle based on mathematical morphology and the
Fourier transform. It is noteworthy that these methods are
more applicable to measuring the average values of the two
parameters than to measuring each individual surface
braiding angle and pitch length. Fewer methods have been
reported for the two-parameter measurement of 2D braided
composite preforms.
Recently, with the development of deep convolutional neural networks (CNNs), many tasks in computer vision have come to be dominated by them [19,20]. For object detection, region-based CNN detection methods are now the main paradigm. The area has developed so rapidly that three generations of region-based CNN detection models were proposed within two years: R-CNN [21], Fast R-CNN [22], and finally Faster R-CNN [23], with increasingly better accuracy and faster processing speed. In recent years, Faster R-CNN has been used in many fields of image processing, such as the medical field [24-28], the intelligent transportation and monitoring field [29-33], and the textile field [34-36].
In this paper, we propose a new method based on a deep learning network, Faster R-CNN, to measure the surface braiding angle and pitch length of 2D and 3D braided composite preforms.

Figure 2. Braided composite preform samples; (a) 2D biaxial carbon-fiber (3k) braided composite preform sample and (b) 3D, 4-directional braided composite preform samples.

Experimental

Materials
The materials include 2D and 3D fabrics. The 2D fabrics are biaxially braided from carbon-fiber (6k, 6,000 filaments), carbon-fiber (3k, 3,000 filaments), glass-fiber and aramid-fiber yarns, separately; they are braided tubes provided by Shuomin Technology Co. Ltd., China (see Figure 2(a)). The 3D fabrics are 3D four-directional braided from carbon-fiber (12k, 12,000 filaments) and carbon-fiber (24k, 24,000 filaments) yarns, separately, and were supplied by the Institute of Textile Composites of Tianjin Polytechnic University (see Figure 2(b)). The average diameters of the 2D braided carbon-, glass- and aramid-fiber filaments are, respectively, 5.70 µm, 7.91 µm and 11.02 µm, as measured by the College of Textiles of Tianjin Polytechnic University using a fiber fineness analyser.

Image Acquisition

Figure 3. Image acquisition system.

The original images are acquired by the image acquisition system shown in Figure 3. First, the fabric is placed on the surface of the measurement platform with a micrometer on one side of the fabric, keeping the surface of the fabric and the micrometer on the same horizontal plane. Second, the CCD camera (Vieworks VA-8MC-M16AO), with a camera lens (Nikkor 28 mm f/2.8D) and a circular polarizing filter (CPF) (NiSi CPL 52 mm) attached, is installed vertically above the fabric, and the dome light source (DLS) (OPT-RID180-RGB) is placed on the surface of the fabric so that the camera's optical centre passes through the central axis of the DLS light hole and the centre of the fabric surface. Finally, the camera exposure value and the rotation angle of the CPF are adjusted, and a clear image shown on the computer screen is selected.
The image acquisition system can obtain clear and accurate gray-level fabric images for the following reasons. First, the CCD camera is an industrial black-and-white camera with a high resolution of 3296×2472 pixels, which has less distortion and clearer image quality than ordinary cameras. Second, the DLS uniformly illuminates the fabric surface [37], and the CPF filters out reflections with certain vibration directions [38,39], both of which help reduce reflections in the fabric image.

Corner Detection based on Faster R-CNN
The acquired images are then processed by corner detection based on Faster R-CNN.
Faster R-CNN [23] contains the Zeiler and Fergus (ZF) model [40] and the VGG-16 model [41], pre-trained with specific image datasets and corresponding annotations, such as the Microsoft COCO dataset [42] and the Pascal VOC dataset [43]. Faster R-CNN is composed of an RPN (Region Proposal Network) module and a Fast R-CNN module: the RPN module, a deep fully convolutional network, proposes regions, and the Fast R-CNN module uses the proposed regions. The detailed steps of corner detection based on Faster R-CNN are as follows.
Step 1: Establish the fabric image database, including initial cropped images, augmented images and target images.
Step 1.1: Crop initial fabric images from the original images acquired in the Image Acquisition Section. The cropped initial fabric images contain eight types of fabrics, as shown in Figure 4: Figure 4(a) to (c) are, respectively, 2D carbon-fiber (6k), carbon-fiber (3k) and carbon-fiber (3k) fabrics with different pitch lengths and surface braiding angles; Figure 4(d) to (e) are, respectively, 2D glass- and aramid-fiber fabrics; and Figure 4(f) to (h) are, respectively, 3D carbon-fiber (12k), carbon-fiber (24k) and carbon-fiber (24k) fabrics with different pitch lengths and surface braiding angles.

Figure 4. Original training sample images; (a) F2dc1 (6k), (b) F2dc2 (3k), (c) F2dc3 (3k), (d) F2dg, (e) F2da, (f) F3dc1 (12k), (g) F3dc2 (24k), and (h) F3dc3 (24k).

Figure 5. Training image and related augmentation; (a) image Fori, (b) horizontal mirror of Fori, (c) vertical mirror of Fori, and (d) L image of (a).

Step 1.2: Augment the initial fabric images by image rotation, horizontal mirroring (see Figure 5(b)), vertical mirroring (see Figure 5(c)) and enhancement (see Figure 5(d)). Here, the L channel image extracted via the Lab transform [44] is used as the enhanced image.
CIE Lab was developed by the CIE (Commission Internationale de l'Eclairage) in 1976; it first converts the R, G and B channels of the original image to X, Y and Z space and then converts them to L, a and b space, calculated by [44]

$$[X, Y, Z]^{T} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423 \\ 0.212671 & 0.715150 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} [R, G, B]^{T} \quad (1)$$

$$\begin{cases} L = 116\,f(Y) - 16 \\ a = 500\left[f\!\left(\dfrac{X}{0.9504}\right) - f(Y)\right] \\ b = 200\left[f(Y) - f\!\left(\dfrac{Z}{1.0887}\right)\right] \end{cases} \quad (2)$$

$$f(x) = \begin{cases} x^{1/3}, & x > \left(\dfrac{6}{29}\right)^{3} \\ \dfrac{1}{3}\left(\dfrac{29}{6}\right)^{2} x + \dfrac{4}{29}, & x \le \left(\dfrac{6}{29}\right)^{3} \end{cases} \quad (3)$$

Step 1.3: Mat the target images from the initial and augmented fabric images. First, click the corner points of the fabric images shown on the computer screen, obtaining the coordinates of the clicked corners. Second, compute the upper-left and lower-right coordinates of the (2m+1)×(2n+1) neighborhoods centered at the clicked corners. For example, let A(x0, y0) be one clicked corner; then the upper-left and lower-right coordinates of the neighborhood centered at A are (x0−m, y0−n) and (x0+m, y0+n), respectively. We can thus obtain the target images of size (2m+1)×(2n+1) centered at the clicked corners.
Figure 6 shows fabric images and related target images; for example, Figure 6(a-1) and (a-2) are target images matted from Figure 6(a), from the area labeled by the blue rectangle centered at the blue labeled corner and from the yellow rectangle centered at the yellow labeled corner, respectively. Similarly, other target images can be matted from Figure 6(a) and from the other fabric images (see Figure 6(b) to (h)).
Step 1.4: Classify the target images into four categories based on their gray change characteristics. As shown in Figure 6, target images matted from Figure 6(a) to (c) are classified into class one and class two, target images matted from Figure 6(d) and (e) are classified into class three, and target images matted from Figure 6(f) to (h) are classified into class four.
The screening mechanism of the target images is as follows. First, for 2D carbon-fiber fabrics, as shown in Figure 6(a), (a-1), (a-2), (b), (b-1), (b-2), (c), (c-1) and (c-2), the target images centered at corners all include areas with good contrast between light and dark. The difference between Figure 6(a-1), (b-1) and (c-1) on the one hand and Figure 6(a-2), (b-2) and (c-2) on the other is that the former have a greater proportion of dark areas than light areas, while the latter have the opposite characteristic; they are therefore classified into class one and class two, respectively. Second, target images cropped from glass-fiber and aramid-fiber fabrics (see Figure 6(d), (d-1), (d-2), (e), (e-1) and (e-2)) have similar gray change characteristics, with no significant contrast between light and dark areas; they are therefore classified into class three. Finally, the target images of 3D carbon-fiber fabrics (see Figure 6(f), (f-1), (f-2), (g), (g-1), (g-2), (h), (h-1) and (h-2)) have similar gray change characteristics, with only weak contrast between light and dark areas; they are therefore classified into class four.
The fabric image database, including the initial fabric images, augmented fabric images and target images, is thus established. Table 1 shows the numbers of images in the training, validating, testing and target sets of the database, where the proportion of images in the training, validating and testing sets is 3:1:1.
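For illustration, the sketch below shows one way the enhancement and target matting of Steps 1.2-1.3 could be implemented; it is not the authors' code. It assumes a NumPy RGB image scaled to [0, 1]; the function names, and the restriction of the augmentation to mirroring, are ours.

```python
import numpy as np

# RGB-to-XYZ conversion matrix from equation (1).
M_RGB2XYZ = np.array([[0.412453, 0.357580, 0.180423],
                      [0.212671, 0.715150, 0.072169],
                      [0.019334, 0.119193, 0.950227]])

def f(x):
    # Piecewise function of equation (3).
    return np.where(x > (6 / 29) ** 3,
                    np.cbrt(x),
                    x / (3 * (6 / 29) ** 2) + 4 / 29)

def l_channel(rgb):
    """L channel of the CIE Lab transform (equations (1)-(2)), used as the enhanced image."""
    Y = rgb.reshape(-1, 3) @ M_RGB2XYZ[1]       # only Y is needed for L
    return (116 * f(Y) - 16).reshape(rgb.shape[:2])

def mirror_augment(img):
    """Horizontal and vertical mirroring from Step 1.2 (rotation omitted for brevity)."""
    return [img, np.fliplr(img), np.flipud(img)]

def target_image(img, x0, y0, m, n):
    """(2m+1)x(2n+1) target image centered at a clicked corner (x0, y0), as in Step 1.3.
    Assumes the window lies fully inside the image."""
    return img[y0 - n:y0 + n + 1, x0 - m:x0 + m + 1]
```

The L channel plays two roles in the paper: as an enhanced training image here, and later as the input for the Thx estimate in Step 4.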

Figure 6. Training sample images and related target images; (a) F2dc1 (6k) with labeled corners and cropped rectangular box, (a-1) target image of (a) (Class one), (a-2) target image of (a) (Class two), (b) F2dc2 (3k), (b-1) target image of (b) (Class one), (b-2) target image of (b) (Class two), (c) F2dc3 (3k), (c-1) target image of (c) (Class one), (c-2) target image of (c) (Class two), (d) F2dg, (d-1) target image of (d) (Class three), (d-2) target image of (d) (Class three), (e) F2da, (e-1) target image of (e) (Class three), (e-2) target image of (e) (Class three), (f) F3dc1 (12k), (f-1) target image of (f) (Class four), (f-2) target image of (f) (Class four), (g) F3dc2 (24k), (g-1) target image of (g) (Class four), (g-2) target image of (g) (Class four), (h) F3dc3 (24k), (h-1) target image of (h) (Class four), and (h-2) target image of (h) (Class four).

Table 1. Number of images in the training, validating, testing and target sets of the fabric images database

Image number     F2dc1   F2dc2   F2dc3   F2dg    F2da    F3dc1   F3dc2   F3dc3
Total set        800     800     800     800     800     700     700     700
Training set     3,660 (total over all eight types)
Validating set   1,220 (total over all eight types)
Testing set      1,220 (total over all eight types)
Target set       388,026 (total over all eight types)
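The 3:1:1 proportion can be realized with a simple shuffled split. A minimal sketch (the function name, shuffling, and seed are our own choices, not from the paper); applied to the 6,100 non-target images of Table 1 it reproduces the reported 3,660/1,220/1,220 counts:

```python
import random

def split_3_1_1(image_paths, seed=0):
    """Split a list of image paths into training/validating/testing sets at a 3:1:1 ratio."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths) // 5
    return paths[2 * n:], paths[n:2 * n], paths[:n]   # 3/5 train, 1/5 validate, 1/5 test
```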

Step 2: Train the Faster R-CNN fabric detection model on the established database, where the ZF net trained with the Pascal VOC dataset (2007) is used as the pre-trained network. Compared with the VGG-16 net, the depth configuration of the ZF net is shallower (see Figure 7), and the target samples are not complex, so the shallower pre-trained network, i.e., the ZF net, is more suitable. As shown in Table 2, the AP (Average Precision) and mAP (Mean Average Precision) values, which are commonly used to evaluate trained models, obtained on the validating set by the trained Faster R-CNN fabric detection model are higher with the ZF pre-trained network than with the VGG-16 pre-trained network.

Table 2. Testing results with the ZF and VGG pre-trained nets

          AP (Class 1)   AP (Class 2)   AP (Class 3)   AP (Class 4)   mAP
ZF net    0.882          0.887          0.866          0.897          0.883
VGG net   0.865          0.865          0.857          0.896          0.871

Figure 7. Depth configurations of the ZF and VGG-16 nets; (a) ZF net and (b) VGG-16 net.

Figure 8 shows the PR (Precision-Recall) curves obtained on the validating set by the trained Faster R-CNN fabric detection model with the ZF pre-trained network. The AP values of classes 1 to 4 are all high, so the trained network exhibits good performance. Figure 9 shows partial results produced by the trained model on the testing set (1,220 testing sample images). In Figure 9 and the other images tested in the testing set, no false targets are detected; several targets near the image edges are missed because they are smaller than the targets in the non-marginal region, so their characteristics are not significant enough to be detected.
Figure 8. PR curves; (a) PR curve of class 1, (b) PR curve of class 2, (c) PR curve of class 3, and (d) PR curve of class 4.
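The AP values in Table 2 and Figure 8 are areas under the per-class precision-recall curves. The paper does not state which AP convention it uses; the sketch below follows the common interpolated-AP convention used by Pascal VOC-style evaluation tooling:

```python
import numpy as np

def average_precision(recall, precision):
    """Area under an interpolated PR curve; 'recall' must be sorted ascending."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):        # make precision non-increasing from the right
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]         # points where recall changes
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

The mAP reported in Table 2 is then simply the mean of the four per-class AP values.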

Figure 9. Partial tested results; (a) tested results of F2dc1 (2D, 6k, carbon-fiber), (b) tested results of F2dc2 (2D, 3k, carbon-fiber), (c) tested results of F2dc3 (2D, 3k, carbon-fiber), (d) tested results of F2dg (2D, glass-fiber), (e) tested results of F2da (2D, aramid-fiber), (f) tested results of F3dc1 (3D, 12k, carbon-fiber), (g) tested results of F3dc2 (3D, 24k, carbon-fiber), and (h) tested results of F3dc3 (3D, 24k, carbon-fiber).

Step 3: Detect initial corners based on the detected targets.
Step 3.1: Crop the final test image into small overlapping blocks FBi (i=1, 2, ..., 9) (see Figure 10(b)); the overlap ensures that targets near the image edges are not missed.
Step 3.2: Test the block images FBi using the trained Faster R-CNN fabric detection model, obtaining images FTi with detected targets (see Figure 10(c)).
Step 3.3: In FTi, calculate the center point coordinates of each detected target as the original corners, yielding the corner map FCi (see Figure 10(d)). For example, let A(xul, yul) and B(xlr, ylr) be, respectively, the upper-left and lower-right coordinates of one detected target; then the center point C(xo, yo) can be calculated by

$$x_{o} = x_{ul} + w \quad (4)$$

$$y_{o} = y_{ul} + r \quad (5)$$

where $w = (x_{lr} - x_{ul})/2$ and $r = (y_{lr} - y_{ul})/2$. The other corners of FTi can be calculated similarly.
Step 3.4: Merge the FCi according to the positions from which they were cropped from the original image FOr, yielding the corner map FOrC (see Figure 10(e) and (f)).
As Figure 10(e) and (f) show, the corner map FOrC contains false corners caused by the overlapping areas of adjacent image blocks; these false corners are removed in the following step.
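A sketch of Steps 3.1 and 3.3, assuming a 3×3 grid; the overlap width is illustrative, and block origins are returned so that detected corners can be mapped back to FOr coordinates for the merge in Step 3.4:

```python
import numpy as np

def crop_blocks(image, rows=3, cols=3, overlap=50):
    """Step 3.1: overlapping 3x3 blocks so targets near block borders are not missed."""
    h, w = image.shape[:2]
    bh, bw = h // rows, w // cols
    blocks = []
    for i in range(rows):
        for j in range(cols):
            y0, x0 = max(i * bh - overlap, 0), max(j * bw - overlap, 0)
            y1, x1 = min((i + 1) * bh + overlap, h), min((j + 1) * bw + overlap, w)
            blocks.append(((x0, y0), image[y0:y1, x0:x1]))   # keep origin for merging
    return blocks

def box_center(x_ul, y_ul, x_lr, y_lr):
    """Step 3.3, equations (4)-(5): corner estimate at the center of a detected box."""
    return x_ul + (x_lr - x_ul) / 2, y_ul + (y_lr - y_ul) / 2

# Merging (Step 3.4): a block-local corner (cx, cy) maps to (x0 + cx, y0 + cy) in FOr.
```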

Step 3.5: Remove the false corners of FOrC: compare the gray values of all corners of the original image FOr within certain neighborhoods, and reserve only the corners with minimum gray values, yielding the initial corner map FC (see Figure 10(g) and (h)).
Step 4: Adjust the corner positions based on phase congruency.
Step 4.1: Calculate the minimum moments of the phase congruency covariance [45] of FBM3D in (2R+1)×(2R+1) neighborhoods centered at the detected corners, yielding FPCj (j=1, 2, ..., nc, where nc is the number of corners detected in FC) (see Figure 11(c)). Here, FBM3D is the image FOr processed by the L transform [44] and then by the BM3D filter [46]; R=(Thx/2)×(1/2), and Thx can be calculated as follows [47]:
1. Calculate each row autocorrelation map Facxi of FL (the image FOr after the L transform), where i = 1, 2, ..., m and m is the number of rows of FL.
2. In the middle of each Facxi, calculate the average peak-to-peak value Vaci of three peaks and denote the coordinate of the middle peak by Vmpi; let the maximum of the Vaci be Txmax.
3. If three peaks are found in the interval [Vmpi−Txmax, Vmpi+Txmax], then reserve this Vaci.
4. Set Thx to the average of the reserved Vaci.
Step 4.2: Binarize each image FPCj with a threshold T. Then remove those corners whose distances to the image edges are less than 3 pixels, yielding image FBj (see Figure 11(d)). Finally, calculate the distances dccjk (k=1, 2, ..., nk, where nk is the number of corners in FBj) between the corners of FBj and the center point (R, R) of FBj, and let the minimum of dccjk be dminc. Label the corners at distance dminc: if exactly one corner is labeled, it is used as the adjusted corner; if more than one corner is labeled, compare the gray values of these corners in FL and reserve the corner with the minimum gray value as the adjusted corner.
As shown in Figure 11(e), the corner position after adjustment is more accurate than before (see Figure 11(a)).
Step 4.3: Remove those corners whose distances to the image edges are less than q pixels. Then further adjust the corners: compare the gray values of FOr in 3×3 neighborhoods centered at the detected corners, and reserve only the corners with minimum gray values.

Figure 10. Process of initial corner detection; (a) original image FOr, (b) block images FBi cropped from (a), (c) block images FTi with detected targets, (d) block images FCi with detected corners, (e) merged original corner map FOrC, (f) part of (e) in the white square frame, (g) corner map FC, and (h) part of (g) in the white square frame.

Figure 11. Process of corner adjustment; (a) FCOj, (b) FBM3Dj, (c) FPCj, (d) FBj, and (e) adjusted corner.
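A hedged sketch of the Thx estimate in Step 4.1 [47]: it assumes SciPy for peak detection, interprets "peak-to-peak value" as the spacing between peak positions, and simplifies the paper's "three peaks in the middle of Facxi" to the first three detected peaks.

```python
import numpy as np
from scipy.signal import find_peaks

def estimate_thx(FL):
    """Estimate Thx from the row autocorrelations of the L-channel image FL."""
    per_row = []
    for row in FL.astype(float):
        row = row - row.mean()
        acx = np.correlate(row, row, mode='full')[row.size - 1:]   # one-sided autocorrelation
        peaks, _ = find_peaks(acx)
        if peaks.size >= 3:
            three = peaks[:3]
            vac = float(np.mean(np.diff(three)))   # average peak-to-peak spacing of three peaks
            vmp = int(three[1])                    # coordinate of the middle peak
            per_row.append((peaks, vac, vmp))
    if not per_row:
        return None
    tx_max = max(v for _, v, _ in per_row)
    # Reserve Vaci only if three peaks fall inside [Vmpi - Txmax, Vmpi + Txmax].
    reserved = [v for p, v, m in per_row
                if np.count_nonzero((p >= m - tx_max) & (p <= m + tx_max)) >= 3]
    return float(np.mean(reserved)) if reserved else None
```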

Pitch Length and Surface Braiding Angle Measurement
The pitch lengths and surface braiding angles are measured based on the corners detected above; the detailed steps are as follows.
Consider Figure 12, for example: let Ai, Bi, Ci, Di, Ei, and Fi (i=1, 2, 3, 4) be the detected corners marked by white points. Define dAiAj (j=i+1; i=1, 2, 3) as the pixel distance between corners Ai and Aj, and define dBiBj, dCiCj, dDiDj, dEiEj and dFiFj similarly. The pitch length dplA can be calculated by dplA = dAiAj × dc, where dc is the true length per pixel obtained from the image calibration performed during image acquisition. The pitch lengths dplB, dplC, dplD, dplE and dplF can be calculated in the same way.
The surface braiding angle θ/2 is obtained by θ/2 = (θA2C1 + θB1D2)/2, where θA2C1 and θB1D2 are the acute angles between A2C1 and the horizontal direction and between B1D2 and the horizontal direction, respectively.

Figure 12. Labeled corners and angle.
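The measurement itself reduces to scaled corner-to-corner distances and acute angles to the horizontal. A minimal sketch (corner names follow Figure 12; the helper functions are ours, and corners are (x, y) pixel tuples; math.dist requires Python 3.8+):

```python
import math

def pitch_length(c1, c2, dc):
    """Pixel distance between adjacent corners scaled by the calibration dc (mm per pixel)."""
    return dc * math.dist(c1, c2)

def half_braiding_angle(a2, c1, b1, d2):
    """theta/2 = (theta_A2C1 + theta_B1D2) / 2, each the acute angle to the horizontal."""
    def acute(p, q):
        ang = abs(math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])))
        return min(ang, 180.0 - ang)
    return 0.5 * (acute(a2, c1) + acute(b1, d2))
```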

Figure 13. Tested images with labeled angles; (a) image F1 with labeled angles and part corners, (b) image F2 with labeled angles, (c) image F3 with labeled angles, (d) image F4 with labeled angles, (e) image F5 with labeled angles, (f) image F6 with labeled angles, (g) image F7 with labeled angles, and (h) image F8 with labeled angles.

Figure 14. Pitch length and surface braiding angle measured by hand; (a) pitch length measured by hand, (b) part of (a), and (c) surface
braiding angle measured by hand.

Results and Discussion

In this section, eight types of sample images are tested using the proposed method, as shown in Figure 13, where Figure 13(a)-(e) are 2D braided glass-fiber, aramid-fiber, carbon-fiber (6k), carbon-fiber (3k) and carbon-fiber (3k) composite preforms, respectively, and Figure 13(f)-(h) are, respectively, 3D braided carbon-fiber (12k), carbon-fiber (24k) and carbon-fiber (24k) composite preforms, cropped from the original images acquired in the Image Acquisition Section.
For the manual measurements, the pitch length dplm is measured by clicking the relevant points on the image, and the surface braiding angle θm is measured with a protractor. As shown in Figure 14(a), for example, the coordinates of corners C1(188, 238) and C2(190, 399) are acquired by clicking points on the image shown on the computer screen, and the pitch length is computed as

$$d_{plm} = d_{c}\sqrt{(188-190)^{2} + (238-399)^{2}}$$

where dc is the calibration value from image acquisition. The accuracy of the corner positions can reach the sub-pixel level, and the corner points are clicked on a magnified image (see Figure 14(b)) to improve the accuracy of the corner positions. In Figure 14(c), the angle θ measured by protractor is 49.2°, so the surface braiding angle θm is (1/2)θ = (1/2)×49.2 = 24.6°. Each pitch length and surface braiding angle measured by hand is the average of ten measurements. The measured angles are labeled in Figure 13(a)-(h). In Figure 13(a), for example, the measured surface braiding angles are (1/2)θi (i=1, 2, 3, ..., 35), and the measured pitch lengths are dAiAjpl = dc × dAiAj (i=1, 2, ...; j=i+1), where dAiAj is the pixel distance between corners Ai and Aj. The pitch lengths dBiBjpl, the pitch lengths along other lines, and the pitch lengths in other images can be measured similarly.
From Table 3, the standard deviations of the pitch lengths and surface braiding angles measured by hand are small. It is thus reasonable to use the manual measurement results to benchmark the results of our proposed method.
Figure 15 shows the detected corner maps, pitch length relative errors, and surface braiding angle relative errors, where the relative errors are arranged in ascending order. Table 4 gives the average measurement values and relative errors of the pitch length and surface braiding angle, where d and θ denote the values of pitch length and surface braiding angle, respectively. The subscripts ah, adp, and aac denote the average value measured by hand, by our method, and by auto-correlation (Thx; see the Corner Detection based on Faster R-CNN Section, Step 4), respectively. The relative errors eddp, edac and eθdp are calculated by

$$e_{ddp} = \frac{d_{ah} - d_{adp}}{d_{ah}} \times 100\% \quad (6)$$

$$e_{dac} = \frac{d_{ah} - d_{aac}}{d_{ah}} \times 100\% \quad (7)$$

$$e_{\theta dp} = \frac{\theta_{ah} - \theta_{adp}}{\theta_{ah}} \times 100\% \quad (8)$$
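Equations (6)-(8) share one form; since the tabulated errors are all non-negative, an absolute value is assumed in this sketch:

```python
def relative_error(reference, measured):
    """Relative error of equations (6)-(8): |reference - measured| / reference, in percent."""
    return abs(reference - measured) / reference * 100.0

# Example: F1 in Table 4 -> relative_error(44.20, 45.75) ~= 3.51 (%), matching e_theta_dp.
```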

Table 3. Standard deviation of pitch length and surface braiding angle measured by hand

      Standard deviation of pitch length    Standard deviation of surface braiding angle
      Minimum   Maximum   Average           Minimum   Maximum   Average
F1    0.0072    0.079     0.021             0.25      0.78      0.52
F2    0.0045    0.042     0.016             0.28      0.82      0.56
F3    0.0084    0.035     0.018             0.30      0.75      0.52
F4    0.0054    0.035     0.017             0.25      0.88      0.46
F5    0.0054    0.040     0.014             0.17      0.62      0.40
F6    0.011     0.093     0.041             0.22      0.77      0.53
F7    0.016     0.089     0.042             0.14      0.77      0.53
F8    0.015     0.096     0.048             0.24      0.77      0.53

Figure 15. Corner maps and relative errors; (a) corner map of F1, (b) pitch length relative errors of F1, (c) surface braiding angle relative errors of F1, (d) corner map of F2, (e) pitch length relative errors of F2, (f) surface braiding angle relative errors of F2, (g) corner map of F3, (h) pitch length relative errors of F3, (i) surface braiding angle relative errors of F3, (j) corner map of F4, (k) pitch length relative errors of F4, (l) surface braiding angle relative errors of F4, (m) corner map of F5, (n) pitch length relative errors of F5, (o) surface braiding angle relative errors of F5, (p) corner map of F6, (q) pitch length relative errors of F6, (r) surface braiding angle relative errors of F6, (s) corner map of F7, (t) pitch length relative errors of F7, (u) surface braiding angle relative errors of F7, (v) corner map of F8, (w) pitch length relative errors of F8, and (x) surface braiding angle relative errors of F8.

Figure 15. Continued.

Table 4. Average measurement values and relative errors of pitch length and surface braiding angle

      dah (mm)  dadp (mm)  daac (mm)  θah (°)  θadp (°)  eddp (%)  edac (%)  eθdp (%)
F1    2.49      2.49       2.56       44.20    45.75     0         2.81      3.51
F2    2.37      2.37       1.78       31.60    31.88     0         24.9      0.89
F3    4.10      4.10       4.18       35.31    34.89     0         1.95      1.19
F4    1.82      1.82       1.81       43.40    43.71     0         0.55      0.71
F5    1.97      1.97       1.97       35.17    34.96     0         0         0.60
F6    3.89      3.88       3.91       24.01    23.20     0.26      0.51      3.37
F7    3.89      3.88       3.92       32.65    33.92     0.26      0.77      3.89
F8    5.48      5.47       5.58       24.48    24.02     0.18      1.82      1.88

From Figure 15 and Table 4, several observations can be made:
• Our method can measure not only individual pitch lengths and surface braiding angles but also the average values of the two parameters, and it generally achieves smaller average relative errors than the existing autocorrelation-based method. In particular, image F2, the aramid-fiber fabric image, has larger relative errors when measured by autocorrelation than by our method, because the aramid-fiber fabric image has weaker contrast at textile edges than the other types of images, and the autocorrelation-based method is sensitive to image contrast.
• Our method achieves higher accuracy for pitch length than for surface braiding angle. The reasons are as follows: first, an individual pitch length depends on two corners, while an individual surface braiding angle depends on four corners, which introduces more sources of error; second, changes in corner position influence the angle more strongly than the distance.
• Our method achieves higher accuracy for the pitch length and surface braiding angle of 2D braided fabrics than of 3D braided fabrics, because the fiber bundle crimp of 3D braided fabrics is larger than that of 2D braided fabrics, and thus the contrast of the neighborhoods centered at the corners of 3D braided fabrics is weaker.

Conclusion

In this paper, we achieve the automatic measurement of the pitch length and surface braiding angle of braided composite preforms based on Faster R-CNN. The main conclusions are as follows:
1. Our method is robust to luminance and contrast, is well suited to the automatic measurement of the pitch lengths and surface braiding angles of 2D and 3D fabrics, and is also suitable for measuring fabrics braided from different fibers, such as carbon-, glass- and aramid-fiber fabrics.
2. In the fabric database, the corner-centered target matting method efficiently improves the accuracy of the initial corner positions.
3. The Faster R-CNN network used for initial corner detection improves the applicability of the corner detection algorithm to weak-contrast images, which efficiently avoids missing corners.

Acknowledgements

The authors wish to thank Prof. Jae Ryoun Youn for handling the review of the paper and the two anonymous reviewers for their helpful comments. This work was supported by the Applied Basic Research Programs of China National Textile and Apparel Council (No. J201509) and the Program for Innovative Research Team in University of Tianjin (No. TD13-5034).

References

1. O. Bacarreza, P. Wen, and M. H. Aliabadi, "Micromechanical Modelling of Textile Composites" in Woven Composites (M. H. Aliabadi Ed.), Computational and Experimental Methods in Structures, Vol. 6, pp.1-74, World Scientific Publishing Co., Hackensack, 2015.
2. S. Rana and R. Fangueiro, "Advanced Composites in Aerospace Engineering", Advanced Composite Materials for Aerospace Engineering, Woodhead Publishing, 2016.
3. A. Fouladi, R. J. Nedoushan, J. Hajrasouliha, M. Sheikhzadeh, Y. M. Kim, W. J. Na, and W. R. Yu, Appl. Compos. Mater., 26, 479 (2019).
4. X. Gao, B. Sun, and B. Gu, Aerosp. Sci. Technol., 82, 46 (2018).
5. W. Ye, W. Li, Y. Shan, J. Wu, H. Ning, D. Sun, N. Hu, and S. Fu, Compos. Part B-Eng., 156, 355 (2019).
6. S. Rana and R. Fangueiro, "Braided Structures and Composites: Production, Properties, Mechanics and Technical Applications", Vol. 3, CRC Press, 2015.
7. S. Rana, S. Parveen, and R. Fangueiro, "Advanced Carbon Nanotube Reinforced Multi-scale Composites" in Advanced Composite Materials: Manufacturing, Properties, and Applications (Bakerpur Ehsan Ed.), De Gruyter Open, 2015.
8. G. Balokas, S. Czichon, and R. Rolfes, Compos. Struct., 183, 550 (2018).
9. A. Rawal, H. Saraswat, and A. Sibal, Text. Res. J., 85, 2083 (2015).
10. Q. Guo, G. Zhang, and J. Li, Mater. Des., 46, 291 (2013).
11. J. Sun, Y. Wang, G. Zhou, and X. Wang, Polym. Compos., 39, 1076 (2018).
12. H. Zhou, W. Zhang, T. Liu, B. Gu, and B. Sun, Compos. Part A: Appl. Sci. Manuf., 79, 52 (2015).
13. Y. Wang, Z. Liu, N. Liu, L. Hu, Y. Wei, and J. Ou, Compos. Struct., 136, 75 (2016).

14. X. Li, C. Chu, L. Zhou, J. Bai, C. Guo, F. Xue, P. Lin, and P. K. Chu, Compos. Sci. Technol., 142, 180 (2017).
15. B. Shi, S. Liu, A. Siddique, Y. Du, B. Sun, and B. Gu, Int. J. Damage Mech., 28, 404 (2019).
16. Z. Wan, J. Shen, and X. Wang, J. Text. Res. (in Chinese), 25, 42 (2004).
17. L. Gong and Z. Wan, Comput. Measurement Control (in Chinese), 14, 730 (2006).
18. Z. Wan and J. Li, AUTEX Res. J., 6, 30 (2006).
19. M. Kang, X. Leng, Z. Lin, and K. Ji, International Workshop on Remote Sensing with Intelligent Processing (RSIP), IEEE, pp.1-4, 2017.
20. H. Jiang and E. Learned-Miller, arXiv preprint arXiv:1606.03473 (2017).
21. R. B. Girshick, J. Donahue, T. Darrell, and J. Malik, CVPR, pp.580-587, 2014.
22. R. B. Girshick, ICCV, pp.1440-1448, 2015.
23. S. Ren, K. He, R. Girshick, and J. Sun, NIPS, pp.91-99, 2015.
24. R. Sa, W. Owens, R. Wiegand, M. Studin, D. Capoferri, K. Barooha, A. Greaux, R. Rattray, A. Hutton, J. Cintineo, and V. Chaudhary, 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp.564-567, 2017.
25. W. Fan, H. Jiang, L. Ma, J. Gao, and H. Yang, 10th International Conference on Digital Image Processing (ICDIP), Vol. 10806, p.108065A, 2018.
26. D. Zhang, W. Zhu, H. Zhao, F. Shi, and X. Chen, Medical Imaging: Image Processing, International Society for Optics and Photonics, Vol. 10574, p.105741U, 2018.
27. E. Yahalomi, M. Chernofsky, and M. Werman, Intelligent Computing-Proceedings of the Computing Conference, pp.971-981, Springer, Cham, 2019.
28. X. Gao, L. Peng, and M. Sun, J. Nucl. Med., 60, 284 (2019).
29. J. E. Espinosa, S. A. Velastin, and J. W. Branch, 9th International Conference on Pattern Recognition Systems (ICPRS), Valparaíso, Chile, 2018.
30. C. C. Tsai, C. K. Tseng, H. C. Tang, and J. I. Guo, Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), IEEE, pp.1605-1608, 2018.
31. M. Zhang, C. Gao, Q. Li, L. Wang, and J. Zhang, Multimed. Tools Appl., 77, 3303 (2018).
32. L. Wu, H. Li, J. He, and X. Chen, J. Phys.: Conf. Ser., 1176, 032045 (2019).
33. H. Xie, Y. Chen, and H. Shin, Appl. Intell., 49, 1200 (2019).
34. D. Siegmund, A. Prajapati, F. Kirchbuchner, and A. Kuijper, International Workshop on Artificial Intelligence and Pattern Recognition, pp.77-84, Springer, Cham, 2018.
35. B. Wei, K. Hao, X. S. Tang, and L. Ren, International Conference on Artificial Intelligence on Textile and Apparel, pp.45-51, Springer, Cham, 2018.
36. Z. Lin, Z. Guo, and J. Yang, Proceedings of the 11th International Conference on Machine Learning and Computing, ACM, pp.429-433, 2019.
37. Opt Machine Vision Web. http://www.optmv.net/index.php/Productionservice/pro_cdetail/pro_id/64 (Accessed July 18, 2019).
38. E. Liepinsh, J. Kuka, and M. Dambrova, J. Pharmacol. Toxicol., 67, 98 (2013).
39. S. Umeyama and G. Godin, IEEE Transactions on Pattern Analysis and Machine Intelligence, 26, 2338 (2004).
40. M. D. Zeiler and R. Fergus, European Conference on Computer Vision (ECCV), p.818, 2014.
41. K. Simonyan and A. Zisserman, International Conference on Learning Representations (ICLR), 2015.
42. T. Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, European Conference on Computer Vision, p.740, 2014.
43. M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, Int. J. Comput. Vision, 88, 303 (2010).
44. J. J. Hernández-López, A. L. Quintanilla-Olvera, J. L. López-Ramírez, F. J. Rangel-Butanda, M. A. Ibarra-Manzano, and D. L. Almanza-Ojeda, Procedia Technology, 3, 196 (2012).
45. P. Kovesi, Australian Pattern Recognition Society Conference: DICTA, p.309, 2003.
46. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, Proceedings of SPIE, Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning, Vol. 6064, p.606414, 2006.
47. Z. Xiao, L. Pei, F. Zhang, L. Geng, J. Wu, J. Tong, J. Xi, and P. O. Ogunbona, Text. Res. J., 88, 2641 (2018).
