
EXPERIMENTAL BEHAVIOR MONITORING OF COLD-FORMED STEEL MEMBERS BY FEATURE TRACKING

HAFİF ÇELİK ELEMANLARIN DENEYSEL DAVRANIŞLARININ ÖZELLİK İZLEME YÖNTEMİ İLE TAKİBİ

MERT OZAN AYDOĞAN

ASST. PROF. DR. BURCU GÜLDÜR ERKAL


Supervisor

Submitted to the
Graduate School of Science and Engineering of Hacettepe University
in Partial Fulfillment of the Requirements
for the Award of the Degree of Master of Science
in Civil Engineering

May 2022
ABSTRACT

EXPERIMENTAL BEHAVIOR MONITORING OF COLD-FORMED STEEL MEMBERS BY FEATURE TRACKING

Mert Ozan Aydoğan


Master of Science, Civil Engineering
Supervisor: Asst. Prof. Dr. Burcu GÜLDÜR ERKAL
May 2022, 48 pages

Behavioral properties of cold-formed steel (CFS) members, the use of which has increased
in recent years, are determined by standard tests, including axial compression and bending
tests. During experimental testing, various sensors record physical changes of the test
specimen, such as local displacements and strains. The local displacements are generally
measured using linear variable differential transformers, commonly known as LVDTs. The
LVDTs are positioned carefully on the test specimen and connected to a data acquisition
device. Even though LVDTs are quite common in experimental testing, they have various
disadvantages. The LVDT setup process requires particular expertise and is time-consuming.
Besides, the entire data acquisition system is usually pricey. In this research, a novel
image processing-based displacement measurement procedure, which is more economical
and sustainable, is developed to create an alternative hands-off and easy-to-implement
displacement measurement method. First, the test videos recorded from a fixed position
during the axial compression tests of CFS columns have been used to extract the test
member’s displacements automatically. The obtained displacement tracking results are then
compared with the LVDT measurements recorded during testing to verify the accuracy of
the developed methodology. The general results showed that video-based image processing
is a promising method for displacement recording. However, the proposed method fails to
capture the small displacement values recorded at the beginning of the axial compression
tests, as the image resolution is not sufficient. Nevertheless, the determined image-based
displacement values match the recorded displacement values as the displacements increase.

Keywords: Cold-Formed Steel, Axial Compression Tests, LVDTs, Image Processing,
Feature Tracking

ÖZET

HAFİF ÇELİK ELEMANLARIN DENEYSEL DAVRANIŞLARININ ÖZELLİK İZLEME YÖNTEMİ İLE TAKİBİ

Mert Ozan Aydoğan


Master of Science, Civil Engineering
Supervisor: Asst. Prof. Dr. Burcu GÜLDÜR ERKAL
May 2022, 48 pages

The behavioral properties of cold-formed steel members, the use of which has increased in
recent years, are determined by standard tests, including axial compression and bending
tests. During the tests, physical changes such as displacements and strains can be measured
using various sensors. Local displacements are generally measured using linear variable
differential transformers, known as LVDTs. The LVDTs are carefully placed on the test
specimen and connected to a recording device. Even though LVDTs are widely used in
experimental testing, they have several disadvantages: the setup process requires particular
expertise and is time-consuming, and the entire data acquisition system is usually expensive.
In this research, to create an alternative displacement measurement methodology, a novel
image processing-based displacement measurement method is developed that is remote,
easy to implement, and more economical and sustainable than the existing methods. The
test videos recorded from a fixed position during the axial compression tests of cold-formed
steel members are used to automatically measure the displacements of the test member. The
obtained displacement data are compared with the LVDT measurements recorded during
the tests in order to assess the accuracy of the developed methodology. The results show
that the developed image processing-based method is a promising method for displacement
calculations. However, since the image quality is not sufficient, the proposed method gives
erroneous results when measuring the small displacement values that occur at the beginning
of the axial compression tests. As the displacement values increase, the determined
image-based displacement values coincide with the recorded displacement values.

Keywords: Cold-Formed Steel, Axial Compression Tests, LVDTs, Image Processing, Feature Tracking

ACKNOWLEDGEMENTS

First of all, I would like to express my gratitude to Asst. Prof. Burcu GÜLDÜR ERKAL.
Without the data that she provided during the thesis study, her supervision, and her valuable
advice, this study would not have been possible.

I would also like to thank my committee members Prof. Dr. Ahmet TÜRER, Assoc.
Prof. Alper ALDEMİR, Assoc. Prof. Baki ÖZTÜRK, and Assoc. Prof. Dr. Mustafa K. KOÇKAR
for giving me the opportunity to defend my master's thesis.

Lastly, I would like to thank my family for their generous support and endless patience
during the completion of my education. This accomplishment would not have been possible
without their limitless support and encouragement.

Endless thanks ...

Mert Ozan AYDOĞAN


May 2022, Ankara

CONTENTS

ABSTRACT
ÖZET
ACKNOWLEDGEMENTS
CONTENTS
TABLES
FIGURES
ABBREVIATIONS
1. INTRODUCTION
   1.1. Scope of The Thesis
   1.2. Literature Survey
   1.3. Outline of the Thesis
2. EXPERIMENTAL PROGRAM
   2.1. Available CFS Members
   2.2. Axial Compression Tests
   2.3. Video Collection Procedure
3. FEATURE TRACKING FOR DISPLACEMENT MEASUREMENTS
   3.1. Image Processing Towards Feature Tracking
   3.2. Feature Tracking
   3.3. Extracting Measurement Data from Tracking Results and Display
4. RESULTS AND DISCUSSION
5. CONCLUSION

TABLES

Table 2.1 Section properties of CFS Ω members used in axial compression tests
Table 3.1 Rotation Values (Degrees)
Table 4.1 Equalization Coefficients
Table 4.2 Area Comparison in Percentage
Table 4.3 Operation Time

FIGURES

Figure 2.1 Ideal cross-section dimensions and weight per meter of the investigated CFS members
Figure 2.2 Test Setup
Figure 2.3 LVDT Placements
Figure 3.1 Mechanism of Test Number 110-02
Figure 3.2 Successive Image Detection
Figure 3.3 Rotation Calculation
Figure 3.4 Object Detection Steps
Figure 3.5 Motion-Based Multiple Object Tracking Method Algorithm
Figure 3.6 Camera Parameter Extraction Algorithm
Figure 3.7 (a) Detected objects in the initial frame (b) The detected objects in the final frame (deformed shape)
Figure 3.8 Top Plate Tracking Algorithm
Figure 3.9 Noise Reduction Visualization
Figure 3.10 Computer vision applications
Figure 3.11 (a) Comparison of Data Size Reduced, (b) Multiplied with Equalization Coefficient, (c) Only Multiplied with Equalization Coefficient Version
Figure 3.12 Graphical Comparison of Specimen 110-02
Figure 3.13 Complete Algorithm Chart
Figure 4.1 Graphical Comparison of Specimen (a) 90-01 (b) 90-02
Figure 4.2 Graphical Comparison of Specimen (a) 100-02 (b) 100-03
Figure 4.3 Graphical Comparison of Specimen (a) 110-01 (b) 120-01
Figure 4.4 Graphical Comparison of Specimen (a) 120-02 (b) 120-03

ABBREVIATIONS

LVDT : Linear Variable Differential Transformer


DSLR : Digital Single Lens Reflex
CFS : Cold-Formed Steel
DIC : Digital Image Correlation
MSE : Mean Square Error
MP : Megapixel
1D : One-Dimensional
2D : Two-Dimensional
3D : Three-Dimensional
kNN : k-Nearest Neighbor
mm : millimeter
px : pixel
sec : second

1. INTRODUCTION

It is a common interest to track the behavior of a test specimen during experiments in order
to extract practical information on specimen behavior under specific loading conditions.
One of the principal variables measured during experiments is displacements at critical
locations of the tested specimens. Generally, to record displacements during experiments,
specialized sensors are utilized. The most common of these sensors are linear variable
differential transformers, referred to as LVDTs. Even though LVDTs are an accurate and
relatively simple way of measuring linear deformations, installation and data acquisition
processes are tedious and require a certain level of expertise. At the same time, physical
interruptions such as damage or slip-off may disrupt test continuity and increase test costs.
Thus, developing a low-cost and easy-to-implement method for automatically performing
displacement measurements would be beneficial. Furthermore, with such a method, the
overall displacement field of the specimen can be obtained instead of only the displacements
at the LVDT contact points, and the strain profile can also be derived.

1.1. Scope of The Thesis

This study aims to develop a method of comparable accuracy that automatically extracts
linear deformations, with high accuracy, from videos recorded with low-cost cameras during
axial compression tests. Several axial compression tests performed on cold-formed steel members
with various lengths are used as benchmarks for developing this method. Displacements are
recorded using several LVDTs during the mentioned axial compression tests, and at the same
time, videos are captured. The MATLAB [1] platform is used to develop the algorithms
required to track and extract displacements during axial compression tests automatically.
First, the black dots placed on each CFS member are extracted by segmentation in the
captured videos. Following the segmentation, extracted objects composed of black dots
are tracked to automatically examine CFS members’ deformation behavior at the LVDT
locations. The recorded displacements are later compared with the extracted ones to
investigate the accuracy of the developed method.
The developed camera-based displacement extraction method has the following advantages
compared to the traditional test setups containing LVDTs. Generally, LVDTs have to be
placed and protected carefully during testing; on the other hand, the regular distribution of
the black dots on each CFS member is sufficient for extracting member deformations. In
addition, the LVDTs only measure displacements in one dimension at the point of contact.
However, the developed method can measure the two-dimensional location changes of the
detected black dots in the image plane.

1.2. Literature Survey

In the literature, several studies have focused on the usage of image-based photogrammetric
methods for measuring various geometric variables. In many of these studies, the
measurements are recorded with specially manufactured equipment that takes the important
factors for image processing methods into account. For example, lighting conditions are
extremely important for collecting high-quality data for further processing. Therefore,
factors such as the camera, the light source, and the lighting technique, as well as the
utilized image processing techniques, classification techniques, and learning algorithms,
could highly affect the accuracy of the obtained image processing results while performing
feature tracking [2]. The most important factors affecting the
photogrammetry-based measurement accuracy are the camera quality and the smoothness
of the object surface [3]. The results obtained from the tests performed in [4] showed that
the lighting conditions, camera quality, and distance from the camera affect the accuracy of
the collected data. In another study, performed on concrete beams, uneven light conditions
adversely affected the quality of the collected close-proximity data [5].

A general method used for feature tracking in changing environments is digital image
correlation (DIC). DIC is an optical method for determining the displacement / deformation
of a structural element / material exposed to external loading that is precise, non-contact,
and non-interferometric. A comprehensive summary on image correlation for shape, motion,
and deformation measurements is given in [6]. It should be noted that DIC is not only used for

performing one-dimensional (1D) measurements like LVDTs; it is very common to perform
two-dimensional (2D) measurements in the image plane by using DIC [7]. In [8], upsampled
cross-correlation, an enhanced template matching algorithm, was utilized and further developed
into a software package for real-time displacement extraction from video pictures.
In [9], a study was performed where the measurements for DIC were recorded using special
lenses; this study reported that the measurement results obtained from simple lenses are
commonly more precise than those from the special lenses. In [10], a method was developed
for measuring multi-point displacement responses utilizing a digital image processing
methodology; a commercial digital camcorder is used to capture digital images, and the
sensor is used to measure remote displacement reactions while keeping convenience and cost
in mind. Since a commercial camera with standard lenses is used in the present study, [9]
and [10] are important examples that prove low-cost regular cameras could be used for
performing high-accuracy feature tracking.

Deformations are not the only entities that could be extracted and tracked from images.
The DIC method can be used to measure stress and strain as well as displacements. The
comparison of motion between two images could be used as an artificial strain gauge.
In [11], experiments were primarily conducted on clay beams to perform sub-pixel
calculations. In another study, the elastic properties of the solid member were computed
using the image data collected during Brazilian disk experiments [7]; a method is then
developed for predicting the response of the investigated member under external loading,
and a general way of extracting displacement fields from images collected at different
instants during an experiment is also presented.

Even though image processing techniques are often utilized for extracting and tracking
static entities, they could also be used for monitoring dynamic systems. In [12], vibration
information was extracted from collected image data, proving that image processing could be
effectively used for performing dynamic measurements; the developed system is capable of
performing measurements with mm accuracy. The measurement accuracy could be further
improved by selecting high frame rates. The usage of collected images for performing
dynamic measurements reduces experiment costs by limiting the sensor costs. A 2-storey
structure was tested dynamically in a laboratory environment, and test results were obtained
at a lower cost compared to the previously used equipment [13]. The deformations and
vibrations of the object could also be calculated by performing Fourier analysis on the
brightness changes in the collected videos [14]. This method has been applied to high-speed
videos taken with off-the-shelf smartphones.

Images and videos record 2D data. However, if the data is collected from various angles
covering the surface of an object, then three-dimensional (3D) information could also be
extracted from the recorded images and videos. It is possible to switch from two-dimensional
measurements to a three-dimensional model, even though it is a difficult task, by developing
appropriate algorithms. This provides a method that is capable of collecting data that
represents surface changes for the entire surface of an investigated object. This method is also
cheaper and more user friendly compared to using sensors mounted on the test specimens for
data collections given that low-cost cameras are utilized and continuous data is recorded. It
should be noted again that the camera location and proper lighting conditions are important
for recording image data that could be effectively used for further processing [15]. In [16],
low-cost cameras were used instead of 3D sensors to develop algorithms for marked-object
tracking, displacement area tracking, and visual mapping. Learning-based systems could be
used for creating 3D representations from 2D visuals; the generated 3D representations could
then be used for measuring the deformations that occur on the investigated objects [17]. In
another study, the change in volume of an object was calculated by comparing the before
and after 3D representations obtained by two cameras [18].

Image-based feature tracking could also be effectively used in large-scale structures such as
bridges. Even small deformations that occur in bridges can be determined from sequential
images or videos recorded from a distance. The key point of using images for deformation
extraction is to have a continuous record within the investigated time period [19]. If the displacements
of bridges cannot be completely tracked from the land, displacement measurements can be
performed using the measurements recorded from the sea as the vibrations embedded in
the collected records can be determined and excluded successfully [20]. Deformations are
not the only entity that could be extracted from the recordings of bridges; the collected
images could also be used for performing model identification, damage detection, and cable
strength calculations. The image data could be recorded by using a wide range of equipment,
ranging from high-resolution digital cameras to mobile phones. The most crucial point when
collecting image data with various equipment is that the recording must always be made from
a fixed point [21]. Otherwise, systematic errors are introduced to the setup that make extracting
meaningful information from the recorded image data impossible.

Video recordings or live broadcasts are used for tracking systems. With appropriate filtering
methods, the desired objects are separated from the background and tracked. For this
separation process, the static background must be recorded; this can be done using
surveillance cameras [22] and [23]. Algorithms can also track the corners of objects
according to their contrast. Thanks to these methods, it is possible to predict and follow a
figure that has started to be tracked in a crowd, depending on its speed ([24] and [25]).
Tracking systems are widely used and new algorithms are emerging every day; even old,
poor-quality cameras can be used for tracking easily [26]. When making choices, selecting
the method that is most suitable for the experimental setup to be created will greatly affect
the accuracy of the results.

1.3. Outline of the Thesis

In the study carried out, unlike most of the studies presented in the literature, the
deformation information is obtained automatically by tracking the black dots placed on
bright objects, the CFS members, from image data collected with a low-cost camera under
poor lighting conditions. The videos used for feature tracking in this work are recorded
during axial loading tests performed on omega-sectioned CFS members. Algorithms capable
of performing feature segmentation and tracking are developed instead of using the DIC
method, which is commonly preferred in the literature.

The structure of the thesis is as follows. First, the experimental program, including the
available CFS members, the performed axial compression tests, and the video collection
procedure, is presented. Later, the developed feature tracking method is discussed. The
results are then presented and discussed. Finally, the conclusions and future work are laid out.

2. EXPERIMENTAL PROGRAM

As mentioned in the introduction, the aim of this work is to develop a camera-based method
that is capable of extracting and tracking linear deformation information from collected
videos. In order to prove the validity and test the accuracy of the developed method, the
extracted displacement information has to be compared with physically recorded data. In
this research, to achieve this, the data recorded during axial compression tests on CFS
Ω sections are used. The details of the utilized CFS members, the axial compression test
equipment, and the video collection procedure are explained in the following subsections.

2.1. Available CFS Members

In order to investigate the behavior of CFS Ω sections with geometric imperfections under
axial compression, axial loading tests are performed on nine CFS Ω members. The section
names and lengths of these nine Ω members are given in Table 2.1 below. In addition, the
ideal cross-section drawing is shown in Figure 2.1. The thickness, web width, flange width,
and lip length are respectively 1.2 mm, 42 mm, 59 mm, and 14 mm for all nine Ω members.

No.  Specimen ID   Type   Thickness (mm)   Web (mm)   Flange (mm)   Lip (mm)   Length (mm)
1    O-900-01      Ω      1.2              42         59            14         900
2    O-900-02      Ω      1.2              42         59            14         900
3    O-1000-02     Ω      1.2              42         59            14         1000
4    O-1000-03     Ω      1.2              42         59            14         1000
5    O-1100-01     Ω      1.2              42         59            14         1100
6    O-1100-02     Ω      1.2              42         59            14         1100
7    O-1200-01     Ω      1.2              42         59            14         1200
8    O-1200-02     Ω      1.2              42         59            14         1200
9    O-1200-03     Ω      1.2              42         59            14         1200

Table 2.1 Section properties of CFS Ω members used in axial compression tests.

Figure 2.1 Ideal cross-section dimensions and weight per meter of the investigated CFS members.

2.2. Axial Compression Tests

In the test setup used for CFS members’ axial compression tests, the specimens are placed
between two steel frames created by using the IPE330 section. The frames are connected
by four steel bars with a diameter of 50 mm. The bottom frame is obtained by combining
two 3000 mm long IPE300 profiles at 5 points. On the other hand, the upper frame consists
of two pieces of 1950 mm long IPE300 profiles connected at three points. The test setup is
given in Figure 2.2.

Uniaxial loads of up to 30 tons can be applied to the testing frame. A hydraulic cylinder
(Enerpac RR7513) is mounted to the upper frame to execute axial compression tests. A
load cell (Esit HSC-V) with a 60-ton capacity is installed beneath the hydraulic cylinder to
monitor the applied load during column tests. Two steel plates are placed on top and bottom
of each specimen to fix the test members and to ensure load transfer. Load is applied at a
constant speed to achieve an axial displacement of 0.6 mm per minute. In total, 13 LVDTs are
used. Nine of them are placed to measure the displacements of the test specimen. Six of these
nine LVDTs are placed on the flanges to measure horizontal displacement, and three LVDTs
are placed on the web of the specimen to measure displacement perpendicular to the plane.
One of the remaining four LVDTs is placed at the lower plate to measure displacement in the
test setup. The final three LVDTs are placed at the upper plate. They are used to measure the

Figure 2.2 Test Setup

displacement of the load cell. Figure 2.3 shows the locations of the eight LVDTs positioned
on the same plane during axial compression tests. Four of the remaining five LVDTs are
located at the back of the test specimen, and the last one is attached to the bottom plate.
All measurements are recorded and processed using the Kyowa UCAM550 Data Acquisition
System.

2.3. Video Collection Procedure

Black dots with a size of 10 mm are placed onto each test specimen in order to enable
segmentation and tracking. These dots are arranged in three vertical lines from top to bottom:
two lines on the flanges and one line on the web. There is a 100 mm distance between two
successive black dots in a line. The tracking objects are placed so as to create high contrast
on the element and to facilitate the measurement of displacements.

Figure 2.3 LVDT Placements

Videos are recorded with a Nikon D7100 DSLR camera fixed on a tripod. This camera has
a 24 MP CMOS sensor with a still-image resolution of 6000 by 4000 pixels, and it can
record 1920 px by 1080 px video at 30 frames/sec. The camera is equipped with a Nikon
AF-S DX NIKKOR 18-105mm f/3.5-5.6G lens. Each video is recorded at a wide focal length
of 18 mm, which creates distortion; thus, before any video processing, the lens distortions
must be corrected. In this example, the specimen's height of 1100 mm is represented by
595 px, which means that every pixel corresponds to 1.85 mm.

3. FEATURE TRACKING FOR DISPLACEMENT MEASUREMENTS

The general aim of the study is to measure displacement with a wider scope than LVDTs
provide. For this purpose, code was developed in MATLAB. Member 110-02 (Figure 3.1)
has been used as a sample. In the first step, object features are extracted using image and
video processing techniques: every frame is imported into MATLAB, and the objects are
separated for tracking using different techniques. In the second step, these objects are
tracked in every frame, and the changes in their properties are observed. In the third phase,
all the properties are compared step by step and the displacement calculations are made;
additionally, algorithm optimizations and data transformations are performed for this task.
In this project, MATLAB's Image Processing Toolbox, Computer Vision Toolbox, and Curve
Fitting Toolbox are used.

Figure 3.1 Mechanism of Test Number 110-02

3.1. Image Processing Towards Feature Tracking

Image processing starts with reading the image from the recorded graphics file. This process
converts the imported image files and records them in matrix format (2D data). In this
study, since videos are used, first the region of interest, in this case the CFS member, is
cropped and saved as a new video. Later, the successive images that compose the video are
read separately and stored in matrix form to perform image processing towards feature tracking.
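A minimal MATLAB sketch of this step is given below; the file name and crop window are illustrative placeholders, not the values used in this study.

```matlab
% Read the recorded test video frame by frame and keep only the
% region-of-interest (file name and ROI below are hypothetical).
v   = VideoReader('test_video.mp4');
roi = [350 20 420 1040];              % [x y width height] crop window (assumed)

frames = {};
while hasFrame(v)
    frame = readFrame(v);             % H-by-W-by-3 uint8 RGB matrix
    frames{end+1} = imcrop(frame, roi);
end
```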

In this work, the tracking objects, which the developed method must detect and track, are
black. The easiest way to track these black dots located on the surface of the investigated CFS
members is to first separate and remove the background, and then locate and track the black
dots. To separate these objects, represented as black dots, the images are initially transformed
into binary form using the built-in MATLAB functions described in [27] and [28]. Binarization
is crucial since this procedure makes it possible for the algorithms to understand which regions
of the image belong to the tracked objects when the images are represented with ones and zeros.

In order to achieve this, the images are first transformed into grayscale. Each image is
composed of three separate layers storing red, green, and blue values (RGB values). The
RGB color model is an additive color model in which the red, green, and blue primary colors
of light are added together in various ways to reproduce a broad array of colors. An RGB
file consists of composite layers of red, green, and blue, each coded on 256 levels from
0 to 255. The grayscaling method computes a weighted average of the red, green, and blue
values and stores these new values as luminance. In a grayscale image, every pixel contains
a value ranging from 0 (black) to 255 (white).

Once the images are converted into grayscale, binarization is applied with a threshold value
of 0.09, determined by performing an incremental sensitivity analysis on the images extracted
from the recorded videos. This threshold separates objects within an image according to their
luminance values. However, noise and other dot-like objects in the images cause problems,
since binarization alone could not be directly used for automatically extracting the tracking
objects. Thus, other separation techniques are investigated.
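The grayscale conversion and thresholding described above can be sketched as follows; the inversion in the last line is an assumption made so that the dark dots become the foreground of the binary image.

```matlab
gray = rgb2gray(frame);          % RGB -> luminance
bw   = imbinarize(gray, 0.09);   % threshold of 0.09, as reported above
bw   = ~bw;                      % invert: the dark tracking dots become 1s (assumed)
```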

The small objects are neglected by using the built-in remove-small-objects-from-binary-image
method. This algorithm works on a pixel-area basis and neglects objects below a certain
threshold; thus, the area of the tracked feature/object is important. It should be noted that
objects that are considered noise in high-resolution images could be represented larger
than the real objects in low-resolution images. Therefore, the threshold selection is very
important and case specific, as it depends on the image resolution. For example, a normal
DSLR camera with a 24 MP image capturing capacity could capture images with a size of
6000 by 4000 pixels, but the same camera can only record video at 1080p resolution. Thus,
the images within a video (1920 by 1080 pixels) are nearly 12 times smaller than the images
taken separately. Because of this, the utilized threshold values have to be optimized.
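A sketch of the small-object removal step with MATLAB's bwareaopen; the 50-pixel area threshold is an assumed value chosen only for illustration, since, as noted above, it must be tuned to the image resolution.

```matlab
% Drop connected components smaller than 50 pixels (assumed threshold).
bwClean = bwareaopen(bw, 50);
```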

Once the object detection step is completed, physical properties of the detected objects,
such as area, centroid, diameter, perimeter, eccentricity, etc., are exported using the
built-in image regions method. These properties are then utilized to separate the real objects
from pseudo-objects. During this separation, the areas of the tracking objects and the
object eccentricities are observed. While the shapes close to a circle, or even elliptical, are
kept for tracking, non-circular objects such as lines and rectangles are eliminated. It
is only possible to observe the rotations and movements of the real objects once all the
non-circular/non-elliptical objects are removed.
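The property extraction and shape filtering could look like the sketch below; the 0.9 eccentricity cut-off is an assumed value, not the one used in the thesis.

```matlab
% Export region properties and keep only near-circular/elliptical blobs.
stats = regionprops('table', bwClean, 'Area', 'Centroid', 'Eccentricity', ...
    'MajorAxisLength', 'MinorAxisLength', 'Orientation');
% Eccentricity is 0 for a circle and tends to 1 for a line segment.
dots = stats(stats.Eccentricity < 0.9, :);   % 0.9 is an assumed cut-off
```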

If the movement of a certain object is to be tracked in two successive images, the
tracked object needs to have the same label in each image. However, the built-in functions
in MATLAB fail to provide matching labels for the objects detected in successive images;
the same objects in different images are labeled differently by MATLAB's labeling algorithm.
To overcome this problem, the objects in two successive images are matched with the
k-Nearest Neighbor classification algorithm [29]. This algorithm calculates distances
between the investigated objects by taking the user-selected parameters into account.
The two objects closest to one another in a certain region are then paired.
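A minimal sketch of this nearest-neighbor matching between two frames (knnsearch is part of the Statistics and Machine Learning Toolbox; the variable names are illustrative).

```matlab
% centroidsPrev and centroidsCurr are N-by-2 arrays of [x y] dot centers.
% idx(k) is the index of the previous-frame dot closest to dot k of the
% current frame, which can then be reused as its label.
idx = knnsearch(centroidsPrev, centroidsCurr);
```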

3.2. Feature Tracking

Feature tracking can only be performed if the object labeling process described in the
previous section is completed successfully. To perform feature tracking, an algorithm is
developed that specifically detects the location changes of each tracked object within two
successive images. The center locations of the detected objects are extracted for each image.
In Figure 3.2-a,b, object detection in consecutive frames is shown, and in Figure 3.2-c, the
comparison of the centers is shown. Later, the linear distance between the two points
representing the center points of the same object in two successive images is calculated for
all the detected objects. This process is then repeated for the entire video composed of
successive images. It has been observed that some of the detected objects (dots) move and
rotate perpendicular to the image plane, and this results in area and eccentricity changes. In
order to solve this problem, the major axis lengths of each object in two successive images
are investigated. The length changes in both the major and minor axes of the tracked objects
are then used to compute the out-of-plane rotations. First, the major axis lengths of the
objects are proportioned (the factor), and both the major and minor axis lengths in the
successive image are multiplied by the obtained factor so that the major axes of the tracked
objects have the same length. The object's minor axis length in the successive image is then
divided by the major axis length; the arccosine of this value is used to compute the
perpendicular rotation with respect to the image plane.
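A sketch of this out-of-plane rotation estimate for one tracked dot, under the stated assumption that the true major axis length does not change between frames.

```matlab
scale  = majorPrev / majorCurr;        % factor equalizing the major axes
minorS = minorCurr * scale;            % rescaled minor axis length
theta  = acosd(minorS / majorPrev);    % out-of-plane rotation in degrees
```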

The accuracy of this rotation approach was tested on printed and rotated objects. Figure 3.3
shows a version rotated by 30°, with the boundaries of the non-rotated version drawn over it.
The described method is used to determine the image's rotation automatically. The obtained
results, given in Table 3.1, matched well with the real rotations.

ID        1      2      3      4      5      6      7      8      9      10     11     12     13     14     15     16
Rotation  30.01  30.32  29.36  30.01  31.17  30.05  28.73  29.92  29.34  29.72  28.63  29.34  30.03  30.02  30.10  29.83

Table 3.1 Rotation Values (Degrees)

Video processing techniques have been improved over the last decades as the time required
to perform image processing has been reduced tremendously with advancing technology.

Figure 3.2 Successive Image Detection

As mentioned above, in order to perform image processing effectively, first the investigated
videos should be reduced to the images that compose them. The image processing steps
described above (binarization and segmentation) need to be applied frame by frame.

On the other hand, in video processing all the image processing steps can be applied
without exporting the images individually. Besides, since it allows tracking the changes that
occur during a certain time frame, video processing is an efficient method for extracting the
deformations of the objects tracked in this project.

Figure 3.3 Rotation Calculation

First, every frame (separate image) of the video is rotated according to the skew of the
image, computed based on the checkerboard image attached to the testing frame; the video
is then cropped and accelerated 10 times. For this task, one frame out of every 10 frames is
read, and the selected frames are saved as a new video. This is performed to reduce the
excessive video processing duration, as the management of the function parameters becomes
much easier for shorter videos.
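A sketch of this frame decimation step; the input and output file names are illustrative.

```matlab
vIn  = VideoReader('cropped.mp4');
vOut = VideoWriter('reduced.mp4', 'MPEG-4');
open(vOut);
k = 0;
while hasFrame(vIn)
    frame = readFrame(vIn);
    k = k + 1;
    if mod(k, 10) == 1                 % keep one frame out of every 10
        writeVideo(vOut, frame);
    end
end
close(vOut);
```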

It should be noted that built-in cropping functions generally tend to reduce the resolution of
the initial video. However, since the aim of this research is to track the detected objects to
extract deformation information, the initial video resolution is kept intact. Once the video
is both reduced in size and cropped, grayscaling and binarization are completed. Later,
the data for the labeled regions are extracted in MATLAB to obtain the necessary physical
properties. The embedded algorithm calculates properties for labeled regions in binary
images and exports quantities such as the bounding box, label matrix, and centroid. However,
as mentioned above, obtaining the correct labels (the same labels at each successive step) is
not straightforward. In order to deal with this issue, an object-based definition has been used.

A motion-based multiple object tracking method is used, and the outcomes are modified with
respect to the extracted object properties. The steps of the utilized motion-based multiple
object tracking method are visualized in Figure 3.4-a to e. The object detection method
consists of the following steps: conversion to grayscale (Figure 3.4-a), binarization (Figure
3.4-b), noise reduction (Figure 3.4-c), and noise canceling (Figure 3.4-d). Then, by using
the statistics-for-labeled-regions method (Figure 3.4-e), the segmented objects are re-labeled
and their properties are extracted. Later, the Kalman filter, the tracking and motion prediction
method explained in [30] and [31], has been used. This method has two different
configurations: one uses constant velocity and the other constant acceleration. In the constant
acceleration option, the velocity must be increasing or decreasing; however, in this test setup
the velocity was stable, so the constant velocity option is more suitable for this work.
Without the Kalman filter, MATLAB labels the objects differently at each successive frame.
Even when the Kalman filter is used, several of the tracked objects sometimes vanish; these
vanishing objects are considered noise. In order to neglect this noise, the
assign-detections-to-tracks method for multi-object tracking is used ([32] and [33]). The
algorithm of this method is given in Figure 3.5.
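A schematic of the per-frame predict/assign/correct cycle using MATLAB's constant velocity Kalman filter and assignDetectionsToTracks; the numeric parameters and variable names (tracks, detections) are illustrative, not the values tuned for this setup.

```matlab
% Track initialization (once per newly detected dot); illustrative noise values.
kf = configureKalmanFilter('ConstantVelocity', centroid, [200 50], [100 25], 100);

% Per-frame cycle over existing tracks and current detections (M-by-2 centroids).
cost = zeros(numel(tracks), size(detections, 1));
for i = 1:numel(tracks)
    predict(tracks(i).kalmanFilter);                        % predicted dot location
    cost(i, :) = distance(tracks(i).kalmanFilter, detections);
end
[assignments, unassignedTracks, unassignedDetections] = ...
    assignDetectionsToTracks(cost, 20);                     % 20 = non-assignment cost
for k = 1:size(assignments, 1)
    correct(tracks(assignments(k,1)).kalmanFilter, detections(assignments(k,2), :));
end
```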

Once the labeling issue is resolved, the detected objects are tracked within every frame. The
developed method updates the change in object properties and records them in a table at
each video frame. Objects that disappear for more than 30 frames are deleted to prevent
confusion. If new objects occur, they are added to the recording tables with new labels.

3.3. Extracting Measurement Data from Tracking Results and Display

The final step of the entire feature tracking process is to analyze and visualize the obtained
results. The results obtained with the method described in the previous section are stored in
a structured data format. This format allows manipulating the recorded data, as it eases
user control. The properties that are exported and recorded are the object area, the
displacements along the x- and y-axes, the orientation, and the major and minor axis lengths.

Figure 3.4 Object Detection Steps: (a) grayscale, (b) binarization, (c) noise reduction, (d) noise canceling, (e) segmented object

Figure 3.5 Motion-Based Multiple Object Tracking Method Algorithm

For each property, the recorded data is formatted such that the rows represent different
objects and the columns represent the results computed in each frame. By using these
recorded tables, the frame-by-frame differences for each property can be obtained. It should
be noted that all the displacement calculations are performed in image coordinates; the
obtained results should be converted to the world coordinate system to obtain results
comparable to the LVDT recordings.

In order to convert image coordinates to world coordinates, different samples (including the
checkerboard image) are extracted from the videos and then fed into the camera calibration
tool of MATLAB. The algorithms used in this tool are discussed in [34] and [35]. MATLAB
calculates the distortions and creates matrices for both rotation and translation ('R', 't') with
the help of the checkerboard (given that the checkerboard's individual box dimensions
are known). The checkerboard box size used in the axial compression test setup is 22 mm.
However, it should be noted that the obtained rotation and translation matrices cannot be used
on cropped images. The flowchart for the algorithm can be seen in Figure 3.6. When an image
is cropped, MATLAB considers it a new image and places the origin at (0, 0).
Unfortunately, the world coordinate converter does not work with this newly generated image.
The solution found for this problem is to change the coordinates of the cropped image's
origin. In this way, the cropped image is treated as if it were still a part of the initial
(uncropped) image.
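A minimal sketch of the origin correction and image-to-world conversion; here 'params', 'R', and 't' are assumed to come from MATLAB's camera calibration on the checkerboard frames (22 mm squares), and 'cropOffset' is the [x y] position of the crop window's top-left corner in the original frame.

```matlab
ptFull  = trackedPoint + cropOffset;            % back to uncropped image coordinates
worldPt = pointsToWorld(params, R, t, ptFull);  % [X Y] in mm on the checkerboard plane
```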

Figure 3.6 Camera Parameter Extraction Algorithm

Another correction has to be performed to deal with the distortion present in the collected
videos resulting from the lens. Once the camera-related distortions are excluded from each
individually processed image, these images can be successfully used in the world coordinate
system. Additionally, there are obstacles between the objects and the camera in some parts
of the recorded videos (researchers removing LVDTs as they become redundant during the
axial compression tests); hence, the parts of the videos including these obstacles are removed.
Once all the distortions are excluded and the images are converted to world coordinates, the
real distances can then be calculated accurately.

Figure 3.7 (a) Detected objects in the initial frame (b) The detected objects in the final frame (deformed shape).

For the visualization, the coordinates of the detected objects are exported and the objects are
marked with dots. These dots are then connected with each other by using the polynomial
curve fitting method. Figure 3.7 shows the initial (undeformed) and deformed object shapes
in the first and last frames. Polynomial curve fitting is important because the black dot
locations and the LVDT contact locations do not coincide. The generated polynomial curves
give the displacement, in millimeters, at the locations of the LVDTs.
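A sketch of how such a fitted curve can be evaluated at an LVDT height; the polynomial order and variable names are assumptions made for illustration.

```matlab
p          = polyfit(dotY, dotX, 3);   % fit through one column of dot centers (mm)
xAtLvdt    = polyval(p, lvdtY);        % lateral position at the LVDT contact height
dispAtLvdt = xAtLvdt - xAtLvdt0;       % displacement relative to the first frame
```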

To compare the experiment data with the results obtained by feature tracking, the load cell
or top plate displacement values that correspond to the LVDT readings must be known.
However, there is no tracking object placed on the load cell; hence, the load cell must be
tracked in alternative ways. Otherwise, an accurate comparison cannot be made. Since the
performed experiments are not displacement controlled, it is hard to match the recorded
videos with the experiment results. The flowchart for the top plate displacement tracking
algorithm is given in Figure 3.8.

Figure 3.8 Top Plate Tracking Algorithm

To calculate the top plate displacement of each test specimen, the two bolts on the load cell
are tracked. If these bolts can be tracked properly, then the displacement values corresponding
to each tracked frame can be computed. For this purpose, the developed tracking function
is modified. In this modified version of the tracking function, segmentation settings such as
the binarization coefficients and strel (structuring element) values are changed. The tracking
function follows both bolts, and the average of the tracking results is used to compute the
top plate displacement values. However, the obtained results turned out to be very noisy due
to the low resolution. Later, a 1-D digital filter [36] is used to reduce the noise. In Figure
3.9-a, the raw data can be seen; Figure 3.9-b shows the version with noise reduction applied
to the load cell tracking data; and Figure 3.9-c represents the version with noise reduction
performed by filtering and polynomial curve fitting.
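The 1-D digital filtering step can be sketched as a moving average with MATLAB's filter function; the window length of 10 samples is an assumed value.

```matlab
windowSize = 10;                                            % assumed window length
smoothed = filter(ones(1, windowSize)/windowSize, 1, rawTracking);
```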

Figure 3.9 Noise Reduction Visualization

Even though the top plate displacement values are obtained with the described modified
tracking function, the collected data have to be calibrated with equalization coefficients. The
equalization coefficients are calculated by neglecting the data corresponding to the first 1 mm
of displacement, because the feature tracking results are not reliable in this region. To achieve
this, the first 40 frames of data have been excluded. However, the obtained data still had a
lot of noise; therefore, the load cell tracking strategy has been modified.

A trackable object is then placed on the plate attached to the load cell. This object is
composed of two large dots and a line that connects them (as can be seen in Figure 3.10-b).
As the tracking object has changed, the function inputs and outputs, such as the minimum
and maximum object areas and the binarization coefficients, are changed. Figure 3.10 shows
what the developed algorithms detect and label (both the black dots on the test specimen and
the load cell tracking object). When the tracking is completed, the extracted displacement
values are compared with the ones recorded with the LVDTs. The initial slope region, which
excludes the first 1 mm of displacement, is then used to match these two datasets. The
equalization coefficient for each member is calculated by dividing the values of the recorded
data in the initial slope region by the corresponding tracked values. The entire tracking
dataset is then multiplied with this equalization coefficient to resize the feature tracking data.
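A sketch of this equalization step; the element-wise averaging shown is an assumption, since the thesis describes the computation only in words.

```matlab
% Ratio of the LVDT record to the tracked data over the initial slope
% region (first 1 mm of displacement excluded), then rescale everything.
c = mean(lvdtSlope ./ trackedSlope);   % assumed element-wise ratio, averaged
trackedScaled = c * tracked;
```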

Results showed that the resized values were successful, but the number of records in the
recorded data is different from that of the extracted dataset.

Figure 3.10 Computer vision applications

To compare these two datasets
numerically, the number of records in the datasets must be equal; hence, the extracted data
size has to be matched with the recorded one. The data size reduction has been made with the
one-dimensional data interpolation method using the Akima algorithm described in [37] and
[38], which performs cubic interpolation to produce piecewise polynomials with continuous
first-order derivatives. Prior to that, the time-series, fit-curve-or-surface-to-data, reshape-array,
and decrease-sample-rate-by-integer-factor methods were also tried, but the best results are
obtained with the one-dimensional data interpolation function. The data interpolated with the
one-dimensional data interpolation function can be seen in Figure 3.11-a, and the version
multiplied by the equalization coefficient can be seen in Figure 3.11-b; independently of these,
Figure 3.11-c shows the results for which only the 1D digital filtering results are multiplied
by the equalization coefficient. It can be seen that the polynomial curve fitting results
multiplied by the equalization coefficient give the best results.
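The resampling can be sketched with MATLAB's interp1 using the modified Akima ('makima') method; the variable names are illustrative.

```matlab
% Interpolate the tracked series onto the LVDT sample points so that both
% datasets have the same number of records.
trackedResampled = interp1(tTracked, trackedDisp, tLvdt, 'makima');
```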
To calculate the data accuracy, the Wilcoxon rank sum test [39] [40], the two-sample t-test,
the fit-k-nearest-neighbor-classifier and k-nearest-neighbor-classifier-loss methods, and the
compare-accuracies-of-two-classification-models-using-new-data method were tried [41]
[42] [43]. However, these methods work with a null hypothesis, which is not compatible with
this system. Then, the mean square error was tried for performing the comparison, but it gave
only a single value as the result. As the final approach, the comparison is done by performing
an area calculation. The area below a graph can be used as the entity for making comparisons
(for a force-displacement graph, this area would give the work done on the specimen). The
test data also contain a force version, but the extracted data only contain the top plate
displacement versus the local displacements; hence, the top plate displacement-versus-displacement
version is used in this work. For every LVDT, the comparisons are made one by one and
accuracy values are calculated.
comparisons are made one by one and accuracy values calculated.

(a) (b)

(c)
Figure 3.11 (a) Comparison of Data Size Reduced, (b) Multiplied with Equalization Coefficient, (c)
Only Multiplied with Equalization Coefficient Version

For the area calculation, the trapezoidal method has been used. This method approximates
integration over an interval by breaking the area down into trapezoids. Due to its algorithm,
the trapezoidal method does not require sample size equalization; that means the data can be
compared without the 1D data interpolation method, which is therefore disabled. The
graphical result for specimen 110-02 is given in Figure 3.12, and the process is applied to the
other eight samples as well, as explained in the next chapter. The flowchart for the complete
algorithm is given in Figure 3.13.
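A sketch of the area comparison with MATLAB's trapz; since each series is integrated against its own abscissa, the two datasets do not need equal lengths. The variable names are illustrative.

```matlab
areaTracked = trapz(topPlateTracked, lateralTracked);  % area under the tracked graph
areaLvdt    = trapz(topPlateLvdt, lateralLvdt);        % area under the LVDT graph
ratioPct    = 100 * areaTracked / areaLvdt;            % percentage, as compared in Table 4.2
```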

Figure 3.12 Graphical Comparison of Specimen 110-02

Figure 3.13 Complete Algorithm Chart

In this study, a laptop with an Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz processor,
16GB RAM, a 64-bit operating system, and an NVIDIA GTX1650 graphics card was used
to develop and run the algorithms. All the algorithm development and testing is conducted
in MATLAB release 2020b [1].

4. RESULTS AND DISCUSSION

In this project, 9 different specimens were used, and every specimen had 13 LVDTs mounted
in its system. One LVDT measures the bottom plate's displacement, and 3 LVDTs measure
the top plate's displacement. The remaining 9 LVDTs measure horizontal displacements:
3 of them are placed in the middle of the specimens and measure displacement perpendicular
to the image plane, while the other 6 measure lateral displacement in the image plane. These
measurements have been used while making the comparisons. To make a comparison, the
horizontal displacement must first be calculated. However, this information alone is not
enough; for a full comparison, the displacement of the load cell must also be known or
tracked. The load cell displacement, however, was different from the test displacement. To
overcome this problem, the tracked displacement is multiplied with the calculated
equalization coefficients, which are given in Table 4.1. An object placed on the load cell or
top plate during the test would help with tracking the data and prevent the need for another
object to be added later. As another alternative, the top 3 points can be followed, but any
specimen deformation should then be ignored.

Member                    110-02   90-01   90-02   100-02   100-03   110-01   120-01   120-02   120-03
Equalization coefficient  1.243    1.476   1.513   0.663    0.832    0.856    1.158    1.422    1.505

Table 4.1 Equalization Coefficients

After this multiplication, the resulting graphs can be seen in Figure 3.12, Figure 4.1, Figure
4.2, Figure 4.3, and Figure 4.4. The comparison was made with the area under each curve.
The area of a force-displacement graph gives the work done on the test specimen. In this
study, however, the outputs are displacements; because of this, the graphs are displacement
versus displacement, with units in the mm range on both axes. Even if this quantity does not
give the work done on the specimen, the comparison of the areas under the graphs of both
systems gives information about the accuracy of the system.

In Table 4.2, comparisons are given between the areas of the tracked objects' top plate
displacement-versus-displacement graphs and the areas of the corresponding LVDT graphs.
Even though some of the areas differ considerably from the others, in general the values are
all within a range of 20%, which is considered an acceptable margin of error. These
values can be used as a good starting point for image-based displacement calculations of
cold-formed steel components.

Figure 4.1 Graphical Comparison of Specimen (a) 90-01 (b) 90-02

Figure 4.2 Graphical Comparison of Specimen (a) 100-02 (b) 100-03

Figure 4.3 Graphical Comparison of Specimen (a) 110-01 (b) 120-01

First, photographs were used in the creation of the algorithm. Afterwards, due to the
available data, the operations were continued using video. Although the number of frames
per second in video is high, the displacement information obtained from photographs would
be much better due to their dimensions: a 6000 x 4000 photo frame contains 24 million
pixels, while a 1920 x 1080 video frame contains 2,073,600 pixels. This means that the size
represented by each pixel is 1.85 mm for video, while this value is 0.16 mm for photographs,
which allows for much more precise measurements.

Figure 4.4 Graphical Comparison of Specimen (a) 120-02 (b) 120-03

Member ID   Point 6    Point 9    Point 12   Point 4    Point 7    Point 10
            (LVDT 6)   (LVDT 9)   (LVDT 12)  (LVDT 4)   (LVDT 7)   (LVDT 10)
90-01       124.89     121.69     131.15     96.40      83.99      92.13
90-02       100.75     113.48     97.74      119.30     107.28     141.59
100-02      73.14      107.31     137.78     123.89     118.48     87.30
100-03      78.49      96.51      97.73      133.27     92.08      90.51
110-01      190.55     102.41     112.45     101.69     92.59      97.78
110-02      107.78     114.78     107.91     85.81      82.66      99.67
120-01      89.41      79.03      68.86      81.10      80.88      109.65
120-02      118.78     97.64      102.51     127.32     120.54     105.90
120-03      102.64     110.96     138.72     135.94     117.45     96.63

Table 4.2 Area Comparison in Percentage


Due to the single-point video recording, the movement of the specimen perpendicular to the
recording plane cannot be directly obtained. With this deformation, the dots placed on the
specimen become larger or smaller depending on whether they move closer to or farther
from the camera. Although this behavior is promising for third-dimension measurements,
insufficient lighting conditions cause the dot areas to change too much. The disadvantage of
this situation is that the centers of the tracked dots shift. In order to prevent this, the major
axes at two different instants are proportioned, based on the assumption that the major axis
does not change, and the dot areas are scaled in line with this ratio.

For an individual CFS member, the average processing time with visualizations is 1257
seconds. However, the visualization code was added only for bug detection; when it is
disabled, the average time becomes 380.72 seconds. The operation time depends on the
frame count: if there are many trackable frames, the operation takes longer. Operation times
can be seen in Table 4.3. Unless a dynamic measurement is taken, the number of frames has
very little effect on the results, so if the number of frames per second is reduced, the
processing of the results will be much faster.

Member                110-02    90-01     90-02     100-02    100-03    110-01    120-01    120-02    120-03
Operation time (sec)  374.716   766.441   649.183   326.188   355.108   219.657   195.232   281.857   258.117

Table 4.3 Operation Time

5. CONCLUSION

With the developed algorithm, the displacement of the test specimen can be examined in two
dimensions at all points. Moreover, if the algorithm is developed further, it will even be
possible to measure strain and displacement in three dimensions, including dynamic
measurements. As a result, although the studied algorithm cannot completely replace
LVDTs, it can provide much more comprehensive information about the displacement of the
tested element in LVDT-controlled systems.

As seen in this study, the most basic problems with the tracking system are uneven lighting
conditions and overlapping objects. Under uneven lighting, the shadows falling on the object
shift the centers of the tracked points or cause the points to disappear. Overlapping causes
one of the objects to disappear, and the tracking system completely collapses.

For future work, the tracking of the points placed on the object should first be done by color (colors such as red, yellow, or blue) instead of contrast. The lighting conditions also need to be improved: since shadows and reflections change the areas and center points of the tracked points, apparent movements and noise that do not exist on the object appear. This noise is expected to decrease once a color-based object tracking system is created, as sketched below. Finally, it is important that the placed dots do not line up: when the amount of deformation increases, points that end up at the same level overlap, and tracking becomes impossible. In addition, when images are used instead of video, the displacement measurements become more accurate.
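As a minimal sketch of what such color-based spot detection could look like, the following Python/OpenCV snippet isolates red spots by HSV thresholding; the threshold values are illustrative assumptions and would need tuning to the actual markers and lighting.

```python
import cv2
import numpy as np

def red_spot_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Detect red spots by color rather than contrast using HSV
    thresholding. Threshold values are illustrative only."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 on OpenCV's 0-179 hue scale, so two
    # ranges are combined into a single binary mask.
    low = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    high = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([179, 255, 255]))
    return cv2.bitwise_or(low, high)
```

Shadows mostly change brightness (the V channel) rather than hue, so a hue-based mask of this kind tends to be more stable under uneven lighting than a grayscale contrast threshold.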

Acknowledgements

The authors thank Arkitech Advanced Construction Technologies for their contributions to
this research. This material is based upon work supported by the Scientific and Technological
Research Council of Turkey (TUBITAK) under Grant No. 217M513 and Hacettepe
University. Any opinions, findings, and conclusions or recommendations expressed in this
material are those of the authors and do not necessarily reflect the views of TUBITAK.
