• If the training data set is limited and the parameters are too many, overfitting is
likely to occur.
• The larger the network model structure, the more complex the computation,
making practical application difficult.
• The deeper the network model structure, the more easily the gradient vanishes in
the deep layers, making the model difficult to optimize.
• Degradation: accuracy first increases with depth, but once saturation is
reached, further increasing model depth leads to a decrease in accuracy.
Proposed Methodology
[Flow diagram: performance analysis → result → severity detection]
DenseNet Architecture
Components of DenseNet include:
Connectivity
DenseBlocks
Growth Rate
Bottleneck layers
• Connectivity
• In each layer, the feature maps of all preceding layers are not summed but
concatenated and used as input. Consequently, DenseNets require fewer
parameters than an equivalent traditional CNN, and this allows for feature reuse,
since layers do not need to relearn redundant feature maps. So, the l-th layer
receives the feature maps of all preceding layers, x0, ..., xl-1, as input:
xl = Hl([x0, x1, ..., xl-1])
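The connectivity pattern above can be sketched numerically: each layer receives the channel-wise concatenation of every preceding feature map and contributes k (the growth rate) new channels. This is a minimal sketch in NumPy, with Hl stood in by a random linear channel map rather than the BN-ReLU-Conv composite used in DenseNet.

```python
import numpy as np

def dense_block(x0, num_layers, growth_rate):
    """Sketch of DenseNet connectivity: each layer sees the
    concatenation of all preceding feature maps (channels-first,
    C x H x W). H_l is stood in by a random channel-mixing map
    producing `growth_rate` new channels."""
    features = [x0]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)         # [x0, ..., x_{l-1}]
        w = np.random.randn(growth_rate, inp.shape[0]) * 0.01  # stand-in for H_l
        out = np.tensordot(w, inp, axes=([1], [0]))    # growth_rate x H x W
        features.append(out)
    return np.concatenate(features, axis=0)

x0 = np.random.randn(16, 8, 8)                 # 16 input channels
out = dense_block(x0, num_layers=4, growth_rate=12)
print(out.shape[0])                            # 16 + 4*12 = 64 channels
```

Note how the channel count grows linearly with depth (c0 + L·k), which is why each individual layer can stay narrow.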
• The concatenation operation is not feasible when the size of the feature maps changes.
• However, an essential part of CNNs is down-sampling, which reduces the size of
feature maps through dimensionality reduction to gain higher computation speeds.
• To enable this, DenseNets are divided into DenseBlocks, where the dimensions of the feature
maps remain constant within a block, but the number of filters changes between blocks.
• The layers between the blocks are called transition layers, which reduce the number of
channels to half of the existing channels.
•For each layer, from the equation above, Hl is defined as a composite function which applies three
consecutive operations: batch normalization (BN), a rectified linear unit (ReLU) and a convolution (Conv).
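A transition layer as described above can be sketched as a 1x1 channel-reducing convolution followed by 2x2 average pooling. This is an illustrative NumPy sketch (the 1x1 convolution is stood in by a channel-mixing matrix, and BN/ReLU are omitted), not the exact implementation used in the deck.

```python
import numpy as np

def transition_layer(x, theta=0.5):
    """Sketch of a DenseNet transition layer on a channels-first
    C x H x W array: reduce channels by factor theta (0.5 halves
    them), then 2x2 average pooling halves the spatial size."""
    c, h, w = x.shape
    c_out = int(c * theta)
    mix = np.random.randn(c_out, c) * 0.01         # stand-in for the 1x1 conv
    x = np.tensordot(mix, x, axes=([1], [0]))      # c_out x h x w
    # 2x2 average pooling over non-overlapping windows
    x = x.reshape(c_out, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    return x

y = transition_layer(np.random.randn(64, 8, 8))
print(y.shape)   # (32, 4, 4): channels halved, spatial size halved
```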
Bottleneck layers
• Although each layer only produces k output feature maps, the number of inputs
can be quite high, especially for later layers. Thus, a 1x1 convolution layer can be
introduced as a bottleneck layer before each 3x3 convolution to improve the
efficiency and speed of computation.
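The saving from the bottleneck can be seen with a quick parameter count. This compares a direct 3x3 convolution against the DenseNet-B pattern (1x1 conv to 4k channels, then 3x3 conv to k), ignoring bias terms; the example channel count is an assumption for illustration.

```python
# Parameter count for one dense layer (k = growth rate, biases ignored).
c_in, k = 512, 32              # illustrative input channels deep in a block

# direct 3x3 conv: c_in -> k
direct = c_in * k * 3 * 3

# bottleneck (DenseNet-B): 1x1 conv to 4k channels, then 3x3 conv to k
bottleneck = c_in * 4 * k + 4 * k * k * 3 * 3

print(direct, bottleneck)      # 147456 vs 102400
```

The gap widens as c_in grows, since the 3x3 kernel now only ever sees 4k input channels regardless of depth within the block.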
DenseNet-121 Architecture
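The "121" in DenseNet-121 counts the weighted layers of the standard configuration: a stem convolution, four dense blocks of (6, 12, 24, 16) layers (each layer being a 1x1 bottleneck plus a 3x3 convolution), three transition convolutions, and the final classifier. A quick check:

```python
# Weighted-layer count for the standard DenseNet-121 configuration.
blocks = [6, 12, 24, 16]       # dense layers per block
total = (1                     # stem 7x7 convolution
         + sum(2 * n for n in blocks)  # each dense layer = 1x1 + 3x3 conv
         + 3                   # one conv per transition layer
         + 1)                  # final fully connected classifier
print(total)   # 121
```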
Software requirements
• MATLAB 2020b
Advantages of the proposed methodology
• Less training needed
• Temporal features considered
Processing pipeline:
• Input image
• Input image after enhancement
• Grayscale conversion
• Binary segmentation
• Mask R-CNN with denoising layer mapping
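The early pre-processing stages listed above can be sketched in NumPy. This is a minimal sketch assuming an H x W x 3 float image in [0, 1]; the enhancement step, the choice of threshold, and the Mask R-CNN stage are not specified in the deck, so a simple global threshold stands in for the segmentation.

```python
import numpy as np

def preprocess(rgb, threshold=0.5):
    """Grayscale conversion and binary segmentation (illustrative
    stand-ins for the deck's pipeline stages)."""
    # luminance-weighted grayscale conversion (ITU-R BT.601 weights)
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # binary segmentation by global thresholding (assumed method;
    # the deck does not specify the segmentation technique)
    binary = (gray > threshold).astype(np.uint8)
    return gray, binary

img = np.random.rand(4, 4, 3)
gray, binary = preprocess(img)
print(gray.shape, binary.shape)   # (4, 4) (4, 4)
```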
Results
Evaluation
Conclusion
• Cracks are typical line structures of interest in many computer-vision
applications. In practice, many cracks, e.g., pavement cracks, show poor continuity
and low contrast, which poses great challenges to image-based crack detection
using low-level features. This work proposed a novel automatic damage detection
technique using a Mask R-CNN model based on the DenseNet framework to detect two
categories of damage (efflorescence and spalling) in historic masonry structures.
The proposed defect detection model shows better performance in terms of all the
evaluated parameters.
Thank You
Queries?