2021
DOI: 10.1177/14759217211053776
Efficient attention-based deep encoder and decoder for automatic crack segmentation

Abstract: Recently, crack segmentation studies have been investigated using deep convolutional neural networks. However, significant deficiencies remain in the preparation of ground truth data, consideration of complex scenes, development of an object-specific network for crack segmentation, and use of an evaluation method, among other issues. In this paper, a novel semantic transformer representation network (STRNet) is developed for crack segmentation at the pixel level in complex scenes in a real-time manner. STRNet …

Cited by 165 publications (63 citation statements)
References 57 publications (102 reference statements)
“…In the experiments, six recently published networks, including U‐Net (Z. Liu et al., 2019), DeepCrack (Y. Liu et al., 2019), U‐Net++ (Zhou et al., 2019), Attention U‐Net (König et al., 2019), DeepLabv3+ (L.‐C. Chen et al., 2018; Ji et al., 2020), and semantic transformer representation network (STRNet; Kang & Cha, 2021), were selected for their state‐of‐the‐art performances in the crack segmentation. All the networks as listed in Table 1 were implemented using the same dataset and environment.…”
Section: Comparative Study
confidence: 99%
“…The added convolution in the attention gate sacrificed the width and depth of the network, which may interfere with identification performance. Kang and Cha (2021) improved the encoder and decoder with squeeze-excitation-based and attention-based modules, aiming to make the semantic representation of the network trainable. However, the matrix multiplication in the attention decoder breaks the translation invariance of the CNN, which probably degrades performance on images that differ in size from the training images.…”
Section: Introduction
confidence: 99%
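The squeeze-excitation-based modules mentioned in the statement above implement a form of channel-wise attention: each channel is pooled to a scalar, passed through a small bottleneck, and used to reweight the feature map. A minimal NumPy sketch of that general idea follows; the weights `w1`, `w2` and the reduction ratio are illustrative placeholders, not values from STRNet:

```python
import numpy as np

def squeeze_excitation(x, w1, w2):
    """Channel attention in the squeeze-and-excitation style.

    x  : feature map of shape (C, H, W)
    w1 : (C, C // r) weights of the squeeze ("reduction") layer
    w2 : (C // r, C) weights of the excitation ("expansion") layer
    """
    # Squeeze: global average pooling collapses each channel to one scalar.
    z = x.mean(axis=(1, 2))                      # shape (C,)
    # Excitation: bottleneck MLP with ReLU, then sigmoid gating in (0, 1).
    h = np.maximum(z @ w1, 0.0)                  # shape (C // r,)
    s = 1.0 / (1.0 + np.exp(-(h @ w2)))          # shape (C,)
    # Scale: reweight each input channel by its learned gate.
    return x * s[:, None, None]

# Toy usage: 4 channels, reduction ratio r = 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((4, 2)) * 0.1
w2 = rng.standard_normal((2, 4)) * 0.1
y = squeeze_excitation(x, w1, w2)
```

Because the gating is purely per-channel, this part of the design keeps translation invariance; the statement's concern applies to the matrix multiplication in the attention decoder, which couples spatial positions.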
“…However, detection based on bounding boxes is too coarse to quantify defects such as crack length and width, which requires pixel-level crack segmentation. 72,73 For instance, Kang et al. 74 modified an R-CNN algorithm to allow for crack segmentation into pixels and further measurement of the crack thickness and length by pixel analysis. The latter authors achieved an accuracy of 93%.…”
Section: Crack Initiation and Crack Width
confidence: 99%
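The pixel analysis mentioned above, measuring crack length and width from a segmentation mask, can be illustrated with a deliberately simplified sketch. This is not Kang et al.'s method; it assumes a roughly horizontal crack, taking length as the number of mask columns the crack touches and mean width as crack area divided by that length:

```python
import numpy as np

def crack_stats(mask, mm_per_pixel=1.0):
    """Rough crack length and mean width from a binary segmentation mask.

    Simplifying assumption: the crack runs mostly horizontally, so its
    length is the count of columns containing crack pixels, and its mean
    width is the crack pixel area divided by that length.
    """
    mask = mask.astype(bool)
    area = mask.sum()                   # total crack pixels
    cols = mask.any(axis=0).sum()       # columns touched by the crack
    length = cols * mm_per_pixel
    width = (area / cols) * mm_per_pixel if cols else 0.0
    return length, width

# Toy mask: a 2-pixel-thick horizontal crack spanning 10 columns.
mask = np.zeros((8, 10), dtype=np.uint8)
mask[3:5, :] = 1
length, width = crack_stats(mask)
```

A production pipeline would instead skeletonize the mask to trace arbitrary crack geometry and measure width perpendicular to the skeleton, but the column-counting sketch conveys the core idea of converting pixels to physical units via a scale factor.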
“…Thus the approach performs well on multiple sample sets. Kang et al. [18], [47], [48] perform crack segmentation in complex environments and under different lighting conditions by integrating three independent computer vision algorithms, and they developed a new encoder with an attention module. Choi et al. [49] propose a real-time crack segmentation DL architecture, referred to as SDDNet-V1, which greatly improves time efficiency and identifies relatively vague cracks.…”
Section: B. Deep Learning
confidence: 99%