Automated detection of road pavement cracks is a key factor in evaluating road distress, and it remains a difficult problem for building intelligent maintenance systems. Automated crack detection is challenging because crack images exhibit strong non-uniformity, complex topology, and heavy noise-like texture. To address these challenges, we propose CrackSeg, an end-to-end trainable deep convolutional neural network for pavement crack detection that achieves automated, pixel-level detection via high-level features. In this work, we introduce a novel multiscale dilated convolution module that learns rich deep convolutional features, making the crack features acquired under a complex background more discriminative. Moreover, in the upsampling module, high-spatial-resolution features from the shallow layers are fused to obtain more refined pixel-level crack detection results. We train and evaluate CrackSeg on our CrackDataset; the experimental results show that CrackSeg achieves high performance, with a precision of 98.00%, recall of 97.85%, F-score of 97.92%, and mIoU of 73.53%. Compared with other state-of-the-art methods, CrackSeg is more efficient and robust for automated pavement crack detection.
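To make the idea of a multiscale dilated (atrous) convolution module concrete, the sketch below implements a plain dilated 2D convolution in NumPy and fuses responses at several dilation rates. This is an illustrative approximation, not the CrackSeg architecture itself: the function names, the summation-based fusion, and the center cropping are assumptions for demonstration; the real module would use learned kernels inside a CNN framework.

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation=1):
    """Valid-mode 2D convolution with a dilated (atrous) kernel.
    The kernel's receptive field grows with the dilation rate while
    its number of weights stays fixed."""
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1  # effective kernel footprint
    eff_w = (kw - 1) * dilation + 1
    H, W = image.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input sparsely: every `dilation`-th pixel.
            patch = image[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def multiscale_dilated_features(image, kernel, rates=(1, 2, 4)):
    """Hypothetical multiscale fusion: apply the same kernel at several
    dilation rates and fuse by summation, cropping each response map
    to the smallest common size."""
    maps = [dilated_conv2d(image, kernel, r) for r in rates]
    h = min(m.shape[0] for m in maps)
    w = min(m.shape[1] for m in maps)
    return sum(m[:h, :w] for m in maps)
```

Larger dilation rates let the same small kernel aggregate context over long, thin structures such as cracks without adding parameters, which is the motivation for a multiscale dilated module.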
Pavement crack detection and characterization are fundamental parts of intelligent road maintenance systems. The high non-uniformity of cracks, their topological complexity, and texture noise that resembles cracks make automated crack detection and classification in complex environments challenging. In this work, we develop an overarching framework for a universal and robust automatic method that simultaneously characterizes the type of a crack and its severity level. For crack detection, we propose a novel and efficient crack detection network that captures crack context information through a multiscale dilated convolution module. On this foundation, an attention mechanism is introduced to further refine the high-level features. Moreover, the rich features at different levels are fused in an upsampling module to generate more detailed crack detection results. For crack classification, a novel characterization algorithm classifies the type of crack after detection: crack segment branches are merged and classified into four types (transversal, longitudinal, block, and alligator), and the severity level of a crack is assessed by computing the average width and the distance between crack branches. The proposed detection method effectively extracts crack information in complex environments and achieves state-of-the-art accuracy. Compared with manual classification, the classification accuracy for transversal and longitudinal cracks is above 95%, and that for block and alligator cracks is above 86%.
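A minimal sketch of the classification step described above, under stated assumptions: the dominant orientation of a single crack branch can be estimated from the second-order central moments of its binary mask (equivalent to fitting an ellipse to the pixel cloud), and branches that are roughly horizontal or vertical are labelled transversal or longitudinal. The angle threshold, the width estimate, and all function names are illustrative; the paper's full algorithm also merges branches into block and alligator types using inter-branch distances, which is not reproduced here.

```python
import numpy as np

def crack_orientation_deg(mask):
    """Dominant orientation of a binary crack mask, from the
    second-order central moments of its foreground pixels."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = np.mean(x * x)
    mu02 = np.mean(y * y)
    mu11 = np.mean(x * y)
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return np.degrees(theta)

def classify_branch(mask, angle_tol=30.0):
    """Label a single branch by orientation (angle_tol is an assumed
    threshold). Merging branches into block/alligator types would
    additionally require the branch-distance analysis from the paper."""
    angle = abs(crack_orientation_deg(mask))
    if angle < angle_tol:
        return "transversal"       # roughly horizontal in image coordinates
    if angle > 90.0 - angle_tol:
        return "longitudinal"      # roughly vertical
    return "diagonal"

def mean_crack_width(mask):
    """Crude severity proxy: crack area divided by an approximate
    branch length (the longer side of the pixel extent)."""
    ys, xs = np.nonzero(mask)
    length = max(xs.ptp(), ys.ptp()) + 1
    return mask.sum() / length
```

For example, a one-pixel-wide horizontal stripe yields an orientation near 0°, the label "transversal", and a mean width of about 1 pixel.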
Recent convolutional neural networks have made significant advances in road crack detection. However, the lack of accurately labelled crack training data reduces the generalisation ability of deep models. In this Letter, a semi-automatic pavement crack labelling algorithm is proposed to address the shortage of training data. First, a modified C-V model is used to obtain preliminary segmentation results. Second, the direction of each initial segmentation region is estimated by ellipse fitting, and the preliminary segmentation results serve as samples for accurate labelling. Finally, a multi-scale feature extraction module is proposed to learn rich deep convolutional features, which makes the crack features acquired under a complex background more discriminant. Compared with manual marking, the proposed method achieves accurate labelling of crack images with little interaction, thereby significantly reducing the cost of producing ground truth. Validation and comparison experiments on the test data sets indicate that the proposed method not only identifies cracks effectively but also overcomes interference from many environmental factors.
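The C-V (Chan-Vese) model mentioned above segments an image into two regions whose piecewise-constant intensities best explain the data. The sketch below is a deliberately minimal approximation of that idea: it alternates between estimating the two region means and reassigning each pixel to the closer mean. The full C-V model is a level-set formulation with a curvature/length regularisation term, and the Letter uses a modified variant; both are omitted here, so treat this only as intuition for the data term.

```python
import numpy as np

def cv_two_phase(image, n_iter=20):
    """Piecewise-constant two-phase segmentation in the spirit of the
    Chan-Vese data term: alternate between estimating the mean intensity
    of each region and reassigning pixels to the closer mean. The
    curvature regularisation of the real C-V model is omitted."""
    seg = image > image.mean()          # crude initialisation
    for _ in range(n_iter):
        c1 = image[seg].mean()          # mean inside the current region
        c2 = image[~seg].mean()         # mean outside
        new_seg = (image - c1) ** 2 < (image - c2) ** 2
        if np.array_equal(new_seg, seg):
            break                       # converged
        seg = new_seg
    return seg
```

On an image with a bright crack-like stripe over a darker background, this iteration converges to the stripe as the foreground region, which is the kind of preliminary segmentation the labelling pipeline then refines by ellipse fitting.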
The lack of large-scale, multi-scene, multi-type pavement distress training data reduces the generalization ability of deep learning models in complex scenes and limits the development of pavement distress extraction algorithms. We therefore built the first large-scale dichotomous image segmentation (DIS) dataset for multi-type pavement distress segmentation, called ISTD-PDS7, aimed at segmenting pavement distress types with high accuracy from natural charge-coupled device (CCD) images. The new dataset covers seven types of pavement distress across nine types of scene, along with negative samples containing texture noise similar to distress. The final dataset contains 18,527 images, far more than previously released benchmarks, and all images are annotated with fine-grained labels. In addition, we conducted a large benchmark test evaluating seven state-of-the-art segmentation models, provided a detailed discussion of the factors that influence segmentation performance, and performed cross-dataset evaluations for the best-performing model. Finally, we investigated the effectiveness of negative samples in reducing false-positive predictions in complex scenes and developed two potential data augmentation methods for improving segmentation accuracy. We hope these efforts will foster promising developments for both academia and industry.