Research on road surface damage detection using image processing techniques has been actively conducted, achieving considerably high detection accuracies. So far, most studies have focused only on detecting the presence or absence of damage; however, in real-world scenarios, road managers need to clearly understand the type of damage and its extent in order to take effective action in advance or to allocate the necessary resources. Moreover, few uniform and openly available road damage datasets currently exist, leaving road damage detection without a common benchmark. Such a dataset could be used for a great variety of applications; herein, it is intended to serve as the acquisition component of a physical asset management tool that can aid government agencies in planning, or infrastructure maintenance companies in implementing predictive maintenance procedures. In this paper, we make two contributions to address these issues. First, we present a large-scale road damage dataset, which includes a more balanced and representative set of damages not present in previous studies. This dataset is composed of 18,345 road damage images captured with a smartphone installed on a car, with 45,435 instances of road surface damage (linear, lateral, and alligator cracks; potholes; and various types of painting blurs). To generate this dataset, we obtained images from several public datasets and augmented them with crowdsourced images, which were manually annotated for further processing. The images were captured under a variety of weather and illumination conditions. In each image, we annotated a bounding box representing the location, type, and extent of the damage.
Second, we trained several generic object detection methods, both traditional (an LBP-cascaded classifier) and deep learning-based (specifically, MobileNet and RetinaNet), which are amenable to embedded and mobile implementations with acceptable performance for many applications. We compared the accuracy and inference time of all these models against the state of the art, achieving higher accuracies in all eight classes of the dataset introduced by researchers at the University of Tokyo, as well as in other related works, with lower inference times.
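Detection quality in such comparisons is typically measured by matching predicted bounding boxes to annotated ones via intersection-over-union (IoU), the standard overlap criterion behind mAP. A minimal sketch (the `(x1, y1, x2, y2)` box format and the sample coordinates are illustrative, not taken from the dataset):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction is usually counted as a true positive when its IoU with an annotated box of the same damage class exceeds a threshold such as 0.5.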
The analysis and follow-up of asphalt infrastructure using image processing techniques has received increased attention recently. However, the vast majority of developments have focused only on determining the presence or absence of road damage, forgoing other more pressing concerns. Nonetheless, in order to be useful to road managers and governmental agencies, the information gathered during an inspection procedure must provide actionable insights that go beyond punctual and isolated measurements: the characteristics, type, and extent of the road damage must be effectively and automatically extracted and digitally stored, preferably using inexpensive mobile equipment. In recent years, computer vision acquisition systems have emerged as a promising solution for automated road damage inspection when integrated into georeferenced mobile computing devices such as smartphones. However, the artificial intelligence algorithms that power these systems have been rather limited owing to the scarcity of large and homogenized road damage datasets. In this work, we aim to contribute to bridging this gap with two strategies. First, we introduce a new and very large asphalt dataset, which incorporates a set of damages not present in previous studies, making it more robust and representative of certain damages such as potholes. This dataset is composed of 18,345 road damage images captured by a mobile phone mounted on a car, with 45,435 instances of road surface damage (linear, lateral, and alligator cracks; potholes; and various types of painting blurs). To generate this dataset, we obtained images from several public datasets and augmented them with crowdsourced images, which were manually annotated for further processing.
The images were captured under a variety of weather and illumination conditions, and a quality-aware data augmentation strategy was employed to filter out samples of poor quality, which helped improve the performance metrics over the baseline. Second, we trained several object detection models amenable to mobile implementation with acceptable performance for many applications. We performed an ablation study to assess the effectiveness of the quality-aware data augmentation strategy and compared our results with other recent works, achieving better accuracies (mAP) for all classes and lower inference times (3× faster).
This paper presents a camera-laser-projector system for the real-time estimation of distance to obstacles, designed to assist wheelchair users with cognitive impairment. When the distance to an obstacle falls below a specified safety threshold, an alarm is raised that the control system can use to act immediately and avert a possible collision, even before the user stops the wheelchair. The system consists of a fisheye camera, whose large field of view (FOV) keeps the projected pattern visible at all times, and a laser circle projector mounted at a fixed baseline. The approach uses the geometric information obtained by projecting the laser circle onto the plane simultaneously perceived by the camera. We present a theoretical study in which the camera is modelled as a sphere and show that estimating a conic on this sphere allows the distance between the wheelchair and the obstacle to be estimated. We report experiments on simulated data followed by real sequences. The distances estimated by our method are comparable with those of commercial sensors in terms of accuracy and correctness. These results demonstrate the suitability of our low-cost system, compared with expensive commercial sensors, for an affordable wheelchair able to assist users with cognitive impairments. The proposed solution remains functional in low-light to dark environments, where decision making can be challenging for the user.
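The underlying principle of camera-laser ranging can be illustrated with a much simpler pinhole triangulation sketch (the paper's actual method models the fisheye camera as a sphere and estimates a conic; the focal length, baseline, disparity, and threshold values below are hypothetical placeholders):

```python
def distance_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Standard triangulation Z = f * B / d, where d is the pixel offset of the
    projected laser feature relative to its position at infinite depth."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px


def collision_alarm(distance_m: float, safe_distance_m: float) -> bool:
    """True when the obstacle is closer than the configured safe distance,
    signalling the control system to intervene."""
    return distance_m < safe_distance_m
```

With a hypothetical 800 px focal length and 0.10 m camera-projector baseline, a 40 px disparity yields a 2.0 m distance estimate; the alarm fires once the estimate drops below the safety threshold.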
The software industry has matured over time, from small applications of a few lines of code to applications of millions of lines of code. In the past few years, a major industry concern regarding software size estimation has been the convertibility between the International Function Point Users Group (IFPUG) method and the COmmon Software Measurement International Consortium (COSMIC) method, since organizations wish to leverage their substantial investment in IFPUG and there is still no cost and effort estimation tool for COSMIC function points. IFPUG is one of the earliest estimation methods; however, with the introduction of a more scientific method like COSMIC, which has wider applicability than IFPUG while both methods use the same measuring unit and principle, the continued relevance of IFPUG has been called into question. Owing to the similar underlying principles of the two methods, and so that organizations that have invested heavily in IFPUG do not lose their investment by migrating to COSMIC, researchers have been exploring the possibility of converting the output of one method to the other. This paper reviews some of the popular conversion formulas suggested so far to examine how related, consistent, and reliable they are. We estimate the function points of two case studies using both COSMIC and IFPUG, then insert our estimation results into the formulas to see how close or divergent the outputs are in comparison with our calculations. The results varied widely and nothing conclusive can be said, although two of the formulas gave closer estimation ranges than the others. We also highlight why COSMIC may be more desirable today than IFPUG and present the progress made toward establishing a conversion relationship between the two methods.
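Published IFPUG-to-COSMIC conversion formulas are typically linear in the unadjusted function point count, so comparing several of them on a single case-study estimate amounts to evaluating each line and inspecting the spread. A minimal sketch (the formula names, coefficients, and the UFP count below are placeholders for illustration, not the published values):

```python
def ifpug_to_cosmic(ufp: float, slope: float, intercept: float) -> float:
    """Apply a linear conversion formula CFP = slope * UFP + intercept."""
    return slope * ufp + intercept


# Placeholder coefficients for two hypothetical conversion formulas
formulas = {
    "formula_a": (1.0, -3.0),
    "formula_b": (0.9, 2.0),
}

ufp_estimate = 120  # hypothetical IFPUG count for one case study
cosmic_estimates = {
    name: ifpug_to_cosmic(ufp_estimate, slope, intercept)
    for name, (slope, intercept) in formulas.items()
}
# Spread across formulas: a rough indicator of how consistent they are
spread = max(cosmic_estimates.values()) - min(cosmic_estimates.values())
```

A large spread for the same input, as observed in the paper's case studies, indicates that the formulas do not yet agree reliably.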