This paper presents a sliding-window approach to viewpoint selection for exploring an environment with an RGB-D sensor mounted on the end-effector of an inchworm climbing robot, developed for inspecting areas inside steel bridge archways that cannot be easily accessed by workers. The proposed exploration approach uses a kinematic-chain robot model and information-theoretic next-best-view calculations to predict poses that are safe and reduce the information remaining in the environment. At each exploration step, a viewpoint is selected by analysing the Pareto efficiency of the predicted information gain and the required movement for a set of candidate poses. In contrast to previous approaches, a sliding window is used to determine candidate poses, avoiding the costly operation of assessing the entire candidate set. Experimental results in simulation and on a prototype climbing robot platform show that the approach requires fewer gain calculations and less robot movement, and is therefore more efficient than other approaches when exploring a complex 3D steel bridge structure.
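As a rough illustration of the selection step, the Python sketch below evaluates only a sliding window of candidate poses and picks from their Pareto front. The `Candidate` fields, window size, and tie-breaking rule are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of Pareto-based viewpoint selection over a sliding
# window of candidate poses. Gain/cost values and window size are
# illustrative; the paper's gain model is information-theoretic.
from dataclasses import dataclass

@dataclass
class Candidate:
    pose_id: int
    info_gain: float   # predicted information gain at this pose
    move_cost: float   # robot movement required to reach the pose

def pareto_front(candidates):
    """Return candidates not dominated by any other candidate
    (higher gain and lower cost dominate)."""
    front = []
    for c in candidates:
        dominated = any(
            o.info_gain >= c.info_gain and o.move_cost <= c.move_cost
            and (o.info_gain > c.info_gain or o.move_cost < c.move_cost)
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

def select_viewpoint(all_candidates, window_start, window_size=10):
    """Evaluate only a sliding window instead of the full candidate set."""
    window = all_candidates[window_start:window_start + window_size]
    front = pareto_front(window)
    # Tie-break on the front by gain per unit of movement (illustrative).
    return max(front, key=lambda c: c.info_gain / (c.move_cost + 1e-9))
```

Because gain prediction is the expensive operation, restricting it to the window is what saves computation; the Pareto test itself is cheap.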
This paper proposes an approach to improve surface-type classification in images containing inconsistently illuminated surfaces. When a mobile inspection robot visually inspects surface-types in a dark environment and a directional light source is used to illuminate the surfaces, the captured images may exhibit illumination variance caused by the orientation and distance of the light source relative to the surfaces. To accurately classify the surface-types in these images, either the training image dataset must fully incorporate the illumination variance, or colour features that provide high classification accuracy despite it must be identified. In this paper, diffused reflectance values are extracted as new colour features for classifying surface-types. In this approach, RGB-Depth (RGB-D) data is collected from the environment, and a reflectance model is used to calculate a diffused reflectance value for each pixel in each of the Red, Green and Blue (RGB) colour channels. The diffused reflectance values are then used to train a multi-class support vector machine classifier to classify surface-types. Experiments are conducted in a mock bridge maintenance environment using a portable RGB-D sensor package with an attached light source to collect surface-type data. The performance of a classifier trained with diffused reflectance values is compared against classifiers trained with other colour features, including the RGB and L*a*b* colour spaces. Results show that the classifier trained with diffused reflectance values achieves consistently higher classification accuracy than classifiers trained with RGB and L*a*b* features. For test images containing a single surface plane, diffused reflectance values consistently provide greater than 90% classification accuracy; for test images containing a complex scene with multiple surface-types and surface planes, diffused reflectance values increase overall accuracy over RGB and L*a*b* by 49.24% and 13.66%, respectively. Note to Practitioners: This paper was motivated by the problem of inspecting inconsistently illuminated steel surfaces on a bridge structure using a robot manipulator. Existing approaches to colour-based surface classification are susceptible to illumination variance. This paper proposes the use of diffused reflectance values, which combine colour and depth data to improve accuracy. In this approach, the diffused reflectance values of each image pixel are calculated using the distance and angle between the surface represented by the pixel and the light source. The diffused reflectance values are calculated in each colour channel (Red, Green, Blue) to provide three features for classifying different surface-types. The proposed approach can be applied to surface classification tasks where the light source does not uniformly illuminate the scene.
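To illustrate the feature extraction, the sketch below inverts a simple Lambertian point-light model with inverse-square falloff, I = ρ·L·cosθ/d², to recover a per-channel diffuse reflectance ρ from RGB-D data. The specific shading model, normalisation, and variable names are assumptions; the abstract does not state the exact reflectance model used.

```python
import numpy as np

def diffused_reflectance(rgb, points, normals, light_pos, light_intensity=1.0):
    """Per-pixel diffuse reflectance for each RGB channel, assuming a
    Lambertian surface lit by a point source with inverse-square falloff:
        I = rho * L * cos(theta) / d**2
    rgb:       (H, W, 3) observed colour image, floats in [0, 1]
    points:    (H, W, 3) 3D point per pixel from the RGB-D sensor
    normals:   (H, W, 3) unit surface normals per pixel
    light_pos: (3,) light source position in the same frame
    """
    to_light = light_pos - points                              # (H, W, 3)
    d = np.linalg.norm(to_light, axis=-1, keepdims=True)       # (H, W, 1)
    cos_theta = np.sum(normals * to_light / np.maximum(d, 1e-9),
                       axis=-1, keepdims=True)
    cos_theta = np.clip(cos_theta, 1e-3, 1.0)  # avoid division blow-up
    # Invert the shading model per channel: rho = I * d^2 / (L * cos_theta)
    return rgb * d**2 / (light_intensity * cos_theta)
```

The three per-channel ρ values for each pixel would then replace raw RGB values as the features fed to the multi-class SVM.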
This paper presents a comprehensive approach to diagnosing faults that may occur during a robotic grit-blasting operation. The approach uses information collected from multiple sensors (an RGB-D camera, and audio and pressure transducers) to detect 1) the real-time position of the grit-blasting spot and 2) the real-time state within the blasting line (i.e., compressed air only). This approach enables a grit-blasting robot to autonomously diagnose faults and take corrective actions during the blasting operation. Experiments are conducted in a laboratory and in a grit-blasting chamber during real grit-blasting to demonstrate the proposed approach. An accuracy of 95% or above was achieved in the experiments.
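The abstract does not specify which sensor features are used, but a minimal sketch of the line-state detection step might extract simple audio and pressure statistics and feed them to a support vector machine; the features, labels, and function names below are purely illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def line_state_features(audio_frame, pressure_frame):
    """Illustrative features for classifying the blasting-line state
    (e.g., grit flowing vs. compressed air only); the paper's actual
    features are not given in the abstract."""
    return np.array([
        np.sqrt(np.mean(audio_frame ** 2)),  # audio RMS energy
        np.std(audio_frame),                 # audio variability
        np.mean(pressure_frame),             # mean line pressure
        np.std(pressure_frame),              # pressure fluctuation
    ])

# With X: (n_samples, 4) features and y: (n_samples,) labelled states:
# clf = SVC(kernel="rbf").fit(X, y)
# state = clf.predict(line_state_features(audio, pressure).reshape(1, -1))
```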
This paper describes a novel approach to segmenting complex images to determine candidates for accurate material-type classification. The proposed approach identifies classification candidates based on image quality calculated from viewing distance and angle information. The required viewing distance and angle information is extracted from 3D fused images constructed from laser range data and image data. This approach is applicable to material-type classification of images captured with varying degrees of image quality, attributable to the geometric uncertainty of the environment that is typical of autonomous robotic exploration. The proposed segmentation approach is demonstrated on an autonomous bridge maintenance system and validated using gray-level co-occurrence matrix (GLCM) features combined with a naive Bayes classifier. Experimental results demonstrate the effects of viewing distance and angle on classification accuracy and the benefits of segmenting images using 3D geometry information to identify candidates for accurate material-type classification.
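As a sketch of the geometric candidate test, the following Python function computes a segment's viewing distance and angle from fused 3D points and normals and keeps only segments within quality limits; the thresholds and function name are hypothetical, not the paper's values.

```python
import numpy as np

def is_classification_candidate(points, normals, cam_pos,
                                max_dist=2.0, max_angle_deg=45.0):
    """Decide whether an image segment is a good candidate for
    material-type classification based on viewing geometry.
    points, normals: (N, 3) fused 3D points and unit normals for the
    segment; cam_pos: (3,) camera position in the same frame."""
    centroid = points.mean(axis=0)
    normal = normals.mean(axis=0)
    normal /= np.linalg.norm(normal)
    view_dir = cam_pos - centroid
    dist = np.linalg.norm(view_dir)          # viewing distance
    view_dir /= dist
    # Viewing angle between the surface normal and the line of sight
    angle = np.degrees(np.arccos(np.clip(abs(normal @ view_dir), -1.0, 1.0)))
    return dist <= max_dist and angle <= max_angle_deg
```

Segments that fail the test would be excluded before GLCM feature extraction, so the naive Bayes classifier only sees patches observed from favourable geometry.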