A new robust estimator based on an evolutionary optimization

Introduction

This paper describes GASAC, a new approach to robust parameter estimation. Here, the general method is applied to problems in computer vision, i.e. the estimation of projective transformations such as homographies, fundamental, essential and projection matrices, as well as the trifocal tensor [7]. These geometric relations are used for camera calibration, narrow- and wide-baseline stereo matching, structure and motion estimation, and object recognition tasks.

A challenging step is to automatically find reliable correspondences in two or more images. Incorrect matches cannot be avoided at the beginning of the matching process if only the correlation of local image descriptors is available. Because of their frequent occurrence, mismatches must be detected and removed by robust methods, which search for subsets of matches consistent with a global constraint (see Figure 1).

It is assumed that we have a data set of putative feature correspondences, a subset of which is consistent with some projective transformation whose parameters are unknown. The task is to estimate these parameters along with the set of consistent correspondences (inliers). To make the computation robust, two different types of measurement errors must be considered:

• Small errors (noise): The localization accuracy of the coordinates may not be perfect. For such small deviations a Gaussian error distribution can generally be assumed, and therefore a non-linear optimization can be used.

• Blunders (outliers): Blunders, i.e. wrong correspondences, are a serious problem that arises particularly in automatic measurements. For robust parameter estimation, the influence of such errors can be limited using M-estimators (see section 2.1).

More successful are hypothesize-and-verify approaches, which identify a minimal solution by random trials supported by as much data as possible (see section 2.3). These simple and powerful methods are particularly insensitive to outliers, but computationally expensive.
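The hypothesize-and-verify idea can be sketched in a few lines of Python. This is not the paper's GASAC algorithm; it is a minimal RANSAC-style illustration using 2-D line fitting (a 2-point minimal sample) instead of a projective transformation, with illustrative function names and thresholds. Minimal samples are drawn at random, each hypothesis is scored by its consensus set, and the winning set of inliers is refined by least squares, which handles the small Gaussian-distributed errors.

```python
import math
import random

import numpy as np


def ransac_line(points, n_iters=200, inlier_tol=0.05, seed=0):
    """Hypothesize-and-verify: robustly fit a 2-D line to data with blunders.

    Repeatedly draws a minimal sample (2 points), hypothesizes the line
    through them, and keeps the hypothesis supported by the most inliers.
    """
    rng = random.Random(seed)
    pts = list(points)
    best_inliers = []
    for _ in range(n_iters):
        # Minimal sample: two random points define a line hypothesis.
        (x1, y1), (x2, y2) = rng.sample(pts, 2)
        # Implicit line a*x + b*y + c = 0, normalized so that
        # |a*x + b*y + c| is the point-to-line distance.
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = math.hypot(a, b)
        if norm == 0:
            continue  # degenerate sample: both points coincide
        a, b, c = a / norm, b / norm, c / norm
        # Verify: count points consistent with the hypothesis.
        inliers = [(x, y) for x, y in pts
                   if abs(a * x + b * y + c) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refine on the consensus set by least squares (small Gaussian errors).
    xs = np.array([p[0] for p in best_inliers])
    ys = np.array([p[1] for p in best_inliers])
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope, intercept, best_inliers
```

The outliers influence the result only through the random minimal samples they appear in; such samples yield hypotheses with small consensus sets and are discarded, which is why the approach tolerates a large fraction of wrong correspondences.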
<p> The monitoring and inspection of structures is typically based on visual investigation. In particular, the examination of very large structures, such as large retaining walls or dams, is an exceptionally complex task for civil engineers and is closely associated with high risks regarding the assessment of structural stability. A reliable assessment of structural stability requires detailed data on all parts of the structure, but a comprehensive inspection of such structures is technically complex and time consuming, which leads to high inspection costs.</p><p> This paper presents a new vision-based method for monitoring large-scale structures based on aerial photos taken by remotely controlled unmanned aerial systems. The approach enables detailed, automatic displacement detection using photogrammetric computer vision algorithms in post-flight analysis. Currently available high-end flight systems and new computer vision methods can contribute to improving the quality and efficiency of inspections and the safety of large structures.</p>
Unmanned aircraft systems (UAS) show large potential for the construction industry. Their use in condition assessment has increased significantly due to technological and computational progress. UAS play a crucial role in developing a digital maintenance strategy for infrastructure, saving cost and effort while increasing safety and reliability. Part of that strategy are automated visual UAS inspections of a building's condition. The resulting images can be analyzed automatically to identify and localize damage to the structure that has to be monitored. Further interest in parts of a structure can arise from events such as accidents or collisions. There are also areas of low interest, where low-resolution monitoring is sufficient. From these differing resolution requirements, different levels of detail can be derived. Each level requires specific image acquisition parameters, which differ mainly in the distance between camera and structure: areas with a higher level of detail require a smaller distance to the object and therefore produce more images. This work proposes a multi-scale flight path planning procedure that satisfies higher resolution requirements in areas of special interest while reducing the number of required images to a minimum. Careful selection of the camera positions maintains complete coverage of the structure while achieving the required resolution in all areas. The result is an efficient UAS inspection that reduces the effort for infrastructure maintenance.
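The link between required resolution and camera distance can be made concrete with the pinhole camera model: the ground sampling distance (GSD) grows linearly with object distance, so halving the GSD halves the image footprint in each direction and roughly quadruples the image count for the same area. The sketch below illustrates this trade-off; it is not the paper's planning procedure, and the function names, sensor parameters, and the simple planar-facade coverage model are illustrative assumptions.

```python
import math


def camera_distance(gsd_mm, focal_length_mm, pixel_pitch_mm):
    """Object distance yielding a required ground sampling distance (GSD).

    Pinhole model: gsd = pixel_pitch * distance / focal_length,
    hence distance = gsd * focal_length / pixel_pitch (same unit as gsd).
    """
    return gsd_mm * focal_length_mm / pixel_pitch_mm


def images_for_area(area_w_m, area_h_m, gsd_mm,
                    sensor_px=(6000, 4000), overlap=0.8):
    """Rough image count to cover a planar facade at a given GSD.

    Assumes a regular grid of viewpoints with a fixed forward/side
    overlap ratio -- a simplification of real flight path planning.
    """
    # Image footprint on the object, in metres.
    footprint_w = sensor_px[0] * gsd_mm / 1000.0
    footprint_h = sensor_px[1] * gsd_mm / 1000.0
    # Grid step between neighbouring camera positions.
    step_w = footprint_w * (1.0 - overlap)
    step_h = footprint_h * (1.0 - overlap)
    cols = math.ceil(area_w_m / step_w)
    rows = math.ceil(area_h_m / step_h)
    return cols * rows
```

Because the image count scales roughly with the inverse square of the GSD, restricting the finest level of detail to areas of special interest, as the proposed multi-scale planning does, saves a large share of the images compared with flying the whole structure at the highest resolution.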