Abstract. Structure from motion (SfM) is a common technique to recover 3D geometry and camera poses from sets of images of a common scene. In many urban environments, however, there are symmetric, repetitive, or duplicate structures that pose challenges for SfM pipelines. These ambiguous structures cause cameras and points to be placed incorrectly within the reconstruction. In this paper, we present a postprocessing method that can not only detect these errors but also resolve them. Our novel approach proposes the strong and informative measure of conflicting observations, and we demonstrate that it is robust across a large variety of scenes.
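The core idea of conflicting observations can be illustrated with a toy sketch: when two images are wrongly merged onto the same copy of a duplicated structure, they tend to share few 3D points while each observes points the other "should" see but does not. The function below is a hypothetical simplification for illustration only, not the paper's actual algorithm, which would additionally require visibility and occlusion reasoning.

```python
# Toy illustration: score an image pair by conflicting vs. shared 3D point
# observations. A high score hints that the two images may belong to
# different instances of a duplicated structure.

def conflict_score(obs_a, obs_b):
    """obs_a, obs_b: sets of 3D point IDs observed by images a and b."""
    shared = obs_a & obs_b            # points both images observe
    conflicting = obs_a ^ obs_b       # points seen by exactly one image
    total = len(shared) + len(conflicting)
    if total == 0:
        return 0.0
    return len(conflicting) / total   # 0 = fully consistent, 1 = fully conflicting
```

For example, two images observing point sets {1, 2, 3} and {2, 3, 4} share two points and conflict on two, giving a score of 0.5.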
Abstract. This paper describes the development of vision-aided navigation (i.e., pose estimation) for a wearable augmented reality system operating in natural outdoor environments. This system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered graphics on the user's view of reality. Accurate pose estimation is achieved through integration of inertial, magnetic, GPS, terrain elevation data, and computer-vision inputs. Specifically, a helmet-mounted forward-looking camera and custom computer vision algorithms are used to provide measurements of absolute orientation (i.e., orientation of the helmet with respect to the Earth). These orientation measurements, which leverage mountainous terrain horizon geometry and/or known landmarks, enable the system to achieve significant improvements in accuracy compared to GPS/INS solutions of similar size, weight, and power, and to operate robustly in the presence of magnetic disturbances. Recent field testing activities, across a variety of environments where these vision-based signals of opportunity are available, indicate that high accuracy (less than 10 mrad) in graphics geo-registration can be achieved. This paper presents the pose estimation process, the methods behind the generation of vision-based measurements, and representative experimental results.
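The general pattern of correcting dead-reckoned orientation with an occasional absolute fix (e.g., from horizon matching) can be sketched as a one-dimensional complementary filter. This is a generic textbook technique shown for intuition, not the paper's actual estimator; the gain `alpha` and all values are illustrative assumptions.

```python
import math

# Minimal 1D sketch (heading only): dead-reckon with a gyro rate, then pull
# the estimate toward an absolute vision-derived heading when one is available.

def fuse_heading(heading, gyro_rate, dt, vision_heading=None, alpha=0.1):
    heading += gyro_rate * dt              # integrate the gyro (drifts over time)
    if vision_heading is not None:         # absolute orientation fix available
        # wrap the innovation to (-pi, pi] so corrections take the short way round
        err = math.atan2(math.sin(vision_heading - heading),
                         math.cos(vision_heading - heading))
        heading += alpha * err             # blend in the absolute measurement
    return heading
```

With `alpha` near 0 the filter trusts the gyro; near 1 it snaps to each vision fix. A full system would instead run a Kalman-style filter over 3D attitude, but the drift-correction role of the absolute orientation measurement is the same.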
In this article we present our system for scalable, robust, and fast city-scale reconstruction from Internet photo collections (IPC), yielding geo-registered dense 3D models. The major achievement of our system is the efficient use of coarse appearance descriptors combined with strong geometric constraints to reduce the computational complexity of the image overlap search. This unique combination of recognition and geometric constraints allows our method to reduce the complexity from quadratic in the number of images to nearly linear in the IPC size. Accordingly, our 3D-modeling framework scales inherently better than other state-of-the-art methods and is, in fact, currently the only method to support modeling from millions of images. In addition, we propose a novel mechanism to overcome the inherent scale ambiguity of the reconstructed models by exploiting geo-tags of the Internet photo collection images and readily available StreetView panoramas for fully automatic geo-registration of the 3D model. Moreover, our system exploits image appearance clustering to tackle the challenge of computing dense 3D models from an image collection with significant variation in illumination between images, as well as a wide variety of sensors with differing radiometric camera parameters. Our algorithm exploits the redundancy of the data to suppress estimation noise through a novel depth map fusion that simultaneously enforces surface and free-space constraints while merging a large number of depth maps. Cost volume compression during the fusion lowers the memory requirements for high-resolution models. We demonstrate our system on a variety of scenes from an Internet photo collection of Berlin containing almost three million images, from which we compute dense models in less than a day on a single computer.
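The interplay of surface and free-space constraints in depth map fusion can be illustrated with a toy voting rule for a single ray: nearby depth estimates support a candidate surface, while deeper observations from other depth maps imply the ray passed through the candidate unobstructed and vote against it. This sketch is a simplified assumption-laden illustration of the general principle, not the paper's fusion algorithm; the tolerance and scoring rule are hypothetical.

```python
# Toy depth fusion for one pixel/ray: pick the candidate depth with the best
# surface-support minus free-space-violation score.

def fuse_depths(depths, tol=0.05):
    """depths: per-depth-map estimates along one ray (same units)."""
    best, best_score = None, float("-inf")
    for d in depths:
        support = sum(1 for x in depths if abs(x - d) <= tol)  # surface votes
        # a strictly deeper observation saw "through" d: free-space violation
        violate = sum(1 for x in depths if x > d + tol)
        score = support - violate
        if score > best_score:
            best, best_score = d, score
    return best
```

For instance, given estimates [1.0, 1.01, 1.02, 3.0], the three mutually consistent values outvote the outlier at 3.0, and a depth near 1.0 is selected. A real system would operate on cost volumes over millions of rays, which is where the compression mentioned above matters.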