Abstract—Image-based localization is an important problem with many applications. In our previous work, we presented a two-step pipeline for performing image-based localization of mobile devices in outdoor environments. In the first step, a query image is matched against a georeferenced 3D image database to retrieve the "closest" image. In the second step, the pose of the query image is recovered with respect to the "closest" image using cell phone sensors. As such, a key ingredient of our outdoor image-based localization is a 3D georeferenced image database. In this paper, we extend this approach to indoors by utilizing a 3D locally referenced image database generated by an ambulatory depth acquisition backpack originally developed for 3D modeling of indoor environments. We demonstrate a retrieval rate of 94% over a set of 83 query images taken in an indoor shopping center and characterize pose recovery accuracy on the same set.
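The retrieval step above matches a query image against a database of images by comparing local feature descriptors. As a minimal illustrative sketch (not the paper's actual matching or scoring scheme), the snippet below retrieves the "closest" database image by brute-force nearest-neighbor matching of SIFT-style descriptor vectors with Lowe's ratio test; `retrieve_closest` is a hypothetical helper name:

```python
import numpy as np

def retrieve_closest(query_desc, database):
    """Return the index of the database image whose descriptors best
    match the query's, counting nearest-neighbor matches that pass
    Lowe's ratio test (a common criterion for SIFT-style descriptors).

    query_desc: (M, D) array of query descriptors.
    database:   list of (N_i, D) arrays, one per database image.
    """
    best_idx, best_count = -1, -1
    for idx, db_desc in enumerate(database):
        count = 0
        for q in query_desc:
            # Euclidean distance from this query descriptor to every
            # descriptor of the candidate database image.
            d = np.linalg.norm(db_desc - q, axis=1)
            order = np.argsort(d)
            # Ratio test: accept only if the best match is clearly
            # better than the second best.
            if len(d) >= 2 and d[order[0]] < 0.75 * d[order[1]]:
                count += 1
        if count > best_count:
            best_idx, best_count = idx, count
    return best_idx
```

A real system would use an approximate nearest-neighbor index (e.g. a k-d tree or vocabulary tree) rather than brute force, since the database holds many images.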
pipeline for mobile depth that supports a wide array of mobile phones and uses only the existing monocular color sensor. Through several technical contributions, we provide the ability to compute low-latency dense depth maps using only a single CPU core on a wide range of (medium- to high-end) mobile phones. We demonstrate the capabilities of our approach on high-level AR applications, including real-time navigation and shopping.
Figure 1. Real-time interactive components enabled by DepthLab: (a) virtual texture decals "splatting" onto physical trees, with a white oriented reticle as a 3D virtual cursor; (b) ray-marching-based relighting of a physical scene with three virtual point lights; (c) AR rain effect on dry stairs (left) and a false-color depth map (right); (d) virtual objects colliding with physical exercise equipment; (e) "bokeh"-like effect putting focus on a physical 3D anchor; (f) occlusion and path planning in a mobile AR game. Please refer to the accompanying video, captured in real time, for more results.
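Of the components in Figure 1, depth-based occlusion (f) reduces to a per-pixel comparison between the physical scene's depth map and the rendered depth of the virtual object. The sketch below illustrates that idea in NumPy; in practice this runs in a GPU shader with soft blending rather than a hard mask, and the function names here are mine:

```python
import numpy as np

def occlusion_mask(real_depth, virtual_depth):
    """Per-pixel occlusion test: a virtual fragment is visible only
    where it is closer to the camera than the physical scene.
    Depths are in meters; use np.inf where the virtual object is absent."""
    return virtual_depth < real_depth

def composite(camera_rgb, virtual_rgb, real_depth, virtual_depth):
    """Composite the rendered virtual layer over the camera image,
    hiding virtual pixels that lie behind physical geometry."""
    mask = occlusion_mask(real_depth, virtual_depth)
    out = camera_rgb.copy()
    out[mask] = virtual_rgb[mask]
    return out
```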
In order to deliver a great visual experience with standalone augmented-reality or virtual-reality head-mounted displays (HMDs), the traditional display rendering pipeline needs to be rethought to best leverage the unique attributes of human visual perception and the features available in a rendering ecosystem. The foveation pipeline introduced in this article considers a full integration of foveation techniques, including content creation, processing, transmission, and reconstruction on the display.
This paper proposes an algorithm that generates as-built architectural floor plans by separating the floors of the LiDAR scan of a building, selecting a representative sampling of wall scans for each floor, and triangulating these samplings to develop a watertight representation of the walls for each of the scanned areas. Curves and straight line segments are fit to these walls in order to mitigate any registration errors from the original scans. This method is not dependent on the scanning system and can successfully process noisy scans with non-zero registration error. Most of the processing is performed after a dramatic dimensionality reduction, yielding a scalable approach. We demonstrate the effectiveness of our approach on a three-story point cloud from a commercial building as well as on the lobby and hallways of a hotel.
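The floor-separation step can be illustrated with a height histogram: horizontal slabs (floors and ceilings) concentrate many scan returns at nearly the same elevation, while walls spread thinly across the whole height range. The sketch below is my own minimal version of that idea, not the paper's algorithm; `floor_heights` and its thresholds are illustrative assumptions:

```python
import numpy as np

def floor_heights(z, bin_size=0.05, min_frac=0.05):
    """Estimate candidate floor/ceiling elevations as peaks in a
    histogram of point heights z (meters). A bin is a peak if it holds
    at least min_frac of all points and is a local maximum."""
    z = np.asarray(z, dtype=float)
    nbins = max(1, int(round((z.max() - z.min()) / bin_size)))
    counts, edges = np.histogram(z, bins=nbins)
    threshold = min_frac * z.size
    padded = np.concatenate(([0], counts, [0]))  # sentinel bins at both ends
    peaks = []
    for i in range(1, len(padded) - 1):
        if padded[i] > threshold and padded[i] >= padded[i - 1] and padded[i] >= padded[i + 1]:
            peaks.append(0.5 * (edges[i - 1] + edges[i]))  # bin center
    return peaks
```

With the slab elevations found, points between consecutive slabs can be assigned to one story, after which wall points project to 2D, which is the dramatic dimensionality reduction the abstract refers to.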
Abstract—Image-based positioning has important commercial applications such as augmented reality and customer analytics. In our previous work, we presented a two-step pipeline for performing image-based positioning of mobile devices in outdoor environments. In this chapter, we modify and extend the pipeline to work for indoor positioning. In the first step, we generate a sparse 2.5D georeferenced image database using an ambulatory backpack-mounted system originally developed for 3D modeling of indoor environments. In the second step, a query image is matched against the image database to retrieve the best-matching database image. In the final step, the pose of the query image is recovered with respect to the best-matching image. Since the pose recovery in step three only requires depth information at certain SIFT feature keypoints in the database image, we only require sparse depth maps that indicate the depth values at these keypoints. Our experimental results in a shopping mall indicate that our pipeline is capable of achieving sub-meter image-based indoor positioning accuracy.
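The final step recovers the query pose from correspondences between query features and database keypoints that carry depth. The paper solves the 2D-3D (perspective-n-point) case; as a simplified illustration, the sketch below recovers a rigid transform from matched 3D-3D keypoints with the Kabsch algorithm, under the assumption (mine, not the paper's) that depth is available on both sides:

```python
import numpy as np

def rigid_pose(src, dst):
    """Kabsch/Procrustes: least-squares rotation R and translation t
    such that dst_i ≈ R @ src_i + t, from matched 3D points.

    src, dst: (N, 3) arrays of corresponding points.
    """
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])        # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

In the actual 2D-3D setting one would instead solve PnP (e.g. with RANSAC over the SIFT matches), but the least-squares structure is analogous.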
Abstract—3D modeling of building architecture from point-cloud scans is a rapidly advancing field. These models are used in augmented reality, navigation, and energy simulation applications. State-of-the-art scanning produces accurate point clouds of building interiors containing hundreds of millions of points. Current surface reconstruction techniques either do not preserve the sharp features common in man-made structures, do not guarantee watertightness, or do not scale. This paper presents an approach that generates watertight triangulated surfaces from input point clouds, preserving the sharp features common in buildings. The input point cloud is converted into a voxelized representation, utilizing a memory-efficient data structure. The triangulation is produced by analyzing planar regions within the model. These regions are represented with an efficient number of elements, while still preserving triangle quality. This approach can be applied to data of arbitrary size, yielding detailed models. We apply this technique to several data sets of building interiors and analyze the accuracy of the resulting surfaces with respect to the input point clouds.
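A memory-efficient voxelization keeps storage proportional to the occupied space rather than to the dense bounding volume, which matters for scans with hundreds of millions of points. The sketch below shows the basic idea with a Python set of integer voxel indices; the paper's actual data structure is more elaborate (e.g. supporting efficient neighbor and region queries), so treat this as an illustration only:

```python
import numpy as np

def voxelize(points, voxel_size):
    """Sparse voxelization: map each point to its integer voxel index
    triple and store only the occupied voxels in a set. Memory scales
    with the number of occupied voxels, not the bounding volume."""
    idx = np.floor(np.asarray(points, dtype=float) / voxel_size).astype(np.int64)
    return {tuple(v) for v in idx}
```

Planar-region analysis can then operate on this sparse occupancy set instead of the raw points.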