Background: Many context-aware techniques have been proposed to deliver cyber-information, such as project specifications or drawings, to on-site users by intelligently interpreting their environment. However, these techniques primarily rely on RF-based location tracking technologies (e.g., GPS or WLAN), which typically do not provide sufficient precision in congested construction sites or require additional hardware and custom mobile devices. Method: This paper presents a new vision-based mobile augmented reality system that allows field personnel to query and access 3D cyber-information on-site by using photographs taken from standard mobile devices. The system does not require any location tracking modules, external hardware attachments, and/or optical fiducial markers for localizing a user's position. Rather, the user's location and orientation are purely derived by comparing images from the user's mobile device to a 3D point cloud model generated from a set of pre-collected site photographs.
Results: The experimental results show that (1) the underlying 3D reconstruction module of the system generates complete 3D point cloud models of the target scene and is up to 35 times faster than other state-of-the-art Structure-from-Motion (SfM) algorithms, and (2) localization takes at most a few seconds on an actual construction site.
Conclusion: The localization speed and empirical accuracy of the system make it practical for use on real-world construction sites. Using an actual construction case study, the perceived benefits and limitations of the proposed method for on-site context-aware applications are discussed in detail.
A key problem in mobile computing is providing people access to cyber-information associated with the physical objects around them. Mobile augmented reality is one of the emerging techniques that addresses this problem by overlaying cyber-information on the imagery of the real-world physical objects it describes. This paper presents a new vision-based context-aware approach for mobile augmented reality that allows users to query and access semantically rich 3D cyber-information related to real-world physical objects and see it precisely overlaid on imagery of those objects. The approach does not require RF-based location tracking modules, external hardware attachments on the mobile device, or optical/fiducial markers for localizing a user's position. Rather, the user's 3D location and orientation are derived automatically and purely by comparing images from the user's mobile device to a 3D point cloud model generated from a set of pre-collected photographs. Our approach also supports collaborative content authoring: users can edit the content stored in the 3D point cloud model, and content added by one user is immediately accessible to others. In addition, this paper addresses a key challenge for mobile augmented reality: scalability. In general, mobile augmented reality must work regardless of the user's location and environment, both in terms of physical scale (e.g., the size of objects) and in terms of cyber-information scale (e.g., the total number of cyber-information entities associated with physical objects). However, many existing approaches have been tested only on limited real-world use cases and face challenges in scaling.
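Image-based localization of the kind described above is commonly implemented by matching feature descriptors extracted from the query photograph against descriptors attached to points in the pre-built 3D point cloud, and then recovering the camera pose from the resulting 2D-to-3D correspondences (e.g., with a PnP solver). As a rough illustration only, and not the paper's exact pipeline, the core nearest-neighbor matching step with Lowe's ratio test can be sketched as:

```python
import numpy as np

def match_2d_to_3d(query_desc, model_desc, ratio=0.8):
    """Direct 2D-to-3D matching sketch: for each query feature descriptor,
    find its nearest point-cloud descriptor and accept the correspondence
    only if it passes Lowe's ratio test (nearest distance clearly smaller
    than the second-nearest). Returns (query_index, model_point_index) pairs
    that a PnP solver could consume to estimate the camera pose."""
    matches = []
    for i, q in enumerate(query_desc):
        dists = np.linalg.norm(model_desc - q, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches

# Toy 2D descriptors (real systems use 128-D SIFT-like descriptors):
model = np.array([[0.0, 0.0], [10.0, 10.0], [5.0, -5.0]])
query = np.array([[0.1, 0.1]])
print(match_2d_to_3d(query, model))  # [(0, 0)]
```

A brute-force scan like this is O(nm) in the number of descriptors; scalable systems typically replace it with an approximate nearest-neighbor index (e.g., a k-d tree or vocabulary-based search) over the point cloud descriptors.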
By designing a multi-model, direct 2D-to-3D matching algorithm for localization and applying a caching scheme, the proposed approach consistently supports near real-time localization and information association regardless of the user's location, the size of the physical objects, and the number of cyber-physical information items. Empirical results presented in the paper show that the approach can provide millimeter-level augmented reality across several hundred or several thousand objects without the need for additional non-imagery sensor inputs.
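The caching scheme itself is not detailed here; one plausible, purely hypothetical realization is an LRU cache over recently matched 3D point descriptors, so that successive queries from nearby viewpoints are matched against a small working set before falling back to the full point cloud model:

```python
from collections import OrderedDict

class PointCache:
    """Hypothetical LRU cache of recently matched 3D point descriptors.
    Queries are first matched against this small cache; only on a miss
    does the system fall back to searching the full point cloud model,
    which helps keep localization near real-time as the model grows."""

    def __init__(self, capacity=512):
        self.capacity = capacity
        self.entries = OrderedDict()  # point_id -> descriptor

    def add(self, point_id, descriptor):
        self.entries[point_id] = descriptor
        self.entries.move_to_end(point_id)       # mark as most recently used
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict least recently used

    def get(self, point_id):
        if point_id in self.entries:
            self.entries.move_to_end(point_id)   # refresh recency on hit
            return self.entries[point_id]
        return None                              # miss: search full model
```

The key design point is locality: a user standing still or moving slowly tends to re-observe the same 3D points, so a small recency-ordered working set absorbs most matching work.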