Industry and academia have repeatedly demonstrated the transformative potential of Augmented Reality (AR) guided assembly instructions. In the past, however, computational and hardware limitations often dictated that these systems be deployed on tablets or other cumbersome devices. Tablets frequently impede worker progress by diverting a user's hands and attention, forcing them to alternate between the instructions and the assembly process. Head Mounted Displays (HMDs) overcome these diversions by allowing users to view the instructions hands-free while simultaneously performing an assembly operation. Thanks to rapid technological advances, wireless commodity AR HMDs are becoming commercially available. The Microsoft HoloLens, in particular, provides an opportunity to explore a hands-free HMD's ability to deliver AR assembly instructions and what a user interface for such an application should look like. This exploration is necessary because it is not certain how previous research on user interfaces will transfer to the HoloLens or other new commodity HMDs. In addition, while new HMD technology is promising, its ability to deliver a robust AR assembly experience is still unknown. To assess the HoloLens' potential for delivering AR assembly instructions, the cross-platform Unity 3D game engine was used to build a proof-of-concept application. The prototype focused on three features: user interfaces, dynamic 3D assembly instructions, and spatially registered content placement. The research showed that while the HoloLens is a promising system, areas such as tracking accuracy still require improvement before the device is ready for deployment in a factory assembly setting.
Military operations are turning to more complex and advanced automation technologies for minimum risk and maximum efficiency. A critical piece of this strategy is unmanned aerial vehicles. Unmanned aerial vehicles require the intelligence to safely maneuver along a path to an intended target while avoiding obstacles such as other aircraft or enemy threats. This paper presents a unique three-dimensional path planning problem formulation and solution approach using particle swarm optimization. The problem formulation was designed with three objectives: 1) minimize risk owing to enemy threats, 2) minimize fuel consumption incurred by deviating from the original path, and 3) fly over defined reconnaissance targets. The initial design point is defined as the original path of the unmanned aerial vehicle. Using particle swarm optimization, alternate paths are generated as B-spline curves and optimized against the three defined objectives. The resulting paths can be optimized with a preference toward maximum safety, minimum fuel consumption, or target reconnaissance. This method has been implemented in a virtual environment where the generated alternate paths can be visualized interactively to better facilitate the decision-making process. The problem formulation and solution implementation are described along with results from several simulated scenarios demonstrating the effectiveness of the method.
Nomenclature
C = total cost function for a path
C_T, C_L, C_R = threat, fuel, and reconnaissance cost components for a path
c_1, c_2 = first and second confidence parameters for PSO
K_T, K_L, K_R = weighting factors for the threat, fuel, and reconnaissance cost components
L = length of path
M = number of control points for the B-spline curve
N = number of line segments that define the B-spline curve
N(u) = Bernstein basis function for the B-spline curve
p(u) = parametric equation for the B-spline curve
u = set of line segments for the B-spline curve
V = velocity vector for particle swarm optimization (PSO)
w = inertia weight for PSO
X_i = ith design variable in the optimization objective function in PSO
x = knot vector for the B-spline curve
Z_T, Z_R = threat zone and reconnaissance zone
λ_w = decay factor for the PSO inertia weight
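The weighted cost structure suggested by the nomenclature (C = K_T·C_T + K_L·C_L + K_R·C_R, with design variables X_i, confidence parameters c_1 and c_2, inertia weight w, and decay factor λ_w) can be sketched as a minimal PSO minimizer. The toy cost terms below are hypothetical stand-ins, not the paper's actual threat, fuel, and reconnaissance models:

```python
import random

def pso(cost, n_vars, n_particles=20, iters=100,
        c1=2.0, c2=2.0, w=0.9, decay=0.99, bounds=(0.0, 10.0)):
    """Minimal PSO minimizer. Design variables X_i would be B-spline
    control-point coordinates in the paper's formulation."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(n_vars)]
         for _ in range(n_particles)]
    V = [[0.0] * n_vars for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # personal best positions
    pcost = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]      # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_vars):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            c = cost(X[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = X[i][:], c
                if c < gcost:
                    gbest, gcost = X[i][:], c
        w *= decay  # λ_w: decay the inertia weight each iteration
    return gbest, gcost

def total_cost(x, K_T=1.0, K_L=1.0, K_R=1.0):
    """Hypothetical C = K_T*C_T + K_L*C_L + K_R*C_R on toy terms."""
    C_T = sum(max(0.0, 3.0 - abs(v - 5.0)) for v in x)  # toy threat penalty
    C_L = sum((a - b) ** 2 for a, b in zip(x, x[1:]))   # toy path-length term
    C_R = sum(min(1.0, abs(v - 8.0)) for v in x)        # toy reconnaissance miss
    return K_T * C_T + K_L * C_L + K_R * C_R
```

Adjusting K_T, K_L, and K_R shifts the optimized path toward safety, fuel economy, or reconnaissance coverage, mirroring the preference weighting described in the abstract.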
Abstract—Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP), while all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
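The ray-intersection prediction referenced above can be sketched from standard stereo geometry: the rendered on-screen disparity is fixed by the CoP, while a displaced viewer's eyes cast rays through those same screen points. The function below is a minimal sketch for a point behind the screen, centered on the viewing axis; the parameter names and default values (2 m render distance, 6.5 cm interocular separation) are illustrative assumptions, not the paper's apparatus:

```python
def perceived_depth(z_p, d_render=2.0, d_view=2.0, e=0.065):
    """Ray-intersection prediction of perceived depth behind the screen.

    z_p      -- intended depth of the virtual point behind the screen (m)
    d_render -- screen distance of the CoP the images were rendered for (m)
    d_view   -- screen distance of the actual viewer (m)
    e        -- interocular separation (m)
    """
    # On-screen disparity produced when rendering for the CoP.
    s = e * z_p / (d_render + z_p)
    # Depth at which the displaced viewer's left/right rays intersect.
    return d_view * s / (e - s)
```

At the CoP (d_view == d_render) the function recovers z_p exactly; standing forward of the CoP (d_view < d_render) yields a smaller value (compression) and standing backward yields a larger one (expansion), consistent with the directions reported in the abstract.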