Our future military force will be complex: a highly integrated mix of manned and unmanned units. These unmanned units could function individually or within a swarm. The readiness of future warfighters to work alongside and utilize these new forces depends on the creation of usable interfaces and training simulators. The difficulty is that current unmanned aerial vehicle (UAV) control interfaces demand too much operator attention, and common swarm control methods require expensive computational power. This paper begins with a discussion of how to improve upon current user interfaces and then reviews a swarm control method, the digital pheromone field. This method uses digital pheromones to bias the movements of individual units within a swarm toward areas that are attractive and away from areas that are dangerous or unattractive. Next, a more efficient method for performing pheromone field calculations is introduced, one that harnesses the power of the graphics processing unit (GPU) in today's graphics cards by reshaping the ADAPTIV swarm control algorithm into a form acceptable to the GPU's pipeline [1]. The GPU ADAPTIV implementation is tested in scenarios involving up to 50,000 virtual UAVs, where it runs more than 30 times faster than its counterpart CPU implementation. This gain translates directly into lower costs for training the future warfighter today and fielding the swarms of tomorrow. Finally, this paper presents a vision of how to combine these new interface ideas and performance enhancements into an effective swarm control interface and training simulator.
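The pheromone-field idea can be sketched in a few lines: pheromone evaporates and diffuses across a grid each tick, and each UAV is biased toward the neighboring cell holding the most pheromone. This is a minimal illustrative model, not the ADAPTIV algorithm itself; the grid size and the evaporation and diffusion constants below are placeholders.

```python
GRID = 16          # field is a GRID x GRID lattice of pheromone strengths
EVAPORATION = 0.1  # fraction of pheromone lost per tick
DIFFUSION = 0.05   # fraction of the remainder spread to each 4-neighbour

def step_field(field):
    """One tick: evaporate, then diffuse pheromone to the 4-neighbours."""
    new = [[0.0] * GRID for _ in range(GRID)]
    for x in range(GRID):
        for y in range(GRID):
            keep = field[x][y] * (1.0 - EVAPORATION)
            share = keep * DIFFUSION
            new[x][y] += keep - 4 * share
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                new[(x + dx) % GRID][(y + dy) % GRID] += share
    return new

def move_uav(field, x, y):
    """Bias a UAV toward the neighbouring cell with the most pheromone."""
    neighbours = [((x + dx) % GRID, (y + dy) % GRID)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    return max(neighbours, key=lambda c: field[c[0]][c[1]])

field = [[0.0] * GRID for _ in range(GRID)]
field[8][8] = 100.0            # an "attractive" target deposits pheromone
field = step_field(field)
print(move_uav(field, 7, 8))   # UAV at (7, 8) steps toward (8, 8)
```

Repulsive (dangerous) areas would deposit negative pheromone in the same field, steering units away under the identical movement rule.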
Digital projectors have a significant advantage over CRTs for IPT setups: brightness. However, they also have several disadvantages, one of which is color consistency. This problem is exacerbated when using the Infitec method for stereo separation, which itself has strong advantages for CAVE and tiled-wall setups. In this paper we describe a method for color and brightness correction of multi-projector display systems. The method is used in two new projection systems currently under construction at Fraunhofer-IGD: the HEyeWall and the Digital CAVE. The HEyeWall is the first stereo-capable tiled display worldwide; the Digital CAVE is the first CAVE with digital projectors and stereo separation based on Infitec™. We present these new IPTs in more detail and also report our experience with digital projectors. To calibrate all of the projectors involved, photometric measurements of the individual projectors are used to compute a common gamut in a linear color space. Input colors are mapped into this gamut and from there into each projector's color space, allowing the rendering output of two or more projectors with different color gamuts to be adjusted so that the projected images are photometrically calibrated. Since the correction must be performed for each pixel, a straightforward implementation would be very slow and far from real time. Consequently, we outline a method for improving performance and overcoming this limitation.
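As a rough illustration of the per-pixel correction, the sketch below equalizes two projectors to their common (smallest) per-channel peak luminance: decode each input value to linear light, shrink it into the shared gamut, and re-encode for the target projector. The real method uses full 3x3 gamut transforms derived from photometric measurements; the gamma value and the peak-luminance figures here are invented placeholders.

```python
GAMMA = 2.2  # assumed display transfer exponent

# Hypothetical measured peak luminance per channel for two projectors.
PROJ_A = (220.0, 200.0, 180.0)
PROJ_B = (200.0, 210.0, 160.0)
COMMON = tuple(min(a, b) for a, b in zip(PROJ_A, PROJ_B))  # shared gamut

def correct(rgb, proj_peaks):
    """Map an 8-bit input colour so this projector emits the common gamut."""
    out = []
    for value, peak, target in zip(rgb, proj_peaks, COMMON):
        linear = (value / 255.0) ** GAMMA                   # decode to linear light
        linear *= target / peak                             # shrink into common gamut
        out.append(round(255.0 * linear ** (1.0 / GAMMA)))  # re-encode
    return tuple(out)

# The same white input now produces matching light output on both devices:
print(correct((255, 255, 255), PROJ_A))
print(correct((255, 255, 255), PROJ_B))
```

Running this lookup per pixel on the CPU is exactly the bottleneck the abstract mentions; in practice the mapping would be baked into a lookup texture or fragment program.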
Artificial intelligence (AI) and extended reality (XR) differ in their origins and primary objectives. However, their combination is emerging as a powerful tool for addressing prominent AI and XR challenges and for opportunities in cross-development. To investigate the AI-XR combination, we mapped and analyzed published articles through a multi-stage screening strategy. We identified the main applications of the AI-XR combination, including autonomous cars, robotics, military, medical training, cancer diagnosis, entertainment and gaming, advanced visualization methods, smart homes, affective computing, and driver education and training. In addition, we found that the primary motivations for developing AI-XR applications include 1) training AI, 2) conferring intelligence on XR, and 3) interpreting XR-generated data. Finally, our results highlight the advancements and future perspectives of the AI-XR combination.
This paper describes a real-time welding simulation method for use in a desktop virtual-reality metal inert gas (MIG) welding training system. The simulation determines the shape of the weld bead, the depth of penetration, and the temperature distribution in the workpiece, based on inputs from the motion-tracking system that records the position of the welding gun as a function of time. A finite difference method is used to calculate the temperature distribution, including the width of the weld bead and the depth of penetration. The shape of the weld bead is then calculated at each time step by assuming a semi-spherical volume based on the width of the weld bead, the welding speed, and the wire feed rate. The real-time performance of the system is examined, and results from the real-time simulation are compared with physical tests, showing very good correlation for welding speeds up to 1,000 mm/min.
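The explicit finite-difference heat update such a simulator relies on can be sketched in one dimension: each interior node relaxes toward its neighbors, and the moving gun deposits heat into the node it currently sits over. The material constants, grid resolution, and heat-source model below are placeholders, not the paper's actual parameters.

```python
ALPHA = 1e-5   # thermal diffusivity, m^2/s (placeholder value)
DX = 1e-3      # grid spacing, m
DT = 0.01      # time step, s; stable since ALPHA * DT / DX**2 = 0.1 <= 0.5

def heat_step(temps, source_index, source_power):
    """One explicit (FTCS) finite-difference step with a point heat source."""
    r = ALPHA * DT / DX ** 2
    new = temps[:]
    for i in range(1, len(temps) - 1):         # fixed-temperature boundaries
        new[i] = temps[i] + r * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    new[source_index] += source_power * DT     # heat deposited by the arc
    return new

temps = [20.0] * 50                # workpiece initially at 20 degrees C
for step in range(100):
    gun = 10 + step // 10          # welding gun advances along the seam
    temps = heat_step(temps, gun, 5000.0)
print(max(temps))                  # peak temperature near the weld pool
```

The weld-bead width and penetration depth would then be read off as the extent of the region above the material's melting temperature at each time step.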
We introduce a new concept for improved interaction with complex scenes: multi-frame rate rendering and display. Multi-frame rate rendering produces a multi-frame rate display by optically or digitally compositing the results of asynchronously running image generators. Interactive parts of a scene are rendered at the highest possible frame rates, while the rest of the scene is rendered at regular frame rates. Compositing image components generated at different update rates may cause certain visual artifacts, which our rendering techniques can partially overcome. The results of a user study confirm that multi-frame rate rendering can significantly improve interaction performance, while the slight visual artifacts are either not noticed or readily tolerated by users. Overall, digital compositing shows the most promising results, since it introduces the fewest artifacts while requiring the transfer of frame-buffer content between different image generators.
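Digital compositing of asynchronous image generators can be illustrated schematically: the fast interactive layer is combined at every display refresh with the most recent completed frame from the slower scene renderer. The renderers below are stand-ins that return labeled strings, and SLOW_PERIOD is an assumed frame-rate ratio, not a figure from the paper.

```python
SLOW_PERIOD = 4   # slow renderer finishes one frame every 4 refreshes

def slow_render(frame_id):
    return f"scene#{frame_id}"     # stand-in for the full-scene renderer

def fast_render(tick):
    return f"cursor@{tick}"        # stand-in for the interactive layer

def composite(scene, overlay):
    return f"{scene}+{overlay}"    # stand-in for frame-buffer compositing

display = []
latest_scene = slow_render(0)
for tick in range(8):
    if tick % SLOW_PERIOD == 0 and tick > 0:
        latest_scene = slow_render(tick // SLOW_PERIOD)  # slow frame ready
    display.append(composite(latest_scene, fast_render(tick)))
print(display[5])   # 'scene#1+cursor@5'
```

The interactive overlay thus updates every refresh even though the underlying scene frame is reused several times, which is the source of both the responsiveness gain and the compositing artifacts the study examines.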