While objective and subjective evaluation of the visual quality of compressed 3D meshes has been discussed in the literature, those studies covered 3D meshes created either by 3D artists or by a computationally expensive 3D reconstruction process applied to high-quality 3D scans. With the advent of RGB-D sensors that operate at high frame rates and the use of fast 3D reconstruction algorithms, humans can be captured and reconstructed into a 3D mesh representation in real time, enabling new (tele-)immersive experiences. The way the respective 3D mesh content is produced differs dramatically between the two cases, leading to apparent structural differences between the output meshes: the first type of content is nearly perfect and clean, while the second is far more irregular and noisy. Evaluating compression artifacts on this new type of immersive 3D media therefore constitutes an as yet unexplored scientific area. In this paper, we subjectively assess the compression artifacts introduced by three open-source static 3D mesh codecs when compressing 3D meshes generated for immersive experiences. The subjective evaluation is conducted in a Virtual Reality setting, using the forced-choice pairwise comparison methodology with an existing reference. The result of this study is a mapping of the compared conditions onto a continuous ranking scale that can be used to optimize codec choice and compression parameters, achieving the optimum balance between bandwidth and perceived quality in tele-immersive platforms.
Traditional handheld drone remote controllers, although well established and widely used, are not a particularly intuitive control method. At the same time, drone pilots normally watch the drone's video feed on a smartphone or another small screen attached to the remote. This forces them to constantly shift their visual focus from the drone to the screen and vice versa, which can be an eye- and mind-tiring, stressful experience, as the eyes constantly change focus and the mind struggles to merge two different points of view. This paper presents a solution based on Microsoft's HoloLens 2 headset that leverages augmented reality and gesture recognition to make drone piloting easier, more comfortable, and more intuitive. It describes a system for single-handed gesture control that can achieve all maneuvers possible with a traditional remote, including complex motions; a method for tracking a real drone in AR to improve flying beyond line of sight or at distances where the physical drone is hard to see; and the option to display the drone's live video feed in AR, either in first-person-view mode or in context with the environment.