Random outbreaks of infectious diseases have left a persistent impact on societies throughout history. Currently, COVID-19 is spreading worldwide and endangering human lives. In this regard, maintaining physical distance has become an essential precautionary measure to curb the spread of the virus. In this paper, we propose an autonomous monitoring system that can enforce physical distancing rules in large areas round the clock without human intervention. We present a novel system that automatically detects groups of individuals who do not comply with physical distancing constraints, i.e., maintaining a distance of 1 m, tracks them within large areas, and re-identifies them in cases of repeated non-compliance. We used a distributed network of multiple CCTV cameras mounted on the walls of buildings for the detection, tracking and re-identification of non-compliant groups. Furthermore, we used multiple self-docking autonomous robots with collision-free navigation to enforce physical distancing constraints by sending alert messages to persons who are not adhering to them. We conducted 28 experiments involving 15 participants in different scenarios to evaluate and highlight the performance and significance of the present system. The presented system is capable of re-identifying repeated violations of physical distancing constraints by a non-compliant group, with high accuracy in terms of detection, tracking and localization through a set of coordinated CCTV cameras. Autonomous robots in the present system are capable of attending to non-compliant groups in multiple regions of a large area and encouraging them to comply with the constraints.
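The core distancing check described above can be illustrated with a minimal sketch. It assumes each detected person has already been localized to ground-plane coordinates in metres (e.g., via camera calibration); the abstract does not publish the actual detection pipeline, so the function and variable names below are illustrative, with only the 1 m threshold taken from the text.

```python
import itertools
import math

# 1 m threshold, as stated in the physical distancing constraint above.
DISTANCE_THRESHOLD_M = 1.0

def find_violating_pairs(positions):
    """Return index pairs of people closer than the distance threshold.

    positions: list of (x, y) ground-plane coordinates in metres,
    one per detected person.
    """
    violations = []
    for (i, a), (j, b) in itertools.combinations(enumerate(positions), 2):
        if math.dist(a, b) < DISTANCE_THRESHOLD_M:
            violations.append((i, j))
    return violations

# Example: three detected people; the first two stand 0.8 m apart.
people = [(0.0, 0.0), (0.8, 0.0), (5.0, 5.0)]
print(find_violating_pairs(people))  # [(0, 1)]
```

In a deployed system, flagged pairs would then be handed to the tracking and re-identification stages rather than printed.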
Based on a long-term prediction by the International Civil Aviation Organization indicating steady increases in air traffic demand throughout the world, the workloads of air traffic controllers are expected to increase continuously. Air traffic control and management (ATC/M) involves processing various unstructured composite data along with real-time visualization of aircraft data. To prepare for future air traffic, research and development aimed at effectively presenting complex navigation data to air traffic controllers is necessary. This paper presents a mixed reality (MR)-based air traffic control system that improves and supports air traffic controllers’ workflow, as MR technology is effective for delivering information such as complex navigation data. Existing control systems make information access and interpretation difficult. Therefore, recognizing the need for integrated air traffic control systems, this study presents an MR system, a new approach that enables air traffic control in interactive environments. The system is provided in a form usable in actual operational environments, with a see-through head-mounted display and a controller to enable more structured work support. In addition, since the system can be operated first-hand by air traffic controllers, it provides a new experience through improved work efficiency and productivity.
Immersive virtual reality (VR)-based exercise video games (exergames) are increasingly being employed as a supportive intervention in rehabilitation programs to promote engagement in physical activity, especially for elderly users. A multifaceted and iterative codesign process is essential to develop sustainable exergaming solutions. The social aspect is considered one of the key motivating factors in exergames; however, research on the social aspect of VR exergames has been limited. Previous studies have relied on competitiveness in exergames, but research has shown that competition can lead to adverse effects on users. With the aim of motivating elderly individuals to participate in physical exercise and improving social connectedness during rehabilitation, this work presents a social VR-based collaborative exergame codesigned with elderly participants and therapists. This exergame stimulates full-body exercise and supports social collaboration among users through a collaborative game task. Furthermore, this article presents a user study based on a mixed-methods approach to gather user feedback on exergame design and the effect of social collaboration versus playing alone in a VR exergame in terms of physical exertion and motivation. This study spanned five weeks (99 exergaming sessions) with 14 elderly participants divided into two groups, one playing collaboratively and the other playing individually. Between-group comparisons were performed at baseline (first week) and in the fourth week, and within-group comparisons were performed in the fifth week, when the participants played the exergame in counterbalanced order. In contrast to the first week, the participants exergaming collaboratively in the fourth week reported significantly higher intrinsic motivation on all subscales (enjoyment: p < 0.02, effort: p < 0.002, usefulness: p < 0.01) and physical exertion (p < 0.001) than those playing alone. 
Thereafter, exergaming in counterbalanced order during the fifth week resulted in significant differences (medium to large effect size) within groups. The participants found the social VR gameplay enjoyable and agreed that collaboration played a vital role in their motivation. They reported various health benefits, a minimal increase in symptoms of simulator sickness, and excellent usability scores (83.75±13.3). In this work, we also identify various key design principles to support healthcare professionals, researchers and industrial experts in developing ergonomic and sustainable VR-based exergames for senior citizens.
We propose a novel authoring and viewing system for generating multiple experiences from a single 360° video and efficiently transferring these experiences to the user. An immersive video contains much more interesting information within its 360° environment than a normal video, and there can be multiple interesting areas within a 360° frame at the same time. Due to the narrow field of view of virtual reality head-mounted displays, a user can only view a limited area of a 360° video. Hence, our system aims to generate multiple experiences based on interesting information in different regions of a 360° video and to transfer these experiences efficiently to prospective users. The proposed system generates experiences using two approaches: (1) recording the user’s experience as the user watches a panoramic video through a virtual reality head-mounted display, and (2) tracking an arbitrary object of interest in a 360° video selected by the user. For tracking an arbitrary object of interest, we developed a pipeline around an existing simple object tracker to adapt it to 360° videos. This tracking algorithm runs in real time on a CPU with high precision. Moreover, to the best of our knowledge, no existing system can generate a variety of different experiences from a single 360° video and enable the viewer to watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus assistance technique for efficiently transferring the generated experiences to other users in virtual reality. In this study, a technical evaluation of the system along with a detailed user study was performed to assess the system’s application. The evaluation findings showed that a single piece of 360° multimedia content can generate multiple experiences that can be transferred among users. Moreover, sharing 360° experiences enabled viewers to watch multiple interesting contents with less effort.
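The abstract does not detail how the conventional object tracker was adapted to 360° video, but one well-known difficulty with equirectangular frames is that the left and right edges are adjacent, so a target leaving one edge re-enters on the other. A hypothetical sketch of one common remedy, recentring the frame on the target before each tracker update so the target never crosses the seam, is shown below; all names are illustrative, not taken from the paper.

```python
import numpy as np

def recentre_on_target(frame, target_cx):
    """Roll an equirectangular frame horizontally so that the target's
    horizontal centre (target_cx, in pixels) maps to the frame centre.

    Returns the rolled frame and the applied shift, so tracker output
    can be mapped back to original coordinates.
    """
    h, w = frame.shape[:2]
    shift = w // 2 - int(target_cx) % w
    return np.roll(frame, shift, axis=1), shift

def unshift_x(x, shift, width):
    """Map an x-coordinate from the rolled frame back to the original frame."""
    return (x - shift) % width

# Example: an 8-pixel-wide frame whose columns are numbered 0..7.
frame = np.tile(np.arange(8), (4, 1))
rolled, shift = recentre_on_target(frame, target_cx=7)
# Column 7 now sits at the frame centre (x = 4)...
print(rolled[0, 4])            # 7
# ...and a tracker result at x = 4 maps back to x = 7 in the original.
print(unshift_x(4, shift, 8))  # 7
```

Running a standard 2D tracker on the recentred frame and unshifting its output each step keeps the target away from the wraparound seam without modifying the tracker itself.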