In this study, we design platform-optimized interfaces that adopt deep learning in an asymmetric virtual environment in which virtual reality (VR) and augmented reality (AR) users participate together, and we propose a novel experience environment, the deep learning-based asymmetric virtual environment (DAVE), for immersive experiential metaverse content. First, VR users use their real hands to interact intuitively with the virtual environment and objects; a deep learning-based gesture interface directly links gestures to actions. AR users interact with virtual scenes, objects, and VR users through a touch-based input method on a mobile platform; a deep learning-based text interface directly links handwritten text to actions. This study aims to propose a novel asymmetric virtual environment through an intuitive, easy, and fast interactive interface design, to create metaverse content as an experience environment, and to conduct a survey experiment. The survey experiment statistically analyzes user interface satisfaction, user experience, and user presence in the experience environment.
This study proposes simple, highly immersive x-person asymmetric interactions that account for the experience-type characteristics of asymmetric virtual environments jointly experienced by virtual reality (VR) and augmented reality (AR) users. The first-person interactions for VR users are performed through hand gestures and define a manipulation process that maps gestures to an object control scheme, providing intuitive interaction with the virtual environment and objects. The third-person interaction for AR users is designed to view the overall virtual scene and to recognize and judge situations, allowing intuitive communication and interaction among the virtual environment, objects, and users via a touch interface. The core goal of this process is to provide all users who participate in asymmetric virtual environments with a satisfying experience and sense of presence through individualized experience modes and roles. To this end, an application that uses the x-person asymmetric interactions was created. Furthermore, a survey experiment is performed to statistically analyze the interactions and verify that they provide users with a satisfactory experience, that is, a satisfactory sense of presence and social presence in each user's situation.
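The mapping from a recognized gesture to an object control command described above can be sketched minimally as follows. This is an illustrative assumption, not the authors' implementation: the gesture set, command names, and the idea of an upstream classifier emitting per-gesture probabilities are all hypothetical.

```python
import numpy as np

# Hypothetical gesture set and manipulation commands (illustrative names,
# not from the paper). An upstream deep learning model is assumed to
# output a probability distribution over GESTURES.
GESTURES = ["grab", "release", "point", "pinch"]
COMMANDS = {
    "grab": "attach_object_to_hand",
    "release": "detach_object",
    "point": "select_object",
    "pinch": "scale_object",
}

def gesture_to_command(probs, threshold=0.6):
    """Map the most likely gesture to a command; below the confidence
    threshold, fall back to a no-op so ambiguous poses do nothing."""
    idx = int(np.argmax(probs))
    if probs[idx] < threshold:
        return "no_op"
    return COMMANDS[GESTURES[idx]]

probs = np.array([0.05, 0.85, 0.05, 0.05])  # e.g. softmax output
command = gesture_to_command(probs)          # -> "detach_object"
```

The confidence threshold is a common design choice in gesture interfaces: it trades a small amount of responsiveness for fewer accidental manipulations when the hand pose is ambiguous.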
This paper proposes a novel text interface using deep learning in a mobile platform environment and presents English language teaching applications created with the interface. First, a handwriting text interface with a simple structure is designed around the touch-based input method of mobile platform applications. This input method is easier and more convenient than the existing graphical user interface (GUI), in which menu items such as buttons are selected repeatedly or step by step. Next, an interaction that intuitively links the input text to behavior and decision making is proposed: a technique that recognizes text handwritten on the interface using the Extended Modified National Institute of Standards and Technology (EMNIST) dataset and a convolutional neural network (CNN) model, and connects the recognized text to a behavior. Finally, using the proposed interface, we create English language teaching applications that effectively support learning to write the alphabet and words by hand. Satisfaction with the interface during the educational process is then analyzed and verified through a survey experiment with users.
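A minimal sketch of the recognize-then-act pipeline described above, assuming a small CNN over 28x28 grayscale letter images (the EMNIST Letters format) whose predicted letter is looked up in an action table. The network architecture, the `ACTIONS` mapping, and all names are assumptions for illustration, not the paper's model; a real system would first train this CNN on EMNIST.

```python
import torch
import torch.nn as nn

class LetterCNN(nn.Module):
    """Small CNN for 28x28 grayscale letter images, 26 classes (A-Z)."""
    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32x14x14
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # -> 64x7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Hypothetical letter-to-behavior table (illustrative, not from the paper).
ACTIONS = {"A": "show_alphabet_lesson", "B": "go_back", "C": "confirm_answer"}

def text_to_action(logits):
    """Convert the CNN's class prediction to a letter, then to an action."""
    idx = int(logits.argmax(dim=1)[0])
    letter = chr(ord("A") + idx)
    return ACTIONS.get(letter, "no_op")

model = LetterCNN()
logits = model(torch.randn(1, 1, 28, 28))  # one fake handwriting image
action = text_to_action(logits)
```

With an untrained model the predicted letter is arbitrary, so the example only demonstrates the shape of the pipeline: image in, logits over 26 letters out, letter mapped to an application behavior.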
This study proposes a metaverse content production pipeline using ZEPETO World, one of the representative metaverse platforms in Korea. Based on the Unity 3D engine, a ZEPETO World is configured from a ZEPETO template, and the core functions of metaverse content that enable multi-user participation, such as logic, interaction, and property control, are implemented with ZEPETO script. This study uses the basic features of ZEPETO script, including properties, events, and components, as well as the ZEPETO player, which provides avatar loading, character movement, and camera control. In addition, building on ZEPETO features such as World Multiplayer and Client Starter, it summarizes the core synchronization processes required for producing multiplayer metaverse content, such as object transformation, dynamic object creation, property addition, and real-time property control. Based on this, we validate the proposed production pipeline by directly producing multiplayer metaverse content with ZEPETO World.