Applying virtual reality technology to science-experiment education is a research direction of practical significance and value in human-computer interaction. However, many existing VR-based education tools are limited by a single interaction mode, the complexity of user intention, and the non-physical character of virtualized interaction; these limitations restrict their experimental teaching ability and thus their practical value and popularity. To address these problems, we construct a multimodal interaction model that fuses gesture, speech, and pressure information. Specifically, our tasks include: 1) collecting user input and time-series information to construct basic data-input tuples; 2) using the basic interaction information to identify the user's basic intention, and using the degree of correlation between successive intentions to judge whether the currently identified intention is correct; 3) allowing users to alternate between multi-channel and single-channel interaction. Based on this model, we build a Multi-modal Intelligent Interactive Virtual Experiment Platform (MIIVEP), and design and implement a dropper with strong perception ability, which has been verified, tested, evaluated, and applied in the intelligent virtual experiment system. In addition, to evaluate this work more rigorously, we developed a fair scoring instrument, the Evaluation Scale of Virtual Experiment Systems (ESVES), and invited middle-school teachers and students to take part in validating the results. Through studies of users' actual experience with the system, we demonstrate the effectiveness of the proposed model and its implementation.
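The three steps above can be illustrated with a minimal sketch. The abstract does not specify data structures or weighting formulas, so everything below (the tuple fields, the correlation table, the multiplicative scoring) is a hypothetical reading of "input tuples" and "correlation degree between intentions", not the authors' actual implementation:

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InputTuple:
    """Hypothetical basic data-input tuple fusing the three channels
    with a timestamp (step 1 of the abstract)."""
    gesture: Optional[str]
    speech: Optional[str]
    pressure: Optional[float]
    timestamp: float = field(default_factory=time.time)

def score_intent(candidate, history, base_scores, correlation):
    """Weight a candidate intention's base recognition score by its
    correlation with the previous intention (step 2). `correlation`
    maps (previous, candidate) pairs to a degree in [0, 1]; an unseen
    pair falls back to a neutral 0.5, a first intention to 1.0."""
    prev = history[-1] if history else None
    corr = correlation.get((prev, candidate), 0.5) if prev is not None else 1.0
    return base_scores[candidate] * corr

# Example: 'pour' is plausible right after 'pick_dropper'.
t = InputTuple(gesture="tilt", speech="pour", pressure=0.3)
s = score_intent("pour", ["pick_dropper"],
                 {"pour": 0.8}, {("pick_dropper", "pour"): 1.0})
```

Under this reading, a low combined score would flag the current identification as likely incorrect and trigger re-recognition or a fallback to single-channel input (step 3).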
Virtual experiments have become an active research topic in the field of education. However, current virtual experiments have several limitations. First, researchers represent the experiments with simulated virtual effects, which decreases the immersion of the user's simulated experiment. Second, most virtual experiments offer only mouse or touch-screen interaction, which reduces the realism of the simulation. Third, students who independently explore the experimental procedure spend too much time on the simulation, leading to operational overload and low interaction efficiency. To address these problems, we propose and implement a Multimodal Navigational Interaction Virtual-Real Fusion Chemistry Laboratory (MNIVRFCL). We design an intelligent device with a new sensing structure and propose a multimodal navigational interaction (MMNI) algorithm based on the auditory and tactile channels, both of which are verified and applied in MNIVRFCL. The MMNI algorithm detects users' specific behaviors to infer their behavioral intentions, and the system then guides and corrects users' current behavior, whether correct or incorrect, through voice-navigation broadcasts. As a result, students can interact through the tactile and auditory channels with virtual-real fusion, and can independently complete simulated experiments and learning tasks guided by experimental navigation. Experimental statistics show that the system understands user intentions with 91.48% accuracy, and that MNIVRFCL reduces operational load by 23.81% compared with the purely virtual experiment, lowering time consumption and improving students' interaction efficiency.
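The core guide-and-correct loop of a navigational interaction like MMNI can be sketched as a check of the observed behavior against an expected step sequence. The step names and prompt wording below are invented for illustration; the abstract does not describe the actual procedure or how behaviors are detected:

```python
from typing import List, Tuple

# Hypothetical step sequence for a dropper experiment.
EXPERIMENT_STEPS: List[str] = [
    "pick_dropper", "draw_liquid", "move_to_beaker", "release_liquid",
]

def navigate(observed_action: str, step_index: int) -> Tuple[bool, str]:
    """Compare the observed action with the expected step and return
    (advance, prompt), where `prompt` would be spoken as a
    voice-navigation broadcast."""
    expected = EXPERIMENT_STEPS[step_index]
    if observed_action == expected:
        nxt = (EXPERIMENT_STEPS[step_index + 1]
               if step_index + 1 < len(EXPERIMENT_STEPS)
               else "experiment complete")
        return True, f"Correct: {expected} done. Next: {nxt}"
    # Incorrect behavior: rectify with a corrective prompt.
    return False, f"Incorrect action '{observed_action}'. Please {expected.replace('_', ' ')}."
```

In a real system the `observed_action` would come from the tactile/sensing channel rather than a string, but the branch structure (confirm and advance vs. correct and repeat) captures the navigation idea.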
Fingertip recognition and tracking is a key problem in gesture recognition. Current fingertip locating and tracking methods are either complicated or require manual labelling. In this paper, a particle-filtering method based on edge features and pixel ratio is proposed to track the target finger against a complex background. We set the region of each particle to a fixed size, and compute, for each particle area, the edge-orientation histogram and the proportion of body pixels. The similarity of edge features is measured by the Bhattacharyya distance, and a new measure is defined for the similarity of pixel ratios. The two feature similarities are linearly combined to track the target model. We then locate the point on the contour farthest from its center in the predicted model and update the target model. The results show that the method tracks the target fingertip accurately, effectively, and in real time, even under interference.
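The particle-weighting step described above can be sketched as follows. The abstract does not give the exact pixel-ratio similarity or the mixing weight, so the `1 - normalized difference` form and the `alpha` parameter below are assumptions; only the Bhattacharyya measure on edge-orientation histograms and the linear combination are stated in the text:

```python
import numpy as np

def bhattacharyya(p: np.ndarray, q: np.ndarray) -> float:
    """Bhattacharyya coefficient between two normalized histograms
    (1.0 for identical histograms; distance is derived from it)."""
    return float(np.sum(np.sqrt(p * q)))

def particle_weight(edge_hist: np.ndarray, pixel_ratio: float,
                    ref_hist: np.ndarray, ref_ratio: float,
                    alpha: float = 0.7) -> float:
    """Linearly combine edge-feature similarity and pixel-ratio
    similarity for one particle. `alpha` (assumed) balances the two."""
    s_edge = bhattacharyya(edge_hist, ref_hist)
    # Assumed pixel-ratio similarity: 1 minus the normalized difference.
    s_ratio = 1.0 - abs(pixel_ratio - ref_ratio) / max(pixel_ratio, ref_ratio, 1e-9)
    return alpha * s_edge + (1.0 - alpha) * s_ratio
```

Particles whose fixed-size regions best match the reference edge-orientation histogram and body-pixel proportion receive the highest weights; the fingertip is then taken as the contour point farthest from the predicted region's center.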